Thursday, May 14, 2026
Managed by Visioneerit
IndustrialBriefs

Rethinking AI Alignment: A Virtue-Ethical Approach

A recent essay challenges the traditional notion of goal-oriented rationality and offers a new framework for understanding human action and decision-making, with implications for AI alignment. By adopting a virtue-ethical approach, we can develop AIs that are aligned with human practices, rather than trying to program them with specific goals or objectives.

AI alignment is a central topic in artificial intelligence research, with many experts exploring ways to ensure that AI systems act in accordance with human values. A recent essay by Peli Grietzer, titled 'After Orthogonality: Virtue-Ethical Agency and AI Alignment,' presents a distinctive perspective on this issue, arguing that rational people don't have goals, and that rational AIs shouldn't have goals either.

A New Perspective on Rational Action

According to Grietzer, human actions are rational not because they are directed towards specific goals, but because they are aligned with practices: networks of actions, action-dispositions, action-evaluation criteria, and action-resources that structure, clarify, develop, and promote themselves. This perspective challenges the traditional notion of goal-oriented rationality and offers a new framework for understanding human action and decision-making.

Implications for AI Alignment

If we want AIs that can genuinely support, collaborate with, or even comply with human values, we need to rethink our approach to AI alignment. By adopting a virtue-ethical approach, we can focus on developing AIs that are aligned with human practices, rather than trying to program them with specific goals or objectives.


