Rethinking AI Alignment: A Virtue-Ethical Approach
AI alignment, the problem of ensuring that AI systems act in accordance with human values, has become a central concern in artificial intelligence research. A recent essay by Peli Grietzer, 'After Orthogonality: Virtue-Ethical Agency and AI Alignment,' offers a distinctive perspective on the problem, arguing that rational people do not have goals, and that rational AIs should not have goals either.
A New Perspective on Rational Action
According to Grietzer, human actions are rational not because they are directed toward specific goals, but because they are aligned with practices: networks of actions, action-dispositions, action-evaluation criteria, and action-resources that structure, clarify, develop, and promote themselves. On this view, for example, a chess player's moves are rational because they answer to the standards of the practice of chess, not because each move serves an independently specified objective. This perspective challenges the traditional notion of goal-oriented rationality and offers an alternative framework for understanding human action and decision-making.
Implications for AI Alignment
If we want AIs that can genuinely support, collaborate with, or even comply with human values, Grietzer argues, we need to rethink our approach to AI alignment. A virtue-ethical approach would focus on developing AIs that are aligned with human practices, rather than on programming them with specific goals or objectives to optimize.