Authors
Carles Sierra, Nardine Osman, Pablo Noriega, Jordi Sabater-Mir, Antoni Perelló
Publication date
2021/10/18
Journal
arXiv preprint arXiv:2110.09240
Abstract
[…] principles that should govern autonomous AI systems. It essentially states that a system's goals and behaviour should be aligned with human values. But how do we ensure value alignment? In this paper we first provide a formal model that represents values through preferences, along with ways to compute value aggregations, i.e. preferences with respect to a group of agents and/or preferences with respect to sets of values. Value alignment is then defined, and computed, for a given norm with respect to a given value through the increase or decrease that the norm yields in the preferences over future states of the world. We focus on norms because it is norms that govern behaviour; as such, the alignment of a given system with a given value is dictated by the norms the system follows.
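The abstract's core idea can be illustrated with a small sketch: treat a value as a preference function over states of the world, a norm as the set of state transitions it permits, and alignment as the average change in preference across those transitions. All names here (`alignment`, `prf`, the toy "equality" preference) are illustrative assumptions, not the paper's actual formalism or code.

```python
def alignment(transitions, prf):
    """Mean preference change over norm-permitted transitions.

    transitions: list of (s, s_next) state pairs the norm allows
    prf: preference function for a value, mapping a state to a
         score in [0, 1] (higher = more preferred under that value)
    """
    if not transitions:
        return 0.0
    deltas = [prf(s_next) - prf(s) for s, s_next in transitions]
    return sum(deltas) / len(deltas)

# Toy example (assumed, for illustration only): states are wealth
# tuples for two agents; the value "equality" prefers smaller spread;
# a redistributive norm moves the world toward more equal states.
prf = lambda state: 1.0 - (max(state) - min(state)) / 10.0
transitions = [((9, 1), (7, 3)), ((8, 2), (6, 4))]
print(alignment(transitions, prf))  # positive -> norm aligned with equality
```

A positive score means the norm tends to move the world toward states the value prefers; a negative score would indicate misalignment.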
Total citations
Cited-by chart, 2020–2024 (per-year counts not recoverable from page residue)
Scholar articles
C Sierra, N Osman, P Noriega, J Sabater-Mir, A Perelló - arXiv preprint arXiv:2110.09240, 2021