Emerging trends: Unfair, biased, addictive, dangerous, deadly, and insanely profitable
Natural Language Engineering, 2023 · cambridge.org
There has been considerable work recently in the natural language community and elsewhere on Responsible AI. Much of this work focuses on fairness and biases (henceforth Risks 1.0), following the 2016 best seller, Weapons of Math Destruction. Two books published in 2022, The Chaos Machine and Like, Comment, Subscribe, raise additional risks to public health/safety/security such as genocide, insurrection, polarized politics, and vaccinations (henceforth, Risks 2.0). These books suggest that the use of machine learning to maximize engagement in social media has created a Frankenstein Monster that is exploiting human weaknesses with persuasive technology, the illusory truth effect, Pavlovian conditioning, and Skinner’s intermittent variable reinforcement. Just as we cannot expect tobacco companies to sell fewer cigarettes and prioritize public health ahead of profits, so too, it may be asking too much of companies (and countries) to stop trafficking in misinformation, given that it is so effective and so insanely profitable (at least in the short term). Eventually, we believe, the current chaos will end, like the lawlessness of the Wild West, because chaos is bad for business. As computer scientists, we will summarize criticisms from other fields and focus on implications for computer science; we will not attempt to contribute to those other fields. There is quite a bit of work in computer science on these risks, especially on Risks 1.0 (bias and fairness), but more work is needed, especially on Risks 2.0 (addictive, dangerous, and deadly).
Cambridge University Press