Deep automation bias: how to tackle a wicked problem of AI?

S Strauß - Big Data and Cognitive Computing, 2021 - mdpi.com
The increasing use of AI in different societal contexts has intensified the debate on risks, ethical problems and bias. Accordingly, promising research activities focus on debiasing to strengthen fairness, accountability and transparency in machine learning. There is, though, a tendency to fix societal and ethical issues with technical solutions, which may cause additional, wicked problems. Alternative analytical approaches are thus needed to avoid this and to comprehend how societal and ethical issues occur in AI systems. Despite various forms of bias, risks ultimately result from potential rule conflicts between the behavior of the AI system, driven by feature complexity, and user practices with limited options for scrutiny. Hence, although different forms of bias can occur, automation is their common ground. The paper highlights the role of automation and explains why deep automation bias (DAB) is a metarisk of AI. Building on previous work, it elaborates the main influencing factors and develops a heuristic model for assessing DAB-related risks in AI systems. This model aims to raise problem awareness and support training on the sociotechnical risks resulting from AI-based automation, and it contributes to improving the general explicability of AI systems beyond technical issues.
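The abstract does not detail the heuristic model itself, but the general idea of assessing a system against a set of DAB-related influencing factors can be illustrated with a minimal sketch. The indicator names, weights and scoring rule below are illustrative assumptions for the sake of the example, not the factors or the model proposed in the paper.

```python
from dataclasses import dataclass

# Hypothetical DAB risk indicators; names and weights are illustrative
# assumptions, not the influencing factors identified in the paper.
@dataclass
class DABIndicator:
    name: str
    present: bool   # does this indicator apply to the assessed AI system?
    weight: float   # relative contribution to the overall heuristic score

def dab_risk_score(indicators: list[DABIndicator]) -> float:
    """Return a normalised 0..1 heuristic score; higher means higher DAB risk."""
    total = sum(i.weight for i in indicators)
    hit = sum(i.weight for i in indicators if i.present)
    return hit / total if total else 0.0

# Example assessment of a fictitious decision-support system.
checklist = [
    DABIndicator("opaque feature complexity", True, 0.30),
    DABIndicator("limited options for user scrutiny", True, 0.30),
    DABIndicator("high degree of automated decision-making", True, 0.25),
    DABIndicator("human override available and routinely practised", False, 0.15),
]

print(f"Heuristic DAB risk score: {dab_risk_score(checklist):.2f}")
```

Such a checklist-style score is only one conceivable way to operationalise a heuristic risk assessment; the paper's own model may weigh or combine factors differently.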