KANDINSKYPatterns -- An experimental exploration environment for Pattern Analysis and Machine Intelligence

A Holzinger, A Saranti, H Mueller - arXiv preprint arXiv:2103.00519, 2021 - arxiv.org
Machine intelligence is very successful at standard recognition tasks when provided with high-quality training data. There is still a significant gap between machine-level pattern …

Toward human-level concept learning: Pattern benchmarking for AI algorithms

A Holzinger, A Saranti, A Angerschmid, B Finzel… - Patterns, 2023 - cell.com
Artificial intelligence (AI) today is very successful at standard pattern-recognition tasks due to
the availability of large amounts of data and advances in statistical data-driven machine …

Kandinsky patterns

H Müller, A Holzinger - Artificial intelligence, 2021 - Elsevier
Kandinsky Figures and Kandinsky Patterns are mathematically describable, simple, self-contained and hence controllable synthetic test data sets for the development, validation and …
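The snippet characterises Kandinsky Figures as mathematically describable, controllable synthetic data. As a minimal sketch only (an assumption, not the published KANDINSKYPatterns generator), the Python fragment below shows one way such a figure could be represented: a set of geometric objects whose shape, colour, size and position are explicit attributes. The names SHAPES, COLOURS and generate_figure are illustrative.

```python
# Minimal sketch (assumption, not the authors' generator): a Kandinsky-style
# figure as a list of geometric objects with explicit attributes.
import random

SHAPES = ["circle", "square", "triangle"]
COLOURS = ["red", "yellow", "blue"]

def generate_figure(n_objects=4, seed=None):
    """Return a list of objects placed on a unit canvas; every attribute is
    explicit, so the figure is fully describable and controllable."""
    rng = random.Random(seed)
    figure = []
    for _ in range(n_objects):
        size = rng.uniform(0.05, 0.25)
        figure.append({
            "shape": rng.choice(SHAPES),
            "colour": rng.choice(COLOURS),
            "size": size,
            # keep the object fully inside the unit square
            "x": rng.uniform(size, 1.0 - size),
            "y": rng.uniform(size, 1.0 - size),
        })
    return figure

if __name__ == "__main__":
    for obj in generate_figure(seed=0):
        print(obj)
```

Because every attribute is explicit, a ground-truth concept such as "all objects have the same colour" can be checked directly on the attribute list, which is what makes this kind of data controllable when validating models and explanation methods.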

Kandinsky patterns as IQ-test for machine learning

A Holzinger, M Kickmeier-Rust, H Müller - … Extraction: Third IFIP TC 5, TC …, 2019 - Springer
AI follows the notion of human intelligence, which is unfortunately not a clearly defined term. The most common definition, given by cognitive science as mental capability, includes …

How intelligent are convolutional neural networks?

Z Yan, XS Zhou - arXiv preprint arXiv:1709.06126, 2017 - arxiv.org
Motivated by the Gestalt pattern theory and the Winograd Challenge for language understanding, we design synthetic experiments to investigate a deep learning algorithm's …

Back to the feature: A neural-symbolic perspective on explainable AI

A Campagner, F Cabitza - … Learning and Knowledge Extraction: 4th IFIP …, 2020 - Springer
We discuss a perspective aimed at making black box models more eXplainable, within the
eXplainable AI (XAI) strand of research. We argue that the traditional end-to-end learning …

NxPlain: Web-based Tool for Discovery of Latent Concepts

F Dalvi, N Durrani, H Sajjad, T Jaban, M Husaini… - arXiv preprint arXiv …, 2023 - arxiv.org
The proliferation of deep neural networks in various domains has seen an increased need
for the interpretability of these models, especially in scenarios where fairness and trust are …

Deconstructing the Final Frontier of Artificial Intelligence: Five Theses for a Constructivist Machine Learning.

T Schmid - AAAI Spring Symposium: Combining Machine Learning …, 2019 - ceur-ws.org
Ambiguity and diversity in human cognition can be regarded as a final frontier in developing equivalent systems of artificial intelligence. Despite astonishing accomplishments, modern …

Changes from the trenches: Should we automate them?

Y Golubev, J Li, V Bushev, T Bryksin… - arXiv preprint arXiv …, 2021 - arxiv.org
Code changes constitute one of the most important features of software evolution. Studying
them can provide insights into the nature of software development and also lead to practical …

More interpretable decision trees

E Gilmore, V Estivill-Castro, R Hexel - Hybrid Artificial Intelligent Systems …, 2021 - Springer
We present a new Decision Tree Classifier (DTC) induction algorithm that produces vastly
more interpretable trees in many situations. These understandable trees are highly relevant …
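The snippet announces a new DTC induction algorithm but does not describe it. Purely as a generic illustration (not Gilmore et al.'s method), the sketch below shows the common way tree interpretability is enforced in practice: capping depth and leaf size with scikit-learn and printing the learned rules. The dataset and parameter values are arbitrary assumptions.

```python
# Generic illustration (not the authors' algorithm): constraining a decision
# tree so the resulting rule set stays small enough for a human to read.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Shallow tree with large leaves: fewer, broader rules that can be inspected.
clf = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10, random_state=0)
clf.fit(X, y)

# Print the learned rules as indented if/else text.
print(export_text(clf, feature_names=load_iris().feature_names))
```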