From distributed machine learning to distributed deep learning: a comprehensive survey

M Dehghani, Z Yazdanparast - Journal of Big Data, 2023 - Springer
Artificial intelligence has made remarkable progress in handling complex tasks, thanks to
advances in hardware acceleration and machine learning algorithms. However, to acquire …

Accumulated decoupled learning with gradient staleness mitigation for convolutional neural networks

H Zhuang, Z Weng, F Luo, T Kar-Ann… - … on Machine Learning, 2021 - proceedings.mlr.press
Gradient staleness is a major side effect in decoupled learning when training convolutional
neural networks asynchronously. Existing methods that ignore this effect might result in …

Toward model parallelism for deep neural network based on gradient-free ADMM framework

J Wang, Z Chai, Y Cheng, L Zhao - 2020 IEEE International …, 2020 - ieeexplore.ieee.org
Alternating Direction Method of Multipliers (ADMM) has recently been proposed as a
potential alternative optimizer to Stochastic Gradient Descent (SGD) for deep learning …

Cortico-cerebellar networks as decoupling neural interfaces

J Pemberton, E Boven, R Apps… - Advances in neural …, 2021 - proceedings.neurips.cc
The brain solves the credit assignment problem remarkably well. For credit to be assigned
across neural networks, they must, in principle, wait for specific neural computations to finish …

A hybrid parallelization approach for distributed and scalable deep learning

SB Akintoye, L Han, X Zhang, H Chen, D Zhang - IEEE Access, 2022 - ieeexplore.ieee.org
Recently, Deep Neural Networks (DNNs) have achieved significant success in handling
medical and other complex classification tasks. However, as the sizes of DNN models and …

A Survey From Distributed Machine Learning to Distributed Deep Learning

M Dehghani, Z Yazdanparast - arXiv preprint arXiv:2307.05232, 2023 - arxiv.org
Artificial intelligence has achieved significant success in handling complex tasks in recent
years. This success is due to advances in machine learning algorithms and hardware …

Energy-efficient DNN training processors on micro-AI systems

D Han, S Kang, S Kim, J Lee… - IEEE Open Journal of the …, 2022 - ieeexplore.ieee.org
Many edge/mobile devices are now able to utilize deep neural networks (DNNs) thanks to
the development of mobile DNN accelerators. Mobile DNN accelerators overcame the …

Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment

AG Ororbia - arXiv preprint arXiv:2312.09257, 2023 - arxiv.org
In this survey, we examine algorithms for conducting credit assignment in artificial neural
networks that are inspired or motivated by neurobiology, unifying these various processes …

Approximate to be great: Communication efficient and privacy-preserving large-scale distributed deep learning in Internet of Things

W Du, A Li, P Zhou, Z Xu, X Wang… - IEEE Internet of Things …, 2020 - ieeexplore.ieee.org
The increasing number of Internet-of-Things (IoT) devices has produced large volumes of data.
Deep learning is widely used to analyze the potential value of these data due to its …

Deep Reinforcement Learning With Multiple Unrelated Rewards for AGV Mapless Navigation

B Cai, C Wei, Z Ji - IEEE Transactions on Automation Science …, 2024 - ieeexplore.ieee.org
Mapless navigation for Automated Guided Vehicles (AGV) via Deep Reinforcement
Learning (DRL) algorithms has attracted rapidly growing attention in recent years …