Hassan Ali
Ph.D. Candidate, UNSW Sydney
Verified email at unsw.edu.au - Homepage
Title
Cited by
Year
QuSecNets: Quantization-based defense mechanism for securing deep neural networks against adversarial attacks
F Khalid, H Ali, H Tariq, MA Hanif, S Rehman, R Ahmed, M Shafique
2019 IEEE 25th International Symposium on On-Line Testing and Robust System …, 2019
49 · 2019
All your fake detector are belong to us: evaluating adversarial robustness of fake-news detectors under black-box settings
H Ali, MS Khan, A AlGhadhban, M Alazmi, A Alzamil, K Al-Utaibi, J Qadir
IEEE Access 9, 81678-81692, 2021
48 · 2021
FaDec: A fast decision-based attack for adversarial machine learning
F Khalid, H Ali, MA Hanif, S Rehman, R Ahmed, M Shafique
2020 International Joint Conference on Neural Networks (IJCNN), 1-8, 2020
40* · 2020
Secure and trustworthy artificial intelligence-extended reality (AI-XR) for metaverses
A Qayyum, MA Butt, H Ali, M Usman, O Halabi, A Al-Fuqaha, QH Abbasi, ...
ACM Computing Surveys, 2023
25 · 2023
SPIE-AAPM-NCI BreastPathQ challenge: an image analysis challenge for quantitative tumor cellularity assessment in breast cancer histology images following neoadjuvant treatment
N Petrick, S Akbar, KH Cha, S Nofech-Mozes, B Sahiner, MA Gavrielides, ...
Journal of Medical Imaging 8 (3), 034501-034501, 2021
25 · 2021
Towards secure, private, and trustworthy human-centric embedded machine learning: An emotion-aware facial recognition case study
MA Butt, A Qayyum, H Ali, A Al-Fuqaha, J Qadir
Computers & Security 125, 103058, 2023
23 · 2023
SSCNets: Robustifying DNNs using secure selective convolutional filters
H Ali, F Khalid, HA Tariq, MA Hanif, R Ahmed, S Rehman
IEEE Design & Test 37 (2), 58-65, 2019
18* · 2019
Tamp-X: Attacking explainable natural language classifiers through tampered activations
H Ali, MS Khan, A Al-Fuqaha, J Qadir
Computers & Security 120, 102791, 2022
15 · 2022
HaS-Nets: A heal and select mechanism to defend DNNs against backdoor attacks for data collection scenarios
H Ali, S Nepal, SS Kanhere, S Jha
arXiv preprint arXiv:2012.07474, 2020
11 · 2020
Robust Encrypted Inference in Deep Learning: A Pathway to Secure Misinformation Detection
H Ali, RT Javed, A Qayyum, A AlGhadhban, M Alazmi, A Alzamil, ...
IEEE Transactions on Dependable and Secure Computing, 2024
8* · 2024
Con-Detect: Detecting adversarially perturbed natural language inputs to deep classifiers through holistic analysis
H Ali, MS Khan, A AlGhadhban, M Alazmi, A Alzamil, K Al-Utaibi, J Qadir
Computers & Security 132, 103367, 2023
8 · 2023
Consistent, Valid, and Physically-Realizable Adversarial Attack against Crowd-flow Prediction Models
H Ali, MA Butt, F Filali, A Al-Fuqaha, J Qadir
IEEE Transactions on Intelligent Transportation Systems, 2023
4 · 2023
Membership Inference Attacks on DNNs using Adversarial Perturbations
H Ali, A Qayyum, A Al-Fuqaha, J Qadir
arXiv preprint arXiv:2307.05193, 2023
3 · 2023
Adversarial Machine Learning for Social Good: Reframing the Adversary as an Ally
S Al-Maliki, A Qayyum, H Ali, M Abdallah, J Qadir, DT Hoang, D Niyato, ...
IEEE Transactions on Artificial Intelligence, 2024
2 · 2024
RS100K: Road-Region Segmentation Dataset for Semi-supervised Autonomous Driving in the Wild
MA Butt, H Ali, A Qayyum, W Sultani, A Al-Fuqaha, J Qadir
International Journal of Computer Vision, 1-19, 2024
2024
Robust Surgical Tools Detection in Endoscopic Videos with Noisy Data
A Qayyum, H Ali, M Caputo, H Vohra, T Akinosho, S Abioye, I Berrou, ...
arXiv preprint arXiv:2307.01232, 2023
2023
Articles 1–16