Eashan Adhikarla
Verified email at lehigh.edu - Homepage
Title · Cited by · Year
BiomedGPT: A Unified and Generalist Biomedical Generative Pre-trained Transformer for Vision, Language, and Multimodal Tasks
K Zhang, J Yu, E Adhikarla, R Zhou, Z Yan, Y Liu, Z Liu, L He, BD Davison, ...
arXiv preprint arXiv:2305.17100, 2023
Cited by 62 · 2023
Face mask detection on real-world Webcam images
E Adhikarla, BD Davison
Proceedings of the Conference on Information Technology for Social Good, 139-144, 2021
Cited by 12 · 2021
Exploring the BBRv2 congestion control algorithm for use on data transfer nodes
B Tierney, E Dart, E Kissel, E Adhikarla
2021 IEEE Workshop on Innovating the Network for Data-Intensive Science …, 2021
Cited by 11 · 2021
BiomedGPT: A Unified and Generalist Biomedical Generative Pre-trained Transformer for Vision, Language, and Multimodal Tasks
K Zhang, J Yu, Z Yan, Y Liu, E Adhikarla, S Fu, X Chen, C Chen, Y Zhou, ...
arXiv [cs.CL], 2023
Cited by 6 · 2023
BiomedGPT: A Unified and Generalist Biomedical Generative Pre-trained Transformer for Vision, Language, and Multimodal Tasks
K Zhang, J Yu, Z Yan, Y Liu, E Adhikarla, S Fu
arXiv [Preprint], 2023 [cited August 21, 2023]
Cited by 3
BiomedGPT: A Unified and Generalist Biomedical Generative Pre-trained Transformer for Vision, Language, and Multimodal Tasks
Z Kai, Y Jun, A Eashan, Z Rong, Y Zhiling, L Yixin, L Zhengliang, H Lifang, ...
URL: https://arxiv.org/abs/2305.17100, 2023
Cited by 3
Robust Computer Vision in an Ever-Changing World: A Survey of Techniques for Tackling Distribution Shifts
E Adhikarla, K Zhang, J Yu, L Sun, J Nicholson, BD Davison
arXiv preprint arXiv:2312.01540, 2023
Cited by 1 · 2023
Memory Defense: More Robust Classification via a Memory-Masking Autoencoder
E Adhikarla, D Luo, BD Davison
arXiv preprint arXiv:2202.02595, 2022
Cited by 1 · 2022
Unified-EGformer: Exposure Guided Lightweight Transformer for Mixed-Exposure Image Enhancement
E Adhikarla, K Zhang, RG VidalMata, M Aithal, NA Madhusudhana, ...
arXiv preprint arXiv:2407.13170, 2024
Year: 2024
Unleashing the Power of Multi-Task Learning: A Comprehensive Survey Spanning Traditional, Deep, and Pretrained Foundation Model Eras
J Yu, Y Dai, X Liu, J Huang, Y Shen, K Zhang, R Zhou, E Adhikarla, W Ye, ...
arXiv preprint arXiv:2404.18961, 2024
Year: 2024
Articles 1–10