| Title | Authors | Venue | Citations | Year |
|---|---|---|---|---|
| How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs | H Tu*, C Cui*, Z Wang*, Y Zhou, B Zhao, J Han, W Zhou, H Yao, C Xie | ECCV 2024 | 31 | 2024 |
| SeSy: Linguistic steganalysis framework integrating semantic and syntactic features | J Yang, Z Yang, S Zhang, H Tu, Y Huang | IEEE Signal Processing Letters 29, 31-35 | 28 | 2021 |
| Eagle and Finch: RWKV with Matrix-Valued States and Dynamic Recurrence | B Peng, D Goldstein, Q Anthony, A Albalak, E Alcaide, S Biderman, ... | COLM 2024 | 16 | 2024 |
| Linguistic Steganalysis Toward Social Network | J Yang, Z Yang, J Zou, H Tu, Y Huang | IEEE Transactions on Information Forensics and Security (TIFS) 18, 859-871 | 16 | 2022 |
| AdaVAE: Exploring adaptive GPT-2s in variational auto-encoders for language modeling | H Tu, Z Yang, J Yang, Y Huang | arXiv preprint arXiv:2205.05862 | 11 | 2022 |
| Tuning LayerNorm in Attention: Towards Efficient Multi-Modal LLM Finetuning | B Zhao*, H Tu*, C Wei, J Mei, C Xie | ICLR 2024 (Spotlight) | 10 | 2024 |
| Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics | H Tu*, B Zhao*, C Wei, C Xie | NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following | 10 | 2023 |
| PCAE: A framework of plug-in conditional auto-encoder for controllable text generation | H Tu, Z Yang, J Yang, S Zhang, Y Huang | Knowledge-Based Systems 256, 109766 | 5 | 2022 |
| Pixel-Stega: Generative image steganography based on autoregressive models | S Zhang, Z Yang, H Tu, J Yang, Y Huang | arXiv preprint arXiv:2112.10945 | 4 | 2021 |
| What If We Recaption Billions of Web Images with LLaMA-3? | X Li*, H Tu*, M Hui*, Z Wang*, B Zhao*, J Xiao, S Ren, J Mei, Q Liu, ... | arXiv preprint arXiv:2406.08478 | 3 | 2024 |
| How Far Are We From AGI | T Feng*, C Jin*, J Liu*, K Zhu*, H Tu, Z Cheng, G Lin, J You | arXiv preprint arXiv:2405.10313 | 3 | 2024 |
| ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue | H Tu, Y Li, F Mi, Z Yang | EMNLP 2023 (Oral) | 3 | 2023 |
| ZeroGen: Zero-shot Multimodal Controllable Text Generation with Multiple Oracles | H Tu, B Yang, X Zhao | NLPCC 2023 | 2 | 2023 |
| FET-LM: Flow-Enhanced Variational Autoencoder for Topic-Guided Language Modeling | H Tu, Z Yang, J Yang, L Zhou, Y Huang | IEEE Transactions on Neural Networks and Learning Systems (TNNLS) | 2 | 2023 |
| An Overview on Controllable Text Generation via Variational Auto-Encoders | H Tu, Y Li | arXiv preprint arXiv:2211.07954 | 2 | 2022 |
| Autoregressive Pretraining with Mamba in Vision | S Ren, X Li, H Tu, F Wang, F Shu, L Zhang, J Mei, L Yang, P Wang, ... | arXiv preprint arXiv:2406.07537 | 1 | 2024 |
| MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation? | Z Chen, Y Du, Z Wen, Y Zhou, C Cui, Z Weng, H Tu, C Wang, Z Tong, ... | arXiv preprint arXiv:2407.04842 | | 2024 |