A survey of safety and trustworthiness of large language models through the lens of verification and validation
Large language models (LLMs) have sparked a new wave of AI enthusiasm for their ability to
engage end-users in human-level conversations with detailed and articulate answers across …
Towards verifying the geometric robustness of large-scale neural networks
Deep neural networks (DNNs) are known to be vulnerable to adversarial geometric
transformation. This paper aims to verify the robustness of large-scale DNNs against the …
Maximum output discrepancy computation for convolutional neural network compression
Network compression methods reduce the number of network parameters and the
computation cost while maintaining the desired network performance. However, the safety …