Adversarial Machine Learning in the Context of Network Security: Challenges and Solutions
M Khan, L Ghafoor - Journal of Computational Intelligence …, 2024 - thesciencebrigade.com
With the increasing sophistication of cyber threats, the integration of machine learning (ML)
techniques in network security has become imperative for detecting and mitigating evolving …
Error analysis prompting enables human-like translation evaluation in large language models: A case study on chatgpt
Generative large language models (LLMs), e.g., ChatGPT, have demonstrated remarkable
proficiency across several NLP tasks such as machine translation, question answering, text …
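The title suggests prompting the LLM to analyze translation errors before scoring. Below is a minimal sketch of such an error-analysis-style evaluation prompt; the template wording, the major/minor error labels, and the 0–100 scale are assumptions for illustration, not the paper's exact protocol.

```python
# Illustrative error-analysis evaluation prompt (template and scale are assumptions).

def build_error_analysis_prompt(source: str, translation: str) -> str:
    """Ask an LLM to list translation errors before assigning a quality score."""
    return (
        "You are evaluating a machine translation.\n"
        f"Source: {source}\n"
        f"Translation: {translation}\n"
        "Step 1: List each translation error and label it as major or minor.\n"
        "Step 2: Based on the identified errors, give a quality score from 0 to 100."
    )

if __name__ == "__main__":
    prompt = build_error_analysis_prompt(
        "Der Hund schläft auf dem Sofa.",
        "The dog sleeps on the table.",
    )
    print(prompt)  # this string would be sent to a chat model such as ChatGPT
```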
Learning graph neural networks for image style transfer
State-of-the-art parametric and non-parametric style transfer approaches are prone to either
distorted local style patterns due to global statistics alignment, or unpleasing artifacts …
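The "global statistics alignment" the snippet refers to is the classic Gram-matrix (or mean/variance) style matching; a minimal sketch of that baseline follows, with toy feature shapes. It only matches global second-order statistics, which is exactly why local style patterns can be distorted.

```python
# Minimal sketch of global-statistics (Gram-matrix) style alignment; shapes are illustrative.
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """features: (channels, height, width) activations from one network layer."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(stylised_feats: np.ndarray, style_feats: np.ndarray) -> float:
    """Mean squared difference between Gram matrices -- a purely global statistic."""
    diff = gram_matrix(stylised_feats) - gram_matrix(style_feats)
    return float(np.mean(diff ** 2))

stylised = np.random.rand(64, 32, 32)
style = np.random.rand(64, 32, 32)
print(style_loss(stylised, style))
```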
A survey on non-autoregressive generation for neural machine translation and beyond
Non-autoregressive (NAR) generation, which was first proposed in neural machine translation
(NMT) to speed up inference, has attracted much attention in both machine learning and …
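The speed-up comes from replacing the token-by-token decoding loop with a single parallel prediction over all target positions. The toy sketch below contrasts the two decoding regimes; the random logits stand in for a real NMT decoder.

```python
# Toy contrast of autoregressive vs. non-autoregressive decoding (random logits as a stand-in model).
import numpy as np

rng = np.random.default_rng(0)
vocab_size, target_len = 100, 6

def autoregressive_decode():
    """One forward pass per token; each step would condition on the prefix so far."""
    tokens = []
    for _ in range(target_len):
        logits = rng.normal(size=vocab_size)
        tokens.append(int(np.argmax(logits)))
    return tokens

def non_autoregressive_decode():
    """A single forward pass predicts every position independently, in parallel."""
    logits = rng.normal(size=(target_len, vocab_size))
    return np.argmax(logits, axis=-1).tolist()

print(autoregressive_decode())
print(non_autoregressive_decode())
```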
Redistributing low-frequency words: Making the most of monolingual data in non-autoregressive translation
Knowledge distillation (KD) is the preliminary step for training non-autoregressive
translation (NAT) models, which eases the training of NAT models at the cost of losing …
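The KD step in question is sequence-level distillation: an autoregressive teacher re-translates the training sources, and the NAT model trains on those outputs instead of the gold references. A minimal sketch with a stubbed teacher is below; the helper names are illustrative.

```python
# Sketch of sequence-level KD before NAT training; the teacher here is a stub.

def teacher_translate(source: str) -> str:
    """Placeholder for a trained autoregressive teacher (e.g., a Transformer NMT model)."""
    return source.upper()  # stand-in output

def distill_corpus(parallel_data):
    """Replace gold references with teacher outputs; simpler, but low-frequency words can be lost."""
    return [(src, teacher_translate(src)) for src, _ in parallel_data]

raw = [("ein kleiner hund", "a small dog"), ("guten morgen", "good morning")]
distilled = distill_corpus(raw)
print(distilled)  # the NAT student would be trained on these distilled pairs
```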
Token-level self-evolution training for sequence-to-sequence learning
Adaptive training approaches, widely used in sequence-to-sequence models, commonly
reweight the losses of different target tokens based on priors, e.g., word frequency. However …
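A minimal sketch of the prior-based reweighting the snippet describes: per-token cross-entropy losses are scaled by a function of corpus word frequency so that rare tokens contribute more. The specific weighting function and numbers below are illustrative, not taken from any particular paper.

```python
# Sketch of frequency-prior adaptive training: reweight per-token losses (values are toy data).
import numpy as np

token_losses = np.array([2.1, 0.4, 3.0, 1.2])             # per-target-token cross-entropy
token_freqs = np.array([50_000, 1_200_000, 300, 9_000])   # corpus frequencies of those tokens

# Rarer tokens get larger weights; normalise so the weights average to 1.
weights = 1.0 / np.log(token_freqs + np.e)
weights = weights / weights.mean()

adaptive_loss = float(np.mean(weights * token_losses))
print(adaptive_loss)
```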
Directed acyclic transformer for non-autoregressive machine translation
Non-autoregressive Transformers (NATs) significantly reduce the decoding latency
by generating all tokens in parallel. However, such independent predictions prevent NATs …
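The independence problem the snippet alludes to is easy to see with a toy example: when each position is predicted independently, position-wise argmax can splice together fragments of two different valid translations. The numbers below are contrived for illustration and do not implement the paper's directed acyclic decoder.

```python
# Toy illustration of the multimodality problem caused by independent per-position predictions.
import numpy as np

vocab = ["thank", "thanks", "you", "a", "lot", "."]
# Per-position distributions mixing two references: "thank you ." and "thanks a lot".
position_probs = np.array([
    [0.45, 0.55, 0.0, 0.0, 0.0, 0.0],   # "thanks" slightly preferred here
    [0.0,  0.0,  0.6, 0.4, 0.0, 0.0],   # but "you" preferred here
    [0.0,  0.0,  0.0, 0.0, 0.3, 0.7],   # and "." preferred here
])
decoded = [vocab[i] for i in position_probs.argmax(axis=-1)]
print(decoded)  # ['thanks', 'you', '.'] -- a splice that matches neither reference
```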
Order-agnostic cross entropy for non-autoregressive machine translation
We propose a new training objective named order-agnostic cross entropy (OaXE) for fully
non-autoregressive translation (NAT) models. OaXE improves the standard cross-entropy …
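OaXE scores the model against the best possible ordering of the reference tokens rather than one fixed order. The sketch below finds that best assignment of reference tokens to output positions with the Hungarian algorithm over negative log-probabilities; the toy data, equal-length assumption, and mean reduction are illustrative simplifications of the published objective.

```python
# Hedged sketch of an order-agnostic cross-entropy over toy model outputs.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
num_positions, vocab_size = 4, 10
log_probs = np.log(rng.dirichlet(np.ones(vocab_size), size=num_positions))  # (positions, vocab)

target_tokens = [3, 7, 1, 5]  # reference token ids; their order is treated as flexible

# cost[i, j] = -log p(position i emits reference token j)
cost = -log_probs[:, target_tokens]
rows, cols = linear_sum_assignment(cost)      # lowest-cost token-to-position assignment
oaxe_loss = cost[rows, cols].mean()           # cross-entropy under the best ordering
print(oaxe_loss)
```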
Quantum Computing and AI in the Cloud
H Padmanaban - Journal of Computational Intelligence and …, 2024 - thesciencebrigade.com
The intersection of quantum computing and artificial intelligence (AI) within the cloud
environment represents a paradigm shift in the capabilities of computational technologies …
Take Care of Your Prompt Bias! Investigating and Mitigating Prompt Bias in Factual Knowledge Extraction
Recent research shows that pre-trained language models (PLMs) suffer from "prompt bias"
in factual knowledge extraction, i.e., prompts tend to introduce biases toward specific labels …
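One common way to expose such prompt bias is to query the prompt with a content-free subject and inspect which labels the model already prefers before any real evidence is given. The sketch below illustrates that probing idea with a stubbed model; the template, the "N/A" placeholder, and the candidate label set are assumptions, not the paper's exact procedure.

```python
# Illustrative probe for prompt bias using a content-free subject; the "model" is a stub.
import numpy as np

CANDIDATE_LABELS = ["France", "Germany", "Japan"]

def predict_label_probs(prompt: str) -> dict:
    """Stub standing in for a masked LM scoring a small candidate label set."""
    rng = np.random.default_rng(len(prompt))
    probs = rng.dirichlet(np.ones(len(CANDIDATE_LABELS)))
    return dict(zip(CANDIDATE_LABELS, probs.round(3)))

template = "[X] was born in [MASK]."
content_free = template.replace("[X]", "N/A")   # subject carries no factual information
print(predict_label_probs(content_free))        # any skew here reflects the prompt's own bias
```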