Deceptively Simple yet Profoundly Impactful: Text Messaging Interventions to Support Health

B Suffoletto - Journal of Medical Internet Research, 2024 - jmir.org
This paper examines the use of text message (SMS) interventions for health-related
behavioral support. It first outlines the historical progress in SMS intervention research …

Detectors for safe and reliable LLMs: Implementations, uses, and limitations

S Achintalwar, AA Garcia, A Anaby-Tavor… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) are susceptible to a variety of risks, from non-faithful output
to biased and toxic generations. Due to several limiting factors surrounding LLMs (training …
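
The detector pattern this entry describes — external scorers that flag risky generations — can be sketched minimally as a scoring function gating an LLM's output against a threshold. The names, threshold, and toy scoring logic below are illustrative assumptions, not the paper's implementation:

```python
from typing import Callable

# A detector maps generated text to a risk score in [0, 1].
Detector = Callable[[str], float]

def gate_output(text: str, detectors: dict[str, Detector], threshold: float = 0.5) -> str:
    """Return the text only if every detector scores it below the
    risk threshold; otherwise return a refusal marker."""
    for name, detector in detectors.items():
        if detector(text) >= threshold:
            return f"[blocked: flagged by {name} detector]"
    return text

# Hypothetical toy detector: fraction of words drawn from a blocklist.
def toy_toxicity(text: str) -> float:
    blocklist = {"hate", "slur"}
    words = text.lower().split()
    return sum(w in blocklist for w in words) / max(len(words), 1)

print(gate_output("a perfectly benign reply", {"toxicity": toy_toxicity}))
```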

A field guide to automatic evaluation of LLM-generated summaries

TA van Schaik, B Pugh - Proceedings of the 47th International ACM …, 2024 - dl.acm.org
Large language models (LLMs) are rapidly being adopted for tasks such as text
summarization across a wide range of industries. This has driven the need for scalable …
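
To make the kind of reference-based automatic evaluation such a field guide covers concrete, a toy unigram-overlap (ROUGE-1-style) F1 can be computed as follows. This is a simplified sketch, not the paper's recommended metric suite:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Toy ROUGE-1-style F1: unigram overlap between a generated
    summary and a human reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped per-token match counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "a cat sat on a mat"))  # ~0.67
```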

AI safety in generative AI large language models: A survey

J Chua, Y Li, S Yang, C Wang, L Yao - arXiv preprint arXiv:2407.18369, 2024 - arxiv.org
Large language models (LLMs) such as ChatGPT that exhibit generative AI capabilities are
facing accelerated adoption and innovation. The increased presence of generative AI (GAI) …

Safeguarding Large Language Models: A Survey

Y Dong, R Mu, Y Zhang, S Sun, T Zhang, C Wu… - arXiv preprint arXiv …, 2024 - arxiv.org
In the burgeoning field of Large Language Models (LLMs), developing a robust safety
mechanism, colloquially known as "safeguards" or "guardrails", has become imperative to …

“It happened to be the perfect thing”: experiences of generative AI chatbots for mental health

S Siddals, J Torous, A Coxon - npj Mental Health Research, 2024 - nature.com
The global mental health crisis underscores the need for accessible, effective interventions.
Chatbots based on generative artificial intelligence (AI), like ChatGPT, are emerging as …

LoRA-Guard: Parameter-efficient guardrail adaptation for content moderation of large language models

H Elesedy, PM Esperança, SV Oprea… - arXiv preprint arXiv …, 2024 - arxiv.org
Guardrails have emerged as an alternative to safety alignment for content moderation of
large language models (LLMs). Existing model-based guardrails have not been designed …
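
The parameter-efficient adaptation named in the title builds on LoRA, in which a frozen pretrained weight is augmented with a trainable low-rank update. A minimal, generic sketch of that mechanism (not LoRA-Guard's actual guardrail architecture) might look like:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA adapter: y = W x + (alpha / r) * B(A(x)).
    Illustrative sketch only; ranks and defaults are assumptions."""
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)  # frozen pretrained weights
        self.A = nn.Linear(in_features, r, bias=False)   # trainable down-projection
        self.B = nn.Linear(r, out_features, bias=False)  # trainable up-projection
        nn.init.zeros_(self.B.weight)  # adapter starts as a no-op
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.B(self.A(x))

layer = LoRALinear(16, 16)
print(layer(torch.randn(2, 16)).shape)  # torch.Size([2, 16])
```

Only A and B are trained, so the guardrail adaptation touches a small fraction of the model's parameters.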

A taxonomy of multi-layered runtime guardrails for designing foundation model-based agents: Swiss cheese model for AI safety by design

M Shamsujjoha, Q Lu, D Zhao, L Zhu - arXiv preprint arXiv:2408.02205, 2024 - arxiv.org
Foundation Model (FM) based agents are revolutionizing application development across
various domains. However, their rapidly growing capabilities and autonomy have raised …
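
The Swiss cheese framing here is that several imperfect, independent layers block more failures together than any single layer alone. A minimal sketch of such a layered runtime check, with hypothetical layers rather than the paper's taxonomy, could be:

```python
from typing import Callable

# Each layer inspects a message and returns True if it passes.
Layer = Callable[[str], bool]

def run_guardrails(message: str, layers: list[Layer]) -> bool:
    """Swiss-cheese composition: a message is allowed only if it
    passes every independent layer; any single layer can block it."""
    return all(layer(message) for layer in layers)

# Hypothetical layers at different depths of an agent stack.
layers: list[Layer] = [
    lambda m: len(m) <= 4096,                      # resource/size limit
    lambda m: "ignore previous" not in m.lower(),  # naive prompt-injection filter
    lambda m: not m.strip().startswith("rm -rf"),  # naive action filter for agents
]

print(run_guardrails("summarize this document", layers))  # True
```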

Building a domain-specific guardrail model in production

M Niknazar, PV Haley, L Ramanan, ST Truong… - arXiv preprint arXiv …, 2024 - arxiv.org
Generative AI holds the promise of enabling a range of sought-after capabilities and
revolutionizing workflows in various consumer and enterprise verticals. However, putting a …

PE-GPT: A New Paradigm for Power Electronics Design

F Lin, X Li, W Lei, JJ Rodriguez-Andina… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Large language models (LLMs) have shown exciting potential in powering the growth of
many industries, yet their adoption in the power electronics (PE) sector is hindered by a lack …