Learning from a Friend: Improving Event Extraction via Self-Training with Feedback from Abstract Meaning Representation

Z Xu, JY Lee, L Huang - Findings of the Association for …, 2023 - aclanthology.org
Data scarcity has been the main factor that hinders the progress of event extraction. To
overcome this issue, we propose a Self-Training with Feedback (STF) framework that …

Language Model Based Unsupervised Dependency Parsing with Conditional Mutual Information and Grammatical Constraints

J Chen, X He, Y Miyao - Proceedings of the 2024 Conference of …, 2024 - aclanthology.org
Previous methods based on Large Language Models (LLM) perform unsupervised
dependency parsing by maximizing bi-lexical dependence scores. However, these previous …
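Note: as a generic illustration of decoding from bi-lexical dependence scores (not this paper's actual scoring or decoding procedure), the sketch below assumes an n x n score matrix S[h, d] and has each word greedily pick its highest-scoring head; names, shapes, and the greedy rule are assumptions, and a real parser would instead decode a well-formed tree (e.g., with Chu-Liu/Edmonds or Eisner).

```python
import numpy as np

def greedy_heads(S):
    """Toy decoding from bi-lexical scores S[h, d] (head h, dependent d):
    each non-root word picks its highest-scoring head. Cycles are possible,
    so this is only an illustration of 'maximizing bi-lexical scores'."""
    n = S.shape[0]
    heads = np.full(n, -1)
    for d in range(1, n):            # position 0 is treated as the root
        cand = S[:, d].copy()
        cand[d] = -np.inf            # a word cannot head itself
        heads[d] = int(np.argmax(cand))
    return heads

rng = np.random.default_rng(0)
S = rng.random((5, 5))               # scores for a 5-token sentence (incl. root)
print(greedy_heads(S))
```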

Dynamic programming in rank space: Scaling structured inference with low-rank HMMs and PCFGs

S Yang, W Liu, K Tu - arXiv preprint arXiv:2205.00484, 2022 - arxiv.org
Hidden Markov Models (HMMs) and Probabilistic Context-Free Grammars (PCFGs) are
widely used structured models, both of which can be represented as factor graph grammars …
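Note: to make the rank-space idea concrete, here is a minimal sketch of the HMM forward algorithm when the transition matrix is given a low-rank factorization, so the recursion passes through the rank dimension in O(T·m·r) rather than the dense O(T·m²); the parameterization, names, and toy data are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def forward_lowrank(pi, U, V, emit, obs):
    """Forward algorithm for an HMM whose m x m transition matrix is
    factored as A = U @ V.T with U, V of shape (m, r), r << m.
    The update alpha_t = (V @ (U.T @ alpha_{t-1})) * emit[:, o_t]
    routes the recursion through the r-dimensional rank space."""
    alpha = pi * emit[:, obs[0]]              # (m,)
    for t in range(1, len(obs)):
        z = U.T @ alpha                       # project into rank space: (r,)
        alpha = (V @ z) * emit[:, obs[t]]     # back to state space, apply emission
    return alpha.sum()                        # (unnormalized) sequence likelihood

# toy usage with random parameters; U, V are unnormalized, purely illustrative
rng = np.random.default_rng(0)
m, r, vocab, T = 8, 2, 5, 6
pi = rng.random(m); pi /= pi.sum()
U, V = rng.random((m, r)), rng.random((m, r))
emit = rng.random((m, vocab)); emit /= emit.sum(axis=1, keepdims=True)
obs = rng.integers(0, vocab, size=T)
print(forward_lowrank(pi, U, V, emit, obs))
```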

Simple Hardware-Efficient PCFGs with Independent Left and Right Productions

W Liu, S Yang, Y Kim, K Tu - arXiv preprint arXiv:2310.14997, 2023 - arxiv.org
Scaling dense PCFGs to thousands of nonterminals via a low-rank parameterization of the
rule probability tensor has been shown to be beneficial for unsupervised parsing. However …
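Note: a minimal sketch of what a low-rank (CP-style) parameterization of the binary-rule tensor T[A, B, C] ≈ Σ_r U[A, r]·V[B, r]·W[C, r] buys during one inside-algorithm step; the factor matrices, shapes, and names are assumptions for illustration, not this paper's actual parameterization or API.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nt, rank = 1000, 32                        # many nonterminals, small rank
U = rng.random((n_nt, rank))                 # assumed CP factors of the rule tensor
V = rng.random((n_nt, rank))
W = rng.random((n_nt, rank))

def inside_binary_step(beta_left, beta_right):
    """One binary step of the inside algorithm for a single split point.

    beta_left[B], beta_right[C] are inside scores of the two child spans.
    Contracting the dense n_nt^3 tensor is replaced by contractions through
    the rank dimension, costing O(n_nt * rank) per split."""
    left = V.T @ beta_left                   # (rank,)
    right = W.T @ beta_right                 # (rank,)
    return U @ (left * right)                # (n_nt,) parent inside contribution

beta = inside_binary_step(rng.random(n_nt), rng.random(n_nt))
print(beta.shape)
```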

Forming trees with treeformers

N Patel, J Flanigan - arXiv preprint arXiv:2207.06960, 2022 - arxiv.org
Human language is known to exhibit a nested, hierarchical structure, allowing us to form
complex sentences out of smaller pieces. However, many state-of-the-art neural networks …

Improve event extraction via self-training with gradient guidance

Z Xu, JY Lee, L Huang - arXiv preprint arXiv:2205.12490, 2022 - arxiv.org
Data scarcity has been the main factor that hinders the progress of event extraction. To
overcome this issue, we propose a Self-Training with Feedback (STF) framework that …

[BOOK][B] Nondeterministic Stacks in Neural Networks

B DuSell - 2023 - search.proquest.com
Human language is full of compositional syntactic structures, and although neural networks
have contributed to groundbreaking improvements in computer systems that process …

[PDF] Injecting constraints into machine learning models

JY Lee - leejayyoon.github.io
Injecting human knowledge into neural networks is a crucial but non-trivial task as neural
networks are often treated as a black-box function, making their inner workings …