Learning from a Friend: Improving Event Extraction via Self-Training with Feedback from Abstract Meaning Representation
Data scarcity has been the main factor that hinders the progress of event extraction. To
overcome this issue, we propose a Self-Training with Feedback (STF) framework that …
Language Model Based Unsupervised Dependency Parsing with Conditional Mutual Information and Grammatical Constraints
Previous methods based on Large Language Models (LLMs) perform unsupervised
dependency parsing by maximizing bi-lexical dependence scores. However, these previous …
Dynamic programming in rank space: Scaling structured inference with low-rank HMMs and PCFGs
Hidden Markov Models (HMMs) and Probabilistic Context-Free Grammars (PCFGs) are
widely used structured models, both of which can be represented as factor graph grammars …
Simple Hardware-Efficient PCFGs with Independent Left and Right Productions
Scaling dense PCFGs to thousands of nonterminals via a low-rank parameterization of the
rule probability tensor has been shown to be beneficial for unsupervised parsing. However …
Forming Trees with Treeformers
N Patel, J Flanigan - arXiv preprint arXiv:2207.06960, 2022 - arxiv.org
Human language is known to exhibit a nested, hierarchical structure, allowing us to form
complex sentences out of smaller pieces. However, many state-of-the-art neural networks …
Improve event extraction via self-training with gradient guidance
Data scarcity has been the main factor that hinders the progress of event extraction. To
overcome this issue, we propose a Self-Training with Feedback (STF) framework that …
[BOOK][B] Nondeterministic Stacks in Neural Networks
B DuSell - 2023 - search.proquest.com
Human language is full of compositional syntactic structures, and although neural networks
have contributed to groundbreaking improvements in computer systems that process …
[PDF][PDF] Injecting constraints into machine learning models
JY Lee - leejayyoon.github.io
Injecting human knowledge into neural networks is a crucial but non-trivial task as neural
networks are often treated as a black-box function, making their inner workings …