Specter: Document-level representation learning using citation-informed transformers
Representation learning is a critical ingredient for natural language processing systems.
Recent Transformer language models like BERT learn powerful textual representations, but
these models are targeted towards token- and sentence-level training objectives and do not
leverage information on inter-document relatedness, which limits their document-level
representation power. For applications on scientific documents, such as classification and
recommendation, accurate embeddings of documents are a necessity. We propose …
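To make the idea of a document-level embedding concrete, here is a minimal sketch of how such a citation-pretrained encoder can be used off the shelf. It assumes the publicly released `allenai/specter` checkpoint on the Hugging Face hub and follows the paper's convention of encoding the concatenated title and abstract and taking the final-layer [CLS] vector as the paper embedding; the example paper text is illustrative only.

```python
# Minimal sketch: document embeddings from a citation-pretrained Transformer.
# Assumes the released `allenai/specter` checkpoint; any BERT-style encoder
# would be consumed the same way.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("allenai/specter")
model = AutoModel.from_pretrained("allenai/specter")
model.eval()

papers = [
    {
        "title": "Specter: Document-level representation learning "
                 "using citation-informed transformers",
        "abstract": "Representation learning is a critical ingredient ...",
    },
]

# Each paper is encoded as "title [SEP] abstract", the input format
# used for document-level pretraining.
texts = [p["title"] + tokenizer.sep_token + p["abstract"] for p in papers]
inputs = tokenizer(texts, padding=True, truncation=True,
                   max_length=512, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs)

# The final-layer [CLS] token serves as the document embedding,
# usable directly for classification or recommendation.
embeddings = out.last_hidden_state[:, 0, :]  # shape: (num_papers, hidden_dim)
```

Because the embeddings are produced by a frozen forward pass, they can feed downstream classifiers or nearest-neighbor recommendation without task-specific fine-tuning.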