You Can Teach an Old Dog New Tricks! On Training Knowledge Graph Embeddings
2020 • madoc.bib.uni-mannheim.de
Abstract
Knowledge graph embedding (KGE) models learn algebraic representations of the entities and relations in a knowledge graph. A vast number of KGE techniques for multi-relational link prediction have been proposed in the recent literature, often with state-of-the-art performance. These approaches differ along a number of dimensions, including different model architectures, different training strategies, and different approaches to hyperparameter optimization. In this paper, we take a step back and aim to summarize and quantify empirically the impact of each of these dimensions on model performance. We report on the results of an extensive experimental study with popular model architectures and training strategies across a wide range of hyperparameter settings. We found that when trained appropriately, the relative performance differences between various model architectures often shrink and sometimes even reverse when compared to prior results. For example, RESCAL (Nickel et al., 2011), one of the first KGE models, showed strong performance when trained with state-of-the-art techniques; it was competitive with or outperformed more recent architectures. We also found that good (and often superior to prior studies) model configurations can be found by exploring relatively few random samples from a large hyperparameter space. Our results suggest that many of the more advanced architectures and techniques proposed in the literature should be revisited to reassess their individual benefits. To foster further reproducible research, we provide all our implementations and experimental results as part of the open source LibKGE framework.
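To make "algebraic representations" and the RESCAL example concrete, the following NumPy sketch shows a RESCAL-style bilinear scoring function: each entity gets a d-dimensional vector and each relation a d×d matrix, and a triple (s, r, o) is scored as e_s^T R_r e_o. The array names, dimensions, and random initialization are purely illustrative assumptions and are not taken from the paper or from the LibKGE API.

```python
import numpy as np

# Illustrative sketch of a RESCAL-style scoring function (Nickel et al., 2011).
# Shapes and names are hypothetical; a real model would learn these parameters.
rng = np.random.default_rng(0)
num_entities, num_relations, dim = 1000, 50, 128

entity_emb = rng.normal(size=(num_entities, dim))          # e_i: one vector per entity
relation_mat = rng.normal(size=(num_relations, dim, dim))  # R_k: one matrix per relation

def rescal_score(s: int, r: int, o: int) -> float:
    """Bilinear RESCAL score e_s^T R_r e_o for the triple (s, r, o)."""
    return float(entity_emb[s] @ relation_mat[r] @ entity_emb[o])

def rank_objects(s: int, r: int) -> np.ndarray:
    """Multi-relational link prediction for a query (s, r, ?):
    score every candidate object and return entity indices, best first."""
    scores = entity_emb[s] @ relation_mat[r] @ entity_emb.T
    return np.argsort(-scores)
```

In this framing, the "training strategies" and "hyperparameter optimization" dimensions studied in the paper concern how such parameters are fit (e.g., loss, negative sampling, regularization) and how settings are chosen, not the scoring function itself.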