Contrastive Learning as Kernel Approximation
KC Tsiolis - arXiv preprint arXiv:2309.02651, 2023 - arxiv.org
In standard supervised machine learning, it is necessary to provide a label for every input in
the data. While raw data in many application domains is easily obtainable on the Internet …
Learning efficient task-specific meta-embeddings with word prisms
J He, KC Tsiolis, K Kenyon-Dean… - arXiv preprint arXiv …, 2020 - arxiv.org
Word embeddings are trained to predict word co-occurrence statistics, which leads them to
possess different lexical properties (syntactic, semantic, etc.) depending on the notion of …
Free energy node embedding via generalized skip-gram with negative sampling
A widely established set of unsupervised node embedding methods can be interpreted as
consisting of two distinctive steps: i) the definition of a similarity matrix based on the graph of …
Sentiment Text Analysis for Providing Individualized Feed of Recommendations Using Reinforcement Learning
SD Kaftanis… - 2022 International Joint …, 2022 - ieeexplore.ieee.org
We present a new method for producing personalized text
recommendations to users based on their sentiment. This method aims to identify patterns in …
Node Embedding based on the Free Energy Distance
We propose a novel unsupervised node embedding method leveraging the free energy
distance. More precisely, we build a matrix to encode pairwise node similarities based on …
No Context Left Behind: Meta-Embeddings with Word Vectors Trained on Different Notions of Context
KC Tsiolis - kctsiolis.github.io
Recent work has shown that combining pretrained word embeddings from different
algorithms into a single word embedding leads to improved intrinsic and extrinsic …