Embedding (machine learning)

From Wikipedia, the free encyclopedia

Embedding in machine learning refers to a representation learning technique that maps complex, high-dimensional data into a lower-dimensional vector space of numerical vectors.[1]

Technique


The term also denotes the resulting representation itself, in which meaningful patterns or relationships of the original data are preserved. As a technique, embedding learns these vectors from data such as words, images, or user interactions, in contrast to manually designed representations such as one-hot encoding.[2] This process reduces dimensionality and captures key features without requiring prior knowledge of the domain.
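The contrast with one-hot encoding can be made concrete with a minimal NumPy sketch. The four-word vocabulary, the 3-dimensional embedding table, and the random initialization are illustrative assumptions, not from the article; in practice the table's rows would be adjusted during training.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded so the example is reproducible

vocab = ["cat", "dog", "car", "truck"]  # toy vocabulary (assumption)
index = {word: i for i, word in enumerate(vocab)}

def one_hot(word: str) -> np.ndarray:
    # Manually designed encoding: one dimension per vocabulary word,
    # so every pair of distinct words is equally dissimilar.
    vec = np.zeros(len(vocab))
    vec[index[word]] = 1.0
    return vec

# A learned embedding replaces the identity-like one-hot scheme with a
# dense table; random here purely for illustration.
embedding_dim = 3  # far smaller than real vocabularies and dimensions
embedding_table = rng.normal(size=(len(vocab), embedding_dim))

def embed(word: str) -> np.ndarray:
    # Embedding lookup: select the row for this word.
    return embedding_table[index[word]]

print(one_hot("cat"))  # 4-dimensional, exactly one nonzero entry
print(embed("cat"))    # 3-dimensional dense vector
```

Because the dense vectors are learned rather than fixed, related words can end up with similar rows, which one-hot vectors can never express.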

Similarity


In natural language processing, words or concepts may be represented as feature vectors, where similar concepts are mapped to nearby vectors. The resulting embeddings vary by type, including word embeddings for text (e.g., Word2Vec), image embeddings for visual data, and knowledge graph embeddings for knowledge graphs, each tailored to tasks like NLP, computer vision, or recommendation systems.[3] This dual role enhances model efficiency and accuracy by automating feature extraction and revealing latent similarities across diverse applications.

To compare two embeddings, a similarity measure can be used to quantify how alike the concepts they represent are. If the vectors are normalized to a magnitude of 1, the common similarity measures all become simple functions of cos θ, the cosine of the angle between the two vectors.[4]
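A short NumPy sketch, using arbitrary random vectors as an illustrative assumption, shows why normalization makes the measures agree: for unit vectors, the dot product equals cos θ, and the squared Euclidean distance is 2 − 2 cos θ, so all three measures rank pairs identically.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=8)
b = rng.normal(size=8)
a /= np.linalg.norm(a)  # normalize to unit length
b /= np.linalg.norm(b)

cos_theta = a @ b               # dot product = cosine similarity for unit vectors
sq_dist = np.sum((a - b) ** 2)  # squared Euclidean distance

# |a - b|^2 = |a|^2 + |b|^2 - 2 a.b = 2 - 2 cos(theta) for unit vectors.
assert np.isclose(sq_dist, 2 - 2 * cos_theta)
```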

Similarity measures:

- Euclidean distance: distance between the ends of the vectors; formula |a − b|, in scalar form √(Σₙ (aₙ − bₙ)²); negatively correlated with similarity.
- Cosine similarity: cosine of the angle θ between the vectors; formula (a · b) / (|a| |b|), in scalar form Σₙ aₙbₙ / √((Σₙ aₙ²)(Σₙ bₙ²)); positively correlated with similarity.
- Dot product: cosine similarity multiplied by the lengths of both vectors; formula a · b, in scalar form Σₙ aₙbₙ; positively correlated with similarity.

Cosine similarity disregards the magnitude of the vectors when determining similarity, so it is less biased toward training data that appears very frequently. The dot product incorporates magnitude inherently, so it tends to favour more popular data.[4] In high-dimensional vector spaces, pairwise distances between vectors tend to concentrate around a common value, so Euclidean distance becomes less discriminative for large embedding vectors.[5]
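The magnitude bias can be demonstrated with two hypothetical item vectors; the specific vectors below are assumptions chosen so that one item points in exactly the query's direction while the other is longer but less aligned.

```python
import numpy as np

query = np.array([1.0, 0.0])
niche = np.array([1.0, 0.0])    # same direction as the query, small magnitude
popular = np.array([5.0, 3.0])  # larger magnitude, different direction

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: normalizes away both magnitudes.
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Cosine similarity ranks by direction only, so the niche item wins.
assert cosine(query, niche) > cosine(query, popular)

# The dot product rewards magnitude, so the popular item wins.
assert query @ popular > query @ niche
```

The same query thus retrieves different items depending on the measure, which is why the choice between cosine similarity and dot product matters in recommendation settings.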

References
