
Using hyperboloids to improve product retrieval


Many machine learning models depend on the concept of embedding, or mapping data to a representational space, where it can be manipulated or measured in useful ways. Usually, a data embedding is a point in the space — a vector.

In recent years, researchers at Amazon and elsewhere have been investigating the idea of hyperbolic embedding, or embedding data, not as points in space, but as higher-dimensional analogues of rectangles on a curved surface. This has numerous advantages, one of which is the ability to capture hierarchical relationships between data points.

At this year’s International Conference on Web Search and Data Mining (WSDM), we and our colleagues are presenting a paper on the use of hyperbolic embeddings for product retrieval. Because product catalogues are often organized hierarchically, with individual products belonging to a succession of more and more general categories (e.g., tablet/computer/electronics), hyperbolic embeddings are particularly well suited to this task.

In our approach, we represent a query — say, “Fire TV” — as a rectangle in hyperbolic space, known as a hyperboloid. Query matches are those products whose vector embeddings lie within the hyperboloid’s boundaries.

A new product retrieval method embeds queries as hyperboloids, or higher-dimensional analogues of rectangles on a curved surface. Each hyperboloid is represented by two vectors: a centroid vector, which defines the hyperboloid’s center, and a limit vector. The embedding of a multi-term query is the intersection (red polygon) of the embeddings of its component terms.

In experiments, we compared this approach to nine different methods that use vector embeddings and one method that embeds data as rectangular boxes in Euclidean space — essentially, non-curved versions of hyperboloids.

We used two different datasets and five different measures of retrieval accuracy and found that our approach was the best performer across the board. In some cases, the improvements were dramatic — as much as 33% relative to the best vector embedding method and 27% relative to the Euclidean box embedding.



Our approach also aids in model interpretability, as we use an attention mechanism to determine which elements of a query string are most relevant to which attributes of a product. The attention values for a given query provide an easy way to visualize the model’s rationale for selecting a certain product.

For instance, one experiment showed that when the query included the phrase “daily moisturizer”, the model attended to the word “moisturizer” when selecting products that had the word “lotion” in their titles.

Hyperbolic embeddings

An advantage of both Euclidean box embeddings and hyperbolic embeddings is that they can expand and contract according to the generality of a query. With either approach, for instance, the embedding corresponding to the query “Fire” — which would also encompass Fire tablets and Fire cubes — would be larger than the embedding corresponding to the query “Fire TV”.


By the same token, both approaches offer an efficient way to combine queries. For instance, the embedding of the query “Fire TV stick with Alexa” would be the intersection of the embeddings corresponding to “Fire TV stick” and “Alexa”, while the embedding of the query “Fire or Kindle” would be the union of the embeddings for “Fire” and “Kindle”.
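To make the intersection and union operations concrete, here is a minimal sketch using Euclidean box embeddings, the non-curved analogue mentioned above. The Box class, the corner arithmetic, and the example coordinates are illustrative assumptions; the paper’s actual operations are carried out on hyperboloids in hyperbolic space.

```python
import numpy as np

class Box:
    """Axis-aligned box embedding stored as a centroid and a per-dimension offset.
    This Euclidean analogue is used only to illustrate intersection and union;
    the paper performs the corresponding operations on hyperboloids."""
    def __init__(self, centroid, offset):
        self.centroid = np.asarray(centroid, dtype=float)
        self.offset = np.abs(np.asarray(offset, dtype=float))

    @property
    def lower(self):
        return self.centroid - self.offset

    @property
    def upper(self):
        return self.centroid + self.offset

def intersect(a, b):
    """Tightest box contained in both inputs (an 'and' of two queries)."""
    lo = np.maximum(a.lower, b.lower)
    hi = np.maximum(np.minimum(a.upper, b.upper), lo)  # degenerate box if disjoint
    return Box((lo + hi) / 2, (hi - lo) / 2)

def union(a, b):
    """Smallest box enclosing both inputs (an 'or' of two queries)."""
    lo = np.minimum(a.lower, b.lower)
    hi = np.maximum(a.upper, b.upper)
    return Box((lo + hi) / 2, (hi - lo) / 2)

# Hypothetical 2-D embeddings, for illustration only.
fire_tv_stick = Box([0.4, 0.2], [0.3, 0.3])
alexa = Box([0.6, 0.1], [0.4, 0.2])
fire_tv_stick_with_alexa = intersect(fire_tv_stick, alexa)
fire_or_kindle = union(Box([0.2, 0.5], [0.2, 0.2]), Box([0.7, 0.5], [0.2, 0.2]))
```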

Where hyperbolic space has an advantage over Euclidean space is in representing hierarchies. Hyperbolic space is intrinsically curved, so it provides the representational capacity of curvature for free, without requiring extra dimensions to encode it.

For instance, a hierarchical tree can be mapped onto a ball such that the root of the tree is at the center of the ball, its leaf nodes are on the surface, and the other layers of the tree fall at regular distances in-between. In Euclidean space, representing that ball requires three dimensions, but in hyperbolic space, it requires only two. This dimensionality reduction enables hyperboloids to model hierarchical relationships efficiently, even when the hierarchical trees are enormous.
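The sketch below illustrates that point in the two-dimensional Poincaré disk, one standard model of hyperbolic space. The distance formula is the standard Poincaré-ball metric; the particular tree layout, radii, and point counts are assumptions made for illustration, not the construction used in the paper.

```python
import numpy as np

def poincare_distance(u, v):
    """Hyperbolic distance between two points inside the unit (Poincaré) disk."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    num = 2 * np.sum((u - v) ** 2)
    den = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + num / den)

# Illustrative layout: root at the origin, internal nodes on a middle radius,
# leaves near the boundary. As the radius approaches 1, hyperbolic distances
# grow rapidly, which is what gives the disk room for exponentially many leaves.
root = np.array([0.0, 0.0])
children = [0.6 * np.array([np.cos(t), np.sin(t)])
            for t in np.linspace(0, 2 * np.pi, 4, endpoint=False)]
leaves = [0.95 * np.array([np.cos(t), np.sin(t)])
          for t in np.linspace(0, 2 * np.pi, 16, endpoint=False)]

print(poincare_distance(root, children[0]))     # root -> internal node
print(poincare_distance(root, leaves[0]))       # root -> leaf: larger
print(poincare_distance(leaves[0], leaves[1]))  # nearby leaves are still far apart hyperbolically
```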


In our paper, we define hyperboloids using two vectors: one vector indicates the center (centroid) of the hyperboloid, and the other indicates the distance from the centroid to the hyperboloid’s edge. This compact representation further increases the efficiency of computing in hyperbolic space.
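As a rough sketch of that two-vector representation, the snippet below stores a hyperboloid as a centroid vector and a limit vector and tests whether a product embedding falls inside it. The Poincaré-ball distance and the single-threshold containment test are simplifying assumptions for illustration; the paper’s actual containment criterion and choice of hyperbolic model may differ.

```python
import numpy as np
from dataclasses import dataclass

def poincare_distance(u, v):
    """Distance in the Poincaré-ball model of hyperbolic space."""
    num = 2 * np.sum((u - v) ** 2)
    den = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + num / den)

@dataclass
class Hyperboloid:
    centroid: np.ndarray  # center of the query region
    limit: np.ndarray     # per-dimension extent from the centroid to the edge

    def contains(self, point):
        """Illustrative containment test: a product matches if its hyperbolic
        distance to the centroid is within the norm of the limit vector.
        (A simplified stand-in for the paper's per-dimension criterion.)"""
        return poincare_distance(self.centroid, point) <= np.linalg.norm(self.limit)

# Hypothetical query and product embeddings, for illustration only.
query = Hyperboloid(centroid=np.array([0.1, 0.2]), limit=np.array([0.5, 0.5]))
product = np.array([0.15, 0.25])
print(query.contains(product))  # True if the product falls inside the query region
```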

The model

Our machine learning model takes as inputs both a product query and the titles of candidate products. All the input texts are then broken into overlapping three-character chunks, or trigrams.
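For example, overlapping character trigrams can be produced with a few lines of Python; the boundary-padding character “#” is an assumption for illustration, not something the paper specifies.

```python
def char_trigrams(text):
    """Break a string into overlapping three-character chunks (trigrams)."""
    padded = f"#{text.lower()}#"  # assumed boundary padding so word edges get their own trigrams
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

print(char_trigrams("fire tv"))
# ['#fi', 'fir', 'ire', 're ', 'e t', ' tv', 'tv#']
```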

An encoder maps the trigrams, for both query and products, to hyperbolic space. The query mappings are hyperboloids, while the product mappings are hyperbolic vectors. An intersection layer then produces a new set of hyperboloids by finding the intersection of every pair of trigram embeddings from the query.

Both the query trigrams and their intersections then pass to an attention layer, which, during training, learns which query elements are most relevant to which product titles. The embedding of each product title also passes to a self-attention layer, which learns which title elements tend to be most pertinent to product retrieval queries.

From the attention values, the model computes a new set of vectors, representing the centroids of new query embedding hyperboloids and new embeddings of product titles, all biased toward the features the attention model identifies as most important. The intersection of query hyperboloids and product vectors determines which products are presented to the customer and in what order.
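The sketch below shows one way such an attention-weighted combination could look, with toy dot-product attention over query trigram embeddings and a distance-based product score. It is an illustrative stand-in, not the ANTHEM architecture; all embeddings and the scoring rule are hypothetical.

```python
import numpy as np

def softmax(x):
    x = x - np.max(x)
    e = np.exp(x)
    return e / e.sum()

def attend(query_trigram_vecs, product_title_vec):
    """Toy dot-product attention: weight each query trigram by its similarity
    to the product title, then form an attention-weighted query centroid."""
    scores = query_trigram_vecs @ product_title_vec
    weights = softmax(scores)
    return weights, weights @ query_trigram_vecs

# Hypothetical embeddings, for illustration only.
query_trigrams = np.array([[0.2, 0.1], [0.5, 0.4], [0.1, 0.6]])
product_title = np.array([0.4, 0.5])
weights, query_centroid = attend(query_trigrams, product_title)

# Products could then be ranked by their distance to the attended query centroid.
score = -np.linalg.norm(query_centroid - product_title)
```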

Note that we don’t train the model directly on representations of data hierarchies. To the extent that it is using hierarchical relationships, it simply learns them from training data.

The weights computed by the attention mechanism provide a way to visualize the rationales for the product retrieval model’s decisions. In these figures, the y-axis represents trigrams of the query, and the x-axis represents trigrams of the product title. Sometimes, the mechanism finds lexical matches, such as “leather” with “leather” in the first grid. But often, the matches are semantic, such as “lotion” with “moisturizer” or “driver” with “clubs”.

In our experiments, we measured the performance of our model and ten baselines using five metrics. Three of the metrics were variations of normalized discounted cumulative gain (NDCG), which considers not only how many relevant results are contained in the top N but how highly they rank. We measured NDCG for the top three results (NDCG@3), the top five (NDCG@5), and the top 10 (NDCG@10). We also used mean average precision, which averages the precision of the ranked list at each position where a relevant result appears, and mean reciprocal rank, which scores each query by the reciprocal of the rank of its first relevant result.
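For reference, these metrics can be computed as follows for a single ranked result list; these are the standard definitions, with the averaging over queries omitted.

```python
import numpy as np

def ndcg_at_k(relevances, k):
    """Normalized discounted cumulative gain for the top-k results.
    'relevances' lists the graded relevance of each returned item, in ranked order."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = np.sum(rel * discounts)
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = np.sum(ideal * discounts[:ideal.size])
    return dcg / idcg if idcg > 0 else 0.0

def average_precision(binary_relevances):
    """Average of precision@i over the positions i where a relevant result appears."""
    rel = np.asarray(binary_relevances, dtype=int)
    if rel.sum() == 0:
        return 0.0
    precisions = np.cumsum(rel) / np.arange(1, rel.size + 1)
    return float(np.sum(precisions * rel) / rel.sum())

def reciprocal_rank(binary_relevances):
    """1 / rank of the first relevant result (0 if none is relevant)."""
    for i, r in enumerate(binary_relevances, start=1):
        if r:
            return 1.0 / i
    return 0.0

ranked = [1, 0, 1, 1, 0]  # relevance of results, best-ranked first
print(ndcg_at_k(ranked, 3), average_precision(ranked), reciprocal_rank(ranked))
```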

ANTHEM’s experimental results.

As can be seen, on all five measures, on both a public dataset and a private dataset, our model — which we call ANTHEM, for AtteNTive Hyperbolic Entity Model — yielded the best results. On the private dataset, the gains over the best-performing vector embedding model (BERT) were consistently around 30%. On the public dataset, they were consistently around 9%.

Relative to the model that used Euclidean box embeddings (E-ANTHEM), the greatest gains came on NDCG@10 — 21% on the private dataset, 8% on the public. This is likely because of the hierarchical information that ANTHEM captures. That is, Euclidean embeddings may do a good job of finding the top matches, but ANTHEM does a better job of exploring the hierarchical product categories those matches belong to.


