
Anomaly detection for graph-based data


Anomaly detection is the identification of data that diverges significantly from established norms, which can indicate harmful activity. It’s a particularly tough challenge in the case of graph-based data, where anomaly detection is based not only on data values but on topological relationships within the graph. Because anomalies tend to be rare, it can be hard to find enough examples to train a machine learning model on the complexities of anomaly detection in graphs.

In a paper we presented last week at the International Conference on Web Search and Data Mining (WSDM), we describe a new method for synthesizing training data for graph-based anomaly detectors. Our method combines a variational graph autoencoder, which learns a probability distribution that can be used to generate random samples, with diffusion modeling, which learns to convert random noise into intelligible outputs.

In tests, we compared anomaly detectors trained with synthetic data generated through our method with detectors trained using five previous approaches to data augmentation. We compared the models on five datasets, using three different metrics, for a total of 15 experiments. In 13 of those experiments, our model came out on top; different models were the top performers on the other two.

Graph-based modeling

Graphs are the natural way to represent data movement through networks, whether they’re computer networks, communication networks, or networks of interactions, as between buyers and sellers on an e-commerce site. Anomaly detection in graphs can thus help detect server attacks, spam, fraud, and other types of abuse.


In recent years, graph analysis has, like most fields, benefited from deep learning. Graph neural networks build up graph representations iteratively: first, they embed the data corresponding to nodes in the graph; then they produce embeddings that combine node embeddings with those of adjacent nodes; then they produce embeddings that combine those higher-level embeddings; and so on, until some fixed termination point. Ultimately, the model produces embeddings that capture information about whole neighborhoods of the graph. (In our experiments, we decided on four-hop neighborhoods.)
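The following PyTorch sketch illustrates this kind of iterative neighborhood aggregation in its simplest form. The mean aggregation, hidden size, and toy random adjacency matrix are assumptions for illustration, not the exact architecture used in the paper.

```python
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    """One round of neighborhood aggregation: each node's embedding is
    combined with the mean of its neighbors' embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(2 * dim, dim)

    def forward(self, h, adj):
        # adj: (N, N) adjacency matrix; divide by degree to average over neighbors
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = (adj @ h) / deg
        return torch.relu(self.linear(torch.cat([h, neigh], dim=-1)))

# Stacking four layers lets each node's embedding reflect its four-hop neighborhood.
layers = nn.ModuleList([SimpleGNNLayer(64) for _ in range(4)])
h = torch.randn(100, 64)                        # initial node embeddings
adj = (torch.rand(100, 100) < 0.05).float()     # toy adjacency matrix
for layer in layers:
    h = layer(h, adj)                           # h now encodes neighborhood structure
```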

The complexity of graphs — the need to represent data both topologically and quantitatively — means that models for analyzing them need extra training data, which can be scarce in the wild. Hence the need for synthetic training data.

Latent-space diffusion

At its core, our data synthesis model is a variational graph autoencoder. “Autoencoder” means that it’s trained to output the same data it receives as input. In between the input and output layers, however, is a bottleneck layer that forces the network to learn a compressed representation of the input.

“Variational” means that the model’s training objective encourages it not only to faithfully reproduce inputs but also to learn compressed representations whose distribution adheres to some prespecified shape, such as a Gaussian distribution. This means that, during the data synthesis phase, random samples from that distribution are likely to result in realistic-looking data.
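As a minimal sketch of how such a variational bottleneck and its training objective typically look, assuming a Gaussian prior and a mean-squared-error reconstruction term (illustrative choices, not necessarily the paper's exact loss):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalBottleneck(nn.Module):
    """Maps an input embedding to the mean and log-variance of a Gaussian,
    then samples a latent code with the reparameterization trick."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, latent_dim)
        self.logvar = nn.Linear(in_dim, latent_dim)

    def forward(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar

def vae_loss(x_recon, x, mu, logvar):
    # Reconstruction term: reproduce the input faithfully.
    recon = F.mse_loss(x_recon, x)
    # KL term: keep the latent distribution close to a standard Gaussian, so that
    # random Gaussian samples decode to realistic-looking data at synthesis time.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```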

The autoencoder’s compressed representations define a representational space, and it’s within that space that we apply diffusion modeling. The autoencoder produces an embedding of the input graph, and our model iteratively adds noise to it. A denoiser then reverses the process, iteratively removing the noise to recover a clean embedding.

Our approach applies diffusion modeling within the representational space (latent space) defined by the graph encoder. Noise is added to the input embedding (z0) in T discrete steps; the embedding is then denoised in another T steps.

This is, effectively, a second check to ensure that the synthetic data looks like real data. If the distribution learned by the autoencoder doesn’t completely capture the characteristics of anomalous data, the addition of noise can “blur out” the mischaracterized features. The denoising step then fills in the blurred-out features with features more consistent with the training data.
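The sketch below shows the general DDPM-style recipe for this latent-space diffusion: the forward process adds Gaussian noise to the embedding z0 over T steps, and the reverse process iteratively removes the noise predicted by a trained denoiser. The noise schedule, step count, and the `denoiser` network are placeholders, not the paper's exact parameterization.

```python
import torch

T = 100
betas = torch.linspace(1e-4, 0.02, T)          # assumed linear noise schedule
alphas_bar = torch.cumprod(1 - betas, dim=0)

def add_noise(z0, t):
    """Forward process: jump directly to step t by scaling z0 and adding Gaussian noise."""
    noise = torch.randn_like(z0)
    zt = alphas_bar[t].sqrt() * z0 + (1 - alphas_bar[t]).sqrt() * noise
    return zt, noise

def denoise(zT, denoiser):
    """Reverse process: iteratively remove the noise predicted by a trained network."""
    z = zT
    for t in reversed(range(T)):
        pred_noise = denoiser(z, t)            # hypothetical trained noise predictor
        alpha, a_bar = 1 - betas[t], alphas_bar[t]
        z = (z - betas[t] / (1 - a_bar).sqrt() * pred_noise) / alpha.sqrt()
        if t > 0:
            z = z + betas[t].sqrt() * torch.randn_like(z)
    return z
```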

Data synthesis

Our approach has a couple of other wrinkles designed to improve the quality of the synthesized data. One is that, after the diffusion process, the reconstituted graph embedding passes not to one but to several decoders, each specialized for a different aspect of the graph.

At minimum, there are two decoders, one for node features and one for graph structure. If the graphs in question include time series data, we use a third decoder to assign time stamps to nodes.
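A hypothetical sketch of what such a multi-decoder head might look like; the linear heads and sigmoid edge scores are illustrative choices rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiHeadDecoder(nn.Module):
    """Decodes a denoised graph embedding with one head per aspect of the graph."""
    def __init__(self, latent_dim, feat_dim, num_nodes, with_timestamps=False):
        super().__init__()
        self.feature_head = nn.Linear(latent_dim, feat_dim)     # node features
        self.structure_head = nn.Linear(latent_dim, num_nodes)  # per-node edge scores
        self.time_head = nn.Linear(latent_dim, 1) if with_timestamps else None

    def forward(self, z):
        out = {
            "features": self.feature_head(z),
            # Scores in [0, 1] that can be thresholded into an adjacency matrix
            "edges": torch.sigmoid(self.structure_head(z)),
        }
        if self.time_head is not None:
            out["timestamps"] = self.time_head(z)                # per-node time stamps
        return out
```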


Another wrinkle is that, during training, we label graph nodes as either anomalous or normal, then train on both positive and negative examples. This helps the model learn the distinction between the two. But it also means that the model learns a distribution that’s conditioned on class labels, so that during synthesis, we can steer it toward samples that will result in graphs that contain anomalies.
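One common way to implement this kind of class-conditional generation is to feed a label embedding into the denoiser, as in the sketch below; the concatenation scheme, network sizes, and two-class setup are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ConditionalDenoiser(nn.Module):
    """Denoiser conditioned on a class label (normal vs. anomalous), so that
    sampling can be steered toward graphs that contain anomalies."""
    def __init__(self, latent_dim, num_classes=2):
        super().__init__()
        self.label_embed = nn.Embedding(num_classes, latent_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * latent_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, zt, t, label):
        # Concatenate the noisy latent with the label embedding; in a full model
        # the timestep t would typically be embedded and injected as well.
        cond = self.label_embed(label)
        return self.net(torch.cat([zt, cond], dim=-1))

# At synthesis time, passing label=1 (anomalous) steers generation toward
# latents that decode into anomaly-containing graphs.
```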

Finally, it was important that our model be able to generate heterogeneous graphs — that is, graphs with different node and edge types. In an e-commerce setting, for instance, nodes might represent buyers, sellers, and product pages, while edges might represent purchases, product views, reviews, and the like.

Consequently, as the encoder in our autoencoder, we use a heterogeneous-graph transformer, a module with several modifications that enable it to handle heterogeneous graphs, including separate attention mechanisms for different node and edge types.
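The full heterogeneous-graph transformer is more involved, but the simplified sketch below conveys the core idea of type-specific attention parameters; the single-head attention and per-type linear projections are a minimal stand-in, not the module's actual design.

```python
import torch
import torch.nn as nn

class TypedAttention(nn.Module):
    """Toy heterogeneous attention: each node type gets its own query/key/value
    projections, so buyers, sellers, and product pages are processed differently."""
    def __init__(self, dim, node_types):
        super().__init__()
        self.q = nn.ModuleDict({t: nn.Linear(dim, dim) for t in node_types})
        self.k = nn.ModuleDict({t: nn.Linear(dim, dim) for t in node_types})
        self.v = nn.ModuleDict({t: nn.Linear(dim, dim) for t in node_types})

    def forward(self, h_dst, dst_type, h_src, src_type):
        # Attention from one destination node to its (possibly differently typed) neighbors.
        q = self.q[dst_type](h_dst)                       # (1, dim)
        k = self.k[src_type](h_src)                       # (N, dim)
        v = self.v[src_type](h_src)
        attn = torch.softmax(q @ k.T / k.shape[-1] ** 0.5, dim=-1)
        return attn @ v                                   # (1, dim)

layer = TypedAttention(64, ["buyer", "seller", "product"])
out = layer(torch.randn(1, 64), "buyer", torch.randn(8, 64), "product")
```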

Taken together, these features of our model enable it to outperform its predecessors, and in the paper, we report an ablation study showing that each of these features contributes significantly to our model’s success.


