
Automatically generating labeled training images


Machine learning (ML) models thrive on data, but gathering and labeling training data can be a resource-intensive process. A common way to address this challenge is with synthetic data, but even synthetic data usually requires laborious hand annotation by human analysts.

At this year’s Computer Vision and Pattern Recognition conference (CVPR), we presented a method called HandsOff that eliminates the need to hand-annotate synthetic image data. By leveraging a small number of existing labeled images and a generative adversarial network (GAN), HandsOff can produce an infinite number of synthetic images, complete with labels.

In experiments, HandsOff achieves state-of-the-art performance on key computer vision tasks such as semantic segmentation, keypoint detection, and depth estimation — all while requiring fewer than 50 pre-existing labeled images.


During training, GANs learn a mapping between sample images and points in a representational space, the latent space. Randomly selecting a point in the latent space enables the GAN to generate a novel image, one that wasn’t in its training data.

HandsOff relies on GAN inversion, or learning to predict the point in the latent space that corresponds to an input image. By applying GAN inversion to labeled images, we produce a small dataset of points and labels that can be used to train a third model, one that labels the points in the GAN’s latent space. Now, we can generate an image using a GAN, feed its corresponding point into the third model to generate the image’s label, and repeat this process over and over to produce labeled datasets of arbitrary size.

An overview of the HandsOff procedure.
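In code, the generation loop looks roughly like the sketch below. This is a minimal illustration rather than the paper's implementation: the linear generator and label_model are stand-ins for a trained StyleGAN generator and a trained latent-space labeler, and every name and size is an assumption.

```python
import torch
import torch.nn as nn

LATENT_DIM, IMG_RES, NUM_CLASSES = 512, 64, 10

# Stand-ins for a trained GAN generator and a trained latent-space labeler.
generator = nn.Linear(LATENT_DIM, IMG_RES * IMG_RES)
label_model = nn.Linear(LATENT_DIM, NUM_CLASSES * IMG_RES * IMG_RES)

def generate_labeled_dataset(n_samples):
    """Sample latent vectors, decode them to images, and label every pixel."""
    images, labels = [], []
    with torch.no_grad():
        for _ in range(n_samples):
            z = torch.randn(1, LATENT_DIM)                 # random point in the latent space
            img = generator(z).view(IMG_RES, IMG_RES)      # synthetic image
            logits = label_model(z).view(NUM_CLASSES, IMG_RES, IMG_RES)
            labels.append(logits.argmax(dim=0))            # per-pixel label map
            images.append(img)
    return torch.stack(images), torch.stack(labels)

imgs, segs = generate_labeled_dataset(8)   # a labeled dataset of arbitrary size
print(imgs.shape, segs.shape)
```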

GANs

GANs generate images by transforming random vectors into sample images. GAN training involves two models, a generator and a discriminator. The discriminator tries to learn the difference between real images and images output by the generator, and the generator learns to generate images that can fool the discriminator.
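The sketch below shows this adversarial setup in its most generic form; it is background rather than anything HandsOff-specific, and the tiny networks, batch size, and random placeholder for real images are illustrative assumptions.

```python
import torch
import torch.nn as nn

latent_dim, img_dim, batch = 64, 28 * 28, 32
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_images = torch.rand(batch, img_dim) * 2 - 1   # placeholder for a batch of real images

for step in range(100):
    # Discriminator step: push real images toward "real" (1), generated toward "fake" (0).
    fake = G(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(D(real_images), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label generated images as real.
    g_loss = bce(D(G(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```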

During training, the generator learns a probability distribution over images and encodes the variation in natural images using the variation in random vectors. A random vector that generates an image can be perturbed slightly to modify semantically meaningful aspects of the image, such as lighting, color, or position. These random vectors thus serve as representations of an image in a latent space.
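A minimal sketch of sampling and perturbing a latent vector might look like this; the linear generator is again a stand-in, and with a real trained GAN the nearby latent vector would yield a similar image with slightly altered attributes.

```python
import torch
import torch.nn as nn

generator = nn.Linear(512, 64 * 64)            # stand-in for a trained GAN generator

z = torch.randn(1, 512)                        # a random point in the latent space
image = generator(z)                           # a novel synthetic image

# A small perturbation of the latent vector; a real GAN would map it to a similar
# image with slightly altered attributes (lighting, color, position).
z_nearby = z + 0.05 * torch.randn_like(z)
variant = generator(z_nearby)
print(torch.norm(image - variant))             # nearby latent points give nearby images
```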

GAN inversion

Some approaches to GAN inversion modify the generator’s parameters in order to make the mapping between images and points in the latent space more precise. But because we intend to use our trained GAN for synthetic-data generation, we don’t want to mess with its parameters.


Instead, we train a second GAN inversion model to map images to points in the space. Initially, we train it directly on the mapping task: we generate an image from a latent vector, and the latent vector is the training target for the GAN inversion model.
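A minimal sketch of this first stage, assuming a frozen pretrained generator and a simple encoder standing in for the GAN inversion model, might look like the following:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, img_dim = 512, 64 * 64

generator = nn.Linear(latent_dim, img_dim)     # frozen, pretrained GAN generator (stand-in)
for p in generator.parameters():
    p.requires_grad_(False)

encoder = nn.Sequential(                       # the GAN inversion model
    nn.Linear(img_dim, 1024), nn.ReLU(), nn.Linear(1024, latent_dim))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

for step in range(200):
    z = torch.randn(64, latent_dim)            # ground-truth latent vectors
    with torch.no_grad():
        imgs = generator(z)                    # images generated from those vectors
    z_hat = encoder(imgs)                      # predicted latent vectors
    loss = F.mse_loss(z_hat, z)                # the latent vector is the regression target
    opt.zero_grad()
    loss.backward()
    opt.step()
```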

Next, however, we fine-tune the GAN inversion model using the learned perceptual image patch similarity (LPIPS) loss. With LPIPS, we feed two images to a trained computer vision model, typically an image recognition model, and measure the distance between the images by comparing the outputs of each layer of the model.
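In practice, one way to compute an LPIPS distance is the open-source lpips Python package; using it here is our assumption rather than a detail from the paper.

```python
import torch
import lpips                                   # pip install lpips

loss_fn = lpips.LPIPS(net="vgg")               # deep-feature backbone ("alex" also works)

img0 = torch.rand(1, 3, 256, 256) * 2 - 1      # LPIPS expects images scaled to [-1, 1]
img1 = torch.rand(1, 3, 256, 256) * 2 - 1

distance = loss_fn(img0, img1)                 # low value = perceptually similar images
print(distance.item())
```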

We then optimize the GAN inversion model to minimize the LPIPS difference between the images produced by the ground truth latent vector and the latent vector estimated by our model. In other words, we ensure that, when the model fails to predict the exact latent vector for an input image, it still predicts the latent vector for a similar image. The assumption is that in this case, the label for the true image will also apply to the similar image; this helps ensure label accuracy when we train our image labeler.

LPIPS optimization is extremely nonconvex, meaning that the multidimensional map of model parameters against LPIPS measures has many peaks and valleys, making global optimization difficult. To address this, we simply cap the number of training iterations we perform to fine-tune the model. We may end up in a local minimum or halfway down a slope, but in practice this still improves model performance.
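Putting the last two ideas together, a hedged sketch of the fine-tuning stage, with the LPIPS objective described above and a hard cap on the number of iterations, might look like this; the modules are illustrative stand-ins, not the paper's architecture.

```python
import torch
import torch.nn as nn
import lpips

latent_dim, res = 512, 64

generator = nn.Sequential(nn.Linear(latent_dim, 3 * res * res), nn.Tanh())  # frozen stand-in
for p in generator.parameters():
    p.requires_grad_(False)

encoder = nn.Linear(3 * res * res, latent_dim)     # inversion model being fine-tuned
perceptual = lpips.LPIPS(net="vgg")
opt = torch.optim.Adam(encoder.parameters(), lr=1e-5)

MAX_ITERS = 500                                    # hard cap on fine-tuning iterations
for step in range(MAX_ITERS):
    z = torch.randn(8, latent_dim)                       # ground-truth latent vectors
    with torch.no_grad():
        target = generator(z).view(8, 3, res, res)       # images from those vectors
    z_hat = encoder(target.view(8, -1))                  # estimated latent vectors
    recon = generator(z_hat).view(8, 3, res, res)        # images from the estimates
    loss = perceptual(recon, target).mean()              # perceptual, not pixel, distance
    opt.zero_grad()
    loss.backward()
    opt.step()
```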

Generating labels

Once our GAN inversion model is trained, we're able to associate labeled images with particular vectors in the latent space. We can then use those images to train a model that labels GAN outputs.

We don’t want to simply train the model to label latent vectors, however: the parameters of the GAN decoder, which converts the latent vector into an image, capture a good deal of information that is only implicit in the vector. This is particularly true of StyleGAN, which passes the latent vector through a sequence of style blocks, each of which regulates a different aspect of the output image’s style.

To train our labeling model, we thus use a “hypercolumn” representation of GAN-generated images, which associates each pixel in the output image with the corresponding outputs of every style block in the GAN. This does require some upsampling of the sparser image representations in the GAN’s upper layers and some downsampling of the denser image representations of the lower layers.
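The sketch below illustrates one way such a hypercolumn representation could be built: resize each style block's feature map to the output resolution, concatenate them per pixel, and apply a per-pixel classifier. The specific shapes, layer counts, and the 1x1-convolution classifier are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

OUT_RES, NUM_CLASSES = 64, 10

# Stand-ins for the feature maps emitted by successive style blocks
# (coarse, low-resolution maps early; finer, higher-resolution maps late).
block_features = [
    torch.randn(1, 512, 4, 4),
    torch.randn(1, 256, 16, 16),
    torch.randn(1, 128, 64, 64),
]

# Resize every block's output to the image resolution so each block
# contributes a value at every output pixel.
resized = [F.interpolate(f, size=(OUT_RES, OUT_RES), mode="bilinear", align_corners=False)
           for f in block_features]
hypercolumns = torch.cat(resized, dim=1)          # (1, 512 + 256 + 128, 64, 64)

# A per-pixel classifier over hypercolumn features: a 1x1 convolution acts on
# each pixel's stacked feature vector independently.
pixel_classifier = nn.Conv2d(hypercolumns.shape[1], NUM_CLASSES, kernel_size=1)
logits = pixel_classifier(hypercolumns)           # (1, NUM_CLASSES, 64, 64)
segmentation = logits.argmax(dim=1)               # predicted label map
print(segmentation.shape)
```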

To evaluate HandsOff, we compared it to three baselines: two prior approaches to generating labeled images and one transfer learning baseline, in which we fine-tuned a pretrained object classifier to produce labeled images. We then used synthetic images generated by all four models to train computer vision models on five different tasks. Across the board, HandsOff outperformed the best baseline, sometimes by a wide margin: as much as 17% on one dataset.


