Road image augmentation with synthetic traffic signs using neural networks

Traffic sign recognition is a well-researched problem in computer vision. However, state-of-the-art methods work well only for frequent sign classes, which are well represented in training datasets. We consider the task of rare traffic sign detection and classification. We aim to solve this problem using synthetic training data, obtained by embedding synthetic images of signs into real photos. We propose three methods, based on modern generative adversarial network (GAN) architectures, for making synthetic signs consistent with a scene in appearance. They allow realistic embedding of rare traffic sign classes that are absent in the training set. We also adapt a variational autoencoder for sampling plausible locations of new traffic signs in images. We demonstrate that training on a mixture of our synthetic data with real data improves the accuracy of both a classifier and a detector.


Introduction
Modern computer vision methods are based on machine learning techniques and require labelled datasets for training. The accuracy of the trained model depends on the size and quality of the available dataset. Dataset labelling is a labor- and time-consuming process that is prone to errors. In contrast, synthetic data generation can produce virtually unlimited training datasets without annotation errors. This is why methods for generating synthetic images have been actively investigated in recent years.
In this paper, we consider the task of generating artificial data for training traffic sign recognition models. Traffic sign recognition is a significant problem that has attracted stable interest from researchers over the years. Traffic sign detection and classification are used in driver assistance systems, in self-driving cars, and for maintaining up-to-date high-resolution maps. Modern open datasets for traffic sign recognition can contain thousands of frames with two hundred classes. However, a distinctive feature of the traffic sign recognition problem is a significant number of rare classes. Objects of such classes are either present in datasets in small amounts or absent entirely. Nevertheless, it is still required to train recognition algorithms for such traffic sign classes, since rare classes on the road are no less important than frequent ones. We investigate modern methods for generating synthetic training data using neural networks. Since even state-of-the-art methods are unable to generate entire photos of a traffic scene with photo-realistic quality, we propose to embed artificial signs into real images. Two questions arise immediately: how to make the inserted object consistent in appearance with the scene, and where to position it.
We focus on the recognition of rare traffic sign classes. Since such signs are absent or scarce in the real dataset, we cannot directly train a neural network to generate images of them. Instead, we aim to create a synthetic traffic sign processing method that improves the realism of simple synthetic images obtained from the sign icon. We propose three processing methods based on generative adversarial networks [1,2,3]. We embed artificial signs into real images in place of already existing traffic signs. To do this, we first remove the existing signs via inpainting and then place synthetic signs in their places (see Figure 1b). Inpainting is done using a neural network that is trained separately or jointly with the sign processing method. This technique allows us to augment images with rare sign classes at geometrically correct placements and to evaluate the individual contribution of the object processing methods.
In the second part of our work, we adopt a method based on a variational autoencoder [4] to predict the correct location and size for synthetic traffic sign insertion. To predict plausible traffic sign placement in a frame, we first automatically obtain a semantic segmentation of the image and then sample locations using the variational autoencoder. An example of the obtained heatmap is shown in Figure 1c. After obtaining locations, we insert synthetic traffic signs in addition to the real traffic signs in a frame (see Figure 1d).
Overall, we propose three methods for processing synthetic traffic signs and a new method for placing them on real road images at geometrically correct positions. The proposed methods allow augmenting real road images with high-quality synthetic traffic signs of classes that are absent in the real training dataset. We have conducted an extensive experimental evaluation of the proposed methods. It demonstrates that using the generated data improves the quality of traffic sign detection and classification, especially for rare classes.

Synthetic image generation and processing
The augmentation of real images with new synthetic objects can be implemented in different ways. The simplest and most obvious way is to draw an object without any processing [5,6]. However, this approach leads to unrealistic images and does not yield high-quality synthetic samples. Recently, generative adversarial networks [1] have been applied to such problems. These methods perform image processing so that artificial objects match the background in colour and lighting [7,8,9]. However, the geometric position and shape of the embedded objects are still not taken into account.
The basic idea of a generative adversarial network is to have two separate parts: a generator and a discriminator. The generator creates synthetic images. The discriminator learns to distinguish generated images from real ones. These two components try to deceive each other during the training process. In [10], a convolutional architecture with transposed convolutions was proposed to increase the resolution of generated images. The use of convolutional layers made it possible to train the network faster and improve the quality. Other authors [11] used the Laplacian pyramid and several generators and discriminators. Researchers have also proposed conditional generation of an object of a given class [12]: the generator receives not only random noise at the input but also the class label of the object to generate.
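The adversarial game described above can be written down concretely: the discriminator minimizes a binary cross-entropy that labels real images 1 and generated images 0, while the generator minimizes the same loss with the labels for its fakes flipped to 1. A minimal numeric sketch (all names and the toy discriminator outputs are ours, not from the paper):

```python
import numpy as np

def bce(p, y, eps=1e-12):
    """Binary cross-entropy between predicted probabilities p and labels y."""
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()

# Suppose the discriminator outputs these probabilities of "real":
d_real = np.array([0.9, 0.8])   # on real images
d_fake = np.array([0.2, 0.3])   # on generated images

# Discriminator objective: call real images real and fakes fake.
d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

# Generator objective: make the discriminator call its fakes real.
g_loss = bce(d_fake, np.ones_like(d_fake))
```

In training, the two losses are minimized in alternation with respect to the discriminator's and generator's parameters, which is what makes the two parts "deceive each other".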
GAN models have been successfully applied to image transfer between domains. One notable example is CycleGAN [2], which does not need labelled pairs of images from the source and target domains for training. It has two generators and two discriminators. Suppose we have two image domains, A and B. The first generator learns to transfer images from A to B, and the second generator, on the contrary, from B to A. The first discriminator learns to distinguish synthetic from real images in B, and the second discriminator vice versa. During inference, only the desired generator is used.
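What makes the unpaired setup work is the cycle-consistency term: an image sent A→B→A (or B→A→B) should come back unchanged, measured with an L1 loss. A toy sketch with invertible pixel maps standing in for the two generators (the lambdas are our hypothetical stand-ins, not the real networks):

```python
import numpy as np

# Toy "generators": here domain transfer is a simple invertible pixel map,
# so the round trip is exact and the cycle loss is zero.
G_AB = lambda x: 1.0 - x          # A -> B
G_BA = lambda y: 1.0 - y          # B -> A (exact inverse of G_AB)

def cycle_consistency_loss(a, b):
    """L1 cycle loss: images should survive the round trips A->B->A and B->A->B."""
    return np.abs(G_BA(G_AB(a)) - a).mean() + np.abs(G_AB(G_BA(b)) - b).mean()

rng = np.random.default_rng(0)
a = rng.random((8, 8))            # image from domain A
b = rng.random((8, 8))            # image from domain B
loss = cycle_consistency_loss(a, b)
```

In the real model this term is added to the two adversarial losses and keeps the generators from mapping every input to one convenient target-domain image.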
The rapid progress of GANs is quite astonishing. In 2019, the StyleGAN architecture was proposed [3], which demonstrates surprisingly realistic generation of people's faces. The faces are generated from random vectors, which are first transformed by a small fully-connected part of the network into a vector in an intermediate latent space. Adaptive Instance Normalization (AdaIN) layers [13] are used in the generator to transfer information from the latent vector. Also, random noise is added in the intermediate layers of the architecture to obtain variety in the small details of individual generated images. Our proposed methods for high-quality synthetic traffic sign generation are based on this approach.
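The AdaIN layer mentioned above has a simple form: each feature map is normalized to zero mean and unit variance, then re-scaled and re-shifted by per-channel parameters derived from the style (latent) vector. A minimal numpy sketch of the operation itself (the feature tensor and scales are illustrative):

```python
import numpy as np

def adain(content, style_scale, style_bias, eps=1e-5):
    """Adaptive Instance Normalization: normalize each feature map of
    `content` over its spatial dimensions, then apply per-channel
    scale/bias derived from the style vector."""
    mu = content.mean(axis=(1, 2), keepdims=True)
    sigma = content.std(axis=(1, 2), keepdims=True)
    return style_scale * (content - mu) / (sigma + eps) + style_bias

rng = np.random.default_rng(0)
feat = rng.random((3, 16, 16))               # C x H x W feature maps
scale = np.array([2.0, 1.0, 0.5]).reshape(3, 1, 1)
bias = np.array([0.0, 1.0, -1.0]).reshape(3, 1, 1)
out = adain(feat, scale, bias)
```

After the layer, each channel's statistics match the style parameters, which is how the latent vector controls the generated image at that resolution.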
The previous methods do not predict the location of embedded objects. In [14], the authors suggested an adversarial approach for generating both synthetic object placement and processing. The proposed neural network has a branch for predicting the location and size of a new object. A simple colour correction with six predicted parameters is used for the first stage of object processing. Then a refinement network improves object consistency with the background. As usual, the architecture has a discriminator for distinguishing synthetic images, plus a segmentation network that learns to predict a mask of the artificial object.
The usage of synthetic training data for improving the accuracy of recognition models is actively investigated. In [15], the quality of person re-identification in video was improved by adding synthetic data to real data. In [16], synthetic data were added to the training set to improve the quality of liver lesion classification. In [17,18], game engines are used to generate labelled city scenes. Synthetic data made it possible to improve the quality of the final algorithm and reduce the required amount of real data by a factor of three.
The authors of [19] suggested generating synthetic road images from the GTA computer game by transferring data from one domain to another. As the target domain, they used images from the Cityscapes [20] dataset. Their approach is based on the CycleGAN architecture.

Predicting synthetic object locations
Most modern neural network architectures for changing the position and parameters of objects are based on Spatial Transformer Networks [21]. The idea of such architectures is to add a separate part of the network that generates affine transformation parameters for an added object according to the background. The resulting affine transformation is applied to a grid of pixels that specifies where each pixel of the object will be positioned. The article shows that these transformations are differentiable and can be optimized within neural networks.
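The grid construction at the heart of a Spatial Transformer can be sketched in a few lines: a 2×3 affine matrix is applied to normalized pixel coordinates to obtain the sampling locations (in a real framework this grid then drives differentiable bilinear sampling). The function below is our illustrative reimplementation, not the paper's code:

```python
import numpy as np

def affine_grid(theta, H, W):
    """Build the (H*W, 2) sampling grid used by a Spatial Transformer.
    theta is a 2x3 affine matrix acting on normalized coords in [-1, 1]."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W),
                         indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # 3 x (H*W)
    return (theta @ coords).T                                    # (H*W) x 2

identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
grid = affine_grid(identity, 4, 4)

# A horizontal shift of the sampling grid by 0.5 in normalized coordinates:
shifted = affine_grid(np.array([[1.0, 0.0, 0.5],
                                [0.0, 1.0, 0.0]]), 4, 4)
```

Because the grid is a linear function of theta, gradients flow back to the predicted placement parameters, which is the property the placement networks below rely on.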
Spatial Transformer Networks became popular for synthetic object placement. The authors of [14,22] based their approaches to generating locations of new objects on predicting affine transformations. Besides a discriminator, during training these methods relied on an additional signal such as an image segmentation network or target classifier/detector networks. The disadvantages of a spatial transformer are poor convergence, instability, and a complex training process.
The article [4] proposed a VAE-based approach for object placement in road images. The algorithm has two separate modules for determining where and what can be placed in the picture. A generator with a Spatial Transformer Network is used in the first module to determine where to place the object. The second module contains a generator for the shape of the embedded object, determining what exactly needs to be placed.

Traffic sign generation and recognition
Traffic sign recognition methods have a long history. Early approaches were based on finding corners and feature points in images [23]. The usage of synthetic training datasets has been investigated since 2007 [24]. Generation of synthetic examples for training traffic sign classifiers was implemented by applying affine transformations to sign pictograms.
In [25], a four-stage system for detection and classification of traffic signs was proposed. It included a cascade detector and a set of neural network classifiers for each type of traffic sign. This model was trained with the synthetic data proposed in [26]. The suggested approach to generating synthetic data used heuristic methods based on computer graphics, but it also tried to predict the best parameters for the current data. The paper showed that models trained on synthetically generated data could produce good results.
Since the introduction of modern deep learning methods, they have been applied to traffic sign recognition. In [27], the authors first collected a massive training dataset and then proposed a simple fully convolutional neural network architecture for simultaneous detection and classification of traffic signs.
In [28], synthetic traffic signs with poles were generated by computer graphics methods and then improved with a neural network based on CycleGAN. To preserve the traffic sign class during processing, an additional identity loss was used while training this model. The artificial traffic signs were then embedded into real photos. Simple heuristics based on reconstruction of camera parameters and simple 3D modelling were used to determine the locations of new signs. Experimental evaluation showed that this approach works better than random object placement. However, the best results were obtained when new artificial traffic signs replaced existing ones.
Currently, the best results in traffic sign recognition are achieved by adapting modern detection architectures. For example, in [29,30] anchor-based methods have been used, with specific optimizations for speed. In [29], the authors used ResNet-50 as the backbone to build a feature pyramid network. In [30], a MobileNet backbone with a suggested localization network is used.
Conventional convolutional neural networks, such as AlexNet [31] and ResNet [32], can be used for traffic sign classification. In 2019, the article [33] proposed applying a special pre-processing procedure to traffic signs before classification. The processed signs are then fed to the input of a small neural network based on LeNet.
Different convolutional architectures were tested on the traffic sign classification problem in [34]. The authors showed that CNN architectures are well suited for this task and drew attention to the lack of real data. They concluded that techniques like image pre-processing and data augmentation are useful for improving classification accuracy.
Another approach aimed at the problem of rare traffic sign classification was proposed in 2020 [35]. The authors use WideResNet [36], trained with a contrastive loss, for feature extraction. A discriminator is used to distinguish rare from frequent classes. The authors propose to classify frequent classes using features from the last layer of the neural network, and rare classes using the nearest neighbour method.

Proposed methods
We explored two different ways to embed traffic signs in images:
• Replacement of existing real traffic signs with artificial ones. In this case, we use inpainting at the place of the real sign to generate a plausible background; an artificial traffic sign is then embedded on top of it. This way of generating synthetic data allows extending the training set with new examples of rare classes at geometrically correct positions. The article [28] showed that this approach improves the quality of neural networks for classification and detection. It also allows us to better evaluate the individual contribution of the proposed processing methods to the quality of the target networks. For inpainting, we used the EdgeConnect [37] architecture.
• Embedding additional artificial signs in new positions. In this case, we need to learn how to find the most suitable positions for new traffic signs first, and then perform their processing. To find correct positions for new traffic signs, a neural network architecture based on [4] was chosen.

Processing of embedded traffic signs
In both cases, we need to process artificial signs to improve their visual consistency with the background. We propose three models for this task. The first two are trained together with the inpainting network, and the third is trained separately. The first two models are based on the ideas of CycleGAN: the network performs a transfer from the domain of artificial signs to the domain of real ones. The third model is fundamentally different and is inspired by the ideas of StyleGAN: the neural network itself learns to generate traffic sign icons that are consistent with the background.
All proposed approaches take into account the image context around the embedded sign. This is the main difference from existing methods for artificial traffic sign processing.

First approach ("Pasted")
In this approach, we jointly train neural networks for inpainting and for processing the embedded traffic sign. We use EdgeConnect [37] as the basis for the inpainting architecture, removing the part of the model responsible for object boundary generation. The whole proposed architecture consists of two generators and two discriminators.
The first generator receives an input image patch of 128 × 128 pixels with a mask of a removed part in the middle. The output of the first generator inpaints the removed part. The first discriminator receives either the inpainted patch or the original patch without the removed part, and learns to distinguish real patches from generated ones. During training, this patch is cut out from random places in source pictures, and a random rectangle is removed exactly in the middle so that each side of the removed part is at most 64 pixels.
The icon of a traffic sign is then embedded in the middle of the output of the first generator, so that the icon's maximum side is 64 pixels minus a small random number. This patch with the icon is fed to the input of the second generator, which should improve the visual quality of the fragment. The second discriminator receives either the output of the second generator or a real patch with a traffic sign, and learns to distinguish them from each other.
Both generators inevitably change the background in the untouched part of the image, so the pixels around the cut-out region are restored using the mask of the removed part.
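This restoration step is a straightforward mask composite: keep the generator's output only inside the removed region and copy the original pixels everywhere else. A minimal sketch (the sizes match the 128 × 128 patch with a central rectangle; the function name is ours):

```python
import numpy as np

def restore_outside_mask(generated, original, mask):
    """Keep the generator's output only inside the removed region
    (mask == 1); copy the untouched original pixels everywhere else."""
    return mask * generated + (1.0 - mask) * original

rng = np.random.default_rng(0)
original = np.full((128, 128), 0.5)          # untouched source patch
generated = rng.random((128, 128))           # generator output (changed everywhere)
mask = np.zeros((128, 128))
mask[32:96, 32:96] = 1.0                     # removed rectangle in the middle
out = restore_outside_mask(generated, original, mask)
```

The composite guarantees that only the inpainted rectangle differs from the source image, so artefacts outside the mask cannot leak into the training data.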
We chose cross-entropy as the adversarial loss function of both discriminators. Additionally, the first generator has an L1 loss, a perception loss, and a style loss [38] between the patch inpainted by the first generator and the correct image. For the second generator, there is an L1 loss on the background around the sign so that it does not change. A perception loss was also added between the input and output of the second generator, and a style loss between the output of the second generator and the output of the first generator before embedding the sign icon.
The architecture diagrams of the neural networks are shown in Figure 2. During inference and generation of the synthetic dataset, patches with real traffic signs are cut out of the image and replaced with patches with embedded artificial signs.

Second approach ("Cycled")
In the first approach, real traffic signs in patches are used only by the second discriminator. That means the first generator inpaints patches for which the correct background is known in advance, while the second generator embeds the sign when the true result of embedding is unknown. We decided to add a second data stream to the training process, whose input is a patch in which a real sign was previously located but has been cut out. For this second stream, the first generator inpaints the cut-out part of the patch; here, unlike in the first stream, the true output of the first generator is unknown. Then the icon of a sign of the same class as the one in the real patch is embedded, and this icon is processed by the second generator. As a result, the entire neural network should ideally produce a picture identical to the original one.
In addition, an L1 loss, a perception loss, and a style loss between the outputs of the second generator and the real image were added as loss functions for the second data stream. We also added an L1 loss between the input and output of the first generator around the cut-out rectangle in the patch.
The architecture of the neural network itself does not differ from the first approach. As in the first approach, cross-entropy is used as the adversarial loss function of the discriminators in the second data stream.
The scheme of the second data stream is shown in Figure 3.

Third approach ("Styled")
The two previous models already showed good quality, but we decided to use a more advanced generator to push the quality further. Both previous models combine two neural networks, for inpainting and for processing images, which are trained simultaneously. In this method, we train the two parts separately. As the neural network for inpainting, we use an architecture similar to the previous approaches, based on EdgeConnect.
Let us consider in more detail the second neural network, which processes embedded signs in patches. We use StyleGAN [3] as the basis for this model. It does not process an icon that is already embedded in the background; instead, it generates a traffic sign consistent with the background. To achieve this, we made several significant changes to StyleGAN:
• Instead of generating a feature vector from random noise with a fully-connected network, as in the original, we propose to use two convolutional subnetworks. The first convolutional subnetwork takes as input an image of a 64 × 64 icon embedded in a 128 × 128 background patch where the real traffic sign used to be located, and converts it to a vector of length 548. The second convolutional subnetwork receives the background patch without a sign, resized to 64 × 64 (the real size is 128 × 128), and outputs a vector of length 64. The two resulting vectors are concatenated into the vector v_desc of length 612.
• A simple two-layer classifier has been added which, using the vector v_desc, tries to determine the class of the traffic sign out of 205 possible. This classifier improved the quality of the generated images. We believe this happens because it regularizes the neural network so that it encodes exactly the properties related to the class of the sign, not its appearance.
• The process of generating a sign does not begin with a trained constant 4 × 4 activation map, but with a map obtained from v_desc via one fully-connected layer.
• An additional second discriminator is added, which distinguishes a synthetic sign embedded into the background patch from a real one.
• As in the original StyleGAN, all parts of the neural network are first trained on small 8 × 8 pictures; then layers of the generators and discriminators are gradually switched on, up to a 64 × 64 sign icon located in the center of a 128 × 128 background.
WGAN-GP was used as the adversarial loss function in both discriminators. Also, using a VGG13 network, we added a perception loss between the output of the neural network and the icon embedded into the background without processing. Additionally, we used a perception loss with a small weight between the background itself and the output of the network with a sign; we observed that this increases the realism of the generated images.
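The gradient-penalty term in WGAN-GP pushes the norm of the critic's gradient, evaluated at points interpolated between real and fake samples, towards 1. A toy numeric sketch with a linear critic, whose gradient is constant and therefore easy to write down analytically (the critic and sample values are illustrative, not from the paper):

```python
import numpy as np

def gradient_penalty(grad_norms):
    """WGAN-GP term: penalize deviation of the critic's gradient norm
    from 1 at interpolated points (soft Lipschitz constraint)."""
    return ((grad_norms - 1.0) ** 2).mean()

# Toy linear critic D(x) = w . x, whose gradient is w everywhere.
w = np.array([0.6, 0.8])                      # ||w|| = 1, so no penalty
real = np.array([1.0, 2.0])
fake = np.array([3.0, 0.0])
eps = 0.3
x_hat = eps * real + (1 - eps) * fake         # interpolated sample
grad_at_x_hat = w                             # analytic gradient of D at x_hat
gp = gradient_penalty(np.array([np.linalg.norm(grad_at_x_hat)]))
```

In a real implementation the gradient at x_hat is obtained by automatic differentiation, and the penalty is added to the Wasserstein critic loss with a weight (commonly 10).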
During training, synthetic traffic signs are located in places where real traffic signs used to be. That is why we previously performed inpainting of the real signs.
The processing scheme of traffic signs in the third model is shown in Figure 4. The proposed method generates images of good quality and outperformed the two previous models in experimental evaluations.

Location of embedded traffic signs in real road images
We have also examined the geometric positioning of traffic signs in the image. We train a neural network that finds appropriate places for additional traffic signs in road images.
As a baseline for the placement of additional traffic signs, we used sampling with kernel density estimation from the distribution of existing labelled sign positions. This approach was fitted on the real labelled training samples and does not take into account the features of each particular image. We built three different distributions: to sample the coordinates of signs in the image, their sizes, and the number of signs in the current image.
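Sampling from a Gaussian KDE has a simple form: pick a training observation uniformly at random and perturb it with Gaussian noise of the chosen bandwidth. A minimal sketch of the baseline idea (the positions array and bandwidth are hypothetical illustrative values, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

def kde_sample(data, bandwidth, n, rng=rng):
    """Draw n samples from a Gaussian kernel density estimate fitted to
    `data` (rows are observations): choose a training point at random,
    then perturb it with Gaussian noise of the given bandwidth."""
    idx = rng.integers(0, len(data), size=n)
    return data[idx] + rng.normal(scale=bandwidth, size=(n, data.shape[1]))

# Hypothetical (x, y) centres of labelled signs in normalized image coordinates.
positions = np.array([[0.80, 0.30], [0.85, 0.25], [0.10, 0.35], [0.82, 0.40]])
new_positions = kde_sample(positions, bandwidth=0.05, n=10)
```

Separate estimates of this kind can be built for coordinates, sizes, and the per-image sign count, which matches the three distributions described above.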
Next, we tried the approach from [4]. In our work, we used only the 'where' module, while the 'what' module was disabled. We did not find any previous papers where this type of neural network was used for traffic sign placement.
For a given image, the model tries to predict the correct distribution of sizes and locations of object instances. It is a generative adversarial network in which the generator takes a semantic segmentation of the image and a random vector as input. As output, it returns the parameters of an affine transformation without rotation, defining an appropriate bounding box for a new sign.
This architecture has two discriminators. The first, D1, learns to differentiate real from generated affine parameters for the current image. The second, D2, learns to distinguish whether a new bounding box is consistent with the input semantic map. Cross-entropy is used as the adversarial loss for both. During training, this module has two paths, unsupervised and supervised. The unsupervised path has only the second discriminator D2, while the supervised path has both D1 and D2.
For the unsupervised path, the architecture has an input reconstruction loss, which aims to reconstruct the input semantic map and the random vector from the intrinsic representation of the STN subnetwork using an L1 loss. This helps to ensure that the encoded representation retains significant information from the input data, and partially solves the problem of the model collapsing to a few modes and not covering the entire distribution.
In the supervised path, we already have one of the real positions of traffic signs, and this information should be conveyed to the architecture. To achieve this, the network has an additional submodule that encodes the real affine transformation into the input vector (instead of a random one), and the output transformation should be the same as the input. A Kullback-Leibler divergence term in the loss helps this submodule learn the correct distribution for the encoding. D1 tries to distinguish the synthetic parameters. This path also helps the positions determined by the transform to become more diverse.
This neural network for determining object locations relies on semantic maps. Since the RTSD dataset does not have semantic segmentation, we first conducted experiments in which RGB road images were fed to the input of the neural network. With such training, we were not able to achieve acceptable quality, and the generated distributions collapsed into a degenerate one, with all new signs located in the same place in every image.
To solve the problem of the missing RTSD semantic segmentation, we applied to our dataset a semantic segmentation model trained on the Cityscapes dataset. We used the pre-trained 'Fast Semantic Segmentation' method [39], which generates plausible semantic segmentation. We used the obtained semantic maps to train the 'where' module of the neural network for object placement.
After that, we used the trained neural network to sample the locations and sizes of new traffic signs. When generating them, we made sure that the new examples did not overlap. The number of traffic signs for each image was determined using Gaussian kernel density estimation. The full pipeline of the new traffic sign generation process is shown in Figure 5.

RTSD Dataset
As real data, we took the Russian traffic sign dataset RTSD [40]. It consists of 205 classes, of which 99 are found only in the test set and are completely absent from the training set, while 106 classes are present in the training set. Also, for all 205 classes, we had high-resolution icons of traffic signs with their masks.
We compared our proposed approach for embedding synthetic objects in pictures with already existing methods for traffic sign processing [28]:
• Synt: simple synthetic data obtained by embedding signs on the background and applying transformations with random parameters to the icon: rotation, shift, contrast change, Gaussian blur, motion blur.
• CGI: samples obtained by rendering three-dimensional models of traffic signs on poles in real road images.
• CGI-GAN: samples in which traffic signs from the CGI collection are improved using CycleGAN.
• Inpaint: simple synthetic data for the detector, in which an icon of a traffic sign is drawn in the image without any processing.

Generated data sets
To begin with, we conducted experiments in which synthetic traffic signs were embedded in places where real ones already existed. This secured the correct geometric placement of the synthetic signs. For the detector, the number of images and signs in the synthetic set is the same as in the real dataset. For the classifier, the number of samples is the same as for the previously existing synthetic datasets. Let us introduce abbreviations for the proposed three models:
• Pasted: results of the first approach.
• Cycled: results of the second approach.
• Styled: results of the third approach.
Then we generated a synthetic set for the detector in which new places for traffic signs were determined either using kernel density estimation (KDE) or using the special neural network (NN). In these sets, the processing of synthetic signs was done only by the third, Styled, approach, which performed best for rare traffic signs. We conducted several experiments with this method; let us introduce abbreviations for the resulting sets:
• KDE-additional and NN-additional: new traffic signs are placed in addition to existing ones.
• KDE-only-synt and NN-only-synt: real signs are removed via inpainting, and synthetic signs are then placed in new locations. As a result, there are no signs at the previously existing places in such pictures.
• KDE-manystyled and NN-manystyled: real signs are removed via inpainting, and synthetic signs are placed both in new locations and at the places of the real ones.
The number of images in each set was the same as in the training set, because each training image was augmented exactly once.

Traffic sign recognition system
As the object detector, we use PVANet [41], which is based on the Faster R-CNN approach. We evaluated detection output on a test set before and after applying the classifier. The area under the curve (AUC) was used to measure detector quality.
As traffic sign classifiers, we chose two models based on WideResNet [36]. The first is a simple classifier with the WideResNet architecture. It takes an image of 64 × 64 pixels and predicts one of the 205 sign classes. On the features extracted by this neural network, we trained a simple k-NN classifier; it operates on an index that consists of synthetic examples of traffic signs. The second method is designed specifically to handle rare traffic sign classes and is proposed in [35]. In this method, rare and frequent classes are treated differently. First, WideResNet features are extracted at the penultimate layer of the neural network. These features are then used by a Random Forest to classify whether a sign is rare or frequent. Frequent signs are classified with a Softmax layer on top of WideResNet, while rare classes are passed to a k-NN classifier. This classifier shows better quality than the first one [35]. To measure quality, we first calculated overall accuracy on the test set. In the same tables, separately for rare and frequent classes, we calculated the micro-averaged recall (formula 1), as it is important for us to understand how many of the available signs we find: Recall_micro = Σ_i TP_i / Σ_i (TP_i + FN_i). Here M is the number of classes, and for class i we define TP_i, FN_i, and FP_i as the number of true positives, false negatives, and false positives respectively.
Next, we compared the macro-averaged Precision (formula 2), Recall (formula 3), and F1 (formula 4) measures for all classes and separately for rare and frequent classes: Precision_macro = (1/M) Σ_i TP_i / (TP_i + FP_i), Recall_macro = (1/M) Σ_i TP_i / (TP_i + FN_i), and F1_macro = (1/M) Σ_i 2 P_i R_i / (P_i + R_i), where P_i and R_i are the per-class precision and recall.
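The distinction between the two averaging schemes is that micro-averaging pools the counts over classes before dividing, while macro-averaging computes the metric per class and then averages, giving rare classes equal weight. A small sketch with hypothetical per-class counts (the numbers are ours, for illustration only):

```python
import numpy as np

def micro_recall(tp, fn):
    """Micro-averaged recall (formula 1): pool the counts over classes first."""
    return tp.sum() / (tp.sum() + fn.sum())

def macro_prf1(tp, fp, fn):
    """Macro-averaged precision, recall, and F1 (formulas 2-4):
    compute each metric per class, then average over the M classes."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision.mean(), recall.mean(), f1.mean()

# Hypothetical counts for M = 3 classes (one frequent, two rare).
tp = np.array([90.0, 10.0, 5.0])
fp = np.array([10.0, 0.0, 5.0])
fn = np.array([10.0, 10.0, 0.0])

mr = micro_recall(tp, fn)          # dominated by the frequent class
p, r, f1 = macro_prf1(tp, fp, fn)  # every class weighted equally
```

Because macro-averaging weights every class equally, it is the more sensitive indicator for the rare-class improvements reported below.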

Evaluation results
During our experiments, we trained neural networks for classification and detection both on mixtures of real and synthetic data and on synthetic data alone. A comparison of traffic sign examples for the classifier can be seen in Figure 6; examples of road images with synthetic signs are in Figure 7.

Classifier results
Here we describe the results of the experiments with classifiers. We compared the proposed approach with the best previous method, which uses the CGI-GAN synthetic data [35]. Table 3 shows measurements of a simple WideResNet classifier trained on a mixture of real and synthetic samples, with a k-NN index trained on its features. For all classes we measured accuracy, while separately for rare and frequent classes we measured micro-averaged recall. Table 4 shows measurements of a simple WideResNet classifier trained only on synthetic samples. Table 5 summarizes measurements of the improved WideResNet classifier trained on a mixture of real and synthetic samples. Table 6 shows metrics of the improved WideResNet classifier trained only on synthetic samples. Table 8 shows macro-averaged Precision, Recall, and F1 measures for WideResNet classifiers trained only on synthetic samples, and Table 7 for classifiers trained on a mixture of real and synthetic samples.
The best results are highlighted in the tables. The obtained values show that the Cycled and Styled approaches compete in terms of quality for the target classifiers; which data is better depends on the specific task in which the target classifiers will be used. However, 94.11% is the best accuracy value, obtained by classifying both rare and frequent classes with the improved classifier (Table 5). It was achieved by training on the Styled data and classifying with an index built from the corresponding synthetic data. Previously, the best quality was 93.52%. The micro-averaged recall of rare traffic signs has also greatly improved, from 70.16 to 76.33.

Table 3: Simple WideResNet classifier trained on a mixture of real and synthetic samples with a k-NN index on its features.
For the simple classifier, the best accuracy value is 92.82% (which is less than 94.11% for the improved one). Macro-averaged precision, recall, and F1 are also better with the improved classifier than with the usual one. This once again confirms the assumption of the previous article [35].
It is also seen that the proposed synthetic data significantly improve classification quality when training only on synthetic data. Previously, the best accuracy was 60.55%, and now it is 73.03% (Table 6). We therefore conclude that using the proposed synthetic samples during training improves the quality of the WideResNet classifier.

Detector results
Next, we present the results of experiments with a detector.

Figure 1: Example of a fragment with six traffic signs. On one fragment, the real signs are replaced with new synthetic ones; on another fragment, new signs are embedded.

Figure 2: The architecture of the first approach.

Figure 3: Additional data stream in the second approach.

Figure 4: Architecture of the generator for processing in the third approach.
Of the 205 classes, 99 are found only in the test set and are completely absent from the training set, and 106 classes are present in the training set. The dataset contains data for training traffic sign detectors and classifiers. Train and test data statistics can be found in Tables 1 and 2.

Figure 5: The full pipeline of the proposed traffic sign placement method.

Figure 7: Comparison of real images; images from the Styled set, where real signs are replaced with synthetic ones; and images from the NN-additional set, where new traffic signs are placed in addition to existing ones.

Table 1: Statistics of the RTSD dataset for the detection task.

Table 2: Statistics of the RTSD dataset for the classification task.

Table 7 also shows that we were able to improve the macro-averaged precision, recall, and F1 for the simple and improved WideResNet classifiers using the proposed synthetic data, in comparison with CGI-GAN. The best results were shown by the Cycled and Styled methods. For all classes, the F1 measure has grown from 72.38 to 76.24 with Styled; for rare classes, from 52.07 to 58.73 with Cycled; and for frequent ones, from 91.34 to 92.89 with Styled.
Figure 6: Comparison of different types of synthetic traffic signs.

Table 4: Simple WideResNet classifier trained only on synthetic samples.

Table 5: Improved WideResNet classifier trained on a mixture of real and synthetic samples.

Table 9 shows AUC values for a detector trained on a mixture of real and synthetic samples.