In ordinary day-to-day behaviour, humans identify the intentions of others by drawing on world knowledge accumulated throughout their lifetime. In contrast, an intelligent surveillance system (Hu et al. (2017)) has very limited world knowledge, which makes such inferences very difficult. Eyes and their movements can express feelings and desires, reveal human attention, and play an important role in social communication. Gaze estimation therefore becomes an effective means of guiding an intelligent surveillance system to recognize personal intention (Wang et al. (2016b); Feng et al. (2018)). Via gaze estimation technology we can capture people's attention priorities and, furthermore, support judgements about criminal intent.
Admittedly, in recent years gaze estimation has been able to meet the needs of practical deployment scenarios such as intelligent surveillance, given training on a large amount of data. However, collecting and annotating such data carries a high cost in time and money, so solutions are required to reduce it. A natural choice is synthetic imagery, because annotations are available automatically. However, learning from synthetic images can be misleading owing to the gap between synthetic and real image distributions: synthetic data is not a copy of reality, and the details it renders can confuse the network and cause it to fail at the task.
One solution is to improve the simulator. But increasing realism is computationally expensive, designing a renderer is a heavy workload, and even the best renderer may fail to model all the characteristics of real images; this can make the model overfit to the "unreal" details of the synthetic image. The other solution is to adjust the distribution of synthetic images to bring them closer to real pictures. The state-of-the-art method of this kind is Shrivastava et al. (2016), which adopts a neural network model similar to Generative Adversarial Networks (GANs). GANs were originally used to train computers to generate plausible images: concretely, a generator network producing synthetic images competes against a dataset of real pictures, with a separate discriminator network distinguishing the two. On the basis of GANs, Shrivastava et al. make significant modifications to the model: they input synthetic images instead of random vectors and ultimately propose a learning paradigm called Simulated+Unsupervised learning.
Beyond the new learning paradigm, the contribution of that work to computer vision includes successfully training a refinement network (Refiner) without any manual annotation, making computer-generated synthetic images more realistic. However, the disadvantage of the method is that distortion is not reduced and the level of realism is not stable. To solve this problem, we put forward a new structure that improves synthetic images by borrowing the idea of style transfer, efficiently reducing image distortion and minimizing the need for real data annotation. Like the general GAN structure, our framework includes a generation network G and a discrimination network D. We improve the structure of the image generation part and change the input from a random vector to a combination of the real image distribution content and the simulated picture. This makes the generation more stable, avoiding the randomness of the distribution, and reaches a stable distribution in a short time. We also modify the loss evaluation of the discrimination network and add regularization terms to ensure the authenticity of the pictures.
In summary, our contributions are three-fold:
1. We propose a new structure that improves synthetic images by borrowing the idea of style transfer, efficiently reducing image distortion and minimizing the need for real data annotation.
2. We improve the structure of the image generation part, modify the loss evaluation of the discrimination network, and add regularization terms to ensure the authenticity of the pictures. This makes the generation more stable, avoiding the randomness of the distribution.
3. We perform experiments to verify that the proposed structure can steadily generate highly realistic images, both qualitatively and through user studies. Meanwhile, a gaze estimation model trained on the produced images is used to evaluate them quantitatively. Compared with using the raw synthetic images, we achieve the best results on multiple datasets.
2 Related Works
The most prominent contemporary approach to refining synthetic images (i.e., changing their distribution) is based on generative adversarial networks (GANs). The GAN framework learns a generator and a discriminator with competing losses. The goal of the generator is to map a random vector to a realistic image, whereas the goal of the discriminator is to distinguish generated from real images. In the original work of Goodfellow et al. (2014), GANs were used to generate visually realistic images. Since then, many improvements have been proposed to increase the realism of synthetic images. Wang and Gupta (2016) used a Structure GAN to learn surface normals and then combined it with a Style GAN to generate natural indoor scenes. Dosovitskiy and Brox (2016) introduced a family of composite loss functions for image synthesis, which combine regression over the activations of a fixed perceiver network with a GAN loss (Goodfellow et al. (2014)). Wang et al. (2015c) trained a stacked convolutional auto-encoder on synthetic and real data to learn the low-level representations of their font detector ConvNet.
Wang et al. proposed an iterative structured low-rank optimization method for multi-view spectral clustering (Wang et al. (2016c); Wang et al. (2018a)) as well as a collaborative deep network for robust landmark retrieval (Wang et al. (2017)). Wu et al. (2018a) proposed a principled deep feature embedding approach for person re-identification and presented a novel deep attention-based spatially recursive model for fine-grained visual recognition (Wu et al. (2018c,d,b)).
Most relevant to our work is Shrivastava et al. (2016), which proposes Simulated+Unsupervised (S+U) learning, where the task is to learn a model that improves the realism of a simulator's output using unlabeled real data while preserving the annotation information from the simulator. They develop a method for S+U learning that uses an adversarial network similar to GANs, but with synthetic images as inputs instead of random vectors. Like Shrivastava et al. (2016), we also use a GAN-like adversarial network to refine the synthetic images, but we improve the structure of the image generation part and change the input from a random vector to a combination of the real image distribution content and the simulated picture. This makes the generation more stable, avoiding the randomness of the distribution, and reaches a stable distribution in a short time. Besides that, we modify the loss evaluation of the discrimination network and add regularization terms to ensure the authenticity of the pictures.
Style transfer algorithms are another way to change the distribution of images. Global style transfer algorithms process an image by applying a spatially invariant transfer function. These methods are effective and can handle simple styles like global color shifts (e.g., sepia) and tone curves (e.g., high or low contrast). For instance, Reinhard et al. match the means and standard deviations between the input and the reference style image after converting them into a decorrelated color space. Local style transfer algorithms based on spatial color mappings are more expressive and can handle a broader class of applications. For instance, Dosovitskiy et al. (2017) train a ConvNet to generate images of 3D models given a model ID and viewpoint; the network thus acts directly as a rendering engine for the 3D model. The pix2pix framework of Isola et al. (2017) uses a conditional GAN to learn a mapping from input to output images, and similar ideas have been applied to tasks such as generating photographs from sketches or from attributes and semantic layouts (Karacan et al. (2016)). Unlike this earlier work, our approach improves synthetic images by borrowing the idea of style transfer, efficiently reducing image distortion and minimizing the need for real data annotation.
3 Proposed Method
Our proposed network (Fig.1) takes two images together with their masks: a reference style image, i.e., a naturalistic eye image drawn from driving-environment video or from a naturalistic eye image dataset, and an input image from the synthetic image dataset, which is to be stylized and refined. We use the refined output to train the gaze estimator, so we seek to transfer the style of the reference onto the input while keeping the content and spatial information, which are crucial for appearance-based gaze estimation. The proposed network can be divided into four parts: a coarse segmentation network, a feature extraction network, a generator, and a discriminator.
We train a semantic segmentation network, built upon an efficient redesign of convolutional blocks with residual connections, to segment the naturalistic image in line with the needs of gaze estimation. One of the great benefits of synthetic data is that its semantic information is clear, so the challenge lies mainly in segmenting the naturalistic image. Residual connections avoid the degradation problem that comes with a large number of stacked layers. Our architecture is fully depicted in Fig.2.
As is well known, a residual block consists of many stacked residual units, and each unit can be expressed in the general form

$$y_l = h(x_l) + \mathcal{F}(x_l, W_l), \qquad x_{l+1} = f(y_l),$$

where $x_l$ and $x_{l+1}$ are the input and output of the $l$-th unit and $\mathcal{F}$ is a residual function. In the original formulation, $h(x_l) = x_l$ is an identity mapping and $f$ is a ReLU function. We modify the residual network structure to make the association between features stronger. Furthermore, to simplify the task, we mark only two kinds of semantic information on the naturalistic image: the pupil and the iris. However, many naturalistic images are affected by illumination and other factors, and sometimes the pupil and the iris cannot be completely separated. To avoid "orphan semantic labels" that are present only in the input image (these orphan labels are usually the pupil region, because of outdoor illumination effects), we constrain the pupil semantic region to be placed at the center of the iris region. We have also observed that the segmentation does not need to be pixel-accurate, since the output is eventually constrained by the feature extraction network.
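The pupil-centering constraint above can be sketched as follows; the helper name, the circular pupil model, and the fixed radius are illustrative assumptions rather than the exact released procedure:

```python
import numpy as np

def center_pupil_in_iris(iris_mask, pupil_radius):
    """Place a circular pupil label at the centroid of the iris region,
    avoiding 'orphan' pupil labels caused by outdoor illumination."""
    ys, xs = np.nonzero(iris_mask)
    if len(ys) == 0:
        return np.zeros_like(iris_mask)  # no iris found: no pupil label
    cy, cx = ys.mean(), xs.mean()        # centroid of the iris region
    h, w = iris_mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    pupil = ((yy - cy) ** 2 + (xx - cx) ** 2) <= pupil_radius ** 2
    # keep the pupil label strictly inside the iris region
    return (pupil & (iris_mask > 0)).astype(iris_mask.dtype)
```

Because the pupil is derived from the iris centroid, an illumination-induced hole in the pupil label can never survive as an orphan region.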
3.1 Feature Extraction network
The architecture of the feature extraction network is shown in Fig.3. The network has an encoder-decoder structure with skip connections. To ensure the features are consistent within each instance, we add an instance-wise average pooling layer to the output of the encoder to compute the average feature for the instance. The decoder uses this representation to synthesize progressively finer feature maps.
Our encoder is based on VGG-19. The network consists of five modules, and each module contains a number of convolutional layers with layer normalization, ReLU, and average pooling. The first module has two convolutional layers, while each of the other modules has three.
Our decoder is based on the cascaded refinement network (CRN). The network is a cascade of refinement modules. Each refinement module contains two convolutional layers with layer normalization and Leaky ReLU.
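A minimal sketch of one encoder module (convolution, layer normalization, ReLU, then average pooling) is given below; for brevity the convolution is reduced to a toy 1x1 channel-mixing operation, and all names are illustrative assumptions:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize a (C, H, W) feature map over all of its elements."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def avg_pool2x2(x):
    """2x2 average pooling with stride 2 on a (C, H, W) feature map."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def encoder_module(x, n_convs, kernels):
    """One VGG-style encoder module: n_convs rounds of
    (conv -> layer norm -> ReLU), followed by average pooling.
    The 'conv' here is a toy 1x1 channel-mixing convolution."""
    for i in range(n_convs):
        x = np.tensordot(kernels[i], x, axes=([1], [0]))  # 1x1 convolution
        x = np.maximum(layer_norm(x), 0.0)                # layer norm + ReLU
    return avg_pool2x2(x)
```

Each module halves the spatial resolution, so five such modules reduce the input by a factor of 32, matching the VGG-19 layout described above.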
3.2 Generator G
We decompose the generator into two subnetworks, G1 and G2. We term G1 the global generator network and G2 the local enhancer network; the generator is then given by the tuple {G1, G2}, as visualized in Fig.5. The global generator network operates at a coarser resolution, and the local enhancer network outputs an image whose semantic layout is the output of the preceding semantic segmentation network (Deng et al. (2018)).
Our global generator is built on the architecture proposed by Johnson et al. (2016), which has proven successful for neural style transfer on images. It consists of three components: a convolutional front-end G1(F), a set of residual blocks G1(R), and a transposed convolutional back-end G1(B).
The local enhancer network also consists of three components: a convolutional front-end G2(F), a set of residual blocks G2(R), and a transposed convolutional back-end G2(B). Unlike in the global generator network, a semantic label map is passed through the three components sequentially to output an image with instance segmentation information, and the input to the residual blocks G2(R) is the element-wise sum of two feature maps: the output feature map of G2(F) and the last feature map of the back-end of the global generator network G1(B). This helps integrate the global information from G1 into G2.
During training, we first train the global generator and then train the local enhancer in the order of their scale. We then jointly fine-tune all the networks together. We use this generator design to effectively aggregate global and local information for the image synthesis task.
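The element-wise fusion that injects G1's global context into G2 can be sketched as follows; the nearest-neighbour upsampling step and all names are illustrative assumptions about how the two resolutions are matched:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour upsampling of a (C, H, W) feature map, so the
    coarse G1 back-end output can match the G2 front-end resolution."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fuse_global_local(g2_front_feat, g1_back_feat):
    """Element-wise sum injecting global context from G1(B) into the
    local enhancer before its residual blocks G2(R)."""
    assert g2_front_feat.shape == g1_back_feat.shape, "feature maps must match"
    return g2_front_feat + g1_back_feat
```

Because the fusion is a plain sum, gradients from the local enhancer flow back into the global generator during the joint fine-tuning stage.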
3.3 Discriminator D
Realistic image synthesis poses a great challenge to GAN discriminator design. To differentiate the distributions of real and synthesized images, the discriminator needs a large receptive field with instance segmentation information at both global and local scales. This would require either a deeper network or larger convolutional kernels. Both choices increase network capacity, making overfitting more of a concern, and both require a larger memory footprint for training, which is already a scarce resource in realistic image generation. Inspired by style transfer, we therefore propose a discriminator D with a novel loss function, built on a pretrained VGG-19 network, with some key modifications to the standard perceptual losses so as to preserve, to the fullest extent, the distribution of the naturalistic images and the content of the synthetic images. As Fig.5 shows, instead of considering only the RGB color channels, our network utilizes representations of both color and semantic features for style transfer. With the semantic features, we can address the spatial arrangement information and prevent the spatial configuration of the image from being disrupted by the style transformation.
Feature Gram matrices are effective at representing texture because they capture global statistics across the image through spatial averaging. Since textures are static, averaging over positions is required, but it also makes Gram matrices fully blind to the global arrangement of objects inside the reference real image. So if we want to keep the global arrangement of objects and make the Gram matrices controllable enough to be computed over exact regions of the entire image, we need to add some spatial texture information to the image. Luan et al. (2017) present a method that adds masks to the input image as additional channels and augments the neural style algorithm by concatenating the segmentation channels. Inspired by this, the mask is added as the texture information we need to compute over exact regions of the entire image, and thus the style loss at layer $\ell$ can be denoted as:

$$\mathcal{L}_s^{\ell} = \sum_{c=1}^{C} \frac{1}{2N_{\ell,c}^2} \sum_{ij} \left( G_{\ell,c}[O] - G_{\ell,c}[S] \right)_{ij}^2, \qquad G_{\ell,c}[\cdot] = F_{\ell,c}[\cdot]\, F_{\ell,c}[\cdot]^{\top}, \qquad F_{\ell,c}[\cdot] = F_{\ell}[\cdot]\, M_{\ell,c}[\cdot],$$

where $C$ is the number of channels in the semantic segmentation mask and $\ell$ indicates the $\ell$-th convolutional layer of the deep convolutional neural network. A layer with $N_\ell$ distinct filters has $N_\ell$ feature maps, each of size $M_\ell$, where $M_\ell$ is the height times the width of the feature map, so the responses in layer $\ell$ can be stored in a matrix $F_\ell \in \mathbb{R}^{N_\ell \times M_\ell}$, where $F_\ell[i,j]$ is the activation of the $i$-th filter at position $j$ in layer $\ell$. $M_{\ell,c}$ is the segmentation mask in layer $\ell$ with channel $c$, $O$ is the output image, and $S$ is the reference style image. $\alpha_\ell$ is the weight configuring the layer preferences of the global losses, which are calculated between the raw input image and the features extracted by the feature extraction network; $\beta_\ell$ is the weight configuring the layer preferences of the local losses, which are calculated between the input segmentation image and the features extracted by the feature extraction network from the segmentation image.
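The masked, per-channel Gram statistics underlying this style loss can be sketched numerically as follows; `masked_gram` and `style_loss_layer` are illustrative names, and the per-channel normalization shown is an assumption for the sketch:

```python
import numpy as np

def masked_gram(feat, mask):
    """Gram matrix of an (N, H, W) feature map restricted to a spatial
    mask (H, W), i.e. the semantically augmented Gram statistic."""
    n = feat.shape[0]
    f = (feat * mask[None]).reshape(n, -1)  # zero out other semantic regions
    return f @ f.T

def style_loss_layer(feat_out, feat_ref, masks_out, masks_ref):
    """Sum of squared Gram differences over the C semantic channels
    of one layer, each normalized by its mask area."""
    loss = 0.0
    for c in range(masks_out.shape[0]):
        g_o = masked_gram(feat_out, masks_out[c])
        g_r = masked_gram(feat_ref, masks_ref[c])
        n_c = max(masks_out[c].sum(), 1.0)      # mask area, guards empty masks
        loss += ((g_o - g_r) ** 2).sum() / (2.0 * n_c ** 2)
    return loss
```

Restricting each Gram matrix to one semantic channel (e.g. iris vs. skin) is what keeps iris texture from being transferred onto skin regions.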
We now describe how we regularize this optimization scheme to preserve the structure of the input image and produce realistic, undistorted outputs. Our strategy is to express this constraint not on the output image directly but on the transformation that is applied to the input image. We denote by $V_c[O]$ the vectorized version of channel $c$ of the output image $O$ and define the following regularization term, which penalizes outputs that are not well explained by a locally affine transform:

$$\mathcal{L}_m = \sum_{c=1}^{3} V_c[O]^{\top} \mathcal{M}_I\, V_c[O],$$

where $\mathcal{M}_I$ is a matrix that depends only on the input image $I$ (the matting Laplacian used by Luan et al. (2017)).
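A toy numerical sketch of this quadratic penalty follows; for illustration we substitute a simple 4-neighbour graph Laplacian for the image-dependent matrix, so the helper names and the matrix itself are assumptions, not the actual regularizer:

```python
import numpy as np

def grid_laplacian(h, w):
    """Toy 4-neighbour graph Laplacian standing in for the image-dependent
    matting Laplacian; positive semi-definite by construction."""
    n = h * w
    L = np.zeros((n, n))
    for y in range(h):
        for x in range(w):
            i = y * w + x
            for dy, dx in ((0, 1), (1, 0)):
                yy, xx = y + dy, x + dx
                if yy < h and xx < w:
                    j = yy * w + xx
                    L[i, i] += 1; L[j, j] += 1
                    L[i, j] -= 1; L[j, i] -= 1
    return L

def photorealism_penalty(output, L):
    """Sum over channels c of the quadratic form V_c[O]^T L V_c[O]."""
    c, h, w = output.shape
    V = output.reshape(c, -1)  # vectorize each channel
    return float(sum(v @ L @ v for v in V))
```

A spatially constant output incurs zero penalty (the Laplacian annihilates constants), while high-frequency distortions are charged quadratically, which is the behaviour the regularizer is meant to enforce.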
We formulate the realistic, distortion-free style transfer objective by combining all three components together:

$$\mathcal{L}_{transfer} = \sum_{\ell} \alpha_\ell\, \mathcal{L}_c^{\ell} + \Gamma \sum_{\ell} \beta_\ell\, \mathcal{L}_s^{\ell} + \lambda_m\, \mathcal{L}_m,$$

where $\mathcal{L}_c^{\ell}$ is the content loss at layer $\ell$, and $\Gamma$ and $\lambda_m$ weight the style and regularization terms.
Our full objective combines both the GAN loss and the style transfer loss as:

$$\mathcal{L} = \mathcal{L}_{GAN} + \lambda\, \mathcal{L}_{transfer},$$

where $\lambda$ controls the relative importance of the two terms.
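As a concrete illustration, the weighted combination of the content, style, regularization, and GAN terms can be sketched as below; the function names and the weight values in the usage note are hypothetical, not taken from the released code:

```python
def transfer_objective(content_l, style_l, reg, alpha, beta, gamma, lam_m):
    """Transfer loss: per-layer weighted content terms, gamma-weighted
    per-layer style terms, plus the photorealism regularizer."""
    total = sum(a * c for a, c in zip(alpha, content_l))
    total += gamma * sum(b * s for b, s in zip(beta, style_l))
    return total + lam_m * reg

def full_objective(gan_loss, transfer_loss, lam):
    """Final training objective: GAN loss plus lambda-weighted transfer loss."""
    return gan_loss + lam * transfer_loss
```

For example, with content losses [2.0, 5.0], layer weights alpha = [1, 0], one style term 3.0 with beta = [1] and gamma = 0.5, regularizer 10.0 with lam_m = 0.1, the transfer loss is 2.0 + 1.5 + 1.0 = 4.5.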
4 Experimental Results
4.1 Implementation Details
This section describes the implementation details of our approach. We employed the pre-trained VGG-19 network as the feature extractor. We chose conv4_2 ($\alpha_\ell = 1$ for this layer, $\alpha_\ell = 0$ for other layers) as the local content representation, and conv1_1, conv2_1, conv3_1, conv4_1, and conv5_1 ($\beta_\ell = 1/5$ for these five layers, $\beta_\ell = 0$ for all other layers) as the local style representation; likewise, conv4_2 ($\alpha_\ell = 1$ for this layer, $\alpha_\ell = 0$ for other layers) as the global content representation, and conv1_1, conv2_1, conv3_1, conv4_1, and conv5_1 ($\beta_\ell = 1/5$ for these five layers, $\beta_\ell = 0$ for all other layers) as the global style representation. We used these layer preferences and parameters for all the results.
In order to validate the effectiveness of the proposed method for controllable style transfer, we performed an experiment on the LPW dataset Tonsen et al. (2016), which covers people of different ethnicities, a diverse set of everyday indoor and outdoor illumination environments, and natural gaze direction distributions.
In order to verify the effectiveness of the proposed method for gaze estimation, three public datasets were used to train the estimator with k-NN Wang et al. (2018b), and the MPIIGaze dataset Zhang et al. (2017) is used to test the accuracy. The three public datasets are:
UTView Sugano et al. (2014a): the data of subjects S0-S8 in UTView are used as subjects 1-9 in our dataset. In total, there are 144 (head poses) × 160 (gaze directions) × 9 (subjects) = 207,360 training samples.
SynthesEyes Wood et al. (2015): contains 11,382 synthesized close-up images of eyes, based on ten dynamic eye region models. The eye images cover a wide range of head poses, gaze directions, and illumination conditions.
UnityEyes Wood et al. (2016): can rapidly synthesize large amounts of varied eye region images as training data. The model is based on high-resolution 3D face scans and uses real-time approximations of complex eyeball materials and structures, as well as anatomically inspired procedural geometry methods for eyelid animation. Here, the dataset contains 28,332 synthetic eye images with different eye region models and eyeball materials.
4.2 Qualitative Results
To evaluate the quality of our results, we compare the proposed method with three state-of-the-art methods. To assess the effectiveness of the proposed GAN-with-style-transfer architecture, we compare with Gatys et al. (2015) and Johnson et al. (2016), which use style transfer only, and with Shrivastava et al. (2016), which uses a GAN only. Besides that, we show results without the modified generator and without the modified discriminator. With these five baseline methods, we show results on two datasets, UnityEyes Wood et al. (2016) and SynthesEyes Wood et al. (2015). As Figs.8 and 9 show, on close observation none of the baseline styles matches the gaze angle of the naturalistic images. The skin texture and the iris region in our refined synthetic images are qualitatively far more similar to the real images than to the synthetic ones; the proposed method better matches real lighting conditions and clearly outperforms Gatys et al. (2015) and Johnson et al. (2016). Moreover, compared with the variants without the modified generator or the modified discriminator, the pupil and iris regions are dramatically clearer.
In order to validate the stability of the proposed method, we compare it with the available methods at several iteration counts in Figs.10 and 11 on different datasets ("Iter" denotes the number of iterations). Because Shrivastava et al. (2016) is not stable, we only compare our method with Gatys et al. (2015) and Johnson et al. (2016). After several iterations, the proposed method achieves a stable distribution with less distortion, so our results can be used to train a stable gaze estimator.
4.3 Appearance-based Gaze Estimation
To verify the effectiveness of the proposed method, we perform experiments assessing both the quality of our refined images and their suitability for appearance-based gaze estimation. We use the COCO dataset to train the coarse segmentation network, and a few images from the MPIIGaze dataset are chosen as target images. The gaze estimation dataset consists of 28,332 synthetic images from the eye gaze synthesizer UnityEyes, six subjects of the UTView dataset, and 350,428 real images from the MPIIGaze dataset. For UTView Zhang et al. (2015), the data of subjects S0, S2, S3, S4, S6, and S8 are used as subjects 1-6 in our dataset. In total, there are 144 (head poses) × 160 (gaze directions) × 6 (subjects) = 138,240 training samples and 8 (head poses) × 160 (gaze directions) × 6 (subjects) = 7,680 testing samples.
|Method|Error (degrees)|Data|
|ALR (Lu et al., 2014)|16.7|R|
|SVR Schneider et al. (2014)|16.6|R|
|RF Sugano et al. (2014b)|15.4|R|
|CNN with UT Zhang et al. (2015)|13.2|R|
|K-NN with UT (ours)|8.9|R|
|CNN with UT (ours)|10.2|R|
|K-NN with Refined UnityEyes Wood et al. (2015)|10.2|S|
|CNN with Refined UnityEyes Wood et al. (2015)|11.5|S|
|CNN with Refined UnityEyes (SimGANs Shrivastava et al. (2016))|8.0|S|
|K-NN with Refined UnityEyes (ours)|8.3|S|
|CNN with Refined UnityEyes (ours)|7.7|S|
We evaluate the ability of our method to support appearance-based gaze estimation from both the real dataset and the synthetic image dataset. ALR (Lu et al., 2014), SVR Schneider et al. (2014), RF Sugano et al. (2014b), a convolutional neural network (Baltrusaitis) and K-NN Wood et al. (2015) are compared with our method as baseline methods. Similar to Wood et al. (2016), we train a convolutional neural network (CNN) to predict the eye gaze direction. For RF training, pixel-wise data is employed to represent the original eye image by converting it to a column vector; the number of trees during training is set to . For K-NN with UnityEyes refined images or UTView real images, considering that the computation cost increases with the number of neighbor samples, we found that a high-quality gaze estimator is obtained when the number of neighbors is set to 50, which keeps the running time short. A comparison to the state of the art is shown in Table 1. Training the CNN on the refined images outperforms the state of the art on this portion of the MPIIGaze dataset: we observe a large improvement in performance from training on the refined images and a significant improvement over the state of the art.
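A minimal sketch of the k-NN gaze estimator with 50 neighbors is given below; the use of flattened pixels as features and the mean-of-neighbors prediction are simplifying assumptions for illustration:

```python
import numpy as np

def knn_gaze(train_feats, train_gazes, query, k=50):
    """Predict gaze as the mean gaze vector of the k nearest training
    eye images (features are flattened pixel vectors here)."""
    d = np.linalg.norm(train_feats - query[None], axis=1)
    idx = np.argpartition(d, min(k, len(d) - 1))[:k]  # k smallest distances
    return train_gazes[idx].mean(axis=0)

def angular_error_deg(g1, g2):
    """Angular error in degrees between two 3D gaze vectors,
    the standard evaluation metric for gaze estimation."""
    cos = np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Averaging over 50 neighbors trades a small amount of bias for much lower variance, which is consistent with the observation above that larger neighbor counts mainly increase computation cost.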
5 Conclusion
We propose a coarse-to-fine eye synthesis method based on adversarial training to speed up the refinement of synthetic images with less unlabeled real data. We make several key modifications to GANs so that the network becomes an efficient refinement model, improving suitability for gaze estimation while keeping the images undistorted. Compared with the baseline methods, a large improvement in performance from training on the refined images is observed, and the quantity of real data required is reduced by more than one order of magnitude.
The authors sincerely thank the editors and anonymous reviewers for the very helpful and kind comments to assist in improving the presentation of our paper. This work was supported in part by the National Natural Science Foundation of China Grant 61370142 and Grant 61272368, by the Fundamental Research Funds for the Central Universities Grant 3132016352, by the Fundamental Research of Ministry of Transport of P. R. China Grant 2015329225300.
- Deng et al. (2018) Deng, R., Shen, C., Liu, S., Wang, H., Liu, X., 2018. Learning to predict crisp boundaries. arXiv preprint arXiv:1807.10097 .
- Dosovitskiy and Brox (2016) Dosovitskiy, A., Brox, T., 2016. Generating images with perceptual similarity metrics based on deep networks. CoRR abs/1602.02644. URL: http://arxiv.org/abs/1602.02644, arXiv:1602.02644.
- Dosovitskiy et al. (2017) Dosovitskiy, A., Springenberg, J.T., Tatarchenko, M., Brox, T., 2017. Learning to generate chairs, tables and cars with convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 39, 692--705. doi:10.1109/TPAMI.2016.2567384.
- Feng et al. (2018) Feng, L., Wang, H., Jin, B., Li, H., Xue, M., Wang, L., 2018. Learning a distance metric by balancing kl-divergence for imbalanced datasets. IEEE Transactions on Systems, Man, and Cybernetics: Systems .
- Gatys et al. (2015) Gatys, L.A., Ecker, A.S., Bethge, M., 2015. A neural algorithm of artistic style. CoRR abs/1508.06576. URL: http://arxiv.org/abs/1508.06576, arXiv:1508.06576.
- Goodfellow et al. (2014) Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A.C., Bengio, Y., 2014. Generative adversarial networks. CoRR abs/1406.2661. URL: http://arxiv.org/abs/1406.2661, arXiv:1406.2661.
- Hu et al. (2017) Hu, Q., Wang, H., Li, T., Shen, C., 2017. Deep cnns with spatially weighted pooling for fine-grained car recognition. IEEE Transactions on Intelligent Transportation Systems 18, 3147--3156.
- Isola et al. (2017) Isola, P., Zhu, J., Zhou, T., Efros, A.A., 2017. Image-to-image translation with conditional adversarial networks, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5967--5976. doi:10.1109/CVPR.2017.632.
- Johnson et al. (2016) Johnson, J., Alahi, A., Fei-Fei, L., 2016. Perceptual losses for real-time style transfer and super-resolution, in: Leibe, B., Matas, J., Sebe, N., Welling, M. (Eds.), Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II, Springer. pp. 694--711. URL: https://doi.org/10.1007/978-3-319-46475-6_43, doi:10.1007/978-3-319-46475-6_43.
- Karacan et al. (2016) Karacan, L., Akata, Z., Erdem, A., Erdem, E., 2016. Learning to generate images of outdoor scenes from attributes and semantic layouts .
- Lu et al. (2014) Lu, F., Sugano, Y., Okabe, T., Sato, Y., 2014. Adaptive linear regressionfor appearance-based gaze estimation. IEEE Trans. Pattern Anal. Mach. Intell. 36, 2033--2046. URL: https://doi.org/10.1109/TPAMI.2014.2313123, doi:10.1109/TPAMI.2014.2313123.
- Luan et al. (2017) Luan, F., Paris, S., Shechtman, E., Bala, K., 2017. Deep photo style transfer, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6997--7005. URL: doi.ieeecomputersociety.org/10.1109/CVPR.2017.740, doi:10.1109/CVPR.2017.740.
- Qvarfordt and Hansen (2016) Qvarfordt, P., Hansen, D.W. (Eds.), 2016. Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, ETRA 2016, Charleston, SC, USA, March 14-17, 2016, ACM. URL: http://dl.acm.org/citation.cfm?id=2857491.
- Schneider et al. (2014) Schneider, T., Schauerte, B., Stiefelhagen, R., 2014. Manifold alignment for person independent appearance-based gaze estimation, in: 2014 22nd International Conference on Pattern Recognition, pp. 1167--1172. doi:10.1109/ICPR.2014.210.
- Shrivastava et al. (2016) Shrivastava, A., Pfister, T., Tuzel, O., Susskind, J., Wang, W., Webb, R., 2016. Learning from simulated and unsupervised images through adversarial training. CoRR abs/1612.07828. URL: http://arxiv.org/abs/1612.07828, arXiv:1612.07828.
- Sugano et al. (2014a) Sugano, Y., Matsushita, Y., Sato, Y., 2014a. Learning-by-synthesis for appearance-based 3d gaze estimation, in: 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 23-28, 2014, IEEE Computer Society. pp. 1821--1828. URL: https://doi.org/10.1109/CVPR.2014.235, doi:10.1109/CVPR.2014.235.
- Sugano et al. (2014b) Sugano, Y., Matsushita, Y., Sato, Y., 2014b. Learning-by-synthesis for appearance-based 3d gaze estimation, in: 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1821--1828. doi:10.1109/CVPR.2014.235.
- Tonsen et al. (2016) Tonsen, M., Zhang, X., Sugano, Y., Bulling, A., 2016. Labelled pupils in the wild: a dataset for studying pupil detection in unconstrained environments, in: Qvarfordt and Hansen (2016). pp. 139--142. URL: http://doi.acm.org/10.1145/2857491.2857520, doi:10.1145/2857491.2857520.
- Wang et al. (2016a) Wang, H., Feng, L., Yu, L., Zhang, J., 2016a. Multi-view sparsity preserving projection for dimension reduction. Neurocomputing 216, 286--295.
- Wang et al. (2016b) Wang, H., Feng, L., Zhang, J., Liu, Y., 2016b. Semantic discriminative metric learning for image similarity measurement. IEEE Transactions on Multimedia 18, 1579--1589.
- Wang and Gupta (2016) Wang, X., Gupta, A., 2016. Generative image modeling using style and structure adversarial networks.
- Wang et al. (2015a) Wang, Y., Lin, X., Wu, L., Zhang, W., 2015a. Effective multi-query expansions: Robust landmark retrieval, in: ACM Multimedia 2015.
- Wang et al. (2017) Wang, Y., Lin, X., Wu, L., Zhang, W., 2017. Effective multi-query expansions: Collaborative deep networks for robust landmark retrieval. IEEE Transactions on Image Processing 26, 1393--1404.
- Wang et al. (2015b) Wang, Y., Lin, X., Wu, L., Zhang, W., Zhang, Q., Huang, X., 2015b. Robust subspace clustering for multi-view data by exploiting correlation consensus. IEEE Transactions on Image Processing A Publication of the IEEE Signal Processing Society 24, 3939--49.
- Wang and Wu (2018) Wang, Y., Wu, L., 2018. Beyond low-rank representations: Orthogonal clustering basis reconstruction with optimized graph structure for multi-view spectral clustering. Neural Networks 103, 1--8.
- Wang et al. (2018a) Wang, Y., Wu, L., Lin, X., Gao, J., 2018a. Multiview spectral clustering via structured low-rank matrix factorization. IEEE Transactions on Neural Networks and Learning Systems .
- Wang et al. (2016c) Wang, Y., Zhang, W., Wu, L., Lin, X., Fang, M., Pan, S., 2016c. Iterative views agreement: An iterative low-rank based structured optimization method to multi-view spectral clustering, in: IJCAI2016, pp. 2153--2159.
- Wang et al. (2018b) Wang, Y., Zhao, T., Ding, X., Peng, J., Bian, J., Fu, X., 2018b. Learning a gaze estimator with neighbor selection from large-scale synthetic eye images. Knowl.-Based Syst. 139, 41--49. URL: https://doi.org/10.1016/j.knosys.2017.10.010, doi:10.1016/j.knosys.2017.10.010.
- Wang et al. (2015c) Wang, Z., Yang, J., Jin, H., Shechtman, E., Agarwala, A., Brandt, J., Huang, T.S., 2015c. Deepfont: Identify your font from an image. CoRR abs/1507.03196. URL: http://arxiv.org/abs/1507.03196, arXiv:1507.03196.
- Wood et al. (2016) Wood, E., Baltrusaitis, T., Morency, L., Robinson, P., Bulling, A., 2016. Learning an appearance-based gaze estimator from one million synthesised images, in: Qvarfordt and Hansen (2016). pp. 131--138. URL: http://doi.acm.org/10.1145/2857491.2857492, doi:10.1145/2857491.2857492.
- Wood et al. (2015) Wood, E., Baltrusaitis, T., Zhang, X., Sugano, Y., Robinson, P., Bulling, A., 2015. Rendering of eyes for eye-shape registration and gaze estimation, in: 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, IEEE Computer Society. pp. 3756--3764. URL: https://doi.org/10.1109/ICCV.2015.428, doi:10.1109/ICCV.2015.428.
- Wu et al. (2018a) Wu, L., Wang, Y., Gao, J., Li, X., 2018a. Deep adaptive feature embedding with local sample distributions for person re-identification. Pattern Recognition 73, 275--288.
- Wu et al. (2018b) Wu, L., Wang, Y., Gao, J., Li, X., 2018b. Where-and-when to look: Deep siamese attention networks for video-based person re-identification. arXiv preprint arXiv:1808.01911 .
- Wu et al. (2018c) Wu, L., Wang, Y., Li, X., Gao, J., 2018c. Deep attention-based spatially recursive networks for fine-grained visual recognition. IEEE Transactions on Cybernetics .
- Wu et al. (2018d) Wu, L., Wang, Y., Li, X., Gao, J., 2018d. What-and-where to match: Deep spatially multiplicative integration networks for person re-identification. Pattern Recognition 76, 727--738.
- Zhang et al. (2015) Zhang, X., Sugano, Y., Fritz, M., Bulling, A., 2015. Appearance-based gaze estimation in the wild, in: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4511--4520. doi:10.1109/CVPR.2015.7299081.
- Zhang et al. (2017) Zhang, X., Sugano, Y., Fritz, M., Bulling, A., 2017. Mpiigaze: Real-world dataset and deep appearance-based gaze estimation. CoRR abs/1711.09017. URL: http://arxiv.org/abs/1711.09017, arXiv:1711.09017.