On their own, state-of-the-art deep learning systems typically achieve accuracies between 80% and 99% on tasks such as image classification, given sufficient training data. While satisfactory for research purposes, these accuracies may not suffice for deploying deep learning in everyday tools. For example, 99% accuracy in a self-driving car would lead to an unacceptable number of deaths. In that setting, whenever the deep learning system has low confidence in its next course of action, operation of the vehicle would be turned over to a human. Facebook’s facial recognition provides another example of how a human in the loop supplements a deep learning system. While Facebook’s deep learning-based facial recognition achieves roughly 97% accuracy (Taigman, 2014), Facebook still calls upon its users to help verify a given face whenever the classifier has low confidence in its prediction. Thus, human-in-the-loop-centric design enables deep learning’s ubiquity in everyday applications.
The image organization task requires human-in-the-loop design – only the human analyzing the photos knows the optimal organization of images for their particular task. Machines may be good at grouping images in ways that make sense to machines, but not to humans, unless explicitly primed with some notion of what the algorithm should group by (e.g., color or shape) (Hodas & Endert, 2016). Sharkzor supports human-machine interaction by using physical proximity on the canvas and grouping as information for the system. The essence of Sharkzor is the power to embed an image’s high dimensionality into a two-dimensional space. In the Sharkzor system we aim to keep the human at the center of the task, rather than treating them as a simple and tedious algorithm trainer.
Users move images around on a 2D web-based canvas, arranging them freeform or into groups. Upon request, the Sharkzor system repositions images or regroups them to reflect its assessment of the user’s mental model. The user may then refine the system’s suggestions. In this way, the user may retrain the system and may come to understand Sharkzor’s deep learning models by comparing the system’s organization of images against their own mental model. Thus, Sharkzor supports dynamic human-in-the-loop machine learning for image sort and summary.
1.1 Related Work
Some of the earliest research on human-automation interaction that remains relevant today can be traced to (Parasuraman et al., 2000) and (Cummings, 2004). In (Cummings, 2004), research concentrated on Navy weapons operators who had to synthesize instant messaging data from multiple sources to make supervisory decisions on how to control Tomahawk missiles. In (Yu et al., 2015), an iterative loop consisting of humans and deep learning is used to generate a dataset containing roughly one million labeled images of ten scene categories and twenty object categories. Finally, in (Wang et al., 2016), the researchers used a human-in-the-loop system to train a convolutional neural network to segment foreground moving objects in surveillance videos. The performance of the classifier rivaled that of humans while reducing the manual labor involved in ground-truthing the videos by up to forty times. Recently, (Hodas & Endert, 2016) presented a system that uses the two-dimensional arrangement of images to capture the mental model of the user and position images accordingly (see (Hodas & Endert, 2016) for a review of other systems that automate 2D arrangements of images). However, this system required the user to touch every image. Sharkzor leverages deep learning to learn the user’s mental model and apply it to images it has not seen before.
2 Human-in-the-Loop Design
Though many applications of machine learning to human-in-the-loop tasks utilize the human as a fall-back when the algorithm is unsure, we instead focus on using machine learning algorithms to augment the user’s own organizational methodology. Our user-centered approach, which focuses primarily on optimizing for the user and their task, also supports users with different and unexpected workflows.
The Sharkzor system works more like a recommender system such as those of Netflix or Amazon. Sharkzor requires few training examples (as few as two, initially) and integrates the training process seamlessly into the user’s normal workflow. Sharkzor also provides feedback and insight into the machine learning process by way of confidence visualization and image heatmap overlay.
2.1 User-Centered Design
The User-Centered Design Cycle (Norman & Draper, 1986) describes a method for designing and developing systems that focus, as the name suggests, on the user. We utilized this method for researching, designing, and developing the Sharkzor software.
During the research phase of the cycle, we inventoried features of other machine learning image organization tools (Farbman & Rasmussen) and general image organization tools (Stefaner; Microsoft Live Labs Pivot). We then refined and elaborated on the supported tasks to support the intended Sharkzor user. We created a task taxonomy, grouping the lower-level tasks and relevant interactions into three primary system tasks: triage, organize, and automate.
The Sharkzor interface is designed to be a flexible system that allows users to complete their tasks in non-predetermined ways. We provide a system in which how and when to use the machine learning augmentation is up to the user throughout their workflow.
2.1.1 User Tasks
We identified user tasks and interactions throughout our research phase, then prioritized and organized the interactions using our task taxonomy based around the triage, organize, and automate task families.
The Triage task involves taking the user’s initial query-result image pile and turning it into an accurate visual representation of the user’s mental model. Functionality supporting this task includes pre-clustering and image interaction modes. With images pre-clustered by visual similarity, the user can quickly grab groups of similar images from the canvas (Figure 1A) and then organize them into groups (Figure 1B) using grid view mode. The grid view mode (Figure 2A) was designed specifically to support the triage task, where on the initial canvas many images may be hidden behind others.
The Organize task revolves around user interaction with the canvas to create a visual model of image organization. Key functionality includes machine feedback via proximity (Figure 1C) and grouping (Figure 1B).
The Automate task expedites the user’s workflow by automating common tasks once the other tasks have been completed within that workflow. Sharkzor uses deep neural networks to automatically populate the visual representation of the user’s mental model. Functionality supporting this task includes auto-positioning and auto-grouping of images.
2.2 Toward Understandable Machine Learning
The Sharkzor system has affordances which provide transparency and increase trust in the underlying machine learning algorithms. These affordances are a step toward making machine learning results less of a black box and more understandable to humans. The features include (1) auto-group feedback, in which images similar to those already grouped are added to existing groups (Figure 2B), and (2) the confidence visualization (Figure 2C), which provides a qualitative view of the algorithm’s confidence in group assignment.
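The mapping from numeric confidence to a qualitative display is not specified above; as an illustrative sketch (the function name and thresholds are our own assumptions, not Sharkzor’s actual cutoffs), a group-assignment probability could be bucketed into qualitative bands for display:

```python
def confidence_band(p, bands=((0.9, "high"), (0.6, "medium"), (0.0, "low"))):
    """Map a group-assignment probability to a qualitative band.

    Thresholds here are hypothetical; a real system would tune them
    against observed classifier calibration.
    """
    for threshold, label in bands:
        if p >= threshold:
            return label
    return "low"
```

The qualitative labels, rather than raw probabilities, are what a confidence visualization would surface to the user.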
3 Deep Learning
Sharkzor leverages multiple deep learning techniques to facilitate image identification and organization, including size-agnostic classification, pre-clustering and few-shot learning. These algorithms and methods are deployed into flexible and modular micro-services.
3.1 Pre-Clustering
Pre-clustering provides users with a starting arrangement of images from which to begin interacting. A widely used algorithm exists precisely for this task: t-distributed stochastic neighbor embedding (t-SNE) (Maaten & Hinton, 2008). Unfortunately, t-SNE becomes computationally prohibitive when the number of starting dimensions is large, as is the case for image data. To this end, Sharkzor takes a series of steps to reduce an image’s dimensionality before passing it to t-SNE. The first step utilizes an autoencoder, pre-trained on ImageNet images (Deng et al., 2009), to compress each image to 256 dimensions. After feature extraction with the autoencoder, principal component analysis further reduces the representation to eight dimensions. Finally, t-SNE takes this eight-dimensional representation and yields the desired two-dimensional embedding. The coordinates of the t-SNE output are then used by the canvas for initial image placement when the user starts Sharkzor.
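The dimensionality-reduction pipeline (autoencoder features → PCA to eight dimensions → t-SNE to two) can be sketched as follows; the autoencoder stage is stood in for by random 256-dimensional feature vectors, and the perplexity value is our own illustrative choice:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in for autoencoder output: 50 images, 256-d feature vectors each.
features = rng.normal(size=(50, 256))

# Step 2: PCA reduces the 256 dimensions down to 8.
reduced = PCA(n_components=8).fit_transform(features)

# Step 3: t-SNE embeds the 8-d representation into 2-D canvas coordinates.
# perplexity=5 is an illustrative value suited to this small sample size.
coords = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(reduced)
print(coords.shape)  # (50, 2)
```

The resulting `coords` rows would correspond to the initial (x, y) canvas positions of each image.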
3.2 Few-Shot Learning
Because users must be able to create arbitrary image-related mental models, we are unable to use traditional multi-label classification techniques: the user may be interested in clustering images into arbitrarily complex arrangements. To make a robust system that can adapt to user-supplied groups, we leverage learning techniques that require few training examples.
Few-shot techniques such as Siamese networks (Koch, 2015) overcome the issue of requiring many hundreds of labeled examples because they classify images in relation to other images. We accomplish this specifically by having our network learn a binary classification task: the probability that a reference image belongs to a class, where a class is a single image or a collection of images tagged by the user.
This provides a key benefit: flexibility with respect to the number of groups. If, for instance, we applied a final n-way classification layer on the few-shot model, as in (Vinyals et al., 2016), we would be locked into a fixed number of groups. To support multi-group classification of images, we collect the output probabilities from comparing each ungrouped image to each user-provided group and then take the maximum over the set of probabilities. To allow pictures to remain ungrouped, we only assign an image to a group if the certainty exceeds a threshold.
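The assignment rule above can be sketched as follows; the few-shot model itself is replaced by a precomputed image-vs-group probability matrix, and the threshold value is our own assumption:

```python
import numpy as np

def assign_groups(pair_probs, threshold=0.7):
    """Assign each ungrouped image to its most probable user group.

    pair_probs: (n_images, n_groups) array where entry (i, j) is the
    few-shot model's probability that image i belongs to group j.
    threshold: illustrative cutoff; images whose best score falls
    below it remain ungrouped (encoded here as -1).
    """
    best_group = pair_probs.argmax(axis=1)      # maximum over groups
    best_prob = pair_probs.max(axis=1)
    return np.where(best_prob >= threshold, best_group, -1)

probs = np.array([[0.9, 0.2],    # confidently group 0
                  [0.3, 0.8],    # confidently group 1
                  [0.4, 0.5]])   # uncertain: remains ungrouped
print(assign_groups(probs))      # [ 0  1 -1]
```

Because the matrix simply gains a column whenever the user creates a new group, no fixed-width classification layer constrains the number of groups.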
3.3 Training the neural networks
Sharkzor leverages standard image datasets for training and performance assessment. These datasets include Caltech-256 (Griffin et al., 2007), CIFAR-10 & CIFAR-100 (Krizhevsky & Hinton, 2009), Visual Genome (Krishna et al., 2016), and Omniglot (Lake et al., 2015).
It should be noted that the Sharkzor networks are explicitly trained to be class- and data-agnostic. Training the few-shot learning technique on common datasets ensures that the system functions as expected: we can visually confirm results and quickly run experiments to benchmark, for example, performance versus class size.
3.4 Machine & Deep Learning Service
The machine and deep learning micro-services are responsible for providing all functionality to requestors via the image service. To operate, we initially leverage transfer learning by extracting features from ResNet (He et al., 2015) for all of our images. These features are then used for everything from training our exemplar regression model to training the few-shot model.
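As a minimal illustration of the feature-extraction step (the real system uses a pretrained ResNet; here a random array stands in for the network’s final convolutional feature map, and the 512-channel/7×7 shape is an assumption matching a small ResNet on 224×224 input), a fixed-length feature vector is obtained by global average pooling over the spatial dimensions:

```python
import numpy as np

def global_average_pool(feature_map):
    """Collapse a (channels, height, width) convolutional feature map
    into a fixed-length vector by averaging over the spatial axes --
    the same operation ResNet applies before its final layer."""
    return feature_map.mean(axis=(1, 2))

rng = np.random.default_rng(0)
# Stand-in for a ResNet final-stage feature map: 512 channels, 7x7 spatial.
feature_map = rng.normal(size=(512, 7, 7))
feature_vector = global_average_pool(feature_map)
print(feature_vector.shape)  # (512,)
```

Vectors like this, computed once per image, are what the downstream exemplar regression and few-shot models would consume.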
4 Conclusion
In this work we described our approach to building a deep-learning-assisted platform for visual triage, sort, and summary of images, which encodes a user’s mental model through human-in-the-loop interaction. We designed Sharkzor using a micro-services philosophy that aids users in instilling their mental model into the application through methods for pre-clustering, auto-grouping, and repositioning images using traditional machine learning, transfer learning, and few-shot learning.
Acknowledgments
This work was funded by the U.S. Government.
References
- Microsoft Live Labs Pivot. www.microsoft.com/silverlight/pivotviewer/. Accessed: 2017-06-04.
- Cummings (2004) Cummings, M. L. The need for command and control instant message adaptive interfaces: Lessons learned from tactical tomahawk human-in-the-loop simulations. CyberPsychology and Behavior, 7:653–661, 2004.
- Deng et al. (2009) Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255. IEEE, 2009.
- Farbman, J. and Rasmussen, Chris. Social media picture explorer. https://github.com/ngageoint/social-media-picture-explorer. Accessed: 2017-06-04.
- Griffin et al. (2007) Griffin, G., Holub, A., and Perona, P. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007. URL http://authors.library.caltech.edu/7694.
- He et al. (2015) He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385.
- Hodas & Endert (2016) Hodas, Nathan Oken and Endert, Alex. Adding semantic information into data models by learning domain expertise from user interaction. IEEE VIS, 2016.
- Koch (2015) Koch, Gregory. Siamese neural networks for one-shot image recognition. PhD thesis, University of Toronto, 2015.
- Krishna et al. (2016) Krishna, Ranjay, Zhu, Yuke, Groth, Oliver, Johnson, Justin, Hata, Kenji, Kravitz, Joshua, Chen, Stephanie, Kalantidis, Yannis, Li, Li-Jia, Shamma, David A, Bernstein, Michael, and Fei-Fei, Li. Visual genome: Connecting language and vision using crowdsourced dense image annotations. 2016. URL https://arxiv.org/abs/1602.07332.
- Krizhevsky & Hinton (2009) Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images. 2009.
- Lake et al. (2015) Lake, Brenden M., Salakhutdinov, Ruslan, and Tenenbaum, Joshua B. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015. ISSN 0036-8075. doi: 10.1126/science.aab3050. URL http://science.sciencemag.org/content/350/6266/1332.
- Maaten & Hinton (2008) Maaten, L. and Hinton, G. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.
- Norman & Draper (1986) Norman, Donald A. and Draper, Stephen W. User Centered System Design; New Perspectives on Human-Computer Interaction. L. Erlbaum Associates Inc., Hillsdale, NJ, USA, 1986. ISBN 0898597811.
- Parasuraman et al. (2000) Parasuraman, R., Sheridan, T., and Wickens, C. D. A model for types and levels of human interaction with automation. IEEE Transactions on systems, man, and cybernetics-Part A: Systems and Humans, 30:286–297, 2000.
- Stefaner, Moritz. Revisit v2. http://truth-and-beauty.net/projects/revisit/. Accessed: 2017-06-04.
- Taigman (2014) Taigman, Y. DeepFace: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
- Vinyals et al. (2016) Vinyals, Oriol, Blundell, Charles, Lillicrap, Tim, Wierstra, Daan, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pp. 3630–3638, 2016.
- Wang et al. (2016) Wang, Y., Luo, Z., and Jodoin, P. Interactive deep learning method for segmenting moving objects. Pattern Recognition Letters, 2016.
- Yu et al. (2015) Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., and Xiao, J. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.