Towards an All-Purpose Content-Based Multimedia Information Retrieval System

02/11/2019 · by Ralph Gasser, et al. · Universität Basel

The growth of multimedia collections - in terms of size, heterogeneity, and variety of media types - necessitates systems that are able to conjointly deal with several forms of media, especially when it comes to searching for particular objects. However, existing retrieval systems are organized in silos and treat different media types separately. As a consequence, retrieval across media types is either not supported at all or subject to major limitations. In this paper, we present vitrivr, a content-based multimedia information retrieval stack. As opposed to the keyword search approach implemented by most media management systems, vitrivr makes direct use of the object's content to facilitate different types of similarity search, such as Query-by-Example or Query-by-Sketch, for and, most importantly, across different media types - namely, images, audio, videos, and 3D models. Furthermore, we introduce a new web-based user interface that enables easy-to-use, multimodal retrieval from and browsing in mixed media collections. The effectiveness of vitrivr is shown on the basis of a user study that involves different query and media types. To the best of our knowledge, the full vitrivr stack is unique in that it is the first multimedia retrieval system that seamlessly integrates support for four different types of media. As such, it paves the way towards an all-purpose, content-based multimedia information retrieval system.




1 Introduction

As media collections grow larger and become more diverse, the quest for accessing the knowledge contained within these collections becomes more arduous. This is mainly due to the lack of proper tools for satisfying a particular information need. The classical approach of annotating media objects and retrieving them later based on this metadata has several shortcomings. Firstly, the sheer amount of data and the pace at which multimedia collections grow make the laborious task of prior annotation ever more daunting. Secondly, textual descriptions tend to be subjective due to personal experience, expertise, language, and culture. And thirdly, it is difficult to describe temporal evolution, e.g., in videos, in a way that enables others to retrieve the desired object later, i.e., to anticipate all possible future searches at the time an object is annotated. In order to overcome these limitations, retrieval systems need to take the objects’ content into account. However, most existing content-based multimedia retrieval systems only address a single media type and do not support search within several or even across different modalities.

In this paper we present an extended version of vitrivr – a scalable, open source, content-based multimedia information retrieval stack [1]. vitrivr is the successor of the IMOTION system [2], which was originally designed for multimedia retrieval in large video collections. The work described herein builds directly on these previous efforts. We leverage ADAM [3], a storage engine that facilitates fast and scalable k-Nearest Neighbour (kNN) look-ups in high-dimensional vector spaces for the purpose of multimedia retrieval, and Cineast, a modular feature extraction and retrieval engine developed for video retrieval. As part of our work, we have integrated different, media-type-specific content-based retrieval techniques into Cineast so as to arrive at a solution that is capable of managing and searching not only video compilations, but large, mixed multimedia collections.

The contribution of the paper is twofold: First, we introduce the architecture and the supported features of vitrivr, our scalable content-based multimedia information retrieval stack. Second, we show the effectiveness of vitrivr by means of a user study that considers a very heterogeneous set of search tasks.

The remainder of this paper is structured as follows: Section 2 surveys related work. Section 3 outlines some of the retrieval techniques applied by vitrivr and Section 4 describes implementation aspects. Section 5 presents the evaluation of the multimodal retrieval support in vitrivr and discusses the results. Section 6 concludes.

2 Related Work

(a) Visual QbE
(b) Visual QbS
(c) QbE for audio
(d) QbE for 3D models
(e) QbS for 3D models
Figure 1: Illustration of how the different retrieval methods present themselves in the user interface. There are query terms for QbE (1(a)) and QbS (1(b)) of videos and images, QbE of audio (1(c)), and QbE (1(d)) and QbS (1(e)) of 3D models. In every case, the user can either select or create a reference document that is later used for the look-up.

Currently, there seems to be very little work on integrated solutions for content-based retrieval of different types of media. Most research in the field focuses on a particular modality like audio, video, or 3D models, or on even further specialized subdomains, such as music, speech, or environmental sounds within audio.

Some examples of general-purpose retrieval systems are QBIC [5] and MUVIS [6], both of which support retrieval of images and video. Only MUVIS, however, has added support for audio, both stand-alone and interlaced with video. Moreover, neither system is publicly available.

2.1 Content-Based Image Retrieval

Early work on image matching and retrieval started in the late 1970s, and it has since become a fundamental aspect of many problems in computer vision. General-purpose color-based CBIR systems very often employ histograms in different color spaces, color layout and region-based search, or a combination thereof. Typical techniques for identifying shapes involve edge histograms or image moments, such as centroid distances [7, 8].
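To make the color-histogram approach concrete, the following minimal sketch builds a joint RGB histogram and compares two of them by L1 distance. The quantization into four levels per channel is an illustrative choice, not taken from any particular system:

```python
import numpy as np

def color_histogram(image, bins=4):
    """Quantize each RGB channel into `bins` levels and count joint
    occurrences, yielding a bins**3-dimensional, L1-normalized vector."""
    q = (image.astype(np.int64) * bins) // 256          # per-channel bin index
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def histogram_distance(h1, h2):
    """L1 distance between two normalized histograms (0 = identical)."""
    return np.abs(h1 - h2).sum()
```

A pure red and a pure blue image land all their mass in different bins, so their distance is maximal (2.0 for L1-normalized histograms).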

More recent developments in image retrieval gave rise to advanced techniques like SIFT [9] and SURF [10] for local descriptors, and VLAD [11] or Fisher Vectors [12] for aggregation. Once local feature descriptors have been obtained by means of SIFT, SURF, or a similar approach, it is also possible to apply a Bag of Words (BoW) model to create a global, aggregated feature vector [13, 14, 15].
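A BoW aggregation along these lines can be sketched as follows. The naive k-means and the toy 2D descriptors are illustrative simplifications; a real system would cluster 64- or 128-dimensional SURF/SIFT descriptors with a far larger codebook:

```python
import numpy as np

def build_codebook(descriptors, k, iters=20, seed=0):
    """Naive k-means (Lloyd's algorithm) over local descriptors to
    obtain k visual words."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = descriptors[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def bow_vector(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return the normalized word-count histogram (the global feature)."""
    d = np.linalg.norm(descriptors[:, None] - codebook[None], axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(codebook)).astype(float)
    return counts / counts.sum()
```

The resulting histogram is a fixed-length global vector that can be indexed and compared like any other feature, regardless of how many local descriptors the image produced.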

2.2 Content-Based Music Retrieval

Content-based Music Retrieval (CBMR) or just Music Information Retrieval (MIR) is a multidisciplinary field that straddles different domains ranging from computer science to psychology. There is a large community surrounding MIR, organized in the International Society for Music Information Retrieval (ISMIR), which holds the annual MIREX evaluation campaign for MIR algorithms [16].

Generally, MIR tasks can be characterized by their specificity and their granularity. Based on these two dimensions, [17] classifies existing Query-by-Example techniques for music into four larger categories: audio identification (fingerprinting), audio matching, version identification, and category-based retrieval.

The problem of audio fingerprinting consists in finding exact matches for a short segment of music, that is, identifying the recording the segment belongs to. This problem has largely been solved, and different methods have been developed and are being used in commercial applications. Prominent examples include Shazam and methods based on Mel-Frequency Cepstrum Coefficients (MFCC) [18, 19, 20]. In contrast, even though there is a lot of ongoing research, no best practices have emerged yet for audio matching or version identification, and many working examples trade retrieval accuracy for scalability or vice versa. These two tasks consist in finding different variations of a given piece of music, for example, a live recording, a cover version, or a remix. The notion of similarity hence becomes fuzzier in these cases. Generally, chroma-based features, like Pitch Class Profiles (PCP) [21, 22] or variations thereof [23], have been shown to be well suited for these MIR tasks, but some systems also exploit rhythm or melody [24]. For instance, [25] proposes a version identification scheme based on previous work by Casey et al. [26, 27]. The proposed technique involves comparing pitch class profiles for overlapping shingles (audio fragments) of fixed length. The authors were able to demonstrate that this technique scales well, especially when combined with Locality Sensitive Hashing (LSH) [28].
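The shingling idea can be illustrated with a small sketch: overlapping, fixed-length windows of chroma (PCP) frames are flattened, normalized, and compared by Euclidean distance. The window length and the brute-force comparison (instead of LSH) are simplifications for illustration:

```python
import numpy as np

def shingles(chroma, length=4):
    """Slice a (frames x 12) chroma sequence into overlapping
    fixed-length shingles, each flattened and L2-normalized."""
    out = []
    for i in range(len(chroma) - length + 1):
        s = chroma[i:i + length].ravel().astype(float)
        n = np.linalg.norm(s)
        out.append(s / n if n else s)
    return np.array(out)

def min_shingle_distance(query, track, length=4):
    """Smallest Euclidean distance between any query shingle and any
    track shingle; small values suggest a matching passage."""
    q, t = shingles(query, length), shingles(track, length)
    d = np.linalg.norm(q[:, None] - t[None], axis=2)
    return d.min()
```

An excerpt taken verbatim from a track yields distance zero, while an unrelated chroma sequence stays clearly above it; in a real system the per-shingle look-up would go through an LSH index rather than exhaustive comparison.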

2.3 Content-Based 3D Model Retrieval

Similarity of 3D models can be assessed by means of many different descriptors of which feature vectors, histograms, and statistical moments are only three examples. Multiple surveys [29, 30] list and classify the many methods for content-based 3D model retrieval.

In [31, 32], a linear combination of spherical harmonics is used to obtain a function that approximates the 3D model’s surface. The weight coefficients of this function serve as components in a feature vector. On the other hand, [33] proposes light field descriptors for 3D models, which are based on projections of the 3D model onto the faces of a circumscribing dodecahedron. Subsequently, classical CBIR techniques can be applied on the resulting images to extract feature descriptors, namely calculation of Zernike moments and Fourier descriptors for the resulting shape.

3 Retrieval Methods

The vitrivr system allows for Query-by-Example (QbE) and Query-by-Sketch (QbS). The latter is only supported for visual modalities, that is, videos, images, and 3D models. The QbE paradigm takes a reference document, for instance an example image or a short audio snippet, and tries to find documents in the corpus that are similar to the reference. In contrast, the More-Like-This query mode bases queries on previously retrieved documents which are already known to the system. QbS is a special case of QbE in which the reference document is a hand-drawn sketch produced by the user. For this case, the user interface provides a simple canvas that allows the user to directly create these sketches. For videos, there is an additional variant of QbS that enables a user to sketch motion paths, which we refer to as Query-by-Motion (QbM).

Figure 1 illustrates how users can interact with the different retrieval methods in the user interface. Figures 1(a) to 1(e) each depict one query term, each using a different type of either selected or hand-crafted reference document.

3.1 Retrieval of Images and Videos

The retrieval of visual modalities in images and video is largely based on the pre-existing capabilities of Cineast. Most of the features employed by the original version can be applied directly to still images, as the state of the art in video retrieval can, with a few exceptions, be reduced to the use of image retrieval techniques in combination with keyframing. All the original Cineast features are also supported by the current version.

In the extended system, we have added feature modules based on SURF [10] and HOG [34], combined with a simple BoW model to further the support for exact matches in both images and videos. The codebooks for these tasks were derived from the MIR Flickr 25k [35] collection.

3.2 Retrieval of Music

The new feature modules for music retrieval are largely based on HPCP [21, 22] and CENS [36] features combined with the shingling approach proposed by [25]. These features can be used for audio matching tasks. Moreover, we have added some fingerprinting methods, namely a feature based on MFCC [18] and one inspired by the Shazam algorithm [19].
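A landmark-style fingerprint in the spirit of the Shazam algorithm can be sketched as follows; the peak picking, hash layout, and fan-out below are illustrative choices, not the parameters used by Cineast:

```python
import numpy as np

def spectral_peaks(spectrogram, n_peaks=2):
    """Keep the n strongest frequency bins per time frame as the
    constellation map."""
    return np.argsort(spectrogram, axis=1)[:, -n_peaks:]

def fingerprint_hashes(peaks, fan_out=3):
    """Hash anchor/target peak pairs together with their time offset
    into integers, as in landmark-based fingerprinting."""
    hashes = set()
    flat = [(t, f) for t, row in enumerate(peaks) for f in row]
    for i, (t1, f1) in enumerate(flat):
        for (t2, f2) in flat[i + 1:i + 1 + fan_out]:
            dt = t2 - t1
            if 0 < dt <= 10:
                hashes.add((int(f1) << 20) | (int(f2) << 10) | dt)
    return hashes

def match_score(query_hashes, track_hashes):
    """Fraction of query hashes also present in the track."""
    return len(query_hashes & track_hashes) / max(len(query_hashes), 1)
```

Because hashes encode only relative time offsets, an excerpt of a track reproduces a subset of the track's hashes and matches perfectly, whereas unrelated audio shares almost none.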

As one can see in Figure 1(c), the user interface allows a user to upload short audio segments of a few seconds to be used as reference documents. Furthermore, the UI can be used to specify the type of query that should be executed along the dimensions mentioned in Section 2, namely audio fingerprinting, version identification, or audio matching. These settings influence the feature modules that are executed by Cineast. For pure fingerprinting tasks, only the fingerprinting modules will be used, whereas for audio matching, the CENS and HPCP modules are used and the fingerprinting modules are left out.
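The described mapping from query type to feature modules amounts to a simple dispatch table. The module names below are hypothetical placeholders for illustration, not actual Cineast class names:

```python
# Hypothetical module names, chosen for illustration only.
AUDIO_QUERY_MODULES = {
    "fingerprinting": ["MFCC", "AudioFingerprint"],
    "matching": ["CENS", "HPCP"],
    "version_identification": ["HPCP", "CENSShingle"],
}

def modules_for(query_type):
    """Return the feature modules to run for the selected audio query
    type; e.g. fingerprinting modules are left out for matching tasks."""
    try:
        return AUDIO_QUERY_MODULES[query_type]
    except KeyError:
        raise ValueError(f"unknown audio query type: {query_type}")
```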

The current version of Cineast only includes audio features that can be used for music retrieval. However, it is possible to add more feature modules to add support for speech or general purpose audio.

3.3 Retrieval of 3D Models

The new feature modules for QbE in 3D model retrieval are based on spherical harmonics as proposed by [31, 32]. For QbS, the users can draw projections of the object as perceived when looking at the model from a specific angle. For similarity search, we compare Fourier and Zernike coefficients calculated for the model’s projections onto a circumscribing dodecahedron. This is referred to as light field descriptor and was proposed by [33].
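The 2D-shape part of such a light-field-style comparison can be illustrated with Fourier descriptors of a projected contour (the Zernike moments also used by the descriptor are omitted here for brevity). This is a generic textbook sketch, not the exact formulation used in the system:

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=8):
    """Translation-, rotation-, and scale-invariant shape signature
    from a closed 2D contour, via its complex boundary representation."""
    z = contour[:, 0] + 1j * contour[:, 1]
    F = np.fft.fft(z)
    F[0] = 0                        # drop DC term -> translation invariance
    mags = np.abs(F)                # magnitudes -> rotation invariance
    if mags[1] > 0:
        mags = mags / mags[1]       # normalize by 1st harmonic -> scale invariance
    return mags[1:n_coeffs + 1]
```

Scaling and translating a contour leaves the descriptor unchanged, which is exactly what makes it usable for comparing a user sketch against renderings of differently sized models.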

The user interface allows the user to either upload a reference model for QbE or to sketch a 2D silhouette of the desired object and retrieve 3D models based on its shape. Figures 1(d) and 1(e) illustrate how this looks.

4 Implementation

The entire vitrivr stack consists of three components: a web-based user interface called Vitrivr-NG, the feature extraction and query processing engine Cineast, and ADAM, a storage layer for high-dimensional feature vectors. In addition, a web server is required in order to host and serve the multimedia files and derivatives like thumbnails or clips.

Figure 2: Illustration of Cineast’s system architecture. The main modules are the file handling module, the ingest runtime, and the retrieval runtime. These three modules facilitate the offline ingest and the online retrieval workflow supported by the system. Outside the system context lies the storage layer for feature vectors and the user interface.

4.1 The Retrieval Engine

Cineast is a modular feature extraction and multimedia retrieval engine implemented in Java and forms the core of the entire stack. Its architecture is illustrated in Figure 2. Cineast supports two types of workflows: the offline ingest workflow and the online retrieval workflow. The offline workflow consists of decoding and segmenting multimedia files; Cineast includes support for a wide variety of formats through libraries like FFMPEG and TwelveMonkeys. In the process, the derived segments are ultimately handed to an extraction pipeline where they are processed by different feature modules. The online retrieval workflow parses queries submitted by users and uses the provided reference documents (e.g., a short audio segment or a sketch) to derive feature descriptors, again by means of the aforementioned feature modules. The feature modules thus provide the main functionality of Cineast during both the online and the offline workflow. They derive feature descriptors from a segment, generate feature vectors, and then use the storage layer to either persist them (offline) or perform a look-up (online).
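The dual role of a feature module — extracting descriptors for persistence during ingest and for look-up during retrieval — can be sketched as a small interface. The class names and the in-memory store are illustrative; Cineast's actual Java interfaces differ:

```python
import numpy as np

class FeatureModule:
    """Sketch of the dual-use module contract: the same descriptor
    extraction serves ingest (persist) and retrieval (kNN look-up)."""
    def __init__(self, name, store):
        self.name, self.store = name, store   # store: segment id -> vector

    def describe(self, segment):
        raise NotImplementedError

    def ingest(self, segment_id, segment):    # offline workflow
        self.store[segment_id] = self.describe(segment)

    def query(self, segment, k=3):            # online workflow: brute-force kNN
        q = self.describe(segment)
        dists = {sid: float(np.linalg.norm(q - v)) for sid, v in self.store.items()}
        return sorted(dists, key=dists.get)[:k]

class MeanColor(FeatureModule):
    """Toy feature: mean value per RGB channel."""
    def describe(self, segment):
        return np.asarray(segment, dtype=float).reshape(-1, 3).mean(axis=0)
```

In the real stack, the `ingest`/`query` split corresponds to persisting vectors in, respectively, querying, the storage layer rather than a Python dictionary.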

The query functionality of Cineast can be accessed through a WebSocket and a RESTful API. The API supports different types of actions, like simple ID or keyword-based look-ups, kNN search based on a provided reference document or kNN search using an existing entry in the database (“More-Like-This” queries).

Cineast’s modular architecture allows for easy extension in terms of supported features just by adding new feature modules and composing these modules into different feature categories. As part of the work reported in this paper, we have added 14 new feature modules in order to support the query modes described in Section 3.

4.2 The Storage Layer

ADAM is a database system that is able to persistently store and retrieve multimedia objects on a large scale. Most importantly, it allows for efficient kNN search in high-dimensional vector spaces, which is crucial for content-based multimedia retrieval. It employs different exact and approximate indexing strategies like Spectral Hashing (SH) [37], LSH [28], and Vector-Approximation (VA) files [38]. In addition, ADAM also supports the storage of ordinary entities with textual, numerical, and temporal information. We refer to [3] for more information on ADAM.
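Random-hyperplane LSH, one of the approximate strategies mentioned above, can be sketched in a few lines: vectors falling into the same hash bucket become candidates, which are then ranked by exact distance. This is a toy single-table variant; production systems such as ADAM use multiple tables and more elaborate schemes:

```python
import numpy as np

class HyperplaneLSH:
    """Toy random-hyperplane LSH index for approximate kNN search."""
    def __init__(self, dim, n_planes=8, seed=0):
        self.planes = np.random.default_rng(seed).normal(size=(n_planes, dim))
        self.buckets = {}

    def _key(self, v):
        # Sign pattern of the projections onto the random hyperplanes.
        return tuple((self.planes @ v > 0).astype(int))

    def add(self, vid, v):
        self.buckets.setdefault(self._key(v), []).append((vid, np.asarray(v, float)))

    def query(self, v, k=3):
        # Only vectors in the same bucket are candidates; rank them exactly.
        cand = self.buckets.get(self._key(v), [])
        cand.sort(key=lambda e: np.linalg.norm(e[1] - v))
        return [vid for vid, _ in cand[:k]]
```

The design trade-off is the one noted in Section 2 for audio shingles: more planes mean smaller buckets and faster queries, but a higher chance of missing true neighbours.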

ADAM acts as the storage layer in the vitrivr stack. It contains all the information on multimedia objects, segments thereof, and the extracted feature vectors. This also includes technical information about files and file metadata. Queries formulated by the Cineast engine are delegated to ADAM, which executes them and returns the results.

4.3 The User Interface

The web-based user interface of vitrivr integrates query and display mechanisms for the different media types. It was built using Angular and has been written in TypeScript 2.1. Communication with Cineast takes place through the aforementioned RESTful and WebSocket APIs. An impression of the UI is given in Figure 3.

The main design goal for the UI was to build a modular, extensible, web-based user interface that maintains the functionality of the original version [1] and extends it, so as to support queries for and across different modalities. This includes not only building such queries but also presenting the different types of results in a consistent manner. The Angular framework is well suited to that end, because its modular architecture allows us to easily add and remove components as the stack evolves.

Figure 3: Impression of the web-based user interface for the system. On the left-hand part, users can formulate queries. Results are presented in the middle part of the UI. The right-hand part of the UI can be used to refine the result set after a query has been issued, by filtering media types or weighting features differently. The green color coding is used to indicate relevance.

Mainly, the user interface assists the user in composing queries from different building blocks. The first block is called query component. Each query component consists of multiple query terms that can be toggled. A query component must contain at least one active query term, and only one query term of a given type can be active per component. The individual query terms differ in the kind of reference document that is being used. For example, the image query term enables users to upload a reference image or sketch one themselves, whereas the audio query term allows the user to upload short audio clips. Some examples of query terms supported by vitrivr are depicted in Figure 1. Upon execution, the query terms within a query component are connected by a logical AND relationship, whereas different query components are connected by a logical OR. This simple model allows the user to formulate complex queries within and across different modalities. The scheme can also be easily extended with new types of queries, like for instance motion sketches or search for textual data, by just adding a new type of query term.
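One way to realize the AND/OR combination is fuzzy-logic-style score aggregation: minimum within a component, maximum across components. This interpretation is our illustrative assumption; the text above specifies only the logical relationships, not the exact score fusion:

```python
def combined_score(components, doc_scores):
    """Combine per-query-term similarity scores (0..1) for one document:
    terms within a component are AND-combined (minimum), while the
    components themselves are OR-combined (maximum)."""
    per_component = [min(doc_scores[t] for t in terms) for terms in components]
    return max(per_component)
```

For example, a document scoring 0.9 on an image term and 0.5 on an audio term in one component, and 0.3 on a sketch term in a second component, ends up with a combined score of 0.5.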

5 Evaluation

We evaluate vitrivr’s retrieval effectiveness in terms of utility for the end user. The evaluation is based on two test sets (A and B) that comprise twelve similar scenarios each. There are three scenarios per domain, that is, image (1 to 3), audio (4 to 6), video (7 to 9) and 3D model retrieval (10 to 12) and each scenario focuses on different aspects within the respective domain. A scenario comprises a simple objective (information need), which the users are expected to carry out using the web-based user interface. Each scenario is described textually and in some cases the textual description is supported by an illustrative image, for example depicting the scene the user should find in a video. Unless otherwise stated, the users are not allowed to use these helper images directly as reference for the query. However, they are allowed to, for example, use a helper image as template for creating their own sketch. Figure 4 gives some examples of these helper images. Some scenarios also comprise additional material the participants can use to perform the task at hand. For instance, we provide short audio snippets for the audio retrieval tasks.

In general, users are free to execute as many queries as they please until they are either satisfied with the result or decide to give up. Here, we apply the principle that searching is an iterative process [39]. Unless otherwise stated, users are allowed to leverage all of the UI’s capabilities, namely QbS, QbE (query based on an external document), and More-Like-This (query based on a previously retrieved document) queries. Additionally, they may also use the refinement functionality provided by the UI. Applying the principle of least effort [39, 40], we expect the users to take the course of action that they believe to be connected with the least expenditure. Once a user has obtained and accepted a result for a scenario, they are required to rate the top documents on the following four-point scale:

  • 0 — Resulting document is considered irrelevant.

  • 1 — Resulting document is considered slightly relevant.

  • 2 — Resulting document is considered very relevant.

  • 3 — Resulting document is considered highly relevant, close to identity.

The relevance judgments are then aggregated into MAP, MRR, NDCG@15, and p@15 values per scenario. For the binary metrics, ratings of very relevant and highly relevant are considered hits, whereas ratings of irrelevant and slightly relevant are considered misses. Furthermore, we decide for each scenario whether the user was able to fulfill the objective (success rate), based on the presence of at least one high-relevance rating, and how many queries were required on average.
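The aggregation into these metrics can be sketched as follows, using linear gains for NDCG and binarizing graded ratings at "very relevant" for the binary metrics. This is one common variant; the exact formulas used in the evaluation may differ:

```python
import numpy as np

def precision_at_k(ratings, k=15, hit=2):
    """p@k with graded ratings binarized at `hit` (2 = very relevant);
    result lists shorter than k are padded with misses."""
    r = list(ratings[:k]) + [0] * (k - len(ratings[:k]))
    return sum(x >= hit for x in r) / k

def reciprocal_rank(ratings, hit=2):
    """1/rank of the first hit, or 0 if there is none."""
    for i, x in enumerate(ratings):
        if x >= hit:
            return 1.0 / (i + 1)
    return 0.0

def ndcg_at_k(ratings, k=15):
    """NDCG with linear gains: DCG of the ranking divided by the DCG
    of the ideal (rating-sorted) ranking."""
    gains = np.array(ratings[:k], dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))
    dcg = (gains * discounts).sum()
    idcg = (np.sort(gains)[::-1] * discounts).sum()
    return dcg / idcg if idcg else 0.0
```

A perfectly ordered result list thus yields NDCG@15 of 1.0 even when only a few of the 15 items are relevant, which is why NDCG@15 and p@15 can diverge so strongly in Table 2.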

5.1 Test Collections

For the evaluation, we have assembled our own test collections. We selected random items from the Freemusicarchive, Pixabay, and Thingiverse. In order to construct examples for audio matching, we included specific instances of contemporary and popular music from various sources. Table 1 lists all the collections that were used in the evaluation.

5.2 Results

The outcome of the user-driven evaluation is summarized in Table 2. In total, 25 datasets were gathered: 13 participants worked through test set A and the other 12 through test set B.

Domain  Name              Entries  References
Image   Pixabay           164512   -
Video   OSVC              200      See [41]
Audio   Freemusicarchive  4335     -
Audio   Misc. sources     62       -
3D      NTU 3D (1-4)      4003     See [42]
3D      Thingiverse       8966     -
Table 1: List of all the collections used during the evaluation. The table contains information regarding the size and source.
#    NDCG@15      p@15         MRR          MAP          Success rate  # queries
     A     B      A     B      A     B      A     B      A     B       A     B
1    0.91  0.49   0.60  0.17   0.77  0.33   0.33  0.12   0.92  0.67    1.8   1.7
2    0.96  0.94   0.13  0.16   0.92  1.0    0.10  0.11   0.92  1.0     2.2   1.1
3    0.67  0.87   0.11  0.08   0.63  0.80   0.08  0.08   0.69  1.0     3.3   2.2
4    0.97  0.98   0.08  0.07   1.0   1.0    0.07  0.07   1.0   1.0     1.0   1.0
5    0.61  0.97   0.07  0.08   0.47  1.0    0.06  0.08   0.85  1.0     1.8   1.0
6    0.87  0.86   0.14  0.13   1.0   0.92   0.10  0.10   1.0   1.0     1.2   1.3
7    0.85  0.30   0.11  0.03   0.56  0.17   0.08  0.03   0.77  0.33    1.7   2.3
8    0.74  0.77   0.14  0.09   0.63  0.66   0.10  0.08   1.0   1.0     1.0   1.2
9    0.73  0.54   0.07  0.05   0.64  0.39   0.07  0.05   0.92  0.67    1.7   2.0
10   0.83  0.93   0.12  0.10   0.70  1.0    0.09  0.08   0.69  1.0     2.8   4.5
11   0.97  0.92   0.69  0.23   1.00  1.0    0.38  0.15   1.0   1.0     2.6   1.6
12   0.89  0.90   0.26  0.34   0.77  1.0    0.16  0.20   0.77  1.0     2.2   2.5
Table 2: Averaged, per-scenario results of the user-driven evaluation for test sets A and B. The success rate indicates in how many cases at least one highly relevant item was obtained. The last column indicates how many queries were executed on average.

5.3 Discussion

In this section, we discuss the evaluation results scenario by scenario. Scenarios 1 and 2 aimed at QbE for images. In the first task, users were free to pick a reference image of their choice, whereas for the second task, users were provided with a slightly altered version (blurring, discoloration) of an image that was contained in the collection. Interestingly, not all users considered the results returned in A1 and B1 to be relevant. A success rate of 0.92 and 0.67, respectively, is not great for a simple task like this, and neither is an MRR of 0.77 and 0.33. As it turns out, the results depend strongly on the reference image. This is also reflected by the p@15 value, which is considerably higher for A1 than for B1. Our findings suggest that if the general color setting of the reference image remotely matches an image in the database, the latter is likely to rank high in the list of results, regardless of whether it depicts the same thing conceptually. Hence, the color features outweigh features that take local structures into consideration.

For scenarios A2 and B2, 92% and 100% of the users, respectively, were able to obtain the copy of the reference image, and the MRR of 0.92 and 1.0 indicates that the desired image was at rank 1 most of the time. Thus, the system seems to be fairly robust against minor alterations of the reference image. From the p@15 value, we must deduce that a majority of the remaining results were considered irrelevant or only slightly relevant. The NDCG@15 implies, however, that the ranking coincided well with the rating of the users.

Scenarios A3 and B3 involved QbS tasks for images. In both cases, the users had to find a particular logo or icon based on a sketch. Figure 4 depicts the helper image (4(b)) and an example sketch produced by a participant (4(e)) of scenario B3. Success rates of 0.69 and 1.0, respectively, and an MRR of 0.63 and 0.80 indicate that the majority of users were able to retrieve the item of interest and that the relevant item was placed in the top half of the result set. However, not all users managed to retrieve it in the case of A3. It is also worth noting that it took the users 3.3 and 2.2 queries on average to obtain the results. This is approximately one query more than for scenarios 1 and 2, which is likely because most users required multiple attempts at sketching the item of interest. Also, according to feedback, many users employed the More-Like-This functionality to push the desired item from higher ranks to the top.

Scenarios 4 and 5 were pure audio fingerprinting tasks, of which A4 and B4 used a plain music segment as reference document and A5 and B5 used a music excerpt overlaid with a mix of white and pink noise. With the exception of A5, the success rate was 1.0 for these tasks, which means that the audio segment in question could always be retrieved. The MRR value for these tasks (again with the exception of A5) was 1.0, indicating that the relevant document had the top rank. The remaining results in the top 15 ranks can be considered irrelevant hits, which explains the low p@15 values. As most users seem to have agreed with the proposed ranking (3 for the first item, 0 for the rest), the NDCG@15 tends to be close to 1.0. In summary, one can state that the fingerprinting works and exhibits some robustness to noise.

Scenarios A6 and B6 were audio matching scenarios. In both scenarios, the users were supposed to find the original version and a cover version of the same musical piece. Again, the high success rate of 1.0 for both tasks indicates that at least one of the versions could always be retrieved. The MRR here lies between 0.92 and 1.0, which indicates a rank of 1 or 2 for the first, highly relevant item in the list. The cover version was also retrieved in most cases. Furthermore, some of the other top-15 items were considered to be highly or at least very relevant. Both these facts contribute to a p@15 value between 0.13 and 0.14. The NDCG@15 value indicates that the ranking coincides with the user rating in many instances. It is, however, not perfect.

(a) A7, illustrative image
(b) B3, illustrative image
(c) B12, illustrative image
(d) A7, user sketch (✓)
(e) B3, user sketch (✓)
(f) B12, user sketch (✓)
Figure 4: A selection of illustrations shown to and sketches produced by the participants of the evaluation. The examples here include QbS tasks for video (A7), still images (B3), and 3D models (B12). All the depicted sketches were conducive to successfully retrieving an object of interest from the corpus.

Scenarios A7 and B7 were pure QbS tasks for video in which users were asked to sketch a scene based on a presented helper image. Figure 4 depicts the helper image (4(a)) and an example sketch produced by a participant (4(d)) of scenario A7. Judging from the comparatively low success rates of 0.77 and 0.33, respectively, and the low MRR values, this task was very challenging for users to complete. Based on user feedback, especially the B7 reference image used a very disadvantageous color palette that was difficult to reproduce without the help of advanced painting tools. These examples – together with A3 and B3 – confirm the difficulties of QbS, especially for complex imagery. Again, it is worth noting that both A7 and B7 required 1.7 to 2.3 queries per image on average — more than the QbE-based tasks.

In scenarios A8 and B8, participants were tasked to combine an audio excerpt with a reference image of their choice to find a particular scene in a video. The success rate of 1.0 is positively surprising, especially as it is higher than for A9 and B9, where users were only allowed to use a reference image alone. This indicates that adding another modality to the mix indeed brings some advantages, even though the relative contribution of the audio features depends on the provided reference image. However, the ranking of the results does not always seem to agree with the user ratings, as we can read from the NDCG value between 0.74 and 0.77, and precision tends to be rather low. The latter can be attributed to the fact that we were actually looking for a particular scene which is unique in the entire collection, both in terms of the visual as well as the auditory part. As in scenarios A4, A5, B4, and B5, the audio fingerprinting feature, which was used most of the time, reliably produces one accurate hit and a lot of seemingly unrelated results.

In scenarios A9 and B9, users had to retrieve a specific scene based on a provided but distorted image (A9) or an example image of their choice (B9). Unsurprisingly, the success rate for A9 (0.92) was considerably higher than for B9 (0.67). Also, the MRR in both cases was relatively low, ranging between 0.39 and 0.64. This indicates that it was difficult for the users to bring the desired video to the top rank.

Scenarios A10 and B10 tasked the users with finding a type of 3D model based on a 2D sketch. For example, in scenario A10, users were asked to retrieve a model of the Starship Enterprise. From the success rates of 0.69 and 1.0, respectively, and the MRR of 0.70 and 1.0, we can deduce that most of the users succeeded in finding a relevant model and that, if they found it, it was ranked at the top position. However, as for all the QbS tasks so far, the number of queries is comparatively higher than for the other tasks. In fact, users required an average of 2.8 and 4.5 queries in order to fulfill A10 and B10, respectively.

Scenarios A11 and B11 were QbE tasks for 3D model retrieval, each involving a provided reference document. This task was apparently straightforward in both cases. Both scenarios show a success rate and an MRR value of 1.0, which means that highly relevant items could always be obtained and were always placed at the top rank. What is interesting, though, is the large discrepancy in p@15 values, which was 0.69 for A11 and 0.23 for B11. This can be attributed to the difference in retrieval performance of the spherical harmonics features for different classes of objects, as reported by [29, 30].

In scenarios A12 and B12, we asked the users to find a described 3D model of interest by whatever means they preferred. They were allowed to use external resources like Google. Interestingly, most users chose the QbS mode here and were able to obtain relevant items in most of the cases. Figure 4 depicts the helper image (4(c)) and an example sketch produced by a participant (4(f)) of scenario B12. The scenario stated that the user was supposed to find a 3D model of a gear. The p@15 values of 0.26 and 0.34 indicate that additional items were found that were considered at least marginally relevant. Judging from the NDCG@15, the ranking by vitrivr coincided well with the rating provided by the users.

6 Conclusion and Future Work

In this paper, we have demonstrated that vitrivr, a software stack originally designed for content-based video retrieval, can be seamlessly extended to support additional modalities such as audio, images, and 3D models.

We have added feature modules that describe different properties of images, music, and 3D models, building on ideas from various authors. With these, we could show that combining modalities, as in the case of video, can have a positive effect on retrieval performance.
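The combination of several feature modules can be pictured as a weighted late fusion of per-feature similarity scores. The sketch below is our own simplification of this idea, not vitrivr's actual scoring code; feature names and weights are illustrative.

```python
def fuse_scores(per_feature_scores, weights):
    """Weighted late fusion: sum weighted per-feature scores per object,
    then rank objects by the fused score (descending).

    per_feature_scores: {feature_name: {object_id: score in [0, 1]}}
    weights: {feature_name: weight}
    """
    fused = {}
    for feature, scores in per_feature_scores.items():
        w = weights.get(feature, 1.0)
        for obj, score in scores.items():
            fused[obj] = fused.get(obj, 0.0) + w * score
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

scores = {
    "color":  {"v1": 0.9, "v2": 0.4},
    "motion": {"v1": 0.2, "v2": 0.8},
}
print(fuse_scores(scores, {"color": 0.5, "motion": 0.5}))
# "v2" ranks first once both modalities are taken into account
```

The benefit of such fusion is that an object that is mediocre under every single feature can still outrank one that excels under only one of them.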

To the best of our knowledge, we have thereby created the first integrated content-based multimedia retrieval stack, taking the idea behind [6, 43] one step further. At the same time, we have laid the foundation for future work in the multimedia retrieval domain, as vitrivr can serve as a framework to design, implement, and test new retrieval techniques and to adapt them to specific use cases and requirements. The entire vitrivr stack has been made available as open source software and can be downloaded from GitHub.

In our future work, we plan to extend the audio retrieval capabilities of vitrivr. The majority of the music features added as part of this work are based on chroma and melody. We will therefore investigate additional features based on, for instance, rhythm and tempo, and assess their influence on retrieval effectiveness for mid-specificity tasks such as version identification and audio matching. In addition, we plan to lower the specificity beyond that of audio matching and to add further query modes such as Query-by-Humming. We also want to add support for other types of audio, such as speech and environmental sounds.
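Chroma features, on which most of the added music features are based, fold spectral energy into the twelve pitch classes. The following sketch is our own illustration of this basic idea, operating on a plain FFT magnitude frame rather than vitrivr's actual audio pipeline.

```python
import math

def chroma_vector(magnitudes, sample_rate, n_fft, ref_freq=440.0):
    """Fold the magnitudes of one FFT frame into a 12-bin pitch-class profile.

    Each frequency bin is mapped to its nearest MIDI pitch and then to a
    pitch class (0 = C, ..., 9 = A); magnitudes are accumulated per class
    and the result is normalized to sum to 1.
    """
    chroma = [0.0] * 12
    for k, mag in enumerate(magnitudes):
        freq = k * sample_rate / n_fft
        if freq < 27.5:  # skip bins below A0
            continue
        midi = 69 + 12 * math.log2(freq / ref_freq)
        chroma[round(midi) % 12] += mag
    total = sum(chroma)
    return [c / total for c in chroma] if total else chroma

# A single peak at 440 Hz (bin 44 with 10 Hz resolution) lands in class A (index 9).
frame = [0.0] * 100
frame[44] = 1.0
print(chroma_vector(frame, sample_rate=44100, n_fft=4410))
```

Because octave information is discarded, chroma-based features are robust to changes in instrumentation and register, which is what makes them suitable for version identification and audio matching.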

On the 3D model retrieval side, it would be interesting to take the QbS paradigm one step further and to add support for QbE with arbitrary images. This would enable completely novel use cases, such as finding 3D models that were used in a scene of a rendered video, but it would also require more advanced image segmentation. Last but not least, we plan to combine the current state of the vitrivr stack with recent developments in deep learning. To that end, we have already added an integration layer for TensorFlow-based models.


This work was partly supported by the Swiss National Science Foundation, project IMOTION (20CH21_151571).


References

  • [1] Luca Rossetto, Ivan Giangreco, Claudiu Tănase, and Heiko Schuldt. vitrivr: A Flexible Retrieval Stack Supporting Multiple Query Modes for Searching in Multimedia Collections. In Proceedings of the 2016 ACM Conference on Multimedia, pages 1183–1186, Amsterdam, The Netherlands, 2016. ACM.
  • [2] Luca Rossetto, Ivan Giangreco, Heiko Schuldt, Stéphane Dupont, Omar Seddati, Metin Sezgin, and Yusuf Sahillioğlu. Imotion—a content-based video retrieval engine. In International Conference on Multimedia Modeling, pages 255–260. Springer, 2015.
  • [3] Ivan Giangreco and Heiko Schuldt. ADAM: Database Support for Big Multimedia Retrieval. Datenbank-Spektrum, 16(1):17–26, 2016.
  • [4] Luca Rossetto, Ivan Giangreco, and Heiko Schuldt. Cineast: a multi-feature sketch-based video retrieval engine. In 2014 IEEE International Symposium on Multimedia, pages 18–23. IEEE, 2014.
  • [5] Myron Flickner, Harpreet Sawhney, Wayne Niblack, Jonathan Ashley, Qian Huang, Byron Dom, Monika Gorkani, Jim Hafner, Denis Lee, Dragutin Petkovic, David Steele, and Peter Yanker. Query by image and video content: the QBIC system. Computer, 28(9):23–32, 1995.
  • [6] Serkan Kiranyaz, Kerem Caglar, Esin Guldogan, Okay Guldogan, and Moncef Gabbouj. MUVIS: A Content-based Multimedia Indexing and Retrieval Framework. In Seventh International Symposium on Signal Processing and Its Applications, 2003. Proceedings., volume 1, pages 1–8. IEEE, 2003.
  • [7] Nidhi Singhai and Shishir K Shandilya. A survey on: content based image retrieval systems. International Journal of Computer Applications, 4(2):22–26, 2010.
  • [8] Dengsheng Zhang and Guojun Lu. A Comparative Study of Fourier Descriptors for Shape Representation and Retrieval. In ACCV2002: The 5th Asian Conference on Computer Vision, pages 1–6, Melbourne, Australia, 2002.
  • [9] David G Lowe. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
  • [10] Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool. Speeded-Up Robust Features (SURF). Computer Vision and Image Understanding, 110:346–359, September 2008.
  • [11] Hervé Jégou, Matthijs Douze, Cordelia Schmid, and Patrick Pérez. Aggregating local descriptors into a compact image representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3304–3311, San Francisco, CA, USA, 2010. IEEE.
  • [12] Florent Perronnin and Christopher Dance. Fisher kernels on visual vocabularies for image categorization. In 2007 IEEE conference on computer vision and pattern recognition, pages 1–8. IEEE, 2007.
  • [13] Jun Yang, Yu-Gang Jiang, Alexander G Hauptmann, and Chong-Wah Ngo. Evaluating bag-of-visual-words representations in scene classification. In Proceedings of the International Workshop on Multimedia Information Retrieval, volume 63, pages 197–206, Augsburg, Germany, 2007.
  • [14] Jialu Liu. Image retrieval based on bag-of-words model. arXiv preprint arXiv:1304.5168, 2013.
  • [15] Nasir Ahmad. Evaluation of SIFT and SURF using Bag of Words Model on a Very Large Dataset. Sindh University Research Journal (Science Series), 45(3):492–495, 2013.
  • [16] J Stephen Downie, Andreas F Ehmann, Mert Bay, and M Cameron Jones. The music information retrieval evaluation eXchange: Some observations and insights. Advances in Music Information Retrieval, pages 93–115, 2010.
  • [17] Peter Grosche, Meinard Müller, and Joan Serrà. Audio content-based music retrieval. In Dagstuhl Follow-Ups, volume 3. 2012.
  • [18] Jonathan T Foote. Content-based retrieval of music and audio. In Proc. SPIE 3229, Multimedia Storage and Archiving Systems II, pages 138–147, 1997.
  • [19] Avery Wang. The Shazam Music Recognition Service. Communications of the ACM, 49(8):44–48, 2006.
  • [20] Sigurdur Sigurdsson, Kaare Brandt Petersen, and Tue Lehn-Schiøler. Mel frequency cepstral coefficients: An evaluation of robustness of mp3 encoded music. In ISMIR, pages 286–289, 2006.
  • [21] Takuya Fujishima. Realtime Chord Recognition of Musical Sound: A System Using Common Lisp Music. In ICMC Proceedings, volume 9, pages 464–467, 1999.
  • [22] Emilia Gómez. Tonal Description of Music Audio Signals. Doctoral dissertation, Universitat Pompeu Fabra, Barcelona, 2006.
  • [23] Frank Kurth and Meinard Müller. Efficient Index-Based Audio Matching. IEEE Transactions on Audio, Speech and Language Processing, 16(2):382–395, 2008.
  • [24] Justin Salamon, Joan Serrà, and Emilia Gómez. Tonal representations for music retrieval: from version identification to query-by-humming. International Journal of Multimedia Information Retrieval, 2(1):45–58, 2013.
  • [25] Peter Grosche and Meinard Müller. Toward characteristic audio shingles for efficient cross-version music retrieval. In International Conference on Acoustics, Speech and Signal Processing, pages 473–476. IEEE, 2012.
  • [26] Michael Casey and Malcolm Slaney. Song Intersection by Approximate Nearest Neighbor Search. In Proc Int Society Music Information Retrieval Conf (ISMIR), pages 144–149, 2006.
  • [27] Michael Casey, Christophe Rhodes, and Malcolm Slaney. Analysis of Minimum Distances in High-Dimensional Musical Spaces. IEEE Transactions on Audio, Speech and Language Processing, 16(5):1015–1028, 2008.
  • [28] Piotr Indyk and Rajeev Motwani. Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality. In Proceedings of the 30th Annual ACM Symposium on Theory of Computing, pages 604–613, 1998.
  • [29] Johan W H Tangelder and Remco C Veltkamp. A survey of content based 3D shape retrieval methods. Multimedia Tools and Applications, 39(3):441–471, 2007.
  • [30] Benjamin Bustos, Daniel Keim, Dietmar Saupe, Tobias Schreck, and Dejan Vranić. An experimental effectiveness comparison of methods for 3D similarity search. International Journal on Digital Libraries, 6(1):39–54, 2006.
  • [31] Dietmar Saupe and Dejan V. Vranić. 3D model retrieval with spherical harmonics and moments. In Proceedings of the 23rd DAGM-Symposium on Pattern Recognition, volume 2191, pages 392–397. Springer, 2001.
  • [32] Michael Kazhdan, Thomas Funkhouser, and Szymon Rusinkiewicz. Rotation Invariant Spherical Harmonic Representation of 3D Shape Descriptors. In Eurographics Symposium on Geometry Processing, volume 43, pages 156–164, 2003.
  • [33] Ding-Yun Chen, Xiao-Pei Tian, Yu-Te Shen, and Ming Ouhyoung. On Visual Similarity Based 3D Model Retrieval. In Eurographics, volume 22, pages 313–318, 2003.
  • [34] R.K. McConnell. Method of and apparatus for pattern recognition, 1986. US Patent 4,567,610.
  • [35] Mark J Huiskes and Michael S Lew. The MIR Flickr Retrieval Evaluation. In ACM International Conference on Multimedia Information Retrieval (MIR’08), Vancouver, Canada, 2008.
  • [36] Meinard Müller, Frank Kurth, and Michael Clausen. Chroma-based statistical audio features for audio matching. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2005., pages 275–278, New Paltz, NY, USA, 2005. IEEE.
  • [37] Yair Weiss, Antonio Torralba, and Rob Fergus. Spectral Hashing. Advances in Neural Information Processing Systems, (1):1–8, 2008.
  • [38] Roger Weber, Hans-Jörg Schek, and Stephen Blott. A Quantitative Analysis and Performance Study for Similarity-Search Methods in High-Dimensional Spaces. In Proceedings of the 24th VLDB Conference, pages 194–205, 1998.
  • [39] Bernard J Jansen and Soo Young Rieh. The Seventeen Theoretical Constructs of Information Searching and Information Retrieval. Journal of the American Society for Information Science and Technology, 61(8):1517–1534, 2010.
  • [40] George Kingsley Zipf. Human Behavior and the Principle of Least Effort. Addison-Wesley, 1949.
  • [41] Luca Rossetto, Ivan Giangreco, and Heiko Schuldt. OSVC-Open Short Video Collection 1.0. Technical report, University of Basel, Basel, 2015.
  • [42] Ding-Yun Chen, Xiao-Pei Tian, Yu-Te Shen, and Ming Ouhyoung. On Visual Similarity Based 3D Model Retrieval. Computer Graphics Forum, 22(3):223–232, 2003.
  • [43] Patrick M Kelly and Michael Cannon. Query by image example: the CANDID approach. Integration The Vlsi Journal, (April):20–24, 2000.