Suppose that you move to a new city and are interested in exploring the local music scene. Typically, you might pick up the arts section of the local newspaper or go online to find a community notice board. Either way, you would likely come to a long listing of music events where each event description provides a small amount of contextual information: the names of the artists, the name and location of the venue, the date and start time of the event, the price of the tickets, and perhaps a few genre labels or a sentence fragment that reflects the kind of music you would expect to hear at the event.
While this “public list of events” model has been successful at getting fans to music events for many decades, we can use modern recommender systems to make music event discovery more efficient and effective. For example, companies like BandsInTown (https://www.bandsintown.com) and SongKick (https://www.songkick.com/) help users track artists so that the user can be notified when a favorite artist will be playing nearby. They also recommend upcoming events with artists who are similar to one or more of the artists that the user has selected to track. These services have been successful in growing both the number of users and the number of artists and events covered by their service. For example, BandsInTown claims to have 38 million users and lists events for over 430,000 artists (according to https://en.wikipedia.org/wiki/Bandsintown on March 28, 2018). Event listings are added by aggregating information from ticket sellers (e.g., Ticketmaster https://www.ticketmaster.com/, TicketFly https://www.ticketfly.com/) and by artist managers and booking agents who can directly upload tour dates for their touring artists to these services.
While this coverage is impressive, a large percentage of the events found in local newspapers are not listed on these commercial music event recommendation services. Many talented artists play at small venues (e.g., neighborhood pubs, coffee shops, and DIY shows) and are often not represented by (pro-active, tech-savvy) managers. Yet many music fans enjoy the intimacy of a small venue and a personal connection with local artists and may have a hard time discovering these events.
As such, our goal is to develop a locally-focused music event recommendation system to help foster music discovery within a local music community. Here we define local as all music events within a small geographic region (e.g., 10 square miles). This includes national and regional touring acts who may pass through town but it also includes non-touring artists (e.g., a high school punk band, a barber shop quartet, a jazz trio from the nearby music conservatory, or a neighborhood hip hop collective.)
What makes this problem technically challenging is that a large percentage of our local artists have a small digital footprint or no digital footprint at all. That is, we may not be able to find these artists on sites that typically provide useful music information (e.g., Spotify https://developer.spotify.com/, Last.fm https://www.last.fm/api, AllMusic https://www.allmusic.com/). Similarly, we often do not have music recordings from these artists, so we will not be able to make use of content-based methods for automatic tagging or acoustic similarity. Rather, we will rely on the small amount of contextual information that can be scraped from the event listings in the local newspaper or community notice board.
We will first introduce the concept of a Music Event Graph as a 4-partite graph that connects genre tags to popular artists to event artists to events. We then use latent semantic analysis (LSA) to embed tags and artists into a latent feature space. We show that LSA is particularly advantageous when considering new or not well-known (long-tail) artists who have small digital footprints. This approach also allows us to independently control the popularity bias of our event recommendation algorithm so that events with popular artists are no more or less likely to be recommended than events featuring more obscure local artists.
2 Related Work
We have been unable to find previous research on the specific task of music event recommendation, though there is a significant amount of work on both music recommendation [4, 16] (i.e., recommending songs and artists) and event recommendation [11, 6, 14] (i.e., events posted on social networking sites). In both cases, it is common to explore content-based (i.e., the substance of an item), collaborative filtering-based (e.g., usage patterns from a large group of users), and hybrid approaches. We consider our approach to be a hybrid approach since we make use of both social tags (content) and artist similarity scores (collaborative filtering) from Last.fm. (While the details of the Last.fm algorithm for computing artist similarity remain a corporate trade secret, it would be reasonable to expect that these scores are computed using some form of collaborative filtering based on the large quantities of user listening histories that they collect.)
As with many successful recommender systems, we make use of matrix factorization to embed data into a low-dimensional space. In particular, we use Latent Semantic Analysis (LSA), which is a common approach used in both text information retrieval and music information retrieval systems (e.g., [10, 9, 15]). LSA is relatively easy to implement (for example, see http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html), can improve recommendation accuracy, provides a compact representation of the data, works well with sparse input data, and can help alleviate problems caused by synonymy and polysemy. We should note that other embedding techniques, such as probabilistic LSA and latent Dirichlet allocation (LDA), could also be used as an alternative to LSA.
3 Event Recommendation
When developing an event recommendation system, we will consider an interactive experience with three steps:
User selects genre tags: Ask the user to select one or more tags from a list of broad genres (“rock”, “hip hop”, “reggae”) based on the most common genres of the artists who are playing at upcoming local events.
User selects preferred popular artists: Ask the user to select one or more artists from a list of recognizable mainstream artists (The Beatles, Jay-Z, Bob Marley) based on the selected genres and related to the artists who are playing at upcoming events.
Display of recommended event list: Show recommended events (with justification) to the user based on the selected genre tag and popular artist preferences.
This is a common onboarding process for both commercial music event services (e.g., BandsInTown) and music streaming services (e.g., Apple Music) since it quickly gives recommender systems a small but sufficient amount of music preference information for new users. After onboarding, a user can drill down into specific artists or events, as well as listen to related music, explore a map of venues, etc.
In this section, we describe the concept of a Music Event Graph and show how we can use it to efficiently recommend local music events based on the music preference information that is collected during user onboarding.
3.1 Music Event Graphs
When considering event recommendation, there are two phases that we need to consider: offline computation of relevance information for all upcoming events and real-time personalized event recommendation. We will use a Music Event Graph to help us structure our event recommendation system. The music event graph is a k-partite graph with k = 4 levels. Our four levels represent common genre tags, popular artists, event artists, and events, as shown in Figure 1.
To construct the graph, we take the following steps:
Collect a set of upcoming local events
Construct the set of event artists from all of the local events
Find the most frequently used genre tags (e.g., “rock”, “jazz”, “hip hop”) associated with the event artists.
Using the genre tags, create a set of popular artists by selecting the most well-known artists that are strongly associated with each genre.
For each event artist, find the most similar artists from the set of popular artists.
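The construction steps above can be sketched in code. This is a minimal illustration, not the authors' implementation; the data formats (event tuples, nested weight dictionaries) and the function name are our own assumptions:

```python
from collections import defaultdict

def build_event_graph(events, similar_artists, top_tags):
    """Build the 4-partite Music Event Graph as weighted adjacency maps.

    events: list of (event_id, [artist names]) pairs (hypothetical format)
    similar_artists: maps an event artist to {popular_artist: weight}
    top_tags: maps a popular artist to {genre_tag: weight}
    """
    graph = {
        "tag_to_popular": defaultdict(dict),          # genre tag -> popular artists
        "popular_to_event_artist": defaultdict(dict), # popular -> event artists
        "event_artist_to_event": defaultdict(set),    # event artist -> events
    }
    for event_id, artists in events:
        for artist in artists:
            graph["event_artist_to_event"][artist].add(event_id)
            for popular, w in similar_artists.get(artist, {}).items():
                graph["popular_to_event_artist"][popular][artist] = w
                for tag, tw in top_tags.get(popular, {}).items():
                    graph["tag_to_popular"][tag][popular] = tw
    return graph
```

The weights on the edges are the artist similarity and tag affinity scores described in Section 4.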
In Section 4, we will describe how we use harvested tags and artist similarity information to compute similarity between pairs of artists, as well as between artists and tags. These similarities are represented as real-valued weights, and as such, the event graph contains weighted edges.
Based on the interactive design described above, we can efficiently recommend events using a Music Event Graph. The user selects one or more preferred genres and then a set of relevant popular artists. Next, our algorithm selects the event artists and their related events that are connected to the user’s selected genres and popular artists. This graph traversal algorithm is depicted in Figure 2.
We note that our algorithm uses the weighted edges to compute user-specific relevance scores for each event as we move from left to right in the graph structure. In addition, we can use the graph structure to provide recommendation transparency by keeping track of the paths that are used to get from the user’s genre and popular artist selections to the recommended event artists and events.
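A sketch of this weighted left-to-right traversal, assuming the graph is stored as nested weight dictionaries with the (hypothetical) level names tag_to_popular, popular_to_event_artist, and event_artist_to_event:

```python
def recommend_events(graph, selected_tags, selected_popular):
    """Score events by traversing tag -> popular artist -> event artist -> event,
    accumulating products of edge weights and recording each path so the
    recommendation can be explained to the user."""
    scores, paths = {}, {}
    for tag in selected_tags:
        for popular, w1 in graph["tag_to_popular"].get(tag, {}).items():
            if popular not in selected_popular:
                continue
            for artist, w2 in graph["popular_to_event_artist"].get(popular, {}).items():
                for event in graph["event_artist_to_event"].get(artist, ()):
                    scores[event] = scores.get(event, 0.0) + w1 * w2
                    paths.setdefault(event, []).append((tag, popular, artist))
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked, paths
```

Multiplying the edge weights along a path and summing over paths is one plausible scoring rule; the paper does not specify the exact combination.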
4 Artist Similarity and Tag Affinity
At the core of the event recommendation system, we use Latent Semantic Analysis (LSA) when calculating artist similarity and artist-tag affinity. That is, we use truncated singular value decomposition (SVD) to transform a large, sparse data matrix of artist similarity and tag information into a lower dimensional matrix such that each artist and tag is embedded into a dense, k-dimensional latent feature space.
Before we describe LSA, we will start with some useful notation for our problem setup:
A: set of artists.
T: set of tags. Tags are any free-text token that can be used to describe music. This may include genres, emotions, instruments, usages, etc.
G ⊂ T: a small subset of genre tags (e.g., “rock”, “country”, “blues”) that are frequently used to categorize music.
P ⊂ A: set of popular artists where each artist in the set is one of the most recognizable artists associated with at least one of the genre tags in G.
E: set of local music events.
A_E ⊂ A: set of event artists where each artist has one or more upcoming events in E.
F: set of features where F = A ∪ T. That is, we will describe each artist as a (sparse) feature vector of artist similarity and tag affinity values in R^|F|.
X: (sparse) raw data matrix of dimension |A| × |F| where X[i, j] represents the affinity between the i-th artist (row) and the j-th feature (column). A value of 0 represents either no affinity or unknown affinity. Note that all artists are self-similar so that X[i, i] = 1. In terms of practical implementation, we can construct X by stacking our artist similarity matrix next to our artist-tag affinity matrix.
LSA uses the truncated SVD algorithm to decompose the raw data matrix X as follows:

X ≈ X_k = U_k Σ_k V_k^T

such that the matrix X_k is a rank-k approximation of X, U_k is an |A| × k matrix, Σ_k is a diagonal k × k matrix of singular values, and V_k is an |F| × k matrix. We will then project each artist and tag into a k-dimensional latent feature space:

Z = Σ_k V_k^T

where Z is a k × |F| matrix, or equivalently Z = U_k^T X_k by construction. That is, the first |A| columns of Z represent artists and the last |T| columns represent tags, all embedded into the same k-dimensional space. We can also embed a new artist with raw feature vector x ∈ R^|F| by computing

z = U_k^T x

so that z is projected into the same latent feature space.
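As an illustration of this decomposition and fold-in, here is a minimal sketch using a dense NumPy SVD (in practice a sparse truncated solver such as scikit-learn's TruncatedSVD would be used for a matrix of this size; the function names are ours):

```python
import numpy as np

def lsa_embed(X, k):
    """Truncated SVD of the raw data matrix X (artists x features).
    Returns U_k and the k x |F| embedding Z = S_k V_k^T = U_k^T X_k,
    whose columns are the embedded artists followed by the embedded tags."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]
    Z = np.diag(s_k) @ Vt_k
    return U_k, Z

def fold_in(U_k, x):
    """Embed a new artist with raw feature vector x into the same
    latent space: z = U_k^T x."""
    return U_k.T @ x
```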
Finally, we can compute artist-artist, artist-tag, or tag-tag similarity in the embedded space by comparing their respective (column) vectors in Z. For example, given two latent feature vectors z_a and z_b, we can compute their cosine similarity:

sim(z_a, z_b) = (z_a · z_b) / (‖z_a‖ ‖z_b‖)

where z_a and z_b are k-dimensional vectors and ‖z‖ is the l2-norm of a vector z. One nice property of cosine similarity is that it tends to remove popularity bias. That is, we normalize the feature vectors by their length (l2-norm) such that each artist (and tag) vector has the same length. Without this length normalization, popular artists, which tend to have a bigger digital footprint (resulting in a denser raw feature vector with a bigger l2-norm), would tend to produce larger similarity scores on average.
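This length-normalized similarity can be written as a small helper (the function name is ours); note that scaling either vector leaves the score unchanged, which is exactly the popularity-bias removal described above:

```python
import numpy as np

def cosine_similarity(z_a, z_b):
    """Cosine similarity between two latent feature vectors: the dot
    product of the l2-normalized vectors."""
    return float(z_a @ z_b / (np.linalg.norm(z_a) * np.linalg.norm(z_b)))
```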
5 Event and Artist Data
The data for our experiments is constructed by scraping local events from both TicketFly (https://www.ticketfly.com, scraped February 15, 2018) and the web-based public event calendar from a local newspaper (details omitted during the anonymous review process). We collected a total of 96 events: 66 events from TicketFly, 36 events from the local newspaper, and 6 overlapping events between both websites. These events produced a set of 154 event artists. We were also able to download short biographies of almost all of the event artists for events obtained from TicketFly. The local newspaper only provides us with 1 to 3 genre tags for about half of the events we obtained from their site.
We then used the Last.fm API (https://www.last.fm/api) to collect music information (popularity, biography text, artist similarity scores, and tag affinity scores) for each of our event artists. We then use snowball sampling on the similar artists and obtain this same Last.fm music information for each of them. We continue sampling these non-event artists until we have a set of 10,000 artists (i.e., |A| = 10,000).
We define our set of tags as the 1,585 tags that are associated with 20 or more artists. Our set of genre tags is the top 20 tags most frequently associated with our event artists; these include tags like “rock”, “jazz”, and “reggae”. We manually prune tags that are obviously not genres, like “seen live” and “favorites”. Finally, for each artist, we concatenate all available biographies (Last.fm, TicketFly, local newspaper) and attempt to find each of our tags in the combined biography text. If a tag is found, we label the artist with that tag. This is especially important since, otherwise, many of our event artists would not be labeled with any tags. In the end, we have 977,270 artist similarities and 456,867 artist-tag affinities.
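The biography-based tag labeling can be sketched as a simple case-insensitive substring match (an assumption on our part; the paper does not specify the exact matching rule):

```python
def tags_from_biography(bio_text, tag_vocabulary):
    """Label an artist with every vocabulary tag that appears in its
    combined biography text (Last.fm + TicketFly + newspaper)."""
    bio = bio_text.lower()
    return {tag for tag in tag_vocabulary if tag.lower() in bio}
```

A production system would likely want word-boundary matching to avoid spurious hits (e.g., the tag “emo” inside “emotion”), which is one reason manual pruning of the tag vocabulary matters.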
6 Exploring Artist Similarity in the Long Tail
The core of our local event recommendation algorithm is our artist similarity calculation based on Latent Semantic Analysis (LSA). In this section, we show that most local event artists are relatively obscure long-tail artists and that they tend to have small digital footprints. We also explore the relationship between digital footprint size and the accuracy of our artist similarity calculation.
6.1 Long-tail Event Artists
In the top plot of Figure 3, we rank all 10,000 of our artists by their Last.fm listener counts. This shows a typical long-tail (power-law) distribution where a small number of popular artists in the short head (left) receive much more attention than the vast majority of other artists in the long tail (right) [4, 1]. For example, 16.3% of the most popular artists represent 80% of the listener counts. In the bottom plot, we show a histogram of the event artists’ Last.fm listener counts broken down into deciles. We note that a disproportionate number of local event artists reside in the long tail of this popularity distribution. In particular, 99 of the 154 event artists (64.2%) are in the lowest three deciles of the ranking.
6.2 The Digital Footprint of Event Artists
As we discussed in the Introduction, obscure artists tend to have small digital footprints. To show this, we will consider the digital footprint of an artist to be the number of artist similarities plus the number of tag affinities for that artist. Equivalently, it is the number of nonzero values in the row of our raw data matrix that is associated with the artist. We note that digital footprint size is correlated with popularity rank such that popular artists tend to have a larger digital footprint.
In Figure 4, we plot the empirical cumulative distribution for both event artists and all artists as a function of digital footprint size. We see that about 27.2% of the event artists have a digital footprint of 15 or fewer features, whereas only 2.8% of all artists have such a small footprint. This suggests that it will be important for us to design an artist similarity algorithm that works well in this small digital footprint setting.
6.3 Artist Similarity with Latent Semantic Analysis
In Section 4, we introduced LSA as an algorithm for computing artist similarity. However, as we observed in the previous subsection, we are particularly interested in the case where an artist is represented by a small number of artist similarities and tag affinities (i.e., a small digital footprint). To explore this, we will artificially reduce the digital footprint of artists to a fixed size and see how accurately LSA is able to compute artist similarity.
To do this, we randomly split our data set of 10,000 artists into a training set with 9,000 artists and a test set of 1,000 artists. Note that this involves removing 1,000 rows and 1,000 columns from our raw data matrix since artists are also features. The training data will be used to calculate our matrix decomposition for a given embedding dimension k.
Before projecting the test set artists into the latent feature space, we limit the digital footprint size of each artist by randomly selecting artist similarity and tag affinity features to zero out. We can then project each test artist into the latent feature space and calculate the cosine distance between each pair of test set artists. Finally, we can calculate the Area Under the ROC Curve (AUC) for each artist where the original artist similarities serve as the ground truth.
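The footprint-reduction step can be sketched as follows (a hypothetical helper; the paper specifies only that features are randomly selected to zero out):

```python
import numpy as np

def reduce_footprint(x, m, rng):
    """Keep only m randomly chosen nonzero features of raw feature
    vector x, zeroing out the rest to simulate a small digital footprint."""
    nonzero = np.flatnonzero(x)
    keep = rng.choice(nonzero, size=min(m, nonzero.size), replace=False)
    reduced = np.zeros_like(x)
    reduced[keep] = x[keep]
    return reduced
```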
Figure 5 shows a plot of artificially reduced digital footprint size versus average AUC over the 1,000 test set artists for various LSA embedding dimensions. We also plot the curve for when we compute cosine distances between the raw test artist vectors without projecting into a latent feature space. Here we note that LSA shows an improvement over raw cosine distance in the small-footprint setting of between 1 and 16 nonzero features. Once the digital footprint is larger than 128 nonzero features, the raw cosine approach slightly outperforms the LSA-based approach. However, the compactness of representing each artist with 32 or 64 floating point numbers may be advantageous in terms of storage size and computation time when we consider a much larger set of artists and tags. As such, we will use 64-dimensional LSA embeddings for the remaining experiments in this paper.
7 Exploring Event Recommendation
To explore the performance of event recommendation using event graphs and LSA-based artist similarity, we conducted a small user study with a short 2-phase survey. We recruited 51 participants who were very familiar with the local music scene and attend live events in the area on a weekly basis. In the first phase of our survey, we asked participants to select between 1 and 3 genres from a set of 20 common genres. For each selected genre, the test subject was then asked to select between 1 and 3 artists from a set of 16 popular artists that were representative of the genre (i.e., having a high cosine similarity score between the 64-dimensional latent feature vectors of the genre and the artist.) In the second stage, participants were shown a list of the 154 event artists in our data set. They were asked to select all artists that they would like to see at a live event in the local area and were required to select 5 or more event artists.
To evaluate our system, we use each test subject’s selected genres or popular artists from phase 1 of the survey to rank order the 154 event artists using one of the approaches described below. In all cases, we embedded artists and tags into a 64-dimensional latent feature space using LSA with the data set that is described in Section 5. We then calculate the area under the ROC curve (AUC) for each user where ground truth relevance is determined from phase 2 of the survey.
Each test subject provides multiple genre and multiple popular artist preferences. We explore a number of ways to combine these preferences to produce one ranking of the event artists for each test subject. We consider early fusion and late fusion steps for a number of approaches. In early fusion, we start with a set of latent feature vectors where each vector is associated with one of the user's genre or artist preferences. We consider three approaches:
average: average the latent feature vectors into one vector
cluster: cluster the latent feature vectors and use the centroid vectors
none: use all of the latent feature vectors
When clustering, we use the k-means clustering algorithm (see http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html) with the number of clusters (k) equal to the rounded natural log of the number of user preferences.
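The cluster-count rule can be sketched directly (the minimum of one cluster for very small preference sets is our own assumption, since round(ln 1) = 0):

```python
import math

def num_clusters(num_preferences):
    """Number of k-means clusters: the rounded natural log of the
    number of user preference vectors, with a floor of 1."""
    return max(1, round(math.log(num_preferences)))
```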
For late fusion, we must output one ranking of the event artists for each user. We consider three approaches:
average cosine: ranks event artists by the average of the cosine similarity scores between the event artist vector and each vector in the set of user preference vectors.
average rank: creates one ranking of event artists for each user preference vector, calculates the average rank for each event artist over this set of rankings, and then ranks artists by this average rank.
interleave: creates one ranking of the event artists for each user preference vector, and then constructs a final ranking by alternating between these ranking lists and picking the top remaining artist that has not already been added to the final ranking.
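The interleave step can be sketched as a round-robin merge (a hypothetical implementation consistent with the description above):

```python
def interleave(rankings):
    """Merge several per-preference rankings into one final ranking by
    cycling through the lists and skipping artists already placed."""
    final, seen = [], set()
    for position in range(max(len(r) for r in rankings)):
        for ranking in rankings:
            if position < len(ranking) and ranking[position] not in seen:
                final.append(ranking[position])
                seen.add(ranking[position])
    return final
```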
Early / Late Fusion     Artists     Genres      Both
none / avg. cosine      .79 (.09)   .69 (.16)   .74 (.12)
none / avg. rank        .75 (.11)   .66 (.15)   .76 (.09)
none / interleave       .79 (.11)   .69 (.15)   .71 (.15)
average / cosine        .79 (.09)   .69 (.16)   .74 (.12)
cluster / avg. cosine   .78 (.09)   .69 (.18)   .74 (.11)
cluster / avg. rank     .74 (.13)   .66 (.20)   .68 (.17)
cluster / interleave    .78 (.10)   .69 (.16)   .75 (.09)
Table 1: Event artist recommendation performance. The mean and standard deviation of AUC for our 8 expert test subjects when considering popular artist preferences, genre preferences, and both preferences together. See text for details on the seven approaches and the two baselines.
Table 1 shows average AUCs (and standard deviations) for our seven early/late fusion approaches when we use each user’s popular artist preferences, genre tag preferences, and both sets of preferences together. We also include a popularity baseline that ranks all event artists by their Last.fm listener count as well as a random shuffle baseline. We observe that artist preferences alone result in the best performance and a number of our proposed early/late fusion approaches produce similar results.
We should also mention that we collected survey data from individuals who attended local shows on a less frequent (monthly) basis. The results for these test subjects were significantly lower (average AUC of 0.61) and more variable (AUC standard deviation of 0.15) for our best performing approach (Genre Preferences / None / Interleave). Having done error analysis on many of these less regular attendees, we often found that they selected a very eclectic set of event artists which did not match their preferences. As such, it would have been difficult for any recommender system to make accurate recommendations for many of these test subjects. This suggests that test subjects need to have a high level of familiarity with the local music community in order to provide useful ground truth for our experiment.
8 Conclusion
In this paper, we explored the understudied task of local music event recommendation. This is an exciting task for the research community because it involves many interesting problems: long-tail recommendation, the new user and new artist cold start problems, multiple types of music information (artist similarity, tags), and user preference modeling. It is also an interesting problem outside of the academic research community since music event recommender systems can be used to help grow and support the local arts community. By promoting the work of talented local musicians, such systems can help fans discover new artists and help musicians reach new audiences. These audiences in turn attend more events, which helps sustain concert venues, music festivals, and other (local) businesses that benefit from direct ticket sales and other forms of indirect support (e.g., food, drinks, merchandise).
While we were able to evaluate our system using a survey of local music experts, a more natural way to evaluate music event recommendation would be to build an interactive application that collects user feedback over a longer period of time. We plan to develop such an app in the coming months and hope that it will be useful for expanding on the research that is presented in this paper.
-  Chris Anderson. The long tail. Wired magazine, 12(10):170–177, 2004.
-  Luke Barrington, Reid Oda, and Gert RG Lanckriet. Smarter than genius? human evaluation of music recommender systems. In ISMIR, volume 9, pages 357–362. Citeseer, 2009.
-  David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022, 2003.
-  Oscar Celma. Music recommendation. In Music recommendation and discovery, pages 43–85. Springer, 2010.
-  Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American society for information science, 41(6):391, 1990.
-  Simon Dooms, Toon De Pessemier, and Luc Martens. A user-centric evaluation of recommender algorithms for an event recommendation system. In RecSys 2011 Workshop on Human Decision Making in Recommender Systems (Decisions@ RecSys’ 11) and User-Centric Evaluation of Recommender Systems and Their Interfaces-2 (UCERSTI 2) affiliated with the 5th ACM Conference on Recommender Systems (RecSys 2011), pages 67–73. Ghent University, Department of Information technology, 2011.
-  Thomas Hofmann. Probabilistic latent semantic analysis. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 289–296. Morgan Kaufmann Publishers Inc., 1999.
-  Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8), 2009.
-  Cyril Laurier, Mohamed Sordo, Joan Serra, and Perfecto Herrera. Music mood representations from social tags. In ISMIR, pages 381–386, 2009.
-  Mark Levy and Mark Sandler. A semantic space for music derived from social tags. International Society for Music Information Retrieval Conference, 1:12, 2007.
-  Augusto Q. Macedo, Leandro B. Marinho, and Rodrygo L.T. Santos. Context-aware event recommendation in event-based social networks. In Proceedings of the 9th ACM Conference on Recommender Systems, RecSys ’15, pages 123–130, New York, NY, USA, 2015. ACM.
-  Christopher D Manning, Prabhakar Raghavan, Hinrich Schütze, et al. Introduction to information retrieval, volume 1. Cambridge university press Cambridge, 2008.
-  Brian McFee, Luke Barrington, and Gert Lanckriet. Learning content similarity for music recommendation. IEEE transactions on audio, speech, and language processing, 20(8):2207–2218, 2012.
-  Einat Minkov, Ben Charrow, Jonathan Ledlie, Seth Teller, and Tommi Jaakkola. Collaborative future event recommendation. In Proceedings of the 19th ACM international conference on Information and knowledge management, pages 819–828. ACM, 2010.
-  Sergio Oramas, Mohamed Sordo, Luis Espinosa Anke, and Xavier Serra. A semantic-based approach for artist similarity. In ISMIR, pages 100–106, 2015.
-  Markus Schedl, Hamed Zamani, Ching-Wei Chen, Yashar Deldjoo, and Mehdi Elahi. Current challenges and visions in music recommender systems research. arXiv preprint arXiv:1710.03208, 2017.
-  Rashmi Sinha and Kirsten Swearingen. The role of transparency in recommender systems. In CHI’02 extended abstracts on Human factors in computing systems, pages 830–831. ACM, 2002.
-  Douglas Turnbull, Luke Barrington, and Gert RG Lanckriet. Five approaches to collecting tags for music. In ISMIR, volume 8, pages 225–230, 2008.
-  Douglas Turnbull, Luke Barrington, David Torres, and Gert Lanckriet. Semantic annotation and retrieval of music and sound effects. IEEE Transactions on Audio, Speech, and Language Processing, 16(2):467–476, 2008.