Analyzing Visual Mappings of Traditional and Alternative Music Notation

10/25/2018
by Matthias Miller, et al.

In this paper, we postulate that combining the domains of information visualization and music studies paves the way for a more structured analysis of the design space of music notation, enabling the creation of alternative music notations that are tailored to different users and their tasks. Hence, we discuss the instantiation of a design and visualization pipeline for music notation that follows a structured approach, based on the fundamental concepts of information and data visualization. This enables practitioners and researchers of digital humanities and information visualization alike to conceptualize, create, and analyze novel music notation methods. Based on the analysis of relevant stakeholders and their usage of music notation as a means of communication, we identify a set of relevant features typically encoded in different annotations and encodings, as used by interpreters, performers, and readers of music. We analyze the visual mappings of musical dimensions for varying notation methods to highlight gaps and frequent usages of encodings, visual channels, and Gestalt laws. This detailed analysis leads us to the conclusion that such an under-researched area in information visualization holds the potential for fundamental research. This paper discusses possible research opportunities, open challenges, and arguments that can be pursued in the process of analyzing, improving, or rethinking existing music notation systems and techniques.

1 Background

Depicting music in a visual form is a complex task containing several representation issues [10]. The existence of a vast variety of different music notations shows that it is overly complicated to encode all musical features into a single, consistent system because of the limited number of visual encoding channels. Designing a notation system requires one to make fundamental design decisions that depend on the target audience and the application area. Making such design decisions limits the range of possible applications, which is acceptable if some use cases are well supported even though other tasks are not. One reason why specialized notations are limited in scope is the loss of information they entail. For instance, tailoring a notation system to a specific instrument makes reading more difficult for musicians who are not acquainted with the instrument. A further reason is that music is subject to continuing change due to the development of new instruments, genres, and even rearrangements of existing works, making it impossible to account for features that will only be introduced later. The range of applications of music notation can be divided into categories (examples given by reference), which include but are not limited to (live) performance [15, 40], analysis [28], art [32, 21], education [45], instrument support [34], composition [3], and entertainment [14].

Dannenberg states that the diversity among these categories requires one to view music and its structure on different levels of representation, since each notation design may contain information that is not available in other systems [10]. For instance, music analysis often uses abstract transformations to provide means of comparison that enable analysts to detect patterns and differences between musical pieces instead of focusing on single details [36]. Among others, music analysts are interested in understanding the harmonic progressions and relationships within a musical piece. Malandrino et al. propose a visualization approach to emphasize the harmonic structure of a composition by employing color to indicate tonal progressions [27]. Many approaches that propose an instrument-oriented design can also be assigned to the category 'Education', since visual metaphors that are based on the visual appearance of the respective instrument provide high potential for learning a new instrument. For example, Dascălu et al. utilize the keyboard design to create an instrument teaching platform for adult learners [11].
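As an illustration of this kind of color-based harmony encoding, the following sketch assigns a hue to each of the twelve pitch classes arranged along the circle of fifths, so that harmonically related keys receive similar colors. This is a minimal sketch of the general idea only; the pitch-class ordering, the colorsys-based conversion, and the function name are our own illustrative choices, not Malandrino et al.'s method.

import colorsys

# Twelve pitch classes ordered along the circle of fifths (illustrative choice).
CIRCLE_OF_FIFTHS = ["C", "G", "D", "A", "E", "B",
                    "F#", "C#", "G#", "D#", "A#", "F"]

def pitch_class_color(pitch_class: str) -> str:
    """Map a pitch class to an RGB hex color by spreading hues evenly around
    the color wheel, so harmonically close keys get similar hues."""
    index = CIRCLE_OF_FIFTHS.index(pitch_class)
    hue = index / len(CIRCLE_OF_FIFTHS)           # 0.0 .. 1.0
    r, g, b = colorsys.hsv_to_rgb(hue, 0.6, 0.9)  # muted, light colors
    return "#{:02x}{:02x}{:02x}".format(int(r * 255), int(g * 255), int(b * 255))

# Example: colorize the roots of a short progression (C - G - A - F).
for chord_root in ["C", "G", "A", "F"]:
    print(chord_root, pitch_class_color(chord_root))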

To summarize the broad range of music notation systems into a single framework, we first span a Music Notation Design Space.

2 Music Notation Design Space

Visually representing music requires a suitable mapping from heterogeneous input formats to visual channels. If music exists in audio format, extracting structural features is necessary before applying a visual encoding. Since the visualization or notational representation of music is ambiguous, even existing visual representations can be examined, altered, and restructured to support different situations. Music is an intricate form of art containing both structural properties, such as harmony or rhythm, and non-structural properties like emotion and imagination. The former are mathematically formalizable; nevertheless, several attributes of music can hardly be communicated using formal notation. For instance, performing music often involves a level of expressiveness in interpretation that is difficult to formalize or to encode visually. Consequently, dealing with music as notation entails several representational issues. This problem of representation is responsible for the existing variety of different musical representations. To analyze existing music notation specifications, we need to extract the musical features that are commonly used to represent music visually. Due to the hierarchical structure of music, we introduce a fixed level of abstraction by subdividing music into four meta-features, each comprising concrete musical dimensions that can be subject to visual representation in music notation systems.
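When the source material is audio rather than symbolic notation, such structural features first have to be estimated from the signal. The following is a minimal sketch of this extraction step, assuming the librosa library is used; the selected features (tempo, beat positions, chroma) are illustrative examples of the rhythm and harmony dimensions discussed below, not a prescribed set.

import librosa

def extract_basic_features(audio_path: str) -> dict:
    """Estimate a few structural features from an audio file
    before any visual mapping is applied."""
    y, sr = librosa.load(audio_path)                            # waveform and sample rate
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)    # rhythm: tempo + beat frames
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)     # beat positions in seconds
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)            # harmony: 12 pitch-class energies
    return {"tempo_bpm": float(tempo), "beat_times": beat_times, "chroma": chroma}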

Figure 1: Our proposed Music Notation Visualization Pipeline is based on Card et al.'s Information Visualization Reference Model [5]. Musical data is transformed into visual music notation in a multi-step process comprising the extraction of musical features (Step 1), data transformation (Step 2), visual mapping (Step 3), and encoding into visual structures (Step 4). Moreover, allowing users to interact with the system and change the notation enables the customization of views (Step 5) and the improvement of the notation to fit the users' tasks.

2.1 Musical meta-features and notational dimensions

We suggest a set of meta-features that can be extracted from music, each of which characterizes a unique musical attribute. Every meta-feature contains multiple dimensions that describe its specific musical part in detail. Accordingly, we consider music to be, in general, a composition of rhythm, harmony, dynamics, and instructions (see Step 1 in Figure 1).

Rhythm describes the tempo (beats per minute) or speed, including meter and time signatures. Pauses also belong to rhythm, since they influence the rhythmic movement or flow of the music. Rhythm is the reason why music can be categorized and analyzed as time-series data, and it is a fundamental aspect in the design of music notation.

Harmony depends on how multiple tones at specific pitch levels are combined simultaneously. This co-occurrence of notes determines the harmonic progression of a musical piece. In music notation, one must differentiate between accidental and normal notes. Moreover, the range of a given part is defined by the octaves, that is, the height of the notes. We assign duration to be part of every note, since varying tone lengths directly affect the harmonic behavior.

Dynamics is often used to describe the loudness or the development of intensity, including the transitions between different volume levels as well as accents and abrupt changes in the musical progression. It comprises volume (or intensity), articulation, and phrasing, since phrasing and volume are frequently combined to partition music into connected segments. Besides, articulation also shapes musical dynamics in combination with intensity and phrasing.

Instructions encapsulate contextual notational information such as timbre, arrangement, baseline, and which finger should be used to play a specific note. 'Arrangement' includes the structure of a musical piece, comprising repetitions and instrumentation. Instrumentation influences the timbre and is regarded as a separate dimension, which is often represented only implicitly. Depending on the instrument, Common Music Notation (CMN) uses different baselines, encoded by the respective clefs at the beginning of the notation, which indicate the pitch of the displayed notes.
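To make the grouping explicit, the four meta-features and their dimensions can be written down as a simple data structure; the following is a minimal sketch in which every extracted value is attached to one meta-feature and one of its dimensions (the class and field names are ours, for illustration only).

from dataclasses import dataclass

# The four meta-features and the dimensions assigned to them in this section.
META_FEATURES = {
    "rhythm":       ["tempo/beat", "meter/time signature", "pauses/breaks"],
    "harmony":      ["tone (pitch/frequency)", "note", "range (octaves)",
                     "accidental/normal", "duration"],
    "dynamics":     ["intensity/volume", "articulation", "phrasing"],
    "instructions": ["timbre (instrument)", "arrangement", "baseline/clef", "finger"],
}

@dataclass
class MusicalFeature:
    """One extracted value of a musical dimension, e.g. ('rhythm', 'tempo/beat', 120)."""
    meta_feature: str
    dimension: str
    value: object

    def __post_init__(self):
        # Guard against dimensions that do not belong to the stated meta-feature.
        assert self.dimension in META_FEATURES[self.meta_feature]

# Example: a tempo of 120 beats per minute as a rhythm feature.
print(MusicalFeature("rhythm", "tempo/beat", 120))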

2.2 Users and Tasks

Different types of users need suitable music representations: music composers must be able to expressively communicate their thoughts through symbols, signs, and other instructions. They achieve this by structuring and representing their musical thoughts in such a way that the composed music can be reconstructed as imagined by the author. In contrast, performers should be provided with a musical notation that is easy to understand and suitable for the instrument they are playing. The notation of a musical piece can vary between different instruments to meet task-oriented requirements. Imagine a musical director, who must be aware of all instrument parts of a whole orchestra. Providing an overview of all involved instruments is beneficial in this case. Single instrumentalists, on the contrary, do not profit from such an orchestra notation and could even be distracted by it while playing their own part. This example emphasizes the role and importance of different music notations for diverse tasks. Learning an instrument is another field of application: understanding the CMN can be a challenging task. Tailoring music notation to the user while using intuitive representations, such as visual instrument metaphors, supports novices during the learning process.

Rather abstract representations are useful for music analysts who are more interested in patterns, progressions, and general differences between musical pieces than in specific details.

2.3 Visual Mapping of Musical Dimensions

Simultaneously representing multiple features by visual cues is essential when it comes to conveying music notation information.

During the design process of music notation, considering the Gestalt laws is helpful to estimate the effectiveness of a visual cue. Since visual variables are the foundation of information visualization, of which music notation is a subset, we take Bertin's seven visual variables into account: position, size, shape, value, color, orientation, and texture [2]. Munzner extends this list with motion, curvature, volume, and spatial region. She divides the visual channels into magnitude channels, which comprise ordered attributes, and identity channels, which should be used to encode categorical data attributes that do not have an implicit, natural ordering [29].

It is useful to order the magnitude channels by effectiveness to apply visual channels reasonably when designing visualizations [6]. Based on Munzner's discussion, we ordered the visual magnitude channels in Table 1 from most to least effective, starting with 'position on common scale' (top) and ending with 'volume (3D size)' (bottom). This sorting facilitates the comparison of different visual encodings by effectiveness. Moreover, readers get a better overview of how musical features are distributed across visual encodings. Applying distinct visual mappings to distinct concepts avoids confusion when reading a designed music notation.
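As a small illustration of how such an ordering can guide a design decision, the ranking below follows Munzner's effectiveness ordering of the magnitude channels [29]; the selection function that assigns the most effective still-unused channel to the next musical dimension is our own illustrative addition, not part of the paper's method.

# Magnitude channels ordered from most to least effective (after Munzner [29]).
MAGNITUDE_CHANNELS = [
    "position on common scale",
    "position on unaligned scale",
    "length (1D)",
    "tilt/angle",
    "area (2D size)",
    "depth (3D position)",
    "color luminance",
    "color saturation",
    "curvature",
    "volume (3D size)",
]

def best_available_channel(already_used: set) -> str:
    """Pick the most effective magnitude channel not yet taken by another
    musical dimension, so that no two dimensions share a visual mapping."""
    for channel in MAGNITUDE_CHANNELS:
        if channel not in already_used:
            return channel
    raise ValueError("all magnitude channels are already in use")

# Example: pitch already occupies vertical position, duration horizontal length.
print(best_available_channel({"position on common scale", "length (1D)"}))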

Columns (musical dimensions, grouped by meta-feature):
  Rhythm: Tempo / Beat; Meter / Time Signature; Pauses / Breaks
  Harmony: Tones (Pitch / Frequency); Note; Range (Octaves); Accidental / Normal; Duration
  Dynamics: Intensity / Volume; Articulation; Phrasing
  Instructions: Timbre (Instrument); Arrangement; Baseline / Clef; Finger

Rows (visual variables, with the references cited in the occupied cells, left to right):

Semantic channel [4]:
  Text: ♫, [23]; ♫, [23, 33, 12]; ♫, [23, 12]; ♫, [12]; ♫, [12, 9]; ♫, [12]

Magnitude channels:
  Position on common scale: [30, 8, 38]; ♫, [24, 36, 30]; ♫, [23, 33, 12]; [38, 12, 8]; [1]; ♫, [45, 23, 34, 12]; [9]; ♫, [12]; [9, 8, 38, 12]
  Position on unaligned scale: [1]; ♫, [23, 12]; ♫, [23, 12]
  Length (1D): [45]; [24]; [45]
  Tilt / Angle: (no entries)
  Area (2D size): [1]; [9]; [24, 9]
  Depth (3D position): (no entries)
  Color Luminance: [38]; [9]; [8]
  Color Saturation: [8]
  Texture: [45]; [45]; [24]
  Curvature: (no entries)
  Volume (3D size): [38]

Identity channels:
  Spatial Region: (no entries)
  Color Hue: [36, 8, 41]; [24, 45, 23, 9, 8]; [23]; [41]; [12, 38, 30]; [34]
  Motion: [34, 30, 41]; [9]; [34]; [34, 9]
  Shape: [17]; ♫, [23, 33, 24, 12]; [17]; [45]; [9]; ♫, [24, 23, 12]; ♫, [24, 23, 33, 12, 38]; ♫, [34, 12]; [9]; ♫, [24, 12]; ♫, [23, 12]

Gestalt laws:
  Proximity: ♫, [34]; [9]; [24]
  Similarity: [24, 30]; [45]; ♫, [9, 8]; [23, 8, 9]; [24, 9, 45]; [24, 45]; [34]; [12, 30]
  Enclosure: ♫, [30, 24]; [12]
  Closure: [24]
  Continuity: [30]; [34]; [12]
  Connection: [24]; [1]; [12]; ♫, [45]; [34]

Table 1: An overview of mapping musical notation features (Rhythm, Harmony, Dynamics, Instructions) to visual variables: magnitude channels, identity channels, and Gestalt laws. The two-notes symbol (♫) indicates how Common Music Notation (CMN) encodes the respective musical variable.

2.4 Music Notation Visualization Pipeline

We designed a pipeline that models the transformation steps required to turn musical data into a visual representation; it is based on the Information Visualization Reference Model by Card et al. [5] (see Figure 1). In the first step, extracting musical features is required to transform the music information into a finite set of dimensions that can potentially be converted into visual form. Of course, depending on the original data format, music can contain elements such as emotions, nuances, or performance interpretation that cannot easily be visualized and conveyed to the user. During the extraction process, the loss of information should be minimized to preserve relevant information. The visual mapping and the creation of visual structures is a task- and user-oriented process to meet the users' needs. Allowing the user to alter transformations and to influence the represented music notation provides flexibility. In some cases, users may benefit from restricted modification opportunities to maintain the quality of music representations. For example, fundamental attributes such as beat, notes, pitch, and duration should always be present. Since the traditional visualization pipeline model is designed on a rather abstract level that requires the used data to be homogeneous in order to process transformations and create different views, we state that music features must be extracted before applying well-known visualization techniques. Due to this abstract level, additional preprocessing steps may be required to translate musical features so that they can be processed with our proposed pipeline.
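The following toy example runs all pipeline steps end to end on a short symbolic input; every function name and the crude character-based "rendering" are illustrative placeholders for Steps 1 to 5, not an implementation of an actual notation system.

# A toy end-to-end run of the pipeline on symbolic input: (note, octave, beats).
RAW_MUSIC = [("C", 4, 1.0), ("E", 4, 0.5), ("G", 4, 0.5), ("C", 5, 2.0)]
NOTE_ORDER = ["C", "D", "E", "F", "G", "A", "B"]

def extract_musical_features(raw):                       # Step 1: feature extraction
    return [{"note": n, "octave": o, "duration": d} for n, o, d in raw]

def transform_data(features):                            # Step 2: data transformation
    # Derive a single ordinal pitch value per event (diatonic steps above C0).
    for f in features:
        f["pitch_step"] = f["octave"] * 7 + NOTE_ORDER.index(f["note"])
    return features

def map_to_visual(features):                             # Step 3: visual mapping
    # Pitch -> vertical position, duration -> horizontal length.
    return [{"row": f["pitch_step"], "width": int(f["duration"] * 4)} for f in features]

def render_view(marks):                                  # Steps 4-5: visual structures + view
    top, bottom = max(m["row"] for m in marks), min(m["row"] for m in marks)
    for row in range(top, bottom - 1, -1):
        print("".join(("#" * m["width"] if m["row"] == row else "." * m["width"])
                      for m in marks))

render_view(map_to_visual(transform_data(extract_musical_features(RAW_MUSIC))))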

3 Traditional and Alternative Music Notations

Besides Common Music Notation, many different approaches have been proposed in application domains such as research or teaching to reduce the drawbacks of the CMN. These rather experimental notation concepts often use different visual channels to exploit the potential of unused visual variables. In Table 1, we provide an overview of different music notation techniques based on the categories of visual channels and Gestalt laws introduced in section 2. Our objective is to indicate possible visual mappings that have not been used in a music notation system before, indicated by the empty cells in the table. We point out that the given overview does not claim to be exhaustive, since there are many ideas and approaches outside the world of academia (e.g., Pitch Bracket Notation [17]). In the table, the CMN is listed using a two-notes symbol (♫) to emphasize differences between the listed notation approaches. Moreover, we selected the included references to be reproducible by a performer or computer instead of adding all methods that visualize any musical feature for analysis or entertainment. Frequently, music notation analysis visualizations apply abstract methods that do not take fundamental notation features into account [26, 44].

In Table 1, we classify fifteen existing music notation techniques. We emphasize that this is not representative of all notation techniques that exist in the literature. The table is structured to facilitate understanding which referenced music representation method uses a given visual variable (rows), grouped by Munzner's visual channel categorization [29] and by Gestalt laws, to encode a musical feature (columns). The columns are grouped by the characteristic musical meta-features introduced in subsection 2.1. Some music notation concepts use different encodings for notes (a tone within an octave: A-G), ranges, and pitch. We consider pitch to be composed of note and octave. Mostly, a performer requires the exact position to play a musical piece precisely. Therefore, we decided to list them separately to highlight explicit representations of this musical dimension. Music notation often includes textual descriptions to provide information about instrumentation, volume, tempo, and meter signature. Borgo et al. subdivide all visual channels into four categories, one of which is the Semantic Channels, containing text, numbers, symbols, signs, icons, and others [4]. We consider this category appropriate for describing the contextual music notation information (first row in the overview table). Rarely, complete chords are directly mapped onto visual variables instead of the respective single notes. For instance, Malandrino et al. enrich the CMN with a colored background to indicate the current pitch class at a particular position in the score [27]. For the sake of simplicity and clarity, we decided to omit a separate column for the chord dimension due to the scarcity of such techniques.

4 Research Implications and Opportunities

Our research shows how notations can be compared regarding their visual features. To the best of our knowledge, Table 1 comprises a variety of music notations; it is not complete but may serve as a starting point for a survey. The introduced visualization pipeline (Fig. 1) helps to both understand existing techniques and develop new ones. Since the pipeline unifies the design process on a rather abstract and conceptual level, it may be necessary to consider some musical dimensions in more detail. Nevertheless, we claim that any music notation visualization can be modeled using our pipeline, which can be extended, if necessary, to cover further musical dimensions that we did not consider. In doing so, weaknesses of a specific representation, as well as its capability to visually map musical features for a given task, can be revealed with respect to the relevant notational dimensions. Vice versa, in the design process of new music notation visualizations, the pipeline in combination with the overview table supports the consideration of different notational dimensions. The line-up of mappings between musical features and visual variables can also be used as inspiration to create new higher-level visualizations that also support content-based visual analysis of music.

Our research can be seen as a starting point for in-detail investigations of certain combinations of mappings and users or tasks. Specific pairs of visual variables and musical features could particularly fit certain user groups (e.g., learners, composers) or tasks (e.g., high-level analysis of a musical piece, performance). We assume that readers of the classical music notation system are inherently biased due to their early familiarization with the standard notation. Due to this familiarity, the popularity of the CMN, and historical contingency, developing a music notation that will eventually replace the CMN is a difficult process.

Today, we can exploit visual channels, such as motion or color, that require technology which was not available during the development of the CMN. Since the CMN has some disadvantages, such as the difficulty of differentiating tone pitches, new techniques could address these drawbacks by applying visual variables in a different way. During the design process of new music visualization methods, it is necessary to take the respective application area into account, as some mappings are more intuitive in a specific situation and convey precise information, while others rather provide abstract information about musical features.

We argue that our design and visualization pipeline for music notation (see Figure 1) is also applicable to developing methods for music fingerprinting, to comparing music across different epochs and styles, or to comparing the visual encodings of music visualizations. For instance, a well-established mapping for some notational feature could be deployed in a higher-level visualization, showing, for example, the trend of a musical feature throughout an entire musical piece. Visualization techniques such as pixel visualizations, glyphs, or others could be applied to visually analyze larger amounts of musical data and to enable comparison between musical pieces.
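As a small sketch of such a fingerprint-like, higher-level view: assuming a per-measure feature series (e.g., average loudness) has already been extracted, it can be binned into a fixed-width row of characters so that several pieces can be compared at a glance. The binning and the character ramp below are our own illustration, not a method from the paper.

def feature_fingerprint(values, width=16, shades=" .:-=+*#"):
    """Compress a per-measure feature series into a fixed-width row of
    characters so that several pieces can be compared side by side."""
    if not values:
        return " " * width
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    bins = [values[int(i * len(values) / width)] for i in range(width)]
    return "".join(shades[int((v - lo) / span * (len(shades) - 1))] for v in bins)

# Example: compare the loudness development of two made-up pieces.
piece_a = [0.2, 0.3, 0.5, 0.9, 0.8, 0.4, 0.2, 0.1]
piece_b = [0.6, 0.6, 0.5, 0.5, 0.6, 0.7, 0.6, 0.6]
print(feature_fingerprint(piece_a))
print(feature_fingerprint(piece_b))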

5 Conclusion

We introduced a design and visualization pipeline for music notation based on fundamental information visualization research. This approach can be used to compare existing music notation designs and to develop new techniques that are tailored to different users and their tasks. Thus, we bridge the research field of data visualization and the field of music representation in a structured way. An exemplary classification of fifteen notation systems indicates how musical features can be encoded. The table can be used as a starting point for a full survey of existing music notation techniques.

References