Engineering Music to Slow Breathing and Invite Relaxed Physiology

by Grace Leslie, et al.
Georgia Institute of Technology

We engineered an interactive music system that influences a user's breathing rate to induce a relaxation response. This system generates ambient music containing periodic shifts in loudness that are determined by the user's own breathing patterns. We evaluated the efficacy of this music intervention for participants who were engaged in an attention-demanding task, and thus explicitly not focusing on their breathing or on listening to the music. We measured breathing patterns in addition to multiple peripheral and cortical indicators of physiological arousal while users experienced three different interaction designs: (1) a "Fixed Tempo" amplitude modulation rate at six beats per minute; (2) a "Personalized Tempo" modulation rate fixed at 75% of each individual's breathing rate baseline, and (3) a "Personalized Envelope" design in which the amplitude modulation matches each individual's breathing pattern in real-time. Our results revealed that each interactive music design slowed down breathing rates, with the "Personalized Tempo" design having the largest effect, one that was more significant than the non-personalized design. The physiological arousal indicators (electrodermal activity, heart rate, and slow cortical potentials measured in EEG) showed concomitant reductions, suggesting that slowing users' breathing rates shifted them towards a more calmed state. These results suggest that interactive music incorporating biometric data may have greater effects on physiology than traditional recorded music.



I Introduction

Music invites strong emotional responses in its listeners, and thus presents a promising avenue for the design of interaction systems that bring health and well-being to users. Historically, researchers in the music cognition community have attributed musical emotion to cognitive appraisal [1]; however, more recent models acknowledge the possibility of multiple mechanisms by which music evokes emotions [2, 3], including brain stem reflexes and emotional contagion [4]. In fact, listeners have reported many different physical reactions due to focused music listening [5]. Music has been shown to help alleviate pain [6], influence movement synchronization [7, 8], and be used as a bio-feedback signal [9]. In the present research, we explore the possibility that indirect exposure to musical stimuli may also affect users in such a way as to produce beneficial changes in their physiology, without requiring attentive listening.

Past research has detailed the range of both perceived and felt emotions that accompany attentive music listening [10], in addition to a range of physical [11] and physiological [12, 13] responses associated with emotions. However, less is known about how music may be specifically designed to induce a particular physiological response in the listener. Previous studies have shown that influencing breathing may be one effective pathway to inviting shifts in physiology. For example, instructing users with music in the control of their breathing patterns showed promise in breath regulation [14], relaxation [15], creating a meditative experience [16], and reducing muscle tension [17]. However, such studies relied on intentional breathing to be effective. Other studies have shown that listening to music can affect breathing rates [18], and that introducing white noise to music can influence electrodermal activity and heart rate variability [19], though these were not carefully controlled to avoid intentional manipulation of breathing. The present study was carefully designed to examine any effects that auditory feedback may have on breathing even when the listener is unaware that an intervention is taking place, remaining fully engaged in a demanding task.

Breathing is a promising avenue to explore for regulation of affective state, as it is a physiological process that is under autonomic control, yet which can also be controlled through conscious effort or external influences [20]. The rate and manner of one’s breathing pattern change with physical exertion [21] and affective state [22, 23]; in turn, aerobic capacity, stress level [24, 25], mental functions [26], and mood [27] can be influenced by consciously manipulating breathing, as is detailed in ancient Yoga Sutras [28] as well as in modern scientific research.

It is known that reducing one’s breathing rate can reduce perceived momentary stress levels [24, 25]. While consciously slowing down breathing using mindfulness exercises can be effective [29], such interventions require focused attention and divert attention from important tasks [30], which can make them impractical in workplaces. Researchers are therefore beginning to explore implicit bio-feedback [31]. Recent findings showed that rhythmically oscillating audiovisual feedback presented to users significantly slowed breathing, had a lasting effect, improved self-reported calmness and focus, and was highly preferred for future use [32]. To this end, we designed three interactive music systems incorporating rhythmic loudness changes, together with a breath sensor (Zephyr BioHarness), to probe which interactions optimally, yet effortlessly, influence breathing rate through sound.

II Theory and Design

This interactive music system was designed to influence the user’s breathing pattern in order to invite a relaxation response. It was engineered with two competing principles in mind. First, the system must be as unobtrusive as possible, so as not to require focused attention that would detract from everyday activities or workplace tasks; we also wanted to avoid any prior familiarity with the music that might carry existing emotional associations. Second, the system must invite physiological changes in the listener that accompany a relaxed state. To satisfy the first constraint, we engineered an original ambient-genre music composition using the PureData (Pd) [33] graphical programming language. Our software took an incoming stream of real-time breathing data recorded from the participant’s Zephyr BioHarness and translated this data to control the loudness of the principal melodic line in the ambient music mix. An amplitude envelope, calculated using a square-root function normally used to generate stereo panning curves, was applied to the overall sound mixture in order to produce an undulating volume effect reminiscent of the time course of a normal inhale and exhale breathing cycle. A diagram of this envelope is included in the design illustration in Fig. 1. The difference in overall loudness between full “inhale” maxima and full “exhale” minima was 6 dB, representing a doubling of sound pressure amplitude during the course of a synthesized breath cycle. A sample of this music can be streamed at

Fig. 1: System design. Left: The loudness of the music was modulated to produce an undulating, breath-like sound. Right: Top-level system view. Color codes: blue: hardware module, green: software module.
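The breath-shaped loudness envelope described above can be sketched in a few lines. This is an illustrative Python reconstruction, not the authors' Pd patch: the function name and the raised-cosine phase driving the envelope are our assumptions; only the square-root (equal-power panning) curve and the 6 dB depth come from the text.

```python
import numpy as np

def breath_envelope(duration_s, breath_rate_bpm, sr=44100, depth_db=6.0):
    """Breath-like amplitude envelope: a raised-cosine phase passed through
    a square-root curve (the shape used for equal-power stereo panning),
    scaled so the "inhale" maxima sit depth_db above the "exhale" minima."""
    t = np.arange(int(duration_s * sr)) / sr
    # Phase oscillates between 0 and 1 at the synthesized breathing rate.
    phase = 0.5 * (1.0 - np.cos(2 * np.pi * (breath_rate_bpm / 60.0) * t))
    curve = np.sqrt(phase)                       # equal-power shape
    gain_min = 10 ** (-depth_db / 20.0)          # 6 dB below the maximum
    return gain_min + (1.0 - gain_min) * curve

env = breath_envelope(10.0, 6)                   # 10 s at 6 breaths/min
print(round(env.max(), 3), round(env.min(), 3))  # 1.0 0.501
```

At 6 dB of depth, the gain ratio between "inhale" maxima and "exhale" minima is 10^(6/20) ≈ 2, i.e. a doubling of amplitude per synthesized breath cycle.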

We developed three intervention designs that differed only in the level of interaction between the musical composition engine and the participant’s real-time breathing data stream:

II-A Fixed Tempo (FT)

In the Fixed Tempo design, the synthesized breathing cycle was set at 6 breaths per minute (bpm), a rate shown to produce ideal levels of relaxation [34]. Therefore, this condition was equivalent to playing a pre-recorded piece of music designed to have relaxing qualities, but lacking individualization to each user’s breathing pattern.

II-B Personalized Tempo (PT)

We calculated each participant’s average baseline breathing rate at the beginning of the experiment session. During the Personalized Tempo block, the music was presented with amplitude modulations occurring at 75% of the calculated baseline breathing rate, in an effort to gently “nudge” a reduction in breathing rate. In any case of a participant having an abnormally high natural breathing rate, their Personalized Tempo was capped at 15 breaths per minute.
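The PT mapping reduces to a one-line rule. A minimal sketch (the function and parameter names are ours, not the authors'):

```python
def personalized_tempo(baseline_bpm, nudge=0.75, cap_bpm=15.0):
    """Target modulation rate: 75% of the baseline breathing rate,
    capped at 15 breaths per minute for unusually fast breathers."""
    return min(baseline_bpm * nudge, cap_bpm)

print(personalized_tempo(12.0))  # 9.0  (gentle nudge below baseline)
print(personalized_tempo(24.0))  # 15.0 (cap engaged)
```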

II-C Personalized Envelope (PE)

In the most interactive design, the real-time breathing data stream fully governed the rate of synthesized “inhale” and “exhale,” producing a sympathetic music feedback which mirrored the exact time course of each participant’s unintentional inhales and exhales. While PT relies on a single number calculated during a participant’s Baseline session, PE uses one’s instantaneous breathing signal to generate each note.

III Methods

III-A Experimental Procedure

This study was pre-approved by the MIT Review Board. Our sample comprised 19 participants (11 female, 8 male) of varying ages (19–55 years).

The experiments were conducted indoors in a sound-treated studio with no audible external noise. Participants were seated at a table in front of a laptop at a distance of approximately 2 meters from each of a pair of studio audio monitors (Genelec) placed at approximately 30° left and right of center. During data collection, participants were asked to keep still, breathe spontaneously, and fixate their vision on a computer-generated crosshair symbol. We also recorded their breathing waveform and electrocardiogram (ECG) data using a Zephyr BioHarness (Zephyr™ Performance Systems), sampled at 17 Hz and 250 Hz, respectively. Bilateral electrodermal activity (EDA) at 4 Hz was recorded from the right and left wrists using the E4 wristband (Empatica, Inc). See the system diagram in Fig. 1.

Prior to data collection, the wristbands and Zephyr Bioharness were placed on each participant, and they were asked to walk up and down three flights of stairs in order to properly prime the EDA, ECG, and respiration sensors. After the sensor priming, participants were outfitted with a 16-channel EasyCap EEG cap connected to a BrainVision VAmp EEG amplifier.

Participants completed four blocks of 40 forewarned reaction-time trials; each block lasted an average of seven minutes due to random inter-trial intervals of between two and five seconds. For each trial, a warning stimulus was presented, followed by a fixed silent interval of 4.5 seconds. After this fixed interval, the imperative stimulus, an alarming short buzzing sound, was played. Participants were instructed to press a key on a computer keyboard as soon as they heard the imperative stimulus. This trial design is a classic stimulus presentation from the event-related potential (ERP) literature in EEG research, and is designed to elicit the contingent negative variation (CNV), a slow cortical potential whose amplitude increases with the focusing of attentional resources required for this task [35]. Since this task is demanding, it provides the opportunity to test the calming influence of our interventions on peripheral and cortical arousal.
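The block's timing structure can be sketched as follows. This is an illustrative reconstruction of the schedule described above; the function name, the seeding, and the uniform sampling of inter-trial intervals are our assumptions.

```python
import random

def trial_schedule(n_trials=40, fixed_interval=4.5, iti_range=(2.0, 5.0), seed=0):
    """Return (warning_time, imperative_time) pairs, in seconds, for one
    forewarned reaction-time block."""
    rng = random.Random(seed)
    t, trials = 0.0, []
    for _ in range(n_trials):
        warning = t
        imperative = warning + fixed_interval        # fixed 4.5 s silent interval
        trials.append((warning, imperative))
        t = imperative + rng.uniform(*iti_range)     # random 2-5 s inter-trial gap
    return trials

sched = trial_schedule()  # 40 trials; roughly 5-7 minutes including responses
```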

In order to minimize muscle artifacts in EEG, participants were reminded not to blink or move their eyes unless absolutely necessary, to keep their eyes open as much as possible while still blinking naturally, to keep their eyes fixated on the displayed crosshair, and to not clench their muscles or move their right hand unnecessarily, even when they were trying to respond very quickly to the buzzer sound.

The first of the four blocks was presented silently, in order to collect baseline physiological measurements. In each of the subsequent three blocks, one of the interactive music interventions was introduced at random.

The Lab Streaming Layer [36] protocol provided real-time synchronization of stimulus presentation timestamps along with breathing, ECG, and EEG streams. EDA data were synchronized with the other streams post-hoc using the timestamps recorded on the E4.

All participants provided prior informed consent for the primary reaction time task with accompanying musical stimuli. Participants were told that the music was provided as entertainment to mitigate possible boredom during long blocks of the repetitive task, and that multiple physiological measurements would be taken in order to test the effects of focused attention on these bodily processes. This deceptive experiment design was necessary to ensure that participants would not attempt to consciously entrain their breathing to the music, or otherwise manipulate their breathing rate to achieve any particular result. After data collection was complete, each participant was informed of the real purpose of the experiment.

III-B Pre-processing Steps

The raw breathing waveform was measured as a time-series representation of the extent to which the participants’ breathing increased and decreased tension in the BioHarness chest strap. The breathing waveform data were filtered using a lowpass Butterworth filter with a cutoff frequency of 1 Hz, representing the fastest reasonable breathing rate. Local maxima of the breathing signal were then detected, requiring each peak to drop at least 2 units (a relative pressure unit for the raw breathing waveform) on either side before the signal attains a higher value. After detecting the peak locations, the inter-respiration intervals (IRI) were calculated in milliseconds. To better highlight the influence of interventions and lower the influence of personal baselines, we standardized measures for each participant. Specifically, the inter-respiration intervals were z-scored per participant using the following formula:

z_IRI = (IRI − μ_IRI) / σ_IRI    (1)

Here, z_IRI refers to the z-score of the IRIs, μ_IRI refers to the mean of the IRIs during a participant’s complete session (including the Baseline, Fixed Tempo, Personalized Envelope, and Personalized Tempo conditions), and σ_IRI refers to the standard deviation of the IRIs during the whole session. The mean of z_IRI was calculated per block per participant as a proxy for the inverse of the relative breathing rate. The standard deviation of z_IRI was calculated per block per participant as a proxy for variability of breathing.
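The breathing pre-processing steps above can be sketched in Python with SciPy. This is an illustrative re-implementation, not the authors' code; in particular, the "drop at least 2 units on either side" criterion is approximated here with SciPy's peak-prominence test, which is an assumption on our part.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def inter_respiration_intervals(breath, fs, min_drop=2.0):
    """Low-pass the raw breathing waveform at 1 Hz, detect breath peaks,
    and return inter-respiration intervals (IRI) in milliseconds."""
    b, a = butter(4, 1.0, btype="low", fs=fs)        # 1 Hz cutoff
    smooth = filtfilt(b, a, breath)
    peaks, _ = find_peaks(smooth, prominence=min_drop)
    return np.diff(peaks) / fs * 1000.0              # ms between breath peaks

def zscore(x):
    """Standardize a feature across a participant's whole session (Eq. 1)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# Synthetic breathing near 0.2 Hz (one breath per 5 s), slightly rate-modulated,
# sampled at the BioHarness rate of 17 Hz:
fs = 17.0
t = np.arange(0.0, 120.0, 1.0 / fs)
breath = 4.0 * np.sin(2 * np.pi * 0.2 * t + 0.3 * np.sin(2 * np.pi * 0.02 * t))
iri = inter_respiration_intervals(breath, fs)
z = zscore(iri)
# The per-block mean of z is the inverse-rate proxy; its std is the variability proxy.
```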

Raw EDA was measured using the E4 sensors in microsiemens (μS). We applied a 6th-order Butterworth low-pass filter (1 Hz cutoff frequency) to the EDA data. The filtered EDA was transformed into z-scores (z_EDA) similarly to the method described in Equation 1. To better capture the relaxation response, we focused on the tonic skin conductance level, which measures the smooth, slowly changing underlying levels of EDA. Specifically, we measured the rate of change in the skin conductance level per experiment block per participant using the following formula:

Δz_EDA = (z_EDA[N] − z_EDA[1]) · f_s / N    (2)

Here, f_s refers to the sampling frequency, which is 4 Hz, and N refers to the length of the EDA signal.
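A minimal sketch of this block-level metric, under the assumption that the rate of change is the end-to-start difference of the z-scored tonic signal divided by the block duration N / f_s (the symbols given above):

```python
import numpy as np

def eda_rate_of_change(z_eda, fs=4.0):
    """Rate of change of the z-scored tonic EDA over one block:
    (last - first) divided by the block duration N / fs, in z-units/second."""
    z = np.asarray(z_eda, dtype=float)
    return (z[-1] - z[0]) * fs / len(z)

# A tonic level drifting downward over a 60 s block sampled at 4 Hz:
z_block = np.linspace(1.0, -1.0, 240)
rate = eda_rate_of_change(z_block)   # (-2) * 4 / 240 = -1/30 per second
```

A negative value, as in this example, corresponds to a falling skin conductance level, the direction associated with relaxation in the results below.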

All EEG data processing was performed offline using the EEGLAB [37] toolbox for Matlab. Continuous EEG data were high-pass filtered at 0.05 Hz by subtracting a low-passed version of the signal (0.05 Hz) from the original signal. Channels with a kurtosis greater than five were excluded from analysis, and the remaining channels were re-referenced to their average. The continuous EEG data were epoched around the first audio stimulus of each trial, from 1 second prior to the stimulus to 4 seconds post-stimulus, and the 500 ms preceding the stimulus was used as the baseline. Individual epochs with absolute amplitudes exceeding 50 μV were excluded from analysis. We calculated the mean amplitude in the Cz channel over three time windows representing the early (400–1400 ms), mid (1500–2600 ms), and late-stage (2600–3700 ms) CNV.
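The channel-rejection and CNV-window steps can be sketched with NumPy/SciPy. The actual analysis used EEGLAB in Matlab, so this is only an illustrative re-implementation; the array layout and function names are our assumptions.

```python
import numpy as np
from scipy.stats import kurtosis

def reject_channels(eeg, thresh=5.0):
    """Indices of channels whose sample kurtosis (Fisher) exceeds the threshold."""
    return [i for i, k in enumerate(kurtosis(eeg, axis=1)) if k > thresh]

def cnv_window_means(epochs, fs, ch=0, t0=1.0,
                     windows=((0.4, 1.4), (1.5, 2.6), (2.6, 3.7))):
    """Mean amplitude in the early, mid, and late CNV windows for one channel.

    `epochs` has shape (n_epochs, n_channels, n_samples), spanning -1 s .. +4 s
    around the warning stimulus; `t0` is the stimulus offset in seconds."""
    data = epochs[:, ch, :].astype(float)
    b0, b1 = int((t0 - 0.5) * fs), int(t0 * fs)      # 500 ms pre-stimulus baseline
    data = data - data[:, b0:b1].mean(axis=1, keepdims=True)
    data = data[np.abs(data).max(axis=1) <= 50.0]    # drop epochs exceeding 50 uV
    return [data[:, int((t0 + w0) * fs):int((t0 + w1) * fs)].mean()
            for w0, w1 in windows]

# Synthetic check: a clean channel and a spiky one, plus toy epochs at 10 Hz.
eeg = np.vstack([np.sin(np.linspace(0, 20 * np.pi, 1000)),  # clean oscillation
                 np.zeros(1000)])
eeg[1, 500] = 100.0                                 # single large artifact spike
bad = reject_channels(eeg)                          # [1]

fs = 10.0
epochs = np.zeros((3, 1, 50))                       # 3 epochs, 1 channel, -1..+4 s
epochs[:, 0, 14:24] = -5.0                          # shift in the early CNV window
epochs[2, 0, 30] = 100.0                            # artifactual epoch, rejected
means = cnv_window_means(epochs, fs)                # [-5.0, 0.0, 0.0]
```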


We used the Pan-Tompkins implementation of the QRS complex detector for ECG analyses [39]. The R-R intervals, i.e. inter-beat intervals (IBI), were consequently calculated. We used the Python HRV package [40] to calculate a range of time-based, frequency-based, and non-linear heart rate variability features from the IBIs. We also transformed the IBIs into z-scores similarly to Equation (1), by standardizing IBIs within each session (z_IBI).
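Given R-peak times from a QRS detector such as Pan-Tompkins, the IBI z-scoring reduces to a few lines. This is an illustrative sketch; the function name is ours and the detector itself is assumed to have run already.

```python
import numpy as np

def ibi_zscores(r_peak_times_s):
    """Inter-beat intervals (ms) from R-peak times, z-scored within the session."""
    ibi = np.diff(np.asarray(r_peak_times_s, dtype=float)) * 1000.0
    return (ibi - ibi.mean()) / ibi.std()

# R-peaks at 60 bpm, then the heart slows (longer intervals at the end):
z_ibi = ibi_zscores([0.0, 1.0, 2.0, 3.0, 4.0, 5.2, 6.4])
# Longer IBIs (slower heart rate) map to higher z-scores.
```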

IV Results and Discussion

IV-A Comparing Interventions

In this section, we present our findings regarding physiological changes arising from the ambient music conditions in comparison to baseline. We use box-plots [41], where the middle line represents the median, the inner box covers the first to third quartiles, and the whiskers extend the inner-box boundaries by 1.5 times the inter-quartile range.

IV-A1 Breathing

Fig. 2: Comparison of the average of inter-respiration interval z-scores across conditions. Higher inter-breath intervals mean lower relative breathing rates and are associated with more relaxed states. See §IV-A for box-plot details and §IV-A1 for ANOVA statistics. Square brackets show p-values of the post-hoc independent t-tests; ** and *** denote significance levels.

Relaxation has multiple physiological indicators, including changes in respiration patterns. Deep and slow breathing both arise from and give rise to physiological, affective, and cognitive calm [42]. On the other hand, sustained attention and cognitive load have been shown to reduce respiratory variability [43]. Moreover, negative emotional states are shown to reduce correlated breathing variability [44] and sense of relief is associated with higher breathing variability [45]. In the aforementioned articles, tidal volume, instantaneous respiration rate, and minute ventilation and their coefficient of variation have been used to measure total respiratory variability. Additionally, autocorrelation at one breath lag has been used to quantify correlated respiratory variability. Breathing variability is also a predictor of respiratory health and has been associated with more successful separation of the patient from the ventilator [46, 47]. Given the rich literature on breathing and its relationship with affective states and wellbeing, there is a consensus that slower breathing rate and higher breathing variability are associated with a calmer state.

To quantify effects on calming, we thus focus on two metrics from the breathing signal: the mean of inter-respiration interval z-scores within each experiment block (mean z_IRI), as well as the variance of inter-respiration interval z-scores within each block (var z_IRI). See §III-B for more information about how these features were calculated. Mean z_IRI is proportional to the inverse of the relative breathing rate; thus we expect to see a higher mean z_IRI in a more relaxed state. Additionally, var z_IRI is associated with breathing variability; therefore, we expect to see a higher var z_IRI in more positive and calming settings.

As shown in Fig. 2, our analyses revealed a significant difference in the z-scores of inter-respiration intervals (z_IRI) across conditions. An ANOVA test was performed to evaluate differences in mean z_IRI between the Baseline, Fixed Tempo, Personalized Envelope, and Personalized Tempo designs. We conducted post-hoc pairwise comparisons using the independent t-test to compare each music design condition to the baseline: the Personalized Tempo design increased mean z_IRI the most. The Personalized Envelope design may influence breathing most strongly, but because of the variance between fast and slow breathers, it has a lesser effect overall. Moreover, simply having a fixed slow music tempo also reduced breathing rate (Fixed Tempo design), but had a lesser effect than the personalized designs.

Fig. 3: Comparison of the variability of inter-respiration interval z-scores across conditions. Higher variability in breathing is associated with more relaxed states. See §IV-A for box-plot details and §IV-A1 for ANOVA statistics. Square brackets show p-values of the post-hoc independent t-tests; ** and *** denote significance levels.

Additionally, as shown in Fig. 3, our analyses revealed a significant difference in the variability of inter-respiration interval z-scores (var z_IRI) across conditions. An ANOVA test was performed to evaluate differences in var z_IRI between the Baseline, Fixed Tempo, Personalized Envelope, and Personalized Tempo designs. We conducted post-hoc pairwise comparisons using the independent t-test to compare each music design condition to the baseline: the music conditions resulted in a more variable breathing pattern, which is associated with a more positive state. Moreover, the difference in breathing variability was more prominent in the personalized designs, suggesting that there is added value in bringing physiology-driven design to ambient music listening.

IV-A2 Electrodermal Activity (EDA)

Fig. 4: Comparison of the rate of change of tonic EDA z-score across conditions. Negative values mean decreasing EDA levels, and are associated with a relaxing effect. See §IV-A for box-plot details and §IV-A2 for ANOVA statistics. Square brackets show p-values of the post-hoc independent t-tests; * denotes the significance level.

EDA is traditionally characterized into two types, tonic and phasic activity. Tonic activity, or skin conductance level, reflects the slowly changing patterns of EDA; lower levels of tonic activity are associated with calmer states. Phasic activity, or skin conductance response, corresponds to rapid changes in EDA level in the form of peaks with a particular morphology: a quick rise and a slow decay. Higher skin conductance is usually associated with higher sympathetic stimulation and higher stress [48]. Since our experiment design presented stimuli with the goal of creating a relaxation response, we expect the tonic portion of the EDA to be more indicative of the effectiveness of our interactive music. As recommended by [48], we use z-scoring for standardization of raw EDA and use the rate of change of the z-scored tonic EDA as our metric of change in skin conductance level. See §III-B for more information about feature calculation. In our study, participants used their dominant hand to perform the primary reaction-time task, making responses continuously; this introduced motion artifacts into the right-hand EDA signal. Thus, for this analysis, we focus on the non-dominant-hand EDA, which has a long history of study in the skin conductance literature [48].

As shown in Fig. 4, our analyses revealed a significant difference in the rate of change in skin conductance level across conditions. An ANOVA test was performed to evaluate differences between the Baseline, Fixed Tempo, Personalized Envelope, and Personalized Tempo designs. (The left E4 sensor did not record any data for one participant, resulting in four missing data points, one per condition, for this analysis.) We conducted post-hoc pairwise comparisons using the independent t-test to compare each music condition to the baseline, and only observed a significant difference between the Personalized Tempo design and the baseline. This analysis shows that the Personalized Tempo design was specifically more influential in inducing a physiological state of calm, resulting in decreasing levels of tonic EDA.

IV-A3 Electroencephalogram (EEG)

Fig. 5: Comparison of the mean amplitude of the contingent negative variation (CNV) for the Cz electrode across conditions. A lower CNV with greater absolute amplitude indicates higher cortical arousal. In line with previous research findings [49, 50], cortical and peripheral arousal have an inverse relationship. See §IV-A for box-plot details and §IV-A3 for ANOVA statistics. Square brackets show p-values of the post-hoc independent t-tests; * denotes the significance level.

We gave participants a reaction-time task chosen to elicit the contingent negative variation, a well-studied slow cortical potential known to index cortical arousal [35]. Past studies have shown that a decrease in EDA due to habituation to an auditory stimulus is accompanied by an increase in measured EEG power [49, 50]. Others have demonstrated that relaxation-inducing biofeedback causing a decrease in peripherally measured autonomic activity was also accompanied by an increase in CNV amplitude [51]. As shown in Fig. 5, our analyses revealed a significant difference in the amplitude of the late-stage CNV between the baseline condition and the three design conditions. An ANOVA test was performed to evaluate differences in the late-stage CNV amplitude between the Baseline, Fixed Tempo, Personalized Envelope, and Personalized Tempo designs. (Eight data points had kurtosis greater than 5, two per condition, and three were detected as outliers for having a z-score of more than 3 or less than −3, one per intervention condition.) We conducted post-hoc pairwise comparisons using the independent t-test to compare each music design condition to the baseline, and only observed a significant difference between the Personalized Tempo design and the baseline. Together with the breathing and EDA data, this shows that the Personalized Tempo design was specifically more influential in inducing a physiological state of calm. The accompanying influence on slow cortical potentials suggests that the decrease in peripherally measured autonomic activity was accompanied by an increase in cortical excitability, replicating findings of previous independent studies [49, 50].

IV-A4 Electrocardiogram (ECG)

We calculated the z-score of the inter-beat intervals (z_IBI); see §III-B for more information about this feature. z_IBI is proportional to the inverse of the relative heart rate; thus we expect to see a higher z_IBI in a more relaxed state. Fig. 6 visualizes the difference between inter-beat interval z-scores across all conditions. An ANOVA test was performed to evaluate differences in z_IBI between the Baseline, Fixed Tempo, Personalized Envelope, and Personalized Tempo designs. (Experiment blocks with less than two minutes of valid IBIs, one data point per condition, were excluded from this analysis.) We also report pairwise comparisons using an independent t-test comparing each music design condition to the baseline, and only observed a significant difference between the Fixed Tempo and Baseline conditions.

Though we calculated a comprehensive list of heart rate variability (HRV) features from ECG [40], we did not observe any significant differences between baseline and intervention conditions. This finding is to be expected, given that traditional HRV features may not best represent how the autonomic nervous system (ANS) is influencing heart function [52]. HRV is controlled by both the sympathetic and parasympathetic branches of the ANS. Traditional HRV measures are incapable of isolating the effects of these two branches, especially while breathing is changing. Particularly at low respiration rates, the parasympathetic activity shifts into lower frequencies and overlaps with the frequency interval that is traditionally associated with sympathetic activity [52]. Given that our ambient music design conditions have resulted in slower and more variable breathing, traditional HRV features are not equipped to distinguish between parasympathetic and sympathetic control of the heart [52]. See §IV-B for future work directions to try to mitigate this problem.
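For reference, two of the traditional time-domain HRV features discussed above can be computed directly from the IBIs. This is a generic sketch of the standard definitions, not the Python HRV package's implementation:

```python
import numpy as np

def time_domain_hrv(ibi_ms):
    """Two standard time-domain HRV features from inter-beat intervals (ms):
    SDNN (overall variability) and RMSSD (beat-to-beat, vagally mediated
    variability). Neither isolates sympathetic from parasympathetic control,
    which is the limitation noted in the text."""
    ibi = np.asarray(ibi_ms, dtype=float)
    sdnn = ibi.std(ddof=1)                        # sample std of the IBIs
    rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))   # RMS of successive differences
    return sdnn, rmssd

sdnn, rmssd = time_domain_hrv([800, 810, 790, 805, 795])
```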

Fig. 6: Comparison of the average of inter-beat interval (IBI) z-scores across conditions. A higher IBI is associated with a lower heart rate, usually accompanying a relaxed state. See §IV-A for box-plot details and §IV-A4 for ANOVA statistics. Square brackets show p-values of the post-hoc independent t-tests; ns: not significant, * denotes the significance level.

IV-B Limitations

In this paper, we solely focused on the left wrist EDA signal due to the motion artifacts introduced to the EDA captured from the right wrist. Recent findings show that a more holistic view of the EDA signal from multiple locations on the body could reveal further information about the affective and cognitive state of the user [53]. In the future, we would like to explore alternative signal processing techniques to overcome motion artifacts and study EDA asymmetry features.

We focused on traditional HRV measures for studying the influence of ANS on the heart. However, we learned that these features are not suitable over slow and variable breathing rates [52]. For future work, we would like to explore enhanced HRV features [52] that better distinguish sympathetic and parasympathetic indices of HRV in slow and variable breathing.

In addition, we did not ask participants after the experiment whether they suspected that the real intention of the study was to manipulate their breathing pattern. In the future, this step will be taken to ensure that analyzed data come only from participants unaware of the intention behind the music presentation. Future experiments are needed to test generalizability to real-world tasks and to test whether the influence of the music intervention extends to arousing effects using uptempo, arousing stimuli.

V Conclusion

This study engineered music in a systematic way to influence breathing in participants, even though their focus was on a different cognitive task. The three music interactions differed in the amount of customization to the participant’s own natural breathing rate. While previous designs have shown promise in encouraging entrainment of breathing patterns to external stimuli, this study was the first of its kind to specifically target unfocused entrainment, using a deceptive experiment design to ensure participants did not consciously mimic the stimuli with their breath.

Our results revealed that the intricate design of musical stimuli did influence the participants’ breathing patterns. All intervention conditions resulted in higher relative inter-breath intervals, i.e. lower relative breathing rates, which are associated with a calmer state. Additionally, they increased the relative variability of breathing, which is also associated with a more positive and relaxed state. Importantly, personalizing the system based on the user’s natural breathing rate made the system significantly more influential on the user’s breathing pattern as well as on measures of the relaxation response, as seen in the downward shift in the tonic activity of EDA. Similarly, it resulted in greater cortical arousal as measured by the CNV, which has been shown to have an inverse association with peripheral arousal.

VI Acknowledgments

We thank Brain Vision LLC (Cary, NC, USA) for providing the EEG equipment and MIT Media Lab Consortium for supporting this research.

