P-Reverb: Perceptual Characterization of Early and Late Reflections for Auditory Displays

We introduce a novel, perceptually derived metric (P-Reverb) that relates the just-noticeable difference (JND) of the early sound field (also called early reflections) to the late sound field (known as late reflections or reverberation). Early and late reflections are crucial components of the sound field and provide multiple perceptual cues for auditory displays. We conduct two extensive user evaluations that relate the JNDs of early reflections and late reverberation in terms of the mean-free path of the environment and present a novel P-Reverb metric. Our metric is used to estimate dynamic reverberation characteristics efficiently in terms of important parameters like reverberation time (RT60). We show the numerical accuracy of our P-Reverb metric in estimating RT60. Finally, we use our metric to design an interactive sound propagation algorithm and demonstrate its effectiveness on various benchmarks.

1 Introduction

Sound rendering uses auditory displays to communicate information to a user. Harnessing a user’s sense of hearing enhances the user’s experience and provides a natural and intuitive human-computer interface. Studies have shown a positive correlation between the accuracy or fidelity of sound effects and the sense of presence or immersion in virtual reality [16, 6, 22]. Sound is also an important cue for perceiving distance [31] and orientating oneself in an environment [30].

The sound emitted from a source and reaching the listener can be broken down into three components, described in more detail below: direct sound, early reflections, and late reflections or reverberation (Fig. 1). All three components of the sound field have perceptual relevance and have been extensively studied in psychoacoustics. Direct sound gives us an estimate of the loudness of and distance to the sound source [32]. Early reflections (ERs) arrive after the direct sound, typically within the first few tens of milliseconds. Late reflections or reverberation (LRs) are generated when the sound signal undergoes a large number of reflections and then decays as it is absorbed by the objects in the scene.

Because of the importance of the different components of sound fields, there has been considerable work on simulating these effects and incorporating them into auditory displays. Some commonly used methods approximate the sound field using artificial reverberation filters, which use the reverberation time to tune parametric digital filters [27]. These filters tend to have low computational requirements and are widely used for interactive auditory displays [12]. However, finding the right parameters for reverberation filters can be time-consuming, and current methods do not provide sufficient fidelity. Geometric sound propagation methods work under the assumption that sound travels in straight lines and can be modeled using ray tracing [14]. This allows the resulting algorithms to model sound's interaction with the environment as it undergoes reflection and scattering. Many techniques have been proposed to accelerate ray tracing, and current methods can generate early reflections (ERs) and late reflections (LRs) at interactive rates in dynamic scenes using high-order ray tracing (i.e., many orders of reflections) [23]. In practice, high-order ray tracing can be expensive, and current interactive systems use multiple CPU cores on desktop workstations. The most accurate methods for sound rendering are based on wave-based acoustics, which directly solve the acoustic wave equation using numerical methods. However, their precomputation and storage overheads are very high, and current methods are only practical for lower frequencies [19, 18, 20].

Many applications, including games, virtual environments, and multi-modal interfaces, require interactive sound rendering, i.e., update rates of tens of frames per second or more. Furthermore, these systems are increasingly used on game consoles or mobile platforms where computational resources are limited. As a result, we need faster techniques to generate ERs and LRs in dynamic scenes for high-fidelity sound rendering. In particular, LR computation can be a major bottleneck.

Main Results: We present a novel, perceptually derived metric called P-Reverb that relates the ERs to the LRs in the scene. Our approach is based on the relationship between the mean-free path (μ) and reverberation (Eq. 2), and we use early reflections to numerically estimate the mean-free path of the environment. We conduct two extensive user evaluations that establish the just-noticeable difference (JND) of sound rendered using early reflections and late reflections in terms of the mean-free path. We derive our perceptually based metric by expressing the JNDs of early and late reflections in terms of the mean-free path. Moreover, our metric is used to efficiently estimate the late reverberation parameter (RT60). We have evaluated the accuracy of our perceptual metric in computing mean-free paths and reverberation times, comparing its performance with prior algorithms based on analytic or high-order ray tracing formulations. The mean-free path is within about 3% and the reverberation time within about 5% of the reference values, which are within the JND values specified by ISO 3382-1 [10]. Overall, we observe significant benefits using our metric for fast evaluation of mean-free path and reverberation parameters for sound rendering and auditory displays. We have used P-Reverb for sound propagation and rendering in complex indoor scenes.

The rest of the paper is organized as follows. We give a brief overview of prior work in sound propagation and psychoacoustics in Section 2. We present our user evaluations establishing the P-Reverb metric in Section 3. We provide validation results in Section 4 and describe how our metric can be used in an interactive sound propagation system in Section 5.

2 Background & Related Work

In this section, we give an overview of prior work in sound propagation, psychoacoustic characteristics, and related areas.

2.1 Reverberation

Reverberation forms the late sound field and is generated by successive reflections as they diminish in intensity. Reverberation is regarded as a critical component of the sound field. Many acoustic parameters such as the reverberation time (RT60) and clarity index (C50 and C80) are used to characterize reverberation [15].

Figure 1: We highlight the different components of the sound field. The sound directly reaching the listener is called the direct sound, the reflections that arrive within the first few tens of milliseconds are called early reflections (ERs), and the reflections following the early reflections that show a decaying exponential trend are called late reflections or reverberation (LRs). Our P-Reverb metric establishes a new perceptual relationship between ERs and LRs, which we use for fast sound rendering.

2.1.1 Reverberation Time (RT60)

RT60 is defined as the time taken for the sound field to decay by 60 dB. A well-known expression used to compute the reverberation time is Sabine's formula, which relates the RT60 of a room to its volume, surface area, and the absorption coefficients of the materials used:

$$RT_{60} = \frac{0.161\,V}{S\,\bar{\alpha}} \qquad (1)$$

where V is the total volume of the room in m³, S is the total surface area in m², ᾱ is the average absorption coefficient of the room surfaces, and Sᾱ is the total absorption in sabins. In this paper, we use RT60 as the main reverberation parameter and use our P-Reverb metric for its fast computation in complex scenes.
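As a concrete illustration, Eq. 1 can be evaluated directly; the following minimal sketch (ours, with illustrative room values, not the engine's code) mirrors the formula:

```cpp
#include <iostream>

// Sabine's formula (Eq. 1), assuming SI units throughout.
// V: room volume (m^3), S: total surface area (m^2),
// alpha: average absorption coefficient of the surfaces.
double sabineRT60(double V, double S, double alpha) {
    const double k = 0.161;      // Sabine's constant (s/m)
    return k * V / (S * alpha);  // S * alpha = total absorption in sabins
}

int main() {
    // Illustrative example: a 5 m cube with moderately absorptive walls.
    double V = 5.0 * 5.0 * 5.0;  // 125 m^3
    double S = 6.0 * 5.0 * 5.0;  // 150 m^2
    std::cout << sabineRT60(V, S, 0.3) << " s\n";  // prints ~0.447
}
```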

2.1.2 Mean-Free Path

The mean-free path (MFP) of a point in the environment is defined as the average distance a sound ray travels between collisions with the environment, and it is directly related to RT60 [15]:

$$RT_{60} = K\,\frac{\mu}{-\ln(1-\bar{\alpha})} \qquad (2)$$

where K is the constant of proportionality, μ is the mean-free path, and ᾱ is the average surface absorption coefficient. A closed-form expression [2] for computing the mean-free path is given by:

$$\mu = \frac{4V}{S} \qquad (3)$$

where V is the volume of the environment and S is its surface area. The mean-free path can be computed to a reasonable degree of accuracy by considering only the specular reflection paths in the scene [28]. We use our P-Reverb metric for fast computation of RT60 using only ERs in complex scenes.
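For intuition, consider Eq. 3 for the cube-shaped rooms used in our experiments below. For a cube of edge length L,

$$\mu = \frac{4V}{S} = \frac{4L^3}{6L^2} = \frac{2L}{3},$$

so the 5 m cube in Table 1 has μ = 10/3 ≈ 3.33 m, matching the analytical value listed there.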

2.2 Sound Propagation and Acoustic Modeling

Artificial reverberators provide a simple mechanism to add reverberation to “dry” audio, which has led to their widespread adoption in the music industry, virtual acoustics, computer games, and user interfaces. One widely used artificial reverberator was introduced by Schroeder [25]; it uses nested digital all-pass filters in combination with a parallel bank of comb filters to produce a series of decaying echoes. These filters require parameters such as the reverberation time to tune the all-pass and comb filters. Geometric methods work on the underlying assumption of the rectilinear propagation of sound and use ray tracing to model the acoustics of the environment [14]. Other geometric methods include beam tracing [7] and frustum tracing [5]. In practice, ray tracing remains the most popular because of its relative simplicity and generality and because it can be accelerated on current multi-core processors. Over the years, research in ray tracing-based sound propagation has led to efficient methods to compute specular and diffuse reflections [24] for a large number of sound sources [23].

2.3 Early & Late Reflections: Psychoacoustics

Early reflections (ERs) have been shown to have a positive correlation with the perception of auditory spaciousness and are very important in concert halls. [1, 3] showed that adding early reflections generated the effect of “spatial impression” in subjects. Early reflections are also known to improve speech clarity in rooms. [4] showed that adding early reflections increased the signal-to-noise ratio and speech intelligibility scores for both impaired and non-impaired listeners. [9] showed that early reflections that come from the same direction as the direct sound reinforce localization, while those coming from the lateral directions tend to de-localize the sources.

Late reflections or reverberation (LRs) provide many perceptual cues. Source localization ability deteriorates in reverberant conditions, with localization accuracy decreasing in a room with reflecting walls compared to the same room with absorbing walls [9]. Reverberation also has a negative impact on speech clarity: [13] showed a reduction in the number of sounds heard correctly in the presence of reverberation. Although reverberation decreases localization accuracy and speech intelligibility, it is known to have positive effects on the perception of distance to a sound source in the absence of vision [31].

While there is considerable work on separately characterizing the perceptual effects of ERs or LRs, we are not aware of any work that establishes a relationship between the two. P-Reverb is a metric that establishes the relationship between their respective JNDs and uses it for interactive sound rendering.

2.4 Estimating Reverberation Parameters

Given the importance of reverberation to the overall sound field, multiple methods have been developed over the years to measure reverberation parameters. RT60, in particular, is considered the most important parameter in characterizing reverberation and has been referred to as ‘the mother of all room acoustic parameters’ [26]. The most commonly used method to estimate reverberation time was given by Schroeder and uses a backward time integration approach. [21] presents a method for blind estimation of RT60 that does not require prior knowledge of sound sources or room geometry, modeling reverberation as an exponentially damped Gaussian white-noise process. [17] describes a method to estimate reverberation time using a maximum-likelihood estimator from noisy observations. [29] presents a comparison of different methods for estimating RT60.

3 Perceptual Evaluations and P-Reverb

In this section, we describe two user evaluations that establish the just-noticeable difference (JND) for early and late reflections in terms of the mean-free path. Further, we show the relationship between the two JND values, thereby establishing our P-Reverb metric.

3.1 Experiment I - Just-noticeable difference of ERs

In this experiment, we seek to establish the just-noticeable difference (JND_ER) of sound rendered using only direct sound and early reflections. In Experiment II, we establish the relationship between JND_ER and sound rendered using the full simulation (direct + early + late reverberation).

Participants:

106 participants took part in this web-based, online study. The subjects were recruited using a crowd-sourcing service. All subjects were either native English speakers or had professional proficiency in the language.

Apparatus:

The online survey was set up in Qualtrics. The impulse responses were generated using an in-house, realtime, geometric sound propagation engine written in C++, while the convolutions to generate the final sounds were computed using MATLAB.

Stimuli:

The stimuli were sound clips derived from cube-shaped rooms with increasing edge lengths, chosen so that their MFPs (Eq. 3) increased in fixed increments. The range of lengths was chosen with the experimental goal in mind, namely, to extract a psychophysical function showing a gradient in perceived sound similarity relative to edge-length difference. The walls of the rooms had reflectivity similar to that of an everyday room. The source was a clapping sound, chosen because it represents a broadband signal. The clips were filtered into 4 logarithmically spaced frequency bands to evaluate the effects of frequency on the JND. Each of these filtered clapping sounds was convolved with the early impulse responses of the rooms. Each final sound clip contained three distinct parts: the clapping sound in Room 1, a second of silence, and the same clapping in one of the comparison rooms. All sounds were recorded assuming that the listener and the source were located at the origin. Given this symmetry, the sounds were rendered in mono with both speakers playing the same sound.

Design & Procedure:

To estimate the JND, our experiment used the method of constant stimuli [8] with a within-subject design. A stimulus comprised a sound clip containing Room 1 and one of the 7 possible comparison rooms (including Room 1 itself). For each clip the subjects heard, they were asked to identify whether the first clapping sound seemed different from the second clapping sound by selecting yes or no. Note this is a similarity judgment, not a discrimination. A block of judgments consisted of 28 clips (4 frequencies x 7 comparison rooms paired with Room 1). Each block was repeated 5 times, giving a total of 140 clips (4 frequencies x 7 possible rooms paired with Room 1 x 5 blocks). The ordering of the clips was randomized within a block. Each subject judged all stimuli. Before starting the experiments, subjects listened to a sample clip for familiarization. The subjects were required to have a pair of ear-buds or headphones to take part in the study.

Results & Analysis:

Fig. 2 shows the proportion of responses in which rooms were judged as sounding different, over all participants, as a function of the comparison room's μ. The first data point corresponds to rooms that were objectively identical, providing a baseline. The proportion increases essentially linearly with larger μ, showing greater discrimination, until the last room, after which the discriminatory ability seems to taper off. The standard errors are low and consistent, indicating the robustness of the results.

An interesting observation is the near-invariance of subjects' ability to discriminate across the frequency bands. This was verified by an ANOVA with factors of edge length (equivalently, μ) and frequency. The analysis showed significant main effects for both edge length and frequency. The interaction between edge length and frequency was also significant, reflecting that the performance decrement at the largest edge length is slightly greater for one of the frequency bands. However, the effect sizes involving frequency are very low. Thus, while the effects of frequency reach statistical significance, they are small and do not reflect consistent variation across edge length (or μ). Therefore, for the purposes of constructing an overall rule, using data averaged over the frequencies is a valid simplification, particularly if the largest edge length is excluded. Fig. 3 shows the results averaged over frequency for the remaining rooms. As shown in the figure, the data fits a linear function well. Given our linear fit:

$$P(\mu) = a\,\mu + b \qquad (4)$$

where P is the proportion of "different" responses and a and b are the fitted slope and intercept, we can easily estimate JND_ER by considering the MFP value at which the subjects successfully discriminated the sounds 50% of the time, given by P(μ) = 0.5. This tells us that a change in μ greater than 0.06 m results in perceptually differentiable sounds when using early impulse responses, but it does not necessarily indicate whether the relationship holds when the sounds are rendered using the full impulse response (LR). This led us to conduct the next experiment to establish the relationship between the JND of early reflections (JND_ER) and the JND of the full impulse response or late reverberation (JND_LR).

Figure 2: The psychometric function for sound rendered using the early reflections for the 4 frequency bands. The Y-axis shows the proportion of responses indicating the sounds were different. We see a clear, linear trend between increasing μ and the probability of responding different, until the last room, where the responses seem to flatten out.
Figure 3: The average JND over the frequency bands. The Y-axis shows the proportion of responses indicating that a difference was judged. The psychophysical function is essentially linear, showing that the probability of judging the sounds as different increases linearly with the increasing mean-free paths of the rooms.

3.2 Experiment II - Relationship between JND_ER & JND_LR

Once we have established the perceptibility threshold, or JND_ER, of ERs, we need to relate it to the JND_LR of LRs. Our goal is to use these relationships to cluster points with similar reverberation characteristics. We conducted another user study based on the results of the first study, described above.

Participants

A second group of participants took part in this online, web-based study. The subjects were recruited using the same crowd-sourcing service as in the previous experiment. All subjects were either native English speakers or had professional proficiency in the language.

Apparatus

The apparatus was the same as in Experiment I. The full impulse responses were generated using our in-house, realtime, geometric sound propagation engine written in C++, with the convolutions being computed using MATLAB.

Stimuli

The sound source was the same as in the previous experiment, filtered into the same logarithmically spaced frequency bands. Given our goal of establishing the relationship between JND_ER and JND_LR, we use our previously computed psychometric function (Eq. 4) to compute the μ values corresponding to a range of detection rates. This gives us a set of μ values that can then be used to compute the cube rooms' edge lengths using Eq. 3. These rooms and Room 1 from the previous experiment serve as the environments in which the full impulse responses are computed. The material properties of the rooms were the same as in the previous experiment. Each sound clip was longer in this case because of the increased length of the full impulse response, with clapping in Room 1, followed by a second of silence, followed by clapping in one of the comparison rooms. The total number of sound clips was 140, as before (4 frequency bands x 7 rooms x 5 blocks). The ordering of the sound clips was randomized within each block.

Design & Procedure

The study design was the same as in the ER study. Before starting the study, the subjects were asked to listen to a sample sound clip drawn from the clips computed above for familiarization. The source and listener were located at the origin, and the sound was rendered in mono.

Figure 4: The psychometric function for sound rendered using the full impulse response for the 4 frequency bands. The Y-axis shows the proportion of responses indicating that sounds were judged to be different. In this case, we observe more variability across the frequency bands, which could be attributed to the greater sensitivity of human hearing to the more accurate signal (compared to the less accurate ER signal). Overall, however, the responses can be modeled as a linear function with reasonable accuracy.
Results & Analysis

Fig. 4 shows the proportion of responses judging the sounds as different as a function of increasing μ (or edge length). As before, we performed an ANOVA to assess the effects of edge length and frequency. The analysis showed significant main effects for edge length and frequency, and the interaction between the two was also significant. Again, the effect sizes for terms involving frequency were low, allowing us to average the responses over the frequency bands. Fig. 5 shows the values averaged over the frequency bands.

Figure 5: The average JND over the frequency bands for the full impulse response signal. The Y-axis shows the proportion of responses indicating a judgment of difference. The psychophysical function is not as linear as for the early reflection signal, but a linear function approximates the subject responses reasonably well, accounting for most of the variability.

3.3 P-Reverb Metric

Fig. 6 shows the relationship between the sounds rendered using only the early responses and the sounds rendered using the full impulse response. Note that the first point of both functions corresponds to two identical stimuli, where no difference is expected. However, beginning at the smallest edge lengths where objectively different stimuli were presented, the figure shows that the subjects were more likely to differentiate between sounds rendered with the full impulse response than between sounds rendered using only the early reflections. This difference is expected, because the full impulse response conveys more information about the space and should enable better perceptual differentiation than the early impulse response, giving a lower JND for the full impulse response, i.e., JND_LR < JND_ER.

To establish a mathematical relationship between the two JNDs, we consider the ratio of the mean-free paths in both cases. The resulting fits are shown in Fig. 7. The linear fits are almost coincident after a constant offset of 0.02 is added to the ratio for the full impulse response, i.e.,

$$P_{ER}\!\left(\tfrac{\mu}{\mu_1}\right) \approx P_{LR}\!\left(\tfrac{\mu}{\mu_1} + 0.02\right) \qquad (5)$$

which gives a simple relationship between the two JND values:

$$JND_{LR} = JND_{ER} - 0.02\,\mu_1 \qquad (6)$$

where μ₁ is the mean-free path of the reference room (Room 1). Eq. 6 is the simple mathematical relationship, or P-Reverb metric, between the JND values of the two signals. Given JND_ER as derived above, we can easily compute the value of JND_LR for the reference room (Room 1), giving us the percentage change (Δμ) in the mean-free path values that constitutes the JND for late reverberation when using early reflections.
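To spell out the step from Eq. 5 to Eq. 6: because the two fitted lines coincide after the 0.02 offset, they reach the 50% discrimination threshold at mean-free path ratios that differ by exactly that offset,

$$\frac{\mu^{LR}_{JND}}{\mu_1} + 0.02 = \frac{\mu^{ER}_{JND}}{\mu_1} \;\Longrightarrow\; JND_{LR} = JND_{ER} - 0.02\,\mu_1.$$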

It turns out that Eq. 6 can be interpreted as a “first-order” approximation to a function that expresses the mathematical relationship between two multi-dimensional perceptual phenomena that are dependent on frequency, edge length, method of rendering, material parameters, etc. However, any function that accounted for the small frequency dependencies in the observed psychometric data and accommodated the effects of more complex environments and material parameters would have to be substantially more complicated than the linear relationship that we derive here. The value of the present formulation lies in its reasonable approximation of the observed effects with only one derived parameter.

We would also like to note that, although psychometric functions are usually fitted using sigmoid functions, our design did not require us to do so. A sigmoid would have been suitable had we started with a standard value somewhere in the middle of the range and taken values both above and below it: the smaller non-standard stimuli would be judged smaller close to 100% of the time, and the larger ones would likewise be judged larger close to 100% of the time, yielding the two saturated tails of a sigmoid. In our approach, however, we never tested anything smaller than the standard, so the responses simply rose toward the ceiling. Consequently, a linear fit to this function accounted for most of the variance. A slightly better fit could be achieved with a quadratic, but at the expense of adding a parameter; a sigmoid, too, would add another parameter without yielding much gain. Therefore, given that our linear fit accounts for most of the variability, we chose not to use a sigmoid fit.

Figure 6: This plot shows the overlaid psychometric functions for signals rendered using the early reflections (blue) and the full impulse response (orange). Note that the first data point corresponds to differences being reported when the stimuli are objectively identical. Although the full impulse response data shows a greater departure from a linear relationship beyond that point, the results are similar to the early reflection function, offset by a constant, allowing us to establish a simple, linear relationship between JND_ER and JND_LR in terms of the mean-free path.
Figure 7: The psychometric function with a constant offset adjustment. We consider the ratio of the mean-free path for the different rooms to the mean-free path of Room 1. The resulting linear fits for the two cases (early reflections and full impulse response) coincide once a constant offset of 0.02 is added to the ratio for the full impulse response. This highlights the accuracy of our model.

4 Results & Evaluation

Our approach consists of two primary numerical steps: computing the mean-free path using early reflections (ERs) and predicting RT60 using our perceptually established P-Reverb metric. We first validate the use of early reflections (ERs) to compute the mean-free path (μ) in various environments. Next, we highlight the validation of the P-Reverb metric in terms of its accuracy in predicting RT60.

Figure 8: We highlight the application of the P-Reverb metric to predict variations in RT60 in a scene composed of interconnected rooms of different shapes and volumes: (a) shows the variation in μ along a path that goes through three rooms with volumes 135 m³, 256 m³, and 125 m³ from left to right; (b) shows three regions along the path, roughly corresponding to the three rooms, where μ changes within the JND specified by the P-Reverb metric. This indicates that the reverberation in these regions would vary imperceptibly, as is indicated by the uniformity of the RT60 values; (c) shows rapidly varying μ values as one approaches the apertures between the connected rooms, indicating that RT60 would also vary rapidly. This is expected because the geometry varies rapidly in these regions, and it validates the accuracy of our perceptual P-Reverb metric.

4.1 Mean-Free Path Computation

Our P-Reverb metric depends on mean-free path values computed numerically using early reflections. The mean-free path is the average distance a sound ray travels between collisions, and we use ERs to estimate this distance. As mentioned, Eq. 3 can be used to compute the mean-free path analytically in terms of the volume (V) and surface area (S). Table 1 highlights the accuracy of our ER-based mean-free path estimates compared to the analytical values given by Eq. 3. We trace a set of N rays with M = 20 bounces per ray and compute our estimate as:

$$\hat{\mu} = \frac{1}{NM}\sum_{i=1}^{N}\sum_{j=1}^{M} d_{ij} \qquad (7)$$

where d_ij is the distance traveled by the i-th sound ray on its j-th bounce, N is the total number of rays, and M is the number of bounces per ray.
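The following self-contained sketch (ours, not the engine's code) applies this estimator to an empty box room, where the wall-intersection step can be written in a few lines; the ray count is an illustrative choice, and the 20 bounces match the setting above:

```cpp
// Monte Carlo sketch of Eq. 7 for an axis-aligned box room; the box tracer
// stands in for the in-house engine's specular ray tracer.
#include <cmath>
#include <iostream>
#include <random>

int main() {
    const double L[3] = {5.0, 5.0, 5.0};   // 5 m cube, as in Table 1
    const int N = 10000, M = 20;           // rays, and bounces per ray
    std::mt19937 rng(42);
    std::normal_distribution<double> gauss(0.0, 1.0);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        double p[3], d[3], norm = 0.0;     // ray origin and direction
        for (int k = 0; k < 3; ++k) { p[k] = uni(rng) * L[k]; d[k] = gauss(rng); norm += d[k] * d[k]; }
        norm = std::sqrt(norm);
        for (int k = 0; k < 3; ++k) d[k] /= norm;  // random unit direction
        for (int j = 0; j < M; ++j) {
            // distance t to the nearest wall along d
            double t = 1e30; int wall = 0;
            for (int k = 0; k < 3; ++k) {
                if (d[k] > 0)      { double tk = (L[k] - p[k]) / d[k]; if (tk < t) { t = tk; wall = k; } }
                else if (d[k] < 0) { double tk = -p[k] / d[k];         if (tk < t) { t = tk; wall = k; } }
            }
            sum += t;                                  // d_ij: free path of this bounce
            for (int k = 0; k < 3; ++k) p[k] += t * d[k];
            d[wall] = -d[wall];                        // specular reflection off that wall
        }
    }
    // prints ~3.3, close to the analytic 4V/S = 2L/3 = 3.33 m for the cube
    std::cout << "estimated mean-free path: " << sum / double(N * M) << " m\n";
}
```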

Scene | Dimensions (m) | μ (ER estimate) | μ (analytic, Eq. 3) | Error (%)
Cube | edge 5 | 3.3 | 3.33 | 1
Rect. Prism | 2 x 3 x 4 | 1.87 | 1.85 | 1.3
Sq. Pyramid | base 2.8, height 3 | 1.16 | 1.18 | 1.7
Room with Pillars | 5 x 6 x 12 | 3.14 | 3.04 | 3
Table 1: Mean-Free Path Computation: We show the accuracy of computing μ using early reflections for differently shaped rooms. The closed-form expression in Eq. 3 gives the analytical value of the mean-free path for each room. We observe that ERs can closely approximate the analytically obtained μ. The Room with Pillars is shown in Fig. 9. Even for a scene with multiple obstacles, our method computes the mean-free path while inducing a maximum error of only 3%.

4.2 RT60 Computation using P-Reverb

The P-Reverb metric predicts regions in a scene where the late reverberation is likely to vary imperceptibly. Conversely, it can estimate regions where the late reverberation would vary in a perceptually noticeable manner. We demonstrate the effectiveness of the metric in finding regions of similar reverberation characteristics by considering the scene shown in Fig. 8. The scene is composed of interconnected rooms of varying shapes and volumes. Since reverberation is a function of the volume and shape of a room, it is likely to vary as one moves from one room to another. We consider a path that traverses three different connected rooms and compute the mean-free path along it using ERs. Fig. 8(a) shows the variation in μ as we move along the path. We group the regions along the path where μ varies within the JND threshold computed using our P-Reverb metric (as shown in Fig. 8(b)) into Regions 1, 2, and 3. Based on the metric, each such region is likely to have imperceptible variation in RT60. We illustrate this in Table 2. Here μ_avg is the average mean-free path over the entire region, and Δμ is the maximum difference from μ_avg over all points in that region (i.e., a measure of variance). Similarly, RT60_avg is the average reverberation time for the region, while ΔRT60 is the maximum difference from RT60_avg. For regions where μ varies within the JND specified by the P-Reverb metric, the RT60 values vary within 5% of RT60_avg. This is within the established JND values for RT60, as specified in ISO 3382-1 [10], and corresponds to imperceptible changes in late reverberation.

Fig. 8(c) shows rapidly varying μ values as one moves from one room to another. This indicates that the reverberation, or RT60, would vary rapidly in these regions. Since none of these values falls within the JND specified by P-Reverb, they cannot be grouped into regions where the change in reverberation would be imperceptible. This is expected, because the coupling of spaces is known to affect the sound energy flow and the change of RT60 close to the coupling aperture [11].

Region | μ_avg (m) | Δμ (%) | RT60_avg (s) | ΔRT60 (%)
1 | 2.45 | 1.1 | 0.65 | 4.6
2 | 3.64 | 0.5 | 1.27 | 2.4
3 | 3.11 | 1.1 | 1.03 | 3.6
Table 2: Mean-Free Path and Reverberation Time Computation: We show the average values computed using high-order ray tracing for the three regions shown in Fig. 8, along with the maximum differences from those averages. Each region corresponds to perceptually uniform reverberation based on our P-Reverb metric. The RT60 values show a maximum variation of 4.6%, which is within the JND values of RT60. The reference RT60 was computed using high-order ray tracing (300 bounces).
Figure 9: Room with Pillars: We illustrate the room with pillars and use this benchmark to estimate the effectiveness of our mean-free path computation in complex environments with obstacles. We observe only about 3% error using our early-reflection-based method.
Figure 10: The figure shows how the P-Reverb metric can be used to estimate regions where RT60 would vary imperceptibly in a scene. The left figure shows a typical listener path in a scene. At each point along this path, we compute the mean-free path using the early-reflection-based method described above. Then, using the P-Reverb metric, we cluster the points based on μ, giving clusters along the path within which RT60 would vary imperceptibly, as shown in the right figure.

5 Interactive Sound Propagation

In this section, we describe how the P-Reverb metric can be used for interactive sound propagation. As described in Sections 1 and 2, the sound reaching the listener from a source has three components: direct sound, early reflections, and late reverberation, as shown in Fig. 1. Geometric sound propagation algorithms use methods such as ray tracing to compute the ERs and LRs in the scene. Although early reflections can be computed cheaply, late reverberation computation remains a major bottleneck, as accuracy requires very high-order ray bounces in the scene, making these methods resource-heavy. This prevents their use in interactive environments such as games, which instead tend to use cheap filter-based approaches (digital reverberation filters) to simulate late reverberation. Reverberation filters require parameters such as RT60 to approximate late reverberation in an environment. One way in which reverberation filters can be parameterized accurately is to precompute the RT60 values along the listener's path in the scene using a high-fidelity geometric sound propagation algorithm such as [24] and then use these precomputed values in the filter at runtime. This avoids costly high-order ray tracing to simulate reverberation at runtime, but it can incur a high precomputation cost, since high-order ray tracing must be run for every point along the listener's path. We now describe how using our P-Reverb metric reduces the cost of precomputing RT60 values in the scene.

5.1 Sound Propagation using P-Reverb

Figure 11: The figure shows the schematic of a typical Schroeder-type filter used in our implementation. The input is processed through a parallel bank of comb filters that create delayed versions of the input signal. The output of this parallel bank goes through a series connection of all-pass filters. These filters require parameters like RT60 to approximate the late reverberation in a scene.
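The structure in Fig. 11 can be sketched as follows. This is a minimal illustration, not our engine's implementation: the delay lengths and all-pass gains are generic textbook values, and only the comb feedback gains are derived from the precomputed RT60 (given a sample rate fs):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Feedback comb: y[n] = x[n - D] + g * y[n - D].
class Comb {
    std::vector<float> buf; std::size_t idx = 0; float g;
public:
    Comb(std::size_t delay, float gain) : buf(delay, 0.0f), g(gain) {}
    float process(float x) {
        float y = buf[idx];            // value delayed by D samples
        buf[idx] = x + g * y;          // feed back into the delay line
        idx = (idx + 1) % buf.size();
        return y;
    }
};

// Schroeder all-pass: flat magnitude response, dense echo buildup.
class Allpass {
    std::vector<float> buf; std::size_t idx = 0; float g;
public:
    Allpass(std::size_t delay, float gain) : buf(delay, 0.0f), g(gain) {}
    float process(float x) {
        float d = buf[idx];
        float y = -g * x + d;
        buf[idx] = x + g * y;
        idx = (idx + 1) % buf.size();
        return y;
    }
};

// Comb feedback gain so the loop decays by 60 dB in rt60 seconds:
// g = 10^(-3 * D / (rt60 * fs)).
float combGain(std::size_t delay, float rt60, float fs) {
    return std::pow(10.0f, -3.0f * float(delay) / (rt60 * fs));
}

// Parallel comb bank followed by two series all-passes (Fig. 11).
struct SchroederReverb {
    std::vector<Comb> combs;
    Allpass ap1{225, 0.7f}, ap2{556, 0.7f};             // illustrative delays
    SchroederReverb(float rt60, float fs) {
        for (std::size_t d : {1557u, 1617u, 1491u, 1422u})  // ~33-37 ms at 44.1 kHz
            combs.emplace_back(d, combGain(d, rt60, fs));
    }
    float process(float x) {
        float s = 0.0f;
        for (auto& c : combs) s += c.process(x);        // parallel bank
        return ap2.process(ap1.process(0.25f * s));
    }
};
```

At runtime, a SchroederReverb constructed with the RT60 of the listener's current cluster can then process the dry signal sample by sample.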

5.1.1 Precomputation

We use our P-Reverb metric to accelerate the precomputation of late reverberation for an interactive sound propagation system that uses a Schroeder-type reverb filter to simulate late reverberation (Fig. 11). We sample a given scene at multiple points along the listener's path and use a geometric sound propagation method [24] to compute early reflections, placing an omni-directional sound source at each of these points and tracing specular reflections. Next, using Eq. 7, we estimate the mean-free path at each of these points. Using the P-Reverb metric, we cluster points on the path within which μ varies within its JND, indicating that these regions will have perceptually similar RT60 values (Fig. 10); a minimal sketch of this step follows below. Finally, using [24], we compute the RT60 value once for each cluster using high-order (300 bounces) ray tracing to get a high-quality estimate. Table 3 shows the speed-up obtained using the P-Reverb metric in the precomputation stage. The results were obtained on a multi-core desktop using a single thread for the computations.
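The clustering step can be sketched as follows (our own minimal version; the greedy strategy and names are illustrative, and any grouping that keeps μ within JND_LR per cluster would work):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// A contiguous run of path points [begin, end) with one precomputed RT60.
struct Cluster { std::size_t begin, end; double rt60 = 0.0; };

// Group path points whose mean-free paths stay within jndLR of the
// cluster's first point; one high-order RT60 simulation is then run
// per cluster instead of per point.
std::vector<Cluster> clusterByJND(const std::vector<double>& mfp, double jndLR) {
    std::vector<Cluster> clusters;
    std::size_t start = 0;
    for (std::size_t i = 1; i < mfp.size(); ++i) {
        if (std::fabs(mfp[i] - mfp[start]) > jndLR) {  // perceptible change in mu
            clusters.push_back({start, i});
            start = i;
        }
    }
    clusters.push_back({start, mfp.size()});           // close the final cluster
    return clusters;
}
```

Each cluster's rt60 is then filled in by a single high-order simulation at a representative point, which is where the precomputation speed-up in Table 3 comes from.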

5.1.2 Runtime

At runtime, the direct sound computation is done through visibility testing: if a source is visible to the listener, its distance to the listener is used to attenuate the sound pressure according to the inverse distance law. The late reverberation computation uses the RT60 values precomputed in the previous stage. Given the listener position, a look-up is performed to ascertain which precomputed cluster the listener belongs to. Since an RT60 value is associated with each cluster, that value is used as the parameter of the reverberation filter. As long as the listener is within this cluster, the P-Reverb metric tells us that the RT60 value varies imperceptibly. The accompanying video shows the performance of our metric on three different scenes.
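A sketch of this runtime logic, reusing the Cluster struct from the precomputation sketch (the distance clamp and lookup are illustrative choices):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Cluster { std::size_t begin, end; double rt60; };  // as precomputed above

// Direct sound: inverse distance law, gated by the visibility test.
// The 1 m clamp avoiding a blow-up near the source is our choice.
float directGain(float distance, bool visible) {
    return visible ? 1.0f / std::max(distance, 1.0f) : 0.0f;
}

// Late reverberation: find the cluster containing the listener's sampled
// path index and return its precomputed RT60 for the reverberation filter.
double lookupRT60(const std::vector<Cluster>& clusters, std::size_t pathIdx) {
    for (const Cluster& c : clusters)
        if (pathIdx >= c.begin && pathIdx < c.end) return c.rt60;
    return clusters.back().rt60;  // off the sampled range: reuse last cluster
}
```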

5.2 Benchmarks

Sun Temple

This scene exhibits spatially varying reverberation effects. As the listener moves throughout the scene, the reverberation characteristics vary from almost dry in the semi-outdoor part of the temple to highly reverberant in the inner sanctum.

Shooter Game

This scene showcases the ability of our method to handle very large scenes. It is an archetypal video game environment with multiple levels. As the listener moves from one part of the scene to another, it demonstrates our method's ability to handle highly varying, large virtual environments.

Tuscany

This scene has two different structures, a house and a cathedral, separated by an outdoor garden. The two structures have very different reverberant characteristics owing to their different geometries, and as the listener moves from the house through the outdoor garden to the cathedral, the reverberation varies accordingly.

Scene | Triangles | P | T_ER | T_LR | C | Speed-up
Sun Temple | 215k | 2301 | 40.2 | 124.2 | 53 | 3x
Tuscany | 135k | 1945 | 47.5 | 150.7 | 110 | 3x
Shooter Game | 49k | 3235 | 16.7 | 68.4 | 43 | 4x
Table 3: Precomputation Performance Analysis: We highlight the speed-up in the precomputation stage obtained using the P-Reverb metric. P is the number of points along the listener path, T_ER is the average time taken at each point using ERs, T_LR is the average time taken at each point using LRs, and C is the number of clusters found using our metric.

6 Conclusion, Limitations and Future Work

We present a novel perceptual metric, P-Reverb, that highlights the relationship between the JNDs of early reflections and late reverberation. Our metric is based on two user studies and can be used for fast computation of mean-free paths and reverberation times in complex environments without high-order ray tracing. Our metric can also be used to predict regions in an environment where the reverberation time is likely to vary within its JND value. We evaluate the accuracy of these perceptual metrics and find them to be within about 5% of the actual values on our benchmarks.

Our approach has some limitations. Our metric computation may not work in completely open environments, since the mean-free path computation depends on the presence of collisions with obstacles in the scene. Our metric can be regarded as an approximation to a complex function that corresponds to a multi-dimensional perceptual phenomenon dependent on source frequency, scene dimensions, method of sound rendering, material parameters, etc. As a result, we need to perform more evaluations that take other parameters into account. While we observe high accuracy in our current benchmarks, the accuracy could vary in more complex scenes. Further, our metric tends to be conservative and overestimates the number of regions with similar RT60, resulting in more full simulations than optimal. That being said, it still significantly reduces the number of full simulations, as shown in Table 3. Our experimental work also has limitations, including the restricted range of room sizes (motivated by the psychophysical goal), the fixed listener, and the restriction to mono rendering. As part of future work, we would like to overcome these limitations, further evaluate our approach on complex scenes, and use it for multi-modal rendering.

7 Acknowledgments

The authors would like to thank the subjects who took part in the user-study. This work was supported in part by ARO grant W911NF-18-1-0313, NSF grant 1840864, and Intel.

References

  • [1] M. Barron. The subjective effects of first reflections in concert halls—the need for lateral reflections. Journal of sound and vibration, 15(4):475–494, 1971.
  • [2] A. Bate and M. Pillow. Mean free path of sound in an auditorium. Proceedings of the Physical Society, 59(4):535, 1947.
  • [3] J. Blauert and W. Lindemann. Auditory spaciousness: Some further psychoacoustic analyses. The Journal of the Acoustical Society of America, 80(2):533–542, 1986.
  • [4] J. Bradley, H. Sato, and M. Picard. On the importance of early reflections for speech in rooms. The Journal of the Acoustical Society of America, 113(6):3233–3244, 2003.
  • [5] A. Chandak, C. Lauterbach, M. Taylor, Z. Ren, and D. Manocha. Ad-frustum: Adaptive frustum tracing for interactive sound propagation. IEEE Transactions on Visualization and Computer Graphics, 14(6):1707–1722, 2008.
  • [6] E. Dubois, P. Gray, and L. Nigay. The engineering of mixed reality systems. Springer Science & Business Media, 2009.
  • [7] T. Funkhouser, I. Carlbom, G. Elko, G. Pingali, M. Sondhi, and J. West. A beam tracing approach to acoustic modeling for interactive virtual environments. In Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pp. 21–32. ACM, 1998.
  • [8] G. A. Gescheider. Psychophysics: the fundamentals. Psychology Press, 2013.
  • [9] W. M. Hartmann. Localization of sound in rooms. The Journal of the Acoustical Society of America, 74(5):1380–1391, 1983.
  • [10] ISO 3382-1:2009. Acoustics – Measurement of room acoustic parameters – Part 1: Performance spaces. International Organization for Standardization, 2009.
  • [11] Y. Jing and N. Xiang. Visualizations of sound energy across coupled rooms using a diffusion equation model. The Journal of the Acoustical Society of America, 124(6):EL360–EL365, 2008.
  • [12] M. Kleiner, B.-I. Dalenbäck, and P. Svensson. Auralization-an overview. Journal of the Audio Engineering Society, 41(11):861–875, 1993.
  • [13] V. O. Knudsen. Architectural acoustics. 1932.
  • [14] A. Krokstad, S. Strom, and S. Sørsdal. Calculating the acoustical room response by the use of a ray tracing technique. Journal of Sound and Vibration, 8(1):118–125, 1968.
  • [15] H. Kuttruff. Room acoustics. Crc Press, 2016.
  • [16] P. Larsson, D. Vastfjall, and M. Kleiner. Better presence and performance in virtual environments by improved binaural sound rendering. In Audio Engineering Society Conference: 22nd International Conference: Virtual, Synthetic, and Entertainment Audio. Audio Engineering Society, 2002.
  • [17] H. W. Löllmann and P. Vary. Estimation of the reverberation time in noisy environments. In Proc. of Intl. Workshop on Acoustic Echo and Noise Control (IWAENC), 2008.
  • [18] R. Mehra, N. Raghuvanshi, L. Antani, A. Chandak, S. Curtis, and D. Manocha. Wave-based sound propagation in large open scenes using an equivalent source formulation. ACM Transactions on Graphics (TOG), 32(2):19, 2013.
  • [19] R. Mehra, N. Raghuvanshi, L. Savioja, M. C. Lin, and D. Manocha. An efficient gpu-based time domain solver for the acoustic wave equation. Applied Acoustics, 73(2):83–94, 2012.
  • [20] N. Raghuvanshi, J. Snyder, R. Mehra, M. Lin, and N. Govindaraju. Precomputed wave simulation for real-time sound propagation of dynamic sources in complex scenes. In ACM Transactions on Graphics (TOG), vol. 29, p. 68. ACM, 2010.
  • [21] R. Ratnam, D. L. Jones, B. C. Wheeler, W. D. O’Brien Jr, C. R. Lansing, and A. S. Feng. Blind estimation of reverberation time. The Journal of the Acoustical Society of America, 114(5):2877–2892, 2003.
  • [22] A. Rungta, S. Rust, N. Morales, R. Klatzky, M. Lin, and D. Manocha. Psychoacoustic characterization of propagation effects in virtual environments. ACM Transactions on Applied Perception (TAP), 13(4):21, 2016.
  • [23] C. Schissler and D. Manocha. Interactive sound propagation and rendering for large multi-source scenes. ACM Transactions on Graphics (TOG), 36(1):2, 2017.
  • [24] C. Schissler, R. Mehra, and D. Manocha. High-order diffraction and diffuse reflections for interactive sound propagation in large environments. ACM Transactions on Graphics (TOG), 33(4):39, 2014.
  • [25] M. R. Schroeder and B. F. Logan. “Colorless” artificial reverberation. IRE Transactions on Audio, 9(6):209–214, 1961.
  • [26] M. Skålevik. Reverberation time–the mother of all room acoustic parameters. In CD Proceedings of 20th International Congress on Acoustic, ICA, vol. 10, 2010.
  • [27] V. Valimaki, J. D. Parker, L. Savioja, J. O. Smith, and J. S. Abel. Fifty years of artificial reverberation. IEEE Transactions on Audio, Speech, and Language Processing, 20(5):1421–1448, 2012.
  • [28] M. Vorländer. Room acoustical simulation algorithm based on the free path distribution. Journal of sound and vibration, 232(1):129–137, 2000.
  • [29] M. Vorländer and H. Bietz. Comparison of methods for measuring reverberation time. Acta Acustica united with Acustica, 80(3):205–215, 1994.
  • [30] J. Wilson, B. N. Walker, J. Lindsay, C. Cambias, and F. Dellaert. Swan: System for wearable audio navigation. In Wearable Computers, 2007 11th IEEE International Symposium on, pp. 91–98. IEEE, 2007.
  • [31] P. Zahorik, D. S. Brungart, and A. W. Bronkhorst. Auditory distance perception in humans: A summary of past and present research. ACTA Acustica united with Acustica, 91(3):409–420, 2005.
  • [32] P. Zahorik and F. L. Wightman. Loudness constancy with varying sound source distance. Nature neuroscience, 4(1):78, 2001.