1 Introduction
Dust particles recovered from beneath the soles of an individual’s shoes consist of a mixture of dust particles collected from different sources and may be indicative of the locations recently visited by that individual. In particular, this dust may reveal that individual’s presence at a location of interest, e.g., the scene of a crime. The contributions of these locations to the mixture may vary as a function of the amount of dust present at each location, the time spent by the individual at each location, the activity of the individual at each location, or how recently the individual visited each location. The profile of a given source of dust can be described by a multinomial distribution over a fixed number of dust particle types, which enables us to describe the mixture of dust recovered from the sole of a shoe using latent Dirichlet allocation (Blei et al., 2003).
In this paper, we describe an algorithm that resolves mixtures of dust to address two different questions of forensic interest. Given a set of samples, recovered from one or more objects of forensic interest, that consist of mixtures of dust from several sources, together with samples of dust from known sources, we are interested in:

inferring the dust profiles of the unknown sources;

inferring the proportions in which dust from each of the sources is present in the samples.
An example of the first inference question may arise when, given an individual suspected of kidnapping whose home and workplace are known, we are interested in providing information on the dust composition of the unknown location where the victim is being held. Provided that the necessary data and a suitable inference framework exist, the second inference question may be useful for discussing issues such as (a) how long a person stayed at each location, (b) how recently the person visited each location, or (c) what type of activity the person engaged in at each location.
We use latent Dirichlet allocation (LDA) (Blei et al., 2003) to define the generative process that produces mixtures of dust samples. We use variational Bayesian inference (VBI) (Hoffman et al., 2013; Blei et al., 2016) to study the parameters of our model and to address these two inference questions.
Currently, the use of dust evidence is anecdotal and is limited to cases where rare and characteristic particles are observed (e.g., pollen, seeds, spores; see Bull et al. (2006); Mildenhall (2006); Mildenhall et al. (2006); Stoney et al. (2011); Bryant (2014); Stoney and Stoney (2014) for a discussion of these methods). While most evidence types considered by forensic scientists result from the interactions between criminals and objects or victims at crime scenes, dust evidence arises from the mere presence of individuals at locations of interest, and can therefore be observed regardless of the activity or actions that occur between criminals and objects or victims at the location of interest. Thus, the goal of this paper is to explore the statistical foundations of a new paradigm for the contribution of forensic science to criminal investigations.
2 Applying LDA to mixtures of dust
Topic models aim at discovering hidden semantic structures in a body of documents by grouping together words that are likely to have originated from the same themes or authors. By generalising this concept, LDA can be extended to arbitrary mixtures of objects represented by categorical random vectors, such as particle types.
Dust sources generate particles that can be observed at different locations, such as a crime scene or the house of a suspect, or one or more objects of forensic interest, such as shoe soles. The model presented in this paper uses dust samples collected from relevant geographical locations and from the surfaces of objects of forensic interest to make inferences on the dust profiles at the different locations, and on their respective contributions to the dust mixture observed on the forensic objects.
To help describe our research questions, the following parallel between topic modelling and the dust problem can be made:
Several authors, each one specialised in a single topic, jointly contribute to a book. We do know the speciality of each author, but we do not know what each topic looks like in terms of the proportions of the different words in the dictionary that are used in each one of them. We are interested in inferring the respective contribution of each author to the book. If we can obtain single-topic documents from all authors, the model can learn the topics from the single-topic documents, and then resolve the mixture of topics in the book. If we can only obtain single-topic documents from a subset of the authors, the model can learn some of the topics from the single-topic documents, and then learn the remaining topics, as well as the contribution of all authors, from the book.
This scenario shows that our inference questions are somewhat different from those of more traditional topic models. We are less concerned with what the different topics look like than with their mixture proportions in a given document. In addition, we have the possibility of inferring some of the topic profiles by observing single-topic documents. That said, the number of authors (and therefore the number of topics) is not always known and, by extension, neither is what their corresponding topics look like. When framed in the context of the forensic analysis of dust, the above scenario can be rewritten as:
Several geographical locations, each one receiving dust from a single source, were visited by a shoe. We do know that each location is only associated with a single dust source, but we do not know the dust profiles of these locations. We are interested in inferring the respective contribution of each location to the trace dust mixture observed on the surface of the sole of the shoe. If we can obtain dust samples from each location, the model can learn the dust profiles of the locations from these samples, and then resolve the mixture of dust under the shoe. If we can only obtain dust samples from some of the locations, the model can learn some of the dust profiles from the dust samples directly obtained at the locations, and learn the remaining profiles, as well as the respective contribution of the different locations, from the dust mixture recovered on the shoe. (Appendix A provides a glossary of terms that connects the vocabulary presented for dust modelling to the more familiar vocabulary of topic modelling, to further assist the reader with our development.)
In the dust scenario, we make a distinction between the location where dust can be sampled and the source from which the dust originates. Our terminology further subdivides locations by differentiating geographical locations, which correspond to any place that an individual might have visited and where dust might be sampled, from trace locations, which correspond to a location or object where evidentiary dust samples are collected. Critically, our model considers that all the dust at a given geographical location originates exclusively from a single source, while the dust observed on the surfaces of trace objects potentially originates from more than one source (see assumptions (d) and (e) below). This constraint allows for learning the dust profiles of the different geographical locations that might have been visited by an individual, and provides a basis to determine whether dust from any of these locations is present in the mixture observed on the trace object.
Our model differs from the original LDA model proposed by Blei et al. (2003) in that these authors consider that a corpus consists of multiple documents, which are all composed of a mixture of the same topics in the same proportions. In our application, we consider that a corpus consists of multiple documents composed of the same topics, but in varying proportions. In other words, our corpus consists of multiple dust samples: some originating from geographical locations that are known to have been visited by the evidentiary objects (thus representing single-source dust profiles), and some originating from trace locations (thus representing mixtures of dust profiles, whose mixture proportions may differ from one evidentiary object to another). This implies that we consider that the parameters that control the contributions of the different dust sources to the dust samples, and the parameters that control the particle profiles of these sources, are distributed according to asymmetric Dirichlet distributions. In our implementation, these hyperparameters are represented by matrices rather than vectors.
Our application also differs from supervised topic models (Blei and McAuliffe, 2007; Lacoste-Julien et al., 2008; Wang et al., 2009; Zhu and Ahmed, 2012), given that we are interested in determining the relative contributions of the sources to the mixtures of dust observed in a sample, rather than associating a new set of dust samples with a single specific source. In fact, our problem cannot simply be framed as a supervised learning problem: the number of locations visited by a shoe, and the dust profiles of geographical locations that cannot be studied directly, are unknown and must be inferred from the data.
Finally, our problem differs from that addressed by author-topic models (Rosen-Zvi et al., 2004; Steyvers et al., 2004; Rosen-Zvi et al., 2010). Although we have made an analogy between authors and geographical sampling locations, our inference questions diverge. The author-topic models proposed by Rosen-Zvi et al. (2004, 2010) and Steyvers et al. (2004) aim at determining which topics are preferred by each author in a fixed set of known authors, by assuming a uniform contribution of each author to a single document. The main inference question of our application is the exact opposite: we are interested in studying the respective contribution of each author to a set of documents, given their preferred topics.
2.1 Model assumptions
To develop our model and account for the differences discussed above, we make the following assumptions:

Observations made on particles within a dust sample are exchangeable (Robert (2007), page 159).

Observations made on dust samples collected at a given location are exchangeable.

Sources yield dust with a fixed and constant profile. Dust sources do not cross-contaminate each other.

The composition of dust samples recovered at a geographical location, such as a crime scene, workplace, or home, is considered to originate from a single source (i.e., the geographical location itself).

The composition of dust samples recovered on trace objects may be influenced by more than one source.

The evidentiary object has visited at most one unknown geographical location in addition to a set of known locations.
Assumptions (a) and (b) are identical to the assumptions made to develop the original model proposed by Blei et al. (2003). Assumption (a) is reasonable since particles are not organised in any particular order in a dust sample. In practice, an appropriate sampling procedure will ensure that assumption (b) holds. Assumption (c) considers that the dust profile of any given dust source is characterised by the “dust output” of that source, and thus accounts for any prior cross-contamination between sites. This assumption may not be appropriate and may be investigated in future work. Assumptions (d) and (e) are critical for the inferences we want to make with this model: they allow us to make inferences on the origin of the dust recovered from trace objects of forensic interest, in terms of geographical sampling locations, from mixtures of dust sources. Finally, assumption (f) is made in light of the foundational nature of our work and currently constrains the number of geographical locations that can contribute to the mixture of dust in a trace sample. It is supported by recent work on the rates of loss and replacement of very small particles on the contact surface of footwear (Stoney et al., 2018). Assumption (f) will be removed in future work.
2.2 Defining dust samples
We describe the generative process of a dust sample as follows. Notation is summarised in tables 4 and 5 in Appendix B. The top part of figure 1 provides a graphical representation of this process.

Choose a $K \times V$ matrix $\eta$ to represent the relative contributions of the $V$ different particle types to the dust profiles of each of the $K$ sources that have the potential to contribute to the mixture. The $k$th row of $\eta$ corresponds to the parameters of a Dirichlet distribution that drives the mixing proportions of the particle types that characterise source $k$.

Choose an $M \times K$ matrix $\alpha$ to represent the relative contributions of the $K$ different sources to each of the $M$ locations from which we obtain dust samples. The $m$th row, $\alpha_m$, of $\alpha$ corresponds to the parameters of a $K$-dimensional Dirichlet distribution that drives the mixing proportions of the sources in samples obtained from location $m$.

For a set of dust samples obtained from known locations and from evidentiary objects, sample each row $\beta_k$ of a $K \times V$ matrix $\beta$ from $\mathrm{Dirichlet}(\eta_k)$ to obtain the mixing proportions of the $V$ types of dust particles for each source of dust $k$.

For a sample taken from location $m$:

Sample $\theta_m \sim \mathrm{Dirichlet}(\alpha_m)$ to obtain a vector of mixing proportions of the dust sources for the samples obtained from location $m$.

For each of the $N_m$ particles, $w_{m,n}$, in sample $m$:

Sample a source of dust $z_{m,n} \sim \mathrm{Multinomial}(\theta_m)$.

Sample a particle $w_{m,n} \sim \mathrm{Multinomial}(\beta_{z_{m,n}})$, where $\beta_{z_{m,n}}$ represents the row of matrix $\beta$ for the source defined by $z_{m,n}$.


We see that the model makes no assumptions pertaining to any sort of ordering or grouping of the particles or the locations in a dust sample. This is analogous to the bag-of-words assumption (i.e., exchangeability; Robert (2007), page 159) that is commonly associated with topic modelling.
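As an illustration, the generative process above can be sketched numerically. This is a sketch only: the dimensions, the flat hyperparameter values, and the variable names are illustrative assumptions, not the paper’s values.

```python
import numpy as np

rng = np.random.default_rng(0)

K, V, M, N = 2, 14, 3, 500   # illustrative: K sources, V particle types,
                             # M sampling locations, N particles per sample

eta = np.ones((K, V))        # Dirichlet parameters of the source profiles
alpha = np.ones((M, K))      # Dirichlet parameters of per-location source mixing

# One particle-type profile per source, drawn row-wise from Dirichlet(eta)
beta = np.vstack([rng.dirichlet(eta[k]) for k in range(K)])

samples = np.zeros((M, V), dtype=int)
for m in range(M):
    theta = rng.dirichlet(alpha[m])      # source mixing proportions at location m
    for _ in range(N):
        z = rng.choice(K, p=theta)       # sample a source for this particle
        w = rng.choice(V, p=beta[z])     # sample a particle type from that source
        samples[m, w] += 1               # record the observed particle-type count
```

Each row of `samples` is then a vector of particle-type counts for one dust sample, in the form summarised later in tables 1 to 3.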
This process can be represented by means of the Directed Acyclic Graph (DAG) depicted in figure 1 (top part). By making use of the DAG and the generative process described above, the probability of a sample of dust particles is given by:
(1) $p(w_m, z_m, \theta_m \mid \alpha_m, \beta) = p(\theta_m \mid \alpha_m) \prod_{n=1}^{N_m} p(z_{m,n} \mid \theta_m)\, p(w_{m,n} \mid z_{m,n}, \beta)$
The joint probability of a set of samples collected across multiple locations is then given by:
(2) $p(D, z, \theta, \beta \mid \alpha, \eta) = \prod_{k=1}^{K} p(\beta_k \mid \eta_k) \prod_{m=1}^{M} p(\theta_m \mid \alpha_m) \prod_{n=1}^{N_m} p(z_{m,n} \mid \theta_m)\, p(w_{m,n} \mid z_{m,n}, \beta)$
The distributions represented by each node in the top part of figure 1 are given by:
(3) $\beta_k \mid \eta_k \sim \mathrm{Dirichlet}(\eta_k), \quad k = 1, \ldots, K$
(4) $\theta_m \mid \alpha_m \sim \mathrm{Dirichlet}(\alpha_m), \quad m = 1, \ldots, M$
(5) $z_{m,n} \mid \theta_m \sim \mathrm{Multinomial}(\theta_m)$
(6) $w_{m,n} \mid z_{m,n}, \beta \sim \mathrm{Multinomial}(\beta_{z_{m,n}})$
3 Assigning the model parameters
We use Variational Bayesian Inference (VBI) to assign the approximate posterior distribution of the model’s parameters given a set of exchangeable observations obtained from geographical locations and trace objects. This is achieved by maximising the lower bound function defined by the negative Kullback-Leibler divergence between the joint distribution $p(D, z, \theta, \beta \mid \alpha, \eta)$ and the variational distribution $q$ (Bishop (2006), Chapter 10). We introduce the variational parameters $\gamma$, $\phi$ and $\lambda$ to break the dependencies that exist between $\theta$, $z$ and $\beta$ (see figure 1, bottom part), and we define $q$ as:

(7) $q(\beta, \theta, z \mid \lambda, \gamma, \phi) = \prod_{k=1}^{K} q(\beta_k \mid \lambda_k) \prod_{m=1}^{M} q(\theta_m \mid \gamma_m) \prod_{n=1}^{N_m} q(z_{m,n} \mid \phi_{m,n})$
The “E-step” of our implementation of VBI maximises the lower bound function with respect to each of the variational parameters, $\gamma$, $\phi$, and $\lambda$, while maintaining fixed values of $\alpha$ and $\eta$; the “M-step” maximises the lower bound function with respect to the global latent parameters $\alpha$ and $\eta$, while keeping the variational parameters obtained in the E-step fixed. Each step is itself an iterative process that repeats until some convergence criterion is satisfied. From equation (7), we note that $\gamma$, $\phi$ and $\lambda$ can be updated independently to minimise the KL divergence between $q$ and the posterior for each sample. For ease of notation, we now suppress the explicit conditioning on the variational parameters, and shorten $q(\beta, \theta, z \mid \lambda, \gamma, \phi)$ to $q$.
3.1 The lower bound function for mixtures of dust particles
The lower bound function mentioned above is the sum of expectations of each of the latent parameters, taken with respect to the variational distribution, $q$, as shown in equation (8). Note that each line in the last equality of equation (8) corresponds to one expectation. In addition, we note that $\psi$ is the digamma function, which gives the logarithmic derivative of the Gamma function, such that $\psi(x) = \frac{d}{dx}\log\Gamma(x)$.
(8) $\mathcal{L}(\gamma, \phi, \lambda; \alpha, \eta) = \mathbb{E}_q[\log p(\beta \mid \eta)] + \mathbb{E}_q[\log p(\theta \mid \alpha)] + \mathbb{E}_q[\log p(z \mid \theta)] + \mathbb{E}_q[\log p(w \mid z, \beta)] - \mathbb{E}_q[\log q(\beta \mid \lambda)] - \mathbb{E}_q[\log q(\theta \mid \gamma)] - \mathbb{E}_q[\log q(z \mid \phi)]$
The first four expectations are developed using equations (3) to (6) above.
The last three expectations are developed using the entropies of the distributions corresponding to each of the latent parameters, given by equations (3) to (5).
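As a small numerical sanity check of the digamma relation used throughout this section (purely illustrative, not part of the derivation), the identity that the digamma function is the logarithmic derivative of the Gamma function can be verified by finite differences:

```python
from scipy.special import digamma, gammaln

# psi(x) = d/dx log Gamma(x): approximate the derivative of gammaln
# by a central finite difference and compare it with digamma(x).
x, h = 2.5, 1e-6
numeric = (gammaln(x + h) - gammaln(x - h)) / (2 * h)
exact = digamma(x)
```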
By maximising equation (8) with respect to each of the variational parameters, we obtain update equations for $\phi$, $\gamma$ and $\lambda$:
(9) $\phi_{m,n,k} \propto \exp\!\left( \psi(\gamma_{m,k}) - \psi\!\left(\textstyle\sum_{k'=1}^{K} \gamma_{m,k'}\right) + \psi(\lambda_{k, w_{m,n}}) - \psi\!\left(\textstyle\sum_{v=1}^{V} \lambda_{k,v}\right) \right)$
(10) $\gamma_{m,k} = \alpha_{m,k} + \sum_{n=1}^{N_m} \phi_{m,n,k}$
(11) $\lambda_{k,v} = \eta_{k,v} + \sum_{m=1}^{M} \sum_{n=1}^{N_m} \phi_{m,n,k}\, \mathbb{1}[w_{m,n} = v]$
These updates for the variational parameters are used in the E-step of the VBI algorithm described in algorithm 1. Assigning values to the global parameters $\alpha$ and $\eta$ in the M-step requires using optimisation techniques, since tractable maximum likelihood solutions do not exist. We use the L-BFGS-B method (Byrd et al., 1994; Zhu et al., 1994) to obtain the matrices $\alpha$ and $\eta$. For more information pertaining to the L-BFGS-B method and the M-step, see Appendix C.
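For one dust sample, the E-step can be sketched as standard LDA-style coordinate ascent. This is a sketch under assumptions: the function name, the initialisation, and the exact form of the updates follow the familiar variational LDA scheme, and may differ in detail from the paper’s implementation.

```python
import numpy as np
from scipy.special import digamma

def e_step(counts, alpha_m, lam, n_iter=50):
    """Coordinate-ascent E-step for one dust sample (illustrative sketch).

    counts : (V,) particle-type counts for the sample
    alpha_m: (K,) Dirichlet prior over source mixing for this sample
    lam    : (K, V) variational Dirichlet parameters of the source profiles
    """
    K, V = lam.shape
    gamma = alpha_m + counts.sum() / K          # variational Dirichlet over sources
    Elog_beta = digamma(lam) - digamma(lam.sum(axis=1, keepdims=True))
    for _ in range(n_iter):
        Elog_theta = digamma(gamma) - digamma(gamma.sum())
        phi = np.exp(Elog_theta[None, :] + Elog_beta.T)   # (V, K) responsibilities
        phi /= phi.sum(axis=1, keepdims=True)             # normalise over sources
        gamma = alpha_m + counts @ phi                    # expected source counts
    return gamma, phi
```

The returned `gamma` parameterises the approximate posterior over the sample’s source mixing proportions, while `phi` gives the per-particle-type source responsibilities.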
3.2 Initialisation of the model
By assumption (d), our model considers that the dust observed at any given geographical location originates from a single source. Hence, the strategy behind the proposed model is to learn the dust profile of the known sources by obtaining “pure” samples from the corresponding locations, and to infer the profile of the remaining unknown sources (corresponding to locations that cannot be studied) through the deconvolution of the samples recovered on the surface of the trace objects. This process also provides information on the respective contributions of the sources to the mixtures observed on the trace objects.
To ensure that each geographical location (known and unknown) is uniquely associated with a single source, we constrain $\alpha$, the matrix of Dirichlet parameters controlling the mixing proportions of dust sources at each location. Each column of $\alpha$ corresponds to one of the sources that may have contributed dust to the different locations, and a subset of the rows of $\alpha$ corresponds to known locations where “pure” samples were collected. Constraining $\alpha$ requires heavily weighting, in each such row, the element in the column of its associated source. For rows of $\alpha$ corresponding to the evidentiary samples, we use a flat Dirichlet distribution to reflect that, before running the algorithm, we consider that all dust sources are equally likely to have contributed to these samples. The values of the rows associated with the known locations are not updated by the algorithm, while the rows associated with the evidentiary samples are updated during the “M-step” of the algorithm. There is currently a level of arbitrariness involved in the selection of the weight values for the known locations. The choice of weights appears to impact the convergence of the optimisation of $\alpha$. More objective ways of selecting these weights, or more robust optimisation methods, need to be investigated.
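The constrained location-by-source Dirichlet parameter matrix described above can be sketched as follows. The helper name and the weight value of 100 are placeholders: as noted, the choice of weights is currently somewhat arbitrary.

```python
import numpy as np

def build_alpha(n_known, n_trace, n_sources, weight=100.0):
    """Sketch of the location-by-source Dirichlet parameter matrix:
    known location i is heavily weighted on source i, while rows for
    trace (evidentiary) samples are left flat."""
    alpha = np.ones((n_known + n_trace, n_sources))
    for i in range(n_known):
        alpha[i, i] = weight      # pin known location i to source i
    return alpha

# e.g. one known location, two trace samples, two candidate sources
alpha = build_alpha(n_known=1, n_trace=2, n_sources=2)
```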
Matrix $\eta$ is initialised with flat Dirichlet distributions for all rows. All rows are updated during the M-step of the algorithm to learn the dust profiles of the different sources potentially contributing to the evidentiary samples.
4 Inferences on sources’ dust profiles and mixing proportions
Following convergence of the algorithm, we obtain updated Dirichlet distribution parameter matrices $\gamma$ and $\lambda$. The marginal distribution of each of the Dirichlet distributions in the rows of $\gamma$ and $\lambda$ gives the posterior distributions for the multinomial parameters $\theta$ and $\beta$, respectively. Hence,

the contribution of the $k$th source to the sample from location $m$ is $\mathrm{Beta}\!\left(\gamma_{m,k},\, \sum_{k' \neq k} \gamma_{m,k'}\right)$. Note that, by construction, the expectation of this distribution will be very close to 1 if location $m$ is one of the known locations and $k$ is its associated source.

the proportion of the $v$th particle type in the $k$th source is $\mathrm{Beta}\!\left(\lambda_{k,v},\, \sum_{v' \neq v} \lambda_{k,v'}\right)$.
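These Beta marginals follow from the aggregation property of the Dirichlet distribution: the $k$th component of a Dirichlet vector with parameters $a$ is marginally Beta$(a_k, \sum_{k' \neq k} a_{k'})$. A small sketch, with illustrative numbers rather than fitted values:

```python
import numpy as np
from scipy.stats import beta

def dirichlet_marginal(a, k):
    """Beta marginal of component k of a Dirichlet(a) distribution."""
    a = np.asarray(a, dtype=float)
    return beta(a[k], a.sum() - a[k])

# e.g. posterior mean and central 95% interval for one source's contribution
marg = dirichlet_marginal([90.0, 10.0], 0)
mean = marg.mean()
lo, hi = marg.interval(0.95)
```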
5 Worked example
As an example, the algorithm presented above is used to resolve two mixtures of dust particles provided by Stoney Forensic, Inc. (Chantilly, VA, USA) under different scenarios. The data set is composed of “pure” and mixed samples of dust from two locations, labelled AT and LQ. These locations are extensively described in Stoney et al. (2018). The control and trace materials consist of (1) twelve samples of dust knowingly obtained from each of the two locations, and (2) two trace samples consisting of mixtures of dust obtained by mixing known proportions of dust from the two known locations. A dust sample is characterised by a vector of counts for fourteen particle types. The data set is summarised in tables 1, 2 and 3. Our model is used to resolve the dust mixtures in the trace samples presented in table 3 under three different scenarios:

In the first scenario, both sources are considered known and can be sampled from;

In the second scenario, location AT is known and can be sampled from, while LQ is considered unknown and cannot be studied;

In the last scenario, location LQ is known and can be sampled from, while AT is considered unknown and cannot be studied.
Table 1: Particle-type counts for the twelve control samples from location AT.

Particle type  1  2  3  4  5  6  7  8  9  10  11  12

Alkali Feldspar  189  182  200  184  254  182  181  178  139  193  229  204 
Alterite  21  20  9  20  15  11  21  19  12  28  32  20 
Biotite  1  4  4  1  3  0  1  3  0  1  0  0 
Epidote  3  3  7  6  11  12  3  7  12  5  4  7 
High Index  3  2  1  7  3  3  2  2  3  2  1  4 
Hornblende  0  2  2  4  5  4  2  3  2  1  0  2 
Iron Oxides  9  4  1  5  5  7  7  9  5  6  7  6 
Lithic Fragments  3  0  7  12  3  2  3  2  3  3  4  0 
Muscovite  0  0  1  1  0  3  0  0  0  3  0  0 
Opaques  16  14  10  9  37  18  25  20  42  16  9  15 
Plagioclase  5  0  2  5  7  1  0  2  0  10  5  10 
Quartz  74  74  75  63  112  90  62  71  101  56  54  94 
Titanite  0  0  0  0  0  1  1  0  0  0  0  0 
Other  11  11  5  12  17  23  3  6  8  9  9  8 
Table 2: Particle-type counts for the twelve control samples from location LQ.

Particle type  1  2  3  4  5  6  7  8  9  10  11  12

Alkali Feldspar  18  26  29  20  31  33  28  30  39  22  30  22 
Alterite  4  4  4  6  7  7  10  4  5  12  9  10 
Biotite  16  10  22  13  10  12  26  13  11  25  18  20 
Epidote  10  5  13  11  7  7  8  19  6  11  13  5 
High Index  3  0  1  2  2  0  0  1  0  0  2  2 
Hornblende  73  55  64  68  61  91  93  68  51  73  82  75 
Iron Oxides  0  0  0  0  0  1  0  0  0  2  0  0 
Lithic Fragments  5  7  6  6  2  4  0  6  3  6  7  11 
Muscovite  0  3  0  2  2  0  1  4  0  2  5  1 
Opaques  5  0  2  4  8  10  8  5  3  7  4  3 
Plagioclase  46  37  47  45  52  39  14  16  13  33  35  27 
Quartz  153  159  151  145  174  150  128  161  195  134  137  156 
Titanite  2  4  6  1  4  6  5  6  2  2  4  2 
Other  3  5  1  4  9  6  8  2  3  5  2  5 
Table 3: Counts for the fourteen particle types in the two trace samples, with the known LQ/AT mixing proportions.

1  2  3  4  5  6  7  8  9  10  11  12  13  14  LQ/AT

Trace 1  312  31  5  12  5  16  9  7  1  32  12  151  1  17  0.10/0.90 
Trace 2  104  16  23  16  2  100  2  9  3  13  48  240  5  10  0.80/0.20 
5.1 Resolving the mixtures in table 3 when both locations are known
In this example, the algorithm is provided with all 26 samples described above: 12 “pure” control samples from each location and 2 “mixed” trace samples composed of locations AT and LQ in the proportions specified in the last column of table 3. This scenario serves primarily to demonstrate the effectiveness of the algorithm when all locations in a mixture can be observed. We initialise the algorithm by setting the matrices $\eta$ and $\alpha$ to some initial values. To allow the model to freely determine the particle profiles of the two sources in the dust mixtures, the matrix $\eta$ is set to:
The first row of this matrix corresponds to the relative contributions of each of the fourteen particle types to source AT, while the second row of this matrix corresponds to the relative contributions of each of the fourteen particle types to source LQ.
To ensure that the algorithm correctly learns the dust profiles of the two known sources from the samples obtained from locations AT and LQ, the rows of matrix $\alpha$ associated with these samples are heavily weighted in the dimension corresponding to sources AT and LQ, while the rows of matrix $\alpha$ corresponding to the trace samples are set to 1:
The first column of this matrix corresponds to the relative contribution of source AT to the dust sample being considered. Likewise, the second column of this matrix corresponds to the relative contribution of source LQ to the dust sample being considered.
Upon introducing the samples representing the known sources and the two trace samples into the model and observing convergence, we obtain:

Note that, by design of the algorithm, the rows of $\gamma$ corresponding to the samples from the known locations AT and LQ are not updated (and are therefore not represented above). The rows of these matrices are the parameters of the posterior marginal distributions described in section 4. $\lambda$ enables us to study the distributions of the particle profiles of the two sources present in the dust samples. The last two rows of $\gamma$ enable us to study the distributions of the mixing proportions of the two sources in the evidentiary samples. The resulting marginal posterior distributions of the source profiles are displayed in figure 2, and those of the mixing proportions in figure 3.
Figure 2 shows that the model is able to effectively extract the dust profiles of the sources. All posterior distributions are sharply centred on their mean and mode.
Figure 2: posterior distributions for location AT (top two rows) and location LQ (bottom two rows) when both locations are known. Each plot is associated with one of the fourteen particle types. The vertical blue line corresponds to the point estimates of the mixing proportions, the vertical green line corresponds to the mean of the resulting posterior distribution, and the vertical red line corresponds to the mode of the resulting posterior distribution. The grey shaded region corresponds to the 95% HPDI.

Figure 3 shows that the model is also able to extract the mixing proportions of the locations within the dust mixtures. The known mixing proportions are within the 95% Highest Posterior Density Intervals (HPDIs), and the posterior distributions show little variance.
5.2 Resolving the mixtures in table 3 when only location AT is known
In this example, the algorithm is provided with the 14 samples described above: 12 “pure” samples from location AT and the 2 “mixed” trace samples. Matrix $\eta$ is initialised as in the previous example, since we are still interested in learning the dust profiles of both sources.
However, matrix $\alpha$ has a different number of rows, to reflect that no sample representing source LQ has been observed. Hence, only the rows of matrix $\alpha$ associated with the samples obtained from location AT are heavily weighted in the dimension corresponding to source AT. As before, the rows of $\alpha$ corresponding to the trace samples are set to 1:
Upon introducing the twelve samples from location AT and the two trace samples into the model and observing convergence, we obtain:

Note that, by design of the algorithm, the rows of $\gamma$ corresponding to the known AT samples are not updated and are not represented above. The resulting marginal posterior distributions of the source profiles and of the mixing proportions are displayed in figures 4 and 5, respectively.
Even though the algorithm is only provided with “pure” samples from one single source, figure 4 shows that the model remains capable of effectively extracting the profiles of both sources. That said, by comparing figures 2 and 4, we note that the modes and means of the posterior distributions for the profile of source LQ are not as well aligned with the proportion estimates of the particle types when no “pure” sample from source LQ is observed. We also observe that, in general, there is more uncertainty in the particle profiles in figure 4.
Figure 5 shows that the model is able to extract accurately the mixing proportions of the locations within the dust mixture dominated by location AT, and less accurately the mixing proportions in the dust mixtures dominated by location LQ. These results seem to indicate that the lower precision (compared to the previous scenario) of the predicted particle profiles for both sources impacts the ability of the model to accurately resolve mixing proportions, in particular, in the case when the unobserved source dominates the mixture.
5.3 Deconvolving the mixtures in table 3 when only location LQ is known
In the final example, we assess the model’s ability to deconvolve the trace mixtures in table 3 when location LQ is known, and location AT is unknown.
Matrix $\eta$ remains the same as in the two previous examples. The known samples that are introduced into the model are now from location LQ. We account for this difference in information by weighting the elements of matrix $\alpha$ corresponding to location LQ, rather than those corresponding to location AT:
Upon observing convergence, we obtain the updated matrices $\gamma$ and $\lambda$:

The resulting marginal posterior distributions of the source profiles are displayed in figure 6. The resulting marginal posterior distributions of the mixing proportions are displayed in figure 7.
As in the two previous examples, figure 6 shows that the model is able to extract the two location profiles present in the evidentiary samples: the mean and mode of the distributions prove to be reasonably similar to the proportion estimates of particle types. However, when contrasting figures 4 and 6, we see that the precision of the determination of the particle profiles in figure 6 is greater than in the previous example when AT was known and LQ unknown.
Contrary to the previous example, in the situation where only location LQ is known, the model is able to deconvolve both trace mixtures appropriately. It is not clear why the model behaves differently in these two different examples. We suspect that, since there are more overall particles from location AT in the mixtures (90% of AT in the first mixture and 20% of AT in the second mixture vs. 10% and 80% for LQ), the model may be able to learn the particle profile of location AT with greater accuracy even though “pure” samples from location AT are not provided.
6 Model performance
The results presented in sections 5.2 and 5.3 show that the performance of the model may differ depending on the input data. A search of the literature for discussions of the identifiability of LDA models indicates that this issue has been considered by only a few authors (Rabani et al., 2013; Vandermeulen and Scott, 2015), without satisfactory solutions being provided. Furthermore, we are not aware of published LDA-based models whose outputs are compared to ground truth. While many authors propose different flavours of LDA-based models, we have yet to find a publication whose authors test their model on a simulated corpus (whose proportions of topics and topic profiles are known) to get an idea of the general accuracy and identifiability of their models.
To study the performance of our algorithm, we simulate dust mixtures in which the particle counts and mixing proportions vary. Recall that, by assumption (f) in section 2.1, we have at most one unknown location in a sample obtained from any given evidentiary object, and that this assumption is compatible with operational use of the model in the forensic context. We choose to simulate situations where we observe dust mixtures composed of two sources, AT and LQ. In all situations, source AT is known and can be sampled from, while source LQ is not known, and its profile is left to be learned by the model from the dust mixtures. These situations correspond to the example in section 5.2, which was the one where our model had the most difficulty. We consider two situations:

a single set of trace objects representing a single mixture of AT and LQ, originating from a single sampling location, is observed.

two sets of trace objects, representing two different mixtures of AT and LQ, originating from two different sampling locations, are observed.
We do not present the results of the situation in which “pure” dust samples can be obtained for both sources in the mixtures. The performance of the model in this situation is analogous to that presented in section 5.1, and is, overall, uninteresting.
We use point estimates for the proportions of the particle types for sources AT and LQ (obtained using the data presented in tables 1 and 2) to generate dust samples from these sources before mixing them. In each simulation, we consider trace sets that consist of five samples of mixtures of dust from sources AT and LQ, and a control set consisting of twelve samples obtained from location AT. The number of samples was selected to correspond to a realistic forensic scenario where both trace and control locations would be sampled several times to study their respective variability, but where the number of samples would not be unrealistic, or so high that their examination would be too time-consuming. The mixing proportions of the two sources in situation (a) vary between simulations, such that the proportion of source LQ takes each of the nine values given in the following column matrix:
The mixing proportions associated with situation (b) remain the same as in situation (a) for the first sampling location; for the second sampling location, they have been set to , for .
For each of the nine mixing proportions considered for , the particle count in each dust mixture varies so that , . This results in a total of 90 different simulations for each situation. Each simulation is repeated ten times, and the average model performance is evaluated.
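As a sketch, the generative process used to simulate these mixtures (each particle first picks a source according to the mixing proportions, then a particle type from that source's multinomial profile) can be written as follows. The profile entries at particle types 1, 2, 6, 7, 10 and 12 are the values quoted in section 6.1.1; the remaining entries, and the proportion and count grids, are hypothetical placeholders, since the paper's exact values are not reproduced in this excerpt.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Source profiles over 12 particle types. Entries at types 1, 2, 6, 7,
# 10 and 12 (indices 0, 1, 5, 6, 9, 11) are quoted in section 6.1.1;
# the other entries are hypothetical fillers, renormalised to sum to 1.
profile_AT = np.array([0.56, 0.055, 0.05, 0.05, 0.05, 0.007,
                       0.017, 0.05, 0.05, 0.056, 0.05, 0.22])
profile_AT /= profile_AT.sum()
profile_LQ = np.array([0.08, 0.02, 0.05, 0.05, 0.05, 0.21,
                       0.001, 0.05, 0.05, 0.014, 0.05, 0.45])
profile_LQ /= profile_LQ.sum()

def simulate_mixture_sample(n_particles, prop_LQ, rng):
    """Draw one dust sample: each particle picks a source (AT vs. LQ)
    from the mixing proportions, then a particle type from that
    source's multinomial profile. Returns counts per particle type."""
    from_LQ = rng.random(n_particles) < prop_LQ
    types = np.where(from_LQ,
                     rng.choice(12, size=n_particles, p=profile_LQ),
                     rng.choice(12, size=n_particles, p=profile_AT))
    return np.bincount(types, minlength=12)

# Hypothetical simulation grid: nine LQ proportions times ten particle
# counts gives the 90 simulations per situation described in the text.
mixing_props = [0.1 * k for k in range(1, 10)]
particle_counts = [100 * k for k in range(1, 11)]
grid = list(itertools.product(mixing_props, particle_counts))
```

Each grid point would then be simulated ten times (five trace samples and twelve AT control samples per run, as in the text) and the model's average performance evaluated.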
6.1 Simulation results
6.1.1 One set of trace objects
The results obtained from the simulations conducted for situation (a) are presented in figures 8, 9, and 10. Figures 8 and 9 show the average predicted values of , , as a function of the number of particles present in each sample, and as a function of the true mixing proportion of source LQ present in the trace samples. From figure 8, we see that the model is able to accurately extract the profile of known source AT, which is not surprising given that “pure” samples of location AT were provided. However, given a single trace sampling location, we see from figure 9 that the model struggles to extract the profile of the unknown source, LQ.
Figure 9 shows two general aspects of the model's behaviour. First, the accuracy of the predicted location profiles improves as the number of particles present in the samples increases. Second, larger proportions of the unknown source in the trace mixture result in more accurate predictions of its profile. Neither aspect is surprising: in both cases, the predictions improve with the amount of information available regarding the profile of the unknown source. More interestingly, comparing figures 8 and 9 shows that the model consistently has difficulty accurately predicting the proportions of the particle types that have nonzero proportions in both profiles. In particular, the prediction is worse for particle types where the ratio of proportions between AT and LQ is largely in favour of AT, such as particle types 1 (0.56 vs. 0.08), 2 (0.055 vs. 0.02), 7 (0.017 vs. 0.001) or 10 (0.056 vs. 0.014). When the ratio is largely favourable to LQ, as for particle types 6 (0.007 vs. 0.21) or 12 (0.22 vs. 0.45), predictions improve as the amount of information available regarding LQ increases. This observation calls into question the general identifiability of LDA models, in particular when all topic profiles have to be learned from the data, as in text mining.
Figure 10 (top) shows the predicted mixing proportions of the unobserved source LQ in the trace mixture as a function of the number of particles present in each sample, and as a function of the true mixing proportion of source LQ present in the trace samples. Figure 10 (bottom) shows the deviation of the predicted mixing proportions from the true proportion. In this case, there does not appear to be a clear trend in the ability of the model to predict the mixing proportions as a function of the proportion of the unknown source in the trace mixture. We do note, however, that the performance of the model improves as the number of particles present in the samples increases. Overall, neither the mean nor the mode appears to be the better predictor. As mentioned in section 5.2, the model's inability to resolve the particle profiles of the unknown location is, unsurprisingly, related to its inability to resolve the mixing proportion.
6.1.2 Two sets of trace objects
The results obtained from the simulations conducted for situation (b) are presented in figures 11 and 12. Figure 11 shows the average predicted values of , as a function of the number of particles present in each sample, and as a function of the true mixing proportion of source LQ present in the trace samples. As previously, the model is able to accurately predict the profile of known source AT; these results are therefore not displayed, and the reader can refer to figure 8. The results in figure 11 illustrate that the presence of two sets of trace objects, obtained from two distinct sampling locations, greatly improves the model's ability to predict the profile of the unknown source, in terms of both accuracy and precision. We note that the model still has difficulty predicting particle type 1; however, there is greater precision in the prediction.
Figure 12 shows the predicted mixing proportion of the unobserved source, LQ, in the first trace mixture (top row) and the second trace mixture (second row), as a function of the number of particles present in each sample, and as a function of the true mixing proportion of source LQ present in the first trace sample. In addition, figure 12 shows the deviation of the predicted mixing proportions from the true proportions for the first set of traces (third row) and for the second set of traces (fourth row). In the situation where two sets of trace objects are observed, we see the same convergence in the ability of the model to predict the mixing proportions as in the situation with only one set of trace objects: the convergence is a function of the particle count. However, in this case, we note that the model’s ability to correctly predict the proportion of LQ in the trace mixture appears to be a function of the true proportion of LQ present in the mixture: the larger the contribution of the unknown source to the dust mixture, the more the model struggles to extract the true proportion. We believe that this is due to the greater potential for error in mixtures where there is a larger proportion of the unknown source present in the mixture.
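The deconvolution task exercised by these simulations, recovering the unknown source's profile and its per-sample mixing proportions when the other source's profile is known, can be sketched with a plain EM algorithm for a two-source multinomial mixture. This is an illustrative simplification of the paper's variational Bayesian procedure, not the procedure itself, and all names below are ours. Consistent with the results above, a single trace location leaves the solution only weakly identifiable, and estimates improve with the particle count and with additional sampling locations.

```python
import numpy as np

def em_deconvolve(counts, profile_known, n_iter=200, seed=0):
    """EM for a two-source multinomial mixture in which one source's
    profile (e.g. AT) is known and held fixed. `counts` is an
    (n_samples, n_types) matrix of particle-type counts. Returns the
    estimated unknown profile (e.g. LQ) and the per-sample mixing
    proportion of the unknown source."""
    rng = np.random.default_rng(seed)
    n_samples, n_types = counts.shape
    p_unknown = rng.dirichlet(np.ones(n_types))   # random initial profile
    theta = np.full(n_samples, 0.5)               # unknown-source proportions
    for _ in range(n_iter):
        # E-step: responsibility of the unknown source for each type count
        num = theta[:, None] * p_unknown[None, :]
        denom = num + (1.0 - theta)[:, None] * profile_known[None, :]
        resp = num / np.clip(denom, 1e-12, None)
        # M-step: re-estimate the proportions and the unknown profile
        weighted = counts * resp
        theta = weighted.sum(axis=1) / counts.sum(axis=1)
        p_unknown = weighted.sum(axis=0)
        p_unknown /= p_unknown.sum()
    return p_unknown, theta
```

Running this on counts simulated at several distinct mixing proportions (the two-location setting of situation (b)) constrains the shared unknown profile far more than a single location does, which mirrors the improvement seen in figure 11.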
6.2 A note on performance
The main difference between our model and the one originally proposed by Blei et al. (2003) is that we use asymmetric Dirichlet distributions instead of symmetric ones (Blei et al. (2003), p. 1006, footnote 2). Overall, we are not sure whether the lack of accuracy of our model originates from a general lack of identifiability of LDA models; from the large number of parameters to be assigned in the two matrices of Dirichlet parameters, which may require a much larger number of samples than the number considered in our simulations; or from some instability of the numerical optimisation methods used in the M-step of our algorithm. We have found little to no published material reporting on the performance of LDA models. Further investigations and future developments of our method may involve assigning the hyperparameters of our model using the method of moments, or implementing a Gibbs sampler or an approximate Bayesian computation algorithm to obtain posterior samples of the parameters of interest. Exploring the method of moments may allow us to set restrictions that help with the identifiability of the model, and has the potential to fully recover the parameters of the model, provided that the necessary assumptions are fulfilled (Anandkumar et al., 2012).
Nevertheless, we want to stress that, from an operational point of view, our model performs well. In an operational situation, investigators or fact-finders will be far more concerned with information on the presence or absence of dust from a particular source in a mixture, with the ballpark contribution of this source to the mixture (i.e., minimal vs. large), and with any salient characteristics of the source's profile (e.g., a large proportion, or the complete absence, of a particular particle type) than with exact amounts. It is unlikely that the outcome of an investigation will be drastically different depending on whether a dust mixture contains 80% or 87% dust from a specific source. Figure 10 shows that our model can predict, within a reasonable interval (i.e., around 5%), the proportion of dust from an unknown source, and figure 9 shows that it can also extract the main characteristics of the unknown source's profile with reasonable accuracy.
7 Conclusion
Dust particles recovered from beneath the soles of an individual’s shoes consist of a mixture of dust particles collected from different sources and may be indicative of the locations recently visited by that individual. In particular, this dust may reveal his presence at a location of interest, e.g., the scene of a crime. In this paper, we propose a model for the deconvolution of mixtures of dust originating from multiple sources. Our goal is to infer the particle profiles of the sources, as well as their respective contributions to the mixture. Our overarching purpose is to enable the use and interpretation of dust evidence in order to determine, for example, whether the dust recovered from under a pair of shoes contains particles originating from a given crime scene.
We describe the profiles of each of the dust sources using a multinomial distribution over a fixed number of particle types. We use latent Dirichlet allocation (LDA) to define the probability distribution of the dust mixture. We use variational Bayesian inference (VBI) to study the source mixing proportions and particle profiles of each of the sources present in the dust mixture. Finally, we propose a method to constrain our model to learn the dust profiles of known sources using control samples collected at locations of interest (such as crime scenes, houses of suspects, etc.), while retaining the model’s ability to learn the dust profiles of sources that are present in the mixtures but cannot be directly observed (such as the unknown location where a body is buried).
We test the performance of the model using real and simulated data. We find that our model is able to effectively extract the particle profiles of the sources present in the real data set when “pure” samples from all sources present in the mixtures are used to resolve them. Our simulations indicate that the accuracy of our model appears to be a function of the number of dust particles, the proportions of the different sources in the dust mixtures, and the magnitude of the ratios between the proportions of given particle types in the different sources. These results are very similar to observations made regarding the well-established examination of DNA evidence in forensic science.
We observe that our model behaves very differently depending on the constraints used for the numerical optimisation of its Dirichlet parameters. The lack of consistency of our model may be rooted in a lack of identifiability of LDA models in general. Very little has been published on the subject of identifiability of LDA models. Furthermore, models proposed in the literature are not tested using datasets with known parameters and, therefore, their accuracy cannot be assessed. This is clearly an open field for future research.
The performance of our model in various situations needs to be extensively tested before it can be used in forensic practice. That said, it is capable of resolving mixtures of dust sources to a satisfactory level from a forensic operational perspective, and thus of enabling forensic examiners to quantitatively support their inference of the presence of a suspect or object at a location of interest by examining dust evidence. While the transfer and examination of dust evidence was only considered a theoretical concept by the founding fathers of forensic science, our model shows that dust particles have great potential as a forensic tool in the near future.
Acknowledgements
This project was supported in part by Award No. 2016-DN-BX-0146 awarded by the National Institute of Justice (Office of Justice Programs, U.S. Department of Justice) to Stoney Forensic, Inc. (Chantilly, VA, USA), and Award No. 2014-IJ-CX-K088 awarded by the National Institute of Justice to South Dakota State University (Brookings, SD, USA). The opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect those of the Department of Justice or the National Science Foundation. We would like to thank Dr. David Stoney for bringing up this interesting problem and providing the data.
References
 Anandkumar et al. (2012) Anandkumar, A., D. Foster, D. Hsu, S. Kakade, and Y.-K. Liu (2012). A spectral algorithm for latent Dirichlet allocation. Advances in Neural Information Processing Systems 25.
 Bishop (2006) Bishop, C. (2006). Pattern Recognition and Machine Learning. Springer.
 Blei et al. (2016) Blei, D., A. Kucukelbir, and J. McAuliffe (2016). Variational inference: A review for statisticians. arXiv:1601.00670.
 Blei and McAuliffe (2007) Blei, D. and J. McAuliffe (2007, March). Supervised topic models. Proceedings of NIPS, 121–128.
 Blei et al. (2003) Blei, D., A. Ng, and M. Jordan (2003, January). Latent Dirichlet allocation. Journal of Machine Learning Research 3, 993–1022.
 Bryant (2014) Bryant, V. (2014). Wiley Encyclopedia of Forensic Science (2nd edition), Chapter: Pollen and spore evidence in forensics. Wiley-Blackwell.
 Bull et al. (2006) Bull, P., A. Parker, and R. Morgan (2006). The forensic analysis of soils and sediment taken from the cast of a footprint. Forensic Science International 162, 6–12.
 Byrd et al. (1994) Byrd, R., P. Lu, J. Nocedal, and C. Zhu (1994). A limited memory algorithm for bound constrained optimisation. Technical Report NAM-08, Northwestern University.
 Hoffman et al. (2013) Hoffman, M., D. Blei, C. Wang, and J. Paisley (2013, May). Stochastic variational inference. Journal of Machine Learning Research 14, 1303–1347.
 Lacoste-Julien et al. (2008) Lacoste-Julien, S., F. Sha, and M. Jordan (2008). DiscLDA: Discriminative learning for dimensionality reduction and classification. In NIPS, Volume 22.
 Mildenhall (2006) Mildenhall, D. (2006). Hypericum pollen determines the presence of burglars at the scene of a crime: An example of forensic palynology. Forensic Science International 163(3), 231–235.
 Mildenhall et al. (2006) Mildenhall, D., P. Wiltshire, and V. Bryant (2006). Forensic palynology: Why do it and how it works. Forensic Science International 163(3), 163–172.
 Rabani et al. (2013) Rabani, Y., L. Schulman, and C. Swamy (2013). Learning mixtures of arbitrary distributions over large discrete domains. arXiv:1212.1527v3.
 Robert (2007) Robert, C. (2007). The Bayesian Choice. Springer Texts in Statistics.
 Rosen-Zvi et al. (2010) Rosen-Zvi, M., C. Chemudugunta, T. Griffiths, P. Smyth, and M. Steyvers (2010, January). Learning author-topic models from text corpora. ACM Transactions on Information Systems 28(1).

 Rosen-Zvi et al. (2004) Rosen-Zvi, M., T. Griffiths, M. Steyvers, and P. Smyth (2004). The author-topic model for authors and documents. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pp. 487–494. AUAI Press.
 Steyvers et al. (2004) Steyvers, M., P. Smyth, M. Rosen-Zvi, and T. Griffiths (2004). Probabilistic author-topic models for information discovery. In Proceedings of the 10th ACM SIGKDD Conference on Knowledge Discovery and Data Mining.
 Stoney et al. (2018) Stoney, D., A. Bowen, M. Ausdemore, P. Stoney, C. Neumann, and F. Stoney (2018). Rates of loss and replacement of very small particles (VSP) on the contact surfaces of footwear during successive exposures. Forensic Science International. https://doi.org/10.1016/j.forsciint.2018.12.020.
 Stoney et al. (2011) Stoney, D., A. Bowen, V. Bryant, E. Caven, M. Cimino, and P. Stoney (2011). Particle combination analysis for predictive source attribution: Tracing a shipment of contraband ivory. Journal of the American Society of Trace Evidence Examiners 2(1), 13–72.
 Stoney and Stoney (2014) Stoney, D. and P. Stoney (2014). Critical review of forensic trace evidence analysis and the need for a new approach. Forensic Science International 251, 159–170.
 Vandermeulen and Scott (2015) Vandermeulen, R. and C. Scott (2015). On the identifiability of mixture models from grouped samples. arXiv:1502.06644.

 Wang et al. (2009) Wang, C., D. Blei, and L. Fei-Fei (2009). Simultaneous image classification and annotation. In IEEE Conference on Computer Vision and Pattern Recognition, 1903–1910.
 Zhu et al. (1994) Zhu, C., R. Byrd, P. Lu, and J. Nocedal (1994). L-BFGS-B: Fortran subroutines for large-scale bound constrained optimisation. Technical Report NAM-11, Northwestern University.
 Zhu and Ahmed (2012) Zhu, J. and A. Ahmed (2012, August). MedLDA: Maximum margin supervised topic models. Journal of Machine Learning Research 13, 2237–2278.
Appendix A Topic modelling and dust modelling: parallel terms
The following list of terms connects the vocabularies used in the topic modelling and dust modelling settings of section 2:

Dust particle: a dust particle corresponds to a word in a topic model.

Sample: a collection of dust particles. A sample of dust corresponds to a document in a topic model.

Source: a process or geographical area that yields dust. A source corresponds to a topic in a topic model.

Sampling location: a geographical area or an object from which a set of dust samples is obtained. A location may resemble an author in a topic model in the sense that both may generate samples.

Particle types: a predefined set of categories used to classify the dust particles. A list of particle types corresponds to the vocabulary or dictionary of words in a topic model.
Appendix B Tables of notation, variables and parameters used in development
Description  Variable Type  

Indicates which location a sample corresponds to, where  Observed variable  
Indicates which sample is being considered, where  Observed variable  
Indicates which particle is being considered, where  Observed variable  
Indicates which source produced a particle , where  Latent variable  
Indicates which particle type is being considered, where  Observed variable 
Description  Variable Type  

A set of samples; This set includes all samples obtained from all locations (and trace objects)  Observed variable  
The sample of dust particles from a single location , such that , where is the number of dust particles in sample from location  Observed variable  
The particle type associated with particle from the sample from location . is an indicator vector of length , such that when the position of is equal to 1, then particle is of type  Observed variable  
An matrix of the sources associated with all particles from all samples across locations  Latent variable  
The source associated with particle ; is an indicator vector of length , such that when the position of is equal to 1, then particle originates from source  Latent variable  
An matrix of mixing proportions of sources for all samples across all locations  Latent variable  
A vector of mixing proportions associated with sample from location  Latent parameter  
The proportional contribution of the source to the sample from location  Latent parameter  
A matrix of the probabilities of observing each of the particle types at all sources for the sample  Latent parameter  
The vector of probabilities associated with observing each of the particle types at source in sample  Latent parameter  
The probability of observing particle type at source in sample  Latent parameter  
An matrix of Dirichlet distribution parameters that drive the mixing proportions of the sources at each of the locations  Latent parameter  
The vector of Dirichlet distribution parameters that drive the mixing proportions of the sources at location  Latent parameter  
The Dirichlet distribution parameter that drives the mixing proportion of source at location  Latent parameter  
A matrix of Dirichlet distribution parameters that drive the mixing proportions of the particle types at each of the sources  Latent parameter  
The vector of Dirichlet distribution parameters that drive the mixing proportions of the particle types for source  Latent parameter  
The Dirichlet distribution parameter that determines the mixing proportion of particle at source  Latent parameter 
Appendix C Further discussion on the M-step
L-BFGS-B is a limited-memory quasi-Newton method that incorporates box constraints into the optimisation process (Byrd et al., 1994; Zhu et al., 1994). The method minimises (or, equivalently, maximises) a function subject to lower and upper bounds on each component of the current iterate. L-BFGS-B avoids the computational cost of explicitly computing a Hessian matrix and instead approximates it using the gradient of the function to be optimised; it is thus efficient for large-scale problems. The algorithm proceeds by defining a quadratic model in terms of the original function to be minimised, its gradient, and a positive definite limited-memory Hessian approximation. Minimising this quadratic model provides an approximate solution for the next iterate, from which a search direction is obtained. This search direction allows us to find the next iterate. Given the new iterate, a new gradient and limited-memory Hessian approximation are computed and, pending satisfaction of a convergence criterion, the next iteration begins.
In deconvolving mixtures of dust particles, L-BFGS-B can be used to obtain updates for the two matrices of Dirichlet parameters by maximising the lower bound function with respect to each of their components. Clearly, the gradient plays a central role in this method, and so we use this appendix to define the gradients used in the L-BFGS-B algorithm to obtain these updates. For further discussion of the L-BFGS-B method, see Byrd et al. (1994) and Zhu et al. (1994).
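As a minimal illustration, using SciPy's implementation of L-BFGS-B rather than the original Fortran code, and a toy quadratic objective standing in for the (negated) lower bound function, the method can be invoked as follows; the small positive lower bound mimics the constraint that Dirichlet parameters must remain strictly positive:

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective standing in for the negated lower bound function;
# its unconstrained minimum lies at (2, -1).
def neg_objective(x):
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def gradient(x):
    # The gradient drives L-BFGS-B's limited-memory Hessian approximation.
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)])

# Box constraints: Dirichlet parameters must stay strictly positive,
# hence a small positive lower bound and no upper bound.
bounds = [(1e-6, None), (1e-6, None)]
result = minimize(neg_objective, x0=np.array([1.0, 1.0]), jac=gradient,
                  method="L-BFGS-B", bounds=bounds)
# The first coordinate reaches its unconstrained optimum, while the
# second is clamped at the lower bound.
```

In the dust model, the objective would be the lower bound (negated for minimisation) and each row of a Dirichlet parameter matrix would play the role of `x`, with the gradients given in this appendix supplied through `jac`.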
It is convenient to begin by defining the lower bound in terms of the parameters considered. We note that the two lower bound functions can be optimised independently, since neither depends on the parameters of the other:
(12)  
Specifying these two functions makes it straightforward to define the gradients:
(13)  
where each component of the gradient is given by:
(14)  