The impact of a liquid drop on a solid surface is an important phenomenon that occurs frequently both in nature and in industry [31, 64]. Many different physical properties are involved in this phenomenon, such as the properties of the liquid drop (e.g., its velocity, surface tension, and viscosity), the conditions of the solid surface (e.g., its temperature, roughness, and stiffness), and the ambient conditions (e.g., temperature, pressure, and humidity) [31, 64, 17, 54, 66, 56, 26, 60, 32]. Thus, there are various possible outcomes when a drop impacts on a solid surface [64, 51, 68, 31].
A major outcome is splashing, which occurs when the impacting drop breaks up and ejects secondary droplets [24, 49, 50, 10]. By contrast, a nonsplashing drop just spreads over the surface until it reaches a maximum radius [23, 15, 5].
The study of drop impact has evolved enormously, from the time when only a few stages of this high-speed phenomenon could be observed to the recent advent of high-speed videography, which has enabled the observation of microdrop impact at a rate of a frame every 100 ns [62, 55, 58, 59]. Nevertheless, observation and study of drop impact still rely heavily on frame-by-frame inspection by human eyes. Owing to the complex nature of the phenomenon, however, many important but nonintuitive characteristics may be missed when observation relies on the naked eye alone.
Fortunately, the tremendous advances in machine learning techniques brought about by the recent boom in artificial intelligence (AI) seem to have provided an answer. Artificial neural networks (ANNs), supervised machine learning algorithms inspired by biological neural networks, have been widely utilized and have proven accurate for various classification and prediction tasks [52, 28, 29, 14]. For example, the ability of a deep convolutional neural network (CNN) to accurately classify images has been widely exploited in search engines, face recognition, and cancer diagnosis, among many other tasks [33, 27, 20, 35]. In the field of fluid mechanics, ANNs have already been utilized for various purposes [25, 38, 44, 63, 19, 36, 43, 45], such as bubble pattern recognition, turbulence modeling [41, 65, 39], and classification of vortex wakes.
Despite their proven prediction and classification accuracy, machine learning models are often too complicated and usually function as black boxes, with the designers being unable to explain the underlying reasoning that leads to a specific decision [3, 2]. However, previous studies have shown that a simple and interpretable model such as a feedforward neural network (FNN) can achieve high performance when trained with highly similar and high-quality data even if the amount of training data is limited [16, 19, 7].
Therefore, this study aims to unveil important but nonintuitive characteristics of the splashing of a drop on a solid surface by extracting the image features that a well-trained and highly accurate FNN model uses to classify images of splashing and nonsplashing drops during their impact. In Sec. 2, the methodology of the study, including data collection, data preparation, and image classification using an FNN, is explained in detail. In Sec. 3, the results, including the classification performance and an analysis of the classification process of the trained FNN, are presented and discussed.
The methodology of this study can be summarized as follows. A drop impact experiment is performed to capture high-speed videos of impacts of splashing and nonsplashing drops on a solid surface using a high-speed camera (Sec. 2.1). To ensure high similarity and quality of the images, digital image processing is performed using an in-house MATLAB code to extract the desired frames and crop away the unnecessary image background (Sec. 2.2.1). Next, the processed images are segmented (Sec. 2.2.2) to train, validate, and test an FNN until high accuracy is achieved (Sec. 2.3). Finally, the classification process of the optimized FNN is analyzed to extract the image features that the FNN uses to decide whether the drop in an image is splashing or nonsplashing.
2.1 Data collection: drop impact experiment
A drop impact experiment was carried out to collect high-speed videos of splashing and nonsplashing drops from which images were extracted for image classification.
2.1.1 Experimental setup
The experimental setup, shown in Fig. 1, consisted of a syringe, a rubber tube, a plastic needle, an adjustable stand, a glass substrate, a high-speed camera, and background lighting. The syringe supplied liquid via the rubber tube to the plastic needle (internal diameter 0.97 mm), which was clamped to the adjustable stand. A drop formed at the tip of the needle and fell freely before impacting the hydrophilic surface of the glass substrate. The impact was recorded using the high-speed camera in the presence of background lighting.
2.1.2 Experimental conditions
The drop inertia, i.e., the impact velocity, was the only physical property manipulated in the experiment. It was varied via the adjustable stand by adjusting the impact height, i.e., the vertical distance between the point where the drop started to fall freely and the surface of the glass substrate. The impact height ranged between 4 and 60 cm, and the outcome of the drop impact was either splashing or nonsplashing.
Throughout the experiment, the other physical properties of the liquid were kept constant by using the same liquid, ethanol (Hayashi Pure Chemical Ind., Ltd.), so that its density, surface tension, and dynamic viscosity were fixed. The drop size (area-equivalent radius) was kept constant by using the same plastic needle. The physical properties of the solid surface and the ambient air were kept constant by using the same type of glass substrate (Muto Pure Chemicals Co., Ltd., star frost slide glass 511611) and by carrying out the experiment under atmospheric pressure at room temperature.
2.1.3 High-speed videography
The drop impact was recorded using a high-speed camera (Photron, FASTCAM SA-X) at 45 000 frames/s with a fixed spatial resolution, in the presence of background lighting. The recorded videos are sequences of 8-bit grayscale images with an image height of 288 pixels and an image width of 1024 pixels. The recording started at least nine frames (i.e., at least 200 µs) before the drop touched the surface. This was to ensure that at the beginning of the recording the drop had not touched the surface, so that the drop size and impact velocity could be computed correctly. The recording ended after the drop had deformed into a thin sheet with uniform thickness. Figure 2 shows several snapshots of drop impacts at three impact heights, including 20 and 8 cm.
In the presence of background lighting, during a drop impact, the intensity value is near to 0 at a pixel position that captured the drop or the ejected secondary droplets, since the light is blocked. On the other hand, the intensity value is approximately equal to the intensity of the background lighting at a pixel position that captured neither the drop nor the ejected secondary droplets. Note that the intensity value is the luminous intensity captured by the high-speed camera, which scales between 0 and 255 for grayscale images. To ensure high similarity of the image data, the intensity of the background lighting was set to about 210 for every recording.
A total of 252 videos were recorded: 142 splashing and 110 nonsplashing.
2.1.4 Collected data
The outcomes of the impact of ethanol drops, identified by looking for the presence of secondary droplets, and the measured impact velocities (measured from the ninth frame before impact) after falling from heights ranging from 4 to 60 cm are summarized in Fig. 3. The blue and green circles represent splashing and nonsplashing drop impacts, respectively. From a frame-by-frame inspection for the presence of secondary droplets by human eyes, it was found that splashing never occurred for drops falling from below the transition heights, whereas it always occurred for those falling from above them. In between, there was a splashing transition at around 22 cm, where splashing occurred in only a few cases. As shown in Fig. 2, for the splashing that occurred near the transition height, only a few secondary droplets were ejected.
The validity of the experimental results was confirmed by comparison with the theoretical drop velocity and with the splashing threshold proposed by Usawa et al. For a drop falling freely from an impact height H, the theoretical impact velocity is

V_theo = sqrt(2gH),

where the gravitational acceleration g = 9.81 m/s². The curve of V_theo against H (black dash-dotted line) shows good agreement with the measured impact velocity. Note that the measured velocity is slightly lower than V_theo owing to the drag force that acted on the drop during free fall. The splashing threshold proposed by Usawa et al. involves a constant deduced using lubrication theory, the dynamic viscosity of the surrounding air, and the velocity at which the lamella is initially ejected. The threshold velocity according to this criterion (horizontal blue dashed line) also validates the outcome of the experiment, since it intersects the curve of V_theo within the experimentally determined splashing-transition height range. Such agreement with theory and with a previous study confirms the validity of the experiment carried out in this study. In the remaining sections of this article, the drop inertia is presented in the form of the impact height.
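Assuming the standard free-fall relation V = sqrt(2gH) with drag neglected, the theoretical impact velocity can be checked numerically; a minimal sketch (the heights below are illustrative values within the experimental 4–60 cm range):

```python
import math

def theoretical_impact_velocity(height_m, g=9.81):
    """Free-fall impact velocity V = sqrt(2 g H), neglecting air drag."""
    return math.sqrt(2.0 * g * height_m)

# Illustrative impact heights spanning the experimental range (4-60 cm).
for h_cm in (4, 22, 60):
    v = theoretical_impact_velocity(h_cm / 100.0)
    print(f"H = {h_cm:2d} cm -> V_theo = {v:.2f} m/s")
```

The measured velocities are expected to fall slightly below these values because of air drag during free fall.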
The corresponding dimensionless numbers, namely the Froude number Fr, Ohnesorge number Oh, Reynolds number Re, Stokes number St, and Weber number We, are shown in Table I.
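For reference, the standard definitions of these dimensionless groups in drop-impact studies are sketched below. The symbols (impact velocity U_0, drop diameter D_0, liquid density ρ, surface tension σ, liquid viscosity μ, and surrounding-gas viscosity μ_g) are generic assumptions here, and the exact forms used in Table I may differ (e.g., radius instead of diameter):

```latex
\mathrm{Fr} = \frac{U_0^2}{g D_0}, \qquad
\mathrm{Oh} = \frac{\mu}{\sqrt{\rho \sigma D_0}}, \qquad
\mathrm{Re} = \frac{\rho U_0 D_0}{\mu}, \qquad
\mathrm{St} = \frac{\rho U_0 D_0}{\mu_g}, \qquad
\mathrm{We} = \frac{\rho U_0^2 D_0}{\sigma}.
```

The Stokes number written here compares the drop inertia with the viscosity of the surrounding gas, as in studies of air entrainment during drop impact.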
2.2 Data preparation
2.2.1 Digital image processing
Digital image processing was performed to prepare highly similar and high-quality images for image classification using an FNN. This included frame extraction and removal of the image background.
In this study, the frame in which half of the drop had impacted the surface, i.e., when the central height of the drop normalized by the area-equivalent diameter was 0.5, was selected from each recorded drop-impact video to train, validate, and test the FNN. This frame was extracted using an in-house MATLAB code, which executed image processing that included binarization, nonlocal means filtering, and object analyses such as circle detection [67, 4] and edge detection. Examples of the extracted images are shown in Fig. 2(c).
The same code also cropped the extracted images to a smaller fixed size, with the impacting drop and the substrate surface at the center and bottom of the cropped image, respectively. This reduces the computation time and increases the interpretability of the image classification process.
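The cropping step can be sketched with plain NumPy. This is an illustrative reimplementation, not the in-house MATLAB code; the binarization threshold, crop size, and surface-row logic are all assumptions:

```python
import numpy as np

def crop_around_drop(frame, crop_h=160, crop_w=320, thresh=105):
    """Crop a grayscale frame so the drop is horizontally centered and
    the substrate surface sits at the bottom of the crop.

    frame: 2D uint8 array (0-255), bright background (~210), dark drop.
    """
    dark = frame < thresh                      # binarize: True where the drop blocks light
    rows, cols = np.nonzero(dark)
    if rows.size == 0:
        raise ValueError("no drop found in frame")
    cx = int(cols.mean())                      # horizontal center of the drop
    bottom = rows.max() + 1                    # assume lowest dark row ~ substrate surface
    top = max(bottom - crop_h, 0)
    left = int(np.clip(cx - crop_w // 2, 0, frame.shape[1] - crop_w))
    return frame[top:bottom, left:left + crop_w]

# Synthetic 288x1024 frame: uniform bright background with a dark "drop" blob.
frame = np.full((288, 1024), 210, dtype=np.uint8)
frame[200:260, 500:560] = 20
crop = crop_around_drop(frame)
print(crop.shape)   # (160, 320)
```

In practice the drop center would come from the circle-detection step rather than a simple centroid of dark pixels.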
As a result of the similar videographing conditions and the digital image processing explained in this subsection, the images are highly similar in terms of drop height and background lighting, regardless of impact height and outcome (splashing/nonsplashing). Figure 4 shows several examples of the processed images for drop impacts at several impact heights, including 20 and 8 cm. The images in the first row are the processed versions of the images shown in Fig. 2(c).
2.2.2 Data segmentation for cross-validation
(Table II: number of training–validation and test data for each combination.)

The collected data were segmented so that the FNN could be trained, validated, and tested. The purpose of training was to find the best set of weights and biases for the trained model. Validation was carried out concurrently with training for hyperparameter tuning and for the prevention of underfitting and overfitting. Last but not least, testing was done to ensure the generalizability of the trained FNN, i.e., its ability to classify new images that were not used for training.
For the implementation of cross-validation, the collected data (labeled images) were segmented into training, validation, and test data. For fivefold cross-validation, the collected data were first divided into five different groups, each consisting of about 20% of the collected data. One of these groups was set aside and reserved for testing, while the remaining roughly 80% of the data were used to train and validate the FNN. Out of this 80% training–validation data, about 10% were picked randomly for validation, and thus only about 70% were used for training. Consequently, there were five different combinations of training–validation–test data that were used to train–validate and test the FNN, and thus five different sets of results to be analyzed. To ensure that the data for every impact height were included in both the training–validation and the test datasets for all five data combinations, the 80%–20% segmentation was carried out for each impact height before the data were pooled into the respective combinations. As shown in Table II, the numbers of splashing and nonsplashing data for training–validation and testing are similar regardless of the data combination, i.e., about 200 and 50 for training–validation and testing, respectively. To check whether the number of data is sufficient, training was also performed using a smaller number of training–validation data. The results are similar even when the segmentation is 20% for training–validation and 80% for testing, thus confirming that the number of data is sufficient for the objectives of this study.
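The per-height fivefold segmentation described above can be sketched as follows; the video identifiers and counts are synthetic stand-ins for the real dataset:

```python
import random

def fivefold_splits(samples_by_height, seed=0):
    """Per impact height, shuffle and distribute samples over 5 folds; fold k
    is the test set of combination k, the rest is training-validation data."""
    rng = random.Random(seed)
    folds = [[] for _ in range(5)]
    for height, samples in samples_by_height.items():
        samples = samples[:]
        rng.shuffle(samples)
        for i, s in enumerate(samples):
            folds[i % 5].append(s)          # round-robin -> ~20% per fold per height
    combos = []
    for k in range(5):
        test = folds[k]
        train_val = [s for j in range(5) if j != k for s in folds[j]]
        combos.append((train_val, test))
    return combos

# Synthetic data: 10 video IDs per impact height.
data = {h: [f"H{h}cm_{i}" for i in range(10)] for h in (4, 20, 22, 60)}
combos = fivefold_splits(data)
train_val, test = combos[0]
print(len(train_val), len(test))   # 32 8
```

A further random 10%/70% validation/training split of `train_val` would complete the scheme described in the text.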
2.3 Image classification: feedforward neural network (FNN)
In this subsection, the details of the FNN that was used to classify the images of splashing and nonsplashing drops are explained. The FNN was implemented in the Python programming language on Google Colaboratory using the TensorFlow library. Through architecture optimization (the process of which is explained in Appendix A), an FNN with zero hidden layers was chosen. In Sec. 2.3.1, the details of the optimized architecture and the mathematical operations involved in the FNN are given, and in Sec. 2.3.2, the algorithms and mathematical equations involved in the training, validation, and testing of the FNN are described.
2.3.1 Neural network architecture
The optimized architecture of the FNN is shown in Fig. 5. The optimized FNN with no hidden layer exhibited a classification performance as high as that of an FNN with hidden layers, while being superior to the latter in terms of higher interpretability and lower computational cost. Since there is no hidden layer, the input layer is fully connected directly to the output layer.
As mentioned in Sec. 2.2.1, the input images were cropped to a fixed size. In the input layer, each image was flattened in row-major order from a two-dimensional matrix and transposed into a one-dimensional column vector x. The value of each element of x was normalized from 0–255 to 0–1.
Each element of x (red circles in Fig. 5) is connected to each element of the output layer (blue circles) by a linear function:

z = Wx + b,

where z is the result of this mathematical operation, W is the weight matrix that connects the input and output layers, and b is the bias vector. The number of output classes, which are splashing and nonsplashing in this case, is two, and so z has two elements. Both W and b are updated through neural network training.
Each element of z was set to be activated by a sigmoid function:

y_j = 1 / (1 + exp(−z_j)),

for j = 0, 1, where y_j is the result of this operation. As shown in Eq. (4), a sigmoid function saturates large negative values toward 0 and large positive values toward 1. Thus, y can be interpreted as a vector that contains the probabilities of an input image being classified as a nonsplashing drop (y_0) and as a splashing drop (y_1). For binary classification, the sum of y_0 and y_1 is approximately equal to 1.
The classification threshold of the trained FNN was set to 0.5. In other words, the trained FNN classifies an image according to the element of the output vector whose value is equal to or greater than 0.5. For example, if the predicted splashing probability of an image is 0.5 or greater, then the image will be classified as an image of a splashing drop.
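The forward pass of this zero-hidden-layer FNN, a linear map followed by an element-wise sigmoid, can be sketched in NumPy; the tiny input size and random weights below are placeholders for the flattened image and the trained parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W, b, x):
    """x: flattened, normalized image (n,); W: (2, n); b: (2,).
    Returns y = [P(nonsplashing), P(splashing)]."""
    z = W @ x + b
    return sigmoid(z)

rng = np.random.default_rng(0)
n = 8                                    # tiny illustrative input size
W = rng.normal(scale=0.5, size=(2, n))   # trained weights would go here
b = np.zeros(2)
x = rng.random(n)                        # normalized pixel intensities in [0, 1]
y = forward(W, b, x)
label = "splashing" if y[1] >= 0.5 else "nonsplashing"
print(y.round(3), label)
```

With the 0.5 threshold, the decision reduces to checking which output element exceeds 0.5.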
2.3.2 Training and validation
(Fig. 7: training and validation loss and accuracy after every fifty epochs. Comb., combination; train., training loss or accuracy; val., validation loss or accuracy.)
The purpose of neural network training is to determine the value of each element in the weight matrix and the bias vector of the FNN, which were initialized using the Glorot uniform initializer . The training process is illustrated by the flowchart shown in Fig. 6.
Every training image is fed through the FNN to compute y, which is then compared with the label of the image, y^label. As already mentioned in Sec. 2.1.4, the images were inspected frame by frame and labeled accordingly. For an image of a splashing drop, y^label = [0, 1]^T, while for an image of a nonsplashing drop, y^label = [1, 0]^T.
The comparison between y and y^label is made by computing the loss L using the cross-entropy loss function for binary classification, as follows:

L = −Σ_j [ y_j^label ln y_j + (1 − y_j^label) ln(1 − y_j) ],

for j = 0, 1. If y_j is close to y_j^label, then L will be close to 0. However, if y_j is not equal to y_j^label, L will increase dramatically as y_j deviates further from y_j^label. The loss function was used to evaluate the model during both training and validation, but not during testing.
From this computed loss, a backpropagation algorithm was applied to compute the gradient of the loss function with respect to each element of W and b of the FNN. With the computed gradients, the algorithm can determine how each element of W and b should be tweaked to minimize the loss. To tweak them in the direction of descending gradients, an algorithm called mini-batch gradient descent was used.
As well as the cross-entropy loss function [Eq. (5)], the classification accuracy of the FNN was also evaluated:

accuracy = (number of correct predictions) / (total number of predictions).

The number of correct predictions is determined by the classification threshold, which was set to 0.5. The accuracy of the model was evaluated during training, validation, and testing.
To avoid overfitting, a regularization technique called early stopping  was applied. The training of the FNN was evaluated from the plots of loss and accuracy against number of epochs, which are shown in Fig. 7. For better visibility, only the loss and accuracy after every fifty epochs are plotted in the figure. Here, the number of epochs indicates how many times all training images are fed through the FNN for training. As the number of epochs increased, both training and validation losses decreased and approached 0. With early stopping, the losses did not increase after reaching their minimum value. On the other hand, both training and validation accuracies increased with increasing number of epochs and eventually reached 1. The same trend was observed for all data combinations. This showed that the training was valid and the trained FNN was well generalized. The trained FNN was then tested for its accuracy in classifying test images.
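Putting the pieces together, a minimal NumPy training loop with mini-batch gradient descent, a Glorot-style initialization, and early stopping on the validation loss might look like the sketch below. The synthetic data, learning rate, batch size, and patience are illustrative, not the values used in the study:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(y, t, eps=1e-12):
    y = np.clip(y, eps, 1 - eps)
    return -np.mean(np.sum(t * np.log(y) + (1 - t) * np.log(1 - y), axis=1))

def train(X, T, Xv, Tv, lr=0.5, batch=16, epochs=500, patience=20, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 / (n + 2)), size=(2, n))  # Glorot-like init
    b = np.zeros(2)
    best = (np.inf, W.copy(), b.copy())
    wait = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)).reshape(-1, batch):   # mini-batches
            y = sigmoid(X[i] @ W.T + b)
            grad = (y - T[i]) / batch          # dL/dz for sigmoid + cross-entropy
            W -= lr * grad.T @ X[i]
            b -= lr * grad.sum(axis=0)
        val_loss = bce(sigmoid(Xv @ W.T + b), Tv)
        if val_loss < best[0]:
            best, wait = (val_loss, W.copy(), b.copy()), 0
        else:
            wait += 1
            if wait >= patience:               # early stopping
                break
    return best

# Synthetic separable "images": class decided by the mean intensity.
rng = np.random.default_rng(1)
X = rng.random((160, 8))
T = np.stack([(X.mean(1) < 0.5), (X.mean(1) >= 0.5)], axis=1).astype(float)
loss, W, b = train(X[:128], T[:128], X[128:], T[128:])
acc = np.mean((sigmoid(X[128:] @ W.T + b)[:, 1] >= 0.5) == (T[128:, 1] == 1))
print(f"val loss {loss:.3f}, val acc {acc:.2f}")
```

Keeping the parameters of the best validation epoch, as done here, is what prevents the validation loss from rising again after its minimum.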
3 Results and Discussion
In this section, the results and an in-depth discussion are presented. In Sec. 3.1, the testing of the trained FNN is explained. In Sec. 3.2, the process for extracting the image features used by the FNN for classification is elaborated. Finally, in the discussion in Sec. 3.3, an attempt is made to understand the physical interpretation of the extracted image features.
3.1 Testing of FNN
Testing is the evaluation of the ability of the trained FNN to predict new images. The results for all data combinations are shown in Table III. For all combinations, the trained FNN achieved a high test accuracy in classifying images of both splashing and nonsplashing drops.
To check how confident the trained FNN is with regard to classification, the splash probability predicted by the FNN trained with combination 1 for test images with different impact heights is plotted in Fig. 8. Note that only the plot for combination 1 is shown here, because similar results were obtained for other combinations. For most splashing and nonsplashing drops, is above 0.8 and below 0.2, respectively. This indicates a reasonably high confidence of the trained FNN in classifying test images of both splashing and nonsplashing drops.
3.2 Extraction of image features of splashing and nonsplashing drops
To extract the image features that the trained FNN identifies for classification, the important pixel positions were determined by reshaping and visualizing the trained weight matrix W as follows. The matrix form of W is

W = [w_0, w_1]^T,

where the row vectors w_0 and w_1 contain the weights for computing the probabilities of nonsplashing and splashing, respectively. For visualization, both w_0 and w_1 were reshaped in row-major order into two-dimensional matrices of the same shape as the input images and are presented as colormaps in Fig. 9. In this figure, the combination column indicates the data combination that was used to train the FNN. For both w_0 and w_1, the distribution of the values with large magnitude resembles a splashing drop. These values are located at the same positions with opposite signs (negative blue and positive red). The colormaps are similar for all data combinations, indicating good generalizability of the results.
In the colormap of the reshaped w_1 of the FNN trained with each data combination, extreme negative values (blue) are distributed around 1⃝ and 2⃝, while extreme positive values (red) are found around 3⃝. The distribution of these values is symmetric. Remarkably, by comparing these distributions with the images of a typical splashing drop [Fig. 10(a)] and a typical nonsplashing drop [Fig. 11(a)], it is found that 1⃝ corresponds to the area where the ejected secondary droplets of a splashing drop are present, 2⃝ to the contour of the main body of a splashing drop, and 3⃝ to the lamellae of a nonsplashing drop.
To understand how the trained weight helps the FNN to classify images, the process by which the trained FNN classifies the images of a typical splashing drop [Fig. 10(a)] and a typical nonsplashing drop [Fig. 11(a)] is analyzed. Note that these two typical images were cropped from the extracted frames shown in Fig. 2(c).
The analysis was done by visualizing z from Eq. (3) as follows. Each element of the trained weight matrix was multiplied by the normalized intensity value at the corresponding pixel position of an image vector by computing the Hadamard product:

p_j = w_j ∘ x^T,

where w_j is the row vector of the weight matrix for computing the probability of each output (splashing/nonsplashing) and x^T is the transpose of a flattened image. Note that the sum of all elements of p_j is equal to the matrix product of w_j and x. For brevity, the explanation is focused on p_1, which corresponds to the output for splashing. Similarly to the visualization of w_1, p_1 was reshaped in row-major order into a two-dimensional matrix of the same shape as the input images and presented as colormaps in Figs. 10(b) and 11(b), with the same blue–green–red (BGR) scale as Fig. 9.
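The Hadamard-product visualization can be sketched as follows, including the check that its elements sum to the matrix product of the weight row and the image vector; the sizes and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                                   # illustrative flattened-image length
w_splash = rng.normal(size=n)            # stand-in for the trained splashing row
x = rng.random(n)                        # normalized image vector

p = w_splash * x                         # Hadamard (element-wise) product
# Each element shows how much one pixel contributes to the splashing output;
# summing the elements recovers the pre-activation (up to the bias term).
assert np.isclose(p.sum(), w_splash @ x)
print(p.shape)
```

Reshaping `p` to the image shape, as for the weight maps, then shows the per-pixel contributions as a colormap.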
In Figs. 10(a) and 11(a), with background lighting, the intensity value is almost zero at the pixel positions where the drop and the ejected secondary droplets blocked the light. Thus, in Figs. 10(b) and 11(b), most of the values with large magnitudes (red and blue) were zeroed out (green) by the presence of the drop in both images. This can be clearly observed in Figs. 10(c) and 11(c), where there is only zero (green) in the area bounded by the contours of the respective impacting drops.
Nevertheless, through careful observation at 1⃝ (the area where the ejected secondary droplets from a splashing drop are present) and 2⃝ (along the contour of the impacting drop), as indicated in Fig. 9, more negative values (blue) remain in the image of the nonsplashing drop than in that of the splashing drop. As a consequence, the sum of all elements of the Hadamard product [see Eq. (3)] is lower for the image of the nonsplashing drop than for the image of the splashing drop. Thus, the image of a nonsplashing drop could not produce a value high enough to exceed the classification threshold and be classified as an image of a splashing drop. On the other hand, for the image of the splashing drop, more negative values (blue) were zeroed out, raising the value of the sum. Consequently, the value is high enough to exceed the classification threshold, and the image is classified as an image of a splashing drop. Note that, owing to the sigmoid function, the predicted splashing probability exceeds 0.5 when this sum (plus the bias) is positive.
The validity of these observations is confirmed by the plot of this sum against impact height in Fig. 12. Since the same tendency was observed for all data combinations, only the plot for combination 1 is shown. In this figure, the values for the images of the splashing and the nonsplashing drops in Figs. 10(a) and 11(a) are indicated by the blue box and the green box, respectively. It is worth noting that the sum shows an increasing trend with impact height. Such a trend indicates that, even without explicit learning, the trained FNN could estimate the inertia of an impacting drop from the extracted image features, suggesting a possible correlation between the extracted image features and physical properties.
For the trained bias, the values for each data combination are listed in Table IV. The order of magnitude of the bias is much smaller than that of the weighted sums computed by the trained FNNs: among the test images of all data combinations, the smallest absolute value of the weighted sum is 0.40. This indicates that the trained bias did not affect the classification of the FNN and is negligible.
3.3 Discussion of extracted image features
In this subsection, the distribution of the values with large magnitude in the trained weight for the splashing output is discussed in an attempt to understand the underlying physical mechanism.
As shown in Fig. 9, extreme negative values (blue) are distributed at 1⃝, the area where the ejected secondary droplets of the splashing drop are present, and 2⃝, along the contour of the main body of the impacting drop, while extreme positive values (red) are found at 3⃝, the lamellae. Among these, 1⃝ and 2⃝ are important characteristics for the FNN to identify a splashing drop, while 3⃝ is an important feature for the FNN to identify a nonsplashing drop.
The physical interpretations of 1⃝ and 3⃝ are quite intuitive. In the case of 1⃝, the physical interpretation is immediately obvious, since this feature satisfies the typical definition of a splashing drop, namely, the ejection of secondary droplets from the main body of the impacting drop [31, 64]. In the case of 3⃝, which is characteristic of a nonsplashing drop, it can be seen that the lamellae of a nonsplashing drop are shorter and thicker than those of a splashing drop. The lamellae are shorter because the ejection velocity of a lamella of a nonsplashing drop is lower owing to the lower impact velocity (smaller Weber number) [49, 50]. They are thicker because secondary droplets are not ejected from the lamellae of a nonsplashing drop.
For the remainder of this subsection, the discussion focuses on 2⃝, the newly discovered characteristic of a splashing drop, which shows that the contour of the main body of a splashing drop is higher than that of a nonsplashing drop. To understand how important 2⃝ is for the classification of splashing and nonsplashing drops, the extracted images were further cropped into two different sets: one focused on the left lamella and the other focused on the contour of the main body, defined in terms of the radial distance from the center of the drop. Several examples of these two sets of cropped images are shown in Fig. 13. These sets were used to train an FNN with the same architecture, and the trained weights were visualized as the colormaps shown in Fig. 14. All the image features mentioned previously can be observed in these trained weights: 1⃝ and 3⃝ can be seen in Fig. 14(a), while 2⃝ appears in Fig. 14(b). The respective test results are shown in Tables V and VI. The accuracy of the FNN trained with the images focused on the lamella dropped slightly for the classification of images of both splashing and nonsplashing drops for all combinations, as compared with the FNN trained with the images that contain both the lamellae and the main body of the drop. Remarkably, the accuracy of the FNN trained with the images focused on the contour of the main body remains high for the classification of images of both splashing and nonsplashing drops for all combinations, even without identifying the presence of the ejected secondary droplets. In other words, an important but nonintuitive characteristic that differentiates a splashing drop from a nonsplashing drop has been successfully extracted by visualizing the image classification process of an FNN.
Understanding of the phenomenon of drop impact on a solid surface can be deepened by discovering the underlying mechanism that leads to 2⃝, the higher contour of the main body of splashing drops compared with nonsplashing drops. Although the mechanism is unclear at the time of writing, it is analyzed and discussed here from three aspects: (i) pre-impact drop shape; (ii) bubble entrainment; and (iii) pressure impact.
(i) It is important to consider differences in pre-impact drop shape, because the difference in contour height could possibly be due to a difference in the pre-impact width-to-height ratio of the drop, which might have changed during free fall. For the analysis, the contour of each drop before impacting the surface was extracted and averaged according to whether the drop was splashing or nonsplashing. The contours, i.e., the height of the drop along the radial axis, were normalized by the area-equivalent diameter and are shown in Fig. 15, where the blue and green lines represent the averaged contours of splashing and nonsplashing drops, respectively. The black dashed line represents the contour of a half-circle. Before impact [see Fig. 15(a)], the averaged contours of both splashing and nonsplashing drops are similar to a half-circle, indicating that the averaged shapes of splashing and nonsplashing drops are similar. However, after impact [see Fig. 15(b)], the averaged contours of both splashing and nonsplashing drops become higher than the half-circle, with that of splashing drops being higher than that of nonsplashing drops. This indicates that the higher contour of the main body of a splashing drop compared with that of a nonsplashing drop is due to the dynamics during the impact, rather than to any difference in drop shape before impact.
(ii) Bubble entrainment is analyzed because the difference in contour height could possibly be due to a difference in the volume of air entrapped. In the study by Bouwhuis et al., experiments were conducted on drop impact over a wide range of Stokes numbers. The results showed that the volume of air entrapped during drop impact first increases with Stokes number owing to the reduction in capillary forces, until it reaches a maximum value, after which it decreases with increasing Stokes number owing to the increasing inertia of the drop. Since splashing did not occur in the study by Bouwhuis et al., their results are consistent with those reported here. However, these results also show that the higher contour of a splashing drop is not due to air entrapment, because the Stokes numbers of the present experiment lie in the regime where air entrapment is inhibited by the inertia of the drop.
(iii) Last but not least, the pressure impact is analyzed because the difference in contour height could be due to the reaction force of the pressure impact that arises when a drop collides with a solid surface. From the studies by Eggers et al. and Lagubeau et al., it is known that the spreading dynamics undergoes a transition from a pressure-impact regime to a self-similar inertial regime. The difference between splashing and nonsplashing drops in these two dynamical regimes was checked by training an FNN with the same architecture to classify images of splashing and nonsplashing drops in both regimes, including a normalized central drop height of 0.25 (self-similar inertial regime). Several examples of the images used for the training are shown in Fig. 16. The trained weights were visualized as the colormaps shown in Fig. 17, where Fig. 17(b) shows a zoomed-in view of Fig. 17(a). Interestingly, for both regimes, the extracted image features correspond to those extracted from the image classification when half of the drop had impacted the surface. The higher contour of the main body of a splashing drop compared with a nonsplashing drop can already be observed in the pressure-impact regime. It is therefore necessary to examine the relationship between the pressure impact and image feature 2⃝. The impact pressure scales with the liquid density, the drop diameter, and the acceleration of the drop during impact; since the density and diameter were fixed in the experiment, the double integral of the acceleration with respect to time is equivalent to a length scale. By checking whether this length scale scales with the impact velocity, the relationship between the pressure impact and the higher contour can be more or less confirmed. For this, double integration with respect to time was performed on the expression proposed by Philippi et al. for the dimensionless pressure exerted by an impacting drop on a solid surface, which is a function of the dimensionless time and the dimensionless radial distance from the center of the drop. The resulting length scale depends only on the impact velocity.
Since this length scale is the same for drops of the same D0, the contour height is expected to be the same for both splashing and nonsplashing drops. Thus, this analysis cannot prove that the pressure impact is the direct cause of the higher contour of a splashing drop compared with a nonsplashing drop that was found by the trained FNN. However, it does not rule out the existence of a relationship between pressure impact and contour height. Therefore, further analysis is necessary to clarify the mechanism underlying the difference in contour height between splashing and nonsplashing drops.
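The scaling argument above can be checked numerically. The sketch below is a minimal illustration, not the actual computation of this study: it assumes an inertial pressure scaling P = rho U0^2 f(t U0/D0) with a hypothetical short-time profile f(s) = 1/sqrt(s) standing in for the pressure expression of Philippi et al., integrates the acceleration a = P/(rho D0) twice over the impact time scale D0/U0, and confirms that the resulting length depends on D0 but not on U0:

```python
import numpy as np

def contour_length_scale(U0, D0, n=20_000):
    """Double time integral of the impact acceleration a = P/(rho*D0),
    assuming the inertial scaling P = rho*U0**2 * f(t*U0/D0) with a
    hypothetical short-time profile f(s) = 1/sqrt(s)."""
    t = np.linspace(0.0, D0 / U0, n + 1)[1:]   # impact time scale D0/U0
    dt = t[1] - t[0]
    a = (U0**2 / D0) / np.sqrt(t * U0 / D0)    # the density rho cancels
    v = np.cumsum(a) * dt                      # first integral: a velocity
    return float(np.sum(v) * dt)               # second integral: a length

# Doubling U0 leaves the length scale unchanged, while doubling D0
# doubles it: the result depends on D0 only.
l_slow = contour_length_scale(U0=1.0, D0=2e-3)
l_fast = contour_length_scale(U0=2.0, D0=2e-3)
l_big = contour_length_scale(U0=1.0, D0=4e-3)
```

For the hypothetical profile used here, the analytic value is 4 D0/3; only the scaling matters, since the prefactor depends on the actual pressure expression.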
4 Conclusions and Outlook
In this study, nonintuitive characteristics of a drop splashing on a solid surface have been unveiled through image feature extraction using a feedforward neural network (FNN). Experiments were carried out to collect images of splashing and nonsplashing ethanol drops impacting on a hydrophilic glass surface after falling from an impact height ranging from 4 to 60 cm. The collected images were processed to produce very similar images for the training, validation, and testing of the FNN.
During testing, the trained FNN classified images of splashing and nonsplashing drops with high accuracy when half of the drop had impacted the surface. The confidence of the FNN in the classification was reasonably high, with a splashing probability close to 1 for most of the images of splashing drops and close to 0 for most of the images of nonsplashing drops.
Analysis of the classification process showed that the important image features used by the trained FNN to identify a splashing drop are the area where the ejected secondary droplets are present and the area along the contour of the main body of the impacting drop, while the relevant features used to identify a nonsplashing drop are the short and thick lamellae. Among these features, the presence of ejected secondary droplets has typically been used in previous studies to distinguish splashing from nonsplashing drops, and short and thick lamellae have been identified as being characteristic of nonsplashing drops. However, the higher contour of the main body of a splashing drop compared with that of a nonsplashing drop has not hitherto been reported. Further image classification showed that the trained FNN can accurately classify splashing and nonsplashing drops from the contour of the main body alone, without checking for the presence of ejected secondary droplets.
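The decision rule underlying this classification can be illustrated with a short sketch. Everything below is a hypothetical stand-in (random arrays in place of the experimental images, untrained random weights): it shows only how a zero-hidden-layer FNN maps a flattened image to a splashing probability and how its weight vector can be reshaped into the colormap used for feature extraction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical stand-ins for the processed drop images: flattened
# grayscale frames (random noise here; the real inputs are the
# experimentally collected images).
n_pixels = 64 * 64
images = rng.random((10, n_pixels))

# With zero hidden layers, the FNN reduces to one weight vector w and
# bias b; the splashing probability is sigmoid(w . x + b).
w = rng.normal(0.0, 0.01, n_pixels)
b = 0.0
p_splash = sigmoid(images @ w + b)

# Reshaping w to the image shape gives the colormap that reveals which
# pixel regions (secondary droplets, main-body contour, lamellae) push
# the decision toward "splashing" or "nonsplashing".
weight_map = w.reshape(64, 64)
```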
Last but not least, when quantified, these image features exhibit an increasing trend with impact height, indicating a correlation between the image features and the impact velocity. This opens up the possibility of image-based estimation of the impact velocity during drop impact.
Further experimental and computational studies are crucial for obtaining greater understanding of the mechanism responsible for the higher contour of the main body of a splashing drop. Moreover, it is also important to discover time-dependent image features of a splashing drop, which can be done by training ANNs for classification based on high-speed videos of drop impact instead of just still images.
As an outlook of this study, through transfer learning [61, 69, 30], the FNNs trained here can be further trained using images collected from other drop impact experiments in which other physical quantities are the manipulated variables. Eventually, a universal splashing-nonsplashing classification model, which can unveil nonintuitive universal characteristics of a splashing drop, can be built. To this end, the image data and the FNN code used in this study will be uploaded to GitHub (the GitHub link is under construction), and we would like to invite other drop impact researchers to build this universal classification model together.
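The transfer-learning step can be sketched as follows. This is a minimal illustration under stated assumptions, not the study's actual pipeline: the "pretrained" weights, the new images, and their labels are all randomly generated stand-ins, and fine-tuning is plain gradient descent on the cross-entropy loss of a zero-hidden-layer FNN with a deliberately small learning rate, so that the pretrained features are adapted rather than overwritten.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(p, y):
    """Binary cross-entropy, the loss minimized during fine-tuning."""
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

# Hypothetical "pretrained" weights standing in for an FNN trained on
# the original splashing/nonsplashing images.
n_pixels = 64 * 64
w = rng.normal(0.0, 0.01, n_pixels)
b = 0.0

# Hypothetical images and labels from another drop impact experiment
# (e.g., a different liquid or surface).
X_new = rng.random((20, n_pixels))
y_new = rng.integers(0, 2, 20).astype(float)

loss_before = bce_loss(sigmoid(X_new @ w + b), y_new)

# Fine-tune with a small learning rate.
lr = 1e-3
for _ in range(100):
    p = sigmoid(X_new @ w + b)
    w -= lr * X_new.T @ (p - y_new) / len(y_new)
    b -= lr * np.mean(p - y_new)

loss_after = bce_loss(sigmoid(X_new @ w + b), y_new)
```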
We believe that through the methodology of this study, which utilizes the image-processing ability of ANNs and visualizes classification processes that are usually black boxes, nonintuitive characteristics of various phenomena related to fluid dynamics can be extracted, thus creating new insights for fluid dynamics research. We would therefore also like to extend our invitation to researchers who have collected large amounts of high-quality data (not just images) of various phenomena, so that together we can develop the methodology and explore different fluid phenomena from a different perspective.
This work was funded by the Japan Society for the Promotion of Science (Grant Nos. 20H00223, 20H00222, and 20K20972) and the Japan Science and Technology Agency PRESTO (Grant No. JPMJPR21O5). The authors would also like to thank Dr. Masaharu Kameda (Professor, Tokyo University of Agriculture and Technology) and Dr. Masakazu Muto (Assistant Professor, Tokyo University of Agriculture and Technology) for their valuable discussions and suggestions.
The authors have no conflicts of interest to disclose.
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
Appendix A Architecture Optimization
The architecture of the feedforward neural network (FNN) was optimized to find the numbers of hidden layers and neurons needed to achieve the desired performance, defined in terms of thresholds on the loss and on the classification accuracy for images of both splashing and nonsplashing drops. To reduce the computational cost, the architecture with the smallest number of hidden layers that was still capable of achieving the desired performance was chosen as the optimized architecture.
The training results were similar for all candidate architectures in terms of both test accuracy and extracted image features. In terms of test accuracy, all the candidates achieved an accuracy of 80%. With regard to the colormaps of the trained weights, the distribution of the large-magnitude values among the active neurons is similar for all candidates. The optimized architecture that achieved the desired performance at the lowest computational cost has zero hidden layers.
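As an illustration of the zero-hidden-layer case, the sketch below trains a single fully connected sigmoid layer by gradient descent on a toy, linearly separable dataset (random stand-ins, not the experimental images) and measures its training accuracy. Candidate architectures with more hidden layers would insert additional weight matrices between the input and this output layer.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_zero_hidden_fnn(X, y, epochs=500, lr=0.5):
    """Zero-hidden-layer FNN: one fully connected layer with a sigmoid
    output, trained by gradient descent on the cross-entropy loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Toy linearly separable stand-in for the processed drop images.
X = rng.normal(size=(200, 50))
y = (X @ rng.normal(size=50) > 0).astype(float)

w, b = train_zero_hidden_fnn(X, y)
accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y.astype(bool))
```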
-  (2016) Tensorflow: a system for large-scale machine learning. In 12th USENIX symposium on operating systems design and implementation (OSDI 16), pp. 265–283. Cited by: §2.3.
-  (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, pp. 52138–52160. Cited by: §1.
-  (2020) Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, pp. 82–115. Cited by: §1.
-  (1999) Size invariant circle detection. Image Vision Comput. 17 (11), pp. 795–803. Cited by: §2.2.1.
-  (2020) Effect of interfacial mass transport on inertial spreading of liquid droplets. Phys. Fluids 32 (3), pp. 032101. Cited by: §1.
-  (2012) Maximal air bubble entrainment at liquid-drop impact. Phys. Rev. Lett. 109 (26), pp. 264501. Cited by: §3.3.
-  (2013) Compressive sensing based machine learning strategy for characterizing the flow around a cylinder with limited pressure measurements. Phys. Fluids 25 (12), pp. 127102. Cited by: §1.
-  (2020) Machine learning for fluid mechanics. Annu. Rev. Fluid Mech. 52, pp. 477–508. Cited by: §1.
-  (2005) A non-local algorithm for image denoising. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), Vol. 2, pp. 60–65. Cited by: §2.2.1.
-  (2020) On the splashing of high-speed drops impacting a dry surface. J. Fluid Mech. 892. Cited by: §1.
-  (1986) A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8 (6), pp. 679–698. Cited by: §2.2.1.
-  (2018) Performance analysis of Google Colaboratory as a tool for accelerating deep learning applications. IEEE Access 6, pp. 61677–61685. Cited by: §2.3.
-  (2010) On over-fitting in model selection and subsequent selection bias in performance evaluation. J. Mach. Learn. Res. 11, pp. 2079–2107. Cited by: §2.2.2.
-  (2019) Design and implementation of cloud analytics-assisted smart power meters considering advanced artificial intelligence as edge analytics in demand-side management for smart homes. Sensors 19 (9), pp. 2047. Cited by: §1.
-  (2004) Maximal deformation of an impacting drop. J. Fluid Mech. 517, pp. 199–208. Cited by: §1.
-  (2018) Classifying vortex wakes using neural networks. Bioinspir. Biomim. 13 (2), pp. 025003. Cited by: §1, §1.
-  (2021) Initial spreading dynamics of a liquid droplet: the effects of wettability, liquid properties, and substrate topography. Phys. Fluids 33 (4), pp. 042118. Cited by: §1.
-  (2010) Drop dynamics after impact on a solid wall: theory and simulations. Phys. Fluids 22 (6), pp. 062101. Cited by: §3.3.
-  (2020) Shallow neural networks for fluid flow reconstruction with limited sensors. Proc. R. Soc. Lond. A 476 (2238), pp. 20200097. Cited by: §1, §1.
-  (2017) Dermatologist-level classification of skin cancer with deep neural networks. Nature 542 (7639), pp. 115–118. Cited by: §1.
-  Hands-on machine learning with scikit-learn, keras, and tensorflow: concepts, tools, and techniques to build intelligent systems. O’Reilly Media. Cited by: §2.2.2.
-  (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256. Cited by: §2.3.2.
-  (2019) A theory on the spreading of impacting droplets. J. Fluid Mech. 866, pp. 298–315. Cited by: §1.
-  (2019) A note on the aerodynamic splashing of droplets. J. Fluid Mech. 871. Cited by: §1.
-  (2021) Semi-conditional variational auto-encoder for flow reconstruction and uncertainty quantification from limited observations. Phys. Fluids 33 (1), pp. 017119. Cited by: §1.
-  (2019) Magic carpet breakup of a drop impacting onto a heated surface in a depressurized environment. Int. J. Heat Mass Transf. 145, pp. 118729. Cited by: §1.
-  (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778. Cited by: §1.
-  (1989) Multilayer feedforward networks are universal approximators. Neural Networks 2 (5), pp. 359–366. Cited by: §1.
-  (1991) Approximation capabilities of multilayer feedforward networks. Neural Networks 4 (2), pp. 251–257. Cited by: §1.
-  (2020) Transfer learning for nonlinear dynamics and its application to fluid turbulence. Phys. Rev. E 102 (4), pp. 043301. Cited by: §4.
-  (2016) Drop impact on a solid surface. Annu. Rev. Fluid Mech. 48, pp. 365–391. Cited by: §1, §3.3.
-  (2014) Drop splashing on a rough surface: how surface morphology affects splashing threshold. Appl. Phys. Lett. 104 (16), pp. 161608. Cited by: §1.
-  (2012) ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25, pp. 1097–1105. Cited by: §1.
-  (2012) Spreading dynamics of drop impacts. J. Fluid Mech. 713, pp. 50–60. Cited by: §3.3.
-  (1997) Face recognition: a convolutional neural-network approach. IEEE Trans. Neural Networks 8 (1), pp. 98–113. Cited by: §1.
-  (2020) Machine learning open-loop control of a mixing layer. Phys. Fluids 32 (11), pp. 111701. Cited by: §1.
-  (2014) Efficient mini-batch training for stochastic optimization. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 661–670. Cited by: §2.3.2.
-  (2021) An efficient deep learning framework to reconstruct the flow field sequences of the supersonic cascade channel. Phys. Fluids 33 (5), pp. 056106. Cited by: §1.
-  (2016) Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. J. Fluid Mech. 807, pp. 155–166. Cited by: §1.
-  (2008) Hadamard, Khatri–Rao, Kronecker and other matrix products. Int. J. Inf. Syst. Sci. 4 (1), pp. 160–177. Cited by: §3.2.
-  (2021) Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Phys. Fluids 33 (2), pp. 025116. Cited by: §1.
-  (1979) A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9 (1), pp. 62–66. Cited by: §2.2.1.
-  (2020) Interface learning in fluid dynamics: statistical inference of closures within micro–macro-coupling models. Phys. Fluids 32 (9), pp. 091704. Cited by: §1.
-  (2021) Physics guided machine learning using simplified theories. Phys. Fluids 33 (1), pp. 011701. Cited by: §1.
-  (2020) Unsteady reduced-order model of flow over cylinders based on convolutional and deconvolutional neural network structure. Phys. Fluids 32 (12), pp. 123609. Cited by: §1.
-  (2016) Drop impact on a solid surface: short-time self-similarity. J. Fluid Mech. 795, pp. 96–135. Cited by: §3.3.
-  (2016) Artificial neural network for bubbles pattern recognition on the images. In J. Phys.: Conf. Ser., Vol. 754, pp. 072002. Cited by: §1.
-  (1998) Early stopping—but when?. In Neural Networks: Tricks of the Trade, pp. 55–69. Cited by: §2.3.2.
-  (2014) Experiments of drops impacting a smooth solid surface: a model of the critical impact speed for drop splashing. Phys. Rev. Lett. 113 (2), pp. 024507. Cited by: §1, §3.3.
-  (2017) Boundary-layer effects in droplet splashing. Phys. Rev. E 96 (1), pp. 013105. Cited by: §1, §3.3.
-  (2001) Outcomes from a drop impact on solid surfaces. Atomization Sprays 11 (2). Cited by: §1.
-  (1958) The perceptron: a probabilistic model for information storage and organization in the brain. Psychol. Rev. 65 (6), pp. 386. Cited by: §1.
-  (1986) Learning representations by back-propagating errors. Nature 323 (6088), pp. 533–536. Cited by: §2.3.2.
-  (2021) Collisional ferrohydrodynamics of magnetic fluid droplets on superhydrophobic surfaces. Phys. Fluids 33 (1), pp. 012012. Cited by: §1.
-  (2008) High-speed imaging of drops and bubbles. Annu. Rev. Fluid Mech. 40, pp. 257–285. Cited by: §1.
-  (2021) Large impact velocities suppress the splashing of micron-sized droplets. Phys. Rev. Fluids 6 (2), pp. 023605. Cited by: §1, §2.1.4.
-  (2009) Single-drop fragmentation determines size distribution of raindrops. Nat. Phys. 5 (9), pp. 697–702. Cited by: §3.3.
-  (2015) Dynamics of high-speed micro-drop impact: numerical simulations and experiments at frame-to-frame times below 100 ns. Soft Matter 11 (9), pp. 1708–1722. Cited by: §1.
-  (2012) Microdroplet impact at very high velocity. Soft Matter 8 (41), pp. 10732–10737. Cited by: §1.
-  (2017) Wetting and electrowetting on corrugated substrates. Phys. Fluids 29 (6), pp. 067101. Cited by: §1.
-  (2016) A survey of transfer learning. J. Big Data 3 (1), pp. 1–40. Cited by: §4.
-  (1877) XXVIII. On the forms assumed by drops of liquids falling vertically on a horizontal plate. Proc. R. Soc. Lond. 25 (171-178), pp. 261–272. Cited by: §1.
-  (2021) Deep-learning of parametric partial differential equations from sparse and noisy data. Phys. Fluids 33 (3), pp. 037132. Cited by: §1.
-  (2006) Drop impact dynamics: splashing, spreading, receding, bouncing…. Annu. Rev. Fluid Mech. 38, pp. 159–192. Cited by: §1, §3.3.
-  (2020) Feature selection and processing of turbulence modeling based on an artificial neural network. Phys. Fluids 32 (10), pp. 105117. Cited by: §1.
-  (2022) Droplet impact of blood and blood simulants on a solid surface: effect of the deformability of red blood cells and the elasticity of plasma. Forensic Sci. Int. 331, pp. 111138. Cited by: §1.
-  (1990) Comparative study of Hough transform methods for circle finding. Image Vision Comput. 8 (1), pp. 71–77. Cited by: §2.2.1.
-  (2018) Splashing criterion and topological features of a single droplet impinging on the flat plate. SAE Int. J. Engines 11 (2018-01-0289). Cited by: §1.
-  (2021) A comprehensive survey on transfer learning. Proc. IEEE 109, pp. 43–76. Cited by: §4.