1 Introduction
Human movement results from the coordinated activation of the skeletal muscles. The muscle fascicle length and its change over time are critical for the force and efficiency of the muscle. It is thus necessary to measure fascicle length, which is usually done from ultrasonic images (Fukunaga et al., 1997, 2001; Zhou and Zheng, 2012). An example of a B-mode ultrasound image of the muscle gastrocnemius medialis recorded with an ALOKA Prosound 7 can be seen in Fig. 1: the fascicles span the space between the two aponeuroses.
As the fascicles are interrupted by noise and are rarely captured in their full length by the imaging process, their length must be computed from three auxiliary observables: the positions of the two aponeuroses and the fascicle orientation angle (pennation). Throughout the present paper, we make the simplifying assumption that both aponeuroses can be approximated by straight lines. The fascicle length can then be computed from the pennation angle at different positions on these lines. We thus concentrate only on the problem of finding the aponeuroses and estimating the pennation angle.
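Under the straight-line assumption, the relation between pennation angle and fascicle length reduces to simple trigonometry: if the two aponeuroses are locally parallel with perpendicular distance h, and β denotes the pennation angle, the fascicle length is L = h / sin β. The following sketch is our own illustration of this relation (the function name is ours, not from any described implementation):

```python
import math

def fascicle_length(thickness, pennation_deg):
    """Fascicle length between two (locally) parallel aponeuroses.

    thickness:     perpendicular distance between the aponeuroses
    pennation_deg: pennation angle in degrees (angle between the
                   fascicle and the deep aponeurosis)
    """
    return thickness / math.sin(math.radians(pennation_deg))

# a pennation angle of 30 degrees doubles the fascicle path
# relative to the muscle thickness:
print(fascicle_length(4.0, 30.0))  # -> 8.0 (up to rounding)
```

For non-parallel aponeuroses this simple formula no longer applies, which is one reason the anchor point of the fascicle line matters (see the conclusions).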
When the imaging is done while the muscle is in motion, the image quality can deteriorate due to variations of transducer-skin contact and position (Aggeloussis et al., 2010). If the fascicle length or orientation estimation is done not manually, but semi-automatically or even fully automatically, robust image processing methods for possibly noisy videos are therefore required.
According to Yuan et al. (2020), algorithms for automatically estimating the fascicle orientation can be divided into three different approaches. In the first category, the tracking is semi-automatic: manually marked individual fibers are followed in subsequent frames. This can be done, for example, by calculating the optical flow, as in the UltraTrack software (Farris and Lichtwark, 2016). A disadvantage of these methods is the cumulative error, which requires manual correction after several frames. In addition, misalignments may result from significant changes in the appearance and intensity of the structures between successive frames. These problems occur particularly with large displacement fields due to fast motion and the insufficient sampling rates of most currently available commercial devices.
Methods based on texture feature detection form the second category, which includes the Hough transform (Zhou and Zheng, 2008), the Radon transform (Zhao and Zhang, 2011), and vesselness filtering (Rana et al., 2009). The disadvantage of these methods is that the result of the angle estimation may be distorted by speckle noise and intramuscular blood vessels, which modify the characteristics of the muscle fascicles.
The third category includes deep learning approaches. Cunningham et al. proposed deep residual networks (Cunningham et al., 2017) and convolutional neural networks (Cunningham et al., 2018) to estimate the muscle fascicle orientation. One problem with deep learning methods is that they require a large amount of manually measured image data to achieve good results. Another difficulty is the dependence of the image acquisition and the image distortions on the ultrasound transducer, so that adapted data sets are required.

In the present article, we compare two established methods from the literature with two new approaches for determining the orientation of textures. As established methods, we consider vesselness filtering (Rana et al., 2009) and the Radon transform (Zhao and Zhang, 2011). We compare these with the recently proposed gray level co-occurrence matrix based texture orientation estimation (Zheng et al., 2018) and the calculation of the angle from the projection profile (Dalitz et al., 2008). The latter method has been used for some time in document image analysis for estimating the rotation of binary documents. Here we demonstrate that it can be used for gray level images, too.
In order to evaluate the quality of the different algorithms, we have compared their results with manual estimations of the pennation angle by different expert observers. As evaluation criteria, we utilized the intraclass correlation and the mean absolute error with respect to the interobserver average, and the percentage of results within the interobserver range.
This article is organized as follows: in sections 2 and 3 we describe the implemented algorithms, section 4 describes the evaluation method, section 5 discusses the results and compares the algorithm performances, and in section 6 we draw some conclusions and give recommendations for the practical utilization of the algorithms.
2 Region of Interest Extraction
To determine the region of interest (ROI), each video frame is evaluated separately. Firstly, the two black areas (see Fig. 1) are removed. Then, to reinforce the aponeuroses, a vesselness filtering (see section 3.2) is carried out, and Otsu's thresholding method is used to generate a binary image of the filtered image. In the result, the two largest segments, which correspond to the two aponeuroses, are selected. Straight lines are fitted to the lower segment border of the superficial aponeurosis and to the upper segment border of the deep aponeurosis using the least squares method. The height of the ROI results from the difference between the smallest value of the lower aponeurosis minus 10 pixels and the largest value of the upper aponeurosis plus 10 pixels. The width of the ROI is calculated from the width of the image minus a safety margin of 10 pixels to the left and right borders. This ensures that the ROI is always positioned within the muscle. As the noise level or the orientation angle may vary over the entire ROI, we additionally subdivided the entire region horizontally into eight overlapping subregions. For a fully automated process, it would be necessary to automatically pick the subregion with the "best" image quality. To characterize this quality, we have computed, for every subregion, the gray value variance as a measure for contrast, and the mean gradient value and the maximum value of the histogram of the gradients as measures for edge sharpness.
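The subdivision and the three quality measures can be sketched as follows. This is our own illustration, not the paper's implementation: the overlap factor and the function names are assumptions, and the number of histogram bins is arbitrary.

```python
import numpy as np

def split_into_subregions(roi, n=8, widen=1.5):
    """Split the ROI horizontally into n overlapping windows.
    `widen` (an assumption; the overlap is not specified) makes
    each window wider than w/n so that neighbors overlap."""
    h, w = roi.shape
    win = int(round(widen * w / n))
    starts = np.linspace(0, w - win, n).astype(int)
    return [roi[:, s:s + win] for s in starts]

def quality_measures(sub):
    """Gray value variance (contrast), mean gradient magnitude and
    maximum histogram count of the gradient magnitudes."""
    sub = sub.astype(float)
    contrast = sub.var()
    gy, gx = np.gradient(sub)
    gmag = np.hypot(gx, gy)
    hist, _ = np.histogram(gmag, bins=32)
    return contrast, gmag.mean(), int(hist.max())

roi = np.tile(np.arange(80.0), (40, 1))   # synthetic ramp image
subs = split_into_subregions(roi)
# pick the subregion with the highest mean gradient:
best = max(subs, key=lambda s: quality_measures(s)[1])
```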
3 Fascicle Direction Estimation
For the determination of the fiber orientation we used different methods, which are described in the following. These methods were either applied directly to the ROI, or a preprocessing step was used for fascicle enhancement. For preprocessing, a vesselness filter or a Radon transform was optionally applied for image enhancement. Tbl. 1 shows the investigated combinations of preprocessing and fascicle orientation estimation.
orientation                preprocessing
estimation      none       Frangi     Radon
Frangi                     x
Radon                                 x
GLCM            x          x          x
projections     x          x          x
3.1 Radon Transform
The Radon transform determines the line integral of the image function along all straight lines of the plane. For each of these straight lines, one can consider the Radon transform as a projection of the function onto a straight line perpendicular to it. For this reason it was used by Zhao and Zhang (2011) and Yuan et al. (2020) to determine the orientation of the muscle fibers in ultrasound images. It should be noted that the Radon transform cannot only be used for direct angle estimation, but also merely as a preprocessing operation to reinforce fascicles. Such a preprocessed image with an enhancement of the linear structures in the initial image is achieved by applying the following equation:
$I' = \mathcal{R}^{-1}\big[(\mathcal{R}I)^{2}\big]$   (1)
where $\mathcal{R}$ is the Radon transform and $\mathcal{R}^{-1}$ is the inverse Radon transform. The result of the Radon transform based enhancement is shown in Fig. 2(c). The angle of the fascicle orientation results from the position of the maximum of the Radon transform. In our tests, we calculated the Radon transform only for an angular range of 15 to 70 degrees, in which the actual values vary, to exclude errors due to the orientation of the speckle pattern.
3.2 Vesselness Filter
Muscle fascicles appear in ultrasound images as vessel-like tubular structures, so that Rana et al. (2009) used the multiscale vesselness filter developed by Frangi et al. (1998) to enhance them.
In the first step of this filter, the image is convolved with Gaussian kernels at multiple scales. Then the Hessian matrix of these convolved images is computed. Its eigenvalues provide information about the direction of line-like structures: the eigenvector corresponding to the smallest eigenvalue yields the orientation angle at the respective pixel position. For our tests we used the implementation in libfrangi (https://github.com/ntnu-bioopt/libfrangi), whereby we only allowed angles within our chosen range of 15 to 70 degrees in order to suppress responses from dominating horizontal or vertical structures. All values outside this range were set to zero in the result image. To estimate a total orientation angle from all the local angles estimated at nonzero pixels, we estimated the angle distribution with a kernel density estimator with "Silverman's rule of thumb" (Sheather, 2004) and determined the angle maximizing this density. Like the Radon transform, the vesselness filter can alternatively also be used merely as a preprocessing operation for enhancing fascicle structures. An example is shown in Fig. 2(b).
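The combination of Hessian eigen-analysis and kernel density estimate can be sketched in a few lines. The following is our own simplified, single-scale illustration, not the libfrangi implementation: it computes the Hessian via finite differences, takes the orientation from the eigenvector of the eigenvalue with the smallest magnitude at strong ridge pixels, and locates the maximum of a Silverman-bandwidth kernel density estimate. The smoothing scale and the ridge-strength threshold are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import gaussian_kde

def hessian_orientation(img, sigma=2.0, lo=15.0, hi=70.0):
    """Single-scale sketch of the Hessian based orientation estimate."""
    sm = ndimage.gaussian_filter(img.astype(float), sigma)
    gy, gx = np.gradient(sm)
    hxx = np.gradient(gx, axis=1)
    hxy = np.gradient(gx, axis=0)
    hyy = np.gradient(gy, axis=0)
    root = np.sqrt(((hxx - hyy) / 2) ** 2 + hxy ** 2)
    l1 = (hxx + hyy) / 2 + root
    l2 = (hxx + hyy) / 2 - root            # l2 <= l1; bright ridges: l2 << 0
    lam = np.where(np.abs(l1) < np.abs(l2), l1, l2)   # smallest magnitude
    # eigenvector of [[a, b], [b, c]] for eigenvalue lam is (b, lam - a)
    ang = np.degrees(np.arctan2(lam - hxx, hxy)) % 180.0
    mask = l2 < np.percentile(l2, 5)       # strongest bright ridges only
    angles = ang[mask]
    angles = angles[(angles > lo) & (angles < hi)]
    if angles.std() < 1e-6:                # degenerate: all angles equal
        return float(np.median(angles))
    kde = gaussian_kde(angles, bw_method='silverman')
    grid = np.arange(lo, hi + 0.5, 0.5)
    return float(grid[np.argmax(kde(grid))])

# demo: stripe pattern that is constant along the 30 degree direction
Y, X = np.mgrid[0:128, 0:128].astype(float)
th = np.radians(30.0)
stripes = np.cos(0.5 * (X * np.sin(th) - Y * np.cos(th)))
```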
3.3 Projection Profile
The projection profile method (Dalitz et al., 2008) estimates the orientation angle as the angle $\alpha$ with the highest variation of the skewed projection profile

$p_\alpha(y) = \sum_{x} f(x,\, y + x\tan\alpha)$   (2)

where $f(x,y)$ is the gray value of the ultrasound image at position $(x,y)$ and zero outside the image. The variation of this profile is defined as

$V(\alpha) = \sum_{y}\big(p_\alpha(y+1) - p_\alpha(y)\big)^{2}$   (3)
In our implementation, we calculate the variation for an angle range from 15 to 70 degrees with a step width of 0.5 degrees, which corresponds to the possible angles occurring under our recording conditions. Then we select the angle corresponding to the highest variation.
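The procedure of Eqs. (2) and (3) can be sketched directly with numpy. This is our own illustration under the stated angle range and step width; the function names and the synthetic test image are ours:

```python
import numpy as np

def skewed_profile(img, alpha_deg):
    """Skewed projection profile p(y) = sum_x f(x, y + x*tan(alpha))."""
    h, w = img.shape
    t = np.tan(np.radians(alpha_deg))
    prof = np.zeros(h)
    idx = np.arange(h)
    for x in range(w):
        ys = idx + int(round(x * t))      # sample row y + x*tan(alpha)
        valid = (ys >= 0) & (ys < h)      # zero outside the image
        prof[valid] += img[ys[valid], x]
    return prof

def profile_angle(img, angles=None):
    """Angle maximizing the variation of the skewed projection profile."""
    if angles is None:
        angles = np.arange(15.0, 70.5, 0.5)
    variations = [np.sum(np.diff(skewed_profile(img, a)) ** 2)
                  for a in angles]
    return float(angles[int(np.argmax(variations))])

# synthetic test image: parallel lines rising at 30 degrees
img = np.zeros((200, 200))
t = np.tan(np.radians(30.0))
for y0 in range(0, 200, 20):
    for x in range(200):
        y = y0 + int(round(x * t))
        if y < 200:
            img[y, x] = 1.0
print(profile_angle(img))  # -> 30.0
```

At the correct angle, each line collapses into a single sharp spike of the profile, which maximizes the sum of squared differences between neighboring profile values.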
3.4 Gray-Level Co-occurrence
The gray level co-occurrence matrix (GLCM) represents an estimate of the probability that a pixel at position $(x,y)$ with gray level $i$ has a gray level $j$ at position $(x,y) + \vec{d}$, where $\vec{d}$ is a shift vector. The GLCM has a size of $G \times G$, where $G$ is the maximum of the gray levels in the image. If different relative positions $\vec{d}$ are used to calculate the GLCM, the texture orientation can be estimated. Zheng et al. (2018) applied this method to evaluate SAR images of the sea surface. For the calculation of the GLCM, we utilized the method that Zheng et al. call "scheme 1". If the shift vector $\vec{d}$ corresponds with the texture orientation, the diagonal elements of the GLCM attain high values. For the estimation of the fascicle orientation, we apply the criterion suggested in Zheng et al. (2018), i.e., the degree of concentration of larger elements of the GLCM with respect to the diagonal line:

$C(\alpha) = \sum_{i=1}^{G} \sum_{j=1}^{G} (i-j)^{2}\, p_\alpha(i,j)$   (4)

The weight $(i-j)^{2}$, which increases with increasing distance of the matrix element from the diagonal, results in smaller values for images with a strong line structure if the angle $\alpha$ corresponds to the orientation of this structure. In our experiments we used a maximum gray level of $G = 40$ and an angle range of 15 to 70 degrees. The estimated angle was the angle with the lowest concentration value $C(\alpha)$.
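The concentration criterion of Eq. (4) can be computed without building the matrix explicitly, because $\sum_{i,j}(i-j)^2 p_\alpha(i,j)$ equals the mean squared gray level difference of the paired pixels. The sketch below is our own illustration: the shift length, the number of gray levels, and the one-degree scan step are assumptions, and it exploits that for angles between 15 and 70 degrees both shift components are non-negative.

```python
import numpy as np

def glcm_contrast(q, alpha_deg, d=20):
    """Concentration criterion C(alpha) for a shift vector of length d
    along angle alpha: equals the mean squared level difference of the
    pixel pairs, so the GLCM itself need not be built."""
    a = np.radians(alpha_deg)
    dx = int(round(d * np.cos(a)))   # both >= 0 for alpha in (0, 90) deg
    dy = int(round(d * np.sin(a)))
    h, w = q.shape
    src = q[:h - dy, :w - dx]
    dst = q[dy:, dx:]
    return float(np.mean((src - dst) ** 2))

def glcm_angle(img, angles=None, levels=16, d=20):
    """Angle with the lowest concentration value C(alpha)."""
    if angles is None:
        angles = np.arange(15.0, 71.0, 1.0)
    q = np.round((img - img.min()) / (img.max() - img.min())
                 * (levels - 1)).astype(int)
    costs = [glcm_contrast(q, a, d) for a in angles]
    return float(angles[int(np.argmin(costs))])

# stripe pattern that is constant along the 30 degree direction
Y, X = np.mgrid[0:128, 0:128].astype(float)
th = np.radians(30.0)
img = np.cos(0.5 * (X * np.sin(th) - Y * np.cos(th)))
best = glcm_angle(img)   # close to 30 degrees
```

Because the shift vector is rounded to integer pixels, the angular resolution is limited by the shift length d; longer shifts resolve the angle more finely.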
3.5 Local Regression
Due to the noisy nature of the images, the angle estimate can fluctuate considerably between adjacent frames and subregions. It is thus natural to seek a more robust angle estimate by means of local regression. To this end, we optionally apply Cleveland and Devlin's LOESS method (Cleveland and Devlin, 1988), which is a distance weighted least squares fit over the $q$ nearest neighbors with the tricube weight

$w_i = \left(1 - \left(\frac{d_i}{d_q}\right)^{3}\right)^{3}$   (5)

where $d_i$ is the distance to the $i$-th data point and $d_q$ is the distance to the $q$-th nearest neighbor. In our case, the predictor is the frame number and the dependent variable is the pennation angle.
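A minimal sketch of a LOESS evaluation at one point, with the tricube weight of Eq. (5) and a locally linear fit, could look as follows (our illustration; function names and the neighborhood size are assumptions):

```python
import numpy as np

def tricube(d, d_q):
    """Tricube weight of Eq. (5); zero beyond the q-th neighbor."""
    r = np.clip(d / d_q, 0.0, 1.0)
    return (1.0 - r ** 3) ** 3

def loess_smooth(x0, xs, ys, q=7):
    """Locally weighted linear fit at x0 over the q nearest neighbors."""
    d = np.abs(xs - x0)
    idx = np.argsort(d)[:q]
    w = tricube(d[idx], d[idx].max())
    # weighted least squares: scale the rows by sqrt(w)
    A = np.vstack([np.ones(q), xs[idx]]).T * np.sqrt(w)[:, None]
    b = ys[idx] * np.sqrt(w)
    c0, c1 = np.linalg.lstsq(A, b, rcond=None)[0]
    return c0 + c1 * x0

frames = np.arange(20.0)
angles = 0.5 * frames + 20.0     # noise-free linear "pennation angles"
smoothed = loess_smooth(10.0, frames, angles)   # close to 25.0
```

On noise-free linear data the fit reproduces the line exactly; on noisy angle sequences it averages out frame-to-frame fluctuations while following slow trends.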
4 Evaluation Method
In order to evaluate and compare the different algorithms, we have asked three different experts to manually draw the aponeurosis and fascicle orientation into ultrasonic images with the user interface of the UltraTrack software (Cronin et al., 2011). The images were taken from two different videos, which were each recorded with an ALOKA Prosound 7 for five consecutive stance phases (touchdown to toe-off) of the left foot during walking (video "W") and running (video "R"). This resulted in a total of 425 different frames. The muscle fascicles in the R video were less clearly visible than in the W video due to the shakier transducer-skin contact during running.
Each frame was examined three times by every expert, but on different days, so that we had nine different manually estimated angles for each frame. This was done to estimate the accuracy of the expert opinion. The intraclass correlation ICC3 (Shrout and Fleiss, 1979) between the experts' angle estimations was 0.97, which means that there was good agreement among the experts about which angles were higher and which were lower. On the other hand, the average angle spread per frame differed between experts A, B, and C; box plots for the spread distributions can be seen in Fig. 3. The spread between experts was considerably greater than within each expert, and we conclude that we cannot expect an algorithm to estimate the angle with an accuracy greater than about two degrees.
Part of the inter- and intraobserver variation can be explained by varying fascicle orientations in different image regions. We therefore split the region of interest into eight slightly overlapping subregions and ran the algorithms on each subregion as well as on the entire region. For each algorithm, we then measured the following performance indicators for each of these nine regions:

- the intraclass correlation (ICC3) with the interobserver average; this measures how well the estimated angles follow the curve shape
- the mean absolute error (MAE) with respect to the interobserver average; this measures the overall error of the estimation in degrees
- the percentage of values inside the interobserver range ("hit")
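The three indicators above can be computed as sketched below. This is our own illustration, with `icc3` implementing the consistency form ICC(3,1) of Shrout and Fleiss (1979) from the two-way mean squares; the function names are ours:

```python
import numpy as np

def icc3(ratings):
    """Consistency ICC(3,1) of Shrout and Fleiss for an n x k matrix
    (n targets, k raters), computed from the two-way mean squares."""
    M = np.asarray(ratings, dtype=float)
    n, k = M.shape
    grand = M.mean()
    row_m = M.mean(axis=1)
    col_m = M.mean(axis=0)
    ms_rows = k * np.sum((row_m - grand) ** 2) / (n - 1)
    resid = M - row_m[:, None] - col_m[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

def mae(est, ref):
    """Mean absolute error between estimates and reference."""
    return float(np.mean(np.abs(np.asarray(est) - np.asarray(ref))))

def hit_rate(est, lo, hi):
    """Percentage of estimates inside the interobserver range [lo, hi]."""
    est, lo, hi = map(np.asarray, (est, lo, hi))
    return 100.0 * np.mean((est >= lo) & (est <= hi))

# a constant offset leaves the consistency ICC at 1 but shows in the MAE:
ref = np.arange(10.0)
est = ref + 2.0
print(icc3(np.column_stack([est, ref])))  # -> 1.0
print(mae(est, ref))                      # -> 2.0
```

This also illustrates why both indicators are needed: the ICC3 captures whether the estimated angles follow the curve shape, while the MAE captures systematic offsets.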
5 Results
As the pennation angle is defined as the angle between the deep aponeurosis and the muscle fascicles, there are two possible sources of error for its estimation: errors in the estimation of the aponeurosis’ slope, and in the estimation of the fascicle orientation. We therefore first evaluated the aponeurosis estimation, and then the estimation of the pennation angle. Moreover, to derive recommendations for preprocessing filtering, we report results for the different combinations of preprocessing and estimation algorithms listed above in Tbl. 1.
5.1 Aponeurosis slope
In video “R”, the deep aponeurosis was very close to a straight line, and algorithm and expert opinion about its slope angle was in good agreement: ICC3=, MAE=, hit=.
In video “W”, the deep aponeurosis was curved slightly (see Fig. 4) and the experts tended to estimate the slope at the right end, whilst the algorithm computed an average slope over its entire width. This had the effect that the automatic estimate of its slope angle was on average one degree greater than the expert opinion: ICC3=, MAE=, hit=.
As the decision at which position the tangential angle of the aponeurosis is measured is somewhat arbitrary, we conclude that the aponeurosis slope angle is estimated by our algorithm within the possible accuracy. The difference in slope estimation has no effect for video “R”, but for video “W” it leads to a systematic difference of about one degree for the pennation angle, i.e., the automatically estimated pennation angle in video “W” should be about one degree greater than the manually estimated angle.
5.2 Pennation angle
For the pennation angle, we have evaluated two different approaches to its estimation. The first approach models the angle as a single texture feature over the entire ROI, whilst the second approach models it as locally and statistically varying and applies a LOESS fit over subregions of neighboring frames.
5.2.1 Entire region
It turned out that the results were very different for the two videos: for all algorithms, all performance indices were considerably better on the less noisy video "W" (see Tbl. 2). The best performing algorithm was the projection profile method, followed by the GLCM. As can be seen in Fig. 5(a), the angles estimated by the other two algorithms follow the curve shape with lesser agreement, which corresponds to the poorer ICC3 values in Tbl. 2.
algorithm    video   ICC3    MAE      hit
projection   W       0.871    1.231   58%
             R       0.003    9.335   35%
GLCM         W       0.784    2.221   33%
             R       0.180   10.437   10%
Frangi       W       0.552    2.998   17%
             R       0.524    5.493   33%
Radon        W       0.540    2.309   42%
             R       0.430    6.506   32%
5.2.2 LOESS fit over subregions
To obtain a more robust angle estimator, we calculated the estimates for eight subregions, selected the "best" three subregions per frame, and made a LOESS fit over these subregions including the eight neighboring frames. As our predictor was the frame number, the distance in Eq. (5) was measured in frame numbers.
This raises the question of how the "best" subregions are selected for each frame. A human expert would focus on a region in which the fascicles are clearly visible, i.e., a region with high contrast or sharp edges. The three criteria listed in section 2 try to measure this property. It turned out that the actual criterion has a smaller effect than the choice of algorithm. We thus present the results that use the highest mean gradient as the criterion for the "best" subregions; the results for the other criteria are similar.
algorithm    video   ICC3    MAE      hit
projection   W       0.975    0.548   86%
             R       0.096   12.895    1%
GLCM         W       0.926    1.567   27%
             R       0.848    4.122   29%
Frangi       W       0.733    1.704   49%
             R       0.635    6.688   14%
Radon        W       0.804    2.007   41%
             R       0.868    3.357   36%
As can be seen from Tbl. 3, the LOESS fit improves the performance indices in almost all cases. One notable exception is the projection profile method for video “R”: in this case the angle estimates were so far off that they even fell outside the range of Fig. 6(b), although this algorithm performed best on video “W”. We thus conclude that the projection profile method should be used in combination with a preprocessing filter because it is not robust with respect to high levels of noise.
5.3 Effect of preprocessing
algorithm       video   ICC3    MAE      hit
projection      W       0.946    1.962   25%
(with Frangi)   R       0.946    1.871   62%
projection      W       0.755    2.231   31%
(with Radon)    R       0.665    4.500   29%
GLCM            W       0.914    2.408   21%
(with Frangi)   R       0.058   39.699    0%
GLCM            W       0.718    2.912   20%
(with Radon)    R       0.695    4.316   23%
To see whether using the Radon transform or the vesselness filter ("Frangi") as a preprocessing operation improves the performance of the other algorithms, we first applied these filters and then utilized the same LOESS approach as in the preceding subsection. As can be seen from Tbl. 4, this did not improve the performance of the GLCM with respect to Tbl. 3, but for the projection profile method, preprocessing with a vesselness filter substantially improved the results for video "R". Overall, the combination of vesselness filter and projection profile method was the best performing algorithm, followed by the GLCM without preprocessing.
6 Conclusions
Based upon our experimental evaluation, we recommend two possible algorithms for estimating the pennation angle in ultrasonic images of muscles. The best performing algorithm was a combination of the vesselness filter as a preprocessing operation with the projection profile method for angle estimation. This algorithm achieved an intraclass correlation close to one and a mean absolute error of less than two degrees. The second best algorithm was based on the gray level co-occurrence matrix (GLCM).
Both the robustness and the accuracy of the angle estimates are considerably improved by a LOESS fit over neighboring frames and the subregions with the best visible edges. In our study, we selected these regions automatically on the basis of the mean absolute value of the gradient within the subregion.
In practice, if semi-automatic processing is possible, the region selection could alternatively be done by an expert user. This would also have the benefit that the fascicle length computation can be based on the selected region. This is of relevance because the fascicle length is not well defined if the superficial and the deep aponeuroses are not parallel. In that case, a hint by an expert user is necessary in any case to set an anchor point for the line used for computing the fascicle length, which could be chosen, e.g., as the midpoint of the user selected region.
Acknowledgments
Parts of this study were financially supported by the German Federal Ministry of Economic Affairs and Energy under grant no. 50WB1728.
References
Aggeloussis et al. (2010). Reproducibility of fascicle length and pennation angle of gastrocnemius medialis in human gait in vivo. Gait & Posture 31(1), pp. 73–77.
Cleveland and Devlin (1988). Locally weighted regression: an approach to regression analysis by local fitting. Journal of the American Statistical Association 83(403), pp. 596–610.
Cronin et al. (2011). Automatic tracking of medial gastrocnemius fascicle length during human locomotion. Journal of Applied Physiology 111(5), pp. 1491–1496.
Cunningham et al. (2017). Deep residual networks for quantification of muscle fiber orientation and curvature from ultrasound images. In Annual Conference on Medical Image Understanding and Analysis, pp. 63–73.
Cunningham et al. (2018). Estimating full regional skeletal muscle fibre orientation from B-mode ultrasound images using convolutional, residual, and deconvolutional neural networks. Journal of Imaging 4(2), pp. 29.
Dalitz et al. (2008). Optical recognition of psaltic Byzantine chant notation. International Journal of Document Analysis and Recognition (IJDAR) 11(3), pp. 143–158.
Farris and Lichtwark (2016). UltraTrack: software for semi-automated tracking of muscle fascicles in sequences of B-mode ultrasound images. Computer Methods and Programs in Biomedicine 128, pp. 111–118.
Frangi et al. (1998). Multiscale vessel enhancement filtering. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 130–137.
Fukunaga et al. (1997). Determination of fascicle length and pennation in a contracting human muscle in vivo. Journal of Applied Physiology 82(1), pp. 354–358.
Fukunaga et al. (2001). In vivo behaviour of human muscle tendon during walking. Proceedings of the Royal Society of London. Series B: Biological Sciences 268(1464), pp. 229–233.
Rana et al. (2009). Automated tracking of muscle fascicle orientation in B-mode ultrasound images. Journal of Biomechanics 42(13), pp. 2068–2073.
Sheather (2004). Density estimation. Statistical Science 19(4), pp. 588–597.
Shrout and Fleiss (1979). Intraclass correlations: uses in assessing rater reliability. Psychological Bulletin 86(2), pp. 420.
Yuan et al. (2020). Dynamic measurement of pennation angle of gastrocnemius muscles obtained from ultrasound images based on gradient Radon transform. Biomedical Signal Processing and Control 55, pp. 101604.
Zhao and Zhang (2011). Automatic tracking of muscle fascicles in ultrasound images using localized Radon transform. IEEE Transactions on Biomedical Engineering 58(7), pp. 2094–2101.
Zheng et al. (2018). Development of a gray-level co-occurrence matrix-based texture orientation estimation method and its application in sea surface wind direction retrieval from SAR imagery. IEEE Transactions on Geoscience and Remote Sensing 56(9), pp. 5244–5260.
Zhou and Zheng (2008). Estimation of muscle fiber orientation in ultrasound images using revoting Hough transform (RVHT). Ultrasound in Medicine & Biology 34(9), pp. 1474–1481.
Zhou and Zheng (2012). Human motion analysis with ultrasound and sonomyography. In 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 6479–6482.