1 Introduction
Estimating blood oxygenation (sO2) or similar functional parameters deep inside tissue is one of the key use cases for multispectral photoacoustic (PA) imaging[1]. A reliable and fast measurement of sO2 could serve many applications and is one of the main arguments for translating PA imaging into the clinic[2, 3]. PA imaging systems based on clinical ultrasound systems with handheld linear probes[4] integrate relatively easily into a clinical workflow, because they give simultaneous access to ultrasonic imaging and are familiar to clinicians. In contrast to other PA imaging setups, handheld linear probes can be applied to a large variety of clinical use cases. However, most translational work with such scanners is based on proprietary software and as such is not accessible to the community.
To address this bottleneck, we present a real-time multispectral hybrid PA and ultrasonic (US) imaging (PAUS) system running on the open-source platform Medical Imaging Interaction Toolkit (MITK)[5]. The functionality of the software framework is demonstrated by implementing a new method for sO2 estimation in the presence of motion. The latter relies on the computer vision method optical flow[6] to co-register a sequence of PA images using brightness patterns in corresponding US images and was specifically designed to mitigate non-rigid inter-frame motion caused by pulsing arteries. The performance of the platform, including the presented software modules, is demonstrated by means of motion-corrected blood oxygenation estimation in the carotid artery and an accompanying vein, based on data acquired with the presented system.

2 PAUS imaging platform
In this section, we present the hardware setup and the open-source software components developed for our PAUS imaging platform.

2.1 Hardware setup
The hardware setup is shown schematically in Fig. 1. Data acquisition is performed using our DiPhAS US system (Fraunhofer IBMT, St. Ingbert, Germany) and a 128-element linear US transducer with a center frequency of 7.5 MHz (L7-Xtech, Vermon, Tours, France). The light from a fast tuning OPO laser (Phocus Mobile, Opotek, Carlsbad, USA) is delivered via a custom fiber bundle to the probe and into the tissue or phantom. The pulse energy of up to 40 mJ at a pulse repetition rate of 20 Hz is delivered over a surface of 2 cm², below the maximum permissible exposure (MPE) safety limit[7], to avoid tissue damage. The actual energy output of each individual laser pulse can be measured with an integrated pulse energy sensor. For all mentioned hardware components, application programming interfaces (APIs) are provided by the vendors. Additionally, several software components were specifically developed by us for hardware communication, as detailed below.
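As a plausibility check (assuming the ANSI skin MPE of 20 × 10^(0.002(λ − 700)) mJ/cm² for wavelengths between 700 and 1050 nm[7]): 40 mJ spread over 2 cm² corresponds to a radiant exposure of 20 mJ/cm², which meets the limit at 700 nm and stays below it for longer wavelengths.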
2.2 MITK Photoacoustics software components
The real-time PAUS imaging software has been developed as part of MITK, a free and open-source platform for the development of interactive medical image processing software. MITK has a modular architecture: Plugins contain the user interface and depend on Modules, which comprise the application logic and domain-specific functionality. Each plugin provides graphical user interfaces (GUIs) for specialized controls in so-called Views, which can be called from the MITK Workbench, the interactive user application for research within MITK. The structure of MITK therefore implements a clear separation between the application layer, i.e. direct user interaction, and lower layers such as hardware control.
External libraries like the Insight Toolkit (ITK)[8] and the Visualization Toolkit (VTK)[9] are integral dependencies of MITK. Further libraries are integrated in MITK for specialized use cases: the OpenCV library[10] for motion estimation, Eigen[11] for spectral unmixing, and OpenCL[12] for GPU implementations of algorithms for faster execution. Data can be exchanged in real time with systems not running MITK, or even running on different PCs, using MITK's implementation of OpenIGTLink[13]. Offline processing of recorded data can be performed with the MITK PA image processing plugin[14], as was done for the beamforming in our experiments (using commit https://phabricator.mitk.org/rMITK9ce68418f58b). Fig. 2 shows the main software components we implemented or extended for PAUS imaging. The following paragraphs describe these components in detail.

Hardware layer
The hardware layer abstracts the control of the hardware in use, i.e. the DiPhAS US system and the OPO laser. Hardware control is implemented in the classes of the USHardwareDiPhAS sub-module of the Ultrasound Hardware module and in the PhotoacousticsHardware module. The Ultrasound Hardware module and its sub-modules rely heavily on the existing generalized architecture of MITK for US devices[15]. Communication with the DiPhAS API, and therefore control over the transducer and its output, is implemented in the USHardwareDiPhAS sub-module. The transducer itself is represented as a generalized US device by a subclass of the abstract Device class, in which all hardware control is centralized. The Device class and further classes accessible through it, such as an ImageSource subclass that handles image acquisition, manage the communication, storage and adjustment of the parameters necessary for image acquisition, e.g. time gain compensation or transducer voltage, as well as the setup of the interleaved PAUS acquisition sequence. This simple hierarchical approach provides a single unifying and extendable interface to access multiple US devices, which can be managed in parallel.
The ImageSource class acquires image data from the US system via a callback method of the external vendor API. Both the raw image data and the images beamformed by the DiPhAS system are passed through the callback, allowing fast display of the beamformed US and PA images as well as later image processing of the raw data. The passed image data is continuously written into a buffer which the processing layer accesses for display, while the raw data can be saved directly to a hard drive for later use.
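The buffering pattern can be illustrated with a minimal sketch (Python, all names hypothetical; the actual implementation is the C++ ImageSource class): a bounded double-ended queue holds the most recent beamformed frames for display, while raw frames are retained for offline processing.

    import collections
    import threading

    class FrameBuffer:
        # Bounded buffer: the DAQ callback appends frames, the processing
        # layer always reads the most recent one for display.
        def __init__(self, maxlen=32):
            self._frames = collections.deque(maxlen=maxlen)
            self._raw = []          # raw channel data kept for offline use
            self._lock = threading.Lock()

        def on_new_frame(self, beamformed, raw):
            # Called from the acquisition callback with both the beamformed
            # image and the corresponding raw channel data.
            with self._lock:
                self._frames.append(beamformed)
                self._raw.append(raw)

        def latest(self):
            # Polled by the processing layer; returns None while empty.
            with self._lock:
                return self._frames[-1] if self._frames else None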
The PhotoacousticsHardware module handles communication with the laser system to control the pump laser and tune the OPO. Specifically, the pump laser is controlled via serial port using MITK's SerialCommunication class, while the OPO is controlled through its API. The module also implements a method that allows the USHardwareDiPhAS sub-module to read out the data from an internal pyroelectric sensor in the laser system, which measures the current laser pulse energy. This data can be matched to the acquired PA images and used to correct for laser energy fluctuations.
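A minimal sketch of such a per-pulse energy correction, assuming PA frames and matched pulse energies are already available as arrays (names illustrative, not the MITK API):

    import numpy as np

    def correct_pulse_energy(pa_frames, pulse_energies):
        # pa_frames: (n, H, W) PA images; pulse_energies: (n,) per-frame
        # readings from the pyroelectric sensor, matched by frame index.
        energies = np.asarray(pulse_energies, dtype=float)
        return pa_frames / energies[:, None, None]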
Processing layer
Basic processing is performed in the ImageSource class before the acquired images are displayed. Whenever the plugin within the application layer requests a new image, a method of the ImageSource class is called through the Device class, which grabs the most recent image from the buffer of beamformed images. Depending on the settings, various filters can be applied to the image, such as a resampling filter (i.e. vertical rescaling of the image), a B-Mode filter, or basic fluence correction for pre-defined illumination geometries. Afterwards, the processed image is passed to the application layer.
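As an illustration of such a B-Mode filter, the following sketch performs envelope detection via the Hilbert transform followed by log compression; it mirrors the principle, not MITK's actual implementation:

    import numpy as np
    from scipy.signal import hilbert

    def bmode(beamformed, dynamic_range_db=60):
        # beamformed: 2-D array (depth samples x scan lines).
        envelope = np.abs(hilbert(beamformed, axis=0))  # envelope along depth
        envelope /= envelope.max()
        log_img = 20 * np.log10(np.maximum(envelope, 1e-12))
        # Clip to the chosen dynamic range and shift to non-negative values.
        return np.clip(log_img, -dynamic_range_db, 0) + dynamic_range_db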
Application layer
Data acquisition from the DiPhAS setup is performed based on user settings specified in the MITK US Support plugin[15], which has been extended to work with our custom DAQ (data acquisition was performed with commit https://phabricator.mitk.org/rMITK2d2ebd4f22fa). The US Support plugin communicates with the USHardwareDiPhAS sub-module through an instance of its Device subclass, which it accesses through MITK's micro service functionality; through micro services, a module can register as a specific service, which can then be requested by other modules or plugins. Various PA specific image acquisition settings are available within the US Support plugin. These preferences are handed over to the USHardwareDiPhAS sub-module, which is responsible for communication with the hardware API. The plugin also passes image data acquired through the device's ImageSource class to the PAUSViewer plugin. The PAUSViewer plugin has been implemented specifically for the use case of PAUS imaging and provides a view that presents corresponding images of both modalities side by side in real time, with options to set specific level windows and colormaps. To set the triggers for the laser system, define fast tuning wavelength sequences, and control the status of the laser, we implemented the Laser Control plugin, which serves as the user interface for the PhotoacousticsHardware module. The application layer as a whole is highly customizable and extendable depending on the use case or specific problem at hand.
3 Clinical Sample Application
An example of a possible clinical application of PA imaging is the visualization of transmural inflammatory processes such as giant cell arteritis[16], where PA imaging could be used in the diagnostic workup. As an initial step towards this use case, we image the carotid artery of a healthy human volunteer and estimate blood oxygenation as a qualitative validation. The long term goal of PA imaging of the carotid artery is to investigate if and how multispectral PA imaging can be used in the diagnosis of inflammatory processes in arteries. In addition, PA imaging may be useful in the evaluation of plaque morphology in carotid artery stenosis or in the planning phase of surgeries for head and neck cancers invading the carotid artery.
One of the main issues during the sequential acquisition of multispectral PA images of the carotid artery is inter-frame motion, which leads to invalid results due to intense motion artifacts. If the motion is periodic, these artifacts can be somewhat mitigated by frame averaging[17], and there are approaches in PA computed tomography that assume rigid motion[18] or perform gated image acquisition[19] to minimize motion artifacts. However, frame averaging of moving structures improves image quality only at the expense of spatial resolution, imaging time and temporal resolution. Also, the motion of the carotid artery and the surrounding tissue is not rigid. Furthermore, gating is impractical due to the need for additional equipment and the roughly 20-fold decrease in imaging frame rate to approximately 1 Hz, which in turn leads to increased motion artifacts from the movement of patient and physician, especially with freehand probes. The following section shows how we demonstrate the applicability of our framework to in vivo blood oxygenation estimation.
3.1 Motion-compensated blood oxygenation estimation
In the following we present the methods used for our platform demonstration experiments: (1) the spectral unmixing method for sO2 estimation and (2) our new inter-frame motion correction approach. Both methods are implemented as Python extensions with minimal overhead and can be used via the Python interface offered by MITK (see http://docs.mitk.org/nightly/mitkPython_Overview.html).
3.1.1 Blood oxygenation estimation with spectral unmixing
sO2 estimation with spectral unmixing from PA images requires at least as many acquired wavelengths as unmixed chromophores (two in our case); using more wavelengths makes the estimation more robust[20]. We record a sequence of raw PA data at five wavelengths in the near infrared. Based on Luke et al.[21] and considering the power spectrum of our laser source, we skew our wavelength sequence from an equidistant spacing towards wavelengths where the differences between the absorption of oxygenated and deoxygenated hemoglobin are most significant. The acquisition wavelength sequence is measured with a spectrometer (HR2000+, Ocean Optics, Dunedin, USA) to account for errors in OPO calibration.
Spectral unmixing is performed using a non-negative constrained linear least squares solver[22] (scipy.optimize.nnls, Python 2.7) on the sets of five B-Mode images. The unmixing results for oxygenated (HbO2) and deoxygenated hemoglobin (Hb) are used to calculate blood oxygenation sO2 and total hemoglobin THb:

    sO2 = HbO2 / (HbO2 + Hb),   THb = HbO2 + Hb   (1)

sO2 is visualized by masking the results for low values of THb.
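A minimal sketch of this per-pixel unmixing with scipy.optimize.nnls, assuming a stack of five co-registered B-Mode PA images and an absorption matrix built from literature spectra of HbO2 and Hb at the acquired wavelengths (variable names are illustrative):

    import numpy as np
    from scipy.optimize import nnls

    def unmix_so2(pa_stack, absorption):
        # pa_stack: (n_wavelengths, H, W) B-Mode PA images.
        # absorption: (n_wavelengths, 2) matrix; columns hold the HbO2 and
        # Hb absorption coefficients at the acquired wavelengths.
        n_wl, h, w = pa_stack.shape
        hbo2 = np.zeros((h, w))
        hb = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                coeffs, _ = nnls(absorption, pa_stack[:, i, j])
                hbo2[i, j], hb[i, j] = coeffs
        thb = hbo2 + hb
        with np.errstate(invalid="ignore", divide="ignore"):
            so2 = np.where(thb > 0, hbo2 / thb, 0.0)
        return so2, thb

    # Masking as described above: suppress sO2 where THb is low, e.g.
    # so2_masked = np.where(thb > threshold, so2, np.nan)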
3.1.2 PAUS inter-frame motion correction
We propose a method that uses US images to compensate for intra-sequence motion in the PA images. Our PAUS imaging system acquires interleaved US images with minimal delay after each PA image. We estimate the optical flow[6] of each US image in a sequence with respect to the first US image of that sequence and use the estimated flows to warp the corresponding PA images. As "optical flow is the distribution of apparent velocities of movement of brightness patterns in an image"[23], it has to be estimated on the US images instead of the PA images, since both the brightness and the patterns carrying that brightness vary strongly within a multispectral PA acquisition sequence.
The specific optical flow implementation we use is by Farnebäck[6] and is part of the OpenCV library (cv2.calcOpticalFlowFarneback, Python 2.7). Knowing that the motion we want to compensate is relatively small and quite homogeneous, with no small structures moving independently of the surrounding tissue, we want an optical flow estimate that is approximated by a smooth surface – a blurred motion field. In addition, we aim for a fast (real-time capable) computation. We therefore perform only two iterations (iterations = 2) and choose a large averaging window and neighborhood (specifically: winsize = 40, poly_n = 7, poly_sigma = 1.5). The resulting blurred motion field has the desirable side effect of making the algorithm more robust.
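A sketch of the resulting correction step using cv2.calcOpticalFlowFarneback with the parameters named above (the pyramid parameters pyr_scale and levels are assumed typical values, as they are not specified in the text; helper names are illustrative):

    import cv2
    import numpy as np

    def motion_correct(us_seq, pa_seq):
        # us_seq, pa_seq: lists of five co-registered single-channel uint8
        # frames; US frame k was acquired right after PA frame k.
        ref = us_seq[0]
        h, w = ref.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        corrected = [pa_seq[0]]
        for us, pa in zip(us_seq[1:], pa_seq[1:]):
            # Dense flow from the reference US frame to the current one.
            flow = cv2.calcOpticalFlowFarneback(
                ref, us, None, pyr_scale=0.5, levels=3, winsize=40,
                iterations=2, poly_n=7, poly_sigma=1.5, flags=0)
            # Warp the current PA frame back into the reference geometry.
            map_x = (grid_x + flow[..., 0]).astype(np.float32)
            map_y = (grid_y + flow[..., 1]).astype(np.float32)
            corrected.append(cv2.remap(pa, map_x, map_y, cv2.INTER_LINEAR))
        return corrected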
3.2 Validation experiments
The purpose of our validation experiments was to demonstrate the applicability of our platform to a translational research field. The validation experiments of the MITK PAUS real-time imaging platform were performed on data acquired with the platform. Two experiments were performed with the goals to (1) estimate blood oxygenation in peripheral vessels, namely the radial and ulnar arteries, and (2) estimate blood oxygenation in the carotid artery. Both (1) and (2) require oxygenation estimation as detailed in section 3.1.1; (2) additionally requires motion correction as detailed in section 3.1.2. While the data acquired for the experiments was processed and visualized as B-Mode images (both PA and US) in real time, we performed the analysis offline on the data recorded by MITK. However, both oxygenation estimation and motion correction are implemented to be real-time capable. Both in vivo PAUS experiments were performed freehand on healthy human volunteers, aiming to hold the probe as still as possible while acquiring approximately twenty seconds of data. Imaging of the carotid artery was performed by a vascular surgeon. The acquired raw PA data was corrected for fluctuations in laser pulse energy and then beamformed using the commonly used Delay and Sum (DAS) algorithm[4] with Hanning apodization.
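For illustration, a minimal single-frame DAS sketch with Hanning apodization and one-way (PA) time of flight; the experiments used MITK's implementation[14], not this code:

    import numpy as np

    def das_beamform(channel_data, elem_x, fs, c, z_grid, x_grid):
        # channel_data: (n_elements, n_samples) raw PA channel data.
        # elem_x: element x-positions (m); fs: sampling rate (Hz);
        # c: speed of sound (m/s); z_grid/x_grid: image coordinates (m).
        n_elem, n_samp = channel_data.shape
        apod = np.hanning(n_elem)  # Hanning apodization across the aperture
        image = np.zeros((len(z_grid), len(x_grid)))
        elem_idx = np.arange(n_elem)
        for zi, z in enumerate(z_grid):
            for xi, x in enumerate(x_grid):
                # One-way time of flight from (x, z) to each element.
                delays = np.sqrt(z ** 2 + (elem_x - x) ** 2) / c
                idx = np.round(delays * fs).astype(int)
                valid = idx < n_samp
                image[zi, xi] = np.sum(
                    apod[valid] * channel_data[elem_idx[valid], idx[valid]])
        return image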
4 Results
During the acquisition of the presented data, the user had a real-time view of both US and PA images at frame rates of 13 to 20 Hz. The fast tuned PA acquisition wavelength sequence of the OPO laser was measured with an accuracy of 1.5 nm. In the following we show oxygenation measurements in peripheral vessels and the effect of motion correction on sO2 estimation in a carotid artery.
4.1 Blood oxygenation in peripheral vessels
Fig. 3 shows a representative example of sO2 estimation in the radial artery and an accompanying vein. sO2 is visualized by thresholding on the total hemoglobin from the unmixing results. The average sO2 in the radial and ulnar arteries was averaged over the mean oxygenation estimates in regions of interest as marked in Fig. 3; the radial artery was scanned twice, as was the ulnar artery. The sO2 in the accompanying vein of the radial artery was averaged over the mean oxygenation estimates of two scans of the same healthy human volunteer.

4.2 Motion correction and blood oxygenation estimation in the carotid artery
Estimating the optical flow for the four US images of a sequence relative to the first US image of that sequence, and warping the acquired PA and US images accordingly, took 120 ms on average over the sequences acquired in one scan, in Python on a single core (2.6 GHz Intel Core i5-3230M). Fig. 4 illustrates the position of the upper arterial wall in the US images of the acquisition sequences before and after application of our optical flow based method by plotting the position of the maximum intensity pixel in the center of the upper arterial wall.

Fig. 5 shows a section of the carotid artery and its corresponding oxygenation estimate. The oxygenation was estimated on the motion corrected stack of PA images. The average sO2 in the carotid artery and the jugular vein is denoted on the regions of interest shown as red and blue boxes. An example of the artifacts resulting from spectral unmixing of sequences not corrected for motion is shown in Fig. 6.


We measured the oxygen saturation on image sequences of the carotid artery. Estimating sO2 on images uncorrected for motion yielded a mean arterial value mean(SaO2) of 59 % when averaging over the pixels above the THb threshold. After optical flow in sequence (ofis) motion correction, the measured SaO2 over the thresholded pixels was higher and its standard deviation std(SaO2) was lower. The venous estimate SvO2 in the accompanying jugular vein changed accordingly with motion correction. The distributions of sO2 over all pixels are shown in the violin plots of Fig. 6. As some of these are not normal distributions, we also determined their interquartile ranges (IQR): motion correction reduced both IQR(SaO2) for arterial blood and IQR(SvO2) for venous blood.

5 Discussion
In this paper we present a real-time PAUS imaging platform with a linear array probe using a fast tuning laser. Within the open-source software platform MITK we provide implementations for direct control of the US DAQ and laser systems and their components. The presented software components are highly reusable and extensible due to the modular architecture of MITK. This should make it possible to use the project as a starting point for translational research projects with PA imaging. For each specific laser or DAQ system, only the hardware layer would have to be extended to comply with new APIs.
Furthermore, we provide a method for correcting inter-frame motion as encountered in the clinical sample application we presented as a demonstration use case of our system. In this application we imaged the carotid artery of a healthy human volunteer and were faced with problematic inter-frame motion due to the pulsing artery. To address this issue, we presented a new method for motion correction of multispectral PA image sequences using optical flow in sequence (ofis) on corresponding US images. Run time measurements show the real-time capability of the method: motion correction of a sequence of five corresponding PA images is performed in 120 ms by our Python implementation running single threaded on a consumer CPU core, while the sequence acquisition itself takes 250 ms with our fast tuning OPO. The motion correction method succeeds in reducing motion artifacts, and the sO2 estimates with motion correction applied make it possible to clearly distinguish between arterial and venous blood. The variation of the estimated sO2 is reduced, as shown by the drop in standard deviation when correcting for motion, illustrated in Fig. 6. This shows a higher precision of sO2 estimation by spectral unmixing when correcting for inter-frame motion. To our knowledge, this is the first inter-frame motion correction approach using corresponding US images or optical flow.
While the sO2 estimates with motion correction are more consistent and closer to the physiological values in a healthy human[24], our data shows a systematic underestimation of oxygenation. This is also the case in more superficial vessels, but to a lesser degree. We attribute this underestimation mainly to (1) fluence effects due to a high overall oxygenation in tissue, which should be addressed with quantitative approaches[20, 25], and to a lesser degree to (2) noise.
We have presented a starting point for translational PAUS imaging research integrated into the open-source MITK platform. We validated its performance on a clinical sample application, where we were able to show that we can image the carotid artery multispectrally by correcting for inter-frame motion using an optical flow based approach.
Acknowledgements.
The authors would like to acknowledge support from the European Union through the ERC starting grant COMBIOSCOPY under the New Horizon Framework Programme, grant agreement ERC-2015-StG-37960. We would also like to thank the MITK team for providing the open-source development and testing infrastructure used in this project, as well as E. Stenau for fruitful discussions of motion correction.

References
- [1] Taruttis, A. and Ntziachristos, V., “Advances in real-time multispectral optoacoustic imaging and its applications,” Nature Photonics 9(4), 219–227 (2015).
- [2] Gerling, M., Zhao, Y., Nania, S., Norberg, K. J., Verbeke, C. S., Englert, B., Kuiper, R. V., Bergström, Å., Hassan, M., Neesse, A., et al., “Real-time assessment of tissue hypoxia in vivo with combined photoacoustics and high-frequency ultrasound,” Theranostics 4(6), 604 (2014).
- [3] Mohajerani, P., Tzoumas, S., Rosenthal, A., and Ntziachristos, V., “Optical and optoacoustic model-based tomography: theory and current challenges for deep tissue imaging of optical contrast,” IEEE Signal Processing Magazine 32(1), 88–100 (2015).
- [4] Kim, J., Park, S., Jung, Y., Chang, S., Park, J., Zhang, Y., Lovell, J. F., and Kim, C., “Programmable real-time clinical photoacoustic and ultrasound imaging system,” Scientific reports 6 (2016).
- [5] Nolden, M., Zelzer, S., Seitel, A., Wald, D., Müller, M., Franz, A. M., Maleike, D., Fangerau, M., Baumhauer, M., Maier-Hein, L., Maier-Hein, K. H., Meinzer, H. P., and Wolf, I., “The medical imaging interaction toolkit: challenges and advances,” International Journal of Computer Assisted Radiology and Surgery 8, 607–620 (Jul 2013).
- [6] Farnebäck, G., “Two-frame motion estimation based on polynomial expansion,” Image analysis 1, 363–370 (2003).
- [7] American National Standards Institute, [American National Standard for Safe Use of Lasers ], Laser Institute of America (2007).
- [8] Ibanez, L., Schroeder, W., Ng, L., and Cates, J., “The itk software guide,” (2005).
- [9] Schroeder, W. J., Lorensen, B., and Martin, K., [The visualization toolkit: an object-oriented approach to 3D graphics ], Kitware (2004).
- [10] Culjak, I., Abram, D., Pribanic, T., Dzapo, H., and Cifrek, M., “A brief introduction to opencv,” in [MIPRO, 2012 proceedings of the 35th international convention ], 1725–1730, IEEE (2012).
- [11] Guennebaud, G., Jacob, B., et al., “Eigen v3.” http://eigen.tuxfamily.org (2010).
- [12] Stone, J. E., Gohara, D., and Shi, G., “Opencl: A parallel programming standard for heterogeneous computing systems,” Computing in science & engineering 12(3), 66–73 (2010).
- [13] Klemm, M., Kirchner, T., Gröhl, J., Cheray, D., Nolden, M., Seitel, A., Hoppe, H., Maier-Hein, L., and Franz, A. M., “Mitk-openigtlink for combining open-source toolkits in real-time computer-assisted interventions,” International journal of computer assisted radiology and surgery 12(3), 351–361 (2017).
- [14] Kirchner, T., Sattler, F., Gröhl, J., and Maier-Hein, L., “Signed real-time delay multiply and sum beamforming for multispectral photoacoustic imaging,” Journal of Imaging 4(10) (2018).
- [15] März, K., Franz, A. M., Seitel, A., Winterstein, A., Bendl, R., Zelzer, S., Nolden, M., Meinzer, H.-P., and Maier-Hein, L., “Mitk-us: real-time ultrasound support within mitk,” International journal of computer assisted radiology and surgery 9(3), 411–420 (2014).
- [16] Hunder, G. G., Bloch, D. A., Michel, B. A., Stevens, M. B., Arend, W. P., Calabrese, L. H., Edworthy, S. M., Fauci, A. S., Leavitt, R. Y., Lie, J. T., Lightfoot, R. W., Masi, A. T., McShane, D. J., Mills, J. A., Wallace, S. L., and Zvaifler, N. J., “The American College of Rheumatology 1990 criteria for the classification of giant cell arteritis,” Arthritis & Rheumatism 33, 1122–1128 (Aug. 1990).
- [17] Kim, M., Kang, J., Chang, J. H., Song, T.-K., and Yoo, Y., “Image quality improvement based on inter-frame motion compensation for photoacoustic imaging: A preliminary study,” in [Ultrasonics Symposium (IUS), 2013 IEEE International ], 1528–1531, IEEE (2013).
- [18] Willemink, R., Slump, C., and van der Heijden, F., [On image quality enhancement in photoacoustic image reconstruction by motion compensation ], 216–222, Technology Foundation (STW) (11 2006). http://eprints.ewi.utwente.nl/8919.
- [19] Xia, J., Chen, W., Maslov, K., Anastasio, M. A., and Wang, L. V., “Retrospective respiration-gated whole-body photoacoustic computed tomography of mice,” Journal of biomedical optics 19(1), 016003–016003 (2014).
- [20] Tzoumas, S., Nunes, A., Olefir, I., Stangl, S., Symvoulidis, P., Glasl, S., Bayer, C., Multhoff, G., and Ntziachristos, V., “Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues,” Nature communications 7 (2016).
- [21] Luke, G. P., Nam, S. Y., and Emelianov, S. Y., “Optical wavelength selection for improved spectroscopic photoacoustic imaging,” Photoacoustics 1(2), 36 – 42 (2013).
- [22] Lawson, C. L. and Hanson, R. J., [Solving least squares problems ], SIAM (1995).
- [23] Horn, B. K. and Schunck, B. G., “Determining optical flow,” Artificial intelligence 17(1-3), 185–203 (1981).
- [24] Zander, R., “The oxygen status of arterial human blood,” Scandinavian Journal of Clinical and Laboratory Investigation 50(sup203), 187–196 (1990).
- [25] Kirchner, T., Gröhl, J., and Maier-Hein, L., “Context encoding enables machine learning-based quantitative photoacoustics,” Journal of Biomedical Optics 23, 1–9 (May 2018).