Title
Photoacoustic image guidance and robotic visual servoing to mitigate fluoroscopy during cardiac catheter interventions
Date Issued
01 January 2020
Access level
metadata only access
Resource Type
conference paper
Author(s)
Graham M.T.
Assis F.
Allman D.
Wiacek A.
Gubbi M.R.
Dong J.
Hou H.
Beck S.
Chrispin J.
Lediju Bell M.A.
Johns Hopkins University
Publisher(s)
SPIE
Abstract
Many cardiac interventional procedures (e.g., radiofrequency ablation) require fluoroscopy to navigate catheters through veins toward the heart. However, this image guidance method lacks depth information and increases the risk of radiation exposure for both patients and operators. To overcome these challenges, we developed a robotic visual servoing system that maintains visualization of segmented photoacoustic signals from a cardiac catheter tip. This system was tested in two in vivo cardiac catheterization procedures, with ground-truth position information provided by fluoroscopy and electromagnetic tracking. The 1D root mean square localization errors within the vein ranged from 1.63 to 2.28 mm for the first experiment and from 0.25 to 1.18 mm for the second experiment. The 3D root mean square localization error for the second experiment ranged from 1.24 to 1.54 mm. The mean contrast of photoacoustic signals from the catheter tip ranged from 29.8 to 48.8 dB when the catheter tip was visualized in the heart. These results indicate that robotic photoacoustic imaging is a promising alternative to fluoroscopic guidance, as it provides depth information for cardiac interventions and enables enhanced visualization of catheter tips within the beating heart.
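Note on reported metrics: the abstract quotes root mean square (RMS) localization errors and photoacoustic signal contrast in dB. The minimal Python sketch below illustrates how such metrics are commonly computed; the function names, the 20*log10 contrast definition, and the synthetic data are illustrative assumptions and are not the authors' actual processing pipeline.

import numpy as np

def rms_localization_error(estimates, ground_truth):
    # RMS error between catheter-tip position estimates and a ground-truth
    # track (e.g., fluoroscopy or electromagnetic tracking), in matching units.
    # Accepts 1D arrays of shape (N,) or 3D arrays of shape (N, 3).
    diff = np.asarray(estimates, float) - np.asarray(ground_truth, float)
    per_sample = np.abs(diff) if diff.ndim == 1 else np.linalg.norm(diff, axis=1)
    return np.sqrt(np.mean(per_sample ** 2))

def contrast_db(signal_roi, background_roi):
    # Contrast in dB between a catheter-tip region and a background region of
    # an envelope-detected photoacoustic image (assumed definition:
    # 20*log10 of the ratio of mean ROI amplitudes).
    return 20.0 * np.log10(np.mean(signal_roi) / np.mean(background_roi))

# Hypothetical usage with synthetic tracks (positions in mm):
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0.0, 0.5, size=(100, 3)), axis=0)
estimates = truth + rng.normal(0.0, 1.0, size=truth.shape)
print(f"3D RMS localization error: {rms_localization_error(estimates, truth):.2f} mm")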
Volume
11229
Language
English
OCDE Knowledge area
Medical biotechnology
Scopus EID
2-s2.0-85083431078
Resource of which it is part
Progress in Biomedical Optics and Imaging - Proceedings of SPIE
ISBN of the container
9781510632219
Conference
Advanced Biomedical and Clinical Diagnostic and Surgical Guidance Systems XVIII 2020, San Francisco, 2 February 2020 through 4 February 2020
Sponsor(s)
This work was funded by NSF CAREER Award ECCS 1751522 (awarded to M.A.L.B.) and in part by NSF Graduate Research Fellowship DGE174689 (awarded to M.T.G.). The authors thank Sarah Fink, Theron Palmer, Rene Lopez, Brooke Stephanian, Jessica Hsu and Joanna Guo for their assistance during experiments. We also acknowledge support from the Carnegie Center for Surgical Innovation. In addition, we acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
Sources of information: Directorio de Producción Científica Scopus