New research at the Scientific Computing and Imaging Institute at the University of Utah uses artificial intelligence to help precisely target tumors in lung cancer patients. Employing algorithms developed by Sarang Joshi, professor of Biomedical Engineering, and graduate student Markus Foote, in collaboration with Amit Sawant at the University of Maryland, Baltimore, radiation oncologists will be able to predict the movement of lung tumors in real time as the patient breathes, using a 3D motion model.
Joshi and Foote’s computational models allow doctors to see the location and shape of the tumor before they administer treatment. This aids in planning where to target the radiation dose so it is directed solely at the cancerous tissue, sparing healthy tissue and vital organs. Knowing the exact location, doctors can deliver a higher radiation dose to the tumor, which shortens treatment time and reduces negative side effects.
The treatment visit itself will also be shorter because doctors can see almost instantly where the tumor moves as the patient breathes. Current approaches make radiation treatment an hours-long ordeal, with much of that time spent waiting between the patient’s breaths for the tumor to reach its expected position before radiation is delivered.
Joshi and Foote used deep convolutional neural networks, a form of deep learning, to analyze lung cancer patients’ CT images and build patient-specific breathing models. By training a computer to recognize and analyze the imaging data of individual patients, doctors will be able to design patient-specific treatment regimens much more quickly than is possible today.
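The article does not give implementation details of the deep-learning model, but the underlying idea, a patient-specific model that maps a breathing signal to the tumor's 3D displacement, can be sketched with a much simpler stand-in: a least-squares fit in place of the learned network. All names and data below are hypothetical and purely illustrative.

```python
import numpy as np

# Hypothetical training data: a respiratory surrogate signal (e.g. chest
# marker amplitude) paired with 3D tumor displacements observed across
# 4D-CT breathing phases. Values are illustrative, not real patient data.
surrogate = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # normalized breathing amplitude
tumor_xyz = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.5, 2.0],
                      [2.0, 1.0, 4.0],
                      [3.0, 1.5, 6.0],
                      [4.0, 2.0, 8.0]])              # displacement in mm

# Fit a linear correspondence model per axis: displacement = a * s + b.
# (A deep network would replace this step with a learned nonlinear mapping.)
A = np.stack([surrogate, np.ones_like(surrogate)], axis=1)
coeffs, *_ = np.linalg.lstsq(A, tumor_xyz, rcond=None)   # coeffs has shape (2, 3)

def predict_displacement(s):
    """Predict the 3D tumor displacement (mm) for surrogate amplitude s."""
    return coeffs[0] * s + coeffs[1]

print(predict_displacement(0.6))   # mid-breath position estimate
```

Once such a model is fitted for a patient, the real-time breathing signal measured during treatment can be fed through it continuously, which is what lets the radiation beam track the tumor instead of waiting for it to return to a fixed position.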