You notice a building out of the corner of your eye, but something is off: it looks misplaced. You walk closer and notice cracks in some windows. A few more steps, and you come face-to-face with a splatter of stains on the tiles around the windows. To better inspect each feature of the structure, we intuitively zoom in by walking closer. But what happens when we swap out our looming building for something much smaller? For instance, consider the human eye.
Like the building’s range of imperfections, eye imaging defects can occur at every level: an uncentered image, vessels that appear detached, and noise around minuscule capillaries. These defects challenge the otherwise promising optical coherence tomography angiography (OCTA), a technique that separately zooms in on each layer of the retina. In his recent publication in Scientific Reports, researcher Rahul M. Dhodapkar of the Department of Ophthalmology at the Yale School of Medicine developed a deep learning model to accurately classify whether medical images are of high enough quality for different clinical use cases despite such defects. The model was trained on 347 scans collected by the Yale Eye Center from 134 patients, representing a diverse group with and without retinal disease.
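The paper itself uses a deep neural network trained on expert-labeled scans. As a loose, hypothetical illustration of the underlying idea of an automated quality gate, the sketch below scores a grayscale scan on two hand-crafted cues (sharpness via a discrete Laplacian, and mean brightness) and flags it as gradable only if both clear a threshold. The function names and thresholds are invented for this example and are not the authors' method:

```python
import random
import statistics

def quality_features(image):
    """Return two crude quality cues for a grayscale scan (a list of rows):
    sharpness (variance of a discrete Laplacian) and mean brightness."""
    h, w = len(image), len(image[0])
    # Discrete Laplacian at every interior pixel; flat regions give values near zero.
    lap = [
        -4 * image[i][j]
        + image[i - 1][j] + image[i + 1][j]
        + image[i][j - 1] + image[i][j + 1]
        for i in range(1, h - 1)
        for j in range(1, w - 1)
    ]
    flat = [p for row in image for p in row]
    return statistics.pvariance(lap), statistics.fmean(flat)

def is_gradable(image, sharpness_min=0.01, brightness_min=0.1):
    """Flag a scan as usable only when both cues clear their thresholds."""
    sharpness, brightness = quality_features(image)
    return sharpness > sharpness_min and brightness > brightness_min

# A textured, well-exposed "scan" should pass; a flat, dark one should fail.
random.seed(0)
good = [[random.uniform(0.2, 0.8) for _ in range(32)] for _ in range(32)]
bad = [[0.0] * 32 for _ in range(32)]
print(is_gradable(good), is_gradable(bad))  # True False
```

In practice, a learned model replaces the hand-tuned thresholds, but the decision it must make at capture time (retake or keep) is the same binary gate.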
“An important thing about research in medicine [is that] you have to always remember why you’re doing it,” Dhodapkar said. In this case, image defects may force patients to undergo imaging multiple times, but not everyone lives near an academic center or hospital where repeated imaging is accessible. “We can make specialized models for each clinical purpose and integrate them directly into the image capture technology,” Dhodapkar said. These advancements would give technicians real-time feedback on whether to retake an image. Beyond its immediate findings, this research provides a framework for improving how diverse communities receive diagnoses and treatment in the medical system.