Tools & Techniques Imaging

On the Right Track

Age-related macular degeneration (AMD) is a huge issue for eyecare professionals – and one that will only grow with time. Wet AMD accounts for a large proportion of outpatient hospital activity and is a major area of interest in ophthalmology, from both a clinical and an academic viewpoint. But wet AMD is not the disease's only vision-threatening form: dry AMD, whose advanced stage is geographic atrophy (GA), still has no effective treatment. Patients are left in a hopeless situation, and ophthalmologists can only monitor the gradual progression of visual impairment. We are now in an era of intensive research into GA treatments, and a number of molecules show promise – some already in late-stage clinical trials. Beyond the lack of available treatments, there is also the challenge of detecting GA at its early stages, and quantifying disease progression is particularly difficult.

This vast unmet need inspired me to look at this condition in more depth, including disease management and its effect on capacity pressures in the UK’s already overburdened National Health Service (NHS). Did you know that ophthalmology accounts for 10 percent of all outpatient activity in the NHS – more than any other medical specialty?

Our research team is considering how automation through AI could help alleviate the burden for patients, doctors, and the healthcare system. Specifically, we have developed a deep learning algorithm for the detection and quantification of GA that performs a specialist assessment in a fully automated manner (1). Using OCT images, the algorithm tracks disease progression in just two seconds and brings consistency and repeatability to an assessment that suffers from high interobserver variability – even between experts. Precisely quantifying GA progression means patients can be characterized earlier and more fully, maximizing the window of opportunity for therapeutics. It could also improve patient selection for clinical trials. A major challenge for eye departments that store large databases of OCTs is quickly identifying patients who are eligible for a clinical trial, so they tend to rely on traditional, labor-intensive manual screening of clinical and imaging records. Imagine how valuable it could be to apply an AI model to an entire OCT database within a local hospital environment, identifying cases of GA and then selecting patients whose stage of progression is most suitable for a particular clinical trial.
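To make that screening idea concrete, here is a minimal, hypothetical sketch in Python. The function `predict_ga_area_mm2` is a stand-in for a real model inference call, and the eligibility thresholds are invented for illustration rather than taken from any trial protocol.

```python
# Hypothetical sketch of AI-assisted trial screening over an OCT
# database. `predict_ga_area_mm2` is a placeholder for a real model
# inference call; the area thresholds are illustrative only.
def predict_ga_area_mm2(scan: dict) -> float:
    """Stand-in for running the GA model on one OCT volume."""
    return scan["ga_area"]

def screen_for_trial(scans, min_area=2.5, max_area=17.5):
    """Return patient IDs whose predicted GA lesion size falls
    inside the trial's inclusion window (square millimetres)."""
    return [s["patient_id"] for s in scans
            if min_area <= predict_ga_area_mm2(s) <= max_area]

database = [
    {"patient_id": "P001", "ga_area": 1.2},   # lesion too small
    {"patient_id": "P002", "ga_area": 6.8},   # within the window
    {"patient_id": "P003", "ga_area": 21.0},  # too advanced
]
print(screen_for_trial(database))  # ['P002']
```

In a real deployment, the inference call would process each stored OCT volume and the thresholds would come from the trial protocol; the filtering logic itself stays this simple.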

Why AMD? The figures tell the story…

AMD is the leading cause of vision loss in the developed world, with an estimated 1.9 million people living with vision impairment or blindness in the UK alone. This is projected to increase to 4 million by the year 2050, as a consequence of an aging population, degenerative disease, and lifestyle factors. Between 2008 and 2013, the prevalence of AMD in the UK increased by 40 percent – an extraordinary increase.

Dry AMD versus AI

In wet AMD, there is usually an episode of bleeding or swelling at the back of the eye, which causes a significant and noticeable change in vision. In most cases, this alarms patients and prompts them to seek immediate medical assistance and treatment to prevent or reverse progression. GA, on the other hand, progresses slowly over time, causing areas of wear and tear at the back of the eye that grow at an unpredictable pace – in a sense, creeping up on the patient. Patients tend not to seek expert advice early because they often don't realize a problem has developed until a lot of damage has already been done. Consequently, there is an obvious need to improve dry AMD and GA screening and tracking, and to objectively monitor the progression of affected retinal areas – a very challenging task.

The imaging modality of choice for both types of AMD, and for many other retinal conditions, is OCT. Tracking GA lesion progression over time with 3D OCT scans requires manual delineation and segmentation of complex scans that often consist of more than 100 individual images. Manually processing all of these images would be unrealistic in real-life clinical practice. It would therefore be infeasible to consistently and accurately monitor the behavior of the disease and assess the efficacy of any potential treatment. Hence, this process is a prime candidate for automation.

Our automated method uses deep learning (a form of AI) to quickly quantify retinal atrophy over consecutive patient visits, offering an accurate representation of the stage and extent of atrophic dry AMD and helping the ophthalmologist decide whether treatment is needed and, if so, how well the patient is responding. When decisions are based on standardized, reliable, repeatable quantification of the area of atrophy, we expect better clinical outcomes from emerging treatments. For patients, it also means shorter visits and less time spent in crowded hospital waiting rooms. When we were developing this AI system, the average time required for an expert human grader to segment a geographic atrophy lesion on OCT was 43 minutes; our deep learning model achieves the same task in 2.04 seconds, on average.
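For a sense of what quantifying atrophy over consecutive visits means computationally, the sketch below converts a binary segmentation mask into a lesion area and compares two visits. The mask shapes and the pixel spacing are invented for illustration, not a real scanner's calibration.

```python
import numpy as np

# Assumed en-face pixel spacing (mm per pixel in each direction);
# real values depend on the OCT device and scan protocol.
PIXEL_AREA_MM2 = 0.012 * 0.047

def atrophy_area_mm2(mask: np.ndarray) -> float:
    """Sum the atrophic pixels (mask value 1) and convert to mm^2."""
    return float(mask.sum()) * PIXEL_AREA_MM2

# Toy masks for two visits: the lesion enlarges at follow-up.
baseline = np.zeros((496, 512), dtype=np.uint8)
baseline[200:260, 200:280] = 1
follow_up = np.zeros((496, 512), dtype=np.uint8)
follow_up[190:270, 190:290] = 1

growth = atrophy_area_mm2(follow_up) - atrophy_area_mm2(baseline)
print(f"baseline {atrophy_area_mm2(baseline):.2f} mm^2, "
      f"growth {growth:.2f} mm^2")
```

Once the segmentation itself is automated, visit-to-visit comparison reduces to this kind of bookkeeping – which is exactly why consistency of the underlying masks matters so much.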

At the London City Road campus of Moorfields Eye Hospital, we administer more than 100 intravitreal injections for the treatment of wet AMD every day. Although treatment decisions for wet AMD are primarily guided by OCT scans, these are mostly assessed qualitatively: just by looking at the scan, we can tell whether the condition is getting worse, better, or staying stable. In GA, the situation is very different. Disease progression is too subtle, and the changes too slow, for human experts to visually track with the precision needed to discern short-term progression. Hypothetically, if a GA treatment became available and the patient population requiring intravitreal injections doubled, the need for an automated, fast, and accurate monitoring system would become universal. Not only would this help physicians and patients jointly decide whether treatment is warranted, but also when and how frequently it should be given.

Inside the algorithm

To explain how the algorithm works, we should go back to the basics of deep learning. The algorithms are exposed to a large number of examples – in this case, retinal OCT images – and start to recognize patterns. They are then able to recognize similar patterns in images they have not seen before. My team used data from a phase II clinical trial – the FILLY study, which assessed the efficacy of a novel GA therapeutic. We manually graded multiple images from volume OCT scans – approximately 6,000 individual images in total – and used them to train the deep learning model for the detection and monitoring of retinal atrophy. The algorithm was then put through a rigorous testing phase on a distinctly different data set it had not previously seen. In our case, the performance of the algorithm – expressed by a statistical metric called the Dice similarity coefficient – was 0.96, which is extremely high.
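The Dice similarity coefficient measures the overlap between two segmentations – here, the model's prediction and the expert's grading – as 2·|A∩B| / (|A| + |B|), where 1.0 means perfect agreement. A minimal sketch with toy masks:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity: 2*|A∩B| / (|A|+|B|) for binary masks."""
    intersection = np.logical_and(pred, truth).sum()
    total = int(pred.sum()) + int(truth.sum())
    return 2.0 * intersection / total if total else 1.0

# Toy example: the predicted lesion is slightly smaller than the
# expert-graded one, so the score falls just short of 1.0.
truth = np.zeros((64, 64), dtype=np.uint8)
truth[16:48, 16:48] = 1
pred = np.zeros((64, 64), dtype=np.uint8)
pred[18:48, 16:48] = 1

print(f"Dice: {dice_score(pred, truth):.3f}")  # Dice: 0.968
```

Because the score penalizes both missed atrophy and over-segmentation symmetrically, a value of 0.96 on unseen data indicates near-expert agreement.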

It is relatively easy for an AI model to perform well when exposed to data from patients with similar traits to those it was trained on. The real test comes when it is exposed to a data set from a completely distinct healthcare setting. That is why it is important to note that our OCT algorithm was tested against a data set from a patient population entirely distinct from the training population, both geographically and temporally. (The FILLY study enrolled patients from the US, Australia, and New Zealand, whereas our external validation data set came from Moorfields Eye Hospital patients.)

But there are still challenges ahead. An AI model alone is a sophisticated piece of code with no clinical utility in isolation. Implementation requires further infrastructure development – in particular, a user interface that allows clinicians to import OCT scans and presents the AI output to clinicians and researchers in a digestible, user-friendly format. Cloud-based deployment and information security are additional priority areas for consideration. Ahead of clinical implementation, approval from the Medicines and Healthcare products Regulatory Agency (MHRA) – and potentially other regulatory authorities – is an essential prerequisite.

The system could also be applied to other ophthalmic disease areas. In essence, we developed four different deep learning models, one for each retinal layer that is a constituent feature of GA. As a by-product, we can now tune these models to detect changes in individual layers that have distinct profiles of impairment or degeneration, particularly in inherited eye diseases.
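To illustrate the per-layer idea, the sketch below combines four hypothetical per-layer degeneration masks into a composite GA mask with a simple logical AND. The combination rule is illustrative only – the published model's rule may differ – and the masks are random toy data.

```python
import numpy as np

def combine_layer_masks(masks):
    """Composite mask: flag a pixel only if every per-layer model
    marks it as degenerated (illustrative AND rule)."""
    composite = masks[0].astype(bool)
    for m in masks[1:]:
        composite &= m.astype(bool)
    return composite

rng = np.random.default_rng(0)
# Four toy per-layer outputs (1 = layer loss at that location).
layer_masks = [(rng.random((64, 64)) > 0.3).astype(np.uint8)
               for _ in range(4)]
ga_mask = combine_layer_masks(layer_masks)
print(int(ga_mask.sum()), "pixels flagged in all four layers")
```

Because each per-layer mask is retained, the same outputs can also be inspected individually – for example, to follow a single layer's degeneration profile in an inherited retinal disease.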

On the horizon

A common question AI experts are asked is, "How soon do you think we will have it in our clinic?" People used to be conservative with their answers but, as the field develops, we are becoming more optimistic. Now that we are able to develop such models in house using our own resources, I am more positive about the future of point-of-care AI-based decision support systems. It might still be a matter of a few years, but it is no longer in the distant future – for this clinical application of AI decision support in retinal disease, at least. We can also envisage a longer-term future in which AI processes high-dimensional data: multimodal imaging, but also clinical data, genetics, proteomics, and more. With the increasing importance of the Internet of Things, data from remote monitoring devices will eventually be integrated and processed through AI systems, with the overarching objective of informing personalized treatment plans best suited to the needs of each individual patient.


  1. G Zhang et al., “Clinically relevant deep learning for detection and quantification of geographic atrophy from optical coherence tomography: a model development and external validation study,” Lancet Digit Health, 3, e655 (2021). PMID: 34509423.
About the Author
Konstantinos Balaskas

Consultant Ophthalmologist at Moorfields Eye Hospital, London, specialising in Medical Retina, and an Associate Professor at the UCL Institute of Ophthalmology. Dr Balaskas is Director of the Moorfields Ophthalmic Reading Centre & Clinical Artificial Intelligence Lab (MORC-AI), a research centre specialising in ophthalmic imaging, digital medicine, and artificial intelligence.
