| Qualification Type: | PhD |
|---|---|
| Location: | Exeter |
| Funding for: | UK Students, EU Students, International Students |
| Funding amount: | £20,780 per year. Payment of tuition fees (Home), Research Training Support Grant £5,000 over 3.5 years |
| Hours: | Full Time |
| Placed On: | 25th November 2025 |
| Closes: | 12th January 2026 |
| Reference: | 5732 |
Project details:
Background: Diabetic Retinopathy (DR) is the leading cause of blindness in working-age people. All people with diabetes aged 12 and over are recommended annual retinal screening, which requires attending an ophthalmology clinic. This produces over a billion retinal images every year. However, about 50% of diabetic adults do not receive this recommended annual screening, and lower-middle-income countries have screening rates below 10%. Factors contributing to low screening rates include shortages of trained expert ophthalmologists, lack of facilities in rural areas and the inability of patients to attend clinics in person.
Aims: The project aims to develop and evaluate AI methods for medical image analysis to detect diabetic retinopathy, glaucoma, cataract and age-related macular degeneration (AMD). As well as retinal fundus images, we will explore analysis of new eye image datasets, including optical coherence tomography angiography (OCTA) and corneal confocal microscopy (CCM) images, for the diagnosis of diabetic neuropathy.
Machine Learning: We will develop artificial intelligence (AI), machine learning (ML), deep learning (DL) and data science methods for medical image analysis, to autonomously grade fundus images from large datasets. This will be supported by Professor Neil Vaughan and Professor Xujiong Ye, with input from members of our medical image analysis labs. This will enable the assessment of ML approaches for image grading and their application to grading retinopathy severity levels 0-4 (None, Mild, Moderate, Severe, Proliferative). We will also focus on optimising training for explainability, to ensure the reasons for a diagnosis are understandable to clinicians and patients. The ML will use 500,000 fundus images from open-source and customised retinopathy datasets. We will compare retinopathy grading accuracy between NHS clinicians and the ML algorithm. This will build on Exeter's reviews of DL retinopathy image analysis, which found that ML and DL retinopathy grading systems have the potential to be more sensitive than human graders, and will develop evidence that this approach could be safe to use in clinical practice.
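Clinician-versus-algorithm comparisons on ordinal severity grades (0-4) are commonly scored with quadratic weighted kappa, the standard agreement metric for DR grading. A minimal pure-Python sketch of this metric is below; it is an illustration under standard definitions, not the project's actual evaluation code:

```python
def quadratic_weighted_kappa(rater_a, rater_b, n_classes=5):
    """Agreement between two graders on ordinal labels 0..n_classes-1.

    1.0 means perfect agreement; 0.0 means chance-level agreement.
    Disagreements are penalised by the squared distance between grades,
    so confusing 'None' (0) with 'Proliferative' (4) costs far more
    than confusing 'Mild' (1) with 'Moderate' (2).
    """
    n = len(rater_a)
    # Observed confusion matrix between the two graders.
    observed = [[0.0] * n_classes for _ in range(n_classes)]
    for a, b in zip(rater_a, rater_b):
        observed[a][b] += 1
    # Marginal totals, used to build the chance-expected matrix.
    row = [sum(observed[i]) for i in range(n_classes)]
    col = [sum(observed[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic weight
            num += w * observed[i][j]
            den += w * row[i] * col[j] / n
    return 1.0 - num / den

# Perfect agreement between grader and algorithm yields kappa = 1.0
print(quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4]))
```

In practice a library implementation (e.g. scikit-learn's `cohen_kappa_score` with `weights='quadratic'`) would typically be used; the sketch shows the computation itself.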
Datasets Available: We have gained unique access to a number of large fundus image datasets that will be used:
- EyePACS Dataset: over 5 million retinal images from diverse populations with varying degrees of diabetic retinopathy.
- UK Biobank Fundus Dataset: 175,824 fundus images collected from 85,848 participants.
- INSIGHT Data: from two NHS Trusts, University Hospitals Birmingham (UHB) and Moorfields Eye Hospital, including over 35 million retinal images comprising OCT scans and fundus retinal photographs.
- HVDROPDB Dataset: collected through international collaboration with leading institutes in India gathering retinal image data in rural Indian communities.
- North Devon and Royal Devon University Healthcare NHS Foundation Trust Data: our co-supervisor from RDUH NHS runs eye clinics within the NHS and regularly screens patients.
Supervisory team: Our established team consists of experts in academic medical image analysis, an NHS ophthalmologist and external partners.
Benefits: Automating screening with ML can enable more regular eye screening for patients. This will reduce undetected cases of retinopathy and enable retinal issues to be detected earlier, reducing the chance that irreversible damage occurs before retinopathy is detected. It will also reduce costs for the NHS and the number of in-person appointments.
Please direct project-specific enquiries to: Professor Neil Vaughan (n.vaughan@exeter.ac.uk), Office L03.04, RILD Building, Barrack Road, Exeter, EX2 5DW, UK. Telephone 0044 7783527327. Please ensure you read the entry requirements for the programme you are applying for.