| Qualification Type: | PhD |
|---|---|
| Location: | Newcastle upon Tyne |
| Funding for: | UK Students, EU Students, International Students |
| Funding amount: | Not Specified |
| Hours: | Full Time |
| Placed On: | 5th February 2026 |
| Closes: | 15th February 2026 |
Award Summary
100% of fees covered and a minimum tax-free annual living allowance of £20,780 (2025/26 UKRI rate). Additional project costs will also be provided.
Overview
Research Project
Critical infrastructure systems, such as power grids, transportation networks, and water treatment plants, increasingly rely on AI-driven decision-making for efficiency and autonomy. However, these systems face unique safety challenges. Real-world conditions, including weather, cyberattacks, and equipment degradation, are unpredictable, causing AI behaviors to deviate from lab-tested performance.
Current digital twin technologies focus on predictive maintenance and optimization but lack frameworks to continuously verify AI safety in operational contexts. This project aims to develop a dynamic validation framework for AI systems using high-fidelity digital twins, enabling real-time stress-testing under simulated edge cases like cyber-physical attacks and sensor failures.
The Research Challenges
A complex interplay of factors presents challenges in ensuring the resilience of AI in critical infrastructure.
The proposed framework addresses these challenges by designing resilience metrics that quantify AI safety, focusing on robustness, recoverability, and ethical compliance. To bridge digital-twin simulations with physical systems, the project will deploy real-time monitoring tools that enable pre-emptive risk mitigation. It will also embed regulatory requirements, such as those of the EU AI Act, into digital twins to audit AI alignment with fairness and transparency standards.
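To make the idea of a resilience metric concrete, here is a minimal, hypothetical sketch of one dimension (recoverability): given a trace of controller outputs from a digital-twin run with a fault injected at a known step, it measures how many steps the system takes to return to its nominal operating band. The function name, the toy trace, and the tolerance band are all illustrative assumptions, not part of the actual project framework.

```python
def recoverability(outputs, nominal, tolerance, fault_step):
    """Hypothetical recoverability metric: number of simulation steps
    after a fault at `fault_step` until the controller output re-enters
    the band nominal +/- tolerance, or None if it never recovers."""
    for t, value in enumerate(outputs):
        if t >= fault_step and abs(value - nominal) <= tolerance:
            return t - fault_step
    return None  # system never returned to its nominal band

# Toy trace: the controller holds 1.0, an injected sensor fault at
# step 3 perturbs the output, and it settles back a few steps later.
trace = [1.0, 1.0, 1.0, 1.8, 1.4, 1.1, 1.02, 1.0]
print(recoverability(trace, nominal=1.0, tolerance=0.05, fault_step=3))  # → 3
```

In a real framework, such a metric would be computed over many stress-test scenarios (cyber-physical attacks, sensor failures) generated by the digital twin, and combined with robustness and compliance measures into an overall safety score.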
Supervision Environment
Extensive training will be provided on physics-informed digital twin development and critical infrastructure simulation. Training on formal verification methods (probabilistic model checking) and AI safety compliance (EU AI Act standards) will also be provided.
Student Applicant Skills/Background
The applicant should have a solid background in computer science or systems engineering. Knowledge of AI/ML algorithms and simulation environments is highly advantageous. A keen interest in critical infrastructure resilience, cyber-physical systems, and AI safety ethics is essential to align with the focus of this research. Additionally, candidates should demonstrate analytical thinking regarding safety certifications and regulatory compliance.
Number Of Awards
1
Start Date
1 October 2026
Award Duration
4 years
Application Closing Date
15 February 2026
Supervisors
Dr Yinhao Li, Dr Dev Jha, Dr Charith Perera
Eligibility & How to Apply
For eligibility criteria and how to apply please visit our website.
Contact Details
You can also contact doctoral.awards@ncl.ac.uk for independent advice on your application.