Location: | Coventry, University of Warwick
---|---
Salary: | £47,389 to £56,535 per annum
Hours: | Full Time
Contract Type: | Fixed-Term/Contract
Placed On: | 16th September 2025
Closes: | 30th September 2025
Job Ref: | 3303
About the Role
For informal enquiries, please contact Siddartha Khastgir (Professor) at s.khastgir.1@warwick.ac.uk or Xingyu Zhao (Associate Professor) at Xingyu.Zhao@warwick.ac.uk.
As an Assistant Professor (Research) in our Safe Autonomy Research Group, you'll lead groundbreaking research on the safety of Large Language Models (LLMs) in safety-critical domains such as autonomous transport (land, air, and marine). Unlike conventional positions, this role places you at the heart of real-world impact: influencing international safety standards, policy, and industrial practices through projects such as the Horizon Europe-funded CERTAIN, AIGGREGATE, and EEA4CCAM.
You’ll collaborate with global academic and industry partners to tackle frontier challenges, from fine-tuning LLMs for safety-critical applications to developing novel methods for AI alignment, agentic architectures, and multimodal grounding.
With minimal teaching responsibilities and a strong research focus, you’ll dedicate your energy to cutting-edge work: securing grants, publishing transformative studies, and building tools that ensure generative AI acts reliably in the real world. Mentorship opportunities and our established ecosystem provide unmatched career growth, while our partnerships with Innovate UK, Horizon Europe, and leading industrial players offer unparalleled resources.
About You
You're a PhD-qualified researcher pioneering safe AI deployment, with deep expertise in Large Language Models (LLMs), agentic architectures, and multimodal systems. Your background includes hands-on work in LLM fine-tuning, safety alignment, and RAG frameworks for high-stakes domains like autonomous transport. You bring a proven track record of high-impact publications, coupled with the ability to translate complex research for academic, industry, and policy audiences. Passionate about embedding ethical governance into real-world AI systems, you thrive in collaborative environments and excel at driving safety research from concept to standards-influencing solutions. You’re motivated to shape the future of responsible AI in autonomous systems (land/air/marine).
Full details of the duties and selection criteria for this role can be found in the vacancy advert on the University of Warwick's jobs pages. You will be routed there when you click the 'Apply' button.
CLOSING DEADLINE: Tuesday, 30th September 2025 at 23:55.