| Location: | Edinburgh, Hybrid |
|---|---|
| Salary: | £41,064 to £48,822 per annum. Grade UE07 |
| Hours: | Full Time, Part Time |
| Contract Type: | Fixed-Term/Contract |
| Placed On: | 24th March 2026 |
| Closes: | 6th April 2026 |
| Job Ref: | 13968 |
Grade UE07: £41,064 - £48,822 per annum
School of Population Health Sciences / Usher Institute / Bioquarter
Full-time: 35 hours per week
Fixed-term contract: available from April 2026 to 31st March 2027
The Opportunity
We are seeking an outstanding Research Fellow in Privacy-Preserving Clinical NLP to join the TransPECT (Transformer Privacy Evaluation and Checking Toolkit) project. This is a rare opportunity to lead cutting-edge research at the intersection of natural language processing, large transformer models, and AI privacy, working with real-world NHS clinical and administrative data within secure Trusted Research Environments. You will play a central role in developing novel methods to understand, evaluate, and mitigate privacy risks in transformer-based NLP systems trained on sensitive health data. Your work will directly shape how advanced language models can be safely developed, evaluated, and disclosed in high-stakes healthcare settings.
What We Are Looking For
We are looking for a creative and technically strong NLP researcher with expertise in transformer models and a strong interest in responsible, safe, and privacy-aware AI. You will have a PhD (awarded or near completion) in NLP, Machine Learning, Computer Science, AI, Health Informatics, or a related discipline, alongside hands-on experience developing and evaluating transformer-based language models. You should be comfortable designing rigorous computational experiments, analysing model behaviour, and working with complex datasets.
The ideal candidate will combine technical excellence with curiosity about the broader implications of AI in healthcare. Experience working with sensitive health data, clinical text, or secure research environments is highly desirable, as is knowledge of model robustness, fairness, bias mitigation, explainability, or governance in AI systems.
This role offers the opportunity to help define best practice for trustworthy clinical AI at a national level, contributing to a vibrant, interdisciplinary team spanning NLP, machine learning, health data science, and information governance.
This post is advertised as full-time (35 hours per week); however, we are open to considering flexible working patterns. We are also open to considering requests for hybrid working (on a non-contractual basis) that combines remote and regular on-campus working.
To apply for this role, please click on the 'Apply' button above.