Funding for: UK Students, EU Students
Placed On: 11th July 2019
Expires: 10th October 2019
Supervisors: Prof Nathan Griffiths, Dr Thomas Popham and Dr Abhir Bhalerao, Department of Computer Science and Department of Engineering, University of Warwick.
We are seeking a PhD student for a project that aims to develop tools and methods to improve the transparency of intelligent systems for autonomous driving applications, in order to help establish a safety assurance framework for intelligent systems deployed in vehicular autonomy.
Many recent developments in machine intelligence focus on the quality of training and of the datasets fed to the learning algorithms. The shortfalls of an intelligent system are almost always attributed to flaws in the input data. There is also an existing consensus that it is implausible to provide a training dataset exhaustive enough for an intelligent system to behave robustly under normal as well as extreme operating scenarios. At the same time, there is little rigorous understanding of the inner workings of intelligent algorithms, which are conventionally treated as a “black box”. This approach, combined with the reliance on the quality of data and training, poses a huge challenge for the validation and interpretability of AI systems as a whole. This project will focus on developing tools and models to draw out “what has been learnt and why” by an AI system, so that a safety assurance argument can be constructed for intelligent autonomy.
Note that this project includes a 3-month secondment with Jaguar Land Rover Autonomy Research.
Candidates should ideally have a computer science, engineering or data science background, although candidates from other scientific disciplines will be considered. Candidates should have strong analytical skills and solid programming experience, including developing or applying machine learning techniques (ideally including deep neural networks).