| Location: | London |
| --- | --- |
| Salary: | £38,194 to £43,093 p.a. |
| Hours: | Full Time |
| Contract Type: | Fixed-Term/Contract |
| Placed On: | 30th March 2023 |
| Closes: | 31st May 2023 |
| Job Ref: | ENG02573 |
Research Assistant salary range: £38,194 to £41,388 per annum
Research Associate salary range: £43,093 to £50,834 per annum*. Maximum starting salary: £43,093 per annum
Full-time, fixed-term for 24 months, starting October 2023
The postholder (Research Assistant / Research Associate, post-doctoral) will conduct world-leading research on safe reinforcement learning through formal methods, under the direction of Dr Francesco Belardinelli, within the EPSRC New Investigator Award "An Abstraction-based Technique for Safe Reinforcement Learning".
Autonomous agents learning to act in unknown environments have been attracting research interest due to their wider implications for AI, as well as for their applications in key domains including robotics, network optimisation and resource allocation. Currently, one of the most successful approaches is reinforcement learning (RL). However, to learn how to act, agents are required to explore the environment, which in safety-critical scenarios means that they might take dangerous actions, possibly harming themselves or even putting human lives at risk.
The main goal of this project is to develop Safe through Abstraction (multi-agent) Reinforcement Learning (StAR), a framework to formally guarantee the safe behaviour of agents learning to act in unknown environments, by ensuring that the policies synthesised through RL satisfy safety constraints both at training and at test time. We aim to combine RL and formal methods to ensure the satisfaction of constraints expressed in (probabilistic) temporal logic (PTL) in multi-agent environments.
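For readers unfamiliar with the general idea of enforcing safety constraints during exploration, the toy Python sketch below illustrates one common mechanism ("shielding", i.e. overriding unsafe action proposals before they are executed). It is purely illustrative and is not the StAR framework or any method proposed in the project; all names (SAFE_ACTIONS, propose_action, shield) are hypothetical placeholders.

```python
# Illustrative sketch only: shielded exploration, where an agent's proposed action
# is replaced by a safe fallback whenever it would violate a safety constraint.
# The environment, constraint and policy here are hypothetical toy placeholders.
import random

SAFE_ACTIONS = {0, 1}      # actions assumed (e.g. via an abstraction) to be safe
ALL_ACTIONS = {0, 1, 2}    # action 2 may violate the safety constraint

def propose_action(state):
    """Hypothetical exploratory policy: picks any action uniformly at random."""
    return random.choice(sorted(ALL_ACTIONS))

def shield(state, action):
    """Replace an unsafe proposal with a safe fallback before execution."""
    return action if action in SAFE_ACTIONS else min(SAFE_ACTIONS)

state = 0
for step in range(5):
    a = shield(state, propose_action(state))
    # a is guaranteed to lie in SAFE_ACTIONS, so the safety constraint holds
    # during training (exploration) as well as at test time.
    state = (state + a) % 3
```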
The successful applicant will join the Formal Methods in AI (FMAI) research group, led by Dr Belardinelli. For further information on the group and related projects, see: https://www.doc.ic.ac.uk/~fbelard/.
The position offers an exciting opportunity for conducting internationally leading and impactful research in safe reinforcement learning. The postholder will be responsible for researching and delivering abstraction-based methods to guarantee the safe and trustworthy behaviour of autonomous agents based on the most widely used RL algorithms. They will also be expected to submit publications to top-tier conferences and journals in AI.
To apply, you must have a strong computer science background with a focus on AI, and experience (including a proven publication track record) in at least two of the following areas, as well as the ability and willingness to become familiar with the others: logic-based languages and formal methods; formal verification, including model checking; (safe) reinforcement learning. You should also have:
*Candidates who have not yet been officially awarded their PhD will be appointed as Research Assistant within the salary range £38,194 to £41,388 p.a.
To apply
Visit https://www.imperial.ac.uk/jobs/ and search using reference ENG02573. In addition to completing the online application, candidates should attach:
Informal enquiries related to the position should be directed to Dr Francesco Belardinelli: francesco.belardinelli@imperial.ac.uk.
For queries regarding the application process contact Jamie Perrins: j.perrins@imperial.ac.uk.
Closing Date: 31 May 2023 (midnight)