Location: | London |
---|---|
Salary: | £43,205 to £50,585 per annum, including London Weighting Allowance |
Hours: | Full Time |
Contract Type: | Fixed-Term/Contract |
Placed On: | 12th August 2024 |
Closes: | 8th September 2024 |
Job Ref: | 093987 |
About us
The Dickson Poon School of Law, King's College London, is one of the oldest law schools in England and is recognised globally as among the best in the world. The School was established in 1831 and has played an integral role in the life of King's since the university was formed almost 200 years ago.
King’s has been in service to society since its foundation, and we’re proud to continue that tradition to this day. Our research and teaching address some of the most pressing questions of our time, relating to equality and human rights, the legal implications of climate change, globalisation, international relations, trade, competition and global finance, to name but a few. Members of The Dickson Poon School of Law advise governments, serve on commissions and public bodies, and are seconded to national and international organisations, helping to shape policy and practice both nationally and internationally.
About the role
This role supports Hunter’s contributions to two work packages in the Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project. It involves interviewing subjects, developing methodologies, assessing investigators’ efforts against legal, regulatory and ethical frameworks, writing academic articles, and undertaking dissemination, public engagement and other auxiliary activities for the project.
The project
A significant barrier to reaping the benefits of predictive and generative AI is their unassessed potential for harm. AI auditing has therefore become imperative, in line with existing and impending regulatory frameworks. Yet AI auditing has been haphazard, unsystematic, and left solely in the hands of experts. This project investigates how to enable individual and collective participatory auditing of current and future AI technologies, so that diverse stakeholders beyond AI experts can be involved in auditing their harms. Our research will systematically investigate stakeholders’ needs for, and notions of, fairness and harms in order to create auditing workbenches comprising novel user interfaces, algorithms and privacy-preserving mechanisms that help stakeholders perform audits whilst guarding against unintended negative effects or abuse by malicious actors.
We will create participatory auditing methodologies which reflect, anticipate and inform regulatory frameworks, specifying how to embed participatory auditing in the AI development lifecycle using the workbenches we have developed. We will develop and implement training for stakeholders in participatory auditing to embed our project outputs in practice. We will also work towards a certification framework for AI solutions, helping to ensure that AI is safe and trustworthy.
This is a full-time post (35 hours per week), and you will be offered a fixed-term contract until 30 April 2025, depending on start date (this is a 6-month contract in total, with a proposed start date of 1 November 2024).
About you
To be successful in this role, candidates should have the following skills and experience:
Essential criteria
Desirable criteria