Funding for: EU Students, International Students, Self-funded Students, UK Students
Funding amount: Full Home/EU fee waiver or equivalent fee discount for overseas students (see advert text)
Placed On: 6th June 2019
Closes: 28th June 2019
AI for good: teaching young people to tackle biased news and hateful content
Artificial Intelligence and Machine Learning have become powerful paradigms for addressing a wide range of everyday problems. Recent developments have also had a significant impact on the way computers automatically process natural language and make decisions, such as classifying an email as spam or flagging a news article as a possible sign of an emerging story. The focus of this project is to advance the state of the art in understanding, predicting and ameliorating the diffusion and effects of toxic content on social media, which is quickly becoming a real threat to society.
This PhD scholarship is part of the research project “COURAGE: A Social Media Companion Safeguarding and Educating Students”, which is an international collaboration funded by VolkswagenStiftung (Volkswagen Foundation) as part of the Artificial Intelligence and the Society of the Future funding initiative. The project partners include the Universitat Pompeu Fabra (Spain), the Istituto per le Tecnologie Didattiche of the National Council of Research ITD-CNR (Italy), Hochschule Ruhr West (Germany) and the Rhine-Ruhr Institute for System Innovation (Germany). The project aims to develop a Virtual Social Media Companion that educates and supports teenage school students facing the threats of social media such as discrimination and biases as well as hate speech, bullying, fake news and other toxic content. The companion will raise awareness of potential threats in social media among students without being intrusive.
The Essex team will be involved in developing Bayesian computational models of the belief dynamics of social media users to support governance and educational strategies. These models will also be applied to evaluate socially relevant variables, such as trust and inclusion. We will build on and implement state-of-the-art NLP and AI methods to provide measurements of sentiment, bias, hatefulness, veracity, polarization and sensationalism of social media content. In addition, we will drive forward the state of the art in detecting hate speech and biased content. The companion will actively counteract this kind of content, balancing it with opposing perspectives and proposing themed challenges that adopt ideas from games.
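As a purely illustrative sketch of the kind of content measurement mentioned above: probabilistic text classification is one common starting point for scoring hatefulness of posts. The toy multinomial Naive Bayes classifier below, with invented training snippets, is an assumption about the general technique, not the project's actual pipeline or data.

```python
from collections import Counter
import math

def train_nb(docs):
    """Train a tiny multinomial Naive Bayes model.
    docs: list of (text, label) pairs; labels are strings."""
    word_counts = {}          # label -> Counter of word frequencies
    label_counts = Counter()  # label -> number of training documents
    vocab = set()
    for text, label in docs:
        label_counts[label] += 1
        wc = word_counts.setdefault(label, Counter())
        for w in text.lower().split():
            wc[w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def classify(model, text):
    """Return the label with the highest (log) posterior probability."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_logp = None, float("-inf")
    for label in label_counts:
        logp = math.log(label_counts[label] / total_docs)  # class prior
        n = sum(word_counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing over the shared vocabulary
            logp += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

# Invented miniature training set, for illustration only
train = [
    ("you are awful and stupid", "toxic"),
    ("i hate you idiot", "toxic"),
    ("what a lovely day", "neutral"),
    ("great article thanks for sharing", "neutral"),
]
model = train_nb(train)
print(classify(model, "you stupid idiot"))  # -> toxic
```

Real systems in this space would of course use far larger corpora and modern neural models; the sketch only shows the shape of the classification problem.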
The PhD studentship aims to address these challenges by combining dynamic network modelling with automated content analysis (textual or multimedia) using modern machine learning methods such as Deep Learning and Hierarchical Bayesian Models.
The student may extract relevant content features, topics and events from online discussions to (a) predict short- and long-term responses of multiple users, (b) estimate the different effects of diverse information-suggestion strategies in such contexts, and (c) define different interventions to improve model accuracy.
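To give a flavour of the dynamic network modelling mentioned above, here is a minimal sketch of simulating belief dynamics on a small follower network. The DeGroot-style trust-weighted averaging rule and all users and weights below are illustrative assumptions, not the project's actual model.

```python
def degroot_step(beliefs, weights):
    """One round of DeGroot-style belief averaging.
    beliefs: {user: belief in [0, 1]}
    weights: {user: {neighbour: trust}} -- each user's new belief is the
    trust-weighted mean of the beliefs of the accounts they follow
    (including themselves)."""
    new = {}
    for user, nbrs in weights.items():
        total = sum(nbrs.values())
        new[user] = sum(w * beliefs[v] for v, w in nbrs.items()) / total
    return new

# Toy network: "a" and "b" hold opposing beliefs, "c" listens to both
beliefs = {"a": 0.9, "b": 0.1, "c": 0.5}
weights = {
    "a": {"a": 0.5, "b": 0.5},
    "b": {"b": 0.5, "a": 0.5},
    "c": {"c": 0.2, "a": 0.4, "b": 0.4},
}
for _ in range(50):
    beliefs = degroot_step(beliefs, weights)
print(round(beliefs["a"], 2))  # -> 0.5 (the network reaches consensus)
```

In this toy setting the intervention questions from the paragraph above correspond to changing the trust weights or injecting content nodes and observing how the consensus shifts.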
As such, we are particularly interested in PhD candidates who would like to work on one or more of the following topics:
The successful applicant will join the Essex COURAGE team, formed by Professor Kruschwitz (PI), Dr Ognibene (Co-I) and Dr Villavicencio (Co-I). Please contact Dr Ognibene to discuss further: email@example.com
Funding: Full Home/EU fee waiver or equivalent fee discount for overseas students (further fee details) and a doctoral stipend (£15,009 in 2019-20).