Postdoc in Ethics, Privacy and Fairness in Digital Education Environments

University of Tübingen


Cluster of Excellence “Machine Learning: New Perspectives for Science”

Application deadline: June 30, 2021

The application of ML methods in digital education raises significant ethical issues. Modern machine learning techniques promise a revolution in interactive and personalized education. Adaptive learning systems promise to be particularly useful for disadvantaged students without adequate family support and could thus help reduce educational inequalities. However, the students who stand to benefit the most are also the least able to advocate for themselves, and irresponsible implementation of algorithmic systems threatens to lower education quality and widen existing inequalities. Accordingly, the Innovation Fund “Machine Learning in Education” in the Cluster of Excellence “Machine Learning: New Perspectives for Science”, in collaboration with the Hector Research Institute of Education Sciences and Psychology, seeks to hire a postdoc for fundamental research in the ethics and methodology of machine learning for education. The

postdoc position (m/f/d; E 13 TV-L, 100%, 36 months)

is to be filled (ideally) in Summer/Fall of 2021 and will be supervised by Konstantin Genin, Thomas Grote, Benjamin Nagengast and Bob Williamson. Close collaboration with the other members of the Innovation Fund “Machine Learning in Education” is expected. The position is funded for three years. Compensation is at minimum €4,002/month gross (€2,379 net) and increases with experience. Funding for equipment, travel and other expenses is also available.

Possible research areas include but are not limited to the following. 

  1. Methodological Issues in the Testing of ML Algorithms. How do we learn whether algorithmic interventions are helpful or harmful? If an algorithmic intervention is helpful on average, how should its benefits be distributed among groups? Should randomized controlled trials be used to study the effects of algorithmic interventions? If so, how do we manage issues of privacy, equipoise and informed consent, especially when students may not be able to opt out of such trials?
  2. Algorithmic Fairness. Algorithmic tutors make frequent and continual inferences about latent student features: mastery, motivation, attention, etc. These inferences inform what material is presented and how it is sequenced. Inequalities in algorithmic accuracy could allow discrimination to infiltrate the learning process. Mathematical trade-offs between competing algorithmic fairness notions only complicate matters. What are the relevant notions of fairness in algorithmic tutoring? How should tradeoffs between these notions be managed?
  3. Privacy, Respect and Autonomy. In educational ML, researchers will be able to collect unprecedentedly fine-grained information about students, down to the motion of their eyes. That could enable a revolution in personalized learning, but it also poses significant threats to privacy and autonomy. Irresponsible or punitive use of these technologies threatens to be invasive, arbitrary and incompatible with respect for student autonomy. Is it possible to use these promising technologies without creating educational dystopias?

The position is, by its nature, extremely interdisciplinary. Therefore, we are open-minded about the background of potential applicants. Applicants holding a PhD in philosophy (esp. ethics), statistics, machine learning, social science (e.g. psychology, psychometrics, economics, political science, sociology), education or allied fields are welcome to apply. The postdoc will be expected to collaborate with other groups in the “Machine Learning in Education” Innovation Fund on issues of ethics and methodology.

Please upload the usual documents (cover letter; short (1 page) research proposal; academic CV including list of publications; writing sample and letters, if available) as a single PDF to this dropbox folder by the deadline of June 30, 2021. Please indicate in the cover letter which of your publications you would most like us to read and why you believe it is your best work. The group aims to decide on candidates by the end of the summer. Questions can be directed to konstantin.genin@uni-tuebingen.de.

The University aims to increase the proportion of women in research and teaching and therefore urges suitably qualified women scientists to apply. The “Machine Learning in Education” group also welcomes applications from other groups underrepresented in philosophy and machine learning. Qualified international researchers are expressly invited to apply. Equally qualified applicants with disabilities will be given preference. The employment will be carried out by the central administration of the University of Tübingen.
