
Joshua James Hatherley

Title

Postdoc


Profile

My research interests lie at the intersection of AI ethics, bioethics, and science and technology studies. I primarily aim to address practical and ethical challenges relating to the design, implementation, and maintenance of AI systems, especially in healthcare settings.

More specifically, I am interested in addressing questions relating to the opacity and explainability of AI systems, the impact of AI systems on therapeutic relationships, and the ethical challenges associated with monitoring and regulating AI systems post-implementation.
 
I am also interested in addressing conceptual challenges and ethical risks associated with our tendency to describe AI systems using anthropomorphic language (e.g., as objects that we can “trust,” as current and future “colleagues,” and as things that exhibit “agency”).
 
You can access my public CV here.
 

Research

In my previous work, I have contributed to a variety of debates in AI ethics, including trust and explainability in medical AI systems, the risks of generative AI, and the challenges of continual learning systems in healthcare.

For example, I have argued that medical AI systems are not appropriate objects of trust or trustworthiness, and that prioritising accuracy over explainability in medical AI systems may, counterintuitively, generate worse patient health outcomes than prioritising explainability in these systems.

More recently, I have argued that "adaptive" medical AI systems (i.e. those that continue learning from new data even after being implemented in a clinical setting) exacerbate the ethical issues associated with standard, "locked" medical AI systems. I have also argued that the use and maintenance of adaptive medical AI systems may need to be classified (and, therefore, regulated) as a form of medical research.

Beyond healthcare, I have also advocated for a range of practical strategies to minimise the risks associated with generative AI systems (e.g., ChatGPT, Gemini, and Copilot), particularly in teaching and education.

Beyond AI, I also contribute to debates in biomedical ethics. For instance, I have argued that the exclusion of psychiatric patients from access to physician-assisted suicide is a form of discrimination.

You can find a full list of my peer-reviewed articles and citation metrics here.

Selected publications


Selected activities
