PhD opportunity
Argumentation Technology for Explainable Misinformation Identification
Unfunded
31 December 2026
Overview
Misinformation has been identified by the World Economic Forum as one of the top global risks, with its rapid spread amplified by generative AI and large language models (LLMs). While automated misinformation detection systems offer promising solutions, most rely on sequence classification approaches that struggle to identify reasoning-based misinformation, such as logical fallacies. These methods capture surface linguistic patterns but fail to model the reasoning structures underpinning misleading content.
This PhD project will integrate argumentation technology into misinformation detection systems, enriching them with reasoning-aware layers that explain not just whether content is misleading, but also why. By doing so, the research will produce more transparent, trustworthy, and educational systems, offering new ways to counter the spread of misinformation online.
Why It Matters:
- Innovative Approach: Moves beyond pattern recognition to reasoning-based analysis of misinformation.
- Transparency & Trust: Provides explanations for predictions, making systems more persuasive and credible.
- Societal Impact: Supports safer online environments and enhances public understanding of misinformation tactics.
Key Objectives:
- Develop models that combine argumentation theory with natural language processing for misinformation detection.
- Create explainable systems that identify not only misinformation itself but also the reasoning flaws behind it.
- Evaluate system transparency, trustworthiness, and educational value in real-world scenarios.
References:
- Tong, A., Du, D.Z. & Wu, W. (2018). On misinformation containment in online social networks. Advances in Neural Information Processing Systems, 31.
- Chen, C. & Shu, K. (2024). Can LLM-Generated Misinformation Be Detected? Proc. ICLR 2024.
- Jin, Z. et al. (2022). Logical Fallacy Detection. Findings of the Association for Computational Linguistics: EMNLP 2022, 7180–7198.
- Niu, C. et al. (2024). VeraCT Scan: Retrieval-Augmented Fake News Detection with Justifiable Reasoning. Proc. ACL 2024 (Vol. 3: System Demonstrations), 266–277.
- Ruiz-Dolz, R. & Lawrence, J. (2023). Detecting argumentative fallacies in the wild: Problems and limitations of large language models. Proc. 10th Workshop on Argument Mining.
Diversity statement
Our research community thrives on the diversity of students and staff which helps to make the University of Dundee a UK university of choice for postgraduate research. We welcome applications from all talented individuals and are committed to widening access to those who have the ability and potential to benefit from higher education.
How to apply
- Email Dr Ramon Ruiz-Dolz to send a copy of your CV and to discuss your potential application and any practicalities (e.g. a suitable start date).
- After discussion with Dr Ruiz-Dolz, formal applications can be made via our Direct Application System.
Supervisors
Principal supervisor
Second supervisor