Press release
Generative AI has seven distinct roles in combating misinformation
Generative AI can be used to combat misinformation. However, it can also exacerbate the problem by producing convincing manipulations that are difficult to detect and can quickly be copied and disseminated on a wide scale. In a new study, researchers have defined seven distinct roles that AI can play in the information environment and analysed each role in terms of its strengths, weaknesses, opportunities and risks.
“One important point is that generative AI has not just one but several functions in combating misinformation. The technology can be anything from information support and educational resource to a powerful influencer. We therefore need to identify and discuss the opportunities, risks and responsibilities associated with AI and we need to create more effective policies,” says Thomas Nygren, Professor at Uppsala University, who conducted the study together with colleagues at the University of Cambridge, UK, and the University of Western Australia.
From fact-checking to influence – the same capacity cuts both ways
The study is an overview in which researchers from a range of scholarly disciplines have reviewed the latest research on how generative AI can be used in various parts of the information environment. These uses range from providing information and supporting fact-checking to influencing opinion and designing educational interventions, and the study considers the strengths, weaknesses, opportunities and risks associated with each use.
The researchers chose to work with a SWOT framework because it provides a more practical basis for decisions than general assertions that ‘AI is good’ or ‘AI is dangerous’. A system can be both helpful and harmful in the same role. Analysing each role using SWOT can help decision-makers, schools and platforms match the right measures to the right risk.
AI can serve several functions
“The roles emerged from a process of analysis where we started out from the perception that generative AI is not a simple ‘solution’ but a technology that can serve several functions at the same time. We identified recurrent patterns in the way AI is used to obtain information, to detect and manage problems, to influence people, to support collaboration and learning, and to design interactive training environments. These functions were summarised in seven roles,” Nygren explains.
The seven roles the researchers identified are informer, guardian, persuader, integrator, collaborator, teacher and playmaker (see the fact box). The point of the roles is that they can serve as a checklist: they help us see how each role can contribute to strengthening society’s resilience to misinformation, but also how each role entails specific vulnerabilities and risks. The researchers therefore analysed each role using a SWOT approach: what strengths and opportunities it embodies, but also what weaknesses and threats need to be managed.
“AI must be implemented responsibly”
“We show how generative AI can produce dubious content yet can also detect and counteract misinformation on a large scale. However, risks such as hallucinations (AI presenting incorrect ‘facts’ as true), the reinforcement of prejudices and misunderstandings, and deliberate manipulation mean that the technology has to be implemented responsibly. Clear policies are therefore needed on the permissible use of AI.”
The researchers particularly underline the need for:
- Regulations and clear frameworks for the permissible use of AI in sensitive information environments;
- Transparency about AI-generated content and systemic limitations;
- Human oversight where AI is used for decisions, moderation or advice;
- AI literacy to strengthen the ability of users to evaluate and question AI answers.
“The analysis shows that generative AI can be valuable for promoting the knowledge in school that is needed to uphold democracy and protect us from misinformation. Having said that, there is a risk that excessive use could be detrimental to the development of knowledge, making us lazy, ignorant and therefore more easily fooled. Given the rapid pace of developments, it’s important to constantly scrutinise the roles of AI as ‘teacher’ and ‘collaborator’, like the other five roles, with a critical and constructive eye,” Nygren emphasises.
Article: Nygren, T., Spearing, E. R., Fay, N., Vega, D., Hardwick, I. I., Roozenbeek, J., & Ecker, U. K. H. (2026). The seven roles of generative AI: Potential & pitfalls in combatting misinformation. Behavioral Science & Policy, 0(0). DOI 10.1177/23794607261417815
For more information: Thomas Nygren, Professor of Education at the Department of Education, Uppsala University, thomas.nygren@edu.uu.se, +46-73-646 86 49
FACT BOX:
The seven roles of generative AI: potential and pitfalls (Nygren et al. 2026).
1) Informer
- Strengths/opportunities: Can make complex information easier to understand, translate and adapt language, can offer a quick overview of large quantities of information.
- Problems/risks: Can give incorrect answers (‘hallucinations’), oversimplify and reproduce training data biases without clearly disclosing sources.
2) Guardian
- Strengths/opportunities: Can detect and flag suspect content on a large scale, identify coordinated campaigns and contribute to a swifter response to misinformation waves.
- Problems/risks: Risk of false positives/negatives (irony, context, legitimate controversies), distortions in moderation, and lack of clarity concerning responsibility and rule of law.
3) Persuader
- Strengths/opportunities: Can support correction of misconceptions through dialogue, refutation and personalised explanations; can be used in pro-social campaigns and in educational interventions.
- Problems/risks: The same capacity can be used for manipulation, microtargeted influence and large-scale production of persuasive yet misleading messages – often quickly and cheaply.
4) Integrator
- Strengths/opportunities: Can structure discussions, summarise arguments, clarify distinctions, and support deliberation and joint problem-solving.
- Problems/risks: Can create false balance, normalise errors through ‘neutral synthesis’, or indirectly control problem formulation and interpretation.
5) Collaborator
- Strengths/opportunities: Can assist in analysis, writing, information processing and idea development; can support critical review by generating alternatives, counterarguments and questions.
- Problems/risks: Risk of overconfidence and cognitive outsourcing; users can fail to realise that the answer is based on uncertain assumptions and that the system lacks real understanding.
6) Teacher
- Strengths/opportunities: Can give swift, personalised feedback and create training tasks at scale; can foster progression in source criticism and digital skills.
- Problems/risks: Incorrect or biased answers can be disseminated as ‘study resources’; risk that teaching becomes less investigative if students/teachers uncritically accept AI-generated content.
7) Playmaker
- Strengths/opportunities: Can support design of interactive, gamified teaching environments and simulations that train resilience to manipulation and misinformation.
- Problems/risks: Risk of simplifying stereotypes, ethical and copyright problems, and that gaming mechanisms can reward the wrong type of behaviour if the design is not well considered.
Founded in 1477, Uppsala University is the oldest university in Sweden. With more than 50,000 students and 7,500 employees in Uppsala and Visby, we are a broad university with research in social sciences, humanities, technology, natural sciences, medicine and pharmacology. Our mission is to conduct education and research of the highest quality and relevance to society on a long-term basis. Uppsala University is regularly ranked among the world’s top universities. www.uu.se