Funded by the European Union

AIRY

Responsible use of AI and Intelligent Choices for Youths
AIRY Interactive Handbook for Youth. Activity 3, Task 3.2: Development of an interactive handbook for youth. Date: 22 January 2026. Lead partner: INQUIRY FUSE & Plan B.
AIRY project partners — HUUB, PlanB Research & Consulting, INQUIRIUM

Table of contents

About AIRY

What the AIRY project is

The AIRY project (Responsible use of AI and Intelligent Choices for Youths) is an Erasmus+ small-scale partnership that brings together three organisations from Estonia, Greece, and Cyprus. The project aims to equip young people aged 16–30 with the knowledge and skills to understand, critically evaluate, and responsibly use Artificial Intelligence (AI) in their personal and professional lives. Through focus groups, educational materials, and interactive learning activities, AIRY addresses the growing need for AI literacy among youth, with a focus on ethical use, critical thinking, and media literacy.

Why AI literacy matters for young people today

AI technologies are increasingly integrated into everyday life, from social media algorithms and voice assistants to educational tools and creative applications. Young people are among the most active users of digital technologies, yet they are often excluded from discussions about AI's development, ethical implications, and responsible use. Research shows that many young people interact with AI tools without fully understanding how they work or recognising their limitations. The AIRY project responds to this gap by providing accessible, youth-friendly resources that build critical AI literacy, promote ethical awareness, and develop critical thinking skills needed to navigate an AI-driven world.

AIRY interactive handbook for youth

Purpose of the handbook

The AIRY interactive handbook is designed for young people aged 16–30. It responds directly to the focus group findings from the partner countries (Estonia, Cyprus, and Greece), addressing young people’s real practices, concerns, and expectations regarding the use of AI technologies. The handbook presents AI as a supportive tool, while emphasising critical thinking, ethical awareness, and responsible use, particularly in relation to misinformation.

The handbook is:

Youth-friendly and practical

Grounded in real-life examples and everyday AI use

Designed for self-directed learning and use by youth workers

How to use this handbook

For young people (self-learning)

If you are a young person, this handbook is designed so you can work through it at your own pace, in any order that interests you. You don't need a teacher, a workshop, or a group – just curiosity and a willingness to think critically about the technology you use every day.

Each chapter covers a different aspect of AI literacy: understanding how AI works, navigating ethical questions, and recognising misinformation. Within each chapter, you'll find short explanations written in accessible language, real-life examples that connect to situations you probably recognise, and reflection questions that invite you to think about your own experiences. There are no right or wrong answers to these questions – they're designed to help you develop your own perspective.

The interactive elements throughout the handbook – quizzes, dilemma cards, self-check tables, and “Try It Yourself” activities – are meant to make your learning more active. Don't skip them. They work best when you actually pause, think, and engage rather than just reading through. If something surprises you or challenges what you previously thought, that's a good sign – it means you're learning.

You can start with whatever topic feels most relevant to you. If you're curious about how AI actually works, begin with Chapter 1. If you've been thinking about privacy or how much you rely on AI tools, jump to Chapter 2. If you've ever wondered whether something you saw online was real or fake, Chapter 3 is a good starting point. The chapters are connected but each one also stands on its own.

Keep a notebook or digital document nearby as you read. Writing down your thoughts, especially in response to the reflection questions, helps you process ideas more deeply and gives you something to look back on later.

For youth workers (group work, workshops, discussions)

If you are a youth worker, this handbook is structured to support facilitated learning in youth centres, schools, non-formal education settings, and workshops. Each chapter provides enough material for one or more group sessions, depending on the depth of discussion and the number of activities you choose to include.

The consistent structure within each chapter – explanation, example, interactive element, reflection – gives you a natural rhythm for group sessions. A recommended approach is to start each session by reading the key concept together, then discuss the real-life example as a group, work through the interactive activity (individually or in small groups), and close with a facilitated discussion using the reflection questions. This structure typically works well within a 60–90 minute session.

For example, the ethical dilemma cards in Chapter 2 and the “Spot-the-Fake” exercises in Chapter 3 are particularly well-suited for group work. For the dilemma cards, divide participants into small groups of 3–5, give each group a different dilemma, and ask them to discuss the options before presenting their reasoning to the wider group. Encourage participants to explain not just what they would do, but why – this develops critical thinking more effectively than simply choosing an answer.

The self-check tables and quizzes can be used as session openers or closers. At the start, they can help participants recognise what they already know (or assume). At the end, they can serve as a reflection tool to see how their thinking has shifted. You don't need to collect or grade these – they work best as personal awareness tools.

Adapt the materials to your context. The scenarios feature young people from different countries and situations, so invite participants to share similar examples from their own lives. Local relevance increases engagement. If your group is younger (16–18), you may want to spend more time on the foundational concepts in Chapter 1 before moving to the ethical and media literacy topics. For older participants (18–30), you might move more quickly through Chapter 1 and dedicate more time to the discussions and activities in Chapters 2 and 3.

All materials in this handbook can be photocopied, projected, or adapted for use in your sessions. The SPOTit project resources referenced in Chapter 3, including the free MOOC and digital escape rooms, offer additional structured activities that complement this handbook well.

Chapter 1 – Understanding AI: What it is and how it works

[Figure: The AI Spectrum – from rule-based systems (fixed if-this-then-that logic), to machine learning (learns patterns from data), to deep learning (neural networks with many layers), to generative AI (creates new text, images, and audio). From fixed rules to generating new content – each step builds on the last.]

Chapter aim

Artificial intelligence (AI) is already part of everyday life for young people, from social media feeds and voice assistants to educational tools and creative applications. Yet many young people interact with AI without fully understanding what it is, how it works, or what its limitations are. During the AIRY focus groups conducted in Estonia, Cyprus, and Greece, young people reported using AI primarily for brainstorming, learning, and daily tasks, but also expressed concerns about inaccuracy, bias, and over-reliance. This chapter aims to build a foundational understanding of AI, helping young people recognise where they encounter it, appreciate what it can and cannot do, and approach it with curiosity and critical thinking.

What is AI? Explained simply

Key Idea

AI refers to computer systems designed to perform tasks that typically require human intelligence, such as recognising images, understanding language, making predictions, or generating text. Unlike traditional software that follows fixed rules, AI systems learn from large amounts of data. They identify patterns in the data and use these patterns to make decisions or produce outputs. For example, when you ask a chatbot a question, it does not look up a single correct answer from a database. Instead, it predicts the most likely sequence of words based on patterns it has learned from millions of texts. This is why AI can produce responses that sound very convincing, even when the information is incorrect. AI is a tool that mimics certain aspects of human thinking, but it does not truly understand the world the way humans do (UNESCO, 2023).
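If you are curious, “predicting the most likely next words” can be demystified with a toy version of the idea. The sketch below (in Python, with a made-up ten-word corpus) simply counts which word most often follows another and “predicts” on that basis. Real chatbots work at a vastly larger scale with far more sophisticated models, but the underlying principle of statistical prediction, rather than understanding, is the same.

```python
from collections import Counter, defaultdict

# A tiny, invented "training corpus" for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a so-called bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "cat" follows "the" more often than "mat" or "fish" in our corpus,
# so the model predicts it - without knowing what a cat is.
print(predict_next("the"))
```

Notice that the model has no idea what any of these words mean; it only knows which ones tend to appear together. That is the essence of why fluent AI text can still be wrong.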

Real-Life Example

When you use a translation app on your phone to understand a menu in another language, AI is working behind the scenes. The app analyses the text, compares it to millions of translated sentences it has been trained on, and generates a translation. Sometimes the result is excellent; other times it misses the context or cultural meaning. This is because the AI is predicting the most statistically likely translation, not truly understanding the meaning of the words.

Did You Know?

AI does not think or feel. When a chatbot says something like “I think...” or “I believe...”, it is not expressing a real opinion. It is generating the most probable next words based on patterns in its training data. AI has no consciousness, emotions, or personal experiences (European Commission, 2022).

Reflection Questions

1. Have you ever asked an AI tool a question and received an answer that surprised you? What made you trust or doubt the response?

2. If AI learns from data created by humans, what kind of biases or errors might it pick up?

[Figure: AI in Your Daily Life – you at the centre, surrounded by social feeds, voice assistants, streaming recommendations, navigation, email filters, online shopping, translation, and photo filters.]

Where AI appears in everyday life

Key Idea

AI is embedded in many of the digital tools and platforms that young people use every day, often without being noticed. Social media platforms use AI algorithms to decide which posts, videos, or advertisements appear in your feed. Music and video streaming services use AI to recommend content based on your previous choices. Voice assistants like Siri, Alexa, or Google Assistant rely on AI to understand speech and respond to commands. Educational platforms use AI to personalise learning by adapting content to a student’s level. Even features like auto-correct on your phone, face recognition to unlock your device, and spam filters in your email are powered by AI. Understanding where AI operates helps you become a more conscious and critical user of technology (CIFAR, 2023; Council of Europe, 2022).
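To see what “an algorithm deciding your feed” can mean in its simplest possible form, here is a hypothetical sketch: three invented posts, each with a made-up engagement score, sorted so the most engaging appears first. Real recommendation systems combine many more signals and learn the scores from your behaviour, but the ranking principle is similar.

```python
# A drastically simplified, invented sketch of an engagement-ranked feed.
# The posts and scores are made up for illustration.
posts = [
    {"title": "Calm nature video", "predicted_engagement": 0.3},
    {"title": "Outrage headline", "predicted_engagement": 0.9},
    {"title": "Friend's holiday photo", "predicted_engagement": 0.6},
]

# The "algorithm": show the posts predicted to hold attention longest first.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in feed:
    print(post["title"])
```

Even in this toy version you can see why emotionally charged content tends to rise to the top: whatever is predicted to keep you scrolling gets shown first.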

Real-Life Example

Young people participating in the AIRY focus groups in Estonia, Cyprus, and Greece reported using AI tools primarily for brainstorming and idea generation, assisting with school assignments and research, managing daily tasks and schedules, and exploring creative projects and hobbies. Many participants recognised that AI is useful as a support tool, but emphasised that it should not replace their own thinking and decision-making.

Did You Know?

Every time you scroll through social media, an AI algorithm is deciding what you see next. These algorithms are designed to keep you engaged for as long as possible, which means they tend to show you content that triggers strong emotions, whether positive or negative. This is why your feed might sometimes feel repetitive or intense (Council of Europe, 2022; OECD, 2023).

Reflection Questions

1. Can you list five ways you interacted with AI today without realising it?

2. How do you think the AI algorithm on your favourite social media platform decides what to show you?

What AI does well and what it cannot do

Key Idea

AI excels at processing large amounts of data quickly, identifying patterns, and performing repetitive tasks with high accuracy. It can translate languages, generate text, recognise objects in images, and even assist doctors in analysing medical scans. However, AI has significant limitations. It cannot truly understand context the way humans do. It lacks common sense, empathy, and the ability to make moral judgments. AI systems can confidently produce incorrect or misleading answers, a phenomenon known as “hallucination.” They cannot verify whether their outputs are true; they can only predict what seems most statistically likely based on their training data. This means that while AI is an extraordinarily powerful tool, it is not a substitute for human judgment, creativity, or ethical reasoning (UNESCO, 2023; European Commission, 2022).

Real-Life Example

A student asks an AI chatbot to help write a history essay. The AI produces a well-structured text with clear arguments and fluent language. However, it includes a reference to a book that does not exist and attributes a quote to a historical figure who never said it. The essay looks professional, but the content is partly fabricated. This is a common example of AI hallucination: the system generates plausible-sounding but incorrect information.

Did You Know?

In a 2023 legal case in the United States, a lawyer submitted a court filing that included several case references generated by an AI chatbot. It turned out that none of the cited cases existed; the AI had invented them. The lawyer faced professional consequences for not verifying the AI-generated content. This case highlights why it is essential to always check AI outputs against reliable sources (UNESCO, 2023).

Reflection Questions

1. If AI can write a convincing essay, does that mean it understands the topic? Why or why not?

2. In what situations would you trust AI-generated information, and when would you want to verify it yourself?

Errors, bias, and limitations in AI systems

Key Idea

AI systems learn from data created by humans, which means they can absorb and reproduce the biases present in that data. If an AI is trained on text that contains gender stereotypes, racial prejudice, or cultural assumptions, its outputs may reflect these same biases. For example, research has shown that AI language models associate women with domestic roles four times more often than men, while linking male names to careers and leadership positions (UNESCO, 2024). AI can also produce errors when it encounters situations that differ from its training data. These errors are not random; they often disproportionately affect people from underrepresented groups. Understanding AI bias is important because these systems are increasingly used in decisions that affect people’s lives, from education and hiring to healthcare and law enforcement (European Commission, 2019; CIFAR, 2023).
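A tiny, deliberately skewed example can show how bias passes from data to output. In the hypothetical sketch below, a “model” that merely counts word pairings in its invented training sentences ends up associating jobs with genders, precisely because its data did. No real system or dataset is represented here; the point is the mechanism.

```python
from collections import Counter

# Imaginary, deliberately skewed "training data" for illustration only.
training_sentences = [
    "she is a nurse", "she is a nurse", "she is a nurse",
    "he is a nurse",
    "he is a ceo", "he is a ceo", "he is a ceo",
    "she is a ceo",
]

def most_likely_pronoun(job):
    """Pick the pronoun most often paired with the job in the training data."""
    counts = Counter(s.split()[0] for s in training_sentences if s.endswith(job))
    return counts.most_common(1)[0][0]

# The model faithfully reproduces the skew in its data.
print(most_likely_pronoun("nurse"))
print(most_likely_pronoun("ceo"))
```

The model is not “prejudiced” in any human sense; it is doing exactly what it was built to do, which is to mirror its data. That is why biased training data produces biased outputs.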

Real-Life Example

Researchers at Stanford University found that when AI language models were asked to create stories featuring students, they overwhelmingly depicted struggling learners as characters with names associated with historically marginalised groups. Native American students were almost entirely absent from positive representations. This example shows how AI can silently reinforce harmful stereotypes, even in educational contexts where it is supposed to help.

Did You Know?

AI image generators trained on internet data have been shown to produce stereotypical images when given neutral prompts. For example, asking for an image of “a CEO” often produces images of older white men, while asking for “a nurse” predominantly shows women. These outputs reflect biases in the training data, not reality (UNESCO, 2024).

Reflection Questions

1. Have you ever noticed AI producing results that seemed biased or stereotypical? What did you observe?

2. If AI learns from the internet, and the internet contains both accurate and biased information, how should we approach AI outputs?

AI as a support tool, not a thinking substitute

Key Idea

AI is most effective when used as a support tool that enhances human abilities rather than replacing them. It can help with brainstorming, organising ideas, checking grammar, translating text, or exploring creative possibilities. However, relying too heavily on AI can weaken independent thinking, reduce creativity, and erode the ability to critically evaluate information. Young people who participated in the AIRY focus groups across Estonia, Cyprus, and Greece consistently emphasised that AI should support human thinking, not replace it. They highlighted the importance of self-regulation, maintaining a balance between AI-assisted tasks and personal judgment. The key is to use AI consciously and intentionally: know when it helps, recognise when it might mislead, and always apply your own critical thinking to its outputs (OECD, 2022; UNICEF, 2021).

Real-Life Example

A young person uses AI to brainstorm ideas for a school project on climate change. The AI suggests several interesting angles and provides useful background information. Instead of simply copying the AI’s output, the student selects the most relevant ideas, checks the facts against trusted sources, adds personal reflections and local examples, and creates an original presentation. This is an example of using AI as a starting point, not as a final product.

Did You Know?

The European Commission’s Ethical Guidelines on the Use of AI for Educators recommend that AI should always be used under human oversight. This means that even when AI produces helpful outputs, a human should always review, evaluate, and take responsibility for the final result. This principle applies equally to students, teachers, and youth workers (European Commission, 2022).

Reflection Questions

1. Think about the last time you used AI for a task. Did you check the output, or did you accept it without question?

2. How can you use AI to learn more effectively while still developing your own skills and knowledge?

Interactive Elements: Mini Quiz – AI Myths vs Facts

Read each statement below and decide: is it a Myth or a Fact? Then check your answers in the Answer Key at the end of this section.

1. AI understands the meaning of the text it generates.

2. AI can produce incorrect information that sounds very convincing.

3. AI treats everyone equally and does not discriminate.

4. AI algorithms on social media decide what content you see.

5. AI will eventually replace the need for human thinking and creativity.

Answer Key: AI Myths vs Facts

1. Myth. AI processes patterns in data and predicts the most likely next words. It does not understand meaning, context, or truth the way humans do.

2. Fact. This is known as AI hallucination. AI systems can confidently generate false information, including invented references, fabricated statistics, or inaccurate claims.

3. Myth. AI systems can reflect and amplify biases present in their training data, leading to outputs that reinforce stereotypes based on gender, race, or socio-economic background.

4. Fact. Social media platforms use AI-powered recommendation algorithms that prioritise content likely to keep you engaged, which can include emotionally charged or sensationalised posts.

5. Myth. AI is a tool that supports human work, but it lacks consciousness, empathy, moral judgment, and genuine creativity. Human oversight remains essential.

Personal Reflection: “How do I already use AI?”

Take a moment to reflect on your daily interactions with AI. Use the questions below to guide your thinking:

1. Which AI tools or features do I use regularly? (e.g., chatbots, social media, translation apps, voice assistants, recommendation systems)

2. What do I use AI for most often? (e.g., learning, entertainment, organising tasks, creative projects)

3. Do I usually check whether AI-generated information is accurate?

4. Have I ever noticed AI making a mistake or giving me biased results?

5. How would I describe my current relationship with AI: Am I a conscious user, or do I accept its outputs without much thought?

References

  • CIFAR. (2023). Responsible AI and Children. Canadian Institute for Advanced Research.
  • Council of Europe. (2022). Insights into Artificial Intelligence and the Youth Sector. Council of Europe Publishing.
  • European Commission. (2019). Ethics Guidelines for Trustworthy AI. High-Level Expert Group on Artificial Intelligence.
  • European Commission. (2022). Ethical Guidelines on the Use of AI and Data in Teaching and Learning for Educators. Publications Office of the European Union.
  • OECD. (2022). OECD AI Policy Observatory: AI and Education. Organisation for Economic Co-operation and Development.
  • OECD. (2023). Disinformation and Misinformation: Policy Responses and Research. Organisation for Economic Co-operation and Development.
  • UNESCO. (2023). Guidance for Generative AI in Education and Research. United Nations Educational, Scientific and Cultural Organization.
  • UNESCO. (2024). AI Competency Framework for Students. United Nations Educational, Scientific and Cultural Organization.
  • UNICEF. (2021). Policy Guidance on AI for Children. United Nations Children’s Fund.

Chapter 2 – Ethical AI: Navigating the digital world responsibly

[Figure: The 5 Pillars of Ethical AI Use – Responsible Use, Privacy & Data, Independent Thinking, Human Relationships, and Ethical Choices.]

Chapter aim

As Chapter 1 explored, AI is already woven into the digital tools young people use every day – from content recommendations to text generation. But knowing how AI works is only the first step. The deeper question is: how should we use it? Responsible AI use means going beyond technical understanding and thinking about the ethical dimensions of our interactions with these systems. It involves asking questions like: Is this output fair? Could sharing this content harm someone? Am I being transparent about using AI in my work? Every time you interact with an AI tool, you make choices that have real consequences – for your own learning, for other people's privacy, and for the quality of information in your community. This chapter gives you practical frameworks for making those choices thoughtfully. Each section presents an ethical issue, a real-life scenario, and a dilemma that invites you to reflect on what responsible digital behaviour looks like in practice.

2.1. What does responsible AI use mean for young people?

Reflection Questions

1. When you use a generative AI tool (such as ChatGPT or an image generator), do you think about how the information was produced?

2. Should young people have responsibilities when using AI tools in school, work, or social media?

Ethical Issue Explained

AI systems increasingly influence how young people learn, communicate, and create content online. Many digital platforms use algorithms to recommend videos, filter information, generate text, or create images. Responsible AI use means interacting with these tools thoughtfully and ethically. It includes understanding their limitations, recognising potential bias, protecting personal data, and using AI in ways that respect other people’s rights. As already discussed in Chapter 1 of this handbook, AI systems do not think or understand information in the same way humans do. Their outputs depend on the data used to train them and the design decisions of developers. Incorrect, biased, or misleading results can therefore appear in AI-generated content (European Commission, 2022).

Scenario

Maria, a university student, uses an AI chatbot to help her with an essay about climate change. The chatbot produces a full paragraph explaining the topic. The text looks convincing and well written, so Maria copies it into her assignment without checking the information or rewriting the content. Later, her lecturer discovers that the paragraph contains incorrect data and outdated statistics. Maria explains that the AI tool generated the text and she assumed it was reliable. The lecturer explains that generative AI tools can produce convincing answers but they do not always verify facts.

What would you do in such a situation?

Option A: Use the AI-generated text exactly as it appears because the tool probably knows the correct answer.
Option B: Use the AI output as a starting point, then verify the information with reliable sources and rewrite the text in your own words.
Option C: Avoid AI tools completely because they cannot be trusted.

Correct answer: Option B – Use the AI output as a starting point, then verify the information with reliable sources and rewrite the text in your own words.

Why this is the best choice: Option A is risky because AI tools can generate convincing but inaccurate content – exactly what happened to Maria. Blindly trusting AI output undermines both learning and academic integrity. Option C goes too far – AI tools can be genuinely useful when used critically. Option B represents the balanced approach: use AI to get started, but always verify facts against reliable sources and express ideas in your own words. This way, you benefit from AI's efficiency while maintaining accuracy and originality.

Open reflection: How could AI support your learning while still keeping your work original and accurate?

Why It Matters

Responsible AI use matters because AI systems influence knowledge, creativity, and decision-making in everyday life. AI-generated content sometimes appears credible but contains factual errors, bias, or fabricated references. Critical thinking is essential when interpreting AI-generated content. Young people who understand these limitations are better prepared to question AI outputs and make informed decisions. Responsible behaviour includes checking information, acknowledging AI assistance, respecting privacy, and considering the potential consequences of digital actions (Bender et al., 2021; Long & Magerko, 2020).

2.2. Privacy and personal data when using AI tools

Reflection Questions

1. When you use an AI tool or digital platform, do you think about what happens to the information you type or upload?

2. Would you share personal information with an AI chatbot in the same way you might share it with a friend?

Ethical Issue Explained

AI systems often rely on large amounts of data to function effectively. Many AI tools collect information from user interactions in order to improve performance, personalise services, or analyse patterns of behaviour. Personal data may include names, locations, search history, images, voice recordings, or written prompts entered into AI systems. Users do not always know how their information is stored, analysed, or reused by digital platforms. European data protection frameworks emphasise that individuals have rights regarding their personal data, including transparency about how data is collected and used (European Commission, 2022; European Parliament & Council, 2016).

Scenario

Nikos enjoys using an AI chatbot to help with school assignments and everyday questions. One evening he decides to ask the chatbot for advice about a personal situation involving a disagreement with a friend. Nikos writes a detailed description of the situation, including his friend’s name, their school, and where they live. A few days later, he reads an article explaining that some AI platforms store user prompts to improve their systems. Nikos suddenly realises that the personal information he shared might have been recorded and analysed by the system.

What would you do in such a situation?

Option A: Continue using the AI tool in the same way because the system probably keeps the information private.
Option B: Use the AI tool but avoid sharing personal details such as names, locations, or sensitive information.
Option C: Stop using AI chatbots completely because they collect too much data.

Correct answer: Option B – Use the AI tool but avoid sharing personal details such as names, locations, or sensitive information.

Why this is the best choice: Option A ignores a real risk – many AI platforms do store and process user inputs, and privacy policies vary widely. Option C is unnecessarily extreme – AI tools can be used safely with the right precautions. Option B is the responsible middle ground: continue using AI tools for their benefits, but treat them like a public space. Never share full names, addresses, school names, financial details, or sensitive personal situations. A good rule of thumb: if you would not say it on a public forum, do not type it into an AI chatbot.

Open reflection: What type of information should never be shared with an AI tool or digital platform?

Why It Matters

Privacy and personal data protection are fundamental rights in the digital world. The General Data Protection Regulation (GDPR) establishes clear principles regarding transparency, accountability, and individuals’ rights over their information. AI systems can analyse patterns and infer information about users even when limited personal details are provided. Responsible AI use includes avoiding the sharing of personal identifiers, checking privacy settings, and thinking critically about the information you upload online (Floridi et al., 2018; UNICEF, 2021).

2.3. Over-reliance on AI and independent thinking

Reflection Questions

1. When an AI tool gives you an answer, do you usually accept it immediately or do you question it?

2. Could frequent use of AI tools affect how people think, learn, or solve problems on their own?

Ethical Issue Explained

AI tools can generate text, images, summaries, and answers in seconds. These capabilities make AI attractive for studying, creative work, and everyday problem solving. At the same time, excessive dependence on AI systems may reduce opportunities for independent thinking. Independent thinking involves analysing information, evaluating evidence, and developing personal conclusions. AI-generated responses sometimes appear highly confident and well structured, even when they contain errors or incomplete information. Responsible AI use involves balancing technological assistance with critical reflection and personal reasoning (Bender et al., 2021).

Scenario

Anna is preparing a presentation for her class about renewable energy. She decides to use an AI chatbot to gather information quickly. The chatbot produces a clear explanation and several arguments supporting solar power. Anna copies the information directly into her slides. During the presentation, a classmate asks about the environmental impact of solar panel production. Anna realises she cannot answer because she did not fully understand the topic. Her presentation relied entirely on the AI-generated explanation.

What would you do in such a situation?

Option A: Use the AI explanation exactly as provided because it saves time.
Option B: Use the AI output as a starting point, then read additional sources and develop your own understanding of the topic.
Option C: Avoid AI tools completely when preparing school work.

Correct answer: Option B – Use the AI output as a starting point, then read additional sources and develop your own understanding of the topic.

Why this is the best choice: Option A leads to exactly the problem Anna experienced – she could not answer follow-up questions because she never understood the material herself. Option C is unnecessarily restrictive. Option B ensures that AI serves as a research assistant, not a replacement for learning. The key habit is: after AI gives you information, read at least one or two additional sources, form your own view, and be ready to explain the topic in your own words. If you cannot explain it without the AI, you have not learned it.

Open reflection: How could AI support learning while still allowing you to develop your own ideas and knowledge?

Why It Matters

Independent thinking is one of the most important skills in education and civic life. Learning occurs most effectively when individuals engage with ideas, question information, and construct their own explanations (OECD, 2019). Passive acceptance of automated answers can reduce opportunities for deeper understanding. AI tools may help generate ideas or summarise large texts, but human judgement remains essential for interpreting results, identifying limitations, and forming conclusions.

2.4. AI and human relationships

Reflection Questions

1. Can a conversation with an AI chatbot feel similar to talking with a real person?

2. Could frequent interaction with AI systems influence how people communicate with friends, family, or classmates?

Ethical Issue Explained

AI systems increasingly interact with people through conversational chatbots, virtual assistants, and social media algorithms. Some AI tools are designed to simulate conversation, provide emotional support, or respond in ways that resemble human communication. AI systems do not possess emotions, personal experiences, or intentions. Their responses are generated through patterns in large datasets. Users may still feel a sense of connection during conversations with these systems, a process known as anthropomorphism. Awareness of this tendency helps young people recognise the difference between human relationships and interactions with automated systems (Floridi et al., 2018).

Scenario

Elena recently discovered an AI chatbot that offers friendly conversation and advice. She begins using it regularly, especially when she feels stressed about school. After some time, Elena finds herself sharing more personal thoughts with the chatbot than with her friends. She feels comfortable because the system never criticises her and always responds positively. One evening, Elena’s best friend asks why she has been distant lately. Elena realises she has spent more time chatting with the AI system than talking with people close to her.

What would you do in such a situation?

Option A: Continue talking mainly with the AI chatbot because it feels easier than discussing problems with other people.
Option B: Use the chatbot occasionally for ideas or advice, while continuing to share feelings and experiences with trusted friends or family.
Option C: Stop using conversational AI completely because it could replace real relationships.

Correct answer: Option B – Use the chatbot occasionally for ideas or advice, while continuing to share feelings and experiences with trusted friends or family.

Why this is the best choice: Option A risks social isolation – AI chatbots do not experience emotions, cannot truly understand your situation, and their supportive-sounding responses are generated by pattern matching, not empathy. Option C is too extreme – casual use of conversational AI is not inherently harmful. Option B recognises that AI can be a useful low-pressure space for organising thoughts, but real emotional support comes from human relationships. If you notice yourself consistently preferring AI over people, that is a signal to reconnect with friends, family, or a counsellor.

Open reflection: What role should AI systems have in conversations about personal experiences or emotions?

Why It Matters

Human relationships play an important role in emotional development, communication skills, and social understanding. Conversational AI systems may create the impression of understanding and empathy, but their responses result from algorithmic patterns rather than emotional awareness. Real relationships involve misunderstanding, compromise, and emotional complexity – experiences that contribute to social maturity and resilience. AI tools can complement support from teachers and friends, but human relationships remain essential for complex emotional situations (Turkle, 2015).

2.5. Making ethical choices online

Reflection Questions

1. When you post, share, or generate content online, do you think about how it might affect other people?

2. Should people take responsibility for the content they create or share using AI tools?

Ethical Issue Explained

Digital technologies allow people to create and distribute information faster than ever before. AI tools make this process even easier through automated text generation, image creation, and content editing. Ethical challenges arise when digital content influences other people’s rights, reputation, or access to accurate information. Responsible behaviour includes respecting privacy, avoiding harmful or misleading content, and recognising the impact of sharing information. AI-generated media may appear realistic even when it is inaccurate or manipulated, increasing the importance of ethical awareness and critical judgement (European Commission, 2022).

Scenario

Lukas enjoys experimenting with AI image generators. One afternoon he creates a realistic picture showing a famous athlete apparently promoting a product. The image looks authentic even though the athlete never participated in the advertisement. Lukas finds the image amusing and considers sharing it on social media as a joke. His friends might find it funny, but other people might believe the image is real.

What would you do in such a situation?

Option A: Post the image online because it is only a joke and people should recognise that it was created with AI.
Option B: Share the image but clearly explain that it is AI-generated and fictional.
Option C: Decide not to share the image because it might mislead people or harm someone’s reputation.

Correct answer: Option C – Decide not to share the image because it might mislead people or harm someone's reputation.

Why this is the best choice: Option A is irresponsible – research shows that many people do not recognise AI-generated images, and "it's just a joke" does not undo reputational damage. Option B is better than A but still problematic – even with a disclaimer, realistic fake images of real people can be taken out of context, re-shared without the explanation, and potentially violate the person's rights. Option C is the most ethical choice. Creating AI images for personal experimentation is fine, but sharing realistic fake depictions of real people – even as humour – crosses an ethical and potentially legal line. The responsible question to ask is: would the person in this image consent to it being shared?

Open reflection: What responsibilities do individuals have when creating or sharing AI-generated content online?

Why It Matters

Digital environments increasingly shape public discussion and social interaction. AI-generated content can look convincing even when fabricated. Deepfake technology demonstrates how easily digital media can imitate real individuals. Transparency and respect for others are important ethical principles in online environments. Young people who consider the consequences of their online actions contribute to more respectful and reliable digital spaces (Chesney & Citron, 2019; European Commission, 2019).

Ethical Dilemma Cards

Use the cards below in group discussions or for personal reflection. For each dilemma, discuss the options and think about what you would do and why.

Dilemma 1: The Group Project

Your team is working on a school project with a tight deadline. One team member suggests using AI to write the entire report. Another team member feels this would be dishonest. What should your team do?

Suggested response: The team should use AI as a support tool, not a replacement. A responsible approach would be to use AI for brainstorming, structuring ideas, or checking grammar, while the actual content and arguments should come from the team members. This way, everyone learns from the process and the work remains authentic. If your school has an AI use policy, follow it – and always be transparent about how AI was used.

Why it matters: Submitting fully AI-generated work as your own is a form of academic dishonesty. It also means you miss the learning opportunity. The goal of education is to develop your own thinking, not to produce a perfect-looking document.

Dilemma 2: The Viral Photo

A friend sends you a funny AI-generated image of a classmate in an embarrassing situation. It looks very realistic. Your friend wants you to share it in a group chat. What do you do?

Suggested response: Do not share it. Even though the image is AI-generated, it can cause real harm to the person depicted. The right thing to do is to tell your friend that sharing it could be hurtful and potentially illegal. If someone created a realistic fake image of you in an embarrassing situation, you would not want it shared either.

Why it matters: AI-generated images of real people without their consent raise serious ethical and legal issues. In many countries, creating and distributing realistic fake images of someone – especially if harmful or humiliating – can violate privacy laws or anti-bullying legislation. Digital empathy means treating others online the way you would want to be treated.

Dilemma 3: The Perfect CV

You are applying for a summer job and use an AI tool to write your CV and cover letter. The AI makes your experience sound much more impressive than it really is. Do you submit it as written, edit it to be more accurate, or write it yourself?

Suggested response: Edit it to be accurate. There is nothing wrong with using AI to help structure your CV or improve the language, but the content must truthfully represent your actual skills and experience. Exaggerating or fabricating qualifications is dishonest and can backfire – employers may ask about specific experiences mentioned in your CV during an interview.

Why it matters: Trust and honesty are the foundation of professional relationships. If you start a job based on inflated claims, you may find yourself unable to meet expectations. Using AI to polish your presentation is fine; using it to misrepresent yourself is not.

Dilemma 4: The Advice Bot

A younger sibling is struggling with a personal problem and starts relying heavily on an AI chatbot for emotional support instead of talking to family or friends. Should you intervene? How?

Suggested response: Yes, you should gently intervene. Start by showing interest and empathy – ask your sibling how they are doing without being judgmental. Explain that while AI chatbots can be useful for general information, they cannot truly understand emotions, provide real empathy, or offer the kind of support that trusted people can. Encourage them to talk to a family member, friend, school counsellor, or other trusted adult.

Why it matters: AI chatbots simulate conversation but do not have genuine understanding or emotional intelligence. For serious personal issues, relying solely on AI can delay getting proper help, and the AI may give inappropriate or even harmful advice. Human connection is essential for emotional wellbeing, especially for young people.

Dilemma 5: The News Story

You see a dramatic news story on social media with thousands of shares. The story supports something you already believe. You want to share it too, but you have not checked if it is true. What should you do first?

Suggested response: Stop and check before sharing. Use the SIFT method: Stop (pause before reacting), Investigate the source (who published this?), Find better coverage (do reputable news outlets report the same story?), and Trace claims back to the original source. The fact that a story confirms your existing beliefs makes it even more important to verify it – this is exactly how confirmation bias works.

Why it matters: Sharing unverified content – even unintentionally – contributes to the spread of misinformation. Every share amplifies the reach. Being a responsible digital citizen means taking a few minutes to verify before spreading information further, especially when the content is emotionally charged or sensational.


Self-Check: “Am I Using AI Responsibly?”

Rate yourself honestly on each of the 8 statements using the 1–5 scale below (1 = Never, 5 = Always). Your total is the sum of all 8 ratings, so scores can range from 8 (all “Never”) to 40 (all “Always”). Use the “How did you score?” guide further down to interpret your result.

Rate each statement from 1 to 5 (1 · Never, 2 · Rarely, 3 · Sometimes, 4 · Often, 5 · Always):

1. I check whether AI-generated information is accurate before using or sharing it.
2. I avoid sharing personal details (names, locations, sensitive information) with AI tools.
3. I rewrite AI-generated text in my own words rather than copying it directly.
4. I think about how my online actions might affect other people.
5. I recognise when I am relying too much on AI instead of thinking for myself.
6. I respect other people’s privacy when creating or sharing digital content.
7. I question AI outputs instead of accepting them without thought.
8. I use AI as a support tool, not as a replacement for my own learning.

Total: — / 40

How did you score?

Your total = the sum of your 1–5 ratings across all 8 statements (minimum 8, maximum 40).

36–40: AI-Responsible Champion. You are highly aware of how you use AI and consistently apply critical thinking and ethical considerations. Keep it up – and help others develop the same habits.

28–35: Thoughtful User. You have good awareness of responsible AI use, but there are areas where you could be more consistent. Look at the statements where you scored 2 or lower – those are your growth areas.

20–27: Developing Awareness. You are starting to think about responsible AI use, but you often rely on AI without fully questioning it. Try to build one new habit at a time – for example, always fact-checking AI outputs before sharing them.

12–19: Early Stage. AI plays a role in your life, but you have not yet developed strong habits around using it responsibly. That is okay – awareness is the first step. Revisit the chapters in this handbook and pick two or three concrete actions to start with.

8–11: Starting Point. You may not have thought much about responsible AI use yet. This handbook is a great place to begin. Start with small steps: next time you use an AI tool, ask yourself – is this output accurate? Is it fair? Am I being transparent about using it?
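For readers who like to see the arithmetic spelled out, the scoring guide above can be sketched in a few lines of Python. This is purely illustrative – the function name and structure are not part of the handbook, only the band thresholds and labels come from the guide:

```python
# Illustrative sketch of the self-check scoring described above.
# Band thresholds and labels come from the "How did you score?" guide.

BANDS = [
    (36, "AI-Responsible Champion"),
    (28, "Thoughtful User"),
    (20, "Developing Awareness"),
    (12, "Early Stage"),
    (8, "Starting Point"),
]

def interpret(ratings):
    """ratings: eight integers from 1 (Never) to 5 (Always)."""
    if len(ratings) != 8 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("expected eight ratings between 1 and 5")
    total = sum(ratings)  # possible range: 8 to 40
    for threshold, label in BANDS:
        if total >= threshold:
            return total, label

print(interpret([4, 3, 5, 4, 2, 5, 3, 4]))  # → (30, 'Thoughtful User')
```

Because the bands are checked from highest threshold downwards, each total maps to exactly one label.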

References

  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 610–623).
  • Chesney, R., & Citron, D. (2019). Deepfakes and the new disinformation war. Foreign Affairs, 98, 147.
  • European Commission. (2019). Ethics Guidelines for Trustworthy AI. High-Level Expert Group on Artificial Intelligence.
  • European Commission. (2022). Guidelines on the Ethical Use of Artificial Intelligence and Data in Teaching and Learning. Publications Office of the European Union.
  • European Parliament & Council. (2016). Regulation (EU) 2016/679: General Data Protection Regulation (GDPR). Official Journal of the European Union.
  • Floridi, L., Cowls, J., Beltrametti, M., et al. (2018). AI4People – An ethical framework for a good AI society. Minds and Machines, 28(4), 689–707.
  • Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI conference on human factors in computing systems (pp. 1–16).
  • OECD. (2019). OECD Learning Compass 2030. Organisation for Economic Co-operation and Development.
  • Turkle, S. (2015). Reclaiming Conversation: The Power of Talk in a Digital Age. Penguin.
  • UNICEF. (2021). Policy Guidance on AI for Children. UNICEF Office of Global Insight and Policy.

Chapter 3 – Misinformation and fake news: How AI shapes information

Chapter aim

In a world where billions of pieces of content are created, shared, and promoted through online platforms, social media, and applications, Artificial Intelligence (AI) plays an instrumental role in developing, distributing, and amplifying information. AI can spread any content at a speed and scale that was unimaginable even a decade ago. In this AI era, the UN highlighted that “disinformation is not only a top global threat – it’s the one countries feel least prepared to address” (UN Global Risk Report, 2024).

This chapter focuses on how AI contributes to the spread of both accurate and misleading information, what deepfakes look like, why misinformation travels so fast, and what we can each do about it. These are not just abstract issues – they affect the news we read, the decisions we make, and all aspects of our daily life. Young people who participated in the focus groups across Estonia, Cyprus, and Greece highlighted that while AI can be very useful, it can also produce inaccurate or misleading information. Therefore, responsible use of AI requires awareness, scepticism, and the ability to fact-check information independently.

3.1. How AI generates and spreads information

Concept Explained

As already explained in the previous chapters, AI systems, and particularly Large Language Models (LLMs) such as ChatGPT or Gemini, are trained on huge amounts of human-written text. These systems learn statistical patterns in language and can generate new content that sounds confident and coherent. However, AI does not know what is true; it can generate false information with equal fluency. It produces text that is statistically plausible, but not text that has been verified. Once AI-generated content is available online, recommendation algorithms on social media amplify it, often reaching millions of people before any correction is made (Chesney & Citron, 2019).

Case Study: AI-Generated Misinformation in Digital Media

In 2023, AI-generated images of an explosion near the Pentagon were shared widely on Twitter/X and briefly caused a small dip in stock markets before journalists debunked them. The images looked convincing because AI generators had been trained on thousands of real news photographs – and the story spread because algorithms amplify emotionally charged content, regardless of whether it is true (Brewster, 2023).

Warning Signs: Spotting AI-Generated or Unreliable Content on Social Media

The content sounds very confident but cites no verifiable sources.

The content triggers a strong emotional reaction, such as outrage, fear, or excitement.

The account sharing the content was created recently or has very few followers.

The same content cannot be found on established, reputable online sources.

Images or videos accompanying the content look slightly “off” (e.g. perfect lighting, no shadows, distorted backgrounds, strange hands).

The content uses vague language like “sources say” or “many experts believe”, but without specific references.

Try It Yourself

Find a headline from your social media feed today. Before believing or sharing it:

1. Search for the same content on two other reputable news websites.

2. Check the date – it could be a recycled old story.

3. Look up the original source.

Share your findings with others (e.g. family, friends, classmates) and discuss what you found.

3.2. Deepfakes and manipulated content

Concept Explained

Deepfakes are synthetic media – videos, images, or audio clips – that are generated or manipulated using AI to seem and sound real but are not. The technology is typically based on deep learning techniques such as Generative Adversarial Networks (GANs). Deepfake technology has become increasingly accessible through free and low-cost apps. While there are legitimate creative applications, deepfakes pose serious risks: they have been used to create non-consensual intimate imagery, to spread political disinformation, and to conduct financial fraud through voice cloning. The European Parliament has highlighted deepfakes as a growing threat to democratic discourse and individual rights (European Parliament, 2023).

Deepfake Detection Checklist

👁 Eyes & blinking – Do they blink naturally?

👄 Lip sync & mouth – Does speech match lip shape?

Hands & fingers – Right number and shape?

💡 Lighting & shadows – Are shadows consistent?

🔊 Audio & voice – Any odd pitch or timing?

🖼 Edges & background – Blurs or glitches at edges?
Case Study: Deepfake Audio in the 2023 Slovakia Election

In the lead-up to Slovakia’s 2023 parliamentary elections, a deepfake audio recording circulated on social media appearing to show a leading politician discussing how to buy votes. The recording was posted just days before the election, during a period of media silence. Analysis later confirmed it was AI-generated, but by then it had been shared thousands of times and may have influenced voter opinion (Walker, 2023).

Warning Signs: How to Tell If a Video Might Be a Deepfake

Facial movements look slightly unnatural: blinking is too slow, too fast, or entirely absent.

The edges of the face or hair appear blurry, flickering, or inconsistent with the rest of the video.

Lighting on the face does not match the rest of the video.

Audio quality does not match the video – sync is slightly off, or the voice sounds hollow.

The clip has no original source and cannot be found on any verified channel.

Teeth, glasses, or jewellery appear distorted or digitally generated.

Try It Yourself

Watch a short clip from a video online. You can find deepfake examples on:

MIT Media Lab’s Detect Fakes: https://detectfakes.media.mit.edu

FotoForensics (for images): https://fotoforensics.com

List three things you notice that could indicate manipulation. Would you have spotted these without being told the video might be fake?

3.3. Why misinformation spreads so easily

Concept Explained

Misinformation is false or inaccurate information not necessarily spread with bad intent, while disinformation is deliberately false information designed to deceive. A landmark study published in Science found that false news spreads significantly faster, further, and more broadly than true news on social media, and that humans, not bots, were primarily responsible (Vosoughi et al., 2018). False stories tend to be more novel and emotionally charged, triggering fast, intuitive thinking. Social media platforms are designed to maximise engagement, and content that provokes strong emotional reactions achieves exactly that, regardless of truth.

Good Practice: The SPOTit Project

The SPOTit project – an Erasmus+ project specifically designed to build young people’s media literacy – developed practical tools, including a free training package, a certified online course, and digital escape rooms, to help young people and youth workers develop the critical thinking skills needed to push back against misinformation.

Get started: Visit https://spotitproject.eu, explore the Resources section, register for the certified MOOC at https://elearning.spotitproject.eu, and try the Digital Escape Rooms.

Case Study: The COVID-19 Infodemic

During the COVID-19 pandemic, health misinformation spread faster across WhatsApp, Facebook, and YouTube than official public health guidance. The WHO declared an “infodemic” – an overload of information, accurate and inaccurate mixed together. Studies found that AI recommendation algorithms were directing users from legitimate health content towards increasingly conspiratorial channels (WHO, 2020).

Warning Signs: Recognising When You Are Vulnerable to Misinformation

The story triggers an immediate strong emotional response, such as anger, fear, or euphoria.

The information confirms something you already believed (confirmation bias).

You feel an urgent need to share the content immediately, before checking it.

The content is framed as a secret that “mainstream media won’t tell you.”

Statistics or data are presented without hyperlinks to the original source.

The story is being shared by people you trust personally, creating social proof.

3.4. How to fact-check AI outputs

Concept Explained

AI language models are designed to produce fluent, confident-sounding text. But confidence in tone does not equal accuracy in content. AI systems can produce “hallucinations” – factually incorrect statements presented with apparent certainty. This includes inventing fake citations, misquoting real people, or describing events that never happened. Fact-checking AI outputs means verifying specific claims against primary or authoritative sources. The SIFT method, developed by digital literacy educator Mike Caulfield, provides a practical framework: Stop, Investigate the source, Find better coverage, and Trace claims to their origin (Caulfield, 2019).

How to Spot Misinformation – decision flow:

You see a claim online. Does it trigger strong emotions? If yes, pause before reacting.

Can you find the original source? If no, treat it as likely misinformation ✗.

Do reputable outlets report the same? If no, it needs more research; if yes, it is likely reliable ✓.
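The decision flow above can be sketched as a tiny function. The function and verdict strings are illustrative only – this is a reading aid, not an actual detection tool:

```python
# Minimal sketch of the "How to Spot Misinformation" decision flow.
# Names and verdict strings are illustrative, not an official tool.

def assess_claim(original_source_found: bool,
                 reputable_outlets_confirm: bool) -> str:
    """Walk the flow's two source checks and return a verdict."""
    # First step of the flow (not modelled here): if the claim triggers
    # strong emotions, pause before reacting at all.
    if not original_source_found:
        return "likely misinformation"
    if not reputable_outlets_confirm:
        return "needs more research"
    return "likely reliable"

# A viral claim with no traceable original source:
print(assess_claim(False, False))  # → likely misinformation
```

In practice these checks are human judgements, not booleans – the point of the sketch is the order of the questions, not automation.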
THE SIFT METHOD – How to Fact-Check Any Content

S – STOP: Pause. Don't share immediately. Check your emotional reaction.

I – INVESTIGATE: Look up the source. Who is behind this content? Are they credible?

F – FIND BETTER COVERAGE: Search for the same claim on other reputable outlets.

T – TRACE CLAIMS: Go back to the original source. Is the claim accurate in context?

Source: Caulfield, M. (2019). SIFT (The four moves). Hapgood. https://hapgood.us/2019/06/19/sift-the-four-moves/

Case Study: AI Hallucinations in Professional Contexts

In 2024, Air Canada's chatbot gave a customer incorrect information about a flight refund policy. The customer acted on the chatbot's answer, lost money, and took the airline to a tribunal – and won. AI systems do not know when they are wrong, so always check official sources directly (Moffatt v. Air Canada, 2024 BCCRT 149).

Warning Signs: When to Double-Check What AI Tells You

AI provides a citation or statistic but you cannot find it on any reputable database.

The AI gives a very precise figure (e.g., “72.4% of people...”) without any source.

The AI contradicts itself within the same conversation.

You use only one AI tool and assume its output is definitive.

The AI confidently describes recent events (AI knowledge may be months or years out of date).

You feel the AI “must know” because it sounds authoritative.

Fact-Checking Checklist

Use this checklist every time you receive information from an AI tool, or any source you are unsure about:

Good Practice: SPOTit Project MOOC

The SPOTit project’s free MOOC for youth workers includes practical modules on fact-checking skills, with an emphasis on using digital tools critically. It is available at elearning.spotitproject.eu.

Try It Yourself

Ask an AI chatbot three questions: one about history, one about science, one about recent news. Write down the answers. Now fact-check each one using the checklist above. How many claims could you verify? What did this tell you about how much you can trust AI outputs?

3.5. Being a responsible digital citizen

Concept Explained

Digital citizenship means participating in online life in ways that are ethical, safe, and responsible – not just for yourself, but for your community. In the era of AI, this includes being mindful of the content you consume, share, and create. Every time you share unverified information, you contribute to its spread. Every time you pause and check, you help break the chain of misinformation.

Good Practice: SPOTit Digital Escape Rooms

The SPOTit project created Digital Escape Rooms – interactive online challenges where young people use technology to solve puzzles, developing media literacy, critical thinking, and collaboration skills in a way that feels engaging rather than like a lesson. Access them at https://elearning.spotitproject.eu.

Case Study: Digital Citizenship in Practice

The #StopHateForProfit campaign in 2020 saw thousands of individuals and brands pause their social media advertising to pressure platforms into combating misinformation and hate speech. Youth-led movements like the Climate Action Network have used digital tools critically and responsibly – fact-checking claims, citing sources, and engaging constructively in public debate.

Warning Signs: Habits That Make You Part of the Problem

You share content without reading beyond the headline.

You assume something is true because people in your network believe it.

You use AI-written content and present it as your own without disclosure.

You share others’ personal information without their consent.

You use AI-generated images or voice clones of real people without their consent.

Try It Yourself

Write your personal digital responsibility commitment – three to five commitments about how you will engage with information online. Keep it somewhere visible throughout the programme.

My Digital Responsibility Commitments
Spot-the-Fake Exercise: The Headline Test

Read each headline. Mark it real or fake, and note your reason. Then use a fact-checking tool such as Snopes, Full Fact, or AFP Fact Check to check your instincts.

Headline 1: “Scientists Confirm Coffee Cures All Forms of Cancer, Study Says”
Answer: FAKE. No credible scientific study supports this claim. Red flag: absolute claim with no specific citation.

Headline 2: “EU Proposes New Rules on AI-Generated Political Content”
Answer: REAL. The EU has proposed regulation of AI-generated political content under the AI Act.

Headline 3: “Government Hiding Evidence of Alien Contact, Insider Reveals”
Answer: FAKE. Classic conspiracy framing. No credible source supports this.

Headline 4: “Youth Social Media Use Linked to Anxiety, New Research Finds”
Answer: REAL. Multiple peer-reviewed studies have linked excessive social media use to anxiety.

Headline 5: “Local AI Tool Predicts Lottery Numbers with 90% Accuracy”
Answer: FAKE. No tool can predict lottery numbers – lotteries are randomised by design.
Spot-the-Fake Exercise: Real or AI-Generated?

Read both paragraphs. Which do you trust more, and why?

Text A:

“The recent flooding in the region has left thousands displaced, with emergency services working around the clock to reach affected communities. Local authorities confirmed three people have died and dozens remain missing. Relief organisations are calling for immediate international support as temperatures are expected to drop sharply this week.”

I trust Text B more

“Experts from across the spectrum are increasingly concerned about the rapidly evolving situation, which many say could have far-reaching implications for the entire region and beyond. Multiple sources have indicated that this development represents a significant turning point, though official responses have so far been measured and cautious.”

Expert answer

Text A is more trustworthy – it contains specific, verifiable details: confirmed casualty figures, named actors (local authorities, relief organisations), and a concrete timeline that can be checked against other reports. Text B is likely AI-generated or deliberately vague: unnamed "experts" and "multiple sources", sweeping claims about "far-reaching implications", and not a single checkable fact.

References

  • Brewster, T. (2023, May 22). Fake AI image of Pentagon explosion briefly went viral and spooked markets. Forbes.
  • Caulfield, M. (2019). SIFT (The four moves). Hapgood. https://hapgood.us/2019/06/19/sift-the-four-moves/
  • Chesney, R., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753–1820.
  • Council of Europe. (2019). Digital Citizenship Education Handbook. Council of Europe Publishing.
  • European Parliament. (2023). Deepfakes and Democracy: Challenges and Ways Forward (PE 740.235). European Parliamentary Research Service.
  • SPOTit Project. (2021). SPOTit: Fighting Fake News in Social Media. Erasmus+ Programme.
  • Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151.
  • Walker, S. (2023, October 2). Slovakia election: Deepfake audio of liberal candidate discussing vote rigging spreads online. The Guardian.
  • Weiser, B. (2023, June 22). Here’s what happens when your lawyer uses ChatGPT. The New York Times.
  • Wineburg, S., & McGrew, S. (2019). Lateral reading and the nature of expertise. Teachers College Record, 121(11), 1–40.
  • World Health Organization. (2020). Infodemic Management. WHO.

Conclusions

Key messages from all chapters

This handbook has explored three essential dimensions of AI literacy for young people: understanding what AI is and how it works, navigating the ethical challenges of AI use, and developing the critical thinking skills needed to identify misinformation and AI-generated content. Across all three chapters, several key messages emerge. AI is a powerful tool, but it is not infallible – it can produce errors, reflect biases, and generate convincing but false information. Responsible AI use requires active engagement: checking facts, protecting privacy, thinking independently, and considering the impact of digital actions on others. Young people are not passive consumers of technology – they are active participants who can shape how AI is used in their communities and societies.

Practical tips for everyday AI use

1. Always verify AI-generated information against reliable, independent sources before using or sharing it.

2. Protect your privacy by avoiding sharing personal details with AI tools and checking platform privacy settings.

3. Use AI as a starting point, not a final product – add your own thinking, analysis, and creativity.

4. Be aware of AI bias and consider whether outputs might reflect stereotypes or cultural assumptions.

5. Practise the SIFT method when evaluating online content: Stop; Investigate the source; Find better coverage; Trace claims to their original context.

6. Think before you share – consider the potential impact of content on others before posting or forwarding.

7. Maintain a healthy balance between AI-assisted tasks and independent thinking.

8. Stay informed about how AI technologies work and evolve – AI literacy is an ongoing learning process.

Youth quotes from focus groups

As part of the AIRY project, 45 young people aged 14–30 participated in focus groups conducted in Estonia, Cyprus, and Greece. Their voices, experiences, and concerns shaped the content of this handbook. Below is a selection of key insights shared by participants during these sessions.

On understanding AI
"AI started out as an aid for creative assignments, but now it has become a daily habit – we use it for planning studies, managing time, even cooking."
— Focus group participant, Greece
"We use AI mainly for brainstorming and idea generation, assisting with school assignments, managing daily tasks, and exploring creative projects. But it should not replace our own thinking and decision-making."
— Focus group participants, Estonia
"AI provides quick responses and can enhance productivity, but its outputs can be inaccurate, include hallucinations, and reflect biases from its data sources."
— Focus group participant, Cyprus
On trust and critical thinking
"We do not trust AI with certainty – we approach it with skepticism and critical thinking. Often the answers are not satisfactory and sometimes misleading."
— Focus group participant, Greece
"AI can present incorrect information, misleading references, or convincing but false claims. You always need to independently check and cross-reference what it gives you."
— Focus group participant, Estonia
"The increasing realism of AI-generated media, including videos and social media posts, makes it more difficult to discern what is true. We need to learn how to recognise this."
— Focus group participant, Cyprus
On risks and concerns
"Our culture of taking the easy way out leads us astray and makes us complacent. If AI continues to lead us astray, we will have a distorted view and perception of the world."
— Focus group participant, Greece
"Over-reliance on AI is a real risk – it can weaken independent thinking and erode the ability to learn deeply. Strategies that provide ready knowledge without an actual learning process can lead us to just receive information without ever doubting it."
— Focus group participant, Estonia
"There are ethical concerns about personal data and privacy. We don't always know how the information we share with AI tools is stored, analysed, or reused."
— Focus group participant, Cyprus
On AI and human relationships
"Some lonely people who do not have human relationships are repelled from being extroverted because of AI. Over-familiarity with AI can affect human relationships. It can reach the point of turning us into algorithms."
— Focus group participant, Greece
"AI should enhance rather than replace human judgment – supporting tasks like brainstorming, learning, and improving ideas, while we maintain our social and reflective skills."
— Focus group participant, Estonia
On what AI should be
"AI is a tool and not the solution to everyday life and knowledge, but it must be managed wisely and within limits. It requires analytical skills for its proper use, self-regulation, and critical thinking."
— Focus group participant, Greece
"AI is a supportive resource – useful in education, healthcare, environmental protection, and community work – but it should never replace human judgment. We need structured education that combines practical exercises, ethical guidance, and digital literacy."
— Focus group participant, Cyprus
"We need new interactive ways of learning that are more enjoyable and keep us interested. Interaction is important for the learning process – not just sterile knowledge. But discipline, attention, and boundaries are also necessary."
— Focus group participant, Greece