Safer Internet Day: ‘Smart tech, safe choices’
Today is Safer Internet Day, coordinated by the UK Safer Internet Centre, and this year’s theme, ‘Smart tech, safe choices – Exploring the safe and responsible use of AI’, reflects the rapid pace at which AI tools are becoming part of everyday life.
The theme focuses our attention on the digital safeguarding challenges that artificial intelligence raises, a key priority for schools.
From chatbots and image generators to recommendation algorithms and automated decision-making, children and young people are interacting with AI more often than perhaps many adults realise. Safer Internet Day 2026 is an opportunity to pause, reflect and build the knowledge and judgement needed to use these technologies safely and responsibly.
AI is not inherently harmful. Used well, it can support learning, creativity, accessibility and inclusion. However, without the right understanding and boundaries, AI also introduces new safeguarding risks which education settings cannot afford to ignore.
Key concerns include:
- Misinformation and deepfakes: AI-generated text, images and videos can blur the line between fact and fiction, making it harder for young people to judge what is real;
- Privacy and data protection: many AI tools collect, store and reuse data, sometimes in ways that are not transparent to users;
- Bias and discrimination: AI systems can reflect and amplify existing social biases, affecting how people are represented or treated online;
- Over-reliance on technology: pupils may begin to depend on AI tools for thinking, writing and decision-making, which can affect learning, independence and the development of critical life skills;
- Safeguarding and exploitation risks: AI can be misused to generate sexualised images, impersonate individuals or facilitate grooming and coercion.
It is important for educators and safeguarding leads to move beyond surface-level warnings to better support children to understand how AI works and what responsible use looks like in practice.
The theme ‘Smart tech, safe choices’ places the emphasis on agency and judgement, rather than fear or blanket bans. For schools and colleges, this means helping learners to:
- Ask critical questions about AI outputs: Who created this? What data was used? What might be missing or distorted?;
- Understand that AI tools are not neutral or all-knowing, and can make mistakes;
- Recognise when and where AI use is appropriate, and when it crosses ethical or academic boundaries;
- Make informed choices about sharing personal information online;
- Speak up when something feels wrong, misleading or unsafe.
Effective approaches may include:
- Curriculum activities exploring AI, ethics and digital literacy in age-appropriate ways;
- Staff CPD on emerging AI risks, safeguarding implications and professional use of AI tools;
- Parent and carer engagement, helping families understand the platforms and tools children may be using at home;
- Pupil voice, creating space for young people to talk honestly about how they use AI and what support they need.
For governance, it is also important to ensure that policies, training and curriculum content are up to date and incorporate measures for safe AI use, particularly in relation to online safety, data protection and safeguarding. Aligning closely with current safeguarding expectations around online safety, digital resilience and professional curiosity, schools should consider how AI features within:
- Online safety and acceptable use policies;
- Staff codes of conduct;
- Curriculum planning (including RSHE and digital literacy);
- Safeguarding training and reporting procedures.
As AI continues to evolve, so will the opportunities and risks it presents.
This year’s theme reminds us that safeguarding in the age of AI is not solely about technology; it includes values, judgement and shared responsibility. This is a timely moment for all educational settings to reflect, update practice and reaffirm their commitment to keeping children and young people safe online.
SSS Learning
9 February 2026