EU survey shows strong public backing for digital literacy in schools

A new Eurobarometer survey finds that Europeans want digital skills to hold the same status in schools as reading, mathematics and science.

Citizens view digital competence as essential for learning, future employment and informed participation in public life.

Nine in ten respondents believe that schools should guide pupils on how to handle the harmful effects of digital technologies on their mental health and well-being, rather than treating such issues as secondary concerns.

Most Europeans also support a more structured approach to online information. Eight in ten say digital literacy helps them avoid misinformation, while nearly nine in ten want teachers to be fully prepared to show students how to recognise false content.

A majority continues to favour restrictions on smartphones in schools, yet an even larger share supports the use of digital tools specifically designed for learning.

More than half say AI brings both opportunities and risks to classrooms, and believe these should be examined in greater depth.

Almost half want the EU to shape standards for the use of educational technologies, including rules on AI and data protection.

The findings will inform the European Commission’s 2030 Roadmap on digital education and skills, scheduled for release next year as part of the Union of Skills initiative.

The survey, carried out across all EU member states, reflects a growing expectation that digital education should become a central pillar of Europe’s teaching systems, rather than an optional enhancement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India moves toward mandatory AI royalty regime

India is weighing a sweeping copyright framework that would require AI companies to pay royalties for training on copyrighted works under a mandatory blanket licence, described as a hybrid ‘One Nation, One Licence, One Payment’ model.

A new Copyright Royalties Collective for AI Training, or CRCAT, would collect payments from developers and distribute money to creators. AI firms would have to rely only on lawfully accessed material and file detailed summaries of training datasets, including data types and sources.

The panel drafting the proposal is expected to favour flat royalty rates set as percentages of global revenue from commercial AI systems, reviewed roughly every three years and open to legal challenge in court.

Obligations would apply retroactively to AI developers that have already trained profitable models on copyright-protected material, framed by Indian policymakers as a corrective measure for the creative ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India expands job access with AI-powered worker platforms

India is reshaping support for its vast informal workforce through e-Shram, a national database built to connect millions of people to social security and better job prospects.

The database works together with the National Career Service portal, and both systems run on Microsoft Azure.

AI tools are now improving access to stable employment by offering skills analysis, resume generation and personalised career pathways.

The original aim of e-Shram was to create a reliable record of informal workers after the pandemic exposed major gaps in welfare coverage. Engineers had to build a platform capable of registering hundreds of millions of people while safeguarding sensitive data.

Azure’s scalable infrastructure allowed the system to process high transaction volumes and maintain strong security protocols. Support reached remote areas through a network of service centres, helped further by Bhashini, an AI language service offering real-time translation in 22 Indian languages.

More than 310 million workers are now registered and linked to programmes providing accident insurance, medical subsidies and housing assistance. The integration with NCS has opened paths to regulated work, often with health insurance or retirement savings.

Workers receive guidance on improving employability, while new features such as AI chatbots and location-focused job searches aim to help those in smaller cities gain equal access to opportunities.

India is using the combined platforms to plan future labour policies, manage skill development and support international mobility for trained workers.

Officials also hope the digital systems will reduce reliance on job brokers and strengthen safe recruitment, including abroad through links with the eMigrate portal.

The government has already presented the platforms to international partners and is preparing to offer them as digital public infrastructure for other countries seeking similar reforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Adobe brings its leading creative tools straight into ChatGPT

Yesterday, Adobe opened a new chapter for digital creativity by introducing Photoshop, Adobe Express and Adobe Acrobat inside ChatGPT.

The integration gives 800 million weekly users direct access to trusted creative and productivity tools through a conversational interface. Adobe aims to make creative work easier for newcomers by linking its technology to simple written instructions.

Photoshop inside ChatGPT offers selective edits, tone adjustments and creative effects, while Adobe Express brings quick design templates and animation features to people who want polished content without switching between applications.

Acrobat adds powerful document controls, allowing users to organise, edit or redact PDFs inside the chat. Each action blends conversation with Adobe’s familiar toolsets, giving users either simple text-driven commands or fine control through intuitive sliders.

The launch reflects Adobe’s broader investment in agentic AI and its Model Context Protocol. Earlier releases such as Acrobat Studio and AI Assistants for Photoshop and Adobe Express signalled Adobe’s ambition to expand conversational creative experiences.

Adobe also plans to extend an upcoming Firefly AI Assistant across multiple apps to support faster movement from an idea to a finished design.

All three apps are now available to ChatGPT users on desktop, web and iOS, with Android support expanding soon. Adobe positions the integration as an entry point for new audiences who may later move into the full desktop versions for deeper control.

The company expects the partnership to widen access to creative expression by letting anyone edit images, produce designs or transform documents simply by describing what they want to achieve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Three in ten US teens now use AI chatbots every day, survey finds

According to new data from the Pew Research Center, roughly 64% of US teens (aged 13–17) say they have used an AI chatbot, and about three in ten (30%) report daily use. Among teens who use chatbots, the most popular is ChatGPT (used by 59%), followed by Gemini (23%) and Meta AI (20%).

The widespread adoption raises growing safety and welfare concerns. As teenagers increasingly rely on AI for information, companionship or emotional support, critics point to potential risks, including exposure to biased content, misinformation, or emotionally manipulative interactions, particularly among vulnerable youth.

Legal action has already followed, with the families of at least two minors suing AI developers over allegedly harmful advice from chatbots.

Demographic patterns reveal that Black and Hispanic teens report higher daily usage rates (around 33–35%) than their White peers (about 22%). Daily use is also more common among older teens (ages 15–17) than younger ones.

For policymakers and digital governance stakeholders, the findings add urgency to calls for AI-specific safeguarding frameworks, especially where young people are concerned. As AI tools become embedded in adolescent life, ensuring transparency, responsible design, and robust oversight will be critical to preventing unintended harms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian families receive eSafety support as the social media age limit takes effect

Australia this week introduced a minimum age requirement of 16 for social media accounts, marking a significant shift in its online safety framework.

The eSafety Commissioner has begun monitoring compliance, offering a protective buffer for young people as they develop digital skills and resilience. Platforms now face stricter oversight, with potential penalties for systemic breaches, and age assurance requirements for both new and current users.

Authorities stress that the new age rule forms part of a broader effort aimed at promoting safer online environments, rather than relying on isolated interventions. Australia’s online safety programmes continue to combine regulation, education and industry engagement.

Families and educators are encouraged to utilise the resources on the eSafety website, which now features information hubs that explain the changes, how age assurance works, and what young people can expect during the transition.

Regional and rural communities in Australia are receiving targeted support, acknowledging that the change may affect them more sharply due to limited local services and higher reliance on online platforms.

Tailored guidance, conversation prompts, and step-by-step materials have been produced in partnership with national mental health organisations.

Young people are reminded that they retain access to group messaging tools, gaming services and video conferencing apps while they await eligibility for full social media accounts.

eSafety officials underline that the new limit introduces a delay rather than a ban. The aim is to reduce exposure to persuasive design and potential harm while encouraging stronger digital literacy, emotional resilience and critical thinking.

Ongoing webinars and on-demand sessions provide additional support as the enforcement phase progresses.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK partners with DeepMind to boost AI innovation

The UK Department for Science, Innovation and Technology (DSIT) has entered a strategic partnership with Google DeepMind to advance AI across public services, research, and security.

The non-legally binding memorandum of understanding outlines a shared commitment to responsible AI development, while enhancing national readiness for transformative technologies.

The collaboration will explore AI solutions for public services, including education, government departments, and the Incubator for AI (i.AI). Google DeepMind may provide engineering support and develop AI tools, including a government-focused version of Gemini aligned with the national curriculum.

Researchers will gain priority access to DeepMind’s AI models, including AlphaEvolve, AlphaGenome, and WeatherNext, with joint initiatives supporting automated R&D and lab facilities in the UK. The partnership seeks to accelerate innovation in strategically important areas such as fusion energy.

AI security will be strengthened through the UK AI Security Institute, which will share model insights, address emerging risks, and enhance national cyber preparedness. The MoU is voluntary, spans 36 months, and ensures compliance with data privacy laws, including UK GDPR.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI agents redefine knowledge work through cognitive collaboration

A new study by Perplexity and Harvard researchers sheds light on how people use AI agents at scale.

Millions of anonymised interactions were analysed to understand who relies on agent technology, how intensively it is used and which tasks users delegate. The findings challenge the notion of a digital concierge model, revealing a shift toward deeper cognitive collaboration rather than mere outsourcing of tasks.

More than half of all activity involves cognitive work, with strong emphasis on productivity, learning and research. Users depend on agents to scan documents, summarise complex material and prepare early analysis before making final decisions.

Students use AI agents to navigate coursework, while professionals rely on them to process information or filter financial data. The pattern suggests that users adopt agents to elevate their own capability instead of avoiding effort.

Usage also evolves over time. Early queries often involve low-pressure tasks, yet long-term behaviour moves sharply toward productivity and sustained research. Retention rates are highest among users working on structured workflows or knowledge-intensive tasks.

The trajectory mirrors that of the early personal computer, which gained value through spreadsheets and word processing rather than recreational use.

Six main occupations now drive most agent activity, with strong reliance among digital specialists as well as marketing, management and entrepreneurial roles. Context shapes behaviour: finance users concentrate on efficiency, while students favour research.

Designers and hospitality staff follow patterns linked to their professional needs. The study argues that knowledge work is increasingly shaped by the ability to ask better questions and that hybrid intelligence will define future productivity.

The pace of adaptation across the broader economy remains an open question.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

People trust doctors more than AI

New research shows that most people remain cautious about using ChatGPT for diagnoses but view AI more favourably when it supports cancer detection. The findings come from two nationally representative surveys presented at the Society for Risk Analysis annual meeting.

The study, led by researchers from USC and Baruch College, analysed trust and attitudes towards AI in medicine. Participants generally trusted human clinicians more, with only about one in six saying they trusted AI as much as a medical expert.

Individuals who had used AI tools such as ChatGPT tended to hold more positive attitudes, reporting greater understanding and enthusiasm for AI-assisted healthcare. Familiarity appeared to reduce hesitation and increase confidence in the technology.

When shown an AI system for early cervical cancer detection, respondents reported more excitement than fear and saw clear potential in the technology. The results suggest that concrete, real-world applications can help build trust in medical AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google revisits smart glasses market with AI-powered models

Google has announced plans to re-enter the smart-glasses market in 2026 with new AI-powered wearables, a decade after discontinuing its ill-fated Google Glass.

The company will introduce two models: one without a screen that provides AI assistance through voice and sensor interaction, and another with an integrated display. The glasses will integrate Google’s Gemini AI system.

The move comes as the sector experiences rapid growth. Meta has sold more than two million pairs of its Ray-Ban-branded AI glasses, helping drive a 250% year-on-year surge in smart-glasses sales in early 2025.

Analysts say Google must avoid repeating the missteps of Google Glass, which suffered from privacy concerns, awkward design, and limited functionality before being withdrawn in 2015.

Google’s renewed effort benefits from advances in AI and more mature consumer expectations, but challenges remain. Privacy, data protection, and real-world usability issues, core concerns during Google Glass’s first iteration, are expected to resurface as AI wearables become more capable and pervasive.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!