Inspired Education introduces AI-driven learning for primary schools

Inspired Education has unveiled a new AI-enabled primary teaching model designed to modernise traditional learning systems. The programme aims to better align education with how children learn in a digital and fast-changing environment.

The model combines core academic subjects in the morning with applied learning in the afternoon. Students focus on life skills such as problem-solving, entrepreneurship and communication alongside standard curriculum content.

Learning is structured around mastery rather than age, allowing children to progress at their own pace. AI-powered tools are used to personalise lessons and support faster and more adaptive learning outcomes.

The first early-access programme will launch in Central London in January 2027. Further rollouts are planned across cities, including Lisbon, Milan, Madrid, Mexico City, SĂŁo Paulo and Auckland.

Developers say the approach responds to growing demand from parents for AI-integrated education. The initiative reflects broader efforts to prepare students with digital, practical and future-ready skills.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Human data demand fuels new global digital economy

A growing number of individuals worldwide are participating in a new digital economy built around supplying data for AI systems.

Through platforms such as Kled AI and Silencio, users upload videos, audio recordings and personal interactions in exchange for payment, contributing to the development of increasingly sophisticated AI models.

Such a trend reflects a broader shift in the AI industry, where demand for high-quality human-generated data is rising as traditional web-based sources become more limited.

Researchers suggest that human data remains essential for improving system performance and modelling behaviour beyond existing datasets. As a result, data marketplaces have emerged as an alternative supply mechanism.

Economic considerations often shape participation. In regions facing limited employment opportunities or currency instability, earning income in global currencies can provide a meaningful financial incentive.

At the same time, similar practices are expanding in higher-income countries, where individuals seek supplementary income streams amid rising living costs.

However, the model introduces complex trade-offs.

Contributors may grant extensive usage rights over their data, sometimes on a long-term or irreversible basis. Experts note that such arrangements can reduce control over how personal information is reused, including in contexts not initially anticipated.

Concerns also extend to issues such as data security, transparency and the potential for misuse in areas including synthetic media and identity replication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI hiring tools are rejecting graduates before a human ever reads their CV

AI is increasingly taking over the early stages of hiring, with 89% of UK recruiters planning to use it more in the recruitment process this year.

For graduates like Bhuvana Chilukuri, a third-year business student at Queen Mary University of London who has applied for over 100 roles without a single offer, this means facing automatic CV screening and AI video interviews, with some rejections arriving in under two minutes.

The scale of the problem is significant on both sides. Denis Machuel, CEO of Adecco, one of the world’s largest recruitment specialists, noted that candidates now need to send an average of 200 applications to receive a single job offer.

Meanwhile, law firm Mishcon de Reya received 5,000 applications for just 35 roles in its last hiring round, a volume driven in part by candidates using AI to write and mass-submit applications, prompting employers to deploy AI to filter them out.

Supporters of AI hiring tools argue they can reduce human bias and deliver more consistent decisions. But critics warn the process strips candidates of their personality and humanity, with applicants describing feeling ‘robotic’ and ‘monotone’ while recording answers into a screen with no human interaction.

Machuel acknowledged the tension, calling for AI and human judgement to be combined at the right moments in the process, arguing that balance is the only way to break what he described as a growing ‘arms race.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI survey highlights opportunities, risks, and real-world use

A global survey by Anthropic of over 80,000 Claude users across 159 countries highlights how AI is increasingly shaping work, learning, and everyday life. Respondents cite benefits in productivity, skill-building, and task management, with AI helping save time and reduce mental effort.

Users highlight AI’s role in learning and personal growth, helping them access knowledge, gain confidence, and pursue careers or entrepreneurship previously out of reach. The study also shows AI assisting in organisation, research synthesis, and emotional support.

Alongside these benefits, concerns remain widespread. Reliability issues, job disruption, cognitive dependence, and privacy risks are frequently cited.

Many users describe navigating both advantages and potential harms, reflecting Anthropic’s ‘light and shade’ concept: AI can empower, yet create new risks and expectations.

Regional views differ: South America, Africa, and parts of Asia see AI as an opportunity, while Europe and the US focus on complexity, workload, and economic impact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US senator proposes AI rules for children

A US senator has introduced a draft framework to establish nationwide AI rules, with a focus on child safety and copyright protection. The proposal seeks to create a unified federal approach in place of differing state laws.

The plan would require developers to implement safeguards for minors, including age verification, data protection and mechanisms to report harm. Companies could also face legal action over failures linked to AI system design.

Copyright measures include new standards for identifying AI-generated content and preventing tampering. Authorities would also develop cybersecurity guidelines to support the transparency and authenticity of content.

Debate continues in the US over the balance between regulation and innovation, with some stakeholders warning of legal and economic risks. Discussions between lawmakers and the administration are expected to shape a final framework.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Firefox adds VPN and AI tools

Mozilla is preparing a major update to its Firefox browser, introducing a built-in VPN and new AI-powered tools. The company says the changes aim to strengthen privacy and give users greater control over browsing.

The integrated VPN will hide the user’s location and IP address while offering a limited monthly data allowance in selected regions. The feature replaces a previously separate paid service and will be built into the browser.

New AI tools will support tasks such as summarising content and comparing products without leaving a web page. Additional features include split-screen browsing and tools to organise notes across tabs.

The update also introduces redesigned settings and a refreshed interface to improve usability. Mozilla says the changes are intended to create a more personalised and modern browsing experience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bulgaria becomes first country to deploy a national AI model across a tax authority

Bulgaria’s National Revenue Agency (NRA) has begun rolling out an AI system developed by INSAIT, the Institute for Computer Science, Artificial Intelligence and Technology at Sofia University, across all of its organisational structures. The rollout makes the NRA the first large-scale public administrative body in the country to deploy the BgGPT national language model.

Following a successful pilot phase, the system is now in expanded use across the NRA’s central office and seven territorial directorates.

The AI system enables staff to conduct general and specialised searches related to tax and social security legislation, generating instant responses to improve service quality for citizens and businesses.

Crucially, it runs exclusively on open-weight models and operates on proprietary hardware, an approach specifically designed to prevent data leakage and protect privacy, two of the central concerns when integrating AI into government institutions.

The next phase of the project will see the system adapted for specialised use cases and integrated into internal processes alongside national integrator ‘Information Services’, with the goal of reaching daily use by more than 7,000 NRA employees.

INSAIT describes the initiative as a concrete contribution to European AI sovereignty, with Bulgaria combining nationally developed language models and locally controlled hardware to reduce dependence on commercial AI providers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mastercard expands AI strategy with new payments model

Mastercard has introduced a generative AI foundation model trained on billions of anonymised transactions. The model is designed as a backend system to power insights across payments and commerce services.

The company plans to extend AI use beyond fraud detection into cybersecurity, loyalty programmes and small-business tools. The model is being developed with support from Nvidia and Databricks technologies.

Earlier AI tools focused on fraud detection, significantly improving accuracy and reducing false positives. The new model marks a shift towards a broader infrastructure approach across multiple products.

This move aligns with Mastercard’s growing reliance on value-added services, which generated over $13 billion in revenue. These services include security, analytics and digital payment solutions beyond the core network.

Competitors such as Visa and PayPal are also expanding AI-driven commerce platforms. The race is intensifying as firms build integrated systems for payments, automation and intelligent services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube enlists users to rate videos as AI slop in content quality push

YouTube has introduced a new pop-up survey asking viewers to rate whether videos feel like ‘AI slop’, with users able to score content on a scale from ‘not at all’ to ‘extremely’ sloppy.

The feature began appearing on 17 March 2026 and marks a shift in approach, with YouTube now enlisting its audience directly to help identify low-quality, AI-generated content.

The move adds a third layer of detection on top of YouTube’s existing automated and human review systems, both of which have struggled to keep pace with the flood of AI-generated uploads.

Research found that roughly 21% of the first 500 videos recommended to a brand-new YouTube account were identified as AI slop, with a further 33% falling into a broader category of repetitive, low-substance content.

Combating this was named a 2026 priority by YouTube CEO Neal Mohan in his annual letter to the platform.

The survey has not been without controversy.

Critics on social media have pointed out that viewer-labelled ‘slop’ data could be fed into Google’s Veo video generation models, potentially training future AI to avoid the very patterns humans flag as low quality, raising questions about whether YouTube is crowdsourcing content moderation or, inadvertently, AI improvement.

YouTube has not clarified how the feedback data will be used.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Smart Ship Hub calls for careful approach to AI cameras on vessels

Digital vessel performance platform Smart Ship Hub is calling on the maritime industry to embrace AI-enabled camera systems as proactive safety tools, while insisting that their deployment must be underpinned by strong governance and genuine respect for seafarers’ working and living environments.

The company warns that, introduced without clarity or context, the technology risks being perceived as surveillance rather than safety enhancement.

Captain Nagpaul, Voyage Performance Specialist at Smart Ship Hub, outlined a broad range of operational applications for AI cameras at sea, from early fire detection and cargo monitoring during high-risk activities such as mooring operations, to improved situational awareness in areas of poor visibility and high vessel traffic.

The systems can also generate time-stamped visual records to support incident investigations and enable shore-based specialists to provide remote technical support through secure mobile applications.

Smart Ship Hub CEO Joy Basu argued that resisting the technology is not a viable strategy for the sector, noting that crew acceptance improves when workers see tangible benefits such as reduced workload and safer daily operations.

He described AI camera systems as powerful tools that enhance safety and strengthen the connection between ship and shore, but stressed they are not substitutes for professional experience and judgement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!