OpenAI backs policy push for Europe’s AI uptake

OpenAI and Allied for Startups have released Hacktivate AI, a set of 20 ideas to speed up AI adoption across Europe ahead of the Commission’s Apply AI Strategy.

The report emerged from a Brussels policy hackathon with 65 participants from EU bodies, governments, enterprises and startups, and proposes measures such as an Individual AI Learning Account, an AI Champions Network for SMEs, a European GovAI Hub and relentless harmonisation.

OpenAI highlights strong European demand and uneven workplace uptake, citing sector gaps and the need for targeted support, while pointing to initiatives like OpenAI Academy to widen skills.

Broader policy momentum is building, with the EU preparing an Apply AI Strategy to boost homegrown tools and cut dependencies, reinforcing the push for practical deployment across public services and industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU kicks off cybersecurity awareness campaign against phishing threats

European Cybersecurity Month (ECSM) 2025 has kicked off, with this year’s campaign centring on the growing threat of phishing attacks.

The initiative, driven by the EU Agency for Cybersecurity (ENISA) and the European Commission, seeks to raise awareness and provide practical guidance to European citizens and organisations.

Phishing remains the primary vector through which threat actors launch social engineering attacks, and this year’s ECSM materials expand the scope to include variants such as SMS phishing (smishing), QR code phishing (quishing), voice phishing (vishing), and business email compromise (BEC).

ENISA warns that, as of early 2025, over 80 percent of observed social engineering activity involved AI, with language models enabling more convincing and scalable scams.

To support the campaign, actors at every level, from individual citizens to large organisations, are encouraged to engage in training, simulations, awareness sessions and public outreach under the banner #ThinkB4UClick.

A cross-institutional kick-off event is also scheduled, bringing together the EU institutions, member states and civil society to align messaging and launch coordinated activities.

Mexico drafts law to regulate AI in dubbing and animation

The Mexican government is preparing a law to regulate the use of AI in dubbing, animation, and voiceovers to prevent unauthorised voice cloning and safeguard creative rights.

Working with the National Copyright Institute and more than 128 associations, it aims to reform copyright legislation before the end of the year.

The plan would strengthen protections for actors, voiceover artists, and creative workers, while addressing contract conditions and establishing a ‘Made in Mexico’ seal for cultural industries.

The bill is expected to prohibit synthetic dubbing without consent, impose penalties for misuse, and recognise voice and image as biometric data.

Industry voices warn that AI has already disrupted work opportunities. Several dubbing firms in Los Angeles have closed, with their projects taken over by companies specialising in AI-driven dubbing.

Startups such as Deepdub and TrueSync have advanced the technology, dubbing films and television content across languages at scale.

Unions and creative groups argue that regulation is vital to protect both jobs and culture. While AI offers efficiency in translation and production, it cannot yet replicate the emotional depth of human performance.

The law is seen as Mexico’s first attempt to balance technological innovation with the rights of workers and creators.

AWS expands tech skills programme to Tennessee

Amazon Web Services (AWS) is expanding its Skills to Jobs Tech Alliance to Tennessee, making it the sixth US state to join the initiative. The partnership with the Nashville Innovation Alliance targets Middle Tennessee’s rising demand for AI and cloud computing talent.

Between 2020 and 2023, tech job postings in the region increased by 35 percent, with around 8,000 roles currently open.

The programme will link students from local universities with employers and practical learning opportunities. Courses will be modernised to meet industry demand, ensuring students gain relevant AI and cloud expertise.

Local leaders emphasised the initiative’s potential to strengthen Nashville’s workforce. Mayor Freddie O’Connell stressed preparing residents for tech careers, while AWS and the Alliance aim to create sustainable pathways to high-paying roles.

The Tech Alliance has already reached 62,000 learners globally and engaged over 780 employers. Tennessee’s expansion aims to reach over 1,000 residents by 2027, with further statewide growth planned to boost Nashville’s role as a southeastern tech hub.

OpenAI’s Sora app raises tension between mission and profit

US AI company OpenAI has entered the social media arena with Sora, a new app offering AI-generated videos in a TikTok-style feed.

The launch has stirred debate among current and former researchers, with some praising its technical achievement and others worrying that it diverges from OpenAI’s nonprofit mission to develop AI for the benefit of humanity.

Researchers have expressed concerns about deepfakes, addictive loops and the ethical risks of AI-driven feeds. OpenAI insists Sora is designed for creativity rather than engagement, highlighting safeguards such as reminders for excessive scrolling and prioritisation of content from known contacts.

The company argues that revenue from consumer apps helps fund advanced AI research, including its pursuit of artificial general intelligence.

The debate reflects broader tensions within OpenAI: balancing commercial growth with its founding mission. Critics fear the consumer push could dilute its focus, while executives maintain that products like ChatGPT and Sora expand public access and provide essential funding.

Regulators are watching closely, questioning whether the company’s for-profit shift undermines its stated commitment to safety and ethical development.

Sora’s future remains uncertain, but its debut marks a significant expansion of AI-powered social platforms. Whether OpenAI can avoid the pitfalls that defined earlier social media models will be a key test of both its mission and its technology.

Calls for regulation grow as OpenAI and Meta adjust chatbots for teen mental health

OpenAI and Meta are adjusting how their chatbots handle conversations with teenagers showing signs of distress or asking about suicide. OpenAI plans to launch new parental controls this fall, enabling parents to link accounts, restrict features, and receive alerts if their child appears to be in acute distress.

The company says its chatbots will also route sensitive conversations to more capable models, aiming to improve responses to vulnerable users. The announcement follows a lawsuit alleging that ChatGPT encouraged a California teenager to take his own life earlier this year.

Meta, the parent company of Instagram and Facebook, is also tightening its restrictions. Its chatbots will no longer engage teens on self-harm, suicide, eating disorders, or inappropriate topics, instead redirecting them towards expert resources. Meta already offers parental controls across teen accounts.

The moves come amid growing scrutiny of chatbot safety. A RAND Corporation study found inconsistent responses from ChatGPT, Google’s Gemini, and Anthropic’s Claude when asked about suicide, suggesting the tools require further refinement before being relied upon in high-risk situations.

Lead author Ryan McBain welcomed the updates but called them only incremental. Without safety benchmarks and enforceable standards, he argued, companies remain self-regulating in an area where risks to teenagers are uniquely high.

NSW expands secure AI platform NSWEduChat across schools

Following successful school trials, the New South Wales Department of Education has confirmed the broader rollout of its in-house generative AI platform, NSWEduChat.

The tool, developed within the department’s Sydney-based cloud environment, prioritises privacy, security, and equity while tailoring content to the state’s educational context. It is aligned with the NSW AI Assessment Framework.

The trial began in 16 schools in Term 1, 2024, and then expanded to 50 schools in Term 2. Teachers reported efficiency gains, and students showed strong engagement. Access was extended to all staff in Term 4, 2024, with Years 5–12 students due to follow in Term 4, 2025.

Key features include a privacy-first design, built-in safeguards, and a student mode that encourages critical thinking by offering guided prompts rather than direct answers. Staff can switch between staff and student modes for lesson planning and preparation.

All data is stored in Australia under departmental control. NSWEduChat is free and billed as the most cost-effective AI tool for schools. Other systems are accessible but not endorsed; staff must follow safety rules, while students are limited to approved tools.

Greece considers social media ban for under-16s, says Mitsotakis

Greek Prime Minister Kyriakos Mitsotakis has signalled that Greece may consider banning social media use for children under 16.

He raised the issue during a UN event in New York, hosted by Australia, titled ‘Protecting Children in the Digital Age’, held as part of the 80th UN General Assembly.

Mitsotakis emphasised that any restrictions would be coordinated with international partners, warning that the world is carrying out the largest uncontrolled experiment on children’s minds through unchecked social media exposure.

He cautioned that the long-term effects are uncertain but unlikely to be positive.

The prime minister pointed to new national initiatives, such as the ban on mobile phone use in schools, which he said has transformed the educational experience.

He also highlighted the recent launch of parco.gov.gr, which provides age verification and parental control tools to support families in protecting children online.

Mitsotakis stressed that difficulties enforcing such measures cannot serve as an excuse for inaction, urging global cooperation to address the growing risks children face in the digital age.

ChatGPT gets family safety update with parental controls

OpenAI has introduced new parental controls for ChatGPT, giving families greater oversight of how teens use the AI platform. The tools, which are live for all users, allow parents to link accounts with their children and manage settings through a simple control dashboard.

The system introduces stronger safeguards for teen accounts, including filters on graphic or harmful content and restrictions on roleplay involving sex, violence or extreme beauty ideals.

Parents can also adjust features such as voice mode, memory and image generation, or set quiet hours when ChatGPT cannot be accessed.

A notification mechanism has been added to alert parents if a teen shows signs of acute distress, escalating to emergency services in critical cases. OpenAI said the controls were shaped by consultation with experts, advocacy groups, and policymakers and will be expanded as research evolves.

To complement the parental controls, a new online resource hub has been launched to help families learn how ChatGPT works and explore positive uses in study, creativity and daily life.

OpenAI also plans to roll out an age-prediction system that automatically applies teen-appropriate settings.

Balancing chaos and precision: The paradox of AI work

In a recent blog post, Jovan Kurbalija explores why working in AI often feels like living with two competing personalities. On one side is the explorer: curious, bold, and eager to experiment with new models and frameworks. That mindset thrives on quick bursts of creativity and the thrill of discovering novel possibilities.

Yet the same field demands the opposite: the engineer’s discipline, a relentless focus on precision, validation, and endless refinement until AI systems are not only impressive but reliable.

The paradox makes the search for AI talent unusually difficult. Few individuals naturally embody both restless curiosity and meticulous perfectionism.

The challenge is amplified by AI itself, which often produces plausible but uncertain outputs, requiring both tolerance for ambiguity and an insistence on accuracy. It is a balancing act between ADHD-like energy and OCD-like rigour—traits rarely found together in one professional.

The tension is visible across disciplines. Diplomats, accustomed to working with probabilities in unpredictable contexts, approach AI differently from software developers trained in deterministic systems.

Large language models blur these worlds, demanding a blend of adaptability and engineering rigour. Recognising that no single person can embody all these traits, the solution lies in carefully designed teams that combine contrasting strengths.

Kurbalija points to Diplo’s AI apprenticeship as an example of this approach. Apprentices are exposed to both the ‘sprint’ of quickly building functional AI agents and the ‘marathon’ of refining them into robust, trustworthy systems. By embracing this duality, teams can bridge the gap between rapid innovation and reliable execution, turning AI’s inherent contradictions into a source of strength.
