Mexico drafts law to regulate AI in dubbing and animation

The Mexican government is preparing a law to regulate the use of AI in dubbing, animation, and voiceovers to prevent unauthorised voice cloning and safeguard creative rights.

Working with the National Copyright Institute and more than 128 associations, it aims to reform copyright legislation before the end of the year.

The plan would strengthen protections for actors, voiceover artists, and creative workers, while addressing contract conditions and establishing a ‘Made in Mexico’ seal for cultural industries.

The bill is expected to prohibit synthetic dubbing without consent, impose penalties for misuse, and recognise voice and image as biometric data.

Industry voices warn that AI has already disrupted work opportunities. Several dubbing firms in Los Angeles have closed, with their projects taken over by companies specialising in AI-driven dubbing.

Startups such as Deepdub and TrueSync have advanced the technology, dubbing films and television content across languages at scale.

Unions and creative groups argue that regulation is vital to protect both jobs and culture. While AI offers efficiency in translation and production, it cannot yet replicate the emotional depth of human performance.

The law is seen as Mexico’s first attempt to balance technological innovation with the rights of workers and creators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AWS expands tech skills programme to Tennessee

Amazon Web Services (AWS) is expanding its Skills to Jobs Tech Alliance to Tennessee, making it the sixth US state to join the initiative. The partnership with the Nashville Innovation Alliance targets Middle Tennessee’s rising demand for AI and cloud computing talent.

Between 2020 and 2023, tech job postings in the region increased by 35 percent, with around 8,000 roles currently open.

The programme will link students from local universities with employers and practical learning opportunities. Courses will be modernised to meet industry demand, ensuring students gain relevant AI and cloud expertise.

Local leaders emphasised the initiative’s potential to strengthen Nashville’s workforce. Mayor Freddie O’Connell stressed the importance of preparing residents for tech careers, while AWS and the Alliance aim to create sustainable pathways to high-paying roles.

The Tech Alliance has already reached 62,000 learners globally and engaged over 780 employers. Tennessee’s expansion aims to reach over 1,000 residents by 2027, with further statewide growth planned to boost Nashville’s role as a southeastern tech hub.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI’s Sora app raises tension between mission and profit

US AI company OpenAI has entered the social media arena with Sora, a new app offering AI-generated videos in a TikTok-style feed.

The launch has stirred debate among current and former researchers, with some praising its technical achievement and others worrying that it diverges from OpenAI’s nonprofit mission to develop AI for the benefit of humanity.

Researchers have expressed concerns about deepfakes, addictive loops and the ethical risks of AI-driven feeds. OpenAI insists Sora is designed for creativity rather than engagement, highlighting safeguards such as reminders for excessive scrolling and prioritisation of content from known contacts.

The company argues that revenue from consumer apps helps fund advanced AI research, including its pursuit of artificial general intelligence.

The debate reflects broader tensions within OpenAI: balancing commercial growth with its founding mission. Critics fear the consumer push could dilute its focus, while executives maintain that products like ChatGPT and Sora expand public access and provide essential funding.

Regulators are watching closely, questioning whether the company’s for-profit shift undermines its stated commitment to safety and ethical development.

Sora’s future remains uncertain, but its debut marks a significant expansion of AI-powered social platforms. Whether OpenAI can avoid the pitfalls that defined earlier social media models will be a key test of both its mission and its technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Calls for regulation grow as OpenAI and Meta adjust chatbots for teen mental health

OpenAI and Meta are adjusting how their chatbots handle conversations with teenagers showing signs of distress or asking about suicide. OpenAI plans to launch new parental controls this fall, enabling parents to link accounts, restrict features, and receive alerts if their child appears to be in acute distress.

The company says its chatbots will also route sensitive conversations to more capable models, aiming to improve responses to vulnerable users. The announcement follows a lawsuit alleging that ChatGPT encouraged a California teenager to take his own life earlier this year.

Meta, the parent company of Instagram and Facebook, is also tightening its restrictions. Its chatbots will no longer engage teens on self-harm, suicide, eating disorders, or inappropriate topics, instead redirecting them towards expert resources. Meta already offers parental controls across teen accounts.

The moves come amid growing scrutiny of chatbot safety. A RAND Corporation study found inconsistent responses from ChatGPT, Google’s Gemini, and Anthropic’s Claude when asked about suicide, suggesting the tools require further refinement before being relied upon in high-risk situations.

Lead author Ryan McBain welcomed the updates but called them only incremental. Without safety benchmarks and enforceable standards, he argued, companies remain self-regulating in an area where risks to teenagers are uniquely high.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

NSW expands secure AI platform NSWEduChat across schools

Following successful school trials, the New South Wales Department of Education has confirmed the broader rollout of its in-house generative AI platform, NSWEduChat.

The tool, developed within the department’s Sydney-based cloud environment, prioritises privacy, security, and equity while tailoring content to the state’s educational context. It is aligned with the NSW AI Assessment Framework.

The trial began in 16 schools in Term 1, 2024, and then expanded to 50 schools in Term 2. Teachers reported efficiency gains, and students showed strong engagement. Access was extended to all staff in Term 4, 2024, with Years 5–12 students due to follow in Term 4, 2025.

Key features include a privacy-first design, built-in safeguards, and a student mode that encourages critical thinking by offering guided prompts rather than direct answers. Staff can switch between staff and student modes for lesson planning and preparation.

All data is stored in Australia under departmental control. NSWEduChat is free and billed as the most cost-effective AI tool for schools. Other systems are accessible but not endorsed; staff must follow safety rules, while students are limited to approved tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Greece considers social media ban for under-16s, says Mitsotakis

Greek Prime Minister Kyriakos Mitsotakis has signalled that Greece may consider banning social media use for children under 16.

He raised the issue during a UN event in New York, hosted by Australia, titled ‘Protecting Children in the Digital Age’, held as part of the 80th UN General Assembly.

Mitsotakis emphasised that any restrictions would be coordinated with international partners, warning that the world is carrying out the largest uncontrolled experiment on children’s minds through unchecked social media exposure.

He cautioned that the long-term effects are uncertain but unlikely to be positive.

The prime minister pointed to new national initiatives, such as the ban on mobile phone use in schools, which he said has transformed the educational experience.

He also highlighted the recent launch of parco.gov.gr, which provides age verification and parental control tools to support families in protecting children online.

Mitsotakis stressed that difficulties enforcing such measures cannot serve as an excuse for inaction, urging global cooperation to address the growing risks children face in the digital age.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT gets family safety update with parental controls

OpenAI has introduced new parental controls for ChatGPT, giving families greater oversight of how teens use the AI platform. The tools, which are live for all users, allow parents to link accounts with their children and manage settings through a simple control dashboard.

The system introduces stronger safeguards for teen accounts, including filters on graphic or harmful content and restrictions on roleplay involving sex, violence or extreme beauty ideals.

Parents can also fine-tune features such as voice mode, memory and image generation, or set quiet hours when ChatGPT cannot be accessed.

A notification mechanism has been added to alert parents if a teen shows signs of acute distress, escalating to emergency services in critical cases. OpenAI said the controls were shaped by consultation with experts, advocacy groups, and policymakers and will be expanded as research evolves.

To complement the parental controls, a new online resource hub has been launched to help families learn how ChatGPT works and explore positive uses in study, creativity and daily life.

OpenAI also plans to roll out an age-prediction system that automatically applies teen-appropriate settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Balancing chaos and precision: The paradox of AI work

In a recent blog post, Jovan Kurbalija explores why working in AI often feels like living with two competing personalities. On one side is the explorer, curious, bold, and eager to experiment with new models and frameworks. That mindset thrives on quick bursts of creativity and the thrill of discovering novel possibilities.

Yet the same field demands the opposite: the engineer’s discipline, a relentless focus on precision, validation, and refinement until AI systems are both impressive and reliable.

The paradox makes the search for AI talent unusually difficult. Few individuals naturally embody both restless curiosity and meticulous perfectionism.

The challenge is amplified by AI itself, which often produces plausible but uncertain outputs, requiring both tolerance for ambiguity and an insistence on accuracy. It is a balancing act between ADHD-like energy and OCD-like rigour—traits rarely found together in one professional.

The tension is visible across disciplines. Diplomats, accustomed to working with probabilities in unpredictable contexts, approach AI differently from software developers trained in deterministic systems.

Large language models blur these worlds, demanding a blend of adaptability and engineering rigour. Recognising that no single person can embody all these traits, the solution lies in carefully designed teams that combine contrasting strengths.

Kurbalija points to Diplo’s AI apprenticeship as an example of this approach. Apprentices are exposed to both the ‘sprint’ of quickly building functional AI agents and the ‘marathon’ of refining them into robust, trustworthy systems. By embracing this duality, teams can bridge the gap between rapid innovation and reliable execution, turning AI’s inherent contradictions into a source of strength.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Technology and innovation define Researchers’ Night 2025 in Greece

Greece hosted the European Researchers’ Night 2025 on Friday, 26 September at the Thessaloniki Concert Hall, marking a significant celebration of science and technology.

It was coordinated by the Centre for Research and Technology Hellas (CERTH), which also celebrated its 25th anniversary.

Visitors experienced an extensive interactive technology exhibition featuring VR, autonomous robots and AI applications, alongside demonstrations across energy, digital systems and life sciences.

Attendees engaged directly with researchers and explored how cutting-edge research is transformed into practical innovations with societal and economic impact.

Contributions came from Aristotle University of Thessaloniki, the University of Ioannina, the International Hellenic University, the Anna Papageorgiou STEM Centre, the Hellenic Agricultural Organisation – DIMITRA, and the Astronomy Friends Association.

The event showcased CERTH’s spin-offs and technology transfer initiatives, highlighting how advanced research evolves into market-ready products and services. The ‘European Corner’ also presented EU policies and opportunities for research and innovation.

In parallel, the online ‘Chat Lab’ brought together 51 researchers for public discussions on emerging scientific issues, running until 3 October.

With simultaneous events in Athens, Heraklion, Patras, Larissa and Rethymno, the European Researchers’ Night once again reinforced Greece’s role in connecting frontier research with society.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta expands global rollout of teen accounts for Facebook and Messenger

US tech giant Meta is expanding its dedicated teen accounts to Facebook and Messenger users worldwide, extending a safety system first introduced on Instagram. The move brings more parental controls and restrictions to protect younger users across Meta’s platforms.

The accounts, now mandatory for teens, include stricter privacy settings that limit contact with unknown adults. Parents can supervise how their children use the apps, monitor screen time, and view who their teens are messaging.

For younger users aged 13 to 15, parental permission is required before adjusting safety-related settings. Meta is also deploying AI tools to detect teens lying about their age.

Alongside the global rollout, Instagram is expanding a school partnership programme in the US, allowing middle and high schools to report bullying and problematic behaviour directly.

The company says early feedback from participating schools has been positive, and the scheme is now open to all schools nationwide.

The expansion comes as Meta faces lawsuits and investigations over its record on child safety. By strengthening parental controls and school-based reporting, the company aims to address growing criticism while tightening protections for its youngest users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!