AI regulation offers development opportunity for Latin America

Latin America is uniquely positioned to lead on AI governance by leveraging its social rights-focused policy tradition, emerging tech ecosystems, and absence of legacy systems.

According to a new commentary by Eduardo Levy Yeyati at the Brookings Institution, the region has the opportunity to craft smart AI regulation that is both inclusive and forward-looking, balancing innovation with rights protection.

Despite global momentum on AI rulemaking, Latin American regulatory efforts remain slow and fragmented, underlining the need for early action and regional cooperation.

The proposed framework recommends flexible, enforceable policies grounded in local realities, such as adapting credit algorithms for underbanked populations or embedding linguistic diversity in AI tools.

Governments are encouraged to create AI safety units, invest in public oversight, and support SMEs and open-source innovation to avoid monopolisation. Regulation should be iterative and participatory, using citizen consultations and advisory councils to ensure legitimacy and resilience through political shifts.

Regional harmonisation will be critical to avoid a patchwork of laws and promote Latin America’s role in global AI governance. Coordinated data standards, cross-border oversight, and shared technical protocols are essential for a robust, trustworthy ecosystem.

Rather than merely catching up, Latin America can become a global model for equitable and adaptive AI regulation tailored to the needs of developing economies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rights groups condemn Jordan’s media crackdown

At least 12 independent news websites in Jordan have been blocked by the authorities without any formal legal justification or opportunity for appeal. Rights groups have condemned the move as a serious violation of constitutional and international protections for freedom of expression.

The Jordanian Media Commission issued the directive on 14 May 2025, citing vague claims such as ‘spreading media poison’ and ‘targeting national symbols’, without providing evidence or naming the sites publicly.

The timing of the ban suggests it was a retaliatory act against investigative reports alleging profiteering by state institutions in humanitarian aid efforts to Gaza. Affected outlets were subjected to intimidation, and the blocks were imposed without judicial oversight or a transparent legal process.

Observers warn this sets a dangerous precedent, reflecting a broader pattern of repression under Jordan’s Cybercrime Law No. 17 of 2023, which grants sweeping powers to restrict online speech.

Civil society organisations call for the immediate reversal of the ban, transparency over its legal basis, and access to judicial remedies for affected platforms.

They urge a comprehensive review of the cybercrime law to align it with international human rights standards. Press freedom, they argue, is a pillar of democratic society and must not be sacrificed under the guise of combating disinformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI to disrupt jobs, warns DeepMind CEO, as Gen Alpha faces new realities

AI will likely cause significant job disruption in the next five years, according to Demis Hassabis, CEO of Google DeepMind. Speaking on the Hard Fork podcast, Hassabis emphasised that while AI is set to displace specific jobs, it will also create new roles that are potentially more meaningful and engaging.

He urged younger generations to prepare for a rapidly evolving workforce shaped by advanced technologies. Hassabis stressed the importance of early adaptation, particularly for Generation Alpha, who he believes should embrace AI just as millennials did the internet and Gen Z did smartphones.

Hassabis also called on students to become ‘ninjas with AI,’ encouraging them to understand how these tools work and master them for future success. While he highlighted the potential of generative AI, such as Google’s new Veo 3 video generator unveiled at I/O 2025, Hassabis also reminded listeners that a solid foundation in STEM remains vital.

He noted that soft skills like creativity, resilience, and adaptability are equally essential, as these traits will help young people thrive in a future defined by constant technological change. As AI becomes more deeply embedded in industries from education to entertainment, Hassabis' message is clear: the next generation must balance technical knowledge with human ingenuity to stay ahead in tomorrow's job market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pakistan aims to become global crypto and AI leader

Pakistan has set aside 2,000 megawatts of electricity in a major push to power Bitcoin mining and AI data centres, marking the start of a wider national digital strategy.

Led by the Pakistan Crypto Council (PCC), a body under the Ministry of Finance, this initiative aims to monetise surplus energy instead of wasting it, while attracting foreign investment, creating jobs, and generating much-needed revenue.

Bilal Bin Saqib, CEO of the PCC, stated that with proper regulation and transparency, Pakistan can transform into a global powerhouse for crypto and AI.

By redirecting underused power capacity, particularly from plants operating below potential, Pakistan seeks to convert a longstanding liability into a high-value asset, earning foreign currency through digital services and even storing Bitcoin in a national wallet.

Global firms have already shown interest, following recent visits from international miners and data centre operators.

Pakistan’s location — bridging Asia, the Middle East, and Europe — coupled with low energy costs and ample land, positions it as a competitive alternative to regional tech hubs like India and Singapore.

The arrival of the 2Africa subsea cable has further boosted digital connectivity and resilience, strengthening the case for domestic AI infrastructure.

The allocation is just the beginning of a multi-stage rollout. Plans include using renewable energy sources like wind, solar, and hydropower, while tax incentives and strategic partnerships are expected to follow.

With over 40 million crypto users and increasing digital literacy, Pakistan aims to emerge not just as a destination for digital infrastructure but as a sovereign leader in Web3, AI, and blockchain innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude Opus 4 sets a benchmark in AI coding as Anthropic’s revenue doubles

Anthropic has released Claude Opus 4 and Claude Sonnet 4, its most advanced AI models to date. The launch comes amid rapid industry growth, with the company’s annualised revenue reportedly doubling to $2 billion in the first quarter of 2025.

The Claude 4 models, built by the Amazon-backed company founded by former OpenAI executives, feature improvements in coding, autonomous task execution, and reasoning.

Opus 4 leads in the SWE-bench coding benchmark at 72.5 percent, outperforming OpenAI’s GPT-4.1 and Google’s Gemini 2.5 Pro. Designed for extended task execution, it can maintain focus for up to seven hours, simulating a full workday.

Anthropic says both Opus 4 and Sonnet 4 use hybrid reasoning systems. These allow near-instant responses alongside extended, tool-assisted tasks, including independent web searches, file analysis, and use of multiple tools simultaneously.

Claude models can also build ‘tacit knowledge’ from local file interactions, supporting continuity over time. Sonnet 4, a more efficient alternative to Opus, offers improved instruction following and is already integrated into GitHub’s next Copilot agent.

Both models support expanded developer tools and memory caching through Anthropic’s API, with direct integration into environments like VS Code and JetBrains.

Pricing for Claude Opus 4 is set at $15 per million input tokens and $75 per million output tokens. Sonnet 4 is offered at lower rates of $3 and $15, respectively. Opus 4 is included in Claude’s Pro, Max, Team, and Enterprise tiers, while Sonnet 4 is accessible to free users.
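For readers who want a sense of how these models and prices translate into practice, below is a minimal sketch using Anthropic's Python SDK. The model identifier string and the sample prompt are assumptions for illustration only; the cost arithmetic simply applies the quoted Opus 4 rates to the token counts the API reports.

```python
# Minimal sketch of calling Claude Opus 4 via Anthropic's Messages API.
# The model identifier below is an assumption for illustration; check
# Anthropic's documentation for the current model names.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-20250514",  # assumed identifier
    max_tokens=1024,
    messages=[
        {"role": "user",
         "content": "Review this function and suggest a fix for the off-by-one error."}
    ],
)
print(response.content[0].text)

# Rough cost estimate at the quoted Opus 4 rates:
# $15 per million input tokens, $75 per million output tokens.
input_tokens = response.usage.input_tokens
output_tokens = response.usage.output_tokens
cost = input_tokens / 1e6 * 15 + output_tokens / 1e6 * 75
print(f"Approximate cost of this request: ${cost:.4f}")
```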

The release also includes Claude Code, a developer assistant capable of reviewing pull requests, resolving CI errors, and proposing code edits. New API features support GitHub integrations, execution tools, and file management.

Anthropic is positioning itself in direct competition with OpenAI, Google, and Meta. While other firms lead in general reasoning and multimodal performance, Anthropic’s strength lies in sustained coding and planning tasks.

However, the company also acknowledged new safety concerns. Claude Opus 4 has triggered Anthropic’s AI Safety Level 3 protocol, following internal findings that it could help users with limited expertise produce hazardous materials.

In response, more than 100 safety controls have been implemented, including real-time monitoring, restricted data egress, and a bug bounty program. Claude Opus 4 and Sonnet 4 are available via Anthropic’s API, Amazon Bedrock, and Google Cloud Vertex AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia ramps up AI push with new Taiwan plans

Nvidia CEO Jensen Huang has urged Taiwan to embrace agentic AI and robotics to tackle its ongoing labour shortage.

Speaking before his departure from Taipei after a week-long visit, Huang said 2025 would be a ‘very exciting’ year for AI, as the technology can now ‘reason’ and work through problems step by step in ways not seen before.

The new wave of agentic AI, he explained, could assist people with various workplace and everyday tasks.

Huang added that Taiwan, despite being a hub of innovation, faces a lack of manpower. ‘Now with AI and robots, Taiwan can expand its opportunity,’ he said.

He also expressed enthusiasm over the production ramp-up of Blackwell, Nvidia’s latest GPU architecture built for AI workloads, noting that partners across Taiwan are already in full swing.

Huang’s trip included meetings with local partners and a keynote at Computex Taipei, where he unveiled Nvidia’s new Taiwan office and plans for the country’s first large-scale AI supercomputer.

In a TV interview, Huang urged the Taiwanese government to invest more in energy infrastructure to support the growing AI sector. He warned that the energy demands of AI development could exceed 100 megawatts in the near future, stressing that energy availability is the key limitation.

Taiwan’s expanding AI ecosystem — from chip plants to educational institutions — would require substantial support to thrive, he said, pledging to return for Chinese New Year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Silicon Valley fights over AI elite

Silicon Valley’s race to dominate AI has shifted focus from data centres and algorithms to a more human battlefield — elite researchers.

Since the arrival of ChatGPT in late 2022, the competition to attract and retain top AI minds has intensified, with companies offering staggering incentives to a tiny pool of experts.

Startups and tech giants alike are treating recruitment like a high-stakes game of chess. Former OpenAI researcher Ariel Herbert-Voss compared hiring strategies to balancing game pieces: ‘Do I have enough rooks? Enough knights?’

Companies like OpenAI, Google DeepMind, and Elon Musk’s xAI are pulling out all the stops — from private jets to personal calls — to secure researchers whose work can directly shape AI breakthroughs.

OpenAI has reportedly offered multi-million dollar bonuses to deter staff from joining rivals such as SSI, the startup led by former chief scientist Ilya Sutskever. Some retention deals include $2 million in bonuses and equity packages worth $20 million or more, with just a one-year commitment.

Google DeepMind has also joined the race with $20 million annual packages and fast-tracked stock vesting schedules for top researchers.

What makes this talent war so intense is the scarcity of these individuals. Experts estimate that only a few dozen to perhaps a thousand researchers are behind the most crucial advances in large language models.

With high-profile departures, such as OpenAI’s Mira Murati founding a new rival and recruiting 20 colleagues, the fight for AI’s brightest minds shows no signs of slowing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum computing partnership launches in Doha

Quantinuum and Al Rabban Capital have announced a new venture aimed at advancing quantum computing in Qatar and the region.

The partnership seeks to provide access to Quantinuum’s technologies, co-develop relevant quantum applications and train a new generation of developers.

This move aligns with Qatar’s ambition to become a hub for advanced technologies. Applications will focus on energy, medicine, genomics, and finance, with additional potential in emerging fields like Generative Quantum AI.

The venture builds on existing collaborations with Hamad Bin Khalifa University and the Qatar Center for Quantum Computing. Quantinuum’s expansion into Qatar follows growth across the US, UK, Europe, and Indo-Pacific.

Leaders from both organisations see this as a strategic milestone, strengthening technological ties between Qatar and the West. The joint venture not only supports national goals but also reflects rising global demand for quantum technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tesla robot learns to cook and clean

Tesla has released a new video showing its Optimus robot performing a variety of domestic tasks, from vacuuming floors to stirring food. Instructed through natural language prompts, the robot handled chores such as cleaning a table, tearing paper towels, and taking out the bin with notable precision.

The development marks another step forward in Tesla’s goal of making humanoid robots useful in everyday settings. The Optimus team claims a breakthrough now allows the robot to learn directly from first-person human videos, accelerating task training compared to traditional methods.

Reinforcement learning is also being used to help Optimus refine its skills through trial and error in simulations or the real world. Tesla hopes to eventually deploy thousands of these robots in its factories to perform repetitive or hazardous jobs.

While still far from superhuman, Optimus’s progress highlights how Tesla is positioning itself in the race to commercialise humanoid robots. Competitors around the world are also developing robots for work and home environments, aiming to reshape how humans interact with machines.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Jersey artists push back against AI art

A Jersey illustrator has spoken out against the growing use of AI-generated images, calling the trend ‘heartbreaking’ for artists who fear losing their livelihoods to technology.

Abi Overland, known for her intricate hand-drawn illustrations, said it was deeply concerning to see AI-created visuals being shared online without acknowledging their impact on human creators.

She warned that AI systems often rely on artists’ existing work for training, raising serious questions about copyright and fairness.

Overland stressed that these images are not simply a product of new tools but of years of human experience and emotion, something AI cannot replicate. She believes the increasing normalisation of AI content is dangerous and could discourage aspiring artists from entering the field.

Fellow Jersey illustrator Jamie Willow echoed the concern, saying many local companies are already replacing human work with AI outputs, undermining the value of art created with genuine emotional connection and moral integrity.

However, not everyone sees AI as a threat. Sebastian Lawson of Digital Jersey argued that artists could instead use AI to enhance their creativity rather than replace it. He insisted that human creators would always have an edge thanks to their unique insight and ability to convey meaning through their work.

The debate comes as the House of Lords recently blocked the UK government’s data bill for a second time, demanding stronger protections for artists and musicians against AI misuse.

Meanwhile, government officials have said they will not consider any copyright changes unless they are sure such moves would benefit creators as well as tech companies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!