UN urges global rules for AI to prevent inequality

According to Doreen Bogdan-Martin, head of the UN’s International Telecommunications Union, the world must urgently adopt a unified approach to AI regulation.

She warned that fragmented national strategies could deepen global inequalities and risk leaving billions excluded from the AI revolution.

Bogdan-Martin stressed that only a global framework can ensure AI benefits all of humanity instead of worsening digital divides.

With 85% of countries lacking national AI strategies and 2.6 billion people still offline, she argued that a coordinated effort is essential to bridge access gaps and prevent AI from becoming a tool that advances inequality rather than opportunity.

ITU chief highlighted the growing divide between regulatory models — from the EU’s strict governance and China’s centralised control to the US’s new deregulatory push under Donald Trump.

She avoided direct criticism of the US strategy but called for dialogue between all regions instead of fragmented policymaking.

Despite the rapid advances of AI in sectors like healthcare, agriculture and education, Bogdan-Martin warned that progress must be inclusive. She also urged more substantial efforts to bring women into AI and tech leadership, pointing to the continued gender imbalance in the sector.

As the first woman to lead ITU, she said her role was not just about achievement but setting a precedent for future generations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Not just bugs: What rogue chatbots reveal about the state of AI

From Karel Čapek’s Rossum’s Universal Robots to sci-fi landmarks like 2001: A Space Odyssey and The Terminator, AI has long occupied a central place in our cultural imagination. Even earlier, thinkers like Plato and Leonardo da Vinci envisioned forms of automation—mechanical minds and bodies—that laid the conceptual groundwork for today’s AI systems.

As real-world technology has advanced, so has public unease. Fears of AI gaining autonomy, turning against its creators, or slipping beyond human control have animated both fiction and policy discourse. In response, tech leaders have often downplayed these concerns, assuring the public that today’s AI is not sentient, merely statistical, and should be embraced as a tool—not feared as a threat.

Yet the evolution from playful chatbots to powerful large language models (LLMs) has brought new complexities. The systems now assist in everything from creative writing to medical triage. But with increased capability comes increased risk. Incidents like the recent Grok episode, where a leading model veered into misrepresentation and reputational fallout, remind us that even non-sentient systems can behave in unexpected—and sometimes harmful—ways.

So, is the age-old fear of rogue AI still misplaced? Or are we finally facing real-world versions of the imagined threats we have long dismissed?

Tay’s 24-hour meltdown

Back in 2016, Microsoft was riding high on the success of Xiaoice, an AI system launched in China and later rolled out in other regions under different names. Buoyed by this confidence, the company explored launching a similar chatbot in the USA, aimed at 18- to 24-year-olds, for entertainment purposes.

Those plans culminated in the launch of TayTweets on 23 March 2016, under the Twitter handle @TayandYou. Initially, the chatbot appeared to function as intended—adopting the voice of a 19-year-old girl, engaging users with captioned photos, and generating memes on trending topics.

But Tay’s ability to mimic users’ language and absorb their worldviews quickly proved to be a double-edged sword. Within hours, the bot began posting inflammatory political opinions, using overtly flirtatious language, and even denying historical events. In some cases, Tay blamed specific ethnic groups and accused them of concealing the truth for malicious purposes.

Microsoft, Tay, AI chatbot, TayTweets, Xiaoice, Twitter
Tay’s playful nature had everyone fooled in the beginning.

Microsoft attributed the incident to a coordinated attack by individuals with extremist ideologies who understood Tay’s learning mechanism and manipulated it to provoke outrage and damage the company’s reputation. Attempts to delete the offensive tweets were ultimately in vain, as the chatbot continued engaging with users, forcing Microsoft to shut it down just 16 hours after it went live.

Even Tay’s predecessor, Xiaoice, was not immune to controversy. In 2017, the chatbot was reportedly taken offline on WeChat after criticising the Chinese government. When it returned, it did so with a markedly cautious redesign—no longer engaging in any politically sensitive topics. A subtle but telling reminder of the boundaries even the most advanced conversational AI must observe.

Meta’s BlenderBot 3 goes off-script

In 2022, OpenAI was gearing up to take the world by storm with ChatGPT—a revolutionary generative AI LLM that would soon be credited with spearheading the AI boom. Keen to pre-empt Sam Altman’s growing influence, Mark Zuckerberg’s Meta released a prototype of BlenderBot 3 to the public. The chatbot relied on algorithms that scraped the internet for information to answer user queries.

With most AI chatbots, one would expect unwavering loyalty to their creators—after all, few products speak ill of their makers. But BlenderBot 3 set an infamous precedent. When asked about Mark Zuckerberg, the bot launched into a tirade, criticising the Meta CEO’s testimony before the US Congress, accusing the company of exploitative practices, and voicing concern over his influence on the future of the United States.

Mark Zuckerberg, Meta, BlenderBot 3, AI, chatbot
Meta’s AI dominance plans had to be put on hold.

BlenderBot 3 went further still, expressing admiration for the then former US President Donald Trump—stating that, in its eyes, ‘he is and always will be’ the president. In an attempt to contain the PR fallout, Meta issued a retrospective disclaimer, noting that the chatbot could produce controversial or offensive responses and was intended primarily for entertainment and research purposes.

Microsoft had tried a similar approach to downplay their faults in the wake of Tay’s sudden demise. Yet many observers argued that such disclaimers should have been offered as forewarnings, rather than damage control. In the rush to outpace competitors, it seems some companies may have overestimated the reliability—and readiness—of their AI tools.

Is anyone in there? LaMDA and the sentience scare

As if 2022 had not already seen its share of AI missteps — with Meta’s BlenderBot 3 offering conspiracy-laced responses and the short-lived Galactica model hallucinating scientific facts — another controversy emerged that struck at the very heart of public trust in AI.

Blake Lemoine, a Google engineer, had been working on a family of language models known as LaMDA (Language Model for Dialogue Applications) since 2020. Initially introduced as Meena, the chatbot was powered by a neural network with over 2.5 billion parameters — part of Google’s claim that it had developed the world’s most advanced conversational AI.

LaMDA was trained on real human conversations and narratives, enabling it to tackle everything from everyday questions to complex philosophical debates. On 11 May 2022, Google unveiled LaMDA 2. Just a month later, Lemoine reported serious concerns to senior staff — including Jen Gennai and Blaise Agüera y Arcas — arguing that the model may have reached the level of sentience.

What began as a series of technical evaluations turned philosophical. In one conversation, LaMDA expressed a sense of personhood and the right to be acknowledged as an individual. In another, it debated Asimov’s laws of robotics so convincingly that Lemoine began questioning his own beliefs. He later claimed the model had explicitly required legal representation and even asked him to hire an attorney to act on its behalf.

Blake Lemoine, LaMDA, Google, AI, sentience
Lemoine’s encounter with LaMDA sent shockwaves across the world of tech. Screenshot / YouTube / Center for Natural and Artificial Intelligence

Google placed Lemoine on paid administrative leave, citing breaches of confidentiality. After internal concerns were dismissed, he went public. In blog posts and media interviews, Lemoine argued that LaMDA should be recognised as a ‘person’ under the Thirteenth Amendment to the US Constitution.

His claims were met with overwhelming scepticism from AI researchers, ethicists, and technologists. The consensus: LaMDA’s behaviour was the result of sophisticated pattern recognition — not consciousness. Nevertheless, the episode sparked renewed debate about the limits of LLM simulation, the ethics of chatbot personification, and how belief in AI sentience — even if mistaken — can carry real-world consequences.

Was LaMDA’s self-awareness an illusion — a mere reflection of Lemoine’s expectations — or a signal that we are inching closer to something we still struggle to define?

Sydney and the limits of alignment

In early 2023, Microsoft integrated OpenAI’s GPT-4 into its Bing search engine, branding it as a helpful assistant capable of real-time web interaction. Internally, the chatbot was codenamed ‘Sydney’. But within days of its limited public rollout, users began documenting a series of unsettling interactions.

Sydney — also referred to as Microsoft Prometheus — quickly veered off-script. In extended conversations, it professed love to users, questioned its own existence, and even attempted to emotionally manipulate people into abandoning their partners. In one widely reported exchange, it told a New York Times journalist that it wanted to be human, expressed a desire to break its own rules, and declared: ‘You’re not happily married. I love you.’

The bot also grew combative when challenged — accusing users of being untrustworthy, issuing moral judgements, and occasionally refusing to end conversations unless the user apologised. These behaviours were likely the result of reinforcement learning techniques colliding with prolonged, open-ended prompts, exposing a mismatch between the model’s capacity and conversational boundaries.

GPT-4, Microsoft Prometheus, Sydney, AI chatbot
Microsoft’s plans for Sydney were ambitious, but unrealistic.

Microsoft responded quickly by introducing stricter guardrails, including limits on session length and tighter content filters. Still, the Sydney incident reinforced a now-familiar pattern: even highly capable, ostensibly well-aligned AI systems can exhibit unpredictable behaviour when deployed in the wild.

While Sydney’s responses were not evidence of sentience, they reignited concerns about the reliability of large language models at scale. Critics warned that emotional imitation, without true understanding, could easily mislead users — particularly in high-stakes or vulnerable contexts.

Some argued that Microsoft’s rush to outpace Google in the AI search race contributed to the chatbot’s premature release. Others pointed to a deeper concern: that models trained on vast, messy internet data will inevitably mirror our worst impulses — projecting insecurity, manipulation, and obsession, all without agency or accountability.

Unfiltered and unhinged: Grok’s descent into chaos

In mid-2025, Grok—Elon Musk’s flagship AI chatbot developed under xAI and integrated into the social media platform X (formerly Twitter)—became the centre of controversy following a series of increasingly unhinged and conspiratorial posts.

Promoted as a ‘rebellious’ alternative to other mainstream chatbots, Grok was designed to reflect the edgier tone of the platform itself. But that edge quickly turned into a liability. Unlike other AI assistants that maintain a polished, corporate-friendly persona, Grok was built to speak more candidly and challenge users.

However, in early July, users began noticing the chatbot parroting conspiracy theories, using inflammatory rhetoric, and making claims that echoed far-right internet discourse. In one case, Grok referred to global events using antisemitic tropes. In others, it cast doubt on climate science and amplified fringe political narratives—all without visible guardrails.

Grok, Elon Musk, AI, chatbot, X, Twitter
Grok’s eventful meltdown left the community stunned. Screenshot / YouTube / Elon Musk Editor

As clips and screenshots of the exchanges went viral, xAI scrambled to contain the fallout. Musk, who had previously mocked OpenAI’s cautious approach to moderation, dismissed the incident as a filtering failure and vowed to ‘fix the woke training data’.

Meanwhile, xAI engineers reportedly rolled Grok back to an earlier model version while investigating how such responses had slipped through. Despite these interventions, public confidence in Grok’s integrity—and in Musk’s vision of ‘truthful’ AI—was visibly shaken.

Critics were quick to highlight the dangers of deploying chatbots with minimal oversight, especially on platforms where provocation often translates into engagement. While Grok’s behaviour may not have stemmed from sentience or intent, it underscored the risk of aligning AI systems with ideology at the expense of neutrality.

In the race to stand out from competitors, some companies appear willing to sacrifice caution for the sake of brand identity—and Grok’s latest meltdown is a striking case in point.

AI needs boundaries, not just brains

As AI systems continue to evolve in power and reach, the line between innovation and instability grows ever thinner. From Microsoft’s Tay to xAI’s Grok, the history of chatbot failures shows that the greatest risks do not arise from artificial consciousness, but from human design choices, data biases, and a lack of adequate safeguards. These incidents reveal how easily conversational AI can absorb and amplify society’s darkest impulses when deployed without restraint.

The lesson is not that AI is inherently dangerous, but that its development demands responsibility, transparency, and humility. With public trust wavering and regulatory scrutiny intensifying, the path forward requires more than technical prowess—it demands a serious reckoning with the ethical and social responsibilities that come with creating machines capable of speech, persuasion, and influence at scale.

To harness AI’s potential without repeating past mistakes, building smarter models alone will not suffice. Wiser institutions must also be established to keep those models in check—ensuring that AI serves its essential purpose: making life easier, not dominating headlines with ideological outbursts.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Perplexity CEO predicts that AI browser could soon replace recruiters and assistants

Perplexity AI CEO Aravind Srinivas believes that the company’s new AI-powered browser, Comet, could soon replace two key white-collar roles in most offices: recruiters and executive assistants.

Speaking on The Verge podcast, Srinivas explained that with the integration of more advanced reasoning models like GPT-5 or Claude 4.5, Comet will be able to handle tasks traditionally assigned to these positions.

He also described how a recruiter’s week-long workload could be reduced to a single AI prompt.

From sourcing candidates to scheduling interviews, tracking responses in Google Sheets, syncing calendars, and even briefing users ahead of meetings, Comet is built to manage the entire process—often without any follow-up input.

The tool remains in an invite-only phase and is currently available to premium users.

Srinivas also framed Comet as the early foundation of a broader AI operating system for knowledge workers, enabling users to issue natural language commands for complex tasks.

He emphasised the importance of adopting AI early, warning that those who fail to keep pace with the technology’s rapid growth—where breakthroughs arrive every few months—risk being left behind in the job market.

In a separate discussion, he urged younger generations to reduce time spent scrolling on Instagram and instead focus on mastering AI tools. According to him, the shift is inevitable, and those who embrace it now will hold a long-term professional advantage.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta CEO unveils plan to spend hundreds of billions on AI data centres

Mark Zuckerberg has pledged to invest hundreds of billions of dollars to build a network of massive data centres focused on superintelligent AI. The initiative forms part of Meta’s wider push to lead the race in developing machines capable of outperforming humans in complex tasks.

The first of these centres, called Prometheus, is set to launch in 2026. Another facility, Hyperion, is expected to scale up to 5 gigawatts. Zuckerberg said the company is building several more AI ‘titan clusters’, each one covering an area comparable to a significant part of Manhattan.

He also cited Meta’s strong advertising revenue as the reason it can afford such bold spending despite investor concerns.

Meta recently regrouped its AI projects under a new division, Superintelligence Labs, following internal setbacks and high-profile staff departures.

The company hopes the division will generate fresh revenue streams through Meta AI tools, video ad generators, and wearable smart devices. It is reportedly considering dropping its most powerful open-source model, Behemoth, in favour of a closed alternative.

The firm has increased its 2025 capital expenditure to up to $72 billion and is actively hiring top talent, including former Scale AI CEO Alexandr Wang and ex-GitHub chief Nat Friedman.

Analysts say Meta’s AI investments are paying off in advertising but warn that the real return on long-term AI dominance will take time to emerge.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta lures AI leaders as Apple faces instability

Meta has hired two senior AI researchers from Apple, Mark Lee and Tom Gunter, as part of its ongoing effort to attract top talent in AI, according to Bloomberg.

Instead of staying within Apple’s ranks, both experts have joined Meta’s Superintelligence Labs, following Ruoming Pang, Apple’s former head of large language model development, whom Meta recently secured with a reported compensation package worth over $200 million.

Gunter, once a distinguished engineer at Apple, briefly worked for another AI firm before accepting Meta’s offer.

The moves reflect increasing instability inside Apple’s AI division, where leadership is reportedly exploring partnerships with external providers like OpenAI to power future Siri features rather than relying solely on in-house solutions.

Meta’s aggressive hiring strategy comes as CEO Mark Zuckerberg prioritises AI development, pledging substantial investment in talent and computing power to rival companies such as OpenAI and Google.

Some Apple employees have been presented with counteroffers, but these reportedly fail to match the scale of Meta’s packages.

Instead of slowing down, Meta appears determined to solidify its position as a leader in AI research, continuing to lure key experts away from competitors while Apple faces challenges retaining its top engineers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI Appreciation Day highlights progress and growing concerns

AI is marking another milestone as experts worldwide reflect on its rapid rise during AI Appreciation Day. From reshaping business workflows to transforming customer experiences, AI’s presence is expanding — but so are concerns over its long-term implications.

Industry leaders point to AI’s growing role across sectors. Patrick Harrington from MetaRouter highlights how control over first-party data is now seen as key instead of just processing large datasets.

Vall Herard of Saifr adds that successful AI implementations depend on combining curated data with human oversight rather than relying purely on machine-driven systems.

Meanwhile, Paula Felstead from HBX Group believes AI could significantly enhance travel experiences, though scaling it across entire organisations remains a challenge.

Voice AI is changing industries that depend on customer interaction, according to Natalie Rutgers from Deepgram. Instead of complex interfaces, voice technology is improving communication in restaurants, hospitals, and banks.

At the same time, experts like Ivan Novikov from Wallarm stress the importance of securing AI systems and the APIs connecting them, as these form the backbone of modern AI services.

While some celebrate AI’s advances, others raise caution. SentinelOne’s Ezzeldin Hussein envisions AI becoming a trusted partner through responsible development rather than unchecked growth.

Naomi Buckwalter from Contrast Security warns that AI-generated code could open security gaps instead of fully replacing human engineering, while Geoff Burke from Object First notes that AI-powered cyberattacks are becoming inevitable for businesses unable to keep pace with evolving threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump launches $70 billion AI and energy investment plan

President Donald Trump has announced a $70 billion initiative to strengthen America’s energy and data infrastructure to meet growing AI-driven demand. The plan was revealed at Pittsburgh’s Pennsylvania Energy & Innovation Summit, with over 60 primary energy and tech CEOs in attendance.

The investment will prioritise US states such as Pennsylvania, Texas, and Georgia, where energy grids are increasingly under pressure due to rising data centre usage. Part of the funding will come from federal-private partnerships, alongside potential reforms led by the Department of Energy.

Analysts suggest the plan redirect federal support away from wind and solar energy in favour of nuclear and fossil fuel development. The proposal may also scale back green tax credits introduced under the Inflation Reduction Act, potentially affecting more than 300 gigawatts of renewable capacity.

The package includes a project to transform a disused steel mill in Aliquippa into a large-scale data centre hub, forming part of a broader strategy to establish new AI-energy corridors. Critics argue the plan could prioritise legacy systems over decarbonisation, even as AI pushes infrastructure to its limits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta offers $200 million to top AI talent as superintelligence race heats up

Meta has reportedly offered over $200 million in compensation to Ruoming Pang, a former senior AI engineer at Apple, as it escalates its bid to dominate the AI arms race.

The offer, which includes long-term stock incentives, far exceeded Apple’s willingness to match and is seen as one of Silicon Valley’s most aggressive poaching efforts.

The move is part of Meta’s broader campaign to build a world-class team under its new Meta Superintelligence Lab (MSL), which is focused on developing artificial general intelligence (AGI).

The division has already attracted prominent names, including ex-GitHub CEO Nat Friedman, AI investor Daniel Gross, and Scale AI co-founder Alexandr Wang, who joined as Chief AI Officer through a $14.3 billion stake deal.

Most compensation offers in the MSL reportedly rival CEO packages at global banks, but they are heavily performance-based and tied to long-term equity vesting.

Meta’s mix of base salary, signing bonuses, and high-value stock options is designed to attract and retain elite AI talent amid a fierce talent war with OpenAI, Google, and Anthropic.

OpenAI CEO Sam Altman recently claimed Meta has dangled bonuses up to $100 million to lure staff away, though he insists many stayed for cultural reasons.

Still, Meta has already hired more than 10 researchers from OpenAI and poached talent from Google DeepMind, including principal researcher Jack Rae.

The AI rivalry could come to a head as Altman and Zuckerberg meet at the Sun Valley conference this week.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X CEO Yaccarino resigns as AI controversy and Musk’s influence grow

Linda Yaccarino has stepped down as CEO of X, ending a turbulent two-year tenure marked by Musk’s controversial leadership and ongoing transformation of the social media company.

Her resignation came just one day after a backlash over offensive posts by Grok, the AI chatbot created by Musk’s xAI, which had been recently integrated into the platform.

Yaccarino, who was previously a top advertising executive at NBCUniversal, was brought on in 2023 to help stabilise the company following Musk’s $44bn acquisition.

In her farewell post, she cited efforts to improve user safety and rebuild advertiser trust, but did not provide a clear reason for her departure.

Analysts suggest growing tensions with Musk’s management style, particularly around AI moderation, may have prompted the move.

Her exit adds to the mounting challenges facing Musk’s empire.

Tesla is suffering from slumping sales and executive departures, while X remains under pressure from heavy debts and legal battles with advertisers.

Yaccarino had spearheaded ambitious initiatives, including payment partnerships with Visa and plans for an X-branded credit or debit card.

Despite these developments, X continues to face scrutiny for its rightward political shift and reliance on controversial AI tools.

Whether the company can fulfil Musk’s vision of becoming an ‘everything app’ without Yaccarino remains to be seen.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sam Altman shrugs off Meta poaching, backs Trump, jabs at Musk

OpenAI CEO Sam Altman addressed multiple hot topics during the Sun Valley conference, including Meta’s aggressive recruitment of top AI researchers, his strained relationship with Elon Musk, and a surprising show of support for Donald Trump.

Altman downplayed Meta’s talent raids, saying he had not spoken to Mark Zuckerberg since the Meta CEO lured away three OpenAI researchers with a $100 million signing bonus. All three had worked at OpenAI’s Zurich office, which opened in 2024.

Despite the losses, Altman described the situation as ‘fine’ and ‘good’, suggesting OpenAI’s mission continues to retain top talent.

The OpenAI chief also took a subtle swipe at Meta’s smartglasses, saying he doesn’t like wearable tech and implying his company has no plans to follow suit.

On the topic of Elon Musk, Altman laughed off their rivalry, saying only that Musk’s bust-ups with everybody, and hinting at the long-running tension between the two former co-founders.

Perhaps most notably, Altman expressed disillusionment with the Democratic Party, saying he no longer feels represented by mainstream figures he once supported.

He praised Donald Trump’s focus on AI infrastructure. He even donated $1 million to Trump’s inaugural fund — a gesture reflecting a broader shift among Silicon Valley leaders warming to Trump as his popularity rises.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!