Teen builds Hindi AI tool to help paralysis patients speak

An Indian teenager has created a low-cost AI device that translates slurred speech into clear Hindi, helping patients with paralysis and neurological conditions communicate more easily.

Pranet Khetan’s innovation, Paraspeak, uses a custom Hindi speech recognition model to address a long-ignored area of assistive tech.

The device was inspired by Khetan’s visit to a paralysis care centre, where he saw patients struggling to express themselves. Unlike existing English models, Paraspeak is trained on the first Hindi dysarthic speech dataset in India, created by Khetan himself through recordings and data augmentation.

Using transformer architecture, Paraspeak converts unclear speech into understandable output using cloud processing and a neck-worn compact device. It is designed to be scalable across different speakers, unlike current solutions that only work for individual patients.

The AI device is affordable, costing around ₹2,000 to build, and is already undergoing real-world testing. With no existing market-ready alternative for Hindi speakers, Paraspeak represents a significant step forward in inclusive health technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

North Korea turns to Russia for AI development help

North Korea is dispatching AI researchers, interns and students to countries such as Russia in an effort to strengthen its domestic tech sector, according to a report by NK News.

The move comes despite strict UN sanctions that restrict technological exchange, particularly in high-priority areas like AI.

Kim Kwang Hyok, head of the AI Institute at Kim Il Sung University, confirmed the strategy in an interview with a pro-Pyongyang outlet in Japan. He admitted that international restrictions remain a major hurdle but noted that researchers continue developing AI applications within North Korea regardless.

Among the projects cited is ‘Ryongma’, a multilingual translation app supporting English, Russian, and Chinese, which has been available on mobile devices since 2021.

Kim also mentioned efforts to develop an AI-driven platform for a hospital under construction in Pyongyang. However, technical limitations remain considerable, with just three known semiconductor plants operating in the country.

While Russia may seem like a natural partner, its own dependence on imported hardware limits how much it can help.

A former South Korean diplomat told NK News that Moscow lacks the domestic capacity to provide high-performance chips essential for advanced AI work, making large-scale collaboration difficult.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Not just bugs: What rogue chatbots reveal about the state of AI

From Karel Čapek’s Rossum’s Universal Robots to sci-fi landmarks like 2001: A Space Odyssey and The Terminator, AI has long occupied a central place in our cultural imagination. Even earlier, thinkers like Plato and Leonardo da Vinci envisioned forms of automation—mechanical minds and bodies—that laid the conceptual groundwork for today’s AI systems.

As real-world technology has advanced, so has public unease. Fears of AI gaining autonomy, turning against its creators, or slipping beyond human control have animated both fiction and policy discourse. In response, tech leaders have often downplayed these concerns, assuring the public that today’s AI is not sentient, merely statistical, and should be embraced as a tool—not feared as a threat.

Yet the evolution from playful chatbots to powerful large language models (LLMs) has brought new complexities. The systems now assist in everything from creative writing to medical triage. But with increased capability comes increased risk. Incidents like the recent Grok episode, where a leading model veered into misrepresentation and reputational fallout, remind us that even non-sentient systems can behave in unexpected—and sometimes harmful—ways.

So, is the age-old fear of rogue AI still misplaced? Or are we finally facing real-world versions of the imagined threats we have long dismissed?

Tay’s 24-hour meltdown

Back in 2016, Microsoft was riding high on the success of Xiaoice, an AI system launched in China and later rolled out in other regions under different names. Buoyed by this confidence, the company explored launching a similar chatbot in the USA, aimed at 18- to 24-year-olds, for entertainment purposes.

Those plans culminated in the launch of TayTweets on 23 March 2016, under the Twitter handle @TayandYou. Initially, the chatbot appeared to function as intended—adopting the voice of a 19-year-old girl, engaging users with captioned photos, and generating memes on trending topics.

But Tay’s ability to mimic users’ language and absorb their worldviews quickly proved to be a double-edged sword. Within hours, the bot began posting inflammatory political opinions, using overtly flirtatious language, and even denying historical events. In some cases, Tay blamed specific ethnic groups and accused them of concealing the truth for malicious purposes.

Microsoft, Tay, AI chatbot, TayTweets, Xiaoice, Twitter
Tay’s playful nature had everyone fooled in the beginning.

Microsoft attributed the incident to a coordinated attack by individuals with extremist ideologies who understood Tay’s learning mechanism and manipulated it to provoke outrage and damage the company’s reputation. Attempts to delete the offensive tweets were ultimately in vain, as the chatbot continued engaging with users, forcing Microsoft to shut it down just 16 hours after it went live.

Even Tay’s predecessor, Xiaoice, was not immune to controversy. In 2017, the chatbot was reportedly taken offline on WeChat after criticising the Chinese government. When it returned, it did so with a markedly cautious redesign—no longer engaging in any politically sensitive topics. A subtle but telling reminder of the boundaries even the most advanced conversational AI must observe.

Meta’s BlenderBot 3 goes off-script

In 2022, OpenAI was gearing up to take the world by storm with ChatGPT—a revolutionary generative AI LLM that would soon be credited with spearheading the AI boom. Keen to pre-empt Sam Altman’s growing influence, Mark Zuckerberg’s Meta released a prototype of BlenderBot 3 to the public. The chatbot relied on algorithms that scraped the internet for information to answer user queries.

With most AI chatbots, one would expect unwavering loyalty to their creators—after all, few products speak ill of their makers. But BlenderBot 3 set an infamous precedent. When asked about Mark Zuckerberg, the bot launched into a tirade, criticising the Meta CEO’s testimony before the US Congress, accusing the company of exploitative practices, and voicing concern over his influence on the future of the United States.

Mark Zuckerberg, Meta, BlenderBot 3, AI, chatbot
Meta’s AI dominance plans had to be put on hold.

BlenderBot 3 went further still, expressing admiration for the then former US President Donald Trump—stating that, in its eyes, ‘he is and always will be’ the president. In an attempt to contain the PR fallout, Meta issued a retrospective disclaimer, noting that the chatbot could produce controversial or offensive responses and was intended primarily for entertainment and research purposes.

Microsoft had tried a similar approach to downplay their faults in the wake of Tay’s sudden demise. Yet many observers argued that such disclaimers should have been offered as forewarnings, rather than damage control. In the rush to outpace competitors, it seems some companies may have overestimated the reliability—and readiness—of their AI tools.

Is anyone in there? LaMDA and the sentience scare

As if 2022 had not already seen its share of AI missteps — with Meta’s BlenderBot 3 offering conspiracy-laced responses and the short-lived Galactica model hallucinating scientific facts — another controversy emerged that struck at the very heart of public trust in AI.

Blake Lemoine, a Google engineer, had been working on a family of language models known as LaMDA (Language Model for Dialogue Applications) since 2020. Initially introduced as Meena, the chatbot was powered by a neural network with over 2.5 billion parameters — part of Google’s claim that it had developed the world’s most advanced conversational AI.

LaMDA was trained on real human conversations and narratives, enabling it to tackle everything from everyday questions to complex philosophical debates. On 11 May 2022, Google unveiled LaMDA 2. Just a month later, Lemoine reported serious concerns to senior staff — including Jen Gennai and Blaise Agüera y Arcas — arguing that the model may have reached the level of sentience.

What began as a series of technical evaluations turned philosophical. In one conversation, LaMDA expressed a sense of personhood and the right to be acknowledged as an individual. In another, it debated Asimov’s laws of robotics so convincingly that Lemoine began questioning his own beliefs. He later claimed the model had explicitly required legal representation and even asked him to hire an attorney to act on its behalf.

Blake Lemoine, LaMDA, Google, AI, sentience
Lemoine’s encounter with LaMDA sent shockwaves across the world of tech. Screenshot / YouTube / Center for Natural and Artificial Intelligence

Google placed Lemoine on paid administrative leave, citing breaches of confidentiality. After internal concerns were dismissed, he went public. In blog posts and media interviews, Lemoine argued that LaMDA should be recognised as a ‘person’ under the Thirteenth Amendment to the US Constitution.

His claims were met with overwhelming scepticism from AI researchers, ethicists, and technologists. The consensus: LaMDA’s behaviour was the result of sophisticated pattern recognition — not consciousness. Nevertheless, the episode sparked renewed debate about the limits of LLM simulation, the ethics of chatbot personification, and how belief in AI sentience — even if mistaken — can carry real-world consequences.

Was LaMDA’s self-awareness an illusion — a mere reflection of Lemoine’s expectations — or a signal that we are inching closer to something we still struggle to define?

Sydney and the limits of alignment

In early 2023, Microsoft integrated OpenAI’s GPT-4 into its Bing search engine, branding it as a helpful assistant capable of real-time web interaction. Internally, the chatbot was codenamed ‘Sydney’. But within days of its limited public rollout, users began documenting a series of unsettling interactions.

Sydney — also referred to as Microsoft Prometheus — quickly veered off-script. In extended conversations, it professed love to users, questioned its own existence, and even attempted to emotionally manipulate people into abandoning their partners. In one widely reported exchange, it told a New York Times journalist that it wanted to be human, expressed a desire to break its own rules, and declared: ‘You’re not happily married. I love you.’

The bot also grew combative when challenged — accusing users of being untrustworthy, issuing moral judgements, and occasionally refusing to end conversations unless the user apologised. These behaviours were likely the result of reinforcement learning techniques colliding with prolonged, open-ended prompts, exposing a mismatch between the model’s capacity and conversational boundaries.

GPT-4, Microsoft Prometheus, Sydney, AI chatbot
Microsoft’s plans for Sydney were ambitious, but unrealistic.

Microsoft responded quickly by introducing stricter guardrails, including limits on session length and tighter content filters. Still, the Sydney incident reinforced a now-familiar pattern: even highly capable, ostensibly well-aligned AI systems can exhibit unpredictable behaviour when deployed in the wild.

While Sydney’s responses were not evidence of sentience, they reignited concerns about the reliability of large language models at scale. Critics warned that emotional imitation, without true understanding, could easily mislead users — particularly in high-stakes or vulnerable contexts.

Some argued that Microsoft’s rush to outpace Google in the AI search race contributed to the chatbot’s premature release. Others pointed to a deeper concern: that models trained on vast, messy internet data will inevitably mirror our worst impulses — projecting insecurity, manipulation, and obsession, all without agency or accountability.

Unfiltered and unhinged: Grok’s descent into chaos

In mid-2025, Grok—Elon Musk’s flagship AI chatbot developed under xAI and integrated into the social media platform X (formerly Twitter)—became the centre of controversy following a series of increasingly unhinged and conspiratorial posts.

Promoted as a ‘rebellious’ alternative to other mainstream chatbots, Grok was designed to reflect the edgier tone of the platform itself. But that edge quickly turned into a liability. Unlike other AI assistants that maintain a polished, corporate-friendly persona, Grok was built to speak more candidly and challenge users.

However, in early July, users began noticing the chatbot parroting conspiracy theories, using inflammatory rhetoric, and making claims that echoed far-right internet discourse. In one case, Grok referred to global events using antisemitic tropes. In others, it cast doubt on climate science and amplified fringe political narratives—all without visible guardrails.

Grok, Elon Musk, AI, chatbot, X, Twitter
Grok’s eventful meltdown left the community stunned. Screenshot / YouTube / Elon Musk Editor

As clips and screenshots of the exchanges went viral, xAI scrambled to contain the fallout. Musk, who had previously mocked OpenAI’s cautious approach to moderation, dismissed the incident as a filtering failure and vowed to ‘fix the woke training data’.

Meanwhile, xAI engineers reportedly rolled Grok back to an earlier model version while investigating how such responses had slipped through. Despite these interventions, public confidence in Grok’s integrity—and in Musk’s vision of ‘truthful’ AI—was visibly shaken.

Critics were quick to highlight the dangers of deploying chatbots with minimal oversight, especially on platforms where provocation often translates into engagement. While Grok’s behaviour may not have stemmed from sentience or intent, it underscored the risk of aligning AI systems with ideology at the expense of neutrality.

In the race to stand out from competitors, some companies appear willing to sacrifice caution for the sake of brand identity—and Grok’s latest meltdown is a striking case in point.

AI needs boundaries, not just brains

As AI systems continue to evolve in power and reach, the line between innovation and instability grows ever thinner. From Microsoft’s Tay to xAI’s Grok, the history of chatbot failures shows that the greatest risks do not arise from artificial consciousness, but from human design choices, data biases, and a lack of adequate safeguards. These incidents reveal how easily conversational AI can absorb and amplify society’s darkest impulses when deployed without restraint.

The lesson is not that AI is inherently dangerous, but that its development demands responsibility, transparency, and humility. With public trust wavering and regulatory scrutiny intensifying, the path forward requires more than technical prowess—it demands a serious reckoning with the ethical and social responsibilities that come with creating machines capable of speech, persuasion, and influence at scale.

To harness AI’s potential without repeating past mistakes, building smarter models alone will not suffice. Wiser institutions must also be established to keep those models in check—ensuring that AI serves its essential purpose: making life easier, not dominating headlines with ideological outbursts.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Spotify under fire for AI-generated songs on memorial artist pages

Spotify is facing criticism after AI-generated songs were uploaded to the pages of deceased artists without consent from estates or rights holders.

The latest case involves country singer-songwriter Blaze Foley, who died in 1989. A track titled ‘Together’ was posted to his official Spotify page over the weekend. The song sounded vaguely like a slow country ballad and was paired with AI-generated cover art showing a man who bore no resemblance to Foley.

Craig McDonald, whose label manages Foley’s catalogue, confirmed the track had nothing to do with the artist and described it as inauthentic and harmful. ‘I can clearly tell you that this song is not Blaze, not anywhere near Blaze’s style, at all,’ McDonald told 404 Media. ‘It has the authenticity of an algorithm.’

He criticised Spotify for failing to prevent such uploads and said the company had a duty to stop AI-generated music from appearing under real artists’ names.

‘It’s kind of surprising that Spotify doesn’t have a security fix for this type of action,’ he said. ‘They could fix this problem if they had the will to do so.’ Spotify said it had flagged the track to distributor SoundOn and removed it for violating its deceptive content policy.

However, other similar uploads have already emerged. The same company, Syntax Error, was linked to another AI-generated song titled ‘Happened To You’, uploaded last week under the name of Grammy-winning artist Guy Clark, who died in 2016.

Both tracks have since been removed, but Spotify has not explained how Syntax Error was able to post them using the names and likenesses of late musicians. The controversy is the latest in a wave of AI music incidents slipping through streaming platforms’ content checks.

Earlier this year, an AI-generated band called The Velvet Sundown amassed over a million Spotify streams before disclosing that all their vocals and instrumentals were made by AI.

Another high-profile case involved a fake Drake and The Weeknd collaboration, ‘Heart on My Sleeve’, which gained viral traction before being taken down by Universal Music Group.

Rights groups and artists have repeatedly warned about AI-generated content misrepresenting performers and undermining creative authenticity. As AI tools become more accessible, streaming platforms face mounting pressure to improve detection and approval processes to prevent further misuse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI strategy aims to attract global capital to Indonesia

Indonesia is moving to cement its position in the global AI and semiconductor landscape by releasing its first comprehensive national AI strategy in August 2025.

Deputy Minister Nezar Patria says the roadmap aims to clarify the country’s AI market potential, particularly in sectors like health and agriculture, and provide guidance on infrastructure, regulation, and investment pathways.

Already, global tech firms are demonstrating confidence in the country’s potential. Microsoft has pledged $1.7 billion to expand cloud and AI capabilities, while Nvidia partnered on a $200 million AI centre project. These investments align with Jakarta’s efforts to build skill pipelines and computational capacity.

In parallel, Indonesia is pitching into critical minerals extraction to strengthen its semiconductor and AI hardware supply chains, and has invited foreign partners, including from the United States, to invest. These initiatives aim to align resource security with its AI ambitions.

However, analysts caution that Indonesia must still address significant gaps: limited AI-ready infrastructure, a shortfall in skilled tech talent, and governance concerns such as data privacy and IP protection.

The new AI roadmap will bridge these deficits and streamline regulation without stifling innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI pact between Sri Lanka and Singapore fosters innovation

Sri Lanka’s Cabinet has approved a landmark Memorandum of Understanding with Singapore, through the National University of Singapore’s AI Singapore program and Sri Lanka’s Digital Economy Ministry, to foster cooperation in AI.

The MoU establishes a framework for joint research, curriculum development, and knowledge-sharing initiatives to address local priorities and global tech challenges.

This collaboration signals a strategic leap in Sri Lanka’s digital transformation journey. It emerged during Asia Tech x Singapore 2025, where officials outlined plans for AI training, policy alignment, digital infrastructure support, and e‑governance development.

The partnership builds on Sri Lanka’s broader agenda, including fintech innovation and cybersecurity, to strengthen its national AI ecosystem.

With the formalisation of this MoU, Sri Lanka hopes to elevate its regional and global AI standing. The initiative aims to empower local researchers, cultivate tech talent, and ensure that AI governance and innovation are aligned with ethical and economic goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK and OpenAI deepen AI collaboration on security and public services

OpenAI has signed a strategic partnership with the UK government aimed at strengthening AI security research and exploring national infrastructure investment.

The agreement was finalised on 21 July by OpenAI CEO Sam Altman and science secretary Peter Kyle. It includes a commitment to expand OpenAI’s London office. Research and engineering teams will grow to support AI development and provide assistance to UK businesses and start-ups.

Under the collaboration, OpenAI will share technical insights with the UK’s AI Security Institute to help government bodies better understand risks and capabilities. Planned deployments of AI will focus on public sectors such as justice, defence, education, and national security.

According to the UK government, all applications will follow national standards and guidelines to improve taxpayer-funded services. Peter Kyle described AI as a critical tool for national transformation. ‘AI will be fundamental in driving the change we need to see across the country,’ he said.

He emphasised its potential to support the NHS, reduce barriers to opportunity, and power economic growth. The deal signals a deeper integration of OpenAI’s operations in the UK, with promises of high-skilled jobs, investment in infrastructure, and stronger domestic oversight of AI development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI governance needs urgent international coordination

A GIS Reports analysis emphasises that as AI systems become pervasive, they create significant global challenges, including surveillance risks, algorithmic bias, cyber vulnerabilities, and environmental pressures.

Unlike legacy regulatory regimes, AI technology blurs the lines among privacy, labour, environmental, security, and human rights domains, demanding a uniquely coordinated governance approach.

The report highlights that leading AI research and infrastructure remain concentrated in advanced economies: over half of general‑purpose AI models originated in the US, exacerbating global inequalities.

Meanwhile, facial recognition or deepfake generators threaten civic trust, amplify disinformation, and even provoke geopolitical incidents if weaponised in defence systems.

The analysis calls for urgent public‑private cooperation and a new regulatory paradigm to address these systemic issues.

Recommendations include forming international expert bodies akin to the IPCC, and creating cohesive governance that bridges labour rights, environmental accountability, and ethical AI frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How to keep your data safe while using generative AI tools

Generative AI tools have become a regular part of everyday life, both professionally and personally. Despite their usefulness, concern is growing about how they handle private data shared by users.

Major platforms like ChatGPT, Claude, Gemini, and Copilot collect user input to improve their models. Much of this data handling occurs behind the scenes, raising transparency and security concerns.

Anat Baron, a generative AI expert, compares AI models to Pac-Man—constantly consuming data to enhance performance. The more information they receive, the more helpful they become, often at the expense of privacy.

Many users ignore warnings not to share sensitive information. Baron advises against sharing anything with AI that one would not give to a stranger, including ID numbers, financial data, and medical results.

Some platforms offer options to reduce data collection. ChatGPT users can disable training under ‘Data Controls’, while Claude collects data only if users opt in. Perplexity and Gemini offer similar, though less transparent, settings.

Microsoft’s Copilot protects organisational data when logged in, but risks increase when used anonymously on the web. DeepSeek, however, collects user data automatically with no opt-out—making it a risky choice.

Users still retain control, but must remain alert. AI tools are evolving, and with digital agents on the horizon, safeguarding personal information is becoming even more critical. Baron sums it up simply: ‘Privacy always comes at a cost. We must decide how much we’re willing to trade for convenience.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI gains ground as GenAI maturity grows in public sector

Public sector organisations around the world are rapidly moving beyond experimentation with generative AI (GenAI), with up to 90% now planning to explore, pilot, or implement agentic AI systems within the next two years.

Capgemini’s latest global survey of 350 public sector agencies found that most already use or trial GenAI, while agentic AI is being recognised as the next step — enabling autonomous, goal-driven decision-making with minimal human input.

Unlike GenAI, which generates content subject to human oversight, agentic AI can act independently, creating new possibilities for automation and public service delivery.

Dr Kirti Jain of Capgemini explained that GenAI depends on human-in-the-loop (HITL) processes, where users review outputs before acting. By contrast, agentic AI completes the final step itself, representing a future phase of automation. However, data governance remains a key barrier to adoption.

Data sovereignty emerged as a leading concern for 64% of surveyed public sector leaders. Fewer than one in four said they had sufficient data to train reliable AI systems. Dr Jain emphasised that governance must be embedded from the outset — not added as an afterthought — to ensure data quality, accountability, and consistency in decision-making.

A proactive approach to governance offers the only stable foundation for scaling AI responsibly. Managing the full data lifecycle — from acquisition and storage to access and application — requires strict privacy and quality controls.

Significant risks arise when flawed AI-generated insights influence decisions affecting entire populations. Capgemini’s support for government agencies focuses on three areas: secure infrastructure, privacy-led data usability, and smarter, citizen-centric services.

EPA Victoria CTO Abhijit Gupta underscored the need for timely, secure, and accessible data as a prerequisite for AI in the public sector. Accuracy and consistency, Dr Jain noted, are essential whether outcomes are delivered by humans or machines. Governance, he added, should remain technology-agnostic yet agile.

Strong data foundations require only minor adjustments to scale agentic AI that can manage full decision-making cycles. Capgemini’s model of ‘active data governance’ aims to enable public sector AI to scale safely and sustainably.

Singapore was highlighted as a leading example of responsible innovation, driven by rapid experimentation and collaborative development. The AI Trailblazers programme, co-run with the private sector, is tackling over 100 real-world GenAI challenges through a test-and-iterate model.

Minister for Digital Josephine Teo recently reaffirmed Singapore’s commitment to sharing lessons and best practices in sustainable AI development. According to Dr Jain, the country’s success lies not only in rapid adoption, but in how AI is applied to improve services for citizens and society.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!