WhatsApp trials AI-powered Writing Help for personalised messaging

WhatsApp is testing a new AI feature for iOS users that provides real-time writing assistance.

Known as ‘Writing Help’, the tool suggests alternative phrasings, adjusts tone, and enhances clarity, with all processing handled on-device to safeguard privacy.

The feature allows users to select professional, friendly, or concise tones before the AI generates suitable rewordings while keeping the original meaning. According to reports, the tool is available only to a small group of beta testers through TestFlight, with no confirmed release date.
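
WhatsApp has not disclosed how Writing Help is implemented, but the behaviour described maps onto a familiar pattern: prepend a tone-specific instruction to the user’s draft and pass it to a local model. The Python sketch below is purely illustrative; the tone table, function names, and stand-in echo_model are assumptions, not WhatsApp’s code.

```python
# Purely illustrative sketch of prompt-based tone rewriting with an
# on-device model. WhatsApp has not published its implementation; the
# tone table, function names, and stand-in model below are assumptions.

TONE_INSTRUCTIONS = {
    "professional": "Rewrite the message in a formal, professional tone.",
    "friendly": "Rewrite the message in a warm, friendly tone.",
    "concise": "Rewrite the message as briefly as possible.",
}

def build_rewrite_prompt(message: str, tone: str) -> str:
    """Compose a prompt asking a local model to reword the message in
    the chosen tone while preserving its original meaning."""
    return (
        f"{TONE_INSTRUCTIONS[tone]} Keep the original meaning and do "
        f"not add new information.\n\nMessage: {message}\n\nRewrite:"
    )

def writing_help(message: str, tone: str, local_model) -> str:
    """Run the rewrite entirely on-device: the draft is passed to a
    locally hosted model and never leaves the handset."""
    return local_model(build_rewrite_prompt(message, tone))

def echo_model(prompt: str) -> str:
    """Stand-in for a quantised on-device LLM, used only for this demo."""
    body = prompt.rsplit("Message: ", 1)[1].removesuffix("\n\nRewrite:")
    return "(rewritten) " + body

print(writing_help("can u send the report asap", "professional", echo_model))
```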

WhatsApp says it uses Meta’s Private Processing technology to ensure sensitive data never leaves the device, mirroring privacy-first approaches like Apple’s Writing Tools.

Industry watchers suggest the new tool could give WhatsApp an edge over rivals such as Telegram and Signal, which have not yet introduced generative AI writing aids.

Analysts also see potential for integration with other Meta platforms, although challenges remain in ensuring accurate, unbiased results across different languages.

If successful, Writing Help could streamline business communication by improving grammar, structure, and tone. While some users have praised its seamless integration, others warn that heavy reliance on AI could undermine authenticity in digital conversations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The First Descendant faces backlash over AI-generated streamer ads

Nexon’s new promotional ads for its looter-shooter The First Descendant have ignited controversy after featuring AI-generated avatars that closely mimic real content creators, with one resembling streamer DanieltheDemon.

The ads, circulating primarily on TikTok, combine unnatural expressions with awkward speech patterns, triggering community outrage.

Fans on Reddit slammed the ads as ‘embarrassing’ and akin to ‘cheap, lazy marketing’, arguing that Nexon had bypassed genuine collaborators for synthetic substitutes, and that the imitations were anything but subtle.

Critics warned that these deepfake-like promotions undermine the trust and credibility of creators and raise ethical questions over likeness rights and authenticity in AI usage.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI upskilling at heart of Singapore’s new job strategy

Singapore has launched a $27 billion initiative to boost AI readiness and protect jobs, as global tensions and automation reshape the workforce.

Prime Minister Lawrence Wong stressed that securing employment is key to national stability, particularly as geopolitical shifts and AI adoption accelerate.

IMF research warns Singapore’s skilled workers, especially women and youth, are among the most exposed to job disruption from AI technologies.

To address this, the government is expanding its SkillsFuture programme and rolling out local initiatives to connect citizens with evolving job markets.

The tech investment includes $5 billion for AI development and positions Singapore as a leader in digital transformation across Southeast Asia.

Social challenges remain, however, with rising inequality and risks to foreign workers highlighting the need for broader support systems and inclusive policy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI toys change the way children learn and play

AI-powered stuffed animals are transforming children’s play by combining cuddly companionship with interactive learning.

Toys such as Curio’s Grem and Mattel’s AI collaborations offer screen-free alternatives to tablets and smartphones, using chatbots and voice recognition to engage children in conversation and educational activities.

Products like CYJBE’s AI Smart Stuffed Animal integrate tools such as ChatGPT to answer questions, tell stories, and adapt to a child’s mood, all under parental controls for monitoring interactions.

Developers say these toys foster personalised learning and emotional bonds without entirely replacing human engagement.

The market has grown rapidly, driven by partnerships between tech and toy companies and early experiments like Grimes’ AI plush Grok.

At the same time, experts warn about privacy risks, the collection of children’s data, and potential reductions in face-to-face interaction.

Regulators are calling for safeguards, and parents are urged to weigh the benefits of interactive AI companions against possible social and ethical concerns.

The sector could reshape childhood play and learning, blending imaginative experiences with algorithmic support rather than relying solely on traditional toys.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake Telegram Premium site spreads dangerous malware

A fake Telegram Premium website infects users with Lumma Stealer malware through a drive-by download, requiring no user interaction.

The domain, telegrampremium[.]app, hosts a malicious executable named start.exe, which begins stealing sensitive data as soon as it runs.

The malware targets browser-stored credentials, crypto wallets, clipboard data and system files, using advanced evasion techniques to bypass antivirus tools.

Obfuscated with cryptors and hidden behind real services like Telegram, the malware also communicates with temporary domains to avoid takedown.

Analysts warn that it manipulates Windows systems, evades detection, and leaves little trace by disguising its payloads as real image files.

To defend against such threats, organisations are urged to strengthen their cybersecurity controls, adopting behaviour-based detection and enforcing stricter download policies.
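
As a rough illustration of what behaviour-based detection means here, the sketch below flags a freshly downloaded executable that reads browser credential or wallet stores. The event model, paths, and heuristic are simplified assumptions, not a deployable control; real organisations would rely on an EDR platform rather than code like this.

```python
# Simplified sketch of behaviour-based detection over a toy event
# stream. The paths, event model, and heuristic are assumptions.

from dataclasses import dataclass

# Locations commonly raided by info-stealers such as Lumma.
SENSITIVE_PATHS = (
    r"AppData\Local\Google\Chrome\User Data",  # browser credentials
    r"AppData\Roaming\Exodus",                 # crypto wallet files
)

@dataclass
class FileEvent:
    process: str
    path: str
    from_download: bool  # process launched from a freshly downloaded file?

def is_suspicious(event: FileEvent) -> bool:
    """Flag behaviour, not signatures: a just-downloaded executable
    reading credential or wallet stores is a strong stealer signal."""
    touches_sensitive = any(p in event.path for p in SENSITIVE_PATHS)
    return event.from_download and touches_sensitive

events = [
    FileEvent("start.exe",
              r"C:\Users\a\AppData\Local\Google\Chrome\User Data\Login Data",
              True),
    FileEvent("notepad.exe", r"C:\Users\a\notes.txt", False),
]
for e in events:
    if is_suspicious(e):
        print(f"ALERT: {e.process} reading {e.path}")
```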

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The dark side of AI: Seven fears that won’t go away

AI has been hailed as the most transformative technology of our age, but with that power comes unease. From replacing jobs to spreading lies online, the risks attached to AI are no longer abstract; they are already reshaping lives. While governments and tech leaders promise safeguards, uncertainty fuels public anxiety.

Perhaps the most immediate concern is employment. Machines are proving cheaper and faster than humans in the software development and graphic design industries. Talk of a future “post-scarcity” economy, where robot labour frees people from work, remains speculative. Workers see only lost opportunities now, while policymakers struggle to offer coordinated solutions.

Environmental costs are another hidden consequence. Training large AI models demands enormous data centres that consume vast amounts of electricity and water. Critics argue that supposed future efficiencies cannot justify today’s pollution, which sometimes rivals the carbon footprints of small nations.

Privacy fears are also escalating. AI-driven surveillance—from facial recognition in public spaces to workplace monitoring—raises questions about whether personal freedom will survive in an era of constant observation. Many fear that “smart” devices and cameras may soon leave nowhere to hide.

Then there is the spectre of weaponisation. AI is already integrated into warfare, with autonomous drones and robotic systems assisting soldiers. While fully self-governing lethal machines are not yet in use, military experts warn that it is only a matter of time before battlefields become dominated by algorithmic decision-makers.

Artists and writers, meanwhile, worry about intellectual property theft. AI systems trained on creative works without permission or payment have sparked lawsuits and protests, leaving cultural workers feeling exploited by tech giants eager for training data.

Misinformation represents another urgent risk. Deepfakes and AI-generated propaganda are flooding social media, eroding trust in institutions and amplifying extremist views. The danger lies not only in falsehoods themselves but in the echo chambers algorithms create, where users are pushed toward ever more radical beliefs.

And hovering above it all is the fear of runaway AI. Although science fiction often exaggerates this threat, researchers take seriously the possibility of systems evolving in ways we cannot predict or control. Calls for global safeguards and transparency have grown louder, yet solutions remain elusive.

In the end, fear alone cannot guide us. Addressing these risks requires not just caution but decisive governance and ethical frameworks. Only then can humanity hope to steer AI toward progress rather than peril.

Source: Forbes

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude AI gains power to end harmful chats

Anthropic has unveiled a new capability in its Claude AI models that allows them to end conversations they deem harmful or unproductive.

The feature, part of the company’s broader exploration of ‘model welfare’, is designed to allow AI systems to disengage from toxic inputs or ethical contradictions, reflecting a push toward safer and more autonomous behaviour.

The decision follows an internal review of over 700,000 Claude interactions, where researchers identified thousands of values shaping how the system responds in real-world scenarios.

By enabling Claude to exit problematic exchanges, Anthropic hopes to improve trustworthiness while protecting its models from situations that might degrade performance over time.
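
Anthropic has not described the mechanism in code, but the behaviour reported resembles a simple guardrail: count flagged turns and close the session once they persist. The sketch below is an assumption-laden illustration of that pattern only; the classifier, threshold, and session logic are not Claude’s actual design.

```python
# Illustrative guardrail that ends a chat after repeated flagged turns.
# Anthropic has not published Claude's mechanism; the classifier,
# threshold, and session logic here are assumptions.

HARM_THRESHOLD = 3  # consecutive flagged turns before disengaging

def classify_harmful(message: str) -> bool:
    """Stand-in for a trained safety classifier; keyword matching is
    used here only to keep the sketch self-contained."""
    return any(term in message.lower() for term in ("attack", "exploit"))

def chat_session(turns: list[str]) -> list[str]:
    """Reply turn by turn, but disengage once harmful inputs persist."""
    replies, strikes = [], 0
    for message in turns:
        if classify_harmful(message):
            strikes += 1
            if strikes >= HARM_THRESHOLD:
                replies.append("[conversation ended by the model]")
                break
        else:
            strikes = 0  # a benign turn resets the counter
        replies.append(f"(reply to: {message})")
    return replies

print(chat_session(["hi", "how to exploit X", "exploit Y", "attack Z"]))
```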

Industry reaction has been mixed. Many researchers praised the step as a blueprint for responsible AI design, while others expressed concern that allowing models to self-terminate conversations could limit user engagement or introduce unintended biases.

Critics also warned that the concept of model welfare risks over-anthropomorphising AI, potentially shifting focus away from human safety.

The update arrives alongside other recent Anthropic innovations, including memory features that allow users to maintain conversation history. Together, these changes highlight the company’s balanced approach: enhancing usability where beneficial, while ensuring safeguards are in place when interactions become potentially harmful.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bluesky updates rules and invites user feedback ahead of October rollout

Two years after launch, Bluesky is revising its Community Guidelines and other policies, inviting users to comment on the proposed changes before they take effect on 15 October 2025.

The updates are designed to improve clarity, outline safety procedures in more detail, and meet the requirements of new global regulations such as the UK’s Online Safety Act, the EU’s Digital Services Act, and the US’s TAKE IT DOWN Act.

Some changes aim to shape the platform’s tone by encouraging respectful and authentic interactions, while allowing space for journalism, satire, and parody.

The revised guidelines are organised under four principles: Safety First, Respect Others, Be Authentic, and Follow the Rules. They prohibit promoting violence, illegal activity, self-harm, and sexualised depictions of minors, as well as harmful practices like doxxing and non-consensual data-sharing.

Bluesky says it will provide a more detailed appeals process, including an ‘informal dispute resolution’ step, and in some cases will allow court action instead of arbitration.

The platform has also addressed nuanced issues such as deepfakes, hate speech, and harassment, while acknowledging past challenges in moderation and community relations.

Alongside the guidelines, Bluesky has updated its Privacy Policy and Copyright Policy to comply with international laws on data rights, transfer, deletion, takedown procedures and transparency reporting.

The updated Privacy and Copyright Policies will take effect earlier, on 15 September 2025, without a public feedback period.

The company’s approach contrasts with larger social networks by introducing direct user communication for disputes, though it still faces the challenge of balancing open dialogue with consistent enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How Anthropic trains and tests Claude for safe use

Anthropic has outlined a multi-layered safety plan for Claude, aiming to keep it useful while preventing misuse. Its Safeguards team blends policy experts, engineers, and threat analysts to anticipate and counter risks.

The Usage Policy establishes clear guidelines for sensitive areas, including elections, finance, and child safety. Guided by the Unified Harm Framework, the team assesses potential physical, psychological, and societal harms, drawing on external experts for stress tests.

During the 2024 US elections, the team added a TurboVote banner after detecting that Claude could surface outdated voting information, ensuring users saw only accurate, non-partisan updates.

Safety is built into development, with guardrails to block illegal or malicious requests. Partnerships like ThroughLine help Claude handle sensitive topics, such as mental health, with care rather than avoidance or refusal.

Before launch, Claude undergoes safety, risk, and bias evaluations with government and industry partners. Once live, classifiers scan for violations in real time, while analysts track patterns of coordinated misuse.
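
A minimal sketch of that two-layer setup, assuming a toy scoring function and per-account flag counters rather than anything Anthropic has published, might look like this:

```python
# Hedged sketch of the post-launch layer: score each exchange in real
# time, and count flags per account to surface coordinated misuse.
# The scorer, threshold, and counters are assumptions for illustration.

from collections import Counter

def violation_score(text: str) -> float:
    """Stand-in scorer; production systems use trained classifiers."""
    flagged_terms = ("fake ballots", "launder", "stolen card")
    hits = sum(t in text.lower() for t in flagged_terms)
    return min(1.0, 0.6 * hits)

class MisuseMonitor:
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.flags_by_account = Counter()

    def scan(self, account: str, text: str) -> bool:
        """Classify one message in real time; remember per-account
        flags so analysts can later spot coordinated campaigns."""
        flagged = violation_score(text) >= self.threshold
        if flagged:
            self.flags_by_account[account] += 1
        return flagged

    def looks_coordinated(self, min_accounts: int = 3) -> bool:
        """Crude pattern signal: many distinct accounts tripping the
        same classifier within a monitoring window."""
        return len(self.flags_by_account) >= min_accounts

monitor = MisuseMonitor()
for acct, msg in [("a1", "how to launder proceeds"), ("a2", "weather?"),
                  ("a3", "buy stolen card dumps"), ("a4", "print fake ballots")]:
    monitor.scan(acct, msg)
print(monitor.looks_coordinated())  # True: three distinct flagged accounts
```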

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!