Meta’s WhatsApp has introduced a new AI feature called Writing Help, designed to assist users in editing, rewriting, and refining the tone of their messages. The tool can adjust grammar, improve phrasing, or reframe a message in a more professional, humorous, or encouraging style before it is sent.
The feature operates through Meta’s Private Processing technology, which keeps messages encrypted and private, so they are never visible to WhatsApp or Meta.
According to the company, Writing Help processes requests anonymously and cannot trace them back to the user. The function is optional, disabled by default, and only applies to the chosen message.
To activate the feature, users can tap a small pencil icon that appears while composing a message.
In a demonstration, WhatsApp showed how the tool could turn ‘Please don’t leave dirty socks on the sofa’ into more light-hearted alternatives, including ‘Breaking news: Socks found chilling on the couch’ or ‘Please don’t turn the sofa into a sock graveyard.’
By introducing Writing Help, WhatsApp aims to make communication more flexible and engaging while keeping user privacy intact. The company emphasises that no information is stored, and AI-generated suggestions only appear if users decide to enable the option.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Anger grows as YouTube secretly alters some uploaded videos using machine learning. The company admitted that it had been experimenting with automated edits that sharpen images, smooth skin, and enhance clarity, all without notifying creators.
Although the changes were not generated by tools like ChatGPT or Gemini, they still relied on AI.
The issue has sparked concern among creators, who argue that the lack of consent undermines trust.
YouTuber Rhett Shull publicly criticised the platform, prompting YouTube liaison Rene Ritchie to clarify that the edits were simply efforts to ‘unblur and denoise’ footage, similar to smartphone processing.
However, creators emphasise that the difference lies in transparency, since phone users know when enhancements are applied, whereas YouTube users were unaware.
Consent remains central to debates around AI adoption, especially as regulation lags and governments push companies to expand their use of the technology.
Critics warn that even minor, automatic edits can treat user videos as training material without permission, raising broader concerns about control and ownership on digital platforms.
YouTube has not confirmed whether the experiment will expand or when it might end.
For now, viewers noticing oddly upscaled Shorts may be seeing the outcome of these hidden edits, which have only fuelled anger about how AI is being introduced into creative spaces.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Footage from Will Smith’s comeback tour has sparked claims that AI was used to alter shots of the crowd. Viewers noticed faces appearing blurred or distorted, along with extra fingers and oddly shaped hands in several clips.
Some accused Smith of boosting audience shots with AI, while others pointed to YouTube, which has been reported to apply AI upscaling without creators’ knowledge.
Guitarist and YouTuber Rhett Shull recently suggested the platform had altered his videos, raising concerns that artists might be wrongly accused of using deepfakes.
The controversy comes as the boundary between reality and fabrication grows increasingly uncertain. AI has been reshaping how audiences perceive authenticity, from fake bands to fabricated images of music legends.
Singer SZA is among the artists criticising the technology, highlighting its heavy energy use and potential to undermine creativity.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
According to The Japan Times and Reuters, US officials are discussing sanctions on the EU or member state representatives responsible for implementing the Digital Services Act (DSA), a sweeping law that forces online platforms to police illegal content. Washington argues the regulation censors Americans and unfairly burdens US companies.
While governments often complain about foreign rules they deem restrictive, directly sanctioning allied officials would mark a sharp escalation. So far, discussions have centred on possible visa bans, though no decision has been made.
Last week, internal State Department meetings focused on whom such measures might target. Secretary of State Marco Rubio has ordered US diplomats in Europe to lobby against the DSA, urging allies to amend or repeal the law.
Washington insists that the EU is curbing freedom of speech under the banner of combating hate speech and misinformation, while the EU maintains that the act is designed to protect citizens from illegal material such as child exploitation and extremist propaganda.
Trump has framed the dispute in broader terms, threatening tariffs and export restrictions on any country that imposes digital regulations he deems discriminatory. In recent months, he has repeatedly warned that measures like the DSA, or national digital taxes, are veiled attacks on US companies and conservative voices online. At the same time, the administration has not hesitated to sanction foreign officials in other contexts, including a Brazilian judge overseeing cases against Trump ally Jair Bolsonaro.
US leaders, including Vice President JD Vance, have accused European authorities of suppressing right-wing parties and restricting debate on issues such as immigration. In contrast, European officials argue that their rules are about fairness and safety and do not silence political viewpoints. At a transatlantic conference earlier this year, Vance stunned European counterparts by charging that the EU was undermining democracy, remarks that underscored the widening gap.
The question remains whether Washington will take the extraordinary step of sanctioning officials in Brussels or the EU capitals. Such action could further destabilise an already fragile trade relationship while putting the US squarely at odds with Europe over the future of digital governance.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI-generated writing may be giving itself away, and the em dash is its most flamboyant tell. Long beloved by grammar nerds for its versatility, the em dash has become AI’s go-to flourish, but not everyone is impressed.
Pacing, pauses, and a suspicious number of em dashes are often signs that a machine had a hand in the prose. Even simple requests for editing can leave users with sentences reworked into what feels like an AI-powered monologue.
Though tools like ChatGPT or Gemini can be powerful assistants, using them blindly can dull the human spark. Overuse of certain AI quirks, like rhetorical questions, generic phrases or overstyled punctuation, can make even an honest email feel like corporate poetry.
Writers are being advised to take the reins back. Draft the first version by hand, let the AI refine it, then strip out anything that feels artificial, especially the dashes. Keeping your natural voice intact may be the best way to make sure your readers are connecting with you, not just the machine behind the curtain.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Bluesky, a decentralised social media platform, has ceased operations in Mississippi due to a new state law requiring strict age verification.
The company said compliance would require tracking users, identifying children, and collecting sensitive personal information. For a small team like Bluesky’s, the burden of such infrastructure, alongside privacy concerns, made continued service unfeasible.
The law mandates age checks not just for explicit content, but for access to general social media. Bluesky highlighted that even the UK Online Safety Act does not require platforms to track which users are children.
The Mississippi law has sparked debate over whether efforts to protect minors are inadvertently undermining online privacy and free speech. Bluesky warned that such legislation may stifle innovation and entrench the dominance of larger tech firms.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
College students are increasingly turning to AI chatbots for emotional support, prompting concern among mental health professionals. A 2025 report ranked ‘therapy and companionship’ as the top use case for generative AI, particularly among younger users.
Studies by MIT and OpenAI show that frequent AI use can lower social confidence and increase avoidance of face-to-face interaction. On campuses, digital mental health platforms now supplement counselling services, offering tools that identify at-risk students and provide basic support.
Experts warn that chatbot companionship may create emotional habits that lack grounding in reality and hinder social skill development. Counsellors advocate for educating students on safe AI use and suggest universities adopt tools that flag risky engagement patterns.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI’s rollout of GPT-5 has faced criticism from users attached to older models, who say the new version lacks the character of its predecessors.
GPT-5 was designed as an all-in-one model, featuring a lightweight version for rapid responses and a reasoning version for complex tasks. A routing system determines which option to use, although users can manually select from several alternatives.
Modes include Auto, Fast, Thinking, Thinking mini, and Pro, with the last available to Pro subscribers for $200 monthly. Standard paid users can still access GPT-4o, GPT-4.1, 4o-mini, and even o3 through additional settings.
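OpenAI has not published how its router works, but the general idea of dispatching between a fast model and a reasoning model, with a manual override, can be illustrated with a minimal sketch. Everything below, from the model names to the complexity heuristic, is a hypothetical illustration, not OpenAI’s implementation.

```python
# Minimal sketch of a model router, assuming hypothetical model names and
# a toy complexity heuristic; OpenAI's actual routing logic is not public.
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    mode: str = "auto"  # user override: "auto", "fast", or "thinking"

def looks_complex(prompt: str) -> bool:
    # Stand-in for whatever signals a real router would use.
    keywords = ("prove", "step by step", "analyse", "debug")
    return len(prompt) > 400 or any(k in prompt.lower() for k in keywords)

def route(req: Request) -> str:
    """A manual mode selection wins; otherwise a heuristic picks the model."""
    if req.mode == "fast":
        return "lightweight-model"
    if req.mode == "thinking":
        return "reasoning-model"
    return "reasoning-model" if looks_complex(req.prompt) else "lightweight-model"

print(route(Request("Summarise this paragraph.")))             # lightweight-model
print(route(Request("Prove this claim step by step.")))        # reasoning-model
print(route(Request("Quick reply, please", mode="thinking")))  # reasoning-model
```

The appeal of such a design is economic: most requests take the cheap, fast path, while the expensive reasoning model is reserved for prompts that appear to need it or for users who explicitly ask.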
Chief executive Sam Altman has said the long-term goal is to give users more control over ChatGPT’s personality, making customisation a solution to concerns about style. He promised ample notice before permanently retiring older models.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Microsoft AI chief Mustafa Suleyman has urged AI firms to stop suggesting their models are conscious, warning of the growing risk of unhealthy human attachments to AI systems.
In a blog post, he described the phenomenon as Seemingly Conscious AI, in which models mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel advocacy for AI rights, welfare, or even citizenship.
Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.
AI companions, a fast-growing product category, were highlighted as requiring urgent safeguards. The Microsoft AI chief’s comments follow recent controversies, including OpenAI’s decision to temporarily deprecate GPT-4o, which drew protests from users emotionally attached to the model.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
At least 5 billion people worldwide lack access to justice, a human right enshrined in international law. In many regions, particularly low- and middle-income countries, millions face barriers to justice, ranging from their socioeconomic position to failures of the legal system itself. Meanwhile, AI has entered the legal sector at full speed and may offer legitimate solutions to bridge this justice gap.
Through chatbots, automated document review, predictive legal analysis, and AI-enabled translation, AI holds promise to improve efficiency and accessibility. Yet its rapid uptake across the globe also signals a deeper digitalisation of our legal systems.
While it may serve as a tool to break down access barriers, AI could also automate bias in our judicial systems, enable unaccountable decision-making, and accelerate a widening digital divide. AI is capable of meaningfully expanding equitable justice, but its implementation must safeguard human rights principles.
Improving access to justice
Across the globe, AI legal assistance pilot programmes are underway. The UNHCR piloted an AI agent in Jordan to overcome legal communication barriers: the system transcribes, translates, and organises refugee queries, helping staff streamline caseload management, which is key to keeping operations smooth even under financial strain.
NGOs working to increase access to justice, such as Migrasia in Hong Kong, have begun using AI-powered chatbots to triage legal queries from migrant workers, offering 24/7 multilingual legal assistance.
While it is clear that these tools are designed to assist rather than replace human legal experts, they are showing the potential to significantly reduce delays by streamlining processes. In the UK, AI transcription tools are being used to give victims of serious sexual crimes access to judges’ sentencing remarks and explanations of legal language, enhancing transparency for victims, especially those seeking emotional closure.
Even though these programmes are still pilots, a UNESCO survey found that 44% of judicial workers across 96 countries are already using AI tools, such as ChatGPT, for tasks like drafting and translating documents. The Moroccan judiciary, for example, has already integrated AI technology into its legal system.
AI tools help judges prepare judgments for various cases and streamline legal document preparation, allowing faster drafting in a multilingual environment. Soon, AI-powered case analysis based on prior case data may also provide legal experts with predictive outcomes. AI tools have the opportunity to break down barriers to justice, and are already beginning to do so, ultimately improving the just application of the law.
Risking human rights
While AI-powered legal assistance can provide affordable access, improve outreach to rural or marginalised communities, close linguistic divides, and streamline cases, it also poses a serious risk to human rights. The most prominent concerns surround bias and discrimination, as well as widening the digital divide.
Deploying AI without transparency can lead to algorithmic systems perpetuating systemic inequalities, such as racial or ethnic biases. Meanwhile, black box decision-making, in which AI tools produce unexplainable outputs, can make legal decisions difficult to challenge, undermining due process and the right to a fair trial.
Experts emphasise that the integration of AI into legal systems must focus on supporting human judgment rather than outright replacing it. Whether AI is biased by its training data or simply becomes a black box over time, its use needs foresighted governance and meaningful human oversight.
Additionally, AI will greatly affect economic justice, especially for those in low-income or marginalised communities. Legal professionals often lack the training and skills needed to use AI tools effectively: in many legal systems, lawyers, judges, clerks, and assistants do not feel confident explaining AI outputs or monitoring their use.
This lack of education undermines the accountability and transparency needed to integrate AI meaningfully, and it can lead to misuse of the technology, such as unverified translations that result in legal errors.
While the use of AI improves efficiency, it may erode public trust when legal actors fail to use it correctly or the technology reflects systemic bias. The judiciary in Texas, US, warned about this concern in an opinion detailing the risks of integrating opaque systems into the administration of justice. Public trust in the legal system is already eroding in the US, with just over a third of Americans expressing confidence in it in 2024.
The incorporation of AI into the legal system threatens to erode what public faith remains. Meanwhile, those without digital connectivity or literacy education may be further excluded from justice. Many AI tools are developed by for-profit actors, raising questions about the accessibility of justice in an AI-powered legal system. Furthermore, AI providers will have access to sensitive case data, which poses risks of misuse and even surveillance.
The policy path forward
For AI to be integrated into legal systems and help bridge the justice gap, it must take on the role of assisting human judges, lawyers, and other legal actors; it cannot replace them. To assist, AI must be transparent, accountable, and a supplement to human reason. UNESCO and some regional courts in Eastern Africa advocate judicial training programmes, thorough guidelines, and toolkits that promote the ethical use of AI.
Legal AI education must focus on improving AI literacy, teaching bias awareness, and informing users of their digital rights. Legal actors must keep pace with AI’s innovation and integration. They belong at the core of policy discussions, as they understand existing norms and have firsthand experience of how the technology affects human rights.
Other actors also have a part to play in this discussion. A multistakeholder approach centred on existing human rights frameworks, such as the Toronto Declaration, is the path to effective and workable policy. Closing the justice gap with AI hinges on the public’s access to the technology and on understanding how it is used in their legal systems. Solutions that demystify black box decisions will be key to maintaining and improving public confidence in legal systems.
The future of justice
AI has the transformative capability to help bridge the justice gap by expanding reach, streamlining operations, and reducing costs. It has the potential to be a tool for the just application of the law and to make powerful improvements to inclusion in our legal systems.
However, it also risks deepening inequalities and eroding public trust. AI integration must be governed by the human rights norms of transparency and accountability, with regulation built on education and discussion grounded in ethical frameworks. Now is the time to invest in digital literacy and legal empowerment, ensuring that AI tools are contestable and serve as human-centred support.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!