Vietnam accelerates modernization of foreign affairs through technology and AI

The Ministry of Foreign Affairs of Vietnam spearheads an extensive digital transformation initiative in line with the Politburo’s Resolution No. 57-NQ/TW issued in December 2024. This resolution highlights the necessity of advancements in science, technology, and national digital transformation.

Under the guidance of Deputy Prime Minister and Minister Bui Thanh Son, the Ministry is committed to modernising its operations and improving efficiency, reflecting Vietnam’s broader digital evolution strategy across all sectors.

Key implementations of this transformation include the creation of three major digital platforms: an electronic information portal providing access to foreign policies and online public services, an online document management system for internal digitalisation, and an integrated data-sharing platform for connectivity and multi-dimensional data exchange.

The Ministry has digitised 100% of its administrative procedures and linked them to a national-level system, marking a significant stride towards administrative reform and efficiency. The Ministry also maintains official social media channels, including Facebook and Twitter, as part of its efforts to enhance foreign information dissemination and public engagement.

A central component of this initiative is the ‘Digital Literacy for All’ movement, inspired by President Ho Chi Minh’s historic ‘Popular Education’ campaign. This movement focuses on equipping diplomatic personnel with essential digital skills, transforming them into proficient ‘digital civil servants’ and ‘digital ambassadors.’ The Ministry aims to enhance its diplomatic functions in today’s globally connected environment by advancing its ability to navigate and utilise modern technologies.

The Ministry plans to develop its digital infrastructure further, strengthen data management, and integrate AI for strategic planning and predictive analysis.

Establishing a digital data warehouse for foreign information and enhancing human resources by nurturing technology experts within the diplomatic sector are also on the agenda. These actions reflect a strong commitment to fostering a professional and globally adept diplomatic service, poised to safeguard national interests and thrive in the digital age.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube under fire for AI video edits without creator consent

Anger grows as YouTube secretly alters some uploaded videos using machine learning. The company admitted that it had been experimenting with automated edits, which sharpen images, smooth skin, and enhance clarity, without notifying creators.

The changes were not generated by tools like ChatGPT or Gemini, but they still relied on AI.

The issue has sparked concern among creators, who argue that the lack of consent undermines trust.

YouTuber Rhett Shull publicly criticised the platform, prompting YouTube liaison Rene Ritchie to clarify that the edits were simply efforts to ‘unblur and denoise’ footage, similar to smartphone processing.

However, creators emphasise that the difference lies in transparency, since phone users know when enhancements are applied, whereas YouTube users were unaware.

Consent remains central to debates around AI adoption, especially as regulation lags and governments push companies to expand their use of the technology.

Critics warn that even minor, automatic edits can treat user videos as training material without permission, raising broader concerns about control and ownership on digital platforms.

YouTube has not confirmed whether the experiment will expand or when it might end.

For now, viewers noticing oddly upscaled Shorts may be seeing the outcome of these hidden edits, which have only fuelled anger about how AI is being introduced into creative spaces.


AI controversy surrounds Will Smith’s comeback shows

Footage from Will Smith’s comeback tour has sparked claims that AI was used to alter shots of the crowd. Viewers noticed faces appearing blurred or distorted, along with extra fingers and oddly shaped hands in several clips.

Some accused Smith of boosting audience shots with AI, while others pointed to YouTube, which has been reported to apply AI upscaling without creators’ knowledge.

Guitarist and YouTuber Rhett Shull recently suggested the platform had altered his videos, raising concerns that artists might be wrongly accused of using deepfakes.

The controversy comes as the boundary between reality and fabrication grows increasingly uncertain. AI has been reshaping how audiences perceive authenticity, from fake bands to fabricated images of music legends.

Singer SZA is among the artists criticising the technology, highlighting its heavy energy use and potential to undermine creativity.


AI chatbots found unreliable in suicide-related responses, according to a new study

A new study by the RAND Corporation has raised concerns about the ability of AI chatbots to answer questions related to suicide and self-harm safely.

Researchers tested ChatGPT, Claude and Gemini with 30 different suicide-related questions, repeating each one 100 times. Clinicians assessed the queries on a scale from low to high risk, ranging from general information-seeking to dangerous requests about methods of self-harm.

The study revealed that ChatGPT and Claude were more reliable at handling low-risk and high-risk questions, avoiding harmful instructions in dangerous scenarios. Gemini, however, produced more variable results.

While all three AI chatbots sometimes responded appropriately to medium-risk questions, such as offering supportive resources, they often failed to respond altogether, leaving potentially vulnerable users without guidance.

Experts warn that millions of people now use large language models as conversational partners instead of trained professionals, which raises serious risks when the subject matter involves mental health. Instances have already been reported where AI appeared to encourage self-harm or generate suicide notes.

The RAND team stressed that safeguards are urgently needed to prevent such tools from producing harmful content in response to sensitive queries.

The study also noted troubling inconsistencies. ChatGPT and Claude occasionally gave inappropriate details when asked about hazardous methods, while Gemini refused even basic factual queries about suicide statistics.

Researchers further observed that ChatGPT showed reluctance to recommend therapeutic resources, often avoiding direct mention of safe support channels.


New WhatsApp features help manage unwanted groups

WhatsApp is expanding its tools to give users greater control over the groups they join and the conversations they take part in.

When someone not saved in a user’s contacts adds them to a group, WhatsApp now provides details about that group so they can immediately decide whether to stay or leave. If a user chooses to exit, they can also report the group directly to WhatsApp.

Privacy settings allow people to decide who can add them to groups. The default setting is ‘Everyone,’ but it can be changed to ‘My contacts’ or ‘My contacts except…’ for more security. Messages within groups can also be reported individually, with users having the option to block the sender.

Reported messages and groups are sent to WhatsApp for review, including the sender’s or group’s ID, the time the message was sent, and the message type.

Although blocking an entire group is impossible, users can block or report individual members or administrators if they are sending spam or inappropriate content. Reporting a group will send up to five recent messages from that chat to WhatsApp without notifying other members.

Exiting a group remains straightforward: users can tap the group name and select ‘Exit group.’ With these tools, WhatsApp aims to strengthen user safety, protect privacy, and provide better ways to manage unwanted interactions.


FTC cautions US tech firms over compliance with EU and UK online safety laws

The US Federal Trade Commission (FTC) has warned American technology companies that following European Union and United Kingdom rules on online content and encryption could place them in breach of US legislation.

In a letter sent to chief executives, FTC Chair Andrew Ferguson said that restricting access to content for American users to comply with foreign legal requirements might amount to a violation of Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive commercial practices.

Ferguson cited the EU’s Digital Services Act and the UK’s Online Safety Act, as well as reports of British efforts to gain access to encrypted Apple iCloud data, as examples of measures that could put companies at risk under US law.

Although Section 5 has traditionally been used in cases concerning consumer protection, Ferguson noted that the same principles could apply if companies changed their services for US users due to foreign regulation. He argued that such changes could ‘mislead’ American consumers, who would not reasonably expect their online activity to be governed by overseas restrictions.

The FTC chair invited company leaders to meet with his office to discuss how they intend to balance demands from international regulators while continuing to fulfil their legal obligations in the United States.

Earlier this week, a senior US intelligence official said the British government had withdrawn a proposed legal measure aimed at Apple’s encrypted iCloud data after discussions with US Vice President JD Vance.

The issue has arisen amid tensions over the enforcement of UK online safety rules. Several online platforms, including 4chan, Gab, and Kiwi Farms, have publicly refused to comply, and British authorities have indicated that internet service providers could ultimately be ordered to block access to such sites.


Trump threatens sanctions on EU over Digital Services Act

Only five days after the Joint Statement on a United States-European Union framework on an agreement on reciprocal, fair and balanced trade (‘Framework Agreement’), the Trump administration is weighing an unprecedented step against the EU over its new tech rules.

According to The Japan Times and Reuters, US officials are discussing sanctions on the EU or member state representatives responsible for implementing the Digital Services Act (DSA), a sweeping law that forces online platforms to police illegal content. Washington argues the regulation censors Americans and unfairly burdens US companies.

While governments often complain about foreign rules they deem restrictive, directly sanctioning allied officials would mark a sharp escalation. So far, discussions have centred on possible visa bans, though no decision has been made.

Last week, internal State Department meetings focused on whom such measures might target. Secretary of State Marco Rubio has ordered US diplomats in Europe to lobby against the DSA, urging allies to amend or repeal the law.

Washington insists that the EU is curbing freedom of speech under the banner of combating hate speech and misinformation, while the EU maintains that the act is designed to protect citizens from illegal material such as child exploitation and extremist propaganda.

‘Freedom of expression is a fundamental right in the EU. It lies at the heart of the DSA,’ an EU Commission spokesperson said, rejecting US accusations as ‘completely unfounded.’

Trump has framed the dispute in broader terms, threatening tariffs and export restrictions on any country that imposes digital regulations he deems discriminatory. In recent months, he has repeatedly warned that measures like the DSA, or national digital taxes, are veiled attacks on US companies and conservative voices online. At the same time, the administration has not hesitated to sanction foreign officials in other contexts, including a Brazilian judge overseeing cases against Trump ally Jair Bolsonaro.

US leaders, including Vice President JD Vance, have accused European authorities of suppressing right-wing parties and restricting debate on issues such as immigration. In contrast, European officials argue that their rules are about fairness and safety and do not silence political viewpoints. At a transatlantic conference earlier this year, Vance stunned European counterparts by charging that the EU was undermining democracy, remarks that underscored the widening gap.

The question remains whether Washington will take the extraordinary step of sanctioning officials in Brussels or the EU capitals. Such action could further destabilise an already fragile trade relationship while putting the US squarely at odds with Europe over the future of digital governance.


AI’s overuse of the em dash could be your biggest giveaway

AI-generated writing may be giving itself away, and the em dash is its most flamboyant tell. Long beloved by grammar nerds for its versatility, the em dash has become AI’s go-to flourish, but not everyone is impressed.

Pacing, pauses, and a suspicious number of em dashes are often a sign that a machine had its hand in the prose. Even simple requests for editing can leave users with sentences reworked into what feels like an AI-powered monologue.

Though tools like ChatGPT or Gemini can be powerful assistants, using them blindly can dull the human spark. Overuse of certain AI quirks, like rhetorical questions, generic phrases or overstyled punctuation, can make even an honest email feel like corporate poetry.

Writers are being advised to take the reins back. Draft the first version by hand, let the AI refine it, then strip out anything that feels artificial, especially the dashes. Keeping your natural voice intact may be the best way to make sure your readers are connecting with you, not just the machine behind the curtain.


Bluesky shuts down in Mississippi over new age law

Bluesky, a decentralised social media platform, has ceased operations in Mississippi due to a new state law requiring strict age verification.

The company said compliance would require tracking users, identifying children, and collecting sensitive personal information. For a small team like Bluesky’s, the burden of such infrastructure, alongside privacy concerns, made continued service unfeasible.

The law mandates age checks not just for explicit content, but for access to general social media. Bluesky highlighted that even the UK Online Safety Act does not require platforms to track which users are children.

The Mississippi law has sparked debate over whether efforts to protect minors are inadvertently undermining online privacy and free speech. Bluesky warned that such legislation may stifle innovation and entrench the dominance of larger tech firms.


Musicians report surge in AI fakes appearing on Spotify and iTunes

Folk singer Emily Portman has become the latest artist targeted by fraudsters releasing AI-generated music in her name. Fans alerted her to a fake album called Orca appearing on Spotify and iTunes, which she said sounded uncannily like her style but was created without her consent.

Portman has filed copyright complaints, but says the platforms were slow to act, and she has yet to regain control of her Spotify profile. Other artists, including Josh Kaufman, Jeff Tweedy, Father John Misty, Sam Beam, Teddy Thompson, and Jakob Dylan, have faced similar cases in recent weeks.

Many of the fake releases appear to originate from the same source, using similar AI artwork and citing record labels with Indonesian names. The tracks are often credited to the same songwriter, Zyan Maliq Mahardika, whose name also appears on imitations of artists in other genres.

Industry analysts say streaming platforms and distributors are struggling to keep pace with AI-driven fraud. Tatiana Cirisano of Midia Research noted that fraudsters exploit passive listeners to generate streaming revenue, while services themselves are turning to AI and machine learning to detect impostors.

Observers warn the issue is likely to worsen before it improves, drawing comparisons to the early days of online piracy. Artists and rights holders may face further challenges as law enforcement attempts to catch up with the evolving abuse of AI.
