Trump-backed WLFI boosts crypto portfolio with SEI token acquisition

World Liberty Financial (WLFI), a cryptocurrency project backed by the Trump family, has added 4.89 million SEI tokens to its portfolio. The purchase, valued at approximately $775,000, was made using USDC transferred from the project’s main wallet.

The move expands WLFI’s growing token collection, which already includes Bitcoin (BTC), Ether (ETH), and Tron (TRX). WLFI’s portfolio now spans 11 different tokens, amounting to over $346 million in investments.

Despite this large accumulation, the project has yet to realise a profit, with its portfolio currently down by $145.8 million. Its Ethereum holdings have suffered a particular blow, with losses exceeding $114 million.

The SEI acquisition comes amid growing speculation surrounding the Trump family’s involvement in the crypto market. WLFI’s proposal for a USD1 stablecoin has raised concerns among lawmakers about its potential to replace the US dollar in federal transactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Zhipu AI launches free agent to rival DeepSeek

Chinese AI startup Zhipu AI has introduced a free AI agent, AutoGLM Rumination, aimed at assisting users with tasks such as web browsing, travel planning, and drafting research reports.

The product was unveiled by CEO Zhang Peng at an event in Beijing, where he highlighted the agent’s use of the company’s proprietary models—GLM-Z1-Air for reasoning and GLM-4-Air-0414 as the foundation.

According to Zhipu, the new GLM-Z1-Air model outperforms DeepSeek’s R1 in both speed and resource efficiency. The launch reflects growing momentum in China’s AI sector, where companies are increasingly focusing on cost-effective solutions to meet rising demand.

AutoGLM Rumination stands out in a competitive landscape by being freely accessible through Zhipu’s official website and mobile app, unlike rival offerings such as Manus’ subscription-only AI agent. The company positions this move as part of a broader strategy to expand access and adoption.

Founded in 2019 as a spinoff from Tsinghua University, Zhipu has developed the GLM model series and claims its GLM-4 has surpassed OpenAI’s GPT-4 on several evaluation benchmarks.

In March, Zhipu secured major government-backed investment, including a 300 million yuan (US$41.5 million) contribution from Chengdu.

Meta to use EU user data for AI training amid scrutiny

Meta Platforms has announced it will begin using public posts, comments, and user interactions with its AI tools to train its AI models in the EU, instead of limiting training data to existing US-based inputs.

The move follows the recent European rollout of Meta AI, which had been delayed since June 2024 due to data privacy concerns raised by regulators. The company said EU users of Facebook and Instagram would receive notifications outlining how their data may be used, along with a link to opt out.

Meta clarified that while questions posed to its AI and public content from adult users may be used, private messages and data from under-18s would be excluded from training.

Instead of expanding quietly, the company is now making its plans public in an attempt to meet the EU’s transparency expectations.

The shift comes after Meta paused its original launch last year at the request of Ireland’s Data Protection Commission, which expressed concerns about using social media content for AI development. The move also drew criticism from advocacy group NOYB, which has urged regulators to intervene more decisively.

Meta joins a growing list of tech firms under scrutiny in Europe. Ireland’s privacy watchdog is already investigating Elon Musk’s X and Google for similar practices involving personal data use in AI model training.

Instead of treating such probes as isolated incidents, the EU appears to be setting a precedent that could reshape how global companies handle user data in AI development.

Meta under fire for scrapping diversity and moderation policies

The NAACP Legal Defense Fund (LDF) has withdrawn from Meta’s civil rights advisory group, citing deep concerns over the company’s rollback of diversity, equity and inclusion (DEI) policies and changes to content moderation.

The decision follows Meta’s January announcement that it would end DEI programmes, eliminate fact-checking teams, and revise moderation rules across its platforms.

Civil rights organisations, including LDF, expressed alarm at the time, warning that the changes could silence marginalised voices and increase the risk of online harm.

In a letter to Meta CEO Mark Zuckerberg, they criticised the company for failing to consult the advisory group or consider the impact on protected communities. LDF’s Todd A Cox later said the policy shift posed a ‘grave risk’ to Black communities and public discourse.

LDF also noted that the company had seen progress under previous DEI policies, including a significant increase in Black and Hispanic employees.

Its reversal, the group argues, may breach federal civil rights laws and expose Meta to legal consequences.

LDF urged Meta to assess the effects of its policy changes and increase transparency about how harmful content is reported and removed. Meta has not commented publicly on the matter.

Microsoft users at risk from tax-themed cyberattack

As the US tax filing deadline of April 15 approaches, cybercriminals are ramping up phishing attacks designed to exploit the urgency many feel during this stressful period.

Windows users are particularly at risk, as attackers are targeting Microsoft account credentials by distributing emails disguised as tax-related reminders.

These emails include a PDF attachment titled ‘urgent reminder’, which contains a malicious QR code. Once scanned, it leads users through fake bot-protection and CAPTCHA checks before prompting them to enter their Microsoft login details, which are then sent to a server controlled by criminals.

Security researchers, including Peter Arntz from Malwarebytes, warn that the email addresses in these fake login pages are already pre-filled, making it easier for unsuspecting victims to fall into the trap.

Entering your password at this stage could hand your credentials to malicious actors, possibly operating from Russia, who may exploit your account for profit.

This form of attack takes advantage of both the ticking tax clock and the stress many feel trying to meet the deadline, encouraging impulsive and risky clicks.

Importantly, this threat is not limited to Windows users or those filing taxes by the April 15 deadline. As phishing techniques become more advanced through the use of AI and automated smartphone farms, similar scams are expected to persist well beyond tax season.

The IRS rarely contacts individuals via email and never to request sensitive information through links or attachments, so any such message should be treated with suspicion instead of trust.

To stay safe, users are urged to remain vigilant and avoid clicking on links or scanning codes from unsolicited emails. Instead of relying on emails for tax updates or returns, go directly to official websites.

The IRS offers resources to help recognise and report scams, and reviewing this guidance could be an essential step in protecting your personal information, not just today, but in the months ahead.

China halts rare earth exports in trade war escalation

Exports of critical rare earth minerals and magnets from China have ground to a halt following new export restrictions, threatening global supply chains across the semiconductor, automotive, defence, and energy sectors.

The suspension took effect on 4 April, after Beijing imposed strict new licensing requirements in response to steep United States tariffs introduced by President Donald Trump.

China dominates the global supply of rare earth materials such as dysprosium and terbium, which are essential for manufacturing everything from electric vehicles to drones and missiles.

Industry insiders say licence applications could take several months to process, sparking fears of shortages if the halt persists beyond two months. Traders estimate shipments might resume after at least 60 days, but delays could stretch further.

Trump defended the tariffs, claiming they are necessary to address trade imbalances, particularly with China. He hinted at further tariffs targeting semiconductors and electronic devices, while his commerce secretary confirmed that smartphones and laptops may also be subject to new levies.

Critics, including Senator Elizabeth Warren, have condemned the approach, warning it will lead to confusion and instability in global markets.

Hackers leak data from Indian software firm in major breach

A major cybersecurity breach has reportedly compromised a software company based in India, with hackers claiming responsibility for stealing nearly 1.6 million rows of sensitive data on 19 December 2024.

A hacker identified as @303 is said to have accessed and exposed customer information and internal credentials, with the dataset later appearing on a dark web forum via a user known as ‘frog’.

The leaked data includes email addresses linked to major Indian insurance providers, contact numbers, and possible administrative access credentials.

Analysts found that the sample files feature information tied to employees of companies such as HDFC Ergo, Bajaj Allianz, and ICICI Lombard, suggesting widespread exposure across the sector.

Despite the firm’s stated dedication to safeguarding data, the incident raises doubts about its cybersecurity protocols.

The breach also comes as India’s insurance regulator, IRDAI, has begun enforcing stricter cyber measures. In March 2025, it instructed insurers to appoint forensic auditors in advance and perform full IT audits instead of waiting for threats to surface.

The breach follows a string of high-profile incidents, including the Star Health Insurance leak affecting 31 million customers.

With cyberattacks in India up by 261% in early 2024 and the average cost of a breach now ₹19.5 crore, experts warn that insurance firms must adopt stronger protections instead of relying on outdated defences.

For more information on these topics, visit diplomacy.edu.

EU plans new law to tackle online consumer manipulation

The European Commission is preparing to introduce the Digital Fairness Act, a new law that aims to boost consumer protection online without adding more regulatory burden on businesses.

Justice Commissioner Michael McGrath described the upcoming legislation as both pro-consumer and pro-business during a speech at the European Retail Innovation Summit, seeking to calm industry concerns about further EU regulation following the Digital Services Act and the Digital Markets Act.

Designed to tackle deceptive practices in the digital space, the law will address issues such as manipulative design tricks known as ‘dark patterns’, influencer marketing, and personalised pricing based on user profiling.

It will also target concerns around addictive service design and virtual currencies in video games—areas where current EU consumer rules fall short. The legislation will be based on last year’s Digital Fairness Fitness Check, which highlighted regulatory gaps in the online marketplace.

McGrath acknowledged the cost of complying with EU-wide consumer protection measures, which can run into millions for businesses.

However, he stressed that the new act would provide legal clarity and ease administrative pressure, particularly for smaller companies, instead of complicating compliance requirements further.

A public consultation will begin in the coming weeks, ahead of a formal legislative proposal expected by mid-2026.

Maria-Myrto Kanellopoulou, head of the Commission’s consumer law unit, promised a thoughtful approach, saying the process would be both careful and thorough to ensure the right balance is struck.

Victims of AI-driven sex crimes in Korea continue to grow

South Korea is facing a sharp rise in AI-related digital sex crimes, with deepfake pornography and online abuse increasingly affecting young women and children.

According to figures released by the Ministry of Gender Equality and Family and the Women’s Human Rights Institute, over 10,000 people sought help last year, marking a 14.7 percent increase from 2023.

Women made up more than 70 percent of those who contacted the Advocacy Center for Online Sexual Abuse Victims.

The majority were in their teens or twenties, with abuse often occurring via social media, messaging apps, and anonymous platforms. A growing portion of victims, including children under 10, were targeted due to the easy accessibility of AI tools.

The most frequently reported issue was ‘distribution anxiety,’ where victims feared the release of sensitive or manipulated videos, followed by blackmail and illegal filming.

Deepfake cases more than tripled in one year, with synthetic content often involving the use of female students’ images. In one notable incident, a university student and his peers used deepfake techniques to create explicit fake images of classmates and shared them on Telegram.

With over 300,000 pieces of illicit content removed in 2024, authorities warn that the majority of illegal websites are hosted overseas, complicating efforts to take down harmful material.

The South Korean government plans to strengthen its response by expanding educational outreach, supporting victims further, and implementing new laws to prevent secondary harm by allowing the removal of personal information alongside explicit images.

ChatGPT accused of enabling fake document creation

Concerns over digital security have intensified after reports revealed that OpenAI’s ChatGPT has been used to generate fake identification cards.

The incident follows the recent introduction of a popular Ghibli-style feature, which led to a sharp rise in usage and viral image generation across social platforms.

Among the fakes circulating online were forged versions of India’s Aadhaar ID, created with fabricated names, photos, and even QR codes.

While the Ghibli release helped push ChatGPT past 150 million active users, the tool’s advanced capabilities have now drawn criticism.

Some users demonstrated how the AI could replicate Aadhaar and PAN cards with surprising accuracy, even using images of well-known figures like OpenAI CEO Sam Altman and Tesla’s Elon Musk. The ease with which these near-perfect replicas were produced has raised alarms about identity theft and fraud.

The emergence of AI-generated IDs has reignited calls for clearer AI regulation and transparency. Critics question how AI systems gained access to the formatting of official documents, amid accusations that sensitive datasets may have been used in model training.

As generative AI continues to evolve, pressure is mounting on both developers and regulators to address the growing risk of misuse.