Zhipu AI launches free agent to rival DeepSeek

Chinese AI startup Zhipu AI has introduced a free AI agent, AutoGLM Rumination, aimed at assisting users with tasks such as web browsing, travel planning, and drafting research reports.

The product was unveiled by CEO Zhang Peng at an event in Beijing, where he highlighted the agent’s use of the company’s proprietary models—GLM-Z1-Air for reasoning and GLM-4-Air-0414 as the foundation.

According to Zhipu, the new GLM-Z1-Air model outperforms DeepSeek’s R1 in both speed and resource efficiency. The launch reflects growing momentum in China’s AI sector, where companies are increasingly focusing on cost-effective solutions to meet rising demand.

AutoGLM Rumination stands out in a competitive landscape by being freely accessible through Zhipu’s official website and mobile app, unlike rival offerings such as Manus’ subscription-only AI agent. The company positions this move as part of a broader strategy to expand access and adoption.

Founded in 2019 as a spinoff from Tsinghua University, Zhipu has developed the GLM model series and claims its GLM-4 has surpassed OpenAI’s GPT-4 on several evaluation benchmarks.

In March, Zhipu secured major government-backed investment, including a 300 million yuan (US$41.5 million) contribution from Chengdu.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta to use EU user data for AI training amid scrutiny

Meta Platforms has announced it will begin using public posts, comments, and user interactions with its AI tools to train its AI models in the EU, instead of limiting training data to existing US-based inputs.

The move follows the recent European rollout of Meta AI, which had been delayed since June 2024 due to data privacy concerns raised by regulators. The company said EU users of Facebook and Instagram would receive notifications outlining how their data may be used, along with a link to opt out.

Meta clarified that while questions posed to its AI and public content from adult users may be used, private messages and data from under-18s would be excluded from training.

Instead of expanding quietly, the company is now making its plans public in an attempt to meet the EU’s transparency expectations.

The shift comes after Meta paused its original launch last year at the request of Ireland’s Data Protection Commission, which expressed concerns about using social media content for AI development. The move also drew criticism from advocacy group NOYB, which has urged regulators to intervene more decisively.

Meta joins a growing list of tech firms under scrutiny in Europe. Ireland’s privacy watchdog is already investigating Elon Musk’s X and Google for similar practices involving personal data use in AI model training.

Instead of treating such probes as isolated incidents, the EU appears to be setting a precedent that could reshape how global companies handle user data in AI development.

X faces EU probe over AI data use

Elon Musk’s X platform is under formal investigation by the Irish Data Protection Commission over its alleged use of public posts from EU users to train the Grok AI chatbot.

The probe is centred on whether X Internet Unlimited Company, the platform’s newly renamed Irish entity, has adhered to key GDPR principles while sharing publicly accessible data, like posts and interactions, with its affiliate xAI, which develops the chatbot.

Concerns have grown over the lack of explicit user consent, especially as other tech giants such as Meta signal similar data usage plans.

A move like this is part of a wider regulatory push in the EU to hold AI developers accountable instead of allowing unchecked experimentation. Experts note that many AI firms have deployed tools under a ‘build first, ask later’ mindset, an approach at odds with Europe’s strict data laws.

Should regulators conclude that public data still requires user consent, it could force a dramatic shift in how AI models are developed, not just in Europe but around the world.

Enterprises are now treading carefully. The investigation into X is already affecting AI adoption across the continent, with legal and reputational risks weighing heavily on decision-makers.

In one case, a Nordic bank halted its AI rollout midstream after its legal team couldn’t confirm whether European data had been used without proper disclosure. Instead of pushing ahead, the project was rebuilt using fully documented, EU-based training data.

The consequences could stretch far beyond the EU. Ireland’s probe might become a global benchmark for how governments view user consent in the age of data scraping and machine learning.

Instead of enforcement being region-specific, this investigation could inspire similar actions from regulators in places like Singapore and Canada. As AI continues to evolve, companies may have no choice but to adopt more transparent practices or face a rising tide of legal scrutiny.

TheStage AI makes neural network optimisation easy

In a move set to ease one of the most stubborn hurdles in AI development, Delaware-based startup TheStage AI has secured $4.5 million to launch its Automatic NNs Analyzer (ANNA).

Instead of requiring months of manual fine-tuning, ANNA allows developers to optimise AI models in hours, cutting deployment costs by as much as fivefold. The technology is designed to simplify a process that has remained inaccessible to all but the largest tech firms, largely because of expensive GPU infrastructure.

TheStage AI’s system automatically compresses and refines models using techniques like quantisation and pruning, adapting them to various hardware environments without locking users into proprietary platforms.
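The two compression techniques named above can be illustrated in a few lines. The sketch below is a generic, minimal example using NumPy, not TheStage AI's actual implementation: symmetric int8 quantisation maps floating-point weights onto integers through a single scale factor, and magnitude pruning zeroes out the smallest weights.

```python
import numpy as np

def quantise_int8(weights):
    """Symmetric post-training quantisation: map floats to int8 via one scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

q, scale = quantise_int8(w)
dequantised = q.astype(np.float32) * scale       # reconstruction from int8
pruned = prune_by_magnitude(w, sparsity=0.5)     # roughly half the weights zeroed
```

Real systems combine such steps with hardware-aware calibration and retraining, which is where the engineering difficulty, and ANNA's claimed automation, lies.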

Instead of focusing on cloud-based deployment, their models, called ‘Elastic models’, can run anywhere from smartphones to on-premise GPUs. This gives startups and enterprises a cost-effective way to adjust quality and speed with a simple interface, akin to choosing video resolution on streaming platforms.

Backed by notable investors including Mehreen Malik and Atlantic Labs, and already used by companies like Recraft.ai, the startup addresses a growing need as demand shifts from AI training to real-time inference.

Unlike competitors acquired by larger corporations and tied to specific ecosystems, TheStage AI takes a dual-market approach, helping both app developers and AI researchers. Their strategy supports scale without complexity, effectively making AI optimisation available to teams of any size.

Founded by a group of PhD holders with experience at Huawei, the team combines deep academic roots with practical industry application.

By offering a tool that streamlines deployment instead of complicating it, TheStage AI hopes to enable broader use of generative AI technologies in sectors where performance and cost have long been limiting factors.

Nvidia expands AI chip production in the US amid political pressure and global shifts

Nvidia is significantly ramping up its presence in the United States by commissioning over a million square feet of manufacturing space in Arizona and Texas to build and test its powerful AI chips. The tech giant has begun producing its Blackwell chips at TSMC facilities in Phoenix and is developing large-scale ‘supercomputer’ manufacturing plants in partnership with Foxconn in Houston and Wistron in Dallas.

The company projects mass production to begin within the next 12 to 15 months, with ambitions to manufacture up to half a trillion dollars’ worth of AI infrastructure in the US over the next four years. CEO Jensen Huang emphasised that this move marks the first time the core components of global AI infrastructure are being built domestically.

He cited growing global demand, supply chain resilience, and national security as key reasons for the shift. Nvidia’s decision follows an agreement with the Trump administration that helped the company avoid export restrictions on its H20 chip, a top-tier processor still eligible for export to China.

Nvidia joins a broader wave of AI industry leaders aligning with the Trump administration’s ‘America-first’ strategy. Companies like OpenAI and Microsoft have pledged massive investments in US-based AI infrastructure, hoping to secure political goodwill and avoid regulatory hurdles.

Trump has also reportedly pressured key suppliers like TSMC to expand American operations, threatening tariffs as high as 100% if they fail to comply. Despite the enthusiasm, Nvidia’s expansion faces headwinds.

A shortage of skilled workers and potential retaliation from China—particularly over raw material access—pose serious risks. Meanwhile, Trump’s recent moves to undermine the Chips Act, which provides critical funding for domestic chipmaking, have raised concerns about the long-term viability of US semiconductor investment.

New AI tool helps spot cataracts in babies

A groundbreaking medical device designed to detect cataracts in newborns is being enhanced with the help of AI. The Neocam, a handheld digital imaging tool created by Addenbrooke’s eye surgeon, Dr Louise Allen, allows midwives to take photos of a baby’s eyes to spot congenital cataracts — the leading cause of preventable childhood blindness.

A new AI feature under development will instantly assess whether a photo is clear enough for diagnosis, streamlining the process and reducing the need for retakes. The improvements are being developed by Cambridgeshire-based consultancy 42 Technology (42T), whose software engineers are training a machine-learning model on a dataset of 46,000 anonymised images.

The AI project is backed by an innovation grant from Addenbrooke’s Charitable Trust (ACT) to make Neocam more accurate and accessible, especially in areas with limited specialist care. Neocam is currently being trialled in maternity units across the UK as part of a large-scale study called DIvO, where over 140,000 babies will have their eyes screened using both traditional methods and the new device.

Although the final results are not expected until 2027, early findings suggest Neocam has already identified rare visual conditions that would have otherwise gone undetected. Dr Allen emphasised the importance of collaboration and public support for the project, saying that the AI-enhanced Neocam could make early detection of eye conditions more reliable worldwide.

Why does it matter?

With growing support from institutions like the National Institute for Health and Care Research and ACT, this innovation could significantly improve childhood eye care across both urban and remote settings.

Benchmark backlash hits Meta’s Maverick model

Meta’s latest open-source language model, Llama 4 Maverick, has ranked poorly on a widely used AI benchmark after the company was criticised for initially using a heavily modified, unreleased version to boost its results.

LM Arena, the platform where the performance was measured, has since updated its rules and retested Meta’s vanilla version.

The plain Maverick model, officially named ‘Llama-4-Maverick-17B-128E-Instruct,’ placed behind older competitors such as OpenAI’s GPT-4o, Anthropic’s Claude 3.5 Sonnet, and Google’s Gemini 1.5 Pro.

Meta admitted that the stronger-performing variant used earlier had been ‘optimised for conversationality,’ which likely gave it an unfair advantage in LM Arena’s human-rated comparisons.

Although LM Arena’s reliability as a performance gauge has been questioned, the controversy has raised concerns over transparency and benchmarking practices in the AI industry.

Meta has since released its open-source model to developers, encouraging them to customise it for real-world use and provide feedback.

AI voice hacks put fake Musk and Zuckerberg at crosswalks

Crosswalk buttons in several Californian cities have been hacked to play AI-generated voices impersonating tech moguls Elon Musk and Mark Zuckerberg, delivering bizarre and satirical messages to pedestrians.

The spoof messages, which mock the CEOs with lines like ‘Can we be friends?’ and ‘Cooking our grandparents’ brains with AI slop,’ have been heard in Palo Alto, Redwood City, and Menlo Park.

Officials in Palo Alto confirmed that 12 intersections were affected and that the audio systems have since been disabled.

While the crosswalk signals themselves remain operational, authorities are investigating how the hack was carried out. Similar issues are being addressed in nearby cities, with local governments moving quickly to secure the compromised systems.

The prank, which uses AI voice cloning, appears to layer these spoofed messages on top of the usual accessibility features rather than replacing them entirely.

Though clearly comedic in intent, the incident has raised concerns about the growing ease with which public systems can be manipulated using generative technologies.

Microsoft users at risk from tax-themed cyberattack

As the US tax filing deadline of April 15 approaches, cybercriminals are ramping up phishing attacks designed to exploit the urgency many feel during this stressful period.

Windows users are particularly at risk, as attackers are targeting Microsoft account credentials by distributing emails disguised as tax-related reminders.

These emails include a PDF attachment titled ‘urgent reminder,’ which contains a malicious QR code. Once scanned, it leads users through fake bot protection and CAPTCHA checks before prompting them to enter their Microsoft login details, which are then sent to a server controlled by criminals.

Security researchers, including Peter Arntz from Malwarebytes, warn that the email addresses in these fake login pages are already pre-filled, making it easier for unsuspecting victims to fall into the trap.

Entering your password at this stage could hand your credentials to malicious actors, possibly operating from Russia, who may exploit your account for maximum profit.

This form of attack takes advantage of both the ticking tax clock and the stress many feel trying to meet the deadline, encouraging impulsive and risky clicks.

Importantly, this threat is not limited to Windows users or those filing taxes by the April 15 deadline. As phishing techniques become more advanced through the use of AI and automated smartphone farms, similar scams are expected to persist well beyond tax season.

The IRS rarely contacts individuals via email and never to request sensitive information through links or attachments, so any such message should be treated with suspicion instead of trust.

To stay safe, users are urged to remain vigilant and avoid clicking on links or scanning codes from unsolicited emails. Instead of relying on emails for tax updates or returns, go directly to official websites.

The IRS offers resources to help recognise and report scams, and reviewing this guidance could be an essential step in protecting your personal information, not just today, but in the months ahead.

ChatGPT hits 800 million users after viral surge

ChatGPT’s user base has doubled in recent weeks, with OpenAI CEO Sam Altman estimating up to 800 million people now use the platform weekly.

Speaking at TED 2025, Altman confirmed the surge during an on-stage conversation, acknowledging the figure after being pressed by TED curator Chris Anderson. He suggested that user growth was accelerating rapidly, putting adoption at around 10% of the global population.

The platform’s popularity has soared thanks to viral new features, including a March update that introduced Ghibli mode—an image and video generator inspired by the animation style of Studio Ghibli.

Altman noted that this single feature drew in a million users within an hour of launch. When asked about artist compensation, he said OpenAI may eventually offer automatic payments to creators whose styles are used in prompts, though safeguards remain in place to avoid generating copyrighted material.

Other major updates include the rollout of a memory function that allows ChatGPT to remember user interactions indefinitely, making it a more personalised assistant over time. Altman also spoke about the development of autonomous AI agents capable of acting on users’ behalf, framed with safety guardrails.

While acknowledging fears of AI replacing human jobs, he encouraged a view of AI as a tool to unlock greater capabilities rather than a threat to livelihoods.

For more information on these topics, visit diplomacy.edu.