China’s robotics industry set to double by 2028, led by drones and humanoid robots

China’s robotics industry is on course to double in size by 2028, with Morgan Stanley projecting market growth from US$47 billion in 2024 to US$108 billion.

With an annual expansion rate of 23 percent, the country is expected to strengthen its leadership in this fast-evolving field. Analysts credit China’s drive for innovation and cost efficiency as key to advancing next-generation robotics.

A cornerstone of the ‘Made in China 2025’ initiative, robotics is central to the nation’s goal of dominating global high-tech industries. Last year, China accounted for 40 percent of the worldwide robotics market and over half of all industrial robot installations.

Recent data shows industrial robot production surged 35.5 percent in May, while service robot output climbed nearly 14 percent.

Morgan Stanley anticipates drones will remain China’s largest robotics segment, set to grow from US$19 billion to US$40 billion by 2028.

Meanwhile, the humanoid robot sector is expected to see an annual growth rate of 63 percent, expanding from US$300 million in 2025 to US$3.4 billion by 2030. By 2050, China could be home to 302 million humanoid robots, around 30 percent of the projected global total.
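
The quoted growth rates are consistent with the headline figures; a quick compound-growth check (figures from the article, in US$ billions):

```python
# Sanity check: do the quoted annual rates compound to the headline figures?
def project(start_bn, annual_rate, years):
    return start_bn * (1 + annual_rate) ** years

print(round(project(47, 0.23, 4)))      # whole market, 2024 -> 2028: 108
print(round(project(0.3, 0.63, 5), 2))  # humanoids, 2025 -> 2030: 3.45
```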

The researchers describe 2025 as a milestone year, marking the start of mass humanoid robot production.

They emphasise that automation is already reshaping China’s manufacturing industry, boosting productivity and quality instead of simply replacing workers and setting the stage for a brighter industrial future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Workplace deepfake abuse: What employers must know

Deepfake technology—AI-generated videos, images, and audio—has entered the workplace in alarming ways.

Once difficult to produce, deepfakes are now widely accessible and are being used to harass, impersonate, or intimidate employees. These synthetic media attacks can cause deep psychological harm, damage reputations, and expose employers to serious legal risks.

While US federal law hasn’t yet caught up, new legislation such as the federal Take It Down Act and Florida’s Brooke’s Law requires platforms to remove non-consensual deepfake content within 48 hours.

Meanwhile, employers could face claims under existing workplace laws if they fail to act on deepfake harassment. Inaction may lead to lawsuits for creating a hostile environment or for negligent oversight.

Most workplace policies still don’t mention synthetic media, a gap that creates blind spots, especially during investigations, where fake images or audio could wrongly influence decisions.

Employers need to shift how they assess evidence and protect both accused and accuser fairly. It’s time to update handbooks, train staff, and build clear response plans that include digital impersonation and deepfake abuse.

By treating deepfakes as a modern form of harassment instead of just a tech issue, organisations can respond faster, protect staff, and maintain trust. Proactive training, updated policies, and legal awareness will be crucial to workplace safety in the age of AI.

Anubis ransomware threatens permanent data loss

A new ransomware threat known as Anubis is making waves in the cybersecurity world, combining file encryption with aggressive monetisation tactics and a rare file-wiping feature that prevents data recovery.

Victims discover their files renamed with the .anubis extension and are presented with a ransom note warning that stolen data will be leaked unless payment is made.

What sets Anubis apart is its ability to permanently erase file contents using a wipe command that truncates them to zero-byte shells. The filenames remain, but the data inside is gone, making recovery impossible.
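
The ‘zero-byte shell’ behaviour amounts to in-place truncation: the directory entry survives while the contents are destroyed. A harmless sketch of the effect on a throwaway temporary file (an illustration only, not the malware’s actual code):

```python
import os
import tempfile

# A throwaway file standing in for a victim's document.
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w") as f:
    f.write("important data")

# Wipe in place: opening with mode "w" truncates the file to zero bytes,
# so the directory entry (the name) survives but the contents are gone.
with open(path, "w"):
    pass

name_survives = os.path.exists(path)
size_after = os.path.getsize(path)
print(name_survives, size_after)  # -> True 0
os.remove(path)  # clean up the demo file
```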

Researchers have flagged the destructive feature as highly unusual for ransomware, typically seen in cyberespionage rather than financially motivated attacks.

The malware also attempts to change the victim’s desktop wallpaper to reinforce the impact, although in current samples, the image file was missing. Anubis spreads through phishing emails and uses tactics like command-line scripting and stolen tokens to escalate privileges and evade defences.

It operates on a ransomware-as-a-service model, meaning less-skilled cybercriminals can rent and deploy it easily.

Security experts urge organisations to treat Anubis as more than a typical ransomware threat. Besides strong backup practices, firms are advised to improve email security, limit user privileges, and train staff to spot phishing attempts.

As attackers look to profit from stolen access and unrecoverable destruction, prevention becomes the only true line of defence.

Diplo empowers Armenian civil society on digital issues

A new round of training sessions has been launched in Armenia to strengthen civil society’s understanding of digital governance. The initiative, which began on 12 June, brings together NGO representatives from both the regions and the capital to deepen their knowledge of crucial digital topics, including internet governance, AI, and digital rights.

The training program combines online and offline components, aiming to equip participants with the tools needed to actively shape the digital future of Armenia. By increasing the digital competence of civil society actors, the program aspires to promote broader democratic engagement and more informed contributions to policy discussions in the digital space.

The educational initiative is being carried out by Diplo as part of the ‘Digital Democracy for ALL’ measure by GIZ (Deutsche Gesellschaft für Internationale Zusammenarbeit), in close cooperation with several regional GIZ projects that focus on civil society and public administration reform in Eastern Partnership countries. The sessions have been praised for their depth and impact, with particular appreciation extended to Angela Saghatelyan for her leadership, and to Diplo’s experts Vladimir Radunovic, Katarina Bojovic, and Marília Maciel for their contributions.

Trump unveils gold smartphone and new 5G wireless service

US President Donald Trump and his sons have launched a mobile phone service called Trump Mobile 5G, alongside plans to release a gold-coloured smartphone branded with the Trump name.

The service is being offered through partnerships with all three major US mobile networks, though they are not named directly.

The monthly plan, known as the ’47 Plan’, costs $47.45, a reference to Trump’s position as the 45th and 47th president. Customers can keep their current Android or iPhone devices, connecting with either a physical SIM or an eSIM.

A new Trump-branded Android device, the T1, will launch in September. Priced at $499, it comes with Android 15, a 6.8-inch screen and biometric features like fingerprint scanning and AI facial recognition.

At a press event in New York, Donald Trump Jr. and Eric Trump introduced the initiative, saying it would combine high-quality service with an ‘America First’ approach.

They emphasised that the company is US-based, including its round-the-clock customer service, which promises real human support instead of automated systems.

While some critics may see the move as political branding, the Trump Organisation framed it as a business venture.

The company has already earned hundreds of millions from Trump-branded consumer goods. As with other mobile providers, the new service will fall under the regulatory oversight of the Federal Communications Commission, led by a Trump-appointed chair.

ChatGPT and generative AI have polluted the internet — and may have broken themselves

The explosion of generative AI tools like ChatGPT has flooded the internet with low-quality, AI-generated content, making it harder for future models to learn from authentic human knowledge.

As AI continues to train on increasingly polluted data, a feedback loop forms in which models imitate content that was itself machine-generated, leading to a steady drop in originality and usefulness. The worrying trend is referred to as ‘model collapse’.
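
The loop can be demonstrated with a toy ‘model’ that simply fits a Gaussian to its data and then trains the next generation on its own samples (a deliberately simplified sketch of model collapse, not a language model; all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: a unit Gaussian standing in for human-written content.
data = rng.normal(loc=0.0, scale=1.0, size=20)

spreads = [data.std()]
for generation in range(1000):
    # Fit a trivial "model" (mean and spread), then replace the corpus
    # entirely with the model's own samples -- the pollution loop.
    mu, sigma = data.mean(), data.std()
    data = rng.normal(loc=mu, scale=sigma, size=20)
    spreads.append(data.std())

# Diversity collapses: each generation slightly underestimates the spread,
# and the errors compound until little of the original variety remains.
print(spreads[0], spreads[-1])  # final spread is a tiny fraction of the first
```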

To illustrate the risk, researchers compare clean pre-AI data to ‘low-background steel’ — a rare kind of steel made before nuclear testing in 1945, which remains vital for specific medical and scientific uses.

Just as modern steel became contaminated by radiation, modern data is being tainted by artificial content. Cambridge researcher Maurice Chiodo notes that pre-2022 data is now seen as ‘safe, fine, clean’, while everything after is considered ‘dirty’.

A key concern is that techniques like retrieval-augmented generation, which allow AI to pull real-time data from the internet, risk spreading even more flawed content. Some research already shows that it leads to more ‘unsafe’ outputs.

If developers rely on such polluted data, scaling models by adding more information becomes far less effective, potentially hitting a wall in progress.

Chiodo argues that future AI development could be severely limited without a clean data reserve. He and his colleagues urge the introduction of clear labelling and tighter controls on AI content.

However, industry resistance to regulation might make meaningful reform difficult, raising doubts about whether the pollution can be reversed.

Nvidia’s Jensen Huang clashes with Anthropic CEO over AI job loss predictions

A fresh dispute has erupted between Nvidia and Anthropic after CEO Dario Amodei warned that AI could eliminate 50% of entry-level white-collar jobs in the next five years, potentially causing a 20% unemployment spike.

Nvidia’s Jensen Huang dismissed the claim, saying at VivaTech in Paris that he ‘pretty much disagreed with almost everything’ Amodei says, accusing him of fearmongering and advocating for a monopoly on AI development.

Huang emphasised the importance of open, transparent development, stating, ‘If you want things to be done safely and responsibly, you do it in the open… Don’t do it in a dark room and tell me it’s safe.’

Anthropic pushed back, saying Amodei supports national AI transparency standards and never claimed only Anthropic can build safe AI.

The clash comes amid growing scrutiny of Anthropic, which faces a lawsuit from Reddit for allegedly scraping content without consent and controversy over a Claude 4 Opus test that simulated blackmail scenarios.

The companies have also clashed over AI export controls to China, with Anthropic urging tighter rules and Nvidia denying reports that its chips were smuggled using extreme methods like fake pregnancies or shipments with live lobsters.

Huang maintains an optimistic outlook, saying AI will create new jobs in fields like prompt engineering. At the same time, Amodei has consistently warned that the economic fallout could be severe, rejecting universal basic income as a long-term solution.

Google Messages beta bug causes crashes on Pixel and Samsung Phones

A bug in the latest Google Messages beta (version 20250610_00_RC02.phone.openbeta_dynamic) is causing the app to crash on Pixel and Samsung phones when users press the forward button—the circular icon with two arrow points used to share text or images.

The crash also occurs when sharing content via Android’s system Share sheet from apps like Chrome. Affected users can check their version by going to Settings > Apps > See all apps > Messages, and scrolling to the bottom of the App info page.

Until a fix is released, users can manually copy and paste links, or share images from the Gallery within a conversation thread. To stop crashes, users can leave the beta program and install the stable version of Google Messages from the Play Store.

Meanwhile, Google is testing a Material 3 redesign for the Messages settings page, featuring new toggles and a more expressive UI. This design update hasn’t reached all devices yet, including Pixel phones running Android 16 QPR1 Beta 2.

New cyberattack method poses major threat to smart grids, study finds

A new study published in ‘Engineering’ highlights a growing cybersecurity threat to smart grids as they become more complex due to increased integration of distributed energy sources.

The research, conducted by Zengji Liu, Mengge Liu, Qi Wang, and Yi Tang, focuses on a sophisticated form of cyberattack known as a false data injection attack (FDIA) that targets data-driven algorithms used in smart grid operations.

As modern power systems adopt technologies like battery storage and solar panels, they rely more heavily on algorithms to manage energy distribution and grid stability. However, these algorithms can be exploited.

The study introduces a novel black-box FDIA method that injects false data directly at the measurement modules of distributed power supplies, using generative adversarial networks (GANs) to produce stealthy attack vectors.

What makes this method particularly dangerous is that it doesn’t require detailed knowledge of the grid’s internal workings, making it more practical and harder to detect in real-world scenarios.
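
For background on why such injections evade detection, the textbook stealth condition is simple: if the attack vector lies in the column space of the measurement model, the state estimator’s residual test never fires. A minimal sketch with made-up numbers (this is the classic white-box condition, not the paper’s GAN-based black-box method):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy DC state estimation: measurements z = H x + noise.
H = rng.normal(size=(8, 3))   # made-up measurement matrix
x_true = rng.normal(size=3)
z = H @ x_true + 0.01 * rng.normal(size=8)

def residual_norm(measurements):
    # Least-squares state estimate and the residual that bad-data
    # detection would alarm on if it grew.
    x_hat, *_ = np.linalg.lstsq(H, measurements, rcond=None)
    return np.linalg.norm(measurements - H @ x_hat)

c = np.array([0.4, -0.1, 0.25])  # attacker's chosen shift of the state estimate
a = H @ c                        # stealthy injection: a lies in the range of H

r_clean = residual_norm(z)
r_attacked = residual_norm(z + a)
print(abs(r_clean - r_attacked) < 1e-9)  # -> True: the detector sees nothing
```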

The researchers also proposed an approach to estimate controller and filter parameters in distributed energy systems, making it easier to launch these attacks.

To test the method, the team simulated attacks on the New England 39-bus system, specifically targeting a deep learning model used for transient stability prediction. Results showed a dramatic drop in accuracy—from 98.75% to 56%—after the attack.

The attack also proved effective across multiple neural network models and on larger grid systems, such as IEEE’s 118-bus and 145-bus networks.

These findings underscore the urgent need for better cybersecurity defences in the evolving smart grid landscape. As systems grow more complex and reliant on AI-driven management, developing robust protection against FDIA threats will be critical.

Armenia plans major AI hub with NVIDIA and Firebird

Armenia has unveiled plans to develop a US$500 million AI supercomputing hub in partnership with US tech leader NVIDIA, AI cloud firm Firebird, and local telecoms group Team.

Announced at the Viva Technology conference in Paris, the initiative marks the largest tech investment ever seen in the South Caucasus.

Due to open in 2026, the facility will house thousands of NVIDIA’s Blackwell GPUs and offer more than 100 megawatts of scalable computing power. Designed to advance AI research, training and entrepreneurship, the hub aims to position Armenia as a leading player in global AI development.

Prime Minister Nikol Pashinyan described the project as the ‘Stargate of Armenia’, underscoring its potential to transform the national tech sector.

Firebird CEO Razmig Hovaghimian said the hub would help develop local talent and attract international attention, while the Afeyan Foundation, led by Noubar Afeyan, is set to come on board as a founding investor.

Instead of limiting its role to funding, the Armenian government will also provide land, tax breaks and simplified regulation to support the project, strengthening its push toward a competitive digital economy.
