Workplace deepfake abuse: What employers must know

Deepfake technology—AI-generated videos, images, and audio—has entered the workplace in alarming ways.

Once difficult to produce, deepfakes are now widely accessible and are being used to harass, impersonate, or intimidate employees. These synthetic media attacks can cause deep psychological harm, damage reputations, and expose employers to serious legal risks.

While US federal law is still catching up, new legislation such as the Take It Down Act and Florida’s Brooke’s Law requires platforms to remove non-consensual deepfake content within 48 hours.

Meanwhile, employers could face claims under existing workplace laws if they fail to act on deepfake harassment. Inaction may lead to lawsuits for fostering a hostile work environment or for negligent oversight.

Most workplace policies still don’t mention synthetic media, a gap that creates blind spots, especially during investigations, where fake images or audio could wrongly influence decisions.

Employers need to shift how they assess evidence and protect both accused and accuser fairly. It’s time to update handbooks, train staff, and build clear response plans that include digital impersonation and deepfake abuse.

By treating deepfakes as a modern form of harassment instead of just a tech issue, organisations can respond faster, protect staff, and maintain trust. Proactive training, updated policies, and legal awareness will be crucial to workplace safety in the age of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anubis ransomware threatens permanent data loss

A new ransomware threat known as Anubis is making waves in the cybersecurity world, combining file encryption with aggressive monetisation tactics and a rare file-wiping feature that prevents data recovery.

Victims discover their files renamed with the .anubis extension and are presented with a ransom note warning that stolen data will be leaked unless payment is made.

What sets Anubis apart is its ability to permanently erase file contents using a command that overwrites them with zero-byte shells. Although the filenames remain, the data inside is lost forever, rendering recovery impossible.
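
Based on the behaviour described above, a successful wipe would leave two visible traces on disk: the ‘.anubis’ extension and files whose contents have been reduced to zero bytes. The snippet below is a minimal, illustrative sketch of how a responder might scan for files showing both traits; the function name, paths, and logic are assumptions for illustration, not indicators published by researchers.

```python
# Illustrative only: scan for the two traces described above (the ".anubis"
# extension and zero-byte file contents). Not based on published indicators.
from pathlib import Path


def find_suspected_wiped_files(root: str) -> list[Path]:
    """Return files under `root` that carry the .anubis extension and are empty."""
    suspects = []
    for path in Path(root).rglob("*.anubis"):
        if path.is_file() and path.stat().st_size == 0:
            suspects.append(path)
    return suspects


if __name__ == "__main__":
    for f in find_suspected_wiped_files("."):
        print(f"Possible wiped file: {f}")
```

Any hits would still need manual confirmation, and, as noted below, restoring from offline backups remains the only realistic recovery path once contents have been overwritten.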

Researchers have flagged the destructive feature as highly unusual for ransomware, as such wiping behaviour is typically seen in cyberespionage rather than in financially motivated attacks.

The malware also attempts to change the victim’s desktop wallpaper to reinforce the impact, although in current samples, the image file was missing. Anubis spreads through phishing emails and uses tactics like command-line scripting and stolen tokens to escalate privileges and evade defences.

It operates on a ransomware-as-a-service model, meaning less-skilled cybercriminals can rent and deploy it easily.

Security experts urge organisations to treat Anubis as more than a typical ransomware threat. Besides strong backup practices, firms are advised to improve email security, limit user privileges, and train staff to spot phishing attempts.

As attackers look to profit from stolen access and unrecoverable destruction, prevention becomes the only true line of defence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Diplo empowers Armenian civil society on digital issues

A new round of training sessions has been launched in Armenia to strengthen civil society’s understanding of digital governance. The initiative, which began on 12 June, brings together NGO representatives from both the regions and the capital to deepen their knowledge of crucial digital topics, including internet governance, AI, and digital rights.

The training program combines online and offline components, aiming to equip participants with the tools needed to actively shape the digital future of Armenia. By increasing the digital competence of civil society actors, the program aspires to promote broader democratic engagement and more informed contributions to policy discussions in the digital space.

The educational initiative is being carried out by Diplo as part of the ‘Digital Democracy for ALL’ measure by GIZ (Deutsche Gesellschaft für Internationale Zusammenarbeit), in close cooperation with several regional GIZ projects that focus on civil society and public administration reform in Eastern Partnership countries. The sessions have been praised for their depth and impact, with particular appreciation extended to Angela Saghatelyan for her leadership, and to Diplo’s experts Vladimir Radunovic, Katarina Bojovic, and Marília Maciel for their contributions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump unveils gold smartphone and new 5G wireless service

US President Donald Trump and his sons have launched a mobile phone service called Trump Mobile 5G, alongside plans to release a gold-coloured smartphone branded with the Trump name.

The service is being offered through partnerships with all three major US mobile networks, though they are not named directly.

The monthly plan, known as the ’47 Plan’, costs $47.45, a nod to Trump’s position as the 45th and 47th president. Customers can keep their current Android or iPhone devices, using either a physical SIM or an eSIM.

A new Trump-branded Android device, the T1, will launch in September. Priced at $499, it comes with Android 15, a 6.8-inch screen and biometric features like fingerprint scanning and AI facial recognition.

At a press event in New York, Donald Trump Jr. and Eric Trump introduced the initiative, saying it would combine high-quality service with an ‘America First’ approach.

They emphasised that the company is US-based, including its round-the-clock customer service, which promises real human support instead of automated systems.

While some critics may see the move as political branding, the Trump Organisation framed it as a business venture.

The company has already earned hundreds of millions from Trump-branded consumer goods. As with other mobile providers, the new service will fall under the regulatory oversight of the Federal Communications Commission, led by a Trump-appointed chair.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scientists convert brain signals into words using AI

Australian scientists have developed an AI model that converts brainwaves into spoken words and sentences using a wearable EEG cap.

The system, created at the University of Technology Sydney, marks a significant step in communication technology and cognitive care.

The deep learning model, designed by Daniel Leong, Charles Zhou, and Chin-Teng Lin, currently works with a limited vocabulary but has achieved around 75% accuracy. Researchers aim to improve this to 90% by expanding training data and refining brainwave analysis.
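
For readers unfamiliar with how a limited-vocabulary brain-to-text system is typically structured, the sketch below shows the general shape of such a pipeline: a neural network takes a fixed-length window of multi-channel EEG and outputs one word from a small, closed vocabulary. It is a hypothetical illustration only; the architecture, channel count, window length, and vocabulary are assumptions and do not describe the UTS model.

```python
# Minimal, hypothetical sketch of a limited-vocabulary EEG-to-word classifier.
# Channel count, window length, architecture and vocabulary are assumptions;
# this is not the University of Technology Sydney model.
import torch
import torch.nn as nn

VOCAB = ["yes", "no", "water", "help", "stop"]  # assumed closed vocabulary


class EEGWordClassifier(nn.Module):
    def __init__(self, n_channels: int = 32, n_words: int = len(VOCAB)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),  # temporal filters
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time dimension
            nn.Flatten(),
            nn.Linear(64, n_words),   # one logit per word in the vocabulary
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples) — one fixed-length EEG window per item
        return self.net(x)


model = EEGWordClassifier()
window = torch.randn(1, 32, 256)                  # simulated 32-channel EEG window
word = VOCAB[model(window).argmax(dim=1).item()]  # untrained, so effectively random
print(word)
```

Improving accuracy from 75% towards 90%, as the researchers intend, would in this framing come from more training data and better-extracted brainwave features rather than from changes to the decoding step itself.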

Bioelectronics expert Mohit Shivdasani noted that AI now detects neural patterns previously hidden from human interpretation. Future uses include real-time thought-to-text interfaces or direct communication between people via brain signals.

The breakthrough opens new possibilities for patients with speech or movement impairments, pointing to future human-machine interaction that bypasses traditional input methods.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Smart machines, dark intentions: UN urges global action on AI threats

The United Nations has warned that terrorists could seize control of AI-powered vehicles to launch devastating attacks in public spaces. A new report outlines how extremists might exploit autonomous cars and drones to bypass traditional defences.

There are also fears that AI could be used for facial recognition targeting and mass ‘swarm’ assaults using aerial devices. Experts suggest that key parts of modern infrastructure could be turned against the public if hacked.

Britain’s updated counter-terrorism strategy now reflects these growing concerns, including the risk of AI-generated propaganda and detailed attack planning. The UN has called for immediate global cooperation to limit how such technologies can be misused.

Security officials maintain that AI also offers valuable tools in the fight against extremism, enabling quicker intelligence processing and real-time threat identification. Nonetheless, authorities have been urged to prepare for worst-case scenarios involving AI-directed violence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Armenia plans major AI hub with NVIDIA and Firebird

Armenia has unveiled plans to develop a $500mn AI supercomputing hub in partnership with US tech leader NVIDIA, AI cloud firm Firebird, and local telecoms group Team.

Announced at the Viva Technology conference in Paris, the initiative marks the largest tech investment ever seen in the South Caucasus.

Due to open in 2026, the facility will house thousands of NVIDIA’s Blackwell GPUs and offer more than 100 megawatts of scalable computing power. Designed to advance AI research, training and entrepreneurship, the hub aims to position Armenia as a leading player in global AI development.

Prime Minister Nikol Pashinyan described the project as the ‘Stargate of Armenia’, underscoring its potential to transform the national tech sector.

Firebird CEO Razmig Hovaghimian said the hub would help develop local talent and attract international attention, while the Afeyan Foundation, led by Noubar Afeyan, is set to come on board as a founding investor.

Instead of limiting its role to funding, the Armenian government will also provide land, tax breaks and simplified regulation to support the project, strengthening its push toward a competitive digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon launches AU$ 20 bn investment in Australian solar-powered data centres

Amazon will invest AU$ 20 billion to expand its data centre infrastructure in Australia, using solar and wind power instead of traditional energy sources.

The plan includes power purchase agreements with three utility-scale solar plants developed by European Energy, one of which—Mokoan Solar Park in Victoria—is already operational. The other two projects, Winton North and Bullyard Solar Parks, are expected to lift total solar capacity to 333MW.

The investment supports Australia’s aim to enhance its cloud and AI capabilities. Amazon’s commitment includes purchasing over 170MW of power from these projects, contributing to both data centre growth and the country’s renewable energy transition.

According to the International Energy Agency, electricity demand from data centres is expected to more than double by 2030, driven by AI.

Amazon Web Services CEO Matt Garman said the move positions Australia to benefit from AI’s economic potential. The company, already active in solar projects across New South Wales, Queensland and Victoria, continues to prioritise renewables to decarbonise operations and meet surging energy needs.

Instead of pursuing growth through conventional means, Amazon’s focus on clean energy could set a precedent for other tech giants expanding in the region.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI turns to Google Cloud in shift from solo AI race

OpenAI has entered into an unexpected partnership with Google, using Google Cloud to support its growing AI infrastructure needs.

Despite being fierce competitors in AI, the two tech giants recognise that long-term success may require collaboration instead of isolation.

As the demand for high-performance hardware soars, traditional rivals are joining forces to keep pace. OpenAI, previously backed heavily by Microsoft, now draws from Google’s vast cloud resources, hinting at a changing attitude in the AI race.

Rather than going it alone, firms may benefit more by leveraging each other’s strengths to accelerate development.

Google CEO Sundar Pichai, speaking on a podcast, suggested there is room for multiple winners in the AI sector. He even noted that a major competitor had ‘invited me to a dance’, underscoring a new phase of pragmatic cooperation.

While Google still faces threats to its search dominance from tools like ChatGPT, business incentives may override rivalry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI health tools need clinicians to prevent serious risks, Oxford study warns

The University of Oxford has warned that AI in healthcare, primarily through chatbots, should not operate without human oversight.

Researchers found that relying solely on AI for medical self-assessment could worsen patient outcomes instead of improving access to care. The study highlights how these tools, while fast and data-driven, fall short in delivering the judgement and empathy that only trained professionals can offer.

The findings raise alarm about the growing dependence on AI to fill gaps caused by doctor shortages and rising costs. Chatbots are often seen as scalable solutions, but without rigorous human-in-the-loop validation, they risk providing misleading or inconsistent information, particularly to vulnerable groups.

Rather than helping, they might increase health disparities by delaying diagnosis or giving patients false reassurance.

Experts are calling for safer, hybrid approaches that embed clinicians into the design and ongoing use of AI tools. The Oxford researchers stress that continuous testing, ethical safeguards and clear protocols must be in place.

Instead of replacing clinical judgement, AI should support it. The future of digital healthcare hinges not just on innovation but on responsibility and partnership between technology and human care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!