Cybercriminals are increasingly abusing legitimate administrative software to access corporate networks, making malicious activity harder to detect. Attackers are blending into normal operations by relying on trusted workforce and IT management tools rather than custom malware.
Recent campaigns have repurposed ‘Net Monitor for Employees Professional’ and ‘SimpleHelp’, tools usually used for staff oversight and remote support. Screen viewing, file management, and command features were exploited to control systems without triggering standard security alerts.
Researchers at Huntress identified the activity in early 2026, finding that the tools were used to maintain persistent, hidden access. Analysis showed that attackers were actively preparing compromised systems for follow-on attacks rather than limiting their activity to surveillance.
The access was later linked to attempts to deploy ‘Crazy’ ransomware and steal cryptocurrency, with intruders disguising the software as legitimate Microsoft services. Monitoring agents were often renamed to resemble standard cloud processes, thereby remaining active without attracting attention.
Huntress advised organisations to limit software installation rights, enforce multi-factor authentication, and audit networks for unauthorised management tools. Monitoring for antivirus tampering and suspicious program names remains critical for early detection.
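As one illustration of auditing for unauthorised management tools, a process inventory can be compared against a watchlist of remote-management software names. This is a minimal sketch only; the watchlist entries below are illustrative assumptions, not a vetted indicator list, and real detection would also need hash, path, and signer checks since attackers rename agents.

```python
# Illustrative watchlist of remote-management tool name fragments
# (lowercase substrings; examples only, not an authoritative list).
WATCHLIST = {"simplehelp", "netmonitor", "screenconnect"}

def flag_suspicious(process_names):
    """Return process names whose lowercased form contains a watchlist entry."""
    hits = []
    for name in process_names:
        lowered = name.lower()
        if any(tool in lowered for tool in WATCHLIST):
            hits.append(name)
    return hits

# A renamed agent still matches if its binary name retains a tool substring;
# agents renamed to mimic cloud processes entirely would evade this check,
# which is why name matching alone is insufficient.
print(flag_suspicious(["OneDrive.exe", "SimpleHelp Service.exe", "svchost.exe"]))
```

Name matching is deliberately the weakest layer here: as the campaigns above show, agents disguised as standard cloud processes require corroborating signals such as unexpected outbound connections or unsigned binaries.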
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
More than 100 organisations have urged governments to outlaw AI-nudification tools after a surge in non-consensual digital images.
Groups such as Amnesty International, the European Commission, and Interpol argue that the technology now fuels harmful practices that undermine human dignity and child safety. Their concerns intensified after the Grok nudification scandal, where users created sexualised images from ordinary photographs.
Campaigners warn that the tools often target women and children instead of staying within any claimed adult-only environment. Millions of manipulated images have circulated across social platforms, with many linked to blackmail, coercion and child sexual abuse material.
Experts say the trauma caused by these AI images is no less serious because the abuse occurs online.
Organisations within the coalition maintain that tech companies already possess the ability to detect and block such material but have failed to apply essential safeguards.
They want developers and platforms to be held accountable and believe that strict prohibitions are now necessary to prevent further exploitation. Advocates argue that meaningful action is overdue and that protection of users must take precedence over commercial interests.
The European Commission has unconditionally approved Google’s proposed acquisition of Wiz under the EU Merger Regulation, concluding that the deal raises no competition concerns in the European Economic Area.
The assessment focused on the fast-growing cloud security market, where both companies are active. Google provides cloud infrastructure and security services via Google Cloud Platform, while Wiz offers a cloud-native application protection platform for multi-cloud environments.
Regulators examined whether Google could restrict competition by bundling Wiz’s tools or limiting interoperability with rival cloud providers. The market investigation found customers would retain access to credible alternatives and could switch suppliers if needed.
The Commission also considered whether the acquisition would give Google access to commercially sensitive data relating to competing cloud infrastructure providers. Feedback from customers and rivals indicated that the data involved is not sensitive and is generally accessible to other cloud security firms.
Based on these findings, the Commission concluded that the transaction would not significantly impede effective competition in any relevant market. The deal was therefore cleared unconditionally following a Phase I review.
The South Korean government has confirmed that 33.67 million user accounts were exposed in a major data breach at Coupang. The findings were released by the Ministry of Science and ICT in Seoul.
Investigators said names and email addresses were leaked, while delivery lists containing addresses and phone numbers were accessed 148 million times. Officials warned that the impact could extend beyond the headline account figure.
Authorities identified a former employee as the attacker, alleging misuse of authentication signing keys. The probe concluded that weaknesses in Coupang's internal controls enabled the breach.
The ministry criticised delayed reporting and plans to impose a fine on Coupang. The company disputed aspects of the findings but said 33.7 million accounts were involved.
The European Commission has launched an Action Plan Against Cyberbullying aimed at protecting the mental health and well-being of children and teenagers online across the EU. The initiative focuses on reporting access, national coordination, and prevention.
A central element is the development of an EU-wide reporting app that would allow victims to report cyberbullying, receive support, and safely store evidence. The Commission will provide a blueprint for Member States to adapt and link to national helplines.
To ensure consistent protection, Member States are encouraged to adopt a shared understanding of cyberbullying and develop national action plans. This would support comparable data collection and a more coordinated EU response.
The Action Plan builds on existing legislation, including the Digital Services Act, the Audiovisual Media Services Directive, and the AI Act. Updated guidelines will strengthen platform obligations and address AI-enabled forms of abuse.
Prevention and education are also prioritised through expanded resources for schools and families via Safer Internet Centres and the Better Internet for Kids platform. The Commission will implement the plan with Member States, industry, civil society, and children.
In Cambridge, instructors at MIT and the Harvard Negotiation Project are using AI negotiation bots to enhance classroom simulations. The tools are designed to prompt reflection rather than offer fixed answers.
Students taking part in a multiparty exercise called Harborco engage with preparation, back-table and debriefing bots. The system helps them analyse stakeholder interests and test strategies before and after live negotiations.
Back-table bots simulate unseen political or organisational actors who often influence real-world negotiations. Students can safely explore trade-offs and persuasion tactics in a protected digital setting.
According to reported course findings, most participants said the AI bots improved preparation and sharpened their understanding of opposing interests. Instructors in Cambridge stress that AI supports, rather than replaces, human teaching and peer learning.
The European Union is revisiting the idea of an EU-wide social media age restriction as several member states move ahead with national measures to protect children online. Spain, France, and Denmark are among the countries considering the enforcement of age limits for access to social platforms.
The issue was raised in the European Commission’s new action plan against cyberbullying, published on Tuesday. The plan confirms that a panel of child protection experts will advise the Commission by the summer on possible EU-wide age restrictions for social media use.
Commission President Ursula von der Leyen announced the creation of an expert panel last September, although its launch was delayed until early 2026. The panel will assess options for a coordinated European approach, including potential legislation and awareness-raising measures for parents.
The document notes that diverging national rules could lead to uneven protection for children across the bloc. A harmonised EU framework, the Commission argues, would help ensure consistent safeguards and reduce fragmentation in how platforms apply age restrictions.
So far, the Commission has relied on non-binding guidance under the Digital Services Act to encourage platforms such as TikTok, Instagram, and Snap to protect minors. Increasing pressure from member states pursuing national bans may now prompt a shift towards more formal EU-level regulation.
Australia’s online safety regulator has notified Roblox of plans to directly test how the platform has implemented a set of child safety commitments agreed last year, amid growing concerns over online grooming and sexual exploitation.
In September last year, Roblox made nine commitments following months of engagement with eSafety, aimed at supporting compliance with obligations under the Online Safety Act and strengthening protections for children in Australia.
Measures included making under-16s’ accounts private by default, restricting contact between adults and minors without parental consent, disabling chat features until age estimation is complete, and extending parental controls and voice chat restrictions for younger users.
Roblox told eSafety at the end of 2025 that it had delivered all agreed commitments, after which the regulator continued monitoring implementation. eSafety Commissioner Julie Inman Grant said serious concerns remain over reports of child exploitation and harmful material on the platform.
Direct testing will now examine how the measures work in practice, with support from the Australian Government. Enforcement action may follow, including penalties of up to $49.5 million, alongside checks against new age-restricted content rules from 9 March.
Wales has launched a national programme of practical AI workshops to help tourism and hospitality businesses adopt digital tools. Funded by Visit Wales and the Welsh Government, the initiative aims to strengthen the sector's competitiveness by helping companies save time and enhance their online presence.
Strong demand reflects growing readiness within the sector to embrace AI. Delivered through Business Wales, the free sessions have quickly reached near capacity, with most places booked shortly after launch. The programme is tailored to small and medium-sized enterprises and prioritises hands-on learning over technical theory.
Workshops focus on simple, immediately usable tools that improve website content, search visibility, and customer engagement. Organisers highlight that AI-driven search features are reshaping how visitors discover tourism services, making accuracy, consistency, and authoritative digital content increasingly important.
At the centre of the initiative is Harri, a bespoke AI tool developed specifically for Welsh tourism businesses. Designed to reflect the local context, it supports listings management, customer enquiries, and search optimisation. Early feedback indicates that the approach delivers practical and measurable benefits.
The US-based conglomerate Cisco is promoting a future in which AI agents work alongside employees rather than operate as mere tools. Jeetu Patel, the company’s president, revealed that Cisco has already produced a product written entirely with AI-generated code and expects several more by the end of 2026.
Patel also described a shift to spec-driven development, in which smaller human teams work alongside digital agents instead of relying on larger groups of developers.
Human oversight will still play a central role. Coders will be asked to review AI-generated outputs as they adjust to a workplace where AI influences every stage of development. Patel argues that AI should be viewed as part of every loop rather than kept at the edge of decision-making.
Security concerns dominate the company’s planning. Patel warns that AI agents acting as digital co-workers must undergo background checks in the same way that employees do.
Cisco is investing billions in security systems to protect agents from external attacks and to prevent agents that malfunction or act independently from harming society.
Looking ahead, Cisco expects AI to deliver insights that extend beyond human knowledge. Patel believes that the most significant gains will emerge from breakthroughs in science, health, energy and poverty reduction rather than simple productivity improvements.
He also positions Cisco as a core provider of infrastructure designed to support the next stage of the AI era.