AI tools are not enough without basic cybersecurity

At London Tech Week, Darktrace and UK officials warned that many firms are over-relying on AI tools while failing to implement basic cybersecurity practices.

Despite the hype around AI, essential measures like user access control and system segmentation remain missing in many organisations.

Cybercriminals are already exploiting AI to automate phishing and accelerate intrusions in the UK, while outdated infrastructure and short-term thinking leave companies vulnerable.

Boards often struggle to assess AI tools properly, buying into trends rather than addressing real threats.

Experts stressed that AI is not a silver bullet and must be used alongside human expertise and solid security foundations.

Domain-specific AI models, built with transparency and interpretability, are needed to avoid the dangers of overconfidence and misapplication in high-risk areas.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI must protect dignity, say US bishops

The US Conference of Catholic Bishops has urged Congress to centre AI policy on human dignity and the common good.

Their message outlines moral principles rather than technical guidance, warning against misuse of technology that may erode truth, justice, or the protection of the vulnerable.

The bishops caution against letting AI replace human moral judgement, especially in sensitive areas like family life, work, and warfare. They express concern that, without strict oversight, AI could deepen inequality and harm those already marginalised.

Their call includes demands for greater transparency, regulation of autonomous weapons, and stronger protections for children and workers in the US.

Rooted in Catholic social teaching, the letter frames AI not as a neutral innovation but as a force that must serve people, not displace them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Guardz doubles down on SMB protection with $56M funding boost

Cybersecurity startup Guardz has secured $56 million in Series B funding to expand its AI-native platform designed for managed service providers (MSPs).

The round was led by ClearSky, with backing from Phoenix Financial, Glilot Capital Partners, SentinelOne, Hanaco Ventures, and others, bringing the company’s total funding to $84 million in just over two years.

Since emerging from stealth in early 2023, Guardz has built a global presence, partnering with hundreds of MSPs to secure thousands of small and mid-sized businesses.

With the new capital, the company aims to accelerate go-to-market efforts and enhance its platform with more automation, compliance tools, and cyber insurance capabilities.

The Guardz platform integrates threat protection across identities, email, endpoints, cloud, and data into a single engine. Combining AI-driven automation with human-led Managed Detection and Response (MDR), it provides 24/7 monitoring and rapid response to threats.

Seamless integrations with Microsoft 365 and Google Workspace allow MSPs to pre-emptively detect suspicious activity and respond in real time.

‘Our goal is to empower MSPs with enterprise-grade security tools to protect the global economy’s most vulnerable targets — small and mid-sized businesses,’ said Guardz CEO and co-founder Dor Eisner. ‘This funding allows us to further that mission and help businesses thrive in a secure environment.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Massive leak exposes data of millions in China

Cybersecurity researchers have uncovered a brief but significant leak of over 600 gigabytes of data, exposing information on millions of Chinese citizens.

The haul, containing WeChat, Alipay, banking, and residential records, appears to stem from a centralised collection system, suggesting large-scale surveillance rather than a random data breach.

According to research by Cybernews and cybersecurity consultant Bob Diachenko, the data was likely used to build detailed behavioural, social and economic profiles of individuals.

They warned the information could be exploited for phishing, fraud, blackmail or even disinformation campaigns. Although only 16 datasets were reviewed before the database vanished, those samples pointed to a highly organised and purposeful collection effort.

The source of the leak remains unknown, but the scale and nature of the data suggest it may involve government-linked or state-backed entities rather than lone hackers.

The exposed information could allow malicious actors to track residence locations, financial activity and personal identifiers, placing the privacy and security of millions of people at risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Digital Social Security cards coming this summer

The US Social Security Administration is launching digital access to Social Security numbers in the summer of 2025 through its ‘My Social Security’ portal. The initiative aims to improve convenience, reduce physical card replacement delays, and protect against identity theft.

The digital rollout responds to the challenges of outdated paper cards, rising fraud risks, and growing demand for remote access to US government services. Cybersecurity experts also recommend using VPNs, antivirus software, and identity monitoring services to guard against phishing scams and data breaches.

While it promises faster and more secure access, experts urge users to bolster account protection through strong passwords, two-factor authentication, and avoidance of public Wi-Fi when accessing sensitive data.

Users should regularly check their credit reports and SSA records and consider requesting an IRS PIN to prevent tax-related fraud. The SSA says this move will make Social Security more efficient without compromising safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta boosts AGI efforts with new team

Mark Zuckerberg, Meta Platforms CEO, is reportedly building a new team dedicated to achieving artificial general intelligence (AGI), aiming for machines that can match or exceed human intellect.

The initiative is linked to an investment exceeding $10 billion in Scale AI, whose founder, Alexandr Wang, is expected to join the AGI group. Meta has not yet commented on these reports.

Zuckerberg’s personal involvement in recruiting around 50 experts, including a new head of AI research, is partly driven by dissatisfaction with Meta’s recent large language model, Llama 4. Last month, Meta even delayed the release of its flagship ‘Behemoth’ AI model due to internal concerns about its performance.

The move signals an intensifying race in the AI sector, as rivals like OpenAI are also making strategic adjustments to attract further investment in their pursuit of AGI. This highlights a clear push by major tech players towards developing more advanced and capable AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Growing push in Europe to regulate children’s social media use

Several European countries, led by Denmark, France, and Greece, are intensifying efforts to shield children from the potentially harmful effects of social media. With Denmark taking over the EU Council presidency from July, its Digital Minister, Caroline Stage Olsen, has made clear that her country will push for a ban on social media for children under 15.

Olsen criticised current platforms for failing to remove illegal content and for relying on addictive features that encourage prolonged use. She also warned that platforms prioritise profit and data harvesting over the well-being of young users.

That initiative builds on growing concern across the EU about the mental and physical toll social media may take on children, including the spread of dangerous content, disinformation, cyberbullying, and unrealistic body image standards. France, for instance, has already passed legislation requiring parental consent for users under 15 and is pressing platforms to verify users’ ages more rigorously.

While the European Commission has issued draft guidelines to improve online safety for minors, such as making children’s accounts private by default, some countries are calling for tougher enforcement under the EU’s Digital Services Act. Despite these moves, there is currently no consensus across the EU for an outright ban.

Cultural differences and practical hurdles, like implementing consistent age verification, remain significant challenges. Still, proposals are underway to introduce a unified age of digital adulthood and a continent-wide age verification application, possibly even embedded into devices, to limit access by minors.

Olsen and her allies remain adamant, planning to dedicate the October summit of EU digital ministers entirely to the issue of child online safety. They are also looking to future legislation, like the Digital Fairness Act, to enforce stricter consumer protection standards that explicitly account for minors. Meanwhile, age verification and parental controls are seen as crucial first steps toward limiting children’s exposure to addictive and damaging online environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Workers struggle as ChatGPT goes down

The temporary outage of ChatGPT this morning left thousands of users struggling with their daily tasks, highlighting a growing reliance on AI.

Social media was flooded with humorous yet telling posts from users expressing their inability to perform even basic functions without AI. This incident has reignited concerns about society’s increasing dependence on closed-source AI tools for work and everyday life.

OpenAI, the developer of ChatGPT, is currently investigating the technical issues that led to ‘elevated error rates and latency.’ The widespread disruption underscores a broader debate about AI’s impact on critical thinking and productivity.

While some research suggests AI chatbots can enhance efficiency, commentators such as Paul Armstrong argue that frequent reliance on generative tools may diminish critical thinking skills and understanding.

The discussion around AI’s role in the workplace was a key theme at the recent SXSW London event. Despite concerns about job displacement, exemplified by redundancies at Canva, firms like Lloyd’s Market Association are increasingly adopting AI, with 40% of London market companies now using it.

Industry leaders maintain that AI should be used to rethink workflows and empower human creativity, with a ‘human layer’ remaining essential for refining outputs and adding nuanced value.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity alarm after 184 million credentials exposed

A vast unprotected database containing over 184 million credentials from major platforms and sectors has highlighted severe weaknesses in data security worldwide.

The leaked credentials, harvested by infostealer malware and stored in plain text, pose significant risks to consumers and businesses, underscoring an urgent need for stronger cybersecurity and better data governance.

Cybersecurity researcher Jeremiah Fowler discovered the 47 GB database exposing emails, passwords, and authorisation URLs from tech giants like Google, Microsoft, Apple, Facebook, and Snapchat, as well as banking, healthcare, and government accounts.

The data was left accessible without any encryption or authentication, making it vulnerable to anyone with the link.

The credentials were reportedly collected by infostealer malware such as Lumma Stealer, which silently steals sensitive information from infected devices. The stolen data fuels a thriving underground economy involving identity theft, fraud, and ransomware.

The breach’s scope extends beyond tech, affecting critical infrastructure like healthcare and government services, raising concerns over personal privacy and national security. With recurring data breaches becoming the norm, industries must urgently reinforce security measures.

Chief Data Officers and IT risk leaders face mounting pressure as regulatory scrutiny intensifies. The leak highlights the need for proactive data stewardship through encryption, access controls, and real-time threat detection.

Many organisations struggle with legacy systems, decentralised data, and cloud adoption, complicating governance efforts.

Enterprise leaders must treat data as both a strategic asset and a liability, embedding cybersecurity into business processes and supply chains. Beyond technology, cultivating a culture of accountability and vigilance is essential to prevent costly breaches and protect brand trust.

The massive leak signals a new era in data governance where transparency and relentless improvement are critical. The message is clear: there is no room for complacency in safeguarding the digital world’s most valuable assets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Milei cleared of ethics breach over LIBRA token post

Argentina’s Anti-Corruption Office has concluded that President Javier Milei did not violate ethics laws when he published a now-deleted post promoting the LIBRA memecoin. The agency stated the February post was made in a personal capacity and did not constitute an official act.

The ruling clarified that Milei’s X account, where the post appeared, is personally managed and predates his political role. It added that the account identifies him as an economist rather than a public official, meaning the post is protected as a private expression under the constitution.

The investigation had been launched after LIBRA’s price soared and then crashed following Milei’s endorsement, which linked to the token’s contract and a promotional site. Investors reportedly lost millions, and allegations of insider trading surfaced.

Although the Anti-Corruption Office cleared him, a separate federal court investigation remains ongoing, with Milei and his sister’s assets temporarily frozen.

Despite the resolution, the scandal damaged public trust. Milei has maintained he acted in good faith, claiming the aim was to raise awareness of a private initiative to support small Argentine businesses through crypto.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!