Musk’s chatbot Grok removes offensive content

Elon Musk’s AI chatbot Grok has removed several controversial posts after they were flagged as anti-Semitic and accused of praising Adolf Hitler.

The deletions followed backlash from users on X and criticism from the Anti-Defamation League (ADL), which condemned the language as dangerous and extremist.

Grok, developed by Musk’s xAI company, sparked outrage after stating Hitler would be well-suited to tackle anti-White hatred and claiming he would ‘handle it decisively’. The chatbot also made troubling comments about Jewish surnames and referred to Hitler as ‘history’s moustache man’.

In response, xAI acknowledged the issue and said it had begun filtering out hate speech before posts go live. The company credited user feedback for helping identify weaknesses in Grok’s training data and pledged ongoing updates to improve the model’s accuracy.

The ADL criticised the chatbot’s behaviour as ‘irresponsible’ and warned that such AI-generated rhetoric fuels rising anti-Semitism online.

It is not the first time Grok has been caught in controversy — earlier this year, the bot repeated White genocide conspiracy theories, which xAI blamed on an unauthorised software change.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI locks down operations after DeepSeek model concerns

OpenAI has significantly tightened its internal security following reports that DeepSeek may have replicated its models. DeepSeek allegedly used distillation techniques to launch a competing product earlier this year, prompting a swift response.
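
Distillation here refers to training a smaller 'student' model to imitate a larger 'teacher' model's output probabilities rather than hard labels. A minimal, self-contained sketch of the core loss (illustrative only; not based on any disclosed OpenAI or DeepSeek code):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between softened teacher and student distributions.

    The student is trained to minimise this, so it learns to reproduce
    the teacher's full output distribution, not just its top answer.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# The loss shrinks as the student's logits approach the teacher's.
close = distillation_loss([4.0, 1.0, 0.5], [3.8, 1.1, 0.4])
far = distillation_loss([4.0, 1.0, 0.5], [0.5, 1.0, 4.0])
assert close < far
```

The accusation against DeepSeek is essentially that queries to a rival's model can supply the teacher's outputs for such training, which is why OpenAI now restricts who can access model internals at all.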

OpenAI has introduced strict access protocols to prevent information leaks, including fingerprint scans, offline servers, and a policy restricting internet use without approval. Sensitive projects, such as its o1 model, are now discussed only by approved staff within designated areas.

The company has also boosted cybersecurity staffing and reinforced its data centre defences. Confidential development information is now shielded through ‘information tenting’.

These actions coincide with OpenAI’s $30 billion deal with Oracle to lease 4.5 gigawatts of data centre capacity across the United States. The partnership plays a central role in OpenAI’s growing Stargate infrastructure strategy.

Phishing 2.0: How AI is making cyber scams more convincing

Phishing remains among the most widespread and dangerous cyber threats, especially for individuals and small businesses. These attacks rely on deception—emails, texts, or social messages that impersonate trusted sources to trick people into giving up sensitive information.

Cybercriminals exploit urgency and fear. A typical example is a fake email from a bank saying your account is at risk, prompting you to click a malicious link. Even when emails look legitimate, subtle details—like a strange sender address—can be red flags.

In one recent scam, Netflix users received fake alerts about payment failures. The link led to a fake login page where credentials and payment data were stolen. Similar tactics have been used against QuickBooks users, small businesses, and Microsoft 365 customers.

Small businesses are frequent targets due to limited security resources. Emails mimicking vendors or tech companies often trick employees into handing over credentials, giving attackers access to sensitive systems.

Phishing works because it preys on human psychology: trust, fear, and urgency. And with AI, attackers can now generate more convincing content, making detection harder than ever.

Protection starts with vigilance. Always check sender addresses, avoid clicking suspicious links, and enable multi-factor authentication (MFA). Employee training, secure protocols for sensitive requests, and phishing simulations are critical for businesses.
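
As a toy illustration of the 'check the sender address' advice, some of these red flags can be screened for automatically. The domains and patterns below are invented for the example; real mail filters are far more sophisticated:

```python
import re

# Hypothetical allow-list of domains the organisation actually uses.
TRUSTED_DOMAINS = {"mybank.com", "netflix.com", "microsoft.com"}

SUSPICIOUS_PATTERNS = [
    re.compile(r"verify your account", re.I),
    re.compile(r"urgent|immediately|suspended", re.I),  # urgency/fear cues
]

def sender_domain(address: str) -> str:
    """Return the domain part of an email address, lower-cased."""
    return address.rsplit("@", 1)[-1].lower()

def phishing_red_flags(sender: str, body: str) -> list[str]:
    """Collect simple heuristic warnings; a real filter needs far more."""
    flags = []
    domain = sender_domain(sender)
    if domain not in TRUSTED_DOMAINS:
        flags.append(f"unfamiliar sender domain: {domain}")
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(body):
            flags.append(f"pressure language matched: {pattern.pattern}")
    return flags

print(phishing_red_flags(
    "support@netflix-billing-alerts.com",
    "Your payment failed. Verify your account immediately.",
))
```

Heuristics like these catch only the crudest lures, and AI-written phishing is specifically designed to evade them, which is why MFA and user training remain the stronger defences.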

Phishing attacks will continue to grow in sophistication, but with awareness and layered security practices, users and businesses can stay ahead of the threat.

Azerbaijan’s State Security Service tackles surveillance camera cyber breach

Azerbaijan’s State Security Service has disrupted a significant cybersecurity breach targeting surveillance cameras nationwide. The agency says unauthorised remote access allowed attackers to capture and leak footage of private homes and offices.

The attackers exploited a vulnerability in digital video recorder (DVR) systems to intercept live camera feeds. Footage of private family life was reportedly uploaded to foreign websites and even sold online.

In response, the State Security Service of Azerbaijan coordinated with other state bodies to identify compromised systems and locations. Technical inspections revealed a widespread security flaw in the surveillance devices.

The vulnerability was reported to the foreign manufacturer of the equipment, with an urgent request for a fix. Illegally uploaded footage has since been removed from affected platforms.

Citizens are urged to avoid using devices of unknown origin and follow best practices when managing digital systems. Authorities emphasised the importance of protecting personal data and maintaining cyber hygiene.

Rights before risks: Rethinking quantum innovation at WSIS+20

At the WSIS+20 High-Level Event in Geneva, a powerful call was made to ensure the development of quantum technologies remains rooted in human rights and inclusive governance. A UNESCO-led session titled ‘Human Rights-Centred Global Governance of Quantum Technologies’ presented key findings from a new issue brief co-authored with Sciences Po and the European University Institute.

It outlined major risks—such as quantum’s dual-use nature threatening encryption, a widening technological divide, and severe gender imbalances in the field—and urged immediate global action to build safeguards before quantum capabilities mature.

UNESCO’s Guilherme Canela emphasised that innovation and human rights are not mutually exclusive but fundamentally interlinked, warning against a ‘false dichotomy’ between the two. Lead author Shamira Ahmed highlighted the need for proactive frameworks to ensure quantum benefits are equitably distributed and not used to deepen global inequalities or erode rights.

With 79% of quantum firms lacking female leadership and a mere 1 in 54 job applicants being women, the gender gap was called ‘staggering.’ Ahmed proposed infrastructure investment, policy reforms, capacity development, and leveraging the UN’s International Year of Quantum to accelerate global discussions.

Panellists echoed the urgency. Constance Bommelaer de Leusse from Sciences Po advocated for embedding multistakeholder participation into governance processes and warned of a looming ‘quantum arms race.’ Professor Pieter Vermaas of Delft University urged moving from talk to international collaboration, suggesting the creation of global quantum research centres.

Journalist Elodie Vialle raised alarms about quantum’s potential to supercharge surveillance, endangering press freedom and digital privacy, and underscored the need to close the cultural gap between technologists and civil society.

Overall, the session championed a future where quantum technology is developed transparently, governed globally, and serves as a digital public good, bridging divides rather than deepening them. Speakers agreed that the time to act is now, before today’s opportunities become tomorrow’s crises.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

How agentic AI is transforming cybersecurity

Cybersecurity is gaining a new teammate: one that never sleeps and acts independently. Agentic AI doesn’t wait for instructions. It detects threats, investigates, and responds in real time. This new class of AI is beginning to change the way we approach cyber defence.

Unlike traditional AI systems, agentic AI operates with autonomy. It sets its own objectives, adapts to its environment, and self-corrects without waiting for human input. In cybersecurity, this means instant detection and response that goes beyond simple automation.

With networks more complex than ever, security teams are stretched thin. Agentic AI offers relief by executing actions like isolating compromised systems or rewriting firewall rules. This technology promises to ease alert fatigue and keep up with evasive threats.
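
The detect-and-respond behaviour described above can be pictured as a small decision loop. This is a deliberately simplified sketch; the alert fields, thresholds, and response functions are all hypothetical, and a real agent would call EDR and firewall APIs rather than return strings:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    kind: str       # e.g. "malware", "port_scan"
    severity: int   # 1 (low) .. 10 (critical)

# Hypothetical response actions standing in for real security tooling.
def isolate_host(host: str) -> str:
    return f"isolated {host} from the network"

def block_source(host: str) -> str:
    return f"added firewall rule blocking traffic from {host}"

def agent_respond(alert: Alert) -> str:
    """Pick a response autonomously, escalating to a human when unsure."""
    if alert.severity >= 8:
        return isolate_host(alert.host)  # contain critical threats at once
    if alert.kind == "port_scan":
        return block_source(alert.host)
    return f"escalated {alert.kind} on {alert.host} for human review"

for a in [Alert("srv-12", "malware", 9), Alert("10.0.0.7", "port_scan", 4)]:
    print(agent_respond(a))
```

The final branch matters as much as the first two: keeping a human-review path for ambiguous cases is exactly the check against excessive automation discussed below.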

A 2025 Deloitte report predicts that 25% of firms already using generative AI will pilot agentic AI this year. SailPoint found that 98% of organisations plan to expand AI agent use in the next 12 months. But rapid adoption also raises concern: 96% of tech workers see AI agents as a security risk.

The integration of AI agents is expanding to cloud, endpoints, and even physical security. Yet with new power comes new vulnerabilities—from adversaries mimicking AI behaviour to the risk of excessive automation without human checks.

Key challenges include ethical bias, unpredictable errors, and uncertain regulation. In sectors like healthcare and finance, oversight and governance must keep pace. The solution lies in balanced control and continuous human-AI collaboration.

Cybersecurity careers are shifting in response. Hybrid roles such as AI Security Analysts and Threat Intelligence Automation Architects are emerging. To stay relevant, professionals must bridge AI knowledge with security architecture.

Agentic AI is redefining cybersecurity. It boosts speed and intelligence but demands new skills and strong leadership. Adaptation is essential for those who wish to thrive in tomorrow’s AI-driven security landscape.

Ransomware disrupts Ingram Micro’s systems and operations

Ingram Micro has confirmed a ransomware attack that affected internal systems and forced some services offline. The global IT distributor says it acted quickly to contain the incident, implemented mitigation steps, and involved cybersecurity experts.

The company is working with a third-party firm to investigate the breach and has informed law enforcement. Order processing and shipping operations have been disrupted while systems are being restored.

While details remain limited, the attack is reportedly linked to the SafePay ransomware group.

According to BleepingComputer, the gang exploited Ingram’s GlobalProtect VPN to gain access last Thursday.

In response, Ingram Micro shut down multiple platforms, including GlobalProtect VPN and its Xvantage AI platform. Employees were instructed to work remotely as a precaution during the response effort.

SafePay first appeared in late 2024 and has targeted over 220 companies. It often breaches networks using password spraying and compromised credentials, primarily through VPNs.
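
Password spraying is detectable on the defender's side because its signature is breadth, not depth: one source tries a common password against many accounts rather than many passwords against one. A minimal sketch of that detection idea, using an invented log format:

```python
from collections import defaultdict

# Hypothetical failed-login records: (source_ip, account).
failed_logins = [
    ("203.0.113.9", f"user{i}") for i in range(40)  # one IP, many accounts
] + [
    ("198.51.100.4", "alice"),  # an ordinary user mistyping a password
    ("198.51.100.4", "alice"),
]

def spraying_suspects(events, min_accounts=10):
    """Flag source IPs whose failed logins span unusually many accounts.

    Spraying stays under per-account lockout thresholds, so counting
    failures per account misses it; counting accounts per source does not.
    """
    accounts_per_ip = defaultdict(set)
    for ip, account in events:
        accounts_per_ip[ip].add(account)
    return [ip for ip, accts in accounts_per_ip.items()
            if len(accts) >= min_accounts]

print(spraying_suspects(failed_logins))
```

Rate-limiting VPN logins and requiring MFA on them closes off the same entry point SafePay reportedly favours.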

Ingram Micro has not disclosed what data was accessed or the size of the ransom demand.

The company apologised for the disruption and said it is working to restore systems as quickly as possible.

SatanLock ends operation amid ransomware ecosystem turmoil

SatanLock, a ransomware group active since April 2025, has announced it is shutting down. The group quickly gained notoriety, claiming 67 victims on its now-defunct dark web leak site.

Cybersecurity firm Check Point says more than 65% of these victims had already appeared on other ransomware leak pages. This overlap suggests the group may have used shared infrastructure or tried to hijack previously compromised networks.

Such tactics reflect growing disorder within the ransomware ecosystem, where double-posting of victims is on the rise. SatanLock may have been part of a broader criminal network, as it shares ties with families such as Babuk-Bjorka and GD Lockersec.

A shutdown message was posted on the gang’s Telegram channel and leak page, announcing plans to leak all stolen data. The reason for the sudden closure has not been disclosed.

Another group, Hunters International, announced its disbandment just days earlier.

Unlike SatanLock, Hunters offered free decryption keys to its victims in a parting gesture.

These back-to-back exits signal possible pressure from law enforcement, rivals, or internal collapse in the ransomware world. Analysts are watching closely to see whether this trend continues.

Mental health support is evolving with AI

AI is beginning to play a growing role in the mental health space, offering personalised and consistent support for those experiencing stress, anxiety or depression.

Tools like Woebot use natural language processing to engage individuals in conversations based on evidence-based techniques, such as cognitive behavioural therapy.

These digital companions are not designed to replace therapists but to complement their work by providing timely interventions and ongoing monitoring.

One of the key benefits of AI mental health agents is their accessibility. They can offer round-the-clock support, especially in regions or communities with limited professional mental health services.

By helping users identify emotional patterns and offering practical coping strategies, AI agents may serve as a first step toward care or help bridge the gap between sessions.

Despite their potential, AI tools also raise important ethical questions. Ensuring user privacy, avoiding algorithmic bias, and maintaining emotional safety are essential for earning public trust.

Experts suggest that the future of AI in mental health lies in the thoughtful integration of AI with human-led care, guided by rigorous standards and ethical safeguards.

Survey reveals sharp rise in cyberattacks on Japan’s small businesses

A May 2025 survey by Teikoku Databank reveals that nearly one in three Japanese companies have experienced a cyberattack. The survey targeted over 26,000 businesses and received 10,645 valid responses.

Among respondents, 32% reported having been targeted by cyberattacks. Large firms in Japan were more likely to be affected at 41.9%, compared to 30.3% for small and medium-sized businesses and just 28.1% for small firms.

Interestingly, while larger firms reported a higher rate of ever having been attacked, incidents over the past month were more common among smaller enterprises: around 6.9% of SMEs and 7.9% of small firms were affected, against an overall rate of 6.7%.

Teikoku Databank warned of a sharp increase in risk for small businesses, which often lack the robust cybersecurity infrastructure of larger corporations.
