Energy and government sectors in Poland face mounting hacktivist threats

Poland has become the leading global target for politically and socially motivated cyberattacks, recording over 450 incidents in the second quarter of 2025, according to Spain’s Industrial Cybersecurity Center (ZIUR).

The report ranked Poland ahead of Ukraine, the UK, France, Germany, and other European states in hacktivist activity. Government institutions and the energy sector were among the most targeted, with organisations supporting Ukraine described as especially vulnerable.

ZIUR’s earlier first-quarter analysis had warned of a sharp rise in attacks against state bodies across Europe. Pro-Russian groups were identified as among the most active, increasingly turning to denial-of-service campaigns to disrupt critical operations.

Europe accounted for the largest share of global hacktivism in the second quarter, with more than 2,500 successful denial-of-service attacks recorded between April and June, underlining the region’s heightened exposure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Global tech competition intensifies as the UK outlines a £1 trillion digital blueprint

The United Kingdom has unveiled a strategy to grow its digital economy to £1 trillion by harnessing AI, quantum computing, and cybersecurity. The plan emphasises public-private partnerships, training, and international collaboration to tackle skills shortages and infrastructure gaps.

The initiative builds on the UK tech sector’s £1.2 trillion valuation, with regional hubs in cities such as Bristol and Manchester fuelling expansion in emerging technologies. Experts, however, warn that outdated systems and talent deficits could stall progress unless workforce development accelerates.

AI is central to the plan, with applications spanning healthcare and finance. Quantum computing also features, with investments in research and cybersecurity aimed at strengthening resilience against supply disruptions and future threats.

The government highlights sustainability as a priority, promoting renewable energy and circular economies to ensure digital growth aligns with environmental goals. Regional investment in blockchain, agri-tech, and micro-factories is expected to create jobs and diversify innovation-driven growth.

By pursuing these initiatives, the UK aims to establish itself as a leading global tech player alongside the US and China. Ethical frameworks and adaptive strategies will be key to maintaining public trust and competitiveness.


Australia weighs cyber militia to counter rising digital threats

Cyberattacks are intensifying worldwide, with Australia now ranked fourth globally for threats against operational technology and industrial sectors. Rising AI-powered incursions have exposed serious vulnerabilities in the country’s national defence and critical infrastructure.

Australia’s 2023–2030 Cyber Security Strategy aims to strengthen resilience through six ‘cyber shields’, including legislation and intelligence sharing. But a skills shortage leaves organisations exposed as ransomware attacks on mining and manufacturing continue to rise.

One proposal gaining traction is the creation of a volunteer ‘cyber militia’. Inspired by Estonia’s cyber defence unit, this network would mobilise unconventional talent, such as retirees, hobbyist hackers, and students, to bolster monitoring, threat hunting, and incident response.

Supporters argue that such a force could fill gaps left by formal recruitment, particularly in smaller firms and rural networks. Critics, however, warn of vetting risks, insider threats, and the need for new legal frameworks to govern liability and training.

Pilot schemes in high-risk sectors, such as energy and finance, have been proposed, with public-private funding viewed as crucial. Advocates argue that a cyber militia could democratise security and foster collective responsibility, aligning with the country’s long-term cybersecurity strategy.


How to spot AI-generated videos with simple visual checks

Mashable offers a hands-on guide to help users detect AI-generated videos by observing subtle technical cues. Key warning signs include mismatched lip movements and speech, where voices are dubbed over real footage and audio isn’t perfectly aligned with mouth motions.

Users are also advised to look for visual anomalies such as unnatural blurs, distorted shadows or odd lighting effects that seem inconsistent with natural environments. Deepfake videos can show slight flickers around faces or uneven reflections that betray their artificial origin.

Blinking, or the lack thereof, can also be revealing. AI faces often fail to replicate natural blinking patterns, and may display either no blinking or irregular frequency.
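The blink cue can even be approximated programmatically. The sketch below is a minimal, hypothetical illustration: it assumes per-frame eye-openness scores (eye aspect ratio, EAR) have already been produced by an upstream landmark detector, and the threshold and the ‘typical’ human blink-rate range (roughly 8–30 blinks per minute) are illustrative assumptions, not calibrated values.

```python
def count_blinks(ear_values, threshold=0.2, min_frames=2):
    """Count blinks in a sequence of per-frame eye-aspect-ratio (EAR) values.

    A blink is a run of at least `min_frames` consecutive frames in which
    the EAR drops below `threshold` (eyes closed). Threshold is illustrative.
    """
    blinks = 0
    run = 0
    for ear in ear_values:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # blink at the very end of the clip
        blinks += 1
    return blinks


def blink_rate_suspicious(ear_values, fps, low=8, high=30):
    """Flag a clip whose blink rate (per minute) falls outside a rough
    human range of `low`..`high` blinks/minute (assumed, not calibrated)."""
    minutes = len(ear_values) / fps / 60
    if minutes == 0:
        return True
    rate = count_blinks(ear_values) / minutes
    return not (low <= rate <= high)
```

For example, a 60-second clip at 30 fps with no detected blinks at all would be flagged, matching the ‘no blinking’ warning sign described above; a clip with around 15 blinks per minute would pass.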

Viewers should also note unnatural head or body movements that do not align with speech or emotional expression, such as stiff postures or awkward gestures.

Experts stress that deepfakes are becoming increasingly well engineered, making them harder to detect visually. They recommend combining observation with source verification, such as tracing the video back to reputable outlets or conducting reverse image searches, for robust protection.

Ultimately, better detection tools and digital media literacy are essential to maintaining trust in online content.


Celebrity Instagram hack fuels Solana meme coin scam

The Instagram accounts of Adele, Future, Tyla, and Michael Jackson were hacked late Thursday to promote an unauthorised meme coin. Posts showed an AI-generated image of Future with a ‘FREEBANDZ’ coin, falsely suggesting ties to the rapper.

The token, launched on the Solana platform Pump.fun, surged briefly to nearly $900,000 in market value before collapsing by 98% after its creator dumped 700 million tokens. The scheme netted more than $49,000 in Solana for its creator, who is also suspected of being behind the account hijackings.

None of the affected celebrities has issued a statement, while Future’s Instagram account remains deactivated. The hack continues a trend of using celebrity accounts for crypto pump-and-dump schemes. Previous cases involved the UFC, Barack Obama, and Elon Musk.

Such scams are becoming increasingly common, with attackers exploiting the visibility of major social media accounts to drive short-lived token gains before leaving investors with losses.


Hong Kong deepfake scandal exposes gaps in privacy law

The discovery of hundreds of non-consensual deepfake images on a student’s laptop at the University of Hong Kong has reignited debate about privacy, technology, and accountability. The scandal echoes the 2008 Edison Chen photo leak, which exposed gaps in law and gender double standards.

Unlike stolen private images, today’s fabrications are AI-generated composites that can tarnish reputations with a single photo scraped from social media. The dismissal that such content is ‘not real’ fails to address the damage caused by its existence.

Hong Kong’s legal system struggles to keep pace with this shift. Its privacy ordinance, drafted in the 1990s, was not designed for machine-learning fabrications, while traditional harassment and defamation laws predate the advent of AI. Victims risk harm before distribution is even proven.

The city’s privacy watchdog has launched a criminal investigation, but questions remain over whether creation or possession of deepfakes is covered by existing statutes. Critics warn that overreach could suppress legitimate uses, yet inaction leaves space for abuse.

Observers argue that just as the snapshot camera spurred the development of modern privacy law, deepfakes must drive a new legal boundary to safeguard dignity. Without reform, victims may continue facing harm without recourse.


Court filing details Musk’s outreach to Zuckerberg over OpenAI bid

Elon Musk attempted to bring Meta chief executive Mark Zuckerberg into his consortium’s $97.4 billion bid for OpenAI earlier this year, OpenAI disclosed in a court filing.

According to sworn responses cited in the filing, Musk discussed possible financing arrangements with Zuckerberg as part of the bid. Musk’s AI startup xAI, a competitor to OpenAI, did not respond to requests for comment.

In the filing, OpenAI asked a federal judge to order Meta to provide documents related to any bid for OpenAI, including internal communications about restructuring or recapitalisation. The firm argued these records could clarify motivations behind the bid.

Meta countered that such documents were irrelevant and suggested OpenAI seek them directly from Musk or xAI. A US judge ruled that Musk must face OpenAI’s claims of attempting to harm the company through public remarks and what it described as a sham takeover attempt.

The legal dispute follows Musk’s lawsuit against OpenAI and Sam Altman over its for-profit transition, with OpenAI filing a countersuit in April. A jury trial is scheduled for spring 2026.


Students seek emotional support from AI chatbots

College students are increasingly turning to AI chatbots for emotional support, prompting concern among mental health professionals. A 2025 report ranked ‘therapy and companionship’ as the top use case for generative AI, particularly among younger users.

Studies by MIT and OpenAI show that frequent AI use can lower social confidence and increase avoidance of face-to-face interaction. On campuses, digital mental health platforms now supplement counselling services, offering tools that identify at-risk students and provide basic support.

Experts warn that chatbot companionship may create emotional habits that lack grounding in reality and hinder social skill development. Counsellors advocate for educating students on safe AI use and suggest universities adopt tools that flag risky engagement patterns.


GPT-5 criticised for lacking flair as users seek older ChatGPT options

OpenAI’s rollout of GPT-5 has faced criticism from users attached to older models, who say the new version lacks the character of its predecessors.

GPT-5 was designed as an all-in-one model, featuring a lightweight version for rapid responses and a reasoning version for complex tasks. A routing system determines which option to use, although users can manually select from several alternatives.

Modes include Auto, Fast, Thinking, Thinking mini, and Pro, with the last available to Pro subscribers for $200 monthly. Standard paid users can still access GPT-4o, GPT-4.1, 4o-mini, and even o3 through additional settings.

Chief executive Sam Altman has said the long-term goal is to give users more control over ChatGPT’s personality, making customisation a solution to concerns about style. He promised ample notice before permanently retiring older models.


CBA reverses AI-driven job cuts after union pressure

The Commonwealth Bank of Australia has reversed plans to cut 45 customer service roles following union pressure over the use of AI in its call centres.

The Finance Sector Union argued that CBA was not transparent about call volumes, taking the case to the Workplace Relations Tribunal. Staff reported rising workloads despite claims that the bank’s voice bot reduced calls by 2,000 weekly.

CBA admitted its redundancy assessment was flawed, stating that it had not fully considered its business needs. Affected employees are being offered the option to remain in their current roles, relocate within the firm, or depart.

The bank apologised and pledged to review internal processes. Chief executive Matt Comyn has promoted AI adoption, including a new partnership with OpenAI, but the union called the reversal a ‘massive win’ for workers.
