Italy’s new anti-piracy system, Piracy Shield, has come under scrutiny from the European Commission over potential breaches of the Digital Services Act.
The tool, launched by the Italian communications regulator AGCOM, allows authorities to block websites suspected of piracy within 30 minutes, a feature praised by sports rights holders for minimising losses from illegal streaming.
However, its speed and lack of judicial oversight have raised legal concerns. Critics argue that individuals are denied the right to defend themselves before action is taken.
A recent glitch linked to Google’s CDN disrupted access to platforms like YouTube and Google Drive, deepening public unease.
Another point of contention is Piracy Shield’s governance. The system is managed by SP Tech, a company owned by Lega Serie A, the football league that directly benefits from anti-piracy enforcement.
That arrangement prompted the Computer & Communications Industry Association to file a complaint, citing a conflict of interest and calling for greater transparency.
While AGCOM Commissioner Massimiliano Capitanio insists the tool places Italy at the forefront of the fight against illegal streaming, growing pressure from digital rights groups and EU regulators suggests a clash between national enforcement and European law.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Elon Musk’s AI chatbot Grok has removed several controversial posts after they were flagged as anti-Semitic and accused of praising Adolf Hitler.
The deletions followed backlash from users on X and criticism from the Anti-Defamation League (ADL), which condemned the language as dangerous and extremist.
Grok, developed by Musk’s xAI company, sparked outrage after stating Hitler would be well-suited to tackle anti-White hatred and claiming he would ‘handle it decisively’. The chatbot also made troubling comments about Jewish surnames and referred to Hitler as ‘history’s moustache man’.
In response, xAI acknowledged the issue and said it had begun filtering out hate speech before posts go live. The company credited user feedback for helping identify weaknesses in Grok’s training data and pledged ongoing updates to improve the model’s accuracy.
The ADL criticised the chatbot’s behaviour as ‘irresponsible’ and warned that such AI-generated rhetoric fuels rising anti-Semitism online.
It is not the first time Grok has been caught in controversy — earlier this year, the bot repeated White genocide conspiracy theories, which xAI blamed on an unauthorised software change.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A few ChatGPT users have noticed a new option called ‘Study Together’ appearing among available tools, though OpenAI has yet to confirm any official rollout. The feature appears designed to turn ChatGPT into a more interactive educational companion rather than a tool that simply delivers instant answers.
Rather than offering direct solutions, the tool prompts users to think for themselves by asking questions, potentially turning ChatGPT into a digital tutor.
Some speculate the mode might eventually allow multiple users to study together in real time, mimicking a virtual study group environment.
With the chatbot already playing a significant role in classrooms — helping teachers plan lessons or assisting students with homework — the ‘Study Together’ feature might help guide users toward deeper learning instead of enabling shortcuts.
Critics have warned that AI tools like ChatGPT risk undermining education, so the feature could mark a strategic shift toward encouraging more constructive academic use.
OpenAI has not confirmed when or if the feature will launch publicly, or whether it will be limited to ChatGPT Plus users. When asked, ChatGPT only replied that nothing had been officially announced.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI CEO Sam Altman addressed multiple hot topics during the Sun Valley conference, including Meta’s aggressive recruitment of top AI researchers, his strained relationship with Elon Musk, and a surprising show of support for Donald Trump.
Altman downplayed Meta’s talent raids, saying he had not spoken to Mark Zuckerberg since the Meta CEO lured away three OpenAI researchers with a $100 million signing bonus. All three had worked at OpenAI’s Zurich office, which opened in 2024.
Despite the losses, Altman described the situation as ‘fine’ and ‘good’, suggesting OpenAI’s mission continues to retain top talent.
The OpenAI chief also took a subtle swipe at Meta’s smart glasses, saying he doesn’t like wearable tech and implying his company has no plans to follow suit.
On the topic of Elon Musk, Altman laughed off their rivalry, saying only that Musk has bust-ups with everybody, and hinting at the long-running tension between the two former co-founders.
Perhaps most notably, Altman expressed disillusionment with the Democratic Party, saying he no longer feels represented by mainstream figures he once supported.
He praised Donald Trump’s focus on AI infrastructure and even donated $1 million to Trump’s inaugural fund, a gesture reflecting a broader shift among Silicon Valley leaders warming to Trump as his popularity rises.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Mayo Clinic researchers have developed an AI system capable of detecting surgical site infections from wound photographs submitted by patients. The model was trained on over 20,000 images from more than 6,000 patients across nine hospital locations.
The AI pipeline first identifies whether a photo contains a surgical incision and then evaluates that incision for infection. Built on a Vision Transformer architecture, the model accurately recognises incisions and achieves a high area under the curve (AUC) for infection detection.
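A minimal sketch of how such a two-stage screening pipeline could be wired together, assuming two separately fine-tuned Vision Transformer classifiers; the model names, thresholds and the screen_wound_photo helper are illustrative assumptions, not the published Mayo Clinic implementation:

```python
# Hypothetical two-stage screening pipeline: stage 1 decides whether the photo
# shows a surgical incision at all; stage 2 scores that incision for infection.
import torch
import timm
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])

# Two binary classifiers, assumed fine-tuned on labelled wound photographs.
incision_model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)
infection_model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)
incision_model.eval()
infection_model.eval()

@torch.no_grad()
def screen_wound_photo(path: str, incision_threshold: float = 0.5,
                       infection_threshold: float = 0.5) -> dict:
    """Return whether the photo shows an incision and, if so, an infection risk score."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)

    p_incision = torch.softmax(incision_model(x), dim=1)[0, 1].item()
    if p_incision < incision_threshold:
        return {"incision": False, "infection_risk": None}

    p_infection = torch.softmax(infection_model(x), dim=1)[0, 1].item()
    return {"incision": True, "infection_risk": p_infection,
            "flag_for_review": p_infection >= infection_threshold}
```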
Medical staff review outpatient wound images manually, which can delay care and burden resources. Automating this process may improve early diagnosis, reduce unnecessary visits, and speed up responses to high-risk cases.
Researchers believe the tool could eventually serve as a frontline screening method, especially helpful in rural or understaffed areas. Consistent performance across diverse patient groups also suggests a lower risk of algorithmic bias, though further validation remains essential.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has significantly tightened its internal security following reports that DeepSeek may have replicated its models. DeepSeek allegedly used distillation techniques to launch a competing product earlier this year, prompting a swift response.
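For context, ‘distillation’ here refers to the general technique of training a smaller student model to imitate a larger teacher model’s outputs. Below is a generic sketch of one common formulation of the training loss; the temperature and weighting values are illustrative, and nothing in it reflects DeepSeek’s or OpenAI’s actual systems:

```python
# Minimal illustration of knowledge distillation: the student is trained to match
# the teacher's softened output distribution (KL divergence) alongside the usual
# cross-entropy on ground-truth labels. Models and data are placeholders.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    """Blend soft-target KL loss with standard hard-label cross-entropy."""
    soft_targets = F.log_softmax(teacher_logits / temperature, dim=-1)
    soft_preds = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_preds, soft_targets, log_target=True,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```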
OpenAI has introduced strict access protocols to prevent information leaks, including fingerprint scans, offline servers, and a policy that bars internet access without explicit approval. Sensitive projects such as its o1 model are now discussed only by approved staff within designated areas.
The company has also boosted cybersecurity staffing and reinforced its data centre defences. Confidential development information is now shielded through ‘information tenting’.
These actions coincide with OpenAI’s $30 billion deal with Oracle to lease 4.5 gigawatts of data centre capacity across the United States. The partnership plays a central role in OpenAI’s growing Stargate infrastructure strategy.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A turf war has erupted between two significant ransomware gangs, DragonForce and RansomHub, following cyberattacks on UK retailers including Marks and Spencer and Harrods.
Security experts warn that the feud could result in companies being extorted multiple times as criminal groups compete to control the lucrative ransomware-as-a-service (RaaS) market.
DragonForce, a predominantly Russian-speaking group, reportedly triggered the conflict by rebranding as a cartel and expanding its affiliate base.
Tensions escalated after RansomHub’s dark-web site was taken offline in what is believed to be a hostile move by DragonForce, prompting retaliation through digital vandalism.
Cybersecurity analysts say the breakdown in relationships between hacking groups has created instability, increasing the likelihood of future attacks. Experts also point to a growing risk of follow-up extortion attempts by affiliates when criminal partnerships collapse.
The rivalry reflects the ruthless dynamics of the ransomware economy, which is forecast to cost businesses $10 trillion globally by the end of 2025. Victims now face not only technical challenges but also the legal and financial fallout of navigating increasingly unpredictable criminal networks.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The report outlined major risks, such as quantum’s dual-use nature threatening encryption, a widening technological divide, and severe gender imbalances in the field, and urged immediate global action to build safeguards before quantum capabilities mature.
UNESCO’s Guilherme Canela emphasised that innovation and human rights are not mutually exclusive but fundamentally interlinked, warning against a ‘false dichotomy’ between the two. Lead author Shamira Ahmed highlighted the need for proactive frameworks to ensure quantum benefits are equitably distributed and not used to deepen global inequalities or erode rights.
With 79% of quantum firms lacking female leadership and a mere 1 in 54 job applicants being women, the gender gap was called ‘staggering.’ Ahmed proposed infrastructure investment, policy reforms, capacity development, and leveraging the UN’s International Year of Quantum to accelerate global discussions.
Panellists echoed the urgency. Constance Bommelaer de Leusse from Sciences Po advocated for embedding multistakeholder participation into governance processes and warned of a looming ‘quantum arms race.’ Professor Pieter Vermaas of Delft University urged moving from talk to international collaboration, suggesting the creation of global quantum research centres.
Journalist Elodie Vialle raised alarms about quantum’s potential to supercharge surveillance, endangering press freedom and digital privacy, and underscored the need to close the cultural gap between technologists and civil society.
Overall, the session championed a future where quantum technology is developed transparently, governed globally, and serves as a digital public good, bridging divides rather than deepening them. Speakers agreed that the time to act is now, before today’s opportunities become tomorrow’s crises.
Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.
The US is preparing stricter export controls on high-end Nvidia AI chips destined for Malaysia and Thailand, in a move to block China’s indirect access to advanced GPU hardware.
According to sources cited by Bloomberg, the new restrictions would require exporters to obtain licences before sending AI processors to either country.
The change follows reports that Chinese engineers have hand-carried data to Malaysia for AI training after Singapore began restricting chip re-exports.
Washington suspects Chinese firms are using Southeast Asian intermediaries, including shell companies, to bypass existing export bans on AI chips like Nvidia’s H100.
Although some easing has occurred between the US and China in areas such as ethane and engine components, Washington remains committed to its broader decoupling strategy. The proposed measures will reportedly include safeguards to prevent regional supply chain disruption.
Malaysia’s Trade Minister confirmed earlier this year that the US had requested detailed monitoring of all Nvidia chip shipments into the country.
As the global race for AI dominance intensifies, Washington appears determined to tighten enforcement and limit Beijing’s access to advanced computing power.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Pakistan has launched its first AI-powered Customs Clearance and Risk Management System (RMS) to cut tax evasion, reduce corruption, and modernise port operations by automating inspections and declarations.
The initiative, part of broader digital reforms, is led by the Federal Board of Revenue (FBR) with support from the Intelligence Bureau.
By minimising human involvement in customs procedures, the system enables faster, fairer, and more transparent processing. It uses AI and automated bots to assess goods’ value and classification, improve risk profiling, and streamline green channel clearances.
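To illustrate how risk profiling and channel routing generally work in systems of this kind, here is a minimal sketch assuming a classifier already fitted on historical declarations; the feature set, thresholds, channel names and the route_declaration helper are illustrative assumptions, not details of the FBR system:

```python
# Illustrative sketch of risk profiling for customs declarations: a classifier
# estimates the probability of non-compliance, and low-risk cargo is routed to
# the green channel while higher-risk cargo is flagged for checks.
from dataclasses import dataclass
from sklearn.ensemble import RandomForestClassifier

@dataclass
class Declaration:
    declared_value: float       # declared customs value of the consignment
    value_deviation: float      # deviation from reference prices for the HS code
    importer_compliance: float  # importer's historical compliance rate in [0, 1]
    origin_risk: float          # risk weight for the country of origin in [0, 1]

def route_declaration(model: RandomForestClassifier, d: Declaration) -> str:
    """Return 'green', 'yellow' or 'red' based on the predicted non-compliance risk."""
    features = [[d.declared_value, d.value_deviation,
                 d.importer_compliance, d.origin_risk]]
    risk = model.predict_proba(features)[0][1]  # P(non-compliant), model assumed fitted
    if risk < 0.2:
        return "green"   # cleared without inspection
    if risk < 0.6:
        return "yellow"  # documentary check
    return "red"         # physical inspection
```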
Early trials showed a 92% improvement in system performance and more than doubled efficiency in identifying compliant cargo.
Prime Minister Shehbaz Sharif praised the collaboration between the FBR and IB, calling the initiative a key pillar of national economic reform. He urged full integration of the system into the country’s digital infrastructure and reaffirmed tax reform as a government priority.
The AI system is also expected to close loopholes in under-invoicing and misdeclaration, which have long been used to avoid duties.
Meanwhile, video analytics technology is being trialled to detect factory tax fraud, with early tests showing 98% accuracy. In recent enforcement efforts, authorities recovered Rs178 billion, highlighting the potential of data-driven approaches in tackling fiscal losses.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!