RIPE NCC and Czech regulator partner to strengthen internet coordination

The RIPE NCC and the Czech Telecommunication Office (CTU) have signed a non-binding Memorandum of Understanding (MoU) to strengthen cooperation in internet coordination and share technical expertise within the Czech Republic. The partnership focuses on training internet operators, collaborating on network measurements, and managing internet number resources.

It is part of a broader initiative by the RIPE NCC to establish similar agreements with national regulators in countries such as Georgia and Saudi Arabia, reflecting its commitment to closer regional cooperation. The Czech Republic is strategically positioned in European internet infrastructure, hosting several major data centres and internet exchange points (IXPs).

By facilitating collaboration between public and private sectors, the MoU aims to ensure that internet policies are developed with broad input and expertise. The CTU benefits from access to valuable data and technical knowledge that support national digital policy objectives.

Additionally, in a region where geopolitical tensions may affect internet infrastructure, this agreement promotes transparency and cooperation that help stabilise internet operations and build stakeholder trust. Overall, the RIPE NCC continues to evolve as a key technical partner in digital policy discussions across Europe and beyond.

The agreement highlights the need for close cooperation between technical bodies and regulators as digital infrastructure grows more complex, emphasising multistakeholder governance to improve stability and efficiency in Central and Eastern Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple brings AI tools to apps and Siri

Apple is rolling out Apple Intelligence, its generative AI platform, across popular apps including Messages, Mail, and Notes. Introduced in late 2024 and expanded in 2025, the platform blends text and image generation, redesigned Siri features, and integrations with ChatGPT.

The AI-enhanced Siri can now edit photos, summarise content, and interact across apps with contextual awareness. Writing tools offer grammar suggestions, tone adjustments, and content generation, while image tools allow for Genmoji creation and prompt-based visuals via the Image Playground app.

Unlike competitors, Apple uses on-device processing for many tasks, prioritising privacy. More complex queries are sent to its Private Cloud Compute system running on Apple Silicon, with a visible fallback if offline. Additional features like Visual Intelligence and Live Translation are expected later in 2025.

India unveils AI incident reporting guidelines for critical infrastructure

India is developing AI incident reporting guidelines for companies, developers, and public institutions to report AI-related issues affecting critical infrastructure sectors such as telecommunications, power, and energy. The government aims to create a centralised database to record and classify incidents like system failures, unexpected results, or harmful impacts caused by AI.

The initiative will help policymakers and stakeholders better understand and manage the risks AI poses to vital services, ensuring transparency and accountability. The proposed guidelines will require detailed reporting of incidents, including the AI application involved, the cause, the location, the affected sector, and the severity of harm.
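The reporting fields described above lend themselves to a simple structured record. A minimal sketch in Python, with field names that are illustrative only and not taken from the draft guidelines:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = "low"
    MODERATE = "moderate"
    SEVERE = "severe"

@dataclass
class AIIncidentReport:
    ai_application: str    # the AI system involved
    cause: str             # suspected root cause
    location: str          # where the incident occurred
    affected_sector: str   # e.g. telecommunications, power, energy
    severity: Severity     # assessed severity of harm

# Example record for a hypothetical incident
report = AIIncidentReport(
    ai_application="network traffic classifier",
    cause="model drift after retraining",
    location="Mumbai",
    affected_sector="telecommunications",
    severity=Severity.MODERATE,
)
```

A structured schema like this is what makes a centralised database useful: incidents can be classified and compared across sectors rather than filed as free-form text.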

The Telecommunications Engineering Centre (TEC) is spearheading the effort, focusing initially on telecom and digital infrastructure, with plans to extend the standard across other sectors and pitch it globally through the International Telecommunication Union. The framework aligns with international initiatives such as the OECD’s AI Incident Monitor and builds on government recommendations to improve oversight while fostering innovation.

Why does it matter?

The draft emphasises learning from incidents rather than penalising reporters, encouraging self-regulation to avoid excessive compliance burdens. This approach complements India's broader AI safety goals, including the recent launch of the IndiaAI Safety Institute, which works on risk management, ethical frameworks, and detection tools.

Meta and TikTok contest the EU’s compliance charges

Meta and TikTok have taken their fight against an EU supervisory fee to Europe’s second-highest court, arguing that the charges are disproportionate and based on flawed calculations.

The fee, introduced under the Digital Services Act (DSA), requires major online platforms to pay 0.05% of their annual global net income to cover the European Commission’s oversight costs.
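The fee formula itself is simple arithmetic, which is why the choice of revenue base sits at the heart of the dispute. A minimal sketch, using entirely hypothetical revenue figures rather than either company's actual accounts:

```python
# Sketch of the DSA supervisory fee: 0.05% of annual global net income.
# The income figures below are hypothetical, chosen only to show how much
# the choice of base (group-wide vs. EU subsidiary) changes the bill.
def supervisory_fee(annual_net_income_eur: float, rate: float = 0.0005) -> float:
    """Fee owed under the DSA's 0.05% supervisory levy."""
    return annual_net_income_eur * rate

group_income = 100_000_000_000     # hypothetical group-wide net income (EUR)
subsidiary_income = 5_000_000_000  # hypothetical EU-subsidiary net income (EUR)

print(supervisory_fee(group_income))       # 50000000.0
print(supervisory_fee(subsidiary_income))  # 2500000.0
```

At the same 0.05% rate, the broader base yields a fee twenty times larger in this example, which is the substance of Meta's objection.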

Meta questioned the Commission’s methodology, claiming the levy was based on the entire group’s revenue rather than that of its EU-based subsidiary.

The company’s lawyer told judges it still lacked clarity on how the fee was calculated, describing the process as opaque and inconsistent with the spirit of the law.

TikTok also criticised the charge, alleging inaccurate and discriminatory data inflated its payment.

Its legal team argued that user numbers were double-counted when people switched between devices, and that the Commission had wrongly calculated fees based on group profits rather than platform-specific earnings.

The Commission defended its approach, saying group resources should bear the cost when consolidated accounts are used. A ruling is expected from the General Court sometime next year.

AI tools are not enough without basic cybersecurity

At London Tech Week, Darktrace and UK officials warned that many firms are over-relying on AI tools while failing to implement basic cybersecurity practices.

Despite the hype around AI, essential measures like user access control and system segmentation remain missing in many organisations.

Cybercriminals are already exploiting AI to automate phishing and accelerate intrusions in the UK, while outdated infrastructure and short-term thinking leave companies vulnerable.

Boards often struggle to assess AI tools properly, buying into trends rather than addressing real threats.

Experts stressed that AI is not a silver bullet and must be used alongside human expertise and solid security foundations.

Domain-specific AI models, built with transparency and interpretability, are needed to avoid the dangers of overconfidence and misapplication in high-risk areas.

AI must protect dignity, say US bishops

The US Conference of Catholic Bishops has urged Congress to centre AI policy on human dignity and the common good.

Their message outlines moral principles rather than technical guidance, warning against misuse of technology that may erode truth, justice, or the protection of the vulnerable.

The bishops caution against letting AI replace human moral judgement, especially in sensitive areas like family life, work, and warfare. They express concern about AI deepening inequality and harming those already marginalised without strict oversight.

Their call includes demands for greater transparency, regulation of autonomous weapons, and stronger protections for children and workers in the US.

Rooted in Catholic social teaching, the letter frames AI not as a neutral innovation but as a force that must serve people, not displace them.

AI startup faces lawsuit from Disney and Universal

Two of Hollywood’s most powerful studios, Disney and Universal, have launched a copyright infringement lawsuit against the AI firm Midjourney, accusing it of illegally replicating iconic characters.

The studios claim the San Francisco-based company copied their creative works without permission, describing it as a ‘bottomless pit of plagiarism’.

Characters such as Darth Vader, Elsa, and the Minions were cited in the 143-page complaint, which alleges Midjourney used these images to train its AI system and generate similar content.

Disney and Universal argue that the AI firm failed to invest in the creative process, yet profited heavily from the output, reportedly earning US$300 million in paid subscriptions last year.

Despite early attempts by the studios to raise concerns and propose safeguards already adopted by other AI developers, Midjourney allegedly ignored them and pressed ahead with further product releases. The company, which calls itself a small, self-funded team of 11, has declined to comment on the lawsuit directly but insists it has a long future ahead.

Disney’s legal chief, Horacio Gutierrez, stressed the importance of protecting creative works that result from decades of investment. While supporting AI as a tool for innovation, he maintained that ‘piracy is piracy’, regardless of whether humans or machines carry it out.

The studios are seeking damages and a court order to stop the AI firm from continuing its alleged copyright violations.

Wikipedia halts AI summaries test after backlash

Wikipedia has paused a controversial trial of AI-generated article summaries following intense backlash from its community of volunteer editors.

The Wikimedia Foundation had planned a two-week opt-in test for mobile users using summaries produced by Aya, an open-weight AI model developed by Cohere.

However, the reaction from editors was swift and overwhelmingly negative. The discussion page became flooded with objections, with contributors arguing that such summaries risked undermining the site’s reputation for neutrality and accuracy.

Some expressed concerns that inserting AI content would override Wikipedia’s long-standing collaborative approach by effectively installing a single, unverifiable voice atop articles.

Editors warned that AI-generated summaries lacked proper sourcing and could compromise the site’s credibility. Recent AI blunders by other tech giants, including Google’s glue-on-pizza mishap and Apple’s false death alert, were cited as cautionary examples of reputational risk.

For many, the possibility of similar errors appearing on Wikipedia was unacceptable.

Marshall Miller of the Wikimedia Foundation acknowledged the misstep in communication and confirmed the project’s suspension.

While the Foundation remains interested in exploring AI to improve accessibility, it has committed to ensuring any future implementation involves direct participation from the Wikipedia community.

INTERPOL cracks down on global cybercrime networks

Over 20,000 malicious IP addresses and domains linked to data-stealing malware have been taken down during Operation Secure, a coordinated cybercrime crackdown led by INTERPOL between January and April 2025.

Law enforcement agencies from 26 countries worked together to locate rogue servers and dismantle criminal networks instead of tackling threats in isolation.

The operation, supported by cybersecurity firms including Group-IB, Kaspersky and Trend Micro, led to the removal of nearly 80 per cent of the identified malicious infrastructure. Authorities seized 41 servers, confiscated over 100GB of stolen data and arrested 32 suspects.

More than 216,000 individuals and organisations were alerted, helping them act quickly by changing passwords, freezing accounts or blocking unauthorised access.

Vietnamese police arrested 18 people, including a group leader found with cash, SIM cards and business records linked to fraudulent schemes. Sri Lankan and Nauruan authorities carried out home raids, arresting 14 suspects and identifying 40 victims.

In Hong Kong, police traced 117 command-and-control servers across 89 internet providers. INTERPOL hailed the effort as proof of the impact of cross-border cooperation in dismantling cybercriminal infrastructure instead of allowing it to flourish undisturbed.

IBM sets 2029 target for quantum breakthrough

IBM has set out a detailed roadmap to deliver a practical quantum computer by 2029, marking a major milestone in its long-term strategy.

The company plans to build its ‘Starling’ quantum system at a new data centre in Poughkeepsie, New York, targeting around 200 logical qubits, enough to begin outperforming classical computers in specific tasks rather than being held back by error-correction limitations.

Quantum computers rely on qubits to perform complex calculations, but high error rates have held back their potential. IBM shifted its approach in 2019, designing error-correction algorithms based on real, manufacturable chips instead of theoretical models.

The change, as the company says, will significantly reduce the qubits needed to fix errors.
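To put that overhead in context, the textbook surface code needs roughly 2d² − 1 physical qubits per logical qubit at code distance d, so even 200 logical qubits imply tens of thousands of physical ones. A rough sketch of that standard estimate (a generic illustration, not IBM's own error-correction scheme, which the company says requires far fewer qubits):

```python
# Textbook surface-code overhead: each logical qubit at code distance d
# uses d**2 data qubits plus d**2 - 1 measurement qubits.
# This is a generic estimate, not IBM's actual design figures.
def surface_code_physical_qubits(logical_qubits: int, distance: int) -> int:
    per_logical = 2 * distance**2 - 1
    return logical_qubits * per_logical

print(surface_code_physical_qubits(200, 15))  # 89800
```

Shrinking that multiplier is precisely what makes a 200-logical-qubit machine a plausible engineering target.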

With confidence in its new method, IBM plans to build a series of quantum systems through 2027, each advancing toward a larger, more capable machine.

Vice President Jay Gambetta stated that the key scientific questions have already been resolved, meaning what remains is primarily an engineering challenge rather than a scientific one.
