Dublin launches major data centre microgrid

A new 110MW data centre microgrid has been launched in Dublin to support rising AI-driven energy demand. The system is designed to provide reliable power during early development stages before full grid connection.

The project combines energy generation, battery storage and heat recovery to improve efficiency and resilience. Developers say the system can help address power constraints affecting large-scale cloud and AI facilities.

Industry leaders in Dublin say the microgrid offers a model for integrating renewable energy with traditional infrastructure. The approach could be replicated in other European markets facing similar grid limitations.

Experts say the system also enables future innovations such as hydrogen integration and district heating. The project reflects a broader shift towards treating energy as a strategic asset in the expansion of AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CEOs worry about AI progress

Business leaders in Cyprus are increasingly concerned about whether their organisations are adapting quickly enough to AI-driven change. A recent PwC survey shows many executives feel the pace of transformation is too slow.

Despite growing interest, most companies have yet to see significant financial returns from AI. Only a minority reported increased revenue or reduced costs, while many said the impact remains limited. This pattern of modest returns is not unique to Cyprus; similar results are reported worldwide.

Companies in Cyprus are still building the foundations for wider AI adoption. The challenges include limited investment, difficulty attracting skilled talent and uncertainty about organisational readiness.

Executives expect AI to affect junior roles more than senior positions over the coming years. Leaders emphasise the need for clear strategy, workforce development and stronger alignment between technology and business goals.

EU calls on US tech firms to respect rules on handling staff data

Concerns over data protection have intensified as the European Commission calls on major technology companies to apply EU standards when handling sensitive staff information linked to digital regulation.

Pressure follows requests from the US House Judiciary Committee seeking access to communications between US firms and EU officials involved in enforcing laws such as the Digital Services Act and the Digital Markets Act.

EU officials emphasise that formal exchanges with companies take place through official channels, including documented correspondence, rather than informal messaging platforms. Internal communication practices may involve encrypted tools, reflecting growing concerns about data security and external scrutiny.

Debate surrounding the issue reflects wider tensions between the EU and the US over digital governance, privacy protections and regulatory authority. Questions over jurisdiction and access to sensitive communications are likely to remain central as transatlantic tech policy evolves.

GDPR changes debated as EU seeks balance on data protection rules

Debate over potential updates to the GDPR is intensifying, as Marina Kaljurand advocates a focused ‘fitness check’ rather than sweeping legislative changes in an omnibus package.

Concerns raised in the European Parliament highlight risks associated with altering foundational elements of the regulation, particularly its definition of personal data. Preserving these core principles is seen as essential to maintaining the integrity of the EU's data protection framework.

Ongoing discussions reflect broader policy tensions within the EU, where efforts to reduce regulatory complexity must be balanced against the need to uphold strong privacy safeguards. Proposals for simplification are therefore facing scrutiny from lawmakers prioritising stability and legal clarity.

Future developments are likely to shape how the EU adapts its data protection rules to evolving digital markets, while ensuring that existing protections remain effective in a rapidly changing technological environment.

Global leaders gather to tackle fraud

A major international effort to tackle fraud is set to take place in Vienna, as global leaders gather for the Global Fraud Summit 2026 on 16–17 March. The event will highlight emerging challenges in cross-border and digital fraud, bringing global attention to the need for stronger cooperation.

The meeting is organised by the UNODC in partnership with INTERPOL, bringing together government officials, law enforcement authorities, private sector representatives, civil society and academics to discuss emerging fraud trends.

Fraud is increasingly seen as a cross-border and digitally driven threat, making coordination between countries more important than ever. Discussions among leaders and other representatives are expected to focus on how fraud operates across jurisdictions, which trends are emerging, why detection remains difficult, and what practical steps can improve both prevention and enforcement.

Particular attention will be given to how institutions and their leaders can enhance information sharing and cooperation. Stronger partnerships between public and private actors are seen as key to responding more effectively, especially as fraud schemes grow more sophisticated.

Beyond immediate enforcement, the summit aims to strengthen long-term capacity and build more resilient systems. Greater alignment between states and organisations could play a decisive role in addressing fraud globally.

EU moves to strengthen digital resilience with subsea cable funding

Efforts to improve the security of Europe’s digital infrastructure have advanced as the European Commission opens a €180 million funding call to support backup systems for subsea internet cables.

Investment by the EU will focus on developing alternative routes and redundancy mechanisms, ensuring continuity of connectivity in the event of disruptions affecting critical undersea networks that carry global data traffic.

Growing concerns around infrastructure vulnerability have increased attention on subsea cables, which play a central role in international communications. Strengthening resilience is therefore becoming a priority within broader European strategies on technological sovereignty and security.

Planned projects are expected to enhance reliability across the region, reducing risks associated with outages or potential external threats to essential telecommunications infrastructure.

US senators question Meta facial recognition in smart glasses

Three Democratic senators have raised concerns about Meta’s reported exploration of facial recognition in its smart glasses, warning that it could normalise public surveillance. In a letter to CEO Mark Zuckerberg, Senators Edward Markey, Ron Wyden, and Jeff Merkley asked about consent, biometric data, and the risks of misuse.

The lawmakers said the proposed feature ‘risks normalising mass surveillance at a moment when the federal government is using similar tools to intimidate protesters and chill speech. Although facial recognition may offer real benefits for blind and visually impaired users, Meta’s history of failing to protect user privacy raises serious questions about its plan to deploy this technology in its smart glasses.’

‘Americans do not consent to biometric data collection simply by walking down a public street, entering a café, or standing in a crowd,’ the senators added. ‘Yet, the deployment of this technology would appear to do exactly that – subjecting countless individuals to covert identification without notice, without consent, and without any meaningful opportunity to opt out.’ They warned that such practices would erode longstanding expectations of privacy in public spaces, effectively eliminating public anonymity.

Concerns grew after reports of US Border Patrol and ICE agents using Meta smart glasses. While there is no evidence of facial recognition use, senators argue that adding identification tools to eyewear could expand undetectable surveillance. The letter questions whether Meta might link facial data with information from its platforms, enabling real-time identification tied to profiles. Lawmakers warn that this could increase the risks of harassment and targeting.

Meta had previously discontinued facial recognition on Facebook in 2021, citing societal concerns. The senators argue that reintroducing similar technology in wearable devices suggests a shift rather than a retreat. ‘Five years later, Meta appears less worried about those societal concerns and is reportedly planning to deploy facial recognition technology in one of the most dangerous possible settings,’ they wrote.

‘Moreover,’ they continued, ‘Meta is apparently aware of the risks with this technology,’ noting that an internal memo recommended launching the product ‘during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns’.

‘In other words,’ the senators added, ‘Meta appears to recognise the serious privacy and civil liberties risks of facial recognition but thinks it can avoid attention by slipping the once-abandoned, ethically fraught product back onto the market while the world is distracted by the Trump administration’s daily chaos.’

The senators have asked Meta to clarify how it would obtain consent from both users and bystanders, how long it would retain biometric data, whether it would use it to train AI models, and whether it could share it with law enforcement, including the Department of Homeland Security. The company has been given until 6 April to respond.

NSA warns of AI supply chain risks in new cybersecurity guidance

The National Security Agency has released new guidance on managing risks across the AI supply chain, highlighting growing cybersecurity concerns tied to AI and machine learning systems. The joint information sheet outlines how organisations can better assess vulnerabilities when deploying or sourcing AI technologies.

The document defines the AI and machine learning supply chain as a combination of key components, including training data, models, software, infrastructure, hardware, and third-party services. Each element can introduce risks affecting confidentiality, integrity, or availability, particularly as advanced tools such as large language models and AI agents become more widely adopted.

Security risks associated with data include bias, poisoning attacks, and exposure via techniques such as model inversion and data extraction. For models, the guidance warns of hidden backdoors, malware, evasion attacks, and model manipulation. Organisations are advised to use trusted sources, perform integrity checks, and maintain verified model registries to mitigate such threats.
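The integrity checks recommended in the guidance can be as simple as comparing a downloaded model artefact against a known-good digest before loading it. The sketch below illustrates the idea in Python; the registry contents and file names are hypothetical, and a production registry would be a signed, access-controlled store rather than an in-code dictionary.

```python
import hashlib

# Hypothetical trusted registry mapping model artefacts to known-good
# SHA-256 digests (the entry below is the digest of an empty file, used
# purely for illustration).
TRUSTED_REGISTRY = {
    "sentiment-model-v2.bin":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_model(path: str, name: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the registry entry."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large model files do not need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == TRUSTED_REGISTRY.get(name)
```

A check like this catches tampering in transit or a poisoned re-upload, but only if the registry digests themselves come from a trusted, verified source.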

The paper also highlights software and infrastructure vulnerabilities, noting that AI systems often rely on complex dependencies that expand the attack surface. Recommended measures include malware scanning, testing, patching, and maintaining software bills of materials. Additional risks arise from third-party services, which may introduce weaknesses through their own supply chains or shared environments.

To manage these risks, organisations are urged to improve visibility across their AI ecosystems, identify suppliers and subcontractors, and require documentation such as AI and software bills of materials. The guidance aligns with frameworks from the National Institute of Standards and Technology and MITRE, reinforcing the need for coordinated approaches to AI supply chain security.

AI agents test limits of EU rules

AI agents are rapidly gaining traction, raising questions about whether existing EU rules can keep pace. Unlike chatbots, these systems can act autonomously and interact with digital tools on behalf of users.

Experts warn that AI agents require deeper access to personal data and online services to function effectively. Regulators in Europe are monitoring potential risks as the technology becomes more integrated into daily life.

Lawmakers are examining whether current legislation, such as the AI Act and GDPR, adequately covers agent-based systems. Legal experts highlight challenges around contracts, liability and accountability when AI acts independently.

Despite concerns, many governments remain reluctant to introduce new rules, citing regulatory fatigue. Policymakers may rely on existing frameworks unless major incidents force a reassessment of AI oversight.

Publishers challenge OpenAI over alleged copyright infringement

Legal pressure is increasing on OpenAI as Encyclopaedia Britannica and Merriam-Webster file a lawsuit accusing the company of large-scale copyright violations.

According to the complaint, nearly 100,000 copyrighted articles were allegedly used without authorisation to train large language models. Publishers also argue that AI-generated outputs can reproduce parts of their content, raising concerns about unauthorised distribution.

Additional claims focus on how AI systems retrieve and present information. The lawsuit argues that retrieval-augmented generation tools may rely on proprietary databases, potentially undermining publishers’ business models by reducing traffic to original sources.
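The retrieval step at issue can be sketched in a few lines: a retrieval-augmented system fetches matching passages from a database and prepends them to the model's prompt, so the answer is drawn from the source text without the user ever visiting it. The toy corpus and keyword-overlap scoring below are purely illustrative stand-ins for the proprietary databases and vector search a real system would use.

```python
# Illustrative two-document corpus; a real RAG system would query a large
# indexed database with vector similarity search instead.
CORPUS = {
    "doc1": "an encyclopaedia entry about photosynthesis in plants",
    "doc2": "a dictionary definition of the word serendipity",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by shared words with the query and return the top k."""
    words = set(query.lower().split())
    scored = sorted(
        CORPUS.values(),
        key=lambda text: len(words & set(text.split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved passages so the model answers from them directly."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Because the retrieved passage, not a link, is what reaches the user, publishers argue this substitutes for the original source rather than referring traffic to it.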

Concerns are also raised about inaccurate outputs attributed to publishers, which could affect trust in established information providers. The case highlights ongoing tensions between AI development and intellectual property protections.

Growing legal disputes involving media organisations, including The New York Times, suggest that courts will play a key role in defining how copyrighted material can be used in AI training.
