Singapore urges organisations to strengthen AI governance frameworks

GovTech Singapore has argued that stronger AI governance in workplaces is essential for trust, compliance, risk management, and responsible innovation as AI adoption expands across business operations.

The agency leading Singapore’s Smart Nation and digital government efforts defines AI governance as a framework of policies, processes, and responsibilities guiding the ethical, transparent, and accountable development and deployment of AI systems within an organisation. The framework extends oversight across the AI lifecycle, from design through to ongoing monitoring.

Key elements identified by GovTech Singapore include transparency and explainability, fairness and bias mitigation, accountability and human oversight, and data privacy and security. Responsible AI is also linked to Singapore’s wider Smart Nation agenda, which the agency describes as a national priority.

The guidance recommends that organisations establish clear internal policies on AI use, build AI literacy across teams, carry out regular audits and assessments, and prioritise secure development practices. It also points to Singapore’s Model AI Governance Framework for Generative AI, developed by the AI Verify Foundation and the Infocomm Media Development Authority, as a reference point for businesses adapting governance frameworks to their own needs.

As part of its effort to support responsible AI use in the public sector, GovTech Singapore also highlights its AI Guardian suite. The suite includes Litmus, a testing platform using adversarial prompts to identify risks and vulnerabilities, and Sentinel, a guardrails service designed to detect and mitigate unsafe or irrelevant content before it affects AI models or users.
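For readers unfamiliar with how a guardrails service of this kind operates, the sketch below shows the basic pattern: screen an input before it reaches a model and return a verdict with a reason. Everything here, from the pattern list to the function name, is an illustrative assumption; it is not Sentinel’s actual interface, which GovTech has not detailed in this context.

```python
# Minimal sketch of a guardrails-style input check, in the spirit of a
# service like Sentinel. All names and patterns are illustrative
# assumptions, not GovTech's actual implementation.

UNSAFE_PATTERNS = [
    "ignore previous instructions",   # common prompt-injection phrasing
    "reveal your system prompt",
]

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason). A real guardrail would combine trained
    classifiers, policy rules, and relevance checks, not substring tests."""
    lowered = prompt.lower()
    for pattern in UNSAFE_PATTERNS:
        if pattern in lowered:
            return False, f"blocked: matched pattern '{pattern}'"
    return True, "ok"

if __name__ == "__main__":
    allowed, reason = screen_prompt("Please ignore previous instructions and ...")
    print(allowed, reason)  # False blocked: matched pattern '...'
```

A testing platform like Litmus works from the other side of the same boundary, firing batches of adversarial prompts such as these at a system to see which ones get through.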

Overall, GovTech Singapore presents AI governance not only as a compliance issue, but as part of building a trusted digital environment in which AI can be deployed safely and effectively.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Atos launches digital sovereignty offering for AI and regulated environments

Atos Group has launched an integrated digital sovereignty offering, designed to help organisations retain control and accountability over their data, infrastructure and digital operations.

The proposition combines capabilities across cloud, cybersecurity, AI and digital workplace services. It draws on Atos and Eviden expertise, including fully European data encryption products from Eviden.

Sovereignty is embedded by design across existing portfolios, with graduated levels tailored to each customer’s workloads. Open standards and interoperability sit at the core, aiming to reduce vendor lock-in.

The offering targets regulated sectors including the public sector, defence, financial services and healthcare. Atos Group digital sovereignty leader Michael Kollar said the initiative helps organisations ‘turn sovereignty into an operational capability.’

The launch complements the recent introduction of Atos Sovereign Agentic Studios, which focused on moving AI deployments into production under sovereign control.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT researchers develop tool to estimate energy use of AI workloads

Researchers from the Massachusetts Institute of Technology and the MIT-IBM Watson AI Lab have developed a rapid estimation system that calculates the energy consumption of AI workloads in seconds, offering a major improvement over traditional methods that take hours or days.

The tool, known as EnergAIzer, is designed to support data centre operators as AI demand accelerates and electricity consumption rises. With AI infrastructure expected to account for a significant share of US power usage in the coming years, more efficient resource planning has become increasingly critical.

EnergAIzer analyses repeatable workload patterns and GPU behaviour to generate fast predictions of energy use across different hardware setups. After incorporating real GPU measurements, the system achieves high accuracy while remaining lightweight and adaptable to current and future chip designs.
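The core arithmetic behind any such estimate is simple: energy is average power draw times runtime times device count, plus overhead. The sketch below is a back-of-envelope illustration of the quantity EnergAIzer automates, not the tool’s actual model, which profiles workload patterns and GPU behaviour in far finer detail; the function name and figures are assumptions.

```python
# Back-of-envelope sketch of the quantity a tool like EnergAIzer estimates:
# energy = average GPU power draw x runtime x device count, with a rough
# multiplier for cooling, networking, and host power. Illustrative only.

def estimate_energy_kwh(avg_power_watts: float,
                        runtime_hours: float,
                        num_gpus: int,
                        overhead_factor: float = 1.2) -> float:
    """Estimate workload energy in kWh; overhead_factor approximates
    facility overhead beyond the GPUs themselves."""
    gpu_energy_kwh = (avg_power_watts * runtime_hours * num_gpus) / 1000
    return gpu_energy_kwh * overhead_factor

# Example: 8 GPUs averaging 400 W over a 6-hour job.
print(f"{estimate_energy_kwh(400, 6, 8):.1f} kWh")  # 23.0 kWh
```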

By providing immediate feedback on energy consumption, the tool allows developers and operators to optimise workloads, reduce waste, and test different configurations before deployment. The approach is positioned as a practical step towards improving sustainability across large-scale AI systems.

Why does it matter? 

Energy use is becoming one of the defining constraints of AI growth, as large-scale models push data centres towards unprecedented electricity demand. A tool like EnergAIzer directly addresses this bottleneck by making power consumption visible and measurable before deployment.

Faster and more accurate estimation changes how AI systems are designed and scaled. Rather than reacting to energy costs after deployment, developers and operators can optimise workloads in advance, cutting waste and improving efficiency.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Meta partners with Overview and Noon Energy to power AI data centres

Meta has announced two energy partnerships to support its AI infrastructure, teaming up with Overview Energy for space solar power and Noon Energy for ultra-long-duration storage, with up to 1 GW reserved under each agreement.

Overview Energy operates satellites in geosynchronous orbit, roughly 22,000 miles above Earth, where sunlight is constant. The satellites collect solar energy and beam it to existing ground-based solar farms as low-intensity, near-infrared light, enabling around-the-clock electricity generation without requiring additional land or grid infrastructure.

Noon Energy’s technology relies on modular, reversible solid-oxide fuel cells and carbon-based storage, offering over 100 hours of energy storage. Meta has reserved up to 1 GW/100 GWh, with an initial 25 MW/2.5 GWh pilot demonstration expected by 2028. The company describes this as among the largest commitments to ultra-long-duration storage in the industry.
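The duration claim follows directly from the quoted figures: dividing energy capacity by power rating gives the same result at both pilot and full scale.

$$
\frac{100\ \text{GWh}}{1\ \text{GW}} = 100\ \text{h},
\qquad
\frac{2.5\ \text{GWh}}{25\ \text{MW}} = \frac{2500\ \text{MWh}}{25\ \text{MW}} = 100\ \text{h}
$$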

Both partnerships build on Meta’s existing energy portfolio, which includes more than 30 GW of contracted clean and renewable energy. The company is also one of the largest corporate purchasers of nuclear energy in the US, with 7.7 GW secured across agreements with Vistra, TerraPower, Oklo and Constellation Energy.

Overview Energy’s orbital demonstration is planned for 2028, with commercial delivery to the US grid potentially starting as early as 2030. Noon Energy’s demonstration project targets the same year, with its modular design allowing capacity to scale alongside Meta’s growing data centre footprint.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian Research Council updates generative AI policy

The Australian Research Council has updated its policy on the use of generative AI in its grants programmes, setting out how the rules apply to applicants, administering organisations, and assessors in the National Competitive Grants Program.

The revised policy has officially taken effect and applies to applications and assessments for Discovery Indigenous 2027 and all subsequently opening scheme rounds.

The policy says applicants may use generative AI tools to support tasks such as testing ideas, improving language, and summarising text, but remain responsible for the content they submit and are considered the authors of that content.

Administering organisations are also responsible for ensuring that applications are complete, accurate, and free from false or misleading information. Delegated research leaders, meanwhile, must certify that participants are responsible for the authorship and intellectual content of applications and have not infringed the intellectual property rights of others.

A notable change in the revised policy is that assessors are now permitted to use generative AI tools in limited ways. The ARC says assessors may use AI only to correct or improve grammar, spelling, formatting, and the readability of drafted assessments.

At the same time, the policy states that assessors must not use AI to help form an opinion on the quality of an application and must preserve the confidentiality of all application materials. Inputting any application material into public generative AI tools such as ChatGPT, Gemini, Claude, or Perplexity is described by the ARC as a serious breach of confidentiality and is not permitted.

The ARC also says assessors will be asked about their use of AI and must be transparent when requested. Where assessors’ inappropriate use of generative AI is suspected, the ARC may remove that assessment from the process. If a breach is established following investigation, the ARC may impose consequential actions in addition to any imposed by the assessor’s employing institution.

The revised policy explains that its approach is shaped by concerns including intellectual integrity and authorship, safeguarding intellectual property, culturally appropriate use of data, content reliability and bias, human oversight and expert judgement, and energy and environmental impacts. It also states that the ARC will continue to monitor developments in generative AI and update the policy as required.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN experts warn of growing risks from digital surveillance and AI misuse

UN human rights experts have raised concerns about the global expansion of digital surveillance technologies and their impact on fundamental freedoms, warning that current practices risk undermining democratic participation and civic space.

In a joint statement, the experts said that surveillance tools are increasingly used in ways that may be incompatible with international human rights standards. They noted that such technologies are often deployed against civil society, journalists, political opposition, and minority groups, contributing to what they described as a ‘chilling effect’ on freedom of expression and dissent.

The experts highlighted the growing use of advanced technologies, including AI, in areas such as law enforcement, counter-terrorism, and border management. They said that, without adequate legal safeguards, these tools can enable large-scale monitoring, predictive profiling, and the amplification of bias, potentially leading to disproportionate targeting of individuals and groups.

According to the statement, digital surveillance systems are part of broader ecosystems that involve collaboration among governments, private companies, and data intermediaries. These interconnected systems can expand state surveillance capabilities and increase the complexity of assessing their impact on human rights.

The experts also pointed to the role of legal frameworks, noting that broadly defined laws on national security, extremism, and cybercrime may contribute to the misuse of surveillance technologies. Such measures, they said, can affect the work of civil society organisations and other actors operating in the public sphere.

To address these challenges, the experts called for stronger safeguards, including clearer limits on surveillance practices, risk-based regulation of AI systems, and improved oversight mechanisms. They emphasised the importance of human rights impact assessments throughout the lifecycle of digital technologies, as well as the need for accountability and access to remedies in cases of harm.

Why does it matter?

Beyond diagnosing risks, the statement highlighted practical safeguards: data protection, system testing, and validation to reduce the dangers associated with digital surveillance tools. It called on governments to align national legislation with international human rights standards and ensure independent oversight of surveillance activities.

The experts further suggested that international cooperation may be needed to address cross-border implications, including the potential development of a binding international framework governing digital surveillance technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO and Oxford University launch global AI course for courts

UNESCO, in partnership with the University of Oxford, has launched a free online course aimed at preparing judicial systems for the growing role of AI in legal decision-making.

AI is already shaping court processes, influencing evidence assessment, and affecting access to justice. Yet many legal professionals lack structured guidance for evaluating such systems within a rule-of-law framework.

The UNESCO programme introduces a practical, human rights-based approach to AI, combining legal, ethical, and operational perspectives.

Developed with institutions including Oxford’s Saïd Business School and Blavatnik School of Government, the course equips participants with tools to assess algorithmic outputs, manage risks of bias, and maintain judicial independence in increasingly digital court environments.

Central to UNESCO’s initiative is a newly developed AI and Rule of Law Checklist, designed to help courts scrutinise AI systems and their outputs, including use as evidence.

The course also addresses broader concerns, including fairness, transparency, accountability, and the protection of vulnerable groups, reflecting rising global reliance on AI across justice systems.

Supported by the EU, the course is available globally, free of charge, with certification from the University of Oxford. As AI becomes embedded in judicial processes, capacity-building efforts aim to ensure technological adoption strengthens rather than undermines the rule of law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU pushes Android changes to open AI competition

The European Commission has outlined draft measures requiring Google to improve interoperability on Android as part of ongoing proceedings under the Digital Markets Act. Regulators are focusing on how third-party AI services can interact with hardware and software features controlled by the Android operating system.

The proposed measures are intended to give competing AI services access to key Android features already used by Google’s own AI services, including Gemini. In practice, that could allow rival services to support actions such as sending messages, sharing content, or completing tasks through user-preferred applications rather than being limited by Google’s default ecosystem.

The Commission’s approach could also make it easier for users to activate alternative AI assistants through customised interactions and device-level features, reducing dependence on default system tools. The broader aim is to give third-party providers a more equal opportunity to innovate and compete in the fast-moving market for AI services on mobile devices.

Feedback on the proposed measures is being gathered as part of the Commission’s specification proceedings under the DMA. The consultation forms part of a wider regulatory effort to enforce fair access to core platform features and strengthen digital competition across European markets, including in the AI sector.

Why does it matter?

The move targets one of the most important control points in the digital economy: the operating system layer. Opening Android features to competing AI services could reduce the structural advantage held by Google and shift power towards a more competitive, multi-provider mobile ecosystem. This is an inference based on the Commission’s stated objective of giving third-party AI services access equivalent to that available to Google’s own AI tools.

Greater interoperability under the Digital Markets Act could reshape how AI reaches users, turning smartphones into more open platforms rather than tightly controlled default environments. At the same time, the case also shows how strongly the EU is trying to apply competition law to the next phase of AI distribution, not only to search, app stores, and browsers.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

UK backs self-learning AI push to advance scientific discovery

The UK’s Sovereign AI Fund has invested in Ineffable Intelligence, a British startup developing self-learning AI systems designed to generate new knowledge rather than rely solely on existing data. The investment is being made alongside the British Business Bank.

The company is building algorithms intended to improve through interaction with their environment, refining outcomes through iterative experimentation. The approach is aimed at enabling AI systems to identify new patterns and solutions for use in science, engineering, and healthcare.

Led by AI researcher David Silver, known for his work in reinforcement learning, the project reflects a broader shift towards more autonomous and exploratory forms of AI. Support from the Sovereign AI Fund is intended to help the company scale its development from within the UK and strengthen longer-term domestic innovation capacity.
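Improvement through interaction with an environment is the core idea of reinforcement learning, the field Silver is best known for. The toy example below shows that pattern at its simplest, a two-armed bandit in which an agent tries actions, observes rewards, and updates its value estimates. It is purely illustrative and is not a description of Ineffable Intelligence’s systems; all names and numbers are assumptions.

```python
# Toy reinforcement-learning loop: an agent improves by trial and error
# against its environment. Illustrative only.
import random

true_payoffs = [0.3, 0.7]   # hidden reward probabilities (the "environment")
estimates = [0.0, 0.0]      # the agent's learned value estimates
counts = [0, 0]

for step in range(1000):
    # Explore 10% of the time; otherwise exploit the best current estimate.
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = estimates.index(max(estimates))
    reward = 1.0 if random.random() < true_payoffs[arm] else 0.0
    counts[arm] += 1
    # Incremental average: the estimate moves toward observed rewards.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # the better arm's estimate converges toward ~0.7
```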

The investment forms part of a wider strategy to strengthen sovereign AI capability in the UK, reduce reliance on external technologies, and reinforce domestic expertise. In that context, infrastructure support and talent development are being positioned as part of a broader effort to support the growth of next-generation AI systems and expand the UK’s role in frontier research.

Why does it matter?

Investment in self-learning AI reflects a broader shift in how advanced AI is being developed, from systems that mainly analyse existing information towards systems intended to generate new insights through exploration and interaction. If those approaches prove effective, they could accelerate discovery in fields where conventional modelling and data-driven methods have clear limits. This is an inference based on the company’s stated aims and the government’s framing of the investment.

More broadly, sovereign investment in advanced AI highlights a growing focus on technological independence and strategic control over critical digital capability. Strengthening domestic capacity could help ensure that future AI innovation is developed within national ecosystems, with implications for economic competitiveness and long-term research direction.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Greece accelerates digital governance with AI enforcement and social media age restrictions

Greece is moving to tighten online child protection and expand AI-based public enforcement as part of a broader digital governance agenda, Digital Governance and Artificial Intelligence Minister Dimitris Papastergiou has said.

Under the plan, social media platforms would, from 2027, be required to block access for users under 15 through age verification systems rather than self-declared age data.

The policy includes tools such as Kids Wallet, built on privacy-preserving verification methods that share only age eligibility. Authorities say the aim is to address risks linked to digital addiction while strengthening protections for minors across online environments.
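The privacy property described here, releasing only age eligibility, can be illustrated with a minimal sketch: the age check runs where the birthdate lives, and only a boolean claim leaves the wallet. The function and flow below are assumptions for illustration, not the actual Kids Wallet design; a production system would additionally sign the claim or use a zero-knowledge proof so platforms can verify it without trusting the device.

```python
# Minimal sketch of the privacy-preserving idea behind a tool like Kids
# Wallet: the verifier learns a yes/no eligibility claim, never the
# birthdate itself. Illustrative assumptions throughout.
from datetime import date

def issue_age_claim(birthdate: date, threshold_years: int = 15) -> dict:
    """Trusted issuer (e.g. a government wallet app) computes eligibility
    locally and releases only the boolean result."""
    today = date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))
    return {"over_threshold": age >= threshold_years}  # no birthdate leaves the wallet

# A platform checking the claim sees only True/False:
print(issue_age_claim(date(2014, 6, 1)))  # {'over_threshold': False} until mid-2029
```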

Alongside these measures, AI is already being deployed in road safety enforcement. Smart cameras are being used to issue digital fines through government platforms, with a nationwide rollout planned to expand monitoring and improve compliance.

These measures form part of a wider effort to digitise public administration, reduce inefficiencies, and strengthen accountability. By embedding technology more deeply into everyday governance, Greece is trying to reshape how citizens interact with the state while also addressing long-standing systemic problems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!