Mongolia anti-corruption authority adopts AI system for decision monitoring

The Independent Authority Against Corruption (IAAC) in Mongolia has started using the tuss.io platform to monitor orders, decisions, rules and regulations adopted by state organisations and officials.

The platform, developed by the company Tus Solution, checks whether decisions, orders, rules or regulations meet legal requirements, introduce unnecessary procedural steps, or create conflicts of interest or contradictions with the law.

According to the IAAC, a total of 388 orders, decisions, rules and regulations have been monitored, of which 152 have been revised, amended or invalidated over the past three years.

Why does it matter?

The initiative reflects Mongolia's broader efforts to strengthen transparency and accountability in public administration through digital tools. By integrating AI-powered analysis and compliance monitoring, platforms like tuss.io can more efficiently identify regulatory inconsistencies and support evidence-based decision-making, reducing opportunities for corruption and improving the overall quality of governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK Digital Inclusion Action Plan delivers devices, funding and online access support

The UK Department for Science, Innovation and Technology said more than one million people have been helped online through its Digital Inclusion Action Plan. The update was published in a one-year progress report on the government strategy.

The department said over 22,000 devices were donated through government schemes and industry partnerships. It also confirmed £11.9 million in funding that supported more than 80 local digital inclusion programmes.

According to the report, the plan aims to improve access to devices, connectivity and digital skills. The government said all commitments in the strategy have either been delivered or remain on track.

The department added that partnerships with industry and charities helped expand access to broadband and mobile services, including more affordable connectivity. The programme also supported training and local initiatives to improve digital participation.

Secretary of State for Science, Innovation and Technology, Liz Kendall, said the programme is intended to expand access to online services, employment opportunities and communication tools. She added that the government plans to continue developing the initiative.

The department also confirmed it will take over the Essential Digital Skills Framework from Lloyds Banking Group and update it to reflect current needs, including online safety and the growing role of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPB summarises conference on cross-regulatory cooperation in the EU

The European Data Protection Board has published a summary of its 17 March conference in Brussels on cross-regulatory interplay and cooperation in the EU from a data protection perspective. According to the EDPB, the event brought together representatives of the EU institutions, European Data Protection Authorities, academia, and industry.

Three panels structured the conference discussion. One focused on data protection and competition, another on the Digital Markets Act and the General Data Protection Regulation (GDPR), and a third on the Digital Services Act and the GDPR.

Discussion in the first panel centred on cooperation between regulatory bodies in data protection and competition, including lessons from the aftermath of the Bundeskartellamt ruling. The EDPB said speakers emphasised the need for regulators to align their approaches and recognise synergies between the two fields. Speakers also said data protection should be considered in competition analysis only when relevant and on a case-by-case basis. The EDPB added that it had recently agreed with the European Commission to develop joint guidelines on the interplay between competition law and data protection.

The second panel focused on joint guidelines on the Digital Markets Act and the GDPR, developed by the European Commission and the EDPB and recently opened to public consultation. According to the EDPB, speakers described the guidelines as an example of regulatory cooperation aimed at developing a coherent and compatible interpretation of the two frameworks while respecting regulatory competences. The Board said participants linked the guidelines to stronger consistency, legal clarity, and easier compliance. Some speakers also suggested changes to the final version, including points related to proportionality and the relationship between DMA obligations and the GDPR.

The final panel examined the interaction between the Digital Services Act and the GDPR. The EDPB said panellists referred to the protection of minors as one example, arguing that age verification should be effective while remaining fully in line with data protection legislation. Speakers also highlighted the need for coordination between the two frameworks, including cooperation involving the EU institutions such as the European Board for Digital Services, the European Commission, the EDPB, and national authorities. Emerging technologies such as AI were also mentioned in the discussion.

The event also featured keynote speeches from European Commission Executive Vice President Henna Virkkunen and European Parliament LIBE Committee Chair Javier Zarzalejos. According to the EDPB, Virkkunen said the Commission remained committed to cooperation between different frameworks and highlighted the need to support compliance through stronger coordination among regulators. Zarzalejos said close cross-regulatory cooperation was essential for consistency, effective enforcement, and trust, and pointed to the intersections among data protection law, competition law, the DMA, and the DSA.

EDPB Chair Anu Talus closed the conference by reiterating that the EDPB and European Data Protection Authorities are committed to supporting stakeholders in navigating what the Board described as a new cross-regulatory landscape. The EDPB said future work will include continued cooperation with the Commission on joint guidelines on the interplay between the AI Act and the GDPR, finalisation of the joint guidelines on the interplay between the DMA and the GDPR, and work on the recently announced Joint Guidelines on the interplay between data protection and competition law.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Brain-inspired chip could cut AI energy use by up to 70%

Researchers at the University of Cambridge have developed a nanoelectronic device to reduce energy consumption in AI hardware. The team, led by Dr Babak Bakhit, designed the system to mimic how the human brain processes information.

The device uses a new form of hafnium oxide to create a stable, low-energy memristor. It processes and stores data in the same location, similar to how neurons function in the brain.

To achieve this, the researchers added strontium and titanium to form internal electronic junctions. This allows the device to change resistance smoothly without relying on unstable conductive filaments.

Tests showed the device operates with switching currents up to a million times lower than some conventional technologies. It also demonstrated stable multi-level states required for advanced in-memory computing.

The team said the approach could reduce AI hardware energy use by up to 70%. The findings were published in the journal Science Advances.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI added to St Helens council strategic risk register

In the UK, St Helens Council has added AI and digital disruption to its strategic risk register as it seeks to strengthen governance and oversight. The change reflects growing concern about how emerging technologies could affect operations and services.

The updated register, now featuring 12 strategic risks, was presented ahead of the audit and governance committee meeting. Officials said effective risk management is vital to meeting the council’s objectives and mitigating potential challenges.

AI and digital disruption were cited for the first time alongside risks linked to extreme weather and community cohesion. The council noted that ethical, data privacy and workforce confidence issues are among the challenges associated with integrating AI into public services.

Leaders said other risks, including cybersecurity threats and budget pressures, remain under review. The move comes as local authorities across the UK weigh the impacts of new technologies on service delivery and strategic planning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Telefónica Tech moves to combine AI and quantum computing

Telefónica Tech has partnered with three European firms to bring AI and quantum computing closer together. The collaboration aims to improve how advanced models are developed and deployed across different environments.

The initiative brings together Qilimanjaro Quantum Tech, Multiverse Computing and Qcentroid. Their combined expertise is expected to support more efficient, compact and locally deployable AI systems.

Quantum computing is seen as a way to reduce the heavy processing demands of large AI models. Faster computation could yield more accurate results while reducing the time required to solve complex problems.

Each partner contributes specialised capabilities, from quantum hardware and algorithms to software platforms and orchestration tools. These technologies could support applications such as simulations, edge AI and rapid prototyping.

Telefónica Tech is also strengthening its role in integrating AI and quantum solutions for enterprise clients. The move reflects a broader push to build scalable, sovereign and next-generation digital infrastructure in Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated songs used in $10 million streaming fraud

A large-scale fraud scheme using AI-generated music has exposed vulnerabilities in streaming platforms and royalty systems. Billions of fake streams were used to divert payments away from legitimate artists and rights holders.

The scheme ran from 2017 to 2024 and involved uploading hundreds of thousands of AI-generated tracks. Automated programs were then used to stream the songs at scale, inflating play counts and generating revenue.

The operation relied on thousands of bot accounts, bulk email registrations and cloud-based systems. Streaming activity was spread across many tracks to reduce detection and maintain consistent earnings over time.
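The logic of spreading activity across a large catalogue can be sketched with a back-of-envelope calculation. All figures below are illustrative assumptions chosen to match the reported orders of magnitude ("billions" of streams, "hundreds of thousands" of tracks, a roughly seven-year run), not numbers from the case.

```python
# Back-of-envelope sketch of "stream spreading": distributing a large
# volume of bot streams across many tracks keeps each track's play
# count low enough to look unremarkable. All figures are illustrative
# assumptions, not numbers from the case.
total_streams = 2_000_000_000  # "billions" of fake streams (assumed)
num_tracks = 200_000           # "hundreds of thousands" of tracks (assumed)
days = 7 * 365                 # scheme ran roughly 2017 to 2024

per_track = total_streams // num_tracks  # plays accumulated per track
per_track_per_day = per_track / days     # average daily plays per track

print(per_track)                     # 10000 plays per track overall
print(round(per_track_per_day, 1))   # 3.9 plays per track per day
```

Spread this thinly, each individual track's daily play count stays far below anything an anomaly detector would flag, which is why the distribution strategy sustained the scheme for years.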

Michael Smith, a 54-year-old from North Carolina, has pleaded guilty to conspiracy to commit wire fraud in federal court. Prosecutors say he obtained more than $10 million and agreed to forfeit over $8 million in proceeds.

Authorities say the case highlights how AI and automation can be used to manipulate digital platforms. The court will determine the final sentence as concerns grow over similar schemes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Inspired Education introduces AI-driven learning for primary schools

Inspired Education has unveiled a new AI-enabled primary teaching model designed to modernise traditional learning systems. The programme aims to better align education with how children learn in a digital and fast-changing environment.

The model combines core academic subjects in the morning with applied learning in the afternoon. Students focus on life skills such as problem-solving, entrepreneurship and communication alongside standard curriculum content.

Learning is structured around mastery rather than age, allowing children to progress at their own pace. AI-powered tools are used to personalise lessons and support faster and more adaptive learning outcomes.

The first early-access programme will launch in Central London in January 2027. Further rollouts are planned across cities, including Lisbon, Milan, Madrid, Mexico City, São Paulo and Auckland.

Developers say the approach responds to growing demand from parents for AI-integrated education. The initiative reflects broader efforts to prepare students with digital, practical and future-ready skills.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mastercard expands AI strategy with new payments model

Mastercard has introduced a generative AI foundation model trained on billions of anonymised transactions. The model is designed as a backend system to power insights across payments and commerce services.

The company plans to extend AI use beyond fraud detection into cybersecurity, loyalty programmes and small-business tools. The model is being developed with support from Nvidia and Databricks technologies.

Earlier AI tools focused on fraud detection, significantly improving accuracy and reducing false positives. The new model marks a shift towards a broader infrastructure approach across multiple products.

This move aligns with Mastercard’s growing reliance on value-added services, which generated over $13 billion in revenue. These services include security, analytics and digital payment solutions beyond the core network.

Competitors such as Visa and PayPal are also expanding AI-driven commerce platforms. The race is intensifying as firms build integrated systems for payments, automation and intelligent services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Parents underestimate how teenagers use AI in daily life

Parents often believe they understand how their children use AI tools in daily life, but recent studies suggest a clear and growing disconnect. Teenagers are using AI more frequently and in more complex ways than most adults realise.

Research indicates that 64% of teens use AI, while only 51% of parents think their children do. A large share of families have never discussed AI, leaving teenagers to navigate its role without guidance.

Teenagers commonly use AI for schoolwork, research and entertainment as part of their routine activities. However, a notable number also rely on it for advice, conversation and even emotional support in personal situations.

Experts warn that this awareness gap can increase risks linked to misuse and emotional dependence on AI tools. Limited parental understanding means many overlook how strongly AI is influencing behaviour and decision-making.

Despite these concerns, many teenagers feel confident using AI and see it as a helpful tool. Specialists emphasise that open conversations are essential to ensure more responsible and balanced use at home.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!