EU moves to reinforce cooperation against VAT fraud

The European Commission has presented a plan to strengthen cooperation among the European Public Prosecutor’s Office, the European Anti-Fraud Office, and member states as part of a broader effort to combat VAT fraud.

The proposal establishes a legal framework for the sharing of information. It grants the EU bodies immediate access to VAT data, which is expected to enhance the detection of cross-border tax evasion schemes.

Real-time reporting of cross-border trade, delivered through the VAT in the Digital Age package, provides national authorities with the information needed to identify suspicious activity, rather than relying on delayed or incomplete records.

Carousel fraud alone costs EU taxpayers billions each year and remains a significant element of the broader VAT compliance gap, which stood at over €89 billion in 2022.

The Commission argues that faster access to VAT information will help investigators uncover fraudulent networks, halt their activities and pursue prosecutions more effectively.

EPPO, OLAF and the Eurofisc network would gain direct communication channels, enabling closer coordination and rapid intelligence sharing throughout the Union.

The proposal will now move to the Council for agreement, with the European Parliament and the Economic and Social Committee consulted.

Once adopted and published, the changes will take effect, beginning the implementation phase across the EU.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teenagers still face harmful content despite new protections

In the UK and other countries, teenagers continue to encounter harmful social media content, including posts about bullying, suicide and weapons, despite the Online Safety Act coming into effect in July.

A BBC investigation using test profiles revealed that some platforms continue to expose young users to concerning material, particularly on TikTok and YouTube.

The experiment, conducted with six fictional profiles of 13- to 15-year-olds, revealed differences in exposure between boys and girls.

While Instagram showed marked improvement, with no harmful content displayed during the latest test, TikTok users were repeatedly served posts about self-harm and abuse, and one YouTube profile encountered videos featuring weapons and animal harm.

Experts warned that changes will take time and urged parents to actively monitor their children’s online activity. They also recommended open conversations about content, the use of parental controls, and vigilance rather than relying solely on the new regulatory codes.


New funding round by Meta strengthens local STEAM education

Meta is inviting applications for its 2026 Data Centre Community Action Grants, which support schools, nonprofits and local groups in regions that host the company’s data centres.

The programme has been a core part of Meta’s community investment strategy since 2011, and the latest round expands support to seven additional areas linked to new facilities. The company views the grants as a means of strengthening long-term community vitality, rather than focusing solely on infrastructure growth.

Funding is aimed at projects that use technology for public benefit and improve opportunities in science, technology, engineering, arts and mathematics. More than $74 million has been awarded to communities worldwide, with $24 million distributed through the grant programme alone.

Recipients can reapply each year, which enables organisations to sustain programmes and increase their impact over time.

Several regions have already demonstrated how the funding can reshape local learning opportunities. Northern Illinois University used grants to expand engineering camps for younger students and to open a STEAM studio that supports after-school programmes and workforce development.

In New Mexico, a middle school used funding to build a STEM centre with advanced tools such as drones, coding kits and 3D printing equipment. In Texas, an enrichment organisation created a digital media and STEM camp for at-risk youth, offering skills that can encourage empowerment instead of disengagement.

Meta presents the programme as part of a broader pledge to deepen education and community involvement around emerging technologies.

The company argues that long-term support for digital learning will strengthen local resilience and create opportunities for young people who want to pursue future careers in technology.


Digital records gain official status in Uzbekistan

Uzbekistan has granted full legal validity to online personal data stored on the my.gov.uz Unified Interactive Public Services Portal, placing it on equal footing with traditional documents.

The measure, in force from 1 November, supports the country’s digital transformation by simplifying how citizens interact with state bodies.

Personal information can now be accessed, shared and managed entirely through the portal instead of relying on printed certificates.

State institutions are no longer permitted to request paper versions of records that are already available online, which is expected to reduce queues and alleviate the administrative burden faced by the public.

Officials in Uzbekistan anticipate that centralising personal data on one platform will save time and resources for both citizens and government agencies. The reform aims to streamline public services, remove redundant steps and improve overall efficiency across state procedures.

Government bodies have encouraged citizens to use the portal’s functions more actively and follow official channels for updates on new features and improvements.


China targets deepfake livestreams of public figures

Chinese cyberspace authorities announced a crackdown on AI deepfakes impersonating public figures in livestream shopping. Regulators said platforms have removed thousands of posts and sanctioned numerous accounts for misleading users.

Officials urged platforms to conduct cleanups and hold marketers accountable for deceptive promotions. Reported actions include removing over 8,700 items and dealing with more than 11,000 impersonation accounts.

Measures build on wider campaigns against AI misuse, including rules targeting deep synthesis and labelling obligations. Earlier efforts focused on curbing rumours, impersonation and harmful content across short videos and e-commerce.

Chinese authorities pledged a continued high-pressure stance to safeguard consumers and protect celebrity likenesses online. Platforms risk penalties if complaint handling and takedowns fail to deter repeat infringements in livestream commerce.


New guidelines by Apple curb how apps send user data to external AI systems

Apple has updated its App Review Guidelines to require developers to disclose and obtain permission before sharing personal data with third-party AI systems. The company says the change enhances user control as AI features become more prevalent across apps.

The revision arrives ahead of Apple’s planned 2026 release of an AI-enhanced Siri, expected to take actions across apps and rely partly on Google’s Gemini technology. Apple is also moving to ensure external developers do not pass personal data to AI providers without explicit consent.

Rule 5.1.2(i) already limited the sharing of personal information without permission. The update adds explicit language naming third-party AI as a category that requires disclosure, reflecting growing scrutiny of how apps use machine learning and generative models.

The shift could affect developers who use external AI systems for features such as personalisation or content generation. Enforcement details remain unclear, as the term ‘AI’ encompasses a broad range of technologies beyond large language models.

Apple released several other guideline updates alongside the AI change, including support for its new Mini Apps Programme and amendments involving creator tools, loan products, and regulated services such as crypto exchanges.


Firefox expands AI features with full user choice

Mozilla has outlined its vision for integrating AI into Firefox in a way that protects user choice instead of limiting it. The company argues that AI should be built like the open web, allowing people and developers to use tools on their own terms rather than being pushed into a single ecosystem.

Recent features such as the AI sidebar chatbot and Shake to Summarise on iOS reflect that approach.

The next step is an ‘AI Window’, a controlled space inside Firefox that lets users chat with an AI assistant while browsing. The feature is entirely optional, offers full control, and can be switched off at any time. Mozilla has opened a waitlist so users can test the feature early and help shape its development.

Mozilla believes browsers must adapt as AI becomes a more common interface to the web. The company argues that remaining independent allows it to prioritise transparency, accountability and user agency instead of the closed models promoted by competitors.

The goal is an assistant that enhances browsing and guides users outward to the wider internet rather than trapping them in isolated conversations.

Community involvement remains central to Mozilla’s work. The organisation is encouraging developers and users to contribute ideas and support open-source projects as it works to ensure Firefox stays fast, secure and private while embracing helpful forms of AI.


CERN unveils AI strategy to advance research and operations

CERN has approved a comprehensive AI strategy to guide its use across research, operations, and administration. The strategy unites initiatives under a coherent framework to promote responsible and impactful AI for science and operational excellence.

It focuses on four main goals: accelerating scientific discovery, improving productivity and reliability, attracting and developing talent, and enabling AI at scale through strategic partnerships with industry and member states.

Common tools and shared experiences across sectors will strengthen CERN’s community and ensure effective deployment.

Implementation will involve prioritised plans and collaboration with EU programmes, industry, and member states to build capacity, secure funding, and expand infrastructure. Applications of AI will support high-energy physics experiments, future accelerators, detectors, and data-driven decision-making.

AI is now central to CERN’s mission, transforming research methodologies and operations. From intelligent automation to scalable computational insight, the technology is no longer optional but a strategic imperative for the organisation.


Agentic AI drives a new identity security crisis

New research from Rubrik Zero Labs warns that agentic AI is reshaping the identity landscape faster than organisations can secure it.

The study reveals a surge in non-human identities created through automation and API-driven workflows, with numbers now exceeding human users by a striking margin.

Most firms have already introduced AI agents into their identity systems or plan to do so, yet many struggle to govern the growing volume of machine credentials.

Experts argue that identity has become the primary attack surface as remote work, cloud adoption and AI expansion remove traditional boundaries. Threat actors increasingly rely on valid credentials instead of technical exploits, which makes weaknesses in identity governance far more damaging.

Rubrik’s researchers and external analysts agree that a single compromised key or forgotten agent account can provide broad access to sensitive environments.

Industry specialists highlight that agentic AI disrupts established IAM practices by blurring distinctions between human and machine activity.

Organisations often cannot determine whether a human or an automated agent performed a critical action, which undermines incident investigations and weakens zero-trust strategies. Poor logging, weak lifecycle controls and abandoned machine identities further expand the attack surface.

Rubrik argues that identity resilience is becoming essential, since IAM tools alone cannot restore trust after a breach. Many firms have already switched IAM providers, reflecting widespread dissatisfaction with current safeguards.

Analysts recommend tighter control of agent creation, stronger credential governance and a clearer understanding of how AI-driven identities reshape operational and security risks.


Microsoft expands AI model Aurora to improve global weather forecasts

Extreme weather displaced over 800,000 people worldwide in 2024, highlighting the importance of accurate forecasts for saving lives, protecting infrastructure, and supporting economies. Farmers, coastal communities, and energy operators rely on timely forecasts to prepare and respond effectively.

Microsoft is reaffirming its commitment to Aurora, an AI model designed to help scientists better understand Earth systems. Trained on vast datasets, Aurora can predict weather, track hurricanes, monitor air quality, and model ocean waves and energy flows.

The platform will remain open-source, enabling researchers worldwide to innovate, collaborate, and apply it to new climate and weather challenges.

Through partnerships with Professor Rich Turner at the University of Cambridge and initiatives like SPARROW, Microsoft is expanding access to high-quality environmental data.

Community-deployable weather stations are improving data coverage and forecast reliability in underrepresented regions. Aurora’s open-source releases, including model weights and training pipelines, will let scientists and developers adapt and build upon the platform.

The AI model has applications beyond research, with energy companies, commodity traders, and national meteorological services exploring its use.

By supporting forecasting systems tailored to local environments, Aurora aims to improve resilience against extreme weather, optimise renewable energy, and drive innovation across multiple industries, from humanitarian aid to financial services.
