Austrian DPA finds Microsoft 365 Education violates GDPR

Microsoft has been found in violation of the EU’s General Data Protection Regulation (GDPR) over how its Microsoft 365 Education platform handles student data.

The Austrian Data Protection Authority (DSB) issued the ruling after a student, represented by privacy group noyb, was denied full access to their personal data. The complaint exposed a three-way responsibility gap between Microsoft, schools, and national education authorities.

During the COVID-19 pandemic, many schools adopted cloud-based tools like Microsoft 365 Education. However, Microsoft shifted responsibility for GDPR compliance onto schools and ministries, which often lack access to, or control over, student data processed by Microsoft.

In this case, Microsoft redirected the student’s data request to their school, which was unable to provide complete information.

The DSB found Microsoft in breach of the GDPR on multiple counts, including the unlawful use of tracking cookies without consent and the failure to give the student full access to their data, in violation of Article 15.

Microsoft was also ordered to clarify how it uses data for purposes like ‘business modelling’ and whether it shares data with third parties like LinkedIn, OpenAI, or adtech firm Xandr.

Microsoft’s claim that its EU entity in Ireland was responsible for the product was rejected. The DSB ruled that key decisions were made in the USA, making Microsoft Corp the main data controller.

The decision has broad implications, with millions of students and public-sector users relying on Microsoft 365. As Max Schrems of noyb warned, schools and other European institutions will remain unable to meet their legal obligations under the GDPR unless Microsoft makes structural changes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google cautions Australia on youth social media ban proposal

Google, the US tech giant that also owns YouTube, has reiterated its commitment to children’s online safety while cautioning against Australia’s proposed ban on social media use for those under 16.

Speaking before the Senate Environment and Communications References Committee, Google’s Public Policy Senior Manager Rachel Lord said the legislation, though well-intentioned, may be difficult to enforce and could have unintended effects.

Lord highlighted Google’s 23-year presence in Australia, which she said contributed over $53 billion to the economy in 2024, while YouTube’s creative ecosystem added $970 million to GDP and supported more than 16,000 jobs.

She said the company’s investments, including the $1 billion Digital Future Initiative, reflect its long-term commitment to Australia’s digital development and infrastructure.

According to Lord, YouTube already provides age-appropriate products and parental controls designed to help families manage their children’s experiences online.

Requiring children to access YouTube without accounts, she argued, would remove these protections and risk undermining safe access to educational and creative content used widely in classrooms, music, and sport.

She emphasised that YouTube functions primarily as a video streaming platform rather than a social media network, serving as a learning resource for millions of Australian children.

Lord called for legislation that strengthens safety mechanisms instead of restricting access, saying the focus should be on effective safeguards and parental empowerment rather than outright bans.

Netherlands safeguards economic security through Nexperia intervention

The Dutch Minister of Economic Affairs has invoked the Goods Availability Act in response to serious governance issues at semiconductor manufacturer Nexperia.

The measure, announced on 30 September 2025, seeks to ensure the continued availability of the company’s products in the event of an emergency. Nexperia, headquartered in Nijmegen, will be allowed to maintain its normal production activities.

The decision follows recent indications of significant management deficiencies and of actions within Nexperia that could jeopardise vital technological knowledge and capacity in the Netherlands and across Europe.

Authorities view these capabilities as essential for economic security, as Nexperia supplies chips for the automotive sector and consumer electronics industries.

Under the order, the Minister of Economic Affairs may block or reverse company decisions considered harmful to Nexperia’s long-term stability or to the preservation of Europe’s semiconductor value chain.

The Dutch government described the use of the Goods Availability Act as exceptional, citing the urgency and scale of the governance concerns.

Officials emphasised that the action applies only to Nexperia and does not target other companies, sectors, or countries. The decision may be contested through the courts.

OpenAI joins dialogue with the EU on fair and transparent AI development

The US AI company, OpenAI, has met with the European Commission to discuss competition in the rapidly expanding AI sector.

The meeting focused on how large technology firms such as Apple, Microsoft and Google shape access to digital markets through their operating systems, app stores and search engines.

During the discussion, OpenAI highlighted that such platforms significantly influence how users and developers engage with AI services.

The company encouraged regulators to ensure that innovation and consumer choice remain priorities as the industry grows, noting that collaboration between larger and smaller players can help maintain a balanced ecosystem.

The issue is complicated by the fact that OpenAI itself partners with several leading technology companies. Microsoft, a key investor, has integrated ChatGPT into Windows 11’s Copilot, while Apple recently added ChatGPT support to Siri as part of its Apple Intelligence features.

Therefore, OpenAI’s engagement with regulators is part of a broader dialogue about maintaining open and competitive markets while fostering cooperation across the industry.

Although the European Commission has not announced any new investigations, the meeting reflects ongoing efforts to understand how AI platforms interact within the broader digital economy.

OpenAI and other stakeholders are expected to continue contributing to discussions to ensure transparency, fairness and sustainable growth in the AI ecosystem.

Kazakhstan turns to AI to fight the shadow economy

Kazakhstan’s Prime Minister Olzhas Bektenov has directed the full implementation of AI across government agencies to meet President Kassym-Jomart Tokayev’s goal of reducing the shadow economy’s share of GDP to 15 percent by 2025.

At a government session, Bektenov said progress must go beyond reports and correspondence, calling for structural reforms in taxation, digitalisation, and business regulation. He urged ministries to pursue a ‘transparent economy’ through comprehensive AI and data integration initiatives.

The State Revenue Committee of Kazakhstan will lead the digital transformation, supported by a new Data Processing Centre established by the Ministry of Artificial Intelligence and Digital Development.

Bektenov stressed that digitalisation projects such as cashless payments and the digital tenge have already proven effective in curbing unrecorded transactions and improving financial oversight.

AI will also be deployed in customs risk profiling and cargo inspection analysis to detect fraud and reduce corruption.

The Ministries of Finance, Justice, Trade, and National Economy were instructed to integrate databases under the Smart Data Finance system and to finalise an automated risk management system for company registration by 25 November.

Deputy Prime Minister Serik Zhumangarin will oversee coordination.

The global struggle to regulate children’s social media use

Finding equilibrium in children’s use of social media

Social media has become a defining part of modern childhood. Platforms like Instagram, TikTok, Snapchat and YouTube offer connection, entertainment and information at an unprecedented scale.

Yet concerns have grown about their impact on children’s mental health, education, privacy and safety. Governments, parents and civil society increasingly debate whether children should access these spaces freely, with restrictions, or not at all.

The discussion is no longer abstract. Across the world, policymakers are moving beyond voluntary codes to legal requirements, some proposing age thresholds or even outright bans for minors.

Supporters argue that children face psychological harm and exploitation online, while critics caution that heavy restrictions can undermine rights, fail to solve root problems and create new risks.

The global conversation is now at a turning point, where choices about social media regulation will shape the next generation’s digital environment.

Why social media is both a lifeline and a threat for youth

The influence of social media on children is double-edged. On the one side, these platforms enable creativity, allow marginalised voices to be heard, and provide educational content. During the pandemic, digital networks offered a lifeline of social interaction when schools were closed.

Children and teens can build communities around shared interests, learn new skills, and sometimes even gain economic opportunities through digital platforms.

On the other side, research has linked heavy use of social media with increased rates of anxiety, depression, disrupted sleep and body image issues among young users. Recommendation algorithms often push sensational or harmful content, reinforcing vulnerabilities rather than mitigating them.

Cyberbullying, exposure to adult material, and risks of predatory contact are persistent challenges. Instead of strengthening resilience, platforms often prioritise engagement metrics that exploit children’s attention and emotional responses.

The scale of the issue is enormous. Millions of children around the world own smartphones before the age of 12. With digital life inseparable from daily routines, even well-meaning parents struggle to set boundaries.

Governments face pressure to intervene, but approaches vary widely, reflecting different cultural norms, levels of trust in technology firms, and political attitudes toward child protection.

The Australian approach

Australia is at the forefront of regulation. In recent years, the country has passed strong online safety laws, led by its eSafety Commissioner. These rules include mandatory age verification for certain online services and obligations for platforms to design products with child safety in mind.

Most notably, Australia has signalled its willingness to explore outright bans on general social media access for children under 16. The government has pointed to mounting evidence of harm, from cyberbullying to mental health concerns, and has emphasised the need for early intervention.

Instead of leaving responsibility entirely to parents, the state argues that platforms themselves must redesign the way they serve children.

Critics highlight several problems. Age verification requires identity checks, which can endanger privacy and create surveillance risks. Bans may also drive children to use less-regulated spaces or fake their ages, undermining the intended protections.

Others argue that focusing only on prohibition overlooks the need for broader digital literacy education. Yet Australia’s regulatory leadership has sparked a wider debate, prompting other countries to reconsider their own approaches.

Greece’s strong position

Last week, Greece reignited the global debate with its own strong position on restricting youth access to social media.

Speaking at the United Nations General Assembly during an event hosted by Australia on digital child safety, PM Kyriakos Mitsotakis said his government was prepared to consider banning social media for children under 16.

Mitsotakis warned that societies are conducting the ‘largest uncontrolled experiment on children’s minds’ by allowing unrestricted access to social media platforms. He cautioned that while the long-term effects of the experiment remain uncertain, they are unlikely to be positive.

Additionally, the prime minister pointed to domestic initiatives already underway, such as the ban on mobile phones in schools, which he claimed has already transformed the educational experience.

Mitsotakis acknowledged the difficulties of enforcing such regulations but insisted that complexity cannot be an excuse for inaction.

Across the world, similar conversations are gaining traction. Let’s review some of them.

National initiatives across the globe

UK

The UK introduced its Online Safety Act in 2023, one of the most comprehensive frameworks for regulating online platforms. Under the law, companies must assess risks to children and demonstrate how they mitigate harms.

Age assurance is required for certain services, including those hosting pornography or content promoting suicide or self-harm. While not an outright ban, the framework places a heavy responsibility on platforms to restrict harmful material and tailor their products to younger users.

EU

The EU has not introduced a specific social media ban, but its Digital Services Act requires major platforms to conduct systemic risk assessments, including risks to minors.

However, the European Commission has signalled that it may support stricter measures on youth access to social media, keeping the option of a bloc-wide ban under review.

Commission President Ursula von der Leyen has recently endorsed the idea of a ‘digital majority age’ and pledged to gather experts by year’s end to consider possible actions.

The Commission has pointed to the Digital Services Act as a strong baseline but argued that evolving risks demand continued vigilance.

Companies must show regulators how algorithms affect young people and must offer transparency about their moderation practices.

In parallel, several EU states are piloting age verification measures for access to certain platforms. France, for example, has debated requiring parental consent for children under 15 to use social media.

USA

The USA lacks a single nationwide law, but several states are acting independently, although such laws have faced First Amendment challenges that have reached the Supreme Court.

Florida, Texas, Utah, and Arkansas have passed laws requiring parental consent for minors to access social media, while others are considering restrictions.

The federal government has debated child online safety legislation, although political divides have slowed progress. Instead of a ban, American initiatives often blend parental rights, consumer protection, and platform accountability.

Canada

The Canadian government has introduced Bill C-63, the Online Harms Act, aiming to strengthen online child protection and limit the spread of harmful content.

Justice Minister Arif Virani said the legislation would ensure platforms take greater responsibility for reducing risks and preventing the amplification of content that incites hate, violence, or self-harm.

The framework would apply to platforms, including livestreaming and adult content services.

They would be obliged to remove material that sexually exploits children or shares intimate content without consent, while also adopting safety measures to limit exposure to harmful content such as bullying, terrorism, and extremist propaganda.

However, the legislation also does not impose a complete social media ban for minors.

China

China’s cyberspace regulator has proposed restrictions on children’s smartphone use. The draft rules limit use to a maximum of two hours daily for those under 18, with stricter limits for younger age groups.

The Cyberspace Administration of China (CAC) said devices should include ‘minor mode’ programmes, blocking internet access for children between 10 p.m. and 6 a.m.

Teenagers aged 16 to 18 would be allowed two hours a day, those between eight and 16 one hour, and children under eight 40 minutes.

Parents could, however, opt out of the restrictions if they wish.

India

In January, India proposed new rules to tighten controls on children’s access to social media, sparking a debate over parental empowerment and privacy risks.

The draft rules would require parental consent before minors could create accounts on social media, e-commerce, or gaming platforms.

Verification would rely on identity documents or age data already held by providers.

Supporters argue the measures will give parents greater oversight and protect children from risks such as cyberbullying, harmful content, and online exploitation.

Singapore

PM Lawrence Wong has warned of the risks of excessive screen time while stressing that children must also be empowered to use technology responsibly. The ultimate goal is the right balance between safety and digital literacy.

In addition, researchers suggest schools should not ban devices out of fear but teach children how to manage them, likening digital literacy to learning how to swim safely. Such a strategy highlights that no single solution fits all societies.

Balancing rights and risks

Bans and restrictions raise fundamental rights issues. Children have the right to access information, to express themselves, and to participate in culture and society.

Overly strict bans can exclude them from opportunities that their peers elsewhere enjoy. Critics argue that bans may create inequalities between children whose families find workarounds and those who comply.

At the same time, the rights to health, safety and privacy must also be protected. The difficulty lies in striking a balance. Advocates of stronger regulation argue that platforms have failed to self-regulate effectively, and that states must step in.

Opponents argue that bans may create unintended harms and encourage authoritarian tendencies, with governments using child safety as a pretext for broader control of online spaces.

Instead of choosing one path, some propose hybrid approaches: stronger rules for design and data collection, combined with investment in education and digital resilience. Such approaches aim to prepare children to navigate online risks while making platforms less exploitative.

The future of social media and child protection

Looking forward, the global landscape is unlikely to converge on a single model. Some countries will favour bans and strict controls, others will emphasise parental empowerment, and still others will prioritise platform accountability.

What is clear is that the status quo is no longer acceptable to policymakers or to many parents.

Technological solutions will also evolve. Advances in privacy-preserving age verification may ease some concerns, although sceptics warn that surveillance risks will remain. At the same time, platforms may voluntarily redesign products for younger audiences, either to comply with regulations or to preserve trust.

Ultimately, the challenge is not whether to regulate, but how. Instead of focusing solely on prohibition, governments and societies may need to build layered protections: legal safeguards, technological checks, educational investments and cultural change.

If these can align, children may inherit a safer digital world that still allows them to learn, connect and create. If they cannot, the risks of exclusion or exploitation will remain unresolved.

In conclusion, the debate over banning or restricting social media for children reflects broader tensions between freedom, safety, privacy, and responsibility. Around the globe, governments are experimenting with different balances of control and empowerment.

Australia, as we have already shown, represents one of the boldest approaches, while others, from the UK and Greece to China and Singapore, are testing different variations.

What unites them is the recognition that children cannot simply be left alone in a digital ecosystem designed for profit rather than protection.

The next decade will determine whether societies can craft a sustainable balance, where technology serves the needs of the young instead of exploiting them.

In the end, achieving that balance is our duty as human beings and responsible citizens.

Denmark moves to ban social media for under-15s amid child safety concerns

Joining the broader trend, Denmark plans to ban children under 15 from using social media, as Prime Minister Mette Frederiksen announced during her address to parliament on Tuesday.

Describing platforms as having ‘stolen our children’s childhood’, she said the government must act to protect young people from the growing harms of digital dependency.

Frederiksen urged lawmakers to ‘tighten the law’ to ensure greater child safety online, adding that parents could still grant consent for children aged 13 and above to have social media accounts.

Although the proposal is not yet part of the government’s legislative agenda, it builds on a 2024 citizen initiative that called for banning platforms such as TikTok, Snapchat and Instagram.

The prime minister’s comments reflect Denmark’s broader push within the EU to require age verification systems for online platforms.

Her statement follows a broader debate across Europe over children’s digital well-being and the responsibilities of tech companies in verifying user age and safeguarding minors.

Anthropic’s Claude to power Deloitte’s new enterprise AI expansion

Deloitte entered a new enterprise AI partnership with Anthropic shortly after refunding the Australian government for a report that included inaccurate AI-generated information.

The A$439,000 (US$290,618) contract was intended for an independent review but contained fabricated citations to non-existent academic sources. Deloitte has since repaid the final instalment, and the government of Australia has released a corrected version of the report.

Despite the controversy, Deloitte is expanding its use of AI by integrating Anthropic’s Claude chatbot across its global workforce of nearly half a million employees.

The collaboration will focus on developing AI-driven tools for compliance, automation and data analysis, especially in highly regulated industries such as finance and healthcare.

The companies also plan to design AI agent personas tailored to Deloitte’s various departments to enhance productivity and decision-making. Financial terms of the agreement were not disclosed.

India’s competition watchdog urges AI self-audits to prevent market distortions

The Competition Commission of India (CCI) has urged companies to self-audit their AI systems to prevent anti-competitive practices and ensure responsible autonomy.

The call came as part of the CCI’s market study on AI, which emphasised the risks of opacity and algorithmic collusion while highlighting AI’s potential to enhance innovation and productivity.

The study warned that dominant firms could exploit their control over data, infrastructure, and proprietary models to reinforce market power, creating barriers to entry. It also noted that opaque AI systems in user sectors may lead to tacit algorithmic coordination in pricing and strategy, undermining fair competition.

The regulatory approach of India, the CCI said, aims to balance technological progress with accountability through a co-regulatory framework that promotes both competition and innovation.

Additionally, the Commission plans to strengthen its technical capacity, establish a digital markets think tank and host a conference on AI and regulatory challenges.

The report recommended a six-step self-audit framework for enterprises, requiring evaluation of AI systems against competition risks, senior management oversight and clear accountability in high-risk deployments.

It also highlighted AI’s pro-competitive effects, particularly for MSMEs, which benefit from improved efficiency and greater access to digital markets.

A new AI strategy by the EU to cut reliance on the US and China

The EU is preparing to unveil a new strategy to reduce reliance on American and Chinese technology by accelerating the growth of homegrown AI.

The ‘Apply AI strategy’, set to be presented by EU tech chief Henna Virkkunen, positions AI as a strategic asset essential for the bloc’s competitiveness, security and resilience.

According to draft documents, the plan will prioritise adopting European-made AI tools across healthcare, defence and manufacturing.

Public administrations are expected to play a central role by integrating open-source EU AI systems, providing a market for local start-ups and reducing dependence on foreign platforms. The Commission has pledged €1bn from existing financing programmes to support the initiative.

Brussels has warned that foreign control of the ‘AI stack’ (the hardware and software that underpin advanced systems) could be ‘weaponised’ by state and non-state actors.

These concerns have intensified following Europe’s continued dependence on American tech infrastructure. Meanwhile, China’s rapid progress in AI has further raised fears that the Union risks losing influence in shaping the technology’s future.

The EU is already home to several high-potential AI firms, including France’s Mistral and Germany’s Helsing. However, they rely heavily on overseas suppliers for software, hardware, and critical minerals.

The Commission wants to accelerate the deployment of European AI-enabled defence tools, such as command-and-control systems, which remain dependent on NATO and US providers. The strategy also outlines investment in sovereign frontier models for areas like space defence.

President Ursula von der Leyen said the bloc aims to ‘speed up AI adoption across the board’ to ensure it does not miss the transformative wave.

Brussels hopes to carve out a more substantial global role in the next phase of technological competition by reframing AI as an industrial sovereignty and security instrument.
