
9–16 January 2026
HIGHLIGHT OF THE WEEK
The Grok shock: How AI deepfakes triggered reactions worldwide
Over the past week, a regulatory firestorm engulfed Grok, the AI tool built into Elon Musk's X platform, as reports surfaced that it was being used to produce non-consensual sexualised deepfake images, including depictions of individuals undressed or placed in compromising scenarios.
The backlash was swift and severe. The UK's Ofcom launched an investigation under the Online Safety Act to determine whether X has complied with its duties to protect people in the UK from content that is illegal in the country. UK Prime Minister Keir Starmer condemned the 'disgusting' outputs. The EU declared that such content, especially where it involved children, had 'no place in Europe'. Southeast Asia acted decisively: Malaysia and Indonesia blocked Grok entirely, citing obscene image generation, and the Philippines swiftly followed suit on child-protection grounds.
Under pressure, X announced tightened controls on Grok’s image-editing capabilities. The platform said it had introduced technological safeguards to block the generation and editing of sexualised images of real people in jurisdictions where such content is illegal.
However, regulatory authorities signalled that this step, while positive, would not halt oversight.
In the UK, Ofcom emphasised that its formal investigation into X's handling of Grok and the emergence of deepfake imagery will continue, even as it welcomes the platform's policy changes. The regulator said it remains committed to understanding how the platform facilitated the proliferation of such content and to ensuring that corrective measures are implemented.
Canada’s Privacy Commissioner widened an existing investigation into X Corp. and opened a parallel probe into xAI to assess whether the companies obtained valid consent for the collection, use, and disclosure of personal information to create AI-generated deepfakes, including sexually explicit content.
The red lines. The reaction was so immediate and widespread precisely because it struck two near-universal nerves: the profound violation of privacy through non-consensual sexual imagery, a moral line nearly everyone agrees must not be crossed, combined with the unique perils of AI, a trigger for acute governmental sensitivity.

IN OTHER NEWS THIS WEEK
This week in AI governance
Spain. Spain’s cabinet has approved draft legislation aimed at curbing AI-generated deepfakes and tightening consent rules on the use of images and voices. The bill sets 16 as the minimum age for consenting to image use and prohibits the reuse of online images or AI-generated likenesses without explicit permission — including for commercial purposes — while allowing clear, labelled satire or creative works involving public figures. The reform reinforces child protection measures and mirrors broader EU plans to criminalise non-consensual sexual deepfakes by 2027. Prosecutors are also examining whether certain AI-generated content could qualify as child pornography under Spanish law.
Malta. The Maltese government is preparing tougher legal measures to tackle abuses of deepfake technology. Current legislation is under review, with proposals to introduce penalties for the misuse of AI in harassment, blackmail, and bullying cases, building on existing cyberbullying and cyberstalking laws by extending similar protections to harms stemming from AI-generated content. Officials emphasise that while AI adoption is a national priority, robust safeguards against abusive use are essential to protect individuals and digital rights.
Morocco. Morocco is preparing to unveil ‘Maroc IA 2030’, a national AI roadmap designed to structure the country’s AI ecosystem and strengthen digital transformation. The plan aims to add an estimated $10 billion to GDP by 2030, create tens of thousands of AI-related jobs, and integrate AI across industry and government, including modernising public services and strengthening technological autonomy. Central to the strategy is the launch of the JAZARI ROOT Institute, the core hub of a planned network of AI centres of excellence that will bridge research, regional innovation, and practical deployment; additional initiatives include sovereign data infrastructure and partnerships with global AI firms. Authorities also emphasise building national skills and trust in AI, with governance structures and legislative proposals expected to accompany implementation.
Taiwan. Taiwan’s government has set an ambitious goal to train 500,000 AI professionals by 2040 as part of its long-term AI development strategy, backed by a NT$100 billion (approximately US$3.2 billion) venture fund and a national computing centre initiative. President Lai Ching-te announced the target at a 2026 AI Talent Forum in Taipei, highlighting the need for broad AI literacy across disciplines to sustain national competitiveness, support innovation ecosystems, and accelerate digital transformation in small and medium-sized enterprises. The government is introducing training programmes for students and public servants and emphasising cooperation between industry, academia, and government to develop a versatile AI talent pipeline.
The EU and the USA. The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have released ten principles for good AI practice in the medicines lifecycle. The guidelines provide broad direction for AI use in research, clinical trials, manufacturing, and safety monitoring. The principles are relevant to pharmaceutical developers and to marketing authorisation applicants and holders, and will form the basis for future AI guidance in their respective jurisdictions.
Internet access under pressure in Iran and Uganda
As anti-government protests deepened across Iran in early January 2026, authorities imposed a near-total internet shutdown, bringing nationwide communications to an almost complete standstill. Amid these conditions, some Iranians attempted to bypass government controls by using Elon Musk's Starlink satellite internet service, which remained partially accessible despite Tehran's efforts to ban and disrupt it. The latest reports suggest that security forces in parts of Tehran have started door-to-door operations to remove satellite dishes.
Separately, Ugandan authorities ordered restrictions on internet access ahead of the country's presidential election on 15 January 2026. The Uganda Communications Commission directed telecom providers to suspend public internet access on the eve of the vote, citing concerns about misinformation, electoral fraud, and incitement to violence. Critics, including civil liberties groups and opposition figures, argued that the blackout was part of a broader pattern of repression.
Zooming out. In both contexts — Tehran and Kampala — the suspension of internet access illustrates how control over information flows is a potent instrument in high-stakes political contests.
Worldwide focus on child safety online continues
The momentum behind policies to restrict children’s access to social media has carried from 2025 into early 2026. In Australia, the first country to enact such a ban, social media companies reported having deactivated about 4.7 million accounts believed to belong to users under 16 within the first month of enforcement.
In France, policymakers are debating proposals that would restrict social media access for children under 15. The country’s health watchdog has highlighted research pointing to a range of documented negative effects of social media use on adolescent mental health, noting that online platforms amplify harmful pressures, cyberbullying and unrealistic beauty standards.
In the UK, the Prime Minister has signalled that he is open to age‑based restrictions similar to Australia’s approach, as well as proposals to limit screen time or the design features of platforms used by children. Support for stricter regulation has emerged across party lines, and the issue is being debated within Parliament.
The future of bans. The number of countries eyeing a ban is climbing, and the list is far from final. The world is watching Australia: its success or struggle will determine who follows next.
Chips and geopolitics
The global semiconductor industry entered 2026 amid developments that originated in late 2025.
On 14 January 2026, President Trump signed a presidential proclamation imposing a 25% tariff on certain advanced computing and AI-oriented chips, including high-end products such as Nvidia's H200 and AMD's MI325X, following a national security review.
Officials described the measure as a ‘phase one’ step aimed at strengthening domestic production and reducing dependence on foreign manufacturers, particularly those in Taiwan, while also capturing revenue from imports that do not contribute to US manufacturing capacity. The administration suggested that further actions could follow depending on how negotiations with trading partners and the industry evolve.
Just a day later, the USA and Taiwan announced a landmark semiconductor-focused trade agreement. Under the deal, tariffs on a broad range of Taiwanese exports will be reduced or eliminated, while Taiwanese semiconductor companies, including leading firms like TSMC, have committed to investing at least $250 billion in US chip manufacturing, AI, and energy projects, supported by an additional $250 billion in government-backed credit.
The protracted legal and political dispute over semiconductor manufacturer Nexperia, a Netherlands-based firm owned by China's Wingtech Technology, also continues. The dispute erupted in autumn 2025, when Dutch authorities briefly seized control of Nexperia, citing national security and concerns about potential technology transfers to China. Nexperia's European management and Wingtech representatives are now squaring off in an Amsterdam court, which is deciding whether to launch a formal investigation into alleged mismanagement. The court is expected to rule within four weeks.
On the horizon. As countries jockey for control over critical semiconductors, alliances and rivalries are clashing, and 2026 promises even more high-stakes moves.
Western cyber agencies issue guidance on cyber risks to industrial sectors
A group of international cybersecurity agencies has released new technical guidance addressing the security of operational technology (OT) used in industrial and critical infrastructure environments.
The guidance, developed under the leadership of the UK's National Cyber Security Centre (NCSC), provides recommendations for securely connecting industrial control systems, sensors, and other operational equipment that support essential services.
According to the co-authoring agencies, industrial environments are being targeted by a range of actors, including cybercriminal groups and state-linked actors. The guidance references a joint advisory issued in June 2023 on China-linked cyber activity, as well as a more recent advisory from the US Cybersecurity and Infrastructure Security Agency (CISA) that notes opportunistic activity by pro-Russia hacktivist groups affecting critical infrastructure globally.
LOOKING AHEAD

World Economic Forum Annual Meeting 2026
The World Economic Forum Annual Meeting 2026 will take place 19–23 January in Davos‑Klosters, Switzerland. Bringing together leaders from government, business, civil society, academia, and culture, the meeting provides a platform to discuss global economic, technological, and societal challenges. A central theme will be the technological transformation—from AI and quantum computing to next-generation biotech and energy systems—reshaping economies, work, and growth.
Our team will be reporting from the event, covering key discussions and insights on developments shaping the global agenda. Be sure to bookmark the dedicated page.
READING CORNER
On 7 January, the USA withdrew from a slate of international organisations and initiatives. Despite the wider retrenchment, the technology and digital governance ecosystem was largely spared, as most major tech-relevant bodies remained on the ‘white list.’ The bigger uncertainty lies with the US decision to step back from UNCTAD and UN DESA as this could still create knock-on effects for digital initiatives linked to these organisations, Dr Jovan Kurbalija writes.
In 2026, Switzerland will have to navigate a critical and highly uncertain AI transformation, Dr Jovan Kurbalija argues. With so much at stake and future AI trajectories unclear, the nation must build its resilience on a distinctly Swiss AI Trinity: Zurich’s entrepreneurship, Geneva’s governance, and communal subsidiarity, all anchored in the enduring values and practices outlined here.
In her new article, Dr Anita Lamprecht examines how sci-fi narratives have been inverted in contemporary AI discourse, increasingly positioning technology beyond regulation and human governance. She introduces the concept of the ‘science fiction native’ (sci-fi native) to describe how immersion in speculative imaginaries over several generations is influencing legal and governance assumptions about control, responsibility, and social contracts.