Porn site fined £1m for ignoring UK child safety age checks

A UK pornographic website has been fined £1m by Ofcom for failing to comply with mandatory age verification under the Online Safety Act. The company, AVS Group Ltd, did not respond to repeated contact from the regulator, prompting an additional £50,000 penalty.

The Act requires websites hosting adult content to implement ‘highly effective age assurance’ to prevent children from accessing explicit material. Ofcom has ordered the company to comply within 72 hours or face further daily fines.

Other tech platforms are also under scrutiny, with one unnamed major social media company undergoing compliance checks. Regulators warn that non-compliance will result in formal action, highlighting the growing enforcement of child safety online.

Critics argue the law must be tougher to deliver real protection, particularly for minors and women online. While age checks have reduced UK traffic to some sites, loopholes such as VPNs remain a concern, and regulators are pushing for stricter compliance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google launches Workspace Studio for AI-powered automation

Google has made Workspace Studio generally available, allowing employees to design, manage, and share AI agents directly within Workspace. Powered by Gemini 3, these agents automate tasks ranging from simple routines to complex business workflows, all without coding.

The platform aims to save time on repetitive work, freeing employees to focus on higher-value activities.

Agents can understand context, reason through problems, and integrate with core Workspace apps such as Gmail, Drive, and Chat, as well as enterprise platforms like Asana, Jira, Mailchimp, and Salesforce.

Early adopters, including cleaning solutions leader Kärcher, have used Workspace Studio to streamline workflows, cutting planning time by up to 90% and condensing what were once multiple separate tasks into a single minute of work.

Workspace Studio allows users to build agents from templates or natural language prompts, making automation accessible to non-specialists. Agents can manage status reports, reminders, email triage, and critical tasks such as legal notices or travel requests.

Teams can also easily share agents, ensuring collaboration and consistency across workflows.

The rollout to business customers will continue over the coming weeks. Users can start creating agents immediately, explore templates, use prompts for automations, and join the Gemini Alpha program to test early features and controls.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Campaigning in the age of generative AI

Generative AI is rapidly altering the political campaign landscape, argues a recent ORF article, which outlines how election teams worldwide are adopting AI tools for persuasion, outreach and content creation.

Campaigns can now generate customised messages for different voter groups, produce multilingual content at scale, and automate much of the traditional grunt work of campaigning.

Proponents say the technology makes campaigning more efficient and accessible, particularly in multilingual or resource-constrained settings. But the ease and speed with which content can be generated also lower the barrier to misuse: AI-driven deepfakes, synthetic voices and disinformation campaigns can be deployed to mislead voters or distort public discourse.

Recent research supports these worries. For example, large-scale studies published in Science and Nature have demonstrated that AI chatbots can influence voter opinions, swaying a non-trivial share of undecided voters toward a target candidate simply by presenting persuasive content.

Meanwhile, independent analyses show that during the 2024 US election campaign, a noticeable fraction of content on social media was AI-generated, sometimes used to spread misleading narratives or exaggerate support for certain candidates.

For democracy and governance, the shift poses thorny challenges. AI-driven campaigns risk eroding public trust, exacerbating polarisation and undermining electoral legitimacy. Regulators and policymakers now face pressure to devise new safeguards, such as transparency requirements around AI usage in political advertising, stronger fact-checking, and clearer accountability for misuse.

The ORF article argues these debates should start now, before AI becomes so entrenched that rollback is impossible.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy concerns lead India to withdraw cyber safety app mandate

India has scrapped its order requiring smartphone manufacturers to pre-install the state-run Sanchar Saathi cyber safety app. The directive, which faced widespread criticism, had raised concerns over privacy and potential government surveillance.

Smartphone makers, including Apple and Samsung, reportedly resisted the order, noting that it was issued without prior consultation and conflicted with established user privacy norms. The government argued the app was necessary to verify handset authenticity.

So far, the Sanchar Saathi app has attracted 14 million users and handles around 2,000 fraud reports daily, at one point recording a sharp spike of 600,000 new registrations in a single day. Despite these figures, the mandatory pre-installation rule provoked intense backlash from cybersecurity experts and digital rights advocates.

India’s Minister of Communications, Jyotiraditya Scindia, dismissed concerns about surveillance, insisting that the app does not enable snooping. Digital advocacy groups welcomed the withdrawal but called for complete legal clarity on the revised Cyber Security Rules, 2024.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta begins removing underage users in Australia

Meta has begun removing Australian users under 16 from Facebook, Instagram and Threads ahead of a national ban taking effect on 10 December. Canberra requires major platforms to block younger users or face substantial financial penalties.

Meta says it is deleting accounts it reasonably believes belong to users under 16, while allowing those users to download their data first. Authorities expect hundreds of thousands of adolescents to be affected, given Instagram's large cohort of 13- to 15-year-olds.

Regulators argue the law addresses harmful recommendation systems and exploitative content, though YouTube has warned that safety filters will weaken for unregistered viewers. The Australian communications minister has insisted platforms must strengthen their own protections.

Rights groups have challenged the law in court, claiming unjust limits on expression. Officials concede teenagers may try using fake identification or AI-altered images, yet still expect platforms to deploy strong countermeasures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada sets national guidelines for equitable AI

Yesterday, Canada released CAN-ASC-6.2 – Accessible and Equitable Artificial Intelligence Systems, the first national standard focused specifically on accessible AI.

The framework ensures AI systems are inclusive, fair, and accessible from design through deployment. Its release coincides with the International Day of Persons with Disabilities, underscoring Canada’s commitment to accessibility and inclusion.

The standard guides organisations and developers in creating AI that accommodates people with disabilities, promotes fairness, prevents exclusion, and maintains accessibility throughout the AI lifecycle.

It provides practical processes for equity in AI development and encourages education on accessible AI practices.

The standard was developed by a technical committee composed largely of people with disabilities and members of equity-deserving groups, incorporating public feedback from Canadians of diverse backgrounds.

Approved by the Standards Council of Canada, CAN-ASC-6.2 meets national requirements for standards development and aligns with international best practices.

Moreover, the standard is available for free in both official languages and accessible formats, including plain language, American Sign Language and Langue des signes québécoise.

By setting clear guidelines, Canada aims to ensure AI serves all citizens equitably and strengthens workforce inclusion, societal participation, and technological fairness.

The initiative highlights Canada’s leadership in accessible technology and provides a practical tool for organisations to implement inclusive AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and automation need human oversight in decision-making

Leaders from academia and industry in Hyderabad, India, are stressing that humans must remain central to decision-making as AI and automation expand across society. Collaborative intelligence, which combines AI experts, domain specialists and human judgement, is seen as essential for responsible adoption.

Universities are encouraged to treat students as primary stakeholders, adapting curricula to integrate AI responsibly and avoid obsolescence. Competency-based, values-driven learning models are being promoted to prepare students to question, shape and lead through digital transformation.

Experts highlighted that modern communication is co-produced by humans, machines and algorithms. Designing AI to augment human agency rather than replace it ensures a balance between technology and human decision-making across education and industry.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Sega cautiously adopts AI in game development

Game development is poised to transform as Sega begins to incorporate AI selectively. The Japanese company aims to improve efficiency across production processes while preserving the integrity of creative work, such as character design.

Executives emphasised that AI will primarily support tasks such as content transcription and workflow optimisation, avoiding roles that require artistic skills. Careful evaluation of each potential use case will guide its implementation across projects.

The debate over generative AI continues to divide the gaming industry, with some developers raising concerns that candidates may pass off AI-generated work as their own during the hiring process. Studios are increasingly requiring proof of genuine creative ability to avoid productivity issues.

Other developers, including Arrowhead Game Studios, emphasise the importance of striking a balance between AI use and human creativity. By reducing repetitive tasks rather than replacing artistic roles, studios aim to enhance efficiency while preserving the unique contributions of human designers.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU opens antitrust probe into Meta’s WhatsApp AI rollout

Brussels has opened an antitrust inquiry into Meta over how AI features were added to WhatsApp, focusing on whether the updated access policies hinder market competition. Regulators say scrutiny is needed as integrated assistants become central to messaging platforms.

Meta AI has been built into WhatsApp across Europe since early 2025, prompting questions about whether external AI providers face unfair barriers. Meta rejects the accusations and argues that users can reach rival tools through other digital channels.

Italy launched a related proceeding in July and expanded it in November, examining claims that Meta curtailed access for competing chatbots. Authorities worry that dominance in messaging could influence the wider AI services market.

EU officials confirmed the case will proceed under standard antitrust rules rather than the Digital Markets Act. Investigators aim to understand how embedded assistants reshape competitive dynamics in services used by millions.

European regulators say outcomes could guide future oversight as generative AI becomes woven into essential communications. The case signals growing concern about concentrated power in fast-evolving AI ecosystems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Uzbekistan sets principles for responsible AI

Uzbekistan has adopted a new ethical framework for the development and use of AI technologies.

The rules, prepared by the Ministry of Digital Technologies, establish unified standards for developers, implementing organisations and users of AI systems, ensuring AI respects human rights, privacy and societal trust.

The framework forms part of presidential decrees and resolutions aimed at advancing AI innovation across the country. It also emphasises legality, transparency, fairness, accountability, and continuous human oversight.

AI systems must avoid discrimination based on gender, nationality, religion, language or social origin.

Developers are required to ensure algorithmic clarity, assess risks and bias in advance, and prevent AI from causing harm to individuals, society, the state or the environment.

Users of AI systems must comply with legislation, safeguard personal data, and operate technologies responsibly. Any harm caused during AI development or deployment carries legal liability.

The Ministry of Digital Technologies will oversee standards, address ethical concerns, foster industry cooperation, and improve digital literacy across Uzbekistan.

The initiative aligns with broader efforts to prepare Uzbekistan for AI adoption in healthcare, education, transport, space, and other sectors.

By establishing clear ethical principles, the country aims to strengthen trust in AI applications and ensure responsible and secure use nationwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!