FTC signals flexibility on COPPA age checks

The US FTC has issued a policy statement signalling greater flexibility in enforcing parts of the Children’s Online Privacy Protection Act when companies deploy age verification tools. The agency said it will not take enforcement action where personal data is collected solely for age verification purposes.

The FTC framed age assurance as a key safeguard to prevent children from accessing inappropriate content online in the US. Officials said the approach is intended to encourage broader adoption of age verification technologies by online services.

While offering flexibility, the US regulator stressed that organisations must maintain strong safeguards, including data deletion practices and clear notice to parents and children. The FTC also warned that personal data used beyond age verification could still trigger enforcement action under COPPA.

Legal experts cautioned that, as with the 2023 amendments, companies using age assurance may face additional compliance duties under state youth privacy laws, even as federal requirements evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Hyundai invests in AI, robotics and hydrogen infrastructure

Hyundai will invest 9 trillion won ($6.3B) to build an AI data centre, robot hub, and hydrogen plant in Saemangeum. The project is part of Hyundai’s 125.2 trillion won domestic investment plan through 2030. Shares surged 10.7% following the announcement.

The AI data centre, costing 5.8 trillion won and due in 2029, will host up to 50,000 GPUs to process data from Hyundai’s automotive, steel, logistics, and defence units. The facility will enable ‘physical AI’, embedding intelligence in vehicles and robots rather than in software alone.

Hyundai will invest 400 billion won in a robot manufacturing complex with a capacity of 30,000 units annually. The fully automated facility integrates assembly, parts production, and logistics.

Robotics is central to Hyundai’s shift from automaker to AI platform operator, building on innovations such as the Atlas humanoid robot.

The plan includes a 200-megawatt hydrogen plant powered by solar energy, gigawatt-scale solar generation, and a pilot AI Hydrogen City zone. Hyundai estimates 16 trillion won in economic impact and 71,000 jobs.

President Lee Jae Myung highlighted the project as key to South Korea’s AI, robotics, and clean energy ambitions, promising regulatory support.

Topshop unveils AI shoppable catwalk in Manchester

Topshop has staged what it describes as a world-first AI-driven shoppable catwalk in Manchester, as part of its UK brand revival. The Manchester event combined physical runway looks with real-time digital purchasing through a bespoke Front Row AI app.

Guests in Manchester were able to buy outfits instantly as models walked, while also trying on virtual versions after the show. The experience was adjudicated by the World Record Certification Agency and positioned as a new model for immersive retail in the UK.

The Manchester showcase formed part of Topshop’s regional strategy beyond London, highlighting the North West’s role in the UK fashion sector. Students from the University of Salford and Manchester Metropolitan University designed and presented the finale.

Topshop’s broader comeback in the UK includes pop-ups in John Lewis stores, a standalone website relaunch and a partnership with Liberty in London. Executives said Manchester marked a new phase where AI and commerce converge to reshape retail experiences.

AI data centre planned for East Manchester

Latos Data Centres is preparing plans for a 28,000 sq ft data centre in Monsall, East Manchester, aimed at serving rising demand for AI computing. The scheme would occupy a three-acre brownfield site at Bower Street and Ten Acres Lane in Manchester.

The East Manchester project is designed as a neural edge data centre, bringing AI processing closer to end users than traditional cloud facilities. Latos said the Manchester development would form part of a broader plan to deliver 30 UK sites by 2030.

A public consultation on the scheme will run until 16 March, with Create Architecture leading the design. Advisers include Euan Kellie Property Solutions on planning and SK Transport Planning on transport matters.

Latos said the Manchester facility would regenerate a vacant industrial plot and operate to high environmental and safety standards. The developer is also delivering a separate data centre in Tees Valley as it expands its AI-focused portfolio across the UK.

McKinsey claims agentic AI will reshape global banking

Agentic AI is set to transform banking operations in the US and Asia, according to a McKinsey podcast featuring senior partners from New York, Mumbai and London. The technology goes beyond traditional automation by handling less structured tasks and supporting end-to-end decision making.

Research cited in the discussion suggests many banks are experimenting with AI, yet few report material financial gains. Leaders in the US and Asia are urged to avoid narrow pilot projects and instead redesign workflows, teams and governance around AI at scale.

McKinsey partners said successful banks in the US and Asia are aligning chief executives, technology leaders and risk officers behind a shared strategy. Operations, risk management and frontline services are seen as areas where AI could deliver significant productivity and quality gains.

Banks in India and other Asian markets are also benefiting from regulatory engagement, including guidance from the Reserve Bank of India. Speakers argued that workforce training, cross-functional collaboration and clear accountability will determine whether AI delivers lasting impact in the US.

AI smart glasses raise new privacy and safeguarding concerns

AI-powered smart glasses are quietly moving from novelty gadget to mainstream consumer device, and the shift is raising uncomfortable questions about privacy, consent and safeguarding. Models such as the Ray-Ban Meta glasses are now widely available in the UK, offering hands-free video capture, livestreaming and AI-driven features such as object recognition and translation. Yet as functionality expands, scrutiny is growing.

Public concern intensified after a BBC report revealed Meta AI glasses had recorded a woman without her consent. The episode reignited debate over whether existing privacy laws are equipped to deal with wearable devices that can identify, track and analyse people in real time. Unlike smartphones, smart glasses operate discreetly, blurring the line between passive wearables and active recording devices.

Manufacturers insist safeguards are being built in. EssilorLuxottica, which partners on the Meta glasses, says design changes have made recording more visible, including enlarging the camera lens and providing user guidance during setup.

The company says it is exploring further design adjustments, including mechanisms that turn off recording when the lens is covered. Compliance with current regulations, it argues, remains a priority.

Critics, however, believe regulation is lagging behind technological capability. Iain Rice, professor of industrial AI at Birmingham City University, warns that UK privacy frameworks were not designed with real-time AI surveillance tools in mind.

He points to risks including facial recognition integration, automated identity matching and the potential for large-scale deepfake generation using live public footage. While cloud processing enables useful features such as navigation and translation, experts argue that stronger safeguards may be needed, including on-device masking of individuals who have not consented to being recorded. The debate suggests that AI glasses may soon test the limits of existing digital rights frameworks.

Action-capable AI highlights new security challenges

AI agents are evolving from demos into autonomous tools, with OpenClaw emerging as a leading example. Unlike chatbots, these agents execute tasks directly, interacting with software and systems without constant human input.

The rise of action-capable AI introduces new security challenges. Agents can be manipulated through untrusted input or prompt injection. Persistent memory can also prolong mistakes or unintended behaviour.

The combination of access to sensitive data, external actions, and unverified content, sometimes called the ‘lethal trifecta’, amplifies risks, making careful configuration and oversight essential.

Self-hosted agents offer more control, while cloud-based versions simplify setup but shift security responsibility. Experts recommend running agents in isolated environments, limiting permissions, and requiring approval for sensitive actions.

These precautions reduce the chance of accidental or malicious harm while allowing users to experiment safely.
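The ‘require approval for sensitive actions’ recommendation can be sketched in a few lines. This is an illustrative toy, not the API of OpenClaw or any real agent framework: the `ApprovalGate` class, the action names and the approver callback are all hypothetical, chosen only to show the pattern of gating an agent’s external actions behind a human decision.

```python
# Hypothetical sketch of a human-approval gate for an AI agent's actions.
# Nothing here comes from a real framework; names are illustrative only.

# Actions the operator has marked as sensitive (assumed list).
SENSITIVE = {"delete_file", "send_email", "transfer_funds"}

class ApprovalGate:
    """Wraps an agent's tool calls; sensitive ones need an approver's consent."""

    def __init__(self, approver):
        # approver: callable taking an action name, returning True to allow it
        self.approver = approver

    def run(self, action, handler, *args):
        # Block sensitive actions unless the human approver says yes.
        if action in SENSITIVE and not self.approver(action):
            return f"blocked: {action} requires human approval"
        return handler(*args)

# Demo: an approver that denies everything sensitive.
gate = ApprovalGate(approver=lambda action: False)
print(gate.run("read_file", lambda: "file contents"))  # non-sensitive: runs
print(gate.run("delete_file", lambda: "deleted"))      # sensitive: blocked
```

In a real deployment the approver callback would prompt a person (or a policy engine) rather than returning a constant, and the gate would sit between the agent’s planner and every tool it can invoke.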

OpenClaw illustrates the potential of AI agents to automate workflows, handle repetitive tasks, and act proactively rather than passively advising. These tools show the future of consumer AI, but broader adoption requires stronger safety measures and awareness of risks.

OpenAI expands London research hub

OpenAI is turning its London office into its largest research hub outside the US, marking a strategic shift towards deeper engagement with the UK’s rapidly developing AI landscape. The move places the company in direct competition with Google DeepMind for scientific talent.

The expansion strengthens OpenAI’s long-term presence in Europe by building a substantial research base rather than relying on satellite operations. The firm aims to attract researchers seeking strong academic links, regulatory clarity and access to the UK’s growing AI ecosystem.

The enlarged London team is expected to support frontier model development and experimental work that aligns with OpenAI’s international ambitions. Senior leadership framed the decision as a vote of confidence in the UK’s capacity to become one of the most influential centres for advanced AI research.

The announcement intensifies debate over global competition for expertise, as major labs seek locations that balance research freedom with responsible oversight.

OpenAI’s investment signals a belief that the UK can offer such conditions while positioning itself as a key player in shaping the next generation of AI capabilities.

Data sovereignty becomes an infrastructure strategy in the AI era

For most of the past decade, data governance was treated as a legal issue. IT built networks and bought tools, while regulators were someone else’s problem. That division no longer holds. Cloud adoption and AI have turned data sovereignty into a core infrastructure and strategy question.

Regulatory frameworks such as GDPR, NIS2, and DORA are expanding and being enforced more strictly. Governments are also scrutinising foreign cloud providers and cross-border access. Local data storage no longer ensures absolute data sovereignty if critical control layers remain outside national jurisdiction.

Traditional SASE (secure access service edge) and SSE (security service edge) models were not built for this environment. Many still separate outbound cloud traffic from inbound controls. That split creates blind spots in distributed architectures and complicates consistent policy enforcement.

AI workloads intensify the pressure. Retailers, banks, and manufacturers are deploying models locally, not just in hyperscale clouds. Securing east-west traffic across systems and APIs without undermining data sovereignty is becoming a central architectural challenge.

Managed sovereign infrastructure is one response. It reduces reliance on external cloud paths while preserving operational scale. Ultimately, organisations must align security, AI deployment, and governance with long-term resilience goals.

Nano Banana 2 brings Flash speed to Gemini image generation

Google has introduced Nano Banana 2, branded Gemini 3.1 Flash Image, combining Flash speed with advanced reasoning. The update narrows the gap between rapid generation and visual quality, enabling faster edits. Improved instruction-following enhances the handling of complex prompts.

Nano Banana 2 integrates real-time web grounding to improve subject accuracy and contextual awareness. The model supports more precise text rendering and in-image translation for marketing and localisation tasks. It can also assist with diagrams, infographics, and data visualisations.

Upgrades include stronger subject consistency across multiple characters and objects within a single workflow. Users can create assets in a range of aspect ratios, at resolutions from 512px to 4K. Google highlighted improvements in lighting, textures, and photorealism while maintaining Flash-level speed.

The model is rolling out across the Gemini app, Search, Lens, AI Studio, Vertex AI, Flow, and Google Ads. In Gemini, Nano Banana 2 replaces Nano Banana Pro by default, though Pro remains available for specialised tasks. Availability is expanding to additional countries and languages.

Google also reinforced its provenance strategy by combining SynthID watermarking with C2PA Content Credentials. The company said verification tools in Gemini have been used millions of times to identify AI-generated media. C2PA verification will be added to the app in a future update.
