Ocado has announced plans to cut 1,000 jobs from its 20,000-strong global workforce, with roles mainly affected in technology and support. The company, headquartered in Hatfield, Hertfordshire, said the move would save £150m and follows major investment in robotics and automation.
Chief executive Tim Steiner said Ocado had completed a significant phase of investment in automation, but the company declined to confirm that AI directly led to the redundancies. At its Luton warehouse, opened in 2023, human staff continue to work alongside AI-powered robots.
Analysts suggested that competition has intensified as retailers in the UK, the US and Canada adopt similar AI-driven systems. Some former clients in the US and Canada have invested in their own technology, reducing reliance on Ocado’s platform.
Retail experts argued that deeper structural challenges, including changing consumer expectations and cost pressures in Hertfordshire and beyond, are also at play. Local leaders in Welwyn Hatfield have requested urgent talks as the company reshapes its operating model.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A deepfake video of Bombay Stock Exchange chief executive Sundararaman Ramamurthy circulated on social media in India, falsely offering stock advice to investors. The exchange moved quickly to report and remove the content, warning the public not to trust fake investment clips.
Cybersecurity experts say such cases are rising sharply, with one US firm estimating a 3,000 percent increase in deepfake incidents over two years. Executives in the US and the UK have also been impersonated using AI-generated audio and video.
In Hong Kong, police said a UK engineering firm lost $25m after an employee joined a video call featuring deepfake versions of senior colleagues. The transfer was made to multiple accounts before the fraud was discovered.
Security companies in the US and the UK are developing detection tools that analyse facial movement and blood flow patterns to identify AI-generated footage. Analysts warn that as costs fall and tools improve, businesses in India, Hong Kong and beyond face an escalating arms race against digital fraud.
Google has outlined a plan to strengthen Chrome’s HTTPS security against future quantum-computing threats. Rather than expanding traditional X.509 certificate chains in Chrome with post-quantum cryptography, the company is developing a new model based on Merkle Tree Certificates (MTCs).
The proposal from the PLANTS working group seeks to modernise the web public key infrastructure. Under the MTC model, a Certification Authority signs a single ‘Tree Head’ covering many certificates. Browsers receive a lightweight proof instead of a full certificate chain.
Google said this structure reduces authentication data exchanged during TLS handshakes while supporting post-quantum algorithms. By decoupling cryptographic strength from certificate size, the approach seeks to preserve performance as stronger security standards are adopted.
The company is already testing MTCs with real internet traffic. Phase one involves feasibility studies with Cloudflare, while phase two, in early 2027, will invite selected Certificate Transparency log operators to support initial public deployment.
By the third quarter of 2027, Google plans to establish requirements for onboarding certificate authorities to the quantum-resistant Chrome Root Store, which exclusively supports MTCs. The company described the initiative as foundational to maintaining long-term web security resilience.
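The mechanism can be illustrated with a toy Merkle tree in Python. This is a simplified sketch, not Google's actual MTC format: real deployments define precise leaf encodings, signed tree heads, and issuance epochs that this omits. The key idea survives, though: the CA signs only the tree head, and each site presents one leaf plus a short path of sibling hashes instead of a full certificate chain.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree bottom-up; returns the list of levels, leaves first."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof_for(levels, index):
    """Collect the sibling hashes needed to recompute the tree head from one leaf."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, sibling-is-left?)
        index //= 2
    return proof

def verify(leaf, proof, head):
    """Recompute the path to the head; only the head itself needs a CA signature."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == head

certs = [b"cert-A", b"cert-B", b"cert-C", b"cert-D"]
levels = build_tree(certs)
head = levels[-1][0]           # the CA signs only this tree head
proof = proof_for(levels, 2)   # short proof for cert-C: two hashes here
assert verify(b"cert-C", proof, head)
```

A proof contains roughly log2(N) hashes, so the authentication data in a handshake stays small even when the tree covers millions of certificates — only the single tree-head signature needs to be a large post-quantum one.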
A high-severity vulnerability in Chrome’s integrated Gemini AI assistant exposed users to potential camera and microphone activation, local file access, and phishing attacks. The issue, tracked as CVE-2026-0628, was disclosed by Palo Alto Networks’ Unit 42 and patched by Google in January 2026.
Gemini Live operates as a privileged AI panel embedded within the browser, capable of web page summarisation and task automation. To enable multimodal functionality, the panel is granted elevated permissions, including access to screenshots, local files, and device hardware.
Researchers identified inconsistent handling of the declarativeNetRequest API when gemini.google.com was loaded inside the AI side panel rather than a standard browser tab. While extensions could inject JavaScript in both cases, the panel context inherited browser-level privileges.
A malicious extension exploiting this distinction could hijack the trusted panel and execute arbitrary code with elevated access. Potential impacts included silent activation of a camera or microphone, screenshot capture, local file exfiltration, and high-credibility phishing attacks.
Google released a fix on 5 January 2026 following responsible disclosure. Users running the latest version of Chrome are protected, and organisations are advised to ensure updates are applied across all endpoints.
Thailand has published a draft public guidance document to help citizens use AI safely and responsibly. The ‘AI Guide for Citizens’ outlines key AI concepts, benefits, limitations, and practical examples for users engaging with generative AI tools.
Data safety is a central focus, with officials warning against entering personal identifiers, financial data, confidential information, or government secrets into public AI platforms.
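As a hypothetical illustration of that advice (not part of the Thai guide itself), a client application could scrub obvious identifiers before a prompt ever leaves the device. The patterns below are illustrative assumptions only; real deployments would need locale-specific rules, such as Thai national ID formats, and far more robust detection.

```python
import re

# Illustrative patterns, not production-grade PII detection.
# Card numbers are matched before the looser phone pattern on purpose.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD":  re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace obvious personal identifiers before text reaches a public AI platform."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact me at jane@example.com or +66 81 234 5678."))
```

Redaction of this kind is a last line of defence, not a substitute for the guide’s core advice: confidential material and government secrets should simply never be entered into public AI platforms.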
The guide also details technical risks such as AI ‘hallucinations’, prompt injection, and data poisoning, advising users to verify outputs and treat AI as a support tool rather than a decision maker.
The guidance addresses ethical and legal responsibilities, warning against using AI to generate misinformation, deepfakes, or harmful content. It emphasises fairness and bias, noting AI systems can inherit human prejudices from training data.
Citizens encountering AI-related scams or harmful content are advised to collect evidence, report incidents to cybercrime authorities, and contact Thailand’s personal data protection agency if privacy is compromised.
The draft aligns Thailand’s AI policies with national rules and international standards, including ISO governance principles and the EU AI Act. The initiative aims to boost AI literacy and safeguards as AI becomes more integrated into daily life.
Hundreds of academics have urged governments to halt plans for mandatory age checks on social media rather than accelerate deployment without assessing the risks.
Researchers argue that current systems expose people to privacy breaches, security vulnerabilities and malicious sites that ignore verification rules instead of offering meaningful protection.
They say scientific consensus has not yet formed on the benefits or harms of age-assurance technologies, making large-scale implementation premature and potentially discriminatory.
The letter stresses that any credible system would require cryptographic safeguards for every query, protecting data in transit rather than leaving identity checks to platforms without robust technical guarantees.
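A minimal sketch of the idea, under strong simplifying assumptions: an issuer attests only to an age attribute, and the platform verifies that claim without ever receiving identity data. Real proposals rely on public-key or blind signatures, or zero-knowledge proofs, so that platforms cannot link queries and never hold the issuer’s secret; the shared HMAC key below exists only to keep the example self-contained and is not how a deployed system would work.

```python
import base64, hashlib, hmac, json, secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the age-verification issuer

def issue_token(over_16: bool) -> str:
    """Issue a signed claim carrying only the age attribute, no identity."""
    payload = json.dumps({"over16": over_16, "nonce": secrets.token_hex(8)}).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + tag

def platform_check(token: str) -> bool:
    """The platform learns only the boolean attribute; the MAC protects integrity."""
    body, tag = token.split(".")
    payload = base64.urlsafe_b64decode(body)
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False  # reject tampered or forged tokens
    return json.loads(payload)["over16"]

assert platform_check(issue_token(True)) is True
```

Even this toy version shows why the academics consider the engineering hard: every query needs its own cryptographic check, key distribution must be solved globally, and any weaker shortcut collapses into platforms simply collecting identity documents.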
Academics believe such infrastructure would be complex to build globally and would create friction that many providers may refuse to adopt.
Concern escalated after early deployments in Italy and France, where verification is already mandatory.
Signatories, including Ronald Rivest and Bart Preneel, warn that governments risk introducing a socially unacceptable system that increases exposure to data misuse instead of ensuring children’s safety online.
Social media platform X has introduced a new ‘Paid Partnership’ label that creators can attach to posts to show when content is promotional, rather than leaving audiences unsure about commercial intent.
The update improves transparency for followers while meeting rules set by the Federal Trade Commission, which expects sponsored material to be disclosed clearly.
Creators previously relied on hashtags such as #ad or #paidpartnership instead of an integrated disclosure option. The new feature allows users to apply the label through a content-disclosure toggle either during posting or afterwards.
X’s product lead, Nikita Bier, said undisclosed promotions damage trust and weaken the platform’s integrity, so the tool is meant to support creators and regulators simultaneously.
X has been trying to build a stronger creator ecosystem by offering payouts, subscriptions and other incentives. Yet many creators still favour Instagram or YouTube over X as their primary channel, because those platforms have longer-standing monetisation tools.
The addition of a built-in label aligns X with broader industry practice and aims to regain credibility among advertisers and creators.
The company has also tightened API access, preventing programmatic replies unless a user is directly mentioned or quoted.
The change seeks to limit LLM-generated spam rather than allow automated responses to distort discussions or appear as fake engagement beneath sponsored content.
X hopes these combined measures will enhance authenticity around commercial posts.
As organisations expand across cloud environments, non-human identities are becoming a critical component of modern cybersecurity strategies. Managing machine identities and their associated secrets is increasingly central to reducing risk and improving AI-driven threat detection.
As digital infrastructure grows, machine identities function as secure access credentials for applications, services, and automated processes. Effective governance can reduce vulnerabilities, improve compliance, and streamline operations across sectors such as finance and healthcare.
Integrating non-human identities into AI security frameworks enables more contextual anomaly detection and improved visibility into network behaviour. Rather than relying solely on static scanning, organisations can adopt adaptive models that enhance predictive threat response.
Challenges remain, particularly around coordination between security, DevOps, and research teams. Gaps in collaboration and limited awareness of identity lifecycle management can create blind spots that weaken overall cyber resilience.
Automation is increasingly seen as essential for scaling non-human identity management. By automating secrets rotation, certificate renewal, and access reviews, organisations can strengthen governance while enabling security teams to focus on higher-value strategic priorities.
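As an illustrative sketch of that rotation pattern (the 30-day policy, class names and store layout are assumptions, not a reference to any particular vault product), automated rotation can be as simple as a scheduled job that reissues any machine secret past its maximum age:

```python
import secrets

MAX_AGE = 30 * 24 * 3600  # assumed policy: rotate machine secrets every 30 days

class SecretStore:
    """Toy vault that tracks issuance time for each non-human identity's secret."""

    def __init__(self):
        self._secrets = {}  # identity -> (secret, issued_at)

    def issue(self, identity: str, now: float) -> str:
        secret = secrets.token_urlsafe(32)
        self._secrets[identity] = (secret, now)
        return secret

    def rotate_expired(self, now: float) -> list[str]:
        """Reissue every secret older than MAX_AGE; returns the rotated identities."""
        rotated = []
        for identity, (_, issued_at) in list(self._secrets.items()):
            if now - issued_at > MAX_AGE:
                self.issue(identity, now)
                rotated.append(identity)
        return rotated

store = SecretStore()
store.issue("ci-runner", now=0)
store.issue("billing-batch", now=20 * 24 * 3600)
assert store.rotate_expired(now=35 * 24 * 3600) == ["ci-runner"]
```

A production system would additionally propagate the new secret to consumers atomically and keep the old one valid during a grace window, which is exactly the coordination between security and DevOps teams the article flags as a common gap.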
Lawmakers in the European Parliament are pressing the European Commission for clarity after reports that Meta’s smart glasses recorded people in intimate moments without their knowledge.
Concerns intensified when Swedish outlets reported that Ray-Ban AI glasses captured and uploaded sensitive footage in violation of strict consent requirements under the EU’s General Data Protection Regulation.
The reports indicate that personal data from EU users was sent to Sama, a third-party contractor, in Kenya for human review. Annotators working there said they viewed images of individuals changing clothes and believed the recordings were taken without consent.
They added that Meta’s attempts to blur faces or apply other safeguards failed often enough to expose identifiable material instead of ensuring proper anonymisation.
EU privacy law requires clear information and consent before collecting and processing personal data, and additional safeguards when exporting data to countries without recognised adequacy status.
Kenya is still negotiating such recognition with the Commission, meaning contractual protections would be necessary.
The Irish Data Protection Commission, responsible for Meta’s GDPR oversight, has been contacted amid questions about whether Meta complied with EU requirements.
Lawmakers also want the Commission to examine whether proposed changes in the Digital Omnibus package could dilute privacy protections rather than strengthen them.
Critics argue the reforms might ease data-use rules for AI training at a moment when allegations about Meta’s smart glasses have intensified scrutiny of the EU’s broader digital policy agenda.
Britain has opened a public consultation examining whether children under 16 should face restrictions or a potential ban on social media use. Young people, parents and educators are being invited to share views before ministers decide on future policy.
Officials are considering several options beyond a full ban, including disabling addictive platform features, introducing overnight curfews, regulating access to AI chatbots, and tightening age verification rules. Pilot schemes will test proposed measures to gather practical evidence on their effectiveness.
The debate follows international momentum after Australia introduced restrictions on under-16 access to major platforms, with Spain signalling similar intentions. Political parties, charities and campaigners remain divided over whether bans or stronger safety regulations offer better protection.
Children’s organisations warn blanket prohibitions could push young users towards less regulated online spaces, creating a ‘false sense of security’. Researchers and policymakers instead emphasise improving platform safety standards while allowing young people to socialise and express themselves online responsibly.