Capita hit with £14 million fine after major data breach

The UK outsourcing firm Capita has been fined £14 million after a cyber-attack exposed the personal data of 6.6 million people. Sensitive information, including financial details, home addresses, passport images, and criminal records, was compromised.

The ICO initially proposed a £45 million fine, but reduced it after Capita improved its cybersecurity, supported affected individuals, and engaged with regulators.

The breach affected 325 of the 600 pension schemes Capita manages, highlighting the risks for organisations handling large-scale sensitive data.

The Information Commissioner’s Office (ICO) criticised Capita for failing to secure personal information, emphasising that proper security measures could have prevented the incident.

Experts note that holding companies financially accountable reinforces the importance of data protection and sends a message to the market.

Capita’s CEO said the company has strengthened its cyber defences and remains vigilant to prevent future breaches.

The UK government has advised companies like Capita to prepare contingency plans following a rise in nationally significant cyberattacks, including incidents at Co-op, M&S, Harrods, and Jaguar Land Rover earlier in the year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft to support UAE investment analytics with responsible AI tools

The UAE Ministry of Investment and Microsoft signed a Memorandum of Understanding at GITEX Global 2025 to apply AI to investment analytics, financial forecasting, and retail optimisation. The deal aims to strengthen data governance across the investment ecosystem.

Under the MoU, Microsoft will support upskilling through its AI National Skilling Initiative, targeting 100,000 government employees. Training will focus on practical adoption, responsible use, and measurable outcomes, in line with the UAE’s National AI Strategy 2031.

Both parties will promote best practices in data management using Azure services such as Data Catalog and Purview. Workshops and knowledge-sharing sessions with local experts will standardise governance. Strong controls are positioned as the foundation for trustworthy AI at scale.

The agreement was signed by His Excellency Mohammad Alhawi and Amr Kamel. Officials say the collaboration will embed AI agents into workflows while maintaining compliance. Investment teams are expected to gain real-time insights and automation that shorten the time to action.

The partnership supports the ambition to make the UAE a leader in AI-enabled investment. It also signals deeper public–private collaboration on sovereign capabilities. With skills, standards, and use cases in place, the ministry aims to attract capital and accelerate diversification.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scaling a cell ‘language’ model yields new immunotherapy leads

Yale University and Google unveiled Cell2Sentence-Scale 27B, a 27-billion-parameter model built on Gemma to decode the ‘language’ of cells. The system generated a novel hypothesis about cancer cell behaviour, and Google CEO Sundar Pichai called it ‘an exciting milestone’ for AI in science.

The work targets a core problem in immunotherapy: many tumours are ‘cold’ and evade immune detection. Making them visible requires boosting antigen presentation. C2S-Scale sought a ‘conditional amplifier’ drug that boosts signals only in immune-context-positive settings.

Smaller models lacked the reasoning to solve the problem, but scaling to 27B parameters unlocked the capability. The team then simulated 4,000 drugs across patient samples. The model flagged context-specific boosters of antigen presentation, with 10–30% already known and the rest entirely novel.

Researchers emphasise that conditional amplification aims to raise immune signals only where key proteins are present. That could reduce off-target effects and make ‘cold’ tumours discoverable. The result hints at AI-guided routes to more precise cancer therapies.
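
To make the screening logic concrete, here is a toy Python sketch, not the actual C2S-Scale pipeline: it flags candidates whose predicted boost to antigen presentation appears only in immune-context-positive samples. Every drug name and number below is invented for illustration.

```python
# Toy illustration (not the C2S-Scale pipeline): flag drugs whose predicted
# boost to antigen presentation shows up only in immune-context-positive
# samples -- the "conditional amplifier" pattern the researchers describe.
def flag_conditional_amplifiers(predictions, min_boost=0.5, max_off_context=0.1):
    """predictions maps drug -> {"pos": boost in immune context,
    "neg": boost outside immune context}."""
    return [
        drug for drug, p in predictions.items()
        if p["pos"] >= min_boost and abs(p["neg"]) <= max_off_context
    ]

# Hypothetical model outputs for three candidate drugs:
preds = {
    "drug_A": {"pos": 0.80, "neg": 0.02},  # context-specific boost -> a hit
    "drug_B": {"pos": 0.70, "neg": 0.55},  # boosts everywhere -> off-target risk
    "drug_C": {"pos": 0.04, "neg": 0.01},  # no effect -> not a hit
}
print(flag_conditional_amplifiers(preds))  # ['drug_A']
```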

Google has released C2S-Scale 27B on GitHub and Hugging Face for the community to explore. The approach blends large-scale language modelling with cell biology, signalling a new toolkit for hypothesis generation, drug prioritisation, and patient-relevant testing.
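
For readers who want to experiment, the sketch below shows one plausible way to load the model with the standard Hugging Face transformers API. The repository id and prompt template are placeholders to be checked against the official release, and a 27-billion-parameter model needs substantial GPU memory or quantisation.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "google/c2s-scale-27b"  # placeholder id -- check the official release

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# In the Cell2Sentence framing, a cell becomes a "sentence" of gene symbols
# ranked by expression (highest first), which a language model can read.
cell_sentence = "MALAT1 B2M TMSB4X RPL13A ACTB CD74 HLA-B"  # toy example
prompt = (
    "Given the following cell, predict its cell type.\n"
    f"Cell: {cell_sentence}\nCell type:"
)  # prompt template is an assumption, not the documented format

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```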

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Vietnam unveils draft AI law inspired by EU model

Vietnam is preparing to become one of Asia’s first nations with a dedicated AI law, following the release of a draft bill that mirrors key elements of the EU’s AI Act. The proposal aims to consolidate rules for AI use, strengthen rights protections and promote innovation.

The draft introduces a four-tier risk classification, ranging from banned applications, such as manipulative facial recognition, to low-risk uses subject only to voluntary standards. High-risk systems, including those in healthcare or finance, would require registration, oversight and incident reporting to a national database.

Under the bill, companies deploying powerful general-purpose AI models would have to meet strict transparency, safety and intellectual property standards. The bill would also create a National AI Commission and a National AI Development Fund to support local research, sandboxes and tax incentives for emerging businesses.

Violations involving unsafe AI systems could lead to revenue-based fines and suspensions. The phased rollout would begin in January 2026, with full compliance for high-risk systems expected by mid-2027. The government of Vietnam says the initiative reflects its ambition to build a trustworthy AI ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quebec man fined for using AI-generated evidence in court

A Quebec court has fined Jean Laprade C$5,000 (US$3,562) for submitting AI-generated content as part of his legal defence. Justice Luc Morin described the move as ‘highly reprehensible,’ warning that it could undermine the integrity of the judicial system.

The case concerned a dispute over a contract for three helicopters and an airplane in Guinea, where a clerical error awarded Laprade a more valuable aircraft than agreed. He resisted attempts by aviation companies to recover it, and a 2021 Paris arbitration ruling ordered him to pay C$2.7 million.

Laprade submitted fabricated AI-generated materials, including non-existent legal citations and inconsistent conclusions, in an attempt to strengthen his defence.

The judge emphasised that AI-generated information must be carefully verified by humans and that the filing of legal documents remains a solemn responsibility. Morin acknowledged the growing influence of AI in courts but stressed the dangers of misuse.

While noting Laprade’s self-representation, the judge condemned his use of ‘hallucinated’ AI evidence and warned of future challenges from AI in courts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Humanity AI launches $500M initiative to build a people-centred future

A coalition of ten leading philanthropic foundations has pledged $500 million over five years to ensure that AI evolves in ways that strengthen humanity rather than marginalise it.

The initiative, called Humanity AI, brings together organisations such as the Ford, MacArthur, Mellon, and Mozilla foundations to promote a people-driven vision for AI that enhances creativity, democracy, and security.

As AI increasingly shapes every aspect of daily life, the coalition seeks to place citizens at the centre of the conversation instead of leaving decisions to a few technology firms.

It plans to support new research, advocacy, and partnerships that safeguard democratic rights, protect creative ownership, and promote equitable access to education and employment.

The initiative also prioritises the ethical use of AI in safety and economic systems, ensuring innovation does not come at the expense of human welfare.

John Palfrey, president of the MacArthur Foundation, said Humanity AI aims to shift power back to the public by funding technologists and advocates committed to responsible innovation.

Michele Jawando of the Omidyar Network added that the future of AI should be designed by people collectively, not predetermined by algorithms or corporate agendas.

Rockefeller Philanthropy Advisors will oversee the fund, which begins issuing grants in 2026. Humanity AI invites additional partners to join in creating a future where people shape technology instead of being shaped by it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK and US freeze assets of Southeast Asian online scam network

The UK and US governments have jointly sanctioned a transnational network operating illegal scam centres across Southeast Asia. These centres use sophisticated methods, including fake romantic relationships, to defraud victims worldwide.

Many of the individuals forced to conduct these scams are trafficked foreign nationals, coerced under threat of torture. Authorities have frozen a £12 million North London mansion, along with a £100 million City office and several London flats.

Network leader Chen Zhi and his associates used corporate proxies and overseas companies to launder proceeds from their scams through London’s property market.

The sanctioned entities include the Prince Group, Jin Bei Group, Golden Fortune Resorts World Ltd., and Byex Exchange. Scam operations trap foreign nationals with fake job adverts, forcing them to commit online fraud, often through fake cryptocurrency schemes.

Proceeds are then laundered through a complex system of front businesses and gambling platforms.

Foreign Secretary Yvette Cooper and Fraud Minister Lord Hanson said the action protects human rights and UK citizens, and blocks criminals from storing illicit funds. Coordination with the US ensures these sanctions disrupt the network’s international operations and financial access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Adult erotica tests OpenAI’s safety claims

OpenAI will loosen some ChatGPT rules, letting users make replies friendlier and allowing erotica for verified adults. CEO Sam Altman framed the shift as ‘treat adult users like adults’, tied to stricter age-gating. The move follows months of new guardrails against sycophancy and harmful dynamics.

The change arrives after reports of vulnerable users forming unhealthy attachments to earlier models. OpenAI has since launched GPT-5 with reduced sycophancy and behaviour routing, plus safeguards for minors and a mental-health council. Critics question whether evidence justifies loosening limits so soon.

Erotic role-play can boost engagement, raising concerns that at-risk users may stay online longer. Access will be restricted to verified adults via age prediction and, if contested, ID checks. That trade-off intensifies privacy tensions around document uploads and potential errors.

It is unclear whether permissive policies will extend to voice, image, or video features, or how regional laws will apply to them. OpenAI says it is not ‘usage-maxxing’ but balancing utility with safety. Observers note that ambitions to reach a billion users heighten moderation pressures.

Supporters cite overdue flexibility for consenting adults and more natural conversation. Opponents warn that normalising intimate AI may outpace evidence on mental-health impacts. Age checks can fail, and vulnerable users may slip through without robust oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Wider AI applications take centre stage at Japan’s CEATEC electronics show

At this year’s CEATEC exhibition in Japan, more companies and research institutions are promoting AI applications that stretch well beyond traditional factory or industrial automation.

Innovations on display suggest an increasing emphasis on ‘AI as companion’ systems: tools that help, advise, or augment human abilities in everyday settings.

Fujitsu’s showcase is a strong example. The company is using AI skeleton recognition and agent-based analysis to help people improve movement, whether for sports performance (such as refining a golf swing) or in healthcare settings. These systems give live feedback, coach form, and offer suggestions in real time.
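
To give a flavour of how skeleton-based coaching works in general, here is a generic Python sketch (not Fujitsu’s system): it takes 2D keypoints from any pose-estimation model, computes a joint angle, and turns it into a simple feedback hint. The keypoint values and the 160-degree threshold are invented for illustration.

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return ang if ang <= 180 else 360 - ang

# Hypothetical normalised (x, y) keypoints from one video frame.
shoulder, elbow, wrist = (0.42, 0.30), (0.50, 0.45), (0.55, 0.62)

angle = joint_angle(shoulder, elbow, wrist)
if angle < 160:  # illustrative threshold for a "straight" lead arm
    print(f"Lead arm bent ({angle:.0f} deg) -- try keeping it straighter.")
else:
    print(f"Lead arm looks good ({angle:.0f} deg).")
```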

Other exhibits combine sensor tech, vision, and AI in consumer-friendly ways: smart fridge compartments that monitor produce, earbuds or glasses that recognise real-world context (a flyer in a shop, say) and suggest recipes, and wearable systems that adapt to the wearer’s motion.

These are not lab demos; they are meant for direct, everyday interaction. The rising number of startups and university groups at CEATEC underscores Japan’s push to embed AI deeply in daily life.

The ‘AI for All’ theme and ‘Partner Parks’ at the show reflect a movement toward socially oriented technologies centred on assistance, health, convenience, and personalisation. Japan seems to be leaning into AI not just for productivity gains but for lifestyle and well-being enhancements.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI forms Expert Council to guide well-being in AI

OpenAI has announced the establishment of an Expert Council on Well-Being and AI to help it shape ChatGPT, Sora and other products in ways that promote healthier interactions and better emotional support.

The council comprises eight distinguished figures from psychology, psychiatry, human-computer interaction, developmental science and clinical practice.

Members include David Bickham (Digital Wellness Lab, Harvard), Munmun De Choudhury (Georgia Tech), Tracy Dennis-Tiwary (Hunter College), Sara Johansen (Stanford), Andrew K. Przybylski (University of Oxford), David Mohr (Northwestern), Robert K. Ross (public health) and Mathilde Cerioli (everyone.AI).

OpenAI says this new body will meet regularly with internal teams to examine how AI should function in ‘complex or sensitive situations,’ advise on guardrails, and explore what constitutes well-being in human-AI interaction. For example, the council has already influenced how parental controls and distress notifications for teen users were prioritised.

OpenAI emphasises that it remains accountable for its decisions, but commits to ongoing learning through this council, the Global Physician Network, policymakers and experts. The company notes that different age groups, especially teenagers, use AI tools differently, hence the need for tailored insights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!