Report warns of AI-driven divide in higher education

A new report from the Higher Education Policy Institute warns of an urgent need to improve AI literacy among staff and students in the UK. The study argues that without coordinated investment in training and policy, higher education risks deepening digital divides and losing relevance in an AI-driven world.

Contributors to the report say universities must move beyond merely acknowledging AI’s presence and adopt structured strategies for skill development. Kate Borthwick adds that both staff and students require ongoing education as AI reshapes teaching, assessment, and research.

The publication highlights growing disparities in access to and use of generative AI across gender, wealth, and academic discipline. In a chapter written by ChatGPT, the report suggests universities create AI advisory teams within research offices and embed AI training into staff development programmes.

Elsewhere, Ant Bagshaw from the Australian Public Policy Institute warns that generative AI could lead to cuts in professional services staff as universities seek financial savings. He acknowledges the transition will be painful but argues that it could drive a more efficient and focused higher education sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Vietnam unveils draft AI law inspired by EU model

Vietnam is preparing to become one of Asia’s first nations with a dedicated AI law, following the release of a draft bill that mirrors key elements of the EU’s AI Act. The proposal aims to consolidate rules for AI use, strengthen rights protections and promote innovation.

The draft introduces a four-tier risk classification system, ranging from banned applications such as manipulative facial recognition to low-risk uses subject to voluntary standards. High-risk systems, including those in healthcare or finance, would require registration, oversight and incident reporting to a national database.

Under the proposal, companies deploying powerful general-purpose AI models would have to meet strict transparency, safety and intellectual property standards. The bill would also create a National AI Commission and a National AI Development Fund to support local research, regulatory sandboxes and tax incentives for emerging businesses.

Violations involving unsafe AI systems could lead to revenue-based fines and suspensions. A phased rollout would begin in January 2026, with full compliance for high-risk systems expected by mid-2027. The government of Vietnam says the initiative reflects its ambition to build a trustworthy AI ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK government uses AI to boost efficiency and save taxpayer money

The UK government has developed an AI tool, named ‘Consult’, which analysed over 50,000 responses to the Independent Water Commission review in just two hours. The system matched human accuracy and could save 75,000 days of work annually, worth £20 million in staffing costs.

Consult sorted responses into key themes at a cost of just £240, with experts needing only 22 hours to verify the results. The AI agreed with human experts 83% of the time, compared with 55% agreement between human reviewers themselves, letting officials focus on policy instead of administrative work.

The technology has also been used to analyse consultations for the Scottish government on non-surgical cosmetics and the Digital Inclusion Action Plan. Part of the Humphrey suite, the tool helps government act faster and deliver better value for taxpayers.

Digital Government Minister Ian Murray highlighted the potential of AI to deliver efficient services and save costs. Engineers are using insights from Consult and Redbox to develop new tools, including GOV.UK Chat, a generative AI chatbot soon to be trialled in the GOV.UK App.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quebec man fined for using AI-generated evidence in court

A Quebec court has fined Jean Laprade C$5,000 (US$3,562) for submitting AI-generated content as part of his legal defence. Justice Luc Morin described the move as ‘highly reprehensible,’ warning that it could undermine the integrity of the judicial system.

The case concerned a dispute over a contract for three helicopters and an airplane in Guinea, where a clerical error awarded Laprade a more valuable aircraft than agreed. He resisted attempts by aviation companies to recover it, and a 2021 Paris arbitration ruling ordered him to pay C$2.7 million.

Laprade submitted fabricated AI-generated materials, including non-existent legal citations and inconsistent conclusions, in an attempt to strengthen his defence.

The judge emphasised that AI-generated information must be carefully controlled by humans, and that the filing of legal documents remains a solemn responsibility. Morin acknowledged the growing influence of AI in courts but stressed the dangers of misuse.

While noting Laprade’s self-representation, the judge condemned his use of ‘hallucinated’ AI evidence and warned of future challenges from AI in courts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Humanity AI launches $500M initiative to build a people-centred future

A coalition of ten leading philanthropic foundations has pledged $500 million over five years to ensure that AI evolves in ways that strengthen humanity rather than marginalise it.

The initiative, called Humanity AI, brings together organisations such as the Ford, MacArthur, Mellon, and Mozilla foundations to promote a people-driven vision for AI that enhances creativity, democracy, and security.

As AI increasingly shapes every aspect of daily life, the coalition seeks to place citizens at the centre of the conversation instead of leaving decisions to a few technology firms.

It plans to support new research, advocacy, and partnerships that safeguard democratic rights, protect creative ownership, and promote equitable access to education and employment.

The initiative also prioritises the ethical use of AI in safety and economic systems, ensuring innovation does not come at the expense of human welfare.

John Palfrey, president of the MacArthur Foundation, said Humanity AI aims to shift power back to the public by funding technologists and advocates committed to responsible innovation.

Michele Jawando of the Omidyar Network added that the future of AI should be designed by people collectively, not predetermined by algorithms or corporate agendas.

Rockefeller Philanthropy Advisors will oversee the fund, which begins issuing grants in 2026. Humanity AI invites additional partners to join in creating a future where people shape technology instead of being shaped by it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta expands AI infrastructure with sustainable data centre in El Paso

US tech giant Meta has begun construction of a new AI-optimised data centre in El Paso, Texas, designed to scale up to 1 GW and power the company’s expanding AI ambitions.

The 29th in Meta’s global network, the site will support the next generation of AI models, underpinning technologies such as smart glasses, AI assistants, and real-time translation tools.

The project represents a major investment in both technology and the local community, contributing over $1.5 billion and creating about 1,800 construction jobs and 100 operational roles in its first phase.

Meta’s Community Accelerator programme will also help local businesses build digital and AI skills, while Community Action Grants are set to launch in El Paso next year.

Environmental sustainability remains central to the development. The data centre will operate on 100% renewable energy, with Meta covering the costs of new grid connections through El Paso Electric.

Using a closed-loop cooling system, the facility will consume no water for most of the year, aligning with Meta’s target to be water positive by 2030. The company plans to restore twice the water the site consumes to local watersheds through partnerships with DigDeep and the Texas Water Action Collaborative.

The El Paso project, Meta’s third in Texas, underscores its long-term commitment to sustainable AI infrastructure. By combining efficiency, clean energy, and community investment, Meta aims to build the foundations for a responsible and scalable AI-driven future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SenseTime and Cambricon strengthen cooperation for China’s AI future

SenseTime and Cambricon Technologies have entered a strategic cooperation agreement to jointly develop an open and mutually beneficial AI ecosystem in China. The partnership will focus on software-hardware integration, vertical industry innovation, and the globalisation of AI technologies.

By combining SenseTime’s strengths in large model R&D, AI infrastructure, and industrial applications with Cambricon’s expertise in intelligent computing chips and high-performance hardware, the collaboration supports the national ‘AI+’ strategy of China.

Both companies aim to foster a new AI development model defined by synergy between software and hardware, enhancing domestic innovation and global competitiveness in the AI sector.

The agreement also includes co-development of adaptive chip solutions and integrated AI systems for enterprise and industrial use. By focusing on compatibility between the latest AI models and hardware architectures, the two firms plan to offer scalable, high-efficiency computing solutions.

The partnership seeks to drive intelligent transformation across industries and promote the growth of emerging AI enterprises through joint innovation and ecosystem building.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Adult erotica tests OpenAI’s safety claims

OpenAI will loosen some ChatGPT rules, letting users make replies friendlier and allowing erotica for verified adults. CEO Sam Altman framed the shift as an effort to ‘treat adult users like adults’, tied to stricter age-gating. The move follows months of new guardrails against sycophancy and harmful dynamics.

The change arrives after reports of vulnerable users forming unhealthy attachments to earlier models. OpenAI has since launched GPT-5 with reduced sycophancy and behaviour routing, plus safeguards for minors and a mental-health council. Critics question whether evidence justifies loosening limits so soon.

Erotic role-play can boost engagement, raising concerns that at-risk users may stay online longer. Access will be restricted to verified adults via age prediction and, if contested, ID checks. That trade-off intensifies privacy tensions around document uploads and potential errors.

It is unclear whether permissive policies will extend to voice, image, or video features, or how regional laws will apply to them. OpenAI says it is not ‘usage-maxxing’ but balancing utility with safety. Observers note that ambitions to reach a billion users heighten moderation pressures.

Supporters cite overdue flexibility for consenting adults and more natural conversation. Opponents warn normalising intimate AI may outpace evidence on mental-health impacts. Age checks can fail, and vulnerable users may slip through without robust oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mozilla integrates Perplexity AI into Firefox’s search features

Mozilla has announced that it is integrating Perplexity’s AI answer engine into Firefox as a choice available in the browser’s search options.

The feature has already been piloted in markets including the US, UK and Germany. Firefox is now bringing the option to desktop users globally, with a mobile rollout expected in the coming months.

When enabled, Perplexity AI offers conversational search: instead of returning just a list of web pages, it presents answers with citations. Users can activate it via the unified search button in the address bar or by configuring their default search engine settings.

Mozilla says the integration reflects positive feedback from early users and signals a desire to give people more choice in how they get information. The company also notes that Perplexity ‘doesn’t share or sell users’ personal data,’ which aligns with Mozilla’s privacy principles.

Firefox also continues to evolve other browser features. One is profiles, now broadly available, which allows users to maintain separate browser setups (for example, work vs home). The browser is also experimenting with visual search features using Google Lens for users who keep Google as their default provider.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Wider AI applications take centre stage at Japan’s CEATEC electronics show

At this year’s CEATEC exhibition in Japan, more companies and research institutions are promoting AI applications that stretch well beyond traditional factory or industrial automation.

Innovations on display suggest an increasing emphasis on ‘AI as companion’ systems, tools that help, advise, or augment human abilities in everyday settings.

Fujitsu’s showcase is a strong example. The company is using AI skeleton recognition and agent-based analysis to help people improve movement, whether for sports performance (such as refining a golf swing) or for healthcare settings. These systems give live feedback, coach form, and offer suggestions, all in real time.

Other exhibits combine sensor tech, vision, and AI in consumer-friendly ways: smart fridge compartments that monitor produce, earbuds or glasses that recognise real-world context (a flyer in a shop, say) and suggest recipes, and wearable systems that adapt to your motion.

These are not lab demos; they are meant for direct, everyday interaction. Rising numbers of startups and university groups at CEATEC underscore Japan’s push toward embedding AI deeply in daily life.

The ‘AI for All’ theme and ‘Partner Parks’ at the show reflect a shift toward socially oriented technologies centred on guidance, health, convenience, and personalisation. Japan seems to be leaning into AI not just for productivity gains but for lifestyle and well-being enhancements.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!