Arm Holdings has unveiled Lumex, its next-generation chip designs built to bring advanced AI performance directly to mobile devices.
The new designs range from highly energy-efficient chips for wearables to high-performance versions capable of running large AI models on smartphones without cloud support.
Lumex forms part of Arm’s Compute Subsystems business, offering handset makers pre-integrated designs, while also strengthening Arm’s broader strategy to expand smartphone and data centre revenues.
The chips are tailored for 3-nanometre manufacturing processes provided by suppliers such as TSMC, whose technology is also used in Apple’s latest iPhone chips. Arm has indicated further investment in its own chip development to capitalise on demand.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has unveiled VaultGemma, a new large language model built to offer cutting-edge privacy through differential privacy. The 1-billion-parameter model is based on Google’s Gemma architecture and is described as the most powerful differentially private LLM to date.
Differential privacy adds mathematical noise to data, preventing the identification of individuals while still producing accurate overall results. The method has long been used in regulated industries, but has been challenging to apply to large language models without compromising performance.
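The noise-adding idea can be illustrated with a minimal sketch of the Laplace mechanism, a standard differential-privacy technique (this is a generic illustration, not Google's training method; the function names are our own):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a Laplace(0, scale) distribution via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, epsilon: float, value_range: float) -> float:
    """Mean of `values` plus Laplace noise calibrated to sensitivity / epsilon.

    `value_range` bounds how far apart any two values can be, so changing one
    record shifts the mean by at most value_range / n (the sensitivity).
    A smaller epsilon means stronger privacy and more noise.
    """
    n = len(values)
    sensitivity = value_range / n
    return sum(values) / n + laplace_noise(sensitivity / epsilon)
```

The privacy guarantee comes from the calibration: the noise scale grows with sensitivity and shrinks with epsilon, so individual records are masked while aggregate statistics stay accurate. Applying the same idea to LLM training (as VaultGemma does) means injecting calibrated noise into gradient updates, which is far harder to do without degrading the model.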
VaultGemma is designed to eliminate that trade-off. Google states that the model can be trained and deployed with differential privacy enabled, while maintaining comparable stability and efficiency to non-private LLMs.
This breakthrough could have significant implications for developers building privacy-sensitive AI systems in sectors ranging from healthcare and finance to government services. It demonstrates that sensitive data can be protected without sacrificing speed or accuracy.
Google’s research teams say the model will be released with open-source tools to help others adopt privacy-preserving techniques. The move comes amid rising regulatory and public scrutiny over how AI systems handle personal data.
South Australian public schools will soon gain access to EdChat, a ChatGPT-style app developed by Microsoft in partnership with the state government. Education Minister Blair Boyer said the tool will roll out next term across public high schools following a successful trial.
Safeguards have been built into EdChat to protect student data and alert moderators if students type concerning prompts, such as those related to self-harm or other sensitive topics. Boyer said student mental health was a priority during the design phase.
Teachers report that students use EdChat to clarify instructions, get maths solutions explained, and quiz themselves on exam topics. Adelaide Botanic High School principal Sarah Chambers described it as an ‘education equaliser’ that provides students with access to support throughout the day.
While many educators in Australia welcome the rollout, experts warn against overreliance on AI tools. Toby Walsh of UNSW said students must still learn how to write essays and think critically, while others noted that AI could actually encourage deeper questioning and analysis.
RMIT computing expert Michael Cowling said generative AI can strengthen critical thinking when used for brainstorming and refining ideas. He emphasised that students must learn to critically evaluate AI output and utilise the technology as a tool, rather than a substitute for learning.
The 2026 Adwanted Media Research Awards will feature a new category for Best Use of AI in Research Projects, reflecting the growing importance of this technology in the industry.
Head judge Denise Turner of IPA said AI should be viewed as a tool to expedite workflows, not replace human insight, emphasising that researchers remain essential to interpreting results and posing the right questions.
Route CEO Euan Mackay said AI enables digital twins, synthetic data, and clean-room integrations, shifting researchers’ roles from survey design to auditing and ensuring data integrity in an AI-driven environment.
OMD’s Laura Rowe highlighted AI’s ability to rapidly process raw data, transcribe qualitative research, and extend insights across strategy and planning — provided ethical oversight remains in place.
ITV’s Neil Mortensen called this the start of a ‘gold rush’, urging the industry to use AI to automate tedious tasks while preserving rigorous methods and enabling more time for deep analysis.
The AI startup Anthropic has added a memory feature to its Claude AI, designed to automatically recall details from earlier conversations, such as project information and team preferences.
Initially, the upgrade is only available to Team and Enterprise subscribers, who can manage, edit, or delete the content that the system retains.
Anthropic presents the tool as a way to improve workplace efficiency by sparing users from repeating instructions. Enterprise administrators have additional controls, including the ability to turn memory off entirely.
Privacy safeguards are included, such as an ‘incognito mode’ for conversations that are not stored.
Analysts view the step as an effort to catch up with competitors like ChatGPT and Gemini, which already offer similar functions. Memory also links with Claude’s newer tools for creating spreadsheets, presentations, and PDFs, allowing past information to be reused in future documents.
Anthropic plans a wider release after testing the feature with businesses. Experts suggest the approach could strengthen the company’s position in the AI market by offering both continuity and security, which appeal to enterprises handling sensitive data.
As AI tools such as ChatGPT become more common among students, some educators at schools and colleges now assume that assignments done at home almost certainly involve AI. Educators say take-home writing tasks and traditional homework risk being devalued.
Teachers and students alike are often unclear about what constitutes legitimate versus dishonest AI use. Some students use AI to outline, edit, or translate texts. Others avoid asking for guidance about AI usage because rules vary by classroom, and admitting AI help might lead to accusations.
Schools are adapting by shifting towards in-class writing, verbal assessments and locked-down work environments.
Faculty at institutions such as the University of California, Berkeley, and Carnegie Mellon have started to issue updated syllabus templates that spell out AI expectations, including bans, approvals, or partial allowances.
Google has been selected for DARPA's Quantum Benchmarking Initiative (QBI), meaning it will work with DARPA's technical experts, who will serve as independent validators of its quantum computing roadmap. The evaluation aims to provide rigorous third-party benchmarking, a critical capability for the broader quantum industry.
DARPA’s QBI is not only about validation. It aims to compare different quantum technologies (superconducting qubits, photonic systems, trapped ions and other modalities) under shared metrics.
Google’s involvement underscores its ongoing mission to build quantum infrastructure capable of addressing problems such as new medicine design, energy innovation and machine-learning optimisation.
Under its AI Action Plan, the UK government is expanding the use of AI across prisons, probation and courts to monitor offenders, assess risk and prevent crime before it occurs.
One key measure involves an AI violence prediction tool that uses factors like an offender’s age, past violent incidents and institutional behaviour to identify those most likely to pose risk.
These predictions will inform decisions to increase supervision or relocate prisoners within custody wings ahead of potential violence.
Another component scans seized mobile phone content to highlight secret or coded messages that may signal plotting of violent acts, intelligence operations or contraband activities.
Officials are also working to merge offender records across courts, prisons and probation to create a single digital identity for each offender.
UK authorities say the goal is to reduce reoffending and prioritise public and staff safety, while shifting resources from reactive investigations to proactive prevention. Civil liberties groups caution about privacy, bias and the risk of overreach if transparency and oversight are not built in.
Educators are confronting a new reality as AI tools like ChatGPT become widespread among students. Traditional take-home assignments and essays are increasingly at risk as students commonly use AI chatbots to complete schoolwork.
Schools are responding by moving more writing tasks into the classroom and monitoring student activity. Teachers are also integrating AI into lessons, teaching students how to use it responsibly for research, summarising readings, or improving drafts, rather than as a shortcut to cheat.
Policies on AI use still vary widely. Some classrooms allow AI tools for grammar checks or study aids, while others enforce strict bans. Teachers are shifting away from take-home essays, adopting in-class tests, lockdown browsers, or flipped classrooms to manage AI’s impact better.
The inconsistency often leaves students unsure about acceptable use and challenges educators to uphold academic integrity.
Institutions like the University of California, Berkeley, and Carnegie Mellon have implemented policies promoting ‘AI literacy,’ explaining when and how AI can be used, and adjusting assessments to prevent misuse.
As AI continues improving, educators seek a balance between embracing technology’s potential and safeguarding academic standards. Teachers emphasise guidance, structured use, and supervision to ensure AI supports learning rather than undermining it.
‘Europe is in a fight,’ European Commission President Ursula von der Leyen declared as she opened her 2025 State of the Union speech. Addressing the European Parliament in Strasbourg, von der Leyen noted that ‘Europe must fight. For its place in a world in which many major powers are either ambivalent or openly hostile to Europe.’ In response, she argued for Europe’s ‘Independence Moment’ – a call for strategic autonomy.
One of the central pillars of her plan? A major push to invest in digital and clean technologies. Let’s explore the key details from the speech.
The EU plans measures to support businesses and innovation, including a digital Euro and an upcoming omnibus on digital. Many European startups in key technologies like quantum, AI, and biotech seek foreign investment, which jeopardises the EU’s tech sovereignty, the speech notes. In response, the Commission will launch a multi-billion-euro Scaleup Europe Fund with private partners.
The Single Market remains incomplete, von der Leyen noted, mostly in three domains: finance, energy, and telecommunications. A Single Market Roadmap to 2028 will be presented, which will provide clear political deadlines.
Standing out in the speech was von der Leyen’s defence of Europe’s right to set its own standards and regulations. The assertion came right after her defence of the US-EU trade deal, making it a direct response to the mounting pressure and tariff threats from the US administration.
The EU needs ‘a European AI’, von der Leyen noted. Key initiatives include the Cloud and AI Development Act, the Quantum Sandbox, and the creation of European AI Gigafactories to help startups develop, train, and deploy next-generation AI models.
Additionally, CEOs of Europe’s leading tech companies will present their European AI & Tech Declaration, pledging to invest in and strengthen Europe’s tech sovereignty, von der Leyen stated.
Europe should consider implementing guidelines or limits for children’s social media use, von der Leyen noted. She pointed to Australia’s pioneering social media restrictions as a model under observation, indicating that Europe could adopt a similar approach. To ensure a well-informed and balanced policy, she announced plans to commission a panel of experts by the end of the year to advise on the best strategies for Europe.
Von der Leyen’s bet is that a potent mix of massive investment, streamlined regulation, and a unified public-private front can finally stop Europe from playing catch-up in the global economic race.
History is on her side in one key regard: when the EU and corporate champions unite, they win big on setting global standards, and GSM is just one example. But past glory is no guarantee of future success. The rhetoric is sharp, and the stakes are existential. Now, the pressure is on to deliver more than just a powerful speech.
IN OTHER NEWS THIS WEEK
The world’s eyes turned to Nepal this week, where authorities banned 26 social media platforms for 24 hours after nationwide protests, led largely by youth, against corruption. According to officials, the ban was introduced in an effort to curb misinformation, online fraud, and hate speech. It was lifted after the protests intensified and left 22 people dead. The events are likely to offer lessons for other governments grappling with the role of censorship during times of unrest.
Another country fighting corruption is Albania, using unusual means – the government made a pioneering move by introducing the world’s first AI-powered public official, named Diella. Appointed to oversee public procurement, the virtual minister represents an attempt to use technology itself to create a more transparent and efficient government, with the goal of ensuring procedures are ‘100% incorruptible.’ A laudable goal, but AI is only as unbiased as the data and algorithms it relies on. Still, it’s a daring first step.
Speaking of AI (and it seems we speak of little else these days), another nation is trying its best to adapt to the global transformation driven by rapid digitalisation and AI. Kazakhstan has announced an ambitious goal: to become a fully digital country within three years.
The central policy is the establishment of a new Ministry of Artificial Intelligence and Digital Development, which will oversee the comprehensive adoption of AI to modernise all sectors of the economy. This effort will be guided by a national strategy called ‘Digital Kazakhstan’ to combine all digital initiatives.
A second major announcement was the development of Alatau City, envisioned as the country’s innovation hub. Planned as the region’s first fully digital city, it will integrate Smart City technologies, allow cryptocurrency payments, and is being developed with the expertise of a leading Chinese company that helped build Shenzhen.
Has Kazakhstan bitten off more than it can chew in three years? Even developing a national strategy can take years; implementing AI across every sector of the economy is exponentially more complex. Kazakhstan has dared to dream big; now it must work hard to achieve it.
AI’s ‘magic’ comes with a price. Authors sued Apple last Friday for allegedly training its AI on their copyrighted books. In a related development, AI company Anthropic agreed to a massive $1.5 billion settlement for a similar case – what plaintiffs’ lawyers are calling the largest copyright recovery in history, even though the company admitted no fault. Will this settlement mark a dramatic shift in how AI companies operate? Without a formal court ruling, it creates no legal precedent. For now, the slow grind of the copyright fight continues.
The digital governance scene has been busy in Geneva this week. Here’s what we have tried to follow.
At the International Telecommunication Union (ITU), the Council Working Group (CWG) on WSIS and SDGs met on Tuesday and Wednesday to review ITU’s work on implementing the WSIS outcomes and the 2030 Agenda, and to discuss issues related to the ongoing WSIS+20 review process.
As we write this newsletter, the Expert Group on ITRs is finalising the report it must submit to the ITU Council, fulfilling its mandate to review the International Telecommunication Regulations (ITRs) in light of evolving global trends, technological developments, and current regulatory practices.
A draft version of the report notes that members have divergent views on whether the ITRs need revision and even on their overall relevance; there also doesn’t seem to be a consensus on whether and how the work on revising the ITRs should continue. On another topic, the CWG on international internet-related public policy issues is holding an open consultation on ensuring meaningful connectivity for landlocked developing countries.
Earlier in the week, the UN Institute for Disarmament Research (UNIDIR) hosted the Outer Space Security Conference, bringing together diplomats, policymakers, private actors, military experts and others to explore ways to shape a secure, inclusive and sustainable future for outer space.
Some of the issues discussed revolved around the implications of using emerging technologies such as AI and autonomous systems in the context of space technology and the cybersecurity challenges associated with such uses.
African priorities for the Global Digital Compact
In 2022, the idea of a Global Digital Compact was floated by the UN with the intention of developing shared principles for an open, free and secure digital future for all.
LOOKING AHEAD
The next meeting of the UN’s ‘Multi-Stakeholder Working Group on Data Governance’ is scheduled for 15-16 September in Geneva and is open to observers (both onsite and online).
In a recent event, experts from Diplo, the Open Knowledge Foundation (OKFN), and the Geneva Internet Platform analysed the Group’s progress and looked ahead to the September meeting. Catch up on the discussion and watch the full recording.
The 2025 WTO Public Forum will be held on 17–18 September in Geneva, and carries the theme ‘Enhance, Create, and Preserve.’ The forum aims to explore how digital advancements are reshaping global trade norms.
The agenda includes sessions that dig into the opportunities posed by e-commerce (such as improving connectivity, opening pathways for small businesses, and increasing market inclusivity), but also shows awareness of the risks – fragmentation of the digital space, uneven infrastructure, and regulatory misalignment, especially amid geopolitical tensions.
The Human Rights Council started its 60th session, which will continue until 8 October. A report on privacy in the digital age by OHCHR will be discussed next Thursday, 18 September. It examines the discrimination risks and the unequal enjoyment of the right to privacy associated with the collection and processing of data, and offers recommendations on how to prevent digitalisation from perpetuating or deepening discrimination and exclusion.
Among these are a recommendation for states to protect individuals from human rights abuses linked to corporate data processing and to ensure that digital public infrastructures are designed and used in ways that uphold the rights to privacy, non-discrimination and equality.
This summer saw power plays over US chips and China’s minerals, alongside the global AI race with its competing visions. Lessons of disillusionment and clarity reframed AI’s trajectory, while digital intrusions continued to reshape geopolitics. And in New York, the UN took a decisive step toward a permanent cybersecurity mechanism.
eIDAS 2 and the European Digital Identity Wallet aim to secure online interactions, reduce bureaucracy, and empower citizens across the EU with a reliable and user-friendly digital identity.