Teachers across Colorado are exploring how AI can be used as an instructional assistant to support classroom teaching and student learning.
Some educators are experimenting with generative AI tools that help with tasks like lesson planning, summarising material and creating examples, while also educating students on responsible use of AI.
The broader trend mirrors state and district efforts to develop AI strategies for education. Reports indicate that many districts are establishing steering committees and policies to guide the safe and effective use of AI in classrooms.
Others, in contrast, limit student access over privacy concerns, underscoring the need for training and clear guidelines.
Teachers have noted benefits such as time savings and personalised support, as well as challenges including ethical questions about plagiarism and student independence, highlighting a period of experimentation and adjustment as AI becomes part of mainstream education.
Rising living costs and economic instability are the biggest worries for young people worldwide. A World Economic Forum survey shows inflation dominates personal and global concerns.
Many young people fear that AI-driven automation will shrink entry-level job opportunities. Two-thirds expect fewer early career roles despite growing engagement with AI tools.
Nearly 60 per cent already use AI to build skills and improve employability. Side hustles and freelance work are increasingly common responses to economic pressure.
Youth respondents call for quality jobs, better education access and affordable housing. Climate change also ranks among the most serious long-term global risks.
Innovations across China are moving rapidly from laboratories into everyday use, spanning robotics, autonomous vehicles and quantum computing. Airports, hotels and city streets are increasingly becoming testing grounds for advanced technologies.
In Hefei, humanoid cleaning robots developed by local start-up Zerith are already operating in public venues across major cities. The company scaled from prototype to mass production within a year, securing significant commercial orders.
Beyond robotics, frontier research is finding industrial applications in energy, healthcare and manufacturing. Advances from fusion research and quantum mechanics are being adapted for cancer screening, battery safety and precision measurement.
Policy support and investment are accelerating this transition from research to market. National planning and local funding initiatives aim to turn scientific breakthroughs into scalable technologies with global reach.
China’s AI sector could narrow the technological gap with the United States through growing risk-taking and innovation, according to leading researchers. Despite export controls on advanced chipmaking tools, Chinese firms are accelerating development across multiple AI fields.
Yao Shunyu, a former senior researcher at ChatGPT maker OpenAI and now an AI scientist at Tencent, said a Chinese company could become the world’s leading AI firm within three to five years. He pointed to China’s strengths in electricity supply and infrastructure as key advantages.
Yao said the main bottlenecks remain in production capacity, including access to advanced lithography machines and a mature software ecosystem. These limits still restrict China’s ability to manufacture the most advanced semiconductors and to close the AI gap with the US.
China has developed a working prototype of an extreme-ultraviolet lithography machine that could eventually rival Western technology. However, Reuters reported the system has not yet produced functioning chips.
Sources familiar with the project said commercial chip production using the machine may not begin until around 2030. Until then, Chinese AI ambitions are likely to remain constrained by hardware limitations.
A US teenager targeted by explicit deepfake images has helped create a new training course. The programme aims to support students, parents and school staff facing online abuse.
The course explains how AI tools are used to create sexualised fake images. It also outlines legal rights, reporting steps and available victim support resources.
Research shows deepfake abuse is spreading among teenagers, despite stronger laws. One in eight US teens knows someone targeted by non-consensual fake images.
Developers say education remains critical as AI tools become easier to access. Schools are encouraged to adopt training to protect students and prevent harm.
AI is accelerating the creation of digital twins by reducing the time and labour required to build complex models. Consulting firm McKinsey says specialised virtual replicas can take six months or more to develop, but generative AI tools can now automate much of the coding process.
McKinsey analysts say AI can structure inputs and synthesise outputs for these simulations, while the models provide safe testing environments for AI systems. Together, the technologies can reduce costs, shorten development cycles, and accelerate deployment.
Quantum Elements, a startup backed by QNDL Participations and the USC Viterbi School of Engineering, is applying this approach to quantum computing. Its Constellation platform combines AI agents, natural language tools, and simulation software.
The company says quantum systems are hard to model because qubits behave differently across hardware types such as superconducting circuits, trapped ions, and photonics. These variations affect stability, error rates, and performance.
By using digital twins, developers can test algorithms, simulate noise, and evaluate error correction without building physical hardware. Quantum Elements says this can cut testing time from months to minutes.
Canopy Healthcare, one of New Zealand’s largest private medical oncology providers, has disclosed a data breach affecting patient and staff information, six months after the incident occurred.
The company said an unauthorised party accessed part of its administration systems on 18 July 2025, copying a ‘small’ amount of data. Affected information may include patient records, passport details, and some bank account numbers.
Canopy said it remains unclear exactly which individuals were impacted and what data was taken, adding that no evidence has emerged of the information being shared or published online.
Patients began receiving notifications in December 2025, prompting criticism over the delay. One affected patient said they were unhappy to learn about the breach months after it happened.
The New Zealand company said it notified police and the Privacy Commissioner at the time, secured a High Court injunction to prevent misuse of the data, and confirmed that its medical services continue to operate normally.
Luxembourg has hosted its largest national cyber defence exercise, Cyber Fortress, bringing together military and civilian specialists to practise responding to real-time cyberattacks on digital systems.
Since its launch in 2021, Cyber Fortress has evolved beyond a purely technical drill. The exercise now includes a realistic fictional scenario supported by media injects, creating a more immersive and practical training environment for participants.
This year’s edition expanded its international reach, with teams joining from Belgium, Latvia, Malta and the EU Cyber Rapid Response Teams. Around 100 participants also took part from a parallel site in Latvia, working alongside Luxembourg-based teams.
The exercise focuses on interoperability during cyber crises. Participants respond to multiple simulated attacks while protecting critical services, including systems linked to drone operations and other sensitive infrastructure.
Cyber Fortress now covers technical, procedural and management aspects of cyber defence. A new emphasis on disinformation, deepfakes and fake news reflects the growing importance of information warfare.
UK Prime Minister Keir Starmer is consulting Canada and Australia on a coordinated response to concerns surrounding social media platform X, after its AI assistant Grok was used to generate sexualised deepfake images of women and children.
The discussions focus on shared regulatory approaches rather than immediate bans.
X acknowledged weaknesses in its AI safeguards and limited image generation to paying users. Lawmakers in several countries have stated that further regulatory scrutiny may be required, while Canada has clarified that no prohibition is currently under consideration, despite concerns over platform responsibility.
In the UK, media regulator Ofcom is examining potential breaches of online safety obligations. Technology secretary Liz Kendall confirmed that enforcement mechanisms remain available if legal requirements are not met.
Australian Prime Minister Anthony Albanese also raised broader concerns about social responsibility in the use of generative AI.
X owner Elon Musk rejected accusations of non-compliance, describing potential restrictions as censorship and suppression of free speech.
European authorities requested the preservation of internal records for possible investigations, while Indonesia and Malaysia have already blocked access to the platform.
Google removed some AI health summaries after a Guardian investigation found they gave misleading and potentially dangerous information. The AI Overviews contained inaccurate liver test data, potentially leading patients to falsely believe they were healthy.
Experts have criticised AI Overviews for oversimplifying complex medical topics, ignoring essential factors such as age, sex, and ethnicity. Charities have warned that misleading AI content could deter people from seeking medical care and erode trust in online health information.
Google removed AI Overviews for some queries, but concerns remain over cancer and mental health summaries that may still be inaccurate or unsafe. Professionals emphasise that AI tools must direct users to reliable sources and advise seeking expert medical input.
The company stated it is reviewing flagged examples and making broad improvements, but experts insist that more comprehensive oversight is needed to prevent AI from dispensing harmful health misinformation.