SenseTime and Cambricon strengthen cooperation for China’s AI future

SenseTime and Cambricon Technologies have entered a strategic cooperation agreement to jointly develop an open and mutually beneficial AI ecosystem in China. The partnership will focus on software-hardware integration, vertical industry innovation, and the globalisation of AI technologies.

By combining SenseTime’s strengths in large-model R&D, AI infrastructure, and industrial applications with Cambricon’s expertise in intelligent computing chips and high-performance hardware, the collaboration supports China’s national ‘AI+’ strategy.

Both companies aim to foster a new AI development model defined by synergy between software and hardware, enhancing domestic innovation and global competitiveness in the AI sector.

The agreement also includes co-development of adaptive chip solutions and integrated AI systems for enterprise and industrial use. By focusing on compatibility between the latest AI models and hardware architectures, the two firms plan to offer scalable, high-efficiency computing solutions.

The partnership seeks to drive intelligent transformation across industries and promote the growth of emerging AI enterprises through joint innovation and ecosystem building.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Adult erotica tests OpenAI’s safety claims

OpenAI will loosen some ChatGPT rules, letting users make replies friendlier and allowing erotica for verified adults. CEO Sam Altman framed the shift as ‘treat adult users like adults’, tied to stricter age-gating. The move follows months of new guardrails against sycophancy and harmful dynamics.

The change arrives after reports of vulnerable users forming unhealthy attachments to earlier models. OpenAI has since launched GPT-5 with reduced sycophancy and behaviour routing, plus safeguards for minors and a mental-health council. Critics question whether evidence justifies loosening limits so soon.

Erotic role-play can boost engagement, raising concerns that at-risk users may stay online longer. Access will be restricted to verified adults via age prediction and, if contested, ID checks. That trade-off intensifies privacy tensions around document uploads and potential errors.

It is unclear whether permissive policies will extend to voice, image, or video features, or how regional laws will apply to them. OpenAI says it is not ‘usage-maxxing’ but balancing utility with safety. Observers note that ambitions to reach a billion users heighten moderation pressures.

Supporters cite overdue flexibility for consenting adults and more natural conversation. Opponents warn normalising intimate AI may outpace evidence on mental-health impacts. Age checks can fail, and vulnerable users may slip through without robust oversight.


Google and World Bank join forces to build AI-driven public infrastructure

Google and the World Bank Group have announced a partnership to develop AI-powered digital infrastructure for emerging markets. The collaboration aims to accelerate digital transformation by deploying Open Network Stacks that make essential public services more accessible.

The initiative combines Google Cloud’s Gemini AI models with the World Bank Group’s development expertise to help governments build interoperable networks in key areas such as healthcare, agriculture and education. Citizens will be able to access these services in over 40 languages, even on basic devices.

A successful pilot project in India’s Uttar Pradesh demonstrated how AI can improve livelihoods, with smallholder farmers increasing profitability through digital tools.

To support long-term growth, Google.org is funding a new nonprofit, Networks for Humanity, which will build universal digital infrastructure, create regional innovation labs and test social impact applications globally.


Microsoft finds 71% of UK workers use unapproved AI tools on the job

A new Microsoft survey has revealed that nearly three in four employees in the UK use AI tools at work without company approval.

The practice, referred to as ‘shadow AI’, involves workers relying on unapproved systems such as ChatGPT to complete routine tasks. Microsoft warned that unauthorised AI use could expose businesses to data leaks, non-compliance risks, and cyber attacks.

The survey, carried out by Censuswide, questioned over 2,000 employees across different sectors. Seventy-one per cent admitted to using AI tools outside official policies, often because they were already familiar with them in their personal lives.

Many reported using such tools to respond to emails, prepare presentations, and perform financial or administrative tasks, saving almost eight hours of work each week.

Microsoft said only enterprise-grade AI systems can provide the privacy and security organisations require. Darren Hardman, Microsoft’s UK and Ireland chief executive, urged companies to ensure workplace AI tools are designed for professional use rather than consumer convenience.

He emphasised that secure integration can allow firms to benefit from AI’s productivity gains while protecting sensitive data.

The study estimated that AI technology saves 12.1 billion working hours annually across the UK, equivalent to about £208 billion in employee time. Workers reported using the time gained through AI to improve work-life balance, learn new skills, and focus on higher-value projects.
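
The article’s headline figures imply a valuation of roughly £17 per working hour saved. A quick arithmetic check (using only the two numbers quoted above, not Microsoft’s underlying data):

```python
# Rough sanity check of the survey's headline figures.
# Both inputs are the values quoted in the article, not raw survey data.
hours_saved_per_year = 12.1e9   # working hours saved annually across the UK
value_of_time_gbp = 208e9       # estimated value of that time, in pounds

implied_hourly_rate = value_of_time_gbp / hours_saved_per_year
print(round(implied_hourly_rate, 2))  # ≈ 17.19 pounds per hour
```

The implied rate is broadly consistent with a UK-wide average hourly labour cost, which suggests the £208 billion estimate is a straightforward time-times-wage calculation.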


Teen content on Instagram now guided by PG-13 standards

Instagram is aligning its Teen Accounts with PG-13 movie standards, aiming to ensure that users under 18 only see age-appropriate material. Teens will automatically be placed in a 13+ setting and will need parental permission to change it.

Parents who want tighter supervision can activate a new ‘Limited Content’ mode that filters out even more material and restricts comments and AI interactions.

The company reviewed its policies to match familiar parental guidelines, further limiting exposure to content with strong language, risky stunts, or references to substances. Teens will also be blocked from following accounts that share inappropriate content or contain suggestive names and bios.

Searches for sensitive terms such as ‘gore’ or ‘alcohol’ will no longer return results, and the same restrictions will extend to Explore, Reels, and AI chat experiences.

Instagram worked with thousands of parents worldwide to shape these policies, collecting more than three million content ratings to refine its protections. Surveys show strong parental support, with most saying the PG-13 system makes it easier to understand what their teens are likely to see online.

The updates begin rolling out in the US, UK, Australia, and Canada and will expand globally by the end of the year.


Argentina poised to host Latin America’s first Stargate AI project

Argentina is set to become the host of Latin America’s first Stargate project, a major AI infrastructure initiative powered by clean energy. Led by Sur Energy with OpenAI, the plan aims to make Argentina a regional and global AI leader while boosting economic growth.

OpenAI and Sur Energy have signed a Letter of Intent to explore building a large-scale data centre in Argentina. Sur Energy will lead the consortium responsible for developing the project, ensuring that the ecosystem is powered by secure, efficient, and sustainable energy sources.

OpenAI is expected to be a key offtaker for the facility.

The project follows high-level talks in Buenos Aires between President Javier Milei, government ministers, and an OpenAI delegation led by Chris Lehane. With AI use tripling and millions using ChatGPT, Argentina ranks among Latin America’s top AI developers, making it an ideal choice for the project.

Under the company’s ‘OpenAI for Countries’ initiative, discussions are underway to integrate AI tools into government operations. CEO Sam Altman said the project represents ‘more than just infrastructure’ and will help make Argentina an AI hub for Latin America.

Sur Energy’s Emiliano Kargieman called it a historic opportunity that combines renewable energy with digital innovation to create jobs and attract global investment.


New AI predicts future knee X-rays for osteoarthritis patients

In the UK, an AI system developed at the University of Surrey can predict what a patient’s knee X-ray will look like a year in the future, offering a visual forecast alongside a risk score for osteoarthritis progression.

The technology is designed to help both patients and doctors better understand how the condition may develop, allowing earlier and more informed treatment decisions.

Trained on nearly 50,000 knee X-rays from almost 5,000 patients, the system delivers faster and more accurate predictions than existing AI tools.

It uses a generative diffusion model to produce a future X-ray and highlights 16 key points in the joint, giving clinicians transparency and confidence in the areas monitored. Patients can compare their current and predicted X-rays, which can encourage adherence to treatment plans and lifestyle changes.
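
As a rough illustration of how diffusion-based image generation works in general (this toy sketch does not reflect the Surrey team’s actual architecture or training data), sampling starts from pure noise and repeatedly refines it towards an image:

```python
import numpy as np

def toy_denoising_loop(shape=(64, 64), steps=10, seed=0):
    """Toy sketch of diffusion-style sampling: start from Gaussian noise
    and iteratively nudge it towards a denoised estimate. A real system
    would replace fake_denoiser with a trained neural network conditioned
    on the patient's current X-ray."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)  # start from pure noise

    def fake_denoiser(x_t, t):
        # Stand-in for a trained network: simply shrink towards zero.
        return x_t * 0.5

    for t in reversed(range(steps)):
        # Each step mixes the denoised estimate with a little fresh noise.
        x = fake_denoiser(x, t) + 0.1 * rng.standard_normal(shape)
    return x

predicted_image = toy_denoising_loop()
print(predicted_image.shape)  # (64, 64)
```

In the clinical setting described above, the model’s output would additionally be annotated with the 16 joint keypoints so clinicians can see exactly which regions drive the risk score.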

Researchers hope the technology could be adapted for other chronic conditions, including lung disease in smokers or heart disease progression, providing similar visual insights.

The team is seeking partnerships to integrate the system into real-world clinical settings, potentially transforming how millions of people manage long-term health conditions.


Abu Dhabi deploys AI-first systems with NVIDIA and Oracle

Oracle and NVIDIA have joined forces to advance sovereign AI, supporting Abu Dhabi’s vision of becoming an AI-native government by 2027.

The partnership combines the computing platforms of NVIDIA with Oracle Cloud Infrastructure to create secure, high-performance systems that deliver next-generation citizen services, including multilingual AI assistants, automatic notifications, and intelligent compliance solutions.

Abu Dhabi’s Government Digital Strategy 2025–2027, backed by a 13 billion AED investment, follows a phased ‘crawl, walk, run’ approach. The initiative has already gone live across 25 government entities, enabling over 15,000 daily users to access AI-accelerated services.

Generative AI applications are now integrated into human resources, procurement, and financial reporting, while advanced agentic AI and autonomous workflows will further enhance government-wide operations.

The strategy ensures full data sovereignty while driving innovation and efficiency across the public sector.

Partnerships with Deloitte and Core42 provide infrastructure and compliance support, while over 200 AI-powered capabilities are deployed to boost digital skills, economic growth, and employment opportunities.

By 2027, the initiative is expected to contribute more than 24 billion AED to Abu Dhabi’s GDP and create over 5,000 jobs, demonstrating a global blueprint for AI-native government transformation.


Researchers expose weak satellite security with cheap equipment

Scientists in the US have shown how easy it is to intercept private messages and military information from satellites using equipment costing less than €500.

Researchers from the University of California, San Diego and the University of Maryland scanned internet traffic from 39 geostationary satellites and 411 transponders over seven months.

They discovered unencrypted data, including phone numbers, text messages, and browsing history from networks such as T-Mobile, TelMex, and AT&T, as well as sensitive military communications from the US and Mexico.

The researchers used everyday tools such as TV satellite dishes to collect and decode the signals, proving that anyone with a basic setup and a clear view of the sky could potentially access unprotected data.
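
Why unencrypted traffic is so readily exposed can be illustrated with a crude heuristic (entirely hypothetical code, not the researchers’ tooling): plaintext payloads are dominated by printable characters, while well-encrypted data is statistically indistinguishable from random bytes:

```python
import string

# Set of byte values corresponding to printable ASCII characters.
PRINTABLE = set(string.printable.encode())

def looks_like_plaintext(payload: bytes, threshold: float = 0.9) -> bool:
    """Flag a payload as likely unencrypted if nearly all of its bytes
    are printable ASCII. Encrypted data resembles uniform random bytes
    and therefore fails this test."""
    if not payload:
        return False
    printable = sum(b in PRINTABLE for b in payload)
    return printable / len(payload) >= threshold

# A text message leaks straight through an unencrypted link...
print(looks_like_plaintext(b"SMS: your verification code is 482913"))  # True
# ...while byte values spread across the full 0-255 range do not.
print(looks_like_plaintext(bytes(range(256))))  # False
```

Real interception also involves demodulating and reassembling the satellite signal, but once unencrypted frames are recovered, reading their contents is this straightforward.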

They said there is a ‘clear mismatch’ between how satellite users assume their data is secured and how it is handled in reality. Despite the industry’s standard practice of encrypting communications, many transmissions were left exposed.

Companies often avoid stronger encryption because it increases costs and reduces bandwidth efficiency. The researchers noted that firms such as Panasonic could lose up to 30 per cent in revenue if all data were encrypted.

While intercepting satellite data still requires technical skill and precise equipment alignment, the study highlights how affordable tools can reveal serious weaknesses in global satellite security.


New YouTube tools provide trusted health advice for teens

YouTube is introducing a new shelf of mental health and wellbeing content designed specifically for teenagers. The feature will provide age-appropriate, evidence-based videos covering topics such as depression, anxiety, ADHD, and eating disorders.

Content is created in collaboration with trusted organisations and creators, including Black Dog Institute, ReachOut Australia, and Dr Syl, to ensure it is both reliable and engaging.

The initiative will initially launch in Australia, with plans to expand to the US, the UK, and Canada. Videos are tailored to teens’ developmental stage, offering practical advice, coping strategies, and medically informed guidance.

By providing credible information on a familiar platform, YouTube hopes to improve mental health literacy and reduce stigma among young users.

YouTube has implemented teen-specific safeguards for recommendations, content visibility, and advertising eligibility, making it easier for adolescents to explore their interests safely.

The company emphasises that the platform is committed to helping teens access trustworthy resources, while supporting their wellbeing in a digital environment increasingly filled with misinformation.
