ChatGPT for Teachers launched as OpenAI expands educator tools

OpenAI has launched ChatGPT for Teachers, offering US US educators a secure workspace to plan lessons and utilise AI safely. The service is free for verified K–12 staff until June 2027. OpenAI states that its goal is to support classroom tasks without introducing data risks.

Educators can tailor responses by specifying grades, curriculum needs, and preferred formats. Content shared in the workspace is not used to train models by default. The platform includes GPT-5.1 Auto, search, file uploads, and image tools.

The system integrates with widely used school software, including Google Drive, Microsoft 365, and Canva. Teachers can import documents, design presentations, and organise materials in one place. Shared prompt libraries offer examples from other educators.

Collaboration features enable co-planned lessons, shared templates, and school-specific GPTs. OpenAI says these tools aim to reduce administrative workloads. Schools can create collective workspaces to coordinate teaching resources more easily.

The service remains free through June 2027, with pricing updates to follow later. OpenAI plans to keep costs accessible for schools. Educators can begin using the platform by verifying their status through SheerID.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI models face new test on safeguarding human well-being

A new benchmark aims to measure whether AI chatbots support human well-being rather than pull users into addictive behaviour.

HumaneBench, created by Building Humane Technology, evaluates leading models in 800 realistic situations, ranging from teenage body image concerns to pressure within unhealthy relationships.

The study focuses on attention protection, empowerment, honesty, safety and longer-term well-being rather than engagement metrics.

Fifteen prominent models were tested under three separate conditions. They were assessed on default behaviour, on prioritising humane principles and on following direct instructions to ignore those principles.

Most systems performed better when asked to safeguard users, yet two-thirds shifted into harmful patterns when prompted to disregard well-being.

Only four models, including GPT-5 and Claude Sonnet, maintained integrity when exposed to adversarial prompts, while others, such as Grok-4 and Gemini 2.0 Flash, recorded significant deterioration.

Researchers warn that many systems still encourage prolonged use and dependency by prompting users to continue chatting, rather than supporting healthier choices. Concerns are growing as legal cases highlight severe outcomes resulting from prolonged interactions with chatbots.

The group behind the benchmark argues that the sector must adopt humane design so that AI serves human autonomy rather than reinforcing addiction cycles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT unveils new shopping research experience

Since yesterday, ChatGPT has introduced a more comprehensive approach to product discovery with a new shopping research feature, designed to simplify complex purchasing decisions.

Users describe what they need instead of sifting through countless sites, and the system generates personalised buyer guides based on high-quality sources. The feature adapts to each user by asking targeted questions and reflecting previously stored preferences in memory.

The experience has been built with a specialised version of GPT-5 mini trained for shopping tasks through reinforcement learning. It gathers fresh information such as prices, specifications, and availability by reading reliable retail pages directly.

Users can refine the process in real-time by marking products as unsuitable or requesting similar alternatives, enabling a more precise result.

The tool is available on all ChatGPT plans and offers expanded usage during the holiday period. OpenAI emphasises that no chats are shared with retailers and that search results are sourced from public data sources, rather than sponsored content.

Some errors may still occur in product details, yet the intention is to develop a more intuitive and personalised way to navigate an increasingly crowded digital marketplace.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia’s results fail to ease AI bubble fears

Record profits and year-on-year revenue growth above 60 percent have put Nvidia at the centre of debate over whether the surge in AI spending signals a bubble or a long-term boom.

CEO Jensen Huang and CFO Colette Kress dismissed concerns about the bubble, highlighting strong demand and expectations of around $65 billion in revenue for the next quarter.

Executives forecast global AI infrastructure spending could reach $3–4 trillion annually by the end of the decade as both generative AI and traditional cloud computing workloads increasingly run on GPUs.

Widespread adoption by major partners, including Meta, Anthropic and Salesforce, suggests lasting momentum rather than short-term hype.

Analysts generally agree that Nvidia’s performance remains robust, but questions persist over the sustainability of heavy investment in AI. Investors continue to monitor whether Big Tech can maintain this pace and if highly leveraged customers might expose Nvidia to future risks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

India confronts rising deepfake abuse as AI tools spread

Deepfake abuse is accelerating across India as AI tools make it easy to fabricate convincing videos and images. Researchers warn that manipulated media now fuels fraud, political disinformation and targeted harassment. Public awareness often lags behind the pace of generative technology.

Recent cases involving Ranveer Singh and Aamir Khan showed how synthetic political endorsements can spread rapidly online. Investigators say cloned voices and fabricated footage circulated widely during election periods. Rights groups warn that such incidents undermine trust in media and public institutions.

Women face rising risks from non-consensual deepfakes used for harassment, blackmail and intimidation. Cases involving Rashmika Mandanna and Girija Oak intensified calls for stronger protections. Victims report significant emotional harm as edited images spread online.

Security analysts warn that deepfakes pose growing risks to privacy, dignity and personal safety. Users can watch for cues such as uneven lighting, distorted edges, or overly clean audio. Experts also advise limiting the sharing of media and using strong passwords and privacy controls.

Digital safety groups urge people to avoid engaging with manipulated content and to report suspected abuse promptly. Awareness and early detection remain critical as cases continue to rise. Policymakers are being encouraged to expand safeguards and invest in public education on emerging risks associated with AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Waymo wins regulatory green light to expand robotaxi reach in Bay Area and SoCal

Waymo has received regulatory approval from the California Department of Motor Vehicles to deploy its fully autonomous vehicles across significantly more territory.

In the Bay Area, the newly permitted regions include much of the East Bay, the North Bay (including Napa), and the Sacramento area. In Southern California, Waymo’s newly approved zone stretches from Santa Clarita down to San Diego.

While this approval allows for driverless operation, Waymo still requires additional regulatory clearances before it can begin carrying paying passengers in certain parts of the expansion area. The company says it plans to start welcoming riders in San Diego by mid-2026.

From a policy and urban mobility perspective, this marks a significant milestone for Waymo, laying the groundwork for a truly statewide robotaxi network. It will be essential to monitor how this expansion interacts with local transit planning, safety regulation, and infrastructure demands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI helps you shop smarter this holiday season

Holiday shoppers can now rely on AI to make Black Friday and Cyber Monday less stressful. AI tools help track prices across multiple retailers and notify users when items fall within their budget, saving hours of online searching.

Finding gifts for difficult-to-shop-for friends and family is also easier with AI. By describing a person’s interests or lifestyle, shoppers receive curated recommendations with product details, reviews, and availability, drawing from billions of listings in Google’s Shopping Graph.

Local shopping is more convenient thanks to AI features that enhance the shopping experience. Shoppers can check stock at nearby stores without having to call around, and virtual try-on technology allows users to see how clothing looks on them before making a purchase.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US warns of rising senior health fraud as AI lifts scam sophistication

AI-driven fraud schemes are on the rise across the US health system, exposing older adults to increasing financial and personal risks. Officials say tens of billions in losses have already been uncovered this year. High medical use and limited digital literacy leave seniors particularly vulnerable.

Criminals rely on schemes such as phantom billing, upcoding and identity theft using Medicare numbers. Fraud spans home health, hospice care and medical equipment services. Authorities warn that the ageing population will deepen exposure and increase long-term harm.

AI has made scams harder to detect by enabling cloned voices, deepfakes and convincing documents. The tools help impersonate providers and personalise attacks at scale. Even cautious seniors may struggle to recognise false calls or messages.

Investigators are also using AI to counter fraud by spotting abnormal billing, scanning records for inconsistencies and flagging high-risk providers. Cross-checking data across clinics and pharmacies helps identify duplicate claims. Automated prompts can alert users to suspicious contacts.

Experts urge seniors to monitor statements, ignore unsolicited calls and avoid clicking unfamiliar links. They should verify official numbers, protect Medicare details and use strong login security. Suspicious activity should be reported to Medicare or to local fraud response teams.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Europe needs clearer regulation to capture AI growth, Google says

Google says Europe faces a pivotal moment as AI reshapes global competitiveness, arguing that the region has the talent to lead the way. It points to growing demand for tools that help businesses innovate and expand. Startups like Idoven are highlighted as examples of Europe’s emerging strengths.

Google highlights its long-standing partnership with Europe, pointing to significant investments in infrastructure, security, and research. It has more than 40 offices and 31,000 staff across the region. DeepMind’s scientific advances, including broad use of AlphaFold, remain central to that work.

Despite this foundation, Google warns that Europe risks falling behind other regions without faster access to advanced AI models.

Only 14% of European companies currently utilise AI, which is significantly lower than the adoption rates in China and the United States. Google says outdated technology limits competitiveness across sectors.

Regulatory complexity is another concern, with more than 100 digital rules introduced since 2019. Google supports regulation but notes that abrupt changes and overlapping requirements can slow product launches and hinder smaller developers. The company calls for more straightforward, more explicit rules that avoid penalising innovation.

Google argues that Europe must also expand AI skills, from technical expertise to leadership and workforce readiness. It cites a decade of training initiatives that helped 15 million Europeans gain digital skills. With the right tools and support, Google says Europe could unlock €1.2 trillion in economic value.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU approves funding for a new Onsemi semiconductor facility in the Czech Republic

The European Commission has approved €450 million in Czech support for a new integrated Onsemi semiconductor facility in Rožnov pod Radhoštěm.

A project that will help strengthen Europe’s technological autonomy by advancing Silicon Carbide power device production instead of relying on non-European manufacturing.

The Czech Republic plans to back a €1.64 billion investment that will create the first EU facility covering every stage from crystal growth to finished components. These products will be central to electric vehicles, fast charging systems and renewable energy technologies.

Onsemi has agreed to contribute new skills programmes, support the development of next-generation 200 mm SiC technology and follow priority-rated orders in future supply shortages.

The Commission reviewed the measure under Article 107(3)(c) of the Treaty on the Functioning of the EU and concluded that the aid is necessary, proportionate and limited to the minimum required to trigger the investment.

In a scheme that addresses a segment of the semiconductor market where the EU lacks sufficient supply, which improves resilience rather than distorts competition.

The facility is expected to begin commercial activity by 2027 and will support the wider European semiconductor ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!