OpenAI launches training courses for workers and teachers

OpenAI has unveiled two training courses designed to prepare workers and educators for careers shaped by AI. The new AI Foundations course is delivered directly inside ChatGPT, enabling learners to practise tasks, receive guidance, and earn a credential that signals job-ready skills.

Employers, including Walmart, John Deere, Lowe’s, BCG and Accenture, are among the early adopters. Public-sector partners in the US are also joining pilots, while universities such as Arizona State and the California State system are testing certification pathways for students.

A second course, ChatGPT Foundations for Teachers, is available on Coursera and is designed for K-12 educators. It introduces core concepts, classroom applications and administrative uses, reflecting growing teacher reliance on AI tools.

OpenAI states that demand for AI skills is increasing rapidly, with workers trained in the field earning significantly higher salaries. The company frames the initiative as a key step toward its upcoming jobs platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

US War Department unveils AI-powered GenAI.mil for all personnel

The War Department has formally launched GenAI.mil, a bespoke generative AI platform powered initially by Gemini for Government, making frontier AI capabilities available to its approximately three million military, civilian, and contractor staff.

According to the department’s announcement, GenAI.mil supports so-called ‘intelligent agentic workflows’: users can summarise documents, generate risk assessments, draft policy or compliance material, analyse imagery or video, and automate routine tasks, all on a secure, IL5-certified platform designed for Controlled Unclassified Information (CUI).

The rollout, described as part of a broader push to cultivate an ‘AI-first’ workforce, follows a July directive from the administration calling for the United States to achieve ‘unprecedented levels of AI technological superiority.’

Department leaders said the platform marks a significant shift in how the US military operates, embedding AI into daily workflows and positioning AI as a force multiplier.

Access is limited to users with a valid DoW Common Access Card, and the service is currently restricted to non-classified work. The department also says the first rollout is just the beginning; additional AI models from other providers will be added later.

From a tech-governance and defence-policy perspective, this represents one of the most sweeping deployments of generative AI in a national security organisation to date.

It raises critical questions about security, oversight and the balance between efficiency and risk, especially if future iterations expand into classified or operational planning contexts.

Teen chatbot use surges across the US

Nearly a third of US teenagers engage with AI chatbots each day, according to new Pew data. Researchers say nearly 70% have tried a chatbot, reflecting growing dependence on digital tools during schoolwork and leisure time. Concerns remain over exposure to mature content and possible mental health harms.

Pew surveyed almost 1,500 US teens aged 13 to 17, finding broadly similar usage patterns across gender and income. Older teens reported higher engagement, while Black and Hispanic teens showed slightly greater adoption than White peers.

Experts warn that frequent chatbot use may hinder development or encourage cheating in academic settings. Safety groups have urged parents to limit access to companion-like AI tools, citing risks posed by romantic or intimate interactions with minors.

Companies are now rolling out safeguards in response to public scrutiny and legal pressure. OpenAI and Character.AI have tightened controls, while Meta says it has adjusted policies following reports of inappropriate exchanges.

Data centre power demand set to triple by 2035

Data centre electricity use is forecast to surge almost threefold by 2035. BloombergNEF reported that global facilities are expected to draw around 106 gigawatts of power by then.

Analysts linked the growth to larger sites and rising AI workloads, pushing utilisation rates higher. New projects are expanding rapidly, with many planned facilities exceeding 500 megawatts.

Major capacity is heading to states within the PJM grid, alongside significant additions in Texas. Regulators warned that grid operators must restrict connections when capacity risks emerge.

Industry monitors argued that soaring demand contributes to higher regional electricity prices. They urged clearer rules to ensure reliability as the number of early-stage projects continues to grow rapidly.

Utah governor urges state control over AI rules

Utah’s governor, Spencer Cox, has again argued that states should retain authority over AI policy, warning that centralised national rules might fail to reflect local needs. He said state governments remain closer to communities and, therefore, better placed to respond quickly to emerging risks.

Cox explained that innovation often moves faster than federal intervention, and excessive national control could stifle responsible development. He also emphasised that different states face varied challenges, suggesting that tailored AI rules may be more effective in balancing safety and opportunity.

Debate across the US has intensified as lawmakers confront rapid advances in AI tools, with several states drafting their own frameworks. Cox suggested a cooperative model in which states lead and federal agencies play a supporting role without overriding regional safeguards.

Analysts say the governor’s comments highlight a growing split between national uniformity and local autonomy in technology governance. Supporters argue that adaptable state systems foster trust, while critics warn that a patchwork approach could complicate compliance for developers.

New phishing kit targets Microsoft 365 users

Researchers have uncovered a large phishing operation, known as Quantum Route Redirect (QRR), that creates fake Microsoft 365 login pages across nearly 1,000 domains. The campaign uses convincing email lures, including DocuSign notices and payment alerts, to steal user credentials.

QRR operations have reached 90 countries, with US users hit hardest. Analysts say the platform evades scanners by sending bots to safe pages while directing real individuals to credential-harvesting sites on compromised domains.
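The evasion technique described above, serving a benign page to automated scanners while sending real visitors to a credential-harvesting site, is a form of cloaking. One defensive check is to fetch the same URL under a scanner-like identity and a browser-like identity and compare the two pages: a sharp divergence is a cloaking signal. The sketch below is hypothetical and not part of any named product; it assumes the visible text of both responses has already been extracted, and the threshold is illustrative rather than a tuned production value.

```python
def page_divergence(text_a: str, text_b: str) -> float:
    """Jaccard divergence between the word sets of two page bodies.

    0.0 means identical vocabulary, 1.0 means no overlap at all.
    """
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)


def looks_cloaked(scanner_text: str, browser_text: str,
                  threshold: float = 0.6) -> bool:
    """Flag likely cloaking when the page served to a scanner-like client
    diverges sharply from the page served to a browser-like client.

    The threshold is illustrative; a real system would tune it on
    labelled traffic and combine it with other signals.
    """
    return page_divergence(scanner_text, browser_text) > threshold


# Example: a scanner sees harmless filler, a real user sees a login lure.
scanner_view = "Welcome to our product page"
browser_view = "Sign in to Microsoft 365 enter your password to continue"
print(looks_cloaked(scanner_view, browser_view))  # high divergence
```

A word-set comparison is deliberately crude; it ignores markup and ordering, but it is enough to show why serving two different pages to two audiences leaves a measurable fingerprint.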

The kit emerged shortly after Microsoft disrupted the RaccoonO365 network, which had stolen thousands of accounts. Similar tools, such as VoidProxy and Darcula, have appeared, yet QRR stands out for its automation and ease of use, which enable rapid, large-scale attacks.

Cybersecurity experts warn that URL scanning alone can no longer stop such operations. Organisations are urged to adopt layered protection, stronger sign-in controls and behavioural monitoring to detect scams that increasingly mimic genuine Microsoft systems.

AI development by Chinese companies shifts abroad

Leading Chinese technology companies are increasingly training their latest AI models outside the country to maintain access to Nvidia’s high-performance chips, according to a report by the Financial Times. Firms such as Alibaba and ByteDance are shifting parts of their AI development to data centres in Southeast Asia, a move that comes as the United States tightens restrictions on advanced chip exports to China.

The trend reportedly accelerated after Washington imposed new limits in April on the sale of Nvidia’s H20 chips, a key component for developing sophisticated large language models. By relying on leased server space operated by non-Chinese companies abroad, tech firms are able to bypass some of the effects of US export controls while continuing to train next-generation AI systems.

One notable exception is DeepSeek, which had already stockpiled a significant number of Nvidia chips before the export restrictions took effect. The company continues to train its models domestically and is now collaborating with Chinese chipmakers led by Huawei to develop and optimise homegrown alternatives to US hardware.

Neither Alibaba, ByteDance, Nvidia, DeepSeek, nor Huawei has commented publicly on the report, and Reuters stated that it could not independently verify the claims. However, the developments underscore the increasing complexity of global AI competition and the lengths to which companies may go to maintain technological momentum amid geopolitical pressure.

AI use by US immigration agents sparks concern

A US federal judge has condemned immigration agents in Chicago for using AI to draft use-of-force reports, warning that the practice undermines credibility. Judge Sara Ellis noted that one agent fed a short description and images into ChatGPT before submitting the report.

Body camera footage cited in the ruling showed discrepancies between events recorded and the written narrative. Experts say AI-generated accounts risk inaccuracies in situations where courts rely on an officer’s personal recollection to assess reasonableness.

Researchers argue that poorly supervised AI use could erode public trust and compromise privacy. Some warn that uploading images into public tools relinquishes control of sensitive material, exposing it to misuse.

Police departments across the US are still developing policies for safe deployment of generative tools. Several states now require officers to label AI-assisted reports, while specialists call for stronger guardrails before the technology is applied in high-stakes legal settings.

AWS commits $50bn to US government AI

Amazon Web Services plans to invest $50 billion in high-performance AI infrastructure dedicated to US federal agencies. The programme aims to broaden access to AWS tools such as SageMaker AI, Bedrock and model customisation services, alongside support for Anthropic’s Claude.

The expansion will add around 1.3 gigawatts of compute capacity, enabling agencies to run larger models and speed up complex workloads. AWS expects construction of the new data centres to begin in 2026, marking one of its most ambitious government-focused buildouts to date.

Chief executive Matt Garman argues the upgrade will remove long-standing technology barriers within government. The company says enhanced AI capabilities could accelerate work in areas ranging from cybersecurity to medical research while strengthening national leadership in advanced computing.

AWS has spent more than a decade developing secure environments for classified and sensitive government operations. Competitors have also stepped up US public sector offerings, with OpenAI, Anthropic and Google all rolling out heavily discounted AI products for federal use over the past year.

US warns of rising senior health fraud as AI lifts scam sophistication

AI-driven fraud schemes are on the rise across the US health system, exposing older adults to increasing financial and personal risks. Officials say tens of billions in losses have already been uncovered this year. High medical use and limited digital literacy leave seniors particularly vulnerable.

Criminals rely on schemes such as phantom billing, upcoding and identity theft using Medicare numbers. Fraud spans home health, hospice care and medical equipment services. Authorities warn that the ageing population will deepen exposure and increase long-term harm.

AI has made scams harder to detect by enabling cloned voices, deepfakes and convincing documents. The tools help impersonate providers and personalise attacks at scale. Even cautious seniors may struggle to recognise false calls or messages.

Investigators are also using AI to counter fraud by spotting abnormal billing, scanning records for inconsistencies and flagging high-risk providers. Cross-checking data across clinics and pharmacies helps identify duplicate claims. Automated prompts can alert users to suspicious contacts.
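The cross-checking step described above can be sketched as a simple duplicate-claim test: group claims by patient, service and date, then flag any combination billed by more than one provider. The record layout and field names below are hypothetical, chosen only to illustrate the idea.

```python
from collections import defaultdict

# Hypothetical claim records: (patient_id, provider, service_code, date).
claims = [
    ("P001", "Clinic A", "99213", "2025-03-01"),
    ("P001", "Clinic B", "99213", "2025-03-01"),  # same visit billed twice
    ("P002", "Pharmacy X", "RX450", "2025-03-02"),
]


def duplicate_claims(records):
    """Flag identical services billed for the same patient on the same
    date by more than one provider -- one simple duplicate-claim signal.

    Returns a mapping of (patient_id, service_code, date) to the set of
    providers that billed it, for every combination with 2+ providers.
    """
    by_service = defaultdict(set)
    for patient, provider, code, date in records:
        by_service[(patient, code, date)].add(provider)
    return {key: providers for key, providers in by_service.items()
            if len(providers) > 1}


flagged = duplicate_claims(claims)
for (patient, code, date), providers in flagged.items():
    print(patient, code, date, sorted(providers))
```

Real fraud-detection pipelines layer statistical and machine-learning signals on top of exact-match rules like this one, but the same group-and-compare structure underlies the cross-clinic and cross-pharmacy checks mentioned above.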

Experts urge seniors to monitor statements, ignore unsolicited calls and avoid clicking unfamiliar links. They should verify official numbers, protect Medicare details and use strong login security. Suspicious activity should be reported to Medicare or to local fraud response teams.
