US approaches universal 5G as global adoption surges

New data from Omdia and 5G Americas showed rapid global growth in wireless connectivity during the third quarter of 2025, with nearly three billion 5G connections worldwide.

North America remained the most advanced region for 5G adoption, with connections approaching parity with its population.

The US alone recorded 341 million 5G connections, one of the highest per-capita adoption rates in the world and well above the global average.

Analysts noted that strong device availability and sustained investment continue to reinforce the region’s leadership. Enhanced features such as improved uplink performance and integrated sensing are expected to accelerate the shift towards early 5G-Advanced capabilities.

Growth in cellular IoT also remained robust. North America supported more than 270 million connected devices and is forecast to reach nearly half a billion by 2030 as sectors such as manufacturing and utilities expand their use of connected systems.

AI is becoming central to these deployments by managing traffic, automating operations and enabling more innovative industrial applications.

Future adoption is set to intensify, with global 5G connections projected to surpass 8.6 billion by 2030.

Rising interest in fixed wireless access is also driving multi-device usage, offering high-speed connectivity to households and small firms that would otherwise depend on fibre networks that remain patchy in many areas.

Globally, fixed wireless access has passed 78 million connections, with strong annual growth. Analysts believe that expanding infrastructure will support demand for low-latency connectivity, and the addition of satellite-based systems is expected to extend coverage to remote locations.

By mid-November 2025, operators had launched 379 commercial 5G networks worldwide, including 17 in North America. A similar number of LTE networks operated across the region.

Industry observers said that expanding terrestrial and non-terrestrial networks will form a layered architecture that strengthens resilience, supports emergency response and improves service continuity across land, sea and air.

Three in ten US teens now use AI chatbots every day, survey finds

According to new data from the Pew Research Center, roughly 64% of US teens (aged 13–17) say they have used an AI chatbot; about three in ten (≈ 30%) report daily use. Among those teens, the leading chatbot is ChatGPT (used by 59%), followed by Gemini (23%) and Meta AI (20%).

The widespread adoption raises growing safety and welfare concerns. As teenagers increasingly rely on AI for information, companionship or emotional support, critics point to potential risks, including exposure to biased content, misinformation, or emotionally manipulative interactions, particularly among vulnerable youth.

Legal action has already followed, with the families of at least two minors suing AI developers over allegedly harmful advice from chatbots.

Demographic patterns show that Black and Hispanic teens report higher daily usage (around 33–35%) than their White peers (around 22%). Daily use is also more common among older teens (15–17) than younger ones.

For policymakers and digital governance stakeholders, the findings add urgency to calls for AI-specific safeguarding frameworks, especially where young people are concerned. As AI tools become embedded in adolescent life, ensuring transparency, responsible design, and robust oversight will be critical to preventing unintended harms.

China pushes global leadership on AI governance

Global discussions on artificial intelligence have multiplied, yet the world still lacks a coherent system to manage the technology’s risks. China is attempting to fill that gap by proposing a new World Artificial Intelligence Cooperation Organisation to coordinate regulation internationally.

Countries face mounting concerns over unsafe AI development, with the US relying on fragmented rules and voluntary commitments from tech firms. The EU has introduced binding obligations through its AI Act, although companies continue to push for weaker oversight.

China’s rapid rollout of safety requirements, including pre-deployment checks and watermarking of AI-generated content, is reshaping global standards as many firms overseas adopt Chinese open-weight models.

A coordinated international framework similar to the structure used for nuclear oversight could help governments verify compliance and stabilise the global AI landscape.

OpenAI launches training courses for workers and teachers

OpenAI has unveiled two training courses designed to prepare workers and educators for careers shaped by AI. The new AI Foundations course is delivered directly inside ChatGPT, enabling learners to practise tasks, receive guidance, and earn a credential that signals job-ready skills.

Employers, including Walmart, John Deere, Lowe’s, BCG and Accenture, are among the early adopters. Public-sector partners in the US are also joining pilots, while universities such as Arizona State and the California State system are testing certification pathways for students.

A second course, ChatGPT Foundations for Teachers, is available on Coursera and is designed for K-12 educators. It introduces core concepts, classroom applications and administrative uses, reflecting growing teacher reliance on AI tools.

OpenAI states that demand for AI skills is increasing rapidly, with workers trained in the field earning significantly higher salaries. The company frames the initiative as a key step toward its upcoming jobs platform.

US War Department unveils AI-powered GenAI.mil for all personnel

The War Department has formally launched GenAI.mil, a bespoke generative AI platform powered initially by Gemini for Government, making frontier AI capabilities available to its approximately three million military, civilian, and contractor staff.

According to the department’s announcement, GenAI.mil supports so-called ‘intelligent agentic workflows’: users can summarise documents, generate risk assessments, draft policy or compliance material, analyse imagery or video, and automate routine tasks, all on a secure, IL5-certified platform designed for Controlled Unclassified Information (CUI).
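
The department has not published the platform's interface, so the sketch below is purely hypothetical: it shows, in generic Python, how a simple 'agentic' routing layer might dispatch tasks like those listed above to prompt templates. None of the names are from GenAI.mil, and call_model is a stand-in for whatever model endpoint the platform actually exposes.

```python
# Hypothetical sketch of a task-routing "agentic" workflow; none of these
# names come from GenAI.mil, and call_model() is a placeholder for the
# underlying model endpoint, not a real API.

PROMPTS = {
    "summarise": "Summarise the following document in five bullet points:\n{payload}",
    "risk_assessment": "Draft a risk assessment for the activity described below:\n{payload}",
    "policy_draft": "Draft a short policy memo covering:\n{payload}",
}

def call_model(prompt: str) -> str:
    """Placeholder for a call to the underlying model (assumption, not a real API)."""
    return f"[model output for a prompt of {len(prompt)} characters]"

def run_task(task: str, payload: str) -> str:
    """Route a named task to its prompt template and return the model's answer."""
    template = PROMPTS.get(task)
    if template is None:
        raise ValueError(f"Unsupported task: {task}")
    return call_model(template.format(payload=payload))

if __name__ == "__main__":
    print(run_task("summarise", "Quarterly logistics readiness report..."))
```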

The rollout, described as part of a broader push to cultivate an ‘AI-first’ workforce, follows a July directive from the administration calling for the United States to achieve ‘unprecedented levels of AI technological superiority.’

Department leaders said the platform marks a significant shift in how the US military operates, embedding AI into daily workflows and positioning AI as a force multiplier.

Access is limited to users with a valid DoW common-access card, and the service is currently restricted to non-classified work. The department also says the first rollout is just the beginning; additional AI models from other providers will be added later.

From a tech-governance and defence-policy perspective, this represents one of the most sweeping deployments of generative AI in a national security organisation to date.

It raises critical questions about security, oversight and the balance between efficiency and risk, especially if future iterations expand into classified or operational planning contexts.

Teen chatbot use surges across the US

Nearly a third of US teenagers engage with AI chatbots each day, according to new Pew data. Researchers say nearly 70% have tried a chatbot, reflecting growing dependence on digital tools during schoolwork and leisure time. Concerns remain over exposure to mature content and possible mental health harms.

Pew surveyed almost 1,500 US teens aged 13 to 17, finding broadly similar usage patterns across gender and income. Older teens reported higher engagement, while Black and Hispanic teens showed slightly greater adoption than White peers.

Experts warn that frequent chatbot use may hinder development or encourage cheating in academic settings. Safety groups have urged parents to limit access to companion-like AI tools, citing risks posed by romantic or intimate interactions with minors.

Companies are now rolling out safeguards in response to public scrutiny and legal pressure. OpenAI and Character.AI have tightened controls, while Meta says it has adjusted policies following reports of inappropriate exchanges.

Data centre power demand set to triple by 2035

Data centre electricity demand is forecast to surge almost threefold by 2035. BloombergNEF reported that global facilities are expected to draw around 106 gigawatts of power by then.

Analysts linked the growth to larger sites and rising AI workloads, pushing utilisation rates higher. New projects are expanding rapidly, with many planned facilities exceeding 500 megawatts.
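
To put the 106-gigawatt figure in perspective, the back-of-envelope conversion below translates it into annual electricity consumption. The 80% average utilisation factor is an illustrative assumption, not a figure from the BloombergNEF report.

```python
# Back-of-envelope conversion of projected data centre power demand into
# annual electricity consumption. The utilisation factor is an assumed
# illustrative value, not part of the BloombergNEF forecast.

HOURS_PER_YEAR = 8760  # 24 hours x 365 days

def annual_energy_twh(power_gw: float, utilisation: float) -> float:
    """Convert an average power draw (GW) into annual energy use (TWh)."""
    return power_gw * utilisation * HOURS_PER_YEAR / 1000  # GWh -> TWh

if __name__ == "__main__":
    projected_gw_2035 = 106      # BloombergNEF projection cited above
    assumed_utilisation = 0.8    # hypothetical average load factor
    print(f"~{annual_energy_twh(projected_gw_2035, assumed_utilisation):.0f} TWh per year")
    # Prints roughly 743 TWh per year under these assumptions.
```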

Much of the new capacity is heading to states within the PJM grid, alongside significant additions in Texas. Regulators warned that grid operators must restrict connections when capacity risks emerge.

Industry monitors argued that soaring demand is contributing to higher regional electricity prices. They urged clearer rules to ensure reliability as the number of early-stage projects continues to grow rapidly.

Utah governor urges state control over AI rules

Utah’s governor, Spencer Cox, has again argued that states should retain authority over AI policy, warning that centralised national rules might fail to reflect local needs. He said state governments remain closer to their communities and are therefore better placed to respond quickly to emerging risks.

Cox explained that innovation often moves faster than federal intervention, and excessive national control could stifle responsible development. He also emphasised that different states face varied challenges, suggesting that tailored AI rules may be more effective in balancing safety and opportunity.

Debate across the US has intensified as lawmakers confront rapid advances in AI tools, with several states drafting their own frameworks. Cox suggested a cooperative model in which states lead and federal agencies play a supporting role without overriding regional safeguards.

Analysts say the governor’s comments highlight a growing split between national uniformity and local autonomy in technology governance. Supporters argue that adaptable state systems foster trust, while critics warn that a patchwork approach could complicate compliance for developers.

New phishing kit targets Microsoft 365 users

Researchers have uncovered a large phishing operation, known as Quantum Route Redirect (QRR), that creates fake Microsoft 365 login pages across nearly 1,000 domains. The campaign uses convincing email lures, including DocuSign notices and payment alerts, to steal user credentials.

QRR operations have reached 90 countries, with US users hit hardest. Analysts say the platform evades scanners by sending bots to safe pages while directing real individuals to credential-harvesting sites on compromised domains.

The kit emerged shortly after Microsoft disrupted the RaccoonO365 network, which had stolen thousands of accounts. Similar tools, such as VoidProxy and Darcula, have appeared, yet QRR stands out for its automation and ease of use, which enable rapid, large-scale attacks.

Cybersecurity experts warn that URL scanning alone can no longer stop such operations. Organisations are urged to adopt layered protection, stronger sign-in controls and behavioural monitoring to detect scams that increasingly mimic genuine Microsoft systems.
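
As a simple illustration of the kind of layered check experts describe, the sketch below flags credential-entry URLs whose host is not a genuine Microsoft sign-in domain. The allowlist is deliberately minimal and the heuristic is illustrative only; it is not a substitute for the stronger sign-in controls and behavioural monitoring mentioned above.

```python
from urllib.parse import urlparse

# Deliberately minimal allowlist of genuine Microsoft sign-in hosts,
# used here purely for illustration; a real deployment would maintain
# a vetted, regularly updated list.
LEGITIMATE_SIGNIN_HOSTS = {
    "login.microsoftonline.com",
    "login.live.com",
}

def looks_like_spoofed_signin(url: str) -> bool:
    """Flag a URL that asks for Microsoft 365 credentials but is not
    served from a known Microsoft sign-in host."""
    host = (urlparse(url).hostname or "").lower()
    return host not in LEGITIMATE_SIGNIN_HOSTS

if __name__ == "__main__":
    # Genuine Microsoft sign-in endpoint: not flagged.
    print(looks_like_spoofed_signin("https://login.microsoftonline.com/common/oauth2/v2.0/authorize"))  # False
    # Hypothetical lookalike page on an unrelated domain: flagged.
    print(looks_like_spoofed_signin("https://docusign-review.example.net/m365/login"))  # True
```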

AI development by Chinese companies shifts abroad

Leading Chinese technology companies are increasingly training their latest AI models outside the country to maintain access to Nvidia’s high-performance chips, according to a report by the Financial Times. Firms such as Alibaba and ByteDance are shifting parts of their AI development to data centres in Southeast Asia, a move that comes as the United States tightens restrictions on advanced chip exports to China.

The trend reportedly accelerated after Washington imposed new limits in April on the sale of Nvidia’s H20 chips, a key component for developing sophisticated large language models. By relying on leased server space operated by non-Chinese companies abroad, tech firms are able to bypass some of the effects of US export controls while continuing to train next-generation AI systems.

One notable exception is DeepSeek, which had already stockpiled a significant number of Nvidia chips before the export restrictions took effect. The company continues to train its models domestically and is now collaborating with Chinese chipmakers led by Huawei to develop and optimise homegrown alternatives to US hardware.

None of Alibaba, ByteDance, Nvidia, DeepSeek, or Huawei has commented publicly on the report, and Reuters stated that it could not independently verify the claims. However, the developments underscore the increasing complexity of global AI competition and the lengths to which companies may go to maintain technological momentum amid geopolitical pressure.
