AI reshaping the US labour market

AI is often seen as a job destroyer, but it’s also emerging as a significant source of new employment, according to a new Brookings report. The number of job postings mentioning AI has more than doubled in the past year, with demand continuing to surge across various industries and regions.

Over the past 15 years, AI-related job listings have grown nearly 29% annually, far outpacing the 11% growth rate of overall job postings in the broader economy.

Brookings based its findings on data from Lightcast, a labour market analytics firm, and noted rising demand for AI skills across sectors, including manufacturing. According to the US Census Bureau’s Business Trends Survey, the share of manufacturers using AI has jumped from 4% in early 2023 to 9% by mid-2025.

Yet, AI jobs still form a small part of the market. Goldman Sachs predicts widespread AI adoption will peak in the early 2030s, with a slower near-term influence on jobs. ‘AI is visible in the micro labour market data, but it doesn’t dominate broader job dynamics,’ said Joseph Briggs, an economist at Goldman Sachs.

Roles range from AI engineers and data scientists to consultants and marketers learning to integrate AI into business operations responsibly and ethically. In 2025, over 80,000 job postings cited generative AI skills—up from fewer than 4,000 in 2010, Brookings reported, indicating explosive long-term growth.

Job openings involving ‘responsible AI’—those addressing ethical AI use in business and society—are also rising, according to data from Indeed and Lightcast. ‘As AI evolves, so does what counts as an AI job,’ said Cory Stahle of the Indeed Hiring Lab, noting that definitions shift with new business applications.

AI skills carry financial value, too. Lightcast found that jobs requiring AI expertise offer an average salary premium of $18,000, or 28% more annually. Unsurprisingly, tech hubs like Silicon Valley and Seattle dominate AI hiring, but growth is spreading to regions such as the Sunbelt and the East Coast.

Mark Muro of Brookings noted that universities play a key role in AI job growth across new regions by fuelling local innovation. AI is also entering non-tech fields such as finance, human resources, and marketing, with more than half of AI-related postings now outside IT roles.

Muro expects more widespread AI adoption in the next few years, as employers gain clarity on its value, limitations and potential for productivity. ‘There’s broad consensus that AI boosts productivity and economic competitiveness,’ he said. ‘It energises regional leaders and businesses to act more quickly.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts urge broader values in AI development

Since the launch of ChatGPT in late 2022, the private sector has led AI innovation. Major players like Microsoft, Google, and Alibaba—alongside emerging firms such as Anthropic and Mistral—are racing to monetise AI and secure long-term growth in the technology-driven economy.

But during the Fortune Brainstorm AI conference in Singapore this week, experts stressed the importance of human values in shaping AI’s future. Anthea Roberts, founder of Dragonfly Thinking, argued that AI must be built not just to think faster or cheaper, but also to think better.

She highlighted the risk of narrow thinking—national, disciplinary or algorithmic—and called for diverse, collaborative thinking to counter it. Roberts sees potential in human-AI collaboration, which can help policymakers explore different perspectives, boosting the chances of sound outcomes.

Russell Wald, executive director at Stanford’s Institute for Human-Centered AI, called AI a civilisation-shifting force. He stressed the need for an interdisciplinary ecosystem—combining academia, civil society, government and industry—to steer AI development.

‘Industry must lead, but so must academia,’ Wald noted, pointing to universities’ contributions to early research, training, and transparency. Despite widespread adoption, AI scepticism persists due to issues like bias, hallucination, and unpredictable or inappropriate language.

Roberts said most people fall into two camps: those who use AI uncritically, such as students and tech firms, and those who reject it entirely.

She labelled the latter as practising ‘critical non-use’, driven by concerns over bias, authenticity and ethical shortcomings in current models. Roberts urged a broader demographic, especially people outside tech hubs like Silicon Valley, to take part in AI governance and shape its future.

Wald noted that in designing AI, developers must reflect the best of humanity: ‘Not the crazy uncle at the Thanksgiving table.’

Both experts believe the stakes are high, and the societal benefits of getting AI right are too great to ignore or mishandle. ‘You need to think not just about what people want,’ Roberts said, ‘but what they want to want—their more altruistic instincts.’

Starlink suffers widespread outage from a rare software failure

The disruption began around 3 p.m. EDT on Thursday and was attributed to a failure in Starlink’s core internal software services. The outage affected one of the most resilient satellite systems globally, sparking speculation over whether a botched update or a cyberattack may have been responsible.

Starlink, which serves more than six million users across 140 countries, saw service gradually return after two and a half hours.

Executives from SpaceX, including CEO Elon Musk and Vice President of Starlink Engineering Michael Nicolls, apologised publicly and promised to address the root cause to avoid further interruptions. Experts described it as Starlink’s longest and most severe outage since it became a major provider.

As SpaceX continues upgrading the network to support greater speed and bandwidth, some experts warned that such technical failures may become more visible. Starlink has rapidly expanded with over 8,000 satellites in orbit and new services like direct-to-cell text messaging in partnership with T-Mobile.

Questions remain over whether Thursday’s failure affected military services like Starshield, which supports high-value US defence contracts.

Google’s AI Overviews reach 2 billion users monthly, reshaping the web’s future

Google’s AI Overviews, the generative summaries placed above traditional search results, now serve over 2 billion users monthly, a sharp rise from 1.5 billion just last quarter.

First launched experimentally in May 2023 and made widely available in the US by mid-2024, the feature has rapidly expanded to more than 200 countries and 40 languages.

The widespread use of AI Overviews transforms how people search and who benefits. Google reports that the feature boosts engagement by over 10% for queries where it appears.

However, a study by Pew Research shows clicks on search results drop significantly when AI Overviews are shown, with just 8% of users clicking any link, and only 1% clicking within the overview itself.

While Google claims AI Overviews monetise at the same rate as regular search, publishers are left out unless users click through, which they rarely do.

Google has started testing ads within the summaries and is reportedly negotiating licensing deals with select publishers, hinting at a possible revenue-sharing shift. Meanwhile, regulators in the US and EU are scrutinising whether the feature violates antitrust laws or misuses content.

Industry experts warn of a looming ‘Google Zero’ future — a web where search traffic dries up and AI-generated answers dominate.

As visibility in search becomes more about entity recognition than page ranking, publishers and marketers must rethink how they maintain relevance in an increasingly post-click environment.

Amazon exit highlights deepening AI divide between US and China

Amazon’s quiet wind-down of its Shanghai AI lab underscores a broader shift in global research dynamics, as escalating tensions between the US and China reshape how tech giants operate across borders.

Instead of expanding innovation hubs in China, major American firms are increasingly dismantling them.

The AWS lab, once central to Amazon’s AI research, produced tools said to have generated nearly $1bn in revenue and over 100 academic papers.

Yet its dissolution reflects a growing push from Washington to curb China’s access to cutting-edge technology, including restrictions on advanced chips and cloud services.

With IBM and Microsoft also scaling back operations or relocating talent away from mainland China, a pattern is emerging: strategic retreat. Rather than risk compliance issues or regulatory scrutiny, US tech companies are choosing to restructure globally and reduce their local presence in China altogether.

With Amazon already having exited its Chinese ebook and ecommerce markets, the shuttering of its AI lab signals more than a single closure — it reflects a retreat from joint innovation and a widening technological divide that may shape the future of AI competition.

5G traffic surges under growing AI usage

AI-driven applications are reshaping mobile data norms, and 5G networks are feeling the pressure. Analysts warn that uplink demand generated by tools like virtual assistants and AR platforms could exceed the current 5G capacity by around 2027. Traditional networks are built to handle heavier downlink traffic, leaving them under stress as AI flows increase in the opposite direction.

At the same time, artificial intelligence is playing a constructive role by helping optimise these strained networks. AI techniques, such as predictive traffic forecasting, dynamic spectrum allocation, beamforming, and energy management, are improving efficiency and reducing operational costs. Networks are becoming smarter in detecting congestion and self-adjusting to maintain performance.

Industry discussions point to 5G‑Advanced, also known as 5.5G, as a key evolution that embeds AI and machine learning into network architecture. These upgrades promise higher uplink speeds, tighter latency control, and built‑in intelligence for optimisation and automation. Edge computing is set to play a central role by bringing AI decision‑making closer to users.

Meta tells Australia AI needs real user data to work

Meta, the parent company of Facebook, Instagram, and WhatsApp, has urged the Australian government to harmonise privacy regulations with international standards, warning that stricter local laws could hamper AI development. The comments came in Meta’s submission to the Productivity Commission’s review on harnessing digital technology, published this week.

Australia is undergoing its most significant privacy reform in decades. The Privacy and Other Legislation Amendment Bill 2024, passed in November and given royal assent in December, introduces stricter rules around handling personal and sensitive data, with the new requirements taking effect in stages through 2025.

Meta maintains that generative AI systems depend on access to large, diverse datasets and cannot rely on synthetic data alone. In its submission, the company argued that publicly available information, like legislative texts, fails to reflect the cultural and conversational richness found on its platforms.

Meta said its platforms capture the ways Australians express themselves, making them essential to training models that can understand local culture, slang, and online behaviour. It added that restricting access to such data would make AI systems less meaningful and effective.

The company has faced growing scrutiny over its data practices. In 2024, it confirmed using Australian Facebook data to train AI models, although users in the EU have the option to opt out—an option not extended to Australian users.

Pushback from regulators in Europe forced Meta to delay its plans for AI training in the EU and UK, though it resumed these efforts in 2025.

Australia’s Office of the Australian Information Commissioner has issued guidance on AI development and commercial deployment, highlighting growing concerns about transparency and accountability. Meta argues that diverging national rules create conflicting obligations, which could reduce the efficiency of building safe and age-appropriate digital products.

Critics claim Meta is prioritising profit over privacy, and insist that any use of personal data for AI should be based on informed consent and clearly demonstrated benefits. The regulatory debate is intensifying at a time when Australia’s outdated privacy laws are being modernised to protect users in the AI age.

The Productivity Commission’s review will shape how the country balances innovation with safeguards. Australia is a key market for Meta, and its decisions could influence regulatory thinking in other jurisdictions confronting similar challenges.

EU and Japan deepen AI cooperation under new digital pact

In May 2025, the European Union and Japan formally reaffirmed their long-standing EU‑Japan Digital Partnership during the third Digital Partnership Council in Tokyo. Delegations agreed to deepen collaboration in pivotal digital technologies, most notably artificial intelligence, quantum computing, 5G/6G networks, semiconductors, cloud, and cybersecurity.

A joint statement committed to signing an administrative agreement on AI, aligned with principles from the Hiroshima AI Process. Shared initiatives include a €4 million EU-supported quantum R&D project named Q‑NEKO and the 6G MIRAI‑HARMONY research effort.

Both parties pledged to enhance data governance, digital identity interoperability, regulatory coordination across platforms, and secure connectivity via submarine cables and Arctic routes. The accord builds on the Strategic Partnership Agreement activated in January 2025, reinforcing their mutual platform for rules-based, value-driven digital and innovation cooperation.

AI energy demand accelerates while clean power lags

Data centres are driving a sharp rise in electricity consumption, putting mounting pressure on power infrastructure that is already struggling to keep pace.

The rapid expansion of AI has led technology companies to invest heavily in AI-ready infrastructure, but the energy demands of these systems are outstripping available grid capacity.

The International Energy Agency projects that electricity use by data centres will more than double globally by 2030, reaching levels equivalent to the current consumption of Japan.

In the United States, data centres are expected to use 580 TWh annually by 2028—about 12% of national consumption. AI-specific data centres will be responsible for much of this increase.

Despite this growth, clean energy deployment is lagging. Around two terawatts of projects remain stuck in interconnection queues, delaying the shift to sustainable power. The result is a paradox: firms pursuing carbon-free goals by 2035 now rely on gas and nuclear to power their expanding AI operations.

In response, tech companies and utilities are adopting short-term strategies to relieve grid pressure. Microsoft and Amazon are sourcing energy from nuclear plants, while Meta will rely on new gas-fired generation.

Data centre developers like CloudBurst are securing dedicated fuel supplies to ensure local power generation, bypassing grid limitations. Some utilities are introducing technologies to speed up grid upgrades, such as AI-driven efficiency tools and contracts that encourage flexible demand.

Behind-the-meter solutions—like microgrids, batteries and fuel cells—are also gaining traction. AEP’s 1-GW deal with Bloom Energy would mark the US’s largest fuel cell deployment.

Meanwhile, longer-term efforts aim to scale up nuclear, geothermal and even fusion energy. Google has partnered with Commonwealth Fusion Systems to source power by the early 2030s, while Fervo Energy is advancing geothermal projects.

National Grid and other providers are investing in modern transmission technologies to support clean generation. Cooling technology for data centre chips is another area of focus. Programmes like ARPA-E’s COOLERCHIPS are exploring ways to reduce energy intensity.

At the same time, outdated regulatory processes are slowing progress. Developers face unclear connection timelines and steep fees, sometimes pushing them toward off-grid alternatives.

The path forward will depend on how quickly industry and regulators can align. Without faster deployment of clean power and regulatory reform, the systems designed to power AI could become the bottleneck that stalls its growth.

Quantum computing faces roadblocks to real-world use

Quantum computing holds vast promise for sectors from climate modelling to drug discovery and AI, but it remains far from mainstream due to significant barriers. The fragility of qubits, the shortage of scalable quantum software, and the immense number of qubits required continue to limit progress.

Keeping qubits stable is one of the biggest technical obstacles: most remain coherent for only microseconds before errors creep in. Current solutions rely on extreme cooling and specialised equipment, which remain expensive and impractical for widespread use.

Even the most advanced systems today operate with a fraction of the qubits needed for practical applications, while software options remain scarce and highly tailored. Businesses exploring quantum solutions must often build their tools from scratch, adding to the cost and complexity.

Beyond technology, the field faces social and structural challenges. A lack of skilled professionals and fears around unequal access could see quantum benefits restricted to big tech firms and governments.

Security is another looming concern, as future quantum machines may be capable of breaking current encryption standards. Policymakers and businesses must develop defences before such systems become widely available.

AI may accelerate progress in both directions. Quantum computing can supercharge model training and simulation, while AI is already helping to improve qubit stability and propose new hardware designs.
