Bezos predicts gigantic gains from the current AI investment bubble

Jeff Bezos has acknowledged that an ‘AI bubble’ is underway but believes its long-term impact will be overwhelmingly positive.

Speaking at Italian Tech Week in Turin, the Amazon founder described it as an ‘industrial bubble’ rather than a purely financial one.

He argued that the intense competition and heavy investment will ultimately leave society better off, even if many projects fail. ‘When the dust settles and you see who the winners are, societies benefit from those investors,’ he said, adding that the benefits of AI will be ‘gigantic’.

Bezos’s comments come amid surging spending by Big Tech on AI chips and data centres. Citigroup forecasts that investment will exceed $2.8 trillion by 2029.

OpenAI, Meta, Microsoft, Google and others are pouring billions into infrastructure, with projects like OpenAI’s $500 billion Stargate initiative and Meta’s $29 billion capital raise for AI data centres.

Industry leaders, including Sam Altman of OpenAI, have warned of an AI bubble. Yet many argue that, unlike the dot-com era, today’s market is anchored by companies such as Nvidia and OpenAI, whose products form the backbone of AI development.

The challenge for tech giants will be finding ways to recover vast investments while sustaining rapid growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI industry faces recalibration as Altman delays AGI

OpenAI CEO Sam Altman has again adjusted his timeline for achieving artificial general intelligence (AGI). After earlier forecasts of 2023 and 2025, Altman now points to 2030 as a more realistic milestone. The move reflects mounting pressure and shifting expectations in the AI sector.

OpenAI’s public projections come amid challenging financials. Despite a valuation near $500 billion, the company reportedly lost $5 billion last year on $3.7 billion in revenue. Investors remain drawn to ambitious claims of AGI, despite widespread scepticism. Predictions now span from 2026 to 2060.

Experts question whether AGI is feasible under current large language model (LLM) architectures. They point out that LLMs rely on probabilistic patterns in text, lack lived experience, and cannot develop human judgement or intuition from data alone.

Another point of critique is that text-based models cannot fully capture embodied expertise. Fields like law, medicine, or skilled trades depend on hands-on training, tacit knowledge, and real-world context, where AI remains fundamentally limited.

As investors and commentators calibrate expectations, the AI industry may face a reckoning. Altman’s shifting forecasts underscore how hype and uncertainty continue to shape the race toward perceived machine-level intelligence.

Frontier firms reshape work with AI integration

Forward-thinking companies, known as Frontier Firms, are reshaping business by integrating AI deeply into their operations. US employees’ adoption of AI tools has doubled in two years, reflecting a rapid shift.

These firms are not just experimenting but are setting new standards by redesigning workflows to leverage AI, particularly in software development. The impacts are spreading to sales, service, finance, and marketing. Three distinct patterns define this transformation.

The first, human + AI assistant, pairs individuals with AI to eliminate repetitive tasks, allowing developers to focus on design and quality.

The second, human-agent teams, integrates AI as digital workers in workflows for tasks like code testing and compliance, boosting efficiency.

The third, human-led, agent-operated pattern sees AI managing entire processes like automated release pipelines, with humans setting goals and intervening only when needed.

These patterns do not follow a linear path but appear simultaneously across different business functions. A single team might use AI to draft code, test it collaboratively, and automate releases in one day.

As these practices compound, they accelerate innovation and scale. Leaders must embrace these changes to stay competitive, as AI-driven workflows are poised to transform industries beyond software development.

Future of work shaped by AI, flexible ecosystems and soft retirement

As technology reshapes workplaces, how we work is set for significant change in the decade’s second half. Seven key trends are expected to drive this transformation, shaped by technological shifts, evolving employee expectations, and new organisational realities.

AI will continue to play a growing role in 2026. Beyond simply automating tasks, companies will increasingly design AI-native workflows built from the ground up to automate, predict, and support decision-making.

Hybrid and remote work will solidify flexible ecosystems of tools, networks, and spaces to support employees wherever they are. The trend emphasises seamless experiences, global talent access, and stronger links between remote workers and company culture.

The job landscape will continue to change as AI affects hiring in clerical, administrative, and managerial roles, while sectors such as healthcare, education, and construction grow. Human skills, such as empathy, communication, and leadership, will become increasingly valuable.

Data-driven people management will replace intuition-based approaches, with AI used to find patterns and support evidence-based decisions. Employee experience will also become a key differentiator, reflecting customer-focused strategies to attract and retain talent.

An emerging ‘soft retirement’ trend will see healthier older workers reduce hours rather than stop altogether, offering businesses valuable expertise. Those who adapt early to these trends will be better positioned to thrive in the future of work.

Nintendo denies lobbying the Japanese government over generative AI

Video game company Nintendo has denied reports that it lobbied the Japanese government over the use of generative AI. The company issued an official statement on its Japanese X account, clarifying that it has had no contact with the authorities on the matter.

The rumour originated from a post by Satoshi Asano, a member of Japan’s House of Representatives, who suggested that private companies had pressed the government on intellectual property protection concerning AI.

After Nintendo’s statement, Asano retracted his remarks and apologised for spreading misinformation.

Nintendo stressed that it would continue to protect its intellectual property against infringement, whether AI was involved or not. The company reaffirmed its cautious approach toward generative AI in game development, focusing on safeguarding creative rights rather than political lobbying.

The episode underscores the sensitivity around AI in Japan’s creative industries, where concerns about copyright and technological disruption are fuelling debate. Nintendo’s swift clarification signals how seriously it takes misinformation and the protection of its brand.

Labour market stability persists despite the rise of AI

Public fears of AI rapidly displacing workers have not yet materialised in the US labour market.

A new study finds that the overall occupational mix has shifted only slightly since the launch of generative AI in November 2022, with changes resembling past technological transitions such as the rise of computers and the internet.

The pace of disruption is not significantly faster than historical benchmarks.

Industry-level data show some variation, particularly in information services, finance, and professional sectors, but trends were already underway before AI tools became widely available.

Similarly, younger workers have not seen a dramatic divergence in opportunities compared with older graduates, suggesting that AI’s impact on early careers remains modest and difficult to isolate.

Metrics of exposure, automation, and augmentation offer little evidence of widespread displacement. OpenAI’s exposure data and Anthropic’s usage data suggest that the share of workers most affected by AI, including among the unemployed, has remained stable.

Even in roles theoretically vulnerable to automation, there has been no measurable increase in job losses.

The study concludes that AI’s labour effects are gradual rather than immediate. Historical precedent suggests that large-scale workforce disruption unfolds over decades, not months. Researchers plan to monitor the data to track whether AI’s influence becomes more visible over time.

EU kicks off cybersecurity awareness campaign against phishing threats

European Cybersecurity Month (ECSM) 2025 has kicked off, with this year’s campaign centring on the growing threat of phishing attacks.

The initiative, driven by the EU Agency for Cybersecurity (ENISA) and the European Commission, seeks to raise awareness and provide practical guidance to European citizens and organisations.

Phishing remains the primary vector for social engineering attacks. This year’s ECSM materials therefore expand the scope to include variants such as SMS phishing (smishing), QR code phishing (quishing), voice phishing (vishing), and business email compromise (BEC).

ENISA warns that, as of early 2025, over 80 percent of observed social engineering activity involves AI, with language models enabling more convincing and scalable scams.

To support the campaign, actors at every level, from individual citizens to large organisations, are encouraged to engage in training, simulations, awareness sessions and public outreach under the banner #ThinkB4UClick.

A cross-institutional kick-off event is also scheduled, bringing together the EU institutions, member states and civil society to align messaging and launch coordinated activities.

What a Hollywood AI actor can teach CEOs about the future of work

Tilly Norwood, a fully AI-created actor, has become the centre of a heated debate in Hollywood after her creator revealed that talent agents were interested in representing her.

The actors’ union responded swiftly, warning that Tilly was trained on the work of countless performers without their consent or compensation. It also reminded producers that hiring her would involve dealing with the union.

The episode highlights two key lessons for business leaders in any industry. First, never assume that a technology’s current limitations are permanent. Some commentators, including Whoopi Goldberg, have argued that AI actors pose little threat because their physical movements still appear noticeably artificial.

Yet history shows that early limitations often disappear over time. Once-dismissed technologies like machine translation and chess software have since far surpassed human abilities. Similarly, AI-generated performers may eventually become indistinguishable from human actors.

The second lesson concerns human behaviour. People are often irrational; their preferences can upend even the most carefully planned strategies. Producers avoided publicising actors’ names in Hollywood’s early years to maintain control.

Audiences, however, demanded to know everything about the stars they admired, forcing studios to adapt. This human attachment created the star system that shaped the industry. Whether audiences will embrace AI performers like Tilly remains uncertain, but cultural and emotional factors will play a decisive role.

Hollywood offers a high-profile glimpse of the challenges and opportunities of advanced AI. As other sectors face similar disruptions, business leaders may find that technology alone does not determine outcomes.

Diag2Diag brings fusion reactors closer to commercial viability

Researchers have developed an AI tool that could make fusion power more reliable and affordable. Diag2Diag reconstructs missing sensor data to give scientists a clearer view of plasma, helping address one of fusion energy’s biggest challenges.

Developed through a collaboration led by Princeton University and the US Department of Energy’s Princeton Plasma Physics Laboratory, Diag2Diag analyses multiple diagnostics in real time to generate synthetic, high-resolution data. It improves plasma control and cuts reliance on costly hardware.

A key use of Diag2Diag is improving the study of the plasma pedestal, the fuel’s outer layer. Current methods miss sudden changes or lack detail. The AI fills these gaps without new instruments, helping researchers fine-tune stability.

The system has also advanced research into edge-localised modes, or ELMs, which are bursts of energy that can damage reactor walls. It revealed how magnetic perturbations create ‘magnetic islands’ that flatten plasma temperature and density, supporting a leading theory on ELM suppression.

Although designed for fusion, Diag2Diag could also enhance reliability in fields such as spacecraft monitoring and robotic surgery. For fusion specifically, it supports smaller, cheaper, and more dependable reactors, bringing the prospect of clean, round-the-clock power closer to reality.

AI transcription tool aims to speed up police report writing

The Washington County Sheriff’s Office in Oregon is testing an AI transcription service to speed up police report writing. The tool, Draft One, analyses Axon body-worn camera footage to generate draft reports for specific calls, including theft, trespassing, and DUII incidents.

Corporal David Huey said the technology is designed to give deputies more time in the field. He noted that reports that once took around 90 minutes can now be completed in 15 to 20 minutes, freeing officers to focus on policing rather than paperwork.

Deputies in the 60-day pilot must review and edit all AI-generated drafts. At least 20 percent of each report must be manually adjusted to ensure accuracy. Huey explained that the system deliberately inserts minor errors to ensure officers remain engaged with the content.

He added that human judgement remains essential for interpreting emotional cues, such as tense body language, which AI cannot detect solely from transcripts. All data generated by Draft One is securely stored within Axon’s network.

After the pilot concludes, the sheriff’s office and the district attorney will determine whether to adopt the system permanently. If successful, the tool could mark a significant step in integrating AI into everyday law enforcement operations.
