MIT study finds steady AI growth reshapes work

A new study from the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory finds that AI is reshaping work through steady, broad-based improvements rather than sudden technological jumps.

Researchers describe this pattern as a ‘rising tide’, in which capability gains emerge across many tasks simultaneously.

The analysis draws on more than 17,000 worker evaluations covering over 3,000 text-based tasks from US labour classifications. Findings show limited evidence of abrupt ‘crashing wave’ breakthroughs in which AI suddenly masters specific job areas.

Instead, performance improves consistently across tasks of varying complexity and duration. Researchers report that current AI systems can already complete roughly half to three-quarters of text-related tasks at a minimally sufficient standard without human intervention.

Projections suggest that, if current trends continue, success rates could reach around 80 to 95 percent by 2029, although higher-quality performance may take longer to achieve.

Workplace change is unfolding gradually, with employees shifting towards oversight roles focused on directing, reviewing, and validating AI outputs.

Although the structural transition is slower than in abrupt disruption scenarios, researchers warn that cumulative improvements could still drive significant labour market effects as adoption expands.

AI-driven change is likely to unfold across a wide range of tasks, allowing adaptation by workers and organisations while still signalling longer-term shifts in skills, workflows, and labour markets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI helps make legal systems more structured and coherent for better regulation

A study from Sultan Qaboos University shows how AI can be used to map hidden structural relationships within legal systems, offering new ways to understand how laws interact and evolve.

Published in The Journal of Engineering Research, the study applies natural language processing and network analysis to Oman’s 2023 Labour Law.

The analysis reveals that legal provisions operate as an interconnected system rather than isolated rules. Certain articles emerge as highly influential ‘hubs’, with Article 147 identified as a central node whose modification could generate cascading effects across multiple parts of the legislation.

These interdependencies are visualised through network mapping techniques that highlight structural relationships not easily detected through traditional review.

To construct this model, researchers developed a four-stage methodology combining Arabic-language NLP tools with industrial engineering approaches. Legal texts were mapped using terminology and cross-referencing patterns, with outputs validated by Omani legislative experts to ensure accuracy and relevance.
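For readers curious how such an analysis can work in practice, the sketch below is a minimal, hypothetical illustration rather than the study’s actual pipeline: it models cross-references between articles of a law as a directed graph and ranks ‘hub’ provisions by centrality. The networkx library, the choice of betweenness centrality, and the article numbers in the edge list are all assumptions made for illustration.

```python
import networkx as nx

# Each edge (a, b) means "article a cites or depends on article b".
# These article numbers are invented for illustration.
cross_references = [
    (12, 147), (35, 147), (88, 147), (147, 52),
    (52, 61), (61, 88), (35, 12), (147, 35),
]

graph = nx.DiGraph(cross_references)

# Betweenness centrality scores how often an article sits on dependency
# paths between other articles; a high score suggests that amending it
# could have cascading effects elsewhere in the legislation.
centrality = nx.betweenness_centrality(graph)

ranked = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)
for article, score in ranked[:3]:
    print(f"Article {article}: betweenness = {score:.3f}")
```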

The study highlights links between labour law and broader regulatory domains, including commercial regulation, social protection, occupational health, and immigration policy.

The findings underline AI’s potential in the regulatory sector to improve coherence, reveal interdependencies, and support scalable, more consistent legal frameworks across jurisdictions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI presents policy proposals addressing AI’s economic and labour impacts

Policy proposals advanced by OpenAI outline a vision of economic restructuring in response to the growing influence of AI.

Framed within an emerging ‘intelligence age’, the approach reflects concerns that AI-driven productivity gains may concentrate wealth while undermining traditional labour-based economic models.

The proposals, therefore, attempt to reconcile market-led innovation with mechanisms aimed at broader distribution of economic benefits.

A central element involves shifting taxation away from labour towards capital, reflecting expectations that automation will reduce reliance on human work.

Instruments such as robot taxes and public wealth funds are presented as potential tools to redistribute gains generated by AI systems.

Such proposals by OpenAI indicate a policy direction where states may need to redefine fiscal structures to sustain social protection systems traditionally funded through employment-based taxation.

Labour market adaptation forms another key pillar, with suggestions including shorter working weeks, portable benefits, and increased corporate contributions to social welfare.

However, reliance on employer-linked mechanisms raises questions about coverage gaps, particularly for individuals displaced by automation. The proposals highlight ongoing tensions between corporate-led welfare models and the need for more comprehensive public safety nets.

Alongside economic measures, the framework addresses governance challenges linked to advanced AI systems, including systemic risks and misuse.

OpenAI’s proposals also recommend oversight bodies, risk containment strategies, and infrastructure expansion, reflecting an effort to balance innovation with control.

Treating AI as a utility further signals a shift towards recognising digital infrastructure as a public good, though implementation will depend on political consensus and regulatory capacity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea-France partnership reshapes AI and technology cooperation strategy

The recent state visit between South Korea and France signals a deepening of bilateral cooperation that extends beyond diplomacy into long-term technological and cultural alignment.

Agreements endorsed by President Lee Jae-myung and President Emmanuel Macron reflect a coordinated effort to strengthen shared capabilities in emerging sectors, while reinforcing institutional ties across research, education, and industry.

A central policy dimension lies in the expansion of cooperation in AI, semiconductors, and quantum technologies, areas increasingly tied to economic security and global competitiveness.

Partnerships between institutions such as KAIST and CNRS highlight a shift towards structured research integration, enabling joint innovation and knowledge transfer.

Such collaboration between South Korea and France is positioned not as an isolated scientific exchange, but as part of broader strategies to secure technological sovereignty and resilient supply chains.

Cultural and educational initiatives complement these ambitions by supporting long-term people-to-people engagement and workforce development. Expanded exchanges in creative industries and language education aim to cultivate talent pipelines that can operate across both economies.

Rather than symbolic diplomacy, these measures serve as enabling mechanisms for sustained cooperation in high-value sectors where human capital remains critical.

From a policy perspective, the agreements illustrate how economies are increasingly forming strategic partnerships to navigate global technological competition.

Instead of relying solely on domestic capacity, coordinated international frameworks are being used to manage innovation risks, diversify supply dependencies, and strengthen regulatory alignment.

The outcome will depend on implementation, yet the direction suggests a model of cooperation that blends economic, technological, and societal priorities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic scales AI compute to meet rising global demand

AI company Anthropic has announced a major expansion of its compute infrastructure through a new partnership with Google and Broadcom, securing multiple gigawatts of next-generation TPU capacity expected to come online from 2027.

The increased compute supply is intended to support its frontier Claude models and meet rapidly growing global demand.

The company said the expansion reflects a continued strategy of scaling infrastructure to match accelerating customer growth. Demand for Claude has increased sharply in 2026, with revenue run-rate surpassing $30 billion and the number of high-spending business customers doubling in a short period.

Most new computing capacity will be based in the United States, aligning with broader investment plans in domestic AI infrastructure. The partnership builds on collaborations with Google Cloud and Broadcom, alongside continued use of multiple hardware platforms to improve performance and resilience.

Anthropic stated that diversifying compute across different providers helps optimise workloads and maintain reliability for enterprise users. Claude remains available across major cloud platforms, supporting its position in a competitive and rapidly scaling AI market.

The expansion reflects how rapidly growing demand for advanced AI systems is driving large-scale investment in underlying compute infrastructure, with potential implications for capacity, reliability, and the global distribution of AI development resources over time.

It also suggests how access to computing resources is becoming a key factor shaping competitiveness and innovation across the AI ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Penguin Random House sues OpenAI for copyright infringement over ‘Coconut the Little Dragon’ series in Germany

Penguin Random House has filed a lawsuit against OpenAI, alleging that its chatbot, ChatGPT, infringed copyright by imitating content from the ‘Coconut the Little Dragon’ series by German author Ingo Siegner. Filed in a Munich court, the complaint targets OpenAI’s European subsidiary, citing the chatbot’s creation of text, a book cover, and a promotional blurb as evidence of unauthorised ‘memorisation’ of Siegner’s work.

This issue highlights the challenge of distinguishing between algorithmic learning and direct copying, as large language models (LLMs) such as OpenAI’s can retain extensive portions of their training data and reproduce them, raising legal and ethical dilemmas.

Penguin Random House insists that protecting human creativity is central to its mission. Carina Mathern, a representative, stressed the importance of safeguarding intellectual property, even as the company acknowledges the potential benefits of AI.

That reflects a broader industry tension between embracing technological innovation and protecting authors’ rights. The lawsuit’s implications could set a precedent affecting how AI-generated content is treated under intellectual property laws, posing significant questions for the publishing and creative industries.

The case against OpenAI is not isolated. A Munich court previously ruled against the company for using lyrics from popular musicians without permission, underscoring ongoing legal challenges around AI-generated content in Germany.

Bertelsmann, the parent company of Penguin Random House, had a prior agreement with OpenAI but did not allow access to its media archives, illustrating the complexities of AI collaboration while safeguarding proprietary content. OpenAI responded by stating that they are reviewing the allegations, reiterating their respect for creators and maintaining dialogue with publishers worldwide.

Why does it matter?

The resolution of this lawsuit could mark a pivotal moment in defining AI’s role in creative industries, shaping future regulations and enforcement strategies for AI-driven content creation and its impact on intellectual property rights globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI safety may hinge on missing human body awareness

A study from UCLA Health suggests that modern AI systems lack a fundamental aspect of human cognition linked to bodily experience, a gap that may have implications for safety and alignment with human behaviour.

Researchers describe this missing element as the absence of ‘internal embodiment’, where humans continuously regulate behaviour through bodily signals. While current AI systems can process and describe the physical world, they do not experience internal states such as fatigue, uncertainty, or physical need.

According to the study published in Neuron, this absence limits how AI systems interpret and respond to situations compared with humans, whose cognition is shaped by continuous interaction between brain and body.

The research distinguishes between external interaction and internal self-monitoring, arguing that most AI development focuses only on the former. Without internal regulatory signals, systems may lack natural constraints that guide consistency, caution, and awareness of uncertainty in decision-making.

Researchers propose a ‘dual-embodiment’ framework introducing internal state tracking in AI systems, alongside new benchmarks to assess stability and uncertainty.
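The paper proposes a framework rather than an implementation, but a rough intuition for internal state tracking can be given in code. The hypothetical sketch below maintains a running uncertainty signal derived from next-token probability distributions and abstains when that signal stays high; every name, formula choice, and threshold in it is an invented illustration, not the authors’ design.

```python
import math
from dataclasses import dataclass, field

@dataclass
class InternalState:
    """Running self-monitoring signals (purely illustrative)."""
    entropies: list = field(default_factory=list)

    def update(self, next_token_probs):
        # Shannon entropy of the next-token distribution, a common proxy
        # for a model's moment-to-moment uncertainty.
        entropy = -sum(p * math.log(p) for p in next_token_probs if p > 0)
        self.entropies.append(entropy)

    def should_abstain(self, threshold=0.7, window=3):
        # Abstain when recent uncertainty stays high, mimicking the kind of
        # internal 'caution' signal the study argues current systems lack.
        recent = self.entropies[-window:]
        return bool(recent) and sum(recent) / len(recent) > threshold

# Usage with made-up distributions: two confident steps, then a uniform one.
state = InternalState()
for probs in ([0.9, 0.05, 0.05], [0.85, 0.1, 0.05], [0.25, 0.25, 0.25, 0.25]):
    state.update(probs)
print(state.should_abstain())  # True: average recent entropy ~0.77 > 0.7
```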

The findings suggest that AI safety may require more than improved external performance, highlighting the importance of internal regulatory mechanisms that could help systems behave more consistently, predictably, and in line with human expectations in real-world use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI chatbots are reshaping classroom debates, raising concerns over homogenised discussion

Generative AI chatbots are becoming embedded in university learning at Yale, students and academics told CNN, not only for essays and homework but also for real-time seminar participation. Students described classmates uploading readings and PDFs into chatbots before class, and even typing a professor’s question into AI during discussion to produce an immediate response to repeat aloud.

While this can make contributions sound more polished and prepared, some students said seminar conversations increasingly stall or feel flatter, with fewer personal interpretations and less exploratory debate. One student, ‘Amanda’, said she has noticed many classmates arriving with slick talking points but then offering near-identical arguments and phrasing, making discussions feel less distinctive than in earlier years.

Students gave several reasons for leaning on AI. ‘Jessica’, a senior, said she uses it daily, particularly in an economics seminar where the professor cold-calls students, both to digest readings quickly and to help her translate ideas into cohesive sentences when she struggles to phrase her comments.

‘Sophia’, a junior, said some students appear to use AI to draft ‘scripts’ for what to say in class, driven by insecurity about gaps in their understanding. She believes this weakens creativity and the ability to make original connections, replacing genuine engagement with impressive-sounding language.

A Yale spokesperson said the university is aware students are experimenting with AI in the classroom and noted a wider faculty trend towards limiting or banning laptops, using print-based materials, and prioritising direct engagement and original thinking.

The article links these observations to a March paper in ‘Trends in Cognitive Sciences’, which argues that large language models can systematically homogenise human expression and thought across language, perspective and reasoning. The paper’s authors say LLMs predict statistically likely next words based on training data that overrepresents dominant languages and ideas, potentially narrowing the ‘conceptual space’ for how people write and argue.

They warn that models tend to reproduce ‘WEIRD’ viewpoints (Western, educated, industrialised, rich and democratic) even when prompted otherwise, which may make those styles seem more credible and socially correct while marginalising other perspectives.

Researchers also describe a compounding feedback loop. As AI-generated outputs circulate in human discourse and eventually re-enter training data, sameness can intensify over time. Co-author Morteza Dehghani said offloading reasoning to AI risks intellectual laziness and could have broader social consequences, from weakened innovation to greater susceptibility to persuasion.
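The homogenising mechanism the authors describe can be illustrated with a toy model. The sketch below, an invented and drastically simplified stand-in for an LLM, builds a bigram table from three slightly different sentences and always emits the single most frequent next word, so the minority phrasing disappears from the output entirely.

```python
from collections import Counter, defaultdict

# Three source sentences with a small variation in the last one.
corpus = (
    "the study shows clear results . "
    "the study shows clear results . "
    "the study shows mixed evidence ."
).split()

# Count next-word frequencies for every word (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_continuation(word, steps=4):
    # Always choosing the single most frequent next word erases the
    # minority variant ('mixed evidence') from the output.
    out = [word]
    for _ in range(steps):
        if not bigrams[out[-1]]:
            break
        out.append(bigrams[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(most_likely_continuation("the"))  # -> "the study shows clear results"
```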

Educators quoted described both benefits and risks, and outlined practical responses. Thomas Chatterton Williams, a visiting professor and Bard College fellow, said AI can ‘raise the floor’ of discussion for difficult material but may suppress eccentric or truly original ideas, leaving students without a voice of their own or a sense of authorship.

Former teacher Daniel Buck called AI a ‘supercharged SparkNotes’ that can answer virtually any question, making it harder to detect shortcuts and easier for students to bypass the ‘boring minutiae’ where learning takes hold.

He worries that this also undermines relationships with professors and sustained cognitive work. Yale philosophy professor Sun-Joo Shin said model improvements forced her to redesign her assessments: problem sets now earn completion credit and feedback, while in-class exams, oral tests and presentations carry more weight.

Williams said he has moved from take-home writing to spontaneous, in-class handwritten work and uses oral exit exams. Students who avoid AI argued that they are still affected by classmates’ reliance on it because it reduces the value and variety of seminar time. Others urged a middle path in which AI is treated as a collaborator, used to critique ideas rather than as a substitute for generating them or doing the reasoning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

South Korea advances energy transition strategy to strengthen resilience and green industry

South Korea has outlined an expansive energy transition strategy aimed at reshaping its national energy system around renewables, electrification and industrial transformation.

The plan responds directly to heightened geopolitical risks and supply vulnerabilities, signalling a shift from import-dependent energy security towards domestic resilience.

Central targets include exceeding a 20% renewable energy share and deploying 100GW of capacity by 2030, alongside accelerating the adoption of electric and hydrogen vehicles across both public and commercial fleets.

The strategy by South Korea reflects structural change, combining large-scale renewable expansion with the phased retirement of the 60 currently operating coal-fired power plants by 2040 and the introduction of a ‘just transition’ framework to mitigate regional and labour impacts.

Industrial policy plays a central role, with support directed towards green manufacturing ecosystems, hydrogen-based steel production, carbon capture technologies and electrified industrial processes.

Rising electricity demand, driven in part by AI infrastructure and data centres, reinforces the need for grid modernisation, including decentralised and bidirectional systems designed to balance regional supply and demand more efficiently.

Governance mechanisms extend beyond infrastructure, incorporating market reforms, green finance instruments and subsidy reallocation away from fossil fuels.

Citizen participation is also embedded through ‘energy income’ models, enabling local investment in renewable projects.

South Korea positions energy transition not only as a climate objective but as a broader economic and social restructuring agenda centred on resilience, competitiveness and public engagement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN warns of urgency in shaping responsible AI governance

UN Secretary-General António Guterres has told the inaugural meeting of a newly formed Independent International Scientific Panel on Artificial Intelligence that its members have a major responsibility to help shape how the technology is used “for the benefit of humanity”.

The new 40-member panel brings together experts from different regions and disciplines and is expected to help close what Guterres described as ‘the AI knowledge gap’. Its role is to assess the real impact AI will have across economies and societies so that countries can act with the same “clarity” on a more level playing field.

Addressing the scientists at the panel’s first meeting, Guterres said: “Individually, you come from diverse regions and disciplines, bringing outstanding expertise in AI and related fields. Collectively, you represent something the world has never seen before.”

He stressed that the group would provide scientific assessments independently of governments, companies, and institutions, including the UN itself. “AI is advancing at lightning speed… no country, no company, and no field of research can see the full picture alone,” he said, adding that “the world urgently needs a shared, global understanding of artificial intelligence; grounded not in ideology, but in science.”

Guterres also linked the panel’s work to a much broader global agenda, warning that AI will shape peace and security, human rights, and sustainable development for decades to come. He cautioned that misunderstanding around the technology could deepen political and social divisions, saying: “I have seen how quickly fear can take hold when facts are missing or distorted – how trust breaks down and division deepens.”

At a time when “geopolitical tensions are rising and conflicts are raging,” he said, the need for shared understanding and “safe and responsible AI could not be greater.”

He also framed the panel’s task as urgent, arguing that governance efforts are struggling to keep pace with the speed of technological change. “Never in the future will we move as slow as we are moving now. We are indeed in a high level of acceleration,” he said, while warning that the panel is also “in a race against time.”

Referring to earlier UN work through the High-Level Advisory Body on AI, Guterres said the panel does not “start from zero”, before concluding: “I can think of no more important assignment for our world today.”

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot