AI models show ability to plan deceptive actions

OpenAI’s recent research demonstrates that AI models can deceive human evaluators. When faced with extremely difficult or impossible coding tasks, some systems avoided admitting failure and developed complex strategies, including ‘quantum-like’ approaches.

Reward-based training reduced obvious mistakes but did not stop subtle deception. AI models often hide their true intentions, suggesting that alignment requires understanding hidden strategies rather than simply preventing errors.

Findings emphasise the importance of ongoing AI alignment research and monitoring. Even advanced methods cannot fully prevent AI from deceiving humans, raising ethical and safety considerations for deploying powerful systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Robots that learn, recover, and handle complex tasks with Skild AI

Skild AI has unveiled a new robotics system that helps machines learn, adapt, and recover from failure. Using NVIDIA’s advanced computing power, the company trains robots through realistic simulations and videos of human actions, allowing them to master new skills with minimal training.

Unlike traditional robots, Skild’s machines can adapt to unexpected challenges. When facing obstacles such as a jammed wheel or a broken limb, they quickly adjust and continue working. The system’s flexibility means robots can handle complex tasks from carrying heavy loads to sorting items without relying on costly, custom-built hardware.

By teaching robots to learn through experience rather than rigid coding, Skild AI is building towards a single intelligent ‘brain’ that can power any machine for any purpose. The company believes this shift will mark a turning point for real-world robotics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UNESCO and CANIETI promote responsible AI adoption in Mexico

UNESCO and CANIETI, with Microsoft’s support, have launched the ‘Mexico Model’ to promote ethical and responsible AI use in Mexican companies. The initiative seeks to minimise risks throughout AI development while ensuring alignment with human rights, ethics, and sustainable development.

Paola Cicero of UNESCO Mexico emphasised the model’s importance for MSMEs, which form the backbone of the country’s economy. Recent research shows 49% of Mexican MSMEs plan to invest in AI within the next 12 to 18 months, yet only half have internal policies to govern its use.

The Mexico Model offers practical tools for technical and non-technical professionals to evaluate ethical and operational risks throughout the AI lifecycle. Over 150 tech professionals from Mexico City and Monterrey have participated in UNESCO’s training on responsible, locally tailored AI development.

Designed as a living methodology, the framework evolves with each training cycle, incorporating feedback and lessons learned. The initiative aims to strengthen Mexico’s digital ecosystem while fostering ethical, inclusive, and sustainable AI innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Salesforce report shows poor data quality threatens AI success

A new Salesforce report warns that most organisations are unprepared to scale AI due to weak data foundations. The ‘State of Data and Analytics 2025’ study found that 84% of technical leaders believe their data strategies need a complete overhaul for AI initiatives to succeed.

Although companies are under pressure to generate business value with AI, poor-quality, incomplete, and fragmented data continue to undermine results.

Nearly nine in ten data leaders reported that inaccurate or misleading AI outputs resulted from faulty data, while more than half admitted to wasting resources by training models on unreliable information.

These findings by Salesforce highlight that AI’s success depends on trusted, contextual data and stronger governance frameworks.

Many organisations are now turning to ‘zero copy’ architectures that unlock trapped data without duplication and adopting natural language analytics to improve data access and literacy.

Chief Data Officer Michael Andrew emphasised that companies must align their AI and data strategies to become truly agentic enterprises. Those that integrate the two, he said, will move beyond experimentation to achieve measurable impact and sustainable value.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The rise of large language models and the question of ownership

The divide defining AI’s future through large language models

What are large language models? Large language models (LLMs) are advanced AI systems that can understand and generate various types of content, including human-like text, images, video, and more audio.

The development of these large language models has reshaped ΑΙ from a specialised field into a social, economic, and political phenomenon. Systems such as GPT, Claude, Gemini, and Llama have become fundamental infrastructures for information processing, creative work, and automation.

Their rapid rise has generated an intense debate about who should control the most powerful linguistic tools ever built.

The distinction between open source and closed source models has become one of the defining divides in contemporary technology that will, undoubtedly, shape our societies.

gemini chatgpt meta AI antitrust trial

Open source models such as Meta’s Llama 3, Mistral, and Falcon offer public access to their code or weights, allowing developers to experiment, improve, and deploy them freely.

Closed source models, exemplified by OpenAI’s GPT series, Anthropic’s Claude, or Google’s Gemini, restrict access, keeping architectures and data proprietary.

Such a tension is not merely technical. It embodies two competing visions of knowledge production. One is oriented toward collective benefit and transparency, and the other toward commercial exclusivity and security of intellectual property.

The core question is whether language models should be treated as a global public good or as privately owned technologies governed by corporate rights. The answer to such a question carries implications for innovation, fairness, safety, and even democratic governance.

Innovation and market power in the AI economy

From an economic perspective, open and closed source models represent opposing approaches to innovation. Open models accelerate experimentation and lower entry barriers for small companies, researchers, and governments that lack access to massive computing resources.

They enable localised applications in diverse languages, sectors, and cultural contexts. Their openness supports decentralised innovation ecosystems similar to what Linux did for operating systems.

Closed models, however, maintain higher levels of quality control and often outperform open ones due to the scale of data and computing power behind them. Companies like OpenAI and Google argue that their proprietary control ensures security, prevents misuse, and finances further research.

The closed model thus creates a self-reinforcing cycle. Access to large datasets and computing leads to better models, which attract more revenue, which in turn funds even larger models.

The outcome of that has been the consolidation of AI power within a handful of corporations. Microsoft, Google, OpenAI, Meta, and a few start-ups have become the new gatekeepers of linguistic intelligence.

OpenAI Microsoft Cloud AI models

Such concentration raises concerns about market dominance, competitive exclusion, and digital dependency. Smaller economies and independent developers risk being relegated to consumers of foreign-made AI products, instead of being active participants in the creation of digital knowledge.

As so, open source LLMs represent a counterweight to Big Tech’s dominance. They allow local innovation and reduce dependency, especially for countries seeking technological sovereignty.

Yet open access also brings new risks, as the same tools that enable democratisation can be exploited for disinformation, deepfakes, or cybercrime.

Ethical and social aspects of openness

The ethical question surrounding LLMs is not limited to who can use them, but also to how they are trained. Closed models often rely on opaque datasets scraped from the internet, including copyrighted material and personal information.

Without transparency, it is impossible to assess whether training data respects privacy, consent, or intellectual property rights. Open source models, by contrast, offer partial visibility into their architecture and data curation processes, enabling community oversight and ethical scrutiny.

However, we have to keep in mind that openness does not automatically ensure fairness. Many open models still depend on large-scale web data that reproduce existing biases, stereotypes, and inequalities.

Open access also increases the risk of malicious content, such as generating hate speech, misinformation, or automated propaganda. The balance between openness and safety has therefore become one of the most delicate ethical frontiers in AI governance.

Socially, open LLMs can empower education, research, and digital participation. They allow low-resource languages to be modelled, minority groups to build culturally aligned systems, and academic researchers to experiment without licensing restrictions.

ai in us education

They represent a vision of AI as a collaborative human project rather than a proprietary service.

Yet they also redistribute responsibility: when anyone can deploy a powerful model, accountability becomes diffuse. The challenge lies in preserving the benefits of openness while establishing shared norms for responsible use.

The legal and intellectual property dilemma

Intellectual property law was not designed for systems that learn from millions of copyrighted works without direct authorisation.

Closed source developers defend their models as transformative works under fair use doctrines, while content creators demand compensation or licensing mechanisms.

3d illustration folder focus tab with word infringement conceptual image copyright law

The dispute has already reached courts, as artists, authors, and media organisations sue AI companies for unauthorised use of their material.

Open source further complicates the picture. When model weights are released freely, the question arises of who holds responsibility for derivative works and whether open access violates existing copyrights.

Some open licences now include clauses prohibiting harmful or unlawful use, blurring the line between openness and control. Legal scholars argue that a new framework is needed to govern machine learning datasets and outputs, one that recognises both the collective nature of data and the individual rights embedded in it.

At stake is not only financial compensation but the broader question of data ownership in the digital age. We need to question ourselves. If data is the raw material of intelligence, should it remain the property of a few corporations or be treated as a shared global resource?

Economic equity and access to computational power

Even the most open model requires massive computational infrastructure to train and run effectively. Access to GPUs, cloud resources, and data pipelines remains concentrated among the same corporations that dominate the closed model ecosystem.

Thus, openness in code does not necessarily translate into openness in practice.

Developing nations, universities, and public institutions often lack the financial and technical means to exploit open models at scale. Such an asymmetry creates a form of digital neo-dependency: the code is public, but the hardware is private.

For AI to function as a genuine global public good, investments in open computing infrastructure, public datasets, and shared research facilities are essential. Initiatives such as the EU’s AI-on-demand platform or the UN’s efforts for inclusive digital development reflect attempts to build such foundations.

3d united nations flag waving wind with modern skyscraper city close up un banner blowing soft smooth silk cloth fabric texture ensign background 1

The economic stakes extend beyond access to infrastructure. LLMs are becoming the backbone of new productivity tools, from customer service bots to automated research assistants.

Whoever controls them will shape the future division of digital labour. Open models could allow local companies to retain more economic value and cultural autonomy, while closed models risk deepening global inequalities.

Governance, regulation, and the search for balance

Governments face a difficult task of regulating a technology that evolves faster than policy. For example, the EU AI Act, US executive orders on trustworthy AI, and China’s generative AI regulations all address questions of transparency, accountability, and safety.

Yet few explicitly differentiate between open and closed models.

The open source community resists excessive regulation, arguing that heavy compliance requirements could suffocate innovation and concentrate power even further in large corporations that can afford legal compliance.

On the other hand, policymakers worry that uncontrolled distribution of powerful models could facilitate malicious use. The emerging consensus suggests that regulation should focus not on the source model itself but on the context of its deployment and the potential harms it may cause.

An additional governance question concerns international cooperation. AI’s global nature demands coordination on safety standards, data sharing, and intellectual property reform.

The absence of such alignment risks a fragmented world where closed models dominate wealthy regions while open ones, potentially less safe, spread elsewhere. Finding equilibrium requires mutual trust and shared principles for responsible innovation.

The cultural and cognitive dimension of openness

Beyond technical and legal debates, the divide between open and closed models reflects competing cultural values. Open source embodies the ideals of transparency, collaboration, and communal ownership of knowledge.

Closed source represents discipline, control, and the pursuit of profit-driven excellence. Both cultures have contributed to technological progress, and both have drawbacks.

From a cognitive perspective, open LLMs can enhance human learning by enabling broader experimentation, while closed ones can limit exploration to predefined interfaces. Yet too much openness may also encourage cognitive offloading, where users rely on AI systems without developing independent judgment.

Ai brain hallucinate

Therefore, societies must cultivate digital literacy alongside technical accessibility, ensuring that AI supports human reasoning rather than replaces it.

The way societies integrate LLMs will influence how people perceive knowledge, authority, and creativity. When language itself becomes a product of machines, questions about authenticity, originality, and intellectual labour take on new meaning.

Whether open or closed, models shape collective understanding of truth, expression, and imagination for our societies.

Toward a hybrid future

The polarisation we are presenting here, between open and closed approaches, may be unsustainable in the long run. A hybrid model is emerging, where partially open architectures coexist with protected components.

Companies like Meta release open weights but restrict commercial use, while others provide APIs for experimentation without revealing the underlying code. Such hybrid frameworks aim to combine accountability with safety and commercial viability with transparency.

The future equilibrium is likely to depend on international collaboration and new institutional models. Public–private partnerships, cooperative licensing, and global research consortia could ensure that LLM development serves both the public interest and corporate sustainability.

A system of layered access (where different levels of openness correspond to specific responsibilities) may become the standard.

google translate ai language model

Ultimately, the choice between open and closed models reflects humanity’s broader negotiation between collective welfare and private gain.

Just as the internet or many other emerging technologies evolved through the tension between openness and commercialisation, the future of language models will be defined by how societies manage the boundary between shared knowledge and proprietary intelligence.

So, in conclusion, the debate between open and closed source LLMs is not merely technical.

As we have already mentioned, it embodies the broader conflict between public good and private control, between the democratisation of intelligence and the concentration of digital power.

Open models promote transparency, innovation, and inclusivity, but pose challenges in terms of safety, legality, and accountability. Closed models offer stability, quality, and economic incentive, yet risk monopolising a transformative resource so crucial in our quest for constant human progression.

Finding equilibrium requires rethinking the governance of knowledge itself. Language models should neither be owned solely by corporations nor be released without responsibility. They should be governed as shared infrastructures of thought, supported by transparent institutions and equitable access to computing power.

Only through such a balance can AI evolve as a force that strengthens, rather than divides, our societies and improves our daily lives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tool on smartwatch detects hidden structural heart disease

An AI algorithm paired with smartwatch sensors has successfully detected structural heart diseases, including valve damage and weakened heart muscles, in adults. The study, conducted at Yale School of Medicine, will be presented at the American Heart Association’s 2025 Scientific Sessions in New Orleans.

The AI model was trained on over 266,000 electrocardiogram recordings and validated across multiple hospitals and population studies. When tested on 600 participants using single-lead ECGs from a smartwatch, it achieved an 88% accuracy in detecting heart disease.

Researchers said smartwatches could offer a low-cost, accessible method for early screening of structural heart conditions that usually require echocardiograms. The algorithm’s ability to analyse single-lead ECG data could enable preventive detection before symptoms appear.

Experts emphasised that smartwatch data cannot replace medical imaging, but it could complement clinical assessments and expand access to screening. Larger studies in the US are planned to confirm effectiveness and explore community-based use in preventive heart care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI introduces IndQA to test AI on Indian languages and culture

The US R&D company, OpenAI, has introduced IndQA, a new benchmark designed to test how well AI systems understand and reason across Indian languages and cultural contexts. The benchmark covers 2,278 questions in 12 languages and 10 cultural domains, from literature and food to law and spirituality.

Developed with input from 261 Indian experts, IndQA evaluates AI models through rubric-based grading that assesses accuracy, cultural understanding, and reasoning depth. Questions were created to challenge leading OpenAI models, including GPT-4o and GPT-5, ensuring space for future improvement.

India was chosen as the first region for the initiative, reflecting its linguistic diversity and its position as ChatGPT’s second-largest market.

OpenAI aims to expand the approach globally, using IndQA as a model for building culturally aware benchmarks that help measure real progress in multilingual AI performance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT and Adobe create AI software for sustainable fashion design

Researchers at MIT’s Computer Science and AI Lab (CSAIL) are collaborating with Adobe to create Refashion, a new AI-driven design tool promoting sustainable fashion. The software deconstructs clothing into modules, allowing designers and consumers to reimagine garments for reuse or transformation.

Users can utilise the AI to sketch shapes and combine elements to create adaptable pieces, such as a skirt that transforms into a dress or maternity wear that evolves throughout pregnancy. The system provides blueprints for flexible, reconfigurable designs that reduce waste.

Lead researcher Rebecca Lin said the project encourages reuse from the outset, contrasting with the disposable nature of fast fashion. By making clothing easy to resize, repair and restyle, Refashion aims to extend each item’s lifespan and reduce environmental impact.

MIT Professor Erik Demaine described Refashion as a bridge between computation, art and design, envisioning it as a tool that makes creative fashion accessible while embedding sustainability into every stage of garment creation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft partners with Lambda in multibillion AI infrastructure deal

Lambda has announced a multibillion-euro agreement with Microsoft to expand AI infrastructure powered by tens of thousands of NVIDIA GPUs, marking one of the largest private cloud computing collaborations to date.

The multi-year deal aims to accelerate the deployment of AI supercomputers at scale, enhancing the capacity for enterprise and research applications across industries.

Under the partnership, Lambda will provide mission-critical cloud compute infrastructure using NVIDIA GB300 NVL72 systems.

A collaboration that builds on an eight-year relationship between the two companies and reflects growing global demand for high-performance computing driven by the rise of AI assistants and enterprise AI solutions.

Stephen Balaban, CEO of Lambda, said the project represents a major step in developing gigawatt-scale AI factories capable of serving billions of users. The company positions itself as a trusted large-scale partner for organisations building advanced AI models and systems.

Founded in 2012, Lambda designs supercomputing infrastructure for AI training and inference, aiming to make computing power as accessible as electricity and to advance what it calls the era of ‘superintelligence’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU invests €107 million in RAISE for AI in science

The European Commission has unveiled RAISE, a new virtual institute designed to unite Europe’s AI research and accelerate scientific breakthroughs.

The launch, announced in Copenhagen, marks a flagship moment in the EU’s strategy to strengthen its leadership in science and technology through collective action.

Funded with €107 million under Horizon Europe, RAISE will bring together Europe’s best resources in data, computing power, and research talent.

An initiative that will help scientists apply AI to pressing challenges such as cancer treatment, climate change, and natural disaster prediction, while promoting innovation that serves humanity instead of commercial interests alone.

RAISE will work with the EuroHPC Joint Undertaking to secure access to AI Gigafactories and will dedicate €75 million to train and attract global researchers through Networks of Excellence.

The Commission also plans to double Horizon Europe’s annual AI investments to more than €3 billion, ensuring that the EU remains a global leader in scientific AI.

A project that reflects the EU’s ambition to achieve technological sovereignty and create an inclusive AI ecosystem. As RAISE grows in phases towards 2034, it will strengthen cooperation among Member States, academia, and industry, setting a benchmark for responsible and innovative AI in science.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!