OpenAI backs away from for-profit transition amid scrutiny

OpenAI has announced it will no longer pursue a full transition to a for-profit company. Instead, it will restructure its commercial arm as a public benefit corporation (PBC), retaining oversight by its nonprofit board.

The move comes after discussions with the attorneys general of California and Delaware, and growing concerns about governance and mission drift. The nonprofit board—best known for briefly removing CEO Sam Altman—will continue to oversee the company and appoint the PBC board.

Investors will now hold regular, uncapped equity in the PBC, replacing the previous 100x return cap, a change designed to attract future funding. The nonprofit will also gain a growing equity stake in the business arm.

In a message to staff, Altman said OpenAI remains committed to building AI that benefits humanity and sees this structure as the best path forward. Critics, including former staff, say questions remain about technology ownership and long-term priorities.

At the same time, Meta is positioning itself as a major rival. It recently launched a standalone AI assistant app, powered by its Llama 4 model and available across platforms including Ray-Ban smart glasses. The app includes a social Discover feed, encouraging interaction with shared AI outputs.

OpenAI’s new structure attempts to balance commercial growth with ethical governance—a model that may influence how other AI firms approach funding, control, and public accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK artists urge PM to shield creative work from AI exploitation

More than 400 prominent British artists, including Dua Lipa, Elton John, and Sir Ian McKellen, have signed a letter urging Prime Minister Keir Starmer to update UK copyright laws to protect their work from being used without consent in training AI systems. The signatories argue that current laws leave their creative output vulnerable to exploitation by tech companies, which could ultimately undermine the UK’s status as a global cultural leader.

The artists are backing a proposed amendment to the Data (Use and Access) Bill by Baroness Beeban Kidron, requiring AI developers to disclose when and how they use copyrighted materials. They believe this transparency could pave the way for licensing agreements that respect the rights of creators while allowing responsible AI development.

Nobel laureate Kazuo Ishiguro and music legends like Paul McCartney and Kate Bush have joined the call, warning that creators risk ‘giving away’ their life’s work to powerful tech firms. While the government insists it is consulting all parties to ensure a balanced outcome that supports both the creative sector and AI innovation, not everyone supports the amendment.

Critics, like Julia Willemyns of the Centre for British Progress, argue that stricter copyright rules could stifle technological growth, offshore development, and damage the UK economy.

Why does it matter?

The debate reflects growing global tension between protecting intellectual property and enabling AI progress. With a key vote approaching in the House of Lords, artists are pressing for urgent action to secure a fair and sustainable path forward that upholds innovation and artistic integrity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Technological inventions blurring the line between reality and fiction

The rapid progress of AI over the past few years has unsettled people around the world, reaching a point where it is extremely difficult to say with certainty whether a given piece of content was created by AI or not.

We are confronted with this phenomenon through photos, videos and audio recordings that can easily confuse us and force us to question our perception of reality.

Digital twins are being used by scammers in the crypto space to impersonate influencers and execute fraudulent schemes.

And while the public often focuses on deepfakes, at the same time we are witnessing inventions and patents emerging around the world that deserve admiration, but also spark important reflection: are we nearing, or have we already crossed, the ethical red line?

For these and many other reasons, in a world where the visual and functional differences between science fiction and reality have almost disappeared, the latest inventions come as a shock.

We are now at a point where we are facing technologies that force us to redefine what we mean by the word ‘reality’.

Neuralink: Crossing the boundary between brain and machine

Amyotrophic lateral sclerosis (ALS) is a rare neurological disease caused by damage and degeneration of motor neurons—nerve cells in the brain and spinal cord. This damage disrupts the transmission of nerve impulses to muscles via peripheral nerves, leading to a progressive loss of muscle function.

However, the Neuralink chip, developed by Elon Musk’s company, has helped one patient type with their mind and speak using their voice. This breakthrough opens the door to a new form of communication where thoughts become direct interactions.

Liquid robot from South Korea

Scenes from sci-fi films are becoming reality, and in this case (thankfully), a liquid robot has a noble purpose—to assist in rescue missions and be applied in medicine.

Currently in the early prototype stage, it has been demonstrated in labs through a collaboration between MIT and Korean research institutes.

ULS exoskeleton as support for elderly care

Healthcare workers and caregivers in China have had their work greatly simplified thanks to the ULS Robotics exoskeleton, weighing only five kilograms but enabling users to lift up to 30 kilograms.

This represents a leap forward in caring for people with limited mobility, while also increasing safety and efficiency. Commercial prototypes have been tested in hospitals and industrial environments.

https://twitter.com/ulsrobotics/status/1317426742168940545

Agrorobots: Autonomous crop spraying

Another example comes from China, where AI-equipped robots have been in use for several years to perform precise crop spraying. The system analyses pests and targets them without the need for human presence, reducing potential health risks.

The application has become standardised, with expectations for further expansion and improvement in the near future.

The stretchable battery of the future

Researchers in Sweden have developed a flexible battery that can double in length without losing energy, making it ideal for wearable technologies.

Although not yet commercially available, it has been covered in scientific journals. The aim is for it to become a key component in bendable devices, smart clothing and medical implants.

Volonaut Airbike: A sci-fi vehicle takes off

When it comes to innovation, the Volonaut Airbike hits the mark perfectly. Designed to resemble a single-seat speeder bike from Star Wars, it represents a giant leap toward personal air travel.

Functional prototypes exist, but testing remains limited due to high production costs and regulatory hurdles related to traffic laws. Nevertheless, the Polish company behind it remains committed to this idea, and it will be exciting to follow its progress.

NEO robot: The humanoid household assistant

A Norwegian company has been developing a humanoid robot capable of performing household tasks, including gardening chores like collecting and bagging leaves or grass.

These are among the first serious steps toward domestic humanoid assistants. Currently functioning in demo mode, the robot has received backing from OpenAI.

Lenovo Yoga Solar: The laptop that loves sunlight

If you find yourself without a charger but with access to direct sunlight, this laptop will do everything it can to keep you powered. Using solar energy, 20 minutes of charging in sunlight provides around one hour of video playback.

Perfect for environmentalists and digital nomads. Although not yet commercially available, it has been showcased at several major tech expos.

https://www.youtube.com/watch?v=px1iEW600Pk

What comes next: The need for smart regulation

As technology races ahead, regulation must catch up. From neurotech to autonomous robots, each innovation raises new questions about privacy, accountability, and ethics.

Governments and tech developers alike must collaborate to ensure that these inventions remain tools for good, not risks to society.

So, what is real and what is generated?

This question will only become harder to answer as time goes on. But on the other hand, if the technological revolution continues to head in a useful and positive direction, perhaps there is little to fear.

The true dilemma in this era of rapid innovation may not be about the tools themselves, but about the fundamental question: Is technology shaping us, or do we still shape it?

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rewriting the AI playbook: How Meta plans to win through openness

Meta hosted its first-ever LlamaCon, a high-profile developer conference centred around its open-source language models. Timed to coincide with the release of its Q1 earnings, the event showcased Llama 4, Meta’s newest and most powerful open-weight model yet.

The message was clear – Meta wants to lead the next generation of AI on its own terms, and with an open-source edge. Beyond presentations, the conference represented an attempt to reframe Meta’s public image.

Once defined by social media and privacy controversies, Meta is positioning itself as a visionary AI infrastructure company. LlamaCon wasn’t just about a model. It was about a movement Meta wants to lead, with developers, startups, and enterprises as co-builders.

By holding LlamaCon the same week as its earnings call, Meta strategically emphasised that its AI ambitions are not side projects. They are central to the company’s identity, strategy, and investment priorities moving forward. This convergence of messaging signals a bold new chapter in Meta’s evolution.

The rise of Llama: From open-source curiosity to strategic priority

When Meta introduced LLaMA 1 in 2023, the AI community took notice of its open-weight release policy. Unlike OpenAI and Anthropic, Meta allowed researchers and developers to download, fine-tune, and deploy Llama models on their own infrastructure. That decision opened the floodgates of experimentation and grassroots innovation.

Now with Llama 4, the models have matured significantly, featuring better instruction tuning, multilingual capacity, and improved safety guardrails. Meta’s AI researchers have incorporated lessons learned from previous iterations and community feedback, making Llama 4 not just an update but a strategic inflexion point.

Crucially, Meta is no longer releasing Llama as a research novelty. It is now a platform and stable foundation for third-party tools, enterprise solutions, and Meta’s AI products. That is a turning point, where open-source ideology meets enterprise-grade execution.

Zuckerberg’s bet: AI as the engine of Meta’s next chapter

Mark Zuckerberg has rarely shied away from bold, long-term bets—whether it’s the pivot to mobile in the early 2010s or the more recent metaverse gamble. At LlamaCon, he made clear that AI is now the company’s top priority, surpassing even virtual reality in strategic importance.

He framed Meta as a ‘general-purpose AI company’, focused on both the consumer layer (via chatbots and assistants) and the foundational layer (models and infrastructure). The Meta CEO envisions a world where Meta powers both the AI you talk to and the AI your apps are built on—a dual play that rivals Microsoft’s partnership with OpenAI.

This bet comes with risk. Investors are still sceptical about Meta’s ability to turn research breakthroughs into a commercial advantage. But Zuckerberg seems convinced that whoever controls the AI stack—hardware, models, and tooling—will control the next decade of innovation, and Meta intends to be one of those players.

A costly future: Meta’s massive AI infrastructure investment

Meta’s capital expenditure guidance for 2025—$60 to $65 billion—is among the largest in tech history. These funds will be spent primarily on AI training clusters, data centres, and next-gen chips.

That level of spending underscores Meta’s belief that scale is a competitive advantage in the LLM era. Bigger compute means faster training, better fine-tuning, and more responsive inference—especially for billion-parameter models like Llama 4 and beyond.

However, such an investment raises questions about whether Meta can recoup this spending in the short term. Will it build enterprise services, or rely solely on indirect value via engagement and ads? At this point, no monetisation plan is directly tied to Llama—only a vision and the infrastructure to support it.

Economic clouds: Revenue growth vs Wall Street’s expectations

Meta reported an 11% year-over-year increase in revenue in Q1 2025, driven by steady performance across its ad platforms. However, Wall Street reacted negatively: the company’s stock fell nearly 13% following the earnings report, as investors worried about the ballooning costs of Meta’s AI ambitions.

Despite revenue growth, Meta’s margins are thinning, mainly due to front-loaded investments in infrastructure and R&D. While Meta frames these as essential for long-term dominance in AI, investors are still anchored to short-term profit expectations.

A fundamental tension is at play here – Meta is acting like a venture-stage AI startup with moonshot spending, while being valued as a mature, cash-generating public company. Whether this tension resolves through growth or retrenchment remains to be seen.

Global headwinds: China, tariffs, and the shifting tech supply chain

Beyond internal financial pressures, Meta faces growing external challenges. Trade tensions between the US and China have disrupted the global supply chain for semiconductors, AI chips, and data centre components.

Meta’s international outlook is dimming with tariffs increasing and Chinese advertising revenue falling. That is particularly problematic because Meta’s AI infrastructure relies heavily on global suppliers and fabrication facilities. Any disruption in chip delivery, especially GPUs and custom silicon, could derail its training schedules and deployment timelines.

At the same time, Meta is trying to rebuild its hardware supply chain, including in-house chip design and alternative sourcing from regions like India and Southeast Asia. These moves are defensive but reflect how AI strategy is becoming inseparable from geopolitics.

Llama 4 in context: How it compares to GPT-4 and Gemini

Llama 4 represents a significant leap from Llama 2 and is now comparable to GPT-4 in a range of benchmarks. Early feedback suggests strong performance in logic, multilingual reasoning, and code generation.

However, how it handles tool use, memory, and advanced agentic tasks is still unclear. Compared to Gemini 1.5, Google’s flagship model, Llama 4 may still fall short in certain use cases, especially those requiring long context windows and deep integration with other Google services.

But Llama has one powerful advantage – it’s free to use, modify, and self-host. That makes Llama 4 a compelling option for developers and companies seeking control over their AI stack without paying per-token fees or exposing sensitive data to third parties.

Open source vs closed AI: Strategic gamble or masterstroke?

Meta’s open-weight philosophy differentiates it from rivals, whose models are mainly gated, API-bound, and proprietary. By contrast, Meta freely gives away its most valuable assets, such as weights, training details, and documentation.

Openness drives adoption. It creates ecosystems, accelerates tooling, and builds developer goodwill. Meta’s strategy is to win the AI competition not by charging rent, but by giving others the keys to build on its models. In doing so, it hopes to shape the direction of AI development globally.

Still, there are risks. Open weights can be misused, fine-tuned for malicious purposes, or leaked into products Meta doesn’t control. But Meta is betting that being everywhere is more powerful than being gated. And so far, that bet is paying off—at least in influence, if not yet in revenue.

Can Meta’s open strategy deliver long-term returns?

Meta’s LlamaCon wasn’t just a tech event but a philosophical declaration. In an era where AI power is increasingly concentrated and monetised, Meta chooses a different path based on openness, infrastructure, and community adoption.

The company invests tens of billions of dollars without a clear monetisation model. It is placing a massive bet that open models and proprietary infrastructure can become the dominant framework for AI development.

Meanwhile, Meta is facing a major antitrust trial, as the FTC argues its Instagram and WhatsApp acquisitions were made to eliminate competition rather than foster innovation.

Meta’s move positions it as the Android of the LLM era—ubiquitous, flexible, and impossible to ignore. The road ahead will be shaped by both technical breakthroughs and external forces—regulation, economics, and geopolitics.

Whether Meta’s open-source gamble proves visionary or reckless, one thing is clear – the AI landscape is no longer just about who has the most innovative model. It’s about who builds the broadest ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU criticised for secretive security AI plans

A new report by Statewatch has revealed that the European Union is quietly laying the groundwork for the widespread use of experimental AI technologies in policing, border control, and criminal justice.

The report warns that these developments pose serious threats to transparency, accountability, and fundamental rights.

Despite the adoption of the EU AI Act in 2024, broad exemptions allow law enforcement and migration agencies to bypass safeguards, including a full exemption for certain high-risk systems until 2031.

Institutions like Europol and eu-LISA are involved in building technical infrastructure for security-focused AI, often without public knowledge or oversight.

The study also highlights how secretive working groups, such as the European Clearing Board, have influenced legislation to favour police interests.

Critics argue that these moves risk entrenching discrimination and reducing democratic control, especially at a time of rising authoritarian influence within EU institutions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN prepares for possible shifts in US financial contributions

The United Nations faces renewed financial uncertainty as Donald Trump’s administration reviews all US support for international organisations. Trump has already slashed voluntary funding across multiple UN agencies and withdrawn from bodies like the World Health Organization and the Human Rights Council.

A leaked White House memo even suggests that cuts to assessed contributions—mandatory payments that keep core UN operations running—are on the table, sparking fears of a major financial crisis. While a complete US withdrawal from the UN is seen as unlikely, experts warn that the US could cripple the organisation by indefinitely halting payments, creating a gaping hole in its budget.

In 2023, the US contributed around $13 billion to the UN, covering about a quarter of its budget. The potential for missed payments raises concerns not just about immediate financial collapse, but about the future of multilateralism itself, drawing parallels to the League of Nations’ demise in the early 20th century.

The situation is complicated by internal divisions within the Republican Party, with some favouring a transactional approach to UN reform while others push a hardline, anti-multilateralist agenda. With peacekeeping budget negotiations looming and no US ambassador to the UN yet appointed, uncertainty dominates.

Meanwhile, UN Secretary-General António Guterres has launched the UN80 initiative, aiming to streamline operations and reassure sceptical donors, but it remains unclear if these reforms will be enough to placate Washington.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK government urged to outlaw apps creating deepfake abuse images

The Children’s Commissioner has urged the UK Government to ban AI apps that create sexually explicit images through ‘nudification’ technology. AI tools capable of manipulating real photos to make people appear naked are being used to target children.

Concerns in the UK are growing as these apps are now widely accessible online, often through social media and search platforms. In a newly published report, Dame Rachel de Souza warned that children, particularly girls, are altering their online behaviour out of fear of becoming victims of such technologies.

She stressed that while AI holds great potential, it also poses serious risks to children’s safety. The report also recommends stronger legal duties for AI developers and improved systems to remove explicit deepfake content from the internet.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japanese startup Craif raises funds to expand urine-based cancer test

Cancer remains one of the leading causes of death worldwide, with nearly 20 million new cases and 9.7 million deaths recorded in 2022.

In response, Japanese startup Craif, spun off from Nagoya University in 2018, is developing AI-powered early cancer detection software that uses microRNA (miRNA) instead of relying on traditional methods.

The company has just raised $22 million in Series C funding, bringing its total to $57 million, with plans to expand into the US market and strengthen its research and development efforts.

Craif was founded after co-founder and CEO Ryuichi Onose experienced the impact of cancer within his own family. Partnering with associate professor Takao Yasui, who had discovered a new technique for early cancer detection using urinary biomarkers, the company created a non-invasive urine-based test.

Instead of invasive blood tests, Craif’s technology allows patients to detect cancers as early as Stage 1 from the comfort of their own homes, making regular screening more accessible and less daunting.

Unlike competitors who depend on cell-free DNA (cfDNA), Craif uses microRNA, a biomarker known for its strong link to early cancer biology. Urine is chosen instead of blood because it contains fewer impurities, offering clearer signals and reducing measurement errors.

Craif’s first product, miSignal, which tests for seven different types of cancer, is already on the market in Japan and has attracted around 20,000 users through clinics, pharmacies, direct sales, and corporate wellness programmes.

The new funding will enable Craif to enter the US market, complete clinical trials by 2029, and seek FDA approval. The company also plans to expand its detection capabilities to cover ten types of cancer this year and to explore applications for other conditions, such as dementia, rather than limiting its technology to cancer alone.

With a growing presence in California and partnerships with dozens of US medical institutions, Craif is positioning itself as a major player in the future of early disease detection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI educational race between China and USA brings some hope

The AI race between China and the USA is shifting to classrooms. As AI governance expert Jovan Kurbalija highlights in his analysis of global AI strategies, both countries see AI literacy as a ‘strategic imperative’. From President Trump’s executive order to advance AI education to China’s new AI education strategy, the two superpowers are betting big on nurturing homegrown AI talent.

Kurbalija sees the focus on AI education as a rare bright spot in increasingly fractured tech geopolitics: ‘When students in Shanghai debug code alongside peers in Silicon Valley via open-source platforms, they’re not just building algorithms—they’re building trust.’

This grassroots collaboration, he argues, could soften the edges of emerging AI nationalism and support new types of digital and AI diplomacy.

He concludes that the latest AI education initiatives are ‘not just about who wins the AI race but, even more importantly, how we prepare humanity for the forthcoming AI transformation and coexistence with advanced technologies.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI startup Cluely offers controversial cheating tool

A controversial new startup called Cluely has secured $5.3 million in seed funding to expand its AI-powered tool designed to help users ‘cheat on everything,’ from job interviews to exams.

Founded by 21-year-old Chungin ‘Roy’ Lee and Neel Shanmugam—both former Columbia University students—the tool works via a hidden browser window that remains invisible to interviewers or test supervisors.

The project began as ‘Interview Coder,’ originally intended to help users pass technical coding interviews on platforms like LeetCode.

Both founders faced disciplinary action at Columbia over the tool, eventually dropping out of the university. Despite ethical concerns, Cluely claims its technology has already surpassed $3 million in annual recurring revenue.

The company has drawn comparisons between its tool and past innovations like the calculator and spellcheck, arguing that it challenges outdated norms in the same way. A viral launch video showing Lee using Cluely on a date sparked backlash, with critics likening it to a scene from Black Mirror.

Cluely’s mission has sparked widespread debate over the use of AI in high-stakes settings. While some applaud its bold approach, others worry it promotes dishonesty.

Amazon, where Lee reportedly landed an internship using the tool, declined to comment on the case directly but reiterated that candidates must agree not to use unauthorised tools during the hiring process.

The startup’s rise comes amid growing concern over how AI may be used—or misused—in both professional and personal spheres.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!