Gartner warns that more than 40 percent of agentic AI projects could be cancelled by 2027

More than 40% of agentic AI projects will likely be cancelled by the end of 2027 due to rising costs, limited business value, and poor risk control, according to research firm Gartner.

These cancellations are expected as many early-stage initiatives remain trapped in hype, often misapplied and far from ready for real-world deployment.

Gartner analyst Anushree Verma warned that most agentic AI efforts are still at the proof-of-concept stage. Instead of focusing on scalable production, many companies have been distracted by experimental use cases, underestimating the cost and complexity of full-scale implementation.

A recent poll by Gartner found that only 19% of organisations had made significant investments in agentic AI, while 31% were undecided or waiting.

Much of the current hype is fuelled by vendors engaging in ‘agent washing’ — marketing existing tools like chatbots or RPA under a new agentic label without offering true agentic capabilities.

Out of thousands of vendors, Gartner believes only around 130 offer legitimate agentic solutions. Verma noted that most agentic models today lack the intelligence to deliver strong returns or follow complex instructions independently.

Still, agentic AI holds long-term promise. Gartner expects 15% of daily workplace decisions to be handled autonomously by 2028, up from zero in 2024. Moreover, one-third of enterprise applications will include agentic capabilities by then.

However, to succeed, organisations must reimagine workflows from the ground up, focusing on enterprise-wide productivity instead of isolated task automation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube adds AI search results for travel, shopping and more

YouTube is launching a new AI-powered search feature that mirrors Google’s AI Overviews, aiming to improve how users discover content on the platform.

The update introduces an ‘AI-powered search results carousel’ when YouTube users search for shopping, travel, or local activities.

The carousel offers a collection of video thumbnails and an AI-generated summary highlighting the key topics related to the search. For example, someone searching for ‘best beaches in Hawaii’ might see curated clips of snorkelling locations, volcanic coastlines, and planning tips — all surfaced by the AI.

Currently, the feature is available only to YouTube Premium users in the US. However, the platform plans to expand its conversational AI tool — which provides deeper insights, suggestions, and video summaries — to non-Premium users in the US soon.

That tool was first launched in 2023 to help users better understand content while watching.

YouTube is doubling down on AI features to keep users engaged and make content discovery more intuitive, especially in categories involving planning and decision-making.

Meta hires top OpenAI researcher for AI superintelligence push

Meta has reportedly hired AI researcher Trapit Bansal, who previously worked closely with OpenAI co-founder Ilya Sutskever on reinforcement learning and co-created the o1 reasoning model.

Bansal joins Meta’s ambitious superintelligence team, which is focused on further pushing AI reasoning capabilities.

Meta brought in former Scale AI CEO Alexandr Wang to lead the new team after investing $14.3 billion in the AI data labelling company.

Alongside Bansal, several other notable figures have recently joined, including three OpenAI researchers from its Zurich office, former Google DeepMind researcher Jack Rae, and a senior machine learning lead from Sesame AI.

Meta CEO Mark Zuckerberg is accelerating AI recruitment by negotiating with prominent names like former GitHub CEO Nat Friedman and Safe Superintelligence co-founder Daniel Gross.

Despite these aggressive efforts, OpenAI CEO Sam Altman revealed that even $100 million joining bonuses have failed to lure key staff away from his firm.

Zuckerberg has also explored acquiring startups such as Sutskever’s Safe Superintelligence and Perplexity AI, further highlighting Meta’s urgency in catching up in the generative AI race.

IGF 2025: Africa charts a sovereign path for AI governance

African leaders at the Internet Governance Forum (IGF) 2025 in Oslo called for urgent action to build sovereign and ethical AI systems tailored to local needs. Hosted by the German Federal Ministry for Economic Cooperation and Development (BMZ), the session brought together voices from government, civil society, and private enterprises.

Moderated by Ashana Kalemera, Programmes Manager at CIPESA, the discussion focused on ensuring AI supports democratic governance in Africa. ‘We must ensure AI reflects our realities,’ Kalemera said, emphasising fairness, transparency, and inclusion as guiding principles.

Neema Iyer, Executive Director of the civic tech organisation Pollicy, warned that AI can harm governance through surveillance, disinformation, and political manipulation. ‘Civil society must act as watchdogs and storytellers,’ she said, urging public interest impact assessments and grassroots education.

Representing South Africa, Mlindi Mashologu stressed the need for transparent governance frameworks rooted in constitutional values. ‘Policies must be inclusive,’ he said, highlighting explainability, data bias removal, and citizen oversight as essential components of trustworthy AI.

Lacina Koné, CEO of Smart Africa, called for urgent action to avoid digital dependency. ‘We cannot be passively optimistic. Africa must be intentional,’ he stated. Over 1,000 African startups rely on foreign AI models, creating sovereignty risks.

Koné emphasised that Africa should focus on beneficial AI rather than the most powerful models, highlighting agriculture, healthcare, and education as sectors that local AI could transform. ‘It’s about opportunity for the many, not just the few,’ he said.

From Mauritania, Matchiane Soueid Ahmed shared her country’s experience developing a national AI strategy. Challenges include poor rural infrastructure, technical capacity gaps, and lack of institutional coordination. ‘Sovereignty is not just territorial—it’s digital too,’ she noted.

Shikoh Gitau, CEO of KALA in Kenya, brought a private sector perspective. ‘We must move from paper to pavement,’ she said. Her team runs an AI literacy campaign across six countries, training teachers directly through their communities.

Gitau stressed the importance of enabling environments and blended financing. ‘Governments should provide space, and private firms must raise awareness,’ she said. She also questioned imported frameworks: ‘What definition of democracy are we applying?’

Audience members from Gambia, Ghana, and Liberia raised key questions about policy harmonisation, youth fears over job losses, and AI readiness. Koné responded that Smart Africa is benchmarking national strategies and promoting convergence without erasing national sovereignty.

Though 19 African countries have published AI strategies, speakers noted that implementation remains slow. Practical action—such as infrastructure upgrades, talent development, and public-private collaboration—is vital to bring these frameworks to life.

The panel underscored the need to build AI systems prioritising inclusion, utility, and human rights. Investments in digital literacy, ethics boards, and regulatory sandboxes were cited as key tools for democratic AI governance.

Kalemera concluded, ‘It’s not yet Uhuru for AI in Africa—but with the right investments and partnerships, the future is promising.’ The session reflected cautious optimism and a strong desire for Africa to shape its AI destiny.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

EU urged to pause AI Act rollout

The digital sector is urging EU leaders to delay the AI Act, citing missing guidance and legal uncertainty. Industry group CCIA Europe warns that pressing ahead could damage AI innovation and stall the bloc’s economic ambitions.

The AI Act’s rules for general-purpose AI models are set to apply in August, but key frameworks are incomplete. Concerns have grown as the European Commission risks missing deadlines while the region seeks a €3.4 trillion AI-driven economic boost by 2030.

CCIA Europe is calling on EU heads of state to order a pause in implementation so that companies have time to comply. Such a delay would allow final standards to be established, offering developers clarity and supporting AI competitiveness.

Failure to adjust the timeline could leave Europe struggling to lead in AI, according to CCIA Europe’s leadership. A rushed approach, they argue, risks harming the very innovation the AI Act aims to promote.

Infosys chairman warns of global risks from tariffs and AI

Infosys chairman Nandan Nilekani has warned of mounting global uncertainty driven by tariff wars, AI and the ongoing energy transition.

At the company’s 44th annual general meeting, he urged businesses to de-risk sourcing and diversify supply chains as geopolitical trade tensions reshape global commerce.

He described a ‘perfect storm’ of converging challenges pushing the world away from a single global market and towards fragmented trade blocs. As firms navigate the shift, they must choose between regions and adopt more strategic, resilient supply networks.

Addressing AI, Nilekani acknowledged the disruption it may bring to the workforce but framed it as an opportunity for digital transformation. He said Infosys is investing in both ‘AI foundries’ for innovation and ‘AI factories’ for scale, with over 275,000 employees already trained in AI technologies.

Energy transition was also flagged as a major uncertainty, since the future depends on breakthroughs in renewable sources such as solar, wind and hydrogen. Nilekani stressed that all businesses must navigate rapid technological and operational change before they can move confidently into an unpredictable future.

Meta wins copyright case over AI training

Meta has won a copyright lawsuit brought by a group of authors who accused the company of using their books without permission to train its Llama generative AI.

A US federal judge in San Francisco ruled the AI training was ‘transformative’ enough to qualify as fair use under copyright law.

Judge Vince Chhabria noted, however, that future claims could be more successful. He warned that using copyrighted books to build tools capable of flooding the market with competing works may not always be protected by fair use, especially when such tools generate vast profits.

The case involved pirated copies of books, including Sarah Silverman’s memoir ‘The Bedwetter’ and Junot Diaz’s award-winning novel ‘The Brief Wondrous Life of Oscar Wao’. Meta defended its approach, stating that open-source AI drives innovation and relies on fair use as a key legal principle.

Chhabria clarified that the ruling does not confirm the legality of Meta’s actions, only that the plaintiffs made weak arguments. He suggested that more substantial evidence and legal framing might lead to a different outcome in future cases.

WhatsApp launches AI feature to summarise unread messages

WhatsApp has introduced a new feature that uses Meta AI to help users manage unread messages more easily. Named ‘Message Summaries’, the tool provides quick overviews of missed messages in individual and group chats, helping users catch up without scrolling through long threads.

The summaries are generated using Meta’s Private Processing technology, which operates inside a Trusted Execution Environment. The secure cloud-based system ensures that neither Meta nor WhatsApp — nor anyone else in the conversation — can access your messages or the AI-generated summaries.

According to WhatsApp, Message Summaries are entirely private. No one else in the chat can see the summary created for you. If anyone attempts to tamper with the secure system, processing halts immediately or the tampering is exposed by a built-in transparency check.

Meta has designed the system around three principles: secure data handling during processing and transmission, strict enforcement of protections against tampering, and provable transparency to track any breach attempt.

Nvidia becomes world’s most valuable company after stock surge

Nvidia shares hit an all-time high on 25 June, rising 4.3 percent to US$154.31. The stock has surged 63 percent since April, adding another US$1.5 trillion to its market value.

With a total market capitalisation of about US$3.77 trillion, Nvidia has overtaken Microsoft to become the world’s most valuable listed company.

Strong earnings and growing AI infrastructure spending by major clients — including Microsoft, Meta, Alphabet and Amazon — have reinforced investor confidence.

Nvidia’s CEO, Jensen Huang, told shareholders that demand remains strong and that the computer industry is still in the early stages of a major AI upgrade cycle.

Despite gaining 15 percent in 2025, following a 170 percent rise in 2024 and a 240 percent surge in 2023, Nvidia still appears reasonably valued. It trades at 31.5 times forward earnings, below its 10-year average and close to the Nasdaq 100 multiple, even though its projected growth rate is higher.
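The valuation figures above can be cross-checked with simple back-of-the-envelope arithmetic. A minimal sketch, using only the numbers reported in this article (share price, market capitalisation, and forward P/E multiple); the derived values are implications of those inputs, not separately reported figures:

```python
# Inputs reported in the article.
share_price = 154.31   # USD, 25 June all-time high
market_cap = 3.77e12   # USD, approximate total market capitalisation
forward_pe = 31.5      # forward price-to-earnings multiple

# Implied shares outstanding: market cap divided by share price.
shares_outstanding = market_cap / share_price

# Implied forward earnings per share: price divided by forward P/E.
forward_eps = share_price / forward_pe

print(f"Implied shares outstanding: {shares_outstanding / 1e9:.1f}B")
print(f"Implied forward EPS: ${forward_eps:.2f}")
```

Running this yields roughly 24.4 billion implied shares and an implied forward EPS of about $4.90, consistent with the reported price and multiple.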

Analyst sentiment remains firmly bullish. Nearly 90 percent of analysts tracked by Bloomberg recommend buying the stock, which trades below their average price target.

Yet, Nvidia is less widely held among institutional investors than peers like Microsoft and Apple, indicating further room for buying as AI momentum continues into 2026.

AI sandboxes pave path for responsible innovation in developing countries

At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts from around the world gathered to examine how AI sandboxes—safe, controlled environments for testing new technologies under regulatory oversight—can help ensure that innovation remains responsible and inclusive, especially in developing countries. Moderated by Sophie Tomlinson of the DataSphere Initiative, the session spotlighted the growing global appeal of sandboxes, initially developed for fintech and now extending into healthcare, transportation, and data governance.

Speakers emphasised that sandboxes provide a much-needed collaborative space for regulators, companies, and civil society to test AI solutions before launching them into the real world. Mariana Rozo-Paz from the DataSphere Initiative likened them to childhood spaces for building and experimentation, underscoring their agility and potential for creative governance.

From the European AI Office, Alex Moltzau described how the EU AI Act integrates sandboxes to support safe innovation and cross-border collaboration. On the African continent, where 25 sandboxes already exist (mainly in finance), countries like Nigeria are using them to implement data protection laws and shape national AI strategies. However, funding and legal authority remain hurdles.

The workshop laid bare several shared challenges: limited resources, a lack of clear legal frameworks, and insufficient participation by civil society. Natalie Cohen of the OECD pointed out that just 41% of countries trust governments to regulate new technologies effectively—a gap that sandboxes can help bridge. By enabling evidence-based experimentation and promoting transparency, they serve as trust-building tools among governments, businesses, and communities.

Despite regional differences, there was consensus that AI sandboxes—when well-designed and inclusive—can drive equitable digital innovation. With initiatives like the Global Sandboxes Forum and OECD toolkits in progress, stakeholders signalled a readiness to move from theory to practice, viewing sandboxes as more than just regulatory experiments—they are, increasingly, catalysts for international cooperation and responsible AI deployment.
