The reality behind AI hype

As governments and tech leaders gather at global forums such as the AI Impact Summit in New Delhi, one assumption dominates discussion: the more computing power poured into AI, the better it will become. In his blog post ‘“The elephant in the AI room”: Does more computing power really bring more useful AI?’, Jovan Kurbalija questions whether that belief is as solid as it seems.

For years, the AI race has been driven by the idea that ever-larger models and vast GPU farms are the key to progress. That logic has justified enormous energy consumption and multi-billion-dollar investments in data centres. But Kurbalija argues that bigger is not always better, especially when everyday tasks often require far less computational firepower than frontier models provide.

He points out that most people rely on a limited vocabulary and a small set of reasoning tools in their daily work. Smaller, specialised AI systems can already draft emails, summarise meetings, or classify documents effectively. The push for trillion-parameter models, he suggests, may reflect ambition more than necessity.

There are also technical limits to consider. Adding more computing power can lead to diminishing returns, and some prominent researchers doubt that simply scaling up large language models will lead to human-level intelligence. More hardware, Kurbalija notes, does not automatically solve deeper conceptual challenges in AI design.

The economic picture is equally complex. Training cutting-edge proprietary models can cost hundreds of millions of dollars, while newer open-source systems have been developed at a fraction of that price. If cheaper models can deliver similar performance, questions arise about the sustainability of current spending and whether investors are backing efficiency or hype.

Beyond cost and performance lies a broader ethical issue. Even if massive computing power could eventually produce superintelligent systems, the key question is whether society truly needs them. Kurbalija warns that technological possibilities should not be confused with social desirability, and that innovation without a clear purpose can create new risks.

Rather than escalating an arms race for ever-larger models, the blog calls for a shift toward needs-driven design. Right-sized tools, viable business models, and ethical clarity about AI’s role in society may prove more valuable than raw computing muscle.

In challenging the prevailing narrative, Kurbalija urges policymakers and industry leaders to rethink whether the future of AI depends on scale alone or on smarter priorities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Windows 11 gains enterprise 5G management through Ericsson partnership

Ericsson and Microsoft have integrated advanced 5G into Windows 11 to simplify secure enterprise laptop connectivity. The update embeds AI-driven 5G management, enabling IT teams to automate connections and enforce policy-based controls at scale.

The solution combines Microsoft Intune with Ericsson Enterprise 5G Connect, a cloud-based platform that monitors network quality and optimises performance. Enterprises can switch service providers and automatically apply internal connectivity policies.

IT departments can remotely provision eSIMs, prioritise 5G networks, and enforce secure profiles across laptop fleets. Automation reduces manual configuration and ensures consistent compliance across locations and service providers.

The companies say the integration addresses long-standing barriers to adopting cellular-connected PCs, including complexity and fragmented management. Multi-market pilots have preceded commercial availability in the United States, Sweden, Singapore, and Japan.

Additional launches are planned in 2026 across Spain, Germany, and Finland. Executives from both firms describe the collaboration as a step toward AI-ready enterprise devices with secure, always-on connectivity.

Mistral AI expands European footprint with acquisition of Koyeb

Mistral AI has strengthened its position in Europe’s AI sector through the acquisition of Koyeb. The deal forms part of its strategy to build end-to-end capacity for deploying advanced AI systems across European infrastructure.

The company has been expanding beyond model development into large-scale computing. It is currently building new data centre facilities, including a primary site in France and a €1.2 billion facility in Sweden, both aimed at supporting high-performance AI workloads.

The acquisition follows a period of rapid growth for Mistral AI, which reached a valuation of €11.7 billion after investment from ASML. French public support has also played a role in accelerating its commercial and research progress.

Mistral AI now positions itself as a potential European technology champion, seeking to combine model development, compute infrastructure and deployment tools into a fully integrated AI ecosystem.

WordPress.com integrates AI assistant into its editing workflow

Major updates to AI tooling are reshaping website creation as WordPress.com brings an integrated assistant directly into its editor.

The new system works within each site rather than relying on external chat windows, allowing users to adjust layouts, create content, and modify designs in real time. The tool is available to customers on Business and Commerce plans, although activation requires a manual opt-in.

The assistant appears across several core areas of the platform. Inside the editor, it can refine writing, modify styles, translate text and generate new sections with simple instructions.

In the Media Library, you can create new images or apply targeted edits through the platform’s in-house Nano Banana models, eliminating the need for separate subscriptions. Block notes provide an additional way to request suggestions, checks, or link-based context directly within each page.

The updates aim to make site building faster and more efficient by keeping all AI interactions within the existing workflow. Users who prefer a manual experience can ignore the feature entirely, since the assistant remains inactive unless deliberately enabled.

WordPress.com also notes that the system works best with block themes, although image tools are still available for classic themes.

Rising DRAM prices push memory to the centre of AI strategy

The cost of running AI systems is shifting towards memory rather than compute, as the price of DRAM has risen sharply over the past year. Efficient memory orchestration is now becoming a critical factor in keeping inference costs under control, particularly for large-scale deployments.

Analysts such as Doug O’Laughlin and Val Bercovici of Weka note that prompt caching is turning into a complex field.

Anthropic has expanded its caching guidance for Claude, with detailed tiers that determine how long data remains hot and how much can be saved through careful planning. The structure enables significant efficiency gains, though each additional token can displace previously cached content.
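
The displacement dynamic described here, where writing new content into a full cache can push out previously cached material, can be sketched with a toy model. This is not Anthropic's actual caching logic: the class, the token budget, and the oldest-first eviction policy below are illustrative assumptions only.

```python
from collections import OrderedDict

class PromptCache:
    """Toy model of a prompt-prefix cache with a fixed token budget.

    Illustrative only: real provider-side caches use tiered TTLs and
    their own eviction rules, not this exact scheme.
    """

    def __init__(self, budget_tokens):
        self.budget = budget_tokens
        self.entries = OrderedDict()  # prefix name -> token count, oldest first

    def put(self, prefix, n_tokens):
        # Re-caching an existing prefix simply refreshes it (keeps it "hot").
        if prefix in self.entries:
            self.entries.move_to_end(prefix)
            return
        self.entries[prefix] = n_tokens
        # Each additional token can displace previously cached content:
        # evict the oldest entries until the budget is respected again.
        while sum(self.entries.values()) > self.budget:
            self.entries.popitem(last=False)

    def hit(self, prefix):
        return prefix in self.entries

cache = PromptCache(budget_tokens=100)
cache.put("system-instructions", 60)
cache.put("few-shot-examples", 30)
assert cache.hit("system-instructions")   # both prefixes fit within budget
cache.put("long-new-context", 50)         # pushes the total over budget
assert not cache.hit("system-instructions")  # oldest entry displaced
assert cache.hit("long-new-context")
```

The point of the sketch is the trade-off, not the mechanism: careful planning of which prefixes to cache, and in what order, determines how much of the promised efficiency gain actually survives contact with a finite budget.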

The growing complexity reflects a broader shift in AI architecture. Memory is being treated as a valuable and scarce resource, with optimisation required at multiple layers of the stack.

Startups such as Tensormesh are already working on cache optimisation tools, while hyperscalers are examining how best to balance DRAM and high-bandwidth memory across their data centres.

Better orchestration should reduce the number of tokens required for queries, and models are becoming more efficient at processing those tokens. As costs fall, applications that are currently uneconomical may become commercially viable.

China boosts AI leadership with major model launches ahead of Lunar New Year

Leading Chinese AI developers have unveiled a series of advanced models ahead of the Lunar New Year, strengthening the country’s position in the global AI sector.

Major firms such as Alibaba, ByteDance, and Zhipu AI introduced new systems designed to support more sophisticated agents, faster workflows and broader multimedia understanding.

Industry observers also expect an imminent release from DeepSeek, whose previous model disrupted global markets last year.

Alibaba’s Qwen 3.5 model offers improved multilingual support across text, images and video, and enables rapid deployment of AI agents rather than relying on slower generation pipelines.

ByteDance followed up with updates to its Doubao chatbot and the second version of its image-to-video tool, SeeDance, which has drawn copyright concerns from the Motion Picture Association due to the ease with which users can recreate protected material.

Zhipu AI expanded the landscape further with GLM-5, an open-source model built for long-context reasoning, coding tasks, and multi-step planning. The company highlighted the model’s reliance on Huawei hardware as part of China’s efforts to strengthen domestic semiconductor resilience.

Meanwhile, anticipation continues to build for DeepSeek’s fourth-generation system, which follows the widespread adoption and market turbulence triggered by its V3 model.

Authorities across parts of Europe have restricted the use of DeepSeek models in public institutions because of data security and cybersecurity concerns.

Even so, the rapid pace of development in China suggests intensifying competition in the design of agent-focused systems capable of managing complex digital tasks without constant human oversight.

Meta explores AI system for digital afterlife

Meta has been granted a patent describing an AI system that could simulate a person’s social media activity, even after their death. The patent, originally filed in 2023 and approved in late December, outlines how AI could replicate a user’s online presence by drawing on their past posts, messages and interactions.

According to the filing, a large language model could analyse a person’s digital history, including comments, chats, voice messages and reactions, to generate new content that mirrors their tone and behaviour. The system could respond to other users, publish updates and continue conversations in a way that resembles the original account holder.

The patent suggests the technology could be used when someone is temporarily absent from a platform, but it also explicitly addresses the possibility of continuing activity after a user’s death. It notes that such a scenario would carry more permanent implications, as the person would not be able to return and reclaim control of the account.

More advanced versions of the concept could potentially simulate voice or even video interactions, effectively creating a digital persona capable of engaging with others in real time. The idea aligns with previous comments by Meta CEO Mark Zuckerberg, who has said AI could one day help people interact with digital representations of loved ones, provided consent mechanisms are in place.

Meta has stressed that the patent does not signal an imminent product launch, describing it as a protective filing for a concept that may never be developed. Still, similar services offered by startups have already sparked ethical debate, raising questions about digital identity, consent and the emotional impact of recreating the online presence of someone who has died.

AI cheating allegation sparks discrimination lawsuit

A University of Michigan student has filed a federal lawsuit accusing the university of disability discrimination after professors allegedly claimed she used AI to write her essays. The student, identified in court documents as ‘Jane Doe,’ denies using AI and argues that symptoms linked to her medical conditions were wrongly interpreted as signs of cheating.

According to the complaint, Doe has obsessive-compulsive disorder and generalised anxiety disorder. Her lawyers argue that traits associated with those conditions, including a formal tone, structured writing, and consistent style, were cited by instructors as evidence that her work was AI-generated. They say she provided proof and medical documentation supporting her case but was still subjected to disciplinary action and prevented from graduating.

The lawsuit alleges that the university failed to provide appropriate disability-related accommodations during the academic integrity process. It also claims that the same professor who raised the concerns remained responsible for grading and overseeing remedial work, despite what the complaint describes as subjective judgments and questionable AI-detection methods.

The case highlights broader tensions on campuses as educators grapple with the rapid rise of generative AI tools. Professors across the United States report growing difficulty distinguishing between student work and machine-generated text, while students have increasingly challenged accusations they say rely on unreliable detection software.

Similar legal disputes have emerged elsewhere, with students and families filing lawsuits after being accused of submitting AI-written assignments. Research has suggested that some AI-detection systems can produce inaccurate results, raising concerns about fairness and due process in academic settings.

The University of Michigan has been asked to comment on the lawsuit, which is likely to intensify debate over how institutions balance academic integrity, disability rights, and the limits of emerging AI detection technologies.

Researchers teach AI to interpret complex scientific data from brain scans to alloy design

Research teams are developing artificial intelligence systems designed to assist scientists in making sense of complex, high-dimensional data across disciplines such as neuroscience and materials engineering.

Traditional analysis methods often require extensive human expertise and time; AI models trained to identify patterns, reduce noise, and suggest hypotheses could significantly accelerate research cycles.

In neuroscience, AI is being used to extract meaningful features from detailed brain imaging datasets, enabling better understanding of neural processes and potentially enhancing diagnosis and treatment development.

In materials science, generative and predictive models help identify promising alloy compositions and properties by learning from vast experimental datasets, reducing reliance on trial-and-error experimentation.

Researchers emphasise that these AI tools don’t replace domain expertise but rather augment scientists’ abilities to navigate complex datasets, improve reproducibility and prioritise experiments with higher scientific payoff.

Ethical considerations and careful validation remain important to ensure models don’t propagate biases or misinterpret subtle signals.

UAE launches first AI clinical platform

A Pakistani American surgeon has launched what is described as the UAE’s first AI clinical intelligence platform across the country’s public healthcare system. The rollout was announced in Dubai in partnership with Emirates Health Services.

Boston Health AI, founded by Dr Adil Haider, introduced the platform known as Amal at a major health expo in Dubai. The system conducts structured medical interviews in Arabic, English and Urdu before consultations, generating summaries for physicians.

The company said the technology aims to reduce documentation burdens and cognitive load on clinicians in the UAE. By organising patient histories and symptoms in advance, Amal is designed to support clinical decision making and improve workflow efficiency in Dubai and other emirates.

Before entering the UAE market, Boston Health AI deployed its platform in Pakistan across more than 50 healthcare facilities. The firm states that over 30,000 patient interactions were recorded in Pakistan, where a local team continues to develop and refine the AI system.
