Monolith’s AI powers Nissan push to halve testing time

Nissan and Monolith have extended their strategic partnership for three years to apply AI across more vehicle programmes in Europe. The collaboration supports Nissan's plans to compress development timelines and improve operational efficiency, with early outcomes guiding a wider rollout.

Engineers at Nissan Technical Centre Europe will use Monolith to predict test results based on decades of data and simulations. By reducing prototype builds and running targeted, high-value experiments, teams can focus more effectively on design decisions while preserving both accuracy and test coverage.

A prior project on chassis bolt joints saw AI recommend optimal torque ranges and prioritise the next best tests for engineers. Compared with the non-AI process, physical testing fell by 17 percent in controlled comparisons. Similar approaches are being prepared for future models beyond the LEAF.

Leaders say a broader deployment could halve testing time across European programmes if comparable gains are achieved. Governance includes rigorous validation before any changes reach production, while operational benefits include faster iteration cycles and less test waste.

Monolith’s toolkit includes next-test recommendation and anomaly detection to flag outliers before rework. Nissan frames the push as an innovation with sustainability benefits, cutting material use while maintaining quality across a complex supply chain. Partners will share results as adoption scales.
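Monolith's actual algorithms are proprietary and not described in the announcement. As a generic, hedged illustration of what "flagging outliers before rework" can mean in its simplest form, the sketch below applies a z-score rule to historical test measurements; all values, names, and the threshold are invented for the example.

```python
import statistics

def flag_outliers(history, new_readings, z_threshold=3.0):
    """Flag readings whose z-score against historical data exceeds the threshold."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return [x for x in new_readings if abs(x - mean) / stdev > z_threshold]

# Invented torque measurements (Nm) from past bolt-joint tests.
history = [52.1, 51.8, 52.4, 51.9, 52.0, 52.2, 51.7, 52.3]
print(flag_outliers(history, [52.0, 55.9, 51.8]))  # flags 55.9 as an outlier
```

Production systems would use far richer models than a single z-score, but the principle of comparing new test data against accumulated history is the same.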

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Study finds AI summaries can flatten understanding compared with reading sources

AI summaries can speed learning, but an extensive study finds they often blunt depth and recall. More than 10,000 participants used chatbots or traditional web search to learn assigned topics. Those relying on chatbot digests showed shallower knowledge and offered fewer concrete facts afterwards.

Researchers from Wharton and New Mexico State conducted seven experiments across various tasks, including gardening, health, and scam awareness. Some groups saw the same facts, either as an AI digest or as source links. Advice written after AI use was shorter, less factual, and more similar across users.

Follow-up raters judged AI-derived advice as less informative and less trustworthy. Participants who used AI also reported spending less time with sources. Lower effort during synthesis reduces the mental work that cements understanding.

Findings land amid broader concerns about summary reliability. A BBC-led investigation recently found that major chatbots frequently misrepresented news content in their responses. The evidence suggests AI summaries should serve as support for critical reading, rather than a substitute for it.

The practical takeaway for learners and teachers is straightforward. Use AI to scaffold questions, outline queries, and compare viewpoints. Build lasting understanding by reading multiple sources, checking citations, and writing your own synthesis before asking a model to refine it.

UK teachers rethink assignments as AI reshapes classroom practice

Nearly eight in ten UK secondary teachers say AI has forced a rethink of how assignments are set, a British Council survey finds. Many now design tasks either to deter AI use or to harness it constructively in lessons. Findings reflect rapid cultural and technological shifts across schools.

Approaches are splitting along two paths: over a third of teachers design AI-resistant tasks, while nearly six in ten deliberately integrate AI tools. Younger staff are the most likely to adapt, yet strong majorities across all age groups report changes to their practice.

Perceived impacts remain mixed. Six in ten teachers worry about pupils' communication skills, with some citing narrower vocabulary and weaker writing and comprehension. Similar shares report improvements in listening, pronunciation, and confidence, suggesting benefits for speech-focused learning.

Language norms are evolving with digital culture. Most UK teachers now look up slang and online expressions, from ‘rizz’ to ‘delulu’ to ‘six, seven’. Staff are adapting lesson design while seeking guidance and training that keeps pace with students’ online lives.

Long-term views diverge. Some believe AI could lift outcomes, while others remain unconvinced and prefer guardrails to limit misuse. British Council leaders say support should focus on practical classroom integration, teacher development, and clear standards that strike a balance between innovation and academic integrity.

Deutsche Telekom joins Theta Network as enterprise validator

Deutsche Telekom has joined the Theta Network as a strategic enterprise validator, alongside Google, Samsung and Sony. The company becomes the first major telecom provider to take part in securing the decentralised blockchain platform.

The partnership involves staking THETA tokens and operating validator nodes that support Theta’s layer-1 infrastructure for AI, cloud and media applications. Deutsche Telekom’s unit, T-Systems MMS, will manage the validator operations.

Theta Labs said the collaboration enhances network resilience and underlines growing enterprise interest in decentralised computing. The project’s EdgeCloud system is designed to distribute AI workloads across global nodes more efficiently.

Deutsche Telekom noted that Theta’s decentralised model aligns with its vision of providing reliable, scalable cloud and edge services for future digital ecosystems.

Growing scrutiny over AI errors in professional use

Judges and employers are confronting a surge in AI-generated mistakes, from fabricated legal citations to inaccurate workplace data. Courts in the United States have already recorded hundreds of flawed filings, raising concerns about unchecked reliance on generative systems.

Experts urge professionals to treat AI as an assistant rather than an authority. Tools can support research and report writing, yet unchecked outputs often contain subtle inaccuracies that could mislead users or damage reputations.

Data scientist Damien Charlotin has identified nearly 500 court documents containing false AI-generated information within months. Even established firms have faced judicial penalties after submitting briefs with non-existent case references, underlining growing professional risks.

Workplace advisers recommend verifying AI results, protecting confidential information, and obtaining consent when using digital notetakers. Training and prompt literacy are becoming essential skills as AI tools continue shaping daily operations across industries.

AI performance improves by mimicking human brain networks

Scientists at the University of Surrey have developed a new method to make artificial intelligence smarter by copying the way the human brain works. Their approach, called Topographical Sparse Mapping, connects AI ‘neurons’ only to nearby or related ones, mimicking how the brain organises information efficiently.

An advanced version, Enhanced Topographical Sparse Mapping, prunes unneeded connections during training, similar to how the brain strengthens useful pathways as it learns. Researchers are also exploring applications in neuromorphic computing, which designs computer systems to mimic the structure and function of the human brain.
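The article does not give the paper's exact formulation, so the sketch below is only a loose illustration, under assumed details, of the two ideas described: a topographic mask that connects each artificial neuron only to nearby inputs, and magnitude-based pruning of the surviving connections during training. All function names, shapes, and parameters are invented.

```python
import numpy as np

def topographic_mask(n_in, n_out, radius=2):
    """Connect each output neuron only to input neurons whose positions,
    mapped onto a shared axis, lie within the given radius."""
    in_pos = np.arange(n_in)
    out_pos = np.arange(n_out) * (n_in / n_out)  # spread outputs over the input axis
    dist = np.abs(in_pos[None, :] - out_pos[:, None])
    return (dist <= radius).astype(float)        # shape (n_out, n_in)

def prune_smallest(weights, mask, fraction=0.2):
    """Drop the smallest-magnitude surviving weights, loosely mimicking
    how the brain discards weak pathways while learning."""
    surviving = np.abs(weights[mask > 0])
    cutoff = np.quantile(surviving, fraction)
    return mask * (np.abs(weights) >= cutoff)

rng = np.random.default_rng(0)
mask = topographic_mask(16, 8, radius=2)
w = rng.normal(size=(8, 16))
sparser = prune_smallest(w, mask, fraction=0.25)
print(mask.sum(), sparser.sum())  # connection counts before and after pruning
```

A dense 8-by-16 layer would carry 128 connections; the topographic mask alone removes most of them before any training, and pruning then thins the remainder, which is where the energy savings come from.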

The approach helps AI models, including tools like ChatGPT, work better while using less electricity. Traditional AI training can waste huge amounts of energy, but the new brain-inspired design cuts unnecessary connections without losing accuracy.

Millions turn to AI to manage finances across the UK

AI is playing an increasingly important role in personal finance, with more than 28 million UK adults having used AI tools over the past year.

Lloyds Banking Group's latest Consumer Digital Index reveals that many individuals turn to platforms like ChatGPT for budgeting, savings planning, and financial education, reporting an average annual saving of £399 through AI-driven insights.

Digital confidence strongly supports financial empowerment. Two-thirds of internet users report that online tools enhance their ability to manage money, while those with higher digital skills experience lower stress and greater control over their finances.

Regular engagement with AI and other digital tools enhances both knowledge and confidence in financial decision-making.

Trust remains a significant concern despite growing usage. Around 80% of users worry about inaccurate information or insufficient personalisation, emphasising the need for reliable guidance.

Jas Singh, CEO of Consumer Relationships at Lloyds, highlights that banks must combine AI innovation with trusted expertise to help people make more intelligent choices and build long-term financial resilience.

Saudi Arabia pushes global AI ambitions with Humain

Saudi Arabia is accelerating its ambitions in AI with the launch of Humain, a homegrown AI company backed by the kingdom’s $1 trillion sovereign wealth fund. The company, financed by the Public Investment Fund, aims to offer a wide range of AI services and tools, including an Arabic large language model capable of understanding diverse dialects and observing Islamic values.

The company has secured major deals to expand its operations, including a $3 billion data centre project with Blackstone’s AirTrunk, a partnership with US chipmaker Qualcomm, and a significant stake acquisition by state-owned Saudi Aramco. The agreements aim to boost AI integration across the kingdom’s key sectors.

Challenges remain, from talent shortages to access to advanced technology, while regional competition is strong. Yet Humain’s leadership remains confident, aiming to position Saudi Arabia as a major player in the global AI landscape.

Brain-inspired networks boost AI performance and cut energy use

Researchers at the University of Surrey have developed a new method to enhance AI by imitating how the human brain connects information. The approach, called Topographical Sparse Mapping, links each artificial neuron only to nearby or related ones, replicating the brain’s efficient organisation.

According to findings published in Neurocomputing, the structure reduces redundant connections and improves performance without compromising accuracy. Senior lecturer Dr Roman Bauer said intelligent systems can now be designed to consume far less energy without sacrificing capability.

Training large models today often requires over a million kilowatt-hours of electricity, a trend he described as unsustainable.

An advanced version, Enhanced Topographical Sparse Mapping, introduces a biologically inspired pruning process that refines neural connections during training, similar to how the brain learns.

Researchers believe that the system could contribute to more realistic neuromorphic computers, which simulate brain functions to process data more efficiently.

The Surrey team said the approach may advance generative AI systems and pave the way for sustainable large-scale model training. Their work highlights how lessons from biology can shape the next generation of energy-efficient computing.

Unexpected language emerges as best for AI prompting

A new joint study by the University of Maryland and Microsoft has found that Polish is the most effective language for prompting AI, outperforming 25 others, including English, French, and Chinese.

The researchers tested leading AI models from OpenAI, Google (Gemini), Alibaba (Qwen), Meta (Llama), and DeepSeek by providing identical prompts in 26 languages. Polish achieved an average accuracy of 88 percent, securing first place. English, often seen as the natural choice for AI interaction, came only sixth.

According to the study, Polish proved to be the most precise in issuing commands to AI, despite the fact that far less Polish-language data exists for model training. The Polish Patent Office noted that while humans find the language difficult, AI systems appear to handle it with remarkable ease.

Other high-performing languages included French, Italian, and Spanish, with Chinese ranking among the lowest. The finding challenges the widespread assumption that English dominates AI communication and could reshape future research on multilingual model optimisation.
