Organisations across public and private sectors are using Salesforce’s Agentforce to engage people whenever and wherever they need support.
From local governments to hospitals and education platforms, AI systems are transforming how services are delivered and accessed.
In the city of Kyle, Texas, an Agentforce-driven 311 app enables residents to report issues such as potholes or water leaks. The city plans to make the system voice-enabled, reducing traditional call volumes while keeping service requests flowing and responses fast.
At Pearson, AI enables students to access their online learning platforms instantly, regardless of their time zone. The company stated that the technology fosters loyalty by providing immediate assistance, rather than requiring users to wait for human support.
Meanwhile, UChicago Medicine utilises AI to streamline patient interactions, from prescription refills to scheduling, while ambient listening tools enable doctors to focus entirely on patients rather than typing notes.
Salesforce said Agentforce empowers organisations to save resources while enhancing trust, accessibility, and service quality. By meeting people on their own terms, AI enables more responsive and human-centred interactions across various industries.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Meta has introduced a new Facebook update allowing group administrators to change their private groups to public while keeping members’ privacy protected. The company said the feature gives admins more flexibility to grow their communities without exposing existing private content.
All posts, comments, and reactions shared before the change will remain visible only to previous members, admins, and moderators. The member list will also stay private. Once converted, any new posts will be visible to everyone, including non-Facebook users, which helps discussions reach a broader audience.
Admins have three days to review and cancel the conversion before it becomes permanent. Members will be notified when a group changes its status, and a globe icon will appear when posting in public groups as a reminder of visibility settings.
Groups can be switched back to private at any time, restoring member-only access.
Meta said the feature supports community growth and deeper engagement while maintaining privacy safeguards. Group admins can also utilise anonymous or nickname-based participation options, providing users with greater control over their engagement in public discussions.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Nissan and Monolith have extended their strategic partnership for three years to apply AI across more vehicle programmes in Europe. The collaboration supports Nissan’s plans to compress development timelines and improve operational efficiency, and early outcomes are guiding a wider rollout.
Engineers at Nissan Technical Centre Europe will use Monolith to predict test results from decades of test data and simulations. By reducing prototype builds and running targeted, high-value experiments, teams can focus more effectively on design decisions while ensuring both accuracy and coverage.
A prior project on chassis bolt joints saw AI recommend optimal torque ranges and prioritise the next-best tests for engineers. Compared with the previous non-AI process, physical testing fell by 17 percent in controlled comparisons. Similar approaches are being prepared for future models beyond the LEAF.
Leaders say that a broader deployment could halve testing time across European programmes if comparable gains are achieved. Governance encompasses rigorous validation before changes are deployed to production. Operational benefits include faster iteration cycles and reduced test waste.
Monolith’s toolkit includes next-test recommendation and anomaly detection to flag outliers before rework. Nissan frames the push as an innovation with sustainability benefits, cutting material use while maintaining quality across a complex supply chain. Partners will share results as adoption scales.
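For readers who want a concrete sense of what ‘next-test recommendation’ can look like in practice, the sketch below shows one generic approach: a surrogate model is trained on past results and the next physical test is the candidate it is least certain about. The data, torque ranges and model choice are illustrative assumptions only, not Monolith’s actual product or Nissan’s pipeline.

```python
# Illustrative sketch only: a generic "next-test recommendation" loop using a
# surrogate model with a rough uncertainty estimate. All data and values are
# hypothetical; this is not Monolith's product or Nissan's real test data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical historical bolt-joint tests: torque (Nm) and measured clamp load (kN).
torque_tested = rng.uniform(20, 80, size=40)
clamp_load = 0.4 * torque_tested + rng.normal(0, 1.5, size=40)

# Surrogate model learns the torque -> clamp-load relationship from past tests.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(torque_tested.reshape(-1, 1), clamp_load)

# Candidate torque settings that have not been physically tested yet.
candidates = np.linspace(20, 80, 200).reshape(-1, 1)

# Use the spread across trees as a crude uncertainty estimate and recommend
# the candidate the model is least sure about as the next high-value test.
per_tree = np.stack([tree.predict(candidates) for tree in model.estimators_])
uncertainty = per_tree.std(axis=0)
next_test = candidates[int(np.argmax(uncertainty))][0]
print(f"Recommended next physical test: torque of about {next_test:.1f} Nm")
```

The same kind of surrogate can also flag anomalies, in the spirit of the toolkit described above, by comparing a new measurement against the range the model predicts for that setting.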
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Nearly eight in ten UK secondary teachers say AI has forced a rethink of how assignments are set, a British Council survey finds. Many now design tasks either to deter AI use or to harness it constructively in lessons. Findings reflect rapid cultural and technological shifts across schools.
Approaches are splitting along two paths. Over a third of teachers now design AI-resistant tasks, while nearly six in ten purposefully integrate AI tools. Younger staff are most likely to adapt, yet strong majorities across all age groups report changes to their practice.
Perceived impacts remain mixed. Six in ten worry about students’ communication skills, with some citing narrower vocabulary and weaker writing and comprehension. Similar shares report improvements in listening, pronunciation, and confidence, suggesting benefits for speech-focused learning.
Language norms are evolving with digital culture. Most UK teachers now look up slang and online expressions, from ‘rizz’ to ‘delulu’ to ‘six, seven’. Staff are adapting lesson design while seeking guidance and training that keeps pace with students’ online lives.
Long-term views diverge. Some believe AI could lift outcomes, while others remain unconvinced and prefer guardrails to limit misuse. British Council leaders say support should focus on practical classroom integration, teacher development, and clear standards that strike a balance between innovation and academic integrity.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
World Economic Forum President Borge Brende has warned that massive investments in AI and cryptocurrencies may create financial bubbles. Speaking in Berlin, he noted that $500 billion has been invested in AI this year alone, fuelling his concerns about speculation in both sectors.
Brende described frontier technologies as a ‘big paradigm shift’ that could drive global growth, with potential productivity gains of up to 10% over the next decade. He noted that breakthroughs in medicine, synthetic biology, space, and energy could transform economies, but stressed that the benefits must be widely shared.
Geopolitical uncertainty remains a significant concern, according to Brende. He pointed to rising tensions between the US and China, calling it a race for technological dominance that could shape global power.
He also urged multilateral cooperation to address global challenges, including pandemics, cybercrime, and investment uncertainty.
Despite the disorder in world politics, Brende highlighted the resilience of economies like those in the US, China, and India. He called for patient investment strategies and stronger international coordination to ensure that new technologies translate into sustainable prosperity.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Scientists at the University of Surrey have developed a new method to make artificial intelligence smarter by copying the way the human brain works. Their approach, called Topographical Sparse Mapping, connects AI ‘neurons’ only to nearby or related ones, mimicking how the brain organises information efficiently.
An advanced version, Enhanced Topographical Sparse Mapping, prunes unneeded connections during training, similar to how the brain strengthens useful pathways as it learns. Researchers are also exploring applications in neuromorphic computing, which designs computer systems to mimic the structure and function of the human brain.
The approach helps AI models, including tools like ChatGPT, work better while using less electricity. Traditional AI training can waste huge amounts of energy, but the new brain-inspired design cuts unnecessary connections without losing accuracy.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Mustafa Suleyman, CEO of Microsoft AI, argues that AI should be built for people, not to replace them. Growing belief in chatbot consciousness risks campaigns for AI rights and a needless struggle over personhood that distracts from human welfare.
Debates over true consciousness miss the urgent issue of convincing imitation. Seemingly conscious AI may speak fluently, recall interactions, claim experiences, and set goals that appear to exhibit agency. Capabilities are close, and the social effects will be real regardless of metaphysics.
People already form attachments to chatbots and seek meaning in conversations. Reports of dependency and talk of ‘AI psychosis’ show persuasive systems can nudge vulnerable users. Extending moral status on the basis of such uncertainty, Suleyman argues, would amplify delusions and dilute existing rights.
Norms and design principles are needed across the industry. Products should include engineered interruptions that break the illusion, clear statements of nonhuman status, and guardrails for responsible ‘personalities’. Microsoft AI is exploring approaches that promote offline connection and healthy use.
A positive vision keeps AI empowering without faking inner life. Companions should organise tasks, aid learning, and support collaboration while remaining transparently artificial. The focus remains on safeguarding humans, animals, and the natural world, not on granting rights to persuasive simulations.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Microsoft’s AI head, Mustafa Suleyman, has dismissed the idea that AI could ever become conscious, arguing that consciousness is a property exclusive to biological beings.
Speaking at the AfroTech Conference in Houston, Suleyman said researchers should stop exploring the notion of sentient AI, calling it ‘the wrong question’.
He explained that while AI can simulate experience, it cannot feel pain or possess subjective awareness.
Suleyman compared AI’s output to a narrative illusion rather than genuine consciousness, aligning with the philosophical theory of biological naturalism, which ties awareness to living brain processes.
Suleyman has become one of the industry’s most outspoken critics of conscious AI research. His book ‘The Coming Wave’ and his recent essay ‘We must build AI for people; not to be a person’ warn against anthropomorphising machines.
He also confirmed that Microsoft will not develop erotic chatbots, a direction that has been embraced by competitors such as OpenAI and xAI.
He stressed that Microsoft’s AI systems are designed to serve humans, not mimic them. The company’s Copilot assistant now includes a ‘real talk’ mode that challenges users’ assumptions instead of offering flattery.
Suleyman said responsible development must avoid ‘unbridled accelerationism’, adding that fear and scepticism are essential for navigating AI’s rapid evolution.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Researchers at the University of Surrey have developed a new method to enhance AI by imitating how the human brain connects information. The approach, called Topographical Sparse Mapping, links each artificial neuron only to nearby or related ones, replicating the brain’s efficient organisation.
According to findings published in Neurocomputing, the structure reduces redundant connections and improves performance without compromising accuracy. Senior lecturer Dr Roman Bauer said intelligent systems can now be designed to consume far less energy while maintaining performance.
Training large models today often requires over a million kilowatt-hours of electricity, a trend he described as unsustainable.
An advanced version, Enhanced Topographical Sparse Mapping, introduces a biologically inspired pruning process that refines neural connections during training, similar to how the brain learns.
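As a rough illustration of the general idea, the sketch below wires an artificial layer so that each output neuron connects only to nearby inputs on a simple one-dimensional ‘map’, and then prunes the weakest surviving connections during training. The layer sizes, radius and pruning rate are assumptions for demonstration, not the exact formulation published in Neurocomputing.

```python
# Minimal sketch of the general idea only: topographic (local) connectivity
# plus pruning of weak connections. Sizes, radius and pruning fraction are
# illustrative assumptions, not the published Topographical Sparse Mapping.
import torch
import torch.nn as nn

class TopographicSparseLinear(nn.Module):
    def __init__(self, in_features, out_features, radius=8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        # Place input and output neurons on a 1-D map and keep only
        # connections between positions within a local radius.
        in_pos = torch.linspace(0, 1, in_features)
        out_pos = torch.linspace(0, 1, out_features)
        dist = (out_pos[:, None] - in_pos[None, :]).abs()
        self.register_buffer("mask", (dist <= radius / in_features).float())

    def forward(self, x):
        # Only the masked (local) connections contribute to the output.
        return x @ (self.weight * self.mask).t()

    def prune(self, fraction=0.05):
        # Brain-inspired refinement: drop the weakest surviving connections.
        strength = (self.weight * self.mask).abs()
        alive = strength[self.mask.bool()]
        if alive.numel() == 0:
            return
        threshold = torch.quantile(alive, fraction)
        self.mask *= (strength > threshold).float()

layer = TopographicSparseLinear(256, 128)
x = torch.randn(4, 256)
print(layer(x).shape)   # torch.Size([4, 128])
layer.prune(0.05)       # would be called periodically during training
print(int(layer.mask.sum()), "connections remain")
```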
Researchers believe that the system could contribute to more realistic neuromorphic computers, which simulate brain functions to process data more efficiently.
The Surrey team said that such a discovery may advance generative AI systems and pave the way for sustainable large-scale model training. Their work highlights how lessons from biology can shape the next generation of energy-efficient computing.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new joint study by the University of Maryland and Microsoft has found that Polish is the most effective language for prompting AI, outperforming 25 others, including English, French, and Chinese.
The researchers tested leading AI models, including those from OpenAI and Google (Gemini) as well as Qwen, Llama, and DeepSeek, by providing identical prompts in 26 languages. Polish achieved an average accuracy of 88 percent, securing first place. English, often seen as the natural choice for AI interaction, came only sixth.
According to the study, Polish proved to be the most precise in issuing commands to AI, despite the fact that far less Polish-language data exists for model training. The Polish Patent Office noted that while humans find the language difficult, AI systems appear to handle it with remarkable ease.
Other high-performing languages included French, Italian, and Spanish, with Chinese ranking among the lowest. The finding challenges the widespread assumption that English dominates AI communication and could reshape future research on multilingual model optimisation.
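To make the methodology concrete, a comparison of this kind can be sketched as a simple loop: the same question is posed in each language and every answer is scored against a reference. The ask_model stub, the sample prompts and the three languages shown are placeholders for illustration, not the study’s actual tasks, models or full 26-language set.

```python
# Simplified sketch of a cross-language prompt benchmark. The ask_model stub,
# prompts and languages are placeholders, not the UMD/Microsoft study's setup.
from collections import defaultdict

def ask_model(prompt: str) -> str:
    """Placeholder for a real model API call (replace with an actual request)."""
    raise NotImplementedError

# Each item: the same question expressed in several languages, plus the expected answer.
dataset = [
    {
        "answer": "42",
        "prompts": {
            "en": "What is 6 times 7? Reply with the number only.",
            "pl": "Ile to jest 6 razy 7? Odpowiedz samą liczbą.",
            "fr": "Combien font 6 fois 7 ? Réponds uniquement par le nombre.",
        },
    },
    # ... more items covering all evaluated languages
]

def accuracy_by_language(items):
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        for lang, prompt in item["prompts"].items():
            total[lang] += 1
            if ask_model(prompt).strip() == item["answer"]:
                correct[lang] += 1
    # Average accuracy per language, comparable across the whole prompt set.
    return {lang: correct[lang] / total[lang] for lang in total}
```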
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!