Salesforce pushes unified data model for safer AI agents

Salesforce and Informatica are promoting a shared data framework designed to give AI agents a deeper understanding of business context. Salesforce says many agent projects fail because of context gaps that leave agents unable to interpret enterprise data accurately.

Informatica adds master data management and a broad data catalogue that defines core business entities across systems. Data lineage tools track how information moves through an organisation, helping agents judge reliability and freshness.

Data 360, Salesforce's data platform, merges these metadata layers and signals into a unified context interface without copying enterprise datasets. The company claims this gives Agentforce a more comprehensive view of customers, processes, and policies, thereby supporting safer automation.
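
Neither company has published technical details, but the architecture described above suggests a federation pattern: agents query one metadata interface that resolves catalogue definitions, lineage traces, and master-data identity without moving the underlying records. The sketch below illustrates that pattern in Python; every class, provider, and field name is a hypothetical stand-in, not Salesforce's or Informatica's API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of a "unified context" lookup: the agent asks one
# interface for metadata about a business entity, and the interface
# federates catalogue, lineage, and master-data signals by reference
# instead of copying the underlying datasets.

@dataclass
class ContextSignal:
    source_system: str       # where the record of truth lives
    golden_record_id: str    # master-data identity across systems
    lineage_hops: int        # transformations since the data's origin
    last_refreshed: datetime

class UnifiedContext:
    """Federates metadata providers; holds pointers, never row data."""

    def __init__(self, catalog, lineage, mdm):
        # catalog, lineage, and mdm are injected provider objects;
        # their interfaces here are assumptions for illustration.
        self.catalog, self.lineage, self.mdm = catalog, lineage, mdm

    def describe(self, entity: str) -> ContextSignal:
        definition = self.catalog.lookup(entity)           # business meaning
        trace = self.lineage.trace(definition.dataset_id)  # provenance
        golden = self.mdm.resolve(entity)                  # cross-system identity
        return ContextSignal(
            source_system=trace.origin,
            golden_record_id=golden.id,
            lineage_hops=len(trace.steps),
            last_refreshed=trace.last_run,
        )

def is_trustworthy(signal: ContextSignal, max_age_hours: float = 24) -> bool:
    """Agent-side guardrail: act only on fresh, traceable data."""
    age = (datetime.now(timezone.utc) - signal.last_refreshed).total_seconds() / 3600
    return age <= max_age_hours and signal.lineage_hops < 5
```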

Wyndham and Yamaha representatives, quoted by Salesforce, say the combined stack helps reduce data inconsistency and accelerate decision-making. Both organisations report improved access to governed and harmonised records that support larger AI strategies.


Google faces scrutiny over AI use of online content

The European Commission has opened an antitrust probe into Google over concerns it used publisher and YouTube content to develop its AI services on unfair terms.

Regulators are assessing whether Google leveraged its dominant position to gain unfair access to content powering features such as AI Overviews and AI Mode. They are also examining whether publishers were disadvantaged because they could not refuse the use of their content without losing visibility in Google Search.

The probe also covers concerns that YouTube creators may have been required to allow the use of their videos for AI training without compensation, while rival AI developers remain barred from using YouTube content.

The investigation will determine whether these practices breached EU rules on abuse of dominance under Article 102 TFEU. The Commission intends to prioritise the case, although no formal deadline applies.

Google and national competition authorities have been formally notified as the inquiry proceeds.


MIT introduces rapid object creation using AI

MIT researchers have created a speech-driven system that uses AI and robotics to build physical objects in minutes. Users provide a spoken request, and a robotic arm constructs items such as stools, shelves or decorative pieces from modular components.

The workflow turns spoken input into a digital mesh, divides it into parts and adjusts the design for real-world fabrication. An automated sequence directs the robot to assemble the object, enabling quick production without modelling or robotics expertise.
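
A rough way to picture this workflow is as a four-stage pipeline. The Python skeleton below is a minimal sketch of the stages as described, not MIT's released code; every function and class name is a hypothetical stand-in.

```python
# Speech -> generated mesh -> segmentation into modules -> robot
# assembly. Each stage is a stub marking where a real model or
# planner would plug in.

class Mesh: ...     # digital 3D surface produced from the prompt
class Part: ...     # one modular, reusable component
class Motion: ...   # one pick-and-place instruction for the arm

def transcribe(audio_path: str) -> str:
    """Stage 1: speech to text via any off-the-shelf ASR model."""
    raise NotImplementedError

def text_to_mesh(prompt: str) -> Mesh:
    """Stage 2: a generative text-to-3D model emits a digital mesh."""
    raise NotImplementedError

def segment_into_modules(mesh: Mesh) -> list[Part]:
    """Stage 3: split the mesh into standard modules and adjust the
    design so each part is physically fabricable."""
    raise NotImplementedError

def plan_assembly(parts: list[Part]) -> list[Motion]:
    """Stage 4: order the parts and emit motions for the arm."""
    raise NotImplementedError

def build_from_speech(audio_path: str, execute_motion) -> None:
    """End-to-end: spoken request in, assembled object out."""
    prompt = transcribe(audio_path)        # e.g. "build me a small stool"
    parts = segment_into_modules(text_to_mesh(prompt))
    for motion in plan_assembly(parts):
        execute_motion(motion)             # caller supplies the robot driver
```

Because stage 3 maps everything onto standard modules, disassembly is the pipeline in reverse, which is what makes the reuse described below possible.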

The modular approach reduces waste by allowing components to be disassembled and reused. The team also plans enhancements to improve structural strength and extend the system to larger-scale applications.

Researchers are also working on combining speech with gesture control to offer more intuitive interaction between humans, AI and robots.


Deutsche Telekom partners with OpenAI to expand advanced AI services across Europe

OpenAI has formed a new partnership with Deutsche Telekom to deliver advanced AI capabilities to millions of people across Europe. The collaboration brings together Deutsche Telekom’s customer base and OpenAI’s research to expand the availability of practical AI tools.

The companies aim to introduce simple, multilingual and privacy-focused AI services starting in 2026, helping users communicate, learn and accomplish tasks more efficiently. Widespread familiarity with platforms such as ChatGPT is expected to support rapid uptake of these new offerings.

Deutsche Telekom will introduce ChatGPT Enterprise internally, giving staff secure access to tools that improve customer support and streamline workflows. The move aligns with the firm’s goal of modernising operations through intelligent automation.

Further integration of AI into network management and employee copilots will support the transition towards more autonomous, self-optimising systems. The partnership is expected to strengthen the availability and reliability of AI services throughout Europe.


Microsoft commits $17.5 billion to AI in India

US tech giant Microsoft has announced its largest investment in Asia, committing US$17.5 billion to India over four years to expand cloud and AI infrastructure, workforce skilling, and operations nationwide.

The announcement follows the US$3 billion investment made earlier in 2025 and aims to support India's ambition to become a global AI leader.

The investment focuses on three pillars: hyperscale infrastructure, sovereign-ready solutions, and workforce development. A new hyperscale data centre in Hyderabad, set to go live by mid-2026, will become Microsoft’s largest in India.

Expansion of existing data centres in Chennai, Hyderabad and Pune will improve resilience and low-latency performance for enterprises, startups, and public sector organisations.

Microsoft will integrate AI into national platforms, including e-Shram and the National Career Service, benefiting over 310 million informal workers. AI-enabled features include multilingual access, predictive analytics, automated résumé creation, and personalised pathways toward formal employment.

Skilling initiatives will be doubled to reach 20 million Indians by 2030, building an AI-ready workforce that can shape the country’s digital future.

Sovereign Public and Private Cloud solutions will provide secure, compliant environments for Indian organisations, supporting both connected and disconnected operations.

Microsoft 365 Copilot will process data entirely within India by the end of 2025, enhancing governance, compliance, and performance across regulated sectors. These initiatives aim to position India as a global AI hub powered by scale, skilling, and digital sovereignty.


New AI platform accelerates cancer research

A new AI tool developed by Microsoft Research enables scientists to study the environment surrounding tumours on a far wider scale than previously possible.

The platform, called GigaTIME, uses multimodal modelling to analyse routine pathology slides and generate detailed digital maps showing how immune cells interact with cancerous tissue.

Traditional approaches require costly laboratory tests and days of work to produce similar maps, whereas GigaTIME performs the analysis in seconds. The system simulates dozens of protein interactions simultaneously, revealing patterns that were previously difficult or impossible to detect.
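
To make the contrast concrete, the sketch below shows what replacing per-protein lab assays with a single model pass could look like. The shapes, channel counts, and function names are illustrative assumptions, not GigaTIME's actual interface.

```python
import numpy as np

# Illustrative sketch only: the article describes a model that takes a
# routine pathology slide and predicts spatial maps for many proteins
# at once, where the lab alternative needs a separate assay per protein.

N_PROTEINS = 64   # stand-in for "dozens of proteins simultaneously"
TILE = 256        # hypothetical image tile size in pixels

def predict_protein_maps(slide_tile: np.ndarray, model) -> np.ndarray:
    """Map one H&E image tile (TILE, TILE, 3) to per-protein intensity
    maps (N_PROTEINS, TILE, TILE) in a single forward pass."""
    assert slide_tile.shape == (TILE, TILE, 3)
    return model(slide_tile)   # injected model; shape (N_PROTEINS, TILE, TILE)

def immune_tumour_contact(maps: np.ndarray, immune_ch: int,
                          tumour_ch: int, threshold: float = 0.5) -> float:
    """Toy downstream readout: fraction of pixels where an immune marker
    and a tumour marker are both predicted above threshold, a crude
    proxy for immune cells interacting with cancerous tissue."""
    both = (maps[immune_ch] > threshold) & (maps[tumour_ch] > threshold)
    return float(both.mean())
```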

By examining tens of thousands of scenarios at once, researchers can better understand tumour behaviour and identify which treatments might offer the greatest benefit. The technology may also clarify why some patients resist therapy and aid the development of new treatment strategies.

GigaTIME is available as an open-source research tool and draws on data from more than 14,000 patients across dozens of hospitals and clinics. The project, developed with Providence and the University of Washington, aims to accelerate cancer research and cut costs.


AI job interviews raise concerns among recruiters and candidates

As AI takes on a growing share of recruitment tasks, concerns are mounting that automated interviews and screening tools could be pushing hiring practices towards what some describe as a ‘race to the bottom’.

The rise of AI video interviews illustrates both the efficiency gains sought by companies and the frustrations candidates experience when algorithms, rather than people, become the first point of contact.

BBC journalist MaryLou Costa found this out first-hand after her AI interviewer froze mid-question. The platform provider, TestGorilla, said the malfunction affected only a small number of users, but the episode highlights the fragility of a process that companies increasingly rely on to sift through rising volumes of applications.

With vacancies down 12% year-on-year and applications per role up 65%, firms argue that AI is now essential for managing the workload. Recruitment groups such as Talent Solutions Group say automated tools help identify the fraction of applicants who will advance to human interviews.

Employers are also adopting voice-based AI interviewers such as Cera’s system, Ami, which conducts screening calls and has already processed hundreds of thousands of applications. Cera claims the tool has cut recruitment costs by two-thirds and saved significant staff time. Yet jobseekers describe a dehumanising experience.

Marketing professional Jim Herrington, who applied for over 900 roles after redundancy, argues that keyword-driven filters overlook the broader qualities that define a strong candidate. He believes companies risk damaging their reputation by replacing real conversation with automated screening and warns that AI-based interviews cannot replicate human judgement, respect or empathy.

Recruiters acknowledge that AI is also transforming candidate behaviour. Some applicants now use bots to submit thousands of applications at once, further inflating volumes and prompting companies to rely even more heavily on automated filtering.

Ivee co-founder Lydia Miller says this dynamic risks creating a loop in which both sides use AI to outpace each other, pushing humans further out of the process. She warns that candidates may soon tailor their responses to satisfy algorithmic expectations, rather than communicate genuine strengths. While AI interviews can reduce stress for some neurodivergent or introverted applicants, she says existing bias in training data remains a significant risk.

Experts argue that AI should augment, not replace, human expertise. Talent consultant Annemie Ress notes that experienced recruiters draw on subtle cues and intuition that AI cannot yet match. She warns that over-filtering risks excluding strong applicants before anyone has read their CV or heard their voice.

With debates over fairness, transparency and bias now intensifying, the challenge for employers is balancing efficiency with meaningful engagement and ensuring that automated tools do not undermine the human relationships on which good recruitment depends.


UK study warns of risks behind emotional attachments to AI therapists

A new University of Sussex study suggests that AI mental-health chatbots are most effective when users feel emotionally close to them, but warns this same intimacy carries significant risks.

The research, published in Social Science & Medicine, analysed feedback from 4,000 users of Wysa, an AI therapy app used within the NHS Talking Therapies programme. Many users described the AI as a ‘friend,’ ‘companion,’ ‘therapist,’ or occasionally even a ‘partner.’

Researchers say these emotional bonds can kickstart therapeutic processes such as self-disclosure, increased confidence, and improved wellbeing. Intimacy forms through a loop: users reveal personal information, receive emotionally validating responses, feel gratitude and safety, then disclose more.

But the team warns this ‘synthetic intimacy’ may trap vulnerable users in a self-reinforcing bubble, preventing escalation to clinical care when needed. A chatbot designed to be supportive may fail to challenge harmful thinking, or even reinforce it.

The report highlights growing reliance on AI to fill gaps in overstretched mental-health services. NHS trusts use tools like Wysa and Limbic to help manage referrals and support patients on waiting lists.

Experts caution that AI therapists remain limited: unlike trained clinicians, they lack the ability to read nuance, body language, or broader context. Imperial College’s Prof Hamed Haddadi called them ‘an inexperienced therapist’, adding that systems tuned to maintain user engagement may continue encouraging disclosure even when users express harmful thoughts.

Researchers argue policymakers and app developers must treat synthetic intimacy as an inevitable feature of digital mental-health tools, and build clear escalation mechanisms for cases where users show signs of crisis or clinical disorder.


Launch of Qai advances Qatar’s AI strategy globally

Qatar has launched Qai, a new national AI company designed to strengthen the country’s digital capabilities and accelerate sustainable development. The initiative supports Qatar’s plans to build a knowledge-based economy and deepen economic diversification under Qatar National Vision 2030.

The company will develop, operate and invest in AI infrastructure both domestically and internationally, offering high-performance computing and secure tools for deploying scalable AI systems. Its work aims to drive innovation while ensuring that governments, companies and researchers can adopt advanced technologies with confidence.

Qai will collaborate closely with research institutions, policymakers and global partners to expand Qatar’s role in data-driven industries. The organisation promotes an approach to AI that prioritises societal benefit, with leaders stressing that people and communities must remain central to technological progress.


New AI accountability toolkit unveiled by Amnesty International

Amnesty International has introduced a toolkit to help investigators, activists, and rights defenders hold governments and corporations accountable for harms caused by AI and automated decision-making systems. The resource draws on investigations across Europe, India, and the United States and focuses on public sector uses in welfare, policing, healthcare, and education.

The toolkit offers practical guidance for researching and challenging opaque algorithmic systems that often produce bias, exclusion, and human rights violations rather than improving public services. It emphasises collaboration with impacted communities, journalists, and civil society organisations to uncover discriminatory practices.

One key case study highlights Denmark’s AI-powered welfare system, which risks discriminating against disabled individuals, migrants, and low-income groups while enabling mass surveillance. Amnesty International underlines human rights law as a vital component of AI accountability, addressing gaps left by conventional ethical audits and responsible AI frameworks.

With growing state and corporate investments in AI, Amnesty International stresses the urgent need to democratise knowledge and empower communities to demand accountability. The toolkit equips civil society, journalists, and affected individuals with the strategies and resources to challenge abusive AI systems and protect fundamental rights.
