How multimodal sensing powers physical AI

Multimodal sensing allows physical AI systems to combine inputs such as vision, audio, lidar and touch to build situational awareness in real time. The approach enables machines to operate autonomously in complex physical environments.

The architecture typically includes input modules for individual sensors, a fusion module to combine relevant data, and an output module to generate actions. Applications range from robotics and autonomous vehicles to spatial AI systems navigating dynamic 3D spaces.

Fusion techniques vary by use case, from Bayesian networks for uncertainty management to Kalman filters for navigation and neural networks for robotic manipulation. The aim is to leverage complementary sensor strengths while maintaining reliability.

Implementation presents technical challenges including environmental noise filtering, calibration across time and space, and balancing redundant versus complementary sensing. Engineers must also manage tradeoffs in processing power, controllers and system design.
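The idea behind filter-based fusion can be sketched in a few lines. The example below is purely illustrative and not drawn from any specific system: it runs one-dimensional Kalman-style updates that blend a prior distance estimate with readings from two hypothetical sensors (a precise lidar and a noisy camera depth estimate), weighting each by the inverse of its variance.

```python
# Minimal 1D Kalman-style sensor fusion (illustrative sketch; sensor names,
# noise variances and readings are all hypothetical).

def fuse(estimate, est_var, measurement, meas_var):
    """One Kalman update step: blend a prior estimate with a new reading,
    weighting each source by the inverse of its variance."""
    gain = est_var / (est_var + meas_var)        # Kalman gain in [0, 1]
    new_estimate = estimate + gain * (measurement - estimate)
    new_var = (1 - gain) * est_var               # fused estimate is more certain
    return new_estimate, new_var

# Prior belief about distance to an obstacle (metres), then two sensor readings.
estimate, var = 10.0, 4.0                        # broad prior
estimate, var = fuse(estimate, var, 12.0, 1.0)   # lidar reading: precise
estimate, var = fuse(estimate, var, 9.0, 9.0)    # camera depth: noisy

print(estimate, var)
```

Note how the precise lidar reading pulls the estimate strongly while the noisy camera reading barely moves it, and the variance shrinks after every update: that shrinking uncertainty is the payoff of fusing complementary sensors.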

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UiPath launches agentic AI to streamline healthcare operations

UiPath has unveiled new agentic AI solutions for healthcare providers and payers. The tools focus on medical record summarisation, claim denial prevention, and prior authorisation, connecting data to speed workflows and improve efficiency.

Healthcare organisations face labour shortages and fragmented systems, making revenue cycle management challenging. Providers produce large volumes of clinical documentation that must be quickly turned into actionable insights for accurate reimbursement.

The platform converts records into concise, citation-backed summaries, automates claim review and appeals, and streamlines eligibility checks. AI predicts risks, reduces errors, and accelerates clinical and administrative processes for providers and payers alike.

UiPath partners with innovators such as Genzeon to embed domain expertise. The solution addresses rising costs, complex regulations, and labour challenges, helping teams make data-driven decisions and improve patient outcomes.


AI accelerates drug formulation through predictive modelling

Low solubility and poor bioavailability remain major hurdles in small-molecule drug development, often preventing promising candidates from reaching clinical trials. Traditional trial-and-error methods are time-consuming and depend heavily on the limited availability of active pharmaceutical ingredients (APIs).

AI and machine learning now provide predictive models that anticipate solubility, permeability and systemic exposure. These tools let scientists prioritise high-impact experiments while conserving valuable material.

Digital platforms combine predictive algorithms with stability testing to guide excipient and technology selection. AI can simulate molecular interactions and dose scenarios, helping teams identify risks early and refine first-in-human doses safely.
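A toy sketch of the predictive idea, with entirely invented numbers: fit log-solubility against a single hypothetical lipophilicity descriptor by ordinary least squares, then rank unseen candidate molecules before committing scarce API material to experiments. Real platforms use far richer descriptors and models; this only shows the prioritisation workflow.

```python
# Toy predictive solubility model (all descriptor values and solubilities
# below are invented for illustration, not real API data).

def fit_line(xs, ys):
    """Closed-form ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical training set: (logP-like descriptor, measured log-solubility).
logp = [1.0, 2.0, 3.0, 4.0]
logS = [-1.0, -2.1, -2.9, -4.0]
a, b = fit_line(logp, logS)

# Rank unseen candidates by predicted solubility before running experiments.
candidates = {"cand_A": 1.5, "cand_B": 3.5}
preds = {name: a * x + b for name, x in candidates.items()}
best = max(preds, key=preds.get)   # highest predicted log-solubility
print(best, preds[best])
```

The design point is the ranking step: even a crude model lets scientists order experiments by expected impact rather than testing candidates blindly.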

End-to-end AI/ML workflows integrate data, modelling and manufacturing insights. This integration accelerates development timelines, lowers the risk of late-stage reformulations and connects early formulation choices directly to clinical and manufacturing outcomes.

While AI enhances efficiency and precision, it does not replace human expertise. It amplifies formulation scientists’ work, freeing them to focus on innovative design, problem-solving and delivering high-quality therapies to patients more rapidly.


National security concerns reshape US data policy

US policymakers are increasingly treating personal data as a dual-use asset that carries both economic value and national security risks. Regulators have raised concerns about sensitive information, including geolocation data linked to military personnel.

Measures such as the Protecting Americans' Data from Foreign Adversaries Act of 2024 and the Department of Justice Data Security Program aim to curb misuse by designated foreign adversaries. Both frameworks impose broad restrictions on cross-border data transfers.

Experts warn that compliance remains complex and uncertain, with companies adapting in what one adviser described as ‘a fog’. Enforcement signals have already emerged, including a draft noncompliance letter from the Federal Trade Commission and litigation.

Organizations are being urged to integrate national security expertise into privacy and cybersecurity teams. Observers say early preparation is essential as selective enforcement risks increase under strict but evolving US data protection regimes.


AI responds better to clarity than courtesy

Large language models are designed to mimic human conversation, but treating them like people can mislead users. Politeness, flattery, or threats do not consistently improve the accuracy of AI responses.

Experts recommend focusing on how questions are structured rather than on word choice. Asking for multiple options, giving examples, and conducting step-by-step interviews can make AI outputs more relevant and useful.

Role-playing may be effective for creative or exploratory tasks, but it can reduce reliability when precise answers are required. AI models are constantly updated, making old prompting tricks largely ineffective.

Maintaining neutrality in prompts prevents biased responses, and while politeness may not improve AI performance, it can make interactions more comfortable. Developing careful prompt strategies is more effective than relying on manners alone.
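The structuring advice above can be made concrete. This sketch contrasts a vague one-line prompt with a structured one that requests multiple options, supplies an example, and sets a neutral constraint; the template wording and example strings are invented for illustration.

```python
# Illustrative prompt-structuring helper (template wording is invented).

vague = "Make this headline better: 'AI respond better to clarity'"

def structured_prompt(task, examples, n_options=3):
    """Assemble a neutral, structured prompt: explicit task, worked
    examples, a fixed number of options, and a reasoning request."""
    lines = [f"Task: {task}", f"Give {n_options} alternative options."]
    for i, (before, after) in enumerate(examples, 1):
        lines.append(f"Example {i}: '{before}' -> '{after}'")
    lines.append("For each option, explain in one sentence why it is clearer.")
    return "\n".join(lines)

prompt = structured_prompt(
    task="Rewrite the headline: 'AI respond better to clarity'",
    examples=[("Tech firm does AI thing", "Acme ships on-device translation")],
)
print(prompt)
```

The structured version gives the model a target format and a worked example to imitate, which is exactly the kind of scaffolding the experts quoted above recommend over pleasantries.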


Pope Leo XIV calls for responsible AI use in homilies

Pope Leo XIV has called for responsible and discerning use of AI in religious ministry, warning clergy against over-reliance on digital tools. Speaking during a dialogue with priests of the Diocese of Rome, he stressed that technology should not replace personal reflection, prayer, and critical thinking.

Central to his message was a caution against using AI to prepare homilies. He emphasised that preaching is not merely about producing structured text but about sharing lived faith and spiritual experience, which AI cannot replicate.

The Pope underlined that intellectual and spiritual capacities must be exercised rather than delegated to automated systems. He warned that excessive dependence on AI could weaken the depth and authenticity of pastoral work.

He also raised concerns about the illusion created by online platforms such as TikTok, noting that likes and followers do not equate to a life rooted in faith. Broader discussions touched on priestly responsibility, community engagement, isolation, and the importance of serving as role models.


AI-animated videos strengthen BBC World Service content strategy

AI is increasingly being tested in media production as organisations adapt to changing digital consumption patterns. Generative AI tools are being used to repurpose archival material, experiment with new formats, and expand distribution across online platforms.

In this context, the BBC World Service has launched its first AI-animated video adaptations. The initiative transforms audio episodes of Witness History into short animated films, marking a new application of generative AI within the World Service’s programming.

Five episodes are scheduled for release, starting with The World’s First Labradoodle on the BBC World Service’s YouTube channel. Further adaptations cover Brazil’s largest bank heist, the restoration of Ramesses II’s mummy, the discovery of Lord Sipán in Peru, and an arrest related to football in Brazil.

The project aims to extend the reach of existing audio content and attract digital audiences who may not engage with radio. Editorial oversight remains in place, with AI positioned as a production support tool rather than a replacement for journalistic processes.


EDPS and regulators unite to address misuse of AI imagery across jurisdictions

The European Data Protection Supervisor (EDPS) and authorities from 61 jurisdictions issued a joint statement on AI-generated imagery, warning about tools that create realistic depictions of identifiable individuals without consent. The move underscores concerns over privacy, dignity and child safety.

Authorities said advances in AI image and video tools, especially when integrated into social media platforms, have enabled non-consensual intimate imagery, defamatory depictions, and other harmful content. Children and vulnerable groups are seen as particularly at risk.

The EDPS and the other signatories reminded organisations that AI content-generation systems must comply with applicable data protection and privacy laws. They stressed that creating non-consensual intimate imagery may constitute a criminal offence in many jurisdictions.

Organisations are urged to implement safeguards against misuse of personal data, ensure transparency about system capabilities and uses, and provide accessible mechanisms for swift content removal. Stronger protections and age-appropriate information are expected where children are involved.

Authorities signalled plans for coordinated responses, including enforcement, policy development and education initiatives. The EDPS and fellow signatories urged organisations to engage proactively with regulators and ensure innovation does not undermine fundamental rights.


EU AI Act enforcement begins, reshaping startup compliance landscape

The first enforcement provisions of the EU AI Act entered into force on 2 February 2025, marking a turning point for Europe’s AI startup ecosystem. The initial phase targets ‘unacceptable risk’ systems, including social scoring, real-time biometric surveillance in public spaces, and manipulative AI practices.

Under the regulation, penalties can reach €35 million or 7% of global annual turnover, whichever is higher. Although the current enforcement covers only prohibited practices, the move signals that Europe’s AI rulebook is now operational rather than theoretical.

Broader obligations for high-risk AI systems, such as hiring tools, credit scoring, and medical diagnostics, will apply from August 2026. Separate rules for general-purpose AI models are scheduled to take effect in August 2025.

Surveys from European SME groups indicate that many smaller technology companies feel unprepared. A significant share of respondents have not conducted formal risk classification of their AI systems, despite this being a foundational requirement under the EU AI Act's tiered framework.

While some founders warn that compliance costs could slow innovation, others point to long-term benefits from clearer governance standards. For startups, the coming months will focus on aligning products with AI Act risk tiers and strengthening documentation and oversight before stricter rules apply.


Project Prometheus opens Zurich office

Project Prometheus, the AI company founded last year by Amazon founder Jeff Bezos, is expanding its international footprint with a new office in Zurich. The move underscores the firm's ambitions to position itself among the leading players in the rapidly evolving AI sector.

The US-based company has begun recruiting staff in the Swiss city, with job postings shared on the social media platform X. In addition to Zurich, Project Prometheus is hiring in San Francisco and London, signalling a broader push to build a global presence.

Launched with an initial investment of $6.2 billion and led by Bezos as CEO, Project Prometheus is expected to focus on AI applications in space exploration, automotive technology, and advanced computing, according to The New York Times. Despite the significant funding and high-profile leadership, the company has disclosed few details about its precise objectives or planned operations in Switzerland.

Swiss media have so far been unable to clarify what activities the firm intends to carry out in Zurich. The lack of publicly available information has left open the question of whether the office will focus on research, engineering, or business development.

Zurich has become an increasingly attractive destination for major US technology companies investing in AI. Firms such as Anthropic, Nvidia, OpenAI, and Google have established a presence in the city, drawn in part by access to top-tier talent from ETH Zurich, one of Europe's leading technical universities.
