Uber and Chinese startup Momenta will begin robotaxi testing in Munich in 2026, marking their first public deployment in continental Europe. The trials will start with human safety operators, with plans to expand across additional European cities.
Founded in 2016, Momenta is one of China’s leading autonomous vehicle companies, having tested self-driving cars since 2018. The company is already collaborating with automakers such as Mercedes-Benz and BMW to integrate advanced driver assistance systems.
Uber is broadening its global AV network, which already spans 20 partners across mobility, delivery, and freight. In the US, Waymo robotaxis operate via Uber’s app, while international partnerships include WeRide in the Gulf and Wayve in London.
Competition in Europe is intensifying. China's Baidu and Lyft plan to roll out robotaxis in Germany and the UK next year, while Uber has chosen Munich for its engineering talent and strong automotive ecosystem.
German regulators must still certify Momenta’s technology and approve geo-fenced operating areas. If successful, Munich will become Momenta’s first European launchpad, building on its Shanghai robotaxi service and global ADAS deployment.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Sam Altman, X enthusiast and Reddit shareholder, has expressed doubts over whether social media content can still be distinguished from bot activity. His remarks followed an influx of praise for OpenAI Codex on Reddit, where users questioned whether such posts were genuine.
Altman noted that humans are increasingly adopting quirks of AI-generated language, blurring the line between authentic and synthetic speech. He also pointed to factors such as social media optimisation for engagement and astroturfing campaigns, which amplify suspicions of fakery.
The comments follow the backlash against OpenAI over the rollout of GPT-5, which saw Reddit communities shift from celebratory to critical. Altman acknowledged the rollout's flaws in a Reddit AMA, but the fallout left lasting scepticism and lower enthusiasm among AI users.
Underlying this debate is the wider reality that bots dominate much of the online environment. Imperva estimates that more than half of 2024’s internet traffic was non-human, while X’s own Grok chatbot admitted to hundreds of millions of bots on the platform.
Some observers suggest Altman’s comments may foreshadow an OpenAI-backed social media venture. Whether such a project could avoid the same bot-related challenges remains uncertain, with research suggesting that even bot-only networks eventually create echo chambers of their own.
A malvertising campaign is targeting IT workers in the EU with fake GitHub Desktop installers, according to Arctic Wolf. The goal is to steal credentials, deploy ransomware, and infiltrate sensitive systems. The operation has reportedly been active for over six months.
Attackers used malicious Google Ads that redirected users to doctored GitHub repositories. Modified README files mimicked genuine download pages but linked to a lookalike domain. macOS users received the AMOS Stealer, while Windows victims downloaded bloated installers hiding malware.
The Windows malware evaded detection using GPU-based checks, refusing to run in sandboxes that lacked real graphics drivers. On genuine machines, it copied itself to %APPDATA%, sought elevated privileges, and altered Defender settings. Analysts dubbed the technique GPUGate.
The payload persisted by creating privileged tasks and sideloading malicious DLLs into legitimate executables. Its modular system could download extra malware tailored to each victim. The campaign was geo-fenced to EU targets and relied on redundant command servers.
Researchers warn that IT staff are prime targets due to their access to codebases and credentials. With the campaign still active, Arctic Wolf has published indicators of compromise, YARA rules, and security advice to mitigate the GPUGate threat.
Researchers combine human tissue models with explainable AI to analyse patient data and identify treatments that work best for specific patients. First applied to inflammatory bowel disease, the approach could improve clinical trial success rates and accelerate drug discovery.
REPROCELL, IBM, and the STFC Hartree Centre have developed Pharmacology-AI, a platform uniting tissue models with machine learning. Delivered through the Hartree National Centre for Digital Innovation, it reduces costs, enhances trial design, and enables more targeted therapies.
Unlike tools that seek to replace human expertise, the platform acts as a decision-support system. It helps scientists detect patterns in complex datasets while keeping outputs interpretable for clinical trial use. Developers emphasised usability, ensuring non-technical staff can work with the system.
Fresh human tissue models play a central role, preserving biological complexity and simulating drug effects before trials. This approach generates reliable data that can be paired with AI to identify optimal patient profiles and reduce the risk of costly trial failures.
The project’s success suggests broad applications beyond IBD. With explainable AI and high-quality patient data, Pharmacology-AI could improve outcomes across multiple disease areas, making drug development faster, more efficient, and more precise.
Stablecoins have become central to the digital economy, with billions in daily transactions and stronger regulatory backing under the GENIUS Act. Yet experts warn that advances in quantum computing could undermine their very foundations.
Elliptic curve and RSA cryptography, widely used in stablecoin systems, are expected to be breakable once ‘Q-Day’ arrives. Quantum-equipped attackers could instantly derive private keys from public addresses, exposing entire networks to theft.
The immutability of blockchains makes upgrading cryptographic schemes especially challenging. Dormant wallets and legacy addresses may prove vulnerable, putting billions of dollars at risk if issuers fail to take action promptly.
Researchers highlight lattice-based and hash-based algorithms as viable ‘quantum-safe’ alternatives. Stablecoins built with crypto-agility, enabling seamless upgrades, will better adapt to new standards and avoid disruptive forks.
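The appeal of hash-based schemes is that they rely only on the preimage resistance of a hash function, a property not known to be weakened by quantum algorithms such as Shor's. As a toy illustration of the idea (not a production scheme, and not what any stablecoin actually deploys), a Lamport one-time signature can be sketched in a few lines of Python using only the standard library:

```python
import hashlib
import secrets

def keygen(bits: int = 256):
    # Private key: two random 32-byte preimages per digest bit.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    # Public key: the SHA-256 hash of each preimage.
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def _digest_bits(message: bytes):
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(message: bytes, sk):
    # Reveal one preimage per bit of the message digest.
    return [sk[i][bit] for i, bit in enumerate(_digest_bits(message))]

def verify(message: bytes, sig, pk) -> bool:
    # Hash each revealed preimage and check it against the public key.
    return all(hashlib.sha256(sig[i]).digest() == pk[i][bit]
               for i, bit in enumerate(_digest_bits(message)))

sk, pk = keygen()
sig = sign(b"transfer 100 tokens", sk)   # hypothetical message for illustration
```

Each Lamport key pair can safely sign only one message, which is why standardised post-quantum schemes such as the hash-based SLH-DSA (SPHINCS+) and the lattice-based ML-DSA build more elaborate constructions on similar foundations. Crypto-agility, in practice, means designing protocols so that such a scheme can be swapped in without a disruptive fork.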
Regulators are also moving. NIST is finalising post-quantum cryptographic standards, and new rules will likely be established before 2030. Stablecoins that embed resilience today may set the global benchmark for digital trust in the quantum age.
Production at Jaguar Land Rover (JLR) is to remain halted until at least next week after a cyberattack crippled the carmaker’s operations. Disruption is expected to last through September and possibly into October.
The UK’s largest car manufacturer, owned by Tata, has suspended activity at its plants in Halewood, Solihull, and Wolverhampton. Thousands of staff have been told to stay home on full pay, ‘banking’ hours that are to be recovered later.
Suppliers, including Evtec, WHS Plastics, SurTec, and OPmobility, which employ more than 6,000 people in the UK, have also paused their operations. The Sunday Times reported speculation that the outage could drag on for most of September.
While there is no evidence of a data breach, JLR has notified the Information Commissioner’s Office about potential risks. Dozens of internal systems, including spare parts databases, remain offline, forcing dealerships to revert to manual processes.
Hackers linked to the groups Scattered Spider, Lapsus$, and ShinyHunters have claimed responsibility for the incident. JLR stated that it was collaborating with cybersecurity experts and law enforcement to restore systems in a controlled and safe manner.
Australia has announced plans to curb AI tools that generate nude images and enable online stalking. The government said it would introduce new legislation requiring tech companies to block apps designed to abuse and humiliate people.
Communications Minister Anika Wells said such AI tools are fuelling sextortion scams and putting children at risk. So-called ‘nudify’ apps, which digitally strip clothing from images, have spread quickly online.
A Save the Children survey found one in five young people in Spain had been targeted by deepfake nudes, showing how widespread the abuse has become.
Canberra pledged to use every available measure to restrict access, while ensuring that legitimate AI services are not harmed. Australia has already passed strict laws banning under-16s from social media, with the new measures set to build on its reputation as a leader in online safety.
Universal Internet connectivity by 2030 could cost up to $2.8 trillion, according to the International Telecommunication Union (ITU) and Saudi Arabia’s Communications, Space, and Technology (CST) Commission. The blueprint urges global cooperation to connect the one-third of humanity still offline.
The largest share, up to $1.7 trillion, would be allocated to expanding broadband through fibre, wireless, and satellite networks. Nearly $1 trillion is needed for affordability measures, alongside $152 billion for digital skills programmes.
ITU Secretary-General Doreen Bogdan-Martin emphasised that connectivity is essential for access to education, employment, and vital services. She noted the stark divide between high-income countries, where 93% of people are online, and low-income states, where only 27% use the Internet.
The study shows costs have risen fivefold since ITU’s 2020 Connecting Humanity report, reflecting both higher demand and widening divides. Haytham Al-Ohali from Saudi Arabia said the figures underscore the urgency of investment and knowledge sharing to achieve meaningful connectivity.
The report recommends new business models and stronger cooperation between governments, industry, and civil society. Proposed measures include using schools as Internet gateways, boosting Africa’s energy infrastructure, and improving localised data collection to accelerate digital inclusion.
Jaguar Land Rover (JLR) has ordered factory staff to work from home until at least next Tuesday as it recovers from a major cyberattack. Production remains suspended at key UK sites, including Halewood, Solihull, and Wolverhampton.
The disruption, first reported earlier this week, has ‘severely impacted’ production and sales, according to JLR. Reports suggest that assembly line workers have been instructed not to return before 9 September, while the situation remains under review.
The hack has hit operations beyond manufacturing, with dealerships unable to order parts and some customer handovers delayed. The timing is particularly disruptive, coinciding with the September release of new registration plates, which traditionally boosts demand.
A group of young hackers on Telegram, calling themselves Scattered Lapsus$ Hunters, has claimed responsibility for the incident. Linked to earlier attacks on Marks & Spencer and Harrods, the group reportedly shared screenshots of JLR’s internal IT systems as proof.
The incident follows a wider spate of UK retail and automotive cyberattacks this year. JLR has stated that it is working quickly to restore systems and emphasised that there is ‘no evidence’ that customer data has been compromised.
The European Union Aviation Safety Agency (EASA) has published survey results probing the ethical outlook of aviation professionals on AI deployment, released during its AI Days event in Cologne.
The AI Days conference gathered nearly 200 on-site attendees from across the globe, with even more participating online.
The survey measured acceptance, trust and comfort across eight hypothetical AI use cases, yielding an average acceptance score of 4.4 out of 7. Despite growing interest, two-thirds of respondents rejected at least one scenario.
Their key concerns included limitations of AI performance, privacy and data protection, accountability, safety risks and the potential for workforce de-skilling. A clear majority called for stronger regulation and oversight by EASA and national authorities.
In a keynote address, Christine Berg from the European Commission highlighted that AI in aviation is already practical, optimising air traffic flow and predictive maintenance, while emphasising the need for explainable, reliable and certifiable systems under the EU AI Act.
Survey findings will feed into EASA’s AI Roadmap and prompt public consultations as the agency advances policy and regulatory frameworks.