The European Commission has asked Apple, Booking.com, Google and Microsoft how they tackle financial scams under the Digital Services Act. The inquiry covers major platforms and search engines, including Apple App Store, Google Play, Booking.com, Bing and Google Search.
Officials want to know how these companies detect fraudulent content and what safeguards they use to prevent scams. For app stores, the focus is on fake financial applications imitating legitimate banking or trading services.
For Booking.com, attention is paid to fraudulent accommodation listings, while Bing and Google Search face scrutiny over links and ads that lead to scam websites.
The Commission asked platforms how they verify business identities under ‘Know Your Business Customer’ rules to prevent harm from suspicious actors. Companies must also share details of their ad repositories, enabling regulators and researchers to spot fraudulent ads and patterns.
By taking these steps, the Commission aims to ensure that actions under the DSA complement broader consumer protection measures already in force across the European Union.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Stellantis, the parent company of Jeep, Chrysler and Dodge, has disclosed a data breach affecting its North American customer service operations.
The company said it recently discovered unauthorised access to a third-party service platform and confirmed that customer contact details were exposed. Stellantis stressed that no financial information was compromised and that affected customers and regulators are being notified.
Cybercriminal group ShinyHunters has claimed responsibility, telling tech site BleepingComputer it had stolen over 18 million Salesforce records from the automaker, including names and contact information. Stellantis has not confirmed the number of records involved.
ShinyHunters has targeted several global firms this year, including Google, Louis Vuitton and Allianz Life, often using voice phishing to trick employees into downloading malicious software. The group claims to have stolen 1.5 billion Salesforce records from more than 700 companies worldwide.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Routine hospital blood samples could help predict spinal cord injury severity and even mortality, a University of Waterloo study has found. Researchers used machine learning to analyse millions of data points from over 2,600 patients.
The models identified patterns in routine blood measurements, including electrolytes and immune cells, collected during the first three weeks following injury. These patterns forecast recovery outcomes even when neurological exams were unreliable or impossible.
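The study's full pipeline is not described here, but the general pattern, training a classifier on summaries of routine lab values over time, can be sketched. The following Python example is a minimal, hypothetical illustration using scikit-learn and synthetic data; the feature layout, labels and model choice are assumptions, not the Waterloo team's actual method.

```python
# Hypothetical sketch: predicting an injury-severity class from routine
# blood measurements taken over the first three weeks post-injury.
# Feature names, data and model choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic stand-in for per-patient features: weekly summaries of a few
# routine labs (e.g. sodium, potassium, white-cell count) over weeks 1-3.
n_patients = 2600
X = rng.normal(size=(n_patients, 9))      # 3 labs x 3 weekly summaries
y = rng.integers(0, 2, size=n_patients)   # 0 = less severe, 1 = severe

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

A real clinical model would also need to handle missing labs, temporal alignment and external validation, all of which this sketch omits.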
Unlike MRI or fluid-based biomarkers, which are not always accessible, routine blood tests are low-cost and widely available in hospitals. The approach could help clinicians make more informed and faster treatment decisions.
The team says its findings could reshape early critical care for spinal cord injuries. Predicting severity sooner could guide resource allocation and prioritise patients needing urgent intervention.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new Pew Research Center survey shows Americans are more worried than excited about AI shaping daily life. Half of adults say AI’s rise will harm creative thinking and meaningful relationships, while only small shares see improvements.
Many want greater control over its use, even as most are willing to let it assist with routine tasks.
The survey of over 5,000 US adults found 57% consider AI’s societal risks to be high, with just a quarter rating the benefits as significant. Most respondents also doubt their ability to recognise AI-generated content, although three-quarters believe being able to tell human from machine output is essential.
Americans remain sceptical about AI in personal spheres such as religion and matchmaking, instead preferring its application in heavy data tasks like weather forecasting, fraud detection and medical research.
Younger adults are more aware of AI than older generations, yet they are also more likely to believe it will undermine creativity and human connections.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google DeepMind has released the third iteration of its Frontier Safety Framework (FSF), aiming to identify and mitigate severe risks from advanced AI models. The update expands risk domains and refines the process for assessing potential threats.
Key changes include the introduction of a Critical Capability Level (CCL) focused on harmful manipulation. The update targets AI models with the potential to systematically influence beliefs and behaviours in high-stakes contexts, ensuring safety measures keep pace with growing model capabilities.
The framework also strengthens protocols for misalignment risks, addressing scenarios where an AI could override operators’ control or resist shutdown attempts. Safety case reviews are now conducted both before external launches and before large-scale internal deployments that reach critical capability thresholds.
The updated FSF sharpens risk assessments and applies safety and security mitigations in proportion to threat severity. It reflects a commitment to evidence-based AI governance, expert collaboration, and ensuring AI benefits humanity while minimising risks.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Misconfigurations in cloud systems and enterprise networks remain one of the most persistent and damaging causes of data breaches worldwide.
Recent incidents have highlighted the scale of the issue, including a cloud breach at the US Department of Homeland Security, where sensitive intelligence data was inadvertently exposed to thousands of unauthorised users.
Experts say such lapses are often more about people and processes than technology. Complex workflows, rapid deployment cycles and poor oversight allow errors to spread across entire systems. Misconfigured servers, storage buckets or access permissions then become easy entry points for attackers.
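To make the point concrete, here is a minimal, hedged sketch of the kind of automated check that can catch one common lapse: publicly readable S3 storage buckets. It assumes an AWS environment with boto3 and configured credentials, and is an illustration rather than a complete audit tool.

```python
# Minimal sketch: flag S3 buckets whose ACLs grant access to everyone.
# Assumes AWS credentials are configured; a real audit would also check
# bucket policies, public-access-block settings and object-level ACLs.
import boto3

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        if grant["Grantee"].get("URI") in PUBLIC_GRANTEES:
            print(f"Publicly accessible: {bucket['Name']} "
                  f"(permission: {grant['Permission']})")
```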
Analysts argue that preventing these mistakes requires better governance, training and process discipline rather than technology alone. Building strong safeguards and ensuring staff have the knowledge to configure systems securely are critical to closing one of the most exploited doors in cybersecurity.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI headlines often flip between hype and fear, but the truth is more nuanced. Much research is misrepresented, with task overlaps miscast as job losses. Leaders and workers need clear guidance on using AI effectively.
Microsoft Research mapped 200,000 Copilot conversations to work tasks, but headlines warned of job risks. The study showed overlap, not replacement. Context, judgment, and interpretation remain human strengths, meaning AI supports rather than replaces roles.
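The distinction between overlap and replacement is easy to illustrate. The toy Python sketch below, which is not Microsoft's actual methodology, scores how textually similar a single AI conversation is to a few occupational task descriptions using TF-IDF cosine similarity; a high score says the AI touched a task, not that it performed the whole job.

```python
# Toy illustration of "overlap, not replacement": score how similar an AI
# conversation is to each occupational task description. The texts here
# are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tasks = [
    "draft and edit business correspondence",
    "diagnose and repair vehicle engines",
    "analyse sales data and prepare reports",
]
conversation = "help me rewrite this email to a client so it sounds clearer"

vec = TfidfVectorizer().fit(tasks + [conversation])
scores = cosine_similarity(
    vec.transform([conversation]), vec.transform(tasks))[0]

for task, score in zip(tasks, scores):
    print(f"{score:.2f}  {task}")
```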
Other research is similarly skewed. METR found that AI slowed developers by 19%, though mostly because of the learning curve of first-time use. MIT’s ‘GenAI Divide’ measured adoption, not ability, showing workflow gaps rather than technology failure.
Better studies reveal the collaborative power of AI. Harvard’s ‘Cybernetic Teammate’ experiment demonstrated that individuals using AI performed as well as full teams without it. AI bridged technical and commercial silos, boosting engagement and improving the quality of solutions produced.
The future of AI at work will be shaped by thoughtful trials, not headlines. By treating AI as a teammate, organisations can refine workflows, strengthen collaboration, and turn AI’s potential into long-term competitive advantage.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
China’s internet watchdog, the Cyberspace Administration of China (CAC), has warned Kuaishou Technology and Weibo for failing to curb celebrity gossip and harmful content on their platforms.
The CAC issued formal warnings, citing damage to the ‘online ecosystem’ and demanding corrective action. Both firms pledged compliance, with Kuaishou forming a task force and Weibo promising self-reflection.
The move follows similar disciplinary action against lifestyle app RedNote and is part of a broader two-month campaign targeting content that ‘viciously stimulates negative emotions.’
Separately, Kuaishou is under investigation by the State Administration for Market Regulation for alleged malpractice in live-streaming e-commerce.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Children’s Advertising Review Unit (CARU) has advised MrBeast, LLC and Feastables to strengthen their advertising and privacy practices following concerns over promotions aimed at children.
CARU found that some videos on the MrBeast YouTube channel included undisclosed advertising in descriptions and pinned comments, which could mislead young viewers.
It also raised concerns about a promotional taste test for Feastables chocolate bars, which children could take as a valid comparison even though it lacked a scientific basis.
Privacy issues were also identified, with Feastables collecting personal data from under-13s without parental consent. CARU noted the absence of an effective age gate and highlighted that information provided via popups was sent to third parties.
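For readers unfamiliar with the mechanism, an age gate is simply a neutral screen that asks for a date of birth before any personal data is collected. The Python/Flask sketch below is a hypothetical illustration of that flow under the under-13 rule; the route name and response shape are invented, and real compliance also requires verifiable parental consent handling.

```python
# Hypothetical sketch of a neutral age gate: ask date of birth before any
# data collection, and refuse to pass personal data onward for under-13s
# without verified parental consent. Route and flow are illustrative.
from datetime import date
from flask import Flask, request, jsonify

app = Flask(__name__)
MIN_AGE = 13

def age_from_dob(dob: date) -> int:
    today = date.today()
    return today.year - dob.year - (
        (today.month, today.day) < (dob.month, dob.day))

@app.route("/age-gate", methods=["POST"])
def age_gate():
    dob = date.fromisoformat(request.json["date_of_birth"])  # "2014-05-01"
    if age_from_dob(dob) < MIN_AGE:
        # Under 13: collect nothing further and share nothing with third
        # parties until verifiable parental consent is obtained.
        return jsonify({"allowed": False,
                        "reason": "parental consent required"}), 403
    return jsonify({"allowed": True})
```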
MrBeast and Feastables said many of the practices under review had already been revised or discontinued, but pledged to take CARU’s recommendations into account in future campaigns.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Nissan has announced plans to launch its next-generation ProPILOT system in fiscal year 2027. The upgraded system will include Nissan Ground Truth Perception, next-generation lidar, and Wayve AI Driver, enhancing collision avoidance and autonomous driving.
Wayve AI Driver software is built on an embodied AI foundation model that enables human-like decision-making in complex real-world driving conditions. By efficiently learning from large volumes of data, the system continuously enhances Nissan vehicles’ performance and safety.
Wayve, a global AI company, specialises in embodied AI for driving. Its foundation model leverages extensive real-world experience to deliver reliable point-to-point navigation across urban and highway environments, while adapting quickly to new scenarios and platforms.
The partnership positions Nissan at the forefront of autonomous vehicle technology, combining cutting-edge sensors, AI, and adaptive software to redefine safety and efficiency in future mobility.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!