EU demands answers from Apple, Google, Microsoft and Booking.com on scam risks

The European Commission has asked Apple, Booking.com, Google and Microsoft how they tackle financial scams under the Digital Services Act. The inquiry covers major platforms and search engines, including Apple App Store, Google Play, Booking.com, Bing and Google Search.

Officials want to know how these companies detect fraudulent content and what safeguards they use to prevent scams. For app stores, the focus is on fake financial applications imitating legitimate banking or trading services.

For Booking.com, attention is paid to fraudulent accommodation listings, while Bing and Google Search face scrutiny over links and ads leading to scam websites.

The Commission asked platforms how they verify business identities under ‘Know Your Business Customer’ rules to prevent harm from suspicious actors. Companies must also share details of their ad repositories, enabling regulators and researchers to spot fraudulent ads and patterns.

By taking these steps, the Commission aims to ensure that actions under the DSA complement broader consumer protection measures already in force across the European Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta offers Llama AI to US allies amid global tech race

Meta will provide its Llama AI model to key European institutions, NATO, and several allied countries as part of efforts to strengthen national security capabilities.

The company confirmed that France, Germany, Italy, Japan, South Korea, and the EU will gain access to the open-source model. US defence and security agencies and partners in Australia, Canada, New Zealand, and the UK already use Llama.

Meta stated that the aim is to ensure democratic allies have the most advanced AI tools for decision-making, mission planning, and operational efficiency.

Although its terms bar use for direct military or espionage applications, the company emphasised that supporting allied defence strategies is in the interest of nations.

The move highlights the strategic importance of AI models in global security. Meta has positioned Llama as a counterweight to rival nations' AI efforts, after reports that researchers in China adapted earlier versions of the model for military purposes.

Gemini brings conversational AI to Google TV

Google has launched Gemini for TV, bringing conversational AI to the living room. The update builds on Google TV and Google Assistant, letting viewers chat naturally with their screens to discover shows, plan trips, or even tackle homework questions.

Instead of scrolling endlessly, users can ask Gemini to find a film everyone will enjoy or recap last season’s drama. The AI can handle vague requests, like finding ‘that new hospital drama,’ and provide reviews before you press play.

Gemini also turns the TV into an interactive learning tool. From explaining why volcanoes erupt to guiding kids through projects, it offers helpful answers with supporting YouTube videos for hands-on exploration.

Beyond schoolwork, Gemini can help plan meals, teach new skills like guitar, or brainstorm family trips, all through conversational prompts. Such features make the TV a hub for entertainment, education, and inspiration.

Gemini is now available on the TCL QM9K series, with rollout to additional Google TV devices planned for later this year. Google says more features are coming soon, making TVs more capable and personalised.

Stellantis hit by breach affecting millions of customers

Stellantis, the parent company of Jeep, Chrysler and Dodge, has disclosed a data breach affecting its North American customer service operations.

The company said it recently discovered unauthorised access to a third-party service platform and confirmed that customer contact details were exposed. Stellantis stressed that no financial information was compromised and that affected customers and regulators are being notified.

Cybercriminal group ShinyHunters has claimed responsibility, telling tech site BleepingComputer it had stolen over 18 million Salesforce records from the automaker, including names and contact information. Stellantis has not confirmed the number of records involved.

ShinyHunters has targeted several global firms this year, including Google, Louis Vuitton and Allianz Life, often using voice phishing to trick employees into downloading malicious software. The group claims to have stolen 1.5 billion Salesforce records from more than 700 companies worldwide.

ChatGPT Go launches in Indonesia with $4.50 monthly plan

OpenAI has launched its low-cost ChatGPT Go subscription in Indonesia, pricing it at 75,000 rupiah ($4.50) per month. The new plan offers ten times the messaging capacity of the free tier, plus image generation tools and double the memory.

The rollout follows last month’s successful launch in India, where ChatGPT subscriptions more than doubled. India has since become OpenAI’s largest market, accounting for around 13.5% of global monthly active users. The US remains second.

Nick Turley, OpenAI Vice President and head of ChatGPT, said Indonesia is already one of the platform’s top five markets by weekly activity. The new tier is aimed at expanding reach in populous, price-sensitive regions while ensuring broader access to AI services.

OpenAI is also strengthening its financial base as it pushes into new markets. On Monday, the company secured a $100 billion investment commitment from NVIDIA, joining Microsoft and SoftBank among its most prominent backers. The funding comes amid intensifying competition in the AI industry.

Americans fear AI will weaken creativity and human connections

A new Pew Research Center survey shows Americans are more worried than excited about AI shaping daily life. Half of adults say AI’s rise will harm creative thinking and meaningful relationships, while only small shares see improvements.

Many want greater control over its use, even as most are willing to let it assist with routine tasks.

The survey of over 5,000 US adults found 57% consider AI’s societal risks to be high, with just a quarter rating the benefits as significant. Most respondents also doubt their ability to recognise AI-generated content, although three-quarters believe being able to tell human from machine output is essential.

Americans remain sceptical about AI in personal spheres such as religion and matchmaking, instead preferring its application in data-heavy tasks like weather forecasting, fraud detection and medical research.

Younger adults are more aware of AI than older generations, yet they are also more likely to believe it will undermine creativity and human connections.

Oracle to oversee TikTok algorithm in US deal

The White House has confirmed that TikTok’s prized algorithm will be managed in the US under Oracle’s supervision as part of a deal to place the app’s US operations under majority American ownership. The agreement would transfer control of TikTok’s US business, along with a copy of the algorithm, to a new joint venture run by a board dominated by American investors.

The confirmed participants are Oracle and private equity firm Silver Lake, with Fox Corp. also expected to join the group. President Donald Trump has suggested that high-profile figures such as Michael Dell, Rupert Murdoch and Lachlan Murdoch could be involved, though CNN sources say the Murdochs will not invest personally. ByteDance will keep a stake of less than 20% in the new US entity.

The deal follows years of negotiations over concerns that TikTok’s Chinese parent company could be pressured to manipulate the platform for political influence. By law, ByteDance is barred from cooperating on the algorithm with any new American owners. To address these fears, the code will be reviewed, retrained on US user data, and monitored by Oracle to ensure its independence.

President Trump is expected to sign an executive order later this week certifying that the deal meets national security requirements under last year’s ‘ban-or-sale’ law. He will also extend the pause on enforcement by 120 days, giving Washington and Beijing time to finalise regulatory approvals. The White House said the deal could be signed within days, with completion likely early next year.

The arrangement deepens Oracle’s role in managing TikTok’s American presence, building on its existing partnership to store US user data. The development coincided with Oracle announcing a leadership shake-up, with CEO Safra Catz stepping down to become vice chair and two co-CEOs taking over. It is unclear if the timing is connected, but Catz, a close Trump ally, could take a role in the TikTok venture.

While financial details remain uncertain, the White House has ruled out taking a direct stake in the company. The deal, valued in the billions, would conclude a years-long effort to bring TikTok under US oversight and resolve national security concerns tied to its Chinese ownership.

Misconfigurations drive major global data breaches

Misconfigurations in cloud systems and enterprise networks remain one of the most persistent and damaging causes of data breaches worldwide.

Recent incidents have highlighted the scale of the issue, including a cloud breach at the US Department of Homeland Security, where sensitive intelligence data was inadvertently exposed to thousands of unauthorised users.

Experts say such lapses are often more about people and processes than technology. Complex workflows, rapid deployment cycles and poor oversight allow errors to spread across entire systems. Misconfigured servers, storage buckets or access permissions then become easy entry points for attackers.
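Many such lapses are mechanical enough to be caught automatically before deployment. As a purely illustrative sketch (the setting names and rules below are hypothetical, not any real cloud provider's API), a minimal configuration audit might look like:

```python
# Illustrative config "linter": flags a few common risky settings.
# Setting names and rules are hypothetical, for demonstration only.

RISKY_RULES = {
    "public_access": lambda v: v is True,            # world-readable storage
    "allowed_ips":   lambda v: "0.0.0.0/0" in v,     # open to the whole internet
    "encryption":    lambda v: v in (None, "none"),  # data stored unencrypted
}

def audit_config(config):
    """Return human-readable findings for any risky settings present."""
    return [
        f"risky setting: {key}={config[key]!r}"
        for key, is_risky in RISKY_RULES.items()
        if key in config and is_risky(config[key])
    ]
```

Running checks like this as a gate in a deployment pipeline turns a people-and-process problem into an automated one: a config containing `public_access: True` produces a finding and can block the release before the bucket ever goes live.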

Analysts argue that preventing these mistakes requires better governance, training and process discipline rather than new technology alone. Building strong safeguards and ensuring staff have the knowledge to configure systems securely are critical to closing one of the most exploited doors in cybersecurity.

Research shows AI complements, not replaces, human work

AI headlines often flip between hype and fear, but the truth is more nuanced. Much research is misrepresented, with task overlaps miscast as job losses. Leaders and workers need clear guidance on using AI effectively.

Microsoft Research mapped 200,000 Copilot conversations to work tasks, but headlines warned of job risks. The study showed overlap, not replacement. Context, judgment, and interpretation remain human strengths, meaning AI supports rather than replaces roles.

Other research is similarly skewed. METR found that AI slowed developers by 19%, but mostly due to the learning curves associated with first use. MIT’s ‘GenAI Divide’ measured adoption, not ability, showing workflow gaps rather than technology failure.

Better studies reveal the collaborative power of AI. Harvard’s ‘Cybernetic Teammate’ experiment demonstrated that individuals using AI performed as well as full teams without it. AI bridged technical and commercial silos, boosting engagement and improving the quality of solutions produced.

The future of AI at work will be shaped by thoughtful trials, not headlines. By treating AI as a teammate, organisations can refine workflows, strengthen collaboration, and turn AI’s potential into long-term competitive advantage.

Behavioural AI could be the missing piece in the $2 trillion AI economy

Global AI spending is projected to reach $1.5 trillion in 2025 and exceed $2 trillion in 2026, yet a critical element is missing: human judgement. A growing number of organisations are turning to behavioural science to bridge this gap, coding it directly into AI systems to create what experts call behavioural AI.

Early adopters like Clarity AI utilise behavioural AI to flag ESG controversies before they impact earnings. Morgan Stanley uses machine learning and satellite data to monitor environmental risks, while Google Maps influences driver behaviour, preventing over one million tonnes of CO₂ annually.

Behavioural AI is being used to predict how leaders and societies act under uncertainty. These insights guide corporate strategy, PR campaigns, and decision-making. Mind Friend combines a network of 500 mental health experts with AI to build a ‘behavioural infrastructure’ that enhances judgement.

The behaviour analytics market was valued at $1.1 billion in 2024 and is projected to grow to $10.8 billion by 2032. Major players, such as IBM and Adobe, are entering the field, while Davos and other global forums debate how behavioural frameworks should shape investment and policy decisions.

As AI scrutiny grows, ethical safeguards are critical. Companies that embed governance, fairness, and privacy protections into their behavioural AI are earning trust. In a $2 trillion market, winners will be those who pair algorithms with a deep understanding of human behaviour.
