Apple may replace Google with AI in Safari

Apple may soon reshape how users search the web on iPhones and other devices by integrating AI-powered search engines directly into Safari instead of relying solely on Google.

According to Bloomberg, the company is ‘actively looking at’ expanding options in its browser to include AI systems such as OpenAI’s ChatGPT and Perplexity, potentially disrupting Google’s long-held dominance in online search.

Currently, Google pays Apple around $20 billion a year to remain the default search engine in Safari — about 36% of the search ad revenue generated through Apple devices. But that relationship may be under pressure, especially as AI tools gain popularity.

Apple has already partnered with OpenAI to bring ChatGPT into Siri, while Google is now pushing to include its Gemini AI system in future Apple products.

Alphabet’s shares dropped 6% following the news, while Apple saw a 2% dip. Apple executive Eddy Cue, testifying in the ongoing US antitrust case against Google, noted a recent decline in Safari searches and said he expects AI search tools to eventually replace traditional engines like Google.

Apple, he added, plans to introduce these AI services as built-in alternatives in Safari in the near future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netflix introduces AI chatbot to help you pick what to watch

Netflix is trialling an AI chatbot inside its iOS app, offering a new way for users to find content by simply typing natural phrases instead of relying on standard searches. In this small, opt-in beta, users might say things like ‘I want something funny and upbeat’ to receive tailored recommendations.

The company believes the AI chatbot could soon become a core part of its app on both iOS and Android, and perhaps even land on TVs in future.

Alongside this, Netflix is reshaping the user experience by surfacing helpful labels like ‘Emmy Award Winner’ and ‘#1 in TV Shows’ to help viewers choose faster instead of scrolling endlessly.

Search and My List are moving to the top of TV screens for better visibility, and the homepage is getting a cleaner, more modern design.

Netflix says recommendations will also shift dynamically based on a viewer’s mood or interests, although it hasn’t explained exactly how this will work.

On mobile, Netflix plans to roll out a vertical feed of show and movie clips in the coming weeks. You’ll be able to tap to watch, save, or share immediately—turning content discovery into a quick and interactive experience instead of a chore.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit plans identity checks after AI bots spark authenticity crisis

Reddit is preparing to introduce identity verification measures following a major controversy over AI bots impersonating humans on its platform.

The move comes after researchers posted more than 1,700 AI-generated comments on the ‘Change My View’ subreddit, adopting personas that included abuse survivors and opponents of the Black Lives Matter movement.

The large-scale experiment, designed to test AI’s persuasiveness, left many users alarmed and raised serious concerns about trust and authenticity on the site.

The company, which condemned the incident as an ‘improper and highly unethical experiment,’ filed a formal complaint against the university responsible.

However, Reddit faces a broader and more persistent issue: generative AI bots infiltrating the platform for purposes ranging from scientific studies to political manipulation.

To counter this, CEO Steve Huffman announced that Reddit would soon collaborate with third-party services to verify whether users are human — a major shift for a platform built on anonymity.

‘To keep Reddit human and meet evolving regulatory requirements, we are going to need a little more information,’ Huffman explained. While the company says it will not seek names or deeply personal data, age and humanity checks will become necessary in certain cases.

This comes amid growing regulatory pressure globally, with some jurisdictions already mandating age verification on social media.

Despite Huffman’s assurances that anonymity remains essential to Reddit, privacy advocates have voiced concerns. Opponents of ID checks warn that verifying user identities could pose serious risks, especially if authorities demand access to sensitive user data.

Examples like Meta’s controversial handover of private messages in Nebraska — which resulted in felony charges in an abortion-related case — highlight how anonymity breaches could have severe consequences.

Reddit insists it will rely on external partners to collect only the essential data and will continue resisting unreasonable demands from public or private authorities. As AI’s influence grows, the company faces the challenge of balancing user anonymity with the need to protect its platform from manipulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft adds AI assistant to Windows 11 settings

Microsoft is bringing more AI to Windows 11 with a new AI assistant built into the Settings app. This smart agent can adjust system settings like mouse precision, help users navigate the interface, and even troubleshoot problems—all at the user’s request.

With the user’s permission, it can also make changes automatically instead of relying on manual adjustments.

The AI assistant will first roll out to testers in the Windows Insider programme on Snapdragon-powered Copilot+ PCs, followed by support for x86-based systems.

Although Microsoft has not confirmed a release date for the general public, this feature marks a major step in making Windows settings more intuitive and responsive.

Several other AI-powered updates are on the way, including smarter tools in File Explorer and the Snipping Tool, plus dynamic lighting in the Photos app.

Copilot will also gain a new ‘Vision’ feature, letting it see shared windows for better in-app assistance instead of being limited to text prompts alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google faces DOJ’s request to sell key ad platforms

The US Department of Justice (DOJ) has moved to break up Google’s advertising technology business after a federal judge ruled that the company holds illegal monopolies across two markets.

The DOJ is seeking the sale of Google’s AdX digital advertising marketplace and its DoubleClick for Publishers (DFP) platform, which helps publishers manage their ad inventory.

The move follows an April ruling by US District Judge Leonie Brinkema, who found that Google’s dominance in the online advertising market violated antitrust laws.

Both AdX and DFP came to Google through acquisitions, most notably the $3.1 billion purchase of DoubleClick, completed in 2008. The DOJ argues that Google used monopolistic tactics, such as acquisitions and customer lock-ins, to control the ad tech market and stifle competition.

In response, Google has disputed the DOJ’s move, claiming the proposed sale of its advertising tools exceeds the court’s findings and could harm publishers and advertisers.

The DOJ’s latest filing also comes amid a separate legal action over Google’s Chrome browser, and the company is facing additional scrutiny in the UK for its dominance in the online search market.

The UK’s Competition and Markets Authority (CMA) has found that Google engaged in anti-competitive practices in open-display advertising technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI to boost India’s media and entertainment sector

AI could boost revenues by 10% and cut costs by 15% for media and entertainment firms, according to an EY report unveiled during the first WAVES Summit.

The report, ‘A Studio Called India’, outlines how AI is reshaping the global media landscape—transforming everything from content creation and personalisation to monetisation and distribution.

India, already a global leader in content production and IT, is well-positioned to lead this AI-driven shift.

EY highlighted India’s unique combination of technical skill, creative depth, and a rapidly expanding AI ecosystem, which positions it as a critical hub in the evolving media value chain rather than merely an outsourcing destination.

Indian companies are increasingly using generative AI for tasks like campaign optimisation, audience targeting, automated dubbing, and voice cloning.

These tools enable faster localisation of international content and allow global studios to scale up multi-language releases without sacrificing cultural authenticity or narrative integrity.

With 2.8 million people directly employed and around 10 million in indirect roles, India’s media sector is growing rapidly, driven by digital platforms, government support, and rising demand for AI-enhanced content services.

EY concluded that India offers foreign investors a powerful combination of creative scale, cost advantage, and favourable policies rather than regulatory barriers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chefs quietly embrace AI in the kitchen

At this year’s Michelin Guide awards in France, AI sparked nearly as much conversation as the stars themselves.

Paris-based chef Matan Zaken, of the one-star restaurant Nhome, said AI dominated discussions among chefs, even though many are hesitant to admit they already rely on tools like ChatGPT for inspiration and recipe development.

Zaken openly embraces AI in his kitchen, using platforms like ChatGPT Premium to generate ingredient pairings—such as peanuts and wild garlic—that he might not have considered otherwise. Instead of starting with traditional tastings, he now consults vast databases of food imagery and chemical profiles.

In a recent collaboration with the digital collective Obvious Art, AI-generated food photos came first, and Zaken created dishes to match them.

Still, not everyone is sold on AI’s place in haute cuisine. Some top chefs insist that no algorithm can replace the human palate or creativity honed by years of training.

Philippe Etchebest, who just earned a second Michelin star, argued that while AI may be helpful elsewhere, it has no place in the artistry of the kitchen. Others worry it strays too far from the culinary traditions rooted in local produce and craftsmanship.

Many chefs, however, seem more open to using AI behind the scenes. From managing kitchen rotas to predicting ingredient costs or carbon footprints, phone apps like Menu and Fullsoon are gaining popularity.

Experts believe molecular databases and cookbook analysis could revolutionise flavour pairing and food presentation, while robots might one day take over laborious prep work—peeling potatoes included.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google admits using opted-out content for AI training

Google has admitted in court that it can use website content to train AI features in its search products, even when publishers have opted out of such training.

Although Google offers a way for sites to block their data from being used by its AI lab, DeepMind, the company confirmed that its broader search division can still use that data for AI-powered tools like AI Overviews.

The practice has raised concern among publishers, who fear reduced traffic as Google’s AI summarises answers directly at the top of search results, diverting users from clicking through to original sources.

Eli Collins, a vice-president at Google DeepMind, acknowledged during a Washington antitrust trial that Google’s search team could train AI using data from websites that had explicitly opted out.

The only way for publishers to fully prevent their content from being used in this way is by opting out of being indexed by Google Search altogether—something that would effectively make them invisible on the web.

Google’s approach relies on the robots.txt file, a standard that tells search bots whether they are allowed to crawl a site.
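For illustration, the sketch below uses Python’s standard urllib.robotparser to show how such per-crawler directives are evaluated. The robots.txt content is hypothetical, assuming a publisher that keeps Googlebot (Search indexing) allowed while disallowing Google-Extended, the token Google introduced for AI-training opt-outs; neither token is named in the testimony cited here.

```python
# Minimal sketch: how per-agent robots.txt rules are evaluated.
# The robots.txt below is a hypothetical publisher policy: indexable by
# Googlebot, but opted out of the Google-Extended AI-training token.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: Googlebot
Allow: /

User-agent: Google-Extended
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for agent in ("Googlebot", "Google-Extended"):
    verdict = "allowed" if parser.can_fetch(agent, "https://example.com/article") else "blocked"
    print(f"{agent}: {verdict}")
```

As the testimony describes, an opt-out of this kind governs training by DeepMind but does not stop the search division from using crawled content in features such as AI Overviews.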

The trial is part of a broader effort by the US Department of Justice to address Google’s dominance in the search market, which a judge previously ruled had been unlawfully maintained.

The DOJ is now asking the court to impose major changes, including forcing Google to sell its Chrome browser and to stop paying partners to make it the default search engine on their devices and browsers. These changes would also apply to Google’s AI products, which the DOJ argues benefit from its monopoly.

Testimony also revealed internal discussions at Google about how using extensive search data, such as user session logs and search rankings, could significantly enhance its AI models.

Although no model was confirmed to have been built using that data, court documents showed that top executives like DeepMind CEO Demis Hassabis had expressed interest in doing so.

Google’s lawyers have argued that competitors in AI remain strong, with many relying on direct data partnerships instead of web scraping.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK refuses to include Online Safety Act in US trade talks

The UK government has ruled out watering down the Online Safety Act as part of any trade negotiations with the US, despite pressure from American tech giants.

Speaking to MPs on the Science, Innovation and Technology Committee, Baroness Jones of Whitchurch, the parliamentary under-secretary for online safety, stated unequivocally that the legislation was ‘not up for negotiation’.

‘There have been clear instructions from the Prime Minister,’ she said. ‘The Online Safety Act is not part of the trade deal discussions. It’s a piece of legislation — it can’t just be negotiated away.’

Reports had suggested that President Donald Trump’s administration might seek to make loosening the UK’s online safety rules a condition of a post-Brexit trade agreement, following lobbying from large US-based technology firms.

However, Baroness Jones said the legislation was well into its implementation phase and that ministers were ‘happy to reassure everybody’ that the government is sticking to it.

The Online Safety Act will require tech platforms that host user-generated content, such as social media firms, to take active steps to protect users — especially children — from harmful and illegal content.

Non-compliant companies may face fines of up to £18 million or 10% of global turnover, whichever is greater. In extreme cases, platforms could be blocked from operating in the UK.

Mark Bunting, a representative of Ofcom, which is overseeing enforcement of the new rules, said the regulator would have taken action had the legislation been in force during last summer’s riots that followed the Southport attack, which were exacerbated by online misinformation.

His comments contrasted with tech firms including Meta, TikTok and X, which claimed in earlier hearings that little would have changed under the new rules.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI’s CEO Altman confirms rollback of GPT-4o after criticism

OpenAI has reversed a recent update to its GPT-4o model after users complained it had become overly flattering and blindly agreeable. The behaviour, widely mocked online, saw ChatGPT praising dangerous or clearly misguided user ideas, leading to concerns over the model’s reliability and integrity.

The change had been part of a broader attempt to make GPT-4o’s default personality feel more ‘intuitive and effective’. However, OpenAI admitted the update relied too heavily on short-term user feedback and failed to consider how interactions evolve over time.

In a blog post published Tuesday, OpenAI said the model began producing responses that were ‘overly supportive but disingenuous’. The company acknowledged that sycophantic interactions could feel ‘uncomfortable, unsettling, and cause distress’.

Following CEO Sam Altman’s weekend announcement of an impending rollback, OpenAI confirmed that the previous, more balanced version of GPT-4o had been reinstated.

It also outlined steps to avoid similar problems in future, including refining model training, revising system prompts, and expanding safety guardrails to improve honesty and transparency.

Further changes in development include real-time feedback mechanisms and allowing users to choose between multiple ChatGPT personalities. OpenAI says it aims to incorporate more diverse cultural perspectives and give users greater control over the assistant’s behaviour.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!