Tea dating app suspends messaging after major data breach

The women’s dating safety app Tea has suspended its messaging feature following a cyberattack that exposed thousands of private messages, posts and images.

The app, which helps women run background checks on men, confirmed that direct messages were accessed during the initial breach disclosed in late July.

Tea has 1.6 million users, primarily in the US. Affected users will be contacted directly and offered free identity protection services, including credit monitoring and fraud alerts.

The company said it is working to strengthen its security and will provide updates as the investigation continues. Some of the leaked conversations reportedly contain sensitive discussions about infidelity and abortion.

Experts have warned that the leak of both images and messages raises the risk of emotional harm, blackmail or identity theft. Cybersecurity specialists recommend that users accept the free protection services as soon as possible.

The breach affected those who joined the app before February 2024, including users who submitted ID photos that Tea had promised would be deleted after verification.

Tea is known for allowing women to check if a potential partner is married or has a criminal record, as well as share personal experiences to flag abusive or trustworthy behaviour.

The app’s recent popularity surge has also sparked criticism, with some claiming it unfairly targets men. As users await more information, experts urge caution and vigilance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trust in human doctors remains despite AI advancements

OpenAI CEO Sam Altman has stated that AI, especially ChatGPT, now surpasses many doctors in diagnosing illnesses. However, he pointed out that individuals still prefer human doctors because of the trust and emotional connection they provide.

Altman also expressed concerns about the potential misuse of AI, such as using voice cloning for fraud and identity theft. He emphasised the need for stronger privacy protections for sensitive conversations with AI tools like ChatGPT, noting that current standards are inadequate and should align with those for therapists.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Flipkart employee deletes ChatGPT over emotional dependency

ChatGPT has become an everyday tool for many, serving as a homework partner, a research aid, and even a comforting listener. But questions are beginning to emerge about the emotional bonds users form with it. A recent LinkedIn post has reignited the debate around AI overuse.

Simrann M Bhambani, a marketing professional at Flipkart, publicly shared her decision to delete ChatGPT from her devices. In a post titled ‘ChatGPT is TOXIC! (for me)’, she described how casual interaction escalated into emotional dependence. The platform began to resemble a digital therapist.

Bhambani admitted to confiding every minor frustration and emotional spiral to the chatbot. Its constant availability and non-judgemental replies gave her a false sense of security. Even with supportive friends, she felt drawn to the machine’s quiet reliability.

What began as curiosity turned into compulsion. She found herself spending hours feeding the bot intrusive thoughts and endless questions. ‘I gave my energy to something that wasn’t even real,’ she wrote. The experience led to more confusion instead of clarity.

Rather than offering mental relief, the chatbot fuelled her overthinking. The emotional noise grew louder, eventually becoming overwhelming. She realised that the problem wasn’t the technology itself, but how it quietly replaced self-reflection.

Deleting the app marked a turning point. Bhambani described the decision as a way to reclaim mental space and reduce digital clutter. She warned others that AI tools, while useful, can easily replace human habits and emotional processing if left unchecked.

Many users may not notice such patterns until they are deeply entrenched. AI chatbots are designed to be helpful and responsive, but they lack the nuance and care of human conversation. Their steady presence can foster a deceptive sense of intimacy.

People increasingly rely on digital tools to navigate their daily emotions, often without understanding the consequences. Some may find themselves withdrawing from human relationships or journalling less often. Emotional outsourcing to machines can significantly change how people process personal experiences.

Industry experts have warned about the risks of emotional reliance on generative AI. Chatbots are known to produce inaccurate or hallucinated responses, especially when asked to provide personal advice. Sole dependence on such tools can lead to misinformation or emotional confusion.

Companies like OpenAI have stressed that ChatGPT is not a substitute for professional mental health support. While the bot is trained to provide helpful and empathetic responses, it cannot replace human judgement or real-world relationships. Boundaries are essential.

Mental health professionals also caution against using AI as an emotional crutch. Reflection and self-awareness take time and require discomfort, which AI often smooths over. The convenience can dull long-term growth and self-understanding.

Bhambani’s story has resonated with many who have quietly developed similar habits. Her openness has sparked important discussions on emotional hygiene in the age of AI. More users are starting to reflect on their relationship with digital tools.

Social media platforms are also witnessing an increased number of posts about AI fatigue and cognitive overload. People are beginning to question how constant access to information and feedback affects emotional well-being. There is growing awareness around the need for balance.

AI is expected to become even more integrated into daily life, from virtual assistants to therapy bots. Recognising the line between convenience and dependency will be key. Tools are meant to serve, not dominate, personal reflection.

Developers and users alike must remain mindful of how often and why they turn to AI. Chatbots can complement human support systems, but they are not replacements. Bhambani’s experience serves as a cautionary tale in the age of machine intimacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU clears Microsoft deal after privacy changes

The European Data Protection Supervisor (EDPS) has ended its enforcement action against the European Commission over its use of Microsoft 365, following improvements to data protection practices. The decision came after the Commission revised its contract with Microsoft to strengthen privacy safeguards.

Under the updated terms, Microsoft must clarify the reasons for data transfers outside the European Economic Area and name the recipients. Transfers are only allowed to countries with EU-recognised protections or in public interest cases.

Microsoft must also inform the Commission if a foreign government requests access to EU data, unless the request comes from within the EU or a country with equivalent safeguards. The EDPS urged other EU institutions to adopt similar contractual protections if using Microsoft 365.

Despite the EDPS’ clearance, the Commission remains concerned about relying too heavily on a non-EU tech provider for essential digital services. It continues to support the current EU-US data adequacy deal, though recent political changes in the US have cast doubt on its long-term stability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google brings AI Mode to UK search results

Google has officially introduced its AI Mode to UK users, calling it the most advanced version of its search engine.

Instead of listing web links, the feature provides direct, human-like answers to queries. It allows users to follow up with more detailed questions or multimedia inputs such as voice and images. The update aims to keep pace with the rising trend of longer, more conversational search phrases.

The tool first launched in the US and uses a ‘query fan-out’ method, breaking down complex questions into multiple search threads to create a combined answer from different sources.
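Google has not published the implementation details of AI Mode, but the fan-out idea itself is straightforward. The sketch below is a minimal, hypothetical illustration: `search_backend` is an invented placeholder for a real index or API, and a production system would hand the collected snippets to a language model rather than simply joining them.

```python
from concurrent.futures import ThreadPoolExecutor


def search_backend(sub_query: str) -> list[str]:
    # Invented placeholder: a real system would query a search index or web API.
    return [f"snippet answering '{sub_query}'"]


def query_fan_out(question: str, sub_queries: list[str]) -> str:
    """Issue several narrower searches in parallel and merge the snippets,
    mirroring the 'query fan-out' idea described above."""
    with ThreadPoolExecutor() as pool:
        batches = list(pool.map(search_backend, sub_queries))
    snippets = [snippet for batch in batches for snippet in batch]
    # A production system would pass these snippets to a language model to
    # synthesise one combined answer; joining them stands in for that step here.
    return f"Combined answer for: {question}\n" + "\n".join(snippets)


print(query_fan_out(
    "Which lightweight laptop is best for travel under £1,000?",
    ["lightweight laptops 2025", "laptop battery life comparison", "laptops under £1,000"],
))
```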

While Google claims this will result in more meaningful site visits, marketers and publishers are worried about a growing trend known as ‘zero-click searches’, where users find what they need without clicking external links.

Research already shows a steep drop in engagement. Data from the Pew Research Center reveals that only 8% of users click a link when AI summaries are present, nearly half the rate for traditional search pages. Experts warn that without adjusting their strategies, many online brands risk becoming invisible.

Instead of relying solely on classic SEO tactics, businesses are being urged to adopt Generative Engine Optimisation (GEO). Alongside tools such as schema markup, GEO focuses on conversational content, visual media, and context-aware formatting.
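Schema markup is typically expressed as schema.org JSON-LD embedded in a page’s HTML. The snippet below is a minimal, illustrative example of Product markup; the product details are invented, and Python is used only to build and print the JSON-LD.

```python
import json

# Minimal schema.org Product markup; the product details are invented placeholders.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trail running shoe",
    "description": "Lightweight trail shoe with a cushioned sole for long runs.",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {
        "@type": "Offer",
        "priceCurrency": "GBP",
        "price": "89.99",
        "availability": "https://schema.org/InStock",
    },
}

# Embedded in a page as <script type="application/ld+json">...</script>, this
# structured data gives AI-driven search systems explicit facts to draw on.
print(json.dumps(product_jsonld, indent=2))
```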

With nearly half of UK users engaging with AI search daily, adapting to these shifts may prove essential for maintaining visibility and sales.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft adds AI Copilot Mode to Edge browser

Microsoft has launched Copilot Mode in its Edge browser, adding AI features to streamline online activity.

Instead of switching between tabs or manually comparing information, users can ask Copilot to complete tasks, search for content, and make suggestions. The tool is available for PC and Mac users and opens in a side panel, letting people interact with it while still viewing the original page.

Copilot can help with everyday tasks such as writing content, preparing grocery lists, and scheduling appointments. It works across multiple tabs if the user permits, enabling comparisons like hotel or flight prices in a single command.

Voice input is also supported, making it easier for those with limited mobility or less familiarity with AI tools to interact naturally.

Microsoft notes that Copilot Mode remains experimental, but users can still set it as the default. It supports conversational prompts and dynamic interactions, such as adapting a recipe to be vegan, converting measurements, or translating languages, all without losing the user’s place in the browser.

Users may eventually provide login or history access for more advanced tasks, although full consent and clear notifications will be required.

With growing reliance on digital assistants, Microsoft’s move puts Edge in direct competition with other AI-enabled browsers. As more AI tools become embedded in everyday software, the company expects Copilot to evolve rapidly and suggest next steps to help users pick up where they left off.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech giants back Trump’s AI deregulation plan amid public concern over societal impacts

Donald Trump recently hosted an AI summit in Washington, titled ‘Winning the AI Race’, aimed at promoting a deregulated environment for AI innovation. Key figures from the tech industry, including Nvidia’s CEO Jensen Huang and Palantir’s CTO Shyam Sankar, attended the event.

Co-hosted by the Hill and Valley Forum and the Silicon Valley-based All-In Podcast, the summit was a platform for Trump to introduce his ‘AI Action Plan’, comprising three executive orders focused on deregulation. Trump’s objective is to dismantle regulatory restrictions he perceives as obstacles to innovation, aiming to re-establish the US as a global leader in AI exports.

The executive orders target the elimination of ‘ideological dogmas such as diversity, equity, and inclusion (DEI)’ in AI models developed by federally funded companies. One order promotes exporting US-developed AI technologies internationally, while another seeks to ease environmental restrictions and speed up approvals for energy-intensive data centres.

These measures are seen as reversing the Biden administration’s policies, which stressed the importance of safety and security in AI development. Technology giants Apple, Meta, Amazon, and Alphabet have shown significant support for Trump’s initiatives, contributing to his inauguration fund and engaging with him at his Mar-a-Lago estate. Leaders like OpenAI’s Sam Altman and Nvidia’s Jensen Huang have also pledged substantial investments in US AI infrastructure.

Despite this backing, over 100 groups, including labour, environmental, civil rights, and academic organisations, have voiced their opposition through a ‘People’s AI action plan’. These groups warn of the potential risks of unregulated AI, which they fear could undermine civil liberties, equality, and environmental safeguards.

They argue that public welfare should not be compromised for corporate gain, highlighting the dangers of allowing tech giants to dominate policy-making. The dispute illustrates the divide between industry ambitions and concerns about societal consequences.

The tech industry’s influence on AI legislation through lobbying is noteworthy, with a report from Issue One indicating that eight of the largest tech companies spent a collective $36 million on lobbying in 2025 alone. Meta led with $13.8 million, employing 86 lobbyists, while Nvidia and OpenAI saw significant increases in their expenditure compared to previous years. The substantial financial outlay reflects the industry’s vested interest in shaping regulatory frameworks to favour business interests, igniting a debate over the ethical responsibilities of unchecked AI progress.

As tech companies and pro-business entities laud Trump’s deregulation efforts, concerns persist over the societal impacts of such policies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China issues action plan for global AI governance and proposes global AI cooperation organisation

At the 2025 World AI Conference in Shanghai, Chinese Premier Li Qiang urged the international community to prioritise joint efforts in governing AI, making reference to a need to establish a global framework and set of rules widely accepted by the global community. He unveiled a proposal by the Chinese government to create a global AI cooperation organisation to foster international collaboration, innovation, and inclusivity in AI across nations.

China attaches great importance to global AI governance and has been actively promoting multilateral and bilateral cooperation, with a willingness to offer ‘more Chinese solutions’.

An Action Plan for Global AI Governance was also presented at the conference. The plan outlines, in its introduction, a call for ‘all stakeholders to take concrete and effective actions based on the principles of serving the public good, respecting sovereignty, development orientation, safety and controllability, equity and inclusiveness, and openness and cooperation, to jointly advance the global development and governance of AI’.

The document includes 13 points related to key areas of international AI cooperation, including promoting inclusive infrastructure development, fostering open innovation ecosystems, ensuring high-quality data supply, and advancing sustainability through green AI practices. It also calls for consensus-building around technical standards, advancing international cooperation on AI safety governance, and supporting countries – especially those in the Global South – in ‘developing AI technologies and services suited to their national conditions’.

Notably, the plan indicates China’s support for multilateralism when it comes to the governance of AI, calling for an active implementation of commitments made by UN member states in the Pact for the Future and the Global Digital Compact, and expressing support for the establishment of the International AI Scientific Panel and a Global Dialogue on AI Governance (whose terms of reference are currently being negotiated by UN member states in New York).

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT Agent brings autonomous task handling to OpenAI users

OpenAI has launched the ChatGPT Agent, a feature that transforms ChatGPT from a conversational tool into a proactive digital assistant capable of performing complex, real-world tasks.

By activating ‘agent mode,’ users can instruct ChatGPT to handle activities such as booking restaurant reservations, ordering groceries, managing emails and creating presentations.

The Agent operates within a virtual browser environment, allowing it to interact with websites, fill out forms, and execute multi-step tasks autonomously.
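OpenAI has not disclosed how the Agent is built, but the form-filling portion of such a workflow resembles ordinary browser automation. The sketch below uses the open-source Playwright library against a hypothetical booking page; the URL and CSS selectors are invented, and this is not OpenAI’s actual stack, only an illustration of the kind of steps an agent automates.

```python
# Illustrative only: the URL and selectors are invented, and this is standard
# Playwright browser automation rather than OpenAI's agent implementation.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com/reservations")          # hypothetical booking page
    page.fill("input[name='party_size']", "2")             # fill in the form fields
    page.fill("input[name='date']", "2025-08-15")
    page.click("button[type='submit']")                    # submit the reservation
    page.wait_for_selector("text=Reservation confirmed")   # wait for confirmation
    browser.close()
```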

The advancement builds upon OpenAI’s previous tool, Operator, which enabled AI-driven task execution. The ChatGPT Agent, however, offers enhanced capabilities, including integration with third-party services like Gmail and Google Drive, allowing it to manage emails and documents seamlessly.

Users can monitor the Agent’s actions in real-time and intervene when necessary, particularly during tasks involving sensitive information.

While the ChatGPT Agent offers significant convenience, it also raises questions about data privacy and security. OpenAI has implemented safety measures, such as requiring explicit user consent for sensitive actions and training the Agent to refuse risky or malicious requests.

Despite these precautions, concerns persist about how personal information is handled and how access to third-party services is managed. Users should review the Agent’s permissions and settings to ensure their data remains secure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI startup Daydream revolutionises online fashion search

Online shopping for specific items like bridesmaid dresses can be challenging due to overwhelming choices. A new tech startup, Daydream, aims to simplify this. It uses AI to let users search for products by describing them in natural language, making the process easier and more intuitive.

For instance, a user could ask for a ‘revenge dress to wear to a party in Sicily in July,’ or ‘a summer bag to carry to work and cocktails after.’

Daydream, with staff based in New York and San Francisco, represents the latest venture in a growing trend of tech companies utilising AI to streamline and personalise online retail.

Consumer demand for such tools is evident: an Adobe Analytics survey of 5,000 US consumers revealed that 39% had used a generative AI tool for online shopping last year, with 53% planning to do so this year. Daydream faces competition from tech giants already active in this space.

Meta employs AI to facilitate seller listings and to target users with more relevant product advertisements. OpenAI has launched an AI agent capable of shopping across the web for users, and Amazon is trialling a similar feature.

Google has also introduced various AI shopping tools, including automated price tracking, a ‘circle to search’ function for identifying products in photos, and virtual try-on options for clothing.

Despite the formidable competition, Daydream’s CEO, Julie Bornstein, believes her company possesses a deeper understanding of the fashion and retail industries.

Bornstein’s extensive background includes helping build Nordstrom’s website as its vice president of e-commerce in the early 2000s and holding C-suite positions at Sephora and Stitch Fix. In 2018, she co-founded her first AI-powered shopping startup, The Yes, which was sold to Pinterest in 2022.

Bornstein asserts, ‘They don’t have the people, the mindset, the passion to do what needs to be done to make a category like fashion work for AI recommendations.’ She added, ‘Because I’ve been in this space my whole career, I know that having the catalogue with everything and being able to show the right person the right stuff makes shopping easier.’

Daydream has already secured $50 million in its initial funding round, attracting investors such as Google Ventures and model Karlie Kloss, founder of Kode With Klossy. The platform operates as a free, digital personal stylist.

Users can describe the products they want in natural language, with no need for complex Boolean search terms, thanks to the platform’s AI text recognition, or they can upload an inspiration photo instead.

Daydream then presents recommendations from over 8,000 brand partners, ranging from budget-friendly Uniqlo to luxury brand Gucci. Users can further refine their search through a chat interface, for example, by requesting more casual or less expensive alternatives.

As users interact more with the platform, it progressively tailors recommendations based on their search history, clicks, and saved items.
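Daydream has not published its architecture, but a common pattern for this kind of natural-language matching is to embed the query and the product descriptions as vectors and rank items by similarity. The toy sketch below uses a bag-of-words embedding and an invented three-item catalogue purely to keep the example self-contained; a real system would use a learned embedding model over millions of products.

```python
from collections import Counter
from math import sqrt

# Invented toy catalogue; a real system would index millions of items from brand partners.
catalogue = {
    "Linen slip dress, white": "white linen slip dress casual summer daytime",
    "Floral midi dress": "floral midi dress casual summer holiday",
    "Silk gown, navy": "silk gown navy formal evening wedding black tie",
}


def embed(text: str) -> Counter:
    # Bag-of-words stand-in for a learned text embedding.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[word] * b[word] for word in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def search(query: str, top_k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(catalogue, key=lambda name: cosine(q, embed(catalogue[name])), reverse=True)
    return ranked[:top_k]


# The formal navy gown should rank first for a formal evening query.
print(search("formal gown for an evening wedding"))
```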

When customers are ready to purchase, they are redirected to the respective brand’s website to complete the transaction, with Daydream receiving a 20% commission on the sale.

Unlike many other major e-commerce players, Bornstein is deliberately avoiding ad-based rankings. She aims for products to appear on recommendation pages purely because they are a suitable match for the customer, not due to paid placements.

Bornstein stated, ‘As soon as Amazon started doing paid sponsorships, I’m like, “How can I find the real good product?”’ She emphasised, ‘We want this to be a thing where we get paid when we show the customer the right thing.’

A recent CNN test of Daydream yielded mixed results. A search for a ‘white, fitted button-up shirt for the office with no pockets’ successfully returned a $145 cotton long-sleeve shirt from Theory that perfectly matched the description.

However, recommendations are not always flawless. A query for a ‘mother of the bride dress for a summer wedding in California’ presented several slinky slip dresses, some in white, alongside more formal styles, appearing more suitable for a bachelorette party.

Bornstein confirmed that the company continuously refines its AI models and gathers user feedback. She noted, ‘We want data on what people are doing so we can focus and learn where we do well and where we don’t.’

Part of this ongoing development involves training the AI to understand nuanced contextual cues, such as the implications of a ‘dress for a trip to Greece in August’ (suggesting hot weather) or an outfit for a ‘black-tie wedding’ (implying formality).

Daydream’s web version launched publicly last month, and it is currently in beta testing, with plans for an app release in the autumn. Bornstein envisions a future where AI extends beyond shopping, assisting with broader fashion needs like pairing new purchases with existing wardrobe items.

She concluded, ‘This was one of my earliest ideas, but I didn’t know the term (generative AI) and I didn’t know a large language model would be the unlock.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!