Flipkart employee deletes ChatGPT over emotional dependency

ChatGPT has become an everyday tool for many, serving as a homework partner, a research aid, and even a comforting listener. But questions are beginning to emerge about the emotional bonds users form with it. A recent LinkedIn post has reignited the debate around AI overuse.

Simrann M Bhambani, a marketing professional at Flipkart, publicly shared her decision to delete ChatGPT from her devices. In a post titled ‘ChatGPT is TOXIC! (for me)’, she described how casual interaction escalated into emotional dependence. The platform began to resemble a digital therapist.

Bhambani admitted to confiding every minor frustration and emotional spiral to the chatbot. Its constant availability and non-judgemental replies gave her a false sense of security. Even with supportive friends, she felt drawn to the machine’s quiet reliability.

What began as curiosity turned into compulsion. She found herself spending hours feeding the bot intrusive thoughts and endless questions. ‘I gave my energy to something that wasn’t even real,’ she wrote. The experience brought more confusion than clarity.

Rather than offering mental relief, the chatbot fuelled her overthinking. The emotional noise grew louder, eventually becoming overwhelming. She realised that the problem wasn’t the technology itself, but how it quietly replaced self-reflection.

Deleting the app marked a turning point. Bhambani described the decision as a way to reclaim mental space and reduce digital clutter. She warned others that AI tools, while useful, can easily replace human habits and emotional processing if left unchecked.

Many users may not notice such patterns until they are deeply entrenched. AI chatbots are designed to be helpful and responsive, but they lack the nuance and care of human conversation. Their steady presence can foster a deceptive sense of intimacy.

People increasingly rely on digital tools to navigate their daily emotions, often without understanding the consequences. Some may find themselves withdrawing from human relationships or journalling less often. Emotional outsourcing to machines can significantly change how people process personal experiences.

Industry experts have warned about the risks of emotional reliance on generative AI. Chatbots are known to produce inaccurate or hallucinated responses, especially when asked to provide personal advice. Sole dependence on such tools can lead to misinformation or emotional confusion.

Companies like OpenAI have stressed that ChatGPT is not a substitute for professional mental health support. While the bot is trained to provide helpful and empathetic responses, it cannot replace human judgement or real-world relationships. Boundaries are essential.

Mental health professionals also caution against using AI as an emotional crutch. Reflection and self-awareness take time and require discomfort, which AI often smooths over. The convenience can dull long-term growth and self-understanding.

Bhambani’s story has resonated with many who have quietly developed similar habits. Her openness has sparked important discussions on emotional hygiene in the age of AI. More users are starting to reflect on their relationship with digital tools.

Social media platforms are also witnessing an increased number of posts about AI fatigue and cognitive overload. People are beginning to question how constant access to information and feedback affects emotional well-being. There is growing awareness around the need for balance.

AI is expected to become even more integrated into daily life, from virtual assistants to therapy bots. Recognising the line between convenience and dependency will be key. Tools are meant to serve, not dominate, personal reflection.

Developers and users alike must remain mindful of how often and why they turn to AI. Chatbots can complement human support systems, but they are not replacements. Bhambani’s experience serves as a cautionary tale in the age of machine intimacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba reveals Quark AI glasses to rival Meta and Xiaomi

Alibaba entered the wearable tech scene at the World Artificial Intelligence Conference in Shanghai by unveiling its first smart glasses, Quark AI Glasses, powered by its proprietary Qwen large language model and the Quark assistant.

The glasses are designed for professional and consumer use and feature hands-free calling, live transcription and translation, music playback, and a built-in camera.

The AR-type eyewear runs on a dual-chip platform, featuring Qualcomm’s Snapdragon AR1 and a dedicated low-power chip. It uses a hybrid operating system setup to balance interactivity and battery life.

Integration with Alibaba’s ecosystem lets users navigate via Amap’s near-eye maps, scan Taobao products for price comparison, make purchases via Alipay, and receive notifications from Ali platforms—all through voice and gesture commands.

Set for release in China by the end of 2025, Quark AI Glasses aim to compete directly with Meta’s Ray-Ban smart eyewear and Xiaomi’s AI glasses.

While product pricing and global availability remain unannounced, Alibaba’s ecosystem depth and hardware‑software integration signal a strategic push into wearable intelligence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI startup Daydream revolutionises online fashion search

Online shopping for specific items like bridesmaid dresses can be challenging due to overwhelming choices. A new tech startup, Daydream, aims to simplify this. It uses AI to let users search for products by describing them in natural language, making the process easier and more intuitive.

For instance, a user could ask for a ‘revenge dress to wear to a party in Sicily in July,’ or ‘a summer bag to carry to work and cocktails after.’

Daydream, with staff based in New York and San Francisco, represents the latest venture in a growing trend of tech companies utilising AI to streamline and personalise online retail.

Consumer demand for such tools is evident: an Adobe Analytics survey of 5,000 US consumers revealed that 39% had used a generative AI tool for online shopping last year, with 53% planning to do so this year. Daydream faces competition from tech giants already active in this space.

Meta employs AI to facilitate seller listings and to target users with more relevant product advertisements. OpenAI has launched an AI agent capable of shopping across the web for users, and Amazon is trialling a similar feature.

Google has also introduced various AI shopping tools, including automated price tracking, a ‘circle to search’ function for identifying products in photos, and virtual try-on options for clothing.

Despite the formidable competition, Daydream’s CEO, Julie Bornstein, believes her company possesses a deeper understanding of the fashion and retail industries.

Bornstein’s extensive background includes helping build Nordstrom’s website as its vice president of e-commerce in the early 2000s and holding C-suite positions at Sephora and Stitch Fix. In 2018, she co-founded her first AI-powered shopping startup, The Yes, which was sold to Pinterest in 2022.

Bornstein asserts, ‘They don’t have the people, the mindset, the passion to do what needs to be done to make a category like fashion work for AI recommendations.’ She added, ‘Because I’ve been in this space my whole career, I know that having the catalogue with everything and being able to show the right person the right stuff makes shopping easier.’

Daydream has already secured $50 million in its initial funding round, attracting investors such as Google Ventures and model Karlie Kloss, founder of Kode With Klossy. The platform operates as a free, digital personal stylist.

Users can describe the products they want in natural language, with the platform’s AI text recognition removing the need for complex Boolean search terms, or they can upload an inspiration photo.

Daydream then presents recommendations from over 8,000 brand partners, ranging from budget-friendly Uniqlo to luxury brand Gucci. Users can further refine their search through a chat interface, for example, by requesting more casual or less expensive alternatives.

As users interact more with the platform, it progressively tailors recommendations based on their search history, clicks, and saved items.
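Daydream has not published its architecture, but natural-language product search of this kind is typically built on text embeddings: the query and each catalogue entry are mapped into a shared vector space and ranked by similarity. A minimal sketch under that assumption (the embedding model and catalogue entries below are illustrative, not Daydream’s):

```python
# Illustrative sketch only: Daydream's actual stack is unpublished.
# Assumes a generic sentence-embedding model; the catalogue entries are invented.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any text-embedding model would do

catalogue = [
    "white fitted cotton button-up shirt, no pockets, office wear",
    "red satin slip dress, going-out, summer party",
    "beige linen wide-leg trousers, resort wear",
]

def search(query: str, top_k: int = 2):
    # Embed the free-text query and every catalogue description,
    # then rank products by cosine similarity.
    q_emb = model.encode(query, convert_to_tensor=True)
    c_emb = model.encode(catalogue, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, c_emb)[0]
    top = scores.argsort(descending=True)[:top_k]
    return [(catalogue[int(i)], float(scores[i])) for i in top]

print(search("a revenge dress to wear to a party in Sicily in July"))
```

In a production system the personalisation described above would typically be layered on top of this, re-weighting similarity scores using the shopper’s clicks and saved items.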

When customers are ready to purchase, they are redirected to the respective brand’s website to complete the transaction, with Daydream receiving a 20% commission on the sale.

Unlike many other major e-commerce players, Bornstein is deliberately avoiding ad-based rankings. She aims for products to appear on recommendation pages purely because they are a suitable match for the customer, not due to paid placements.

Bornstein stated, ‘As soon as Amazon started doing paid sponsorships, I’m like, “How can I find the real good product?”’ She emphasised, ‘We want this to be a thing where we get paid when we show the customer the right thing.’

A recent CNN test of Daydream yielded mixed results. A search for a ‘white, fitted button-up shirt for the office with no pockets’ successfully returned a $145 cotton long-sleeve shirt from Theory that perfectly matched the description.

However, recommendations are not always flawless. A query for a ‘mother of the bride dress for a summer wedding in California’ presented several slinky slip dresses, some in white, alongside more formal styles, appearing more suitable for a bachelorette party.

Bornstein confirmed that the company continuously refines its AI models and gathers user feedback. She noted, ‘We want data on what people are doing so we can focus and learn where we do well and where we don’t.’

Part of this ongoing development involves training the AI to understand nuanced contextual cues, such as the implications of a ‘dress for a trip to Greece in August’ (suggesting hot weather) or an outfit for a ‘black-tie wedding’ (implying formality).

Daydream’s web version launched publicly last month, and it is currently in beta testing, with plans for an app release in the autumn. Bornstein envisions a future where AI extends beyond shopping, assisting with broader fashion needs like pairing new purchases with existing wardrobe items.

She concluded, ‘This was one of my earliest ideas, but I didn’t know the term (generative AI) and I didn’t know a large language model would be the unlock.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Viasat launches global IoT satellite service

Viasat has unveiled a new global connectivity service designed to improve satellite-powered internet of things (IoT) communication, even in remote environments. The new offering, IoT Nano, supports industries such as agriculture, mining, and transport with reliable, low-data, low-power two-way messaging.

The service builds on Orbcomm’s upgraded OGx platform, delivering faster message speeds, greater data capacity and improved energy efficiency. It maintains compatibility with older systems while allowing for advanced use cases through larger messages and reduced power needs.

Executives at Viasat and Orbcomm believe IoT Nano opens up new opportunities by combining wider satellite coverage with smarter, more frequent data delivery. The service is part of Viasat’s broader effort to expand its scalable and energy-efficient satellite IoT portfolio.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Guess AI model sparks fashion world debate

A striking new ‘supermodel’ has appeared in the August print edition of Vogue, featuring in a Guess advert for their summer collection. Uniquely, the flawless blonde model is not real, as a small disclaimer reveals she was created using AI.

While Vogue clarifies the AI model’s inclusion was an advertising decision, not editorial, it marks a significant first for the magazine and has ignited widespread controversy.

The development raises serious questions for real models, who have long campaigned for greater diversity, and for consumers, particularly young people, who are already grappling with unrealistic beauty standards.

Seraphinne Vallora, the company behind the controversial Guess advert, was founded by Valentina Gonzalez and Andreea Petrescu. They told the BBC that Guess’s co-founder, Paul Marciano, approached them on Instagram to create an AI model for the brand’s summer campaign.

Valentina Gonzalez explained, ‘We created 10 draft models for him and he selected one brunette woman and one blonde that we developed further.’ Petrescu described AI image generation as a complex process, with their five employees taking up to a month to create a finished product, charging clients like Guess up to the low six figures.

However, plus-size model Felicity Hayward, with over a decade in the industry, criticised the use of AI models, stating it ‘feels lazy and cheap’ and worried it could ‘undermine years of work towards more diversity in the industry.’

Hayward believes the fashion industry, which saw strides in inclusivity in the 2010s, has regressed, leading to fewer bookings for diverse models. She warned, ‘The use of AI models is another kick in the teeth that will disproportionately affect plus-size models.’

Gonzalez and Petrescu insist they do not reinforce narrow beauty standards, with Petrescu claiming, ‘We don’t create unattainable looks – the AI model for Guess looks quite realistic.’ They contended, ‘Ultimately, all adverts are created to look perfect and usually have supermodels in, so what we do is no different.’

While admitting their company’s Instagram shows a lack of diversity, Gonzalez explained to the BBC that attempts to post AI images of women with different skin tones did not gain traction, stating, ‘people do not respond to them – we don’t get any traction or likes.’

They also noted that the technology is not yet advanced enough to create plus-size AI women. The claim echoes a 2024 Dove campaign that highlighted AI bias by showing image generators consistently producing thin, white, blonde women when asked for ‘the most beautiful woman in the world.’

Vanessa Longley, CEO of eating disorder charity Beat, found the advert ‘worrying,’ telling the BBC, ‘If people are exposed to images of unrealistic bodies, it can affect their thoughts about their own body, and poor body image increases the risk of developing an eating disorder.’

The lack of transparent labelling for AI-generated content in the UK is also a concern, despite Guess including a small disclaimer. Sinead Bovell, a former model and now tech entrepreneur, told the BBC that not clearly labelling AI content is ‘exceptionally problematic’ because ‘AI is already influencing beauty standards.’

Sara Ziff, a former model and founder of the Model Alliance, views Guess’s campaign as ‘less about innovation and more about desperation and need to cut costs,’ advocating for ‘meaningful protections for workers’ in the industry.

Seraphinne Vallora, however, denies replacing models, with Petrescu explaining, ‘We’re offering companies another choice in how they market a product.’

Despite their website claiming cost-efficiency by ‘eliminating the need for expensive set-ups… hiring models,’ the founders say real models and photographers are involved in their AI creation process. Vogue’s decision to run the advert has drawn criticism on social media, with Bovell noting that the magazine’s influential position means it is ‘in some way ruling it as acceptable.’

Looking ahead, Bovell predicts more AI-generated models but not their total dominance, foreseeing a future where individuals might create personal AI avatars to try on clothes and a potential ‘society opting out’ if AI models become too unattainable.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK to retaliate against cyber attacks, minister warns

Britain’s security minister has warned that hackers targeting UK institutions will face consequences, including potential retaliatory cyber operations.

Speaking to POLITICO at the British Library — still recovering from a 2023 ransomware attack by Rhysida — Security Minister Dan Jarvis said the UK is prepared to use offensive cyber capabilities to respond to threats.

‘If you are a cybercriminal and think you can attack a UK-based institution without repercussions, think again,’ Jarvis stated. He emphasised the importance of sending a clear signal that hostile activity will not go unanswered.

The warning follows a recent government decision to ban ransom payments by public sector bodies. Jarvis said deterrence must be matched by vigorous enforcement.

The UK has acknowledged its offensive cyber capabilities for over a decade, but recent strategic shifts have expanded their role. A £1 billion investment in a new Cyber and Electromagnetic Command will support coordinated action alongside the National Cyber Force.

While Jarvis declined to specify technical capabilities, he cited the National Crime Agency’s role in disrupting the LockBit ransomware group as an example of the UK’s growing offensive posture.

AI is accelerating both cyber threats and defensive measures. Jarvis said the UK must harness AI for national advantage, describing an ‘arms race’ amid rapid technological advancement.

Most cyber threats originate from Russia or its affiliated groups, though Iran, China, and North Korea remain active. The UK is also increasingly concerned about ‘hack-for-hire’ actors operating from friendly nations, including India.

Despite these concerns, Jarvis stressed the UK’s strong security ties with India and ongoing cooperation to curb cyber fraud. ‘We will continue to invest in that relationship for the long term,’ he said.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Altman warns AI voice cloning will break bank security

OpenAI CEO Sam Altman has warned that AI poses a serious threat to financial security through voice-based fraud.

Speaking at a Federal Reserve conference in Washington, Altman said AI can now convincingly mimic human voices, rendering voiceprint authentication obsolete and dangerously unreliable.

He expressed concern that some financial institutions still rely on voice recognition to verify identities. ‘That is a crazy thing to still be doing. AI has fully defeated that,’ he said. The risk, he noted, is that AI voice clones can now deceive these systems with ease.

Altman added that video impersonation capabilities are also advancing rapidly. As these technologies become indistinguishable from real people, they could enable more sophisticated fraud schemes. He called for the urgent development of new verification methods across the industry.

Michelle Bowman, the Fed’s Vice Chair for Supervision, echoed the need for action. She proposed potential collaboration between AI developers and regulators to create better safeguards. ‘That might be something we can think about partnering on,’ Bowman told Altman.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

5G Advanced lays the groundwork for 6G, says 5G Americas

5G Americas has released a new white paper outlining how 5G Advanced features in 3GPP Releases 18 to 20 are shaping the path to 6G.

The report highlights how 5G Advanced is evolving mobile networks through embedded AI, scaled IoT, improved energy efficiency, and broader service capabilities. Viet Nguyen, President of 5G Americas, called it a turning point for wireless systems, offering more intelligent, resilient, and sustainable connectivity.

AI-native networking is a key innovation, bringing machine learning into the radio and core network. It enables zero-touch automation, predictive maintenance, and self-organising systems, reducing fault-detection times by 90% and false alarms by 70%.
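As a rough illustration of the predictive-maintenance idea (not taken from the white paper), an AI-native network function might watch a per-cell KPI and raise a ticket when it drifts from its recent baseline, before a hard fault occurs. The KPI, window, and threshold below are assumptions made for the sketch:

```python
import numpy as np

def anomaly_scores(kpi: np.ndarray, window: int = 24) -> np.ndarray:
    """Rolling z-score of a per-cell KPI (one sample per hour, e.g. packet-error rate)."""
    scores = np.zeros(len(kpi))
    for t in range(window, len(kpi)):
        baseline = kpi[t - window:t]
        mu, sigma = baseline.mean(), baseline.std() + 1e-9
        scores[t] = (kpi[t] - mu) / sigma
    return scores

# Synthetic trace: three stable days, then a slow degradation on day four.
rng = np.random.default_rng(0)
trace = np.concatenate([
    rng.normal(0.5, 0.05, 72),
    rng.normal(0.5, 0.05, 24) + np.linspace(0.0, 0.4, 24),
])
alerts = np.where(anomaly_scores(trace) > 3.0)[0]
print("raise a maintenance ticket at hours:", alerts)
```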

Energy efficiency is another core benefit. Features like cell sleep modes and antenna switching can reduce energy use by up to 56%. Ambient IoT is also advancing, enabling battery-less devices for industrial and consumer use in energy-constrained environments.

Latency improvements like L4S and enhanced QoS allow scalable support for immersive XR and real-time automation. Advances in spectral efficiency and satellite support are boosting uplink speeds above 500 Mbps and expanding coverage to remote areas.

Andrea Brambilla of Nokia noted that 5G Advanced supports digital twins, private networks, and AI-driven transformation. Pei Hou of T-Mobile said it builds on 5G Standalone to prepare for a sustainable shift to 6G.

The paper urges updated policies on AI governance, spectrum sharing, and IoT standards to ensure global interoperability. Strategic takeaways include AI, automation, and energy savings as key to long-term innovation and monetisation across the public and private sectors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How agentic AI is transforming cybersecurity

Cybersecurity is gaining a new teammate, one that never sleeps and acts independently. Agentic AI doesn’t wait for instructions. It detects threats, investigates, and responds in real time. This new class of AI is beginning to change the way we approach cyber defence.

Unlike traditional AI systems, Agentic AI operates with autonomy. It sets objectives, adapts to environments, and self-corrects without waiting for human input. In cybersecurity, this means instant detection and response, beyond simple automation.

With networks more complex than ever, security teams are stretched thin. Agentic AI offers relief by executing actions like isolating compromised systems or rewriting firewall rules. This technology promises to ease alert fatigue and keep up with evasive threats.
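As a rough sketch of what such an autonomous detect-investigate-respond loop can look like (the alert fields, risk thresholds, and response actions are illustrative assumptions, not any vendor’s API):

```python
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    indicator: str
    risk: float  # 0.0-1.0, from an upstream detection model

def investigate(alert: Alert) -> float:
    # Enrichment step: threat intel, asset criticality, recent behaviour.
    return min(1.0, alert.risk + (0.2 if "ransom" in alert.indicator else 0.0))

def respond(alert: Alert, risk: float) -> str:
    if risk >= 0.8:
        return f"isolate {alert.host} and block its outbound traffic"  # autonomous action
    if risk >= 0.5:
        return f"escalate {alert.host} to a human analyst"             # human in the loop
    return f"log {alert.host} and keep monitoring"

for alert in [Alert("srv-01", "ransom-note.txt written to share", 0.7),
              Alert("wks-12", "login at unusual hour", 0.3)]:
    print(respond(alert, investigate(alert)))
```

A production agent would call real EDR and firewall APIs and, as the following paragraphs note, keep human approval in the loop for high-impact actions.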

A 2025 Deloitte report says 25% of GenAI-using firms will pilot Agentic AI this year. SailPoint found that 98% of organisations will expand AI agent use in the next 12 months. But rapid adoption also raises concern—96% of tech workers see AI agents as security risks.

The integration of AI agents is expanding to cloud, endpoints, and even physical security. Yet with new power come new vulnerabilities, from adversaries mimicking AI behaviour to the risk of excessive automation without human checks.

Key challenges include ethical bias, unpredictable errors, and uncertain regulation. In sectors like healthcare and finance, oversight and governance must keep pace. The solution lies in balanced control and continuous human-AI collaboration.

Cybersecurity careers are shifting in response. Hybrid roles such as AI Security Analysts and Threat Intelligence Automation Architects are emerging. To stay relevant, professionals must bridge AI knowledge with security architecture.

Agentic AI is redefining cybersecurity. It boosts speed and intelligence but demands new skills and strong leadership. Adaptation is essential for those who wish to thrive in tomorrow’s AI-driven security landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SatanLock ends operation amid ransomware ecosystem turmoil

SatanLock, a ransomware group active since April 2025, has announced it is shutting down. The group quickly gained notoriety, claiming 67 victims on its now-defunct dark web leak site.

Cybersecurity firm Check Point says more than 65% of these victims had already appeared on other ransomware leak pages. The overlap suggests the group may have used shared infrastructure or tried to hijack previously compromised networks.

Such tactics reflect growing disorder within the ransomware ecosystem, where victim double-posting is on the rise. SatanLock may have been part of a broader criminal network, as it shares ties with families like Babuk-Bjorka and GD Lockersec.

A shutdown message was posted on the gang’s Telegram channel and leak page, announcing plans to leak all stolen data. The reason for the sudden closure has not been disclosed.

Another group, Hunters International, announced its disbandment just days earlier.

Unlike SatanLock, Hunters offered free decryption keys to its victims in a parting gesture.

These back-to-back exits signal possible pressure from law enforcement, rivals, or internal collapse in the ransomware world. Analysts are watching closely to see whether this trend continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!