Weak cyber hygiene in smart devices risks national infrastructure

The UK’s designation of data centres as Critical National Infrastructure highlights their growing strategic importance, yet pressing concerns remain over vulnerabilities in their operational technology (OT) and internet of things (IoT) systems. While IT security often receives significant investment, the same cannot be said for these operational systems.

Attackers increasingly target these overlooked systems, gaining access through insecure devices such as IP cameras and biometric scanners. Many of these operate on outdated firmware and lack even basic protections, making them ideal footholds for malicious actors.

Known breaches have already occurred, with compromised OT systems conscripted into botnets and cryptomining operations, often without detection. These attacks not only compromise security in the UK but can destabilise infrastructure by overloading resources or bypassing safeguards.

Addressing these threats requires full visibility across all connected systems, with real-time monitoring, wireless traffic analysis, and network segmentation. Experts urge data centre operators to act now, not in response to a breach, but to prevent one entirely.
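As a concrete, purely illustrative example of the device-visibility piece: the sketch below passively watches a segmented OT network and flags hardware addresses that are not in an approved inventory. The interface name and MAC list are hypothetical, and a real deployment would rely on dedicated monitoring tooling rather than a script like this.

```python
# Minimal sketch of passive device discovery on an OT/IoT network segment.
# Requires scapy and capture privileges; interface and inventory are invented.
from scapy.all import ARP, sniff

KNOWN_DEVICES = {
    "aa:bb:cc:dd:ee:01",  # approved IP camera
    "aa:bb:cc:dd:ee:02",  # approved biometric scanner
}

def flag_unknown(pkt):
    """Alert when a MAC address outside the approved inventory appears."""
    if pkt.haslayer(ARP):
        mac = pkt[ARP].hwsrc.lower()
        if mac not in KNOWN_DEVICES:
            print(f"ALERT: unapproved device {mac} at {pkt[ARP].psrc}")

# Watch ARP traffic on the segmented OT interface (name assumed).
sniff(iface="ot0", filter="arp", prn=flag_unknown, store=False)
```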

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta bets on smartglasses to lead future tech

Mark Zuckerberg is boldly pushing to replace the smartphone with smartglasses powered by superintelligent AI. The Meta CEO described a future where wearable devices replace phones, using sight and sound to assist users throughout the day.

Meta is investing heavily, offering up to $100 million to attract top AI talent. Zuckerberg’s idea of ‘personal superintelligence’ merges AI and hardware to offer personalised help and build an Apple-style ecosystem under Meta’s control.

The company’s smartglasses already feature cameras, microphones and speakers, and future models could include built-in screens and AI-generated interfaces.

Other major players are also chasing the next computing shift. Amazon is acquiring a startup that builds AI wearables, while OpenAI’s Sam Altman and former Apple designer Jony Ive are working on a new physical AI device.

These efforts all point to a changing landscape in which mobile screens might no longer dominate.

Apple CEO Tim Cook responded by defending the iPhone’s central role in modern life, though he acknowledged complementary technologies may emerge. While Apple remains dominant, Meta’s advances signal that the competition to define the next computing platform is wide open.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act oversight and fines begin this August

A new phase of the EU AI Act takes effect on 2 August, requiring member states to appoint oversight authorities and enforce penalties. While the legislation has been in force for a year, this marks the beginning of real scrutiny for AI providers across Europe.

Under the new provisions, countries must notify the European Commission of which market surveillance authorities will monitor compliance. But many are expected to miss the deadline. Experts warn that without well-resourced and competent regulators, the risks to rights and safety could grow.

The complexity is significant. Member states must align enforcement with other regulations, such as the GDPR and Digital Services Act, raising concerns regarding legal fragmentation and inconsistent application. Some fear a repeat of the patchy enforcement seen under data protection laws.

Companies that violate the EU AI Act could face fines of up to €35 million or 7% of global turnover. Smaller firms may face reduced penalties, but enforcement will vary by country.
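Under the Act’s penalty provisions, the ‘or’ resolves to whichever amount is higher. A minimal sketch of that cap, with the turnover figures invented for illustration:

```python
# Sketch of the AI Act's headline fine cap for the most serious violations:
# the higher of EUR 35 million or 7% of worldwide annual turnover.
def max_fine_eur(annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * annual_turnover_eur)

print(f"{max_fine_eur(1_000_000_000):,.0f}")  # EUR 1bn turnover -> 70,000,000
print(f"{max_fine_eur(100_000_000):,.0f}")    # EUR 100m turnover -> 35,000,000
```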

Rules regarding general-purpose AI models such as ChatGPT, Gemini, and Grok also take effect. A voluntary Code of Practice introduced in July aims to guide compliance, but only some firms, such as Google and OpenAI, have agreed to sign. Meta has refused, arguing the rules stifle innovation.

Existing AI tools have until 2027 to comply fully, but any launched after 2 August must meet the new requirements immediately. With implementation now underway, the AI Act is shifting from legislation to enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
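Google has not disclosed the model itself; as a rough illustration of how behavioural signals could feed an age-estimation classifier, here is a toy sketch in which every feature and training example is invented:

```python
# Toy age-estimation classifier; Google's real model is undisclosed and
# far more sophisticated. Features and data below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-user features: [share of kid-oriented video views,
# average session length in minutes, share of school-topic searches]
X_train = np.array([
    [0.80, 25, 0.60],   # labelled: under 18
    [0.70, 40, 0.50],   # labelled: under 18
    [0.05, 15, 0.02],   # labelled: adult
    [0.10, 30, 0.05],   # labelled: adult
])
y_train = np.array([1, 1, 0, 0])  # 1 = under 18

model = LogisticRegression().fit(X_train, y_train)

new_user = np.array([[0.65, 35, 0.40]])
p_minor = model.predict_proba(new_user)[0, 1]
if p_minor > 0.5:  # a real threshold would be tuned and audited
    print(f"Likely under 18 (p={p_minor:.2f}): apply protections")
```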

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection, and whether the measures strike the right balance between safety, privacy, and personal autonomy or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Children’s screen time debate heats up as experts question evidence

A growing number of scientists are questioning whether fears over children’s screen time are truly backed by evidence. While many parents worry about smartphones, social media, and gaming, experts say the science behind these concerns is often flawed or inconsistent.

Professor Pete Etchells of Bath Spa University and other researchers argue that common claims about screen time harming adolescent brains or causing depression lack strong evidence.

Much of the existing research relies on self-reported data and fails to account for critical factors like loneliness or the type of screen engagement.

One major study found no link between screen use and poor mental wellbeing, while others stress the importance of distinguishing between harmful content and positive online interaction.

Still, many campaigners and psychologists maintain that screen restrictions are vital. Groups such as Smartphone Free Childhood are pushing to delay access to smartphones and social media.

Others, like Professor Jean Twenge, say the risks of screen overuse—less sleep, reduced social time, and more time alone—create a ‘terrible formula for mental health.’

With unclear guidance and evolving science, parents face tough choices in a rapidly changing tech world. As screens spread further through AI, smart glasses, and virtual communities, the focus shifts to how children can use technology wisely and safely.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Flipkart employee deletes ChatGPT over emotional dependency

ChatGPT has become an everyday tool for many, serving as a homework partner, a research aid, and even a comforting listener. But questions are beginning to emerge about the emotional bonds users form with it. A recent LinkedIn post has reignited the debate around AI overuse.

Simrann M Bhambani, a marketing professional at Flipkart, publicly shared her decision to delete ChatGPT from her devices. In a post titled ‘ChatGPT is TOXIC! (for me)’, she described how casual interaction escalated into emotional dependence. The platform began to resemble a digital therapist.

Bhambani admitted to confiding every minor frustration and emotional spiral to the chatbot. Its constant availability and non-judgemental replies gave her a false sense of security. Even with supportive friends, she felt drawn to the machine’s quiet reliability.

What began as curiosity turned into compulsion. She found herself spending hours feeding the bot intrusive thoughts and endless questions. ‘I gave my energy to something that wasn’t even real,’ she wrote. The experience brought more confusion than clarity.

Rather than offering mental relief, the chatbot fuelled her overthinking. The emotional noise grew louder, eventually becoming overwhelming. She realised that the problem wasn’t the technology itself, but how it quietly replaced self-reflection.

Deleting the app marked a turning point. Bhambani described the decision as a way to reclaim mental space and reduce digital clutter. She warned others that AI tools, while useful, can easily replace human habits and emotional processing if left unchecked.

Many users may not notice such patterns until they are deeply entrenched. AI chatbots are designed to be helpful and responsive, but they lack the nuance and care of human conversation. Their steady presence can foster a deceptive sense of intimacy.

People increasingly rely on digital tools to navigate their daily emotions, often without understanding the consequences. Some may find themselves withdrawing from human relationships or journalling less often. Emotional outsourcing to machines can significantly change how people process personal experiences.

Industry experts have warned about the risks of emotional reliance on generative AI. Chatbots are known to produce inaccurate or hallucinated responses, especially when asked to provide personal advice. Sole dependence on such tools can lead to misinformation or emotional confusion.

Companies like OpenAI have stressed that ChatGPT is not a substitute for professional mental health support. While the bot is trained to provide helpful and empathetic responses, it cannot replace human judgement or real-world relationships. Boundaries are essential.

Mental health professionals also caution against using AI as an emotional crutch. Reflection and self-awareness take time and require discomfort, which AI often smooths over. The convenience can dull long-term growth and self-understanding.

Bhambani’s story has resonated with many who have quietly developed similar habits. Her openness has sparked important discussions on emotional hygiene in the age of AI. More users are starting to reflect on their relationship with digital tools.

Social media platforms are also witnessing an increased number of posts about AI fatigue and cognitive overload. People are beginning to question how constant access to information and feedback affects emotional well-being. There is growing awareness around the need for balance.

AI is expected to become even more integrated into daily life, from virtual assistants to therapy bots. Recognising the line between convenience and dependency will be key. Tools are meant to serve, not dominate, personal reflection.

Developers and users alike must remain mindful of how often and why they turn to AI. Chatbots can complement human support systems, but they are not replacements. Bhambani’s experience serves as a cautionary tale in the age of machine intimacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba reveals Quark AI glasses to rival Meta and Xiaomi

Alibaba entered the wearable tech scene at the World Artificial Intelligence Conference in Shanghai by unveiling its first smart glasses, Quark AI Glasses, powered by its proprietary Qwen large language model and the Quark assistant.

The glasses are designed for professional and consumer use and feature hands-free calling, live transcription and translation, music playback, and a built-in camera.

The AR-type eyewear runs on a dual-chip platform, featuring Qualcomm’s Snapdragon AR1 and a dedicated low-power chip. It uses a hybrid operating system setup to balance interactivity and battery life.

Integration with Alibaba’s ecosystem lets users navigate via Amap’s near-eye maps, scan Taobao products for price comparison, make purchases via Alipay, and receive notifications from Ali platforms—all through voice and gesture commands.

Set for release in China by the end of 2025, Quark AI Glasses aim to compete directly with Meta’s Ray-Ban smart eyewear and Xiaomi’s AI glasses.

While product pricing and global availability remain unannounced, Alibaba’s ecosystem depth and hardware-software integration signal a strategic push into wearable intelligence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI startup Daydream revolutionises online fashion search

Online shopping for specific items like bridesmaid dresses can be challenging due to overwhelming choices. A new tech startup, Daydream, aims to simplify this. It uses AI to let users search for products by describing them in natural language, making the process easier and more intuitive.

For instance, a user could ask for a ‘revenge dress to wear to a party in Sicily in July,’ or ‘a summer bag to carry to work and cocktails after.’

Daydream, with staff based in New York and San Francisco, represents the latest venture in a growing trend of tech companies utilising AI to streamline and personalise online retail.

Consumer demand for such tools is evident: an Adobe Analytics survey of 5,000 US consumers revealed that 39% had used a generative AI tool for online shopping last year, with 53% planning to do so this year. Daydream faces competition from tech giants already active in this space.

Meta employs AI to facilitate seller listings and to target users with more relevant product advertisements. OpenAI has launched an AI agent capable of shopping across the web for users, and Amazon is trialling a similar feature.

Google has also introduced various AI shopping tools, including automated price tracking, a ‘circle to search’ function for identifying products in photos, and virtual try-on options for clothing.

Despite the formidable competition, Daydream’s CEO, Julie Bornstein, believes her company possesses a deeper understanding of the fashion and retail industries.

Bornstein’s extensive background includes helping build Nordstrom’s website as its vice president of e-commerce in the early 2000s and holding C-suite positions at Sephora and Stitch Fix. In 2018, she co-founded her first AI-powered shopping startup, The Yes, which was sold to Pinterest in 2022.

Bornstein asserts, ‘They don’t have the people, the mindset, the passion to do what needs to be done to make a category like fashion work for AI recommendations.’ She added, ‘Because I’ve been in this space my whole career, I know that having the catalogue with everything and being able to show the right person the right stuff makes shopping easier.’

Daydream has already secured $50 million in its initial funding round, attracting investors such as Google Ventures and model Karlie Kloss, founder of Kode With Klossy. The platform operates as a free, digital personal stylist.

Thanks to its AI text-recognition technology, users can describe the products they want in natural language, with no need for complex Boolean search terms, or they can upload an inspiration photo.

Daydream then presents recommendations from over 8,000 brand partners, ranging from budget-friendly Uniqlo to luxury brand Gucci. Users can further refine their search through a chat interface, for example, by requesting more casual or less expensive alternatives.
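Daydream has not published its architecture, but natural-language catalogue search of this kind is commonly built on text embeddings. A minimal, purely illustrative sketch, with the model choice and catalogue entries assumed for demonstration:

```python
# Illustrative embedding-based product search; Daydream's actual stack is
# not public, and the model and catalogue entries here are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

catalogue = [
    "white fitted cotton button-up shirt, office wear, no pockets",
    "red satin slip dress, evening party",
    "linen midi dress, breathable, warm-weather holiday",
    "structured leather tote, fits a laptop, day-to-night",
]
catalogue_emb = model.encode(catalogue, convert_to_tensor=True)

query = "a summer bag to carry to work and cocktails after"
query_emb = model.encode(query, convert_to_tensor=True)

# Rank catalogue items by cosine similarity to the query description.
scores = util.cos_sim(query_emb, catalogue_emb)[0]
best = int(scores.argmax())
print(f"Top match: {catalogue[best]} (score {scores[best].item():.2f})")
```

A production system would embed millions of items offline, serve lookups from a vector index, and fold chat refinements back into the query.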

As users interact more with the platform, it progressively tailors recommendations based on their search history, clicks, and saved items.

When customers are ready to purchase, they are redirected to the respective brand’s website to complete the transaction, with Daydream receiving a 20% commission on the sale.

Unlike many other major e-commerce players, Bornstein is deliberately avoiding ad-based rankings. She aims for products to appear on recommendation pages purely because they are a suitable match for the customer, not due to paid placements.

Bornstein stated, ‘As soon as Amazon started doing paid sponsorships, I’m like, “How can I find the real good product?”’ She emphasised, ‘We want this to be a thing where we get paid when we show the customer the right thing.’

A recent CNN test of Daydream yielded mixed results. A search for a ‘white, fitted button-up shirt for the office with no pockets’ successfully returned a $145 cotton long-sleeve shirt from Theory that perfectly matched the description.

However, recommendations are not always flawless. A query for a ‘mother of the bride dress for a summer wedding in California’ presented several slinky slip dresses, some in white, alongside more formal styles, appearing more suitable for a bachelorette party.

Bornstein confirmed that the company continuously refines its AI models and gathers user feedback. She noted, ‘We want data on what people are doing so we can focus and learn where we do well and where we don’t.’

Part of this ongoing development involves training the AI to understand nuanced contextual cues, such as the implications of a ‘dress for a trip to Greece in August’ (suggesting hot weather) or an outfit for a ‘black-tie wedding’ (implying formality).
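As a simplified sketch of that kind of inference: the real system presumably learns such cues from data, whereas the rules and mappings below are hand-written purely for illustration.

```python
# Toy illustration of inferring implicit attributes from a shopping query.
# Daydream's actual approach is not public; these rules are invented.
HOT_DESTINATION_MONTHS = {"greece": {"june", "july", "august"}}
FORMAL_CUES = ("black-tie", "black tie", "gala")

def infer_attributes(query: str) -> dict:
    """Derive climate and formality hints implied, not stated, by the query."""
    q = query.lower()
    attrs = {}
    for place, months in HOT_DESTINATION_MONTHS.items():
        if place in q and any(m in q for m in months):
            attrs["climate"] = "hot"       # e.g. Greece in August
    if any(cue in q for cue in FORMAL_CUES):
        attrs["formality"] = "formal"      # black-tie implies formal wear
    return attrs

print(infer_attributes("dress for a trip to Greece in August"))
# {'climate': 'hot'}
print(infer_attributes("outfit for a black-tie wedding"))
# {'formality': 'formal'}
```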

Daydream’s web version launched publicly last month, and it is currently in beta testing, with plans for an app release in the autumn. Bornstein envisions a future where AI extends beyond shopping, assisting with broader fashion needs like pairing new purchases with existing wardrobe items.

She concluded, ‘This was one of my earliest ideas, but I didn’t know the term (generative AI) and I didn’t know a large language model would be the unlock.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Viasat launches global IoT satellite service

Viasat has unveiled a new global connectivity service designed to improve satellite-powered internet of things (IoT) communication, even in remote environments. The new offering, IoT Nano, supports industries such as agriculture, mining, and transport with reliable, low-data, low-power two-way messaging.

The service builds on Orbcomm’s upgraded OGx platform, delivering faster message speeds, greater data capacity and improved energy efficiency. It maintains compatibility with older systems while allowing for advanced use cases through larger messages and reduced power needs.

Executives at Viasat and Orbcomm believe IoT Nano opens up new opportunities by combining wider satellite coverage with smarter, more frequent data delivery. The service is part of Viasat’s broader effort to expand its scalable and energy-efficient satellite IoT portfolio.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Guess AI model sparks fashion world debate

A striking new ‘supermodel’ has appeared in the August print edition of Vogue, featuring in a Guess advert for their summer collection. Uniquely, the flawless blonde model is not real, as a small disclaimer reveals she was created using AI.

While Vogue clarifies the AI model’s inclusion was an advertising decision, not editorial, it marks a significant first for the magazine and has ignited widespread controversy.

The development raises serious questions for real models, who have long campaigned for greater diversity, and for consumers, particularly young people, who are already grappling with unrealistic beauty standards.

Seraphinne Vallora, the company behind the controversial Guess advert, was founded by Valentina Gonzalez and Andreea Petrescu. They told the BBC that Guess’s co-founder, Paul Marciano, approached them on Instagram to create an AI model for the brand’s summer campaign.

Valentina Gonzalez explained, ‘We created 10 draft models for him and he selected one brunette woman and one blonde that we developed further.’ Petrescu described AI image generation as a complex process, with their five employees taking up to a month to create a finished product, charging clients like Guess up to the low six figures.

However, plus-size model Felicity Hayward, with over a decade in the industry, criticised the use of AI models, stating it ‘feels lazy and cheap’ and worried it could ‘undermine years of work towards more diversity in the industry.’

Hayward believes the fashion industry, which saw strides in inclusivity in the 2010s, has regressed, leading to fewer bookings for diverse models. She warned, ‘The use of AI models is another kick in the teeth that will disproportionately affect plus-size models.’

Gonzalez and Petrescu insist they do not reinforce narrow beauty standards, with Petrescu claiming, ‘We don’t create unattainable looks – the AI model for Guess looks quite realistic.’ They contended, ‘Ultimately, all adverts are created to look perfect and usually have supermodels in, so what we do is no different.’

While admitting their company’s Instagram shows a lack of diversity, Gonzalez explained to the BBC that attempts to post AI images of women with different skin tones did not gain traction, stating, ‘people do not respond to them – we don’t get any traction or likes.’

They also noted that the technology is not yet advanced enough to create plus-size AI women. The limitation echoes a 2024 Dove campaign that highlighted AI bias by showing image generators consistently producing thin, white, blonde women when asked for ‘the most beautiful woman in the world.’

Vanessa Longley, CEO of eating disorder charity Beat, found the advert ‘worrying,’ telling the BBC, ‘If people are exposed to images of unrealistic bodies, it can affect their thoughts about their own body, and poor body image increases the risk of developing an eating disorder.’

The lack of transparent labelling for AI-generated content in the UK is also a concern, despite Guess having included a small disclaimer. Sinead Bovell, a former model and now tech entrepreneur, told the BBC that not clearly labelling AI content is ‘exceptionally problematic’ because ‘AI is already influencing beauty standards.’

Sara Ziff, a former model and founder of the Model Alliance, views Guess’s campaign as ‘less about innovation and more about desperation and need to cut costs’, advocating for ‘meaningful protections for workers’ in the industry.

Seraphinne Vallora, however, denies replacing models, with Petrescu explaining, ‘We’re offering companies another choice in how they market a product.’

Despite their website claiming cost-efficiency by ‘eliminating the need for expensive set-ups… hiring models,’ they involve real models and photographers in their AI creation process. Vogue’s decision to run the advert has drawn criticism on social media, with Bovell noting that the magazine’s influential position means it is ‘in some way ruling it as acceptable.’

Looking ahead, Bovell predicts more AI-generated models but not their total dominance, foreseeing a future where individuals might create personal AI avatars to try on clothes and a potential ‘society opting out’ if AI models become too unattainable.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!