Chrome update brings AI shopping summaries to US users

Google has updated its Chrome browser to include AI-generated summaries of online stores, aimed at helping shoppers in the US make more informed buying decisions.

Instead of manually searching through reviews, users can now click an icon next to the web address to see a summary of a shop’s performance across key areas like product quality, pricing, returns, and customer service.

The feature is currently available only in English and is limited to desktop users.

The summaries are generated from a range of trusted review platforms, including Trustpilot, Bazaarvoice, Bizrate Insights, and others. Google says that the tool will offer a more efficient and secure online shopping experience.

It also helps the tech giant better compete with Amazon, which has already rolled out AI tools for product comparisons, fit suggestions, and ratings analysis. The move forms part of Google’s wider push to turn Chrome into a more powerful e-commerce assistant.

The company is also integrating AI tools like the Gemini assistant and developing agentic AI systems that can carry out tasks in the browser on a user’s behalf.

At the same time, Chrome faces fresh competition from AI-first browsers such as Perplexity’s Comet, Opera Neon, and a possible entry from OpenAI.

By adding AI-powered features directly into Chrome, Google hopes to future-proof its browser while strengthening its position in online retail.

As rivals begin to build intelligent browsers from the ground up, Google is reimagining how Chrome can serve users beyond simple search and browsing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbot captures veteran worker's knowledge to support UK care teams

Peterborough City Council has turned the knowledge of veteran therapy practitioner Geraldine Jinks into an AI chatbot to support adult social care workers.

With 35 years of experience, Jinks was frequently approached by colleagues seeking advice, which created time pressures despite her willingness to help.

In response, the council developed a digital assistant called Hey Geraldine, built on the My AskAI platform, that mimics her direct and friendly communication style to provide instant support to staff.

Developed in 2023, the chatbot offers practical answers to everyday care-related questions, such as how to support patients with memory issues or discharge planning. Jinks collaborated with the tech team to train the AI, writing all the responses herself to ensure consistency and clarity.

Thanks to its natural tone and humanlike advice, some colleagues even mistook the chatbot for the real Geraldine.

The council hopes Hey Geraldine will reduce hospital discharge delays and improve patient access to assistive technology. Councillor Shabina Qayyum, who also works as a GP, said the tool empowers staff to help patients regain independence instead of facing unnecessary delays.

The chatbot is seen as preserving valuable institutional knowledge while improving frontline efficiency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Allianz breach affects most US customers

Allianz Life has confirmed a major cyber breach that exposed sensitive data from most of its 1.4 million customers in North America.

The attack was traced back to 16 July, when a threat actor accessed a third-party cloud system using social engineering tactics.

The cybersecurity breach affected a customer relationship management platform but did not compromise the company’s core network or policy systems.

Allianz Life acted swiftly, notifying the FBI and other authorities, including the attorney general’s office in Maine.

Those affected are being offered two years of credit monitoring and identity theft protection. The company has begun contacting affected individuals but declined to reveal the full number involved, citing an ongoing investigation.

No other Allianz subsidiaries were affected by the breach. Allianz Life employs around 2,000 staff in the US and remains a key player within the global insurer’s North American operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Guess AI model sparks fashion world debate

A striking new ‘supermodel’ appears in the August print edition of Vogue, featuring in a Guess advert for its summer collection. The flawless blonde model, however, is not real: a small disclaimer reveals she was created using AI.

While Vogue clarifies the AI model’s inclusion was an advertising decision, not editorial, it marks a significant first for the magazine and has ignited widespread controversy.

The development raises serious questions for real models, who have long campaigned for greater diversity, and for consumers, particularly young people, who are already grappling with unrealistic beauty standards.

Seraphinne Vallora, the company behind the controversial Guess advert, was founded by Valentina Gonzalez and Andreea Petrescu. They told the BBC that Guess’s co-founder, Paul Marciano, approached them on Instagram to create an AI model for the brand’s summer campaign.

Valentina Gonzalez explained, ‘We created 10 draft models for him and he selected one brunette woman and one blonde that we developed further.’ Petrescu described AI image generation as a complex process, with their five employees taking up to a month to create a finished product, charging clients like Guess up to the low six figures.

However, plus-size model Felicity Hayward, with over a decade in the industry, criticised the use of AI models, stating it ‘feels lazy and cheap’ and worried it could ‘undermine years of work towards more diversity in the industry.’

Hayward believes the fashion industry, which saw strides in inclusivity in the 2010s, has regressed, leading to fewer bookings for diverse models. She warned, ‘The use of AI models is another kick in the teeth that will disproportionately affect plus-size models.’

Gonzalez and Petrescu insist they do not reinforce narrow beauty standards, with Petrescu claiming, ‘We don’t create unattainable looks – the AI model for Guess looks quite realistic.’ They contended, ‘Ultimately, all adverts are created to look perfect and usually have supermodels in, so what we do is no different.’

While admitting their company’s Instagram shows a lack of diversity, Gonzalez told the BBC that AI images of women with different skin tones have not taken off: ‘people do not respond to them – we don’t get any traction or likes.’

They also noted that the technology is not yet advanced enough to create plus-size AI women. The claim echoes a 2024 Dove campaign that highlighted AI bias by showing image generators consistently producing thin, white, blonde women when asked for ‘the most beautiful woman in the world.’

Vanessa Longley, CEO of eating disorder charity Beat, found the advert ‘worrying,’ telling the BBC, ‘If people are exposed to images of unrealistic bodies, it can affect their thoughts about their own body, and poor body image increases the risk of developing an eating disorder.’

The lack of transparent labelling for AI-generated content in the UK is also a concern, despite Guess including a small disclaimer. Sinead Bovell, a former model and now tech entrepreneur, told the BBC that not clearly labelling AI content is ‘exceptionally problematic’ because ‘AI is already influencing beauty standards.’

Sara Ziff, a former model and founder of the Model Alliance, views Guess’s campaign as ‘less about innovation and more about desperation and need to cut costs’, advocating for ‘meaningful protections for workers’ in the industry.

Seraphinne Vallora, however, denies replacing models, with Petrescu explaining, ‘We’re offering companies another choice in how they market a product.’

Despite their website claiming cost-efficiency by ‘eliminating the need for expensive set-ups… hiring models,’ they involve real models and photographers in their AI creation process. Vogue’s decision to run the advert has drawn criticism on social media, with Bovell noting the magazine’s influential position, which means they are ‘in some way ruling it as acceptable.’

Looking ahead, Bovell predicts more AI-generated models but not their total dominance, foreseeing a future where individuals might create personal AI avatars to try on clothes and a potential ‘society opting out’ if AI models become too unattainable.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK enforces age checks to block harmful online content for children

The United Kingdom has introduced new age verification laws to prevent children from accessing harmful online content, marking a significant shift in digital child protection.

The measures, enforced by media regulator Ofcom, require websites and apps to implement strict age checks such as facial recognition and credit card verification.

Around 6,000 pornography websites have already agreed to the new regulations, which stem from the 2023 Online Safety Act. The rules also target content related to suicide, self-harm, eating disorders and online violence, instead of just focusing on pornography.

Companies failing to comply risk fines of up to £18 million or 10% of global revenue, whichever is greater, and senior executives could face criminal charges if they ignore Ofcom’s directives.

Technology Secretary Peter Kyle described the move as a turning point, saying children will now experience a ‘different internet for the first time’.

Ofcom data shows that around 500,000 children aged eight to fourteen encountered online pornography in just one month, highlighting the urgency of the reforms. Campaigners, including the NSPCC, called the new rules a ‘milestone’, though they warned loopholes could remain.

The UK government is also exploring further restrictions, including a potential daily two-hour time limit on social media use for under-16s. Kyle has promised more announcements soon, as Britain moves to hold tech platforms accountable instead of leaving children exposed to harmful content online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches AI feature to reshape how search results appear

Google has introduced a new experimental feature named Web Guide, aimed at reorganising search results by using AI to group information based on the query’s different aspects.

Available through Search Labs, the tool helps users explore topics in a more structured way instead of relying on the standard, linear results page.

Powered by Google’s Gemini AI, Web Guide works particularly well for open-ended or complex queries. For example, searches such as ‘how to solo travel in Japan’ would return results neatly arranged into guides, safety advice, or personal experiences instead of a simple list.

The feature handles multi-sentence questions, offering relevant answers broken into themed sections.

Users who opt in can access Web Guide via the Web tab and toggle it off without exiting the entire experiment. While it works only on that tab, Google plans to expand it to the broader ‘All’ tab in time.

The move follows Google’s broader push to incorporate Gemini into tools like AI Mode, Flow, and other experimental products.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft replaces the Blue Screen of Death with a sleek black version in Windows 11

Microsoft has officially removed the infamous Blue Screen of Death (BSOD) from Windows 11 and replaced it with a sleeker, black version.

As part of update KB5062660, the Black Screen of Death now appears for roughly two seconds before a restart, showing only a short error message without the sad face or QR code that became symbolic of Windows crashes.

The update, which brings systems to Build 26100.4770, is optional and must be installed manually through Windows Update or the Microsoft Update Catalog.

It is available for both x64 and arm64 platforms. Microsoft plans to roll out the update more broadly in August 2025 as part of its Windows 11 24H2 feature preview.

In addition to the screen change, the update introduces ‘Recall’ for EU users, a tool designed to operate locally and allow users to block or turn off tracking across apps and websites. The feature aims to comply with European privacy rules while enhancing user control.

Also included is Quick Machine Recovery, which can identify and fix system-wide failures using the Windows Recovery Environment. If a device becomes unbootable, it can download a repair patch automatically to restore functionality instead of requiring manual intervention.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta boosts teen safety as it removes hundreds of thousands of harmful accounts

Meta has rolled out new safety tools to protect teenagers on Instagram and Facebook, including alerts about suspicious messages and a one-tap option to block or report harmful accounts.

The company said it is increasing efforts to prevent inappropriate contact from adults and has removed over 635,000 accounts that sexualised or targeted children under 13.

Of those accounts, 135,000 were caught posting sexualised comments, while another 500,000 were flagged for inappropriate interactions.

Meta said teen users blocked over one million accounts and reported another million after receiving in-app warnings encouraging them to stay cautious in private messages.

The company also uses AI to detect users lying about their age on Instagram. If flagged, those accounts are automatically converted to teen accounts with stronger privacy settings and messaging restrictions. Since 2024, all teen accounts have been set to private by default.

Meta’s move comes as it faces mounting legal pressure from dozens of US states accusing the company of contributing to the youth mental health crisis by designing addictive features on Instagram and Facebook. Critics argue that more must be done to ensure safety instead of relying on user action alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts urge broader values in AI development

Since the launch of ChatGPT in late 2022, the private sector has led AI innovation. Major players like Microsoft, Google, and Alibaba, alongside emerging firms such as Anthropic and Mistral, are racing to monetise AI and secure long-term growth in the technology-driven economy.

But during the Fortune Brainstorm AI conference in Singapore this week, experts stressed the importance of human values in shaping AI’s future. Anthea Roberts, founder of Dragonfly Thinking, argued that AI must be built not just to think faster or cheaper, but also to think better.

She highlighted the risk of narrow thinking—national, disciplinary or algorithmic—and called for diverse, collaborative thinking to counter it. Roberts sees potential in human-AI collaboration, which can help policymakers explore different perspectives, boosting the chances of sound outcomes.

Russell Wald, executive director at Stanford’s Institute for Human-Centered AI, called AI a civilisation-shifting force. He stressed the need for an interdisciplinary ecosystem, combining academia, civil society, government and industry, to steer AI development.

‘Industry must lead, but so must academia,’ Wald noted, pointing to universities’ contributions to early research, training, and transparency. Despite widespread adoption, AI scepticism persists, owing to issues like bias, hallucination, and unpredictable or inappropriate language.

Roberts said most people fall into two camps: those who use AI uncritically, such as students and tech firms, and those who reject it entirely.

She labelled the latter as practising ‘critical non-use’ due to concerns over bias, authenticity and ethical shortcomings in current models. Roberts urged a broader demographic, especially people outside tech hubs like Silicon Valley, to take part in AI governance and shape its future.

Wald noted that in designing AI, developers must reflect the best of humanity: ‘Not the crazy uncle at the Thanksgiving table.’

Both experts believe the stakes are high, and the societal benefits of getting AI right are too great to ignore or mishandle. ‘You need to think not just about what people want,’ Roberts said, ‘but what they want to want—their more altruistic instincts.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump pushes for ‘anti-woke’ AI in US government contracts

Tech firms aiming to sell AI systems to the US government will now need to prove their chatbots are free of ideological bias, following a new executive order signed by Donald Trump.

The measure, part of a broader plan to counter China’s influence in AI development, marks the first official attempt by the US to shape the political behaviour of AI systems used in government services.

It places a new emphasis on ensuring AI reflects so-called ‘American values’ and avoids content tied to diversity, equity and inclusion (DEI) frameworks in publicly funded models.

The order, titled ‘Preventing Woke AI in the Federal Government’, does not outright ban AI that promotes DEI ideas, but requires companies to disclose if partisan perspectives are embedded.

Major providers like Google, Microsoft and Meta have yet to comment. Meanwhile, firms face pressure to comply or risk losing valuable public sector contracts and funding.

Critics argue the move forces tech companies into a political culture war and could undermine years of work addressing AI bias, harming fair and inclusive model design.

Civil rights groups warn the directive may sideline tools meant to support vulnerable groups, favouring models that ignore systemic issues like discrimination and inequality.

Policy analysts have compared the approach to China’s use of state power to shape AI behaviour, though Trump’s order stops short of requiring pre-approval or censorship.

Supporters, including influential Trump-aligned venture capitalists, say the order restores transparency. Marc Andreessen and David Sacks were reportedly involved in shaping the language.

The move follows backlash to an AI image tool released by Google, which depicted racially diverse figures when asked to generate the US Founding Fathers, triggering debate.

Developers claimed the outcome resulted from attempts to counter bias in training data, though critics labelled it ideological overreach embedded by design teams.

Under the directive, companies must disclose model guidelines and explain how neutrality is preserved during training. Intentional encoding of ideology is discouraged.

Former FTC technologist Neil Chilson described the order as light-touch: it does not ban political outputs, but only calls for transparency about how outputs are generated.

OpenAI said its objectivity measures align with the order, while Microsoft declined to comment. xAI praised Trump’s AI policy but did not mention specifics.

The firm, founded by Elon Musk, recently won a $200 million defence contract, shortly after its Grok chatbot drew criticism for generating antisemitic and pro-Hitler messages.

Trump’s broader AI orders seek to strengthen American leadership and reduce regulatory burdens to keep pace with China in the development of emerging technologies.

Some experts caution that ideological mandates could set a precedent for future governments to impose their political views on critical AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!