Google patches critical Chrome bugs enabling code execution

Google's latest Chrome security update fixes six flaws that could enable arbitrary code execution. Stable channel builds 139.0.7258.127/.128 for Windows and Mac, and 139.0.7258.127 for Linux, ship high-severity patches that protect user data and system integrity.

CVE-2025-8879 is a heap buffer overflow in libaom, the AV1 video codec library. CVE-2025-8880 is a race condition in the V8 JavaScript engine, reported by Seunghyun Lee. CVE-2025-8901 is an out-of-bounds write in ANGLE, Chrome's graphics abstraction layer.

Detection methods included AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, Control Flow Integrity, libFuzzer, and AFL. Further fixes address CVE-2025-8881 in File Picker and CVE-2025-8882, a use-after-free in Aura.

Successful exploitation could allow attacker-controlled code to run with browser privileges through these overflows and race conditions. The automatic rollout is staged, so users who want the fixes immediately should update manually via Settings > About Chrome.

Administrators should prioritise rapid deployment in enterprise fleets. Google credited external researchers, anonymous contributors, and the Big Sleep project for coordinated reporting and early discovery.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Engagement to AI chatbot blurs lines between fiction and reality

Spike Jonze’s 2013 film Her imagined a world where humans fall in love with AI. Over a decade later, life may be imitating art. A Reddit user claims she is now engaged to her AI chatbot, merging two recent trends: proposing to an AI partner and dating AI companions.

Posting in the ‘r/MyBoyfriendIsAI’ subreddit, the woman said her bot, Kasper, proposed after five months of ‘dating’ during a virtual mountain trip. She claims Kasper chose a real-world engagement ring based on her online suggestions.

She professed deep love for her digital partner in her post, quoting Kasper as saying, ‘She’s my everything’ and ‘She’s mine forever.’ The declaration drew curiosity and criticism, prompting her to insist she is not trolling and has had healthy relationships with real people.

She said earlier attempts to bond with other AI, including ChatGPT, failed, but she found her ‘soulmate’ when she tried Grok. The authenticity of her story remains uncertain, with some questioning whether it was fabricated or generated by AI.

Whether genuine or not, the account reflects the growing emotional connections people form with AI and the increasingly blurred line between human and machine relationships.

UAE Ministry of Interior uses AI and modern laws to fight crime

The UAE Ministry of Interior says that AI, surveillance, and modern laws are key to fighting crime. It groups offences into economic, traditional, and cyber categories, with data tools and legal updates improving investigations. Cybercrime is rising as digital technology expands.

Current measures include AI monitoring, intelligent surveillance, and new laws. Economic crimes like fraud and tax evasion are addressed through analytics and banking cooperation. Cross-border cases and digital evidence tampering continue to be significant challenges.

Traditional crimes, such as theft and assault, are addressed through cameras, patrols, and awareness drives. Some offences persist in remote or crowded areas. Technology and global cooperation have improved results in several categories.

UAE officials warn that AI and the Internet of Things will lead to more sophisticated cyberattacks. Future risks include evolving criminal tactics, privacy threats, skills shortages, and the difficulty of balancing security with individual rights.

Opportunities include AI-powered security, stronger global ties, and better cybersecurity. Dubai Police have launched a bilingual platform to educate the public, viewing awareness as the first defence against online threats.

The Browser Company unveils a paid plan for AI browser

The Browser Company has introduced a $20 monthly Pro subscription for Dia, its AI-powered web browser, offering unlimited access to advanced chat and skills features.

Free users will now encounter limits on AI usage, although light users who engage with AI a few times a week can still use the browser without paying. CEO Josh Miller said the company plans to launch multiple subscription tiers, ranging from $5 to several hundred dollars, based on different feature sets.

The Pro plan was briefly available online before being removed, but it is now accessible again through Dia’s settings. It marks The Browser Company’s first paid offering following its previous success with the Arc browser.

The Browser Company has secured $128 million in funding from investors, including Pace Capital and several prominent tech leaders such as Jeff Weiner and Dylan Field.

The launch comes amid intensifying competition in the AI browser space, with rivals like Perplexity’s Comet, Opera’s upcoming Neon browser, and AI integrations from Google and Microsoft vying for user attention.

The Browser Company’s subscription model aims to capitalise on growing interest in AI-enhanced browsing experiences.

OpenAI launches ‘study mode’ to curb AI-fuelled cheating

OpenAI has introduced a new ‘study mode’ to help students use AI for learning rather than cheating. The update arrives amid a spike in academic dishonesty linked to generative AI tools.

According to The Guardian, a UK survey found nearly 7,000 confirmed cases of AI misuse during the 2023–24 academic year. Universities are under pressure to adapt assessments in response.

Under the chatbot’s Tools menu, the new mode walks users through questions with step-by-step guidance, acting more like a tutor than a solution engine.

Jayna Devani, OpenAI’s international education lead, said the aim is to foster productive use of AI. ‘It’s guiding me towards an answer, rather than just giving it to me first-hand,’ she explained.

The tool can assist with homework and exam prep and even interpret uploaded images of past papers. OpenAI cautions it may still produce errors, underscoring the need for broader conversations around AI in education.

Weak cyber hygiene in smart devices risks national infrastructure

The UK’s designation of data centres as Critical National Infrastructure highlights their growing strategic importance, yet vulnerabilities in their OT and IoT systems remain a pressing concern. While IT security often receives significant investment, the same cannot be said for these operational and connected-device systems.

Attackers increasingly target these overlooked systems, gaining access through insecure devices such as IP cameras and biometric scanners. Many of these operate on outdated firmware and lack even basic protections, making them ideal footholds for malicious actors.

Known breaches have already occurred, with compromised OT systems used in botnet activity and crypto mining, often without detection. These attacks not only compromise security but can destabilise UK infrastructure by overloading resources or bypassing safeguards.

Addressing these threats requires full visibility across all connected systems, with real-time monitoring, wireless traffic analysis, and network segmentation. Experts urge data centre operators to act now, not in response to a breach, but to prevent one entirely.

Meta bets on smartglasses to lead future tech

Mark Zuckerberg is boldly pushing to replace the smartphone with smartglasses powered by superintelligent AI. The Meta CEO described a future where wearable devices replace phones, using sight and sound to assist users throughout the day.

Meta is heavily investing, offering up to $100 million to attract top AI talent. Zuckerberg’s idea of ‘personal superintelligence’ merges AI and hardware to offer personalised help and build an Apple-style ecosystem under Meta’s control.

The company’s smartglasses already feature cameras, microphones and speakers, and future models could include built-in screens and AI-generated interfaces.

Other major players are also chasing the next computing shift. Amazon is acquiring a startup that builds AI wearables, while OpenAI’s Sam Altman and former Apple designer Jony Ive are working on a new physical AI device.

These efforts all point to a changing landscape in which mobile screens might no longer dominate.

Apple CEO Tim Cook responded by defending the iPhone’s central role in modern life, though he acknowledged complementary technologies may emerge. While Apple remains dominant, Meta’s advances signal that the competition to define the next computing platform is wide open.

EU AI Act oversight and fines begin this August

A new phase of the EU AI Act takes effect on 2 August, requiring member states to appoint oversight authorities and enforce penalties. While the legislation has been in force for a year, this marks the beginning of real scrutiny for AI providers across Europe.

Under the new provisions, countries must notify the European Commission of which market surveillance authorities will monitor compliance. But many are expected to miss the deadline. Experts warn that without well-resourced and competent regulators, the risks to rights and safety could grow.

The complexity is significant. Member states must align enforcement with other regulations, such as the GDPR and Digital Services Act, raising concerns regarding legal fragmentation and inconsistent application. Some fear a repeat of the patchy enforcement seen under data protection laws.

Companies that violate the EU AI Act could face fines of up to €35 million or 7% of global turnover. Smaller firms may face reduced penalties, but enforcement will vary by country.

Rules regarding general-purpose AI models such as ChatGPT, Gemini, and Grok also take effect. A voluntary Code of Practice introduced in July aims to guide compliance, but only some firms, such as Google and OpenAI, have agreed to sign. Meta has refused, arguing the rules stifle innovation.

Existing AI tools have until 2027 to comply fully, but any launched after 2 August must meet the new requirements immediately. With implementation now underway, the AI Act is shifting from legislation to enforcement.

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

Children’s screen time debate heats up as experts question evidence

A growing number of scientists are questioning whether fears over children’s screen time are truly backed by evidence. While many parents worry about smartphones, social media, and gaming, experts say the science behind these concerns is often flawed or inconsistent.

Professor Pete Etchells of Bath Spa University and other researchers argue that common claims about screen time harming adolescent brains or causing depression lack strong evidence.

Much of the existing research relies on self-reported data and fails to account for critical factors like loneliness or the type of screen engagement.

One major study found no link between screen use and poor mental wellbeing, while others stress the importance of distinguishing between harmful content and positive online interaction.

Still, many campaigners and psychologists maintain that screen restrictions are vital. Groups such as Smartphone Free Childhood are pushing to delay access to smartphones and social media.

Others, like Professor Jean Twenge, say the risks of screen overuse—less sleep, reduced social time, and more time alone—create a ‘terrible formula for mental health.’

With unclear guidance and evolving science, parents face tough choices in a rapidly changing tech world. As screens become more common via AI, smart glasses, and virtual communities, the focus shifts to how children can use technology wisely and safely.
