Google adds clever email safety feature

Thanks to a new feature that shows verified brand logos, Gmail users will now find it easier to spot phishing emails. The update uses BIMI (Brand Indicators for Message Identification), a standard that allows trusted companies to display official logos next to their messages.

To qualify, brands must secure their domain with DMARC and have their logos verified by authorities such as Entrust or DigiCert. Once approved, they receive a Verified Mark Certificate, linking their logo to their domain.
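
In practice, both DMARC and BIMI are published as DNS TXT records on the brand's domain. The sketch below is illustrative only: the domain example.com and the file paths are hypothetical, but the record names and tag syntax follow the DMARC and BIMI specifications.

```dns
; DMARC policy record. BIMI requires an enforcing policy (quarantine or reject).
_dmarc.example.com.        IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"

; BIMI record at the default selector: l= points to the SVG logo,
; a= points to the Verified Mark Certificate issued by an authority such as Entrust or DigiCert.
default._bimi.example.com. IN TXT "v=BIMI1; l=https://example.com/bimi/logo.svg; a=https://example.com/bimi/vmc.pem"
```

Receiving providers such as Gmail look up the `default._bimi` record, check that the certificate binds the logo to the domain, and only then display the logo in the inbox.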

The feature helps users quickly distinguish between genuine emails and fraudulent ones. Early adopters include Bank of America in the US, whose logo now appears directly in inboxes.

Google’s move is expected to drive broader adoption, with services like MailChimp and Verizon Media already supporting the system. The change could significantly reduce phishing risks for Gmail’s vast user base.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Gemini AI creates personalised illustrated storybooks from your photos and ideas

Google has introduced a new feature in its Gemini AI that allows users to create short, illustrated storybooks from prompts, essays, photos, and drawings. The tool can transform everyday materials into customised children’s books with art and narration.

The company demonstrated how a mother’s CV could be reimagined as a colouring book to explain her career to her children. Gemini can also turn vacation photos, children’s sketches, or personal life events into unique 10-page books in over 45 languages.

Users can select from various visual styles, including pixel art, claymation, crochet, comics, and colouring books.

To use the feature, users describe their desired story and optionally upload images or files. Gemini then generates a personalised book with illustrations and audio. The service is available worldwide on desktop and mobile through the Gemini app, in all supported languages.

Google DeepMind launches Genie 3 to create interactive 3D worlds from text

Google DeepMind has introduced Genie 3, an AI world model capable of generating explorable 3D environments in real time from a simple text prompt.

Unlike earlier versions, it supports several minutes of continuous interaction, basic visual memory, and real-time changes such as altering weather or adding characters.

The system allows users to navigate these spaces at 24 frames per second in 720p resolution, retaining object placement for about a minute.

Users can trigger events within the virtual world by typing new instructions, making Genie 3 suitable for applications ranging from education and training to video games and robotics.

Genie 3’s improvements over Genie 2 include frame-by-frame generation with memory tracking and dynamic scene creation without relying on pre-built 3D assets.

However, the AI model still has limits, including the inability to replicate real-world locations with geographic accuracy and restricted interaction capabilities. Multi-agent features are still in development.

Currently offered as a limited research preview to select academics and creators, Genie 3 will be made more widely available over time.

Google DeepMind has noted that safety and responsibility remain central concerns during the gradual rollout.

Google signs groundbreaking deal to cut data centre energy use

Google has become the first major tech firm to sign formal agreements with US electric utilities to ease grid pressure. The deals come as data centres drive unprecedented energy demand, straining power infrastructure in several regions.

The company will work with Indiana Michigan Power and the Tennessee Valley Authority to reduce electricity usage during peak demand, allowing power to be redirected to other customers on the grid when needed.

Under the agreements, Google will temporarily scale down its data centre operations, particularly those linked to energy-intensive AI and machine learning workloads.

Google described the initiative as a way to speed up data centre integration with local grids while avoiding costly infrastructure expansion. The move reflects growing concern over AI’s rising energy footprint.

Demand-response programmes, once used mainly in heavy manufacturing and crypto mining, are now being adopted by tech firms to stabilise grids in return for lower energy costs.

Google AI Mode raises fears over control of news

Google’s AI Mode has quietly launched in the UK, reshaping how users access news by summarising information directly in search results.

By paraphrasing content gathered across the internet, the tool offers instant answers while reducing the need to visit original news sites.

Critics argue that the technology concentrates control over information in the UK by filtering what users see based on algorithms rather than editorial judgement. Concerns have grown over transparency, fairness and the future of independent journalism.

Publishers are not compensated for content used by AI Mode, and most users rarely click through to the sources. Newsrooms fear pressure to adapt their output to align with Google’s preferences or risk being buried online.

While AI may streamline convenience, it lacks accountability. Regulated journalism must operate under legal frameworks, whereas AI faces no such scrutiny even when errors have real consequences.

Google rolls out Deep Think to Gemini AI Ultra users

Google has launched Deep Think for AI Ultra subscribers within the Gemini app. The Gemini 2.5-based model is also available to select mathematicians, offering powerful tools for complex problem-solving and mathematical exploration.

Google’s Deep Think AI, improved from the version shown at I/O, offers quicker reasoning and enhanced usability. In internal benchmarks, it achieved bronze-medal-level performance on problems from the 2025 International Mathematical Olympiad (IMO).

Select mathematicians are now using Deep Think to test conjectures. Google notes its excellence in creative problem-solving through parallel reasoning for refined outcomes.

The model has been given extended inference time, enabling deeper analysis and more inventive answers. Reinforcement learning techniques guide it to explore longer reasoning paths, improving its problem-solving ability.

Beyond maths, Google considers Deep Think useful for design, planning, and coding. It can enhance web development, reason through scientific literature, and tackle algorithmic challenges, supporting users with strategic and iterative thinking across disciplines.

AI is the next iPhone moment, says Apple CEO Tim Cook

Any remaining doubts about Apple’s commitment to AI have been addressed directly by its CEO, Tim Cook.

At an all-hands meeting on Apple’s Cupertino campus, Cook told employees that the AI revolution is as big as the internet, smartphones, cloud computing, and apps.

According to Bloomberg’s Power On newsletter, Cook clarified that Apple sees AI as an imperative. ‘Apple must do this,’ he said, describing the opportunity as ‘ours to grab’.

Although Apple unveiled its AI suite, Apple Intelligence, only in June, well after its competitors, Cook remains optimistic about the company’s ability to take the lead.

‘We’ve rarely been first,’ he told staff. ‘There was a PC before the Mac; a smartphone before the iPhone; many tablets before the iPad; an MP3 player before the iPod.’

Cook stressed that Apple had redefined these categories and suggested a similar future for AI, declaring, ‘This is how I feel about AI.’

Cook also outlined concrete steps the company is taking. Around 40% of the 12,000 hires made last year were allocated to research and development, with much of the focus on AI.

According to Bloomberg, Apple is also reportedly developing a new cloud-computing chip, code-named Baltra, designed to support AI features. In a recent interview with CNBC, Cook stated that Apple is open to acquisitions that could accelerate its progress in the AI sector.

Apple is not alone in its intense focus on AI. Rival firms are also increasing expectations and pressure. Sergey Brin, the Google co-founder who has returned to work at the company, told employees that 60-hour in-office work weeks may be necessary to win the AI race.

Reports of burnout and extreme workloads are becoming more frequent across leading AI firms. Former OpenAI engineer Calvin French-Owen recently described the company’s high-pressure and secretive culture.

French-Owen noted that the environment had become so intense that leadership offered the entire staff a week off to recover, according to Wired.

AI has become the next major battleground in big tech, with companies ramping up investment and reshaping internal structures to secure dominance.

US court mandates Android app competition, loosens billing rules

Google’s long-standing dominance over Android app distribution has been declared illegal by the Ninth Circuit Court of Appeals, which upheld a prior jury verdict in favour of Epic Games. Google now faces an injunction compelling it to allow rival app stores and alternative billing systems within the Google Play ecosystem for a three-year period ending in November 2027.

A technical committee jointly selected by Epic and Google will oversee sensitive implementation tasks, including granting competitors approved access to Google’s expansive app catalogue while ensuring minimal security risk. The order also requires that developers not be tied to Google’s billing system for in-app purchases.

Market analysts warn that reduced dependency on Play Store exclusivity and the option to use alternative payment processors could cut Google’s app revenue by as much as $1 billion to $1.5 billion annually. Despite Google’s brand recognition, developers and consumers may shift toward lower-cost alternatives that compete on platform flexibility.

While the ruling aims to restore competition, Google maintains it is appealing and has requested additional delays to avoid rapid structural changes. Proponents, including Microsoft, regulators, and Epic Games, hail the decision as a landmark step toward fairer mobile market access.

As Meta AI grows smarter on its own, critics warn of regulatory gaps

While OpenAI’s ChatGPT and Google’s Gemini dominate headlines, Meta’s AI is making quieter, but arguably more unsettling, progress. According to CEO Mark Zuckerberg, Meta’s AI is advancing rapidly and, crucially, learning to improve without external input.

In a blog post titled ‘Personal Superintelligence’, Zuckerberg claimed that Meta AI is becoming increasingly powerful through self-directed development. While he described current gains as modest, he emphasised that the trend is both real and significant.

Zuckerberg framed this as part of a broader mission to build AI that acts as a ‘personal superintelligence’, a tool that empowers individuals and becomes widely accessible. However, critics argue this narrative masks a deeper concern: AI systems that can evolve autonomously, outside human guidance or scrutiny.

The concept of self-improving AI is not new. Researchers have previously built systems capable of learning from other models or user interactions. What’s different now is the speed, scale and opacity of these developments, particularly within big tech companies operating with minimal public oversight.

The progress comes amid weak regulation. While governments have issued AI action plans, including the Biden administration’s, experts say they lack the strength to keep up. Meanwhile, AI is rapidly spreading across everyday services, from healthcare and education to biometric verification.

Recent examples include Google’s behavioural age-estimation tools for teens, illustrating how AI is already making high-stakes decisions. As AI systems become more capable, questions arise: How much data will they access? Who controls them? And can the public meaningfully influence their design?

Zuckerberg struck an optimistic tone, framing Meta’s AI as democratic and empowering. However, that may obscure the risks of AI outpacing oversight, as some tech leaders warn of existential threats while others focus on commercial gains.

The lack of transparency worsens the problem. If Meta’s AI is already showing signs of self-improvement, are similar developments happening in other frontier models, such as GPT or Gemini? Without independent oversight, the public has no clear way to know—and even less ability to intervene.

Until enforceable global regulations are in place, society is left to trust that private firms will self-regulate, even as they compete in a high-stakes race for dominance. That’s a risky gamble when the technology itself is changing faster than we can respond.

As Meta AI evolves with little fanfare, the silence may be more ominous than reassuring. AI’s future may arrive before we are prepared to manage its consequences, and by then, it might be too late to shape it on our terms.

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
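
The idea of combining behavioural signals into an age estimate can be caricatured in a few lines. The sketch below is purely illustrative: the feature names, weights, and scoring rule are invented for demonstration and bear no relation to Google's actual model, which the company has not published.

```python
# Toy illustration of behavioural-signal scoring; NOT Google's model.
# All feature names and weights here are hypothetical.

def estimate_minor_probability(signals: dict) -> float:
    """Combine behavioural signals into a crude 'likely under 18' score in [0, 1]."""
    weights = {
        "watches_gaming_videos": 0.3,    # hypothetical viewing-pattern signal
        "school_related_searches": 0.4,  # hypothetical search-history signal
        "late_night_activity": 0.1,
        "account_age_years": -0.05,      # older accounts lower the score
    }
    score = sum(weights.get(name, 0.0) * value for name, value in signals.items())
    return max(0.0, min(1.0, score))  # clamp to [0, 1]

profile = {"watches_gaming_videos": 1, "school_related_searches": 1, "account_age_years": 2}
print(f"estimated probability of being a minor: {estimate_minor_probability(profile):.2f}")
```

A production system would use a trained model over far richer signals; the point is only that behavioural features are combined into a probability, which then gates the account restrictions the article describes.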

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.
