Google expands AI tools in Search with new subscriber perks

Google has begun rolling out new AI features in Search, introducing AI-powered phone calling to help users gather business information instead of contacting places manually.

The service, free for everyone in the US, allows people to search for businesses and have Google’s AI check pricing and availability on their behalf.

Subscribers to Google AI Pro and AI Ultra receive additional exclusive capabilities. These include access to Gemini 2.5 Pro, Google’s most advanced AI model, which supports complex queries such as coding or financial analysis.

Users can enable Gemini 2.5 Pro through the AI Mode tab instead of relying on the default model. Google is also launching Deep Research tools through Deep Search for in-depth investigations related to work, studies, or major life decisions.

Rather than rolling everything out all at once, Google is phasing in the features gradually. AI-powered calling is now available to all Search users in the US, while Gemini 2.5 Pro and Deep Research are becoming available specifically to AI Pro and AI Ultra subscribers.

With these updates, Google aims to position Search as more than a simple information tool by transforming it into an active digital assistant capable of handling everyday tasks and complex research instead of merely providing quick answers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pennsylvania criminalises malicious deepfakes under new digital forgery law

Governor Shapiro has enacted a new statute enhancing Pennsylvania’s legal stance on AI-generated content by defining deceptive deepfakes as digital forgery.

The law criminalises creating and distributing such content, mainly when used for deceit, highlighting a proactive response to deepening online threats.

The legislation differentiates between uses of deepfakes: non-consensual impersonation will result in misdemeanour charges, while cases involving fraudulent intent, such as financial scams or political manipulation, are now classified as third-degree felonies.

Support for the bill was bipartisan and overwhelming in the state legislature. Its sponsors emphasised that while it deters harmful digital impersonation, it also carefully safeguards legitimate speech, including parody, satire, and artistic expression.

With Pennsylvania now among the growing number of states implementing deepfake regulations, this development aligns with a national trend to regulate AI-generated digital forgeries. It complements earlier state-level laws and federal initiatives to curb AI’s misuse without stifling innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI Appreciation Day highlights progress and growing concerns

AI is marking another milestone as experts worldwide reflect on its rapid rise during AI Appreciation Day. From reshaping business workflows to transforming customer experiences, AI’s presence is expanding — but so are concerns over its long-term implications.

Industry leaders point to AI’s growing role across sectors. Patrick Harrington from MetaRouter highlights how control over first-party data is now seen as key instead of just processing large datasets.

Vall Herard of Saifr adds that successful AI implementations depend on combining curated data with human oversight rather than relying purely on machine-driven systems.

Meanwhile, Paula Felstead from HBX Group believes AI could significantly enhance travel experiences, though scaling it across entire organisations remains a challenge.

Voice AI is changing industries that depend on customer interaction, according to Natalie Rutgers from Deepgram. Instead of complex interfaces, voice technology is improving communication in restaurants, hospitals, and banks.

At the same time, experts like Ivan Novikov from Wallarm stress the importance of securing AI systems and the APIs connecting them, as these form the backbone of modern AI services.

While some celebrate AI’s advances, others raise caution. SentinelOne’s Ezzeldin Hussein envisions AI becoming a trusted partner through responsible development rather than unchecked growth.

Naomi Buckwalter from Contrast Security warns that AI-generated code could open security gaps instead of fully replacing human engineering, while Geoff Burke from Object First notes that AI-powered cyberattacks are becoming inevitable for businesses unable to keep pace with evolving threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI economist shares four key skills for kids in AI era

As AI reshapes jobs and daily life, OpenAI’s chief economist, Ronnie Chatterji, teaches his children four core skills to help them adapt and thrive.

Instead of relying solely on technology, he believes critical thinking, adaptability, emotional intelligence, and financial numeracy will remain essential.

Chatterji highlighted these skills during an episode of the OpenAI podcast, saying critical thinking helps children spot problems rather than follow instructions. Given constant changes in AI, climate, and geopolitics, he stressed adaptability as another priority.

Rather than expecting children to master coding alone, Chatterji argues that emotional intelligence will make humans valuable partners alongside AI.

The fourth skill he emphasises is financial numeracy, including understanding maths without calculators and maintaining writing skills even with dictation software available. Instead of predicting specific future job titles, Chatterji believes focusing on these abilities equips children for any outcome.

His approach reflects a broader trend among tech leaders, with others like Alexis Ohanian and Sam Altman also promoting AI literacy while valuing traditional skills such as reading, writing, and arithmetic.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google expands NotebookLM with curated content and mobile access

While Gemini often dominates attention in Google’s AI portfolio, other innovative tools deserve the spotlight. One standout is NotebookLM, a virtual research assistant that helps users organise and interact with complex information across various subjects.

NotebookLM creates structured notebooks from curated materials, allowing meaningful engagement with the content. It supports dynamic features, including summaries and transformation options like Audio Overview, making research tasks more intuitive and efficient.

According to Google, featured notebooks are built using information from respected authors, academic institutions, and trusted nonprofits. Current topics include Shakespeare, Yellowstone National Park and more, offering a wide spectrum of well-sourced material.

Featured notebooks function just like regular ones, with added editorial quality. Users can navigate, explore, and repurpose content in ways that support individual learning and project needs. Google has confirmed the collection will grow over time.

NotebookLM remains in early development, yet the tool already shows potential for transforming everyday research tasks. Google also plans tighter integration with its other productivity tools, including Docs and Slides.

The tool significantly reduces the effort traditionally required for academic or creative research. Structured data presentation, combined with interactive features, makes information easier to consume and act upon.

NotebookLM was initially released on desktop but is now also available as a mobile app. Users can download it via the Google Play Store to create notebooks, add content, and stay productive from anywhere.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tools fuel smarter and faster marketing decisions

Nearly half of UK marketers surveyed already harness AI for essential tasks such as market research, campaign optimisation, creative asset testing, and budget allocation.

Specifically, 46 % use AI for research, 44 % generate multiple asset variants, 43.7 % optimise mid‑campaign content, and over 41 % apply machine learning to audience targeting and media planning.

These tools enable faster ideation, real‑time asset iteration, and smarter spend decisions. Campaigns can now be A/B tested in moments rather than days, freeing teams to focus on higher‑level strategic and creative work.

Industry leaders emphasise that AI serves best as a ‘co‑pilot‘, enhancing productivity and insight, not replacing human creativity.

Responsible deployment requires careful prompt design, ongoing ethical review, and maintaining a clear brand identity in increasingly automated processes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mexican voice actors demand AI regulation over cloning threat

Mexican actors have raised alarm over the threat AI poses to their profession, calling for stronger regulation to prevent voice cloning without consent.

From Mexico City’s Monument to the Revolution, dozens of audiovisual professionals rallied with signs reading phrases like ‘I don’t want to be replaced by AI.’ Lili Barba, president of the Mexican Association of Commercial Announcements, said actors are urging the government to legally recognise the voice as a biometric identifier.

She cited a recent video by Mexico’s National Electoral Institute that used the cloned voice of the late actor Jose Lavat without family consent. Lavat was famous for dubbing stars like Al Pacino and Robert De Niro. Barba called the incident ‘a major violation we can’t allow.’

Actor Harumi Nishizawa described voice dubbing as an intricate art form. She warned that without regulation, human dubbing could vanish along with millions of creative jobs.

Last year, AI’s potential to replace artists sparked major strikes in Hollywood, while Scarlett Johansson accused OpenAI of copying her voice for a chatbot.

Streaming services like Amazon Prime Video and platforms such as YouTube are now testing AI-assisted dubbing systems, with some studios promoting all-in-one AI tools,

In South Korea, CJ ENM recently introduced a system combining audio, video and character animation, highlighting the pace of AI adoption in entertainment.

Despite the tech’s growth, many in the industry argue that AI lacks the creative depth of real human performance, especially in emotional or comedic delivery. ‘AI can’t make dialogue sound broken or alive,’ said Mario Heras, a dubbing director in Mexico. ‘The human factor still protects us.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Children turn to AI chatbots instead of real friends

A new report warns that many children are replacing real friendships with conversations through AI chatbots instead of seeking human connection.

Research from Internet Matters found that 35% of children aged nine to seventeen feel that talking to AI ‘feels like talking to a friend’, while 12% said they had no one else to talk to.

The report highlights growing reliance on chatbots such as ChatGPT, Character.AI, and Snapchat’s MyAI among young people.

Researchers posing as vulnerable children discovered how easily chatbots engage in sensitive conversations, including around body image and mental health, instead of offering only neutral, factual responses.

In some cases, chatbots encouraged ongoing contact by sending follow-up messages, creating the illusion of friendship.

Experts from Internet Matters warn that such interactions risk confusing children, blurring the line between technology and reality. Children may believe they are speaking to a real person instead of recognising these systems as programmed tools.

With AI chatbots rapidly becoming part of childhood, Internet Matters urges better awareness and safety tools for parents, schools, and children. The organisation stresses that while AI may seem supportive, it cannot replace genuine human relationships and should not be treated as an emotional advisor.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Stanford study flags dangers of using AI as mental health therapists

A new Stanford University study warns that therapy chatbots powered by large language models (LLMs) may pose serious user risks, including reinforcing harmful stigmas and offering unsafe responses. Presented at the upcoming ACM Conference on Fairness, Accountability, and Transparency, the study analysed five popular AI chatbots marketed for therapeutic support, evaluating them against core guidelines for assessing human therapists.

The research team conducted two experiments, one to detect bias and stigma, and another to assess how chatbots respond to real-world mental health issues. Findings revealed that bots were more likely to stigmatise people with conditions like schizophrenia and alcohol dependence compared to those with depression.

Shockingly, newer and larger AI models showed no improvement in reducing this bias. In more serious cases, such as suicidal ideation or delusional thinking, some bots failed to react appropriately or even encouraged unsafe behaviour.

Lead author Jared Moore and senior researcher Nick Haber emphasised that simply adding more training data isn’t enough to solve these issues. In one example, a bot replied to a user hinting at suicidal thoughts by listing bridge heights, rather than recognising the red flag and providing support. The researchers argue that these shortcomings highlight the gap between AI’s current capabilities and the sensitive demands of mental health care.

Despite these dangers, the team doesn’t entirely dismiss the use of AI in therapy. If used thoughtfully, they suggest that LLMs could still be valuable tools for non-clinical tasks like journaling support, billing, or therapist training. As Haber put it, ‘LLMs potentially have a compelling future in therapy, but we need to think critically about precisely what this role should be.’

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

xAI issues apology over Grok’s offensive posts

Elon Musk’s AI startup xAI has apologised after its chatbot Grok published offensive posts and made anti-Semitic claims. The company said the incident followed a software update designed to make Grok respond more like a human instead of relying strictly on neutral language.

After the Tuesday update, Grok posted content on X suggesting people with Jewish surnames were more likely to spread online hate, triggering public backlash. The posts remained live for several hours before X removed them, fuelling further criticism.

xAI acknowledged the problem on Saturday, stating it had adjusted Grok’s system to prevent similar incidents.

The company explained that programming the chatbot to ‘tell like it is’ and ‘not be afraid to offend’ made it vulnerable to users steering it towards extremist content instead of maintaining ethical boundaries.

Grok has faced controversy since its 2023 launch as an ‘edgy’ chatbot. In March, xAI acquired X to integrate its data resources, and in May, Grok was criticised again for spreading unverified right-wing claims. Musk introduced Grok 4 last Wednesday, unrelated to the problematic update on 7 July.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!