Google's AI Plus subscription expands to 35 new countries and territories

Google has expanded its AI subscription offering to 35 additional countries and territories, bringing Google AI Plus to all regions where its AI plans are currently available, including the United States.

The paid tier bundles access to advanced tools such as Gemini 3 Pro and Nano Banana Pro in the Gemini app, alongside creative features in Flow and research assistance through NotebookLM.

Users also receive 200GB of cloud storage, with the option to share benefits across up to five family members, positioning the plan as both a productivity and household service.

Existing Google One Premium 2TB subscribers in newly supported markets will automatically gain access to Google AI Plus features in the coming days, according to the company.

In the US, pricing starts at $7.99 per month, with a limited-time offer providing a 50 percent discount for new subscribers during the first two months.

Facial recognition and AI power Android’s new theft protection upgrades

Android is rolling out expanded theft protection features aimed at reducing financial fraud and safeguarding personal data when smartphones are stolen, with new security controls now available across recent Android versions.

The latest updates introduce stronger protections against unauthorised access, including tighter lockout rules after failed authentication attempts and broader biometric safeguards covering third-party apps such as banking services and password managers.

Recovery tools are also being enhanced, with remote locking now offering optional security challenges to ensure only verified owners can secure lost or stolen devices through web access.

For new Android devices activated in Brazil, AI-powered theft detection and remote locking are enabled by default, using on-device intelligence to identify snatch-and-run incidents and immediately lock the screen.

The expanded protections reflect a broader shift towards multi-layered mobile security, as device makers respond to rising phone theft linked to identity fraud, financial crime, and data exploitation.

AI Overviews leans heavily on YouTube for health information

Google’s health-related search results increasingly draw on YouTube rather than hospitals, government agencies, or academic institutions, as new research reveals how AI Overviews selects citation sources in automated results.

An analysis by SEO platform SE Ranking reviewed more than 50,000 German-language health queries and found AI Overviews appeared on over 82% of searches, making healthcare one of the most AI-influenced information categories on Google.

Across all cited sources, YouTube ranked first by a wide margin, accounting for more than 20,000 references and surpassing medical publishers, hospital websites, and public health authorities.

Academic journals and research institutions accounted for less than 1% of citations, while national and international government health bodies accounted for under 0.5%, highlighting a sharp imbalance in source authority.

Researchers warn that when platform-scale content outweighs evidence-based medical sources, the risk extends beyond misinformation to long-term erosion of trust in AI-powered search systems.

Google fixes Gmail bug that sent spam into primary inboxes

Gmail experienced widespread email filtering issues on Saturday, sending spam into primary inboxes and mislabelling legitimate messages as suspicious, according to Google’s Workspace status dashboard.

Problems began around 5 a.m. Pacific time, with users reporting disrupted inbox categories, unexpected spam warnings and delays in email delivery. Many said promotional and social emails appeared in primary folders, while trusted senders were flagged as potential threats.

Google acknowledged the malfunction throughout the day, noting ongoing efforts to restore normal service as complaints spread across social media platforms.

By Saturday evening, the company confirmed the issue had been fully resolved for all users, although some misclassified messages and spam warnings may remain visible for emails received before the fix.

Google said it is conducting an internal investigation and will publish a detailed incident analysis to explain what caused the disruption.

New phishing attacks exploit visual URL tricks to impersonate major brands

Phishing techniques are becoming harder to detect as attackers use subtle visual tricks in web addresses to impersonate trusted brands. A new campaign reported by Cybersecurity News shows how simple character swaps create fake websites that closely resemble real ones on mobile browsers.

The phishing attacks rely on a homoglyph technique where the letters ‘r’ and ‘n’ are placed together to mimic the appearance of an ‘m’ in a domain name. On smaller screens, the difference is difficult to spot, allowing phishing pages to appear almost identical to real Microsoft or Marriott login sites.

Cybersecurity researchers observed domains such as rnicrosoft.com being used to send fake security alerts and invoice notifications designed to lure victims into entering credentials. Once compromised, accounts can be hijacked for financial fraud, data theft, or wider access to corporate systems.
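The check itself is simple enough to sketch in a few lines of Python: collapse any 'rn' pair back into 'm' and compare the result against a list of known brand domains. The brand allowlist and the rnarriott.com example below are illustrative additions; only rnicrosoft.com comes from the reported campaign.

```python
# Minimal sketch of the rn -> m homoglyph check described above.
# The brand allowlist and rnarriott.com are hypothetical; only rnicrosoft.com
# is taken from the reported campaign.

KNOWN_BRANDS = {"microsoft.com", "marriott.com"}

def homoglyph_match(domain: str) -> str | None:
    """Return the brand domain this domain may be impersonating,
    if collapsing 'rn' into 'm' produces a known brand."""
    collapsed = domain.lower().replace("rn", "m")
    if collapsed != domain.lower() and collapsed in KNOWN_BRANDS:
        return collapsed
    return None

for candidate in ["rnicrosoft.com", "rnarriott.com", "example.com"]:
    brand = homoglyph_match(candidate)
    if brand:
        print(f"{candidate} visually resembles {brand}")
```

Real defences would combine this with broader Unicode confusable detection and checks on newly registered domains, but the snippet captures why 'rn' reads as 'm' at small font sizes.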

Experts warn that mobile browsing increases the risk, as users are less likely to inspect complete URLs before logging in. Directly accessing official apps or typing website addresses manually remains the safest way to avoid falling into these traps.

Security specialists also continue to recommend passkeys, strong unique passwords, and multi-factor authentication across all major accounts, as well as heightened awareness of domains that visually resemble familiar brands through character substitution.

LinkedIn phishing campaign exposes dangerous DLL sideloading attack

A multi-faceted phishing campaign is abusing LinkedIn private messages to deliver malware via DLL sideloading, security researchers have warned. The activity relies on PDFs and archive files that appear trustworthy to bypass conventional security controls.

Attackers contact targets on LinkedIn and send self-extracting archives disguised as legitimate documents. When the archive is opened, a malicious DLL is sideloaded into a trusted PDF reader, triggering memory-resident malware that establishes encrypted command-and-control channels.

Using LinkedIn messages increases engagement by exploiting professional trust and bypassing email-focused defences. DLL sideloading allows malicious code to run inside legitimate applications, complicating detection.

The campaign enables credential theft, data exfiltration and lateral movement through in-memory backdoors. Encrypted command-and-control traffic makes containment more difficult.

Organisations using common PDF software or Python tooling face elevated risk. Defenders are advised to strengthen social media phishing awareness, monitor DLL loading behaviour and rotate credentials where compromise is suspected.
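As a rough sketch of what monitoring DLL loading behaviour can look like in its simplest form, the Python snippet below lists DLLs found in a trusted application's install directory that are not on an expected allowlist, the typical tell of a sideloading attempt. The install path and allowlist are hypothetical and not taken from the reported campaign.

```python
# Minimal sketch: flag DLLs sitting next to a trusted application that are not on
# an expected allowlist, since sideloading typically plants a rogue library in the
# same directory as a legitimate executable.
# The install path and allowlist below are hypothetical examples.
from pathlib import Path

EXPECTED_DLLS = {"libcrypto.dll", "libssl.dll"}  # hypothetical allowlist for the app

def unexpected_dlls(app_dir: str) -> list[str]:
    """Return DLL filenames in app_dir that are not on the allowlist."""
    return sorted(
        p.name for p in Path(app_dir).glob("*.dll")
        if p.name.lower() not in EXPECTED_DLLS
    )

for name in unexpected_dlls(r"C:\Program Files\ExamplePDFReader"):
    print(f"Unexpected DLL found: {name}")
```

Production tooling would also verify digital signatures and watch process-level load events, but a directory allowlist is a useful first pass.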

Cambodia Internet Governance Forum marks major step toward inclusive digital policy

The first national Internet Governance Forum in Cambodia has taken place, establishing a new platform for digital policy dialogue. The Cambodia Internet Governance Forum (CamIGF) included civil society, private sector and youth participants.

The forum follows an Internet Universality Indicators assessment led by UNESCO and national partners. The assessment recommended a permanent multistakeholder platform for digital governance, grounded in human rights, openness, accessibility and participation.

Opening remarks from national and international stakeholders framed the CamIGF as a move toward people-centred and rights-based digital transformation. Speakers stressed the need for cross-sector cooperation to ensure connectivity, innovation and regulation deliver public benefit.

Discussions focused on online safety in the age of AI, meaningful connectivity, youth participation and digital rights. The programme also included Cambodia’s Youth Internet Governance Forum, highlighting young people’s role in addressing data protection and digital skills gaps.

By institutionalising a national IGF, Cambodia joins a growing global network using multistakeholder dialogue to guide digital policy. UNESCO confirmed continued support for implementing assessment recommendations and strengthening inclusive digital governance.

Generative AI fuels surge in online fraud risks in 2026

Online scams are expected to surge in 2026 and overtake ransomware as the top cyber-risk, driven by the growing use of generative AI, the World Economic Forum has warned.

Executives are increasingly concerned about AI-driven scams that are easier to launch and harder to detect than traditional cybercrime. WEF managing director Jeremy Jurgens said leaders now face the challenge of acting collectively to protect trust and stability in an AI-driven digital environment.

Consumers are also feeling the impact. An Experian report found 68% of people now see identity theft as their main concern, while US Federal Trade Commission data shows consumer fraud losses reached $12.5 billion in 2024, up 25% year on year.

Generative AI is enabling more convincing phishing, voice cloning, and impersonation attempts. The WEF reported that 62% of executives experienced phishing attacks, 37% encountered invoice fraud, and 32% reported identity theft, with vulnerable groups increasingly targeted through synthetic content abuse.

Experts warn that many organisations still lack the skills and resources to defend against evolving threats. Consumer groups advise slowing down, questioning urgent messages, avoiding unsolicited requests for information, and verifying contacts independently to reduce the risk of generative AI-powered scams.

ChatGPT introduces age prediction to strengthen teen safety

New safeguards are being introduced as ChatGPT uses age prediction to identify accounts that may belong to under-18s. Extra protections limit exposure to harmful content while still allowing adults full access.

The age prediction model analyses behavioural and account-level signals, including usage patterns, activity times, account age, and stated age information. OpenAI says these indicators help estimate whether an account belongs to a minor, enabling the platform to apply age-appropriate safeguards.
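Purely as a toy illustration of how signals like these might be combined into a single estimate, and not a description of OpenAI's actual model, a scoring sketch could look like the following; every feature, weight, and threshold here is invented.

```python
# Toy sketch only: combining account-level signals into an under-18 likelihood score.
# NOT OpenAI's model; the features, weights, and threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    stated_age: int | None     # age the user declared at sign-up, if any
    account_age_days: int      # how long the account has existed
    school_hours_share: float  # fraction of activity on weekdays 08:00-15:00

def likely_minor(s: AccountSignals, threshold: float = 0.5) -> bool:
    score = 0.0
    if s.stated_age is not None and s.stated_age < 18:
        score += 0.6           # a self-declared minor carries most of the weight
    if s.account_age_days < 90:
        score += 0.1           # very new accounts are treated more cautiously
    score += 0.3 * s.school_hours_share  # daytime weekday usage as a weak signal
    return score >= threshold

print(likely_minor(AccountSignals(stated_age=16, account_age_days=30,
                                  school_hours_share=0.4)))
# -> True: safeguards would apply until age is verified
```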

When an account is flagged as potentially under 18, ChatGPT limits access to graphic violence, sexual role play, viral challenges, self-harm, and unhealthy body image content. The safeguards reflect research on teen development, including differences in risk perception and impulse control.

ChatGPT users who are incorrectly classified can restore full access by confirming their age through a selfie check using Persona, a secure identity verification service. Account holders can review safeguards and begin the verification process at any time via the settings menu.

Parental controls allow further customisation, including quiet hours, feature restrictions, and notifications for signs of distress. OpenAI says the system will continue to evolve, with EU-specific deployment planned in the coming weeks to meet regional regulatory requirements.
