Google spoofed in sophisticated phishing attack

A sophisticated phishing attack recently targeted Google users, exploiting a well-known email authentication method to bypass security measures.

The attackers sent emails that appeared to come from a legitimate Google address and claimed the recipient needed to comply with a subpoena.

The emails contained a link to a Google Sites page that presented a fake legal support portal and prompted users to log in.

What made this phishing attempt particularly dangerous was that it successfully passed both DMARC and DKIM email authentication checks, making it appear entirely genuine to recipients.
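
For illustration only, here is a minimal Python sketch of how a receiving client or filter might read the Authentication-Results header (RFC 8601) in which such checks are recorded; a message carrying dkim=pass and dmarc=pass looks legitimate even when, as in this case, the content itself is malicious. The addresses and header values below are invented for the example, not taken from the actual campaign.

```python
from email import message_from_string

# Invented example message: the Authentication-Results header is what a
# receiving server records after running DKIM and DMARC checks (RFC 8601).
raw = """\
Authentication-Results: mx.example.com;
 dkim=pass header.i=@google.com;
 dmarc=pass header.from=google.com
From: Google <no-reply@google.com>
Subject: Notice of subpoena

Please review the attached notice.
"""

msg = message_from_string(raw)
auth_results = msg.get("Authentication-Results", "")

# A naive filter that trusts a message once both checks pass, which is
# exactly the assumption this campaign exploited.
if "dkim=pass" in auth_results and "dmarc=pass" in auth_results:
    print("Authentication passed; message treated as genuine")
else:
    print("Authentication failed; message flagged")
```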

In another cyber-related development, Microsoft issued a warning regarding the use of Node.js in distributing malware. Attackers have been using the JavaScript runtime environment to deploy malware through scripts and executables, particularly targeting cryptocurrency traders via malvertising campaigns.

The new technique involves executing JavaScript directly from the command line, making it harder for traditional security tools to detect.

Meanwhile, the US has witnessed a significant change in its disinformation-fighting efforts.

The State Department has closed its Counter Foreign Information Manipulation and Interference group, previously known as the Global Engagement Center, after accusations that it was overreaching in its censorship activities.

The closure, led by Secretary of State Marco Rubio, has sparked criticism, with some seeing it as a victory for foreign powers like Russia and China.

Finally, gig workers face new challenges as the Tech Transparency Project revealed that Facebook groups are being used to trade fake gig worker accounts for platforms like Uber and Lyft.

Sellers offer access to verified accounts, bypassing safety checks and putting passengers and customers at risk. Despite reports to Meta, many of these groups remain active, with the social media giant’s automated systems failing to curb the activity.

ChatGPT search grows rapidly in Europe

ChatGPT search, the web-accessing feature within OpenAI’s chatbot, has seen rapid growth across Europe, attracting an average of 41.3 million monthly active users in the six months leading up to March 31.

It marks a sharp rise from 11.2 million in the previous six-month period, according to a regulatory filing by OpenAI Ireland Limited.

The service must now report this data under the EU’s Digital Services Act (DSA), which counts monthly recipients as users who actively view or interact with the platform.

Should usage cross 45 million, ChatGPT search could be classified as a ‘very large’ online platform and face stricter rules, including transparency obligations, user opt-outs from personalised recommendations, and regular audits.

Failure to follow DSA regulations could lead to serious penalties, up to 6% of OpenAI’s global revenue, or even a temporary ban in the EU for ongoing violations. The law aims to ensure online platforms operate more responsibly and with better oversight in the digital space.

Despite gaining ground, ChatGPT search still lags far behind Google, which handles hundreds of times more queries.

Studies have also raised concerns about the accuracy of AI search tools, with ChatGPT found to misidentify a majority of news articles and occasionally misrepresent licensed content from publishers.

For now, these AI tools still need improvement before they can serve as reliable alternatives to traditional search.

Meta uses AI to spot teens lying about age

Meta has announced it is ramping up efforts to protect teenagers on Instagram by deploying AI to detect users who may have lied about their age. The technology will automatically place suspected underage users into Teen Accounts, even if their profiles state they are adults.

These special accounts come with stricter safety settings designed for users under 16. Those who believe they’ve been misclassified will have the option to adjust their settings manually.

Instead of relying solely on self-reported birthdates, Meta is using its AI to analyse behaviour and signals that suggest a user might be younger than claimed.

While the company has used this technology to estimate age ranges before, it is now applying it more aggressively to catch teens who attempt to bypass the platform’s safeguards. The tech giant insists it’s working to ensure the accuracy of these classifications to prevent mistakes.
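
Meta has not published how this classifier works. Purely as a hypothetical sketch of the general idea, behavioural signals could be combined into a score, and adult-declared accounts above a threshold would be moved into Teen Account settings; the signal names, weights, and threshold below are invented for illustration.

```python
# Purely hypothetical sketch; not Meta's actual system.
def underage_score(signals: dict[str, float]) -> float:
    # Invented behavioural signals and weights, for illustration only.
    weights = {
        "follows_mostly_teen_creators": 0.4,
        "activity_pattern_suggests_minor": 0.5,
        "stated_age_changed_recently": 0.3,
    }
    return sum(weights[name] * signals.get(name, 0.0) for name in weights)

def assign_account_type(stated_adult: bool, signals: dict[str, float]) -> str:
    # Profiles claiming to be adult are still moved into Teen Account settings
    # when the score crosses an (invented) threshold; users can appeal.
    if not stated_adult or underage_score(signals) >= 0.6:
        return "teen_account"
    return "standard_account"

print(assign_account_type(True, {"follows_mostly_teen_creators": 1.0,
                                 "activity_pattern_suggests_minor": 0.5}))
```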

Alongside this new AI tool, Meta will also begin sending notifications to parents about their children’s Instagram settings.

These alerts, which are sent only to parents who have Instagram accounts of their own, aim to encourage open conversations at home about the importance of honest age representation online.

Teen Accounts were first introduced last year and are designed to limit access to harmful content, reduce contact from strangers, and promote healthier screen time habits.

Instead of granting unrestricted access, these accounts are private by default, block unsolicited messages, and remind teens to take breaks after prolonged scrolling.

Meta says the goal is to adapt to the digital age and partner with parents to make Instagram a safer space for young users.

Fake banking apps leave sellers thousands out of pocket

Scammers are using fake mobile banking apps to trick people into handing over valuable items without receiving any payment.

These apps, which convincingly mimic legitimate platforms, display fake ‘successful payment’ screens during face-to-face sales, allowing fraudsters to walk away with goods while the money never arrives.

Victims like Anthony Rudd and John Reddock have lost thousands after being targeted while selling items through social media marketplaces. Mr Rudd handed over £1,000 worth of tools from his Salisbury workshop, only to realise the payment notification was fake.

Mr Reddock, from the UK, lost a £2,000 gold bracelet he had hoped to sell to fund a holiday for his children.

BBC West Investigations found that some of these fake apps, previously removed from the Google Play store, are now being downloaded directly from the internet onto Android phones.

The Chartered Trading Standards Institute described this scam as an emerging threat, warning that in-person fraud is growing more complex instead of fading away.

With police often unable to track down suspects, small business owners like Sebastian Liberek have been left feeling helpless after being targeted repeatedly.

He has lost hundreds of pounds to fake transfers and believes scammers will continue striking, while enforcement remains limited and platforms fail to do enough to stop the spread of fraud.

OpenAI deploys new safeguards for AI models to curb biothreat risks

OpenAI has introduced a new monitoring system to reduce the risk of its latest AI models, o3 and o4-mini, being misused to create chemical or biological threats.

The ‘safety-focused reasoning monitor’ is built to detect prompts related to dangerous materials and instruct the AI models to withhold potentially harmful advice, instead of providing answers that could aid bad actors.
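
OpenAI has not released the monitor itself. The sketch below is a generic illustration of the pattern described, in which a separate classifier screens each prompt and flagged requests receive a refusal instead of the model’s answer; the keyword check stands in for what would in practice be a trained reasoning model, and none of the names reflect OpenAI’s implementation.

```python
# Illustrative sketch of a prompt-screening monitor; not OpenAI's code.
REFUSAL = "I can't help with that request."

def monitor_flags_prompt(prompt: str) -> bool:
    # Placeholder check: a real monitor would itself be a reasoning model
    # trained on red-team data, not a keyword list.
    risky_topics = ("synthesis route", "weaponise", "culture a pathogen")
    return any(topic in prompt.lower() for topic in risky_topics)

def respond(prompt: str, model_reply: str) -> str:
    # The monitor sits in front of the model's output: flagged prompts get a
    # refusal instead of the potentially harmful answer.
    return REFUSAL if monitor_flags_prompt(prompt) else model_reply

print(respond("What's the weather like in Geneva?", "Mild and sunny."))
```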

These newer models represent a major leap in capability compared to previous versions, especially in their ability to respond to prompts about biological weapons. To counteract this, OpenAI’s internal red teams spent 1,000 hours identifying unsafe interactions.

Simulated tests showed the safety monitor blocked 98.7% of risky prompts, although OpenAI admits the system does not account for users rephrasing blocked requests, a gap it says will continue to be covered by human oversight rather than automation alone.

Despite assurances that neither o3 nor o4-mini meets OpenAI’s ‘high risk’ threshold, the company acknowledges these models are more effective at answering dangerous questions than earlier ones like o1 and GPT-4.

Similar monitoring tools are also being used to block harmful image generation in other models, yet critics argue OpenAI should do more.

Concerns have been raised over rushed testing timelines and the lack of a safety report for GPT-4.1, which was launched this week without accompanying transparency documentation.

xAI pushes Grok forward with memory update

Elon Musk’s AI venture, xAI, has introduced a new ‘memory’ feature for its Grok chatbot in a bid to compete more closely with established rivals like ChatGPT and Google’s Gemini.

The update allows Grok to remember details from past conversations, enabling it to provide more personalised responses when asked for advice or recommendations, instead of offering generic answers.

Unlike before, Grok can now ‘learn’ a user’s preferences over time, provided it’s used frequently enough. The move mirrors similar features from competitors, with ChatGPT already referencing full chat histories and Gemini using persistent memory to shape its replies.

According to xAI, the memory is fully transparent. Users can view what Grok has remembered and choose to delete specific entries at any time.
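
xAI has not documented the feature’s internals. As a minimal sketch of the behaviour described, assuming nothing more than a simple per-user store, memories can be saved, listed, and individually deleted:

```python
# Minimal sketch of a transparent, user-editable memory store; purely
# illustrative, not xAI's implementation.
class ConversationMemory:
    def __init__(self) -> None:
        self._entries: dict[int, str] = {}
        self._next_id = 1

    def remember(self, detail: str) -> int:
        entry_id = self._next_id
        self._entries[entry_id] = detail
        self._next_id += 1
        return entry_id

    def view(self) -> dict[int, str]:
        # "Fully transparent": the user can see everything that was stored.
        return dict(self._entries)

    def forget(self, entry_id: int) -> None:
        # The user can delete specific entries at any time.
        self._entries.pop(entry_id, None)

memory = ConversationMemory()
memory.remember("Prefers vegetarian restaurant recommendations")
wine_note = memory.remember("Asked about Tuscan wines last week")
memory.forget(wine_note)
print(memory.view())
```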

The memory function is currently available in beta on Grok’s website and mobile apps, although not yet accessible to users in the EU or UK.

The feature can be turned off at any time in the settings menu under Data Controls. Deleting individual memories is also possible via the web chat interface, with Android support expected shortly.

xAI has confirmed it is working on adding memory support to Grok’s version on X, an expansion aimed at deepening the bot’s integration with users’ digital lives rather than limiting the experience to one platform.

New Apple AI model uses private email comparisons

Apple has outlined a new approach to improving its AI features by privately analysing user data with the help of synthetic data. The move follows criticism of the company’s AI products, especially notification summaries, which have underperformed compared to competitors.

The new method relies on ‘differential privacy,’ where Apple generates synthetic messages that resemble real user data without containing any actual content.

These messages are used to create embeddings (abstract representations of message characteristics), which are then compared with real emails on the devices of users who have opted in to share analytics.

Devices send back signals indicating which synthetic data most closely matches real content, without sharing the actual messages with Apple.
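
Apple’s exact pipeline is not public. The sketch below illustrates the on-device step as described, assuming embeddings are plain vectors and using a randomised-response-style flip as a stand-in for the privacy noise; only an index, never the email text, would leave the device.

```python
import math
import random

# Illustrative sketch of the on-device comparison step; not Apple's code.
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

def closest_synthetic(local_embedding, synthetic_embeddings, flip_prob=0.1):
    # Pick the synthetic message whose embedding best matches the local email.
    best = max(range(len(synthetic_embeddings)),
               key=lambda i: cosine(local_embedding, synthetic_embeddings[i]))
    # Randomised-response stand-in for differential-privacy noise: sometimes a
    # random index is reported, so one device's answer reveals little on its own.
    if random.random() < flip_prob:
        return random.randrange(len(synthetic_embeddings))
    return best

synthetic = [[0.9, 0.1, 0.0], [0.2, 0.8, 0.1], [0.1, 0.2, 0.9]]  # invented vectors
local_email = [0.15, 0.75, 0.2]                                  # stays on device
print(closest_synthetic(local_email, synthetic))  # only this index is sent back
```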

Apple said the technique is already being used to improve its Genmoji models and will soon be applied to other features, including Image Playground, Image Wand, Memories Creation, Writing Tools, and Visual Intelligence.

The company also confirmed plans to improve email summaries using the same privacy-focused method, aiming to refine its AI tools while maintaining a strong commitment to user data protection.

Google uses AI and human reviews to fight ad fraud

Google has revealed it suspended 39.2 million advertiser accounts in 2024, more than triple the number from the previous year, as part of its latest push to combat ad fraud.

The tech giant said it is now able to block most bad actors before they even run an advert, thanks to advanced large language models and detection signals such as fake business details and fraudulent payments.
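
Google does not publish these checks. Purely as an illustration of what signal-based pre-screening at account setup could look like, the sketch below scores invented risk signals before any ad is allowed to run; none of the signal names, points, or thresholds are Google’s.

```python
# Purely illustrative sketch of pre-screening an advertiser account before any
# ad runs; the signal names, points, and threshold are invented, not Google's.
def suspend_before_first_ad(signals: dict[str, bool]) -> bool:
    risk_points = {
        "business_details_unverifiable": 2,
        "payment_instrument_flagged_fraudulent": 3,
        "matches_previously_suspended_advertiser": 3,
    }
    score = sum(points for name, points in risk_points.items() if signals.get(name))
    return score >= 3  # suspend the account before it can run an advert

print(suspend_before_first_ad({"payment_instrument_flagged_fraudulent": True}))
```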

Instead of relying solely on AI, a team of over 100 experts from across Google and DeepMind also reviews deepfake scams and develops targeted countermeasures.

The company rolled out more than 50 LLM-based safety updates last year and introduced over 30 changes to advertising and publishing policies. These efforts, alongside other technical reinforcements, led to a 90% drop in reports of deepfake ads.

The US saw the highest number of suspensions, with 39.2 million accounts suspended there alone, while India followed with 2.9 million accounts taken down. In both countries, ads were removed for violations such as trademark abuse, misleading personalisation, and financial service scams.

Overall, Google blocked 5.1 billion ads globally and restricted another 9.1 billion. Nearly half a billion of those removed were linked specifically to scam activity.

In a year when half the global population headed to the polls, Google also verified over 8,900 election advertisers and took down 10.7 million political ads.

While the scale of suspensions may raise concerns about fairness, Google said human reviews are included in the appeals process.

The company acknowledged previous confusion over enforcement clarity and is now updating its messaging to ensure advertisers understand the reasons behind account actions more clearly.

OpenAI updates safety rules amid AI race

OpenAI has updated its Preparedness Framework, the internal system used to assess AI model safety and determine necessary safeguards during development.

The company now says it may adjust its safety standards if a rival AI lab releases a ‘high-risk’ system without similar protections, a move that reflects growing competitive pressure in the AI industry.

OpenAI insists that any such changes would be made cautiously and with public transparency.

Critics argue OpenAI is already lowering its standards for the sake of faster deployment. Twelve former employees recently supported a legal case against the company, warning that a planned corporate restructure might encourage further shortcuts.

OpenAI denies these claims, but reports suggest compressed safety testing timelines and increasing reliance on automated evaluations instead of human-led reviews. According to sources, some safety checks are also run on earlier versions of models, not the final ones released to users.

The refreshed framework also changes how OpenAI defines and manages risk. Models are now classified as having either ‘high’ or ‘critical’ capability, the former referring to systems that could amplify harm, the latter to those introducing entirely new risks.

Instead of deploying models first and assessing risk later, OpenAI says it will apply safeguards during both development and release, particularly for models capable of evading shutdown, hiding their abilities, or self-replicating.

Opera brings AI assistant to Opera Mini on Android

Opera, the Norway-based browser maker, has announced the rollout of its AI assistant, Aria, to Opera Mini users on Android. The move represents a strategic effort to bring advanced AI capabilities to users with low-end devices and limited data access, rather than confining such tools to high-spec platforms.

Aria allows users to access up-to-date information, generate images, and learn about a range of topics using a blend of models from OpenAI and Google.

Since its 2005 launch, Opera Mini has been known for saving data during browsing, and Opera claims that the inclusion of Aria won’t compromise that advantage nor increase the app’s size.

This makes the AI assistant accessible to users in regions where data efficiency is critical, without forcing them to choose between smart features and performance.

Opera has long partnered with telecom providers in Africa to offer free data to Opera Mini users. However, last year, it had to end its programme in Kenya due to regulatory restrictions around ads on browser bookmark tiles.

Despite such challenges, Opera Mini has surpassed a billion downloads on Android and now serves more than 100 million users globally.

Alongside this update, Opera continues testing new AI functions, including features that let users manage tabs using natural language and tools that assist with task completion.

The effort reflects the company’s ambition to embed AI more deeply into everyday browsing rather than limiting innovation to its main browser.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!