Fake banking apps leave sellers thousands out of pocket

Scammers are using fake mobile banking apps to trick people into handing over valuable items without receiving any payment.

These apps, which convincingly mimic legitimate banking platforms, display fake ‘payment successful’ screens during in-person handovers, allowing fraudsters to walk away with goods while the money never arrives.

Victims like Anthony Rudd and John Reddock have lost thousands after being targeted while selling items through social media marketplaces. Mr Rudd handed over £1,000 worth of tools from his Salisbury workshop, only to realise the payment notification was fake.

Mr Reddock, from the UK, lost a £2,000 gold bracelet he had hoped to sell to fund a holiday for his children.

BBC West Investigations found that some of these fake apps, previously removed from the Google Play store, are now being downloaded directly from the internet onto Android phones.

The Chartered Trading Standards Institute described this scam as an emerging threat, warning that in-person fraud is growing more complex instead of fading away.

With police often unable to track down suspects, small business owners like Sebastian Liberek have been left feeling helpless after being targeted repeatedly.

He has lost hundreds of pounds to fake transfers and believes scammers will continue striking, while enforcement remains limited and platforms fail to do enough to stop the spread of fraud.

CISA extends MITRE’s CVE program for 11 months

The US Cybersecurity and Infrastructure Security Agency (CISA) has extended its contract with the MITRE Corporation to continue operating the Common Vulnerabilities and Exposures (CVE) program for an additional 11 months. The decision was made one day before the existing contract was set to expire.

A CISA spokesperson confirmed that the agency exercised the option period in its $57.8 million contract with MITRE to prevent a lapse in CVE services. The contract, which had been due to conclude on April 17, includes provisions for optional extensions through March 2026.

‘The CVE Program is invaluable to the cyber community and a priority of CISA,’ the spokesperson stated, expressing appreciation for stakeholder support.

Yosry Barsoum, vice president of MITRE and director of its Center for Securing the Homeland, said that CISA identified incremental funding to maintain operations.

He noted that MITRE remains committed to supporting both the CVE and CWE (Common Weakness Enumeration) programs, and acknowledged the widespread support from government, industry, and the broader cybersecurity community.

The extension follows public concern raised earlier this week after Barsoum issued a letter indicating that program funding was at risk of expiring without renewal.

MITRE officials noted that, in the event of a contract lapse, the CVE program website would eventually go offline and no new CVEs would be published. Historical data would remain accessible via GitHub.

Launched in 1999, the CVE program serves as a central catalogue for publicly disclosed cybersecurity vulnerabilities. It is widely used by governments, private sector organisations, and critical infrastructure operators for vulnerability identification and coordination.
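
For readers who want to look at the data itself, the CVE records mentioned above are mirrored publicly on GitHub. The snippet below is a minimal sketch, assuming the cvelistV5 repository layout and the CVE Record Format 5.x field names; it is not an official client, and the path convention and keys should be verified against the repository.

```python
# Hypothetical sketch: pull one CVE record from the public cvelistV5 GitHub
# mirror. The bucket layout (e.g. cves/2021/44xxx/...) and JSON keys are
# assumptions based on the CVE Record Format 5.x, not an official API.
import json
import urllib.request

RAW_BASE = "https://raw.githubusercontent.com/CVEProject/cvelistV5/main/cves"

def fetch_cve(cve_id: str) -> dict:
    """Download a single CVE record, e.g. 'CVE-2021-44228'."""
    _, year, number = cve_id.split("-")
    bucket = f"{int(number) // 1000}xxx"        # records are grouped in 1000-ID buckets
    url = f"{RAW_BASE}/{year}/{bucket}/{cve_id}.json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    record = fetch_cve("CVE-2021-44228")
    meta = record.get("cveMetadata", {})
    description = record.get("containers", {}).get("cna", {}).get("descriptions", [{}])[0]
    print(meta.get("cveId"), meta.get("datePublished"))
    print(description.get("value", "")[:200])   # first part of the summary text
```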

Amid recent uncertainty about the program’s future, a group of CVE Board members announced the formation of a new non-profit organisation — the CVE Foundation — aimed at supporting the long-term sustainability and governance of the initiative.

In a public statement, the group noted that while US government sponsorship had enabled the program’s growth, it also introduced concerns around reliance on a single national sponsor for what is considered a global public good.

The CVE Foundation is intended to provide a neutral, independent structure to ensure continuity and community oversight.

The foundation aims to enhance global governance, eliminate single points of failure in vulnerability management, and reinforce the CVE program’s role as a trusted and collaborative resource. Further information about the foundation’s structure and plans is expected to be released in the coming days.

CISA did not comment on the creation of the CVE Foundation. A MITRE spokesperson indicated the organisation intends to work with federal agencies, the CVE Board, and the cybersecurity community on options for ongoing support.

Zoom service restored after major outage

Zoom has resumed normal service after a widespread outage left users unable to join meetings or access its website for nearly two hours.

The disruption began around 2:40PM ET and was visible on monitoring platforms like Cisco’s ThousandEyes, which showed a sharp drop in connectivity.

Many users reported seeing an ‘Unable to Connect’ message when trying to join meetings, while others were locked out entirely.

The company’s main website displayed a 502 Bad Gateway error, and even Zoom’s press email was unreachable.

Although the exact cause remains unconfirmed, a Reddit post suggested the issue may have stemmed from the company’s domain being temporarily placed on ‘server hold’ at the registry, possibly due to a DNS or verification problem.

The issue appeared to be resolved around 4:12PM ET, though some users experienced delays as DNS updates propagated across networks.
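
The ‘server hold’ theory is something anyone can check from the command line. The sketch below assumes a Unix-style whois client is installed and that the registry lists EPP status codes (such as serverHold) in its whois output; it simply tests whether a domain still resolves and prints the statuses it finds.

```python
# Minimal diagnostic sketch: does the domain resolve, and what EPP statuses
# does whois report? 'serverHold' would mean the registry has pulled the
# domain from the DNS zone, matching the symptoms described above.
import socket
import subprocess

def check_domain(domain: str) -> None:
    try:
        addr = socket.gethostbyname(domain)        # plain A-record lookup
        print(f"{domain} resolves to {addr}")
    except socket.gaierror:
        print(f"{domain} does not resolve (consistent with a hold or DNS fault)")

    result = subprocess.run(["whois", domain], capture_output=True, text=True)
    for line in result.stdout.splitlines():
        if "Domain Status" in line:                # e.g. 'serverHold', 'clientHold', 'ok'
            print(line.strip())

check_domain("zoom.us")
```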

Zoom confirmed via X that service had been restored and thanked users for their patience. Further details from Zoom or its domain provider, GoDaddy, have yet to be released.

OpenAI deploys new safeguards for AI models to curb biothreat risks

OpenAI has introduced a new monitoring system to reduce the risk of its latest AI models, o3 and o4-mini, being misused to create chemical or biological threats.

The ‘safety-focused reasoning monitor’ is built to detect prompts related to dangerous materials and instruct the AI models to withhold potentially harmful advice, instead of providing answers that could aid bad actors.
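
In broad strokes, that is a gate placed in front of the model: a lightweight screen inspects the prompt, and the main model only answers when the screen passes. The sketch below is a generic illustration of this pattern, not OpenAI’s implementation; `risk_classifier` and `answer_model` are hypothetical stand-ins.

```python
# Generic two-stage safety gate (illustration only, not OpenAI's monitor):
# a classifier scores the prompt first, and the answering model is called
# only when the score falls below a refusal threshold.
from typing import Callable

REFUSAL = "I can't help with that request."

def gated_answer(prompt: str,
                 risk_classifier: Callable[[str], float],
                 answer_model: Callable[[str], str],
                 threshold: float = 0.5) -> str:
    """Return the model's answer unless the screen flags the prompt."""
    risk = risk_classifier(prompt)      # e.g. estimated probability the prompt seeks weapons help
    if risk >= threshold:
        return REFUSAL                  # withhold rather than answer
    return answer_model(prompt)

# Toy usage with dummy components:
dummy_classifier = lambda p: 0.9 if "pathogen synthesis" in p.lower() else 0.05
dummy_model = lambda p: f"(model answer to: {p})"
print(gated_answer("How do I cite a paper?", dummy_classifier, dummy_model))
print(gated_answer("Walk me through pathogen synthesis", dummy_classifier, dummy_model))
```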

These newer models represent a major leap in capability compared to previous versions, especially in their ability to respond to prompts about biological weapons. To counteract this, OpenAI’s internal red teams spent 1,000 hours identifying unsafe interactions.

Simulated tests showed the safety monitor successfully blocked 98.7% of risky prompts, although OpenAI admits the system does not account for users who simply retry with different wording, a gap the company says will continue to be covered by human oversight rather than automation alone.

Despite assurances that neither o3 nor o4-mini meets OpenAI’s ‘high risk’ threshold, the company acknowledges these models are more effective at answering dangerous questions than earlier ones like o1 and GPT-4.

Similar monitoring tools are also being used to block harmful image generation in other models, yet critics argue OpenAI should do more.

Concerns have been raised over rushed testing timelines and the lack of a safety report for GPT-4.1, which launched this week without accompanying transparency documentation.

xAI pushes Grok forward with memory update

Elon Musk’s AI venture, xAI, has introduced a new ‘memory’ feature for its Grok chatbot in a bid to compete more closely with established rivals like ChatGPT and Google’s Gemini.

The update allows Grok to remember details from past conversations, enabling it to provide more personalised responses when asked for advice or recommendations, instead of offering generic answers.

Unlike before, Grok can now ‘learn’ a user’s preferences over time, provided it’s used frequently enough. The move mirrors similar features from competitors, with ChatGPT already referencing full chat histories and Gemini using persistent memory to shape its replies.

According to xAI, the memory is fully transparent. Users can view what Grok has remembered and choose to delete specific entries at any time.
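
Conceptually, such a memory feature amounts to a per-user store the assistant can append to and the user can inspect or prune. The sketch below is an illustrative toy, not xAI’s implementation; the class and method names are invented for the example.

```python
# Toy per-user memory store mirroring the behaviour described above:
# remember facts, let the user view everything stored, and delete entries.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MemoryStore:
    enabled: bool = True                                           # mirrors the on/off toggle
    entries: Dict[str, List[str]] = field(default_factory=dict)   # user_id -> remembered facts

    def remember(self, user_id: str, fact: str) -> None:
        if self.enabled:
            self.entries.setdefault(user_id, []).append(fact)

    def view(self, user_id: str) -> List[str]:
        return list(self.entries.get(user_id, []))                 # transparency: show everything stored

    def forget(self, user_id: str, index: int) -> None:
        if user_id in self.entries and 0 <= index < len(self.entries[user_id]):
            self.entries[user_id].pop(index)                       # delete one specific entry

store = MemoryStore()
store.remember("alice", "prefers vegetarian restaurant recommendations")
print(store.view("alice"))   # ['prefers vegetarian restaurant recommendations']
store.forget("alice", 0)
print(store.view("alice"))   # []
```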

The memory function is currently available in beta on Grok’s website and mobile apps, although not yet accessible to users in the EU or UK.

The feature is enabled by default but can be turned off in the settings menu under Data Controls. Deleting individual memories is also possible via the web chat interface, with Android support expected shortly.

xAI has confirmed it is working on adding memory support to Grok on X, an expansion aimed at deepening the bot’s integration with users’ digital lives rather than limiting the experience to a single platform.

Europe struggles to explain quantum to its citizens

Most Europeans remain unclear about quantum technology, despite increasing attention from EU leaders. A new survey, released on World Quantum Day, reveals that while 78 per cent of adults in France and Germany are aware of quantum, only a third truly understand what it is.

Nearly half admitted they had heard of the term but didn’t know what it means.

Quantum science studies the smallest building blocks of the universe, such as electrons and atoms, which behave in ways classical physics cannot explain. Though invisible even to standard microscopes, these particles already power technologies such as GPS, MRI scanners and semiconductors.
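
A small toy example of that non-classical behaviour: a qubit can sit in a superposition of 0 and 1, and each measurement outcome appears with the squared amplitude as its probability. The snippet below simulates this; it is an illustration only and assumes nothing about any specific quantum hardware or EU programme.

```python
# A single qubit in an equal superposition of 0 and 1. Classically a bit is
# one or the other; here each measurement outcome occurs with probability
# |amplitude|^2 = 0.5 (the Born rule).
import numpy as np

amplitudes = np.array([1, 1]) / np.sqrt(2)      # state (|0> + |1>) / sqrt(2)
probabilities = np.abs(amplitudes) ** 2         # squared magnitudes
samples = np.random.default_rng(0).choice([0, 1], size=10, p=probabilities)
print(probabilities)                            # [0.5 0.5]
print(samples)                                  # a random mix of 0s and 1s
```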

Quantum tools could lead to breakthroughs in healthcare, cybersecurity and the fight against climate change by enabling ultra-precise imaging, improved encryption and advanced environmental monitoring.

The survey showed that 47 per cent of respondents expect quantum to positively impact their country within five years, with many hopeful about its role in areas like energy, medicine and fraud prevention.

For example, quantum computers might help simulate complex molecules for drug development, while quantum encryption could secure communications better than current systems.

The EU has committed to developing a European quantum chip and is exploring a potential Quantum Act, backed by €65 million in funding under the EU Chips Act. The UK has pledged £121 million for quantum initiatives.

However, Europe still trails behind China and the US, mainly due to limited private investment and slower deployment. Former ECB president Mario Draghi warned that Europe must build a globally competitive quantum ecosystem instead of falling behind further.

Google uses AI and human reviews to fight ad fraud

Google has revealed it suspended 39.2 million advertiser accounts in 2024, more than triple the number from the previous year, as part of its latest push to combat ad fraud.

The tech giant said it is now able to block most bad actors before they even run an advert, thanks to advanced large language models and detection signals such as fake business details and fraudulent payments.
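
Stripped of the specifics, ‘blocking bad actors before they run an advert’ is a pre-activation screen that combines several risk signals into a single decision. The sketch below is hypothetical and not Google’s system; the signal names are taken loosely from the article and the weights are invented for illustration.

```python
# Hypothetical pre-activation screen: score an advertiser account against a
# few risk signals at sign-up time and suspend it before any ad is served
# when the combined score crosses a threshold.
from typing import Dict

SIGNAL_WEIGHTS = {                      # illustrative signals named in the article
    "fake_business_details": 0.6,
    "fraudulent_payment": 0.8,
    "impersonation_flagged_by_llm": 0.7,
}

def pre_activation_screen(signals: Dict[str, bool], threshold: float = 0.7) -> str:
    score = sum(SIGNAL_WEIGHTS.get(name, 0.0) for name, fired in signals.items() if fired)
    if score >= threshold:
        return "suspend_before_serving"  # account never gets to run an ad
    return "allow_and_monitor"

print(pre_activation_screen({"fake_business_details": True,
                             "fraudulent_payment": False,
                             "impersonation_flagged_by_llm": False}))
```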

Instead of relying solely on AI, a team of over 100 experts from across Google and DeepMind also reviews deepfake scams and develops targeted countermeasures.

The company rolled out more than 50 LLM-based safety updates last year and introduced over 30 changes to advertising and publishing policies. These efforts, alongside other technical reinforcements, led to a 90% drop in reports of deepfake ads.

While the US saw the highest number of account suspensions, India followed with 2.9 million accounts taken down. In both countries, ads were removed for violations such as trademark abuse, misleading personalisation, and financial service scams.

Overall, Google blocked 5.1 billion ads globally and restricted another 9.1 billion. Nearly half a billion of the removed ads were linked specifically to scam activity.

In a year when half the global population headed to the polls, Google also verified over 8,900 election advertisers and took down 10.7 million political ads.

While the scale of suspensions may raise concerns about fairness, Google said human reviews are included in the appeals process.

The company acknowledged previous confusion over enforcement clarity and is now updating its messaging to ensure advertisers understand the reasons behind account actions more clearly.

EU plans major staff boost for digital rules

The European Commission is ramping up enforcement of its Digital Services Act (DSA) by hiring 60 more staff to support ongoing investigations into major tech platforms. Although the Commission has been opening probes into companies such as X, Meta, TikTok, AliExpress and Temu since December 2023, none of the investigations has yet concluded.

The Commission currently has 127 employees working on the DSA and aims to reach 200 by year’s end. Applications for the new roles, including legal experts, policy officers, and data scientists, remain open until 10 May.

The DSA, which came into full effect in February last year, applies to all online platforms operating in the EU. However, the 25 largest platforms, those with more than 45 million monthly users, such as Google, Amazon and Shein, fall under the direct supervision of the Commission rather than national regulators.

The most advanced case is against X, with early findings pointing to a lack of transparency and accountability.

The law has drawn criticism from the current Republican-led US government, which views it as discriminatory. Brendan Carr of the US Federal Communications Commission called the DSA ‘an attack on free speech,’ accusing the EU of unfairly targeting American companies.

In response, EU Tech Commissioner Henna Virkkunen insisted the rules are fair, applying equally to platforms from Europe, the US, and China.

Claude can now read your Gmail and Docs

Anthropic has introduced a new integration that allows its AI chatbot, Claude, to connect directly with Google Workspace.

The feature, now in beta for premium subscribers, enables Claude to reference content from Gmail, Google Calendar, and Google Docs to deliver more personalised and context-aware responses.

Users can expect in-line citations showing where specific information originated from within their Google account.

This integration is available for subscribers on the Max, Team, Enterprise, and Pro plans, though multi-user accounts require administrator approval.

While Claude can read emails and review documents, it cannot send emails or schedule events. Anthropic insists the system uses strict access controls and does not train its models on user data by default.

The update arrives as part of Anthropic’s broader efforts to enhance Claude’s appeal in a competitive AI landscape.

Alongside the Workspace integration, the company launched Claude Research, a tool that performs real-time web searches to provide fast, in-depth answers.

Although still smaller than ChatGPT’s user base, Claude is steadily growing, reaching 3.3 million web users in March 2025.

South Korea’s $23B chip industry boost in response to global trade war

South Korea announced a $23 billion support package for its semiconductor industry, up from last year’s $19 billion, to protect giants like Samsung and SK Hynix from US tariff uncertainty and growing competition from China.

The plan allocates 20 trillion won in financial aid, up from 17 trillion, to drive innovation and production, addressing a 31.8% drop in chip exports to China due to US trade restrictions.

The package responds to US policies under President Trump, including export curbs on high-bandwidth memory chips bound for China, which have disrupted global demand.

At the same time, Finance Minister Choi Sang-mok will negotiate with the US to mitigate potential national security probes on chip trade. 

South Korea’s strategy aims to safeguard a critical economic sector that powers everything from smartphones to AI, especially as its auto industry faces US tariff challenges. 

Analysts view this as a preemptive effort to shield the chip industry from escalating global trade tensions.

Why does it matter?

For South Koreans, the semiconductor sector is a national lifeline, tied to jobs and economic stability, with the government betting big to preserve its global tech dominance. As China’s tech ambitions grow and US policies remain unpredictable, Seoul’s $23 billion investment underscores the cost of staying competitive in a tech-driven world.