Anthropic aims to decode AI ‘black box’ within two years

Anthropic CEO Dario Amodei has unveiled an ambitious plan to make AI systems more transparent by 2027. In a recent essay titled ‘The Urgency of Interpretability,’ Amodei highlighted the pressing need to understand the inner workings of AI models.

He expressed concern over deploying highly autonomous systems without a clear grasp of their decision-making processes, deeming it ‘basically unacceptable’ for humanity to remain ignorant of how these systems function.

Anthropic is at the forefront of mechanistic interpretability, a field dedicated to deciphering the decision-making pathways of AI models. Despite these advancements, Amodei emphasized that much more research is needed to fully decode these complex systems.

Looking ahead, Amodei envisions conducting ‘brain scans’ or ‘MRIs’ of advanced AI models to detect potential issues like tendencies to deceive or seek power. He believes that achieving this level of interpretability could take five to ten years but is essential for the safe deployment of future AI systems.

Amodei also called on industry peers, including OpenAI and Google DeepMind, to intensify their research efforts in this area and urged governments to implement ‘light-touch’ regulations to promote transparency and safety in AI development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Perplexity to track users online for personalised ads

Perplexity is entering the browser space with bold ambitions, aiming to compete directly with Google by closely tracking user behaviour online. CEO Aravind Srinivas revealed that the company’s upcoming browser, named Comet, will collect data from user activity beyond its app to serve “hyper personalised” advertising.

He argued that browsing patterns and consumer behaviour offer far more insightful data than work-related prompts typed into AI chat tools. Srinivas suggested that users will accept this level of tracking because it results in more relevant advertisements and a potentially improved discovery experience.

The strategy mirrors tactics long used by Google and Meta, which have built lucrative advertising businesses through extensive user tracking. Despite recent scrutiny around data privacy, Srinivas remained confident in the approach, pointing to Comet’s May launch date.

In a move to expand its presence in the mobile ecosystem, Perplexity has partnered with Motorola to pre-install its app on the Razr phone series. The app will be accessible through Motorola’s Moto AI with a simple “Ask Perplexity” prompt.

Talks with Samsung are also reportedly ongoing, highlighting the startup’s intent to rival established tech giants not only in search and browsing, but also across devices.

YouTube is testing AI-generated video highlights

Google is expanding its AI Overviews feature to YouTube, bringing algorithmically generated video highlights and search suggestions to the platform. Initially rolled out to a limited number of YouTube Premium users in the US, the experimental tool uses AI to identify and surface the most relevant clips.

The AI-generated results are currently focused on shopping and travel content, offering viewers a new way to discover videos and related topics without watching entire clips.

Google says the feature is designed to streamline content discovery, though it arrives with some scepticism following the rocky debut of AI Overviews in Google Search last year. That version, introduced in May 2024, was widely criticised for factual errors and bizarre “hallucinations” in responses.

Despite its troubled track record, Google is pushing ahead with AI integration across its platforms. The company’s blog post emphasised that the YouTube trial remains limited in scope for now, while promising future refinements.

Whether the move improves user experience or adds confusion remains to be seen, as critics question the reliability of AI-generated summaries on such a massive and diverse video platform.

MTN confirms cybersecurity breach and data exposure

MTN Group has confirmed a cybersecurity breach that exposed personal data of some customers in certain markets. The telecom giant assured the public, however, that its core infrastructure remains secure and fully operational.

The breach involved an unknown third party gaining unauthorised access to parts of MTN’s systems, though the company emphasised that critical services, including mobile money and digital wallets, were unaffected.

In a statement released on Thursday, MTN clarified that investigations are ongoing, but no evidence suggests any compromise of its central infrastructure, such as its network, billing, or financial service platforms.

MTN has alerted South African law enforcement and is collaborating with regulatory bodies in the affected regions.

The company urged customers to take steps to safeguard their data, such as monitoring financial statements, using strong passwords, and being cautious with suspicious communications.

MTN also recommended enabling multi-factor authentication and avoiding sharing sensitive information like PINs or passwords through unsecured channels.

While investigations continue, MTN has committed to providing updates as more details emerge, reiterating its dedication to transparency and customer protection.

North Korean hackers create fake US firms to target crypto developers

North Korea’s Lazarus Group has launched a sophisticated campaign to infiltrate the cryptocurrency industry by registering fake companies in the US and using them to lure developers into downloading malware.

According to a Reuters investigation, these US-registered shell companies, including Blocknovas LLC and Softglide LLC, were set up using false identities and addresses, lending the operation a veneer of legitimacy and helping it avoid suspicion.

Once established, the fake firms posted job listings through legitimate platforms like LinkedIn and Upwork to attract developers. Applicants were guided through fake interview processes and instructed to download so-called test assignments.

Instead of harmless software, the files installed malware that enabled the hackers to steal passwords, crypto wallet keys, and other sensitive information.

The FBI has since seized Blocknovas’ domain and confirmed its connection to Lazarus, labelling the campaign a significant evolution in North Korea’s cyber operations.

These attacks were supported by Russian infrastructure, allowing Lazarus operatives to bypass North Korea’s limited internet access.

Tools such as VPNs and remote desktop software enabled them to manage operations, communicate over platforms like GitHub and Telegram, and even record training videos on how to exfiltrate data.

Silent Push researchers confirmed that the campaign has affected hundreds of developers, and that some of the stolen access was likely passed to state-aligned espionage units rather than used solely for theft.

Officials from the US, South Korea, and the UN say the revenue from such cyberattacks is funneled into North Korea’s nuclear missile programme. The FBI continues to investigate and has warned that not only the hackers but also those assisting their operations could face serious consequences.

ChatGPT expands Deep Research to more users

Deep Research, a feature OpenAI introduced to ChatGPT in February, is gradually becoming available across its user base. This includes subscribers on the Plus, Team, and Pro plans, while even those using the free ChatGPT app on iOS and Android can now access a simplified version.

Designed to carry out in-depth reports and analyses within minutes, Deep Research uses OpenAI’s o3 model to perform tasks that would otherwise take people hours to complete.

Instead of limiting access to paid users alone, OpenAI has rolled out a lightweight version powered by its o4-mini AI model for free users. Although responses are shorter, the company insists the quality and depth remain comparable.

The more efficient model also helps reduce costs, while delivering what OpenAI calls ‘nearly as intelligent’ results as the full version.

The feature’s capabilities stretch from suggesting personalised product purchases like cars or TVs, to helping with complex decisions such as choosing a university or analysing market trends.

Free-tier users are currently allowed up to five Deep Research tasks each month, whereas Plus and Team plans get ten full and fifteen lightweight tasks. Pro users enjoy a generous 125 tasks of each version per month, and EDU and Enterprise plans will begin access next week.

Once users hit their full version limit, they’ll be automatically shifted to the lightweight tool instead of losing access altogether. Meanwhile, Google’s Gemini offers a similar function for its paying customers, also aiming to deliver quick, human-level research and analysis.

Perplexity expands iPhone app with voice features

AI research firm Perplexity has rolled out a new voice assistant for iPhones, expanding its app’s functionality to include reminders, email writing, and third-party services like ride-booking.

The assistant allows for continuous voice interaction even when the app is running in the background, although it cannot access system-level features due to Apple’s limitations. First launched on Android in January, the AI now supports multiple apps and can play media or draft emails via default Apple apps.

Users can activate it using the Action button on newer iPhones, but some features still require manual input depending on system permissions. The assistant is free to use, with limitations on the number of messages, while a £20/month subscription lifts those restrictions.

Despite comparisons with Siri, Perplexity lacks screen or camera-sharing capabilities, though it can search content from podcasts and YouTube. Developers say the update marks a significant step towards offering an AI assistant that rivals native options.

Meta under scrutiny in France over digital ad practices

Meta, the parent company of Facebook, is facing fresh legal backlash in France, where 67 French media companies representing over 200 publications have filed a lawsuit alleging unfair competition in the digital advertising market.

The case, brought before the Paris business tribunal, accuses Meta of abusing its dominant position through massive personal data collection and targeted advertising without proper consent.

The case is the latest in a string of EU legal challenges facing the tech giant this week.

Media outlets such as TF1, France TV, BFM TV, and major newspaper groups like Le Figaro, Liberation, and Radio France are among the plaintiffs. 

They argue that Meta’s ad dominance is built on practices that undermine fair competition and jeopardise the sustainability of traditional media.

The French case adds to mounting pressure across the EU. In Spain, Meta is due to face trial over a €551 million complaint filed by over 80 media firms in October. 

Meanwhile, EU regulators fined Meta and Apple earlier this year for breaching European digital market rules, while online privacy advocates have launched parallel complaints over Meta’s data handling.

Legal firms Scott+Scott and Darrois Villey Maillot Brochier represent the French media alliance.

DeepSeek faces South Korean scrutiny over unauthorised data transfers

South Korea’s data protection authority has flagged serious privacy concerns over the operations of Chinese AI startup DeepSeek, accusing the company of transferring personal data and user-generated content abroad without consent. 

The findings come after a months-long investigation into the company’s conduct following its app launch in the South Korean market earlier this year.

According to the Personal Information Protection Commission, DeepSeek, officially registered as Hangzhou DeepSeek Artificial Intelligence Co. Ltd., failed to obtain user permission before transmitting personal information and AI prompt content to companies based in China and the US. 

This activity reportedly occurred during the app’s availability in local app stores in January.

In a particularly troubling revelation, the commission stated that DeepSeek forwarded user prompts, along with device and network information, to Beijing Volcano Engine Technology Co. Ltd. 

The startup later explained this was part of an effort to enhance user experience, but confirmed it stopped the transfer of such data on 10 April.

As a result, the commission has recommended that DeepSeek delete the previously shared content and immediately secure a lawful framework for any future overseas data transfers. 

Responding indirectly, China’s Foreign Ministry stressed that Beijing does not require companies to collect or store data illegally, asserting its stance amid growing international scrutiny over Chinese firms’ data practices. 

Meanwhile, DeepSeek has yet to respond publicly to the commission’s findings.

UK introduces landmark online safety rules to protect children

The UK’s regulator, Ofcom, has unveiled new online safety rules to provide stronger protections for children, requiring platforms to adjust algorithms, implement stricter age checks, and swiftly tackle harmful content by 25 July or face hefty fines. The measures target sites hosting pornography or content promoting self-harm, suicide, and eating disorders, demanding more robust efforts to shield young users.

Ofcom chief Dame Melanie Dawes called the regulations a ‘gamechanger,’ emphasising that platforms must adapt if they wish to serve under-18s in the UK. While supporters like former Facebook safety officer Prof Victoria Baines see this as a positive step, critics argue the rules don’t go far enough, with campaigners expressing disappointment over perceived gaps, particularly in addressing encrypted private messaging.

The rules, part of the Online Safety Act pending parliamentary approval, include over 40 obligations such as clearer terms of service for children, annual risk reviews, and dedicated accountability for child safety. The NSPCC welcomed the move but urged Ofcom to tighten oversight, especially where hidden online risks remain unchecked.
