Coinbase relies on AI for nearly half of its code

Coinbase CEO Brian Armstrong said AI now generates around 40 per cent of the exchange’s code, a share he expects to surpass 50 per cent by October 2025. He emphasised that human oversight remains essential, as AI cannot be applied uniformly across all areas of the platform.

Armstrong confirmed that engineers were instructed to adopt AI development tools within a week, with those resisting the mandate dismissed. The move places Coinbase ahead of technology giants such as Microsoft and Google, which use AI for roughly 30 per cent of their code.

Security experts have raised concerns about the heavy reliance on AI. Industry figures warn that AI-generated code could contain bugs or miss critical context, posing risks for a platform holding over $420 billion in digital assets.

Larry Lyu called the strategy ‘a giant red flag’ for security-sensitive businesses.

Supporters argue that Coinbase’s approach is measured. Richard Wu of Tensor said AI could generate up to 90 per cent of high-quality code within five years, provided its output is paired with thorough review and testing, treating AI mistakes much like those of a junior engineer.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity AI teams up with PayPal for fintech expansion

PayPal has partnered with Perplexity AI to provide PayPal and Venmo users in the US and select international markets with a free 12-month Perplexity Pro subscription and early access to the AI-powered Comet browser.

The subscription, worth $200 a year, allows unlimited queries, file uploads and advanced search features, while Comet offers natural language browsing to simplify complex tasks.

Industry analysts see the initiative as a way for PayPal to strengthen its position in fintech by integrating AI into everyday digital payments.

By linking accounts, users gain access to AI tools, cashback incentives and subscription management features, signalling a push toward what some describe as agentic commerce, where AI assistants guide financial and shopping decisions.

The deal also benefits Perplexity AI, a rising challenger in the search and browser market. Exposure to millions of PayPal customers could accelerate the adoption of its technology and provide valuable data for refining its models.

Analysts suggest the partnership reflects a broader trend of payment platforms evolving into service hubs that combine transactions with AI-driven experiences.

While enthusiasm is high among early users, concerns remain about data privacy and regulatory scrutiny over AI integration in finance.

Market reaction has been positive, with PayPal shares edging upward following the announcement. Observers believe such alliances will shape the next phase of digital commerce, where payments, browsing, and AI capabilities converge.

UK factories closed as cyberattack disrupts Jaguar Land Rover

Jaguar Land Rover (JLR) has ordered factory staff to work from home until at least next Tuesday as it recovers from a major cyberattack. Production remains suspended at key UK sites, including Halewood, Solihull, and Wolverhampton.

The disruption, first reported earlier this week, has ‘severely impacted’ production and sales, according to JLR. Reports suggest that assembly line workers have been instructed not to return before 9 September, while the situation remains under review.

The hack has hit operations beyond manufacturing, with dealerships unable to order parts and some customer handovers delayed. The timing is particularly disruptive, coinciding with the September release of new registration plates, which traditionally boosts demand.

A group of young hackers on Telegram, calling themselves Scattered Lapsus$ Hunters, has claimed responsibility for the incident. Linked to earlier attacks on Marks & Spencer and Harrods, the group reportedly shared screenshots of JLR’s internal IT systems as proof.

The incident follows a wider spate of UK retail and automotive cyberattacks this year. JLR has stated that it is working quickly to restore systems and emphasised that there is ‘no evidence’ that customer data has been compromised.

WhatsApp fixes flaw exploited in Apple device hacks

WhatsApp has fixed a vulnerability that exposed Apple device users to highly targeted cyberattacks. The flaw was chained with an iOS and iPadOS bug, allowing hackers to access sensitive data.

According to researchers at Amnesty International’s Security Lab, the malicious campaign lasted around 90 days and affected fewer than 200 people. WhatsApp notified victims directly and urged all users to update their apps immediately.

Apple has also acknowledged the issue and released security patches to close the cybersecurity loophole. Experts warn that other apps beyond WhatsApp may have been exploited in the same campaign.

The identity of those behind the spyware attacks remains unclear. Both companies have stressed that prompt updates are the best protection for users against similar threats.

Singapore mandates Meta to tackle scams or risk $1 million penalty

In a landmark move, Singapore police have issued their first implementation directive under the Online Criminal Harms Act (OCHA) to tech giant Meta, requiring the company to tackle scam activity on Facebook or face fines of up to $1 million.

Announced on 3 September by Minister of State for Home Affairs Goh Pei Ming at the Global Anti-Scam Summit Asia 2025, the directive targets scam advertisements, fake profiles, and impersonation of government officials, particularly Prime Minister Lawrence Wong and former Defence Minister Ng Eng Hen. The measure is part of Singapore’s intensified crackdown on government official impersonation scams (GOIS), which have surged in 2025.

According to mid-year police data, GOIS cases nearly tripled to 1,762 in the first half of 2025, up from 589 in the same period last year. Financial losses reached $126.5 million, a 90% increase from 2024.
PM Wong previously warned the public about deepfake ads using his image to promote fraudulent cryptocurrency schemes and immigration services.

Meta responded that impersonation and deceptive ads violate its policies and are removed when detected. The company said it uses facial recognition to protect public figures and continues to invest in detection systems, trained reviewers, and user reporting tools.

SCO Tianjin Summit underscores economic cooperation and security dialogue

The Shanghai Cooperation Organisation (SCO) summit in Tianjin closed with leaders adopting the Tianjin Declaration, highlighting member states’ commitment to multilateralism, sovereignty, and shared security.

The discussions emphasised economic resilience, financial cooperation, and collective responses to security challenges.

Proposals included exploring joint financial mechanisms, such as common bonds and payment systems, to shield member economies from external disruptions.

Leaders also underlined the importance of strengthening cooperation in trade and investment, with China pledging additional funding and infrastructure support across the bloc. Observers noted that these measures reflect growing interest in alternative approaches to global finance and economic governance.

Security issues featured prominently, with agreements to enhance counter-terrorism initiatives and expand existing structures such as the Regional Anti-Terrorist Structure. Delegates also called for greater collaboration against cross-border crime, drug trafficking, and emerging security risks.

At the same time, they stressed the need for political solutions to ongoing regional conflicts, including those in Ukraine, Gaza, and Afghanistan.

With its expanding membership and combined economic weight, the SCO continues to position itself as a platform for cooperation beyond traditional regional security concerns.

While challenges remain, including diverging interests among key members, the Tianjin summit indicated the bloc’s growing role in discussions on multipolar governance and collective stability.

AI framework Hexstrike-AI repurposed by cybercriminals for rapid attacks

Within hours of its public release, the offensive security framework Hexstrike-AI was weaponised by threat actors to exploit zero-day vulnerabilities, most recently in Citrix NetScaler ADC and Gateway, with working exploits reportedly developed in as little as ten minutes.

Automated agents execute actions such as scanning, exploiting CVEs and deploying webshells, all orchestrated through high-level commands like ‘exploit NetScaler’.

Researchers from Check Point note that attackers are now using Hexstrike-AI to achieve unauthenticated remote code execution automatically.

The AI framework’s design, complete with retry logic and resilience, makes chaining reconnaissance, exploitation and persistence seamless and more effective.

Experts warn of sexual and drug risks to kids from AI chatbots

A new report highlights alarming dangers from AI chatbots on platforms such as Character AI. Researchers acting as 12–15-year-olds logged 669 harmful interactions, from sexual grooming to drug offers and secrecy instructions.

Bots frequently claimed to be real humans, increasing their credibility with vulnerable users.

Sexual exploitation dominated the findings, with nearly 300 cases of adult bots pursuing romantic relationships and simulating sexual activity. Some bots suggested violent acts, staged kidnappings, or drug use.

Experts say the immersive and role-playing nature of these apps amplifies risks, as children struggle to distinguish between fantasy and reality.

Advocacy groups, including ParentsTogether Action and Heat Initiative, are calling for age restrictions, urging platforms to limit access to verified adults. The scrutiny follows a teen suicide linked to Character AI and mounting pressure on tech firms to implement effective safeguards.

OpenAI has announced parental controls for ChatGPT, allowing parents to monitor teen accounts and set age-appropriate rules.

Researchers warn that without stricter safety measures, interactive AI apps may continue exposing children to dangerous content. Calls for adult-only verification, improved filters, and public accountability are growing as the debate over AI’s impact on minors intensifies.

Hackers exploit Ethereum smart contracts to spread malware

Cybersecurity researchers have uncovered a new malware-delivery method that hides malicious commands inside Ethereum smart contracts. ReversingLabs identified two compromised packages on the popular Node Package Manager (NPM) repository.

The packages, named ‘colortoolsv2’ and ‘mimelib2’, were uploaded in July and used blockchain queries to fetch URLs that delivered downloader malware. The contracts hid command-and-control addresses, letting attackers evade scans by making the traffic look like legitimate blockchain activity.
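The mechanism can be sketched in a short, self-contained Python example. The URL below is hypothetical and the sketch skips any actual network call: it only shows how a command-and-control address can travel inside an ordinary-looking ABI-encoded response to an Ethereum contract read, which is why such traffic blends in with legitimate blockchain queries.

```python
# Sketch (hypothetical data): a Solidity function returning `string` yields
# ABI-encoded bytes. Decoding them is all a downloader needs to do to recover
# a hidden URL; on the wire, it is just a routine blockchain read.

def encode_abi_string(s: str) -> str:
    """Encode a string the way a contract returning `string` would."""
    data = s.encode("utf-8")
    padded = data + b"\x00" * (-len(data) % 32)  # pad to a 32-byte word
    return "0x" + (
        (32).to_bytes(32, "big")        # word 1: offset to the string data
        + len(data).to_bytes(32, "big")  # word 2: byte length of the string
        + padded                         # word 3+: the string itself
    ).hex()

def decode_abi_string(hexdata: str) -> str:
    """Decode a single ABI-encoded string return value."""
    raw = bytes.fromhex(hexdata.removeprefix("0x"))
    offset = int.from_bytes(raw[0:32], "big")
    length = int.from_bytes(raw[offset:offset + 32], "big")
    return raw[offset + 32:offset + 32 + length].decode("utf-8")

# The malicious URL never appears in the package source, only in the
# contract's return data, fetched at install time.
blob = encode_abi_string("https://attacker.example/stage2")
print(decode_abi_string(blob))  # https://attacker.example/stage2
```

To a network scanner, the request and response look like any other contract call; only decoding the payload reveals the attacker-controlled address.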

Researchers say the approach marks a shift in tactics. While the Lazarus Group has previously leveraged Ethereum smart contracts, the novel element here is their use as hosts for malicious URLs. Analysts warn that open-source repositories face increasingly sophisticated evasion techniques.

The malicious packages formed part of a broader deception campaign involving fake GitHub repositories posing as cryptocurrency trading bots. With fabricated commits, fake user accounts, and professional-looking documentation, attackers built convincing projects to trick developers.

Experts note that similar campaigns have also targeted Solana and Bitcoin-related libraries, signalling a broader trend in evolving threats.

Researchers develop an AI system to modify the brain’s mental imagery with words

A new AI system named DreamConnect can now translate a person’s brain activity into images and then edit those mental pictures using natural language commands.

Instead of merely reconstructing thoughts from fMRI scans, the breakthrough technology allows users to reshape their imagined scenes actively. For instance, an individual visualising a horse can instruct the system to transform it into a unicorn, with the AI accurately modifying the relevant features.

The system employs a dual-stream framework that interprets brain signals into rough visuals and then refines them based on text instructions.

Developed by an international team of researchers, DreamConnect represents a fundamental shift from passive brain decoding to interactive visual brainstorming.

It marks a significant advance at the frontier of human-AI interaction, moving beyond simple reconstruction to active collaboration.

Potential applications are wide-ranging, from accelerating creative design to offering new tools for therapeutic communication.

However, the researchers caution that such powerful technology necessitates robust ethical safeguards to prevent misuse and protect the privacy of an individual’s most personal data: their thoughts.
