New bill creates National Cybersecurity Authority in Brazil

Brazil is set to approve its first comprehensive Cybersecurity Legal Framework with Bill No. 4752/2025. The legislation creates a National Cybersecurity Authority and requires compliance for government procurement, with shared responsibility for supply chain security incidents.

The framework aims to unify the country’s fragmented cybersecurity policies. Government agencies will follow standards set by the new National Cybersecurity Authority (ANC), while companies delivering services to public entities must meet minimum cybersecurity requirements.

The ANC will also publish lists of compliant suppliers, providing a form of certification that could enhance trust in both public and private partnerships.

Supply chain oversight is a key element of the bill. Public bodies must assess supplier risks, and liability will be shared in the event of breaches.

The law encourages investment in national cybersecurity technologies and offers opportunities for companies to access financing and participate in the National Cybersecurity Program.

Approval would make Brazil one of the first Latin American countries with a robust federal cybersecurity law. The framework aims to strengthen protections, encourage innovation, and boost confidence for citizens, businesses, and international partners.

Companies that prepare now will gain a competitive advantage when the law comes into effect.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Italy passes Europe’s first national AI law

Italy has become the first EU country to pass a national AI law, introducing detailed rules to govern the development and use of AI technologies across key sectors such as health, work, and justice.

The law, approved by the Senate on 17 September and taking effect on 10 October, designates the national authorities responsible for oversight, including the Agency for Digital Italy and the National Cybersecurity Agency. Both bodies will supervise compliance, security, and responsible use of AI systems.

In healthcare, the law simplifies data-sharing for scientific research by allowing the secondary use of anonymised or pseudonymised patient data. New rules also require transparency and parental consent when AI services are used by minors under 14.

The law introduces criminal penalties for those who use AI-generated images or videos to cause harm or deception. The Italian approach combines regulation with innovation, seeking to protect citizens while promoting responsible growth in AI development.

Breach at third-party support provider exposes Discord user data

Discord has disclosed a security incident after a third-party customer service provider was compromised. The breach exposed personal data from users who contacted Discord’s support and Trust & Safety teams.

An unauthorised party accessed the provider’s ticketing system and targeted user data in an extortion attempt. Discord revoked access, launched an investigation with forensic experts, and notified law enforcement. Impacted users will be contacted via official email.

Compromised information may include usernames, contact details, partial billing data, IP addresses, customer service messages, and limited government-ID images. Passwords, authentication data, and full credit card numbers were not affected.

Discord has notified data protection authorities and strengthened security controls for third-party providers. It has also reviewed threat detection systems to prevent similar incidents.

The company urges affected users to remain vigilant against suspicious messages. Service agents are available to answer questions and provide additional support.

Galaxy users get Coinbase One perks via Samsung Wallet

Samsung Electronics has expanded its partnership with Coinbase to integrate cryptocurrency trading directly into Samsung Wallet for US Galaxy users. The update allows users to buy crypto within the app using Samsung Pay, further merging digital payments with investment management.

The collaboration also introduces a complimentary three-month Coinbase One subscription for Samsung Wallet users. The premium tier removes trading fees on select assets, increases staking rewards, and provides exclusive partner deals.

Samsung executives said the goal is to make everyday financial tools more seamless for millions of Galaxy users. The Wallet already stores IDs, memberships, and car keys, and now supports peer-to-peer transfers and instalment payments through partnered financial institutions.

Coinbase said the initiative leverages its trusted trading infrastructure and Samsung’s global reach to make crypto access more convenient. More than 75 million US Galaxy users are expected to benefit, with expansion to other markets planned in the near future.

Labour market remains stable despite rapid AI adoption

Surveys show persistent anxiety about AI-driven job losses. Nearly three years after ChatGPT’s launch, labour data indicate that these fears have not materialised. Researchers examined shifts in the US occupational mix since late 2022, comparing them to earlier technological transitions.

Their analysis found that shifts in job composition have been modest, resembling the gradual changes seen during the rise of computers and the internet. The overall pace of occupational change has not accelerated substantially, suggesting that widespread job losses due to AI have not yet occurred.

Industry-level data shows limited impact. High-exposure sectors, such as Information and Professional Services, have seen shifts, but many predate the introduction of ChatGPT. Overall, labour market volatility remains below the levels of historical periods of major change.

To better gauge AI’s impact, the study compared OpenAI’s exposure data with Anthropic’s usage data from Claude. The two show limited correlation, indicating that high exposure does not always imply widespread use, especially outside of software and quantitative roles.

Researchers caution that significant labour effects may take longer to emerge, as seen with past technologies. They argue that transparent, comprehensive usage data from major AI providers will be essential to monitor real impacts over time.

Thousands affected by AI-linked data breach in New South Wales

A major data breach has affected the Northern Rivers Resilient Homes Program in New South Wales.

Authorities confirmed that personal information was exposed after a former contractor uploaded data to the AI platform ChatGPT between 12 and 15 March 2025.

The leaked file contained over 12,000 records, with details including names, addresses, contact information and health data. Up to 3,000 individuals may be impacted.

While there is no evidence yet that the information has been accessed by third parties, the NSW Reconstruction Authority (RA) and Cyber Security NSW have launched a forensic investigation.

Officials apologised for the breach and pledged to notify all affected individuals in the coming week. ID Support NSW is offering free advice and resources, while compensation will be provided for any costs linked to replacing compromised identity documents.

The RA has also strengthened its internal policies to prevent unauthorised use of AI platforms. An independent review of the incident is underway to determine how the breach occurred and why notification took several months.

Nintendo denies lobbying the Japanese government over generative AI

Video game company Nintendo has denied reports that it lobbied the Japanese government over the use of generative AI. The company issued an official statement on its Japanese X account, clarifying that it has had no contact with authorities.

The rumour originated from a post by Satoshi Asano, a member of Japan’s House of Representatives, who suggested that private companies had pressed the government on intellectual property protection concerning AI.

After Nintendo’s statement, Asano retracted his remarks and apologised for spreading misinformation.

Nintendo stressed that it would continue to protect its intellectual property against infringement, whether AI was involved or not. The company reaffirmed its cautious approach toward generative AI in game development, focusing on safeguarding creative rights rather than political lobbying.

The episode underscores the sensitivity around AI in Japan’s creative industries, where concerns about copyright and technological disruption are fuelling debate. Nintendo’s swift clarification signals how seriously it takes misinformation and the protection of its brand.

EU kicks off cybersecurity awareness campaign against phishing threats

European Cybersecurity Month (ECSM) 2025 has kicked off, with this year’s campaign centring on the growing threat of phishing attacks.

The initiative, driven by the EU Agency for Cybersecurity (ENISA) and the European Commission, seeks to raise awareness and provide practical guidance to European citizens and organisations.

Phishing remains the primary vector through which threat actors launch social engineering attacks. This year’s ECSM materials expand the scope beyond email to variants such as SMS phishing (smishing), QR code phishing (quishing), voice phishing (vishing), and business email compromise (BEC).

ENISA warns that, as of early 2025, over 80 percent of observed social engineering activity involves AI, with language models enabling more convincing and scalable scams.

To support the campaign, actors at every level, from individual citizens to large organisations, are encouraged to engage in training, simulations, awareness sessions and public outreach under the banner #ThinkB4UClick.

A cross-institutional kick-off event is also scheduled, bringing together the EU institutions, member states and civil society to align messaging and launch coordinated activities.

US AI models outperform Chinese rival DeepSeek

The National Institute of Standards and Technology’s Center for AI Standards and Innovation (CAISI) found that AI models from Chinese developer DeepSeek trail US models in performance, cost, security, and adoption.

Evaluations covered three DeepSeek models and four leading US models, including OpenAI’s GPT-5 series and Anthropic’s Claude Opus 4, across 19 benchmarks.

US AI models outperformed DeepSeek across nearly all benchmarks, with the most significant gaps in software engineering and cybersecurity tasks. CAISI found DeepSeek models costlier and far more vulnerable to hijacking and jailbreaking, posing risks to developers, consumers, and national security.

DeepSeek models were observed to echo inaccurate Chinese Communist Party narratives four times more often than US reference models. Despite weaknesses, DeepSeek model adoption has surged, with downloads rising nearly 1,000% since January 2025.

CAISI is a key contact for industry collaboration on AI standards and security. The evaluation aligns with the US government’s AI Action Plan, which aims to assess the capabilities and risks of foreign AI while securing American leadership in the field.

AI platforms barred from cloning Asha Bhosle’s voice without consent

The Bombay High Court has granted ad-interim relief to Asha Bhosle, barring AI platforms and sellers from cloning her voice or likeness without consent. The 90-year-old playback singer, whose career spans eight decades, approached the court to protect her identity from unauthorised commercial use.

Bhosle filed the suit after discovering platforms offering AI-generated voice clones mimicking her singing. Her plea argued that such misuse damages her reputation and goodwill. Justice Arif S. Doctor found a strong prima facie case and stated that such actions would cause irreparable harm.

The order restrains defendants, including US-based Mayk Inc, from using machine learning, face-morphing, or generative AI to imitate her voice or likeness. Google, also named in the case, has agreed to take down specific URLs identified by Bhosle’s team.

Defendants are required to share subscriber information, IP logs, and payment details to assist in identifying infringers. The court emphasised that cloning the voices of cultural icons risks misleading the public and infringing on individuals’ rights to their identity.

The ruling builds on recent cases in India affirming personality rights and sets an important precedent in the age of generative AI. The matter is scheduled to return to court on 13 October 2025.
