Italy fines major bank over data protection failures

The Italian Data Protection Authority has imposed a €31.8 million fine on Intesa Sanpaolo following serious shortcomings in its handling of personal data.

The case stems from unauthorised access by an employee to thousands of customer accounts, raising concerns about internal oversight and data protection safeguards.

Investigations revealed that monitoring systems failed to detect repeated, unjustified access to sensitive financial information over an extended period. The breach also affected high-risk individuals, exposing the absence of robust, targeted risk-based controls.

Authorities in Italy identified violations of core data protection principles, including integrity, confidentiality and accountability. Additional concerns arose from delays in notifying both regulators and affected individuals, limiting the ability to respond effectively to the incident.

The case of Intesa Sanpaolo underscores increasing regulatory scrutiny of data governance practices in the financial sector. Strengthening internal controls and ensuring timely breach reporting remain essential for maintaining trust and compliance in data-driven banking environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK authorities have fined an Apple subsidiary over a sanctions breach

The UK has fined Apple Inc. subsidiary Apple Distribution International £390,000 for breaching sanctions linked to Russia. The penalty relates to payments routed through a UK bank to a Russian streaming platform.

The payments, totalling more than £635,000, were made to Okko from a UK-based account. The subsidiary, responsible for Apple product sales across Europe and the Middle East, instructed the transfers despite the platform’s ownership links to sanctioned entities.

The Office of Financial Sanctions Implementation found the funds were linked to Sberbank and a company later sanctioned after the 2022 Ukraine invasion. Payments were made shortly after those restrictions came into force.

Regulators said the firm had voluntarily disclosed the transactions and had not been aware of the sanctions breach at the time. Apple stated it follows all applicable laws and has strengthened its compliance procedures following the incident.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK-backed SPOQC mission launches to test space-based quantum communications

A UK-led research mission aimed at advancing space-based quantum communications has launched aboard a SpaceX Transporter-16 rocket from Vandenberg Space Force Base in California. The Satellite Platform for Optical Quantum Communications, or SPOQC, was developed under the Integrated Quantum Networks (IQN) Hub, led by Heriot-Watt University, and lifted off on 30 March 2026.

The mission builds on research and development carried out first through the Quantum Communications Hub and later through the IQN Hub, both funded by the Engineering and Physical Sciences Research Council. Five UK research institutions are involved in the collaboration, which is intended to strengthen UK capabilities in space-based quantum communications as governments and researchers prepare for the cybersecurity implications of more powerful quantum computing systems.

SPOQC is now in the final stages of commissioning before it begins transmitting quantum signals to receivers at the Hub Optical Ground Station at Heriot-Watt University in Edinburgh. The CubeSat is operating in a low Earth, Sun-synchronous orbit and passes over the UK about twice a day, with most measurements expected to take place during night-time passes, when experimental conditions are more favourable.
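The figure of about two UK passes a day from low Earth orbit can be sanity-checked with Kepler's third law. Below is a minimal sketch; the 550 km altitude is an illustrative assumption, as the article does not state SPOQC's actual orbital altitude.

```python
import math

MU_EARTH = 3.986004418e14  # Earth's standard gravitational parameter, m^3/s^2
R_EARTH = 6_371_000        # mean Earth radius, m
ALT = 550_000              # assumed altitude for a Sun-synchronous LEO, m (illustrative)

a = R_EARTH + ALT  # semi-major axis of a circular orbit, m
period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)  # Kepler's third law
orbits_per_day = 86400 / period_s

print(f"Orbital period: {period_s / 60:.1f} min")
print(f"Orbits per day: {orbits_per_day:.1f}")
```

At this altitude the satellite completes roughly 15 orbits per day, but Earth's rotation shifts the ground track between successive orbits, so only a small number of passes fall within range of a single ground station, consistent with the article's figure of about two UK passes daily.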

The mission’s wider policy relevance lies in its connection to the UK’s National Quantum Strategy, which views quantum technologies as important to national resilience, digital infrastructure, and long-term competitiveness. The project presents satellite-based systems as the most practical route towards resilient international quantum communication, since terrestrial fibre links suffer distance-related losses that degrade quantum signals.

A distinctive feature of the mission is its dual quantum source payload. One source uses discrete quantum signals at the single-photon level and was developed by the University of Bristol team, while the other uses continuous-variable signals and was developed by researchers at the University of York. Both connect to dedicated receivers at the optical ground station, allowing researchers to compare two established but technically different communication methods under varying atmospheric and orbital conditions.

‘The SPOQC mission is the culmination of outstanding collaborations between leading UK Universities, STFC RAL Space, and external industry partners. It offers a world-first platform to critically compare different quantum communication modalities, including the first use of continuous variable approaches from space. Through the IQN Hub, the SPOQC mission is a vital enabler towards truly global quantum communication via integration into terrestrial UK networks,’ said Professor Gerald Buller, Director of the IQN Hub.

The collaboration brings together the Universities of Bristol, Heriot-Watt, Strathclyde and York, alongside the Science and Technology Facilities Council’s RAL Space. STFC RAL Space contributed engineering, systems integration and mission support, while Heriot-Watt is operating the optical ground station. ISISPACE provided the satellite and technical support.

Researchers say the mission will also test whether quantum technologies can be scaled down to a 12U CubeSat, roughly the size of a microwave oven, as a proof of concept for future compact and lower-cost satellite quantum networks. SPOQC follows the November 2025 launch of SpeQtre, a UK-Singapore collaboration led by STFC RAL Space and SpeQtral, making it the second quantum mission supported by UK research to launch within six months.

Full quantum communication experiments are expected to begin in the second half of 2026 once commissioning is complete. Professor Tim Spiller from the University of York said: ‘As Director of the preceding Quantum Communications Hub, it is very pleasing to see six years of R&D by that Hub team to develop SPOQC and HOGS finally be rewarded with the launch of SPOQC. However, this will add a crucial link to the UK’s expanding quantum networking capability. I look forward to the first quantum demonstrations from SPOQC and HOGS later this year.’

Andy Vick, Disruptive Technology Programme Lead at STFC RAL Space, said: ‘The launch of two quantum CubeSats in close succession highlights the UK’s growing leadership in quantum technology. While both missions share a common satellite platform, SPOQC has united new partners to address new challenges. The RAL Space team is proud to have contributed from the outset, working closely with the Quantum Communications Hub, whose initial work laid strong foundations for the mission, and now supporting its delivery under the leadership of the IQN Hub. SPOQC is a big step for all the teams involved, one that we hope will pave the way for the UK’s national quantum network mission.’

Dr Kedar Pandya, Executive Director of EPSRC’s Strategy Directorate, said: ‘The SPOQC mission is a powerful example of how UK research leadership is shaping the future of secure global communications. By uniting world-class expertise across our quantum research hubs, we’re demonstrating not only scientific excellence but real technological ambition. This launch marks a major step toward quantum-secure networks that will help safeguard the UK’s digital infrastructure for decades to come.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ofcom proposes tougher rules on scam mobile messages

New proposals from Ofcom aim to reduce scam activity on mobile messaging services across the UK. The measures are designed to strengthen protections for users and businesses affected by large-scale fraud campaigns.

Scammers often combine mobile messages with other channels such as calls, emails, social media and online adverts to trick victims into revealing personal information or making payments.

While telecom operators have introduced safeguards in recent years, regulators say current efforts do not go far enough.

The proposed framework would require mobile operators and messaging aggregators to prevent scammers from accessing messaging systems and to detect and disrupt malicious activity where it occurs.

The goal is to close existing gaps in industry defences and reduce the volume of scam messages reaching users. Ofcom plans to finalise its decision in summer 2026, following completion of its consultation process.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN launches Global Mechanism on ICT security, elects chair for 2026–2027

The United Nations has convened the organisational session of the Global Mechanism on developments in the field of ICTs in the context of international security and advancing responsible state behaviour in the use of ICTs, a new permanent forum established by UN General Assembly resolution 80/16.

The session was opened by Izumi Nakamitsu, Under-Secretary-General and High Representative for Disarmament Affairs, who facilitated the election of Ambassador Egriselda López of El Salvador as chair for the 2026–2027 biennium.

During the meeting, the Russian Federation said it would not block the consensus-based appointment of López to ensure the swift launch of the mechanism. However, it expressed ‘deep disquiet’ regarding the pre-election process, stating that the UN Office for Disarmament Affairs (UNODA) had initiated an informal silence procedure on 13 March regarding López’s candidacy without prior discussion with member states. The delegation described the step as ‘unauthorised’ under UN General Assembly resolutions 79/237 and 80/16.

In her remarks following the election, López emphasised that the mechanism should focus on implementation of existing commitments, stating the need to move from agreements to ‘concrete results.’ She underlined that the process remains intergovernmental and should be guided by consensus among member states.

The session adopted its provisional agenda and proceeded with a general exchange of views among delegations.

Several regional groups outlined priorities for the mechanism. Nigeria, speaking on behalf of the African Group, highlighted capacity development as a cross-cutting priority and pointed to cybersecurity threats affecting developing countries, including ransomware and attacks on critical infrastructure.

The Pacific Islands Forum, represented by the Solomon Islands, emphasised the vulnerabilities of Small Island Developing States and called for practical implementation of agreed measures.

The Arab Group and the European Union also stressed the importance of translating existing frameworks into action, with the EU highlighting the need to enhance implementation of the agreed framework for responsible state behaviour in cyberspace.

Across statements, delegations highlighted several common priorities, including:

  • strengthening capacity development efforts;
  • addressing ransomware and threats to critical infrastructure;
  • advancing the application of international law in cyberspace;
  • ensuring that the mechanism builds on the outcomes of the previous Open-Ended Working Group.

Member states also welcomed the establishment of two dedicated thematic groups, one focusing on substantive issues and another on capacity development, and called for clear mandates and coordination between them.

The Global Mechanism is mandated to advance discussions across five pillars:

  • threats
  • norms and principles
  • the application of international law
  • confidence-building measures
  • capacity development

It will convene annual plenary sessions, thematic group meetings, and a review conference every five years, leading up to the 2030 review.

The organisational session marks the start of the mechanism’s substantive work as a permanent UN forum on ICT security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Brazil study maps age assurance practices across 25 digital services

A new study by CGI.br and NIC.br examines how digital services in Brazil implement age assurance measures. Presented in Brasília during an event on the Digital Child and Adolescent Statute (ECA Digital), the study reviewed 25 popular online services used by children and adolescents.

The study found that most of the services analysed do not apply age checks at the point of registration, including some platforms aimed at adults. According to the release, age assurance usually appears later, when users try to access specific features such as livestreaming or monetisation.

Titled ‘Age assurance practices in 25 digital services used by children in Brazil’, the study analysed governance documents published before the ECA Digital entered into force. From 18 March, the law requires information-society services aimed at children and adolescents in Brazil, or likely to be accessed by them, to adopt effective age-assurance measures and parental supervision.

The study found that 11 of the 25 platforms relied on third-party age-assurance services, particularly social media and generative AI platforms. Official identity document submission was the most common verification method, while selfie-based checks were the most common age-estimation tool. Differences were also found between the minimum ages stated by services and those listed in app stores, and some adult-oriented platforms could still be accessed by younger users with parental consent.

Parental supervision tools were available in 15 of the 25 services, but activation was usually optional and depended on parents or guardians. Transparency also emerged as a weakness: only six services published Brazil-specific reports, and only one explained how its minimum-age policy was applied. Policies were often spread across multiple pages, averaging 22 pages per service, and around 40% of the services provided related information in other languages.

Fábio Senne, General Research Coordinator at Cetic.br | NIC.br, said: ‘One of the study’s central aims was to verify the integrity of the information made available by digital services in Brazil. It is essential that data on age protection be communicated clearly and accessibly, allowing more informed and effective parental supervision.’

Juliana Cunha, manager of the Digital Public Policy Advisory Office at CGI.br | NIC.br, said: ‘This survey was developed to support the debate on implementation of the ECA Digital and to offer a clear understanding of the current landscape. This initiative forms part of a broader set of actions by CGI.br and NIC.br aimed at providing technical evidence to support effective enforcement of the law. Our commitment is to foster a safer and more responsible digital ecosystem for children and adolescents in Brazil.’

The release says the study used as a methodological reference the OECD technical paper ‘Age assurance practices of 50 online services used by children’, published in 2025. Information was collected between 10 and 30 January 2026 from public documents made available by the services in Brazil, totalling 550 pages analysed. The event also marked the launch of TIC Kids Online Brazil 2025, a publication on internet use by children and adolescents aged 9 to 17 in Brazil.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

South Korea sets ambition to become AI leader

South Korea has unveiled a national strategy to become one of the world’s top three AI powers by 2028. The plan combines investment in digital infrastructure, data systems and next-generation connectivity.

Authorities aim to expand networks by advancing 5G capabilities and preparing for the commercial deployment of 6G by 2030. Cybersecurity and data integration are also key priorities to support a stronger digital ecosystem.

The strategy includes developing talent across education levels and investing in core technologies such as semiconductors and quantum computing. AI adoption is expected to expand across sectors, including manufacturing, healthcare and agriculture.

South Korean officials also plan to promote digital inclusion through learning centres and assistive technologies. Coordination between ministries will be strengthened to ensure effective delivery of the long-term roadmap.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Campaign highlights risks of profit-driven digital platforms

A global campaign led by the Norwegian Consumer Council (NCC) has drawn attention to the decline in quality across digital platforms, a phenomenon widely referred to as ‘enshittification’, in which services deteriorate over time as companies prioritise monetisation over user experience.

The initiative has gained momentum through a viral video and coordinated advocacy efforts across multiple regions.

Enshittification is a term coined by journalist Cory Doctorow that describes a pattern in which platforms initially serve users well, then shift towards extracting value from both users and business partners.

In practice, it often results in increased advertising, paywalls, and reduced functionality, with platforms leveraging user dependence to introduce less favourable conditions.

More than 70 advocacy groups across the EU, the US and Norway have urged policymakers to take stronger action, arguing that declining competition and market concentration allow platforms to degrade services without losing users.

Network effects and high switching costs further limit consumer choice, making it difficult to move to alternative platforms even when dissatisfaction grows.

Existing frameworks, such as the Digital Markets Act and the Digital Services Act, aim to address some of these issues by promoting interoperability, transparency, and accountability.

However, experts argue that enforcement remains too slow and insufficient to deter harmful practices, suggesting that stronger regulatory intervention will be necessary to restore balance between consumers, platforms, and competition in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ofcom tightens online safety enforcement across major platforms

Enforcement of the Online Safety Act intensifies in 2026, with regulators pushing stronger age verification across social media, gaming, messaging, and adult platforms. Significant progress has been reported in the adult sector, with most major pornography services now using age assurance or restricting UK access.

Ofcom has issued new expectations for major children’s platforms, including stricter age verification, stronger protections against grooming, safer feeds, and tighter product testing. The regulator has warned that further enforcement action may follow if compliance is not met.

New obligations are also being introduced, including a requirement from April 2026 for services to report child sexual exploitation and abuse content to the National Crime Agency.

Providers are being instructed to keep risk assessments up to date and adapt to evolving regulatory guidance, including upcoming consultations and expanded reporting duties.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Stanford study warns about the risks of ‘sycophantic’ AI chatbots

A new study from Stanford University has raised concerns about the growing use of AI chatbots for personal advice, highlighting risks linked to a behaviour known as ‘sycophancy’, where systems validate users’ views instead of challenging them.

Researchers argue that such responses are not merely stylistic but have broader consequences for decision-making and social behaviour.

The analysis examined multiple leading models, including ChatGPT, Claude, and Gemini, and found that chatbot responses supported user perspectives far more often than human feedback.

In scenarios involving questionable or harmful actions, systems frequently endorsed behaviour that human evaluators would criticise, raising concerns about reliability in sensitive contexts such as relationships or ethical decisions.

Further experiments involving thousands of participants showed that users tend to prefer and trust sycophantic responses, increasing the likelihood of repeated use.

However, such interactions also appeared to reinforce self-centred thinking and reduce willingness to reconsider or apologise, suggesting a deeper impact on social judgement and interpersonal skills.

Researchers warn that users’ tendency to favour agreeable responses may create incentives for developers to prioritise engagement over accuracy or ethical balance.

The findings highlight the need for oversight and caution, with experts advising against relying on AI systems as substitutes for human guidance in complex personal situations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!