Microsoft warns of a surge in ransomware and extortion incidents

Financially motivated cybercrime now accounts for the majority of global digital threats, according to Microsoft’s latest Digital Defense Report.

The company’s analysts found that over half of all cyber incidents with known motives in the past year were driven by extortion or ransomware, while espionage represented only a small fraction.

Microsoft warns that automation and accessible off-the-shelf tools have allowed criminals with limited technical skills to launch widespread attacks, making cybercrime a constant global threat.

The report reveals that attackers increasingly target critical services such as hospitals and local governments, where weak security and urgent operational pressures make them easy targets.

Cyberattacks on these sectors have already led to real-world harm, from disrupted emergency care to halted transport systems. Microsoft highlights that collaboration between governments and private industry is essential to protect vulnerable sectors and maintain vital services.

While profit-seeking criminals dominate by volume, nation-state actors are also expanding their reach. State-sponsored operations are growing more sophisticated and unpredictable, with espionage often intertwined with financial motives.

Some state actors even exploit the same cybercriminal networks, complicating attribution and increasing risks for global organisations.

Microsoft notes that AI is being used by both attackers and defenders. Criminals are employing AI to refine phishing campaigns, generate synthetic media and develop adaptive malware, while defenders rely on AI to detect threats faster and close security gaps.

The report urges leaders to prioritise cybersecurity as a strategic responsibility, adopt phishing-resistant multifactor authentication, and build strong defences across industries.

Security, Microsoft concludes, must now be treated as a shared societal duty rather than an isolated technical task.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lehane backs OpenAI’s Australia presence as AI copyright debate heats up

OpenAI signalled a break with Australia’s tech lobby on copyright, with global affairs chief Chris Lehane telling SXSW Sydney the company’s models are ‘going to be in Australia, one way or the other’, regardless of reforms or data-mining exemptions.

Lehane framed two global approaches: US-style fair use that enables ‘frontier’ AI, versus tighter, tradition-bound copyright that narrows its scope, saying OpenAI will work under either regime. Asked if Australia risked losing datacentres without looser laws, he replied ‘No’.

Pressed on launching and monetising Sora 2 before copyright issues are settled, Lehane argued innovation precedes adaptation and said OpenAI aims to ‘benefit everyone’. The company paused videos featuring Martin Luther King Jr.’s likeness after family complaints.

Lehane described the US-China AI rivalry as a ‘very real competition’ over values, predicting that one ecosystem will become the default. He said US-led frontier models would reflect democratic norms, while China’s would ‘probably’ align with autocratic ones.

To sustain a ‘democratic lead’, Lehane said allies must add gigawatt-scale power capacity each week to build AI infrastructure. He called Australia uniquely positioned, citing high AI usage, a 30,000-strong developer base, fibre links to Asia, Five Eyes membership, and fast-growing renewables.


Adaptive optics meets AI for cellular-scale eye care

AI is moving from lab demos to frontline eye care, with clinicians using algorithms alongside routine fundus photos to spot disease before symptoms appear. The aim is simple: catch diabetic retinopathy early enough to prevent avoidable vision loss and speed referrals for treatment.

New imaging workflows pair adaptive optics with machine learning to shrink scan times from hours to minutes while preserving single-cell detail. At the US National Eye Institute, models recover retinal pigment epithelium features and clean noisy OCT data to make standard scans more informative.

Duke University’s open-source DCAOSLO goes further by combining multiplexed light signals with AI to capture cellular-scale images quickly. The approach eases patient strain and raises the odds of getting diagnostic-quality data in busy clinics.

Clinic-ready diagnostics are already changing triage. LumineticsCore, the first FDA-cleared AI to detect more-than-mild diabetic retinopathy from primary-care images, flags who needs urgent referral in seconds, enabling earlier laser or pharmacologic therapy.

Researchers also see the retina as a window on wider health, linking vascular and choroidal biomarkers to diabetes, hypertension and cardiovascular risk. Standardised AI tools promise more reproducible reads, support for trials and, ultimately, home-based monitoring that extends specialist insight beyond the clinic.


Renew Europe urges European Commission to curb addictive design and bolster child safety online

Renew Europe is urging the European Commission to deploy its legal tools, including the Digital Services Act (DSA), GDPR and the AI Act, to curb ‘addictive design’ and protect young people’s mental health, as evidence from the Commission’s Joint Research Centre shows intensive social media use among adolescents.

Momentum is building across Brussels and the member states. EU digital ministers endorsed the ‘Jutland Declaration’ on child safety online. The push follows Commission President Ursula von der Leyen’s call for tougher limits on children’s social media use in her State of the Union address and the Commission’s publication of DSA guidelines for platforms on protecting minors.

Renew wants clearer rules against dark patterns and mandatory child-safe defaults such as limiting night-time notifications, switching off autoplay, banning screenshots of minors’ content, and removing filters linked to body-image risks.

The group also calls for robust, privacy-preserving age checks and regular updates to DSA guidance, alongside stronger enforcement powers for national Digital Services Coordinators. Further action may come via the Digital Fairness Act, out for consultation until 24 October 2025, which targets addictive design and misleading influencer practices.


Capita hit with £14 million fine after major data breach

The UK outsourcing firm Capita has been fined £14 million after a cyber-attack exposed the personal data of 6.6 million people. Sensitive information, including financial details, home addresses, passport images, and criminal records, was compromised.

Initially, the fine was £45 million, but it was reduced after Capita improved its cybersecurity, supported affected individuals, and engaged with regulators.

The breach affected 325 of the 600 pension schemes Capita manages, highlighting the risks for organisations handling large-scale sensitive data.

The Information Commissioner’s Office (ICO) criticised Capita for failing to secure personal information, emphasising that proper security measures could have prevented the incident.

Experts note that holding companies financially accountable reinforces the importance of data protection and sends a message to the market.

Capita’s CEO said the company has strengthened its cyber defences and remains vigilant to prevent future breaches.

The UK government has advised companies like Capita to prepare contingency plans following a rise in nationally significant cyberattacks, a trend also seen at Co-op, M&S, Harrods, and Jaguar Land Rover earlier in the year.


Apple launches M5 with bigger AI gains

Apple unveiled the M5 chip, targeting a major jump in on-device AI. Apple says peak GPU compute for AI is more than four times that of the M4, with a Neural Accelerator in each of the chip’s 10 GPU cores.

The CPU pairs up to four performance cores with six efficiency cores for up to 15 percent faster multithreaded work versus M4. A faster 16-core Neural Engine and higher unified memory bandwidth at 153 GB/s aim to speed Apple Intelligence features.

Graphics upgrades include third-generation ray tracing and reworked caching for up to 45 percent higher performance than the M4 in supported apps. Apple also cites AI-assisted gains in gameplay smoothness and 3D rendering speed, plus a Vision Pro refresh rate of up to 120 Hz.

The M5 chip reaches the 14-inch MacBook Pro, iPad Pro, and Apple Vision Pro, with pre-orders open. Apple highlights tighter tie-ins with Core ML, Metal 4 and Tensor APIs, and support for larger local models via unified memory up to 32 GB.


Agentic AI at scale with Salesforce and AWS

Salesforce and AWS outlined a tighter partnership on agentic AI, citing rapid growth in enterprise agents and usage. They set four pillars for the ‘Agentic Enterprise’: unified data, interoperable agents, modernised contact centres and streamlined procurement via AWS Marketplace.

Data 360 ‘Zero Copy’ accesses Amazon Redshift without duplication, while Data 360 Clean Rooms integrate with AWS Clean Rooms for privacy-preserving collaboration. 1-800Accountant reports agents resolving most routine inquiries so human experts focus on higher-value work.

Agentforce supports open standards such as Model Context Protocol and Agent2Agent to coordinate multi-vendor agents. Pilots link Bedrock-based agents and Slack integrations that surface Quick Suite tools, with Anthropic and Amazon Nova models available inside Salesforce’s trust boundary.

Contact centres extend agentic workflows through Salesforce Contact Center with Amazon Connect, adding voice self-service plus real-time transcription and sentiment. Complex issues hand off to representatives with full context, and Toyota Motor North America plans automation for service tasks.

Procurement scales via AWS Marketplace, where Salesforce surpassed $2bn in lifetime sales across 30 countries. AgentExchange listings provide prebuilt, customisable agents and workflows, helping enterprises adopt agentic AI faster with governance and security intact.


New ISO 27701 update strengthens privacy compliance

The International Organization for Standardization has released a major update to ISO 27701, the global standard for managing privacy compliance programmes. The revised version, published in 2025, separates the Privacy Information Management System (PIMS) from ISO 27001.

The updated standard introduces detailed clauses defining how organisations should establish, implement and continually improve their PIMS. It places strong emphasis on leadership accountability, risk assessment, performance evaluation and continual improvement.

Annex A of the standard sets out new control tables for both data controllers and processors. The update also refines terminology and aligns more closely with the principles of the EU GDPR and UK GDPR, making it suitable for multinational organisations seeking a unified privacy management approach.

Experts say the revised ISO 27701 offers a flexible structure but should not be seen as a substitute for legal compliance. Instead, it provides a foundation for building stronger, auditable privacy frameworks that align global business operations with evolving regulatory standards.


Humanity AI launches $500M initiative to build a people-centred future

A coalition of ten leading philanthropic foundations has pledged $500 million over five years to ensure that AI evolves in ways that strengthen humanity rather than marginalise it.

The initiative, called Humanity AI, brings together organisations such as the Ford, MacArthur, Mellon, and Mozilla foundations to promote a people-driven vision for AI that enhances creativity, democracy, and security.

As AI increasingly shapes every aspect of daily life, the coalition seeks to place citizens at the centre of the conversation instead of leaving decisions to a few technology firms.

It plans to support new research, advocacy, and partnerships that safeguard democratic rights, protect creative ownership, and promote equitable access to education and employment.

The initiative also prioritises the ethical use of AI in safety and economic systems, ensuring innovation does not come at the expense of human welfare.

John Palfrey, president of the MacArthur Foundation, said Humanity AI aims to shift power back to the public by funding technologists and advocates committed to responsible innovation.

Michele Jawando of the Omidyar Network added that the future of AI should be designed by people collectively, not predetermined by algorithms or corporate agendas.

Rockefeller Philanthropy Advisors will oversee the fund, which begins issuing grants in 2026. Humanity AI invites additional partners to join in creating a future where people shape technology instead of being shaped by it.


Adult erotica tests OpenAI’s safety claims

OpenAI will loosen some ChatGPT rules, letting users make replies friendlier and allowing erotica for verified adults. CEO Sam Altman framed the shift as ‘treat adult users like adults’, tied to stricter age-gating. The move follows months of new guardrails against sycophancy and harmful dynamics.

The change arrives after reports of vulnerable users forming unhealthy attachments to earlier models. OpenAI has since launched GPT-5 with reduced sycophancy and behaviour routing, plus safeguards for minors and a mental-health council. Critics question whether evidence justifies loosening limits so soon.

Erotic role-play can boost engagement, raising concerns that at-risk users may stay online longer. Access will be restricted to verified adults via age prediction and, if contested, ID checks. That trade-off intensifies privacy tensions around document uploads and potential errors.

It is unclear whether permissive policies will extend to voice, image, or video features, or how regional laws will apply to them. OpenAI says it is not ‘usage-maxxing’ but balancing utility with safety. Observers note that ambitions to reach a billion users heighten moderation pressures.

Supporters cite overdue flexibility for consenting adults and more natural conversation. Opponents warn normalising intimate AI may outpace evidence on mental-health impacts. Age checks can fail, and vulnerable users may slip through without robust oversight.
