Apple fined over unfair iPhone sales contracts in France

A Paris court has ordered Apple to pay around €39 million to French mobile operators, ruling that the company imposed unfair terms in contracts governing iPhone sales more than a decade ago. The court also fined Apple €8 million and annulled several clauses deemed anticompetitive.

Judges found that Apple required carriers to sell a set number of iPhones at fixed prices, restricted how its products were advertised, and used operators’ patents without compensation. The French consumer watchdog DGCCRF had first raised concerns about these practices years earlier.

Under the ruling, Apple must compensate three of France’s four major mobile networks: Bouygues Telecom, Free, and SFR. The decision applies immediately despite Apple’s appeal, which will be heard at a later date.

Apple said it disagreed with the ruling and would challenge it, arguing that the contracts reflected standard commercial arrangements of the time. French regulators have increasingly scrutinised major tech firms as part of wider efforts to curb unfair market dominance.

Spot the red flags of AI-enabled scams, says California DFPI

The California Department of Financial Protection & Innovation (DFPI) has warned that criminals are weaponising AI to scam consumers. Deepfakes, cloned voices, and slick messages mimic trusted people and exploit urgency. Learning the new warning signs cuts risk quickly.

Imposter deepfakes and romance ruses often begin with perfect profiles or familiar voices pushing you to pay or invest. Grandparent scams use cloned audio in fake emergencies; agree a family passphrase and verify on a separate channel. Influencers may flaunt fabricated credentials and followers.

Automated attacks now use AI to sidestep basic defences and steal passwords or card details. Reduce exposure with two-factor authentication, regular updates, and a reputable password manager. Pause before clicking unexpected links or attachments, even from known names.

Investment frauds increasingly tout vague ‘AI-powered’ returns while simulating growth and testimonials, then blocking withdrawals. Beware guarantees of no risk, artificial deadlines, unsolicited messages, and recruit-to-earn offers. Research independently and verify registrations before sending money.

DFPI advises careful verification before acting. Confirm identities through trusted channels, refuse to move money under pressure, and secure devices. Report suspicious activity promptly; smart habits remain the best defence.

Emergency cardiology gets a lift from AI-read ECGs, with fewer false activations

AI ECG analysis improved heart attack detection and reduced false alarms in a multicentre study of 1,032 suspected STEMI cases. Conducted across three primary PCI centres from January 2020 to May 2024, it points to quicker, more accurate triage, especially beyond specialist hospitals.

ST-segment elevation myocardial infarction (STEMI) occurs when a major coronary artery is blocked. Guideline targets call for reperfusion within 90 minutes of first medical contact. Longer delays are associated with roughly a threefold increase in mortality, underscoring the need for rapid, reliable activation.

The AI ECG model, trained to detect acute coronary occlusion and STEMI equivalents, analysed each patient’s initial tracing. Confirmatory angiography and biomarkers identified 601 true STEMIs and 431 false positives. AI detected 553 of 601 STEMIs, versus 427 identified by standard triage on the first ECG.

False positives fell sharply with AI. Investigators reported a 7.9 percent false-positive rate with the model, compared with 41.8 percent under standard protocols. Clinicians said more precise identification could streamline transfers from non-PCI centres and help teams meet reperfusion targets.
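As a rough cross-check, the reported detection counts imply sensitivities of about 92 percent for the AI model versus 71 percent for standard triage. A minimal sketch of the arithmetic, assuming the 601 angiography-confirmed STEMIs are the denominator in both arms:

```python
# Back-of-envelope check of the reported detection figures.
# Assumption: the 601 confirmed STEMIs are the denominator for
# both the AI model and standard first-ECG triage.

confirmed_stemis = 601
detected_by_ai = 553
detected_by_standard = 427

sensitivity_ai = detected_by_ai / confirmed_stemis              # ~0.920
sensitivity_standard = detected_by_standard / confirmed_stemis  # ~0.710

print(f"AI sensitivity:       {sensitivity_ai:.1%}")        # 92.0%
print(f"Standard sensitivity: {sensitivity_standard:.1%}")  # 71.0%

# Reported false-positive rates, quoted from the study:
# 7.9% with the AI model vs 41.8% under standard protocols.
```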

An editorial welcomed the gains but urged caution. The model targets acute occlusion rather than STEMI, needs prospective validation in diverse populations, and must be integrated with clear governance and human oversight.

Ontario updates deidentification guidelines for safer data use

Ontario’s privacy watchdog has released an expanded set of deidentification guidelines to help organisations protect personal data while enabling innovation. The 100-page document from the Office of the Information and Privacy Commissioner (IPC) offers step-by-step advice, checklists and examples.

The update modernises the 2016 version to reflect global regulatory changes and new data protection practices. The commissioner emphasised that the guidelines aim to help organisations of all sizes responsibly anonymise data while maintaining its usefulness for research, AI development and public benefit.

Developed through broad stakeholder consultation, the guidelines were refined with input from privacy experts and the Canadian Anonymization Network. The new version responds to industry requests for more detailed, operational guidance.

Although the guidelines are not legally binding, experts said following them can reduce liability risks and strengthen compliance with privacy laws. The IPC hopes they will serve as a practical reference for executives and data officers.

Labels press platforms to curb AI slop and protect artists

Luke Temple woke to messages about a new Here We Go Magic track he never made. An AI-generated song appeared on the band’s Spotify, Tidal, and YouTube pages, triggering fresh worries about impersonation as cheap tools flood platforms.

Platforms say defences are improving. Spotify confirmed the removal of the fake track and highlighted new safeguards against impersonation, plus a tool to flag mismatched releases pre-launch. Tidal said it removed the song and is upgrading AI detection. YouTube did not comment.

Industry teams describe a cat-and-mouse race. Bad actors exploit third-party distributors with light verification, slipping AI pastiches into official pages. Tools like Suno and Udio enable rapid cloning, encouraging volume spam that targets dormant and lesser-known acts.

Per-track revenue losses are tiny; reputational damage is not. Artists warn that identity theft and fan confusion erode trust, especially when fakes sit beside legitimate catalogues or mimic deceased performers. Labels caution that volume is outpacing takedowns across major services.

Proposed fixes include stricter distributor onboarding, verified artist controls, watermark detection, and clear AI labels for listeners. Rights holders want faster escalation and penalties for repeat offenders. Musicians monitor profiles and report issues, yet argue platforms must shoulder the heavier lift.

UNESCO surveys women on AI fairness and safety

UNESCO’s Office for the Caribbean has launched a regional survey examining gender and AI, titled Perception of AI Fairness and Online Safety among Women and Girls in the Caribbean. The initiative addresses the lack of data on how women and girls experience technology, AI, and online violence in the region.

Results will guide policy recommendations to promote human rights and safer digital environments.

The 2025 survey is part of a broader UNESCO effort to understand AI’s impact on gender equality. It covers gender-based online violence, generative AI’s implications for privacy, and potential biases in large AI models.

The findings will be used to develop a regional policy brief that compares Caribbean experiences with global data.

UNESCO encourages participation from women and girls across the Caribbean, highlighting that community input is vital for shaping effective AI policies. A one-day workshop on 10 December 2025 will equip young women with skills to navigate AI safely.

The initiative aims to position the Caribbean as a leader in ensuring AI respects dignity, equality, and human rights.

Rare but real: mental health risks at ChatGPT scale

OpenAI says a small share of ChatGPT users show possible signs of mental health emergencies each week, including mania, psychosis, or suicidal thoughts. The company puts the figure at 0.07 percent of weekly active users and says safety prompts are triggered when such signs appear. Critics argue that even small percentages translate into large numbers at ChatGPT’s scale.

A further 0.15 percent of weekly users discuss explicit indicators of potential suicidal planning or intent. Updates aim to respond more safely and empathetically, and to flag indirect self-harm signals. Sensitive chats can be routed to safer models in a new window.
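The critics’ point about scale is simple arithmetic. A quick illustration follows; the weekly user base used here is an assumption (roughly 800 million, a figure OpenAI has cited publicly), not a number from this article:

```python
# Illustration of how small percentages scale. The weekly-user
# figure is an assumption based on OpenAI's publicly cited numbers,
# not taken from the article above.

weekly_users = 800_000_000  # assumed weekly active users

share_emergency = 0.0007  # 0.07%: possible signs of mania,
                          # psychosis, or suicidal thoughts
share_suicidal = 0.0015   # 0.15%: explicit indicators of potential
                          # suicidal planning or intent

print(f"{weekly_users * share_emergency:,.0f} users/week")  # 560,000
print(f"{weekly_users * share_suicidal:,.0f} users/week")   # 1,200,000
```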

More than 170 clinicians across 60 countries advise OpenAI on risk cues and responses. Guidance focuses on encouraging users to seek real-world support. Researchers warn vulnerable people may struggle to act on on-screen warnings.

External specialists see both value and limits. AI may widen access when services are stretched, yet automated advice can mislead. Risks include reinforcing delusions and misplaced trust in authoritative-sounding output.

Legal and public scrutiny is rising after high-profile cases linked to chatbot interactions. Families and campaigners want more transparent accountability and stronger guardrails. Regulators continue to debate transparency, escalation pathways, and duty of care.

UK retail investors can now access crypto ETNs

The FCA has lifted the ban on retail access to certain crypto exchange traded notes (cETNs), effective 8 October. UK consumers can now invest in cETNs listed on the Official List and traded on a Recognised Investment Exchange.

Firms offering cETNs must meet strict requirements. Products are categorised as Restricted Mass Market Investments (RMMIs), meaning financial promotions cannot include incentives, and firms must carry out appropriateness assessments, client categorisation, and risk disclosures.

Compliance with the Consumer Duty is also required, including acting in good faith, avoiding foreseeable harm, and ensuring products meet the needs of the target market.

The FCA emphasises that cETNs are complex products, and firms should have the correct permissions to offer them. Those seeking authorisation or new permissions can request pre-application support meetings.

The regulator is also advancing its crypto roadmap to integrate crypto assets more fully into its regulatory framework, with ongoing consultations on applying Handbook rules to crypto activities.

IBM unveils Digital Asset Haven for secure institutional blockchain management

IBM has introduced Digital Asset Haven, a unified platform designed for banks, corporations, and governments to securely manage and scale their digital asset operations. The platform manages the full asset lifecycle from custody to settlement while maintaining compliance.

Built with Dfns, the platform combines IBM’s security framework with Dfns’ custody technology. The Dfns platform supports 15 million wallets for 250 clients, providing multi-party authorisation, policy governance, and access to over 40 blockchains.

IBM Digital Asset Haven includes tools for identity verification, crime prevention, yield generation, and developer-friendly APIs for extra services. Security features include Multi-Party Computation, HSM-based signing, and quantum-safe cryptography to ensure compliance and resilience.
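To make the multi-party authorisation idea concrete, here is a minimal k-of-n approval sketch. It is purely illustrative: the names and structure are hypothetical, and it does not represent IBM’s or Dfns’ actual implementation, which relies on cryptographic techniques such as Multi-Party Computation rather than application-level checks.

```python
# Minimal sketch of a k-of-n multi-party authorisation policy,
# illustrating the general idea behind the features named above.
# Hypothetical example; not IBM's or Dfns' implementation.

from dataclasses import dataclass, field

@dataclass
class ApprovalPolicy:
    approvers: set[str]   # identities allowed to approve
    threshold: int        # approvals required before signing proceeds
    collected: set[str] = field(default_factory=set)

    def approve(self, who: str) -> None:
        if who not in self.approvers:
            raise PermissionError(f"{who} is not an authorised approver")
        self.collected.add(who)

    def can_sign(self) -> bool:
        return len(self.collected) >= self.threshold

policy = ApprovalPolicy(approvers={"ops", "risk", "treasury"}, threshold=2)
policy.approve("ops")
policy.approve("risk")
assert policy.can_sign()  # 2-of-3 approvals reached; signing may proceed
```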

According to IBM’s Tom McPherson, the platform gives clients ‘the opportunity to enter and expand into the digital asset space backed by IBM’s level of security and reliability.’ Dfns CEO Clarisse Hagège said the partnership builds infrastructure to scale digital assets from pilots to global use.

IBM plans to roll out Digital Asset Haven via SaaS and hybrid models in late 2025, with on-premises deployment expected in 2026.

New ChatGPT model reduces unsafe replies by up to 80%

OpenAI has updated ChatGPT’s default model after working with more than 170 mental health clinicians to help the system better spot distress, de-escalate conversations and point users to real-world support.

The update routes sensitive exchanges to safer models, expands access to crisis hotlines and adds gentle prompts to take breaks, aiming to reduce harmful responses rather than simply offering more content.

Measured improvements are significant across three priority areas: severe mental health symptoms such as psychosis and mania, self-harm and suicide, and unhealthy emotional reliance on AI.

OpenAI reports that undesired responses fell between 65 and 80 percent in production traffic and that independent clinician reviews show significant gains compared with earlier models. At the same time, rare but high-risk scenarios remain a focus for further testing.

The company used a five-step process to shape the changes: define harms, measure them, validate approaches with experts, mitigate risks through post-training and product updates, and keep iterating.

Evaluations combine estimates from real-world traffic with structured adversarial tests. Stronger ChatGPT safeguards are in place now, and further refinements are planned as understanding and measurement methods evolve.
