New guidelines by Apple curb how apps send user data to external AI systems

Apple has updated its App Review Guidelines to require developers to disclose and obtain permission before sharing personal data with third-party AI systems. The company says the change enhances user control as AI features become more prevalent across apps.

The revision arrives ahead of Apple’s planned 2026 release of an AI-enhanced Siri, expected to take actions across apps and rely partly on Google’s Gemini technology. Apple is also moving to ensure external developers do not pass personal data to AI providers without explicit consent.

Rule 5.1.2(i) already limited the sharing of personal information without permission. The update adds explicit language naming third-party AI as a category requiring disclosure, reflecting growing scrutiny of how apps use machine learning and generative models.

The shift could affect developers who use external AI systems for features such as personalisation or content generation. Enforcement details remain unclear, as the term ‘AI’ encompasses a broad range of technologies beyond large language models.

Apple released several other guideline updates alongside the AI change, including support for its new Mini Apps Programme and amendments involving creator tools, loan products, and regulated services such as crypto exchanges.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ohanian predicts AI-driven jobs growth despite economic jitters

Reddit co-founder Alexis Ohanian says AI remains a durable long-term trend despite growing investor concern that the sector has inflated a market bubble. He argues the technology is now too deeply embedded in workflows to be dismissed as hype.

Tech stocks fell sharply on Thursday as uncertainty over US interest rate cuts prompted investors to seek safer assets. The Nasdaq Composite slid more than two percent, and the AI-driven Magnificent Seven posted broad losses, with Nvidia among the hardest-hit names.

Ohanian says valuations are not his focus but insists the underlying innovations are meaningful, pointing to faster software development as an example of measurable progress. He maintains confidence in technology trends even amid short-term market swings.

He also believes AI will create more roles than it eliminates, despite estimates that widespread adoption could disrupt up to seven percent of the US workforce. He argues that major technological shifts consistently open new career paths.

Ohanian notes that jobs once unimaginable, such as full-time online content creation, are now mainstream aspirations. He expects AI-led change to follow a similar pattern, delivering overall gains while acknowledging that the transition may be uneven.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU investigates Google over potential Digital Markets Act breach

The European Commission has opened an investigation into whether Google may be breaching the Digital Markets Act by unfairly demoting news publishers in search results.

The inquiry centres on Google’s ‘site reputation abuse policy’, which appears to lower rankings for publishers that host content from commercial partners, even when those partnerships support legitimate ways of monetising online journalism.

The Commission is examining whether Alphabet’s approach restricts publishers from conducting business, innovating, and cooperating with third-party content providers. Officials highlighted concerns that such demotions may undermine revenue at a difficult moment for the media sector.

These proceedings do not imply a final decision; instead, they allow the EU to gather evidence and assess Google’s practices in detail.

If the Commission finds evidence of non-compliance, it will present preliminary findings and request corrective measures. The investigation is expected to conclude within 12 months.

Under the DMA, infringements can lead to fines of up to ten percent of a company’s worldwide turnover, rising to twenty percent for repeated violations, alongside possible structural remedies.

Senior Commissioners stressed that gatekeepers must offer fair and non-discriminatory access to their platforms. They argued that protecting publishers’ ability to reach audiences supports media pluralism, innovation, and democratic resilience.

Google Search, designated as a core platform service under the DMA, has been required to comply fully with the regulation since March 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New York Times lawsuit prompts OpenAI to strengthen privacy protections

OpenAI says a New York Times demand to hand over 20 million private ChatGPT conversations threatens user privacy and breaks with established security norms. The request forms part of the Times’ lawsuit over alleged misuse of its content.

The company argues the demand would expose highly personal chats from people with no link to the case. It previously resisted broader requests, including one seeking more than a billion conversations, and says the latest move raises similar concerns about proportionality.

OpenAI says it offered privacy-preserving alternatives, such as targeted searches and high-level usage data, but these were rejected. It adds that chats covered by the order are being de-identified and stored in a secure, legally restricted environment.

The dispute arises as OpenAI accelerates its security roadmap, which includes plans for client-side encryption and automated systems that detect serious safety risks without requiring broad human access. These measures aim to ensure private conversations remain inaccessible to external parties.

OpenAI maintains that strong privacy protections are essential as AI tools handle increasingly sensitive tasks. It says it will challenge any attempt to make private conversations public and will continue to update users as the legal process unfolds.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Explainable AI predicts cardiovascular events in hospitalised COVID-19 patients

In an article published in BMC Infectious Diseases, researchers developed predictive models using machine learning (LightGBM) to identify cardiovascular complications, such as arrhythmia, acute heart failure and myocardial infarction, in 10,700 hospitalised COVID-19 patients across Brazil.

The study reports moderate discriminatory performance, with AUROC values of 0.752 and 0.760 for the two models, and high overall accuracy (~94.5%) due to the large majority of non-event cases.

However, due to the rarity of cardiovascular events (~5.3% of cases), the F1-scores for detecting the event class remained very low (5.2% and 4.2%, respectively), signalling that the models struggle to reliably identify the minority class despite efforts to rebalance the data.
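The pattern the authors describe, high overall accuracy alongside a very low minority-class F1, is a standard consequence of class imbalance and is easy to reproduce. The sketch below uses synthetic confusion-matrix counts chosen only to mimic a roughly 5% event rate; the numbers are illustrative and are not taken from the study:

```python
# Toy illustration of accuracy vs minority-class F1 under class imbalance.
# Counts are synthetic, chosen to mimic a ~5% event rate; not the study's data.

def accuracy(tp, fp, fn, tn):
    """Fraction of all predictions that are correct."""
    return (tp + tn) / (tp + fp + fn + tn)

def f1(tp, fp, fn):
    """Harmonic mean of precision and recall for the positive (event) class."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# 10,000 patients, 500 (5%) with a cardiovascular event.
# A model that flags only 40 patients, 15 of them correctly:
tp, fp = 15, 25
fn, tn = 500 - tp, 9500 - fp

print(f"accuracy = {accuracy(tp, fp, fn, tn):.3f}")   # 0.949: dominated by negatives
print(f"minority-class F1 = {f1(tp, fp, fn):.3f}")    # 0.056: events rarely caught
```

Because 95% of patients have no event, a model can score ~95% accuracy while detecting almost none of the events, which is why the authors report F1 alongside accuracy.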

Using SHAP (Shapley Additive exPlanations) values, the researchers identified the most influential predictors: age, urea level, platelet count and SatO₂/FiO₂ (oxygen saturation to inspired oxygen fraction) ratio.

The authors emphasise that while the approach shows promise for resource-constrained settings and contributes to risk stratification, the limitations around class imbalance and generalisability remain significant obstacles for clinical use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI platforms approved for Surrey Schools classrooms

Surrey Schools has approved MagicSchool, SchoolAI, and TeachAid for classroom use, giving teachers access through the ONE portal with parental consent. The district says the tools are intended to support instruction while maintaining strong privacy and safety safeguards.

Officials say each platform passes rigorous reviews covering educational value, data protection, and technical security before approval. Teachers receive structured guidance on appropriate use, supported by professional development aligned with wider standards for responsible AI in education.

A two-year digital literacy programme helps staff explore online identity, digital habits, and safe technology use as AI becomes more common in lessons. Students use AI to generate ideas, check code, and analyse scientific or mathematical problems, reinforcing critical reasoning.

Educators stress that pupils are taught to question AI outputs rather than accept them at face value. Leaders argue this approach builds judgment and confidence, preparing young people to navigate automated systems with greater agency beyond school settings.

Families and teachers can access AI safety resources through the ONE platform, including videos, podcasts and the ‘Navigating an AI Future’ series. Materials include recordings from earlier workshops and parent sessions, supporting shared understanding of AI’s benefits and risks across the community.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI credentials grow as AWS launches practical training pathway

AWS is launching four solutions to help close the AI skills gap as demand rises and job requirements shift. The company positions the new tools as a comprehensive learning journey, offering structured pathways that progress from foundational knowledge to hands-on practice and formal validation.

AWS Skill Builder now hosts over 220 free AI courses, ranging from beginner introductions to advanced topics in generative and agentic AI. The platform enables learners to build skills at their own pace, with flexible study options that accommodate work schedules.

Practical experience anchors the new suite. The Meeting Simulator helps learners explain AI concepts to realistic personas and refine communication with instant feedback. Cohorts Studio offers team-based training through study groups, boot camps, and game-based challenges.

AWS is expanding its credential portfolio with the AWS Certified Generative AI Developer – Professional certification. The exam helps cloud practitioners demonstrate proficiency in foundation models, RAG architectures, and responsible deployment, supported by practice tasks and simulated environments.

Learners can validate hands-on capability through new microcredentials that require troubleshooting and implementation in real AWS settings. Combined credentials signal both conceptual understanding and task-ready skills, with Skill Builder’s more expansive library offering a clear starting point for career progression.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI tools deployed to set tailored attendance goals for English schools

England will introduce AI-generated attendance targets for every school, with improvement baselines tailored to each school’s context and needs. Schools with higher absence rates will be paired with strong performers for support, and thirty-six new Attendance and Behaviour Hubs will help drive the rollout.

Education Secretary Bridget Phillipson said raising attendance is essential for opportunity. She highlighted progress made since the pandemic but noted that variation between schools remains too high. The AI-generated targets are intended to help spread effective practice across all schools.

A new toolkit will guide schools through key transition points, such as the move from Year 7 to Year 8. CHS South in Manchester is highlighted for using summer family activities to ease anxiety. Officials say early engagement can stabilise attendance.

CHS South Deputy Head Sue Burke said the goal is to ensure no pupil feels left out. She credited the attendance team for combining support with firm expectations. The model is presented as a template for broader adoption.

The policy blends AI analysis with pastoral strategies to address entrenched absence. Ministers argue that consistent attendance drives long-term outcomes. The UK government expects personalised targets and shared practice to embed lasting improvement.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU regulators, UK and eSafety lead the global push to protect children in the digital world

Children today spend a significant amount of their time online, from learning and playing to communicating.

To protect them in an increasingly digital world, Australia’s eSafety Commissioner, the European Commission’s DG CNECT, and the UK’s Ofcom have joined forces to strengthen global cooperation on child online safety.

The partnership aims to ensure that online platforms take greater responsibility for protecting and empowering children, recognising their rights under the UN Convention on the Rights of the Child.

The three regulators will continue to enforce their online safety laws to ensure platforms properly assess and mitigate risks to children. They will promote privacy-preserving age verification technologies and collaborate with civil society and academics to ensure that regulations reflect real-world challenges.

By supporting digital literacy and critical thinking, they aim to provide children and families with safer and more confident online experiences.

To advance the work, a new trilateral technical group will be established to deepen collaboration on age assurance. It will study the interoperability and reliability of such systems, explore the latest technologies, and strengthen the evidence base for regulatory action.

Through closer cooperation, the regulators hope to create a more secure and empowering digital environment for young people worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta, TikTok and Snapchat prepare to block under-16s as Australia enforces social media ban

Social media platforms, including Meta, TikTok and Snapchat, will begin sending notices to more than a million Australian teens, telling them to download their data, freeze their profiles or lose access when the national ban for under-16s comes into force on 10 December.

According to people familiar with the plans, platforms will deactivate accounts believed to belong to users under the age of 16; the roughly 20 million older Australians will not be affected. Compliance marks a shift after a year of opposition from tech firms, which warned the rules would be intrusive or unworkable.

Companies plan to rely on their existing age-estimation software, which predicts age from behaviour signals such as likes and engagement patterns. Only users who challenge a block will be pushed to the age assurance apps. These tools estimate age from a selfie and, if disputed, allow users to upload ID. Trials show they work, but accuracy drops for 16- and 17-year-olds.

Yoti’s Chief Policy Officer, Julie Dawson, said disruption should be brief, with users adapting within a few weeks. Meta, Snapchat, TikTok and Google declined to comment. In earlier hearings, most respondents stated that they would comply.

The law blocks teenagers from using mainstream platforms without any parental override. It follows renewed concern over youth safety after internal Meta documents in 2021 revealed harm linked to heavy social media use.

A smooth rollout is expected to influence other countries as they explore similar measures. France, Denmark, Florida and the UK have pursued age checks with mixed results due to concerns over privacy and practicality.

Consultants say governments are watching to see whether Australia’s requirement for platforms to take ‘reasonable steps’ to block minors, including trying to detect VPN use, works in practice without causing significant disruption for other users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!