Meta’s Facebook uses phone photos for AI if users allow it

Meta has introduced a new feature that allows Facebook to access and analyse users’ photos stored on their phones, provided they give explicit permission.

The move is part of a broader push to improve the company’s AI tools, especially after the underwhelming reception of its Llama 4 model. Users who opt in agree to Meta’s AI Terms of Service, which grant the platform the right to retain and use personal media for content suggestions.

The new feature, currently being tested in the US and Canada, is designed to offer Facebook users creative ideas for Stories by processing their photos and videos through cloud infrastructure.

When enabled, users may receive suggestions such as collages or travel highlights based on when and where images were captured, as well as who or what appears in them. However, participation is strictly optional and can be turned off at any time.

Facebook clarifies that the media analysed under the feature is not used to train AI models in the current test. Still, the system does upload selected media to Meta’s servers on an ongoing basis, raising privacy concerns.

The option to activate these suggestions can be found in the Facebook app’s settings, where users are asked whether they want camera roll data to inform sharing ideas.

Meta has been actively promoting its AI ambitions, with CEO Mark Zuckerberg pushing for the development of ‘superintelligence’. The company recently launched Meta Superintelligence Labs to lead these efforts.

Despite facing stiff competition from OpenAI, DeepSeek and Google, Meta appears determined to deepen its use of personal data to boost its AI capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Africa risks being left behind in global AI development

Africa is falling far behind in the global race to develop AI, according to a new report by Oxford University.

The study mapped the location of advanced AI infrastructure and revealed that only 32 countries — just 16% of the world — currently operate major AI data centres.

These facilities are essential for training and developing modern AI systems. In contrast, most African nations remain dependent on foreign technology providers, limiting their control over digital development.

Rather than building local capacity, Africa has essentially been treated as a market for AI products developed elsewhere. Regional leaders have often focused on distributing global tech tools instead of investing in infrastructure for homegrown innovation.

One notable exception is Strive Masiyiwa’s Cassava Technologies, which recently partnered with Nvidia to launch the continent’s first AI factory, located in South Africa. The project aims to expand across Egypt, Kenya, Morocco and Nigeria.

Unlike typical data centres, an AI factory is explicitly built to support the full AI lifecycle, from raw data to trained models. Nvidia’s GPUs will power the facility, enabling ‘AI as a service’ to be used by governments, businesses, and researchers across the continent.

Cassava’s model offers a more sustainable vision, where African data is used to create local solutions, instead of exporting value abroad.

Experts argue that Africa needs more such initiatives to reduce dependence and participate meaningfully in the AI economy. An AI Fund supported by leading African nations could help finance new factories and infrastructure.

With time running out, leaders must move beyond surface-level engagement and begin coordinated action to address the continent’s growing digital divide.

OpenInfra Summit Europe brings focus on AI and VMware alternatives

The OpenInfra Foundation and its global community will gather at the OpenInfra Summit Europe from 17 to 19 October in Paris-Saclay to explore how open source is reshaping digital infrastructure.

It will be the first summit since the Foundation joined the Linux Foundation, uniting major projects such as Linux, Kubernetes and OpenStack under the OpenInfra Blueprint. The agenda includes a strong focus on digital sovereignty, VMware migration strategies and infrastructure support for AI workloads.

Taking place at École Polytechnique in Palaiseau, the summit arrives at a time when open source software is powering nearly $9 trillion of economic activity.

With over 38% of the global OpenInfra community based in Europe, the event will focus on regional priorities like data control, security, and compliance with new EU regulations such as the Cyber Resilience Act.

Developers, IT leaders and business strategists will explore how projects like Kata Containers, Ceph and RISC-V integrate to support cost-effective, scalable infrastructure.

The summit will also mark OpenStack’s 15th anniversary, with use cases shared by the UN, BMW and nonprofit Restos du Coeur.

Attendees will witness a live VMware migration demo featuring companies like Canonical and Rackspace, highlighting real-world approaches to transitioning away from proprietary platforms. Sessions will dive into topics like CI pipelines, AI-powered infrastructure, and cloud-native operations.

As a community-led event, OpenInfra Summit Europe remains focused on collaboration.

With sponsors including Canonical, Mirantis, Red Hat and others, the gathering offers developers and organisations an opportunity to share best practices, shape open source development, and strengthen the global infrastructure ecosystem.

Lung cancer caught early thanks to AI

A 69-year-old woman from Surrey has credited AI with saving her life after it detected lung cancer that human radiologists initially missed.

The software flagged a concerning anomaly in a chest X-ray that had been given the all-clear, prompting urgent follow-up and surgery.

NHS hospitals increasingly use AI tools like Annalise.ai, which analyses scans and prioritises urgent cases for radiologists.

Dianne Covey, whose tumour was caught at stage one, avoided chemotherapy or radiotherapy and has since made a full recovery.

With investments exceeding £36 million, the UK government and NHS are rapidly deploying AI to improve cancer diagnosis rates and reduce waiting times. AI has now been trialled or implemented across more than 45 NHS trusts and is also used for skin and prostate cancer detection.

Doctors and technologists say AI is not replacing medical professionals but enhancing their capabilities by highlighting critical cases and improving speed.

Experts warn that outdated machines, biased training data and over-reliance on consumer AI tools remain risks to patient outcomes.

Balancing security and usability in digital authentication

A report by the FIDO Alliance revealed that 53% of consumers observed an increase in suspicious messages in 2024, with SMS, emails, and phone calls being the primary vectors.

As digital scams and AI-driven fraud rise, businesses face growing pressure to strengthen authentication methods without compromising user experience.

No clear standard has emerged despite the range of available authentication options—including passkeys, one-time passwords (OTP), multi-factor authentication (MFA), and biometric systems.

Industry experts warn that focusing solely on advanced tools can lead to overlooking basic user needs. Minor authentication hurdles such as CAPTCHA errors have led to customer drop-offs and failed transactions.

Organisations are exploring risk-based, adaptive authentication models that adjust security levels based on user behaviour and context. Such systems could eventually replace static logins with continuous, behind-the-scenes verification.
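In practice, risk-based authentication combines contextual signals into a score and maps it to an authentication requirement. The sketch below illustrates the idea; the signals, weights and thresholds are illustrative assumptions, not any vendor’s actual scoring model.

```python
# Minimal sketch of risk-based (adaptive) authentication.
# Signal names, weights and thresholds are hypothetical.

def risk_score(signals: dict) -> float:
    """Combine contextual signals into a 0..1 risk score."""
    score = 0.0
    if signals.get("new_device"):
        score += 0.4
    if signals.get("unusual_location"):
        score += 0.3
    if signals.get("odd_hour"):
        score += 0.1
    if signals.get("high_value_action"):
        score += 0.2
    return min(score, 1.0)

def required_step(signals: dict) -> str:
    """Map the risk score to an authentication requirement."""
    score = risk_score(signals)
    if score < 0.3:
        return "none"            # silent, behind-the-scenes verification
    if score < 0.7:
        return "passkey_or_otp"  # step-up challenge
    return "block_and_review"    # too risky to proceed

# A returning user on a known device sails through,
# while a new device alone triggers a step-up challenge.
print(required_step({}))                   # low risk: no extra step
print(required_step({"new_device": True})) # medium risk: step-up
```

The point of the design is that friction scales with risk: most sessions see no challenge at all, which addresses the drop-off problem caused by blanket hurdles such as CAPTCHAs.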

AI complicates the landscape further. As autonomous assistants handle tasks like booking tickets or making purchases, distinguishing legitimate user activity from malicious bots becomes increasingly tricky.

With no universal solution, experts say businesses must offer a flexible range of secure options tailored to user preferences. The challenge remains to find the right balance between security and usability in an evolving threat environment.

Taiwan leads in AI defence of democracy

Taiwan has emerged as a global model for using AI to defend democracy, earning recognition for its success in combating digital disinformation.

The island joined a new international coalition led by the International Foundation for Electoral Systems to strengthen election integrity through AI collaboration.

Constantly targeted by foreign actors, Taiwan has developed proactive digital defence systems that serve as blueprints for other democracies.

Its rapid response strategies and tech-forward approach have made it a leader in countering AI-powered propaganda.

While many nations are only beginning to grasp the risks posed by AI to democratic systems, Taiwan has already faced these threats and adapted.

Its approach now shapes global policy discussions around safeguarding elections in the digital era.

Meta expands AI ambitions with more OpenAI hires

According to a report published by The Information on Sunday, Meta Platforms has hired four additional researchers from OpenAI.

The researchers—Shengjia Zhao, Jiahui Yu, Shuchao Bi, and Hongyu Ren—are set to join Meta’s AI team as part of a broader recruitment drive. All four were previously involved in AI development at OpenAI, the Microsoft-backed company behind ChatGPT and other generative models.

Earlier in the week, The Wall Street Journal reported that Meta had hired three more OpenAI researchers—Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai—based in the firm’s Zurich office.

The hires reflect Meta’s increased investment in advanced AI research, particularly in ‘superintelligence’, a term CEO Mark Zuckerberg has used to describe future AI capabilities.

Meta and OpenAI have not yet responded to requests for comment. Reuters noted that it could not independently verify the hiring details at the time of reporting.

With growing competition among tech giants in AI innovation, Meta’s continued talent acquisition suggests a clear intention to strengthen its internal capabilities through strategic hiring.

AI training with pirated books triggers massive legal risk

A US court has ruled that AI company Anthropic engaged in copyright infringement by downloading millions of pirated books to train its language model, Claude.

Although the court found that using copyrighted material for AI training could qualify as ‘fair use’ under US law when the content is transformed, it also held that acquiring the content illegally instead of licensing it lawfully constituted theft.

Judge William Alsup described AI as one of the most transformative technologies of our time, yet found that Anthropic had obtained millions of digital books from pirate sites such as LibGen and Pirate Library Mirror.

He noted that buying the same books later in print form does not erase the initial violation, though it may reduce potential damages.

The penalties for wilful copyright infringement in the US could reach up to $150,000 per work, meaning total compensation might run into the billions.

The case highlights the fine line between transformation and theft and signals growing legal pressure on AI firms to respect intellectual property instead of bypassing established licensing frameworks.

Australia, which uses a ‘fair dealing’ system rather than ‘fair use’, already offers flexible licensing schemes through organisations like the Copyright Agency.

CEO Josephine Johnston urged policymakers not to weaken Australia’s legal framework in favour of global tech companies, arguing that licensing provides certainty for developers and fair payment to content creators.

New NHS plan adds AI to protect patient safety

The NHS is set to introduce a world-first AI system to detect patient safety risks early by analysing hospital data for warning signs of deaths, injuries, or abuse.

Instead of waiting for patterns to emerge through traditional oversight, the AI will use near real-time data to trigger alerts and launch rapid inspections.

Health Secretary Wes Streeting announced that a new maternity-focused AI tool will roll out across NHS trusts in November. It will monitor stillbirths, brain injuries and death rates, helping identify issues before they become scandals.

The initiative forms part of a new 10-year plan to modernise the health service and move it from analogue to digital care.

The technology will send alerts to the Care Quality Commission, whose teams will investigate flagged cases. Professor Meghana Pandit, NHS England’s medical director, said the UK would become the first country to trial this AI-enabled early warning system to improve patient care.

CQC chief Sir Julian Hartley added that it would strengthen quality monitoring across services.

However, nursing leaders voiced concerns that AI could distract from more urgent needs. Professor Nicola Ranger of the Royal College of Nursing warned that low staffing levels remain a critical issue.

She stressed that one nurse often handles too many patients, and technology should not replace the essential investment in frontline staff.

Nvidia insiders sell over $1bn in shares amid AI market boom

Senior Nvidia executives have sold more than $1bn worth of shares over the past year, with over half of those sales taking place in June.

The move comes as Nvidia’s stock soared to record highs, driven by renewed investor enthusiasm for AI. According to the Financial Times, insiders took advantage of the AI-driven rally instead of waiting for further market shifts.

Among those selling shares was Nvidia CEO Jensen Huang, who offloaded stock for the first time since September, as revealed in recent regulatory filings.

The surge in share price helped the company briefly reclaim its title as the world’s most valuable firm, following upbeat forecasts from analysts predicting Nvidia will ride a ‘Golden Wave’ of AI growth.

Nvidia’s stock has recovered more than 60% since early April, when markets were rattled by President Donald Trump’s global tariff plans.

The rebound reflects optimism that upcoming trade negotiations may soften the economic blow and keep momentum behind tech and AI-focused firms.

Nvidia declined to comment on the report.
