Meta faces fines in the Netherlands over algorithm-first timelines

A Dutch court has ordered Meta to give Facebook and Instagram users in the Netherlands the right to set a chronological feed as their default.

The ruling follows a case brought by digital rights group Bits of Freedom, which argued that Meta’s design undermines user autonomy under the European Digital Services Act.

Although a chronological feed is already available, it is hidden and cannot be made permanent. The court said Meta must make the setting accessible on the homepage and in the Reels section and ensure it stays in place when the apps are restarted.

If Meta does not comply within two weeks, it faces a fine of €100,000 per day, capped at €5 million.

Bits of Freedom argued that algorithmic feeds threaten democracy, particularly before elections. The court agreed the change must apply permanently rather than temporarily during campaigns.

The group welcomed the ruling but stressed it was only a small step in tackling the influence of tech giants on public debate.

Meta has not yet responded to the decision, which applies only in the Netherlands despite being based on EU law. Campaigners say the case highlights the need for more vigorous enforcement to ensure digital platforms respect user choice and democratic values.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Oracle systems targeted in unverified data theft claims, Google warns

Google has warned that hackers are emailing company executives, claiming to have stolen sensitive data from Oracle business applications. The group behind the campaign identifies itself as affiliated with the Cl0p ransomware gang.

In a statement, Google said the attackers target executives at multiple organisations with extortion emails linked to Oracle’s E-Business Suite. The company stated that it lacks sufficient evidence to verify the claims or confirm whether any data has been taken.

Neither Cl0p nor Oracle responded to requests for comment. Google did not provide additional information about the scale of the campaign or its specific targets.

The Cl0p ransomware gang has been involved in several high-profile extortion cases, often using claims of data theft to pressure organisations into paying ransoms, even when breaches remain unverified.

Google advised recipients to treat such messages cautiously and report any suspicious emails to security teams while investigations continue.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Global survey reveals slow AI adoption across the construction industry

RICS has published its 2025 report on AI in Construction, offering a global snapshot of how the built-environment sector views AI integration. The findings draw on over 2,200 survey responses from professionals across geographies and disciplines.

The report finds that AI adoption remains limited: 45 percent of organisations report no AI use, and just under 12 percent say AI is used regularly in specific workflows. Fewer than 1 percent have AI embedded across multiple processes.

Preparedness is also low. While some firms are exploring AI, most have yet to move beyond early discussions. Only about 20 percent are engaged in strategic planning or proof-of-concept pilots, and very few have budgeted implementation roadmaps.

Despite this, confidence in AI is strong. Professionals see the most significant potential in progress monitoring, scheduling, resource optimisation, contract review and risk management. Over the next five years, many expect the greatest impact in design optioneering, where AI could help evaluate multiple alternatives in early project phases.

The survey also flags key barriers: lack of skilled personnel (46 percent), integration with existing systems (37 percent), data quality and availability (30 percent), and high implementation costs (29 percent).

To overcome these challenges, RICS recommends a coordinated roadmap with leadership from industry, government support, ethical guardrails, workforce upskilling, shared data standards and transparent pilot projects.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Few Americans rely on AI chatbots for news

A recent Pew Research survey shows that relatively few Americans use AI chatbots like ChatGPT to get news. About 2 percent say they often get news this way, and 7 percent say they do so sometimes.

The majority of US adults thus do not turn to AI chatbots as a regular news source, signalling a limited role for chatbots in news dissemination, at least for now.

The finding is part of a broader pattern: despite the growing use of chatbots, news consumption via these tools remains a niche activity. Pew’s data also shows that 34 percent of US adults report using ChatGPT, a share that has roughly doubled since 2023.

While AI chatbots are not yet mainstream for news, their limited uptake raises questions about trust, accuracy and users’ motivations for consuming news.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta to use AI interactions for content and ad recommendations

Meta has announced that from 16 December 2025 it will begin personalising content and ad recommendations on Facebook, Instagram and its other apps using users’ interactions with its generative AI features.

The update means that if you chat with Meta’s AI about a topic, such as hiking, the system may infer your interests and show related content, including posts from hiking groups or ads for boots. Meta emphasises that content and ad recommendations already use signals like likes, shares and follows, but the new change adds AI interactions as another signal.

To keep users in control, Meta will notify them starting 7 October via in-app messages and emails, and users will retain access to settings such as Ads Preferences and feed controls to adjust what they see. Meta says it will not use sensitive AI chat content (religion, health, political beliefs, etc.) to personalise ads.

AI interactions on one account will only be used to personalise experiences across accounts if users have linked those accounts in Meta’s Accounts Centre. Likewise, unless a WhatsApp account is added to the same Accounts Centre, AI interactions on WhatsApp will not influence the experience in Meta’s other apps.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Breakthrough platform gives early warning of painful sickle cell attacks

A London-based health tech firm has developed an AI platform that can predict painful sickle cell crises before they occur. Sanius Health says its system forecasts vaso-occlusive crises with up to 92% sensitivity, offering patients and clinicians valuable lead time.

The technology combines biometric data from wearables with patient-reported outcomes and clinical records to generate daily risk scores. Patients and care teams receive alerts when thresholds are met, enabling early action to prevent hospitalisation.
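Sanius Health has not published its model, but the general pattern the company describes, combining several daily signals into a single risk score and alerting care teams when a threshold is crossed, can be sketched in a few lines of Python. The signal names, weights and threshold below are invented purely for illustration and do not reflect the actual system.

```python
# Toy illustration of threshold-based risk alerting from daily signals.
# The signals, weights and threshold are invented for this example;
# they do not reflect Sanius Health's actual model.

RISK_THRESHOLD = 0.7  # assumed alert threshold

def daily_risk_score(steps: int, sleep_hours: float, reported_pain: int) -> float:
    """Combine wearable and patient-reported signals into a 0-1 risk score (toy weights)."""
    low_activity = max(0.0, 1 - steps / 8000)    # fewer steps -> higher risk
    poor_sleep = max(0.0, 1 - sleep_hours / 8)   # less sleep -> higher risk
    pain = min(reported_pain, 10) / 10           # 0-10 self-reported pain scale
    return round(0.35 * low_activity + 0.25 * poor_sleep + 0.4 * pain, 2)

score = daily_risk_score(steps=1500, sleep_hours=4.0, reported_pain=8)
if score >= RISK_THRESHOLD:
    print(f"Risk score {score}: alert care team for early intervention")
else:
    print(f"Risk score {score}: continue routine monitoring")
```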

In real-world studies involving nearly 400 patients, the AI system identified measurable changes in activity and sleep days before emergencies. Patients using the platform reported fewer admissions, shorter stays, and improved quality of life.

The World Health Organisation says sickle cell disease affects almost eight million people worldwide. Sanius Health is scaling its registry-driven model globally to ensure predictive care reaches patients from London to Lagos and beyond.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Dutch AI actress ignites Hollywood backlash

An AI ‘actress’ created in the Netherlands has sparked controversy across the global film industry. Tilly Norwood, designed by Dutch actress Eline van der Velde, is capable of talking, waving, and crying, and is reportedly being pitched to talent agencies.

Hollywood unions and stars have voiced strong objections. US-based SAG-AFTRA said Norwood was trained on the work of professional actors yet has no life experience or human emotion, warning that its use could undermine existing contracts.

Actresses Natasha Lyonne and Emily Blunt also criticised the Dutch project, with Lyonne calling for a boycott of agencies working with Norwood, and Blunt describing it as ‘really scary’.

Van der Velde defended her AI creation, describing Norwood as a piece of art rather than a replacement for performers. She argued the project should be judged as a new genre rather than compared directly to human actors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China’s new K visa sparks public backlash

China’s new K visa, aimed at foreign professionals in science and technology, has sparked heated debate and online backlash. The scheme, announced in August and launched this week, has been compared by Indian media to the US H-1B visa.

Tens of thousands of social media users in China have voiced fears that the programme will worsen job competition in an already difficult market. Comments also included xenophobic remarks, particularly directed at Indian nationals.

State media outlets have stepped in, defending the policy as a sign of China’s openness while stressing that it is not a simple work permit or immigration pathway. Officials say the visa is designed to attract graduates and researchers from top institutions in STEM fields.

The government has yet to clarify whether the visa allows foreign professionals to work, adding to uncertainty. Analysts note that language barriers, cultural differences, and China’s political environment may pose challenges for newcomers despite Beijing’s drive to attract global talent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NIST pushes longer passphrases and MFA over strict rules

The US National Institute of Standards and Technology (NIST) has updated its password guidelines, urging organisations to drop strict complexity rules. NIST states that requirements such as mandatory symbols and frequent resets often harm usability without significantly improving security.

Instead, the agency recommends using blocklists for breached or commonly used passwords, implementing hashed storage, and rate limiting to resist brute-force attacks. Multi-factor authentication and password managers are encouraged as additional safeguards.

Password length remains essential: short strings are easily cracked, so users should be allowed to create longer passphrases. NIST recommends restricting only extremely long passwords that would slow down hashing.
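As a rough illustration of what this looks like in practice, the Python sketch below applies only length limits and a blocklist check, with no composition rules and no scheduled expiry. The specific length values, file name and function names are assumptions made for the example, not figures taken from the NIST guidelines.

```python
# Minimal sketch of password checks in the spirit of the updated guidance:
# length limits plus a blocklist, with no composition rules or periodic resets.
# MIN_LENGTH, MAX_LENGTH and the blocklist file name are assumptions for this example.

MIN_LENGTH = 8     # allow, and encourage, far longer passphrases
MAX_LENGTH = 128   # cap only to keep password hashing fast

def load_blocklist(path: str = "breached_passwords.txt") -> set[str]:
    """Load known-breached or commonly used passwords, one per line (hypothetical file)."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def check_password(candidate: str, blocklist: set[str]) -> list[str]:
    """Return the reasons a password is rejected; an empty list means it is accepted."""
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append(f"must be at least {MIN_LENGTH} characters")
    if len(candidate) > MAX_LENGTH:
        problems.append(f"must be at most {MAX_LENGTH} characters")
    if candidate.lower() in blocklist:
        problems.append("appears on a list of breached or common passwords")
    # Deliberately absent: 'must contain a symbol/digit/uppercase' rules and
    # mandatory expiry; changes are forced only after a suspected compromise.
    return problems

print(check_password("correct horse battery staple", {"password", "123456"}))  # -> []
```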

The new approach replaces mandatory resets with changes triggered only after suspected compromise, such as a data breach. NIST argues this method reduces fatigue while improving overall account protection.

Businesses adopting these guidelines must audit their existing policies, reconfigure authentication systems, deploy blocklists, and train employees on the new approach. Clear communication of the changes will be key to ensuring compliance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New Gmail phishing attack hides malware inside fake PDFs

Researchers have uncovered a phishing toolkit that disguises attacks as PDF attachments to bypass Gmail’s defences. Known as MatrixPDF, the toolkit blurs document text, embeds prompts, and uses hidden JavaScript to redirect victims to malicious sites.

The method exploits Gmail’s preview function, slipping past filters because the PDF contains no visible links. Users are lured into clicking a fake button to ‘open secure document,’ triggering the attack and fetching malware outside Gmail’s sandbox.

A second variation embeds scripts that connect directly to payload URLs when PDFs are opened in desktop or browser readers. Victims see permission prompts that appear legitimate, but allowing access launches downloads that compromise devices.

Experts warn that PDFs are trusted more than other file types, making this a dangerous evolution of social engineering. Once inside a network, attackers can move laterally, escalate privileges, and plant further malware.
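Because the danger comes from active content hidden inside an otherwise ordinary-looking document, one simple defensive step is to check attachments for embedded JavaScript and automatic actions before opening them. The Python sketch below illustrates the idea with a raw byte scan; the marker list is a common heuristic rather than an exhaustive detector, and it is no substitute for proper sandboxing or a dedicated scanner.

```python
# Minimal illustrative check for active content inside a PDF attachment.
# It scans the raw bytes for PDF name objects commonly associated with
# embedded scripts and automatic actions. A match means "inspect further
# in a sandbox", not proof of malware; no match is not proof of safety.

SUSPICIOUS_MARKERS = [
    b"/JavaScript",   # embedded JavaScript
    b"/JS",           # JavaScript action shorthand
    b"/OpenAction",   # action triggered when the document is opened
    b"/AA",           # additional (automatic) actions
    b"/Launch",       # launches an external application or file
]

def flag_active_content(path: str) -> list[str]:
    """Return the suspicious PDF markers found in the file at `path`."""
    with open(path, "rb") as f:
        data = f.read()
    return [marker.decode() for marker in SUSPICIOUS_MARKERS if marker in data]

if __name__ == "__main__":
    hits = flag_active_content("invoice.pdf")  # hypothetical attachment name
    if hits:
        print("Active content markers found:", ", ".join(hits))
    else:
        print("No obvious active-content markers; still open untrusted files in a sandbox.")
```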

Security leaders recommend restricting personal email access on corporate devices, increasing sandboxing capabilities, and expanding employee training initiatives. Analysts emphasise that awareness and recognition of suspicious files remain crucial in countering this new phishing threat.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!