Russia orders Apple to set a Russian search engine as default

Russia’s Federal Antimonopoly Service (FAS) has ordered Apple to preinstall a Russian-made search engine, such as Yandex or Mail.ru, as the default on all devices sold in Russia and the Eurasian Economic Union. The regulator claims Apple’s current setup gives foreign providers unfair market advantages.

The letter from FAS director Maxim Shaskolsky said Apple’s practices breach consumer protection laws by denying users equal access to local services. Authorities argue that default settings favour non-Russian search engines and restrict fair competition within domestic markets.

Apple has until 31 October to comply or face potential fines and restrictions. Russia’s Ministry of Digital Affairs warned of serious consequences if the company ignores the directive. Officials noted that Google previously avoided penalties after offering users a search engine choice.

Apple’s relations with Moscow have been tense since 2024, when the firm removed VPN apps under government pressure. Digital rights groups described the move as a threat to privacy, and analysts see the latest demand as part of Russia’s push for greater online control.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia demands answers from AI chatbot providers over child safety

Australia’s eSafety Commissioner has issued legal notices to four major AI companion platforms, requiring them to explain how they are protecting children from harmful or explicit content.

Character.ai, Nomi, Chai, and Chub.ai were all served under the country’s Online Safety Act and must demonstrate compliance with Australia’s Basic Online Safety Expectations.

The notices follow growing concern that AI companions, designed for friendship and emotional support, can expose minors to sexualised conversations, suicidal ideation, and other psychological risks.

eSafety Commissioner Julie Inman Grant said the companies must show how their systems prevent such harms, not merely react to them, warning that failure to comply could lead to penalties of up to $825,000 per day.

AI companion chatbots have surged in popularity among young users, with Character.ai alone attracting nearly 160,000 monthly active users in Australia.

The Commissioner stressed that these services must integrate safety measures by design, as new enforceable codes now extend to AI platforms that previously operated with minimal oversight.

The move comes amid wider efforts to regulate emerging AI technologies and ensure stronger child protection standards online.

Breaches of the new codes could result in civil penalties of up to $49.5 million, marking one of the toughest online safety enforcement regimes globally.

Meta seeks delay in complying with Dutch court order on Facebook and Instagram timelines

Meta has yet to adjust Facebook and Instagram’s timelines despite an Amsterdam court ruling that found its current design violates European law. The company says it needs more time to make the required changes and has asked the court to extend its deadline until 31 January 2026.

The dispute stems from Meta’s use of algorithmic recommendation systems that determine which posts appear in users’ feeds and in what order. Both Instagram and Facebook offer an option to set the timeline to chronological order, but it is hard to find, and the feed reverts to the algorithmic timeline as soon as users close the app.

The Amsterdam court earlier ruled that these systems, which reset user preferences and hide options for chronological viewing, breach the Digital Services Act (DSA) by denying users genuine autonomy, freedom of choice, and control over how information is presented.

The judge ordered Meta to modify both apps within two weeks or face penalties of €100,000 per day, up to €5 million. More than two weeks later, Meta has yet to comply, arguing that the technical changes cannot be completed within the court’s timeline.

Dutch civil rights group Bits of Freedom, which brought the case, criticised the delay as a refusal to take responsibility. ‘The legislator wants it, experts say it can be done, and the court says it must be done. Yet Meta fails to bring its platforms into line with our legislation,’ Evelyn Austin, the organisation’s director, said in a statement.

The Amsterdam Court of Appeal will review Meta’s request for an extension on 27 October.

Deepfake targeting Irish presidential candidate sparks election integrity warning

Irish presidential candidate Catherine Connolly condemned a deepfake AI video that falsely announced her withdrawal from the race. The clip, designed to resemble an RTÉ News broadcast, spread online before being reported and removed from major social media platforms.

Connolly said the video was a disgraceful effort to mislead voters and damage democracy. Her campaign team filed a complaint with the Irish Electoral Commission and requested that all copies be clearly labelled as fake.

Experts at Dublin City University identified slight distortions in speech and lighting as signs of AI manipulation. They warned that the rapid spread of synthetic videos underscores weak content moderation by online platforms.

Connolly urged the public not to share the clip and to respond through civic participation. Authorities are monitoring digital interference as Ireland prepares for its presidential vote on Friday.

‘Wicked’ AI data scraping: Pullman calls for regulation to protect creative rights

Author Philip Pullman has publicly urged the UK government to intervene in what he describes as the ‘wicked’ practice of AI firms scraping authors’ works to train models. Pullman insists that writing is more than data; it is creative labour, and authors deserve protection.

Pullman’s intervention comes amid increasing concern in the literary community about how generative AI models are built using large volumes of existing texts, often without permission or clear compensation. He argues that uninhibited scraping undermines the rights of creators and could hollow out the foundations of culture.

He has called on UK policymakers to establish clearer rules and safeguards over how AI systems access, store, and reuse writers’ content. Pullman warns that without intervention, authors may lose control over their work, and the public could be deprived of authentic, quality literature.

His statement adds to growing pressure from writers, unions and rights bodies calling for better transparency, consent mechanisms and a balance between innovation and creator rights.

Dutch watchdog warns AI chatbots threaten election integrity

The Dutch data protection authority (Autoriteit Persoonsgegevens, AP) has warned that AI chatbots are biased and unreliable sources of voting advice ahead of national elections. An investigation by the AP found that chatbots often steered users towards the same two parties, regardless of their actual preferences.

In over half of the tests, the bots suggested either Geert Wilders’ far-right Freedom Party (PVV) or the left-wing GroenLinks-PvdA led by Frans Timmermans. Other parties, such as the centre-right CDA, were rarely mentioned even when users’ answers closely matched their platforms.

AP deputy head Monique Verdier said that voters were being steered towards parties that did not necessarily reflect their political views, warning that this undermines the integrity of free and fair elections.

The report comes ahead of the 29 October election, where the PVV currently leads the polls. However, the race remains tight, with GroenLinks-PvdA and CDA still in contention and many voters undecided.

Although the AP noted that the bias was not intentional, it attributed the problem to the way AI chatbots function, highlighting the risks of relying on opaque systems for democratic decisions.

Teachers become intelligence coaches in AI-driven learning

AI is reshaping education, pushing teachers to act as intelligence coaches and co-creators instead of traditional instructors.

Experts at an international conference hosted in Greece to celebrate Athens College’s centennial discussed how AI personalises learning and demands a redefined teaching role.

Bill McDiarmid, professor emeritus at the University of North Carolina, said educators must now ask students where they find their information and why they trust it.

Similarly, Yong Zhao of the University of Kansas highlighted that AI enables individualised learning, allowing every student to achieve their full potential.

Speakers agreed AI should serve as a supportive partner, not a replacement, helping schools prepare students for an active role in shaping their futures.

The event, held under Greek President Konstantinos Tasoulas’ auspices, also urged caution when experimenting with AI on minors due to potential long-term risks.

Is the world ready for AI to rule justice?

AI is creeping into almost every corner of our lives, and it seems the justice system’s turn has finally come. As technology reshapes the way we work, communicate, and make decisions, its potential to transform legal processes is becoming increasingly difficult to ignore. The justice system, however, is one of the most ethically sensitive and morally demanding fields in existence. 

For AI to play a meaningful role in it, it must go beyond algorithms and data. It needs to understand the principles of fairness, context, and morality that guide every legal judgement. And perhaps more challengingly, it must do so within a system that has long been deeply traditional and conservative, one that values precedent and human reasoning above all else. Yet, from courts to prosecutors to lawyers, AI promises speed, efficiency, and smarter decision-making, but can it ever truly replace the human touch?

AI is reshaping the justice system with unprecedented efficiency, but true progress depends on whether humanity is ready to balance innovation with responsibility and ethical judgement.

AI in courts: Smarter administration, not robot judges… yet

Courts across the world are drowning in paperwork, delays, and endless procedural tasks, challenges that are well within AI’s capacity to solve efficiently. From classifying cases and managing documentation to identifying urgent filings and analysing precedents, AI systems are beginning to serve as silent assistants within courtrooms. 

The German judiciary, for example, has already shown what this looks like in practice. AI tools such as OLGA and Frauke have helped categorise thousands of cases, extract key facts, and even draft standardised judgments in air passenger rights claims, cutting processing times by more than half. For a system long burdened by backlogs, such efficiency is revolutionary.

Still, the conversation goes far beyond convenience. Justice is not a production line; it is built on fairness, empathy, and the capacity to interpret human intent. Even the most advanced algorithm cannot grasp the nuance of remorse, the context of equality, or the moral complexity behind each ruling. The question is whether societies are ready to trust machine intelligence to participate in moral reasoning.

The final, almost utopian scenario would be a world where AI itself serves as a judge who is unbiased, tireless, and immune to human error or emotion. Yet even as this vision fascinates technologists, legal experts across Europe, including the EU Commission and the OECD, stress that such a future must remain purely theoretical. Human judges, they argue, must always stay at the heart of justice: AI may assist in the process, but it must never be the one to decide it. The idea is not to replace judges but to help them navigate the overwhelming sea of information that modern justice generates.

Courts may soon become smarter, but true justice still depends on something no algorithm can replicate: the human conscience. 

AI for prosecutors: Investigating with superhuman efficiency

Prosecutors today are also sifting through thousands of documents, recordings, and messages for every major case. AI can act as a powerful investigative partner, highlighting connections, spotting anomalies, and bringing clarity to complex cases that would take humans weeks to unravel. 

Especially in criminal law, cases can involve terabytes of documents, evidence that humans can hardly process within tight legal deadlines or between hearings, yet must be reviewed thoroughly. AI tools can sift through this massive data, flag inconsistencies, detect hidden links between suspects, and reveal patterns that might otherwise remain buried. Subtle details that might escape the human eye can be detected by AI, making it an invaluable ally in uncovering the full picture of a case. By handling these tasks at superhuman speed, AI could also help accelerate the notoriously slow pace of legal proceedings, giving prosecutors more time to focus on strategy and courtroom preparation. 

More advanced systems are already being tested in Europe and the US, capable of generating detailed case summaries and predicting which evidence is most likely to hold up in court. Some experimental tools can even evaluate witness credibility based on linguistic cues and inconsistencies in testimony. In this sense, AI becomes a strategic partner, guiding prosecutors toward stronger, more coherent arguments. 

AI for lawyers: Turning routine into opportunity

AI adoption may prove most transformative in the work of lawyers, where turning information into insight and strategy is at the core of the profession. AI can take over repetitive tasks: reviewing contracts, drafting documents, or scanning case files, freeing lawyers to focus on the work that AI cannot replace, such as strategic thinking, creative problem-solving, and personalised client support.

AI can be incredibly useful for analysing publicly available cases, helping lawyers see how similar situations have been handled, identify potential legal opportunities, and craft stronger, more informed arguments. By recognising patterns across multiple cases, it can suggest creative questions for witnesses and suspects, highlight gaps in the evidence, and even propose potential defence strategies. 

AI also transforms client communication. Chatbots and virtual assistants can manage routine queries, schedule meetings, and provide concise updates, giving lawyers more time to understand clients’ needs and build stronger relationships. By handling the mundane, AI allows lawyers to spend their energy on reasoning, negotiation, and advocacy.

Balancing promise with responsibility

AI is transforming the way courts, prosecutors, and lawyers operate, but its adoption is far from straightforward. While it can make work significantly easier, the technology also carries risks that legal professionals cannot ignore. Historical bias in data can shape AI outputs, potentially reinforcing unfair patterns if humans fail to oversee its use. Similarly, sensitive client information must be protected at all costs, making data privacy a non-negotiable responsibility. 

Training and education are therefore crucial. It is essential to understand not only what AI can do but also its limits: how to interpret its suggestions, check for hidden biases, and decide when human judgement must prevail. Without this understanding, AI risks being a tool that misleads rather than empowers.

The promise of AI lies in its ability to free humans from repetitive work, allowing professionals to focus on higher-value tasks. But its power is conditional: efficiency and insight mean little without the ethical compass of the human professionals guiding it.

Ultimately, the justice system is more than a process. It is about fairness, empathy, and moral reasoning. AI can assist, streamline, and illuminate, but the responsibility for decisions, for justice itself, remains squarely with humans. In the end, the true measure of AI’s success in law will be how it enhances human judgement, not how it replaces it.

So, is the world ready for AI to rule justice? The answer remains clear. While AI can transform how justice is delivered, the human mind, heart, and ethical responsibility must remain at the centre. AI may guide the way, but it cannot and should not hold the gavel.

Judge bars NSO Group from using spyware to target WhatsApp in landmark ruling

A US federal judge has permanently barred NSO Group, a commercial spyware company, from targeting WhatsApp and, in the same ruling, cut damages owed to Meta from $168 million to $4 million.

The decision by Judge Phyllis Hamilton of the Northern District of California stems from NSO’s 2019 hack of WhatsApp, when the company’s Pegasus spyware targeted 1,400 users through a zero-click exploit. The injunction bans NSO from accessing or assisting access to WhatsApp’s systems, a restriction the firm previously warned could threaten its business model.

An NSO spokesperson said the order ‘will not apply to NSO’s customers, who will continue using the company’s technology to help protect public safety,’ but declined to clarify how that interpretation aligns with the court’s wording. By contrast, Will Cathcart, head of WhatsApp, stated on X that the decision ‘bans spyware maker NSO from ever targeting WhatsApp and our global users again.’

Pegasus has allegedly been used against journalists, activists, and dissidents worldwide. The ruling sets an important precedent for US companies whose platforms have been compromised by commercial surveillance firms.

AI chats with ‘Jesus’ spark curiosity and criticism

Text With Jesus, an AI chatbot from Catloaf Software, lets users message figures like ‘Jesus’ and ‘Moses’ for scripture-quoting replies. CEO Stéphane Peter says curiosity is driving rapid growth despite accusations of blasphemy and worries about tech intruding on faith.

Built on OpenAI’s ChatGPT, the app now includes AI pastors and counsellors for questions on scripture, ethics, and everyday dilemmas. Peter, who describes himself as not particularly religious, says the aim is access and engagement, not replacing ministry or community.

Examples range from ‘Do not be anxious…’ (Philippians 4:6) to the Golden Rule (Matthew 7:12), with answers framed in familiar verse. Fans call it a safe, approachable way to explore belief; critics argue only scripture itself should speak.

Faith leaders and commentators have cautioned against mistaking AI outputs for wisdom. The Vatican has stressed that AI is a tool, not truth, and that young people need guidance, not substitution, in spiritual formation.

Reception is sharply split online. Supporters praise its convenience and the curiosity it sparks; detractors cite theological drift, emoji-laden replies, and a ‘Satan’ mode they find chilling. The app holds a 4.7 rating on the Apple App Store from more than 2,700 reviews.
