ICE-tracking apps pulled from the App Store

Apple has taken down several mobile apps used to track US Immigration and Customs Enforcement (ICE) activity, sparking backlash from developers and digital rights advocates. The removals follow reported pressure from the US Department of Justice, which has cited safety and legal concerns.

One affected app, Eyes Up, was designed to alert users to ICE raids and detention locations. Its developer, identified only as Mark for safety reasons, said he believes the decision was politically motivated and vowed to contest it.

The takedown reflects a wider debate over whether app stores should host software linked to law enforcement monitoring or protest activity. Developers argue their tools support community safety and transparency, while regulators say such apps could interfere with federal operations.

Apple has not provided detailed reasoning for its decision beyond referencing its developer guidelines. Google has also reportedly removed similar apps from its Play Store, citing policy compliance. Both companies face scrutiny over how content moderation intersects with political and civil rights issues.

Civil liberties groups warn that the decision could set a precedent limiting speech and digital activism in the US. The affected developers have said they will continue to distribute their apps through alternative channels while challenging the removals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Meta may bring Reels to the big screen with Instagram TV

Instagram is reportedly exploring plans to launch a dedicated TV app aimed at expanding its video reach across larger screens.

The move was revealed by CEO Adam Mosseri at the Bloomberg Screentime conference in Los Angeles, where he said that as consumption behaviour shifts toward TV, Instagram must follow.

Mosseri clarified that there’s no official launch yet, but that the company is actively considering how to present Instagram content, especially Reels, on TV devices in a compelling way.

He also ruled out plans to license live sports or Hollywood content for the TV app, emphasising that Instagram would keep its existing focus on short-form, vertical video rather than pivoting into full-length entertainment.

The proposed TV app would deepen Instagram’s stake in the video space and help it compete more directly with YouTube, TikTok and other video platforms, especially as users increasingly watch video content in living rooms.

However, translating vertical video formats like Reels to a horizontal, large-screen environment poses design, UX and monetisation challenges.

Grok to get new AI video detection tools, Musk says

Musk said Grok will analyse bitstreams for AI signatures and scan the web to verify the origins of videos. Grok itself added that it will detect subtle AI artefacts in compression and generation patterns that humans cannot see.

AI tools such as Grok Imagine and Sora are reshaping the internet by making realistic video generation accessible to anyone. The rise of deepfakes has alarmed users, who warn that high-quality fake videos could soon be indistinguishable from real footage.

A user on X expressed concern that leaders are not addressing the growing risks. Elon Musk responded, revealing that his AI company xAI is developing Grok’s ability to detect AI-generated videos and trace their origins online.

The detection features aim to rebuild trust in digital media as AI-generated content spreads. Commentators have dubbed the flood of such content ‘AI slop’, raising concerns about misinformation and consent.

Concerns about deepfakes have grown since OpenAI launched the Sora app. A surge in deepfake content prompted OpenAI to tighten restrictions on cameo mode, allowing users to opt out of specific scenarios.

Age verification and online safety dominate EU ministers’ Horsens meeting

EU digital ministers are meeting in Horsens on 9–10 October to improve the protection of minors online. Age verification, child protection, and digital sovereignty are at the top of the agenda under the Danish EU Presidency.

The Informal Council Meeting on Telecommunications is hosted by the Ministry of Digital Affairs of Denmark and chaired by Caroline Stage. European Commission Executive Vice-President Henna Virkkunen is also attending to support discussions on shared priorities.

Ministers are considering measures to prevent children from accessing age-inappropriate platforms and reduce exposure to harmful features like addictive designs and adult content. Stronger safeguards across digital services are being discussed.

The talks also focus on Europe’s technological independence. Ministers aim to enhance the EU’s digital competitiveness and sovereignty while setting a clear direction ahead of the Commission’s upcoming Digital Fairness Act proposal.

A joint declaration, ‘The Jutland Declaration’, is expected as an outcome. It will highlight the need for stronger EU-level measures and effective age verification to create a safer online environment for children.

OpenAI joins dialogue with the EU on fair and transparent AI development

US AI company OpenAI has met with the European Commission to discuss competition in the rapidly expanding AI sector.

The meeting focused on how large technology firms such as Apple, Microsoft and Google shape access to digital markets through their operating systems, app stores and search engines.

During the discussion, OpenAI highlighted that such platforms significantly influence how users and developers engage with AI services.

The company encouraged regulators to ensure that innovation and consumer choice remain priorities as the industry grows, noting that collaboration between large and small players can help maintain a balanced ecosystem.

The issue is complicated by OpenAI’s own partnerships with several leading technology companies. Microsoft, a key investor, has integrated ChatGPT into Windows 11’s Copilot, while Apple recently added ChatGPT support to Siri as part of its Apple Intelligence features.

Therefore, OpenAI’s engagement with regulators is part of a broader dialogue about maintaining open and competitive markets while fostering cooperation across the industry.

Although the European Commission has not announced any new investigations, the meeting reflects ongoing efforts to understand how AI platforms interact within the broader digital economy.

OpenAI and other stakeholders are expected to continue contributing to discussions to ensure transparency, fairness and sustainable growth in the AI ecosystem.

Council of Europe leads digital governance dialogue at SEEDIG 2025 in Athens

The Council of Europe is taking an active role in shaping regional digital policy by leading three key panels at the Southeastern European Dialogue on Internet Governance (SEEDIG 2025), held in Athens on 10–11 October. The discussions bring together policymakers, industry leaders, and civil society to strengthen cooperation on human rights, democracy, and the rule of law in the digital age.

The first day focuses on bridging human rights and digital innovation. A panel on ‘Public-Private Policy Dialogue’ examines how governments and companies can align emerging technologies with ethical standards through frameworks like the Council of Europe’s AI Convention. Another session tackles harmful online content and disinformation, exploring ways to balance content moderation with freedom of expression and democratic resilience in South-Eastern Europe.

On 11 October, the spotlight shifts to ‘Cyber Interference with Democracy,’ addressing how digital technologies can be misused to manipulate elections and public trust. Experts will discuss real-world cases of cyber interference and propose measures to protect democratic institutions through human rights–based approaches.

Ahead of the event, Council of Europe representatives will also meet participants of the SEEDIG Youth School to discuss opportunities within the Council’s Digital Agenda.

Facebook and Instagram Reels get multilingual boost with Meta AI

Meta has introduced new AI-powered translation features that allow Facebook and Instagram users to enjoy reels from around the world in multiple languages.

Meta AI now translates, dubs, and lip-syncs short videos in English, Spanish, Hindi, and Portuguese, with more languages to be added soon.

The tool reproduces a creator’s voice and tone while automatically syncing the translated audio to their lip movements, providing a natural viewing experience. It is free for Facebook creators with over 1,000 followers and for all public Instagram accounts in countries where Meta AI is available.

The expansion is part of Meta’s goal to make global content more accessible and to help creators reach wider audiences. By breaking language barriers, Meta aims to strengthen community connections and turn Reels into a platform for global cultural exchange.

Retailers face new pressure under California privacy law

California has entered a new era of privacy and AI enforcement after the state’s privacy regulator fined Tractor Supply USD 1.35 million for failing to honour opt-outs and ignoring Global Privacy Control signals. The case marks the largest penalty yet from the California Privacy Protection Agency.

The case signals a widening focus on how companies manage consumer data, verification processes and third-party vendors. Regulators are now demanding that privacy signals be enforced at the technology layer, not just displayed through website banners or webforms.

Retailers must now show active, auditable compliance, with clear privacy notices, automated data controls and stronger vendor agreements. Regulators have also warned that businesses will be held responsible for partner failures and poor oversight of cookies and tracking tools.

At the same time, California’s new AI law, SB 53, extends governance obligations to frontier AI developers, requiring transparency around safety benchmarks and misuse prevention. The measure connects AI accountability to broader data governance, reinforcing that privacy and AI oversight are now inseparable.

Executives across retail and technology are being urged to embed compliance and governance into daily operations. California’s regulators are shifting from punishing visible lapses to demanding continuous, verifiable proof of compliance across both data and AI systems.

Ant Group launches trillion-parameter AI model Ling-1T

Ant Group has unveiled its Ling AI model family, introducing Ling-1T, a trillion-parameter large language model that has been open-sourced for public use.

The Ling family now includes three main series: the Ling non-thinking models, the Ring thinking models, and the multimodal Ming models.

Ling-1T delivers state-of-the-art performance in code generation, mathematical reasoning, and logical problem-solving, achieving 70.42% accuracy on the 2025 AIME benchmark.

The model combines efficient inference with strong reasoning capabilities, marking a major advance in AI development for complex cognitive tasks.

Ant Group’s Chief Technology Officer, He Zhengyu, said that the company views AGI as a public good that should benefit society.

The release of Ling-1T and the earlier Ring-1T-preview underscores Ant Group’s commitment to open, collaborative AI innovation and the development of inclusive AGI technologies.

OSCE warns AI threatens freedom of thought

The OSCE has launched a new publication warning that rapid progress in AI threatens the fundamental human right to freedom of thought. The report, Think Again: Freedom of Thought in the Age of AI, calls on governments to create human rights-based safeguards for emerging technologies.

Speaking during the Warsaw Human Dimension Conference, Professor Ahmed Shaheed of the University of Essex said that freedom of thought underpins most other rights and must be actively protected. He urged states to work with ODIHR to ensure AI development respects personal autonomy and dignity.

Experts at the event said AI’s growing influence on daily life risks eroding individuals’ ability to form independent opinions. They warned that manipulation of online information, targeted advertising, and algorithmic bias could undermine free thought and democratic participation.

ODIHR recommends that states prevent coercion, discrimination, and digital manipulation, ensuring societies remain open to diverse ideas. Protecting freedom of thought, the report concludes, is essential to preserving human dignity and democratic resilience in an age shaped by AI.
