The Dutch privacy watchdog, Autoriteit Persoonsgegevens (AP), is warning LinkedIn users in the Netherlands to review their settings to prevent their data from being used for AI training.
LinkedIn plans to use names, job titles, education history, locations, skills, photos, and public posts from European users to train its systems. Private messages will not be included; however, the sharing option is enabled by default.
AP Deputy Chair Monique Verdier said the move poses significant risks. She warned that once personal data is used to train a model, it cannot be removed, and its future uses are unpredictable.
With its European headquarters in Dublin, LinkedIn falls under the jurisdiction of Ireland’s Data Protection Commission, which will determine whether the plan can proceed. The AP said it is working with Irish and EU counterparts and has already received complaints.
Users must opt out by 3 November if they do not wish to have their data used. They can disable the setting via the AP’s link or manually in LinkedIn under ‘settings & privacy’ → ‘data privacy’ → ‘data for improving generative AI’.
US tech giant Apple has called for the repeal of the EU’s Digital Markets Act, claiming the rules undermine user privacy, disrupt services, and erode product quality.
The company urged the Commission to replace the legislation with a ‘fit for purpose’ framework, or hand enforcement to an independent agency insulated from political influence.
Apple argued that the Act’s interoperability requirements had delayed the rollout of features in the EU, including Live Translation on AirPods and iPhone mirroring. Additionally, the firm accused the Commission of adopting extreme interpretations that created user vulnerabilities instead of protecting them.
Brussels has dismissed those claims. A Commission spokesperson stressed that DMA compliance is an obligation, not an option, and said the rules guarantee fair competition by forcing dominant platforms to open access to rivals.
The dispute intensifies long-running friction between US tech firms and EU regulators.
Apple has already appealed to the courts, with a public hearing scheduled in October, while Washington has criticised the bloc’s wider digital policy.
The clash has deepened transatlantic trade tensions, with the White House recently threatening tariffs after fresh fines against another American tech company.
Autonomous logistics firm Gatik is set to expand its partnership with Loblaw, deploying 50 new self-driving trucks across North America over the next year. The move marks the largest autonomous truck deployment in the region to date.
The slow rollout of self-driving technology has frustrated supply chain watchers, with most firms still testing limited fleets. Gatik’s large-scale deployment signals a shift toward commercial adoption, with 20 trucks to be added by the end of 2025 and an additional 30 by 2026.
The partnership was enabled by Ontario’s Autonomous Commercial Motor Vehicle Pilot Program, a ten-year initiative allowing approved operators to test automated commercial trucks on public roads. Officials hope it will boost road safety and support the trucking sector.
Industry analysts note that North America’s truck driver shortage is one of the most pressing logistics challenges facing the region. Nearly 70% of logistics firms report that driver shortages hinder their ability to meet freight demand, making automation an increasingly attractive way to close the gap.
Gatik, operating in the US and Canada, says the deployment could ease labour pressure and improve efficiency, but safety remains a key concern. Experts caution that striking a balance between rapid rollout and robust oversight will be crucial for establishing trust in autonomous freight operations.
ByteDance has unveiled Seedream 4.0, its latest AI-powered image generation model, which it claims outperforms Google DeepMind’s Gemini 2.5 Flash Image. The launch signals ByteDance’s bid to rival leading creative AI tools.
Developed by ByteDance’s Seed division, the model combines advanced text-to-image generation with fast, precise image editing. Internal testing reportedly showed superior prompt accuracy, image alignment, and visual quality compared with Google DeepMind’s Gemini 2.5 Flash Image.
Artificial Analysis, an independent AI benchmarking firm, called Seedream 4.0 a significant step forward. The model integrates Seedream 3.0’s generation capability with SeedEdit 3.0’s editing tools while maintaining a price of US$30 per 1,000 generations.
ByteDance claims that Seedream 4.0 runs over 10 times faster than earlier versions, enhancing the user experience with near-instant image inference. Early users have praised its ability to make quick, text-prompted edits with high accuracy.
The tool is now available to consumers in China through the Jimeng and Doubao AI apps, and to businesses via Volcano Engine, ByteDance’s cloud platform. A formal technical report supporting the company’s claims has not yet been released.
The European Commission is working with EU capitals to narrow down the list of proposals for large AI training hubs, known as AI Gigafactories. The €20 billion plan, intended to boost computing capacity for European developers, will be funded by the Commission (17%), EU countries (17%), and industry (66%).
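Assuming those shares apply to the full €20 billion, the split would work out to roughly €3.4 billion each from the Commission and the member states, and about €13.2 billion from industry.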
The first call drew 76 proposals from 16 countries, far exceeding the initially planned four or five facilities. Most submissions must be merged or dropped, with Poland already seeking a joint bid with the Baltic states as talks continue.
Some EU members will inevitably lose out, with Ursula von der Leyen, the President of the European Commission, hinting that priority could be given to countries already hosting AI Factories. That could benefit Finland, whose Lumi supercomputer is part of a Nokia-led bid to scale up into a Gigafactory.
The plan has raised concerns that Europe’s efforts come too late, as US tech giants invest heavily in larger AI hubs. Still, Brussels hopes its initiative will allow EU developers to compete globally while maintaining control over critical AI infrastructure.
A formal call for proposals is expected by the end of the year, once the legal framework is finalised. Selection criteria and funding conditions will be set with the aim of starting construction as early as 2026.
AI headlines often flip between hype and fear, but the truth is more nuanced. Much research is misrepresented, with task overlaps miscast as job losses. Leaders and workers need clear guidance on using AI effectively.
Microsoft Research mapped 200,000 Copilot conversations to work tasks, but headlines warned of job risks. The study showed overlap, not replacement. Context, judgment, and interpretation remain human strengths, meaning AI supports rather than replaces roles.
Other research is similarly skewed. METR found that AI slowed developers by 19%, but mostly due to the learning curves associated with first use. MIT’s ‘GenAI Divide’ measured adoption, not ability, showing workflow gaps rather than technology failure.
Better studies reveal the collaborative power of AI. Harvard’s ‘Cybernetic Teammate’ experiment demonstrated that individuals using AI performed as well as full teams without it. AI bridged technical and commercial silos, boosting engagement and improving the quality of solutions produced.
The future of AI at work will be shaped by thoughtful trials, not headlines. By treating AI as a teammate, organisations can refine workflows, strengthen collaboration, and turn AI’s potential into long-term competitive advantage.
The University of Oxford will become the first UK university to offer free ChatGPT Edu access to all staff and students. The rollout follows a year-long pilot with 750 academics, researchers, and professional services staff across the University and Colleges.
ChatGPT Edu, powered by OpenAI’s GPT-5 model, is designed for education with enterprise-grade security and data privacy. Oxford says it will support research, teaching, and operations while encouraging safe, responsible use through robust governance, training, and guidance.
Staff and students will receive access to in-person and online training, webinars, and specialised guidance on the use of generative AI. A dedicated AI Competency Centre and network of AI Ambassadors will support users, alongside mandatory security training.
The prestigious UK university has also established a Digital Governance Unit and an AI Governance Group to oversee the adoption of emerging technologies. Pilots are underway to digitise the Bodleian Libraries and explore how AI can improve access to historical collections worldwide.
A jointly funded research programme with the Oxford Martin School and OpenAI will study the societal impact of AI adoption. The project is part of OpenAI’s NextGenAI consortium, which brings together 15 global research institutions to accelerate breakthroughs in AI.
The US Treasury has issued an Advance Notice of Proposed Rulemaking (ANPRM) to gather public input on implementing the Guiding and Establishing National Innovation for US Stablecoins (GENIUS) Act. The consultation marks an early step in shaping rules around digital assets.
The GENIUS Act instructs the Treasury to draft rules that foster stablecoin innovation while protecting consumers, preserving stability, and reducing financial crime risks. By opening this process, the Treasury aims to balance technological progress with safeguards for the wider economic system.
Through the ANPRM, the public is encouraged to submit comments, data, and perspectives that may guide the design of the regulatory framework. Although no new rules have been set yet, the consultation allows stakeholders to shape future stablecoin policies.
The initiative follows an earlier request for comment on methods to detect illicit activity involving digital assets, which remains open until 17 October 2025. Submissions in response to the ANPRM must be filed within 30 days of its publication in the Federal Register.
Japan is adopting a softer approach to regulating generative AI, emphasising innovation while managing risks. Its 2025 AI Bill promotes development and safety, supported by international norms and guidelines.
The Japan Fair Trade Commission (JFTC) is running a market study on competition concerns in AI, alongside enforcing the new Mobile Software Competition Act (MSCA), aimed at curbing anti-competitive practices in mobile software.
The AI Bill focuses on transparency, international cooperation, and sector-specific guidance rather than heavy penalties. Policymakers hope this flexible framework will avoid stifling innovation while encouraging responsible adoption.
The MSCA, set to be fully enforced in December 2025, obliges mobile platform operators to ensure interoperability and fair treatment of developers, including potential applications to AI tools and assistants.
With rapid AI advances, regulators in Japan remain cautious but proactive. The JFTC aims to monitor markets closely, issue guidelines as needed, and preserve a balance between competition, innovation, and consumer protection.
The UK’s Financial Conduct Authority (FCA) is consulting on a new regulatory framework for cryptoasset firms. Regulators intend the framework to be proportionate, supporting UK competitiveness in global markets. The FCA is also examining how its Consumer Duty should apply to crypto, ensuring firms act to deliver good outcomes for clients.
Views are being sought on complaint-handling, including whether cases should be referred to the Financial Ombudsman Service.
David Geale, executive director of payments and digital finance, said the FCA aims to build a sustainable and competitive crypto sector by balancing innovation with trust and market integrity. He noted the standards would not eliminate investment risks but would give consumers clearer expectations.
The consultation follows draft legislation published by HM Treasury in April 2025. Responses on the discussion paper are due by 15 October 2025, with feedback on the consultation paper closing on 12 November 2025. Final rules are expected in 2026.