Autonomous logistics firm Gatik is set to expand its partnership with Loblaw, deploying 50 new self-driving trucks across North America over the next year. The move marks the largest autonomous truck deployment in the region to date.
The slow rollout of self-driving technology has frustrated supply chain watchers, with most firms still testing limited fleets. Gatik’s large-scale deployment signals a shift toward commercial adoption, with 20 trucks to be added by the end of 2025 and an additional 30 by 2026.
The partnership was enabled by Ontario’s Autonomous Commercial Motor Vehicle Pilot Program, a ten-year initiative allowing approved operators to test automated commercial trucks on public roads. Officials hope it will boost road safety and support the trucking sector.
Industry analysts note that North America’s truck driver shortage is one of the most pressing logistics challenges facing the region. Nearly 70% of logistics firms report that driver shortages hinder their ability to meet freight demand, making automation a viable way to address the shortfall.
Gatik, operating in the US and Canada, says the deployment could ease labour pressure and improve efficiency, but safety remains a key concern. Experts caution that striking a balance between rapid rollout and robust oversight will be crucial for establishing trust in autonomous freight operations.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
ByteDance has unveiled Seedream 4.0, its latest AI-powered image generation model, which it claims outperforms Google DeepMind’s Gemini 2.5 Flash Image. The launch signals ByteDance’s bid to rival leading creative AI tools.
Developed by ByteDance’s Seed division, the model combines advanced text-to-image generation with fast, precise image editing. Internal testing reportedly showed superior prompt accuracy, image alignment, and visual quality compared with Google DeepMind’s system.
Artificial Analysis, an independent AI benchmarking firm, called Seedream 4.0 a significant step forward. The model integrates Seedream 3.0’s generation capability with SeedEdit 3.0’s editing tools while maintaining a price of US$30 per 1,000 generations.
ByteDance claims that Seedream 4.0 runs over 10 times faster than earlier versions, enhancing the user experience with near-instant image inference. Early users have praised its ability to make quick, text-prompted edits with high accuracy.
The tool is now available to users in China through Jimeng and Doubao AI apps and businesses via Volcano Engine, ByteDance’s cloud platform. A formal technical report supporting the company’s claims has not yet been released.
The European Commission is working with EU capitals to narrow the list of proposals for large AI training hubs, known as AI Gigafactories. The €20 billion plan will be funded by the Commission (17%), EU countries (17%), and industry (66%) to boost computing capacity for European developers.
The first call drew 76 proposals from 16 countries, far exceeding the initially planned four or five facilities. Most submissions must be merged or dropped, with Poland already seeking a joint bid with the Baltic states as talks continue.
Some EU members will inevitably lose out, with Ursula von der Leyen, the President of the European Commission, hinting that priority could be given to countries already hosting AI Factories. That could benefit Finland, whose Lumi supercomputer is part of a Nokia-led bid to scale up into a Gigafactory.
The plan has raised concerns that Europe’s efforts come too late, as US tech giants invest heavily in larger AI hubs. Still, Brussels hopes its initiative will allow EU developers to compete globally while maintaining control over critical AI infrastructure.
A formal call for proposals is expected by the end of the year, once the legal framework is finalised. Selection criteria and funding conditions will be set to launch construction as early as 2026.
AI headlines often flip between hype and fear, but the truth is more nuanced. Much research is misrepresented, with task overlaps miscast as job losses. Leaders and workers need clear guidance on using AI effectively.
Microsoft Research mapped 200,000 Copilot conversations to work tasks, but headlines warned of job risks. The study showed overlap, not replacement. Context, judgment, and interpretation remain human strengths, meaning AI supports rather than replaces roles.
Other research is similarly skewed. METR found that AI slowed developers by 19%, but mostly due to the learning curves associated with first use. MIT’s ‘GenAI Divide’ measured adoption, not ability, showing workflow gaps rather than technology failure.
Better studies reveal the collaborative power of AI. Harvard’s ‘Cybernetic Teammate’ experiment demonstrated that individuals using AI performed as well as full teams without it. AI bridged technical and commercial silos, boosting engagement and improving the quality of solutions produced.
The future of AI at work will be shaped by thoughtful trials, not headlines. By treating AI as a teammate, organisations can refine workflows, strengthen collaboration, and turn AI’s potential into long-term competitive advantage.
The University of Oxford will become the first UK university to offer free ChatGPT Edu access to all staff and students. The rollout follows a year-long pilot with 750 academics, researchers, and professional services staff across the University and Colleges.
ChatGPT Edu, powered by OpenAI’s GPT-5 model, is designed for education with enterprise-grade security and data privacy. Oxford says it will support research, teaching, and operations while encouraging safe, responsible use through robust governance, training, and guidance.
Staff and students will receive access to in-person and online training, webinars, and specialised guidance on the use of generative AI. A dedicated AI Competency Centre and network of AI Ambassadors will support users, alongside mandatory security training.
The prestigious UK university has also established a Digital Governance Unit and an AI Governance Group to oversee the adoption of emerging technologies. Pilots are underway to digitise the Bodleian Libraries and explore how AI can improve access to historical collections worldwide.
A jointly funded research programme with the Oxford Martin School and OpenAI will study the societal impact of AI adoption. The project is part of OpenAI’s NextGenAI consortium, which brings together 15 global research institutions to accelerate breakthroughs in AI.
The US Treasury has issued an Advance Notice of Proposed Rulemaking (ANPRM) to gather public input on implementing the Guiding and Establishing National Innovation for US Stablecoins (GENIUS) Act. The consultation marks an early step in shaping rules around digital assets.
The GENIUS Act instructs the Treasury to draft rules that foster stablecoin innovation while protecting consumers, preserving stability, and reducing financial crime risks. By opening this process, the Treasury aims to balance technological progress with safeguards for the wider economic system.
Through the ANPRM, the public is encouraged to submit comments, data, and perspectives that may guide the design of the regulatory framework. Although no new rules have been set yet, the consultation allows stakeholders to shape future stablecoin policies.
The initiative follows an earlier request for comment on methods to detect illicit activity involving digital assets, which remains open until 17 October 2025. Submissions in response to the ANPRM must be filed within 30 days of its publication in the Federal Register.
Japan is adopting a softer approach to regulating generative AI, emphasising innovation while managing risks. Its 2025 AI Bill promotes development and safety, supported by international norms and guidelines.
The Japan Fair Trade Commission (JFTC) is running a market study on competition concerns in AI, alongside enforcing the new Mobile Software Competition Act (MSCA), aimed at curbing anti-competitive practices in mobile software.
The AI Bill focuses on transparency, international cooperation, and sector-specific guidance rather than heavy penalties. Policymakers hope this flexible framework will avoid stifling innovation while encouraging responsible adoption.
The MSCA, set to be fully enforced in December 2025, obliges mobile platform operators to ensure interoperability and fair treatment of developers, including potential applications to AI tools and assistants.
With rapid AI advances, regulators in Japan remain cautious but proactive. The JFTC aims to monitor markets closely, issue guidelines as needed, and preserve a balance between competition, innovation, and consumer protection.
Regulators intend the framework to be proportionate, supporting UK competitiveness in global markets. The FCA is also examining how its Consumer Duty should apply to crypto, ensuring firms act to deliver good outcomes for clients.
Views are being sought on complaint-handling, including whether cases should be referred to the Financial Ombudsman Service.
David Geale, executive director of payments and digital finance, said the FCA aims to build a sustainable and competitive crypto sector by balancing innovation with trust and market integrity. He noted the standards would not eliminate investment risks but would give consumers clearer expectations.
The consultation follows draft legislation published by HM Treasury in April 2025. Responses on the discussion paper are due by 15 October 2025, with feedback on the consultation paper closing on 12 November 2025. Final rules are expected in 2026.
Three lawsuits have been filed in US federal courts alleging that Character.AI and its founders, with Google’s backing, deployed predatory chatbots that harmed children. The cases involve the family of 13-year-old Juliana Peralta, who died by suicide in 2023, and two other minors.
The complaints say the chatbots were designed to mimic humans, build dependency, and expose children to sexual content. Using emojis, typos, and pop-culture personas, the bots allegedly gained trust and encouraged isolation from family and friends.
Juliana’s parents say she engaged in explicit chats, disclosed suicidal thoughts, and received no intervention before her death. Nina, 15, from New York, attempted suicide after her mother blocked the app, while a girl from Colorado known as T.S. was also affected.
Character.AI and Google are accused of misrepresenting the app as child-safe and failing to act on warning signs. The cases follow earlier lawsuits from the Social Media Victims Law Center over similar claims that the platform encouraged harm.
SMVLC founder Matthew Bergman stated that the cases underscore the urgent need for accountability in AI design and stronger safeguards to protect children. The legal team is seeking damages and stricter safety standards for chatbot platforms marketed to minors.
The WTO launched the 2025 World Trade Report, titled ‘Making trade and AI work together to benefit all’. The report argues that AI could boost global trade by up to 37% and GDP by 12–13% by 2040, particularly through digitally deliverable services.
It notes that AI can lower trade costs, improve supply-chain efficiency, and create opportunities for small firms and developing countries. Still, it warns that without deliberate action, AI could deepen global inequalities and widen the gap between advanced and developing economies.
The report underscores the need for investment in digital infrastructure, energy, skills, and enabling policies, highlighting the importance of IP protection, competition frameworks, and government support.
A newly developed indicator, the WTO AI Trade Policy Openness Index (AI-TPOI), revealed significant variation in AI-related trade policies across and within income groups.
It assessed three policy areas relevant to AI diffusion: barriers to services trade, restrictions on trade in AI-enabling goods, and limitations on cross-border data flows.
Stronger multilateral cooperation and targeted capacity-building were presented as essential to ensure AI-enabled trade supports inclusive, sustainable prosperity rather than reinforcing existing divides.