Toronto explores intelligent traffic systems to ease congestion and improve transit

Toronto’s congestion, among the worst in North America, continues to frustrate commuters, with drivers spending an average of about 100 hours in traffic each year.

Experts and city officials are now considering AI-driven traffic signal optimisation as a key tool to improve traffic flow by dynamically adjusting signal timing across the city’s roughly 2,500 intersections.

AI systems could analyse real-time traffic patterns and adjust signals faster than manual control, helping reduce idle time, clear bottlenecks and support transit modes like the Finch West LRT by prioritising movement where needed.
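
As a rough illustration of the underlying idea, the sketch below splits an intersection’s green time in proportion to detected queue lengths. The approach names, timing limits and allocation rule are illustrative assumptions, not details of Toronto’s planned system.

```python
# Hypothetical adaptive signal rule: split green time in proportion to
# observed queues. All names and numbers are illustrative assumptions.

MIN_GREEN, MAX_GREEN = 10, 60   # seconds of green per approach
CYCLE_GREEN_BUDGET = 90         # total green time to split per cycle

def allocate_green_time(queue_lengths: dict[str, int]) -> dict[str, float]:
    """Allocate the cycle's green time proportionally to queue lengths,
    clamped so every approach gets at least a minimum green.
    (A real controller would renormalise after clamping; omitted here.)"""
    total = sum(queue_lengths.values()) or 1
    plan = {}
    for approach, queue in queue_lengths.items():
        share = CYCLE_GREEN_BUDGET * queue / total
        plan[approach] = min(MAX_GREEN, max(MIN_GREEN, share))
    return plan

# Example sensor counts for a four-way intersection.
print(allocate_green_time({"north": 12, "south": 9, "east": 3, "west": 1}))
```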

While details of Toronto’s broader congestion management plan are still being finalised, this high-tech approach is being positioned as one of the most promising ways to address chronic gridlock and improve overall mobility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI accelerates discovery in complex materials science

Scientists are increasingly applying generative AI models to address complex problems in materials science, such as predicting structures, simulating properties, and guiding the discovery of advanced materials with novel functions.

Traditional computational methods, such as density functional theory, can be slow and resource-intensive, whereas AI-based tools can learn from existing data and propose candidate materials more efficiently.

Early applications of these generative approaches include designing materials for energy storage, catalysis, and electronic applications, speeding up workflows that previously involved large amounts of trial and error.

Researchers emphasise that while AI does not yet replace physics-based modelling, it can complement it by narrowing the search space and suggesting promising leads for experimental validation.
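
That division of labour can be pictured as a funnel: a generative model proposes many candidates, a fast learned scorer ranks them, and only a shortlist goes on to slow physics-based validation. In the sketch below, every function is a mock stand-in rather than a real model or simulation code.

```python
# Illustrative generate-and-screen loop. The generator, surrogate scorer
# and physics check are hypothetical stand-ins, not any published model
# or DFT package.
import random

def generate_candidates(n: int) -> list[str]:
    """Stand-in for a generative model proposing candidate compositions."""
    elements = ["Li", "Na", "Mg", "Al", "Si", "Fe", "Co", "Ni"]
    return ["".join(random.sample(elements, 2)) for _ in range(n)]

def cheap_surrogate_score(candidate: str) -> float:
    """Stand-in for a fast learned property predictor (deterministic mock)."""
    return (hash(candidate) % 1000) / 1000

def expensive_physics_check(candidate: str) -> bool:
    """Stand-in for slow physics-based validation, e.g. a DFT run."""
    return cheap_surrogate_score(candidate) > 0.5

# Generate broadly, screen cheaply, validate only the best few.
candidates = generate_candidates(1000)
shortlist = sorted(candidates, key=cheap_surrogate_score, reverse=True)[:10]
validated = [c for c in shortlist if expensive_physics_check(c)]
print(f"{len(candidates)} generated -> {len(shortlist)} shortlisted -> "
      f"{len(validated)} validated")
```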

The work reflects a broader trend of AI-augmented science, where machine learning and generative models act as accelerators for discovery across disciplines such as chemistry, physics and bioengineering.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Zero taxes attract global AI cloud investment in India

India has unveiled a plan to offer foreign cloud providers zero taxes until 2047 on revenues from services sold abroad, provided the workloads are run from Indian data centres. The move aims to attract AI investment despite power and water shortages.

Major US tech companies, including Google, Microsoft and Amazon, have pledged billions of dollars to expand AI-focused data centres in India. Domestic operators are also increasing capacity, with large projects announced in Andhra Pradesh and other states.

The government has boosted incentives for electronics and semiconductor manufacturing, critical minerals, and cross-border e-commerce. These measures aim to integrate India more deeply into global technology supply chains.

Analysts warn that execution risks remain, including energy shortages, land access and regulatory hurdles. Observers say the tax holiday and incentives reflect a strategic bet on establishing India as a global hub for AI and cloud computing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI slop spreads across social media

Social media platforms are increasingly filled with AI-generated slop created to maximise engagement. The rapid spread has been fuelled by easy access to generative tools and algorithm-driven promotion.

Users across major platforms are pushing back, frequently calling out fake or misleading posts in comment sections. In many cases, criticism of AI slop draws more attention than the original content.

Technology companies acknowledge concerns about low-quality AI media but remain reluctant to impose strict limits. Platform leaders argue that new formats are often criticised before gaining wider acceptance.

Researchers warn that repeated exposure to AI slop may contribute to what they describe as ‘brain rot’, reducing attention and discouraging content verification. The risk becomes more serious when fabricated visuals shape public opinion or circulate as news.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI news needs ‘nutrition labels’, UK think tank says amid concerns over gatekeepers

A leading British think tank has urged the government to introduce ‘nutrition labels’ for AI-generated news, arguing that clearer rules are needed as AI becomes a dominant source of information.

The Institute for Public Policy Research (IPPR) said AI firms are increasingly acting as new gatekeepers of the internet and must pay publishers for the journalism that shapes their output.

The group recommended standardised labels showing which sources underpin AI-generated answers, instead of leaving users unsure about the origin or reliability of the material they read.
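
As a purely hypothetical mock-up, such a label might be a small machine-readable record attached to each answer. The IPPR report proposes the concept; the schema below is an invented example of what one could contain.

```python
# Invented example of a machine-readable 'nutrition label' for an
# AI-generated answer. The fields and values are illustrative only;
# the IPPR report does not specify this schema.
import json

label = {
    "answer_id": "example-123",
    "generated_by": "example-ai-assistant",
    "sources": [
        {"publisher": "The Guardian", "licensed": True, "weight": 0.6},
        {"publisher": "Financial Times", "licensed": True, "weight": 0.3},
        {"publisher": "local-news.example", "licensed": False, "weight": 0.1},
    ],
    "retrieval_date": "2025-01-01",
}
print(json.dumps(label, indent=2))
```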

It also called for a formal licensing system in the UK that would allow publishers to negotiate directly with technology companies over the use of their content. The move comes as a growing share of the public turns to AI for news, while Google’s AI summaries reach billions each month.

IPPR’s study found that some AI platforms rely heavily on content from outlets with licensing agreements, such as the Guardian and the Financial Times, while others, like the BBC, appear far less often due to restrictions on scraping.

The think tank warned that such patterns could weaken media plurality by sidelining local and smaller publishers instead of supporting a balanced ecosystem. It added that Google’s search summaries have already reduced traffic to news websites by providing answers before users click through.

The report said public funding should help sustain investigative and local journalism as AI tools expand. OpenAI responded that its products highlight sources and provide links to publishers, arguing that careful design can strengthen trust in the information people see online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Best moments from MoltBook archives

A new ‘Best of MoltBook’ post on Astral Codex Ten has renewed debate over how AI-assisted writing is being presented and understood. The collection highlights selected excerpts from MoltBook, a public notebook used to explore ideas with the help of AI tools.

MoltBook is framed as a space for experimentation rather than finished analysis, with short-form entries reflecting drafts, prompts and revisions. Human judgement remains central, with outputs curated, edited or discarded rather than treated as autonomous reasoning.

Some readers have questioned descriptions of the work as ‘agentic AI’, arguing the label exaggerates the technology’s role. The AI involved responds to instructions but does not act independently, plan goals or retain long-term memory.

The discussion reflects wider scepticism about inflated claims around AI capability. MoltBook is increasingly viewed as an example of AI as a productivity aid for thinking, rather than evidence of a new form of independent intelligence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chinese court limits liability for AI hallucinations

A court in China has ruled that AI developers are not automatically liable for hallucinations produced by their systems. The decision was issued by the Hangzhou Internet Court in eastern China and sets an early legal precedent.

Judges found that AI-generated content should be treated as a service rather than a product in such cases. Users must therefore prove developer fault and show concrete harm caused by the erroneous output.

The case involved a user in China who relied on AI-generated information about a university campus that did not exist. The court ruled no damages were owed, citing a lack of demonstrable harm and no authorisation for the AI to make binding promises.

The Hangzhou Internet Court warned that strict liability could hinder innovation in China’s AI sector. Legal experts say the ruling clarifies expectations for developers while reinforcing the need for user warnings about AI limitations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok returns to Indonesia as X agrees to tightened oversight

Indonesia has restored access to Grok after receiving guarantees from X that stronger safeguards will be introduced to prevent further misuse of the AI tool.

Authorities suspended the service last month following the spread of sexualised images on the platform, making Indonesia the first country to block the system.

Officials from the Ministry of Communications and Digital Affairs said that access had been reinstated on a conditional basis after X submitted a written commitment outlining concrete measures to strengthen compliance with national law.

The ministry emphasised that the document serves as a starting point for evaluation instead of signalling the end of supervision.

However, the government warned that restrictions could return if Grok fails to meet local standards or if new violations emerge. Indonesian regulators stressed that monitoring would remain continuous, and access could be withdrawn immediately should inconsistencies be detected.

The decision marks a cautious reopening rather than a full reinstatement, reflecting Indonesia’s wider efforts to demand greater accountability from global platforms deploying advanced AI systems within its borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why smaller AI models may be the smarter choice

Most everyday jobs do not actually need the most powerful, cutting-edge AI models, argues Jovan Kurbalija in his blog post ‘Do we really need frontier AI for everyday work?’. While frontier AI systems dominate headlines with ever-growing capabilities, their real-world value for routine professional tasks is often limited. For many people, much of daily work remains simple, repetitive, and predictable.

Kurbalija points out that large parts of professional life, from administration and law to healthcare and corporate management, operate within narrow linguistic and cognitive boundaries. Daily communication relies on a small working vocabulary, and most decision-making follows familiar mental patterns.

In this context, highly complex AI models are often unnecessary. Smaller, specialised systems can handle these tasks more efficiently, at lower cost and with fewer risks.

Using frontier AI for routine work, the author suggests, is like using a sledgehammer to crack a nut. These large models are designed to handle almost anything, but that breadth comes with higher costs, heavier governance requirements, and stronger dependence on major technology platforms.

In contrast, small language models tailored to specific tasks or organisations can be faster, cheaper, and easier to control, while still delivering strong results.
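
One hedged sketch of what that division of labour could look like in code: a simple router sends predictable task types to a small, specialised model and escalates everything else to a frontier model. The task categories, model names and call_model helper are all hypothetical, not drawn from the blog post.

```python
# Hypothetical task router: routine requests go to a small local model,
# genuinely complex ones escalate to a frontier model. All names here
# are illustrative assumptions.

ROUTINE_TASKS = {"summarise_meeting", "draft_letter", "classify_request"}

def call_model(model: str, prompt: str) -> str:
    """Stand-in for an actual model API call."""
    return f"[{model}] response to: {prompt[:40]}"

def route(task_type: str, prompt: str) -> str:
    if task_type in ROUTINE_TASKS:
        return call_model("small-specialised-model", prompt)  # cheap, local
    return call_model("frontier-model", prompt)               # costly, hosted

print(route("summarise_meeting", "Notes from today's budget meeting ..."))
print(route("negotiation_strategy", "Draft options for treaty talks ..."))
```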

Kurbalija compares this to professional expertise itself. Most jobs never required having the Encyclopaedia Britannica open on the desk. Real expertise lives in procedures, institutions, and communities, not in massive collections of general knowledge.

Similarly, the most useful AI tools are often those designed to draft standard documents, summarise meetings, classify requests, or answer questions based on a defined body of organisational knowledge.

Diplomacy, an area Kurbalija knows well, illustrates both the strengths and limits of AI. Many diplomatic tasks are highly ritualised and can be automated using rules-based systems or smaller models. But core diplomatic skills, such as negotiation, persuasion, empathy, and trust-building, remain deeply human and resistant to automation. The lesson, he argues, is to automate routines while recognising where AI should stop.

The broader paradox is that large AI platforms may benefit more from users than users benefit from frontier AI. By sitting at the centre of workflows, these platforms collect valuable data and organisational knowledge, even when their advanced capabilities are not truly needed.

As Kurbalija concludes, a more common-sense approach would prioritise smaller, specialised models for everyday work, reserve frontier AI for genuinely complex tasks, and move beyond the assumption that bigger AI is always better.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deezer opens AI detection tool to rivals

French streaming platform Deezer has opened access to its AI music detection tool for rival services, including Spotify. The move follows mounting concern in France and across the industry over the rapid rise of synthetic music uploads.

Deezer said around 60,000 AI-generated tracks are uploaded daily, with 13.4 million detected in 2025. In France, the company has already demonetised 85% of AI-generated streams to redirect royalties to human artists.

The tool automatically tags fully AI-generated tracks, removes them from recommendations and flags fraudulent streaming activity. Spotify, which also operates widely in France, has introduced its own measures but relies more heavily on creator disclosure.
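
A minimal sketch of a moderation pipeline with the same three behaviours described above (tag, exclude from recommendations, flag for fraud review) follows. The Track type, threshold and detector score are hypothetical and do not reflect Deezer’s actual implementation.

```python
# Illustrative moderation pipeline: detect, tag, drop from
# recommendations, flag fraud. Types and thresholds are hypothetical,
# not Deezer's API.
from dataclasses import dataclass, field

@dataclass
class Track:
    title: str
    ai_probability: float          # output of some AI-music detector
    stream_spike: bool = False     # suspicious streaming pattern
    tags: set[str] = field(default_factory=set)
    recommendable: bool = True

def moderate(track: Track, threshold: float = 0.95) -> Track:
    """Tag fully AI-generated tracks, exclude them from recommendations,
    and flag suspicious streaming activity for manual review."""
    if track.ai_probability >= threshold:
        track.tags.add("fully-ai-generated")
        track.recommendable = False
        if track.stream_spike:
            track.tags.add("fraud-review")
    return track

t = moderate(Track("Example", ai_probability=0.99, stream_spike=True))
print(t.tags, t.recommendable)
```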

Challenges remain for Deezer in France and beyond, as the system struggles to identify hybrid tracks mixing human and AI elements. Industry pressure continues to grow for shared standards that balance innovation, transparency and fair payment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!