Toyota and NTT push for accident-free mobility

NTT and Toyota have expanded their partnership with a new initiative aimed at advancing safer mobility and reducing traffic accidents. The firms announced a Mobility AI Platform that combines high-quality communications, distributed computing and AI to analyse large volumes of data.

Toyota intends to use the platform to support software-defined vehicles, enabling continuous improvements in safety through data-driven automated driving systems.

The company plans to update its software and electronics architecture so vehicles can gather essential information and receive timely upgrades, strengthening both safety and security.

The platform will use three elements: distributed data centres, intelligent networks and an AI layer that learns from people, vehicles and infrastructure. As software-defined vehicles rise, Toyota expects a sharp increase in data traffic and a greater need for processing capacity.

Development will begin in 2025 with an investment of around 500 billion yen. Public trials are scheduled for 2028, followed by wider introduction from 2030.

Both companies hope to attract additional partners as they work towards a more connected and accident-free mobility ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK lawmakers push for binding rules on advanced AI

Growing political pressure is building in Westminster as more than 100 parliamentarians call for binding regulation on the most advanced AI systems, arguing that current safeguards lag far behind industry progress.

A cross-party group, supported by former defence and AI ministers, warns that unregulated superintelligent models could threaten national and global security.

The campaign, coordinated by Control AI and backed by tech figures including Skype co-founder Jaan Tallinn, urges Prime Minister Keir Starmer to distance the UK from the US stance against strict federal AI rules.

Experts such as Yoshua Bengio and senior peers argue that governments remain far behind AI developers, leaving companies to set the pace with minimal oversight.

Calls for action come after warnings from frontier AI scientists that the world must decide by 2030 whether to allow highly advanced systems to self-train.

Campaigners want the UK to champion global agreements limiting superintelligence development, establish mandatory testing standards and introduce an independent watchdog to scrutinise AI use in the public sector.

Government officials maintain that AI is already regulated through existing frameworks, though critics say the approach lacks urgency.

Pressure is growing for new, binding rules on the most powerful models, with advocates arguing that rapid advances mean strong safeguards may be needed within the next two years.


EU ministers call for faster action on digital goals

European ministers have adopted conclusions aimed at boosting the Union’s digital competitiveness, urging quicker progress toward the 2030 Digital Decade goals.

Officials called for stronger digital skills, wider adoption of technology, and a framework that supports innovation while protecting fundamental rights. Digital sovereignty remains a central objective, framed as open, risk-based and aligned with European values.

Ministers supported simplifying digital rules for businesses, particularly SMEs and start-ups, which face complex administrative demands. A predictable legal environment, less reporting duplication and more explicit rules were seen as essential for competitiveness.

Governments emphasised that simplification must not weaken data protection or other core safeguards.

Concerns over online safety and illegal content were a prominent feature in discussions on enforcing the Digital Services Act. Ministers highlighted the presence of harmful content and unsafe products on major marketplaces, calling for stronger coordination and consistent enforcement across member states.

Ensuring full compliance with EU consumer protection and product safety rules was described as a priority.

Cyber-resilience was a key focus as ministers discussed the increasing impact of cyberattacks on citizens and the economy. Calls for stronger defences grew as digital transformation accelerated, with several states sharing updates on national and cross-border initiatives.


Australia introduces new codes to protect children online

Australian regulators have released new guidance ahead of the introduction of industry codes designed to protect children from exposure to harmful online material.

The Age Restricted Material Codes will apply to a wide range of online services, including app stores, social platforms, equipment providers, pornography sites and generative AI services, with the first tranche beginning on 27 December.

The rules require search engines to blur image results involving pornography or extreme violence to reduce accidental exposure among young users.

Search services must also redirect people seeking information related to suicide, self-harm or eating disorders to professional mental health support instead of allowing harmful spirals to unfold.

eSafety argues that many children unintentionally encounter disturbing material at very young ages, often through search results that act as gateways rather than through deliberate choices.

The guidance emphasises that adults will still be able to access unblurred material by clicking through, and there is no requirement for Australians to log in or identify themselves before searching.

eSafety maintains that the priority lies in shielding children from images and videos they cannot cognitively process or forget once they have seen them.

These codes will operate alongside existing standards that tackle unlawful content and will complement new minimum age requirements for social media, which are set to begin in mid-December.

Authorities in Australia consider the reforms essential for reducing preventable harm and guiding vulnerable users towards appropriate support services.


AI fuels a new wave of cyber threats in Greece

Greece is confronting a rapid rise in cybercrime as AI strengthens the tools available to criminals, according to the head of the National Cyber Security Authority.

Michael Bletsas warned that Europe is already experiencing hybrid conflict, with states in north-eastern Europe facing severe incidents that reveal a digital frontline. Greece has not endured physical sabotage or damage to its infrastructure, yet cyberattacks remain a pressing concern.

Bletsas noted that most activity involves cybercrime instead of destructive action. He pointed to the expansion of cyberactivism and vandalism through denial-of-service attacks, which usually cause no lasting harm.

The broader problem stems from a surge in AI-driven intrusions and espionage, which offer new capabilities to malicious groups and create a more volatile environment.

Moreover, Bletsas said that the physical and digital worlds should be viewed as a single, interconnected sphere, with security designed around shared principles rather than being treated as separate domains.

Digital warfare is already unfolding, and Greece is part of it. The country must now define its alliances and strengthen its readiness as cyber threats intensify and the global divide grows deeper.


Meta moves investment from metaverse to AI smart glasses

Meta is redirecting part of its metaverse spending towards AI-powered glasses and wearables, aiming to capitalise on the growing interest in these devices. The shift comes after years of substantial investment in virtual reality, which has yet to fully convince investors of its long-term potential.

Reports indicate that Meta plans to reduce its metaverse budget by up to 30 percent, a move that lifted its share price by more than 3.4 percent. The company stated it has no broader changes planned, while offering no clarification on whether the adjustment will lead to job cuts.

The latest AI glasses, launched in September, received strong early feedback for features such as an in-lens display that can describe scenes and translate text. Their debut has intensified competition, with several industry players, including firms in China, racing to develop smart glasses and wearable technology.

Meta continues to face scepticism surrounding the metaverse, despite investing heavily in VR headsets and its Horizon Worlds platform. Interest in AI has surged, prompting the company to place a greater focus on large AI models, including those integrated into WhatsApp, and on producing more advanced smart devices.


AI innovation reshapes England’s World Cup strategy

England’s preparations for next summer’s World Cup increasingly rely on AI systems designed to sharpen decision-making both on and off the pitch. Analysts now use advanced tools to analyse vast datasets in hours rather than days, providing coaches with clearer insights before matches.

Penalty planning has become one of England’s most significant gains, with AI mapping opposition tendencies and each player’s striking style to ease pressure during high-stakes moments.

Players say the guidance helps them commit with confidence, while goalkeepers benefit from more detailed and precise information.

Player well-being is also guided by daily AI-powered checks that flag signs of fatigue and inform training loads, meal plans, and medical support.

Specialists insist that human judgement remains central, yet acknowledge that wealthier nations may gain an edge as smaller federations struggle to afford similar technologies.


UK researchers test robotic dogs and AI for early wildfire detection

Researchers at the University of Bradford are preparing to pilot an AI-enabled wildfire detection system that uses robotic dogs, drones, and emerging 6G networks to identify early signs of fire and alert emergency services.

The trial, set to take place in Greece in 2025, is part of the EU-funded 6G-VERSUS research project, which explores how next-generation connectivity can support crisis response.

According to project lead Dr Kamran Mahroof, wildfires have become a ‘pressing global challenge’ due to rising frequency and severity. The team intends to combine sensor data collected by four-legged robotic platforms and aerial drones with AI models capable of analysing smoke, vegetation dryness, and early heat signatures. High-bandwidth 6G links enable the near-instantaneous transmission of this data to emergency responders.

The research received funding earlier this year from the EU’s Horizon Innovation Action programme and was showcased in Birmingham during an event on AI solutions for global risks.

While the West Yorkshire Fire and Rescue Service stated that it does not currently use AI for wildfire operations, it expressed interest in the project and described its existing use of drones, mapping tools and weather modelling for situational awareness.

The Bradford team emphasises that early detection remains the most effective tool for limiting wildfire spread. The upcoming pilot will evaluate whether integrated AI, robotics, and next-generation networks can help emergency services respond more quickly and predict where fires are likely to ignite.


Waterstones open to selling AI-generated books, but only with clear labelling

Waterstones CEO James Daunt has stated that the company is willing to stock books created using AI, provided the works are transparently labelled and there is genuine customer demand.

In an interview on the BBC’s Big Boss podcast, Daunt stressed that Waterstones currently avoids placing AI-generated books on shelves and that his instinct as a bookseller is to ‘recoil’ from such titles. However, he emphasised that the decision ultimately rests with readers.

Daunt described the wider surge in AI-generated content as largely unsuitable for bookshops, saying most such works are not of a type Waterstones would typically sell. The publishing industry continues to debate the implications of generative AI, particularly around threats to authors’ livelihoods and the use of copyrighted works to train large language models.

A recent University of Cambridge survey found that more than half of published authors fear being replaced by AI, and two-thirds believe their writing has been used without permission to train models.

Despite these concerns, some writers are adopting AI tools for research or editing, while AI-generated novels and full-length works are beginning to emerge.

Daunt noted that Waterstones would consider carrying such titles if readers show interest, while making clear that the chain would always label AI-authored works to avoid misleading consumers. He added that readers tend to value the human connection with authors, suggesting that AI books are unlikely to be prominently featured in stores.

Daunt has led Waterstones since 2011, reshaping the chain by decentralising decision-making and removing the longstanding practice of publishers paying for prominent in-store placement. He also currently heads Barnes & Noble in the United States.

With both chains now profitable, Daunt acknowledged that a future share flotation is increasingly likely. However, no decision has been taken on whether London or New York would host any potential IPO.


AI Ultra users gain access to Gemini 3 Deep Think mode

Google has begun rolling out the Gemini 3 Deep Think mode to AI Ultra subscribers, offering enhanced reasoning for complex maths, science and logic tasks. The rollout follows last month’s preview during the Gemini 3 family release, allowing users to activate the mode directly within the Gemini app.

Deep Think builds on earlier Gemini 2.5 variants by utilising what Google refers to as parallel reasoning to test multiple hypotheses simultaneously. Early benchmark results show gains on structured problem-solving tasks, with improvements recorded on assessments such as Humanity’s Last Exam and ARC-AGI-2.

Subscribers can try the mode by selecting Deep Think in the prompt bar and choosing Gemini 3 Pro. Google states that the broader Gemini 3 upgrade enhances reliability when following lengthy instructions and reduces the need for repeated prompts during multi-step tasks.

Gemini 3 features stronger multimodal handling, enabling analysis of text, images, screenshots, PDFs and video. Capabilities include summarising lengthy material, interpreting detailed visuals and explaining graphs or charts with greater accuracy.

Larger context windows and improved planning support extended workflows such as research assistance and structured information management. Google describes Gemini 3 as its most secure model to date, with reinforced protections around sensitive or misleading queries.
