Authorities in Greece have released initial results from a pilot rollout of AI-powered traffic cameras in the greater Athens area. More than 2,000 serious traffic violations were recorded over four days. The cameras were installed at high-risk locations across Attica.
A single AI camera on Syngrou Avenue logged over 1,000 violations related to mobile phone use and failure to wear seatbelts. Around 800 speeding cases were recorded on roads with a 90 km/h limit. Additional red-light violations were detected at major junctions in Agia Paraskevi and Kallithea.
The pilot programme, backed by the Hellenic Police, marks Greece’s first use of AI-based traffic cameras on Attica’s road network. The rollout forms part of a broader national road safety effort. Authorities have stressed deterrence rather than punishment.
The cameras monitor serious breaches of Greece’s Highway Code, including speeding, red-light violations, and illegal use of mobile phones. Recorded data include images, video, and time-stamped metadata transmitted in encrypted form. Drivers are notified digitally and can submit appeals online.
Plans are underway to expand the system across Greece, with up to 2,500 cameras proposed nationwide. Fixed units will target high-risk locations, while others will be installed on public transport buses. Regional authorities are also preparing to integrate motorway camera networks.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Regulatory uncertainty has long shaped life sciences, but 2025 marked a shift in expectations. Authorities are focusing more on how companies operate in practice. Enforcement activity continues to signal sustained scrutiny.
Regulators across federal and state agencies are coordinating more closely. Attention is centred on digital system validation, AI-supported documentation, reimbursement processes, and third-party oversight. Flexibility in digital tools is no longer assumed.
Inspection priorities now extend beyond manufacturing quality. Regulators are examining governance of automated analyses, review of AI-generated records, and data consistency in decentralised trials. Clear documentation is becoming critical.
A similar shift is visible in reimbursement and data oversight. Authorities want insight into governance behind pricing, reporting, and data handling. Privacy enforcement now focuses on data flows, AI training data, and third-party access.
Looking ahead to 2026, scrutiny is expected to intensify around AI inspection standards and data sharing. Regulators are signalling higher expectations for transparency and accountability. Sound judgement and consistency may prove decisive.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The growing demand for AI is reshaping the fortunes of the memory chip industry, according to leading manufacturers, who argue that the scale of AI investment is altering the sector’s typical boom-and-bust pattern.
The technology is creating more structural demand, rather than the sharp cyclical spikes that previously defined the market.
AI workloads depend heavily on robust memory systems, particularly as companies expand data centre capacity worldwide. Major chipmakers now expect steadier growth because AI models require vast data handling rather than one-off hardware surges.
Analysts suggest it could reduce the volatility that has often led to painful downturns for the industry.
Additionally, some reports claim that Japanese technology group Rakuten is prioritising low-cost AI development to improve profitability across its businesses.
Its AI leadership stresses the need to deploy systems that maximise margins instead of simply chasing capability for its own sake.
The developments underscore how AI is not only transforming software and services but also reshaping the economics of the hardware required to power them, from memory chips to cloud infrastructure on a global scale.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Seasonal influenza remains a significant global health burden, causing millions of severe infections and significant mortality each year, according to World Health Organisation estimates released in early 2025.
In several regions, flu activity has returned to or surpassed pre-pandemic levels, placing older adults, young children, and individuals with chronic conditions at the highest risk. Such patterns reinforce the need for improved prevention strategies and more effective vaccines.
Efforts to control influenza are challenged by the virus’s rapid mutation and the limitations of traditional laboratory methods. AI and machine learning are emerging as powerful tools for predicting antigenic changes, enhancing vaccine strain selection, and accelerating manufacturing.
Beyond vaccine development, AI-driven models are enhancing infection monitoring and immune response analysis by leveraging routine clinical data. These advances enhance surveillance and pave the way for personalised influenza prevention and treatment.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Central Bank of Russia has introduced a detailed proposal aimed at bringing cryptocurrencies under a unified regulatory framework, marking a significant step towards formal legal recognition of digital assets.
Under the proposal, both qualified and non-qualified investors would be permitted to purchase cryptocurrencies. Investor status would be determined by factors such as education, professional background, income level, and asset holdings.
Non-qualified investors would be restricted to buying up to 300,000 roubles worth of crypto per year through authorised intermediaries.
Digital currencies and stablecoins would be classified as currency values under Russian law, yet their use as a means of payment for goods and services would remain prohibited. The framework maintains the state’s long-standing opposition to domestic crypto payments.
Russian residents would also gain the right to purchase and transfer crypto assets abroad, provided such transactions are reported to the Federal Tax Service. The central bank aims to finalise the legislative groundwork by 1 July 2026.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI is transforming how news is produced and consumed, moving faster than audiences and policies can adapt. Journalists increasingly use AI for research, transcription and content optimisation, creating new trust challenges.
Ethical concerns are rising when AI misrepresents events or uses content without consent. Media organisations have introduced guidelines, but experts warn that rules alone cannot cover every scenario.
Audience scepticism remains, even as journalists adopt AI tools in daily practice. Transparency, apparent human oversight and ethical adoption are key to maintaining credibility and legitimacy.
Europe faces pressure to strengthen its trust infrastructure and regulate the use of AI in newsrooms. Experts argue that democratic stability depends on informed audiences and resilient journalism to counter disinformation.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
TikTok removed fake adverts for weight loss drugs after a company impersonating UK retailer Boots used AI-generated videos. The clips falsely showed healthcare professionals promoting prescription-only medicines.
Boots said it contacted TikTok after becoming aware of the misleading adverts circulating on the platform. TikTok confirmed the videos were removed for breaching its rules on deceptive and harmful advertising.
BBC reporting found the account was briefly able to repost the same videos before being taken down. The account appeared to be based in Hong Kong and directed users to a website selling the drugs.
UK health regulators warned that prescription-only weight loss medicines must only be supplied by registered pharmacies. TikTok stated that it continues to strengthen its detection systems and bans the promotion of controlled substances.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Rising demand for AI is pushing data centre servers to operate at extreme speeds and temperatures. Traditional air cooling is no longer sufficient for the most powerful computer chips.
Liquid cooling systems use sprays or immersion baths to remove heat more efficiently. These methods allow continuous high performance while reducing the risk of hardware failure and overheating.
Environmental concerns are growing as data centres consume vast amounts of energy and water. Closed-loop liquid cooling cuts electricity use and limits water withdrawal from local supplies and ecosystems.
Concerns persist regarding certain cooling chemicals and their potential climate impact. Researchers and companies are developing safer fluids and passive cooling inspired by natural systems and biological processes.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
An EU-funded project, AIOLIA, is examining how Europe’s approach to trustworthy AI can be applied in practice. Principles such as transparency and accountability are embedded in the AI Act’s binding rules. Turning those principles into design choices remains difficult.
The project focuses on closing that gap by analysing how AI ethics is applied in real systems. Its work supports the implementation of AI Act requirements beyond legal text. Lessons are translated into practical training.
Project coordinator Alexei Grinbaum argues that ethical principles vary widely by context. Engineers are expected to follow them, but implications differ across systems. Bridging the gap requires concrete examples.
AIOLIA analyses ten use cases across multiple domains involving professionals and citizens. The project examines how organisations operationalise ethics under regulatory and organisational constraints. Findings highlight transferable practices without a single model.
Training is central to the initiative, particularly for EU ethics evaluators and researchers working under the AI Act framework. As AI becomes more persuasive, risks around manipulation grow. AIOLIA aims to align ethical language with daily decisions.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
ChatGPT Atlas has introduced an agent mode that allows an AI browser agent to view webpages and perform actions directly. The feature supports everyday workflows using the same context as a human user. Expanded capability also increases security exposure.
Prompt injection has emerged as a key threat to browser-based agents, targeting AI behaviour rather than software flaws. Malicious instructions embedded in content can redirect an agent from the user’s intended action. Successful attacks may trigger unauthorised actions.
To address the risk, OpenAI has deployed a security update to Atlas. The update includes an adversarially trained model and strengthened safeguards. It followed internal automated red teaming.
Automated red teaming uses reinforcement learning to train AI attackers that search for complex exploits. Simulations test how agents respond to injected prompts. Findings are used to harden models and system-level defences.
Prompt injection is expected to remain a long-term security challenge for AI agents. Continued investment in testing, training, and rapid mitigation aims to reduce real-world risk. The goal is to achieve reliable and secure AI assistance.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!