Firefighting is entering a new era with HEN Technologies. Founder Sunny Sethi has developed nozzles that extinguish fires up to three times faster while using two-thirds less water.
HEN’s products include nozzles, valves, monitors, and sprinklers equipped with sensors and smart circuits. A cloud platform tracks water flow, pressure, GPS, and weather conditions, allowing fire departments to respond efficiently and manage resources effectively.
Predictive analytics built on this data provide real-time insights for incident commanders. Firefighters can anticipate wind shifts, monitor water usage, and optimise operations, attracting interest from the Department of Homeland Security and military agencies worldwide.
Commercial adoption has been rapid, with revenue rising from $200,000 in 2023 to a projected $20 million this year. Serving 1,500 clients globally and filing 20 patents, HEN is also collecting real-world fire data that could support AI models simulating extreme environments.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Creative communities are pushing back against generative AI in literature and art. The Science Fiction and Fantasy Writers Association now bars works created wholly or partly with large language models after criticism of earlier, more permissive rules.
San Diego Comic-Con faced controversy when it initially allowed AI-generated art in its exhibition, but not for sale. Artists argued that the rules threatened originality, prompting organisers to ban all AI-created material.
Authors warn that generative AI undermines the creative process. Some point out that large language model tools are already embedded in research and writing software, raising concerns about accidental disqualification from awards.
Fans and members welcomed SFWA’s decision, but questions remain about how broadly AI usage will be defined. Many creators insist that machines cannot replicate storytelling and artistic skill.
Industry observers expect other cultural organisations to follow similar policies this year. The debate continues over ethics, fairness, and technology’s role in arts and literature.
New measures are being introduced in west Northamptonshire with the deployment of an AI-powered CCTV tower to combat fly-tipping in known hotspots. The mobile system will be rotated between locations until January 2027 to improve detection and deterrence.
Fly-tipping remains a significant issue across the area, with more than 21,000 incidents cleared between April 2024 and March 2025. Local authorities say illegal dumping damages neighbourhoods, harms wildlife and places a heavy financial burden on taxpayers.
The tower uses 360-degree cameras and AI to monitor activity and identify offences as they occur. Automatic number plate recognition allows enforcement officers to link incidents to suspected vehicles more quickly.
Council leaders say a similar scheme in Dartford has reduced fly-tipping and believe the technology sends a strong message to offenders. Residents are encouraged to report incidents through the council website or smartphone app to support enforcement efforts.
Australia’s social media ban for under-16s is worrying social media companies. According to the country’s eSafety Commissioner, major platforms resisted the policy partly out of fear that it could set a global precedent, with similar rules spreading internationally.
The ban has already led to the closure of 4.7 million child-linked accounts across platforms including Instagram, TikTok and Snapchat. Authorities argue the measures are necessary to protect children from harmful algorithms and addictive design.
Social media companies operating in Australia, including Meta, say stronger safeguards are needed but oppose a blanket ban. Critics have warned about privacy risks, while regulators insist early data shows limited migration to alternative platforms.
Australia is now working with partners such as the UK to push tougher global standards on online child safety. Fines of up to A$49.5m may be imposed on companies that fail to enforce the rules effectively.
US companies are increasingly adopting Chinese AI models as part of their core technology stacks, raising questions about global leadership in AI. Pinterest has confirmed it is using Chinese-developed models to improve recommendations and shopping features.
Executives point to open-source Chinese models such as DeepSeek and tools from Alibaba as faster, cheaper and easier to customise. US firms say these models can outperform proprietary alternatives at a fraction of the cost.
Adoption extends beyond Pinterest, with Airbnb also relying on Chinese AI to power customer service tools. Data from Hugging Face shows Chinese models frequently rank among the most downloaded worldwide, including among US developers.
Researchers at Stanford University have found that Chinese AI capabilities now match or exceed those of global peers. US firms such as OpenAI and Meta remain focused on proprietary systems, leaving China to dominate open-source AI development.
AI is reshaping how people work, learn and participate in society, prompting calls for universities to take a more active leadership role. A new book by Juan M. Lavista Ferres of Microsoft’s AI Economy Institute argues that higher education institutions must move faster to prepare students for an AI-driven world.
Balancing technical training with long-standing academic values remains a central challenge. Institutions are encouraged to teach practical AI skills while continuing to emphasise critical thinking, communication and ethical reasoning.
AI literacy is increasingly seen as essential for both employment and daily life. Early labour market data suggests that AI proficiency is already linked to higher wages, reinforcing calls for higher education institutions to embed AI education across disciplines rather than treating it as a specialist subject.
Developers, educators and policymakers are also urged to improve their understanding of each other’s roles. Technical knowledge must be matched with awareness of AI’s social impact, while non-technical stakeholders need clearer insight into how AI systems function.
Closer cooperation between universities, industry and governments is expected to shape the next phase of AI adoption. Higher education institutions are being asked to set recognised standards for AI credentials, expand access to training, and ensure inclusive pathways for diverse learners.
Health care in Africa is set to benefit from AI through a new initiative by the Gates Foundation and OpenAI. Horizon1000 aims to expand AI-powered support across 1,000 primary care clinics in Rwanda by 2028.
Severe shortages of health workers in Sub-Saharan Africa have limited access to quality care, with the region facing a shortfall of nearly six million professionals. AI tools will assist doctors and nurses by handling administrative tasks and providing clinical guidance.
Rwanda has launched an AI Health Intelligence Centre to make better use of limited resources and improve patient outcomes. The initiative will deploy AI in communities and homes, ensuring support reaches beyond clinic walls.
Experts believe AI represents a major medical breakthrough, comparable to vaccines and antibiotics. By helping health workers focus on patient care, the technology could reduce preventable deaths and transform health systems across low- and middle-income countries.
One Medical has launched a Health AI assistant in its mobile app, offering personalised health guidance at any time. The tool uses verified medical records to support everyday healthcare decisions.
Patients can use the assistant to explain lab results, manage prescriptions, and book virtual or in-person appointments. Clinical safeguards ensure users are referred to human clinicians when medical judgement is required.
Powered by Amazon Bedrock, the assistant operates under HIPAA-compliant privacy standards and avoids selling personal health data. Amazon says clinician and member feedback will shape future updates.
Shri Kashi Vishwanath Temple in India has launched an AI-powered chatbot to help devotees access services from anywhere in the world. The tool provides quick information on rituals, bookings, and temple timings.
Devotees can now book darshan, special aartis, and order prasad online. The chatbot also guides pilgrims on guesthouse availability and directions around Varanasi.
Supporting Hindi, English, and regional languages, the AI ensures smooth communication for global visitors. The initiative aims to simplify temple visits, especially during festivals and crowded periods.
Advanced language models have demonstrated the ability to generate working exploits for previously unknown software vulnerabilities. Security researcher Sean Heelan tested two systems built on GPT-5.2 and Opus 4.5 by challenging them to exploit a zero-day flaw in the QuickJS JavaScript interpreter.
Across multiple scenarios with varying security protections, GPT-5.2 completed every task, while Opus 4.5 failed only two. The systems produced more than 40 functional exploits, ranging from basic shell access to complex file-writing operations that bypassed modern defences.
Most challenges were solved in under an hour, with standard attempts costing around $30. Even the most complex exploit, which bypassed protections such as address space layout randomisation, non-executable memory, and seccomp sandboxing, was completed in just over three hours for roughly $50.
The most advanced task required GPT-5.2 to write a specific string to a protected file path without access to operating system functions. The model achieved this by chaining seven function calls through the glibc exit handler mechanism, bypassing shadow stack protections.
The findings suggest exploit development may increasingly depend on computational resources rather than human expertise. While QuickJS is less complex than browsers such as Chrome or Firefox, the approach demonstrated could scale to larger and more secure software environments.