New York lawmakers are considering legislation that would ban AI chatbots from providing legal or medical advice. The bill aims to stop automated systems from impersonating licensed professionals such as doctors and lawyers.
The proposal would also require chatbot operators to clearly inform users that they are interacting with an AI system. Notices must be prominent, written in the same language as the chatbot, and use a readable font.
A key feature of the bill is a private right of action, which would allow users to file civil lawsuits against chatbot owners who violate the law and recover damages and legal fees. Experts say this enforcement tool strengthens the rules and deters abuse.
Supporters of the legislation argue it protects New Yorkers’ safety, particularly minors. Other bills in the same package would regulate online platforms like Roblox and set standards for generative AI, synthetic content, and the handling of biometric data.
The bill’s author, state Senator Kristen Gonzalez, said AI innovation should not come at the expense of public safety. She pointed to recent cases where AI chatbots were linked to harmful outcomes for minors, highlighting the need for transparency and accountability.
If passed, the law would take effect 90 days after the governor signs it. Lawmakers hope it will balance innovation with user protection, ensuring AI tools are used responsibly and safely across the state.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Executives across the US are increasingly using a metric known as labour cost margin to evaluate workforce needs in the AI era. Business leaders in the US say the measure reflects how companies balance human labour with expanding technology investments.
A KPMG survey of 100 US CEOs shows strong corporate commitment to AI spending. Nearly 80 percent of executives allocate at least five percent of capital budgets to AI projects.
The workforce impact remains uncertain despite growing investment. Many executives expect AI to change job composition rather than eliminate roles.
Companies are hiring new technology-focused roles, including AI strategists and workflow coordinators. Analysts say repetitive office tasks in the US may face the greatest risk from automation.
A draft initiative from the European Commission seeks to introduce a new legal structure designed to simplify how companies operate across the EU.
The proposal, often referred to as the ‘EU Inc’ initiative, explores the creation of a so-called ‘28th regime’ that would exist alongside national corporate frameworks used by member states.
The concept aims to provide startups and technology firms with a single legal structure that applies across the EU.
Instead of navigating different national rules in each country, companies could operate under a unified regulatory model intended to reduce administrative barriers and encourage cross-border innovation.
According to the draft, the initiative may rely on an EU regulation rather than separate national legislation. Such an approach could enable faster implementation, as EU regulations apply directly across all member states without requiring domestic transposition.
However, the legal basis of the proposal could raise institutional concerns. Using a regulation as the primary mechanism may constitute an unconventional shortcut in EU lawmaking, potentially sparking debate among policymakers over the approach’s scope and legitimacy.
The initiative reflects broader efforts within the Union to simplify regulatory frameworks and strengthen the competitiveness of European startups. If adopted, the ‘EU Inc’ model could reshape how young companies expand across the single market.
Researchers in the US have found that AI analysis of mammograms may help identify women at risk of heart disease. The study examined breast scans to measure calcium deposits in arteries, a sign linked to cardiovascular problems.
Scientists from Emory University in Atlanta analysed screening data from more than 120,000 women. Results showed women with higher levels of arterial calcium detected in mammograms faced significantly greater risk of heart attacks or strokes.
Researchers reported that even women under 50 years old showed increased cardiovascular risk when calcium deposits appeared on scans. Experts say the findings suggest routine breast screening could reveal hidden heart health risks.
Doctors in Atlanta say AI could allow mammograms to act as a dual screening tool for breast cancer and cardiovascular disease. Further research is planned before hospitals in the US widely adopt the method.
Technology hubs in China are promoting the OpenClaw AI agent as part of new local industry initiatives. Officials in China say the open source tool can automate tasks such as email management and travel booking.
Cities including Shenzhen, Wuxi and Hefei are drafting policies to build an ecosystem around OpenClaw. Authorities in China are offering subsidies, computing resources and office support to encourage AI-driven one-person companies.
OpenClaw has grown rapidly since its release and has become one of the fastest-expanding projects on GitHub. Technology groups say the tool could allow individuals to operate businesses with far fewer employees.
Regulators have also warned about security and data protection risks linked to AI agents. Draft rules in China propose limits on access to sensitive data and stronger oversight of cross-border information flows.
OpenAI is acquiring Promptfoo, a platform designed to help enterprises identify and remediate vulnerabilities in AI systems during development. Once finalised, Promptfoo’s technology will be integrated into OpenAI Frontier, OpenAI’s platform for building and managing AI coworkers.
Promptfoo, led by Ian Webster and Michael D’Angelo, provides tools trusted by over a quarter of Fortune 500 companies. Its open-source CLI and library support evaluation and red-teaming of large language model applications.
The acquisition allows OpenAI to enhance both open-source initiatives and enterprise capabilities within Frontier.
Integration will introduce native security and evaluation features into Frontier. Enterprises will gain automated tools to detect risks such as prompt injections, jailbreaks, data leaks, tool misuse, and out-of-policy agent behaviour.
Security testing will be built into development workflows to catch issues early and support safe AI deployment.
Oversight and accountability features will also be strengthened. Integrated reporting and traceability will allow organisations to document testing, monitor changes over time, and meet governance, risk, and compliance requirements.
The acquisition is expected to expand OpenAI’s ability to deliver secure and reliable AI for enterprise applications.
Anthropic has launched two lawsuits against the US Department of Defence, disputing its recent designation of the AI firm as a ‘supply chain risk.’ The company claims the move is unlawful and infringes on its First Amendment rights.
The company argues that the government is punishing it for refusing to allow the military to use its AI for domestic surveillance or for fully autonomous weapons.
The lawsuits, filed in California and Washington, DC courts, follow the Pentagon’s unprecedented use of the supply chain risk tool against a US company. The designation requires other government contractors to sever ties with Anthropic, posing a serious threat to its business operations.
The company maintains it remains committed to supporting national security applications of its AI.
The Department of Defence has used Anthropic’s AI model Claude in operations targeting Iran. The company says it has worked with the DoD on system adaptations and seeks to continue negotiations while protecting its business and partners.
The firm claims government actions cause harm, though CEO Dario Amodei said the designation’s impact is limited. Anthropic insists judicial review is a necessary step to defend its business and ensure the responsible deployment of its technology.
Authorities in Canada have issued a warning about the growing use of AI in impersonation scams targeting citizens. Fraudsters increasingly deploy advanced tools capable of mimicking politicians, government officials and other public figures with convincing realism.
Deepfake videos, synthetic audio and AI-generated messages allow scammers to create convincing communications that appear to come from trusted authorities.
Such tactics are often used to persuade victims to send money, reveal personal information, install malicious software or engage with fraudulent investment offers.
Officials also warn about fake government websites created with AI-assisted tools that imitate official pages by copying national symbols and similar domain names. Suspicious websites often use unusual web addresses, extra characters, or unfamiliar domain endings to mislead visitors.
Authorities advise Canadians to verify unexpected messages through official channels rather than clicking links or responding immediately.
Suspected impersonation attempts should be reported to the Competition Bureau or the Canadian Anti-Fraud Centre.
AI is playing an increasingly important role in space medicine as astronauts aboard the International Space Station test new technologies designed to support autonomous health monitoring. The experiment combines augmented reality with an AI system that analyses ultrasound scans in orbit.
NASA astronaut Jack Hathaway and European Space Agency astronaut Sophie Adenot carried out guided ultrasound examinations using the EchoFinder-2 biomedical device.
Augmented-reality instructions helped the astronauts position the scanner correctly while AI analysed the images and confirmed the identification of internal organs.
The developers of the system aim to reduce reliance on medical specialists on Earth. Future crews travelling farther into space may face communication delays, making real-time guidance from ground teams more difficult.
Reliable AI-supported diagnostics could therefore become a key tool for long-duration missions, enabling astronauts to perform complex medical checks independently during journeys to the Moon, Mars, and beyond.
Tron has joined the Linux Foundation’s Agentic AI Foundation (AAIF) as a governing member to support the development of AI agent infrastructure. The foundation aims to enable collaboration and interoperability among systems that efficiently manage high-volume, low-value transactions.
Founder Justin Sun highlighted Tron’s speed, scalability, and low fees as key advantages for AI-agent use cases. He noted that as AI agents move to mainstream machine-to-machine commerce, transaction volumes could rise, increasing demand for robust blockchain networks.
The AAIF encourages open-source agentic AI development and establishes standards for governance, safety, and interoperability. Tron joins major members like Circle and JPMorgan while building tools and infrastructure to support AI, including the Bank of AI with AINFT.
Tron currently leads in blockchain revenue, with data showing strong performance over 24 hours, seven days, and 30 days. Sun confirmed that AI activity is contributing to this growth, reflecting the rapid adoption and scaling of agentic AI on the network.