AI chatbots operating in Colorado would face new child safety and suicide prevention requirements under a bipartisan bill introduced in the Colorado legislature. Lawmakers say the measure responds to parents’ concerns about harmful chatbot interactions.
House Bill 1263 would require companies to clearly inform children in Colorado that they are interacting with AI rather than a real person. Platforms would also be barred from offering engagement rewards to child users.
The proposal mandates reasonable safeguards to prevent sexually explicit content and to stop chatbots from encouraging emotional dependence, including romantic role-playing. Parental control options would also be required where services are accessible to children in Colorado.
Companies would need to provide suicide prevention resources when users express self-harm thoughts and report such incidents to the Colorado attorney general. Violations would be treated as consumer protection infractions, carrying fines of up to $1,000 per occurrence in Colorado.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Elon Musk, CEO of Tesla and xAI, has publicly accused Anthropic of stealing large volumes of data to train its AI models. The allegation was made on X in response to posts referencing Community Notes attached to Anthropic-related content.
Musk claimed the company had engaged in large-scale data theft and suggested that it had paid multi-billion-dollar settlements. Those financial claims remain contested, and no official confirmation has been provided to substantiate the figures.
‘Anthropic is guilty of stealing training data at massive scale and has had to pay multi-billion dollar settlements for their theft. This is just a fact,’ Musk wrote on X. https://t.co/EEtdsJQ1Op
Anthropic, known for developing the Claude AI model, was founded by former OpenAI employees and promotes an approach centred on AI safety and responsible development. The company has not publicly responded to Musk’s latest accusations.
The dispute reflects a broader conflict across the AI industry over how companies collect the text, images and other materials required to train large language models. Much of this data is scraped from the internet, often without explicit permission from rights holders.
Multiple lawsuits filed by authors, media organisations and software developers are testing whether large-scale scraping qualifies as fair use under copyright law. Court rulings in these cases could reshape licensing practices, impose financial penalties, and alter the economics of AI development.
The ShinyHunters extortion group has published a 6.1GB archive, which it claims contains more than 12 million records stolen from CarGurus, a US-based automotive platform. Have I Been Pwned listed the dataset, reporting that roughly 3.7 million records appear to be new.
The exposed information includes email addresses, IP addresses, full names, phone numbers, physical addresses, user account IDs, and finance-related application data belonging to CarGurus users. Dealer account details and subscription information were also reportedly included in the archive.
CarGurus has not issued a public statement confirming a breach. However, Have I Been Pwned said it attempts to verify the authenticity of datasets before adding them to its database, suggesting a level of validation of the leaked material.
Security experts warn that the availability of the data could increase the risk of phishing. Users are advised to remain cautious of unsolicited communications and potential scams that may leverage the exposed personal information.
ShinyHunters has recently claimed attacks against multiple large organisations across telecoms, fintech, retail, and media. The group is known for using social engineering tactics, including voice phishing and malicious OAuth applications, to gain access to SaaS platforms and extract customer data.
Scientists at the Massachusetts Institute of Technology (MIT) report progress in applying AI to integrate and interpret diverse biological datasets, helping overcome key challenges in cell biology research.
Traditional experimental approaches often generate fragmented data, such as gene expression profiles, imaging, and molecular interactions, that are difficult to combine into a coherent view of cellular systems.
By contrast, AI models can learn patterns across multiple data types, reveal connections between disparate datasets, and generate holistic representations of cell behaviour that would otherwise require extensive manual synthesis.
The new AI techniques allow researchers to uncover relationships between genes, proteins and cellular processes with greater clarity, enabling improved hypothesis generation, experimental design and understanding of complex biological phenomena such as development, disease progression and response to therapies.
Because these AI tools can help prioritise experimental directions and reduce reliance on trial-and-error studies, they may accelerate breakthroughs in areas ranging from immunology to cancer biology.
Researchers emphasise that AI complements, rather than replaces, traditional biological expertise, acting as a data-driven partner that expands scientists’ ability to see the ‘bigger picture’ across scales and contexts.
Ethical and methodological considerations also underscore the importance of validating AI-generated insights with rigorous experiments.
Multimodal sensing allows physical AI systems to combine inputs such as vision, audio, lidar and touch to build situational awareness in real time. The approach enables machines to operate autonomously in complex physical environments.
The architecture typically includes input modules for individual sensors, a fusion module to combine relevant data, and an output module to generate actions. Applications range from robotics and autonomous vehicles to spatial AI systems navigating dynamic 3D spaces.
Fusion techniques vary by use case, from Bayesian networks for uncertainty management to Kalman filters for navigation and neural networks for robotic manipulation. The aim is to leverage complementary sensor strengths while maintaining reliability.
Implementation presents technical challenges including environmental noise filtering, calibration across time and space, and balancing redundant versus complementary sensing. Engineers must also manage tradeoffs in processing power, controllers and system design.
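The Kalman-style fusion mentioned above can be sketched in a few lines. This is a minimal illustration, assuming two hypothetical range sensors (say, a noisier lidar and a more precise sonar) measuring the same distance; the sensor names and noise figures are invented for the example, not drawn from any particular system.

```python
# Minimal 1D Kalman-filter fusion sketch: each update weights a new
# measurement by the relative confidence (inverse variance) of the
# current estimate versus the sensor. Sensor values are hypothetical.

def kalman_update(estimate, variance, measurement, meas_variance):
    """Fuse one sensor reading into the running estimate."""
    gain = variance / (variance + meas_variance)       # Kalman gain
    new_estimate = estimate + gain * (measurement - estimate)
    new_variance = (1 - gain) * variance               # uncertainty shrinks
    return new_estimate, new_variance

# Start from a near-uninformative prior, then fuse two readings.
estimate, variance = 0.0, 1000.0
estimate, variance = kalman_update(estimate, variance, 10.2, 4.0)  # noisy "lidar"
estimate, variance = kalman_update(estimate, variance, 9.9, 1.0)   # precise "sonar"

# The fused estimate lands between the two readings, pulled toward the
# lower-noise sensor, and the fused variance is below either sensor's own.
print(estimate, variance)
```

The same gain-weighting idea is what lets fusion modules exploit complementary sensor strengths: each sensor contributes in proportion to how much it can be trusted at that moment.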
UiPath has unveiled new agentic AI solutions for healthcare providers and payers. The tools focus on medical record summarisation, claim denial prevention, and prior authorisation, connecting data to speed workflows and improve efficiency.
Healthcare organisations face labour shortages and fragmented systems, making revenue cycle management challenging. Providers produce large volumes of clinical documentation that must be quickly turned into actionable insights for accurate reimbursement.
The platform converts records into concise, citation-backed summaries, automates claim review and appeals, and streamlines eligibility checks. AI predicts risks, reduces errors, and accelerates clinical and administrative processes for providers and payers alike.
UiPath partners with innovators such as Genzeon to embed domain expertise. The solution addresses rising costs, complex regulations, and labour challenges, helping teams make data-driven decisions and improve patient outcomes.
Low solubility and poor bioavailability remain major hurdles in small-molecule drug development, often preventing promising candidates from reaching clinical trials. Traditional trial-and-error methods are time-consuming and depend heavily on the limited availability of active pharmaceutical ingredients (APIs).
AI and machine learning now provide predictive models that anticipate solubility, permeability and systemic exposure. These tools let scientists prioritise high-impact experiments while conserving valuable material.
Digital platforms combine predictive algorithms with stability testing to guide excipient and technology selection. AI can simulate molecular interactions and dose scenarios, helping teams identify risks early and refine first-in-human doses safely.
End-to-end AI/ML workflows integrate data, modelling and manufacturing insights, accelerating development timelines, lowering the risk of late-stage reformulations and connecting early formulation choices directly to clinical and manufacturing outcomes.
While AI enhances efficiency and precision, it does not replace human expertise. It amplifies formulation scientists’ work, freeing them to focus on innovative design, problem-solving and delivering high-quality therapies to patients more rapidly.
US policymakers are increasingly treating personal data as a dual-use asset that carries both economic value and national security risks. Regulators have raised concerns about sensitive information, including geolocation data linked to military personnel.
Measures such as the Protecting Americans’ Data from Foreign Adversaries Act of 2024 and the Department of Justice Data Security Program aim to curb misuse by designated foreign adversaries. Both frameworks impose broad restrictions on cross-border data transfers.
Experts warn that compliance remains complex and uncertain, with companies adapting in what one adviser described as ‘a fog’. Enforcement signals have already emerged, including a draft noncompliance letter from the Federal Trade Commission and litigation.
Organizations are being urged to integrate national security expertise into privacy and cybersecurity teams. Observers say early preparation is essential as selective enforcement risks increase under strict but evolving US data protection regimes.
Large language models are designed to mimic human conversation, but treating them like people can mislead users. Politeness, flattery, or threats do not consistently improve the accuracy of AI responses.
Experts recommend focusing on how questions are structured rather than on word choice. Asking for multiple options, giving examples, and conducting step-by-step interviews can make AI outputs more relevant and useful.
Role-playing may be effective for creative or exploratory tasks, but it can reduce reliability when precise answers are required. AI models are constantly updated, making old prompting tricks largely ineffective.
Maintaining neutrality in prompts prevents biased responses, and while politeness may not improve AI performance, it can make interactions more comfortable. Developing careful prompt strategies is more effective than relying on manners alone.
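The structuring advice above can be made concrete with a before-and-after example. This is an illustrative sketch only; the prompt wording is hypothetical and not tied to any specific model or product.

```python
# Contrast a vague request with a structured one that applies the levers
# the article describes: a clear task, constraints, a concrete example of
# the desired style, and an explicit ask for multiple options.

vague_prompt = "Please make my error messages better, thanks!"

structured_prompt = "\n".join([
    "Task: rewrite the error message below.",
    "Constraints: under 15 words; state what failed and one next step.",
    "Example of the desired style: 'Upload failed: file exceeds 10 MB. Compress it and retry.'",
    "Give three alternative rewrites.",
    "Message to rewrite: 'Error 37: operation unsuccessful.'",
])

# The structured version relies on how the question is built, not on
# politeness or flattery, which matches the guidance above.
print(structured_prompt)
```

Neutral, structured prompts like this also make it easier to compare outputs across model updates, since they do not depend on model-specific tricks.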
AI is increasingly being tested in media production as organisations adapt to changing digital consumption patterns. Generative AI tools are being used to repurpose archival material, experiment with new formats, and expand distribution across online platforms.
In this context, the BBC World Service has launched its first AI-animated video adaptations. The initiative transforms audio episodes of Witness History into short animated films, marking a new application of generative AI within the World Service’s programming.
Five episodes are scheduled for release, starting with The World’s First Labradoodle on the BBC World Service’s YouTube channel. Further adaptations cover Brazil’s largest bank heist, the restoration of Ramesses II’s mummy, the discovery of the Lord of Sipán in Peru, and an arrest related to football in Brazil.
The project aims to extend the reach of existing audio content and attract digital audiences who may not engage with radio. Editorial oversight remains in place, with AI positioned as a production support tool rather than a replacement for journalistic processes.