AI chatbots operating in Colorado would face new child safety and suicide prevention requirements under a bipartisan bill introduced in the Colorado legislature. Lawmakers say the measure responds to parents’ concerns about harmful chatbot interactions.
House Bill 1263 would require companies to clearly inform children in Colorado that they are interacting with AI rather than a real person. Platforms would also be barred from offering engagement rewards to child users.
The proposal mandates reasonable safeguards to prevent sexually explicit content and to stop chatbots from encouraging emotional dependence, including romantic role-playing. Parental control options would also be required where services are accessible to children in Colorado.
Companies would need to provide suicide prevention resources when users express self-harm thoughts and report such incidents to the Colorado attorney general. Violations would be treated as consumer protection infractions, carrying fines of up to $1,000 per occurrence in Colorado.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Central Bank of the UAE has partnered with Abu Dhabi-based AI company Core42 to develop a sovereign financial cloud infrastructure in the UAE. The system is designed to ensure data sovereignty and strengthen protection against cyber threats.
According to the Central Bank of the UAE, the platform will operate on a centralised, highly secure and isolated infrastructure. It aims to support continuous financial services while boosting operational agility across the UAE.
The infrastructure will be powered by AI and provide automation and real-time data analysis for licensed institutions in the UAE. It will also enable unified management of multi-cloud services within a single regulatory framework.
Core42, established by G42 in 2023, said finance must remain sovereign as it relies on digital infrastructure. The Central Bank of the UAE described the project as a key pillar of its financial infrastructure transformation programme.
More than 25 million people across the United States have had personal information exposed following a ransomware attack on government contractor Conduent. Updated state breach notifications indicate the incident is larger than initially understood.
Conduent provides printing, payment processing, and benefit administration services for state agencies and large corporations. Its systems support food assistance, unemployment benefits, and workplace programmes, reaching more than 100 million individuals, according to the company.
State-level disclosures show that Oregon and Texas account for most of the affected records, with additional cases reported in Massachusetts, New Hampshire, and Washington. Compromised data includes names, dates of birth, addresses, Social Security numbers, health insurance information, and medical details.
Public information from Conduent has been limited since the January 2025 attack. An incident notice published in October carried a ‘noindex’ tag in its source code, preventing search engines from listing the page, which critics say reduced visibility for affected individuals.
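For context, a ‘noindex’ directive is a single robots meta tag in a page’s HTML head that asks search engines not to list the page. The sketch below is a minimal, hypothetical illustration (the sample markup is invented, not Conduent’s actual notice) of how such a tag can be detected with Python’s standard-library HTML parser.

```python
from html.parser import HTMLParser

class NoindexDetector(HTMLParser):
    """Flags pages whose <meta name="robots"> directive contains 'noindex'."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)  # attrs arrives as a list of (name, value) tuples
        if (attrs.get("name", "") or "").lower() == "robots" and \
           "noindex" in (attrs.get("content", "") or "").lower():
            self.noindex = True

# Invented sample page, for illustration only.
page = ('<html><head><meta name="robots" content="noindex, nofollow">'
        '</head><body>Incident notice</body></html>')
detector = NoindexDetector()
detector.feed(page)
print(detector.noindex)  # True: search engines are asked not to list this page
```

Because the directive lives in the page source rather than the visible text, an affected individual browsing the notice would see nothing unusual, which is why critics flagged it.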
The breach ranks among the largest recent ransomware incidents, though it is smaller than the 2024 Change Healthcare attack that affected 190 million people. Regulators and affected users continue seeking clarity on the Conduent case and its security failures.
Elon Musk, CEO of Tesla and xAI, has publicly accused Anthropic of stealing large volumes of data to train its AI models. The allegation was made on X in response to posts referencing Community Notes attached to Anthropic-related content.
Musk claimed the company had engaged in large-scale data theft and suggested that it had paid multi-billion-dollar settlements. Those financial claims remain contested, and no official confirmation has been provided to substantiate the figures.
‘Anthropic is guilty of stealing training data at massive scale and has had to pay multi-billion dollar settlements for their theft. This is just a fact,’ Musk wrote on X. https://t.co/EEtdsJQ1Op
Anthropic, known for developing the Claude AI model, was founded by former OpenAI employees and promotes an approach centred on AI safety and responsible development. The company has not publicly responded to Musk’s latest accusations.
The dispute reflects a broader conflict across the AI industry over how companies collect the text, images and other materials required to train large language models. Much of this data is scraped from the internet, often without explicit permission from rights holders.
Multiple lawsuits filed by authors, media organisations and software developers are testing whether large-scale scraping qualifies as fair use under copyright law. Court rulings in these cases could reshape licensing practices, impose financial penalties, and alter the economics of AI development.
The ShinyHunters extortion group has published a 6.1GB archive, which it claims contains more than 12 million records stolen from CarGurus, a US-based automotive platform. Have I Been Pwned listed the dataset, reporting that roughly 3.7 million records appear to be new.
The exposed information includes email addresses, IP addresses, full names, phone numbers, physical addresses, user account IDs, and finance-related application data belonging to CarGurus users. Dealer account details and subscription information were also reportedly included in the archive.
CarGurus has not issued a public statement confirming a breach. However, Have I Been Pwned said it attempts to verify the authenticity of datasets before adding them to its database, suggesting a level of validation of the leaked material.
Security experts warn that the availability of the data could increase the risk of phishing. Users are advised to remain cautious of unsolicited communications and potential scams that may leverage the exposed personal information.
ShinyHunters has recently claimed attacks against multiple large organisations across telecoms, fintech, retail, and media. The group is known for using social engineering tactics, including voice phishing and malicious OAuth applications, to gain access to SaaS platforms and extract customer data.
US policymakers are increasingly treating personal data as a dual-use asset that carries both economic value and national security risks. Regulators have raised concerns about sensitive information, including geolocation data linked to military personnel.
Measures such as the Protecting Americans’ Data from Foreign Adversaries Act of 2024 and the Department of Justice Data Security Program aim to curb misuse by designated foreign adversaries. Both frameworks impose broad restrictions on cross-border data transfers.
Experts warn that compliance remains complex and uncertain, with companies adapting in what one adviser described as ‘a fog’. Enforcement signals have already emerged, including a draft noncompliance letter from the Federal Trade Commission and litigation.
Organisations are being urged to integrate national security expertise into privacy and cybersecurity teams. Observers say early preparation is essential as selective enforcement risks increase under strict but evolving US data protection regimes.
The European Data Protection Supervisor (EDPS) and authorities from 61 jurisdictions issued a joint statement on AI-generated imagery, warning about tools that create realistic depictions of identifiable individuals without consent. The move underscores concerns over privacy, dignity and child safety.
Authorities said advances in AI image and video tools, especially when integrated into social media platforms, have enabled non-consensual intimate imagery, defamatory depictions, and other harmful content. Children and vulnerable groups are seen as particularly at risk.
The EDPS and the other signatories reminded organisations that AI content-generation systems must comply with applicable data protection and privacy laws. They stressed that creating non-consensual intimate imagery may constitute a criminal offence in many jurisdictions.
Organisations are urged to implement safeguards against misuse of personal data, ensure transparency about system capabilities and uses, and provide accessible mechanisms for swift content removal. Stronger protections and age-appropriate information are expected where children are involved.
Authorities signalled plans for coordinated responses, including enforcement, policy development and education initiatives. The EDPS and fellow signatories urged organisations to engage proactively with regulators and ensure innovation does not undermine fundamental rights.
The first enforcement provisions of the EU AI Act entered into force on 2 February 2025, marking a turning point for Europe’s AI startup ecosystem. The initial phase targets ‘unacceptable risk’ systems, including social scoring, real-time biometric surveillance in public spaces, and manipulative AI practices.
Under the regulation, penalties can reach €35 million or 7% of global annual turnover, whichever is higher. Although the current enforcement covers only prohibited practices, the move signals that Europe’s AI rulebook is now operational rather than theoretical.
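The ‘whichever is higher’ cap works as a simple floor-or-percentage rule. The function below is an illustrative sketch of that arithmetic only; the function name and integer-euro convention are assumptions for the example, not terms from the Act.

```python
def max_fine_prohibited_practices(global_annual_turnover_eur: int) -> int:
    # EU AI Act cap for prohibited practices: the higher of EUR 35 million
    # or 7% of worldwide annual turnover. Illustrative calculation only.
    return max(35_000_000, 7 * global_annual_turnover_eur // 100)

# A firm with EUR 200m turnover: 7% is EUR 14m, so the EUR 35m floor applies.
print(max_fine_prohibited_practices(200_000_000))    # 35000000
# A firm with EUR 1bn turnover: 7% is EUR 70m, which exceeds the floor.
print(max_fine_prohibited_practices(1_000_000_000))  # 70000000
```

In practice the flat EUR 35 million figure governs any company with under roughly EUR 500 million in turnover, which is why smaller firms still face existential exposure under the regime.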
Broader obligations for high-risk AI systems, such as hiring tools, credit scoring, and medical diagnostics, will apply from August 2026. Separate rules for general-purpose AI models are scheduled to take effect in August 2025.
Surveys from European SME groups indicate that many smaller technology companies feel unprepared. A significant share of respondents have not conducted formal risk classification of their AI systems, despite this being a foundational requirement under the EU AI Act’s tiered framework.
While some founders warn that compliance costs could slow innovation, others point to long-term benefits from clearer governance standards. For startups, the coming months will focus on aligning products with AI Act risk tiers and strengthening documentation and oversight before stricter rules apply.
The UK’s Information Commissioner’s Office has fined Reddit £14.47 million after finding that the platform unlawfully used children’s personal information and failed to put in place adequate age checks.
Although Reddit updated its processes in July 2025, self-declaration remained easy to bypass, offering only a veneer of protection. Investigators also found that the company had not completed a data protection impact assessment until 2025, despite a large number of teenagers using the service.
Concerns were heightened by the volume of children affected and the risks created by relying on inadequate age checks.
The regulator noted that unlawful data processing occurred over a prolonged period, and that children were at risk of viewing harmful material while their information was processed without a lawful basis.
UK Information Commissioner John Edwards said companies must prioritise meaningful age assurance and understand the responsibilities set out in the Children’s Code.
The ICO said it will continue monitoring Reddit’s current controls and expects online platforms to align with robust age-assurance standards rather than rely on weak verification.
It will coordinate its oversight with Ofcom as part of broader efforts to strengthen online safety and ensure under-18s benefit from high privacy protections by default.
Across organisations, AI tools are moving beyond IT teams and into everyday business functions. CIOs now face the challenge of widening access while protecting data, security and trust.
Earlier waves of low-code platforms and citizen data science showed that empowerment can boost innovation but also create shadow IT and technical debt. AI agents and generative systems raise the stakes, with risks ranging from data leaks to flawed automated decisions.
Pressure from boards and business leaders means AI cannot be restricted to a small pilot group. Transparent governance, approved toolkits, and updated data policies are essential to prevent misuse while still enabling experimentation.
Long-term success depends on culture as much as technology. Leaders must define a focused AI vision, invest in literacy and adapt change management so employees use AI to improve decisions rather than accelerate flawed processes.