Pennsylvania Senate passes bill to tackle AI-generated CSAM

The Pennsylvania Senate has passed Senate Bill 1050, requiring all individuals classified as mandated reporters to notify authorities of any instance of child sexual abuse material (CSAM) they become aware of, including material produced by a minor or generated using artificial intelligence.

The bill, sponsored by Senators Tracy Pennycuick, Scott Martin and Lisa Baker, addresses the recent rise in AI-generated CSAM and builds upon earlier legislation (Act 125 of 2024 and Act 35 of 2025) that targeted deepfakes and sexual deepfake content.

Supporters argue the bill strengthens child protection by closing a legal gap: while existing laws focused on CSAM involving real minors, the new measure explicitly covers AI-generated material. Senator Martin said the threat from AI-generated images is ‘very real’.

From a tech policy perspective, this law highlights how rapidly evolving AI capabilities, especially around image synthesis and manipulation, are pushing lawmakers to update obligations for reporting, investigation and accountability.

It raises questions around how institutions, schools and health-care providers will adapt to these new responsibilities and what enforcement mechanisms will look like.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Foxconn and OpenAI strengthen US AI manufacturing

OpenAI has formed a new partnership with Foxconn to prepare US manufacturing for a fresh generation of AI infrastructure hardware.

The agreement centres on design support and early evaluation instead of immediate purchase commitments, which gives OpenAI a path to influence development while Foxconn builds readiness inside American facilities.

Both companies expect rapid advances in AI capability to demand a new class of physical infrastructure. They plan to co-design several generations of data centre racks that can keep pace with model development instead of relying on slower single-cycle upgrades.

OpenAI will share insight into future hardware needs while Foxconn provides engineering knowledge and large-scale manufacturing capacity across the US.

A key aim is to strengthen domestic supply chains by improving rack architecture, widening access to domestic chip suppliers and expanding local testing and assembly. Foxconn intends to produce essential data centre components in the US, including cabling, networking, cooling and power systems.

The companies present the effort as a way to support faster deployment, build more resilient infrastructure and deliver economic benefits to American workers.

OpenAI frames the partnership as part of a broader push to ensure that critical AI infrastructure is built within the US instead of abroad. Company leaders argue that a robust domestic supply chain will support American leadership in AI and keep the benefits widely shared across the economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI in healthcare gains regulatory compass from UK experts

Professor Alastair Denniston has outlined the core principles for regulating AI in healthcare, describing AI as the ‘X-ray moment’ of our time.

Like previous innovations such as MRI scanners and antibiotics, AI has the potential to dramatically improve diagnosis, treatment and personalised care, but it also requires careful oversight to ensure patient safety.

The MHRA’s National Commission on the Regulation of AI in Healthcare is developing a framework based on three key principles. The framework must be safe, ensuring proportionate regulation that protects patients without stifling innovation.

It must be fast, reducing delays in bringing beneficial technologies to patients and supporting small innovators who cannot endure long regulatory timelines. Ultimately, it must be trusted, with transparent processes that foster confidence in AI technologies today and in the future.

Professor Denniston emphasises that AI is not a single technology but a rapidly evolving ecosystem. The regulatory system must keep pace with advances while allowing the NHS to harness AI safely and efficiently.

Just as with earlier medical breakthroughs, failure to innovate can carry risks equal to the dangers of new technologies themselves.

The National Commission will soon invite the public to contribute their views through a call for evidence.

Patients, healthcare professionals, and members of the public are encouraged to share what matters to them, helping to shape a framework that balances safety, speed, and trust while unlocking the full potential of AI in the NHS.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DPDP law takes effect as India tightens AI-era data protections

India has activated new Digital Personal Data Protection rules that sharply restrict how technology firms collect and use personal information. The framework limits data gathering to what is necessary for a declared purpose and requires clear explanations, opt-outs, and breach notifications for Indian users.

The rules apply across digital platforms, from social media and e-commerce to banks and public services. Companies must obtain parental consent for individuals under 18 and are prohibited from using children’s data for targeted advertising. Firms have 18 months to comply with the new safeguards.

Users can request access to their data, ask why it was collected, and demand corrections or updates. They may withdraw consent at any time and, in some cases, request deletion. Companies must respond within 90 days, and individuals can appoint someone to exercise these rights.

Civil society groups welcomed stronger user rights but warned that the rules may also expand state access to personal data. The Internet Freedom Foundation criticised limited oversight and said the provisions risk entrenching government control, reducing transparency for citizens.

India is preparing further digital regulations, including new requirements for AI and social media firms. With nearly a billion online users, the government has urged platforms to label AI-generated content amid rising concerns about deepfakes, online misinformation, and election integrity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Target expands OpenAI partnership with new ChatGPT shopping app

Target is expanding its partnership with OpenAI by launching a new shopping app directly inside ChatGPT. The app offers customers personalised recommendations, multi-item baskets and streamlined checkout across Drive Up, Order Pickup and shipping.

The retailer will continue using OpenAI’s models and ChatGPT Enterprise to enhance employee productivity and strengthen digital experiences across its business.

AI is central to Target’s operations, supporting supply-chain forecasting, store processes, and personalised digital tools. More than 18,000 employees use ChatGPT Enterprise to streamline routine tasks, boost creativity, and get faster support for guest requests and returns through internal AI assistants.

Customer-facing tools such as Shopping Assistant, Gift Finder, Guest Assist and JOY reinforce this strategy by offering curated suggestions and instant answers.

The new Target app inside ChatGPT extends this AI-driven approach to customers. Shoppers will be able to ask for ideas, browse curated suggestions, build baskets and check out through their Target accounts.

The beta version launches next week, and upcoming features include Target Circle linking and same-day delivery. Target views the partnership as part of a retail shift, embedding AI across products, operations and guest interactions to drive the next wave of innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI co-pilot uses CAD software to generate 3D designs

MIT engineers have developed a novel AI system able to use CAD software in a human-like way, controlling the interface with clicks, drags and menu commands to build 3D models from 2D sketches.

The team created a dataset called VideoCAD, comprising more than 41,000 real CAD session videos that explicitly show how users build shapes step-by-step, including mouse movement, keyboard commands and UI interactions.

By learning from this data, the AI agent can translate high-level design intents, such as ‘draw a line’ or ‘extrude a shape’, into specific UI actions like clicking a tool, dragging over a sketch region and executing the command.

When given a 2D drawing, the AI generates a complete 3D model by replicating the sequence of UI interactions a human designer would use. The researchers tested this on a variety of objects, from simple brackets to more complex architectural shapes.
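The intent-to-action translation described above can be sketched in a few lines. This is a minimal illustrative sketch only: the class and function names (`UIAction`, `plan_actions`), the tool paths and the coordinates are all assumptions for illustration, not the MIT team’s actual API or the VideoCAD data format.

```python
# Hypothetical sketch of mapping a high-level CAD intent to the
# low-level UI actions a human designer would perform.
from dataclasses import dataclass


@dataclass
class UIAction:
    kind: str            # "click", "drag", or "key"
    target: str          # UI element or canvas region (illustrative paths)
    payload: tuple = ()  # e.g. drag start/end coordinates, typed text


def plan_actions(intent: str) -> list[UIAction]:
    """Translate a design intent into a sequence of UI interactions."""
    if intent == "draw a line":
        return [
            UIAction("click", "toolbar/line_tool"),
            UIAction("drag", "canvas", ((10, 10), (120, 10))),
        ]
    if intent == "extrude a shape":
        return [
            UIAction("click", "toolbar/extrude_tool"),
            UIAction("click", "canvas/sketch_region"),
            UIAction("key", "dialog/depth", ("25",)),  # extrusion depth
        ]
    raise ValueError(f"unknown intent: {intent}")


steps = plan_actions("draw a line")
print([s.kind for s in steps])  # → ['click', 'drag']
```

In the system the researchers describe, a learned model rather than hand-written rules produces such action sequences, having been trained on the recorded sessions in VideoCAD.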

The long-term vision is to build an AI-enabled CAD co-pilot. This tool not only automates repetitive modelling tasks but also works collaboratively with human designers to suggest next steps, speed up workflows or handle tedious operations.

The researchers argue this could significantly lower the barrier to entry for CAD use, making 3D design accessible to people without years of training.

From a digital economy and innovation policy perspective, this development is significant. It demonstrates how AI-driven UI agents are evolving, not just processing text or data, but also driving complex, creative software. That raises questions around intellectual property (who owns the design if the AI builds it?), productivity (will it replace or support designers?) and education (how will CAD teaching adapt?).

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI revives Molière play for modern stage

Centuries after Molière’s sudden death on stage, a France-based team of researchers, artists and AI specialists has revived the playwright’s creative spirit through a newly generated comedy. The project asked what he might have written had he lived beyond the age of 51.

Experts trained an AI model to study his themes, language and narrative patterns, before combining its output with scholarly review. The resulting play, titled ‘L’Astrologue ou les Faux Présages’, will premiere at the Palace of Versailles next year.

Researchers identified astrology as a theme Molière frequently hinted at, shaping a plot in which a naive bourgeois falls victim to a deceptive astrologer. Academics refined the AI text to ensure historical accuracy, offering fresh insight into Molière’s methods and reaffirming his lasting influence on French theatre.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Most Canadian businesses adopt AI but few see clear returns

Most Canadian businesses are using generative AI, yet few have fully integrated it into core operations. Only a small fraction are seeing measurable returns, according to new research from KPMG Canada.

Among 753 surveyed business leaders, 93 percent reported some AI adoption. Still, only 31 percent have deployed it across all workflows, while 32 percent have partially integrated AI, and 20 percent remain in early experimentation phases.

Despite widespread adoption, only 2 percent of companies reported a clear return on investment, mostly among firms with annual revenues over $1 billion. Nearly two-thirds estimated returns of between five and 20 percent, while almost a third could not quantify them at all.

Most leaders expect returns within one to five years, highlighting the gap between AI adoption and measurable business impact. Experts emphasise that clear strategies and robust metrics are crucial to translate AI implementation into quantifiable growth.

KPMG Canada notes that successful AI integration requires investment not only in technology, but also in people and processes. Organisations are prioritising talent acquisition, skills training and change management to enhance AI literacy and scale adoption.

Strong governance and strategic frameworks that track both financial and operational benefits are crucial for companies to fully leverage the potential of AI and maintain competitiveness in a rapidly evolving economic landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kremlin launches new push for domestic AI development

Russian President Vladimir Putin has ordered the creation of a national task force to accelerate the development of domestic generative AI systems, arguing that homegrown models are essential to safeguarding the country’s technological and strategic sovereignty. Speaking at AI Journey, Russia’s flagship AI forum, he warned that foreign large language models shape global information flows and can influence entire populations, making reliance on external technologies unacceptable.

Putin said the new task force will prioritise expanding data-centre infrastructure and securing reliable energy supplies, including through small-scale nuclear power stations. Russia still trails global leaders like the United States and China, but local companies have produced notable systems such as Sberbank’s GigaChat and Yandex’s YandexGPT.

Sberbank unveiled a new version of GigaChat and showcased AI-powered tools ranging from humanoid robots to medical-scanning ATMs. However, recent public demonstrations have drawn unwanted attention, including an incident in which a Russian AI robot toppled over on stage.

The Kremlin aims for AI technologies to contribute more than 11 trillion roubles ($136 billion) to Russia’s economy by 2030. Putin urged state bodies and major companies to adopt AI more aggressively while cautioning against overly strict regulation.

However, he stressed that only Russian-made AI systems should be used for national security to prevent sensitive data from flowing abroad. Western sanctions, which restrict access to advanced hardware, particularly microchips, continue to hinder Russia’s ambitions.

The push for domestic AI comes as Ukraine warns that Russia is developing a new generation of autonomous, AI-driven drones capable of operating in coordinated swarms and striking targets up to 62 miles away, underscoring the growing military stakes of the AI race.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU simplifies digital rules to save billions for companies

The European Commission has unveiled a digital package designed to simplify rules and reduce administrative burdens, allowing businesses to focus on innovation rather than compliance.

The initiative combines the Digital Omnibus, Data Union Strategy, and European Business Wallet to strengthen competitiveness across the EU while maintaining high standards of fundamental rights, data protection, and safety.

The Digital Omnibus streamlines rules on AI, cybersecurity, and data. Amendments will create innovation-friendly AI regulations, simplify reporting for cybersecurity incidents, harmonise aspects of the GDPR, and modernise cookie rules.

Improved access to data and regulatory guidance will support businesses, particularly SMEs, allowing them to develop AI solutions and scale operations across member states more efficiently.

The Data Union Strategy aims to unlock high-quality data for AI, strengthen Europe’s data sovereignty, and support businesses with legal guidance and strategic measures to ensure fair treatment of EU data abroad.

Meanwhile, the European Business Wallet will provide a unified digital identity for companies, enabling secure signing, storage, and exchange of documents and communication with public authorities across 27 member states.

By easing administrative procedures, the package could save up to €5 billion by 2029, with the Business Wallet alone offering up to €150 billion in annual savings.

The Commission has launched a public consultation, the Digital Fitness Check, to assess the impact of these rules and guide future steps, ensuring that businesses can grow and innovate instead of being held back by complex regulations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!