The Scottish government has launched a consultation proposing a specific criminal offence for creating AI-generated intimate images without consent. Existing Scots law covers the sharing of such images, but ministers say gaps remain around their creation.
The consultation also seeks views on criminalising digital tools designed solely to produce intimate images and videos, as ministers aim to address harms linked to emerging AI technologies affecting women and girls.
Additional proposals include a statutory aggravation where domestic abuse involves a pregnant woman, requiring courts to treat such cases more seriously at sentencing. Measures to strengthen protections against spiking offences are also under review.
Justice Secretary Angela Constance said responses would inform future action to reduce violence against women and girls. The consultation also considers changes to non-harassment orders and examines whether further laws on non-fatal strangulation are needed.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI should be integrated into classrooms rather than avoided, according to Gilbert Busana of the University of Luxembourg. Speaking to RTL Today, he said ignoring AI would be a disservice to pupils and teachers alike.
Busana argued that AI should be taught both as a standalone subject and across disciplines in Luxembourg schools. Clear guidelines are needed to define when and how pupils may use AI, alongside transparency about its role in assignments.
He stressed that developing AI literacy is essential to protect critical thinking. Assessment methods may shift away from focusing solely on final outputs towards evaluating the learning process itself.
Teachers are increasingly becoming coaches rather than mere transmitters of knowledge, he added, and continuous professional training and collaboration within schools will be vital as AI reshapes education.
OpenAI has reported new instances of its models being exploited in online scams and coordinated information campaigns. The company detailed actions to remove offending accounts and strengthen safeguards, highlighting misuse in fraud and deceptive content creation.
Several cases involved romance and ‘task’ scams, in which AI-generated messages built emotional engagement before requesting payment. One network, dubbed ‘Operation Date Bait,’ used chatbots to promote a fictitious dating service targeting young men in Indonesia.
Another, ‘Operation False Witness,’ saw actors posing as legal professionals to solicit advance fees for non-existent recovery services.
The report also outlined coordinated campaigns leveraging AI to produce articles, social media posts, and comments on geopolitical topics. In ‘Operation Trolling Stone,’ AI-generated content on a Russian arrest in Argentina was shared widely in multiple languages to mimic grassroots engagement.
OpenAI stressed that while AI was sometimes used in these operations, engagement was largely driven by account reach and size rather than by the generated content itself.
The company continues to monitor misuse and collaborates with partners and authorities to curb fraudulent or deceptive activity. Its systems have been updated to decline policy-violating requests, and OpenAI noted that not all suspicious content online was generated using its tools.
Efforts to secure a foothold in Europe have led Binance to select Greece as its entry point for operating under the EU’s Markets in Crypto-Assets (MiCA) framework. A licence would let the exchange offer services across the European Union once MiCA’s transitional period ends in July 2026.
Strategic considerations outweigh speed in the decision. Co-chief executive Richard Teng cited workforce quality, safety, and long-term growth potential as decisive factors, even though several larger EU economies have already issued more licences.
Regulatory attention continues to shape the company’s trajectory. Founder Changpeng Zhao remains a shareholder, while leadership says its reforms aim to make the platform one of the most regulated exchanges globally.
Expansion plans are unfolding amid turbulent market conditions. Bitcoin’s price remains well below last year’s highs, dampening retail sentiment, yet institutional participation has remained resilient, supporting liquidity amid volatility.
Researchers at Kyoto University have presented an AI robot monk designed to assist with religious ceremonies and spiritual guidance. The prototype, revealed at Shoren-in temple, demonstrates how robotics and faith traditions may coexist.
Equipped with an AI system based on Buddhist scriptures, the robot answers questions about personal struggles and wider social concerns. During a demonstration, it offered reflective advice while performing gestures such as bowing and placing its palms together.
Developers combined a chatbot powered by modern language technology with movements from an existing humanoid robot built by a Chinese manufacturer. Careful programming aimed to reproduce calm behaviour associated with traditional monks.
Japan faces a gradual decline in the number of active temples and clergy, encouraging the exploration of technological support within religious life. Project leaders believe the AI monk could represent a significant shift in preserving spiritual services for future communities.
A federal judge in Texas has preliminarily approved a $177 million settlement resolving claims that AT&T failed to safeguard consumer data in two separate breaches. The company denies wrongdoing but agreed to establish compensation funds covering affected customers nationwide.
The agreement creates two non-reversionary funds: $149 million for individuals whose personal data appeared on the dark web, and $28 million for customers whose call and text logs were accessed. It covers a March 2024 breach and a separate incident between May 2022 and early 2023.
Eligible class members may submit claims for cash payments, with amounts depending on the number of valid submissions, and may also receive up to 24 months of credit monitoring. The deadline to opt out or object is 17 October 2025, with a final approval hearing set for 3 December 2025.
Legal and administrative costs, attorneys’ fees, and service awards will be paid from the settlement funds. The case resolves claims brought on behalf of all living US residents whose data was exposed in the two AT&T breaches.
The settlement follows other recent legal challenges facing AT&T, including class actions filed by New York pensioners alleging the company misled investors about the environmental impact of its lead-sheathed cables.
The Italian data protection authority has ordered Amazon Italia Logistics to halt processing of sensitive employee data after investigators found that the company gathered details ranging from health conditions to union involvement.
Information about workers’ private lives and family members had also been collected, often retained for a decade through internal tracking systems rather than being limited to what labour rules in Italy allow.
Regulators discovered that some data originated from cameras positioned near restrooms and staff break areas, a practice that breached EU privacy standards.
The watchdog concluded that the company’s monitoring went far beyond what employers are permitted to compile when assessing staff performance or workplace needs.
Amazon responded by stressing that protecting employee information remains a priority and said that internal rules and training programmes are designed to ensure compliance. The company added that any findings from the Italian authority would prompt a review of its procedures rather than be dismissed.
The order arrives as Amazon attempts to regain its lobbying badges at the European Parliament.
Access was suspended in 2024 after senior representatives declined to attend hearings on warehouse working conditions, and opposition from MEPs continues to place pressure on Parliament President Roberta Metsola to reject reinstatement.
Regulatory scrutiny of the EU’s digital fairness framework is set to begin on 1 July as the European Commission moves to tighten its supervision of online platforms.
The Commission is preparing a major upgrade of its consumer protection framework, expected by December 2026.
The reforms aim to reinforce enforcement tools under the Unfair Commercial Practices Directive and the Consumer Protection Cooperation Regulation, allowing regulators to intervene more effectively when platforms breach fairness standards.
Michael McGrath, Commissioner for Democracy, Justice and Rule of Law, has highlighted the need for greater transparency and accountability as digital markets expand rapidly.
The forthcoming scrutiny focuses on ensuring that platforms respect transparency obligations, avoid manipulating users and provide fair conditions in online transactions.
Regulators seek to replace fragmented enforcement with a more coordinated model that reflects the increasingly cross-border nature of digital commerce.
Stronger consumer safeguards are becoming central to the EU’s digital agenda.
The next phase of reforms is expected to streamline investigations across member states and deliver more predictable outcomes for affected consumers, offering steadier enforcement instead of reactive measures taken after violations escalate.
The European Securities and Markets Authority (ESMA) has clarified that many crypto-perpetual contracts, including those for Bitcoin and Ether, are likely to be classified as contracts for difference (CFDs).
Due to their leverage, complexity, and risk, these products should target a narrow audience, with distribution strategies aligned accordingly.
The announcement came as Kraken launched perpetual futures for ten tokenised assets, including major indices, gold, and top tech and crypto stocks. ESMA warned that mass marketing or promotions targeting inexperienced investors are inappropriate under its guidance.
Firms must ensure that derivatives falling within the CFD category comply with product intervention requirements, including leverage limits, risk warnings, margin close-outs, negative balance protection, and a ban on incentives or benefits.
Non-advised services must include an appropriateness assessment to protect investors from unsuitable offerings.
ESMA also emphasised the importance of identifying and managing conflicts of interest arising from these products. The statement seeks to ensure firms market and distribute leveraged crypto products responsibly.
OpenAI said criminal and state-linked groups misused ChatGPT for disinformation, scams and covert influence. Its latest threat report details coordinated account bans and highlights how AI tools are embedded within broader operational workflows rather than used in isolation.
One investigation linked accounts to Chinese law enforcement engaged in what were described as ‘cyber special operations’. Activities included planning influence campaigns, mass-reporting dissidents and drafting forged materials, with related efforts continuing through other tools despite model refusals.
The report also outlined a Cambodia-based romance scam targeting young men in Indonesia through a fake dating agency. Operators combined manual prompting with automated chatbots to sustain conversations and facilitate financial fraud, leading to account removals.
Separately, accounts tied to Russia’s ‘Rybar’ network used ChatGPT to draft and translate posts distributed across multiple platforms. OpenAI noted that campaign impact depended more on account reach and coordination than on AI-generated content alone.
Across China, Russia and parts of Southeast Asia, actors treated AI as one tool among many, alongside fake profiles, paid advertising and forged documents. OpenAI called for cross-industry vigilance, stressing the need to analyse behavioural patterns across platforms.