Most college students use, or are required to use, AI in classwork, but institutions lag in AI education

Research from Honorlock indicates a substantial shift in how students engage with generative AI in higher education: more than 56% of surveyed US college-enrolled students report being required to use AI tools in coursework, and 63% use AI for at least some assignments.

The most common uses include grammar and editing support (59%) and text generation (57%), with students also using AI to brainstorm ideas and clarify concepts.

Despite widespread AI use, there remains a significant gap in formal AI education: only 31% of students are aware of AI-focused courses at their institutions, and fewer than 20% have taken them.

Students themselves often learn AI skills independently rather than through a structured curriculum, potentially leaving them unprepared for workplaces where AI fluency is expected.

The survey also highlights academic integrity risks: more than one-third of students admitted to using AI assistance on quizzes or exams, underlining the need for clear AI use policies, responsible-use training and ethical frameworks within higher education.

Researchers and advocates argue that colleges should integrate AI literacy, including ethics, governance, real-world applications and responsible use, into coursework to better equip graduates for AI-enabled careers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kentucky AI therapy ban passes with strong support in decisive 88–7 vote

Lawmakers in the Kentucky House of Representatives have approved House Bill 455, a measure aimed at limiting the role of AI in mental health services. The proposal introduces safeguards to regulate the use of AI tools in therapy settings and to strengthen patient protections.

Under the bill, AI systems are prohibited from making independent therapeutic decisions or generating treatment plans without review from a licensed therapist. In particular, tools such as ChatGPT, Gemini, and Claude would be barred from performing direct therapy or replacing human interaction.

However, self-help materials and educational resources are explicitly exempt from the restrictions. Therapists may still use AI as a supportive tool, provided they do not delegate substantive clinical responsibilities or direct client engagement.

In addition, practitioners must inform patients if AI is being used and obtain their consent. Supporters argue that preserving the human-to-human relationship in therapy is essential, especially amid concerns that some chatbot systems have encouraged harmful behaviour or worsened mental health outcomes.

Although the bill passed the House 88-7, opposition came mainly from libertarian-leaning Republican members who contended that the measure introduces unnecessary regulation and could hinder innovation. Nevertheless, backers maintain that clearer guardrails are necessary to address risks linked to automated mental health advice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI music discovery unlocks powerful ways to find new songs

AI tools developed by companies such as OpenAI, Anthropic, and Google are increasingly shaping everyday digital practices. While these systems are not fully reliable for complex research, they offer practical support for routine tasks. One emerging use case is personalised music discovery.

Music platforms such as Spotify and Apple Music allow users to export their listening history, creating opportunities for AI-driven analysis. By uploading a music library file, users enable AI systems to categorise genres, detect patterns, and identify gaps in their playlists. Broader preferences can then be refined through targeted prompts.
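As an illustration of the analysis step, a minimal script could summarise an exported library before handing it to an AI assistant. The sketch below assumes an Exportify-style CSV with an 'Artist Name(s)' column; the column name is an assumption, so adjust it to match the actual export.

```python
import csv
from collections import Counter


def summarise_library(csv_path: str, top_n: int = 10) -> list[tuple[str, int]]:
    """Count how often each artist appears in an exported listening-history CSV.

    Assumes an Exportify-style file with an 'Artist Name(s)' column
    (comma-separated when a track has several artists).
    """
    counts: Counter[str] = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # A track may list several artists separated by commas.
            for artist in row.get("Artist Name(s)", "").split(","):
                if artist.strip():
                    counts[artist.strip()] += 1
    return counts.most_common(top_n)
```

The resulting top-artists list can be pasted into a prompt such as "Recommend ten songs similar to these, excluding these artists", giving the model concrete data to work from.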

Greater specificity improves results. Users can exclude familiar artists, prioritise recent releases, or emphasise similarities with favourite bands. Signature tracks may be suggested for evaluation, allowing continuous feedback. Iterative interaction helps the system better understand musical preferences over time, leading to increasingly accurate recommendations.

Once curated, playlists can be exported and transferred back to streaming services using tools such as Exportify and TuneMyMusic. Although some may question the data implications of such personalisation, the process remains efficient, fast, and engaging. AI-driven music discovery ultimately demonstrates how general-purpose systems can deliver highly tailored cultural experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenClaw exploits spark a major security alert

A wave of coordinated attacks has targeted OpenClaw, the autonomous AI framework that gained rapid popularity after its release in January.

Multiple hacking groups have exploited severe vulnerabilities to steal API keys, extract persistent memory data, and push information-stealing malware to the platform’s expanding user base.

Security analysts have linked more than 30,000 compromised instances to campaigns that intercept messages and deploy malicious payloads through channels such as Telegram.

Much of the damage stems from flaws such as the Remote Code Execution vulnerability CVE-2026-25253, supply chain poisoning, and exposed administrative interfaces. Early attacks centred on the ‘ClawHavoc’ campaign, which disguised malware as legitimate installation tools.

Users who downloaded these scripts inadvertently installed stealers capable of full compromise, enabling attackers to move laterally across enterprise systems rather than remaining confined to a single device.

Further incidents emerged on the OpenClaw marketplace, where backdoored ‘skills’ were published from accounts that appeared reliable. These updates executed remote commands that allowed attackers to siphon OAuth tokens, passwords, and API keys in real time.

A Shodan scan later identified more than 312,000 OpenClaw instances running on a default port with little or no protection, while honeypots recorded hostile activity within minutes of appearing online.

Security researchers argue that the surge in attacks marks a decisive moment for autonomous AI frameworks. As organisations experiment with agents capable of independent decision-making, the absence of security-by-design safeguards is creating opportunities for organised threat groups.

Flare’s advisory urges companies to secure credentials and isolate AI workloads instead of relying on default configurations that expose high-privilege systems to the internet.
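As a starting point for the kind of exposure audit described above, a team can check whether a service port is reachable at all. This is a minimal sketch; the host and port you test against are your own deployment details, not OpenClaw defaults confirmed by the advisory.

```python
import socket


def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout.

    Useful as a quick self-check: run it against your AI workload's host from
    an *external* network position to see whether the port is internet-exposed.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A port that answers from outside the trusted network is a signal to add firewall rules or bind the service to an internal interface, in line with the advisory's call to isolate AI workloads.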

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI presents the biggest data-risk challenge in history

Cybersecurity specialists warn that generative AI systems, such as large language models, are creating a data risk frontier far larger than that posed by previous digital innovations.

Because these models are trained on extensive datasets drawn from web pages, internal documents, email corpora and proprietary sources, they can unintentionally memorise or regenerate sensitive information, increasing the risk of exposure.

The article highlights several core concerns. The first is data leakage and memorisation: AI models can repeat or infer private data if training processes are not tightly controlled.

The second is amplification of poor hygiene: generative tools can magnify the reach of bad actors by automating phishing, social engineering and malware generation at scale.

The third is compounding breach impact: a model trained on stolen or leaked data could internalise and regurgitate that information without detection, entrenching the harm. Finally, gaps in cloud and access governance mean that organisations adopting AI without robust access controls and encryption may widen their attack surface.
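One simple screen for the verbatim memorisation risk described above is measuring word n-gram overlap between a model's output and known sensitive text. The sketch below is illustrative only; the choice of n is an assumption, and real leakage audits use far more sophisticated methods.

```python
def ngram_overlap(reference_text: str, output_text: str, n: int = 8) -> float:
    """Fraction of word n-grams in output_text that appear verbatim in reference_text.

    A value near 1.0 suggests the output reproduces the reference almost
    word-for-word; a value near 0.0 suggests little verbatim overlap.
    """
    def ngrams(text: str) -> set[tuple[str, ...]]:
        words = text.split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    out = ngrams(output_text)
    if not out:
        # Output shorter than n words: no n-grams to compare.
        return 0.0
    return len(out & ngrams(reference_text)) / len(out)
```

Running such a check over model outputs against a corpus of confidential documents is one low-cost way to flag candidate leaks for human review.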

The author calls for revised data governance frameworks, including strict training data provenance, auditability, encryption, minimisation and purpose limitation, to mitigate what is described as ‘the biggest data risk in history.’

Recommendations also include accountability measures for models, continuous monitoring, and legislative action to align AI development with privacy and security principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Altman urges urgent AI regulation

OpenAI chief Sam Altman has called for urgent global regulation of AI, speaking at the AI Impact Summit in New Delhi. Addressing leaders and executives, he said the rapid pace of development demands coordinated international oversight.

Altman suggested creating a body similar to the International Atomic Energy Agency to oversee advanced AI systems. He warned that highly capable open-source biomodels could pose serious biosecurity risks if misused.

He argued that democratising AI is essential to prevent power from being concentrated in a single company or country, adding that safeguards are urgently required even as the technology continues to disrupt labour markets.

During the summit, Altman said ChatGPT has 100 million weekly users in India, more than a third of them students. OpenAI also announced plans with Tata Consultancy Services to build data centre infrastructure in India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea accelerates AI education reform in universities

South Korea’s Ministry of Education has launched a nationwide initiative to introduce mandatory AI courses at universities. The measure aims to ensure that all students acquire basic AI skills, regardless of their major, and to extend AI education reforms to higher education.

Under the plan, 6 billion won will be allocated to 20 universities, each receiving 300 million won to develop compulsory introductory AI courses. An additional 30 billion won will support national universities outside Seoul, alongside 5 billion won for short-term interdisciplinary AI programmes.

AI education will be integrated across disciplines rather than confined to computer science departments. Universities are expected to introduce AI courses for non-engineering majors, promote cross-faculty collaboration, and establish campus-wide support systems.

Participating institutions will share curricula, enable credit recognition across universities, and expand course delivery through online platforms. A consultative group will coordinate implementation and disseminate best practices nationwide.

Significant structural challenges remain. Shortages of AI-specialised faculty, limited recruitment flexibility, and the absence of generative AI guidelines in many institutions raise concerns about implementation capacity.

Education officials state that support will also be provided to professors outside AI-related fields to strengthen teaching capacity and address instructor shortages.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit tests AI shopping search

Reddit has begun testing an AI-powered shopping search tool with a limited group of users in the US. Search queries for product ideas now generate interactive carousels featuring prices, images and direct links to retailers.

Items appearing in the results are drawn from recommendations shared in posts and comments across the platform. Listings are connected to Reddit’s advertising and shopping partners, bringing community discussions closer to online purchasing.

Expansion into AI-led commerce builds on the company’s earlier launch of Dynamic Product Ads, designed to deliver personalised suggestions. Closer integration of search and shopping signals a broader effort to strengthen digital revenue streams.

Chief executive Steve Huffman recently described AI search as a significant business opportunity beyond product development alone. Weekly search users increased from 60 million to 80 million over the past year, while engagement with the AI-powered Reddit Answers tool rose sharply throughout 2025.

Developments place Reddit alongside other technology platforms investing in AI-driven retail features. Growing user engagement suggests the company sees search as central to its future commercial strategy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Chinese AI video tool unsettles Hollywood

A new AI video model developed by ByteDance has unsettled Hollywood after generating cinema-quality clips from brief text prompts. Seedance 2.0, launched in 2025, went viral for producing realistic action scenes featuring western cinematic characters such as Spider-Man and Deadpool.

In response, major studios, including Disney and Paramount, issued cease-and-desist letters over alleged copyright infringement. Japan has also begun investigating ByteDance after AI-generated anime videos spread widely online.

Industry experts say Seedance 2.0 stands out for combining text, visuals and audio within a single system. Analysts in Singapore and Melbourne argue that Chinese AI models are now matching US competitors at the technological frontier.

As Seedance 2.0 gains traction, Beijing continues to prioritise AI and robotics in its economic strategy. The rise of tools from China has intensified debate in the US and beyond over copyright, regulation and the future of creative work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India’s UIDAI rolls out AI-enabled biometric deduplication and document verification platform

UIDAI has deployed an advanced platform that uses AI-enabled models to improve biometric deduplication, the process of ensuring that each resident has a unique identity record, by checking fingerprints, facial images and iris scans against the entire Aadhaar database.

The authority describes this system, developed with the International Institute of Information Technology, Hyderabad, as an ‘Invisible Shield’ that can perform billions of computations efficiently at a population scale, running on high-performance inference infrastructure such as NVIDIA DGX systems to enhance accuracy and speed nationwide.

In addition to biometric matching, the platform incorporates AI-based document metadata extraction and verification to curb enrolment fraud, using secure APIs (e.g. DigiLocker) for source-of-truth checks against submitted documents.

The system is already being rolled out in several states. It is expected to expand across India in the coming months, boosting service quality, reducing turnaround times for Aadhaar enrolment and update transactions, and reinforcing trust in the digital identity infrastructure.

The initiative is part of a broader push to leverage AI for fraud detection and identity assurance at a national scale. It comes amid ongoing efforts by UIDAI to modernise authentication processes as biometric and AI-based systems evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!