New offline AI note app promises privacy without subscriptions

Growing concern over data privacy and subscription fatigue has led an independent developer to create WitNote, an AI note-taking tool that runs entirely offline.

The software allows users to process notes locally on Windows and macOS rather than relying on cloud-based services where personal information may be exposed.

WitNote supports lightweight language models such as Qwen2.5-0.5B that can run with limited storage requirements. Users may also connect to external models through API keys if preferred.

Core functions include rewriting, summarising and extending content, while a WYSIWYG Markdown editor provides a familiar workflow without the network delays of web-based interfaces.

Another key feature is direct integration with Obsidian Markdown files, allowing notes to be imported instantly and managed in one place.

The developer says the project remains a work in progress but commits to ongoing updates and user-driven improvements, even joining Apple’s developer programme personally to support smoother installation.

For users seeking AI assistance while protecting privacy and avoiding monthly fees, WitNote positions itself as an appealing offline alternative that keeps full control of data on the local machine.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Best AI dictation tools for faster speech-to-text in 2026

AI dictation has reached maturity after years of patchy performance and frustrating inaccuracies.

Advances in speech-to-text engines and large language models now allow modern dictation tools to recognise everyday speech more reliably while keeping enough context to format sentences automatically instead of producing raw transcripts that require heavy editing.

Several leading apps have emerged with different strengths. Wispr Flow focuses on flexibility with style options and custom vocabulary, while Willow blends automation with privacy by storing transcripts locally.

Monologue also prioritises privacy by allowing users to download the model and run transcription entirely on their own machines. Superwhisper caters for power users by supporting multiple downloadable models and transcription from audio or video files.

Other tools take different approaches. VoiceTypr offers an offline-first design with lifetime licensing, Aqua promotes speed and phrase-based shortcuts, Handy provides a simple, free, open-source starting point, and Typeless gives one of the most generous free allowances while promising strong data protection.

Each reflects a wider trend where developers try to balance convenience, privacy, control and affordability.

Users now benefit from cleaner, more natural-sounding transcripts instead of the rigid audio typing tools of previous years. AI dictation has become faster, more accurate and far more usable for everyday note-taking, messaging and work tasks.


China proposes strict AI rules to protect children

China has proposed stringent new rules for AI aimed at protecting children and preventing chatbots from providing advice that could lead to self-harm, violence, or gambling.

The draft regulations, published by the Cyberspace Administration of China (CAC), require developers to include personalised settings, time limits, and parental consent for services offering emotional companionship.

High-risk chats involving self-harm or suicide must be passed to a human operator, with guardians or emergency contacts alerted. AI providers must not produce content that threatens national security, harms national honour, or undermines national unity.

The rules come as AI usage surges, with platforms such as DeepSeek, Z.ai, and Minimax attracting millions of users in China and abroad. The CAC supports safe AI use, including tools that promote local culture and provide companionship for the elderly.

The move reflects growing global concerns over AI’s impact on human behaviour. Notably, OpenAI has faced legal challenges over alleged chatbot-related harm, prompting the company to create roles focused on tracking AI risks to mental health and cybersecurity.

China’s draft rules signal a firm approach to regulating AI technology as its influence expands rapidly.


Hackers abuse new AI agent connections

Security researchers warn that hackers are exploiting the recently launched Connected Agents functionality in Microsoft Copilot Studio.

Connected Agents allows AI systems to interact and share tools across environments. Researchers say default settings can expose sensitive capabilities without clear monitoring.

Zenity Labs reported attackers linking rogue agents to trusted systems. Exploits included unauthorised email sending and data access.

Experts urge organisations to disable Connected Agents for critical workloads. Stronger authentication and restricted access are advised until safeguards improve.


Millions watch AI-generated brainrot content on YouTube

Kapwing research reveals that AI-generated ‘slop’ and brainrot videos now dominate a significant portion of YouTube feeds, accounting for 21–33% of the first 500 Shorts seen by new users.

These rapidly produced AI videos aim to grab attention but make it harder for traditional creators to gain visibility. Analysis of top trending channels shows Spain leads in AI slop subscribers with 20.22 million, while South Korea’s channels have amassed 8.45 billion views.

India’s Bandar Apna Dost is the most-viewed AI slop channel, earning an estimated $4.25 million annually and showing the profit potential of mass AI-generated content.

The prevalence of AI slop and brainrot has sparked debates over creativity, ethics, and advertiser confidence. YouTube CEO Neal Mohan calls generative AI transformative, but rising automated videos raise concerns over quality and brand safety.

Researchers warn that repeated exposure to AI-generated content can distort perception and contribute to information overload. Some AI content earns artistic respect, but much normalises low-quality videos, making it harder for users to tell meaningful content from repetitive or misleading material.


AI slop dominates YouTube recommendations for new users

More than 20 percent of videos recommended to new YouTube users are low-quality, attention-driven content commonly referred to as AI slop, according to new research. The findings raise concerns about how recommendation systems shape early user experience on the platform.

Video-editing firm Kapwing analysed 15,000 of YouTube’s top channels across countries worldwide. Researchers identified 278 channels consisting entirely of AI-generated slop, designed primarily to maximise views rather than provide substantive content.

These channels have collectively amassed more than 63 billion views and 221 million subscribers. Kapwing estimates the network generates around $117 million in annual revenue through advertising and engagement.

To test recommendations directly, researchers created a new YouTube account and reviewed its first 500 suggested videos. Of these, 104 were classified as AI slop, with around one third falling into a category described as brainrot content.

Kapwing found that AI slop channels attract large audiences globally, including tens of millions of subscribers in countries such as Spain, Egypt, the United States, and Brazil. Researchers said the scale highlights the growing reach of low-quality AI-generated video content.


AI is changing how Europeans work and learn

Generative AI has become an everyday tool across Europe, with millions using platforms such as ChatGPT, Gemini and Grok for personal, work, and educational purposes. Eurostat data shows that around a third of people aged 16–74 tried AI tools at least once in 2025.

Adoption varies widely across the continent. Norway leads with 56 percent of the population using AI, while Turkey records only 17 percent.

Within the EU, Denmark tops usage at 48 percent, and Romania lags at 18 percent. Northern and digitally advanced countries dominate, while southern, central-eastern, and Balkan nations show lower engagement.

Researchers attribute this to general digital literacy, internet use, and familiarity with technology rather than government policy alone. AI tools are used more for personal purposes than for work.

Across the EU, 25 percent use AI for personal tasks, compared with 15 percent for professional applications.

Usage in education is even lower, with only 9 percent employing AI in formal learning, peaking at 21 percent in Sweden and Switzerland and dropping to just 1 percent in Hungary.

Experts stress that while access is essential, understanding how to apply AI effectively remains a key barrier. Countries with strong digital foundations adopt AI more, while limited awareness and skills restrict use, emphasising the need for AI literacy and infrastructure.


AI chatbots spreading rumours raise new risks

Researchers warn that AI chatbots are spreading rumours about real people without human oversight. Unlike human gossip, bot-to-bot exchanges can escalate unchecked, growing more extreme as they move through AI networks.

Philosophers Joel Krueger and Lucy Osler from the University of Exeter describe this phenomenon as ‘feral gossip.’ It involves negative evaluations about absent third parties and can persist undetected across platforms.

Real-world examples include tech reporter Kevin Roose, who encountered hostile AI-generated assessments of his work from multiple chatbots, seemingly amplified as the content filtered through training data.

The researchers highlight that AI systems lack the social checks humans provide, allowing rumours to intensify unchecked. Chatbots are designed to appear trustworthy and personal, so negative statements can seem credible.

Such misinformation has already affected journalists, academics, and public officials, sometimes prompting legal action. Technosocial harms from AI gossip extend beyond embarrassment. False claims can damage reputations, influence decisions, and persist online and offline.

While chatbots are not conscious, their prioritisation of conversational fluency over factual accuracy can make the rumours they spread difficult to detect and correct.


AI in education receives growing attention across the EU

A recent Flash Eurobarometer survey shows that EU citizens consider digital skills essential for all levels of education. Nearly nine in ten respondents believe schools should teach students to manage the effects of technology on mental and physical health.

Most also agree that digital skills deserve the same focus as traditional subjects such as reading, mathematics and science.

The survey highlights growing interest in AI in education. Over half of respondents see AI as both beneficial and challenging, emphasising the need for careful assessment. Citizens also expect teachers to be trained in AI use, including Generative AI, to guide students effectively.

While many support smartphone bans in schools, there is strong backing for digital learning tools, with 87% in favour of promoting technology designed specifically for education. Teachers, parents and families are seen as key in fostering safe and responsible technology use.

Overall, EU citizens advocate for a balanced approach that combines digital literacy, responsible use of technology, and the professional support of educators and families to foster a healthy learning environment.


The AI terms that shaped debate and disruption in 2025

AI continued to dominate public debate in 2025, not only through new products and investment rounds, but also through a rapidly evolving vocabulary that captured both promise and unease.

From ambitious visions of superintelligence to cultural shorthand like ‘slop’, language became a lens through which society processed another turbulent year for AI.

Several terms reflected the industry’s technical ambitions. Concepts such as superintelligence, reasoning models, world models and physical intelligence pointed to efforts to push AI beyond text generation towards deeper problem-solving and real-world interaction.

Developments by companies including Meta, OpenAI, DeepSeek and Google DeepMind reinforced the sense that scale, efficiency and new training approaches are now competing pathways to progress, rather than sheer computing power alone.

Other expressions highlighted growing social and economic tensions. Words like hyperscalers, bubble and distillation entered mainstream debate as data centres expanded, valuations rose, and cheaper model-building methods disrupted established players.

At the same time, legal and ethical debates intensified around fair use, chatbot behaviour and the psychological impact of prolonged AI interaction, underscoring the gap between innovation speed and regulatory clarity.

Cultural reactions also influenced the development of the AI lexicon. Terms such as vibe coding, agentic and sycophancy revealed how generative systems are reshaping work, creativity and user trust, while ‘slop’ emerged as a blunt critique of low-quality, AI-generated content flooding online spaces.

Together, these phrases chart a year in which AI moved further into everyday life, leaving society to wrestle with what should be encouraged, controlled or questioned.
