Growing concern over data privacy and subscription fatigue has led an independent developer to create WitNote, an AI note-taking tool that runs entirely offline.
The software allows users to process notes locally on Windows and macOS rather than relying on cloud-based services where personal information may be exposed.
WitNote supports lightweight language models such as Qwen2.5-0.5B that can run with limited storage requirements. Users may also connect to external models through API keys if preferred.
Core functions include rewriting, summarising and extending content, while a WYSIWYG Markdown editor offers a familiar workflow without the network delays of web-based interfaces.
Another key feature is direct integration with Obsidian Markdown files, allowing notes to be imported instantly and managed in one place.
The developer says the project remains a work in progress but commits to ongoing updates and user-driven improvements, even joining Apple’s developer programme personally to support smoother installation.
For users seeking AI assistance while protecting privacy and avoiding monthly fees, WitNote positions itself as an appealing offline alternative that keeps full control of data on the local machine.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Belgium's booming influencer economy is colliding with an advertising rulebook that many creators say belongs to another era.
Different obligations across federal, regional and local authorities mean that wording acceptable in one region may trigger a reprimand in another. Some influencers have even faced large fines for administrative breaches such as failing to publish business details on their profiles.
In response, the Influencer Marketing Alliance in Belgium has launched a certification scheme designed to help creators navigate the legal maze instead of risking unintentional violations.
Influencers complete an online course on advertising and consumer law and must pass a final exam before being listed in a public registry monitored by the Jury for Ethical Practices.
Major brands, including L’Oréal and Coca-Cola, already prefer to collaborate with certified creators to ensure compliance and credibility.
Not everyone is convinced.
Some Belgian influencers argue that certification adds more bureaucracy at a time when they already struggle to understand overlapping rules. Others see it as a useful, structured reminder that content creators remain legally responsible for the commercial communications they share with followers.
The alliance is also pushing lawmakers to involve influencers more closely when drafting future rules, including taxation and safeguards for child creators.
Consumer groups such as BEUC support clearer definitions and obligations under the forthcoming EU Digital Fairness Act, arguing that influencer advertising should follow the same standards as other media instead of remaining in a grey zone.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Polish authorities have urged the European Commission to investigate TikTok over AI-generated content advocating Poland’s exit from the European Union. Officials say the videos pose risks to democratic processes and public order.
Deputy Minister for Digitalisation Dariusz Standerski highlighted that the narratives, distribution patterns, and synthetic audiovisual material suggest TikTok may not be fulfilling its obligations under the EU Digital Services Act for Very Large Online Platforms.
The associated TikTok account has since disappeared from the platform.
The Digital Services Act requires platforms to address systemic risks, including disinformation, and allows fines of up to 6% of a company’s global annual turnover for non-compliance. Neither TikTok nor the Commission provided immediate comment.
Authorities emphasised that the investigation could set an important precedent for how EU countries address AI-driven disinformation on major social media platforms.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI dictation has finally reached maturity after years of patchy performance and frustrating inaccuracies.
Advances in speech-to-text engines and large language models now allow modern dictation tools to recognise everyday speech more reliably while keeping enough context to format sentences automatically instead of producing raw transcripts that require heavy editing.
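The two-stage pipeline described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual code: the language-model formatting pass is mocked with simple string rules, and the function name is invented for the example.

```python
# Toy sketch of a modern dictation pipeline: a speech-to-text engine emits a
# raw, unpunctuated transcript, then a formatting pass (a language model in
# real tools, mocked here with simple rules) turns it into clean prose.
def format_transcript(raw: str) -> str:
    """Stand-in for the LLM formatting step: capitalise the sentence start
    and add terminal punctuation to a raw transcript."""
    text = raw.strip()
    if not text:
        return text
    text = text[0].upper() + text[1:]
    if text[-1] not in ".!?":
        text += "."
    return text
```

Real tools go much further, using surrounding context to insert paragraph breaks, fix homophones and apply user-defined style, but the division of labour between recogniser and formatter is the same.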
Several leading apps have emerged with different strengths. Wispr Flow focuses on flexibility with style options and custom vocabulary, while Willow blends automation with privacy by storing transcripts locally.
Monologue also prioritises privacy by allowing users to download the model and run transcription entirely on their own machines. Superwhisper caters for power users by supporting multiple downloadable models and transcription from audio or video files.
Other tools take different approaches. VoiceTypr offers an offline-first design with lifetime licensing, Aqua promotes speed and phrase-based shortcuts, Handy provides a simple free open source starting point, and Typeless gives one of the most generous free allowances while promising strong data protection.
Each reflects a wider trend where developers try to balance convenience, privacy, control and affordability.
Users now benefit from cleaner, more natural-sounding transcripts instead of the rigid audio typing tools of previous years. AI dictation has become faster, more accurate and far more usable for everyday note-taking, messaging and work tasks.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
China has proposed stringent new rules for AI aimed at protecting children and preventing chatbots from providing advice that could lead to self-harm, violence, or gambling.
The draft regulations, published by the Cyberspace Administration of China (CAC), require developers to include personalised settings, time limits, and parental consent for services offering emotional companionship.
High-risk chats involving self-harm or suicide must be passed to a human operator, with guardians or emergency contacts alerted. AI providers must not produce content that threatens national security, harms national honour, or undermines national unity.
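The escalation rule in the draft can be illustrated with a toy router. Everything here is illustrative rather than taken from the CAC text: real systems would use trained classifiers, not a keyword list, and the function names are invented.

```python
# Toy sketch of the escalation rule in China's draft regulations: messages
# flagged as high-risk are routed to a human operator and a guardian or
# emergency contact is alerted. Keyword matching stands in for a classifier.
HIGH_RISK_TERMS = {"self-harm", "suicide"}

def notify_guardian(message: str) -> None:
    # Placeholder: a real service would page an on-call operator and
    # contact the user's registered guardian or emergency contact.
    print("guardian alerted")

def route(message: str) -> str:
    """Return which handler should receive the message."""
    if any(term in message.lower() for term in HIGH_RISK_TERMS):
        notify_guardian(message)
        return "human_operator"
    return "chatbot"
```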
The rules come as AI usage surges, with platforms such as DeepSeek, Z.ai, and Minimax attracting millions of users in China and abroad. The CAC supports safe AI use, including tools for local culture and elderly companionship.
The move reflects growing global concerns over AI’s impact on human behaviour. Notably, OpenAI has faced legal challenges over alleged chatbot-related harm, prompting the company to create roles focused on tracking AI risks to mental health and cybersecurity.
China’s draft rules signal a firm approach to regulating AI technology as its influence expands rapidly.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Many visually impaired gamers find mainstream video games difficult due to limited accessibility features. Support groups enable players to share tips, recommend titles, and connect with others who face similar challenges.
Audio and text‑based mobile games are popular, yet console and PC titles often lack voiceovers or screen reader support. Adjustable visual presets could make mainstream games more accessible for partially sighted players.
UK industry bodies acknowledge progress, but barriers remain for millions of visually impaired players. Communities offer social support and provide feedback to developers to make games more inclusive.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The Association of Chartered Certified Accountants (ACCA) has announced it will largely end remote examinations in the UK from March 2026, requiring students to sit tests in person unless exceptional circumstances apply.
The decision aims to address a surge in cheating, particularly facilitated by AI tools.
Remote testing was introduced during the Covid-19 pandemic to allow students to continue qualifying when in-person exams were impossible. The ACCA said online assessments have now become too difficult to monitor effectively, despite efforts to strengthen safeguards against misconduct.
Investigations show cheating has impacted major auditing firms, including the ‘big four’ and other top companies. High-profile cases, such as EY’s $100m (£74m) settlement in the US, highlight the risks posed by compromised professional examinations.
While other accounting bodies, including the Institute of Chartered Accountants in England and Wales, continue to allow some online exams, the ACCA has indicated that high-stakes assessments must now be conducted in person to maintain credibility and integrity.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Ireland is expected to use its presidency of the Council of the European Union next year to lead a European drive for ID-verified social media accounts.
Tánaiste Simon Harris said the move is intended to limit anonymous abuse, bot activity and coordinated disinformation campaigns that he views as a growing threat to democracy worldwide.
The proposal would require users to verify their identity rather than hide behind anonymous profiles. Harris also backed an Australian-style age verification regime to prevent children from accessing social media, arguing that existing digital consent rules are not being enforced.
Media Minister Patrick O’Donovan is expected to bring forward detailed proposals during the presidency.
The plan is likely to trigger strong resistance from major social media platforms with European headquarters in Ireland, alongside criticism from the US.
However, Harris believes there is growing political backing across Europe, pointing to signals of support from French President Emmanuel Macron and UK Prime Minister Keir Starmer.
Harris said he wanted constructive engagement with technology firms rather than confrontation, while insisting that stronger safeguards are now essential.
He argued that social media companies already possess the technology to verify users and restrict harmful accounts, and that European-level coordination will be required to deliver meaningful change.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
South Korea has blamed weak femtocell security at KT Corp for a major mobile payment breach that triggered thousands of unauthorised transactions.
Officials said the mobile operator used identical authentication certificates across femtocells and allowed them to stay valid for ten years, meaning any device that accessed the network once could do so repeatedly instead of being re-verified.
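The flaw described here, identical certificates shared across femtocells and left valid for a decade, can be illustrated with a minimal admission check. The field names, validity limit and function are hypothetical, not KT's actual systems; the sketch simply shows the two rules whose absence officials criticised: per-device uniqueness and a short validity window.

```python
from datetime import datetime, timedelta

# Hypothetical femtocell admission check. A sound design rejects certificates
# that are expired, reused by a second device, or issued with an excessive
# validity period -- the opposite of identical ten-year certificates.
MAX_VALIDITY = timedelta(days=365)

def admit(device_id: str, cert: dict, seen_certs: dict, now: datetime) -> bool:
    """Return True only for a fresh, device-unique certificate."""
    if now > cert["expires"]:
        return False  # expired: force re-verification
    if cert["expires"] - cert["issued"] > MAX_VALIDITY:
        return False  # validity window too long
    # First device to present this serial becomes its owner; any other
    # device presenting the same serial is rejected as a shared certificate.
    owner = seen_certs.setdefault(cert["serial"], device_id)
    return owner == device_id
```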
More than 22,000 users had identifiers exposed, and 368 people suffered unauthorised payments worth 243 million won.
Investigators also discovered that 94 KT servers were infected with more than 100 types of malware. Authorities concluded the company failed in its duty to deliver secure telecommunications services because its overall management of femtocell security was inadequate.
The government has now ordered KT to submit detailed prevention plans and will check compliance in June, while also urging operators to change authentication server addresses regularly and block illegal network access.
Officials said some hacking methods resembled a separate breach at SK Telecom, although there is no evidence that the same group carried out both attacks. KT said it accepts the findings rather than disputing them, and will soon set out compensation arrangements and further security upgrades.
A separate case involving LG Uplus is being referred to police after investigators said affected servers were discarded, making a full technical review impossible.
The government warned that strong information security must become a survival priority as South Korea aims to position itself among the world’s leading AI nations.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Researchers are warning that AI chatbots may treat dialect speakers unfairly instead of engaging with them neutrally. Studies across English and German dialects found that large language models often attach negative stereotypes or misunderstand everyday expressions, leading to discriminatory replies.
A study in Germany tested ten language models using dialects such as Bavarian and Kölsch. The systems repeatedly described dialect speakers as uneducated or angry, and the bias became stronger when the dialect was explicitly identified.
Similar findings emerged elsewhere, including UK council services and AI shopping assistants that struggled with African American English.
Experts argue that such patterns risk amplifying social inequality as governments and businesses rely more heavily on AI. One Indian job applicant even saw a chatbot change his surname to reflect a higher caste, showing how linguistic bias can intersect with social hierarchy instead of challenging it.
Developers are now exploring customised AI models trained with local language data so systems can respond accurately without reinforcing stereotypes.
Researchers say bias can be tuned out of AI if handled responsibly, which could help protect dialect speakers rather than marginalise them.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!