AI dictation reached maturity during the years after many attempts of patchy performance and frustrating inaccuracies.
Advances in speech-to-text engines and large language models now allow modern dictation tools to recognise everyday speech more reliably while keeping enough context to format sentences automatically instead of producing raw transcripts that require heavy editing.
Several leading apps have emerged with different strengths. Wispr Flow focuses on flexibility with style options and custom vocabulary, while Willow blends automation with privacy by storing transcripts locally.
Monologue also prioritises privacy by allowing users to download the model and run transcription entirely on their own machines. Superwhisper caters for power users by supporting multiple downloadable models and transcription from audio or video files.
Other tools take different approaches. VoiceTypr offers an offline-first design with lifetime licensing, Aqua promotes speed and phrase-based shortcuts, Handy provides a simple free open source starting point, and Typeless gives one of the most generous free allowances while promising strong data protection.
Each reflects a wider trend where developers try to balance convenience, privacy, control and affordability.
Users now benefit from cleaner, more natural-sounding transcripts instead of the rigid audio typing tools of previous years. AI dictation has become faster, more accurate and far more usable for everyday note-taking, messaging and work tasks.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
China has proposed stringent new rules for AI aimed at protecting children and preventing chatbots from providing advice that could lead to self-harm, violence, or gambling.
The draft regulations, published by the Cyberspace Administration of China (CAC), require developers to include personalised settings, time limits, and parental consent for services offering emotional companionship.
High-risk chats involving self-harm or suicide must be passed to a human operator, with guardians or emergency contacts alerted. AI providers must not produce content that threatens national security, harms national honour, or undermines national unity.
The rules come as AI usage surges, with platforms such as DeepSeek, Z.ai, and Minimax attracting millions of users in China and abroad. The CAC supports safe AI use, including tools for local culture and elderly companionship.
The move reflects growing global concerns over AI’s impact on human behaviour. Notably, OpenAI has faced legal challenges over alleged chatbot-related harm, prompting the company to create roles focused on tracking AI risks to mental health and cybersecurity.
China’s draft rules signal a firm approach to regulating AI technology as its influence expands rapidly.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Many visually impaired gamers find mainstream video games difficult due to limited accessibility features. Support groups enable players to share tips, recommend titles, and connect with others who face similar challenges.
Audio and text‑based mobile games are popular, yet console and PC titles often lack voiceovers or screen reader support. Adjustable visual presets could make mainstream games more accessible for partially sighted players.
UK industry bodies acknowledge progress, but barriers remain for millions of visually impaired players. Communities offer social support and provide feedback to developers to improve games and make them inclusive.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The Association of Chartered Certified Accountants (ACCA) has announced it will largely end remote examinations in the UK from March 2026, requiring students to sit tests in person unless exceptional circumstances apply.
The decision aims to address a surge in cheating, particularly facilitated by AI tools.
Remote testing was introduced during the Covid-19 pandemic to allow students to continue qualifying when in-person exams were impossible. The ACCA said online assessments have now become too difficult to monitor effectively, despite efforts to strengthen safeguards against misconduct.
Investigations show cheating has impacted major auditing firms, including the ‘big four’ and other top companies. High-profile cases, such as EY’s $100m (£74m) settlement in the US, highlight the risks posed by compromised professional examinations.
While other accounting bodies, including the Institute of Chartered Accountants in England and Wales, continue to allow some online exams, the ACCA has indicated that high-stakes assessments must now be conducted in person to maintain credibility and integrity.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
US federal agencies planning to deploy agentic AI in 2026 are being told to prioritise data organisation as a prerequisite for effective adoption. AI infrastructure providers say poorly structured data remains a major barrier to turning agentic systems into operational tools.
Public sector executives at Amazon Web Services, Oracle, and Cisco said government clients are shifting focus away from basic chatbot use cases. Instead, agencies are seeking domain-specific AI systems capable of handling defined tasks and delivering measurable outcomes.
US industry leaders said achieving this shift requires modernising legacy infrastructure alongside cleaning, structuring, and contextualising data. Executives stressed that agentic AI depends on high-quality data pipelines that allow systems to act autonomously within defined parameters.
Oracle said its public sector strategy for 2026 centres on enabling context-aware AI through updated data assets. Company executives argued that AI systems are only effective when deeply aligned with an organisation’s underlying data environment.
The companies said early agentic AI use cases include document review, data entry, and network traffic management. Cloud infrastructure was also highlighted as critical for scaling agentic systems and accelerating innovation across government workflows.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A new computational brain model, built entirely from biological principles, has learned a visual categorisation task with accuracy and variability matching that of lab animals. Remarkably, the model achieved these results without being trained on any animal data.
The biomimetic design integrates detailed synaptic rules with large-scale architecture across the cortex, striatum, brainstem, and acetylcholine-modulated systems.
As the model learned, it reproduced neural rhythms observed in real animals, including strengthened beta-band synchrony during correct decisions. The result demonstrates emergent realism in both behaviour and underlying neural activity.
The model also revealed a previously unnoticed set of ‘incongruent neurons’ that predicted errors. When researchers revisited animal data, they found the same signals had gone undetected, highlighting the platform’s potential to uncover hidden neural dynamics.
Beyond neuroscience research, the model offers a powerful tool for testing neurotherapeutic interventions in silico. Simulating disease-related circuits allows scientists to test treatments before costly clinical trials, potentially speeding up the development of next-generation neurotherapeutics.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Meta Platforms has acquired Manus, a Singapore-based developer of general-purpose AI agents, as part of its continued push to expand artificial intelligence capabilities. The deal underscores Meta’s strategy of acquiring specialised AI firms to accelerate product development.
Manus, founded in China before relocating to Singapore, develops AI agents capable of performing tasks such as market research, coding, and data analysis. The company said it reached more than $100 million in annualised revenue within eight months of launch and was serving millions of users worldwide.
Meta said the acquisition will help integrate advanced automation into its consumer and enterprise offerings, including the Meta AI assistant. Manus will continue operating its subscription service, and its employees will join Meta’s teams.
Financial terms were not disclosed, but media reports valued the deal at more than $2 billion. Manus had been seeking funding at a similar valuation before being approached by Meta and had recently raised capital from international investors.
The acquisition follows a series of AI-focused deals by Meta, including investments in Scale AI and AI device start-ups. Analysts say the move highlights intensifying competition among major technology firms to secure AI talent and capabilities.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Apple has filed an appeal of a major UK antitrust ruling that could result in billions of dollars in compensation for App Store users. The move would escalate the case from the Competition Appeal Tribunal to the UK Court of Appeal.
The application follows an October ruling in which the tribunal found Apple had abused its dominant market position by charging excessive App Store fees. The decision set a £1.5 billion ($1.9 billion) compensation figure, which Apple previously signalled it would challenge.
After the tribunal declined to grant permission to appeal, Apple sought to appeal to a higher court. The company has not commented publicly on the latest filing but continues to dispute the tribunal’s assessment of competition in the app economy.
Central to the case is the tribunal’s proposed developer commission rate of 15-20 per cent, lower than Apple’s longstanding 30 per cent fee. The rate was determined using what the court described as informed estimates.
If upheld, the compensation would be distributed among UK App Store users who made purchases between 2015 and 2024. The case is being closely watched as a test of antitrust enforcement against major digital platforms.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Protecting AI agents from manipulation has become a top priority for OpenAI after rolling out a major security upgrade to ChatGPT Atlas.
The browser-based agent now includes stronger safeguards against prompt injection attacks, where hidden instructions inside emails, documents or webpages attempt to redirect the agent’s behaviour instead of following the user’s commands.
Prompt injection poses a unique risk because Atlas can carry out actions that a person would normally perform inside a browser. A malicious email or webpage could attempt to trigger data exposure, unauthorised transactions or file deletion.
Criminals exploit the fact that agents process large volumes of content across an almost unlimited online surface.
OpenAI has developed an automated red-team framework that uses reinforcement learning to simulate sophisticated attackers.
When fresh attack patterns are discovered, the models behind Atlas are retrained so that resistance is built into the agent rather than added afterwards. Monitoring and safety controls are also updated using real attack traces.
These new protections are already live for all Atlas users. OpenAI advises people to limit logged-in access where possible, check confirmation prompts carefully and give agents well-scoped tasks instead of broad instructions.
The company argues that proactive defence is essential as agentic AI becomes more capable and widely deployed.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Researchers are warning that AI chatbots may treat dialect speakers unfairly instead of engaging with them neutrally. Studies across English and German dialects found that large language models often attach negative stereotypes or misunderstand everyday expressions, leading to discriminatory replies.
A study in Germany tested ten language models using dialects such as Bavarian and Kölsch. The systems repeatedly described dialect speakers as uneducated or angry, and the bias became stronger when the dialect was explicitly identified.
Similar findings emerged elsewhere, including UK council services and AI shopping assistants that struggled with African American English.
Experts argue that such patterns risk amplifying social inequality as governments and businesses rely more heavily on AI. One Indian job applicant even saw a chatbot change his surname to reflect a higher caste, showing how linguistic bias can intersect with social hierarchy instead of challenging it.
Developers are now exploring customised AI models trained with local language data so systems can respond accurately without reinforcing stereotypes.
Researchers say bias can be tuned out of AI if handled responsibly, which could help protect dialect speakers rather than marginalise them.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!