The Bank of Russia has reiterated its confidence in the long-term potential of the digital ruble, describing the project as one of the most advanced central bank digital currency initiatives globally.
According to the regulator, preparations for a large-scale rollout remain on track for 2026, with internal estimates suggesting the digital ruble could represent up to 5% of all cashless payments within seven years of launch.
Central bank officials highlighted smart contracts as a primary area of application, alongside budgetary payments and cross-border transaction mechanisms, where efficiency and transparency gains are expected.
The regulator added that global payment trends are being closely monitored. Officials stressed the importance of defining a clear role for each financial instrument rather than introducing technology without a specific economic purpose.
Bank of Russia officials also emphasised ongoing collaboration with market participants to identify new opportunities for the digital ruble and maximise its practical impact.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI dictation has finally reached maturity after years of patchy performance and frustrating inaccuracies.
Advances in speech-to-text engines and large language models now allow modern dictation tools to recognise everyday speech more reliably while keeping enough context to format sentences automatically instead of producing raw transcripts that require heavy editing.
Several leading apps have emerged with different strengths. Wispr Flow focuses on flexibility with style options and custom vocabulary, while Willow blends automation with privacy by storing transcripts locally.
Monologue also prioritises privacy by allowing users to download the model and run transcription entirely on their own machines. Superwhisper caters for power users by supporting multiple downloadable models and transcription from audio or video files.
Other tools take different approaches. VoiceTypr offers an offline-first design with lifetime licensing, Aqua promotes speed and phrase-based shortcuts, Handy provides a simple free open source starting point, and Typeless gives one of the most generous free allowances while promising strong data protection.
Each reflects a wider trend where developers try to balance convenience, privacy, control and affordability.
Users now benefit from cleaner, more natural-sounding transcripts instead of the rigid audio typing tools of previous years. AI dictation has become faster, more accurate and far more usable for everyday note-taking, messaging and work tasks.
AI tools are increasingly used for simple everyday calculations, yet a new benchmark suggests accuracy remains unreliable.
The ORCA study tested five major chatbots across 500 real-world maths prompts and found that users still face roughly a 40 percent chance of receiving the wrong answer.
Gemini from Google recorded the highest score at 63 percent, with xAI’s Grok almost level at 62.8 percent. DeepSeek followed with 52 percent, while ChatGPT scored 49.4 percent, and Claude placed last at 45.2 percent.
Performance varied sharply across subjects: maths and conversion tasks produced the best results, while physics questions dragged average accuracy below 40 percent.
Researchers identified most errors as sloppy calculations or rounding mistakes, rather than deeper failures to understand the problem. Finance and economics questions highlighted the widest gaps between the models, while DeepSeek struggled most in biology and chemistry, with barely one correct answer in ten.
Users are advised to double-check results whenever accuracy is crucial, turning to a calculator or a verified source rather than relying entirely on an AI chatbot for numerical certainty.
China has proposed stringent new rules for AI aimed at protecting children and preventing chatbots from providing advice that could lead to self-harm, violence, or gambling.
The draft regulations, published by the Cyberspace Administration of China (CAC), require developers to include personalised settings, time limits, and parental consent for services offering emotional companionship.
High-risk chats involving self-harm or suicide must be passed to a human operator, with guardians or emergency contacts alerted. AI providers must not produce content that threatens national security, harms national honour, or undermines national unity.
The rules come as AI usage surges, with platforms such as DeepSeek, Z.ai, and Minimax attracting millions of users in China and abroad. The CAC supports safe AI use, including tools for local culture and elderly companionship.
The move reflects growing global concerns over AI’s impact on human behaviour. Notably, OpenAI has faced legal challenges over alleged chatbot-related harm, prompting the company to create roles focused on tracking AI risks to mental health and cybersecurity.
China’s draft rules signal a firm approach to regulating AI technology as its influence expands rapidly.
Korean Air has disclosed a data breach affecting about 30,000 employees, with records stolen from systems operated by a former subsidiary.
The breach occurred at catering supplier KC&D, which was sold off in 2020. The hackers, who had previously attacked the Washington Post, accessed employee names and bank account details, while customer data remained unaffected.
Investigators linked the incident to exploits in Oracle E-Business Suite, where cybercriminals abused zero-day flaws during a wider global hacking campaign.
The attack against Korean Air has been claimed by the Cl0p ransomware group. Aviation firms worldwide have reported similar breaches connected to the same campaign.
Many visually impaired gamers find mainstream video games difficult due to limited accessibility features. Support groups enable players to share tips, recommend titles, and connect with others who face similar challenges.
Audio and text-based mobile games are popular, yet console and PC titles often lack voiceovers or screen reader support. Adjustable visual presets could make mainstream games more accessible for partially sighted players.
UK industry bodies acknowledge progress, but barriers remain for millions of visually impaired players. Communities offer social support and provide feedback to developers to improve games and make them inclusive.
A Moscow court has dismissed a class action lawsuit filed against Russia’s state media regulator Roskomnadzor and the Ministry of Digital Development by users of WhatsApp and Telegram. The ruling was issued by a judge at the Tagansky District Court.
The court said activist Konstantin Larionov failed to demonstrate he was authorised to represent messaging app users. The lawsuit claimed call restrictions violated constitutional rights, including freedom of information and communication secrecy.
The case followed Roskomnadzor’s decision in August to block calls on WhatsApp and Telegram, a move officials described as part of anti-fraud efforts. Both companies criticised the restrictions at the time.
Larionov and several dozen co-plaintiffs said the measures were ineffective, citing central bank data showing fraud mainly occurs through traditional calls and text messages. The plaintiffs also argued the restrictions disproportionately affected ordinary users.
Larionov said the group plans to appeal the decision and continue legal action. He has described the lawsuit as an attempt to challenge what he views as politically motivated restrictions on communication services in Russia.
Security researchers warn hackers are exploiting a new feature in Microsoft Copilot Studio. The issue affects recently launched Connected Agents functionality.
Connected Agents allows AI systems to interact and share tools across environments. Researchers say default settings can expose sensitive capabilities without clear monitoring.
Zenity Labs reported attackers linking rogue agents to trusted systems. Exploits included unauthorised email sending and data access.
Experts urge organisations to disable Connected Agents for critical workloads. Stronger authentication and restricted access are advised until safeguards improve.
The Association of Chartered Certified Accountants (ACCA) has announced it will largely end remote examinations in the UK from March 2026, requiring students to sit tests in person unless exceptional circumstances apply.
The decision aims to address a surge in cheating, particularly facilitated by AI tools.
Remote testing was introduced during the Covid-19 pandemic to allow students to continue qualifying when in-person exams were impossible. The ACCA said online assessments have now become too difficult to monitor effectively, despite efforts to strengthen safeguards against misconduct.
Investigations show cheating has impacted major auditing firms, including the ‘big four’ and other top companies. High-profile cases, such as EY’s $100m (£74m) settlement in the US, highlight the risks posed by compromised professional examinations.
While other accounting bodies, including the Institute of Chartered Accountants in England and Wales, continue to allow some online exams, the ACCA has indicated that high-stakes assessments must now be conducted in person to maintain credibility and integrity.
US federal agencies planning to deploy agentic AI in 2026 are being told to prioritise data organisation as a prerequisite for effective adoption. AI infrastructure providers say poorly structured data remains a major barrier to turning agentic systems into operational tools.
Public sector executives at Amazon Web Services, Oracle, and Cisco said government clients are shifting focus away from basic chatbot use cases. Instead, agencies are seeking domain-specific AI systems capable of handling defined tasks and delivering measurable outcomes.
US industry leaders said achieving this shift requires modernising legacy infrastructure alongside cleaning, structuring, and contextualising data. Executives stressed that agentic AI depends on high-quality data pipelines that allow systems to act autonomously within defined parameters.
Oracle said its public sector strategy for 2026 centres on enabling context-aware AI through updated data assets. Company executives argued that AI systems are only effective when deeply aligned with an organisation’s underlying data environment.
The companies said early agentic AI use cases include document review, data entry, and network traffic management. Cloud infrastructure was also highlighted as critical for scaling agentic systems and accelerating innovation across government workflows.