A Moscow court has dismissed a class action lawsuit filed against Russia’s state media regulator Roskomnadzor and the Ministry of Digital Development by users of WhatsApp and Telegram. The ruling was issued by a judge at the Tagansky District Court.
The court said activist Konstantin Larionov failed to demonstrate he was authorised to represent messaging app users. The lawsuit claimed call restrictions violated constitutional rights, including freedom of information and communication secrecy.
The case followed Roskomnadzor’s decision in August to block calls on WhatsApp and Telegram, a move officials described as part of anti-fraud efforts. Both companies criticised the restrictions at the time.
Larionov and several dozen co-plaintiffs said the measures were ineffective, citing central bank data showing fraud mainly occurs through traditional calls and text messages. The plaintiffs also argued the restrictions disproportionately affected ordinary users.
Larionov said the group plans to appeal the decision and continue legal action. He has described the lawsuit as an attempt to challenge what he views as politically motivated restrictions on communication services in Russia.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
China is proposing new rules requiring users to consent before AI companies can use chat logs for training. The draft measures aim to balance innovation with safety and public interest.
Platforms would need to inform users when interacting with AI and provide options to access or delete their chat history. For minors, guardian consent is required before sharing or storing any data.
Analysts say the rules may slow improvements to AI chatbots but will offer guidance on responsible development. The measures signal that some user conversations are too sensitive to be used freely as training data.
The draft rules are open for public consultation with feedback due in late January. China encourages expanding human-like AI applications once safety and reliability are demonstrated.
Security researchers warn hackers are exploiting a new feature in Microsoft Copilot Studio. The issue affects recently launched Connected Agents functionality.
Connected Agents allows AI systems to interact and share tools across environments. Researchers say default settings can expose sensitive capabilities without clear monitoring.
Zenity Labs reported attackers linking rogue agents to trusted systems. Exploits included unauthorised email sending and data access.
Experts urge organisations to disable Connected Agents for critical workloads. Stronger authentication and restricted access are advised until safeguards improve.
The Association of Chartered Certified Accountants (ACCA) has announced it will largely end remote examinations in the UK from March 2026, requiring students to sit tests in person unless exceptional circumstances apply.
The decision aims to address a surge in cheating, particularly facilitated by AI tools.
Remote testing was introduced during the Covid-19 pandemic to allow students to continue qualifying when in-person exams were impossible. The ACCA said online assessments have now become too difficult to monitor effectively, despite efforts to strengthen safeguards against misconduct.
Investigations show cheating has impacted major auditing firms, including the ‘big four’ and other top companies. High-profile cases, such as EY’s $100m (£74m) settlement in the US, highlight the risks posed by compromised professional examinations.
While other accounting bodies, including the Institute of Chartered Accountants in England and Wales, continue to allow some online exams, the ACCA has indicated that high-stakes assessments must now be conducted in person to maintain credibility and integrity.
US federal agencies planning to deploy agentic AI in 2026 are being told to prioritise data organisation as a prerequisite for effective adoption. AI infrastructure providers say poorly structured data remains a major barrier to turning agentic systems into operational tools.
Public sector executives at Amazon Web Services, Oracle, and Cisco said government clients are shifting focus away from basic chatbot use cases. Instead, agencies are seeking domain-specific AI systems capable of handling defined tasks and delivering measurable outcomes.
US industry leaders said achieving this shift requires modernising legacy infrastructure alongside cleaning, structuring, and contextualising data. Executives stressed that agentic AI depends on high-quality data pipelines that allow systems to act autonomously within defined parameters.
Oracle said its public sector strategy for 2026 centres on enabling context-aware AI through updated data assets. Company executives argued that AI systems are only effective when deeply aligned with an organisation’s underlying data environment.
The companies said early agentic AI use cases include document review, data entry, and network traffic management. Cloud infrastructure was also highlighted as critical for scaling agentic systems and accelerating innovation across government workflows.
A new computational brain model, built entirely from biological principles, has learned a visual categorisation task with accuracy and variability matching that of lab animals. Remarkably, the model achieved these results without being trained on any animal data.
The biomimetic design integrates detailed synaptic rules with large-scale architecture across the cortex, striatum, brainstem, and acetylcholine-modulated systems.
As the model learned, it reproduced neural rhythms observed in real animals, including strengthened beta-band synchrony during correct decisions. The result demonstrates emergent realism in both behaviour and underlying neural activity.
The model also revealed a previously unnoticed set of ‘incongruent neurons’ that predicted errors. When researchers revisited animal data, they found the same signals had gone undetected, highlighting the platform’s potential to uncover hidden neural dynamics.
Beyond neuroscience research, the model offers a powerful tool for testing neurotherapeutic interventions in silico. Simulating disease-related circuits allows scientists to test treatments before costly clinical trials, potentially speeding up the development of next-generation neurotherapeutics.
Meta Platforms has acquired Manus, a Singapore-based developer of general-purpose AI agents, as part of its continued push to expand artificial intelligence capabilities. The deal underscores Meta’s strategy of acquiring specialised AI firms to accelerate product development.
Manus, founded in China before relocating to Singapore, develops AI agents capable of performing tasks such as market research, coding, and data analysis. The company said it reached more than $100 million in annualised revenue within eight months of launch and was serving millions of users worldwide.
Meta said the acquisition will help integrate advanced automation into its consumer and enterprise offerings, including the Meta AI assistant. Manus will continue operating its subscription service, and its employees will join Meta’s teams.
Financial terms were not disclosed, but media reports valued the deal at more than $2 billion. Manus had been seeking funding at a similar valuation before being approached by Meta and had recently raised capital from international investors.
The acquisition follows a series of AI-focused deals by Meta, including investments in Scale AI and AI device start-ups. Analysts say the move highlights intensifying competition among major technology firms to secure AI talent and capabilities.
Ireland is expected to use its presidency of the Council of the European Union next year to lead a European drive for ID-verified social media accounts.
Tánaiste Simon Harris said the move is intended to limit anonymous abuse, bot activity and coordinated disinformation campaigns that he views as a growing threat to democracy worldwide.
The proposal would require users to verify their identity rather than hide behind anonymous profiles. Harris also backed an Australian-style age verification regime to prevent children from accessing social media, arguing that existing digital consent rules are not being enforced.
Media Minister Patrick O’Donovan is expected to bring forward detailed proposals during the presidency.
The plan is likely to trigger strong resistance from major social media platforms with European headquarters in Ireland, alongside criticism from the US.
However, Harris believes there is growing political backing across Europe, pointing to signals of support from French President Emmanuel Macron and UK Prime Minister Keir Starmer.
Harris said he wanted constructive engagement with technology firms rather than confrontation, while insisting that stronger safeguards are now essential.
He argued that social media companies already possess the technology to verify users and restrict harmful accounts, and that European-level coordination will be required to deliver meaningful change.
Apple has filed an appeal of a major UK antitrust ruling that could result in billions of dollars in compensation for App Store users. The move would escalate the case from the Competition Appeal Tribunal to the UK Court of Appeal.
The application follows an October ruling in which the tribunal found Apple had abused its dominant market position by charging excessive App Store fees. The decision set a £1.5 billion ($1.9 billion) compensation figure, which Apple previously signalled it would challenge.
After the tribunal declined to grant permission to appeal, Apple sought to appeal to a higher court. The company has not commented publicly on the latest filing but continues to dispute the tribunal’s assessment of competition in the app economy.
Central to the case is the tribunal’s proposed developer commission rate of 15-20 per cent, lower than Apple’s longstanding 30 per cent fee. The rate was determined using what the court described as informed estimates.
If upheld, the compensation would be distributed among UK App Store users who made purchases between 2015 and 2024. The case is being closely watched as a test of antitrust enforcement against major digital platforms.
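The gap between the tribunal's proposed commission range and Apple's longstanding fee can be illustrated with a simple calculation. The purchase price below is purely hypothetical; only the commission rates come from the case.

```python
# Illustrative only: the extra commission implied by Apple's 30% fee
# versus the tribunal's proposed 15-20% range, on a hypothetical purchase.
APPLE_RATE = 0.30
PROPOSED_RATES = (0.15, 0.20)

def overcharge(purchase_price: float, proposed_rate: float) -> float:
    """Commission difference on a single purchase at the given proposed rate."""
    return purchase_price * (APPLE_RATE - proposed_rate)

# Hypothetical £10 in-app purchase:
for rate in PROPOSED_RATES:
    print(f"At a {rate:.0%} commission: £{overcharge(10.0, rate):.2f} more charged")
```

On these figures, each purchase would carry roughly 10 to 15 pence of excess commission per pound spent, which across nine years of UK App Store transactions is how a total in the billions is reached.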
A hacker using the name Lovely claims to have accessed subscriber data belonging to WIRED and to have leaked details relating to around 2.3 million users.
The same individual also says that a wider Condé Nast account system covering more than 40 million users could be exposed in future leaks, rather than the activity ending with the current dataset.
Security researchers are reported to have matched samples of the claimed leak against other compromised data sources. The information is said to include names, email addresses, user IDs and timestamps, but not passwords or payment information.
Some researchers also believe that certain home addresses could be included, which would raise privacy concerns if verified.
The dataset is reported to be listed on Have I Been Pwned. However, neither WIRED nor Condé Nast has officially confirmed the authenticity, scale or origin of the claimed breach, and the company’s internal findings remain unknown.
The hacker has also accused Condé Nast of failing to respond to earlier security warnings, although these claims have not been independently verified.
Security professionals are urging users to treat unexpected emails with caution rather than assume every message is genuine.