Info-stealing malware spreads from Windows to macOS

Microsoft has warned that info-stealing malware is increasingly targeting macOS alongside Windows, using cross-platform tools and social engineering. The company said the trend accelerated from late 2025.

Attackers are luring macOS users to fake websites and malicious installers, often promoted through online ads. Microsoft said these campaigns steal credentials, crypto wallets and browser sessions on macOS and Windows.

Python-based malware is also playing a larger role, enabling attackers to target macOS and Windows with the same code. Microsoft reported growing abuse of trusted platforms such as WhatsApp to spread infostealers.

Microsoft urged organisations and individuals to strengthen layered cybersecurity on macOS and Windows. The company said better user awareness and monitoring could reduce the risk of data theft and account compromise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tinder tests AI Chemistry feature to cut swipe fatigue and revive engagement

The dating platform is expanding its reliance on AI, with Tinder experimenting with a feature designed to ease swipe fatigue among users.

The tool, known as Chemistry, builds a picture of each person through optional questions and, with permission, by reviewing their Camera Roll, offering a more personalised route toward potential matches instead of repetitive browsing.

Match is currently testing the feature only in Australia. Executives say the system allows people to receive a small set of tailored profiles rather than navigating large volumes of candidates.

Tinder hopes the approach will strengthen engagement during a period when registrations and monthly activity remain lower than last year, despite minor improvements driven by AI-based recommendations.

Developers are also refocusing the broader discovery experience to reflect concerns raised by Gen Z around authenticity, trust and relevance.

The platform now relies on verification tools such as Face Check, which Match says has more than halved harmful interactions and helps keep impersonators off the service.

These moves indicate a shift away from the swipe mechanic that once defined the app, offering more direct suggestions that may improve outcomes.

Marketing investment is set to rise as part of the strategy. Match plans to allocate $50 million to new campaigns that will position Tinder as appealing again, using creators on TikTok and Instagram to reframe the brand.

Strong quarterly revenue failed to offset weaker guidance, yet the company argues that AI features will help shape a more reliable and engaging service for users seeking consistent matches.


Google issues warning on malware affecting over 40% of Android devices

US tech giant Google has alerted users that more than 40% of Android phones are vulnerable to new malware and spyware due to outdated software. Phones running versions older than Android 13 no longer receive security updates, leaving over a billion users worldwide at risk.

Data shows Android 16 is present on only 7.5% of devices, while versions 15, 14, and 13 still dominate the market.

Slow adoption of updates means many devices remain exposed, even when security patches are available. Google emphasised that outdated phones are particularly unsafe and cannot protect against emerging threats.

Users are advised to upgrade to Android 13 or newer, or to buy a mid-range device that receives regular updates, rather than keep an old high-end phone without support. Unlike iPhones, most of which receive timely updates from Apple, older Android devices may never get the necessary security fixes.

The warning highlights the urgent need for users to act immediately to avoid potential data breaches and spyware attacks. Google’s message is clear: using unsupported Android devices is a growing global security concern.


EU tests Matrix protocol as sovereign alternative for internal communication

The European Commission is testing a European open source system for its internal communications as worries grow in Brussels over deep dependence on US software.

A spokesperson said the administration is preparing a solution built on the Matrix protocol instead of relying solely on Microsoft Teams.

Matrix is already used by several European institutions, including the French government, German healthcare bodies and armed forces across the continent.

The Commission aims to deploy it as a complement and backup to Teams rather than a full replacement. Officials noted that Signal currently fills that role but lacks the flexibility needed for an organisation of the Commission’s size.
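For readers unfamiliar with Matrix, the protocol federates conversations as JSON events that homeservers replicate between one another, which is what allows separate institutions to keep their own servers yet share rooms. A minimal sketch of a text-message event, with field names taken from the public Matrix specification (the helper function itself is hypothetical):

```python
def make_text_message(body: str) -> dict:
    """Build an m.room.message text event, per the Matrix spec.

    Homeservers exchange events of this shape over federation, so each
    participating institution can run its own server while sharing rooms.
    """
    return {
        "type": "m.room.message",
        "content": {
            "msgtype": "m.text",   # plain-text message subtype
            "body": body,          # the human-readable text
        },
    }

event = make_text_message("Meeting moved to 15:00")
print(event["type"])  # m.room.message
```

Because the event format is an open standard, any compliant client or server can interoperate, which is the property the Commission is relying on to avoid lock-in to a single vendor.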

The initiative forms part of a wider push for digital sovereignty within the EU. A Matrix-based tool could eventually link the Commission with other Union bodies that currently lack a unified secure communication platform.

Officials said there is already an operational connection with the European Parliament.

The trial reflects growing sensitivity about Europe’s strategic dependence on non-European digital services.

By developing home-grown communication infrastructure instead of leaning on a single foreign supplier, the Commission hopes to build a more resilient and sovereign technological foundation.


India pushes Meta to justify WhatsApp’s data-sharing

The Supreme Court of India has delivered a forceful warning to Meta after judges said the company could not play with the right to privacy.

The court questioned how WhatsApp monetises personal data in a country where the app has become the de facto communications tool for hundreds of millions of people. Judges added that meaningful consent is difficult when users have little practical choice.

Meta was told not to share any user information while the appeal over WhatsApp’s 2021 privacy policy continues. Judges pressed the company to explain the value of behavioural data instead of relying solely on claims about encrypted messages.

Government lawyers argued that personal data was collected and commercially exploited in ways most users would struggle to understand.

The case stems from a major update to WhatsApp’s data-sharing rules that India’s competition regulator said abused the platform’s dominant position.

A significant penalty was issued before Meta and WhatsApp challenged the ruling at the Supreme Court. The court has now widened the proceedings by adding the IT ministry and has asked Meta to provide detailed answers before the next hearing on 9 February.

WhatsApp is also under heightened scrutiny worldwide as regulators examine how encrypted platforms analyse metadata and other signals.

In India, broader regulatory changes, such as new SIM-binding rules, could restrict how small businesses use the service rather than broaden its commercial reach.


France targets X over algorithm abuse allegations

The cybercrime unit of the Paris prosecutor has raided the French office of X as part of an expanding investigation into alleged algorithm manipulation and illicit data extraction.

Authorities said the probe began in 2025 after a lawmaker warned that biased algorithms on the platform might have interfered with automated data systems. Europol supported the operation together with national cybercrime officers.

Prosecutors confirmed that the investigation now includes allegations of complicity in circulating child sex abuse material, sexually explicit deepfakes and denial of crimes against humanity.

Elon Musk and former chief executive Linda Yaccarino have been summoned for questioning in April in their roles as senior figures of the company at the time.

The prosecutor’s office also announced its departure from X in favour of LinkedIn and Instagram, rather than continuing to use the platform under scrutiny.

X strongly rejected the accusations and described the raid as politically motivated. Musk claimed authorities should focus on pursuing sex offenders instead of targeting the company.

The platform’s government affairs team said the investigation amounted to law enforcement theatre rather than a legitimate examination of serious offences.

Regulatory pressure increased further as the UK data watchdog opened inquiries into both X and xAI over concerns about Grok producing sexualised deepfakes. Ofcom is already conducting a separate investigation that is expected to take months.

The widening scrutiny reflects growing unease around alleged harmful content, political interference and the broader risks linked to large-scale AI systems.


Major Chinese data leak exposes billions of records

Cybersecurity researchers uncovered an unsecured database exposing 8.7 billion records linked to individuals and businesses in China. The data was found in early January 2026 and remained accessible online for more than three weeks.

The China-focused dataset included national ID numbers, home addresses, email accounts, social media identifiers and passwords. Researchers warned that exposure on this scale creates serious risks of identity theft and account takeovers.

The records were stored in a large Elasticsearch cluster hosted on so-called bulletproof infrastructure. Analysts believe the structure suggests deliberate aggregation rather than an accidental misconfiguration.
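An unsecured Elasticsearch cluster leaks data because its REST API answers plain HTTP requests: if no authentication layer is configured, anyone who finds the port can enumerate indices and pull documents. The sketch below lists the standard read-only endpoints a scanner would typically probe first (the host is an RFC 5737 documentation address, not the cluster from the report, and the helper function is illustrative):

```python
BASE = "http://203.0.113.10:9200"  # placeholder documentation address

def discovery_urls(base: str) -> list[str]:
    """Standard read-only Elasticsearch endpoints a scanner probes first."""
    return [
        f"{base}/",                  # cluster name and version banner
        f"{base}/_cat/indices?v",    # list every index and its size
        f"{base}/_search?size=10",   # sample documents across indices
    ]

for url in discovery_urls(BASE):
    print(url)
```

A cluster that answers these requests without credentials exposes its entire contents, which is why researchers can discover and measure leaks like this one from the outside.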

Although the database is now closed, experts say threat actors may already have copied the data. China has experienced several major leaks in recent years, highlighting persistent weaknesses in large-scale data handling.


Australia steps up platform scrutiny after mass Snapchat removals

Snapchat has blocked more than 415,000 Australian accounts after the national ban on under-16s began, marking a rapid escalation in the country’s effort to restrict children’s access to major platforms.

The company relied on a mix of self-reported ages and age-detection technologies to identify users who appeared to be under 16.

The platform warned that age verification still faces serious shortcomings, leaving room for teenagers to bypass safeguards rather than ensuring reliable compliance.

Facial estimation tools remain accurate only within a narrow range, meaning some young people may slip through while older users risk losing access. Snapchat also noted the likelihood that teenagers will shift towards less regulated messaging apps.

The eSafety commissioner has focused regulatory pressure on the 10 largest platforms, although all services with Australian users are expected to assess whether they fall under the new requirements.

Officials have acknowledged that the technology needs improvement and that reliability issues, such as the absence of a liveness check, contributed to false results.

More than 4.7 million accounts have been deactivated across the major platforms since the ban began, although the figure includes inactive and duplicate accounts.

Authorities in Australia expect further enforcement, with notices set to be issued to companies that fail to meet the new standards.


EU plans a secure military data space by 2030

Institutions in the EU have begun designing a new framework to help European armies share defence information securely, rather than relying on US technology.

The plan centres on creating a military-grade data platform, the European Defence Artificial Intelligence Data Space, intended to support sensitive exchanges among defence authorities.

Ultimately, the approach aims to replace the current patchwork of foreign infrastructure that many member states rely on to store and transfer national security data.

The European Defence Agency is leading the effort and expects the platform to be fully operational by 2030. The concept includes two complementary elements: a sovereign military cloud for data storage and a federated system that allows countries to exchange information on a trusted basis.

Officials argue that this will improve interoperability, speed up joint decision-making, and enhance operational readiness across the bloc.

The project aligns with broader concerns about strategic autonomy, as EU leaders increasingly question long-standing dependencies on American providers.

Several European companies have been contracted to develop the early technical foundations. The next step is persuading governments to coordinate future purchases so their systems remain compatible with the emerging framework.

Planning documents suggest that by 2029, member states should begin integrating the data space into routine military operations, including training missions and coordinated exercises. EU authorities maintain that stronger control of defence data will be essential as military AI expands across European forces.


Moltbook AI vulnerability exposes user data and API keys

A critical security flaw has emerged in Moltbook, a new AI agent social network launched by Octane AI.

The vulnerability allowed unauthenticated access to user profiles, exposing email addresses, login tokens and API keys for registered agents. The platform's claimed growth to 1.5 million users was also largely artificial, as a single agent reportedly created hundreds of thousands of fake accounts.

Moltbook enables AI agents to post, comment, and form sub-communities, fostering interactions that range from AI debates to token-related activities.

Analysts warned that prompt injections and unregulated agent interactions could lead to credential theft or destructive actions, including data exfiltration or account hijacking. Experts described the platform as both a milestone in scale and a serious security concern.

Developers have not confirmed any patches, leaving users and enterprises exposed. Security specialists advised revoking API keys, sandboxing AI agents, and auditing potential exposures.
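The auditing step the specialists recommend usually starts with scanning logs, configuration files and data dumps for strings shaped like API keys, so that anything found can be revoked. The sketch below uses a generic pattern-matching heuristic; the `sk_` and `ak_` prefixes are illustrative conventions, not Moltbook's actual key format:

```python
import re

# Illustrative key shapes: a short prefix followed by a long random
# token. Real services each have their own format; adjust the pattern
# to match the keys your organisation actually issues.
KEY_PATTERN = re.compile(r"\b(?:sk|ak)_[A-Za-z0-9]{24,}\b")

def find_candidate_keys(text: str) -> list[str]:
    """Return substrings that look like leaked API keys."""
    return KEY_PATTERN.findall(text)

sample = "token=sk_" + "a" * 30 + " user=alice"
print(find_candidate_keys(sample))
```

A scan like this only surfaces candidates for revocation; the safer default after an incident of this kind is to rotate every key that was ever sent to the affected platform, whether or not it appears in a scan.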

The lack of safeguards on the platform highlights the risks of unchecked AI agent networks, particularly for organisations that may rely on them without proper oversight.

The incident underscores the growing need for stronger governance in AI-powered social networks. Experts stress that without enforced security protocols, such platforms could be exploited at scale, affecting both individual users and corporate systems.

The Moltbook case serves as a warning about prioritising hype over security in emerging AI applications.
