BeatBanker malware targets Android users in Brazil

A new Android malware called BeatBanker is targeting users in Brazil through fake Starlink and government apps. The malware hijacks devices, steals banking credentials, tampers with cryptocurrency transactions, and secretly mines Monero.

Infection begins on phishing websites mimicking the Google Play Store or the ‘INSS Reembolso’ app. Users are tricked into installing trojanised APKs, which evade detection through memory-based decryption and by blocking analysis environments.

Fake update screens maintain persistence while silently downloading additional malicious payloads.

BeatBanker initially combined a banking trojan with a cryptocurrency miner. It uses accessibility permissions to monitor browsers and crypto apps, overlaying fake screens to redirect Tether and other crypto transfers.

A foreground service plays silent audio loops to prevent the device from shutting down, while Firebase Cloud Messaging enables remote control of infected devices.

The latest variant replaces the banking module with the BTMOB RAT, providing full control over devices. Capabilities include automatic permissions, background persistence, keylogging, GPS tracking, camera access, and screen-lock credential capture.

Kaspersky warns that BeatBanker demonstrates the growing sophistication of mobile threats and multi-layered malware campaigns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Biased AI suggestions shift societal attitudes

AI-powered writing tools may do more than speed up typing; they can influence the way people think. A Cornell study found that biased autocomplete suggestions can subtly shift users’ opinions on issues like the death penalty, fracking, GMOs, and voting rights.

Experiments with over 2,500 participants revealed that users’ views gravitated toward the AI’s predetermined bias. Attempts to warn participants about the AI’s bias, either before or after writing, did not prevent the shifts.

Researchers noted that the effect occurs because users effectively write biased viewpoints themselves, a process psychology research shows can alter personal attitudes.

The influence was consistent across political topics and participants of all leanings. Compared with simply providing pre-written arguments, biased AI suggestions had a stronger effect on shaping opinions.

Researchers warn that as autocomplete and generative AI tools become increasingly prevalent, covert persuasion through AI may pose serious societal risks.

The study, led by Sterling Williams-Ceci and Mor Naaman of Cornell Tech, highlights the potential for AI to shape beliefs without users noticing. Findings highlight the need for oversight as AI writing assistants enter everyday communication.

AI browsers expose new cybersecurity attack surfaces

Security researchers have demonstrated that agentic browsers, powered by AI, may introduce new cybersecurity vulnerabilities.

Experiments targeting the Comet AI browser, developed by Perplexity AI, showed that attackers could manipulate the system into executing phishing scams in only a few minutes.

The attack exploits the reasoning process used by AI agents when interacting with websites. These systems continuously explain their actions and observations, revealing internal signals that attackers can analyse to refine malicious strategies and bypass built-in safeguards.

Researchers showed that phishing pages can be iteratively trained using adversarial machine learning methods, such as Generative Adversarial Networks.

By observing how the AI browser responds to suspicious signals, attackers can optimise fraudulent pages until the system accepts them as legitimate.
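The iterative refinement loop described above can be illustrated with a toy sketch. This is hypothetical code, not the researchers' actual tooling: a naive keyword-based "detector" stands in for the AI browser's safeguards, and the attacker loop rewrites flagged phrases until the page scores below the acceptance threshold.

```python
# Toy illustration of iterative adversarial refinement against a content
# classifier. Hypothetical example; real attacks target far more complex
# AI agents than this keyword detector.

# A naive "detector" that flags pages containing suspicious phrases.
SUSPICIOUS = {
    "verify your password": 0.6,
    "urgent action required": 0.5,
    "click here immediately": 0.4,
}

def detector_score(page: str) -> float:
    """Return a suspicion score in [0, 1] based on flagged phrases."""
    text = page.lower()
    return min(1.0, sum(w for phrase, w in SUSPICIOUS.items() if phrase in text))

# Benign-sounding paraphrases the attacker substitutes for flagged phrases.
REWRITES = {
    "verify your password": "confirm your account details",
    "urgent action required": "a quick update is needed",
    "click here immediately": "follow this link when convenient",
}

def refine_page(page: str, threshold: float = 0.3, max_rounds: int = 10) -> str:
    """Iteratively rewrite the page until the detector accepts it."""
    for _ in range(max_rounds):
        if detector_score(page) < threshold:
            break  # detector now treats the page as legitimate
        for phrase, replacement in REWRITES.items():
            if phrase in page.lower():
                # Replace the first flagged phrase found, then re-score.
                idx = page.lower().index(phrase)
                page = page[:idx] + replacement + page[idx + len(phrase):]
                break
    return page

original = "Urgent action required: verify your password now."
refined = refine_page(original)
print(detector_score(original), detector_score(refined))
```

The loop mirrors the feedback dynamic the researchers describe: each response from the safeguard (here, a score) tells the attacker which signals to remove next.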

The findings highlight a shift in the cybersecurity threat landscape. Instead of deceiving human users directly, attackers increasingly focus on manipulating the AI agents that perform online actions on behalf of users.

Security experts warn that prompt injection vulnerabilities remain a fundamental challenge for large language models and agentic systems.

Although new defensive techniques are being developed, researchers believe such weaknesses may remain difficult to eliminate.

EU platform law expands data access rights

European regulators are examining how the Digital Markets Act interacts with the General Data Protection Regulation across major digital platforms. The EU rules apply to designated gatekeepers that operate core platform services used by millions of users.

Policy specialists in the EU say the Digital Markets Act complements GDPR protections by strengthening user control over personal data. The framework also supports rights related to data access, portability and transparency for both consumers and business users.

The regulatory overlap affects areas including consent requirements, third-party software installation and interoperability between services. Authorities are also coordinating enforcement between competition and data protection regulators.

Analysts say the combined application of both laws could reshape the responsibilities of major technology platforms. Policymakers aim to increase user choice while reinforcing safeguards for the integrity and confidentiality of personal data under the GDPR.

MIT researchers outline future of AI and physical sciences

AI and the mathematical and physical sciences are entering a new phase of collaboration that could accelerate technological progress and scientific discovery. Researchers increasingly see the relationship as a two-way exchange rather than a one-sided use of AI tools.

A 2025 MIT workshop brought together experts from astronomy, chemistry, materials science, mathematics and physics to examine the future of this collaboration.

Discussions resulted in a white paper published in Machine Learning: Science and Technology, outlining strategies for research institutions and funding bodies.

Participants agreed that stronger computing infrastructure, shared data resources and cross-disciplinary research methods are essential for progress. Scientists also improve AI by analysing neural networks, identifying principles and developing new algorithms.

Researchers highlighted the growing importance of so-called ‘centaur scientists’: specialists trained in both AI and traditional scientific disciplines. Universities, including MIT, are expanding interdisciplinary programmes and research initiatives to train experts who can work across AI and scientific fields.

Leading tech companies deepen AI competition with new capabilities

Competition among leading AI developers intensified in early 2026 as major companies expanded their models, platforms, and partnerships. Companies including Google, OpenAI, Anthropic, and xAI are introducing new capabilities and integrating AI systems into broader ecosystems.

Google has continued to expand its Gemini model family with updates to Gemini 3.1 Pro and 3.1 Flash, designed to support complex tasks across applications. The company is also integrating Gemini into services such as Docs, Sheets, Slides, and Drive, allowing users to generate documents and analyse data across multiple Google services.

Gemini has also been embedded into the Chrome browser and integrated with Samsung’s Galaxy devices, expanding its distribution across consumer platforms as AI competition among major developers accelerates.

Anthropic has focused on advancing the Claude model family while positioning the system for enterprise and professional use. Recent updates include Claude Sonnet 4.6, which introduces improvements in reasoning and coding capabilities alongside an expanded context window currently in beta. The company has also launched a limited preview of the Claude Marketplace, allowing organisations to use third-party tools built on Claude through partnerships with several software companies.

OpenAI has continued to update ChatGPT with the release of the GPT-5 series, including GPT-5.2 and GPT-5.4. The newer models combine reasoning, coding, and agent-based workflows, while also introducing computer-use capabilities that allow the system to interact with applications directly.

OpenAI has also introduced additional services, including ChatGPT Health and integrations designed to assist with spreadsheet modelling and data analysis, further intensifying AI competition across enterprise and consumer tools.

Meanwhile, xAI has expanded development of its Grok models while increasing computing infrastructure. The company has reported growth in Grok usage through integration with the X platform and other applications. Recent announcements include upgrades to Grok’s voice and multimodal capabilities, as well as continued training of future models.

Across the industry, developers are increasingly positioning their systems not only as conversational assistants but also as tools integrated into enterprise workflows, creative production, and software development. New releases in 2026 reflect a broader shift toward multimodal systems, agent-based capabilities, and deeper integration with existing digital platforms, highlighting how AI competition is shaping the next phase of AI development.

ChatGPT dynamic visual explanations introduce interactive learning tools

OpenAI has introduced a new ChatGPT feature called dynamic visual explanations, allowing users to interact with mathematical and scientific concepts through real-time visuals.

Instead of relying solely on text explanations or static diagrams, the feature enables users to manipulate formulas and variables and immediately see how those changes affect results. For example, when exploring the Pythagorean theorem, users can adjust the triangle’s sides and see the hypotenuse update instantly.
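The Pythagorean example reduces to recomputing the hypotenuse whenever either side changes. The following is a minimal sketch of that behaviour (an illustration only; OpenAI has not published the feature's implementation):

```python
import math

def hypotenuse(a: float, b: float) -> float:
    """Pythagorean theorem: c = sqrt(a^2 + b^2)."""
    return math.hypot(a, b)

# Simulate a user dragging the triangle's sides: each change
# immediately yields an updated hypotenuse.
for a, b in [(3, 4), (6, 8), (5, 12)]:
    print(f"a={a}, b={b} -> c={hypotenuse(a, b):g}")
```

In the actual feature, the same recomputation is presented visually rather than as printed values.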

To use the tool, users can ask ChatGPT questions such as ‘What is a lens equation?’ or ‘How can I find the area of a circle?’ The chatbot responds with both a written explanation and an interactive visual module that users can manipulate directly.

The feature currently supports more than 70 topics in mathematics and science. The topics include binomial squares, Charles’ law, compound interest, Coulomb’s law, exponential decay, Hooke’s law, kinetic energy, linear equations, and Ohm’s law.
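Several of those topics reduce to a single closed-form expression that lends itself to this kind of manipulation. Compound interest, for instance, follows the standard formula A = P(1 + r/n)^(nt); the sketch below shows the calculation itself, not OpenAI's interactive module:

```python
def compound_interest(principal: float, rate: float,
                      periods_per_year: int, years: float) -> float:
    """Future value A = P * (1 + r/n)**(n*t)."""
    return principal * (1 + rate / periods_per_year) ** (periods_per_year * years)

# 1,000 at 5% annual interest, compounded monthly for 10 years.
print(round(compound_interest(1000, 0.05, 12, 10), 2))
```

Varying the compounding frequency or the rate and watching the result change is exactly the kind of exploration the interactive visuals are designed to support.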

OpenAI says it plans to expand the range of topics over time. The feature is already available to all logged-in ChatGPT users. The launch marks a shift in how ChatGPT supports learning. Instead of simply providing answers, the tool now encourages users to explore underlying concepts by experimenting with interactive models.

AI tools have become increasingly common in education, although their role remains widely debated. Some educators worry that students may become overly dependent on AI tools, while others see them as valuable learning aids.

According to OpenAI, more than 140 million people use ChatGPT every week to help with subjects such as mathematics and science, which many learners find challenging. Other technology companies are also experimenting with similar tools. Google’s Gemini introduced interactive diagrams and visual explanations last year.

The new feature joins several other ChatGPT learning tools, including study mode, which guides users through problems step by step, and QuizGPT, which allows users to create flashcards and test themselves before exams.

Anthropic lawsuit gains Big Tech support in AI dispute

Several major US technology companies have backed Anthropic in its lawsuit challenging the US Department of Defence’s decision to label the AI company a national security ‘supply chain risk’.

Google, Amazon, Apple, and Microsoft have filed legal briefs supporting Anthropic’s attempt to overturn the designation issued by Defence Secretary Pete Hegseth. Anthropic argues the decision was retaliation after the company declined to allow its AI systems to be used for mass surveillance or autonomous weapons.

In court filings, the companies warned that the government’s action could have wider consequences for the technology sector. Microsoft said the decision could have ‘broad negative ramifications for the entire technology sector’.

Microsoft, which works closely with the US government and the Department of Defence, said it agreed with Anthropic’s position that AI systems should not be used to conduct domestic mass surveillance or enable autonomous machines to initiate warfare.

A joint amicus brief supporting Anthropic was also submitted by the Chamber of Progress, a technology policy organisation funded by companies including Google, Apple, Amazon and Nvidia. The group said it was concerned about the government penalising a company for its public statements.

The brief described the designation as ‘a potentially ruinous sanction’ for businesses and warned it could create a climate in which companies fear government retaliation for expressing views.

Anthropic’s lawsuit claims the government violated its free speech rights by retaliating against the company for comments made by its leadership. The dispute escalated after Anthropic declined to remove contractual restrictions preventing its AI models from being used for mass surveillance or autonomous weapons.

The company had previously introduced safeguards in government contracts to limit certain uses of its technology. Negotiations over revised contract language continued for several weeks before the disagreement became public.

Former military officials and technology policy advocates have also filed supporting briefs, warning that the decision could discourage companies from participating in national security projects if they fear retaliation for voicing concerns. The case is currently being heard in federal court in San Francisco.

Google outlines roadmap for safer generative AI for young users

Google has presented a strategy for developing generative AI systems designed to better protect younger users while supporting learning and creativity.

The approach emphasises building conversational AI experiences that balance innovation with safeguards tailored to children and teenagers.

The company’s framework rests on three pillars: protecting young people online, respecting the role of families in digital environments and enabling youth to explore AI technologies responsibly.

According to Google, safety policies prohibit harmful content, including material linked to child exploitation, violent extremism and self-harm, while additional restrictions target age-inappropriate topics.

Safeguards are integrated throughout the AI development lifecycle, from user input to model responses. Systems use specialised classifiers to detect potentially harmful queries and prevent inappropriate outputs.

These protections are also applied to models such as Gemini, which incorporates defences against prompt manipulation and cyber misuse.

Beyond preventing harm, Google aims to support responsible AI adoption through educational initiatives.

Resources designed for families encourage discussions about responsible technology use, while tools such as Guided Learning in Gemini seek to help students explore complex topics through structured explanations and interactive learning support.

Over 85 companies join global crypto partner program 

Mastercard has introduced the Crypto Partner Program, a global initiative connecting more than 85 crypto-native companies, payments providers, and financial institutions. The program aims to create a forum for collaboration that aligns innovation in digital assets with traditional payment systems.

Enterprise use cases such as cross-border remittances, payouts, and settlements are growing, underscoring the practical potential of on-chain payments. Participants will collaborate with Mastercard to design products that combine the speed and programmability of digital assets with existing card rails and global commerce.

The initiative builds on Mastercard’s long-standing approach to blockchain and digital assets, including Start Path and the Engage platform, which provide opportunities for collaboration, innovation, and growth.

The program focuses on turning technical innovation into scalable, compliant solutions that can operate across markets and everyday commerce.

Partners in the Crypto Partner Program include Binance, Circle, Crypto.com, Solana, Ripple, PayPal, and over 80 other industry leaders, demonstrating the growing ecosystem of companies working together to shape the future of digital payments.
