Trojanised Telegram APKs target Android users with Janus exploit

A large Android malware campaign has been uncovered, distributing trojanised versions of Telegram Messenger via more than 600 malicious domains. The operation uses phishing infrastructure and evasion techniques to deceive users and deliver infected APK files.

The domains rely on typosquatting, using names such as ‘teleqram’ and ‘apktelegram’, and mimic Telegram’s official website with cloned visuals and QR code redirects. Users are funnelled to zifeiji[.]asia, which hosts a fake Telegram site offering APK downloads of between 60MB and 70MB.
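The naming trick described above can be approximated with a simple edit-distance filter. The sketch below is illustrative only: the threshold and example domains are assumptions, not drawn from the campaign’s actual domain list.

```python
# Flag candidate typosquats of a brand name by edit distance or embedding.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(domain: str, brand: str = "telegram",
                         max_dist: int = 2) -> bool:
    label = domain.split(".")[0]           # drop the TLD
    if brand in label and label != brand:  # brand embedded, e.g. "apktelegram"
        return True
    # close misspelling, e.g. "teleqram" (one substitution away)
    return 0 < levenshtein(label, brand) <= max_dist

print(looks_like_typosquat("teleqram.com"))     # → True
print(looks_like_typosquat("apktelegram.com"))  # → True
print(looks_like_typosquat("telegram.org"))     # → False
```

Real campaigns defeat filters this naive (homoglyphs, alternate TLDs), so production tooling layers many such signals; the sketch only shows the core idea.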

The malware targets Android versions 5.0 to 8.0, exploiting the Janus vulnerability (CVE-2017-13156), which lets attackers modify APKs without invalidating their legacy v1 (JAR) signatures. After installation, it establishes persistent access using socket callbacks, enabling remote control.
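The Janus flaw works because the legacy v1 (JAR) signature check ignores bytes prepended to the archive, so a DEX payload can be glued onto a signed APK without invalidating it. A minimal detection heuristic, assuming one only wants to spot that tell-tale prepended DEX header (the file paths and the check itself are illustrative, not the researchers’ tooling):

```python
# Heuristic check for a Janus-style hybrid file (DEX bytes prepended to an
# APK). A clean APK starts with the ZIP local-file header "PK\x03\x04";
# a Janus payload starts with the DEX magic b"dex\n" instead.

import zipfile

DEX_MAGIC = b"dex\n"        # followed by a version, e.g. b"035\x00"
ZIP_MAGIC = b"PK\x03\x04"

def janus_suspect(path: str) -> bool:
    with open(path, "rb") as f:
        head = f.read(4)
    # A file that parses as a ZIP archive but does not *start* with a ZIP
    # header has extra data prepended -- the Janus trick in a nutshell.
    return head == DEX_MAGIC or (zipfile.is_zipfile(path) and head != ZIP_MAGIC)
```

Devices with APK Signature Scheme v2 enforcement reject such hybrids outright, which is why the campaign targets older Android versions still accepting v1-only signatures.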

It communicates via unencrypted HTTP and FTP, and uses Android’s MediaPlayer component to trigger background activity unnoticed. Once installed, it requests extensive permissions, including access to all locally stored data.

More than 300 of the domains use the .com TLD, with many registered through the Gname registrar, suggesting a coordinated and resilient campaign structure.

Researchers also found a JavaScript tracker embedded at telegramt.net, which collects browser and device data and sends it to dszb77[.]com. The goal appears to be user profiling and behavioural analysis.
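A defensive scan for such a tracker can be as simple as checking a page’s external script sources against the reported exfiltration host. The HTML sample below is invented for illustration; only the domain dszb77[.]com comes from the researchers’ findings.

```python
# Collect external <script src=...> references and flag any that point at a
# known exfiltration host.

from html.parser import HTMLParser

class ScriptCollector(HTMLParser):
    """Gather the src attribute of every <script> tag encountered."""
    def __init__(self):
        super().__init__()
        self.script_srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.script_srcs.append(src)

BLOCKLIST = {"dszb77.com"}   # exfiltration host reported by the researchers

def flagged_scripts(html: str) -> list:
    scanner = ScriptCollector()
    scanner.feed(html)
    return [s for s in scanner.script_srcs
            if any(host in s for host in BLOCKLIST)]

page = '<html><script src="https://dszb77.com/track.js"></script></html>'
print(flagged_scripts(page))  # → ['https://dszb77.com/track.js']
```

In practice trackers are often injected dynamically, so a static scan like this catches only the simplest embeddings; it is a starting point, not a complete defence.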

Experts warn that the campaign’s scale and technical sophistication pose a significant risk to users running outdated Android systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google expands AI tools in Search with new subscriber perks

Google has begun rolling out new AI features in Search, introducing AI-powered phone calling to help users gather business information instead of contacting places manually.

The service, free for everyone in the US, allows people to search for businesses and have Google’s AI check pricing and availability on their behalf.

Subscribers to Google AI Pro and AI Ultra receive additional exclusive capabilities. These include access to Gemini 2.5 Pro, Google’s most advanced AI model, which supports complex queries such as coding or financial analysis.

Users can enable Gemini 2.5 Pro through the AI Mode tab instead of relying on the default model. Google is also launching Deep Research tools through Deep Search for in-depth investigations related to work, studies, or major life decisions.

Google is phasing the features in gradually rather than launching everything at once. AI-powered calling is now available to all Search users in the US, while Gemini 2.5 Pro and Deep Research are rolling out to AI Pro and AI Ultra subscribers.

With these updates, Google aims to position Search as more than a simple information tool by transforming it into an active digital assistant capable of handling everyday tasks and complex research instead of merely providing quick answers.

Experts link Qantas data breach to AI voice impersonation

Cybersecurity experts believe criminals may have used AI-generated voice deepfakes to breach Qantas systems, potentially deceiving contact centre staff in Manila. The breach affected nearly six million customers, with links to a group known as Scattered Spider.

Qantas confirmed the breach after detecting suspicious activity on a third-party platform. Stolen data included names, phone numbers, and addresses—but no financial details. The airline has not confirmed whether voice impersonation was involved.

Experts point to Scattered Spider’s history of using synthetic voices to trick help desk staff into handing over credentials. Former FBI agent Adam Marré said the technique, known as vishing, matches the group’s typical methods and links them to The Com, a cybercrime collective.

Other members of The Com have targeted companies like Salesforce through similar tactics. Qantas reportedly warned contact centre staff shortly before the breach, citing a threat advisory connected to Scattered Spider.

Google and CrowdStrike reported that the group frequently impersonates employees over the phone to bypass multi-factor authentication and reset passwords. The FBI has warned that Scattered Spider is now targeting airlines.

Qantas says its core systems remain secure and has not confirmed receiving a ransom demand. The airline is cooperating with authorities and urging affected customers to watch for scams using their leaked information.

Cybersecurity firm Trend Micro notes that voice deepfakes are now easy to produce, with convincing audio clips available for as little as $5. The deepfakes can mimic language, tone, and emotion, making them powerful tools for deception.

Experts recommend biometric verification, synthetic signal detection, and real-time security challenges to counter deepfakes. Employee training and multi-factor authentication remain essential defences.

Recent global cases illustrate the risk. In one instance, attackers used a deepfake mimicking US Senator Marco Rubio in an attempt to access sensitive systems. Other attacks involved cloned voices of US political figures Joe Biden and Susie Wiles.

As voice content becomes more publicly available, experts warn that anyone sharing audio online could become a target for AI-driven impersonation.

OpenAI and Shopify explore product sales via ChatGPT

OpenAI is preparing to take a commission from product sales made directly through ChatGPT, signalling a significant shift in its business model. The move aims to monetise free users by embedding e-commerce checkout within the chatbot.

Currently, ChatGPT provides product links that redirect users to external sites. In April, OpenAI partnered with Shopify to support this feature. Sources say the next step is enabling purchases without leaving the platform, with merchants paying OpenAI a fee per transaction.

Until now, OpenAI has earned revenue mainly from ChatGPT Plus subscriptions and enterprise deals. Despite a $300 billion valuation, the company remains loss-making and seeks new commercial avenues tied to its conversational AI tools.

E-commerce integration would also challenge Google’s grip on product discovery and paid search, as more users turn to chatbots for recommendations.

Early prototypes have been shown to brands, and financial terms are under discussion. Shopify, which powers checkout on platforms like TikTok, may also provide the backend infrastructure for ChatGPT.

Product suggestions in ChatGPT are generated based on query relevance and user-specific context, including budgets and saved preferences. With memory upgrades, the chatbot can personalise results more effectively over time.

Currently, clicking on a product shows a list of sellers based on third-party data. Rankings rely mainly on metadata rather than price or delivery speed, though this is expected to evolve.

Marketers are already experimenting with ‘AIO’ — AI optimisation — to boost visibility in AI-generated product listings, similar to SEO for search engines.

An advertising agency executive said this shift could disrupt paid search and traditional ad models. Concerns are growing around how AI handles preferences and the fairness of its recommendations.

OpenAI has previously said it had ‘no active plans to pursue advertising’. However, CFO Sarah Friar recently confirmed that the company is open to ads in the future, using a selective approach.

CEO Sam Altman said OpenAI would not accept payments for preferential placement, but may charge small affiliate fees on purchases made through ChatGPT.

EU confirms AI Act rollout and releases GPAI Code of Practice

The European Commission has confirmed it will move forward with the EU AI Act exactly as scheduled, instead of granting delays requested by tech giants and businesses.

On 10 July 2025, it published the final General-Purpose AI (GPAI) Code of Practice alongside FAQs to guide organisations aiming to comply with the new law.

Rather than opting for a more flexible timetable, the Commission is standing firm on its regulatory goals. The GPAI Code of Practice, now in its final form, sets out voluntary but strongly recommended steps for companies that want reduced administrative burdens and clearer legal certainty under the AI Act.

The document covers transparency, copyright, and safety standards for advanced AI models, including a model documentation form for providers.

Key dates have already been set. From 2 August 2025, rules covering notifications, governance, and penalties will come into force. By February 2026, official guidelines on classifying high-risk AI systems are expected.

The remaining parts of the legislation will take effect by August 2026, instead of being postponed further.

With the publication of the GPAI Code of Practice, the EU takes another step towards building a unified ethical framework for AI development and deployment across Europe, focusing on transparency, accountability, and respect for fundamental rights.

Netherlands urges EU to reduce reliance on US cloud providers

The Dutch government has released a policy paper urging the European Union to take coordinated action to reduce its heavy dependence on non-EU cloud providers, especially from the United States.

The document recommends that the European Commission introduce a clearer and harmonised approach at the EU level.

Key proposals include creating a consistent definition of ‘cloud sovereignty’, adjusting public procurement rules so that sovereignty can be prioritised, promoting open-source technologies and standards, setting up a common European decision-making framework for cloud choices, and ensuring sufficient funding to support the development and deployment of sovereign cloud technologies.

These measures aim to strengthen the EU’s digital independence and protect public administrations from external political or economic pressures.

A recent investigation found that over 20,000 Dutch institutions rely heavily on US cloud services, with Microsoft holding about 60% of the market.

The Dutch government warned this dependence risks national security and fundamental rights. Concerns escalated after Microsoft blocked the ICC prosecutor’s email following US sanctions, sparking political outrage.

In response, the Dutch parliament called for reducing reliance on American providers and urged the government to develop a roadmap to protect digital infrastructure and regain control.

Meta faces fresh EU backlash over Digital Markets Act non-compliance

Meta is again under EU scrutiny after failing to fully comply with the bloc’s Digital Markets Act (DMA), despite a €200 million fine earlier this year.

The European Commission says Meta’s current ‘pay or consent’ model still falls short and could trigger further penalties. A formal warning is expected, with recurring fines likely if the company does not adjust its approach.

The DMA imposes strict rules on major tech platforms to reduce market dominance and protect digital fairness. While Meta claims its model meets legal standards, the Commission says progress has been minimal.

Over the past year, Meta has faced nearly €1 billion in EU fines, including €798 million for tying Facebook Marketplace to its social network. The new case adds to years of tension over data practices and user consent.

The ‘pay or consent’ model offers users a choice between paying for privacy or accepting targeted ads. Regulators argue this does not meet the threshold for genuine consent and mirrors Meta’s past GDPR tactics.

Privacy advocates have long criticised Meta’s approach, saying users are left with no meaningful alternatives. Internal documents show Meta lobbied against privacy reforms and warned governments about reduced investment.

The Commission now holds greater power under the DMA than it did with GDPR, allowing for faster, centralised enforcement and fines of up to 10% of global turnover.

Apple has already been fined €500 million, and Google is also under investigation. The EU’s rapid action signals a stricter stance on platform accountability. The message for Meta and other tech giants is clear: partial compliance is no longer enough to avoid serious regulatory consequences.

AI system screens diabetic eye disease with near-perfect accuracy

A new AI programme is showing remarkable accuracy in detecting diabetic retinopathy, a leading cause of preventable blindness. The SMART system, short for Simple Mobile AI Retina Tracker, can scan retinal images using even basic smartphones and has achieved over 99% accuracy.

Researchers at the University of Texas Health Sciences Center in the US trained the AI using thousands of retinal images from diverse populations across six continents. The system processes images in under a second and can distinguish diabetic retinopathy from other eye diseases.

Experts say the technology could dramatically expand access to eye screenings, particularly in areas lacking specialist care. By integrating the tool into regular check-ups, both primary care providers and ophthalmologists could streamline early diagnosis.

Researchers highlighted that the tool’s mobile accessibility allows for global reach, potentially screening billions. The findings were presented at the annual meeting of The Endocrine Society, though they have yet to be peer-reviewed.

Pennsylvania criminalises malicious deepfakes under new digital forgery law

Governor Shapiro has enacted a new statute enhancing Pennsylvania’s legal stance on AI-generated content by defining deceptive deepfakes as digital forgery.

The law criminalises creating and distributing such content, particularly when used to deceive, marking a proactive response to evolving online threats.

The legislation differentiates between uses of deepfakes: non-consensual impersonation will result in misdemeanour charges, while cases involving fraudulent intent, such as financial scams or political manipulation, are now classified as third-degree felonies.

Support for the bill was bipartisan and overwhelming in the state legislature. Its sponsors emphasised that while it deters harmful digital impersonation, it also carefully safeguards legitimate speech, including parody, satire, and artistic expression.

With Pennsylvania now among the growing number of states implementing deepfake regulations, this development aligns with a national trend to regulate AI-generated digital forgeries. It complements earlier state-level laws and federal initiatives to curb AI’s misuse without stifling innovation.

AI tool uses walking patterns to detect early signs of dementia

Fujitsu and Acer Medical are trialling an AI-powered tool to help identify early signs of dementia and Parkinson’s disease by analysing patients’ walking patterns. The system, called aiGait and powered by Fujitsu’s Uvance skeleton recognition technology, converts routine movements into health data.

Initial tests are taking place at a daycare centre linked to Taipei Veterans Hospital, using tablets and smartphones to record basic patient movements. The AI compares this footage with known movement patterns associated with neurodegenerative conditions, helping caregivers detect subtle abnormalities.

The tool is designed to support early intervention, with abnormal results prompting follow-up by healthcare professionals. Acer Medical plans to expand the service to elderly care centres across Taiwan by the end of the year.

Fujitsu’s AI was originally developed for gymnastics scoring and adapted to analyse real-world gait data with high accuracy using everyday mobile devices. Both companies hope to extend the technology’s use to paediatrics, sports science, and rehabilitation in future.
