Indonesia promises to bolster digital sovereignty and AI talent on Independence Day

Indonesia marked its 80th Independence Day by reaffirming its commitment to digital sovereignty and technology-driven inclusion.

The Ministry of Communication and Digital Affairs, following President Prabowo Subianto’s ‘Indonesia Incorporated’ directive, highlighted efforts to build an inclusive, secure, and efficient digital ecosystem.

Priorities include deploying 4G networks in remote regions, expanding public internet services, and reinforcing the Palapa Ring broadband infrastructure.

On the talent front, the government launched the Digital Talent Scholarship and an AI Talent Factory to nurture AI skills from beginner to specialist level, laying the groundwork for domestic AI innovation.

In parallel, digital protection measures have been bolstered: over 1.2 million pieces of harmful content have been blocked, while new regulations under the Personal Data Protection Law, age-verification, content monitoring, and reporting systems have been introduced to enhance child safety online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake Telegram Premium site spreads dangerous malware

A fake Telegram Premium website infects users with Lumma Stealer malware through a drive-by download, requiring no user interaction.

The domain, telegrampremium[.]app, hosts a malicious executable named start.exe, which begins stealing sensitive data as soon as it runs.

The malware targets browser-stored credentials, crypto wallets, clipboard data and system files, using advanced evasion techniques to bypass antivirus tools.

Obfuscated with cryptors and hidden behind real services like Telegram, the malware also communicates with temporary domains to avoid takedown.

Analysts warn that it manipulates Windows systems, evades detection, and leaves little trace by disguising its payloads as real image files.

To defend against such threats, organisations are urged to adopt behaviour-based detection and enforce stricter download controls.
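One simple download control is an integrity check: before an installer such as start.exe is allowed to run, its SHA-256 digest is compared against an allowlist of approved software. The sketch below is illustrative only; the sample digest (of the byte string b"hello") and function name are assumptions, not details from the report.

```python
import hashlib

# Illustrative allowlist of SHA-256 digests for approved installers.
# In practice these would come from a signed manifest or an EDR policy.
APPROVED_SHA256 = {
    # digest of b"hello", used here purely as a stand-in
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def is_approved(payload: bytes) -> bool:
    """Allow execution only if the payload's SHA-256 digest is allowlisted."""
    return hashlib.sha256(payload).hexdigest() in APPROVED_SHA256
```

Any unrecognised binary, including a drive-by download, would fail the check and be quarantined rather than executed.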

The dark side of AI: Seven fears that won’t go away

AI has been hailed as the most transformative technology of our age, but with that power comes unease. From replacing jobs to spreading lies online, the risks attached to AI are no longer abstract; they are already reshaping lives. While governments and tech leaders promise safeguards, uncertainty fuels public anxiety.

Perhaps the most immediate concern is employment. Machines are proving cheaper and faster than humans in the software development and graphic design industries. Talk of a future “post-scarcity” economy, where robot labour frees people from work, remains speculative. Workers see only lost opportunities now, while policymakers struggle to offer coordinated solutions.

Environmental costs are another hidden consequence. Training large AI models demands enormous data centres that consume vast amounts of electricity and water. Critics argue that supposed future efficiencies cannot justify today’s pollution, which sometimes rivals the carbon footprints of small nations.

Privacy fears are also escalating. AI-driven surveillance—from facial recognition in public spaces to workplace monitoring—raises questions about whether personal freedom will survive in an era of constant observation. Many fear that “smart” devices and cameras may soon leave nowhere to hide.

Then there is the spectre of weaponisation. AI is already integrated into warfare, with autonomous drones and robotic systems assisting soldiers. While fully self-governing lethal machines are not yet in use, military experts warn that it is only a matter of time before battlefields become dominated by algorithmic decision-makers.

Artists and writers, meanwhile, worry about intellectual property theft. AI systems trained on creative works without permission or payment have sparked lawsuits and protests, leaving cultural workers feeling exploited by tech giants eager for training data.

Misinformation represents another urgent risk. Deepfakes and AI-generated propaganda are flooding social media, eroding trust in institutions and amplifying extremist views. The danger lies not only in falsehoods themselves but in the echo chambers algorithms create, where users are pushed toward ever more radical beliefs.

And hovering above it all is the fear of runaway AI. Although science fiction often exaggerates this threat, researchers take seriously the possibility of systems evolving in ways we cannot predict or control. Calls for global safeguards and transparency have grown louder, yet solutions remain elusive.

In the end, fear alone cannot guide us. Addressing these risks requires not just caution but decisive governance and ethical frameworks. Only then can humanity hope to steer AI toward progress rather than peril.

Source: Forbes

Zoom patches critical Windows flaw with high risk of takeover

Zoom has patched a critical Windows vulnerability that could let attackers fully take control of devices without needing credentials. The flaw, CVE-2025-49457, stems from the app failing to use explicit paths when loading DLLs, allowing malicious files to be executed.

Attackers could exploit this to install malware or extract sensitive data such as recordings or user credentials, even pivoting deeper into networks. The issue affects several Zoom products, including Workplace, VDI, Rooms, and Meeting SDK, all before version 6.3.10.

Zoom urges users to update their app immediately, as the flaw requires no advanced skill and can be triggered with minimal access. The incident underscores the growing cybersecurity risks facing widely used collaboration software.

GenAI app usage up 50% as firms struggle with oversight

Enterprise employees are increasingly building their own AI tools, sparking a surge in shadow AI that raises security concerns.

Netskope reports a 50% rise in generative AI platform use, with over half of current adoption estimated to be unsanctioned by IT.

Platforms like Azure OpenAI, Amazon Bedrock, and Vertex AI lead this trend, allowing users to connect enterprise data to custom AI agents.

The growth of shadow AI has prompted calls for better oversight, real-time user training, and updated data loss prevention strategies.

On-premises deployment is also increasing, with 34% of firms using local LLM interfaces like Ollama and LM Studio. Security risks grow as AI agents retrieve data using API calls beyond browsers, particularly from OpenAI and Anthropic endpoints.
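One way such API-level visibility can work is to scan egress-proxy logs for destinations matching known GenAI API hosts. In this sketch, api.openai.com and api.anthropic.com are real public endpoints, but the log format and function are hypothetical and not Netskope's actual tooling.

```python
# Known GenAI API hosts to watch for; extend the set as needed.
GENAI_API_HOSTS = {"api.openai.com", "api.anthropic.com"}

def flag_shadow_ai(log_lines):
    """Return log lines whose destination host is a known GenAI API.

    Assumes an illustrative log format: "client_ip dest_host request_path".
    """
    flagged = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in GENAI_API_HOSTS:
            flagged.append(line)
    return flagged
```

Flagged traffic from hosts with no sanctioned AI deployment would then be a candidate indicator of shadow AI use.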

New Gemini update remembers your preferences, until you tell it not to

Google has begun rolling out a feature that lets its Gemini AI chatbot automatically remember key personal details and preferences from previous chats unless users opt out. The feature builds on earlier functionality, where memory could only be activated on request.

The update is enabled by default on Gemini 2.5 Pro in select countries and will later be extended to the 2.5 Flash model. Users can disable it under ‘Personal Context’ in the app’s settings.

Alongside auto-memory, Google is introducing Temporary Chats, a privacy tool for one-off interactions. These conversations aren’t saved to your history, aren’t used to train Gemini, and are deleted after 72 hours.

Google is also renaming ‘Gemini Apps Activity’ to ‘Keep Activity’, a setting that, when enabled, lets Google sample uploads like files and photos to improve services from 2 September, while still offering the option to opt out.

Top cybersecurity vendors double down on AI-powered platforms

The cybersecurity market is consolidating as AI reshapes defence strategies. Platform-based solutions replace point tools to cut complexity, counter AI threats, and ease skill shortages. IDC predicts that security spending will rise 12% in 2025 and reach $377 billion by 2028.

Vendors embed AI agents, automation, and analytics into unified platforms. Palo Alto Networks’ Cortex XSIAM reached $1 billion in bookings, and its $25 billion CyberArk acquisition expands into identity management. Microsoft blends Azure, OpenAI, and Security Copilot to safeguard workloads and data.

Cisco integrates AI across networking, security, and observability, bolstered by its acquisition of Splunk. CrowdStrike rebounds from its 2024 outage with Charlotte AI, while Cloudflare shifts its focus from delivery to AI-powered threat prediction and optimisation.

Fortinet’s platform spans networking and security, strengthened by Suridata’s SaaS posture tools. Zscaler boosts its Zero Trust Exchange with Red Canary’s MDR tech. Broadcom merges Symantec and Carbon Black, while Check Point pushes its AI-driven Infinity Platform.

Identity stays central, with Okta leading access management and teaming with Palo Alto on integrated defences. The companies aim to platformise, integrate AI, and automate their operations to dominate an increasingly complex cyberthreat landscape.

North Korean hackers switch to ransomware in major cyber campaign

A North Korean hacking unit has launched a ransomware campaign targeting South Korea and other countries, marking a shift from pure espionage. Security firm S2W identified the subgroup, ‘ChinopuNK’, as part of the ScarCruft threat actor.

The operation began in July, utilising phishing emails and a malicious shortcut file within a RAR archive to deploy multiple malware types. These included a keylogger, stealer, ransomware, and a backdoor.

ScarCruft, active since 2016, has targeted defectors, journalists, and government agencies. Researchers say the move to ransomware indicates either a new revenue stream or a more disruptive mission.

The campaign has expanded beyond South Korea to Japan, Vietnam, Russia, Nepal, and the Middle East. Analysts note the group’s technical sophistication has improved in recent years.

Security experts advise monitoring URLs, file hashes and behaviour-based indicators, and continuously tracking ScarCruft’s tools and infrastructure to detect related campaigns early.

Google launches small AI model for mobiles and IoT

Google has released Gemma 3 270M, an open-source AI model with 270 million parameters designed to run efficiently on smartphones and Internet of Things devices.

Drawing on technology from the larger Gemini family, it focuses on portability, low energy use and quick fine-tuning, enabling developers to create AI tools that work on everyday hardware instead of relying on high-end servers.

The model supports instruction-following and text structuring with a 256,000-token vocabulary, offering scope for natural language processing and on-device personalisation.

Its design includes quantisation-aware training to work in low-precision formats such as INT4, reducing memory use and improving speed on mobile processors instead of requiring extensive computational power.
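The idea behind low-precision formats like INT4 can be illustrated with a toy symmetric quantiser: each weight is mapped to an integer in [-8, 7] plus one shared scale factor, cutting storage roughly eightfold versus 32-bit floats. This is a simplified sketch, not Gemma's actual quantisation-aware training scheme.

```python
def quantize_int4(weights):
    """Toy symmetric per-tensor INT4 quantisation: floats -> ints in [-8, 7]."""
    scale = max(abs(w) for w in weights) / 7 or 1.0  # avoid a zero scale
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from INT4 values and the scale."""
    return [v * scale for v in q]
```

Round-tripping a weight introduces at most about half a scale step of error; quantisation-aware training exposes the model to this error during training so accuracy holds up at inference time.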

Industry commentators note that the model could help meet demand for efficient AI in edge computing, with applications in healthcare wearables and autonomous IoT systems. Keeping processing on-device also supports privacy and reduces dependence on cloud infrastructure.

Google highlights the environmental benefits of the model, pointing to reduced carbon impact and greater accessibility for smaller firms and independent developers. While safeguards like ShieldGemma aim to limit risks, experts say careful use will still be needed to avoid misuse.

Future developments may bring new features, including multimodal capabilities, as part of Google’s strategy to blend open and proprietary AI within hybrid systems.

Cohere secures $500m funding to expand secure enterprise AI

Cohere has secured $500 million in new funding, lifting its valuation to $6.8 billion and reinforcing its position as a secure, enterprise-grade AI specialist.

The Toronto-based firm, which develops large language models tailored for business use, attracted backing from AMD, Nvidia, Salesforce, and other investors.

Its flagship multilingual model, Aya 23, supports 23 languages and is designed to help companies adopt AI without the risks linked to open-source tools, reflecting growing demand for privacy-conscious, compliant solutions.

The round marks renewed support from chipmakers AMD and Nvidia, who had previously invested in the company.

Salesforce Ventures’ involvement hints at potential integration with enterprise software platforms, while other backers include Radical Ventures, Inovia Capital, PSP Investments, and the Healthcare of Ontario Pension Plan.

The company has also strengthened its leadership, appointing former Meta AI research head Joelle Pineau as Chief AI Scientist, Instagram co-founder Mike Krieger as Chief Product Officer, and ex-Uber executive Saroop Bharwani as Chief Technology Officer for Applied R&D.

Cohere intends to use the funding to advance agentic AI (systems capable of performing tasks autonomously) while focusing on security and ethical development.

With over $1.5 billion raised since its 2019 founding, the company targets adoption in regulated sectors such as healthcare and finance.

The investment comes amid a broader surge in AI spending, with industry leaders betting that secure, customisable AI will become essential for enterprise operations.
