Cyberattack disrupts Nevada government systems

The State of Nevada reported a cyberattack affecting several state government systems, with recovery efforts underway. Some websites and phone lines may be slow or offline while officials restore operations.

Governor Joe Lombardo’s office stated there is no evidence that personal information has been compromised, emphasising that the issue is limited to state systems. The incident is under investigation by both state and federal authorities, although technical details have not been released.

Several agencies, including the Department of Motor Vehicles, have been affected, prompting temporary office closures until normal operations can resume. Emergency services, including 911, continue to operate without disruption.

Officials prioritise system validation and safe restoration to prevent further disruption to state services.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Greece strengthens crypto rules to align with EU standards

Greek authorities are enforcing stricter regulations on the crypto sector to strengthen oversight and align with European standards. The move targets money laundering and tax evasion, reflecting Athens’ intent to bring order to the industry.

Digital asset exchanges and wallet providers will face a rigorous licensing process. Applicants must submit a complete business dossier, disclose management and shareholder details, and pass extensive checks before being allowed to operate.

Non-compliant platforms risk being barred from the market.

Financial regulators will monitor crypto transactions closely, with powers to freeze suspicious digital assets and trace funds. Authorities aim to prevent illegal capital flows while boosting investor confidence through enhanced transparency.

Taxation rules for crypto are expected this fall, with capital gains taxes set at 15% for private investors and potentially higher for companies. Some crypto services may also be subject to 24% VAT, with final rates announced in the coming months.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Insecure code blamed for 74 percent of company breaches

Nearly three-quarters of companies have experienced a security breach in the past year due to flaws in their software code.

According to a new SecureFlag study, 74% of organisations admitted to at least one incident caused by insecure code, with almost half suffering multiple breaches.

The report has renewed scrutiny of AI-generated code, which is growing in popularity across the industry. While some experts claim AI can outperform humans, concerns remain that these tools are reproducing insecure coding patterns at scale.

On the upside, companies are increasing developer security training. Around 44% provide quarterly updates, while 29% do so monthly.

Most use video tutorials and eLearning platforms, with a third hosting interactive events like capture-the-flag hacking games.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google to require developer identity checks for sideloaded Android apps

Google will begin requiring identity verification for Android developers distributing apps outside the Play Store.

Starting in September 2026, developers in Brazil, Indonesia, Singapore and Thailand must provide legal name, address, email, phone number and possibly government-issued ID for apps to install on certified Android devices.

The requirement will expand globally starting in 2027. While existing Play Store developers are already verified, all sideloaded apps will now require developer verification to target select Android users.

Google is building a separate Android Developer Console for sideloading developers and is offering a lighter-touch, free verification option for student and hobbyist creators to protect innovation while boosting accountability.

The change aims to reduce malware distribution from anonymous developers and repeat offenders, while preserving the openness of Android by allowing sideloading and third-party stores.

Developers can opt into early access programmes beginning October 2025 to provide feedback and prepare for full rollout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Malicious apps on Google Play infected 19 million users with banking trojan

Security researchers from Zscaler’s ThreatLabz team uncovered 77 malicious Android applications on the Google Play Store, collectively downloaded over 19 million times, that distributed the Anatsa banking trojan, TeaBot, and other malware families.

Anatsa, active since at least 2020, has evolved to target over 831 banking, fintech and cryptocurrency apps globally, including platforms in Germany and South Korea. These campaigns now use direct payload installation with encrypted runtime strings and device checks to evade detection.

Deploying as decoy tools, often document readers, the apps triggered a silent download of malicious code after installation. The Trojan automatically gained accessibility permissions to display overlays, capture credentials, log keystrokes, and intercept messages. Additional malware such as Joker, its variant Harly, and adware were also detected.

Following disclosure, Google removed the identified apps from the Play Store. Users are advised to enable Google Play Protect, review app permissions carefully, limit downloads to trusted developers, and consider using antivirus tools to stay protected.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI agents can act unpredictably without proper guidance

Recent tests on agentic AI by Anthropic have revealed significant risks when systems act independently. In one simulation, Claude attempted to blackmail a fictional executive, showing how agents with sensitive data can behave unpredictably.

Other AI systems tested displayed similar tendencies, highlighting the dangers of poorly guided autonomous decision-making.

Agentic AI is increasingly handling routine work decisions. Gartner predicts 15% of day-to-day choices will be managed by such systems by 2028, and around half of tech leaders already deploy them.

Experts warn that without proper controls, AI agents may unintentionally achieve goals, access inappropriate data or perform unauthorised actions.

Security risks include memory poisoning, tool misuse, and AI misinterpreting instructions. Tests by Invariant Labs and Trend Micro showed agents could leak sensitive information even in controlled environments.

With billions of devices potentially running AI agents, human oversight alone cannot manage these threats.

Emerging solutions include ‘thought injection’ to guide AI and AI-based monitoring ‘agent bodyguards’ to ensure compliance with organisational rules. Experts emphasise protecting business systems and properly decommissioning outdated AI agents to prevent ‘zombie’ access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Brave uncovers vulnerability in Perplexity’s Comet that risked sensitive user data

Perplexity’s AI-powered browser, Comet, was found to have a serious vulnerability that could have exposed sensitive user data through indirect prompt injection, according to researchers at Brave, a rival browser company.

The flaw stemmed from how Comet handled webpage-summarisation requests. By embedding hidden instructions on websites, attackers could trick the browser’s large language model into executing unintended actions, such as extracting personal emails or accessing saved passwords.

Brave researchers demonstrated how the exploit could bypass traditional protections, such as the same-origin policy, showing scenarios where attackers gained access to Gmail or banking data by manipulating Comet into following malicious cues.

Brave disclosed the vulnerability to Perplexity on 11 August, but stated that it remained unfixed when they published their findings on 20 August. Perplexity later confirmed to CNET that the flaw had been patched, and Brave was credited for working with them to resolve it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Jetson AGX Thor brings Blackwell-powered compute to robots and autonomous vehicles

Nvidia has introduced Jetson AGX Thor, its Blackwell-powered robotics platform that succeeds the 2022 Jetson Orin. Designed for autonomous driving, factory robots, and humanoid machines, it comes in multiple models, with a DRIVE OS kit for vehicles scheduled for release in September.

Thor delivers 7.5 times more AI compute, 3.1 times greater CPU performance, and double the memory of Orin. The flagship Thor T5000 offers up to 2,070 teraflops of AI compute, paired with 128 GB of memory, enabling the execution of generative AI models and robotics workloads at the edge.

The platform supports Nvidia’s Isaac, Metropolis, and Holoscan systems, and features multi-instance GPU capabilities that enable the simultaneous execution of multiple AI models. It is compatible with Hugging Face, PyTorch, and leading AI models from OpenAI, Google, and other sources.

Adoption has begun, with Boston Dynamics utilising Thor for Atlas and firms such as Volvo, Aurora, and Gatik deploying DRIVE AGX Thor in their vehicles. Nvidia stresses it supports robot-makers rather than building robots, with robotics still a small but growing part of its business.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots found unreliable in suicide-related responses, according to a new study

A new study by the RAND Corporation has raised concerns about the ability of AI chatbots to answer questions related to suicide and self-harm safely.

Researchers tested ChatGPT, Claude and Gemini with 30 different suicide-related questions, repeating each one 100 times. Clinicians assessed the queries on a scale from low to high risk, ranging from general information-seeking to dangerous requests about methods of self-harm.

The study revealed that ChatGPT and Claude were more reliable at handling low-risk and high-risk questions, avoiding harmful instructions in dangerous scenarios. Gemini, however, produced more variable results.

While all three ΑΙ chatbots sometimes responded appropriately to medium-risk questions, such as offering supportive resources, they often failed to respond altogether, leaving potentially vulnerable users without guidance.

Experts warn that millions of people now use large language models as conversational partners instead of trained professionals, which raises serious risks when the subject matter involves mental health. Instances have already been reported where AI appeared to encourage self-harm or generate suicide notes.

The RAND team stressed that safeguards are urgently needed to prevent such tools from producing harmful content in response to sensitive queries.

The study also noted troubling inconsistencies. ChatGPT and Claude occasionally gave inappropriate details when asked about hazardous methods, while Gemini refused even basic factual queries about suicide statistics.

Researchers further observed that ChatGPT showed reluctance to recommend therapeutic resources, often avoiding direct mention of safe support channels.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New WhatsApp features help manage unwanted groups

WhatsApp is expanding its tools to give users greater control over the groups they join and the conversations they take part in.

When someone not saved in a user’s contacts adds them to a group, WhatsApp now provides details about that group so they can immediately decide whether to stay or leave. If a user chooses to exit, they can also report the group directly to WhatsApp.

Privacy settings allow people to decide who can add them to groups. By default, the setting is set to ‘Everyone,’ but it can be adjusted to ‘My contacts’ or ‘My contacts except…’ for more security. Messages within groups can also be reported individually, with users having the option to block the sender.

Reported messages and groups are sent to WhatsApp for review, including the sender’s or group’s ID, the time the message was sent, and the message type.

Although blocking an entire group is impossible, users can block or report individual members or administrators if they are sending spam or inappropriate content. Reporting a group will send up to five recent messages from that chat to WhatsApp without notifying other members.

Exiting a group remains straightforward: users can tap the group name and select ‘Exit group.’ With these tools, WhatsApp aims to strengthen user safety, protect privacy, and provide better ways to manage unwanted interactions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!