Tencent Cloud sites exposed credentials and source code in major security lapse

Researchers have uncovered severe misconfigurations in two Tencent Cloud sites that exposed sensitive credentials and internal source code to the public. The flaws could have given attackers access to Tencent’s backend infrastructure and critical internal services.

Cybernews discovered the data leaks in July 2025, finding hardcoded plain-text passwords, a sensitive internal .git directory, and configuration files linked to Tencent’s load balancer and JEECG development platform.

Weak passwords, built from predictable patterns like the company name and year, increased the risk of exploitation.

The exposed data may have been accessible since April, leaving months of opportunity for scraping bots or malicious actors.

With administrative console access, attackers could have tampered with APIs, planted malicious code, pivoted deeper into Tencent’s systems, or abused the trusted domain for phishing campaigns.

Tencent confirmed the incident as a ‘known issue’ and has since closed access, though questions remain over how many parties may have already retrieved the exposed information.

Security experts warn that even minor oversights in cloud operations can cascade into serious vulnerabilities, especially for platforms trusted by millions worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT faces scrutiny as OpenAI updates protections after teen suicide case

OpenAI has announced new safety measures for its popular chatbot following a lawsuit filed by the parents of a 16-year-old boy who died by suicide after relying on ChatGPT for guidance.

The parents allege the chatbot isolated their son and contributed to his death earlier in the year.

The company said it will improve ChatGPT’s ability to detect signs of mental distress, including indirect expressions such as users mentioning sleep deprivation or feelings of invincibility.

It will also strengthen safeguards around suicide-related conversations, which OpenAI admitted can break down in prolonged chats. Planned updates include parental controls, access to usage details, and clickable links to local emergency services.

OpenAI stressed that its safeguards work best during short interactions, acknowledging weaknesses in longer exchanges. It also said it is considering building a network of licensed professionals that users could access through ChatGPT.

The company added that content filtering errors, where serious risks are underestimated, will also be addressed.

The lawsuit comes amid wider scrutiny of AI tools by regulators and mental health experts. Attorneys general from more than 40 US states recently warned AI companies of their duty to protect children from harmful or inappropriate chatbot interactions.

Critics argue that reliance on chatbots for support instead of professional care poses growing risks as usage expands globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack disrupts Nevada government systems

The State of Nevada reported a cyberattack affecting several state government systems, with recovery efforts underway. Some websites and phone lines may be slow or offline while officials restore operations.

Governor Joe Lombardo’s office stated there is no evidence that personal information has been compromised, emphasising that the issue is limited to state systems. The incident is under investigation by both state and federal authorities, although technical details have not been released.

Several agencies, including the Department of Motor Vehicles, have been affected, prompting temporary office closures until normal operations can resume. Emergency services, including 911, continue to operate without disruption.

Officials prioritise system validation and safe restoration to prevent further disruption to state services.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Insecure code blamed for 74 percent of company breaches

Nearly three-quarters of companies have experienced a security breach in the past year due to flaws in their software code.

According to a new SecureFlag study, 74% of organisations admitted to at least one incident caused by insecure code, with almost half suffering multiple breaches.

The report has renewed scrutiny of AI-generated code, which is growing in popularity across the industry. While some experts claim AI can outperform humans, concerns remain that these tools are reproducing insecure coding patterns at scale.

On the upside, companies are increasing developer security training. Around 44% provide quarterly updates, while 29% do so monthly.

Most use video tutorials and eLearning platforms, with a third hosting interactive events like capture-the-flag hacking games.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Publishers set to earn from Comet Plus, Perplexity’s new initiative

Perplexity has announced Comet Plus, a new service that will pay premium publishers to provide high-quality news content as an alternative to clickbait. The company has not disclosed its roster of partners or payment structure, though reports suggest a pool of $42.5 million.

Publishers have long criticised AI services for exploiting their work without compensation. Perplexity, backed by Amazon’s Jeff Bezos, said Comet Plus will create a fairer system and reward journalists for producing trusted content in the era of AI.

The platform introduces a revenue model based on three streams: human visits, search citations, and agent actions. Perplexity argues this approach better reflects how people consume information today, whether by browsing manually, seeking AI-generated answers, or using AI agents.

The company stated that the initiative aims to rebuild trust between readers and publishers, while ensuring that journalism thrives in a changing digital economy. The initial group of publishing partners will be revealed later.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google to require developer identity checks for sideloaded Android apps

Google will begin requiring identity verification for Android developers distributing apps outside the Play Store.

Starting in September 2026, developers in Brazil, Indonesia, Singapore and Thailand must provide legal name, address, email, phone number and possibly government-issued ID for apps to install on certified Android devices.

The requirement will expand globally starting in 2027. While existing Play Store developers are already verified, all sideloaded apps will now require developer verification to target select Android users.

Google is building a separate Android Developer Console for sideloading developers and is offering a lighter-touch, free verification option for student and hobbyist creators to protect innovation while boosting accountability.

The change aims to reduce malware distribution from anonymous developers and repeat offenders, while preserving the openness of Android by allowing sideloading and third-party stores.

Developers can opt into early access programmes beginning October 2025 to provide feedback and prepare for full rollout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Malicious apps on Google Play infected 19 million users with banking trojan

Security researchers from Zscaler’s ThreatLabz team uncovered 77 malicious Android applications on the Google Play Store, collectively downloaded over 19 million times, that distributed the Anatsa banking trojan, TeaBot, and other malware families.

Anatsa, active since at least 2020, has evolved to target over 831 banking, fintech and cryptocurrency apps globally, including platforms in Germany and South Korea. These campaigns now use direct payload installation with encrypted runtime strings and device checks to evade detection.

Deploying as decoy tools, often document readers, the apps triggered a silent download of malicious code after installation. The Trojan automatically gained accessibility permissions to display overlays, capture credentials, log keystrokes, and intercept messages. Additional malware such as Joker, its variant Harly, and adware were also detected.

Following disclosure, Google removed the identified apps from the Play Store. Users are advised to enable Google Play Protect, review app permissions carefully, limit downloads to trusted developers, and consider using antivirus tools to stay protected.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Brave uncovers vulnerability in Perplexity’s Comet that risked sensitive user data

Perplexity’s AI-powered browser, Comet, was found to have a serious vulnerability that could have exposed sensitive user data through indirect prompt injection, according to researchers at Brave, a rival browser company.

The flaw stemmed from how Comet handled webpage-summarisation requests. By embedding hidden instructions on websites, attackers could trick the browser’s large language model into executing unintended actions, such as extracting personal emails or accessing saved passwords.

Brave researchers demonstrated how the exploit could bypass traditional protections, such as the same-origin policy, showing scenarios where attackers gained access to Gmail or banking data by manipulating Comet into following malicious cues.

Brave disclosed the vulnerability to Perplexity on 11 August, but stated that it remained unfixed when they published their findings on 20 August. Perplexity later confirmed to CNET that the flaw had been patched, and Brave was credited for working with them to resolve it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Humain Chat has been unveiled by Saudi Arabia to drive AI innovation

Saudi Arabia has taken a significant step in AI with the launch of Humain Chat, an app powered by one of the world’s most enormous Arabic-trained datasets.

Developed by state-backed venture Humain, the app is designed to strengthen the country’s role in AI while promoting sovereign technologies.

Built on the Allam large language model, Humain Chat allows real-time web search, speech input across Arabic dialects, bilingual switching between Arabic and English, and secure data compliance with Saudi privacy laws.

The app is already available on the web, iOS, and Android in Saudi Arabia, with plans for regional expansion across the Middle East before reaching global markets.

Humain was established in May under the leadership of Crown Prince Mohammed bin Salman and the Public Investment Fund. Its flagship model, ALLAM 34B, is described as the most advanced AI system created in the Arab world. The company said the app will evolve further as user adoption grows.

CEO Tareq Amin called the launch ‘a historic milestone’ for Saudi Arabia, stressing that Humain Chat shows how advanced AI can be developed in Arabic while staying culturally rooted and built by local expertise.

A team of 120 specialists based in the Kingdom created the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube under fire for AI video edits without creator consent

Anger grows as YouTube secretly alters some uploaded videos using machine learning. The company admitted that it had been experimenting with automated edits, which sharpen images, smooth skin, and enhance clarity, without notifying creators.

Although tools like ChatGPT or Gemini did not generate these changes, they still relied on AI.

The issue has sparked concern among creators, who argue that the lack of consent undermines trust.

YouTuber Rhett Shull publicly criticised the platform, prompting YouTube liaison Rene Ritchie to clarify that the edits were simply efforts to ‘unblur and denoise’ footage, similar to smartphone processing.

However, creators emphasise that the difference lies in transparency, since phone users know when enhancements are applied, whereas YouTube users were unaware.

Consent remains central to debates around AI adoption, especially as regulation lags and governments push companies to expand their use of the technology.

Critics warn that even minor, automatic edits can treat user videos as training material without permission, raising broader concerns about control and ownership on digital platforms.

YouTube has not confirmed whether the experiment will expand or when it might end.

For now, viewers noticing oddly upscaled Shorts may be seeing the outcome of these hidden edits, which have only fuelled anger about how AI is being introduced into creative spaces.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!