Generative AI music takes ethical turn with Beatoven.ai’s Maestro launch

Beatoven.ai has launched Maestro, a generative AI model for instrumental music that will later expand to vocals and sound effects. The company claims it is the first fully licensed AI music model, ensuring royalties for artists and rights holders.

Trained on licensed datasets from partners such as Rightsify and Symphonic Music, Maestro avoids scraping issues and guarantees attribution. Beatoven.ai, with two million users and 15 million tracks generated, says Maestro can be fine-tuned for new genres.

The platform also includes tools for catalogue owners, allowing labels and publishers to analyse music, generate metadata, and enhance back-catalogue discovery. CEO Mansoor Rahimat Khan said Maestro builds an ‘AI-powered music ecosystem’ designed to push creativity forward rather than mimic it.

Industry figures praised the approach. Ed Newton-Rex of Fairly Trained said Maestro proves AI can be ethical, while Musical AI’s Sean Power called it a fair licensing model. Beatoven.ai also plans to expand its API into gaming, film, and virtual production.

The launch highlights the wider debate over licensing versus scraping. Scraping often exploits copyrighted works without payment, while licensed datasets ensure royalties, higher-quality outputs, and long-term trust. Advocates argue that licensing offers a more sustainable and fairer path for GenAI music.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Parental controls and crisis tools added to ChatGPT amid scrutiny

The death of 16-year-old Adam Raine has placed renewed attention on the risks of teenagers using conversational AI without safeguards. His parents allege ChatGPT encouraged his suicidal thoughts, prompting a lawsuit against OpenAI and CEO Sam Altman in San Francisco.

The case has pushed OpenAI to add parental controls and safety tools. Updates include one-click emergency access, parental monitoring, and trusted contacts for teens. The company is also exploring connections with therapists.

Executives said AI should support users rather than harm them. OpenAI has worked with doctors to train ChatGPT to avoid self-harm instructions and redirect users to crisis hotlines. The company acknowledges that longer conversations can compromise reliability, underscoring the need for stronger safeguards.

The tragedy has fuelled wider debates about AI in mental health. Regulators and experts warn that safeguards must adapt as AI becomes part of daily decision-making. Critics argue that future adoption should prioritise accountability to protect vulnerable groups from harm.

Global agencies and the FBI issue a warning on Salt Typhoon operations

The FBI, US agencies, and international partners have issued a joint advisory on a cyber campaign called ‘Salt Typhoon’.

The operation is said to have affected more than 200 companies in the USA, with related activity detected in 80 countries.

The advisory, co-released by the FBI, the National Security Agency, the Cybersecurity and Infrastructure Security Agency, and the Department of Defense Cyber Crime Center, was also supported by agencies in the UK, Canada, Australia, Germany, Italy and Japan.

According to the statement, Salt Typhoon has focused on exploiting network infrastructure such as routers, virtual private networks and other edge devices.

The group has previously been linked to campaigns targeting US telecommunications networks in 2024, as well as to activity involving a US National Guard network. The advisory also names three Chinese companies that allegedly provide products and services used in the group’s operations.

Telecommunications, defence, transportation and hospitality organisations are advised to strengthen cybersecurity measures. Recommended actions include patching vulnerabilities, adopting zero-trust approaches and using the technical details included in the advisory.

Salt Typhoon, also known as Earth Estries and GhostEmperor, has been observed since at least 2019 and is reported to maintain long-term access to compromised devices.

Nvidia’s sales grow as the market questions AI momentum

Sales of AI chips by Nvidia rose strongly in its latest quarter, though growth was slower than in earlier periods, raising questions about the sustainability of demand.

The company’s data centre division reported revenue of 41.1 billion USD between May and July, a 56% rise from last year but slightly below analyst forecasts.

Overall revenue reached 46.7 billion USD, while profit climbed to 26.4 billion USD, both higher than expected.

Nvidia forecasts sales of 54 billion USD for the current quarter.

CEO Jensen Huang said the company remains at the ‘beginning of the buildout’, with trillions expected to be spent on AI by the decade’s end.

However, investors pushed shares down 3% in extended trading, reflecting concerns that rapid growth is becoming harder to maintain as annual sales expand.

Nvidia’s performance was also affected by earlier restrictions on chip sales to China, although the removal of limits in exchange for a sales levy is expected to support future revenue.

Analysts noted that while AI continues to fuel stock market optimism, the pace of growth is slowing compared with the company’s earlier surge.

Samsung enhances TV and monitor range with Copilot AI

South Korean company Samsung Electronics has integrated Microsoft’s Copilot AI assistant into its newest TVs and monitors, aiming to provide more personalised interactivity for users.

The technology will be available across models released each year, including the premium Micro RGB TV. With Copilot built directly into displays, Samsung explained that viewers can use voice commands or a remote control to search, learn and engage with content.

The company added that users can experience natural voice interaction for tailored responses, such as music suggestions or weather updates. Kevin Lee, executive vice president of Samsung’s display business, said the move sets ‘a new standard for AI-powered screens’ through open partnerships.

Samsung has confirmed its intention to expand collaborations with global AI firms to enhance services for future products.

WhatsApp launches AI assistant for editing messages

Meta’s WhatsApp has introduced a new AI feature called Writing Help, designed to assist users in editing, rewriting, and refining the tone of their messages. The tool can adjust grammar, improve phrasing, or reframe a message in a more professional, humorous, or encouraging style before it is sent.

The feature operates through Meta’s Private Processing technology, which ensures that messages remain encrypted and private instead of being visible to WhatsApp or Meta.

According to the company, Writing Help processes requests anonymously and cannot trace them back to the user. The function is optional, disabled by default, and only applies to the chosen message.

To activate the feature, users can tap a small pencil icon that appears while composing a message.

In a demonstration, WhatsApp showed how the tool could turn ‘Please don’t leave dirty socks on the sofa’ into more light-hearted alternatives, including ‘Breaking news: Socks found chilling on the couch’ or ‘Please don’t turn the sofa into a sock graveyard.’

By introducing Writing Help, WhatsApp aims to make communication more flexible and engaging while keeping user privacy intact. The company emphasises that no information is stored, and AI-generated suggestions only appear if users decide to enable the option.

Google alerts users after detecting malware spread through captive portals

Google has issued warnings to some users after detecting a web traffic hijacking campaign that delivered malware through manipulated login portals.

According to the company’s Threat Intelligence Group, attackers compromised network edge devices to modify captive portals, the login pages often seen when joining public Wi-Fi or corporate networks.

Instead of leading to legitimate security updates, the altered portals redirected users to a fake page presenting an ‘Adobe Plugin’ update. The file, once installed, deployed malware known as CANONSTAGER, which enabled the installation of a backdoor called SOGU.SEC.

The software, named AdobePlugins.exe, was signed with a valid GlobalSign certificate linked to Chengdu Nuoxin Times Technology Co., Ltd. Google stated it is tracking multiple malware samples connected to the same certificate.

The company attributed the campaign to a group it tracks as UNC6384, also known by other names including Mustang Panda, Silk Typhoon, and TEMP.Hex.

Google said it first detected the campaign in March 2025 and sent alerts to affected Gmail and Workspace users. The operation reportedly targeted diplomats in Southeast Asia and other entities worldwide, suggesting a potential link to cyber espionage activities.

Google advised users to enable Enhanced Safe Browsing in Chrome, keep devices updated, and use two-step verification for stronger protection.

Real-time conversations feel smoother with Google Translate’s Gemini AI update

Google Translate is receiving powerful Gemini AI upgrades that make speaking across languages feel far more natural.

The refreshed live conversation mode intelligently recognises pauses, accents, and background noise, allowing two people to talk without the rigid back-and-forth of older versions. Google says the new system should even work in noisy environments like cafes, a real-world challenge for speech technology.

The update also introduces a practice mode that pushes Translate beyond its traditional role as a utility. Users can set their skill level and goals, then receive personalised listening and speaking exercises designed to build confidence.

The tool is launching in beta for selected language pairs, such as English to Spanish or French, but it signals Google’s ambition to blend translation with education.

By bringing advanced translation capabilities first seen on Pixel devices into the widely available Translate app, Google is making real-time multilingual communication far more widely accessible.

It’s a practical application of AI that could change everyday conversations and how people learn new languages.

Tencent Cloud sites exposed credentials and source code in major security lapse

Researchers have uncovered severe misconfigurations in two Tencent Cloud sites that exposed sensitive credentials and internal source code to the public. The flaws could have given attackers access to Tencent’s backend infrastructure and critical internal services.

Cybernews discovered the data leaks in July 2025, finding hardcoded plain-text passwords, a sensitive internal .git directory, and configuration files linked to Tencent’s load balancer and JEECG development platform.

Weak passwords, built from predictable patterns like the company name and year, increased the risk of exploitation.

The exposed data may have been accessible since April, leaving months of opportunity for scraping bots or malicious actors.

With administrative console access, attackers could have tampered with APIs, planted malicious code, pivoted deeper into Tencent’s systems, or abused the trusted domain for phishing campaigns.

Tencent confirmed the incident as a ‘known issue’ and has since closed access, though questions remain over how many parties may have already retrieved the exposed information.

Security experts warn that even minor oversights in cloud operations can cascade into serious vulnerabilities, especially for platforms trusted by millions worldwide.

ChatGPT faces scrutiny as OpenAI updates protections after teen suicide case

OpenAI has announced new safety measures for its popular chatbot following a lawsuit filed by the parents of a 16-year-old boy who died by suicide after relying on ChatGPT for guidance.

The parents allege the chatbot isolated their son and contributed to his death earlier in the year.

The company said it will improve ChatGPT’s ability to detect signs of mental distress, including indirect expressions such as users mentioning sleep deprivation or feelings of invincibility.

It will also strengthen safeguards around suicide-related conversations, which OpenAI admitted can break down in prolonged chats. Planned updates include parental controls, access to usage details, and clickable links to local emergency services.

OpenAI stressed that its safeguards work best during short interactions, acknowledging weaknesses in longer exchanges. It also said it is considering building a network of licensed professionals that users could access through ChatGPT.

The company added that content filtering errors, where serious risks are underestimated, will also be addressed.

The lawsuit comes amid wider scrutiny of AI tools by regulators and mental health experts. Attorneys general from more than 40 US states recently warned AI companies of their duty to protect children from harmful or inappropriate chatbot interactions.

Critics argue that reliance on chatbots for support instead of professional care poses growing risks as usage expands globally.
