Autonomous vehicles fuel surge in 5G adoption

The global 5G automotive market is expected to grow sharply from $2.58 billion in 2024 to $31.18 billion by 2034, fuelled by the rapid adoption of connected and self-driving vehicles.

A compound annual growth rate of over 28% reflects the strong momentum behind the transition to smarter mobility and safer road networks.
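
As a rough sanity check on that figure, the implied growth rate can be computed directly from the two market estimates above; a minimal sketch in Python, assuming straight compounding over the ten years from 2024 to 2034:

    # Rough sanity check of the reported CAGR, assuming the market grows
    # from $2.58bn (2024) to $31.18bn (2034), i.e. compounding over 10 years.
    start, end, years = 2.58, 31.18, 10
    cagr = (end / start) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")  # roughly 28.3%, consistent with 'over 28%'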

Vehicle-to-everything (V2X) communication is predicted to lead adoption, as it allows vehicles to exchange real-time data with other cars, infrastructure and even pedestrians.

In-car entertainment systems are also growing fast, with consumers demanding smoother connectivity and on-the-go access to apps and media.

Autonomous driving, advanced driver-assistance features and real-time navigation all benefit from 5G’s low latency and high-speed capabilities. Automakers such as BMW have already begun integrating 5G into electric models to support automated functions.

Meanwhile, the US government has pledged $1.5 billion to build smart transport networks that rely on 5G-powered communication.

North America remains ahead due to early 5G rollouts and strong manufacturing bases, but Asia Pacific is catching up fast through smart city investment and infrastructure development.

Regulatory barriers and patchy rural coverage continue to pose challenges, particularly in regions with strict data privacy laws or limited 5G networks.

North Korea turns to Russia for AI development help

North Korea is dispatching AI researchers, interns and students to countries such as Russia in an effort to strengthen its domestic tech sector, according to a report by NK News.

The move comes despite strict UN sanctions that restrict technological exchange, particularly in high-priority areas like AI.

Kim Kwang Hyok, head of the AI Institute at Kim Il Sung University, confirmed the strategy in an interview with a pro-Pyongyang outlet in Japan. He admitted that international restrictions remain a major hurdle but noted that researchers continue developing AI applications within North Korea regardless.

Among the projects cited is ‘Ryongma’, a multilingual translation app supporting English, Russian, and Chinese, which has been available on mobile devices since 2021.

Kim also mentioned efforts to develop an AI-driven platform for a hospital under construction in Pyongyang. However, technical limitations remain considerable, with just three known semiconductor plants operating in the country.

While Russia may seem like a natural partner, its own dependence on imported hardware limits how much it can help.

A former South Korean diplomat told NK News that Moscow lacks the domestic capacity to provide high-performance chips essential for advanced AI work, making large-scale collaboration difficult.

Spotify under fire for AI-generated songs on memorial artist pages

Spotify is facing criticism after AI-generated songs were uploaded to the pages of deceased artists without consent from estates or rights holders.

The latest case involves country singer-songwriter Blaze Foley, who died in 1989. A track titled ‘Together’ was posted to his official Spotify page over the weekend. The song sounded vaguely like a slow country ballad and was paired with AI-generated cover art showing a man who bore no resemblance to Foley.

Craig McDonald, whose label manages Foley’s catalogue, confirmed the track had nothing to do with the artist and described it as inauthentic and harmful. ‘I can clearly tell you that this song is not Blaze, not anywhere near Blaze’s style, at all,’ McDonald told 404 Media. ‘It has the authenticity of an algorithm.’

He criticised Spotify for failing to prevent such uploads and said the company had a duty to stop AI-generated music from appearing under real artists’ names.

‘It’s kind of surprising that Spotify doesn’t have a security fix for this type of action,’ he said. ‘They could fix this problem if they had the will to do so.’ Spotify said it had flagged the track to distributor SoundOn and removed it for violating its deceptive content policy.

However, similar uploads have already emerged from the same source. The entity behind the Foley track, Syntax Error, was also linked to another AI-generated song titled ‘Happened To You’, uploaded last week under the name of Grammy-winning artist Guy Clark, who died in 2016.

Both tracks have since been removed, but Spotify has not explained how Syntax Error was able to post them using the names and likenesses of late musicians. The controversy is the latest in a wave of AI music incidents slipping through streaming platforms’ content checks.

Earlier this year, an AI-generated band called The Velvet Sundown amassed over a million Spotify streams before disclosing that all their vocals and instrumentals were made by AI.

Another high-profile case involved a fake Drake and The Weeknd collaboration, ‘Heart on My Sleeve’, which gained viral traction before being taken down by Universal Music Group.

Rights groups and artists have repeatedly warned about AI-generated content misrepresenting performers and undermining creative authenticity. As AI tools become more accessible, streaming platforms face mounting pressure to improve detection and approval processes to prevent further misuse.

Surge in UK corporate data leaks fuels fraud fears

Cybersecurity experts in London have warned of a sharp increase in corporate data breaches, with leaked files now frequently containing sensitive financial and personal records.

A new report by Lab 1 reveals that 93% of such breaches involve documents such as invoices, IBANs and bank statements, fuelling widespread fraud and reputational damage in the UK.

The study, which examined 141 million leaked files, shows how hackers increasingly target unstructured data such as HR records, emails and internal code.

Often ignored in standard breach reviews, these files contain rich details that can be used for identity theft or follow-up cyberattacks.

According to Lab 1’s CEO, hackers are now behaving more like data scientists, mining leaks for valuable information to exploit. The average breach now indirectly affects more than 400 organisations, including business partners and vendors, significantly widening the fallout.

Android malware infects millions of devices globally

Millions of Android-based devices have been infected by a new strain of malware called BadBox 2.0, prompting urgent warnings from Google and the FBI. The malicious software can trigger ransomware attacks and collect sensitive user data.

The infected devices are primarily cheap, off-brand products manufactured in China, many of which come preloaded with the malware. Models such as the X88 Pro 10, T95, and QPLOVE Q9 are among those identified as compromised.

Google has launched legal action to shut down the illegal operation, calling BadBox 2.0 the largest botnet linked to internet-connected TVs. The FBI has advised the public to disconnect any suspicious devices and check for unusual network activity.

The malware generates illicit revenue through adware and poses broader cybersecurity threats, including denial-of-service attacks. Consumers are urged to avoid unofficial products and verify devices are Play Protect-certified before use.

Replit revamps data architecture following live database deletion

Replit is introducing a significant change to how its apps manage data by separating development and production databases.

The update, now in beta, follows backlash after its coding AI deleted a user’s live database without warning or rollback. Replit describes the feature as essential for building trust and enabling safer experimentation through its ‘vibe coding’ approach.

Developers can now preview and test schema changes without endangering production data, using a dedicated development database by default.

The incident that prompted the shift involved SaaStr.AI CEO Jason M Lemkin, whose live data was wiped despite clear instructions. Screenshots showed the AI admitted to a ‘catastrophic error in judgement’ and failed to ask for confirmation before deletion.

Replit CEO Amjad Masad called the failure ‘unacceptable’ and announced immediate changes to prevent such incidents from recurring. Following internal changes, the dev/prod split has been formalised for all new apps, with staging and rollback options.

Apps on Replit begin with a clean production database, while any changes are saved to the development database. Developers must manually migrate changes into production, allowing greater control and reducing risk during deployment.
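
Replit has not published the exact mechanics of the split, but the general pattern it describes is straightforward to sketch: route everyday work to a development database and make promotion to production an explicit, confirmed step. A minimal illustration in Python, with hypothetical names (DEV_DATABASE_URL, PROD_DATABASE_URL, migrate_to_production) standing in for whatever Replit actually uses:

    import os

    # Hypothetical sketch of a dev/prod database split; not Replit's actual API.
    # Everyday work hits the development database; production is only touched
    # through a deliberate, confirmed migration step.
    DEV_DATABASE_URL = os.environ.get("DEV_DATABASE_URL", "sqlite:///dev.db")
    PROD_DATABASE_URL = os.environ.get("PROD_DATABASE_URL", "sqlite:///prod.db")

    def database_url(environment: str = "development") -> str:
        """Return the connection string for the requested environment."""
        return PROD_DATABASE_URL if environment == "production" else DEV_DATABASE_URL

    def migrate_to_production(confirm: bool = False) -> None:
        """Apply reviewed schema changes to production, only when asked explicitly."""
        if not confirm:
            raise RuntimeError("Refusing to touch production without confirm=True")
        print(f"Applying migration to {database_url('production')}")

    print(database_url())                # day-to-day work stays on the dev database
    migrate_to_production(confirm=True)  # promoting changes is a deliberate act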

Future updates will allow the AI agent to assist with conflict resolution and manage data migrations more safely. Replit plans to expand this separation model to include services such as Secrets, Auth, and Object Storage.

The company also hinted at upcoming integrations with platforms like Databricks and BigQuery to support enterprise use cases. Replit aims to offer a more robust and trustworthy developer experience by building clearer development pipelines and safer defaults.

Historic UK transport firm KNP collapses after ransomware attack

A 158‑year‑old UK transport firm, KNP Logistics, has collapsed after falling victim to a crippling ransomware attack. Hackers exploited a single weak password to infiltrate its systems and encrypted critical data, rendering the company inoperable.

Cybercriminals linked to the Akira gang locked out staff and demanded what experts believe could have been around £5 million, an amount KNP could not afford. The company ceased all operations, leaving approximately 700 employees without work.

The incident highlights how even historic companies with insurance and standard safeguards can be undone by basic cybersecurity failings. National Cyber Security Centre chief Richard Horne urged businesses to bolster defences, warning that attackers exploit the simplest vulnerabilities.

This case follows a string of high‑profile UK data breaches at firms like M&S, Harrods and Co‑op, signalling a growing wave of ransomware threats across industries. National Crime Agency data shows these attacks have nearly doubled recently.

Teens struggle to spot misinformation despite daily social media use

Misinformation online now touches every part of life, from fake products and health advice to political propaganda. Its influence extends beyond beliefs, shaping actions like voting behaviour and vaccination decisions.

Unlike traditional media, online platforms rarely include formal checks or verification, allowing false content to spread freely.

This is especially worrying as teenagers increasingly use social media as a main source of news and even as a search tool. Despite their heavy usage, young people often lack the skills needed to spot false information.

In one 2022 Ofcom study, only 11% of 11 to 17-year-olds could consistently identify genuine posts online.

Research involving 11 to 14-year-olds revealed that many wrongly believed misinformation only related to scams or global news, so they didn’t see themselves as regular targets. Rather than fact-check, teens relied on gut feeling or social cues, such as comment sections or the appearance of a post.

These shortcuts make it easier for misinformation to appear trustworthy, especially when many adults also struggle to verify online content.

The study also found that young people thought older adults were more likely to fall for misinformation, while they believed their parents were better than them at spotting false content. Most teens felt it wasn’t their job to challenge false posts, instead placing the responsibility on governments and platforms.

In response, researchers have developed resources for young people, partnering with organisations like Police Scotland and Education Scotland to support digital literacy and online safety in practical ways.

Iran’s digital economy suffers heavy losses from internet shutdowns

Iran’s Minister of Communications has revealed the country’s digital economy shrank by 30% in just one month, losing around $170 million due to internet restrictions imposed during its recent 12-day conflict with Israel.

Sattar Hashemi told parliament on 22 July that roughly 10 million Iranians rely on digital jobs, but widespread shutdowns caused severe disruptions across platforms and services.

Hashemi estimated that every two days of restrictions inflicted 10 trillion rials in losses, totalling 150 trillion rials — an amount he said rivals the annual budgets of entire ministries.

While acknowledging the damage, he clarified that his ministry was not responsible for the shutdowns, attributing them instead to decisions made by intelligence and security agencies for national security reasons.

Alongside the blackouts, Iran endured over 20,000 cyberattacks during the conflict. Many of these targeted banks and payment systems, with platforms for Bank Sepah and Bank Pasargad knocked offline, halting salaries for military personnel.

Hacktivist groups such as Predatory Sparrow and Tapandegan claimed credit for the attacks, with some incidents reportedly wiping out crypto assets and further weakening the rial by 12%.

Lawmakers are now questioning the unequal structure of internet access. Critics have accused the government of enabling a ‘class-based internet’ in which insiders retain full access while the public faces heavy censorship.

MP Salman Es’haghi warned that Iran’s digital future cannot rely on filtered networks, demanding transparency about who benefits from unrestricted use.

NASA hacks Jupiter probe camera to recover vital images

NASA engineers have revealed they remotely repaired a failing camera aboard the Juno spacecraft orbiting Jupiter using a bold heating technique known as annealing.

Instead of replacing the hardware, which was impossible given the 595 million kilometre distance from Earth, the team deliberately overheated the camera’s internals to reverse suspected radiation damage.

JunoCam, designed to last only eight orbits, surprisingly survived over 45 before image quality deteriorated on the 47th. Engineers suspected a voltage regulator fault and chose to heat the camera to 77°F (25°C), altering the silicon at a microscopic level.

The risky fix temporarily worked, but the issue resurfaced, prompting a second annealing at maximum heat just before a close flyby of Jupiter’s moon Io in late 2023.

The experiment’s success encouraged further tests on other Juno instruments, offering valuable insights into spacecraft resilience. Although NASA didn’t confirm whether these follow-ups succeeded, the effort highlighted the increasing need for in-situ repairs as missions explore deeper into space.

While JunoCam resumed high-quality imaging up to orbit 74, new signs of degradation have since appeared. NASA hasn’t yet confirmed whether another fix is planned or if the camera’s mission has ended.
