UK users face reduced cloud security as Apple responds to government pressure

Apple has withdrawn its Advanced Data Protection (ADP) feature for cloud backups in Britain, citing government requirements.

Users attempting to enable the feature now receive an error message, while existing users will eventually have to disable it. The move weakens iCloud security in the country, giving authorities access to data that would otherwise be end-to-end encrypted.

Experts warn that the change compromises user privacy and exposes data to potential cyber threats. Apple has insisted it will not create a backdoor for encrypted services, as doing so would increase security risks.

The UK government has not confirmed whether it issued a Technical Capability Notice, which could mandate such access.

Apple’s decision highlights ongoing tensions between tech companies and governments over encryption policies. Similar legal frameworks exist in countries like Australia, raising concerns that other nations could follow suit.

Security advocates argue that strong encryption is essential for protecting user privacy and safeguarding sensitive information from cybercriminals.

For more information on these topics, visit diplomacy.edu.

China and North Korea-linked accounts shut down by OpenAI

OpenAI has removed accounts linked to users in China and North Korea over concerns they were using ChatGPT for malicious activities.

The company cited cases of AI-generated content being used for surveillance, influence campaigns, and fraudulent schemes, and said it used its own AI tools to detect the operations.

Some accounts produced news articles in Spanish that criticised the US and were later published under a Chinese company’s byline. Others, potentially connected to North Korea, created fake resumes and online profiles in an attempt to secure jobs at Western firms.

A separate operation, believed to be tied to financial fraud in Cambodia, used ChatGPT to generate and translate comments on social media.

The US government has raised concerns over China’s use of AI to spread misinformation and suppress its population. Security risks associated with AI-driven disinformation and fraudulent activities have led to increased scrutiny of how such tools are being used globally.

OpenAI’s ChatGPT remains the most widely used AI chatbot, with over 400 million weekly active users. The company is also in discussions to secure up to $40 billion in funding, which could set a record for a private firm.

Australia slaps A$1 million fine on Telegram

Australia’s eSafety Commission has fined messaging platform Telegram A$1 million ($640,000) for failing to respond promptly to questions regarding measures it took to prevent child abuse and extremist content. The Commission had asked social media platforms, including Telegram, to provide details on their efforts to combat harmful content. Telegram missed the May 2024 deadline, submitting its response in October, which led to the fine.

eSafety Commissioner Julie Inman Grant emphasised the importance of timely transparency and adherence to Australian law. Telegram, however, disagreed with the penalty, stating that it had fully responded to the questions, and plans to appeal the fine, which it claims was solely due to the delay in response time.

The fine comes amid increasing global scrutiny of Telegram, with growing concerns over its use by extremists. Australia’s spy agency recently noted that a significant portion of counter-terrorism cases involved youth, highlighting the increasing risk posed by online extremist content. If Telegram does not comply with the penalty, the eSafety Commission could pursue further legal action.

Quantum computing could render today’s encryption obsolete

The rise of quantum computing poses a serious threat to modern encryption systems, with experts warning that critical digital infrastructure could become vulnerable once quantum devices reach sufficient power.

Unlike classical computers, which process information as binary bits, quantum computers use qubits, which exploit superposition and interference to explore many computational paths at once.

This capability could make breaking widely used encryption methods such as RSA possible in minutes, a task that would take today’s computers thousands of years.
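The asymmetry can be illustrated even classically: RSA’s security rests on the difficulty of recovering the prime factors of a large modulus, a cost that grows roughly exponentially with key size for naive classical methods, whereas Shor’s algorithm on a sufficiently large quantum computer would factor the same numbers in polynomial time. A minimal toy sketch of classical trial-division factoring (illustrative only; real cryptanalysis uses far more sophisticated algorithms, and real RSA moduli are thousands of bits long):

```python
# Naive trial-division factoring: the cost grows with the smaller prime
# factor, i.e. roughly exponentially in the bit-length of the modulus.
# This is why factoring a real 2048-bit RSA modulus is infeasible
# classically, while Shor's algorithm would make it tractable.

def trial_division(n: int) -> int:
    """Return the smallest prime factor of n (n > 1)."""
    if n % 2 == 0:
        return 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    return n  # n itself is prime

# A toy "RSA modulus": the product of two small primes.
p, q = 1009, 1013
n = p * q
print(trial_division(n))  # recovers the smaller prime, 1009
```

Doubling the bit-length of the toy modulus squares the work this loop must do, which is the exponential wall that keeps RSA safe against classical attackers today.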

Although quantum systems powerful enough to crack encryption may still be years away, there is growing concern that hackers could already be collecting encrypted data to decode it once the technology catches up.

Sensitive information—such as national security data, intellectual property, and personal records—could be at risk. In response, the US National Institute of Standards and Technology has introduced new post-quantum encryption standards and is encouraging organisations to transition swiftly, though the scale of the upgrade needed across global infrastructure remains immense.

Updating web browsers and modern devices may be straightforward, but older systems, critical infrastructure, and the growing number of Internet of Things (IoT) devices pose significant challenges.

Satellites, for instance, vary in how easily they can be upgraded, with remote sensing satellites often requiring full replacements. Cybersecurity experts stress the need for ‘crypto agility’ to make the transition manageable, aiming to avoid a chaotic scramble once quantum threats materialise.
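‘Crypto agility’ in practice means making the cipher or signature scheme a swappable configuration choice rather than something hard-wired into application code. A hypothetical Python sketch of the pattern (the registry, scheme names, and toy ‘signatures’ below are illustrative placeholders, not a real cryptography library):

```python
# Crypto-agility pattern: callers resolve "the current signature scheme"
# from a registry driven by configuration, so a post-quantum algorithm
# can later be swapped in without touching application code.

from typing import Callable, Dict, Tuple

# Maps an algorithm name to (sign, verify) callables. Real deployments
# would wrap an actual classical scheme today and a post-quantum one later.
_REGISTRY: Dict[str, Tuple[Callable, Callable]] = {}

def register(name: str, sign: Callable, verify: Callable) -> None:
    _REGISTRY[name] = (sign, verify)

def get_scheme(config: Dict[str, str]) -> Tuple[Callable, Callable]:
    """Resolve the active scheme from configuration, not source code."""
    return _REGISTRY[config["signature_scheme"]]

# Toy placeholder "schemes" used only to exercise the pattern.
register("classical-toy", lambda m: m[::-1], lambda m, s: s == m[::-1])
register("pq-toy", lambda m: m.upper(), lambda m, s: s == m.upper())

config = {"signature_scheme": "classical-toy"}
sign, verify = get_scheme(config)
assert verify("hello", sign("hello"))

# Migrating to a post-quantum scheme becomes a configuration change,
# not a code rewrite:
config["signature_scheme"] = "pq-toy"
sign, verify = get_scheme(config)
assert verify("hello", sign("hello"))
```

Systems built this way can roll out new algorithms incrementally, which is exactly the orderly transition experts hope to achieve before quantum threats materialise.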

Content creators fear financial risks amid TikTok ban talks

For many creators, TikTok has become more than just a platform for viral trends—it’s their livelihood. Beauty content creator Leila Nikea left her job as a make-up artist three years ago to focus solely on TikTok, tripling her income and even buying her first home.

Yet, uncertainty surrounding TikTok’s future has left her anxious, especially after the recent threat of a US ban over national security concerns. Although the ban was briefly implemented and then postponed, ongoing scrutiny has made creators like Leila fear for their financial stability.

Musicians Howard and George, known as The Whiskey Brothers, share similar concerns. After nearly two decades performing as a wedding band, TikTok finally gave them a platform to reach new audiences with their original music.

Their growing following led to their first official gig under their new name. However, the prospect of future bans has cast a shadow over their plans, making them question the long-term sustainability of their careers on TikTok.

Veteran tech influencer Safwan Ahmedmia, better known as SuperSaf, has already faced the consequences of a TikTok ban when India blocked the app in 2020, costing him thousands of followers. Now, he spreads his content across multiple platforms, advising fellow creators to do the same.

As debates over TikTok’s data privacy and security continue worldwide, creators are increasingly aware of the fragility of their digital careers. While many remain committed to their passions, the platform’s instability serves as a stark reminder of the risks tied to relying on a single app for income.

New STLA AutoDrive enables hands-free urban commuting

Stellantis has unveiled its first in-house-developed automated driving system, STLA AutoDrive, designed to assist urban commuters with hands-free and eyes-off driving. The system can manage speed, steering and braking while adapting to traffic flow.

The new technology allows drivers to momentarily shift focus from the road at speeds of up to 60 kilometres per hour.

Stellantis confirmed that future advancements could enable operation at speeds reaching 95 kilometres per hour.

Deployment of STLA AutoDrive will be determined by market demand, with integration planned across Stellantis’ vehicle brands. The system represents a step forward in the company’s push for enhanced driving automation.

AI tool matches years of superbug research in record time

Scientists at Imperial College London have been left astonished after an AI tool replicated and even expanded on a decade of their superbug research in just 48 hours. Professor José R. Penadés and his team had spent years investigating how antibiotic-resistant bacteria develop, only for Google’s AI system, ‘co-scientist,’ to reach the same conclusion almost instantly. Even more remarkably, the AI generated additional hypotheses, one of which the researchers had never considered and are now actively exploring.

The discovery has sparked excitement over AI’s potential to revolutionise scientific progress. Had the researchers possessed the AI-generated hypothesis at the start of their project, it could have saved years of effort. However, the breakthrough also raises concerns about AI’s growing role in scientific fields traditionally driven by human expertise. Some fear automation could replace jobs, while others see it as a powerful tool to accelerate discoveries and push the boundaries of knowledge.

Despite initial scepticism, Prof. Penadés described the experience as ‘spectacular’ and believes AI will transform science. Comparing it to competing in a Champions League final, he emphasised that rather than replacing researchers, AI has the potential to act as a powerful collaborator. As technology continues to advance, the challenge will be to balance AI’s immense capabilities with the need for human oversight and ethical considerations in research, not just in the UK, but globally.

Brazil slaps X with $1.42 million fine for noncompliance

Brazil’s Supreme Court Justice Alexandre de Moraes has fined social media platform X, owned by Elon Musk, 8.1 million reais ($1.42 million) for failing to comply with judicial orders. The ruling, made public on Thursday, follows a legal case from 2023 in which the court had instructed X to remove a profile spreading misinformation and provide the user’s registration data.

X’s failure to meet these demands resulted in a daily fine of 100,000 reais, and the company’s local legal representative faced potential criminal liability. The court order required the immediate payment of the fine, citing the platform’s noncompliance. X’s legal team in Brazil has not commented on the matter.

In 2024, X faced a month-long suspension in Brazil for not adhering to court orders related to hate speech moderation and for failing to designate a legal representative in the country, as mandated by law.

Sanas raises millions to transform call centre communication

AI start-up Sanas has raised $65 million in a new funding round, valuing the company at over $500 million. The firm, founded in 2020, uses artificial intelligence to modify call centre workers’ accents in real time, aiming to reduce discrimination and improve communication. Its software preserves the speaker’s emotions and identity while adjusting phonetic patterns instantly.

The company was inspired by a call centre worker’s struggle with accent bias, leading its founders to develop a solution that enhances clarity without replacing human connection. Despite concerns that such technology may homogenise voices rather than promote acceptance of diverse accents, Sanas insists its mission is to break barriers and reduce discrimination.

With an annual revenue of $21 million and a growing client base across healthcare, logistics, and manufacturing, Sanas is rapidly expanding. The company plans to develop new AI-driven speech technologies, increase its global presence, and open an office in the Philippines, a major hub for call centres.

Gemini AI now requires separate app on iOS

Google has removed its AI assistant, Gemini, from the main Google app on iOS, encouraging users to download the standalone Gemini app instead. The change, announced via an email to customers, is seen as a strategic move to position Gemini as a direct competitor to AI chatbots like ChatGPT and Claude.

The dedicated Gemini app allows users to interact with the AI assistant through voice and text, integrate it with Google services like Search and YouTube, and access advanced features such as AI-generated summaries and image creation. Those who attempt to use Gemini in the main Google app will now see a message directing them to the App Store.

While the shift may enable Google to roll out new AI features more efficiently, it also risks reducing Gemini’s reach, as some users may not be inclined to download a separate app. The company is also promoting its Google One AI Premium plan through the Gemini app, offering access to its more advanced capabilities.
