Kuwait to strengthen telecom resilience amid regional tensions

Kuwait is implementing strategic policies to disaster-proof its telecommunications and digital infrastructure in light of rising regional tensions, particularly the ongoing conflict between Iran and the Zionist entity. Under any emergency scenario, these policies prioritise the continuity of essential services, such as the internet, mobile networks, and digital government systems.

To operationalise this approach, the government, led by the Minister of State for Communication Affairs, convened a high-level emergency meeting with key stakeholders, including the Ministry of Communications, the Communications and Information Technology Regulatory Authority (CITRA), and major telecom providers like Zain, Ooredoo, stc, and Virgin Mobile. The goal is to ensure unified national readiness through regular coordination, planning, and communication.

Kuwait is reinforcing its technical and operational capabilities to support these policies. The Ministry of Communications has raised its alert level and is conducting real-time monitoring of local networks to detect and respond to disruptions quickly.

Telecom providers have confirmed their infrastructure is prepared for various emergency scenarios, citing the activation of emergency centres, advanced technical support systems, and contingency plans. At the same time, CITRA has taken steps to maintain stable data flows by activating local internet exchange points (IXs) and securing alternative international routing paths, measures designed to minimise the impact of any potential regional connectivity breakdown.

In parallel, Kuwait is safeguarding digital public services as a core part of its policy framework. The Central Agency for Information Technology (CAIT) has implemented contingency plans and system integration efforts to ensure the continuity of government digital services. These measures aim to guarantee that citizens can access essential services, even during crises.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

ChatGPT now supports MCP for business data access, but safety risks remain

OpenAI has officially enabled support for Anthropic’s Model Context Protocol (MCP) in ChatGPT, allowing businesses to connect their internal tools directly to the chatbot through Deep Research.

The development enables employees to retrieve company data from previously siloed systems, offering real-time access to documents and search results via custom-built MCP servers.

Adopting MCP — an open industry protocol recently embraced by OpenAI, Google and Microsoft — opens new possibilities and presents security risks.

OpenAI advises users to avoid third-party MCP servers unless hosted by the official service provider, warning that unverified connections may carry prompt injections or hidden malicious directives. Users are urged to report suspicious activity and avoid exposing sensitive data during integration.

To connect tools, developers must set up an MCP server and create a tailored connector within ChatGPT, complete with detailed instructions. The feature is now live for ChatGPT Enterprise, Team and Edu users, who can share the connector across their workspace as a trusted data source.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Salt Typhoon hackers breached Viasat during 2024 presidential campaign

According to Bloomberg News, satellite communications firm Viasat Inc. was reportedly among the targets of the Chinese-linked cyberespionage operation known as Salt Typhoon, which coincided with the 2024 US presidential campaign.

The breach, believed to have occurred last year, was discovered in 2025. Viasat confirmed it had investigated the incident in cooperation with an independent cybersecurity partner and relevant government authorities.

According to the company, the unauthorised access stemmed from a compromised device, though no evidence of customer impact has been found. ‘Viasat believes that the incident has been remediated and has not detected any recent activity related to this event,’ the firm stated, reaffirming its collaboration with United States officials.

Salt Typhoon, attributed to China by US intelligence, has previously been accused of breaching major telecom networks, including Verizon, AT&T and Lumen. Hackers allegedly gained full access to internal systems, enabling the geolocation of millions of users and the interception of phone calls.

In December 2024, US officials disclosed that a ninth telecom company had been compromised and confirmed that individuals connected to both Kamala Harris’s and Donald Trump’s presidential campaigns were targeted.

Chinese authorities have consistently rejected the claims, labelling them disinformation. Beijing maintains it ‘firmly opposes and combats cyberattacks and cybertheft in all forms’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft begins password deletion in six weeks

Microsoft has announced that it will begin deleting saved passwords from its Authenticator app in six weeks, urging users to shift to more secure passkeys. The company confirmed that by August 2025, saved passwords will no longer be accessible, marking a decisive move away from traditional logins.

Users can transition their credentials to Microsoft Edge or adopt passkeys, which are less vulnerable to phishing and breaches. Despite growing risks, Google is making similar recommendations as most users still rely on passwords or outdated two-factor authentication.

The changes reflect a broader industry push to phase out passwords entirely, citing their inherent insecurity and the surge in credential-based attacks. Microsoft also warned that attackers are intensifying efforts to exploit passwords before their relevance fades.

Authenticator will continue supporting passkeys, but users must keep it enabled as their passkey provider. Microsoft’s message is clear: act now to secure your accounts before password support disappears.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK cyber agency warns AI will accelerate cyber threats by 2027

The UK’s National Cyber Security Centre has warned that integrating AI into national infrastructure creates a broader attack surface, raising concerns about an increased risk of cyber threats.

Its latest report outlines how AI may amplify the capabilities of threat actors, especially when it comes to exploiting known vulnerabilities more rapidly than ever before.

By 2027, AI-enabled tools are expected to shorten the time between vulnerability disclosure and exploitation significantly. The evolution could pose a serious challenge for defenders, particularly within critical systems.

The NCSC notes that the risk of advanced cyber attacks will likely escalate unless organisations can keep pace with so-called ‘frontier AI’.

The centre also predicts a growing ‘digital divide’ between organisations that adapt to AI-driven threats and those left behind. The divide could further endanger the overall cyber resilience of the UK. As a result, decisive action is being urged to close the gap and reduce future risks.

NCSC operations director Paul Chichester said AI is expanding attack surfaces, increasing the volume of threats, and speeding up malicious activity. He emphasised that while these dangers are real, AI can strengthen the UK’s cyber defences.

Organisations are encouraged to adopt robust security practices using resources like the Cyber Assessment Framework, the 10 Steps to Cyber Security, and the new AI Cyber Security Code of Practice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta AI adds pop-up warning after users share sensitive info

Meta has introduced a new pop-up in its Meta AI app, alerting users that any prompts they share may be made public. While AI chat interactions are rarely private by design, many users appeared unaware that their conversations could be published for others to see.

The Discovery feed in the Meta AI app had previously featured conversations that included intimate details—such as break-up confessions, attempts at self-diagnosis, and private photo edits.

According to multiple reports last week, these were often shared unknowingly by users who may not have realised the implications of the app’s sharing functions. Mashable confirmed this by finding such examples directly in the feed.

Now, when a user taps the ‘Share’ button on a Meta AI conversation, a new warning appears: ‘Prompts you post are public and visible to everyone. Your prompts may be suggested by Meta on other Meta apps. Avoid sharing personal or sensitive information.’ A ‘Post to feed’ button then appears below.

Although the sharing step has always required users to confirm, Business Insider reports that the feature wasn’t clearly explained—leading some users to publish their conversations unintentionally. The new alert aims to clarify that process.

As of this week, Meta AI’s Discovery feed features mostly AI-generated images and more generic prompts, often from official Meta accounts. For users concerned about privacy, there is an option in the app’s settings to opt out of the Discovery feed altogether.

Still, experts advise against entering personal or sensitive information into AI chatbots, including Meta AI. Adjusting privacy settings and avoiding the ‘Share’ feature are the best ways to protect your data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google warns against weak passwords amid £12bn scams

Gmail users are being urged to upgrade their security as online scams continue to rise sharply, with cyber criminals stealing over £12 billion in the past year alone. Google is warning that simple passwords leave people vulnerable to phishing and account takeovers.

To combat the threat, users are encouraged to switch to passkeys or use ‘Sign in with Google’, both of which offer stronger protections through fingerprint, face ID or PIN verification. Over 60% of Baby Boomers and Gen X users still rely on weak passwords, increasing their exposure to attacks.

Despite the availability of secure alternatives, only 30% of users reportedly use them daily. Gen Z is leading the shift by adopting newer tools, bypassing outdated security habits altogether.

Google recommends adding 2-Step Verification for those unwilling to leave passwords behind. With scams growing more sophisticated, extra security measures are no longer optional, they are essential.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China’s robotics industry set to double by 2028, led by drones and humanoid robots

China’s robotics industry is on course to double in size by 2028, with Morgan Stanley projecting market growth from US$47 billion in 2024 to US$108 billion.

With an annual expansion rate of 23 percent, the country is expected to strengthen its leadership in this fast-evolving field. Analysts credit China’s drive for innovation and cost efficiency as key to advancing next-generation robotics.

A cornerstone of the ‘Made in China 2025’ initiative, robotics is central to the nation’s goal of dominating global high-tech industries. Last year, China accounted for 40 percent of the worldwide robotics market and over half of all industrial robot installations.

Recent data shows industrial robot production surged 35.5 percent in May, while service robot output climbed nearly 14 percent.

Morgan Stanley anticipates drones will remain China’s largest robotics segment, set to grow from US$19 billion to US$40 billion by 2028.

Meanwhile, the humanoid robot sector is expected to see an annual growth rate of 63 percent, expanding from US$300 million in 2025 to US$3.4 billion by 2030. By 2050, China could be home to 302 million humanoid robots, making up 30 percent of the global population.

The researchers describe 2025 as a milestone year, marking the start of mass humanoid robot production.

They emphasise that automation is already reshaping China’s manufacturing industry, boosting productivity and quality instead of simply replacing workers and setting the stage for a brighter industrial future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CISOs warn AI-driven cyberattacks are rising, with DNS infrastructure at risk

A new report warns that chief information security officers (CISOs) are bracing for a sharp increase in cyber-attacks as AI continues to reshape the global threat landscape. According to CSC’s report, 98% of CISOs expect rising attacks over the next three years, with domain infrastructure a key concern.

AI-powered domain generation algorithms (DGAs) have been flagged as a key threat by 87% of security leaders. Cyber-squatting, DNS hijacking, and DDoS attacks remain top risks, with nearly all CISOs expressing concern over bad actors’ increasing use of AI.

However, only 7% said they feel confident in defending against domain-based threats.

Concerns have also been raised about identity verification. Around 99% of companies worry their domain registrars fail to apply adequate Know Your Customer (KYC) policies, leaving them vulnerable to infiltration.

Meanwhile, half of organisations have not implemented or tested a formal incident response plan or adopted AI-driven monitoring tools.

Budget constraints continue to limit cybersecurity readiness. Despite the growing risks, only 7% of CISOs reported a significant increase in security budgets between 2024 and 2025. CSC’s Ihab Shraim warned that DNS infrastructure is a prime target and urged firms to act before facing technical and reputational fallout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Workplace deepfake abuse: What employers must know

Deepfake technology—AI-generated videos, images, and audio—has entered the workplace in alarming ways.

Once difficult to produce, deepfakes are now widely accessible and are being used to harass, impersonate, or intimidate employees. These synthetic media attacks can cause deep psychological harm, damage reputations, and expose employers to serious legal risks.

While US federal law hasn’t yet caught up, new legislation like the Take It Down Act and Florida’s Brooke’s Law require platforms to remove non-consensual deepfake content within 48 hours.

Meanwhile, employers could face claims under existing workplace laws if they fail to act on deepfake harassment. Inaction may lead to lawsuits for creating a hostile environment or for negligent oversight.

Most workplace policies still don’t mention synthetic media and something like this creates blind spots, especially during investigations, where fake images or audio could wrongly influence decisions.

Employers need to shift how they assess evidence and protect both accused and accuser fairly. It’s time to update handbooks, train staff, and build clear response plans that include digital impersonation and deepfake abuse.

By treating deepfakes as a modern form of harassment instead of just a tech issue, organisations can respond faster, protect staff, and maintain trust. Proactive training, updated policies, and legal awareness will be crucial to workplace safety in the age of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!