Is AI distorting our view of the Milky Way’s black hole?

A new AI model has created a fresh image of Sagittarius A*, the supermassive black hole at the centre of our galaxy, suggesting it is spinning close to its maximum speed.

The model was trained on noisy data from the Event Horizon Telescope, a globe-spanning network of radio telescopes, using information once dismissed due to atmospheric interference.

Researchers believe this AI-enhanced image shows the black hole’s rotational axis pointing towards Earth, offering potential insights into how radiation and matter behave near such cosmic giants.

By using previously considered unusable data, scientists hope to improve our understanding of black hole dynamics.

However, not all physicists are confident in the results.

Nobel Prize-winning astrophysicist Reinhard Genzel has voiced concern over the reliability of models built on compromised data, stressing that AI should not be treated as a miracle fix. He warned that the new image might be distorted due to the poor quality of its underlying information.

The researchers plan to test their model against newer and more reliable data to address these concerns. Their goal is to refine the AI further and provide more accurate simulations of black holes in the future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI brings DALL-E image creation to WhatsApp users worldwide

OpenAI has officially launched image creation capabilities for WhatsApp users, expanding access to its AI visual tools via the verified number +1-800-ChatGPT. The feature enables users to generate or edit images directly within their chats using natural-language prompts.

Previously limited to the web and mobile versions of ChatGPT, the image generation tool—powered by DALL-E—is now available globally on WhatsApp, free of charge. OpenAI announced the rollout via X, encouraging users to connect their accounts for enhanced functionality.

To get started, users should save +1-800-ChatGPT (+1-800-242-8478) to their contacts, send ‘Hi’ via WhatsApp, and follow the instructions to link their OpenAI account.

Once verified, they can prompt the AI with creative requests such as ‘design a futuristic skyline’ or ‘show a dog surfing on Mars’ and receive bespoke visuals in return.

The move further integrates generative AI into everyday messaging, making powerful image-creation tools more accessible to a broad user base.

Meanwhile, WhatsApp is preparing to introduce in-app advertising. With over two billion active users, Meta plans to monetise the platform more aggressively—signalling a notable shift in WhatsApp’s strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake technology fuels new harassment risks

AI-generated media is reshaping workplace harassment in the US, with deepfakes used to impersonate colleagues and circulate fabricated explicit content. Recent studies found that, by 2023, almost all deepfakes were sexually explicit, most often targeting women.

Organisations risk liability under existing laws if deepfake incidents create hostile work environments. New legislation like the TAKE IT DOWN Act and Florida’s Brooke’s Law now mandates rapid removal of non-consensual intimate imagery.

Employers are also bracing for proposed rules requiring strict authentication of AI-generated evidence in legal proceedings. Industry experts advise an urgent review of harassment and acceptable use policies, clear incident response plans and targeted training for HR, legal and IT teams.

Protective measures include auditing insurance coverage for synthetic media claims and staying abreast of evolving state and federal regulations. Forward-looking employers already embed deepfake awareness into their harassment prevention and cybersecurity training to safeguard workplace dignity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

T-Mobile launches priority network for emergency services

T-Mobile is expanding its support for emergency response teams by combining 5G, AI and drone technologies to boost disaster recovery operations. Its T-Priority service, launched last year, offers dedicated network slices to ensure fast, low-latency data access during crises.

US first responders in disaster-hit regions like Southern California and North Carolina have already used the system to operate body cams, traffic monitoring tools and mapping systems. T-Mobile deployed hundreds of 5G routers and hotspot devices to aid efforts during the Palisades wildfire and recent hurricanes.

AI and drone technologies are key in reconnaissance, damage assessment and real-time communication. T-Mobile’s self-organising network adapts to changing conditions using live data, ensuring stable connectivity throughout emergency operations.

Public-private collaboration is central to the initiative, with T-Mobile working alongside FEMA, the Department of Defense and local emergency centres. The company has also signed a major deal to provide New York City with a dedicated public safety network.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK cyber agency warns AI will accelerate cyber threats by 2027

The UK’s National Cyber Security Centre has warned that integrating AI into national infrastructure creates a broader attack surface, raising concerns about an increased risk of cyber threats.

Its latest report outlines how AI may amplify the capabilities of threat actors, especially when it comes to exploiting known vulnerabilities more rapidly than ever before.

By 2027, AI-enabled tools are expected to significantly shorten the time between vulnerability disclosure and exploitation. This evolution could pose a serious challenge for defenders, particularly within critical systems.

The NCSC notes that the risk of advanced cyber attacks will likely escalate unless organisations can keep pace with so-called ‘frontier AI’.

The centre also predicts a growing ‘digital divide’ between organisations that adapt to AI-driven threats and those left behind. The divide could further endanger the overall cyber resilience of the UK. As a result, decisive action is being urged to close the gap and reduce future risks.

NCSC operations director Paul Chichester said AI is expanding attack surfaces, increasing the volume of threats, and speeding up malicious activity. He emphasised that while these dangers are real, AI can strengthen the UK’s cyber defences.

Organisations are encouraged to adopt robust security practices using resources like the Cyber Assessment Framework, the 10 Steps to Cyber Security, and the new AI Cyber Security Code of Practice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta AI adds pop-up warning after users share sensitive info

Meta has introduced a new pop-up in its Meta AI app, alerting users that any prompts they share may be made public. While AI chat interactions are rarely private by design, many users appeared unaware that their conversations could be published for others to see.

The Discovery feed in the Meta AI app had previously featured conversations that included intimate details—such as break-up confessions, attempts at self-diagnosis, and private photo edits.

According to multiple reports last week, these were often shared unknowingly by users who may not have realised the implications of the app’s sharing functions. Mashable confirmed this by finding such examples directly in the feed.

Now, when a user taps the ‘Share’ button on a Meta AI conversation, a new warning appears: ‘Prompts you post are public and visible to everyone. Your prompts may be suggested by Meta on other Meta apps. Avoid sharing personal or sensitive information.’ A ‘Post to feed’ button then appears below.

Although the sharing step has always required users to confirm, Business Insider reports that the feature wasn’t clearly explained—leading some users to publish their conversations unintentionally. The new alert aims to clarify that process.

As of this week, Meta AI’s Discovery feed features mostly AI-generated images and more generic prompts, often from official Meta accounts. For users concerned about privacy, there is an option in the app’s settings to opt out of the Discovery feed altogether.

Still, experts advise against entering personal or sensitive information into AI chatbots, including Meta AI. Adjusting privacy settings and avoiding the ‘Share’ feature are the best ways to protect your data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Plumbing still safe as AI replaces office jobs, says AI pioneer

Nobel Prize-winning scientist Geoffrey Hinton, often called the ‘Godfather of AI,’ has warned that many intellectual jobs are at risk of being replaced by AI—while manual trades like plumbing may remain safe for years to come.

Speaking on the Diary of a CEO podcast, Hinton predicted that AI will eventually surpass human capabilities across most fields, but said it will take far longer to master physical skills. ‘A good bet would be to be a plumber,’ he noted, citing the complexity of physical manipulation as a barrier for AI.

Hinton, known for his pioneering work on neural networks, said ‘mundane intellectual labour’ would be among the first to go. ‘AI is just going to replace everybody,’ he said, naming paralegals and call centre workers as particularly vulnerable.

He added that while highly skilled roles or those in sectors with overwhelming demand—like healthcare—may endure, most jobs are unlikely to escape the wave of disruption. ‘Most jobs, I think, are not like that,’ he said, forecasting widespread upheaval in the labour market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT loses chess match to Atari 2600

ChatGPT was trounced in a chess match by a 1979 video game running on an Atari 2600 emulator. Citrix engineer Robert Caruso set up the match using Video Chess to test how the AI would perform against vintage gaming software.

The result was unexpectedly lopsided. ChatGPT confused rooks for bishops, forgot piece positions and made repeated beginner mistakes, eventually asking for the match to be restarted. Even when standard chess notation was used, its performance failed to improve.

Caruso described the 90-minute session as full of basic blunders, saying the AI would have been laughed out of a primary school chess club. His post highlighted the limitations of ChatGPT’s architecture, which is built for language understanding, not strategic board gameplay.

While the experiment doesn’t mean ChatGPT is entirely useless at chess, it suggests users are better off discussing the game with the bot than challenging it. OpenAI has not yet responded to the light-hearted but telling critique.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI considers antitrust action against Microsoft over AI hosting control

OpenAI is reportedly trying to reduce Microsoft’s exclusive control over hosting its AI models, signalling growing friction between the two companies.

According to the Wall Street Journal, OpenAI leadership has considered filing an antitrust complaint against Microsoft, alleging anti-competitive behaviour in their ongoing collaboration. The move could trigger federal regulatory scrutiny.

The tension comes amid ongoing talks over OpenAI’s corporate restructuring. A report by The Information suggests that OpenAI is negotiating to grant Microsoft a 33% stake in its reorganised for-profit unit. In exchange, Microsoft would give up rights to future profits.

OpenAI also wants to revise its existing contract with Microsoft, particularly clauses that grant exclusive Azure hosting rights. The company reportedly aims to exclude its planned $3 billion acquisition of AI startup Windsurf from the agreement, which otherwise gives Microsoft access to OpenAI’s intellectual property.

This developing rift could reshape one of the most influential alliances in AI. Microsoft has invested heavily in OpenAI since 2019 and integrates its models into Microsoft 365 Copilot and Azure services. However, both firms are diversifying.

OpenAI is turning to Google Cloud and Oracle for additional computing power, while Microsoft has begun integrating alternative AI models into its products.

Industry experts warn that regulatory scrutiny or contract changes could impact enterprise customers relying on tightly integrated AI solutions, particularly in sectors like healthcare and finance. Companies may face service disruptions, higher costs, or compatibility challenges if major players shift strategy or infrastructure.

Analysts suggest that the era of single-model reliance may be ending. As innovation from rivals like DeepSeek accelerates, enterprises and cloud providers are moving toward multi-model support, aiming for modular, scalable, and use-case-specific AI deployments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CISOs warn AI-driven cyberattacks are rising, with DNS infrastructure at risk

A new report warns that chief information security officers (CISOs) are bracing for a sharp increase in cyber-attacks as AI continues to reshape the global threat landscape. According to CSC’s report, 98% of CISOs expect rising attacks over the next three years, with domain infrastructure a key concern.

AI-powered domain generation algorithms (DGAs) have been flagged as a key threat by 87% of security leaders. Cyber-squatting, DNS hijacking, and DDoS attacks remain top risks, with nearly all CISOs expressing concern over bad actors’ increasing use of AI.
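To make the threat concrete: a domain generation algorithm lets malware derive a fresh, deterministic list of rendezvous domains (for example, from the current date), so blocking any single domain is ineffective. The sketch below is a minimal, hypothetical illustration of the idea, not any real malware's scheme; the function name and hashing approach are assumptions for demonstration. Defenders who know the algorithm can precompute the same list and block or pre-register the domains.

```python
import hashlib
from datetime import date

def generate_domains(seed_date: date, count: int = 5, tld: str = ".com") -> list[str]:
    """Derive a deterministic list of pseudo-random domain names from a date seed.

    Hypothetical sketch: real DGAs vary widely, but the core property is the
    same - attacker and defender can both compute the identical list.
    """
    domains = []
    for i in range(count):
        # Hash the date plus a counter so each day yields a distinct batch
        material = f"{seed_date.isoformat()}-{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        # Use the first 12 hex characters as the domain label
        domains.append(digest[:12] + tld)
    return domains

# The same seed always yields the same list, so a blocklist can be precomputed
todays = generate_domains(date(2025, 1, 1))
```

AI-assisted variants worry CISOs because generated names can be made to mimic legitimate branding, defeating simple pattern-based filters.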

However, only 7% said they feel confident in defending against domain-based threats.

Concerns have also been raised about identity verification. Around 99% of companies worry their domain registrars fail to apply adequate Know Your Customer (KYC) policies, leaving them vulnerable to infiltration.

Meanwhile, half of organisations have not implemented or tested a formal incident response plan or adopted AI-driven monitoring tools.

Budget constraints continue to limit cybersecurity readiness. Despite the growing risks, only 7% of CISOs reported a significant increase in security budgets between 2024 and 2025. CSC’s Ihab Shraim warned that DNS infrastructure is a prime target and urged firms to act before facing technical and reputational fallout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!