NASA hacks Jupiter probe camera to recover vital images

NASA engineers have revealed they remotely repaired a failing camera aboard the Juno spacecraft orbiting Jupiter using a bold heating technique known as annealing.

Instead of replacing the hardware, which was impossible at a distance of 595 million kilometres from Earth, the team deliberately overheated the camera’s internals to reverse suspected radiation damage.

JunoCam, designed to last only eight orbits, surprisingly survived more than 45 before image quality deteriorated on the 47th. Engineers suspected a faulty voltage regulator and chose to heat the camera to 77°F (25°C), altering the silicon at a microscopic level.

The risky fix temporarily worked, but the issue resurfaced, prompting a second annealing at maximum heat just before a close flyby of Jupiter’s moon Io in late 2023.

The experiment’s success encouraged further tests on other Juno instruments, offering valuable insights into spacecraft resilience. Although NASA didn’t confirm whether these follow-ups succeeded, the effort highlighted the increasing need for in-situ repairs as missions explore deeper into space.

While JunoCam resumed high-quality imaging up to orbit 74, new signs of degradation have since appeared. NASA hasn’t yet confirmed whether another fix is planned or if the camera’s mission has ended.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT evolves from chatbot to digital co-worker

OpenAI has launched a powerful multi-function agent inside ChatGPT, transforming the platform from a conversational AI into a dynamic digital assistant capable of executing multi-step tasks.

Rather than waiting for repeated commands, the agent acts independently — scheduling meetings, drafting emails, summarising documents, and managing workflows with minimal input.

The development marks a shift in how users interact with AI. Instead of merely assisting, ChatGPT now understands broader intent, remembers context, and completes tasks autonomously.

Professionals and individuals using ChatGPT online can now treat the system as a digital co-worker, helping automate complex tasks without bouncing between different tools.

The integration reflects OpenAI’s long-term vision of building AI that aligns with real-world needs. Unlike single-purpose tools such as GPTZero or NoteGPT, the ChatGPT agent can analyse a task, summarise the results, and initiate the next steps on its own.

It’s part of a broader trend, where AI is no longer just a support tool but a full productivity engine.

For businesses adopting ChatGPT professional accounts, the rollout offers immediate value. It reduces manual effort, streamlines enterprise operations, and adapts to user habits over time.

As AI continues to embed itself into company infrastructure, the new agent from OpenAI signals a future where human–AI collaboration becomes the norm, not the exception.

Louis Vuitton Australia confirms customer data breach after cyberattack

Louis Vuitton has admitted to a significant data breach in Australia, revealing that an unauthorised third party accessed its internal systems and stole sensitive client details.

The breach, first detected on 2 July, exposed names, contact information, birthdates, and shopping preferences, though no passwords or financial data were taken.

The luxury retailer emailed affected customers nearly three weeks later, urging them to stay alert for phishing, scam calls, or suspicious texts.

While Louis Vuitton claims it acted quickly to contain the breach and block further access, questions remain about the delay in informing customers and the number of individuals affected.

Authorities have been notified, and cybersecurity specialists are now investigating. The incident adds to a growing list of cyberattacks on major Australian companies, prompting experts to call for stronger data protection laws and the right to demand deletion of personal information from corporate databases.

AI governance needs urgent international coordination

A GIS Reports analysis emphasises that as AI systems become pervasive, they create significant global challenges, including surveillance risks, algorithmic bias, cyber vulnerabilities, and environmental pressures.

Unlike legacy regulatory regimes, AI technology blurs the lines among privacy, labour, environmental, security, and human rights domains, demanding a uniquely coordinated governance approach.

The report highlights that leading AI research and infrastructure remain concentrated in advanced economies: over half of general‑purpose AI models originated in the US, exacerbating global inequalities.

Meanwhile, technologies such as facial recognition and deepfake generators threaten civic trust, amplify disinformation, and could even provoke geopolitical incidents if weaponised in defence systems.

The analysis calls for urgent public‑private cooperation and a new regulatory paradigm to address these systemic issues.

Recommendations include forming international expert bodies akin to the IPCC, and creating cohesive governance that bridges labour rights, environmental accountability, and ethical AI frameworks.

5G market grows as GCT begins chipset rollout

GCT Semiconductor Holding, Inc. has begun delivering samples of its latest 5G chipsets to lead customers, including Airspan Networks and Orbic. The company offers both chip and module formats to meet customers’ specific testing needs.

Initial shipments aim to fulfil early demand, after which GCT will work with clients to assess performance and establish production requirements. The firm is well positioned to scale with a robust supply chain and deep experience in high-speed connectivity.

The fabless semiconductor designer targets mid-tier 5G applications and plans to introduce a Verizon-certified module. GCT has said it remains focused on accelerating its role in the global 5G market.

Surging AI use drives utility upgrades

The rapid rise of AI is placing unprecedented strain on the US power grid, as the electricity demands of massive data centres continue to surge.

Utilities nationwide are struggling to keep up, expanding infrastructure and revising rate structures to accommodate an influx of power-hungry facilities.

Regions like Northern Virginia have become focal points, where dense data centre clusters consume tens of megawatts each and create years-long delays for new connections.

Some next-generation AI systems are expected to require between 1 and 5 gigawatts of constant power, roughly the output of multiple Hoover Dams, posing significant challenges for energy suppliers and regulators alike.

In response, tech firms and utilities are considering a mix of solutions, including on-site natural gas generation, investments in small nuclear reactors, and greater reliance on renewable sources.

At the federal level, streamlined permitting and executive actions are being used to fast-track grid and power plant development.

‘The scale of AI’s power appetite is unprecedented,’ said Dr Elena Martinez, senior grid strategist at the Centre for Energy Innovation. ‘Utilities must pivot now, combining smart-grid tech, diverse energy sources and regulatory agility to avoid systemic bottlenecks.’

Critical minerals challenge AI’s sustainable expansion

Recent debates on AI’s environmental impact have overwhelmingly focused on energy use, particularly in powering massive data centres and training large language models.

However, a Forbes analysis by Saleem H. Ali warns that the material inputs for AI, such as phosphorus, copper, lithium, rare earths, and uranium, are being neglected, despite presenting similarly severe constraints to scaling and sustainability.

While major companies like Google and Blackstone invest heavily in data centre construction and hydroelectric power in places like Pennsylvania, these energy-focused solutions do not address looming material bottlenecks.

Many raw minerals essential for AI hardware are finite, regionally concentrated, and environmentally taxing to extract. That concentration raises risks ranging from supply chain fragility to ecological damage and geopolitical tension.

Experts now say that sustainable AI development demands a dual focus, not only on low-carbon energy, but on keeping critical mineral supply chains resilient.

Without a coordinated approach, AI growth may stall or drive unsustainable resource extraction with long-term global consequences.

How to keep your data safe while using generative AI tools

Generative AI tools have become a regular part of everyday life, both professionally and personally. Despite their usefulness, concern is growing about how they handle private data shared by users.

Major platforms like ChatGPT, Claude, Gemini, and Copilot collect user input to improve their models. Much of this data handling occurs behind the scenes, raising transparency and security concerns.

Anat Baron, a generative AI expert, compares AI models to Pac-Man—constantly consuming data to enhance performance. The more information they receive, the more helpful they become, often at the expense of privacy.

Many users ignore warnings not to share sensitive information. Baron advises against sharing anything with AI that one would not give to a stranger, including ID numbers, financial data, and medical results.

Some platforms offer options to reduce data collection. ChatGPT users can disable training under ‘Data Controls’, while Claude collects data only if users opt in. Perplexity and Gemini offer similar, though less transparent, settings.

Microsoft’s Copilot protects organisational data when logged in, but risks increase when used anonymously on the web. DeepSeek, however, collects user data automatically with no opt-out—making it a risky choice.

Users still retain control, but must remain alert. AI tools are evolving, and with digital agents on the horizon, safeguarding personal information is becoming even more critical. Baron sums it up simply: ‘Privacy always comes at a cost. We must decide how much we’re willing to trade for convenience.’

Agentic AI gains ground as GenAI maturity grows in public sector

Public sector organisations around the world are rapidly moving beyond experimentation with generative AI (GenAI), with up to 90% now planning to explore, pilot, or implement agentic AI systems within the next two years.

Capgemini’s latest global survey of 350 public sector agencies found that most already use or trial GenAI, while agentic AI is being recognised as the next step — enabling autonomous, goal-driven decision-making with minimal human input.

Unlike GenAI, which generates content subject to human oversight, agentic AI can act independently, creating new possibilities for automation and public service delivery.

Dr Kirti Jain of Capgemini explained that GenAI depends on human-in-the-loop (HITL) processes, where users review outputs before acting. By contrast, agentic AI completes the final step itself, representing a future phase of automation. However, data governance remains a key barrier to adoption.
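The distinction Dr Jain draws can be sketched in a few lines of code. The following is a minimal, hypothetical Python illustration, not any vendor’s framework: `draft_reply` and `send_reply` are placeholder stand-ins for a model call and a real-world action.

```python
# Illustrative sketch of human-in-the-loop (HITL) vs agentic pipelines.
# All names here are hypothetical placeholders, not a real API.

def draft_reply(request: str) -> str:
    """Stand-in for a generative model producing a draft."""
    return f"Draft response to: {request}"

def send_reply(reply: str) -> str:
    """Stand-in for the final real-world action (e.g. sending the reply)."""
    return f"SENT: {reply}"

def hitl_pipeline(request: str, human_approves) -> str:
    """GenAI style: the model drafts, a human reviews, then the system acts."""
    draft = draft_reply(request)
    if human_approves(draft):
        return send_reply(draft)
    return "HELD: awaiting human revision"

def agentic_pipeline(request: str) -> str:
    """Agentic style: the system completes the final step itself."""
    return send_reply(draft_reply(request))

print(hitl_pipeline("permit renewal", human_approves=lambda d: True))
print(agentic_pipeline("permit renewal"))
```

The only structural difference is who triggers `send_reply`, which is why the governance question shifts from reviewing outputs to constraining actions.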

Data sovereignty emerged as a leading concern for 64% of surveyed public sector leaders. Fewer than one in four said they had sufficient data to train reliable AI systems. Dr Jain emphasised that governance must be embedded from the outset — not added as an afterthought — to ensure data quality, accountability, and consistency in decision-making.

A proactive approach to governance offers the only stable foundation for scaling AI responsibly. Managing the full data lifecycle — from acquisition and storage to access and application — requires strict privacy and quality controls.

Significant risks arise when flawed AI-generated insights influence decisions affecting entire populations. Capgemini’s support for government agencies focuses on three areas: secure infrastructure, privacy-led data usability, and smarter, citizen-centric services.

EPA Victoria CTO Abhijit Gupta underscored the need for timely, secure, and accessible data as a prerequisite for AI in the public sector. Accuracy and consistency, Dr Jain noted, are essential whether outcomes are delivered by humans or machines. Governance, he added, should remain technology-agnostic yet agile.

With strong data foundations in place, only minor adjustments are needed to scale agentic AI that can manage full decision-making cycles. Capgemini’s model of ‘active data governance’ aims to enable public sector AI to scale safely and sustainably.

Singapore was highlighted as a leading example of responsible innovation, driven by rapid experimentation and collaborative development. The AI Trailblazers programme, co-run with the private sector, is tackling over 100 real-world GenAI challenges through a test-and-iterate model.

Minister for Digital Josephine Teo recently reaffirmed Singapore’s commitment to sharing lessons and best practices in sustainable AI development. According to Dr Jain, the country’s success lies not only in rapid adoption, but in how AI is applied to improve services for citizens and society.

ChatGPT stuns users by guessing object in viral video using smart questions

A video featuring ChatGPT Live has gone viral after it correctly guessed an object hidden in a user’s hand using only a series of questions.

The clip, shared on the social media platform X, shows the chatbot narrowing down its guesses until it lands on the correct answer — a pen — within less than a minute. The video has fascinated viewers by showing how far generative AI has come since its initial launch.

Multimodal AI like ChatGPT can now process audio, video and text together, making interactions more intuitive and lifelike.

Another user attempted the same challenge with Gemini AI by holding an AC remote. Gemini described it as a ‘control panel for controlling temperature’, which was close but not entirely accurate.

The fun experiment also highlights the growing real-world utility of generative AI. At Google’s I/O conference this year, the company demonstrated how Gemini Live can help users troubleshoot and repair appliances at home by understanding both spoken instructions and visual input.

Beyond casual use, these AI tools are proving helpful in serious scenarios. A UPSC aspirant recently explained how uploading her Detailed Application Form to a chatbot allowed it to generate practice questions.

She used those prompts to prepare for her interview and credited the AI with helping her boost her confidence.
