How to keep your data safe while using generative AI tools

Generative AI tools have become a regular part of everyday life, both professionally and personally. Despite their usefulness, concern is growing about how they handle private data shared by users.

Major platforms like ChatGPT, Claude, Gemini, and Copilot collect user input to improve their models. Much of this data handling occurs behind the scenes, raising transparency and security concerns.

Anat Baron, a generative AI expert, compares AI models to Pac-Man—constantly consuming data to enhance performance. The more information they receive, the more helpful they become, often at the expense of privacy.

Many users ignore warnings not to share sensitive information. Baron advises against sharing anything with AI that one would not give to a stranger, including ID numbers, financial data, and medical results.

Some platforms offer options to reduce data collection. ChatGPT users can disable training under ‘Data Controls’, while Claude collects data only if users opt in. Perplexity and Gemini offer similar, though less transparent, settings.

Microsoft’s Copilot protects organisational data when logged in, but risks increase when used anonymously on the web. DeepSeek, however, collects user data automatically with no opt-out—making it a risky choice.

Users still retain control, but must remain alert. AI tools are evolving, and with digital agents on the horizon, safeguarding personal information is becoming even more critical. Baron sums it up simply: ‘Privacy always comes at a cost. We must decide how much we’re willing to trade for convenience.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OCC urged to delay crypto bank approvals

Major US banking and credit union associations are pressuring regulators to delay granting federal bank licences to crypto firms. These include companies such as Circle, Ripple, and Fidelity Digital Assets.

In a joint letter, the American Bankers Association and others called on the Office of the Comptroller of the Currency (OCC) to halt decisions on these applications, raising what they described as serious legal and procedural issues.

The groups argue that the crypto firms’ business models do not align with the fiduciary activities typically required for national trust banks. They warned that granting such charters without clear oversight could mark a major policy shift and potentially weaken the foundations of the financial system.

The banks also claim the publicly available details of the applications are insufficient for public scrutiny. Some in the crypto sector see this as a sign of resistance from traditional banks fearing competition.

Recent legislative developments, particularly the GENIUS Act’s stablecoin framework, are encouraging more crypto firms to seek national bank charters.

Legal experts say such charters offer broader operational freedom than the new stablecoin licence, making them an increasingly attractive option for firms aiming to operate across all US states.

Human rights must anchor crypto design

Crypto builders face growing pressure to design systems that protect fundamental human rights from the outset. As concerns mount over surveillance, state-backed ID systems, and AI impersonation, experts warn that digital infrastructure must not compromise individual freedom.

Privacy-by-default, censorship resistance, and decentralised self-custody are no longer idealistic features — they are essential for any credible Web3 system. Critics argue that many current tools merely replicate traditional power structures, offering centralisation disguised as innovation.

The collapse of platforms like FTX has only strengthened calls for human-centric solutions.

New approaches are needed to ensure people can prove their personhood online without relying on governments or corporations. Digital inclusion depends on verification systems that are censorship-resistant, privacy-preserving and accessible.

Likewise, self-custody must evolve beyond fragile key backups and complex interfaces to empower everyday users.

While embedding values in code brings ethical and political risks, avoiding the issue could lead to greater harm. For the promise of Web3 to be realised, rights must be a design priority — not an afterthought.

ChatGPT stuns users by guessing object in viral video using smart questions

A video featuring ChatGPT Live has gone viral after it correctly guessed an object hidden in a user’s hand using only a series of questions.

The clip, shared on the social media platform X, shows the chatbot narrowing down its guesses until it lands on the correct answer, a pen, in under a minute. The video has fascinated viewers by showing how far generative AI has come since its initial launch.

Multimodal AI like ChatGPT can now process audio, video and text together, making interactions more intuitive and lifelike.
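The article does not show how such multimodal requests are put together. As a rough illustration, here is a minimal sketch of how an image and a text question might be bundled into a single request, assuming an OpenAI-style chat API; the model name and payload layout are illustrative assumptions, not the documented interface of any specific product.

```python
import base64
import json

def build_guessing_prompt(image_bytes: bytes, question: str) -> dict:
    """Bundle an image and a text question into one chat-style message.

    Assumes an OpenAI-style payload shape; field names are illustrative.
    """
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4o",  # hypothetical choice of multimodal model
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{encoded}"}},
            ],
        }],
    }

payload = build_guessing_prompt(b"\xff\xd8fake-jpeg-bytes", "What am I holding?")
print(json.dumps(payload)[:80])
```

Combining text and image parts in one message is what lets the model reason about both at once, which is the behaviour the viral clip demonstrates.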

Another user attempted the same challenge with Gemini AI by holding an AC remote. Gemini described it as a ‘control panel for controlling temperature’, which was close but not entirely accurate.

The fun experiment also highlights the growing real-world utility of generative AI. At Google's I/O conference earlier this year, the company demonstrated how Gemini Live can help users troubleshoot and repair appliances at home by understanding both spoken instructions and visual input.

Beyond casual use, these AI tools are proving helpful in serious scenarios. A UPSC aspirant recently explained how uploading her Detailed Application Form to a chatbot allowed it to generate practice questions.

She used those prompts to prepare for her interview and credited the AI with helping her boost her confidence.

New AI device brings early skin cancer diagnosis to remote communities

A Scottish research team has developed a pioneering AI-powered tool that could transform how skin cancer is diagnosed in some of the world’s most isolated regions.

The device, created by PhD student Tess Watt at Heriot-Watt University, enables rapid diagnosis without needing internet access or direct contact with a dermatologist.

Patients use a compact camera connected to a Raspberry Pi computer to photograph suspicious skin lesions.

The system then compares the image against thousands of preloaded examples using advanced image recognition and delivers a diagnosis in real time. These results are then shared with local GP services, allowing treatment to begin without delay.
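The article does not describe the matching algorithm itself. One common approach to comparing a new image against a library of labelled examples is nearest-neighbour search over precomputed feature vectors; the sketch below illustrates that idea with made-up three-dimensional features and labels, not the actual Heriot-Watt system.

```python
import math

# Hypothetical feature vectors for preloaded reference lesion images.
# A real system would use learned image embeddings, not 3-value toys.
REFERENCE = [
    ([0.90, 0.10, 0.30], "benign"),
    ([0.20, 0.80, 0.70], "malignant"),
    ([0.85, 0.20, 0.25], "benign"),
]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def classify(features, k=1):
    """Label a new image by majority vote among its k nearest references."""
    ranked = sorted(REFERENCE, key=lambda r: cosine(features, r[0]), reverse=True)
    top = [label for _, label in ranked[:k]]
    return max(set(top), key=top.count)

print(classify([0.88, 0.15, 0.28]))
```

Because the reference set is preloaded and the comparison is purely local arithmetic, this style of matching can run entirely offline, which is the property the article highlights for remote use.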

The self-contained diagnostic system is among the first designed specifically for remote medical use. Watt said that home-based healthcare is vital, especially with growing delays in GP appointments.

The device, currently 85 per cent accurate, is expected to improve further with access to more image datasets and machine learning enhancements.

The team plans to trial the tool in real-world settings after securing NHS ethical approval. The initial rollout is aimed at rural Scottish communities, but the technology could benefit global populations with poor access to dermatological care.

Heriot-Watt researchers also believe the device will aid patients who are infirm or housebound, making early diagnosis more accessible than ever.

Perplexity CEO predicts that AI browser could soon replace recruiters and assistants

Perplexity AI CEO Aravind Srinivas believes that the company’s new AI-powered browser, Comet, could soon replace two key white-collar roles in most offices: recruiters and executive assistants.

Speaking on The Verge podcast, Srinivas explained that with the integration of more advanced reasoning models like GPT-5 or Claude 4.5, Comet will be able to handle tasks traditionally assigned to these positions.

He also described how a recruiter’s week-long workload could be reduced to a single AI prompt.

From sourcing candidates to scheduling interviews, tracking responses in Google Sheets, syncing calendars, and even briefing users ahead of meetings, Comet is built to manage the entire process—often without any follow-up input.

The tool remains in an invite-only phase and is currently available to premium users.

Srinivas also framed Comet as the early foundation of a broader AI operating system for knowledge workers, enabling users to issue natural language commands for complex tasks.

He emphasised the importance of adopting AI early, warning that those who fail to keep pace with the technology’s rapid growth—where breakthroughs arrive every few months—risk being left behind in the job market.

In a separate discussion, he urged younger generations to reduce time spent scrolling on Instagram and instead focus on mastering AI tools. According to him, the shift is inevitable, and those who embrace it now will hold a long-term professional advantage.

Meta CEO unveils plan to spend hundreds of billions on AI data centres

Mark Zuckerberg has pledged to invest hundreds of billions of dollars to build a network of massive data centres focused on superintelligent AI. The initiative forms part of Meta’s wider push to lead the race in developing machines capable of outperforming humans in complex tasks.

The first of these centres, called Prometheus, is set to launch in 2026. Another facility, Hyperion, is expected to scale up to 5 gigawatts. Zuckerberg said the company is building several more AI ‘titan clusters’, each one covering an area comparable to a significant part of Manhattan.

He also cited Meta’s strong advertising revenue as the reason it can afford such bold spending despite investor concerns.

Meta recently regrouped its AI projects under a new division, Superintelligence Labs, following internal setbacks and high-profile staff departures.

The company hopes the division will generate fresh revenue streams through Meta AI tools, video ad generators, and wearable smart devices. It is reportedly considering abandoning its most powerful open-source model, Behemoth, in favour of a closed alternative.

The firm has increased its 2025 capital expenditure to up to $72 billion and is actively hiring top talent, including former Scale AI CEO Alexandr Wang and ex-GitHub chief Nat Friedman.

Analysts say Meta’s AI investments are paying off in advertising but warn that the real return on long-term AI dominance will take time to emerge.

UK considers Bitcoin sale to plug budget gap

Chancellor Rachel Reeves is reportedly considering the sale of over £5 billion in seized Bitcoin to help reduce the UK’s growing fiscal deficit. The Treasury is under pressure to find alternative revenue sources amid soaring borrowing costs, high inflation, and sluggish growth.

The Bitcoin in question was mostly confiscated in 2018 during a crackdown on a Chinese Ponzi scheme. Since then, its value has risen dramatically, with initial holdings worth around £300 million now estimated at more than £5 billion.

The assets were linked to convicted money launderers, including Jian Wen, and are currently held by UK law enforcement.

While the sale could help avoid tax increases or spending cuts, critics warn of repeating past mistakes. Comparisons have already been drawn to Gordon Brown’s heavily criticised gold sales in the early 2000s, which resulted in billions in missed profits.

There are also unresolved legal concerns about returning funds to victims of the fraud.

Some observers argue the UK should consider holding the Bitcoin as a strategic reserve, in line with countries like El Salvador. Analysts note that the US also sold off seized Bitcoin from 2014 to 2021, missing out on a potential $21 billion gain.

If the UK follows through with the sale, many believe it could prove to be one of the most short-sighted fiscal moves in recent history.

DuckDuckGo adds new tool to block AI-generated images from search results

Privacy-focused search engine DuckDuckGo has launched a new feature that allows users to filter out AI-generated images from search results.

Although the company admits the tool is not perfect and may miss some content, it claims it will significantly reduce the number of synthetic images users encounter.

The new filter uses open-source blocklists, including a more aggressive ‘nuclear’ option, sourced from tools like uBlock Origin and uBlacklist.

Users can access the setting via the Images tab after performing a search or use a dedicated link — noai.duckduckgo.com — which keeps the filter always on and also disables AI summaries and the browser’s chatbot.
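The exact format of these blocklists is not described in the article. Conceptually, list-based filtering of this kind matches each result's host against a set of blocked domains; the sketch below shows that mechanism with a made-up blocklist, and is not the actual DuckDuckGo, uBlock Origin, or uBlacklist implementation.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains known to host AI-generated images.
# Real filters draw on community-maintained lists, per the article.
BLOCKLIST = {"ai-images.example", "genart.example"}

def is_blocked(url: str) -> bool:
    """True if the URL's host, or any parent domain of it, is blocklisted."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check "cdn.ai-images.example", then "ai-images.example", then "example".
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

def filter_results(urls):
    """Drop results whose host matches the blocklist."""
    return [u for u in urls if not is_blocked(u)]

results = [
    "https://photos.example/peacock.jpg",
    "https://cdn.ai-images.example/peacock.png",
]
print(filter_results(results))
```

Matching parent domains as well as exact hosts is what lets one blocklist entry cover every subdomain a site serves images from, which keeps such lists short.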

The update responds to growing frustration among internet users. Platforms like X and Reddit have seen complaints about AI content flooding search results.

In one example, users searching for ‘baby peacock’ reported seeing as many AI-generated images as real ones, or more, making it harder to distinguish fake from authentic content.

DuckDuckGo isn’t alone in trying to tackle unwanted AI material. In 2024, Hiya launched a Chrome extension aimed at spotting deepfake audio across major platforms.

Microsoft’s Bing has also partnered with groups like StopNCII to remove explicit synthetic media from its results, showing that the fight against AI content saturation is becoming a broader industry trend.

GENIUS Act signed as stablecoin regulation divides opinion

President Donald Trump has officially signed the GENIUS Act into law, marking a historic step in establishing a legal framework for stablecoins in the US. The act, passed with bipartisan support on 18 July, introduces the first rules for the $250 billion stablecoin market.

While Trump hailed the bill’s passage as a major achievement, backlash has emerged from both politicians and crypto insiders. Republican Representative Marjorie Taylor Greene condemned the bill, arguing it could secretly enable the rollout of a central bank digital currency (CBDC).

She warned that state-controlled stablecoins could function as a surveillance tool and criticised the legislation for lacking a clause banning CBDCs.

Outside Capitol Hill, concerns were echoed by prominent Bitcoin advocate Justin Bechler, who likened the act to a covert power grab by central authorities. He claimed that fully compliant, state-enforced stablecoins effectively amount to CBDCs in practice.

Jean Rausis of SmarDex also described the bill as a ‘CBDC trojan horse’.

However, some believe the criticism is misplaced. Journalist Eleanor Terrett noted that the GENIUS Act includes language that prohibits the Federal Reserve from launching a retail CBDC.

Senator Tim Scott supported this view, stating the act does not expand the Fed’s powers in any direction resembling a digital currency for the public.
