China outlines plan to expand high-tech industries

China has pledged to expand its high-tech industries over the next decade. Officials said emerging sectors such as quantum computing, hydrogen energy, nuclear fusion, and brain-computer interfaces will receive major investment and policy backing.

Development chief Zheng Shanjie told reporters that the coming decade will redefine China’s technology landscape, describing it as a ‘new scale’ of innovation. The government views breakthroughs in science and AI as key to boosting economic resilience amid a slowing property market and demographic decline.

The plan underscores Beijing’s push to rival Washington in cutting-edge technology, with billions already channelled into state-led innovation programmes. Public opinion in Beijing appears supportive, with many citizens expressing optimism that China could lead the next technological revolution.

Economists warn, however, that sustained progress will require tackling structural issues, including low domestic consumption and reduced investor confidence. Analysts said Beijing’s long-term success will depend on whether it can balance rapid growth with stable governance and transparent regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Big Tech ramps up Brussels lobbying as EU considers easing digital rules

Tech firms now spend a record €151 million a year on lobbying at EU institutions, up from €113 million in 2023, according to transparency-register analysis by Corporate Europe Observatory and LobbyControl.

Spending is concentrated among US giants. The ten biggest tech companies, including Meta, Microsoft, Apple, Amazon, Qualcomm and Google, together outspend the top ten in pharma, finance and automotive. Meta leads with a budget above €10 million.

An estimated 890 full-time lobbyists now work to influence tech policy in Brussels, up from 699 in 2023, with 437 holding European Parliament access badges. In the first half of 2025, companies declared 146 meetings with the Commission and 232 with MEPs, with artificial intelligence regulation and the industry code of practice frequently on the agenda.

As industry pushes back on the Digital Markets Act and Digital Services Act and the Commission explores the ‘simplification’ of EU rulebooks, lobbying transparency campaigners fear a rollback of the progress made in regulating the digital sector. Companies, by contrast, argue that lobbying helps lawmakers grasp complex markets and assess impacts on innovation and competitiveness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT offers wellness checks for long chat sessions

OpenAI has introduced new features in ChatGPT to encourage healthier use for people who spend extended periods chatting with the AI. Users may see a pop-up message reading ‘Just checking in. You’ve been chatting for a while, is this a good time for a break?’.

Users can dismiss the prompt or continue, helping to curb excessive screen time without forcing a break. The update also changes how ChatGPT handles high-stakes personal decisions.

ChatGPT will not give direct advice on sensitive topics such as relationships, but instead asks questions and encourages reflection, helping users consider their options safely.

OpenAI acknowledged that AI can feel especially personal for vulnerable individuals. Earlier versions sometimes struggled to recognise signs of emotional dependency or distress.

The company is improving the model to detect these cases and direct users to evidence-based resources when needed, making long interactions safer and more mindful.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Spot the red flags of AI-enabled scams, says California DFPI

The California Department of Financial Protection & Innovation (DFPI) has warned that criminals are weaponising AI to scam consumers. Deepfakes, cloned voices, and slick messages mimic trusted people and exploit urgency. Learning the new warning signs cuts risk quickly.

Imposter deepfakes and romance ruses often begin with perfect profiles or familiar voices pushing you to pay or invest. Grandparent scams use cloned audio in fake emergencies; agree a family passphrase and verify on a separate channel. Influencers may flaunt fabricated credentials and followers.

Automated attacks now use AI to sidestep basic defences and steal passwords or card details. Reduce exposure with two-factor authentication, regular updates, and a reputable password manager. Pause before clicking unexpected links or attachments, even from known names.

Investment frauds increasingly tout vague ‘AI-powered’ returns while simulating growth and testimonials, then blocking withdrawals. Beware guarantees of no risk, artificial deadlines, unsolicited messages, and recruit-to-earn offers. Research independently and verify registrations before sending money.

DFPI advises careful verification before acting. Confirm identities through trusted channels, refuse to move money under pressure, and secure devices. Report suspicious activity promptly; smart habits remain the best defence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Emergency cardiology gets a lift from AI-read ECGs, with fewer false activations

AI ECG analysis improved heart attack detection and reduced false alarms in a multicentre study of 1,032 suspected STEMI cases. Conducted across three primary PCI centres from January 2020 to May 2024, it points to quicker, more accurate triage, especially beyond specialist hospitals.

ST-segment elevation myocardial infarction occurs when a major coronary artery is blocked. Guideline targets call for reperfusion within 90 minutes of first medical contact. Longer delays are associated with roughly a 3-fold increase in mortality, underscoring the need for rapid, reliable activation.

The AI ECG model, trained to detect acute coronary occlusion and STEMI equivalents, analysed each patient’s initial tracing. Confirmatory angiography and biomarkers identified 601 true STEMIs and 431 false positives. AI detected 553 of 601 STEMIs, versus 427 identified by standard triage on the first ECG.

False positives fell sharply with AI. Investigators reported a 7.9 percent false-positive rate with the model, compared with 41.8 percent under standard protocols. Clinicians said earlier that more precise identification could streamline transfers from non-PCI centres and help teams reach reperfusion targets.
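The headline figures can be sanity-checked from the reported counts. A quick back-of-the-envelope calculation in Python (the derived sensitivity percentages are our arithmetic from the study's numbers, not additional results reported by the authors):

```python
# Counts reported for the 1,032 suspected STEMI cases
total_cases = 1032
true_stemis = 601        # confirmed by angiography and biomarkers
false_activations = 431  # activations without a confirmed STEMI

ai_detected = 553        # true STEMIs flagged by the AI model
standard_detected = 427  # true STEMIs flagged by standard first-ECG triage

ai_sensitivity = ai_detected / true_stemis             # ~0.92
standard_sensitivity = standard_detected / true_stemis # ~0.71

# 431 false activations out of 1,032 cases lines up with the
# reported 41.8 percent false-positive rate under standard protocols
standard_fp_rate = false_activations / total_cases     # ~0.418
```

In other words, the AI model caught roughly 92 percent of confirmed STEMIs on the initial tracing versus about 71 percent for standard triage, alongside the sharp drop in false activations described above.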

An editorial welcomed the gains but urged caution. The model targets acute occlusion rather than STEMI, needs prospective validation in diverse populations, and must be integrated with clear governance and human oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ontario updates deidentification guidelines for safer data use

Ontario’s privacy watchdog has released an expanded set of deidentification guidelines to help organisations protect personal data while enabling innovation. The 100-page document from the Office of the Information and Privacy Commissioner (IPC) offers step-by-step advice, checklists and examples.

The update modernises the 2016 version to reflect global regulatory changes and new data protection practices. The commissioner emphasised that the guidelines aim to help organisations of all sizes responsibly anonymise data while maintaining its usefulness for research, AI development and public benefit.

Developed through broad stakeholder consultation, the guidelines were refined with input from privacy experts and the Canadian Anonymization Network. The new version responds to industry requests for more detailed, operational guidance.

Although the guidelines are not legally binding, experts said following them can reduce liability risks and strengthen compliance with privacy laws. The IPC hopes they will serve as a practical reference for executives and data officers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Clearview AI faces criminal complaint in Austria over GDPR violations

On 28 October 2025, European privacy NGO noyb (None of Your Business) submitted a criminal complaint against Clearview AI and its management to Austrian prosecutors.

The complaint targets Clearview’s long-criticised practice of scraping billions of photos and videos from the public web to build a facial recognition database, including biometric data of EU residents, in ways noyb claims flagrantly violate the EU General Data Protection Regulation (GDPR).

Clearview markets its technology to law enforcement and governmental agencies, offering clients the ability to upload a face image and retrieve matches from its vast index, reportedly over 60 billion images.

Multiple EU data protection authorities have already found Clearview in breach of GDPR rules and imposed fines and bans in countries such as France, Greece, Italy, the Netherlands, and the United Kingdom.

Despite those rulings, Clearview has largely ignored enforcement actions, refusing to comply or pay fines except in limited cases, citing its lack of a European base as a shield. Noyb argues that the company exploits this regulatory gap to skirt accountability.

Under Austrian law, certain GDPR violations are criminal offences (via § 63 of Austria’s data protection statute), allowing prosecutors to hold both corporations and their executives personally liable, including potential imprisonment. Noyb’s complaint thus seeks to escalate enforcement beyond administrative fines to criminal sanctions.

Max Schrems, noyb’s founder, condemned Clearview’s conduct as a systematic affront to European legal frameworks: ‘Clearview AI amassed a global database of photos and biometric data … Such power is extremely concerning and undermines the idea of a free society.’

The outcome could set a landmark precedent: if prosecutors accept and pursue the case, Clearview’s executives might face arrest if they travel to Europe, and EU-wide legal cooperation (e.g. extradition requests) could follow.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI Foundation to fund global health and AI safety projects

OpenAI has finalised its recapitalisation, simplifying its structure while preserving its core mission. The new OpenAI Foundation controls OpenAI Group PBC and holds about $130 billion in equity, making it one of history’s best-funded philanthropies.

The Foundation will receive further ownership as OpenAI’s valuation grows, ensuring its financial resources expand alongside the company’s success. Its mission remains to ensure that artificial general intelligence benefits all of humanity.

The more the business prospers, the greater the Foundation’s capacity to fund global initiatives.

An initial $25 billion commitment will focus on two core areas: advancing healthcare breakthroughs and strengthening AI resilience. Funds will go toward open-source health datasets, medical research, and technical defences to make AI systems safer and more reliable.

The initiative builds on OpenAI’s existing People-First AI Fund and reflects recommendations from its Nonprofit Commission.

The recapitalisation follows nearly a year of discussions with the Attorneys General of California and Delaware, resulting in stronger governance and accountability. With this structure, OpenAI aims to advance science, promote global cooperation, and share AI benefits broadly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Most transformative decade begins as Kurzweil’s AI vision unfolds

AI no longer belongs to speculative fiction or distant possibility. In many ways, it has arrived. From machine translation and real-time voice synthesis to medical diagnostics and language generation, today’s systems perform tasks once reserved for human cognition. For those watching closely, this shift feels less like a surprise and more like a milestone reached.

Ray Kurzweil, one of the most prominent futurists of the past half-century, predicted much of what is now unfolding. In 1999, his book The Age of Spiritual Machines laid a roadmap for how computers would grow exponentially in power and eventually match and surpass human capabilities. Over two decades later, many of his projections for the 2020s have materialised with unsettling accuracy.

The futurist who measured the future

Kurzweil’s work stands out not only for its ambition but for its precision. Rather than offering vague speculation, he produced a set of quantifiable predictions, 147 in total, with a claimed accuracy rate of over 85 percent. These ranged from the growth of mobile computing and cloud-based storage to real-time language translation and the emergence of AI companions.

Since 2012, he has worked at Google as Director of Engineering, contributing to the development of natural language understanding systems. He believes that exponential growth in computing power, driven by Moore’s Law and its successors, will eventually transform both our tools and our biology.

Reprogramming the body with code

One of Kurzweil’s most controversial but recurring ideas is that human ageing is, at its core, a software problem. He believes that by the early 2030s, advancements in biotechnology and nanomedicine could allow us to repair or even reverse cellular damage.

The logic is straightforward: if ageing results from accumulated biological errors, then precise intervention at the molecular level might prevent those errors or correct them in real time.


Some of these ideas are already being tested, though results remain preliminary. For now, claims about extending life remain speculative, but the research trend is real.

Kurzweil’s perspective places biology and computation on a converging path. His view is not that we will become machines, but that we may learn to edit ourselves with the same logic we use to program them.

The brain, extended

Another key milestone in Kurzweil’s roadmap is merging biological and digital intelligence. He envisions a future where nanorobots circulate through the bloodstream and connect our neurons directly to cloud-based systems. In this vision, the brain becomes a hybrid processor, part organic, part synthetic.

By the mid-2030s, he predicts we may no longer rely solely on internal memory or individual thought. Instead, we may access external information, knowledge, and computation in real time. Some current projects, such as brain–computer interfaces and neuroprosthetics, point in this direction, but remain in early stages of development.

Kurzweil frames this not as a loss of humanity but as an expansion of its potential.

The singularity hypothesis

At the centre of Kurzweil’s long-term vision lies the idea of a technological singularity. By 2045, he believes AI will surpass the combined intelligence of all humans, leading to a phase shift in human evolution. This moment, often misunderstood, is not a single event but a threshold after which change accelerates beyond human comprehension.


The singularity, in Kurzweil’s view, does not erase humanity. Instead, it integrates us into a system where biology no longer limits intelligence. The implications are vast, from ethics and identity to access and inequality. Who participates in this future, and who is left out, remains an open question.

Between vision and verification

Critics often label Kurzweil’s forecasts as too optimistic or detached from scientific constraints. Some argue that while trends may be exponential, progress in medicine, cognition, and consciousness cannot be compressed into neat timelines. Others worry about the philosophical consequences of merging with machines.

Still, it is difficult to ignore the number of predictions that have already come true. Kurzweil’s strength lies not in certainty, but in pattern recognition. His work forces a reckoning with what might happen if the current pace of change continues unchecked.

Whether or not we reach the singularity by 2045, the present moment already feels like the future he described.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA expands open-source AI models to boost global innovation

US tech giant NVIDIA has released open-source AI models and data tools spanning language, biology and robotics to accelerate innovation and widen access to cutting-edge research.

New model families, Nemotron, Cosmos, Isaac GR00T and Clara, are designed to empower developers to build intelligent agents and applications with enhanced reasoning and multimodal capabilities.

The company is contributing these open models and datasets to Hugging Face, further solidifying its position as a leading supporter of open research.

Nemotron models improve reasoning for digital AI agents, while Cosmos and Isaac GR00T enable physical AI and robotic systems to perform complex simulations and behaviours. Clara advances biomedical AI, allowing scientists to analyse RNA, generate 3D protein structures and enhance medical imaging.

Major industry partners, including Amazon Robotics, ServiceNow, Palantir and PayPal, are already integrating NVIDIA’s technologies to develop next-generation AI agents.

The initiative reflects NVIDIA’s aim to create an open ecosystem that supports both enterprise and scientific innovation through accessible, transparent and responsible AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!