Protecting human rights in neurotechnology

The Australian Human Rights Commission has called for neurotechnology to be developed with strong human rights protections and legal safeguards for neural data. Its report, ‘Peace of Mind: Navigating the ethical frontiers of neurotechnology and human rights’, warns that such technologies could expose sensitive brain data and increase risks of surveillance, discrimination, and violations of freedom of thought.

Innovations in neurotechnology, including brain-computer interfaces that help people with paralysis communicate and wearable devices that monitor workplace fatigue, offer significant benefits but also present profound ethical challenges. Commissioner Lorraine Finlay stressed that protecting privacy and human dignity must remain central to technological progress.

The report urges the government, industry, and civil society in Australia to ensure informed consent, ban neuromarketing that targets children, prohibit coercive workplace applications, and subject military uses to legal review. It also recommends a specialist agency to enforce safety standards, prioritising the rights and best interests of children, older people, and individuals with disabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

A new capitalism for the intelligent age

In his article in Time, Klaus Schwab argues that business is undergoing a deeper transformation than in previous technological revolutions. He notes that we are entering what he terms the ‘Intelligent Age’, where value lies less in physical assets and more in ideas, relationships and the ability to learn faster than the pace of change.

According to Schwab, the assumptions of the Industrial Age no longer hold: that growth meant simply scaling up, that efficiency trumped adaptability, and that workers were interchangeable. Instead, enterprises must become living ecosystems, adaptable platforms rather than pipelines.

However, Schwab warns that intelligent technologies such as AI and automation are not inherently benign.

On the one hand, they can amplify human potential; on the other, if misused, they risk diminishing it. Business leaders must therefore undergo not just digital transformation, but a mental transformation, embracing resilience, inclusivity and human dignity as core values.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers classifying ChatGPT as a search engine under the DSA. What are the implications?

The European Commission is considering whether OpenAI’s ChatGPT should be designated as a ‘Very Large Online Search Engine’ (VLOSE) under the Digital Services Act (DSA), a move that could reshape how generative AI tools are regulated across Europe.

OpenAI recently reported that ChatGPT’s search feature reached 120.4 million monthly users in the EU over the past six months, well above the 45 million threshold that triggers stricter obligations for major online platforms and search engines. The Commission confirmed it is reviewing the figures and assessing whether ChatGPT meets the criteria for designation.

The key question is whether ChatGPT’s live search function should be treated as an independent service or as part of the chatbot as a whole. Legal experts note that the DSA applies to intermediary services such as hosting platforms or search engines, categories that do not neatly encompass generative AI systems.

Implications for OpenAI

If designated, ChatGPT would be the first AI chatbot formally subject to DSA obligations, including systemic risk assessments, transparency reporting, and independent audits. OpenAI would need to evaluate how ChatGPT affects fundamental rights, democratic processes, and mental health, updating its systems and features based on identified risks.

‘As part of mitigation measures, OpenAI may need to adapt ChatGPT’s design, features, and functionality,’ said Laureline Lemoine of AWO. ‘Compliance could also slow the rollout of new tools in Europe if risk assessments aren’t planned in advance.’

The company could also face new data-sharing obligations under Article 40 of the DSA, allowing vetted researchers to request information about systemic risks and mitigation efforts, potentially extending to model data or training processes.

A test case for AI oversight

Legal scholars say the decision could set a precedent for generative AI regulation across the EU. ‘Classifying ChatGPT as a VLOSE will expand scrutiny beyond what’s currently covered under the AI Act,’ said Natali Helberger, professor of information law at the University of Amsterdam.

Experts warn the DSA would shift OpenAI from voluntary AI-safety frameworks and self-defined benchmarks to binding obligations, moving beyond narrow ‘bias tests’ to audited systemic-risk assessments, transparency and mitigation duties. ‘The DSA’s due diligence regime will be a tough reality check,’ said Mathias Vermeulen, public policy director at AWO.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NYPD sued over Microsoft-linked surveillance system

The New York Police Department is facing a lawsuit from the Surveillance Technology Oversight Project (S.T.O.P.), which accuses it of running an invasive citywide surveillance network built with Microsoft technology.

The system, known as the Domain Awareness System (DAS), has operated since 2012 and connects more than a dozen surveillance tools, including video cameras, biometric scanners, license plate readers, and financial analytics, into one centralised network. According to court filings, the system collects location data, social media activity, vehicle information, and even banking details to create ‘digital profiles’ of millions of residents.

S.T.O.P. argues that the network captures and stores data on all New Yorkers, including those never suspected of a crime, amounting to a ‘web of surveillance’ that violates constitutional rights. The group says newly obtained records show that DAS integrates citywide cameras, 911 and 311 call logs, police databases, and feeds from drones and helicopters into a single monitoring platform.

Calling DAS ‘an unprecedented violation of American life’, the organisation has asked the US District Court for the Southern District of New York to declare the city’s surveillance practices unconstitutional.

This is not the first time Microsoft’s technology has drawn scrutiny this year over data tracking and storage; its recently announced ‘Recall’ feature also raised alarm over potential privacy issues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

A licensed AI music platform emerges from UMG and Udio

UMG and Udio have struck an industry-first deal to license AI music, settle litigation, and launch a 2026 platform that blends creation, streaming, and sharing in a licensed environment. Training uses authorised catalogues, with fingerprinting, filtering, and revenue sharing for artists and songwriters.

Udio’s current app stays online during the transition under a walled garden, with fingerprinting, filtering, and other controls added ahead of relaunch. Rights management sits at the core: licensed inputs, transparent outputs, and enforcement that aims to deter impersonation and unlicensed derivatives.

Leaders frame the pact as a template for a healthier AI music economy that aligns rightsholders, developers, and fans. Udio calls it a way to champion artists while expanding fan creativity, and UMG casts it as part of its broader AI partnerships across platforms.

Commercial focus extends beyond headline licensing to business model design, subscriptions, and collaboration tools for creators. Expect guardrails around style guidance, attribution, and monetisation, plus pathways for official stems and remix packs so fan edits can be cleared and paid.

Governance will matter as usage scales, with audits of model inputs, takedown routes, and payout rules under scrutiny. Success will be judged on artist adoption, catalogue protection, and whether fans get safer ways to customise music without sacrificing rights.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Automakers and freight partners join NVIDIA and Uber to accelerate level 4 deployments

NVIDIA and Uber are partnering on level 4-ready fleets using the DRIVE AGX Hyperion 10, aiming to scale a unified network of human and autonomous drivers from 2027. A joint AI data factory built on NVIDIA Cosmos will curate training data, with the goal of reaching 100,000 vehicles over time.

DRIVE AGX Hyperion 10 is a reference compute and sensor stack for level 4 readiness across cars, vans, and trucks. Automakers can pair validated hardware with compatible autonomy software to speed safer, scalable, AI-defined mobility. Passenger and freight services gain faster paths from prototype to fleet.

Stellantis, Lucid, and Mercedes-Benz are preparing passenger platforms on Hyperion 10. Aurora, Volvo Autonomous Solutions, and Waabi are extending level 4 capability to long-haul trucking. Avride, May Mobility, Momenta, Nuro, Pony.ai, Wayve, and WeRide continue to build on NVIDIA DRIVE.

The production platform pairs dual DRIVE AGX Thor on Blackwell with DriveOS and a qualified multimodal sensor suite. Cameras, radar, lidar, and ultrasonics deliver 360-degree coverage. Modular design plus PCIe, Ethernet, confidential computing, and liquid cooling support upgrades and uptime.

NVIDIA is also launching Halos, a cloud-to-vehicle AI safety and certification system with an ANSI-accredited inspection lab and certification program. A multimodal AV dataset and reasoning VLA models aim to improve urban driving, testing, and validation for deployments.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN report shows human cost of Afghan telecommunications shutdowns

A new UN briefing highlights the severe human rights effects of recent telecommunications shutdowns in Afghanistan. The 48-hour nationwide disruption hindered access to healthcare, emergency services, banking, education, and daily communications, worsening the hardships already faced by the population.

Women and girls were disproportionately affected, with restricted contact with guardians preventing travel for essential activities and limiting access to online education. Health workers reported preventable deaths due to the inability to call for emergency assistance, while humanitarian aid was delayed in regions still recovering from natural disasters and involuntary returns from neighbouring countries.

The UN stresses that such shutdowns violate rights to freedom of expression and access to information, and urges authorities to ensure any communication restrictions comply with international human rights standards. Rapid restoration of services and legally justified measures are essential to protect the Afghan population.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US Internet Bill of Rights unveiled as response to global safety laws

A proposed US Internet Bill of Rights aims to protect digital freedoms as governments expand online censorship laws. The framework, developed by privacy advocates, calls for stronger guarantees of free expression, privacy, and access to information in the digital era.

Supporters argue that recent legislation such as the UK’s Online Safety Act, the EU’s Digital Services Act, and US proposals like KOSA and the STOP HATE Act have eroded civil liberties. They claim these measures empower governments and private firms to control online speech under the guise of safety.

The proposed US bill sets out rights including privacy in digital communications, platform transparency, protection against government surveillance, and fair access to the internet. It also calls for judicial oversight of censorship requests, open algorithms, and the protection of anonymous speech.

Advocates say the framework would enshrine digital freedoms through federal law or constitutional amendment, ensuring equal access and privacy worldwide. They argue that safeguarding free and open internet access is vital to preserve democracy and innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alliance science pact lifts US–Korea cooperation on AI, quantum, 6G, and space

The United States and South Korea agreed on a broad science and technology memorandum to deepen alliance ties and bolster Indo-Pacific stability. The non-binding pact aims to accelerate innovation while protecting critical capabilities. Both sides cast it as groundwork for a new Golden Age of Innovation.

AI sits at the centre. Plans include pro-innovation policy alignment, trusted exports across the stack, AI-ready datasets, safety standards, and enforcement of compute protection. Joint metrology and standards work links the US Center for AI Standards and Innovation with the AI Safety Institute of South Korea.

Trusted technology leadership extends beyond AI. The memorandum outlines shared research security, capacity building for universities and industry, and joint threat analysis. Telecommunications cooperation targets interoperable 6G supply chains and coordinated standards activity with industry partners.

Quantum and basic research are priority growth areas. Participants plan interoperable quantum standards, stronger institutional partnerships, and secured supply chains. Larger projects and STEM exchanges aim to widen collaboration, supported by shared roadmaps and engagement in global consortia.

Space cooperation continues across civil and exploration programmes. Strands include Artemis contributions, a Korean cubesat rideshare on Artemis II, and Commercial Lunar Payload Services. The Korea Positioning System will be developed for maximum interoperability with GPS.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Big Tech ramps up Brussels lobbying as EU considers easing digital rules

Tech firms now spend a record €151 million a year on lobbying at EU institutions, up from €113 million in 2023, according to transparency-register analysis by Corporate Europe Observatory and LobbyControl.

Spending is concentrated among US giants. The ten biggest tech companies, including Meta, Microsoft, Apple, Amazon, Qualcomm and Google, together outspend the top ten in pharma, finance and automotive. Meta leads with a budget above €10 million.

An estimated 890 full-time lobbyists now work to influence tech policy in Brussels, up from 699 in 2023, with 437 holding European Parliament access badges. In the first half of 2025, companies declared 146 meetings with the Commission and 232 with MEPs, with artificial intelligence regulation and the industry code of practice frequently on the agenda.

As industry pushes back on the Digital Markets Act and Digital Services Act and the Commission explores the ‘simplification’ of EU rulebooks, lobbying transparency campaigners fear a rollback of the progress made in regulating the digital sector. Companies, for their part, argue that lobbying helps lawmakers grasp complex markets and assess impacts on innovation and competitiveness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!