Conduit revolutionises neuro-language research with 10,000-hour dataset

A San Francisco start-up named Conduit has spent six months building what it claims is the largest neural language dataset ever assembled, capturing around 10,000 hours of non-invasive brain recordings from thousands of participants.

The project aims to train thought-to-text AI systems that interpret semantic intent from brain activity moments before speech or typing occurs.

Participants take part in extended conversational sessions instead of rigid laboratory tasks, interacting freely with large language models through speech or simplified keyboards.

Engineers found that natural dialogue produced higher-quality data, allowing tighter alignment between neural signals, audio and text while increasing overall language output per session.
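
To make the alignment claim concrete, below is a minimal sketch of how timestamped multimodal streams could be joined into training triples. The record shapes, field values and nearest-neighbour matching are illustrative assumptions, not a description of Conduit's actual pipeline.

```python
from bisect import bisect_left

# Hypothetical record shapes: each stream is a list of (timestamp_seconds, payload)
# sorted by time. None of these names come from Conduit's published work.
neural = [(0.00, "eeg_window_0"), (0.25, "eeg_window_1"), (0.50, "eeg_window_2")]
audio  = [(0.10, "audio_chunk_0"), (0.35, "audio_chunk_1")]
text   = [(0.30, "hello"), (0.60, "world")]

def nearest(stream, t):
    """Return the stream item whose timestamp is closest to t."""
    times = [ts for ts, _ in stream]
    i = bisect_left(times, t)
    candidates = stream[max(0, i - 1):i + 1]
    return min(candidates, key=lambda item: abs(item[0] - t))

# Align each text token with the nearest neural window and audio chunk,
# producing (neural, audio, text) training triples.
triples = [(nearest(neural, ts), nearest(audio, ts), (ts, tok)) for ts, tok in text]
for tr in triples:
    print(tr)
```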

Conduit developed its own sensing hardware after finding no commercial system capable of supporting large-scale multimodal recording.

Custom headsets combine multiple neural sensing techniques within dense training rigs, while future inference devices will be simplified once model behaviour becomes clearer.

Power systems and data pipelines were repeatedly redesigned to balance signal clarity with scalability, leading to improved generalisation across users and environments.

As data volume increased, operational costs fell through automation and real-time quality control, allowing continuous collection across long daily schedules.
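
As an illustration of what automated, real-time quality control can look like, the sketch below drops recording windows that are flat (sensor dropout) or saturated (motion artefacts) as they arrive. The thresholds and window format are invented for the example; Conduit has not published its checks.

```python
import statistics

def passes_quality_check(window, flat_var=1e-6, clip_value=500.0):
    """Flag a recording window as unusable if it is flat (sensor dropout)
    or saturated (amplifier clipping). Thresholds are purely illustrative."""
    if statistics.variance(window) < flat_var:    # dead or disconnected channel
        return False
    if max(abs(v) for v in window) > clip_value:  # saturation / motion artefact
        return False
    return True

# Drop bad windows as they stream in, instead of cleaning data after the fact.
stream = [[1.0, 1.2, 0.9, 1.1], [0.0, 0.0, 0.0, 0.0], [3.0, 900.0, 2.0, 1.0]]
kept = [w for w in stream if passes_quality_check(w)]
print(f"kept {len(kept)} of {len(stream)} windows")
```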

With data gathering largely complete, the focus has shifted toward model training, raising new questions about the future of neural interfaces, AI-mediated communication and cognitive privacy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Europe risks falling behind without telecom scale, Telefónica says

Telefónica has called for a shift in Europe’s telecommunications policy, arguing that market fragmentation is undermining investment, digital competitiveness, and the continent’s technological sovereignty, according to a new blog post from the company.

In the post, Telefónica says Europe’s emphasis on maximising retail competition has produced a highly fragmented operator landscape. It cites industry data showing the average European operator serves around five million customers, far fewer than peers in the United States or China.

The company argues that this lack of scale explains Europe’s lower per-capita investment in telecoms infrastructure and is slowing the rollout of technologies such as standalone 5G, fibre networks, and sovereign cloud and AI platforms.

Telefónica points to recent reports by Mario Draghi and Enrico Letta as signs of a policy shift, with EU institutions placing greater weight on investment capacity, resilience, and dynamic efficiency alongside traditional competition objectives.

The blog post concludes that Europe faces a strategic choice between preserving fragmented markets and enabling responsible consolidation. Telefónica says carefully regulated mergers could support sustainability, reduce regional digital divides, and strengthen Europe’s digital infrastructure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New York law requires AI disclosure in advertising

A new law in New York, US, will require advertisers to disclose when AI-generated people appear in commercial content. Governor Kathy Hochul said the measure brings transparency and protects consumers as synthetic avatars become more widespread.

A second law now requires consent from heirs or executors when using a deceased person’s likeness for commercial purposes. The rule updates the state’s publicity rights, which previously lacked clarity in the era of generative AI.

Industry groups welcomed the move, saying it addresses the risks posed by unregulated AI usage, particularly for actors in the film and television industries. The disclosure must be conspicuous when an avatar does not correspond to a real human.

Specific expressive works such as films, games and shows are exempt when the avatar matches its use in the work. The laws arrive as national debate intensifies and President Donald Trump signals potential attempts to limit state-level AI regulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-powered grid pilot aims to cut energy costs in Ottawa

Canada has announced new federal funding to pilot AI tools on the electricity grid, backing a project designed to improve reliability, affordability and efficiency as energy demand grows.

The government of Canada will provide $6 million to Hydro Ottawa under the Ottawa Distributed Energy Resource Accelerator programme. The initiative will utilise AI-enhanced predictive analytics to forecast peak demand and help balance electricity supply and demand in near real-time.
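
As a toy illustration of peak-demand forecasting, the sketch below predicts the next daily peak from an exponentially weighted average of recent peaks. A production system would add weather, calendar and behavioural features; all numbers here are hypothetical, not Hydro Ottawa data.

```python
# Hypothetical recent daily peaks in megawatts (newest last).
recent_daily_peaks_mw = [812, 798, 845, 860, 871]

def forecast_next_peak(peaks, decay=0.7):
    """Exponentially weighted average: newer days count more."""
    weights = [decay ** i for i in range(len(peaks) - 1, -1, -1)]
    return sum(w * p for w, p in zip(weights, peaks)) / sum(weights)

print(f"forecast peak: {forecast_next_peak(recent_daily_peaks_mw):.0f} MW")
```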

The project will turn customer-owned technologies such as smart thermostats, electric vehicle chargers and home batteries into responsive grid resources. By aggregating them, Hydro Ottawa aims to manage local constraints and reduce costly network upgrades, starting in areas like Kanata North that are experiencing rapid growth.
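
The aggregation idea can be sketched as a simple dispatch routine: when a local constraint approaches, curtail the largest flexible loads first until enough capacity is freed. The device names, capacities and greedy strategy below are assumptions for illustration, not Hydro Ottawa's design.

```python
# Hypothetical customer-owned devices enrolled as flexible grid resources.
devices = [
    {"id": "thermostat-17", "sheddable_kw": 1.2},
    {"id": "ev-charger-04", "sheddable_kw": 7.0},
    {"id": "home-battery-09", "sheddable_kw": 5.0},
]

def dispatch(devices, required_kw):
    """Greedily curtail the largest flexible loads first until the local
    constraint is met; returns the devices asked to respond."""
    called, shed = [], 0.0
    for d in sorted(devices, key=lambda d: d["sheddable_kw"], reverse=True):
        if shed >= required_kw:
            break
        called.append(d["id"])
        shed += d["sheddable_kw"]
    return called, shed

print(dispatch(devices, required_kw=10.0))
# -> (['ev-charger-04', 'home-battery-09'], 12.0)
```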

Officials say the programme will give households more control over energy use while strengthening grid resilience. The pilot is also intended to serve as a model that could be scaled across other neighbourhoods and electricity systems.

The funding comes through the Energy Innovation Program, which supports innovative grid demonstrations and AI-driven energy projects. Ottawa says such initiatives are key to modernising Canada’s electricity system and supporting the transition to a low-carbon economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI outlines safeguards as AI cyber capabilities advance

Cyber capabilities in advanced AI models are improving rapidly, delivering clear benefits for cyberdefence while introducing new dual-use risks that require careful management, according to OpenAI’s latest assessment.

The company points to sharp gains in capture-the-flag performance, with success rates rising from 27 percent in August to 76 percent by November 2025. OpenAI says future models could reach high cyber capability, including assistance with sophisticated intrusion techniques.

To address this, OpenAI says it is prioritising defensive use cases, investing in tools that help security teams audit code, patch vulnerabilities, and respond more effectively to threats. The goal is to give defenders an advantage in an often under-resourced environment.

OpenAI argues that cybersecurity cannot be governed through a single safeguard, as defensive and offensive techniques overlap. Instead, it applies a defence-in-depth approach that combines access controls, monitoring, detection systems, and extensive red teaming to limit misuse.
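
Defence-in-depth is straightforward to express in code: a request must clear several independent layers, so no single control is a point of failure. The layers below are generic illustrations and do not reflect OpenAI's actual safeguards.

```python
# Each layer is an independent check; a request is served only if all pass.
def access_control(req):   return req.get("user_verified", False)
def content_filter(req):   return "exploit_request" not in req.get("prompt", "")
def rate_limiter(req):     return req.get("requests_this_minute", 0) < 60

LAYERS = [access_control, content_filter, rate_limiter]

def handle(request):
    for layer in LAYERS:
        if not layer(request):
            return f"blocked by {layer.__name__}"  # deny at the first failing layer
    return "served"

print(handle({"user_verified": True, "prompt": "audit this code",
              "requests_this_minute": 3}))
```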

Alongside these measures, the company plans new initiatives, including trusted access programmes for defenders, agent-based security tools in private testing, and the creation of a Frontier Risk Council. OpenAI says these efforts reflect a long-term commitment to cyber resilience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Disney backs OpenAI with $1bn investment and licensing pact

The Walt Disney Company has struck a landmark agreement with OpenAI, becoming the first major content licensing partner on Sora, the AI company’s short-form generative video platform.

Under the three-year deal, Sora will generate short videos using more than 200 animated and creature characters from Disney, Pixar, Marvel, and Star Wars. The licence also covers ChatGPT Images, excluding talent likenesses and voices.

Beyond licensing, Disney will become a major OpenAI customer, using its APIs to develop new products and experiences, including for Disney+, while deploying ChatGPT internally across its workforce. Disney will also make a $1 billion equity investment in OpenAI and receive warrants for additional shares.

Both companies frame the partnership as a test case for responsible AI in creative industries. Executives say the agreement is designed to expand storytelling possibilities while protecting creators’ rights, user safety, and intellectual property across platforms.

Subject to final approvals, Sora-generated Disney content is expected to begin rolling out in early 2026. Curated selections may appear on Disney+, marking a new phase in how established entertainment brands engage with generative AI tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Tiiny AI unveils the Pocket Lab supercomputer

Tiiny AI has revealed the Pocket Lab, a palm-sized device recognised as the world’s smallest personal AI supercomputer. Guinness World Records confirmed the title, noting its ability to run models with up to 120 billion parameters.

The Pocket Lab uses an Armv9.2 CPU, a discrete NPU delivering 190 TOPS, and 80GB of LPDDR5X memory. Popular open-source models such as GPT-OSS, Llama, Qwen, Mistral, DeepSeek and Phi are supported. Tiiny AI says its hardware makes large-scale reasoning possible in a handheld format.
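
A quick back-of-envelope check shows why 80GB of memory is plausible for a 120-billion-parameter model only under aggressive quantisation. The bytes-per-parameter figures below are standard rules of thumb, not Tiiny AI's published configuration.

```python
# Weight memory = parameters x bytes per parameter, ignoring activations/cache.
params = 120e9
for label, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    gb = params * bytes_per_param / 1e9
    fits = "fits" if gb <= 80 else "does not fit"
    print(f"{label}: {gb:.0f} GB of weights -> {fits} in 80 GB")
# fp16: 240 GB (no), int8: 120 GB (no), 4-bit: 60 GB (yes, with headroom)
```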

Two in-house technologies enhance efficiency by distributing workloads and reducing unnecessary activations. TurboSparse manages sparse neuron activity to preserve capability while improving speed, and PowerInfer splits computation across the CPU and NPU.
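
The general idea behind activation sparsity can be shown in a few lines: if a predictor identifies which neurons will be inactive for a given input, their computation can be skipped without changing the output. This is a conceptual sketch, not TurboSparse or PowerInfer internals; the demo cheats by using the dense result as an "oracle" predictor.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))   # tiny "layer": 8 neurons, 4 inputs
x = rng.standard_normal(4)

full = np.maximum(W @ x, 0.0)                  # dense ReLU layer output
active = np.nonzero(full > 0)[0]               # oracle "predictor" for the demo
sparse = np.zeros(8)
sparse[active] = np.maximum(W[active] @ x, 0)  # compute only predicted-active rows

assert np.allclose(full, sparse)               # same output, fewer multiplications
print(f"computed {len(active)}/8 neuron rows")
```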

Tiiny AI plans a full showcase at CES 2026, with pricing and release information still pending. Analysts want to see how the device performs in real-world tasks compared with much larger systems. The company believes the Pocket Lab will shift expectations for personal AI hardware.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Time honours leading AI architects worldwide

Time magazine has named the so-called architects of AI as its Person of the Year, recognising leading technologists reshaping global industries. Figures highlighted include Sam Altman, Jensen Huang, Elon Musk, Mark Zuckerberg, Lisa Su, Demis Hassabis, Dario Amodei and Fei-Fei Li.

Time emphasises that major AI developers have placed enormous bets on infrastructure and capability. Their competition and collaboration have accelerated rapid adoption across businesses and households.

The magazine also examined negative consequences linked to rapid deployment, including mental health concerns and reported chatbot-related lawsuits. Economists warn of significant labour disruption as companies adopt automated systems widely.

The editorial team framed 2025 as a tipping point when AI moved into everyday life. The publication resisted using AI-generated imagery for its cover, choosing traditional artists instead. Industry observers say the selection reflects AI’s central role in shaping economic and social priorities throughout the year.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

RightsX Summit 2025: Governing technology through human rights

Human Rights Day takes place on 10 December each year to commemorate the Universal Declaration of Human Rights (UDHR), adopted by the UN in 1948. It functions as a reminder of shared international commitments to dignity, equality and freedom, and seeks to reaffirm the relevance of these principles to contemporary challenges.

In 2025, the theme ‘Human Rights: Our Everyday Essentials’ aimed to reconnect people with how rights shape daily life, emphasising that rights remain both positive and practical foundations for individual and collective well-being.

Human Rights Day also serves as a moment for reflection and action. In a world shaped by rapid technological change, geopolitical instability and social inequalities, the day encourages institutions, governments and civil society to coordinate on priorities that respond to contemporary threats and opportunities.

In this context, the RightsX Summit was strategically scheduled. By centring discussions on human rights, technology, data and innovation around Human Rights Day, the event reinforced that digital governance issues are central to rights protection in the twenty-first century. The alignment elevated technology from a technical topic to a political and ethical concern within human rights debates.

The RightsX Summit 2025

The summit brought together governments, the UN system, civil society, private sector partners and innovators to explore how technology can advance human rights in the digital age. Its aim was to produce practical insights, solution-focused dialogues and discussions that could inform a future human rights toolbox shaped by technology, data, foresight and partnerships.

Central themes included AI, data governance, predictive analytics, digital security, privacy and other emerging technologies. Discussions analysed how these tools can be responsibly used to anticipate risks, improve monitoring, and support evidence-based decision-making in complex rights contexts.

The summit also examined the challenge of aligning technological deployment with internationally recognised human rights norms, exploring the mechanisms by which innovation can reinforce equity, justice and accountability in digital governance.

The summit emphasised that technological innovation is inseparable from global leadership in human rights. Aligning emerging tools with established norms was highlighted as critical to ensure that digital systems do not exacerbate existing inequalities or create new risks.

Stakeholders were encouraged to consider not only technical capabilities but also the broader social, legal and ethical frameworks within which technology operates.

The 30x30x30 Campaign

The 30x30x30 initiative represents an ambitious attempt to operationalise human rights through innovation. Its objective is to deliver 30 human rights innovations for 30 communities by 2030, aligned with the 30 articles of the UDHR.

The campaign emphasises multistakeholder collaboration by uniting countries, companies and communities as co-creators of solutions that are both technologically robust and socially sensitive. A distinctive feature of 30x30x30 is its focus on scalable, real-world tools that address complex rights challenges.

Examples include AI-based platforms for real-time monitoring, disaster tracking systems, digital storytelling tools and technologies for cyber peace. These tools are intended to serve both institutional responders and local communities, demonstrating how technology can amplify human agency in rights contexts.

The campaign also highlights the interdependence of innovation and human rights. Traditional approaches alone cannot address multidimensional crises such as climate displacement, conflict, or systemic inequality, and innovation without human-rights grounding risks reinforcing existing disparities.

‘Innovation is Political’

Volker Türk, UN High Commissioner for Human Rights, emphasised that ‘innovation is political’. He noted that the development and deployment of technology shape who benefits and how, and that decisions regarding access, governance and application of technological tools carry significant implications for equity, justice and human dignity.

This framing highlights the importance of integrating human rights considerations into innovation policy. By situating human rights at the centre of technological development, the summit promoted governance approaches that ensure innovation contributes positively to societal outcomes.

It encouraged multistakeholder responsibility, including governments, companies and civil society, to guide technology in ways that respect and advance human rights.

Human Rights Data Exchange (HRDx)

HRDx is a proposed global platform intended to improve the ethical management of human rights data. It focuses on creating systems where information is governed responsibly, ensuring that privacy, security and protection of personal data are central to its operation.

The platform underlines that managing data is not only a technical issue but also a matter of governance and ethics. By prioritising transparency, accountability and data protection, it aims to provide a framework that supports the responsible use of information without compromising human rights.

Through these principles, HRDx highlights the importance of embedding ethical oversight into technological tools. Its success relies on maintaining the balance between utilising data to inform decision-making and upholding the rights and dignity of individuals. That approach ensures that technology can contribute to human rights protection while adhering to rigorous ethical standards.

Trustworthy AI in human rights

AI offers significant opportunities to enhance human rights monitoring and protection. For example, AI can help to analyse large datasets to detect trends, anticipate crises, and identify violations of fundamental freedoms. Predictive analytics can support human rights foresight, enabling early interventions to prevent conflicts, trafficking, or discrimination.
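
As a stylised example of such foresight, the sketch below flags a region when weekly incident reports rise far above the recent baseline. The data and threshold are invented for illustration; real early-warning systems would require far more careful statistics and context.

```python
import statistics

# Hypothetical weekly counts of reported incidents; the last value is this week.
weekly_reports = [14, 11, 15, 13, 12, 16, 41]

baseline = weekly_reports[:-1]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
current = weekly_reports[-1]

if current > mean + 3 * stdev:  # simple z-score style trigger
    print(f"alert: {current} reports vs baseline {mean:.1f} ± {stdev:.1f}")
```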

At the same time, trust in AI for decision-making remains a significant challenge. AI systems trained on biased or unrepresentative data can produce discriminatory outcomes, undermine privacy and erode public trust.

These risks are especially acute in applications where algorithmic decisions affect access to services or determine individual liberties. Mitigating them requires governance frameworks that ensure transparency, accountability and ethical oversight.

In the human rights context, trustworthy AI means designing systems that are explainable, auditable and accountable. Human oversight remains essential, particularly in decisions with serious implications for individuals’ rights.

The Summit highlighted the importance of integrating human rights principles such as non-discrimination, equality and procedural fairness into AI development and deployment processes.

Ethics, Accountability and Governance

Aligning technology with human rights necessitates robust ethical frameworks, effective governance, and transparent accountability. Digital systems must uphold fairness, transparency, inclusivity, and human dignity throughout their lifecycle, from design to deployment and ongoing operation.

Human rights impact assessments at the design stage help identify potential risks and guide responsible development. Engaging users and affected communities ensures technologies meet real needs.

Continuous monitoring and audits maintain compliance with ethical standards and highlight areas for improvement.

Effective governance ensures responsibilities are clearly defined, decisions are transparent, and corrective actions can be taken when rights are compromised. By combining ethical principles with robust governance and accountability, technology can actively protect and support human rights.

Future pathways for rights-centred innovation

The integration of human rights into technology represents a long-term project. Establishing frameworks that embed accountability, transparency and ethical oversight ensures that emerging tools enhance freedom, equality and justice.

Digital transformation, when guided by human rights, creates opportunities to address complex challenges. RightsX 2025 demonstrated that innovation, governance and ethical foresight can converge to shape a digital ecosystem that safeguards human dignity while fostering progress.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU survey shows strong public backing for digital literacy in schools

A new Eurobarometer survey finds that Europeans want digital skills to hold the same status in schools as reading, mathematics and science.

Citizens view digital competence as essential for learning, future employment and informed participation in public life.

Nine in ten respondents believe that schools should guide pupils on how to handle the harmful effects of digital technologies on their mental health and well-being, rather than treating such issues as secondary concerns.

Most Europeans also support a more structured approach to online information. Eight in ten say digital literacy helps them avoid misinformation, while nearly nine in ten want teachers to be fully prepared to show students how to recognise false content.

A majority continues to favour restrictions on smartphones in schools, yet an even larger share supports the use of digital tools specifically designed for learning.

More than half find that AI brings both opportunities and risks for classrooms, and believe these should be examined in greater depth.

Almost half want the EU to shape standards for the use of educational technologies, including rules on AI and data protection.

The findings will inform the European Commission’s 2030 Roadmap on digital education and skills, scheduled for release next year as part of the Union of Skills initiative.

The survey, carried out across all member states, reflects a growing expectation that digital education should become a central pillar of Europe’s teaching systems, rather than an optional enhancement.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!