EU urges stronger AI oversight after Grok controversy

A recent incident involving Grok, the AI chatbot developed by xAI, has reignited European Union calls for stronger oversight of advanced AI systems.

Comments generated by Grok prompted criticism from policymakers and civil society groups, leading to renewed debate over AI governance and voluntary compliance mechanisms.

The chatbot’s responses, which circulated earlier this week, included highly controversial language and references to historical figures. In response, xAI stated that the content was removed and that technical steps were being taken to prevent similar outputs from appearing in the future.

European policymakers said the incident highlights the importance of responsible AI development. Brando Benifei, an Italian lawmaker who co-led the EU AI Act negotiations, said the event illustrates the systemic risks the new regulation seeks to mitigate.

Christel Schaldemose, a Danish member of the European Parliament and co-lead on the Digital Services Act, echoed those concerns. She emphasised that such incidents underline the need for clear and enforceable obligations for developers of general-purpose AI models.

The European Commission is preparing to release guidance aimed at supporting voluntary compliance with the bloc’s new AI legislation. This code of practice, which has been under development for nine months, is expected to be published this week.

Earlier drafts of the guidance included provisions requiring developers to share information on how they address systemic risks. Reports suggest that some of these provisions may have been weakened or removed in the final version.

A group of five lawmakers expressed concern over what they described as the last-minute removal of key transparency and risk mitigation elements. They argue that strong guidelines are essential for fostering accountability in the deployment of advanced AI models.

The incident also brings renewed attention to the Digital Services Act and its enforcement, as X, the social media platform where Grok operates, is currently under EU investigation for potential violations related to content moderation.

General-purpose AI systems, such as OpenAI’s GPT, Google’s Gemini and xAI’s Grok, will be subject to additional requirements under the EU AI Act beginning 2 August. Obligations include disclosing training data sources, addressing copyright compliance, and mitigating systemic risks.

While these requirements are mandatory, their implementation is expected to be shaped by the Commission’s voluntary code of practice. Industry groups and international stakeholders have voiced concerns over regulatory burdens, while policymakers maintain that safeguards are critical for public trust.

The debate over Grok’s outputs reflects broader challenges in balancing AI innovation with the need for oversight. The EU’s approach, combining binding legislation with voluntary guidance, seeks to offer a measured path forward amid growing public scrutiny of generative AI technologies.

The rise and risks of synthetic media

Synthetic media transforms content creation across sectors

The rapid development of AI has enabled significant breakthroughs in synthetic media, opening up new opportunities in healthcare, education, entertainment and many other sectors.

Instead of relying on traditional content creation, companies are now using advanced tools to produce immersive experiences, training simulations and personalised campaigns. But what exactly is synthetic media?

Synthetic media refers to content produced partly or entirely by AI, including AI-generated images, music, video and speech. Tools such as ChatGPT, Midjourney and voice synthesisers are now widely used in both creative and commercial settings.

The global market for synthetic media is expanding rapidly. Valued at USD 4.5 billion in 2023, it is projected to reach USD 16.6 billion by 2033, driven mainly by tools that convert text into images, videos or synthetic speech.
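
Taken together, those figures imply a compound annual growth rate of roughly 14 per cent, as a quick back-of-the-envelope calculation shows:

$$\mathrm{CAGR} = \left(\frac{16.6}{4.5}\right)^{1/10} - 1 \approx 3.69^{0.1} - 1 \approx 0.14 \;\;(\approx 14\%\ \text{per year})$$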

The appeal lies in its scalability and flexibility: small teams can now quickly produce a wide range of professional-grade content and easily adapt it for multiple audiences or languages.

However, as synthetic media becomes more widespread, so do the ethical challenges it poses.

How deepfakes threaten trust and security

The same technology has raised serious concerns as deepfakes – highly realistic but fake audio, images and videos – become harder to detect and more frequently misused.

Deepfakes, a subset of synthetic media, go a step further by creating content that intentionally imitates real people in deceptive ways, often for manipulation or fraud.

The technology behind deepfakes involves face swapping through variational autoencoders and voice cloning via synthesised speech patterns. The entry barrier is low, making these tools accessible to the general public.
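
To make the face-swapping mechanism concrete, the sketch below shows the shared-encoder, twin-decoder autoencoder pattern popularised by early face-swap tools, written in PyTorch. It is a minimal illustration with assumed dimensions, not any production system: real deepfake pipelines add variational sampling, adversarial losses, face alignment and far larger networks.

```python
# Minimal sketch of the shared-encoder / twin-decoder autoencoder idea behind
# classic face-swap deepfakes. Illustrative only; dimensions are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent vector; one decoder per identity."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()                  # shared between both identities
decoder_a, decoder_b = Decoder(), Decoder()

# Training (not shown) reconstructs person A via decoder_a and person B via
# decoder_b. The "swap" is simply routing A's latent through B's decoder:
face_a = torch.rand(1, 3, 64, 64)    # stand-in for a real aligned face crop
swapped = decoder_b(encoder(face_a))
print(swapped.shape)                 # torch.Size([1, 3, 64, 64])
```

The swap works because both decoders learn to reconstruct faces from the same latent space: feeding person A's latent code into person B's decoder renders B's face with A's pose and expression.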

First surfacing on Reddit in 2017, deepfakes have quickly expanded into healthcare, entertainment, and education, yet they also pose a serious threat when misused. For example, a major financial scam recently cost a company USD 25 million due to a deepfaked video call with a fake CFO.

Synthetic media fuels global political narratives

Politicians and supporters have often openly used generative AI to share satirical or exaggerated content, rather than attempting to disguise it as real.

In Indonesia, AI even brought back the likeness of former dictator Suharto to endorse candidates, while in India, meme culture thrived but failed to significantly influence voters’ decisions.

In the USA, figures like Elon Musk and Donald Trump have embraced AI-generated memes and voice parodies to mock opponents or improve their public image.

While these tools have made it easier to create misinformation, researchers such as UC Berkeley’s Hany Farid argue that the greater threat lies in the gradual erosion of trust, rather than a single viral deepfake.

It is becoming increasingly difficult for users to distinguish truth from fiction, leading to a contaminated information environment that harms public discourse. Legal concerns, public scrutiny, and the proliferation of 'cheapfakes' (manipulated media that do not rely on AI) may explain why the worst predictions have not yet materialised.

Nonetheless, experts warn that the use of AI in campaigns will only grow more sophisticated. Without clear regulation and ethical safeguards, future elections may struggle to contain the disruptive influence of synthetic media.

Children use AI to create harmful deepfakes

School-aged children are increasingly using AI tools to generate explicit deepfake images of their classmates, often targeting girls. What began as a novelty has become a new form of digital sexual abuse.

With just a smartphone and a popular app, teenagers can now create and share highly realistic fake nudes, turning moments of celebration, like a bat mitzvah photo, into weapons of humiliation.

Rather than being treated as simple pranks, these acts have severe psychological consequences for victims and are leaving lawmakers scrambling.

Educators and parents are now calling for urgent action. Instead of just warning teens about criminal consequences, schools are starting to teach digital ethics, consent, and responsible use of technology.

Programmes that explain the harm caused by deepfakes may offer a better path forward than punishment alone. Experts say the core issues—respect, agency, and safety—are not new.

The tools may be more advanced, but the message remains the same: technology must be used responsibly, not to exploit others.

Deepfakes become weapons of modern war

Deepfakes can also be deployed to sow confusion, falsify military orders, and manipulate public opinion. While not all such tactics will succeed, their growing use in psychological and propaganda operations cannot be ignored.

Intelligence agencies are already exploring how to integrate synthetic media into information warfare strategies, despite the risk of backfiring.

A new academic study from University College Cork examined how such videos spread on social media and how users reacted.

While many responded with scepticism and attempts at verification, others began accusing the real footage of being fake. The growing confusion risks creating an online environment where no information feels trustworthy, exactly the outcome hostile actors might seek.

While deception has long been part of warfare, deepfakes challenge the legal boundaries defined by international humanitarian law.

Falsifying surrender orders to launch ambushes could qualify as perfidy—a war crime—while misleading enemies about troop positions may remain lawful.

Yet when civilians are caught in the crossfire of digital lies, violations of the Geneva Conventions become harder to ignore.

Regulation is lagging behind the technology, and without urgent action, deepfakes may become as destructive as conventional weapons, redefining both warfare and the concept of truth.

The good side of deepfake technology

Yet, not all applications are harmful. In medicine, deepfakes can aid therapy or generate synthetic ECG data for research while protecting patient privacy. In education, the technology can recreate historical figures or deliver immersive experiences.

Journalists and human rights activists also use synthetic avatars for anonymity in repressive environments. Meanwhile, in entertainment, deepfakes offer cost-effective ways to recreate actors or build virtual sets.

These examples highlight how the same technology that fuels disinformation can also be harnessed for innovation and the public good.

Governments push for deepfake transparency

However, the risks are rising. Misinformation, fraud, nonconsensual content, and identity theft are all becoming more common.

The danger of copyright infringement and data privacy violations also looms large, particularly when AI-generated material pulls content from social media or copyrighted works without permission.

Policymakers are taking action, but is it enough?

The USA has banned AI robocalls, and Europe’s AI Act aims to regulate synthetic content. Experts emphasise the need for worldwide cooperation, with regulation focusing on consent, accountability, and transparency.

Embedding watermarks and enforcing civil liabilities are among the strategies being considered. To navigate the new landscape, a collaborative effort across governments, industry, and the public is crucial, not just to detect deepfakes but also to define their responsible use.

Some emerging detection methods include certifying content provenance, where creators or custodians attach verifiable information about the origin and authenticity of media.
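
As a rough illustration of that idea, the sketch below hashes a media file and signs the digest with a creator's key, so any later alteration breaks verification. It is a toy example using the Python 'cryptography' package, not the C2PA standard itself, which embeds far richer signed manifests.

```python
# Minimal provenance sketch: hash the media bytes, sign the digest with the
# creator's private key, and let anyone verify it later with the public key.
# Requires the third-party 'cryptography' package.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def media_digest(data: bytes) -> bytes:
    """Content fingerprint: changes if even one pixel or sample changes."""
    return hashlib.sha256(data).digest()

# Creator side: sign the digest at publication time.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media = b"...raw bytes of an image, video or audio file..."
signature = private_key.sign(media_digest(media))

# Verifier side: recompute the digest and check the attached signature.
def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, media_digest(data))
        return True
    except InvalidSignature:
        return False

print(is_authentic(media, signature))              # True: untouched original
print(is_authentic(media + b"tamper", signature))  # False: content altered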

Automated detection systems analyse inconsistencies in facial movements, speech patterns, or visual blending to identify manipulated media. Additionally, platform moderation based on account reputation and behaviour helps filter suspicious sources.

Systems that process or store personal data must also comply with privacy regulations, ensuring individuals’ rights to correct or erase inaccurate data.

Yet, despite these efforts, many of these systems still struggle to reliably distinguish synthetic content from authentic material.

As detection methods lag, organisations such as Reality Defender and Witness are working to raise awareness and develop countermeasures.

The rise of AI influencers on social media

Another subset of synthetic media is AI-generated influencers. AI (or synthetic) influencers are virtual personas powered by AI, designed to interact with followers, create content, and promote brands across social media platforms.

Unlike traditional influencers, they are not real people but computer-generated characters that simulate human behaviour and emotional responses. Developers use deep learning, natural language processing, and sophisticated graphic design to make these influencers appear lifelike and relatable.

Once launched, they operate continuously, often in multiple languages and across different time zones, giving brands a global presence without the limitations of human engagement.

These virtual influencers offer several key advantages for brands. They can be precisely controlled to maintain consistent messaging and avoid the unpredictability that can come with human influencers.

Their scalability allows them to reach diverse markets with tailored content, and over time, they may prove more cost-efficient due to their ability to produce content at scale without the ongoing costs of human talent.

Brands can also experiment with creative storytelling in new and visually compelling ways that might be difficult for real-life creators.

Synthetic influencers have also begun appearing in the healthcare sector. Although their adoption there remains limited, it is expected to grow rapidly.

Their rise also brings significant challenges. AI influencers lack genuine authenticity and emotional depth, which can hinder the formation of meaningful connections with audiences.

Their use raises ethical concerns around transparency, especially if followers are unaware that they are interacting with AI.

Data privacy is another concern, as these systems often rely on collecting and analysing large amounts of user information to function effectively.

Additionally, while they may save money in the long run, creating and maintaining a sophisticated AI influencer involves a substantial upfront investment.

Study warns of backlash from synthetic influencers

A new study from Northeastern University urges caution when using AI-powered influencers, despite their futuristic appeal and rising prominence.

While these digital figures may offer brands a modern edge, they risk inflicting greater harm on consumer trust compared to human influencers when problems arise.

The findings show that consumers are more inclined to hold the brand accountable if a virtual influencer promotes a faulty product or spreads misleading information.

Rather than viewing these AI personas as independent agents, users tend to see them as direct reflections of the company behind them. Instead of blaming the influencer, audiences shift responsibility to the brand itself.

Interestingly, while human influencers are more likely to be held personally liable, virtual influencers still cause deeper reputational damage.

People assume that their actions are fully scripted and approved by the business, making any error seem deliberate or embedded in company practices rather than a personal mistake.

Regardless of the circumstances, AI influencers are reshaping the marketing landscape by providing an innovative and highly adaptable tool for brands. While they are unlikely to replace human influencers entirely, they are expected to play a growing role in digital marketing.

Their continued rise will likely force regulators, brands, and developers to establish clearer ethical standards and guidelines to ensure responsible and transparent use.

Shaping the future of synthetic media

In conclusion, the growing presence of synthetic media invites both excitement and reflection. As researchers, policymakers, and creators grapple with its implications, the challenge lies not in halting progress but in shaping it thoughtfully.

All forms of synthetic media, like any other form of technology, have a dual capacity to empower and exploit, demanding a new digital literacy — one that prioritises critical engagement, ethical responsibility, and cross-sector collaboration.

On the one hand, deepfakes threaten democratic stability, information integrity, and civilian safety, blurring the line between truth and fabrication in conflict, politics, and public discourse.

On the other hand, AI influencers are transforming marketing and entertainment by offering scalable, controllable, and hyper-curated personas that challenge notions of authenticity and human connection.

Rather than fearing the tools themselves, we need to focus on cultivating the norms and safeguards that determine how, and for whom, they are used. Ultimately, these tools are meant to enhance our way of life, not undermine it.

IGF leadership panel explores future of digital governance

As the Internet Governance Forum (IGF) prepares to mark its 20th anniversary, members of the IGF Leadership Panel gathered in Norway to present a strategic vision for strengthening the forum’s institutional role and ensuring greater policy impact.

The session explored proposals to make the IGF a permanent UN institution, improve its output relevance for policymakers, and enhance its role in implementing outcomes from WSIS+20 and the Global Digital Compact.

While the tone remained largely optimistic, Nobel Peace Prize laureate Maria Ressa voiced a more urgent appeal, calling for concrete action in a rapidly deteriorating information ecosystem.

Speakers emphasized the need for a permanent and better-resourced IGF. Vint Cerf, Chair of the Leadership Panel, reflected on the evolution of internet governance, arguing that ‘we must maintain enthusiasm for computing’s positive potential whilst addressing problems’.

He acknowledged growing threats like AI-driven disruption and information pollution, which risk undermining democratic governance and economic fairness online. Maria Fernanda Garza and Lise Fuhr echoed the call, urging for the IGF to be integrated into the UN structure with sustainable funding and measurable performance metrics. Fuhr commended Norway’s effort to bring 16 ministers from the Global South to the meeting, framing it as a model for future inclusive engagement.

A significant focus was placed on integrating IGF outcomes with the WSIS+20 and Global Digital Compact processes. Amandeep Singh Gill noted that these two tracks are ‘complementary’ and that existing WSIS architecture should be leveraged to avoid duplication. He emphasized that budget constraints limit the creation of new bodies, making it imperative for the IGF to serve as the core platform for implementation and monitoring.

Garza compared the IGF’s role to a ‘canary in the coal mine’ for digital policy, urging better coordination with National and Regional Initiatives (NRIs) to translate global goals into local impact.

Participants discussed the persistent challenge of translating IGF discussions into actionable outputs. Carol Roach emphasised the need to identify target audiences and tailor outputs using formats such as executive briefs, toolkits, and videos. Lan Xue added: 'To be policy-relevant, the IGF must evolve from a space of dialogue to a platform of strategic translation.'

He proposed launching policy trackers, aligning outputs with global policy calendars, and appointing liaison officers to bridge the gap between IGF and forums such as the G20, UNGA, and ITU.

Inclusivity emerged as another critical theme. Panellists underscored the importance of engaging underrepresented regions through financial support, capacity-building, and education. Fuhr highlighted the value of internet summer schools and grassroots NRIs, while Gill stressed that digital sovereignty is now a key concern in the Global South. ‘The demand has shifted’, he said, ‘from content consumption to content creation’.

Maria Ressa closed the session with an impassioned call for immediate action. She warned that the current information environment contributes to global conflict and democratic erosion, stating that ‘without facts, no truth, no trust. Without trust, you cannot govern’. Citing recent wars and digital manipulation, she urged the IGF community to move from reflection to implementation. ‘Online violence is real-world violence’, she said. ‘We’ve talked enough. Now is the time to act.’

Despite some differences in vision, the session revealed a strong consensus on key issues: the need for institutional evolution, enhanced funding, better policy translation, and broader inclusion. Bertrand de la Chapelle, however, cautioned against making the IGF a conventional UN body, instead proposing a ‘constitutional moment’ in 2026 to consider more flexible institutional reforms.

The discussion demonstrated that while the IGF remains a trusted forum for inclusive dialogue, its long-term relevance depends on its ability to produce concrete outcomes and adapt to a volatile digital environment. As Vint Cerf reminded participants in closing, ‘this is an opportunity to make this a better environment than it already is and to contribute more to our global digital society’.

WSIS prepares for Geneva as momentum builds for impactful digital governance

As preparations intensify for the World Summit on the Information Society (WSIS+20) high-level event, scheduled for 7–11 July in Geneva, stakeholders from across sectors gathered at the Internet Governance Forum in Norway to reflect on WSIS’s evolution and map a shared path forward.

The session, moderated by Gitanjali Sah of ITU, brought together over a dozen speakers from governments, UN agencies, civil society, and the technical and business communities.

The event is significant, marking two decades since the WSIS process began. Over that period, WSIS has grown into a multistakeholder framework involving more than 50 UN entities. While the action lines offer a structured and inclusive approach to digital cooperation, participants acknowledged that measurement and implementation remain the weakest links.

Ambassador Thomas Schneider of Switzerland—co-host of the upcoming high-level event—called for a shift from discussion to decision-making. “Dialogue is necessary but not sufficient,” he stated. “We must ensure these voices translate into outcomes.” Echoing this, South Africa’s representative, Cynthia, reaffirmed her country’s leadership as chair-designate of the event and its commitment to inclusive governance via its G20 presidency focus on AI, digital public infrastructure, and small business support.

UNDP’s Yu Ping Chan shared insights from the field: “Capacity building remains the number one request from governments. It’s not a new principle—it has been central since WSIS began.” She cited UNDP’s work on the Hamburg Declaration on responsible AI and AI ecosystem development in Africa as examples of translating global dialogue into national action.

Tatevik Grigoryan from UNESCO emphasised the enduring value of WSIS's human rights-based foundations. “We continue to facilitate action lines on access to information, e-learning, and media ethics,” she said, encouraging engagement with UNESCO's ROAM-X framework as a tool for ethical, inclusive digital societies.

Veni from ICANN reinforced the technical community’s role, expressing hope that the WSIS Forum would be formally recognised in the UN’s review documents. “We must not overlook the forum’s contributions. Multistakeholder governance remains essential,” he insisted.

Representing the FAO, Dejan Jakovljević reminded participants that 700 million people remain undernourished. “Digital transformation in agriculture is vital. But farmers without connectivity are left behind,” he said, highlighting the WSIS framework’s role in fostering collaboration across sectors.

Anriette Esterhuysen of APC called on civil society to embrace WSIS as a complementary forum to the IGF. “WSIS gives us a policy and implementation framework. It’s not just about talk—it’s about tools we can use at the national level.”

The Inter-Parliamentary Union’s Andy Richardson underscored parliaments’ dual role: advancing innovation while protecting citizens. Meli from the International Chamber of Commerce pointed to business engagement through AI-related workshops and discussions on strengthening multistakeholder cooperation.

Gitanjali Sah acknowledged past successes but urged continued ambition. “We were very ambitious in 1998—and we must be again,” she said. Still, she noted a persistent challenge: “We lack clear indicators to measure WSIS action line progress. That’s a gap we must close.”

The upcoming Geneva event will feature 67 ministers, 72 WSIS champions, and a youth programme alongside the AI for Good summit. Delegates were encouraged to submit input to the UN review process by 15 July and to participate in shaping a WSIS future that is more measurable, inclusive, and action-oriented.

Google launches AI voice chat in Search app for Android and iOS

Google has started rolling out its new ‘Search Live in AI Mode’ for the Google app on Android and iOS, offering users the ability to have seamless voice-based conversations with Search.

Currently available only in the US for those signed up to the AI Mode experiment in Labs, the feature was previewed at last month’s Google I/O conference.

The tool uses a specially adapted version of Google’s Gemini AI model, fine-tuned to deliver smarter voice interactions. It combines the model’s capabilities with Google Search’s information infrastructure to provide real-time spoken responses.

Using a technique called ‘query fan-out’, the system retrieves a wide range of web content, helping users discover more varied and relevant information.
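
Google has not published the mechanics, but the general pattern behind fan-out retrieval can be sketched as follows: expand one question into several sub-queries, run them concurrently, and merge the de-duplicated results. Everything here, from the expansion rules to the search stub, is an illustrative stand-in rather than Google's actual API.

```python
# A rough sketch of the 'query fan-out' idea: expand one spoken question into
# several sub-queries, run them concurrently, and merge the results.
import asyncio

async def search(query: str) -> list[str]:
    """Stub for a real search backend call."""
    await asyncio.sleep(0.1)  # simulate network latency
    return [f"result for '{query}'"]

def expand(question: str) -> list[str]:
    """Stand-in for the model-driven rewrite of one question into many."""
    return [
        question,
        f"{question} tips",
        f"{question} step by step",
    ]

async def fan_out(question: str) -> list[str]:
    # Issue all sub-queries at once instead of one after another.
    batches = await asyncio.gather(*(search(q) for q in expand(question)))
    merged, seen = [], set()
    for batch in batches:          # merge while dropping duplicates
        for result in batch:
            if result not in seen:
                seen.add(result)
                merged.append(result)
    return merged

print(asyncio.run(fan_out("how to keep clothes from wrinkling in a suitcase")))
```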

The new mode is particularly useful when multitasking or on the go. Users can tap a ‘Live’ icon in the Google app and ask spoken queries like how to keep clothes from wrinkling in a suitcase.

Follow-up questions are handled just as naturally, and related links are displayed on-screen, letting users read more without breaking their flow.

To use the feature, users can tap a sparkle-shaped waveform icon under the Search bar or next to the search field. Once activated, a full-screen interface appears with voice control options and a scrolling list of relevant links.

Even with the phone locked or other apps open, the feature keeps running. A mute button, transcript view, and voice style settings—named Cassini, Cosmo, Neso, and Terra—offer additional control over the experience.

WSIS+20 Interactive Stakeholders Consultation

General Assembly resolution A/70/125 called for a high-level meeting in 2025 to review the overall implementation of the outcomes of the World Summit on the Information Society (WSIS), known as the WSIS+20 Review.

The WSIS vision is to establish a “people-centered, inclusive and development-oriented information society” for harnessing the potential of information and communication technologies for sustainable development.

Google email will reply by using your voice

Google is building a next-generation email system that uses generative AI to reply to mundane messages in your own tone, according to DeepMind CEO Demis Hassabis.

Speaking at SXSW London, Hassabis said the system would handle everyday emails instead of requiring users to write repetitive responses themselves.

Hassabis called email ‘the thing I really want to get rid of,’ and joked he’d pay thousands each month for that luxury. He emphasised that while AI could help cure diseases or combat climate change, it should also solve smaller daily annoyances first—like managing inbox overload.

The upcoming feature aims to identify routine emails and draft replies that reflect the user’s writing style, potentially making decisions on simpler matters.
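
Details of the feature are scarce, but a pipeline of this kind can be caricatured in a few lines: flag routine messages, then prompt a model with samples of the user's past replies so the draft matches their tone. The classifier heuristic and the generate function below are hypothetical placeholders, not Gmail's implementation.

```python
# Hedged sketch of the described pipeline: flag routine emails, then prompt a
# language model with samples of the user's past replies so the draft mimics
# their tone. 'generate' is a hypothetical stand-in for any LLM API.
ROUTINE_HINTS = ("meeting confirmed", "invoice attached", "newsletter",
                 "out of office", "rsvp")

def is_routine(subject: str, body: str) -> bool:
    """Crude heuristic; a production system would use a trained classifier."""
    text = f"{subject} {body}".lower()
    return any(hint in text for hint in ROUTINE_HINTS)

def generate(prompt: str) -> str:
    """Placeholder so the sketch runs; swap in a real model call."""
    return "Thanks, confirmed for Tuesday at 10:00. Best, Alex"

def draft_reply(email_body: str, past_replies: list[str]) -> str:
    style_samples = "\n---\n".join(past_replies[:3])
    prompt = (
        "Reply briefly to the email below, matching the tone of these past "
        f"replies by the same user:\n{style_samples}\n\nEmail:\n{email_body}"
    )
    return generate(prompt)  # hypothetical LLM call, not a real Gmail API

incoming = ("RSVP", "Can you confirm Tuesday's 10:00 sync?")
if is_routine(*incoming):
    print(draft_reply(incoming[1], ["Thanks, works for me. Best, Alex"]))
```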

While details are still limited, the project remains under development and could debut as part of Google’s premium AI subscription model before reaching free-tier users.

Gmail already includes generative tools that adjust message tone, but the new system goes further—automating replies instead of just suggesting edits.

Hassabis also envisioned a universal AI assistant that protects users’ attention and supports digital well-being, offering personalised recommendations and taking care of routine digital tasks.

Thailand advances satellite rules

The Thai National Broadcasting and Telecommunications Commission (NBTC) has recently proposed a draft regulation titled ‘Criteria for Authorisation to Use Frequency Bands for Land, Aeronautical, and Maritime Earth Stations in FSS Services’. The regulation specifically targets the operation of Earth Stations in Motion (ESIMs), which include land-based stations on vehicles, aeronautical stations on aircraft, and maritime stations on ships and offshore platforms.

It defines dedicated frequency bands for both geostationary (GSO) and non-geostationary (NGSO) satellites, aligning closely with international best practices and recommendations from the International Telecommunication Union (ITU). The primary objective of this draft is to streamline the process for using specific radio frequencies by removing the need for individual frequency allocation for each ESIM deployment.

That approach aims to simplify and accelerate the rollout of high-speed satellite internet services for mobile users across various sectors, promoting innovation and economic development by enabling faster and broader adoption of advanced satellite communications. Overall, the NBTC’s initiative underscores how important it is for regulators worldwide to continually update their spectrum management frameworks.
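
In practice, blanket authorisation reduces the regulatory check to a band lookup: an ESIM terminal is cleared to transmit if its frequency falls inside a pre-authorised range for its orbit type, with no per-terminal filing. The sketch below illustrates the logic only; the band values are placeholders, not the NBTC's actual tables.

```python
# Illustrative sketch of a blanket-authorisation check. The band values below
# are placeholders, not figures from the NBTC draft regulation.
AUTHORISED_BANDS_GHZ = {
    "GSO":  [(17.7, 19.7), (27.5, 29.5)],   # placeholder ranges
    "NGSO": [(10.7, 12.7), (14.0, 14.5)],   # placeholder ranges
}

def esim_band_permitted(orbit: str, freq_ghz: float) -> bool:
    """True if the frequency sits inside a pre-authorised band for that orbit."""
    return any(lo <= freq_ghz <= hi
               for lo, hi in AUTHORISED_BANDS_GHZ.get(orbit, []))

print(esim_band_permitted("GSO", 28.1))   # True under these placeholder bands
print(esim_band_permitted("NGSO", 28.1))  # False: not pre-authorised here
```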

Why does it matter?

In a rapidly evolving technological landscape, outdated or rigid regulations can obstruct innovation and economic growth. Effective spectrum management must strike a balance between preventing harmful interference and supporting the deployment of cutting-edge communication technologies like satellite-based internet services.

US antitrust trial sees Google defend Chrome and data control

Google has warned that proposed remedies in the ongoing US antitrust case, including a possible sell-off of Chrome, could expose users to data breaches and national security threats, arguing that its own infrastructure is key to protecting Chrome against rising cyberattacks.

Google cited past breaches to emphasise the risks of moving such tools to buyers lacking similar security standards. The Justice Department, however, maintains that breaking up Google’s dominance would encourage fairer competition.

Proposals include banning exclusive deals, sharing user data to support rivals, and enabling Apple or others to shift default search settings. An economic expert testified that these remedies could reduce Google’s market share from 88% to 51%, though the full impact would take years to materialise.

Judge Amit Mehta raised concerns that dismantling Google’s monopoly might simply replace it with another, such as Microsoft. Google CEO Sundar Pichai is set to testify next, as the case continues through 9 May in the US.

Scientists make progress in bridging quantum computers with optical networks

Researchers at Caltech have developed a groundbreaking silicon device that could help quantum computers communicate over long distances.

The innovation, created by a team led by Professor Mohammad Mirhosseini, successfully converts microwave photons into optical photons, overcoming a major challenge in quantum networking. Their findings were recently published in Nature Nanotechnology.

Quantum computers rely on microwave photons to store and process information, but these particles require near-zero temperatures and lose data when travelling through standard internet cables.

Optical photons, however, can move efficiently over long distances at room temperature. The new device acts as a bridge between the two, using a vibrating silicon beam to convert microwave signals into optical ones with remarkable efficiency.
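
The temperature gap follows directly from photon energy: a quantum signal is swamped once thermal energy $k_BT$ rivals the photon energy $hf$. Using illustrative frequencies (a ~5 GHz microwave photon versus a ~200 THz telecom photon; the paper's exact values may differ):

$$T \sim \frac{hf}{k_B}:\qquad \frac{(6.63\times10^{-34})(5\times10^{9})}{1.38\times10^{-23}} \approx 0.24\ \mathrm{K}, \qquad \frac{(6.63\times10^{-34})(2\times10^{14})}{1.38\times10^{-23}} \approx 9600\ \mathrm{K}$$

Microwave hardware therefore has to sit far below 0.24 K, in the millikelvin range, while optical photons are untroubled by room-temperature (300 K) thermal noise.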

Built from silicon to minimise noise, the transducer outperforms older systems by 100 times while maintaining the same level of signal clarity.

The breakthrough brings the concept of a quantum internet closer to reality, offering a scalable way to link quantum computers across vast networks in the future.
