Paris-based quantum computing startup Pasqal has inked a significant deal with Saudi Arabia’s oil giant Aramco, marking the installation of the kingdom’s first quantum computer. The 200-qubit machine is scheduled for deployment in the second half of 2025, with Pasqal overseeing its installation, maintenance, and operation.
Georges-Olivier Reymond, CEO and co-founder of Pasqal, expressed enthusiasm about the partnership, highlighting its role in advancing the commercial embrace of quantum technology within Saudi Arabia. The initiative follows Pasqal’s successful provision of quantum computers to both France and Germany. Notably, Alain Aspect, a co-founder of Pasqal, was awarded the 2022 Nobel Prize in Physics for groundbreaking experiments in quantum mechanics that laid foundations for quantum computing.
Why does it matter?
The allure of quantum computing lies in its potential to revolutionise computational capabilities, with projections suggesting that quantum computers could outpace today’s supercomputers by millions of times in certain computations. This partnership between Pasqal and Aramco signals a meaningful step towards harnessing the power of quantum technology to solve complex problems across various sectors, including energy, finance, and logistics. As the global race for quantum supremacy intensifies, collaborations like this one are pivotal in pushing the boundaries of technological innovation, promising transformative advancements with far-reaching implications for industries and societies worldwide.
South Korea and the UK are set to co-host the second global AI summit in Seoul this week, a response to the rapid advancements in AI since the first summit in November. UK Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol will lead a virtual summit on Tuesday, emphasising the urgent need for improved AI regulation amid growing concerns over the technology’s impact on society.
In a joint article, leaders of the UK and South Korea highlighted the necessity for global AI standards to prevent a ‘race to the bottom’. The summit, now called the AI Seoul Summit, will address AI safety, innovation, and inclusion. A recent global AI safety report underlined potential risks such as labour market disruptions, AI-enabled cyber attacks, and the loss of control over AI, stressing that societal and governmental decisions will shape the future of AI.
Why does it matter?
Initially focused on AI safety, the November summit saw prominent figures like Elon Musk and Sam Altman engage in discussions, with China signing the ‘Bletchley Declaration’ on AI risk management alongside the US and others. This week’s events will include a virtual summit on Tuesday and an in-person session on Wednesday featuring key industry players from companies like Anthropic, OpenAI, Google DeepMind, Microsoft, Meta, and IBM.
The US Justice Department (DOJ) and TikTok requested a US appeals court to expedite the review of legal challenges against a new law requiring ByteDance to divest TikTok’s US assets by 19 January or face a ban. They seek a ruling by 6 December to allow for a potential Supreme Court review. By securing a fast-track schedule, TikTok hopes to avoid having to seek emergency preliminary injunctive relief.
In the past two weeks, TikTok, ByteDance, and TikTok content creators filed lawsuits to block the law, arguing it infringes on First Amendment rights and the US Constitution.
Driven by fears of Chinese data access and spying, Congress rapidly passed the legislation, which President Joe Biden signed on 24 April. The law requires ByteDance to sell TikTok by 19 January due to national security concerns, potentially affecting 170 million American users. The Justice Department may submit classified information to support these concerns.
Although the law does not ban the app outright, it prevents app stores and internet hosting services from offering or supporting TikTok unless ByteDance complies.
Why does it matter?
The long-standing threat of a potential TikTok ban in the United States, first raised when former President Donald Trump attempted to shut down the app via executive order, seems to have reached a critical point. Constitutional law scholars argue that forcing TikTok to cease its American operations over unspecified national security concerns would violate the First Amendment and that US officials must prove in court that banning TikTok is the least restrictive way to address the threat. While Trump’s executive order was blocked by federal judges due to a lack of evidence that the app posed a security risk, it remains to be seen if the DOJ will be able to present concrete evidence this time.
Elon Musk’s feud with Australian authorities reached new heights as he advocated for the imprisonment of a senator and criticised the country’s gun laws in the wake of a court order targeting his platform, X. The dispute stemmed from X’s publication of a video depicting a knife attack on an Assyrian bishop during a church service in Sydney, prompting the federal court to temporarily halt the video’s display.
‘Our concern is that if ANY country is allowed to censor content for ALL countries, which is what the Australian “eSafety Commissar” is demanding, then what is to stop any country from controlling the entire Internet?’ Musk wrote on X.
In response to the court order, Musk accused Australian leaders of attempting to censor the internet, sparking condemnation from lawmakers and prompting Senator Jacqui Lambie to delete her X account in protest. Lambie called for Musk’s imprisonment, labelling him as ‘lacking a social conscience’. Musk, in turn, labelled Lambie as an ‘enemy of the people of Australia.’
Musk’s combative approach towards governments extends beyond Australia, as seen in his clashes with authorities in Brazil over social media content oversight. He further escalated tensions by endorsing posts criticising Australia’s gun laws and government, reacting with exclamation marks and amplifying messages questioning the integrity of Australian governance.
The legal battle between Musk’s platform and Australian authorities intensified during a court hearing, where X was accused of failing to fully comply with the temporary takedown order. Despite claims of compliance, the video remained accessible on X in Australia. The federal court judge extended the temporary takedown order until further hearings, citing the need for continued deliberation over the contentious issue.
A military court in Moscow has reportedly sentenced Meta Platforms spokesperson Andy Stone to six years in prison in absentia for ‘publicly defending terrorism.’ This ruling comes amid Russia’s crackdown on Meta, which was designated as an extremist organisation in the country, resulting in the banning of Facebook and Instagram in 2022 due to Russia’s conflict with Ukraine.
Meta has yet to comment on the reported sentencing of Stone, who serves as the company’s communications director. Stone himself was unavailable for immediate response following the court’s decision. Stone’s lawyer, Valentina Filippenkova, said they intend to appeal the verdict and will seek an acquittal.
The Russian interior ministry initiated a criminal investigation against Stone late last year, although the specific charges were not disclosed then. According to state investigators, Stone’s online comments allegedly defended ‘aggressive, hostile, and violent actions’ against Russian soldiers involved in what Russia terms its ‘special military operation’ in Ukraine.
Why does it matter?
Stone’s sentencing underscores Russia’s stringent stance on online content related to its military activities in Ukraine, extending repercussions to individuals associated with Meta Platforms. The circumstances also reflect the broader context of heightened scrutiny and legal actions against perceived dissent and criticism within Russia’s digital landscape.
The Raisi administration in Iran has allocated millions of dollars towards bolstering the country’s internet infrastructure, focusing on tightening control over information flow and reducing the influence of external media.
This decision, part of a broader financial strategy for the Ministry of Communications and Information Technology, reflects a 25% increase from the previous year’s budget, totalling over IRR 195,830 billion (approximately $300 million). Additionally, over IRR 150,000 billion (over $220 million) in miscellaneous credits have been earmarked to expand the national information network.
The Ministry of Communications and Information Technology’s efforts aim to reduce dependency on the global internet, leading to a more isolated and state-controlled national information network.
Why does it matter?
Popular social media platforms like Instagram and Facebook are blocked in Iran, and the government appears to be tightening internet control. Cloudflare has observed a significant decrease in internet traffic from Iran over the past two years, suggesting a trend of increased control and isolation. However, widespread internet disruptions have sparked discontent, leading the Tehran Chamber of Commerce to call for policy reassessment, citing economic concerns.
In the first quarter of 2024, the Internet Society’s Pulse platform documented 22 deliberate internet shutdowns across 12 countries, with some ongoing since 2023. This figure matches the peak seen in 2021 during Myanmar’s military coup, highlighting a concerning trend. India has been the most affected, with nine shutdowns, followed by Ethiopia and Senegal, each experiencing two incidents. Over half of these shutdowns have been localised, impacting specific regions within countries including Chad, Comoros, Cuba, Iran, Pakistan, Palestinian Territory and Russia.
Among the recorded events, nine led to nationwide disruptions lasting from hours to months, affecting approximately 297 million internet users and causing over 910 days of combined downtime. These shutdowns have inflicted significant economic losses, amounting to USD 565.4 million in lost GDP, as reported by Pulse. Such disruptions hinder societal progress, hamper economies, and undermine the stability of the global internet infrastructure.
Why does it matter?
Championing an open and easily accessible internet, advocates stress the significance of prioritising policies that ensure uninterrupted connectivity. Governments and policymakers globally are encouraged to endorse efforts to protect the internet, acknowledging its pivotal role in nurturing economic development and providing opportunities for individuals to exercise fundamental human rights in the digital era.
The Czech government has taken action by sanctioning individuals, including Viktor Medvedchuk and the website voiceofeurope.com, over their alleged involvement in a pro-Russian influence operation in Europe spreading disinformation. According to the Czech Foreign Ministry, the campaign aimed to undermine Ukraine’s territorial integrity, sovereignty, and liberty.
Czech Prime Minister Petr Fiala has underscored that the activities of the sanctioned individuals were aimed at bolstering Russian influence in the EU countries and the European Parliament, based on findings from the Czech secret service agency BIS.
Medvedchuk, a former Ukrainian politician now residing in Russia, stands accused of covertly financing Voice of Europe to sway the European Parliament elections. Financial accounts associated with the implicated individuals and entities will be frozen as part of the sanctions.
Why does it matter?
The development comes shortly after the European Parliament and experts warned of expected attempts to undermine the upcoming EU elections in June and to deter voter turnout through disinformation campaigns. Despite tools like the Digital Services Act, challenges remain in effectively countering misleading narratives, especially in the limited timeframe leading up to the elections.
As part of the process towards developing a Global Digital Compact (GDC), the UN Secretary-General has issued a policy brief outlining areas in which ‘the need for multistakeholder digital cooperation is urgent’: closing the digital divide and advancing sustainable development goals (SDGs), making the online space open and safe for everyone, and governing artificial intelligence (AI) for humanity.
The policy brief also suggests objectives and actions to advance such cooperation and ‘safeguard and advance our digital future’. These are structured around the following topics:
Digital connectivity and capacity building. The overarching objectives here are to close the digital divide and empower people to participate fully in the digital economy. Proposed actions range from common targets for universal and meaningful connectivity to putting in place or strengthening public education for digital literacy.
Digital cooperation to accelerate progress on the SDGs. Objectives include making targeted investments in digital public infrastructure and services, making data representative, interoperable, and accessible, and developing globally harmonised digital sustainability standards. Among the proposed actions are developing definitions of safe, inclusive, and sustainable digital public infrastructures, fostering open and accessible data ecosystems, and having the UN develop a common blueprint on digital transformation.
Upholding human rights. Putting human rights at the centre of the digital future, ending the gender digital divide, and protecting workers are the outlined objectives in this area. One key proposed action is the establishment of a digital human rights advisory mechanism, facilitated by the Office of the UN High Commissioner for Human Rights, to provide guidance on human rights and technology issues.
An inclusive, open, secure, and shared internet. There are two objectives: safeguarding the free and shared nature of the internet, and reinforcing accountable multistakeholder governance. Some of the proposed actions include commitments from governments to avoid blanket internet shutdowns and refrain from actions disrupting critical infrastructures.
Digital trust and security. Objectives range from strengthening multistakeholder cooperation to elaborate norms, guidelines, and principles on the responsible use of digital technologies, to building capacity and expanding the global cybersecurity workforce. The proposed overarching action is for stakeholders to commit to developing common standards and industry codes of conduct to address harmful content on digital platforms.
Data protection and empowerment. Ensuring that data are governed for the benefit of all, empowering people to control their personal data, and developing interoperable standards for data quality are envisioned as key objectives. Among the proposed actions are an invitation for countries to consider adopting a declaration on data rights and seeking convergence on principles for data governance through a potential Global Data Compact.
Agile governance of AI and other emerging technologies. The proposed objectives relate to ensuring transparency, reliability, safety, and human control in the design and use of AI; putting transparency, fairness, and accountability at the core of AI governance; and combining existing norms, regulations, and standards into a framework for agile governance of AI. Actions envisioned range from establishing a high-level advisory body for AI to building regulatory capacity in the public sector.
Global digital commons. Objectives include ensuring inclusive digital cooperation, enabling regular and sustained exchanges across states, regions, and industry sectors, and developing and governing technologies in ways that enable sustainable development, empower people, and address harms.
The document further notes that ‘the success of a GDC will rest on its implementation’. This implementation would be done by different stakeholders at the national, regional, and sectoral level, and be supported by spaces such as the Internet Governance Forum and the World Summit on the Information Society Forum. One suggested way to support multistakeholder participation is through a trust fund that could sponsor a Digital Cooperation Fellowship Programme.
As a mechanism to follow up on the implementation of the GDC, the policy brief suggests that the Secretary-General could be tasked to convene an annual Digital Cooperation Forum (DCF). The mandate of the forum would also include, among other things, facilitating collaboration across digital multistakeholder frameworks and reducing duplication; promoting cross-border learning in digital governance; and identifying and promoting policy solutions to emerging digital challenges and governance gaps.