UNESCO warns of AI’s role in distorting Holocaust history

A new UNESCO report highlights the growing risk of Holocaust distortion through AI-generated content as young people increasingly rely on Generative AI for information. The report, published with the World Jewish Congress, warns that AI can amplify biases and spread misinformation, as many AI systems are trained on internet data that includes harmful content. Such content has already led to fabricated testimonies and distorted historical records, including deepfake images and false quotes.

The report notes that Generative AI models can ‘hallucinate’ or invent events due to insufficient or incorrect data. Examples include ChatGPT fabricating Holocaust events that never happened and Google’s Bard generating fake quotes. These kinds of ‘hallucinations’ not only distort historical facts but also undermine trust in experts and simplify complex histories by focusing on a narrow range of sources.

UNESCO calls for urgent action to implement its Recommendation on the Ethics of Artificial Intelligence, emphasising fairness, transparency, and human rights. It urges governments to adopt these guidelines and tech companies to integrate them into AI development. UNESCO also stresses the importance of working with Holocaust survivors and historians to ensure accurate representation and educating young people to develop critical thinking and digital literacy skills.

UK parliamentary candidate introduces AI lawmaker concept

In a bold move highlighting the intersection of technology and politics, businessman Steve Endacott is running in the 4 July national election in Britain, aiming to become a member of parliament (MP) with the aid of an AI-generated avatar. The campaign leaflet for Endacott features not his own face but that of an AI avatar dubbed ‘AI Steve.’ The initiative, if successful, would result in the world’s first AI-assisted lawmaker.

Endacott, founder of Neural Voice, presented his AI avatar to the public in Brighton, engaging with locals on various issues through real-time interactions. The AI discusses topics like LGBTQ rights, housing, and immigration and then offers policy ideas, seeking feedback from citizens. Endacott aims to demonstrate how AI can enhance voter access to their representatives, advocating for a reformed democratic process where people are more connected to their MPs.

Despite some scepticism, with concerns about the effectiveness and trustworthiness of an AI MP, Endacott insists that the AI will serve as a co-pilot, formulating policies reviewed by a group of validators to ensure security and integrity. The Electoral Commission clarified that the elected candidate would remain the official MP, not the AI. While public opinion is mixed, the campaign underscores the growing role of AI in various sectors and sparks an important conversation about its potential in politics.

AI tools struggle with election questions, raising voter confusion concerns

As the ‘year of global elections’ reaches its midpoint, AI chatbots and voice assistants are still struggling with basic election questions, risking voter confusion. The Washington Post found that Amazon’s Alexa often failed to correctly identify Joe Biden as the 2020 US presidential election winner, sometimes providing irrelevant or incorrect information. Similarly, Microsoft’s Copilot and Google’s Gemini refused to answer such questions, redirecting users to search engines instead.

Tech companies are increasingly investing in AI to provide definitive answers rather than lists of websites. This feature is particularly important as false claims about the 2020 election being stolen persist, even after multiple investigations found no fraud. Trump faced federal charges for attempting to overturn the victory of Biden, who won decisively with over 51% of the popular vote.

OpenAI’s ChatGPT and Apple’s Siri, however, correctly answered election questions. Seven months ago, Amazon claimed to have fixed Alexa’s inaccuracies, and recent tests showed Alexa correctly stating that Biden won the 2020 election. Nonetheless, inconsistencies were spotted last week. Microsoft and Google, in turn, said they avoid answering election-related questions to reduce risks and prevent misinformation, a policy also applied in Europe, where a new law requires safeguards against misinformation.

Why does it matter?

Tech companies are increasingly tasked with distinguishing fact from fiction as they develop AI-enabled assistants. Recently, Apple announced a partnership with OpenAI to enhance Siri with generative AI capabilities. Concurrently, Amazon is set to launch a new AI version of Alexa as a subscription service in September, although it remains unclear how it will handle election queries. An early prototype struggled with accuracy, and internal doubts about its readiness persist. The new AI assistants from Amazon and Apple aim to merge traditional voice commands with conversational capabilities, but experts warn this integration may pose new challenges.

Italian PM and Pope to address AI ethics at G7

Italian Prime Minister Giorgia Meloni and Pope Francis are teaming up to warn global leaders that diving into AI without ethical considerations could lead to catastrophic consequences. The collaboration, long in the making, will culminate with Pope Francis attending the G7 summit in southern Italy at Meloni’s invitation, where he aims to educate leaders on the potential dangers posed by AI.

Concerned about AI’s societal and economic impacts, Meloni has been vocal about her fears regarding job losses and widening inequalities. She recently highlighted these concerns at the UN, coining the term ‘Algorethics’ to emphasise the need for ethical boundaries in technological advancements. Paolo Benanti, a Franciscan friar and advisor to both Meloni and the Pope, stressed the growing power of multinational corporations in AI development, raising alarms about the concentration of wealth and power.

Pope Francis, known for advocating social justice issues, has previously called for an AI ethics conference at the Vatican, drawing global tech giants and international organisations into the discussion. His upcoming address at the G7 summit is expected to focus on AI’s impact on vulnerable populations and could touch on concerns about autonomous weaponry. Meloni, in turn, is poised to advocate for stronger regulations to ensure AI technologies adhere to ethical standards and serve societal interests.

Despite AI hype, recent studies suggest the promised financial benefits for businesses implementing AI projects have been underwhelming. This challenges the optimistic narratives often associated with AI, indicating a need for more cautious and balanced approaches to its development and deployment.

Young Americans show mixed embrace of AI, survey reveals

Young Americans are experimenting with generative AI, but few use it daily, according to a recent survey by Common Sense Media, Hopelab, and Harvard’s Center for Digital Thriving. The survey, conducted in October and November 2023 with 1,274 US teens and young adults aged 14-22, found that only 4% use AI tools daily. Additionally, 41% have never used AI, and 8% are unaware of what AI tools are. The main uses for AI among respondents are seeking information (53%) and brainstorming (51%).

Demographic differences show that 40% of white respondents use AI for schoolwork, compared to 62% of Black respondents and 48% of Latinos. Looking ahead, 41% believe AI will have both positive and negative impacts in the next decade. Notably, 28% of LGBTQ+ respondents expect mostly negative impacts, compared to 17% of cisgender/straight respondents. Young people have varied opinions on AI, as some view it as a sign of a changing world and are enthusiastic about its future, while others find it unsettling and concerning.

Why does it matter?

Young people globally share concerns over AI, which the IMF predicts will affect nearly 40% of jobs, with advanced economies seeing up to 60%. In comparison to the results above, a survey of 1,000 young Hungarians (aged 15-29) found that frequent AI app users are more positive about its benefits, while 38% of occasional users remain sceptical. Additionally, 54% believe humans will maintain control over AI, though fear of losing control is more common among women (54%) than men (37%).

French startup Pasqal set to introduce first quantum computer in Saudi Arabia

Paris-based quantum computing startup Pasqal has inked a significant deal with Saudi Arabia’s oil giant Aramco, marking the installation of the kingdom’s first quantum computer. Scheduled for deployment in the second half of 2025, the powerful 200-qubit machine will be installed, maintained, and operated by Pasqal.

Georges-Olivier Reymond, CEO and co-founder of Pasqal, expressed enthusiasm about the partnership, highlighting its role in advancing the commercial embrace of quantum technology within Saudi Arabia. The initiative follows Pasqal’s successful provision of quantum computers to both France and Germany. Notably, Alain Aspect, a co-founder of Pasqal, was awarded the 2022 Nobel Prize in Physics for groundbreaking experiments with entangled photons, laying the foundation for quantum computing.

Why does it matter?

The allure of quantum computing lies in its potential to revolutionise computational capabilities, with projections suggesting that quantum computers could outpace today’s supercomputers by millions of times in certain computations. This partnership between Pasqal and Aramco signals a meaningful step towards harnessing the power of quantum technology to solve complex problems across various sectors, including energy, finance, and logistics. As the global race for quantum supremacy intensifies, collaborations like this one are pivotal in pushing the boundaries of technological innovation, promising transformative advancements with far-reaching implications for industries and societies worldwide.

South Korea and UK to host global AI summit in Seoul

South Korea and the UK are set to co-host the second global AI summit in Seoul this week, a response to the rapid advancements in AI since the first summit in November. UK Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol will lead a virtual summit on Tuesday, emphasising the urgent need for improved AI regulation amidst growing concerns over the impact of technology on society.

In a joint article, leaders of the UK and South Korea highlighted the necessity for global AI standards to prevent a ‘race to the bottom’. The summit, now called the AI Seoul Summit, will address AI safety, innovation, and inclusion. A recent global AI safety report underlined potential risks such as labour market disruptions, AI-enabled cyber attacks, and the loss of control over AI, stressing that societal and governmental decisions will shape the future of AI.

Why does it matter?

Initially focused on AI safety, the November summit saw prominent figures like Elon Musk and Sam Altman engage in discussions, with China signing the ‘Bletchley Declaration’ on AI risk management alongside the US and others. This week’s events will include a virtual summit on Tuesday and an in-person session on Wednesday featuring key industry players from companies like Anthropic, OpenAI, Google DeepMind, Microsoft, Meta, and IBM.

TikTok, DOJ push for fast-track review of ByteDance divestiture law

The US Justice Department (DOJ) and TikTok requested a US appeals court to expedite the review of legal challenges against a new law requiring ByteDance to divest TikTok’s US assets by 19 January or face a ban. They seek a ruling by 6 December to allow for a potential Supreme Court review. With the expedited timeline, TikTok expects to avoid the need to seek emergency preliminary injunctive relief.

In the past two weeks, TikTok, ByteDance, and TikTok content creators filed lawsuits to block the law, arguing it violates First Amendment rights under the US Constitution.

Driven by fears of Chinese data access and spying, Congress rapidly passed the legislation, which President Joe Biden signed on 24 April. The law requires ByteDance to sell TikTok by 19 January due to national security concerns, potentially affecting 170 million American users. The Justice Department may submit classified information to support these concerns.

Although the law does not ban the app outright, it prevents app stores and internet hosting services from offering or supporting TikTok unless ByteDance complies.

Why does it matter?

The long-standing threat of a potential TikTok ban in the United States, first raised when former President Donald Trump attempted to shut down the app via executive order, seems to have reached a critical point. Constitutional law scholars argue that forcing TikTok to cease its American operations over unspecified national security concerns would violate the First Amendment and that US officials must prove in court that banning TikTok is the least restrictive way to address the threat. While Trump’s executive order was blocked by federal judges due to a lack of evidence that the app posed a security risk, it remains to be seen if the DOJ will be able to present concrete evidence this time.

Australian senator calls for Musk’s imprisonment

Elon Musk’s feud with Australian authorities reached new heights as he advocated for the imprisonment of a senator and criticised the country’s gun laws in the wake of a court order targeting his platform, X. The dispute stemmed from X’s publication of a video depicting a knife attack on an Assyrian bishop during a church service in Sydney, prompting the federal court to temporarily halt the video’s display.

In response to the court order, Musk accused Australian leaders of attempting to censor the internet, sparking condemnation from lawmakers and prompting Senator Jacqui Lambie to delete her X account in protest. Lambie called for Musk’s imprisonment, labelling him as ‘lacking a social conscience’. Musk, in turn, labelled Lambie as an ‘enemy of the people of Australia.’

Musk’s combative approach towards governments extends beyond Australia, as seen in his clashes with authorities in Brazil over social media content oversight. He further escalated tensions by endorsing posts criticising Australia’s gun laws and government, reacting with exclamation marks and amplifying messages questioning the integrity of Australian governance.

The legal battle between Musk’s platform and Australian authorities intensified during a court hearing, where X was accused of failing to fully comply with the temporary takedown order. Despite claims of compliance, the video remained accessible on X in Australia. The federal court judge extended the temporary takedown order until further hearings, citing the need for continued deliberation over the contentious issue.

Meta spokesperson sentenced to six years in Russia

A military court in Moscow has reportedly sentenced Meta Platforms spokesperson Andy Stone to six years in prison in absentia for ‘publicly defending terrorism.’ This ruling comes amid Russia’s crackdown on Meta, which was designated as an extremist organisation in the country, resulting in the banning of Facebook and Instagram in 2022 due to Russia’s conflict with Ukraine.

Meta has yet to comment on the reported sentencing of Stone, who serves as the company’s communications director. Stone himself was unavailable for immediate response following the court’s decision. Stone’s lawyer, Valentina Filippenkova, said they intend to appeal the verdict and will seek an acquittal.

The Russian interior ministry initiated a criminal investigation against Stone late last year, although the specific charges were not disclosed then. According to state investigators, Stone’s online comments allegedly defended ‘aggressive, hostile, and violent actions’ against Russian soldiers involved in what Russia terms its ‘special military operation’ in Ukraine.

Why does it matter?

Stone’s sentencing underscores Russia’s stringent stance on online content related to its military activities in Ukraine, extending repercussions to individuals associated with Meta Platforms. The circumstances also reflect the broader context of heightened scrutiny and legal actions against perceived dissent and criticism within Russia’s digital landscape.