Bollywood actors featured in AI fake videos for India’s election

In the midst of India’s monumental general election, AI-generated fake videos featuring Bollywood actors criticising Prime Minister Narendra Modi and endorsing the opposition Congress party have gone viral. The videos, viewed over half a million times on social media, underscore the growing role of AI-generated content in elections worldwide.

India’s election, involving almost one billion voters, pits Modi’s Bharatiya Janata Party (BJP) against an alliance of opposition parties. As campaigning shifts towards digital platforms like WhatsApp and Facebook, AI is being utilised for the first time in Indian elections, signalling a new era of political communication.

Despite efforts by platforms like Facebook to remove the fake videos, they continue to circulate, prompting police investigations and highlighting the challenges of combating misinformation in the digital age. While actors Aamir Khan and Ranveer Singh have denounced the videos as fake, their proliferation underscores the potential impact of AI-generated content on public opinion.

Why does it matter?

In this year’s election in India, politicians employ AI in various ways, from creating videos featuring deceased family members to using AI-generated anchors to deliver political messages. These tactics raise questions about the ethical implications of AI in politics and its potential to shape public discourse in unprecedented ways.

Meta to label AI-generated content instead of removing it

Meta Platforms Inc., the parent company of Facebook and Instagram, has announced changes to its content policies regarding AI-generated content. Under the new policy, Meta will no longer remove misleading AI-generated content outright but will instead label it, aiming to address concerns about such content through transparency rather than removal.

Previously, Meta’s policy targeted ‘manipulated media’ that could mislead viewers into thinking someone in a video said something they did not. Now, the content policy extends to digitally altered images, videos, or audio as the company will employ fact-checking and labelling to inform users about the nature of the content they encounter on its platforms.

The policy was revised in February after Meta’s Oversight Board criticised the previous approach as ‘incoherent’. The board recommended using labels instead of removal for AI-generated content, and Meta has agreed with this perspective, emphasising the importance of transparency and additional context in handling such content.

Why does it matter?

Starting in May, AI-generated content on Meta's platforms will be labelled 'Made with AI' to indicate its origin. This policy change is particularly significant given the upcoming US elections, with Meta acknowledging the need for clear labelling of AI-generated posts, including those created using competitors' technology.

Meta’s shift in content moderation policy reflects a broader trend toward transparency in dealing with AI-generated content across social media platforms. By implementing labelling instead of removal, Meta aims to provide users with more information about the nature of the online content.

Elon Musk’s X expands fact-checking program ahead of Indian elections

X, owned by Elon Musk, has announced that it will support Community Notes, its crowd-sourced fact-checking program, in India ahead of the country's national elections. The initiative aims to provide more context to popular posts, debunk myths, and offer broader insights. The first Indian contributors will begin posting notes today, with plans to accept more over time. Community Notes, previously known as Birdwatch, has expanded to 69 countries, with India being one of the last major markets for the program to enter.

Despite efforts to combat misinformation, X’s Community Notes program has faced challenges controlling the spread of false information, which is particularly significant given India’s complex multilingual political landscape and the upcoming elections. Although the platform has not made specific announcements regarding its efforts for the Indian elections, many social media platforms are ramping up measures to address potential misinformation during the election period.

X has had a contentious relationship with the Indian government, notably engaging in legal battles over content censorship. Last year, the platform reinstated political ads after a previous ban, reflecting ongoing tensions with authorities. However, Elon Musk has acknowledged India’s strict social media regulations, emphasising the company’s adherence to local laws. Moreover, earlier this year, X complied with government orders to withhold certain accounts and posts related to farmers’ protests in India.

California enacts Senate bill to safeguard elections against disinformation and deepfakes

California has passed Senate Bill 1228, requiring large online platforms to implement digital identity verification and labelling for influential users and those sharing significant amounts of AI-generated content. The law mandates semiannual reporting to the Attorney General regarding user authentication methods and public disclosure of authenticated accounts.

The bill’s sponsor, Senator Steve Padilla, highlights the need to combat foreign interference and disinformation campaigns targeting US elections. By verifying the identities of accounts with substantial followings, the law seeks to mitigate the spread of false information and malicious content. Additionally, the legislative package includes measures like Assembly Bill (AB) 2839, which restricts deepfakes in campaign ads, and AB 2655, which addresses the labelling and regulation of generative AI deepfakes.

The laws were developed in collaboration with the California Initiative for Technology and Democracy (CITED) to address concerns about online misinformation and its potential impact on democratic processes. A survey reveals strong public support for measures promoting user authentication and legal accountability for online posts, reflecting growing concerns about the spread of false information.

However, critics raise constitutional concerns and question the effectiveness of SB 1228's criteria for identifying influential accounts. Experts point to potential flaws in the law, such as defining influential users by view counts and the volume of AI-generated content they share, criteria that could sweep in genuine influencers and spam accounts alike. Despite these challenges, California's legislative efforts signal a proactive approach to combating online misinformation and protecting electoral integrity.

Google suspends political ads in South Korea ahead of general elections

Google has announced the suspension of all political advertisements in South Korea leading up to the country’s general elections in April, as per The Korea Times. The ban encompasses all Google-owned platforms, including YouTube, Google Search, and the Google Play Store.

Google also intends to guide users to credible information about voting methods and voter registration by providing links on its homepage. Additionally, the company plans to offer election-related information panels in YouTube search results, connecting users with trustworthy sources for further details.

As South Korea joins the lineup of nations holding significant elections in 2024, Google's move is part of its broader efforts to combat misinformation and address voter bias, echoing similar initiatives taken after major elections globally. However, it is uncertain whether Google will apply this policy to other election-bound countries like India.

Why does it matter?

As reported by Medianama, this isn’t the first time Google has implemented such a ban; it previously did so after the US Presidential election in 2020 and before elections in the Philippines, Canada, and Singapore. While it’s unclear if this trend will continue in other countries gearing up for elections, Google has already established stricter regulations in India. These include identity verification, pre-certification by the Election Commission, and transparency measures through initiatives like the Google Ads Transparency Centre.

Trump reiterates opposition to central bank digital currencies

Digital asset issues have re-emerged in the US presidential election race. Former US President Donald Trump, the Republican Party's highest-polling candidate for the November election, has reiterated his strong opposition to central bank digital currencies (CBDCs) during a rally in Laconia, New Hampshire.

Trump firmly stated that he would never allow the creation of a CBDC, citing concerns over the potential threat to personal freedom and the prospect of absolute government control over individuals' money. He warned that a CBDC would give the federal government the power to seize money without people even realising it.

Trump’s stance on CBDCs has garnered support, with two crypto-friendly candidates, Vivek Ramaswamy and Ron DeSantis, suspending their campaigns and endorsing Trump. This suggests that there is substantial support among Republican primary voters for Trump’s position on CBDCs.

The absence of pro-crypto candidates may lessen the focus on digital assets in the presidential race, according to CoinDesk. It remains to be seen how extensively the topic will be discussed throughout the campaign.

The future of TikTok will determine the future of an integrated internet

The TikTok controversy, as the Economist put it, is 'a test of whether the global internet can remain intact as US-China relations deteriorate'.

The background of the TikTok controversy is geopolitics and the forthcoming US elections. There is concern that TikTok could be used to influence elections.

The protection of privacy is sometimes cited as the main risk from TikTok. However, this is hardly the decisive issue, as TikTok data is already publicly available for scraping.

Instead, the main concern of the US political elite is the potential manipulation of US users by this Chinese company. For example, a quarter of US users rely on TikTok as a news source.

TikTok algorithms that could be used for manipulation are developed mainly in Beijing.

There are a few things that TikTok can do to address the risk of being shut down by US authorities, including:

  • Having data held by Oracle, as TikTok has already been doing since the legal action by the Trump administration
  • Letting third parties inspect TikTok algorithms, including showing the source code and allowing ongoing inspection

According to the Economist, ‘TikTok should be ultimately responsible to an independent board of its own, with members from outside China.’

China is likely to oppose requests for inspections of TikTok's algorithms, which may lead to TikTok and other digital companies being shut down by the USA and other Western countries. The outcome of the TikTok political controversy will have far-reaching consequences for the global internet.