Alphabet’s YouTube announced that it would comply with a court order to block access to 32 video links in Hong Kong, a move critics argue infringes on the city’s freedoms amid tightening security measures. The order followed a government application granted by Hong Kong’s Court of Appeal targeting the protest anthem ‘Glory to Hong Kong’, with judges cautioning that dissidents could use the song to incite secession.
Expressing disappointment, YouTube said it would abide by the removal order while warning of the chilling effect on online free expression. Observers, including the US government, voiced concerns over the ban’s impact on Hong Kong’s reputation as a financial hub committed to the free flow of information.
Industry groups emphasised the importance of maintaining a free and open internet in Hong Kong, citing its significance in preserving the city’s competitive edge. The move reflects a broader trend of tech companies complying with local legal requirements; Google parent Alphabet has previously restricted content in China.
Why does it matter?
Despite YouTube’s action, tensions persist over the erosion of freedoms in Hong Kong, underscored by ongoing international scrutiny and criticism of the city’s security crackdown on dissent. As the city grapples with balancing national security concerns and its promised autonomy under the ‘one country, two systems’ framework, the implications for its future as a global business centre remain uncertain.
Recent reporting by The New York Times has brought to light the challenges AI companies face in acquiring high-quality training data, detailing how companies like OpenAI and Google have navigated the issue, often treading in legally ambiguous territory around AI copyright law.
OpenAI, for instance, developed its Whisper audio transcription model and used it to transcribe over a million hours of YouTube videos, generating text to train GPT-4, its advanced language model. Although the approach raised legal concerns, OpenAI believed it fell within fair use. The company’s president, Greg Brockman, reportedly played a hands-on role in collecting the videos.
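Whisper was later released as open source, so the transcription step described above can be sketched with the public Python package (the audio file name below is a placeholder; this illustrates the general pipeline, not OpenAI’s internal tooling):

```python
import whisper  # open-source package: pip install openai-whisper (requires ffmpeg)

# Load a pretrained speech-recognition model; "base" trades accuracy for speed
model = whisper.load_model("base")

# Transcribe a local audio file (placeholder path); the result dict includes
# the full transcript under the "text" key
result = model.transcribe("downloaded_audio.mp3")
print(result["text"])
```

Run at scale over downloaded audio, a loop like this yields the kind of text corpus the report describes.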
A Google spokesperson said the company had seen unconfirmed reports of OpenAI’s activities and noted that both YouTube’s terms of service and its robots.txt files prohibit unauthorised scraping or downloading of YouTube content. Google itself has also used transcripts from YouTube, in line with its agreements with content creators.
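The robots.txt convention referenced here is machine-readable: before fetching a page, a well-behaved crawler checks whether the site’s robots.txt disallows it. A minimal sketch using Python’s standard library (the user-agent string and video URL are hypothetical placeholders):

```python
from urllib import robotparser

# Fetch and parse YouTube's publicly served robots.txt
rp = robotparser.RobotFileParser()
rp.set_url("https://www.youtube.com/robots.txt")
rp.read()

# Ask whether a hypothetical crawler may fetch a (placeholder) watch page
url = "https://www.youtube.com/watch?v=VIDEO_ID"
print(rp.can_fetch("ExampleBot/1.0", url))  # False if the path is disallowed
```

The file is advisory rather than enforceable, which is why scraping disputes like this one turn on terms of service and copyright law rather than on the protocol itself.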
AI companies, including Google and OpenAI, are grappling with the dwindling availability of quality training data for improving their models. The future of AI training may lie in synthetic data or curriculum learning methods, but these approaches remain unproven. In the meantime, companies continue to explore various avenues for data acquisition, sometimes straying into legally contentious territory as they navigate this evolving landscape.
A recent report by the research organisations Access Now and Global Witness revealed that nearly 50 ads containing misinformation aimed at disrupting India’s elections or deterring people from voting were approved by YouTube despite clearly violating the platform’s policies on election misinformation. The investigation, titled ‘“Votes will not be counted”: Indian election disinformation ads and YouTube’, found ads in English, Hindi, and Telugu spreading falsehoods about the upcoming election, such as false claims about voting methods and age requirements.
Although those who submitted the ads withdrew them all before publication due to safety concerns, YouTube’s approval of them has raised concerns about its role in ensuring free and fair elections. Namrata Maheshwari, senior policy counsel at Access Now, emphasised YouTube’s failure to enforce its disinformation policies, especially as India approaches its crucial election year in 2024.
In response to the investigation, YouTube’s parent company, Google, stated that none of the ads ran on its systems and reaffirmed that its policies are enforced year-round. The company explained that its enforcement process involves multiple layers of review, meaning that initial approval does not guarantee an ad will run if it is later found to violate policy. However, the report underscores the need for platforms like YouTube to enforce their policies effectively to safeguard electoral integrity, particularly in the face of increasing misinformation surrounding elections globally.
Why does it matter?
The incident highlights the ongoing challenges social media platforms face in combating misinformation and ensuring the integrity of democratic processes. As the spread of false information continues to threaten elections worldwide, there is growing pressure on tech companies to enhance their efforts in detecting and removing misleading content, especially during critical election periods.
On Twitter, hackers changed the British Army’s profile picture, bio, and cover photo to make the account appear to be part of The Possessed NFT collection.
On YouTube, hackers deleted all of the videos on the British Army’s channel and changed its name and profile picture to mimic the (real) investment company Ark Invest, replacing the content with a series of old livestreams featuring former Twitter CEO Jack Dorsey and Tesla CEO Elon Musk.