Arbitrariness and political censorship are rife in YouTube's administration, which is controlled by Washington, according to Russian Foreign Ministry spokesperson Maria Zakharova. She claims that YouTube systematically censors information, beginning with the blocking of accounts belonging to Russian media outlets and government agencies, and that the Foreign Ministry's official channel has received what she calls unfounded warnings, with some of its videos blocked.
Zakharova maintains that YouTube's actions amount to direct censorship, violating subscribers' rights by restricting the free distribution of and access to information. She asserts that the United States, which she says oversees YouTube, has international obligations to uphold freedom of speech, and that the platform's actions contradict those obligations.
Additionally, Alexander Khinshtein, head of the State Duma Committee on Information Policy, pointed to a potential 70% reduction in YouTube download speeds on desktop computers, a measure that would not affect mobile connections. Roskomnadzor later cited disrespect for Russia and numerous legal violations as grounds for the measures against YouTube.
YouTube speeds in Russia are expected to significantly decline on desktop computers due to Google’s failure to upgrade its equipment in the country and its refusal to unblock Russian media channels. The situation has drawn criticism from Alexander Khinshtein, head of the lower house of parliament’s information policy committee, who emphasised that the slowdown is a repercussion of YouTube’s actions. Khinshtein highlighted that download speeds on the platform have already decreased by 40% and could drop by up to 70% next week.
Russian officials attribute the decline in YouTube quality to Google's inaction, particularly its failure to upgrade its Google Global Cache servers in Russia. They also point out that Google has not invested in Russian infrastructure and allowed its local subsidiary to go bankrupt, leaving it unable to cover local data centre expenses. Communications regulator Roskomnadzor has echoed these concerns, saying the lack of upgrades has led to deteriorating service quality.
Google has faced multiple fines in Russia for failing to remove content the Russian government deems illegal or undesirable. After Russia's invasion of Ukraine, YouTube in March 2022 blocked channels associated with Russian state-funded media worldwide, citing its policy against content that denies or trivialises well-documented violent events. Google's Russian subsidiary subsequently filed for bankruptcy, saying the authorities' seizure of its bank account had made it impossible to operate. Meanwhile, some Russian officials, including Chechen leader Ramzan Kadyrov, have proposed blocking YouTube entirely in response to the ongoing tensions.
Shares of Alphabet, Google's parent company, fell more than 3% on Wednesday amid concerns that rising investment in AI infrastructure could squeeze margins and that YouTube faces stiff competition for ad dollars. Alphabet's capital expenditure rose to $13.2 billion in the second quarter, exceeding expectations, as it invests heavily in the infrastructure needed to support generative AI services and compete with Microsoft.
While Alphabet has been cutting costs through layoffs to protect profitability, analysts noted that seasonal hiring of fresh graduates and the earlier-than-usual Pixel launch would impact margins in the third quarter. Additionally, YouTube’s ad sales growth slowed to 13% in the second quarter from nearly 21% in the first quarter, as it grapples with tough year-on-year comparisons and competition from Amazon in the online video ad market.
Despite these challenges, many analysts remain positive about Alphabet, citing AI-driven growth in cloud revenue and minimal disruption to Search revenue from its AI Overviews. Cloud computing services revenue rose by 28.8%, outpacing expectations and signalling robust enterprise spending. Analysts believe Alphabet's AI advancements position it as a market leader, and 25 brokerages have raised their price targets for the stock. Its unsuccessful bid for cybersecurity firm Wiz likewise reflects the company's ambition to expand its market share and reclaim its place at the top.
Alphabet's stock, which has gained about 30% this year on the AI-driven rally, is set to lose around $60 billion in market value. However, its 12-month forward price-to-earnings ratio of 22.2 remains modest compared with Nvidia's 38.6, suggesting continued confidence in Alphabet's long-term growth prospects.
YouTube has introduced an updated eraser tool that allows creators to remove copyrighted music from their videos without affecting speech, sound effects, or other audio. Launched on 4 July, the tool uses an AI-powered algorithm to target only the copyrighted music, leaving the rest of the video intact.
Previously, videos flagged for copyrighted audio faced muting or removal. However, YouTube cautions that the tool might only be effective if the song is easy to isolate.
Good news creators: our updated Erase Song tool helps you easily remove copyright-claimed music from your video (while leaving the rest of your audio intact). Learn more… https://t.co/KeWIw3RFeH
YouTube chief Neal Mohan announced the launch on X, explaining that the company had been testing the tool for some time but struggled to remove copyrighted tracks accurately. The new AI algorithm represents a significant improvement, allowing users to mute all sound or erase the music in their videos. Advancements like this are part of YouTube’s broader efforts to leverage AI technology to enhance user experience and compliance with copyright laws.
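YouTube has not published how Erase Song works under the hood. As a rough, hypothetical illustration of the underlying idea, audio source separation, the sketch below uses the open-source Spleeter library (an assumption for demonstration purposes, not YouTube's actual pipeline) to split a track into a 'vocals' stem and an 'accompaniment' stem; keeping the vocals and discarding the accompaniment approximates removing background music while preserving speech.

```python
# Sketch only: YouTube has not disclosed its Erase Song implementation.
# Spleeter (Deezer's open-source library) illustrates the general
# stem-separation idea this kind of tool relies on.
from spleeter.separator import Separator

def strip_background_music(audio_path: str, output_dir: str) -> None:
    # The '2stems' model splits the input into vocals + accompaniment.
    separator = Separator("spleeter:2stems")
    # Writes vocals.wav and accompaniment.wav under output_dir/<track name>/.
    separator.separate_to_file(audio_path, output_dir)
    # Keeping vocals.wav and discarding accompaniment.wav approximates
    # removing a claimed song while preserving speech and other audio.

if __name__ == "__main__":
    # Placeholder file and directory names.
    strip_background_music("clip_audio.wav", "separated/")
```

As YouTube's own caveat suggests, separation quality depends heavily on how cleanly the song can be isolated from the rest of the mix.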
In addition to the eraser tool, YouTube is making strides in AI-driven music licensing. The company has been negotiating with major record labels to roll out AI music licensing deals, aiming to use AI to create music and potentially offer AI voice imitations of famous artists. Following the launch of YouTube’s AI tool Dream Track last year, which allowed users to create music with AI-generated voices of well-known singers, YouTube continues to engage with major labels like Sony, Warner, and Universal to expand the use of AI in music creation and licensing.
Why does it matter?
The eraser tool and the licensing talks show YouTube leaning on AI both to help creators stay on the right side of copyright rules and to open new, negotiated uses of artists' work, signalling how the platform intends to balance creator convenience, rights holders' interests, and its own AI ambitions.
YouTube has implemented new privacy guidelines allowing individuals to request the removal of AI-generated videos that imitate them. Initially promised in November 2023, these rules are now officially in effect, as confirmed by a recent update to YouTube’s privacy policies.
According to the updated guidelines, users can request the removal of content that realistically depicts a synthetic version of themselves, created or altered with AI. YouTube will evaluate such requests against several criteria, including whether the content is altered or synthetic, whether it is disclosed as such, whether the person is identifiable, how realistic the depiction is, and whether it serves a public interest such as parody or satire. Human moderators will handle complaints, and if a request is upheld, the uploader must either delete the video within 48 hours or edit out the problematic parts.
These guidelines aim to protect individuals from potentially harmful content like deepfakes, which can easily mislead viewers. They are particularly relevant in upcoming elections in countries such as France, the UK, and the US, where misusing AI-generated videos could impact political discourse.
Channel Seven is currently investigating a significant breach on its YouTube channel, where unauthorised content featuring an AI-generated deepfake version of Elon Musk was streamed repeatedly. The incident on Thursday involved the channel being altered to mimic Tesla’s official presence. Viewers were exposed to a fabricated live stream where the AI-generated Musk promoted cryptocurrency investments via a QR code, claiming a potential doubling of assets.
During the stream, the fake Musk engaged with an audience, urging them to take advantage of the purported investment opportunity. The footage also featured a chat box from the fake Tesla page, displaying comments and links that further promoted the fraudulent scheme. The incident affected several other channels under Channel Seven’s umbrella, including 7 News and Spotlight, with all content subsequently deleted from these platforms.
A spokesperson from Channel Seven acknowledged the issue, confirming they are investigating alongside YouTube to resolve the situation swiftly. The network’s main YouTube page appeared inaccessible following the breach, prompting the investigation into how the security lapse occurred. The incident comes amidst broader challenges for Seven West Media, which recently announced significant job cuts as part of a cost-saving initiative led by its new CEO.
Why does it matter?
The breach underscores growing concerns over cybersecurity on social media platforms, particularly as unauthorised access to high-profile channels can disseminate misleading or harmful information. Channel Seven’s efforts to address the issue highlight the importance of robust digital security measures in safeguarding against such incidents in the future.
YouTube is negotiating with major record labels to license their songs for AI tools that clone popular artists’ music. The negotiations aim to secure the content needed to legally train AI song generators and launch new tools this year. Google-owned YouTube has offered upfront payments to major labels like Sony, Warner, and Universal to encourage artists to participate, but many remain opposed, fearing it could devalue their work.
Previously, YouTube tested an AI tool called ‘Dream Track,’ which allowed users to create music clips mimicking well-known artists. However, only a few artists participated, including Charli XCX and John Legend. YouTube now hopes to sign up dozens more artists to expand its AI song generator tool, though it won’t carry the Dream Track brand.
Why does it matter?
These negotiations come as AI companies like OpenAI are making licensing agreements with media groups. The proposed music deals would involve one-off payments to labels rather than royalty-based arrangements. YouTube’s AI tools could become part of its Shorts platform, competing with TikTok and other similar platforms. As these discussions continue, major labels are also suing AI startups for allegedly using copyrighted recordings without permission, seeking significant damages.
A Russian rights group, OVD-Info, reported that YouTube has threatened to block one of its channels in Russia, called Kak Teper, which discusses the war in Ukraine and political issues and has 100,000 subscribers.
Reuters reports that YouTube’s warning followed a complaint from Russian regulator Roskomnadzor, claiming the content violated information technology laws. OVD-Info is negotiating with YouTube and Google, labelling the potential block as political censorship.
YouTube did not specify which law had been violated and did not respond to inquiries about the case, but it confirmed the reinstatement of videos from other opposition channels. Blocking YouTube entirely in Russia could prove unpopular, given its tens of millions of monthly users in the country.
Why does it matter?
OVD-Info's Kak Teper could become the first human rights channel to be blocked in its entirety on YouTube, warns Natalia Krapiva of Access Now, noting the growing threat to civil society's presence on the platform. While Russia has blocked most foreign social media, YouTube has so far avoided a ban, though not without consequences: Google has been repeatedly fined for hosting content deemed illegal by Russian authorities.
Alphabet’s YouTube announced its compliance with a court decision to block access to 32 video links in Hong Kong, marking a move critics argue infringes on the city’s freedoms amid tightening security measures. The decision followed a government application granted by Hong Kong’s Court of Appeal, targeting a protest anthem named ‘Glory to Hong Kong,’ with judges cautioning against its potential use by dissidents to incite secession.
Expressing disappointment, YouTube stated it would abide by the removal order while highlighting concerns regarding the chilling effect on online free expression. Observers, including the US government, voiced worries over the ban’s impact on Hong Kong’s reputation as a financial hub committed to the free flow of information.
Industry groups emphasised the importance of maintaining a free and open internet in Hong Kong, citing its significance in preserving the city’s competitive edge. The move reflects broader trends of tech companies complying with legal requirements, with Google parent Alphabet having previously restricted content in China.
Why does it matter?
Despite YouTube’s action, tensions persist over the erosion of freedoms in Hong Kong, underscored by ongoing international scrutiny and criticism of the city’s security crackdown on dissent. As the city grapples with balancing national security concerns and its promised autonomy under the ‘one country, two systems’ framework, the implications for its future as a global business centre remain uncertain.
Recent reporting by The New York Times has highlighted the challenges AI companies face in acquiring high-quality training data, detailing how companies such as OpenAI and Google have navigated the issue, often treading into legally ambiguous territory around copyright.
OpenAI, for instance, developed its Whisper audio transcription model and used it to transcribe more than a million hours of YouTube videos, feeding the resulting text into the training of GPT-4, its advanced language model. Although the approach raised legal concerns, OpenAI reportedly believed it fell within fair use, and the company's president, Greg Brockman, is said to have played a hands-on role in collecting the videos.
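For readers unfamiliar with Whisper, the open-source openai-whisper package exposes the same model family. The minimal sketch below shows ordinary local transcription of a single audio file; it is illustrative only and says nothing about how OpenAI ran transcription at the scale described in the report. The file name is a placeholder.

```python
import whisper  # pip install openai-whisper

def transcribe(audio_path: str) -> str:
    # Load a pretrained checkpoint; "base" is small and fast, while larger
    # checkpoints ("medium", "large") trade speed for accuracy.
    model = whisper.load_model("base")
    result = model.transcribe(audio_path)
    return result["text"]

if __name__ == "__main__":
    # "lecture_clip.mp3" is a hypothetical file name for illustration.
    print(transcribe("lecture_clip.mp3"))
```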
A Google spokesperson said the company had seen unconfirmed reports of OpenAI's activity and noted that both YouTube's terms of service and its robots.txt file prohibit unauthorised scraping or downloading of YouTube content. Google itself has also used transcripts of YouTube videos, which it says is in line with its agreements with content creators.
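For context on what a robots.txt prohibition means in practice, the sketch below uses Python's standard urllib.robotparser to check whether a given crawler is permitted to fetch a URL under a site's robots.txt rules. The crawler name and video URL are made-up placeholders; honouring the file is voluntary for the crawler, which is why the terms of service matter as well.

```python
from urllib.robotparser import RobotFileParser

def allowed_to_fetch(user_agent: str, url: str) -> bool:
    # Download and parse the site's robots.txt, then ask whether this
    # user agent may fetch the given URL under its rules.
    rp = RobotFileParser()
    rp.set_url("https://www.youtube.com/robots.txt")
    rp.read()
    return rp.can_fetch(user_agent, url)

if __name__ == "__main__":
    # Placeholder crawler name and video URL for illustration.
    print(allowed_to_fetch("ExampleResearchBot",
                           "https://www.youtube.com/watch?v=placeholder"))
```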
AI companies, including Google and OpenAI, are grappling with the dwindling availability of quality training data to improve their models. The future of AI training may involve synthetic data or curriculum learning methods, but these approaches still need to be proven. In the meantime, companies continue to explore various avenues for data acquisition, sometimes straying into legally contentious territories as they navigate this evolving landscape.