Alphabet stocks drop on AI investment concerns

Shares of Google’s parent company Alphabet fell more than 3% on Wednesday amid concerns that rising investments in AI infrastructure could squeeze margins and that YouTube faces stiff competition for ad dollars. Alphabet’s capital expenditure rose to $13.2 billion in the second quarter, exceeding expectations, as the company invests heavily in the infrastructure needed to support generative AI services and compete with Microsoft.

While Alphabet has been cutting costs through layoffs to protect profitability, analysts noted that seasonal hiring of fresh graduates and the earlier-than-usual Pixel launch would impact margins in the third quarter. Additionally, YouTube’s ad sales growth slowed to 13% in the second quarter from nearly 21% in the first quarter, as it grapples with tough year-on-year comparisons and competition from Amazon in the online video ad market.

Despite these challenges, many analysts remain positive about Alphabet, citing the boost its AI efforts give to cloud revenue and the minimal disruption to Search revenue from its AI Overviews. Cloud computing services revenue rose by 28.8%, outpacing expectations and signalling robust enterprise spending. Analysts believe Alphabet’s AI advancements position it as a market leader, and 25 brokerages have raised their price targets for the stock. The company’s failed Wiz acquisition likewise reflects its ambition to expand market share and reclaim its place at the top.

Alphabet’s stock, which has gained about 30% this year due to the AI stock rally, is set to lose around $60 billion in market value. However, its 12-month forward price-to-earnings ratio of 22.2 remains competitive compared to Nvidia’s 38.6, indicating continued confidence in Alphabet’s long-term growth prospects.

AI tool lets YouTube creators erase copyrighted songs

YouTube has introduced an updated eraser tool that allows creators to remove copyrighted music from their videos without affecting speech, sound effects, or other audio. Launched on 4 July, the tool uses an AI-powered algorithm to target only the copyrighted music, leaving the rest of the video intact.

Previously, videos flagged for copyrighted audio faced muting or removal. However, YouTube cautions that the tool might only be effective if the song is easy to isolate.

YouTube chief Neal Mohan announced the launch on X, explaining that the company had been testing the tool for some time but struggled to remove copyrighted tracks accurately. The new AI algorithm represents a significant improvement, allowing users to mute all sound or erase the music in their videos. Advancements like this are part of YouTube’s broader efforts to leverage AI technology to enhance user experience and compliance with copyright laws.

In addition to the eraser tool, YouTube is making strides in AI-driven music licensing. The company has been negotiating with major record labels to roll out AI music licensing deals, aiming to use AI to create music and potentially offer AI voice imitations of famous artists. Following the launch of YouTube’s AI tool Dream Track last year, which allowed users to create music with AI-generated voices of well-known singers, YouTube continues to engage with major labels like Sony, Warner, and Universal to expand the use of AI in music creation and licensing.

YouTube implements rules for removing AI-generated mimicking videos

YouTube has implemented new privacy guidelines allowing individuals to request the removal of AI-generated videos that imitate them. Initially promised in November 2023, these rules are now officially in effect, as confirmed by a recent update to YouTube’s privacy policies.

According to the updated guidelines, users can request the removal of content that realistically depicts a synthetic version of themselves, created or altered using AI. YouTube will evaluate such requests against several criteria, including whether the content is altered or synthetic, whether it is disclosed as such, whether the person is identifiable, how realistic it is, and whether it carries public-interest value such as parody or satire. Human moderators will handle complaints, and if a request is validated, the uploader must either delete the video within 48 hours or edit out the problematic parts.

These guidelines aim to protect individuals from potentially harmful content like deepfakes, which can easily mislead viewers. They are particularly relevant in upcoming elections in countries such as France, the UK, and the US, where misusing AI-generated videos could impact political discourse.

AI-generated Elon Musk hijacks Channel Seven’s YouTube

Channel Seven is currently investigating a significant breach on its YouTube channel, where unauthorised content featuring an AI-generated deepfake version of Elon Musk was streamed repeatedly. The incident on Thursday involved the channel being altered to mimic Tesla’s official presence. Viewers were exposed to a fabricated live stream where the AI-generated Musk promoted cryptocurrency investments via a QR code, claiming a potential doubling of assets.

During the stream, the fake Musk engaged with an audience, urging them to take advantage of the purported investment opportunity. The footage also featured a chat box from the fake Tesla page, displaying comments and links that further promoted the fraudulent scheme. The incident affected several other channels under Channel Seven’s umbrella, including 7 News and Spotlight, with all content subsequently deleted from these platforms.

A spokesperson from Channel Seven acknowledged the issue, confirming they are investigating alongside YouTube to resolve the situation swiftly. The network’s main YouTube page appeared inaccessible following the breach, prompting the investigation into how the security lapse occurred. The incident comes amidst broader challenges for Seven West Media, which recently announced significant job cuts as part of a cost-saving initiative led by its new CEO.

Why does it matter?

The breach underscores growing concerns over cybersecurity on social media platforms, particularly as unauthorised access to high-profile channels can disseminate misleading or harmful information. Channel Seven’s efforts to address the issue highlight the importance of robust digital security measures in safeguarding against such incidents in the future.

YouTube seeks music licensing deals for AI generation tools

YouTube is negotiating with major record labels to license their songs for AI tools that clone popular artists’ music. The negotiations aim to secure the content needed to legally train AI song generators and launch new tools this year. Google-owned YouTube has offered upfront payments to major labels like Sony, Warner, and Universal to encourage artists to participate, but many remain opposed, fearing it could devalue their work.

Previously, YouTube tested an AI tool called ‘Dream Track,’ which allowed users to create music clips mimicking well-known artists. However, only a few artists participated, including Charli XCX and John Legend. YouTube now hopes to sign up dozens more artists to expand its AI song generator tool, though it won’t carry the Dream Track brand.

Why does it matter?

These negotiations come as AI companies like OpenAI are making licensing agreements with media groups. The proposed music deals would involve one-off payments to labels rather than royalty-based arrangements. YouTube’s AI tools could become part of its Shorts platform, competing with TikTok and other similar platforms. As these discussions continue, major labels are also suing AI startups for allegedly using copyrighted recordings without permission, seeking significant damages.

YouTube threatens to block Russian rights group’s channel

A Russian rights group, OVD-Info, reported that YouTube has threatened to block one of its channels in Russia, Kak Teper, which discusses the war in Ukraine and political issues and has 100,000 subscribers.

Reuters reports that YouTube’s warning followed a complaint from Russian regulator Roskomnadzor, claiming the content violated information technology laws. OVD-Info is negotiating with YouTube and Google, labelling the potential block as political censorship.

YouTube did not specify which law was violated and did not respond to inquiries about the case, but it confirmed the reinstatement of videos from other opposition channels. Blocking YouTube entirely in Russia could prove unpopular, given its tens of millions of monthly users there.

Why does it matter? 

OVD-Info’s Kak Teper might become the first entire human rights channel banned on YouTube, warns Natalia Krapiva from Access Now, noting the growing threat to civil society’s presence on the platform. While Russia has blocked most foreign social media, YouTube has managed to avoid a ban, but not without consequences, as it has been consistently fined for hosting content deemed illegal by Russian authorities.

YouTube to block Hong Kong protest anthem videos following court directive

Alphabet’s YouTube announced its compliance with a court decision to block access to 32 video links in Hong Kong, marking a move critics argue infringes on the city’s freedoms amid tightening security measures. The decision followed a government application granted by Hong Kong’s Court of Appeal, targeting a protest anthem named ‘Glory to Hong Kong,’ with judges cautioning against its potential use by dissidents to incite secession.

Expressing disappointment, YouTube stated it would abide by the removal order while highlighting concerns regarding the chilling effect on online free expression. Observers, including the US government, voiced worries over the ban’s impact on Hong Kong’s reputation as a financial hub committed to the free flow of information.

Industry groups emphasised the importance of maintaining a free and open internet in Hong Kong, citing its significance in preserving the city’s competitive edge. The move reflects broader trends of tech companies complying with legal requirements, with Google parent Alphabet having previously restricted content in China.

Why does it matter?

Despite YouTube’s action, tensions persist over the erosion of freedoms in Hong Kong, underscored by ongoing international scrutiny and criticism of the city’s security crackdown on dissent. As the city grapples with balancing national security concerns and its promised autonomy under the ‘one country, two systems’ framework, the implications for its future as a global business centre remain uncertain.

OpenAI utilised one million hours of YouTube content to train GPT-4

In recent reports by The New York Times, the challenges faced by AI companies in acquiring high-quality training data have come to light. The New York Times elaborates on how companies like OpenAI and Google have navigated this issue, often treading in legally ambiguous territories related to AI copyright law.

OpenAI, for instance, developed its Whisper audio transcription model and used it to transcribe over a million hours of YouTube videos, producing text to train GPT-4, its advanced language model. Although this approach raised legal concerns, OpenAI believed it fell within fair use. The company’s president, Greg Brockman, reportedly played a hands-on role in collecting these videos.

A Google spokesperson said the company had seen unconfirmed reports of OpenAI’s activities and noted that both Google’s terms of service and robots.txt files prohibit unauthorised scraping or downloading of YouTube content. Google itself has also used transcripts from YouTube, in line with its agreements with content creators.

Similarly, Meta encountered challenges with data availability for training its AI models. The company’s AI team discussed using copyrighted works without permission to catch up with OpenAI. Meta explored options like paying for book licenses or acquiring a large publisher to address this issue.

Why does it matter?

AI companies, including Google and OpenAI, are grappling with the dwindling availability of quality training data to improve their models. The future of AI training may involve synthetic data or curriculum learning methods, but these approaches still need to be proven. In the meantime, companies continue to explore various avenues for data acquisition, sometimes straying into legally contentious territories as they navigate this evolving landscape.

YouTube under scrutiny for approving false information ads on India’s elections

A recent report by research organisations Access Now and Global Witness revealed that nearly 50 ads filled with misinformation aimed at disrupting India’s elections or impeding voters were approved by YouTube despite clearly violating the platform’s policies on election misinformation. The investigation, titled ‘“Votes will not be counted”: Indian election disinformation ads and YouTube’, found ads in English, Hindi, and Telugu spreading falsehoods about the upcoming election, such as false claims about voting methods and age requirements.

Although the researchers who submitted the ads withdrew them all before publication as a safety measure, YouTube’s approval of these ads has raised concerns about its role in ensuring free and fair elections. Namrata Maheshwari, senior policy counsel at Access Now, emphasised YouTube’s failure to enforce its disinformation policies, especially as India approaches its crucial election year in 2024.

In response to the investigation, YouTube’s parent company, Google, stated that none of the ads ran on its systems and reaffirmed that its policies are enforced year-round. The company explained that its enforcement process involves multiple layers to ensure ad compliance, indicating that initial approval does not guarantee the publication of ads that later violate policies. However, the report underscores the need for platforms like YouTube to effectively implement and enforce their policies to safeguard electoral integrity, particularly in the face of increasing misinformation surrounding elections globally.

Why does it matter?

The incident highlights the ongoing challenges social media platforms face in combating misinformation and ensuring the integrity of democratic processes. As the spread of false information continues to threaten elections worldwide, there is growing pressure on tech companies to enhance their efforts in detecting and removing misleading content, especially during critical election periods.

British Army’s YouTube and Twitter accounts hacked and used to promote crypto scams

The UK Ministry of Defence has confirmed that the British Army’s Twitter and YouTube accounts were hacked and used to spread scams.

Hackers changed the organisation’s profile picture, bio, and cover photo on Twitter to make it appear as though it was part of The Possessed NFT collection.

On YouTube, hackers deleted all of the videos on the British Army’s channel and changed its name and profile picture to look like the (real) investment company Ark Invest. Hackers replaced the British Army’s videos with a series of old live streams featuring former Twitter CEO Jack Dorsey and Tesla CEO Elon Musk.

The Army has since regained control of both accounts.