China’s new video-generating AI faces limitations due to political censorship

A new AI video-generation model, Kling, developed by Beijing-based Kuaishou, is now widely available, but with significant limitations. Initially launched as a waitlisted service for users with Chinese phone numbers, Kling can now be accessed by anyone who provides an email address. The model generates five-second, 720p videos from user prompts, simulating physical effects such as rustling leaves and flowing water.

However, Kling censors politically sensitive topics. Prompts such as ‘Democracy in China,’ ‘Chinese President Xi Jinping,’ and ‘Tiananmen Square protests’ return error messages. The censorship operates at the prompt level: videos touching on these topics can still be generated, as long as the prompts do not mention them explicitly.

That behaviour likely stems from intense political pressure from the Chinese government. The Cyberspace Administration of China (CAC) is actively testing AI models to ensure they align with core socialist values and has proposed a blacklist of sources for training AI models. Companies must prepare models that produce ‘safe’ answers to thousands of questions, which may slow China’s AI development and create two classes of models: those heavily filtered and those less so.

The dichotomy raises questions about the broader implications for the AI ecosystem, as restrictive policies may hinder technological advancement and innovation.

Trump allies hinder disinformation research leading up to US election

A legal campaign led by allies of former US president Donald Trump has demanded investigations into the misinformation research field, alleging a conspiracy to censor conservative voices online. Academics who tracked election misinformation online have faced daily scrutiny, including having their correspondence regularly scanned with AI software for messages from government agencies or tech companies.

Disinformation has proliferated online as the US election approaches, especially after significant events such as the assassination attempt on Trump and President Biden’s withdrawal from the race. Due to the political scrutiny, researchers held back from publicly reporting some of their insights on misinformation issues related to public affairs.

Last month, the Supreme Court reversed a lower-court ruling restricting tech companies and the government from communicating about misinformation online. But the ruling hasn’t deterred Republicans from bringing lawsuits and sending a string of legal demands.

According to the investigation by The Washington Post, the GOP campaign has eroded the once thriving ecosystem of academics, nonprofits and tech industry initiatives dedicated to addressing the spread of misinformation online. Many prominent researchers in the field, like Claire Wardle, Stefanie Friedhoff, Ryan Calo and Kate Starbird, have expressed their concerns for academic freedom and democracy.

Bangladesh faces fifth day without internet amid protests

Bangladesh remained without internet access for the fifth consecutive day as the government declared a public holiday on Monday. Authorities maintained strict control following a Supreme Court ruling that scaled back a contentious quota system for government jobs, which had triggered violent protests. Despite an apparent calm, military personnel patrolled the capital and other areas under a curfew, with a shoot-on-sight order imposed days earlier.

The protests, primarily led by students, erupted over a quota reserving 30% of government jobs for relatives of veterans of Bangladesh’s 1971 war of independence. Clashes between police and protesters resulted in over a hundred deaths, according to local newspapers, though official figures have not been released. Even after the Supreme Court reduced the veterans’ quota to 5%, protesters continued to demand the restoration of internet services and the removal of security officials from university campuses.

Despite the court ruling, tensions in Bangladesh remain high. Protesters issued a 48-hour ultimatum for the government to end the digital crackdown and return the country to normalcy. The US Embassy in Dhaka described the situation as highly volatile, warning Americans to avoid large crowds and reconsider travel plans. The protests have presented a significant challenge to Prime Minister Sheikh Hasina’s government, highlighting ongoing political strife between her Awami League party and the opposition Bangladesh Nationalist Party.

Singapore blocks 95 accounts linked to exiled Chinese tycoon Guo Wengui

Singapore has ordered five social media platforms to block access to 95 accounts linked to exiled Chinese tycoon Guo Wengui. These accounts posted over 120 times from April 17 to May 10, alleging foreign interference in Singapore’s leadership transition. The Home Affairs Ministry stated that the posts suggested a foreign actor influenced the selection of Singapore’s new prime minister.

Singapore’s Foreign Interference (Countermeasures) Act, enacted in October 2021, was used for the first time to address this issue. Guo Wengui, recently convicted in the US for fraud, has a history of opposing Beijing. Together with former Trump adviser Steve Bannon, he launched the New Federal State of China, aimed at overthrowing China’s Communist Party.

The ministry expressed concern that Guo’s network could spread false narratives detrimental to Singapore’s interests and sovereignty. Blocking these accounts was deemed necessary to prevent potential hostile information campaigns targeting Singapore.

Guo and his affiliated organisations have been known to push various Singapore-related narratives. Their coordinated actions and previous attempts to use Singapore to advance their agenda highlight their capability to undermine Singapore’s social cohesion and sovereignty.

Tokyo residents oppose massive data centre project

Residents of Akishima city in western Tokyo are petitioning to block the construction of a large logistics and data centre by Singaporean developer GLP. Over 220 residents have expressed concerns that the centre would harm local wildlife, cause pollution, increase electricity usage, and deplete the city’s groundwater supply.

The group has filed a petition to review the urban planning process that approved GLP’s data centre, which is projected to consume 3.63 million megawatt-hours of electricity and emit around 1.8 million tons of carbon dioxide annually. They also worry that the project would require cutting down 3,000 of the 4,800 trees on the site, threatening the habitat of Eurasian goshawks and badgers.

The residents are considering arbitration to force GLP to reconsider its plans, with construction set to begin in February and completion expected by early 2029. The opposition comes amidst growing demand for data centres in Japan, where the market is projected to grow significantly over the next few years. GLP has declined to comment on the matter.

Meta will remove content in which ‘Zionist’ is used as a proxy term for antisemitism

Meta announced on Tuesday that it will begin removing more posts that target ‘Zionists’ when the term is used to refer to Jewish people and Israelis, rather than to supporters of the political movement. The decision rests on the recognition that the word can take on new meanings and become a proxy term for a nationality. Meta’s hate-speech policy covers numerous ‘protected characteristics,’ including nationality, race, and religion.

Previously, Meta’s approach has treated the word ‘Zionist’ as a proxy for Jewish or Israeli people in two specific cases: when Zionists are compared to rats, reflecting antisemitic imagery, and when context clearly indicates that the word means ‘Jew’ or ‘Israeli.’ Now, Meta will remove content attacking ‘Zionists’ when it is not explicitly about the political movement and when it uses certain antisemitic stereotypes, dehumanises, denies the existence of, or threatens or calls for harm or intimidation of ‘Jews’ or ‘Israelis.’

The policy change has been praised by the World Jewish Congress. Its president, Ronald S. Lauder, stated, ‘By recognizing and addressing the misuse of the term ‘Zionist,’ Meta is taking a bold stand against those who seek to mask their hatred of Jews.’ Meta has previously reported significant decreases in hate speech on its platforms.

A recurring question during consultations was how to handle comparisons of Zionists to criminals. Meta does not allow content that compares people with ‘protected characteristics’ to criminals, but believes that, for now, such comparisons can serve as shorthand for commentary on larger military actions. The issue has been referred to Meta’s Oversight Board. Meta consulted 145 stakeholders from civil society and academia across various global regions for this policy update.

AI tool lets YouTube creators erase copyrighted songs

YouTube has introduced an updated eraser tool that allows creators to remove copyrighted music from their videos without affecting speech, sound effects, or other audio. Launched on 4 July, the tool uses an AI-powered algorithm to target only the copyrighted music, leaving the rest of the video intact.

Previously, videos flagged for copyrighted audio faced muting or removal. However, YouTube cautions that the tool might only be effective if the song is easy to isolate.

YouTube chief Neal Mohan announced the launch on X, explaining that the company had been testing the tool for some time but struggled to remove copyrighted tracks accurately. The new AI algorithm represents a significant improvement, allowing users to mute all sound or erase the music in their videos. Advancements like this are part of YouTube’s broader efforts to leverage AI technology to enhance user experience and compliance with copyright laws.

In addition to the eraser tool, YouTube is making strides in AI-driven music licensing. The company has been negotiating with major record labels to roll out AI music licensing deals, aiming to use AI to create music and potentially offer AI voice imitations of famous artists. Following the launch of YouTube’s AI tool Dream Track last year, which allowed users to create music with AI-generated voices of well-known singers, YouTube continues to engage with major labels like Sony, Warner, and Universal to expand the use of AI in music creation and licensing.

Supreme Court delays ruling on state laws targeting social media

The US Supreme Court has deferred rulings on the constitutionality of laws from Florida and Texas aimed at regulating social media companies’ content moderation practices. The laws, challenged by industry groups including NetChoice and CCIA, sought to restrict the ability of platforms such as Meta and Google to moderate content they deem objectionable. While the lower courts had reached mixed decisions—blocking Florida’s law and upholding Texas’—the Supreme Court unanimously found that those rulings didn’t fully address First Amendment concerns and sent the cases back for further review.

Liberal Justice Elena Kagan, writing for the majority, questioned Texas’ law, suggesting it sought to impose state preferences on social media content moderation, which could violate the First Amendment. Central to the debate is whether states can compel platforms to host content against their editorial discretion, which companies argue is necessary to manage spam, bullying, extremism, and hate speech. Supporters argue these laws protect free speech by preventing censorship of conservative viewpoints, a claim disputed by the Biden administration, which opposes the laws for potentially violating First Amendment protections.

Why does it matter?

At stake are laws that would restrict platforms with over 50 million users from censoring based on viewpoint (Texas) and limit content exclusion for political candidates or journalistic enterprises (Florida). Additionally, these laws require platforms to explain content moderation decisions, a requirement some argue burdens free speech rights.

The Supreme Court’s decision not to rule marks another chapter in the ongoing legal battle over digital free speech rights, following earlier decisions regarding officials’ social media interactions and misinformation policies.

The future of humour in advertising with AI

AI is revolutionising the world of advertising, particularly when it comes to humour. Traditionally, humour in advertising depended heavily on human creativity, relying on puns, sarcasm, and funny voices to engage consumers. However, as AI advances, it is increasingly being used to create comedic content.

Neil Heymann, Global Chief Creative Officer at Accenture Song, discussed the integration of AI in humour at the Cannes Lions International Festival of Creativity. He noted that while humour in advertising carries certain risks, the potential rewards far outweigh them. Despite the challenges of maintaining a unique comedic voice in a globalised market, AI offers new opportunities for creativity and personalisation.

One notable example Heymann highlighted was a recent Uber ad in the UK featuring Robert De Niro. He emphasised that while AI might struggle to replicate the nuanced performance of an actor like De Niro, it can still be a valuable tool for generating humour. For instance, a new tool developed by Google Labs can create jokes by exploring various wordplay and puns, expanding the creative options available to writers.

Heymann believes that AI can also help navigate the complexities of global advertising. By acting as an advanced filtering system, AI can identify potential cultural pitfalls and ensure that humorous content resonates with diverse audiences without losing the thrill of creativity.

Moreover, AI’s impact on advertising extends beyond humour. Toys ‘R’ Us recently pioneered text-to-video AI-generated advertising clips, showcasing AI’s ability to revolutionise content creation across various formats. That innovation highlights the expanding role of AI in shaping the future of advertising, where technological advancements continuously redefine creative possibilities.

WikiLeaks founder agrees to plea deal over US classified documents

The founder of WikiLeaks, Julian Assange, has agreed to plead guilty to a single charge of conspiring to acquire and disclose classified US national defence documents, as outlined in court documents filed in the US District Court for the Northern Mariana Islands.

Under the terms of the deal, Assange entered his plea to a single offence in the Northern Mariana Islands, a US territory in the Pacific, shortly after his release from a British prison, concluding a 14-year legal battle. In exchange, he was credited with time served and permitted to fly back to Australia to reunite with his family.

US authorities had been pursuing the 52-year-old over a significant disclosure of confidential files in 2010. Prosecutors had initially sought to prosecute the WikiLeaks founder on 18 counts, primarily under the Espionage Act, related to the release of confidential US military records and diplomatic cables concerning the Afghanistan and Iraq wars, which they claimed endangered lives. WikiLeaks had published a video from a US military helicopter showing civilians being killed in Baghdad, Iraq. It also released numerous confidential documents indicating that the US military had caused the deaths of hundreds of civilians in unreported incidents during the Afghanistan war.

WikiLeaks, established by Assange in 2006, has published over 10 million documents. One of Assange’s prominent collaborators, US Army intelligence analyst Chelsea Manning, was sentenced to 35 years in prison before then-President Barack Obama commuted the sentence in 2017.

During the hearing, Assange told the court, ‘As a journalist, I encouraged my source to provide information that was deemed classified to publish that information.’ Assange underscored his belief that he would be shielded by the First Amendment of the US Constitution, safeguarding freedom of the press. Prosecutors alleged that the WikiLeaks founder actively promoted leaks of classified information, asserting that Assange told leakers that ‘top secret means nothing.’ Following the sentencing, Assange’s attorney, Barry Pollack, affirmed that ‘Wikileaks’s work will persist, and Mr Assange, without a doubt, will remain a driving force for freedom of speech and government transparency.’