Singapore blocks 95 accounts linked to exiled Chinese tycoon Guo Wengui

Singapore has ordered five social media platforms to block access to 95 accounts linked to exiled Chinese tycoon Guo Wengui. These accounts posted over 120 times from 17 April to 10 May, alleging foreign interference in Singapore’s leadership transition. The Home Affairs Ministry stated that the posts suggested a foreign actor influenced the selection of Singapore’s new prime minister.

Singapore’s Foreign Interference (Countermeasures) Act, enacted in October 2021, was used for the first time to address this issue. Guo Wengui, recently convicted in the US for fraud, has a history of opposing Beijing. Together with former Trump adviser Steve Bannon, he launched the New Federal State of China, aimed at overthrowing China’s Communist Party.

The ministry expressed concern that Guo’s network could spread false narratives detrimental to Singapore’s interests and sovereignty. Blocking these accounts was deemed necessary to prevent potential hostile information campaigns targeting Singapore.

Guo and his affiliated organisations have been known to push various Singapore-related narratives. The coordinated actions and previous attempts to use Singapore to advance their agenda highlighted their capability to undermine Singapore’s social cohesion and sovereignty.

Tokyo residents oppose massive data centre project

Residents of Akishima city in western Tokyo are petitioning to block the construction of a large logistics and data centre by Singaporean developer GLP. Over 220 residents have expressed concerns that the centre would harm local wildlife, cause pollution, increase electricity usage, and deplete the city’s groundwater supply.

The group has filed a petition to review the urban planning process that approved GLP’s 3.63-million-megawatt data centre, which is estimated to emit around 1.8 million tons of carbon dioxide annually. They also worry that the project would require cutting down 3,000 of the 4,800 trees on the site, threatening the habitat of Eurasian goshawks and badgers.

The residents are considering arbitration to force GLP to reconsider its plans, with construction set to begin in February and completion expected by early 2029. The opposition comes amidst growing demand for data centres in Japan, where the market is projected to grow significantly over the next few years. GLP has declined to comment on the matter.

Meta will remove content in which ‘Zionist’ is used as a proxy term for antisemitism

Meta announced on Tuesday that it will begin removing more posts that target ‘Zionists’ when the term is used to refer to Jewish people and Israelis, rather than to supporters of the political movement. The decision rests on the view that a word can take on new meanings and become a proxy term for a nationality. Meta recognises numerous ‘protected characteristics,’ including nationality, race, and religion.

Previously, Meta treated the word ‘Zionist’ as a proxy for Jewish or Israeli people in two specific cases: when Zionists were compared to rats, reflecting antisemitic imagery, and when context clearly indicated that the word meant ‘Jew’ or ‘Israeli.’ Now, Meta will remove content attacking ‘Zionists’ when it is not explicitly about the political movement and when it uses certain antisemitic stereotypes, dehumanises, denies the existence of, or threatens or calls for harm or intimidation of ‘Jews’ or ‘Israelis.’

The policy change has been praised by the World Jewish Congress. Its president, Ronald S. Lauder, stated, ‘By recognizing and addressing the misuse of the term “Zionist,” Meta is taking a bold stand against those who seek to mask their hatred of Jews.’ Meta has previously reported significant decreases in hate speech on its platforms.

A recurring question during consultations was how to handle comparisons of Zionists to criminals. Meta does not allow content comparing people with ‘protected characteristics’ to criminals, but it currently permits comparisons involving ‘Zionists’ on the view that the term can serve as shorthand for commentary on broader military actions. The issue has been referred to an oversight board. Meta consulted 145 stakeholders from civil society and academia across various global regions for this policy update.

AI tool lets YouTube creators erase copyrighted songs

YouTube has introduced an updated eraser tool that allows creators to remove copyrighted music from their videos without affecting speech, sound effects, or other audio. Launched on 4 July, the tool uses an AI-powered algorithm to target only the copyrighted music, leaving the rest of the video intact.

Previously, videos flagged for copyrighted audio faced muting or removal. However, YouTube cautions that the tool might only be effective if the song is easy to isolate.
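YouTube has not published details of its algorithm, but the general idea behind isolating one audio source from a mixture can be illustrated with a toy time-frequency masking sketch. Real systems use trained neural networks to predict which parts of the spectrogram belong to the music; the hard-coded mask, synthetic signals, frequencies, and `power` helper below are all invented for illustration and are not YouTube’s method:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000                       # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)   # one second of audio

# Stand-ins for speech (300 Hz tone) and music (2 kHz tone)
speech = np.sin(2 * np.pi * 300 * t)
music = 0.5 * np.sin(2 * np.pi * 2000 * t)
mix = speech + music

# Transform the mixture into a spectrogram
f, frames, Z = stft(mix, fs=fs, nperseg=256)

# "Separation model": a hard mask that zeroes every bin above 1 kHz,
# where our toy music lives. A real system would learn this mask.
mask = (f < 1000)[:, None]
_, cleaned = istft(Z * mask, fs=fs, nperseg=256)

def power(x):
    """Mean signal power, used to compare before/after energy."""
    return float(np.mean(np.square(x)))

# The cleaned signal should retain the speech energy (~0.5)
# while the music component (~0.125) is stripped out.
print(power(mix), power(cleaned[:len(t)]))
```

When the music overlaps the speech in time and frequency, a fixed mask like this fails entirely, which is one intuition for why even learned models still struggle when a song is hard to isolate.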

YouTube chief Neal Mohan announced the launch on X, explaining that the company had been testing the tool for some time but struggled to remove copyrighted tracks accurately. The new AI algorithm represents a significant improvement, allowing users to mute all sound or erase the music in their videos. Advancements like this are part of YouTube’s broader efforts to leverage AI technology to enhance user experience and compliance with copyright laws.

In addition to the eraser tool, YouTube is making strides in AI-driven music licensing. The company has been negotiating with major record labels to roll out AI music licensing deals, aiming to use AI to create music and potentially offer AI voice imitations of famous artists. Following the launch of YouTube’s AI tool Dream Track last year, which allowed users to create music with AI-generated voices of well-known singers, YouTube continues to engage with major labels like Sony, Warner, and Universal to expand the use of AI in music creation and licensing.

Supreme Court delays ruling on state laws targeting social media

The US Supreme Court has deferred rulings on the constitutionality of laws from Florida and Texas aimed at regulating social media companies’ content moderation practices. The laws, challenged by industry groups including NetChoice and CCIA, sought to limit platforms like Meta Platforms, Google, and others from moderating content they deem objectionable. The lower courts had reached mixed decisions, blocking Florida’s law and upholding Texas’, but the Supreme Court unanimously held that those rulings had not fully addressed the First Amendment issues and sent the cases back for further review.

Liberal Justice Elena Kagan, writing for the majority, questioned Texas’ law, suggesting it sought to impose state preferences on social media content moderation, which could violate the First Amendment. Central to the debate is whether states can compel platforms to host content against their editorial discretion, which companies argue is necessary to manage spam, bullying, extremism, and hate speech. Supporters argue these laws protect free speech by preventing censorship of conservative viewpoints, a claim disputed by the Biden administration, which opposes the laws for potentially violating First Amendment protections.

Why does it matter?

At stake are laws that would restrict platforms with over 50 million users from censoring based on viewpoint (Texas) and limit content exclusion for political candidates or journalistic enterprises (Florida). Additionally, these laws require platforms to explain content moderation decisions, a requirement some argue burdens free speech rights.

The Supreme Court’s decision not to rule marks another chapter in the ongoing legal battle over digital free speech rights, following earlier decisions regarding officials’ social media interactions and misinformation policies.

The future of humour in advertising with AI

AI is revolutionising the world of advertising, particularly when it comes to humour. Traditionally, humour in advertising depended heavily on human creativity, relying on puns, sarcasm, and funny voices to engage consumers. However, as AI advances, it is increasingly being used to create comedic content.

Neil Heymann, Global Chief Creative Officer at Accenture Song, discussed the integration of AI in humour at the Cannes Lions International Festival of Creativity. He noted that while humour in advertising carries certain risks, the potential rewards far outweigh them. Despite the challenges of maintaining a unique comedic voice in a globalised market, AI offers new opportunities for creativity and personalisation.

One notable example Heymann highlighted was a recent Uber ad in the UK featuring Robert De Niro. He emphasised that while AI might struggle to replicate the nuanced performance of an actor like De Niro, it can still be a valuable tool for generating humour. For instance, a new tool developed by Google Labs can create jokes by exploring wordplay and puns, expanding the creative options available to writers.

Heymann believes that AI can also help navigate the complexities of global advertising. By acting as an advanced filtering system, AI can identify potential cultural pitfalls and ensure that humorous content resonates with diverse audiences without losing the thrill of creativity.

Moreover, AI’s impact on advertising extends beyond humour. Toys ‘R’ Us recently pioneered text-to-video AI-generated advertising clips, showcasing AI’s ability to revolutionise content creation across various formats. That innovation highlights the expanding role of AI in shaping the future of advertising, where technological advancements continuously redefine creative possibilities.

WikiLeaks founder agrees to plea deal over US classified documents

The founder of WikiLeaks, Julian Assange, has agreed to plead guilty to a single charge of conspiring to acquire and disclose classified US national defence documents, as outlined in court documents filed in the US District Court for the Northern Mariana Islands.

Under the terms of the deal, Assange confessed in a US court, concluding a 14-year legal struggle, and was granted his freedom. He formally entered a plea to a single offence in the Northern Mariana Islands, a US territory in the Pacific, shortly after his release from a British prison. In exchange, he has been given credit for time served and is permitted to fly back to Australia to reunite with his family.

US authorities had been pursuing the 52-year-old for a significant disclosure of confidential files in 2010. Prosecutors had initially sought to prosecute the WikiLeaks founder on 18 counts, primarily under the Espionage Act, related to the release of confidential US military records and diplomatic messages concerning the Afghanistan and Iraq wars, which they claimed endangered lives. WikiLeaks had unveiled a video from a US military helicopter showing civilians being killed in Baghdad, Iraq. It also released numerous confidential documents indicating that the US military had caused the deaths of hundreds of civilians in unreported incidents during the Afghanistan war.

WikiLeaks, established by Assange in 2006, has published over 10 million documents. One of Assange’s prominent collaborators, US Army intelligence analyst Chelsea Manning, was sentenced to 35 years in prison before then-President Barack Obama commuted the sentence in 2017.

During the hearing, Assange told the court, ‘As a journalist, I encouraged my source to provide information that was deemed classified to publish that information.’ Assange underscored his belief that he would be shielded by the First Amendment of the US Constitution, safeguarding freedom of the press. Prosecutors alleged that the WikiLeaks founder actively promoted leaks of classified information, asserting that Assange told leakers that ‘top secret means nothing.’ Following the sentencing, Assange’s attorney, Barry Pollack, affirmed that ‘WikiLeaks’s work will persist, and Mr Assange, without a doubt, will remain a driving force for freedom of speech and government transparency.’

Geologists voice concerns about potential censorship and bias in Chinese AI chatbot

Geologists are expressing concerns about potential Chinese censorship and bias in GeoGPT, a new AI chatbot backed by the International Union of Geological Sciences (IUGS). Developed under the Deep-time Digital Earth (DDE) program, which is heavily funded by China, GeoGPT aims to assist geoscientists, particularly in developing countries, by providing access to extensive geological data. However, issues around transparency and censorship have been highlighted by experts, raising questions about the chatbot’s reliability.

Critics like Prof. Paul Cleverley have pointed out potential censorship and lack of transparency in GeoGPT’s responses. Although DDE representatives claim that the chatbot’s information is purely geoscientific and free from state influence, tests with its underlying AI, Qwen, developed by Alibaba, suggest that certain sensitive questions may be avoided or answered inadequately. That contrasts with responses from other AI models like ChatGPT, which provide more direct information on similar queries.

Further concerns have been raised about the involvement of Chinese funding and the potential for biased data usage. Geoscientific research, which includes valuable information about natural resources, could be strategically filtered. Additionally, GeoGPT’s terms of use prohibit generating content that undermines national security or incites subversion, in line with Chinese law, which may influence the chatbot’s outputs.

The IUGS president, John Ludden, has stated that GeoGPT’s database will be made public once appropriate governance is ensured. However, with the project being predominantly funded by Chinese sources, geoscientists remain sceptical about the impartiality and transparency of GeoGPT’s data and responses.

ByteDance challenges US TikTok ban in court

ByteDance and its subsidiary TikTok are urging a US court to overturn a law that would ban the popular app in the USA by 19 January. The law, signed by President Biden in April, requires ByteDance to divest TikTok’s US assets or face a ban, which the company argues is impractical on technological, commercial, and legal grounds.

ByteDance contends that the law, driven by concerns over potential Chinese access to American data, violates free speech rights and unfairly singles out TikTok while it ‘ignores many applications with substantial operations in China that collect large amounts of US user data, as well as the many US companies that develop software and employ engineers in China.’ They argue that the legislation represents a substantial departure from the US tradition of supporting an open internet and sets a dangerous precedent.

The US Court of Appeals for the District of Columbia Circuit will hear oral arguments in the case on 16 September, a decision that could shape the future of TikTok in the US. ByteDance says that during lengthy negotiations with the US government, which ended abruptly in August 2022, it proposed various measures to protect US user data, including a ‘kill switch’ allowing the government to suspend TikTok if necessary. The company also made public a 100-plus-page draft national security agreement to protect US TikTok user data and says it has spent more than $2 billion on the effort. However, it believes the administration prefers to shut down the app rather than finalise a feasible agreement.

The Justice Department, defending the law, asserted that it addresses national security concerns appropriately. Moreover, the case follows a similar attempt by former President Trump to ban TikTok, which was blocked by the courts in 2020. This time, the new law would prohibit app stores and internet hosting services from supporting TikTok unless ByteDance divests it.

TikTok’s fate in US to be decided before election

A US appeals court has scheduled oral arguments for 16 September to address legal challenges against a new law requiring ByteDance, the China-based parent company of TikTok, to divest its US assets by 19 January or face a ban. The law, signed by President Joe Biden on 24 April, aims to eliminate Chinese ownership of TikTok due to national security concerns. TikTok, ByteDance, and a group of TikTok creators have filed lawsuits to block the law, arguing that it significantly impacts American life, with 170 million Americans using the app.

The hearing will coincide with the final weeks of the 2024 presidential election, and both parties are seeking a ruling by 6 December to allow for a potential Supreme Court review. The law also prohibits app stores like Apple and Google from offering TikTok and bars internet hosting services from supporting it unless ByteDance divests. Such a measure reflects US lawmakers’ fears that China could use TikTok to access American data or conduct espionage.