Meta will remove content in which ‘Zionist’ is used as a proxy term for antisemitism

Meta announced on Tuesday that it will begin removing more posts that target ‘Zionists’ when the term is used to refer to Jewish people and Israelis, rather than supporters of the political movement. The decision rests on the finding that the word can take on new meanings and become a proxy term for nationality. Meta’s hate speech policy covers numerous ‘protected characteristics,’ including nationality, race, and religion.

Previously, Meta treated the word ‘Zionist’ as a proxy for Jewish or Israeli people in two specific cases: when Zionists were compared to rats, echoing antisemitic imagery, and when context clearly indicated that the word meant ‘Jew’ or ‘Israeli.’ Now, Meta will also remove content attacking ‘Zionists’ when it is not explicitly about the political movement and when it deploys antisemitic stereotypes, dehumanises, denies the existence of, or threatens or calls for harm to or intimidation of Jews or Israelis.

The policy change has been praised by the World Jewish Congress. Its president, Ronald S. Lauder, stated, ‘By recognizing and addressing the misuse of the term “Zionist,” Meta is taking a bold stand against those who seek to mask their hatred of Jews.’ Meta has previously reported significant decreases in hate speech on its platforms.

A recurring question during the consultations was how to handle comparisons of Zionists to criminals. Meta does not allow content that compares people with protected characteristics to criminals, but it currently believes such comparisons can serve as shorthand for commentary on wider military actions. The question has been referred to the company’s Oversight Board. For this policy update, Meta consulted 145 stakeholders from civil society and academia across various global regions.

AI tool lets YouTube creators erase copyrighted songs

YouTube has introduced an updated eraser tool that allows creators to remove copyrighted music from their videos without affecting speech, sound effects, or other audio. Launched on 4 July, the tool uses an AI-powered algorithm to target only the copyrighted music, leaving the rest of the video intact.

Previously, videos flagged for copyrighted audio faced muting or removal. However, YouTube cautions that the tool might only be effective if the song is easy to isolate.

YouTube chief Neal Mohan announced the launch on X, explaining that the company had been testing the tool for some time but struggled to remove copyrighted tracks accurately. The new AI algorithm represents a significant improvement, allowing users to mute all sound or erase the music in their videos. Advancements like this are part of YouTube’s broader efforts to leverage AI technology to enhance user experience and compliance with copyright laws.

In addition to the eraser tool, YouTube is making strides in AI-driven music licensing. The company has been negotiating with major record labels to roll out AI music licensing deals, aiming to use AI to create music and potentially offer AI voice imitations of famous artists. Following the launch of YouTube’s AI tool Dream Track last year, which allowed users to create music with AI-generated voices of well-known singers, YouTube continues to engage with major labels like Sony, Warner, and Universal to expand the use of AI in music creation and licensing.

Supreme Court delays ruling on state laws targeting social media

The US Supreme Court has deferred rulings on the constitutionality of laws from Florida and Texas aimed at regulating social media companies’ content moderation practices. The laws, challenged by industry groups including NetChoice and CCIA, sought to limit platforms like Meta Platforms, Google, and others from moderating content they deem objectionable. While the lower courts reached mixed decisions, blocking Florida’s law and upholding Texas’s, the Supreme Court unanimously held that those rulings had not fully addressed the First Amendment questions and sent the cases back for further review.

Liberal Justice Elena Kagan, writing for the majority, questioned Texas’s law, suggesting it sought to impose state preferences on social media content moderation, which could violate the First Amendment. Central to the debate is whether states can compel platforms to host content against their editorial discretion; the companies argue that such discretion is necessary to manage spam, bullying, extremism, and hate speech. Supporters of the laws argue they protect free speech by preventing censorship of conservative viewpoints, a claim disputed by the Biden administration, which opposes the laws as potentially violating First Amendment protections.

Why does it matter?

At stake are laws that would bar platforms with over 50 million users from censoring based on viewpoint (Texas) and limit the exclusion of content from political candidates or journalistic enterprises (Florida). Additionally, both laws require platforms to explain their content moderation decisions, a requirement some argue burdens free speech rights.

The Supreme Court’s decision not to rule marks another chapter in the ongoing legal battle over digital free speech rights, following earlier decisions regarding officials’ social media interactions and misinformation policies.

The future of humour in advertising with AI

AI is revolutionising the world of advertising, particularly when it comes to humour. Traditionally, humour in advertising depended heavily on human creativity, relying on puns, sarcasm, and funny voices to engage consumers. However, as AI advances, it is increasingly being used to create comedic content.

Neil Heymann, Global Chief Creative Officer at Accenture Song, discussed the integration of AI in humour at the Cannes Lions International Festival of Creativity. He noted that while humour in advertising carries certain risks, the potential rewards far outweigh them. Despite the challenges of maintaining a unique comedic voice in a globalised market, AI offers new opportunities for creativity and personalisation.

One notable example Heymann highlighted was a recent Uber ad in the UK featuring Robert De Niro. He emphasised that while AI might struggle to replicate the nuanced performance of an actor like De Niro, it can still be a valuable tool for generating humour. For instance, a new tool developed by Google Labs can create jokes by exploring various wordplay and puns, expanding the creative options available to writers.

Heymann believes that AI can also help navigate the complexities of global advertising. By acting as an advanced filtering system, AI can identify potential cultural pitfalls and ensure that humorous content resonates with diverse audiences without losing the thrill of creativity.

Moreover, AI’s impact on advertising extends beyond humour. Toys ‘R’ Us recently pioneered text-to-video AI-generated advertising clips, showcasing AI’s ability to revolutionise content creation across various formats. That innovation highlights the expanding role of AI in shaping the future of advertising, where technological advancements continuously redefine creative possibilities.

WikiLeaks founder agrees to plea deal over US classified documents

The founder of WikiLeaks, Julian Assange, has agreed to plead guilty to a single charge of conspiring to acquire and disclose classified US national defence documents, as outlined in court documents filed in the US District Court for the Northern Mariana Islands.

Under the terms of the deal, Assange confessed in a US court, concluding a 14-year legal struggle, and has been granted his freedom. He formally entered a guilty plea to a single offence in the Northern Mariana Islands, a US territory in the Pacific, shortly after his release from a British prison. In exchange, he was given credit for time served and permitted to fly back to Australia to reunite with his family.

US authorities had been pursuing the 52-year-old for a significant disclosure of confidential files in 2010. Prosecutors had initially sought to prosecute the WikiLeaks founder on 18 counts, primarily under the Espionage Act, related to the release of confidential US military records and diplomatic messages concerning the Afghanistan and Iraq wars, which they claimed endangered lives. WikiLeaks had unveiled a video from a US military helicopter showing civilians being killed in Baghdad, Iraq. It also released numerous confidential documents indicating that the US military had caused the deaths of hundreds of civilians in unreported incidents during the Afghanistan war.

WikiLeaks, established by Assange in 2006, has published over 10 million documents. One of Assange’s prominent collaborators, US Army intelligence analyst Chelsea Manning, was sentenced to 35 years in prison before then-President Barack Obama commuted the sentence in 2017.

During the hearing, Assange told the court, ‘As a journalist, I encouraged my source to provide information that was deemed classified to publish that information.’ Assange underscored his belief that he would be shielded by the First Amendment of the US Constitution, safeguarding freedom of the press. Prosecutors alleged that the WikiLeaks founder actively promoted leaks of classified information, asserting that Assange told leakers that ‘top secret means nothing.’ Following the sentencing, Assange’s attorney, Barry Pollack, affirmed that ‘Wikileaks’s work will persist, and Mr Assange, without a doubt, will remain a driving force for freedom of speech and government transparency.’

Geologists voice concerns about potential censorship and bias in Chinese AI chatbot

Geologists are expressing concerns about potential Chinese censorship and bias in GeoGPT, a new AI chatbot backed by the International Union of Geological Sciences (IUGS). Developed under the Deep-time Digital Earth (DDE) program, which is heavily funded by China, GeoGPT aims to assist geoscientists, particularly in developing countries, by providing access to extensive geological data. However, issues around transparency and censorship have been highlighted by experts, raising questions about the chatbot’s reliability.

Critics like Prof. Paul Cleverley have pointed out potential censorship and lack of transparency in GeoGPT’s responses. Although DDE representatives claim that the chatbot’s information is purely geoscientific and free from state influence, tests with its underlying AI, Qwen, developed by Alibaba, suggest that certain sensitive questions may be avoided or answered inadequately. That contrasts with responses from other AI models like ChatGPT, which provide more direct information on similar queries.

Further concerns have been raised about the involvement of Chinese funding and the potential for biased data usage. Geoscientific research, which includes valuable information about natural resources, could be strategically filtered. Additionally, GeoGPT’s terms of use prohibit generating content that undermines national security or incites subversion, in line with Chinese law, which may influence the chatbot’s outputs.

The IUGS president, John Ludden, has stated that GeoGPT’s database will be made public once appropriate governance is ensured. However, with the project being predominantly funded by Chinese sources, geoscientists remain sceptical about the impartiality and transparency of GeoGPT’s data and responses.

ByteDance challenges US TikTok ban in court

ByteDance and its subsidiary company TikTok are urging a US court to overturn a law that would ban the popular app in the USA by 19 January. The new legal act, signed by President Biden in April, demands ByteDance divest TikTok’s US assets or face a ban, which the company argues is impractical on technological, commercial, and legal grounds.

ByteDance contends that the law, driven by concerns over potential Chinese access to American data, violates free speech rights and unfairly targets TikTok while, in the company’s words, it ‘ignores many applications with substantial operations in China that collect large amounts of US user data, as well as the many US companies that develop software and employ engineers in China.’ The company argues that the legislation represents a substantial departure from the US tradition of supporting an open internet and sets a dangerous precedent.

The US Court of Appeals for the District of Columbia will hear oral arguments in the case on 16 September, and its decision could shape the future of TikTok in the US. ByteDance says that during lengthy negotiations with the US government, which ended abruptly in August 2022, it proposed various measures to protect US user data, including a ‘kill switch’ allowing the government to suspend TikTok if necessary. The company also made public a 100-plus-page draft national security agreement to protect US TikTok user data and says it has spent more than $2 billion on the effort. Nevertheless, it believes the administration prefers to shut down the app rather than finalise a feasible agreement.

The Justice Department, defending the law, asserts that it addresses national security concerns appropriately. The case follows a similar attempt by former President Trump to ban TikTok, which courts blocked in 2020. This time, the new law would prohibit app stores and internet hosting services from supporting TikTok unless ByteDance divests it.

TikTok’s fate in US to be decided before election

A US appeals court has scheduled oral arguments for 16 September to address legal challenges against a new law requiring ByteDance, the China-based parent company of TikTok, to divest its US assets by 19 January or face a ban. The law, signed by President Joe Biden on 24 April, aims to eliminate Chinese ownership of TikTok due to national security concerns. TikTok, ByteDance, and a group of TikTok creators have filed lawsuits to block the law, arguing that it significantly impacts American life, with 170 million Americans using the app.

The hearing will coincide with the final weeks of the 2024 presidential election, and both parties are seeking a ruling by 6 December to allow for a potential Supreme Court review. The law also prohibits app stores like Apple and Google from offering TikTok and bars internet hosting services from supporting it unless ByteDance divests. Such a measure reflects US lawmakers’ fears that China could use TikTok to access American data or conduct espionage.

Pope Francis to address AI ethics at G7 summit

Pope Francis is set to make history at the upcoming G7 summit in Italy’s Puglia region by becoming the first pope to address the gathering’s discussions on AI. His participation underscores his commitment to ensuring that AI development aligns with human values and serves the common good. The 87-year-old pontiff recognises the potential of AI for positive change but also emphasises the need for careful regulation to prevent its misuse and safeguard against potential risks.

At the heart of the pope’s message is the call for an ethical framework to guide AI development and usage. Through initiatives like the ‘Rome Call for AI Ethics’, the Vatican seeks to promote transparency, inclusion, responsibility, and impartiality in AI endeavours. Notably, major tech companies like Microsoft, IBM, Cisco Systems, and international organisations have endorsed these principles.

During the G7 summit, Pope Francis is expected to advocate for international cooperation in AI regulation. He emphasises the importance of addressing global inequalities in access to technology and mitigating threats like AI-controlled weapons and the spread of misinformation. His presence at the summit signifies a proactive engagement with contemporary issues, reflecting his vision of a Church actively involved in shaping the world’s future.

The pope’s decision to address AI at the G7 summit follows concerns about the rise of ‘deepfake’ technology, exemplified by manipulated images of himself circulating online. He recognises the transformative potential of AI in the 21st century and seeks to ensure its development aligns with human dignity and social justice. Through his participation, Pope Francis aims to contribute to the creation of an ethical and regulatory framework that promotes the responsible use of AI for the benefit of all humanity.

Australia drops legal challenge against Musk’s X over violent video removal

Australia’s cyber safety regulator has decided to drop its legal challenge against Elon Musk-owned X (formerly Twitter) concerning the removal of videos depicting the stabbing of an Assyrian church bishop in Sydney. The decision follows a setback in May when a federal court judge rejected a request to extend a temporary order for X to block the videos, which Australian authorities deemed a terrorist attack.

eSafety Commissioner Julie Inman Grant highlighted the issue of graphic material being accessible online, especially to children, and criticised X’s initial refusal to remove the violent content globally. Grant emphasised the original intent to prevent the footage from going viral, which could incite further violence and harm the community, defending the regulator’s actions despite the legal outcome.

Why does it matter?

The incident, which involved a 16-year-old boy charged with a terrorism offence, also led to a public clash between Musk and Australian officials, including Prime Minister Anthony Albanese. Musk’s criticisms of the regulatory order as censorship sparked controversy, while other major platforms like Meta, TikTok, Reddit, and Telegram complied with removal requests. X had opted to geo-block the content in Australia, a solution deemed ineffective by the regulator due to users employing virtual private networks.