Supreme Court delays ruling on state laws targeting social media

The US Supreme Court has deferred rulings on the constitutionality of laws from Florida and Texas aimed at regulating social media companies’ content moderation practices. The laws, challenged by industry groups including NetChoice and CCIA, sought to restrict how platforms such as Meta and Google moderate content they deem objectionable. While the lower courts had reached mixed decisions, blocking Florida’s law and upholding Texas’, the Supreme Court unanimously held that those rulings had not fully addressed the First Amendment questions and sent both cases back for further review.

Liberal Justice Elena Kagan, writing for the majority, questioned Texas’ law, suggesting it sought to impose state preferences on social media content moderation, which could violate the First Amendment. Central to the debate is whether states can compel platforms to host content against their editorial discretion, which companies argue is necessary to manage spam, bullying, extremism, and hate speech. Supporters argue the laws protect free speech by preventing censorship of conservative viewpoints, a claim disputed by the Biden administration, which opposes the laws as potential violations of First Amendment protections.

Why does it matter?

At stake are laws that would bar platforms with over 50 million users from censoring based on viewpoint (Texas) and limit the exclusion of content from political candidates or journalistic enterprises (Florida). Additionally, these laws require platforms to explain content moderation decisions, a requirement some argue burdens free speech rights.

The Supreme Court’s decision not to rule marks another chapter in the ongoing legal battle over digital free speech rights, following earlier decisions regarding officials’ social media interactions and misinformation policies.

The future of humour in advertising with AI

AI is revolutionising the world of advertising, particularly when it comes to humour. Traditionally, humour in advertising depended heavily on human creativity, relying on puns, sarcasm, and funny voices to engage consumers. However, as AI advances, it is increasingly being used to create comedic content.

Neil Heymann, Global Chief Creative Officer at Accenture Song, discussed the integration of AI in humour at the Cannes Lions International Festival of Creativity. He noted that while humour in advertising carries certain risks, the potential rewards far outweigh them. Despite the challenges of maintaining a unique comedic voice in a globalised market, AI offers new opportunities for creativity and personalisation.

One notable example Heymann highlighted was a recent Uber ad in the UK featuring Robert De Niro. He emphasised that while AI might struggle to replicate the nuanced performance of an actor like De Niro, it can still be a valuable tool for generating humour. For instance, a new tool developed by Google Labs can create jokes by exploring various wordplay and puns, expanding the creative options available to writers.

Heymann believes that AI can also help navigate the complexities of global advertising. By acting as an advanced filtering system, AI can identify potential cultural pitfalls and ensure that humorous content resonates with diverse audiences without losing the thrill of creativity.

Moreover, AI’s impact on advertising extends beyond humour. Toys ‘R’ Us recently pioneered text-to-video AI-generated advertising clips, showcasing AI’s ability to revolutionise content creation across various formats. That innovation highlights the expanding role of AI in shaping the future of advertising, where technological advancements continuously redefine creative possibilities.

WikiLeaks founder agrees to plea deal over US classified documents

The founder of WikiLeaks, Julian Assange, has agreed to plead guilty to a single charge of conspiring to acquire and disclose classified US national defence documents, as outlined in court documents filed in the US District Court for the Northern Mariana Islands.

Under the terms of the deal, Assange pleaded guilty in a US court, concluding a 14-year legal struggle, and was granted his freedom. He formally entered a plea to a single offence in the Northern Mariana Islands, a US territory in the Pacific, shortly after his release from a British prison. In exchange, he was given credit for time served and permitted to fly back to Australia to reunite with his family.

US authorities had been pursuing the 52-year-old over a significant disclosure of confidential files in 2010. Prosecutors had initially charged the WikiLeaks founder with 18 counts, primarily under the Espionage Act, related to the release of confidential US military records and diplomatic messages concerning the Afghanistan and Iraq wars, which they claimed endangered lives. WikiLeaks had published a video from a US military helicopter showing civilians being killed in Baghdad, Iraq. It also released numerous confidential documents indicating that the US military had caused the deaths of hundreds of civilians in unreported incidents during the Afghanistan war.

WikiLeaks, established by Assange in 2006, has published over 10 million documents. One of Assange’s prominent collaborators, US Army intelligence analyst Chelsea Manning, was sentenced to 35 years in prison before then-President Barack Obama commuted the sentence in 2017.

During the hearing, Assange told the court, ‘As a journalist, I encouraged my source to provide information that was deemed classified to publish that information.’ Assange underscored his belief that he would be shielded by the First Amendment of the US Constitution, safeguarding freedom of the press. Prosecutors alleged that the WikiLeaks founder actively promoted leaks of classified information, asserting that Assange told leakers that ‘top secret means nothing.’ Following the sentencing, Assange’s attorney, Barry Pollack, affirmed that ‘WikiLeaks’s work will persist, and Mr Assange, without a doubt, will remain a driving force for freedom of speech and government transparency.’

Geologists voice concerns about potential censorship and bias in Chinese AI chatbot

Geologists are expressing concerns about potential Chinese censorship and bias in GeoGPT, a new AI chatbot backed by the International Union of Geological Sciences (IUGS). Developed under the Deep-time Digital Earth (DDE) program, which is heavily funded by China, GeoGPT aims to assist geoscientists, particularly in developing countries, by providing access to extensive geological data. However, issues around transparency and censorship have been highlighted by experts, raising questions about the chatbot’s reliability.

Critics like Prof. Paul Cleverley have pointed out potential censorship and lack of transparency in GeoGPT’s responses. Although DDE representatives claim that the chatbot’s information is purely geoscientific and free from state influence, tests with its underlying AI, Qwen, developed by Alibaba, suggest that certain sensitive questions may be avoided or answered inadequately. That contrasts with responses from other AI models like ChatGPT, which provide more direct information on similar queries.

Further concerns have been raised about the involvement of Chinese funding and the potential for biased data usage. Geoscientific research, which includes valuable information about natural resources, could be strategically filtered. Additionally, GeoGPT’s terms of use prohibit generating content that undermines national security or incites subversion, in line with Chinese law, which may influence the chatbot’s outputs.

The IUGS president, John Ludden, has stated that GeoGPT’s database will be made public once appropriate governance is ensured. However, with the project being predominantly funded by Chinese sources, geoscientists remain sceptical about the impartiality and transparency of GeoGPT’s data and responses.

ByteDance challenges US TikTok ban in court

ByteDance and its subsidiary TikTok are urging a US court to overturn a law that would ban the popular app in the USA by 19 January. The law, signed by President Biden in April, requires ByteDance to divest TikTok’s US assets or face a ban, a demand the company argues is impractical on technological, commercial, and legal grounds.

ByteDance contends that the law, driven by concerns over potential Chinese access to American data, violates free speech rights and unfairly targets TikTok while it ‘ignores many applications with substantial operations in China that collect large amounts of US user data, as well as the many US companies that develop software and employ engineers in China.’ The company argues that the legislation represents a substantial departure from the US tradition of supporting an open internet and sets a dangerous precedent.

The US Court of Appeals for the District of Columbia will hear oral arguments in the case on 16 September, and its decision could shape the future of TikTok in the US. ByteDance says that during lengthy negotiations with the US government, which ended abruptly in August 2022, it proposed various measures to protect US user data, including a ‘kill switch’ allowing the government to suspend TikTok if necessary. The company also made public a 100-plus-page draft national security agreement to protect US TikTok user data and claims to have spent more than $2 billion on the effort. However, it believes the administration prefers to shut down the app rather than finalise a feasible agreement.

The Justice Department, defending the law, asserted that it addresses national security concerns appropriately. Moreover, the case follows a similar attempt by former President Trump to ban TikTok, which was blocked by the courts in 2020. This time, the new law would prohibit app stores and internet hosting services from supporting TikTok unless ByteDance divests it.

TikTok’s fate in US to be decided before election

A US appeals court has scheduled oral arguments for 16 September to address legal challenges against a new law requiring ByteDance, the China-based parent company of TikTok, to divest its US assets by 19 January or face a ban. The law, signed by President Joe Biden on 24 April, aims to eliminate Chinese ownership of TikTok due to national security concerns. TikTok, ByteDance, and a group of TikTok creators have filed lawsuits to block the law, arguing that it significantly impacts American life, with 170 million Americans using the app.

The hearing will coincide with the final weeks of the 2024 presidential election, and both parties are seeking a ruling by 6 December to allow for a potential Supreme Court review. The law also prohibits app stores like Apple and Google from offering TikTok and bars internet hosting services from supporting it unless ByteDance divests. Such a measure reflects US lawmakers’ fears that China could use TikTok to access American data or conduct espionage.

Pope Francis to address AI ethics at G7 summit

Pope Francis is set to make history at the upcoming G7 summit in Italy’s Puglia region by becoming the first pope to address the gathering’s discussions on AI. His participation underscores his commitment to ensuring that AI development aligns with human values and serves the common good. The 87-year-old pontiff recognises the potential of AI for positive change but also emphasises the need for careful regulation to prevent its misuse and safeguard against potential risks.

At the heart of the pope’s message is the call for an ethical framework to guide AI development and usage. Through initiatives like the ‘Rome Call for AI Ethics’, the Vatican seeks to promote transparency, inclusion, responsibility, and impartiality in AI endeavours. Notably, major tech companies like Microsoft, IBM, Cisco Systems, and international organisations have endorsed these principles.

During the G7 summit, Pope Francis is expected to advocate for international cooperation in AI regulation. He emphasises the importance of addressing global inequalities in access to technology and mitigating threats like AI-controlled weapons and the spread of misinformation. His presence at the summit signifies a proactive engagement with contemporary issues, reflecting his vision of a Church actively involved in shaping the world’s future.

The pope’s decision to address AI at the G7 summit follows concerns about the rise of ‘deepfake’ technology, exemplified by manipulated images of him that circulated online. He recognises the transformative potential of AI in the 21st century and seeks to ensure its development aligns with human dignity and social justice. Through his participation, Pope Francis aims to contribute to the creation of an ethical and regulatory framework that promotes the responsible use of AI for the benefit of all humanity.

Australia drops legal challenge against Musk’s X over violent video removal

Australia’s cyber safety regulator has decided to drop its legal challenge against Elon Musk-owned X (formerly Twitter) over the removal of videos depicting the stabbing of an Assyrian church bishop in Sydney, an attack Australian authorities deemed an act of terrorism. The decision follows a setback in May, when a federal court judge rejected a request to extend a temporary order requiring X to block the videos.

eSafety Commissioner Julie Inman Grant highlighted the issue of graphic material being accessible online, especially to children, and criticised X’s initial refusal to remove the violent content globally. Grant emphasised the original intent to prevent the footage from going viral, which could incite further violence and harm the community, defending the regulator’s actions despite the legal outcome.

Why does it matter?

The incident, which involved a 16-year-old boy charged with a terrorism offence, also led to a public clash between Musk and Australian officials, including Prime Minister Anthony Albanese. Musk’s criticisms of the regulatory order as censorship sparked controversy, while other major platforms like Meta, TikTok, Reddit, and Telegram complied with removal requests. X had opted to geo-block the content in Australia, a solution deemed ineffective by the regulator due to users employing virtual private networks.

Former Meta engineer sues over Gaza post suppression

A former Meta engineer has accused the company of bias in its handling of Gaza-related content, alleging he was fired for addressing bugs that suppressed Palestinian Instagram posts. Ferras Hamad, a Palestinian-American who worked on Meta’s machine learning team, filed a lawsuit in California state court alleging discrimination and wrongful termination. Hamad claims Meta exhibited a pattern of bias against Palestinians, including deleting internal communications about the deaths of Palestinian relatives and investigating employees’ use of the Palestinian flag emoji while not probing similar uses of the Israeli or Ukrainian flag emojis.

Why does it matter?

The lawsuit reflects long-standing criticism by human rights groups of Meta’s content moderation regarding Israel and the Palestinian territories. These concerns were amplified by the conflict that erupted in Gaza after Hamas’s attack on Israel and Israel’s subsequent offensive.

Hamad’s firing, he asserts, was linked to his efforts to fix issues that restricted Palestinian Instagram posts from appearing in searches and feeds, including a misclassified video by a Palestinian photojournalist.

Despite his manager confirming the task was part of his duties, Hamad was later investigated and fired, allegedly for violating a policy on working with accounts of people he knew personally, which he denies.

Human rights groups protest Meta’s alleged censorship of pro-Palestinian content

Meta’s annual shareholder meeting on Wednesday sparked online protests from human rights groups, calling for an end to what they describe as systemic censorship of pro-Palestinian content on the company’s platforms and within its workforce. Nearly 200 Meta employees have recently urged CEO Mark Zuckerberg to address alleged internal censorship and biases on public platforms, advocating for greater transparency and an immediate ceasefire in Gaza.

Activists argue that after years of pressing Meta and other platforms for fairer content moderation, shareholders might exert more influence on the company than public pressure alone. Nadim Nashif, founder of the social media watchdog group 7amleh, highlighted that despite a decade of advocacy, the situation has deteriorated, necessitating new strategies like shareholder engagement to spur change.

Earlier this month, a public statement from Meta employees followed a 2023 internal petition with over 450 signatures, whose author was investigated by HR for allegedly violating company rules. The latest letter condemns Meta’s actions as creating a ‘hostile and unsafe work environment’ for Palestinian, Arab, Muslim, and ‘anti-genocide’ colleagues, with many employees claiming censorship and dismissiveness from leadership.

During the shareholder meeting, Meta focused on its AI projects and managing disinformation, sidestepping the issue of Palestinian content moderation. Despite external audit findings and a letter from US Senator Elizabeth Warren criticising Meta’s handling of pro-Palestinian content, the company did not immediately address the circulating letters and petitions.