US Supreme Court declines Snapchat case

The US Supreme Court decided not to review a case involving a Texas teenager who sued Snapchat, alleging the platform did not adequately protect him from sexual abuse by a teacher. The minor, known as Doe, accused Snap Inc. of negligence for failing to safeguard young users from sexual predators, particularly a teacher who exploited him via the app. Bonnie Guess-Mazock, the teacher involved, was convicted of sexually assaulting the teenager.

Lower courts dismissed the lawsuit, citing Section 230 of the Communications Decency Act, which shields internet companies from liability for content posted by users. With the Supreme Court declining to hear the case, Snapchat retains its protection under this law. Justices Clarence Thomas and Neil Gorsuch expressed concerns about the broad immunity granted to social media platforms under Section 230.

Why does this matter?

The case has sparked wider debate about the responsibilities of tech companies in preventing such abuses and whether laws like Section 230 should be revised to hold them more accountable for content on their platforms. Both US political parties have called for reforms to ensure internet companies can be held liable when their platforms are used for harmful activities.

Meta lifts ban on ‘Shaheed’ after review

Meta Platforms, the parent company of Facebook and Instagram, announced it would lift its blanket ban on the word ‘shaheed’ (which means ‘martyr’ in English). The move follows a year-long review by Meta’s independent oversight board, which concluded that the company’s approach to the word was overly broad.

Meta has faced strong criticism for its content policies, particularly regarding the Middle East. A 2021 study commissioned by Meta Platforms highlighted their adverse human rights impact on Palestinians and other Arabic-speaking users. Criticism intensified with the escalation of hostilities between Israel and Hamas in October 2023.

Why does it matter?

The review revealed that Meta’s policy on the word ‘shaheed’ did not consider its various meanings, often resulting in the removal of non-violent content. Meta acknowledged these findings and adjusted its approach, removing content only when ‘shaheed’ is paired with otherwise violating content. The oversight board welcomed this change, noting that the previous policy had led to widespread censorship across Meta’s platforms.

AI bot campaign faces setback in Wyoming

In a unique twist on political campaigning, a Wyoming man named Victor Miller has entered the mayoral race in Cheyenne with an AI bot called ‘VIC.’ Miller, who works at a Laramie County library, sees VIC as a revolutionary tool for improving government transparency and accountability. However, just before a scheduled interview with Fox News Digital, Miller faced a significant setback when OpenAI closed his account, jeopardising his campaign.

Despite this challenge, Miller remains determined to continue promoting VIC, hoping to demonstrate its potential at a public event in Laramie County. He believes that AI technology can streamline government processes and reduce human error, although he is now contemplating whether to declare his reliance on VIC formally. The decision comes as he navigates the restrictions imposed by OpenAI, which cited policy violations related to political campaigning.

Miller’s vision extends beyond his mayoral bid. He has called for support from prominent figures in the AI industry, like Elon Musk, to develop an open-source model that ensures equal access to this emerging technology. His campaign underscores a broader debate about open versus closed AI models, emphasising the need for transparency and fairness in technological advancements.

Wyoming’s legal framework, however, presents additional hurdles. State officials have indicated that candidates must be real persons and use their full names on the ballot. The issue complicates VIC’s candidacy, as the AI bot cannot meet these requirements. Nevertheless, Miller’s innovative approach has sparked conversations about the future role of AI in governance, with similar initiatives emerging globally.

YouTube implements rules for removing AI-generated mimicking videos

YouTube has implemented new privacy guidelines allowing individuals to request the removal of AI-generated videos that imitate them. Initially promised in November 2023, these rules are now officially in effect, as confirmed by a recent update to YouTube’s privacy policies.

According to the updated guidelines, users can request the removal of content that realistically depicts a synthetic version of themselves, created or altered using AI. YouTube will evaluate such requests against several criteria, including whether the content has been altered or synthetically generated, whether it is disclosed as synthetic, whether the person is identifiable, how realistic the depiction is, and whether it serves a public interest such as parody or satire. Human moderators will handle complaints, and if a request is upheld, the uploader must either delete the video within 48 hours or edit out the problematic parts.

These guidelines aim to protect individuals from potentially harmful content like deepfakes, which can easily mislead viewers. They are particularly relevant in upcoming elections in countries such as France, the UK, and the US, where misusing AI-generated videos could impact political discourse.

Google requires disclosure for election ads with altered content

To combat misinformation during elections, Google announced that it will require advertisers to disclose election ads that use digitally altered content depicting real or realistic-looking people or events. This latest update to Google’s political content policy requires advertisers to select a checkbox for ‘altered or synthetic content’ within their campaign settings.

The proliferation of generative AI, capable of rapidly creating text, images, and video, has sparked concerns over potential misuse. Deepfakes, which convincingly manipulate content to misrepresent individuals, have further blurred the distinction between fact and fiction in digital media.

To implement these changes, Google will automatically generate an in-ad disclosure for feeds and Shorts on mobile devices and for in-stream ads on computers and televisions. For other ad formats, advertisers must provide a prominent disclosure that is clearly visible to users. According to Google, the exact wording of these disclosures will vary based on the context of each advertisement.

Why does it matter?

Earlier this year, during India’s general election, fake videos featuring Bollywood actors surfaced online, criticising Prime Minister Narendra Modi and urging support for the opposition Congress party. The incident highlighted the growing challenge of combating deceptive content amplified by AI-generated media.

In a related effort, OpenAI, led by Sam Altman, reported disrupting five covert influence operations in May that aimed to manipulate public opinion using AI models across various online platforms. Meta Platforms had previously committed to similar transparency measures, requiring advertisers on Facebook and Instagram to disclose the use of AI or digital tools in creating political, social, or election-related ads.

Supreme Court delays ruling on state laws targeting social media

The US Supreme Court has deferred ruling on the constitutionality of Florida and Texas laws aimed at regulating social media companies’ content moderation practices. The laws, challenged by industry groups including NetChoice and CCIA, sought to limit the ability of platforms such as Meta Platforms and Google to moderate content they deem objectionable. The lower courts had reached mixed decisions, blocking Florida’s law and upholding Texas’, but the Supreme Court unanimously found that those rulings did not fully address the First Amendment questions and sent the cases back for further review.

Liberal Justice Elena Kagan, writing for the majority, questioned Texas’ law, suggesting it sought to impose the state’s preferences on social media content moderation, which could violate the First Amendment. Central to the debate is whether states can compel platforms to host content against their editorial discretion, discretion the companies argue is necessary to manage spam, bullying, extremism, and hate speech. Supporters argue these laws protect free speech by preventing censorship of conservative viewpoints, a claim disputed by the Biden administration, which opposes the laws as potentially violating First Amendment protections.

Why does it matter?

At stake are laws that would bar platforms with over 50 million users from censoring content based on viewpoint (Texas) and limit the exclusion of content from political candidates or journalistic enterprises (Florida). Additionally, these laws require platforms to explain content moderation decisions, a requirement some argue burdens free speech rights.

The Supreme Court’s decision not to rule marks another chapter in the ongoing legal battle over digital free speech rights, following earlier decisions regarding officials’ social media interactions and misinformation policies.

Australia asks internet companies to help regulate online access for minors

Australia has given internet-related companies six months to develop enforceable codes aimed at preventing children from accessing pornography and other inappropriate content online, the eSafety Commission announced on Tuesday. To outline its expectations, the commission presented a policy paper to guide the industry codes. Preliminary drafts of the codes are due by 3 October 2024, and final codes must be submitted by 19 December 2024.

These codes will complement current government drives towards online content policy and safety. “We […] need industry to play their part by putting in some effective barriers to protect children,” said eSafety Commissioner Inman Grant. The companies concerned range from dating apps and social media platforms to search engines and games.

The codes will centre primarily on pornography, but will also cover content relating to suicide, self-harm, and eating disorders. Potential measures to protect children from explicit content include age verification systems, default parental controls, and software to blur or filter inappropriate material.

The regulator specified that this will not completely block access. “We know kids will always be curious and will likely seek out porn as they enter adolescence and explore their sexuality, so, many of these measures are really focused on preventing unintentional exposure to young children.” Australia has previously decided against mandating age verification for pornographic or adult content websites.

Australia already has a number of online safety codes, many of them developed through similar consultations with NGOs and civil society actors. Spokespeople for Google and Meta have said they will continue to engage with the commissioner on the development of the new regulation and safety codes.

New Zealand pushes bill for tech platforms to pay for news

New Zealand’s conservative coalition government plans to introduce a bill mandating digital technology platforms to pay media companies for news. The Fair Digital News Bargaining Bill, initially proposed by the previous Labour government, aims to support local media companies in generating revenue from the news they produce. Communications Minister Paul Goldsmith stated that the bill would be amended to align more closely with Australia’s similar digital bargaining law, which forces internet firms like Meta and Google to negotiate content supply deals with media outlets.

Meta criticised the bill, arguing that it overlooked how its platforms function and the value they provide to news outlets; Google did not immediately comment. The proposed legislation would grant the communications minister the power to decide which digital platforms are subject to the law, with an independent regulator overseeing its enforcement.

While the right-wing ACT New Zealand party does not support the bill, the opposition Labour Party has expressed conditional support, pending a review of the amendments. Labour spokesperson Willie Jackson voiced relief that the government is progressing with legislation to create a fairer media landscape for news companies operating online.

AI revolutionises academic writing, prompting debate over quality and bias

In a groundbreaking shift for the academic world, AI now contributes to at least 10% of research papers, soaring to 20% in computer science, according to The Economist. This transformation is driven by advancements in large language models (LLMs), as highlighted in a University of Tübingen study comparing recent papers with those from the pre-ChatGPT era. The research shows a notable change in word usage, with terms like ‘delivers,’ ‘potential,’ ‘intricate,’ and ‘crucial’ becoming more common, while ‘important’ declines in use.

Chart: statistics of the words used in AI-generated research papers
Source: The Economist
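
For illustration only, here is a minimal Python sketch of the kind of word-frequency comparison such a study relies on; the function names and placeholder abstracts are hypothetical, not the Tübingen team’s actual code or data.

```python
# Minimal sketch: compare how often tracked words appear in pre- and post-ChatGPT
# paper abstracts. Placeholder data and function names are hypothetical.
import re
from collections import Counter

def relative_frequencies(texts):
    """Relative frequency of each lowercase word token across a list of texts."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    total = sum(counts.values()) or 1
    return {word: n / total for word, n in counts.items()}

def frequency_ratios(pre_texts, post_texts, tracked_words):
    """Ratio of post-era to pre-era relative frequency (>1 means the word became more common)."""
    pre = relative_frequencies(pre_texts)
    post = relative_frequencies(post_texts)
    return {w: post.get(w, 0.0) / max(pre.get(w, 0.0), 1e-9) for w in tracked_words}

# Hypothetical placeholder abstracts standing in for real corpora.
pre_chatgpt = ["This important result shows that the method performs well on the benchmark."]
post_chatgpt = ["This study delivers crucial insights into the intricate potential of the method."]

print(frequency_ratios(pre_chatgpt, post_chatgpt,
                       ["delivers", "potential", "intricate", "crucial", "important"]))
```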

Researchers are leveraging LLMs for editing, translating, simplifying coding, streamlining administrative tasks, and accelerating manuscript drafting. However, this integration raises concerns. LLMs may reinforce existing viewpoints and frequently cite prominent articles, potentially leading to an inflation of publications and a dilution of research quality. This risks perpetuating bias and narrowing academic diversity.

As the academic community grapples with these changes, scientific journals are seeking ways to address the challenges posed by increasingly sophisticated AI. Trying to detect and prevent the use of AI is increasingly futile. Other approaches to upholding research quality are being discussed, including investing in a more rigorous peer-review process, insisting on the replication of experiments, and hiring academics based on the quality of their work rather than the quantity of their publications.

Recognising the inevitability of AI’s role in academic writing, Diplo introduced the KaiZen publishing approach, which combines just-in-time updates facilitated by AI with reflective writing crafted by humans, aiming to harmonise the strengths of AI and human intellect in producing scholarly work.

As AI continues to revolutionise academic writing, the landscape of research and publication is poised for further evolution, prompting ongoing debates and the search for balanced solutions.

Meta faces EU charges on user privacy tech rules

EU antitrust regulators have charged Meta Platforms with violating landmark tech rules through its new ‘pay or consent’ advertising model for Facebook and Instagram. The model, introduced last November, offers users a choice between a free, ad-supported service with tracking and a paid, ad-free service. The European Commission argues this binary choice breaches the Digital Markets Act (DMA) by forcing users to consent to data tracking without providing a less personalised but equivalent alternative.

Meta asserts that its model complies with a ruling from the EU’s top court and is aligned with the DMA, and has expressed a willingness to engage with the Commission to resolve the issue. However, if found to be in breach, Meta could face fines of up to 10% of its global annual turnover. The Commission aims to conclude its investigation by March next year.

The charge follows a recent DMA-related charge against Apple for similar non-compliance, highlighting the EU’s efforts to regulate Big Tech and empower users to control their data.