Meta faces legal challenge on Instagram’s impact on teenagers

Meta Platforms is facing a lawsuit in Massachusetts for allegedly designing Instagram features to exploit teenagers’ vulnerabilities, causing addiction and harming their mental health. A Suffolk County judge rejected Meta’s attempt to dismiss the case, ruling that the state’s claims under consumer protection law remain valid.

The company argued for immunity under Section 230 of the Communications Decency Act, which shields internet firms from liability for user-generated content. However, the judge ruled that this protection does not extend to Meta’s own business conduct or misleading statements about Instagram’s safety measures.

Massachusetts Attorney General Andrea Joy Campbell emphasised that the ruling allows the state to push for accountability and meaningful changes to safeguard young users. Meta expressed disagreement, maintaining that its efforts demonstrate a commitment to supporting young people.

The lawsuit highlights internal data suggesting Instagram’s addictive design, driven by features like push notifications and endless scrolling. It also claims Meta executives, including CEO Mark Zuckerberg, dismissed concerns raised by research indicating the need for changes to improve teenage users’ well-being.

Meta reintroduces facial recognition for celebrity scam protection

Meta, the parent company of Facebook, is testing facial recognition technology again, three years after halting its use over privacy concerns. This time, the company is focusing on combating ‘celeb bait’ scams, which use public figures’ images in fraudulent advertisements. Meta plans to enrol around 50,000 celebrities in a trial program that will automatically compare their profile photos with those in suspicious ads. If the system detects a match, Meta will block the ad and notify the celebrities, who can opt out of the program.

The trial, which will begin globally in December, excludes regions where regulatory clearance has yet to be obtained, including Britain, the European Union, South Korea, and certain US states such as Texas and Illinois. Meta’s vice president of content policy, Monika Bickert, explained that the program protects celebrities from being exploited in scam ads, a growing problem on social media platforms, while keeping participation entirely voluntary.

The initiative comes as Meta tries to address rising scam concerns without reviving past criticism over user data privacy. In 2021, Meta shut down its previous facial recognition system and deleted the face-scan data of a billion users, citing growing concerns over biometric data use. Earlier this year, the company agreed to pay $1.4 billion in Texas over allegations that it collected biometric data illegally.

In addition to targeting scam ads, Meta is also considering using facial recognition to help everyday users regain access to their accounts, for example when they have been hacked or have forgotten their passwords. Meta emphasises that all facial data generated by the new system will be deleted immediately after use, whether or not a scam is detected. The tool underwent extensive internal and external privacy reviews before being implemented.

Ride-hailing app Yango suspended in Togo over safety concerns

Togo’s transport ministry has suspended the operations of Yango, a ride-hailing app owned by the Russian tech giant Yandex, citing security concerns. The app had been operating in the West African nation since June, but the ministry said Yango was functioning without proper authorisation and in violation of national regulations.

The decision to suspend Yango was driven by concerns over passenger safety, as well as the app’s failure to adhere to the country’s legal procedures. The ministry emphasised the need to ensure that transportation services in Togo operate in compliance with local laws.

Effective immediately, Yango’s services have been halted across the entire national territory. The company has not yet commented on the suspension or provided any response to requests for information.

Yango, which had only recently entered the Togolese market, now faces an indefinite pause in operations as the government prioritises safety and regulatory compliance for ride-hailing services.

ByteDance fires intern for disrupting AI training

ByteDance, the parent company of TikTok, has dismissed an intern for what it described as “maliciously interfering” with the training of one of its AI models. The Chinese tech giant said the intern, who was part of the advertising technology team rather than ByteDance’s AI Lab, caused an incident whose impact has been exaggerated in some reports circulating on social media and other platforms.

ByteDance stated that the interference did not disrupt its commercial operations or its large language AI models. It also denied claims that the damage exceeded $10 million or affected an AI training system powered by thousands of graphics processing units (GPUs). The company highlighted that the intern was fired in August, and it has since notified their university and relevant industry bodies.

As one of the leading tech firms in AI development, ByteDance operates popular platforms like TikTok and Douyin. The company continues to invest heavily in AI, with applications including its Doubao chatbot and a text-to-video tool named Jimeng.

London-based company faces scrutiny for AI models misused in propaganda campaigns

A London-based company, Synthesia, known for its lifelike AI video technology, is under scrutiny after its avatars were used in deepfake videos promoting authoritarian regimes. These AI-generated videos, featuring models such as Mark Torres and Connor Yeates, falsely showed their likenesses endorsing the military leader of Burkina Faso, causing distress to those involved. Despite the company’s claims of strengthened content moderation, many affected models were unaware their images had been misused until journalists informed them.

In 2022, actors like Torres and Yeates were hired to participate in Synthesia’s AI model shoots for corporate projects. They later discovered their avatars had been used in political propaganda, which they had not consented to. This caused emotional distress, as they feared personal and professional damage from the fake videos. Despite Synthesia’s efforts to ban accounts using its technology for such purposes, the harmful content spread online, including on platforms like Facebook.

UK-based Synthesia has expressed regret, stating it will continue to improve its processes. However, the long-term impact on the actors remains, with some questioning the lack of safeguards in the AI industry and warning of the dangers involved when likenesses are handed over to companies without adequate protections.

Meta’s oversight board seeks public input on immigration posts

Meta’s Oversight Board has opened a public consultation, prompted by two controversial Facebook cases, on immigration-related content that may harm immigrants. The board, which operates independently but is funded by Meta, will assess whether the company’s policies sufficiently protect refugees, migrants, immigrants, and asylum seekers from severe hate speech.

The first case concerns a Facebook post made in May by a Polish far-right coalition, which used a racially offensive term. Despite the post accumulating over 150,000 views, 400 shares, and receiving 15 hate speech reports from users, Meta chose to keep it up following a human review. The second case involves a June post from a German Facebook page that included an image expressing hostility toward immigrants. Meta also upheld its decision to leave this post online after review.

Following the Oversight Board’s intervention, Meta’s experts reviewed both cases again but upheld the initial decisions. Helle Thorning-Schmidt, co-chair of the board, stated that these cases are critical in determining if Meta’s policies are effective and sufficient in addressing harmful content on its platform.

Rebranded World Network boosts iris-scanning crypto push with new devices

Sam Altman’s cryptocurrency project, Worldcoin, has rebranded as World Network and is intensifying its efforts to scan irises worldwide using its “orb” devices. The project’s core feature, World ID, acts as a digital passport to verify individuals as real humans, helping to distinguish them from AI bots online. At an event in San Francisco, World Network revealed an updated version of its orb device, boasting 5G connectivity and enhanced privacy features, alongside new initiatives to improve access to the technology.

Despite signing up over 6.9 million people since its launch in July 2023, the project has faced criticism from privacy advocates regarding the collection and storage of personal data. Several countries, including Spain and Portugal, have temporarily banned the use of the orb devices, while Argentina and Britain are currently reviewing the project.

US military explores deepfake use

The United States’ Special Operations Command (SOCOM) is pursuing the development of sophisticated deepfake technology to create virtual personas indistinguishable from real humans, according to a procurement document from the Department of Defense’s Joint Special Operations Command (JSOC).

These artificial avatars would operate on social media and online platforms, featuring realistic expressions and high-quality images akin to government IDs. JSOC also seeks technologies to produce convincing facial and background videos, including ‘selfie videos’, to avoid detection by social media algorithms.

US state agencies have previously announced frameworks to combat foreign information manipulation, citing national security threats from these technologies. Despite recognising the global dangers posed by deepfakes, SOCOM’s initiative underscores a willingness to engage with the technology for potential military advantage.

Experts have expressed concern over the ethical implications and the potential for increased misinformation, warning that deepfakes are inherently deceptive, with no legitimate application beyond deceit, and that their adoption could encourage further misuse worldwide. Such practices also risk eroding public trust in government communications, a risk exacerbated by the perceived hypocrisy of deploying the technology.

Why does it matter?

This plan reflects an ongoing interest in leveraging digital manipulation for military purposes, despite previous incidents where platforms like Meta dismantled similar US-linked networks. It further shows a contradiction in the US’s stance on deepfake use, as it simultaneously condemns similar actions by countries like Russia and China.

ChatGPT for Windows launches with restrictions

OpenAI has released the ChatGPT app for Windows, which is now available via the Microsoft Store. Like the Mac version launched earlier this year, it offers quick access to the AI-powered chatbot, allowing users to integrate AI into their daily activities.

The app is in an early release stage and limited to paid users, including Plus, Team, Enterprise, and Edu subscribers. A broader rollout to free-tier users is expected within the next few weeks or months. Some Mac and web version features are not yet included but are planned for future updates.

Key features missing from the Windows version include advanced voice mode, integration with Google Drive and Microsoft OneDrive, and external authentication through GPT Builder. Users can, however, still upload files and photos and analyse them with the newly introduced o1 model.

ChatGPT offers convenience features such as the ‘Alt + Space’ shortcut, which allows users to bring the app into focus while multitasking. It remembers where it was placed on the screen during a session, though it resets to the centre when reopened.

X redirects users’ lawsuits to conservative Texas courts

X (formerly Twitter) has updated its terms of service to require users to file any lawsuits against the company in the US District Court for the Northern District of Texas, a venue known for conservative rulings. The change, effective November 15, appears to align with Elon Musk’s increasing support for conservative causes, including backing Donald Trump’s 2024 presidential campaign. Critics argue the move amounts to ‘judge-shopping’, as the Northern District has become a popular destination for right-leaning litigants seeking to block parts of President Biden’s agenda.

X’s headquarters are in Bastrop, Texas, in the Western District, but the company has chosen the Northern District for its legal disputes. That district already hosts two lawsuits filed by X, including one against Media Matters after the watchdog group published a report linking ads on the platform to posts promoting Nazism. Steering legal cases to this specific court highlights the company’s efforts to benefit from a legal environment more favourable to conservative causes.