Google DeepMind’s AI may ease culture war tensions, say researchers

A new AI tool created by Google DeepMind, called the ‘Habermas Machine,’ could help reduce culture war divides by mediating between different viewpoints. The system takes individual opinions and generates group statements that reflect both majority and minority perspectives, aiming to foster greater agreement.
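The article describes the system only at a high level, but the basic mediation loop can be illustrated. The Python sketch below is purely hypothetical (it is not DeepMind’s implementation): it assumes a generic `generate(prompt)` text-generation backend and shows how individual opinions, and optionally participants’ critiques of an earlier draft, might be folded into a prompt asking for a group statement that reflects both majority and minority views.

```python
# Illustrative sketch only (not DeepMind's code): draft a group statement from
# individual opinions, then optionally revise it in light of critiques.

def generate(prompt: str) -> str:
    """Placeholder for any text-generation backend (assumed, not a real API)."""
    raise NotImplementedError

def mediate(opinions: list[str], critiques: list[str] | None = None) -> str:
    """Draft (or redraft) a statement reflecting majority and minority views."""
    prompt = (
        "Write a single group statement that captures the majority view "
        "while explicitly acknowledging minority positions.\n\nOpinions:\n"
    )
    prompt += "\n".join(f"- {o}" for o in opinions)
    if critiques:
        prompt += "\n\nRevise the statement to address these critiques:\n"
        prompt += "\n".join(f"- {c}" for c in critiques)
    return generate(prompt)

# Typical flow: draft a statement, gather participants' critiques, redraft.
# statement = mediate(opinions)
# statement = mediate(opinions, critiques=participant_critiques)
```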

Developed by researchers including Professor Chris Summerfield of the University of Oxford, the system has been tested in the United Kingdom with more than 5,000 participants. The AI-generated statements were often rated higher in clarity and quality than those written by human mediators, and they increased group agreement by eight percentage points on average.

The Habermas Machine was also used in a virtual citizens’ assembly on topics such as Brexit and universal childcare. It was able to produce group statements that acknowledged minority views without marginalising them, but the AI approach does have its critics.

Some researchers argue that AI-mediated discussions don’t always promote empathy or give smaller minorities enough influence in shaping the final statements. Despite these concerns, the potential for AI to assist in resolving social disagreements remains a promising development.

Meta faces legal challenge on Instagram’s impact on teenagers

Meta Platforms is facing a lawsuit in Massachusetts for allegedly designing Instagram features to exploit teenagers’ vulnerabilities, causing addiction and harming their mental health. A Suffolk County judge rejected Meta’s attempt to dismiss the case, finding that the state’s claims under its consumer protection law can proceed.

The company argued for immunity under Section 230 of the Communications Decency Act, which shields internet firms from liability for user-generated content. However, the judge ruled that this protection does not extend to Meta’s own business conduct or misleading statements about Instagram’s safety measures.

Massachusetts Attorney General Andrea Joy Campbell emphasised that the ruling allows the state to push for accountability and meaningful changes to safeguard young users. Meta expressed disagreement, maintaining that its efforts demonstrate a commitment to supporting young people.

The lawsuit cites internal data suggesting that Instagram’s design is addictive, driven by features such as push notifications and endless scrolling. It also claims that Meta executives, including CEO Mark Zuckerberg, dismissed research indicating that changes were needed to improve teenage users’ well-being.

ByteDance fires intern for disrupting AI training

ByteDance, the parent company of TikTok, has dismissed an intern for what it described as “maliciously interfering” with the training of one of its AI models. The Chinese tech giant said the intern worked in the advertising technology team and had no experience with ByteDance’s AI Lab, and that some reports circulating on social media and other platforms have exaggerated the incident’s impact.

ByteDance stated that the interference did not disrupt its commercial operations or its large language AI models. It also denied claims that the damage exceeded $10 million or affected an AI training system powered by thousands of graphics processing units (GPUs). The company highlighted that the intern was fired in August, and it has since notified their university and relevant industry bodies.

As one of the leading tech firms in AI development, ByteDance operates popular platforms like TikTok and Douyin. The company continues to invest heavily in AI, with applications including its Doubao chatbot and a text-to-video tool named Jimeng.

Massachusetts parents sue school over AI use dispute

The parents of a Massachusetts high school senior are suing Hingham High School and its district after their son received a “D” grade and detention for using AI in a social studies project. Jennifer and Dale Harris, the plaintiffs, argue that their son was unfairly punished, as there was no rule in the school’s handbook prohibiting AI use at the time. They claim the grade has impacted his eligibility for the National Honor Society and his applications to top-tier universities like Stanford and MIT.

The lawsuit, filed in Plymouth County District Court, alleges the school’s actions could cause “irreparable harm” to the student’s academic future. Jennifer Harris stated that their son’s use of AI should not be considered cheating, arguing that AI-generated content belongs to the creator. The school, however, classified it as plagiarism. The family’s lawyer, Peter Farrell, contends that there is extensive information supporting their view that using AI is not plagiarism.

The Harrises are seeking to have their son’s grade changed and his academic record cleared. They emphasised that while they can’t reverse past punishments like detention, the school can still adjust his grade and confirm that he did not cheat. Hingham Public Schools has not commented on the ongoing litigation.

London-based company faces scrutiny over AI avatars misused in propaganda campaigns

A London-based company, Synthesia, known for its lifelike AI video technology, is under scrutiny after its avatars were used in deepfake videos promoting authoritarian regimes. These AI-generated videos, featuring people such as Mark Torres and Connor Yeates, falsely showed their likenesses endorsing the military leader of Burkina Faso, causing distress to the models involved. Despite the company’s claims of strengthened content moderation, many affected models were unaware of the misuse of their images until journalists informed them.

In 2022, actors like Torres and Yeates were hired to participate in Synthesia’s AI model shoots for corporate projects. They later discovered their avatars had been used in political propaganda, which they had not consented to. This caused emotional distress, as they feared personal and professional damage from the fake videos. Despite Synthesia’s efforts to ban accounts using its technology for such purposes, the harmful content spread online, including on platforms like Facebook.

UK-based Synthesia has expressed regret, stating it will continue to improve its processes. However, the long-term impact on the actors remains, with some questioning the lack of safeguards in the AI industry and warning of the dangers involved when likenesses are handed over to companies without adequate protections.

Hiya unveils new tool to detect AI deepfake voices

Hiya, a US-based company specialising in fraud and spam detection, has introduced a new Chrome browser extension to identify AI-generated deepfake voices. The tool offers free access to anyone concerned about the growing risk of voice manipulation.

The Deepfake Voice Detector analyses video and audio streams, sampling as little as one second of audio to determine whether a voice is genuine or artificially generated. Hiya’s technology relies on AI algorithms it integrated following its acquisition of Loccus.ai in July.
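To illustrate the one-second sampling described above, here is a minimal, purely hypothetical sketch. It assumes a 16 kHz mono stream and a placeholder `score_window` classifier (Hiya’s actual detector, built on the Loccus.ai technology, is proprietary); it simply splits the audio into one-second windows and flags each one as genuine or AI-generated.

```python
# Hypothetical illustration only (not Hiya's code): flag one-second audio
# windows as genuine or AI-generated.

SAMPLE_RATE = 16_000        # assumed sample rate (samples per second)
WINDOW = SAMPLE_RATE        # one second of audio per decision

def score_window(samples: list[float]) -> float:
    """Placeholder for a trained deepfake-voice classifier.

    Returns the probability that the window is AI-generated; the real model
    is proprietary, so this stub only shows the interface.
    """
    raise NotImplementedError

def classify_stream(audio: list[float], threshold: float = 0.5) -> list[bool]:
    """Split a mono audio stream into one-second windows and flag each one."""
    flags = []
    for start in range(0, len(audio) - WINDOW + 1, WINDOW):
        prob_fake = score_window(audio[start:start + WINDOW])
        flags.append(prob_fake >= threshold)
    return flags
```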

With deepfakes becoming increasingly difficult to spot, the company aims to help users stay ahead of potential misuse. Hiya president Kush Parikh emphasised the importance of launching the tool ahead of the US elections in November to address the rising threat.

A survey of 2,000 individuals conducted by Hiya revealed that one in four people encountered audio deepfakes between April and July this year. Personal voice calls emerged as the primary risk factor (61%), followed by exposure on platforms like Facebook (22%) and YouTube (17%).

Independent candidates face off against AI avatar

Two independent candidates participated in an online debate on Thursday, engaging with an AI-generated version of incumbent congressman Don Beyer. The digital avatar, dubbed ‘DonBot’, was created from Beyer’s website and other public materials to simulate his responses during the event, which was streamed on YouTube and Rumble.

Beyer, a Democrat seeking re-election, opted not to join the debate in person. His AI representation featured a robotic voice reading answers without imitating his tone. Independent challengers Bentley Hensel and David Kennedy appeared on camera, while the Republican candidate Jerry Torres did not participate. Viewership remained low, peaking at fewer than 20 viewers, and parts of DonBot’s responses were inaudible.

Hensel explained that the AI was programmed to provide unbiased answers using available public information. The debate tackled policy areas such as healthcare, gun control, and aid to Israel. When asked why voters should re-elect Beyer, the AI stated, ‘I believe that I can make a real difference in the lives of the people of Virginia’s 8th district.’

Although the event had minimal impact, observers suggest the use of AI in politics could become more prevalent. Reliance on such technology raises concerns about transparency, especially if no regulations are introduced to guide its use in future elections.

US prosecutors intensify efforts to combat AI-generated child abuse content

US federal prosecutors are ramping up efforts to tackle the use of AI tools in creating child sexual abuse images, as they fear the technology could lead to a rise in illegal content. The Justice Department has already pursued two cases this year against individuals accused of using generative AI to produce explicit images of minors. James Silver, chief of the Department’s Computer Crime and Intellectual Property Section, anticipates more cases, cautioning against the normalisation of AI-generated abuse material.

Child safety advocates and prosecutors worry that AI systems can alter ordinary photos of children to produce abusive content, making it more challenging to identify and protect actual victims. The National Center for Missing and Exploited Children reports approximately 450 cases each month involving AI-generated abuse. While this number is small compared to the millions of online child exploitation reports received, it represents a concerning trend in the misuse of technology.

The legal framework is still evolving regarding cases involving AI-generated abuse, particularly when identifiable children are not depicted. Prosecutors are resorting to obscenity charges when traditional child pornography laws do not apply. This is evident in the case of Steven Anderegg, accused of using Stable Diffusion to create explicit images. Similarly, US Army soldier Seth Herrera faces child pornography charges for allegedly using AI chatbots to alter innocent photos into abusive content. Both defendants have pleaded not guilty.

Nonprofit groups like Thorn and All Tech Is Human are working with major tech companies, including Google, Amazon, Meta, OpenAI, and Stability AI, to prevent AI models from generating abusive content and to monitor their platforms. Thorn’s vice president, Rebecca Portnoff, emphasised that the issue is not just a future risk but a current problem, urging action during this critical period to prevent its escalation.

US military explores deepfake use

The United States Special Operations Command (SOCOM) is pursuing the development of sophisticated deepfake technology to create virtual personas indistinguishable from real humans, according to a procurement document from the Department of Defense’s Joint Special Operations Command (JSOC).

These artificial avatars would operate on social media and online platforms, featuring realistic expressions and high-quality images akin to government IDs. JSOC also seeks technologies to produce convincing facial and background videos, including ‘selfie videos’, to avoid detection by social media algorithms.

US state agencies have previously announced frameworks to combat foreign information manipulation, citing national security threats from these technologies. Despite recognising the global dangers posed by deepfakes, SOCOM’s initiative underscores a willingness to engage with the technology for potential military advantage.

Experts expressed concern over the ethical implications and the potential for increased misinformation, warning that deepfakes are inherently deceptive, with no legitimate applications beyond deceit, and that their adoption could encourage further misuse worldwide. Such practices also risk diminishing public trust in government communications, a risk exacerbated by the perceived hypocrisy of deploying the technology.

Why does it matter?

This plan reflects an ongoing interest in leveraging digital manipulation for military purposes, despite previous incidents in which platforms like Meta dismantled similar US-linked networks. It also highlights a contradiction in the US stance on deepfakes, as Washington simultaneously condemns similar actions by countries like Russia and China.

X redirects users’ lawsuits to conservative Texas courts

X (formerly Twitter) has updated its terms of service, requiring users to file any lawsuits against the company in the US District Court for the Northern District of Texas, a court known for conservative rulings. The change, effective November 15, appears to align with Elon Musk’s increasing support for conservative causes, including backing Donald Trump’s 2024 presidential campaign. Critics argue the move is an attempt to ‘judge-shop’, as the Northern District has become a popular destination for right-leaning litigants seeking to block parts of President Biden’s agenda.

X’s headquarters are in Bastrop, Texas, located in the Western District, but the company has chosen the Northern District for legal disputes. This district already hosts two lawsuits filed by X, including one against Media Matters after the watchdog group published a report linking ads on the platform to posts promoting Nazism. The move to steer legal cases to this specific court highlights the company’s efforts to benefit from a legal environment more favourable to conservative causes.