Google has released SynthID Text, a watermarking tool designed to help developers identify AI-generated content. Available for free on platforms like Hugging Face and Google’s Responsible GenAI Toolkit, this open-source technology aims to improve transparency around AI-written text. It works by embedding subtle patterns into the token distribution of text generated by AI models without affecting the quality or speed of the output.
SynthID Text has been integrated with Google’s Gemini models since earlier this year. While it can detect text that has been paraphrased or modified, the tool does have limitations, particularly with shorter text, factual responses, and content translated from other languages. Google acknowledges that its watermarking technique may struggle with these formats but emphasises the tool’s overall benefits.
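Google has not published SynthID Text's exact algorithm in this article, but the general idea of token-distribution watermarking can be illustrated with a toy scheme: each token's predecessor seeds a pseudorandom "green list" of vocabulary items, generation favours green tokens, and a detector later checks whether green tokens appear more often than chance. The function names and the green/red partition below are illustrative assumptions, not Google's implementation.

```python
import hashlib
import random

def greenlist(prev_token: int, vocab_size: int, fraction: float = 0.5) -> set:
    # Toy scheme (not SynthID itself): seed a PRNG with the previous token
    # so the same partition can be rebuilt at detection time without the model.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(range(vocab_size), int(vocab_size * fraction)))

def detect(tokens: list, vocab_size: int, fraction: float = 0.5) -> float:
    # Fraction of tokens that land in their predecessor's green list.
    # Unwatermarked text hovers near `fraction`; watermarked text scores higher.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in greenlist(prev, vocab_size, fraction))
    return hits / max(len(tokens) - 1, 1)
```

Because detection only counts green-list hits, it degrades gracefully under paraphrasing but, as the article notes, becomes unreliable on short texts, where too few tokens exist to separate the watermark signal from chance.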
As the volume of AI-generated content grows, so does the need for reliable detection methods. Countries like China already mandate watermarking of AI-produced material, and similar regulations are being considered in the US state of California. The urgency is clear, with predictions that AI-generated content could account for 90% of online text by 2026, creating new challenges in combating misinformation and fraud.
Meta Platforms and its CEO, Mark Zuckerberg, successfully defended against a lawsuit claiming the company misled shareholders about child safety on Facebook and Instagram. A US federal judge dismissed the case on Tuesday.
Judge Charles Breyer ruled that the plaintiff, Matt Eisner, failed to demonstrate that shareholders experienced financial harm due to Meta’s disclosures. He stated that federal law does not require companies to reveal all decisions regarding child safety measures or focus on their shortcomings.
Eisner had sought to delay Meta’s 2024 annual meeting and void its election results unless the company revised its proxy statement. However, the judge emphasised that many of Meta’s commitments in its proxy materials were aspirational and not legally binding. His dismissal, issued with prejudice, prevents Eisner from filing the same case again.
Meta still faces legal challenges from state attorneys general and hundreds of lawsuits from children, parents, and schools, accusing the company of fostering social media addiction. Other platforms, such as TikTok and Snapchat, also confront similar legal actions.
RegHorizon and the ETH Zurich Center for Law and Economics are organising the fifth AI Policy Summit. This year's summit will be held on 1-2 November 2024.
The AI Policy Summit offers a platform for policymakers, business leaders, civil society, and academia to converge, exchange ideas, and collaboratively shape the future of AI policies. The Summit is an opportunity to be at the forefront of AI policy-making, ensuring that the technology benefits all of humanity while addressing ethical, social, and legal considerations.
More information, the agenda, and registration details are available on the Summit webpage.
The United States Federal Trade Commission (FTC) has introduced a rule banning the creation, purchase, and dissemination of fake online reviews, ensuring that testimonials are genuine and trustworthy. That includes reviews attributed to people who don’t exist, those generated by AI, or individuals with no real experience with the product or service.
The rule empowers the FTC to impose civil penalties on businesses and individuals knowingly engaging in such deceptive practices, holding violators accountable. By cracking down on fake reviews, the FTC protects consumers from being misled and ensures they can make informed purchasing decisions.
The initiative also promotes fair competition by penalising dishonest companies and supporting those operating with integrity, fostering a transparent and competitive marketplace. Additionally, the FTC's rule goes beyond fake reviews by prohibiting businesses from using manipulative tactics such as unfounded legal threats, physical intimidation, or false accusations to influence their online reputation.
These measures prevent companies from using unethical strategies to control public perception, ensuring that business reputations are based on genuine consumer feedback, not coercion or deceit. The FTC aims to create a market environment that values honesty and fairness through this comprehensive approach.
Dow Jones and the New York Post have taken legal action against AI startup Perplexity AI, accusing the company of unlawfully copying their copyrighted content. The lawsuit is part of a wider dispute between publishers and tech companies over the use of news articles and other content without permission to train and operate AI systems.
Perplexity AI, which aims to disrupt the search engine market, assembles information from websites it deems authoritative and presents AI-generated summaries. Publishers claim that Perplexity bypasses their websites, depriving them of advertising and subscription revenue, and undermines the work of journalists.
The lawsuit, filed in the Southern District of New York, argues that Perplexity’s AI generates answers based on a vast database of news articles, often copying content verbatim. News Corp, owner of Dow Jones and the New York Post, is asking the court to block Perplexity’s use of its articles and to destroy any databases containing copyrighted material.
Perplexity has also faced allegations from other media organisations, including Forbes and Wired. While the company has introduced a revenue-sharing programme with some publishers, many news outlets continue to resist, seeking stronger legal protections for their content.
Alcon Entertainment, the producer behind Blade Runner 2049, has filed a lawsuit against Tesla and Warner Bros, accusing them of misusing AI-generated images that resemble scenes from the movie to promote Tesla’s new autonomous cybercab. Filed in California, the lawsuit alleges violations of US copyright law and claims Tesla falsely implied a partnership with Alcon through the use of the imagery.
Alcon stated that it had rejected Warner Bros’ request to use official Blade Runner images for Tesla’s cybercab event on October 10. Despite this, Tesla allegedly proceeded with AI-created visuals that mirrored the film’s style. Alcon is concerned this could confuse its brand partners, especially ahead of its upcoming Blade Runner 2099 series for Amazon Prime.
Though no specific damages were mentioned, Alcon emphasised that it has invested hundreds of millions in the Blade Runner brand and argued that Tesla's actions had caused substantial financial harm.
A new AI tool created by Google DeepMind, called the ‘Habermas Machine,’ could help reduce culture war divides by mediating between different viewpoints. The system takes individual opinions and generates group statements that reflect both majority and minority perspectives, aiming to foster greater agreement.
Developed by researchers including Professor Chris Summerfield from the University of Oxford, the AI system has been tested in the United Kingdom with more than 5,000 participants. Statements created by the AI were often rated higher in clarity and quality than those written by human mediators, and they increased group consensus by eight percentage points on average.
The Habermas Machine was also used in a virtual citizens’ assembly on topics such as Brexit and universal childcare. It was able to produce group statements that acknowledged minority views without marginalising them, but the AI approach does have its critics.
Some researchers argue that AI-mediated discussions don’t always promote empathy or give smaller minorities enough influence in shaping the final statements. Despite these concerns, the potential for AI to assist in resolving social disagreements remains a promising development.
Meta Platforms is facing a lawsuit in Massachusetts for allegedly designing Instagram features to exploit teenagers’ vulnerabilities, causing addiction and harming their mental health. A Suffolk County judge rejected Meta’s attempt to dismiss the case, asserting that claims under state consumer protection law remain valid.
The company argued for immunity under Section 230 of the Communications Decency Act, which shields internet firms from liability for user-generated content. However, the judge ruled that this protection does not extend to Meta’s own business conduct or misleading statements about Instagram’s safety measures.
Massachusetts Attorney General Andrea Joy Campbell emphasised that the ruling allows the state to push for accountability and meaningful changes to safeguard young users. Meta expressed disagreement, maintaining that its efforts demonstrate a commitment to supporting young people.
The lawsuit highlights internal data suggesting Instagram’s addictive design, driven by features like push notifications and endless scrolling. It also claims Meta executives, including CEO Mark Zuckerberg, dismissed concerns raised by research indicating the need for changes to improve teenage users’ well-being.
ByteDance, the parent company of TikTok, has dismissed an intern for what it described as “maliciously interfering” with the training of one of its AI models. The Chinese tech giant clarified that while the intern, who was part of the advertising technology team, had no experience with ByteDance’s AI Lab, some reports circulating on social media and other platforms have exaggerated the incident’s impact.
ByteDance stated that the interference did not disrupt its commercial operations or its large language AI models. It also denied claims that the damage exceeded $10 million or affected an AI training system powered by thousands of graphics processing units (GPUs). The company highlighted that the intern was fired in August, and it has since notified their university and relevant industry bodies.
As one of the leading tech firms in AI development, ByteDance operates popular platforms like TikTok and Douyin. The company continues to invest heavily in AI, with applications including its Doubao chatbot and a text-to-video tool named Jimeng.
The parents of a Massachusetts high school senior are suing Hingham High School and its district after their son received a “D” grade and detention for using AI in a social studies project. Jennifer and Dale Harris, the plaintiffs, argue that their son was unfairly punished, as there was no rule in the school’s handbook prohibiting AI use at the time. They claim the grade has impacted his eligibility for the National Honor Society and his applications to top-tier universities like Stanford and MIT.
The lawsuit, filed in Plymouth County District Court, alleges the school's actions could cause "irreparable harm" to the student's academic future. Jennifer Harris stated that their son's use of AI should not be considered cheating, arguing that AI-generated content belongs to its creator. The school, however, classified it as plagiarism. The family's lawyer, Peter Farrell, contends that there is widespread support for their view that using AI is not plagiarism.
The Harrises are seeking to have their son’s grade changed and his academic record cleared. They emphasised that while they can’t reverse past punishments like detention, the school can still adjust his grade and confirm that he did not cheat. Hingham Public Schools has not commented on the ongoing litigation.