Snap introduces watermarks for AI-generated images

Social media company Snap announced its plans to add watermarks to AI-generated images on its platform, aiming to enhance transparency and protect user content. The watermark, featuring a small ghost with a sparkle icon, will denote images created using AI tools and will appear when the image is exported or saved to the camera roll. However, how Snap intends to detect and address watermark removal remains unclear, raising questions about enforcement methods.

This move aligns with efforts by other tech giants such as Microsoft, Meta, and Google, which have implemented measures to label or identify AI-generated images. Snap, which emphasises the importance of transparency and safety in AI-driven experiences, currently offers AI-powered features such as Lenses and, for paid users, a selfie-focused tool called Dreams.

Why does it matter?

As part of its commitment to equitable access and meeting user expectations, Snap has partnered with HackerOne to stress-test its AI image-generation tools and has established a review process to address potential biases in AI results. Its transparency efforts extend to providing context cards with AI-generated images and adding controls in the Family Center that let parents monitor teens’ interactions with AI, following earlier controversies over inappropriate responses from the ‘My AI’ chatbot. As Snap continues to evolve its AI-powered features, this focus on transparency and safety underscores its aim of fostering a positive and inclusive user experience on its platform.

US Congress proposes Generative AI Copyright Disclosure Act

A new bill introduced in the US Congress aims to require AI companies to disclose the copyrighted material they use to train their generative AI models. The bill, named the Generative AI Copyright Disclosure Act and introduced by California Democrat Adam Schiff, mandates that AI firms submit copyrighted works in their training datasets to the Register of Copyrights before launching new generative AI systems. Companies must file this information at least 30 days before releasing their AI tools or face financial penalties. The datasets in question can contain vast amounts of text, images, music, or video content.

Congressman Schiff emphasised the need to balance AI’s potential with ethical guidelines and protections, citing AI’s disruptive influence on various aspects of society. The bill does not prohibit AI from training on copyrighted material but requires companies to disclose the copyrighted works they use. This move responds to increasing litigation and government scrutiny around whether major AI companies have unlawfully used copyrighted content to develop tools like ChatGPT.

Entertainment industry organisations and unions, including the Recording Industry Association of America and the Directors Guild of America, have supported Schiff’s bill. They argue that protecting the intellectual property of human creative content is crucial, given that AI-generated content originates from human sources. Companies like OpenAI, currently facing lawsuits alleging copyright infringement, maintain their use of copyrighted material falls under fair use, a legal doctrine permitting certain unlicensed use of copyrighted materials.

Why does it matter?

As generative AI technology evolves, concerns about the potential impact on artists’ rights grow within the entertainment industry. Notably, over 200 musicians recently issued an open letter urging increased protections against AI and cautioning against tools that could undermine or replace musicians and songwriters. The debate highlights the intersection of AI innovation, copyright law, and the livelihoods of creative professionals, presenting complex challenges for policymakers and stakeholders alike.

OpenAI utilised one million hours of YouTube content to train GPT-4

Recent reporting by The New York Times has brought to light the challenges AI companies face in acquiring high-quality training data. The newspaper details how companies such as OpenAI and Google have navigated the issue, often treading into legally ambiguous territory around AI and copyright law.

OpenAI, for instance, developed its Whisper audio transcription model and used it to transcribe over a million hours of YouTube videos, whose transcripts were then used to train GPT-4, its advanced language model. Although this approach raised legal concerns, OpenAI believed it fell within fair use. The company’s president, Greg Brockman, reportedly played a hands-on role in collecting the videos.

A Google spokesperson said the company had seen unconfirmed reports of OpenAI’s activities and noted that both Google’s terms of service and its robots.txt files prohibit unauthorised scraping or downloading of YouTube content. Google itself has also used transcripts from YouTube, in line with its agreements with content creators.

Similarly, Meta encountered challenges with data availability for training its AI models. The company’s AI team discussed using copyrighted works without permission to catch up with OpenAI. Meta explored options like paying for book licenses or acquiring a large publisher to address this issue.

Why does it matter?

AI companies, including Google and OpenAI, are grappling with the dwindling availability of quality training data to improve their models. The future of AI training may involve synthetic data or curriculum learning methods, but these approaches still need to be proven. In the meantime, companies continue to explore various avenues for data acquisition, sometimes straying into legally contentious territories as they navigate this evolving landscape.

TikTok removes Universal Music songs amidst licensing dispute

TikTok has begun removing Universal Music Publishing Group (UMPG) songs after license renewal negotiations proved unsuccessful. Following the expiration of their licensing agreement on 31 January, TikTok has started muting videos featuring songs from UMPG-associated artists while leaving the videos themselves on the platform.

The new policy means that TikTok must exclude any music to which UMPG songwriters have contributed, irrespective of the main label. This expands the impact beyond UMG-associated artists: if a UMPG-affiliated songwriter contributed, even minimally, to a song released by another label, TikTok is obliged to remove it from its platform.

Although UMPG claims the impact on its revenue will be negligible, the changes will adversely affect artists and songwriters, who will lose promotional opportunities on a platform known for enabling music discovery, as well as potential royalty earnings. UMG recognises these consequences but maintains its commitment to securing a new deal that justly compensates its artists.

G7 digital and tech ministers discuss AI, data flows, digital infrastructure, standards, and more

On 29-30 April 2023, G7 digital and tech ministers met in Takasaki, Japan, to discuss a wide range of digital policy topics, from data governance and artificial intelligence (AI), to digital infrastructure and competition. The outcomes of the meeting – which was also attended by representatives of India, Indonesia, Ukraine, the Economic Research Institute for ASEAN and East Asia, the International Telecommunication Union, the Organisation for Economic Co-operation and Development, UN, and the World Bank Group – include a ministerial declaration and several action plans and commitments to be endorsed at the upcoming G7 Hiroshima Summit.

During the meeting, G7 digital and tech ministers committed to strengthening cooperation on cross-border data flows, and operationalising Data Free Flow with Trust (DFFT) through an Institutional Arrangement for Partnership (IAP). IAP, expected to be launched in the coming months, is dedicated to ‘bringing governments and stakeholders together to operationalise DFFT through principles-based, solutions-oriented, evidence-based, multistakeholder, and cross-sectoral cooperation’. According to the ministers, focus areas for IAP should include data location, regulatory cooperation, trusted government access to data, and data sharing.

The ministers further noted the importance of enhancing the security and resilience of digital infrastructures. In this regard, they committed to strengthening cooperation – within the G7 and with like-minded partners – to support and enhance network resilience through measures such as ensuring and extending secure and resilient routes of submarine cables. Moreover, the group endorsed the G7 Vision of the future network in the Beyond 5G/6G era and committed to enhancing cooperation on research, development, and international standards setting towards building digital infrastructure for the 2030s and beyond. These commitments are also reflected in a G7 Action Plan for building a secure and resilient digital infrastructure.

In addition to expressing a commitment to promoting an open, free, global, interoperable, reliable, and secure internet, G7 ministers condemned government-imposed internet shutdowns and network restrictions. On global digital governance processes, the ministers expressed support for the UN Internet Governance Forum (IGF) as the ‘leading multistakeholder forum for Internet policy discussions’ and proposed that the upcoming Global Digital Compact reinforce, build on, and contribute to the success of the IGF and the World Summit on the Information Society (WSIS) process. The internet governance section also includes a commitment to protecting democratic institutions and values from foreign threats, including foreign information manipulation and interference, disinformation, and other forms of foreign malign activity. These issues are further detailed in an accompanying G7 Action Plan for open, free, global, interoperable, reliable, and secure internet.

On matters related to emerging and disruptive technologies, the ministers acknowledged the need for ‘agile, more distributed, and multistakeholder governance and legal frameworks, designed for operationalising the principles of the rule of law, due process, democracy, and respect for human rights, while harnessing the opportunities for innovation’. They also called for the development of sustainable supply chains and agreed to continue discussions on developing collective approaches to immersive technologies such as the metaverse.

With AI high on the meeting agenda, the ministers stressed the importance of international discussions on AI governance and of interoperability between AI governance frameworks, and expressed support for the development of tools for trustworthy AI (e.g. (non)regulatory frameworks, technical standards, assurance techniques) through multistakeholder international organisations. The role of technical standards in building trustworthy AI and in fostering interoperability across AI governance frameworks was highlighted both in the ministerial declaration and in the G7 Action Plan for promoting global interoperability between tools for trustworthy AI.

When it comes to AI policies and regulations, the ministers noted that these should be human-centric, based on democratic values, risk-based, and forward-looking. The opportunities and challenges of generative AI technologies were also tackled, as ministers announced plans to convene future discussions on issues such as governance, safeguarding intellectual property rights, promoting transparency, and addressing disinformation. 

On matters of digital competition, the declaration highlights the importance of both using existing competition enforcement tools and developing and implementing new or updated competition policy or regulatory frameworks ‘to address issues caused by entrenched market power, promote competition, and stimulate innovation’. A summit on digital competition, bringing together competition authorities and policymakers, is planned for autumn 2023.

European Patent Office publishes patent insight report on quantum computing

The European Patent Office (EPO) has published a patent insight report on quantum computing. The report provides an overview of quantum computing at large, while also looking at issues such as physical realisations of quantum computing, quantum error correction and mitigation, and technologies related to quantum computing and artificial intelligence/machine learning.

One of the report’s key findings is that the number of inventions in the field of quantum computing has multiplied over the last decade. In addition, inventions in quantum computing show a higher growth rate than inventions across all fields of technology in general. The above-average share of international patent applications in quantum computing suggests high economic expectations for the technology.

Getty Images sues Stability AI, maker of the Stable Diffusion image generator, for copyright infringement

Getty Images has filed a lawsuit against Stability AI in the High Court of Justice in London, alleging infringement of its intellectual property rights in millions of images from its platform, which Stability AI used to train its AI image generator, Stable Diffusion. According to the lawsuit, Stability AI violated several of Getty Images’ terms of service, including through the image scraping used to train the generator.

Getty Images alleges that Stability AI has unlawfully copied and processed images from its website without obtaining a license for their commercial exploitation, infringing copyright in content that belongs to or is represented by Getty Images.

As reported in September, Getty Images had previously banned AI-generated content from its platform, including images produced by Stable Diffusion, fearing possible future copyright lawsuits.

Class action lawsuit against Stability AI, Midjourney, and DeviantArt over intellectual property infringement

Three artists have launched a class action lawsuit in San Francisco, USA, against Stability AI, Midjourney, and DeviantArt, creators of artificial intelligence (AI) art generators, alleging infringement of intellectual property rights.

The artists argue that the producers of AI art generators violated the copyrights of millions of users by scraping the web to train their art algorithms without getting approval from content owners or giving credit or compensation. As such, they are seeking damages as well as an injunction to prevent future harm from the AI tools in question.

The lawsuit alleges direct copyright infringement, vicarious copyright infringement related to forgeries, violations of the Digital Millennium Copyright Act (DMCA), violation of class members’ rights of publicity, unlawful competition, and a breach of terms of service.

Belarus allows parallel imports and digital piracy

A new law approved by the Belarusian parliament allows the use of digital content without the copyright owners’ consent when such holders reside in ‘unfriendly countries’ that have placed sanctions on Belarus. While the measure enables the use of software, films, music, television programmes, and other audiovisual works without copyright permission, royalties still need to be paid for such uses. The funds are to be deposited into the account of the Belarusian Patent Office, where copyright holders would be able to claim them within three years.

The law also permits parallel imports, a process in which authentic goods are brought in through alternative supply channels without the intellectual property owners’ consent. This measure applies to imports from any country, provided the goods concerned are included in a list of products deemed essential for the internal market.

The legislation is to apply until the end of 2024.

Microsoft, GitHub, and OpenAI sued over alleged copyright violations by AI coding assistant trained on open source code

A proposed class action in California challenges Microsoft, GitHub, and OpenAI over alleged copyright violations related to GitHub Copilot, an artificial intelligence (AI) powered assistant that generates code. The lawsuit alleges that Copilot was trained on public code repositories scraped from the web, some of which contain licensed content. While the case is still in its initial stages and the defendants are expected to argue that their use of the code qualifies as fair use under US copyright law, the outcome may have significant consequences for the future of generative AI.