EU demands transparency from Temu and Shein

The European Union has directed Chinese fast-fashion e-commerce giants Temu and Shein to disclose their compliance with EU online content regulations by July 12. The move follows complaints lodged by consumer groups and the earlier designation of both platforms as Very Large Online Platforms under the Digital Services Act, a status that imposes stricter obligations on handling illegal and harmful content.

According to the European Commission, requests for information have been issued to Temu and Shein regarding their measures to combat illegal products, prevent user deception through manipulative interfaces, and safeguard minors. The Commission also seeks transparency in their recommendation systems, traceability of sellers, and compliance integration into platform design.

The enforcement action stems from consumer organisations’ complaints and underscores the EU’s commitment to ensuring digital platforms uphold regulatory standards. Failure to comply with the Digital Services Act could lead to fines of up to 6% of a company’s global turnover, emphasising the seriousness with which the EU views adherence to online content rules.

Temu and Shein are mandated to furnish comprehensive responses by the specified deadline, marking a pivotal moment in how global e-commerce giants navigate regulatory landscapes beyond their home markets. The outcome of these disclosures will be closely monitored as the EU continues to assert its regulatory authority over digital platforms operating within its jurisdiction.

Instagram tests AI for creator interactions

Instagram is trialling a new feature called ‘AI Studio’, allowing creators to develop AI versions of themselves. Meta CEO Mark Zuckerberg recently revealed on his broadcast channel that the feature is undergoing an initial test phase with selected creators and users in the United States.

Zuckerberg highlighted that AI avatars from popular creators and interest-based AI models will soon appear in Instagram messaging. These AI entities are initially designed to interact within messaging threads and will be clearly marked as AI-generated.

During the broadcast, Zuckerberg demonstrated early examples featuring AI-powered chatbots developed in collaboration with creators such as the team behind the meme account ‘Wasted’ and Don Allen Stevenson III. These chatbots aim to assist creators by engaging with their followers and responding to messages on their behalf.

Users can start a conversation by tapping the ‘Message’ button and will be prompted to acknowledge that the responses may be AI-generated and potentially not entirely accurate or appropriate. Each AI-generated message will be prefaced with ‘AI’ and marked with a ‘beta’ tag, indicating ongoing development and testing.

Meta’s launch of AI Studio last year enabled businesses to create AI chatbots for platforms like Messenger, Facebook, and Instagram. The initiative reflects Meta’s ongoing efforts to integrate advanced AI technologies into its social media platforms, enhancing user engagement and interaction capabilities.

NBC using AI to recreate Al Michaels’ voice for Olympics recaps

NBC is set to bring sportscaster Al Michaels back to the Olympics with a twist this summer: his voice will be powered by AI. The network announced on Wednesday that AI software will recreate Michaels’ voice to deliver daily recaps of the Summer Games for subscribers of its Peacock streaming platform. That marks a significant milestone for the use of AI by a major media company.

The AI-driven recaps will be part of a new feature called ‘Your Daily Olympic Recap on Peacock,’ offering 10-minute highlight packages. These packages will include event updates, athlete backstories, and other content personalised to subscriber preferences. NBC claims the highlights can be packaged in about 7 million different ways, drawn from 5,000 hours of live coverage from Paris, showcasing the efficiency of AI in delivering tailored content.

Al Michaels expressed initial scepticism about the project but became intrigued after seeing a demonstration, and he is being compensated for his involvement. Michaels, known for his long broadcasting career, including his iconic call of the ‘Miracle on Ice’ game at the 1980 Winter Olympics, lent his past NBC broadcast audio to train the AI system. NBC assures that all content will be reviewed by a team of editors for factual accuracy and proper pronunciation. The highlights tool will be available on Peacock via web browsers and iOS and iPadOS apps starting 27 July.

Time magazine partners with OpenAI for content access

Time magazine has entered a multi-year agreement with OpenAI, granting the AI firm access to its news archives. The deal allows OpenAI’s ChatGPT to cite and link back to Time.com in user queries, although financial details were not disclosed. OpenAI, led by Sam Altman, has forged similar partnerships with prominent media outlets such as the Financial Times, Axel Springer, Le Monde, and Prisa Media.

These collaborations help train and enhance OpenAI’s products while giving media companies access to AI technology for developing new products. Although some media companies are suing OpenAI over content usage, such partnerships are crucial for training AI models and offer a potential revenue stream for news publishers. The trend comes amid broader industry tensions, highlighted by Meta’s decision to block news sharing in Canada following new legislation requiring payment for news content.

Why does it matter?

The OpenAI-Time deal is part of a larger movement where publishers seek fair compensation for their content amid the rise of generative AI, which has prompted discussions on ethical content usage and compliance with web standards.

AI and the UK election: Can ChatGPT influence the outcome?

With the UK heading to the polls, the role of AI in guiding voter decisions is under scrutiny. ChatGPT, a generative AI tool, has been tested on its ability to provide insights into the upcoming general election. Despite its powerful pattern-matching capabilities, experts emphasise its limitations and potential biases, given that AI tools rely on their training data and accessible online content.

When prompted about the likely outcome of the election, ChatGPT suggested a strong chance of a Labour victory based on current polling. However, AI’s predictions can be flawed, as demonstrated when a glitch led ChatGPT to prematurely and incorrectly declare Labour the election winner. The incident prompted OpenAI to refine ChatGPT’s responses, ensuring more cautious and accurate outputs.

ChatGPT can help voters navigate party manifestos, outlining the priorities of major parties like Labour and the Conservatives. By summarising key points from multiple sources, the AI aims to provide balanced insights. Nevertheless, the psychological impact of AI-generated single answers remains a concern, as it could influence voter behaviour and election outcomes.

Why does it matter?

The use of AI for election guidance has sparked debates about its appropriateness and reliability. While AI can offer valuable information, it must be balanced with critical thinking and informed decision-making. As the election date approaches, voters are reminded that their choices hold significant weight and that participation in the democratic process is crucial.

Meta may block news in Australia over licensing fees

Meta, the owner of Facebook, is contemplating blocking news content in Australia if the government enforces licensing fees, a company representative revealed during a parliamentary hearing. Meta’s regional policy director, Mia Garlick, stated that ‘all options are on the table’ to avoid paying fees, emphasising that there are many alternative channels for news content. Meta is awaiting a decision from Canberra on whether it will apply a 2021 law that allows the government to set fees for US tech giants to pay media outlets for links.

The intention to withdraw news from its platforms mirrors Meta’s stance in Canada in 2023, when similar legislation was introduced and Prime Minister Trudeau pressed Meta to comply with the Online News Act, which requires tech giants with 20 million monthly users and over C$1 billion in annual revenue to compensate Canadian news publishers.

Meta had initially struck deals with Australian media firms, including News Corp and the Australian Broadcasting Corp, but has announced it will not renew these arrangements in 2024. Australia’s assistant treasurer must now decide whether to force Meta to pay for news content, while free-to-air broadcasters like Nine Entertainment and Seven West Media are already citing revenue losses and cutting jobs in anticipation of expired deals with Meta.

In defence, Garlick explained that blocking news content would be a form of compliance with the law, stating that Meta adheres to other laws such as tax, safety, and privacy. She also defended Meta’s content moderation processes, which were managed from centres outside Australia. Addressing concerns about harmful misinformation and scams, including a lawsuit by billionaire Andrew Forrest over scam ads featuring his image, Garlick acknowledged the challenges but assured that Meta has policies and tools to combat such issues.

YouTube seeks music licensing deals for AI generation tools

YouTube is negotiating with major record labels to license their songs for AI tools that clone popular artists’ music. The negotiations aim to secure the content needed to legally train AI song generators and launch new tools this year. Google-owned YouTube has offered upfront payments to major labels like Sony, Warner, and Universal to encourage artists to participate, but many remain opposed, fearing it could devalue their work.

Previously, YouTube tested an AI tool called ‘Dream Track,’ which allowed users to create music clips mimicking well-known artists. However, only a few artists participated, including Charli XCX and John Legend. YouTube now hopes to sign up dozens more artists to expand its AI song generator tool, though it won’t carry the Dream Track brand.

Why does it matter?

These negotiations come as AI companies like OpenAI are making licensing agreements with media groups. The proposed music deals would involve one-off payments to labels rather than royalty-based arrangements. YouTube’s AI tools could become part of its Shorts platform, competing with TikTok and other similar platforms. As these discussions continue, major labels are also suing AI startups for allegedly using copyrighted recordings without permission, seeking significant damages.

The future of humour in advertising with AI

AI is revolutionising the world of advertising, particularly when it comes to humour. Traditionally, humour in advertising depended heavily on human creativity, relying on puns, sarcasm, and funny voices to engage consumers. As AI advances, however, it is increasingly being used to create comedic content.

Neil Heymann, Global Chief Creative Officer at Accenture Song, discussed the integration of AI in humour at the Cannes Lions International Festival of Creativity. He noted that while humour in advertising carries certain risks, the potential rewards far outweigh them. Despite the challenges of maintaining a unique comedic voice in a globalised market, AI offers new opportunities for creativity and personalisation.

One notable example Heymann highlighted was a recent Uber ad in the UK featuring Robert De Niro. He emphasised that while AI might struggle to replicate the nuanced performance of an actor like De Niro, it can still be a valuable tool for generating humour. For instance, a new tool developed by Google Labs can create jokes by exploring various wordplay and puns, expanding the creative options available to writers.

Heymann believes that AI can also help navigate the complexities of global advertising. By acting as an advanced filtering system, AI can identify potential cultural pitfalls and ensure that humorous content resonates with diverse audiences without losing the thrill of creativity.

Moreover, AI’s impact on advertising extends beyond humour. Toys ‘R’ Us recently pioneered text-to-video AI-generated advertising clips, showcasing AI’s ability to revolutionise content creation across various formats. That innovation highlights the expanding role of AI in shaping the future of advertising, where technological advancements continuously redefine creative possibilities.

Reddit’s new rules for AI and content use

Reddit has announced updates to its Robots Exclusion Protocol (robots.txt file), which regulates automated web bot access to websites. Traditionally used to allow search engines to index site content, the protocol now faces challenges with AI-driven scraping for model training, often without proper attribution.
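For context, a robots.txt file is a plain-text list of directives that compliant crawlers check before fetching pages. A minimal, generic illustration (these directives are a hypothetical example, not Reddit’s actual file) might look like:

```
# Block one named crawler from the entire site
User-agent: ExampleBot
Disallow: /

# Let all other crawlers index everything
User-agent: *
Allow: /
```

Crucially, these directives are honoured voluntarily; nothing in the protocol technically prevents a crawler from ignoring them.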

In addition to the revised robots.txt file, Reddit will enforce rate limits and blocks on unidentified bots and crawlers. According to multiple sources, these measures apply to entities not complying with Reddit’s Public Content Policy or lacking formal agreements with the platform. The changes are aimed at deterring AI companies from using Reddit content to train large language models without permission. Despite these updates, AI crawlers could potentially disregard Reddit’s directives, as highlighted by recent incidents.

Recently, Wired uncovered that AI-powered startup Perplexity continued scraping Reddit content despite being blocked in the robots.txt file. Perplexity’s CEO argued that robots.txt isn’t legally binding, raising questions about the effectiveness of such protocols in regulating AI scraping practices.

Reddit’s updates will exempt authorised partners like Google, with whom Reddit has a substantial agreement allowing AI model training on its data. This move signals Reddit’s stance on controlling access to its content for AI training purposes, emphasising compliance with its policies to safeguard user interests.

These developments align with Reddit’s recent policy updates, underscoring its efforts to manage and regulate data access and use by commercial entities and partners.

Industry leaders unite for ethical AI data practices

Several companies that license music, images, videos, and other datasets for training AI systems have formed the first trade group in the sector, the Dataset Providers Alliance (DPA). The founding members of the DPA include Rightsify, vAIsual, Pixta, and Datarade. The group aims to advocate for ethical data sourcing, including protecting intellectual property rights and ensuring rights for individuals depicted in datasets.

The rise of generative AI technologies has led to backlash from content creators and numerous copyright lawsuits against major tech companies like Google, Meta, and OpenAI. Developers often train AI models on vast amounts of content, much of it scraped from the internet without permission. To address these issues, the DPA will establish ethical standards for data transactions, ensuring that members do not sell data obtained without explicit consent. The alliance will also push for legislative measures such as the NO FAKES Act, which would penalise unauthorised digital replicas of voices or likenesses, and will support transparency requirements for AI training data.

The DPA plans to release a white paper in July outlining its positions and advocating for these standards and legislative changes to ensure ethical practices in AI data sourcing and usage.