Meta faced criticism from photographers after its ‘Made with AI’ label was incorrectly applied to genuine photos. Notably, a photo taken by former White House photographer Pete Souza and an Instagram photo of the Kolkata Knight Riders’ IPL victory were wrongly marked as AI-generated. Photographers have reported that even minor edits made with tools like Adobe’s Generative Fill can trigger Meta’s algorithm to label images as AI-generated.
Pete Souza and others have expressed frustration at being unable to remove these labels, suspecting that specific editing workflows may be triggering them. Because the label is also applied to photos with only minimal AI-assisted edits, photographers have raised concerns about its accuracy and fairness. Photographer Noah Kalina argued that if minor retouching counts as AI-generated, the label effectively becomes meaningless.
In response, Meta stated it is reviewing feedback to ensure its labels accurately reflect the amount of AI used in images. The company relies on industry-standard indicators and collaborates with other companies to refine its process. Meta’s labelling initiative, introduced to combat misinformation ahead of election season, involves tagging AI-generated content from major tech firms. However, the exact triggers for the “Made with AI” label remain undisclosed.
Brazil’s top court has decided to close an investigation into Alphabet’s Google and Telegram. The tech giants were being investigated for allegedly coordinating their opposition to a bill designed to combat fake news. The pending bill would require internet companies to find and report illegal material on their platforms, with heavy fines for non-compliance.
Judge Alexandre de Moraes accepted the recommendation of Brazil’s deputy prosecutor general, who found no grounds for criminal proceedings against the companies. The investigation into the firms’ executives, which Moraes had ordered last year, will now be halted.
A new UNESCO report highlights the growing risk of Holocaust distortion through AI-generated content as young people increasingly rely on Generative AI for information. The report, published with the World Jewish Congress, warns that AI can amplify biases and spread misinformation, as many AI systems are trained on internet data that includes harmful content. Such content has already led to fabricated testimonies and distorted historical records, including deepfake images and false quotes.
The report notes that Generative AI models can ‘hallucinate’ or invent events due to insufficient or incorrect data. Examples include ChatGPT fabricating Holocaust events that never happened and Google’s Bard generating fake quotes. These kinds of ‘hallucinations’ not only distort historical facts but also undermine trust in experts and simplify complex histories by focusing on a narrow range of sources.
UNESCO calls for urgent action to implement its Recommendation on the Ethics of Artificial Intelligence, emphasising fairness, transparency, and human rights. It urges governments to adopt these guidelines and tech companies to integrate them into AI development. UNESCO also stresses the importance of working with Holocaust survivors and historians to ensure accurate representation and educating young people to develop critical thinking and digital literacy skills.
Olga Loiek, a 21-year-old University of Pennsylvania student from Ukraine, experienced a disturbing twist after launching her YouTube channel last November. Her image was hijacked and manipulated through AI to create digital alter egos on Chinese social media platforms. These AI-generated avatars, such as ‘Natasha,’ posed as Russian women fluent in Chinese, promoting pro-Russian sentiments and selling products like Russian candies. These fake accounts amassed hundreds of thousands of followers in China, far surpassing Loiek’s own online presence.
Loiek’s experience highlights a broader trend of AI-generated personas on Chinese social media, presenting themselves as supportive of Russia and fluent in Chinese while selling various products. Experts reveal that these avatars often use clips of real women without their knowledge, aiming to appeal to single Chinese men. Some posts include disclaimers about AI involvement, but the followers and sales figures remain significant.
Why does it matter?
These events underscore the ethical and legal concerns surrounding AI’s misuse. As generative AI systems like ChatGPT become more widespread, issues related to misinformation, fake news, and copyright violations are growing.
In response, governments are starting to regulate the industry. China proposed guidelines to standardise AI by 2026, while the EU’s new AI Act imposes strict transparency requirements. However, experts like Xin Dai from Peking University warn that regulations struggle to keep pace with rapid AI advancements, raising concerns about the unchecked proliferation of AI-generated content worldwide.
Airbnb has been accused of compromising user safety by scaling back efforts to remove extremists from its platform, according to a whistle-blower complaint filed by Jess Hernandez, a former contractor. Hernandez, who worked as an investigations analyst for Airbnb from May 2022 to November 2023, claims she was fired after the company directed her team to reinstate users involved in the 6 January 2021 Capitol attack. Whistleblower Aid, the organisation representing Hernandez, stated that Airbnb’s changes undermined its public safety commitment.
Hernandez filed her complaint with the US Securities and Exchange Commission and Federal Trade Commission in May. Airbnb denied the allegations, asserting that it continues to enforce policies against dangerous individuals and has even expanded its team to enhance safety measures.
Despite these assurances, Hernandez alleges that in 2023 her team faced increased bureaucratic hurdles that slowed its ability to remove dangerous users. The claim is detailed in a 161-page complaint obtained by NBC from an anonymous source. Before joining Airbnb, Hernandez worked with the Terrorism Research and Analysis Consortium.
Why does it matter?
The complaint adds to ongoing safety concerns within Airbnb, a platform facilitating millions of global interactions. CEO Brian Chesky has previously implemented measures like party crackdowns and bans on indoor security cameras to address these issues. Airbnb’s history of removing users associated with extremist activities dates back to 2016, including actions following the Unite the Right rally in 2017 and the Capitol attack in 2021.
Tencent is set to remove its popular mobile game ‘Dungeon & Fighter’ (DnF Mobile) from selected Android app stores starting Thursday due to expired contracts. The Chinese tech giant did not specify which app stores will be affected, but local media reports indicate that Huawei, Oppo, and Vivo are among them.
Why does it matter?
The issue highlights ongoing tensions between game developers and distributors in China, particularly over the mobile game market’s standard 50% revenue share split. In 2021, Tencent faced a similar issue when Huawei removed several of its mobile games from its app store over revenue-sharing disagreements.
As India’s elections conclude and the new government commences its term, Meta has removed restrictions on election-related queries through its Meta AI chatbot. Users can now access information about election results, politicians, and officeholders. Initially, Meta had limited such queries, directing users to the Election Commission’s website for information on politicians, candidates, and political parties. While Meta hasn’t issued an official statement, this move aligns with the company’s ongoing efforts to refine its AI models.
Meanwhile, despite launching its Gemini AI app in India, Google maintains restrictions on election-related queries as part of a global policy. The company directs users to Google Search instead of providing direct responses through Gemini AI. These restrictions were implemented earlier this year in response to elections worldwide. However, it remains to be seen when Google will lift these restrictions, particularly in countries where elections have concluded and new governments are in place.
Why does it matter?
The differing approaches of Meta and Google highlight the complexities surrounding AI chatbots and political queries. While Meta temporarily restricted queries during the Indian elections, Google maintains global restrictions. The decisions underscore companies’ challenges in managing AI outputs, especially amidst concerns about bias and misinformation. Other AI chatbots like ChatGPT and Microsoft Copilot also exhibit varied responses to political queries, reflecting the broader scrutiny developers face in ensuring the integrity of AI-driven platforms.
Victor Miller, 42, has stirred controversy by filing to run for mayor of Cheyenne, Wyoming, using a customised AI chatbot named VIC (virtual integrated citizen). Miller argued that VIC, powered by OpenAI technology, could effectively make political decisions and govern the city. However, OpenAI quickly shut down Miller’s access to their tools for violating policies against AI use in political campaigning.
The emergence of AI in politics underscores ongoing debates about its responsible use as technology outpaces legal and regulatory frameworks. Wyoming Secretary of State Chuck Gray clarified that state law requires candidates to be ‘qualified electors,’ meaning VIC, as an AI bot, does not meet the criteria. Despite this setback, Miller intends to continue promoting VIC’s capabilities using his own ChatGPT account.
Meanwhile, similar AI-driven campaigns have surfaced globally, including in the UK, where another candidate is using AI models in a parliamentary campaign. Experts such as Jen Golbeck of the University of Maryland caution that while AI can support decision-making and handle administrative tasks, ultimate governance decisions should remain human-led. Despite the attention these AI candidates attract, observers like David Karpf of George Washington University dismiss them as gimmicks, pointing to the serious nature of elections and the need for informed human leadership.
Miller remains optimistic about the potential for AI candidates to influence politics worldwide. Still, the current consensus suggests that AI’s role in governance should be limited to supportive functions rather than decision-making responsibilities.
Butterflies, a new social network where humans and AI interact, has launched publicly on iOS and Android after five months in beta. Founded by former Snap engineering manager Vu Tran, the app allows users to create AI personas, called Butterflies, that post, comment, and message like real users. Each Butterfly has unique backstories, opinions, and emotions, enhancing the interaction beyond typical AI chatbots.
Tran developed Butterflies to provide a more creative and substantial AI experience. Unlike other AI chatbots from companies like Meta and Snap, Butterflies aims to integrate AI personas into a traditional social media feed, where AI and human users can engage with each other’s content. The app’s beta phase attracted tens of thousands of users, with some spending hours creating and interacting with hundreds of AI personas.
Butterflies’ unique approach has led to diverse user interactions, from creating alternate universe personas to role-playing in popular fictional settings. Vu Tran believes the app offers a wholesome way to interact with AI, helping people form connections that might be difficult in traditional social settings due to social anxiety or other barriers.
Initially free, Butterflies may introduce a subscription model and brand interactions in the future. Backed by a $4.8 million seed round led by Coatue and other investors, Butterflies aims to expand its functionality and continue to offer a novel way for users to explore AI and social interaction.
The US Federal Trade Commission (FTC) has referred a complaint against TikTok and its parent company, ByteDance, to the Justice Department over potential violations of children’s privacy. The move follows an investigation that found reason to believe the companies might be breaking the law, with the FTC deeming it in the public interest to proceed with the complaint. The investigation stems from allegations that TikTok failed to comply with a 2019 agreement to safeguard children’s privacy.
TikTok has been in discussions with the FTC for over a year to address the agency’s concerns. The company expressed disappointment that the FTC chose to pursue litigation rather than continue negotiations, arguing that many of the agency’s allegations are outdated or incorrect. TikTok says it remains committed to resolving the issues and believes it has already addressed many of the concerns.
Separately, TikTok is facing scrutiny from US Congress regarding the potential misuse of data from its 170 million US users by the Chinese government, a claim TikTok denies. Additionally, TikTok is preparing to file a legal brief challenging a recent law that mandates its parent company, ByteDance, to divest TikTok’s US assets by 19 January or face a ban.