Three US lawmakers have raised concerns about NewsBreak, a popular news aggregation app, over its Chinese origins and its use of AI tools that have produced erroneous stories. Senator Mark Warner, chair of the Senate Intelligence Committee, emphasised the threat posed by technologies from adversarial countries, while Representative Raja Krishnamoorthi highlighted the need for transparency regarding any ties to the Chinese Communist Party (CCP). Representative Elise Stefanik pointed to the backing of IDG Capital, a Beijing-based private equity firm, as a reason for increased scrutiny.
NewsBreak, launched in the US in 2015, was originally a subsidiary of the Chinese news app Yidian, founded by Jeff Zheng. Despite being labelled an American company by its spokesperson, court documents and other evidence reveal historical links to Chinese investors and engineers based in China. Notably, Yidian has received praise from Chinese Communist Party officials for disseminating government propaganda, although there is no evidence that NewsBreak has censored or produced pro-China news.
The primary investors in NewsBreak are San Francisco-based Francisco Partners and Beijing-based IDG Capital. IDG Capital, which the Pentagon has listed as allegedly working with Beijing's military, denies any such association, and Francisco Partners has dismissed the claims as 'false and misleading.' The lawmakers nonetheless maintain that the app's potential risks to US interests warrant careful examination.
New York state lawmakers have passed new legislation restricting social media platforms from showing 'addictive' algorithmic content to users under 18 without parental consent. The measure aims to mitigate online risks to children, making New York the latest state to take such action. A companion bill, which limits online sites from collecting and selling the personal data of minors, was passed alongside it.
Governor Kathy Hochul is expected to sign both bills into law, calling them a significant step toward addressing the youth mental health crisis and ensuring a safer digital environment. The legislation could impact revenues for social media companies like Meta, which generated significant income from advertising to minors.
While industry associations have criticised the bills as unconstitutional and an assault on free speech, proponents argue that the measures are necessary to protect adolescents from mental health issues linked to excessive social media use. The Stop Addictive Feeds Exploitation (SAFE) for Kids Act will require parental consent before platforms can serve minors algorithm-driven feeds; without that consent, minors will instead see a chronological feed of followed accounts and popular content.
The New York Child Data Protection Act, the companion bill, will bar online sites from collecting, using, or selling the personal data of minors without informed consent. Violations could result in significant penalties, adding a layer of protection for young internet users.
Google has issued new guidance for developers building AI apps distributed through Google Play in response to growing concerns over the proliferation of AI-powered apps designed to create deepfake nude images. The platform recently announced a crackdown on such applications, signalling a firm stance against the misuse of AI for generating non-consensual and potentially harmful content.
The move comes in the wake of alarming reports highlighting the ease with which these apps can manipulate photos to create realistic yet fabricated nude images of individuals. Reports have surfaced about apps like ‘DeepNude’ and its clones, which can strip clothes from images of women to produce highly realistic nude photos. Another report detailed the widespread availability of apps that could generate deepfake videos, leading to significant privacy invasions and the potential for harassment and blackmail.
Apps offering AI features must be 'rigorously tested' to safeguard against prompts that could generate restricted content, and must give users a way to flag or report offensive content. Google strongly suggests that developers document these tests before launch, as it may ask to review them in the future. Developers also cannot market an app in a way that promotes uses that break Google Play's rules, at the risk of being banned from the app store. The company is also publishing additional resources and best practices, such as its People + AI Guidebook, to support developers building AI apps.
Why Does It Matter?
The proliferation of AI-driven deepfake apps on platforms like Google Play undermines personal privacy by allowing anyone to generate highly realistic and often explicit content of individuals without their knowledge or consent. Such misuse can lead to severe reputational damage, harassment, and even extortion, affecting private individuals and public figures alike.
Reporters Without Borders (RSF) has praised the Council of Europe's (CoE) new Framework Convention on AI for its progress but criticised its reliance on private sector self-regulation. The Convention, negotiated by the CoE's 46 member states, aims to address the impact of AI on human rights, democracy, and the rule of law. While it acknowledges the threat of AI-fuelled disinformation, RSF argues that it fails to provide the mechanisms necessary to achieve its goals.
The CoE Convention mandates strict regulatory measures for AI use in the public sector but allows member states to choose self-regulation for the private sector. RSF believes this distinction is a critical flaw, as the private sector, particularly social media companies and other digital service providers, has historically prioritised business interests over the public good. According to RSF, this approach will not effectively combat the disinformation challenges posed by AI.
RSF urges countries that adopt the Convention to implement robust national legislation to strictly regulate AI development and use. That would ensure that AI technologies are deployed ethically and responsibly, protecting the integrity of information and democratic processes. Vincent Berthier, Head of RSF’s Tech Desk, emphasised the need for legal requirements over self-regulation to ensure AI serves the public interest and upholds the right to reliable information.
RSF’s recommendations provide a framework for AI regulation that addresses the shortcomings of both the Council of Europe’s Framework Convention and the European Union’s AI Act, advocating for stringent measures to safeguard the integrity of information and democracy.
Top officials at the US Federal Election Commission (FEC) are divided over a proposal requiring political advertisements on broadcast radio and television to disclose whether their content is AI-generated. FEC Vice Chair Ellen Weintraub backs the proposal, initiated by FCC Chairwoman Jessica Rosenworcel, which aims to enhance transparency in political ads, whereas FEC Chair Sean Cooksey opposes it.
The proposal, which does not ban AI-generated content, comes amid increasing concerns in Washington that such content could mislead voters in the upcoming 2024 elections. Rosenworcel emphasised the risk of ‘deepfakes’ and other altered media misleading the public and noted that the FCC has long-standing authority to mandate disclosures. Weintraub also highlighted the importance of transparency for public benefit and called for collaborative regulatory efforts between the FEC and FCC.
However, Cooksey warned that mandatory disclosures might conflict with existing laws and regulations, creating confusion in political campaigns. Republican FCC Commissioner Brendan Carr criticised the proposal, pointing out inconsistencies in regulation, as the FCC cannot oversee internet, social media, or streaming service ads. The debate gained traction following an incident in January where a fake AI-generated robocall impersonating US President Joe Biden aimed to influence New Hampshire’s Democratic primary, leading to charges against a Democratic consultant.
Last Christmas Eve, NewsBreak, a popular news app, published a false report about a shooting in Bridgeton, New Jersey. The Bridgeton police quickly debunked the story, which had been generated by AI, stating that no such event had occurred. NewsBreak, which operates out of Mountain View, California, and has offices in Beijing and Shanghai, removed the erroneous article four days later, attributing the mistake to its content source.
NewsBreak, known for filling the void left by shuttered local news outlets, uses AI to rewrite news from various sources. However, this method has led to multiple errors, including incorrect information about local charities and fictitious bylines. In response to growing criticism, NewsBreak added a disclaimer about potential inaccuracies to its homepage. With over 50 million monthly users, the app primarily targets a demographic of suburban or rural women over 45 without college degrees.
The company has faced legal challenges over its AI-generated content. NewsBreak paid $1.75 million to settle a copyright infringement lawsuit brought by Patch Media and reached a settlement with Emmerich Newspapers in a similar case. Concerns about the company's ties to China have also been raised, as half of its employees are based there, prompting worries about data privacy and security.
Despite these issues, NewsBreak maintains that it complies with US data laws and operates on US-based servers. The company’s CEO, Jeff Zheng, emphasises its identity as a US-based business, crucial for its long-term credibility and success.
Australia’s cyber safety regulator has decided to drop its legal challenge against Elon Musk-owned X (formerly Twitter) concerning the removal of videos depicting the stabbing of an Assyrian church bishop in Sydney. The decision follows a setback in May when a federal court judge rejected a request to extend a temporary order for X to block the videos, which Australian authorities deemed a terrorist attack.
eSafety Commissioner Julie Inman Grant highlighted the issue of graphic material being accessible online, especially to children, and criticised X’s initial refusal to remove the violent content globally. Grant emphasised the original intent to prevent the footage from going viral, which could incite further violence and harm the community, defending the regulator’s actions despite the legal outcome.
Why Does It Matter?
The incident, which involved a 16-year-old boy charged with a terrorism offence, also led to a public clash between Musk and Australian officials, including Prime Minister Anthony Albanese. Musk’s criticisms of the regulatory order as censorship sparked controversy, while other major platforms like Meta, TikTok, Reddit, and Telegram complied with removal requests. X had opted to geo-block the content in Australia, a solution deemed ineffective by the regulator due to users employing virtual private networks.
A former Meta engineer has accused the company of bias in its handling of Gaza-related content, alleging he was fired for addressing bugs that suppressed Palestinian Instagram posts. Ferras Hamad, a Palestinian-American who worked on Meta’s machine learning team, filed a lawsuit in California state court for discrimination and wrongful termination. Hamad claims Meta exhibited a pattern of bias against Palestinians, including deleting internal communications about the deaths of Palestinian relatives and investigating the use of the Palestinian flag emoji while not probing similar uses of the Israeli or Ukrainian flag emojis.
Hamad’s firing, he asserts, was linked to his efforts to fix issues that restricted Palestinian Instagram posts from appearing in searches and feeds, including a misclassified video by a Palestinian photojournalist.
Despite his manager confirming the task was part of his duties, Hamad was later investigated and fired, allegedly for violating a policy on working with accounts of people he knew personally, which he denies.
Australia is considering new regulations to make Meta Platforms, the parent company of Facebook, pay news companies for content. The development follows Meta’s decision to stop compensating Australian news publishers despite a 2021 law that mandates such payments. News Corp Australia’s executive chairman, Michael Miller, urged the government to enforce this law, criticising Meta for abandoning previous agreements and emphasising the need for fair negotiations.
Meta argues that interest in news on its platforms is declining and views its services as free distribution channels for media companies. However, publishers claim that social media platforms profit unfairly from advertising revenue linked to news content. If the government enforces the 2021 law, Meta might restrict news sharing on Facebook in Australia, as it has done in Canada, raising concerns about increased misinformation.
Miller also highlighted the negative impacts of social media on mental health and called for a regulatory framework to protect Australians. His proposal includes holding tech firms accountable for all content, enforcing competition laws for digital advertising, improving consumer complaint processes, and supporting mental health programs. He suggested barring companies that fail to comply with these rules from operating in Australia. Meta has defended its actions, stating that it respects Australian laws and community standards and has implemented measures to promote online safety and prevent harm.
China has unveiled an AI chatbot based on principles derived from President Xi Jinping's political ideology. The chatbot, named 'Xue Xi', aims to propagate 'Xi Jinping Thought' through conversational interactions with users. Xi Jinping Thought, formally known as 'Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era', comprises 14 principles, including ensuring the absolute power of the Chinese Communist Party, strengthening national security and socialist values, and improving people's livelihoods and well-being.
Developed by a team at Tsinghua University, 'Xue Xi' utilises natural language processing to engage users in discussions about Xi Jinping's ideas on governance, socialism with Chinese characteristics, and national rejuvenation. The chatbot was trained on seven databases, six of which relate mostly to information technologies and were provided by China's internet watchdog, the Cyberspace Administration of China (CAC).
The chatbot's creation is the latest effort in a broader strategy to spread the Chinese leader's ideology, leveraging technology to strengthen ideological education and promote ideological loyalty among citizens. Students already take classes on Xi Jinping Thought in schools, and an app called Study Xi Strong Nation was rolled out in 2019 to let users learn and take quizzes about his ideology.
Why Does It Matter?
The launch of Xue Xi raises important questions about the intersection of AI technology and political ideology. It represents China’s innovative approach to using AI for ideological dissemination, aiming to ensure widespread adherence to Xi Jinping Thought. By deploying AI in this manner, China advances its technological capabilities and seeks to shape public discourse and reinforce state-approved narratives. Critics argue that such initiatives could exacerbate issues related to censorship and surveillance, potentially limiting freedom of expression and promoting conformity to government viewpoints. Moreover, the development of ‘Xue Xi’ underscores China’s broader ambition to lead in AI development, positioning itself as a pioneer in using technology for ideological governance.