Meta Platforms is facing 11 complaints over proposed changes to its privacy policy that could violate EU privacy regulations. The changes, set to take effect on 26 June, would allow Meta to use personal data, including posts and private images, to train its AI models without user consent. Advocacy group NOYB has urged privacy watchdogs to take immediate action against these changes, arguing that they breach the EU’s General Data Protection Regulation (GDPR).
Meta claims it has a legitimate interest in using users’ data to develop its AI models, which can be shared with third parties. However, NOYB founder Max Schrems contends that the European Court of Justice has previously ruled against Meta’s arguments for similar data use in advertising, suggesting that the company is ignoring these legal precedents. Schrems criticises Meta’s approach, stating that the company should obtain explicit user consent rather than complicating the opt-out process.
In response to the impending policy changes, NOYB has called on data protection authorities across multiple European countries, including Austria, Germany, and France, to initiate an urgent procedure to address the situation. If found in violation of the GDPR, Meta could face fines of up to 4% of its global annual turnover.
Chinese AI chip firms, including industry leaders such as MetaX and Enflame, are downgrading their chip designs in order to comply with Taiwan Semiconductor Manufacturing Company’s (TSMC) stringent supply chain security protocols and regulatory requirements. This strategic adjustment comes amidst heightened scrutiny and restrictions imposed by the US on semiconductor exports to Chinese companies, which includes limitations on accessing advanced manufacturing technologies critical for AI chip production.
The US has imposed strict export controls to obstruct China’s military advancements in AI and supercomputing. These controls restrict the sale of sophisticated processors from companies like Nvidia, as well as the chipmaking equipment crucial for advanced semiconductor production. The move has prevented TSMC and other overseas chip manufacturers that rely on US tools from fulfilling orders for these restricted technologies.
In response to these restrictions, top Chinese AI chip firms MetaX and Enflame reportedly submitted downgraded chip designs to TSMC in late 2023. MetaX, founded by former Advanced Micro Devices (AMD) executives and backed by state support, had to introduce the C280 chip after its more advanced C500 graphics processing unit (GPU) ran out of stock in China earlier that year. Enflame, also Shanghai-based and backed by Tencent, faces similar challenges.
Why does it matter?
The decision to downgrade chip designs to meet production demands reflects the delicate balance between technological advancement and supply chain resilience. While simplifying designs may expedite production and mitigate supply risks in the short term, it also raises questions about long-term innovation and competitiveness. The ability to innovate and deliver cutting-edge AI technologies hinges on access to advanced chip manufacturing processes, which are increasingly concentrated among a few global players.
On Tuesday, a group of current and former OpenAI employees issued an open letter warning that leading AI companies lack the transparency and accountability needed to address potential risks. The letter highlights AI safety concerns, such as deepening inequalities, misinformation, and loss of control over autonomous systems, which could lead to catastrophic outcomes.
The 16 signatories, including Google DeepMind staff, emphasised that AI firms have financial incentives to avoid effective oversight and criticised their weak obligations to share critical information. They called for stronger whistleblower protections, noting that confidentiality agreements often prevent employees from raising concerns. Some current OpenAI employees signed anonymously, fearing retaliation. AI pioneers like Yoshua Bengio, Geoffrey Hinton, and Stuart Russell also endorsed the letter, criticising inadequate preparations for AI’s dangers.
The letter also calls on AI companies to commit to a set of core principles in order to maintain a certain level of accountability and transparency: not to enter into or enforce any agreement that prohibits ‘disparagement’ or criticism of the company for risk-related concerns, nor to retaliate for risk-related criticism; to facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organisation with relevant expertise; and to support a culture of open criticism, allowing current and former employees to raise risk-related concerns about its technologies to the public, the company’s board, regulators, or an appropriate independent organisation with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected.
Why does it matter?
In response, OpenAI defended its record, citing its commitment to safety, rigorous debate, and engagement with various stakeholders. The company highlighted its anonymous integrity hotline and newly formed Safety and Security Committee as channels for employee concerns. The critique of OpenAI comes amid growing scrutiny of CEO Sam Altman’s leadership. The concerns raised by OpenAI insiders highlight the critical need for transparency and accountability in AI development. Ensuring that AI companies are effectively overseen and held accountable, and that insiders can speak out about unethical or dangerous practices without fear of retaliation, represents a pivotal safeguard for informing the public and decision makers about AI’s potential capabilities and risks.
Last Christmas Eve, NewsBreak, a popular news app, published a false report about a shooting in Bridgeton, New Jersey. The Bridgeton police quickly debunked the story, which had been generated by AI, stating that no such event had occurred. NewsBreak, which operates out of Mountain View, California, and has offices in Beijing and Shanghai, removed the erroneous article four days later, attributing the mistake to its content source.
NewsBreak, known for filling the void left by shuttered local news outlets, uses AI to rewrite news from various sources. However, this method has led to multiple errors, including incorrect information about local charities and fictitious bylines. In response to growing criticism, NewsBreak added a disclaimer about potential inaccuracies to its homepage. With over 50 million monthly users, the app primarily targets a demographic of suburban or rural women over 45 without college degrees.
The company has faced legal challenges due to its AI-generated content. Patch Media settled a $1.75 million lawsuit with NewsBreak over copyright infringement, and Emmerich Newspapers reached a settlement in a similar case. Concerns about the company’s ties to China have also been raised, as half of its employees are based there, prompting worries about data privacy and security.
Despite these issues, NewsBreak maintains that it complies with US data laws and operates on US-based servers. The company’s CEO, Jeff Zheng, emphasises its identity as a US-based business, crucial for its long-term credibility and success.
Young Americans are rapidly embracing generative AI, but few use it daily, according to a recent survey by Common Sense Media, Hopelab, and Harvard’s Center for Digital Thriving. The survey, conducted in October and November 2023 with 1,274 US teens and young adults aged 14-22, found that only 4% use AI tools daily. Additionally, 41% have never used AI, and 8% are unaware of what AI tools are. The main uses for AI among respondents are seeking information (53%) and brainstorming (51%).
Demographic differences show that 40% of white respondents use AI for schoolwork, compared to 62% of Black respondents and 48% of Latinos. Looking ahead, 41% believe AI will have both positive and negative impacts in the next decade. Notably, 28% of LGBTQ+ respondents expect mostly negative impacts, compared to 17% of cisgender/straight respondents. Young people have varied opinions on AI, as some view it as a sign of a changing world and are enthusiastic about its future, while others find it unsettling and concerning.
Why does it matter?
Young people globally share concerns over AI, which the IMF predicts will affect nearly 40% of jobs, with advanced economies seeing up to 60%. By comparison, a survey of 1,000 young Hungarians (aged 15-29) found that frequent AI app users are more positive about its benefits, while 38% of occasional users remain skeptical. Additionally, 54% believe humans will maintain control over AI, although women are more likely to fear a loss of control than men (54% versus 37%).
China has unveiled an AI chatbot based on principles derived from President Xi Jinping’s political ideology. The chatbot, named ‘Xue Xi’, aims to propagate ‘Xi Jinping Thought’ through conversational interactions with users. Xi Jinping Thought, also known as ‘Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era’, is made up of 14 principles, including ensuring the absolute power of the Chinese Communist Party, strengthening national security and socialist values, as well as improving people’s livelihoods and well-being.
Developed by a team at Tsinghua University, ‘Xue Xi’ utilises natural language processing to engage users in discussions about Xi Jinping’s ideas on governance, socialism with Chinese characteristics, and national rejuvenation. The chatbot was trained on seven databases, six of them largely related to information technologies, provided by China’s internet watchdog, the Cyberspace Administration of China (CAC).
The chatbot’s creation is the latest effort in a broader strategy to spread the Chinese leader’s ideology, leveraging technology to strengthen ideological education and promote ideological loyalty among citizens. Students already take classes on Xi Jinping Thought in schools, and an app called Study Xi Strong Nation was rolled out in 2019 to allow users to learn and take quizzes about his ideology.
Why does it matter?
The launch of Xue Xi raises important questions about the intersection of AI technology and political ideology. It represents China’s innovative approach to using AI for ideological dissemination, aiming to ensure widespread adherence to Xi Jinping Thought. By deploying AI in this manner, China advances its technological capabilities and seeks to shape public discourse and reinforce state-approved narratives. Critics argue that such initiatives could exacerbate issues related to censorship and surveillance, potentially limiting freedom of expression and promoting conformity to government viewpoints. Moreover, the development of ‘Xue Xi’ underscores China’s broader ambition to lead in AI development, positioning itself as a pioneer in using technology for ideological governance.
Adobe faced backlash this weekend after the Ansel Adams estate criticised the company for selling AI-generated imitations of the famous photographer’s work. The estate posted a screenshot on Threads showing ‘Ansel Adams-style’ images on Adobe Stock, stating that Adobe’s actions had pushed them to their limit. Adobe allows AI-generated images on its platform but requires users to have appropriate rights and prohibits content created using prompts with other artists’ names.
In response, Adobe removed the offending content and reached out to the Adams estate, which claimed it had been contacting Adobe since August 2023 without resolution. The estate urged Adobe to respect intellectual property and support the creative community proactively. Adobe Stock’s Vice President, Matthew Smith, noted that moderators review all submissions, and the company can block users who violate rules.
Adobe’s Director of Communications, Bassil Elkadi, confirmed they are in touch with the Adams estate and have taken appropriate steps to address the issue. The Adams estate has thanked Adobe for the removal and expressed hope that the issue is resolved permanently.
Microsoft President Brad Smith highlighted that while AI-generated fakes have been increasingly used in elections in countries like India, the United States, Pakistan, and Indonesia, the European context appears less affected. For instance, in India, deepfake videos of Bollywood actors criticising Prime Minister Narendra Modi and supporting the opposition went viral. In the EU, a Russian-language video falsely claimed that citizens were fleeing Poland for Belarus, but the EU’s disinformation team debunked it.
Ahead of the European Parliament elections from 6-9 June, Microsoft’s training for candidates to monitor AI-related disinformation seems to be paying off. While careful not to declare victory prematurely, Smith emphasised that current threats focus more on events like the Olympics than on the elections. This development follows the International Olympic Committee’s ban on the Russian Olympic Committee for recognising the Olympic councils of Russian-occupied regions of Ukraine. Microsoft plans to release a detailed report on the issue soon.
A recent survey conducted by the Elon University Poll and the Imagining the Digital Future Center at Elon University has revealed widespread concerns among American adults regarding the impact of AI on the upcoming presidential election. According to the survey, more than three-fourths of respondents believe that abuses involving AI systems will influence the election outcome. Specifically, 73% of respondents fear AI will be used to manipulate social media, while 70% anticipate the spread of fake information through AI-generated content like deepfakes.
Moreover, the survey highlights concerns about targeted AI manipulation to dissuade certain voters from participating in the election, with 62% of respondents expressing apprehension about this possibility. Overall, 78% of Americans anticipate at least one form of AI abuse affecting the election, while over half believe all three identified forms are likely to occur. Lee Rainie, director of Elon University’s Imagining the Digital Future Center, notes that voters in the USA anticipate facing significant challenges in navigating misinformation and voter manipulation tactics facilitated by AI during the campaign period.
The survey underscores a strong consensus among Americans regarding the accountability of political candidates who maliciously alter or fake photos, videos, or audio files. A resounding 93% of respondents believe such candidates should face punishment, with opinions split between removal from office (46%) and criminal prosecution (36%). Additionally, the survey reveals concerns about the public’s ability to discern faked media, as 69% of respondents lack confidence in most voters’ ability to detect altered content.
AI is making significant strides in the healthcare sector, with Chinese researchers developing an AI hospital town that promises to revolutionise medical training and treatment. Dubbed ‘Agent Hospital’, this virtual environment, created by Tsinghua University researchers, features large language model (LLM)-powered intelligent agents that act as doctors, nurses, and patients, all capable of autonomous interaction. These AI agents can treat thousands of patients quickly, achieving a 93.06% accuracy rate on medical exams. This innovative approach aims to enhance the training of medical professionals by allowing them to practise in a risk-free, simulated environment.
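The core mechanism described here, autonomous LLM-driven agents role-playing doctors and patients and exchanging messages in a loop, can be illustrated with a minimal sketch. The class names, prompts, and the call_llm placeholder below are assumptions for illustration only, not the Agent Hospital implementation, whose code and prompts are not detailed in this report.

```python
# Minimal sketch of an LLM-driven doctor/patient simulation loop.
# All names and prompts are illustrative assumptions, not Agent Hospital code.
from dataclasses import dataclass, field


def call_llm(system_prompt: str, history: list[str]) -> str:
    """Placeholder for any chat-completion backend; returns a canned reply
    so the sketch runs without an API key or model weights."""
    return f"[{system_prompt.split(',')[0]}] reply to: {history[-1][:40]}..."


@dataclass
class Agent:
    role: str                      # e.g. "doctor", "nurse", "patient"
    persona: str                   # role-specific behavioural instructions
    memory: list[str] = field(default_factory=list)

    def respond(self, incoming: str) -> str:
        # Accumulate the conversation and ask the language model for a reply.
        self.memory.append(incoming)
        reply = call_llm(f"You are a {self.role}, {self.persona}", self.memory)
        self.memory.append(reply)
        return reply


def run_consultation(patient: Agent, doctor: Agent, turns: int = 3) -> None:
    """Alternate messages between the patient and doctor agents."""
    message = "I have had a fever and a cough for three days."
    for _ in range(turns):
        message = doctor.respond(message)   # doctor questions or diagnoses
        print("Doctor: ", message)
        message = patient.respond(message)  # patient answers autonomously
        print("Patient:", message)


if __name__ == "__main__":
    run_consultation(
        Agent("patient", "describe your symptoms honestly"),
        Agent("doctor", "ask follow-up questions, then give a diagnosis"),
    )
```

In a real deployment, call_llm would be wired to an actual chat model, further roles such as nurses and examiners would reuse the same Agent abstraction, and the resulting transcripts could then be evaluated against exam-style questions.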
The AI hospital town not only offers advanced training opportunities for medical students but also has the potential to transform real-world healthcare delivery. The AI hospital can provide valuable insights and predictions by simulating various medical scenarios, including the spread of infectious diseases. The system utilises a vast repository of medical knowledge, enabling AI doctors to handle numerous cases efficiently and accurately, paving the way for high-quality, affordable, and convenient healthcare services.
While the future of AI in healthcare appears promising, significant challenges remain in implementing and promoting AI-driven medical solutions. Ensuring strict adherence to medical regulations, validating technological maturity, and developing effective AI-human collaboration mechanisms are essential to mitigate risks to public health. Experts emphasise that despite the impressive capabilities of AI, it can only partially replace the human touch in medicine. Personalised care, compassion, and legal responsibilities are aspects that AI cannot replicate, highlighting the indispensable role of human doctors in healthcare.