China has unveiled an AI chatbot based on principles derived from President Xi Jinping’s political ideology. The chatbot, named ‘Xue Xi’, aims to propagate ‘Xi Jinping Thought’ through conversational interactions with users. Xi Jinping Thought, also known as ‘Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era’, is made up of 14 principles, including ensuring the absolute power of the Chinese Communist Party, strengthening national security and socialist values, as well as improving people’s livelihoods and well-being.
Developed by a team at Tsinghua University, ‘Xue Xi’ utilises natural language processing to engage users in discussions about Xi Jinping’s ideas on governance, socialism with Chinese characteristics, and national rejuvenation. The chatbot was trained on seven databases, six of them mostly related to information technologies and provided by China’s internet watchdog, the Cyberspace Administration of China (CAC).
The chatbot’s creation is the latest step in a broader strategy to spread the Chinese leader’s ideology, and an attempt to leverage technology to strengthen ideological education and promote ideological loyalty among citizens. Students are already required to take classes on Xi Jinping Thought in schools, and an app called ‘Study Xi, Strong Nation’ was rolled out in 2019 to allow users to learn and take quizzes about his ideology.
Why does it matter?
The launch of Xue Xi raises important questions about the intersection of AI technology and political ideology. It represents China’s innovative approach to using AI for ideological dissemination, aiming to ensure widespread adherence to Xi Jinping Thought. By deploying AI in this manner, China advances its technological capabilities and seeks to shape public discourse and reinforce state-approved narratives. Critics argue that such initiatives could exacerbate issues related to censorship and surveillance, potentially limiting freedom of expression and promoting conformity to government viewpoints. Moreover, the development of ‘Xue Xi’ underscores China’s broader ambition to lead in AI development, positioning itself as a pioneer in using technology for ideological governance.
Adobe faced backlash this weekend after the Ansel Adams estate criticised the company for selling AI-generated imitations of the famous photographer’s work. The estate posted a screenshot on Threads showing ‘Ansel Adams-style’ images on Adobe Stock, stating that Adobe’s actions had pushed them to their limit. Adobe allows AI-generated images on its platform but requires users to have appropriate rights and prohibits content created using prompts with other artists’ names.
In response, Adobe removed the offending content and reached out to the Adams estate, which claimed it had been contacting Adobe since August 2023 without resolution. The estate urged Adobe to respect intellectual property and support the creative community proactively. Adobe Stock’s Vice President, Matthew Smith, noted that moderators review all submissions, and the company can block users who violate rules.
Adobe’s Director of Communications, Bassil Elkadi, confirmed they are in touch with the Adams estate and have taken appropriate steps to address the issue. The Adams estate has thanked Adobe for the removal and expressed hope that the issue is resolved permanently.
Microsoft President Brad Smith highlighted that while AI-generated fakes have been increasingly used in elections in countries such as India, the United States, Pakistan, and Indonesia, the European context appears less affected. For instance, in India, deepfake videos of Bollywood actors criticising Prime Minister Narendra Modi and supporting the opposition went viral. In the EU, a Russian-language video falsely claimed that citizens were fleeing Poland for Belarus, but the EU’s disinformation team debunked it.
Ahead of the European Parliament elections from June 6-9, Microsoft’s training for candidates to monitor AI-related disinformation seems to be paying off. While careful not to declare victory prematurely, Smith emphasised that current threats focus more on events like the Olympics than on the elections. This development follows the International Olympic Committee’s ban on the Russian Olympic Committee for recognising councils in Russian-occupied regions of Ukraine. Microsoft plans to release a detailed report on this issue soon.
A recent survey conducted by the Elon University Poll and the Imagining the Digital Future Center at Elon University has revealed widespread concerns among American adults regarding the impact of AI on the upcoming presidential election. According to the survey, more than three-fourths of respondents believe that abuses involving AI systems will influence the election outcome. Specifically, 73% of respondents fear AI will be used to manipulate social media, while 70% anticipate the spread of fake information through AI-generated content like deepfakes.
Moreover, the survey highlights concerns about targeted AI manipulation to dissuade certain voters from participating in the election, with 62% of respondents expressing apprehension about this possibility. Overall, 78% of Americans anticipate at least one form of AI abuse affecting the election, while over half believe all three identified forms are likely to occur. Lee Rainie, director of Elon University’s Imagining the Digital Future Center, notes that voters in the USA anticipate facing significant challenges in navigating misinformation and voter manipulation tactics facilitated by AI during the campaign period.
The survey underscores a strong consensus among Americans regarding the accountability of political candidates who maliciously alter or fake photos, videos, or audio files. A resounding 93% of respondents believe such candidates should face punishment, with opinions split between removal from office (46%) and criminal prosecution (36%). Additionally, the survey reveals concerns about the public’s ability to discern faked media, as 69% of respondents lack confidence in most voters’ ability to detect altered content.
AI is making significant strides in the healthcare sector, with Chinese researchers developing an AI hospital town that promises to revolutionise medical training and treatment. Dubbed ‘Agent Hospital’, this virtual environment, created by Tsinghua University researchers, features large language model (LLM)-powered intelligent agents that act as doctors, nurses, and patients, all capable of autonomous interaction. These AI agents can treat thousands of patients quickly, achieving a 93.06% accuracy rate on medical exams. This innovative approach aims to enhance the training of medical professionals by allowing them to practice in a risk-free, simulated environment.
The AI hospital town not only offers advanced training opportunities for medical students but also has the potential to transform real-world healthcare delivery. The AI hospital can provide valuable insights and predictions by simulating various medical scenarios, including the spread of infectious diseases. The system utilises a vast repository of medical knowledge, enabling AI doctors to handle numerous cases efficiently and accurately, paving the way for high-quality, affordable, and convenient healthcare services.
While the future of AI in healthcare appears promising, significant challenges remain in implementing and promoting AI-driven medical solutions. Ensuring strict adherence to medical regulations, validating technological maturity, and developing effective AI-human collaboration mechanisms are essential to mitigate risks to public health. Experts emphasise that despite the impressive capabilities of AI, it can only partially replace the human touch in medicine. Personalised care, compassion, and legal responsibilities are aspects that AI cannot replicate, highlighting the indispensable role of human doctors in healthcare.
Microsoft announced on Monday a significant investment of 33.7 billion Swedish crowns ($3.21 billion) to enhance its cloud and AI infrastructure in Sweden over the next two years. This investment marks the company’s largest commitment to Sweden to date and includes plans to train 250,000 individuals in AI skills, aiming to boost the country’s competitiveness in the tech sector. Microsoft Vice Chair and President Brad Smith emphasised that this initiative goes beyond technology, focusing on providing widespread access to essential tools and skills for Sweden’s people and economy.
As part of this investment, Microsoft plans to deploy 20,000 advanced graphics processing units (GPUs) across its data centre sites in Sandviken, Gavle, and Staffanstorp. These GPUs are designed to accelerate computer calculations, enhancing the efficiency and capability of AI applications. Smith was scheduled to meet with Swedish Prime Minister Ulf Kristersson in Stockholm to discuss the investment and its implications for the country’s tech landscape.
In addition to bolstering AI infrastructure in Sweden, Microsoft is committed to promoting AI adoption throughout the Nordic region, which includes Denmark, Finland, Iceland, and Norway. The strategic move underscores Microsoft’s dedication to fostering innovation and equipping the Nordic countries with the necessary resources to thrive in the evolving AI era.
Advanced Micro Devices (AMD) unveiled its latest AI processors at the Computex technology trade show in Taipei on Monday, signalling its commitment to challenging Nvidia’s dominance in the AI semiconductor market. AMD CEO Lisa Su introduced the MI325X accelerator, set for release in late 2024, and outlined the company’s ambitious roadmap to develop new AI chips annually. The move aligns with Nvidia’s strategy, as both companies race to meet the soaring demand for advanced AI data centre chips essential for generative AI programs.
AMD is not only aiming to compete with Nvidia but also to surpass it with innovations like the MI350 series, expected in 2025, which promises a 35-fold improvement in AI inference performance over current models. The company also previewed the MI400 series, set for 2026, featuring a new architecture called ‘Next’. Su emphasised that AI is the company’s top priority, driving a focus on rapid product development to maintain a competitive edge in the market.
The shift towards an annual product cycle reflects the growing importance of AI capabilities in the tech industry. Investors who have been keenly following the AI chip market have seen AMD’s shares more than double since the start of 2023, though Nvidia’s shares have surged even more dramatically. AMD’s plans include AI chip sales projections of $4 billion for 2024, up $500 million from previous estimates, as well as new central processing units (CPUs) and neural processing units (NPUs) for AI tasks in PCs.
Why does it matter?
As the PC market looks to rebound from a prolonged slump, AMD is banking on its advanced AI capabilities to drive growth. Major PC providers like HP and Lenovo are set to incorporate AMD’s AI chips in their devices, which already meet Microsoft’s Copilot+ PC requirements. This strategic focus on AI-enhanced hardware highlights AMD’s commitment to staying at the forefront of technological innovation and market demand.
OpenAI, led by Sam Altman, announced it had disrupted five covert influence operations that misused its AI models for deceptive activities online. Over the past three months, actors from Russia, China, Iran, and Israel used AI to generate fake comments, articles, and social media profiles. These operations targeted issues such as Russia’s invasion of Ukraine, the Gaza conflict, Indian elections, and politics in Europe and the US, aiming to manipulate public opinion and influence political outcomes.
Despite these efforts, OpenAI stated that the deceptive campaigns did not achieve a meaningful increase in audience engagement. The company noted that the operations combined AI-generated and manually created content. OpenAI’s announcement highlights ongoing concerns about the use of AI technology to spread misinformation.
The Zambian government has completed drafting a comprehensive AI policy aimed at leveraging modern technologies for the country’s development. Felix Mutati, the minister of science and technology, announced that the AI plan will be officially launched within the next two months. The initiative is seen as a crucial step towards achieving Zambia’s ambitious goal of producing 3 million tonnes of copper annually, utilising AI to enhance mineral exploration and production processes.
Copper, the cornerstone of Zambia’s economy, stands to benefit significantly from AI integration. Mutati highlighted that AI could expedite mineral exploration and create new job opportunities, thus bringing substantial economic benefits. Speaking at the Copperbelt Agricultural Mining and Industrial Networking Enterprise in Kitwe, he emphasised that AI is essential for the country’s future growth and development.
Zambia will host an AI Conference next month to prepare for an AI-driven future. The event aims to engage stakeholders and prepare the nation for the transformative impact of AI. Larry Mweetwa, the acting director for science and technology, mentioned that the government is already training its workforce in AI and will soon begin discussions with industry players to ensure effective implementation and maximum benefit from the new technology.
The European Securities and Markets Authority (ESMA) has issued its first statement on AI, emphasising that banks and investment firms in the EU must uphold boardroom responsibility and legal obligations to safeguard customers when using AI. ESMA’s guidance, aimed at entities regulated across the EU, outlines how these firms can integrate AI into their daily operations while complying with the EU’s MiFID securities law.
While AI offers opportunities to enhance investment strategies and client services, ESMA underscores its inherent risks, particularly concerning protecting retail investors. The authority stresses that management bodies are ultimately responsible for decisions, regardless of whether humans or AI-based tools make them. ESMA emphasises the importance of acting in clients’ best interests, irrespective of the tools firms choose to employ.
ESMA’s statement extends beyond the direct development or adoption of AI tools by financial institutions, also addressing the use of third-party AI technologies. Whether firms utilise platforms like ChatGPT or Google Bard with or without senior management’s direct knowledge, ESMA emphasises the need for management bodies to understand and oversee the application of AI technologies within their organisations.
The guidance aligns with the forthcoming EU rules on AI, set to take effect next month, which could establish a global standard for AI governance across various sectors. Additionally, efforts are underway at the global level, led by the Group of Seven (G7) economies, to establish safeguards for the safe and responsible development of AI technology.