Microsoft to invest $3.21 billion in Sweden’s cloud and AI infrastructure

Microsoft announced on Monday a significant investment of 33.7 billion Swedish crowns ($3.21 billion) to enhance its cloud and AI infrastructure in Sweden over the next two years. This investment marks the company’s largest commitment to Sweden to date and includes plans to train 250,000 individuals in AI skills, aiming to boost the country’s competitiveness in the tech sector. Microsoft Vice Chair and President Brad Smith emphasised that this initiative goes beyond technology, focusing on providing widespread access to essential tools and skills for Sweden’s people and economy.

As part of this investment, Microsoft plans to deploy 20,000 advanced graphics processing units (GPUs) across its data centre sites in Sandviken, Gävle, and Staffanstorp. These GPUs accelerate the large-scale parallel computations that AI applications depend on, enhancing their efficiency and capability. Smith was scheduled to meet with Swedish Prime Minister Ulf Kristersson in Stockholm to discuss the investment and its implications for the country’s tech landscape.

In addition to bolstering AI infrastructure in Sweden, Microsoft is committed to promoting AI adoption throughout the Nordic region, which includes Denmark, Finland, Iceland, and Norway. The strategic move underscores Microsoft’s dedication to fostering innovation and equipping the Nordic countries with the necessary resources to thrive in the evolving AI era.

AMD unveils new AI chips to challenge Nvidia

Advanced Micro Devices (AMD) unveiled its latest AI processors at the Computex technology trade show in Taipei on Monday, signalling its commitment to challenging Nvidia’s dominance in the AI semiconductor market. AMD CEO Lisa Su introduced the MI325X accelerator, set for release in late 2024, and outlined the company’s ambitious roadmap to develop new AI chips annually. The move aligns with Nvidia’s strategy, as both companies race to meet the soaring demand for advanced AI data centre chips essential for generative AI programs.

AMD is not only aiming to compete with Nvidia but also to surpass it with innovations like the MI350 series, expected in 2025, which promises a 35-fold improvement in AI inference performance over current models. The company also previewed the MI400 series, set for 2026, featuring a new architecture called ‘Next’. Su emphasised that AI is the company’s top priority, driving a focus on rapid product development to maintain a competitive edge in the market.

The shift towards an annual product cycle reflects the growing importance of AI capabilities in the tech industry. AMD’s shares have more than doubled since the start of 2023 as investors track the AI chip market closely, though Nvidia’s shares have surged even more dramatically. AMD’s plans include AI chip sales projections of $4 billion for 2024, up $500 million from previous estimates, and new central processing units (CPUs) and neural processing units (NPUs) for AI tasks in PCs.

Why does it matter?

As the PC market looks to rebound from a prolonged slump, AMD is banking on its advanced AI capabilities to drive growth. Major PC providers like HP and Lenovo are set to incorporate AMD’s AI chips in their devices, which already meet Microsoft’s Copilot+ PC requirements. This strategic focus on AI-enhanced hardware highlights AMD’s commitment to staying at the forefront of technological innovation and market demand.

OpenAI uncovers misuse of AI in deceptive campaigns

OpenAI, led by Sam Altman, announced it had disrupted five covert influence operations that misused its AI models for deceptive activities online. Over the past three months, actors from Russia, China, Iran, and Israel used AI to generate fake comments, articles, and social media profiles. These operations targeted issues such as Russia’s invasion of Ukraine, the Gaza conflict, Indian elections, and politics in Europe and the US, aiming to manipulate public opinion and influence political outcomes.

Despite these efforts, OpenAI stated that the deceptive campaigns did not see increased audience engagement. The company emphasised that these operations included both AI-generated and manually created content. OpenAI’s announcement highlights ongoing concerns about the use of AI technology to spread misinformation.

In response to these threats, OpenAI has formed a Safety and Security Committee, led by CEO Sam Altman and other board members, to oversee the training of its next AI model. Additionally, Meta Platforms reported similar findings of likely AI-generated content used deceptively on Facebook and Instagram, underscoring the broader issue of AI misuse in digital platforms.

Zambia finalises AI policy to boost copper production

The Zambian government has completed drafting a comprehensive AI policy aimed at leveraging modern technologies for the country’s development. Felix Mutati, the minister of science and technology, announced that the AI plan will be officially launched within the next two months. The initiative is seen as a crucial step towards achieving Zambia’s ambitious goal of producing 3 million tonnes of copper annually, utilising AI to enhance mineral exploration and production processes.

Copper, the cornerstone of Zambia’s economy, stands to benefit significantly from AI integration. Mutati highlighted that AI could expedite mineral exploration and create new job opportunities, thus bringing substantial economic benefits. Speaking at the Copperbelt Agricultural Mining and Industrial Networking Enterprise in Kitwe, he emphasised that AI is essential for the country’s future growth and development.

Zambia will host an AI Conference next month to prepare for an AI-driven future. The event aims to engage stakeholders and prepare the nation for the transformative impact of AI. Larry Mweetwa, the acting director for science and technology, mentioned that the government is already training its workforce in AI and will soon begin discussions with industry players to ensure effective implementation and maximum benefit from the new technology.

EU watchdog sets AI guidelines for banks

The European Securities and Markets Authority (ESMA) has issued its first statement on AI, emphasising that banks and investment firms in the EU must uphold boardroom responsibility and legal obligations to safeguard customers when using AI. ESMA’s guidance, aimed at entities regulated across the EU, outlines how these firms can integrate AI into their daily operations while complying with the EU’s MiFID securities law.

While AI offers opportunities to enhance investment strategies and client services, ESMA underscores its inherent risks, particularly concerning protecting retail investors. The authority stresses that management bodies are ultimately responsible for decisions, regardless of whether humans or AI-based tools make them. ESMA emphasises the importance of acting in clients’ best interests, irrespective of the tools firms choose to employ.

ESMA’s statement extends beyond the direct development or adoption of AI tools by financial institutions, also addressing the use of third-party AI technologies. Whether firms utilise platforms like ChatGPT or Google Bard with or without senior management’s direct knowledge, ESMA emphasises the need for management bodies to understand and oversee the application of AI technologies within their organisations.

The guidance aligns with the forthcoming EU rules on AI, set to take effect next month, which could establish a global standard for AI governance across various sectors. Additionally, efforts are underway at the global level, led by the Group of Seven economies (G7), to establish safeguards for AI technology’s safe and responsible development.

Survey reveals limited usage of AI tools among general population

A recent study by the Reuters Institute and the University of Oxford sheds light on the general population’s widespread lack of awareness and use of generative AI tools. Despite their prevalence in tech-centric professions, tools like ChatGPT, Gemini, and Copilot remain unfamiliar to many people, with 20–30% of respondents across six countries admitting they haven’t even heard of them.

The survey, conducted among approximately 12,000 participants in Argentina, Denmark, France, Japan, the UK, and the USA, highlights that most people do not use generative AI tools daily. Even OpenAI’s ChatGPT, the most recognised tool, is used daily by only a small fraction of respondents, ranging from 1% in Japan to 7% in the USA. Other popular tools like Google’s Gemini and Microsoft’s Copilot also have limited daily usage.

Generational differences are evident, with younger demographics more likely to engage with generative AI, while older age groups exhibit lower usage rates. The study suggests that generative AI is primarily utilised for media generation and information retrieval, with 28% using it for various media types and 24% for gathering information.

Respondents anticipate significant impacts of generative AI across sectors such as search engines, social media, news media, and science. However, overall expectations regarding AI’s societal impact lean towards pessimism, particularly concerning issues like the cost of living, equality, and job security.

Meta discovers ‘likely AI-generated’ content praising Israel

Meta reported finding likely AI-generated content used deceptively on Facebook and Instagram, praising Israel’s handling of the Gaza conflict in comments under posts from global news organisations and US lawmakers. This campaign, linked to the Tel Aviv-based political marketing firm STOIC, targeted audiences in the US and Canada by posing as various concerned citizens. STOIC has not commented on the allegations.

Meta’s quarterly security report marks the first disclosure of text-based generative AI technology used in influence operations since its emergence in late 2022. While AI-generated profile photos have been identified in past operations, the use of text-based AI raises concerns about more effective disinformation campaigns. Despite this, Meta’s security team successfully disrupted the Israeli campaign early and maintained confidence in their ability to detect such networks.

The report detailed six covert influence operations disrupted in the first quarter, including an Iran-based network focused on the Israel-Hamas conflict, which did not use generative AI. As Meta and other tech giants continue to address potential AI misuse, upcoming elections in the EU and the US will test their defences against AI-generated disinformation.

Senators to introduce NO FAKES Act to regulate AI in music and film industries

US senators are set to introduce a bill in June to regulate AI in the music and movie industries amid rising tensions in Hollywood. The NO FAKES Act, an acronym for Nurture Originals, Foster Art, and Keep Entertainment Safe, aims to prohibit the unauthorised creation of AI-generated replicas of individuals’ likenesses or voices.

Senator Chris Coons (D-Del.) is leading the bipartisan effort with Senators Amy Klobuchar (D-Minn.), Marsha Blackburn (R-Tenn.), and Thom Tillis (R-N.C.). They are working with artists in the recording and movie industries on the bill’s details.

Musicians, in particular, are increasingly worried about the lack of protection for their names, likenesses, and voices from being used in AI-generated songs. During the Grammys on the Hill lobbying event, Sheryl Crow noted the urgency of establishing guidelines and safeguards considering the unsettling trend of artists’ voices being used without consent, even posthumously.

However, before considering a national AI bill, senators will need to address several issues, including whether the law will override existing state laws like Tennessee’s ELVIS Act, and how long licensing restrictions and postmortem rights to an artist’s digital replica should last.

As Senate discussions continue, the Recording Academy has supported the bill. Meanwhile, the movie industry also backs the regulation but has raised concerns about potential First Amendment infringements. A similar bill, the No AI Fraud Act, is being considered in the House. Senate Majority Leader Chuck Schumer is also pushing for AI legislation that respects First Amendment principles.

Why does it matter?

Concerns about AI’s impact on the entertainment industry escalated after a dispute between Scarlett Johansson and OpenAI. Johansson accused OpenAI of using an ‘eerily similar’ voice to hers for a new chatbot without her permission. Singers Ariana Grande and Lainey Wilson have also had their voices mimicked without consent. Last year, an anonymous artist released ‘Heart on My Sleeve’, falsely impersonating Drake and The Weeknd and raising alarm bells across the industry.

AI tools deployed to counter cyber threats at 2024 Olympics

In just over two months, Paris will host the eagerly awaited 2024 Summer Olympics, welcoming athletes from around the globe. These athletes had a condensed preparation period due to the COVID-related delay of the 2020 Summer Olympics, which took place in Tokyo in 2021. While the athletes hone their skills for the upcoming games, organisers are diligently fortifying their defences against cybersecurity threats.

As cyber threats become increasingly sophisticated, there’s a growing focus on leveraging AI to combat them. Blackbird.AI has developed Constellation, an AI-powered narrative intelligence platform that identifies and analyses disinformation-driven narratives. By assessing the risk and adding context to these narratives, Constellation equips organisations with invaluable insights for informed decision-making.

The platform’s real-time monitoring capability allows for early detection and mitigation of narrative attacks, which can inflict significant financial and reputational damage. With the ability to analyse various forms of content across multiple platforms and languages, Constellation offers a comprehensive approach to combating misinformation and safeguarding against online threats.

Meanwhile, the International Olympic Committee (IOC) is also embracing AI, recognising its potential to enhance various aspects of sports. From talent identification to improving judging fairness and protecting athletes from online harassment, the IOC is leveraging AI to innovate and enhance the Olympic experience. With cybersecurity concerns looming, initiatives like Viginum, spearheaded by French President Emmanuel Macron, aim to counter online interference and ensure the security of major events like the Olympics.

EU launches AI Office to regulate AI development

The European Commission has launched the AI Office to oversee the development, deployment, and regulation of AI in the EU. The office is tasked with ensuring that AI fosters societal and economic benefits while managing associated risks. It will play a crucial role in implementing the AI Act, especially for general-purpose AI models, and will support research and innovation to position the EU as a leader in trustworthy AI.

The AI Office comprises several specialised units. The Regulation and Compliance Unit will enforce the AI Act across the EU, working with member states to administer sanctions and handle investigations. The AI Safety Unit will identify and mitigate risks associated with powerful AI models. The Excellence in AI and Robotics Unit will fund research and coordinate the GenAI4EU initiative. The AI for Societal Good Unit will focus on international collaborations in areas like weather modelling and cancer diagnosis. Lastly, the AI Innovation and Policy Coordination Unit will monitor AI trends, stimulate investment, and support testing and regulatory sandboxes.

Led by the Head of the AI Office and advised by a Lead Scientific Adviser and an international affairs expert, the office will employ over 140 staff members. These include technology specialists, lawyers, and policy experts. The AI Office will collaborate with member states and the scientific community through dedicated forums and the European Artificial Intelligence Board. It will also support research and innovation activities, ensuring that AI models developed in Europe are integrated into various applications, thereby stimulating investment.

The AI Office will officially begin its operations on 16 June, with the first meeting of the AI Board scheduled for the end of June. It will issue guidelines on AI system definitions and prohibitions within six months of the AI Act’s enforcement, expected by the end of July 2024. This initiative follows the EU AI Act, provisionally agreed upon in December 2023, and aims to maintain safety and fundamental rights while fostering innovation and investment in AI across Europe.