JD Vance takes on Europe’s AI regulations in Paris

US Vice President JD Vance is set to speak at the Paris AI summit on Tuesday, where he is expected to address Europe’s regulation of artificial intelligence and the moderation of content on major tech platforms. As AI development accelerates, the global discussion has shifted from safety concerns to intense geopolitical competition, with nations vying to lead the technology’s development. On the first day of the summit, French President Emmanuel Macron emphasised the need for Europe to reduce regulatory barriers to foster AI growth, against the backdrop of regulatory divergence between the US, China, and Europe.

Vance, a vocal critic of content moderation on tech platforms, has voiced concerns over Europe’s approach, particularly in relation to Elon Musk’s platform X. Ahead of his trip, he stressed that free speech should be a priority for the US under President Trump, suggesting that European content moderation could harm these values. While Vance’s main focus in Paris is expected to be Russia’s invasion of Ukraine, he will lead the American delegation in discussions with nearly 100 countries, including China and India, to navigate competing national interests in the AI sector.

Macron and European Commission President Ursula von der Leyen are also expected to present a new AI strategy, aimed at simplifying regulations and accelerating Europe’s progress. At the summit, Macron highlighted the region’s shift to carbon-free nuclear energy to meet the growing energy demands of AI. German Chancellor Olaf Scholz called on European companies to unite in strengthening AI efforts within the continent. Meanwhile, OpenAI CEO Sam Altman is scheduled to speak, following a significant bid from a consortium led by Musk to purchase OpenAI.

The summit also anticipates discussions on a draft statement proposing an inclusive, human rights-based approach to AI, with an emphasis on avoiding market concentration and ensuring sustainability for both people and the planet. However, it remains unclear whether nations will support this approach as they align their strategies.

For more information on these topics, visit diplomacy.edu.

Rethinking AI regulation: Are new laws really necessary?

Specialised AI regulation may not be necessary, as existing laws already cover many aspects of AI-related concerns. Jovan Kurbalija, executive director of Diplo, argues in his blog that before enacting new AI-specific rules, society must assess whether current legal frameworks—such as consumer protection, data governance, and liability laws—can effectively regulate AI.

He draws historical parallels, citing the 4,000-year-old Code of Hammurabi as an example of legal accountability principles that remain relevant today. Kurbalija explains that legal systems have always adapted to technological advances without requiring entirely new legal categories.

He also highlights how laws governing property, commerce, and torts were successfully applied to the internet in the 1990s, suggesting that AI can be regulated similarly. Instead of focusing on abstract ethical discussions, he argues that enforcing existing legal frameworks will ensure accountability for AI developers and users.

The blog post also examines different layers of AI regulation, from hardware and data laws to algorithmic governance and AI applications. While AI-generated content has raised legal disputes over intellectual property and data use, these challenges, Kurbalija contends, should be addressed by refining current laws rather than introducing entirely new ones. He points to ongoing legal battles involving OpenAI, the New York Times, and Getty Images as examples of courts adapting existing regulations to the AI landscape.

Ultimately, Kurbalija asserts that AI is a tool, much like a hammer or a horse, and does not require its own distinct legal system. What matters most, he insists, is holding those who create and deploy AI accountable for its consequences. By reinforcing traditional legal principles such as liability, transparency, and justice, society can govern AI effectively without specialised regulations.

France secures billions for AI expansion

France is set to receive an unprecedented €83 billion in AI-related investments, with Canadian firm Brookfield committing €20 billion by 2030. The majority of this funding will be allocated to data centres, including a massive one in Cambrai with a capacity of up to one gigawatt. This surge in investment follows the announcement of a €50 billion AI campus project between France and the UAE.

A key factor behind France’s appeal is its energy infrastructure. With 65% of its electricity generated from nuclear power and another 25% from renewables, the country offers a sustainable solution for tech companies seeking to reduce their carbon footprint. This has positioned France as an attractive location for power-intensive AI data centres.

Alongside international funding, France’s public investment bank Bpifrance has pledged €10 billion to support AI startups, while telecom giant Iliad is investing €3 billion in AI-focused infrastructure. With the AI Action Summit set to take place in Paris, more investment announcements could be on the horizon.

French telecoms giant Iliad commits €3 billion to AI infrastructure

French telecoms group Iliad has announced a €3 billion investment in AI infrastructure, including data centres and computing power. The investment will be made through its subsidiary OpCore, which operates 13 data centres across Europe. In the short term, OpCore plans to deploy several hundred megawatts of capacity, with a long-term goal of expanding to several gigawatts.

Iliad has also partnered with France-based AI startup Mistral AI to integrate its ‘Le Chat Pro’ AI model into services for its 15.5 million French subscribers. The move highlights Europe’s push to catch up with the US and China in AI development. American initiatives, such as US President Donald Trump’s Stargate programme, aim to invest up to $500 billion in AI over the next five years.

OpenAI CEO Sam Altman has urged Europe to embrace AI and suggested a Stargate-style programme could be introduced on the continent. Iliad’s investment signals a growing commitment among European companies to strengthen the region’s AI capabilities and infrastructure.

Musk bids $97.4 billion to reclaim control over OpenAI

Elon Musk has reignited his rivalry with OpenAI by leading a consortium in a staggering $97.4 billion bid to acquire the nonprofit that governs the ChatGPT creator. The move is the latest chapter in Musk’s long-running battle with OpenAI CEO Sam Altman, who swiftly dismissed the offer with a sarcastic post on X, suggesting he would buy Musk’s platform for $9.74 billion instead. The dramatic exchange highlights the growing tensions surrounding OpenAI’s controversial shift from a nonprofit to a for-profit entity, a transition that Musk has legally challenged, claiming it betrays the company’s original mission of prioritising AI safety over profit.

Musk co-founded OpenAI with Altman in 2015, envisioning an organisation dedicated to open-source AI research for the benefit of humanity. However, he parted ways with the company before it became a dominant force in generative AI. Since then, Musk has launched his own AI venture, xAI, which recently secured $6 billion in funding at a $40 billion valuation. His latest bid to acquire OpenAI comes as the company is seeking new investments to fuel its growth, with reports suggesting that SoftBank is in talks to lead a funding round that would push OpenAI’s valuation to an eye-watering $300 billion.

Musk’s legal battle with OpenAI hinges on the argument that the organisation’s leaders, including Altman, have violated their original agreement by prioritising commercial interests over AI safety and transparency. His lawsuit seeks to block OpenAI’s shift to a for-profit structure, and now, his surprise takeover bid could throw a major obstacle in the company’s fundraising efforts. The consortium backing Musk’s offer includes Baron Capital Group and Emanuel Capital, signalling that serious financial players support the bid. Analysts suggest that OpenAI’s board has a fiduciary duty to consider the offer, given its substantial valuation and potential legal complications.

Financing such a deal would require Musk to tap into his vast wealth, with options including selling Tesla stock, leveraging assets from SpaceX, or securing loans against his holdings. However, his financial leverage is likely constrained after his $44 billion acquisition of X (formerly Twitter), and securing additional funding for such a massive bid could prove challenging. Meanwhile, OpenAI, currently valued at $157 billion, remains in talks with investors for its expansion, and any disruption caused by Musk’s move could impact its ability to raise funds on favourable terms.

Why does it matter?

Legal experts and industry analysts view Musk’s bid as a significant disruption to OpenAI’s trajectory. Jonathan Macey, a corporate governance professor at Yale Law School, noted that the nonprofit’s board is now in a difficult position, as rejecting a higher offer in favour of a different funding strategy could raise concerns about whether the board is acting in the best interest of OpenAI’s original mission. Furthermore, Musk’s criticism of OpenAI’s partnership with Microsoft, a key investor in the company, adds another layer of complexity to the situation, as Microsoft remains a powerful force in shaping OpenAI’s future direction.

If Musk’s plan succeeds, he could steer OpenAI back toward an open-source, safety-focused model, aligning with his publicly stated goals of ensuring AI development remains transparent and ethical. However, if OpenAI resists, it could face prolonged legal battles and financial uncertainties that might slow its rapid expansion. Either way, Musk’s aggressive push to reclaim influence over OpenAI could reshape the company’s approach and future business plans.

AI Action Summit in Paris shapes the future of AI amid divergent visions

World leaders gathered in Paris for the second day of the Artificial Intelligence (AI) Action Summit, where the focus turned to balancing national interests with global cooperation. Representatives from nearly 100 countries, including the US, China, and India, aimed to find common ground on sustainable AI development. However, questions lingered over whether the US would endorse a draft statement promoting an inclusive, human rights-based approach to AI.

French President Emmanuel Macron emphasised Europe’s commitment to clean energy as a cornerstone for AI growth, contrasting it with the US’s fossil fuel-driven strategy. ‘We won’t adopt a “drill, baby, drill” policy,’ Macron said, ‘but instead “plug, baby, plug” into our clean energy resources.’ This stance reflects Europe’s ambition to lead in sustainable AI innovation while addressing the technology’s massive energy demands.

Despite differing energy policies, there was consensus on one point: 2025 is not the year for new AI regulations. US President Donald Trump’s dismantling of his predecessor’s AI safeguards has influenced global perspectives, with Europe opting to streamline its regulations rather than impose new ones. European Commission President Ursula von der Leyen is set to unveil a new AI strategy to simplify rules, deepen the single market, and boost computing investments.

German Chancellor Olaf Scholz urged EU companies to unite in a collective push for ‘AI made in Europe,’ signalling a desire for regional self-reliance in the face of global competition. Meanwhile, tech executives, including OpenAI CEO Sam Altman, joined the summit’s Business Day, highlighting the private sector’s role in shaping AI’s future. A consortium led by Elon Musk reportedly offered $97.4 billion to acquire the nonprofit overseeing OpenAI, though details remain unconfirmed.

US Vice President JD Vance added another layer of intrigue as the summit progressed. While his primary focus was expected to be AI, reports suggested he might also address the Russia-Ukraine conflict, a shift in agenda that underscores the complex interplay between technology and geopolitics at the summit.

The draft declaration, which calls for avoiding market monopolies and ensuring AI benefits people and the planet, remains a point of contention. While many nations support its principles, the US delegation has not confirmed its stance, casting doubt on the universal adoption of the declaration. Consequently, without unanimous backing, the summit risks failing to establish a unified, sustainable framework for AI’s global development.

Scottish poet calls for AI-free literature

Scotland’s Makar, Peter Mackay, has voiced concerns about the growing role of artificial intelligence in literature, warning that it could threaten the livelihoods of new writers. With AI tools capable of generating dialogue, plot ideas, and entire narratives, Mackay fears that competing with machine-created content may become increasingly difficult for human authors.

To address these challenges, he has proposed clearer distinctions between human and AI-generated work. Ideas discussed include a certification system similar to the Harris Tweed Orb, ensuring books are marked as ‘100% AI-free.’ Another suggestion is an ingredient-style label outlining an AI-generated book’s influences, listing percentages of various literary styles.
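As a thought experiment, the two labelling ideas above could be combined in a single machine-readable record. The sketch below is purely illustrative: the class name, fields, and style categories are hypothetical inventions for this example, since no such certification standard currently exists.

```python
from dataclasses import dataclass, field


@dataclass
class AIContentLabel:
    """Hypothetical 'ingredient-style' label for a book's AI involvement."""
    title: str
    ai_free: bool  # the '100% AI-free' certification idea
    style_influences: dict = field(default_factory=dict)  # literary style -> percentage

    def validate(self) -> bool:
        # A book certified AI-free should declare no machine-derived influences,
        # and declared influence percentages should not exceed 100 in total.
        if self.ai_free and self.style_influences:
            return False
        return sum(self.style_influences.values()) <= 100


human_book = AIContentLabel(title="Example Novel", ai_free=True)
generated_book = AIContentLabel(
    title="Example Generated Novel",
    ai_free=False,
    style_influences={"Gothic fiction": 40, "Scottish ballads": 35, "Crime thriller": 25},
)

print(human_book.validate())      # True
print(generated_book.validate())  # True
```

A real scheme would of course need an auditable way to verify such declarations, which is precisely where a certification body like the one behind the Harris Tweed Orb comes in.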

Mackay also believes literary prizes, such as the Highland Book Prize, can play a role in safeguarding human creativity by celebrating originality and unique writing styles, qualities that AI struggles to replicate. He warns of the day an AI-generated book wins a major award, questioning what it would mean for writers who spend years perfecting their craft.

Nokia appoints Justin Hotard as new CEO

Nokia has announced that Pekka Lundmark will step down as CEO, with Justin Hotard, currently EVP and GM of Intel’s Data Center & AI Group, set to take over the role on April 1. This leadership change is seen as part of Nokia’s strategic shift towards expanding into areas like AI and data centres, where the company is positioning itself for future growth. Hotard’s strong background in AI and technology is expected to drive Nokia’s focus on these emerging sectors.

The news has led to a 1.6% rise in Nokia’s shares, reflecting positive investor sentiment despite the surprise announcement. Analysts note that the appointment of Hotard suggests Nokia’s commitment to strengthening its network infrastructure unit, particularly as it looks to benefit from the surge in AI investments. This follows Nokia’s $2.3 billion acquisition of US optical networking firm Infinera, aimed at tapping into the growing data centre market.

Lundmark, who has been CEO since 2020, will remain with Nokia as an advisor to Hotard until the end of the year. Despite some initial denials about leadership changes, the company confirmed that the transition plan had been in place for some time, with Lundmark signalling his intention to step down once the business repositioning was more advanced.

Nokia’s infrastructure business, which includes AI-integrated systems for communication, and its mobile networks division, focusing on 5G technology, are both seen as key to the company’s future. While shares are up 27.85% over the past year, they remain significantly lower than their peak in 2000.

South Korea accuses DeepSeek of excessive data collection

South Korea’s National Intelligence Service (NIS) has raised concerns about the Chinese AI app DeepSeek, accusing it of excessively collecting personal data and using it for training purposes. The agency warned government bodies last week to take security measures, highlighting that unlike other AI services, DeepSeek collects sensitive data such as keyboard input patterns and transfers it to Chinese servers. Some South Korean government ministries have already blocked access to the app due to these security concerns.

The NIS also pointed out that DeepSeek grants advertisers unrestricted access to user data and stores South Korean users’ data in China, where it could be accessed by the Chinese government under local laws. The agency also noted discrepancies in the app’s responses to sensitive questions, such as the origin of kimchi, which DeepSeek claimed was Chinese when asked in Chinese, but Korean when asked in Korean.

DeepSeek has also been accused of censoring political topics: when asked about the 1989 Tiananmen Square crackdown, the app suggests changing the subject. In response to these concerns, China’s foreign ministry stated that the country values data privacy and security and complies with relevant laws, denying that it pressures companies to violate privacy. DeepSeek has not yet commented on the allegations.

EU AI regulations making it harder for global firms, Ezzat says

Aiman Ezzat, CEO of Capgemini, has criticised the European Union’s AI regulations, claiming they are overly restrictive and hinder the ability of global companies to deploy AI technology in the region. His comments come ahead of the AI Action Summit in Paris and reflect increasing frustration among private sector players with EU laws. Ezzat highlighted the complexity of navigating different regulations across countries, especially in the absence of global AI standards, and argued that the EU’s AI Act, hailed as the most comprehensive worldwide, could stifle innovation.

As one of Europe’s largest IT services firms, Capgemini works with major players like Microsoft, Google Cloud, and Amazon Web Services. The company is concerned about the implementation of AI regulations in various countries and how they affect business operations. Ezzat is hopeful that the AI summit will provide an opportunity for regulators and industry leaders to align on AI policies moving forward.

Despite the regulatory challenges, Ezzat spoke positively about DeepSeek, a Chinese AI firm gaining traction by offering cost-effective, open-source models that compete with US tech giants. However, he pointed out that while DeepSeek shares its models, it is not entirely open source, as there is limited access to the data used for training the models. Capgemini is in the early stages of exploring the use of DeepSeek’s technology with clients.

As concerns about AI’s impact on privacy grow, European data protection authorities have begun investigating AI companies, including DeepSeek, to ensure compliance with privacy laws. Ezzat’s comments underscore the ongoing tension between innovation and regulation in the rapidly evolving AI landscape.