Anthropic unveils AI chatbot Claude in EU

Anthropic, an AI startup backed by tech giants Google and Amazon.com, announced that its generative AI chatbot, Claude, would launch in Europe on Tuesday. The release places Claude in direct competition with Microsoft-backed OpenAI’s ChatGPT, which reached a record-breaking 100 million monthly active users just two months after launch.

While Claude has been accessible online in several countries, this marks its debut availability via the web and on iPhone across Europe, including non-EU countries such as Switzerland and Iceland. Additionally, European businesses can opt for the ‘Claude Team’ plan for €28 ($30.21) per month before value-added tax.

Anthropic, founded by former OpenAI executives and siblings Dario and Daniela Amodei, recently entered the corporate AI market by launching a business-oriented version of Claude. The move potentially pits Anthropic against its own backers, Amazon and Google, as they vie for a share of burgeoning AI business spending.

Dario Amodei, Anthropic’s CEO and co-founder, emphasised Claude’s widespread utility, saying that millions of people worldwide are already using Claude to do things like accelerate scientific processes, enhance customer service or refine their writing. With its European rollout, Anthropic anticipates further innovation and adoption across diverse sectors and businesses.

IMF chief compares AI impact on labour to a ‘tsunami’

AI is poised to drastically reshape the global labour market, according to International Monetary Fund Managing Director Kristalina Georgieva. She likened its impact to a ‘tsunami’, projecting that 60% of jobs in advanced economies and 40% of jobs worldwide will be affected within the next two years. Speaking at an event organised by the Swiss Institute of International Studies in Zurich, Georgieva emphasised the urgency of preparing individuals and businesses for this imminent transformation.

While AI adoption promises significant gains in productivity, Georgieva warned against potential downsides, including the proliferation of misinformation and the exacerbation of societal inequality. She highlighted the recent vulnerabilities of the world economy, citing shocks like the 2020 global pandemic and the ongoing conflict in Ukraine. Despite these challenges, she noted a resilience in the global economy, with no imminent signs of a widespread recession.

Addressing concerns about inflation, Swiss National Bank Chairman Thomas Jordan emphasised progress in Switzerland’s inflation management. With inflation reaching 1.4% in April, remaining within the SNB’s target range for the 11th consecutive month, Jordan expressed optimism about maintaining price stability in the coming years. However, he acknowledged lingering uncertainties surrounding future economic trends.

US Air Force jets engage in dogfight, one piloted by AI

In a groundbreaking demonstration of technological advancement, two US Air Force fighter jets recently engaged in a dogfight over California, with one jet piloted by a human and the other by AI. The AI-piloted jet, named Vista, showcased the Air Force’s strides in AI technology, which dates back to the 1950s but continues to evolve.

The US is in a race with China to maintain superiority in AI and its integration into weapon systems, raising concerns about the potential for future wars fought primarily by machines. Despite assurances from officials that direct human intervention will always be required on the US side, questions linger about adversaries’ intentions and the need for rapid deployment of US capabilities.

AI’s military history traces back to the 1960s and 1970s, when systems such as the Navy’s Aegis missile defence system, which relied on hand-written if/then rule sets for autonomous decision-making, were developed. A major advance came in 2012, when computers gained the ability to analyse large volumes of data and generate their own rule sets, a milestone dubbed AI’s ‘big bang’.
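To illustrate the distinction the article draws, here is a minimal sketch contrasting a hand-written if/then rule with a rule set learned from data. It is purely illustrative: the thresholds, feature names, and the scikit-learn classifier are assumptions made for the example, not the logic of Aegis or of any actual military system.

```python
# Illustrative sketch only: hand-coded rules vs. rules learned from data.
# All thresholds and sample values are invented for the example.
from sklearn.tree import DecisionTreeClassifier

def rule_based_flag(speed_mps: float, range_km: float) -> int:
    # 1960s/70s-style approach: engineers write every decision rule by hand.
    if speed_mps > 300 and range_km < 50:
        return 1  # flag as a potential threat
    return 0

# Post-2012-style approach: a model derives its own rules from labelled examples.
X = [[350, 40], [100, 80], [400, 20], [50, 120]]  # [speed_mps, range_km] samples
y = [1, 0, 1, 0]                                  # 1 = threat, 0 = not a threat
learned = DecisionTreeClassifier(max_depth=2).fit(X, y)

print(rule_based_flag(350, 40))          # 1, from the hand-written rule
print(learned.predict([[350, 40]])[0])   # 1, from rules the model inferred itself
```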

Why does it matter?

Numerous AI projects are underway across the Pentagon, including enhancing communication between pilots and air operations centres and developing AI-based navigation systems independent of GPS satellites. Safety remains a top priority as AI learns and adapts, with extensive precautions taken to ensure the accuracy and reliability of AI-driven systems. Despite challenges, AI technology promises to revolutionise military operations, offering enhanced capabilities and strategic advantages in future conflicts.

US and China to meet in Geneva for AI risk discussions

The US and China are set to meet in Geneva on Tuesday to discuss advanced AI, with US officials underscoring that Washington’s policies would not be open for negotiation, despite exploring ways to address risks associated with the technology. President Joe Biden’s administration aims to engage China on various fronts to minimise miscommunication between the two countries, with AI being a focal point. Earlier discussions between US Secretary of State Antony Blinken and China’s Foreign Minister Wang Yi in Beijing laid the groundwork for these formal bilateral talks on AI.

Highlighting concerns over China’s rapid deployment of AI across multiple sectors, including civilian, military, and national security, US officials stress the need for direct communication to address security implications for the US and its allies. However, they clarified that talks with Beijing do not involve promoting technical collaboration or negotiating technology protection policies.

Despite competing interests in shaping AI rules, both the US and China hope to explore areas where mutual agreements can enhance global safety. Tarun Chhabra from the US National Security Council and Seth Center from the State Department will lead the discussions with Chinese officials, focusing on critical AI risks. Meanwhile, US Senate Majority Leader Chuck Schumer intends to issue recommendations on addressing AI risks in the coming weeks, emphasising the need for proactive legislation to navigate the competitive landscape with China and regulate AI advancements effectively.

Microsoft to invest €4 billion in French cloud and AI infrastructure

Microsoft Corp. is set to embark on a €4 billion ($4.3 billion) venture to construct cloud and AI infrastructure in France, marking its latest major commitment to AI technology. The American tech giant aims to train a million individuals and support 2,500 startups in the country by 2027, as stated in an official announcement. Earlier this year, Microsoft unveiled a strategic partnership and a €15 million investment in Mistral AI, a Paris-based startup competing with OpenAI in the AI domain.

France has emerged as a focal point for AI development, drawing substantial national funds and support from local magnates for initiatives like Mistral and Kyutai, a newly formed AI research nonprofit. Microsoft’s announcement aligns with President Emmanuel Macron’s ‘Choose France’ summit, which seeks to entice foreign enterprises and position France as a financial epicentre within the EU. Concurrently, another American tech titan, Amazon.com Inc., has pledged €1.2 billion towards infrastructure and computing, with 56 projects slated for announcement during the event, according to the Élysée.

Microsoft is investing significantly in its Azure cloud platform and associated AI tools. Following a €3.2 billion investment in Germany in February, the company injected $1.5 billion into the Abu Dhabi-based AI firm G42 in April. However, Microsoft’s endeavours in the cloud and AI sectors have attracted intensified antitrust scrutiny, particularly concerning its investments exceeding $10 billion in OpenAI.

OpenAI considers allowing AI-generated pornography

OpenAI is sparking debate by considering the possibility of allowing users to generate explicit content, including pornography, using its AI-powered tools like ChatGPT and DALL-E. While maintaining a ban on deepfakes, OpenAI’s proposal has raised concerns among campaigners who question its commitment to producing ‘safe and beneficial’ AI. The company sees potential for ‘not-safe-for-work’ (NSFW) content creation but stresses the importance of responsible usage and adherence to legal and ethical standards.

The proposal, outlined in a document discussing OpenAI’s AI development practices, aims to initiate discussions about the boundaries of content generation within its products. Joanne Jang, an OpenAI employee, stressed the need for maximum user control while ruling out deepfake creation. Despite acknowledging the importance of discussions around sexuality and nudity, OpenAI maintains strong safeguards against deepfakes and prioritises protecting users, particularly children.

Critics, however, have accused OpenAI of straying from its mission statement of developing safe and beneficial AI by delving into potentially harmful commercial endeavours like AI erotica. Concerns about the spread of AI-generated pornography have been underscored by recent incidents, prompting calls for tighter regulation and ethical considerations in the tech sector. While OpenAI’s policies prohibit sexually explicit content, questions remain about the effectiveness of safeguards and the company’s approach to handling sensitive content creation.

Why does it matter?

As discussions unfold, stakeholders, including lawmakers, experts, and campaigners, closely scrutinise OpenAI’s proposal and its potential implications for online safety and ethical AI development. With growing concerns about the misuse of AI technology, the debate surrounding OpenAI’s stance on explicit content generation highlights broader challenges in balancing innovation, responsibility, and societal well-being in the digital age.

US lawmakers introduce AI export bill

A bipartisan group of lawmakers introduced a bill to strengthen the Biden administration’s ability to regulate the export of AI models, focusing on protecting US technology from potential misuse by foreign competitors. Sponsored by both Republicans and Democrats, the bill proposes granting the Commerce Department explicit authority to control AI exports deemed risky to national security and to prohibit collaboration between Americans and foreigners on such systems.

The bill seeks to strengthen legal oversight in response to the pressing need to protect US AI technology from hostile exploitation. The central concern is advanced AI models, which can process vast amounts of data and generate content that adversaries could exploit for cyberattacks or even the development of biological weapons.

While the Commerce Department and the White House have yet to comment on the bill, reports suggest that the US is gearing up to implement export controls on proprietary AI models to counter threats posed by China and Russia. Current US law makes it difficult to regulate the export of open-source AI models, which are freely accessible. If approved, the measure would therefore streamline regulations, particularly regarding open-source AI, and grant the Commerce Department enhanced oversight of AI systems.

Why does it matter?

The introduction of this bill is set against the backdrop of intensifying global competition in AI development. China, for instance, relies heavily on open-source models such as Meta Platforms’ ‘Llama’ series. Recent revelations about Chinese AI firms’ use of these models have raised concerns about intellectual property and security risks. Furthermore, Microsoft’s significant investment in the UAE-based AI firm G42 has sparked debate over the implications of deepening ties between Gulf states and China, leading to security agreements between the US, UAE, and Microsoft.

OpenAI set to challenge Google with new AI-powered search product

OpenAI is gearing up to unveil its AI-powered search product, intensifying its rivalry with Google in the realm of search technology. The announcement, slated for Monday, comes amidst reports of OpenAI’s efforts to challenge Google’s dominance and compete with emerging players like Perplexity in the AI search space. While OpenAI has remained tight-lipped about the development, industry insiders anticipate a significant shift in the AI search landscape.

The timing of the announcement, just ahead of Google’s annual I/O conference, suggests OpenAI’s strategic positioning to capture attention in the tech world. Building on its flagship ChatGPT product, the new search offering promises to revolutionise information retrieval by leveraging AI to extract direct information from the web, complete with citations.

Why does it matter?

Despite ChatGPT’s initial success, OpenAI has faced challenges in sustaining user growth and relevance as the chatbot has evolved. The retirement of ChatGPT plugins in April signals the company’s commitment to refining its offerings and adapting to user needs.

As OpenAI aims to expand its reach and enhance its product capabilities, the launch of its AI search product marks a breakthrough in its quest to redefine information access and reshape the future of AI-driven technologies.

AI boom fuels data centre deals in Asia Pacific

Private equity investors and asset managers are gearing up for a surge in mergers and acquisitions (M&A) and investments in Asia Pacific’s data centre sector, driven by rising demand for digital infrastructure brought on by the AI boom. The region saw a record high in data centre deals last year, with M&A activity totalling $840.47 million.

The rapid expansion of AI capabilities by technology giants such as Microsoft, Amazon, Alphabet Inc, and Meta Platforms is a significant driver behind the increasing demand for data centre capacity in the region. Microsoft, for instance, recently announced a $2.2 billion investment in Malaysia to bolster its cloud and AI services across Asia, alongside plans to establish its first data centre region in Thailand.

Several major deals are in the pipeline, including the potential sale of a stake in Telkom Indonesia’s data centre business worth $1 billion and Japan’s NEC contemplating a $500 million data centre sale. Additionally, Bain Capital is seeking financing for Chindata’s international assets and its China business, while Goldman Sachs Asset Management has invested over $1 billion in data centre development in Asia over the past three years.

Why does it matter?

The surge in data centre investments underscores the unprecedented demand for high-quality data centre capacity fueled by the AI revolution. As AI applications drive massive data consumption, the need for increased capacity becomes paramount, signalling a robust outlook for the data centre market in Asia Pacific in the coming years. With consistent investments and strategic partnerships on the horizon, industry experts anticipate intensified deal flow within the data centre space throughout 2024 and beyond.

Dotdash Meredith partners with OpenAI for AI integration

Dotdash Meredith, a prominent publisher overseeing titles like People and Better Homes & Gardens, has struck a deal with OpenAI, marking a significant step in integrating AI technology into the media landscape. Under the agreement, OpenAI’s models will be used to sharpen the precision and effectiveness of Dotdash Meredith’s ad-targeting product, D/Cipher, while licensing content to OpenAI’s ChatGPT will expand the reach of Dotdash Meredith’s content to a wider audience, increasing its visibility and influence.

Through this partnership, OpenAI will integrate content from Dotdash Meredith’s publications into ChatGPT, offering users access to a wealth of informative articles. Moreover, both entities will collaborate on developing new AI features tailored for magazine readers, indicating a forward-looking approach to enhancing reader engagement.

One key collaboration aspect involves leveraging OpenAI’s models to enhance D/Cipher, Dotdash Meredith’s ad-targeting platform. With the impending shift towards a cookie-less online environment, the publisher aims to bolster its targeting technology by employing AI, ensuring advertisers can reach their desired audience effectively.

Dotdash Meredith’s CEO, Neil Vogel, emphasised the importance of fair compensation for publishers in the AI landscape, highlighting the need for proper attribution and compensation for content usage. The stance reflects a broader industry conversation surrounding the relationship between AI platforms and content creators.

Why does it matter?

While Dotdash Meredith joins a growing list of news organisations partnering with OpenAI, not all have embraced such agreements. Some, such as the newspapers owned by Alden Global Capital, have pursued legal action against OpenAI and Microsoft, alleging copyright infringement over the use of their content in AI models without proper attribution or compensation. These contrasting responses underscore the complex dynamics emerging as AI increasingly intersects with traditional media practices.