OpenAI, led by Sam Altman, has entered a significant deal with media giant News Corp, securing access to content from its major publications. The agreement follows a recent content licensing deal with the Financial Times aimed at enhancing the capabilities of OpenAI’s ChatGPT. Such partnerships are essential for training AI models, providing a financial boost to news publishers traditionally excluded from the profits generated by internet companies distributing their content.
The financial specifics of the latest deal remain undisclosed, though the Wall Street Journal, a News Corp entity, reported that it could be worth over $250 million across five years. The deal ensures that content from News Corp’s publications, including the Wall Street Journal, MarketWatch, and The Times, will not be immediately available on ChatGPT upon publication. The move is part of OpenAI’s ongoing effort to secure diverse data sources, following a similar agreement with Reddit.
The announcement has positively impacted News Corp’s market performance, with shares rising by approximately 4%. OpenAI’s continued collaboration with prominent media platforms underscores its commitment to developing sophisticated AI models capable of generating human-like responses and comprehensive text summaries.
Microsoft’s recent deal with UAE-backed AI firm G42 could involve the transfer of advanced AI technology, raising concerns about national security implications. Microsoft President Brad Smith highlighted that the agreement might eventually include exporting sophisticated chips and AI model weights, although this phase has no set timeline. The deal, which necessitates US Department of Commerce approval, includes safeguards to prevent the misuse of technology by Chinese entities. However, details of these measures remain undisclosed, prompting scepticism among US lawmakers about their adequacy.
Concerns about the agreement have been voiced by senior US officials, who warn of the potential national security risks posed by advanced AI systems, such as the ease of engineering dangerous weapons. Representative Michael McCaul expressed frustration over the lack of a comprehensive briefing for Congress, citing fears of Chinese espionage through UAE channels. Current regulations require notifications and export licenses for AI chips, but gaps exist regarding the export of AI models, leading to legislative efforts to grant US officials more explicit control over such exports.
Why does it matter?
The deal, valued at $1.5 billion, was framed as a strategic move to extend US technology influence amid global competition, particularly with China. Although the exact technologies and security measures involved are not fully disclosed, the agreement aims to enhance AI capabilities in regions like Kenya and potentially Turkey and Egypt. Microsoft asserts that G42 will adhere to US regulatory requirements and has implemented a ‘know your customer’ rule to prevent Chinese firms from using the technology for training AI models.
Microsoft emphasises its commitment to ensuring secure global technology transfers, with provisions for imposing financial penalties on G42 through arbitration courts in London if compliance issues arise. While the US Commerce Department will oversee the deal under existing and potential future export controls, how Commerce Secretary Gina Raimondo will handle the approval process remains uncertain. Smith anticipates that the regulatory framework developed for this deal will likely be applied broadly across the industry.
The US Federal Communications Commission (FCC) has proposed a requirement for political ads to disclose the use of AI-generated content. Chairwoman Jessica Rosenworcel announced Wednesday that the FCC would seek public comments on this potential rule. The initiative aims to ensure transparency in political advertising, allowing consumers to know when AI tools are utilised in the ads they view.
Under the proposed framework, candidate and issue ads airing on cable, satellite TV, and radio would need to disclose the use of AI-generated content; streaming services like YouTube fall outside FCC regulation and would be exempt. The first step involves defining what constitutes AI-generated content and determining whether such a regulation is necessary. The proposal marks the beginning of a fact-finding mission to develop new regulations.
The FCC document emphasises the public interest in protecting viewers from misleading or deceptive programming and promoting informed decision-making. While the proposal is still in its early stages, it reflects a growing concern about the impact of AI on political communication. The rule, if implemented, could deter low-effort AI-generated ads and help address deceptive practices in political advertising.
The FCC will gather more information on how this rule would interact with the Federal Trade Commission and the Federal Election Commission, which oversee advertising and campaign regulations. The timeline for the rule’s enforcement remains uncertain, pending further review and public input.
AI firm iFlytek has entered a price war among China’s top tech companies by significantly reducing the cost of its ‘Spark’ large language model (LLM). iFlytek’s move follows recent price cuts by Alibaba, Baidu, and Bytedance for their own LLMs used in generative AI products. Spark Lite, launched last September, is now free for public use, while the Spark Pro and Max versions are priced at just 0.21 yuan (less than 3 cents) per 10,000 tokens, roughly a fifth of what competitors charge.
iFlytek claims that Spark surpasses ChatGPT 3.5 in Chinese-language tasks and performs comparably in English. The Hefei-based company, renowned for its voice recognition technology, highlighted the significant cost savings its pricing allows: at Spark Max’s rate, generating text the length of Yu Hua’s novel ‘To Live’ costs just 2.1 yuan ($0.29).
State-owned China Mobile, which holds a 10% stake in iFlytek, is its largest shareholder. The aggressive pricing aims to make advanced AI technology more accessible to the public while challenging the market dominance of other tech giants.
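As a quick sanity check on the figures above (a minimal sketch using only the article’s quoted rate of 0.21 yuan per 10,000 tokens; the 100,000-token count is an illustrative assumption implied by the 2.1-yuan example), the cost arithmetic works out as follows:

```python
# Illustrative cost check for iFlytek's quoted Spark Pro/Max pricing.
# Rate from the article: 0.21 yuan per 10,000 tokens.
RATE_YUAN_PER_10K_TOKENS = 0.21

def spark_cost_yuan(tokens: int) -> float:
    """Cost in yuan to generate `tokens` tokens at the quoted rate."""
    return tokens / 10_000 * RATE_YUAN_PER_10K_TOKENS

# The 2.1-yuan figure for a novel-length output implies roughly
# 100,000 tokens of generated text:
print(round(spark_cost_yuan(100_000), 2))  # 2.1
```

The two quoted numbers are consistent with each other: 2.1 yuan at 0.21 yuan per 10,000 tokens corresponds to about 100,000 tokens of output.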
A recent PwC (PricewaterhouseCoopers International Limited) report highlights that sectors of the global economy with high exposure to AI are experiencing significant productivity gains and wage increases. The study found that productivity growth in AI-intensive industries is nearly five times faster than in sectors with less AI integration. In the UK, job postings requiring AI skills are growing 3.6 times faster than other listings, with employers offering a 14% wage premium for these roles, particularly in legal and IT sectors.
Since the launch of ChatGPT in late 2022, AI’s impact on employment has been widely debated. However, PwC’s findings indicate that AI has influenced the job market for over a decade. Job postings for AI specialists have increased sevenfold since 2012, far outpacing the growth for other roles. The report suggests that AI is being used to address labour shortages, which could benefit countries with ageing populations and high worker demand.
PwC’s 2024 global AI jobs barometer reveals that the growth in AI-related employment contradicts fears of widespread job losses due to automation. Despite predictions of significant job reductions, the continued rise in AI-exposed occupations suggests that AI is creating new industries and transforming the job market. According to PwC UK’s chief economist, Barret Kupelian, as AI technology advances and spreads across more sectors, its potential economic impact could be transformative, marking only the beginning of its influence on productivity and employment.
A Wisconsin man, Steven Anderegg, has been charged by federal authorities with creating over 10,000 sexually explicit and abusive images of children using AI. The 42-year-old allegedly used the popular AI tool Stable Diffusion to generate around 13,000 hyper-realistic images depicting prepubescent children in disturbing and explicit scenarios. Authorities discovered the images on his laptop following a tip-off from the National Center for Missing & Exploited Children (NCMEC), which had flagged his Instagram activity.
Anderegg’s charges include creating, distributing, and possessing child sexual abuse material (CSAM), as well as sending explicit content to a minor. If convicted, he faces up to 70 years in prison. The case marks one of the first instances in which the FBI has pursued charges over AI-generated child abuse material. The rise in such cases has prompted significant concern among child safety advocates and AI researchers, who warn of AI’s growing potential to facilitate the creation of harmful content.
Reports of online child abuse have surged, partly due to the proliferation of AI-generated material. In 2023, the NCMEC noted a 12% increase in flagged incidents, straining their resources. The Department of Justice has reaffirmed its commitment to prosecuting those who exploit AI to create CSAM, emphasising that AI-generated explicit content is equally punishable under the law.
Stable Diffusion, an open-source AI model, has been identified as a tool used to generate such material. Stability AI, the company behind its development, has stated that the model used by Anderegg was an earlier version created by another startup, RunwayML. Stability AI asserts that it has since implemented stronger safeguards to prevent misuse and prohibits creating illegal content with its tools.
IBM announced it would release a family of AI models as open-source software and assist Saudi Arabia in training an AI system in Arabic. Unlike competitors such as Microsoft, which charge for their AI models, IBM provides open access to its ‘Granite’ AI models, allowing companies to customise them. These models aim to help software developers complete computer code more efficiently. IBM monetises this by offering a paid tool, Watsonx, to help run the customised models within data centres.
IBM’s approach focuses on profiting when customers utilise the AI models, regardless of their origin or data centre location. IBM’s CEO, Arvind Krishna, emphasised that they believe in the early potential of generative AI and the benefits of competition for consumers. He also highlighted the importance of being safe and responsible in AI development.
Additionally, IBM announced a collaboration with the Saudi Data and Artificial Intelligence Authority to train its ‘ALLaM’ Arabic language model using Watsonx. The initiative will enhance IBM’s AI capabilities by adding the ability to understand multiple Arabic dialects.
On Tuesday, Chinese tech giants Alibaba and Baidu significantly reduced prices for their large language models (LLMs), intensifying a price war in the cloud computing sector. Alibaba’s cloud unit announced cuts of up to 97% on its Tongyi Qwen models, with the Qwen-Long model now costing only 0.0005 yuan per 1,000 tokens, down from 0.02 yuan. Baidu quickly followed, making its Ernie Speed and Ernie Lite models free for all business users.
The price reduction comes amid an ongoing price war in China’s cloud computing industry, with Alibaba and Tencent already lowering prices for their cloud services. Cloud vendors in China have increasingly relied on AI chatbot services to boost sales, spurred by the popularity of OpenAI’s ChatGPT. The competition has now extended to the LLMs powering these chatbots, potentially impacting profit margins.
Other companies have also joined the fray. Bytedance recently cut the prices of its Doubao LLMs for business users to 99.3% below the industry average. Chinese startup Moonshot introduced a tipping feature that lets users pay to prioritise chatbot access, targeting both business and individual users. Baidu was the first in China to charge consumers for its LLM products, with its Ernie 4 model costing 59 yuan per month.
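The scale of Alibaba’s cut can be sanity-checked from the two per-token rates quoted above (a minimal illustration using only the article’s figures, which work out to roughly a 97% reduction):

```python
# Sanity check on the quoted Qwen-Long price cut (article figures only).
old_rate = 0.02    # yuan per 1,000 tokens, before the cut
new_rate = 0.0005  # yuan per 1,000 tokens, after the cut

cut = 1 - new_rate / old_rate  # fractional price reduction
factor = old_rate / new_rate   # how many times cheaper the new rate is

print(f"{cut:.1%} reduction ({factor:.0f}x cheaper)")
```

The quoted rates imply the new price is a fortieth of the old one, consistent with the headline figure of a cut of about 97%.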
Microsoft is pushing generative AI to the forefront of Windows and its PCs. At its Build developer conference, the company unveiled new Copilot+ PCs and AI-powered features like Recall, designed to help users find past apps and files. These AI-first devices, featuring dedicated chips called NPUs (neural processing units), will be deeply integrated into Windows 11. The first models will use Qualcomm’s Snapdragon X Elite and Plus chips, promising long battery life, with Intel and AMD also on board to create processors for these devices.
In addition to the Copilot+ PCs, Microsoft introduced new Surface devices, including the Surface Laptop and Surface Pro. The Surface Laptop now features up to 22 hours of battery life and faster performance, while the new Surface Pro boasts a 90% speed increase, an OLED display, and an upgraded front-facing camera. Both devices support Wi-Fi 7 and have haptic feedback features.
Microsoft’s upcoming Recall feature for Windows 11 will allow users to ‘remember’ apps and content accessed weeks or months ago, enabling them to find past activities easily. Recall can associate colours, images, and more, allowing natural language searches. Microsoft emphasises user privacy, ensuring that all data remains on the device and is not used for AI training.
Other AI enhancements include Super Resolution for upscaling old photos and Live Captions with translations for over 40 languages. These features are powered by the Windows Copilot Runtime, which supports generative AI-powered apps even without an internet connection. CapCut, a popular video editor, will utilise this runtime to enhance its AI capabilities.
Google has announced the rollout of ‘AI Overviews’, previously known as the Search Generative Experience (SGE), marking a significant shift in how users experience search results. The feature provides AI-generated summaries at the top of many search results, initially for users in the US and soon globally. Liz Reid, Google’s head of Search, explained that the advancement simplifies the search process by handling more complex tasks, allowing users to focus on what matters most to them.
At the recent I/O developer conference, Google unveiled various AI-driven features to enhance search capabilities. These include the ability to search using video via Lens, a planning tool for generating trip itineraries or meal plans from a single query, and AI-organised results pages tailored to specific needs, like finding restaurants for different occasions. Google’s Gemini AI model powers these innovations, summarising web content and customising results based on user input.
Despite the extensive integration of AI, only some searches will involve these advanced features. Reid noted that simple searches like navigating a specific website won’t benefit from AI enhancements. However, AI can provide comprehensive and detailed responses for more complex queries.
Why does it matter?
Google aims to balance creativity with factual accuracy in its AI outputs, ensuring reliable information while maintaining a human perspective, especially valued by younger users. Google’s shift towards AI-enhanced search represents a broader evolution from traditional keyword searches to more dynamic and interactive user experiences. By enabling natural language queries and providing rich, contextual answers, Google seeks to make searching more intuitive and efficient. The approach not only aims to attract more users but also promises to transform how people interact with information online, reducing the need for extensive typing and multiple tabs.