Nvidia’s latest AI chip struggles in China market

Nvidia’s latest AI chip, the H20, tailored for the Chinese market, is struggling with weak demand, with prices dropping below those of rival Huawei’s Ascend 910B chip. Despite being Nvidia’s most advanced product available in China, the H20 remains in abundant supply, a sign that it has yet to gain traction. This comes as Nvidia faces stiff competition and US sanctions that have significantly impacted its business in China, a market that contributed 17% of its fiscal 2024 revenue.

The competitive pressure and sanctions create uncertainty for Nvidia’s prospects in China. Senior executives have acknowledged a substantial drop in data centre revenue from China since the new export controls took effect. Market analyst Hebe Chen noted that Nvidia is trying to balance maintaining its presence in China with navigating US tensions, while preparing for potentially worse outcomes in the long term.

Huawei’s aggressive expansion and increased shipments of its Ascend 910B chip, which reportedly outperforms the H20 on some metrics, pose a further challenge to Nvidia. While the H20 has attracted some orders from major Chinese tech firms such as Alibaba, its success is constrained by Beijing’s preference for domestically produced chips. With the H20 now priced well below Huawei’s 910B, the squeeze on Nvidia’s margins is apparent as it competes in a market increasingly dominated by local players.

FCC proposes $6 million fine for scammer impersonating US President Biden in robocalls

The FCC has proposed a $6 million fine against a scammer who used voice-cloning technology to impersonate US President Biden in a series of illegal robocalls during the New Hampshire primary election. This incident serves as a stern warning to other potential high-tech scammers about the misuse of generative AI in such schemes. In January, many New Hampshire voters received fraudulent calls mimicking President Biden, urging them not to vote in the primary. The voice-cloning technology, which has become widely accessible, enabled this deception with just a few minutes of Biden’s publicly available speeches.

The FCC and other law enforcement agencies have made it clear that using fake voices to suppress votes or for other malicious activities is strictly prohibited. Loyaan Egal, the chief of the FCC’s Enforcement Bureau, emphasised their commitment to preventing the misuse of telecommunications networks for such purposes. The primary perpetrator, political consultant Steve Kramer, collaborated with the disreputable Life Corporation and telecom company Lingo, among others, to execute the robocall scheme.

While Kramer is accused of violating several rules, there are currently no criminal charges against him or his associates. The FCC’s power is limited to civil penalties, so it must cooperate with local or federal law enforcement for further action. Although the proposed $6 million fine represents a significant penalty, the amount actually paid may be lower due to various factors. Kramer has the opportunity to respond to the allegations, and separate action is being taken against Lingo, which could face further fines or the loss of its licences.

In the wake of the January robocalls, the FCC officially declared in February that the use of AI-generated voices in robocalls is illegal. The decision underscores the agency’s stance on generative AI and its potential for abuse, and it aims to prevent future incidents of voter suppression and other fraudulent activities.

Truecaller allows users to answer calls in their own AI voice

Truecaller, the popular caller ID service, is introducing a new AI feature allowing users to answer phone calls in their ‘own voice’. The innovation, made possible through a partnership with Microsoft, utilises the tech giant’s Personal Voice technology. By recording a short script in their own voice, Truecaller users can create a digital copy, enabling the Assistant to respond to callers in a lifelike manner. This feature enhances the user experience, offering a personalised touch to phone interactions.

The personal voice feature, initially available to paid Truecaller users, represents a significant advancement in AI-powered voice technologies. While users can customise follow-up responses, Truecaller has restricted editing of the introductory greeting template to maintain clarity that callers are interacting with a digital version of the user’s voice. Azure AI Speech’s personal voice feature adds watermarks to identify synthetic audio, ensuring transparency in voice interactions.

Raphael Mimoun, Truecaller’s product director and general manager for Israel, believes that the personal voice feature will revolutionise call management and elevate user experience. The rollout of this feature will commence in select markets, including the US, Canada, India, and Sweden, over the coming weeks. Initially available to public beta users, it will eventually be accessible to all users in eligible regions, promising to deliver innovative solutions to Truecaller’s global user base.

OpenAI strikes major content deal with News Corp

OpenAI, led by Sam Altman, has entered into a significant deal with media giant News Corp, securing access to content from its major publications. The agreement follows a recent content licensing deal with the Financial Times aimed at enhancing the capabilities of OpenAI’s ChatGPT. Such partnerships are essential for training AI models, and they provide a financial boost to news publishers traditionally excluded from the profits generated by internet companies that distribute their content.

The financial specifics of the latest deal remain undisclosed, though the Wall Street Journal, a News Corp publication, reported that it could be valued at over $250 million across five years. Under the deal, content from News Corp’s publications, including the Wall Street Journal, MarketWatch, and the Times, will not be available on ChatGPT immediately upon publication. The move is part of OpenAI’s ongoing efforts to secure diverse data sources, following a similar agreement with Reddit.

The announcement has positively impacted News Corp’s market performance, with shares rising by approximately 4%. OpenAI’s continued collaboration with prominent media platforms underscores its commitment to developing sophisticated AI models capable of generating human-like responses and comprehensive text summaries.

Microsoft’s deal with UAE AI firm sparks security concerns in US

Microsoft’s recent deal with UAE-backed AI firm G42 could involve the transfer of advanced AI technology, raising concerns about national security implications. Microsoft President Brad Smith highlighted that the agreement might eventually include exporting sophisticated chips and AI model weights, although this phase has no set timeline. The deal, which necessitates US Department of Commerce approval, includes safeguards to prevent the misuse of technology by Chinese entities. However, details of these measures remain undisclosed, prompting scepticism among US lawmakers about their adequacy.

Senior US officials have voiced concerns about the agreement, warning of the national security risks posed by advanced AI systems, which could, for example, make it easier to engineer dangerous weapons. Representative Michael McCaul expressed frustration over the lack of a comprehensive briefing for Congress, citing fears of Chinese espionage through UAE channels. Current regulations require notifications and export licences for AI chips, but gaps remain around the export of AI models, prompting legislative efforts to give US officials more explicit control over such exports.

Why does it matter?

The deal, valued at $1.5 billion, was framed as a strategic move to extend US technology influence amid global competition, particularly with China. Although the exact technologies and security measures involved are not fully disclosed, the agreement aims to enhance AI capabilities in regions like Kenya and potentially Turkey and Egypt. Microsoft asserts that G42 will adhere to US regulatory requirements and has implemented a ‘know your customer’ rule to prevent Chinese firms from using the technology for training AI models.

Microsoft emphasises its commitment to ensuring secure global technology transfers, with provisions for imposing financial penalties on G42 through arbitration courts in London if compliance issues arise. While the US Commerce Department will oversee the deal under existing and potential future export controls, how Commerce Secretary Gina Raimondo will handle the approval process remains uncertain. Smith anticipates that the regulatory framework developed for this deal will likely be applied broadly across the industry.

FCC proposes disclosure for AI-generated political ads

The US Federal Communications Commission (FCC) has proposed a requirement for political ads to disclose the use of AI-generated content. Chairwoman Jessica Rosenworcel announced Wednesday that the FCC would seek public comments on this potential rule. The initiative aims to ensure transparency in political advertising, allowing consumers to know when AI tools are utilised in the ads they view.

Under the proposed framework, candidate and issue ads would need to include disclosures about AI-generated content for cable, satellite TV, and radio providers, but not for streaming services like YouTube, which fall outside FCC regulation. The first step involves defining what constitutes AI-generated content and determining if such a regulation is necessary. The proposal marks the beginning of a fact-finding mission to develop new regulations.

The FCC document emphasises the public interest in protecting viewers from misleading or deceptive programming and promoting informed decision-making. While the proposal is still in its early stages, it reflects a growing concern about the impact of AI on political communication. The rule, if implemented, could deter low-effort AI-generated ads and help address deceptive practices in political advertising.

The FCC will gather more information on how this rule would interact with the Federal Trade Commission and the Federal Election Commission, which oversee advertising and campaign regulations. The timeline for the rule’s enforcement remains uncertain, pending further review and public input.

iFlytek slashes AI model prices amid tech price war in China

AI firm iFlytek has entered a price war among China’s top tech companies by significantly reducing the cost of its ‘Spark’ large language model (LLM). iFlytek’s move follows recent price cuts by Alibaba, Baidu, and ByteDance for their own LLMs used in generative AI products. Spark Lite, launched last September, is now free for public use, while the Spark Pro and Max versions are priced at just 0.21 yuan (less than 3 cents) per 10,000 tokens, roughly a fifth of what competitors charge.

iFlytek claims that Spark surpasses ChatGPT 3.5 in Chinese language tasks and performs comparably in English. The Hefei-based company, renowned for its voice recognition technology, highlighted that Spark’s pricing allows significant cost savings. For instance, Spark Max can generate the entirety of Yu Hua’s novel ‘To Live’ for just 2.1 yuan ($0.29).
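The arithmetic behind that example is easy to check. Below is a minimal sketch using only the figures quoted above (the per-token rate and the quoted cost of the novel; the implied token count is a derived figure, not one reported by iFlytek):

```python
# Back-of-the-envelope check of iFlytek's pricing example, using only
# the figures quoted above: 0.21 yuan per 10,000 tokens for Spark Max,
# and 2.1 yuan as the quoted cost of generating 'To Live'.
price_per_10k_tokens = 0.21   # yuan
quoted_novel_cost = 2.1       # yuan

implied_tokens = quoted_novel_cost / price_per_10k_tokens * 10_000
print(f"2.1 yuan buys about {implied_tokens:,.0f} tokens")  # ~100,000 tokens
```

At the quoted rate, 2.1 yuan covers roughly 100,000 tokens, which is the length the example implicitly assigns to the novel.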

State-owned China Mobile, which holds a 10% stake in iFlytek, is its largest shareholder. The strategic pricing aims to make advanced AI technology more accessible to the public while challenging the market dominance of other tech giants.

AI drives productivity surge in certain industries, report shows

A recent report from PwC (PricewaterhouseCoopers) highlights that sectors of the global economy with high exposure to AI are experiencing significant productivity gains and wage increases. The study found that productivity growth in AI-intensive industries is nearly five times faster than in sectors with less AI integration. In the UK, job postings requiring AI skills are growing 3.6 times faster than other listings, with employers offering a 14% wage premium for these roles, particularly in the legal and IT sectors.

Since the launch of ChatGPT in late 2022, AI’s impact on employment has been widely debated. However, PwC’s findings indicate that AI has influenced the job market for over a decade. Job postings for AI specialists have increased sevenfold since 2012, far outpacing the growth for other roles. The report suggests that AI is being used to address labour shortages, which could benefit countries with ageing populations and high worker demand.

PwC’s 2024 global AI jobs barometer reveals that the growth in AI-related employment contradicts fears of widespread job losses due to automation. Despite predictions of significant job reductions, the continued rise in AI-exposed occupations suggests that AI is creating new industries and transforming the job market. According to PwC UK’s chief economist, Barret Kupelian, as AI technology advances and spreads across more sectors, its potential economic impact could be transformative, marking only the beginning of its influence on productivity and employment.

FBI charges man with creating AI-generated child abuse material

A Wisconsin man, Steven Anderegg, has been charged by the FBI with creating over 10,000 sexually explicit and abusive images of children using AI. The 42-year-old allegedly used the popular AI tool Stable Diffusion to generate around 13,000 hyper-realistic images depicting prepubescent children in disturbing and explicit scenarios. Authorities discovered the images on his laptop following a tip-off from the National Center for Missing & Exploited Children (NCMEC), which had flagged his Instagram activity.

Anderegg’s charges include creating, distributing, and possessing child sexual abuse material (CSAM), as well as sending explicit content to a minor. If convicted, he faces up to 70 years in prison. The case marks one of the first instances of the FBI charging someone for generating AI-created child abuse material. The rise in such cases has prompted significant concern among child safety advocates and AI researchers, who warn of AI’s increasing potential to facilitate the creation of harmful content.

Reports of online child abuse have surged, partly due to the proliferation of AI-generated material. In 2023, the NCMEC noted a 12% increase in flagged incidents, straining their resources. The Department of Justice has reaffirmed its commitment to prosecuting those who exploit AI to create CSAM, emphasising that AI-generated explicit content is equally punishable under the law.

Stable Diffusion, an open-source AI model, has been identified as a tool used to generate such material. Stability AI, the company behind its development, has stated that the model used by Anderegg was an earlier version created by another startup, RunwayML. Stability AI asserts that it has since implemented stronger safeguards to prevent misuse and prohibits creating illegal content with its tools.

IBM releases open-source AI models and partners with Saudi Arabia

IBM announced it would release a family of AI models as open-source software and assist Saudi Arabia in training an AI system in Arabic. Unlike competitors such as Microsoft, which charge for their AI models, IBM provides open access to its ‘Granite’ AI models, allowing companies to customise them. These models aim to help software developers complete computer code more efficiently. IBM monetises this by offering a paid tool, Watsonx, to help run the customised models within data centres.
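As a rough illustration of what that open access looks like in practice, a developer might load a Granite code model through the Hugging Face transformers library along the lines below; the model identifier, prompt, and generation settings are illustrative assumptions, not details from IBM’s announcement:

```python
# Minimal sketch: code completion with an openly released Granite model
# via the Hugging Face transformers library. The model ID is assumed for
# illustration; consult IBM's published model cards for the exact names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3b-code-base"  # assumed/illustrative ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are open, the same model could then be fine-tuned on a company’s own codebase, which is the customisation step IBM’s paid Watsonx tooling is positioned to support.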

IBM’s approach focuses on profiting when customers utilise the AI models, regardless of their origin or data centre location. IBM’s CEO, Arvind Krishna, emphasised that they believe in the early potential of generative AI and the benefits of competition for consumers. He also highlighted the importance of being safe and responsible in AI development.

Additionally, IBM announced a collaboration with the Saudi Data and Artificial Intelligence Authority to train its ‘ALLaM’ Arabic language model using Watsonx. The initiative will enhance IBM’s AI capabilities by adding the ability to understand multiple Arabic dialects.