EU regulators work with tech giants on AI rules

According to Ireland’s Data Protection Commission, leading global internet companies are working closely with EU regulators to ensure their AI products comply with the bloc’s stringent data protection laws. The commission, which oversees compliance for major firms like Google, Meta, Microsoft, TikTok, and OpenAI, has yet to exercise its full regulatory power over AI but may require significant changes to business models to uphold data privacy.

AI raises several potential privacy issues, such as whether companies may use publicly available data to train AI models and what legal basis justifies processing personal data. AI operators must also guarantee individuals’ rights, including the right to have their data erased, and address the risk of AI models generating incorrect personal information. Tech giants have engaged with the regulator extensively, seeking guidance on their AI innovations, particularly large language models.

Following consultations with the Irish regulator, Google has already agreed to delay and modify its Gemini AI chatbot. Ireland leads regulation because many tech firms have their EU headquarters there, but other EU regulators can influence decisions through the European Data Protection Board. AI operators must comply with both the new EU AI Act and the General Data Protection Regulation, which imposes fines of up to 4% of a company’s global turnover for non-compliance.
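
For a sense of what that ceiling means in practice, here is a purely illustrative calculation; the turnover figure is hypothetical, not drawn from any of the companies named above:

```python
# Hypothetical illustration of the GDPR fine ceiling (4% of global annual turnover).
global_turnover = 100e9            # assumed annual global turnover, in USD
max_fine = 0.04 * global_turnover  # GDPR’s upper bound for non-compliance

print(f"Maximum GDPR fine: ${max_fine / 1e9:.0f}B")  # $4B for a $100B company
```

For firms of the size the Irish regulator oversees, the ceiling quickly runs into the billions of dollars.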

Why does it matter?

Ireland’s broad regulatory authority means that companies failing to perform due diligence on new products could be forced to alter their designs. As the EU’s AI regulatory landscape evolves, these tech firms must navigate both the AI Act and existing data protection laws to avoid substantial penalties.

OpenAI CEO leads safety committee for AI model training

OpenAI has established a Safety and Security Committee to oversee the training of its next AI model, the company announced on Tuesday. CEO Sam Altman will lead the committee alongside directors Bret Taylor, Adam D’Angelo, and Nicole Seligman. The committee makes safety and security recommendations to OpenAI’s board.

The committee’s initial task is to review and enhance OpenAI’s existing safety practices over the next 90 days, after which it will present its findings to the board. Following the board’s review, OpenAI plans to share the adopted recommendations publicly. The move follows the disbanding of OpenAI’s Superalignment team earlier this month and the departure of the team’s leaders, former Chief Scientist Ilya Sutskever and Jan Leike.

Other members of the new committee include technical and policy experts Aleksander Madry and Lilian Weng, as well as head of alignment science John Schulman. Newly appointed Chief Scientist Jakub Pachocki and head of security Matt Knight will also sit on the committee, contributing to the safety and security oversight of OpenAI’s projects and operations.

Leadership vacancy stalls EU AI office development

Two months after the European Parliament passed the landmark AI Act, the EU Commission office responsible for its implementation remains understaffed and leaderless. Although such a pace is common for public institutions, stakeholders worry it may delay the implementation of the act’s hundreds of pages of rules, especially with some provisions taking effect by the end of the year.

The EU Commission’s Directorate-General for Communications Networks, Content and Technology (DG Connect), which houses the AI Office, is undergoing a reorganisation. Despite reassurances from officials that preparations are on track, concerns persist about the office’s limited budget, slow hiring process, and the overwhelming workload on the current staff. Three Members of the European Parliament (MEPs) have expressed dissatisfaction with the transparency and progress of the recruitment and leadership processes.

The European Commission has identified 64 deliverables for the AI Office, including the prohibitions on certain AI uses that take effect by the year’s end. Codes of practice for general-purpose models, such as those behind ChatGPT, will be developed within nine months of the legislation’s enactment. Despite recent recruitment efforts, including two positions opened in March and additional roles for lawyers and AI ethicists soon to be advertised, the hiring process is expected to take several more months.

Why does it matter?

A significant question remains regarding the leadership of the AI Office, as the Commission has yet to announce candidates or details of the selection process. Speculation has centred on MEP Dragoș Tudorache, who has been active in AI policy and is not seeking re-election, though he has not confirmed any post-tenure plans. The Commission aims to finalise the office’s staffing and leadership to ensure the smooth implementation of the AI Act.

China’s AI chipmakers closing gap on global leaders

China’s domestic AI chipmakers are rapidly closing the gap on international leaders, according to Xu Bing, co-founder of SenseTime Group Inc. Despite the significant lag in computational power compared to the US, China possesses the talent and data necessary to advance in the AI field, Xu stated during an interview at the UBS Asian Investment Conference in Hong Kong. SenseTime, a leading AI company in China, faces challenges due to US sanctions that restrict access to advanced AI technology, such as Nvidia’s accelerators.

The US trade controls have spurred the development of domestic alternatives from companies like Huawei Technologies and Shanghai Biren Technology, both also affected by US restrictions. Xu emphasised that although Asia faces a considerable shortfall in computational resources, the region is abundant in talent and data. He noted that China’s AI chip industry is catching up quickly, with SenseTime collaborating with local semiconductor firms to enhance its computing capabilities.

While the exact gap between Chinese and US AI technology is uncertain, estimates put it at between one and three years, and Xu is optimistic that the disadvantage in computing power will be temporary. He believes the disparity in computing resources will diminish over time, viewing computing power as a commodity that China will eventually acquire in sufficient quantity. Notable Chinese companies making strides in AI chips include Moore Threads Intelligent Technology (Beijing) Co. and Huawei, alongside key players like Baidu Inc. and Naura Technology Group Ltd, which have received government attention and support.

Musk’s xAI secures $6 billion investment for AI development

Elon Musk’s AI startup, xAI, has secured $6 billion in a recent funding round, one of the largest deals in the burgeoning AI sector. The funding positions xAI to compete fiercely with industry rivals such as OpenAI, Microsoft, and Google. Notable backers include Valor Equity Partners, Vy Capital, Andreessen Horowitz, Sequoia Capital, and Fidelity, alongside prominent figures like Prince Alwaleed Bin Talal and Kingdom Holding.

The funding round, which values xAI at $18 billion pre-money, underscores Musk’s significant presence in AI. An early and prominent figure in AI entrepreneurship, Musk’s ventures extend beyond xAI to his leadership of Tesla, a pioneer in self-driving technology. His involvement with OpenAI, however, has been contentious, leading to legal disputes and accusations that the company has deviated from its founding mission.
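
Since a pre-money figure excludes the new capital, the reported numbers imply a post-money valuation of roughly $24 billion. A minimal back-of-envelope sketch of that round arithmetic, using only the figures reported above and assuming a straightforward equity round with no other adjustments:

```python
# Back-of-envelope round arithmetic from the reported figures (illustrative only).
pre_money = 18e9    # reported pre-money valuation, in USD
new_capital = 6e9   # reported size of the funding round, in USD

post_money = pre_money + new_capital       # pre-money + new cash = post-money
investor_stake = new_capital / post_money  # fraction of the company sold

print(f"Implied post-money valuation: ${post_money / 1e9:.0f}B")  # ~$24B
print(f"Implied stake sold in the round: {investor_stake:.0%}")   # ~25%
```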

xAI, which emerged just last year and is closely tied to the social network X, has already made strides in AI development, most notably with Grok, its ChatGPT rival, first released as the Grok 1.0 model. The company’s recent release of the Grok 1.5 model and its exploration of multimodal capabilities indicate a commitment to advancing AI technology. Despite its ambitions for ‘truthful’ AI systems, concerns have arisen over Grok’s news summary feature, which has reportedly generated misleading information.

Why does it matter?

With the new financing, xAI aims to bring its initial products to market, enhance infrastructure, and accelerate research and development efforts. Additionally, the company seeks partnerships to expand Grok’s user base beyond X, signalling its intent to scale its AI innovations and influence in the global market.

India boosts military AI efforts amid China rivalry

India is ramping up its efforts in AI, not only for commercial purposes but also for military applications, as it seeks to keep pace with its regional rival, China. A report by the Delhi Policy Group put India’s annual spending on AI at around $50 million, dwarfed by China’s investment of more than 30 times that amount, upwards of $1.5 billion a year. Recognising the strategic importance of AI, India aims to build indigenous AI capabilities to strengthen its defence.

The Indian military has been actively exploring AI applications, including the recent launch of a robotic buddy designed to carry out tasks such as surveillance and supporting soldiers in rugged terrains. The Signals Technology Evaluation and Adaptation Group (STEAG) also spearheads research into AI and other emerging technologies to enhance modern warfare capabilities. India’s collaboration with the US on AI development further underscores its commitment to leveraging cutting-edge technology for defence purposes.

Why does it matter?

AI holds significant potential for enhancing military intelligence, training, and education, offering insights into real-time simulations and exercises. However, concerns remain about AI technology’s ethical implications and potential misuse, including the proliferation of deepfakes and disinformation campaigns. While India boasts a strong civilian AI sector, it faces stiff competition from China’s well-funded and centralised military AI system.

As AI technology continues to evolve, India aims to play a leading role in shaping the ethical and legal frameworks governing its use in warfare and society at large. With ongoing research and investment in AI, India seeks to ensure that its military remains at the forefront of technological innovation, positioning itself as a key player in the future landscape of high-tech warfare.

Musk’s xAI plans supercomputer to enhance AI chatbot Grok

Elon Musk has revealed that his AI startup, xAI, plans to build a supercomputer to power its AI chatbot Grok, aiming to have it running by fall 2025. Musk reportedly suggested that xAI might collaborate with Oracle to develop this vast computational resource. When complete, the supercomputer would use Nvidia’s flagship H100 GPUs and be four times larger than the biggest GPU clusters currently available.

The Grok 2 model already required about 20,000 Nvidia H100 GPUs for training, and Musk anticipates that future models, like Grok 3, will need around 100,000 of these chips. Nvidia’s H100 GPUs are in high demand due to their dominance in the AI data centre chip market, making them challenging to procure.
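
Those figures invite some rough arithmetic. The sketch below takes the reported chip counts at face value and adds one outside assumption, the commonly cited ~700 W power rating of an H100 SXM module, to gauge the scale involved; none of the derived numbers come from xAI itself:

```python
# Rough scale estimate from the reported GPU counts (illustrative assumptions only).
grok2_gpus = 20_000   # reported H100 count used to train Grok 2
grok3_gpus = 100_000  # Musk’s anticipated H100 count for Grok 3
h100_watts = 700      # assumed per-GPU power rating (H100 SXM, commonly cited)

scale_up = grok3_gpus / grok2_gpus            # 5x more chips per generation
gpu_power_mw = grok3_gpus * h100_watts / 1e6  # megawatts for the GPUs alone

print(f"Grok 3 vs Grok 2 GPU count: {scale_up:.0f}x")
print(f"GPU power draw alone: ~{gpu_power_mw:.0f} MW (excluding CPUs, networking, cooling)")
```

At around 70 MW for the GPUs alone, before CPUs, networking, and cooling, the planned cluster sits firmly at dedicated-data-centre scale.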

Musk established xAI last year to compete with AI powerhouses such as Microsoft-backed OpenAI and Alphabet’s Google. His ambitious plan to build the supercomputer underscores his commitment to advancing AI technology and maintaining a competitive edge in the rapidly evolving industry.

Spotify tests Spanish-speaking AI DJ feature

Spotify is expanding its AI DJ feature by developing a version that speaks Spanish. Tech expert Chris Messina discovered the new AI DJ, called ‘DJ Livi’, in the app’s code. It would be the first language expansion for the AI DJ, which initially launched last year in English under the name ‘DJ X’. The new feature is expected to debut in Mexico, with the potential for broader availability wherever Spanish is spoken.

While Spotify has not officially confirmed the rollout of DJ Livi, the company acknowledged that it frequently tests new features to enhance user experience; a spokesperson emphasised that some tests lead to broader implementations while others simply provide valuable insights. The development aligns with the global popularity of Spanish and its significant presence in the US, where over 42 million people speak the language at home.

In addition to the AI DJ, Spotify is exploring other AI technologies, such as host-read ads for podcasts and personalised AI playlists. CEO Daniel Ek has indicated that AI will be crucial in personalisation, advertising, and podcast summaries. These efforts highlight Spotify’s commitment to leveraging AI to improve its services and cater to a diverse user base.

Nvidia’s latest AI chip struggles in China market

Nvidia’s latest AI chip, the H20, tailored for the Chinese market, is struggling with weak demand, with prices dropping below those of rival Huawei’s Ascend 910B chip. Despite being Nvidia’s most advanced product available in China, the H20 is in such abundant supply that it has plainly failed to gain traction. This comes as Nvidia faces stiff competition and US sanctions that have significantly impacted its business in China, a market that contributed 17% of its fiscal 2024 revenue.

The competitive pressure and sanctions create uncertainty for Nvidia’s prospects in China. Senior executives acknowledged a substantial drop in data centre revenue from China since the new export control restrictions were implemented. Market analyst Hebe Chen noted that Nvidia is trying to balance maintaining its presence in China with navigating US tensions, while preparing for potentially worse outcomes in the long term.

Huawei’s aggressive expansion and increased shipments of its Ascend 910B chip, which reportedly outperforms the H20 on some metrics, further challenge Nvidia. While the H20 has drawn some orders from major Chinese tech firms like Alibaba, its success is constrained by Beijing’s preference for domestically produced chips. The significant price gap between Nvidia’s H20 and Huawei’s 910B makes the squeeze on Nvidia’s margins apparent as it competes in a market increasingly dominated by local players.

FCC proposes $6 million fine for scammer impersonating US President Biden in robocalls

The FCC has proposed a $6 million fine against a scammer who used voice-cloning technology to impersonate US President Biden in a series of illegal robocalls during the New Hampshire primary election. This incident serves as a stern warning to other potential high-tech scammers about the misuse of generative AI in such schemes. In January, many New Hampshire voters received fraudulent calls mimicking President Biden, urging them not to vote in the primary. The voice-cloning technology, which has become widely accessible, enabled this deception with just a few minutes of Biden’s publicly available speeches.

The FCC and other law enforcement agencies have made it clear that using fake voices to suppress votes or for other malicious activities is strictly prohibited. Loyaan Egal, the chief of the FCC’s Enforcement Bureau, emphasised their commitment to preventing the misuse of telecommunications networks for such purposes. The primary perpetrator, political consultant Steve Kramer, collaborated with the disreputable Life Corporation and telecom company Lingo, among others, to execute the robocall scheme.

While Kramer is accused of violating several rules, there are currently no criminal charges against him or his associates; the FCC’s power is limited to civil penalties, so further action requires cooperation with local or federal law enforcement. Although the $6 million fine represents a significant penalty, the amount actually paid may turn out to be lower. Kramer has the opportunity to respond to the allegations, and separate action is being taken against Lingo, which could face further fines or the loss of its licences.

Following this case, the FCC officially declared in February that AI-generated voices are illegal to use in robocalls. This decision underscores the agency’s stance on generative AI and its potential for abuse, aiming to prevent future incidents of voter suppression and other fraudulent activities.