Salesforce launches first AI centre in London

Salesforce has chosen London for its first AI centre, where experts, developers, and customers will collaborate on innovation and skill development. The US cloud software company, which is hosting its annual London World Tour event, announced last year that it would invest $4 billion in the UK over five years, with a focus on AI innovation.

Zahra Bahrololoumi, CEO of Salesforce UK and Ireland, highlighted customer enthusiasm for AI’s benefits while noting caution about potential risks. She emphasised the importance of trust in AI adoption, citing the ‘Trust Layer’ in Salesforce’s Einstein technology, which protects customer data.

Moreover, Salesforce’s dedication to responsible AI goes beyond data security. Bahrololoumi emphasised the company’s commitment to making AI a force for good, and its message to customers and partners is clear: it is deeply committed to collaborating closely with them to ensure that this transformative technology has a positive impact.

Intel CEO announces company ambitions in line with US semiconductor policy

In a recent interview, Intel CEO Pat Gelsinger outlined how Intel intends to remain a relevant actor in the Chinese chip market while also scaling up production in the US. The interview took place at the Computex tech conference in Taipei, where Intel unveiled its new Xeon 6 processor, designed for data centres. Its release comes at a time when tech giants are challenging Nvidia’s chip dominance.

Gelsinger aims to build an Intel foundry in the US, after the company was incentivised to expand its US facilities with as much as $8.5 billion in grants and $11 billion in loans under the CHIPS and Science Act. In its release, the White House described the move as a step towards ‘protecting national security’ and increasing the US share of global chip production to ‘20% […] by the end of the decade’.

“The capital is critical. We said that we have to have economic competitiveness if we build these factories in the US, and that’s what the CHIPS Act has done. It’s created a level playing field if I were building a factory in Asia versus US,” Gelsinger said.

Why does it matter?

At the same time, Gelsinger reiterated the importance of the Chinese market to his company. “China is a big market for Intel today, and one that we’re investing in to be a big market for Intel tomorrow as well,” he said. Intel has been fighting to recover its global market share since 2017, when South Korea’s Samsung overtook it as the largest chipmaker by revenue; Taiwan Semiconductor Manufacturing Company reportedly overtook Samsung in 2023.

Microsoft, OpenAI and Nvidia face antitrust scrutiny in the US

The US Justice Department and the Federal Trade Commission (FTC) have agreed to proceed with antitrust investigations into Microsoft, OpenAI, and Nvidia’s dominance in the AI industry. Under the agreement, the Justice Department will focus on Nvidia’s potential antitrust violations, while the FTC will examine Microsoft and OpenAI’s conduct. Microsoft has a significant stake in OpenAI, having invested $13 billion in its for-profit subsidiary.

The regulators’ deal, expected to be finalised soon, reflects increased scrutiny of the AI sector. The FTC is also investigating Microsoft’s $650 million deal with AI startup Inflection AI. This action follows a January order requiring several tech giants, including Microsoft and OpenAI, to provide information on AI investments and partnerships.

Why does it matter?

Last year, the FTC began investigating OpenAI for potential consumer protection law violations. US antitrust chief Jonathan Kanter recently expressed concerns about the AI industry’s reliance on vast data and computing power, which could reinforce the dominance of major firms. Microsoft, OpenAI, Nvidia, the Justice Department, and the FTC have not commented on the ongoing investigations.

Meta faces EU complaints over AI data use

Meta Platforms is facing 11 complaints over proposed changes to its privacy policy that could violate EU privacy regulations. The changes, set to take effect on 26 June, would allow Meta to use personal data, including posts and private images, to train its AI models without user consent. Advocacy group NOYB has urged privacy watchdogs to take immediate action against these changes, arguing that they breach the EU’s General Data Protection Regulation (GDPR).

Meta claims it has a legitimate interest in using users’ data to develop its AI models, which can be shared with third parties. However, NOYB founder Max Schrems contends that the European Court of Justice has previously ruled against Meta’s arguments for similar data use in advertising, suggesting that the company is ignoring these legal precedents. Schrems criticises Meta’s approach, stating that the company should obtain explicit user consent rather than complicating the opt-out process.

In response to the impending policy changes, NOYB has called on data protection authorities across multiple European countries, including Austria, Germany, and France, to initiate an urgent procedure to address the situation. If found in violation of the GDPR, Meta could face fines of up to 4% of its global annual turnover.

Chinese AI chip firms downgrading designs to secure TSMC production

Chinese AI chip firms, including industry leaders such as MetaX and Enflame, are downgrading their chip designs in order to comply with Taiwan Semiconductor Manufacturing Company’s (TSMC) stringent supply chain security protocols and regulatory requirements. This strategic adjustment comes amidst heightened scrutiny and restrictions imposed by the US on semiconductor exports to Chinese companies, which include limitations on access to the advanced manufacturing technologies critical for AI chip production.

The US has imposed strict export controls to obstruct China’s military advancements in AI and supercomputing. These controls include restrictions on sophisticated processors from companies like Nvidia, as well as on the chipmaking equipment crucial for advanced semiconductor production. The move has prevented TSMC and other overseas chip manufacturers that use US tools from fulfilling orders for these restricted technologies.

In response to these restrictions, the top Chinese AI chip firms MetaX and Enflame reportedly submitted downgraded chip designs to TSMC in late 2023. MetaX, founded by former Advanced Micro Devices (AMD) executives and backed by state support, had to introduce the C280 chip after its more advanced C500 graphics processing unit (GPU) ran out of stock in China earlier in the year. Enflame, also Shanghai-based and backed by Tencent, faces similar challenges.

Why does it matter?

The decision to downgrade chip designs to meet production demands reflects the delicate balance between technological advancement and supply chain resilience. While simplifying designs may expedite production and mitigate supply risks in the short term, it also raises questions about long-term innovation and competitiveness. The ability to innovate and deliver cutting-edge AI technologies hinges on access to advanced chip manufacturing processes, which are increasingly concentrated among a few global players.

OpenAI insiders call for stronger oversight and whistleblower protections in open letter

On Tuesday, a group of current and former OpenAI employees issued an open letter warning that leading AI companies lack the transparency and accountability needed to address potential risks. The letter highlights AI safety concerns, such as deepening inequalities, misinformation, and loss of control over autonomous systems, potentially leading to catastrophic outcomes.

The 16 signatories, including Google DeepMind staff, emphasised that AI firms have financial incentives to avoid effective oversight and criticised their weak obligations to share critical information. They called for stronger whistleblower protections, noting that confidentiality agreements often prevent employees from raising concerns. Some current OpenAI employees signed anonymously, fearing retaliation. AI pioneers like Yoshua Bengio, Geoffrey Hinton, and Stuart Russell also endorsed the letter, criticising inadequate preparations for AI’s dangers.

The letter also calls on AI companies to commit to key principles in order to maintain a certain level of accountability and transparency. Those principles are:

- not to enter into or enforce any agreement that prohibits ‘disparagement’ or criticism of the company for risk-related concerns, nor to retaliate for risk-related criticism;
- to facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organisation with relevant expertise;
- to support a culture of open criticism and allow current and former employees to raise risk-related concerns about the company’s technologies to the public, the company’s board, regulators, or an appropriate independent organisation with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected.

Why does it matter?

In response, OpenAI defended its record, citing its commitment to safety, rigorous debate, and engagement with various stakeholders. The company highlighted its anonymous integrity hotline and newly formed Safety and Security Committee as channels for employee concerns. The critique of OpenAI comes amid growing scrutiny of CEO Sam Altman’s leadership. The concerns raised by OpenAI insiders highlight the critical need for transparency and accountability in AI development: ensuring that AI companies are effectively overseen and held accountable, and that insiders can speak out about unethical or dangerous practices without fear of retaliation, is a pivotal safeguard for keeping the public and decision makers informed about AI’s potential capabilities and risks.

Musk diverts Nvidia chips from Tesla to X Corp amid logistical hurdles

Logistical issues led Elon Musk to redirect numerous Nvidia chips, initially intended for Tesla’s electric vehicles, to X Corp. On Tuesday, Musk explained that Tesla had nowhere to send the Nvidia chips, so they would have just remained in storage.

His comments came in response to a CNBC report citing an Nvidia memo, which stated that 12,000 of its top AI chips, originally meant for Tesla, were sent to X instead. Future shipments intended for X were later reassigned to Tesla.

Musk also announced that the Gigafactory in Texas is nearly complete and will house 50,000 H100 chips. He mentioned that about half of Tesla’s $10 billion AI-related spending this year will be for internal use, including the AI inference computer and Dojo supercomputer. He noted that Nvidia hardware accounts for two-thirds of the cost of building AI training superclusters, and Tesla plans to spend $3-4 billion on Nvidia hardware this year. Tesla is working on its own supercomputer to advance driverless-car technology, aiming to increase the number of active H100s from 35,000 to 85,000 by year-end.
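As a rough, back-of-the-envelope cross-check of those figures (assuming, purely for illustration, that the planned Nvidia spend corresponds to the training-supercluster build), the implied budgets line up:

```python
# Illustrative consistency check of the reported Tesla AI-spend figures.
# All inputs are taken from the reporting above; the mapping of Nvidia
# spend to the supercluster build is an assumption, not a reported fact.

total_ai_spend = 10e9          # ~$10B total AI-related spend this year
internal_share = 0.5           # "about half" earmarked for internal use
nvidia_low, nvidia_high = 3e9, 4e9  # planned Nvidia hardware spend
nvidia_cost_fraction = 2 / 3   # Nvidia hardware ~ two-thirds of supercluster cost

internal_budget = total_ai_spend * internal_share   # ≈ $5B for internal use
cluster_low = nvidia_low / nvidia_cost_fraction     # ≈ $4.5B implied cluster cost
cluster_high = nvidia_high / nvidia_cost_fraction   # ≈ $6.0B implied cluster cost

print(f"Internal-use budget: ${internal_budget / 1e9:.1f}B")
print(f"Implied supercluster cost: ${cluster_low / 1e9:.1f}B-${cluster_high / 1e9:.1f}B")
```

The implied $4.5-6 billion supercluster cost sits in the same range as the roughly $5 billion earmarked for internal use, so the reported figures are broadly consistent with one another.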

Why does it matter?

This situation has sparked criticism that Musk’s focus on AI and robotics might detract from Tesla’s core car business. Musk, who currently holds 13% of Tesla’s shares directly and about 21% including options, has requested 25% ownership to increase his influence. In January, he threatened to take his advanced technology ideas elsewhere if he was not granted more ownership.

NewsBreak’s AI error sparks controversy

Last Christmas Eve, NewsBreak, a popular news app, published a false report about a shooting in Bridgeton, New Jersey. The Bridgeton police quickly debunked the story, which had been generated by AI, stating that no such event had occurred. NewsBreak, which operates out of Mountain View, California, and has offices in Beijing and Shanghai, removed the erroneous article four days later, attributing the mistake to its content source.

NewsBreak, known for filling the void left by shuttered local news outlets, uses AI to rewrite news from various sources. However, this method has led to multiple errors, including incorrect information about local charities and fictitious bylines. In response to growing criticism, NewsBreak added a disclaimer about potential inaccuracies to its homepage. With over 50 million monthly users, the app primarily targets a demographic of suburban or rural women over 45 without college degrees.

The company has faced legal challenges over its AI-generated content: NewsBreak settled a $1.75 million copyright infringement lawsuit brought by Patch Media, and Emmerich Newspapers reached a settlement in a similar case. Concerns about the company’s ties to China have also been raised, as half of its employees are based there, prompting worries about data privacy and security.

Despite these issues, NewsBreak maintains that it complies with US data laws and operates on US-based servers. The company’s CEO, Jeff Zheng, emphasises its identity as a US-based business, crucial for its long-term credibility and success.

Russian propagandists launch disinformation campaign against Paris Olympics

Russian operatives are intensifying efforts to discredit the upcoming Paris Summer Olympics and undermine support for Ukraine, using both online and offline tactics, according to experts and officials.

Online, those efforts include using AI to create fake videos featuring actor Tom Cruise criticising the International Olympic Committee.

Analysts note a sense of desperation among Russian propagandists, who aim to tarnish the Olympics and thwart Ukraine’s momentum in procuring Western weapons against Russia.

Offline, recent stunts include the placement of symbolic coffins near the Eiffel Tower, alluding to French soldiers in Ukraine. The act fuelled suspicions of Russian involvement, as it came amid French President Macron’s consideration of deploying troops to Ukraine, a prospect that has further angered Russia.

With the Paris Olympics approaching, concerns are also mounting over potential cyber threats, given Russia’s history of disruptive actions during major events, underscoring the need for heightened vigilance and stronger cybersecurity measures.

Young Americans show mixed embrace of AI, survey reveals

Young Americans are embracing generative AI, but few use it daily, according to a recent survey by Common Sense Media, Hopelab, and Harvard’s Center for Digital Thriving. The survey, conducted in October and November 2023 with 1,274 US teens and young adults aged 14-22, found that only 4% use AI tools daily. Additionally, 41% have never used AI, and 8% are unaware of what AI tools are. The main uses for AI among respondents are seeking information (53%) and brainstorming (51%).

Demographic differences show that 40% of white respondents use AI for schoolwork, compared to 62% of Black respondents and 48% of Latino respondents. Looking ahead, 41% believe AI will have both positive and negative impacts in the next decade. Notably, 28% of LGBTQ+ respondents expect mostly negative impacts, compared to 17% of cisgender/straight respondents. Opinions vary widely: some young people view AI as a sign of a changing world and are enthusiastic about its future, while others find it unsettling and concerning.

Why does it matter?

Young people globally share concerns over AI, which the IMF predicts will affect nearly 40% of jobs, with the figure rising to as much as 60% in advanced economies. By comparison, a survey of 1,000 young Hungarians (aged 15-29) found that frequent AI app users are more positive about its benefits, while 38% of occasional users remain sceptical. Additionally, 54% believe humans will maintain control over AI, and 54% of women fear a loss of control, compared to 37% of men.