AI tools create realistic child abuse images, says report

A report from the Internet Watch Foundation (IWF) has exposed a disturbing misuse of AI to generate deepfake child sexual abuse images based on real victims. While the tools used to create these images remain legal in the UK, the images themselves are illegal. The case of a victim, referred to as Olivia, exemplifies the issue. Abused between the ages of three and eight, Olivia was rescued in 2023, but dark web users are now employing AI tools to create new abusive images of her, with one model available for free download.

The IWF report also reveals an anonymous dark web page with links to AI models for 128 child abuse victims. Offenders are compiling collections of images of named victims, such as Olivia, and using them to fine-tune AI models to create new material. Additionally, the report mentions models that can generate abusive images of celebrity children. Analysts found that 90% of these AI-generated images are realistic enough to fall under the same laws as real child sexual abuse material, highlighting the severity of the problem.

US tops AI startup creation, China imposes ideological controls

The United States is the undisputed global leader in AI startups and private-sector investment, according to a report by S&P Global. Between 2013 and 2023, 5,509 AI companies were founded in the US, eclipsing all other countries combined. China, the runner-up, saw 1,446 AI startups during the same period. The United Kingdom, Israel, and Canada followed with 727, 442, and 397 startups, respectively.

Investment in AI also heavily favours the US, with $335.2 billion poured into the sector over the past decade. China’s private-sector investment totalled $103.7 billion, while the UK, Israel, and Canada saw investments of $22.3 billion, $12.8 billion, and $10.6 billion, respectively. The report noted, however, that government investment, particularly in China, is significant and less transparent.

China’s approach to AI is shaped by its government’s ideological demands. The Cyberspace Administration of China (CAC) tests AI models to ensure they align with ‘core socialist values.’ AI companies must monitor content to avoid sensitive topics, including political dissent and references to historical events like the Tiananmen Square massacre.

S&P Global anticipates that private investments in AI startups could reach up to $900 billion globally by 2027, reflecting an annual growth rate of at least 70%. Meanwhile, China’s stringent regulatory environment continues to influence the development and deployment of AI technologies within its borders.

Elon Musk’s AI fashion show goes viral

Elon Musk has caused quite a stir on social media with an AI-generated video featuring global leaders and technology tycoons in avant-garde attire. The video, posted on Musk’s platform X, showcases figures like US President Joe Biden and Prime Minister Narendra Modi on a virtual catwalk. Musk, clad in a futuristic superhero costume themed around Tesla and X, declared it was ‘High time for an AI fashion show!’

The video features Modi in a multi-coloured outfit adorned with geometric symbols and black sunglasses, a striking blend of modern and classic styles. Other notable appearances include Russian President Vladimir Putin in a Louis Vuitton suit, Biden in sunglasses and a wheelchair, and Kim Jong-un in a baggy sweatshirt and gold necklace. Former US President Donald Trump also makes an appearance, adding to the eclectic assortment.

Musk took a potshot at Microsoft co-founder Bill Gates, who appears in the video holding a banner reading ‘Excessive concentration of power’, a reference to the recent major IT outage. Microsoft reported that a faulty software update by cybersecurity firm CrowdStrike affected nearly 8.5 million devices globally.

The video also features Apple CEO Tim Cook with an iPad and former House Speaker Nancy Pelosi in a vibrant Supreme dress. The AI fashion video garnered over 33 million views in just four hours, prompting widespread reactions across the globe.

Samsung plans revolutionary AI phones

Samsung is reportedly exploring new phone designs tailored for generative AI applications. Roh Tae-moon, president of Samsung’s Mobile Experience unit, stated that upcoming ‘AI phones’ will look ‘radically different’ from current models. These new devices are expected to be more portable, incorporating additional sensors and larger screens.

Roh revealed that a significant portion of Samsung’s mobile phone research and development is now focused on these AI-driven phones. Although specific designs were not disclosed, the goal is to move beyond the traditional slim rectangular form that has dominated the market since the iPhone’s debut.

The shift towards AI integration in phones follows Samsung’s introduction of the ‘Galaxy AI’ system, enhancing existing features and adding new tools for users. This move is part of a broader industry trend, with major players like Apple and Google also incorporating AI into their devices.

Competitors have tried to launch AI-specific devices with unique designs, but these have not gained mainstream success. Products like the Rabbit R1 and the Humane AI Pin were criticised for poor performance and battery life, highlighting the challenges in creating functional AI-driven devices.

OpenAI considers developing own AI chip with Broadcom

OpenAI, the maker of ChatGPT, is in discussions with Broadcom and other chip designers about developing a new AI chip. This move aims to address the shortage of the expensive graphics processing units required for developing its AI models, such as ChatGPT, GPT-4, and DALL-E 3.

The Microsoft-backed company is hiring former Google employees who developed the tech giant’s own AI chip and plans to create an AI server chip. OpenAI is exploring the idea of making its own AI chips to ensure a more stable supply of essential components.

OpenAI CEO Sam Altman has ambitious plans to raise billions of dollars to establish semiconductor manufacturing facilities. Potential partners for this venture include Intel, Taiwan Semiconductor Manufacturing Co, and Samsung Electronics.

A spokesperson for OpenAI mentioned that the company is having ongoing conversations with industry and government stakeholders to enhance access to the infrastructure needed for making AI benefits widely accessible.

Google and NBC Universal revamp Olympic coverage with AI

In a pioneering move, Google has contracted with NBC Universal, the official US broadcaster of the Olympic and Paralympic Games, to revamp coverage. The two companies, working with the sports committees, aim to attract younger audiences and viewers who have grown accustomed to watching short clips online.

Under the plan, sports commentators will use Google’s Gemini AI to narrate competitions and generate recaps. The AI will also act as an assistant to anchors, answering questions posed online, supplying them with researched background content, and even entertaining audiences.

Currently, Google’s AI Overviews provide synopses in response to queries posed in Google searches, giving users ready-made short answers without directing them to third-party websites via links.

Musk’s Grok AI struggles with news accuracy

Grok, Elon Musk’s AI model available on the X platform, suffered significant accuracy failures following the attempted assassination of former President Donald Trump. The AI model posted incorrect headlines, including one falsely claiming Vice President Kamala Harris had been shot and another wrongly identifying the shooter as an antifa member. These errors stemmed from Grok’s inability to discern sarcasm and its tendency to repeat unverified claims circulating on X.

After announcing plans to develop TruthGPT, Elon Musk has promoted Grok as a revolutionary tool for news aggregation, leveraging real-time posts from millions of users. Despite its potential, the incident underscores Grok’s limitations, particularly in handling breaking news. The model’s humorous design can also be a drawback, leading to the spread of misinformation and confusion.

The reliance on AI for news summaries raises concerns about accuracy and context, especially during critical events. Former Facebook public-policy director Katie Harbath emphasised the need for human oversight in providing context and verifying facts. The incident with Grok mirrors challenges faced by other AI models, such as OpenAI’s ChatGPT, which includes disclaimers to manage user expectations.

Meta suspends AI use in Brazil amid privacy concerns

Meta has suspended the use of its generative AI (GenAI) tools in Brazil after the country’s data protection authority issued a preliminary ban on its new privacy policy. The suspension follows a decision by Brazil’s National Data Protection Authority (ANPD) to halt Meta’s policy, citing risks to users’ fundamental data rights.

ANPD’s decision arose from concerns over Meta’s use of personal data to train its AI systems without users’ explicit consent. The agency warned of ‘serious and irreparable damage’ to the rights of data subjects and imposed a daily fine of 50,000 reais for non-compliance. Meta expressed disappointment, stating that the decision is a setback for innovation and AI development in Brazil.

The controversy in Brazil reflects broader global challenges for tech companies navigating stringent data privacy laws. In regions like the European Union, similar regulatory hurdles have forced Meta and other tech giants to pause their AI tool rollouts. Human Rights Watch highlighted risks associated with personal data in AI training, noting how personal photos, including those of Brazilian children, have been misused in image datasets, raising significant privacy and ethical concerns.

Meta’s response aligns with its recent actions in Europe, where it withheld its AI models due to regulatory uncertainties. This situation underscores the tension between advancing AI technologies and adhering to evolving data protection regulations.

AI-powered drones to boost Ukraine’s military capabilities

In Ukraine, several startups are advancing AI systems to enhance drone operations, aiming to gain a technological edge in the ongoing conflict. These AI-enabled drones are designed to tackle increasing signal jamming by Russian forces and operate in larger groups, revolutionising modern warfare. The development includes visual systems for target identification, terrain mapping for navigation, and complex programs enabling drones to work in interconnected swarms.

One notable company, Swarmer, is creating software that links drones into a network, allowing for instant decision implementation across the group, with human intervention limited to green-lighting automated strikes. CEO Serhiy Kupriienko explained that AI can manage hundreds of drones, whereas human pilots struggle with more than five. The system, called Styx, directs reconnaissance and strike drones, both aerial and ground-based, with each drone planning its own moves and predicting the behaviour of others in the swarm.

The need for AI drones is increasing as Electronic Warfare (EW) systems disrupt signals between pilots and drones. AI-operated drones could significantly improve hit rates, countering the current drop in strike success due to jamming. The goal is to develop affordable AI targeting systems that can be deployed en masse along the extensive front line, potentially using low-cost computers like the Raspberry Pi. Such advancements could significantly enhance Ukraine’s military capabilities in the ongoing conflict, as seen with their use of Clearview AI’s facial recognition services.

CMA CGM and Google join forces on AI solutions

French shipping and logistics company CMA CGM has partnered with Alphabet’s Google to accelerate the deployment of AI solutions across its global operations. The collaboration aims to boost efficiency and reduce delivery times by optimising routes, container handling, and inventory management while minimising costs and carbon emissions. CMA CGM’s Chairman and CEO Rodolphe Saadé described the partnership as a crucial step in the company’s transformation strategy.

Google France CEO Sébastien Missoffe highlighted Google’s infrastructure, data expertise, and long-term AI approach as key factors that will support CMA CGM’s growth. CEVA Logistics, CMA CGM’s logistics arm, will utilise Google’s AI-based management tools to enhance volume and demand forecasting, improving operational planning at its warehouses.

The partnership extends to CMA CGM’s media arm, which holds stakes in French private broadcaster M6 and recently acquired BFM TV. The media division aims to develop tools to help journalists synthesise and translate documents, generate media snippets for social networks, and digitise archives for research purposes. This collaboration underscores the growing trend of leveraging AI to address challenges across various industries, similar to the partnership between Airbus and Agrimetrics in agronomy.