Growing demand for AI-generated child abuse material on the dark web

According to new research conducted by Anglia Ruskin University, there is rising interest among online offenders in learning how to create AI-generated child sexual abuse material, as is evident from interactions on the dark web. The finding emerged from an analysis of chats on dark web forums over the past 12 months, where group members were found to be teaching each other how to create child sexual abuse material using online guides and videos, and exchanging advice.

Members of these forums have drawn on their existing supplies of non-AI content to learn how to make these images. Researchers Dr Deanna Davy and Prof Sam Lundrigan also revealed that some members referred to those who created AI images as ‘artists’, while others hoped the technology would soon become capable enough to make the process easier.

Why does this matter?

This trend has massive ramifications for child safety. Dr Davy said that the rise of AI-generated child sexual abuse material demands a greater understanding, particularly among police and public protection agencies, of how offenders create and share such content. Professor Lundrigan added that the trend ‘adds to the growing global threat of online child abuse in all forms and must be viewed as a critical area to address in our response to this type of crime’.

The future is coming: AI robots tested in BMW production

BMW is exploring the potential of AI-powered humanoid robots to assist in car manufacturing. The German automaker has partnered with California-based Figure to trial its advanced humanoid robot, Figure 02, at BMW Group Plant Spartanburg in South Carolina. The robot’s ability to handle complex tasks with human-like dexterity was put to the test during the trial, which focused on whether these machines could safely integrate into production lines.

Footage from the trial showcases the robot’s capabilities, including its ability to walk, grasp, and coordinate two-handed tasks. The Figure 02 is equipped with a powerful processor, advanced sensors, and human-scale hands, making it suitable for physically demanding and repetitive tasks in the factory. The combination of mobility and dexterity positions the robot as a potential asset in challenging work environments.

Why does this matter?

BMW highlights the significance of these developments in robotics, noting the promise they hold for the future of production. While the company has not yet committed to incorporating AI robots into its workforce, the rapid advancement of AI suggests that their use in manufacturing may soon become a reality.

The trial serves as an early step in assessing the feasibility of humanoid robots in production settings, with BMW keen to stay at the forefront of this technological evolution. The company is carefully evaluating the results to determine the best possible applications for these robots in the automotive industry.

Alphabet behind the wheel: US dominates AI, but China closes in

Companies from the US and China are leading the race in AI research, with Alphabet, the parent company of Google, at the forefront. A recent study from Georgetown University revealed that Alphabet has published the most frequently cited AI academic papers over the past decade. Seven of the top ten positions are held by US companies, including Microsoft and Meta, reflecting their dominance in the field.

Chinese firms are not far behind, with Tencent, Alibaba, and Huawei securing spots within the top ten. These companies have shown remarkable growth, particularly in the number of papers accepted at major conferences. Huawei has outpaced its competitors with a 98.2% annual growth rate in this area, followed by Alibaba at 53.5%.

The competition extends beyond academic publications to patents. Baidu, a leading Chinese tech firm, topped the list of patent applications with over 10,000 submissions from 2013 to 2023. Baidu’s growth has been particularly striking, with a 228% increase in patent applications year-on-year in 2020. US companies hold three spots in the top ten for patents, with IBM making the list.

Samsung Electronics is the only Korean company to make the top 100, ranking No. 14 for highly cited AI articles and No. 4 for patents. However, Samsung’s growth in these areas has been slower than that of other global leaders, with only modest increases in conference paper acceptances in recent years.

AI powers Tokyo’s new disaster response system

Tokyo has introduced a new AI-driven system aimed at improving the speed and efficiency of disaster response efforts. The technology, developed by Hitachi Ltd., leverages high-altitude cameras to detect fires and building collapses in real time, ensuring that emergency services receive critical information without delay.

The system is designed to automatically identify signs of disasters, such as structural collapses and fires, and immediately notify the police, fire department, and Japan’s Self-Defense Forces. This rapid communication is intended to streamline response efforts, potentially saving lives during emergencies.

High-resolution cameras have been strategically installed at various locations across Tokyo, including the metropolitan government buildings, to support the system’s operations. The initiative is particularly significant given Japan’s ongoing concerns about the threat of a ‘mega’ earthquake.

Tokyo’s investment in AI technology reflects the city’s commitment to bolstering its disaster preparedness, aiming to safeguard its residents by ensuring quicker and more effective emergency responses.

‘AI is the future’: Pakistani PM announces new tech initiatives for students

The Prime Minister of Pakistan, Shehbaz Sharif, has highlighted the significant role of provinces in supporting students through the National Finance Commission (NFC), which allocates 60 percent of its shares to them. Speaking at the International Youth Day event, he pointed out that provinces now have ample resources to invest in educational initiatives, including the distribution of laptops, a practice he championed as Chief Minister of Punjab.

The Prime Minister announced that the federal government would distribute smartphones to one million high-achieving students, while provinces would continue to provide laptops. Emphasising the importance of technology in education, he underlined that equipping the new generation with modern tools is essential for the country’s future. AI, he noted, is a key area for growth.

Citing China’s success through technological advancement, the Prime Minister vowed to make all necessary resources available to students. He also reflected on the debt accumulated over the past 70 years, contrasting it with the vision of the country’s founders. The speech included a call to action for a long-term educational programme to be launched after 14 August.

Sharif’s remarks stressed the need to bridge the gap between different social classes in Pakistan, with a focus on ensuring that every child, regardless of background, has access to the latest technology. He expressed hope that these initiatives would lead to a brighter future for the nation.

Huawei’s AI chip set to rival Nvidia in China

Huawei Technologies is on the brink of releasing a new AI chip, Ascend 910C, to challenge Nvidia’s dominance in the Chinese market. The company has made significant strides despite US sanctions, with Chinese internet firms and telecom operators recently testing the processor.

Huawei claims that the Ascend 910C rivals Nvidia’s H100, a powerful AI chip that has been unavailable in China.

Why does this matter?

The development signals Huawei’s ongoing efforts to circumvent restrictions and bolster its position in the AI sector.

Emotional attachment to AI could impact real-life interactions, says OpenAI

The potential impact of OpenAI’s realistic voice feature on human interactions has raised concerns, with the company warning that people might form emotional bonds with AI at the expense of real-life relationships. The company noted that users of its GPT-4o model have shown signs of anthropomorphising the AI, attributing human-like qualities to it, which could lead to misplaced trust and dependency. OpenAI’s report highlighted that high-quality voice interaction might exacerbate these issues, raising questions about the long-term effects on social norms.

The company observed that some testers of the AI voice feature interacted with it in ways that suggested an emotional connection, such as expressing sadness over the end of their session. While these behaviours might seem harmless, OpenAI emphasised the need to study their potential evolution over time. The report also suggested that reliance on AI for social interaction could diminish users’ abilities or willingness to engage in human relationships, altering how people interact with one another.

Concerns were also raised about the AI’s ability to recall details and handle tasks, which might lead to over-reliance on the technology. OpenAI further noted that its AI models, designed to be deferential in conversations, might inadvertently promote anti-social norms when users become accustomed to behaviours, such as interrupting, that are inappropriate in human interactions. The company pledged to continue testing how these voice capabilities could affect emotional attachment and social behaviour.

The issue gained attention following a controversy in June, when OpenAI was criticised for allegedly using a voice similar to that of actress Scarlett Johansson in its chatbot. Although the company denied that the voice belonged to Johansson, the incident underscored the risks associated with voice-cloning technology. As AI models continue to advance toward human-like reasoning, experts are increasingly urging a pause to consider the broader implications for human relationships and societal norms.

Growing data centre demand sparks renewable energy investments

US Energy Secretary Jennifer Granholm has given assurances that the country will be able to meet the growing electricity demand driven by the rapid expansion of data centres powering AI. The Department of Energy anticipates that electricity demand will double by midcentury due to factors such as manufacturing growth, the electrification of vehicles, and AI development. Despite concerns from local communities about the strain on resources, Granholm remains confident that clean energy sources will be sufficient to handle this increased demand, bolstered by significant investments under recent legislation.

Granholm highlighted the strong growth in renewable energy investments, predicting the deployment of over 60 gigawatts of clean energy and storage capacity this year alone. However, she acknowledged the immediate challenge of building transmission lines to connect data centres to these clean power sources. The Department of Energy is working to expedite the permitting process for such projects, with public and private investments playing a key role in expanding infrastructure.

The growth of AI has put many renewable energy goals to the test. Collaborations between tech giants such as Google and energy departments are emerging as a solution to meet the surging demand. For example, a recent partnership in Virginia between Iron Mountain and the state’s energy department will introduce large-scale batteries to store renewable energy for data centres. Granholm suggested that such initiatives could turn the demand from data centres into a catalyst for further investment in renewable energy.

The Department of Energy is also researching ways to improve efficiency in data centres, aiming to help tech companies increase computing power while managing energy consumption. Granholm, after recent meetings with tech and utility leaders, hinted at upcoming major announcements that would reinforce America’s leadership in technology and innovation.

AI deepfakes raise doubts in crucial US election

As the US election draws near, the proliferation of deepfake content is raising serious concerns about its impact on undecided voters. Deepfakes—AI-generated images, videos, or audio clips—pose a significant threat to the democratic process by making it increasingly difficult for the public to distinguish between reality and fiction. This issue was recently highlighted when Donald Trump falsely claimed that a large crowd welcoming Vice President Kamala Harris in Detroit was an AI fabrication, despite evidence proving the event’s authenticity.

Trump’s unfounded allegations and the spread of misleading deepfake content by his supporters are problematic not just for those firmly in his camp, but also for undecided voters. These voters, who are critical to the outcome of the election, may struggle to discern the truth amidst a flood of manipulated media. This erosion of trust in what is real and what is fabricated undermines a key pillar of democracy and creates fertile ground for anti-democratic forces to gain power.

The growing prevalence of deepfakes and other digital misinformation strategies is expected to intensify in the run-up to the election. Already, Trump supporters have circulated a clearly AI-generated image, falsely claiming it was promoted by the Harris campaign. Such tactics aim to blur the lines between truth and falsehood, turning the election discourse away from verifiable facts and towards a chaotic environment where nothing can be trusted.

Experts warn that unless decisive action is taken, deepfake content will continue to compromise the integrity of the democratic process. The European Union has expressed similar concerns about the role of deepfakes in elections, highlighting the global scale of the problem. In the US, the spread of political spam and digital misinformation has surged as the 2024 election approaches, further complicating the landscape for voters.

Man who used AI to create indecent images of children faces jail

In a groundbreaking case in the UK, a 27-year-old man named Hugh Nelson has admitted to using AI technology to create indecent images of children, a crime for which he is expected to be jailed. Nelson pleaded guilty to multiple charges at Bolton Crown Court, including attempting to incite a minor into sexual activity, distributing and making indecent images, and publishing obscene content. His sentencing is scheduled for 25 September.

The case, described by Greater Manchester Police (GMP) as ‘deeply horrifying’, marks the first instance in the region, and possibly nationally, in which AI technology was used to transform ordinary photographs of children into indecent images. Detective Constable Carly Baines, who led the investigation, emphasised the global reach of Nelson’s crimes, noting that arrests and safeguarding measures have been implemented in various locations worldwide.

Authorities hope this case will influence future legislation, as the use of AI in such offences is not yet fully addressed by current UK laws. The Crown Prosecution Service highlighted the severity of the crime, warning that the misuse of emerging technologies to generate abusive imagery could lead to an increased risk of actual child abuse.