As the US election draws near, the proliferation of deepfake content is raising serious concerns about its impact on undecided voters. Deepfakes—AI-generated images, videos, or audio clips—pose a significant threat to the democratic process by making it increasingly difficult for the public to distinguish between reality and fiction. This issue was recently highlighted when Donald Trump falsely claimed that a large crowd welcoming Vice President Kamala Harris in Detroit was an AI fabrication, despite evidence proving the event’s authenticity.
Trump’s unfounded allegations and the spread of misleading deepfake content by his supporters are not only a problem for those firmly in his camp, but also for undecided voters. These voters, who are critical to the outcome of the election, may struggle to discern the truth amidst a flood of manipulated media. This erosion of trust in what is real and what is fabricated undermines a key pillar of democracy and creates fertile ground for anti-democratic forces to gain power.
The growing prevalence of deepfakes and other digital misinformation strategies is expected to intensify in the run-up to the election. Already, Trump supporters have circulated a clearly AI-generated image, falsely claiming it was promoted by the Harris campaign. Such tactics aim to blur the lines between truth and falsehood, turning the election discourse away from verifiable facts and towards a chaotic environment where nothing can be trusted.
Experts warn that unless decisive action is taken, deepfake content will continue to compromise the integrity of the democratic process. The European Union has expressed similar concerns about the role of deepfakes in elections, highlighting the global scale of the problem. In the US, the spread of political spam and digital misinformation has surged as the 2024 election approaches, further complicating the landscape for voters.
In a groundbreaking case in the UK, a 27-year-old man named Hugh Nelson has admitted to using AI technology to create indecent images of children, a crime for which he is expected to be jailed. Nelson pleaded guilty to multiple charges at Bolton Crown Court, including attempting to incite a minor into sexual activity, distributing and making indecent images, and publishing obscene content. His sentencing is scheduled for 25 September.
The case, described by Greater Manchester Police (GMP) as ‘deeply horrifying,’ marks the first instance in the region—and possibly nationally—where AI technology was used to transform ordinary photographs of children into indecent images. Detective Constable Carly Baines, who led the investigation, emphasised the global reach of Nelson’s crimes, noting that arrests and safeguarding measures have been implemented in various locations worldwide.
Authorities hope this case will influence future legislation, as the use of AI in such offences is not yet fully addressed by current UK laws. The Crown Prosecution Service highlighted the severity of the crime, warning that the misuse of emerging technologies to generate abusive imagery could lead to an increased risk of actual child abuse.
An Austrian advocacy group, NOYB, has filed a complaint against the social media platform X, owned by Elon Musk, accusing the company of using users’ data to train its AI systems without their consent. The complaint, led by privacy activist Max Schrems, was lodged with authorities in nine European Union countries, increasing pressure on Ireland’s Data Protection Commission (DPC), the lead EU regulator for many major US tech firms, which base their European operations in Ireland.
Even so, NOYB’s complaint focuses primarily on X’s lack of cooperation and the inadequacy of its mitigation measures, rather than questioning the legality of the data processing itself. Schrems emphasised that X must fully comply with EU law by obtaining user consent before using their data. X has yet to respond to the latest complaint but has said it intends to work with the DPC on AI-related issues.
In a related case, Meta, Facebook’s parent company, delayed the launch of its AI assistant in Europe after the Irish DPC advised against it, following similar complaints from NOYB regarding the use of personal data for AI training.
ASOS has deepened its collaboration with Microsoft by signing a new three-year deal to extend its use of AI technologies. The partnership, aimed at enhancing both customer experiences and internal operations, will see the introduction of AI tools designed to save time and allow employees to focus on more creative and strategic tasks. The online retailer’s director of technology operations, Victoria Arden, emphasised the importance of this move in driving operational excellence.
Since early 2023, ASOS has been utilising Microsoft’s Copilot tools, including those for Microsoft 365 and GitHub, to improve the efficiency of its engineering and HR teams. The HR team, for instance, has used Copilot to analyse employee engagement surveys, while other departments have explored AI-powered insights through tools like Power BI. The partnership highlights ASOS’s commitment to adopting cutting-edge technologies to enhance its data-driven decision-making processes.
ASOS has been actively piloting AI solutions to improve various aspects of its business. A recent example is the use of Copilot in Power BI to summarise performance data, aiding the company in making informed decisions. The retailer’s AI Stylist, powered by Microsoft’s Azure OpenAI, also represents a key innovation, helping customers discover new fashion trends through a conversational interface.
The collaboration between ASOS and Microsoft is built on a strong foundation established in 2022, when ASOS chose Microsoft Azure as its preferred cloud platform. The extended partnership reflects ASOS’s dedication to innovation through safe and responsible experimentation, aiming to continue delivering personalised, data-driven services to its global customer base.
IBM has teamed up with WWF-Germany to develop an AI-driven solution aimed at safeguarding African forest elephants, a species facing severe threats from poaching and habitat loss. This new technology will use AI to accurately identify individual elephants from camera trap photos, enhancing conservation efforts and allowing for more precise tracking of these endangered animals.
The partnership will combine IBM’s technological expertise with WWF’s conservation knowledge to create an AI-powered tool that could revolutionise how elephants are monitored. By focusing on image recognition, the technology aims to identify elephants by their unique physical features, such as heads and tusks, much like human fingerprints.
Additionally, the collaboration will employ IBM Environmental Intelligence to monitor and analyse biomass and vegetation in elephant habitats. The data will be crucial in predicting elephant movements and assessing the ecosystem services provided by these animals, such as carbon sequestration. Such insights could also pave the way for sustainable finance investments by quantifying the carbon services offered by elephants.
IBM emphasised the broader potential of this initiative, highlighting its role in supporting nature restoration and contributing to global climate change efforts. By integrating advanced technology with conservation strategies, the partnership seeks to make a lasting positive impact on both the environment and sustainable development.
AI is rapidly transforming the landscape of scientific research, but not always for the better. A growing concern is the proliferation of AI-generated errors and misinformation within academic journals. From bizarrely inaccurate images to nonsensical text, the quality of published research is being compromised. The trend is exacerbated by the pressure on researchers to publish prolifically, leading many to turn to AI as a shortcut.
Paper mills, which generate fraudulent academic papers for profit, are exploiting AI to produce vast quantities of low-quality content. These fabricated studies, often filled with nonsensical data and plagiarised text, are infiltrating reputable journals. The academic publishing industry is struggling to keep pace with this influx of junk science, as traditional quality control measures prove inadequate.
Beyond the issue of outright fraud, the misuse of AI by well-intentioned researchers is also a problem. While AI tools can be valuable for tasks like data analysis and language translation, their limitations are often overlooked. Overreliance on AI can lead to errors, biases, and a decline in critical thinking. As a result, the credibility of scientific research is at stake.
To address this crisis, a multifaceted approach is necessary. Increased investment in detection tools, stricter peer review standards, and greater transparency in the research process are essential steps. Additionally, academic institutions must foster a culture that prioritises quality over quantity, encouraging researchers to focus on depth rather than speed. Ultimately, safeguarding the integrity of scientific research requires a collaborative effort from researchers, publishers, and the public.
Humanoid robots are poised to revolutionise industries, with tech giants leading the charge. Companies such as Nvidia and Tesla are at the forefront of developing these human-like machines, equipped with advanced AI. These robots are designed to perform complex tasks, from manufacturing to customer service.
The potential applications for humanoid robots are vast. Tesla aims to deploy them in its factories, while other companies are exploring their use in logistics and healthcare. As AI technology continues to evolve, these machines are becoming increasingly sophisticated, capable of learning and adapting to new tasks.
Why does this matter?
The development of humanoid robots represents a significant investment in the future. Companies like Nvidia are building entire ecosystems to support robotics innovation. While challenges remain, the potential benefits are enormous. As these machines become more prevalent, they could reshape the workforce and drive economic growth.
The race to develop the most advanced humanoid robot is heating up. With major players investing heavily in this technology, the future of work is changing rapidly.
OpenAI’s chief strategy officer, Jason Kwon, has expressed confidence that humans will continue to control AI, downplaying concerns about the technology developing unchecked. Speaking at a forum in Seoul, Kwon emphasised that the core of AI safety lies in ensuring human oversight. As these systems grow more advanced, Kwon believes they will become easier to manage, countering fears that they could become uncontrollable.
The company is actively working on a framework that allows AI systems to reflect the cultural values of different countries. Kwon highlighted the importance of making models adaptable to local contexts, ensuring that users in various regions feel the technology is designed with them in mind. The approach aims to foster a sense of ownership and relevance across diverse cultures.
Despite some scepticism surrounding the future of AI, Kwon remains optimistic about its trajectory. He compared its potential growth to that of the internet, which has become an indispensable tool globally. While acknowledging that AI is still in its early stages, he pointed out that adoption rates are gradually increasing, with significant room for growth.
Kwon noted that in South Korea, a country with over 50 million people, only 1 million are daily active users of ChatGPT. Even in the US, fewer than 20 per cent of the population has tried the tool. Kwon’s remarks suggest that AI’s journey is just beginning, with significant expansion expected in the coming years.
OpenAI, one of the largest AI research organisations, has appointed Zico Kolter, a distinguished professor and director of the machine learning department at Carnegie Mellon University, to its board of directors. Renowned for his focus on AI safety, Kolter will also join the company’s safety and security committee, which is tasked with overseeing the safe deployment of OpenAI’s projects. The appointment comes as OpenAI’s board undergoes changes in response to growing concerns about the safety of generative AI, which has seen rapid adoption across various sectors.
Following the departure of co-founder John Schulman, Kolter’s addition to the OpenAI board underscores a commitment to addressing these safety concerns. He brings a wealth of experience from his roles as the chief expert at Bosch and chief technical adviser at Gray Swan, a startup dedicated to AI safety. Notably, Kolter has contributed to developing methods that automatically assess the safety of large language models, a crucial area as AI systems become increasingly sophisticated. His expertise will be invaluable in guiding OpenAI as it navigates the challenges posed by the widespread use of generative AI technologies such as ChatGPT.
The safety and security committee, formed in May following Ilya Sutskever’s departure and including Kolter alongside CEO Sam Altman and other directors, underlines OpenAI’s proactive approach to ensuring AI is developed and deployed responsibly. The committee is responsible for making recommendations on safety decisions across all of OpenAI’s projects, reflecting the company’s recognition of the potential risks associated with AI advancements.
In a related move, Microsoft relinquished its board observer seat at OpenAI in July, aiming to address antitrust concerns from regulators in the United States and the United Kingdom. This decision was seen as a step towards maintaining a balance of power within OpenAI, as the company continues to play a leading role in the rapidly evolving AI landscape.
Elon Musk’s social media platform, X, has agreed to pause using data from European Union users to train its AI systems until further court decisions are made. The agreement comes after Ireland’s Data Protection Commission (DPC) sought to suspend X’s processing of user data for AI development, arguing that the platform had started using this data without user consent.
X, formerly known as Twitter, introduced an option for users to opt out of data usage for AI training. However, this was only available from 16 July, despite data processing beginning on 7 May. This delay led the DPC to take legal action, with a court hearing revealing that X would refrain from using data collected between 7 May and 1 August until the issue is resolved.
X’s legal team is expected to file opposition papers against the DPC’s suspension order by 4 September. The platform defended its actions, calling the regulator’s order unwarranted and unjustified. This case follows similar scrutiny faced by other tech giants like Meta and Google, which have also faced regulatory challenges in the EU over their AI systems.