Queensland premier criticises AI use in political advertising

The premier of the Australian state of Queensland, Steven Miles, has condemned an AI-generated video created by the LNP opposition, calling it a ‘turning point for our democracy.’ The TikTok video depicts Miles dancing under text about rising living costs and is clearly marked as AI-generated. Miles has stated that the state Labor party will not use AI-generated advertisements in the upcoming election campaign.

Miles expressed concerns about the potential dangers of AI in political communication, highlighting the need for caution because videos are more likely to be believed than doctored photos. Despite rejecting AI for Labor’s own content, Miles dismissed the need for truth-in-advertising laws, asserting that the party has no intention of creating deepfake videos.

The LNP defended their use of AI, emphasising that the video was clearly labelled and aimed at highlighting issues like higher rents and increased power prices under Labor. The Electoral Commission of Queensland noted that while the state’s electoral act does not specifically address AI, any false statements about a candidate’s character can be prosecuted.

Experts, including communications lecturer Susan Grantham and QUT’s Patrik Wikstrom, have warned about the broader implications of AI in politics. Grantham pointed out that politicians already using AI for lighter content are at greater risk of being targeted. Wikstrom stressed that the real issue is political communication designed to deceive, echoing concerns raised by a UK elections watchdog about AI deepfakes undermining elections. Australia is also planning to implement tougher laws focusing on deepfakes.

US, EU, UK pledge to protect generative AI market fairness

Top competition authorities from the EU, UK, and US have issued a joint statement emphasising the importance of fair, open, and competitive markets in developing and deploying generative AI. Leaders from these regions, including Margrethe Vestager of the European Commission, Sarah Cardell of the UK Competition and Markets Authority, Jonathan Kanter of the US Department of Justice, and Lina M. Khan of the US Federal Trade Commission, highlighted their commitment to ensuring effective competition and protecting consumers and businesses from potential market abuses.

The officials recognise the transformational potential of AI technologies but stress the need to safeguard against risks that could undermine fair competition. These risks include the concentration of control over essential AI development inputs, such as specialised chips and vast amounts of data, and the possibility of large firms using their existing market power to entrench or extend their dominance in AI-related markets. The statement also warns against partnerships and investments that could stifle competition by allowing major firms to co-opt competitive threats.

The joint statement outlines several principles for protecting competition within the AI ecosystem, including fair dealing, interoperability, and maintaining choices for consumers and businesses. The authorities are particularly vigilant about the potential for AI to facilitate anti-competitive behaviours, such as price fixing or unfair exclusion. Additionally, they underscore the importance of consumer protection, ensuring that AI applications do not compromise privacy, security, or autonomy through deceptive or unfair practices.

Seomjae to launch AI-powered mathematics learning program at CES 2025

Seomjae, a Seoul-based education solutions developer, is set to launch its AI-powered mathematics learning program at the Consumer Electronics Show in Las Vegas next January. The program is built on a Retrieval-Augmented Generation (RAG) model, developed over two years by a team of 40 mathematicians and AI developers. It features over 120,000 math problems and 30,000 lectures, offering personalised education tracks for each student.

Beta testing will begin on July 29, involving 50 students from Seoul, Ulsan, and Boston. The feedback will help enhance the technology and assess its feasibility. The system, called Transforming Educational Content to AI, extracts and analyses information from lectures and problem solutions to provide core content.
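The article does not detail how Seomjae’s RAG pipeline works internally. As a rough illustration of the general technique, a minimal sketch of the retrieval step might look like the following; the corpus, overlap scoring, and prompt template here are illustrative assumptions, not Seomjae’s actual system.

```python
# Minimal sketch of the retrieval step in a Retrieval-Augmented
# Generation (RAG) pipeline. Real systems typically use vector
# embeddings; plain token overlap is used here to stay self-contained.

def tokenize(text: str) -> set[str]:
    """Lowercase bag-of-words tokenisation."""
    return set(text.lower().split())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus entries by token overlap with the query, keep top k."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return ranked[:k]

# Hypothetical lecture/problem snippets standing in for the real content.
corpus = [
    "Lecture 12: solving quadratic equations by factoring",
    "Problem 301: compute the area of a circle of radius 4",
    "Lecture 7: factoring polynomials with integer roots",
]

query = "how do I solve a quadratic equation by factoring"
context = retrieve(query, corpus)

# The retrieved snippets would be prepended to the student's question
# before it is passed to a language model for a tailored explanation.
prompt = "Context:\n" + "\n".join(context) + f"\nQuestion: {query}"
```

The generation step then answers the question using only the retrieved material, which is what lets a system ground its responses in a fixed bank of problems and lectures rather than the model’s general training data.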

Seomjae is also expanding its business portfolio to include an essay-writing educational program through partnerships in the US and Vietnam. The company will participate in Dubai’s Gulf Information Technology Exhibition this October, showcasing its new educational technologies.

A company official expressed excitement about starting beta testing and integrating diverse feedback to improve the program. The goal is to refine the AI system and ensure its effectiveness for students worldwide.

Gcore attracts $60 million investment for AI innovation

Gcore has raised $60 million in Series A funding from investors including Wargaming, Constructor Capital, and Han River Partners. The round marks Gcore’s first external capital raise in over a decade. The funds will be invested in Gcore’s AI technology and platform, utilising NVIDIA GPUs to drive AI innovations. The move highlights Gcore’s commitment to enhancing cloud resource efficiency and data sovereignty.

The company’s extensive network and cloud capabilities have made it a trusted partner for public organisations, telcos, and global corporations. Gcore’s infrastructure supports a wide range of industries, including media, gaming, technology, financial services, and retail. Its global network of over 180 edge nodes spans six continents, powering the training and inference of large language models.

Investors have expressed strong support for Gcore’s mission. Wargaming’s Sean Lee praised Gcore’s decade-long partnership, while Constructor Capital’s Matthias Winter highlighted the company’s comprehensive edge solutions and low latency. Han River Partners’ Christopher Koh noted Gcore’s strategic position in emerging AI markets, particularly in the APAC region.

CEO Andre Reitenbach emphasised the transformative potential of AI for businesses. Gcore aims to connect the world to AI with innovative cloud and edge solutions. The investment underscores the growing demand for AI infrastructure and Gcore’s role in meeting this need, supported by its robust network and advanced AI servers.

Europol predicts a surge in AI-assisted cybercrimes across the EU

Europol’s latest report predicts a surge in AI-assisted cybercrimes across the EU. The ‘Internet Organised Crime Threat Assessment 2024’ highlights how AI tools are enabling non-technical individuals to execute complex online crimes. Outputs such as deepfakes and false advertisements are making it easier for bad actors to engage in cybercrime.

The agency stresses the need for law enforcement to enhance their capabilities to counter these threats. Europol’s Executive Director, Catherine De Bolle, emphasises the importance of building robust human and technical resources. Future advancements in deepfake technology could lead to severe cases of sexual extortion, requiring sophisticated detection tools.

Concerns also extend to the cryptocurrency ecosystem. Europol’s report flags the potential for increased fraud involving non-fungible tokens (NFTs) and Bitcoin exchange-traded funds (ETFs). As more people adopt these financial instruments, those without extensive cryptocurrency knowledge may become prime targets for scammers.

Recently, Europol seized €44.2 million in cryptocurrency assets from ChipMixer, linked to money laundering. This operation underscores the growing challenges law enforcement faces in combating sophisticated financial crimes facilitated by emerging technologies.

Deloitte partners with Amazon to enhance global AI and data capabilities

Deloitte has formed a strategic collaboration with Amazon Web Services (AWS) to assist companies globally in enhancing their capabilities in generative artificial intelligence, data analytics, and quantum computing. The partnership includes the establishment of an Innovation Lab, with a focus on cutting-edge technologies like AI, quantum machine learning, and autonomous robotics. This lab aims to address industry-specific challenges and support companies in successfully transitioning proofs of concept into full production.

The Innovation Lab will facilitate collaboration between Deloitte and AWS engineers to develop solutions for diverse industries, encompassing financial services, healthcare, media, and energy. One of the initial projects, Deloitte’s C-Suite AI™ for CFOs, is designed to streamline financial functions using large language models. These models simplify workflows, generate investor documentation, and automate customer service. This tool is powered by NVIDIA and Amazon Bedrock to specifically aid the financial services sector.

Toyota Motor North America is one company already benefiting, using AWS machine learning and decision intelligence services to enhance its data ecosystem. Innovative solutions, such as dynamic pricing and parts forecasting, have been developed through its collaborative efforts with Deloitte. The partnership’s objective is to assist companies in transitioning from exploration to production of new technologies, addressing the inherent complexities and challenges involved.

Why does this matter?

Deloitte remains committed to supporting companies throughout their AI transformation journey. They offer tailored AI services and leverage their deep industry knowledge. The firm is currently training over 120,000 professionals worldwide in AI and investing more than £2 billion in technology learning and development initiatives. This extensive programme aims to boost skills in AI and other advanced technologies, ensuring greater client impact and improved productivity.

Grundon invests in AI for safer driving

Grundon Waste Management is investing £750,000 in AI technology over three years to enhance driver safety. The company will implement Samsara’s Connected Operations Platform across its fleet of over 300 vehicles, following successful trials at two depots. The trials showed a 71% reduction in detected events and increased fuel efficiency, encouraging optimal driving habits.

Grundon expects to save around £220,000 annually in fuel costs once the technology is fully deployed. Chris Double, Regional Operations Manager, noted positive feedback from drivers during the trials. The AI Dash Cams, which provide instant feedback and acknowledge good performance, have been well-received.

The technology includes Dual-Facing AI Dash Cams and other cameras that monitor driver activity and connect to existing 360-degree cameras. Drivers can also use the Samsara App to track their behaviour through a points-based system designed to reinforce safe driving habits and reward good behaviour.

Philip van der Wilt, SVP and General Manager EMEA at Samsara, highlighted the measurable impact of the technology during the trials. He looks forward to a long-term partnership with Grundon to continue driving innovation and safety in their operations.

CIA’s Lakshmi Raman on integrating AI with intelligence work

Lakshmi Raman, the Director of AI at the CIA, has had a remarkable journey within the intelligence community. Starting as a software developer in 2002, Raman rose through the ranks to lead the CIA’s enterprise data science efforts. She credits her success to the presence of women role models at the agency, which has historically been male-dominated.

In her current role, Raman oversees and integrates AI activities across the CIA, emphasising the partnership between humans and machines. The CIA has been utilising AI since around 2000, particularly in natural language processing, computer vision, and video analytics. Raman highlighted the agency’s focus on staying abreast of new trends like generative AI, which aids in content triage, search, discovery, and countering analytic bias.

The CIA’s proactive approach to AI, along with NSA’s focus on AI advancements, reflects the security agencies’ efforts to utilise AI as a tool to increase their effectiveness and support their mission.

LinkedIn adds games and AI tools to increase user visits

LinkedIn is introducing AI-powered career advice and interactive games in an effort to encourage daily visits and drive growth. The Financial Times reported that this initiative is part of a broader overhaul aimed at increasing user engagement on the Microsoft-owned platform, which currently lags behind entertainment-focused social media sites like Facebook and TikTok.

With slowing revenue growth, analysts have suggested that LinkedIn must diversify its income streams beyond subscriptions and make the platform more engaging. Editor in Chief Daniel Roth emphasised the goal of building a daily habit for users to share knowledge, get information, and interact with content on the site. The efforts reflect LinkedIn’s push to enhance the user experience, such as unveiling AI-driven job hunting features and detecting fake accounts, as well as disabling targeted ads.

In June, LinkedIn recorded 1.5 million content interactions per minute, though it did not disclose site traffic or active user figures. Data from Similarweb showed that visits reached 1.8 billion in June, but the growth rate has slowed significantly since early 2024. For continued growth, media analyst Kelsey Chickering noted that LinkedIn needs to become ‘stickier’ and offer more than just job listings and applications.

Moreover, LinkedIn is becoming a significant platform for consumer engagement, with companies like Amazon and Nike attracting millions of followers. The platform’s fastest-growing demographic is Generation Z, many of whom shop via social media. The trend highlights LinkedIn’s potential as a robust avenue for retailers to reach a sophisticated and influential audience.

AI tools create realistic child abuse images, says report

A report from the Internet Watch Foundation (IWF) has exposed a disturbing misuse of AI to generate deepfake child sexual abuse images based on real victims. While the tools used to create these images remain legal in the UK, the images themselves are illegal. The case of a victim, referred to as Olivia, exemplifies the issue. Abused between the ages of three and eight, Olivia was rescued in 2023, but dark web users are now employing AI tools to create new abusive images of her, with one model available for free download.

The IWF report also reveals an anonymous dark web page with links to AI models for 128 child abuse victims. Offenders are compiling collections of images of named victims, such as Olivia, and using them to fine-tune AI models to create new material. Additionally, the report mentions models that can generate abusive images of celebrity children. Analysts found that 90% of these AI-generated images are realistic enough to fall under the same laws as real child sexual abuse material, highlighting the severity of the problem.