Queensland premier criticises AI use in political advertising

The premier of the Australian state of Queensland, Steven Miles, has condemned an AI-generated video created by the LNP opposition, calling it a ‘turning point for our democracy’. The TikTok video depicts Miles dancing under text about rising living costs and is clearly marked as AI-generated. Miles has stated that the state Labor party will not use AI-generated advertisements in the upcoming election campaign.

Miles expressed concerns about the potential dangers of AI in political communication, warning that fabricated videos are more likely to be believed than doctored photos. Despite rejecting AI for Labor’s own content, Miles dismissed the need for truth-in-advertising laws, asserting that the party has no intention of creating deepfake videos.

The LNP defended their use of AI, emphasising that the video was clearly labelled and aimed at highlighting issues like higher rents and increased power prices under Labor. The Electoral Commission of Queensland noted that while the state’s electoral act does not specifically address AI, any false statements about a candidate’s character can be prosecuted.

Experts, including communications lecturer Susan Grantham and QUT’s Patrik Wikstrom, have warned about the broader implications of AI in politics. Grantham pointed out that politicians already using AI for lighter content are at greater risk of being targeted. Wikstrom stressed that the real issue is political communication designed to deceive, echoing concerns raised by a UK elections watchdog about AI deepfakes undermining elections. Australia is also planning to implement tougher laws focusing on deepfakes.

LinkedIn adds games and AI tools to increase user visits

LinkedIn is introducing AI-powered career advice and interactive games in an effort to encourage daily visits and drive growth. The Financial Times reported that this initiative is part of a broader overhaul aimed at increasing user engagement on the Microsoft-owned platform, which currently lags behind entertainment-focused social media sites like Facebook and TikTok.

With slowing revenue growth, analysts have suggested that LinkedIn must diversify its income streams beyond subscriptions and make the platform more engaging. Editor-in-Chief Daniel Roth emphasised the goal of building a daily habit for users to share knowledge, get information, and interact with content on the site. The efforts reflect LinkedIn’s push to enhance the user experience, including AI-driven job-hunting features, fake-account detection, and the option to disable targeted ads.

In June, LinkedIn recorded 1.5 million content interactions per minute, though it did not disclose site traffic or active user figures. Data from Similarweb showed that visits reached 1.8 billion in June, but the growth rate has slowed significantly since early 2024. For continued growth, media analyst Kelsey Chickering noted that LinkedIn needs to become ‘stickier’ and offer more than just job listings and applications.

Moreover, LinkedIn is becoming a significant platform for consumer engagement, with companies like Amazon and Nike attracting millions of followers. The platform’s fastest-growing demographic is Generation Z, many of whom shop via social media. The trend highlights LinkedIn’s potential as a robust avenue for retailers to reach a sophisticated and influential audience.

Global tech outage hits Meta’s content moderators

A global tech outage on Friday affected some external vendors responsible for content moderation on Meta’s platforms, including Facebook, Instagram, WhatsApp, and Threads. According to a Meta spokesperson, the outage temporarily impacted several tools used by these vendors, causing minimal disruption to Meta’s support operations but not significantly affecting content moderation efforts.

The outage led to a SEV1 alert at Meta, indicating a critical issue that required immediate attention. Meta relies on a combination of AI and human review to moderate the billions of posts made on its platforms. While Meta staff handle some reviews, most are outsourced to vendors like Teleperformance and Concentrix, who employ numerous workers to identify and address rule violations such as hate speech and violence.

Despite the outage disrupting vendor access to key systems that route flagged content for review, operations largely continued as normal. Concentrix reported monitoring and addressing the impacts of the outage, while Teleperformance did not comment. Meta confirmed that the issues had been resolved earlier in the day, with minimal to no impact on its content moderation processes.

Singapore blocks 95 accounts linked to exiled Chinese tycoon Guo Wengui

Singapore has ordered five social media platforms to block access to 95 accounts linked to exiled Chinese tycoon Guo Wengui. These accounts posted over 120 times from April 17 to May 10, alleging foreign interference in Singapore’s leadership transition. The Home Affairs Ministry stated that the posts suggested a foreign actor influenced the selection of Singapore’s new prime minister.

Singapore’s Foreign Interference (Countermeasures) Act, enacted in October 2021, was used for the first time to address this issue. Guo Wengui, recently convicted in the US for fraud, has a history of opposing Beijing. Together with former Trump adviser Steve Bannon, he launched the New Federal State of China, aimed at overthrowing China’s Communist Party.

The ministry expressed concern that Guo’s network could spread false narratives detrimental to Singapore’s interests and sovereignty. Blocking these accounts was deemed necessary to prevent potential hostile information campaigns targeting Singapore.

Guo and his affiliated organisations have been known to push various Singapore-related narratives. Their coordinated actions and previous attempts to use Singapore to advance their agenda demonstrate their capability to undermine the country’s social cohesion and sovereignty.

Musk’s Grok AI struggles with news accuracy

Grok, Elon Musk’s AI model available on the X platform, ran into significant accuracy problems following the attempted assassination of former President Donald Trump. The AI model posted incorrect headlines, including one falsely claiming Vice President Kamala Harris had been shot and another wrongly identifying the shooter as an antifa member. These errors stemmed from Grok’s inability to detect sarcasm and its tendency to repeat unverified claims circulating on X.

Since announcing plans to develop TruthGPT, Elon Musk has promoted Grok as a revolutionary tool for news aggregation that leverages real-time posts from millions of users. Despite its potential, the incident underscores Grok’s limitations, particularly in handling breaking news. The model’s humorous design can also be a drawback, contributing to the spread of misinformation and confusion.

The reliance on AI for news summaries raises concerns about accuracy and context, especially during critical events. Former Facebook public-policy director Katie Harbath emphasised the need for human oversight in providing context and verifying facts. The incident with Grok mirrors challenges faced by other AI models, such as OpenAI’s ChatGPT, which includes disclaimers to manage user expectations.

AI software provides multilingual tutorial videos for foreign workers in Japan

AI software designed to create multilingual tutorial videos for foreign workers in Japan has been launched. Tokyo-based Studist Corp developed ‘Teachme AI’ to help companies produce instructional videos quickly and efficiently.

Teachme AI can translate text into 20 different languages, including Thai, Vietnamese, Indonesian, and Bengali. This innovation aims to support businesses as the number of foreign workers in Japan rises, addressing labour shortages and an ageing population.

The software significantly reduces editing times, automatically dividing footage into chapters with subtitles. During a demonstration, a 30-minute video with Thai explanations was created in just 15 minutes, impressing users with its efficiency.

US senators introduce COPIED Act to combat intellectual property theft in creative industry

The Content Origin Protection and Integrity from Edited and Deepfaked Media Bill, also known as the COPIED Act, was introduced on 11 July 2024 by US Senators Marsha Blackburn, Maria Cantwell, and Martin Heinrich. The bill is intended to safeguard the intellectual property of creatives, particularly journalists, publishers, broadcasters, and artists.

In recent times, the work and images of creatives have been used or modified without consent, at times to generate income. The push for legislation in this area intensified in January after explicit AI-generated images of the US musician Taylor Swift surfaced on X.

According to the bill, images, videos, audio clips and texts are considered deepfakes if they contain ‘synthetic or synthetically modified content that appears authentic to a reasonable person and creates a false understanding or impression’. If passed into law, the bill would apply to online platforms frequented by US-based customers that either generate annual revenue of at least $50 million or register at least 25 million active users for three consecutive months.

Under the bill, companies that deploy or develop AI models must install a feature allowing users to tag such images with contextual or content provenance information, such as their source and history, in a machine-readable format. It would then be illegal to remove such tags for any reason other than research, or to use tagged images to train subsequent AI models or to generate content. Victims would have the right to sue offenders.
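The bill does not prescribe a specific format, leaving that to future standards. As a rough illustration only, a minimal machine-readable provenance record might look like the following sketch; every field name here is hypothetical and not taken from the bill’s text.

```python
import json
from datetime import datetime, timezone

# Hypothetical provenance record. The field names below are illustrative
# only; the COPIED Act leaves the exact machine-readable format to future
# standards rather than prescribing one.
provenance = {
    "asset_id": "sha256:<content-hash>",   # identifier derived from the file
    "source": "example-news-agency",       # original creator or publisher
    "created_at": datetime.now(timezone.utc).isoformat(),
    "synthetic": False,                    # whether the content is AI-generated
    "history": [                           # edits applied since creation
        {"action": "crop", "tool": "photo-editor"},
    ],
}

# Serialise to JSON, a common machine-readable format that tooling could
# attach to or embed alongside the image file.
print(json.dumps(provenance, indent=2))
```

Emerging standards such as the C2PA specification already define richer, cryptographically signed versions of this kind of metadata, which is the sort of scheme the bill’s requirement could build on.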

The COPIED Act is backed by several artist-affiliated groups, including SAG-AFTRA, the National Music Publishers’ Association, the Songwriters Guild of America (SGA) and the National Association of Broadcasters, as well as the US National Institute of Standards and Technology (NIST), the US Patent and Trademark Office (USPTO) and the US Copyright Office. The bill has also received bipartisan support.

India’s antitrust body finds Apple abused App Store dominance

India’s antitrust body, the Competition Commission of India (CCI), has concluded its investigation into Apple’s practices within the Indian app market, finding the tech giant engaged in abusive conduct. According to a confidential report viewed by Reuters, the CCI alleges Apple exploited its dominant position in the iOS app ecosystem by mandating developers to use its proprietary in-app purchase system. This requirement, the CCI asserts, limits competition and imposes unfair terms on developers who rely on Apple’s platform to reach consumers.

The 142-page report highlights Apple’s significant influence over the distribution of digital products and services through its App Store on iOS devices. It describes the App Store as a crucial channel for app developers, who must comply with Apple’s terms, including its billing and payment system. Both Apple and the CCI declined to comment on the report’s findings.

The CCI report marks a pivotal phase in India’s investigation, pending review by senior officials. It could result in fines and directives for Apple to revise its business practices. The case originated from complaints by a non-profit group and Indian startups, alleging Apple’s practices stifle competition and inflate costs for developers and consumers.

Why does this matter?

The investigation mirrors the heightened scrutiny Apple faces globally. In June, EU regulators accused Apple of breaching antitrust laws, potentially leading to substantial fines. Apple is also under investigation over new fees imposed on developers, and has responded with plans to allow alternative app distribution in the EU under the Digital Markets Act.

The report underscores the regulatory pressure tech giants face worldwide, with similar antitrust actions targeting Google in India over its in-app payment policies. As the CCI deliberates its next steps, Apple’s market practices remain a focal point amid broader concerns over fair competition in the digital economy.

Musk’s X faces EU investigation for DSA violations

According to preliminary findings by EU tech regulators, Elon Musk’s social media company, X, has breached the EU’s online content rules. The finding, issued by the European Commission, follows a seven-month investigation under the Digital Services Act (DSA), which mandates that large online platforms and search engines tackle illegal content and address risks to public security. The Commission highlighted issues with X’s use of dark patterns, lack of advertising transparency, and restricted data access for researchers.

The investigation also found that X’s verified accounts, marked with a blue checkmark, do not adhere to industry standards, impairing users’ ability to verify account authenticity. X has likewise failed to meet the DSA requirement to provide a reliable, searchable advertisement repository, and the company has been accused of obstructing researchers from accessing its public data, in violation of the DSA.

Why does this matter?

X has several months to respond to the charges and could face a fine of up to 6% of its global turnover if the breaches are confirmed. EU industry chief Thierry Breton stated that the Commission would impose fines and demand significant operational changes if its findings are upheld.

Meanwhile, the European Commission continues separate investigations into the dissemination of illegal content on X and the measures the platform has taken to counter disinformation. Similar investigations are also ongoing into other platforms, including ByteDance’s TikTok, AliExpress, and Meta Platforms.

Australia to enforce anti-scam laws on internet firms

Australia plans to introduce a law by the end of the year that will require internet companies to proactively stop hosting scams or face strict fines. The Australian Competition and Consumer Commission (ACCC) and the treasury department are working with internet, banking, and telecommunications firms to create a mandatory, enforceable anti-scam code. The code will obligate companies to take reasonable steps to protect users and provide effective complaint services.

Scams, including cryptocurrency scam advertisements featuring mining billionaire Andrew Forrest, have caused significant financial losses in Australia. Forrest is suing Meta in California for failing to act against these ads domestically. From 2020 to 2023, the amount lost by Australians to scams tripled to A$2.7 billion, mirroring global trends as more people turned to online activities during the pandemic.

Why does this matter?

The ACCC’s push for new laws aims to make all participating industries accountable. The restrictive legislation might create a conflict between Australia and an industry that relies on US laws, which largely exempt it from responsibility. Previously, a law forcing internet companies to pay licensing fees to media companies led Meta to consider blocking media content on Facebook in Australia.

The proposed mandatory anti-scam codes, which the ACCC hopes to implement by the end of the year, would subject companies to fines of A$50 million, three times the benefit gained from the wrongdoing, or 30% of turnover at the time of the infraction. The ACCC is also suing Meta for failing to stop the publication of scam ads, with the case still in the pre-trial stage. Meta preferred a voluntary code, arguing that a mandatory code might stifle innovation.