AI software provides multilingual tutorial videos for foreign workers in Japan

AI software designed to create multilingual tutorial videos for foreign workers in Japan has been launched. Tokyo-based Studist Corp developed ‘Teachme AI’ to help companies produce instructional videos quickly and efficiently.

Teachme AI can translate text into 20 different languages, including Thai, Vietnamese, Indonesian, and Bengali. The tool aims to support businesses as the number of foreign workers in Japan rises in response to labour shortages and an ageing population.

The software significantly reduces editing times, automatically dividing footage into chapters with subtitles. During a demonstration, a 30-minute video with Thai explanations was created in just 15 minutes, impressing users with its efficiency.
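Studist has not published how Teachme AI works internally, but the chaptering-and-subtitling step it demonstrates can be sketched with off-the-shelf speech recognition. The example below is a minimal sketch assuming the open-source openai-whisper library; the pause-based chapter heuristic and the gap_threshold parameter are illustrative assumptions, not Studist's actual method.

```python
# A minimal sketch of automatic subtitling and chaptering using openai-whisper.
# This is an illustration of the general technique, not Teachme AI's pipeline.
import whisper


def chapter_and_subtitle(video_path: str, gap_threshold: float = 3.0):
    """Transcribe a video, then start a new chapter wherever the
    speaker pauses for longer than `gap_threshold` seconds."""
    model = whisper.load_model("base")
    result = model.transcribe(video_path)  # returns timed speech segments

    chapters, current = [], []
    last_end = 0.0
    for seg in result["segments"]:
        if current and seg["start"] - last_end > gap_threshold:
            chapters.append(current)   # long pause -> close the chapter
            current = []
        current.append((seg["start"], seg["end"], seg["text"].strip()))
        last_end = seg["end"]
    if current:
        chapters.append(current)
    return chapters  # each chapter is a list of (start, end, subtitle) cues
```

The resulting cue text could then be machine-translated into each target language, which is presumably where the multilingual step described above fits in.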

US senators introduce COPIED Act to combat intellectual property theft in creative industry

The Content Origin Protection and Integrity from Edited and Deepfaked Media Bill, also known as the COPIED Act, was introduced on 11 July 2024 by US Senators Marsha Blackburn, Maria Cantwell, and Martin Heinrich. The bill is intended to safeguard the intellectual property of creatives, particularly journalists, publishers, broadcasters, and artists.

In recent times, the work and images of creatives have been used or modified without consent, at times to generate income. The push for legislation in this area intensified in January after explicit AI-generated images of the US musician Taylor Swift surfaced on X.

According to the bill, images, videos, audio clips, and text are considered deepfakes if they contain ‘synthetic or synthetically modified content that appears authentic to a reasonable person and creates a false understanding or impression’. If enacted, the bill would apply to online platforms that serve US-based customers and either generate annual revenue of at least $50 million or register 25 million active users for three consecutive months.
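For illustration only, the coverage test just described can be restated as a simple predicate. The function below is a hypothetical paraphrase of the bill's thresholds, not statutory text; the parameter names are invented.

```python
def platform_covered(serves_us_customers: bool,
                     annual_revenue_usd: float,
                     monthly_active_users: list[int]) -> bool:
    """Hypothetical restatement of the COPIED Act's platform thresholds:
    a platform serving US customers is covered if it earns at least
    $50 million a year OR has 25 million active users for three
    consecutive months."""
    if not serves_us_customers:
        return False
    if annual_revenue_usd >= 50_000_000:
        return True
    # Look for any run of three consecutive months at or above 25 million users.
    run = 0
    for mau in monthly_active_users:
        run = run + 1 if mau >= 25_000_000 else 0
        if run >= 3:
            return True
    return False
```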

Under the bill, companies that deploy or develop AI models must provide a feature allowing users to tag such images with contextual or content provenance information, such as their source and history, in a machine-readable format. It would then be illegal to remove such tags for any reason other than research, or to use tagged images to train subsequent AI models or generate content. Victims would then have the right to sue offenders.
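The bill does not prescribe a concrete format, so the record below is a purely hypothetical sketch of what machine-readable ‘source and history’ metadata might look like, loosely inspired by existing content-provenance efforts; every field name is invented for illustration.

```python
import json
from datetime import datetime, timezone

# A hypothetical provenance record: the kind of machine-readable 'source and
# history' metadata the bill describes. Field names are invented, not taken
# from the bill or from any standard.
provenance = {
    "asset_id": "img-000123",
    "creator": "Example News Agency",
    "created_at": datetime(2024, 7, 11, tzinfo=timezone.utc).isoformat(),
    "synthetic": False,                      # would be True for AI output
    "history": [
        {"action": "captured", "tool": "camera"},
        {"action": "cropped", "tool": "photo-editor"},
    ],
}

print(json.dumps(provenance, indent=2))  # machine-readable, human-auditable
```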

The COPIED Act is backed by several artist-affiliated groups, including SAG-AFTRA, the National Music Publishers’ Association, the Songwriters Guild of America (SGA), and the National Association of Broadcasters, as well as the US National Institute of Standards and Technology (NIST), the US Patent and Trademark Office (USPTO), and the US Copyright Office. The bill has also received bipartisan support.

India’s antitrust body finds Apple abused App Store dominance

India’s antitrust body, the Competition Commission of India (CCI), has concluded its investigation into Apple’s practices within the Indian app market, finding that the tech giant engaged in abusive conduct. According to a confidential report viewed by Reuters, the CCI alleges Apple exploited its dominant position in the iOS app ecosystem by requiring developers to use its proprietary in-app purchase system. This requirement, the CCI asserts, limits competition and imposes unfair terms on developers who rely on Apple’s platform to reach consumers.

The 142-page report highlights Apple’s significant influence over the distribution of digital products and services through its App Store on iOS devices. It describes the App Store as a crucial channel for app developers, who must comply with Apple’s terms, including its billing and payment system. Both Apple and the CCI declined to comment on the report’s findings.

The CCI report marks a pivotal phase in India’s investigation, pending review by senior officials. It could result in fines and directives for Apple to revise its business practices. The case originated from complaints by a non-profit group and Indian startups, alleging Apple’s practices stifle competition and inflate costs for developers and consumers.

Why does this matter?

The investigation mirrors the heightened scrutiny Apple faces globally. In June, EU regulators accused Apple of breaching antitrust laws, potentially exposing it to substantial fines. Apple is also under investigation over new fees imposed on developers and has responded with plans to allow alternative app distribution in the EU under the Digital Markets Act.

The report underscores the regulatory pressure tech giants face worldwide, with similar antitrust actions targeting Google in India over its in-app payment policies. As the CCI deliberates its next steps, Apple’s market practices remain a focal point amid broader concerns over fair competition in the digital economy.

Musk’s X faces EU investigation for DSA violations

EU tech regulators have ruled that Elon Musk’s social media company, X, breached the EU’s online content rules. The European Commission’s decision follows a seven-month investigation under the Digital Services Act (DSA), which mandates that large online platforms and search engines tackle illegal content and address risks to public security. The Commission highlighted issues with X’s use of dark patterns, lack of advertising transparency, and restricted data access for researchers.

The investigation also found that X’s verified accounts, marked with a blue checkmark, do not adhere to industry standards, impairing users’ ability to verify account authenticity. X also falls short of the DSA requirement to provide a reliable, searchable advertisement repository, and the company has been accused of obstructing researchers from accessing its public data, in violation of the DSA.

Why does this matter?

X has several months to respond to these charges. The company could face a fine of up to 6% of its global turnover if found guilty. EU industry chief Thierry Breton stated that if the findings are confirmed, the Commission will impose fines and demand significant operational changes.

Meanwhile, the European Commission continues separate investigations into the dissemination of illegal content on X and the measures the platform has taken to counter disinformation. Similar investigations are also ongoing for other platforms, including ByteDance’s TikTok, AliExpress, and Meta Platforms.

Australia to enforce anti-scam laws on internet firms

Australia plans to introduce a law by the end of the year that will require internet companies to proactively stop hosting scams or face strict fines. The Australian Competition and Consumer Commission (ACCC) and the treasury department are working with internet, banking, and telecommunications firms to create a mandatory, enforceable anti-scam code. This code will obligate companies to take reasonable steps to protect users and provide effective complaint services.

Scams, including cryptocurrency scam advertisements featuring mining billionaire Andrew Forrest, have caused significant financial losses in Australia. Forrest is suing Meta in California for failing to act against these ads. From 2020 to 2023, the amount Australians lost to scams tripled to A$2.7 billion, mirroring global trends as more people turned to online activities during the pandemic.

Why does this matter?

The ACCC’s push for new laws aims to make all participating industries accountable. This restrictive legislation might put Australia in conflict with an industry that relies on US laws, which largely exempt it from responsibility. Previously, a law forcing internet companies to pay licensing fees to media companies led Meta to consider blocking media content on Facebook in Australia.

The proposed mandatory anti-scam codes, which the ACCC hopes to implement by the end of the year, would expose companies to fines of A$50 million, three times the benefit gained from the wrongdoing, or 30% of turnover at the time of the infraction. The ACCC is also suing Meta for failing to stop the publication of scam ads, with the case still in the pre-trial stage. Meta preferred a voluntary code, arguing that a mandatory code might stifle innovation.
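As a worked example, the three penalty bases listed above can be compared directly. The sketch assumes, as in comparable Australian penalty provisions, that the applicable maximum is the greatest of the three; that assumption, and the figures used, are illustrative only.

```python
# Worked example of the proposed penalty bases. The article lists three
# figures; comparable Australian penalty provisions take the greatest of
# them, which is assumed here.
def max_penalty_aud(benefit_gained: float, turnover: float) -> float:
    return max(50_000_000,          # flat A$50 million
               3 * benefit_gained,  # three times the benefit from the wrongdoing
               0.30 * turnover)     # 30% of turnover at the time of the infraction

# e.g. a firm that gained A$30m from scam ads on A$400m turnover:
print(f"A${max_penalty_aud(30e6, 400e6):,.0f}")  # A$120,000,000
```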

Vimeo introduces AI labelling for videos

Vimeo has joined TikTok, YouTube, and Meta in requiring creators to label AI-generated content. Announced on Wednesday, this new policy mandates that creators disclose when realistic content is produced using AI. The updated terms of service aim to prevent confusion between genuine and AI-created videos, addressing the challenge of distinguishing real from fake content due to advanced generative AI tools.

Not all AI usage requires labelling; animated content, videos with obvious visual effects, or minor AI production assistance are exempt. However, videos that feature altered depictions of celebrities or events must include an AI content label. Vimeo’s AI tools, such as those that edit out long pauses, will also prompt labelling.

Creators can manually indicate AI usage when uploading or editing videos, specifying whether AI was used for audio, visuals, or both. Vimeo plans to develop automated systems to detect and label AI-generated content to enhance transparency and reduce the burden on creators. CEO Philip Moyer emphasised the importance of protecting user-generated content from AI training models, aligning Vimeo with similar policies at YouTube.
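As a rough illustration of the disclosure flow, the audio/visuals choice described above maps naturally onto a small data model. The sketch below is hypothetical; Vimeo's actual schema and field names are not public.

```python
from dataclasses import dataclass

# A hypothetical data model for the disclosure described above: creators
# flag whether AI touched the audio, the visuals, or both. Field names are
# invented for illustration.
@dataclass
class AIContentLabel:
    ai_audio: bool = False
    ai_visuals: bool = False

    @property
    def requires_label(self) -> bool:
        return self.ai_audio or self.ai_visuals


label = AIContentLabel(ai_audio=False, ai_visuals=True)
print(label.requires_label)  # True -> video is shown with an AI content label
```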

Rising threat of deepfake pornography for women

As deepfake pornography becomes an increasing threat to women online, both international and domestic lawmakers face difficulties in creating effective protections for victims. The issue has gained prominence through cases like that of Amy Smith, a student in Paris who was targeted with manipulated nude images and harassed by an anonymous perpetrator. Despite reporting the crime to multiple authorities, Smith found little support due to the complexities of tracking faceless offenders across borders.

Recent data shows that deepfakes are predominantly used for malicious purposes, with 98% of deepfake videos being sexually explicit. The FBI has identified a rise in ‘sextortion schemes’, where altered images are used for blackmail. Public awareness of these crimes is often heightened by high-profile cases, but many victims are not celebrities and face immense challenges in seeking justice.

Efforts are underway to address these issues through new legislation. In the US, proposed bills aim to hold perpetrators accountable and require prompt removal of deepfake content from the internet. Additionally, President Biden’s recent executive order seeks to develop technology for detecting and tracking deepfake images. In Europe, the AI Act introduces regulations for AI systems but faces criticism for its limited scope. While these measures represent progress, experts caution that they may not fully prevent future misuse of deepfake technology.

US voters prefer cautious AI regulation over China race

A recent poll by the AI Policy Institute has shed light on strong public opinion in the United States regarding the regulation of AI.

Contrary to claims from the tech industry that strict regulations could hinder competition with China, a majority of American voters prioritise safety and control over the rapid development of AI. The poll reveals that 75% of both Democrats and Republicans prefer a cautious approach to AI development to prevent its misuse by adversaries.

The debate underscores growing concerns about national security and technological competitiveness. While China leads in AI patents, with over 38,000 registered compared to the US’s 6,300, Americans seem wary of sacrificing regulatory oversight in favour of expedited innovation.

Most respondents advocate for stringent safety measures and testing requirements to mitigate potential risks associated with powerful AI technologies.

Moreover, the poll highlights widespread support for restrictions on exporting advanced AI models to countries like China, reflecting broader apprehensions about technology transfer and national security. Despite the absence of comprehensive federal AI regulation in the US, states like California have begun to implement their own measures, prompting varied responses from tech industry leaders and policymakers alike.

Bumble fights AI scammers with new reporting tool

With scammers increasingly using AI-generated photos and videos on dating apps, Bumble has added a new feature that lets users report suspected AI-generated profiles. Users can now select ‘Fake profile’ and then choose ‘Using AI-generated photos or videos’ alongside other reporting options such as inappropriate content, underage users, and scams. By allowing users to report such profiles, Bumble aims to reduce the misuse of AI in creating misleading profiles.

In February this year, Bumble introduced ‘Deception Detector’, which combines AI and human moderators to detect and eliminate fake profiles and scammers. Following this measure, Bumble has seen a 45% overall reduction in reported spam and scams. Another notable feature is Bumble’s ‘Private Detector’ AI tool, which blurs unsolicited nude photos.

Risa Stein, Bumble’s VP of Product, emphasised the importance of creating a safe space and stated, ‘We are committed to continually improving our technology to ensure that Bumble is a safe and trusted dating environment. By introducing this new reporting option, we can better understand how bad actors and fake profiles are using AI disingenuously so our community feels confident in making connections.’

Meta will remove content in which ‘Zionist’ is used as a proxy term for antisemitism

Meta announced on Tuesday that it will begin removing more posts that target ‘Zionists’ when the term is used to refer to Jewish people and Israelis rather than supporters of the political movement. The decision is based on the recognition that a word can take on new meanings and become a proxy term for a nationality; Meta’s policies cover numerous ‘protected characteristics’, including nationality, race, and religion.

Previously, Meta’s approach has treated the word ‘Zionist’ as a proxy for Jewish or Israeli people in two specific cases: when Zionists are compared to rats, reflecting antisemitic imagery, and when context clearly indicates that the word means ‘Jew’ or ‘Israeli.’ Now, Meta will remove content attacking ‘Zionists’ when it is not explicitly about the political movement and when it uses certain antisemitic stereotypes, dehumanises, denies the existence of, or threatens or calls for harm or intimidation of ‘Jews’ or ‘Israelis.’

The policy change has been praised by the World Jewish Congress. Its president, Ronald S. Lauder, stated, ‘By recognizing and addressing the misuse of the term ‘Zionist,’ Meta is taking a bold stand against those who seek to mask their hatred of Jews.’ Meta has previously reported significant decreases in hate speech on its platforms.

A recurring question during consultations was how to handle comparisons of Zionists to criminals. Meta does not allow content that compares people with ‘protected characteristics’ to criminals, but it currently believes such comparisons can serve as shorthand for commentary on larger military actions. The issue has been referred to Meta’s Oversight Board. Meta consulted 145 stakeholders from civil society and academia across various global regions for this policy update.