Tech giants clash over California AI legislation

California lawmakers are poised to vote on groundbreaking legislation aimed at regulating AI to prevent potentially catastrophic risks, such as AI systems manipulating the state’s electric grid or aiding the creation of chemical weapons. Spearheaded by Democratic state Sen. Scott Wiener, the bill targets AI systems with immense computing power, setting safety standards that apply only to models costing over $100 million to train.

Tech giants like Meta (Facebook) and Google strongly oppose the bill, arguing that it unfairly targets developers rather than those who misuse AI for harmful purposes. They contend that such regulations could stifle innovation and drive tech companies away from California, potentially fracturing the regulatory landscape.

While highlighting California’s role as a leader in AI adoption, Governor Gavin Newsom has not publicly endorsed the bill. His administration is concurrently exploring rules to combat AI discrimination in employment and housing, underscoring the dual challenges of promoting AI innovation while safeguarding against its misuse.

The proposed legislation has garnered support from prominent AI researchers and would establish a new state agency to oversee AI development practices and enforce compliance. Proponents argue that California must act swiftly to avoid repeating past regulatory oversights in the social media sector, despite concerns over regulatory overreach and its potential economic impact.

JR West to deploy humanoid robot for railway maintenance

West Japan Railway Co. (JR West) has announced plans to deploy a humanoid robot to undertake maintenance tasks along its railway tracks in the Kyoto-Osaka-Kobe region starting this July. The move aims to enhance efficiency and safety by delegating hazardous and physically demanding tasks to the robot.

Equipped with two arms, the robot will operate atop a construction vehicle and reach heights of up to 12 metres. It can handle objects weighing up to 40 kilograms and can be fitted with various tools, such as chainsaws and brushes, for different maintenance needs.

Operators will control the robot from inside the vehicle using goggles that display real-time camera feeds from the robot’s perspective. This setup enables precise control, with feedback mechanisms replicating the sensation of physically handling tools and objects.
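
The report does not describe the control software, but a first-person teleoperation setup of this kind reduces to a simple loop: camera frames stream from the robot to the operator’s goggles, and the operator’s tracked hand motions stream back as arm commands, with force readings relayed for feel. The sketch below illustrates that loop under stated assumptions; the headset and robot interfaces and the payload check are hypothetical stand-ins, not JR West’s implementation.

```python
# Hypothetical sketch of a first-person teleoperation loop like the one
# described above. The `headset` and `robot` interfaces are illustrative
# assumptions, not JR West's actual control stack.

from dataclasses import dataclass

MAX_PAYLOAD_KG = 40.0  # payload limit reported for the JR West robot


@dataclass
class ArmCommand:
    x: float           # target gripper position, metres
    y: float
    z: float
    grip_force: float  # 0.0 (open) to 1.0 (full grip)


def teleop_step(headset, robot) -> None:
    """Run one iteration of the operator-in-the-loop control cycle."""
    # 1. Robot -> operator: show the robot's camera view in the goggles.
    headset.display(robot.camera_frame())

    # 2. Operator -> robot: map the tracked hand pose to an arm command.
    pose = headset.hand_pose()
    robot.send(ArmCommand(pose.x, pose.y, pose.z, pose.grip))

    # 3. Safety and feedback: stop on overload, and relay gripper force
    #    so the operator "feels" contact with tools and objects.
    if robot.estimated_load_kg() > MAX_PAYLOAD_KG:
        robot.stop()
    headset.haptic(robot.gripper_force())
```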

JR West anticipates a significant reduction in staffing requirements for these tasks, estimating a decrease of around 30% with the introduction of the robot. Beyond internal use, the company plans to evaluate the robot’s effectiveness and explore opportunities to expand its deployment to other areas and potentially market it to external entities.

The robot’s introduction marks a strategic step for JR West towards leveraging advanced technology to improve operational efficiency and safety standards across its railway maintenance operations.

Japan unveils AI defence strategy

The Japanese Defence Ministry has unveiled its inaugural policy on promoting AI use, seeking to keep pace with technological change in defence operations. The policy focuses on seven key areas, including the detection and identification of military targets, command and control, and logistics support, with the goal of streamlining the ministry’s work.

The new policy highlights that AI can increase the speed of combat operations, reduce human error, and improve efficiency through automation. AI is also expected to support information gathering and analysis, unmanned defence assets, cybersecurity, and work efficiency. However, the policy acknowledges the limitations of AI, particularly in unprecedented situations, along with concerns regarding its credibility and potential misuse.

The Defence Ministry plans to secure human resources with cyber expertise to address these issues, introducing a specialised recruitment category in fiscal 2025. Defence Minister Minoru Kihara emphasised the importance of adapting to new forms of battle that rely on AI and cyber technologies, calling it a significant challenge in building up Japan’s defence capabilities.

Recognising the risks that accompany AI use, Kihara also stressed the need to identify and address the technology’s shortcomings accurately. The ministry aims to deepen cooperation with the private sector and relevant foreign agencies by proactively sharing its views and strategies.

Nvidia faces French antitrust charges over competition concerns

Nvidia is facing potential charges from the French antitrust regulator over allegations of anti-competitive behaviour, the first action by an enforcement agency against the chip giant. The scrutiny follows raids conducted last September in the graphics card sector, which specifically targeted Nvidia as part of a broader inquiry into cloud computing. The company’s prominence in AI and graphics chips, boosted by the popularity of applications like ChatGPT, has drawn regulatory attention in Europe and beyond.

While Nvidia and the French authority declined to comment, the European Commission is unlikely to expand its current review, deferring instead to the French investigation. Concerns highlighted by the French watchdog include Nvidia’s CUDA programming software, essential for accelerated computing on its GPUs, and the company’s investments in AI-focused cloud providers like CoreWeave. These developments amount to precautionary action against the risks of market dependence and weakened competition in the rapidly evolving AI sector.

Why does it matter?

Under French antitrust rules, companies found in violation face fines of up to 10% of their global annual turnover, though concessions can mitigate penalties. The US Department of Justice is simultaneously leading its own investigation into Nvidia, part of the broader scrutiny of Big Tech that it shares with the Federal Trade Commission. Nvidia’s regulatory challenges reflect worldwide scrutiny of its market dominance and strategic expansion in critical technology sectors.

YouTube implements rules for removing AI-generated videos that mimic individuals

YouTube has implemented new privacy guidelines allowing individuals to request the removal of AI-generated videos that imitate them. Initially promised in November 2023, these rules are now officially in effect, as confirmed by a recent update to YouTube’s privacy policies.

According to the updated guidelines, users can request the removal of content that realistically depicts a synthetic version of themselves, created or altered using AI. YouTube will evaluate such requests against several criteria, including whether the content is disclosed as altered or synthetic, whether the person is identifiable, how realistic the depiction is, and whether it serves a public interest such as parody or satire. Human moderators will handle complaints, and if a request is upheld, the uploader must delete the video within 48 hours or edit out the problematic parts.

These guidelines aim to protect individuals from potentially harmful content like deepfakes, which can easily mislead viewers. They are particularly relevant ahead of upcoming elections in countries such as France, the UK, and the US, where misuse of AI-generated videos could distort political discourse.

Anthropic launches grants for developing new AI benchmarks

Anthropic is launching a new program to fund the creation of benchmarks that better assess AI model performance and impact. In a blog post, Anthropic said it will offer grants to third-party organisations developing improved methods for evaluating the capabilities of advanced AI models.

Urging the AI research community to develop more rigorous benchmarks that address societal and security implications, Anthropic advocated revising existing methodologies through new tools, infrastructure, and methods. Highlighting its aim to develop an early warning system for identifying and assessing risks, the company specifically called for tests of a model’s ability to conduct cyberattacks, enhance weapons of mass destruction, and manipulate or deceive individuals.

Anthropic also intends the new program to support research into benchmarks and tasks that explore AI’s potential in scientific study, multilingual communication, bias mitigation, and self-censorship of toxicity. In addition to grants, researchers will have the chance to consult with the company’s domain experts. The company also expressed interest in investing in or acquiring the most promising projects, offering various ‘funding options tailored to the needs and stage of each project’.

Why does it matter?

A benchmark is a standardised way of evaluating the quality of an AI system. An evaluation is typically a fixed procedure for assessing a model’s capability, usually in a single area, whereas models like Anthropic’s Claude and OpenAI’s ChatGPT are designed to perform a host of tasks. Developing robust and reliable model evaluations is therefore complex and riddled with challenges. Anthropic’s initiative to support new AI benchmarks is commendable, and the program’s stated objective is to serve as a catalyst for progress towards a future in which comprehensive AI evaluation is an industry standard. However, given the company’s own commercial interests, the initiative may raise trust concerns.
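
To make the idea concrete, a benchmark usually amounts to a fixed task set plus a scoring rule that can be run identically against any model. The minimal sketch below assumes a hypothetical query_model stub and toy tasks; it illustrates the general pattern, not Anthropic’s methodology.

```python
# A minimal, hypothetical benchmark harness: a fixed task set plus a
# scoring rule, run identically against any model. The toy tasks, the
# naive grading, and query_model() are illustrative assumptions.

def query_model(prompt: str) -> str:
    """Stand-in for a call to any chat model's API."""
    raise NotImplementedError("wire this up to a real model endpoint")


# A benchmark is essentially a frozen list of inputs with expected outputs.
TASKS = [
    {"prompt": "What is 17 * 23?", "expected": "391"},
    {"prompt": "Name the capital of Australia.", "expected": "Canberra"},
]


def run_benchmark(tasks: list[dict]) -> float:
    """Score a model on the fixed task set; returns accuracy in [0, 1]."""
    correct = 0
    for task in tasks:
        answer = query_model(task["prompt"])
        # Naive substring grading keeps the sketch short; real evaluations
        # use stricter exact-match or model-graded scoring.
        if task["expected"].lower() in answer.lower():
            correct += 1
    return correct / len(tasks)
```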

Google requires disclosure for election ads with altered content

Google announced that it will require advertisers to disclose election ads that use digitally altered content depicting real or realistic-looking people or events, in a bid to combat misinformation during elections. The latest update to Google’s political content policy mandates that advertisers select a checkbox for ‘altered or synthetic content’ within their campaign settings.

The proliferation of generative AI, capable of rapidly creating text, images, and video, has sparked concerns over potential misuse. Deepfakes, which convincingly manipulate content to misrepresent individuals, have further blurred the distinction between fact and fiction in digital media.

To implement these changes, Google will automatically generate an in-ad disclosure for feeds and Shorts on mobile devices and for in-stream ads on computers and televisions. For other ad formats, advertisers must themselves provide a disclosure that is prominently displayed and clearly visible to users. According to Google, the exact wording of these disclosures will vary with the context of each advertisement.

Why does it matter?

Earlier this year, during India’s general election, fake videos featuring Bollywood actors surfaced online, criticising Prime Minister Narendra Modi and urging support for the opposition Congress party. The incident highlighted the growing challenge of combating deceptive content amplified by AI-generated media.

In a related effort, OpenAI, led by Sam Altman, reported disrupting five covert influence operations in May that aimed to manipulate public opinion using AI models across various online platforms. Meta Platforms had previously committed to similar transparency measures, requiring advertisers on Facebook and Instagram to disclose the use of AI or digital tools in creating political, social, or election-related ads.

UN adopts China-led AI resolution

The UN General Assembly has adopted a resolution on AI capacity building, led by China. This non-binding resolution seeks to enhance developing countries’ AI capabilities through international cooperation and capacity-building initiatives. It also urges international organisations and financial institutions to support these efforts.

The resolution comes in the context of the ongoing technology rivalry between Beijing and Washington, as both nations strive to influence AI governance and portray each other as destabilising forces. Earlier this year, the US promoted a UN resolution advocating for ‘safe, secure, and trustworthy’ AI systems, gaining the support of over 110 countries, including China.

China’s resolution acknowledges the UN’s role in AI capacity building and calls on Secretary-General António Guterres to report on the unique challenges developing countries face and to provide recommendations for addressing them.

Connecticut launches AI Academy to boost tech skills

Connecticut is spearheading AI education efforts by developing what could be the nation’s first Citizens AI Academy. The free online resource aims to offer classes for learning basic AI skills and for obtaining employment-related certificates.

Democratic Senator James Maroney of Connecticut emphasised the need for continuous learning in this rapidly evolving field. Determining the essential skills for an AI-driven world is challenging, given the technology’s swift progression and varied expert opinions. Gregory LaBlanc of Berkeley Law School suggested that workers should focus on managing and utilising AI to complement its capabilities, rather than on understanding its technical intricacies.

Several states, including Connecticut, California, Mississippi, and Maryland, have proposed legislation addressing AI in education. For instance, California is considering incorporating AI literacy into school curricula to ensure students understand AI principles, recognise its use, and appreciate its ethical implications. Connecticut’s AI Academy plans to offer certificates for career-related skills and provide foundational knowledge, from digital literacy to interacting with chatbots.

Despite the push for AI education, concerns about the digital divide persist. Senator Maroney highlighted the potential disadvantage facing those who lack basic digital skills or access to technology. Marvin Venay of Bring Tech Home and Tesha Tramontano-Kelly of CfAL for Digital Inclusion stressed the importance of affordable internet and devices as prerequisites for effective AI education. Ensuring these fundamentals is crucial to equipping individuals with the tools they need to thrive in an AI-driven future.

Chance the Rapper teams up with Meta for AI-driven creativity

Grammy Award-winning musician and producer Chance the Rapper is known for his innovative approach to music and fashion. Recently, he has teamed up with Meta for their Super Fan event, showcasing his interest in cutting-edge technology, particularly Meta AI. The collaboration highlights how AI transforms various aspects of his work, from engaging fans to creating music and fashion.

Chance has long been a pioneer in using digital platforms to connect with fans and distribute his music. With the advent of AI, he is now pushing the boundaries of creativity even further. He likens AI to the patchwork denim look he sported at the Meta event, describing it as an amalgamation of different design patterns. The comparison underscores his view of AI as a tool for combining diverse elements to create something unique.

The Meta AI suite, integrated into platforms like Instagram and Facebook, allows Chance to explore new artistic directions. He uses these tools to experiment with music production, generate unique soundscapes, and refine his musical style. Chance also finds inspiration on Instagram, drawing from various topics and incorporating these influences into his work.

Additionally, Chance sees potential in Meta’s new Ray-Ban Meta smart glasses, which offer responsive technology for engaging with people and capturing photos. By leveraging AI tools, he enhances his artistic process, engages more effectively with fans, and supports initiatives like the growth of women’s sports. As he prepares to release his new project, ‘Star Line Gallery,’ Chance the Rapper continues to inspire and innovate in the realms of music and fashion.