Google revises AI team’s mission statement, removing equity focus

Google has quietly updated the webpage for its Responsible AI and Human-Centred Technology team, removing references to diversity and equity

Terms such as ‘marginalised communities’ and ‘underrepresented groups’ have been replaced with more neutral language. The changes were first spotted by watchdog group The Midas Project, which previously reported similar edits to Google’s Startups Founders Fund page.

The company’s move comes amid a broader rollback of diversity, equity, and inclusion (DEI) initiatives across the tech industry. Google announced in February that it would end its diversity hiring targets and reassess its DEI programmes.

Other companies, including Amazon and Meta, have also scaled back diversity policies in response to legal and political pressures from the Trump administration, which has criticised such initiatives.

Federal contracts could be influencing these decisions, as many of the affected companies, including Google, work closely with United States agencies.

While some firms, such as OpenAI, have removed diversity language from hiring pages, Apple shareholders recently rejected a proposal to eliminate the company’s DEI programmes. The changes suggest a shifting landscape for corporate diversity efforts in the US tech sector.

US drops AI investment proposal against Google

The US Department of Justice (DOJ) has decided to drop its earlier proposal to force Alphabet, Google’s parent company, to sell its investments in AI companies, including its stake in Anthropic, a rival to OpenAI.

The proposal was originally included in a wider initiative to boost competition in the online search market. The DOJ now argues that restricting Google’s AI investments might lead to unintended consequences in the rapidly changing AI sector.

While this move represents a shift in the government’s approach, the DOJ and 38 state attorneys general are continuing their antitrust case against Google. They argue that Google holds an illegal monopoly in the search market and is distorting competition.

The government’s case includes demands for Google to divest its Chrome browser and implement other measures to foster competition.

Google has strongly opposed these efforts, stating that they would harm consumers, the economy, and national security. The company also plans to appeal.

As part of the ongoing scrutiny, the DOJ’s latest proposal mandates that Google notify the government of any future investments in generative AI, a move intended to curb further concentration of power in the sector.

This case is part of a broader wave of antitrust scrutiny facing major tech companies like Google, Apple, and Meta, as US regulators seek to rein in the market dominance of Big Tech.

Authors challenge Meta’s use of their books in AI training

A lawsuit filed by authors Richard Kadrey, Sarah Silverman, and Ta-Nehisi Coates against Meta has taken a significant step forward, with a federal judge ruling that the case will continue.

The authors allege that Meta used their books to train its Llama AI models without consent, violating their intellectual property rights.

They further claim that Meta intentionally removed copyright management information (CMI) from the works to conceal the alleged infringement.

Meta, however, defends its actions, arguing that the training of AI models qualifies as fair use and that the authors lack standing to sue.

Despite this, the judge allowed the lawsuit to move ahead, acknowledging that the authors’ claims suggest concrete injury, specifically regarding the removal of CMI to hide the use of copyrighted works.

While the lawsuit touches on several legal points, the judge dismissed claims related to the California Comprehensive Computer Data Access and Fraud Act, stating that there was no evidence of Meta accessing the authors’ computers or servers.

Meta’s defence team has continued to assert that the AI training practices were legally sound, though the ongoing case will likely provide more insight into the company’s stance on copyright.

The ruling adds to the growing list of copyright-related lawsuits involving AI models, including one filed by The New York Times against OpenAI. As the debate around AI and intellectual property rights intensifies, this case could set important precedents.

New York MTA partners with Google to detect track problems

The Metropolitan Transportation Authority (MTA) in New York City has partnered with Google Public Sector on a pilot programme designed to detect track defects before they cause significant disruptions. Using Google Pixel smartphones retrofitted onto subway cars, the system captured millions of sensor readings, GPS locations, and hours of audio to identify potential problems. The project aimed to improve the efficiency of the MTA’s response to track issues, potentially saving time and money while reducing delays for passengers.

The AI-powered program, called TrackInspect, analyses the sounds and vibrations from the subway to pinpoint areas that could signal defects, such as loose rails or worn joints. Data collected during the pilot, which ran from September 2024 to January 2025, showed that the AI system successfully identified 92% of defect locations found by human inspectors. The system was trained using feedback from MTA inspectors, helping refine its ability to predict track issues.
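To illustrate the general idea rather than the TrackInspect system itself (whose implementation details have not been published), the sketch below shows one simple way a phone-based sensing pilot could flag suspect track segments: split the vibration signal into fixed-length segments, compute each segment’s energy, and report segments that deviate sharply from a robust baseline. The function name, segment length, and threshold are illustrative assumptions.

```python
# Hypothetical illustration only -- not the TrackInspect implementation.
# Flags track segments whose vibration energy (RMS) deviates sharply from
# the median across all segments, a simple stand-in for the kind of anomaly
# detection a phone-based track-sensing pilot might start from.
import numpy as np

def flag_anomalous_segments(vibration: np.ndarray, segment_len: int = 1000,
                            threshold: float = 3.0) -> list[int]:
    """Return indices of segments whose RMS exceeds threshold * median RMS."""
    n_segments = len(vibration) // segment_len
    segments = vibration[:n_segments * segment_len].reshape(n_segments, segment_len)
    rms = np.sqrt((segments ** 2).mean(axis=1))      # energy per segment
    baseline = np.median(rms)                        # robust baseline across the run
    return [i for i, r in enumerate(rms) if r > threshold * baseline]

# Synthetic accelerometer-style data with one injected "defect" burst
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.1, 60_000)
signal[30_000:31_000] += rng.normal(0.0, 1.0, 1_000)  # simulated loose-rail rattle
print(flag_anomalous_segments(signal))                # -> [30]
```

A production system would go much further, for example combining audio, vibration, and GPS data and learning from inspector feedback rather than relying on a fixed threshold, but the core task of separating unusual signal segments from a noisy baseline is the same.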

While the pilot was considered a success, the future of the programme remains uncertain due to financial concerns at the MTA. Even so, the results have sparked interest from other transit systems looking to adopt similar AI-driven technologies to improve infrastructure maintenance and reduce delays. The MTA is now exploring other technological partnerships to strengthen its track monitoring and maintenance efforts.

Nagasaki University launches AI program for medical student training

Nagasaki University in southwestern Japan, in collaboration with a local systems development company, has unveiled a new AI program aimed at enhancing medical student training.

The program allows students to practise interviews with virtual patients on a screen, addressing the growing difficulty of securing simulated patients for training, especially in regional areas facing population decline.

In a demonstration earlier this month, an AI-powered virtual patient exhibited symptoms such as fever and cough, responding appropriately to questions from a medical student.

Scheduled for introduction by March 2026, the technology will allow students to interact with virtual patients of different ages and genders presenting a range of symptoms, enriching their learning experience.

The university plans to enhance the program with scoring and feedback functions to make the training more efficient and improve the quality of learning.

Shinya Kawashiri, an associate professor at the university’s School of Medicine, expressed hope that the system would lead to more effective study methods.

Toru Kobayashi, a professor at the university’s School of Information and Data Sciences, highlighted the program as a groundbreaking initiative in Japan’s medical education landscape.

China expands university enrolment to boost AI talent

China’s top universities are set to expand undergraduate enrolment to develop talent in key strategic fields, particularly AI.

The move follows the rapid rise of AI startup DeepSeek, which has drawn global attention for producing advanced AI models at a fraction of the usual cost.

The company’s success, largely driven by researchers from elite institutions in China, is seen as a major step in Beijing’s efforts to boost its homegrown STEM workforce.

Peking University announced it would add 150 undergraduate spots in 2025 to focus on national strategic needs, particularly in information science, engineering, and clinical medicine.

Renmin University will expand enrolment by over 100 places, aiming to foster innovation in AI. Meanwhile, Shanghai Jiao Tong University plans to add 150 spots dedicated to emerging technologies such as integrated circuits, biomedicine, and new energy.

This expansion aligns with China’s broader strategy to strengthen its education system and technological capabilities. In January, the government introduced a national action plan to enhance education efficiency and innovation by 2035.

Additionally, authorities plan to introduce AI education in primary and secondary schools to nurture digital skills and scientific curiosity from an early age.

Taco Bell parent company invests $1 billion in AI-powered restaurant technology

Taco Bell is ramping up its use of AI as part of a broader $1 billion investment by parent company Yum Brands in digital and technology.

At a recent investor event in New York, executives showcased the company’s ‘Byte by Yum’ AI tools, which aim to improve labour management and inventory tracking. Taco Bell’s Chief Digital and Technology Officer, Dane Mathews, said AI is already being used to streamline operations without reducing labour costs.

Around 500 Taco Bell locations in the United States now use AI-driven voice technology to handle drive-through orders, a significant increase from 100 locations in mid-2024.

During the investor event, executives presented a video skit demonstrating how AI could assist managers by suggesting staffing adjustments and optimising inventory. Analysts found the presentation both innovative and slightly unsettling, with Yum suggesting AI would help free up employees for other tasks rather than replace them.

Fast food chains are increasingly adopting AI to modernise operations, with companies like McDonald’s and Chipotle also investing in automation and digital tools. While Yum’s AI technology is currently used in nearly 25,000 of its 61,000 global restaurants, executives acknowledged there is still a long road ahead.

Analysts believe Yum may eventually commercialise its AI software beyond its own restaurant network. Taco Bell’s AI-driven strategy comes as the chain projects an 8% rise in same-store sales for the current quarter.

AI to support China’s social welfare system

China is stepping up the use of AI and big data in elderly and social care as it seeks to address economic challenges posed by a shrinking workforce and an ageing population.

Civil Affairs Minister Lu Zhiyuan announced the initiative at the ‘Two Sessions’ political gathering, highlighting efforts to make services more accessible and efficient.

The country’s population has declined for a third consecutive year, with over 310 million people now aged 60 and above.

Officials are increasingly turning to technology to drive future growth. Local governments have moved swiftly to integrate AI into public services, with DeepSeek’s chatbot gaining traction since its latest version was released in January.

Despite restrictions on AI chip sales imposed by the United States, DeepSeek’s cost-effective model has outperformed several Western competitors, reinforcing China’s position in AI development.

President Xi Jinping has reaffirmed the government’s support for AI, recently meeting with leaders from top technology firms, including DeepSeek, Tencent, Huawei, and Xiaomi.

The push for AI adoption in social welfare services reflects a broader strategy to maintain economic stability and innovation in the face of demographic challenges.

Reddit launches new tools to improve user engagement

Reddit has introduced new tools to help users follow community rules and track content performance, aiming to boost engagement on the platform. The update comes after a slowdown in user growth due to Google’s algorithm changes, though traffic from the search engine has since recovered.

Among the new features is a ‘rules check’ tool, currently being tested on smartphones, which helps users comply with subreddit guidelines. Additionally, a recovery option allows users whose submissions are removed to repost the content in alternative subreddits. Reddit will also suggest subreddits based on post content and clarify posting requirements for specific communities.

The company has enhanced its post insights feature, offering detailed engagement metrics to help users refine their content. This follows Reddit’s December launch of Reddit Answers, an AI-powered search tool designed to provide curated summaries of community discussions, which is still in beta testing.

Google acknowledges AI being used for harmful content

Google has reported receiving over 250 complaints globally about its AI software being used to create deepfake terrorist content, according to Australia’s eSafety Commission.

The tech giant also acknowledged dozens of user reports alleging that its AI program, Gemini, was being exploited to generate child abuse material. Under Australian law, companies must provide regular updates on their efforts to minimise harm or risk hefty fines.

The eSafety Commission described Google’s disclosure as a ‘world-first insight’ into how AI tools may be misused to produce harmful and illegal content.

Between April 2023 and February 2024, Google received 258 reports of suspected AI-generated extremist material and 86 related to child exploitation. However, the company did not specify how many of these reports were verified.

A Google spokesperson stated that the company strictly prohibits AI-generated content related to terrorism, child abuse, and other illegal activities.

While Google uses automated detection to remove AI-generated child exploitation material, the same system is not applied to extremist content.

The regulator has previously fined platforms such as X (formerly Twitter) and Telegram for failing to meet reporting requirements; both companies plan to appeal.

For more information on these topics, visit diplomacy.edu.