OpenAI has called for increased US investment and supportive regulations to ensure leadership in AI development and prevent China from gaining dominance in the sector. Its ‘Economic Blueprint’ outlines the need for strategic policies around AI resources, including chips, data, and energy.
The document highlights the risk of $175 billion in global funds shifting to China-backed projects if the US fails to attract those investments. OpenAI also proposed stricter export controls on AI models to prevent misuse by adversarial nations and protect national security.
CEO Sam Altman, who contributed $1 million to President-elect Donald Trump’s inaugural fund, seeks stronger ties with the incoming administration, which includes former PayPal executive David Sacks as AI and crypto czar. The company will host an event in Washington DC this month to promote its proposals.
Microsoft-backed OpenAI continues to seek further investment after raising $6.6 billion last year. The startup plans to convert into a for-profit entity to secure the additional funding needed to compete in the expensive AI race.
As war forced thousands of Lebanese families to flee their homes, mechanical engineer Hania Zataari developed an AI chatbot to streamline aid distribution. The tool, linked to WhatsApp, collects requests for essentials like food, blankets, and medicine, helping volunteers reach those in need more efficiently. With support from donors abroad, the project has delivered hundreds of aid packages to displaced families in Sidon and beyond.
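The article does not describe the chatbot's internals, but the intake step it sketches (receive a text message, work out which essentials are being requested, and queue the request for volunteers) can be illustrated with a minimal keyword-triage sketch. Every name, category, and keyword below is hypothetical, not taken from Zataari's system:

```python
from dataclasses import dataclass, field

# Illustrative aid categories; the real chatbot's taxonomy is not public.
CATEGORIES = {
    "food": {"food", "bread", "rice", "milk"},
    "blankets": {"blanket", "blankets", "bedding"},
    "medicine": {"medicine", "medication", "insulin"},
}

@dataclass
class AidRequest:
    sender: str
    text: str
    categories: list = field(default_factory=list)

def triage(sender: str, text: str) -> AidRequest:
    """Tag an incoming message with the aid categories it mentions."""
    words = set(text.lower().split())
    req = AidRequest(sender, text)
    for category, keywords in CATEGORIES.items():
        if words & keywords:
            req.categories.append(category)
    return req

# A volunteer-facing queue of tagged requests.
queue = [triage("sender-1", "We need blankets and medicine for the children")]
print(queue[0].categories)  # → ['blankets', 'medicine']
```

In a real deployment the messages would arrive through a WhatsApp integration rather than hard-coded strings, and the triage step would likely be far richer than keyword matching.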
Many displaced people have struggled to access government assistance, leaving volunteers to fill the gap. Economic turmoil has further strained resources, with aid organisations warning of severe funding shortages. Despite these challenges, the chatbot has helped distribute crucial supplies, with volunteers working tirelessly to match demand with available resources.
Researchers see potential in the technology but question its scalability in other regions. The chatbot’s success, they argue, lies in its local adaptation and cultural familiarity. While it cannot solve Lebanon’s crisis, for the families relying on it, the tool has made survival a little easier.
British Prime Minister Keir Starmer has announced an ambitious plan to position the UK as a global leader in AI. In a speech on Monday, Starmer outlined proposals to establish specialised zones for data centres and incentivise technology-focused education, aiming to boost economic growth and innovation. According to the government, fully adopting AI could increase productivity by 1.5% annually, adding £47 billion ($57 billion) to the economy each year over the next decade.
Central to the plan is the adoption of recommendations from the “AI Opportunities Action Plan,” authored by venture capitalist Matt Clifford. Measures include fast-tracking planning permissions for data centres and ensuring energy connections, with the first such centre to be built in Culham, Oxfordshire. Starmer emphasised the potential for AI to create jobs, attract investment, and improve lives by streamlining processes like planning consultations and reducing administrative burdens for teachers.
The UK, currently the third-largest AI market behind the US and China, faces stiff global competition in establishing itself as an AI hub. While Starmer pledged swift action to maintain competitiveness, challenges persist. The Labour government’s recent high-tax budget has dampened some business confidence, and the Bank of England reported stagnation in economic growth last quarter. However, Starmer remains optimistic, declaring, “We must move fast and take action.”
By integrating AI into its economic strategy, the UK hopes to capitalise on technological advancements, balancing innovation with regulatory oversight in an increasingly competitive global landscape.
The Japanese government is considering publicly disclosing the names of developers behind malicious artificial intelligence systems as part of efforts to combat disinformation and cyberattacks. The move, aimed at ensuring accountability, follows a government panel’s recommendation that stricter legal frameworks are necessary to prevent AI misuse.
The proposed bill, expected to be submitted to parliament soon, will focus on gathering information on harmful AI activities and encouraging developers to cooperate with government investigations. However, it will stop short of imposing penalties on offenders, amid concerns that harsh measures might discourage AI innovation.
Japan’s government may also share its findings with the public if harmful AI systems cause significant damage, such as preventing access to vital public services. While the bill aims to balance innovation with public safety, questions remain about how the government will decide what constitutes a “malicious” AI system and the potential impact on freedom of expression.
A US waste management firm has introduced AI-powered electric garbage trucks to reduce fire risks caused by improperly disposed lithium-ion batteries. The vehicles, showcased at the Consumer Electronics Show (CES) in Las Vegas, can detect batteries in rubbish loads before they reach recycling centres, preventing potential fires.
Lithium-ion batteries, commonly used in gadgets like phones and toothbrushes, are highly flammable and often slip through existing detection systems at recycling facilities. Fires linked to these batteries have caused significant damage, with several US recycling centres burning down annually. The new trucks allow drivers to flag sensitive collections and alert facilities in advance.
The advanced trucks, developed by industrial firm Oshkosh, also come with electric arm technology to speed up collections and AI software to spot contamination in recycling bins. These features help reduce risks, improve efficiency, and allow companies to hold customers accountable for improper recycling. Waste management officials see electrification as a key step, as garbage trucks typically travel shorter distances, making them ideal for battery-powered operation.
A group of authors, including Ta-Nehisi Coates and Sarah Silverman, has accused Meta Platforms of using pirated books to train its AI systems with CEO Mark Zuckerberg’s approval. Newly disclosed court documents filed in California allege that Meta knowingly relied on the LibGen dataset, which contains millions of pirated works, to develop its large language model, Llama.
The lawsuit, initially filed in 2023, claims Meta infringed on copyright by using the authors’ works without permission. The authors argue that internal Meta communications reveal concerns within the company about the dataset’s legality, which were ultimately overruled. Meta has not yet responded to the latest allegations.
The case is one of several challenging the use of copyrighted materials to train AI systems. While defendants in similar lawsuits have cited fair use, the authors contend that newly uncovered evidence strengthens their claims. They have requested permission to file an updated complaint, adding computer fraud allegations and revisiting dismissed claims related to copyright management information.
US District Judge Vince Chhabria has allowed the authors to file an amended complaint but expressed doubts about the validity of some new claims. The outcome of the case could have broader implications for how AI companies utilise copyrighted content in training data.
French startup Rounded is developing an orchestration platform that allows companies to create their own AI voice agents. Initially focused on Web3, the company shifted its attention to AI-powered voice interactions in mid-2023. Its first product, Donna, was designed for anesthetists, helping private hospitals handle large volumes of routine patient calls. The AI agent has already managed hundreds of thousands of conversations, improving in speed and accuracy over time.
After refining its technology, Rounded expanded its focus to offer a platform where businesses can build their own AI voice agents. Users can integrate various AI components, such as speech-to-text and text-to-speech engines, selecting from providers like Azure and ElevenLabs or models like GPT-4o mini. The platform also helps define prompts and parameters to optimise each agent's performance for specific use cases.
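Rounded's actual API is not public, but the component-selection model the paragraph describes (a speech-to-text engine, a language model driven by a use-case prompt, and a text-to-speech voice, chained once per conversational turn) can be sketched as follows; all class names and stub components here are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VoiceAgent:
    # Pluggable components, as on an orchestration platform:
    stt: Callable[[bytes], str]     # e.g. an Azure speech-to-text client
    llm: Callable[[str, str], str]  # e.g. a GPT-4o mini chat call
    tts: Callable[[str], bytes]     # e.g. an ElevenLabs voice
    prompt: str                     # use-case instructions, tuned per agent

    def handle_turn(self, audio_in: bytes) -> bytes:
        """One conversational turn: transcribe, reason, then speak."""
        transcript = self.stt(audio_in)
        reply = self.llm(self.prompt, transcript)
        return self.tts(reply)

# Stub lambdas stand in for real provider clients in this sketch.
agent = VoiceAgent(
    stt=lambda audio: "I need to reschedule my appointment",
    llm=lambda prompt, text: "Of course, which date suits you?",
    tts=lambda text: text.encode(),
    prompt="You are a polite scheduling assistant for a clinic.",
)
print(agent.handle_turn(b"caller-audio").decode())
```

Swapping the stubs for real provider clients, handling latency, and tuning the per-agent prompt is precisely where a platform like Rounded's would add value.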
The startup has secured €600,000 in funding from UC Berkeley’s SkyDeck accelerator and business investors. With growing interest in AI-powered customer interactions, Rounded is poised to attract further investment as it expands its product offering.
AI is transforming the way new medicines are developed, with AI-powered drug discovery advancing at an unprecedented pace. Insilico Medicine, a US-based biotech firm, has designed an experimental treatment for idiopathic pulmonary fibrosis (IPF), using AI both to identify a potential drug target and to generate candidate molecules. The approach significantly reduced the time and resources needed, with the drug discovery process taking 18 months instead of the usual four years.
AI-driven methods are being adopted by both startups and major pharmaceutical companies to accelerate drug development. Insilico Medicine has multiple drug candidates in clinical trials, while Recursion Pharmaceuticals is using AI to analyse vast biological datasets and uncover new treatment possibilities. A molecule designed by Recursion to target lymphoma and solid tumours has already entered human trials, demonstrating the growing potential of AI in medical research.
Despite the progress, experts note that AI-discovered drugs have yet to complete full clinical trials. The technology faces challenges, particularly in data availability and bias, but researchers remain optimistic. As AI continues to refine drug discovery, many believe it will lead to faster, more cost-effective treatments and a higher success rate in bringing new medicines to market.
Google is testing a new feature called “Daily Listen,” which generates personalised AI-powered podcasts based on users’ Discover feeds. The feature, currently rolling out to US users in the Search Labs experiment, provides a five-minute audio summary of topics tailored to individual interests. Each podcast includes links to related stories, allowing listeners to explore subjects in greater depth.
The experience is integrated with Google’s Discover and Search tools, using followed topics to refine content recommendations. Daily Listen functions similarly to NotebookLM’s Audio Overviews, which create AI-generated audio summaries based on shared documents. Users who have access to the feature will see a “Daily Listen” card on their Google app’s home screen, displaying a play button and episode length.
Once launched, the podcast plays alongside a rolling transcript, offering a seamless blend of text and audio. Google aims to enhance how users consume news and stay informed, making the experience more interactive and personalised. The feature reflects the company’s ongoing push into AI-driven content delivery.
Google is merging additional AI teams into its DeepMind division to speed up innovation in AI technologies. Logan Kilpatrick, head of product for Google’s AI Studio, confirmed that both the AI Studio team and the Gemini API developers would now operate under DeepMind.
Google DeepMind, created in 2023 from the merger of Google Brain and DeepMind, has played a central role in Google’s AI advancements, including the Gemini model series. Kilpatrick stated the restructuring would strengthen collaboration and accelerate progress in making research tools available to developers.
Engineer Jaana Dogan highlighted that the move would make DeepMind’s tools more publicly accessible, with better APIs, open-source contributions, and enhanced developer resources planned. This shift follows earlier integrations of the Gemini chatbot and responsible AI teams into DeepMind as part of Google’s ongoing strategy.
CEO Sundar Pichai previously described Gemini as gaining strong momentum while stressing the need to move faster in 2025 to close competitive gaps. Scaling Gemini for consumers will be a primary focus next year.