OpenAI is rolling out a new task-scheduling feature in ChatGPT that allows paying users to set reminders and recurring requests directly with the AI assistant. Available to ChatGPT Plus, Team, and Pro users, the feature can handle practical tasks like sending reminders about passport expirations or offering personalised weekend plans based on the weather.
The task system represents OpenAI’s early venture into AI agents that can perform autonomous actions. Users can set tasks through ChatGPT’s web app by selecting the scheduling option from a dropdown menu. Once enabled, the assistant can deliver reminders or perform regular check-ins, such as providing daily news briefings or checking for concert tickets monthly.
While the feature currently offers limited independence, OpenAI sees it as a foundational step towards more capable AI systems. CEO Sam Altman hinted that 2025 will be a significant year for AI agents that may begin to handle more complex tasks, like booking travel or writing code. For now, ChatGPT’s task feature remains in beta, with plans to refine it based on user feedback.
Biotech startup Bioptimus has raised $41 million to develop an advanced AI model aimed at simulating biological processes. Dubbed the ‘GPT for biology,’ this technology seeks to predict disease outcomes and accelerate the discovery of new treatments by learning from vast datasets that span everything from molecules to entire organisms.
The funding round, led by US venture firm Cathay Innovation, highlights a growing global interest in AI-driven healthcare. The French company’s CEO, Jean-Philippe Vert, explained that Bioptimus uses a model akin to those powering chatbots like ChatGPT, but instead of generating text, it simulates complex biological interactions. The goal is to help researchers better understand disease mechanisms and improve treatments in sectors ranging from medicine to cosmetics.
Founded less than a year ago, Bioptimus has already launched H-Optimus-0, an open-source model that aids in diagnosing diseases such as cancer. With the latest funding, the company plans to expand its platform by integrating a broader range of data and forming new partnerships with biotech and pharmaceutical firms to drive innovation in healthcare.
France has become a hotbed for AI startups, with companies like Mistral AI and Hugging Face raising substantial funding. Bioptimus’s rapid rise highlights how specialised AI models are transforming industries beyond traditional tech sectors.
Microsoft has created a new internal division, CoreAI Platform and Tools, to accelerate its development of AI technologies. The restructuring brings together its developer teams and AI platform under one unit, aimed at making AI a central pillar of the company’s software strategy.
Jay Parikh, a former engineering leader at Meta and CEO of cloud security startup Lacework, will head the new organisation. Reporting directly to CEO Satya Nadella, Parikh will oversee various teams focused on AI infrastructure and tools. His appointment signals Microsoft’s continued push to lead in the fast-evolving AI space.
CoreAI’s formation reflects Microsoft’s increasing emphasis on “model-forward” applications, which Nadella described as reshaping software development across all categories. The company’s recent efforts include embedding AI tools across its productivity suite and cloud services, solidifying its place in the growing AI market.
This latest move builds on Microsoft’s broader strategy to remain a leader in AI innovation, following its high-profile partnership with OpenAI and ongoing investments in cloud-based AI solutions.
AI chip startup Blaize has announced plans to go public through a SPAC deal, which will see the company listed on Nasdaq with a valuation of $1.2 billion. Founded in 2011 by former Intel engineers, Blaize specialises in AI chips for edge devices such as drones, security cameras, and industrial robots. Unlike traditional data centre chips, its products are designed for real-world applications that prioritise low latency, power efficiency, and privacy.
The company has raised $335 million from prominent investors, including Samsung and Mercedes-Benz, and claims to have $400 million worth of deals in the pipeline. CEO Dinakar Munagala, who spent over a decade at Intel, emphasised that Blaize’s approach focuses on practical AI solutions for physical environments, differentiating the company from competitors like Nvidia, which primarily targets large-scale data centres.
Despite facing financial challenges, including a loss of $87.5 million in 2023, Blaize is betting on a future where AI chips are embedded into everyday devices. The startup is also involved in defence-related contracts, with one major deal involving AI systems capable of identifying troops and detecting drones, further highlighting its niche in edge computing.
Blaize’s IPO marks a significant shift in the AI chip industry, signalling investor interest in decentralised AI technologies that extend beyond traditional data centre applications.
OpenAI has called for increased US investment and supportive regulations to ensure American leadership in AI development and prevent China from gaining dominance in the sector. Its ‘Economic Blueprint’ outlines the need for strategic policies around AI resources, including chips, data, and energy.
The document highlights the risk of $175 billion in global funds shifting to China-backed projects if the US fails to attract those investments. OpenAI also proposed stricter export controls on AI models to prevent misuse by adversarial nations and protect national security.
CEO Sam Altman, who contributed $1 million to President-elect Donald Trump’s inaugural fund, seeks stronger ties with the incoming administration, which includes former PayPal executive David Sacks as AI and crypto czar. The company will host an event in Washington DC this month to promote its proposals.
Microsoft-backed OpenAI continues to seek further investment after raising $6.6 billion last year. The startup plans to transform into a for-profit entity to secure additional funding necessary for competing in the expensive AI race.
As war forced thousands of Lebanese families to flee their homes, mechanical engineer Hania Zataari developed an AI chatbot to streamline aid distribution. The tool, linked to WhatsApp, collects requests for essentials like food, blankets, and medicine, helping volunteers reach those in need more efficiently. With support from donors abroad, the project has delivered hundreds of aid packages to displaced families in Sidon and beyond.
Many displaced people have struggled to access government assistance, leaving volunteers to fill the gap. Economic turmoil has further strained resources, with aid organisations warning of severe funding shortages. Despite these challenges, the chatbot has helped distribute crucial supplies, with volunteers working tirelessly to match demand with available resources.
Researchers see potential in the technology but question its scalability in other regions. The chatbot’s success, they argue, lies in its local adaptation and cultural familiarity. While it cannot solve Lebanon’s crisis, for the families relying on it, the tool has made survival a little easier.
British Prime Minister Keir Starmer has announced an ambitious plan to position the UK as a global leader in AI. In a speech on Monday, Starmer outlined proposals to establish specialised zones for data centres and incentivise technology-focused education, aiming to boost economic growth and innovation. According to the government, fully adopting AI could increase productivity by 1.5% annually, adding £47 billion ($57 billion) to the economy each year over the next decade.
Central to the plan is the adoption of recommendations from the “AI Opportunities Action Plan,” authored by venture capitalist Matt Clifford. Measures include fast-tracking planning permissions for data centres and ensuring energy connections, with the first such centre to be built in Culham, Oxfordshire. Starmer emphasised the potential for AI to create jobs, attract investment, and improve lives by streamlining processes like planning consultations and reducing administrative burdens for teachers.
The UK, currently the third-largest AI market behind the US and China, faces stiff global competition in establishing itself as an AI hub. While Starmer pledged swift action to maintain competitiveness, challenges persist. The Labour government’s recent high-tax budget has dampened some business confidence, and the Bank of England reported stagnation in economic growth last quarter. However, Starmer remains optimistic, declaring, “We must move fast and take action.”
By integrating AI into its economic strategy, the UK hopes to capitalise on technological advancements, balancing innovation with regulatory oversight in an increasingly competitive global landscape.
The Japanese government is considering publicly disclosing the names of developers behind malicious artificial intelligence systems as part of efforts to combat disinformation and cyberattacks. The move, aimed at ensuring accountability, follows a government panel’s recommendation that stricter legal frameworks are necessary to prevent AI misuse.
The proposed bill, expected to be submitted to parliament soon, will focus on gathering information on harmful AI activities and encouraging developers to cooperate with government investigations. However, it will stop short of imposing penalties on offenders, amid concerns that harsh measures might discourage AI innovation.
Japan’s government may also share its findings with the public if harmful AI systems cause significant damage, such as preventing access to vital public services. While the bill aims to balance innovation with public safety, questions remain about how the government will decide what constitutes a “malicious” AI system and the potential impact on freedom of expression.
A US waste management firm has introduced AI-powered electric garbage trucks to reduce fire risks caused by improperly disposed lithium-ion batteries. The vehicles, showcased at the Consumer Electronics Show (CES) in Las Vegas, can detect batteries in rubbish loads before they reach recycling centres, preventing potential fires.
Lithium-ion batteries, commonly used in gadgets like phones and electric toothbrushes, are highly flammable and often slip through existing detection systems at recycling facilities. Fires linked to these batteries have caused significant damage, with several US recycling centres burning down annually. The new trucks allow drivers to flag sensitive collections and alert facilities in advance.
The advanced trucks, developed by industrial firm Oshkosh, also come with electric arm technology to speed up collections and AI software to spot contamination in recycling bins. These features help reduce risks, improve efficiency, and allow companies to hold customers accountable for improper recycling. Waste management officials see electrification as a key step, as garbage trucks typically travel shorter distances, making them ideal for battery-powered operation.
A group of authors, including Ta-Nehisi Coates and Sarah Silverman, has accused Meta Platforms of using pirated books to train its AI systems with CEO Mark Zuckerberg’s approval. Newly disclosed court documents filed in California allege that Meta knowingly relied on the LibGen dataset, which contains millions of pirated works, to develop its large language model, Llama.
The lawsuit, initially filed in 2023, claims Meta infringed on copyright by using the authors’ works without permission. The authors argue that internal Meta communications reveal concerns within the company about the dataset’s legality, which were ultimately overruled. Meta has not yet responded to the latest allegations.
The case is one of several challenging the use of copyrighted materials to train AI systems. While defendants in similar lawsuits have cited fair use, the authors contend that newly uncovered evidence strengthens their claims. They have requested permission to file an updated complaint, adding computer fraud allegations and revisiting dismissed claims related to copyright management information.
US District Judge Vince Chhabria has allowed the authors to file an amended complaint but expressed doubts about the validity of some new claims. The outcome of the case could have broader implications for how AI companies utilise copyrighted content in training data.