ChatGPT is rolling out a new task-scheduling feature that allows paying users to set reminders and recurring requests directly with the AI assistant. Available to ChatGPT Plus, Team, and Pro users, the feature can handle practical tasks like sending reminders about passport expirations or offering personalised weekend plans based on the weather.
The task system represents OpenAI’s early venture into AI agents that can perform autonomous actions. Users can set tasks through ChatGPT’s web app by selecting the scheduling option from a dropdown menu. Once enabled, the assistant can deliver reminders or perform regular check-ins, such as providing daily news briefings or checking for concert tickets monthly.
While the feature currently offers limited independence, OpenAI sees it as a foundational step towards more capable AI systems. CEO Sam Altman hinted that 2025 will be a significant year for AI agents that may begin to handle more complex tasks, like booking travel or writing code. For now, ChatGPT’s task feature remains in beta, with plans to refine it based on user feedback.
A group of authors, including Ta-Nehisi Coates and Sarah Silverman, has accused Meta Platforms of using pirated books to train its AI systems, with CEO Mark Zuckerberg’s approval. Newly disclosed court documents filed in California allege that Meta knowingly relied on the LibGen dataset, which contains millions of pirated works, to develop its large language model, Llama.
The lawsuit, initially filed in 2023, claims Meta infringed the authors’ copyrights by using their works without permission. The authors argue that internal Meta communications reveal concerns within the company about the dataset’s legality, concerns that were ultimately overruled. Meta has not yet responded to the latest allegations.
The case is one of several challenging the use of copyrighted materials to train AI systems. While defendants in similar lawsuits have cited fair use, the authors contend that newly uncovered evidence strengthens their claims. They have requested permission to file an updated complaint, adding computer fraud allegations and revisiting dismissed claims related to copyright management information.
US District Judge Vince Chhabria has allowed the authors to file an amended complaint but expressed doubts about the validity of some new claims. The outcome of the case could have broader implications for how AI companies utilise copyrighted content in training data.
Google and Microsoft have each pledged $1 million to support Donald Trump’s upcoming presidential inauguration, joining other tech giants such as Meta and Amazon, as well as Apple CEO Tim Cook, in contributing significant sums. The donations appear to be part of broader strategies by these companies to maintain access to political leadership in a rapidly changing regulatory environment.
Google, which has faced threats from Trump regarding potential break-ups, aims to secure goodwill through financial contributions and online visibility, including a YouTube livestream of the inauguration. Microsoft has likewise maintained steady political donations, having previously given $500,000 each to Trump’s first inauguration and to President Joe Biden’s ceremony.
This alignment with Trump marks a notable trend of tech companies seeking to protect their interests, particularly as issues like antitrust regulations and data privacy laws remain in political crosshairs. With both tech giants navigating a landscape of increased government scrutiny, their contributions indicate a cautious approach to preserving influence at the highest levels of power.
These donations reflect a pragmatic move by Silicon Valley, where cultivating political ties is seen as a way to safeguard business operations amid shifting political dynamics.
Elon Musk’s AI company, xAI, is preparing to launch a controversial feature for its chatbot, Grok, called ‘Unhinged Mode’. According to a recently updated FAQ on the Grok website, this mode will deliver responses that are intentionally provocative, offensive, and irreverent, mimicking an amateur stand-up comedian pushing boundaries.
Musk first teased the idea of an unfiltered chatbot nearly a year ago, describing Grok as a tool that would answer controversial questions without self-censorship. While Grok has already been known for its edgy responses, it currently avoids politically sensitive topics. The new mode appears to be an effort to deliver on Musk’s vision of an anti-‘woke’ AI assistant, standing apart from more conservative competitors like OpenAI’s ChatGPT.
The move comes amid ongoing debates about political bias in AI systems. Musk has previously claimed that most AI tools lean left due to their reliance on web-based training data. He has vowed to make Grok politically neutral, blaming the internet’s content for any perceived bias in the chatbot’s current outputs. Critics, however, worry that unleashing an unfiltered mode could lead to harmful or offensive outputs, raising questions about the responsibility of AI developers.
As Grok continues to evolve, the AI industry is closely watching how users respond to Musk’s push for a less restrained chatbot. Whether this will prove a success or ignite further controversy remains to be seen.
Elon Musk has echoed concerns from AI researchers that the industry is running out of new, real-world data to train advanced models. Speaking during a livestream with Stagwell’s Mark Penn, Musk said that AI systems have already processed most of the available human knowledge, a plateau he believes was reached last year.
To address the issue, AI developers are increasingly turning to synthetic data, information generated by the AI itself, to continue training models. Musk argued that self-generated data will allow AI systems to improve through self-learning, with major players like Microsoft, Google, and Meta already incorporating this approach in their AI models.
While synthetic data offers cost-saving advantages, it also poses risks. Some experts warn it could cause “model collapse”, a degenerative process in which models trained on their own outputs gradually lose diversity, becoming less creative and reinforcing the biases and errors of earlier generations. As the AI sector pivots towards self-generated training material, the challenge lies in balancing innovation with reliability.
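The collapse dynamic can be illustrated with a deliberately simplified toy model, far removed from any real LLM training pipeline. In the hypothetical Python sketch below, a one-dimensional Gaussian ‘model’ is repeatedly refit to samples drawn from its own previous generation; its fitted spread drifts towards zero, a statistical analogue of a model losing creativity by training on its own outputs.

```python
# Toy illustration of "model collapse" (a hypothetical sketch, not any
# lab's actual pipeline): a Gaussian "model" is repeatedly retrained on
# data sampled from its own previous generation, and its spread shrinks.
import numpy as np

rng = np.random.default_rng(seed=0)
n_samples = 100  # training-set size per generation
data = rng.normal(loc=0.0, scale=1.0, size=n_samples)  # "real-world" data

for generation in range(1, 1001):
    # "Train" the model: fit mean and spread to the current dataset.
    # np.std defaults to the MLE estimator, which is biased slightly low,
    # so a small error compounds across generations.
    mu, sigma = data.mean(), data.std()
    # Next generation trains purely on synthetic samples from this fit.
    data = rng.normal(mu, sigma, n_samples)
    if generation % 200 == 0:
        print(f"generation {generation}: fitted std = {sigma:.4f}")
# The printed spread decays towards zero: each generation inherits and
# amplifies the sampling bias of the one before it.
```

In practice, developers are understood to mitigate this effect by blending synthetic data with fresh human-generated data and filtering it for quality, rather than training on model outputs alone.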
Elon Musk’s AI company, xAI, has launched a standalone iOS app for its chatbot, Grok, marking a major expansion beyond its initial availability to X users. The app is now live in several countries, including the US, Australia, and India, allowing users to access the chatbot directly on their iPhones.
The Grok app offers features such as real-time data retrieval from the web and X, text rewriting, summarising long content, analysing uploaded images, and generating images from text prompts. xAI describes Grok as a ‘maximally truthful and curious’ assistant and highlights its ability to create photorealistic images with minimal restrictions, including depictions of public figures and copyrighted material.
In addition to the app, xAI is working on a dedicated website, Grok.com, which will soon make the chatbot available on browsers. Initially limited to X’s paying subscribers, Grok rolled out a free version in November and made it accessible to all users earlier this month. The launch marks a notable push by xAI to establish Grok as a versatile, widely available AI assistant.
US safety regulators are investigating Tesla’s ‘Actually Smart Summon’ feature, which allows drivers to move their cars remotely without being inside the vehicle. The probe follows reports of crashes involving the technology, including at least four confirmed incidents.
The US National Highway Traffic Safety Administration (NHTSA) is examining nearly 2.6 million Tesla cars sold since 2016 that are equipped with the feature. The agency noted reports of the cars failing to detect obstacles, such as posts and parked vehicles, while using the technology.
Tesla has not commented on the investigation. Chief executive Elon Musk has been a vocal supporter of self-driving innovations, insisting they are safer than human drivers. However, this probe, along with other ongoing investigations into Tesla’s Autopilot features, could result in recalls and increased scrutiny of the firm’s driverless systems.
The NHTSA will assess how fast cars can move in Smart Summon mode and what safeguards are in place to prevent use on public roads. Tesla’s manual advises drivers to operate the feature only in private areas with a clear line of sight, but concerns remain over how safely it performs in real-world conditions.
A prominent technology trade group has urged the Biden administration to reconsider a proposed rule that would restrict global access to US-made AI chips, warning that the measure could undermine America’s leadership in the AI sector. The Information Technology Industry Council (ITI), representing major companies like Amazon, Microsoft, and Meta, expressed concerns that the restrictions could unfairly limit US companies’ ability to compete globally while allowing foreign rivals to dominate the market.
The proposed rule, expected to be released as soon as Friday, is part of the Commerce Department’s broader strategy to regulate AI chip exports and prevent misuse, particularly by adversaries like China. The restrictions aim to curb the potential for AI to enhance China’s military capabilities. However, in a letter to Commerce Secretary Gina Raimondo, ITI CEO Jason Oxman criticised the administration’s urgency in finalising the rule, warning of ‘significant adverse consequences’ if implemented hastily. Oxman called for a more measured approach, such as issuing a proposed rule for public feedback rather than enacting an immediate policy.
Industry leaders have been vocal in their opposition, describing the draft rule as overly broad and damaging. The Semiconductor Industry Association raised similar concerns earlier this week, and Oracle’s Executive Vice President Ken Glueck slammed the measure as one of the most disruptive ever proposed for the US tech sector. Glueck argued the rule would impose sweeping regulations on the global commercial cloud industry, stifling innovation and growth.
While the administration has yet to comment on the matter, the growing pushback highlights the tension between safeguarding national security and maintaining US dominance in the rapidly evolving field of AI.
The NeurIPS conference, AI’s premier annual gathering, drew over 16,000 computer scientists to British Columbia last week, highlighting the field’s rapid growth and transformation. Once an intimate meeting of academic outliers, the event has evolved into a showcase for technological breakthroughs and corporate ambitions, featuring major players like Alphabet, Meta, and Microsoft.
Industry luminaries like Ilya Sutskever and Fei-Fei Li discussed AI’s evolving challenges. Sutskever emphasised AI’s unpredictability as it learns to reason, while Li called for expanding beyond 2D internet data to develop “spatial intelligence.” The conference, delayed a day to avoid clashing with a Taylor Swift concert, underscored AI’s growing mainstream prominence.
Venture capitalists, sponsors, and tech giants flooded the event, reflecting AI’s lucrative appeal. The number of research papers accepted has surged tenfold in a decade, and discussions focused on tackling the costs and limitations of scaling AI models. Notable attendees included Meta’s Yann LeCun and Google DeepMind’s Jeff Dean, who advocated for ‘modular’ and ‘tangly’ AI architectures.
In a symbolic moment for AI’s widening reach, 10-year-old Harini Shravan became the youngest person ever to have a paper accepted at the conference, illustrating how the field now embraces new generations and diverse ideas.