OpenAI has introduced a new beta feature called Tasks in ChatGPT, expanding into the virtual assistant market. Tasks will let users schedule future actions such as reminders for concert ticket sales or recurring updates like daily weather reports.
ChatGPT may also suggest tasks based on user conversations, with users retaining control to accept or decline them. The feature aims to compete with virtual assistants like Apple’s Siri and Amazon’s Alexa, both of which are being enhanced with AI capabilities.
The updated Alexa will include generative AI features for task automation, with Amazon CEO Andy Jassy announcing its launch in the coming months. Apple has also integrated ChatGPT into Siri under its ‘Apple Intelligence’ initiative, seeking user permission for queries sent to OpenAI’s service.
The US Justice Department has removed malware from over 4,200 computers worldwide in an operation targeting a hacking group linked to the Chinese government. The malware, known as ‘PlugX,’ was used to steal information and compromise systems across the United States, Europe, and Asia. Investigators identified the group behind the attack, known as ‘Mustang Panda’ and ‘Twill Typhoon,’ which is believed to have received financial support from China.
Court documents filed in the US District Court for the Eastern District of Pennsylvania allege that the Chinese government paid Mustang Panda to develop PlugX. The malware has been active since at least 2014 and was used to target not only governments and businesses but also Chinese political dissidents. Officials described the operation as a critical step in neutralising cyber threats backed by foreign states.
Authorities emphasised the growing risks posed by state-sponsored hacking groups and their ability to infiltrate global networks. The Justice Department remains committed to dismantling cyber threats and preventing adversaries from exploiting sensitive information. The scale of the attack highlights the persistent threat of cyber espionage and the need for international cooperation in addressing cybersecurity challenges.
Polymarket, a cryptocurrency-based prediction market, has come under fire for alleged violations of Singapore’s strict gambling laws. Authorities blocked access to the platform, deeming it an unlicensed gambling site. Those who attempt to bypass restrictions risk hefty fines and jail time under the Gambling Control Act 2022.
Further criticism erupted as Polymarket allowed users to bet on tragic events like the devastating Palisades wildfire in Los Angeles. The platform’s wildfire-related betting markets have been widely condemned as unethical, with accusations of profiting from human suffering. Polymarket’s attempts to defend its actions have done little to appease public outrage.
Meanwhile, Polymarket faces intense scrutiny in the US. The FBI recently raided CEO Shayne Coplan’s residence, seizing electronic devices, while the CFTC subpoenaed Coinbase for information on the platform’s activities. Despite its rapid growth during the US elections, with record-breaking trading volumes, Polymarket now grapples with plummeting activity and mounting regulatory challenges.
A French interior designer, identified as Anne, has fallen victim to a sophisticated scam in which she was tricked into believing she was in a relationship with actor Brad Pitt. Over the course of a year, the scammer, using AI-generated images and fake social media profiles, manipulated Anne into sending €830,000 for purported cancer treatment, supported by a fabricated story that the actor’s bank accounts had been frozen.
The scam began when Anne received messages from a fake account posing as the actor’s mother, ‘Jane Etta Pitt,’ claiming the Hollywood star needed someone like her. As Anne was going through a divorce, the AI-generated Brad Pitt sent declarations of love, eventually asking for money under the guise of urgent medical needs. Despite doubts raised by her daughter, Anne transferred large sums, believing she was saving a life.
The truth came to light when Anne saw Brad Pitt in the media with his current partner, and it became clear she had been scammed. However, instead of support, her story has been met with cyberbullying, including mocking social media posts from accounts such as Toulouse FC and Netflix France. The harassment has taken a toll on Anne’s mental health, and police are now investigating the scam.
The case highlights the dangers of AI scams, the vulnerabilities of individuals, and the lack of empathy in some online responses.
A massive data breach has hit Gravy Analytics, a major US location data broker, compromising precise smartphone location data and internal company information. Hackers claim to have gained access to the company’s systems since 2018, exposing sensitive coordinates that track individuals’ movements. The stolen data includes customer details from prominent firms like Uber, Apple, and government contractors.
Gravy Analytics, through its subsidiary Venntel, has previously sold large amounts of location data to US government agencies. The breach highlights significant security lapses, with the stolen data now at risk of being sold on the dark web. The precise latitude and longitude records could put individuals, especially those in vulnerable positions, in danger.
The incident has sparked fresh scrutiny over data brokers, who often collect and sell sensitive information with little transparency. In December, the FTC moved to restrict Gravy Analytics from selling location data except in cases of national security or law enforcement. Critics argue that these companies prioritise profits over privacy and have called for stricter regulations to hold them accountable.
The US government has officially labelled the extreme right-wing ‘Terrorgram’ network as a terrorist organisation, citing its promotion of violent white supremacist attacks. The group operates mainly on the Telegram platform and has been linked to attacks across the globe, including shootings and planned assaults on critical infrastructure.
The move, announced by the State Department, includes sanctions against three of the network’s leaders based in Brazil, Croatia, and South Africa. The designation freezes any US-based assets belonging to the group and bans Americans from engaging with its members. Officials say the collective has provided detailed guidance for attacks on minorities and government officials, calling for a race war.
US authorities have been ramping up efforts to combat domestic extremism under President Biden, who launched the country’s first national strategy on countering domestic terrorism in 2021. Britain has already taken similar steps, outlawing the Terrorgram collective in April last year.
This crackdown follows criminal charges brought against two alleged leaders of the group, accused of using Telegram to incite violence against Black, Jewish, LGBTQ, and immigrant communities. Authorities stress that dismantling such online hate groups is essential to prevent further extremist attacks.
The UK’s prisons watchdog has warned that drones are becoming a serious national security threat due to a surge in the smuggling of weapons, drugs, and other contraband into high-security jails. Charlie Taylor, the chief inspector of prisons, called for immediate action from the police and government following investigations into two of England and Wales’ most dangerous prisons, HMP Manchester and HMP Long Lartin. Both facilities, holding notorious criminals and terrorism suspects, have seen an increase in illicit deliveries by drones, putting staff, inmates, and public safety at risk.
Taylor’s report highlights how gangs have exploited weaknesses in security, including the deterioration of basic anti-drone measures like protective netting and CCTV. At Long Lartin, inspectors found that large quantities of illicit items were being delivered, fuelling violence and unrest among prisoners. At HMP Manchester, inmates were burning holes in windows to facilitate drone deliveries, raising concerns about potential escapes and further disruptions.
The growing use of sophisticated drones, capable of carrying large payloads and flying under the radar, has made it increasingly difficult for prison authorities to control the flow of contraband. While some prisons have deployed counter-drone technology, most lack measures to stop drones from approaching, leaving them vulnerable to this threat.
Prison officials are now under mounting pressure to confront this new challenge, with experts warning that the situation is a matter of national security. Taylor also highlighted the need for a more robust approach to tackling gang activity and reducing the supply of illegal items that undermine prison safety.
The US government has announced new restrictions on exporting AI chips and technology, seeking to safeguard its dominance in AI development while limiting China’s access to advanced computing capabilities. The regulations, unveiled during the final days of President Biden’s administration, impose strict caps on AI chip exports to most countries, with exemptions for close allies such as Japan, the UK, and South Korea. Countries like China, Russia, Iran, and North Korea remain barred from accessing this critical technology.
Commerce Secretary Gina Raimondo emphasised the importance of maintaining US leadership in AI to support national security and economic interests. The regulations, which build on a four-year effort to block China’s acquisition of advanced chips, also close existing loopholes and enforce tighter controls. New limits target advanced graphics processing units (GPUs), essential for training AI models, and introduce worldwide licensing requirements for cutting-edge AI technologies. Major cloud providers like Microsoft and Amazon will face new authorisation processes to establish data centres globally under stringent conditions.
Industry leaders, including Nvidia, have expressed concerns over the broad scope of the rules, warning of potential harm to innovation and market dynamics. Nvidia called the restrictions an “overreach,” while Oracle cautioned that the measures could inadvertently benefit Chinese competitors. Despite this criticism, US officials argue the rules are vital for maintaining a competitive edge, given AI’s transformative potential in sectors like healthcare, cybersecurity, and defence. China’s Commerce Ministry condemned the move, vowing to protect its interests in response to the escalating technology standoff.
Microsoft has taken legal action against a group accused of bypassing security measures in its Azure OpenAI Service. A lawsuit filed in December alleges that the unnamed defendants stole customer API keys to gain unauthorised access and generate content that violated Microsoft’s policies. The company claims the group used stolen credentials to develop hacking tools, including software named de3u, which allowed users to exploit OpenAI’s DALL-E image generator while evading content moderation filters.
An investigation found that the stolen API keys were used to operate an illicit hacking service. Microsoft alleges the group engaged in systematic credential theft, using custom-built software to process and route unauthorised requests through its cloud AI platform. The company has also taken steps to dismantle the group’s technical infrastructure, including seizing a website linked to the operation.
Court-authorised actions have enabled Microsoft to gather further evidence and disrupt the scheme. The company says additional security measures have been implemented to prevent similar breaches, though specific details were not disclosed. While the case unfolds, Microsoft remains focused on strengthening its AI security protocols.
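For readers unfamiliar with how a leaked key translates into this kind of abuse: an Azure OpenAI API key works as a bearer credential, so whoever holds it can authenticate requests against the key owner’s resource, with the usage billed and rate-limited to that customer. The snippet below is a minimal, hypothetical sketch of that authentication model using the standard openai Python SDK; the endpoint, API version, deployment name, and key are placeholders, and it is not code from the lawsuit or from the de3u tool.

```python
# Illustrative sketch only: an Azure OpenAI key acts as a bearer credential.
# The endpoint, API version, deployment name, and key below are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="AZURE_OPENAI_KEY",  # whoever holds this value is authenticated as the key's owner
    azure_endpoint="https://example-resource.openai.azure.com",
    api_version="2024-02-01",
)

# Requests made with the key run against the owner's deployments and quota,
# which is why stolen keys can be resold or bundled into third-party tooling.
result = client.images.generate(
    model="example-dalle-deployment",  # deployment name configured in the owner's resource
    prompt="a watercolour of a lighthouse at dawn",
    n=1,
)
print(result.data[0].url)
```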
Ian Russell, father of Molly Russell, has called on the UK government to take stronger action on online safety, warning that delays in regulation are putting children at risk. In a letter to Prime Minister Sir Keir Starmer, he criticised Ofcom’s approach to enforcing the Online Safety Act, describing it as a “disaster.” Russell accused tech firms, including Meta and X, of prioritising profits over safety and moving towards a more dangerous, unregulated online environment.
Campaigners argue that Ofcom’s guidelines contain major loopholes, particularly in addressing harmful content such as live-streamed material that promotes self-harm and suicide. While the government insists that tech companies must act responsibly, the slow progress of new regulations has raised concerns. Ministers acknowledge that additional legislation may be required as AI technology evolves, introducing new risks that could further undermine online safety.
Russell has been a prominent campaigner for stricter online regulations since his daughter’s death in 2017. Despite the Online Safety Act granting Ofcom the power to fine tech firms, critics believe enforcement remains weak. With concerns growing over the effectiveness of current safeguards, pressure is mounting on the government to act decisively and ensure platforms take greater responsibility in protecting children from harmful content.