Chinese firm MiniMax unveils advanced AI models amid rising tensions

Chinese AI company MiniMax has introduced three new models—MiniMax-Text-01, MiniMax-VL-01, and T2A-01-HD—designed to compete with leading systems developed by firms such as OpenAI and Google. Backed by Alibaba and Tencent, MiniMax has raised $850 million in funding and is valued at over $2.5 billion. The trio comprises a text-only model, a multimodal model that processes both text and images, and an audio generator that creates synthetic speech in multiple languages.

MiniMax-Text-01 boasts a 4-million-token context window, significantly larger than those of competing systems, allowing it to process extensive text inputs. Its performance rivals industry leaders like Google’s Gemini 2.0 Flash in benchmarks measuring problem-solving and comprehension skills. The multimodal MiniMax-VL-01 excels at image-text tasks but trails some competitors on specific evaluations. T2A-01-HD, the audio generator, delivers high-quality synthetic speech and can clone voices using just 10 seconds of recorded audio.

The models, mostly accessible via platforms like GitHub and Hugging Face, come with licensing restrictions that prevent their use in developing competing AI systems. MiniMax has faced controversies, including allegations of unauthorised use of copyrighted data for training and concerns about AI-generated content replicating logos and public figures. The releases coincide with new US restrictions on AI technology exports to China, potentially heightening challenges for Chinese AI firms aiming to compete globally.

Generative AI accelerates US defence strategies

The Pentagon is leveraging generative AI to accelerate critical defence operations, particularly the ‘kill chain’, a process of identifying, tracking, and neutralising threats. According to Dr Radha Plumb, the Pentagon’s Chief Digital and AI Officer, AI’s current role is limited to aiding planning and strategising phases, ensuring commanders can respond swiftly while maintaining human oversight over life-and-death decisions.

Major AI firms like OpenAI and Anthropic have softened their policies to collaborate with defence agencies, but only under strict ethical boundaries. These partnerships aim to balance innovation with responsibility, ensuring AI systems are not used to cause harm directly. Meta, Anthropic, and Cohere are among the tech firms now working with defence contractors, providing tools that optimise operational planning without breaching ethical standards.

Dr Plumb emphasised that the Pentagon’s AI systems operate as part of human-machine collaboration, countering fears of fully autonomous weapons. Despite debates over AI’s role in defence, officials argue that working with the technology is vital to ensure its ethical application. Critics, however, continue to question the transparency and long-term implications of such alliances.

As AI becomes central to defence strategies, the Pentagon’s commitment to integrating ethical safeguards highlights the delicate balance between technological advancement and human control.

AI-powered Copilot Chat launched by Microsoft

Microsoft has introduced a new chat service, Copilot Chat, allowing businesses to deploy AI agents for routine tasks. The service, powered by OpenAI’s GPT-4, enables users to create AI-driven assistants using natural language commands in English, Mandarin, and other languages. Tasks such as market research, drafting strategy documents, and meeting preparation can be handled for free, though advanced features like Teams call transcription and PowerPoint slide creation require a $30 monthly Microsoft 365 Copilot subscription.

With increasing pressure to generate returns on its substantial AI investments, Microsoft is betting on a pay-as-you-go model to drive adoption. The company is expected to spend around $80 billion on AI infrastructure and data centres this fiscal year. Following concerns about Copilot’s adoption, Microsoft has been pushing its AI tools more aggressively, offering businesses greater flexibility in using AI for automation.

In a move towards greater AI autonomy, Microsoft previously introduced tools allowing customers to create self-sufficient AI agents with minimal human input. Analysts suggest that such innovations could offer a simpler path to monetisation for tech companies, making AI-driven automation more accessible and scalable.

ChatGPT enhanced with new Tasks feature by OpenAI

OpenAI has introduced a new beta feature called Tasks in ChatGPT, expanding into the virtual assistant market. Tasks will let users schedule future actions such as reminders for concert ticket sales or recurring updates like daily weather reports.

ChatGPT may also suggest tasks based on user conversations, with users retaining control to accept or decline them. The feature aims to compete with virtual assistants like Apple’s Siri and Amazon’s Alexa, both of which are being enhanced with AI capabilities.

The updated Alexa will include generative AI features for task automation, with Amazon CEO Andy Jassy announcing its launch in the coming months. Apple has also integrated ChatGPT into Siri under its ‘Apple Intelligence’ initiative, seeking user permission for queries sent to OpenAI’s service.

OpenAI will roll out the Tasks feature in beta to Plus, Team, and Pro users worldwide over the next few days, starting with the web version.

ChatGPT adds task scheduling feature

ChatGPT is rolling out a new task-scheduling feature that allows paying users to set reminders and recurring requests directly with the AI assistant. Available to ChatGPT Plus, Team, and Pro users, the feature can handle practical tasks like sending reminders about passport expirations or offering personalised weekend plans based on the weather.

The task system represents OpenAI’s early venture into AI agents that can perform autonomous actions. Users can set tasks through ChatGPT’s web app by selecting the scheduling option from a dropdown menu. Once enabled, the assistant can deliver reminders or perform regular check-ins, such as providing daily news briefings or checking for concert tickets monthly.

While the feature currently offers limited independence, OpenAI sees it as a foundational step towards more capable AI systems. CEO Sam Altman hinted that 2025 will be a significant year for AI agents that may begin to handle more complex tasks, like booking travel or writing code. For now, ChatGPT’s task feature remains in beta, with plans to refine it based on user feedback.

New Microsoft team focuses on AI development

Microsoft has created a new internal division, CoreAI Platform and Tools, to accelerate its development of AI technologies. The restructuring brings together its developer teams and AI platform under one unit, aimed at making AI a central pillar of the company’s software strategy.

Jay Parikh, a former engineering leader at Meta and CEO of cloud security startup Lacework, will head the new organisation. Reporting directly to CEO Satya Nadella, Parikh will oversee various teams focused on AI infrastructure and tools. His appointment signals Microsoft’s continued push to lead in the fast-evolving AI space.

CoreAI’s formation reflects Microsoft’s increasing emphasis on ‘model-forward’ applications, which Nadella described as reshaping software development across all categories. The company’s recent efforts include embedding AI tools across its productivity suite and cloud services, solidifying its place in the growing AI market.

This latest move builds on Microsoft’s broader strategy to remain a leader in AI innovation, following its high-profile partnership with OpenAI and ongoing investments in cloud-based AI solutions.

OpenAI calls for stronger US AI investment to outpace China

OpenAI has called for increased US investment and supportive regulations to ensure leadership in AI development and prevent China from gaining dominance in the sector. Its ‘Economic Blueprint’ outlines the need for strategic policies around AI resources, including chips, data, and energy.

The document highlights the risk of $175 billion in global funds shifting to China-backed projects if the US fails to attract those investments. OpenAI also proposed stricter export controls on AI models to prevent misuse by adversarial nations and protect national security.

CEO Sam Altman, who contributed $1 million to President-elect Donald Trump’s inaugural fund, seeks stronger ties with the incoming administration, which includes former PayPal executive David Sacks as AI and crypto czar. The company will host an event in Washington DC this month to promote its proposals.

Microsoft-backed OpenAI continues to seek further investment after raising $6.6 billion last year. The startup plans to transform into a for-profit entity to secure additional funding necessary for competing in the expensive AI race.

Microsoft sues hackers over AI security breach

Microsoft has taken legal action against a group accused of bypassing security measures in its Azure OpenAI Service. A lawsuit filed in December alleges that the unnamed defendants stole customer API keys to gain unauthorised access and generate content that violated Microsoft’s policies. The company claims the group used stolen credentials to develop hacking tools, including software named de3u, which allowed users to exploit OpenAI’s DALL-E image generator while evading content moderation filters.

An investigation found that the stolen API keys were used to operate an illicit hacking service. Microsoft alleges the group engaged in systematic credential theft, using custom-built software to process and route unauthorised requests through its cloud AI platform. The company has also taken steps to dismantle the group’s technical infrastructure, including seizing a website linked to the operation.

Court-authorised actions have enabled Microsoft to gather further evidence and disrupt the scheme. The company says additional security measures have been implemented to prevent similar breaches, though specific details were not disclosed. While the case unfolds, Microsoft remains focused on strengthening its AI security protocols.

Digital art website crippled by OpenAI bot scraping

Triplegangers was forced offline after a bot from OpenAI relentlessly scraped its website, hitting it like a distributed denial-of-service (DDoS) attack. The AI bot sent tens of thousands of server requests, attempting to download hundreds of thousands of detailed 3D images and descriptions from the company’s extensive database of digital human models.

The sudden spike in traffic crippled the Ukrainian company’s servers and left CEO Oleksandr Tomchuk grappling with an unexpected problem. The company, which sells digital assets to video game developers and 3D artists, discovered that OpenAI’s bot operated across hundreds of IP addresses to gather its data. Despite having terms of service that forbid such scraping, the company had not configured the necessary robots.txt file to block the bot.
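For reference, OpenAI publicly documents that its web crawler identifies itself as GPTBot and honours robots.txt directives; a minimal file denying it access to an entire site would look like this:

```
# Block OpenAI's GPTBot crawler from the whole site
User-agent: GPTBot
Disallow: /
```

The file must be served at the site root (e.g. example.com/robots.txt); compliance depends on the crawler respecting the protocol, which is voluntary rather than enforced.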

After days of disruption, Tomchuk implemented protective measures by updating the robots.txt file and using Cloudflare to block specific bots. However, he remains frustrated by OpenAI’s lack of transparency and the difficulty of determining exactly what data was taken. With rising costs and increased monitoring now necessary, he warns that other businesses remain vulnerable.

Tomchuk criticised AI companies for placing the responsibility on small businesses to block unwanted scraping, comparing it to a digital shakedown. “They should be asking permission, not just scraping data,” he said, urging companies to take greater precautions against AI crawlers that can compromise their sites.

Regulators weigh in on Musk’s lawsuit against OpenAI and Microsoft

US antitrust regulators have offered legal analysis relevant to Elon Musk’s lawsuit against OpenAI and Microsoft, which alleges anticompetitive practices. While not taking a formal stance, the Federal Trade Commission (FTC) and Department of Justice (DOJ) highlighted key legal doctrines supporting Musk’s claims ahead of a court hearing in Oakland, California. Musk, a co-founder of OpenAI who now leads the AI startup xAI, accuses OpenAI of enforcing restrictive agreements and sharing board members with Microsoft to stifle competition.

The lawsuit also claims OpenAI orchestrated an investor boycott against rivals. Regulators noted such boycotts are legally actionable, even if the alleged organiser isn’t directly involved. OpenAI has denied these allegations, labelling them baseless harassment. Meanwhile, the FTC is conducting a broader probe into AI partnerships, including those between Microsoft and OpenAI, to assess potential antitrust violations.

Microsoft declined to comment on the case, while OpenAI pointed to prior court filings refuting Musk’s claims. However, the FTC and DOJ stressed that even former board members, like Reid Hoffman, could retain sensitive competitive information, reinforcing Musk’s concerns about anticompetitive practices.

Musk’s legal team sees the regulators’ involvement as validation of the seriousness of the case, underscoring the heightened scrutiny around AI collaborations and their impact on competition.