A state-of-the-art £225 million supercomputer, Isambard-AI, is set to become the most powerful in the UK when fully operational this summer. Based at the National Composites Centre in Bristol, the system uses artificial intelligence to aid in developing vaccines and drugs for diseases such as Alzheimer’s, heart disease, and cancer. Researchers are already using its vast computational power to enhance melanoma detection across diverse skin tones.
Professor Simon McIntosh-Smith, a high-performance computing expert at the University of Bristol in the UK, described Isambard-AI as “potentially world-changing.” By simulating molecular interactions, the AI can drastically cut the time and cost of drug development, which traditionally relied on educated guesses and laborious physical experiments. The system virtually screens millions of potential treatments, allowing researchers to identify promising candidates faster.
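The virtual-screening idea described above can be sketched in a few lines: score a large pool of candidate molecules with a predictive model and keep only the most promising for physical testing. This is a generic illustration, not Isambard-AI's actual pipeline; `predicted_affinity` is a hypothetical stand-in for a trained model.

```python
import heapq
import random

def predicted_affinity(molecule_id: int) -> float:
    """Hypothetical stand-in for a trained model that scores how
    strongly a candidate molecule binds a disease target."""
    random.seed(molecule_id)  # deterministic per molecule, for the demo
    return random.random()

def screen(candidates, top_k=5):
    """Score every candidate and keep only the highest-scoring ones,
    so that laboratory experiments focus on a short list."""
    scored = ((predicted_affinity(m), m) for m in candidates)
    return heapq.nlargest(top_k, scored)

# Screen 100,000 hypothetical candidates and shortlist the top five.
for score, molecule in screen(range(100_000), top_k=5):
    print(f"molecule {molecule}: predicted affinity {score:.3f}")
```

The key saving is that the expensive step (physical experiments) is only run on the shortlist, which is what lets the computational pass replace much of the traditional trial and error.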
Despite concerns about its energy consumption, the supercomputer is designed to operate efficiently and may even repurpose its waste heat to warm local homes and businesses. Highlighting the project’s broader significance, Professor McIntosh-Smith likened Isambard-AI to the invention of the internet, emphasising its potential to save millions of lives while keeping its research publicly accessible.
The UK government has introduced a new AI assistant named ‘Humphrey,’ inspired by the scheming character Sir Humphrey Appleby from the sitcom Yes, Minister. This innovative suite of digital tools aims to modernise civil service workflows, reduce costs, and simplify tasks such as summarising public feedback and searching parliamentary records.
The initiative forms part of a broader overhaul of government digital services, announced by Science and Technology Secretary Peter Kyle. Central to this plan are two new apps for secure document storage, including digital driving licences. The Humphrey AI tools, particularly Consult and Parlex, are designed to replace costly external consultants and assist policymakers in navigating parliamentary debates.
Despite the programme’s ambitions, the choice of name has sparked debate. Critics like Tim Flagg from UKAI argue that the association with Sir Humphrey’s ‘devious and controlling’ persona might undermine trust in the technology. However, Flagg also expressed optimism about the government’s technical capabilities, calling the project a positive step towards embracing AI.
The UK government insists that these tools will foster efficiency and collaboration, with improved data sharing between departments being another key feature of the initiative. By cutting consultancy costs and increasing transparency, officials hope Humphrey will become a symbol of progress, rather than parody.
Donald Trump has rescinded a 2023 executive order, issued by Joe Biden, that aimed to mitigate the risks AI poses to consumers, workers, and national security. Biden’s order mandated, under the Defense Production Act, that developers of high-risk AI systems share safety test results with the US government before public release. It also required federal agencies to set safety standards addressing cybersecurity, chemical, and biological risks. Biden issued the order amid congressional inaction on AI legislation.
The Republican Party had pledged to overturn Biden’s order, claiming it stifled AI innovation. The party’s 2024 platform emphasises support for AI development that aligns with free speech and human progress. Generative AI technologies, capable of creating content like text and images, have sparked both excitement and concern over their potential to disrupt industries and eliminate jobs.
While Trump revoked Biden’s AI safety framework, he left intact a separate executive order that Biden issued last week to support the energy needs of advanced AI data centres. That newer order calls for federal assistance, including leasing Defense and Energy Department sites, to support the rapid growth of AI infrastructure. Meanwhile, US companies such as Nvidia have criticised recent Commerce Department restrictions on AI chip exports, reflecting ongoing tensions between regulation and innovation in the tech sector.
Educators are embracing AI to tackle academic dishonesty, which is increasingly prevalent in digital learning environments. Tools like ChatGPT have made it easier for students to generate entire assignments using AI. To counter this, teachers are employing AI detection tools and innovative strategies to maintain academic integrity.
Understanding AI’s capabilities is crucial to detecting misuse. Educators are advised to familiarise themselves with tools like ChatGPT by testing them with sample assignments. Collecting genuine writing samples from students early in the semester provides a baseline for comparison, helping to identify potential AI-generated work. Tools designed specifically to detect AI writing further assist in verifying authenticity.
Requesting rewrites is another effective approach when AI usage is suspected. By asking an AI tool to rewrite a suspected piece, teachers can highlight the telltale signs of machine-generated text, such as a lack of personal style and overuse of synonyms. Strong evidence of AI misuse strengthens cases when addressing cheating with students and school administrators.
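The baseline-comparison strategy above can be made concrete with a toy stylometric check: build a character n-gram "fingerprint" of a student's known writing and compare a new submission against it. This is a minimal sketch of the general idea, not any particular detection product; real detectors use far richer signals.

```python
from collections import Counter
from math import sqrt

def ngram_profile(text: str, n: int = 3) -> Counter:
    """Character n-gram counts: a crude stylistic fingerprint."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two profiles (1.0 = identical style)."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

baseline = "I reckon the experiment kinda failed because we rushed the setup."
submission = "Furthermore, the experimental methodology exhibited several deficiencies."
score = cosine_similarity(ngram_profile(baseline), ngram_profile(submission))
print(f"style similarity: {score:.2f}")  # a low score suggests a closer look
```

A low similarity score is not proof of misuse, which is why the article pairs automated signals with human judgement and rewrite requests before raising a case.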
The rise of AI in education underscores the need for vigilance. Teachers must balance scepticism with evidence-based methods to ensure fairness. Maintaining a collaborative and transparent approach can help foster a culture of learning over shortcuts.
HP Inc has launched the OMEN Max 16, billed as the world’s first AI-driven gaming laptop, promising unparalleled performance and immersive experiences. Unveiled at CES 2025 on January 6, the device features OMEN AI technology that optimises performance and thermals automatically, aiming to deliver uninterrupted gameplay for even the most demanding titles.
The OMEN AI Beta software is a standout innovation, offering gamers a personalised solution for maximising frames per second (FPS). Designed to eliminate trial-and-error troubleshooting, the software recommends optimised operating system, hardware, and game settings tailored to each unique setup. Starting with support for Counter-Strike, the application is set to expand to more popular games.
In addition to its advanced software, the OMEN Max 16 is equipped with top-tier hardware, including an Intel Core Ultra 9 or AMD Ryzen AI 9 processor and up to 64 GB of DDR5 RAM. These features make it capable of handling even the most resource-intensive games with ease.
HP also introduced the OMEN 32x Smart Gaming Monitor, its first gaming display with built-in Google TV, offering gamers an all-in-one entertainment and gaming solution. With these innovations, HP continues to redefine gaming technology, prioritising performance, personalisation, and ease of use.
Social media security firm Spikerz has raised $7 million in a seed funding round led by Disruptive AI, with contributions from Horizon Capital, Wix Ventures, Storytime Capital, and BDMI. The funding highlights the growing demand for innovative solutions to combat cyber threats on social platforms.
The startup specialises in protecting social media accounts from phishing attacks, scams, and other risks posed by increasingly sophisticated cybercriminals. Its platform also helps users detect and remove fake accounts, malicious bots, and visibility restrictions like shadowbans. These features are particularly valuable for businesses, influencers, and brands relying on social platforms for growth.
Spikerz plans to use the investment to enhance its AI-driven platform, expand its global reach, and bolster its team. CEO Naveh Ben Dror emphasised the importance of staying ahead of malicious actors who are now leveraging advanced technologies like generative AI. He described the funding as a strong vote of confidence in the company’s mission to secure social media accounts worldwide.
The firm’s efforts come at a critical time when social media platforms play a central role in the success of businesses and creators. With the latest backing, Spikerz aims to provide cutting-edge tools to safeguard these digital livelihoods.
Apple has suspended its AI-generated news summary feature after criticism from the National Union of Journalists (NUJ). Concerns were raised over the tool’s inaccurate reporting and its potential role in spreading misinformation.
The NUJ welcomed the decision, emphasising the risks posed by automated reporting. Recent errors in AI-generated summaries highlighted how such tools can undermine public trust in journalism. NUJ assistant general secretary Séamus Dooley called for a more human-centred approach to reporting.
Apple’s decision follows growing scrutiny of AI’s role in journalism. Critics argue that while automation can streamline news delivery, it must not compromise accuracy or credibility.
The NUJ has urged Apple to prioritise transparency and accountability as it further develops its AI capabilities. Safeguarding trust in journalism remains a key concern in the evolving media landscape.
According to a recent study, AI models have shown limitations in tackling high-level historical inquiries. Researchers tested three leading large language models (LLMs) — GPT-4, Llama, and Gemini — using a newly developed benchmark, Hist-LLM. The test, based on the Seshat Global History Databank, revealed disappointing results, with GPT-4 Turbo achieving only 46% accuracy, barely surpassing random guessing.
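Benchmark accuracy of the kind reported here is a simple ratio, and the random-guessing floor depends on the question format. The sketch below assumes four answer options purely for illustration; Hist-LLM's actual format may differ.

```python
import random

def accuracy(predictions, answers):
    """Fraction of benchmark questions answered correctly."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Illustrative only: with four options per question, a random guesser
# lands near 25%; the floor shifts with the number of options.
random.seed(42)
options = ["A", "B", "C", "D"]
answers = [random.choice(options) for _ in range(10_000)]
guesses = [random.choice(options) for _ in range(10_000)]
print(f"random-guess baseline: {accuracy(guesses, answers):.1%}")
```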
Researchers from Austria’s Complexity Science Hub presented the findings at the NeurIPS conference last month. Co-author Maria del Rio-Chanona highlighted that while LLMs excel at basic facts, they struggle with nuanced, PhD-level historical questions. Errors included incorrect claims about ancient Egypt’s military and armour development, often due to the models extrapolating from prominent but irrelevant data.
Biases in training data also emerged, with models underperforming on questions related to underrepresented regions like sub-Saharan Africa. Lead researcher Peter Turchin acknowledged these shortcomings but emphasised the potential of LLMs to support historians with future improvements.
Efforts are underway to refine the benchmark by incorporating more diverse data and crafting complex questions. Researchers remain optimistic about AI’s capacity to assist in historical research despite its current gaps.
Spain’s government has announced a new initiative to promote the adoption of AI technologies across the country’s businesses. Prime Minister Pedro Sanchez revealed on Monday that the government will provide an additional 150 million euros ($155 million) in subsidies aimed at supporting companies in their efforts to integrate AI into their operations.
The funding is designed to help businesses harness the potential of AI, which has become a critical driver of innovation and efficiency in various sectors, from manufacturing to healthcare and finance. The subsidies will be available to companies looking to develop or adopt AI-based solutions, to foster digital transformation and maintain Spain’s competitive edge in the global economy.
Sanchez emphasised that the funding will play a vital role in ensuring Spain remains at the forefront of the digital revolution, helping to build a robust, AI-powered economy. The move comes as part of Spain’s broader strategy to invest in technology and innovation, aiming to enhance productivity and create new opportunities for growth in both the public and private sectors.
Chinese AI company MiniMax has introduced three new models, MiniMax-Text-01, MiniMax-VL-01, and T2A-01-HD, designed to compete with leading systems from firms such as OpenAI and Google. Backed by Alibaba and Tencent, MiniMax has raised $850 million in funding and is valued at over $2.5 billion. The line-up comprises a text-only model, a multimodal model that processes text and images, and an audio generator that produces synthetic speech in multiple languages.
MiniMax-Text-01 boasts a 4-million-token context window, significantly larger than those of competing systems, allowing it to process extensive text inputs. Its performance rivals industry leaders like Google’s Gemini 2.0 Flash in benchmarks measuring problem-solving and comprehension skills. The multimodal MiniMax-VL-01 excels at image-text tasks but trails some competitors on specific evaluations. T2A-01-HD, the audio generator, delivers high-quality synthetic speech and can clone voices using just 10 seconds of recorded audio.
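What a context window means in practice is that input beyond the limit must be cut or summarised before the model sees it. The toy function below illustrates the truncation step using whitespace tokens as a stand-in; real systems count model-specific tokens with their own tokenisers, not words.

```python
def fit_to_context(text: str, window_tokens: int) -> str:
    """Truncate input so it fits a model's context window.
    Whitespace splitting is a simplification for illustration:
    production tokenisers produce model-specific tokens."""
    tokens = text.split()
    if len(tokens) <= window_tokens:
        return text
    return " ".join(tokens[:window_tokens])

document = "alpha beta gamma delta epsilon zeta eta theta"
print(fit_to_context(document, 3))  # prints "alpha beta gamma"
```

A 4-million-token window simply pushes this truncation point far enough out that whole books or large document collections fit in a single prompt.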
The models, mostly accessible via platforms like GitHub and Hugging Face, come with licensing restrictions that prevent their use in developing competing AI systems. MiniMax has faced controversies, including allegations of unauthorised use of copyrighted data for training and concerns about AI-generated content replicating logos and public figures. The releases coincide with new US restrictions on AI technology exports to China, potentially heightening challenges for Chinese AI firms aiming to compete globally.