Danish pharmaceutical giant Novo Nordisk is strengthening its collaboration with US tech firm Valo Health to develop new treatments for obesity, diabetes, and cardiovascular diseases using artificial intelligence and human data. The agreement, originally signed in 2023, has been expanded to cover up to 20 drug candidates, nearly doubling the initial scope of 11 treatments.
The expansion comes as Novo seeks to maintain its competitive edge in the booming obesity drug market, expected to be worth $150 billion in the next decade. A recent clinical trial for its weight-loss drug candidate, CagriSema, delivered underwhelming results, increasing pressure to develop a successor to its best-selling drug, Wegovy. Rival pharmaceutical company Eli Lilly is also pushing forward with its own obesity treatments, intensifying the race for dominance in the sector.
Under the revised deal, Valo Health will receive up to $190 million in near-term payments and milestone payments of around $4.6 billion, significantly increasing its earnings potential compared to the original agreement, which offered up to $2.7 billion. Novo hopes the collaboration will lead to groundbreaking therapies that extend the health benefits of weight-loss drugs beyond obesity treatment.
Halliday, a wearables startup, has launched a pair of smart glasses at CES 2025 that project a tiny digital screen directly into the wearer’s eye. Using a device called the DigiWindow, the glasses display notifications, language translations, and navigation directions in real time without the need for bulky AR lenses.
Priced at $489, the glasses use a small green light to beam a round display, almost 9cm across, into the user’s line of sight. The innovative approach makes US-based Halliday’s glasses slimmer, lighter, and more affordable than many augmented reality prototypes. Users can even fit prescription lenses into the frames without affecting the display.
Key features include real-time translation for 40 languages and a proactive AI assistant that offers helpful information during conversations. The device is controlled via a ring worn on the finger, allowing users to navigate its features with thumb gestures. While the AI assistant wasn’t available for testing, the display technology impressed with its functionality.
Halliday’s smart glasses are already available for preorder at a discounted price of $369 via Kickstarter. Shipping is expected to begin in March 2025. The company hopes its sleek design and practical applications will set the glasses apart from other wearables still stuck in prototype stages.
San Francisco-based startup Based Hardware has unveiled Omi, a wearable AI assistant designed to improve productivity. Launched at the Consumer Electronics Show, the device responds to voice commands when worn as a necklace, or can be attached to the side of the head with medical tape, where it activates through a unique “brain interface.”
Unlike other AI gadgets that aim to replace smartphones, Omi is meant to complement existing devices. It can answer questions, summarise conversations, and manage tasks like to-do lists and meeting schedules. The startup’s founder, Nik Shevchenko, claims that Omi’s brain interface allows users to interact without saying a wake word by recognising mental focus. However, this feature has yet to be widely tested.
Based Hardware built Omi on an open-source platform to address privacy concerns. Users can store data locally and even develop their own apps for the device. Priced at $89, the consumer version will ship later in 2025, while a developer version is already available.
Omi enters a growing market of AI gadgets that have struggled to meet expectations. Shevchenko hopes Omi’s focus on practical productivity tools will set it apart, but the device’s success will likely depend on whether users embrace its experimental brain interface feature.
Elon Musk’s AI company, xAI, is preparing to launch a controversial feature for its chatbot, Grok, called ‘Unhinged Mode.’ According to a recently updated FAQ on the Grok website, this mode will deliver responses that are intentionally provocative, offensive, and irreverent, mimicking an amateur stand-up comedian pushing boundaries.
Musk first teased the idea of an unfiltered chatbot nearly a year ago, describing Grok as a tool that would answer controversial questions without self-censorship. While Grok has already been known for its edgy responses, it currently avoids politically sensitive topics. The new mode appears to be an effort to deliver on Musk’s vision of an anti-‘woke’ AI assistant, standing apart from more conservative competitors like OpenAI’s ChatGPT.
The move comes amid ongoing debates about political bias in AI systems. Musk has previously claimed that most AI tools lean left due to their reliance on web-based training data. He has vowed to make Grok politically neutral, blaming the internet’s content for any perceived bias in the chatbot’s current outputs. Critics, however, worry that unleashing an unfiltered mode could lead to harmful or offensive outputs, raising questions about the responsibility of AI developers.
As Grok continues to evolve, the AI industry is closely watching how users respond to Musk’s push for a less restrained chatbot. Whether this will prove a success or ignite further controversy remains to be seen.
Elon Musk has echoed concerns from AI researchers that the industry is running out of new, real-world data to train advanced models. Speaking during a livestream with Stagwell’s Mark Penn, Musk noted that AI systems have already processed most of the available human knowledge, and said this data plateau was reached last year.
To address the issue, AI developers are increasingly turning to synthetic data, information generated by the AI itself, to continue training models. Musk argued that self-generated data will allow AI systems to improve through self-learning, with major players like Microsoft, Google, and Meta already incorporating this approach in their AI models.
While synthetic data offers cost-saving advantages, it also poses risks. Some experts warn it could cause “model collapse,” reducing creativity and reinforcing biases if the AI reproduces flawed patterns from earlier training data. As the AI sector pivots towards self-generated training material, the challenge lies in balancing innovation with reliability.
Elon Musk’s AI company, xAI, has launched a standalone iOS app for its chatbot, Grok, marking a major expansion beyond its initial availability to X users. The app is now live in several countries, including the US, Australia, and India, allowing users to access the chatbot directly on their iPhones.
The Grok app offers features such as real-time data retrieval from the web and X, text rewriting, summarising long content, and even generating images from text prompts. xAI highlights Grok’s ability to create photorealistic images with minimal restrictions, including the use of public figures and copyrighted material.
In addition to the app, xAI is working on a dedicated website, Grok.com, which will soon make the chatbot available on browsers. Initially limited to X’s paying subscribers, Grok rolled out a free version in November and made it accessible to all users earlier this month. The launch marks a notable push by xAI to establish Grok as a versatile, widely available AI assistant.
The Dutch government announced a deal with Nvidia on Thursday to provide hardware and expertise for a potential AI supercomputing facility. The planned facility is part of the Netherlands’ broader strategy to bolster AI research and contribute to EU efforts to strengthen Europe’s digital economy.
Last year, the Netherlands allocated €204.5 million ($210 million) for AI investments, with additional funding expected from European subsidies. Economy Minister Dirk Beljaarts hailed the Nvidia agreement as a major step toward realising the project, emphasising the intense global competition for advanced AI technologies.
‘This deal brings building a Dutch AI facility a lot closer,’ Beljaarts said after meeting Nvidia representatives in Silicon Valley, although he refrained from disclosing specific details of the agreement.
The growing use of AI in drug development is dividing opinions among researchers and industry experts. Some believe AI can significantly reduce the time and cost of bringing new medicines to market, while others argue that it has yet to solve the high failure rates seen in clinical trials.
AI-driven tools have already helped identify potential drug candidates more quickly, with some companies reducing the preclinical testing period from several years to just 30 months. However, experts point out that these early successes don’t always translate to breakthroughs in human trials, where most drug failures occur.
Unlike fields such as image recognition, AI in pharmaceuticals faces unique challenges due to limited high-quality data. Experts say AI’s impact could improve if it focuses on understanding why drugs fail in trials, such as problems with dosage, safety, and efficacy. They also recommend new trial designs that incorporate AI to better predict which drugs will succeed in later stages.
While AI won’t revolutionise drug development overnight, researchers agree it can help tackle persistent problems and streamline the process. But achieving lasting results will require better collaboration between AI specialists and drug developers to avoid repeating past mistakes.
The FBI has raised alarms about the growing use of artificial intelligence in scams, particularly through deepfake technology. These AI-generated videos and audio clips can convincingly imitate real people, allowing criminals to impersonate family members, executives, or even law enforcement officials. Victims are often tricked into transferring money or disclosing personal information.
Deepfake scams are becoming more prevalent in the US due to the increasing accessibility of generative AI tools. Criminals exploit these technologies to craft realistic phishing emails, fake social media profiles, and fraudulent investment opportunities. Some have gone as far as generating real-time video calls to enhance their deception.
To protect against these threats, experts recommend limiting the personal information shared online, enabling two-factor authentication, and verifying any unusual or urgent communications. The FBI stresses the importance of vigilance, especially as AI-driven scams become more sophisticated and harder to detect. By understanding these risks and adopting stronger security practices, individuals can safeguard themselves against the growing menace of deepfake fraud.
A notorious stretch of the A361 in Devon will receive £1 million in AI and speed camera technology to improve road safety. The investment, part of a £5 million grant from the Department for Transport (DfT), comes after the road was identified as ‘high risk,’ with three fatalities and 30 serious injuries recorded between 2018 and 2022. AI-powered cameras will detect offences such as drivers using mobile phones and failing to wear seatbelts, while speed cameras will be installed at key locations.
A pilot scheme last August recorded nearly 1,800 potential offences along the route, highlighting the need for stricter enforcement. The latest plans include three fixed speed cameras at Ilfracombe, Knowle, and Ashford, as well as two average speed camera systems covering longer stretches of the road. AI cameras will be rotated between different locations to monitor driver behaviour more effectively.
Councillor Stuart Hughes, Devon County Council’s cabinet member for highways, expressed pride in the region’s adoption of AI for road safety improvements. The remaining £4 million from the DfT grant will be allocated to upgrading junctions and improving access for pedestrians and cyclists along the A361.