LinkedIn has introduced its first AI agent, Hiring Assistant, designed to automate many of the time-intensive tasks recruiters face, such as drafting job descriptions, identifying candidate matches, and handling initial outreach. Initially available to a select group of large enterprises, including AMD, Siemens, and Zurich Insurance, Hiring Assistant is expected to expand to more users in the coming months. By automating repetitive tasks, LinkedIn aims to free up recruiters to focus on higher-impact aspects of their jobs.
Built on LinkedIn’s data from over 1 billion users and drawing on Microsoft’s partnership with OpenAI, Hiring Assistant can refine job requirements based on existing listings, generate candidate pools, and filter applicants by skills rather than traditional markers like location or education. The assistant is part of LinkedIn’s broader push to integrate AI into its platform, following similar tools for resume and profile optimisation, career coaching, and job search support.
In its current iteration, Hiring Assistant is already making strides in streamlining recruiting, with plans for future updates to handle interview scheduling, candidate follow-ups, and more. LinkedIn, which has seen AI-driven growth in its premium subscription base, views Hiring Assistant as a key product in its business offerings for recruitment professionals, aiming to enhance LinkedIn’s impact in the hiring sector.
Nvidia-backed biotech firm Iambic Therapeutics has introduced Enchant, an AI model that aims to reduce the time and cost of drug development. Trained on extensive pre-clinical data, Enchant is designed to predict a drug’s early performance. In Iambic’s studies, Enchant achieved a 0.74 accuracy score in predicting drug absorption in the human body, compared with previous models, which peaked at 0.58. This predictive power could help pharmaceutical companies identify promising drugs sooner, significantly cutting down on failed late-stage trials.
According to Iambic’s co-founder Fred Manby, Enchant could potentially cut development costs in half, as researchers could more accurately assess a drug’s chances of success at the earliest stages. Nobel laureate and Iambic board member Frances Arnold also highlighted Enchant’s unique capabilities, noting that unlike models such as Google DeepMind’s AlphaFold, which focus on molecular structure, Enchant evaluates pharmacokinetic and toxicity properties crucial to drug success.
With Enchant, Iambic is poised to set a new standard in the pharmaceutical industry by addressing some of the biggest hurdles in drug development, including high costs and late-stage failures. The AI technology’s rollout could mark a major shift, making drug discovery both faster and more efficient for a variety of treatments.
A new fashion platform, Aesthetic, is launching with a mission to become the ‘Shazam for clothes.’ Using AI-powered technology, the company offers a service called Alma that helps users identify clothing they spot on social media. By sending a post link to Aesthetic via TikTok or Instagram, users are directed to the brand’s website, where they can shop for the outfit or save it to a Lookbook collection.
CEO LJ Northington was inspired to create Aesthetic after noticing a lack of tech innovation in e-commerce. His initial ideas for personal shopping didn’t quite work until he realised that analysing social media feeds could reveal personal style preferences. Northington believes his platform allows users to engage with fashion inspiration without leaving their favourite apps.
Aesthetic is also exploring ways for creators and brands to monetise their styles through personalised pages. Northington mentioned discussions with record labels about allowing artists to promote their trends, such as Beyoncé’s silver-inspired looks for her ‘Renaissance’ tour or Charli XCX’s neon-green fashion from Brat Summer.
The app has attracted funding from Zeal Capital and Slow Capital, with further support from Google Cloud’s AI startup programme. Northington aims to achieve profitability through efficient use of AI. His background includes business development for Westbrook and a psychology degree from Harvard, which has shaped his approach to understanding consumer behaviour.
Smartwatches are revolutionising preventative health by providing continuous, detailed insights into users’ physiological data. At CHUV University Hospital, Chief Anaesthesiologist Patrick Schoettker is exploring ways to leverage smartwatches like the Masimo W1 to monitor patients ahead of surgery. This device collects real-time health data, including heart rate, oxygen levels, and hydration, to create a “digital twin” that could help identify and mitigate risks before operations. Schoettker and his team hope to reduce surgery-related complications by using these insights to anticipate issues.
The potential of smartwatches extends beyond surgery. Leading cardiologists, like Gosia Wamil at Mayo Clinic Healthcare, are already using smartwatch data to detect irregular heart rhythms and other cardiac conditions early, facilitating timely intervention. AI algorithms can now analyse data from wearable devices to predict more serious heart issues, such as low ejection fraction—an early warning sign of heart failure. This technology has also proven useful in tracking health risks among patients with chronic conditions like diabetes, to prevent complications such as heart attacks and strokes.
Beyond cardiology, wearable technology holds promise for neurological and chronic conditions. Research has shown that smartwatches can detect early signs of Parkinson’s disease years before patients notice symptoms. Studies are also underway to assess how smartwatch data might predict seizures in epilepsy patients, helping them better manage risks. As smartwatches grow more sophisticated, their ability to track a widening range of health metrics could reshape preventative care. While concerns about false positives remain, experts believe the benefits, such as early detection and reduced healthcare costs, are likely to outweigh these challenges.
Universal Music Group has released a Spanish rendition of Brenda Lee’s 1958 hit ‘Rockin’ Around the Christmas Tree.’ Titled ‘Noche Buena y Navidad,’ the new version was produced using AI technology developed by SoundLabs, with approval from Lee herself and under the guidance of Latin music producer Auero Baqueiro.
The song preserves the original instrumental and background arrangements while replacing Lee’s English vocals with newly generated Spanish vocals. These vocals were created using SoundLabs’ MicDrop, an AI-powered plug-in that replicates voices. The result aims to deliver a performance that feels as though the 13-year-old Brenda Lee had recorded it in Spanish from the start.
Universal Music highlighted that the project illustrates how AI can be ethically integrated into music, with full artist consent and creative control. Recent controversies over AI-generated content in entertainment have raised questions about copyright and authenticity, making authorised projects like this one particularly noteworthy.
In June, Universal partnered with SoundLabs to develop official AI-powered vocal models for artists. This approach ensures musicians retain ownership of their voice data and maintain authority over the final output, promoting responsible use of AI in music creation.
CelcomDigi and AmBank have formed a strategic partnership to revolutionise digital healthcare in Malaysia through a newly signed Memorandum of Understanding (MoU). The collaboration will deliver affordable digital healthcare solutions over the next three years, empowering healthcare providers with advanced tools and services that leverage AI to enhance patient care and healthcare delivery.
Under this partnership, CelcomDigi will provide essential connectivity, while AmBank will offer financial services such as specialised medical financing, loans, insurance, and payment solutions, making these innovations more accessible to healthcare institutions. The initiative will introduce various solutions, including Smart Health Kiosks for monitoring vital health metrics and Medi-Scan technology, which uses AI for biometric assessments. The initiative focuses particularly on improving healthcare access in underserved areas, where quality healthcare has historically been limited.
The commitment to enhancing healthcare accessibility for all Malaysians aligns with the initiatives of the Malaysian Communications and Multimedia Commission to elevate the country’s healthcare system to a global standard. Integrating telecommunications and digital infrastructure is deemed essential to achieve this goal. Together, the organisations aim to create a more connected and inclusive healthcare ecosystem that supports predictive, preventive, and precision treatments, ultimately improving clinical outcomes for patients.
Elon Musk’s AI venture, xAI, has just enhanced its Grok model with image-understanding capabilities. This means that paid users on the social media platform X can now upload images and engage with Grok to ask questions about them. Announcements from both Musk and the official Grok handle confirm that the feature is in its early stages, with plans to refine and expand it further over time.
The new vision capability also lets Grok explain jokes in images, showcasing an evolving grasp of visual content. Initially released in August, Grok-2 provided premium users on X with access to a multimodal chatbot, featuring image generation through the FLUX.1 model by Black Forest Labs. This is part of xAI’s broader aim to create an immersive AI experience on X, including plans for additional multimodal capabilities through the platform’s developer API.
Looking ahead, Grok is expected to soon handle documents such as PDFs in addition to photos. Musk hinted at rapid advancements, emphasising xAI’s accelerated timeline compared to others in the industry. To boost appeal for paying subscribers, X has also introduced “Radar,” a tool offering Premium+ users real-time insights into trending topics and ongoing conversations.
Britain’s Competition and Markets Authority (CMA) is investigating the partnership between Alphabet, Google’s parent company, and AI startup Anthropic due to concerns about competition. Regulators have grown increasingly cautious about agreements between major tech firms and smaller startups, especially after Microsoft-backed OpenAI sparked an AI boom with ChatGPT’s launch.
Anthropic, founded by former OpenAI executives Dario and Daniela Amodei, received a $500 million investment from Alphabet last year, with another $1.5 billion promised. The AI startup also relies on Google Cloud services to support its operations, raising concerns over the competitive impact of their collaboration.
The CMA began assessing the partnership in July and has set 19 December as the deadline for its Phase 1 decision. The regulator will determine whether the investigation should proceed to the next stage. Anthropic has pledged full cooperation, insisting that its strategic alliances do not compromise its independence or partnerships with other firms.
Alphabet has emphasised its commitment to fostering an open AI ecosystem. A spokesperson clarified that Anthropic is not restricted to using only Google Cloud services and is free to explore partnerships with multiple providers.
Miles Brundage, a veteran policy researcher and senior adviser at OpenAI, has left the company to pursue independent work in the nonprofit sector. In a post on X and an accompanying essay, Brundage explained his decision, stating he believes he can have a greater impact on AI policy and research outside the industry, where he will have more freedom to publish his findings.
Brundage joined OpenAI in 2018 and played a key role in the company’s policy research, particularly in the responsible deployment of AI systems like ChatGPT. His departure signals ongoing shifts within OpenAI, with the company reorganising its economic research and AGI readiness teams. While OpenAI expressed support for Brundage’s decision, it did not specify who will take over his responsibilities.
Brundage’s exit is part of a broader trend of high-profile departures from OpenAI, with several key figures, including CTO Mira Murati and chief research officer Bob McGrew, having recently resigned. The departures reflect internal disagreements about the company’s direction, especially as it faces criticism over balancing commercial ambitions with AI safety.
European scientists have developed an AI algorithm that can interpret pig sounds to help farmers monitor their animals’ emotions, potentially improving pig welfare. The tool, created by researchers from universities across several European countries, analyses grunts, oinks, and squeals to identify whether pigs are experiencing positive or negative emotions. This could give farmers new insights beyond just monitoring physical health, as emotions are key to animal welfare but are often overlooked on farms.
The study found that pigs on free-range or organic farms produce fewer stress-related calls compared to conventionally raised pigs, suggesting a link between environment and emotional well-being. The AI algorithm could eventually be used in an app to alert farmers when pigs are stressed or uncomfortable, allowing for better management. Short grunts are associated with positive feelings, while longer grunts and high-pitched squeals often indicate stress or discomfort.
Researchers believe that once fully developed, this technology could not only benefit animal welfare but also help consumers make more informed choices about the farms they support.