Faculty AI develops AI for military drones

Faculty AI, a consultancy with deep expertise in AI, has been developing AI technologies for both civilian and military applications. Known for its close work with the UK government on AI safety, the NHS, and education, Faculty is also exploring the use of AI in military drones. The company has tested AI models for the UK's AI Safety Institute (AISI), the body established to evaluate the safety of advanced AI systems.

While Faculty has worked extensively with AI in non-lethal areas, its military work raises concerns about the potential use of autonomous systems in weapons, including drones. Though Faculty has not disclosed whether its AI work extends to lethal drones, it continues to face scrutiny over its dual role of advising the government on AI safety while working with defence clients.

The company has also attracted controversy over its growing influence across the public and private sectors. Some critics, including Green Party members, have raised concerns about potential conflicts of interest arising from Faculty's widespread government contracts and its private-sector AI work, such as its collaborations with OpenAI and defence firms. Faculty's work on AI safety is seen as crucial, but critics argue that so broad a portfolio risks biasing the advice the company provides.

Despite these concerns, Faculty maintains that its work is guided by strict ethical policies, and it has emphasised its commitment to ensuring AI is used safely and responsibly, especially in defence applications. As AI continues to evolve, experts urge caution, and calls for human oversight in the development of autonomous weapons systems are growing more urgent.

UK regulator considers remedies for Synopsys-Ansys deal

The UK's competition regulator, the Competition and Markets Authority (CMA), announced that it may accept remedies proposed by Synopsys and Ansys to address concerns over their $35 billion merger. The deal, announced in January of last year, would see Synopsys acquire Ansys, a company known for simulation software used in industries from aerospace to sports equipment manufacturing.

The CMA outlined the proposed remedies, which include the sale of Ansys' power consumption analysis product for digital chips and Synopsys' global optics and photonics software business. The regulator has until 5 March to decide whether to accept these remedies, though it can extend the deadline to 6 May.

Synopsys expressed satisfaction with the CMA’s progress and reiterated its commitment to working closely with the authority. The outcome of the regulator’s review could significantly impact the completion of the merger, which aims to enhance the companies’ capabilities in chip design software.

UK develops first quantum clock for military use

The Ministry of Defence announced that the UK is developing its first quantum clock, a cutting-edge device designed to enhance military intelligence and reconnaissance. Created by the Defence Science and Technology Laboratory, the clock boasts unparalleled precision, losing less than one second over billions of years.

By leveraging quantum mechanics to measure fluctuations in atomic energy levels, the technology reduces reliance on vulnerable GPS systems, offering greater resilience against disruption by adversaries. This is the first such device built in the UK, with deployment anticipated within five years.
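
As a rough back-of-the-envelope check (the figures below are illustrative assumptions, not numbers from the Ministry of Defence): an atomic clock ticks at the frequency of an atomic transition, and the quoted precision translates directly into a fractional frequency uncertainty:

    \nu = \frac{\Delta E}{h}   % transition energy \Delta E sets the tick rate
    \frac{\Delta\nu}{\nu} \lesssim \frac{1\,\mathrm{s}}{10^{9}\,\mathrm{yr} \times 3.15\times10^{7}\,\mathrm{s/yr}} \approx 3\times10^{-17}

Losing under one second over a billion years therefore corresponds to a clock stable to a few parts in 10^17.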

While not the world’s first quantum clock (similar technology was pioneered in the US 15 years ago), the UK effort highlights a growing global race in quantum advancements. Quantum clocks hold potential beyond military applications, impacting satellite navigation, telecommunications, and scientific research.

Countries like the United States and China are heavily investing in quantum technology, seeing its transformative potential. Future UK research aims to miniaturise the quantum clock for broader applications, including integration into military vehicles and aircraft, underscoring its strategic importance in defence and industry.

Social media platforms face penalties over child safety

The UK government is intensifying efforts to safeguard children online, with new measures requiring social media platforms to implement robust age verification and protect young users from harmful content. Technology Secretary Peter Kyle highlighted the importance of ‘watertight’ systems, warning that companies failing to comply could face significant fines or even prison terms for executives.

The measures, part of the Online Safety Act passed in 2023, will see platforms penalised for failing to address issues such as bullying, violent content, and risky stunts. Ofcom, the UK's communications regulator, is set to outline further obligations in January, including stricter ID verification for adult-only apps.

Debate continues over the balance between safety and accessibility. While some advocate for bans similar to Australia's under-16 restrictions, teenagers consulted by Kyle emphasised the positive aspects of social media, including learning opportunities and community connections. Research into the impact of screen time on mental health is ongoing, with new findings expected next year.

Synopsys faces UK competition probe over $35 billion Ansys merger

The UK’s Competition and Markets Authority (CMA) has voiced concerns over Synopsys’ proposed $35 billion acquisition of Ansys, claiming the deal could harm innovation, reduce product quality, and increase costs in the semiconductor design and light-simulation software markets. The regulator fears diminished competition could negatively impact UK businesses and consumers, particularly in sectors such as artificial intelligence and cloud computing, which rely heavily on semiconductor technology.

Synopsys, a leader in chip design software, announced the acquisition in January, aiming to combine its tools with Ansys’ diverse software offerings, used in industries ranging from aerospace to consumer goods. However, the CMA has highlighted risks of reduced consumer choice and a potential stifling of advancements in the sector. If these concerns are not adequately addressed, the regulator may initiate an in-depth investigation into the merger.

In response, Synopsys has proposed selling its optical solutions business to Keysight Technologies, a move it believes will satisfy the CMA’s concerns. A company spokesperson expressed confidence in resolving the regulatory hurdles and expects the deal to close in the first half of 2025. The CMA’s final decision could shape the future landscape of competition in the semiconductor and simulation software industries, as global demand for advanced technologies continues to grow.

Britain enforces new online safety rules for social media platforms

Britain's new online safety regime officially took effect on Monday, compelling social media platforms like Facebook and TikTok to combat criminal activity and prioritise safer design. Media regulator Ofcom introduced the first codes of practice aimed at tackling illegal harms, including child sexual abuse and content encouraging suicide. Platforms have until 16 March 2025 to assess the risks of harmful content and implement measures like enhanced moderation, easier reporting, and built-in safety tests.

Ofcom’s Chief Executive, Melanie Dawes, emphasised that tech companies are now under scrutiny to meet strict safety standards. Failure to comply after the deadline could result in fines of up to £18 million ($22.3 million) or 10% of a company’s global revenue. Britain’s Technology Secretary Peter Kyle described the new rules as a significant shift in online safety, pledging full support for regulatory enforcement, including potential site blocks.

The Online Safety Act, enacted last year, sets rigorous requirements for platforms to protect children and remove illegal content. High-risk sites must employ automated tools like hash-matching to detect child sexual abuse material. More safety regulations are expected in the first half of 2025, marking a major step in the UK’s fight for safer online spaces.
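
The hash-matching Ofcom points to works by fingerprinting uploaded files and comparing the fingerprints against databases of known abuse material. Below is a minimal sketch in Python, with all names hypothetical; it uses an exact cryptographic match for simplicity, whereas deployed systems such as PhotoDNA use perceptual hashes so that resized or re-encoded copies of an image still match:

    import hashlib

    def sha256_digest(data: bytes) -> str:
        """Fingerprint an uploaded file's bytes."""
        return hashlib.sha256(data).hexdigest()

    def is_known_illegal(data: bytes, known_hashes: set[str]) -> bool:
        """Exact match against a database of hashes of known illegal material."""
        return sha256_digest(data) in known_hashes

    # The hash set would be loaded from a vetted industry database (hypothetical).
    known_hashes: set[str] = set()
    if is_known_illegal(b"uploaded file bytes", known_hashes):
        print("Block the upload and file a report.")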

AI technology aims to cut hospital visits for COPD patients

A pioneering NHS trust in Hull and East Yorkshire is harnessing AI to enhance its chronic respiratory illness care. The Lenus COPD support system, introduced in March 2023, has already reduced hospital visits by 40% and aims for even greater improvements with the integration of AI.

The app enables patients to monitor their symptoms through regular self-assessments and offers direct messaging to NHS clinics. AI will soon analyse the collected data to identify patterns and potential triggers, enabling earlier interventions to prevent hospitalisation.

Professor Mike Crooks, who leads the service, emphasised the proactive nature of the system. The AI-driven insights allow clinics to deliver timely care, helping patients stabilise their health before conditions worsen.

Patients like Ruth, diagnosed with COPD at just 14, report transformative results. Frequent hospital visits have become a rarity, and the app has provided her with a reliable lifeline for clinical support.

UK’s online safety rules take effect

Social media platforms operating in the UK have been given until March 2025 to identify and mitigate illegal content on their services or risk fines of up to 10% of their global revenue. The warning comes as the Online Safety Act (OSA) begins to take effect, with Ofcom, the regulator, releasing final guidelines on tackling harmful material, including child sexual abuse, self-harm promotion, and extreme violence.

Dame Melanie Dawes, Ofcom's chief, described this as the industry's 'last chance' to reform. 'If platforms fail to act, we will take enforcement measures,' she warned, adding that public pressure for stricter action could grow. Companies must conduct risk assessments by March, examining how such material surfaces on their services and devising ways to block its spread.

While hailed as a step forward, critics argue the law leaves gaps in child safety measures. The Molly Rose Foundation and NSPCC have expressed concerns about the lack of targeted action on harmful content in private messaging and self-harm imagery. Despite these criticisms, the UK government views the Act as a reset of societal expectations for tech firms, aiming to ensure a safer online environment.

UK court dismisses privacy lawsuit against Google

Google has successfully defended itself against a revived privacy lawsuit in the UK concerning the transfer of patient data from the Royal Free London NHS Trust. The legal case, brought by patient Andrew Prismall on behalf of 1.6 million individuals, alleged that the data shared with Google’s AI division, DeepMind Technologies, was misused.

The Royal Free NHS Trust had transferred the data in 2015 to assist in developing an AI app designed to detect kidney injuries. Although Britain’s Information Commissioner’s Office ruled in 2017 that the data-sharing arrangement violated privacy laws, a subsequent lawsuit against Google and DeepMind was dismissed last year due to insufficient grounds.

On Wednesday, the Court of Appeal upheld this dismissal, rejecting Prismall’s attempt to challenge the earlier ruling. Google has not commented on the outcome, which closes a high-profile chapter in the debate over privacy and technology’s role in healthcare.

AI tool helps detect lung cancer

Dianne Covey, a 69-year-old retired hospital worker from Farncombe, credits an AI tool with helping to save her life after it flagged her lung cancer within hours. She visited her GP with a persistent cough, and her chest X-ray was analysed by Annalise.ai, a technology that flags abnormalities for urgent review. The swift diagnosis caught her cancer at Stage 1, offering a positive prognosis.

‘I never really understood artificial intelligence, but now I think it might have saved my life,’ said Ms Covey. ‘The early diagnosis has given me a second chance at life.’ She is the first patient at the Royal Surrey NHS Foundation Trust to benefit from the AI system, which prioritises X-rays needing immediate attention and enhances accuracy by identifying tiny anomalies often missed in manual reviews.

The Annalise.ai tool is currently being used across five UK NHS trusts in Surrey, Sussex, and Frimley, enabling radiographers to streamline cancer diagnoses. By accelerating and refining the diagnostic process, this technology has the potential to revolutionise early detection, giving countless patients a fighting chance against life-threatening diseases.