Imperial College unveils plans for new AI campus in west London

Imperial College London has launched a public consultation on plans for a new twelve-storey academic building in White City dedicated to AI and data science.

The proposed development will bring together computer scientists, mathematicians, and business specialists to advance AI research and innovation.

The building will include laboratories, research facilities, and public areas such as cafés and exhibition spaces. It forms part of Imperial’s wider White City masterplan, which also includes housing, a hotel, and additional research infrastructure.

The university aims to create what it describes as a hub for collaboration between academia and industry.

Outline planning permission for the site was granted by Hammersmith and Fulham Council in 2019. The consultation is open until 26 October, after which a formal planning application is expected later this year. If approved, construction could begin in mid-2026, with completion scheduled for 2029.

Imperial College, established in 1907 and known for its focus on science, engineering, medicine, and business, sees the new campus as a step towards strengthening the UK’s position in AI research and technology development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy bans deepfake app that undresses people

Italy’s data protection authority has ordered an immediate suspension of the app Clothoff, which uses AI to generate fake nude images of real people. The company behind it, based in the British Virgin Islands, is now barred from processing personal data of Italian users.

The watchdog found that Clothoff enables anyone, including minors, to upload photos and create sexually explicit or pornographic deepfakes. The app fails to verify consent from those depicted and offers no warning that the images are artificially generated.

The regulator described the measure as urgent, citing serious risks to human dignity, privacy, and data protection, particularly for children and teenagers. It has also launched a wider investigation into similar so-called ‘nudifying’ apps that exploit AI technology.

Italian media have reported a surge in cases where manipulated images are used for harassment and online abuse, prompting growing social alarm. Authorities say they intend to take further steps to protect individuals from deepfake exploitation and strengthen safeguards around AI image tools.

Tech giants race to remake social media with AI

Tech firms are racing to integrate AI into social media, reshaping online interaction while raising fresh concerns over privacy, misinformation, and copyright. Platforms like OpenAI’s Sora and Meta’s Vibes are at the centre of the push, blending generative AI tools with short-form video features similar to TikTok.

OpenAI’s Sora allows users to create lifelike videos from text prompts, but film studios say copyrighted material is appearing without permission. OpenAI has promised tighter controls and a revenue-sharing model for rights holders, while Meta has introduced invisible watermarks to identify AI content.
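The watermarking idea can be illustrated with a toy sketch. To be clear, this is not Meta's actual scheme, which must survive cropping, scaling, and re-encoding; it only shows the general principle of hiding a machine-readable tag in bits that viewers cannot perceive:

```python
# Toy "invisible" watermark: hide a bit pattern in the least significant
# bits (LSBs) of pixel values. Changing a pixel's LSB alters its value by
# at most 1, which is imperceptible to the eye.

WATERMARK = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical 8-bit tag


def embed(pixels, bits):
    """Overwrite the LSB of the first len(bits) pixels with watermark bits."""
    marked = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return marked + pixels[len(bits):]


def extract(pixels, n):
    """Read the LSBs back out of the first n pixels."""
    return [p & 1 for p in pixels[:n]]


pixels = [200, 13, 77, 254, 9, 180, 66, 31, 120]
marked = embed(pixels, WATERMARK)
assert extract(marked, 8) == WATERMARK  # tag survives, image barely changes
```

Production systems embed the signal redundantly across the whole frame in a transform domain rather than in raw pixel LSBs, precisely so that compression and editing do not erase it.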

Safety concerns are mounting as well. Lawsuits allege that AI chatbots such as Character.AI have contributed to mental health issues among teenagers. OpenAI and Meta have added stronger restrictions for young users, including limits on mature content and tighter communication controls for minors.

Critics question whether users truly want AI-generated content dominating their feeds, describing the influx as overwhelming and confusing. Yet industry analysts say the shift could define the next era of social media, as companies compete to turn AI creativity into engagement and profit.

AI uncovers Lyme disease overlooked by doctors

Oliver Moazzezi endured years of debilitating symptoms, including severe tinnitus, high blood pressure, fatigue, and muscle spasms, following a tick bite three years ago. Doctors initially attributed his issues to anxiety or hearing loss, leaving him feeling dismissed and treated like a hypochondriac.

Frustrated, the IT consultant turned to AI, inputting all his symptoms into a tool prompted to draw from verified medical sources. Without mentioning Lyme disease, the AI suggested it as a possibility, prompting Oliver to seek a private antibody test that confirmed the diagnosis.

Lyme disease, a bacterial infection spread by infected ticks, often mimics other conditions, making early detection challenging. Oliver’s symptoms, including a rash, fatigue, and tinnitus, disrupted his gym visits, swimming, and ability to enjoy the sounds of nature.

Specialists echo Oliver’s frustrations with under-diagnosis in both the NHS and private care. Tick-borne disease expert Georgia Tuckey says NHS tests miss Lyme symptom patterns: England and Wales record around 1,500 confirmed cases yearly, but a further 3,000–4,000 likely go untreated.

The UK Health Security Agency acknowledges that the true number of cases is likely higher than confirmed figures suggest, and says work is ongoing to better track incidence.

AI shows promise in aiding disease diagnosis, as seen in Oliver Moazzezi’s discovery, empowering patients with insights from verified medical sources. However, experts stress that AI cannot replace doctors, urging professional consultation to ensure accurate, safe treatment.

Unapproved AI tools boom in UK workplaces

Microsoft research reveals that 71% of UK employees use unapproved AI tools at work, with 51% doing so weekly. Organisations face heightened data privacy and cybersecurity risks as sensitive information enters unregulated platforms.

Despite these dangers, awareness remains low, as only 32% express concern over data privacy and 29% over IT system vulnerabilities.

Workers favour shadow AI for its simplicity, with 41% citing familiarity from personal use and 28% noting the absence of approved alternatives at their firms. Common applications include drafting communications (49%), creating reports or presentations (40%), and handling finance tasks (22%).

Generative AI assistants now permeate the workforce, saving an average of 7.75 hours weekly per user, equivalent to 12.1 billion hours annually across the economy, valued at £208 billion.

Sector leaders in IT, telecoms, sales, media, marketing, architecture, engineering, and finance report the highest adoption rates. Employees plan to redirect saved time towards better work-life balance (37%), skill development (31%), and more fulfilling tasks (28%).

Darren Hardman, CEO of Microsoft UK and Ireland, urges businesses to prioritise enterprise-grade tools that blend productivity with robust safeguards.

Optimism about AI has climbed, with 57% of staff feeling excited or confident, up from 34% in January 2025. Familiarity grows too, as confusion over starting points drops from 44% to 36%, and clarity on organisational AI strategies rises from 24% to 43%.

Frontier firms leading in adoption report twice the rate of thriving employees, aligning with global trends in which 82% of leaders deem 2025 pivotal for AI.

Purple Fest highlights AI for disabilities

Entrepreneurs at International Purple Fest in Goa, India, from 9 to 12 October 2025, showcased AI transforming assistive technologies. Innovations like conversational screen readers, adaptive dashboards, and real-time captioning empower millions with disabilities worldwide.

Designed with input from those with lived experience, these tools turn barriers into opportunities for learning, working, and leading independently.

Surashree Rahane, born with club foot and polymelia, founded Yearbook Canvas and champions inclusive AI. Collaborating with Newton School of Technology near New Delhi, she develops adaptive learning platforms tailored to diverse learners.

‘AI can democratise education,’ she stated, ‘but only if trained to avoid perpetuating biases.’ Her work addresses structural barriers like inaccessible systems and biased funding networks.

Prateek Madhav, CEO of AssisTech Foundation, described AI as ‘the great equaliser,’ creating jobs through innovations like voice-to-speech tools and gesture-controlled wheelchairs.

Ketan Kothari, a consultant at Xavier’s Resource Centre in Mumbai, relies on AI for independent work, using live captions and visual description apps. Such advancements highlight AI’s role in fostering agency and inclusion across diverse needs.

Hosted by Goa’s Department for Empowerment of Persons with Disabilities, UN India, and the Ministry of Social Justice, Purple Fest promotes universal design.

Tshering Dema from the UN Development Coordination Office noted that inclusion requires a global mindset shift. ‘The future of work must be co-designed with people,’ she said, reflecting a worldwide transition towards accessibility.

ICE-tracking apps pulled from the App Store

Apple has taken down several mobile apps used to track US Immigration and Customs Enforcement (ICE) activity, sparking backlash from developers and digital rights advocates. The removals follow reported pressure from the US Department of Justice, which has cited safety and legal concerns.

One affected app, Eyes Up, was designed to alert users to ICE raids and detention locations. Its developer, identified only as Mark for safety reasons, said he believes the decision was politically motivated and vowed to contest it.

The takedown reflects a wider debate over whether app stores should host software linked to law enforcement monitoring or protest activity. Developers argue their tools support community safety and transparency, while regulators say such apps could risk interference with federal operations.

Apple has not provided detailed reasoning for its decision beyond referencing its developer guidelines. Google has also reportedly removed similar apps from its Play Store, citing policy compliance. Both companies face scrutiny over how content moderation intersects with political and civil rights issues.

Civil liberties groups warn that the decision could set a precedent limiting speech and digital activism in the US. The affected developers have said they will continue to distribute their apps through alternative channels while challenging the removals.

Google faces UK action over market dominance

Google faces new regulatory scrutiny in the UK after the competition watchdog designated it with strategic market status under a new digital markets law. The ruling could change how users select search engines and how Google ranks online content.

The Competition and Markets Authority said Google controls more than 90 percent of UK searches, giving it a position of unmatched influence. The designation enables the regulator to propose targeted measures to ensure fair competition, with consultations expected later in 2025.

Google argued that tighter restrictions could slow innovation, claiming its search tools contributed £118 billion to the UK economy in 2023. The company warned that new rules might hinder product development during rapid AI advancement.

The move adds to global scrutiny of the tech giant, which faces significant fines and court cases in the US and EU over advertising and app store practices. The CMA’s decision marks the first major use of its new powers to regulate digital platforms with strategic market status.

Tariffs and AI top the agenda for US CEOs over the next three years

US CEOs prioritise cost reduction and AI integration amid global economic uncertainty. According to KPMG’s 2025 CEO Outlook, leaders are reshaping supply chains while preparing for rapid AI transformation over the next three years.

Tariffs are a key factor influencing business strategies, with 89% of US CEOs expecting significant operational impacts. Many are adjusting sourcing models, while 86% say they will increase prices where needed. Supply chain resilience remains the top short-term pressure for decision-making.

AI agents are seen as major game-changers: 84% of CEOs expect a native AI company to become a leading industry player within three years, displacing incumbents. Companies also expect quick returns on AI investment, with most anticipating payoffs within one to three years.

Cybersecurity is a significant concern alongside AI integration. Forty-six percent have increased spending on digital risk resilience, focusing on fraud prevention and data privacy. CEOs recognise that AI and quantum computing introduce both opportunities and new vulnerabilities.

Workforce transformation is a clear priority. Eighty-six percent plan to embed AI agents into teams next year, while 73% focus on retaining and retraining high-potential talent. Upskilling, governance, and organisational redesign are emerging as essential strategies.

Grok to get new AI video detection tools, Musk says

Musk said Grok will analyse bitstreams for AI signatures and scan the web to verify the origins of videos. Grok itself added that it will detect subtle AI artefacts in compression and generation patterns that humans cannot see.

AI tools such as Grok Imagine and Sora are reshaping the internet by making realistic video generation accessible to anyone. The rise of deepfakes has alarmed users, who warn that high-quality fake videos could soon be indistinguishable from real footage.

A user on X expressed concern that leaders are not addressing the growing risks. Elon Musk responded, revealing that his AI company xAI is developing Grok’s ability to detect AI-generated videos and trace their origins online.

The detection features aim to rebuild trust in digital media as AI-generated content spreads. Commentators have dubbed the flood of such content ‘AI slop’, raising concerns about misinformation and consent.

Concerns about deepfakes have grown since OpenAI launched the Sora app. A surge in deepfake content prompted OpenAI to tighten restrictions on cameo mode, allowing users to opt out of specific scenarios.
