British universities abandon X over misinformation concerns

British universities are increasingly distancing themselves from Elon Musk’s X platform, citing its role in spreading misinformation and inciting racial unrest. A Reuters survey found that several institutions have stopped posting or significantly reduced their activity, joining a broader exodus of academics and public bodies. Concerns over falling engagement, violent content, and the platform’s perceived toxicity have driven the shift.

The University of Cambridge has seen at least seven of its colleges stop posting, while Oxford’s Merton College has deleted its account entirely. Institutions such as the University of East Anglia and London Metropolitan University report dwindling engagement, while arts conservatoires like Trinity Laban and the Royal Northern College of Music are focusing their communication efforts elsewhere. Some universities, including Buckinghamshire New University, have publicly stated that X is no longer a suitable space for meaningful discussion.

The retreat from X follows similar moves by British police forces, reflecting growing unease among public institutions. Despite the trend, some universities continue to maintain a presence on the platform, though many are actively exploring alternatives. X did not respond to requests for comment on the issue.

Startup launches AI assistant to simplify daily tasks

San Francisco-based startup Based Hardware has unveiled Omi, a wearable AI assistant designed to improve productivity. Launched at the Consumer Electronics Show, the device responds to voice commands when worn as a necklace, or it can be attached to the side of the head with medical tape, activating through a unique “brain interface.”

Unlike other AI gadgets that aim to replace smartphones, Omi is meant to complement existing devices. It can answer questions, summarise conversations, and manage tasks like to-do lists and meeting schedules. The startup’s founder, Nik Shevchenko, claims that Omi’s brain interface allows users to interact without saying a wake word by recognising mental focus. However, this feature has yet to be widely tested.

Based Hardware built Omi on an open-source platform to address privacy concerns. Users can store data locally and even develop their own apps for the device. Priced at $89, the consumer version will ship later in 2025, while a developer version is already available.

Omi enters a growing market of AI gadgets that have struggled to meet expectations. Shevchenko hopes Omi’s focus on practical productivity tools will set it apart, but the device’s success will likely depend on whether users embrace its experimental brain interface feature.

Meta ends fact-checking program in the US

Meta Platforms has announced the termination of its US fact-checking program and eased restrictions on politically charged discussions, such as immigration and gender identity. The decision, which affects Facebook, Instagram, and Threads, marks a significant shift in the company’s content moderation strategy. CEO Mark Zuckerberg framed the move as a return to ‘free expression,’ citing recent US elections as a cultural tipping point. The changes come as Meta seeks to build rapport with the incoming Trump administration.

In place of fact-checking, Meta plans to adopt a ‘Community Notes’ system, similar to that used by Elon Musk’s platform X. The company will also scale back proactive monitoring of hate speech, relying instead on user reports, while continuing to address high-severity violations like terrorism and scams. Meta is also relocating some policy teams from California to other states, signalling a broader operational shift. The decision follows the promotion of Republican policy executive Joel Kaplan to head of global affairs and the appointment of Trump ally Dana White to Meta’s board.

The move has sparked criticism from fact-checking organisations and free speech advocates. Angie Drobnic Holan, head of the International Fact-Checking Network, pushed back against Zuckerberg’s claims of bias, asserting that fact-checkers provide context rather than censorship. Critics, including the Centre for Information Resilience, warn that the policy rollback could exacerbate disinformation. For now, the changes will apply only to the US, with Meta maintaining its fact-checking operations in regions like the European Union, where stricter tech regulations are in place.

As Meta rolls out its ‘Community Notes’ system, global scrutiny is expected to intensify. The European Commission, already investigating Musk’s X over similar practices, noted Meta’s announcement and emphasised compliance with the EU’s Digital Services Act, which mandates robust content regulation. While Meta navigates a complex regulatory and political landscape, the impact of its new policies on disinformation and public trust remains uncertain.

UN’s ICAO targeted in alleged cyberattack

The International Civil Aviation Organization (ICAO) is investigating a potential cybersecurity breach following claims that a hacker accessed thousands of its documents. The United Nations agency, which sets global aviation standards, confirmed it is reviewing reports of an incident allegedly linked to a known cybercriminal group.

A post on a popular hacking forum dated 5 January suggested that 42,000 ICAO documents had been compromised, including sensitive personal data. Samples of the leaked information reportedly contain names, dates of birth, home addresses, email contacts, phone numbers, and employment details, with some records appearing to belong to ICAO staff.

ICAO has not confirmed whether the alleged breach is genuine or the extent of any possible data exposure. In response to media inquiries, the agency declined to provide further details beyond its official statement acknowledging the ongoing investigation.

TikTok’s Pool Guy balances fame with everyday work

Miles Laflin, a Bedfordshire-based pool cleaner known as ‘The Pool Guy’, has amassed over 22 million followers across social media platforms for his visually satisfying videos of pool cleaning. Despite his fame, the 34-year-old continues his day job, crediting it with keeping him grounded. Laflin has been in the pool cleaning business for over a decade and began sharing his work on TikTok, where a single video has attracted over 170 million views.

His viral content has significantly boosted his business, with 90% of his work coming from followers who discover him online. His success also earned him the high-quality content creator of the year title at the inaugural UK and Ireland TikTok Awards. He encourages others to share content about their jobs, emphasising that social media offers opportunities for brand deals, global travel, and personal transformation.

Laflin continues to balance his viral fame with his pool cleaning business, a testament to his passion for the trade. He believes there is an audience for every profession, urging aspiring content creators to showcase their unique skills online.

AI model Aitana takes social media by storm

In Barcelona, a pink-haired 25-year-old named Aitana captivates social media with her stunning images and relatable personality. But Aitana isn’t a real person—she’s an AI model created by The Clueless Agency. Launched during a challenging period for the agency, Aitana was designed as a solution to the unpredictability of working with human influencers. The virtual model has proven successful, earning up to €10,000 monthly by featuring in advertisements and modelling campaigns.

Aitana has already amassed over 343,000 Instagram followers, with some celebrities unknowingly messaging her for dates. Her creators, Rubén Cruz and Diana Núñez, maintain her appeal by crafting a detailed “life,” including fictional trips and hobbies, to connect with her audience. Unlike traditional models, Aitana has a defined personality, presented as a fitness enthusiast with a determined yet caring demeanour. This strategic design, rooted in current trends, has made her a relatable and marketable figure.

The success of Aitana has sparked a new wave of AI influencers. The Clueless Agency has developed additional virtual models, including a more introverted character named Maia. Brands increasingly seek these customisable AI creations for their campaigns, citing cost efficiency and the elimination of human unpredictability. However, critics warn that the hypersexualised and digitally perfected imagery promoted by such models may negatively influence societal beauty standards and young audiences.

Despite these concerns, Aitana represents a broader shift in advertising and social media. By democratising access to influencer marketing, AI models like her offer new opportunities for smaller businesses while challenging traditional notions of authenticity and influence in the digital age.

Trump urges Supreme Court to postpone TikTok law

President-elect Donald Trump has called on the US Supreme Court to postpone implementing a law that would ban TikTok or force its sale, arguing for time to seek a political resolution after taking office. The court will hear arguments in the case on 10 January, ahead of a 19 January deadline for TikTok’s Chinese owner, ByteDance, to sell the app or face a US ban.

The move marks a stark shift for Trump, who previously sought to block TikTok in 2020 over national security concerns tied to its Chinese ownership. Trump’s legal team emphasised that his request does not take a stance on the law’s merits but seeks to allow his incoming administration to explore alternatives. Trump has expressed a newfound appreciation for TikTok, citing its role in boosting his campaign visibility.

TikTok, with over 170 million US users, continues to challenge the legislation, asserting that its data and operations affecting US users are fully managed within the country. However, national security concerns persist, with the Justice Department and a coalition of attorneys general urging the Supreme Court to uphold the divest-or-ban mandate. The case highlights the growing debate between free speech advocates and national security interests in regulating digital platforms.

ChatGPT search found vulnerable to manipulation

New research by The Guardian reveals that ChatGPT Search, OpenAI’s recently launched AI-powered search tool, can be misled into generating false or overly positive summaries. By embedding hidden text in web pages, researchers demonstrated that the AI could ignore negative reviews or even produce malicious code.

The feature, designed to streamline browsing by summarising content such as product reviews, is susceptible to hidden text attacks—a well-known vulnerability in large language models. While this issue has been studied before, this marks the first time such manipulation has been proven on a live AI search tool.

OpenAI did not comment on this specific case but stated it employs measures to block malicious websites and is working to improve its defences. Experts note that competitors like Google, with more experience in search technology, have developed stronger safeguards against similar threats.

Vietnam enacts strict internet rules targeting social media and gaming

Vietnam’s new internet law, known as ‘Decree 147,’ came into effect on Wednesday, requiring platforms like Facebook and TikTok to verify user identities and share data with authorities upon request. Critics view the move as a crackdown on freedom of expression, with activists warning it will stifle dissent and blur the line between legal and illegal online activity. Under the rules, tech companies must store verified information alongside users’ names and dates of birth and remove government-designated “illegal” content within 24 hours.

The decree also impacts the booming social commerce sector by allowing only verified accounts to livestream. Additionally, it imposes gaming restrictions on minors, limiting each session to one hour and total daily play to 180 minutes. Vietnam, with over 65 million Facebook users and a growing gaming population, may see significant disruptions in online behaviour and businesses.

Critics liken the law to China’s tight internet controls. Activists and content creators have expressed fear of persecution, citing recent examples like the 12-year prison sentence for a YouTuber critical of the government. Despite the sweeping measures, some local businesses and gamers remain sceptical about enforcement, suggesting a wait-and-see approach to the decree’s real-world impact.

Mexican cartel scams timeshare owners

The FBI is warning timeshare owners about a telemarketing scam linked to the Jalisco New Generation Cartel, one of Mexico’s most dangerous criminal groups. This sophisticated operation targets individuals, particularly older adults, with offers to buy their timeshares at inflated prices. Victims are tricked into paying fees for taxes, processing, or other fabricated expenses, often losing tens of thousands of dollars.

The scam employs advanced tactics, including impersonation of legitimate businesses and government agencies, as well as the use of fraudulent websites. Call centres operated by the cartel facilitate these schemes, preying on vulnerable individuals while funding broader criminal activities, including drug trafficking. The scammers often re-victimise those they have already defrauded by promising to recover losses in exchange for additional payments.

To avoid falling prey to such fraud, experts advise verifying buyers and companies, avoiding upfront fees, and consulting professionals before proceeding with transactions. Reporting suspicious activity to the authorities is critical in combating these scams and protecting others.