Biden pushes for stronger cybersecurity standards in final days of presidency

President Joe Biden is preparing to introduce a new executive order aimed at strengthening cybersecurity standards for federal agencies and contractors. The proposed measures address growing threats from Chinese-linked cyber operations and criminal cyberattacks, which have targeted critical infrastructure, government emails, and major telecom firms. Under the draft order, contractors must adhere to stricter secure software development practices and provide documentation to be verified by the Cybersecurity and Infrastructure Security Agency (CISA).

The order highlights vulnerabilities exposed by recent cyber incidents, including the May 2023 breach of US government email accounts, attributed to Chinese hackers. New guidelines will also focus on securing access tokens and cryptographic keys, which were exploited during the attack. Contractors whose security practices fail to meet standards may face legal consequences, with referrals to the attorney general for further action.

While experts like Tom Kellermann of Contrast Security support the initiative, some criticise the timeline as insufficient given the immediate threats posed by adversaries like China and Russia. Brandon Wales of SentinelOne views the order as a continuation of efforts across the past two administrations, emphasising the need to enhance existing cybersecurity frameworks while addressing a broad range of threats.

The order underscores Biden’s commitment to cybersecurity as a pressing national security issue. It comes amid escalating concerns about foreign cyber operations and aims to solidify protections for critical US systems before the transition to new leadership.

British universities abandon X over misinformation concerns

British universities are increasingly distancing themselves from Elon Musk’s X platform, citing its role in spreading misinformation and inciting racial unrest. A Reuters survey found that several institutions have stopped posting or significantly reduced their activity, joining a broader exodus of academics and public bodies. Concerns over falling engagement, violent content, and the platform’s perceived toxicity have driven the shift.

The University of Cambridge has seen at least seven of its colleges stop posting, while Oxford’s Merton College has deleted its account entirely. Institutions such as the University of East Anglia and London Metropolitan University report dwindling engagement, while arts conservatoires like Trinity Laban and the Royal Northern College of Music are focusing their communication efforts elsewhere. Some universities, including Buckinghamshire New University, have publicly stated that X is no longer a suitable space for meaningful discussion.

The retreat from X follows similar moves by British police forces, reflecting growing unease among public institutions. Despite the trend, some universities continue to maintain a presence on the platform, though many are actively exploring alternatives. X did not respond to requests for comment on the issue.

Synthetic data seen as AI’s future

Elon Musk has echoed concerns from AI researchers that the industry is running out of new, real-world data to train advanced models. Speaking during a livestream with Stagwell’s Mark Penn, Musk said that AI systems have already processed most of the available human knowledge, and that this data plateau was reached last year.

To address the issue, AI developers are increasingly turning to synthetic data, meaning information generated by AI models themselves, to continue training. Musk argued that self-generated data will allow AI systems to improve through self-learning, and major players like Microsoft, Google, and Meta are already incorporating this approach in their models.

While synthetic data offers cost-saving advantages, it also poses risks. Some experts warn it could cause “model collapse,” reducing creativity and reinforcing biases if the AI reproduces flawed patterns from earlier training data. As the AI sector pivots towards self-generated training material, the challenge lies in balancing innovation with reliability.
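The “model collapse” risk has a simple statistical intuition: when each model is fitted only to samples drawn from the previous model, estimation error compounds and the learned distribution gradually narrows. The sketch below is an illustrative toy, not anything from the article; `train_generation` is a hypothetical helper that stands in for a full training run by fitting a Gaussian to the data and resampling from it.

```python
import random
import statistics

def train_generation(samples):
    """Stand-in for one training run: fit a Gaussian (MLE) to the
    data, then emit a 'synthetic' dataset sampled from that fit."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # MLE spread estimate, biased slightly low
    return [random.gauss(mu, sigma) for _ in samples]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(50)]  # the original "real" data
initial_std = statistics.pstdev(data)

# Each generation trains only on the previous generation's synthetic output.
for _ in range(1000):
    data = train_generation(data)

final_std = statistics.pstdev(data)
print(f"spread of real data: {initial_std:.2f}, after 1000 generations: {final_std:.4f}")
```

Because the fitted spread is slightly underestimated each round and sampling noise compounds, the distribution narrows over generations, mirroring the loss of diversity experts warn about; real training pipelines mitigate this by mixing fresh or curated real-world data back in.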

Grok chatbot now available on iOS

Elon Musk’s AI company, xAI, has launched a standalone iOS app for its chatbot, Grok, marking a major expansion beyond its initial availability to X users. The app is now live in several countries, including the US, Australia, and India, allowing users to access the chatbot directly on their iPhones.

The Grok app offers features such as real-time data retrieval from the web and X, text rewriting, summarising long content, and even generating images from text prompts. xAI highlights Grok’s ability to create photorealistic images with minimal restrictions, including the use of public figures and copyrighted material.

In addition to the app, xAI is working on a dedicated website, Grok.com, which will soon make the chatbot available in browsers. Initially limited to X’s paying subscribers, Grok gained a free tier in November and became accessible to all users earlier this month. The launch marks a notable push by xAI to establish Grok as a versatile, widely available AI assistant.

Tesla’s driverless tech under investigation

US safety regulators are investigating Tesla’s ‘Actually Smart Summon’ feature, which lets drivers move their cars remotely while standing outside the vehicle. The probe follows reports of crashes involving the technology, including at least four confirmed incidents.

The US National Highway Traffic Safety Administration (NHTSA) is examining nearly 2.6 million Tesla cars equipped with the feature since 2016. The agency cited reports of cars failing to detect obstacles, such as posts and parked vehicles, while the feature was in use.

Tesla has not commented on the investigation. Tesla CEO Elon Musk has been a vocal supporter of self-driving technology, insisting it is safer than human drivers. However, this probe, along with other ongoing investigations into Tesla’s Autopilot features, could result in recalls and increased scrutiny of the firm’s driverless systems.

The NHTSA will assess how fast cars can move in Smart Summon mode and the safeguards in place to prevent use on public roads. Tesla’s manual advises drivers to operate the feature only in private areas with a clear line of sight, but concerns remain over its safety in real-world conditions.

Meta ends fact-checking program in the US

Meta Platforms has announced the termination of its US fact-checking program and eased restrictions on politically charged discussions, such as immigration and gender identity. The decision, which affects Facebook, Instagram, and Threads, marks a significant shift in the company’s content moderation strategy. CEO Mark Zuckerberg framed the move as a return to ‘free expression,’ citing recent US elections as a cultural tipping point. The changes come as Meta seeks to build rapport with the incoming Trump administration.

In place of fact-checking, Meta plans to adopt a ‘Community Notes’ system, similar to that used by Elon Musk’s platform X. The company will also scale back proactive monitoring of hate speech, relying instead on user reports, while continuing to address high-severity violations like terrorism and scams. Meta is also relocating some policy teams from California to other states, signalling a broader operational shift. The decision follows the promotion of Republican policy executive Joel Kaplan to head of global affairs and the appointment of Trump ally Dana White to Meta’s board.

The move has sparked criticism from fact-checking organisations and free speech advocates. Angie Drobnic Holan, head of the International Fact-Checking Network, pushed back against Zuckerberg’s claims of bias, asserting that fact-checkers provide context rather than censorship. Critics, including the Centre for Information Resilience, warn that the policy rollback could exacerbate disinformation. For now, the changes will apply only to the US, with Meta maintaining its fact-checking operations in regions like the European Union, where stricter tech regulations are in place.

As Meta rolls out its ‘Community Notes’ system, global scrutiny is expected to intensify. The European Commission, already investigating Musk’s X over similar practices, noted Meta’s announcement and emphasised compliance with the EU’s Digital Services Act, which mandates robust content regulation. While Meta navigates a complex regulatory and political landscape, the impact of its new policies on disinformation and public trust remains uncertain.

White House introduces Cyber Trust Mark for smart devices

The White House unveiled a new label, the Cyber Trust Mark, for internet-connected devices like smart thermostats, baby monitors, and app-controlled lights. This new shield logo aims to help consumers evaluate the cybersecurity of these products, similar to how Energy Star labels indicate energy efficiency in appliances. Devices that display the Cyber Trust Mark will have met cybersecurity standards set by the US National Institute of Standards and Technology (NIST).

As more household items, from fitness trackers to smart ovens, become internet-connected, they offer convenience but also present new digital security risks. Anne Neuberger, US Deputy National Security Advisor for Cyber, explained that each connected device is a potential target for cyber attackers. While the label is voluntary, officials hope consumers will prioritise security and demand the Cyber Trust Mark when making purchases.

The initiative will begin with consumer devices like cameras, with plans to expand to routers and smart meters. Products bearing the Cyber Trust Mark are expected to appear on store shelves later this year. Additionally, the Biden administration plans to issue an executive order by the end of the president’s term, requiring the US government to only purchase products with the label starting in 2027. The program has garnered bipartisan support, officials said.

EU’s universal charger regulations take effect

Starting 28 December 2024, all new mobile phones, tablets, digital cameras, and other electronic devices sold in the European Union must have a USB-C charging port. This new rule aims to reduce electronic waste, simplify device use, and cut costs for consumers, who will no longer need to buy a new charger with each new device.

The European Commission’s decision to adopt a common charging standard comes after years of disagreements with tech giants, particularly Apple, which initially opposed the move. While most manufacturers had already adopted USB-C, Apple continued to use its proprietary Lightning port until late 2023. The new law, first approved in 2022, gives laptop makers until 2026 to comply.

With the standardisation of charging ports, the EU expects to save consumers at least 200 million euros a year and to cut electronic waste by over a thousand tonnes annually. The shift to USB-C, which supports faster charging and higher data transfer speeds, is seen as a step toward more efficient and sustainable tech consumption.

Overall, the EU’s new rules are designed to make life easier for consumers by eliminating the need for multiple chargers and benefiting the environment by reducing waste.

Legal world embraces AI for access to justice

AI is revolutionising the legal field, offering solutions to improve fairness and reduce costs in the justice system. Tools powered by AI are being used to streamline tasks like analysing evidence, drafting contracts, and preparing cases. Organisations like the Westway Trust in London are adopting AI to assist clients with complex disputes, such as benefits appeals and housing issues. These tools save hours of work, enabling paralegals to focus on providing better support.

The technology has sparked excitement and debate among legal professionals. AI models are being developed to help barristers identify inconsistencies in real-time court transcripts and assist judges with evidence analysis. Advocates argue that AI could make justice more accessible, while reducing the burden on legal practitioners and cutting costs for clients. However, concerns about accuracy and bias persist, with experts emphasising the importance of human oversight.

Sir Geoffrey Vos, Master of the Rolls, underscores the need for AI to complement, not replace, human judges. Guidelines stress transparency in AI use and the responsibility of lawyers to verify outputs. While tools like ChatGPT can provide general advice, professionals caution against relying on non-specialised AI for legal matters. Experts believe that AI will play a crucial role in addressing the fairness gap in the justice system without compromising the rule of law.

AI model Aitana takes social media by storm

In Barcelona, a pink-haired 25-year-old named Aitana captivates social media with her stunning images and relatable personality. But Aitana isn’t a real person—she’s an AI model created by The Clueless Agency. Launched during a challenging period for the agency, Aitana was designed as a solution to the unpredictability of working with human influencers. The virtual model has proven successful, earning up to €10,000 monthly by featuring in advertisements and modelling campaigns.

Aitana has already amassed over 343,000 Instagram followers, with some celebrities unknowingly messaging her for dates. Her creators, Rubén Cruz and Diana Núñez, maintain her appeal by crafting a detailed “life,” including fictional trips and hobbies, to connect with her audience. Unlike traditional models, Aitana has a defined personality, presented as a fitness enthusiast with a determined yet caring demeanour. This strategic design, rooted in current trends, has made her a relatable and marketable figure.

The success of Aitana has sparked a new wave of AI influencers. The Clueless Agency has developed additional virtual models, including a more introverted character named Maia. Brands increasingly seek these customisable AI creations for their campaigns, citing cost efficiency and the elimination of human unpredictability. However, critics warn that the hypersexualised and digitally perfected imagery promoted by such models may negatively influence societal beauty standards and young audiences.

Despite these concerns, Aitana represents a broader shift in advertising and social media. By democratising access to influencer marketing, AI models like her offer new opportunities for smaller businesses while challenging traditional notions of authenticity and influence in the digital age.