UK government adopts new AI tools

The UK government is exploring new AI tools to streamline public services and assist ministers and civil servants. Among these is Parlex, a tool that predicts how MPs may react to proposed policies, offering insights into potential support or opposition based on MPs’ previous parliamentary contributions. Described as a ‘parliamentary vibe check,’ the tool helps policy teams craft strategies before formally proposing new measures.
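The article does not describe how Parlex works internally. Purely as illustration, the sketch below shows one way past parliamentary contributions could feed a support-or-oppose prediction; the training texts, labels, and model choice are all hypothetical, not the government's method.

```python
# Illustrative only: a toy stance model in the spirit of what Parlex is
# described as doing. The real system's design is not public; the data
# and labels below are entirely hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical excerpts from an MP's past contributions, labelled by
# whether the MP supported (1) or opposed (0) comparable measures.
past_contributions = [
    "I welcome investment in digital public services",
    "This scheme risks wasting taxpayer money",
    "Modernising departments is long overdue",
    "We cannot trust opaque systems with citizens' data",
]
stances = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_contributions, stances)

# Estimate likely reaction to a draft policy summary: a crude 'vibe check'.
draft = ["A new AI tool to streamline correspondence handling"]
print(model.predict_proba(draft))  # columns: [P(oppose), P(support)]
```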

Parlex belongs to Humphrey, an AI suite named after the Yes Minister character, alongside other tools designed to modernise government operations. These include Minute, which transcribes ministerial meetings, and Lex, which analyses the impact of laws. Another tool, Redbox, automates submission processing, while Consult is projected to save £80 million annually by improving public consultation processes. The Department for Work and Pensions has also used AI to analyse handwritten correspondence, accelerating responses to vulnerable individuals.

The broader government strategy, unveiled by Prime Minister Keir Starmer, emphasises integrating AI into public services while balancing privacy concerns. Plans include sharing anonymised NHS data for AI research under stringent safeguards. Ministers believe these innovations could address economic challenges and boost the UK’s economy by up to £470 billion over the next decade. However, past missteps, such as erroneous fraud accusations stemming from flawed algorithms, highlight the need for careful implementation.

EU strengthens online hate speech rules for Big Tech

Major tech platforms, including Facebook, YouTube, and X, have pledged to strengthen efforts to combat online hate speech under an updated European Union code of conduct. The revised framework, part of the EU’s Digital Services Act (DSA), mandates stricter measures to reduce illegal and harmful content online.

Companies will collaborate with public and non-profit experts to monitor their responses to hate speech notifications, aiming to review at least two-thirds of them within 24 hours. Advanced detection tools and transparency regarding recommendation systems will also play key roles in reducing the reach of harmful content before removal.

The EU plans to track compliance closely, requiring platforms to provide country-specific data on hate speech classifications, including race, gender identity, and religion. These measures align with broader efforts to ensure accountability in tech governance.

EU officials emphasised that adherence to the revised code will influence regulatory enforcement under the DSA, marking a significant step in the battle against online hate.

Teachers fight back against AI misuse

Educators are embracing AI to tackle academic dishonesty, which is increasingly prevalent in digital learning environments. Tools like ChatGPT have made it easier for students to generate entire assignments using AI. To counter this, teachers are employing AI detection tools and innovative strategies to maintain academic integrity.

Understanding AI’s capabilities is crucial in detecting misuse. Educators are advised to familiarise themselves with tools like ChatGPT by testing it with sample assignments. Collecting genuine writing samples from students early in the semester provides a baseline for comparison, helping identify potential AI-generated work. Tools designed specifically to detect AI writing further assist in verifying authenticity.
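As a minimal sketch of that baseline idea (not any specific detector's method; the sample texts and feature choices below are assumptions), a new submission can be compared against a student's earlier writing using character n-gram similarity:

```python
# Compare a new submission against writing samples collected from the
# same student earlier in the semester. Character n-grams are a common
# stylometric feature; the texts here are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

baseline_samples = [
    "In my first essay I argued that school uniforms limit expression.",
    "My lab report last month described how we measured the reaction.",
]
submission = "The epistemological ramifications of the aforementioned paradigm..."

vectoriser = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
vectors = vectoriser.fit_transform(baseline_samples + [submission])

# Similarity of the submission to each baseline sample; an unusually low
# score is a prompt for a conversation, not proof of AI use.
print(cosine_similarity(vectors[-1], vectors[:-1]))
```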

Requesting rewrites is another effective approach when AI usage is suspected. By asking an AI tool to rewrite a suspected piece, teachers can highlight the telltale signs of machine-generated text, such as a lack of personal style and overuse of synonyms. Strong evidence of AI misuse strengthens cases when addressing cheating with students and school administrators.
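A rough sketch of that rewrite check follows, assuming access to the OpenAI Python SDK and an API key; the model name is an arbitrary choice and the suspect text is invented. The intuition: if an AI's rewrite barely changes a suspect passage, the original may already read like machine output.

```python
# Ask a model to rewrite a suspect passage, then measure how much it
# actually changed. Treat the result as a lead, not a verdict.
from difflib import SequenceMatcher
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
suspect = "The aforementioned considerations underscore the multifaceted nature..."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # arbitrary choice; any chat-capable model works
    messages=[{"role": "user", "content": f"Rewrite this paragraph: {suspect}"}],
)
rewrite = response.choices[0].message.content

# High similarity between the suspect text and a fresh AI rewrite is one
# telltale sign of machine-generated prose.
print(SequenceMatcher(None, suspect, rewrite).ratio())
```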

The rise of AI in education underscores the need for vigilance. Teachers must balance scepticism with evidence-based methods to ensure fairness. Maintaining a collaborative and transparent approach can help foster a culture of learning over shortcuts.

Spikerz raises $7 million to fight social media threats

Social media security firm Spikerz has raised $7 million in a seed funding round led by Disruptive AI, with contributions from Horizon Capital, Wix Ventures, Storytime Capital, and BDMI. The funding highlights the growing demand for innovative solutions to combat cyber threats on social platforms.

The startup specialises in protecting social media accounts from phishing attacks, scams, and other risks posed by increasingly sophisticated cybercriminals. Its platform also helps users detect and remove fake accounts, malicious bots, and visibility restrictions like shadowbans. These features are particularly valuable for businesses, influencers, and brands relying on social platforms for growth.

Spikerz plans to use the investment to enhance its AI-driven platform, expand its global reach, and bolster its team. CEO Naveh Ben Dror emphasised the importance of staying ahead of malicious actors who are now leveraging advanced technologies like generative AI. He described the funding as a strong vote of confidence in the company’s mission to secure social media accounts worldwide.

The firm’s efforts come at a critical time when social media platforms play a central role in the success of businesses and creators. With the latest backing, Spikerz aims to provide cutting-edge tools to safeguard these digital livelihoods.

Meta, X, Google join EU code to combat hate speech

Major tech companies, including Meta’s Facebook, Elon Musk’s X, YouTube, and TikTok, have committed to tackling online hate speech through a revised code of conduct now linked to the European Union’s Digital Services Act (DSA). Announced Monday by the European Commission, the updated agreement also includes platforms like LinkedIn, Instagram, Snapchat, and Twitch, expanding the coalition originally formed in 2016. The move reinforces the EU’s stance against illegal hate speech, both online and offline, according to EU tech commissioner Henna Virkkunen.

Under the revised code, platforms must allow not-for-profit organisations or public entities to monitor how they handle hate speech reports and ensure at least 66% of flagged cases are reviewed within 24 hours. Companies have also pledged to use automated tools to detect and reduce hateful content while disclosing how recommendation algorithms influence the spread of such material.
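For illustration only, the 24-hour target reduces to simple arithmetic over review logs; the data and layout below are assumptions, not any platform's real reporting format:

```python
# Compute the share of flagged cases reviewed within 24 hours and check
# it against the code's 66% target. All timestamps are invented.
from datetime import datetime, timedelta

notifications = [  # (flagged_at, reviewed_at) pairs, hypothetical data
    (datetime(2025, 1, 20, 9, 0), datetime(2025, 1, 20, 18, 0)),
    (datetime(2025, 1, 20, 10, 0), datetime(2025, 1, 22, 8, 0)),
    (datetime(2025, 1, 21, 7, 0), datetime(2025, 1, 21, 20, 0)),
]

within_24h = sum(
    reviewed - flagged <= timedelta(hours=24)
    for flagged, reviewed in notifications
)
rate = within_24h / len(notifications)
print(f"{rate:.0%} reviewed within 24h; meets 66% target: {rate >= 0.66}")
```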

Additionally, participating platforms will provide detailed, country-specific data on hate speech incidents categorised by factors like race, religion, gender identity, and sexual orientation. Compliance with these measures will play a critical role in regulators’ enforcement of the DSA, a cornerstone of the EU’s strategy to combat illegal and harmful content online.

Algorithm probe puts Elon Musk and X under European Commission scrutiny

The European Commission has intensified its investigation into X, formerly known as Twitter, focusing on the platform’s algorithm changes and content moderation practices. Officials are reviewing the recommendation system and its compliance with the Digital Services Act (DSA). Requests have been made for internal documentation, commercial API access, and records of algorithm changes through 2025.

Concerns have emerged regarding the visibility of specific accounts and how the platform moderates content. Recent claims suggest X’s owner, Elon Musk, has influenced algorithms to promote certain narratives. Although the Commission denies political motives, these developments coincide with controversies surrounding Musk’s political endorsements in Germany.

X’s history with EU regulators includes criticism over transparency and non-compliance, such as restricted data access for researchers and misleading advertising practices. Failure to meet DSA standards could result in penalties, including fines of up to 6% of global revenue or 1% for repeated violations.

The inquiry aims to ensure compliance with EU regulations and address concerns about misinformation and platform accountability. Enhanced oversight may reshape the governance of digital platforms like X.

Apple halts AI news summaries after NUJ criticism

Apple has suspended its AI-generated news summary feature after criticism from the National Union of Journalists (NUJ). Concerns were raised over the tool’s inaccurate reporting and its potential role in spreading misinformation.

The NUJ welcomed the decision, emphasising the risks posed by automated reporting. Recent errors in AI-generated summaries highlighted how such tools can undermine public trust in journalism. NUJ assistant general secretary Séamus Dooley called for a more human-centred approach to reporting.

Apple’s decision follows growing scrutiny of AI’s role in journalism. Critics argue that while automation can streamline news delivery, it must not compromise accuracy or credibility.

The NUJ has urged Apple to prioritise transparency and accountability as it further develops its AI capabilities. Safeguarding trust in journalism remains a key concern in the evolving media landscape.

Generative AI accelerates US defence strategies

The Pentagon is leveraging generative AI to accelerate critical defence operations, particularly the ‘kill chain’, a process of identifying, tracking, and neutralising threats. According to Dr Radha Plumb, the Pentagon’s Chief Digital and AI Officer, AI’s current role is limited to aiding planning and strategising phases, ensuring commanders can respond swiftly while maintaining human oversight over life-and-death decisions.

Major AI firms like OpenAI and Anthropic have softened their policies to collaborate with defence agencies, but only within strict ethical boundaries. These partnerships aim to balance innovation with responsibility, ensuring AI systems are not used to cause harm directly. Meta, Anthropic, and Cohere are among the tech firms working with defence contractors, providing tools that optimise operational planning without breaching ethical standards.

Dr Plumb emphasised that the Pentagon’s AI systems operate as part of a human-machine collaboration, countering fears of fully autonomous weapons. Despite debates over AI’s role in defence, officials argue that engaging with the technology is vital to ensuring its ethical application. Critics, however, continue to question the transparency and long-term implications of such alliances.

As AI becomes central to defence strategies, the Pentagon’s commitment to integrating ethical safeguards highlights the delicate balance between technological advancement and human control.

X launches vertical video feed to attract US users

Social network X is introducing a dedicated vertical video feed for users, aiming to capitalise on the removal of ByteDance apps like TikTok and Lemon8 from US app stores. The new video tab, added to the app’s bottom bar, gives users quick access to immersive video content.

X users could already scroll through short videos by tapping them in the timeline, but the new tab creates a dedicated space for video. This marks the platform’s latest effort to enhance the video experience, following the launch of a standalone TV app last year to showcase content from creators and organisations.

As TikTok’s future in the US remains uncertain, other social networks are seizing the opportunity. Meta recently announced a video editing app, Edits, to rival ByteDance’s CapCut, while Bluesky introduced a custom feed for vertical videos, further intensifying competition in the short video market.