Coinbase users in the UK and US can now fund their accounts instantly using eligible Visa debit cards, following a recent partnership with Visa. This integration, announced on 29 October, allows customers to deposit funds in real time through the Visa Direct network, providing flexibility for those looking to respond quickly to crypto market changes.
The new feature is set to simplify access to trading funds by reducing traditional wait times associated with crypto funding. With Visa Direct, Coinbase users can now top up their accounts or make crypto purchases almost instantly, while also benefiting from instant cash-outs to bank accounts, minimising delays on major transactions.
The partnership further underscores Visa’s growing involvement in the crypto sector. Earlier in October, Visa also launched its Tokenized Asset Platform, enabling banks to manage fiat-backed tokens, including stablecoins. BBVA, a major Spanish bank, is set to trial this platform on the Ethereum blockchain in 2025, marking a significant step in Visa’s broader blockchain strategy.
In a landmark case for AI and criminal justice, a UK man has been sentenced to 18 years in prison for using AI to create child sexual abuse material (CSAM). Hugh Nelson, 27, from Bolton, used an app called Daz 3D to turn regular photos of children into exploitative 3D imagery, according to reports. In several cases, he created these images based on photographs provided by individuals who personally knew the children involved.
Nelson sold the AI-generated images on various online forums, reportedly making around £5,000 (roughly $6,500) over an 18-month period. His activities were uncovered when he attempted to sell one of his digital creations to an undercover officer, charging £80 (about $103) per image.
Following his arrest, Nelson faced multiple charges, including encouraging the rape of a child, attempting to incite a minor to engage in sexual acts, and distributing illegal images. The case is significant in that it highlights the dark side of AI misuse and underscores the growing need for regulation around technology-enabled abuse.
The UK government is reintroducing its ‘Data (Use and Access) Bill’ to reform data regulations, projecting a £10 billion economic boost through streamlined data access and use. Aimed at enhancing efficiency in public sectors like healthcare and law enforcement, the bill also proposes expansions for digital identity verification, open-data projects, and digital registries. Technology Secretary Peter Kyle emphasised the bill’s potential to free up public sector resources and reduce red tape, allowing staff to focus on essential services.
The new bill also incorporates measures to improve data access for researchers, particularly on online risks, echoing aspects of the EU’s Digital Services Act. However, digital rights advocates such as Open Rights Group have raised concerns, noting that the bill weakens protections against automated decision-making by excluding most ordinary personal data from its scope. This could allow organisations to make impactful automated decisions in areas such as employment and immigration without meaningful human oversight.
As the Bill reintroduces data reforms while retracting controversial proposals from the previous government, it also addresses updates to marketing rules and fines for privacy violations. These include cookie consent changes and stricter guidelines for unsolicited marketing. By adjusting these regulations, the UK government aims to keep pace with evolving digital standards while ensuring economic growth and improved public service delivery.
Britain’s Competition and Markets Authority (CMA) is investigating the partnership between Alphabet, Google’s parent company, and AI startup Anthropic due to concerns about competition. Regulators have grown increasingly cautious about agreements between major tech firms and smaller startups, especially after Microsoft-backed OpenAI sparked an AI boom with ChatGPT’s launch.
Anthropic, founded by former OpenAI executives Dario and Daniela Amodei, received a $500 million investment from Alphabet last year, with another $1.5 billion promised. The AI startup also relies on Google Cloud services to support its operations, raising concerns over the competitive impact of their collaboration.
The CMA began assessing the partnership in July and has set 19 December as the deadline for its Phase 1 decision. The regulator will determine whether the investigation should proceed to the next stage. Anthropic has pledged full cooperation, insisting that its strategic alliances do not compromise its independence or partnerships with other firms.
Alphabet has emphasised its commitment to fostering an open AI ecosystem. A spokesperson clarified that Anthropic is not restricted to using only Google Cloud services and is free to explore partnerships with multiple providers.
AI could help reduce the number of missed broken bones during X-ray analysis, according to the National Institute for Health and Care Excellence (NICE). The organisation recommends using four AI tools in urgent care settings in England to assist doctors in detecting fractures. This comes as radiologists and radiographers face high vacancy rates, putting a strain on the system.
NICE estimates that missed fractures account for up to 10% of diagnostic errors in emergency departments in the UK. AI is seen as a solution to this problem, working alongside healthcare professionals to catch mistakes that may occur due to heavy workloads. Experts believe using AI can speed up diagnoses, decrease the need for follow-up appointments, and ultimately ease pressure on hospital staff.
AI will not replace human expertise, as radiologists will still review all X-ray images. However, NICE assures that the technology could offer a more accurate and efficient process without increasing the risk of incorrect diagnoses or unnecessary referrals. The consultation period on this proposed use of AI in fracture detection will conclude on 5 November 2024.
Starting in December, Britain’s media regulator Ofcom will outline new safety demands for social media platforms, compelling them to take action against illegal content. Under the new guidelines, tech companies will have three months to assess the risks of harmful content or face consequences, including hefty fines or even having their services blocked. These demands stem from the Online Safety Act, passed last year, which aims to protect users, particularly children, from harmful content.
Ofcom Chief Executive Melanie Dawes emphasised that the time for discussion is over and that 2025 will be pivotal for making the internet a safer space. Platforms such as Meta, the parent company of Facebook and Instagram, have already introduced changes to limit risks like children being contacted by strangers. However, the regulator has made it clear that any companies failing to meet the new standards will face strict penalties.
On Monday, Britain announced a major investment of £6.3 billion ($8.2 billion) by US companies ServiceNow, CyrusOne, CloudHQ, and CoreWeave in UK data centre technology. This announcement aligns with the UK government’s broader economic plans, as Prime Minister Keir Starmer hosts the International Investment Summit in London, gathering hundreds of global business leaders.
At the summit, the government is set to unveil an additional £50 billion ($65 billion) in new investments aimed at stimulating growth in sectors like AI, life sciences, and infrastructure. Starmer, emphasising the importance of private sector involvement, aims to create a stable environment that fosters economic expansion, aligning with his Labour Party’s commitment to boosting the economy.
The event will also feature discussions between ministers and business leaders on capitalising on opportunities in emerging industries, including health tech, clean energy, and creative sectors.
The UK government is prioritising the adoption of innovative technologies through its draft industrial strategy, ‘Invest 2035.’ The comprehensive plan aims to accelerate the integration and scaling of new technologies across eight key growth sectors, with cybersecurity running throughout to ensure that emerging technologies are secure by design.
To support this technological advancement, the strategy focuses on strengthening cyber resilience, including across supply chains, to mitigate vulnerabilities that could impede long-term growth. Robust cyber defences are seen as essential for safeguarding the growth-driving sectors against digital threats, thereby reinforcing the overall security of the economy.
Additionally, a crucial element of the strategy is investment in skills and workforce development, with the government acknowledging the need to prepare workers for future challenges through substantial investment in training. Promoting cybersecurity education is seen as vital, empowering individuals and organisations to better protect themselves and to leverage technological advancements.
Furthermore, the draft strategy emphasises public consultation and stakeholder engagement, inviting input from businesses, experts, unions, and other stakeholders to refine the plan before its final publication in spring 2025. The government also highlights the importance of collaboration between itself and the cyber industry, as these partnerships are essential for addressing existing challenges, such as the skills gap and outdated cyber laws. Ultimately, this strategy aims to support the growth of a secure and resilient economy, fostering an environment where organisations can thrive safely in an increasingly digital world.
British police forces are scaling back their presence on X, formerly known as Twitter, due to concerns over the platform’s role in spreading extremist content and misinformation. This decision comes after riots broke out in the UK this summer, fuelled by false online claims, with critics blaming Elon Musk’s approach to moderation for allowing hate speech and disinformation to flourish. Several forces, including North Wales Police, have stopped using the platform altogether, citing misalignment with their values.
Of the 33 police forces surveyed, 10 are actively reviewing their use of X, while others are assessing whether the platform is still suitable for reaching their communities. Emergency services have relied on X for more than a decade to share critical updates, but some, like Gwent Police, are reconsidering due to the platform’s tone and reach.
This shift is part of a larger trend in Britain, where some organisations, including charities and health services, have also moved away from X. As new online safety laws requiring tech companies to remove illegal content come into effect, digital platforms, including X, are facing growing scrutiny over their role in spreading harmful material.
The future of the .io domain may be uncertain following a new treaty in which the UK agreed to relinquish control of the Chagos Islands, the British Indian Ocean Territory, to Mauritius. The .io domain, widely used by tech startups and cryptocurrency platforms, originates from this territory, and the transfer of sovereignty calls into question whether the domain will remain in use.
The .io domain was assigned to the Chagos Islands in 1997, though the British government collected some of the revenue from its sales, much to the surprise of the Chagossian people, who were forcibly displaced in the 1960s to make way for a US military base. Now that the UK has agreed to give up the islands, it’s unclear if the domain will continue or be retired, as the Internet Assigned Numbers Authority (IANA) typically phases out country code domains after political changes.
While no official decision has been made regarding the .io domain, its potential retirement follows precedents set with domains like .yu, which was phased out after Yugoslavia’s breakup. The .io domain’s future remains in limbo as Mauritius takes control of the Chagos Islands.