Gambling companies are under investigation for covertly sharing visitors’ data with Facebook’s parent company, Meta, without proper consent, breaching data protection laws. A hidden tracking tool embedded in numerous UK gambling websites has been sending data, such as the web pages users visit and the buttons they click, to Meta, which then uses this information to profile individuals as gamblers. This data is then used to target users with gambling-related ads, violating the legal requirement for explicit consent before sharing such information.
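The mechanism is straightforward: when a page containing the pixel loads, the visitor's browser fires a request to Meta's servers that carries the page URL and event details. The sketch below, in Python purely for illustration, builds the kind of tracking URL involved; the parameter names (`id`, `ev`, `dl`) follow Meta's public Pixel documentation, while the pixel ID and gambling site shown are hypothetical placeholders.

```python
from urllib.parse import urlencode

def pixel_request_url(pixel_id: str, event: str, page_url: str) -> str:
    """Build the kind of GET request URL a tracking pixel fires on page load.

    Illustrative sketch only; parameter names follow Meta's public Pixel
    docs, but the values used below are hypothetical.
    """
    params = urlencode({
        "id": pixel_id,   # the site operator's pixel ID
        "ev": event,      # event name, e.g. "PageView"
        "dl": page_url,   # the page the visitor is on: this is what
                          # reveals the visit to a gambling site
    })
    return f"https://www.facebook.com/tr?{params}"

# Hypothetical example: merely loading a gambling page emits the site's URL.
url = pixel_request_url("000000000000000", "PageView",
                        "https://example-casino.test/slots")
print(url)
```

Because the request is triggered automatically by the page itself, it can fire before the visitor has interacted with any consent banner, which is the crux of the consent complaint.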
Testing of 150 gambling websites revealed that 52 automatically transmitted user data to Meta, including large brands like Hollywoodbets, Sporting Index, and Bet442. This data sharing occurred without users having the opportunity to consent, resulting in targeted ads for gambling websites shortly after visiting these sites. Experts have raised concerns about the industry’s unlawful practices and called for immediate regulatory action.
The Information Commissioner’s Office (ICO) is reviewing the use of tracking tools like Meta Pixel and has warned that enforcement action could be taken, including significant fines. Some gambling companies have updated their websites to prevent automatic data sharing, while others have removed the tracking tool altogether in response to the findings. However, the Gambling Commission has yet to address the issue of third-party profiling used to recruit new customers.
The misuse of data in this way highlights the risks of unregulated marketing, particularly for vulnerable individuals. Data privacy experts have stressed that these practices not only breach privacy laws but could also exacerbate gambling problems by targeting individuals who may already be at risk.
PlayStation Plus subscribers will receive an automatic five-day extension after a global outage disrupted the PlayStation Network for around 18 hours on Friday and Saturday. Sony confirmed on Sunday that network services had been fully restored and apologised for the inconvenience but did not specify the cause of the disruption.
The outage, which started late on Friday, left users unable to sign in, play online games or access the PlayStation Store. By Saturday evening, Sony announced that services were back online. At its peak, Downdetector.com recorded nearly 8,000 affected users in the US and over 7,300 in the UK.
PlayStation Network plays a vital role in Sony’s gaming division, supporting millions of users worldwide. Previous disruptions have been more severe, including a cyberattack in 2014 that shut down services for several days and a major 2011 data breach affecting 77 million users, leading to a month-long shutdown and regulatory scrutiny.
Britain’s security officials have reportedly ordered Apple to create a so-called ‘back door’ to access all content uploaded to the cloud by its users worldwide. The demand, revealed by The Washington Post, could force Apple to compromise its security promises to customers. Sources suggest the company may opt to stop offering encrypted storage in the UK rather than comply with the order.
Apple did not immediately respond to requests for comment made outside regular business hours. The Home Office has served Apple with a technical capability notice, which would require the company to grant access to the requested data. However, a spokesperson from the Home Office declined to confirm or deny the existence of such a notice.
In January, Britain initiated an investigation into the operating systems of Apple and Google, as well as their app stores and browsers. The ongoing regulatory scrutiny highlights growing tensions between tech giants and governments over privacy and security concerns.
The UK government has launched its Code of Practice for the Cyber Security of AI, a voluntary framework designed to enhance security in AI development. The code sets out 13 principles aimed at reducing risks such as AI-driven cyberattacks, system failures, and data vulnerabilities.
The guidelines apply to developers, system operators, and data custodians (any business, organisation or individual that controls the permissions and integrity of data used by an AI model or system) responsible for creating, deploying, or managing AI systems. Companies that solely sell AI models or components fall under separate regulations. According to the Department for Science, Innovation, and Technology, the code will help ensure AI is developed and deployed securely while fostering innovation and economic growth.
Key recommendations include implementing AI security training, establishing recovery plans, conducting risk assessments, maintaining system inventories, and ensuring transparency about data usage. One principle calls for enabling human responsibility for AI systems, requiring that AI decisions be explainable and that users understand their responsibilities.
The code references existing standards and best practices for secure software development and security by design, and also provides useful definitions.
The release of the code follows the UK’s AI Opportunities Action Plan, which outlines strategies to expand the nation’s AI sector and establish global leadership in the field. It also coincides with a call from the National Cyber Security Centre urging software vendors to eliminate ‘unforgivable vulnerabilities’: security flaws that are easy and cost-effective to fix but are often overlooked in favour of speed and new features.
The code also builds on the NCSC’s Guidelines for Secure AI Development, which were published in November 2023 and endorsed by 19 international partners.
Young people in Guernsey are being offered a free six-week course on AI to help them understand both the opportunities and challenges of the technology. Run by Digital Greenhouse in St Peter Port, the programme is open to students and graduates over the age of 16, regardless of their academic background. Experts from University College London (UCL) deliver the lessons remotely each week.
Jenny de la Mare from Digital Greenhouse said the course was designed to “inform and inspire” participants while helping them stand out in job and university applications. She emphasised that the programme was not limited to STEM students and could serve as a strong introduction to AI for anyone interested in the field.
Recognising that young people in Guernsey may have fewer opportunities to attend major tech events in the UK, organisers hope the course will give them a competitive edge. The programme has already started but is still open for registrations, with interested individuals encouraged to contact Digital Greenhouse.
The United Kingdom is set to become the first country to criminalise the use of AI to create child sexual abuse images. New offences will target AI-generated explicit content, including tools that ‘nudeify’ real-life images of children. The move follows a sharp rise in AI-generated abuse material, with reports increasing nearly five-fold in 2024, according to the Internet Watch Foundation.
The government warns that predators are using AI to disguise their identities and blackmail children into further exploitation. New laws will criminalise the possession, creation, or distribution of AI tools designed for child abuse material, as well as so-called ‘paedophile manuals’ that provide instructions on using such technology. Websites hosting AI-generated child abuse content will also be targeted, and authorities will gain powers to unlock digital devices for inspection.
The measures will be included in the upcoming Crime and Policing Bill. Earlier this month, Britain also announced plans to outlaw AI-generated ‘deepfake’ pornography, making it illegal to create or share sexually explicit deepfakes. Officials say the new laws will help protect children from emerging online threats.
Apple has announced that its AI suite, Apple Intelligence, will support additional languages starting in April, including French, German, Italian, Portuguese, Spanish, Japanese, Korean, and Simplified Chinese. The update will also introduce localised English versions for India and Singapore, broadening access to the technology beyond its initial US English release.
The expansion follows a December update that brought support for various English dialects, including those used in Australia, Canada, New Zealand, South Africa, and the UK. However, Apple has yet to confirm when its AI suite will be available in the EU or mainland China.
CEO Tim Cook also revealed that the next version of Siri, which will feature improved on-screen contextual understanding, is expected to launch in the coming months. The update marks Apple’s latest effort to strengthen its AI ecosystem and compete with rivals in the artificial intelligence space.
San Francisco-based startup Waterlily has raised $7 million in seed funding to expand its AI-driven platform for long-term care planning. Founded by Lily Vittayarukskul, the company helps families and financial advisors predict care costs and create tailored financial strategies. Using machine learning and data from government and insurance sources, Waterlily provides personalised recommendations on funding options, such as life insurance and long-term care policies.
Waterlily’s technology was inspired by Vittayarukskul’s personal experience of caring for her aunt, which exposed the financial and emotional strain of long-term care. The platform’s predictive AI is designed for individuals over 40, offering insights into when and how they may need care. The startup already serves major insurance carriers, including Prudential, and hundreds of independent advisors.
With its latest funding round, Waterlily plans to enhance its AI models, expand its team, and strengthen its partnerships. The company is also exploring international expansion to markets such as the UK and Canada, aiming to bridge the gap in long-term care planning and ensure more families are prepared for the future.
Vodafone has achieved a world first by making a video call via satellite using a standard smartphone, marking a significant breakthrough in mobile technology. The call, made from the remote Welsh mountains where there was no network signal, was received by CEO Margherita Della Valle. Vodafone used AST SpaceMobile’s BlueBird satellites, which provide speeds of up to 120 megabits per second, to enable the video call, which included voice, text, and data transmission.
This satellite technology is part of Vodafone’s broader plan to expand satellite connectivity across Europe by 2026. The company aims to offer users a full mobile experience, including video calls, even in areas where traditional network coverage is unavailable. Vodafone is also an investor in AST SpaceMobile, alongside major companies like AT&T, Verizon, and Google.
The race to deploy satellite services is heating up, with competitors like Apple, T-Mobile, and SpaceX already working on satellite-based connectivity. Apple’s iPhones, starting from the iPhone 14, offer satellite texting for emergency services and location sharing. Other companies are testing similar services, with plans for voice and data connectivity in the future.
British astronaut Tim Peake, who attended the launch of Vodafone’s space-to-land gateway, hailed the ability to connect via satellite as an ‘incredible breakthrough.’ Peake, who spent six months aboard the International Space Station, highlighted the importance of staying connected while in remote environments and expressed interest in future space missions.
Paul McCartney has raised concerns about AI potentially ‘ripping off’ artists, urging the British government to ensure that upcoming copyright reforms protect creative industries. In a recent BBC interview, McCartney warned that without proper protections, only tech giants would benefit from AI’s ability to produce content using works created by artists without compensating the original creators.
The music and film industries are facing legal and ethical challenges around AI, as models can generate content based on existing works without paying for the rights to use the original material. In response, the UK government has proposed a system where artists can license their works for AI training, though it also suggests exceptions allowing AI developers to use material at scale where rights have not been reserved.
McCartney emphasised that while AI has its merits, it should not be used to exploit artists. He highlighted the risk that young creators could lose control over their works, with profits going to tech companies rather than the artists themselves. ‘It should be the person who created it’ who benefits, he said, urging that artists’ rights be prioritised in the evolving landscape of AI.