ByteDance, the owner of TikTok, faces a crucial decision amidst looming legislation threatening to ban the app from US app stores. Sources close to ByteDance revealed that the company may opt to shut down TikTok rather than sell it, should legal avenues be exhausted. Central to this decision is the significance of TikTok’s algorithms, which are considered vital to ByteDance’s operations. Despite TikTok’s contribution being a small fraction of ByteDance’s total revenue and user base, the parent company hesitates to part with its core algorithm.
TikTok’s fate hinges on US legislation, with President Biden signing a bill that could force its sale by 19 January. However, Biden may extend this deadline by three months if ByteDance shows progress. Yet ByteDance remains tight-lipped about its plans, merely reiterating that it has no intention of selling TikTok, while its CEO expresses confidence in overcoming legal challenges and underlines the app’s importance to its 170 million American users.
The intertwined nature of TikTok with ByteDance’s core algorithms poses a significant hurdle to any potential sale. TikTok’s algorithms align closely with ByteDance’s domestic apps, making it challenging to divest without relinquishing crucial intellectual property. Moreover, ByteDance is adamant about safeguarding its ‘secret sauce’ – the TikTok algorithm – from falling into the hands of competitors. This stance reflects a broader concern over data security and technological sovereignty.
Why does it matter?
Tensions surrounding TikTok highlight broader geopolitical and technological concerns, with China indicating resistance to any forced divestment of the app. The situation underscores the intricate web of international relations, trade regulations, and corporate strategies shaping the fate of digital platforms like TikTok. As ByteDance navigates this complex landscape, the future of TikTok hangs in the balance, with profound implications for both the company and its millions of users worldwide.
Kenya’s government has advised against banning TikTok amidst concerns over content shared on the platform, suggesting stricter oversight instead. The recommendation comes in response to a parliamentary panel considering a citizen’s petition to ban the Chinese-owned app. The interior ministry alleges TikTok has been used for spreading propaganda, fraud, and distributing sexual content.
The information and communication ministry proposed a co-regulation model, urging TikTok to screen content for compliance with Kenyan law and to submit quarterly reports on removed material. TikTok, owned by Chinese company ByteDance, has yet to comment on the recommendation. The app has faced criticism globally but has defended its record on user privacy.
Regulatory scrutiny of TikTok is not unique to Kenya. Italy recently fined three TikTok units for inadequate content checks, especially concerning children’s safety. Meanwhile, in the US, the Senate approved legislation threatening a TikTok ban unless ByteDance divests within the next nine to twelve months. Concerns centre around fears that China could exploit the app for data access or surveillance of American users.
Google has announced another postponement of its plan to phase out third-party cookies in its Chrome browser, with the new target set for 2025. This adjustment marks another delay in a series of postponements that began with the initial announcement in January 2020.
Third-party cookies, which are small data files stored on users’ devices, have been a fundamental component of digital advertising, enabling companies to track users across multiple websites and target them with specific advertisements. However, since 2013, these cookies have raised significant privacy concerns, leading major tech companies, such as Apple, Mozilla, and Microsoft, to reconsider their use.
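The cross-site tracking mechanism described above can be sketched in a few lines. This is an illustrative example only (the domain and cookie values are hypothetical), using Python’s standard `http.cookies` module to show how an ad server embedded on many sites sets one identifier that the browser then returns on every subsequent request to that server, whichever publisher embeds the content:

```python
from http.cookies import SimpleCookie

# Hypothetical third-party ad server sets an identifying cookie the first
# time its content is embedded on any page. The Domain, Secure, and
# SameSite=None attributes allow the browser to send it back in
# cross-site contexts, which is what enables tracking across websites.
tracker = SimpleCookie()
tracker["uid"] = "abc123"
tracker["uid"]["domain"] = "ads.example.com"  # the third party's domain
tracker["uid"]["secure"] = True
tracker["uid"]["samesite"] = "None"  # permit cross-site sending

set_cookie_header = tracker.output(header="Set-Cookie:")
print(set_cookie_header)

# On a later visit to a *different* publisher that embeds the same ad
# server, the browser echoes the identifier back in its request headers:
echoed = SimpleCookie("uid=abc123")
print(echoed["uid"].value)  # the stable cross-site identifier
```

Blocking third-party cookies works precisely by refusing to send cookies like this one when the request’s domain differs from the page the user is visiting.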
Google attributes the latest delay to significant feedback from various stakeholders, including industry experts, regulators, and developers, which has highlighted the complexities of removing third-party cookies without disrupting the digital advertising ecosystem. The tech giant has emphasised its commitment to working closely with the entire ecosystem to address these challenges while enhancing consumer privacy protections through its Privacy Sandbox initiative.
The Privacy Sandbox project is Google’s response to the need for a balanced approach that respects user privacy while allowing advertisers to effectively reach their audiences. It is a collection of technologies aimed at creating a more private web browsing experience. Despite its progress, Google acknowledges the necessity of additional time to ensure that all parties can adapt to the changes without significant disruptions.
This decision comes in the context of ongoing scrutiny and regulatory review, particularly from bodies like the UK’s Competition and Markets Authority (CMA), which has emphasized the importance of ensuring that new technologies do not stifle competition.
The extended timeline is intended to allow for public discussion, continued engagement with regulatory authorities, and time for publishers and the advertising industry to migrate their services responsibly. Google’s approach aims to preserve the vitality of the web ecosystem while phasing out technologies that compromise user privacy.
Spain’s High Court has reignited an investigation into the use of NSO Group’s Pegasus software to spy on Prime Minister Pedro Sanchez and other Spanish politicians. The legal move comes after a previous probe was shelved due to a lack of cooperation from Israeli authorities. Investigators plan to collaborate with France, where similar surveillance targeted politicians and public figures.
The investigation aims to uncover the perpetrators behind the spying activities, which triggered a political crisis in Spain in 2022 and resulted in the resignation of the country’s spy chief. However, no individuals or groups have been formally accused yet. The Spanish government has not disclosed whether foreign or domestic entities are suspected of orchestrating the espionage.
Judge Jose Luis Calama decided to reopen the case following revelations from France regarding the use of Pegasus software to surveil journalists, lawyers, and government officials. French President Emmanuel Macron even changed his mobile phone and number due to security concerns arising from the Pegasus spyware case. Calama emphasised the importance of analysing technical data from both countries’ investigations to identify the culprits behind the cyber attacks.
The judge has ordered expert analysis to compare technical elements gathered by Spanish and French authorities, expecting closer collaboration once this analysis is complete. Calama envisions joint efforts between French and Spanish judicial authorities to determine the origin of the Pegasus spy program’s infiltration in both countries. This renewed investigation signals a concerted effort to address concerns surrounding digital surveillance and protect the privacy of politicians and citizens alike.
The Senate has passed a foreign aid package that includes a bill mandating China-based company ByteDance to sell TikTok within a year or face a US ban on the platform. Having cleared both chambers of Congress, the legislation is now headed to President Joe Biden, who has committed to signing it into law. ByteDance will have an initial nine months to finalise a sale, with a possible three-month extension based on progress, though legal challenges could delay enforcement.
The bill’s successful passage through the Senate was achieved through strategic manoeuvring in the House, where it was included in a high-priority foreign aid package. This move compelled the Senate to address the TikTok issue earlier than anticipated, and extending the divestment timeline garnered more support in the chamber, resulting in a vote of 79-18 in favour of the bill.
Lawmakers and intelligence officials have voiced concerns over TikTok’s ownership by a China-based company. They cite potential data security risks due to China’s national security law and fear that the Chinese government’s influence could impact US user experiences.
Senate Commerce Committee Chair Maria Cantwell stressed that the legislation aims to prevent foreign adversaries from conducting espionage and harming vulnerable Americans, not to punish specific companies.
Senate Intelligence Committee Chair Mark Warner highlighted worries about Chinese companies owing allegiance to the Chinese government and potential covert manipulation of social media platforms. He dismissed TikTok’s proposed data governance solution, Project Texas, as inadequate. Despite concerns among TikTok users, Warner assured that the legislation is not about silencing voices but addressing critical national security issues.
President Biden has expressed intent to promptly sign the bill into law to facilitate aid to Ukraine, while TikTok has signalled readiness to challenge the law in court if passed.
Concerns are mounting over potential border chaos between the UK and the EU, as an app designed to streamline passport checks will not be ready in time for the launch of the European Union’s Entry-Exit System (EES). Eurostar CEO Gwendoline Cazenave disclosed the delay, indicating that the railway service intends to install additional kiosks at London’s St Pancras station to manage passport checks effectively. The EES scheme, set to commence on 6 October, requires non-EU passport holders to register fingerprint and facial biometrics; the mobile application was intended to allow pre-registration and so avoid lengthy border queues.
While Eurostar aims to reassure passengers about the app’s impending deployment, other border crossings, including the Channel Tunnel operated by Getlink, are preparing for potential disruptions. New processing areas will be constructed at Folkestone and Calais to accommodate the scheme’s requirements. However, the Port of Dover faces significant challenges due to high traffic volumes and limited space, with concerns raised by Kent County Council leader Roger Gough and Port of Dover CEO Doug Bannister regarding potential supply chain disruptions in the UK.
Why does it matter?
In addition to managing the EES rollout, the Port of Dover is grappling with the arrival of migrants in the UK, prompting discussions about implementing live facial recognition technology at migrant processing facilities in Kent. With record numbers of migrants crossing the English Channel, the situation has become politically charged, exacerbating the strain on Dover port. Despite efforts to enhance processing capabilities and implement new technologies, concerns persist about the ability of border staff to manage surges in migrant arrivals effectively, raising questions about security and operational efficiency.
Critics are voicing strong opposition to the UK’s proposed Data Protection and Digital Information Bill (DPDI), particularly its provisions regarding bank account monitoring for benefit recipients and changes to biometric data oversight. A cross-party group of parliamentarians has raised concerns over a proposal to grant the Department for Work and Pensions (DWP) access to individuals’ bank accounts, arguing that such powers could lead to wrongful benefits suspension and intrusive scrutiny.
The DPDI, currently under scrutiny in the House of Lords, faces criticism from various quarters. Last month, the Information Commissioner and numerous charities and campaign organisations criticised the bill for its lack of clarity on data collection and processing safeguards. The controversial provision to monitor benefit seekers’ bank accounts has drawn particular ire, with concerns raised about the scope and potential consequences of such surveillance.
In addition to scrutinising bank account monitoring, the DPDI also seeks to alter the oversight of biometric identification and surveillance technologies. This move has been criticised by former biometrics commissioners, civil society organisations, and the Equality and Human Rights Commission, who warn of significant gaps in existing surveillance oversight. Furthermore, concerns have been raised about the DPDI’s implications for data-sharing agreements between the UK and the European Union, with the European Parliament’s Civil Liberties, Justice, and Home Affairs Committee cautioning that it could jeopardise data-sharing adequacy agreements.
A draft report from the UK Information Commissioner’s Office (ICO) raises concerns about Google’s Privacy Sandbox, which is aimed at preserving privacy in online ad targeting and analytics. The report highlights gaps that could be exploited to compromise privacy and track individuals online. This technology seeks to replace current tracking methods with more privacy-conscious alternatives, but its credibility hinges on its ability to deliver privacy assurances.
If Google’s Privacy Sandbox fails to address regulatory, community, and competitive challenges, it could collapse, leaving adtech rivals to continue tracking users through existing or alternative methods. The ICO report represents another setback for Google’s attempts to reconcile ad targeting with privacy laws like GDPR. Google’s strategy involves moving ad auction mechanics to users’ local devices through web APIs, such as the Topics API in Chrome, which aims to convey user interests to advertisers without identifying individuals.
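The Topics API’s on-device selection can be illustrated with a simplified sketch. This is a hypothetical toy model based on the publicly documented design (the browser keeps the top few interest topics per weekly ‘epoch’ and occasionally substitutes a random topic as noise), not Google’s actual implementation; the taxonomy and function names here are invented for illustration:

```python
import random
from collections import Counter

# Toy taxonomy; the real Topics API uses a curated taxonomy of interests.
TAXONOMY = ["News", "Sports", "Travel", "Fitness", "Cooking"]

def top_topics_for_epoch(visited_topics, k=5):
    """Keep only the k most frequent topics observed during one epoch,
    so callers never see the full browsing history."""
    return [topic for topic, _ in Counter(visited_topics).most_common(k)]

def topic_for_caller(epoch_top_topics, noise=0.05, rng=random):
    """Return one topic for this epoch; with small probability, return a
    uniformly random topic instead, to add plausible deniability."""
    if rng.random() < noise:
        return rng.choice(TAXONOMY)
    return rng.choice(epoch_top_topics)

# One week of (already-classified) site visits, reduced to coarse topics.
epoch = ["News", "News", "Sports", "Travel", "News", "Sports"]
top = top_topics_for_epoch(epoch)
print(top)  # ['News', 'Sports', 'Travel']
print(topic_for_caller(top))
```

The point of the design is that an advertiser receives only a coarse, noisy interest signal computed locally, rather than a cross-site identifier.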
Critics, including the Electronic Frontier Foundation and rival browser maker Vivaldi, have raised concerns about the Privacy Sandbox’s support for behavioural advertising and its reliance on advertisers’ good behaviour rather than technical guarantees for privacy. Given Google’s market dominance and significant revenue tied to online advertising, scepticism persists about rebuilding ad architecture on its platforms. Both regulators and industry groups like the IAB have expressed concerns about the Privacy Sandbox’s potential competitive disadvantages and limitations, suggesting that Google may need to address these issues before proceeding.
Despite challenges and criticism, Google remains committed to Privacy Sandbox technologies, emphasising their aim to enhance privacy while maintaining targeted advertising. The company continues to engage with regulators and stakeholders to address concerns and ensure a solution that benefits users and the entire advertising ecosystem.
A convicted sex offender in the UK has been banned from using ‘AI-creating tools’ for five years, marking the first known case of its kind. Anthony Dover, 48, received the prohibition as part of a sexual harm prevention order, preventing him from accessing AI generation tools without prior police permission. This includes text-to-image generators and ‘nudifying’ websites used to produce explicit deepfake content.
Dover’s case highlights the increasing concern over the proliferation of AI-generated sexual abuse imagery, prompting government action. The UK recently introduced a new offence making it illegal to create sexually explicit deepfakes of adults without consent, with penalties including prosecution and unlimited fines. The move aims to address the evolving landscape of digital exploitation and safeguard individuals from the misuse of advanced technology.
Charities and law enforcement agencies emphasise the urgent need for collaboration to combat the spread of AI-generated abuse material. Recent prosecutions reveal a growing trend of offenders exploiting AI tools to create highly realistic and harmful content. The Internet Watch Foundation (IWF) and the Lucy Faithfull Foundation (LFF) stress the importance of targeting both offenders and tech companies to prevent the production and dissemination of such material.
Why does it matter?
The decision to restrict an adult sex offender’s access to AI tools sets a precedent for future monitoring and prevention measures. While the specific reasons for Dover’s ban remain unclear, it underscores the broader effort to mitigate the risks posed by digital advancements in sexual exploitation. Law enforcement agencies are increasingly adopting proactive measures to address emerging threats and protect vulnerable individuals from harm in the digital age.
The House of Representatives voted overwhelmingly, 360 to 58, to pass a bill that could result in the unprecedented shutdown of TikTok, a popular social media platform, over concerns related to Chinese influence and data privacy. The bill, authored by Texas Republican representative Michael McCaul, aims to protect Americans, especially children, from what he described as the ‘malign influence of Chinese propaganda’ on TikTok, which he referred to as a ‘spy balloon in Americans’ phones.’
The legislation was passed as part of a broader foreign aid package put forth by House Republican speaker Mike Johnson, which includes support for Ukraine, Israel, and Taiwan. The updated bill extends the divestment period for TikTok’s parent company, ByteDance, from six months to a year, a move supported by Senate Commerce Committee chair Maria Cantwell to allow sufficient time for potential buyers to negotiate a deal.
Following the House’s passage of the bill, TikTok voiced disappointment, emphasising its substantial economic contribution to the US and arguing against what it sees as an infringement on free speech rights. The bill’s broader implications on data privacy and surveillance practices have also drawn criticism from other tech industry figures, including the president of Signal, who warned of potential repercussions extending beyond TikTok to other social media platforms. Despite these concerns, President Joe Biden has indicated his intention to sign the bill into law if it passes the Senate, aligning with his previous statements and ongoing scrutiny of TikTok’s operations.