How are we being tracked online?

What impact does tracking have?

In the digital world, tracking occurs through signals sent from a user's computer to a server, and onward from that server to an organisation. Almost immediately, a profile of the user can be created. That information can be leveraged to send personalised advertisements for products and services consumers are interested in, but it can also be used to classify people into categories and steer them in a certain direction, for example politically (the 2024 Romanian election, or the Cambridge Analytica scandal's influence on the 2016 Brexit referendum and the 2016 US elections).

Digital tracking can be carried out with minimal costs, rapid execution and the capacity to reach hundreds of thousands of users simultaneously. These methods require either technical skills (such as coding) or access to platforms that automate tracking. 

Image taken from the Internet Archive

This phenomenon has been well documented and likened to George Orwell’s 1984, in which the people of Oceania are subject to constant surveillance by ‘Big Brother’ and institutions of control: the Ministries of Truth (propaganda), Peace (military control), Love (torture and forced loyalty) and Plenty (manufactured prosperity). 

A related concept is the Panopticon, a prison design by the English philosopher Jeremy Bentham that the French philosopher Michel Foucault later built into a social theory of surveillance. The architecture enables constant observation from a central point: prisoners never know whether they are being watched and thus self-regulate their behaviour. In today’s tech-driven society, our digital behaviour is similarly regulated through the persistent possibility of surveillance. 

How are we tracked? The case of cookies and device fingerprinting

  • Cookies

Cookies are small, unique text files placed on a user’s device by their web browser at the request of a website. When a user visits a website, the server can instruct the browser to create or update a cookie. These cookies are then sent back to the server with each subsequent request to the same website, allowing the server to recognise and remember certain information (login status, preferences, or tracking data).
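That round trip can be sketched in a few lines of Python using the standard library; the cookie name and value here are illustrative, not those of any real site:

```python
from http.cookies import SimpleCookie

# Hypothetical header a server sends with its first response.
set_cookie_header = "session_id=abc123; Path=/; Max-Age=2592000"

# The browser parses and stores the cookie...
jar = SimpleCookie()
jar.load(set_cookie_header)

# ...and replays the name/value pair on every subsequent request
# to the same site, letting the server recognise the visitor.
request_header = jar.output(attrs=[], header="Cookie:")
print(request_header)  # Cookie: session_id=abc123
```

The attributes (`Path`, `Max-Age`) stay in the browser; only the name/value pair travels back, which is all a server needs to link visits together.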

If a user visits multiple websites about a specific topic, that pattern can be collected and sold to advertisers targeting that interest. This applies to all forms of advertising, not just commercial but also political and ideological influence.

  • Device fingerprinting 

Device fingerprinting involves generating a unique identifier using a device’s hardware and software characteristics. Types include browser fingerprinting, mobile fingerprinting, desktop fingerprinting, and cross-device tracking. To assess how unique a browser is, users can test their setup via the Cover Your Tracks tool by the Electronic Frontier Foundation.

Various pieces of information are collected, such as your operating system, language, keyboard settings, screen resolution, installed fonts, device make and model, and more. The more data points collected, the more unique an individual’s device becomes.
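The core idea can be sketched in a short Python example: the attributes are hashed into a single stable identifier. The attribute names and values below are illustrative assumptions, not a real fingerprinting vector:

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Hash a set of device attributes into one stable identifier."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "os": "Windows 11",
    "language": "en-GB",
    "screen": "2560x1440",
    "timezone": "Europe/Amsterdam",
    "fonts": ["Arial", "Calibri", "Segoe UI"],
}

fp = device_fingerprint(device)
# Changing any single attribute yields a completely different ID,
# which is why more data points make a device more distinguishable.
```

No file is stored on the device; the identifier is recomputed from the same observable attributes on every visit, which is what makes fingerprinting harder to block than cookies.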

Image taken from Lan Sweeper

A common reason to use device fingerprinting is for advertising. Since each individual has a unique identifier, advertisers can distinguish individuals from one another and see which websites they visit based on past collected data. 

Similar to cookies, device fingerprinting is not purely about advertising; it also has legitimate security purposes. Because it creates a unique ID for a device, fingerprinting allows websites to recognise a user’s device, which is useful for combating fraud. For instance, if a known account suddenly logs in from an unknown fingerprint, fraud detection mechanisms may flag and block the login attempt.
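A minimal sketch of that fraud check, assuming the site stores the fingerprints seen on past logins (the stored value here is a made-up example):

```python
# Fingerprints previously associated with the account.
known_fingerprints = {"a1b2c3d4e5f60718"}

def login_is_suspicious(current_fp: str) -> bool:
    """Flag logins arriving from a device the account has never used."""
    return current_fp not in known_fingerprints

assert not login_is_suspicious("a1b2c3d4e5f60718")  # known device
assert login_is_suspicious("ffffffffffffffff")      # unseen device
```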

Legal considerations

Apart from societal impacts, there are legal considerations, specifically concerning fundamental rights. In Europe, Articles 7 and 8 of the EU Charter of Fundamental Rights and Article 8 of the European Convention on Human Rights give rise to the protection of personal data in the first place. They form the legal bedrock of digital privacy legislation, such as the GDPR and the ePrivacy Directive. Stemming from the GDPR, there is protection against unlawful, unfair and opaque processing of personal data.

Articles 7 and 8 of the Charter of Fundamental Rights

For tracking to be carried out lawfully, one of the six legal bases in Article 6 GDPR must be relied upon. In practice, tracking is usually only lawful on the basis of consent (Article 6(1)(a) GDPR, read together with Article 5(3) of the ePrivacy Directive).

Other legal bases, such as a business’s legitimate interest, may allow limited analytical cookies to be placed; the tracking cookies discussed in this analysis do not fall into that category. 

Regardless of this, website owners must ensure that consent is collected prior to processing and is freely given, specific, informed and unambiguous. In most cases of website tracking, consent is not collected before processing begins.

In practice, this means that cookies are placed on the user’s device before the visitor has even responded to the consent request. There are additional concerns about consent not being informed, as users do not know what processing their personal data to enable tracking actually entails. 

Moreover, consent is often not specific, as processing is justified with broad, generic purposes such as ‘improving visitor experience’ or ‘understanding the website better’.

Further, tracking is typically unfair, as users do not expect to be tracked across sites or to have digital profiles built about them from their website visits. Tracking is also opaque: website owners state that tracking occurs but rarely explain how it works, for how long it lasts, what personal data is used, or how it benefits them. 

Can we refuse tracking?

In theory, it is possible to prevent tracking from the outset by refusing to give consent when it is requested. In practice, however, refusing consent can still lead to tracking. Outlined below are two concrete examples of this happening daily.

  • Cookies

Regarding cookies, simply put, a refusal is often not honoured but ignored. Studies have found that when a user visits a website and refuses consent, cookies and similar tracking technologies are placed on their device as if they had accepted.

This increases user frustration, as they are given a choice that is effectively non-existent. It occurs because non-essential cookies, which can be refused, are lumped together with essential cookies, which cannot. Therefore, when a user refuses non-essential cookies, not all of them are actually refused, as some are mislabelled as essential.

Another reason for this occurrence is that cookies are placed before consent is sought. Website owners often outsource cookie banner compliance to more experienced companies, using consent management platforms (CMPs) such as Cookiebot by Usercentrics or OneTrust.

In these CMPs, the option to load cookies only after consent is given must be manually selected. Website owners therefore need to understand consent requirements well enough to know that cookies must not be placed before consent is sought. 
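A minimal audit of this failure mode can be sketched as follows; the essential-cookie allow-list and the header values are assumptions for illustration, not a real site's configuration:

```python
from http.cookies import SimpleCookie

# Illustrative allow-list of cookies a site may set without consent.
ESSENTIAL = {"session_id", "csrf_token"}

def cookies_placed_before_consent(set_cookie_headers: list[str]) -> list[str]:
    """Return non-essential cookies set on the very first response,
    i.e. before the visitor has answered the consent banner."""
    offenders = []
    for header in set_cookie_headers:
        jar = SimpleCookie()
        jar.load(header)
        offenders.extend(name for name in jar if name not in ESSENTIAL)
    return offenders

first_response_headers = [
    "session_id=abc; Path=/",              # essential, acceptable
    "_tracking_id=xyz; Max-Age=31536000",  # placed pre-consent
]
print(cookies_placed_before_consent(first_response_headers))  # ['_tracking_id']
```

Running such a check against the first response of a page load, before any banner interaction, is exactly the kind of verification the studies above describe.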

Image taken from Buddy Company

  • Google Consent Mode

Another example is related to Google Consent Mode (GCM). GCM is relevant here because Google is the most common third-party tracker on the web, and thus the tracker users are most likely to encounter, with a vast array of trackers spanning statistics, analytics, preferences, marketing and more. GCM essentially creates a path for website analytics to occur despite consent being refused: it claims to send cookieless ping signals from user devices that reveal how many users have viewed a website, clicked on a page, searched for a term, and so on.

This is a novel solution Google is presenting, and it claims to be privacy-friendly, as no cookies are required for this to occur. However, a study on tags, specifically GCM tags, found that GCM is not privacy-friendly and infringes the GDPR. The study found that Google still collects personal data in these ‘cookieless ping signals’ such as user language, screen resolution, computer architecture, user agent string, operating system and its version, complete web page URL and search keywords. Since this data is collected and processed despite the user refusing consent, there are undoubtedly legal issues.
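The kind of data such a ping can carry is easy to see by decoding its query string. The URL and parameter names below are hypothetical, chosen to illustrate the categories the study lists rather than Google's actual parameter set:

```python
from urllib.parse import urlparse, parse_qs

ping = ("https://analytics.example/g/collect"
        "?ul=en-gb"        # user language
        "&sr=2560x1440"    # screen resolution
        "&ua=Mozilla%2F5.0"  # user agent string
        "&dl=https%3A%2F%2Fshop.example%2Fsearch%3Fq%3Dshoes")  # full page URL

params = parse_qs(urlparse(ping).query)
# Even with no cookie involved, the ping reveals language, screen size,
# browser details and the full URL, including the visitor's search terms.
print(params["dl"][0])  # https://shop.example/search?q=shoes
```

The point is that 'cookieless' does not mean 'data-free': the request itself carries enough attributes to describe the user and the page they were on.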

The first reason comes from the general principle of lawfulness: Google has no lawful basis to process this personal data, as the user refused consent and no other legal basis is relied upon. The second stems from the general principle of fairness: users do not expect that, after refusing trackers and choosing the more privacy-friendly option, their data is still processed as if their choice did not matter.

Therefore, from Google’s perspective, GCM is privacy-friendly because no cookies are placed and thus no consent needs to be sought. However, a recent study revealed that personal data is still being processed without any permission or legal basis. 

What next?

  • On an individual level: 

Many solutions have been developed for individuals to reduce the tracking they are subject to: browser extensions, more privacy-friendly devices, and ad blockers. One notable company tackling this issue is DuckDuckGo, whose browser rejects trackers by default, offers email protection, and overall reduces tracking. DuckDuckGo is not alone; tools such as uBlock Origin and Ghostery offer similar protections.

Specifically regarding fingerprinting, researchers have developed countermeasures. In 2023, researchers proposed ShieldF, a Chromium add-on that reduces fingerprinting for mobile apps and browsers. Other measures include using an IP address shared by many people, which is not ideal for home Wi-Fi. Combining a browser extension with a VPN is likewise not suitable for everyone, as it demands substantial effort and sometimes financial cost.

  • On a systemic level: 

CMPs and GCM are active stakeholders in the tracking ecosystem, and their actions are subject to enforcement bodies, predominantly data protection authorities (DPAs). One prominent DPA working on cookie enforcement is the Dutch Autoriteit Persoonsgegevens (AP). In early 2025, the AP publicly stated that its focus for the year would be cookie compliance, announcing investigations into 10,000 websites in the Netherlands. This has led to investigations into companies with unlawful cookie banners, concluding with warnings and sanctions.

However, these investigations require extensive time and effort. DPAs have already stated that they are overworked and lack the personnel and financial resources to cope with their growing responsibilities. Coupled with the fact that sanctioned companies set aside financial reserves for fines, and that some non-EU businesses simply do not comply with DPA sanction decisions (as in the case of Clearview AI), different ways to tackle non-compliance should be investigated.

For example, the GDPR simplification package, while simplifying some measures, could also introduce liability measures to ensure that enforcement is as vigorous as the legislation itself. The EU has not shied away from holding management boards liable for non-compliance: in cybersecurity legislation, Article 20(1) of NIS II states that ‘management bodies of essential and important entities approve the cybersecurity risk-management measures (…) can be held liable for infringements (…)’, allowing board-member liability for the specific risk-management measures in Article 21. If similar measures cannot be introduced now, future moments of amendment could be used for this.

Conclusion

Cookies and device fingerprinting are two common ways in which tracking occurs. The potential larger societal and legal consequences of tracking demand that existing robust legislation is enforced, to ensure that past political mistakes are not repeated.

Ultimately, there is no way to completely prevent fingerprinting and cookie-based tracking without significantly compromising the user’s browsing experience. For this reason, the burden of responsibility must shift toward CMPs. This shift should begin with implementing privacy-by-design and privacy-by-default principles in the development of their tools (for example, preventing cookie placement before consent is sought).

Accountability should come through tangible consequences, such as liability for board members in cases of negligence. By attributing responsibility to the companies that develop cookie banners and facilitate trackers, the source of the problem can be addressed and held accountable for its human rights violations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Hanwha and Samsung lead Korea’s cyber insurance push

South Korea is stepping up efforts to strengthen its cyber insurance sector as corporate cyberattacks surge across industries. A string of major breaches has revealed widespread vulnerability and renewed demand for more comprehensive digital risk protection.

Hanwha General Insurance launched Korea’s first Cyber Risk Management Centre last November and partnered with global cybersecurity firm Theori and law firm Shin & Kim to expand its offerings.

Despite the growing need, the market remains underdeveloped. Cyber insurance makes up only 1 percent of Korea’s accident insurance sector, with a 2024 report estimating local cyber premiums at $50 million, just 0.3 percent of the global total.

Regulators and industry voices call for higher mandatory coverage, clearer underwriting standards, and financial incentives to promote adoption.

As Korean demand rises, comprehensive policies offering tailored options and emergency coverage are gaining traction, with Hanwha reporting a 200 percent revenue jump in under a year.

EU Commission accuses Temu of failing DSA checks

The European Commission has accused Temu of breaching the Digital Services Act by failing to assess and address the sale of illegal or dangerous products.

The accusation follows months of investigation and a review of a required risk report submitted by Temu, which the Commission found too vague.

A mystery shopping exercise by the EU uncovered unsafe toys and electronics on the platform, raising concerns over consumer safety.

Additional parts of the probe are ongoing, including scrutiny of Temu’s use of addictive designs, algorithmic transparency and product recommendations.

Temu now has a few weeks to respond to the preliminary findings, though no final deadline has been given. Under the DSA, confirmed violations could result in fines of up to 6% of a company’s global turnover.

AI bands rise as real musicians struggle to compete

AI is quickly transforming the music industry, with AI-generated bands now drawing millions of plays on platforms like Spotify.

While these acts may sound like traditional musicians, they are entirely digital creations. Streaming services rarely label AI music clearly, and the producers behind these tracks often remain anonymous and unreachable. Human artists, meanwhile, are quietly watching their workload dry up.

Music professionals are beginning to express concern. Composer Leo Sidran believes AI is already taking work away from creators like him, noting that many former clients now rely on AI-generated solutions instead of original compositions.

Unlike previous tech innovations, which empowered musicians, AI risks erasing job opportunities entirely, according to Berklee College of Music professor George Howard, who warns it could become a zero-sum game.

AI music is especially popular for passive listening—background tracks for everyday life. In contrast, real musicians still hold value among fans who engage more actively with music.

However, AI is cheap, fast, and royalty-free, making it attractive to publishers and advertisers. From film soundtracks to playlists filled with faceless artists, synthetic sound is rapidly replacing human creativity in many commercial spaces.

Experts urge musicians to double down on what makes them unique instead of mimicking trends that AI can easily replicate. Live performance remains one of the few areas where AI has yet to gain traction. Until synthetic bands take the stage, artists may still find refuge in concerts and personal connection with fans.

EU clears Microsoft deal after privacy changes

The European Data Protection Supervisor (EDPS) has ended its enforcement action against the European Commission over its use of Microsoft, following improvements to data protection practices. The decision came after the Commission revised its contract with Microsoft to improve privacy standards.

Under the updated terms, Microsoft must clarify the reasons for data transfers outside the European Economic Area and name the recipients. Transfers are only allowed to countries with EU-recognised protections or in public interest cases.

Microsoft must also inform the Commission if a foreign government requests access to EU data, unless the request comes from within the EU or a country with equivalent safeguards. The EDPS urged other EU institutions to adopt similar contractual protections if using Microsoft 365.

Despite the EDPS’ clearance, the Commission remains concerned about relying too heavily on a non-EU tech provider for essential digital services. It continues to support the current EU-US data adequacy deal, though recent political changes in the US have cast doubt on its long-term stability.

UBTech’s Walker S2 marks a leap towards uninterrupted robotic work

The paradigm of robotic autonomy is undergoing a profound transformation with the advent of UBTech’s new humanoid, the Walker S2. Traditionally, robots have been tethered to human assistance for power, requiring manual plugging in or lengthy recharges.

UBTech, a pioneering robotics company, is now dismantling these limitations with a groundbreaking feature in the Walker S2: the ability to swap its battery autonomously. The innovation promises to reshape the landscape of factory work and potentially many other industries, enabling near-continuous, 24/7 operation without human intervention.

The core of this advancement lies in the Walker S2’s sophisticated self-charging mechanism. When a battery begins to deplete, the robot does not power down. Instead, it intelligently navigates to a strategically placed battery swap station.

Once positioned, the robot executes a precise sequence of movements: it twists its torso, deploys built-in tools on its arms to unfasten and remove the drained battery from its back cavity, places it into an empty bay on the swap station, and then expertly retrieves a fresh, fully charged module.

The new battery is then securely plugged into one of its dual battery bays. The process is remarkably swift, taking approximately three minutes, allowing the robot to return to its tasks almost immediately.

The hot-swappable system mirrors the convenience of advanced electric vehicle technology, but its application to humanoid robotics unlocks unprecedented operational efficiency. Standing at 5 feet, 3 inches (approximately 160 cm) tall and weighing 95 pounds (about 43 kg), the Walker S2 is designed to integrate seamlessly into environments built for humans.

It has two 48-volt lithium batteries, ensuring a continuous power supply during the brief swapping procedure. While one battery powers the robot’s ongoing operations, the other can be exchanged.

Each battery provides approximately two hours of operation while walking or up to four hours when the robot stands still and performs tasks. The battery swap stations are not merely power hubs; they also meticulously monitor the health of each battery.

Should a battery show signs of degradation, a technician can be alerted to replace it promptly, further optimising the robot’s longevity and performance.

UBTech claims the Walker S2 is not a mere laboratory prototype but a robust solution engineered for real-world industrial deployment. Extensive testing has been conducted in the highly demanding environments of car factories operated by major Chinese electric vehicle manufacturers, including BYD, Nio, and Zeekr.

The trials validate the robot’s ability to operate effectively in dynamic production lines. The Walker S2 incorporates advanced vision systems, allowing it to detect battery levels and identify fully charged units, indicated by a green light on the stacked battery packs.

The robot autonomously reads the visual cues, ensuring precise selection and connection via a simple USB-style connector. Furthermore, the robot features a display face, enabling it to communicate its operational status to human workers, fostering a collaborative and transparent work environment. For safety, a prominent emergency stop button is also integrated.

China’s strategic investment in robotics is a driving force behind such innovations. Shenzhen, UBTech’s home base, is a thriving hub for robotics, boasting over 1,600 companies in the sector.

The nation’s broader push towards automation, part of its ‘Made in China 2025’ strategy, is a clear statement of global competitiveness, with China betting on AI and robotics to spearhead the next manufacturing era.

The coordinated industrial policy has led to China becoming the world’s largest market for industrial robots and a significant innovator in the field. The implications of robots like the Walker S2, built for non-stop operation, extend far beyond traditional factory floors.

Their ability to manage physical tasks continuously could redefine work in various sectors. Industries such as logistics, with vast warehouses requiring constant material handling, or airports, where baggage and cargo movement is ceaseless, could benefit immensely.

Hospitals could also see these humanoids assisting with logistical duties, allowing human staff to concentrate on direct patient care. For businesses, the promise of 24/7 automation translates directly into increased output without additional human resources, ensuring operations move seamlessly day and night.

The Walker S2 exemplifies how advanced automation rapidly moves beyond research labs into practical, demanding workplaces. With its autonomous battery-swapping capability, humanoid robots are poised to work extended hours that far exceed human capacity.

The robots do not require coffee breaks or need sleep; they are designed for relentless productivity, marking a significant step towards a future where machines play an even more integral role in daily industrial and societal functions.

AI fuels new wave of global security breaches

Global corporations are under growing threat from increasingly sophisticated cyber attacks as AI tools boost the capabilities of malicious actors.

Allianz Life recently confirmed a breach affecting most of its 1.4 million North American customers, adding to a string of high-profile incidents this year.

Microsoft is also contending with the aftermath of a wide-scale intrusion, as attackers continue to exploit AI-driven methods to bypass traditional defences.

Cybersecurity firm DeepStrike reports that over 560,000 new malware samples are detected daily, underscoring the scale of the threat.

Each month in 2025 has brought fresh incidents. January saw breaches at the UN and Hewlett-Packard, while crypto lender zkLend lost $9.5 million to hackers in February.

March was marked by a significant attack on Elon Musk’s X platform, and Oracle lost six million data records.

April and May were particularly damaging for retailers and financial services. M&S, Harrods, and Coinbase were among the prominent names hit, with the latter facing a $20 million ransom demand. In June, luxury brands and media companies, including Cartier and the Washington Post, were also targeted.

AI chatbot captures veteran workers’ knowledge to support UK care teams

Peterborough City Council has turned the knowledge of veteran therapy practitioner Geraldine Jinks into an AI chatbot to support adult social care workers.

After 35 years of experience, colleagues frequently approached Jinks seeking advice, leading to time pressures despite her willingness to help.

In response, the council developed a digital assistant called Hey Geraldine, built on the My AskAI platform, which mimics her direct and friendly communication style to provide instant support to staff.

Developed in 2023, the chatbot offers practical answers to everyday care-related questions, such as how to support patients with memory issues or discharge planning. Jinks collaborated with the tech team to train the AI, writing all the responses herself to ensure consistency and clarity.

Thanks to its natural tone and humanlike advice, some colleagues even mistook the chatbot for the real Geraldine.

The council hopes Hey Geraldine will reduce hospital discharge delays and improve patient access to assistive technology. Councillor Shabina Qayyum, who also works as a GP, said the tool empowers staff to help patients regain independence instead of facing unnecessary delays.

The chatbot is seen as preserving valuable institutional knowledge while improving frontline efficiency.

Allianz breach affects most US customers

Allianz Life has confirmed a major cyber breach that exposed sensitive data from most of its 1.4 million customers in North America.

The attack was traced back to 16 July, when a threat actor accessed a third-party cloud system using social engineering tactics.

The cybersecurity breach affected a customer relationship management platform but did not compromise the company’s core network or policy systems.

Allianz Life acted swiftly by notifying the FBI and other regulators, including the attorney general’s office in Maine.

Those impacted are offered two years of credit monitoring and identity theft protection. The company has begun contacting affected individuals but declined to reveal the full number involved due to an ongoing investigation.

No other Allianz subsidiaries were affected by the breach. Allianz Life employs around 2,000 staff in the US and remains a key player within the global insurer’s North American operations.

Huawei challenges Nvidia with AI super server

Huawei has unveiled its most powerful AI server, the CloudMatrix 384, to challenge Nvidia’s grip on the high-performance AI infrastructure market.

The system, launched at the World AI Conference in Shanghai, uses 384 Ascend 910C chips, significantly outnumbering Nvidia’s 72 B200 GPUs in the GB200 NVL72.

Although Nvidia’s GPUs remain more powerful individually, Huawei’s design relies on stacking and high-speed chip interconnection to boost overall performance.

The company claims the CloudMatrix 384 can deliver 300 petaflops of computing power, well above Nvidia’s 180 petaflops, though it consumes nearly four times more energy.

The US recently reversed its ban on Nvidia’s H20 chip exports to China, seeking to curb Huawei’s momentum. However, ongoing reports of smuggled Nvidia GPUs raise doubts over the effectiveness of these restrictions.
