White House launches AI Action Plan with Executive Orders on exports and regulation

The White House has unveiled a sweeping AI strategy through its new publication Winning the Race: America’s AI Action Plan.

Released alongside three Executive Orders, the plan outlines the federal government’s next phase in shaping AI policy, focusing on innovation, infrastructure, and global leadership.

The AI Action Plan centres on three key pillars: accelerating AI development, establishing national AI infrastructure, and promoting American AI standards globally. Four consistent themes run through each pillar: regulation and deregulation, investment, research and standardisation, and cybersecurity.

Notably, deregulation is central to the plan’s strategy, particularly in reducing barriers to AI growth and speeding up infrastructure approval for data centres and grid expansion.

Investment plays a dominant role. Federal funds will support AI job training, data access, lab automation, and domestic component manufacturing, reducing reliance on foreign suppliers.

Alongside these measures, the plan calls for new national standards, improved dataset quality, and stronger evaluation mechanisms for AI interpretability, control, and safety. A dedicated AI Workforce Research Hub is also proposed.

In parallel, three Executive Orders were issued. One bans ‘woke’ or ideologically biased AI tools in federal use, another fast-tracks data centre development using federal land and brownfield sites, and a third launches an AI exports programme to support full-stack US AI systems globally.

While these moves open new opportunities, they also raise questions around regulation, bias, and the future shape of AI development in the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Taiwan invests NT$50 million to train AI-ready professionals

Taiwan’s Ministry of Economic Affairs has announced the launch of the first phase of its 2025 AI talent training programme, set to begin in August.

The initiative aims to develop 152 skilled professionals capable of supporting businesses in adopting AI technologies across a wide range of sectors.

Chiu Chiu-hui, Director-General of the Industrial Development Administration, said the programme has attracted over 60 domestic and international companies that will contribute instructors and offer internship placements.

Notable participating firms include Microsoft Taiwan, ASE Group, and Acer. Students will be selected from leading universities, such as National Taipei University, National Taipei University of Technology, National Formosa University, and National Cheng Kung University.

Structured as a one-year curriculum, the training is divided into three four-month phases. The initial stage will focus on theoretical foundations and current industry trends.

The first training stage will be followed by four months of practical application, and finally, four months of on-site corporate internships. Graduates of the programme are required to commit to working for one of the participating companies for a minimum of two years upon completion.

Participants will receive financial support throughout their training. A monthly stipend of NT$20,000 (approximately US$673) will be provided during the academic and practical stages, increasing to NT$30,000 during the internship period.

The government has earmarked NT$50 million for the first phase of the programme, and additional co-investment from private companies is being actively encouraged.

According to Chiu, some Taiwanese firms are struggling to find qualified talent to support their AI ambitions. In response, the ministry trained approximately 70,000 AI professionals last year and has set a lower target of over 50,000 for 2025.

However, the long-term vision remains ambitious — to develop a total of 200,000 AI specialists within the next four years.

Registration for the second phase of the initiative is now open and will close in September. Training will expand to include universities and research institutions across Taiwan, with the next round of classes scheduled to start in October.

Industry leaders have praised the initiative as a timely response to the rapidly evolving technological landscape.

Lee Shu-hsia, Vice President of Human Resources at ASE Group, noted that AI is no longer confined to manufacturing but is increasingly being integrated into various functions such as procurement, human resources, and management.

This cross-departmental adoption is creating demand for AI-literate professionals who can bridge technical knowledge with operational needs.

Danny Chen, General Manager of Microsoft Taiwan’s public business group, added that the digital transformation underway in many companies has led to a significant increase in demand for AI-related talent.

Chen expressed optimism that the training programme will help companies not only recruit but also retain skilled personnel. The Ministry of Economic Affairs expects participation to grow in the coming years and plans to expand both the scope and scale of the training.

In addition to co-investment, the ministry is exploring partnerships with international institutions to further enhance the programme’s global relevance and ensure alignment with emerging industry standards.

While the government’s long-term goal is to future-proof Taiwan’s workforce, the immediate focus is on plugging the talent gap that threatens to slow industrial innovation.

By linking academic institutions with real-world corporate challenges, the programme aims to produce graduates who are not only technically proficient but also industry-ready from day one.

Observers say the initiative represents a proactive strategy in preparing Taiwan’s economy for the next wave of AI-driven transformation. With AI applications becoming increasingly prevalent in sectors ranging from logistics to administration, building a robust talent pipeline is now viewed as a national priority.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Scientists use quantum AI to solve chip design challenge

Scientists in Australia have used quantum machine learning to model semiconductor properties more accurately, potentially transforming how microchips are designed and manufactured.

The hybrid technique combines AI with quantum computing to solve a long-standing challenge in chip production: predicting electrical resistance where metal meets semiconductor.

The Australian researchers developed a new algorithm, the Quantum Kernel-Aligned Regressor (QKAR), which uses quantum methods to detect complex patterns in small, noisy datasets, a common issue in semiconductor research.

By improving how engineers predict Ohmic contact resistance, the approach could lead to faster, more energy-efficient chips. It is also designed for real-world compatibility, meaning it can run on today's quantum machines and scale as the hardware matures.

The findings highlight the growing role of quantum AI in hardware design and suggest the method could be adopted in commercial chip production in the near future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Brainstorming with AI opens new doors for innovation

AI is increasingly embraced as a reliable creative partner, offering speed and breadth in idea generation. In Fast Company, Kevin Li describes how AI complements human brainstorming under time pressure, drawing from his work at Amazon and startup Stealth.

Li argues AI is no longer just a tool but a true collaborator in creative workflows. Generative models can analyse vast data sets and rapidly suggest alternative concepts, helping teams reimagine product features, marketing strategies, and campaign angles. The shift aligns with broader industry trends.

A McKinsey report from earlier this year highlighted that, while only 1% of companies consider themselves mature in AI use, most are investing heavily in this area. Creative use cases are expected to generate massive value by 2025.

Li notes that the most effective use of AI occurs when it’s treated as a sounding board. He recounts how the quality of ideas improved significantly when AI offered raw directions that humans later refined. The hybrid model is gaining traction across multiple startups and established firms alike.

Still, original thinking remains a hurdle. A recent study reported by PsyPost found that human pairs often outperform AI tools in generating novel ideas during collaborative sessions. While AI offers scale, human teams reported greater creative confidence and stronger originality.

The findings suggest AI may work best at the outset of ideation, followed by human editing and development. Experts recommend setting clear roles for AI in the creative cycle. For instance, tools like ChatGPT or Midjourney might handle initial brainstorming, while humans oversee narrative coherence, tone, and ethics.

The approach is especially relevant in advertising, product design, and marketing, where nuance is still essential. Creatives across X are actively sharing tips and results. One agency leader posted about reducing production costs by 30% using AI tools for routine content work.

The strategy allowed more time and budget to focus on storytelling and strategy. Others note that using AI to write draft copy or generate design options is becoming common. Yet concerns remain over ethical boundaries.

The Orchidea Innovation Blog cautioned in 2023 that AI often recycles learned material, which can limit fresh perspectives. Recent conversations on X raise alarms about over-reliance. Some fear AI-generated content will eradicate originality across sectors, particularly marketing, media, and publishing.

To counter such risks, structured prompting and human-in-the-loop models are gaining popularity. ClickUp’s AI brainstorming guide recommends feeding diverse inputs to avoid homogeneous outputs. Précis AI referenced Wharton research to show that vague prompts often produce repetitive results.

The solution: intentional, varied starting points with iterative feedback loops. Emerging platforms are tackling this in real-time. Ideamap.ai, for example, enables collaborative sessions where teams interact with AI visually and textually.

Jabra’s latest insights describe AI as a ‘thought partner’ rather than a replacement, enhancing team reasoning and ideation dynamics without eliminating human roles. Looking ahead, the business case for AI creativity is strong.

McKinsey projects hundreds of billions in value from AI-enhanced marketing, especially in retail and software. Influencers like Greg Isenberg predict $100 million niches built on AI-led product design. Frank$Shy’s analysis points to a $30 billion creative AI market by 2025, driven by enterprise tools.

Even in e-commerce, AI is transforming operations. Analytics India Magazine reports that brands are building eight-figure revenues by automating design and content workflows while keeping human editors in charge. The trend is not about replacement but refinement and scale.

Li’s central message remains relevant: when used ethically, AI augments rather than replaces creativity. Responsible integration supports diverse voices and helps teams navigate the fast-evolving innovation landscape. The future of ideation lies in balance, not substitution.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI sparks fears over future of dubbing

Voice actors across Europe are pushing back against the growing use of AI in dubbing, fearing it could replace human talent in film and television. Many believe dubbing is a creative profession beyond simple voice replication, requiring emotional nuance and cultural sensitivity.

In Germany, France, Italy and the UK, nearly half of viewers prefer dubbed content over subtitles, according to research by GWI. Yet studios are increasingly testing AI tools that replicate actors’ voices or generate synthetic speech, sparking concern across the dubbing industry.

French voice actor Boris Rehlinger, known for dubbing Hollywood stars, says he feels threatened even though AI has not replaced him. He is part of TouchePasMaVF, an initiative defending the value of human dubbing and calling for protection against AI replication.

Voice artists argue that digital voice cloning ignores the craftsmanship behind their performances. As legal frameworks around voice ownership lag behind the technology, many in the industry demand urgent safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba launches Wan2.2 AI for text and image to video generation

Alibaba and Zhipu AI have unveiled new open-source models as China intensifies its efforts to compete with the US in AI development. Alibaba’s Wan2.2 is being promoted as the first large video generation model using a Mixture-of-Experts (MoE) architecture in the open-source space.

The Wan2.2 series includes models for generating video from text and images, supporting hybrid capabilities for advanced multimedia applications. MoE architecture allows these models to use less computing power by dividing tasks among specialised sub-networks.

Zhipu, one of China’s leading AI firms, launched the GLM-4.5 and GLM-4.5-Air models with up to 355 billion parameters, built on a self-developed architecture. The GLM-4.5 model ranked third globally and first among open-source models across 12 performance benchmarks.

China’s open-source ecosystem is expanding rapidly, with Zhipu’s models amassing over 40 million downloads and Alibaba’s Qwen series producing hundreds of derivatives. Industry momentum reflects a strategic shift towards wider adoption, improved efficiency and greater international reach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Survey finds developers value AI for ideas, not final answers

As AI becomes more integrated into developer workflows, a new report shows that trust in AI-generated results is eroding. According to Stack Overflow's 2025 Developer Survey, the use of AI has increased to 84%, up from 76% last year. However, trust in its output has dropped, especially among experienced professionals.

The survey found that 46% of developers now lack trust in AI-generated answers.

That figure marks a sharp increase from 31% in 2024, suggesting growing scepticism despite higher adoption. By contrast, only 3.1% of developers report high trust in AI responses.

Interestingly, trust varies with experience. Beginners were twice as likely to express high confidence in AI, with 6.1% reporting strong trust, compared with just 2.5% among seasoned developers. The results indicate a divide in how AI is perceived across the developer landscape.

Despite doubts, developers continue to use AI tools across various tasks. The vast majority – 78.5% – use AI on an infrequent basis, such as once a month. The pattern holds across experience levels, suggesting cautious and situational usage.

While trust is lacking, developers still see AI as a helpful starting point. Three in five respondents reported favourable views of AI tools overall, one in five viewed them negatively, and the remaining fifth were neutral.

However, that usefulness has limits. Developers were quick to seek human input when unsure about AI responses. Seventy-five percent said they would ask someone when they didn’t trust an AI-generated answer. Fifty-eight percent preferred human advice when they didn’t fully understand a solution.

Ethics and security were also areas where developers preferred human judgement. Again, 58% reported turning to colleagues or mentors to evaluate such risks. Such cases show a continued reliance on human expertise in high-stakes decisions.

Stack Overflow CEO Prashanth Chandrasekar acknowledged the limitations of AI in the development process. ‘AI is a powerful tool, but it has significant risks of misinformation or can lack complexity or relevance,’ he said. He added that AI works best alongside a ‘trusted human intelligence layer’.

The data also revealed that developers may not trust AI entirely but use it to support learning.

Forty-four percent of respondents admitted using AI tools to learn how to code, up from 37% last year.

A further 36% use it for work-related growth or career advancement.

The results highlight the role of AI as an educational companion rather than a coding authority.

It can help users understand concepts or generate basic examples, but most still want human review.

That distinction matters as teams consider how to integrate AI into production workflows.

Some developers are concerned that overreliance on AI could reduce the depth of their problem-solving skills. Others worry about hallucinations — AI-generated content that appears accurate but is misleading or incorrect. Such risks have led to a cautious, layered approach to using AI tools in real-life projects.

Stack Overflow’s findings align with broader industry trends in AI adoption and trust. Tech firms are exploring ways to integrate AI safely, and many prioritise transparency and human oversight. Chandrasekar believes developers are uniquely positioned to help shape the future of AI.

‘By providing a trusted human intelligence layer in the age of AI, we believe the tech enthusiasts of today can play a larger role in adding value,’ he said. ‘They’ll help build the AI technologies and products of tomorrow.’

As AI continues to expand into software development, one thing is clear: trust matters. Developers are open to using AI – but only when it supports, rather than replaces, human judgement. The challenge now is building systems that earn and maintain that trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google urges faster AI adoption to boost productivity

Google’s CEO Sundar Pichai and other tech leaders are urging their employees to quickly integrate AI into daily operations. The aim is to enhance productivity as companies seek smarter ways to manage rising costs.

Rather than simply expanding their workforce during periods of growth, firms like Google and Amazon are now focusing on AI tools to improve efficiency. Google plans to spend $85 billion this year to expand data centres for advanced AI models.

Meanwhile, Amazon has warned employees about upcoming layoffs linked to AI adoption, encouraging staff to learn how to work effectively with smaller teams.

Alphabet, Google’s parent company, cut its workforce by about 6% in 2023, shedding thousands of jobs while continuing to invest heavily in AI development. Pichai described this period as one for cautious investment aimed at boosting productivity.

Other tech leaders, including Microsoft’s Julia Liuson and Shopify’s Tobi Lutke, have echoed the sentiment that adopting AI is now essential for success.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE partnership boosts NeOnc’s clinical trial programme

Biotech firm NeOnc Technologies has gained rapid attention after going public in March 2025 and joining the Russell Microcap Index just months later. The company focuses on intranasal drug delivery for brain cancer, allowing patients to administer treatment at home and bypass the blood-brain barrier.

NeOnc’s lead treatment is in Phase 2A trials for glioblastoma patients and is already showing extended survival times with minimal side effects. Backed by a partnership with USC’s Keck School of Medicine, the company is also expanding clinical trials to the Middle East and North Africa under US FDA standards.

A $50 million investment deal with a UAE-based firm is helping fund this expansion, including trials run by Cleveland Clinic through a regional partnership. The trials are expected to be fully enrolled by September, with positive preliminary data already being reported.

AI and quantum computing are central to NeOnc’s strategy, particularly in reducing risk and cost in trial design and drug development. As a pre-revenue biotech, the company is betting that innovation and global collaboration will carry it to the next stage of growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How are we being tracked online?

What impact does tracking have?

In the digital world, tracking occurs through digital signals sent from one computer to a server, and from a server to an organisation. Almost immediately, a profile of a user can be created. That information can be leveraged to send personalised advertisements for products and services consumers are interested in, but it can also be used to classify people into categories and steer them in a certain direction, for example politically, as in the 2024 Romanian election and the Cambridge Analytica scandal's role in skewing the 2016 Brexit referendum and 2016 US elections.

Digital tracking can be carried out with minimal costs, rapid execution and the capacity to reach hundreds of thousands of users simultaneously. These methods require either technical skills (such as coding) or access to platforms that automate tracking. 

This phenomenon has been well documented and likened to George Orwell’s 1984, in which the people of Oceania are subject to constant surveillance by ‘Big Brother’ and institutions of control: the Ministry of Truth (propaganda), Peace (military control), Love (torture and forced loyalty) and Plenty (manufactured prosperity).

A related concept is the Panopticon, a prison design by Jeremy Bentham enabling constant observation from a central point, which the French philosopher Michel Foucault later developed into a social theory. Prisoners never know if they are being watched and thus self-regulate their behaviour. In today’s tech-driven society, our digital behaviour is similarly regulated through the persistent possibility of surveillance.

How are we tracked? The case of cookies and device fingerprinting

  • Cookies

Cookies are small, unique text files placed on a user’s device by their web browser at the request of a website. When a user visits a website, the server can instruct the browser to create or update a cookie. These cookies are then sent back to the server with each subsequent request to the same website, allowing the server to recognise and remember certain information (login status, preferences, or tracking data).
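To make that round trip concrete, here is a minimal sketch using Node’s built-in http module. The cookie name (visitor_id) and the port are hypothetical examples for illustration, not any particular site’s implementation.

```typescript
import { createServer } from "http";
import { randomUUID } from "crypto";

const server = createServer((req, res) => {
  // With every request, the browser automatically sends back any cookies
  // previously set for this site.
  const cookies = req.headers.cookie ?? "";
  const existing = cookies.split("; ").find((c) => c.startsWith("visitor_id="));

  if (existing) {
    // Returning visitor: the server recognises the identifier it issued earlier.
    res.end(`Welcome back, visitor ${existing.split("=")[1]}`);
  } else {
    // First visit: instruct the browser to create and store a new cookie.
    res.setHeader(
      "Set-Cookie",
      `visitor_id=${randomUUID()}; Max-Age=31536000; Path=/`
    );
    res.end("First visit: identifier stored");
  }
});

server.listen(8080); // hypothetical port
```

Everything needed to recognise the user on their next visit lives in that single Set-Cookie header; the browser does the rest automatically.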

If a user visits multiple websites about a specific topic, that pattern can be collected and sold to advertisers targeting that interest. This applies to all forms of advertising, not just commercial but also political and ideological influence.

  • Device fingerprinting 

Device fingerprinting involves generating a unique identifier using a device’s hardware and software characteristics. Types include browser fingerprinting, mobile fingerprinting, desktop fingerprinting, and cross-device tracking. To assess how unique a browser is, users can test their setup via the Cover Your Tracks tool by the Electronic Frontier Foundation.

Different information will be collected, such as your operating system, language version, keyboard settings, screen resolution, fonts used, device make and model, and more. The more data points collected, the more unique an individual’s device will be.
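As an illustration, here is a browser-side sketch of how a handful of these ordinary signals can be combined and hashed into a single identifier. Real fingerprinting scripts gather far more (canvas rendering, installed fonts, audio processing); this is a deliberately reduced example.

```typescript
// Browser-side sketch: each signal alone is common; combined, they can be
// close to unique.
async function deviceFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,                                     // browser, version, OS
    navigator.language,                                      // language setting
    `${screen.width}x${screen.height}x${screen.colorDepth}`, // screen profile
    String(new Date().getTimezoneOffset()),                  // timezone
    String(navigator.hardwareConcurrency),                   // CPU core count
  ].join("|");

  // Hash the combined signals into a fixed-length identifier.
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

deviceFingerprint().then((id) => console.log("fingerprint:", id));
```

Note that no cookie is stored anywhere: the identifier is recomputed from the device itself on every visit, which is what makes fingerprinting hard to refuse or delete.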

A common reason to use device fingerprinting is for advertising. Since each individual has a unique identifier, advertisers can distinguish individuals from one another and see which websites they visit based on past collected data. 

Like cookies, device fingerprinting is not purely about advertising; it also has legitimate security purposes. Because it creates a unique ID for a device, fingerprinting allows websites to recognise a returning device, which is useful for combating fraud. For instance, if a known account suddenly logs in from a device with an unknown fingerprint, fraud detection mechanisms may flag and block the login attempt.

Legal considerations

Apart from societal impacts, there are legal considerations, specifically concerning fundamental rights. In the EU and wider Europe, Articles 7 and 8 of the Charter of Fundamental Rights and Article 8 of the European Convention on Human Rights give rise to the protection of personal data in the first place. They form the legal bedrock of digital privacy legislation, such as the GDPR and the ePrivacy Directive. The GDPR, in turn, protects against unlawful, unfair and opaque processing of personal data.

For tracking to be carried out lawfully, one of the six legal bases of the GDPR must be relied upon. In practice, tracking is usually only lawful on the basis of consent (Article 6(1)(a) GDPR, read together with Article 5(3) of the ePrivacy Directive).

Other legal bases, such as the legitimate interest of a business, may allow limited analytical cookies to be placed, but the tracking cookies discussed in this analysis do not fall into that category.

Regardless, website operators must ensure that consent is collected prior to any processing and that it is freely given, specific, informed and unambiguous. In most cases of website tracking, consent is not collected before processing begins.

In practice, this means that cookies are placed on the user’s device before the visitor has even answered the consent request. There are additional concerns about consent not being informed, as users do not know what the processing of personal data to enable tracking actually entails.

Moreover, consent is often not specific, given that processing is justified with broad, unspecified purposes such as ‘improving visitor experience’ or ‘understanding the website better’, explanations that remain generic.

Further, tracking is typically unfair, as users do not expect to be tracked across sites or to have digital profiles built about them based on website visits. Tracking is also opaque: website owners state that tracking occurs but rarely explain how, for how long, what personal data is used, or how it benefits them.

Can we refuse tracking?

In theory, it is possible to prevent tracking from the get-go. This can be done by refusing to give consent when tracking occurs. However, in practice, refusing consent can still lead to tracking. Outlined below are two concrete examples of this happening daily.

  • Cookies

Regarding cookies, simply put, the refusal of consent is often not honoured; it is ignored. Studies have found that when a user visits a website and refuses consent, cookies and similar tracking technologies are placed on the user’s device as if they had accepted.

This increases user frustration, as people are given a choice that turns out to be illusory. It happens because non-essential cookies, which can be refused, are lumped together with essential cookies, which cannot. When a user refuses consent for non-essential cookies, therefore, not all of them are blocked, as some are mislabelled.

Another reason is that cookies are placed before consent is sought. Website owners often outsource cookie banner compliance to more experienced companies, using consent management platforms (CMPs) such as Cookiebot by Usercentrics or OneTrust.

On these CMPs, the option to load cookies only after consent needs to be manually selected. Website owners therefore need to understand consent requirements well enough to know that cookies must not be placed before consent is sought. A sketch of what that configuration amounts to follows below.
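The privacy-by-default behaviour can be pictured as follows: the tracking script that would place cookies is injected only after an explicit accept. The button id, storage key and script URL are hypothetical placeholders, not any CMP’s actual API.

```typescript
// Sketch of consent-gated script loading: nothing tracking-related runs
// until the visitor explicitly accepts.
function loadAnalytics(): void {
  const script = document.createElement("script");
  script.src = "https://analytics.example.com/tracker.js"; // hypothetical URL
  document.head.appendChild(script);
}

// Wire the (hypothetical) accept button in the cookie banner.
document.getElementById("accept-cookies")?.addEventListener("click", () => {
  localStorage.setItem("consent", "granted"); // remember the choice
  loadAnalytics(); // cookies may be placed only from this point on
});

// On later visits, honour a previously recorded choice; do nothing by default.
if (localStorage.getItem("consent") === "granted") {
  loadAnalytics();
}
```

The point of the sketch is the default branch: when no consent has been recorded, no tracking script is ever loaded, which is the opposite of the place-first, ask-later behaviour the studies describe.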

  • Google Consent Mode

Another example relates to Google Consent Mode (GCM). GCM is relevant here because Google is the most common third-party tracker on the web, and thus the tracker users are most likely to encounter, with a vast array of trackers covering statistics, analytics, preferences, marketing and more. GCM essentially creates a path for website analytics to continue even when consent is refused: it claims to send cookieless ping signals from user devices to report how many users have viewed a website, clicked on a page, searched a term, and so on.

Google presents this as a novel, privacy-friendly solution, since no cookies are required. However, a study of tags, specifically GCM tags, found that GCM is not privacy-friendly and infringes the GDPR. The study found that Google still collects personal data in these ‘cookieless ping signals’, such as user language, screen resolution, computer architecture, user agent string, operating system and its version, the complete web page URL, and search keywords. Since this data is collected and processed despite the user refusing consent, there are undoubtedly legal issues.
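Purely as an illustration of the mechanism the study describes, and emphatically not Google’s actual protocol, a cookieless ping can be pictured as an ordinary request whose URL parameters carry the data. The endpoint and parameter names below are invented.

```typescript
// Hypothetical sketch: no cookie is set or read, yet the request still
// transmits the kinds of personal data the study identified.
const ping = new URL("https://example-collector.invalid/ping"); // invented endpoint
ping.searchParams.set("lang", navigator.language);              // user language
ping.searchParams.set("sr", `${screen.width}x${screen.height}`); // screen resolution
ping.searchParams.set("ua", navigator.userAgent);               // user agent string
ping.searchParams.set("url", location.href);                    // full page URL

// Fire-and-forget transmission, as used for analytics beacons.
navigator.sendBeacon(ping.toString());
```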

The first issue stems from the general principle of lawfulness: Google has no lawful basis to process this personal data, as the user refused consent and no other legal basis is invoked. The second stems from the general principle of fairness: users do not expect that, after refusing trackers and choosing the more privacy-friendly option, their data is still processed as if their choice did not matter.

Therefore, from Google’s perspective, GCM is privacy-friendly as no cookies are placed, thus no consent is required to be sought. However, a recent study revealed that personal data is still being processed without any permission or legal basis. 

What next?

  • On an individual level: 

Many solutions have been developed for individuals to reduce the tracking they are subject to, from browser extensions and more privacy-friendly devices to ad blockers. One notable company tackling the issue is DuckDuckGo, whose browser rejects trackers by default, offers email protection, and reduces tracking overall. It is not alone: tools such as uBlock Origin and Ghostery offer similar protections.

Regarding fingerprinting specifically, researchers have developed countermeasures. In 2023, researchers proposed ShieldF, a Chromium add-on that reduces fingerprinting for mobile apps and browsers. Other measures include using a shared IP address, which is impractical for home Wi-Fi, or combining a browser extension with a VPN, which is not suitable for everyone, as it demands substantial effort and sometimes financial cost.

  • On a systemic level: 

CMPs and GCM are active stakeholders in the tracking ecosystem, and their actions are subject to enforcement bodies, predominantly data protection authorities (DPAs). One prominent DPA working on cookie enforcement is the Dutch DPA, the Autoriteit Persoonsgegevens (AP). In early 2025, the AP publicly stated that its focus for the year would be cookie compliance and announced that it would investigate 10,000 websites in the Netherlands. This has led to investigations into companies with unlawful cookie banners, concluding with warnings and sanctions.

However, these investigations require extensive time and effort. DPAs have already stated that they are overworked and lack the personnel and financial resources to cope with their growing responsibilities. Coupled with the fact that sanctioned companies set aside financial reserves for fines, and that some non-EU businesses simply do not comply with DPA decisions (as in the case of Clearview AI), different ways to tackle non-compliance should be investigated.

For example, the GDPR simplification package, while simplifying some measures, could introduce liability measures to ensure that enforcement is as vigorous as the legislation itself. The EU has not shied away from holding management boards liable for non-compliance: in separate cybersecurity legislation, NIS II Article 20(1) states that ‘management bodies of essential and important entities approve the cybersecurity risk-management measures (…) can be held liable for infringements (…)’, allowing board member liability for the specific risk-management measures in Article 21. If similar measures cannot be introduced now, future amendments offer further opportunities.

Conclusion

Cookies and device fingerprinting are two common ways in which tracking occurs. The potentially larger societal and legal consequences of tracking demand that existing robust legislation be enforced, so that past politically charged mistakes are not repeated.

Ultimately, there is no way to completely prevent fingerprinting and cookie-based tracking without significantly compromising the user’s browsing experience. For this reason, the burden of responsibility must shift toward CMPs, beginning with the implementation of privacy-by-design and privacy-by-default principles in their tools (for example, preventing cookie placement before consent is sought).

Accountability should come through tangible consequences, such as liability for board members in cases of negligence. Attributing responsibility to the companies that develop cookie banners and facilitate trackers addresses the problem at its source and holds those responsible accountable for human rights violations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!