TikTok rejects end-to-end encryption citing safety concerns

TikTok will not adopt end-to-end encryption for direct messages. The company explained that using this technology could hinder safety teams’ and law enforcement’s efforts to detect harmful content in private messages, which the company believes could make users less safe online.

Encrypted messaging ensures that only the sender and recipient can read a conversation and is widely used across the social media industry. Rivals including Meta’s Messenger and Instagram, as well as X, have adopted the technology, saying protecting private communication is central to user privacy.
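The property at stake can be illustrated with a toy key exchange: each endpoint derives the same symmetric key from the other's public value, so the relaying platform sees only public keys and ciphertext. This is a deliberately insecure classroom sketch (textbook Diffie-Hellman over a small prime with a hash-based XOR stream), not how production messengers such as Signal-protocol apps actually implement it.

```python
import hashlib
from secrets import randbelow

# Tiny, insecure demo parameters; real systems use X25519 or an
# RFC 3526 group. 2**127 - 1 is a Mersenne prime, generator 3.
P = 2**127 - 1
G = 3

def keypair():
    priv = randbelow(P - 2) + 2           # secret exponent
    return priv, pow(G, priv, P)          # (private, public)

def shared_key(priv, other_pub):
    # Both ends derive the same symmetric key from the DH secret.
    s = pow(other_pub, priv, P)
    return hashlib.sha256(str(s).encode()).digest()

def xor_cipher(key, data):
    # One-shot XOR keystream built from SHA-256 blocks (demo only).
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Sender and recipient generate keys and exchange only the publics.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
key = shared_key(a_priv, b_pub)
assert key == shared_key(b_priv, a_pub)   # same key at both ends

ciphertext = xor_cipher(key, b"meet at 5")
# The platform relays a_pub, b_pub and ciphertext; without either
# private key it cannot recover the plaintext.
assert xor_cipher(key, ciphertext) == b"meet at 5"
```

Because decryption requires a private key that never leaves the device, a platform taking this approach cannot scan message content server-side, which is precisely the trade-off TikTok cites.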

The issue has become more sensitive because the platform has long faced scrutiny over possible links between its parent company, ByteDance, and the government of the People’s Republic of China, something the company has repeatedly denied. Reflecting these concerns, earlier this year, US lawmakers ordered the separation of TikTok’s US operations from its global business.

The company told the BBC that encrypted messaging would make it impossible for police and platform safety teams to read direct messages when needed. TikTok emphasised that this decision was made to enhance user protection, with a particular focus on the safety of younger users, and that it sees monitoring capabilities as crucial for addressing harmful behaviour.

Industry analyst Matt Navarra said the platform’s decision to ‘swim against the tide’ is ‘notable’ but presents ‘challenging optics’. He noted, ‘Grooming and harassment risks are present in DMs [direct messages], so TikTok can state it is prioritising proactive safety over privacy absolutism,’ though he added that the decision ‘places TikTok out of alignment with global privacy expectations’.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Online privacy faces new pressures in the age of social media

Online privacy is eroding as digital services collect ever-growing personal data and surveillance becomes part of daily technology use. The debate has intensified as social media platforms, advertisers, and connected devices expand their ability to track behaviour, preferences, and habits.

Analysts say younger generations have adapted to this reality rather than resisting it. ‘In 2026, online privacy is a luxury, not a right,’ says Thomas Bunting, an analyst at the UK innovation think tank Nesta. He argues many people have grown up accepting data collection as a trade-off for access to online services, noting: ‘We’ve been taught how to deal with it.’

Advocates warn that the erosion of online privacy could have wider social consequences. Cybersecurity expert Prof Alan Woodward from the University of Surrey says the issue goes beyond personal privacy. ‘People should care about online privacy because it shapes who has power over their lives,’ he says, arguing that privacy is ‘about having something to protect: freedom of thought, experimentation, dissent and personal development without permanent surveillance.’

Despite a growing number of privacy tools and regulations, data exposure remains widespread. According to Statista, more than 1.35 billion people were affected by data breaches, hacks, or exposure in 2024 alone. At the same time, more than 160 countries now have privacy legislation, while users regularly encounter cookie consent prompts that govern how their data is collected online.

Experts say frustration with privacy controls reflects a broader ‘privacy paradox’, in which people express concern about data protection but rarely change their behaviour. Cisco’s Consumer Privacy Survey found that while 89% of respondents said they care about privacy, only 38% actively take steps to protect their data.

As philosopher Carissa Véliz notes, the challenge is not simply awareness but a sense of agency: ‘Mostly, people don’t feel like they have control.’ She argues that protecting privacy requires stronger regulation, responsible technology design, and cultural change, adding: ‘It’s about having [access to] the right tech, but also using it.’

Global AI race intensifies as China claims leadership in strategic technologies

China asserted its position as the global leader in AI and strategic technology R&D, pledging to accelerate advancement toward technological autonomy. The assertion was prominently featured in government reports presented to the National People’s Congress.

A National Development and Reform Commission report states that China leads international research, development, and implementation in AI, biomedicine, robotics, and quantum technology. The report also references advancements in domestic chip innovation as proof of progress.

Competition between China and the United States for dominance in advanced technologies has escalated. Washington imposed export controls on advanced chips, while Beijing retaliated with restrictions on rare earth resources, escalating trade tensions over strategic technologies.

The report also highlighted the country’s global leadership in open-source AI models and its expansion into emerging technology sectors, including industrial robots and drones. Authorities pledged to nurture future industries such as quantum technology, embodied AI, and 6G networks, while promoting large-scale AI deployment across key sectors.

Officials also plan to launch new data centres, coordinate nationwide computing capacity, and establish mechanisms to prevent AI security risks. The strategy places particular emphasis on embodied AI to boost productivity and performance across sectors. Although US firms command larger investment resources, Beijing is relying on supply chains, manufacturing capacity, and rapid R&D cycles to scale emerging industries despite questions about long-term growth.

ECB reports minor impact of AI on employment

AI has so far had only a small effect on employment across Europe, according to economists at the European Central Bank. A comparison of 5,000 firms, both AI users and non-users, showed no significant difference in job creation or reduction.

Firms that use AI intensively were even four percent more likely than average to hire new staff.

Economists noted that AI investment has not replaced existing jobs. In some cases, firms are hiring additional employees to develop and implement AI systems or to scale up operations more efficiently.

Only a minority of firms, around 15 percent, reported reducing labour costs as a motivation for AI adoption.

Despite limited impacts so far, the ECB cautioned that AI could have more significant effects as technology matures. Firms that specifically invest in AI to cut jobs may indeed reduce employment, and the long-term consequences for production processes and labour markets remain uncertain.

The findings come amid rising concern over AI-driven job losses, with companies such as Amazon and Allianz citing AI as a reason for recent cuts. Markets reacted negatively last week after a viral post predicted widespread layoffs, though current evidence shows only minor effects.

Growing risks from AI meeting transcription tools

Businesses across the US and Europe are confronting new privacy risks as AI transcription tools spread through workplaces. Tools that automatically record and transcribe meetings increasingly capture sensitive conversations without clear consent.

Privacy specialists note that organisations previously focused on rules controlling what employees upload into AI systems; governance efforts are now shifting towards monitoring what AI tools record during daily work.

AI services such as Otter, Zoom transcription and Microsoft Copilot can record discussions involving performance reviews, health information and legal matters. Companies face legal exposure when third-party platforms store these recordings without strict controls.

Governance teams are being urged to introduce clear rules on meeting recordings and retention of transcripts. Stronger policies may include consent requirements, limits on recording sensitive meetings and stricter data storage oversight.
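Rules like these can also be enforced in tooling rather than left to policy documents. Below is a minimal sketch, with all names hypothetical (no vendor API is being described), of a gate that blocks a recording bot unless every participant has opted in and the meeting carries no sensitive tag:

```python
from dataclasses import dataclass, field

# Hypothetical tags a governance team might treat as never-record.
SENSITIVE_TAGS = {"performance-review", "health", "legal"}

@dataclass
class Meeting:
    title: str
    tags: set = field(default_factory=set)
    consents: dict = field(default_factory=dict)  # participant -> opted in?

def may_record(meeting: Meeting) -> bool:
    if meeting.tags & SENSITIVE_TAGS:
        return False                       # sensitive meetings: never record
    if not meeting.consents:
        return False                       # no participants registered
    return all(meeting.consents.values())  # unanimous opt-in required

standup = Meeting("Daily standup",
                  consents={"ana": True, "ben": True})
review = Meeting("Q3 review", tags={"performance-review"},
                 consents={"ana": True, "ben": True})
assert may_record(standup) is True
assert may_record(review) is False         # blocked even with consent
```

The design choice worth noting is that the sensitivity check runs before the consent check, mirroring the policies described above in which some meetings are off-limits to recording regardless of participant agreement.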

Council of Europe issues new guidance on AI and gender equality

Ahead of International Women’s Day on 8 March, the Council of Europe adopted two new recommendations addressing gender equality and the prevention of violence against women in the context of emerging technologies.

One recommendation targets the design and use of AI to prevent discrimination, while the other focuses on accountability for technology-facilitated violence against women and girls.

The AI recommendation advises member states on preventing discrimination throughout the lifecycle of AI systems, from development to deployment and retirement. It highlights risks like gender bias while promoting transparency, explainability, and safeguards.

Special attention is given to discrimination based on gender, race, and sexual orientation, gender identity and expression, and sex characteristics (SOGIESC).

The second recommendation sets the first international standard for addressing technology-facilitated violence against women. It outlines strategies to overcome impunity, including clearer legal frameworks, accessible reporting systems, and victim-centred approaches.

Emphasis is placed on multistakeholder engagement, trauma-informed policies, and safety-by-design in technology products to prevent digital harm.

Both recommendations reinforce the importance of combining regulation, institutional support, and public awareness to ensure technology advances equality rather than perpetuates harm.

The formal launch is scheduled for 10 June 2026 at the Palais de l’Europe in Strasbourg during an event titled ‘From standards to action: making accountability for technology-facilitated violence against women and girls a reality.’

Gemini Canvas reaches millions as Google expands AI Search tools

Google has expanded access to the Canvas feature in Google Search’s AI Mode, making it available to all US users.

Canvas allows users to organise research, draft documents and develop small applications directly inside search.

Prompts can generate code, transform reports into webpages or quizzes, and produce audio summaries from uploaded material. The tool was previously introduced as part of experimental projects in Google Labs.

The feature builds on capabilities already available in Google Gemini and partly overlaps with NotebookLM, which supports research analysis and document processing.

Within Canvas, users can gather information from the web and the Google Knowledge Graph while refining projects through interaction with the Gemini model.

Competition is intensifying across AI development platforms. OpenAI and Anthropic offer similar tools, though their design approaches differ in how collaborative workspaces are triggered and used.

Qualcomm pushes Europe to take the lead in the 6G revolution

Europe is being urged to take a leading role in developing sixth-generation wireless technology as global competition intensifies over the future of connectivity and AI.

Speaking at the Mobile World Congress in Barcelona, Wassim Chourbaji of Qualcomm argued that 6G will represent a technological revolution rather than a gradual improvement over existing networks.

The company expects early pre-commercial deployments to begin around 2028, with broader commercialisation targeted for 2029.

Next-generation wireless networks are expected to support physical AI systems capable of interacting with the real world, including robotics, smart glasses, connected vehicles, and advanced sensing technologies.

High-capacity uploads and faster processing between devices and data centres will allow AI systems to analyse video streams and real-time data more efficiently.

Qualcomm has also launched a coalition aimed at accelerating 6G development with partners including Nokia, Ericsson, Amazon, Google and Microsoft.

Advocates argue that combining European industrial strengths with advanced wireless and AI technologies could allow the continent to secure a leading position in the next phase of global digital infrastructure.

China expands oversight of youth online safety

China has introduced new measures to regulate online information that could affect the physical and mental health of minors. Authorities said the rules will take effect on 1 March and aim to improve protection for young internet users.

The regulators identified four categories of online information that may harm minors. The authorities have also addressed emerging risks linked to algorithmic recommendations and generative AI technologies.

The framework in China requires internet platforms and content creators to prevent and respond to harmful material. Regulators said companies must strengthen the monitoring and governance of content affecting minors.

Authorities said the measures are designed to create a cleaner online environment for children. Officials also stressed greater responsibility for platforms that manage digital content used by minors.

US introduces ratepayer protection pledge for AI data centres

The United States government has announced a new policy initiative to ensure that the rapid expansion of data centres and AI infrastructure does not increase electricity costs for American households.

The measure, known as the Ratepayer Protection Pledge, places responsibility for additional energy demand on technology companies operating large-scale data centres.

Officials emphasised that reliable data centre infrastructure is critical to maintaining the country’s economic competitiveness and technological leadership. Facilities that power cloud computing, internet services and AI development are expected to continue expanding rapidly, driven by growing demand for advanced digital services.

At the same time, policymakers warned that rising electricity consumption linked to AI could place pressure on energy systems and consumer utility bills. Under the new pledge, hyperscale technology firms and AI companies commit to covering the full cost of the electricity and infrastructure required to operate their data centres.

Participating companies have agreed to finance new power generation resources, upgrade electricity delivery infrastructure and negotiate separate electricity rate structures with utilities and state authorities. The arrangement is designed to ensure that additional energy demand from large data centres does not translate into higher prices for residential consumers.

Seven major technology companies have formally accepted the terms of the pledge. Authorities argue that the initiative will support continued investment in domestic AI and cloud infrastructure while protecting households from rising energy costs and strengthening the resilience of the national power grid.
