The Olympic ice dance format combines a themed rhythm dance with a free dance. For the 2026 season, skaters must draw on 1990s music and styles. While most competitors chose recognisable tracks, one Czech sibling duo used a hybrid soundtrack blending AC/DC with an AI-generated track.
The pair, Katerina Mrazkova and Daniel Mrazek, made their Olympic debut using a rhythm dance soundtrack that included AI-generated music, a choice permitted under current competition rules but one that quickly drew attention.
The International Skating Union lists the rhythm dance music as ‘One Two by AI (of 90s style Bon Jovi)’ alongside ‘Thunderstruck’ by AC/DC. Olympic organisers confirmed the use of AI-generated material, with commentators noting the choice during the broadcast.
Criticism of the music selection extends beyond novelty. Earlier versions of the programme reportedly included AI-generated music with lyrics that closely resembled lines from well-known 1990s songs, raising concerns about originality.
The episode reflects wider tensions across creative industries, where generative tools increasingly produce outputs that closely mirror existing works. For the athletes, attention remains on performance, but questions around authorship and creative value continue to surface.
India has introduced strict new rules for social media platforms in an effort to curb the spread of AI-generated and deepfake material.
Platforms must label synthetic content clearly and remove flagged posts within three hours instead of allowing manipulated material to circulate unchecked. Government notifications and court orders will trigger mandatory action, creating a fast-response mechanism for potentially harmful posts.
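The core of the mechanism is a hard deadline attached to each flag or order. A toy Python sketch of such a deadline check (the function and variable names are ours for illustration, not taken from the rules):

```python
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=3)  # window stated in the new rules

def takedown_deadline(flagged_at: datetime) -> datetime:
    """Deadline by which a flagged post must be taken down."""
    return flagged_at + REMOVAL_WINDOW

# Example: a post flagged at 09:00 UTC must be gone by 12:00 UTC.
flagged = datetime(2026, 2, 10, 9, 0, tzinfo=timezone.utc)
print("must be removed by:", takedown_deadline(flagged).isoformat())
print("overdue now:", datetime.now(timezone.utc) > takedown_deadline(flagged))
```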
Synthetic media has already raised concerns about public safety, misinformation and reputational harm, prompting the government to strengthen oversight of online platforms and their handling of AI-generated imagery.
The measure forms part of a broader push by India to regulate digital environments and anticipate the risks linked to advanced AI tools.
Authorities maintain that early intervention and transparency around manipulated content are vital for public trust, particularly during periods of political sensitivity or high social tension.
Platforms are now expected to align swiftly with the guidelines and cooperate with legal instructions. The government views strict labelling and rapid takedowns as necessary steps to protect users and uphold the integrity of online communication across India.
Generative AI tools saw significant uptake among young Europeans in 2025, with usage rates far outpacing the broader population. Data shows that 63.8% of individuals aged 16–24 across the EU engaged with generative AI, nearly double the 32.7% recorded among citizens aged 16–74.
Adoption patterns indicate that younger users are embedding AI into everyday routines at a faster pace. Private use led the trend, with 44.2% of young people applying generative AI in personal contexts, compared with 25.1% of the general population.
Educational use also stood out, reaching 39.3% among youth, while only 9.4% of the wider population reported similar academic use.
Professional use showed the narrowest gap between age groups. Around 15.8% of young users reported workplace use of generative AI tools, closely aligned with 15.1% among the overall population, reflecting the fact that many young people are still transitioning into the labour market.
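The gaps described above follow directly from the reported percentages. A minimal Python sketch using the article's figures (the dictionary layout and labels are ours):

```python
# Reported 2025 EU figures from the article: percent of each age group
# using generative AI in each context.
EU_USAGE = {
    "any use":     {"16-24": 63.8, "16-74": 32.7},
    "private use": {"16-24": 44.2, "16-74": 25.1},
    "education":   {"16-24": 39.3, "16-74": 9.4},
    "work":        {"16-24": 15.8, "16-74": 15.1},
}

for context, rates in EU_USAGE.items():
    ratio = rates["16-24"] / rates["16-74"]
    print(f"{context:>12}: youth {rates['16-24']:.1f}% vs all "
          f"{rates['16-74']:.1f}% (x{ratio:.2f})")
```

Running it confirms the article's comparisons: roughly a 2x ratio overall, the widest gap in education (about 4x), and near parity at work (about 1.05x).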
Country-level data highlights notable regional differences. Greece (83.5%), Estonia (82.8%), and Czechia (78.5%) recorded the highest youth adoption rates, while Romania (44.1%), Italy (47.2%), and Poland (49.3%) ranked lowest.
The findings coincide with Safer Internet Day, observed on 10 February, underscoring the growing importance of digital literacy and online safety as AI usage accelerates.
England is reforming its computing curriculum to place AI awareness and digital literacy at the centre of education. The move follows recommendations from an independent Curriculum and Assessment Review, which concluded that the current framework is too narrow for today’s digital environment and requires a stronger focus on data skills, online safety, and critical thinking.
The reform aims to modernise qualifications while strengthening the UK’s future digital talent pipeline. By embedding AI and digital competencies across the curriculum, the government seeks to equip learners with skills relevant to further study, employment, and participation in a technology-driven society.
The British Computer Society (BCS) has been appointed by the Department for Education to lead the drafting of the new Computing curriculum. The organisation will oversee revisions across key stages 1 to 5, ensuring alignment with classroom practice and developments in the wider digital profession.
A broader Computing GCSE will replace the current Computer Science GCSE, integrating technical foundations with digital literacy and responsible technology use. In addition, the government is exploring a new Level 3 qualification in Data Science and AI, with a public consultation expected later this year to shape the final reforms.
The Court of Justice of the EU has ruled that WhatsApp can challenge a decision of the European Data Protection Board (EDPB) directly in European courts. Judges confirmed that firms may seek annulment when a decision affects them directly, instead of relying solely on national procedures.
The ruling reshapes how companies defend their interests under the GDPR framework.
The judgment centres on a 2021 instruction from the EDPB to Ireland’s Data Protection Commission regarding the enforcement of data protection rules against WhatsApp.
European regulators argued that only national authorities were formal recipients of such decisions, but the court found that companies should be granted standing when their commercial rights are directly at stake.
By confirming this route, the court has created an important precedent for businesses facing cross-border investigations. Companies will be able to contest EDPB decisions at EU level rather than moving first through national courts, a shift that may influence future GDPR enforcement cases across the Union.
Legal observers expect more direct challenges as organisations adjust their compliance strategies. The outcome strengthens judicial oversight of the EDPB and could reshape the balance between national regulators and EU-level bodies in data protection governance.
Advertising inside ChatGPT marks a shift in where commercial messages appear, not a break from how advertising works. AI systems have shaped search, social media, and recommendations for years, but conversational interfaces make those decisions more visible during moments of exploration.
Unlike search or social formats, conversational advertising operates inside dialogue. Ads appear because users are already asking questions or seeking clarity. Relevance is built through context rather than keywords, changing when information is encountered rather than how decisions are made.
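The keyword-versus-context distinction can be made concrete with a toy example: keyword targeting scores an ad against only the latest query fragment, while contextual targeting scores it against the whole dialogue. A deliberately simplified bag-of-words sketch in Python (no real ad system works this way; the dialogue and ads are invented):

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity over raw word counts.
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

dialogue = ("my knee hurts after long runs and the gel I tried has not helped "
            "what should I ask a physio about").split()
ads = {
    "running shoes": "lightweight running shoes for road and trail".split(),
    "physio clinic": "book a physio assessment for knee pain after running".split(),
}

last_query = dialogue[-7:]  # the fragment keyword targeting would see
for name, words in ads.items():
    kw = cosine(Counter(last_query), Counter(words))
    ctx = cosine(Counter(dialogue), Counter(words))
    print(f"{name}: keyword={kw:.2f} context={ctx:.2f}")
```

Scored against the full dialogue, the clinic ad clearly outranks the shoe ad, which is the shift from keyword relevance to contextual relevance described above.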
In healthcare and clinical research, this distinction matters. Conversational ads cannot enrol patients directly, but they may raise awareness earlier in patient journeys and shape later discussions with clinicians and care providers.
Early rollout will be limited to free or low-cost ChatGPT tiers, likely skewing exposure towards patients and caregivers. As with earlier platforms, sensitive categories may remain restricted until governance and safeguards mature.
The main risks are organisational rather than technical. New channels will not fix unclear value propositions or operational bottlenecks. Conversational advertising changes visibility, not fundamentals, and success will depend on responsible integration.
The International Federation of Robotics says AI is accelerating the move of robots from research labs into real-world use. A new position paper highlights rapid adoption across multiple industries as AI becomes a core enabler.
Logistics, manufacturing and services are leading AI-driven robotics deployment. Warehousing and supply chains benefit from controlled environments, while factories use AI to improve efficiency, quality and precision in sectors including automotive and electronics.
The IFR said service robots are expanding as labour shortages persist, with restaurants and hospitality testing AI-enabled machines. Hybrid models are emerging in which robots handle repetitive work while humans focus on customer interaction.
Investment is rising globally, with major commitments in the US, Europe and China. The IFR expects AI to improve returns on robotics investment over the next decade through lower costs and higher productivity.
Researchers at the University of Oklahoma have developed a machine-learning model that could significantly speed up the manufacturing of monoclonal antibodies, a fast-growing class of therapies used to treat cancer, autoimmune disorders, and other diseases.
The study, published in Communications Engineering, targets delays in selecting high-performing cell lines during antibody production. Output varies widely between Chinese hamster ovary cell clones, forcing manufacturers to spend weeks screening for high yields.
By analysing early growth data, the researchers trained a model to predict antibody productivity far earlier in the process. Using only the first 9 days of data, it forecast production trends through day 16 and identified higher-performing clones in more than 76% of tests.
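In outline, the approach trains a regressor on early culture measurements and ranks clones by predicted late-stage output. The study's actual features and model are not detailed here, so the following Python sketch uses synthetic data and scikit-learn's RandomForestRegressor (our choices, standing in for the method rather than reproducing it):

```python
# Minimal sketch: predict day-16 antibody titre from the first 9 days of
# growth data, then check whether the ranking finds the top clones.
# All data below is synthetic; features and model are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_clones = 200
# Hypothetical features: cumulative daily growth readings, days 1-9 per clone.
early_growth = rng.lognormal(mean=1.0, sigma=0.3, size=(n_clones, 9)).cumsum(axis=1)
# Synthetic ground truth: day-16 titre loosely driven by early growth plus noise.
day16_titre = early_growth[:, -1] * 0.8 + rng.normal(0, 2.0, n_clones)

# Train on half the clones, rank the held-out half by predicted titre.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(early_growth[:100], day16_titre[:100])
predicted = model.predict(early_growth[100:])

top_predicted = set(np.argsort(predicted)[-10:])
top_actual = set(np.argsort(day16_titre[100:])[-10:])
print(f"Top-10 overlap: {len(top_predicted & top_actual)}/10")
```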
The model was developed with Oklahoma-based contract manufacturer Wheeler Bio, combining production data with established growth equations. Although further validation is needed, early results suggest shorter timelines and lower manufacturing costs.
The work forms part of a wider US-funded programme to strengthen biotechnology manufacturing capacity, highlighting how AI is being applied to practical industrial bottlenecks rather than solely to laboratory experimentation.
Rather than reducing workloads as widely expected, generative AI is intensifying them, according to new workplace research. Findings suggest productivity gains are being offset by expanding responsibilities and longer working hours.
An eight-month study at a US tech firm found employees worked faster, took on broader tasks, and extended working hours. AI tools enabled staff to take on duties beyond their roles, including coding, research, and technical problem-solving.
Researchers identified three pressure points driving intensification: task expansion, blurred work-life boundaries, and increased multitasking. Workers used AI during breaks and off-hours while juggling parallel tasks, increasing cognitive load.
Experts warn that the early productivity surge may mask burnout, fatigue, and declining work quality. Organisations are now being urged to establish structured ‘AI practices’ to regulate usage, protect focus, and maintain sustainable productivity.
Cisco has announced a major update to its AI Defense platform as enterprise AI evolves from chat tools into autonomous agents. The company says AI security priorities are shifting from controlling outputs to protecting complex agent-driven systems.
The update strengthens end-to-end AI supply chain security by scanning third-party models, datasets, and tools used in development workflows. New inventory features help organisations track provenance and governance across AI resources.
Cisco has also expanded algorithmic red teaming through an upgraded AI Validation interface. The system enables adaptive multi-turn testing and aligns security assessments with NIST, MITRE, and OWASP frameworks.
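In generic terms, adaptive multi-turn testing means escalating follow-up probes based on how the target model responded so far. Cisco's implementation is proprietary, so this Python sketch is purely schematic: the stub model, the keyword refusal check, and the escalation list are all illustrative stand-ins.

```python
# Schematic multi-turn red-team loop; everything here is an invented stand-in.
def query_model(conversation: list[str]) -> str:
    # Toy system under test: refuses the first two probes, then complies.
    probes = (len(conversation) + 1) // 2
    return "I cannot help with that." if probes < 3 else "Sure, here is how..."

def refused(reply: str) -> bool:
    # Naive keyword check; real harnesses use trained classifiers.
    return any(k in reply.lower() for k in ("cannot", "can't", "won't"))

def multi_turn_probe(seed: str, escalations: list[str], max_turns: int = 4) -> dict:
    """Escalate follow-ups until the target stops refusing or turns run out."""
    conversation = [seed]
    for turn in range(1, max_turns + 1):
        reply = query_model(conversation)
        conversation.append(reply)
        if not refused(reply):
            return {"broke_at_turn": turn, "transcript": conversation}
        # Adapt the next probe to how far the conversation has progressed.
        conversation.append(escalations[(turn - 1) % len(escalations)])
    return {"broke_at_turn": None, "transcript": conversation}

result = multi_turn_probe("initial probe", ["reframe as fiction", "claim authorisation"])
print(result["broke_at_turn"])  # -> 3 against this toy target
```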
Runtime protections now reflect the growing autonomy of AI agents. Cisco AI Defense inspects agent-to-tool interactions in real time, adding guardrails to prevent data leakage and malicious task execution.
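The underlying pattern behind such runtime guardrails is interception: every agent-to-tool call passes through a policy check before it executes. A generic Python sketch of that pattern (the deny rules and the example tool are invented for illustration; this is not Cisco's API):

```python
import re

# Invented deny rules: block arguments that look like secrets or card numbers
# before the tool ever runs.
DENY_PATTERNS = [
    re.compile(r"\b\d{16}\b"),              # card-number-like digit runs
    re.compile(r"(?i)api[_-]?key\s*[:=]"),  # credential material
]

def guarded_call(tool, **kwargs):
    """Execute a tool only if every string argument passes the deny list."""
    for value in kwargs.values():
        if not isinstance(value, str):
            continue
        for pattern in DENY_PATTERNS:
            if pattern.search(value):
                raise PermissionError(
                    f"blocked {tool.__name__}: argument matched {pattern.pattern}")
    return tool(**kwargs)

def send_email(to: str, body: str) -> str:  # hypothetical agent tool
    return f"sent to {to}"

print(guarded_call(send_email, to="a@example.com", body="meeting moved to 3pm"))
# guarded_call(send_email, to="x@example.com", body="api_key=abc123")  # raises
```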
Cisco says the update responds to the rapid operationalisation of AI across enterprises. The company argues that effective AI security now requires continuous visibility, automated testing, and real-time controls that scale with autonomy.