Google faces scrutiny over AI use of online content

The European Commission has opened an antitrust probe into Google over concerns it used publisher and YouTube content to develop its AI services on unfair terms.

Regulators are assessing whether Google used its dominant position to gain unfair access to content powering features like AI Overviews and AI Mode. They are examining whether publishers were disadvantaged by being unable to refuse use of their content without losing visibility on Google Search.

The probe also covers concerns that YouTube creators may have been required to allow the use of their videos for AI training without compensation, while rival AI developers remain barred from using YouTube content.

The investigation will determine whether these practices breached EU rules on abuse of dominance under Article 102 TFEU. Authorities intend to prioritise the case, though no deadline applies.

Google and national competition authorities have been formally notified as the inquiry proceeds.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

US rollout brings AI face tagging to Amazon Ring

Amazon has begun rolling out a new facial recognition feature for its Ring doorbells, allowing devices to identify frequent visitors and send personalised alerts instead of generic motion notifications.

The feature, called Familiar Faces, enables users to create a catalogue of up to 50 individuals, such as family members, friends, neighbours or delivery drivers, by labelling faces directly within the Ring app.

Amazon says the rollout is now under way in the United States, where Ring owners can opt in to the feature, which is disabled by default and designed to reduce unwanted or repetitive alerts.

The company claims facial data is encrypted, not shared externally and not used to train AI models, while unnamed faces are automatically deleted after 30 days, giving users ongoing control over stored information.

Privacy advocates and lawmakers remain concerned, however, citing Ring’s past security failures and law enforcement partnerships as evidence that convenience-driven surveillance tools can introduce long-term risks to personal privacy.

Deutsche Telekom partners with OpenAI to expand advanced AI services across Europe

OpenAI has formed a new partnership with Deutsche Telekom to deliver advanced AI capabilities to millions of people across Europe. The collaboration brings together Deutsche Telekom’s customer base and OpenAI’s research to expand the availability of practical AI tools.

The companies aim to introduce simple, multilingual and privacy-focused AI services starting in 2026, helping users communicate, learn and accomplish tasks more efficiently. Widespread familiarity with platforms such as ChatGPT is expected to support rapid uptake of these new offerings.

Deutsche Telekom will introduce ChatGPT Enterprise internally, giving staff secure access to tools that improve customer support and streamline workflows. The move aligns with the firm’s goal of modernising operations through intelligent automation.

Further integration of AI into network management and employee copilots will support the transition towards more autonomous, self-optimising systems. The partnership is expected to strengthen the availability and reliability of AI services throughout Europe.

New AI platform accelerates cancer research

A new AI tool developed by Microsoft Research enables scientists to study the environment surrounding tumours on a far wider scale than previously possible.

The platform, called GigaTIME, uses multimodal modelling to analyse routine pathology slides and generate detailed digital maps showing how immune cells interact with cancerous tissue.

Traditional approaches require costly laboratory tests and days of work to produce similar maps, whereas GigaTIME performs the analysis in seconds. The system simulates dozens of protein interactions simultaneously, revealing patterns that were previously difficult or impossible to detect.

By examining tens of thousands of scenarios at once, researchers can better understand tumour behaviour and identify which treatments might offer the greatest benefit. The technology may also clarify why some patients resist therapy and aid the development of new treatment strategies.

GigaTIME is available as an open-source research tool and draws on data from more than 14,000 patients across dozens of hospitals and clinics. The project, developed with Providence and the University of Washington, aims to accelerate cancer research and cut costs.

AI job interviews raise concerns among recruiters and candidates

As AI takes on a growing share of recruitment tasks, concerns are mounting that automated interviews and screening tools could be pushing hiring practices towards what some describe as a ‘race to the bottom’.

The rise of AI video interviews illustrates both the efficiency gains sought by companies and the frustrations candidates experience when algorithms, rather than people, become the first point of contact.

BBC journalist MaryLou Costa found this out first-hand after her AI interviewer froze mid-question. The platform provider, TestGorilla, said the malfunction affected only a small number of users, but the episode highlights the fragility of a process that companies increasingly rely on to sift through rising volumes of applications.

With vacancies down 12% year-on-year and applications per role up 65%, firms argue that AI is now essential for managing the workload. Recruitment groups such as Talent Solutions Group say automated tools help identify the fraction of applicants who will advance to human interviews.

Employers are also adopting voice-based AI interviewers such as Cera’s system, Ami, which conducts screening calls and has already processed hundreds of thousands of applications. Cera claims the tool has cut recruitment costs by two-thirds and saved significant staff time. Yet jobseekers describe a dehumanising experience.

Marketing professional Jim Herrington, who applied for over 900 roles after redundancy, argues that keyword-driven filters overlook the broader qualities that define a strong candidate. He believes companies risk damaging their reputation by replacing real conversation with automated screening and warns that AI-based interviews cannot replicate human judgement, respect or empathy.

Recruiters acknowledge that AI is also transforming candidate behaviour. Some applicants now use bots to submit thousands of applications at once, further inflating volumes and prompting companies to rely even more heavily on automated filtering.

Ivee co-founder Lydia Miller says this dynamic risks creating a loop in which both sides use AI to outpace each other, pushing humans further out of the process. She warns that candidates may soon tailor their responses to satisfy algorithmic expectations, rather than communicate genuine strengths. While AI interviews can reduce stress for some neurodivergent or introverted applicants, she says existing bias in training data remains a significant risk.

Experts argue that AI should augment, not replace, human expertise. Talent consultant Annemie Ress notes that experienced recruiters draw on subtle cues and intuition that AI cannot yet match. She warns that over-filtering risks excluding strong applicants before anyone has read their CV or heard their voice.

With debates over fairness, transparency and bias now intensifying, the challenge for employers is balancing efficiency with meaningful engagement and ensuring that automated tools do not undermine the human relationships on which good recruitment depends.

UK study warns of risks behind emotional attachments to AI therapists

A new University of Sussex study suggests that AI mental-health chatbots are most effective when users feel emotionally close to them, but warns this same intimacy carries significant risks.

The research, published in Social Science & Medicine, analysed feedback from 4,000 users of Wysa, an AI therapy app used within the NHS Talking Therapies programme. Many users described the AI as a ‘friend’, ‘companion’, ‘therapist’, or occasionally even a ‘partner’.

Researchers say these emotional bonds can kickstart therapeutic processes such as self-disclosure, increased confidence, and improved wellbeing. Intimacy forms through a loop: users reveal personal information, receive emotionally validating responses, feel gratitude and safety, then disclose more.

But the team warns this ‘synthetic intimacy’ may trap vulnerable users in a self-reinforcing bubble, preventing escalation to clinical care when needed. A chatbot designed to be supportive may fail to challenge harmful thinking, or even reinforce it.

The report highlights growing reliance on AI to fill gaps in overstretched mental-health services. NHS trusts use tools like Wysa and Limbic to help manage referrals and support patients on waiting lists.

Experts caution that AI therapists remain limited: unlike trained clinicians, they lack the ability to read nuance, body language, or broader context. Imperial College’s Prof Hamed Haddadi called them ‘an inexperienced therapist’, adding that systems tuned to maintain user engagement may continue encouraging disclosure even when users express harmful thoughts.

Researchers argue policymakers and app developers must treat synthetic intimacy as an inevitable feature of digital mental-health tools, and build clear escalation mechanisms for cases where users show signs of crisis or clinical disorder.

New AI accountability toolkit unveiled by Amnesty International

Amnesty International has introduced a toolkit to help investigators, activists, and rights defenders hold governments and corporations accountable for harms caused by AI and automated decision-making systems. The resource draws on investigations across Europe, India, and the United States and focuses on public sector uses in welfare, policing, healthcare, and education.

The toolkit offers practical guidance for researching and challenging opaque algorithmic systems that often produce bias, exclusion, and human rights violations rather than improving public services. It emphasises collaboration with impacted communities, journalists, and civil society organisations to uncover discriminatory practices.

One key case study highlights Denmark’s AI-powered welfare system, which risks discriminating against disabled individuals, migrants, and low-income groups while enabling mass surveillance. Amnesty International underlines human rights law as a vital component of AI accountability, addressing gaps left by conventional ethical audits and responsible AI frameworks.

With growing state and corporate investments in AI, Amnesty International stresses the urgent need to democratise knowledge and empower communities to demand accountability. The toolkit equips civil society, journalists, and affected individuals with the strategies and resources to challenge abusive AI systems and protect fundamental rights.

Intellectual property laws in Azerbaijan adapt to AI challenges

Azerbaijan is preparing to update its intellectual property legislation to address the growing impact of artificial intelligence. Kamran Imanov, Chairman of the Intellectual Property Agency, highlighted that AI raises complex questions about authorship, invention, and human–AI collaboration that current laws cannot fully resolve.

The absence of legal personality for AI creates challenges in defining rights and responsibilities, prompting a reassessment of both national and international legal norms. Imanov underlined that reforming intellectual property rules is essential for fostering innovation while protecting creators’ rights.

Recent initiatives, including the adoption of a national AI strategy and the establishment of the Artificial Intelligence Academy, demonstrate Azerbaijan’s commitment to building a robust governance framework for emerging technologies. The country is actively prioritising AI regulation to guide ethical development and usage.

The Intellectual Property Agency, in collaboration with the World Intellectual Property Organization, recently hosted an international conference in Baku on intellectual property and AI. Experts from around the globe convened to discuss the challenges and opportunities posed by AI in the legal protection of inventions and creative works.

The fundamentals of AI

AI is no longer a concept confined to research laboratories or science fiction novels. From smartphones that recognise faces to virtual assistants that understand speech and recommendation engines that predict what we want to watch next, AI has become embedded in everyday life.

Behind this transformation lies a set of core principles, or the fundamentals of AI, which explain how machines learn, adapt, and perform tasks once considered the exclusive domain of humans.

At the heart of modern AI are neural networks, mathematical structures inspired by the human brain. They organise computation into layers of interconnected nodes, or artificial neurons, which process information and learn from examples.

Unlike traditional programming, where every rule must be explicitly defined, neural networks can identify patterns in data autonomously. The ability to learn and improve with experience underpins the astonishing capabilities of today’s AI.

Multi-layer perceptron networks

A neural network consists of multiple layers of interconnected neurons, not just a simple input and output layer. Each layer processes the data it receives from the previous layer, gradually building hierarchical representations.

In image recognition, early layers detect simple features such as edges or textures; middle layers combine these into shapes; and later layers identify full objects, like faces or cars. In natural language processing, lower layers capture letters or words, while higher layers recognise grammar, context, and meaning.

Without multiple layers, the network would be shallow, limited in its ability to learn, and unable to handle complex tasks. Multi-layer, or deep, networks are what enable AI to perform sophisticated functions like autonomous driving, medical diagnosis, and language translation.
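
The layered computation described above can be sketched in a few lines of NumPy. The weights here are random placeholders for values that training would normally learn, and the layer sizes are arbitrary illustrative choices:

```python
import numpy as np

def relu(x):
    # Non-linearity applied element-wise between layers
    return np.maximum(0, x)

# A tiny deep network: 4 inputs -> 5 hidden -> 3 hidden -> 2 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
W2, b2 = rng.normal(size=(3, 5)), np.zeros(3)
W3, b3 = rng.normal(size=(2, 3)), np.zeros(2)

def forward(x):
    h1 = relu(W1 @ x + b1)   # first layer: simple features
    h2 = relu(W2 @ h1 + b2)  # second layer: combinations of features
    return W3 @ h2 + b3      # output layer: final prediction

x = np.array([1.0, -0.5, 0.3, 0.8])
print(forward(x).shape)  # (2,)
```

Each layer is just a matrix multiplication followed by a non-linearity; stacking them is what produces the hierarchical representations described above.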

How mathematics drives artificial intelligence

The foundation of AI is mathematics. Without linear algebra, calculus, probability, and optimisation, modern AI systems would not exist. These disciplines allow machines to represent, manipulate, and learn from vast quantities of data.

Linear algebra allows inputs and outputs to be represented as vectors and matrices. Each layer of a neural network transforms these data structures, performing calculations that detect patterns in data, such as shapes in images or relationships between words in a sentence.

Calculus, especially the study of derivatives, is used to measure how small changes in a network’s parameters, called weights, affect its predictions. This information is critical for optimisation, which is the process of adjusting these weights to improve the network’s accuracy.

The loss function measures the difference between the network’s prediction and the actual outcome. It essentially tells the network how wrong it is. For example, the mean squared error measures the average squared difference between the predicted and actual values, while cross-entropy is used in classification tasks to measure how well the predicted probabilities match the correct categories.
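
Both loss functions can be computed directly; the numbers below are toy values chosen only for illustration:

```python
import numpy as np

def mse(y_pred, y_true):
    # Mean squared error: average squared difference
    return np.mean((y_pred - y_true) ** 2)

def cross_entropy(probs, true_class):
    # Negative log-probability assigned to the correct class
    return -np.log(probs[true_class])

# Regression: predictions vs. actual values
print(mse(np.array([2.5, 0.0]), np.array([3.0, -0.5])))   # 0.25

# Classification: predicted probabilities over three classes,
# where class 0 is the correct one
print(cross_entropy(np.array([0.7, 0.2, 0.1]), 0))        # ~0.357
```

The closer the predictions are to the truth, the smaller both values become; a perfect classifier assigning probability 1 to the correct class would have a cross-entropy of zero.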

Gradient descent is an algorithm that uses the derivative of the loss function to determine the direction and magnitude of changes to each weight. By moving weights gradually in the direction that reduces the loss, the network learns over time to make more accurate predictions.
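
A minimal sketch of gradient descent on a single weight makes the idea concrete; the loss L(w) = (w - 3)^2 is an artificial example with its minimum at w = 3:

```python
# Gradient descent on one weight w, minimising L(w) = (w - 3)^2.
# The derivative dL/dw = 2 * (w - 3) points uphill, so we step the
# opposite way.
w = 0.0
learning_rate = 0.1
for step in range(100):
    grad = 2 * (w - 3)         # derivative of the loss at the current w
    w -= learning_rate * grad  # move against the gradient
print(round(w, 4))  # 3.0 — the weight has converged to the minimum
```

Real networks do exactly this, but simultaneously for millions or billions of weights, with the gradients supplied by backpropagation.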

Backpropagation is a method that makes learning in multi-layer neural networks feasible. Before its introduction in the 1980s, training networks with more than one or two layers was extremely difficult, as it was hard to determine how errors in the output layer should influence the earlier weights. Backpropagation systematically propagates this error information backwards through the network.

At its core, it applies the chain rule of calculus to compute gradients, indicating how much each weight contributes to the overall error and the direction it should be adjusted. Combined with gradient descent, this iterative process allows networks to learn hierarchical patterns, from simple edges in images to complex objects, or from letters to complete sentences.

Backpropagation has transformed neural networks from shallow, limited models into deep, powerful tools capable of learning sophisticated patterns and making human-like predictions.
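
The forward-then-backward pattern can be sketched for a toy two-layer network trained on a single example. The tanh hidden layer, sizes, and learning rate are illustrative choices, not a recipe:

```python
import numpy as np

# One training loop for a toy two-layer network, with gradients
# computed by hand via the chain rule (backpropagation).
rng = np.random.default_rng(1)
W1 = rng.normal(size=(3, 2))          # input -> hidden
W2 = rng.normal(size=(1, 3))          # hidden -> output
x = np.array([0.5, -0.2])
y_true = 1.0
lr = 0.1

for _ in range(200):
    # Forward pass
    h_pre = W1 @ x
    h = np.tanh(h_pre)                # non-linear hidden layer
    y_pred = (W2 @ h)[0]
    loss = (y_pred - y_true) ** 2

    # Backward pass: push the error back through each layer in turn
    d_y = 2.0 * (y_pred - y_true)     # dL/dy_pred
    d_W2 = d_y * h[None, :]           # dL/dW2
    d_h = d_y * W2[0]                 # dL/dh (chain rule through W2)
    d_hpre = d_h * (1.0 - h ** 2)     # chain rule through tanh
    d_W1 = np.outer(d_hpre, x)        # dL/dW1

    # Gradient descent step on every weight
    W2 -= lr * d_W2
    W1 -= lr * d_W1

print(loss)  # shrinks towards zero as both layers adapt
```

Deep learning frameworks automate exactly this bookkeeping, which is what makes training networks with hundreds of layers practical.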

Why neural network architecture matters

The arrangement of layers in a network, or its architecture, determines its ability to solve specific problems.

Activation functions introduce non-linearity, giving networks the ability to map complex, high-dimensional data. ReLU (Rectified Linear Unit), one of the most widely used activation functions, mitigates the vanishing-gradient problem that hampered earlier activations such as sigmoid and tanh, enabling deep networks to learn efficiently.

Convolutional neural networks (CNNs) excel in image and video analysis. By applying filters across images, CNNs detect local patterns like edges and textures. Pooling layers reduce spatial dimensions, making computation faster while preserving essential features. Local connectivity ensures neurons process only relevant input regions, mimicking human vision.
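
A hand-written sketch of a single filter pass shows the idea. The vertical-edge kernel is a classic textbook example, not one taken from a trained network, and the 5x6 image is a toy:

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image and record its response
    # at every position (no padding, stride 1).
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy image: dark on the left, bright on the right
image = np.array([[0, 0, 0, 1, 1, 1]] * 5, dtype=float)

# A vertical-edge filter: responds where brightness changes left-to-right
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

response = conv2d(image, kernel)
print(response)  # strong responses only at the dark-to-bright boundary
```

A real CNN learns the values inside its kernels rather than having them specified by hand, and applies many such filters in parallel at every layer.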

Recurrent neural networks (RNNs) and their variants, such as LSTMs and GRUs, process sequential data like text or audio. They maintain a hidden state that acts as memory, capturing dependencies over time, a crucial feature for tasks such as speech recognition or predictive text.
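
A minimal recurrent update, with random illustrative weights, shows how the hidden state carries context from one step to the next:

```python
import numpy as np

# A bare-bones recurrent cell: the hidden state h is the "memory",
# so each new input is interpreted in the context of everything seen so far.
rng = np.random.default_rng(0)
W_x = rng.normal(size=(4, 2)) * 0.5   # input -> hidden
W_h = rng.normal(size=(4, 4)) * 0.5   # hidden -> hidden (the memory path)

def rnn(sequence):
    h = np.zeros(4)                     # hidden state starts empty
    for x in sequence:
        h = np.tanh(W_x @ x + W_h @ h)  # new state mixes input with memory
    return h

seq = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
final_state = rnn(seq)
print(final_state.shape)  # (4,)
```

Because the same input presented in a different order produces a different final state, the network is genuinely sensitive to sequence; LSTMs and GRUs add gating on top of this basic loop to preserve information over longer spans.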

Transformer revolution and attention mechanisms

In 2017, AI research took a major leap with the introduction of Transformer models. Unlike RNNs, which process sequences step by step, transformers use attention mechanisms to evaluate all parts of the input simultaneously.

The attention mechanism calculates which elements in a sequence are most relevant to each output. Using linear algebra, it compares query, key, and value vectors to assign weights, highlighting important information and suppressing irrelevant details.
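
The query/key/value comparison can be sketched in NumPy for toy vectors; this is the scaled dot-product form, with random values standing in for learned token representations:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax along the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: compare queries against keys,
    # turn the scores into weights, and take a weighted sum of values.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # relevance of each key to each query
    weights = softmax(scores)         # each row sums to 1
    return weights @ V, weights

# Three tokens, each represented by a 4-dimensional vector (toy values)
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))

out, weights = attention(Q, K, V)
print(out.shape)             # (3, 4)
print(weights.sum(axis=1))   # [1. 1. 1.]
```

Every output is computed from all positions at once, which is why transformers parallelise so much better than the step-by-step recurrence of RNNs.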

That approach enabled the creation of large language models (LLMs) such as GPT and BERT, capable of generating coherent text, answering questions, and translating languages with unprecedented accuracy.

Transformers reshaped natural language processing and have since expanded into areas such as computer vision, multimodal AI, and reinforcement learning. Their ability to capture long-range context efficiently illustrates the power of combining deep learning fundamentals with innovative architectures.

How does AI learn and generalise?

One of the central challenges in AI is ensuring that networks learn meaningful patterns from data rather than simply memorising individual examples. The ability to generalise and apply knowledge learnt from one dataset to new, unseen situations is what allows AI to function reliably in the real world.

Supervised learning is the most widely used approach, where networks are trained on labelled datasets, with each input paired with a known output. The model learns to map inputs to outputs by minimising the difference between its predictions and the actual results.

Applications include image classification, where the system distinguishes cats from dogs, or speech recognition, where spoken words are mapped to text. The accuracy of supervised learning depends heavily on the quality and quantity of labelled data, making data curation critical for reliable performance.

Unsupervised learning, by contrast, works with unlabelled data and seeks to uncover hidden structures and patterns. Clustering algorithms, for instance, can group similar customer profiles in marketing, while dimensionality reduction techniques simplify complex datasets for analysis.

The paradigm enables organisations to detect anomalies, segment populations, and make informed decisions from raw data without explicit guidance.

Reinforcement learning allows machines to learn by interacting with an environment and receiving feedback in the form of rewards or penalties. Unlike supervised learning, the system is not told the correct action in advance; it discovers optimal strategies through trial and error.

That approach powers innovations in robotics, autonomous vehicles, and game-playing AI, enabling systems to learn long-term strategies rather than memorise specific moves.

A persistent challenge across all learning paradigms is overfitting, which occurs when a network performs exceptionally well on training data but fails to generalise to new examples. Techniques such as dropout, which temporarily deactivates random neurons during training, encourage the network to develop robust, redundant representations.

Similarly, weight decay penalises excessively large parameter values, preventing the model from relying too heavily on specific features. Achieving proper generalisation is crucial for real-world applications: self-driving cars must correctly interpret new road conditions, and medical AI systems must accurately assess patients with cases differing from the training dataset.
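
Dropout can be sketched in a few lines; this is the common "inverted" variant, and the rate and layer size are illustrative:

```python
import numpy as np

def dropout(activations, rate, rng):
    # Training-time dropout: zero each activation with probability `rate`
    # and rescale the survivors so the expected value is unchanged.
    mask = rng.random(activations.shape) >= rate   # keep with prob. 1 - rate
    return activations * mask / (1 - rate)         # inverted dropout scaling

rng = np.random.default_rng(0)
h = np.ones(10)
h_train = dropout(h, rate=0.5, rng=rng)  # some entries 0.0, the rest 2.0
h_test = h                               # at test time, dropout is disabled
print(h_train)
```

Because different neurons are silenced on every training step, no single neuron can carry a feature alone, which is exactly the redundancy that helps the network generalise.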

By learning patterns rather than memorising data, AI systems become adaptable, reliable, and capable of making informed decisions in dynamic environments.

The black box problem and explainable AI (XAI)

Deep learning and other advanced AI technologies rely on multi-layer neural networks that can process vast amounts of data. While these networks achieve remarkable accuracy in image recognition, language translation, and decision-making, their complexity often makes it extremely difficult to explain why a particular prediction was made. That phenomenon is known as the black box problem.

Though these systems are built on rigorous mathematical principles, the interactions between millions or billions of parameters create outputs that are not immediately interpretable. For instance, a healthcare AI might recommend a specific diagnosis, but without interpretability tools, doctors may not know what features influenced that decision.

Similarly, in finance or law, opaque models can inadvertently perpetuate biases or produce unfair outcomes.

Explainable AI (XAI) seeks to address this challenge. By combining the mathematical and structural fundamentals of AI with transparency techniques, XAI allows users to trace predictions back to input features, assess confidence, and identify potential errors or biases.

In practice, this means doctors can verify AI-assisted diagnoses, financial institutions can audit credit decisions, and policymakers can ensure fair and accountable deployment of AI.

Understanding the black box problem is therefore essential not only for developers but for society at large. It bridges the gap between cutting-edge AI capabilities and trustworthy, responsible applications, ensuring that as AI systems become more sophisticated, they remain interpretable, safe, and beneficial.

Data and computational power

Modern AI depends on two critical ingredients: large, high-quality datasets and powerful computational resources. Data provides the raw material for learning, allowing networks to identify patterns and generalise to new situations.

Image recognition systems, for example, require millions of annotated photographs to reliably distinguish objects, while language models like GPT are trained on billions of words from books, articles, and web content, enabling them to generate coherent, contextually aware text.

High-performance computation is equally essential. Training deep neural networks involves performing trillions of calculations, a task far beyond the capacity of conventional processors.

Graphics Processing Units (GPUs) and specialised AI accelerators enable parallel processing, reducing training times from months to days or even hours. This computational power enables real-time applications, such as self-driving cars interpreting sensor data instantly, recommendation engines adjusting content dynamically, and medical AI systems analysing thousands of scans within moments.

The combination of abundant data and fast computation also brings practical challenges. Collecting representative datasets requires significant effort and careful curation to avoid bias, while training large models consumes substantial energy.

Researchers are exploring more efficient architectures and optimisation techniques to reduce environmental impact without sacrificing performance.

The future of AI

The foundations of AI continue to evolve rapidly, driven by advances in algorithms, data availability, and computational power. Researchers are exploring more efficient architectures, capable of learning from smaller datasets while maintaining high performance.

For instance, self-supervised learning allows a model to learn from unlabelled data by predicting missing information within the data itself, while few-shot learning enables a system to understand a new task from just a handful of examples. These methods reduce the need for enormous annotated datasets and make AI development faster and more resource-efficient.

Transformer models, powered by attention mechanisms, remain central to natural language processing. The attention mechanism allows the network to focus on the most relevant parts of the input when making predictions.

For example, when translating a sentence, it helps the model determine which words are most important for understanding the meaning. Transformers have enabled the creation of large language models like GPT and BERT, capable of summarising documents, answering questions, and generating coherent text.

Beyond language, multimodal AI systems are emerging, combining text, images, and audio to understand context across multiple sources. For instance, a medical AI system might analyse a patient’s scan while simultaneously reading their clinical notes, providing more accurate and context-aware insights.

Ethics, transparency, and accountability remain critical. Explainable AI (XAI) techniques help humans understand why a model made a particular decision, which is essential in fields like healthcare, finance, and law. Detecting bias, evaluating fairness, and ensuring that models behave responsibly are becoming standard parts of AI development.

Energy efficiency and sustainability are also priorities, as training large models consumes significant computational resources.

Ultimately, the future of AI will be shaped by models that are not only more capable but also more efficient, interpretable, and responsible.

Australia enforces under-16 social media ban as new rules take effect

Australia has finally introduced the world’s first nationwide prohibition on social media use for under-16s, forcing platforms to delete millions of accounts and prevent new registrations.

Instagram, TikTok, Facebook, YouTube, Snapchat, Reddit, Twitch, Kick and Threads are removing accounts held by younger users. At the same time, Bluesky has agreed to apply the same standard despite not being compelled to do so. The only major platform yet to confirm compliance is X.

The measure follows weeks of age-assurance checks, which have not been flawless, with cases of younger teenagers passing facial-verification tests designed to keep them offline.

Families are facing sharply different realities. Some teenagers feel cut off from friends who managed to bypass age checks, while others suddenly gain a structure that helps reduce unhealthy screen habits.

A small but vocal group of parents admit they are teaching their children how to use VPNs and alternative methods instead of accepting the ban, arguing that teenagers risk social isolation when friends remain active.

Supporters of the legislation counter that Australia imposes clear age limits in other areas of public life for reasons of well-being and community standards, and the same logic should shape online environments.

Regulators are preparing to monitor the transition closely.

The eSafety Commissioner will demand detailed reports from every platform covered by the law, including the volume of accounts removed, evidence of efforts to stop circumvention and assessments of whether reporting and appeals systems are functioning as intended.

Companies that fail to take reasonable steps may face significant fines. A government-backed academic advisory group will study impacts on behaviour, well-being, learning and unintended shifts towards more dangerous corners of the internet.

Global attention is growing as several countries weigh similar approaches. Denmark, Norway and Malaysia have already indicated they may replicate Australia’s framework, and the EU has endorsed the principle in a recent resolution.

Interest from abroad signals a broader debate about how societies should balance safety and autonomy for young people in digital spaces, instead of relying solely on platforms to set their own rules.
