Snowflake launches AI platform for Japan enterprises

Japan’s businesses are set to gain new AI capabilities with the arrival of Snowflake Intelligence, a platform designed to let employees ask complex data questions using natural language.

The tool integrates structured and unstructured data into a single environment, enabling faster and more transparent decision-making.

Early adoption worldwide has seen more than 15,000 AI agents deployed in recent months, reflecting growing demand for enterprise AI. Snowflake Intelligence builds on this momentum by offering rapid text-to-SQL responses, advanced agent management and strong governance controls.

Japanese enterprises are expected to benefit from streamlined workflows, increased productivity, and improved competitiveness as AI agents uncover patterns across various sectors, including finance and manufacturing.

Snowflake aims to showcase the platform’s full capabilities during its upcoming BUILD event in December while promoting broader adoption of data-driven innovation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Salesforce pushes unified data model for safer AI agents

Salesforce and Informatica are promoting a shared data framework designed to give AI agents a deeper understanding of the business. Salesforce states that many projects fail due to context gaps, which leave agents unable to interpret enterprise data accurately.

Informatica adds master data management and a broad catalogue that defines core business entities across systems. Data lineage tools track how information moves through an organisation, helping agents judge reliability and freshness.

Data 360 merges these metadata layers and signals into a unified context interface without copying enterprise datasets. Salesforce claims that the approach provides Agentforce with a more comprehensive view of customers, processes, and policies, thereby supporting safer automation.

Wyndham and Yamaha representatives, quoted by Salesforce, say the combined stack helps reduce data inconsistency and accelerate decision-making. Both organisations report improved access to governed and harmonised records that support larger AI strategies.


Launch of Qai advances Qatar’s AI strategy globally

Qatar has launched Qai, a new national AI company designed to strengthen the country’s digital capabilities and accelerate sustainable development. The initiative supports Qatar’s plans to build a knowledge-based economy and deepen economic diversification under Qatar National Vision 2030.

The company will develop, operate and invest in AI infrastructure both domestically and internationally, offering high-performance computing and secure tools for deploying scalable AI systems. Its work aims to drive innovation while ensuring that governments, companies and researchers can adopt advanced technologies with confidence.

Qai will collaborate closely with research institutions, policymakers and global partners to expand Qatar’s role in data-driven industries. The organisation promotes an approach to AI that prioritises societal benefit, with leaders stressing that people and communities must remain central to technological progress.


The fundamentals of AI

AI is no longer a concept confined to research laboratories or science fiction novels. From smartphones that recognise faces to virtual assistants that understand speech and recommendation engines that predict what we want to watch next, AI has become embedded in everyday life.

Behind this transformation lies a set of core principles, or the fundamentals of AI, which explain how machines learn, adapt, and perform tasks once considered the exclusive domain of humans.

At the heart of modern AI are neural networks, mathematical structures inspired by the human brain. They organise computation into layers of interconnected nodes, or artificial neurons, which process information and learn from examples.

Unlike traditional programming, where every rule must be explicitly defined, neural networks can identify patterns in data autonomously. The ability to learn and improve with experience underpins the astonishing capabilities of today’s AI.

Multi-layer perceptron networks

A neural network consists of multiple layers of interconnected neurons, not just a simple input and output layer. Each layer processes the data it receives from the previous layer, gradually building hierarchical representations.

In image recognition, early layers detect simple features, such as edges or textures, middle layers combine these into shapes, and later layers identify full objects, like faces or cars. In natural language processing, lower layers capture letters or words, while higher layers recognise grammar, context, and meaning.

Without multiple layers, the network would be shallow, limited in its ability to learn, and unable to handle complex tasks. Multi-layer, or deep, networks are what enable AI to perform sophisticated functions like autonomous driving, medical diagnosis, and language translation.
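The layered flow described above can be sketched in a few lines of plain Python. The weights here are illustrative hand-picked values, not a trained model:

```python
def relu(x):
    # Rectified linear unit: negative signals are cut to zero.
    return max(0.0, x)

def dense(inputs, weights, biases):
    # One fully connected layer: each neuron computes a weighted sum
    # of all inputs plus a bias, then applies the activation.
    return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# A tiny 3 -> 2 -> 1 network with hand-picked weights.
hidden = dense([1.0, 2.0, 3.0],
               weights=[[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]],
               biases=[0.0, 0.1])
output = dense(hidden, weights=[[1.0, -1.0]], biases=[0.0])
```

Stacking more `dense` calls is all it takes to deepen the network; training is what sets the weights, as the following sections explain.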

How mathematics drives artificial intelligence


The foundation of AI is mathematics. Without linear algebra, calculus, probability, and optimisation, modern AI systems would not exist. These disciplines allow machines to represent, manipulate, and learn from vast quantities of data.

Linear algebra allows inputs and outputs to be represented as vectors and matrices. Each layer of a neural network transforms these data structures, performing calculations that detect patterns in data, such as shapes in images or relationships between words in a sentence.

Calculus, especially the study of derivatives, is used to measure how small changes in a network’s parameters, called weights, affect its predictions. This information is critical for optimisation, which is the process of adjusting these weights to improve the network’s accuracy.

The loss function measures the difference between the network’s prediction and the actual outcome. It essentially tells the network how wrong it is. For example, the mean squared error measures the average squared difference between the predicted and actual values, while cross-entropy is used in classification tasks to measure how well the predicted probabilities match the correct categories.
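Both losses are simple to state in code. This is a toy sketch of the two formulas, not a library implementation:

```python
import math

def mse(predicted, actual):
    # Mean squared error: average squared gap between prediction and truth.
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)

def cross_entropy(predicted_probs, true_class):
    # Negative log-probability the model assigned to the correct class:
    # confident, correct predictions give a loss near zero.
    return -math.log(predicted_probs[true_class])

print(mse([2.5, 0.0], [3.0, -1.0]))        # (0.25 + 1.0) / 2 = 0.625
print(cross_entropy([0.7, 0.2, 0.1], 0))   # -ln(0.7), roughly 0.357
```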

Gradient descent is an algorithm that uses the derivative of the loss function to determine the direction and magnitude of changes to each weight. By moving weights gradually in the direction that reduces the loss, the network learns over time to make more accurate predictions.
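The update rule can be shown on a one-parameter loss whose minimum is known in advance, so we can check the descent actually gets there:

```python
# Gradient descent on a one-parameter "loss" L(w) = (w - 3)^2,
# whose derivative is dL/dw = 2 * (w - 3). The minimum sits at w = 3.
w = 0.0
learning_rate = 0.1
for _ in range(100):
    gradient = 2 * (w - 3)
    w -= learning_rate * gradient   # step against the gradient
print(round(w, 4))  # converges towards 3.0
```

Real networks repeat exactly this step, only with millions of weights updated at once.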

Backpropagation is a method that makes learning in multi-layer neural networks feasible. Before its introduction in the 1980s, training networks with more than one or two layers was extremely difficult, as it was hard to determine how errors in the output layer should influence the earlier weights. Backpropagation systematically propagates this error information backwards through the network.

At its core, it applies the chain rule of calculus to compute gradients, indicating how much each weight contributes to the overall error and the direction it should be adjusted. Combined with gradient descent, this iterative process allows networks to learn hierarchical patterns, from simple edges in images to complex objects, or from letters to complete sentences.

Backpropagation has transformed neural networks from shallow, limited models into deep, powerful tools capable of learning sophisticated patterns and making human-like predictions.
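The chain rule at work can be hand-rolled for a two-layer network with scalar weights, so every gradient is a single number (toy values, not a real training setup):

```python
# Tiny two-layer network: prediction = w2 * (w1 * x),
# trained to hit a target with squared-error loss.
x, target = 2.0, 10.0
w1, w2 = 1.0, 1.5
lr = 0.01

for _ in range(200):
    hidden = w1 * x
    pred = w2 * hidden
    # dLoss/dpred, then chain the error backwards through each layer.
    d_pred = 2 * (pred - target)
    d_w2 = d_pred * hidden          # dLoss/dw2
    d_hidden = d_pred * w2          # error propagated to the hidden layer
    d_w1 = d_hidden * x             # dLoss/dw1
    w1 -= lr * d_w1                 # gradient-descent updates
    w2 -= lr * d_w2

print(round(w2 * w1 * x, 3))  # prediction approaches the target 10.0
```

The `d_hidden` line is backpropagation in miniature: the output error is multiplied by the downstream weight to tell the earlier layer how it contributed.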

Why neural network architecture matters


The arrangement of layers in a network, or its architecture, determines its ability to solve specific problems.

Activation functions introduce non-linearity, giving networks the ability to map complex, high-dimensional data. ReLU (Rectified Linear Unit), one of the most widely used activation functions, mitigates the vanishing-gradient problem that once stalled training and enables deep networks to learn efficiently.

Convolutional neural networks (CNNs) excel in image and video analysis. By applying filters across images, CNNs detect local patterns like edges and textures. Pooling layers reduce spatial dimensions, making computation faster while preserving essential features. Local connectivity ensures neurons process only relevant input regions, mimicking human vision.
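The filtering idea can be sketched as a plain convolution over a tiny grayscale image. The kernel here is a hand-made vertical-edge detector; in a real CNN the kernel values are learnt:

```python
def convolve(image, kernel):
    # Slide a small filter over the image; each output pixel is the
    # weighted sum of the patch under the filter (valid padding, stride 1).
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(kernel[i][j] * image[r + i][c + j]
                 for i in range(kh) for j in range(kw))
             for c in range(out_w)]
            for r in range(out_h)]

# Bright on the left, dark on the right: the filter's response
# peaks exactly where the brightness changes.
image = [[9, 9, 0, 0],
         [9, 9, 0, 0],
         [9, 9, 0, 0]]
edge_kernel = [[1, -1],
               [1, -1]]
print(convolve(image, edge_kernel))  # [[0, 18, 0], [0, 18, 0]]
```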

Recurrent neural networks (RNNs) and their variants, such as LSTMs and GRUs, process sequential data like text or audio. They maintain a hidden state that acts as memory, capturing dependencies over time, a crucial feature for tasks such as speech recognition or predictive text.
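A single-unit sketch of the recurrence shows how the hidden state carries memory forward (scalar weights chosen purely for illustration; LSTMs and GRUs add gating on top of this idea):

```python
import math

def rnn_step(hidden, x, w_h, w_x, b):
    # One recurrent step: the new hidden state mixes the previous
    # state with the current input, squashed by tanh.
    return math.tanh(w_h * hidden + w_x * x + b)

# Process a short sequence; each step sees the input AND a summary
# of everything that came before it.
hidden = 0.0
for x in [1.0, 0.5, -0.5]:
    hidden = rnn_step(hidden, x, w_h=0.8, w_x=0.5, b=0.0)
```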

Transformer revolution and attention mechanisms

In 2017, AI research took a major leap with the introduction of Transformer models. Unlike RNNs, which process sequences step by step, transformers use attention mechanisms to evaluate all parts of the input simultaneously.

The attention mechanism calculates which elements in a sequence are most relevant to each output. Using linear algebra, it compares query, key, and value vectors to assign weights, highlighting important information and suppressing irrelevant details.
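A toy, single-query version of scaled dot-product attention can be written in plain Python (the vectors and dimensions are made up for illustration):

```python
import math

def softmax(scores):
    # Turn raw scores into weights that are positive and sum to one.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Score each key against the query (scaled dot product), normalise
    # with softmax, then return the weighted average of the values.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query matches the first key most closely, so the output
# leans towards the first value vector.
out = attention(query=[1.0, 0.0],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
```

Transformers run this for every position at once, in multiple "heads", which is what lets them weigh a whole sequence simultaneously.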

That approach enabled the creation of large language models (LLMs) such as GPT and BERT, capable of generating coherent text, answering questions, and translating languages with unprecedented accuracy.

Transformers reshaped natural language processing and have since expanded into areas such as computer vision, multimodal AI, and reinforcement learning. Their ability to capture long-range context efficiently illustrates the power of combining deep learning fundamentals with innovative architectures.

How does AI learn and generalise?


One of the central challenges in AI is ensuring that networks learn meaningful patterns from data rather than simply memorising individual examples. The ability to generalise and apply knowledge learnt from one dataset to new, unseen situations is what allows AI to function reliably in the real world.

Supervised learning is the most widely used approach, where networks are trained on labelled datasets, with each input paired with a known output. The model learns to map inputs to outputs by minimising the difference between its predictions and the actual results.

Applications include image classification, where the system distinguishes cats from dogs, or speech recognition, where spoken words are mapped to text. The accuracy of supervised learning depends heavily on the quality and quantity of labelled data, making data curation critical for reliable performance.

Unsupervised learning, by contrast, works with unlabelled data and seeks to uncover hidden structures and patterns. Clustering algorithms, for instance, can group similar customer profiles in marketing, while dimensionality reduction techniques simplify complex datasets for analysis.

The paradigm enables organisations to detect anomalies, segment populations, and make informed decisions from raw data without explicit guidance.
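The grouping idea can be illustrated with one assignment step of a k-means-style clustering over toy 2-D points (a full algorithm would then recompute the centroids and repeat):

```python
def assign_to_nearest(points, centroids):
    # Label each point with the index of its closest centroid.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(centroids)),
                key=lambda i: sq_dist(p, centroids[i]))
            for p in points]

# Two obvious groups of points and two starting centroids:
# no labels are given, yet the structure is recovered.
points = [[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [7.8, 8.2]]
labels = assign_to_nearest(points, centroids=[[1.0, 1.0], [8.0, 8.0]])
print(labels)  # [0, 0, 1, 1]
```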

Reinforcement learning allows machines to learn by interacting with an environment and receiving feedback in the form of rewards or penalties. Unlike supervised learning, the system is not told the correct action in advance; it discovers optimal strategies through trial and error.

That approach powers innovations in robotics, autonomous vehicles, and game-playing AI, enabling systems to learn long-term strategies rather than memorise specific moves.

A persistent challenge across all learning paradigms is overfitting, which occurs when a network performs exceptionally well on training data but fails to generalise to new examples. Techniques such as dropout, which temporarily deactivates random neurons during training, encourage the network to develop robust, redundant representations.

Similarly, weight decay penalises excessively large parameter values, preventing the model from relying too heavily on specific features. Achieving proper generalisation is crucial for real-world applications: self-driving cars must correctly interpret new road conditions, and medical AI systems must accurately assess patients with cases differing from the training dataset.
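A sketch of "inverted" dropout, one common way the technique is implemented (the rate and values are illustrative):

```python
import random

def dropout(activations, rate, training=True):
    # During training, zero each activation with probability `rate` and
    # scale the survivors so the expected sum stays the same ("inverted
    # dropout"). At inference time the layer is left untouched.
    if not training:
        return activations
    keep = 1.0 - rate
    return [a / keep if random.random() < keep else 0.0
            for a in activations]

random.seed(0)
print(dropout([1.0, 1.0, 1.0, 1.0], rate=0.5))
```

Because a different random subset of neurons survives each step, no single neuron can be relied on exclusively, which is what pushes the network towards redundant representations.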

By learning patterns rather than memorising data, AI systems become adaptable, reliable, and capable of making informed decisions in dynamic environments.

The black box problem and explainable AI (XAI)


Deep learning and other advanced AI technologies rely on multi-layer neural networks that can process vast amounts of data. While these networks achieve remarkable accuracy in image recognition, language translation, and decision-making, their complexity often makes it extremely difficult to explain why a particular prediction was made. That phenomenon is known as the black box problem.

Though these systems are built on rigorous mathematical principles, the interactions between millions or billions of parameters create outputs that are not immediately interpretable. For instance, a healthcare AI might recommend a specific diagnosis, but without interpretability tools, doctors may not know what features influenced that decision.

Similarly, in finance or law, opaque models can inadvertently perpetuate biases or produce unfair outcomes.

Explainable AI (XAI) seeks to address this challenge. By combining the mathematical and structural fundamentals of AI with transparency techniques, XAI allows users to trace predictions back to input features, assess confidence, and identify potential errors or biases.

In practice, this means doctors can verify AI-assisted diagnoses, financial institutions can audit credit decisions, and policymakers can ensure fair and accountable deployment of AI.

Understanding the black box problem is therefore essential not only for developers but for society at large. It bridges the gap between cutting-edge AI capabilities and trustworthy, responsible applications, ensuring that as AI systems become more sophisticated, they remain interpretable, safe, and beneficial.

Data and computational power


Modern AI depends on two critical ingredients: large, high-quality datasets and powerful computational resources. Data provides the raw material for learning, allowing networks to identify patterns and generalise to new situations.

Image recognition systems, for example, require millions of annotated photographs to reliably distinguish objects, while language models like GPT are trained on billions of words from books, articles, and web content, enabling them to generate coherent, contextually aware text.

High-performance computation is equally essential. Training deep neural networks involves performing trillions of calculations, a task far beyond the capacity of conventional processors.

Graphics Processing Units (GPUs) and specialised AI accelerators enable parallel processing, reducing training times from months to days or even hours. This computational power enables real-time applications, such as self-driving cars interpreting sensor data instantly, recommendation engines adjusting content dynamically, and medical AI systems analysing thousands of scans within moments.

The combination of abundant data and fast computation also brings practical challenges. Collecting representative datasets requires significant effort and careful curation to avoid bias, while training large models consumes substantial energy.

Researchers are exploring more efficient architectures and optimisation techniques to reduce environmental impact without sacrificing performance.

The future of AI


The foundations of AI continue to evolve rapidly, driven by advances in algorithms, data availability, and computational power. Researchers are exploring more efficient architectures, capable of learning from smaller datasets while maintaining high performance.

For instance, self-supervised learning allows a model to learn from unlabelled data by predicting missing information within the data itself, while few-shot learning enables a system to understand a new task from just a handful of examples. These methods reduce the need for enormous annotated datasets and make AI development faster and more resource-efficient.

Transformer models, powered by attention mechanisms, remain central to natural language processing. The attention mechanism allows the network to focus on the most relevant parts of the input when making predictions.

For example, when translating a sentence, it helps the model determine which words are most important for understanding the meaning. Transformers have enabled the creation of large language models like GPT and BERT, capable of summarising documents, answering questions, and generating coherent text.

Beyond language, multimodal AI systems are emerging, combining text, images, and audio to understand context across multiple sources. For instance, a medical AI system might analyse a patient’s scan while simultaneously reading their clinical notes, providing more accurate and context-aware insights.

Ethics, transparency, and accountability remain critical. Explainable AI (XAI) techniques help humans understand why a model made a particular decision, which is essential in fields like healthcare, finance, and law. Detecting bias, evaluating fairness, and ensuring that models behave responsibly are becoming standard parts of AI development.

Energy efficiency and sustainability are also priorities, as training large models consumes significant computational resources.

Ultimately, the future of AI will be shaped by models that are not only more capable but also more efficient, interpretable, and responsible.


Confluent set to join IBM in major data streaming acquisition

IBM has agreed to acquire data streaming company Confluent in an all-cash deal valued at about $11 billion, signalling a major push to strengthen its data and AI capabilities for enterprise customers.

The acquisition brings Confluent’s real-time data streaming platform into IBM’s portfolio, aiming to help organisations connect, process, and govern data across hybrid cloud environments as AI agents and applications proliferate.

Both companies argue that faster, trusted data flows are becoming essential as enterprises deploy generative and agentic AI at scale, with real-time access increasingly seen as a prerequisite for reliable automation and decision-making.

IBM said the deal will support its ambition to offer an AI-ready data platform that integrates applications, analytics, and infrastructure. At the same time, Confluent sees the combination as a way to accelerate global reach and commercial execution.

The move reflects broader shifts in enterprise architecture, as demand for real-time data systems grows and competition intensifies around AI infrastructure, streaming technologies, and platforms built to support continuous, distributed workloads.


Canada-EU digital partnership expands cooperation on AI and security

The European Union and Canada have strengthened their digital partnership during the first Digital Partnership Council in Montreal. Both sides outlined a joint plan to enhance competitiveness and innovation, while supporting smaller firms through targeted regulation.

Senior representatives reconfirmed that cooperation with like-minded partners will be essential for economic resilience.

A new Memorandum of Understanding on AI placed a strong emphasis on trustworthy systems, shared standards and wider adoption across strategic sectors.

The two partners will exchange best practices to support sectors such as healthcare, manufacturing, energy, culture and public services.

They also agreed to collaborate on large-scale AI infrastructures and access to computing capacity, while encouraging scientific collaboration on advanced AI models and climate-related research.

The meeting also led to an agreement on a structured dialogue on data spaces.

A second Memorandum of Understanding covered digital credentials and trust services. The plan includes joint testing of digital identity wallets, pilot projects and new use cases aimed at interoperability.

The EU and Canada also intend to work more closely on the protection of independent media, the promotion of reliable information online and the management of risks created by generative AI.

Both sides underlined their commitment to secure connectivity, with cooperation on 5G, subsea cables and potential new Arctic routes to strengthen global network resilience. Further plans aim to deepen collaboration on quantum technologies, semiconductors and high-performance computing.

The renewed partnership reflects a shared commitment to resilient supply chains and secure cloud infrastructure as both regions prepare for future technological demands.


EU partners with EIB to support AI gigafactories

The European Commission and the European Investment Bank Group (EIB) have signed a memorandum of understanding to support the development of AI Gigafactories across the EU. The partnership aims to position Europe as a leading AI hub by accelerating financing and the construction of large-scale AI facilities.

The agreement establishes a framework to guide consortia responding to the Commission’s informal Call for Expression of Interest. EIB advisory support will help turn proposals into bankable projects for the 2026 AI Gigafactory call, with possible co-financing.

The initiative builds on InvestAI, announced in February 2025, mobilising €20 billion to support up to five AI Gigafactories. These facilities will boost Europe’s computing infrastructure, reinforce technological sovereignty, and drive innovation across the continent.

By translating Europe’s AI ambitions into concrete, large-scale projects, the Commission and the EIB aim to position the EU as a global leader in next-generation AI, while fostering investment and industrial growth.


Google launches Workspace Studio for AI-powered automation

Google has made Workspace Studio generally available, allowing employees to design, manage, and share AI agents directly within Workspace. Powered by Gemini 3, these agents automate tasks ranging from simple routines to complex business workflows, all without coding.

The platform aims to save time on repetitive work, freeing employees to focus on higher-value activities.

Agents can understand context, reason through problems, and integrate with core Workspace apps such as Gmail, Drive, and Chat, as well as enterprise platforms like Asana, Jira, Mailchimp, and Salesforce.

Early adopters, including cleaning solutions leader Kärcher, have utilised Workspace Studio to streamline workflows, reducing planning time by up to 90% and consolidating multiple tasks into a single minute.

Workspace Studio allows users to build agents using templates or natural language prompts, making automation accessible to non-specialists. Agents can manage status reports, reminders, email triage, and critical tasks, such as legal notices or travel requests.

Teams can also easily share agents, ensuring collaboration and consistency across workflows.

The rollout to business customers will continue over the coming weeks. Users can start creating agents immediately, explore templates, use prompts for automations, and join the Gemini Alpha program to test early features and controls.


SAP elevates customer support with proactive AI systems

AI has pushed customer support into a new era, where anticipation replaces reaction. SAP has built a proactive model that predicts issues, prevents failures and keeps critical systems running smoothly instead of relying on queues and manual intervention.

Major sales events, such as Cyber Week and Singles Day, demonstrated the impact of this shift, with uninterrupted service and significant growth in transaction volumes and order numbers.

Self-service now resolves most issues before they reach an engineer, as structured knowledge supports AI agents that respond instantly with a confidence level that matches human performance.

Tools such as the Auto Response Agent and Incident Solution Matching enable customers to retrieve solutions without having to search through lengthy documentation.

SAP has also helped organisations scale AI by offering support systems tailored for early deployment.

Engineers have benefited from AI as much as customers. Routine tasks are handled automatically, allowing experts to focus on problems that demand insight instead of administration.

Language optimisation, routing suggestions, and automatic error categorisation support faster and more accurate resolutions. SAP validates every AI tool internally before release, which it views as a safeguard for responsible adoption.

The company maintains that AI will augment staff rather than replace them. Creative and analytical work becomes increasingly important as automation handles repetitive tasks, and new roles emerge in areas such as AI training and data stewardship.

SAP argues that progress relies on a balanced relationship between human judgement and machine intelligence, strengthened by partnerships that turn enterprise data into measurable outcomes.


ChatGPT users gain Jira and Confluence access through Atlassian’s MCP connector

Atlassian has launched a new connector that lets ChatGPT users access Jira and Confluence data via the Model Context Protocol. The company said the Rovo MCP Connector supports task summarisation, issue creation and workflow automation directly inside ChatGPT.

Atlassian noted rising demand for integrations beyond its initial beta ecosystem. Users in Europe and elsewhere can now draw on Jira and Confluence data without switching interfaces, while partners such as Figma and HubSpot continue to expand the MCP network.

Engineering, marketing and service teams can request summaries, monitor task progress and generate issues from within ChatGPT. Users can also automate multi-step actions, including bulk updates. Jira write-back support enables changes to be pushed directly into project workflows.

Security updates sit alongside the connector release. Atlassian said the Rovo MCP Server uses OAuth authentication and respects existing permissions across Jira and Confluence spaces. Administrators can also enforce an allowlist to control which clients may connect.

Atlassian frames the initiative as part of its long-term focus on open collaboration. The company said the connector reflects demand for tools that unify context, search and automation, positioning the MCP approach as a flexible extension of existing team practices.
