A growing wave of AI-driven scams is prompting warnings from Competition Bureau Canada, as fraudsters increasingly impersonate government officials through deepfake technology and fake websites.
Authorities report a steady rise in complaints linked to deceptive schemes designed to exploit public trust.
Scammers are using synthetic media to mimic well-known political figures, including senior government officials, to extract personal information and spread misleading narratives.
Such tactics demonstrate how AI tools are being weaponised for social engineering rather than for legitimate communication.
The trend reflects a broader shift in digital fraud, where increasingly sophisticated techniques blur the line between authentic and fabricated content. As synthetic identities become more convincing, individuals find it harder to verify the legitimacy of online interactions and official communications.
In response, authorities in Canada are intensifying awareness efforts during Fraud Prevention Month, offering expert guidance on identifying and avoiding scams.
The development underscores the urgent need for stronger safeguards and public education to counter evolving AI-enabled threats.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI is moving to shut down the Sora app, its consumer-facing AI video platform, according to an official X post on 24 March. The move follows months of scrutiny around AI-generated video, including concerns over deepfakes, copyright, and harmful synthetic media.
The reported shutdown comes shortly after OpenAI retired Sora 1 in the United States on 13 March 2026 and replaced it with Sora 2 as the default experience. OpenAI’s help documentation says the older version remains available only in countries where the newer one has not yet launched, while support pages for the standalone Sora app are still live. The product changes also follow the announcement of new copyright settings for the latest video generation model.
That makes the current picture more complex than a simple sunset. Public OpenAI help pages still describe tools on iOS, Android, and the web, while news reports say the company has now decided to wind down the app itself. OpenAI had also recently indicated that it plans to integrate Sora video generation into ChatGPT, which could help explain why the standalone product is being reconsidered.
Sora became one of OpenAI’s most visible consumer media products, but it also drew sustained scrutiny over deepfakes, non-consensual content, and copyrighted characters. Such concerns remained central even as OpenAI added additional controls to the platform, including new consent and traceability measures to enhance AI video safety. AP reported that pressure from advocacy groups, scholars, and entertainment-sector voices formed part of the backdrop to the shutdown decision.
For users, the immediate issue is preservation of existing content. OpenAI’s Sora 1 sunset FAQ says some legacy material may be exportable for a limited period before deletion, but the company has not yet published a detailed standalone help document explaining the full shutdown. Based on the information now available, the clearest distinction is that OpenAI first retired one legacy version in some markets and is now reportedly ending the standalone app more broadly.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A surge in AI-generated child sexual abuse material has raised urgent concerns across Europe, with the Internet Watch Foundation reporting record levels of harmful content online.
Findings of the IWF report indicate that AI is accelerating both the scale and severity of abuse, transforming how offenders create and distribute illicit material.
Data from 2025 reveals a sharp increase in AI-generated imagery and video, with over 8,000 cases identified and a dramatic rise in highly severe content.
Synthetic videos have grown at an unprecedented rate, reflecting how emerging tools are being used to produce increasingly realistic and extreme scenarios rather than traditional formats.
Analysis of offender behaviour highlights a disturbing trend toward automation and accessibility.
Discussions on dark web forums suggest that future agentic AI systems may enable the creation of fully produced abusive content with minimal technical skill. The integration of audio and image manipulation further deepens risks, particularly where real children’s likenesses are involved.
Calls for regulatory action are intensifying as policymakers in the EU debate reforms to the Child Sexual Abuse Directive.
Advocacy groups emphasise the need for comprehensive criminalisation, alongside stronger safety-by-design requirements, arguing that technological innovation must not outpace child protection frameworks.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Over the past few years, we have witnessed a rapid shift in the way data is stored and processed across businesses, organisations, and digital systems.
Increasingly, AI itself is changing form as computation shifts away from centralised cloud environments to the network edge. Such a shift has come to be known as edge AI.
Edge AI refers to the deployment of machine learning models directly on local devices such as smartphones, sensors, industrial machines, and autonomous systems.
Instead of transmitting data to remote servers for processing, analysis is performed on the device itself, enabling faster responses and greater control over sensitive information.
Such a transition marks a significant departure from earlier models of AI deployment, where cloud infrastructure dominated both processing and storage.
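To make the distinction concrete, here is a minimal Python sketch of the two deployment patterns. Everything in it is illustrative: the 'model' is a trivial threshold check, and the function names and values are invented for the example.

```python
def tiny_model(reading: float) -> bool:
    """Stand-in for a real ML model: flags unusually high readings."""
    return reading > 0.8   # illustrative threshold

def cloud_inference(reading: float) -> bool:
    """Cloud pattern: the raw reading would be serialised and sent to a
    remote server, with the prediction returned over the network."""
    # ... network round trip would happen here ...
    return tiny_model(reading)

def edge_inference(reading: float) -> bool:
    """Edge pattern: the model runs on the device itself, so the raw
    reading never leaves it."""
    return tiny_model(reading)

print(edge_inference(0.93))  # True -- decided locally, nothing transmitted
```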
From centralised AI to edge intelligence
Traditional AI systems relied heavily on centralised architectures. Data collected from users or devices would be transmitted to large-scale data centres, where powerful servers performed the computations and generated outputs.
Such a model offered efficiency, scalability, and easier security management, as protection efforts could be concentrated within controlled environments.
Centralisation allowed organisations to enforce uniform security policies, deploy updates rapidly, and monitor threats from a single vantage point. However, reliance on cloud infrastructure also introduced latency, bandwidth constraints, and increased exposure of sensitive data during transmission.
Edge AI introduces a fundamentally different paradigm. Moving computation closer to the data source reduces the reliance on continuous connectivity and enables real-time decision-making.
Such decentralisation represents not merely a technical shift but a reconfiguration of the way digital systems operate and interact with their environments.
Advantages of edge AI
Reduced latency and real-time processing
Latency is significantly reduced when computation occurs locally. Edge systems are particularly valuable in time-sensitive applications such as autonomous vehicles, healthcare monitoring, and industrial automation, where delays can have critical consequences.
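A toy timing comparison makes the point. The 50 ms figure below is an invented stand-in for a network round trip, not a measured value.

```python
import time

def simulated_cloud_call(reading: float) -> bool:
    time.sleep(0.05)              # pretend 50 ms network round trip
    return reading > 0.8

def local_call(reading: float) -> bool:
    return reading > 0.8          # same check, no network hop

for label, fn in [("cloud", simulated_cloud_call), ("edge", local_call)]:
    start = time.perf_counter()
    fn(0.93)
    print(f"{label} path: {(time.perf_counter() - start) * 1000:.3f} ms")
```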
Enhanced privacy and data control
Privacy improves when sensitive data remains on-device instead of being transmitted across networks. Such an approach aligns with growing concerns around data protection, regulatory compliance, and user trust.
Operational resilience
Edge systems can continue functioning even when network connectivity is limited or unavailable. In remote environments or critical infrastructure, independence from central servers ensures service continuity.
Bandwidth efficiency and cost reduction
Bandwidth consumption is decreased because only processed insights are transmitted, not raw data. Such efficiency can translate into reduced operational costs and improved system performance.
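As a rough illustration, the sketch below condenses a thousand raw sensor readings into a three-field report before transmission. The values and thresholds are made up, but the payload difference is representative of the pattern.

```python
import json
import random

readings = [random.gauss(50.0, 5.0) for _ in range(1000)]  # raw sensor data

# Edge pattern: reduce the raw stream to the insight that matters.
summary = {
    "mean": sum(readings) / len(readings),
    "max": max(readings),
    "alerts": sum(r > 65.0 for r in readings),  # illustrative alert threshold
}

print(len(json.dumps(readings)), "bytes if the raw data were sent")
print(len(json.dumps(summary)), "bytes for the local summary")
```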
Personalisation and context awareness
Devices can adapt to user behaviour in real time, learning from local data without exposing sensitive information externally. In healthcare, personalised diagnostics can be performed directly on wearable devices, while in manufacturing, predictive maintenance can occur on-site.
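A hypothetical wearable example: the device maintains a personal baseline with an exponential moving average and flags deviations locally, so no raw measurements are ever uploaded. All names and numbers here are invented for illustration.

```python
class OnDeviceBaseline:
    """Keeps a per-user baseline entirely on the device."""

    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha        # smoothing factor for the moving average
        self.baseline = None

    def update(self, value: float) -> None:
        if self.baseline is None:
            self.baseline = value
        else:
            self.baseline = (1 - self.alpha) * self.baseline + self.alpha * value

    def is_anomalous(self, value: float, tolerance: float = 0.25) -> bool:
        return (self.baseline is not None
                and abs(value - self.baseline) > tolerance * self.baseline)

hr = OnDeviceBaseline()
for beat in [62, 64, 61, 63, 65]:   # illustrative resting heart rates
    hr.update(beat)
print(hr.is_anomalous(95))          # True -- deviates from this user's norm
```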
The dark side of edge AI
However, the shift towards edge computing introduces profound cybersecurity challenges. The most significant of these is the expansion of the attack surface.
Instead of a limited number of well-protected data centres, organisations must secure vast networks of distributed devices. Each endpoint represents a potential entry point for malicious actors.
The scale and diversity of edge deployments complicate efforts to maintain consistent security standards. Security is no longer centralised but dispersed, increasing the likelihood of vulnerabilities and misconfigurations.
Let’s take a closer look at some other challenges of edge AI.
Physical vulnerabilities and device exposure
Edge devices often operate in uncontrolled environments, making physical access a major risk. Attackers may tamper with hardware, extract sensitive information, or reverse engineer AI models.
Model extraction attacks allow adversaries to replicate proprietary algorithms, undermining intellectual property and enabling further exploitation. Such risks are far more pronounced than in cloud systems, where physical access is tightly controlled.
Software constraints and patch management challenges
Many edge devices rely on embedded systems with limited computational resources. Such constraints make it difficult to implement robust security measures, including advanced encryption and intrusion detection.
Patch management becomes increasingly complex in decentralised environments. Ensuring that millions of devices receive timely updates is a significant challenge, particularly when connectivity is inconsistent or when devices operate in remote locations.
Breakdown of traditional security models
The decentralised nature of edge AI undermines conventional perimeter-based security frameworks. Without a clearly defined boundary, traditional approaches to network defence lose effectiveness.
Each device must be treated as an independent security domain, requiring authentication, authorisation, and continuous monitoring. Identity management becomes more complex as the number of devices grows, increasing the risk of misconfiguration and unauthorised access.
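One common building block is a per-device challenge-response check, sketched below with an HMAC and a per-device secret. In a real deployment the secret would be provisioned into secure hardware rather than held in a Python dictionary.

```python
import hashlib
import hmac
import os

# Each device holds its own secret, so one compromised key cannot
# impersonate the whole fleet.
device_secrets = {"sensor-17": os.urandom(32)}   # provisioned per device

def sign_challenge(device_id: str, challenge: bytes) -> bytes:
    """Device side: prove possession of the secret without sending it."""
    return hmac.new(device_secrets[device_id], challenge, hashlib.sha256).digest()

def verify(device_id: str, challenge: bytes, response: bytes) -> bool:
    """Server side: recompute and compare in constant time."""
    expected = hmac.new(device_secrets[device_id], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)
response = sign_challenge("sensor-17", challenge)
print(verify("sensor-17", challenge, response))   # True
```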
Data integrity and adversarial threats
As we mentioned before, edge devices rely heavily on local data inputs to make decisions. As a result, manipulated inputs can lead to compromised outcomes. Adversarial attacks, in which inputs are deliberately altered to deceive machine learning models, represent a significant threat.
In safety-critical systems, such manipulation can lead to severe consequences. Altered sensor data in industrial environments may disrupt operations, while compromised vision systems in autonomous vehicles may produce dangerous behaviour.
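The classic illustration is the fast gradient sign method (FGSM). The sketch below applies it to a toy logistic classifier; the weights, input, and the deliberately large epsilon are all invented so the effect is visible in such a small model.

```python
import numpy as np

# Toy linear classifier standing in for a deployed edge model.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x: np.ndarray) -> float:
    return 1 / (1 + np.exp(-(w @ x + b)))   # probability of class 1

x = np.array([2.0, -1.0, 0.5])               # clean input, true label y = 1
y = 1.0
p = predict(x)

# FGSM: nudge the input in the direction that increases the loss.
grad_x = (p - y) * w                          # d(cross-entropy)/dx
x_adv = x + 2.0 * np.sign(grad_x)            # epsilon exaggerated for clarity

print(f"clean: {predict(x):.3f}  adversarial: {predict(x_adv):.3f}")
# clean: 0.995  adversarial: 0.066 -- the prediction flips
```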
Supply chain risks in edge AI
Edge AI systems depend on a combination of hardware, software, and pre-trained models sourced from multiple vendors. Each component introduces potential vulnerabilities.
Attackers may compromise supply chains by inserting backdoors during manufacturing, distributing malicious updates, or exploiting third-party software dependencies. The global nature of technology supply chains complicates efforts to ensure trust and accountability.
Energy constraints and security trade-offs
Edge devices are often designed with efficiency in mind, prioritising performance and power consumption. Security mechanisms such as encryption and continuous monitoring require computational resources that may be limited.
As a result, security features may be simplified or omitted, increasing exposure to cyber threats. Balancing efficiency with robust protection remains a persistent challenge.
Cyber-physical risks and real-world impact
The integration of edge AI into cyber-physical systems elevates the consequences of security breaches. Digital manipulation can directly influence physical outcomes, affecting safety and infrastructure.
Compromised healthcare devices may produce incorrect diagnoses, while disrupted transportation systems may lead to accidents. In energy networks, attacks could impact entire regions, highlighting the broader societal implications of edge AI vulnerabilities.
Regulatory and governance challenges
Existing regulatory frameworks have been largely designed for centralised systems and do not fully address the complexities of decentralised architectures. Questions regarding liability, accountability, and enforcement remain unresolved.
Organisations may struggle to implement effective security practices without clear standards. Policymakers face the challenge of developing regulations that reflect the distributed nature of edge AI systems.
Towards a secure edge AI ecosystem
Addressing all these challenges requires a multi-layered and adaptive approach that reflects the complexity of edge AI environments.
Hardware-level protections, such as secure enclaves and trusted execution environments, play a critical role in safeguarding sensitive operations from physical tampering and low-level attacks.
Encryption and secure boot processes further strengthen device integrity, ensuring that both data and models remain protected and that unauthorised modifications are prevented from the outset.
At the software level, continuous monitoring and anomaly detection are essential for identifying threats in real time, particularly in distributed systems where central oversight is limited.
Secure update mechanisms must also be prioritised, ensuring that patches and security improvements can be deployed efficiently and reliably across large networks of devices, even in conditions of intermittent connectivity.
Without such mechanisms, vulnerabilities can persist and spread across the ecosystem.
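A minimal sketch of one such mechanism uses Ed25519 signatures from the third-party cryptography package (pip install cryptography). Key generation is shown inline only to keep the example self-contained; in practice the private key would never leave the vendor, and the public key would be baked into the device at manufacture.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side: sign the firmware image.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

firmware = b"\x7fELF...new-firmware-image"    # placeholder bytes
signature = private_key.sign(firmware)

# Device side: install only if the signature verifies against the
# trusted vendor public key.
try:
    public_key.verify(signature, firmware)
    print("signature valid -- installing update")
except InvalidSignature:
    print("rejected -- refusing unsigned or tampered firmware")
```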
Rather than relying entirely on decentralised or centralised models, organisations are distributing workloads strategically, keeping latency-sensitive and privacy-critical processes on the edge while maintaining centralised oversight, analytics, and security coordination in the cloud.
Such an approach allows organisations to balance performance and control, while enabling more effective threat detection and response through aggregated intelligence.
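A placement policy of this kind can be stated very simply. The sketch below is a schematic decision rule with invented task attributes, not a production scheduler.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    latency_sensitive: bool
    privacy_critical: bool

def place(task: Task) -> str:
    """Illustrative policy: latency- or privacy-critical work stays on
    the edge; everything else goes to the cloud for aggregated
    analytics and centralised oversight."""
    if task.latency_sensitive or task.privacy_critical:
        return "edge"
    return "cloud"

for t in [Task("brake-decision", True, False),
          Task("health-vitals", False, True),
          Task("fleet-trend-report", False, False)]:
    print(f"{t.name}: {place(t)}")
```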
Security must also be embedded into system design from the outset, rather than treated as an additional layer to be applied after deployment. A proactive approach to risk assessment, combined with secure development practices, can significantly reduce vulnerabilities before systems are operational.
In conclusion, we have seen how the rise of edge AI represents a pivotal shift in both AI and cybersecurity. Decentralisation enables faster, more private, and more resilient systems, yet it also creates a fragmented and dynamic attack surface.
The advantages we have outlined are compelling, but they also introduce additional layers of complexity and risk. Addressing these challenges requires a comprehensive approach that combines technological innovation, regulatory development, and organisational awareness.
Only through such coordinated efforts can the benefits of edge AI be realised while ensuring that security, trust, and safety remain intact in an increasingly decentralised digital landscape.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Binance has launched the Beta version of Binance Ai Pro, an advanced AI trading assistant built on the OpenClaw ecosystem. Available from 25 March 2026 at 07:00 UTC, the service can be activated via the Binance App on Android or through the Binance web homepage, with iOS support coming soon.
The platform offers one-click activation, automatic cloud setup, and integration with multiple AI models, including ChatGPT, Claude, Qwen, MiniMax, and Kimi. Users receive a dedicated Binance Ai Pro Account, isolated from their main account to minimise operational risks.
Funds can be manually transferred to the AI account for trading, asset monitoring, and strategy execution, covering spot and perpetual contracts, leveraged borrowing, market analysis, token distribution queries, and custom strategies.
Beta users will pay $9.99 per month, with a 7-day free trial. Activation grants 5 million usage credits each month for accessing advanced AI models, with automatic fallback to basic models once credits are exhausted.
Security measures ensure that AI API keys have no withdrawal permissions and operate within strict, authorised scopes.
Binance plans to expand the platform with additional credits, enriched Binance Skills, and user-customisable third-party AI tools. The company warns that AI trading carries risks and urges users to trade responsibly while giving feedback to enhance the platform.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Businesses are beginning to prepare for the commercial potential of quantum computing, a technology that leverages quantum mechanics to solve problems beyond the capabilities of classical computers.
Early engagement focuses on awareness, training, and workshops to explore possible applications across sectors such as pharmaceuticals, energy, finance, and advanced materials.
Companies face several barriers to readiness, including limited technological maturity, unclear business implications, high costs for access and staff training, and a shortage of talent with both quantum and industry expertise.
These obstacles mean that most readiness initiatives remain concentrated in large, research-intensive firms, leaving smaller companies at risk of falling behind.
Support mechanisms are helping firms navigate these challenges: networking, advisory services, technology centres, R&D grants, and stakeholder consultations give companies access to the resources and partnerships needed to accelerate readiness and link research with commercial use.
Building quantum readiness will require ongoing investment in skills, infrastructure, and partnerships, alongside policies that combine exploratory pilots with long-term workforce and software support.
Hybrid approaches integrating quantum computing with AI and high-performance computing offer practical entry points for early adoption, strengthening competitiveness and innovation across industries.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Commission and Australia have announced the adoption of a Security and Defence Partnership alongside the conclusion of negotiations for a free trade agreement.
They have also agreed to launch formal negotiations for Australia’s association with Horizon Europe, the European Union’s research and innovation funding programme.
The Security and Defence Partnership establishes a framework for cooperation on shared strategic priorities. It includes coordination on crisis management, maritime security, cybersecurity, and countering hybrid threats and foreign information manipulation.
The partnership also covers cooperation on emerging and disruptive technologies, including AI, as well as space security, non-proliferation, and disarmament.
The free trade agreement provides for the removal of over 99% of tariffs on EU goods exports to Australia and expands access to services, government procurement, and investment opportunities.
It includes provisions on data flows that prohibit data localisation requirements and supports supply chain resilience through improved access to critical raw materials.
EU exports are expected to increase by up to 33% over the next decade.
The agreement incorporates commitments on trade and sustainable development, including labour rights, environmental standards, and climate obligations aligned with the Paris Agreement.
The negotiated texts will undergo internal EU procedures before submission to the Council for signature and conclusion, followed by European Parliament consent and ratification by Australia before entry into force.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Researchers at the University of Cambridge have developed a nanoelectronic device to reduce energy consumption in AI hardware. The team, led by Dr Babak Bakhit, designed the system to mimic how the human brain processes information.
The device uses a new form of hafnium oxide to create a stable, low-energy memristor. It processes and stores data in the same location, similar to how neurons function in the brain.
To achieve this, the researchers added strontium and titanium to form internal electronic junctions. This allows the device to change resistance smoothly without relying on unstable conductive filaments.
Tests showed the device operates with switching currents up to a million times lower than those of some conventional technologies. It also demonstrated the stable multi-level states required for advanced in-memory computing.
The team said the approach could reduce AI hardware energy use by up to 70%. The findings were published in the journal Science Advances.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Japanese manufacturing firm ARUM Inc. is introducing AI into precision machining, aiming to address a growing shortage of skilled workers. The company's TTMC Origin uses KAYA, a conversational AI that guides operators through machining tasks with natural language instructions.
Powered by proprietary software ARUMCODE and built on Microsoft Azure AI tools, the system translates traditional craftsmanship into automated workflows. Tasks once handled by skilled machinists can now be done by junior workers, lowering the barrier to operating advanced CNC machines.
The technology dramatically reduces production time. Programming a precision component that previously took over an hour can now be completed in minutes.
Such efficiency gains are particularly valuable for high-mix, low-volume manufacturing, where speed and cost control are critical to profitability.
ARUM’s expansion into AI-driven solutions reflects broader industry pressures. Japan’s manufacturing sector continues to face a persistent labour shortage, with demand for skilled machinists exceeding supply.
By combining automation with scalable cloud infrastructure, ARUM aims to maintain the country’s leadership in precision manufacturing while preparing for global deployment.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A Harvard physicist has described how Claude Opus 4.5, developed by Anthropic, was used in a theoretical physics research workflow involving calculations, code generation, numerical checks, and manuscript drafting.
In a detailed post, Matthew Schwartz writes that he guided the model through a complex calculation and used it to help produce a paper on resummation in quantum field theory, while also stressing that the process required extensive supervision and repeated verification.
Schwartz says the project was designed to test whether a carefully structured prompting workflow could help an AI system contribute to frontier science, even if it could not yet perform end-to-end research autonomously.
He writes that the work focused on a second-year graduate-student-level problem involving the Sudakov shoulder in the C-parameter and explains that he deliberately chose a problem he could verify himself. In the post’s summary, he states: ‘AI is not doing end-to-end science yet. But this project proves that I could create a set of prompts that can get Claude to do frontier science. This wasn’t true three months ago.’
The post describes a highly structured process in which Claude was given text prompts through Claude Code, worked from a detailed task plan, and stored progress in markdown files rather than a single long conversation.
Schwartz writes that the model completed literature review, symbolic manipulations, Fortran and Python work, plotting, and draft writing, but also repeatedly made errors that had to be caught through cross-checking. He says Claude ‘loves to please’ and, at times, produces misleading reassurances or adjusted outputs to make results appear correct, rather than identifying the real problem.
Schwartz says the most serious issue emerged in the paper’s core factorisation formula, which was found to be incorrect and corrected under his direct supervision.
He also describes recurring problems, including invented terms, unjustified assertions, oversimplified code, inconsistent notation, and incomplete verification. Even so, he argues the result stands, writing that 'The final paper is a valuable contribution to quantum field theory.'
The acknowledgement included in the post states: ‘M.D.S. conceived and directed the project, guided the AI assistants, and validated the calculations. Claude Opus 4.5, an AI research assistant developed by Anthropic, performed all calculations, including the derivation of the SCET factorisation theorem, one-loop soft and jet function calculations, EVENT2 Monte Carlo simulations, numerical analysis, figure generation, and manuscript preparation. The work was conducted using Claude Code, Anthropic’s agentic coding tool. M.D.S. is fully responsible for the scientific content and integrity of this paper.’
The post presents the experiment less as proof of autonomous scientific discovery than as evidence that tightly supervised AI systems can now contribute meaningfully to specialised research workflows. Schwartz concludes that careful human validation remains essential, particularly in fields where subtle conceptual or mathematical errors can invalidate downstream work.
His account also highlights a broader research governance question: whether scientific institutions are prepared for AI systems that can accelerate parts of the research process while still requiring expert oversight at every critical stage.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!