Mozilla integrates Perplexity AI into Firefox’s search features

Mozilla has announced that it is integrating Perplexity’s AI answer engine into Firefox as a choice available in the browser’s search options.

The feature has already been piloted in markets including the US, UK, and Germany. Firefox is now bringing the option to desktop users globally, with a mobile rollout expected in the coming months.

When enabled, Perplexity AI offers conversational search. Instead of just showing a list of web pages, answers appear with citations. Users can activate it via the unified search button in the address bar or by configuring their default search engine settings.

Mozilla says the integration reflects positive feedback from early users and signals a desire to give people more choice in how they get information. The company also notes that Perplexity ‘doesn’t share or sell users’ personal data,’ which aligns with Mozilla’s privacy principles.

Firefox also continues to evolve other browser features. One is profiles, now broadly available, which allows users to maintain separate browser setups (for example, work vs home). The browser is also experimenting with visual search features using Google Lens for users who keep Google as their default provider.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI forms Expert Council to guide well-being in AI

OpenAI has announced the establishment of an Expert Council on Well-Being and AI to help it shape ChatGPT, Sora and other products in ways that promote healthier interactions and better emotional support.

The council comprises eight distinguished figures from psychology, psychiatry, human-computer interaction, developmental science and clinical practice.

Members include David Bickham (Digital Wellness Lab, Harvard), Munmun De Choudhury (Georgia Tech), Tracy Dennis-Tiwary (Hunter College), Sara Johansen (Stanford), Andrew K. Przybylski (University of Oxford), David Mohr (Northwestern), Robert K. Ross (public health) and Mathilde Cerioli (everyone.AI).

OpenAI says this new body will meet regularly with internal teams to examine how AI should function in ‘complex or sensitive situations,’ advise on guardrails, and explore what constitutes well-being in human-AI interaction. For example, the council already influenced how parental controls and user-teen distress notifications were prioritised.

OpenAI emphasises that it remains accountable for its decisions, but commits to ongoing learning through this council, the Global Physician Network, policymakers and experts. The company notes that different age groups, especially teenagers, use AI tools differently, hence the need for tailored insights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

An awards win for McAfee’s consumer-first AI defence

McAfee won ‘Best Use of AI in Cybersecurity’ at the 2025 A.I. Awards for its Scam Detector. The tool, which McAfee says is the first to automate deepfake, email, and text-scam detection, underscores a consumer-focused defence. The award recognises its bid to counter fast-evolving online fraud.

Scams are at record levels, with one in three US residents reporting victimisation and average losses of $1,500. Threats now range from fake job offers and text messages to AI-generated deepfakes, increasing the pressure on tools that can act in real time across channels.

McAfee’s Scam Detector uses advanced AI to analyse text, email, and video, blocking dangerous links and flagging deepfakes before they cause harm. It is included with core McAfee plans and available on PC, mobile, and web, positioning it as a default layer for everyday protection.

Adoption has been rapid, with the product crossing one million users in its first months, according to the company. Judges praised its proactive protection and emphasis on accuracy and trust, citing its potential to restore user confidence as AI-enabled deception becomes more sophisticated.

McAfee frames the award as validation of its responsible, consumer-first AI strategy. The company says it will expand Scam Detector’s capabilities while partnering with the wider ecosystem to keep users a step ahead of emerging threats, both online and offline.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft finds 71% of UK workers use unapproved AI tools on the job

A new Microsoft survey has revealed that nearly three in four employees in the UK use AI tools at work without company approval.

The practice, referred to as ‘shadow AI’, involves workers relying on unapproved systems such as ChatGPT to complete routine tasks. Microsoft warned that unauthorised AI use could expose businesses to data leaks, non-compliance risks, and cyber attacks.

The survey, carried out by Censuswide, questioned over 2,000 employees across different sectors. Seventy-one per cent admitted to using AI tools outside official policies, often because they were already familiar with them in their personal lives.

Many reported using such tools to respond to emails, prepare presentations, and perform financial or administrative tasks, saving almost eight hours of work each week.

Microsoft said only enterprise-grade AI systems can provide the privacy and security organisations require. Darren Hardman, Microsoft’s UK and Ireland chief executive, urged companies to ensure workplace AI tools are designed for professional use rather than consumer convenience.

He emphasised that secure integration can allow firms to benefit from AI’s productivity gains while protecting sensitive data.

The study estimated that AI technology saves 12.1 billion working hours annually across the UK, equivalent to about £208 billion in employee time. Workers reported using the time gained through AI to improve work-life balance, learn new skills, and focus on higher-value projects.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teen content on Instagram now guided by PG-13 standards

Instagram is aligning its Teen Accounts with PG-13 movie standards, aiming to ensure that users under 18 only see age-appropriate material. Teens will automatically be placed in a 13+ setting and will need parental permission to change it.

Parents who want tighter supervision can activate a new ‘Limited Content’ mode that filters out even more material and restricts comments and AI interactions.

The company reviewed its policies to match familiar parental guidelines, further limiting exposure to content with strong language, risky stunts, or references to substances. Teens will also be blocked from following accounts that share inappropriate content or contain suggestive names and bios.

Searches for sensitive terms such as ‘gore’ or ‘alcohol’ will no longer return results, and the same restrictions will extend to Explore, Reels, and AI chat experiences.

Instagram worked with thousands of parents worldwide to shape these policies, collecting more than three million content ratings to refine its protections. Surveys show strong parental support, with most saying the PG-13 system makes it easier to understand what their teens are likely to see online.

The updates begin rolling out in the US, UK, Australia, and Canada and will expand globally by the end of the year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI predicts future knee X-rays for osteoarthritis patients

In the UK, an AI system developed at the University of Surrey can predict what a patient’s knee X-ray will look like a year in the future, offering a visual forecast alongside a risk score for osteoarthritis progression.

The technology is designed to help both patients and doctors better understand how the condition may develop, allowing earlier and more informed treatment decisions.

Trained on nearly 50,000 knee X-rays from almost 5,000 patients, the system delivers faster and more accurate predictions than existing AI tools.

It uses a generative diffusion model to produce a future X-ray and highlights 16 key points in the joint, giving clinicians transparency and confidence in the areas monitored. Patients can compare their current and predicted X-rays, which can encourage adherence to treatment plans and lifestyle changes.

Researchers hope the technology could be adapted for other chronic conditions, including lung disease in smokers or heart disease progression, providing similar visual insights.

The team is seeking partnerships to integrate the system into real-world clinical settings, potentially transforming how millions of people manage long-term health conditions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers expose weak satellite security with cheap equipment

Scientists in the US have shown how easy it is to intercept private messages and military information from satellites using equipment costing less than €500.

Researchers from the University of California, San Diego and the University of Maryland scanned internet traffic from 39 geostationary satellites and 411 transponders over seven months.

They discovered unencrypted data, including phone numbers, text messages, and browsing history from networks such as T-Mobile, TelMex, and AT&T, as well as sensitive military communications from the US and Mexico.

The researchers used everyday tools such as TV satellite dishes to collect and decode the signals, proving that anyone with a basic setup and a clear view of the sky could potentially access unprotected data.

They said there is a ‘clear mismatch’ between how satellite users assume their data is secured and how it is handled in reality. Despite the industry’s standard practice of encrypting communications, many transmissions were left exposed.

Companies often avoid stronger encryption because it increases costs and reduces bandwidth efficiency. The researchers noted that firms such as Panasonic could lose up to 30 per cent in revenue if all data were encrypted.

While intercepting satellite data still requires technical skill and precise equipment alignment, the study highlights how affordable tools can reveal serious weaknesses in global satellite security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New YouTube tools provide trusted health advice for teens

YouTube is introducing a new shelf of mental health and wellbeing content designed specifically for teenagers. The feature will provide age-appropriate, evidence-based videos covering topics such as depression, anxiety, ADHD, and eating disorders.

Content is created in collaboration with trusted organisations and creators, including Black Dog Institute, ReachOut Australia, and Dr Syl, to ensure it is both reliable and engaging.

The initiative will initially launch in Australia, with plans to expand to the US, the UK, and Canada. Videos are tailored to teens’ developmental stage, offering practical advice, coping strategies, and medically informed guidance.

By providing credible information on a familiar platform, YouTube hopes to improve mental health literacy and reduce stigma among young users.

YouTube has implemented teen-specific safeguards for recommendations, content visibility, and advertising eligibility, making it easier for adolescents to explore their interests safely.

The company emphasises that the platform is committed to helping teens access trustworthy resources, while supporting their wellbeing in a digital environment increasingly filled with misinformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The AI gold rush where the miners are broke

The rapid rise of AI has drawn a wave of ambitious investors eager to tap into what many consider the next major economic engine. Capital has flowed into AI companies at an unprecedented pace, fuelled by expectations of substantial future returns.

Yet despite these bloated investments, none of the leading players have managed to break even, let alone deliver a net-positive financial year. Even so, funding shows no signs of slowing, driven by the belief that profitability is only a matter of time. Is this optimism justified, or is the AI boom, for now, little more than smoke and mirrors?

Where the AI money flows

Understanding the question of AI profitability starts with following the money. Capital flows through the ecosystem from top to bottom, beginning with investors and culminating in massive infrastructure spending. Tracing this flow makes it easier to see where profits might eventually emerge.

The United States is the clearest focal point. The country has become the main hub for AI investment, where the technology is presented as the next major economic catalyst and treated by many investors as a potential cash cow.

The US market fuels AI through a mix of venture capital, strategic funding from Big Tech, and public investment. By late August 2025, at least 33 US AI startups had each raised 100 million dollars or more, showing the depth of available capital and investor appetite.

OpenAI stands apart from the rest of the field. Multiple reports point to a primary round of roughly USD 40 billion at a USD 300 billion post-money valuation, followed by secondary transactions that pushed the implied valuation even higher. No other AI company has matched this scale.

Much of the capital is not aimed at quick profits. Large sums support research, model development, and heavy infrastructure spending on chips, data centres, and power. Plans to deploy up to 6 gigawatts of AMD accelerators in 2026 show how funding moves into capacity rather than near-term earnings.

Strategic partners and financiers supply some of the largest investments. Microsoft has a multiyear, multibillion-dollar deal with OpenAI. Amazon has invested USD 4 billion in Anthropic, Google has pledged up to USD 2 billion, and infrastructure players like Oracle and CoreWeave are backed by major Wall Street banks.

AI makes money – it’s just not enough (yet)

Winning over deep-pocketed investors has become essential for both scrappy startups and established AI giants. Tech leaders have poured money into ambitious AI ventures for many reasons, from strategic bets to genuine belief in the technology’s potential to reshape industries.

No matter their motives, investors eventually expect a return. Few are counting on quick profits, but sooner or later, they want to see results, and the pressure to deliver is mounting. Hype alone cannot sustain a company forever.

To survive, AI companies need more than large fundraising rounds. Real users and reliable revenue streams are what keep a business afloat once investor patience runs thin. Building a loyal customer base separates long-term players from temporary hype machines.

OpenAI provides the clearest example of a company that has scaled. In the first half of 2025, it generated around 4.3 billion dollars in revenue, and by October, its CEO reported that roughly 800 million people were using ChatGPT weekly. The scale of its user base sets it apart from most other AI firms, but the company’s massive infrastructure and development costs keep it far from breaking even.

Microsoft has also benefited from the surge in AI adoption. Azure grew 39 per cent year-over-year in Q4 FY2025, reaching 29.9 billion dollars. AI services drive a significant share of this growth, but data-centre expansion and heavy infrastructure costs continue to weigh on margins.

NVIDIA remains the biggest financial winner. Its chips power much of today’s AI infrastructure, and demand has pushed data-centre revenue to record highs. In Q2 FY2026, the company reported total revenue of 46.7 billion dollars, yet overall industry profits still lag behind massive investment levels due to maintenance costs and a mismatch between investment and earnings.

Why AI projects crash and burn

While the major AI players earn enough to offset some of their costs, more than two-fifths of AI initiatives end up on the virtual scrapheap for a range of reasons. Many companies jumped on the AI wave without a clear plan, copying what others were doing and overlooking the huge upfront investments needed to get projects off the ground.

GPU prices have soared in recent years, and new tariffs introduced by the current US administration have added even more pressure. Running an advanced model requires top-tier chips like NVIDIA’s H100, which costs around 30,000 dollars per unit. Once power consumption, facility costs, and security are added, the total bill becomes daunting for all but the largest players.

Another common issue is the lack of a scalable business model. Many companies adopt AI simply for the label, without a clear strategy for turning interest into revenue. In some industries, these efforts raise questions with customers and employees, exposing persistent trust gaps between human workers and AI systems.

The talent shortage creates further challenges. A young AI startup needs skilled engineers, data scientists, and operations teams to keep everything running smoothly. Building and managing a capable team requires both money and expertise. Unrealistic goals often add extra strain, causing many projects to falter before reaching the finish line.

Legal and ethical hurdles can also derail projects early on. Privacy laws, intellectual property disputes, and unresolved ethical questions create a difficult environment for companies trying to innovate. Lawsuits and legal fees have become routine, prompting some entrepreneurs to shut down rather than risk deeper financial trouble.

All of these obstacles together have proven too much for many ventures, leaving behind a discouraging trail of disbanded companies and abandoned ambitions. Sailing the AI seas offers a great opportunity, but storms can form quickly and overturn even the most confident voyages.

How AI can become profitable

While the situation may seem challenging now, there is still light at the end of the AI tunnel. The key to building a profitable and sustainable AI venture lies in careful planning and scaling only when the numbers add up. Companies that focus on fundamentals rather than hype stand the best chance of long-term success.

Lowering operational costs is one of the most important steps. Techniques such as model compression, caching, and routing queries to smaller models can dramatically reduce the cost of running AI systems. Improvements in chip efficiency and better infrastructure management can also help stretch every dollar further.
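The caching and query-routing idea can be sketched in a few lines. The following is a minimal illustration, not any vendor’s actual API: the model names, per-token prices, and the word-count complexity heuristic are all hypothetical.

```python
# Hypothetical cost-saving layer: cache repeated queries and route
# short prompts to a cheaper model tier. Names and prices are
# illustrative only, not real vendor pricing.

CACHE: dict[str, str] = {}

# Illustrative cost per 1,000 tokens (USD), for two made-up tiers.
MODEL_COST = {"small": 0.0002, "large": 0.01}


def route(query: str) -> str:
    """Pick a model tier with a crude complexity heuristic (word count)."""
    return "large" if len(query.split()) > 30 else "small"


def answer(query: str) -> tuple[str, float]:
    """Return (model_used, serving_cost); cached answers cost nothing."""
    if query in CACHE:
        return "cache", 0.0
    model = route(query)
    CACHE[query] = f"answer-from-{model}"  # stand-in for a real model call
    tokens = len(query.split())
    return model, tokens / 1000 * MODEL_COST[model]
```

Even this toy version shows the shape of the savings: the second time an identical query arrives, the marginal serving cost drops to zero, and routine queries never touch the expensive tier.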

Shifting the revenue mix is another crucial factor. Many companies currently rely on cheap consumer products that attract large user bases but offer thin margins. A stronger focus on enterprise clients, who pay for reliability, customisation, and security, can provide a steadier and more profitable income stream.

Building real platforms rather than standalone products can unlock new revenue sources. Offering APIs, marketplaces, and developer tools allows companies to collect a share of the value created by others. The approach mirrors the strategies used by major cloud providers and app ecosystems.

Improving unit economics will determine which companies endure. Serving more users at lower per-request costs, increasing cache hit rates, and maximising infrastructure utilisation are essential to moving from growth at any cost to sustainable profit. Careful optimisation can turn large user bases into reliable sources of income.
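The unit-economics argument above can be made concrete with some back-of-the-envelope arithmetic. All the numbers below are hypothetical, chosen only to show how a higher cache hit rate can flip a per-user loss into a profit.

```python
# Illustrative unit economics (all figures hypothetical): blended cost
# per request falls as the cache hit rate rises, which can turn a
# loss-making service profitable without any new revenue.

def blended_cost(cost_per_request: float, cache_hit_rate: float,
                 cache_cost: float = 0.0) -> float:
    """Average serving cost when a fraction of requests hit the cache."""
    return cache_hit_rate * cache_cost + (1 - cache_hit_rate) * cost_per_request


def monthly_margin(users: int, requests_per_user: int,
                   revenue_per_user: float, cost_per_request: float,
                   cache_hit_rate: float) -> float:
    """Monthly revenue minus blended serving cost."""
    cost = users * requests_per_user * blended_cost(cost_per_request,
                                                    cache_hit_rate)
    return users * revenue_per_user - cost


# Hypothetical scenario: 1M users, 200 requests each per month,
# $2 revenue per user, $0.012 per uncached request. With no cache
# the service runs at a loss; at a 60% hit rate the margin is positive.
```

The same calculation generalises: any lever that lowers the blended per-request cost (smaller models, better utilisation, cheaper chips) moves the break-even point in the operator’s favour.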

Stronger financial discipline and clearer regulation can also play a role. Companies that set realistic growth targets and operate within stable policy frameworks are more likely to survive in the long run. Profitability will depend not only on innovation but also on smart execution and strategic focus.

Charting the future of AI profitability

The AI bubble appears stretched thin, and a constant stream of investments can do little more than artificially extend the lifespan of an AI venture doomed to fail. AI companies must find a way to create viable, realistic roadmaps to justify the sizeable cash injections, or they risk permanently compromising investors’ trust.

That said, the industry is still in its early and formative years, and there is plenty of room to grow and adapt to current and future landscapes. AI has the potential to become a stable economic force, but only if companies can find a compromise between innovation and financial pragmatism. Profitability will not come overnight, but it is within reach for those willing to build patiently and strategically.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK government urges firms to keep paper backups for cyberattack recovery

The UK government has issued a strong warning to company leaders to prepare for cyber incidents by maintaining paper-based contingency plans. The National Cyber Security Centre (NCSC) emphasised that firms must plan how to continue operations and rebuild IT systems if networks are compromised.

The advice follows a series of high-profile cyberattacks this year targeting major UK firms, including Marks & Spencer, The Co-op, and Jaguar Land Rover, which experienced production halts and supply disruptions after their systems were breached.

According to NCSC chief executive Richard Horne, organisations need to adopt ‘resilience engineering’ strategies: systems designed to anticipate, absorb, recover, and adapt during cyberattacks.

The agency recommends storing response plans offline and outlining alternative communication methods, such as phone trees and manual record-keeping, should email systems fail.

While the total number of cyber incidents investigated by the NCSC, 429 in the first nine months of 2025, remained stable, the number of ‘nationally significant’ attacks nearly doubled from 89 to 204. These include Category 1–3 incidents, ranging from ‘significant’ to ‘national cyber emergency.’

Recent cases highlight the human and operational toll of such events, including a ransomware attack on a London blood testing provider last year that caused severe clinical disruption and contributed to at least one patient death.

Experts say the call for offline backups may sound old-fashioned but is pragmatic. ‘You wouldn’t walk onto a building site without a helmet, yet companies still go online without basic protection,’ said Graeme Stewart, head of public sector at Check Point. ‘Cybersecurity must be treated like health and safety: not optional, but essential.’

The government is also encouraging companies, particularly SMEs, to use the NCSC’s free support tools, including cyber insurance linked to its Cyber Essentials programme.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!