Meta boosts teen safety as it removes hundreds of thousands of harmful accounts

Meta has rolled out new safety tools to protect teenagers on Instagram and Facebook, including alerts about suspicious messages and a one-tap option to block or report harmful accounts.

The company said it is increasing efforts to prevent inappropriate contact from adults and has removed over 635,000 accounts that sexualised or targeted children under 13.

Of those accounts, 135,000 were caught posting sexualised comments, while another 500,000 were flagged for inappropriate interactions.

Meta said teen users blocked over one million accounts and reported another million after receiving in-app warnings encouraging them to stay cautious in private messages.

The company also uses AI to detect users lying about their age on Instagram. If flagged, those accounts are automatically converted to teen accounts with stronger privacy settings and messaging restrictions. Since 2024, all teen accounts have been set to private by default.

Meta’s move comes as it faces mounting legal pressure from dozens of US states accusing the company of contributing to the youth mental health crisis by designing addictive features on Instagram and Facebook. Critics argue that more must be done to ensure safety instead of relying on user action alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Starlink suffers widespread outage from a rare software failure

The disruption began around 3 p.m. EDT on Thursday and was attributed to a failure in Starlink’s core internal software services. The outage hit one of the most resilient satellite systems in the world, sparking speculation over whether a botched update or a cyberattack was to blame.

Starlink, which serves more than six million users across 140 countries, saw service gradually return after two and a half hours.

Executives from SpaceX, including CEO Elon Musk and Vice President of Starlink Engineering Michael Nicolls, apologised publicly and promised to address the root cause to prevent further interruptions. Experts described it as Starlink’s longest and most severe outage since it became a major provider.

As SpaceX continues upgrading the network to support greater speed and bandwidth, some experts warned that such technical failures may become more visible. Starlink has rapidly expanded with over 8,000 satellites in orbit and new services like direct-to-cell text messaging in partnership with T-Mobile.

Questions remain over whether Thursday’s failure affected military services like Starshield, which supports high-value US defence contracts.

DeepSeek and others gain traction in US and EU

A recent survey has found that most users in the US and the EU are open to using Chinese large language models, even amid ongoing political and cybersecurity scrutiny.

According to the report, 71 percent of respondents in the US and 87 percent in the EU would consider adopting models developed in China.

The findings highlight increasing international curiosity about the capabilities of Chinese AI firms such as DeepSeek, which have recently attracted global attention.

While the technology is gaining credibility, many Western users remain cautious about data privacy and infrastructure control.

More than half of those surveyed said they would only use Chinese AI models if they were hosted outside China. This suggests that while trust in the models’ performance is growing, concerns over data governance remain a significant barrier to adoption.

The results come amid heightened global competition in the AI race, with Chinese developers rapidly advancing to challenge US-based leaders. DeepSeek and similar firms now face the challenge of balancing global outreach with geopolitical constraints.

Google’s AI Overviews reach 2 billion users monthly, reshaping the web’s future

Google’s AI Overviews, the generative summaries placed above traditional search results, now serve over 2 billion users monthly, a sharp rise from 1.5 billion just last quarter.

First launched in May 2023 and widely available in the US by mid-2024, the feature has rapidly expanded across more than 200 countries and 40 languages.

The widespread use of AI Overviews transforms how people search and who benefits. Google reports that the feature boosts engagement by over 10% for queries where it appears.

However, a study by Pew Research shows clicks on search results drop significantly when AI Overviews are shown, with just 8% of users clicking any link, and only 1% clicking within the overview itself.

While Google claims AI Overviews monetise at the same rate as regular search, publishers are left out unless users click through, which they rarely do.

Google has started testing ads within the summaries and is reportedly negotiating licensing deals with select publishers, hinting at a possible revenue-sharing shift. Meanwhile, regulators in the US and EU are scrutinising whether the feature violates antitrust laws or misuses content.

Industry experts warn of a looming ‘Google Zero’ future — a web where search traffic dries up and AI-generated answers dominate.

As visibility in search becomes more about entity recognition than page ranking, publishers and marketers must rethink how they maintain relevance in an increasingly post-click environment.

VPN interest surges in the UK as users bypass porn site age checks

Online searches for VPNs skyrocketed in the UK following the introduction of new age verification rules on adult websites such as PornHub, YouPorn and RedTube.

Under the Online Safety Act, these platforms must confirm that visitors are over 18 using facial recognition, photo ID or credit card details.

Data from Google Trends showed that searches for ‘VPN’ jumped by over 700 percent on Friday morning, suggesting many users are attempting to sidestep the restrictions by masking their location. VPN services route traffic through servers in other countries, making a device appear to be outside the UK and beyond the reach of the new rules.

Critics argue that the measures are both ineffective and risky. Aylo, the company behind PornHub, called the checks ‘haphazard and dangerous’, warning they put users’ privacy at risk.

Legal experts also doubt the system’s impact, saying it fails to block access to dark web content or unregulated forums.

Aylo proposed that age verification should occur on users’ devices instead of websites storing sensitive information. The company stated it is open to working with governments, civil groups and tech firms to develop a safer, device-based system that protects privacy while enforcing age limits.

Amazon exit highlights deepening AI divide between US and China

Amazon’s quiet wind-down of its Shanghai AI lab underscores a broader shift in global research dynamics, as escalating tensions between the US and China reshape how tech giants operate across borders.

Instead of expanding innovation hubs in China, major American firms are increasingly dismantling them.

The AWS lab, once central to Amazon’s AI research, produced tools said to have generated nearly $1bn in revenue, along with over 100 academic papers.

Yet its dissolution reflects a growing push from Washington to curb China’s access to cutting-edge technology, including restrictions on advanced chips and cloud services.

With IBM and Microsoft also scaling back operations or relocating talent away from mainland China, a pattern is emerging: strategic retreat. Rather than risk compliance issues or regulatory scrutiny, US tech companies are choosing to restructure globally and reduce their presence in China altogether.

With Amazon already having exited its Chinese ebook and ecommerce markets, the shuttering of its AI lab signals more than a single closure — it reflects a retreat from joint innovation and a widening technological divide that may shape the future of AI competition.

Meta tells Australia AI needs real user data to work

Meta, the parent company of Facebook, Instagram, and WhatsApp, has urged the Australian government to harmonise privacy regulations with international standards, warning that stricter local laws could hamper AI development. The comments came in Meta’s submission to the Productivity Commission’s review on harnessing digital technology, published this week.

Australia is undergoing its most significant privacy reform in decades. The Privacy and Other Legislation Amendment Bill 2024, passed in November and given royal assent in December, introduces stricter rules around handling personal and sensitive data. The rules are expected to take effect throughout 2024 and 2025.

Meta maintains that generative AI systems depend on access to large, diverse datasets and cannot rely on synthetic data alone. In its submission, the company argued that publicly available information, like legislative texts, fails to reflect the cultural and conversational richness found on its platforms.

Meta said its platforms capture the ways Australians express themselves, making them essential to training models that can understand local culture, slang, and online behaviour. It added that restricting access to such data would make AI systems less meaningful and effective.

The company has faced growing scrutiny over its data practices. In 2024, it confirmed using Australian Facebook data to train AI models, although users in the EU have the option to opt out—an option not extended to Australian users.

Pushback from regulators in Europe forced Meta to delay its plans for AI training in the EU and UK, though it resumed these efforts in 2025.

Australia’s Office of the Australian Information Commissioner has issued guidance on AI development and commercial deployment, highlighting growing concerns about transparency and accountability. Meta argues that diverging national rules create conflicting obligations, which could reduce the efficiency of building safe and age-appropriate digital products.

Critics claim Meta is prioritising profit over privacy, and insist that any use of personal data for AI should be based on informed consent and clearly demonstrated benefits. The regulatory debate is intensifying at a time when Australia’s outdated privacy laws are being modernised to protect users in the AI age.

The Productivity Commission’s review will shape how the country balances innovation with safeguards. As a key market for Meta, Australia’s decisions could influence regulatory thinking in other jurisdictions confronting similar challenges.

Teens turn to AI for advice and friendship

A growing number of US teens rely on AI for daily decision‑making and emotional support, turning to chatbots such as ChatGPT, Character.AI and Replika. One Kansas student said she uses AI to simplify everyday tasks, from choosing clothes to planning events, though she avoids using it for schoolwork.

A survey by Common Sense Media reveals that over 70 per cent of teenagers have tried AI companions, with around half using them regularly. Roughly a third reported discussing serious issues with AI, sometimes finding it as satisfying as, or more satisfying than, talking with friends.

Experts express concern that such frequent AI interactions could hinder the development of creativity, critical thinking and social skills in young people. The study warns that adolescents may come to depend on AI’s constant validation, missing out on real‑world emotional growth.

Educators caution that while AI offers constant, non‑judgemental feedback, it is not a replacement for authentic human relationships. They recommend AI use be carefully supervised to ensure it complements rather than replaces real interaction.

Quantum computing faces roadblocks to real-world use

Quantum computing holds vast promise for sectors from climate modelling to drug discovery and AI, but it remains far from mainstream due to significant barriers. The fragility of qubits, the shortage of scalable quantum software, and the immense number of qubits required continue to limit progress.

Keeping qubits stable is one of the most significant technical obstacles: most remain coherent for mere microseconds before errors creep in. Current solutions rely on extreme cooling and specialised equipment, which remain expensive and impractical for widespread use.

Even the most advanced systems today operate with a fraction of the qubits needed for practical applications, while software options remain scarce and highly tailored. Businesses exploring quantum solutions must often build their tools from scratch, adding to the cost and complexity.

Beyond technology, the field faces social and structural challenges. A lack of skilled professionals and fears around unequal access could see quantum benefits restricted to big tech firms and governments.

Security is another looming concern, as future quantum machines may be capable of breaking current encryption standards. Policymakers and businesses must develop defences before such systems become widely available.

AI may accelerate progress in both directions. Quantum computing can supercharge model training and simulation, while AI is already helping to improve qubit stability and propose new hardware designs.

Trump AI strategy targets China and cuts red tape

The Trump administration has revealed a sweeping new AI strategy to cement US dominance in the global AI race, particularly against China.

The 25-page ‘America’s AI Action Plan’ proposes 90 policy initiatives, including building new data centres nationwide, easing regulations, and expanding exports of AI tools to international allies.

White House officials stated the plan will boost AI development by scrapping federal rules seen as restrictive and speeding up construction permits for data infrastructure.

A key element involves monitoring Chinese AI models for alignment with Communist Party narratives, while promoting ‘ideologically neutral’ systems within the US. Critics argue the approach undermines efforts to reduce bias and favours politically motivated AI regulation.

The action plan also supports increased access to federal land for AI-related construction and seeks to reverse key environmental protections. Analysts have raised concerns over energy consumption and rising emissions linked to AI data centres.

While the White House claims AI will complement jobs rather than replace them, recent mass layoffs at Indeed and Salesforce suggest otherwise.

Despite the controversy, investors responded with cautious optimism. AI stocks saw mixed trading, with NVIDIA, Palantir and Oracle gaining while Alphabet slipped slightly. Analysts described the move as a ‘watershed moment’ for US tech, signalling an aggressive stance in the global AI arms race.
