Monnett is a European-built social media platform designed to give people control over their online feeds. Users can choose exactly what they see, prioritise friends’ posts, and opt out of surveillance-style recommendation systems that dominate other networks.
Unlike mainstream platforms, Monnett puts privacy first: there is no profiling or sale of user data, and private chats are protected rather than mined for advertising. The platform also keeps “AI slop” and other generative AI content from shaping people’s feeds, emphasising human-centred interaction.
Built in Luxembourg, at the heart of Europe, Monnett reflects a growing push for digital sovereignty in the European Union, where citizens, regulators and developers want more control over how their digital spaces are governed and how personal data is treated.
Core features include full customisation of the feed algorithm, no shadowbans, strong privacy safeguards, and a focus on genuine social connection. Monnett aims to win over users who prefer meaningful online interaction to addictive feeds and opaque data practices.
Meta Platforms has announced a temporary pause on teenagers’ access to AI characters across its platforms, including Instagram and WhatsApp, saying it will use the pause to review and rebuild the feature for younger users.
Meta said the restriction will apply to users identified as minors based on declared ages or its internal age-prediction systems. Teenagers will still be able to use Meta’s core AI assistant, but interactive AI characters will be unavailable.
The move comes ahead of a major child safety trial in Los Angeles involving Meta, TikTok and YouTube, in which plaintiffs allege that social media platforms harm children through addictive and unsafe design features.
Concerns about AI chatbots and minors have grown across the US, prompting similar action from other companies, as regulators and courts increasingly scrutinise how AI interactions affect young users.
Australia’s social media ban for under-16s is worrying social media companies. According to the country’s eSafety Commissioner, major platforms resisted the policy because they fear that similar restrictions could spread internationally and set a global precedent.
The ban has already led to the closure of 4.7 million child-linked accounts across platforms including Instagram, TikTok and Snapchat. Authorities argue the measures are necessary to protect children from harmful algorithms and addictive design.
Social media companies operating in Australia, including Meta, say stronger safeguards are needed but oppose a blanket ban. Critics have warned about privacy risks, while regulators insist early data shows limited migration to alternative platforms.
Australia is now working with partners such as the UK to push for tougher global standards on online child safety, and fines of up to A$49.5m may be imposed on companies that fail to enforce the rules effectively.
Gulf states are accelerating AI investment to drive diversification, while regulators struggle to keep pace with rapid technological change. Saudi Arabia, the UAE, and Qatar are deploying AI across key sectors while pursuing regional leadership in digital innovation.
Despite political commitment and large-scale funding, policymakers are finding it difficult to balance innovation with risk management. AI’s rapid pace and global reach strain governance, while reliance on foreign technology raises sovereignty and security concerns.
Corporate influence, intensifying geopolitical competition, and the urgent race to attract foreign capital further complicate oversight efforts, constraining regulators’ ability to impose robust and forward-looking governance frameworks.
With AI increasingly viewed as a source of economic and strategic power, Gulf governments face a narrowing window to establish effective regulatory frameworks before the technology becomes deeply embedded across critical infrastructure.
Nike has launched an internal investigation following claims by the WorldLeaks cybercrime group that company data was stolen from its systems.
The sportswear giant said it is assessing a potential cybersecurity incident after the group listed Nike on its Tor leak site and published a large volume of files allegedly taken during the intrusion.
WorldLeaks claims to have released approximately 1.4 terabytes of data, comprising more than 188,000 files. The group is known for data theft and extortion tactics, pressuring organisations to pay by threatening public disclosure instead of encrypting systems with ransomware.
The cybercrime operation emerged in 2025 after rebranding from Hunters International, a ransomware gang active since 2023. Increased law enforcement pressure reportedly led the group to abandon encryption-based attacks and focus exclusively on stealing sensitive corporate data.
The incident adds to growing concerns across the retail and apparel sector, following a recent breach at Under Armour that exposed tens of millions of customer records.
Nike has stated that consumer privacy and data protection remain priorities while the investigation continues.
US companies are increasingly adopting Chinese AI models as part of their core technology stacks, raising questions about global leadership in AI. Pinterest has confirmed it is using Chinese-developed models to improve recommendations and shopping features.
Executives point to open-source Chinese models such as DeepSeek and tools from Alibaba as faster, cheaper and easier to customise, saying these models can outperform proprietary alternatives at a fraction of the cost.
Adoption extends beyond Pinterest, with Airbnb also relying on Chinese AI to power customer service tools. Data from Hugging Face shows Chinese models frequently rank among the most downloaded worldwide, including among US developers.
Researchers at Stanford University have found that Chinese AI capabilities now match or exceed those of global peers. Meanwhile, US firms such as OpenAI and Meta remain focused on proprietary systems, leaving China to dominate open-source AI development.
AI has dominated discussions at the World Economic Forum in Davos, where IMF managing director Kristalina Georgieva warned that labour markets are already undergoing rapid structural disruption.
According to Georgieva, demand for skills is shifting unevenly, with productivity gains benefiting some workers while younger people and first-time job seekers face shrinking opportunities.
Entry-level roles are particularly exposed as AI systems absorb routine and clerical tasks traditionally used to gain workplace experience.
Georgieva described the effect on young workers as comparable to a labour-market tsunami, arguing that reduced access to foundational roles risks long-term scarring for an entire generation entering employment.
IMF research suggests AI could affect roughly 60 percent of jobs in advanced economies and 40 percent globally, with only about half of exposed workers likely to benefit.
For others, automation may lead to lower wages, slower hiring and intensified pressure on middle-income roles lacking AI-driven productivity gains.
At Davos 2026, Georgieva warned that the rapid, unregulated deployment of AI in advanced economies risks outpacing public policy responses.
She argued that without clear guardrails and inclusive labour strategies, technological acceleration could deepen inequality rather than support broad-based economic resilience.
UN agencies have issued a stark warning over the accelerating risks AI poses to children online, citing rising cases of grooming, deepfakes, cyberbullying and sexual extortion.
A joint statement published on 19 January calls for urgent global action, highlighting how AI tools increasingly enable predators to target vulnerable children with unprecedented precision.
Recent data underscores the scale of the threat, with technology-facilitated child abuse cases in the US surging from 4,700 in 2023 to more than 67,000 in 2024.
During the COVID-19 pandemic, online exploitation intensified, particularly affecting girls and young women, with digital abuse frequently translating into real-world harm, according to officials from the International Telecommunication Union.
Governments are tightening policies, led by Australia’s social media ban for under-16s, as the UK, France and Canada consider similar measures. UN agencies urged tech firms to prioritise child safety and called for stronger AI literacy across society.
Apple has accused the European Commission of preventing it from implementing App Store changes designed to comply with the Digital Markets Act, following a €500 million fine for breaching the regulation.
The company claims it submitted a formal compliance plan in October and has yet to receive a response from EU officials.
In a statement, Apple argued that the Commission requested delays while gathering market feedback, a process the company says lasted several months and lacked a clear legal basis.
The US tech giant described the enforcement approach as politically motivated and excessively burdensome, accusing the EU of unfairly targeting an American firm.
The Commission has rejected those claims, saying discussions with Apple remain ongoing and emphasising that any compliance measures must support genuinely viable alternative app stores.
Officials pointed to the emergence of multiple competing marketplaces after the DMA entered into force as evidence of market demand.
Scrutiny has increased following Setapp Mobile’s decision to shut down its iOS app store in February, with the developer citing complex and evolving business terms.
Questions remain over whether Apple’s proposed shift towards commission-based fees and expanded developer communication rights will satisfy EU regulators.
OpenAI’s latest GPT-5.2 model has sparked concern after repeatedly citing Grokipedia, an AI-generated encyclopaedia launched by Elon Musk’s xAI, raising fresh fears of misinformation amplification.
Testing by The Guardian showed the model referencing Grokipedia multiple times when answering questions on geopolitics and historical figures.
Launched in October 2025, the AI-generated platform positions itself as a rival to Wikipedia but relies solely on automated content without human editing. Critics warn that the lack of human oversight raises the risk of factual errors and ideological bias, and the platform has already been accused of promoting controversial narratives.
OpenAI said its systems use safety filters and diverse public sources, while xAI dismissed the concerns as media distortion. The episode deepens scrutiny of AI-generated knowledge platforms amid growing regulatory and public pressure for transparency and accountability.