Public consultation flaws risk undermining Digital Fairness Act debate

As the European Commission’s public consultation on the Digital Fairness Act enters its final phase, growing criticism points to flaws in how citizen feedback is collected.

Critics say the survey’s structure favours those who support additional regulation while restricting opportunities for dissenting voices to explain their reasoning. The issue raises concerns over how such results may influence the forthcoming impact assessment.

The Call for Evidence and Public Consultation, hosted on the Have Your Say portal, allows only supporters of the Commission’s initiative to provide detailed responses. Those who oppose new regulation are reportedly limited to choosing a single option with no open field for justification.

Such an approach risks producing a partial view of European opinion rather than a balanced reflection of stakeholders’ perspectives.

Experts argue that this design contradicts the EU’s Better Regulation principles, which emphasise inclusivity and objectivity.

They urge the Commission to raise its methodological standards, ensuring surveys are neutral, questions are not loaded, and all respondents can present argument-based reasoning. Without these safeguards, consultations may become instruments of validation instead of genuine democratic participation.

Advocates for reform believe the Commission’s influence could set a positive precedent for the entire policy ecosystem. By promoting fairer consultation practices, the EU could encourage both public and private bodies to engage more transparently with Europe’s diverse digital community.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Wikipedia faces traffic decline as AI and social video reshape online search

Wikipedia’s human traffic has fallen by 8% over the past year, a decline the Wikimedia Foundation attributes to changing information habits driven by AI and social media.

The foundation’s Marshall Miller explained that updates to Wikipedia’s bot detection system showed much of an earlier traffic surge had come from undetected bots, exposing a sharper drop in genuine visits.

Miller pointed to the growing use of AI-generated search summaries and the rise of short-form video as key factors. Search engines now provide direct answers using generative AI instead of linking to external sources, while younger users increasingly turn to social video platforms rather than traditional websites.

Although Wikipedia’s knowledge continues to feed AI models, fewer people are reaching the original source.

The foundation warns that the shift poses risks to Wikipedia’s volunteer-driven ecosystem and donation-based model. With fewer visitors, fewer contributors may update content and fewer donors may provide financial support.

Miller urged AI companies and search engines to direct users back to the encyclopedia, ensuring both transparency and sustainability.

Wikipedia is responding by developing a new framework for content attribution and expanding efforts to reach new readers. The foundation also encourages users to support human-curated knowledge by citing original sources and recognising the people behind the information that powers AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian students get 12 months of Google Gemini Pro at no cost

Google has launched a free twelve-month Gemini Pro plan for students in Australia aged eighteen and over, aiming to make AI-powered learning more accessible.

The offer includes the company’s most advanced tools and features designed to enhance study efficiency and critical thinking.

A key addition is Guided Learning mode, which acts as a personal AI coach. Instead of quick answers, it walks students through complex subjects step by step, encouraging a deeper understanding of concepts.

Gemini now also integrates diagrams, images and YouTube videos into responses to make lessons more visual and engaging.

Students can create flashcards, quizzes and study guides automatically from their own materials, helping them prepare for exams more effectively. The Gemini Pro account upgrade provides access to Gemini 2.5 Pro, Deep Research, NotebookLM, Veo 3 for short video creation, and Jules, an AI coding assistant.

With two terabytes of storage and the full suite of Google’s AI tools, the Gemini app aims to support Australian students in their studies and skill development throughout the academic year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta champions open hardware to power the next generation of AI data centres

US tech giant Meta believes open hardware will define the future of AI data centre infrastructure. Speaking at the Open Compute Project Global Summit, the company outlined a series of innovations designed to make large-scale AI systems more efficient, sustainable, and collaborative.

Meta, one of the OCP’s founding members, said open source hardware remains essential to scaling the physical infrastructure required for the next generation of AI.

During the summit, Meta joined industry peers in supporting OCP’s Open Data Center Initiative, which calls for shared standards in power, cooling, and mechanical design.

The company also unveiled a new generation of network fabrics for AI training clusters, integrating NVIDIA’s Spectrum Ethernet to enable greater flexibility and performance.

As part of the effort, Meta became an initiating member of Ethernet for Scale-Up Networking, aiming to strengthen connectivity across increasingly complex AI systems.

Meta further introduced the Open Rack Wide (ORW) form factor, an open source data rack standard optimised for the power and cooling demands of modern AI.

Built on ORW specifications, AMD’s new Helios rack was presented as the most advanced AI rack yet, embodying the shift toward interoperable and standardised infrastructure.

Meta also showcased new AI hardware platforms built to improve performance and serviceability for large-scale generative AI workloads.

Sustainability remains central to Meta’s strategy. The company presented ‘Design for Sustainability’, a framework to reduce hardware emissions through modularity, reuse, and extended lifecycles.

It also shared how its Llama AI models help track emissions across millions of components, and said it will continue to build on these open hardware and sustainability efforts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy laws block cross-border crypto regulation progress

Regulators continue to face hurdles in overseeing global crypto markets as privacy laws block effective cross-border data sharing, the Financial Stability Board warned. Sixteen years after Bitcoin’s launch, regulation remains inconsistent, with differing national approaches causing data gaps and fragmented oversight.

The FSB, hosted by the Bank for International Settlements, said secrecy laws hinder authorities from monitoring risks and sharing information. Some jurisdictions block data sharing with foreign regulators, while others delay cooperation over privacy and reciprocity concerns.

According to the report, addressing these legal and institutional barriers is essential to improving cross-border collaboration and ensuring more effective global oversight of crypto markets.

However, the FSB noted that reliable data on digital assets remain scarce, as regulators rely heavily on incomplete or inconsistent sources from commercial data providers.

Despite the growing urgency to monitor financial stability risks, little progress has been made since similar concerns were raised nearly four years ago. The FSB has yet to outline concrete solutions for bridging the gap between data privacy protection and effective crypto regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI becomes a new spiritual guide for worshippers in India

Across India, a growing number of worshippers are using AI for spiritual guidance. From chatbots like GitaGPT to robotic deities in temples, technology is changing how people connect with faith.

Apps trained on Hindu scriptures offer personalised advice, often serving as companions for those seeking comfort and purpose in a rapidly changing world.

Developers such as Vikas Sahu have built AI chatbots based on the Bhagavad Gita, attracting thousands of users in just days. Major organisations like the Isha Foundation have also adopted AI to deliver ancient wisdom through modern apps, blending spiritual teachings with accessibility.

Large religious gatherings, including the Maha Kumbh Mela, now use AI tools and virtual reality to guide and connect millions of devotees.

While many find inspiration in AI-guided spirituality, experts warn of ethical and cultural challenges. Anthropologist Holly Walters notes that users may perceive AI-generated responses as divine truth, which could distort traditional belief systems.

Oxford researcher Lyndon Drake adds that AI might challenge the authority of religious leaders, as algorithms shape interpretations of sacred texts.

Despite the risks, faith-driven AI continues to thrive. For some devotees, digital gods and chatbots offer something traditional structures often cannot: immediate, non-judgemental access to spiritual guidance at any time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Harvard’s health division supports AI-powered medical learning

Harvard Health Publishing has partnered with Microsoft to use its health content to train the Copilot AI system. The collaboration seeks to enhance the accuracy of healthcare responses on Microsoft’s AI platform, according to the Wall Street Journal.

HHP publishes consumer health resources reviewed by Harvard scientists, covering topics such as sleep, nutrition, and pain management. The institution confirmed that Microsoft has paid to license its articles, expanding a previous agreement made in 2022.

The move is designed to make medically verified information more accessible to the public through Copilot, which now reaches over 33 million users.

Harvard’s Soroush Saghafian said the deal could help cut errors in AI-generated medical advice, a key concern in healthcare. He emphasised the importance of rigorous testing before deployment, warning that unverified tools could pose serious risks to users.

Harvard continues to invest in AI research and integration across its academic programmes. Recent initiatives include projects to address bias in medical training and studies exploring AI’s role in drug development and cancer treatment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta previews parental controls over teen AI character chats

Meta has previewed upcoming parental control features for its AI experiences, particularly aimed at teens’ interactions with AI characters. The new tools are expected to roll out next year.

Under the proposed controls, parents will be able to turn off chats between teens and AI characters altogether, though the broader Meta AI chatbot remains accessible. They can also block specific characters if they wish. Parents will receive topic summaries of what teens are discussing with AI characters and with Meta AI itself.

The first deployment will be on Instagram, with initial availability in English for the US, UK, Canada and Australia. Meta says it recognises the challenges parents face in guiding children through new technology, and wants these tools to simplify oversight.

Meta also notes that AI content and experiences intended for teens will follow a PG-13 standard: avoiding extreme violence, nudity and graphic drug content. Teens currently interact with only a limited set of AI characters under age-appropriate guidelines.

Additionally, Meta plans to allow time limits on AI character use by teens. The company is also detecting and discouraging attempts by users to falsify their age to bypass restrictions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta to pull all political ads in EU ahead of new transparency law

Meta Platforms has said it will stop selling and showing political, electoral and social issue advertisements across its services in the European Union from early October 2025. The decision follows the EU’s Transparency and Targeting of Political Advertising (TTPA) regulation coming into full effect on 10 October.

Under TTPA, platforms will be required to clearly label political ads, disclose the sponsor, the election or social issue at hand, the amounts paid, and how the ads are targeted. These obligations also include strict conditions on targeting and require explicit consent for certain data use.

Meta said the requirements create ‘significant operational challenges and legal uncertainties’ and labelled parts of the new rules ‘unworkable’ for advertisers and platforms. It said that personalised ads are widely used for issue-based campaigns and that limiting them might restrict how people access political or social issue-related information.

The company joins Google, which made a similar move last year citing comparable concerns about TTPA compliance.

While Meta will no longer sell or show paid political ads in the EU, organic political content (e.g. users posting or sharing political views) will still be permitted.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AWS glitch triggers widespread outages across major apps

A major internet outage hit some of the world’s biggest apps and sites from about 9 a.m. CET Monday, with issues traced to Amazon Web Services. Tracking sites reported widespread failures across the US and beyond, disrupting consumer and enterprise services.

AWS cited ‘significant error rates’ for DynamoDB requests in its US-EAST-1 region, affecting additional services hosted in Northern Virginia. Engineers are working to mitigate the issue while investigating the root cause, and some customers have been unable to create or update Support Cases.
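AWS has not yet published a full account of the fault, but one routine client-side safeguard during episodes of elevated DynamoDB error rates is to enable retries with backoff. The sketch below is purely illustrative and assumes boto3’s standard retry configuration; the table name and key are placeholders, not details from the incident.

```python
# Illustrative sketch only: enabling boto3's adaptive retry mode so DynamoDB
# calls back off and retry during elevated error rates rather than failing
# immediately. The table name and key are placeholders, not incident details.
import boto3
from botocore.config import Config

retry_config = Config(
    region_name="us-east-1",      # the region named in the AWS status update
    retries={
        "max_attempts": 10,       # total attempts, including the first call
        "mode": "adaptive",       # client-side rate limiting plus exponential backoff
    },
)

dynamodb = boto3.client("dynamodb", config=retry_config)

# Reads are retried with backoff instead of surfacing transient errors immediately
response = dynamodb.get_item(
    TableName="example-table",
    Key={"pk": {"S": "user#123"}},
)
print(response.get("Item"))
```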

Outages clustered around Virginia’s dense data-centre corridor but rippled globally. Impacted brands included Amazon, Google, Snapchat, Roblox, Fortnite, Canva, Coinbase, Slack, Signal, Vodafone and the UK tax authority HMRC.

Coinbase told users ‘all funds are safe’ as platforms struggled to authenticate, fetch data and serve content tied to affected back-ends. Third-party monitors noted elevated failure rates across APIs and app logins.

The incident underscores heavy reliance on hyperscale infrastructure and the blast radius when core data services falter. Full restoration and a formal post-mortem are pending from AWS.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!