Bitcoin wallet vulnerability exposes thousands of private keys

A flaw in the widely used Libbitcoin Explorer (bx) 3.x series has exposed over 120,000 Bitcoin private keys, according to crypto wallet provider OneKey. The flaw stemmed from a weak pseudorandom number generator seeded with the system time, which made wallet keys predictable.

Attackers aware of wallet creation times could reconstruct private keys and access funds.

Several wallets were affected, including versions of Trust Wallet Extension and Trust Wallet Core prior to patched releases. Researchers said the Mersenne Twister generator’s 32-bit seed space, with only around 4.3 billion possible values, let attackers automate brute-force searches and recreate private keys, and may explain past fund losses such as the ‘Milk Sad’ incidents.

OneKey confirmed its own wallets remain secure, using cryptographically strong random number generation and hardware Secure Elements certified to global security standards.

OneKey also examined its software wallets, ensuring that desktop, browser, Android, and iOS versions rely on secure system-level entropy sources. The firm urged long-term crypto holders to use hardware wallets and avoid importing software-generated mnemonics to reduce risk.

The company emphasised that wallet security depends on the integrity of the device and operating environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tailored pricing is here and personal data is the price signal

AI is quietly changing how prices are set online. Beyond demand-based shifts, companies increasingly tailor offers to individuals, using browsing history, purchase habits, device, and location to predict willingness to pay. Two shoppers may see different prices for the same product at the same moment.

Dynamic pricing raises or lowers prices for everyone as conditions change, such as school-holiday airfares or hotel rates during major events. Personalised pricing goes further by shaping offers for specific users, rewarding cart abandoners with discounts while charging infrequent shoppers a premium.

Platforms mine clicks, time on page, past purchases, and abandoned baskets to build profiles. Experiments show targeted discounts can lift sales while capping promotional spend, demonstrating that engineered prices work at scale. The result: you may not see a ‘standard’ price, but one designed for you.
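To make the mechanism concrete, here is a toy sketch of how a personalised-pricing rule might adjust a base price from a behavioural profile. The rules and multipliers are entirely hypothetical, invented for illustration; real systems use learned models, but the structure (profile in, individual price out) is the same:

```python
def personalised_price(base: float, profile: dict) -> float:
    """Adjust a base price using behavioural signals (illustrative rules only)."""
    price = base
    if profile.get("abandoned_cart"):           # win back hesitant shoppers
        price *= 0.90                           # 10% targeted discount
    if profile.get("visits_per_month", 0) < 1:  # infrequent shoppers
        price *= 1.05                           # pay a small premium
    if profile.get("device") == "ios":          # device as an income proxy
        price *= 1.03
    return round(price, 2)

# Two shoppers, same product, same moment, different prices:
print(personalised_price(100.0, {"abandoned_cart": True, "visits_per_month": 4}))  # 90.0
print(personalised_price(100.0, {"visits_per_month": 0, "device": "ios"}))         # 108.15
```

Note that the third rule is exactly the kind of income proxy the article flags as a fairness risk: the shopper never consented to their device model setting their price.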

The risks are mounting. Income proxies such as postcode or device can entrench inequality, while hidden algorithms erode trust when buyers later find cheaper prices. Accountability is murky if tailored prices mislead, discriminate, or breach consumer protections without clear disclosure.

Regulators are moving. A competition watchdog in Australia has flagged transparency gaps, unfair trading risks, and the need for algorithmic disclosure. Businesses now face a twin test: deploy AI pricing with consent, explainability, and opt-outs, and prove it delivers value without crossing ethical lines.

Public consultation flaws risk undermining Digital Fairness Act debate

As the European Commission’s public consultation on the Digital Fairness Act enters its final phase, growing criticism points to flaws in how citizen feedback is collected.

Critics say the survey’s structure favours those who support additional regulation while restricting opportunities for dissenting voices to explain their reasoning. The issue raises concerns over how such results may influence the forthcoming impact assessment.

The Call for Evidence and Public Consultation, hosted on the Have Your Say portal, allows only supporters of the Commission’s initiative to provide detailed responses. Those who oppose new regulation are reportedly limited to choosing a single option with no open field for justification.

Such an approach risks producing a partial view of European opinion rather than a balanced reflection of stakeholders’ perspectives.

Experts argue that this design contradicts the EU’s Better Regulation principles, which emphasise inclusivity and objectivity.

They urge the Commission to raise its methodological standards, ensuring surveys are neutral, questions are not loaded, and all respondents can present argument-based reasoning. Without these safeguards, consultations may become instruments of validation instead of genuine democratic participation.

Advocates for reform believe the Commission’s influence could set a positive precedent for the entire policy ecosystem. By promoting fairer consultation practices, the EU could encourage both public and private bodies to engage more transparently with Europe’s diverse digital community.

Wikipedia faces traffic decline as AI and social video reshape online search

Wikipedia’s human traffic has fallen by 8% over the past year, a decline the Wikimedia Foundation attributes to changing information habits driven by AI and social media.

The foundation’s Marshall Miller explained that updates to Wikipedia’s bot detection system showed much of the earlier traffic surge had come from undetected bots, exposing a sharper drop in genuine visits.

Miller pointed to the growing use of AI-generated search summaries and the rise of short-form video as key factors. Search engines now provide direct answers using generative AI instead of linking to external sources, while younger users increasingly turn to social video platforms rather than traditional websites.

Although Wikipedia’s knowledge continues to feed AI models, fewer people are reaching the original source.

The foundation warns that the shift poses risks to Wikipedia’s volunteer-driven ecosystem and donation-based model. With fewer visitors, fewer contributors may update content and fewer donors may provide financial support.

Miller urged AI companies and search engines to direct users back to the encyclopedia, ensuring both transparency and sustainability.

Wikipedia is responding by developing a new framework for content attribution and expanding efforts to reach new readers. The foundation also encourages users to support human-curated knowledge by citing original sources and recognising the people behind the information that powers AI systems.

Australian students get 12 months of Google Gemini Pro at no cost

Google has launched a free twelve-month Gemini Pro plan for students in Australia aged eighteen and over, aiming to make AI-powered learning more accessible.

The offer includes the company’s most advanced tools and features designed to enhance study efficiency and critical thinking.

A key addition is Guided Learning mode, which acts as a personal AI coach. Instead of quick answers, it walks students through complex subjects step by step, encouraging a deeper understanding of concepts.

Gemini now also integrates diagrams, images and YouTube videos into responses to make lessons more visual and engaging.

Students can create flashcards, quizzes and study guides automatically from their own materials, helping them prepare for exams more effectively. The Gemini Pro account upgrade provides access to Gemini 2.5 Pro, Deep Research, NotebookLM, Veo 3 for short video creation, and Jules, an AI coding assistant.

With two terabytes of storage and the full suite of Google’s AI tools, the Gemini app aims to support Australian students in their studies and skill development throughout the academic year.

Meta champions open hardware to power the next generation of AI data centres

US tech giant Meta believes open hardware will define the future of AI data centre infrastructure. Speaking at the Open Compute Project Global Summit, the company outlined a series of innovations designed to make large-scale AI systems more efficient, sustainable, and collaborative.

Meta, one of the OCP’s founding members, said open source hardware remains essential to scaling the physical infrastructure required for the next generation of AI.

During the summit, Meta joined industry peers in supporting OCP’s Open Data Center Initiative, which calls for shared standards in power, cooling, and mechanical design.

The company also unveiled a new generation of network fabrics for AI training clusters, integrating NVIDIA’s Spectrum Ethernet to enable greater flexibility and performance.

As part of the effort, Meta became an initiating member of Ethernet for Scale-Up Networking, aiming to strengthen connectivity across increasingly complex AI systems.

Meta further introduced the Open Rack Wide (ORW) form factor, an open source data rack standard optimised for the power and cooling demands of modern AI.

Built on ORW specifications, AMD’s new Helios rack was presented as the most advanced AI rack yet, embodying the shift toward interoperable and standardised infrastructure.

Meta also showcased new AI hardware platforms built to improve performance and serviceability for large-scale generative AI workloads.

Sustainability remains central to Meta’s strategy. The company presented ‘Design for Sustainability’, a framework to reduce hardware emissions through modularity, reuse, and extended lifecycles.

It also shared how its Llama AI models help track emissions across millions of components.

Privacy laws block cross-border crypto regulation progress

Regulators continue to face hurdles in overseeing global crypto markets as privacy laws block effective cross-border data sharing, the Financial Stability Board warned. Sixteen years after Bitcoin’s launch, regulation remains inconsistent, with differing national approaches causing data gaps and fragmented oversight.

The FSB, hosted by the Bank for International Settlements, said secrecy laws prevent authorities from monitoring risks and sharing information. Some jurisdictions block data sharing with foreign regulators, while others delay cooperation over privacy and reciprocity concerns.

According to the report, addressing these legal and institutional barriers is essential to improving cross-border collaboration and ensuring more effective global oversight of crypto markets.

However, the FSB noted that reliable data on digital assets remain scarce, as regulators rely heavily on incomplete or inconsistent sources from commercial data providers.

Despite the growing urgency to monitor financial stability risks, little progress has been made since similar concerns were raised nearly four years ago. The FSB has yet to outline concrete solutions for bridging the gap between data privacy protection and effective crypto regulation.

AI becomes a new spiritual guide for worshippers in India

Across India, a growing number of worshippers are using AI for spiritual guidance. From chatbots like GitaGPT to robotic deities in temples, technology is changing how people connect with faith.

Apps trained on Hindu scriptures offer personalised advice, often serving as companions for those seeking comfort and purpose in a rapidly changing world.

Developers such as Vikas Sahu have built AI chatbots based on the Bhagavad Gita, attracting thousands of users in just days. Major organisations like the Isha Foundation have also adopted AI to deliver ancient wisdom through modern apps, blending spiritual teachings with accessibility.

Large religious gatherings, including the Maha Kumbh Mela, now use AI tools and virtual reality to guide and connect millions of devotees.

While many find inspiration in AI-guided spirituality, experts warn of ethical and cultural challenges. Anthropologist Holly Walters notes that users may perceive AI-generated responses as divine truth, which could distort traditional belief systems.

Oxford researcher Lyndon Drake adds that AI might challenge the authority of religious leaders, as algorithms shape interpretations of sacred texts.

Despite the risks, faith-driven AI continues to thrive. For some devotees, digital gods and chatbots offer something traditional structures often cannot: immediate, non-judgemental access to spiritual guidance at any time.

Harvard’s health division supports AI-powered medical learning

Harvard Health Publishing has partnered with Microsoft to use its health content to train the Copilot AI system. The collaboration seeks to enhance the accuracy of healthcare responses on Microsoft’s AI platform, according to the Wall Street Journal.

HHP publishes consumer health resources reviewed by Harvard scientists, covering topics such as sleep, nutrition, and pain management. The institution confirmed that Microsoft has paid to license its articles, expanding a previous agreement made in 2022.

The move is designed to make medically verified information more accessible to the public through Copilot, which now reaches over 33 million users.

Harvard’s Soroush Saghafian said the deal could help cut errors in AI-generated medical advice, a key concern in healthcare. He emphasised the importance of rigorous testing before deployment, warning that unverified tools could pose serious risks to users.

Harvard continues to invest in AI research and integration across its academic programmes. Recent initiatives include projects to address bias in medical training and studies exploring AI’s role in drug development and cancer treatment.

Meta previews parental controls over teen AI character chats

Meta has previewed upcoming parental control features for its AI experiences, particularly aimed at teens’ interactions with AI characters. The new tools are expected to roll out next year.

Under the proposed controls, parents will be able to turn off chats between teens and AI characters altogether, though the broader Meta AI chatbot remains accessible. They can also block specific characters if they wish. Parents will receive topic summaries of what teens are discussing with AI characters and with Meta AI itself.

The first deployment will be on Instagram, with initial availability in English for the US, UK, Canada and Australia. Meta says it recognises the challenges parents face in guiding children through new technology, and wants these tools to simplify oversight.

Meta also notes that AI content and experiences intended for teens will follow a PG-13 standard: avoiding extreme violence, nudity and graphic drug content. Teens currently interact with only a limited set of AI characters under age-appropriate guidelines.

Additionally, Meta plans to allow time limits on teens’ use of AI characters. The company is also working to detect and discourage attempts by users to falsify their age to bypass restrictions.
