AI-generated images used in jewellery scam

A jeweller in Hove is fielding daily complaints meant for a similarly named but fraudulent business. Stevie Holmes runs Scarlett Jewellery but keeps hearing from customers who have confused it with the AI-driven Scarlett Jewels website.

Many reported receiving poor-quality goods or nothing at all.

Holmes said the mix-ups have kept her occupied for at least an hour a day since July. Without clarification, people could post negative comments about her genuine business on social media, potentially damaging its reputation.

Scarlett Jewels is run by Denimtex Limited with an address in Hong Kong, though its website claims a personal story of a retiring designer.

Experts say such scams are increasingly common because AI images are now cheap and easy to create. Professor Ana Canhoto from the University of Sussex noted that AI-generated product photos often look either too perfect or subtly flawed, while fake reviews and claims of scarcity are typical tactics used to mislead buyers.

Trustpilot ratings for Scarlett Jewels are mostly one star, with customers describing items as ‘tat’ or ‘poor quality’.

Authorities are taking action, with the Advertising Standards Authority banning similar ads and Facebook restricting Scarlett Jewels from creating new adverts. Buyers are advised to watch for telltale AI imagery and implausibly large discounts, and to check for genuine reviews, to avoid falling for scams.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bitcoin wallet vulnerability exposes thousands of private keys

A flaw in the widely used Libbitcoin Explorer (bx) 3.x series has exposed over 120,000 Bitcoin private keys, according to crypto wallet provider OneKey. The flaw arose from a weak random number generator that used system time, making wallet keys predictable.

Attackers aware of wallet creation times could reconstruct private keys and access funds.

Several wallets were affected, including versions of Trust Wallet Extension and Trust Wallet Core prior to patched releases. Researchers said the Mersenne Twister-32’s limited seed space let hackers automate attacks and recreate private keys, possibly causing past fund losses like the ‘Milk Sad’ cases.
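The core weakness can be shown in a short sketch (illustrative only, not the actual bx code; Python's `random` module, itself a Mersenne Twister, stands in for the 32-bit generator used there). Because the key material is a pure function of the timestamp seed, an attacker who can bound a wallet's creation time only has to replay candidate seeds until the derived key matches:

```python
import random

def weak_key(seed_time: int) -> bytes:
    # Insecure by design: key bytes derived from a PRNG seeded with system time,
    # mimicking the flawed bx 3.x pattern. Same seed -> same "random" key.
    rng = random.Random(seed_time)
    return bytes(rng.getrandbits(8) for _ in range(32))

def recover_seed(window_start: int, window_end: int, target: bytes):
    # An attacker who can bound the wallet's creation time simply replays
    # every candidate seed in that window until the derived key matches.
    for t in range(window_start, window_end):
        if weak_key(t) == target:
            return t
    return None

victim_key = weak_key(1_700_000_000)  # wallet created at a guessable moment
print(recover_seed(1_699_999_900, 1_700_000_100, victim_key))  # 1700000000
```

A 32-bit seed space has only about 4.3 billion values in total, and knowledge of the creation window shrinks the search to a handful, which is why such attacks are easy to automate.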

OneKey confirmed its own wallets remain secure, using cryptographically strong random number generation and hardware Secure Elements certified to global security standards.

OneKey also examined its software wallets, ensuring that desktop, browser, Android, and iOS versions rely on secure system-level entropy sources. The firm urged long-term crypto holders to use hardware wallets and avoid importing software-generated mnemonics to reduce risk.

The company emphasised that wallet security depends on the integrity of the device and operating environment.

Tailored pricing is here and personal data is the price signal

AI is quietly changing how prices are set online. Beyond demand-based shifts, companies increasingly tailor offers to individuals, using browsing history, purchase habits, device, and location to predict willingness to pay. Two shoppers may see different prices for the same product at the same moment.

Dynamic pricing raises or lowers prices for everyone as conditions change, such as school-holiday airfares or hotel rates during major events. Personalised pricing goes further by shaping offers for specific users, rewarding cart-abandoners with discounts while charging rarer shoppers a premium.

Platforms mine clicks, time on page, past purchases, and abandoned baskets to build profiles. Experiments show targeted discounts can lift sales while capping promo spend, evidence that engineered prices scale. The result: you may not see a ‘standard’ price, but one designed for you.
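The distinction between the two models can be sketched in a few lines of toy code (all feature names, weights, and discounts here are invented for illustration, not any platform's actual logic):

```python
def dynamic_price(base: float, demand_factor: float) -> float:
    # Dynamic pricing: everyone sees the same adjustment as conditions change
    # (e.g. school-holiday demand raises the factor for all buyers at once).
    return round(base * demand_factor, 2)

def personalised_price(base: float, profile: dict) -> float:
    # Personalised pricing: the offer is shaped by the individual's
    # inferred willingness to pay, built from behavioural signals.
    price = base
    if profile.get("abandoned_cart"):
        price *= 0.90   # discount to win back a hesitant shopper
    if profile.get("premium_device"):
        price *= 1.08   # premium for signals of higher spending power
    return round(price, 2)

# Two shoppers, same product, same moment - different prices.
print(personalised_price(100.0, {"abandoned_cart": True}))   # 90.0
print(personalised_price(100.0, {"premium_device": True}))   # 108.0
```

The sketch also makes the policy concern concrete: the `premium_device` branch is exactly the kind of income proxy that can entrench inequality without any explicit intent to discriminate.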

The risks are mounting. Income proxies such as postcode or device can entrench inequality, while hidden algorithms erode trust when buyers later find cheaper prices. Accountability is murky if tailored prices mislead, discriminate, or breach consumer protections without clear disclosure.

Regulators are moving. A competition watchdog in Australia has flagged transparency gaps, unfair trading risks, and the need for algorithmic disclosure. Businesses now face a twin test: deploy AI pricing with consent, explainability, and opt-outs, and prove it delivers value without crossing ethical lines.

Meta champions open hardware to power the next generation of AI data centres

US tech giant Meta believes open hardware will define the future of AI data centre infrastructure. Speaking at the Open Compute Project Global Summit, the company outlined a series of innovations designed to make large-scale AI systems more efficient, sustainable, and collaborative.

Meta, one of the OCP’s founding members, said open source hardware remains essential to scaling the physical infrastructure required for the next generation of AI.

During the summit, Meta joined industry peers in supporting OCP’s Open Data Center Initiative, which calls for shared standards in power, cooling, and mechanical design.

The company also unveiled a new generation of network fabrics for AI training clusters, integrating NVIDIA’s Spectrum Ethernet to enable greater flexibility and performance.

As part of the effort, Meta became an initiating member of Ethernet for Scale-Up Networking, aiming to strengthen connectivity across increasingly complex AI systems.

Meta further introduced the Open Rack Wide (ORW) form factor, an open source data rack standard optimised for the power and cooling demands of modern AI.

Built on ORW specifications, AMD’s new Helios rack was presented as the most advanced AI rack yet, embodying the shift toward interoperable and standardised infrastructure.

Meta also showcased new AI hardware platforms built to improve performance and serviceability for large-scale generative AI workloads.

Sustainability remains central to Meta’s strategy. The company presented ‘Design for Sustainability’, a framework to reduce hardware emissions through modularity, reuse, and extended lifecycles.

It also shared how its Llama AI models help track emissions across millions of components.

NVIDIA and TSMC celebrate first US-made Blackwell AI chip

A collaboration between NVIDIA and TSMC has marked a historic milestone with the first NVIDIA Blackwell wafer produced on US soil.

The event, held at TSMC’s facility in Phoenix, symbolised the start of volume production for the Blackwell architecture and a major step toward domestic AI chip manufacturing.

NVIDIA’s CEO Jensen Huang described it as a moment that brings advanced technology and industrial strength back to the US.

The partnership highlights how the companies aim to strengthen the US semiconductor supply chain by producing the world’s most advanced chips domestically.

TSMC Arizona will manufacture next-generation two-, three- and four-nanometre technologies, crucial for AI, telecommunications, and high-performance computing. The process transforms raw wafers through layering, etching, and patterning into the high-speed processors driving the AI revolution.

TSMC executives praised the achievement as the result of decades of partnership with NVIDIA, built on innovation and technical excellence.

Both companies believe that local chip production will help meet the rising global demand for AI infrastructure while securing the US’s strategic position in advanced technology manufacturing.

NVIDIA also plans to use its AI, robotics, and digital twin platforms to design and manage future American facilities, deepening its commitment to domestic production.

The companies say their shared investment signals a long-term vision of sustainable innovation, industrial resilience, and technological leadership for the AI era.

UK government urges awareness as £106m lost to romance fraud in one year

Romance fraud has surged across the United Kingdom, with new figures showing that victims lost a combined £106 million in the past financial year. Action Fraud, the UK’s national reporting centre for cybercrime, described the crime as one that causes severe financial, emotional, and social damage.

Among the victims is London banker Varun Yadav, who lost £40,000 to a scammer posing as a romantic partner on a dating app. After months of chatting online, the fraudster persuaded him to invest in a cryptocurrency platform.

When his funds became inaccessible, Yadav realised he had been deceived. ‘You see all the signs, but you are so emotionally attached,’ he said. ‘You are willing to lose the money, but not the connection.’

The Financial Conduct Authority (FCA) said banks should play a stronger role in disrupting romance scams, calling for improved detection systems and better staff training to identify vulnerable customers. It urged firms to adopt what it called ‘compassionate aftercare’ for those affected.

Romance fraud typically involves criminals creating fake online profiles to build emotional connections before manipulating victims into transferring money.

The National Cyber Security Centre (NCSC) and UK police recommend maintaining privacy on social media, avoiding financial transfers to online contacts, and speaking openly with friends or family before sending money.

The Metropolitan Police recently launched an awareness campaign featuring victim testimonies and guidance on spotting red flags. The initiative also promotes collaboration with dating apps, banks, and social platforms to identify fraud networks.

Detective Superintendent Kerry Wood, head of economic crime for the Met Police, said that romance scams remain ‘one of the most devastating’ forms of fraud. ‘It’s an abuse of trust which undermines people’s confidence and sense of self-worth. Awareness is the most powerful defence against fraud,’ she said.

Although Yadav never recovered his savings, he said sharing his story helped him rebuild his life. He urged others facing similar scams to speak up: ‘Do not isolate yourself. There is hope.’

AWS glitch triggers widespread outages across major apps

A major internet outage hit some of the world’s biggest apps and sites from about 9 a.m. CET Monday, with issues traced to Amazon Web Services. Tracking sites reported widespread failures across the US and beyond, disrupting consumer and enterprise services.

AWS cited ‘significant error rates’ in DynamoDB requests in the US-EAST-1 region, impacting additional services in Northern Virginia. Engineers were mitigating the issue while investigating the root cause, and some customers could not create or update Support Cases.

Outages clustered around Virginia’s dense data-centre corridor but rippled globally. Impacted brands included Amazon, Google, Snapchat, Roblox, Fortnite, Canva, Coinbase, Slack, Signal, Vodafone and the UK tax authority HMRC.

Coinbase told users ‘all funds are safe’ as platforms struggled to authenticate, fetch data and serve content tied to affected back-ends. Third-party monitors noted elevated failure rates across APIs and app logins.

The incident underscores heavy reliance on hyperscale infrastructure and the blast radius when core data services falter. Full restoration and a formal post-mortem are pending from AWS.
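On the client side, a common first line of defence against error spikes like these is retry with exponential backoff and jitter. The sketch below is a generic resilience pattern, not AWS's own SDK logic; it softens transient failures but cannot substitute for multi-region failover when an entire regional service is down:

```python
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=0.1):
    """Retry a flaky call with exponential backoff and full jitter.

    The randomised delay spreads retries out, avoiding the synchronised
    'retry storms' that can prolong an outage.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            delay = random.uniform(0, base_delay * (2 ** attempt))
            time.sleep(delay)

# Example: an operation that fails twice with transient errors, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(call_with_backoff(flaky))  # ok
```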

Data Act now in force, more data sharing in EU

The EU’s Data Act is now in force, marking a major shift in European data governance. The regulation aims to expand access to industrial and Internet of Things data, giving users greater control over information they generate while maintaining safeguards for trade secrets and privacy.

Adopted as part of the EU’s Digital Strategy, the act seeks to promote fair competition, innovation, and public-sector efficiency. It enables individuals and businesses to share co-generated data from connected devices and allows public authorities limited access in emergencies or matters of public interest.

Some obligations take effect later. Requirements on product design for data access will apply to new connected devices from September 2026, while certain contract rules are deferred until 2027. Member states will set national penalties, with fines in some cases reaching up to 10% of global annual turnover.

The European Commission will assess the law’s impact within three years of its entry into force. Policymakers hope the act will foster a fairer, more competitive data economy, though much will depend on consistent enforcement and how businesses adapt their practices.

AI and fusion combine to accelerate clean energy breakthroughs

A new research partnership between Google and Commonwealth Fusion Systems (CFS) aims to accelerate the development of clean, abundant fusion energy. Fusion powers the sun and offers limitless, clean energy, but achieving it on Earth requires stabilising plasma at over 100 million degrees Celsius.

The collaboration builds on prior AI research in controlling plasma using deep reinforcement learning. Google and CFS are combining AI with CFS’s SPARC tokamak, which uses superconducting magnets and is designed to achieve net energy gain from fusion.

AI tools such as TORAX, a fast and differentiable plasma simulator, allow millions of virtual experiments to optimise plasma behaviour before SPARC begins operations.
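The value of a differentiable simulator can be illustrated with a toy sketch (the ‘physics’ below is invented purely for illustration; TORAX models real tokamak transport). If the simulator is a differentiable function of its control inputs, gradients point directly toward better settings, so each virtual experiment improves on the last instead of probing blindly:

```python
def toy_plasma_sim(coil_current: float) -> float:
    # Invented stand-in for a simulator: "confinement quality" peaks at an
    # optimal coil current (here 3.0) and degrades away from it.
    return -(coil_current - 3.0) ** 2

def d_sim(coil_current: float) -> float:
    # Analytic gradient of the toy simulator. The point of a differentiable
    # simulator is that this comes for free, rather than needing one extra
    # simulation run per perturbed input.
    return -2.0 * (coil_current - 3.0)

x = 0.0  # initial guess for the coil current
for _ in range(200):
    x += 0.05 * d_sim(x)  # gradient ascent on confinement quality

print(round(x, 3))  # 3.0 - converges to the optimum
```

In a real differentiable simulator the gradient would be produced by automatic differentiation over the full plasma model, but the optimisation loop has the same shape.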

AI is also being applied to find the most efficient operating paths for the tokamak, including optimising magnetic coils, fuel injection, and heat management.

Reinforcement learning agents can optimise energy output in real time while safeguarding the machine, potentially exceeding human-designed methods.

The partnership combines advanced AI with fusion hardware to develop intelligent, adaptive control systems for future clean and sustainable fusion power plants.

Labels and Spotify align on artist-first AI safeguards

Spotify partners with major labels on artist-first AI tools, putting consent and copyright at the centre of product design. The plan aims to align new features with transparent labelling and fair compensation while addressing concerns about generative music flooding platforms.

The collaboration with Sony, Universal, Warner, and Merlin will give artists control over participation in AI experiences and how their catalogues are used. Spotify says it will prioritise consent, clearer attribution, and rights management as it builds new tools.

Early direction points to expanded labelling via DDEX, stricter controls against mass AI uploads, and protections against search and recommendation manipulation. Spotify’s AI DJ and prompt-based playlists hint at how engagement features could evolve without sidelining creators.

Future products are expected to let artists opt in, monitor usage, and manage when their music feeds AI-generated works. Rights holders and distributors would gain better tracking and payment flows as transparency improves across the ecosystem.

Industry observers say the tie-up could set a benchmark for responsible AI in music if enforcement matches ambition. By moving in step with labels, Spotify is pitching a path where innovation and artist advocacy reinforce rather than undermine each other.
