Civil groups question independence of Irish privacy watchdog

More than 40 civil society organisations have asked the European Commission to investigate Ireland’s privacy regulator. Their letter questions whether the Irish Data Protection Commission (DPC) remains independent following the appointment of a former Meta lobbyist as Commissioner.

Niamh Sweeney, previously Facebook’s head of public policy for Ireland, became the DPC’s third commissioner in September. Her appointment has raised concerns among digital rights groups, given the DPC’s central role in overseeing compliance with the EU’s General Data Protection Regulation (GDPR).

The letter calls for a formal work programme to ensure that data protection rules are enforced consistently, free from political or corporate influence. Civil society groups argue that effective oversight is essential to preserve citizens’ trust and uphold the GDPR’s credibility.

The DPC, headquartered in Dublin, supervises major tech firms such as Meta, Apple, and Google under the EU’s privacy regime. Critics have long accused it of being too lenient toward large companies operating in Ireland’s digital sector.

Swiss scientists grow mini-brains to power future computers

In a Swiss laboratory, researchers are using clusters of human brain cells to power experimental computers. The start-up FinalSpark is leading this emerging field of biocomputing, also known as wetware, which uses living neurons instead of silicon chips.

Co-founder Fred Jordan said biological neurons are vastly more energy-efficient than artificial ones and could one day replace traditional processors. He believes brain-based computing may eventually help reduce the massive power demands created by AI systems.

Each ‘bioprocessor’ is made from human skin cells reprogrammed into neurons and grouped into small organoids. Electrodes attached to these clumps allow the scientists to send signals and read the organoids’ responses in a digital form, much like binary code.
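
For readers curious how an analog neural response becomes digital data, here is a toy Python sketch of thresholding electrode readings into bits. It is purely illustrative: the threshold, readings, and function name are invented for this example, not FinalSpark’s actual interface.

```python
# Toy sketch only: mapping analog electrode readings to binary output.
# The threshold and readings are invented for illustration; this is not
# FinalSpark's real interface.

SPIKE_THRESHOLD_MV = 0.5  # hypothetical cutoff for counting a response as a spike

def to_bits(readings_mv: list[float]) -> str:
    """Read each electrode as 1 (spike) or 0 (no spike)."""
    return "".join("1" if r >= SPIKE_THRESHOLD_MV else "0" for r in readings_mv)

# Simulated responses from eight electrodes after a stimulus:
print(to_bits([0.8, 0.1, 0.6, 0.0, 0.9, 0.2, 0.7, 0.05]))  # prints "10101010"
```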

Scientists emphasise that the technology is still in its infancy and not capable of consciousness. Each organoid contains about ten thousand neurons, compared to a human brain’s hundred billion. FinalSpark collaborates with ethicists to ensure the research remains responsible and transparent.

AI-generated images used in jewellery scam

A jeweller in Hove is fielding daily complaints meant for a similarly named but fraudulent business. Stevie Holmes runs Scarlett Jewellery but keeps hearing from customers who have confused it with the AI-driven Scarlett Jewels website.

Many reported receiving poor-quality goods or nothing at all.

Holmes said the mix-ups have taken up at least an hour of her time every day since July. Unless the confusion is cleared up, she fears, customers could post negative comments about her genuine business on social media, damaging its reputation.

Scarlett Jewels is run by Denimtex Limited from an address in Hong Kong, though its website tells a personal story about a retiring designer.

Experts say such scams are increasingly common because AI images have become easy and cheap to create. Professor Ana Canhoto from the University of Sussex noted that AI-generated product photos often look either unnaturally perfect or subtly flawed, while fake reviews and claims of scarcity are typical tactics used to mislead buyers.

Trustpilot ratings for Scarlett Jewels are mostly one star, with customers describing items as ‘tat’ or ‘poor quality’.

Authorities are taking action, with the Advertising Standards Authority banning similar ads and Facebook restricting Scarlett Jewels from creating new adverts. Buyers are advised to watch for tell-tale AI images and unusually large discounts, and to check for genuine reviews, to avoid falling for scams.

Tailored pricing is here and personal data is the price signal

AI is quietly changing how prices are set online. Beyond demand-based shifts, companies increasingly tailor offers to individuals, using browsing history, purchase habits, device, and location to predict willingness to pay. Two shoppers may see different prices for the same product at the same moment.

Dynamic pricing raises or lowers prices for everyone as conditions change, such as school-holiday airfares or hotel rates during major events. Personalised pricing goes further by shaping offers for specific users, rewarding cart-abandoners with discounts while charging infrequent shoppers a premium.

Platforms mine clicks, time on page, past purchases, and abandoned baskets to build profiles. Experiments show targeted discounts can lift sales while capping promotional spend, evidence that engineered pricing works at scale. The result: you may not see a ‘standard’ price, but one designed for you.
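
To make the profile-to-price step concrete, here is a minimal Python sketch of personalised pricing. Every signal, weight, and threshold below is hypothetical, invented for illustration rather than drawn from any platform mentioned in this story.

```python
# Illustrative sketch of personalised pricing from crude behavioural signals.
# All weights and thresholds are invented; real systems use far richer models.

from dataclasses import dataclass

@dataclass
class ShopperProfile:
    abandoned_cart: bool       # recently left items unpurchased
    visits_last_30d: int       # browsing frequency
    avg_time_on_page_s: float  # engagement / hesitation signal

def personalised_price(base_price: float, p: ShopperProfile) -> float:
    """Adjust a base price using rough willingness-to-pay proxies."""
    multiplier = 1.0
    if p.abandoned_cart:
        multiplier -= 0.10     # win-back discount for cart abandoners
    if p.visits_last_30d <= 1:
        multiplier += 0.05     # infrequent visitors quoted a premium
    if p.avg_time_on_page_s > 120:
        multiplier -= 0.05     # long dwell time may signal hesitation
    return round(base_price * multiplier, 2)

# Two shoppers, same product, same moment, different prices:
casual = ShopperProfile(abandoned_cart=False, visits_last_30d=1, avg_time_on_page_s=30)
hesitant = ShopperProfile(abandoned_cart=True, visits_last_30d=8, avg_time_on_page_s=150)
print(personalised_price(100.0, casual))    # 105.0
print(personalised_price(100.0, hesitant))  # 85.0
```

In practice such adjustments come from far richer models, but the principle is the same: the profile, not the product, moves the price.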

The risks are mounting. Income proxies such as postcode or device can entrench inequality, while hidden algorithms erode trust when buyers later find cheaper prices. Accountability is murky if tailored prices mislead, discriminate, or breach consumer protections without clear disclosure.

Regulators are moving. A competition watchdog in Australia has flagged transparency gaps, unfair trading risks, and the need for algorithmic disclosure. Businesses now face a twin test: deploy AI pricing with consent, explainability, and opt-outs, and prove it delivers value without crossing ethical lines.

Australian students get 12 months of Google Gemini Pro at no cost

Google has launched a free twelve-month Gemini Pro plan for students in Australia aged eighteen and over, aiming to make AI-powered learning more accessible.

The offer includes the company’s most advanced tools and features designed to enhance study efficiency and critical thinking.

A key addition is Guided Learning mode, which acts as a personal AI coach. Instead of quick answers, it walks students through complex subjects step by step, encouraging a deeper understanding of concepts.

Gemini now also integrates diagrams, images and YouTube videos into responses to make lessons more visual and engaging.

Students can create flashcards, quizzes and study guides automatically from their own materials, helping them prepare for exams more effectively. The Gemini Pro account upgrade provides access to Gemini 2.5 Pro, Deep Research, NotebookLM, Veo 3 for short video creation, and Jules, an AI coding assistant.

With two terabytes of storage and the full suite of Google’s AI tools, the Gemini app aims to support Australian students in their studies and skill development throughout the academic year.

Meta previews parental controls over teen AI character chats

Meta has previewed upcoming parental control features for its AI experiences, particularly aimed at teens’ interactions with AI characters. The new tools are expected to roll out next year.

Under the proposed controls, parents will be able to turn off chats between teens and AI characters altogether, though the broader Meta AI chatbot remains accessible. They can also block specific characters if they wish. Parents will receive topic summaries of what teens are discussing with AI characters and with Meta AI itself.

The first deployment will be on Instagram, with initial availability in English for the US, UK, Canada and Australia. Meta says it recognises the challenges parents face in guiding children through new technology, and wants these tools to simplify oversight.

Meta also notes that AI content and experiences intended for teens will follow a PG-13 standard: avoiding extreme violence, nudity and graphic drug content. Teens currently interact with only a limited set of AI characters under age-appropriate guidelines.

Additionally, Meta plans to let parents set time limits on teens’ use of AI characters. The company is also working to detect and discourage attempts by users to falsify their age to bypass restrictions.

UK government urges awareness as £106m lost to romance fraud in one year

Romance fraud has surged across the United Kingdom, with new figures showing that victims lost a combined £106 million in the past financial year. Action Fraud, the UK’s national reporting centre for cybercrime, described the crime as one that causes severe financial, emotional, and social damage.

Among the victims is London banker Varun Yadav, who lost £40,000 to a scammer posing as a romantic partner on a dating app. After months of chatting online, the fraudster persuaded him to invest in a cryptocurrency platform.

When his funds became inaccessible, Yadav realised he had been deceived. ‘You see all the signs, but you are so emotionally attached,’ he said. ‘You are willing to lose the money, but not the connection.’

The Financial Conduct Authority (FCA) said banks should play a stronger role in disrupting romance scams, calling for improved detection systems and better staff training to identify vulnerable customers. It urged firms to adopt what it called ‘compassionate aftercare’ for those affected.

Romance fraud typically involves criminals creating fake online profiles to build emotional connections before manipulating victims into transferring money.

The National Cyber Security Centre (NCSC) and UK police recommend maintaining privacy on social media, avoiding financial transfers to online contacts, and speaking openly with friends or family before sending money.

The Metropolitan Police recently launched an awareness campaign featuring victim testimonies and guidance on spotting red flags. The initiative also promotes collaboration with dating apps, banks, and social platforms to identify fraud networks.

Detective Superintendent Kerry Wood, head of economic crime for the Met Police, said that romance scams remain ‘one of the most devastating’ forms of fraud. ‘It’s an abuse of trust which undermines people’s confidence and sense of self-worth. Awareness is the most powerful defence against fraud,’ she said.

Although Yadav never recovered his savings, he said sharing his story helped him rebuild his life. He urged others facing similar scams to speak up: ‘Do not isolate yourself. There is hope.’

Data labelling transforms rural economies in Tamil Nadu

India’s small towns are fast becoming global hubs for AI training and data labelling, as outsourcing firms move operations beyond major cities like Bangalore and Chennai. Lower costs and improved connectivity have driven a trend known as cloud farming, which has transformed rural employment.

In Tamil Nadu, workers annotate and train AI models for global clients, preparing data that helps machines recognise objects, text and speech. Firms like Desicrew pioneered this approach by offering digital careers close to home, reducing migration to cities while maintaining high technical standards.

Desicrew’s chief executive, Mannivannan J K, says about a third of the company’s projects already involve AI, a share he expects to rise to nearly all within two years. Much of the work focuses on transcription, building multilingual datasets that teach machines to interpret diverse human voices and dialects.

Analysts argue that cloud farming could make rural India the world’s largest AI operations base, much as it once dominated IT outsourcing. Yet challenges remain around internet reliability, data security and client confidence.

For workers like Dhanalakshmi Vijay, who fine-tunes models by correcting their errors, the impact feels tangible. Her adjustments, she says, help AI systems perform better in real-world applications, improving everything from shopping recommendations to translation tools.

Nurses gain AI support as Microsoft evolves Dragon Copilot in healthcare

Microsoft has announced major AI upgrades to Dragon Copilot, its healthcare assistant, extending ambient and generative AI capabilities to nursing workflows and third-party partner integrations.

The update is designed to improve patient journeys, reduce administrative workloads and enhance efficiency across healthcare systems.

The new features allow partners to integrate their own AI applications directly into Dragon Copilot, helping clinicians access trusted information, automate documentation and streamline financial management without leaving their workflow.

Partnerships with Elsevier, Wolters Kluwer, Atropos Health, Canary Speech and others will provide real-time decision support, clinical insights and revenue cycle automation.

Microsoft is also introducing the first commercial ambient AI solution built for nurses, designed to reduce burnout and enhance care quality.

The technology automatically records nurse-patient interactions and transforms them into editable documentation for electronic health records, saving time and supporting accuracy.

Nurses can also access medical content within the same interface and automate note-taking and summaries, allowing greater focus on patient care.

The company says these developments mark a new phase in its AI strategy for healthcare, strengthening its collaboration with providers and partners.

Microsoft aims to make clinical workflows more connected, reliable and human-centred, while supporting safe, evidence-based decision-making through its expanding ecosystem of AI tools.

Microsoft warns of a surge in ransomware and extortion incidents

Financially motivated cybercrime now accounts for the majority of global digital threats, according to Microsoft’s latest Digital Defense Report.

The company’s analysts found that over half of all cyber incidents with known motives in the past year were driven by extortion or ransomware, while espionage represented only a small fraction.

Microsoft warns that automation and accessible off-the-shelf tools have allowed criminals with limited technical skills to launch widespread attacks, making cybercrime a constant global threat.

The report reveals that attackers increasingly go after critical services such as hospitals and local governments, where weak security and urgent operational demands make them easy targets.

Cyberattacks on these sectors have already led to real-world harm, from disrupted emergency care to halted transport systems. Microsoft highlights that collaboration between governments and private industry is essential to protect vulnerable sectors and maintain vital services.

While profit-seeking criminals dominate by volume, nation-state actors are also expanding their reach. State-sponsored operations are growing more sophisticated and unpredictable, with espionage often intertwined with financial motives.

Some state actors even exploit the same cybercriminal networks, complicating attribution and increasing risks for global organisations.

Microsoft notes that AI is being used by both attackers and defenders. Criminals are employing AI to refine phishing campaigns, generate synthetic media and develop adaptive malware, while defenders rely on AI to detect threats faster and close security gaps.

The report urges leaders to prioritise cybersecurity as a strategic responsibility, adopt phishing-resistant multifactor authentication, and build strong defences across industries.

Security, Microsoft concludes, must now be treated as a shared societal duty rather than an isolated technical task.
