Anthropic unveils Claude for Life Sciences to transform research efficiency

Anthropic has unveiled Claude for Life Sciences, its first major launch in the biotechnology sector.

The new platform integrates Anthropic’s AI models with leading scientific tools such as Benchling, PubMed, 10x Genomics and Synapse.org, offering researchers an intelligent assistant throughout the discovery process.

The system supports tasks from literature reviews and hypothesis development to data analysis and drafting regulatory submissions. According to Anthropic, what once took days of validation and manual compilation can now be completed in minutes, giving scientists more time to focus on innovation.

The initiative follows the company’s appointment of Eric Kauderer-Abrams as head of biology and life sciences. He described the launch as a ‘threshold moment’, signalling Anthropic’s ambition to make Claude a key player in global life science research, much as it has become in coding.

Built on the newly released Claude Sonnet 4.5 model, which excels at interpreting lab protocols, the platform connects with partners including AWS, Google Cloud, KPMG and Deloitte.

While Anthropic recognises that AI cannot accelerate physical trials, it aims to transform time-consuming processes and promote responsible digital transformation across the life sciences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Samsung unveils AI-powered redesign of its corporate Newsroom

South Korean firm Samsung Electronics has redesigned its official Newsroom, transforming it into a multimedia platform built around visuals, video and AI-driven features.

The revamped site aligns with the growing dominance of visual communication, aiming to make corporate storytelling more intuitive, engaging and accessible.

The updated homepage features an expanded horizontal carousel showcasing videos, graphics and feature stories with hover-based summaries for quick insight. Users can browse by theme, play videos directly and enjoy a seamless experience across all Samsung devices.

The redesign also introduces an integrated media hub with improved press tools, content filters and high-resolution downloads. Journalists can now save full articles, videos and images in one click, simplifying access to media materials.

AI integration adds smart summaries and upgraded search capabilities, including tag- and image-based discovery. These tools enhance relevance and retrieval speed, while flexible sorting and keyword highlighting refine user experience.

As Samsung celebrates a decade since launching its Newsroom, the transformation marks a step toward a more dynamic, interactive communication model designed for both consumers and media professionals in the AI era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK actors’ union demands rights as AI uses performers’ likenesses without consent

The British performers’ union Equity has warned of coordinated mass action against technology companies and entertainment producers that use its members’ images, voices or likenesses in artificial-intelligence-generated content without proper consent.

Equity’s general secretary, Paul W Fleming, announced plans to mobilise tens of thousands of actors through subject access requests under data-protection law, compelling companies to disclose whether they have used performers’ data in AI content.

The move follows a growing number of complaints from actors about alleged misuse of their likenesses or voices in AI material. One prominent case involves Scottish actor Briony Monroe, who claims her facial features and mannerisms were used to create the synthetic performer ‘Tilly Norwood’. The AI studio behind the character denies the allegations.

Equity says the strategy is intended to ‘make it so hard for tech companies and producers to not enter into collective rights deals’. It argues that existing legislation is being circumvented as foundational AI models are trained using data from actors, but with little transparency or compensation.

The trade body Pact, which represents studios and producers, acknowledges the importance of AI but warns that firms risk falling behind commercially without access to new tools. It also criticises the lack of transparency from companies about what data is used to train AI systems.

In essence, the standoff reflects deeper tensions in the creative industries: how to balance innovation, performer rights and transparency in an era when digital likenesses and synthetic ‘actors’ are emerging rapidly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Civil groups question independence of Irish privacy watchdog

More than 40 civil society organisations have asked the European Commission to investigate Ireland’s privacy regulator. Their letter questions whether the Irish Data Protection Commission (DPC) remains independent following the appointment of a former Meta lobbyist as Commissioner.

Niamh Sweeney, previously Facebook’s head of public policy for Ireland, became the DPC’s third commissioner in September. Her appointment has triggered concern among digital rights groups, given the DPC’s role in overseeing compliance with the EU’s General Data Protection Regulation.

The letter calls for a formal work programme to ensure that data protection rules are enforced consistently and free from political or corporate influence. Civil society groups argue that effective oversight is essential to preserve citizens’ trust and uphold the GDPR’s credibility.

The DPC, headquartered in Dublin, supervises major tech firms such as Meta, Apple, and Google under the EU’s privacy regime. Critics have long accused it of being too lenient toward large companies operating in Ireland’s digital sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Swiss scientists grow mini-brains to power future computers

In a Swiss laboratory, researchers are using clusters of human brain cells to power experimental computers. The start-up FinalSpark is leading this emerging field of biocomputing, also known as wetware, which uses living neurons instead of silicon chips.

Co-founder Fred Jordan said biological neurons are vastly more energy-efficient than artificial ones and could one day replace traditional processors. He believes brain-based computing may eventually help reduce the massive power demands created by AI systems.

Each ‘bioprocessor’ is made from human skin cells reprogrammed into neurons and grouped into small organoids. Electrodes connect to these clumps, allowing the Swiss scientists to send signals and measure their responses in a digital form similar to binary code.

Scientists emphasise that the technology is still in its infancy and not capable of consciousness. Each organoid contains about ten thousand neurons, compared to a human brain’s hundred billion. FinalSpark collaborates with ethicists to ensure the research remains responsible and transparent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated images used in jewellery scam

A jeweller in Hove is fielding daily complaints from customers of a similarly named fraudulent business. Stevie Holmes runs Scarlett Jewellery but keeps hearing from customers who have confused it with the AI-driven Scarlett Jewels website.

Many reported receiving poor-quality goods or nothing at all.

Holmes said the mix-ups have kept her occupied for at least an hour a day since July. Without clarification, people could post negative comments about her genuine business on social media, potentially damaging its reputation.

Scarlett Jewels is run by Denimtex Limited with an address in Hong Kong, though its website claims a personal story of a retiring designer.

Experts say such scams are increasingly common because AI images are now so easy and cheap to create. Professor Ana Canhoto from the University of Sussex noted that AI-generated product photos often look either unnaturally perfect or subtly flawed, while fake reviews and claims of scarcity are typical tactics used to mislead buyers.

Trustpilot ratings for Scarlett Jewels are mostly one star, with customers describing items as ‘tat’ or ‘poor quality’.

Authorities are taking action, with the Advertising Standards Authority banning similar ads and Facebook restricting Scarlett Jewels from creating new adverts. Buyers are advised to watch for AI-generated images and unusually large discounts, and to check that reviews are genuine, to avoid falling for scams.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tailored pricing is here and personal data is the price signal

AI is quietly changing how prices are set online. Beyond demand-based shifts, companies increasingly tailor offers to individuals, using browsing history, purchase habits, device, and location to predict willingness to pay. Two shoppers may see different prices for the same product at the same moment.

Dynamic pricing raises or lowers prices for everyone as conditions change, such as school-holiday airfares or hotel rates during major events. Personalised pricing goes further by shaping offers for specific users, rewarding cart-abandoners with discounts while charging rarer shoppers a premium.

Platforms mine clicks, time on page, past purchases, and abandoned baskets to build profiles. Experiments show targeted discounts can lift sales while capping promotional spend, demonstrating that engineered prices work at scale. The result: you may not see a ‘standard’ price, but one designed for you.
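As a rough illustration of the mechanism described above, the sketch below derives a price from behavioural signals. Every feature name, weight and threshold here is a made-up assumption for illustration, not any platform’s actual logic.

```python
# Hypothetical personalised-pricing sketch. All signals, weights and the
# +/-20% band are illustrative assumptions, not a real system's parameters.

BASE_PRICE = 100.0

def willingness_score(profile: dict) -> float:
    """Crude willingness-to-pay proxy built from behavioural signals."""
    score = 0.0
    score += 0.1 * profile.get("past_purchases", 0)    # loyalty signals tolerance
    score += 0.05 * profile.get("minutes_on_page", 0)  # engagement signals intent
    if profile.get("premium_device"):                  # device as an income proxy
        score += 0.5
    if profile.get("abandoned_cart"):                  # discount to win back the sale
        score -= 1.0
    return score

def personalised_price(profile: dict) -> float:
    """Shift the base price up or down, clamped to a +/-20% band."""
    adjustment = max(-0.2, min(0.2, 0.1 * willingness_score(profile)))
    return round(BASE_PRICE * (1 + adjustment), 2)

# A cart-abandoner sees a discount; a high-signal shopper sees a premium.
print(personalised_price({"abandoned_cart": True}))                    # below base
print(personalised_price({"premium_device": True, "past_purchases": 5}))  # above base
```

The income-proxy line is exactly the kind of signal the next paragraph flags as a risk: two shoppers with identical intent can be quoted different prices purely because of their device or postcode.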

The risks are mounting. Income proxies such as postcode or device can entrench inequality, while hidden algorithms erode trust when buyers later find cheaper prices. Accountability is murky if tailored prices mislead, discriminate, or breach consumer protections without clear disclosure.

Regulators are moving. A competition watchdog in Australia has flagged transparency gaps, unfair trading risks, and the need for algorithmic disclosure. Businesses now face a twin test: deploy AI pricing with consent, explainability, and opt-outs, and prove it delivers value without crossing ethical lines.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian students get 12 months of Google Gemini Pro at no cost

Google has launched a free twelve-month Gemini Pro plan for students in Australia aged eighteen and over, aiming to make AI-powered learning more accessible.

The offer includes the company’s most advanced tools and features designed to enhance study efficiency and critical thinking.

A key addition is Guided Learning mode, which acts as a personal AI coach. Instead of quick answers, it walks students through complex subjects step by step, encouraging a deeper understanding of concepts.

Gemini now also integrates diagrams, images and YouTube videos into responses to make lessons more visual and engaging.

Students can create flashcards, quizzes and study guides automatically from their own materials, helping them prepare for exams more effectively. The Gemini Pro account upgrade provides access to Gemini 2.5 Pro, Deep Research, NotebookLM, Veo 3 for short video creation, and Jules, an AI coding assistant.

With two terabytes of storage and the full suite of Google’s AI tools, the Gemini app aims to support Australian students in their studies and skill development throughout the academic year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta previews parental controls over teen AI character chats

Meta has previewed upcoming parental control features for its AI experiences, particularly aimed at teens’ interactions with AI characters. The new tools are expected to roll out next year.

Under the proposed controls, parents will be able to turn off chats between teens and AI characters altogether, though the broader Meta AI chatbot remains accessible. They can also block specific characters if they wish. Parents will receive topic summaries of what teens are discussing with AI characters and with Meta AI itself.

The first deployment will be on Instagram, with initial availability in English for the US, UK, Canada and Australia. Meta says it recognises the challenges parents face in guiding children through new technology, and wants these tools to simplify oversight.

Meta also notes that AI content and experiences intended for teens will follow a PG-13 standard: avoiding extreme violence, nudity and graphic drug content. Teens currently interact with only a limited set of AI characters under age-appropriate guidelines.

Additionally, Meta plans to allow time limits on AI character use by teens. The company is also detecting and discouraging attempts by users to falsify their age to bypass restrictions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK government urges awareness as £106m lost to romance fraud in one year

Romance fraud has surged across the United Kingdom, with new figures showing that victims lost a combined £106 million in the past financial year. Action Fraud, the UK’s national reporting centre for cybercrime, described the crime as one that causes severe financial, emotional, and social damage.

Among the victims is London banker Varun Yadav, who lost £40,000 to a scammer posing as a romantic partner on a dating app. After months of chatting online, the fraudster persuaded him to invest in a cryptocurrency platform.

When his funds became inaccessible, Yadav realised he had been deceived. ‘You see all the signs, but you are so emotionally attached,’ he said. ‘You are willing to lose the money, but not the connection.’

The Financial Conduct Authority (FCA) said banks should play a stronger role in disrupting romance scams, calling for improved detection systems and better staff training to identify vulnerable customers. It urged firms to adopt what it called ‘compassionate aftercare’ for those affected.

Romance fraud typically involves criminals creating fake online profiles to build emotional connections before manipulating victims into transferring money.

The National Cyber Security Centre (NCSC) and UK police recommend maintaining privacy on social media, avoiding financial transfers to online contacts, and speaking openly with friends or family before sending money.

The Metropolitan Police recently launched an awareness campaign featuring victim testimonies and guidance on spotting red flags. The initiative also promotes collaboration with dating apps, banks, and social platforms to identify fraud networks.

Detective Superintendent Kerry Wood, head of economic crime for the Met Police, said that romance scams remain ‘one of the most devastating’ forms of fraud. ‘It’s an abuse of trust which undermines people’s confidence and sense of self-worth. Awareness is the most powerful defence against fraud,’ she said.

Although Yadav never recovered his savings, he said sharing his story helped him rebuild his life. He urged others facing similar scams to speak up: ‘Do not isolate yourself. There is hope.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!