California launches DROP tool to erase data broker records

Residents in California now have a simpler way to force data brokers to delete their personal information.

The state has launched the Delete Requests and Opt-Out Platform, known as DROP, allowing residents to submit one verified deletion request that applies to every registered data broker instead of contacting each company individually.

The system implements the Delete Act, passed in 2023, and is intended to create a single control point for consumer data removal.

Once a resident submits a request, data brokers must begin processing it from August 2026 and will have 90 days to act. If their data is not deleted, residents may need to provide extra identifying details.

First-party data collected directly by companies can still be retained, while data from public records, such as voter rolls, remains exempt. Highly sensitive data may fall under separate legal protections such as HIPAA.

The California Privacy Protection Agency argues that broader data deletion could reduce identity theft, AI-driven impersonation, fraud risk and unwanted marketing contact.

Penalties for non-compliance include daily fines for brokers who fail to register or ignore deletion orders. The state hopes the tool will make data rights meaningful instead of purely theoretical.

The launch comes as regulators worldwide examine how personal data is used, traded and exploited.

California is positioning itself as a leader in consumer privacy enforcement, while questions continue about how effectively DROP will operate when the deadline arrives in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloud and AI growth fuels EU push for greener data centres

Europe’s growing demand for cloud and AI services is driving a rapid expansion of data centres across the EU.

Policymakers now face the difficulty of supporting digital growth without undermining climate targets, yet reliable sustainability data remains scarce.

Operators are required to report on energy consumption, water usage, renewable sourcing and heat reuse, but only around one-third have submitted complete data so far.

Brussels plans to introduce a rating scheme from 2026 that grades data centres on environmental performance, potentially rewarding the most sustainable new facilities with faster approvals under the upcoming Cloud and AI Development Act.

Industry groups want the rules adjusted so operators using excess server heat to warm nearby homes are not penalised. Experts also argue that stronger auditing and stricter application of standards are essential so reported data becomes more transparent and credible.

Smaller data centres remain largely untracked even though they are often less efficient, while colocation facilities complicate oversight because customers manage their own servers. Idle machines also waste vast amounts of energy yet remain largely unmeasured.

Meanwhile, replacing old hardware may improve efficiency but comes with its own environmental cost.

Even if future centres run on cleaner power and reuse heat, the manufacturing footprint of the equipment inside them remains a major unanswered sustainability challenge.

Policymakers say better reporting is essential if the EU is to balance digital expansion with climate responsibility rather than allowing environmental blind spots to grow.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tool helps find new treatments for heart disease

A new AI system developed at Imperial College London could accelerate the discovery of treatments for heart disease by combining detailed heart scans with huge medical databases.

Cardiovascular disease remains the leading cause of death across the EU, accounting for around 1.7 million deaths every year, so researchers believe smarter tools are urgently needed.

The AI model, known as CardioKG, uses imaging data from thousands of UK Biobank participants, including people with heart failure, heart attacks and atrial fibrillation, alongside healthy volunteers.

By linking information about genes, medicines and disease, the system aims to predict which drugs might work best for particular heart conditions instead of relying only on traditional trial-and-error approaches.

Among the medicines highlighted were methotrexate, normally used for rheumatoid arthritis, and diabetes drugs known as gliptins, which the AI suggested could support some heart patients.

The model also pointed to a possible protective effect from caffeine among people with atrial fibrillation, although researchers warned that individuals should not change their caffeine intake based on the findings alone.

Scientists say the same technology could be applied to other health problems, including brain disorders and obesity.

Work is already under way to turn the knowledge graph into a patient-centred system that follows real disease pathways, with the long-term goal of enabling more personalised and better-timed treatment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk says users are liable for illegal Grok content

Scrutiny has intensified around X after its Grok chatbot was found generating non-consensual explicit images when prompted by users.

Grok had been positioned as a creative AI assistant, yet regulators reacted swiftly once altered photos were linked to content involving minors. Governments and rights groups renewed pressure on platforms to prevent abusive use of generative AI.

India’s Ministry of Electronics and IT issued a notice to X demanding an Action Taken Report within 72 hours, citing failure to restrict unlawful content.

Authorities in France referred similar cases to prosecutors and urged enforcement under the EU’s Digital Services Act, signalling growing international resolve to control AI misuse.

Elon Musk responded by stating that users, not Grok, would be legally responsible for illegal material generated through prompts. The company said offenders would face permanent bans and cooperation with law enforcement.

Critics argue that transferring liability to users does not remove the platform’s duty to embed stronger safeguards.

Independent reports suggest Grok has previously been involved in deepfake creation, fuelling a wider debate about accountability in the AI sector. The outcome could shape expectations worldwide regarding how platforms design and police powerful AI tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Plaud unveils compact AI NotePin S and new meeting app

Hardware maker Plaud has introduced a new AI notetaking pin called the Plaud NotePin S alongside a Mac desktop app for digital meeting notes ahead of CES in Las Vegas.

The wearable device costs 179 dollars and arrives with several accessories so users can attach or wear it in different ways. A physical button allows quick control of recordings and can be tapped to highlight key moments during conversations.

The NotePin S keeps the same core specifications as the earlier model, including 64GB of storage and up to 20 hours of continuous recording.

Two MEMS microphones capture speech clearly within roughly three metres. Owners receive 300 minutes of transcription each month without extra cost. Apple Find My support is also included, so users can locate the device easily instead of worrying about misplacing it.

Compared with the larger Note Pro, the new pin offers a shorter recording range and battery life, but the small size makes it easier to wear while travelling or working on the go.

Plaud says the device suits users who rely on frequent in-person conversations rather than long seated meetings.

Plaud has now sold more than 1.5 million notetaking devices. The company also aims to enter the AI meeting assistant market with a Mac desktop client that detects when a meeting is active and prompts users to capture audio.

The software records system sound and uses AI to organise the transcript into structured notes. Users can also add typed notes and images instead of relying only on audio.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Christians raise concerns over AI used for moral guidance

AI is increasingly used for emotional support and companionship, raising questions about the values embedded in its responses, particularly for Christians seeking guidance. Research cited by Harvard Business Review shows therapy and companionship are now the dominant uses of generative AI.

As Christians turn to AI for advice on anxiety, relationships, and personal crises, concerns are growing about the quality and clarity of its responses. Critics warn that AI systems often rely on vague generalities and may lack the moral grounding expected by faith-based users.

A new benchmark released by technology firm Gloo assessed how leading AI models support human flourishing from a Christian perspective. The evaluation examined seven areas, including relationships, meaning, health, and faith, and found consistent weaknesses in how models addressed Christian belief.

The findings show many AI systems struggle with core Christian concepts such as forgiveness and grace. Responses often default to vague spirituality rather than engaging directly with Christian values.

The authors argue that as AI increasingly shapes worldviews, greater attention is needed to how systems serve Christians and other faith communities. They call for clearer benchmarks and training approaches that allow AI to engage respectfully with religious values without promoting any single belief system.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MindRank advances AI-designed weight loss drug to Phase 3 trials

Hangzhou-based biotech start-up MindRank has entered Phase 3 clinical trials for its weight loss drug, marking China’s first AI-assisted Category 1 new drug to reach this stage. The trial involves MDR-001, a small-molecule GLP-1 receptor agonist developed using AI-driven techniques.

MindRank said the weight loss drug was designed to regulate blood sugar and appetite by mimicking natural hormones. According to founder and chief executive Niu Zhangming, the company is targeting regulatory approval in the second half of 2028, with a potential market launch in 2029.

The company said the development process for the weight loss drug took about 4.5 years, significantly shorter than the typical 7 to 10 years required to reach Phase 3 trials. Niu attributed the acceleration to AI tools that reduced research timelines and cut overall R&D costs by more than 60 per cent.

China-based MindRank uses proprietary AI systems, including large language models (LLMs), to identify weight-loss drug targets and shortlist compounds. The approach has raised target research accuracy above 97 per cent and supports safety and efficacy assessments.

Despite these advances, Niu said human expertise remains essential for strategic decision-making and integrating workflows. He added that AI-assisted drug discovery still faces long validation cycles, meaning its impact on life sciences may be more gradual than in other sectors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

CES 2026 opens with Samsung focus on AI integration

Samsung will open its CES 2026 presence with a Sunday evening press conference focused on integrating AI across its product portfolio. The event will take place on 4 January at the Wynn in Las Vegas and will be livestreamed online.

Senior executives, including TM Roh, head of the Device eXperience division, and leaders from Samsung’s visual display and digital appliance businesses, are expected to outline the company’s AI strategy. Samsung says the presentation will emphasise AI as a core layer across products and services.

The company has already previewed several AI-enabled devices ahead of CES. The devices include a portable projector that adapts to its surroundings, expanded Google Photos integration on Samsung TVs, and new Micro RGB television displays.

The company is also highlighting AI-powered home appliances designed to anticipate user needs. Examples include refrigerators that track food supplies, generate shopping lists, and detect early signs of device malfunction.

New smartphones are not expected at the event, with the next Galaxy Unpacked launch reportedly scheduled for later in January or early February.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Texas project puts Fermi at centre of nuclear AI push

A large energy and AI campus is taking shape outside Amarillo, Texas, as startup Fermi America plans to build what it says would be the world’s largest private power grid. The project aims to support large-scale AI training using nuclear, gas, and solar power.

Known as Project Matador, the development would host millions of square metres of data centres and generate more electricity than many US states consume at peak demand. The site is near the Pantex nuclear weapons facility and is part of a broader push for US energy and AI dominance.

Fermi is led by former Texas governor and energy secretary Rick Perry alongside investor Toby Neugebauer. The company plans to deploy next-generation nuclear reactors and offer off-grid computing infrastructure, though it has yet to secure a confirmed anchor tenant.

The scale and cost of the project have raised questions among analysts and local residents. Critics point to financing risks, water use, and the challenge of delivering nuclear reactors on time and within budget, while supporters argue the campus could drive economic growth and national security benefits.

Backed by political momentum and rising demand for AI infrastructure, Fermi is pressing ahead with construction and partnerships. Whether Project Matador can translate ambition into delivery remains a key test as competition intensifies in the global race to power next-generation AI systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia weighs risks and rewards of rapid AI adoption

AI is reshaping Australia’s labour market at a pace that has reignited anxiety about job security and skills. Experts say the speed and visibility of AI adoption have made its impact feel more immediate than previous technological shifts.

Since the public release of ChatGPT in late 2022, AI tools have rapidly moved from novelty to everyday workplace technology. Businesses are increasingly automating routine tasks, including through agentic AI systems that can execute workflows with limited human input.

Research from the HR Institute of Australia suggests the effects are mixed. While some entry-level roles have grown in the short term, analysts warn that clerical and administrative jobs remain highly exposed as automation expands across organisations.

Economic modelling indicates that AI could boost productivity and incomes if adoption is carefully managed, but may also cause short-term job displacement. Sectors with lower automation potential, including construction, care work, and hands-on services, are expected to absorb displaced workers.

Experts and unions say outcomes will depend on skills, policy choices, and governance. Australia’s National AI Plan aims to guide the transition, while researchers urge workers to upskill and use AI as a productivity tool rather than avoiding it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!