AI tool helps find new treatments for heart disease

A new AI system developed at Imperial College London could accelerate the discovery of treatments for heart disease by combining detailed heart scans with huge medical databases.

Cardiovascular disease remains the leading cause of death across the EU, accounting for around 1.7 million deaths every year, so researchers believe smarter tools are urgently needed.

The AI model, known as CardioKG, uses imaging data from thousands of UK Biobank participants, including people with heart failure, heart attacks and atrial fibrillation, alongside healthy volunteers.

By linking information about genes, medicines and disease, the system aims to predict which drugs might work best for particular heart conditions instead of relying only on traditional trial-and-error approaches.

Among the medicines highlighted were methotrexate, normally used for rheumatoid arthritis, and diabetes drugs known as gliptins, which the AI suggested could support some heart patients.

The model also pointed to a possible protective effect from caffeine among people with atrial fibrillation, although researchers warned that individuals should not change their caffeine intake based on the findings alone.

Scientists say the same technology could be applied to other health problems, including brain disorders and obesity.

Work is already under way to turn the knowledge graph into a patient-centred system that follows real disease pathways, with the long-term goal of enabling more personalised and better-timed treatment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk says users are liable for illegal Grok content

Scrutiny has intensified around X after its Grok chatbot was found generating non-consensual explicit images when prompted by users.

Grok had been positioned as a creative AI assistant, yet regulators reacted swiftly once altered photos were linked to content involving minors. Governments and rights groups renewed pressure on platforms to prevent abusive use of generative AI.

India’s Ministry of Electronics and IT issued a notice to X demanding an Action Taken Report within 72 hours, citing failure to restrict unlawful content.

Authorities in France referred similar cases to prosecutors and urged enforcement under the EU’s Digital Services Act, signalling growing international resolve to control AI misuse.

Elon Musk responded by stating that users, rather than Grok itself, would be legally responsible for illegal material generated through prompts. The company said offenders would face permanent bans and that it would cooperate with law enforcement.

Critics argue that transferring liability to users does not remove the platform’s duty to embed stronger safeguards.

Independent reports suggest Grok has previously been involved in deepfake creation, sparking a wider debate about accountability in the AI sector. The outcome could shape expectations worldwide regarding how platforms design and police powerful AI tools.

Reddit overtakes TikTok in the UK social media race

Reddit has quietly overtaken TikTok to become Britain’s fourth most-visited social media platform, marking a major shift in how people search for information and share opinions online.

Use of the platform among UK internet users has risen sharply over the past two years, driven strongly by younger audiences who are increasingly drawn to open discussion instead of polished influencer content.

Google’s algorithm changes have helped accelerate Reddit’s rise by prioritising forum-based conversations in search results. Partnership deals with major AI companies have reinforced visibility further, as AI tools increasingly cite Reddit threads.

Younger users in the UK appear to value unfiltered and experience-based conversations, creating strong growth across lifestyle, beauty, parenting and relationship communities, alongside major expansion in football-related discussion.

Women now make up more than half of Reddit’s UK audience, signalling a major demographic shift for a platform once associated mainly with male users. Government departments and individual ministers are also using Reddit for direct engagement through public Q&A sessions.

Tension remains part of the platform’s culture, yet company leaders argue that community moderation and voting systems help manage behaviour.

Reddit is now encouraging users to visit directly instead of arriving via search or AI summaries, positioning the platform as a human alternative to automated answers.

Concerns raised over Google AI Overviews and health advice

A Guardian investigation has found that Google’s AI Overviews have displayed false and misleading health information that could put people at risk of harm. The summaries, which appear at the top of search results, are generated using AI and are presented as reliable snapshots of key information.

The investigation identified multiple cases where Google’s AI summaries provided inaccurate medical advice. Examples included incorrect guidance for pancreatic cancer patients, misleading explanations of liver blood test results, and false information about women’s cancer screening.

Health experts warned that such errors could lead people to dismiss symptoms, delay treatment, or follow harmful advice. Some charities said the summaries lacked essential context and could mislead users during moments of anxiety or crisis.

Concerns were also raised about inconsistencies, with the same health queries producing different AI-generated answers at different times. Experts said this variability undermines trust and increases the risk that misinformation will influence health decisions.

Google said most AI Overviews are accurate and helpful, and that the company continually improves quality, particularly for health-related topics. It said action is taken when summaries misinterpret content or lack appropriate context.

Data breach exposes users of major patient portal ManageMyHealth

More than 108,000 users of ManageMyHealth may have had their information exposed following a data breach affecting one of New Zealand’s largest patient portals. The incident occurred on Wednesday and is believed to have affected between 6% and 7% of the platform’s 1.8 million registered users.

ManageMyHealth said affected users will be contacted within 48 hours with details about whether and how their data was accessed. Chief executive Vino Ramayah said the company takes the protection of health information extremely seriously and acknowledged the stress such incidents can cause.

He confirmed that the Office of the Privacy Commissioner has been notified and is working with the company to meet legal obligations.

Health Minister Simeon Brown described the breach as concerning but stated that there was no evidence to suggest that Health New Zealand systems, including My Health Account, had been compromised. He added that health services were continuing to operate as normal and that there had been no clinical impact on patient care.

Health New Zealand said it is coordinating with the National Cyber Security Centre and other agencies to understand the scope of the breach and ensure appropriate safeguards are in place.

Officials stressed expectations around security standards, transparency and clear communication, while ongoing engagement with primary care providers and GPs continues.

AI platforms reshape everyday online behaviour

AI is rapidly becoming the starting point for many everyday activities, from planning and learning to shopping and decision-making. A new report by PYMNTS Intelligence suggests that AI is no longer just an added digital tool, but is increasingly replacing traditional entry points such as search engines and mobile apps.

The study shows that AI use in the United States has moved firmly into the mainstream, with more than 60 per cent of consumers using dedicated AI platforms over the past year. Younger users and frequent AI users are leading the shift, increasingly turning to AI first rather than using it to support existing online habits.

Researchers found that how people use AI matters as much as how often they use it. Heavy users rely on AI across many aspects of daily life, treating it as a general-purpose system, while lighter users remain cautious and limit AI to lower-risk tasks. Trust plays a decisive role, especially when it comes to sensitive areas such as finances and banking.

The report also points to changing patterns in online discovery. Consumers who use standalone AI platforms are more likely to abandon older methods entirely, while those encountering AI through search engines tend to blend it with familiar tools. That difference suggests that the design and context of AI services strongly influence user behaviour.

Looking ahead, the findings hint at how AI could reshape digital commerce. Many consumers say they would prefer to connect digital wallets directly to AI platforms for payments, signalling a potential shift in how intent turns into transactions. As AI becomes a common entry point to the digital world, businesses and financial institutions face growing pressure to adapt their systems to this new starting line.

Google sues group behind mass scam texts

Google has filed a lawsuit against a Chinese-speaking cybercriminal network it says is behind a large share of scam text messages targeting people in the United States. The company says the legal action is aimed at disrupting the group’s online infrastructure rather than seeking damages.

According to the complaint, the group, known as Darcula, develops and sells phishing software that allows scammers to send mass text messages posing as trusted organisations such as postal services, government agencies, or online platforms. The tools are designed to be easy to use, enabling people with little technical expertise to run large-scale scams.

Google says the software has been used by hundreds of scam operators to direct victims to fake websites where credit card details are stolen. The company estimates that hundreds of thousands of payment cards have been compromised globally, with tens of thousands linked to victims in the United States.

The lawsuit asks a US court to grant Google the authority to seize and shut down websites connected to the operation, a tactic technology companies increasingly use when criminal networks operate in countries beyond the reach of US law enforcement. Investigations by journalists and cybersecurity researchers suggest the group operates largely in Chinese and has links to individuals based in China and other countries.

The case highlights the growing scale of text-based fraud in the US, where cybercrime losses continue to rise sharply. Google says it will continue combining legal action with technical measures to limit the reach of large scam networks and protect users from increasingly sophisticated phishing campaigns.

Hawaii warns residents about phishing using fake government sites

State officials have warned the public about a phishing campaign using the fake domain codify.inc to impersonate official government websites. Cybercriminals aim to steal personal information and login credentials from unsuspecting users.

Several state agencies are affected, including the departments of Labor and Industrial Relations, Education, Health, Transportation, and many others. Fraudulent websites often mimic official URLs, such as dlir.hi.usa.codify.inc, and may use AI-based services to entice users.

Residents are urged to verify website addresses carefully. Official government portals will always end in .gov, and any other extensions like .inc or .co are not legitimate. Users should type addresses directly into their browsers rather than clicking links in unsolicited emails or texts.

Suspicious websites should be reported to the State of Hawaii at soc@hawaii.gov to help protect other residents from falling victim to the scam.

Scam texts impersonating Illinois traffic authorities spread

Illinois Secretary of State Alexi Giannoulias has warned residents to stay alert for fraudulent text messages claiming unpaid traffic violations or tolls. Officials say the messages are part of a phishing campaign targeting Illinois drivers.

The scam texts typically warn recipients that their vehicle registration or driving privileges are at risk of suspension. The messages urge immediate action via links that steal money or personal information.

The Secretary of State’s office said it sends text messages only to remind customers about scheduled DMV appointments. It does not communicate by text about licence status, vehicle registration issues, or enforcement actions.

Officials advised residents not to click on links or provide personal details in response to such messages. The texts are intended to create fear and pressure victims into acting quickly.

Residents who receive scam messages are encouraged to report them to the Federal Trade Commission through its online fraud reporting system.

Belgium’s influencers seek clarity through a new certification scheme

The booming influencer economy of Belgium is colliding with an advertising rulebook that many creators say belongs to another era.

Different obligations across federal, regional and local authorities mean that wording acceptable in one region may trigger a reprimand in another. Some influencers have even faced large fines for administrative breaches such as failing to publish business details on their profiles.

In response, the Influencer Marketing Alliance in Belgium has launched a certification scheme designed to help creators navigate the legal maze instead of risking unintentional violations.

Influencers complete an online course on advertising and consumer law and must pass a final exam before being listed in a public registry monitored by the Jury for Ethical Practices.

Major brands, including L’Oréal and Coca-Cola, already prefer to collaborate with certified creators to ensure compliance and credibility.

Not everyone is convinced.

Some Belgian influencers argue that certification adds more bureaucracy at a time when they already struggle to understand overlapping rules. Others see value in the scheme as a structured reminder that content creators remain legally responsible for commercial communication shared with followers.

The alliance is also pushing lawmakers to involve influencers more closely when drafting future rules, including taxation and safeguards for child creators.

Consumer groups such as BEUC support clearer definitions and obligations under the forthcoming EU Digital Fairness Act, arguing that influencer advertising should follow the same standards as other media instead of remaining in a grey zone.
