More than 108,000 users of ManageMyHealth may have had their information exposed following a data breach affecting one of New Zealand’s largest patient portals. The incident occurred on Wednesday and is believed to have affected between 6% and 7% of the platform’s 1.8 million registered users.
ManageMyHealth said affected users will be contacted within 48 hours with details about whether and how their data was accessed. Chief executive Vino Ramayah said the company takes the protection of health information extremely seriously and acknowledged the stress such incidents can cause.
He confirmed that the Office of the Privacy Commissioner has been notified and is working with the company to meet legal obligations.
Health Minister Simeon Brown described the breach as concerning but stated that there was no evidence to suggest that Health New Zealand systems, including My Health Account, had been compromised. He added that health services were continuing to operate as normal and that there had been no clinical impact on patient care.
Health New Zealand said it is coordinating with the National Cyber Security Centre and other agencies to understand the scope of the breach and ensure appropriate safeguards are in place.
Officials stressed expectations around security standards, transparency and clear communication, and said engagement with primary care providers and GPs is ongoing.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
AI is rapidly becoming the starting point for many everyday activities, from planning and learning to shopping and decision-making. A new report by PYMNTS Intelligence suggests that AI is no longer just an added digital tool, but is increasingly replacing traditional entry points such as search engines and mobile apps.
The study shows that AI use in the United States has moved firmly into the mainstream, with more than 60 per cent of consumers using dedicated AI platforms over the past year. Younger users and frequent AI users are leading the shift, increasingly turning to AI first rather than using it to support existing online habits.
Researchers found that how people use AI matters as much as how often they use it. Heavy users rely on AI across many aspects of daily life, treating it as a general-purpose system, while lighter users remain cautious and limit AI to lower-risk tasks. Trust plays a decisive role, especially when it comes to sensitive areas such as finances and banking.
The report also points to changing patterns in online discovery. Consumers who use standalone AI platforms are more likely to abandon older methods entirely, while those encountering AI through search engines tend to blend it with familiar tools. That difference suggests that the design and context of AI services strongly influence user behaviour.
Looking ahead, the findings hint at how AI could reshape digital commerce. Many consumers say they would prefer to connect digital wallets directly to AI platforms for payments, signalling a potential shift in how intent turns into transactions. As AI becomes a common entry point to the digital world, businesses and financial institutions face growing pressure to adapt their systems to this new starting line.
Google has filed a lawsuit against a Chinese-speaking cybercriminal network it says is behind a large share of scam text messages targeting people in the United States. The company says the legal action is aimed at disrupting the group’s online infrastructure rather than seeking damages.
According to the complaint, the group, known as Darcula, develops and sells phishing software that allows scammers to send mass text messages posing as trusted organisations such as postal services, government agencies, or online platforms. The tools are designed to be easy to use, enabling people with little technical expertise to run large-scale scams.
Google says the software has been used by hundreds of scam operators to direct victims to fake websites where credit card details are stolen. The company estimates that hundreds of thousands of payment cards have been compromised globally, with tens of thousands linked to victims in the United States.
The lawsuit asks a US court to grant Google the authority to seize and shut down websites connected to the operation, a tactic technology companies increasingly use when criminal networks operate in countries beyond the reach of US law enforcement. Investigations by journalists and cybersecurity researchers suggest the group operates largely in Chinese and has links to individuals based in China and other countries.
The case highlights the growing scale of text-based fraud in the US, where cybercrime losses continue to rise sharply. Google says it will continue combining legal action with technical measures to limit the reach of large scam networks and protect users from increasingly sophisticated phishing campaigns.
Hawaii state officials have warned the public about a phishing campaign using the fake domain codify.inc to impersonate official government websites. Cybercriminals aim to steal personal information and login credentials from unsuspecting users.
Several state agencies are affected, including the departments of Labor and Industrial Relations, Education, Health, Transportation, and many others. Fraudulent websites often mimic official URLs, such as dlir.hi.usa.codify.inc, and may use AI-based services to entice users.
Residents are urged to verify website addresses carefully. Official government portals will always end in .gov, and any other extensions like .inc or .co are not legitimate. Users should type addresses directly into their browsers rather than clicking links in unsolicited emails or texts.
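The suffix check officials describe can be sketched in a few lines of Python (an illustrative heuristic only, not a substitute for typing known addresses directly into the browser):

```python
from urllib.parse import urlparse

def looks_official(url: str) -> bool:
    """Heuristic: does the hostname actually end in .gov?

    Lookalike domains such as dlir.hi.usa.codify.inc embed official-sounding
    labels, but the registered suffix is still .inc, so the check fails.
    """
    host = urlparse(url).hostname or ""
    return host == "gov" or host.endswith(".gov")

print(looks_official("https://dlir.hi.usa.codify.inc"))  # False
print(looks_official("https://labor.hawaii.gov"))        # True
```

Note that the check inspects the hostname, not the whole URL, since a scam link can freely include ".gov" in its path or query string.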
Suspicious websites should be reported to the State of Hawaii at soc@hawaii.gov to help protect other residents from falling victim to the scam.
Illinois Secretary of State Alexi Giannoulias has warned residents to stay alert for fraudulent text messages claiming unpaid traffic violations or tolls. Officials say the messages are part of a phishing campaign targeting Illinois drivers.
The scam texts typically warn recipients that their vehicle registration or driving privileges are at risk of suspension. The messages urge immediate action via links that steal money or personal information.
The Secretary of State’s office said it sends text messages only to remind customers about scheduled DMV appointments. It does not communicate by text about licence status, vehicle registration issues, or enforcement actions.
Officials advised residents not to click on links or provide personal details in response to such messages. The texts are intended to create fear and pressure victims into acting quickly.
Residents who receive scam messages are encouraged to report them to the Federal Trade Commission through its online fraud reporting system.
The booming influencer economy of Belgium is colliding with an advertising rulebook that many creators say belongs to another era.
Different obligations across federal, regional and local authorities mean that wording acceptable in one region may trigger a reprimand in another. Some influencers have even faced large fines for administrative breaches such as failing to publish business details on their profiles.
In response, the Influencer Marketing Alliance in Belgium has launched a certification scheme designed to help creators navigate the legal maze instead of risking unintentional violations.
Influencers complete an online course on advertising and consumer law and must pass a final exam before being listed in a public registry monitored by the Jury for Ethical Practices.
Major brands, including L’Oréal and Coca-Cola, already prefer to collaborate with certified creators to ensure compliance and credibility.
Not everyone is convinced.
Some Belgian influencers argue that certification adds more bureaucracy at a time when they already struggle to understand overlapping rules. Others see value in the scheme as a structured reminder that content creators remain legally responsible for commercial communication shared with followers.
The alliance is also pushing lawmakers to involve influencers more closely when drafting future rules, including taxation and safeguards for child creators.
Consumer groups such as BEUC support clearer definitions and obligations under the forthcoming EU Digital Fairness Act, arguing that influencer advertising should follow the same standards as other media instead of remaining in a grey zone.
A ransomware attack has disrupted the Oltenia Energy Complex, Romania’s largest coal-based power producer, after hackers encrypted key IT systems in the early hours of 26 December.
The state-controlled company confirmed that the Gentlemen ransomware strain locked corporate files and disabled core services, including ERP platforms, document management tools, email and the official website.
The organisation isolated affected infrastructure and began restoring services from backups on new systems instead of paying a ransom. Operations were only partially impacted and officials stressed that the national energy system remained secure, despite the disruption across business networks.
A criminal complaint has been filed, and both Romania’s National Cyber Security Directorate and the Ministry of Energy have been notified.
Investigators are still assessing the scale of the breach and whether sensitive data was exfiltrated before encryption. The Gentlemen ransomware group has not yet listed the energy firm on its dark-web leak site, a sign that negotiations may still be underway.
The attack follows a separate ransomware incident that recently hit Romania’s national water authority, underlining the rising pressure on critical infrastructure organisations.
AI tools are increasingly used for simple everyday calculations, yet a new benchmark suggests accuracy remains unreliable.
The ORCA study tested five major chatbots across 500 real-world maths prompts and found that users still face roughly a 40 percent chance of receiving the wrong answer.
Gemini from Google recorded the highest score at 63 percent, with xAI’s Grok almost level at 62.8 percent. DeepSeek followed with 52 percent, while ChatGPT scored 49.4 percent, and Claude placed last at 45.2 percent.
Performance varied sharply across subjects, with maths and conversion tasks producing the best results, but physics questions dragged scores down to an average accuracy below 40 percent.
Researchers identified most errors as sloppy calculations or rounding mistakes, rather than deeper failures to understand the problem. Finance and economics questions highlighted the widest gaps between the models, while DeepSeek struggled most in biology and chemistry, with barely one correct answer in ten.
Users are advised to double-check results whenever accuracy is crucial, turning to a calculator or a verified source rather than relying entirely on an AI chatbot for numerical certainty.
China has proposed stringent new rules for AI aimed at protecting children and preventing chatbots from providing advice that could lead to self-harm, violence, or gambling.
The draft regulations, published by the Cyberspace Administration of China (CAC), require developers to include personalised settings, time limits, and parental consent for services offering emotional companionship.
High-risk chats involving self-harm or suicide must be passed to a human operator, with guardians or emergency contacts alerted. AI providers must not produce content that threatens national security, harms national honour, or undermines national unity.
The rules come as AI usage surges, with platforms such as DeepSeek, Z.ai, and Minimax attracting millions of users in China and abroad. The CAC supports safe AI use, including tools for local culture and elderly companionship.
The move reflects growing global concerns over AI’s impact on human behaviour. Notably, OpenAI has faced legal challenges over alleged chatbot-related harm, prompting the company to create roles focused on tracking AI risks to mental health and cybersecurity.
China’s draft rules signal a firm approach to regulating AI technology as its influence expands rapidly.
Many visually impaired gamers find mainstream video games difficult due to limited accessibility features. Support groups enable players to share tips, recommend titles, and connect with others who face similar challenges.
Audio and text-based mobile games are popular, yet console and PC titles often lack voiceovers or screen reader support. Adjustable visual presets could make mainstream games more accessible for partially sighted players.
UK industry bodies acknowledge progress, but barriers remain for millions of visually impaired players. Communities offer social support and provide feedback to developers to improve games and make them inclusive.