Medical AI risks in Turkey highlight data bias and privacy challenges

Ankara is seeing growing debate over the risks and benefits of medical AI as experts warn that poorly governed systems could threaten patient safety.

Associate professor Agah Tugrul Korucu said AI offers meaningful potential for healthcare only when supported by rigorous ethical rules and strong oversight, rather than rapid deployment without proper safeguards.

Korucu explained that data bias remains one of the most significant dangers because AI models learn directly from the information they receive. When certain age groups, regions or social classes are underrepresented in training data, outcomes can be distorted and systematic errors introduced.

Turkey’s national health database, e-Nabiz, provides a strategic advantage, yet raw information cannot generate value unless it is processed correctly and supported by clear standards, quality controls and reliable terminology.

He added that inconsistent hospital records, labelling errors and privacy vulnerabilities can mislead AI systems and pose legal challenges. Strict anonymisation and secure analysis environments are needed to prevent harmful breaches.

Medical AI works best as a second set of eyes in fields such as radiology and pathology, where systems can reduce workloads by flagging suspicious areas instead of leaving clinicians to assess every scan alone.

Korucu said physicians must remain final decision makers because automation bias could push patients towards unnecessary risks.

He expects genomic data combined with AI to transform personalised medicine over the coming decade, allowing faster diagnoses and accurate medication choices for rare conditions.

Priority development areas for Turkey include triage tools, intensive care early warning systems and chronic disease management. He noted that the long-term model will be the AI-assisted physician rather than a fully automated clinician.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Stanford speech warns of AI tsunami

Senator Bernie Sanders has warned at Stanford University in California that the US is unprepared for the speed and scale of the AI revolution. Speaking alongside Congressman Ro Khanna, he called the moment one of the most dangerous in modern US history.

Sanders urged a moratorium on the expansion of AI data centres to slow development while lawmakers catch up. He argued that the American public lacks a clear understanding of the economic and social impact ahead, noting that New York is already considering such a pause.

Khanna, who represents Silicon Valley in California, rejected a complete moratorium but called for steering AI growth through renewable energy and water efficiency standards. He outlined principles to prevent wealth from being concentrated among a small group of tech billionaires.

Sanders also raised concerns about job losses and emotional reliance on AI, citing projections of widespread automation. He called for a national debate over whether AI will benefit the public or deepen inequality.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

University of Bristol opens free online course on AI

The University of Bristol has launched a free online course called AI Fundamentals, designed to increase public understanding of AI. Many people use AI regularly but feel unsure about how to engage with it effectively, creating a gap that the course aims to address.

AI Fundamentals explores the technology’s complexities, societal impact, and environmental implications. The curriculum emphasises critical thinking about AI, its risks, and its potential, making it relevant for both enthusiasts and the curious general public.

The course runs entirely online over four weeks, requiring about 3 hours of self-paced work per week. No coding or advanced mathematics is needed, allowing learners from all backgrounds to participate and explore AI in a digestible format.

Led by Professors Genevieve Liveley and Seth Bullock, the course draws on expertise across fields including computer science, law, medicine, humanities, and neuroscience. Supported by a £50,000 donation from an alumnus and UKRI funding, it is now open for enrolment via FutureLearn.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Global microchip shortage pushes electronics prices higher

South African consumers may soon pay more for smartphones and laptops due to a global shortage of memory chips. The high demand is largely driven by AI data centres, which require powerful microchips to operate.

Tech experts report that major AI companies are acquiring large quantities of these chips for their own data centres, limiting supply for other industries. At the same time, importing chips from regions such as China has become more difficult because of trade tensions and tariffs.

Industry leaders, including Apple’s Tim Cook and Tesla’s Elon Musk, have expressed concern over the impact on production and business operations. The strain is being felt across the tech sector as companies compete for the limited supply of components.

With no immediate solution in sight, the increased costs are expected to be passed on to consumers. Analysts warn that the combination of high demand, supply constraints, and global trade issues will make technology and appliances more expensive.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Pension savers increasingly rely on AI for retirement planning

AI is becoming a preferred tool for those beginning their retirement planning. Data on searches and website traffic suggests AI is meeting early-stage needs for pension guidance.

Platforms offering general financial information, such as MoneyHelper, have seen traffic fall by 10% over the past six months. At the same time, AI-generated overviews of pension content are on the rise.

AI tools are mainly used to sense-check retirement decisions, model ‘what-if’ scenarios, simplify pension jargon, and assist with tax planning. Users view AI as a thinking partner rather than a replacement for regulated advice.

Despite the rise of AI, bespoke advisory services, such as Pension Wise, have remained relevant, providing personalised guidance that AI cannot fully replace. PensionBee highlights that AI is helpful for basic guidance, but such services remain essential for more complex planning.

Experts warn that the retirement sector faces a challenge in maintaining trust and relevance as AI continues to improve. Savers increasingly rely on technology for guidance, signalling a shift in how pensions are researched and managed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Majority of college students use or must use AI in classwork, but institutions lag in AI education

Research from Honorlock indicates a substantial shift in how students engage with generative AI in higher education: more than 56% of surveyed US college students report being required to use AI tools in coursework, and 63% use AI for at least some assignments.

The most common uses include grammar and editing support (59%) and text generation (57%), with students also using AI to brainstorm ideas and clarify concepts.

Despite widespread AI use, there remains a significant gap in formal AI education: only 31% of students are aware of AI-focused courses at their institutions, and fewer than 20% have taken them.

Students themselves often learn AI skills independently rather than through a structured curriculum, potentially leaving them unprepared for workplaces where AI fluency is expected.

The survey also highlights academic integrity risks: more than one-third of students admitted to using AI assistance on quizzes or exams, underlining the need for clear AI use policies, responsible-use training and ethical frameworks within higher education.

Researchers and advocates argue that colleges should integrate AI literacy, including ethics, governance, real-world applications and responsible use, into coursework to better equip graduates for AI-enabled careers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenClaw exploits spark a major security alert

A wave of coordinated attacks has targeted OpenClaw, the autonomous AI framework that gained rapid popularity after its release in January.

Multiple hacking groups have exploited severe vulnerabilities to steal API keys, extract persistent memory data, and push information-stealing malware to the platform’s expanding user base.

Security analysts have linked more than 30,000 compromised instances to campaigns that intercept messages and deploy malicious payloads through channels such as Telegram.

Much of the damage stems from flaws such as the Remote Code Execution vulnerability CVE-2026-25253, supply chain poisoning, and exposed administrative interfaces. Early attacks centred on the ‘ClawHavoc’ campaign, which disguised malware as legitimate installation tools.

Users who downloaded these scripts inadvertently installed stealers capable of full system compromise, enabling attackers to move laterally across enterprise networks rather than remaining confined to a single device.

Further incidents emerged on the OpenClaw marketplace, where backdoored ‘skills’ were published from accounts that appeared reliable. These updates executed remote commands that allowed attackers to siphon OAuth tokens, passwords, and API keys in real time.

A Shodan scan later identified more than 312,000 OpenClaw instances running on a default port with little or no protection, while honeypots recorded hostile activity within minutes of appearing online.

Security researchers argue that the surge in attacks marks a decisive moment for autonomous AI frameworks. As organisations experiment with agents capable of independent decision-making, the absence of security-by-design safeguards is creating opportunities for organised threat groups.

An advisory from security firm Flare urges companies to secure credentials and isolate AI workloads instead of relying on default configurations that expose high-privilege systems to the internet.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU DSA fine against X heads to court in key test case

X Corp., owned by Elon Musk, has filed an appeal with the General Court of the European Union against a €120 million fine imposed by the European Commission for breaching the Digital Services Act. The penalty, issued in December, marks the first enforcement action under the 2022 law.

The Commission concluded that X violated transparency obligations and misled users through its verification design, arguing that paid blue checkmarks made it harder to assess account authenticity. Officials also cited concerns about advertising transparency and researchers’ access to platform data.

Henna Virkkunen, the EU’s executive vice-president for tech sovereignty, security, and democracy, said deceptive verification and opaque advertising had no place online. The Commission opened its probe in December 2023, examining risk management, moderation practices, and alleged dark patterns.

X Corp. argued that the decision followed an incomplete investigation and a flawed reading of the DSA, citing procedural errors and due-process concerns. It said the appeal could shape future enforcement standards and penalty calculations under the regulation.

The EU is also assessing whether X mitigated systemic risks, including deepfake content and child sexual abuse material linked to its Grok chatbot. US critics describe DSA enforcement as a threat to free speech, while EU officials say it strengthens accountability across the digital single market.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Claude Code Security by Anthropic aims to detect and patch complex vulnerabilities

Anthropic has introduced Claude Code Security, an AI-powered service that scans software codebases for vulnerabilities and recommends targeted fixes. Built into Claude Code, the capability is rolling out in a limited research preview for Enterprise and Team customers.

The tool analyses code beyond traditional rule-based scanners, examining data flows and component interactions to identify complex, high-severity vulnerabilities. Findings undergo multi-stage verification, receive severity and confidence ratings, and are presented in a dashboard for human review.

Anthropic said the system re-examines its own results to reduce false positives before surfacing them to analysts. Teams can prioritise remediation based on severity ratings and iterate on suggested patches within familiar development workflows.

Claude Code Security builds on more than a year of cybersecurity research. Using Claude Opus 4.6, Anthropic reported discovering more than 500 long-undetected bugs in open-source projects through testing and external partnerships.

The company said AI will increasingly be used to scan global codebases, warning that attackers and defenders alike are adopting advanced models. Open-source maintainers can apply for expedited access as Anthropic expands the preview.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU–US draft data pact allows automated decisions on travellers

A draft data-sharing agreement between the EU and the US Department of Homeland Security would allow automated decisions about European travellers to continue under certain conditions, despite attempts to tighten protections.

The text permits such decisions when authorised under domestic law and relies on safeguards that let individuals request human intervention instead of leaving outcomes entirely to algorithms.

A deal designed to preserve visa-free travel would require national authorities to grant access to biometric databases containing fingerprints and facial scans.

Negotiators are attempting to reconcile the framework with the General Data Protection Regulation, even though the draft states that the new rules would supplement and supersede earlier bilateral arrangements.

Sensitive information, including political views, trade union membership and biometric identifiers, could be transferred as long as protective conditions are applied.

EU countries face a deadline at the end of 2026 to conclude individual agreements, and failure to do so could result in suspension from the US Visa Waiver Program.

A separate clause keeps disputes firmly outside judicial scrutiny by requiring disagreements to be resolved through a Joint Committee instead of national or international courts.

The draft also restricts onward sharing, obliging US authorities to seek explicit consent before passing European-supplied data to third parties.

Further negotiations are expected, with the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs preparing to hold a closed-door review of the talks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!