Legal sector urged to plan for cultural change around AI

A digital agency has released new guidance to help legal firms prepare for wider AI adoption. The report urges practitioners to assess cultural readiness before committing to major technology investment.

Sherwen Studios gathered views from lawyers, who raised both ethical and practical concerns. Their experiences shaped recommendations intended to ensure AI serves real operational needs across the sector.

The agency argues that firms must invest in oversight, governance and staff capability. Leaders are encouraged to anticipate regulatory change and build multidisciplinary teams that blend legal and technical expertise.

Industry analysts expect AI to reshape client care and compliance frameworks over the coming years. Firms prepared for structural shifts are likely to benefit most from long-term transformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

ChatGPT users gain Jira and Confluence access through Atlassian’s MCP connector

Atlassian has launched a new connector that lets ChatGPT users access Jira and Confluence data via the Model Context Protocol. The company said the Rovo MCP Connector supports task summarisation, issue creation and workflow automation directly inside ChatGPT.

Atlassian noted rising demand for integrations beyond its initial beta ecosystem. Users in Europe and elsewhere can now draw on Jira and Confluence data without switching interfaces, while partners such as Figma and HubSpot continue to expand the MCP network.

Engineering, marketing and service teams can request summaries, monitor task progress and generate issues from within ChatGPT. Users can also automate multi-step actions, including bulk updates. Jira write-back support enables changes to be pushed directly into project workflows.
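
Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages, so a tool invocation such as issue creation travels as a `tools/call` request. The sketch below shows the generic shape of such a request; the `createJiraIssue` tool name and its arguments are illustrative assumptions, not Atlassian's actual tool schema.

```python
import json


def mcp_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, the message type the
    Model Context Protocol uses to invoke a server-side tool."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)


# Hypothetical tool name and arguments, for illustration only.
payload = mcp_tool_call(
    "createJiraIssue",
    {"projectKey": "ENG", "summary": "Investigate login timeout"},
)
print(payload)
```

A ChatGPT-style client would send such a payload over the MCP transport after completing the OAuth handshake; the server then maps the call onto the user's existing Jira permissions.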

Security updates sit alongside the connector release. Atlassian said the Rovo MCP Server uses OAuth authentication and respects existing permissions across Jira and Confluence spaces. Administrators can also enforce an allowlist to control which clients may connect.

Atlassian frames the initiative as part of its long-term focus on open collaboration. The company said the connector reflects demand for tools that unify context, search and automation, positioning the MCP approach as a flexible extension of existing team practices.

FCA begins live AI testing with UK financial firms

The UK’s Financial Conduct Authority has started a live testing programme for AI with major financial firms. The initiative aims to explore AI’s benefits and risks in retail financial services while ensuring safe and responsible deployment.

Participating firms, including NatWest, Monzo, Santander and Scottish Widows, receive guidance from FCA regulators and technical partner Advai. Use cases being trialled range from debt resolution and financial advice to customer engagement and smarter spending tools.

Insights from the testing will help the FCA shape future regulations and governance frameworks for AI in financial markets. The programme complements the regulator’s Supercharged Sandbox, with a second cohort of firms due to begin testing in April 2026.

AstraZeneca backs Pangaea’s AI platform to scale precision healthcare

Pangaea Data, a health-tech firm specialising in patient-intelligence platforms, announced a strategic, multi-year partnership with AstraZeneca to deploy multimodal artificial intelligence in clinical settings. The goal is to bring AI-driven, data-rich clinical decision-making to scale, improving how patients are identified, diagnosed, treated and connected to therapies or clinical trials.

The collaboration will see AstraZeneca sponsoring the configuration, validation and deployment of Pangaea’s enterprise-grade platform, which merges large-scale clinical, imaging, genomic, pathology and real-world data. It will also leverage generative and predictive AI capabilities from Microsoft and NVIDIA for model training and deployment.

Among the planned applications are supporting point-of-care treatment decisions and identifying patients who are undiagnosed, undertreated or misdiagnosed, across diseases ranging from chronic conditions to cancer.

Pangaea’s CEO said the partnership aims to efficiently connect patients to life-changing therapies and trials in a compliant, financially sustainable way. For AstraZeneca, the effort reflects a broader push to integrate AI-driven precision medicine across its R&D and healthcare delivery pipeline.

From a policy and health-governance standpoint, this alliance is important. It demonstrates how multimodal AI, combining different data types beyond standard medical records, is being viewed not just as a research tool, but as a potentially transformative element of clinical care.

UNESCO launches AI guidelines for courts and tribunals

UNESCO has launched new Guidelines for the Use of AI Systems in Courts and Tribunals to ensure AI strengthens rather than undermines human-led justice. The initiative arrives as courts worldwide face millions of pending cases and limited resources.

In Argentina, AI-assisted legal tools have increased case processing by nearly 300%, while automated transcription in Egypt is improving court efficiency.

Judicial systems are increasingly encountering AI-generated evidence, AI-assisted sentencing, and automated administrative processes. AI misuse can have serious consequences, as seen in the UK High Court where false AI-generated arguments caused delays, extra costs, and fines.

UNESCO’s Guidelines aim to prevent such risks by emphasising human oversight, auditability, and ethical AI use.

The Guidelines outline 15 principles and provide recommendations for judicial organisations and individual judges throughout the AI lifecycle. They also serve as a benchmark for developing national and regional standards.

UNESCO’s Judges’ Initiative, which has trained over 36,000 judicial operators in 160 countries, played a key role in shaping and peer-reviewing the Guidelines.

The official launch will take place at the Athens Roundtable on AI and the Rule of Law in London on 4 December 2025. UNESCO aims for the standards to ensure responsible AI use, improve court efficiency, and uphold public trust in the judiciary.

AI model boosts accuracy in ranking harmful genetic variants

Researchers have unveiled a new AI model that ranks genetic variants based on their severity. The approach combines deep evolutionary signals with population data to highlight clinically relevant mutations.

The popEVE system integrates protein-scale models with constraints drawn from major genomic databases. Its combined scoring separates harmful missense variants more accurately than leading diagnostic tools.

Clinical tests showed strong performance in developmental disorder cohorts, where damaging mutations clustered clearly. The model also pinpointed likely causal variants in unsolved cases without parental genomes.

Researchers identified hundreds of credible candidate genes with structural and functional support. Findings suggest that AI could accelerate rare disease diagnoses and inform precision counselling worldwide.

New findings reveal untrained AI can mirror human brain responses

Researchers at Johns Hopkins report that brain-inspired AI architectures can display human-like neural activity before any training. Structural design may provide stronger starting points than data-heavy methods. The findings challenge long-held views about how machine intelligence forms.

Researchers tested modified transformers, fully connected networks, and convolutional networks across multiple variants. They compared untrained model responses with neural data from humans and primates viewing identical images. The approach allowed a direct measure of architectural influence.

Transformers and fully connected networks showed limited change when scaled. Convolutional models, by contrast, produced patterns that aligned more closely with human brain activity. Architecture appears to be a decisive factor early in development.

Untrained convolutional models matched aspects of systems trained on millions of images. The results suggest brain-like structures could cut reliance on vast datasets and energy-intensive computation. The implications may reshape how advanced models are engineered.

Further research will examine simple, biologically inspired learning rules. The team plans to integrate these mechanisms into future AI frameworks. The goal is to combine architecture and biology to accelerate meaningful advances.

YouTube criticises Australia’s new youth social-media restrictions

Australia’s forthcoming ban on social media accounts for users under 16 has prompted intense criticism from YouTube, which argues that the new law will undermine existing child safety measures.

From 10 December, young users will be logged out of their accounts and barred from posting or uploading content, though they will still be able to watch videos without signing in.

YouTube said the policy will remove key parental-control tools, such as content filters, channel blocking and well-being reminders, which only function for logged-in accounts.

Rachel Lord, Google and YouTube public-policy lead for Australia, described the measure as ‘rushed regulation’ and warned the changes could make children ‘less safe’ by stripping away long-established protections.

Communications Minister Anika Wells rejected this criticism as ‘outright weird’, arguing that if YouTube believes its own platform is unsafe for young users, it must address that problem itself.

The debate comes as Australia’s eSafety Commissioner investigates other youth-focused apps such as Lemon8 and Yope, which have seen a surge in downloads ahead of the ban.

Regulators reversed YouTube’s earlier exemption in July after identifying it as the platform where 10- to 15-year-olds most frequently encountered harmful content.

Under the new Social Media Minimum Age Act, companies must deactivate underage accounts, prevent new sign-ups and halt any technical workarounds or face penalties of up to A$49.5m.

Officials say the measure responds to concerns about the impact of algorithms, notifications and constant connectivity on Gen Alpha. Wells said the law aims to reduce the ‘dopamine drip’ that keeps young users hooked to their feeds, calling it a necessary step to shield children from relentless online pressures.

YouTube has reportedly considered challenging its inclusion in the ban, but has not confirmed whether it will take legal action.

Governments urged to build learning systems for the AI era

Governments are facing increased pressure to govern AI effectively, prompting calls for continuous institutional learning. Researchers argue that the public sector must develop adaptive capacity to keep pace with rapid technological change.

Past digital reforms often stalled because administrations focused on minor upgrades rather than redesigning core services. Slow adaptation now carries greater risks, as AI transforms decisions, systems and expectations across government.

Experts emphasise the need for a learning infrastructure that enables knowledge to flow reliably across institutions. Singapore and the UAE have already invested heavily in large-scale capability-building programmes.

Public servants require stronger technical and institutional literacy, supported through ongoing training and open collaboration with research communities. Advocates say that states that embed learning deeply will govern AI more effectively and maintain public trust.

Japan plans large-scale investment to boost AI capability

Japan plans to increase generative AI usage to 80 percent as officials push national adoption. Current uptake remains far lower than in the United States and China.

The government intends to raise early usage to 50 percent and stimulate private investment. A one-trillion-yen target underpins efforts to expand infrastructure and accelerate deployment across Japanese sectors.

Guidelines stress risk reduction and stronger oversight through an enhanced AI Safety Institute. Critics argue that measures lack detail and fail to address misuse with sufficient clarity.

Authorities expect broader AI use in health care, finance and agriculture through coordinated public-private work. Annual updates will monitor progress as Japan seeks to enhance its competitiveness and strategic capabilities.
