Google’s Gemini 3.1 Flash-Lite has launched in preview for developers via AI Studio and for enterprises through Vertex AI. Designed for high-volume workloads, it promises fast, cost-effective performance while maintaining high-quality outputs.
Priced at just $0.25 per million input tokens and $1.50 per million output tokens, 3.1 Flash-Lite offers 2.5X faster response times and 45% higher output speed than the previous 2.5 Flash model.
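To put the quoted rates in concrete terms, here is a minimal cost-estimation sketch. The per-token rates come from the pricing above; the request sizes in the example are hypothetical.

```python
# Illustrative cost calculation using the article's quoted rates for
# Gemini 3.1 Flash-Lite. The token counts below are hypothetical examples.
INPUT_RATE = 0.25 / 1_000_000   # USD per input token
OUTPUT_RATE = 1.50 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request at the quoted rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 2,000-token prompt producing a 500-token reply:
print(f"${request_cost(2_000, 500):.6f}")  # $0.001250
```

At these rates, even a million such requests would cost on the order of a thousand dollars, which is the scale of saving the high-volume positioning refers to.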
Benchmarks show strong performance across reasoning and multimodal tasks, including an Elo score of 1432 on Arena.ai, 86.9% on GPQA Diamond, and 76.8% on MMMU Pro, surpassing some older, larger Gemini models.
The model also provides adaptive intelligence features, allowing developers to adjust how much the AI ‘thinks’ for each task. It handles both high-frequency tasks, such as translation, and complex ones, such as interface generation and simulations.
Early-access developers and companies report that 3.1 Flash-Lite handles complex workloads with precision comparable to larger models. Its speed, affordability, and reasoning capabilities make it an attractive choice for scalable, real-time AI applications.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google is accelerating Chrome’s release cycle rather than maintaining its long-standing four-week cadence.
From September, users on desktop and mobile platforms will receive new stable versions every two weeks, doubling the frequency of feature milestones across speed, stability and usability. Weekly security updates introduced in 2023 remain unchanged.
The faster pace comes as AI-driven browsers seek a foothold in a market long dominated by Chrome.
Products, such as ChatGPT Atlas and Perplexity’s Comet, embed agentic assistants directly into the browsing experience, automating tasks from summarising pages to scheduling meetings.
Chrome has responded with deeper Gemini integration, including the rollout of autonomous features across its interface.
Google maintains that the accelerated schedule reflects the needs of the evolving web platform, arguing that developers require quicker access to updated tools.
Yet the timing aligns with growing competitive pressure from AI-native browsers, prompting speculation that Chrome’s dominance can no longer be taken for granted.
The shift will begin with Chrome version 153 in beta and stable channels on 8 September 2026. Enterprise administrators and Chromebook users will continue to rely on the eight-week Extended Stable branch, which remains unchanged for organisations that need slower, controlled deployments.
In recent days, social media has been alight with discussions about Silicon Valley, the 2014 series whose portrayal of AI now feels remarkably prophetic. Fans and professionals alike are highlighting how the show’s depiction of AI, automated agents, and ethical dilemmas mirrors today’s real-world challenges.
From algorithmic decision-making to AI shaping social and economic interactions, the series explores the boundaries, responsibilities, and societal impact of AI in ways that feel startlingly relevant. What once seemed like pure comedy is increasingly being seen as a warning, highlighting how the choices we make around AI and its ethical frameworks will shape whether the technology benefits society.
While the show dramatises these dilemmas for entertainment, the real world is now facing the same questions. Recent trends in generative AI, autonomous agents, and large-scale automated decision-making are bringing the show’s predictions to life, raising urgent ethical questions for developers, policymakers, and society alike.
The rise of AI ethics: from niche concern to central requirement
The growing influence of AI on society has propelled ethics from a theoretical discussion to a central factor in technological decision-making. Initially confined to academic debate, ethics in AI is now a guiding force in technological development. The impact of AI is becoming tangible across society, from employment and finance to online content.
Technical performance alone no longer defines success; the consequences of design choices have become morally and socially significant. Governments, international organisations, and corporations are responding by developing ethical frameworks.
The EU AI Act, the OECD AI Principles, and numerous corporate codes of conduct signal that society expects AI systems to align with human values, demonstrating accountability, fairness, and trustworthiness. Ethical reflection has become a prerequisite for technological legitimacy and societal acceptance.
Functions of AI ethics: trust, guidance, and societal risk
Ethical frameworks for AI fulfil multiple roles, balancing moral guidance with practical necessity. They build public trust between developers, organisations, and users, reassuring society that AI systems operate consistently with shared values.
For developers, ethical principles offer a blueprint for decision-making, helping anticipate societal impact and minimise unintended harm. Beyond guidance, AI ethics acts as a form of societal risk governance, allowing organisations to identify potential consequences before they manifest.
By integrating ethics into design, AI systems become socially sustainable technologies, bridging technical capability with moral responsibility. Such an approach is particularly critical in high-stakes domains such as healthcare, finance, and law, where algorithmic decisions can significantly affect human well-being.
The politics of AI ethics: regulatory theatre and corporate influence
Despite widespread adoption, AI ethics frameworks sometimes risk becoming regulatory theatre, where public statements signal commitment but fail to ensure meaningful action. Many organisations promote ethical AI principles, yet consistent enforcement and follow-through often lag behind these claims.
Even with their limitations, ethical frameworks are far from meaningless. They shape public discourse, influence policy, and determine which AI systems gain social legitimacy. The challenge lies in balancing credibility with practical impact, ensuring that ethical commitments are more than symbolic gestures.
Social media platforms like X amplify this tension, with public scrutiny and viral debates exposing both successes and failures in applying ethical principles.
AI ethics as a lens for technology and society
The prominence of AI ethics reflects a broader societal transformation in evaluating technology. Modern societies no longer judge AI solely by efficiency, speed, or performance; they assess social consequences, fairness, and the distribution of risks and benefits.
AI is increasingly seen as a social actor rather than a neutral tool, influencing public behaviour, shaping social norms, and redefining concepts such as trust, autonomy, and accountability. Ethical evaluation of AI is not just a philosophical exercise; it reflects evolving expectations about the role technology should play in human life.
AI ethics as early-warning governance for social impact
AI ethics functions as a critical early-warning system for society. Ethical principles anticipate harms that might otherwise go unnoticed, from systemic bias to privacy violations. By highlighting potential consequences, ethics enables organisations to act proactively, reducing the likelihood of crises and improving public trust.
Moreover, ethics ensures that long-term impacts, including societal cohesion, equity, and fairness, are considered alongside immediate technical performance. In doing so, AI ethics bridges the gap between what AI can do and what society deems acceptable, ensuring that innovation remains aligned with moral and social norms.
The bridge between technological power and social legitimacy
AI ethics remains the essential bridge between technological power and social legitimacy. Embedding ethical reflection into AI development ensures that innovation is not only technically effective but also socially sustainable, trustworthy, and accountable.
Yet a growing tension defines the next phase of this evolution: the accelerating pace of innovation often outstrips the slower processes of ethical deliberation and regulation, raising questions about who sets the norms and how quickly societies can adapt.
Rather than acting solely as a safeguard, ethics is increasingly becoming a strategic dimension of technological leadership, shaping public trust, market adoption, and even geopolitical influence in the global race for AI. The rise of AI ethics, therefore, signals more than a moral awakening, reflecting a structural shift in how technological progress is evaluated and legitimised.
As AI continues to integrate into everyday life, ethical frameworks will determine not only how systems function, but also whether they are accepted as part of the social fabric. Aligning innovation with societal values is no longer optional but the condition under which AI can sustain legitimacy, unlock its full potential, and remain a transformative force that benefits society as a whole.
The civil liberties committee failed to secure majority backing for its amended report on extending the EU’s temporary chat-scanning rules, leaving Parliament without a clear negotiating position.
Members of Parliament reviewed the amendments on Monday, but the final text did not garner sufficient support, leaving the proposal without endorsement as the adoption deadline approaches.
At stake is a proposal to extend the current derogation that allows tech companies to voluntarily scan their services for Child Sexual Abuse Material (CSAM).
The existing regime expires in April 2026 and was intended only as a stopgap while a permanent Child Sexual Abuse Regulation was developed. Years of stalled negotiations have led to the temporary rules being extended twice since 2021.
The Council has already approved its position without changes to the Commission proposal, creating a tight timeline for Parliament.
With trilogue talks finally underway, institutions would need to conclude discussions unusually quickly to prevent the legal basis from expiring. If no agreement is reached by April, companies would lose their ability to scan services under the EU law.
The committee confirmed that the file will now move to plenary in the week of 9–12 March, where political groups may table new amendments, an outcome that will determine whether the temporary regime remains in place while negotiations on the permanent system continue.
Social platform X has released a standalone version of its private messaging service, X Chat, via Apple’s TestFlight. The initial beta reached capacity within two hours, reflecting strong early demand among iOS users eager to trial the new app.
Michael Boswell confirmed that the first 1,000 places were quickly expanded to 5,000, with further growth expected. Development has been ongoing for several months, and testers have been urged to stress-test the product and submit detailed feedback.
Early screenshots suggest a cleaner interface and possible rebranding to ‘xChat’.
Security claims remain under scrutiny, as experts question whether X Chat’s encryption matches established platforms such as Signal. Clear evidence addressing those concerns in the standalone build has yet to emerge.
The launch of the separate app marks a notable shift from Elon Musk’s earlier ambition to integrate messaging, payments, and content into a single ‘everything app’.
Chats will synchronise across X, its web platform chat.x.com, and the new iOS app, while an Android version is expected soon.
More users are exploring how to switch from ChatGPT to Claude while preserving their existing chat history and preferences. Rather than starting over with a new AI assistant, many want to migrate context and maintain continuity.
The first step is gathering your data from ChatGPT. In Settings, open Personalisation, then review the Memory section to copy any stored preferences you want to retain. You can also export your full chat history through Data Controls by selecting ‘Export Data’.
ChatGPT will generate downloadable files containing your conversations. If you prefer a lighter approach, manually copy key discussions or ask ChatGPT to summarise your main preferences, frequently discussed topics, and custom instructions.
Once your information is ready, open Claude and enable Memory under Settings and Capabilities. Start a new conversation and paste your summaries using a prompt such as ‘Here is important context about me. Please update your memory accordingly.’
After transferring the data, verify that Claude has stored the information accurately. If you plan to leave ChatGPT entirely, review and delete saved memory entries before removing your account to ensure your data is cleared.
More than 40 million people use ChatGPT alone for health information every day, and both ChatGPT and Claude have recently launched services specifically designed to give consumers health advice.
Yale School of Medicine clinician-educator Shaili Gupta warns that whilst chatbots can democratise access to health information, the risks of overtrust are significant.
Gupta notes that AI chatbots are deliberately designed to feel personal, trained to use pronouns like ‘you’ and ‘I’, which makes users more likely to treat them as authoritative voices rather than information tools.
She cautions against the ‘three C’s’: chatbots that are too competent, too cogent, or too concrete, as these are the most likely to lead patients into harmful health decisions.
Human clinicians, Gupta argues, remain difficult to replace not only because they conduct physical examinations, but also because they bring instinct, experience, and genuine relatability to patient care. She recommends using chatbots for efficiency and general information, whilst leaving diagnosis firmly in the hands of medical professionals.
Anthropic has enhanced its Claude AI chatbot to make switching from other platforms easier. Users on the free plan can now activate Claude’s memory feature, which allows them to import data from other AI platforms using a new dedicated tool.
The update ensures that users don’t have to start over when transferring context and history from competitors like OpenAI’s ChatGPT or Google’s Gemini.
The memory import option, first introduced in October for paid subscribers, now appears under ‘settings’ → ‘capabilities’ for all users. The tool lets users copy a prompt from their previous AI and paste the output into Claude, seamlessly transferring past interactions.
The recent popularity of Claude has been driven by tools such as Claude Code and Claude Cowork, as well as the launch of the Opus 4.6 and Sonnet 4.6 models. Upgrades enhance Claude’s coding, spreadsheet, and complex task capabilities, boosting its appeal to new users.
Anthropic’s visibility has also increased amid debates with the Pentagon, as the company refuses to loosen AI safeguards for military use, drawing ‘red lines’ around mass surveillance and autonomous weapons.
When Hayao Miyazaki dismissed early AI-generated animation as ‘an insult to life itself’ in 2016, the technology felt distant from mainstream creative work. Less than a decade later, generative AI tools produce images and text in seconds, reviving debate over authorship, copyright, and artistic identity.
In Japan, debate reflects both anxiety and ambition. Illustrators question the use of their work in training data, while policymakers and corporations see AI as vital to easing a projected labour shortfall by 2040. Legal provisions allowing data use for analysis have intensified calls for safeguards.
Public sentiment in Japan remains broadly favourable toward AI adoption. Surveys indicate relatively high levels of trust, with many viewing AI as part of long-term structural adjustment rather than an immediate threat. Economic expectations often outweigh concerns about disruption.
Workplace implementation, however, remains limited. OECD research shows only a small share of employees actively use AI tools, citing skills shortages and cautious corporate culture. Analysts describe a paradox: AI could ease labour pressures, yet adoption is constrained by limited expertise.
Creative professionals report more immediate effects. Surveys highlight income pressures and uncertainty among illustrators and freelancers. As deployment expands, Japan faces the task of balancing economic necessity with cultural preservation and fair access to emerging technologies.
Twenty-five years after its launch, SharePoint has grown into one of Microsoft’s largest collaboration platforms, serving more than one billion users annually. The service now underpins vast volumes of enterprise content, with billions of files and millions of sites created each day.
Microsoft positions the platform as a foundational knowledge layer for Microsoft 365 Copilot. As the primary grounding source for Copilot, it contributes to the Work IQ intelligence layer, enabling AI tools to operate within an organisational context.
New agentic capabilities allow teams to build solutions using natural language prompts within governed Microsoft 365 environments. Custom AI skills package organisational standards, terminology, and business logic, helping ensure outputs align with internal policies and workflows.
AI-driven publishing features are now embedded across its web authoring tools. Organisations can plan, refine, and distribute content at scale while maintaining governance controls and consistent communication standards.
Content stored in SharePoint also powers semantic indexing and retrieval systems that support contextual discovery across Microsoft 365 applications. Microsoft says these capabilities enable more proactive knowledge surfacing and strengthen Copilot’s ability to deliver grounded responses.