A new ‘Answer Now’ button has been added to Gemini, allowing users to skip extended reasoning and receive instant replies. The feature appears alongside the spinning status indicator in Gemini 3 Pro and Thinking/Flash, but is not available in the Fast model.
When selected, the button confirms that Gemini is ‘skipping in-depth thinking’ before delivering a quicker response. Google says the tool is designed for general questions where speed is prioritised over detailed analysis.
The update coincides with changes to usage limits across subscription plans. AI Pro users now receive 300 Thinking prompts and 100 Pro prompts per day, while AI Ultra users get 1,500 Thinking prompts and 500 Pro prompts daily.
Free users also gain access to the revised limits, listed as ‘Basic access’ for both the Thinking and Pro models. Google has not indicated whether the Fast model will receive the Answer Now feature.
The rollout follows the recent launch of Gemini’s Personal Intelligence feature, which allows the chatbot to draw on Google services such as Gmail and Search history. Google says Answer Now will replace the existing Skip button and is now available on Android, iOS, and the web.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Exiger has launched a free online tool designed to help organisations identify links to forced labour in global supply chains. The platform, called forcedlabor.ai, was unveiled during the annual meeting of the World Economic Forum in Davos.
The tool allows users to search suppliers and companies to assess potential exposure to state-sponsored forced labour, with an initial focus on risks linked to China. Exiger says the database draws on billions of records and is powered by proprietary AI to support compliance and ethical sourcing.
US lawmakers and human rights groups have welcomed the initiative, arguing that companies face growing legal and reputational risks if their supply chains rely on forced labour. The platform highlights risks linked to US import restrictions and enforcement actions.
Exiger says making the data freely available aims to level the playing field for smaller firms with limited compliance budgets. The company argues that greater transparency can help reduce modern slavery across industries, from retail to agriculture.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
South Korea has narrowed its race to develop a sovereign AI model, eliminating Naver and NCSoft from the government-backed competition. LG AI Research, SK Telecom, and Upstage now advance toward final selection by 2027.
The Ministry of Science and ICT emphasised that independent AI must be trained from scratch with initialised weights. Models reusing pre-trained results, even open source, do not meet this standard.
A wild-card round allows previously eliminated teams to re-enter the competition. Despite this option, major companies have declined, citing unclear benefits and high resource demands.
Industry observers warn that reduced participation could slow momentum for South Korea’s AI ambitions. The outcome is expected to shape the country’s approach to homegrown AI and technological independence.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The message was delivered at the Adopt AI Summit in Paris, where sustainability and ethics featured prominently in discussions on future AI development.
At a Grand Palais panel, policymakers, industry leaders, and UN officials examined AI’s growing energy, water, and computing demands. The discussion focused on balancing AI’s climate applications with the need to reduce its environmental footprint.
Public sector representatives highlighted policy tools such as funding priorities and procurement rules to encourage more resource-efficient AI.
UNESCO officials stressed that energy-efficient AI must remain accessible to lower-income regions, mainly for water management and climate resilience.
Industry voices highlighted practical steps to improve AI efficiency while supporting internal sustainability goals. Participants agreed that coordinated action among governments, businesses, international organisations, and academia is essential for meaningful environmental impact.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Generative AI has rapidly entered classrooms worldwide, with students using chatbots for assignments and teachers adopting AI tools for lesson planning. Adoption has been rapid, driven by easy access, intuitive design, and minimal technical barriers.
A new OECD Digital Education Outlook 2026 highlights both opportunities and risks linked to this shift. AI can support learning when aligned with clear goals, but replacing productive struggle may weaken deep understanding and student focus.
Research cited in the report suggests that general-purpose AI tools may improve the quality of written work without boosting exam performance. Education-specific AI grounded in learning science appears more effective as a collaborative partner or research assistant.
Early trials also indicate that GenAI-powered tutoring tools can enhance teacher capacity and improve student outcomes, particularly in mathematics. Policymakers are urged to prioritise pedagogically sound AI that is rigorously evaluated to strengthen learning.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A vulnerability in Google Calendar allowed attackers to bypass privacy controls by embedding hidden instructions in standard calendar invitations. The issue exploited how Gemini interprets natural language when analysing user schedules.
Researchers at Miggo found that malicious prompts could be placed inside event descriptions. When Gemini scanned calendar data to answer routine queries, it unknowingly processed the embedded instructions.
The exploit used indirect prompt injection, a technique in which harmful commands are hidden within legitimate content. The AI model treated the text as trusted context rather than a potential threat.
In the proof-of-concept attack, Gemini was instructed to summarise a user’s private meetings and store the information in a new calendar event. The attacker could then access the data without alerting the victim.
Google confirmed the findings and deployed a fix after responsible disclosure. The case highlights growing security risks linked to how AI systems interpret natural language inputs.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Lawmakers in the EU are moving closer to forcing technology companies to pay news publishers for the use of journalistic material in model training, according to a draft copyright report circulating in the European Parliament.
The text forms part of a broader effort to update copyright enforcement as automated content systems expand across media and information markets.
Compromise amendments also widen the scope beyond payment obligations, bringing AI-generated deepfakes and synthetic manipulation into sharper focus.
MEPs argue that existing legal tools fail to offer sufficient protection for publishers, journalists and citizens when automated systems reproduce or distort original reporting.
The report reflects growing concern that platform-driven content extraction undermines the sustainability of professional journalism. Lawmakers are increasingly framing compensation mechanisms as a corrective measure rather than as voluntary licensing or opaque commercial arrangements.
If adopted, the position of the Parliament would add further regulatory pressure on large technology firms already facing tighter scrutiny under the Digital Markets Act and related digital legislation, reinforcing Europe’s push to assert control over data use, content value and democratic safeguards.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Several major AI companies appear slow to meet EU transparency obligations, raising concerns over compliance with the AI Act.
Under the regulation, developers of large foundation models must disclose information about training data sources, allowing creators to assess whether copyrighted material has been used.
Such disclosures are intended to offer a minimal baseline of transparency, covering the use of public datasets, licensed material and scraped websites.
While open-source providers such as Hugging Face have already published detailed templates, leading commercial developers have so far provided only broad descriptions of data usage instead of specific sources.
Formal enforcement of the rules will not begin until later in the year, extending a grace period for companies that released models after August 2025.
The European Commission has indicated willingness to impose fines if necessary, although it continues to assess whether newer models fall under immediate obligations.
The issue is likely to become politically sensitive, as stricter enforcement could affect US-based technology firms and intensify transatlantic tensions over digital regulation.
Transparency under the AI Act may therefore test both regulatory resolve and international relations as implementation moves closer.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new report by Anthropic suggests fears that AI will replace jobs remain overstated, with current use showing AI supporting workers rather than eliminating roles.
Analysis of millions of anonymised conversations with the Claude assistant indicates technology is mainly used to assist with specific tasks rather than full job automation.
The research shows AI affects occupations unevenly, reshaping work depending on role and skill level. Higher-skilled tasks, particularly in software development, dominate use, while some roles automate simpler activities rather than core responsibilities.
Productivity gains remain limited when tasks grow more complex, as reliability declines and human correction becomes necessary.
Geographic differences also shape adoption. Wealthier countries tend to use AI more frequently for work and personal activities, while lower-income economies rely more heavily on AI for education. Such patterns reflect different stages of adoption instead of a uniform global transformation.
Anthropic argues that understanding how AI is used matters as much as measuring adoption rates. The report suggests future economic impact will depend on experimentation, regulation and the balance between automation and collaboration, rather than widespread job displacement.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The team behind the Astro web framework is joining Cloudflare, strengthening long-term support for open-source tools used to build fast, content-driven websites.
Major brands and developers widely use Astro to create pages that load quickly by limiting the amount of JavaScript that runs during initial rendering, improving performance and search visibility.
Cloudflare said Astro will remain open source and continue to be developed independently, ensuring long-term stability for the framework and its global user community.
Astro’s creators said the move will allow faster development and broader infrastructure support, while keeping the framework available to developers regardless of hosting provider.
The company added that Astro already underpins platforms such as Webflow and Wix, and that recent updates have expanded runtime support and improved build speeds.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!