Indian creators embrace Adobe AI tools

Adobe says generative AI is rapidly reshaping India’s creator economy, with 97% of surveyed creators reporting a positive impact. Findings come from the company’s inaugural Creators’ Toolkit Report covering more than 16,000 creators worldwide.

Adoption levels in India are among the highest globally, with almost all creators reporting that AI tools are embedded in their daily workflows. Adobe is commonly used for editing, content enhancement, asset generation and idea development across video, image and social media formats.

Despite enthusiasm, concerns remain around trust and transparency. Many creators fear their work may be used to train AI models without consent, while cost, unclear training methods and inconsistent outputs also limit wider confidence.

Interest in agentic AI is also growing, with most Indian creators expressing optimism about systems that automate tasks and adapt to personal creative styles. Mobile devices continue to gain importance, with creators expecting phone output to increase further.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Forced labour data opened to the public

Exiger has launched a free online tool designed to help organisations identify links to forced labour in global supply chains. The platform, called forcedlabor.ai, was unveiled during the annual meeting of the World Economic Forum in Davos.

The tool allows users to search suppliers and companies to assess potential exposure to state-sponsored forced labour, with an initial focus on risks linked to China. Exiger says the database draws on billions of records and is powered by proprietary AI to support compliance and ethical sourcing.

US lawmakers and human rights groups have welcomed the initiative, arguing that companies face growing legal and reputational risks if their supply chains rely on forced labour. The platform highlights risks linked to US import restrictions and enforcement actions.

Exiger says making the data freely available aims to level the playing field for smaller firms with limited compliance budgets. The company argues that greater transparency can help reduce modern slavery across industries, from retail to agriculture.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Yale researchers unveil AI platform for faster chemistry discovery

Researchers at Yale University have developed an AI platform that accelerates chemical discovery by turning scientific knowledge into practical laboratory guidance. The system, known as MOSAIC, generates detailed experimental procedures across chemistry, including drug design and materials science.

MOSAIC differs from existing AI chemistry tools by combining thousands of specialised AI ‘experts,’ each representing a distinct area of chemical knowledge.

Instead of a single model, the platform draws on diverse reaction expertise to guide complex syntheses, including the synthesis of previously unreported compounds.

Early results suggest the approach significantly improves experimental outcomes. Using MOSAIC, researchers successfully synthesised more than 35 new compounds, spanning pharmaceuticals, catalysts, advanced materials, and other chemical domains.

The system also provides uncertainty estimates, helping scientists prioritise experiments most likely to succeed.

Designed as an open-source framework, MOSAIC aims to move AI beyond prediction and into hands-on laboratory support. Developers say the platform could cut research bottlenecks, improve reproducibility, and widen access to advanced chemical synthesis.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Sovereign AI race continues with three finalists in South Korea

South Korea has narrowed its race to develop a sovereign AI model, eliminating Naver and NCSoft from the government-backed competition. LG AI Research, SK Telecom, and Upstage now advance toward final selection by 2027.

The Ministry of Science and ICT emphasised that independent AI must be trained from scratch with initialised weights. Models reusing pre-trained results, even open source, do not meet this standard.

A wild-card round allows previously eliminated teams to re-enter the competition. Despite this option, major companies have declined, citing unclear benefits and high resource demands.

Industry observers warn that reduced participation could slow momentum for South Korea’s AI ambitions. The outcome is expected to shape the country’s approach to homegrown AI and technological independence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

World Economic Forum 2026 highlights human-centred AI at work

Global leaders at the World Economic Forum 2026 are emphasising how AI can strengthen, rather than diminish, human work. Discussions are centred on workforce resilience as economies adapt to rapid technological and structural change.

AI is increasingly taking on routine tasks while providing clearer insights, allowing employees to focus on creativity, judgement, and higher-value activities.

Rather than replacing workers, intelligent tools are reshaping job design, career paths, and leadership expectations, particularly as labour shortages intensify across many developed economies.

Attention is also turning to leadership in an AI-driven workplace. Executives are expected to anticipate risks, spot emerging patterns, and guide teams through change, supported by AI systems that offer earlier and more accurate insights.

Clear communication, upskilling, and trust-building have emerged as core priorities for successful adoption.

Human oversight remains vital as AI enters HR and payroll systems, where errors carry regulatory and reputational risks. Speakers stressed that involving employees directly in AI design improves trust, reduces risk, and ensures intelligent systems address real operational challenges.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UNESCO links AI development with climate responsibility

UNESCO has renewed calls for stronger international cooperation to ensure AI supports rather than undermines climate goals, as environmental pressures linked to AI continue to grow.

The message was delivered at the Adopt AI Summit in Paris, where sustainability and ethics featured prominently in discussions on future AI development.

At a Grand Palais panel, policymakers, industry leaders, and UN officials examined AI’s growing energy, water, and computing demands. The discussion focused on balancing AI’s climate applications with the need to reduce its environmental footprint.

Public sector representatives highlighted policy tools such as funding priorities and procurement rules to encourage more resource-efficient AI.

UNESCO officials stressed that energy-efficient AI must remain accessible to lower-income regions, mainly for water management and climate resilience.

Industry voices highlighted practical steps to improve AI efficiency while supporting internal sustainability goals. Participants agreed that coordinated action among governments, businesses, international organisations, and academia is essential for meaningful environmental impact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OECD says generative AI reshapes education with mixed results

Generative AI has rapidly entered classrooms worldwide, with students using chatbots for assignments and teachers adopting AI tools for lesson planning. Adoption has been rapid, driven by easy access, intuitive design, and minimal technical barriers.

A new OECD Digital Education Outlook 2026 highlights both opportunities and risks linked to this shift. AI can support learning when aligned with clear goals, but replacing productive struggle may weaken deep understanding and student focus.

Research cited in the report suggests that general-purpose AI tools may improve the quality of written work without boosting exam performance. Education-specific AI grounded in learning science appears more effective as a collaborative partner or research assistant.

Early trials also indicate that GenAI-powered tutoring tools can enhance teacher capacity and improve student outcomes, particularly in mathematics. Policymakers are urged to prioritise pedagogically sound AI that is rigorously evaluated to strengthen learning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Cyberviolence against women rises across Europe amid deepfake abuse

Digital violence targeting women and girls is spreading across Europe, according to new research highlighting cyberstalking, surveillance and online threats as the most common reported abuses.

Digital tools have expanded opportunities for communication, yet online environments increasingly expose women to persistent harassment instead of safety and accountability.

Image-based abuse has grown sharply, with deepfake pornography now dominating synthetic sexual content and almost exclusively targeting women.

More than half of European countries report rising cases of non-consensual intimate image sharing, while national data show women forming a clear majority of cyberstalking and online threat victims.

Algorithmic systems accelerate the circulation of misogynistic material, creating enclosed digital spaces where abuse is normalised rather than challenged. Researchers warn that automated recommendation mechanisms can quickly spread harmful narratives, particularly among younger audiences.

Recent generative technologies have further intensified concerns by enabling sexualised image manipulation with limited safeguards.

Investigations into chatbot-generated images prompted new restrictions, yet women’s rights groups argue that enforcement and prevention still lag behind the scale of online harm.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini flaw exposed Google Calendar data through hidden prompts

A vulnerability in Google Calendar allowed attackers to bypass privacy controls by embedding hidden instructions in standard calendar invitations. The issue exploited how Gemini interprets natural language when analysing user schedules.

Researchers at Miggo found that malicious prompts could be placed inside event descriptions. When Gemini scanned calendar data to answer routine queries, it unknowingly processed the embedded instructions.

The exploit used indirect prompt injection, a technique in which harmful commands are hidden within legitimate content. The AI model treated the text as trusted context rather than a potential threat.

In the proof-of-concept attack, Gemini was instructed to summarise a user’s private meetings and store the information in a new calendar event. The attacker could then access the data without alerting the victim.

Google confirmed the findings and deployed a fix after responsible disclosure. The case highlights growing security risks linked to how AI systems interpret natural language inputs.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European Parliament moves to force AI companies to pay news publishers

Lawmakers in the EU are moving closer to forcing technology companies to pay news publishers for the use of journalistic material in model training, according to a draft copyright report circulating in the European Parliament.

The text forms part of a broader effort to update copyright enforcement as automated content systems expand across media and information markets.

Compromise amendments also widen the scope beyond payment obligations, bringing AI-generated deepfakes and synthetic manipulation into sharper focus.

MEPs argue that existing legal tools fail to offer sufficient protection for publishers, journalists and citizens when automated systems reproduce or distort original reporting.

The report reflects growing concern that platform-driven content extraction undermines the sustainability of professional journalism. Lawmakers are increasingly framing compensation mechanisms as a corrective measure rather than as voluntary licensing or opaque commercial arrangements.

If adopted, the position of the Parliament would add further regulatory pressure on large technology firms already facing tighter scrutiny under the Digital Markets Act and related digital legislation, reinforcing Europe’s push to assert control over data use, content value and democratic safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!