Adobe launches a free AI learning tool for students

US software company Adobe has introduced Student Spaces, a free AI study tool within Acrobat designed to help students generate learning materials efficiently.

Users can create flashcards, quizzes, mind maps, podcasts, and editable presentations from PDFs, Docs, PowerPoint, Excel, URLs, and handwritten notes.

The tool builds on Acrobat’s AI features, now allowing students to interact with a chat assistant grounded in the uploaded documents, which reduces the risk of fabricated answers.
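
Grounding here typically means retrieval-augmented generation: the assistant first retrieves the passages most relevant to a question from the uploaded files, then answers only from them. Adobe has not published its implementation, so the sketch below is a generic illustration of the pattern, with all names and the keyword-overlap ranking chosen for brevity:

```python
# Minimal sketch of document-grounded Q&A (retrieval-augmented generation).
# Illustrative only; this is not Adobe's implementation.
from dataclasses import dataclass

@dataclass
class Chunk:
    source: str   # e.g. "lecture3.pdf"
    text: str

def retrieve(question: str, chunks: list[Chunk], k: int = 3) -> list[Chunk]:
    """Rank chunks by naive keyword overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: -len(words & set(c.text.lower().split())))
    return ranked[:k]

def build_prompt(question: str, chunks: list[Chunk]) -> str:
    """Constrain the model to the retrieved passages to limit fabrication."""
    context = "\n\n".join(f"[{c.source}] {c.text}" for c in chunks)
    return (
        "Answer using ONLY the excerpts below. "
        "If the answer is not in them, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
```

Production systems use embedding-based retrieval rather than keyword overlap, but the restriction in the prompt is what grounding buys: the assistant can only restate what the documents actually contain.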

The tool was tested with 500 students from universities including Harvard, Berkeley, and Brown. Adobe emphasises convenience, letting students generate study materials without constantly moving files between applications.

The goal is to simplify study workflows and support learning across multiple document types.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Transparency push for automated recruitment in the UK

The UK’s Information Commissioner’s Office has issued new guidance on the growing use of AI in recruitment, warning that jobseekers may be unaware of how automated systems influence hiring decisions. The regulator says greater transparency is needed as adoption accelerates.

Automated decision-making tools are increasingly used to screen applications, analyse CVs and rank candidates. While this can improve efficiency, some applicants may be rejected before any human review takes place.

The regulator highlights risks including bias, lack of clarity and potential unfair treatment if safeguards are not properly applied. Employers are expected to monitor systems for discrimination and clearly explain how decisions are made.
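
The guidance does not prescribe a specific statistical test, but monitoring for discrimination usually starts with comparing selection rates across groups. The sketch below uses the ‘four-fifths’ heuristic from US employment analysis purely to illustrate what such a check might compute; all figures are invented:

```python
# Illustrative disparate-impact check for an automated screening tool.
# The ICO guidance mandates monitoring, not this particular test; numbers are made up.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants the system advanced to the next stage."""
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """A ratio below ~0.8 is a common flag for possible disparate impact."""
    return group_rate / reference_rate

ref = selection_rate(selected=50, applicants=200)   # 0.25 for the reference group
cmp = selection_rate(selected=18, applicants=150)   # 0.12 for the comparison group
print(f"Adverse impact ratio: {adverse_impact_ratio(cmp, ref):.2f}")  # 0.48 -> review
```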

Jobseekers are entitled to know when automation is used, to challenge outcomes, and to request human review. The guidance aims to ensure fair and lawful hiring practices as AI becomes increasingly embedded in UK recruitment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

China sets standards for AI ethics review and algorithm accountability

The introduction of new AI ethics guidelines by China signals a structured attempt to formalise governance frameworks for rapidly expanding AI systems.

Coordinated by the Ministry of Industry and Information Technology of the People’s Republic of China and multiple state bodies, the policy integrates ethical oversight directly into technological development processes.

A central feature of the framework is the emphasis on operationalising ethical principles such as fairness, accountability, and human well-being through technical review mechanisms.

By focusing on data selection, algorithmic design, and system architecture, the guidelines move towards embedding ethical safeguards at the development stage and protecting intellectual property rights in AI ethics review technologies.

Such an approach reflects a broader shift towards anticipatory governance, where risks such as bias, discrimination, and algorithmic manipulation are addressed before deployment.

The policy also highlights the role of infrastructure in ethical governance, including the development of auditing tools, risk assessment systems, and curated datasets.

Scenario-based evaluation mechanisms indicate an effort to tailor oversight to specific use cases, recognising that AI risks vary significantly across sectors. Instead of relying solely on static compliance rules, the framework promotes adaptive governance aligned with technological complexity.

Ultimately, the outcome is a governance model that seeks to maintain technological competitiveness while addressing societal risks, contributing to wider global debates on how states can regulate AI systems without constraining their development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Student AI rights framework unveiled

A newly released ‘Student AI Bill of Rights’ in the US outlines a proposed framework to protect learners as AI tools become increasingly widespread in education. The initiative aims to establish clear standards for fairness, transparency and accountability.

The document highlights the need for students to be informed when AI systems are used in teaching, assessment or administration. It also stresses that students should retain control over their personal data and academic work.

Another central principle is accountability, with students given the right to question and appeal decisions made or influenced by AI systems. The framework also calls for safeguards to prevent bias and ensure equal access to educational opportunities.

While not legally binding, the proposal is designed to guide higher education institutions in developing responsible AI policies. It reflects growing efforts to define ethical standards for AI use in education in the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

CNN develops agent infrastructure for AI media trading

CNN is developing an internal agent infrastructure as part of a plan to begin AI-driven media trading by early 2027. The company aims to complete protocol scoping by the end of the second quarter before moving into testing phases later in the year.

Testing will focus on how its media properties are interpreted by large language models and how buyers allocate budgets to agent-based systems. Executives say the timeline may change as the technology and market conditions continue to evolve.

The initiative combines in-house development with external technology partners, while aligning with industry frameworks to ensure compatibility. CNN is also working with standards bodies to ensure agent communication produces accurate outcomes for buyers.

Agentic protocols enable systems to exchange information, negotiate pricing, and manage tasks autonomously between buyers and sellers. The company is prioritising consistent communication to support efficient and reliable transactions.
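
The article does not name the protocols CNN is scoping, but buyer-seller agent negotiation generally reduces to structured message exchange with a concession strategy on each side. The sketch below is hypothetical: the fields, prices, and fixed 5% concession rule are invented for illustration and correspond to no real standard:

```python
# Hypothetical agent-to-agent price negotiation for media inventory.
# Field names and the concession strategy are illustrative, not a real protocol.
from dataclasses import dataclass

@dataclass
class Offer:
    sku: str      # inventory identifier, e.g. an ad placement
    cpm: float    # agreed price (cost per thousand impressions)

def negotiate(buyer_max: float, seller_min: float, ask: float,
              rounds: int = 10) -> Offer | None:
    """Buyer raises, seller concedes; a deal closes when the bids cross."""
    bid = buyer_max * 0.8                     # buyer opens below its ceiling
    for _ in range(rounds):
        if ask <= buyer_max:
            return Offer(sku="homepage-takeover", cpm=ask)
        ask = max(seller_min, ask * 0.95)     # seller concedes 5% per round
        bid = min(buyer_max, bid * 1.05)      # buyer raises 5% per round
        if bid >= ask:
            return Offer(sku="homepage-takeover", cpm=(bid + ask) / 2)
    return None                               # no agreement within budget

deal = negotiate(buyer_max=12.0, seller_min=8.0, ask=15.0)
print(deal)   # Offer(sku='homepage-takeover', cpm=11.80...)
```

Real standards would add authentication, audit trails, and machine-readable inventory descriptions; the point is that agentic trading is ultimately verifiable message exchange, which is why CNN’s work with standards bodies centres on communication accuracy.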

Early efforts are centred on learning and experimentation, even without immediate revenue generation. Initial use cases are expected to focus on performance-driven campaigns before expanding into broader advertising activities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot 

OpenAI presents policy proposals addressing AI’s economic and labour impacts

Policy proposals advanced by OpenAI outline a vision of economic restructuring in response to the growing influence of AI.

Framed within an emerging ‘intelligence age’, the approach reflects concerns that AI-driven productivity gains may concentrate wealth while undermining traditional labour-based economic models.

The proposals, therefore, attempt to reconcile market-led innovation with mechanisms aimed at broader distribution of economic benefits.

A central element involves shifting taxation away from labour towards capital, reflecting expectations that automation will reduce reliance on human work.

Instruments such as robot taxes and public wealth funds are presented as potential tools to redistribute gains generated by AI systems.
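
The fiscal arithmetic behind such a shift can be made concrete with a toy calculation; every figure below is invented, since the proposals specify no rates or bases:

```python
# Toy revenue-replacement arithmetic for a labour-to-capital tax shift.
# All numbers are invented for illustration; the proposals name no rates.

wage_base = 800.0        # taxable wages, in billions
payroll_rate = 0.15      # current payroll tax rate
capital_base = 600.0     # taxable capital income, in billions
automation_hit = 0.25    # assumed share of the wage base lost to automation

lost_revenue = wage_base * automation_hit * payroll_rate   # 30.0bn shortfall
replacement_rate = lost_revenue / capital_base             # 0.05, i.e. a 5% levy

print(f"Shortfall {lost_revenue:.0f}bn -> capital levy of {replacement_rate:.1%}")
```

The required levy rises with both the automation share and the size of the original payroll take, which is one reason redistribution proposals tend to combine several instruments rather than rely on a single levy.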

Such proposals by OpenAI indicate a policy direction where states may need to redefine fiscal structures to sustain social protection systems traditionally funded through employment-based taxation.

Labour market adaptation forms another key pillar, with suggestions including shorter working weeks, portable benefits, and increased corporate contributions to social welfare.

However, reliance on employer-linked mechanisms raises questions about coverage gaps, particularly for individuals displaced by automation. The proposals highlight ongoing tensions between corporate-led welfare models and the need for more comprehensive public safety nets.

Alongside economic measures, the framework addresses governance challenges linked to advanced AI systems, including systemic risks and misuse.

OpenAI’s proposals also recommend oversight bodies, risk containment strategies, and infrastructure expansion, reflecting an effort to balance innovation with control.

Treating AI as a utility further signals a shift towards recognising digital infrastructure as a public good, though implementation will depend on political consensus and regulatory capacity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea-France partnership reshapes AI and technology cooperation strategy

The recent state visit between South Korea and France signals a deepening of bilateral cooperation that extends beyond diplomacy into long-term technological and cultural alignment.

Agreements endorsed by President Lee Jae-myung and President Emmanuel Macron reflect a coordinated effort to strengthen shared capabilities in emerging sectors, while reinforcing institutional ties across research, education, and industry.

A central policy dimension lies in the expansion of cooperation in AI, semiconductors, and quantum technologies, areas increasingly tied to economic security and global competitiveness.

Partnerships between institutions such as KAIST and CNRS highlight a shift towards structured research integration, enabling joint innovation and knowledge transfer.

Such collaboration between South Korea and France is positioned not as an isolated scientific exchange, but as part of broader strategies to secure technological sovereignty and resilient supply chains.

Cultural and educational initiatives complement these ambitions by supporting long-term people-to-people engagement and workforce development. Expanded exchanges in creative industries and language education aim to cultivate talent pipelines that can operate across both economies.

Rather than symbolic diplomacy, these measures serve as enabling mechanisms for sustained cooperation in high-value sectors where human capital remains critical.

From a policy perspective, the agreements illustrate how economies are increasingly forming strategic partnerships to navigate global technological competition.

Instead of relying solely on domestic capacity, coordinated international frameworks are being used to manage innovation risks, diversify supply dependencies, and strengthen regulatory alignment.

The outcome will depend on implementation, yet the direction suggests a model of cooperation that blends economic, technological, and societal priorities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots are reshaping classroom debates, raising concerns over homogenised discussion

Generative AI chatbots are becoming embedded in university learning at Yale, students and academics told CNN, not only for essays and homework but also for real-time seminar participation. Students described classmates uploading readings and PDFs into chatbots before class, and even typing a professor’s question into AI during discussion to produce an immediate response to repeat aloud.

While this can make contributions sound more polished and prepared, some students said seminar conversations increasingly stall or feel flatter, with fewer personal interpretations and less exploratory debate. One student, ‘Amanda’, said she has noticed many classmates arriving with slick talking points but then offering near-identical arguments and phrasing, making discussions feel less distinctive than in earlier years.

Students gave several reasons for leaning on AI. ‘Jessica’, a senior, said she uses it daily, particularly in an economics seminar where the professor cold-calls students, both to digest readings quickly and to help her translate ideas into cohesive sentences when she struggles to phrase her comments.

‘Sophia’, a junior, said some students appear to use AI to draft ‘scripts’ for what to say in class, driven by insecurity about gaps in their understanding. She believes this weakens creativity and the ability to make original connections, replacing genuine engagement with impressive-sounding language.

A Yale spokesperson said the university is aware students are experimenting with AI in the classroom and noted a wider faculty trend towards limiting or banning laptops, using print-based materials, and prioritising direct engagement and original thinking.

The article links these observations to a March paper in ‘Trends in Cognitive Sciences’, which argues that large language models can systematically homogenise human expression and thought across language, perspective and reasoning. The paper’s authors say LLMs predict statistically likely next words based on training data that overrepresents dominant languages and ideas, potentially narrowing the ‘conceptual space’ for how people write and argue.
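
As a toy illustration of that mechanism, consider sampling from an invented next-token distribution (real models operate over vocabularies of tens of thousands of tokens): at low sampling temperature, the statistically dominant continuation wins almost every time.

```python
import math
import random

# Invented next-token logits for a prompt like "The novel explores themes of ..."
logits = {"identity": 2.0, "alienation": 1.6, "memory": 1.2, "beekeeping": -1.0}

def sample(logits: dict[str, float], temperature: float) -> str:
    """Softmax sampling: low temperature concentrates mass on the top token."""
    scaled = [logit / temperature for logit in logits.values()]
    z = sum(math.exp(v) for v in scaled)
    weights = [math.exp(v) / z for v in scaled]
    return random.choices(list(logits), weights=weights, k=1)[0]

random.seed(0)
print([sample(logits, temperature=0.1) for _ in range(5)])  # almost always "identity"
print([sample(logits, temperature=1.5) for _ in range(5)])  # noticeably more varied
```

To the extent that deployed systems favour high-probability continuations, many users asking similar questions will receive near-identical phrasing, which is the narrowing the paper describes.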

They warn that models tend to reproduce ‘WEIRD’ viewpoints (Western, educated, industrialised, rich and democratic) even when prompted otherwise, which may make those styles seem more credible and socially correct while marginalising other perspectives.

Researchers also describe a compounding feedback loop. As AI-generated outputs circulate in human discourse and eventually re-enter training data, sameness can intensify over time. Co-author Morteza Dehghani said offloading reasoning to AI risks intellectual laziness and could have broader social consequences, from weakened innovation to greater susceptibility to persuasion.

Educators quoted described both benefits and risks, and outlined practical responses. Thomas Chatterton Williams, a visiting professor and Bard College fellow, said AI can ‘raise the floor’ of discussion for difficult material but may suppress eccentric or truly original ideas, leaving students without a voice of their own or a sense of authorship.

Former teacher Daniel Buck called AI a ‘supercharged SparkNotes’ that can answer virtually any question, making it harder to detect shortcuts and easier for students to bypass the ‘boring minutiae’ where learning takes hold.

He worries that this also undermines relationships with professors and sustained cognitive work. Yale philosophy professor Sun-Joo Shin said model improvements forced her to redesign her assessments: problem sets now earn completion credit and feedback, while in-class exams, oral tests and presentations carry more weight.

Williams said he has moved from written assignments to spontaneous, in-class handwritten work and uses oral exit exams. Students who avoid AI argued that they are still affected by classmates’ reliance on it because it reduces the value and variety of seminar time; others urged a middle path in which AI is treated as a collaborator, used to critique ideas rather than as a substitute for generating them or doing the reasoning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft markets Copilot as a productivity boost but warns it is ‘for entertainment purposes only’

Microsoft has spent the past year pushing Copilot as a mainstream productivity tool, baking it into Windows 11 and promoting new hardware such as Copilot+ PCs, yet its own legal language urges caution. In Microsoft’s Copilot Terms of Use, updated in October last year, the company states Copilot is ‘for entertainment purposes only’, may ‘make mistakes’, and ‘may not work as intended’.

The terms warn users not to rely on Copilot for important advice and to ‘use Copilot at your own risk’, a caveat that sits uneasily alongside the product’s business-focused marketing.

The Tom’s Hardware article argues Microsoft is not unique in issuing such warnings. Similar disclaimers are common across the generative AI industry. It points to xAI’s guidance that AI is ‘probabilistic in nature’ and may produce ‘hallucinations’, generate offensive or objectionable content, or fail to reflect real people, places or facts.

While these limitations are well known to those familiar with large language models, the piece notes that many users still treat AI output as authoritative, even in professional settings where scepticism should be standard.

To underline the risks of overreliance, the text cites reports of Amazon-related incidents allegedly linked to ‘Gen-AI assisted changes’. It says some AWS outages reportedly occurred after engineers let an AI coding bot address an issue without sufficient oversight, and that Amazon’s website experienced ‘high blast radius’ problems that required senior engineers to step in. These examples are used to illustrate how AI-generated errors can propagate quickly in complex systems when humans fail to verify the output.

Why does it matter?

Overall, the article acknowledges that generative AI can boost productivity, but stresses it remains a tool with no accountability for mistakes, making verification essential. It warns that automation bias (people trusting machine outputs over contradictory evidence) can be intensified by AI systems that produce plausible-sounding answers that pass casual inspection.

While such disclaimers help companies limit legal liability, the piece suggests aggressive marketing of AI as a productivity ‘hack’ may downplay real-world risks, particularly as firms seek returns on the billions invested in AI hardware and talent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Will AI turn novel-writing into a collaborative process?

The article argues that a novel’s value cannot be judged solely by the quality of its prose, because many readers respond to other elements such as premise, ideas and character. It points to Amazon reviews of ‘Shy Girl’, which holds a four-out-of-five-star rating based on hundreds of reviews, with many praising its hook despite awareness of ‘the controversy’ around it. One reviewer writes, ‘The premise sucked me in.’

The broader point is that plenty of novels are poorly written yet still succeed, because fiction, like music, is forgiving: a song may have an irresistible beat even with a predictable melody, and a book can move readers through suspense, beauty, realism, fantasy, or a protagonist they recognise in themselves.

From that premise, the piece asks whether fiction’s ‘layers’ (premise, plot, style and voice) must all come from a single person. It notes that collaborative creation is already normal in many fields, even if audiences rarely state their expectations explicitly: readers tend to assume a Booker Prize-winning novel is written entirely by the named author, while journalism is understood to be shaped by both writers and editors, and television and film are widely accepted as writers’ room and revision-heavy processes.

The article uses James Patterson as an example of industrial-scale collaboration in publishing, describing how he supplies collaborators with outlines and treatments and oversees many projects at once, an approach likened to a ‘novel factory’ that some argue distances him from ‘literary fiction’, yet may be the only practical way to sustain a decades-long series.

The author suggests AI will make such factories easier to create, citing a New York Times report on ‘Coral Hart’, a pseudonymous romance writer who uses AI to generate drafts in about 45 minutes, then revises them before self-publishing hundreds of books under dozens of names. Although not a bestseller, she reportedly earns ‘six figures’ and teaches others to do the same.

This points to a future in which authors act more like showrunners supervising AI-powered writers’ rooms, while raising a central risk: readers may not know who, or what, produced what they are reading, especially if AI use is not consistently disclosed, even though platforms such as Amazon ask for such disclosure.

The piece ends by questioning whether AI necessarily implies high-volume, depersonalised production. Using a personal analogy from music-making, the author notes that technology can enable rapid output, but can also serve a more artistic purpose: helping a creator overcome technical limits and ‘realise a vision’.

Why does it matter?

The underlying argument is not that AI guarantees either shallow churn or genuine creativity, but that the most consequential issues may lie in intent, authorial expectations, and honest disclosure to readers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot