Adobe launches a free AI learning tool for students

US software company Adobe has introduced Student Spaces, a free AI study tool within Acrobat designed to help students generate learning materials efficiently.

Users can create flashcards, quizzes, mind maps, podcasts, and editable presentations from PDFs, Docs, PowerPoint, Excel, URLs, and handwritten notes.

The tool builds on Acrobat’s existing AI features and now lets students interact with a chat assistant grounded in their uploaded documents, which helps reduce errors.

The tool was tested with 500 students from universities including Harvard, Berkeley, and Brown. Adobe emphasises convenience, letting students generate study materials without constantly moving files between applications.

The goal is to simplify study workflows and support learning across multiple document types.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Student AI rights framework unveiled

A newly released ‘Student AI Bill of Rights’ in the US outlines a proposed framework to protect learners as AI tools become increasingly widespread in education. The initiative aims to establish clear standards for fairness, transparency and accountability.

The document highlights the need for students to be informed when AI systems are used in teaching, assessment or administration. It also stresses that students should retain control over their personal data and academic work.

Another central principle is accountability, with students given the right to question and appeal decisions made or influenced by AI systems. The framework also calls for safeguards to prevent bias and ensure equal access to educational opportunities.

While not legally binding, the proposal is designed to guide higher education institutions in developing responsible AI policies. It reflects growing efforts to define ethical standards for AI use in education in the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots are reshaping classroom debates, raising concerns over homogenised discussion

Generative AI chatbots are becoming embedded in university learning at Yale, students and academics told CNN, not only for essays and homework but also for real-time seminar participation. Students described classmates uploading readings and PDFs into chatbots before class, and even typing a professor’s question into AI during discussion to produce an immediate response to repeat aloud.

While this can make contributions sound more polished and prepared, some students said seminar conversations increasingly stall or feel flatter, with fewer personal interpretations and less exploratory debate. One student, ‘Amanda’, said she has noticed many classmates arriving with slick talking points but then offering near-identical arguments and phrasing, making discussions feel less distinctive than in earlier years.

Students gave several reasons for leaning on AI. ‘Jessica’, a senior, said she uses it daily, particularly in an economics seminar where the professor cold-calls students, both to digest readings quickly and to help her translate ideas into cohesive sentences when she struggles to phrase her comments.

‘Sophia’, a junior, said some students appear to use AI to draft ‘scripts’ for what to say in class, driven by insecurity about gaps in their understanding. She believes this weakens creativity and the ability to make original connections, replacing genuine engagement with impressive-sounding language.

A Yale spokesperson said the university is aware students are experimenting with AI in the classroom and noted a wider faculty trend towards limiting or banning laptops, using print-based materials, and prioritising direct engagement and original thinking.

The article links these observations to a March paper in ‘Trends in Cognitive Sciences’, which argues that large language models can systematically homogenise human expression and thought across language, perspective and reasoning. The paper’s authors say LLMs predict statistically likely next words based on training data that overrepresents dominant languages and ideas, potentially narrowing the ‘conceptual space’ for how people write and argue.

They warn that models tend to reproduce ‘WEIRD’ viewpoints (Western, educated, industrialised, rich and democratic) even when prompted otherwise, which may make those styles seem more credible and socially correct while marginalising other perspectives.

Researchers also describe a compounding feedback loop. As AI-generated outputs circulate in human discourse and eventually re-enter training data, sameness can intensify over time. Co-author Morteza Dehghani said offloading reasoning to AI risks intellectual laziness and could have broader social consequences, from weakened innovation to greater susceptibility to persuasion.

Educators quoted described both benefits and risks, and outlined practical responses. Thomas Chatterton Williams, a visiting professor and Bard College fellow, said AI can ‘raise the floor’ of discussion for difficult material but may suppress eccentric or truly original ideas, leaving students without a voice of their own or a sense of authorship.

Former teacher Daniel Buck called AI a ‘supercharged SparkNotes’ that can answer virtually any question, making it harder to detect shortcuts and easier for students to bypass the ‘boring minutiae’ where learning takes hold.

He worries that this also undermines relationships with professors and sustained cognitive work. Yale philosophy professor Sun-Joo Shin said model improvements forced her to redesign her assessments. Problem sets now earn completion credit and feedback, while in-class exams, oral tests and presentations carry more weight.

Williams said he has moved from writing to spontaneous, in-class, handwritten work and uses oral exit exams. Students who avoid AI argued that they are still affected by classmates’ reliance on it because it reduces the value and variety of seminar time, while others urged a middle path in which AI is treated as a collaborator, used to critique ideas rather than as a substitute for generating them or doing the reasoning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gallup finds AI is shaping some college students’ academic choices

Gallup reported that 16% of currently enrolled college students had changed their major or field of study because of AI’s potential impact. It also found that 14% have thought ‘a great deal’ and 33% ‘a fair amount’ about doing so for the same reason.

Gallup said the findings are based on web surveys conducted from 2 to 31 October 2025 with 3,801 adults pursuing an associate or bachelor’s degree. The article is part of Gallup’s work with Lumina Foundation on higher education.

According to Gallup, men were more likely than women to report having changed majors because of AI’s potential impact, at 21% compared with 12%. Associate degree students were also more likely than bachelor’s degree students to say they had changed their major or field of study, at 19% compared with 13%.

Gallup also found that concern about AI’s impact on majors was greater among students in technology and vocational fields than among those in business, humanities, and engineering. In a separate write-up published the same day, the organisation said AI use is already routine for many students, even where institutions discourage or prohibit it.

The research presents the findings as evidence that AI is affecting how some students think about academic choices and future work. It does not show a policy decision or institutional rule change, but it does add survey evidence to debates about AI, higher education, and future-of-work expectations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EPO accelerates digital patent shift with paperless system by 2027

The European Patent Office (EPO) is accelerating its transition towards a fully digital patent system, with plans to implement a paperless patent-granting process by 2027.

Discussions at the latest eSACEPO meeting highlighted steady progress and broad stakeholder support for modernising patent workflows.

Electronic filing and communication are set to become the default, with paper-based processes limited to exceptional cases. The shift aims to improve efficiency and accessibility, supported by legal adjustments and the gradual introduction of structured data formats to enhance processing accuracy.

Digital tools continue to evolve, with the MyEPO platform expanding its functionality through interface upgrades, self-service features and new capabilities such as colour drawing submissions.

The rollout of DOCX filing, alongside optional PDF backups, reflects a cautious approach designed to balance innovation with reliability.

AI is increasingly integrated into patent examination processes, supporting tasks such as search and documentation.

However, the EPO maintains a human-centric model, ensuring that decision-making authority remains with patent examiners while AI enhances productivity and consistency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France moves toward social media restrictions for children under 15

Legislative efforts in France signal a shift toward stricter governance of youth access to digital platforms, with policymakers preparing to debate a ban on social media use for children under 15.

The proposal forms part of a broader strategy to address concerns over online harms and excessive screen exposure among adolescents.

The draft law in France extends beyond access restrictions, proposing a digital curfew for older teenagers and expanding existing school phone bans to include high schools.

These measures reflect increasing reliance on regulatory intervention instead of voluntary platform safeguards, as evidence links prolonged digital engagement with risks such as cyberbullying, disrupted sleep patterns and exposure to harmful content.

Political backing for the initiative has emerged from figures aligned with Emmanuel Macron, reinforcing the government’s position that stronger oversight of digital environments is necessary. The proposal also mirrors developments in Australia, where similar restrictions have already entered into force.

The debate is further influenced by legal actions targeting major platforms, including TikTok and Meta, amid allegations that algorithmic systems contribute to harmful user experiences.

The outcome of the parliamentary discussions in France is expected to shape future approaches to child safety, platform accountability and digital rights governance across Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google expands AI skills initiative to boost career mobility in the UK

Google has launched a nationwide initiative in the UK to improve access to AI skills and support career progression.

The programme, titled ‘AI Works for Britain’, seeks to address structural barriers that limit professional mobility despite widespread access to digital tools.

New research indicates that a significant proportion of the population feels unable to advance, citing gaps in skills, confidence and professional networks.

While a majority already use AI tools, only a minority report meaningful productivity gains, suggesting that effective utilisation remains uneven across the workforce.

The initiative focuses on practical upskilling through public training hubs, university partnerships and community outreach programmes.

These efforts aim to move users beyond basic interaction with AI tools toward more advanced applications that can enhance employability, efficiency and business development.

The programme in the UK aligns with broader efforts to position AI as a driver of economic inclusion rather than a source of inequality, with policymakers and industry stakeholders emphasising the importance of workforce readiness in an increasingly AI-driven economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California challenges federal approach with new AI rules

The government of California is advancing a more interventionist approach to AI governance, signalling a divergence from federal deregulatory preferences.

An executive order signed by Gavin Newsom mandates the development of comprehensive AI policies within four months, prioritising public safety and protecting fundamental rights.

The proposed framework requires companies seeking state contracts to demonstrate safeguards against harmful outputs, including the prevention of child exploitation material and violent content.

It also calls for measures addressing algorithmic bias and unlawful discrimination, alongside increased transparency through mechanisms such as watermarking AI-generated media.

Federal guidance has discouraged state-level intervention, framing such efforts as obstacles to technological leadership.

The evolving policy landscape reflects growing concern over the societal impact of AI systems, including risks to employment, content integrity and civil liberties.

California’s initiative may therefore serve as a testing ground for future regulatory models, shaping broader debates on balancing innovation with accountability in digital governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO initiative drives new digital platform governance frameworks in South Asia

South Asia is strengthening digital platform governance through a rights-based approach shaped by regional cooperation and international guidance.

A workshop led by UNESCO brought together policymakers, civil society and academics to align platform regulation with principles of freedom of expression and access to information.

The discussions focused on addressing governance gaps linked to misinformation, platform accountability and transparency. Participants examined national experiences and identified shared regulatory challenges, emphasising the need for coordinated regional responses instead of fragmented national measures.

The initiative also validated regional toolkits designed for policymakers and civil society, translating global principles into practical guidance. These tools aim to support the implementation of governance frameworks that reflect local contexts while upholding international human rights standards.

The process builds on UNESCO’s Internet for Trust guidelines, reinforcing a human-centred model of digital governance. Continued collaboration across South Asia is expected to strengthen regulatory capacity and ensure that digital platforms operate with greater accountability and public trust.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI capacity partnership links UNDP and Intel in Lesotho and Liberia

The United Nations Development Programme and Intel are working together to expand AI training and digital skills in Lesotho and Liberia under a Memorandum of Understanding signed in March 2025. According to UNDP, the partnership is intended to combine global technical expertise with local leadership as both countries pursue broader digital transformation goals.

Lesotho and Liberia are approaching the issue from different starting points. UNDP says Lesotho is aiming for universal digital access by 2030, while Liberia is investing in AI in higher education and governance systems to prepare for the future digital economy. Through its partnership with Intel, the UN’s global development network says it is helping close gaps in AI literacy and capacity-building so communities can better understand how AI may affect everyday life.

In Lesotho, UNDP says it has already helped establish 40 Digital Skills Learning Labs and train 40 Digital Ambassadors, including teachers, religious leaders, and local influencers. Intel’s ‘AI for Citizens (AI Community Experiences)’ programme was introduced to provide locally relevant training materials for low-connectivity environments. UNDP says the onboarding included virtual sessions using games and storytelling, while analogue activities and puzzles were used to explain concepts such as computer vision.

Liberia’s work has focused more on higher education and the public sector. UNDP says it supported the University of Liberia in designing its first Master of AI programme through six online sessions with global experts and in-person workshops involving 20 faculty members. The collaboration also extended to government, with targeted training for nearly 100 officials on how AI could improve public service delivery and inform policy decisions.

Anshul Sonak, Global Head of Intel Digital Readiness Programs, said: ‘We are deeply honoured to be a part of the AI training collaboration in Liberia with UNDP. Bringing AI skills and digital literacy to a country rich in history and potential was an amazing experience. We look forward to more collaborations in the future and finding more opportunities for Intel to be a player in the region.’

UNDP says future phases may include expanding training to more communities and countries, adapting content to local languages and contexts, and adding online components as connectivity improves. Dhani Spiller, Head of UNDP’s Digital Capacity Lab, said: ‘This partnership shows what’s possible when we combine UNDP’s development mandate with the innovation and technical depth of private-sector leaders.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!