Microsoft committed to expanding AI in education in Hong Kong

US tech giant Microsoft is committed to offering generative AI services in Hong Kong through educational initiatives, despite OpenAI’s access restrictions in the city and mainland China. Microsoft collaborated with the Education University of Hong Kong Jockey Club Primary School to offer AI services starting last year.

About 220 students in grades 5 and 6 used Microsoft’s chatbot and text-to-image tools in science classes. Principal Elsa Cheung Kam Yan noted that AI enhances learning by broadening students’ access to information and allowing exploration beyond textbooks. Vice-Principal Philip Law Kam Yuen added that the school, which has collaborated with Microsoft Hong Kong for 12 years, plans to extend AI usage to more classes.

Microsoft also has agreements with eight Hong Kong universities to promote AI services. Fred Sheu, national technology officer of Microsoft Hong Kong, reaffirmed the company’s commitment to maintaining its Azure AI services, which use OpenAI’s models, and stressed that OpenAI’s API restrictions will not affect Microsoft. Microsoft’s investment in OpenAI reportedly entitles it to up to 49% of the profits from OpenAI’s for-profit arm. All government-funded universities in Hong Kong have already acquired the Azure OpenAI service, making them qualified users, and Sheu said Microsoft intends to extend the service to all schools in Hong Kong over the next few years.
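For readers curious what using the Azure OpenAI service looks like in practice, below is a minimal, hedged sketch of a chat completion call with the openai Python SDK’s Azure client; the endpoint, key, and deployment name are placeholders, not details of the Hong Kong programme.

```python
# Minimal sketch of calling an Azure OpenAI chat deployment, as a classroom tool might.
# Endpoint, key, and deployment name below are placeholders, not real values.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="classroom-gpt",  # the name of your Azure deployment, not the raw model name
    messages=[
        {"role": "system", "content": "You are a science tutor for primary school students."},
        {"role": "user", "content": "Explain in two sentences why ice floats on water."},
    ],
)
print(response.choices[0].message.content)
```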

Mary Meeker examines AI and higher education

Mary Meeker, renowned for her annual ‘Internet Trends’ reports, has released her first study in over four years, focusing on the intersection of AI and US higher education. Meeker’s previous reports were pivotal in analysing the tech economy, often spanning hundreds of pages. Her new report, significantly shorter at 16 pages, explores how the collaboration between technology and higher education can bolster America’s economic vitality.

In her latest report, Meeker asserts that the US has surpassed China in AI leadership. She emphasises that for the US to maintain this edge, technology companies and universities must work together as partners rather than see each other as obstacles. The partnership involves tech companies providing GPUs to research universities and being transparent about future work trends. Simultaneously, higher education institutions must adopt a ‘mindset change,’ treating students as customers and teachers as coaches.

Meeker highlights the historical role of universities like Stanford and MIT in driving tech innovation, initially through government funding, now increasingly through industry support. She underscores the critical nature of the coming years for higher education to remain a driving force in technological advancement. Echoing venture capitalist Alan Patricof, Meeker describes AI as a revolution more profound than transistors, PCs, biotech, the internet, or cloud computing, suggesting that AI is now ready to optimise the vast data accumulated over the past decades.

Meeker’s new report was shared with investors at her growth equity firm, BOND, and published on the firm’s website, aiming to inform and guide the next steps in integrating AI with higher education to sustain America’s technological and economic leadership.

Connecticut launches AI Academy to boost tech skills

Connecticut is spearheading AI education efforts by developing what could be the nation’s first Citizens AI Academy. The free online resource aims to offer classes that teach basic AI skills and lead to certificates that can be used in the job market.

Democratic Senator James Maroney of Connecticut emphasised the need for continuous learning in this rapidly evolving field. Determining the essential skills for an AI-driven world is challenging, given the technology’s swift progression and varied expert opinions. Gregory LaBlanc of Berkeley Law School suggested that workers should focus on managing and applying AI, complementing its capabilities, rather than on understanding its technical intricacies.

Several states, including Connecticut, California, Mississippi, and Maryland, have proposed legislation addressing AI in education. For instance, California is considering incorporating AI literacy into school curricula to ensure students understand AI principles, recognise its use, and appreciate its ethical implications. Connecticut’s AI Academy plans to offer certificates for career-related skills and provide foundational knowledge, from digital literacy to interacting with chatbots.

Despite the push for AI education, concerns about the digital divide persist. Senator Maroney highlighted the potential disadvantage for those who lack basic digital skills or access to technology. Marvin Venay of Bring Tech Home and Tesha Tramontano-Kelly of CfAL for Digital Inclusion stressed the importance of affordable internet and devices as prerequisites for effective AI education. Ensuring these fundamentals is crucial for equipping individuals with the tools to thrive in an AI-driven future.

AI revolutionises academic writing, prompting debate over quality and bias

In a groundbreaking shift for the academic world, AI now contributes to at least 10% of research papers, soaring to 20% in computer science, according to The Economist. This transformation is driven by advancements in large language models (LLMs), as highlighted in a University of Tübingen study comparing recent papers with those from the pre-ChatGPT era. The research shows a notable change in word usage, with terms like ‘delivers,’ ‘potential,’ ‘intricate,’ and ‘crucial’ becoming more common, while ‘important’ declines in use.

Chart: statistics of the words used in AI-generated research papers (Source: The Economist)
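As a rough illustration of the kind of word-frequency comparison the Tübingen study describes, the sketch below counts how often a few marker words appear per 1,000 tokens in two sets of abstracts; the corpora and word list here are illustrative assumptions, not the study’s data.

```python
# Illustrative sketch: compare marker-word rates in abstracts from before and
# after ChatGPT's release. The two corpora below are made-up stand-ins.
from collections import Counter
import re

MARKER_WORDS = ["delivers", "potential", "intricate", "crucial", "important"]

def rate_per_1000(abstracts: list[str], word: str) -> float:
    """Occurrences of `word` per 1,000 tokens across a list of abstracts."""
    tokens = [t for a in abstracts for t in re.findall(r"[a-z]+", a.lower())]
    return 1000 * Counter(tokens)[word] / max(len(tokens), 1)

pre_chatgpt = ["This result is important for future work on the topic."]
post_chatgpt = ["The model delivers crucial insights into intricate dynamics."]

for w in MARKER_WORDS:
    before, after = rate_per_1000(pre_chatgpt, w), rate_per_1000(post_chatgpt, w)
    print(f"{w:>10}: {before:.1f} -> {after:.1f} per 1,000 tokens")
```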

Researchers are leveraging LLMs for editing, translating, simplifying coding, streamlining administrative tasks, and accelerating manuscript drafting. However, this integration raises concerns. LLMs may reinforce existing viewpoints and frequently cite prominent articles, potentially leading to an inflation of publications and a dilution of research quality. This risks perpetuating bias and narrowing academic diversity.

As the academic community grapples with these changes, scientific journals are searching for ways to respond as AI grows more sophisticated. Trying to detect and ban the use of AI is increasingly futile, so other approaches to upholding research quality are under discussion: investing in more rigorous peer review, insisting that experiments be replicated, and hiring academics on the quality of their work rather than the quantity of their publications.

Recognising the inevitability of AI’s role in academic writing, Diplo introduced the KaiZen publishing approach, which combines just-in-time updates facilitated by AI with reflective writing crafted by humans, aiming to harmonise the strengths of AI and human intellect in producing scholarly work.

As AI continues to revolutionise academic writing, the landscape of research and publication is poised for further evolution, prompting ongoing debates and the search for balanced solutions.

Examiners fooled as AI students outperform real students in the UK

In a groundbreaking study published in PLOS One, the University of Reading has unveiled startling findings from a real-world Turing test involving AI in university exams, with profound implications for education.

The study, led by the university’s tech team, involved 33 fictitious student profiles using OpenAI’s GPT-4 to complete psychology assignments and exams online. Astonishingly, 94% of the AI-generated submissions went undetected by examiners, and on average they achieved higher grades than the work of real students.

Associate Professor Peter Scarfe, a co-author of the study, emphasised the urgent need for educational institutions to address the impact of AI on academic integrity. He highlighted a recent UNESCO survey revealing minimal global preparation for the use of generative AI in education, calling for a reassessment of assessment practices worldwide.

Professor Etienne Roesch, another co-author, underscored the importance of establishing clear guidelines on AI usage to maintain trust in educational assessments and beyond. She stressed the responsibility of both creators and consumers of information to uphold academic integrity amid AI advancements.

The study also pointed to ongoing challenges for educators in combating AI-driven academic misconduct, even as tools like Turnitin adapt to detect AI-authored work. Despite these challenges, educators like Professor Elizabeth McCrum, the University of Reading’s pro-vice chancellor of education, advocate for embracing AI as a tool for enhancing student learning and employability skills.

Looking ahead, Professor McCrum expressed confidence in the university’s proactive stance in integrating AI responsibly into educational practices, preparing students for a future shaped by rapid technological change.

BBC boosts educational content with £6 million AI investment

The BBC is embarking on a multimillion-pound investment in AI to revamp its educational offerings. It aims to cater to the learning needs of young users while securing its relevance in the digital age. This £6 million investment will bolster BBC Bitesize, transitioning it from a trusted digital textbook to a personalised learning platform, ensuring that learning adapts to each user’s needs and preferences.

As the broadcaster marks a century since its first educational programme, it plans to enhance its educational brand further by offering special Live Lessons and interactive content on platforms like CBBC and BBC iPlayer. By leveraging AI-powered learning tools similar to Duolingo’s, the BBC aims to harness its extensive database of educational content to provide personalised testing, fill learning gaps, and offer tailored suggestions for further learning, positioning the service as a ‘spinach version’ of YouTube.
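To make the ‘personalised learning’ idea concrete, here is a minimal, purely illustrative sketch (not the BBC’s actual system) of the kind of logic such a platform could use: track per-topic mastery from quiz results and surface the weakest topic next.

```python
# Illustrative sketch of adaptive-learning logic: keep a per-topic mastery
# estimate and recommend the topic with the largest learning gap.
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    mastery: dict[str, float] = field(default_factory=dict)  # topic -> score in [0, 1]

    def record_quiz(self, topic: str, score: float) -> None:
        # Smooth each new quiz result into the running estimate for that topic.
        prev = self.mastery.get(topic, 0.5)
        self.mastery[topic] = 0.7 * prev + 0.3 * score

    def next_topic(self) -> str:
        # The biggest learning gap is the topic with the lowest mastery estimate.
        return min(self.mastery, key=self.mastery.get)

profile = LearnerProfile()
profile.record_quiz("fractions", 0.9)
profile.record_quiz("photosynthesis", 0.4)
print(profile.next_topic())  # -> "photosynthesis"
```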

Why does it matter?

The BBC’s investment in educational content serves dual purposes, recognising the need to engage younger audiences and fulfilling the broadcaster’s founding purpose of informing, educating, and entertaining. Amidst concerns over declining viewership among younger demographics, the broadcaster seeks to reinforce its value proposition and attract a broader audience while reaffirming its commitment to public service. Through initiatives like Bitesize, which saw a surge in users during the pandemic, the BBC aims to educate and foster a lifelong relationship with audiences, irrespective of age.

Texas introduces AI grading for standardised tests

Texas students face a new era in standardised testing as the state rolls out an AI-powered scoring system to evaluate open-ended exam questions. The Texas Education Agency (TEA) is implementing an ‘automated scoring engine’ employing natural language processing technology akin to chatbots like OpenAI’s ChatGPT. With plans to replace a majority of human graders, TEA anticipates annual savings of $15–20 million, reducing the need for temporary scorers from 6,000 in 2023 to under 2,000 this year.

The State of Texas Assessments of Academic Readiness (STAAR) exams, revamped last year to include fewer multiple-choice questions, now feature up to seven times more open-ended questions. TEA’s director of student assessment, Jose Rios, cites the time-intensive nature of scoring these responses as a driving factor behind the shift. The engine was initially trained on 3,000 human-graded exam responses, and safety nets are in place: a quarter of computer-graded results, as well as ambiguous answers that confound the AI, are rescored by humans. Even so, concerns linger among educators.
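As a rough illustration of how an automated scoring engine can be trained on human-graded responses, the sketch below fits a simple TF-IDF-plus-regression model; it is an assumption-laden stand-in, not TEA’s actual engine, and the sample data is invented.

```python
# Illustrative sketch (not TEA's engine): train a scorer on human-graded
# responses using TF-IDF features and ridge regression, then score new answers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Human-graded training responses (text, score on a 0-4 rubric); made-up examples.
responses = [
    "Photosynthesis lets plants turn sunlight into chemical energy stored as glucose.",
    "Plants eat dirt to grow bigger.",
    "The author uses imagery to contrast the two settings in the passage.",
]
scores = [4, 1, 3]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge())
model.fit(responses, scores)

new_answer = ["Chlorophyll absorbs light, driving the reactions that make glucose."]
print(round(float(model.predict(new_answer)[0]), 1))  # predicted rubric score
```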

While TEA is optimistic about cost savings, some educators, like Lewisville Independent School District superintendent Lori Rapp, remain cautious. Rapp notes a ‘drastic increase’ in constructed responses scored zero during the system’s limited trial, raising questions about whether the fault lies with the test questions or with the accuracy of automated scoring. The move towards AI-driven grading aligns with a broader trend in education, with AI essay-scoring engines already in use across 21 states, albeit with mixed success. TEA emphasises the distinction between its ‘closed system’ scoring engine and broader AI, highlighting the importance of transparency and accountability in its implementation.

Why does it matter?

As Texas students navigate this new grading landscape, concerns about fairness and accountability emerge. With generative AI tools already raising issues of academic integrity and equity, questions arise about the consistency and impartiality of AI grading. As the rollout progresses, stakeholders will be watching closely to assess the impact of AI on standardised testing and its implications for education policy and practice.

Law school deans in the US divided over accreditation of online programs

A recent American Bar Association (ABA) proposal to accredit fully online law schools has sparked a nationwide debate among law school deans. In November, the ABA’s council initiated a process to gather public comments on proposed standard changes allowing online law schools without physical campuses to seek accreditation.

While some argue that the move could enhance access to legal education and reduce costs, others, including deans from prestigious institutions such as Villanova University and UC Berkeley, are raising concerns about the quality of education and the lack of data regarding the bar pass rates and employment outcomes of online law school graduates.

The proposal’s fate remains uncertain, with the ABA council set to meet in May and further rounds of feedback anticipated before a decision is reached.

Why does it matter?

Currently, only law schools with physical campuses can obtain accreditation for fully online JD (Juris Doctor) programs, and only graduates from accredited institutions can sit for the bar exam. However, some states are shifting, exemplified by the Indiana Supreme Court’s recent decision to allow graduates of schools not accredited by the ABA to request a waiver to take the bar exam, following California’s earlier move. The decision aims to address the shortage of attorneys in Indiana, though the quality of fully online programs remains in question.

EU launches toolkit to combat fake news in history education

The Council of Europe and the EU have collaborated to introduce a new educational tool to empower young people to assess content found online and in the media, discern historical inaccuracies, and engage in critical thinking about the material they come across. Dubbed the ‘Toolkit for History Classes: Debunking Fake News and Fostering Critical Thinking,’ this resource comprises 11 online activities designed to help students analyse various topics and events through historical sources and a multiperspective approach. Accompanying this toolkit is a free online training course for secondary school teachers, offering practical guidance on integrating the toolkit into classroom settings. Scheduled for release to the public in Autumn 2024, this initiative seeks to equip students with essential skills for navigating the digital landscape.

The unveiling of the toolkit will take place during the HISTOLAB European Innovation Days in History Education, scheduled from 3 to 5 April at the Council of Europe headquarters in Strasbourg. The conference, which is focused on history education, will bring together over 150 practitioners from across the EU and beyond to showcase and discuss innovative initiatives and practices in research, academia, and history teaching. Participants will explore diverse educational approaches, from analysing historical narratives through social media to using architecture to teach about totalitarian regimes.

The Innovation Days will feature nine practical workshops demonstrating engaging teaching methods that resonate with young learners. Examples include using LEGO to teach concepts of democracy and leveraging the medium of football to impart historical knowledge. With a focus on interactive and student-centred learning, these workshops aim to bridge the gap between traditional teaching methods and the interests of contemporary youth, fostering a deeper understanding of history in the process.

TikTok expands STEM education focus in EU amid regulatory scrutiny

TikTok is intensifying its focus on educational content amid mounting scrutiny in the US and the UK. The platform is rolling out its STEM feed across Europe, starting with the UK and Ireland, following its successful US launch last year. This dedicated feed, featuring science, technology, engineering, and mathematics content, will be shown alongside the main feed for users under 18 and can be enabled by older users through the app’s settings. Since its US debut, one-third of teens have engaged with the STEM feed regularly, and production of STEM-related content has surged notably.

The expansion comes with enhanced measures to ensure content quality and reliability. Namely, TikTok is partnering with Common Sense Networks and Poynter to vet the content appearing on the STEM feed. Common Sense Networks will assess appropriateness, while Poynter will evaluate information reliability. Content failing these checks will not qualify for the STEM feed, aiming to provide users with credible educational materials.
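Expressed as code, the gating rule amounts to requiring both checks to pass before a video qualifies for the STEM feed; the sketch below is purely illustrative, and the review fields are placeholders rather than TikTok or partner APIs.

```python
# Illustrative sketch of the gating rule described above: a video qualifies for
# the STEM feed only if it passes both the appropriateness and reliability reviews.
from dataclasses import dataclass

@dataclass
class Review:
    appropriate: bool  # outcome of the appropriateness assessment
    reliable: bool     # outcome of the information-reliability assessment

def qualifies_for_stem_feed(review: Review) -> bool:
    # Failing either check keeps the video out of the STEM feed.
    return review.appropriate and review.reliable

print(qualifies_for_stem_feed(Review(appropriate=True, reliable=False)))  # False
```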

This move arrives amidst growing criticism over TikTok’s handling of harmful content and its impact on young users. Concerns have been raised about addictive design tactics and inadequate protection of minors from inappropriate content. In response, the EU is investigating TikTok’s compliance with online safety regulations.

By emphasising its educational initiatives, including the STEM feed, TikTok aims to position itself as a constructive platform for youth development, countering regulatory scrutiny and public concerns.

Why does it matter?

TikTok’s push for educational content aligns with its recent efforts to present a positive global image to lawmakers and stakeholders. The company has showcased the STEM feed in congressional hearings to refute accusations of harm to young users. Through initiatives like this, TikTok seeks to demonstrate its commitment to promoting learning and responsible content consumption while navigating regulatory challenges in multiple jurisdictions.