Seomjae is set to launch its AI-powered mathematics learning program at CES 2025

Seomjae, a Seoul-based education solutions developer, is set to launch its AI-powered mathematics learning program at the Consumer Electronics Show in Las Vegas next January. The program uses an AI retrieval-augmented generation (RAG) model, developed over two years by a team of 40 mathematicians and AI developers. It features over 120,000 math problems and 30,000 lectures, offering personalised education tracks for each student.

Beta testing will begin on 29 July, involving 50 students from Seoul, Ulsan, and Boston, whose feedback will help refine the technology and validate its feasibility. The innovative system, called Transforming Educational Content to AI, extracts and analyses information from lectures and problem solutions to provide core content.
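
For readers unfamiliar with retrieval-augmented generation, the sketch below illustrates the general pattern in Python: retrieve the most relevant problems and lecture notes for a student's question, then assemble them into a grounding prompt for a generator model. The corpus, similarity scoring, and prompt format are invented for illustration and do not represent Seomjae's actual system.

```python
# Minimal sketch of a retrieval-augmented generation (RAG) flow for a maths
# tutoring corpus. Illustrative only: the corpus, scoring, and prompt format
# are hypothetical, not Seomjae's actual pipeline.
from collections import Counter
import math

# Hypothetical corpus: each entry pairs a worked problem with its lecture notes.
CORPUS = [
    {"id": 1, "problem": "solve quadratic equation by factoring",
     "lecture": "Factor ax^2+bx+c, set each factor to zero."},
    {"id": 2, "problem": "find derivative of polynomial function",
     "lecture": "Apply the power rule term by term."},
    {"id": 3, "problem": "compute area under curve with definite integral",
     "lecture": "Integrate, then evaluate at the bounds."},
]

def tokens(text):
    return text.lower().split()

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=2):
    """Rank corpus entries by similarity to the student's question."""
    q = Counter(tokens(query))
    scored = [(cosine(q, Counter(tokens(e["problem"]))), e) for e in CORPUS]
    return [e for s, e in sorted(scored, key=lambda x: -x[0])[:k] if s > 0]

def build_prompt(query):
    """Assemble retrieved material into a grounding prompt for a generator model."""
    context = "\n".join(f"- {e['problem']}: {e['lecture']}" for e in retrieve(query))
    return f"Use only this material to tutor the student:\n{context}\n\nStudent question: {query}"

print(build_prompt("how do I solve a quadratic equation"))
```

In a production system, the bag-of-words scoring above would typically be replaced by dense vector embeddings, and the assembled prompt would be passed to a large language model rather than printed.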

Seomjae is also expanding its business portfolio to include an essay-writing educational program through partnerships in the US and Vietnam. The company will participate in Dubai’s Gulf Information Technology Exhibition this October, showcasing its new educational technologies.

A company official expressed excitement about starting beta testing and integrating diverse feedback to improve the program. The goal is to refine the AI system and ensure its effectiveness for students worldwide.

The National Education Association approves AI policy to guide educators

The US National Education Association (NEA) Representative Assembly (RA) delegates have approved the NEA’s first policy statement on the use of AI in education, providing educators with a roadmap for the safe, effective, and accessible use of AI in classrooms.

Since the fall of 2023, a task force of teachers, education support professionals, higher-ed faculty, and other stakeholders has been diligently working on this policy. Their efforts resulted in a 6-page policy statement, which RA delegates reviewed during an open hearing on 24 June and overwhelmingly approved on Thursday.

A central tenet of the new policy is that students and educators must remain at the heart of the educational process. AI should support, not replace, the human connection essential for inspiring and guiding students. The policy highlights that while AI can enhance education, it must be used responsibly, focusing on protecting data, ensuring equitable access, and providing opportunities for learning about AI.

The task force identified several opportunities AI presents, such as customising instructional methods for students with disabilities and making classrooms more inclusive. However, they also acknowledged risks, including potential biases due to the lack of diversity among AI developers and the environmental impact of AI technology. It’s crucial to involve traditionally marginalised groups in AI development and policy-making to ensure inclusivity. The policy clarifies that AI shouldn’t be used to make high-stakes decisions like class placements or graduation eligibility.

Why does this matter?

The policy underscores the importance of comprehensive professional learning for educators on AI to ensure its ethical and effective use in teaching. More than 7 in 10 K-12 teachers have never received professional learning on AI. It also raises concerns about exacerbating the digital divide, emphasising that all students should have access to cutting-edge technology and educators skilled in its use across all subjects, not just in computer science.

AI-powered workplace innovation: Tech Mahindra partners with Microsoft

Tech Mahindra has partnered with Microsoft to enhance workplace experiences for over 1,200 customers and more than 10,000 employees across 15 locations by adopting Copilot for Microsoft 365. The collaboration aims to boost workforce efficiency and streamline processes through Microsoft’s trusted cloud platform and generative AI capabilities. Additionally, Tech Mahindra will deploy GitHub Copilot for 5,000 developers, anticipating a productivity increase of 35% to 40%.

Mohit Joshi, CEO and Managing Director of Tech Mahindra, highlighted the transformative potential of the partnership, emphasising the company’s commitment to shaping the future of work with cutting-edge AI technology. Tech Mahindra plans to extend Copilot’s capabilities with plugins to leverage multiple data sources, enhancing creativity and productivity. The focus is on increasing efficiency, reducing effort, and improving quality and compliance across the board.

As part of the initiative, Tech Mahindra has launched a dedicated Copilot practice to help customers unlock the full potential of AI tools, including workforce training for assessment and preparation. The company will offer comprehensive solutions to help customers assess, prepare, pilot, and adopt business solutions using Copilot for Microsoft 365, providing a scalable and personalised user experience.

Judson Althoff, Executive Vice President and Chief Commercial Officer at Microsoft, remarked that the collaboration would empower Tech Mahindra’s employees with new generative AI capabilities, enhancing workplace experiences and increasing developer productivity. The partnership aligns with Tech Mahindra’s ongoing efforts to enhance workforce productivity using GenAI tools, demonstrated by the recent launch of a unified workbench on Microsoft Fabric to accelerate the adoption of complex data workflows.

Microsoft committed to expanding AI in education in Hong Kong

US tech giant Microsoft is committed to offering generative AI services in Hong Kong through educational initiatives, despite OpenAI’s access restrictions in the city and mainland China. Microsoft collaborated with the Education University of Hong Kong Jockey Club Primary School to offer AI services starting last year.

About 220 students in grades 5 and 6 used Microsoft’s chatbot and text-to-image tools in science classes. Principal Elsa Cheung Kam Yan noted that AI enhances learning by broadening students’ access to information and allowing exploration beyond textbooks. Vice-Principal Philip Law Kam Yuen added that the school, which has collaborated with Microsoft Hong Kong for 12 years, plans to extend AI usage to more classes.

Microsoft also has agreements with eight Hong Kong universities to promote AI services. Fred Sheu, national technology officer of Microsoft in Hong Kong, reaffirmed Microsoft’s commitment to maintaining its Azure AI services, which use OpenAI’s models, emphasising that OpenAI’s API restrictions will not affect the company. Microsoft’s investment in OpenAI reportedly allows it to receive up to 49% of the profits from OpenAI’s for-profit arm. All government-funded universities in Hong Kong have already acquired the Azure OpenAI service and are therefore qualified users. He added that Microsoft intends to extend this service to all schools in Hong Kong over the next few years.

Mary Meeker examines AI and higher education

Mary Meeker, renowned for her annual ‘Internet Trends’ reports, has released her first study in over four years, focusing on the intersection of AI and US higher education. Meeker’s previous reports were pivotal in analysing the tech economy, often spanning hundreds of pages. Her new report, significantly shorter at 16 pages, explores how the collaboration between technology and higher education can bolster America’s economic vitality.

In her latest report, Meeker asserts that the US has surpassed China in AI leadership. She emphasises that for the US to maintain this edge, technology companies and universities must work together as partners rather than see each other as obstacles. The partnership involves tech companies providing GPUs to research universities and being transparent about future work trends. Simultaneously, higher education institutions must adopt a ‘mindset change,’ treating students as customers and teachers as coaches.

Meeker highlights the historical role of universities like Stanford and MIT in driving tech innovation, initially through government funding, now increasingly through industry support. She underscores the critical nature of the coming years for higher education to remain a driving force in technological advancement. Echoing venture capitalist Alan Patricof, Meeker describes AI as a revolution more profound than transistors, PCs, biotech, the internet, or cloud computing, suggesting that AI is now ready to optimise the vast data accumulated over the past decades.

Meeker’s new report was shared with investors at her growth equity firm, BOND, and published on the firm’s website, aiming to inform and guide the next steps in integrating AI with higher education to sustain America’s technological and economic leadership.

Connecticut launches AI Academy to boost tech skills

Connecticut is spearheading state efforts to build AI skills by developing what could be the nation’s first Citizens AI Academy. The free online resource aims to offer classes for learning basic AI skills and obtaining employment certificates.

Democratic Senator James Maroney of Connecticut emphasised the need for continuous learning in this rapidly evolving field. Determining the essential skills for an AI-driven world presents challenges due to the technology’s swift progression and varied expert opinions. Gregory LaBlanc from Berkeley Law School suggested that workers should focus on managing and utilising AI to complement its capabilities, rather than on understanding its technical intricacies.

Several states, including Connecticut, California, Mississippi, and Maryland, have proposed legislation addressing AI in education. For instance, California is considering incorporating AI literacy into school curricula to ensure students understand AI principles, recognise its use, and appreciate its ethical implications. Connecticut’s AI Academy plans to offer certificates for career-related skills and provide foundational knowledge, from digital literacy to interacting with chatbots.

Despite the push for AI education, concerns about the digital divide persist. Senator Maroney highlighted the potential disadvantage for those who lack basic digital skills or access to technology. Marvin Venay of Bring Tech Home and Tesha Tramontano-Kelly of CfAL for Digital Inclusion stress the importance of affordable internet and devices as prerequisites for effective AI education. Ensuring these fundamentals is crucial for equipping individuals with the tools they need to thrive in an AI-driven future.

AI revolutionises academic writing, prompting debate over quality and bias

In a groundbreaking shift for the academic world, AI now contributes to at least 10% of research papers, soaring to 20% in computer science, according to The Economist. This transformation is driven by advancements in large language models (LLMs), as highlighted in a University of Tübingen study comparing recent papers with those from the pre-ChatGPT era. The research shows a notable change in word usage, with terms like ‘delivers,’ ‘potential,’ ‘intricate,’ and ‘crucial’ becoming more common, while ‘important’ declines in use.

Chart: statistics of the words used in AI-generated research papers (Source: The Economist)
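
The kind of word-frequency comparison behind such findings can be sketched in a few lines of Python: count how often marker terms appear per 1,000 words in papers written before and after ChatGPT’s release. The two tiny corpora and the marker list below are hypothetical placeholders, not the Tübingen study’s data or method.

```python
# Sketch of a word-frequency shift analysis: compare how often marker words
# appear per 1,000 words in pre- and post-ChatGPT abstracts. The corpora here
# are tiny hypothetical stand-ins for illustration only.
MARKERS = ["delivers", "potential", "intricate", "crucial", "important"]

pre_chatgpt = """this paper presents an important method the results are important
for applications and show potential improvements"""
post_chatgpt = """this work delivers an intricate framework with crucial implications
the approach delivers substantial potential and crucial insights"""

def rate_per_1000(text, word):
    """Occurrences of a word per 1,000 words of text."""
    words = text.lower().split()
    return 1000 * words.count(word) / len(words)

for w in MARKERS:
    before = rate_per_1000(pre_chatgpt, w)
    after = rate_per_1000(post_chatgpt, w)
    print(f"{w:<10} before: {before:6.1f}  after: {after:6.1f}  change: {after - before:+6.1f}")
```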

Researchers are leveraging LLMs for editing, translating, simplifying coding, streamlining administrative tasks, and accelerating manuscript drafting. However, this integration raises concerns. LLMs may reinforce existing viewpoints and frequently cite prominent articles, potentially leading to an inflation of publications and a dilution of research quality. This risks perpetuating bias and narrowing academic diversity.

As the academic community grapples with these changes, scientific journals are seeking solutions as AI grows more sophisticated. Trying to detect and prevent the use of AI is increasingly futile. Other approaches to upholding research quality are under discussion, including investing in more rigorous peer review, insisting on the replication of experiments, and hiring academics based on the quality of their work rather than the quantity driven by publication pressure.

Recognising the inevitability of AI’s role in academic writing, Diplo introduced the KaiZen publishing approach. This innovative approach combines just-in-time updates facilitated by AI with reflective writing crafted by humans, aiming to harmonise the strengths of AI and human intellect in producing scholarly work.

As AI continues to revolutionise academic writing, the landscape of research and publication is poised for further evolution, prompting ongoing debates and the search for balanced solutions.

Examiners fooled as AI students outperform real students in the UK

In a groundbreaking study published in PLOS One, the University of Reading has unveiled startling findings from a real-world Turing test involving AI in university exams, with profound implications for education.

The study, led by the university’s tech team, involved 33 fictitious student profiles using OpenAI’s GPT-4 to complete psychology assignments and exams online. Astonishingly, 94% of the AI-generated submissions went undetected by examiners and, on average, achieved higher grades than the work of real students.

Associate Professor Peter Scarfe, a co-author of the study, emphasised the urgent need for educational institutions to address the impact of AI on academic integrity. He highlighted a recent UNESCO survey revealing minimal global preparation for the use of generative AI in education, calling for a reassessment of assessment practices worldwide.

Professor Etienne Roesch, another co-author, underscored the importance of establishing clear guidelines on AI usage to maintain trust in educational assessments and beyond. She stressed the responsibility of both creators and consumers of information to uphold academic integrity amid AI advancements.

The study also pointed to ongoing challenges for educators in combating AI-driven academic misconduct, even as tools like Turnitin adapt to detect AI-authored work. Despite these challenges, educators like Professor Elizabeth McCrum, the University of Reading’s pro-vice chancellor of education, advocate for embracing AI as a tool for enhancing student learning and employability skills.

Looking ahead, Professor McCrum expressed confidence in the university’s proactive stance in integrating AI responsibly into educational practices, preparing students for a future shaped by rapid technological change.

Why does it matter?

The findings show that AI-generated coursework can slip past examiners undetected and even earn higher marks than genuine student work, lending urgency to the authors’ calls for universities worldwide to rethink assessment practices and set clear guidelines on the use of generative AI.

BBC boosts educational content with £6 million AI investment

The BBC is embarking on a multimillion-pound investment in AI to revamp its educational offerings. It aims to cater to the learning needs of young users while securing its relevance in the digital age. This £6 million investment will bolster BBC Bitesize, transitioning it from a trusted digital textbook to a personalised learning platform, ensuring that learning adapts to each user’s needs and preferences.

As the broadcaster marks a century since its first educational program, it plans to enhance its educational brand further by offering special Live Lessons and interactive content on platforms like CBBC and BBC iPlayer. By leveraging AI-powered learning tools akin to Duolingo, the BBC aims to harness its extensive database of educational content to provide personalised testing, fill learning gaps, and offer tailored suggestions for further learning, positioning the service as a ‘spinach version’ of YouTube.
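
As a rough illustration of the personalised-testing idea, the sketch below tracks a learner’s per-topic accuracy and suggests the weakest topic next. The topics, data, and selection rule are hypothetical and are not drawn from BBC Bitesize.

```python
# Minimal sketch of personalised-testing logic: track a learner's per-topic
# accuracy and suggest the weakest topic (the learning gap) next.
# Topic names and data are hypothetical, not BBC Bitesize's system.
from collections import defaultdict

class LearningTracker:
    def __init__(self):
        # topic -> [correct_answers, total_attempts]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, topic, correct):
        """Log one quiz answer for a topic."""
        self.stats[topic][0] += int(correct)
        self.stats[topic][1] += 1

    def mastery(self, topic):
        """Fraction of attempts answered correctly for a topic."""
        correct, total = self.stats[topic]
        return correct / total if total else 0.0

    def next_topic(self):
        """Suggest the topic with the lowest mastery score."""
        return min(self.stats, key=self.mastery)

tracker = LearningTracker()
for topic, correct in [("fractions", True), ("fractions", False),
                       ("algebra", True), ("photosynthesis", False)]:
    tracker.record(topic, correct)

print("Suggested next topic:", tracker.next_topic())  # weakest area first
```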

Why does it matter?

Recognising the need to engage younger audiences and fulfil its founding purpose of informing, educating, and entertaining, the BBC’s investment in educational content serves dual purposes. Amidst concerns over declining viewership among younger demographics, the broadcaster seeks to reinforce its value proposition and attract a broader audience while reaffirming its commitment to public service. Through initiatives like Bitesize, which saw a surge in users during the pandemic, the BBC aims to educate and foster a lifelong relationship with audiences, irrespective of age.

Texas introduces AI grading for standardised tests

Texas students face a new era in standardised testing as the state rolls out an AI-powered scoring system to evaluate open-ended exam questions. The Texas Education Agency (TEA) is implementing an ‘automated scoring engine’ employing natural language processing technology akin to chatbots like OpenAI’s ChatGPT. With plans to replace a majority of human graders, TEA anticipates annual savings of $15–20 million, reducing the need for temporary scorers from 6,000 in 2023 to under 2,000 this year.

The State of Texas Assessments of Academic Readiness (STAAR) exams, revamped last year to include fewer multiple-choice questions, now feature up to seven times more open-ended inquiries. TEA’s director of student assessment, Jose Rios, cites the time-intensive nature of scoring these responses as a driving factor behind the shift. Despite initial training on 3,000 human-graded exam responses and built-in safety nets, including human rescoring of a quarter of computer-graded results and of ambiguous answers that confound the AI, concerns linger among educators.
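
To make the workflow concrete, here is a toy sketch of an automated scorer with a human-review safety net: it predicts a score from the most similar human-graded training response and routes low-similarity (ambiguous) answers to a human scorer. The features, training examples, and threshold are invented for illustration and bear no relation to TEA’s actual engine.

```python
# Toy sketch of an automated constructed-response scorer with a human-review
# safety net, loosely mirroring the workflow described above. The features,
# training data, and threshold are hypothetical; this is not TEA's engine.
from collections import Counter

# Hypothetical human-graded training responses: (text, score 0-2).
TRAIN = [
    ("the author uses imagery to show the storm's danger with evidence from paragraph two", 2),
    ("the storm is dangerous because the author says so", 1),
    ("i dont know", 0),
    ("imagery and word choice create a sense of danger, supported by quotes", 2),
    ("it is about a storm", 1),
]

def features(text):
    """Bag-of-words features for a response."""
    return Counter(text.lower().split())

def similarity(a, b):
    """Overlap-based similarity between two word Counters."""
    return sum((a & b).values()) / max(1, sum((a | b).values()))

def score(response, review_threshold=0.2):
    """Predict a score from the most similar training response; flag
    low-similarity (ambiguous) answers for human rescoring."""
    feats = features(response)
    best_sim, best_score = max(
        (similarity(feats, features(t)), s) for t, s in TRAIN
    )
    needs_human = best_sim < review_threshold
    return best_score, needs_human

for answer in ["the author uses imagery to show danger", "bananas are yellow"]:
    s, flag = score(answer)
    print(answer, "->", s, "(route to human)" if flag else "(auto)")
```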

While TEA is optimistic about cost savings, some educators, like Lewisville Independent School District superintendent Lori Rapp, remain cautious. Rapp notes a ‘drastic increase’ in zero-scored constructed responses during the system’s limited trial, raising questions about whether the fault lies with the test questions themselves or with the accuracy of automated scoring. The move towards AI-driven grading aligns with a broader trend in education, with AI essay-scoring engines already in use across 21 states, albeit with mixed success. TEA emphasises distinctions between its ‘closed system’ scoring engine and broader AI, highlighting the importance of transparency and accountability in its implementation.

Why does it matter?

As Texas students navigate this new grading landscape, concerns about fairness and accountability emerge. With generative AI tools already raising issues of academic integrity and equity, questions arise about the consistency and impartiality of AI grading. As the rollout progresses, stakeholders will be watching closely to assess the impact of AI on standardised testing and its implications for education policy and practice.