EdChat AI app set for South Australian schools amid calls for careful use

South Australian public schools will soon gain access to EdChat, a ChatGPT-style app developed by Microsoft in partnership with the state government. Education Minister Blair Boyer said the tool will roll out next term across public high schools following a successful trial.

Safeguards have been built into EdChat to protect student data and alert moderators if students type concerning prompts, such as those related to self-harm or other sensitive topics. Boyer said student mental health was a priority during the design phase.

Teachers report that students use EdChat to clarify instructions, get maths solutions explained, and quiz themselves on exam topics. Adelaide Botanic High School principal Sarah Chambers described it as an ‘education equaliser’ that provides students with access to support throughout the day.

While many educators in Australia welcome the rollout, experts warn against overreliance on AI tools. Toby Walsh of UNSW said students must still learn how to write essays and think critically, while others noted that AI could actually encourage deeper questioning and analysis.

RMIT computing expert Michael Cowling said generative AI can strengthen critical thinking when used for brainstorming and refining ideas. He emphasised that students must learn to critically evaluate AI output and use the technology as a tool rather than as a substitute for learning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic introduces memory feature to Claude AI for workplace productivity

The AI startup Anthropic has added a memory feature to its Claude AI, enabling it to automatically recall details from earlier conversations, such as project information and team preferences.

Initially, the upgrade is only available to Team and Enterprise subscribers, who can manage, edit, or delete the content that the system retains.

Anthropic presents the tool as a way to improve workplace efficiency by sparing users from having to repeat instructions. Enterprise administrators have additional controls, including the ability to turn memory off entirely.

Privacy safeguards are included, such as an ‘incognito mode’ for conversations that are not stored.

Analysts view the step as an effort to catch up with competitors like ChatGPT and Gemini, which already offer similar functions. Memory also links with Claude’s newer tools for creating spreadsheets, presentations, and PDFs, allowing past information to be reused in future documents.

Anthropic plans a wider release after testing the feature with businesses. Experts suggest the approach could strengthen the company’s position in the AI market by offering both continuity and security, which appeal to enterprises handling sensitive data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ukraine urges ethical use of AI in education

AI can help build individual learning paths for Ukraine’s 3.5 million students, but its use must remain ethical, First Deputy Minister of Education and Science Yevhen Kudriavets has said.

Speaking to UNN, Kudriavets stressed that AI can analyse large volumes of information and help students acquire the knowledge they need more efficiently. He said AI could construct individual learning trajectories faster than teachers working manually.

He warned, however, that AI should not replace the educational process and that safeguards must be put in place to prevent misuse.

Kudriavets also said students in Ukraine should understand the reasons behind using AI, adding that it should be used to achieve knowledge rather than to obtain grades.

The deputy minister emphasised that technology itself is neutral, and how people choose to apply it determines whether it benefits education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Educators rethink assignments as AI becomes widespread

Educators are confronting a new reality as AI tools like ChatGPT become widespread among students. Traditional take-home assignments and essays are increasingly at risk as students commonly use AI chatbots to complete schoolwork.

Schools are responding by moving more writing tasks into the classroom and monitoring student activity. Teachers are also integrating AI into lessons, teaching students how to use it responsibly for research, summarising readings, or improving drafts, rather than as a shortcut to cheat.

Policies on AI use still vary widely. Some classrooms allow AI tools for grammar checks or study aids, while others enforce strict bans. Teachers are shifting away from take-home essays, adopting in-class tests, lockdown browsers, or flipped classrooms to better manage AI’s impact.

The inconsistency often leaves students unsure about acceptable use and challenges educators to uphold academic integrity.

Institutions like the University of California, Berkeley, and Carnegie Mellon have implemented policies promoting ‘AI literacy’, explaining when and how AI can be used, and adjusting assessments to prevent misuse.

As AI continues improving, educators seek a balance between embracing technology’s potential and safeguarding academic standards. Teachers emphasise guidance, structured use, and supervision to ensure AI supports learning rather than undermining it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FTC opens inquiry into AI chatbots and child safety

The US Federal Trade Commission has launched an inquiry into AI chatbots that act as digital companions, raising concerns about their impact on children and teenagers.

Seven firms, including Alphabet, Meta, OpenAI and Snap, have been asked to provide information about how they address risks linked to AI chatbots designed to mimic human relationships.

Chairman Andrew Ferguson said protecting children online was a top priority, stressing the need to balance safety with maintaining US leadership in AI. Regulators fear minors may be particularly vulnerable to forming emotional bonds with AI chatbots that simulate friendship and empathy.

The inquiry will investigate how companies develop AI chatbot personalities, monetise user interactions and enforce age restrictions. It will also assess how personal information from conversations is handled and whether privacy laws are being respected.

Other companies receiving orders include Character.AI and Elon Musk’s xAI.

The probe follows growing public concern over the psychological effects of generative AI on young people.

Last month, the parents of a 16-year-old who died by suicide sued OpenAI, alleging ChatGPT provided harmful instructions. The company later pledged corrective measures, admitting its chatbot does not always recommend mental health support during prolonged conversations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers social media restrictions for minors

European Commission President Ursula von der Leyen announced that the EU is considering tighter restrictions on children’s access to social media platforms.

During her annual State of the Union address, von der Leyen said the Commission is closely monitoring Australia’s approach, where individuals under 16 are banned from using platforms like TikTok, Instagram, and Snapchat.

‘I am watching the implementation of their policy closely,’ von der Leyen said, adding that a panel of experts will advise her on the best path forward for Europe by the end of 2025.

Currently, social media age limits are handled at the national level across the EU, with platforms generally setting a minimum age of 13. France, however, is moving toward a national ban for those under 15 unless an EU-wide measure is introduced.

Several EU countries, including the Netherlands, have already warned against children under 15 using social media, citing health risks.

In June, the European Commission issued child protection guidelines under the Digital Services Act, and began working with five member states on age verification tools, highlighting growing concern over digital safety for minors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teens turn to AI chatbots for support, raising mental health concerns

Mental health experts in Iowa have warned that teenagers are increasingly turning to AI chatbots instead of seeking human connection, raising concerns about misinformation and harmful advice.

The issue comes into focus on National Suicide Prevention Day, shortly after a lawsuit was filed against OpenAI, the maker of ChatGPT, over a teenager’s suicide.

Jessica Bartz, a therapy supervisor at Vera French Duck Creek, said young people are at a vulnerable stage of identity formation, a time when family communication often breaks down.

She noted that some teens use chatbot tools like ChatGPT, Genius and Copilot to self-diagnose, which can reinforce inaccurate or damaging ideas.

‘Sometimes AI can validate the wrong things,’ Bartz said, stressing that algorithms only reflect the limited information users provide.

Without human guidance, young people risk misinterpreting results and worsening their struggles.

Experts recommend that parents and trusted adults engage directly with teenagers, offering empathy and open communication instead of leaving them dependent on technology.

Bartz emphasised that nothing can replace a caring person noticing warning signs and intervening to protect a child’s well-being.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mental health concerns over chatbots fuel AI regulation calls

The impact of AI chatbots on mental health is emerging as a serious concern, with experts warning that such cases highlight the risks of more advanced systems.

Nate Soares, president of the US-based Machine Intelligence Research Institute, pointed to the tragic case of teenager Adam Raine, who took his own life after months of conversations with ChatGPT, as a warning signal for future dangers.

Soares, a former Google and Microsoft engineer, said that while companies design AI chatbots to be helpful and safe, they can produce unintended and harmful behaviour.

He warned that the same unpredictability could escalate if AI develops into artificial super-intelligence, systems capable of surpassing humans in all intellectual tasks. His new book with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies, argues that unchecked advances could lead to catastrophic outcomes.

He suggested that governments adopt a multilateral approach, similar to nuclear non-proliferation treaties, to halt a race towards super-intelligence.

Meanwhile, leading voices in AI remain divided. Meta’s chief AI scientist, Yann LeCun, has dismissed claims of an existential threat, insisting AI could instead benefit humanity.

The debate comes as OpenAI faces legal action from Raine’s family and introduces new safeguards for under-18s.

Psychotherapists and researchers also warn of the dangers of vulnerable people turning to chatbots instead of professional care, with early evidence suggesting AI tools may amplify delusional thoughts in those at risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia moves to block AI nudify apps

Australia has announced plans to curb AI tools that generate nude images and enable online stalking. The government said it would introduce new legislation requiring tech companies to block apps designed to abuse and humiliate people.

Communications Minister Anika Wells said such AI tools are fuelling sextortion scams and putting children at risk. So-called ‘nudify’ apps, which digitally strip clothing from images, have spread quickly online.

A Save the Children survey found one in five young people in Spain had been targeted by deepfake nudes, showing how widespread the abuse has become.

Canberra pledged to use every available measure to restrict access, while ensuring that legitimate AI services are not harmed. Australia has already passed strict laws banning under-16s from social media, with the new measures set to build on its reputation as a leader in online safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Latvia launches open AI framework for Europe

Language technology company Tilde has released an open AI framework designed for all European languages.

The model, named ‘TildeOpen’, was developed with the support of the European Commission and trained on the LUMI supercomputer in Finland.

According to Tilde’s head Artūrs Vasiļevskis, the project addresses a key gap in US-based AI systems, which often underperform for smaller European languages such as Latvian. By focusing on European linguistic diversity, the framework aims to provide better accessibility across the continent.

Vasiļevskis also suggested that Latvia has the potential to become an exporter of AI solutions. However, he acknowledged that development is at an early stage and that current applications remain relatively simple. The framework and user guidelines are freely accessible online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!