Zuckerberg’s Meta has unveiled a new generation of smart glasses powered by AI at its annual Meta Connect conference in California. Working with Ray-Ban and Oakley, the company introduced devices including the Meta Ray-Ban Display and the Oakley Meta Vanguard.
The glasses are designed to bring the Meta AI assistant into everyday use, rather than confining it to phones or computers.
The Ray-Ban Display features an in-lens colour screen for video calls and messaging, along with a 12-megapixel camera, and will sell for $799. It can be paired with a neural wristband that lets wearers perform tasks through hand gestures.
Meta also presented $499 Oakley Vanguard glasses aimed at sports fans and launched a second generation of its Ray-Ban Meta glasses at $379. Around two million smart glasses have been sold since Meta entered the market in 2023.
Analysts see the glasses as a more practical way of introducing AI to everyday life than the firm’s costly Metaverse project. Yet many caution that Meta must prove the benefits outweigh the price.
Chief executive Mark Zuckerberg described the technology as a scientific breakthrough. He said it forms part of Meta’s vast AI investment programme, which includes massive data centres and research into artificial superintelligence.
The launch came as activists protested outside Meta’s New York headquarters, accusing the company of neglecting children’s safety. Former safety researchers also told the US Senate that Meta ignored evidence of harm caused by its VR products, claims the company has strongly denied.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Japanese regulators are reviewing whether the social media platform X fails to comply with new content removal rules.
The law, which took effect in April, requires designated platforms to allow victims of harmful online posts to request deletion without facing unnecessary obstacles.
X currently obliges non-users to register an account before they can file such requests. Officials say this requirement could place an excessive burden on victims and may therefore violate the law.
The company has also been criticised for not providing clear public guidance on submitting removal requests, prompting questions over its commitment to combating online harassment and defamation.
Other platforms, including YouTube and messaging service Line, have already introduced mechanisms that meet the requirements.
The Ministry of Internal Affairs and Communications has urged all operators to treat non-users like registered users when responding to deletion demands. Still, X and the bulletin board site bakusai.com have yet to comply.
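For illustration only, here is a minimal, hypothetical Python sketch of an intake flow that treats non-users the same as registered users, as the ministry urges; the class and field names are assumptions made for this example, not any platform's real interface.

```python
from dataclasses import dataclass, field
from typing import Optional
import uuid

@dataclass
class TakedownRequest:
    post_url: str                     # the post the victim wants removed
    reason: str                       # e.g. defamation or harassment
    contact_email: str                # how moderators reach the requester
    account_id: Optional[str] = None  # None for non-users: no login needed
    request_id: str = field(default_factory=lambda: uuid.uuid4().hex)

review_queue: list[TakedownRequest] = []

def submit_takedown(req: TakedownRequest) -> str:
    """Accept a deletion request without demanding account registration."""
    # The compliance point: nothing here rejects req.account_id being None.
    review_queue.append(req)
    return req.request_id

# A victim with no account on the platform can still file a request:
rid = submit_takedown(TakedownRequest(
    post_url="https://example.com/post/123",
    reason="defamation",
    contact_email="victim@example.org",
))
print(f"request {rid} queued, {len(review_queue)} item(s) pending")
```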
The ministry said it will continue to assess whether X's practices breach the law. Experts on a government panel have called for more public information on the process, arguing that awareness could help deter online abuse.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
YouTube has unveiled a new suite of AI tools designed to enhance the creation of Shorts, with its headline innovation being Veo 3 Fast, a streamlined version of Google DeepMind’s video model.
The system can generate 480p clips with sound almost instantly, marking the first time audio has been added to Veo-generated Shorts. It is already rolling out in the US, the UK, Canada, Australia and New Zealand, with other regions to follow.
The platform also introduced several advanced editing features, such as motion transfer from video to still images, text-based styling, object insertion and Speech to Song Remixing, which converts spoken dialogue into music through DeepMind’s Lyria 2 model.
Testing will begin in the US before global expansion.
Another innovation, Edit with AI, automatically assembles raw footage into a rough cut complete with transitions, music and interactive voiceovers. YouTube confirmed the tool is in trials and will launch in select markets within weeks.
All AI-generated Shorts will display labels and watermarks to maintain transparency, as YouTube pushes to expand creator adoption and boost Shorts’ growth as a rival to TikTok and Instagram Reels.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Australia has released its regulatory guidance for the incoming social media age restriction law, which takes effect on December 10. Users under 16 will be barred from holding accounts on most major platforms, including Instagram, TikTok, and Facebook.
The new guidance details what are considered ‘reasonable steps’ for compliance. Platforms must detect and remove underage accounts, communicating clearly with affected users. It remains uncertain whether removed accounts will have their content deleted or if they can be reactivated once the user turns 16.
Platforms are also expected to block attempts to re-register, including the use of VPNs or other workarounds. Companies are encouraged to implement a multi-step age verification process and provide users with a range of options, rather than relying solely on government-issued identification.
Blanket age verification won’t be required, nor will platforms need to store personal data from verification processes. Instead, companies must demonstrate effectiveness through system-level records. Existing data, such as an account’s creation date, may be used to estimate age.
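As a worked example of that last signal, the sketch below shows how an account's creation date yields a lower bound on the holder's current age; the minimum signup age of 13 is an assumption made for illustration, not a figure from the guidance.

```python
from datetime import date

MIN_SIGNUP_AGE = 13  # assumed minimum age the platform required at signup

def minimum_possible_age(account_created: date, today: date) -> int:
    """If the holder was at least MIN_SIGNUP_AGE when the account was made,
    they are at least that old plus the account's age in whole years."""
    years_held = today.year - account_created.year - (
        (today.month, today.day) < (account_created.month, account_created.day)
    )
    return MIN_SIGNUP_AGE + years_held

# An account created in mid-2019 implies a holder aged at least 19 by the
# December 10 deadline, so this signal alone would not flag it as under-16.
print(minimum_possible_age(date(2019, 6, 1), date(2025, 12, 10)))  # -> 19
```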
Under-16s will still be able to view content without logging in, for example, watching YouTube videos in a browser. However, shared access to adult accounts on family devices could present enforcement challenges.
Communications Minister Anika Wells stated that there is ‘no excuse for non-compliance.’ Each platform must now develop its own strategy to meet the law’s requirements ahead of the fast-approaching deadline.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Australia’s Year 12 students are the first to complete their final school years with widespread access to AI tools such as ChatGPT.
Educators warn that while the technology can support study, it risks undermining the core skills of independent thinking and writing. In English, the only compulsory subject, critical thinking is now viewed as more essential than ever.
Trials in New South Wales and South Australia use AI programs designed to guide rather than provide answers, but teachers remain concerned about how to verify work and ensure students value their own voices.
Experts argue that exams, such as the VCE English paper in October, highlight the reality that AI cannot sit assessments. Students must still practise planning, drafting and reflecting on ideas, skills which remain central to academic success.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
South Australian public schools will soon gain access to EdChat, a ChatGPT-style app developed by Microsoft in partnership with the state government. Education Minister Blair Boyer said the tool will roll out next term across public high schools following a successful trial.
Safeguards have been built into EdChat to protect student data and alert moderators if students type concerning prompts, such as those related to self-harm or other sensitive topics. Boyer said student mental health was a priority during the design phase.
Teachers report that students use EdChat to clarify instructions, get maths solutions explained, and quiz themselves on exam topics. Adelaide Botanic High School principal Sarah Chambers described it as an ‘education equaliser’ that provides students with access to support throughout the day.
While many educators in Australia welcome the rollout, experts warn against overreliance on AI tools. Toby Walsh of UNSW said students must still learn how to write essays and think critically, while others noted that AI could actually encourage deeper questioning and analysis.
RMIT computing expert Michael Cowling said generative AI can strengthen critical thinking when used for brainstorming and refining ideas. He emphasised that students must learn to critically evaluate AI output and utilise the technology as a tool, rather than a substitute for learning.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The AI startup Anthropic has added a memory feature to its Claude AI, designed to automatically recall details from earlier conversations, such as project information and team preferences.
Initially, the upgrade is only available to Team and Enterprise subscribers, who can manage, edit, or delete the content that the system retains.
Anthropic presents the tool as a way to improve workplace efficiency by sparing users from repeating instructions. Enterprise administrators have additional controls, including turning memory off entirely.
Privacy safeguards are included, such as an ‘incognito mode’ for conversations that are not stored.
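To make the described controls concrete, here is a minimal, hypothetical Python sketch of a memory store with user-editable entries, an administrator off switch, and an incognito mode; it is not Anthropic's actual API, and all names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    enabled: bool = True  # admin control: turn memory off entirely
    entries: dict[str, str] = field(default_factory=dict)

    def remember(self, key: str, detail: str, incognito: bool = False) -> None:
        # Incognito conversations are never written to the store.
        if self.enabled and not incognito:
            self.entries[key] = detail

    def edit(self, key: str, detail: str) -> None:
        self.entries[key] = detail   # users can correct what is retained

    def forget(self, key: str) -> None:
        self.entries.pop(key, None)  # ...or delete it outright

memory = MemoryStore()
memory.remember("project", "Q4 launch uses the shared style guide")
memory.remember("aside", "sensitive detail", incognito=True)  # not stored
print(memory.entries)  # {'project': 'Q4 launch uses the shared style guide'}
```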
Analysts view the step as an effort to catch up with competitors like ChatGPT and Gemini, which already offer similar functions. Memory also links with Claude’s newer tools for creating spreadsheets, presentations, and PDFs, allowing past information to be reused in future documents.
Anthropic plans a wider release after testing the feature with businesses. Experts suggest the approach could strengthen the company’s position in the AI market by offering both continuity and security, which appeal to enterprises handling sensitive data.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI can help build individual learning paths for Ukraine’s 3.5 million students, but its use must remain ethical, First Deputy Minister of Education and Science Yevhen Kudriavets has said.
Speaking to UNN, Kudriavets stressed that AI can analyse large volumes of information and help students acquire the knowledge they need more efficiently. He said AI could construct individual learning trajectories faster than teachers working manually.
He warned, however, that AI should not replace the educational process and that safeguards must be found to prevent misuse.
Kudriavets also said students in Ukraine should understand the reasons behind using AI, adding that it should be used to achieve knowledge rather than to obtain grades.
The deputy minister emphasised that technology itself is neutral, and how people choose to apply it determines whether it benefits education.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Educators are confronting a new reality as AI tools like ChatGPT become widespread among students. Traditional take-home assignments and essays are increasingly at risk as students commonly use AI chatbots to complete schoolwork.
Schools are responding by moving more writing tasks into the classroom and monitoring student activity. Teachers are also integrating AI into lessons, teaching students how to use it responsibly for research, summarising readings, or improving drafts, rather than as a shortcut to cheat.
Policies on AI use still vary widely. Some classrooms allow AI tools for grammar checks or study aids, while others enforce strict bans. Teachers are shifting away from take-home essays, adopting in-class tests, lockdown browsers, or flipped classrooms to manage AI’s impact better.
The inconsistency often leaves students unsure about acceptable use and challenges educators to uphold academic integrity.
Institutions like the University of California, Berkeley, and Carnegie Mellon have implemented policies promoting ‘AI literacy,’ explaining when and how AI can be used, and adjusting assessments to prevent misuse.
As AI continues improving, educators seek a balance between embracing technology’s potential and safeguarding academic standards. Teachers emphasise guidance, structured use, and supervision to ensure AI supports learning rather than undermining it.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US Federal Trade Commission has launched an inquiry into AI chatbots that act as digital companions, raising concerns about their impact on children and teenagers.
Seven firms, including Alphabet, Meta, OpenAI and Snap, have been asked to provide information about how they address risks linked to AI chatbots designed to mimic human relationships.
Chairman Andrew Ferguson said protecting children online was a top priority, stressing the need to balance safety with maintaining US leadership in AI. Regulators fear minors may be particularly vulnerable to forming emotional bonds with AI chatbots that simulate friendship and empathy.
The inquiry will investigate how companies develop AI chatbot personalities, monetise user interactions and enforce age restrictions. It will also assess how personal information from conversations is handled and whether privacy laws are being respected.
Other companies receiving orders include Character.AI and Elon Musk’s xAI.
The probe follows growing public concern over the psychological effects of generative AI on young people.
Last month, the parents of a 16-year-old who died by suicide sued OpenAI, alleging ChatGPT provided harmful instructions. The company later pledged corrective measures, admitting its chatbot does not always recommend mental health support during prolonged conversations.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!