Meta cuts 600 AI roles even as it expands superintelligence lab

Meta Platforms confirmed today it will cut approximately 600 jobs from its AI division, affecting teams including the Fundamental AI Research (FAIR) unit as well as product and infrastructure groups. The move comes even as the company continues hiring for its elite superintelligence unit, the TBD Lab, which remains unaffected by the cuts.

According to an internal memo from Chief AI Officer Alexandr Wang, the cuts are intended to streamline decision-making and give each remaining employee greater scope and impact. ‘By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact,’ Wang wrote.

Meta says affected employees will be encouraged to apply for other roles within the company, and many are expected to be reassigned. The company’s earlier AI hiring spree included poaching top talent from competitors and investing heavily in infrastructure. Analysts say the current cuts reflect a strategic pivot rather than a retreat: a shift from broad AI research to more focused, high-impact model development.

This shift comes as Meta competes with organisations like OpenAI and Google in the race to build advanced large language models and scaled AI systems. By trimming staff in legacy research and infrastructure units while bolstering resources for its superintelligence arm, Meta appears to be doubling down on frontier AI even as it seeks to streamline operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI deepfake videos spark ethical and environmental concerns

Deepfake videos created by AI platforms like OpenAI’s Sora have gone viral, generating hyper-realistic clips of deceased celebrities and historical figures in often offensive scenarios.

Families of figures such as Dr Martin Luther King Jr have publicly appealed to AI firms to prevent the use of their loved ones’ likenesses, highlighting the ethical concerns surrounding the technology.

Beyond the emotional impact, Dr Kevin Grecksch of Oxford University warns that producing deepfakes carries a significant environmental footprint. Video generation does not happen on users’ phones but in data centres, which consume vast amounts of electricity and water for cooling, often at industrial scale.

The surge in deepfake content has been rapid, with Sora downloaded over a million times in five days. Dr Grecksch urges users to consider the environmental cost, suggesting more integrated thinking about where data centres are built and how they are cooled to minimise their impact.

As governments promote AI growth zones in areas such as South Oxfordshire, questions remain over sustainable infrastructure. Users are encouraged to balance technological enthusiasm with environmental mindfulness, recognising the hidden costs of creating and sharing AI-generated media.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft faces legal action for alleged Copilot subscription deception

The Australian Competition and Consumer Commission (ACCC) has launched Federal Court proceedings against Microsoft Australia and its parent company. The regulator alleges Microsoft misled 2.7 million Australians over Microsoft 365 subscription changes after adding its AI assistant, Copilot.

The ACCC says Microsoft told subscribers to accept higher-priced Copilot plans or cancel, without mentioning the cheaper Classic plan that kept original features. Customers could only discover this option by starting the cancellation process.

ACCC Chair Gina Cass-Gottlieb said Microsoft deliberately concealed the Classic plan to push users onto more expensive subscriptions. She noted that Microsoft 365 is essential for many and that customers deserve transparent information to make informed choices.

The regulator believes many users would have stayed with their original plans if they had known all the options.

The ACCC is seeking penalties, injunctions, and redress, claiming millions faced financial harm from higher renewal charges. The case underscores the regulator’s focus on protecting consumers in the digital economy and ensuring fair practices by major technology firms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia rules out AI copyright exemption

The Albanese Government has confirmed that it will not introduce a Text and Data Mining Exception in Australia’s copyright law, reinforcing its commitment to protecting local creators.

The decision follows calls from the technology sector for an exemption allowing AI developers to use copyrighted material without permission or payment.

Attorney-General Michelle Rowland said the Government aims to support innovation and creativity but will not weaken existing copyright protections. The Government plans to explore fair licensing options to support AI innovation while ensuring creators are paid fairly.

The Copyright and AI Reference Group will focus on fair AI use, clearer copyright rules for AI-generated works, and simpler enforcement through a possible small-claims forum.

The Government said Australia must prepare for AI-related copyright challenges while keeping strong protections for creators. Collaboration between the technology and creative sectors will be essential to ensure that AI development benefits everyone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA boosts open-source robotics with new ROS 2 and Physical AI contributions

At the ROSCon conference in Singapore, NVIDIA unveiled significant open-source contributions to accelerate the future of robotics.

The company announced updates to the ROS 2 framework, new partnerships within the Open Source Robotics Alliance, and the latest release of NVIDIA Isaac ROS 4.0, all designed to strengthen collaboration in robotics development.

NVIDIA’s involvement in the new Physical AI Special Interest Group aims to enhance real-time robot control and AI processing efficiency.

Its integration of GPU-aware abstractions into ROS 2 lets the framework handle both CPUs and GPUs seamlessly, enabling faster and more consistent performance for robotic systems.
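
For context on what those abstractions change, a conventional ROS 2 node serialises every message through CPU memory on each hop between nodes; GPU-aware transports aim to keep payloads such as camera frames resident in GPU memory instead. The sketch below is a plain rclpy publisher shown for comparison only. It is an illustrative example, not NVIDIA's code, and the topic name 'camera/frames' is a placeholder.

```python
# Minimal ROS 2 publisher in rclpy (illustrative, not NVIDIA's code).
# In a conventional pipeline like this, every message passes through
# CPU memory; GPU-aware transports aim to skip that round trip by
# keeping payloads (e.g. images) in GPU memory between nodes.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String  # stand-in payload; real pipelines use sensor_msgs


class FramePublisher(Node):
    def __init__(self):
        super().__init__('frame_publisher')
        self.pub = self.create_publisher(String, 'camera/frames', 10)
        self.timer = self.create_timer(0.1, self.tick)  # publish at 10 Hz

    def tick(self):
        msg = String()
        msg.data = 'frame payload'
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(FramePublisher())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```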

Additionally, the company open-sourced Greenwave Monitor, which helps developers quickly identify and fix performance bottlenecks. NVIDIA Isaac ROS 4.0, now available on the Jetson Thor platform, provides GPU-accelerated AI models and libraries to power robot mobility and manipulation.

Global robotics leaders, including AgileX, Canonical, Intrinsic, and Robotec.ai, are already deploying NVIDIA’s open-source tools to enhance simulation, digital twins, and real-world testing.

NVIDIA’s initiatives reinforce its role as a core contributor to the open-source robotics ecosystem and the development of physical AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Copilot Mode turns Edge into an active assistant

Microsoft’s pitch for Edge is that the browser should work with you, not just wait for clicks. Copilot Mode adds chat-first tabs, multi-tab reasoning, and a dynamic pane for in-context help. Plan trips, compare options, and generate schedules without tab chaos.

Microsoft Copilot now resumes past sessions, so projects pick up exactly where you stopped. It can execute multi-step actions, like building walking tours, end-to-end. Optional history signals improve suggestions and speed up research-heavy tasks.

Voice controls handle quick actions and deeper chores with conversational prompts. Ask Copilot to open pages, summarise threads, or unsubscribe you from promo emails. Reservations and other multi-step chores are rolling out next.

Journeys groups past browsing into topic timelines for fast re-entry, with explicit opt-in. Privacy controls are prominent: clear cues when Copilot listens, acts, or views. You can toggle Copilot Mode off anytime.

Security features round things out: local AI blocks scareware overlays by default, and built-in password tools continuously create, store, and monitor credentials. Copilot Mode is available in all Copilot markets on Edge desktop and mobile, with more features coming soon.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft revives friendly AI helper with Mico

Microsoft has unveiled a new AI companion called Mico, designed to replace the infamous Clippy as the friendly face of its Copilot assistant. The animated avatar, shaped like a glowing flame or blob, reacts emotionally and visually during conversations with users.

Executives said Mico aims to balance warmth and utility, offering human-like cues without becoming intrusive. Unlike Clippy, the character can easily be switched off and is intended to feel supportive rather than persistent or overly personal.

Mico’s launch reflects growing debate about personality in AI assistants as tech firms navigate ethical concerns. Microsoft stressed that its focus remains on productivity and safety, distancing itself from flirtatious or emotionally manipulative AI designs seen elsewhere.

The character will first appear in US versions of Copilot on laptops and mobile apps. Microsoft also revealed an AI tutoring mode for students, reinforcing its efforts to create more educational and responsibly designed AI experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta expands AI safety tools for teens

Meta has announced new AI safety tools to give parents greater control over how teenagers use its AI features. The update will first launch on Instagram, allowing parents to disable one-on-one chats between teens and AI characters.

Parents will be able to block specific AI assistants and see topics teens discuss with them. Meta said the goal is to encourage transparency and support families as young users learn to navigate AI responsibly.

Teen protections already include PG-13-guided responses and restrictions on sensitive topics such as self-harm and eating disorders. The company said it also uses AI-based age detection to apply safeguards when suspected minors misreport their age.

The new parental controls will roll out in English early next year across the US, UK, Canada, and Australia. Meta said it will continue updating features to address parents’ concerns about privacy, safety, and teen wellbeing online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia demands answers from AI chatbot providers over child safety

Australia’s eSafety Commissioner has issued legal notices to four major AI companion platforms, requiring them to explain how they are protecting children from harmful or explicit content.

Character.ai, Nomi, Chai, and Chub.ai were all served under the country’s Online Safety Act and must demonstrate compliance with Australia’s Basic Online Safety Expectations.

The notices follow growing concern that AI companions, designed for friendship and emotional support, can expose minors to sexualised conversations, suicidal ideation, and other psychological risks.

eSafety Commissioner Julie Inman Grant said the companies must show how their systems prevent such harms, not merely react to them, warning that failure to comply could lead to penalties of up to $825,000 per day.

AI companion chatbots have surged in popularity among young users, with Character.ai alone attracting nearly 160,000 monthly active users in Australia.

The Commissioner stressed that these services must integrate safety measures by design, as new enforceable codes now extend to AI platforms that previously operated with minimal oversight.

The move comes amid wider efforts to regulate emerging AI technologies and ensure stronger child protection standards online.

Breaches of the new codes could result in civil penalties of up to $49.5 million, marking one of the toughest online safety enforcement regimes globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Train your own language model for $100 with NanoChat

Andrej Karpathy has unveiled NanoChat, an open-source framework that lets users train a small-scale language model for around $100 in just a few hours. Designed for accessibility and education, the project offers a simplified path into AI model development without requiring large-scale hardware.

Running on a single GPU node, NanoChat automates the full training process, from tokenisation and pretraining to fine-tuning and deployment, using a single script. The resulting model contains about 1.9 billion parameters trained on 38 billion tokens, and is capable of basic reasoning, text generation, and code completion.

The framework’s compact 8,000-line Python codebase is readable and modifiable, encouraging users to experiment with model design and performance benchmarks such as MMLU and ARC. Released under the MIT Licence, NanoChat provides open access to documentation and scripts on GitHub, making it an ideal resource for students, researchers, and AI enthusiasts eager to learn how language models work.
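
For readers curious about what sits at the core of such a pipeline, the sketch below shows the bare skeleton of next-token-prediction training on a toy corpus. It is a minimal illustration of the general recipe (tokenise the text, then train a model to predict each next token), written against PyTorch as an assumed dependency; it is not NanoChat's actual code, and the model here is far smaller than anything useful.

```python
# Minimal next-token-prediction training loop (illustrative sketch,
# NOT NanoChat's actual code). Trains a tiny character-level model.
import torch
import torch.nn as nn

text = "hello world, hello language models"      # toy corpus
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}     # tokeniser: char -> id
data = torch.tensor([stoi[ch] for ch in text])   # encoded corpus


class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)  # token embeddings
        self.head = nn.Linear(dim, vocab_size)      # logits over the vocab

    def forward(self, idx):
        return self.head(self.embed(idx))           # logits per position


model = TinyLM(len(vocab))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

for step in range(200):
    x, y = data[:-1], data[1:]                      # target is the next char
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Real systems like NanoChat wrap the same loop with a learned tokeniser, batched GPU data loading, a transformer in place of the toy model above, and fine-tuning stages after pretraining.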

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!