Stranger Things fans question AI use in show finale’s script

The creators of Stranger Things have been accused by some fans of using ChatGPT while writing the show’s fifth and final season, following the release of a behind-the-scenes Netflix documentary.

The series ended on New Year’s Eve with a two-hour finale that (SPOILER WARNING) saw Vecna defeated and Eleven apparently sacrifice herself. The ambiguous ending divided viewers, with some disappointed by the lack of closure.

A documentary titled One Last Adventure: The Making Of Stranger Things 5 was released shortly after the finale. One scene showing Matt and Ross Duffer working on scripts drew attention after a screenshot circulated online.

Some viewers claimed a ChatGPT-style tab was visible on a laptop screen. Others questioned the claim, noting the footage may predate the chatbot’s mainstream use.

Netflix has since confirmed two spin-offs are in development, including a new live-action series and an animated project titled Stranger Things: Tales From ’85.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Malta plans tougher laws against deepfake abuse

Malta’s government is preparing new legal measures to curb the abusive use of deepfake technology, with existing laws now under review. The planned reforms aim to introduce penalties for the misuse of AI in cases of harassment, blackmail, and bullying.

The move mirrors earlier cyberbullying and cyberstalking laws, extending similar protections to AI-generated content. Authorities are promoting AI while stressing the need for strong public safety and legal safeguards.

AI and youth participation were the main themes of the National Youth Parliament meeting, where Prime Minister Robert Abela highlighted the role of young people in shaping Malta’s long-term development strategy, Vision Malta 2050.

The strategy focuses on the next 25 years and directly affects those entering the workforce or starting families.

Young people were described as key drivers of national policy in areas such as fertility, environmental protection, and work-life balance. Senior officials and members of the Youth Advisory Forum attended the meeting.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes Europe’s labour market outlook

European labour markets are showing clear signs of cooling after a brief period of employee leverage during the pandemic.

Slower industrial growth, easing wage momentum and increased adoption of AI are encouraging firms to limit hiring instead of expanding headcounts, while workers are becoming more cautious about changing jobs.

Economic indicators suggest employment growth across the EU will slow over the coming years, with fewer vacancies and stabilising migration flows reducing labour market dynamism.

Germany, France, the UK and several central and eastern European economies are already reporting higher unemployment expectations, particularly in manufacturing sectors facing high energy costs and weaker global demand.

Despite broader caution, labour shortages persist in specific areas such as healthcare, logistics, engineering and specialised technical roles.

Southern European countries benefiting from tourism and services growth continue to generate jobs, highlighting uneven recovery patterns instead of a uniform downturn across the continent.

Concerns about automation are further shaping behaviour, as surveys indicate growing anxiety that AI will reshape roles rather than eliminate work outright.

Analysts expect AI to transform job structures and skill requirements, prompting workers and employers alike to prioritise adaptability instead of rapid expansion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Robot vacuum market grows as AI becomes central to cleaning technology

AI is becoming more deeply embedded in consumer hardware as robot vacuum cleaners evolve from simple automated devices into intelligent household assistants.

New models rely on multimodal perception and real-time decision-making instead of fixed cleaning routes, allowing them to adapt to complex domestic environments.

Advanced AI systems now enable robot vacuums to recognise obstacles, optimise cleaning sequences and respond to natural language commands. Technologies such as visual recognition and mapping algorithms support adaptive behaviour, improving efficiency while reducing manual input from users.
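At its simplest, cleaning-sequence optimisation can be pictured as a route-ordering problem over mapped room zones. The sketch below is illustrative only: the zone list and greedy nearest-neighbour heuristic are assumptions for the example, not how any particular manufacturer’s planner works, and real devices combine far richer SLAM maps and perception models.

```python
import math

# Hypothetical mapped zones (name, x, y) produced by the vacuum's mapping layer.
ZONES = [
    ("kitchen", 0.0, 0.0),
    ("hall", 2.5, 0.5),
    ("living room", 5.0, 1.0),
    ("bedroom", 4.0, 4.0),
]

def plan_cleaning_order(zones, start=(0.0, 0.0)):
    """Greedy nearest-neighbour ordering of zones: a toy stand-in for the
    cleaning-sequence optimisation described above, not a production planner."""
    remaining = list(zones)
    position = start
    order = []
    while remaining:
        nearest = min(remaining, key=lambda z: math.dist(position, (z[1], z[2])))
        order.append(nearest[0])
        position = (nearest[1], nearest[2])
        remaining.remove(nearest)
    return order

if __name__ == "__main__":
    print(plan_cleaning_order(ZONES))  # e.g. ['kitchen', 'hall', 'living room', 'bedroom']
```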

Market data reflects the shift towards intelligence-led growth.

Global shipments of smart robot vacuums increased by 18.7 percent during the first three quarters of 2025, with manufacturers increasingly competing on intelligent experience rather than suction power, as integration with smart home ecosystems accelerates.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Eli Lilly and NVIDIA invest in AI-driven pharmaceutical innovation

NVIDIA and Eli Lilly have announced a joint AI co-innovation lab aimed at advancing drug discovery by combining AI with pharmaceutical research.

The partnership combines Lilly’s experience in medical development with NVIDIA’s expertise in accelerated computing and AI infrastructure.

The two companies plan to invest up to $1 billion over five years in research capacity, computing resources and specialist talent.

Based in the San Francisco Bay Area, the lab will support large-scale data generation and model development using NVIDIA platforms, instead of relying solely on traditional laboratory workflows.

Beyond early research, the collaboration is expected to explore applications of AI across manufacturing, clinical development and supply chain operations.

Both NVIDIA and Eli Lilly claim the initiative is designed to enhance efficiency and scalability in medical production while fostering long-term innovation in the life sciences sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Morocco outlines national AI roadmap to 2030

Morocco is preparing to unveil ‘Maroc IA 2030’, a national AI roadmap designed to structure the country’s AI ecosystem and strengthen digital transformation.

The strategy seeks to modernise public services, improve interoperability across digital systems and enhance economic competitiveness, according to officials ahead of the ‘AI Made in Morocco’ event in Rabat.

A central element of the plan involves the creation of Al Jazari Institutes, a national network of AI centres of excellence connecting academic research with innovation and regional economic needs.

The roadmap prioritises technological autonomy, trusted AI use, skills development, support for local innovation and balanced territorial coverage instead of fragmented deployment.

The initiative builds on the Digital Morocco 2030 strategy launched in 2024, which places AI at the core of national digital policy.

Authorities expect the combined efforts to generate around 240,000 digital jobs and contribute approximately $10 billion to gross domestic product by 2030, while improving the international AI readiness ranking of Morocco.

Additional measures include the establishment of a General Directorate for AI and Emerging Technologies to oversee public policy and the development of an Arab African regional digital hub in partnership with the United Nations Development Programme.

Their main goal is to support sustainable and responsible digital innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Multiply Labs targets automation in cell therapy manufacturing

Robotics firm Multiply Labs is introducing automation into cell therapy manufacturing to cut costs by more than 70% and increase output. The startup applies industrial robotics to clean-room environments, replacing slow and contamination-prone manual processes.

Founded in 2016, the San Francisco-based company collaborates with leading cell therapy developers, including Kyverna Therapeutics and Legend Biotech. Its robotic systems perform sterile, precision tasks involved in producing gene-modified cell therapies at scale.

Multiply Labs uses NVIDIA Omniverse to create digital twins of laboratory environments and Isaac Sim to train robots for specialised workflows. Humanoid robots built on NVIDIA’s Isaac GR00T model are also being developed to assist with material handling while maintaining hygiene standards.
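A common ingredient of this kind of simulation-first training is domain randomisation: varying the virtual scene between episodes so a policy learned in the digital twin transfers more robustly to the physical clean room. The snippet below is a generic, self-contained illustration of that idea only; it does not use the Omniverse or Isaac Sim APIs, and the scene parameters are hypothetical.

```python
import random
from dataclasses import dataclass

@dataclass
class SimScene:
    """Hypothetical digital-twin scene parameters for one training episode."""
    vial_position_mm: tuple
    lighting_lux: float
    gripper_friction: float

def randomise_scene() -> SimScene:
    # Domain randomisation: perturb the simulated lab slightly each episode so the
    # learned behaviour is less sensitive to real-world variation.
    return SimScene(
        vial_position_mm=(random.uniform(-5, 5), random.uniform(-5, 5)),
        lighting_lux=random.uniform(300, 800),
        gripper_friction=random.uniform(0.4, 0.9),
    )

for episode in range(3):
    scene = randomise_scene()
    print(f"episode {episode}: {scene}")
    # In a real pipeline this scene would be loaded into the simulator and the
    # robot policy rolled out and updated; that part is omitted here.
```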

Cell therapies involve modifying patient or donor cells to treat various conditions, including cancers, autoimmune diseases, and genetic disorders. The highly customised nature of these treatments makes production costly and sensitive to human error, increasing the risk of failed batches.

By automating thousands of delicate steps, robotics improves consistency, reduces contamination, and preserves expert knowledge. Multiply Labs states that automation could enable the mass production of life-saving therapies at a lower cost and greater availability.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia raises concerns over AI misuse on X

The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.

Although overall report numbers remain low, authorities in Australia have observed an increase in recent weeks.

The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.

X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.

eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.

Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.

Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.

Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude expands into healthcare and life sciences

Healthcare and life sciences organisations face increasing administrative pressure, fragmented systems, and rapidly evolving research demands. At the same time, regulatory compliance, safety, and trust remain critical requirements across all clinical and scientific operations.

Anthropic has launched new tools and connectors for Claude in Microsoft Foundry to support enterprise-scale AI workflows. Built on Azure’s secure infrastructure, the platform promotes responsible integration across data, compliance, and workflow automation environments.

The new capabilities are designed specifically for healthcare and life sciences use cases, including prior authorisation review, claims appeals processing, care coordination, and patient triage.
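As a rough illustration of what a prior-authorisation review prompt might look like, the sketch below calls Claude through the standard Anthropic Python SDK. It is not the Microsoft Foundry connector described above, and the model identifier, prompt wording and request summary are placeholder assumptions for the example.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Placeholder request summary; real workflows would draw this from clinical systems.
request_summary = (
    "Procedure: MRI lumbar spine. Diagnosis: chronic low back pain, 8 weeks. "
    "Conservative therapy documented: physical therapy, NSAIDs."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model identifier
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": (
            "You are assisting a prior-authorisation reviewer. Summarise whether "
            "the request below appears to meet typical medical-necessity criteria "
            "and list any missing documentation.\n\n" + request_summary
        ),
    }],
)

print(response.content[0].text)
```

In a regulated deployment, output like this would support a human reviewer’s decision rather than replace it.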

In research and development, the tools support protocol drafting, regulatory submissions, bioinformatics analysis, and experimental design.

According to Anthropic, the updates build on significant improvements in Claude’s underlying models, delivering stronger performance in areas such as scientific interpretation, computational biology, and protein understanding.

The aim is to enable faster, more reliable decision-making across regulated, real-world workflows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI enters Colorado classrooms as schools experiment with generative tools

Teachers across Colorado are exploring how AI can be used as an instructional assistant to support lesson delivery and student learning.

Some educators are experimenting with generative AI tools that help with tasks like lesson planning, summarising material and creating examples, while also educating students on responsible use of AI.

The broader trend mirrors state and district efforts to develop AI strategies for education. Reports indicate that many districts are establishing steering committees and policies to guide the safe and effective use of AI in classrooms.

Other districts, by contrast, limit student access due to privacy concerns, underscoring the need for training and clear guidelines.

Teachers have noted both benefits, such as time savings and personalised support, and challenges, including ethical questions about plagiarism and student independence, highlighting a period of experimentation and adjustment as AI becomes part of mainstream education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!