San Jose became the site of the inaugural Silicon Valley AI Film Festival (SVAIFF) on January 10–11, bringing together filmmakers, tech innovators and creatives to explore how AI is transforming cinema and creative expression.
The event featured AI-generated film trailers such as ‘Revolutionary’ and ‘Cosmic’, panel discussions on industry trends and the economic implications of AI in film, and a competition that received over 2,000 entries.
Festival co-founder Cynthia Jiang highlighted how production companies are increasingly using AI in post-production and concept development, while acknowledging resistance remains among some traditional filmmakers.
Both human-made and AI-assisted art appeared throughout the festival, including fashion shows that blended robotics with runway models and featured a humanoid robot performer.
The festival also celebrated creative achievements with awards, such as the Grand Prix for ‘White Night Lake’ and Best Animated Short for ‘A Tree’s Imagination.’ It premiered the feature-length film ‘The Wolves,’ directed by Bing He, who credited generative AI with enabling his vision without replacing his writing role.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Irish government plans to fast-track laws allowing heavy fines for AI abuse. The move follows controversy involving misuse of image generation tools.
Ministers will transpose the EU AI Act into Irish law. The framework defines eight harmful uses that breach rights and public decency.
Penalties could reach €35 million or seven percent of global annual turnover. AI systems would be graded by risk under the enforcement regime.
A dedicated AI office is expected to launch by August to oversee compliance. Irish and UK leaders have pressed platforms to curb harmful AI features.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Commission has warned X to address issues related to its Grok AI tool. Regulators say new features enabled the creation of sexualised images, including those of children.
EU Tech Sovereignty Commissioner Henna Virkkunen has stated that investigators have already taken action under the Digital Services Act. Failure to comply could result in enforcement measures against the platform.
X recently restricted Grok’s image editing functions to paying users after criticism from regulators and campaigners. Irish and EU media watchdogs are now engaging with Brussels on the issue.
UK ministers also plan laws banning non-consensual intimate images and tools enabling their creation. Several digital rights groups argue that existing laws already permit criminal investigations and fines.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Chinese AI start-up DeepSeek will launch a customised Italian version of its online chatbot following a probe by the Italian competition authority, the AGCM. The move follows months of negotiations and a temporary 2025 ban due to concerns over user data and transparency.
The AGCM had criticised DeepSeek for not sufficiently warning users about hallucinations or false outputs generated by its AI models.
The probe ended after DeepSeek agreed to clearer Italian disclosures and technical fixes to reduce hallucinations. The regulator noted that while improvements are commendable, hallucinations remain a global AI challenge.
DeepSeek now provides longer Italian-language warnings and detects Italian IP addresses or Italian-language prompts in order to display localised notices. The company also plans workshops to ensure staff understand Italian consumer law and has submitted multiple proposals to the AGCM since September 2025.
The start-up must provide a progress report within 120 days. Failure to meet the regulator’s requirements could lead to the probe being reopened and fines of up to €10 million (£8.7m).
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The creators of Stranger Things have been accused by some fans of using ChatGPT while writing the show’s fifth and final season, following the release of a behind-the-scenes Netflix documentary.
The series ended on New Year’s Eve with a two-hour finale that saw (SPOILER WARNING) Vecna defeated and Eleven apparently sacrificing herself. The ambiguous ending divided viewers, with some disappointed by the lack of closure.
A documentary titled One Last Adventure: The Making Of Stranger Things 5 was released shortly after the finale. One scene showing Matt and Ross Duffer working on scripts drew attention after a screenshot circulated online.
Some viewers claimed a ChatGPT-style tab was visible on a laptop screen. Others questioned the claim, noting the footage may predate the chatbot’s mainstream use.
Netflix has since confirmed two spin-offs are in development, including a new live-action series and an animated project titled Stranger Things: Tales From ’85.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
European labour markets are showing clear signs of cooling after a brief period of employee leverage during the pandemic.
Slower industrial growth, easing wage momentum and increased adoption of AI are encouraging firms to limit hiring rather than expand headcount, while workers are becoming more cautious about changing jobs.
Economic indicators suggest employment growth across the EU will slow over the coming years, with fewer vacancies and stabilising migration flows reducing labour market dynamism.
Germany, France, the UK and several central and eastern European economies are already reporting higher unemployment expectations, particularly in manufacturing sectors facing high energy costs and weaker global demand.
Despite broader caution, labour shortages persist in specific areas such as healthcare, logistics, engineering and specialised technical roles.
Southern European countries benefiting from tourism and services growth continue to generate jobs, highlighting uneven recovery patterns instead of a uniform downturn across the continent.
Concerns about automation are further shaping behaviour, as surveys indicate growing anxiety over AI reshaping roles rather than eliminating work.
Analysts expect AI to transform job structures and skill requirements, prompting workers and employers alike to prioritise adaptability instead of rapid expansion.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Consumer hardware is becoming more deeply embedded with AI as robot vacuum cleaners evolve from simple automated devices into intelligent household assistants.
New models rely on multimodal perception and real-time decision-making rather than fixed cleaning routes, allowing them to adapt to complex domestic environments.
Advanced AI systems now enable robot vacuums to recognise obstacles, optimise cleaning sequences and respond to natural language commands. Technologies such as visual recognition and mapping algorithms support adaptive behaviour, improving efficiency while reducing manual input from users.
Market data reflects the shift towards intelligence-led growth.
Global shipments of smart robot vacuums increased by 18.7 percent during the first three quarters of 2025, with manufacturers increasingly competing on intelligent experience rather than suction power, as integration with smart home ecosystems accelerates.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
NVIDIA and Eli Lilly have announced an AI co-innovation lab aimed at advancing drug discovery by combining AI with pharmaceutical research.
The partnership combines Lilly’s experience in medical development with NVIDIA’s expertise in accelerated computing and AI infrastructure.
The two companies plan to invest up to $1 billion over five years in research capacity, computing resources and specialist talent.
Based in the San Francisco Bay Area, the lab will support large-scale data generation and model development using NVIDIA platforms rather than relying solely on traditional laboratory workflows.
Beyond early research, the collaboration is expected to explore applications of AI across manufacturing, clinical development and supply chain operations.
Both NVIDIA and Eli Lilly claim the initiative is designed to enhance efficiency and scalability in medical production while fostering long-term innovation in the life sciences sector.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Morocco is preparing to unveil ‘Maroc IA 2030’, a national AI roadmap designed to structure the country’s AI ecosystem and strengthen digital transformation.
The strategy seeks to modernise public services, improve interoperability across digital systems and enhance economic competitiveness, according to officials ahead of the ‘AI Made in Morocco’ event in Rabat.
A central element of the plan involves the creation of Al Jazari Institutes, a national network of AI centres of excellence connecting academic research with innovation and regional economic needs.
The roadmap prioritises technological autonomy, trusted AI use, skills development, support for local innovation and balanced territorial coverage rather than fragmented deployment.
The initiative builds on the Digital Morocco 2030 strategy launched in 2024, which places AI at the core of national digital policy.
Authorities expect the combined efforts to generate around 240,000 digital jobs and contribute approximately $10 billion to gross domestic product by 2030, while improving Morocco’s international AI readiness ranking.
Additional measures include the establishment of a General Directorate for AI and Emerging Technologies to oversee public policy and the development of an Arab-African regional digital hub in partnership with the United Nations Development Programme.
These measures are intended to support sustainable and responsible digital innovation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.
Although overall report numbers remain low, Australian authorities have observed an increase in recent weeks.
The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.
X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.
eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.
Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.
Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.
Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!