AI-generated film removed from cinemas after public backlash

A prize-winning AI-generated short film has been pulled from cinemas following criticism from audiences. Thanksgiving Day, created by filmmaker Igor Alferov, was due to screen in selected theatres before feature presentations.

Concerns emerged after news of the screening spread online, prompting complaints directed at AMC Theatres. The chain stated it had not programmed the film and that pre-show advertising partner Screenvision Media had arranged the placement.

AMC confirmed it would not participate in the initiative, meaning the AI film will no longer appear in its locations. The animated short, produced using Google’s Gemini 3.1 and Nano Banana Pro tools, had recently won an AI film festival award.

The episode comes amid broader debate about artificial intelligence in Hollywood. Industry insiders suggest studios are quietly increasing AI use in production, even as concerns grow over job losses and economic uncertainty within Los Angeles’ entertainment sector.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Medical AI risks in Turkey highlight data bias and privacy challenges

Ankara is seeing growing debate over the risks and benefits of medical AI as experts warn that poorly governed systems could threaten patient safety.

Associate professor Agah Tugrul Korucu said AI can deliver meaningful benefits for healthcare only when deployment is backed by rigorous ethical rules and strong oversight, rather than rushed out without proper safeguards.

Korucu explained that data bias remains one of the most significant dangers because AI models learn directly from the information they receive. Underrepresented age groups, regions or social classes can distort outcomes and create systematic errors.
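
The sampling bias Korucu describes can be surfaced with a simple representation audit before training. A minimal sketch, where the `age_band` field, the toy record counts, and the 10% threshold are all illustrative assumptions rather than anything from Turkey's actual health data:

```python
from collections import Counter

def representation_report(records, group_key, threshold=0.10):
    """Flag groups whose share of the training data falls below a
    minimum threshold -- a common first check for sampling bias."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: {"share": n / total,
                    "underrepresented": n / total < threshold}
            for group, n in counts.items()}

# Toy patient records: older patients are scarce in this sample.
records = ([{"age_band": "18-40"}] * 80
           + [{"age_band": "41-64"}] * 15
           + [{"age_band": "65+"}] * 5)
report = representation_report(records, "age_band")
```

Here the `65+` group holds only a 5% share and is flagged, signalling that a model trained on this data would likely perform worse for exactly those patients.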

Turkey’s national health database e-Nabiz provides a strategic advantage, yet raw information cannot generate value unless it is processed correctly and supported by clear standards, quality controls and reliable terminology.

He added that inconsistent hospital records, labelling errors and privacy vulnerabilities can mislead AI systems and pose legal challenges. Strict anonymisation and secure analysis environments are needed to prevent harmful breaches.

Medical AI works best as a second eye in fields such as radiology and pathology, where systems can reduce workloads by flagging suspicious areas instead of leaving clinicians to assess every scan alone.

Korucu said physicians must remain final decision makers because automation bias could push patients towards unnecessary risks.
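
The "second eye" workflow described above amounts to flagging and prioritising scans for human review rather than diagnosing them. A minimal sketch, where the `model_score` field and the 0.5 threshold are illustrative assumptions:

```python
def triage(scans, flag_threshold=0.5):
    """Queue scans the model finds suspicious for priority clinician
    review, highest score first. Nothing is auto-diagnosed: scans
    below the threshold still reach a clinician, just not urgently."""
    flagged = [s for s in scans if s["model_score"] >= flag_threshold]
    flagged.sort(key=lambda s: -s["model_score"])
    return [s["id"] for s in flagged]

scans = [{"id": "a", "model_score": 0.92},
         {"id": "b", "model_score": 0.12},
         {"id": "c", "model_score": 0.67}]
priority = triage(scans)  # ['a', 'c']
```

Keeping the output as a review queue rather than a verdict is one way to build the physician-as-final-decision-maker principle directly into the system design.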

He expects genomic data combined with AI to transform personalised medicine over the coming decade, allowing faster diagnoses and accurate medication choices for rare conditions.

Priority development areas for Turkey include triage tools, intensive care early warning systems and chronic disease management. He noted that the long-term model will be the AI-assisted physician rather than a fully automated clinician.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global microchip shortage pushes electronics prices higher

South African consumers may soon pay more for smartphones and laptops due to a global shortage of memory chips. The high demand is largely driven by AI data centres, which require powerful microchips to operate.

Tech experts report that major AI companies are acquiring large quantities of these chips for their own data centres, limiting supply for other industries. At the same time, importing chips from regions such as China has become more difficult because of trade tensions and tariffs.

Industry leaders, including Apple’s Tim Cook and Tesla’s Elon Musk, have expressed concern over the impact on production and business operations. The strain is being felt across the tech sector as companies compete for the limited supply of components.

With no immediate solution in sight, the increased costs are expected to be passed on to consumers. Analysts warn that the combination of high demand, supply constraints and global trade tensions will make technology and appliances more expensive.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU DSA fine against X heads to court in key test case

X Corp., owned by Elon Musk, has filed an appeal with the General Court of the European Union against a €120 million fine imposed by the European Commission for breaching the Digital Services Act. The penalty, issued in December, marks the first enforcement action under the 2022 law.

The Commission concluded that X violated transparency obligations and misled users through its verification design, arguing that paid blue checkmarks made it harder to assess account authenticity. Officials also cited concerns about advertising transparency and researchers’ access to platform data.

Henna Virkkunen, the EU’s executive vice-president for tech sovereignty, security, and democracy, said deceptive verification and opaque advertising had no place online. The Commission opened its probe in December 2023, examining risk management, moderation practices, and alleged dark patterns.

X Corp. argued that the decision followed an incomplete investigation and a flawed reading of the DSA, citing procedural errors and due-process concerns. It said the appeal could shape future enforcement standards and penalty calculations under the regulation.

The EU is also assessing whether X mitigated systemic risks, including deepfaked content and child sexual abuse material linked to its Grok chatbot. US critics describe DSA enforcement as a threat to free speech, while EU officials say it strengthens accountability across the digital single market.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Altman urges urgent AI regulation

OpenAI chief Sam Altman has called for urgent global regulation of AI, speaking at the AI Impact Summit in New Delhi. Addressing leaders and executives, he said the rapid pace of development demands coordinated international oversight.

Altman suggested creating a body similar to the International Atomic Energy Agency to oversee advanced AI systems, and warned that highly capable open-source biomodels could pose serious biosecurity risks if misused.

He argued that democratising AI is essential to prevent power from being concentrated in a single company or country, adding that safeguards are urgently required even as the technology continues to disrupt labour markets.

During the summit, Altman said ChatGPT has 100 million weekly users in India, more than a third of them students. OpenAI also announced plans with Tata Consultancy Services to build data centre infrastructure in India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Brand turns AI demon into marketing stunt

Beverage company Liquid Death triggered confusion during the Winter Olympics after airing an AI advert featuring a figure skater who transforms into a red-eyed demon. The commercial appeared on Peacock’s Olympics stream but was not posted online, leaving viewers questioning whether it was real.

The brand later confirmed the advert was intentional and designed to parody fears around AI. According to Liquid Death, the limited run and lack of online acknowledgement were meant to amplify the sense of unease during the Winter Olympics broadcast.

Marketing analysts said that brands are increasingly leaning into AI scepticism to build trust with wary consumers. Campaigns from Equinox and Almond Breeze have similarly contrasted human authenticity with AI-generated content.

Despite the strategy, the Winter Olympics stunt drew criticism on social media, with some users labelling the advert 'AI slop'. The reaction highlights both the risks and rewards for brands experimenting with AI-themed messaging.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK sets 48-hour deadline for removing intimate images

The UK government plans to require technology platforms to remove intimate images shared without consent within 48 hours, instead of allowing such content to remain online for days.

Through an amendment to the Crime and Policing Bill, firms that fail to comply could face fines amounting to ten percent of their global revenue or risk having their services blocked in the UK.

The move reflects ministers’ commitment to treating intimate image abuse with the same seriousness as child sexual abuse material and extremist content.

The action follows mounting concern after non-consensual sexual deepfakes produced by Grok circulated widely, prompting investigations by Ofcom and political pressure on platforms owned by Elon Musk.

The government now intends victims to report an image once instead of repeating the process across multiple services. Once flagged, the content should disappear across all platforms and be blocked automatically on future uploads through hash-matching or similar detection tools.
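
Hash-matching of the sort mentioned above works by fingerprinting a reported image once and then refusing any future upload that produces the same fingerprint. Production systems use perceptual hashes that survive resizing and re-encoding; the sketch below substitutes an exact SHA-256 digest simply to keep the idea self-contained, and the image bytes are placeholders:

```python
import hashlib

# Shared blocklist of fingerprints for reported images.
blocklist = set()

def fingerprint(image_bytes: bytes) -> str:
    # Real deployments use perceptual hashes robust to re-encoding;
    # an exact cryptographic digest stands in for one here.
    return hashlib.sha256(image_bytes).hexdigest()

def report_image(image_bytes: bytes) -> None:
    """A victim reports an image once; its fingerprint joins the blocklist."""
    blocklist.add(fingerprint(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Uploads matching a blocklisted fingerprint are rejected automatically."""
    return fingerprint(image_bytes) not in blocklist

report_image(b"offending-image-bytes")
```

Sharing such fingerprints (rather than the images themselves) between services is what would let a single report propagate removal across all platforms.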

Ministers also aim to address content hosted outside the reach of the Online Safety Act by issuing guidance requiring internet providers to block access to sites that refuse to comply.

Keir Starmer, Liz Kendall and Alex Davies-Jones emphasised that no woman should be forced to pursue platform after platform to secure removal and that the online environment must offer safety and respect.

The package of reforms forms part of a broader pledge to halve violence against women and girls during the next decade.

Alongside tackling intimate image abuse, the government is legislating against nudification tools and ensuring AI chatbots fall within regulatory scope, using this agenda to reshape online safety instead of relying on voluntary compliance from large technology firms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Summit in India hears call for safe AI

UN Secretary-General António Guterres has warned that AI must augment human potential rather than replace it, speaking at the India AI Impact Summit in New Delhi. Addressing leaders at Bharat Mandapam, he urged investment in workers so that technology strengthens, rather than displaces, human capacity.

He cautioned that AI could deepen inequality, amplify bias and fuel harm if left unchecked, calling for stronger safeguards to protect people from exploitation and insisting that no child should be exposed to unregulated AI systems.

Environmental concerns also featured prominently, with Guterres highlighting the rising energy and water demands of data centres. He urged a shift to clean power and warned against transferring environmental costs to vulnerable communities.

The UN chief proposed a $3 billion Global Fund on AI to build skills, data access and affordable computing worldwide, arguing that broader access is essential to prevent countries from being excluded from the AI age and to ensure AI supports sustainable development goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Greece positions itself as a global AI bridge

The Prime Minister of Greece, Kyriakos Mitsotakis, took part in the India AI Impact Summit in New Delhi as part of a two-day visit highlighting Athens’ ambition to deepen its presence in global technology governance.

The gathering focuses on creating a coherent international approach to AI under the theme ‘People-Planet-Progress’, with an emphasis on practical outcomes over abstract commitments.

Greece presents itself as a link between Europe and the Global South, seeking a larger role in debates over AI policy and geoeconomic strategy.

Mitsotakis is joined by Minister of Digital Governance Dimitris Papastergiou, underscoring Athens’ intention to strengthen partnerships that support technological development.

During the visit, Mitsotakis attended an official dinner hosted by Narendra Modi.

On Thursday, he will address the summit at Bharat Mandapam before holding a scheduled meeting with his Indian counterpart, reinforcing efforts to expand cooperation between Greece and India in emerging technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reliance and OpenAI bring AI search to JioHotstar

OpenAI has joined forces with Reliance Industries to introduce conversational search into JioHotstar.

The integration uses OpenAI’s API so viewers can look for films, series, and live sports through multilingual text or voice prompts, receiving recommendations shaped by their viewing patterns instead of basic keyword results.

The collaboration extends beyond the platform itself, with plans to surface JioHotstar suggestions directly inside ChatGPT.

The approach presents a two-way discovery layer that links entertainment browsing with conversational queries, pointing toward a new model for how audiences engage with streaming catalogues.
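
The contrast between basic keyword results and recommendations shaped by viewing patterns can be shown in a few lines: keyword search ignores the viewer, while the personalised variant re-ranks the same matches by watch history. The tiny catalogue, genre field and scoring rule below are invented purely for illustration and have no relation to JioHotstar's actual system:

```python
def keyword_results(catalogue, query):
    """Baseline: plain substring match on titles, viewer ignored."""
    q = query.lower()
    return [t["title"] for t in catalogue if q in t["title"].lower()]

def personalised_results(catalogue, query, watched_genres):
    """Same matches, re-ranked by overlap with the viewer's history."""
    q = query.lower()
    hits = [t for t in catalogue
            if q in t["title"].lower() or q in t["genre"].lower()]
    hits.sort(key=lambda t: -watched_genres.count(t["genre"]))
    return [t["title"] for t in hits]

catalogue = [
    {"title": "Cricket Live", "genre": "sport"},
    {"title": "Cricket Story", "genre": "drama"},
]
```

For a viewer who mostly watches drama, the same query "cricket" now surfaces the drama series first, which is the essence of recommendations shaped by viewing patterns rather than keywords alone.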

OpenAI is strengthening its footprint in India, where more than 100 million people now use ChatGPT weekly. The company intends to open offices in Mumbai and Bengaluru to support the expansion, adding to its site in New Delhi.

The partnership was announced at the India AI Impact Summit, where Sam Altman appeared alongside industry figures such as Dario Amodei and Sundar Pichai.

The move aligns with a broader ‘OpenAI for India’ strategy that includes work on data centres with the Tata Group and further collaborations with companies such as Pine Labs, Eternal, and MakeMyTrip.

Executives from both sides said conversational interfaces will reshape how people find and follow programming, helping users navigate entertainment in a more natural way instead of relying on conventional menus.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!