OpenAI upgrades ChatGPT with faster AI images

The US tech company OpenAI has rolled out a significant update to ChatGPT with the launch of GPT Images 1.5, strengthening its generative image capabilities.

The new model produces photorealistic images from text prompts at speeds up to four times faster than earlier versions, reflecting OpenAI’s push to make visual generation more practical for everyday use.

Users can upload existing photos and modify them through natural language instructions, allowing objects to be added, removed, combined or blended with minimal effort.

OpenAI highlights applications such as clothing and hairstyle try-ons, alongside stylistic filters designed to support creative experimentation while preserving realistic visual quality.

The update also introduces a redesigned ChatGPT interface, including a dedicated Images section available via the sidebar on both mobile apps and the web.

GPT Images 1.5 is now accessible to regular users, while Business and Enterprise subscribers are expected to receive enhanced access and additional features in the coming weeks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Customer trust at risk as retail cyberattacks grow

Retailers face escalating cyber threats as hackers increasingly target customer data, eroding trust and damaging long-term brand value.

Deloitte warns that data breaches and ransomware attacks are becoming more frequent and costly, with some retailers facing losses reaching hundreds of millions, alongside declining consumer confidence.

The expansion of AI-driven personalisation has intensified privacy concerns, as customers weigh convenience against data protection.

While many shoppers accept sharing personal information in exchange for value, confidence depends on clear safeguards, transparent data use and credible security practices across digital channels.

Deloitte argues that leading retailers integrate cybersecurity into their core business strategy, rather than treating it as a compliance obligation.

Priorities include protecting critical digital assets, modernising security operations and building cyber-aware cultures capable of responding to AI-enabled fraud, preserving customer trust and sustaining revenue growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK launches taskforce to boost women in tech

The UK government has formed a Women in Tech taskforce to help more women enter, remain in and lead the technology sector. Technology secretary Liz Kendall will guide the group alongside industry figures determined to narrow long-standing representation gaps highlighted by recent BCS data.

Members include Anne-Marie Imafidon, Allison Kirkby and Francesca Carlesi, who will advise ministers on boosting diversity and supporting economic growth. Leaders stress that better representation enables more inclusive decision-making and encourages technology built with wider perspectives in mind.

The taskforce plans to address barriers affecting women’s progression, ranging from career access to investment opportunities. Organisations such as techUK and the Royal Academy of Engineering argue that gender imbalance limits innovation, particularly as the UK pursues ambitious AI goals.

UK officials expect working groups to develop proposals over the coming months, focusing on practical steps that broaden the talent pool. Advocates say the initiative arrives at a crucial moment as emerging technologies reshape employment and demand more inclusive leadership.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Private surveillance raises concerns in New Orleans

New Orleans has become the first US city to use real-time facial recognition through a privately operated system. The technology flags wanted individuals as they pass cameras, with alerts sent directly to police despite ongoing disputes among city officials.

A local non-profit runs the network independently and sets its own guardrails for police cooperation. Advocates claim the arrangement limits bureaucracy, while critics argue it bypasses vital public oversight and privacy protections.

Debate over facial recognition has intensified nationwide as communities question accuracy, fairness and civil liberties. New Orleans now represents a major test case for how such tools may develop without clear government regulation.

Officials remain divided over the long-term consequences, while campaigners warn of creeping surveillance risks. Residents are likely to face years of uncertainty as policies evolve and private systems grow more influential.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven Christmas scams surge online

Cybersecurity researchers are urging greater caution as Christmas approaches, warning that seasonal scams are multiplying rapidly. Check Point has recorded over 33,500 festive phishing emails and more than 10,000 deceptive social ads within two weeks.

AI tools are helping criminals craft convincing messages that mirror trusted brands and local languages. Attackers are also deploying fake e-commerce sites with AI chatbots, as well as deepfake audio and scripted calls to strengthen vishing attempts.

Smishing alerts imitating delivery firms are becoming more widespread, with recent months showing a marked rise in fraudulent parcel scams. Victims are often tricked into sharing payment details through links that imitate genuine logistics updates.

Experts say fake shops and giveaway scams remain persistent risks, frequently launched from accounts created within the past three months. Users are being advised to ignore unsolicited links, verify retailers and treat unexpected offers with scepticism.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI content flood drives ‘slop’ to word of the year

Merriam-Webster has chosen ‘slop’ as its 2025 word of the year, reflecting the rise of low-quality digital content produced by AI. The term originally meant soft mud, but now describes absurd or fake online material.

Greg Barlow, Merriam-Webster’s president, said the word captures how AI-generated content has fascinated, annoyed and sometimes alarmed people. Tools like AI video generators can produce deepfakes and manipulated clips in seconds.

The spike in searches for ‘slop’ shows growing public awareness of poor-quality content and a desire for authenticity. People want real, genuine material rather than AI-driven junk content.

AI-generated slop includes everything from absurd videos to fake news and junky digital books. Merriam-Webster selects its word of the year by analysing search trends and cultural relevance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated podcasts flood platforms and disrupt the audio industry

Podcasts generated by AI are rapidly reshaping the audio industry, with automated shows flooding platforms such as Spotify, Apple Podcasts and YouTube.

Advances in voice cloning and speech synthesis have enabled the production of large volumes of content at minimal cost, allowing AI hosts to compete directly with human creators in an already crowded market.

Some established podcasters are experimenting cautiously, using cloned voices for translation, post-production edits or emergency replacements. Others have embraced full automation, launching synthetic personalities designed to deliver commentary, biographies and niche updates at speed.

Studios such as Los Angeles-based Inception Point AI have scaled the model, producing hundreds of thousands of episodes by targeting micro-audiences and trending searches instead of premium advertising slots.

The rapid expansion is fuelling concern across the industry, where trust and human connection remain central to listener loyalty.

Researchers and networks warn that large-scale automation risks devaluing premium content, while creators and audiences question how far AI voices can replace authenticity without undermining the medium itself.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Indonesia fines Platform X for pornographic content violations

Platform X has paid an administrative fine of nearly Rp80 million after failing to meet Indonesia’s content moderation requirements related to pornographic material, according to the country’s digital regulator.

The Ministry of Communication and Digital Affairs said the payment was made on 12 December 2025, after a third warning letter and further exchanges with the company. Officials confirmed that Platform X appointed a Singapore-based representative to complete the process.

The regulator welcomed the company’s compliance, framing the payment as a demonstration of responsibility by an electronic system operator under Indonesian law. Authorities said the move supports efforts to keep the national digital space safe, healthy, and productive.

All funds were processed through official channels and transferred directly to the state treasury managed by the Ministry of Finance, in line with existing regulations, the ministry said.

Officials said enforcement actions against domestic and global platforms, including those operating from regional hubs such as Singapore, remain a priority. The measures aim to protect children and vulnerable groups and encourage stronger content moderation and communication.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Universities back generative AI but guidance remains uneven

A majority of leading US research universities are encouraging the use of generative AI in teaching, according to a new study analysing institutional policies and guidance documents across higher education.

The research reviewed publicly available policies from 116 R1 universities and found that 63 percent explicitly support the use of generative AI, while 41 percent provide detailed classroom guidance. More than half of the institutions also address ethical considerations linked to AI adoption.

Most guidance focuses on writing-related activities, with far fewer references to coding or STEM applications. The study notes that while many universities promote experimentation, expectations placed on faculty can be demanding, often implying significant changes to teaching practices.

US researchers also found wide variation in how universities approach oversight. Some provide sample syllabus language and assignment design advice, while others discourage the use of AI-detection tools, citing concerns around reliability and academic trust.

The authors caution that policy statements may not reflect real classroom behaviour and say further research is needed to understand how generative AI is actually being used by educators and students in practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Conduit revolutionises neuro-language research with 10,000-hour dataset

A San Francisco start-up named Conduit has spent six months building what it claims is the largest neural language dataset ever assembled, capturing around 10,000 hours of non-invasive brain recordings from thousands of participants.

The project aims to train thought-to-text AI systems that interpret semantic intent from brain activity moments before speech or typing occurs.

Participants take part in extended conversational sessions instead of rigid laboratory tasks, interacting freely with large language models through speech or simplified keyboards.

Engineers found that natural dialogue produced higher quality data, allowing tighter alignment between neural signals, audio and text while increasing overall language output per session.

Conduit developed its own sensing hardware after finding no commercial system capable of supporting large-scale multimodal recording.

Custom headsets combine multiple neural sensing techniques within dense training rigs, while future inference devices will be simplified once model behaviour becomes clearer.

Power systems and data pipelines were repeatedly redesigned to balance signal clarity with scalability, leading to improved generalisation across users and environments.

As data volume increased, operational costs fell through automation and real-time quality control, allowing continuous collection across long daily schedules.

With data gathering largely complete, the focus has shifted toward model training, raising new questions about the future of neural interfaces, AI-mediated communication and cognitive privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!