OpenAI adds pinned chat feature to ChatGPT apps

The US tech company, OpenAI, has begun rolling out a pinned chats feature in ChatGPT across web, Android and iOS, allowing users to keep selected conversations fixed at the top of their chat history for faster access.

The function mirrors familiar behaviour from messaging platforms such as WhatsApp and Telegram, removing the need to scroll repeatedly through past chats.

Users can pin a conversation by selecting the three-dot menu on the web or by long-pressing on mobile devices, ensuring that essential discussions remain visible regardless of how many new chats are created.

The update follows earlier interface changes aimed at helping users explore conversation paths without losing the original discussion thread.

Alongside pinned chats, OpenAI is moving ChatGPT toward a more app-driven experience through an internal directory that allows users to connect third-party services directly within conversations.

The company says these integrations support tasks such as bookings, file handling and document creation without switching applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands AI training for newsrooms worldwide

The US tech company, OpenAI, has launched the OpenAI Academy for News Organisations, a new learning hub designed to support journalists, editors and publishers adopting AI in their work.

The initiative builds on existing partnerships with the American Journalism Project and The Lenfest Institute for Journalism, reflecting a broader effort to strengthen journalism as a pillar of democratic life.

The Academy goes live with practical training, newsroom-focused playbooks and real-world examples aimed at helping news teams save time and focus on high-impact reporting.

Areas of focus include investigative research, multilingual reporting, data analysis, production efficiency and operational workflows that sustain news organisations over time.

Responsible use sits at the centre of the programme. Guidance on governance, internal policies and ethical deployment is intended to address concerns around trust, accuracy and newsroom culture, recognising that AI adoption raises structural questions rather than purely technical ones.

OpenAI plans to expand the Academy in the year ahead with additional courses, case studies and live programming.

Through collaboration with publishers, industry bodies and journalism networks worldwide, the Academy is positioned as a shared learning space that supports editorial independence while adapting journalism to an AI-shaped media environment.

Denmark pushes digital identity beyond authentication

Digital identity has long focused on proving that the same person returns each time they log in. The function still matters, yet online representation increasingly happens through faces, voices and mannerisms embedded in media rather than credentials alone.

As synthetic media becomes easier to generate and remix, identity shifts from an access problem to a problem of media authenticity.

Denmark’s ‘Own Your Face’ proposal reflects this shift by treating personal likeness as something that should be controllable in the same way accounts are controlled.

Digital systems already verify who is requesting access, yet lack a trusted middle layer to manage what is being shown when media claims to represent a real person. The proxy model illustrates how an intermediary layer can bring structure, consistency and trust to otherwise unmanageable flows.

Efforts around content provenance point toward a practical path forward. By attaching machine-verifiable history to media at creation and preserving it as content moves, identity extends beyond login to representation.

Broad adoption would not eliminate deception, yet it would raise the baseline of trust by replacing visual guesswork with evidence, helping digital identity evolve for an era shaped by synthetic media.

AI-generated podcasts flood platforms and disrupt the audio industry

Podcasts generated by AI are rapidly reshaping the audio industry, with automated shows flooding platforms such as Spotify, Apple Podcasts and YouTube.

Advances in voice cloning and speech synthesis have enabled the production of large volumes of content at minimal cost, allowing AI hosts to compete directly with human creators in an already crowded market.

Some established podcasters are experimenting cautiously, using cloned voices for translation, post-production edits or emergency replacements. Others have embraced full automation, launching synthetic personalities designed to deliver commentary, biographies and niche updates at speed.

Studios such as Los Angeles-based Inception Point AI have taken the model to scale, producing hundreds of thousands of episodes by targeting micro-audiences and trending searches instead of premium advertising slots.

The rapid expansion is fuelling concern across the industry, where trust and human connection remain central to listener loyalty.

Researchers and networks warn that large-scale automation risks devaluing premium content, while creators and audiences question how far AI voices can replace authenticity without undermining the medium itself.

AI tools enable large-scale monetisation of political misinformation in the UK

YouTube channels spreading fake and inflammatory anti-Labour videos have attracted more than a billion views this year, as opportunistic creators use AI-generated content to monetise political division in the UK.

Research by non-profit group Reset Tech identified more than 150 channels promoting hostile narratives about the Labour Party and Prime Minister Keir Starmer. The study found the channels published over 56,000 videos, gaining 5.3 million subscribers and nearly 1.2 billion views in 2025.

Many videos used alarmist language, AI-generated scripts and British-accented narration to boost engagement. Starmer was referenced more than 15,000 times in titles or descriptions, often alongside fabricated claims of arrests, political collapse or public humiliation.

Reset Tech said the activity reflects a wider global trend driven by cheap AI tools and engagement-based incentives. Similar networks were found across Europe, although UK-focused channels were mostly linked to creators seeking advertising revenue rather than foreign actors.

YouTube removed all identified channels after being contacted, citing spam and deceptive practices as violations of its policies. Labour officials warned that synthetic misinformation poses a serious threat to democratic trust, urging platforms to act more quickly and strengthen their moderation systems.

Reddit challenges Australia’s teen social media ban

The US social media company, Reddit, has launched legal action in Australia as the country enforces the world’s first mandatory minimum age for social media access.

Reddit argues that banning users under 16 prevents younger Australians from taking part in political debate, instead of empowering them to learn how to navigate public discussion.

Lawyers representing the company argue that the rule undermines the implied freedom of political communication and could restrict future voters from understanding the issues that will shape national elections.

Australia’s ban took effect on December 10 and requires major platforms to block underage users or face penalties that can reach nearly 50 million Australian dollars.

Companies are relying on age inference and age estimation technologies to meet the obligation, although many have warned that the policy raises privacy concerns in addition to limiting online expression.

The government maintains that the law is designed to reduce harm for younger users and has confirmed that the list of prohibited platforms may expand as new safety issues emerge.

Reddit’s filing names the Commonwealth of Australia and Communications Minister Anika Wells. The minister’s office says the government intends to defend the law and will prioritise the protection of young Australians, rather than allowing open access to high-risk platforms.

The platform’s challenge follows another case brought by an internet rights group that claims the legislation represents an unfair restriction on free speech.

A separate list identifies services that remain open to younger users, such as Roblox, Pinterest and YouTube Kids, while platforms including Instagram, TikTok, Snapchat, Reddit and X are blocked for those under 16.

The case is expected to shape future digital access rights in Australia, as online communities become increasingly central to political education and civic engagement among emerging voters.

Adobe brings its leading creative tools straight into ChatGPT

Yesterday, Adobe opened a new chapter for digital creativity by introducing Photoshop, Adobe Express and Adobe Acrobat inside ChatGPT.

The integration gives 800 million weekly users direct access to trusted creative and productivity tools through a conversational interface. Adobe aims to make creative work easier for newcomers by linking its technology to simple written instructions.

Photoshop inside ChatGPT offers selective edits, tone adjustments and creative effects, while Adobe Express brings quick design templates and animation features to people who want polished content without switching between applications.

Acrobat adds powerful document controls, allowing users to organise, edit or redact PDFs inside the chat. Each action blends conversation with Adobe’s familiar toolsets, giving users either simple text-driven commands or fine control through intuitive sliders.

The launch reflects Adobe’s broader investment in agentic AI and its Model Context Protocol. Earlier releases such as Acrobat Studio and AI Assistants for Photoshop and Adobe Express signalled Adobe’s ambition to expand conversational creative experiences.

Adobe also plans to extend an upcoming Firefly AI Assistant across multiple apps to support faster movement from an idea to a finished design.

All three apps are now available to ChatGPT users on desktop, web and iOS, with Android support expanding soon. Adobe positions the integration as an entry point for new audiences who may later move into the full desktop versions for deeper control.

The company expects the partnership to widen access to creative expression by letting anyone edit images, produce designs or transform documents simply by describing what they want to achieve.

Australian families receive eSafety support as the social media age limit takes effect

Australia has this week introduced a minimum age requirement of 16 for social media accounts, marking a significant shift in its online safety framework.

The eSafety Commissioner has begun monitoring compliance, offering a protective buffer for young people as they develop digital skills and resilience. Platforms now face stricter oversight, with potential penalties for systemic breaches, and age assurance requirements for both new and current users.

Authorities stress that the new age rule forms part of a broader effort aimed at promoting safer online environments, rather than relying on isolated interventions. Australia’s online safety programmes continue to combine regulation, education and industry engagement.

Families and educators are encouraged to utilise the resources on the eSafety website, which now features information hubs that explain the changes, how age assurance works, and what young people can expect during the transition.

Regional and rural communities in Australia are receiving targeted support, acknowledging that the change may affect them more sharply due to limited local services and higher reliance on online platforms.

Tailored guidance, conversation prompts, and step-by-step materials have been produced in partnership with national mental health organisations.

Young people are reminded that they retain access to group messaging tools, gaming services and video conferencing apps while they await eligibility for full social media accounts.

eSafety officials underline that the new limit introduces a delay rather than a ban. The aim is to reduce exposure to persuasive design and potential harm while encouraging stronger digital literacy, emotional resilience and critical thinking.

Ongoing webinars and on-demand sessions provide additional support as the enforcement phase progresses.

Australia enforces under-16 social media ban as new rules take effect

Australia has finally introduced the world’s first nationwide prohibition on social media use for under-16s, forcing platforms to delete millions of accounts and prevent new registrations.

Instagram, TikTok, Facebook, YouTube, Snapchat, Reddit, Twitch, Kick and Threads are removing accounts held by younger users. At the same time, Bluesky has agreed to apply the same standard despite not being compelled to do so. The only major platform yet to confirm compliance is X.

The measure follows weeks of age-assurance checks, which have not been flawless, with cases of younger teenagers passing facial-verification tests designed to keep them offline.

Families are facing sharply different realities. Some teenagers feel cut off from friends who managed to bypass age checks, while others suddenly gain a structure that helps reduce unhealthy screen habits.

A small but vocal group of parents admit they are teaching their children how to use VPNs and alternative methods instead of accepting the ban, arguing that teenagers risk social isolation when friends remain active.

Supporters of the legislation counter that Australia imposes clear age limits in other areas of public life for reasons of well-being and community standards, and the same logic should shape online environments.

Regulators are preparing to monitor the transition closely.

The eSafety Commissioner will demand detailed reports from every platform covered by the law, including the volume of accounts removed, evidence of efforts to stop circumvention and assessments of whether reporting and appeals systems are functioning as intended.

Companies that fail to take reasonable steps may face significant fines. A government-backed academic advisory group will study impacts on behaviour, well-being, learning and unintended shifts towards more dangerous corners of the internet.

Global attention is growing as several countries weigh similar approaches. Denmark, Norway and Malaysia have already indicated they may replicate Australia’s framework, and the EU has endorsed the principle in a recent resolution.

Interest from abroad signals a broader debate about how societies should balance safety and autonomy for young people in digital spaces, instead of relying solely on platforms to set their own rules.

Teen chatbot use surges across the US

Nearly a third of US teenagers engage with AI chatbots each day, according to new Pew data. Researchers say nearly 70% have tried a chatbot, reflecting growing dependence on digital tools during schoolwork and leisure time. Concerns remain over exposure to mature content and possible mental health harms.

Pew surveyed almost 1,500 US teens aged 13 to 17, finding broadly similar usage patterns across gender and income. Older teens reported higher engagement, while Black and Hispanic teens showed slightly greater adoption than White peers.

Experts warn that frequent chatbot use may hinder development or encourage cheating in academic settings. Safety groups have urged parents to limit access to companion-like AI tools, citing risks posed by romantic or intimate interactions with minors.

Companies are now rolling out safeguards in response to public scrutiny and legal pressure. OpenAI and Character.AI have tightened controls, while Meta says it has adjusted policies following reports of inappropriate exchanges.
