OpenAI faced questions after ChatGPT surfaced app prompts for paid users

ChatGPT users complained after the system surfaced an unexpected Peloton suggestion during an unrelated conversation. The prompt appeared for a Pro Plan subscriber and triggered questions about ad-like behaviour. Many asked why paid chats were showing promotional-style links.

OpenAI said the prompt was part of early app-discovery tests, not advertising. Staff acknowledged that the suggestion was irrelevant to the query. They said the system is still being adjusted to avoid confusing or misplaced prompts.

Users reported other recommendations, including music apps that contradicted their stated preferences. The lack of an option to turn off these suggestions fuelled irritation. Paid subscribers warned that such prompts undermine the service’s reliability.

OpenAI described the feature as a step toward integrating apps directly into conversations. The aim is to surface tools when genuinely helpful. Early trials, however, have demonstrated gaps between intended relevance and actual outcomes.

The tests remain limited to selected regions and are not active in parts of Europe. Critics argue intrusive prompts risk pushing users to competitors. OpenAI said refinements will continue to ensure suggestions feel helpful, not promotional.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Irish regulator opens investigations into TikTok and LinkedIn

Regulators in Ireland have opened investigations into TikTok and LinkedIn under the EU Digital Services Act.

Coimisiún na Meán’s Investigations Team believes there may be shortcomings in how both platforms handle reports of suspected illegal material. Concerns emerged during an exhaustive review of Article 16 compliance that began last year and focused on the availability of reporting tools.

The review highlighted the potential for interface designs that could confuse users, particularly when choosing between reporting illegal content and content that merely violates platform rules.

The investigation will examine whether reporting tools are easy to access, user-friendly and capable of supporting anonymous reporting of suspected child sexual abuse material, as required under Article 16(2)(c).

It will also assess whether platform design may discourage users from reporting material as illegal under Article 25.

Coimisiún na Meán stated that several other providers made changes to their reporting systems following regulatory engagement. Those changes are being reviewed for effectiveness.

The regulator emphasised that platforms must avoid practices that could mislead users and must provide reliable reporting mechanisms instead of diverting people toward less protective options.

These investigations will proceed under the Broadcasting Act of Ireland. If either platform is found to be in breach of the DSA, the regulator can impose administrative penalties that may reach six percent of global turnover.

Coimisiún na Meán noted that cooperation remains essential and that further action may be necessary if additional concerns about DSA compliance arise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands investment in mental health safety research

Yesterday, OpenAI launched a new grant programme to support external research on the connection between AI and mental health.

The initiative aims to expand independent inquiry into how people express distress, how AI interprets complex emotional signals and how different cultures shape the language used to discuss sensitive experiences.

OpenAI also hopes that broader participation will strengthen collective understanding, rather than keeping progress confined to internal studies.

The programme encourages interdisciplinary work that brings together technical specialists, mental health professionals and people with lived experience. OpenAI is seeking proposals that can offer clear outputs, such as datasets, evaluation methods, or practical insights, that improve safety and guidance.

Researchers may focus on patterns of distress in specific communities, the influence of slang and vernacular, or the challenges that appear when mental health symptoms manifest in ways that current systems fail to recognise.

The grants also aim to expand knowledge of how providers use AI within care settings, including where tools are practical, where limitations appear and where risks emerge for users.

Additional areas of interest include how young people respond to different tones or styles, how grief is expressed in language and how visual cues linked to body image concerns can be interpreted responsibly.

OpenAI emphasises that better evaluation frameworks, ethical datasets and annotated examples can support safer development across the field.

Applications are open until 19 December, with decisions expected by mid-January. The programme forms part of OpenAI’s broader effort to invest in well-being and safety research, offering financial support to independent teams working across diverse cultural and linguistic contexts.

The company argues that expanding evidence and perspectives will contribute to a more secure and supportive environment for future AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

eSafety highlights risks in connected vehicle technology

Australia’s eSafety regulator is drawing attention to concerns about how connected car features can be misused within domestic and family violence situations.

Reports from frontline workers indicate that remote access tools, trip records and location tracking can be exploited instead of serving their intended purpose as safety and convenience features.

The Australian regulator stresses that increased connectivity across vehicles and devices is creating new challenges for those supporting victim-survivors.

Smart cars often store detailed travel information and allow remote commands through apps and online accounts. These functions can be accessed by someone with shared credentials or linked accounts, which can expose sensitive information.

eSafety notes that misuse of connected vehicles forms part of a broader pattern of technology-facilitated coercive control, where multiple smart devices such as watches, tablets, cameras and televisions can play a role.

The regulator has produced updated guidance to help people understand potential risks and take practical steps with the support of specialist services.

Officials highlight the importance of stronger safeguards from industry, including simpler methods for revoking access, clearer account transfer processes during separation and more transparent logs showing when remote commands are used.

Retailers and dealerships are encouraged to ensure devices and accounts are reset when ownership changes. eSafety argues that design improvements introduced early can reduce the likelihood of harm, rather than requiring complex responses later.

Agencies and community services continue to assist those affected by domestic and family violence, offering advice on account security, safe device use and available support services.

The guidance aims to help people take protective measures in a controlled and safe way, while emphasising the importance of accessing professional assistance.

eSafety encourages ongoing cooperation between industry, government and frontline workers to manage risks linked to emerging automotive and digital technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini Projects feature appears in Google app teardown

Google is preparing a Gemini feature called Projects, offering a small workspace for grouping chats by topic. Early clues suggest it works like a sandbox that keeps related conversations structured. The feature is still hidden in the app’s code and is not yet active for any users.

An Android Authority teardown of the Google app revealed the interface and onboarding prompts. These mention isolating chats, choosing a focus area and adding files for context. The feature remains dormant until Google enables it on the server.

When opening a project, users can name it and then view a simple dashboard. This includes options to set project goals that guide Gemini’s behaviour. The aim is to keep longer tasks organised in one place.

The teardown shows a limit of ten file uploads per project, with no clarity on whether paid tiers will receive more. This may affect complex tasks that require a larger context. Users will also be able to pin projects for quicker access.

Because all information comes from hidden code, Google has not confirmed any details. The design or limits may change before launch. Until the Gemini feature is announced, the findings should be treated as provisional.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Greek businesses urged to accelerate AI adoption

AI is becoming a central factor in business development, according to Google Cloud executives visiting Athens for Google Cloud Day.

Marianne Janik and Boris Georgiev explained that AI is entering daily life more quickly than many expected, creating an urgent need for companies to strengthen their capabilities. Their visit coincided with the international launch of Gemini 3, the latest version of the company’s AI model.

They argued that enterprises in Greece should accelerate their adoption of AI tools to remain competitive. A slow transition could limit their position in both domestic and international markets.

They also underlined the importance of employees developing new skills that support digital transformation, noting that risk-taking has become a necessary element of strategic progress.

The financial sector is advancing at a faster pace, aided by its long-standing familiarity with digital and analytical tools.

Banks are investing heavily in compliance functions and customer onboarding. Retail is also undergoing a similar transformation, driven by consumer expectations and operational pressures.

Google Cloud Day in Athens brought together a large number of participants, highlighting the sector’s growing interest in practical AI applications and the role of advanced models in shaping business processes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia stands firm on under-16 social media ban

Australia’s government defended its under-16 social media ban ahead of its introduction on 10 December. Minister Anika Wells said she would not be pressured by major platforms opposing the plan.

Tech companies argued that bans may prove ineffective, yet Wells maintained firms had years to address known harms. She insisted parents required stronger safeguards after repeated failures by global platforms.

Critics raised concerns about enforcement and the exclusion of online gaming despite widespread worries about Roblox. Two teenagers also launched a High Court challenge, claiming the policy violated children’s rights.

Wells accepted rollout difficulties but said wider social gains in Australia justified firm action. She added that policymakers must intervene when unsafe operating models place young people at risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Valentino faces backlash over AI-generated handbag campaign

Italian fashion house Valentino has come under intense criticism after posting AI-generated advertisements for its DeVain handbag, with social media users calling the imagery ‘disturbing’ and ‘sloppy’. A BBC report describes how the brand’s digital-creative collaboration produced a surreal promotional video that quickly drew hundreds of negative comments on Instagram.

The campaign features morphing models, swirling bodies and shifting Valentino logos, all rendered by generative AI. Although the post clearly labels the material as AI-produced, many viewers noted that the brand’s reliance on the technology made the luxury product appear less appealing.

Commenters accused the company of prioritising efficiency over artistry and argued that advertising should showcase human creativity rather than automated visuals. Industry analysts have noted that the backlash reflects broader tensions within the creative economy.

Getty Images executive Dr Rebecca Swift said audiences often view AI-generated material as ‘less valuable’, mainly when used by luxury labels. Others warned that many consumers interpret the use of generative AI as a sign of cost-cutting rather than innovation.

Brands including H&M and Guess have faced similar criticism for recent AI-based promotional work, fuelling broader concerns about the displacement of models, photographers and stylists.

While AI is increasingly adopted across fashion to streamline design and marketing, experts say brands risk undermining the emotional connection that drives luxury purchasing. Analysts argue that without a compelling artistic vision at its core, AI-generated campaigns may make high-end labels feel less human at a time when customers are seeking more authenticity, not less.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Jorja Smith’s label challenges ‘AI clone’ vocals on viral track

A dispute has emerged after FAMM, the record label representing Jorja Smith, alleged that the viral dance track I Run by Haven used an unauthorised AI clone of the singer’s voice.

A BBC report describes how the song gained traction on TikTok before being removed from streaming platforms following copyright complaints.

The label said it wanted a share of royalties, arguing that both versions of the track, the original release and a re-recording with new vocals, infringed Smith’s rights and exploited the creative labour behind her catalogue.

FAMM said the issue was bigger than one artist, warning that fans had been misled and that unlabelled AI music risked becoming ‘the new normal’. Smith later shared the label’s statement, which characterised artists as ‘collateral damage’ in the race towards AI-driven production.

Producers behind I Run confirmed that AI was used to transform their own voices into a more soulful, feminine tone. Harrison Walker said he used Suno, generative software sometimes called the ‘ChatGPT for music’, to reshape his vocals, while fellow producer Waypoint admitted employing AI to achieve the final sound.

They maintain that the songwriting and production were fully human and shared project files to support their claim.

The controversy highlights broader tensions surrounding AI in music. Suno has acknowledged training its system on copyrighted material under the US ‘fair use’ doctrine, while record labels continue to challenge such practices.

Even as the AI version of I Run was barred from chart eligibility, its revised version reached the UK Top 40. At the same time, AI-generated acts such as Breaking Rust and hybrid AI-human projects like Velvet Sundown have demonstrated the growing commercial appeal of synthetic vocals.

Musicians and industry figures are increasingly urging stronger safeguards. FAMM said AI-assisted tracks should be clearly labelled, and added it would distribute any royalties to Smith’s co-writers in proportion to how much of her catalogue they contributed to, arguing that if AI relied on her work, so should any compensation.

The debate continues as artists push back more publicly, including through symbolic protests such as last week’s vinyl release of silent tracks, which highlighted fears over weakened copyright protections.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy secures new EU support for growth and reform

The European Commission has endorsed Italy’s latest request for funding under the Recovery and Resilience Facility, marking an important step in the country’s economic modernisation.

The approval covers 12.8 billion euros, combining grants and loans, and supports efforts to strengthen competitiveness and long-term growth across key sectors of national life.

Italy completed 32 milestones and targets connected to the eighth instalment, enabling progress in public administration, procurement, employment, education, research, tourism, renewable energy and the circular economy.

Thousands of schools have gained new resources to improve multilingual learning and build stronger skills in science, technology, engineering, arts and mathematics.

Many primary and secondary schools have also secured modern digital tools to enhance teaching quality instead of relying on outdated systems.

Health research forms another major part of the package. Projects focused on rare diseases, cancer and other high-impact conditions have gained fresh funding to support scientific work and improve treatment pathways.

These measures contribute to a broader transformation programme financed through 194.4 billion euros, representing one of the largest recovery plans in the EU.

A four-week review by the Economic and Financial Committee will follow before the payment can be released. Once completed, Italy’s total receipts will exceed 153 billion euros, covering more than 70 percent of its full Recovery and Resilience Facility allocation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!