Brainstorming with AI opens new doors for innovation

AI is increasingly embraced as a reliable creative partner, offering speed and breadth in idea generation. In Fast Company, Kevin Li describes how AI complements human brainstorming under time pressure, drawing on his work at Amazon and the startup Stealth.

Li argues AI is no longer just a tool but a true collaborator in creative workflows. Generative models can analyse vast data sets and rapidly suggest alternative concepts, helping teams reimagine product features, marketing strategies, and campaign angles. The shift aligns with broader industry trends.

A McKinsey report from earlier this year highlighted that, while only 1% of companies consider themselves mature in AI use, most are investing heavily in this area. Creative use cases are expected to generate massive value by 2025.

Li notes that the most effective use of AI occurs when it’s treated as a sounding board. He recounts how the quality of ideas improved significantly when AI offered raw directions that humans later refined. The hybrid model is gaining traction across startups and established firms alike.

Still, original thinking remains a hurdle. A recent study reported by PsyPost found that human pairs often outperform AI tools at generating novel ideas during collaborative sessions. While AI offers scale, human teams reported greater creative confidence and more original output.

The findings suggest AI may work best at the outset of ideation, followed by human editing and development. Experts recommend setting clear roles for AI in the creative cycle. For instance, tools like ChatGPT or Midjourney might handle initial brainstorming, while humans oversee narrative coherence, tone, and ethics.

The approach is especially relevant in advertising, product design, and marketing, where nuance is still essential. Creatives across X are actively sharing tips and results. One agency leader posted about reducing production costs by 30% using AI tools for routine content work.

The strategy allowed more time and budget to focus on storytelling and strategy. Others note that using AI to write draft copy or generate design options is becoming common. Yet concerns remain over ethical boundaries.

The Orchidea Innovation Blog cautioned in 2023 that AI often recycles learned material, which can limit fresh perspectives. Recent conversations on X raise alarms about over-reliance. Some fear AI-generated content will eradicate originality across sectors, particularly marketing, media, and publishing.

To counter such risks, structured prompting and human-in-the-loop models are gaining popularity. ClickUp’s AI brainstorming guide recommends feeding diverse inputs to avoid homogeneous outputs. Précis AI referenced Wharton research to show that vague prompts often produce repetitive results.

The solution: intentional, varied starting points with iterative feedback loops. Emerging platforms are tackling this in real time. Ideamap.ai, for example, enables collaborative sessions where teams interact with AI visually and textually.

Jabra’s latest insights describe AI as a ‘thought partner’ rather than a replacement, enhancing team reasoning and ideation dynamics without eliminating human roles. Looking ahead, the business case for AI creativity is strong.

McKinsey projects hundreds of billions in value from AI-enhanced marketing, especially in retail and software. Influencers like Greg Isenberg predict $100 million niches built on AI-led product design. Frank$Shy’s analysis points to a $30 billion creative AI market by 2025, driven by enterprise tools.

Even in e-commerce, AI is transforming operations. Analytics India Magazine reports that brands are building eight-figure revenues by automating design and content workflows while keeping human editors in charge. The trend is not about replacement but refinement and scale.

Li’s central message remains relevant: when used ethically, AI augments rather than replaces creativity. Responsible integration supports diverse voices and helps teams navigate the fast-evolving innovation landscape. The future of ideation lies in balance, not substitution.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google backs EU AI Code but warns against slowing innovation

Google has confirmed it will sign the European Union’s General Purpose AI Code of Practice, joining other companies, including major US model developers.

The tech giant hopes the Code will support access to safe and advanced AI tools across Europe, where rapid adoption could add up to €1.4 trillion annually to the continent’s economy by 2034.

Kent Walker, Google and Alphabet’s President of Global Affairs, said the final Code better aligns with Europe’s economic ambitions than earlier drafts, noting that Google had submitted feedback during its development.

However, he warned that parts of the Code and the broader AI Act might hinder innovation by introducing rules that stray from EU copyright law, slow product approvals or risk revealing trade secrets.

Walker explained that such requirements could restrict Europe’s ability to compete globally in AI. He highlighted the need to balance regulation with the flexibility required to keep pace with technological advances.

Google stated it will work closely with the EU’s new AI Office to help shape a proportionate, future-facing approach.


Free VPN use surges in UK after online safety law

The UK’s new Online Safety Act has increased VPN use, as websites introduce stricter age restrictions to comply with the law. Popular platforms such as Reddit and Pornhub are either blocking minors or adding age verification, pushing many young users to turn to free VPNs to bypass the rules.

In the days following the Act’s enforcement on 25 July, five of the ten most-downloaded free apps in the UK were VPNs.

However, cybersecurity experts warn that unvetted free VPNs can pose serious risks, with some selling user data or containing malware.

Using a VPN means routing all your internet traffic through an external server, effectively handing its operator access to your browsing data.

While reputable providers like Proton VPN offer safe free tiers supported by paid plans, lesser-known services often lack transparency and may exploit users for profit.

Consumers are urged to check for clear privacy policies, audited security practices and credible business information before using a VPN. Trusted options for safer browsing include Proton VPN, TunnelBear, Windscribe, and hide.me.


EU AI Act begins as tech firms push back

Europe’s AI crackdown is now officially under way, as the EU enforces the first rules targeting developers of generative AI models like ChatGPT.

Under the AI Act, firms must now assess systemic risks, conduct adversarial testing, ensure cybersecurity, report serious incidents, and even disclose energy usage. The goal is to prevent harms related to bias, misinformation, manipulation, and lack of transparency in AI systems.

Although the legislation was passed last year, the EU only released developer guidance on 10 July, leaving tech giants with little time to adapt.

Meta, which developed the Llama AI model, has refused to sign the voluntary code of practice, arguing that it introduces legal uncertainty. Other developers have expressed concerns over how vague and generic the guidance remains, especially around copyright and practical compliance.

The EU also distinguishes itself from the US, where a re-elected Trump administration has launched a far looser AI Action Plan. While Washington supports minimal restrictions to encourage innovation, Brussels is focused on safety and transparency.

Trade tensions may grow, but experts warn that developers should not rely on future political deals instead of taking immediate steps toward compliance.

The AI Act’s rollout will continue into 2026, with the next phase focusing on high-risk AI systems in healthcare, law enforcement, and critical infrastructure.

Meanwhile, questions remain over whether AI-generated content qualifies for copyright protection and how companies should handle AI in marketing or supply chains. For now, Europe’s push for safer AI is accelerating—whether Big Tech likes it or not.


Australia reverses its stance and restricts YouTube for children under 16

Australia has announced that YouTube will be banned for children under 16 starting in December, reversing its earlier exemption from strict new social media age rules. The decision follows growing concerns about online harm to young users.

Platforms like Facebook, Instagram, Snapchat, TikTok, and X are already subject to the upcoming restrictions, and YouTube will now join the list of ‘age-restricted social media platforms’.

From 10 December, all such platforms will be required to ensure users are aged 16 or older or face fines of up to AU$50 million (£26 million) for not taking adequate steps to verify age. Although those steps remain undefined, users will not need to upload official documents like passports or licences.

The government has said platforms must find alternatives instead of relying on intrusive ID checks.

Communications Minister Anika Wells defended the policy, stating that four in ten Australian children reported recent harm on YouTube. She insisted the government would not back down under legal pressure from Alphabet Inc., YouTube’s US-based parent company.

Children can still view videos, but won’t be allowed to hold personal YouTube accounts.

YouTube criticised the move, claiming the platform is not social media but a video library often accessed through TVs. Prime Minister Anthony Albanese said Australia would campaign at a UN forum in September to promote global backing for social media age restrictions.

Exemptions will apply to apps used mainly for education, health, messaging, or gaming, which are considered less harmful.


Google adds narrated slide videos to NotebookLM

Google has added a new dimension to NotebookLM by introducing Video Overviews, a feature that transforms your content into narrated slide presentations.

Originally revealed at Google I/O, the tool builds on the popularity of Audio Overviews, which generated AI-hosted podcast-style summaries. Instead of relying solely on audio, users can now enjoy visual storytelling powered by the same AI.

Video Overviews automatically pulls elements like images, diagrams, quotes and statistics from documents to create slide-based summaries.

The tool supports professionals and students by simplifying complex reports or academic papers into engaging visual formats. Users can also customise the video output by defining learning goals, selecting key topics, or tailoring it to a specific audience.

For now, the rollout is limited to English-speaking users on desktops, but Google plans to expand the formats. Narrated slides are the first to launch, combining clear visuals with spoken summaries, helping visual learners engage with content more effectively instead of reading lengthy text.

Alongside the new feature, Google has redesigned the NotebookLM Studio interface. Users can now generate and store multiple outputs—Audio Overviews, Reports, Study Guides, or Mind Maps—all within a single notebook.

The update also allows users to interact with different tools simultaneously, such as listening to an AI podcast while reviewing a study guide, offering a more integrated and versatile learning experience.


Tea dating app suspends messaging after major data breach

The women’s dating safety app Tea has suspended its messaging feature following a cyberattack that exposed thousands of private messages, posts and images.

The app, which helps women run background checks on men, confirmed that direct messages were accessed during the initial breach disclosed in late July.

Tea has 1.6 million users, primarily in the US. Affected users will be contacted directly and offered free identity protection services, including credit monitoring and fraud alerts.

The company said it is working to strengthen its security and will provide updates as the investigation continues. Some of the leaked conversations reportedly contain sensitive discussions about infidelity and abortion.

Experts have warned that the leak of both images and messages raises the risk of emotional harm, blackmail or identity theft. Cybersecurity specialists recommend that users accept the free protection services as soon as possible.

The breach affected those who joined the app before February 2024, including users who submitted ID photos that Tea had promised would be deleted after verification.

Tea is known for allowing women to check if a potential partner is married or has a criminal record, as well as share personal experiences to flag abusive or trustworthy behaviour.

The app’s recent popularity surge has also sparked criticism, with some claiming it unfairly targets men. As users await more information, experts urge caution and vigilance.


AI bands rise as real musicians struggle to compete

AI is quickly transforming the music industry, with AI-generated bands now drawing millions of plays on platforms like Spotify.

While these acts may sound like traditional musicians, they are entirely digital creations. Streaming services rarely label AI music clearly, and the producers behind these tracks often remain anonymous and unreachable. Human artists, meanwhile, are quietly watching their workload dry up.

Music professionals are beginning to express concern. Composer Leo Sidran believes AI is already taking work away from creators like him, noting that many former clients now rely on AI-generated solutions instead of original compositions.

Unlike previous tech innovations, which empowered musicians, AI risks erasing job opportunities entirely, according to Berklee College of Music professor George Howard, who warns it could become a zero-sum game.

AI music is especially popular for passive listening—background tracks for everyday life. In contrast, real musicians still hold value among fans who engage more actively with music.

However, AI is cheap, fast, and royalty-free, making it attractive to publishers and advertisers. From film soundtracks to playlists filled with faceless artists, synthetic sound is rapidly replacing human creativity in many commercial spaces.

Experts urge musicians to double down on what makes them unique instead of mimicking trends that AI can easily replicate. Live performance remains one of the few areas where AI has yet to gain traction. Until synthetic bands take the stage, artists may still find refuge in concerts and personal connection with fans.


Flipkart employee deletes ChatGPT over emotional dependency

ChatGPT has become an everyday tool for many, serving as a homework partner, a research aid, and even a comforting listener. But questions are beginning to emerge about the emotional bonds users form with it. A recent LinkedIn post has reignited the debate around AI overuse.

Simrann M Bhambani, a marketing professional at Flipkart, publicly shared her decision to delete ChatGPT from her devices. In a post titled ‘ChatGPT is TOXIC! (for me)’, she described how casual interaction escalated into emotional dependence. The platform began to resemble a digital therapist.

Bhambani admitted to confiding every minor frustration and emotional spiral to the chatbot. Its constant availability and non-judgemental replies gave her a false sense of security. Even with supportive friends, she felt drawn to the machine’s quiet reliability.

What began as curiosity turned into compulsion. She found herself spending hours feeding the bot intrusive thoughts and endless questions. ‘I gave my energy to something that wasn’t even real,’ she wrote. The experience led to more confusion instead of clarity.

Rather than offering mental relief, the chatbot fuelled her overthinking. The emotional noise grew louder, eventually becoming overwhelming. She realised that the problem wasn’t the technology itself, but how it quietly replaced self-reflection.

Deleting the app marked a turning point. Bhambani described the decision as a way to reclaim mental space and reduce digital clutter. She warned others that AI tools, while useful, can easily replace human habits and emotional processing if left unchecked.

Many users may not notice such patterns until they are deeply entrenched. AI chatbots are designed to be helpful and responsive, but they lack the nuance and care of human conversation. Their steady presence can foster a deceptive sense of intimacy.

People increasingly rely on digital tools to navigate their daily emotions, often without understanding the consequences. Some may find themselves withdrawing from human relationships or journalling less often. Emotional outsourcing to machines can significantly change how people process personal experiences.

Industry experts have warned about the risks of emotional reliance on generative AI. Chatbots are known to produce inaccurate or hallucinated responses, especially when asked to provide personal advice. Sole dependence on such tools can lead to misinformation or emotional confusion.

Companies like OpenAI have stressed that ChatGPT is not a substitute for professional mental health support. While the bot is trained to provide helpful and empathetic responses, it cannot replace human judgement or real-world relationships. Boundaries are essential.

Mental health professionals also caution against using AI as an emotional crutch. Reflection and self-awareness take time and require discomfort, which AI often smooths over. The convenience can dull long-term growth and self-understanding.

Bhambani’s story has resonated with many who have quietly developed similar habits. Her openness has sparked important discussions on emotional hygiene in the age of AI. More users are starting to reflect on their relationship with digital tools.

Social media platforms are also witnessing an increased number of posts about AI fatigue and cognitive overload. People are beginning to question how constant access to information and feedback affects emotional well-being. There is growing awareness around the need for balance.

AI is expected to become even more integrated into daily life, from virtual assistants to therapy bots. Recognising the line between convenience and dependency will be key. Tools are meant to serve, not dominate, personal reflection.

Developers and users alike must remain mindful of how often and why they turn to AI. Chatbots can complement human support systems, but they are not replacements. Bhambani’s experience serves as a cautionary tale in the age of machine intimacy.


Google brings AI Mode to UK search results

Google has officially introduced its AI Mode to UK users, calling it the most advanced version of its search engine.

Instead of listing web links, the feature provides direct, human-like answers to queries. It allows users to follow up with more detailed questions or multimedia inputs such as voice and images. The update aims to keep pace with the rising trend of longer, more conversational search phrases.

The tool first launched in the US and uses a ‘query fan-out’ method, breaking down complex questions into multiple search threads to create a combined answer from different sources.
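
Google has not published the mechanics of query fan-out, but the idea as described can be sketched in a few lines: decompose a complex question into focused sub-queries, run the searches in parallel, and merge the snippets into one combined answer. Everything below (the `decompose`, `search`, and `fan_out` names and the fixed aspect list) is a hypothetical illustration, not Google's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(query: str) -> list[str]:
    # Hypothetical decomposition: a real system would use a language model
    # to split a complex question into focused sub-queries.
    return [f"{query}: {aspect}" for aspect in ("definition", "examples", "comparison")]

def search(sub_query: str) -> str:
    # Stand-in for a call to a search backend; returns one snippet per sub-query.
    return f"snippet for: {sub_query}"

def fan_out(query: str) -> str:
    sub_queries = decompose(query)
    # Run the sub-searches concurrently, then merge the snippets
    # into a single combined answer.
    with ThreadPoolExecutor() as pool:
        snippets = list(pool.map(search, sub_queries))
    return "\n".join(snippets)
```

The parallel step is the point of the design: each sub-query can hit different sources at once, and only the merge step sees all of them together.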

While Google claims this will result in more meaningful site visits, marketers and publishers are worried about a growing trend known as ‘zero-click searches’, where users find what they need without clicking external links.

Research already shows a steep drop in engagement. Data from the Pew Research Center reveals that only 8% of users click a link when AI summaries are present, nearly half the rate seen on traditional search pages. Experts warn that without adjusting strategies, many online brands risk becoming invisible.

Instead of relying solely on classic SEO tactics, businesses are being urged to adopt Generative Engine Optimisation (GEO). GEO uses tools like schema markup and focuses on conversational content, visual media, and context-aware formatting.
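
As a concrete example of the schema markup such guidance points to, a page might embed a schema.org `FAQPage` block as JSON-LD in its markup; the question and answer text below are placeholders, and the snippet simply builds and prints the block:

```python
import json

# A minimal schema.org FAQPage JSON-LD block of the kind GEO guides recommend
# embedding in a page; the question and answer text here are illustrative.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What is Generative Engine Optimisation?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Structuring content so AI-driven search can cite it directly.",
        },
    }],
}

# Emit the JSON-LD ready to drop into a <script type="application/ld+json"> tag.
print(json.dumps(faq_markup, indent=2))
```

Structured blocks like this give a generative engine explicit question-and-answer pairs to draw on, rather than leaving it to infer them from free text.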

With nearly half of UK users engaging with AI search daily, adapting to these shifts may prove essential for maintaining visibility and sales.
