Adobe Firefly expands with new AI tools for audio and video creation

Adobe has unveiled major updates to its Firefly creative AI studio, introducing advanced audio, video, and imaging tools at the Adobe MAX 2025 conference.

These new features include Generate Soundtrack for licensed music creation, Generate Speech for lifelike multilingual voiceovers, and a timeline-based video editor that integrates seamlessly with Firefly’s existing creative tools.

The company also launched the Firefly Image Model 5, which can produce photorealistic 4MP images with prompt-based editing. Firefly now includes partner models from Google, OpenAI, ElevenLabs, Topaz Labs, and others, bringing the industry’s top AI capabilities into one unified workspace.

Adobe also announced Firefly Custom Models, allowing users to train AI models to match their personal creative style.

In a preview of future developments, Adobe showcased Project Moonlight, a conversational AI assistant that connects across creative apps and social channels to help creators move from concept to content in minutes.

The system offers tailored suggestions and automates parts of the creative process while keeping creators in complete control.

Adobe emphasised that Firefly is designed to enhance human creativity rather than replace it, offering responsible AI tools that respect intellectual property rights.

With this release, the company continues to integrate generative AI across its ecosystem, simplifying production and supporting creators at every stage of their workflow.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Yuan says AI ‘digital twins’ could trim meetings and the workweek

AI could shorten the workweek, says Zoom’s Eric Yuan. At TechCrunch Disrupt, he pitched AI ‘digital twins’ that attend meetings, negotiate drafts, and triage email, arguing assistants will shoulder routine tasks so humans focus on judgement.

Yuan has already used an AI avatar on an investor call to show how a stand-in can speak on your behalf. He said Zoom will keep investing heavily in assistants that understand context, prioritise messages, and draft responses.

Use cases extend beyond meetings. Yuan described counterparts sending their digital twins to hash out deal terms before principals join to resolve open issues, saving hours of live negotiation and accelerating consensus across teams and time zones.

Zoom plans to infuse AI across its suite, including whiteboards and collaborative docs, so work moves even when people are offline. Yuan said assistants will surface what matters, propose actions, and help execute routine workflows securely.

If adoption scales, Yuan sees schedules changing. He floated a five-year goal where many knowledge workers shift to three or four days a week, with AI increasing throughput, reducing meeting load, and improving focus time across organisations.

New ChatGPT model reduces unsafe replies by up to 80%

OpenAI has updated ChatGPT’s default model after working with more than 170 mental health clinicians to help the system better spot distress, de-escalate conversations and point users to real-world support.

The update routes sensitive exchanges to safer models, expands access to crisis hotlines and adds gentle prompts to take breaks, aiming to reduce harmful responses rather than simply offering more content.

Measured improvements are significant across three priority areas: severe mental health symptoms such as psychosis and mania, self-harm and suicide, and unhealthy emotional reliance on AI.

OpenAI reports that undesired responses fell between 65 and 80 percent in production traffic and that independent clinician reviews show significant gains compared with earlier models. At the same time, rare but high-risk scenarios remain a focus for further testing.

The company used a five-step process to shape the changes: define harms, measure them, validate approaches with experts, mitigate risks through post-training and product updates, and keep iterating.

Evaluations combine real-world traffic estimates with structured adversarial tests, meaning stronger ChatGPT safeguards are in place now, with further refinements planned as understanding and measurement methods evolve.

Celebrity estates push back on Sora as app surges to No.1

OpenAI’s short-video app Sora topped one million downloads in under a week, then ran headlong into a likeness-rights firestorm. Celebrity families and studios demanded stricter controls. Estates for figures like Martin Luther King Jr. sought blocks on unauthorised cameos.

Users showcased hyperreal mashups that blurred satire and deception, from cartoon crossovers to dead celebrities in improbable scenes. All clips are AI-made, yet reposting across platforms spread confusion. Viewers faced a constant real-or-fake dilemma.

Rights holders pressed for consent, compensation, and veto power over characters and personas. OpenAI shifted toward opt-in for copyrighted properties and enabled estate requests to restrict cameos. Policy language on who qualifies as a public figure remains fuzzy.

Agencies and unions amplified pressure, warning of exploitation and reputational risks. Detection firms reported a surge in takedown requests for unauthorised impersonations. Watermarks exist, but removal tools undercut provenance and complicate enforcement.

Researchers warned about a growing fog of doubt as realistic fakes multiply. Every day, people are placed in deceptive scenarios, while bad actors exploit deniability. OpenAI promised stronger guardrails as Sora scales within tighter rules.

MLK estate pushback prompts new Sora 2 guardrails at OpenAI

OpenAI paused the ability to re-create Martin Luther King Jr. in Sora 2 after Bernice King objected to user videos. Company leaders issued a joint statement with the King estate. New guardrails will govern depictions of historical figures on the app.

OpenAI said families and authorised estates should control how likenesses appear. Representatives can request removal or opt-outs. Free speech was acknowledged, but respectful use and consent were emphasised.

Policy scope remains unsettled, including who counts as a public figure. Case-by-case requests may dominate early enforcement. Transparency commitments arrived without full definitions or timelines.

Industry pressure intensified as major talent agencies opted their clients out. CAA and UTA cited exploitation and legal exposure. Some creators welcomed the tool, showing a split among public figures.

User appetite for realistic cameos continues to test boundaries. Rights of publicity and postmortem controls vary by state. OpenAI promised stronger safeguards while Sora 2 evolves.

Diella 2.0 set to deliver 83 new AI assistants to aid Albania’s MPs

Albania’s AI minister Diella will ‘give birth’ to 83 virtual assistants for ruling-party MPs, Prime Minister Edi Rama said, framing a quirky rollout of parliamentary copilots that record debates and propose responses.

Diella began in January as a public-service chatbot on e-Albania, then ‘Diella 2.0’ added voice and an avatar in traditional dress. Built with Microsoft by the National Agency for Information Society, it now oversees specific state tech contracts.

The legality is murky: Albania’s constitution requires ministers to be natural persons. A presidential decree left it to Rama to define the role, setting the stage for likely court challenges from opposition lawmakers.

Rama says the ‘children’ will brief MPs, summarise absences, and suggest counterarguments through 2026, experimenting with automating the day-to-day legislative grind without replacing elected officials.

Reactions range from table-thumping scepticism to cautious curiosity, as other governments debate AI personhood and limits; Diella could become a template, or a cautionary tale for ‘ministerial’ bots.

EU investigates Meta and TikTok for DSA breaches

The European Commission has accused Meta and TikTok of breaching the Digital Services Act (DSA), highlighting failures in handling illegal content and providing researchers access to public data.

Meta’s Facebook and Instagram were found to make it too difficult for users to report illegal content or receive responses to complaints, the Commission said in its preliminary findings.

Investigations began after complaints to Ireland’s content regulator, where Meta’s EU base is located. The Commission’s inquiry, which has been ongoing since last year, aims to ensure that large platforms protect users and meet EU safety obligations.

Meta and TikTok can submit counterarguments before penalties of up to six percent of global annual turnover are imposed.

Both companies face separate concerns about denying researchers adequate access to platform data and preventing oversight of systemic online risks. TikTok is under further examination over the protection of minors and advertising transparency.

The Commission has launched 14 such DSA proceedings, none of which has yet concluded.

Australia demands answers from AI chatbot providers over child safety

Australia’s eSafety Commissioner has issued legal notices to four major AI companion platforms, requiring them to explain how they are protecting children from harmful or explicit content.

Character.ai, Nomi, Chai, and Chub.ai were all served under the country’s Online Safety Act and must demonstrate compliance with Australia’s Basic Online Safety Expectations.

The notices follow growing concern that AI companions, designed for friendship and emotional support, can expose minors to sexualised conversations, suicidal ideation, and other psychological risks.

eSafety Commissioner Julie Inman Grant said the companies must show how their systems prevent such harms, not merely react to them, warning that failure to comply could lead to penalties of up to $825,000 per day.

AI companion chatbots have surged in popularity among young users, with Character.ai alone attracting nearly 160,000 monthly active users in Australia.

The Commissioner stressed that these services must integrate safety measures by design, as new enforceable codes now extend to AI platforms that previously operated with minimal oversight.

The move comes amid wider efforts to regulate emerging AI technologies and strengthen child protection standards online.

Breaches of the new codes could result in civil penalties of up to $49.5 million, marking one of the toughest online safety enforcement regimes globally.

Large language models mimic human object perception

Recent research shows that multimodal large language models (LLMs) can develop object representations strikingly similar to human cognition. By analysing how these AI models understand and organise concepts, scientists found patterns in the models that mirror neural activity in the human brain.

The study examined embeddings for 1,854 natural objects, derived from millions of text-image pairings. These embeddings capture relationships between objects and were compared with brain scan data from regions like EBA, PPA, RSC and FFA.

Researchers also discovered that multimodal training, which combines text and image data, enhances the models’ ability to form these human-like concepts. The findings suggest that large language models can achieve a more natural understanding of the world, offering potential improvements in human-AI interaction and future model design.

EU pushes harder on basic digital skills for growth

Nearly half of EU adults lack basic digital skills, yet most jobs demand them. Eurostat reports only 56% have at least basic proficiency. EU Code Week spotlights the urgency of digital literacy and inclusion.

The Digital Education Action Plan aims to modernise curricula, improve infrastructure, and train teachers. EU policymakers target 80% of adults with basic skills by 2030. Midway progress suggests stronger national action is still required.

Progress remains uneven across regions, with rural connectivity still lagging in places. Belgium’s Flanders region introduced a school smartphone ban from 1 September to curb distractions. Educators now balance classroom technology with attention and safety.

Brussels proposed a Union of Skills strategy to align education and competitiveness. The EU also earmarked fresh funding for AI, cybersecurity, and digital skills. Families and schools are urged to develop unplugged problem-solving alongside classroom learning.
