AI deepfake videos spark ethical and environmental concerns

Deepfake videos created by AI platforms like OpenAI’s Sora have gone viral, generating hyper-realistic clips of deceased celebrities and historical figures in often offensive scenarios.

Families of figures such as Dr Martin Luther King Jr have publicly appealed to AI firms to stop the use of their loved ones’ likenesses, highlighting ethical concerns around the technology.

Beyond the emotional impact, Dr Kevin Grecksch of Oxford University warns that producing deepfakes carries a significant environmental footprint. Instead of occurring on phones, video generation happens in data centres that consume vast amounts of electricity and water for cooling, often at industrial scales.

The surge in deepfake content has been rapid, with Sora downloaded over a million times in five days. Dr Grecksch urges users to consider the environmental cost, suggesting more integrated thinking about where data centres are built and how they are cooled to minimise their impact.

As governments promote AI growth areas like South Oxfordshire, questions remain over sustainable infrastructure. Users are encouraged to balance technological enthusiasm with environmental mindfulness, recognising the hidden costs behind creating and sharing AI-generated media.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU investigates Meta and TikTok for DSA breaches

The European Commission has accused Meta and TikTok of breaching the Digital Services Act (DSA), highlighting failures in handling illegal content and providing researchers access to public data.

Meta’s Facebook and Instagram were found to make it too difficult for users to report illegal content or receive responses to complaints, the Commission said in its preliminary findings.

Investigations began after complaints to Ireland’s content regulator, where Meta’s EU base is located. The Commission’s inquiry, which has been ongoing since last year, aims to ensure that large platforms protect users and meet EU safety obligations.

Meta and TikTok can submit counterarguments before penalties of up to six percent of global annual turnover are imposed.

Both companies face separate concerns about denying researchers adequate access to platform data, which prevents oversight of systemic online risks. TikTok is under further examination over the protection of minors and advertising transparency.

The Commission has launched 14 such DSA-related proceedings; none has yet been concluded.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia rules out AI copyright exemption

The Albanese Government has confirmed that it will not introduce a Text and Data Mining Exception in Australia’s copyright law, reinforcing its commitment to protecting local creators.

The decision follows calls from the technology sector for an exemption allowing AI developers to use copyrighted material without permission or payment.

Attorney-General Michelle Rowland said the Government aims to support innovation and creativity but will not weaken existing copyright protections. The Government plans to explore fair licensing options to support AI innovation while ensuring creators are paid fairly.

The Copyright and AI Reference Group will focus on fair AI use, clearer copyright rules for AI-generated works, and simpler enforcement through a possible small claims forum.

The Government said Australia must prepare for AI-related copyright challenges while keeping strong protections for creators. Collaboration between the technology and creative sectors will be essential to ensure that AI development benefits everyone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta AI brings conversational edits to Instagram Stories

Instagram is rolling out generative AI editing for Stories, expanding June’s tools with smarter prompts and broader effects. Type what you want removed or changed, and Meta AI does it. Think conversational edits, similar to Google Photos.

New controls include an Add Yours sticker for sharing your custom look with friends. A Presets browser shows available styles at a glance. Seasonal effects launch for Halloween, Diwali, and more.

Restyle Video brings preset effects to short clips, with options to add flair or remove objects. Edits are designed to be fast, fun, and reversible: creativity comes first, with the heavy lifting handled by AI.

Text gets a glow-up: Instagram is testing AI restyle for captions. Pick built-ins like ‘chrome’ or ‘balloon,’ or prompt Meta AI for custom styles.

Meta AI hasn’t wowed Instagram users, but this could change sentiment. The pitch: fewer taps, better results, and shareable looks. If it sticks, creating Stories becomes meaningfully easier.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

At UMN, AI meets ethics, history, and craft

AI is remaking daily life, but it can’t define what makes us human. The liberal arts help us probe ethics, meaning, and power as algorithms scale. At the University of Minnesota Twin Cities, that lens anchors curiosity with responsibility.

In the College of Liberal Arts, scholars are treating AI as both a tool and a textbook. They test its limits, trace its histories, and surface trade-offs around bias, authorship, and agency. Students learn to question design choices rather than just consume outputs.

Linguist Amanda Dalola, who directs the Language Center, experiments with AI as a language partner and reflective coach. Her aim isn’t replacement but augmentation: faster feedback, broader practice, richer cultural context. The point is discernment: when to use, when to refuse.

Statistician Galin Jones underscores the scaffolding beneath the hype. ‘You cannot do AI without statistics,’ he tells students, so the School of Statistics emphasises inference, uncertainty, and validation. Graduates leave fluent in models, and in the limits of what models claim.

Composer Frederick Kennedy’s opera I am Alan Turing turns theory into performance. By staging Turing’s questions about machine thought and human identity, the work fuses history, sound design, and code. Across philosophy, music, and more, CLA frames AI as a human story first.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft revives friendly AI helper with Mico

Microsoft has unveiled a new AI companion called Mico, designed to replace the infamous Clippy as the friendly face of its Copilot assistant. The animated avatar, shaped like a glowing flame or blob, reacts emotionally and visually during conversations with users.

Executives said Mico aims to balance warmth and utility, offering human-like cues without becoming intrusive. Unlike Clippy, the character can easily be switched off and is intended to feel supportive rather than persistent or overly personal.

Mico’s launch reflects growing debate about personality in AI assistants as tech firms navigate ethical concerns. Microsoft stressed that its focus remains on productivity and safety, distancing itself from flirtatious or emotionally manipulative AI designs seen elsewhere.

The character will first appear in US versions of Copilot on laptops and mobile apps. Microsoft also revealed an AI tutoring mode for students, reinforcing its efforts to create more educational and responsibly designed AI experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta expands AI safety tools for teens

Meta has announced new AI safety tools to give parents greater control over how teenagers use its AI features. The update will first launch on Instagram, allowing parents to disable one-on-one chats between teens and AI characters.

Parents will be able to block specific AI assistants and see topics teens discuss with them. Meta said the goal is to encourage transparency and support families as young users learn to navigate AI responsibly.

Teen protections already include PG-13-guided responses and restrictions on sensitive discussions, such as self-harm or eating disorders. The company said it also uses AI detection systems to apply safeguards when suspected minors misreport their age.

The new parental controls will roll out in English early next year across the US, UK, Canada, and Australia. Meta said it will continue updating features to address parents’ concerns about privacy, safety, and teen wellbeing online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia demands answers from AI chatbot providers over child safety

Australia’s eSafety Commissioner has issued legal notices to four major AI companion platforms, requiring them to explain how they are protecting children from harmful or explicit content.

Character.ai, Nomi, Chai, and Chub.ai were all served under the country’s Online Safety Act and must demonstrate compliance with Australia’s Basic Online Safety Expectations.

The notices follow growing concern that AI companions, designed for friendship and emotional support, can expose minors to sexualised conversations, suicidal ideation, and other psychological risks.

eSafety Commissioner Julie Inman Grant said the companies must show how their systems prevent such harms, not merely react to them, warning that failure to comply could lead to penalties of up to $825,000 per day.

AI companion chatbots have surged in popularity among young users, with Character.ai alone attracting nearly 160,000 monthly active users in Australia.

The Commissioner stressed that these services must integrate safety measures by design, as new enforceable codes now extend to AI platforms that previously operated with minimal oversight.

The move comes amid wider efforts to regulate emerging AI technologies and ensure stronger child protection standards online.

Breaches of the new codes could result in civil penalties of up to $49.5 million, marking one of the toughest online safety enforcement regimes globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta seeks delay in complying with Dutch court order on Facebook and Instagram timelines

Meta has yet to adjust Facebook and Instagram’s timelines despite an Amsterdam court ruling that found its current design violates European law. The company says it needs more time to make the required changes and has asked the court to extend its deadline until 31 January 2026.

The dispute stems from Meta’s use of algorithmic recommendation systems that determine which posts appear in users’ feeds and in what order. Both Instagram and Facebook offer an option to set the timeline to chronological order, but it is hard to find and reverts to the algorithmic feed as soon as users close the app.

The Amsterdam court earlier ruled that these systems, which reset user preferences and hide options for chronological viewing, breach the Digital Services Act (DSA) by denying users genuine autonomy, freedom of choice, and control over how information is presented.

The judge ordered Meta to modify both apps within two weeks or face penalties of €100,000 per day, up to €5 million. More than two weeks later, Meta has yet to comply, arguing that the technical changes cannot be completed within the court’s timeline.

Dutch civil rights group Bits of Freedom, which brought the case, criticised the delay as a refusal to take responsibility. ‘The legislator wants it, experts say it can be done, and the court says it must be done. Yet Meta fails to bring its platforms into line with our legislation,’ Evelyn Austin, the organisation’s director, said in a statement.

The Amsterdam Court of Appeal will review Meta’s request for an extension on 27 October.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU warns Meta and TikTok over transparency failures

The European Commission has found that Meta and TikTok violated key transparency obligations under the EU’s Digital Services Act (DSA). According to preliminary findings, both companies failed to provide adequate data access to researchers studying public content on their platforms.

The Commission said Facebook, Instagram, and TikTok imposed ‘burdensome’ conditions that left researchers with incomplete or unreliable data, hampering efforts to investigate the spread of harmful or illegal content online.

Meta faces additional accusations of breaching the DSA’s rules on user reporting and complaints. The Commission said the ‘Notice and Action’ systems on Facebook and Instagram were not user-friendly and contained ‘dark patterns’, manipulative design choices that discouraged users from reporting problematic content.

Moreover, Meta allegedly failed to give users sufficient explanations when their posts or accounts were removed, undermining transparency and accountability requirements set by the law.

Both companies have the opportunity to respond before the Commission issues final decisions. However, if the findings are confirmed, Meta and TikTok could face fines of up to 6% of their global annual revenue.

The EU executive also announced new rules, effective next week, that will expand data access for ‘vetted’ researchers, allowing them to study internal platform dynamics and better understand how large social media platforms shape online information flows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!