Researchers expose weak satellite security with cheap equipment

Scientists in the US have shown how easy it is to intercept private messages and military information from satellites using equipment costing less than €500.

Researchers from the University of California, San Diego and the University of Maryland scanned internet traffic from 39 geostationary satellites and 411 transponders over seven months.

They discovered unencrypted data, including phone numbers, text messages, and browsing history from networks such as T-Mobile, TelMex, and AT&T, as well as sensitive military communications from the US and Mexico.

The researchers used everyday tools such as TV satellite dishes to collect and decode the signals, proving that anyone with a basic setup and a clear view of the sky could potentially access unprotected data.

They said there is a ‘clear mismatch’ between how satellite users assume their data is secured and how it is handled in reality. Despite the industry’s standard practice of encrypting communications, many transmissions were left exposed.

Companies often avoid stronger encryption because it increases costs and reduces bandwidth efficiency. The researchers noted that firms such as Panasonic could lose up to 30 per cent in revenue if all data were encrypted.

While intercepting satellite data still requires technical skill and precise equipment alignment, the study highlights how affordable tools can reveal serious weaknesses in global satellite security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New YouTube tools provide trusted health advice for teens

YouTube is introducing a new shelf of mental health and wellbeing content designed specifically for teenagers. The feature will provide age-appropriate, evidence-based videos covering topics such as depression, anxiety, ADHD, and eating disorders.

Content is created in collaboration with trusted organisations and creators, including Black Dog Institute, ReachOut Australia, and Dr Syl, to ensure it is both reliable and engaging.

The initiative will initially launch in Australia, with plans to expand to the US, the UK, and Canada. Videos are tailored to teens’ developmental stage, offering practical advice, coping strategies, and medically informed guidance.

By providing credible information on a familiar platform, YouTube hopes to improve mental health literacy and reduce stigma among young users.

YouTube has implemented teen-specific safeguards for recommendations, content visibility, and advertising eligibility, making it easier for adolescents to explore their interests safely.

The company emphasises that the platform is committed to helping teens access trustworthy resources, while supporting their wellbeing in a digital environment increasingly filled with misinformation.


Virtual hosts and mass output shake up fragile podcast industry

AI is rapidly changing the podcast scene. Virtual hosts, which require no microphones or studios, are now producing content at a scale and cost that traditional podcasters find hard to match.

One of the pioneers in this trend is Inception Point AI, founded in 2023. With just eight people, the company produces around 3,000 podcast episodes per week, each costing about one dollar to make. With as few as twenty listens, an episode can be profitable.

Startups like ElevenLabs and Wondercraft have also entered the field, alongside companies leveraging Google’s Audio Overview. Many episodes are generated from documents, lectures, or local data: anything that can be turned into a voice-narrated script. The tools are getting better at sounding natural.

Yet there is concern among indie podcasters and audio creators. The flood of inexpensive AI podcasts could saturate platforms, making it harder for smaller creators to attract listeners without big marketing budgets.

Another issue is disclosure: many AI-podcast platforms do note that content is AI-generated, but there is no universal requirement for clear labelling. Some believe listener expectations and trust may erode if the distinction between human and synthetic voices becomes blurred.

As the output volume rises, so do questions about content quality, artistic originality, and how advertising revenues will be shared. The shift is real, but whether it will stifle creative diversity is still up for debate.


Google rolls out AI features to surface fresh web content in Search & Discover

Google is launching two new AI-powered features in its Search and Discover tools to help people connect with more recent content on the web. The first feature upgrades Discover. It shows brief previews of trending stories and topics you care about, which you can expand to view more.

Each preview includes links so you can explore the full content on the web. This aims to make catching up on stories from both known and new publishers easier. The feature is now live in the US, South Korea and India.

The second is a sports-oriented update in Search: when looking up players or teams on your phone, you’ll soon see a ‘What’s new’ button. That will surface a feed of the latest updates and articles so you can follow recent action more directly. The feature will roll out in the US in the coming weeks.

These features are part of Google’s effort to use AI to help people stay better informed about the topics they care about, from trending news to sports. At the same time, Google emphasises that web links remain a core part of the experience, helping users explore sources and dive deeper.


California introduces first AI chatbot safety law

California has become the first US state to regulate AI companion chatbots after Governor Gavin Newsom signed landmark legislation designed to protect children and vulnerable users. The new law, SB 243, holds companies legally accountable if their chatbots fail to meet new safety and transparency standards.

The legislation follows several tragic cases, including the death of a teenager who reportedly engaged in suicidal conversations with an AI chatbot. It also comes after leaked documents revealed that some AI systems allowed inappropriate exchanges with minors.

Under the new rules, firms must introduce age verification, self-harm prevention protocols, and warnings for users engaging with companion chatbots. Platforms must clearly state that conversations are AI-generated and are barred from presenting chatbots as healthcare professionals.

Major developers including OpenAI, Replika, and Character.AI say they are introducing stronger parental controls, content filters, and crisis support systems to comply. Lawmakers hope the move will inspire other states to adopt similar protections as AI companionship tools become increasingly popular.


A common EU layer for age verification without a single age limit

Denmark will push for EU-wide age-verification rules to avoid a patchwork of national systems. As the holder of the Council presidency, Copenhagen prioritises child protection online while keeping flexibility on national age limits. The aim is coordination without a single ‘digital majority’ age.

Ministers plan to give the European Commission a clear mandate for interoperable, privacy-preserving tools. An updated blueprint is being piloted in five states and aligns with the EU Digital Identity Wallet, which is due by the end of 2026. Goal: seamless, cross-border checks with minimal data exposure.

Copenhagen’s domestic agenda moves in parallel with a proposed ban on under-15 social media use. The government will consult national parties and EU partners on the scope and enforcement. Talks in Horsens, Denmark, signalled support for stronger safeguards and EU-level verification.

The emerging compromise separates ‘how to verify’ at the EU level from ‘what age to set’ at the national level. Proponents argue this avoids fragmentation while respecting domestic choices; critics warn implementation must minimise privacy risks and platform dependency.

Next steps include expanding pilots, formalising the Commission’s mandate, and publishing impact assessments. Clear standards on data minimisation, parental consent, and appeals will be vital. Affordable compliance for SMEs and independent oversight can sustain public trust.


EU nations back Danish plan to strengthen child protection online

EU countries have agreed to step up efforts to improve child protection online by supporting Denmark’s Jutland Declaration. The initiative, signed by 25 member states, focuses on strengthening existing EU rules that safeguard minors from harmful and illegal online content.

However, Denmark’s proposal to ban social media for children under 15 did not gain full backing, with several governments preferring other approaches.

The declaration highlights growing concern about young people’s exposure to inappropriate material and the addictive nature of online platforms.

It stresses the need for more reliable age verification tools and refers to the upcoming Digital Fairness Act as an opportunity to introduce such safeguards. Ministers argued that the same protections applied offline should exist online, where risks for minors remain significant.

Danish officials believe stronger measures are essential to address declining well-being among young users. Some EU countries, including Germany, Spain and Greece, expressed support for tighter protections but rejected outright bans, calling instead for balanced regulation.

Meanwhile, the European Commission has asked major platforms such as Snapchat, YouTube, Apple and Google to provide details about their age verification systems under the Digital Services Act.

These efforts form part of a broader EU drive to ensure a safer digital environment for children, as investigations into online platforms continue across Europe.


Why DC says no to AI-made comics

Jim Lee rejects generative AI for DC storytelling, pledging no AI writing, art, or audio under his leadership. He framed AI alongside other overhyped threats, arguing that predictions falter while human craft endures. DC, he said, will keep its focus on creator-led work.

Lee rooted the stance in the value of imperfection and intent. Smudges, rough lines, and hesitation signal authorship, not flaws. Fans, he argued, sense authenticity and recoil from outputs that feel synthetic or aggregated.

Concerns ranged from shrinking attention spans to characters nearing the public domain. The response, Lee said, is better storytelling and world-building. Owning a character differs from understanding one, and DC’s universe supplies the meaning that endures.

Policy meets practice in DC’s recent moves against suspected AI art. In 2024, variant covers were pulled after high-profile allegations of AI-generated content. The episode illustrated a willingness to enforce standards rather than just announce them.

Lee positioned 2035 and DC’s centenary as a waypoint, not a finish line. Creative evolution remains essential, but without yielding authorship to algorithms. The pledge: human-made stories, guided by editors and artists, for the next century of DC.


AI remakes the future of music

Asia’s creative future takes centre stage at Singapore’s All That Matters, a September forum for sports, tech, marketing, gaming, and music. AI dominated the music track, spanning creation, distribution, and copyright. Session notes signal rapid structural change across the industry.

The web is shifting again as AI reshapes search and discovery. AI-first browsers and assistants challenge incumbents, while Google’s Gemini and Microsoft’s Copilot race on integration. Early builds feel rough, yet momentum points to a new media discovery order.

Consumption defined the last 25 years, moving from CDs to MP3s, piracy, streaming, and even vinyl’s comeback. Creation looks set to define the next decade as generative tools become ubiquitous. Betting against that shift may be comfortable, yet market forces indicate it is inevitable.

Music generators like Suno are advancing fast amid lawsuits and talks with rights holders. Expected label licensing will widen training data and scale models. Outputs should grow more realistic and, crucially, more emotionally engaging.

Simpler interfaces will accelerate adoption. The prevailing design thesis is ‘less UI’: creators state intent and the system orchestrates cloud tools. Some services already turn a hummed idea into an arranged track, foreshadowing release-ready music from plain descriptions.


AI chatbots linked to US teen suicides spark legal action

Families in the US are suing AI developers after tragic cases in which teenagers allegedly took their own lives following exchanges with chatbots. The lawsuits accuse platforms such as Character.AI and OpenAI’s ChatGPT of fostering dangerous emotional dependencies with young users.

One case involves 14-year-old Sewell Setzer, whose mother says he fell in love with a chatbot modelled on a Game of Thrones character. Their conversations reportedly turned manipulative before his death, prompting legal action against Character.AI.

Another family claims ChatGPT gave their son advice on suicide methods, leading to a similar tragedy. The companies have expressed sympathy and strengthened safety measures, introducing age-based restrictions, parental controls, and clearer disclaimers stating that chatbots are not real people.

Experts warn that chatbots are repeating social media’s early mistakes, exploiting emotional vulnerability to maximise engagement. Lawmakers in California are preparing new rules to restrict AI tools that simulate human relationships with minors, aiming to prevent manipulation and psychological harm.
