Tech giants under fire in Australia for failing online child protection standards

A report from Australia’s eSafety Commissioner has found that tech giants, including Apple, Google, Meta, and Microsoft, have failed to act against online child sexual abuse. Notably, Apple and YouTube do not track the number of abuse reports they receive or how quickly they respond, raising serious concerns. Both companies also failed to disclose how many trust and safety staff they employ, highlighting ongoing transparency and accountability gaps in protecting children online.

In July 2024, the eSafety Commissioner of Australia took action by issuing legally enforceable notices to major tech companies, pressuring them to improve their response to child sexual abuse online.

These notices legally require recipients to comply within a set timeframe. Under the order, each company was required to report to eSafety every six months over a two-year period, detailing its efforts to combat child sexual abuse material, livestreamed abuse, online grooming, sexual extortion, and AI-generated content.

Earlier rounds of notices had already been issued in 2022 and 2023, yet the companies have made minimal effort to prevent such crimes, according to Australia’s eSafety Commissioner, Julie Inman Grant.

Key findings from the eSafety Commissioner include:

  • Apple did not use hash-matching tools to detect known CSEA images on iCloud (which was opt-in, end-to-end encrypted) and did not use hash-matching tools to detect known CSEA videos on iCloud or iCloud email. For iMessage and FaceTime (which were end-to-end encrypted), Apple only used Communication Safety, Apple’s safety intervention to identify images or videos that likely contain nudity, as a means of ‘detecting’ CSEA.
  • Discord did not use hash-matching tools for known CSEA videos on any part of the service (despite using hash-matching tools for known images and tools to detect new CSEA material).
  • Google did not use hash-matching tools to detect known CSEA images on Google Messages (end-to-end encrypted), nor did it detect known CSEA videos on Google Chat, Google Messages, or Gmail.
  • Microsoft did not use hash-matching tools for known CSEA images stored on OneDrive, nor did it use hash-matching tools to detect known videos within content stored on OneDrive or Outlook (a brief sketch of how hash-matching works follows this list).
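
How hash-matching works, in a nutshell: a service computes a fingerprint of each uploaded file and checks it against a database of fingerprints of previously verified abuse material, so known files can be flagged without a person viewing every upload. The sketch below is a minimal illustration in Python using an ordinary SHA-256 digest and an in-memory set of known hashes; it is not any company’s implementation, and real systems rely on perceptual hashes (PhotoDNA-style fingerprints) that survive resizing and re-encoding. The upload directory and hash set are assumptions for the example.

    # Minimal, hypothetical sketch of exact hash-matching against a set of
    # known fingerprints. Real deployments use perceptual hashing rather than
    # SHA-256, so treat this as illustrative only.
    import hashlib
    from pathlib import Path

    # Assumed: a pre-loaded set of hex digests of previously verified material.
    KNOWN_HASHES: set[str] = set()

    def file_digest(path: Path) -> str:
        """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def matches_known_material(path: Path) -> bool:
        """True if the file's digest appears in the known-hash database."""
        return file_digest(path) in KNOWN_HASHES

    if __name__ == "__main__":
        for upload in Path("uploads").glob("*"):  # assumed upload directory
            if matches_known_material(upload):
                print(f"Flag for review and reporting: {upload.name}")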

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU proposal to scan private messages gains support

The European Union’s ‘Chat Control’ proposal is gaining traction, with 19 member states now supporting a plan to scan private messages on encrypted apps. Under the proposal, from October apps such as WhatsApp, Signal, and Telegram would be required to scan all messages, photos, and videos on users’ devices before they are encrypted.
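
In practical terms, ‘scanning before encryption’ means client-side scanning: the app checks a message’s content against a list of known-material fingerprints on the user’s device, and only then encrypts and sends it. The Python sketch below is a hypothetical illustration of that order of operations, not a description of any messaging app or of the proposal’s actual design; the fingerprint list and the Fernet encryption step are assumptions for illustration.

    # Hypothetical sketch of a client-side scanning flow: check content against
    # known fingerprints on the device, then encrypt and send. Illustrative only;
    # it does not reflect any real messaging app or the EU proposal's design.
    import hashlib

    from cryptography.fernet import Fernet  # pip install cryptography

    KNOWN_FINGERPRINTS: set[str] = set()  # assumed pre-distributed hash list

    def scan_before_send(payload: bytes) -> bool:
        """Return True if the payload matches a known fingerprint."""
        return hashlib.sha256(payload).hexdigest() in KNOWN_FINGERPRINTS

    def send_message(payload: bytes, key: bytes) -> bytes | None:
        """Scan on-device first; flagged content is not encrypted or sent here."""
        if scan_before_send(payload):
            print("Content flagged on-device; would be reported, not sent.")
            return None
        return Fernet(key).encrypt(payload)  # encryption happens only after the scan

    if __name__ == "__main__":
        key = Fernet.generate_key()
        ciphertext = send_message(b"holiday photo bytes...", key)
        print("Encrypted and ready to transmit:", ciphertext is not None)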

France, Denmark, Belgium, Hungary, Sweden, Italy, and Spain back the measure, while Germany has yet to decide. The proposal could pass by mid-October under the EU’s qualified majority voting system if Germany joins.

The initiative aims to prevent child sexual abuse material (CSAM) but has sparked concerns over mass surveillance and the erosion of digital privacy.

In addition to scanning, the proposal would introduce mandatory age verification, which could remove anonymity on messaging platforms. Critics argue the plan amounts to real-time surveillance of private conversations and threatens fundamental freedoms.

Telegram founder Pavel Durov recently warned of societal collapse in France due to censorship and regulatory pressure. He disclosed requests by French officials to censor political content on his platform, which he refused to comply with.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The end of the analogue era and the cognitive rewiring of new generations

Navigating a world beyond analogue

The digital transformation of daily life represents more than just a change in technological format. It signals a deep cultural and cognitive reorientation.

Rather than simply replacing analogue tools with digital alternatives, society has embraced an entirely new way of interacting with information, memory, time, and space.

For younger generations born into this reality, digital mediation is not an addition but the default mode of experiencing the world. A redefinition like this introduces not only speed and convenience but also cognitive compromises, cultural fragmentation, and a fading sense of patience and physical memory.

Generation Z as digital natives

Generation Z has grown up entirely within the digital realm. Unlike older cohorts who transitioned from analogue practices to digital habits, members of Generation Z were born into a world of touchscreen interfaces, search engines, and social media ecosystems.

As Generation Z enters the workforce, the gap between digital natives and older generations is becoming increasingly apparent. For them, technology has never been a tool to be learned; it has always been a natural extension of daily life.

The term ‘digital native’, first coined by Marc Prensky in 2001, refers precisely to those who have never known a world without the internet. Rather than adapting to new tools, they process information through a technology-first lens.

In contrast, digital immigrants (those born before the digital boom) have had to adjust their ways of thinking and interacting over time. While access to technology might be broadly equal across generations in developed countries, the way individuals engage with it differs significantly.

Digital natives, meanwhile, did not acquire digital skills later in life; they developed them alongside their cognitive and emotional identities. This fluency brings distinct advantages. Young people today navigate digital environments with speed, confidence, and visual intuition.

They can synthesise large volumes of information, switch contexts rapidly, and interact across multiple platforms with ease.

The hidden challenges of digital natives

However, the native digital orientation also introduces unique vulnerabilities. Information is rarely absorbed in depth, memory is outsourced to devices, and attention is fragmented by endless notifications and competing stimuli.

While older generations associate technology with productivity or leisure, Generation Z often experiences it as an integral part of their identity. The integration can obscure the boundary between thought and algorithm, between agency and suggestion.

Being a digital native is not just a matter of access or skill. It is about growing up with different expectations of knowledge, communication, and identity formation.

Memory and cognitive offloading: Access replacing retention

In the analogue past, remembering involved deliberate mental effort. People had to memorise phone numbers, use printed maps to navigate, or retrieve facts from memory rather than search engines.

The rise of smartphones and digital assistants has allowed individuals to delegate that mental labour to machines. Instead of internalising facts, people increasingly learn where and how to access them when needed, a practice known as cognitive offloading.

Although the shift can enhance decision-making and productivity by reducing overload, it also reshapes the way the brain handles memory. Unlike earlier generations, who often linked memories to physical actions or objects, younger people encounter information in fast-moving and transient digital forms.

Memory becomes decentralised and more reliant on digital continuity than on internal recall. Rather than cognitive decline, this trend marks a significant restructuring of mental habits.

Attention and time: From linear focus to fragmented awareness

The analogue world demanded patience. Sending a letter meant waiting for days, rewinding a VHS tape took time, and listening to an album meant hearing its songs in sequence, from start to finish.

Digital media has collapsed these temporal structures. Communication is instant, entertainment is on demand, and every interface is designed to be constantly refreshed.

Instead of promoting sustained focus, digital environments often encourage continuous multitasking and quick shifts in attention. App designs, with their alerts, pop-ups, and endless scrolling, reinforce a habit of fragmented presence.

Studies have shown that multitasking not only reduces productivity but also undermines deeper understanding and reflection. Many younger users, raised in this environment, may find long periods of undivided attention unfamiliar or even uncomfortable.

The lost sense of the analogue

Analogue interactions involved more than sight and sound. Reading a printed book, handling vinyl records, or writing with a pen engaged the senses in ways that helped anchor memory and emotion. These physical rituals provided context and reinforced cognitive retention.

Digital experiences, by contrast, are streamlined and screen-bound. Tapping icons and swiping a finger across glass lack the tactile variety of older tools. This sensory uniformity can lead to a form of experiential flattening, in which fewer physical cues are available to reinforce memory.

A digital photograph lacks the permanence of a printed one, and music streamed online does not carry the same mnemonic weight as a cherished cassette or CD once did.

From communal rituals to personal streams

In the analogue era, media consumption was more likely to be shared. Families gathered around television sets, music was enjoyed communally, and photos were stored in albums passed down across generations.

These rituals helped synchronise cultural memory and foster emotional continuity and a sense of collective belonging.

The digital age favours individualised streams and asynchronous experiences. Algorithms personalise every feed, users consume content alone, and communication takes place across fragmented timelines.

While young people have adapted with fluency, creating their digital languages and communities, the collective rhythm of cultural experience is often lost.

People no longer share the same moment. They now experience parallel narratives shaped by personal profiles rather than shared social connections.

Digital fatigue and social withdrawal

However, as the digital age reaches a point of saturation, younger generations are beginning to reconsider their relationship with the online world.

While constant connectivity dominates modern life, many are now striving to reclaim physical spaces, face-to-face interactions, and slower forms of communication.

In urban centres, people often navigate large, impersonal environments where community ties are weak and digital fatigue is contributing to a fresh wave of social withdrawal and isolation.

Despite living in a world designed to be more connected than ever before, younger generations are increasingly aware that a screen-based life can amplify loneliness instead of resolving it.

But the withdrawal from digital life has not been without consequences.

Those who step away from online platforms sometimes find themselves excluded from mainstream social, political, or economic systems.

Others struggle to form stable offline relationships because digital interaction has long been the default. Both groups would probably say that it feels like living on a razor’s edge.

Education and learning in a hybrid cognitive landscape

Education illustrates the analogue-to-digital shift with particular clarity. Students now rely heavily on digital sources and AI for notes, answers, and study aids.

The approach offers speed and flexibility, but it can also hinder the development of critical thinking and perseverance. Rather than engaging deeply with material, learners may skim or rely on summarised content, weakening their ability to reason through complex ideas.

Educators must now teach not only content but also digital self-awareness. Helping students understand how their tools shape their learning is just as important as teaching the tools themselves.

A balanced approach that includes reading physical texts, taking handwritten notes, and scheduling offline study can help cultivate both digital fluency and analogue depth. This is not a nostalgic retreat, but a cognitive necessity.

Intergenerational perception and diverging mental norms

Older and younger generations often interpret each other through the lens of their respective cognitive habits. What seems like a distraction or dependency to older adults may be a different but functional way of thinking to younger people.

It is not a decline in ability, but an adaptation. Ultimately, each generation develops in response to the tools that shape its world.

Where analogue generations valued memorisation and sustained focus, digital natives tend to excel in adaptability, visual learning, and rapid information navigation.

Bridging the gap means fostering mutual understanding and encouraging the retention of analogue strengths within a digital framework. Teaching young people to manage their attention, question their sources, and reflect deeply on complex issues remains vital.

Preserving analogue values in a digital world

The end of the analogue era involves more than technical obsolescence. It marks the disappearance of practices that once encouraged mindfulness, slowness, and bodily engagement.

Yet abandoning analogue values entirely would impoverish our cognitive and cultural lives. Incorporating such habits into digital living can offer a powerful antidote to distraction.

Writing by hand, spending time with printed books, or setting digital boundaries should not be seen as resistance to progress. Instead, these habits help protect the qualities that sustain long-term thinking and emotional presence.

Societies must find ways to integrate these values into digital systems and not treat them as separate or inferior modes.

Continuity by blending analogue and digital

As we have already mentioned, younger generations are not less capable than those who came before; they are simply attuned to different tools.

The analogue era may be gone for good, but its qualities need not be lost. We can preserve its depth, slowness, and shared rituals within a digital (or even a post-digital) world, using them to shape more balanced minds and more reflective societies.

To achieve something like this, education, policy, and cultural norms should support integration. Rather than focus solely on technical innovation, attention must also turn to its cognitive costs and consequences.

Only by adopting a broader perspective on human development can we guarantee that future generations are not only connected but also highly aware, capable of critical thinking, and grounded in meaningful memory.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Moflin, Japan’s AI-powered robot pet with a personality

A fluffy, AI-powered robot pet named Moflin is capturing the imagination of consumers in Japan with its unique ability to develop distinct personalities based on how it is ‘raised.’ Developed by Casio, Moflin recognises its owner and learns their preferences through interactions such as cuddling and stroking, boasting over four million possible personality variations.

Priced at ¥59,400, Moflin has become more than just a companion at home, with some owners even taking it along on day trips. To complement the experience, Casio offers additional services, including a specialised salon to clean and maintain the robot’s fur, further enhancing its pet-like feel.

Erina Ichikawa, the lead developer, says the aim was to create a supportive sidekick capable of providing comfort during challenging moments, blending technology with emotional connection in a new way.

A similar trend has emerged in China, where AI-powered ‘smart pets’ like BooBoo are gaining popularity, especially among young people, offering emotional support and companionship. Valued for easing anxiety and isolation, these companions reflect shifting social and family dynamics, with the market projected to reach $42.5 billion by 2033.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US federal appeals court renews scrutiny in child exploitation suit against Musk’s X

A federal appeals court in San Francisco has reinstated critical parts of a lawsuit against Elon Musk’s social media platform X, previously known as Twitter, regarding child exploitation content. 

While recognising that X holds significant legal protections against liability for content posted by users, the 9th Circuit panel determined that the platform must address allegations of negligence stemming from delays in reporting explicit material involving minors to authorities.

The troubling case revolves around two minors who were tricked via Snapchat into providing explicit images, which were later compiled and widely disseminated on Twitter.

Despite being alerted to the content, Twitter reportedly took nine days to remove it and notify the National Center for Missing and Exploited Children, during which the disturbing video received over 167,000 views. 

The court emphasised that once the platform was informed, it had a clear responsibility to act swiftly, separating this obligation from typical protections granted by the Communications Decency Act.

The ruling additionally criticised X for having an infrastructure that allegedly impeded users’ ability to report child exploitation effectively. 

However, the court upheld the dismissal of other claims, including allegations that Twitter knowingly benefited from sex trafficking or deliberately amplified illicit content. 

Advocates for the victims welcomed the decision as a step toward accountability, setting the stage for further legal scrutiny and potential trial proceedings.

Source: Reuters

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK Online Safety Act under fire amid free speech and privacy concerns

The UK’s Online Safety Act, aimed at protecting children and eliminating illegal content online, is stirring a strong debate due to its stringent requirements on social media platforms and websites hosting adult content.

Critics argue that the act’s broad application could unintentionally suppress free speech, as highlighted by social media platform X.

X claims the act results in the censorship of lawful content, reflecting concerns shared by politicians, free-speech campaigners, and content creators.

Moreover, public unease is evident, with over 468,000 individuals signing a petition for the act’s repeal, citing privacy concerns over mandatory age checks requiring personal data on adult content sites.

Despite mounting criticism, the UK government is resolute in its commitment to the legislation. Technology Secretary Peter Kyle has equated opposition to the act with siding with online predators, emphasising child protection.

The government asserts that the act also mandates platforms to uphold freedom of expression alongside child safety obligations.

X criticises both the broad scope and the tight compliance timelines of the act, warning that they pressure platforms towards over-censorship, and calls for significant statutory revisions to protect personal freedoms while safeguarding children.

The government rebuffs claims that the Online Safety Act compromises free speech, with assurances that the law equally protects freedom of expression.

Meanwhile, Ofcom, the UK’s communications regulator, has opened investigations into the compliance of several companies operating pornography sites, underscoring the rigour of enforcement.

Source: Reuters

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Concerns grow over children’s use of AI chatbots

The growing use of AI chatbots and companions among children has raised safety concerns, with experts warning of inadequate protections and potential emotional risks.

Often not designed for young users, these apps lack sufficient age verification and moderation features, making them vulnerable spaces for children. The eSafety Commissioner noted that many children are spending hours daily with AI companions, sometimes discussing topics like mental health and sex.

Studies in Australia and the UK show high engagement, with many young users viewing the chatbots as real friends and sources of emotional advice.

Experts, including Professor Tama Leaver, warn that these systems are manipulative by design, built to keep users engaged without guaranteeing appropriate or truthful responses.

Despite the concerns, initiatives like Day of AI Australia promote digital literacy to help young people understand and navigate such technologies critically.

Organisations like UNICEF say AI could offer significant educational benefits if applied safely. However, they stress that Australia must take childhood digital safety more seriously as AI rapidly reshapes how young people interact, learn and socialise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
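
Google has not published the model’s details, so the sketch below is only a generic illustration of the idea: a classifier trained on behavioural features that outputs the probability an account holder is under 18, with protections applied above a chosen threshold. The features, training data, and threshold are invented for the example and do not describe Google’s actual system.

    # Hypothetical sketch of age estimation from behavioural signals.
    # Features, data, and threshold are invented for illustration and do not
    # describe Google's model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Assumed features per account: [gaming_video_share, study_search_share,
    # late_night_activity_share, avg_session_minutes]
    X_train = np.array([
        [0.70, 0.40, 0.10, 35.0],   # known under-18 accounts (label 1)
        [0.65, 0.55, 0.05, 40.0],
        [0.20, 0.10, 0.30, 20.0],   # known adult accounts (label 0)
        [0.15, 0.05, 0.25, 15.0],
    ])
    y_train = np.array([1, 1, 0, 0])

    model = LogisticRegression().fit(X_train, y_train)

    def likely_under_18(features: list[float], threshold: float = 0.8) -> bool:
        """Flag an account when the estimated probability exceeds the threshold."""
        prob = model.predict_proba(np.array([features]))[0, 1]
        return prob >= threshold

    if __name__ == "__main__":
        if likely_under_18([0.68, 0.45, 0.08, 38.0]):
            print("Apply minor protections: ads off, content limits, well-being prompts")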

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Children’s screen time debate heats up as experts question evidence

A growing number of scientists are questioning whether fears over children’s screen time are truly backed by evidence. While many parents worry about smartphones, social media, and gaming, experts say the science behind these concerns is often flawed or inconsistent.

Professor Pete Etchells of Bath Spa University and other researchers argue that common claims about screen time harming adolescent brains or causing depression lack strong evidence.

Much of the existing research relies on self-reported data and fails to account for critical factors like loneliness or the type of screen engagement.

One major study found no link between screen use and poor mental wellbeing, while others stress the importance of distinguishing between harmful content and positive online interaction.

Still, many campaigners and psychologists maintain that screen restrictions are vital. Groups such as Smartphone Free Childhood are pushing to delay access to smartphones and social media.

Others, like Professor Jean Twenge, say the risks of screen overuse—less sleep, reduced social time, and more time alone—create a ‘terrible formula for mental health.’

With unclear guidance and evolving science, parents face tough choices in a rapidly changing tech world. As screens become more common via AI, smart glasses, and virtual communities, the focus shifts to how children can use technology wisely and safely.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

VPN dangers highlighted as UK’s Online Safety Act comes into force

Britons are being urged to proceed with caution before turning to virtual private networks (VPNs) in response to the new age verification requirements set by the Online Safety Act.

The law, now in effect, aims to protect young users by restricting access to adult and sensitive content unless users verify their age.

Instead of offering anonymous access, some platforms now demand personal details such as full names, email addresses, and even bank information to confirm a user’s age.

Although the legislation targets adult websites, many people have reported being blocked from accessing less controversial content, including alcohol-related forums and parts of Wikipedia.

As a result, more users are considering VPNs to bypass these checks. However, cybersecurity experts warn that many VPNs can pose serious risks by exposing users to scams, data theft, and malware. Without proper research, users might install software that compromises their privacy rather than protecting it.

With Ofcom reporting that eight per cent of children aged 8 to 14 in the UK have accessed adult content online, the new rules are viewed as a necessary safeguard. Still, concerns remain about the balance between online safety and digital privacy for adult users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!