The end of the analogue era and the cognitive rewiring of new generations

Navigating a world beyond analogue

The digital transformation of daily life represents more than just a change in technological format. It signals a deep cultural and cognitive reorientation.

Rather than simply replacing analogue tools with digital alternatives, society has embraced an entirely new way of interacting with information, memory, time, and space.

For younger generations born into this reality, digital mediation is not an addition but the default mode of experiencing the world. A redefinition like this introduces not only speed and convenience but also cognitive compromises, cultural fragmentation, and a fading sense of patience and physical memory.

Generation Z as digital natives

Generation Z has grown up entirely within the digital realm. Unlike older cohorts who transitioned from analogue practices to digital habits, members of Generation Z were born into a world of touchscreen interfaces, search engines, and social media ecosystems.

As Generation Z enters the workforce, the gap between digital natives and older generations is becoming increasingly apparent. For them, technology has never been a tool to be learned; it has always been a natural extension of daily life.


The term ‘digital native’, first coined by Marc Prensky in 2001, refers precisely to those who have never known a world without the internet. Rather than adapting to new tools, they process information through a technology-first lens.

In contrast, digital immigrants (those born before the digital boom) have had to adjust their ways of thinking and interacting over time. While access to technology might be broadly equal across generations in developed countries, the way individuals engage with it differs significantly.

Instead of acquiring digital skills later in life, digital natives developed them alongside their cognitive and emotional identities. This fluency brings distinct advantages. Young people today navigate digital environments with speed, confidence, and visual intuition.

They can synthesise large volumes of information, switch contexts rapidly, and interact across multiple platforms with ease.

The hidden challenges of digital natives

However, the native digital orientation also introduces unique vulnerabilities. Information is rarely absorbed in depth, memory is outsourced to devices, and attention is fragmented by endless notifications and competing stimuli.

While older generations associate technology with productivity or leisure, Generation Z often experiences it as an integral part of their identity. This integration can obscure the boundary between thought and algorithm, between agency and suggestion.

Being a digital native is not just a matter of access or skill. It is about growing up with different expectations of knowledge, communication, and identity formation.

Memory and cognitive offloading: Access replacing retention

In the analogue past, remembering involved deliberate mental effort. People had to memorise phone numbers, use printed maps to navigate, or retrieve facts from memory rather than search engines.

The rise of smartphones and digital assistants has allowed individuals to delegate that mental labour to machines. Instead of internalising facts, people increasingly learn where and how to access them when needed, a practice known as cognitive offloading.


Although the shift can enhance decision-making and productivity by reducing overload, it also reshapes the way the brain handles memory. Unlike earlier generations, who often linked memories to physical actions or objects, younger people encounter information in fast-moving and transient digital forms.

Memory becomes decentralised and more reliant on digital continuity than on internal recall. Rather than cognitive decline, this trend marks a significant restructuring of mental habits.

Attention and time: From linear focus to fragmented awareness

The analogue world demanded patience. Sending a letter meant waiting for days, rewinding a VHS tape took time, and listening to an album meant playing its songs in sequence.

Digital media has collapsed these temporal structures. Communication is instant, entertainment is on demand, and every interface is designed to be constantly refreshed.

Instead of promoting sustained focus, digital environments often encourage continuous multitasking and quick shifts in attention. App designs, with their alerts, pop-ups, and endless scrolling, reinforce a habit of fragmented presence.

Studies have shown that multitasking not only reduces productivity but also undermines deeper understanding and reflection. Many younger users, raised in this environment, may find long periods of undivided attention unfamiliar or even uncomfortable.

The lost sense of the analogue

Analogue interactions involved more than sight and sound. Reading a printed book, handling vinyl records, or writing with a pen engaged the senses in ways that helped anchor memory and emotion. These physical rituals provided context and reinforced cognitive retention.


Digital experiences, by contrast, are streamlined and screen-bound. Tapping icons and swiping a finger across glass lack the tactile diversity of older tools. This sensory uniformity can lead to a form of experiential flattening, in which fewer physical cues are available to reinforce memory.

A digital photograph lacks the permanence of a printed one, and music streamed online does not carry the same mnemonic weight as a cherished cassette or CD once did.

From communal rituals to personal streams

In the analogue era, media consumption was more likely to be shared. Families gathered around television sets, music was enjoyed communally, and photos were stored in albums passed down across generations.

These rituals helped synchronise cultural memory and foster emotional continuity and a sense of collective belonging.

The digital age favours individualised streams and asynchronous experiences. Algorithms personalise every feed, users consume content alone, and communication takes place across fragmented timelines.

While young people have adapted with fluency, creating their own digital languages and communities, the collective rhythm of cultural experience is often lost.

People no longer share the same moment. They now experience parallel narratives shaped by personal profiles rather than shared social connections.

Digital fatigue and social withdrawal

However, as the digital age reaches a point of saturation, younger generations are beginning to reconsider their relationship with the online world.

While constant connectivity dominates modern life, many are now striving to reclaim physical spaces, face-to-face interactions, and slower forms of communication.

In urban centres, people often navigate large, impersonal environments where community ties are weak, and digital fatigue is now contributing to a fresh wave of social withdrawal and isolation.

Despite living in a world designed to be more connected than ever before, younger generations are increasingly aware that a screen-based life can amplify loneliness instead of resolving it.

But the withdrawal from digital life has not been without consequences.

Those who step away from online platforms sometimes find themselves excluded from mainstream social, political, or economic systems.

Others struggle to form stable offline relationships because digital interaction has long been the default. For both groups, the balance can feel like living on a razor’s edge.

Education and learning in a hybrid cognitive landscape

Education illustrates the analogue-to-digital shift with particular clarity. Students now rely heavily on digital sources and AI for notes, answers, and study aids.

The approach offers speed and flexibility, but it can also hinder the development of critical thinking and perseverance. Rather than engaging deeply with material, learners may skim or rely on summarised content, weakening their ability to reason through complex ideas.


Educators must now teach not only content but also digital self-awareness. Helping students understand how their tools shape their learning is just as important as the tools themselves.

A balanced approach that includes reading physical texts, taking handwritten notes, and scheduling offline study can help cultivate both digital fluency and analogue depth. This is not a nostalgic retreat, but a cognitive necessity.

Intergenerational perception and diverging mental norms

Older and younger generations often interpret each other through the lens of their respective cognitive habits. What seems like a distraction or dependency to older adults may be a different but functional way of thinking to younger people.

It is not a decline in ability, but an adaptation. Ultimately, each generation develops in response to the tools that shape its world.

Where analogue generations valued memorisation and sustained focus, digital natives tend to excel in adaptability, visual learning, and rapid information navigation.


Bridging the gap means fostering mutual understanding and encouraging the retention of analogue strengths within a digital framework. Teaching young people to manage their attention, question their sources, and reflect deeply on complex issues remains vital.

Preserving analogue values in a digital world

The end of the analogue era involves more than technical obsolescence. It marks the disappearance of practices that once encouraged mindfulness, slowness, and bodily engagement.

Yet abandoning analogue values entirely would impoverish our cognitive and cultural lives. Incorporating such habits into digital living can offer a powerful antidote to distraction.

Writing by hand, spending time with printed books, or setting digital boundaries should not be seen as resistance to progress. Instead, these habits help protect the qualities that sustain long-term thinking and emotional presence.

Societies must find ways to integrate these values into digital systems and not treat them as separate or inferior modes.

Continuity by blending analogue and digital

As we have already mentioned, younger generations are not less capable than those who came before; they are simply attuned to different tools.

The analogue era may be gone for good, but its qualities need not be lost. We can preserve its depth, slowness, and shared rituals within a digital (or even a post-digital) world, using them to shape more balanced minds and more reflective societies.

To achieve something like this, education, policy, and cultural norms should support integration. Rather than focus solely on technical innovation, attention must also turn to its cognitive costs and consequences.

Only by adopting a broader perspective on human development can we guarantee that future generations are not only connected but also highly aware, capable of critical thinking, and grounded in meaningful memory.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X challenges India’s expanded social media censorship in court

Tensions have escalated between Elon Musk’s social media platform, X, and the Indian government over extensive online content censorship measures.

Triggered by a seemingly harmless post describing a senior politician as ‘useless,’ the incident quickly spiralled into a significant legal confrontation.

X has accused Prime Minister Narendra Modi’s administration of overstepping constitutional bounds by empowering numerous government bodies to issue content-removal orders, significantly expanding the scope of India’s digital censorship.

At the heart of the dispute lies India’s tightening of social media content regulation since 2023, including the launch of Sahyog, a centralised portal through which officials issue content-removal orders directly to tech firms.

X refused to participate in Sahyog, labelling it a ‘censorship portal’, and subsequently filed a lawsuit in the Karnataka High Court earlier this year, contesting the legality of India’s directives and the portal itself, which it claims undermine free speech.

Indian authorities justify their intensified oversight by pointing to the need to control misinformation, safeguard national security, and prevent societal discord. They argue that the measures have broad support within the tech community. Indeed, major players like Google and Meta have reportedly complied without public protest, though both companies have declined to comment on their stance.

However, the court documents reveal that the scope of India’s censorship requests extends far beyond misinformation.

Authorities have reportedly targeted satirical cartoons depicting politicians unfavourably, criticism regarding government preparedness for natural disasters, and even media coverage of serious public incidents like a deadly stampede at a railway station.

While Musk and Prime Minister Modi maintain an outwardly amicable relationship, the conflict presents significant implications for X’s operations in India, one of its largest user bases.

Musk, a self-proclaimed free speech advocate, finds himself at a critical juncture, navigating between principles and the imperative to expand his business ventures within India’s substantial market.

Source: Reuters

Google AI Mode raises fears over control of news

Google’s AI Mode has quietly launched in the UK, reshaping how users access news by summarising information directly in search results.

By paraphrasing content gathered across the internet, the tool offers instant answers while reducing the need to visit original news sites.

Critics argue that the technology monopolises UK information by filtering what users see, based on algorithms rather than editorial judgement. Concerns have grown over transparency, fairness and the future of independent journalism.

Publishers are not compensated for content used by AI Mode, and most users rarely click through to the sources. Newsrooms fear pressure to adapt their output to align with Google’s preferences or risk being buried online.

While AI may streamline convenience, it lacks accountability. Regulated journalism must operate under legal frameworks, whereas AI faces no such scrutiny even when errors have real consequences.

TikTok adopts crowd‑sourced verification tool to combat misinformation

TikTok has rolled out Footnotes in the United States, its crowd‑sourced debunking initiative to supplement existing misinformation controls.

Vetted contributors will write and rate explanatory notes beneath videos flagged as misleading or ambiguous. If a note earns broad support, it becomes visible to all US users.

The system uses a ‘bridging‑based’ ranking framework to encourage agreement between users with differing viewpoints, making the process more robust and reducing partisan bias. Initially launched as a pilot, the platform has already enlisted nearly 80,000 eligible US users.
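TikTok has not published the exact formula behind Footnotes, but the general idea of ‘bridging-based’ ranking can be illustrated. The Python sketch below is a hypothetical simplification under stated assumptions: raters are pre-assigned to viewpoint clusters, and a note is surfaced only when it wins majority approval in every cluster that rated it, rather than a simple overall majority. The cluster labels and threshold are illustrative, not TikTok’s actual parameters.

```python
# Hypothetical sketch of bridging-based ranking: a note becomes visible only
# when raters from DIFFERENT viewpoint clusters agree it is helpful, so a
# note favoured by one partisan group alone cannot be published.
# Cluster labels and the 0.6 threshold are illustrative assumptions.

def bridging_score(ratings):
    """ratings: list of (viewpoint_cluster, helpful: bool) pairs.
    Returns the approval rate of the least-convinced cluster."""
    by_cluster = {}
    for cluster, helpful in ratings:
        by_cluster.setdefault(cluster, []).append(helpful)
    # Cross-viewpoint agreement requires ratings from at least two clusters.
    if len(by_cluster) < 2:
        return 0.0
    per_cluster = [sum(votes) / len(votes) for votes in by_cluster.values()]
    return min(per_cluster)  # limited by the least-supportive cluster

def is_published(ratings, threshold=0.6):
    return bridging_score(ratings) >= threshold
```

Under this rule, a note rated helpful by 80% of one cluster but only 20% of another scores 0.2 and stays hidden, while one with roughly two-thirds support in both clusters is published; taking the minimum across clusters is what rewards agreement between users with differing viewpoints.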

Footnotes complements TikTok’s integrity setup, including automated detection, human moderation, and partnerships with fact‑checking groups like AFP. Platform leaders note that effectiveness improves as contributors engage more across various topics.

Past research shows comparable crowd‑sourced systems often struggle to publish most submissions, with fewer than 10% of Notes appearing publicly on other platforms. Concerns remain over the system’s scalability and potential misuse.

AI sparks fears over future of dubbing

Voice actors across Europe are pushing back against the growing use of AI in dubbing, fearing it could replace human talent in film and television. Many believe dubbing is a creative profession beyond simple voice replication, requiring emotional nuance and cultural sensitivity.

In Germany, France, Italy and the UK, nearly half of viewers prefer dubbed content over subtitles, according to research by GWI. Yet studios are increasingly testing AI tools that replicate actors’ voices or generate synthetic speech, sparking concern across the dubbing industry.

French voice actor Boris Rehlinger, known for dubbing Hollywood stars, says he feels threatened even though AI has not replaced him. He is part of TouchePasMaVF, an initiative defending the value of human dubbing and calling for protection against AI replication.

Voice artists argue that digital voice cloning ignores the craftsmanship behind their performances. As legal frameworks around voice ownership lag behind the technology, many in the industry demand urgent safeguards.

Google states it has not received UK request to weaken encryption

Google has confirmed it has not received a request from the UK government to create a backdoor in its encrypted services. The clarification comes amid ongoing scrutiny of surveillance legislation and its implications for tech companies offering end-to-end encrypted services.

Reports indicate that the UK government may be reconsidering an earlier request for Apple to enable access to user data through a technical backdoor, which is a move that prompted strong opposition from the US government. In response to these developments, US Senator Ron Wyden has sought to clarify whether similar requests were made to other major technology companies.

While Google initially declined to respond to inquiries from Senator Wyden’s office, it has since confirmed that it has not received a technical capabilities notice—an official order under UK law that could require companies to enable access to encrypted data.

Senator Wyden, who serves on the Senate Intelligence Committee, addressed the matter in a letter to Director of National Intelligence Tulsi Gabbard. The letter urged the US intelligence community to assess the potential national security implications of the UK’s surveillance laws and any undisclosed requests to US companies.

Meta, which offers encrypted messaging through WhatsApp and Facebook Messenger, also stated in a 17 March communication to Wyden’s office that it had ‘not received an order to backdoor our encrypted services, like that reported about Apple.’

While companies operating in the UK may be restricted from disclosing certain surveillance orders under law, confirmations such as Google’s provide rare public insight into the current landscape of international encryption policy and cooperation.

Tech giants back Trump’s AI deregulation plan amid public concern over societal impacts

Donald Trump recently hosted an AI summit in Washington, titled ‘Winning the AI Race’, aimed at fostering a deregulated environment for AI innovation. Key figures from the tech industry, including Nvidia’s CEO Jensen Huang and Palantir’s CTO Shyam Sankar, attended the event.

Co-hosted by the Hill and Valley Forum and the Silicon Valley All-In Podcast, the summit was a platform for Trump to introduce his ‘AI Action Plan’, comprising three executive orders focused on deregulation. Trump’s objective is to dismantle regulatory restrictions he perceives as obstacles to innovation, aiming to re-establish the US as a global leader in AI exports.

The executive orders announced target the elimination of ‘ideological dogmas such as diversity, equity, and inclusion (DEI)’ in AI models developed by federally funded companies. Additionally, one order promotes exporting US-developed AI technologies internationally, while another seeks to lessen environmental restrictions and speed up approvals for energy-intensive data centres.

These measures are seen as reversing the Biden administration’s policies, which stressed the importance of safety and security in AI development. Technology giants Apple, Meta, Amazon, and Alphabet have shown significant support for Trump’s initiatives, contributing to his inauguration fund and engaging with him at his Mar-a-Lago estate. Leaders like OpenAI’s Sam Altman and Nvidia’s Jensen Huang have also pledged substantial investments in US AI infrastructure.

Despite this backing, over 100 groups, including labour, environmental, civil rights, and academic organisations, have voiced their opposition through a ‘People’s AI action plan’. These groups warn of the potential risks of unregulated AI, which they fear could undermine civil liberties, equality, and environmental safeguards.

They argue that public welfare should not be compromised for corporate gains, highlighting the dangers of allowing tech giants to dominate policy-making. That discourse illustrates the divide between industry aspirations and societal consequences.

The tech industry’s influence on AI legislation through lobbying is noteworthy, with a report from Issue One indicating that eight of the largest tech companies spent a collective $36 million on lobbying in 2025 alone. Meta led with $13.8 million, employing 86 lobbyists, while Nvidia and OpenAI saw significant increases in their expenditure compared to previous years. The substantial financial outlay reflects the industry’s vested interest in shaping regulatory frameworks to favour business interests, igniting a debate over the ethical responsibilities of unchecked AI progress.

As tech companies and pro-business entities laud Trump’s deregulation efforts, concerns persist over the societal impacts of such policies.

Trump pushes for ‘anti-woke’ AI in US government contracts

Tech firms aiming to sell AI systems to the US government will now need to prove their chatbots are free of ideological bias, following a new executive order signed by Donald Trump.

The measure, part of a broader plan to counter China’s influence in AI development, marks the first official attempt by the US to shape the political behaviour of AI services.

It places a new emphasis on ensuring AI reflects so-called ‘American values’ and avoids content tied to diversity, equity and inclusion (DEI) frameworks in publicly funded models.

The order, titled ‘Preventing Woke AI in the Federal Government’, does not outright ban AI that promotes DEI ideas, but requires companies to disclose if partisan perspectives are embedded.

Major providers like Google, Microsoft and Meta have yet to comment. Meanwhile, firms face pressure to comply or risk losing valuable public sector contracts and funding.

Critics argue the move forces tech companies into a political culture war and could undermine years of work addressing AI bias, harming fair and inclusive model design.

Civil rights groups warn the directive may sideline tools meant to support vulnerable groups, favouring models that ignore systemic issues like discrimination and inequality.

Policy analysts have compared the approach to China’s use of state power to shape AI behaviour, though Trump’s order stops short of requiring pre-approval or censorship.

Supporters, including influential Trump-aligned venture capitalists, say the order restores transparency. Marc Andreessen and David Sacks were reportedly involved in shaping the language.

The move follows backlash to an AI image tool released by Google, which depicted racially diverse figures when asked to generate the US Founding Fathers, triggering debate.

Developers claimed the outcome resulted from attempts to counter bias in training data, though critics labelled it ideological overreach embedded by design teams.

Under the directive, companies must disclose model guidelines and explain how neutrality is preserved during training. Intentional encoding of ideology is discouraged.

Former FTC technologist Neil Chilson described the order as light-touch: it does not ban political outputs, but only calls for transparency about how they are generated.

OpenAI said its objectivity measures align with the order, while Microsoft declined to comment. xAI praised Trump’s AI policy but did not mention specifics.

The firm, founded by Elon Musk, recently won a $200M defence contract shortly after its Grok chatbot drew criticism for generating antisemitic and pro-Hitler messages.

Trump’s broader AI orders seek to strengthen American leadership and reduce regulatory burdens to keep pace with China in the development of emerging technologies.

Some experts caution that ideological mandates could set a precedent for future governments to impose their political views on critical AI infrastructure.

Meta tells Australia AI needs real user data to work

Meta, the parent company of Facebook, Instagram, and WhatsApp, has urged the Australian government to harmonise privacy regulations with international standards, warning that stricter local laws could hamper AI development. The comments came in Meta’s submission to the Productivity Commission’s review on harnessing digital technology, published this week.

Australia is undergoing its most significant privacy reform in decades. The Privacy and Other Legislation Amendment Bill 2024, passed in November and given royal assent in December, introduces stricter rules around handling personal and sensitive data. The rules are expected to take effect throughout 2024 and 2025.

Meta maintains that generative AI systems depend on access to large, diverse datasets and cannot rely on synthetic data alone. In its submission, the company argued that publicly available information, like legislative texts, fails to reflect the cultural and conversational richness found on its platforms.

Meta said its platforms capture the ways Australians express themselves, making them essential to training models that can understand local culture, slang, and online behaviour. It added that restricting access to such data would make AI systems less meaningful and effective.

The company has faced growing scrutiny over its data practices. In 2024, it confirmed using Australian Facebook data to train AI models, although users in the EU have the option to opt out—an option not extended to Australian users.

Pushback from regulators in Europe forced Meta to delay its plans for AI training in the EU and UK, though it resumed these efforts in 2025.

Australia’s Office of the Australian Information Commissioner has issued guidance on AI development and commercial deployment, highlighting growing concerns about transparency and accountability. Meta argues that diverging national rules create conflicting obligations, which could reduce the efficiency of building safe and age-appropriate digital products.

Critics claim Meta is prioritising profit over privacy, and insist that any use of personal data for AI should be based on informed consent and clearly demonstrated benefits. The regulatory debate is intensifying at a time when Australia’s outdated privacy laws are being modernised to protect users in the AI age.

The Productivity Commission’s review will shape how the country balances innovation with safeguards. As a key market for Meta, Australia’s decisions could influence regulatory thinking in other jurisdictions confronting similar challenges.

Spotify under fire for AI-generated songs on memorial artist pages

Spotify is facing criticism after AI-generated songs were uploaded to the pages of deceased artists without consent from estates or rights holders.

The latest case involves country singer-songwriter Blaze Foley, who died in 1989. A track titled ‘Together’ was posted to his official Spotify page over the weekend. The song sounded vaguely like a slow country ballad and was paired with AI-generated cover art showing a man who bore no resemblance to Foley.

Craig McDonald, whose label manages Foley’s catalogue, confirmed the track had nothing to do with the artist and described it as inauthentic and harmful. ‘I can clearly tell you that this song is not Blaze, not anywhere near Blaze’s style, at all,’ McDonald told 404 Media. ‘It has the authenticity of an algorithm.’

He criticised Spotify for failing to prevent such uploads and said the company had a duty to stop AI-generated music from appearing under real artists’ names.

‘It’s kind of surprising that Spotify doesn’t have a security fix for this type of action,’ he said. ‘They could fix this problem if they had the will to do so.’ Spotify said it had flagged the track to distributor SoundOn and removed it for violating its deceptive content policy.

However, other similar uploads have already emerged. The same company, Syntax Error, was linked to another AI-generated song titled ‘Happened To You’, uploaded last week under the name of Grammy-winning artist Guy Clark, who died in 2016.

Both tracks have since been removed, but Spotify has not explained how Syntax Error was able to post them using the names and likenesses of late musicians. The controversy is the latest in a wave of AI music incidents slipping through streaming platforms’ content checks.

Earlier this year, an AI-generated band called The Velvet Sundown amassed over a million Spotify streams before disclosing that all their vocals and instrumentals were made by AI.

Another high-profile case involved a fake Drake and The Weeknd collaboration, ‘Heart on My Sleeve’, which gained viral traction before being taken down by Universal Music Group.

Rights groups and artists have repeatedly warned about AI-generated content misrepresenting performers and undermining creative authenticity. As AI tools become more accessible, streaming platforms face mounting pressure to improve detection and approval processes to prevent further misuse.
