Meme coins: Fast gains or crypto gambling?

Meme coins have exploded in the crypto market, attracting investors with promises of fast profits and viral hype. These digital tokens, such as Dogecoin, Pepe, Dogwifhat and, most recently, the Trump coin, are often inspired by internet memes and pop culture and rarely offer clear utility. Instead, their value depends mostly on social media buzz, influencer endorsements, and community enthusiasm. In 2025, meme coins remain a controversial yet dominant trend in crypto trading. 

Viral but vulnerable: the rise of meme coins 

Meme coins are typically created for humour, social engagement, or to ride viral internet trends, rather than to solve real-world problems. Despite this, they are widely known for their popularity and massive online appeal. Many investors are drawn to meme coins because of the potential for quick, large returns. 

For example, Trump-themed meme coins saw explosive growth in early 2024, with the MAGA meme coin (TRUMP) briefly surpassing a $500 million market cap, despite offering no real utility and being driven largely by political hype and social media buzz. 

Analysis reports indicate that in 2024, between 40,000 and 50,000 new meme tokens were launched daily, with numbers soaring to 100,000 during viral surges. Solana tops the list of blockchains for meme coin activity, generating 17,000 to 20,000 new tokens each day. 

Chainplay’s ‘State of Memecoin 2024’ report found that over half (55.24%) of the meme coins analysed were classified as ‘malicious’. 

A chaotic blend of internet culture, greed, and adrenaline, meme coins turn crypto investing into a thrilling game where hype rules and fortunes flip in seconds.

The risks of rug pulls and scams in meme coin projects 

Beneath the humour and viral appeal, meme coins often hide serious structural risks. Many are launched by developers with little to no accountability, and most operate with centralised liquidity pools controlled by a small number of wallets. This setup allows creators or early holders to pull liquidity or dump large token amounts without warning, leading to devastating price crashes—commonly referred to as ‘rug pulls.’ 

On-chain data regularly reveals that a handful of wallets control the vast majority of supply in newly launched meme tokens, making market manipulation easy and trust almost impossible. These coins are rarely audited, lack transparency, and often have no clear roadmap or long-term utility, which leaves retail investors highly exposed. 
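A wallet-concentration check of this kind is simple to approximate in code. The sketch below is a hypothetical illustration, not a real analysis: the holder balances are invented, whereas in practice they would come from a blockchain explorer or indexer.

```python
# Hypothetical sketch: estimate how concentrated a token's supply is.
# The balances below are invented for illustration; real figures would
# come from a blockchain explorer or on-chain indexer.

def top_holder_share(balances, top_n=10):
    """Return the fraction of total supply held by the top_n wallets."""
    total = sum(balances)
    top = sum(sorted(balances, reverse=True)[:top_n])
    return top / total

# Example: three large insider wallets plus a long tail of small holders.
holders = [40_000_000, 25_000_000, 15_000_000] + [100_000] * 200
share = top_holder_share(holders, top_n=3)
print(f"Top 3 wallets hold {share:.0%} of supply")
```

A handful of wallets holding well over half the supply is exactly the pattern that makes a sudden dump, or a rug pull, possible.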

The combination of hype-driven demand and opaque tokenomics makes meme coins a fertile ground for fraud and manipulation, further eroding public confidence in the broader crypto ecosystem. 


Gambling disguised as investing: The adrenaline rush of meme coins 

Meme coins tap into a mindset that resembles gambling more than traditional investing. The culture around them thrives on adrenaline-fuelled speculation, where every price spike feels like hitting a jackpot and every drop triggers a high-stakes rollercoaster of emotions. In this so-called ‘degen’ culture, traders chase quick wins fuelled by FOMO, hype, and the explosive reach of social media.

This thrill-seeking mentality turns meme coin trading into a game of chance. Investors often make impulsive decisions based on hype rather than fundamentals, hoping to catch a sudden pump before the inevitable crash. 

It is all about momentum. The volatile swings create an addictive cycle: the excitement of rapid gains pulls traders back in, despite the constant risk of losing everything.

While early insiders and large holders strategically time their moves to cash out big, most retail investors face losses, much like gamblers betting in a casino. The meme coin market, therefore, functions less like a stable investment arena and more like a high-risk gambling environment where luck and timing often outweigh knowledge and strategy. 


Is profit from meme coins possible? Yes, but…

While some investors have made substantial profits from meme coins, success requires expert knowledge, thorough research, and good timing. Analysing tokenomics, community growth, and on-chain data is essential before investing. Although meme coins can be entertaining, investing in them is a risky gamble. Luck remains a major factor, so they should never be considered safe or long-term investments.

Meme coins vs Bitcoin: A tale of two mindsets 

Many people assume that all cryptocurrencies share the same mindset, but the truth is quite different. Interestingly, cryptocurrencies like Bitcoin and meme coins are based on contrasting philosophies and psychological drivers.

Bitcoin embodies a philosophy of trust through transparency, decentralisation, and long-term resilience. It appeals to those seeking stability, security, and a store of value rooted in technology and community consensus—a digital gold that invites patience and conviction. In essence, Bitcoin calls for building and holding with reason and foresight. 

Meme coins, on the other hand, thrive on the psychology of instant gratification, social identity, and collective enthusiasm. They tap into our desire for excitement, quick wins, and belonging to a viral movement. Their value is less about utility and more about shared emotion: the hope, the hype, and the adrenaline rush of catching the next big wave. Meme coins beckon with the thrill of the moment, the gamble, and the social spectacle. This makes them a reflection of the speculative and impulsive side of human nature, where the line between investing and gambling blurs.

Understanding these psychological underpinnings helps explain why the two coexist in the crypto world, yet appeal to vastly different types of investors and mindsets. 


How meme coins affect the reputation of the entire crypto market

The rise and fall of meme coins do not just impact individual traders—they also cast a long shadow over the credibility of the entire crypto industry. 

High-profile scams, rug pulls, and pump-and-dump schemes associated with meme tokens erode public confidence and validate sceptics’ concerns. Many retail traders enter the meme coin space with high hopes and are quickly disillusioned by manipulation and sudden losses. 

This leads to a sense of betrayal, triggering risk aversion and a generalised mistrust toward all crypto assets, even those with strong fundamentals like Bitcoin or Ethereum. Such disillusionment does not stay contained. It spills over into mainstream sentiment, deterring new investors and slowing institutional adoption. 

As more people associate crypto with gambling and scams rather than innovation and decentralisation, the market’s growth potential suffers. In this way, meme coins—though intended as jokes—could have serious consequences for the future of blockchain credibility. 


Trading thrills or ticking time bomb?

Meme coins may offer flashes of fortune, but their deeper role in the crypto ecosystem raises a provocative question: are they reshaping finance or just distorting it? In a market where jokes move millions and speculation overrides substance, the real gamble may not just be financial—it could be philosophical. 

Are we embracing innovation, or playing a dangerous game with digital dice? In the end, meme coins are not just a bet on price—they are a reflection of what kind of future we want to build in crypto. Is it sustainable value, or just viral chaos? The roulette wheel is still spinning. 

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cognitive offloading and the future of the mind in the AI age

AI reshapes work and learning

The rapid advancement of AI is bringing to light a range of emerging phenomena within contemporary human societies.

The integration of AI-driven tools into a broad spectrum of professional tasks has proven beneficial in many respects, particularly in terms of alleviating the cognitive and physical burdens traditionally placed on human labour.

By automating routine processes and enhancing decision-making capabilities, AI has the potential to significantly improve efficiency and productivity across various sectors.

In response to these accelerating technological changes, a growing number of nations are prioritising the integration of AI technologies into their education systems to ensure students are prepared for future societal and workforce transformations.

China advances AI education for youth

China has released two landmark policy documents aimed at integrating AI education systematically into the national curriculum for primary and secondary schools.

The initiative not only reflects the country’s long-term strategic vision for educational transformation but also seeks to position China at the forefront of global AI literacy and talent development.


The two guidelines, formally titled the Guidelines for AI General Education in Primary and Secondary Schools and the Guidelines for the Use of Generative AI in Primary and Secondary Schools, represent a scientific and systemic approach to cultivating AI competencies among school-aged children.

Their release marks a milestone in the development of a tiered, progressive AI education system, with carefully delineated age-appropriate objectives and ethical safeguards for both students and educators.

The USA expands AI learning in schools

In April, the US government outlined a structured national policy to integrate AI literacy into every stage of the education system.

By creating a dedicated federal task force, the administration intends to coordinate efforts across departments to promote early and equitable access to AI education.

Instead of isolating AI instruction within specialised fields, the initiative seeks to embed AI concepts across all learning pathways—from primary education to lifelong learning.

The plan includes the creation of a nationwide AI challenge to inspire innovation among students and educators, showcasing how AI can address real-world problems.

The policy also prioritises training teachers to understand and use AI tools, instead of relying solely on traditional teaching methods. It supports professional development so educators can incorporate AI into their lessons and reduce administrative burdens.

The strategy encourages public-private partnerships, using industry expertise and existing federal resources to make AI teaching materials widely accessible.

European Commission supports safe AI use

As AI becomes more common in classrooms around the globe, educators must understand not only how to use it effectively but also how to apply it ethically.

Rather than introducing AI tools without guidance or reflection, the European Commission has provided ethical guidelines to help teachers use AI and data responsibly in education.


Published in 2022 and developed with input from educators and AI experts, the EU guidelines are intended primarily for primary and secondary teachers who have little or no prior experience with AI.

Instead of focusing on technical complexity, the guidelines aim to raise awareness about how AI can support teaching and learning, highlight the risks involved, and promote ethical decision-making.

The guidelines explain how AI can be used in schools, encourage safe and informed use by both teachers and students, and help educators consider the ethical foundations of any digital tools they adopt.

Rather than relying on unexamined technology, they support thoughtful implementation by offering practical questions and advice for adapting AI to various educational goals.

AI tools may undermine human thinking

However, technological augmentation is not without drawbacks. Concerns have been raised regarding the potential for job displacement, increased dependency on digital systems, and the gradual erosion of certain human skills.

As such, while AI offers promising opportunities for enhancing the modern workplace, it simultaneously introduces complex challenges that must be critically examined and responsibly addressed.

One significant challenge that must be addressed in the context of increasing reliance on AI is the phenomenon known as cognitive offloading. But what exactly does this term entail?

What happens when we offload thinking?

Cognitive offloading refers to the practice of using physical actions or external tools to modify the information processing demands of a task, with the aim of reducing the cognitive load on an individual.

In essence, it involves transferring certain mental functions—such as memory, calculation, or decision-making—to outside resources like digital devices, written notes, or structured frameworks.


While this strategy can enhance efficiency and performance, it also raises concerns about long-term cognitive development, dependency on technological aids, and the potential degradation of innate mental capacities.

How AI may be weakening critical thinking

A study led by Dr Michael Gerlich, Head of the Centre for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School, and published in the journal Societies raises serious concerns about the cognitive consequences of AI augmentation in various aspects of life.

The study suggests that frequent use of AI tools may be weakening individuals’ capacity for critical thinking, a skill considered fundamental to independent reasoning, problem-solving, and informed decision-making.

More specifically, Dr Gerlich adopted a mixed-methods approach, combining quantitative survey data from 666 participants with qualitative interviews involving 50 individuals.

Participants were drawn from diverse age groups and educational backgrounds and were assessed on their frequency of AI tool use, their tendency to offload cognitive tasks, and their critical thinking performance.

The study employed both self-reported and performance-based measures of critical thinking, alongside statistical analyses and machine learning models, such as random forest regression, to identify key factors influencing cognitive performance.

Younger users, who rely more on AI, think less critically

The findings revealed a strong negative correlation between frequent AI use and critical thinking abilities. Individuals who reported heavy reliance on AI tools—whether for quick answers, summarised explanations, or algorithmic recommendations—scored lower on assessments of critical thinking.

The effect was particularly pronounced among younger users aged 17 to 25, who reported the highest levels of cognitive offloading and showed the weakest performance in critical thinking tasks.

In contrast, older participants (aged 46 and above) demonstrated stronger critical thinking skills and were less inclined to delegate mental effort to AI.

Higher education strengthens critical thinking

The data also indicated that educational attainment served as a protective factor: those with higher education levels consistently exhibited more robust critical thinking abilities, regardless of their AI usage levels.

These findings suggest that formal education may equip individuals with better tools for critically engaging with digital information rather than uncritically accepting AI-generated responses.

While the study does not establish direct causation, the strength of the correlations and the consistency across quantitative and qualitative data suggest that AI usage may be contributing to a gradual decline in cognitive independence.

However, in his study, Gerlich also notes the possibility of reverse causality—individuals with weaker critical thinking skills may be more inclined to rely on AI tools in the first place.

Offloading also reduces information retention

While cognitive offloading can enhance immediate task performance, it often comes at the cost of reduced long-term memory retention, as other studies show.

The trade-off has been most prominently illustrated in experimental tasks such as the Pattern Copy Task, where participants tasked with reproducing a pattern typically choose to repeatedly refer to the original rather than commit it to memory.

Even when such behaviours introduce additional time or effort (e.g., physically moving between stations), the majority of participants opt to offload, suggesting a strong preference for minimising cognitive strain.

These findings underscore the human tendency to prioritise efficiency over internalisation, especially under conditions of high cognitive demand.

The tendency to offload raises crucial questions about the cognitive and educational consequences of extended reliance on external aids. On the one hand, offloading can free up mental resources, allowing individuals to focus on higher-order problem-solving or multitasking.

On the other hand, it may foster a kind of cognitive dependency, weakening internal memory traces and diminishing opportunities for deep engagement with information.

Viewed this way, cognitive offloading is not a failure of memory or attention but a reconfiguration of cognitive architecture, a process that may be adaptive rather than detrimental.

However, this perspective remains controversial, especially in light of findings that frequent offloading can impair retention, transfer of learning, and critical thinking, as Gerlich’s study argues.

If students, for example, continually rely on digital devices to recall facts or solve problems, they may fail to develop the robust mental models necessary for flexible reasoning and conceptual understanding.

The mind may extend beyond the brain

The tension has also sparked debate among cognitive scientists and philosophers, particularly in light of the extended mind hypothesis.

Contrary to the traditional view that cognition is confined to the brain, the extended mind theory argues that cognitive processes often rely on, and are distributed across, tools, environments, and social structures.


As digital technologies become increasingly embedded in daily life, this hypothesis raises profound questions about human identity, cognition, and agency.

At the core of the extended mind thesis lies a deceptively simple question: Where does the mind stop, and the rest of the world begin?

Drawing an analogy to prosthetics (external objects that functionally become part of the body), philosophers Andy Clark and David Chalmers argue that cognitive tools such as notebooks, smartphones, and sketchpads can become integrated components of our mental system.

These tools do not merely support cognition; they constitute it when used in a seamless, functionally integrated manner. This conceptual shift has redefined thinking not as a brain-bound process but as a dynamic interaction between mind, body, and world.

Balancing AI and human intelligence

In conclusion, cognitive offloading represents a powerful mechanism of modern cognition, one that allows individuals to adapt to complex environments by distributing mental load.

However, its long-term effects on memory, learning, and problem-solving remain a subject of active investigation. Rather than treating offloading as inherently beneficial or harmful, future research and practice should seek to balance its use, leveraging its strengths while mitigating its costs.


Ultimately, we, as educators, policymakers, and technologists, must shape the future of learning and work while confronting a central tension: how to harness the benefits of AI without compromising the very faculties that define human intelligence: critical thought, memory, and independent judgement.


The rise of AI in Hollywood, gaming, and music

It feels like just yesterday that the internet was buzzing over the first renditions of OpenAI’s DALL·E tool, with millions competing to craft the funniest, weirdest prompts and sharing the results across social media. The sentiment was clear: the public was fascinated by the creative potential of this new technology.

But beneath the laughter and viral memes was a quieter, more uneasy question: what happens when AI not only generates quirky artwork, but begins to reshape our daily lives, both online and off? As it turns out, that process was already underway behind the scenes, and we were none the wiser.

AI in action: How the entertainment industry is using it today

Three years later, we have reached a point where AI’s influence seems to have passed the point of no return. The entertainment industry was among the first to embrace this technology, and starting with the 2025 Academy Awards, films that incorporate AI are now eligible for Oscar nominations.

That decision has been met with mixed reactions, to put it lightly. While some have praised the industry’s eagerness to explore new technological frontiers, others have claimed that AI greatly diminishes the human contribution to the art of filmmaking and therefore takes away the essence of the seventh art form.

The first wave of AI-enhanced storytelling

One recent example is the film The Brutalist, in which AI was used to refine Adrien Brody’s Hungarian dialogue to sound more authentic. The move sparked both technical admiration and creative scepticism.

With AI now embedded in everything from voiceovers to entire digital actors, we are only beginning to confront what it truly means when creativity is no longer exclusively human.

Adrien Brody’s Hungarian dialogue in ‘The Brutalist’ was refined with generative AI to sound more authentic. Screenshot / YouTube / Oscars

Setting the stage: AI in the spotlight

The first major big-screen resurrection occurred in 1994’s The Crow, where Brandon Lee’s sudden passing mid-production forced the studio to rely on body doubles, digital effects, and existing footage to complete his scenes. However, it was not until 2016 that audiences witnessed the first fully digital revival.

In Rogue One: A Star Wars Story, Peter Cushing’s character was brought back to life using a combination of CGI, motion capture, and a facial stand-in. Although primarily reliant on traditional VFX, the project paved the way for future use of deepfakes and AI-assisted performance recreation across movies, TV shows, and video games.

Afterward, some speculated that studios tied to Peter Cushing’s legacy, such as Tyburn Film Productions, could pursue legal action against Disney for reviving his likeness without direct approval. While no lawsuit was filed, questions were raised about who owns a performer’s digital identity after death.

The digital Jedi: How AI helped recreate Luke Skywalker

As fate would have it, AI’s grand debut took place in a galaxy far, far away, with the surprise appearance of Luke Skywalker in the Season 2 finale of The Mandalorian (spoiler alert). The moment thrilled fans and marked a turning point for the franchise, but it was more than just fan service.

Here’s the twist: Mark Hamill did not record any new voice lines. Instead, actor Max Lloyd-Jones performed the physical role, while Hamill’s de-aged voice was recreated with the help of Respeecher, a Ukrainian company specialising in AI-driven speech synthesis.

Impressed by their work, Disney turned to Respeecher once again, this time to recreate James Earl Jones’s iconic Darth Vader voice for the Obi-Wan Kenobi miniseries. Using archival recordings that Jones signed over for AI use, the system synthesised new dialogue that perfectly matched the intonation and timbre of his original trilogy performances.

Screenshot / YouTube / Star Wars

AI in moviemaking: Preserving legacy or crossing a line?

The use of AI to preserve and extend the voices of legendary actors has been met with a mix of admiration and unease. While many have praised the seamless execution and respect shown toward the legacy of both Hamill and Jones, others have raised concerns about consent, creative authenticity, and the long-term implications of allowing AI to perform in place of humans.

In both cases, the actors were directly involved or gave explicit approval, but these high-profile examples may be setting a precedent for a future where that level of control is not guaranteed.

A notable case that drew backlash was the planned use of a fully CGI-generated James Dean in the unreleased film Finding Jack, decades after his death. Critics and fellow actors have voiced strong opposition, arguing that bringing back a performer without their consent reduces them to a brand or asset, rather than honouring them as an artist.

AI in Hollywood: Actors made redundant?

What further heightened concerns among working actors was the launch of Promise, a new Hollywood studio built entirely around generative AI. Backed by wealthy investors, Promise is betting big on Muse, a GenAI tool designed to produce high-quality films and TV series at a fraction of the cost and time required for traditional Hollywood productions.

Filmmaking is a business, after all, and with production budgets ballooning year after year, AI-powered entertainment sounds like a dream come true for profit-driven studios.

Meta’s recent collaboration with Blumhouse Productions on Movie Gen only adds fuel to the fire, signalling that major players are eager to explore a future where storytelling may be driven as much by algorithms as by authentic artistry.

AI in gaming: Automation or artistic collapse?

Speaking of entertainment businesses, we cannot ignore the world’s most popular entertainment medium: gaming. While the pandemic triggered a massive boom in game development and player engagement, the momentum was short-lived.

As profits began to slump in the years that followed, the industry was hit by a wave of layoffs, prompting widespread internal restructuring and forcing publishers to rethink their business models entirely. In hopes of cost-cutting, AAA companies had their eye on AI as their one saving grace.

Nvidia’s development of AI chips, along with Ubisoft’s and EA’s investments in AI and machine learning, has sent clear signals to the industry: automation is no longer just a backend tool, it is a front-facing strategy.

With AI-assisted NPC behaviour and AI voice acting, game development is shifting toward faster, cheaper, and potentially less human-driven production. In response, game developers have grown concerned about their future in the industry, and actors are less inclined to sign away their rights for future projects.

AI voice acting in video games

In an attempt to compete with wealthier studios, even indie developers have turned to GenAI to replicate the voices of celebrity voice actors. Tools like ElevenLabs and Altered Studio offer a seemingly straightforward way to produce high-quality voice work, but it is not that simple.

Copyright laws and concerns over authenticity remain two of the strongest barriers to the widespread adoption of AI-generated voices, especially as many consumers still view the technology as a crutch rather than a creative tool for game developers.

The legal landscape around AI-generated voices remains murky. In many places, the rights to a person’s voice, or its synthetic clone, are poorly defined, creating loopholes developers can exploit.

AI voice cloning challenges legal boundaries in gaming

The legal ambiguity has fuelled a backlash from voice actors, who argue that their performances are being mimicked without consent or pay. SAG-AFTRA and others began pushing for tighter legal protections in 2023.

A notable flashpoint came in 2025, when Epic Games faced criticism for using an AI-generated Darth Vader voice in Fortnite. SAG-AFTRA filed a formal complaint, citing licensing concerns and a lack of actor involvement.

Not all uses have been controversial. CD Projekt Red recreated the voice of the late Miłogost Reczek in Cyberpunk 2077: Phantom Liberty, with his family’s blessing, thus setting a respectful precedent for the ethical use of AI.

How AI is changing music production and artist identity

AI is rapidly reshaping music production, with a recent survey showing that nearly 25% of producers are already integrating AI tools into their creative workflows. This shift reflects a growing trend in how technology is influencing composition, mixing, and even vocal performance.

Artists like Imogen Heap are embracing the change with projects like Mogen, an AI version of herself that can create music and interact with fans, blurring the line between human creativity and digital innovation.

Major labels are also experimenting: Universal Music has recently used AI to reimagine Brenda Lee’s 1958 classic in Spanish, preserving the spirit of the original while expanding its cultural reach.

AI and the future of entertainment

As AI becomes more embedded in entertainment, the line between innovation and exploitation grows thinner. What once felt like science fiction is now reshaping the way stories are told, and who gets to tell them.

Whether AI becomes a tool for creative expansion or a threat to human artistry will depend on how the industry and audiences choose to engage with it in the years ahead. As in any business, consumers vote with their wallets, and only time will tell whether AI and authenticity can truly go hand-in-hand.


The rise of tech giants in healthcare: How AI is reshaping life sciences

Silicon Valley targets health

The intersection of technology and healthcare is rapidly evolving, fuelled by advancements in AI and driven by major tech companies that are expanding their reach into the life sciences sector.

Once primarily known for consumer electronics or search engines, companies like Google, Amazon, Microsoft, Apple, and IBM are now playing an increasingly central role in transforming the medical field.

These companies, often referred to as ‘Big Tech’, are pushing the boundaries of what was once considered science fiction, using AI to innovate across multiple aspects of healthcare, including diagnostics, treatment, drug development, clinical trials, and patient care.


AI becomes doctors’ new tool

At the core of this revolution is AI. Over the past decade, AI has evolved from a theoretical tool to a practical and transformative force within healthcare.

Companies are developing advanced machine learning algorithms, cognitive computing models, and AI-powered systems capable of matching—and sometimes surpassing—human capabilities in diagnosing and treating diseases.

AI is also reshaping many aspects of healthcare, from early disease detection to personalised treatments and even drug discovery. This shift is creating a future where AI plays a significant role in diagnosing diseases, developing treatment plans, and improving patient outcomes at scale.

One of the most significant contributions of AI is in diagnostics. Google Health and its subsidiary DeepMind are prime examples of how AI can be used to outperform human experts in certain medical tasks.

For instance, DeepMind’s AI tools have demonstrated the ability to diagnose conditions like breast cancer and lung disease with remarkable accuracy, surpassing the abilities of human radiologists in some cases.


Similarly, Philips has filed patents for AI systems capable of detecting neurodegenerative diseases and tracking disease progression using heart activity and motion sensors.

From diagnosis to documentation

These breakthroughs represent only a small part of how AI is revolutionising diagnostics by improving accuracy, reducing time to diagnosis, and potentially saving lives.

In addition to AI’s diagnostic capabilities, its impact extends to medical documentation, an often-overlooked area that affects clinician efficiency.

Traditionally, doctors spend a significant amount of time on paperwork, reducing the time they can spend with patients.

However, AI companies like Augmedix, DeepScribe, and Nabla are addressing this problem by offering solutions that generate clinical notes directly from doctor-patient conversations.


These platforms integrate with electronic health record (EHR) systems and automate the note-taking process, which reduces administrative workload and frees up clinicians to focus on patient care.

Augmedix, for example, claims to save up to an hour per day for clinicians, while DeepScribe’s AI technology is reportedly more accurate than even GPT-4 for clinical documentation.

Nabla takes this further by offering AI-driven chatbots and decision support tools that enhance clinical workflows and reduce physician burnout.

Portable ultrasounds powered by AI

AI is also transforming medical imaging, a field traditionally dependent on expensive, bulky equipment that requires specialised training.

Innovators like Butterfly Network are developing portable, AI-powered ultrasound devices that can provide diagnostic capabilities at a fraction of the cost of traditional equipment. These devices offer greater accessibility, particularly in regions with limited access to medical imaging technology.

The ability to perform ultrasounds and MRIs in remote areas, using portable devices powered by AI, is democratising healthcare and enabling better diagnostic capabilities in underserved regions.

An advanced drug discovery

In the realm of drug discovery and treatment personalisation, AI is making significant strides. IBM, with its Watson platform, is at the forefront of using AI to personalise treatment plans by analysing vast amounts of patient data, including medical histories, genetic information, and lifestyle factors.

IBM Watson has been particularly instrumental in the field of oncology, where it assists physicians by recommending tailored cancer treatment protocols while helping to contain treatment costs.

A capability like this is made possible by the vast amounts of medical data Watson processes to identify the best treatment options for individual patients, ensuring that therapies are more effective by considering each patient’s unique characteristics.

Smart automation in healthcare

Furthermore, AI is streamlining administrative tasks within healthcare systems, which often burden healthcare providers with repetitive, time-consuming tasks like appointment scheduling, records management, and insurance verification.

By automating these tasks, AI allows healthcare providers to focus more on delivering high-quality care to patients.

Amazon Web Services (AWS), for example, is leveraging its cloud platform to develop machine learning tools that assist healthcare providers in making more effective clinical decisions while improving operational efficiency.

This includes using AI to enhance clinical decision-making, predict patient outcomes, and manage the growing volume of patient data that healthcare systems must process.

Startups and giants drive the healthcare race

Alongside the tech giants, AI-driven startups are also playing a pivotal role in healthcare innovation. Tempus, for example, is integrating genomic sequencing with AI to provide physicians with actionable insights that improve patient outcomes, particularly in cancer treatment.

The fusion of data from multiple sources is enhancing the precision and effectiveness of medical decisions. Zebra Medical Vision, another AI-driven company, is using AI to analyse medical imaging data and detect a wide range of conditions, from liver disease to breast cancer.

Zebra’s AI algorithms are designed to identify conditions often before symptoms even appear, which greatly improves the chances of successful treatment through early detection.

Tech giants are deeply embedded in the healthcare ecosystem, using their advanced capabilities in cloud computing, AI, and data analytics to reshape the industry.


Microsoft, for example, has made significant strides in AI for accessibility, focusing on creating healthcare solutions that empower individuals with disabilities. Their work is helping to make healthcare more inclusive and accessible for a broader population.

Amazon’s AWS cloud platform is another example of how Big Tech is leveraging its infrastructure to develop machine learning tools that support healthcare providers in delivering more effective care.

M&A meets medicine

In addition to developing their own AI tools, these tech giants have made several high-profile acquisitions to accelerate their healthcare strategies.

Google’s acquisition of Fitbit, Amazon’s purchase of PillPack and One Medical, and Microsoft’s $19.7 billion acquisition of Nuance are all clear examples of how Big Tech is seeking to integrate AI into every aspect of the healthcare value chain, from drug discovery to clinical delivery.

These acquisitions and partnerships also enable tech giants to tap into new areas of the healthcare market and provide more comprehensive, end-to-end solutions to healthcare providers and patients alike.

Smart devices empower health

Consumer health technologies have also surged in popularity, thanks to the broader trend of digital health and wellness tools. Fitness trackers, smartwatches, and mobile health apps allow users to monitor everything from heart rates to sleep quality.

Devices like the Apple Watch and Google’s Fitbit collect health data continuously, providing users with personalised insights into their well-being.


Instead of being isolated within individual devices, the data is increasingly being integrated into broader healthcare systems, enabling doctors and other healthcare providers to have a more complete view of a patient’s health.

This integration has also supported the growth of telehealth services, with millions of people now opting for virtual consultations powered by Big Tech infrastructure and AI-powered triage tools.

Chinese hospitals embrace generative AI

The rise of generative AI is also transforming healthcare, particularly in countries like China, where technology is advancing rapidly. Once considered a distant ambition, the use of generative AI in healthcare is now being implemented at scale.

The technology is being used to manage massive drug libraries, assist with complex diagnoses, and replicate expert reasoning processes, which helps doctors make more informed decisions.

At Beijing Hospital of Traditional Chinese Medicine, Ant Group’s medical model has impressed staff by offering diagnostic suggestions and replicating expert reasoning, streamlining consultations without replacing human doctors.

Our choice in a tech-driven world

As AI continues to evolve, tech giants are likely to continue disrupting the healthcare industry while also collaborating with traditional healthcare providers.

While some traditional life sciences companies may feel threatened by the rise of Big Tech in healthcare, those that embrace AI and form partnerships with tech companies will likely be better positioned for success.

The convergence of AI and healthcare is already reshaping the future of medicine, and traditional healthcare players must adapt or risk being left behind.


Despite the tremendous momentum, there are challenges that need to be addressed. Data privacy, regulatory concerns, and the growing dominance of Big Tech in healthcare remain significant hurdles.

If these challenges are addressed responsibly, however, the integration of AI into healthcare could modernise care delivery on a global scale.

Rather than replacing doctors, the goal is to empower them with better tools, insights, and outcomes. The future of healthcare is one where technology and human expertise work in tandem, enhancing the patient experience and improving overall health outcomes.

As human beings, we must understand that the integration of technology across multiple sectors is a double-edged sword. It can either benefit us and help build better future societies, or mark the beginning of our downfall— but in the end, the choice will always be ours.


Bitcoin’s political puppeteers: From code to clout

Bitcoin was once seen as the cornerstone of a financial utopia — immune to political control, free from traditional banking systems, and governed solely by blockchain protocols. For a while, that dream felt real — and we lived it.

Today, things have changed. The whole crypto market has become increasingly sensitive to political influence, the actions of crypto whales, and rising global tensions.

While financial markets are expected to respond to global developments, Bitcoin’s price volatility has started to reflect something more concerning. Instead of being driven primarily by innovation or organic adoption, BTC price movements are increasingly shaped by media exposure and the strategic trades by influential figures.

In this shifting ecosystem, manipulation and concentrated influence are gradually undermining the core ideals of decentralisation and financial autonomy. Is this really the revolution we were promised? 

The Trump family’s growing grip on the crypto market

Donald Trump has not always been a crypto fan. Once critical of Bitcoin, he is now positioning himself as a pro-crypto leader. It is a shift driven by opportunity — not just political, but financial. Trump understands that supporting digital assets could help the USA become a global crypto hub. But it also aligns perfectly with his reputation as a businessman first, politician second. 

The issue lies in the outsized influence his words now have in the crypto space. A single post on social media like X or Truth can send Bitcoin’s price up or down. Whether he is praising crypto or denying personal gain, the market reacts instantly. 

His sons, Donald Trump Jr. and Eric Trump, are also active — often promoting the narrative that banks are obsolete and crypto is the future. They frequently make suggestive remarks about market trends. At times, they even imply where investors should put their money — all while staying within legal limits. Still, this pattern subtly steers market sentiment, raising concerns about coordinated influence and the deliberate shaping of market trends.

The launch of politically themed meme coins like $TRUMP and $MELANIA added fuel to the fire. These coins sparked massive rallies — and equally dramatic crashes. In fact, Bitcoin’s all-time high was followed by a sharp fall, partially triggered by the hype and eventual dump around these tokens.

Investigations now suggest insider activity. One wallet made $39 million in just 12 hours after buying $MELANIA before it was even announced. Meanwhile, $TRUMP coin insiders moved $4.6 million in USDC right before the major token unlock.

While technically legal, these actions raise serious ethical concerns. Moreover, 80% of the $TRUMP token’s supply is controlled by insiders — including Donald Trump himself. This points to a clear pattern of influence, where strategic actions are being used to shape market movements and drive profits for a select few.

What we are seeing is the unprecedented impact of a single family. The combination of political clout and financial ambition is reshaping crypto sentiment, and Bitcoin is reflecting the shift as well. It is no longer subtle — and it is certainly troubling. Crypto is supposed to be free from central influence — yet right now, it bends under the weight of a single name.

Whales and the Michael Saylor effect 

Beyond politics, crypto whales are playing their part in manipulating Bitcoin’s movements. They can cause major price swings by buying or selling in bulk. 

One of the most influential is Michael Saylor, co-founder of Strategy. His company holds approximately 555,450 BTC and is still buying. Every time he announces a new purchase, Bitcoin prices spike. Traders monitor his every move — his tweets are treated like trading signals. 

But Saylor has bigger plans. He once said he could become a Bitcoin bank — a statement that sparked backlash. What is particularly striking is that a businessman who has supported Bitcoin’s decentralised nature from the beginning is now acting in ways that appear to contradict it. Bitcoin was designed to avoid central control — not to be dominated by one player, no matter how bullish. When too much BTC ends up concentrated in one place, the autonomous promise begins to crack. 

Market trust is shifting from code to individuals — and that is risky.

Global tensions as a Bitcoin barometer

Bitcoin does not just respond to tweets anymore. Global tensions have made it a geopolitical asset — a barometer of financial anxiety. 

Recent US tariffs, particularly on Chinese mining equipment, have raised mining costs. Tariffs also disrupted the supply chain for mining rigs, slowing down expansion and affecting hash rates.

At the same time, when the US exempted tech products like iPhones and laptops from tariffs, Bitcoin surged — reaching $86,000. It shows how trade policy and tech pressure are now directly linked to Bitcoin price action. 

Yet, there always seems to be a push-and-pull dynamic at play — not necessarily coordinated, but clearly driven by short-term momentum and opportunistic interests.

Herein lies the irony — Bitcoin was built to be apolitical. Yet today, it is tightly tied to global politics. Its price now swings in response to elections, sanctions, and international conflicts — the very forces it was meant to bypass. What was once a decentralised alternative to traditional finance is becoming a mirror of the same systems it sought to disrupt.

Bitcoin: From decentralised dream to politically driven reality

Bitcoin is no longer moved by natural market fundamentals alone. It dances to the tune of political tweets, whale decisions, and global conflicts. A decentralised dream now faces a centralised reality.

It all started when governments and financial institutions began taking an active interest in Bitcoin and the broader cryptocurrency market. While mainstream adoption was essential for legitimising digital assets, that level of attention came with strings attached — most notably, external influence.

What was once an alternative movement powered by decentralised ideals has gradually attracted the gaze of political leaders, regulators, and corporate giants. It is a double-edged sword: the promise of legitimacy, tempered by the risk of losing the system’s independence.

In this environment, the absence of central control and the self-governing nature of the system are becoming increasingly symbolic. The market reacts not just to algorithms or adoption metrics, but also to the opinions and actions of a powerful few — raising concerns about market manipulation, unequal access, and the long-term health of crypto’s founding vision. Is that really a non-centralised structure?

Crypto was meant to free us from financial gatekeepers. But if Bitcoin can be shaken by one man’s post on a social network, we must ask: can it still be considered free?


Technological inventions blurring the line between reality and fiction

The rapid progress of AI over the past few years has unsettled people worldwide, reaching a point where it is extremely difficult to say with certainty whether a given piece of content was created by AI or not.

We are confronted with this phenomenon through photos, video and audio recordings that can easily confuse us and force us to question our perception of reality.

Digital twins are being used by scammers in the crypto space to impersonate influencers and execute fraudulent schemes.

And while the public often focuses on deepfakes, at the same time we are witnessing inventions and patents emerging around the world that deserve admiration, but also spark important reflection: are we nearing, or have we already crossed, the ethical red line?

For these and many other reasons, in a world where the visual and functional differences between science fiction and reality have almost disappeared, the latest inventions come as a shock.

We are now at a point where we are facing technologies that force us to redefine what we mean by the word ‘reality’.

Neuralink: Crossing the boundary between brain and machine

Amyotrophic lateral sclerosis (ALS) is a rare neurological disease caused by damage and degeneration of motor neurons—nerve cells in the brain and spinal cord. This damage disrupts the transmission of nerve impulses to muscles via peripheral nerves, leading to a progressive loss of muscle function.

However, the Neuralink chip, developed by Elon Musk’s company, has helped one patient type with their mind and speak using their voice. This breakthrough opens the door to a new form of communication where thoughts become direct interactions.

Liquid robot from South Korea

Scenes from sci-fi films are becoming reality, and in this case (thankfully), a liquid robot has a noble purpose—to assist in rescue missions and be applied in medicine.

Currently in the early prototype stage, it has been demonstrated in labs through a collaboration between MIT and Korean research institutes.

ULS exoskeleton as support for elderly care

Healthcare workers and caregivers in China have had their work greatly simplified thanks to the ULS Robotics exoskeleton, weighing only five kilograms but enabling users to lift up to 30 kilograms.

This represents a leap forward in caring for people with limited mobility, while also increasing safety and efficiency. Commercial prototypes have been tested in hospitals and industrial environments.

https://twitter.com/ulsrobotics/status/1317426742168940545

Agrorobots: Autonomous crop spraying

Another example from China that has been in use for several years. Robots equipped with AI perform precise crop spraying. The system analyses pests and targets them without the need for human presence, reducing potential health risks.

The application has become standardised, with expectations for further expansion and improvement in the near future.

The stretchable battery of the future

Researchers in Sweden have developed a flexible battery that can double in length without losing energy, making it ideal for wearable technologies.

Although not yet commercially available, it has been covered in scientific journals. The aim is for it to become a key component in bendable devices, smart clothing and medical implants.

Volonaut Airbike: A sci-fi vehicle takes off

When it comes to innovation, the Volonaut Airbike hits the mark perfectly. Designed to resemble a single-seat speeder bike from Star Wars, it represents a giant leap toward personal air travel.

Functional prototypes exist, but testing remains limited due to high production costs and regulatory hurdles related to traffic laws. Nevertheless, the Polish company behind it remains committed to this idea, and it will be exciting to follow its progress.

NEO robot: The humanoid household assistant

A Norwegian company has been developing a humanoid robot capable of performing household tasks, including gardening chores like collecting and bagging leaves or grass.

These are among the first serious steps toward domestic humanoid assistants. Currently functioning in demo mode, the robot has received backing from OpenAI.

Lenovo Yoga Solar: The laptop that loves sunlight

If you find yourself without a charger but with access to direct sunlight, this laptop will do everything it can to keep you powered. Using solar energy, 20 minutes of charging in sunlight provides around one hour of video playback.

Perfect for ecologists and digital nomads. Although not yet commercially available, it has been showcased at several major tech expos.

https://www.youtube.com/watch?v=px1iEW600Pk

What comes next: The need for smart regulation

As technology races ahead, regulation must catch up. From neurotech to autonomous robots, each innovation raises new questions about privacy, accountability, and ethics.

Governments and tech developers alike must collaborate to ensure that these inventions remain tools for good, not risks to society.

So, what is real and what is generated?

This question will only become harder to answer as time goes on. On the other hand, if the technological revolution continues to head in a useful and positive direction, perhaps there is little to fear.

The true dilemma in this era of rapid innovation may not be about the tools themselves, but about the fundamental question: Is technology shaping us, or do we still shape it?


Rewriting the AI playbook: How Meta plans to win through openness

Meta hosted its first-ever LlamaCon, a high-profile developer conference centred around its open-source language models. Timed to coincide with the release of its Q1 earnings, the event showcased Llama 4, Meta’s newest and most powerful open-weight model yet.

The message was clear – Meta wants to lead the next generation of AI on its own terms, and with an open-source edge. Beyond presentations, the conference represented an attempt to reframe Meta’s public image.

Once defined by social media and privacy controversies, Meta is positioning itself as a visionary AI infrastructure company. LlamaCon wasn’t just about a model. It was about a movement Meta wants to lead, with developers, startups, and enterprises as co-builders.

By holding LlamaCon the same week as its earnings call, Meta strategically emphasised that its AI ambitions are not side projects. They are central to the company’s identity, strategy, and investment priorities moving forward. This convergence of messaging signals a bold new chapter in Meta’s evolution.

The rise of Llama: From open-source curiosity to strategic priority

When Meta introduced LLaMA 1 in 2023, the AI community took notice of its open-weight release policy. Unlike OpenAI and Anthropic, Meta allowed researchers and developers to download, fine-tune, and deploy Llama models on their own infrastructure. That decision opened a floodgate of experimentation and grassroots innovation.

Now with Llama 4, the models have matured significantly, featuring better instruction tuning, multilingual capacity, and improved safety guardrails. Meta’s AI researchers have incorporated lessons learned from previous iterations and community feedback, making Llama 4 not just an update but a strategic inflexion point.

Crucially, Meta is no longer releasing Llama as a research novelty. It is now a platform and stable foundation for third-party tools, enterprise solutions, and Meta’s AI products. That is a turning point, where open-source ideology meets enterprise-grade execution.

Zuckerberg’s bet: AI as the engine of Meta’s next chapter

Mark Zuckerberg has rarely shied away from bold, long-term bets—whether the pivot to mobile in the early 2010s or the more recent metaverse gamble. At LlamaCon, he made clear that AI is now the company’s top priority, surpassing even virtual reality in strategic importance.

He framed Meta as a ‘general-purpose AI company’, focused on both the consumer layer (via chatbots and assistants) and the foundational layer (models and infrastructure). The Meta CEO envisions a world where Meta powers both the AI you talk to and the AI your apps are built on—a dual play that rivals Microsoft’s partnership with OpenAI.

This bet comes with risk. Investors are still sceptical about Meta’s ability to turn research breakthroughs into a commercial advantage. But Zuckerberg seems convinced that whoever controls the AI stack—hardware, models, and tooling—will control the next decade of innovation, and Meta intends to be one of those players.

A costly future: Meta’s massive AI infrastructure investment

Meta’s capital expenditure guidance for 2025—$60 to $65 billion—is among the largest in tech history. These funds will be spent primarily on AI training clusters, data centres, and next-gen chips.

That level of spending underscores Meta’s belief that scale is a competitive advantage in the LLM era. Bigger compute means faster training, better fine-tuning, and more responsive inference—especially for billion-parameter models like Llama 4 and beyond.

However, such an investment raises questions about whether Meta can recoup this spending in the short term. Will it build enterprise services, or rely solely on indirect value via engagement and ads? At this point, no monetisation plan is directly tied to Llama—only a vision and the infrastructure to support it.

Economic clouds: Revenue growth vs Wall Street’s expectations

Meta reported an 11% year-over-year increase in revenue in Q1 2025, driven by steady performance across its ad platforms. Wall Street nevertheless reacted negatively, with the company’s stock falling nearly 13% following the earnings report, as investors worried about the ballooning costs of Meta’s AI ambitions.

Despite revenue growth, Meta’s margins are thinning, mainly due to front-loaded investments in infrastructure and R&D. While Meta frames these as essential for long-term dominance in AI, investors are still anchored to short-term profit expectations.

A fundamental tension is at play here – Meta is acting like a venture-stage AI startup with moonshot spending, while being valued as a mature, cash-generating public company. Whether this tension resolves through growth or retrenchment remains to be seen.

Global headwinds: China, tariffs, and the shifting tech supply chain

Beyond internal financial pressures, Meta faces growing external challenges. Trade tensions between the US and China have disrupted the global supply chain for semiconductors, AI chips, and data centre components.

Meta’s international outlook is dimming with tariffs increasing and Chinese advertising revenue falling. That is particularly problematic because Meta’s AI infrastructure relies heavily on global suppliers and fabrication facilities. Any disruption in chip delivery, especially GPUs and custom silicon, could derail its training schedules and deployment timelines.

At the same time, Meta is trying to rebuild its hardware supply chain, including in-house chip design and alternative sourcing from regions like India and Southeast Asia. These moves are defensive but reflect how AI strategy is becoming inseparable from geopolitics.

Llama 4 in context: How it compares to GPT-4 and Gemini

Llama 4 represents a significant leap from Llama 2 and is now comparable to GPT-4 in a range of benchmarks. Early feedback suggests strong performance in logic, multilingual reasoning, and code generation.

However, how it handles tool use, memory, and advanced agentic tasks is still unclear. Compared to Gemini 1.5, Google’s flagship model, Llama 4 may still fall short in certain use cases, especially those requiring long context windows and deep integration with other Google services.

But Llama has one powerful advantage – it’s free to use, modify, and self-host. That makes Llama 4 a compelling option for developers and companies seeking control over their AI stack without paying per-token fees or exposing sensitive data to third parties.

Open source vs closed AI: Strategic gamble or masterstroke?

Meta’s open-weight philosophy differentiates it from rivals, whose models are mainly gated, API-bound, and proprietary. By contrast, Meta freely gives away its most valuable assets, such as weights, training details, and documentation.

Openness drives adoption. It creates ecosystems, accelerates tooling, and builds developer goodwill. Meta’s strategy is to win the AI competition not by charging rent, but by giving others the keys to build on its models. In doing so, it hopes to shape the direction of AI development globally.

Still, there are risks. Open weights can be misused, fine-tuned for malicious purposes, or leaked into products Meta doesn’t control. But Meta is betting that being everywhere is more powerful than being gated. And so far, that bet is paying off—at least in influence, if not yet in revenue.

Can Meta’s open strategy deliver long-term returns?

Meta’s LlamaCon wasn’t just a tech event but a philosophical declaration. In an era where AI power is increasingly concentrated and monetised, Meta chooses a different path based on openness, infrastructure, and community adoption.

The company invests tens of billions of dollars without a clear monetisation model. It is placing a massive bet that open models and proprietary infrastructure can become the dominant framework for AI development.

Meta is facing a major antitrust trial as the FTC argues its Instagram and WhatsApp acquisitions were made to eliminate competition rather than foster innovation.

Meta’s move positions it as the Android of the LLM era—ubiquitous, flexible, and impossible to ignore. The road ahead will be shaped by both technical breakthroughs and external forces—regulation, economics, and geopolitics.

Whether Meta’s open-source gamble proves visionary or reckless, one thing is clear – the AI landscape is no longer just about who has the most innovative model. It’s about who builds the broadest ecosystem.


Beyond the imitation game: GPT-4.5, the Turing Test, and what comes next

From GPT-4 to 4.5: What has changed and why it matters

In early 2025, OpenAI released GPT-4.5, the latest iteration in its series of large language models (LLMs), pushing the boundaries of what machines can do with language understanding and generation. Building on the strengths of GPT-4, it demonstrates improved reasoning capabilities, a more nuanced understanding of context, and smoother, more human-like interactions.

What sets GPT-4.5 apart from its predecessors is that it showcases refined alignment techniques, better memory over longer conversations, and increased control over tone, persona, and factual accuracy. Its ability to maintain coherent, emotionally resonant exchanges over extended dialogue marks a turning point in human-AI communication. These improvements are not just technical — they significantly affect the way we work, communicate, and relate to intelligent systems.

The increasing ability of GPT-4.5 to mimic human behaviour has raised a key question: Can it really fool us into thinking it is one of us? That question has recently been answered — and it has everything to do with the Turing Test.

The Turing Test: Origins, purpose, and modern relevance

In 1950, British mathematician and computer scientist Alan Turing posed a provocative question: ‘Can machines think?’ In his seminal paper ‘Computing Machinery and Intelligence,’ he proposed what would later become known as the Turing Test — a practical way of evaluating a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human.

In its simplest form, if a human evaluator cannot reliably distinguish between a human’s and a machine’s responses during a conversation, the machine is said to have passed the test. For decades, the Turing Test remained more of a philosophical benchmark than a practical one.

Early chatbots like ELIZA in the 1960s created the illusion of intelligence, but their scripted and shallow interactions fell far short of genuine human-like communication. Many researchers have questioned the test’s relevance as AI progressed, arguing that mimicking conversation is not the same as true understanding or consciousness.

Despite these criticisms, the Turing Test has endured — not as a definitive measure of machine intelligence, but rather as a cultural milestone and public barometer of AI progress. Today, the test has regained prominence with the emergence of models like GPT-4.5, which can hold complex, context-aware, emotionally intelligent conversations. What once seemed like a distant hypothetical is now an active, measurable challenge that GPT-4.5 has, by many accounts, overcome.

How GPT-4.5 fooled the judges: Inside the Turing Test study

In early 2025, a groundbreaking study conducted by researchers at the University of California, San Diego, provided the most substantial evidence yet that an AI could pass the Turing Test. In a controlled experiment involving over 500 participants, multiple conversational agents—including GPT-4.5, Meta’s LLaMa-3.1, and the classic chatbot ELIZA—were evaluated in blind text-based conversations. The participants were tasked with identifying whether they spoke to a human or a machine.

The results were astonishing: GPT-4.5 was judged to be human in 54% to 73% of interactions, depending on the scenario, surpassing the baseline for passing the Turing Test. In some cases, it outperformed actual human participants—who were correctly identified as human only 67% of the time.

That experiment marked the first time a contemporary AI model convincingly passed the Turing Test under rigorous scientific conditions. The study not only demonstrated the model’s technical capabilities—it also raised philosophical and ethical questions.

What does it mean for a machine to be ‘indistinguishable’ from a human? And more importantly, how should society respond to a world where AI can convincingly impersonate us?

Measuring up: GPT-4.5 vs LLaMa-3.1 and ELIZA

While GPT-4.5’s performance in the Turing Test has garnered much attention, its comparison with other models puts things into a clearer perspective. Meta’s LLaMa-3.1, a powerful and widely respected open-source model, also participated in the study.

It was identified as human in approximately 56% of interactions — a strong showing, although it fell just short of the commonly accepted benchmark to define a Turing Test pass. The result highlights how subtle conversational nuance and coherence differences can significantly influence perception.

The study also revisited ELIZA, the pioneering chatbot from the 1960s designed to mimic a psychotherapist. While historically significant, ELIZA’s simplistic, rule-based structure resulted in it being identified as non-human in most cases — around 77%. That stark contrast with modern models demonstrates how far natural language processing has progressed over the past six decades.

The comparative results underscore an important point: success in human-AI interaction today depends not only on fluent language generation but also on the ability to adapt tone, context, and emotional resonance. GPT-4.5’s edge seems to come not from mere fluency but from its ability to emulate the subtle cues of human reasoning and expression — a quality that left many test participants second-guessing whether they were even talking to a machine.

The power of persona: How character shaped perception

One of the most intriguing aspects of the UC San Diego study was how assigning specific personas to AI models significantly influenced participants’ perceptions. When GPT-4.5 was framed as an introverted, geeky 19-year-old college student, it consistently scored higher in being perceived as human than when it had no defined personality.

The seemingly small narrative detail was a powerful psychological cue that shaped how people interpreted its responses. The use of persona added a layer of realism to the conversation.

Slight awkwardness, informal phrasing, or quirky responses were not seen as flaws — they were consistent with the character. Participants were more likely to forgive or overlook certain imperfections if those quirks aligned with the model’s ‘personality’.

That finding reveals how intertwined identity and believability are in human communication, even when the identity is entirely artificial. The strategy also echoes something long known in storytelling and branding: people respond to characters, not just content.

In the context of AI, persona functions as a kind of narrative camouflage — not necessarily to deceive, but to disarm. It helps bridge the uncanny valley by offering users a familiar social framework. And as AI continues to evolve, it is clear that shaping how a model is perceived may be just as important as what the model is actually saying.

Limitations of the Turing Test: Beyond the illusion of intelligence

While passing the Turing Test has long been viewed as a milestone in AI, many experts argue that it is not the definitive measure of machine intelligence. The test focuses on imitation — whether an AI can appear human in conversation — rather than on genuine understanding, reasoning, or consciousness. In that sense, it is more about performance than true cognitive capability.

Critics point out that large language models like GPT-4.5 do not ‘understand’ language in the human sense – they generate text by predicting the most statistically probable next word based on patterns in massive datasets. That allows them to generate impressively coherent responses, but it does not equate to comprehension, self-awareness, or independent thought.
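The prediction mechanism described above can be illustrated with a toy next-word generator in Python. This is a hand-made stand-in, not how GPT-4.5 actually works: real models learn probability distributions over tens of thousands of tokens from massive corpora, whereas this table is invented for illustration.

```python
import random

# Hand-made stand-in for a learned model: for each context word,
# a probability distribution over possible next words.
NEXT_WORD_PROBS = {
    "the": {"machine": 0.5, "test": 0.3, "judge": 0.2},
    "machine": {"thinks": 0.6, "talks": 0.4},
    "test": {"passed": 0.7, "failed": 0.3},
}

def generate(start, max_words, seed=0):
    """Repeatedly sample a statistically probable next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:  # no known continuation: stop generating
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 2))  # e.g. 'the machine thinks'
```

The point of the sketch is that nothing here 'understands' the words: each step is pure pattern-based sampling, which is also why fluent output alone does not demonstrate comprehension.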

No matter how convincing, the illusion of intelligence is still an illusion — and mistaking it for something more can lead to misplaced trust or overreliance. Despite its symbolic power, the Turing Test was never meant to be the final word on AI.

As AI systems grow increasingly sophisticated, new benchmarks are needed — ones that go beyond linguistic mimicry to assess reasoning, ethical decision-making, and robustness in real-world environments. Passing the Turing Test may grab headlines, but the real test of intelligence lies far beyond the ability to talk like us.

Wider implications: Rethinking the role of AI in society

GPT-4.5’s success in the Turing Test does not just mark a technical achievement — it forces us to confront deeper societal questions. If AI can convincingly pass as a human in open conversation, what does that mean for trust, communication, and authenticity in our digital lives?

From customer service bots to AI-generated news anchors, the line between human and machine is blurring — and the implications are far from purely academic. These developments are challenging existing norms in areas such as journalism, education, healthcare, and even online dating.

How do we ensure transparency when AI is involved? Should AI be required to disclose its identity in every interaction? And how do we guard against malicious uses — such as deepfake conversations or synthetic personas designed to manipulate, mislead, or exploit?

On a broader level, the emergence of human-sounding AI invites a rethinking of agency and responsibility. If a machine can persuade, sympathise, or influence like a person — who is accountable when things go wrong?

As AI becomes more integrated into the human experience, society must evolve its frameworks not only for regulation and ethics but also for cultural adaptation. GPT-4.5 may have passed the Turing Test, but the test for us, as a society, is just beginning.

What comes next: Human-machine dialogue in the post-Turing era

With GPT-4.5 crossing the Turing threshold, we are no longer asking whether machines can talk like us — we are now asking what that means for how we speak, think, and relate to machines. That moment represents a paradigm shift: from testing the machine’s ability to imitate humans to understanding how humans will adapt to coexist with machines that no longer feel entirely artificial.

Future AI models will likely push this boundary even further — engaging in conversations that are not only coherent but also deeply contextual, emotionally attuned, and morally responsive. The bar for what feels ‘human’ in digital interaction is rising rapidly, and with it comes the need for new social norms, protocols, and perhaps even new literacies.

We will need to learn not only how to talk to machines but how to live with them — as collaborators, counterparts, and, in some cases, as reflections of ourselves. In the post-Turing era, the test is no longer whether machines can fool us — it is whether we can maintain clarity, responsibility, and humanity in a world where the artificial feels increasingly real.

GPT-4.5 may have passed a historic milestone, but the real story is just beginning — not one of machines becoming human, but of humans redefining what it means to be ourselves in dialogue with them.

Microsoft at 50 – A journey through code, cloud, and AI

The start of a software empire

Microsoft, the American tech giant, was founded 50 years ago, on 4 April 1975, by Harvard dropout Bill Gates and his childhood friend Paul Allen. Since then, the company has evolved from a small startup into the world’s largest software company.

Its early success can be traced back to a pivotal deal in 1975 to supply a BASIC interpreter for the Altair 8800 computer, which inspired the pair to launch the business officially.

That same drive for innovation secured Microsoft a breakthrough in 1980, when it partnered with IBM to supply the DOS operating system for IBM PCs, a move that turned Microsoft into a household name.

In 1986, Microsoft went public at $21 per share, according to NASDAQ. A year later, Gates appeared on the billionaire list, at 31 the youngest person ever to hold that status at the time.

Microsoft expands its empire

Throughout the 1980s and 1990s, Microsoft’s dominance in the software industry grew rapidly, particularly with the introduction of Windows 3.0 in 1990, which sold over 60 million copies and solidified the company’s control over the PC software market.

Over the decades, Microsoft has diversified its portfolio far beyond operating systems. Its Productivity and Business Processes division now includes the ever-popular Office Suite, which caters to both commercial and consumer markets, and the business-focused LinkedIn platform.

Equally significant is Microsoft’s Intelligent Cloud segment, led by its Azure Cloud Services, now the second-largest cloud platform globally, which has transformed the way businesses manage computing infrastructure.

The strategic pivot into cloud computing has been complemented by a range of other products, including SQL Server, Windows Server, and Visual Studio.

The giant under scrutiny

The company’s journey has not been without challenges. Its rapid rise in the 1990s attracted regulatory scrutiny, leading to high-profile antitrust cases and significant fines in both the USA and Europe.

Triggered by concerns over Microsoft’s growing dominance in the personal computer market, US regulators launched a series of investigations into whether the company was actively working to stifle competition.

The initial Federal Trade Commission probe was soon picked up by the Department of Justice, which filed formal charges in 1998. At the heart of the case was Microsoft’s practice of bundling its software, mainly Internet Explorer, with the Windows operating system.

Critics argued that this not only marginalised competitors like Netscape, but also made it difficult for users to install or even access alternative programs.

From Bill Gates to Satya Nadella

Despite these setbacks, Microsoft has continually adapted to the evolving technological landscape. When Steve Ballmer became CEO in 2000, some doubted his leadership, yet Microsoft maintained its stronghold in both business and personal computing.

In the early 2000s, the company overhauled its operating systems under the codename Project Longhorn.

The initiative led to the release of Windows Vista in 2007, which received mixed reactions. However, Windows 7 in 2009 helped Microsoft regain favour, while subsequent updates like Windows 8 and 8.1 aimed to modernise the user experience, especially on tablets.

The transition from Bill Gates to Steve Ballmer, and later to Satya Nadella in 2014, marked a new era of leadership that saw the company’s market capitalisation soar and its focus shift to cloud computing and AI.

Under Nadella’s stewardship, Microsoft has invested heavily in AI, including a notable $1 billion investment in OpenAI in 2019.

The strategic move, alongside the integration of AI features across its software ecosystem, from Microsoft 365 to Bing and Windows, signals the company’s determination to remain at the forefront of technological innovation.

Microsoft’s push for innovation through major acquisitions and investments

Microsoft has consistently demonstrated its commitment to expanding its technological capabilities and market reach through strategic acquisitions.

In 2011, Microsoft made headlines with its $8.5 billion acquisition of Skype, a move intended to rival Apple’s FaceTime and Google Voice by integrating Skype across Microsoft platforms like Outlook and Xbox.

Other strategic acquisitions played a significant role in Microsoft’s evolution, including LinkedIn, GitHub and Mojang, the studio behind Minecraft. In recent years, the company has made notable investments in key sectors, including cloud infrastructure, cybersecurity, AI, and gaming.

One of the most significant moves came in 2024, when Microsoft struck a deal with Inflection AI, hiring most of its staff and licensing its technology in what amounted to a quasi-acquisition. The deal bolstered Microsoft’s efforts to integrate AI into everyday applications. Personal AI tools, essential for both consumers and businesses, enhance productivity and personalisation.

The deal strengthens Microsoft’s position in conversational AI, benefiting platforms such as Microsoft 365, Azure AI, and OpenAI’s ChatGPT, which Microsoft heavily supports.

By enhancing its capabilities in natural language processing and user interaction, this acquisition allows Microsoft to offer more intuitive and personalised AI solutions, helping it compete with companies like Google and Meta.

Microsoft acquires Fungible and Lumenisity for cloud innovation

In a strategic push to enhance its cloud infrastructure, Microsoft has made notable acquisitions in recent years, including Fungible and Lumenisity.

In January 2023, Microsoft acquired Fungible for $190 million. Fungible specialises in data processing units (DPUs), which are crucial for optimising tasks like network routing, security, and workload management.

By integrating Fungible’s technology, Microsoft enhances the operational efficiency of its Azure data centres, cutting costs and energy consumption while offering more cost-effective solutions to enterprise customers. This move positions Microsoft to capitalise on the growing demand for robust cloud services.

Similarly, in December 2022, Microsoft acquired Lumenisity, a company known for its advanced fibre optic technology. Lumenisity’s innovations boost network speed and efficiency, making it ideal for handling high volumes of data traffic.

The move has strengthened Azure’s network infrastructure, improving data transfer speeds and reducing latency, particularly important for areas like the Internet of Things (IoT) and AI-driven workloads that require reliable, high-performance connectivity.

Together, these acquisitions reflect Microsoft’s ongoing commitment to innovation in cloud services and technology infrastructure.

Microsoft expands cybersecurity capabilities with Miburo acquisition

In 2022, Microsoft also moved to acquire Miburo, a leading expert in cyber intelligence and foreign threat analysis. The acquisition further strengthens Microsoft’s commitment to enhancing its cybersecurity solutions and threat detection capabilities.

Miburo, known for its expertise in identifying state-sponsored cyber threats and disinformation campaigns, will be integrated into Microsoft’s Customer Security and Trust organisation.

The acquisition will bolster Microsoft’s existing threat detection platforms, enabling the company to better address emerging cyber threats and state-sanctioned information operations.

Miburo’s analysts will work closely with Microsoft’s Threat Intelligence Center, data scientists, and other security teams to expand the company’s ability to counter complex cyber-attacks and the use of information operations by foreign actors.

Miburo’s mission to protect democracies and ensure the integrity of information environments aligns closely with Microsoft’s goals of safeguarding its customers against malign influences and extremism.

It is a strategic move that further solidifies Microsoft’s position as a leader in cybersecurity and reinforces its ongoing investment in addressing evolving global security challenges.

Microsoft’s $68.7 billion Activision Blizzard acquisition boosts gaming and the metaverse

Perhaps the most ambitious acquisition in recent years was Activision Blizzard, a $68.7 billion deal announced in 2022 and completed in October 2023.

With this purchase, Microsoft significantly expanded its presence in the gaming industry, integrating popular franchises like Call of Duty, World of Warcraft, and Candy Crush into its Xbox ecosystem.

The acquisition not only enhances Xbox’s competitiveness against Sony’s PlayStation but also positions Microsoft as a leader in the metaverse, using gaming as a gateway to immersive digital experiences.

This deal reflects the broader transformation in the gaming industry driven by cloud gaming, virtual reality, and blockchain technology.

A greener future: Microsoft’s sustainability goals

Another crucial element of the company’s business strategy is its dedication to sustainability, which will serve as the foundation of its operations and future objectives.

Microsoft has set ambitious targets to become carbon negative and water positive and achieve zero waste by 2030 while protecting ecosystems.

With a vast global presence spanning over 60 data centre regions, Microsoft leverages its cloud computing infrastructure to optimise both performance and sustainability.

The company’s approach focuses on integrating efficiency into every aspect of its infrastructure, from data centres to custom-built servers and silicon.

A key strategy in Microsoft’s sustainability efforts is its Power Purchase Agreements (PPAs), which aim to bring more carbon-free electricity to the grids where the company operates.

By securing over 34 gigawatts of renewable energy across 24 countries, Microsoft is not only advancing its own sustainability goals but also supporting the global transition to clean energy.

Microsoft plans major investment in AI infrastructure

Microsoft has also announced plans to invest $80 billion in building data centres designed to support AI workloads by the end of 2025. A significant portion of this investment, more than half, will be directed towards the USA.

As AI technology continues to grow, Microsoft’s spending includes billions on Nvidia graphics processing units (GPUs) to train AI models.

The rapid rise of OpenAI’s ChatGPT, launched in late 2022, has sparked a race among tech companies to develop their own generative AI models.

Having invested more than $13 billion in OpenAI, Microsoft has integrated its AI models into popular products such as Windows and Teams, while also expanding its cloud services through Azure.

Microsoft’s growth strategy shapes the future of tech innovation

All these acquisitions and investments reflect a cohesive strategy aimed at enhancing Microsoft’s leadership in key technology areas.

From AI and gaming to cybersecurity and cloud infrastructure, the company is positioning itself at the forefront of digital transformation. However, while these deals present significant growth opportunities, they also pose challenges.

Ensuring successful integration, managing regulatory scrutiny, and creating synergies between acquired entities will be key to Microsoft’s long-term success. In conclusion, Microsoft’s strategy highlights its dedication to innovation and technology leadership.

From its humble beginnings adapting BASIC for the Altair to its current status as a leader in cloud and AI, Microsoft’s story is one of constant reinvention and enduring influence in the digital age.

By diversifying across multiple sectors, including gaming, cloud computing, AI, and cybersecurity, the company is building a robust foundation for future growth.

It is a digital business model that not only reinforces Microsoft’s market position but also plays a vital role in shaping the future of technology.

For more information on these topics, visit diplomacy.edu.

Ghibli trend as proof of global dependence on AI: A phenomenon that overloaded social networks and systems

It is rare to find a person in this world (with internet access) who has not, at least once, consulted AI about some dilemma, idea, or a simple question.

The wide range of information and rapid response delivery have led people to settle into a ‘comfort zone’, letting machines reason for them and, most recently, even turn their photographs into animation-style artwork.

This brings us to a trend that, within just a few days, spread across the globe: the Ghibli style emerged spontaneously on social networks. When people realised they could obtain animated versions of their favourite photos within seconds, the entire network became overloaded.

With no brake mechanism in place, reactions from leading figures were inevitable, and Sam Altman, CEO of OpenAI, was among those who spoke out.

He stated that the trend had surpassed all expectations and that servers were ‘strained’, making the Ghibli style available only to ChatGPT users subscribed to Plus, Pro, and Team versions.

Besides admiring AI’s incredible ability to create iconic moments within seconds, this phenomenon also raises the issue of global dependence on artificial intelligence.

Why are we all so in love with AI?

The answer to this question is rather simple, and here’s why. Imagine being able to finally transform your imagination into something visible and share all your creations with the world. It doesn’t sound bad, does it?

This is precisely where AI has made its breakthrough and changed the world forever. Just as Ghibli films have, for decades, inspired fans with their warmth and nostalgia, AI technology has created something akin to the digital equivalent of those emotions.

People are now creating and experiencing worlds that previously existed only in their minds. However, no matter how comforting it sounds, warnings are often raised about maintaining a sense of reality to avoid ‘falling into the clutches’ of a beautiful virtual world.

Balancing innovation and simplicity

Altman warned about the excessive use of AI tools, stating that even his employees are sometimes overwhelmed by the progress of artificial intelligence and the innovations it releases daily.

As a result, people are unable to adapt as quickly as AI, with information spreading faster than ever before.

However, there are also frequent cases of misuse, raising the question – where is the balance?

The culture of continuous production has led to saturation but also a lack of reflection. Perhaps this very situation will bring about the much-needed pause and encourage people to take a step back and ‘think more with their own heads’.

Ghibli is just one of many: How AI trends became mainstream

AI has been with us for a long time, but it only reached mass popularity with the arrival of major platforms such as OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Azure AI. The Ghibli trend is just one of many that have become part of pop culture in recent years.

Since 2018, we have witnessed deepfake technologies, where various video clips, due to their ability to accurately recreate faces in entirely different contexts, flood social networks almost daily.

AI-generated music and audio recordings have also been among the most popular trends promoted over the past four years because they are ‘easy to use’ and offer users the feeling of creating quality content with just a few clicks.

There are many other trends that have captured the attention of the global public, such as the Avatar trend (Lensa AI) and generated comics and stories (StoryAI and ComicGAN), while anime-style generators such as Waifu Labs predate the current wave.

Are we really that lazy or just better organised?

The availability of AI tools at every step has greatly simplified everyday life, from applications that assist with writing to those that generate content in almost any other format.

For this reason, the question arises – are we lazy, or have we simply decided to better organise our free time?

This is a matter for each individual, and the easiest way to find out is to ask yourself whether you have ever consulted AI about choosing a film, music, or some activity that previously did not take much energy.

AI offers quick and easy solutions, which is certainly an advantage. However, on the other hand, excessive use of technology can lead to a loss of critical thinking and creativity.

Where is the line between efficiency and dependence if we rely on algorithms for everything? That is an answer each of us will have to find at some point.

A view on AI overload: How can we ‘break free from dependence’?

The constant reliance on AI and the comfort it provides after every prompt is appealing, but abusing it leads to a completely different extreme.

The first step towards ‘liberation’ is to admit that there is a certain level of over-reliance, which does not mean abandoning AI altogether.

Understanding the limitations of technology can definitely be the key to returning to essential human values. Digital ‘detox’ implies creative expression without technology.

Can we use technology without it becoming the sole filter through which we see the world? After all, technology is a tool, not a dominant factor in decision-making in our lives.

Ghibli trend enthusiasts – the legendary Hayao Miyazaki does not like AI

The founder of Studio Ghibli, Hayao Miyazaki, recently reacted to the trend that has overwhelmed the world. The creator of famous works such as Princess Mononoke, Howl’s Moving Castle, Spirited Away, My Neighbour Totoro, and many others is vehemently opposed to the use of AI.

Known for his hand-drawn approach and whimsical storytelling, Miyazaki has raised ethical concerns, since the AI tools behind such trends are trained on large amounts of data, including copyrighted works.

Besides criticising the use of AI in animation, he believes that such tools cannot replace the human touch, authenticity, and emotions conveyed through the traditional creation process.

For Miyazaki, art is not just a product but a reflection of the artist’s soul – something machines, no matter how advanced, cannot truly replicate.
