Technological inventions blurring the line between reality and fiction

The rapid progress of AI over the past few years has unsettled people around the world, to the point where it is often extremely difficult to say with certainty whether a given piece of content was created by AI.

We are confronted with this phenomenon through photos, videos and audio recordings that can easily confuse us and force us to question our perception of reality.

Digital twins are being used by scammers in the crypto space to impersonate influencers and execute fraudulent schemes.

And while the public often focuses on deepfakes, at the same time we are witnessing inventions and patents emerging around the world that deserve admiration, but also spark important reflection: are we nearing, or have we already crossed, the ethical red line?

For these and many other reasons, in a world where the visual and functional differences between science fiction and reality have almost disappeared, the latest inventions come as a shock.

We are now at a point where we are facing technologies that force us to redefine what we mean by the word ‘reality’.

Neuralink: Crossing the boundary between brain and machine

Amyotrophic lateral sclerosis (ALS) is a rare neurological disease caused by damage and degeneration of motor neurons—nerve cells in the brain and spinal cord. This damage disrupts the transmission of nerve impulses to muscles via peripheral nerves, leading to a progressive loss of muscle function.

However, the Neuralink chip, developed by Elon Musk’s company, has helped one patient type with their mind and speak using their voice. This breakthrough opens the door to a new form of communication where thoughts become direct interactions.

Liquid robot from South Korea

Scenes from sci-fi films are becoming reality, and in this case (thankfully), a liquid robot has a noble purpose—to assist in rescue missions and be applied in medicine.

Currently in the early prototype stage, it has been demonstrated in labs through a collaboration between MIT and Korean research institutes.

ULS exoskeleton as support for elderly care

Healthcare workers and caregivers in China have had their work greatly simplified thanks to the ULS Robotics exoskeleton, which weighs only 5 kg yet enables users to lift up to 30 kg.

This represents a leap forward in caring for people with limited mobility, while also increasing safety and efficiency. Commercial prototypes have been tested in hospitals and industrial environments.

Agrorobots: Autonomous crop spraying

Another example comes from China, where it has been in use for several years: robots equipped with AI perform precise crop spraying. The system identifies pests and targets them without the need for human presence, reducing potential health risks.

The application has become standardised, with expectations for further expansion and improvement in the near future.

The stretchable battery of the future

Researchers in Sweden have developed a flexible battery that can double in length without losing energy, making it ideal for wearable technologies.

Although not yet commercially available, it has been covered in scientific journals. The aim is for it to become a key component in bendable devices, smart clothing and medical implants.

Volonaut Airbike: A sci-fi vehicle takes off

When it comes to innovation, the Volonaut Airbike hits the mark perfectly. Designed to resemble a single-seat speeder bike from Star Wars, it represents a giant leap toward personal air travel.

Functional prototypes exist, but testing remains limited due to high production costs and regulatory hurdles around air traffic rules. Nevertheless, the Polish company behind it remains committed to this idea, and it will be exciting to follow its progress.

NEO robot: The humanoid household assistant

A Norwegian company has been developing a humanoid robot capable of performing household tasks, including gardening chores like collecting and bagging leaves or grass.

These are among the first serious steps toward domestic humanoid assistants. Currently functioning in demo mode, the robot has received backing from OpenAI.

Lenovo Yoga Solar: The laptop that loves sunlight

If you find yourself without a charger but with access to direct sunlight, this laptop will do everything it can to keep you powered. Using solar energy, 20 minutes of charging in sunlight provides around one hour of video playback.

Perfect for eco-conscious users and digital nomads. Although not yet commercially available, it has been showcased at several major tech expos.

What comes next: The need for smart regulation

As technology races ahead, regulation must catch up. From neurotech to autonomous robots, each innovation raises new questions about privacy, accountability, and ethics.

Governments and tech developers alike must collaborate to ensure that these inventions remain tools for good, not risks to society.

So, what is real and what is generated?

This question will only become harder to answer as time goes on. But on the other hand, if the technological revolution continues to head in a useful and positive direction, perhaps there is little to fear.

The true dilemma in this era of rapid innovation may not be about the tools themselves, but about the fundamental question: Is technology shaping us, or do we still shape it?

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft drops passwords in Authenticator app to support passkeys

Microsoft has announced that its Authenticator app will stop supporting the saving of new passwords from 1 June, with autofill features to be removed in July. By August, users will no longer have access to any passwords stored in the app.

The decision marks a shift in Microsoft’s focus from app-based password management to browser-based solutions, particularly via Microsoft Edge.

The company recommends that users move their saved passwords to a dedicated password manager or the Edge browser immediately.

Instead of continuing to develop Authenticator as a full password manager, Microsoft is encouraging users to adopt passkeys—digital credentials that offer stronger security.

Passkeys use cryptographic keys stored locally on devices, making them much harder to steal or guess compared to traditional passwords.
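That challenge-response idea can be sketched in a few lines. The example below is illustrative only, not Microsoft's actual implementation; it assumes the third-party `cryptography` package and uses Ed25519 signatures to show why the private key never needs to leave the device:

```python
# Hypothetical sketch of a passkey-style login flow.
# Assumes: pip install cryptography (not part of the stdlib).
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device generates a key pair and keeps the private
# key locally; the server stores only the public key.
device_key = Ed25519PrivateKey.generate()
server_stored_public_key = device_key.public_key()

# Login: the server sends a fresh random challenge; the device signs it.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

# The server verifies the signature against the stored public key.
# A stolen server database yields only public keys, which cannot be
# replayed as credentials -- unlike a leaked password hash.
try:
    server_stored_public_key.verify(signature, challenge)
    print("login OK")
except InvalidSignature:
    print("login rejected")
```

Because each login signs a one-off challenge, there is no reusable secret to phish or guess, which is the property the passkey push is built on.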

Microsoft insists this change is part of a broader push to phase out outdated password systems in favour of safer, faster authentication methods.

Security experts support this move but caution users to take immediate action to prevent losing access to important logins.

Microsoft itself admits that Authenticator was never a proper password manager in the traditional sense, and that dedicated apps such as 1Password or Apple’s built-in password tools provide better options for storing credentials securely.

Users should ensure they export or migrate their stored information well before the August cutoff.

The change also reflects Microsoft’s alignment with industry trends, alongside Apple and Google, to accelerate the adoption of passkeys.

The company argues that with attackers increasingly exploiting weak or reused passwords, replacing them altogether with newer technology is not just advisable—it’s essential.


Hackers target UK retailers with fake IT calls

British retailers are facing a new wave of cyberattacks as hackers impersonate IT help desk staff to infiltrate company systems. The National Cyber Security Centre (NCSC) has issued an urgent warning following breaches at major firms including Marks & Spencer, Co-op, and Harrods.

Attackers use sophisticated social engineering tactics—posing as locked-out employees or IT support staff—to trick individuals into giving up passwords and security details. The NCSC urges companies to strengthen how their IT help desks verify employee identities, particularly when handling password resets for senior staff.

Security experts in the UK recommend using multi-step verification methods and even code words to confirm identities over the phone. These additional layers are vital, as attackers increasingly exploit trust and human error rather than technical vulnerabilities.

While the NCSC hasn’t named any group officially, the style of attack closely resembles the methods of Scattered Spider, a loosely connected network of young, English-speaking hackers. Known for high-profile cyber incidents—including attacks on Las Vegas casinos and public transport systems—the group often coordinates via platforms like Discord and Telegram.

However, those claiming responsibility for the latest breaches deny links to Scattered Spider, calling themselves ‘DragonForce.’ Speaking to the BBC, the group claimed to have stolen significant customer and employee data from Co-op and hinted at more disruptions in the future.

The NCSC is investigating with law enforcement to determine whether DragonForce is a new player or simply a rebranded identity of the same well-known threat actors.


How digital twins are being weaponised in crypto scams

Digital twins are virtual models of real-world objects, systems, or processes. They enable real-time simulations, monitoring, and predictions, helping industries like healthcare and manufacturing optimise resources. In the crypto world, cybercriminals have found a way to exploit this technology for fraudulent activities.

Scammers create synthetic identities by gathering personal data from various sources. These digital twins are used to impersonate influencers or executives, promoting fake investment schemes or stealing funds. The unregulated nature of crypto platforms makes it easier for criminals to exploit users.

Real-world scams are already happening. Deepfake CEO videos have tricked executives into transferring funds under false pretences. Counterfeit crypto platforms have also stolen sensitive information from users. These scams highlight the risks of AI-powered digital twins in the crypto space.

Blockchain offers solutions to combat these frauds. Decentralised identifiers (DIDs) and NFT identity markers can verify interactions. Blockchain’s immutable audit trails and smart contracts can help secure transactions and protect users from digital twin scams.


Trump signals new extension for TikTok divestment deadline

President Donald Trump indicated he would extend the deadline set for the Chinese-owned company ByteDance to sell TikTok’s US operations if negotiations remain unfinished by 19 June.

The popular short-video app, used by around 170 million Americans, played a significant role in Trump’s appeal to younger voters during his 2024 election campaign. Trump described TikTok positively, hinting at protective measures rather than outright prohibition.

Originally mandated by Congress, the TikTok ban was supposed to be enforced starting on 19 January. Trump, however, has twice extended this deadline amid ongoing negotiations.

A potential agreement to spin off TikTok’s US operations into a new, US-majority-owned firm was suspended after China objected, a reaction spurred by Trump’s substantial tariffs on Chinese goods.

Democratic senators have challenged Trump’s authority to postpone the deadline further, arguing that the proposed spin-off arrangement does not satisfy legal conditions outlined in the original legislation.

Insiders indicate negotiations continue behind the scenes, though a resolution remains dependent on settling broader trade conflicts between the US and China.

Trump remains firm about maintaining high tariffs on China, now at 145%, which he insists significantly impacts the Chinese economy.

Yet, he has left the door open to eventually lowering these tariffs within a more comprehensive trade agreement, acknowledging China’s strong desire to resume business with the US.

Despite multiple extensions, the fate of TikTok’s US operations remains uncertain, as political and economic factors continue shaping negotiations. Trump’s willingness to extend deadlines reflects broader geopolitical dynamics between Washington and Beijing, linking digital platform regulation closely with international trade policy.

Chefs quietly embrace AI in the kitchen

At this year’s Michelin Guide awards in France, AI sparked nearly as much conversation as the stars themselves.

Paris-based chef Matan Zaken, of the one-star restaurant Nhome, said AI dominated discussions among chefs, even though many are hesitant to admit they already rely on tools like ChatGPT for inspiration and recipe development.

Zaken openly embraces AI in his kitchen, using platforms like ChatGPT Premium to generate ingredient pairings—such as peanuts and wild garlic—that he might not have considered otherwise. Instead of starting with traditional tastings, he now consults vast databases of food imagery and chemical profiles.

In a recent collaboration with the digital collective Obvious Art, AI-generated food photos came first, and Zaken created dishes to match them.

Still, not everyone is sold on AI’s place in haute cuisine. Some top chefs insist that no algorithm can replace the human palate or creativity honed by years of training.

Philippe Etchebest, who just earned a second Michelin star, argued that while AI may be helpful elsewhere, it has no place in the artistry of the kitchen. Others worry it strays too far from the culinary traditions rooted in local produce and craftsmanship.

Many chefs, however, seem more open to using AI behind the scenes. From managing kitchen rotas to predicting ingredient costs or carbon footprints, phone apps like Menu and Fullsoon are gaining popularity.

Experts believe molecular databases and cookbook analysis could revolutionise flavour pairing and food presentation, while robots might one day take over laborious prep work—peeling potatoes included.


New Zealand central bank warns of AI risks

The Reserve Bank of New Zealand has warned that the swift uptake of AI in the financial sector could pose a threat to financial stability.

A report released on Monday highlighted how errors in AI systems, data privacy breaches and potential market distortions might magnify existing vulnerabilities instead of simply streamlining operations.

The central bank also expressed concern over the increasing dependence on a handful of third-party AI providers, which could lead to market concentration instead of healthy competition.

Such reliance, it said, could create new avenues for systemic risk and make the financial system more susceptible to cyber-attacks.

Despite the caution, the report acknowledged that AI is bringing tangible advantages, such as greater modelling accuracy, improved risk management and increased productivity. It also noted that AI could help strengthen cyber resilience rather than weaken it.

The analysis was published just ahead of the central bank’s twice-yearly Financial Stability Report, scheduled for release on Wednesday.


US lawmakers push for app store age checks

A new bill introduced by US lawmakers could force app stores like Apple’s App Store and Google Play to verify the age of all users, in a move aimed at increasing online safety for minors.

Known as the App Store Accountability Act, the legislation would require age categorisation and parental consent before minors can download apps or make in-app purchases. If passed, the law would apply to platforms with at least five million users and would come into effect one year after approval.

The bill proposes dividing users into age brackets — from ‘young child’ to ‘adult’ — and holding app stores accountable for enforcing access restrictions.

Lawmakers behind the bill, Republican Senator Mike Lee and Representative John James, argue that Big Tech companies must take responsibility for limiting children’s exposure to harmful content. They believe app stores are the right gatekeepers for verifying age and protecting minors online.

Privacy advocates and tech companies have voiced concern about the bill’s implications. Legal experts warn that verifying users’ ages may require sensitive personal data, such as ID documents or facial recognition scans, raising the risk of data misuse.

Apple said such verification would apply to all users, not just children, and criticised the idea as counterproductive to privacy.

The proposal has widened a rift between app store operators and social media platforms. While Meta, X, and Snap back centralised age checks at the app store level, Apple and Google accuse them of shifting the burden of responsibility.

Both tech giants emphasise the importance of shared responsibility and continue to engage with lawmakers on crafting practical and privacy-conscious solutions.


TikTok faces a €530 million EU record fine over data concerns

TikTok has been handed a €530 million ($600 million) fine by Ireland’s Data Protection Commissioner (DPC) over data privacy violations involving user information transfers to China. 

The EU privacy watchdog found that TikTok failed to ensure EU citizens’ data received sufficient protection against potential access by Chinese authorities, raising concerns among EU lawmakers.

The regulator has also set a tight six-month deadline for TikTok to align its data practices with EU standards. If the platform cannot demonstrate compliance, particularly in safeguarding EU user information from remote access by China-based employees, it could face a full suspension of data transfers.

TikTok strongly opposes the ruling, asserting it has consistently adhered to EU-approved frameworks that restrict and monitor data access. The platform also highlighted recent security enhancements, including dedicated EU and US data centres, as proof of its commitment. 

TikTok claims it has never received or complied with any request from Chinese authorities for EU user data, framing the ruling as an overly strict measure that could disrupt broader industry practices.

However, the regulator raised new concerns following TikTok’s recent disclosure that some EU user data had been inadvertently stored on servers in China, although it has since been deleted.

The revelation prompted Ireland’s privacy watchdog to consider additional regulatory action, underscoring its serious concerns about the transparency of TikTok’s data handling.

The case represents the second major privacy reprimand against TikTok in recent years, following a €345 million fine in 2023 over mishandling children’s data. It also marks the DPC’s pattern of taking tough actions against global tech companies headquartered in Ireland, as it aims to enforce compliance strictly under the EU’s rigorous General Data Protection Regulation (GDPR).

Google admits using opted-out content for AI training

Google has admitted in court that it can use website content to train AI features in its search products, even when publishers have opted out of such training.

Although Google offers a way for sites to block their data from being used by its AI lab, DeepMind, the company confirmed that its broader search division can still use that data for AI-powered tools like AI Overviews.

The practice has raised concern among publishers, who are seeing reduced traffic as Google’s AI summarises answers directly at the top of search results, diverting users from clicking through to original sources.

Eli Collins, a vice-president at Google DeepMind, acknowledged during a Washington antitrust trial that Google’s search team could train AI using data from websites that had explicitly opted out.

The only way for publishers to fully prevent their content from being used in this way is by opting out of being indexed by Google Search altogether—something that would effectively make them invisible on the web.

Google’s approach relies on the robots.txt file, a standard that tells search bots whether they are allowed to crawl a site.
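The mechanism can be seen with Python’s standard-library parser. The rules below are a hypothetical robots.txt for a site that blocks Google-Extended (the token Google offers for opting content out of AI training) while staying open to ordinary crawlers:

```python
# Sketch: parsing a robots.txt opt-out with Python's stdlib.
# The rules and URLs are hypothetical examples.
import urllib.robotparser

rules = [
    "User-agent: Google-Extended",  # Google's AI-training opt-out token
    "Disallow: /",                  # block AI-training use of the site
    "",
    "User-agent: *",                # every other crawler, incl. Googlebot
    "Allow: /",                     # site remains indexable in Search
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("Google-Extended", "https://example.com/article"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/article"))        # True
```

As the testimony indicates, this split is exactly the gap: a site can disallow the AI-training token yet still be crawled by the search bot, whose data the search division can use.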

The trial is part of a broader effort by the US Department of Justice to address Google’s dominance in the search market, which a judge previously ruled had been unlawfully maintained.

The DOJ is now asking the court to impose major changes, including forcing Google to sell its Chrome browser and stop paying to be the default search engine on other devices. These changes would also apply to Google’s AI products, which the DOJ argues benefit from its monopoly.

Testimony also revealed internal discussions at Google about how using extensive search data, such as user session logs and search rankings, could significantly enhance its AI models.

Although no model was confirmed to have been built using that data, court documents showed that top executives like DeepMind CEO Demis Hassabis had expressed interest in doing so.

Google’s lawyers have argued that competitors in AI remain strong, with many relying on direct data partnerships instead of web scraping.
