Quantum-resistant crypto wallets now possible without address changes

Sui Research has revealed a way for blockchain wallets to be upgraded for quantum safety without a hard fork or address changes. Because the approach builds on EdDSA signatures, compatible networks such as Solana, Sui and Near could transition securely with minimal disruption.

Cryptographer Kostas Chalkias described the breakthrough as the first backward-compatible path to quantum safety for wallets. The upgrade uses zero-knowledge proofs to verify control of a private key without exposing it, letting wallets keep their original public keys and even covering long-dormant accounts.
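The mechanics are easiest to see in a sketch. Below is a minimal Python illustration of the migration flow, assuming the widely used cryptography package; the zero-knowledge proof is mocked with a hash commitment purely to show the message flow, and all names are hypothetical placeholders, not the Sui Research construction.

```python
# Conceptual sketch of a backward-compatible post-quantum migration:
# the wallet keeps its original Ed25519 public key (and thus its
# address) and publishes a record binding it to a new post-quantum
# key, demonstrating control of the original key along the way.
# NOTE: a real scheme uses a zero-knowledge proof here; the hash
# commitment below is a stand-in to show the flow, not the crypto.
import hashlib
from dataclasses import dataclass

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


@dataclass
class MigrationRecord:
    original_pubkey: bytes  # unchanged, so the address stays the same
    pq_pubkey: bytes        # new post-quantum verification key
    proof: bytes            # stand-in for a ZK proof of seed knowledge


def migrate(original_key: Ed25519PrivateKey, pq_pubkey: bytes) -> MigrationRecord:
    pub = original_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
    seed = original_key.private_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PrivateFormat.Raw,
        encryption_algorithm=serialization.NoEncryption(),
    )
    # Placeholder: a real migration would publish a zero-knowledge
    # proof that the prover knows `seed` for `pub`, never the seed.
    proof = hashlib.sha256(seed + pq_pubkey).digest()
    return MigrationRecord(original_pubkey=pub, pq_pubkey=pq_pubkey, proof=proof)


if __name__ == "__main__":
    wallet_key = Ed25519PrivateKey.generate()
    record = migrate(wallet_key, pq_pubkey=b"\x00" * 32)  # dummy PQ key
    print(record.original_pubkey.hex())  # address-deriving key is unchanged
```

In a production scheme the proof would be a quantum-resistant zero-knowledge argument that the prover knows the seed behind the published Ed25519 key, which is what allows even dormant accounts to upgrade without changing addresses.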

While praised as one of the most important cryptographic advances in recent years, the upgrade method does not apply to Bitcoin or Ethereum. Those networks rely on different signature schemes (ECDSA rather than EdDSA) and may need bigger changes to stay secure as quantum technology evolves.

Although quantum computers cannot yet break the public-key cryptography that secures blockchains, researchers and developers are racing to prepare. The prospect of millions of wallets becoming vulnerable has triggered serious debate in the crypto community.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT gets smarter with Study Mode to support active learning

OpenAI has launched a new Study Mode in ChatGPT to help users engage more deeply with learning. Rather than simply providing answers, the feature guides users through concepts and problem-solving step by step. It is designed to support critical thinking and improve long-term understanding.

The company developed the feature with educators, scientists, and pedagogy experts. They aimed to ensure the AI supports active learning and doesn’t just deliver quick fixes. The result is a mode that encourages curiosity, reflection, and metacognitive development.

According to OpenAI, Study Mode allows users to approach subjects more critically and thoroughly. It breaks down complex ideas, asks questions, and helps manage cognitive load during study. Instead of spoon-feeding answers, the AI acts more like a tutor than a search engine.

The shift reflects a broader trend in educational technology — away from passive learning tools. Many students turn to AI for homework help, but educators have warned of over-reliance. Study Mode attempts to strike a balance by promoting engagement over shortcuts.

For instance, rather than giving the complete solution to a maths problem, Study Mode might ask: ‘What formula might apply here?’ or ‘How could you simplify this expression first?’ This approach nudges students to participate in the process and build fundamental problem-solving skills.
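Study Mode itself is a built-in ChatGPT feature rather than an API switch, but the Socratic behaviour described above is easy to approximate. Here is a minimal sketch assuming the official openai Python SDK; the model name and tutoring prompt are illustrative assumptions, not OpenAI's actual Study Mode configuration.

```python
# Approximating Study-Mode-style tutoring with a system prompt:
# the model is told to guide with questions rather than hand over
# the final answer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TUTOR_PROMPT = (
    "You are a patient tutor. Never give the final answer outright. "
    "Ask one guiding question at a time, check the student's reasoning, "
    "and introduce concepts incrementally to limit cognitive load."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice; any capable chat model works
    messages=[
        {"role": "system", "content": TUTOR_PROMPT},
        {"role": "user", "content": "Solve 2x + 6 = 14 for x."},
    ],
)
print(response.choices[0].message.content)
# Expected style of reply: "What could you do to both sides to isolate
# the term with x?" rather than simply "x = 4".
```

The design point is the one OpenAI describes: shifting the model from answer delivery to guided questioning.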

It also adapts to different learning needs. In science, it might walk through hypotheses and reasoning; in the humanities, it may help analyse a passage or structure an essay. Prompting users to think aloud mirrors effective tutoring strategies.

OpenAI says feedback from teachers helped shape the feature’s tone and pacing. One key aim was to avoid overwhelming learners with too much information at once. Instead, Study Mode introduces concepts incrementally, supporting better retention and understanding.

The company also consulted cognitive scientists to align the feature with best practices in memory and comprehension. This includes encouraging users to reflect on their learning and why specific steps matter. Such strategies are known to improve both academic performance and self-directed learning.

While the feature is part of ChatGPT, it can be toggled on or off. Users can activate Study Mode when tackling a tricky topic or exploring new material. They can then switch to normal responses for broader queries or summarised answers.

Educators have expressed cautious optimism about the update. Some see it as a tool supporting homework, revision, or assessment preparation. However, they also warn that no AI can replace direct teaching or personalised guidance.

Tools like this could be valuable in under-resourced settings or for independent learners. Study Mode's interactive style may help level the playing field for students without regular academic support, and it gives parents and tutors a new way to guide learners without doing the work for them.

OpenAI's earlier education efforts included teacher guides and classroom use cases, but Study Mode marks a more direct push to reshape how students use AI in learning. It positions ChatGPT not as a cheat sheet, but as a co-pilot for intellectual growth.

Looking ahead, OpenAI says it plans to iterate based on user feedback and teacher insights. Future updates may include subject-specific prompts, progress tracking, or integrations with educational platforms. The goal is to build a tool that adapts to learning styles without compromising depth or rigour.

As AI continues to reshape education, tools like Study Mode may help answer a central question: can technology support genuine understanding, instead of just faster answers? With Study Mode, OpenAI believes the answer is yes, provided the tool is used wisely.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Brainstorming with AI opens new doors for innovation

AI is increasingly embraced as a reliable creative partner, offering speed and breadth in idea generation. In Fast Company, Kevin Li describes how AI complements human brainstorming under time pressure, drawing from his work at Amazon and startup Stealth.

Li argues AI is no longer just a tool but a true collaborator in creative workflows. Generative models can analyse vast data sets and rapidly suggest alternative concepts, helping teams reimagine product features, marketing strategies, and campaign angles. The shift aligns with broader industry trends.

A McKinsey report from earlier this year highlighted that, while only 1% of companies consider themselves mature in AI use, most are investing heavily in this area. Creative use cases are expected to generate massive value by 2025.

Li notes that the most effective use of AI occurs when it’s treated as a sounding board. He recounts how the quality of ideas improved significantly when AI offered raw directions that humans later refined. The hybrid model is gaining traction across multiple startups and established firms alike.

Still, original thinking remains a hurdle. A recent study covered by PsyPost found that human pairs often outperform AI tools in generating novel ideas during collaborative sessions. While AI offers scale, human teams reported greater creative confidence and deeper originality.

The findings suggest AI may work best at the outset of ideation, followed by human editing and development. Experts recommend setting clear roles for AI in the creative cycle. For instance, tools like ChatGPT or Midjourney might handle initial brainstorming, while humans oversee narrative coherence, tone, and ethics.

The approach is especially relevant in advertising, product design, and marketing, where nuance is still essential. Creatives across X are actively sharing tips and results. One agency leader posted about reducing production costs by 30% using AI tools for routine content work.

The savings freed up time and budget for storytelling and strategy. Others note that using AI to write draft copy or generate design options is becoming common. Yet concerns remain over ethical boundaries.

The Orchidea Innovation Blog cautioned in 2023 that AI often recycles learned material, which can limit fresh perspectives. Recent conversations on X raise alarms about over-reliance, with some fearing AI-generated content will erode originality across sectors, particularly marketing, media, and publishing.

To counter such risks, structured prompting and human-in-the-loop models are gaining popularity. ClickUp’s AI brainstorming guide recommends feeding diverse inputs to avoid homogeneous outputs. Précis AI referenced Wharton research to show that vague prompts often produce repetitive results.

The solution: intentional, varied starting points with iterative feedback loops. Emerging platforms are tackling this in real time. Ideamap.ai, for example, enables collaborative sessions where teams interact with AI visually and textually.
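A minimal sketch of that varied-seeds-plus-iteration pattern, assuming the official openai Python SDK; the angles, prompts, and loop structure are illustrative, not any vendor's documented workflow.

```python
# Structured brainstorming: deliberately varied starting points to
# avoid homogeneous output, with a human-in-the-loop refinement step.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANGLES = [
    "a cost-cutting angle",
    "a sustainability angle",
    "a gen-Z marketing angle",
    "a contrarian angle",
]


def ideas_for(topic: str) -> list[str]:
    """Collect one batch of ideas per seed angle."""
    drafts = []
    for angle in ANGLES:  # diverse starting points, one call each
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[{
                "role": "user",
                "content": f"From {angle}, pitch three ideas for {topic}.",
            }],
        )
        drafts.append(resp.choices[0].message.content)
    return drafts


drafts = ideas_for("a reusable-packaging product launch")
favourite = drafts[0]  # in practice, a human picks and edits here
refined = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": f"Refine this pitch, keeping the tone grounded:\n{favourite}",
    }],
)
print(refined.choices[0].message.content)
```

Keeping the selection and editing steps human preserves the division of labour the research points to: AI for breadth at the outset, people for judgement and coherence afterwards.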

Jabra’s latest insights describe AI as a ‘thought partner’ rather than a replacement, enhancing team reasoning and ideation dynamics without eliminating human roles.

Looking ahead, the business case for AI creativity is strong.

McKinsey projects hundreds of billions in value from AI-enhanced marketing, especially in retail and software. Influencers like Greg Isenberg predict $100 million niches built on AI-led product design. Frank$Shy’s analysis points to a $30 billion creative AI market by 2025, driven by enterprise tools.

Even in e-commerce, AI is transforming operations. Analytics India Magazine reports that brands build eight-figure revenues by automating design and content workflows while keeping human editors in charge. The trend is not about replacement but refinement and scale.

Li’s central message remains relevant: when used ethically, AI augments rather than replaces creativity. Responsible integration supports diverse voices and helps teams navigate the fast-evolving innovation landscape. The future of ideation lies in balance, not substitution.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI sparks fears over future of dubbing

Voice actors across Europe are pushing back against the growing use of AI in dubbing, fearing it could replace human talent in film and television. Many believe dubbing is a creative profession beyond simple voice replication, requiring emotional nuance and cultural sensitivity.

In Germany, France, Italy and the UK, nearly half of viewers prefer dubbed content over subtitles, according to research by GWI. Yet studios are increasingly testing AI tools that replicate actors’ voices or generate synthetic speech, sparking concern across the dubbing industry.

French voice actor Boris Rehlinger, known for dubbing Hollywood stars, says he feels threatened even though AI has not replaced him. He is part of TouchePasMaVF, an initiative defending the value of human dubbing and calling for protection against AI replication.

Voice artists argue that digital voice cloning ignores the craftsmanship behind their performances. As legal frameworks around voice ownership lag behind the technology, many in the industry demand urgent safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Survey finds developers value AI for ideas, not final answers

As AI becomes more integrated into developer workflows, a new report shows that trust in AI-generated results is eroding. According to Stack Overflow’s 2025 Developer Survey, developer use of AI has risen to 84%, up from 76% last year. However, trust in its output has dropped, especially among experienced professionals.

The survey found that 46% of developers now lack trust in AI-generated answers, a sharp rise from 31% in 2024, suggesting growing scepticism despite higher adoption. By contrast, only 3.1% say they highly trust AI output.

Interestingly, trust varies with experience. Beginners were twice as likely to express high confidence in AI, with 6.1% reporting strong trust, compared with just 2.5% among seasoned developers. The results indicate a divide in how AI is perceived across the developer landscape.

Despite doubts, developers continue to use AI tools across various tasks. The vast majority – 78.5% – use AI on an infrequent basis, such as once a month. The pattern holds across experience levels, suggesting cautious and situational usage.

While trust is lacking, developers still see AI as a helpful starting point. Three in five respondents reported favourable views of AI tools overall, one in five viewed them negatively, and the remaining fifth were neutral.

However, that usefulness has limits. Developers were quick to seek human input when unsure about AI responses. Seventy-five percent said they would ask someone when they didn’t trust an AI-generated answer. Fifty-eight percent preferred human advice when they didn’t fully understand a solution.

Ethics and security were also areas where developers preferred human judgement. Again, 58% reported turning to colleagues or mentors to evaluate such risks. Such cases show a continued reliance on human expertise in high-stakes decisions.

Stack Overflow CEO Prashanth Chandrasekar acknowledged the limitations of AI in the development process. ‘AI is a powerful tool, but it has significant risks of misinformation or can lack complexity or relevance,’ he said. He added that AI is best used with a ‘trusted human intelligence layer’.

The data also revealed that developers may not fully trust AI, yet use it to support learning. Forty-four percent of respondents admitted using AI tools to learn how to code, up from 37% last year, and a further 36% use it for work-related growth or career advancement.

The results highlight the role of AI as an educational companion rather than a coding authority: it can help users understand concepts or generate basic examples, but most still want human review. That distinction matters as teams consider how to integrate AI into production workflows.

Some developers are concerned that overreliance on AI could reduce the depth of their problem-solving skills. Others worry about hallucinations — AI-generated content that appears accurate but is misleading or incorrect. Such risks have led to a cautious, layered approach to using AI tools in real-life projects.

Stack Overflow’s findings align with broader industry trends in AI adoption and trust. Tech firms are exploring ways to integrate AI safely, with many prioritising transparency and human oversight. Chandrasekar believes developers are uniquely positioned to help shape AI’s future.

‘By providing a trusted human intelligence layer in the age of AI, we believe the tech enthusiasts of today can play a larger role in adding value,’ he said. ‘They’ll help build the AI technologies and products of tomorrow.’

As AI continues to expand into software development, one thing is clear: trust matters. Developers are open to using AI – but only when it supports, rather than replaces, human judgement. The challenge now is building systems that earn and maintain that trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Italy investigates Meta over AI integration in WhatsApp

Italy’s antitrust watchdog has opened an investigation into Meta Platforms over allegations that the company may have abused its dominant position by integrating its AI assistant directly into WhatsApp.

The Rome-based authority, formally known as the Autorità Garante della Concorrenza e del Mercato (AGCM), announced the probe on Wednesday, stating that Meta may have breached European Union competition regulations.

The regulator claims that the introduction of the Meta AI assistant into WhatsApp was carried out without obtaining prior user consent, potentially distorting market competition.

Meta AI, the company’s virtual assistant designed to provide chatbot-style responses and other generative AI functions, has been embedded in WhatsApp since March 2025. It is accessible through the app’s search bar and is intended to offer users conversational AI services directly within the messaging interface.

The AGCM is concerned that this integration may unfairly favour Meta’s AI services by leveraging the company’s dominant position in the messaging market. It warned that such a move could steer users toward Meta’s products, limit consumer choice, and disadvantage competing AI providers.

‘By pairing Meta AI with WhatsApp, Meta appears to be able to steer its user base into the new market not through merit-based competition, but by ‘forcing’ users to accept the availability of two distinct services,’ the authority said.

It argued that this strategy may undermine rival offerings and entrench Meta’s position across adjacent digital services. In a statement, Meta confirmed it is cooperating fully with the Italian authorities.

The company defended the rollout of its AI features, stating that their inclusion in WhatsApp aimed to improve the user experience. ‘Offering free access to our AI features in WhatsApp gives millions of Italians the choice to use AI in a place they already know, trust and understand,’ a Meta spokesperson said via email.

The company maintains that its approach benefits users by making advanced technology widely available through familiar platforms. The AGCM clarified that its inquiry is being conducted in close cooperation with the European Commission’s relevant offices.

The cross-border collaboration reflects the growing scrutiny Meta faces from regulators across the EU over its market practices and the use of its extensive user base to promote new services.

If the authority finds Meta in breach of EU competition law, the company could face a fine of up to 10 percent of its global annual turnover. Under Article 102 of the Treaty on the Functioning of the European Union, abusing a dominant market position is prohibited, particularly if it affects trade between member states or restricts competition.

To gather evidence, AGCM officials inspected the premises of Meta’s Italian subsidiary, accompanied by the special antitrust unit of the Guardia di Finanza, Italy’s financial police.

The inspections were part of preliminary investigative steps to assess the impact of Meta AI’s deployment within WhatsApp. Regulators fear that embedding AI assistants into dominant platforms could lead to unfair advantages in emerging AI markets.

By relying on its established user base and platform integration, Meta may effectively foreclose competition by making alternative AI services harder to access or less visible to consumers. Such a case would not be the first time Meta has faced regulatory scrutiny in Europe.

The company has been the subject of multiple investigations across the EU concerning data protection, content moderation, advertising practices, and market dominance. The current probe adds to a growing list of regulatory pressures facing the tech giant as it expands its AI capabilities.

The AGCM’s investigation comes amid broader EU efforts to ensure fair competition in digital markets. With the Digital Markets Act in force and the AI Act taking effect, regulators are becoming more proactive in addressing potential risks associated with integrating advanced technologies into consumer platforms.

As the investigation continues, Meta’s use of AI within WhatsApp will remain under close watch. The outcome could set an important precedent for how dominant tech firms may release AI products within widely used communication tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UAE partnership boosts NeOnc’s clinical trial programme

Biotech firm NeOnc Technologies has gained rapid attention after going public in March 2025 and joining the Russell Microcap Index just months later. The company focuses on intranasal drug delivery for brain cancer, allowing patients to administer treatment at home and bypass the blood-brain barrier.

NeOnc’s lead treatment is in Phase 2A trials for glioblastoma patients and is already showing extended survival times with minimal side effects. Backed by a partnership with USC’s Keck School of Medicine, the company is also expanding clinical trials to the Middle East and North Africa under US FDA standards.

A $50 million investment deal with a UAE-based firm is helping fund this expansion, including trials run by Cleveland Clinic through a regional partnership. The trials are expected to be fully enrolled by September, with positive preliminary data already being reported.

AI and quantum computing are central to NeOnc’s strategy, particularly in reducing risk and cost in trial design and drug development. As a pre-revenue biotech, the company is betting that innovation and global collaboration will carry it to the next stage of growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Aeroflot cyberattack cripples Russian flights in major breach

A major cyberattack on Russia’s flagship airline Aeroflot has caused severe disruptions to flights, with hundreds of passengers stranded at airports. Responsibility was claimed by two hacker groups: Ukraine’s Silent Crow and the Belarusian hacktivist collective Belarus Cyber-Partisans.

The attack is among the most damaging cyber incidents Russia has faced since the full-scale invasion of Ukraine in February 2022. Past attacks disrupted government portals and large state-run firms such as Russian Railways, but most resumed operations quickly. This time, the effects were longer-lasting.

Social media showed crowds of delayed passengers packed into Moscow’s Sheremetyevo Airport, Aeroflot’s main hub. The outage affected not only Aeroflot but also its subsidiaries, Rossiya and Pobeda.

Most of the grounded flights were domestic. However, international services to Belarus, Armenia, and Uzbekistan were also cancelled or postponed due to the IT failure.

Early on Monday, Aeroflot issued a statement warning of unspecified problems with its IT infrastructure. The company alerted passengers that delays and disruptions were likely as a result.

Later, Russia’s Prosecutor’s Office confirmed that the outage was the result of a cyberattack. It announced the opening of a criminal case and launched an investigation into the breach.

Kremlin spokesperson Dmitry Peskov described the incident as ‘quite alarming’, admitting that cyber threats remain a serious risk for all major service providers operating at scale.

In a Telegram post, Silent Crow claimed it had maintained access to Aeroflot’s internal systems for over a year. The group stated it had copied sensitive customer data, internal communications, audio recordings, and surveillance footage collected on Aeroflot employees.

The hackers claimed that all of these resources had now either been destroyed or made inaccessible. ‘Restoring them will possibly require tens of millions of dollars. The damage is strategic,’ the group wrote.

Screenshots allegedly showing Aeroflot’s compromised IT dashboards were shared via the same Telegram channel. Silent Crow hinted it may begin publishing the stolen data in the coming days.

It added: ‘The personal data of all Russians who have ever flown with Aeroflot have now also gone on a trip — albeit without luggage and to the same destination.’

The Belarus Cyber-Partisans, who have opposed Belarusian President Alexander Lukashenko’s authoritarian regime for years, said the attack was carefully planned and intended to cause maximum disruption.

‘This is a very large-scale attack and one of the most painful in terms of consequences,’ said group coordinator Yuliana Shametavets. She told The Associated Press that the group spent months preparing the strike and accessed Aeroflot’s systems by exploiting several vulnerabilities.

The Cyber-Partisans have previously claimed responsibility for other high-profile hacks. In April 2024, they said they had breached the internal network of Belarus’s state security agency, the KGB.

Belarus remains a close ally of Russia. Lukashenko, in power for over three decades, has permitted Russia to use Belarusian territory as a staging ground for the invasion of Ukraine and to deploy tactical nuclear weapons on Belarusian soil.

Russia’s aviation sector has already faced repeated interruptions this summer, often caused by Ukrainian drone attacks on military or dual-use airports. Flights have been grounded multiple times as a precaution, disrupting passenger travel.

The latest cyberattack adds a new layer of difficulty, exposing the vulnerability of even the most protected elements of Russia’s transportation infrastructure. While the full extent of the data breach is yet to be independently verified, the implications could be long-lasting.

For now, it remains unclear how long it will take Aeroflot to fully restore services or what specific data may have been leaked. Both hacker groups appear determined to continue using cyber tools as a weapon of resistance — targeting Russia’s most symbolic assets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google backs EU AI Code but warns against slowing innovation

Google has confirmed it will sign the European Union’s General Purpose AI Code of Practice, joining other companies, including major US model developers.

The tech giant hopes the Code will support access to safe and advanced AI tools across Europe, where rapid adoption could add up to €1.4 trillion annually to the continent’s economy by 2034.

Kent Walker, Google and Alphabet’s President of Global Affairs, said the final Code better aligns with Europe’s economic ambitions than earlier drafts, noting that Google had submitted feedback during its development.

However, he warned that parts of the Code and the broader AI Act might hinder innovation by introducing rules that stray from EU copyright law, slow product approvals or risk revealing trade secrets.

Walker explained that such requirements could restrict Europe’s ability to compete globally in AI. He highlighted the need to balance regulation with the flexibility required to keep pace with technological advances.

Google stated it will work closely with the EU’s new AI Office to help shape a proportionate, future-facing approach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Free VPN use surges in UK after online safety law

The UK’s new Online Safety Act has increased VPN use, as websites introduce stricter age restrictions to comply with the law. Popular platforms such as Reddit and Pornhub are either blocking minors or adding age verification, pushing many young users to turn to free VPNs to bypass the rules.

In the days following the Act’s enforcement on 25 July, five of the ten most-downloaded free apps in the UK were VPNs.

However, cybersecurity experts warn that unvetted free VPNs can pose serious risks, with some selling user data or containing malware.

Using a VPN means routing all your internet traffic through an external server, effectively handing the operator of that server access to your browsing data.

While reputable providers like Proton VPN offer safe free tiers supported by paid plans, lesser-known services often lack transparency and may exploit users for profit.

Consumers are urged to check for clear privacy policies, audited security practices and credible business information before using a VPN. Trusted options for safer browsing include Proton VPN, TunnelBear, Windscribe, and hide.me.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!