Bank of America advises clients to invest in crypto

Bank of America is expanding cryptocurrency access for its wealth management clients, recommending a 1–4% allocation to digital assets across portfolios. The move brings crypto exposure to a broader range of clients than the ultra-wealthy clientele the bank previously served.

Starting January 5, the bank will cover four of the largest Bitcoin ETFs: the Bitwise Bitcoin ETF, Fidelity’s Wise Origin Bitcoin Fund, Grayscale’s Bitcoin Trust, and BlackRock’s iShares Bitcoin Trust, which collectively manage over $94 billion in assets.

The recommendation aligns with a broader trend among traditional financial institutions encouraging crypto adoption.

Firms such as Morgan Stanley, BlackRock, and Fidelity have issued similar guidance in the past year. Vanguard recently opened its brokerage platform to ETFs and mutual funds that primarily hold cryptocurrencies.

Chris Hyzy, Chief Investment Officer at Bank of America Private Bank, said that a modest allocation of 1–4% in digital assets may suit investors who are comfortable with high volatility and interested in thematic innovation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

V3.2 models signal renewed DeepSeek momentum

DeepSeek has launched two new reasoning-focused models, V3.2 and V3.2-Speciale. The release marks a shift toward agent-style systems that emphasise efficiency. Both models are positioned as upgrades to the firm’s earlier experimental work.

The V3.2 model incorporates structured thinking into its tool-use behaviour. It supports fast and reflective modes while generating large training datasets. DeepSeek says this approach enables more exhaustive testing across thousands of tasks.

V3.2-Speciale is designed for high-intensity reasoning workloads and contests. DeepSeek reports performance levels comparable to top proprietary systems. Its Sparse Attention method keeps costs down for long and complex inputs.

The launch follows pressure from rapid advances by key rivals. DeepSeek argues the new line narrows capability gaps despite lower budgets. Earlier momentum came from strong pricing, but expectations have increased.

The company views the V3.2 series as supporting agent pipelines and research applications. It frames the update as proof that efficient models can still compete globally. Developers are expected to use the systems for analytical and technical tasks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Thrive Holdings deepens AI collaboration with OpenAI for business transformation

OpenAI and Thrive Holdings have launched a partnership to accelerate enterprise adoption of AI. The work focuses on applying AI to high-volume business functions such as accounting and IT services. Both companies say these areas offer immediate gains in speed, accuracy, and cost efficiency.

OpenAI will place its teams inside Thrive Holdings’ companies to improve core workflows. The partners want a model they can replicate across other sectors. They say embedding AI in real operations delivers better results than external tools.

Executives say AI is reshaping how organisations deliver value in competitive markets. OpenAI’s Brad Lightcap described the collaboration as an example of rapid, organisation-wide transformation. He said the approach could guide other businesses seeking practical pathways to use advanced AI tools.

Thrive Holdings views the initiative as part of a broader shift in how technology is adopted. Founder Joshua Kushner said industry experts are now driving change from within their sectors. He added that Thrive’s portfolio offers the data and domain knowledge needed to refine AI for specialised tasks.

Both partners expect the model to scale into additional business areas as uptake grows. They see long-term opportunities to adapt the framework to more enterprise functions. The ambition is to demonstrate how embedded AI can boost performance and sustain operational improvements.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

SIM-binding mandate forces changes to WhatsApp use in India

India plans to change how major messaging apps operate under new rules requiring SIM binding and frequent re-verification. The directive obliges platforms to confirm that the original SIM remains active, altering long-standing habits around device switching. Services have 90 days to comply with the order.

The Department of Telecom says continuous SIM checks will reduce misuse by linking each account to a live subscriber identity. Companion tools such as WhatsApp Web will automatically log out every six hours. Users will need to relink sessions with a QR code to stay connected.

The rules apply to apps that rely on phone numbers, including WhatsApp, Signal, Telegram, and local platforms. The approach mirrors SIM-bound verification used in banking apps in India. It adds a deeper security layer that goes beyond one-time codes and registration checks.

The change may inconvenience people who use Wi-Fi-only tablets or older devices without an active SIM card. It also affects anyone who relies on WhatsApp Web for work or on multi-device setups under a single number. Messaging apps may need new login systems to ease the shift.

Officials argue that tighter controls are needed to limit cyber fraud and protect consumers. Users may still access services, but with reduced flexibility and more frequent verification. India’s move signals a broader push for stronger digital safeguards across core communications tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Dublin startup raises US$2.5m to protect AI data with encryption

Mirror Security, founded at University College Dublin, has announced a US$2.5 million (approx. €2.15 million) pre-seed funding round to develop what it describes as the next generation of secure AI infrastructure.

The startup’s core product, VectaX, is a fully homomorphic encryption (FHE) engine designed for AI workloads. This technology allows AI systems to process, train or infer on data that remains encrypted, meaning sensitive or proprietary data never has to be exposed in plaintext, even during computation.
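Mirror Security’s engine is proprietary, but the core idea behind computing on encrypted data can be illustrated with a textbook toy scheme. The sketch below uses Paillier encryption, which is only *partially* homomorphic (it supports addition on ciphertexts; fully homomorphic schemes like the one VectaX is described as using also support multiplication). This is a conceptual illustration with tiny demo primes, not VectaX code or a production-safe implementation.

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic encryption.
# Multiplying two ciphertexts yields an encryption of the SUM of
# their plaintexts, so a server can add numbers it cannot read.
# Demo-sized primes only -- never use parameters this small.

def keygen(p=61, q=53):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because we fix g = n + 1
    return (n, n + 1), (lam, mu)  # (public key, private key)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(2, n)    # fresh randomness per ciphertext
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)
c_sum = (c1 * c2) % (pub[0] ** 2)   # addition performed on ciphertexts
print(decrypt(pub, priv, c_sum))    # -> 42
```

The point of the example is the last three lines: the party holding `c1` and `c2` computes a function of the underlying values without ever seeing 12 or 30 in plaintext, which is the property Mirror Security extends to AI inference and training.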

Backed by leading deep-tech investors such as Sure Valley Ventures (SVV) and Atlantic Bridge, Mirror Security plans to scale its engineering and AI-security teams across Ireland, the US and India, accelerate development of encrypted inferencing and secure fine-tuning, and target enterprise markets in the US.

As organisations increasingly adopt AI, often handling sensitive data, Mirror Security argues that conventional security measures (like policy-based controls) fall short. Its encryption-native approach aims to provide cryptographic guarantees rather than trust-based assurances, positioning the company as a ‘trust layer’ for the emerging AI economy.

The Irish startup also announced a strategic partnership with Inception AI (a subsidiary of G42) to deploy its full AI security stack across enterprise and government systems. Mirror has also formed collaborations with major technology players including Intel, MongoDB, and others.

From a digital policy and global technology governance perspective, this funding milestone is significant. It underlines how the increasing deployment of AI, especially in enterprise and government contexts, is creating demand for robust, privacy-preserving infrastructure. Mirror Security’s model offers a potential blueprint for how to reconcile AI’s power with data confidentiality, compliance, and sovereignty.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Apple support scam targets users with real tickets

Cybercriminals are increasingly exploiting Apple’s support system to trick users into surrendering their accounts. Fraudsters open real support tickets in a victim’s name, which triggers official Apple emails and creates a false sense of legitimacy. These messages appear professional, making it difficult for users to detect the scam.

Victims often receive a flood of alerts, including two-factor authentication notifications, followed by phone calls from callers posing as Apple agents. The scammers guide users through steps that appear to secure their accounts, often directing them to convincing fake websites that request sensitive information.

Entering verification codes or following instructions on these fraudulent pages gives attackers access to the account. Even experienced users can fall prey because the emails come from official Apple domains, and the phone calls are carefully scripted to build trust.

Experts recommend checking support tickets directly within your Apple ID account, never sharing verification codes, and reviewing all devices linked to your account. Using antivirus software, activating two-factor authentication, and limiting personal information online further strengthen protection against such sophisticated phishing attacks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia stands firm on under-16 social media ban

Australia’s government defended its under-16 social media ban ahead of its introduction on 10 December. Minister Anika Wells said she would not be pressured by major platforms opposing the plan.

Tech companies argued that bans may prove ineffective, yet Wells maintained firms had years to address known harms. She insisted parents required stronger safeguards after repeated failures by global platforms.

Critics raised concerns about enforcement and the exclusion of online gaming despite widespread worries about Roblox. Two teenagers also launched a High Court challenge, claiming the policy violated children’s rights.

Wells accepted rollout difficulties but said wider social gains in Australia justified firm action. She added that policymakers must intervene when unsafe operating models place young people at risk.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Safran and UAE institute join forces on AI geospatial intelligence

Safran.AI, the AI division of Safran Electronics & Defence, and the UAE’s Technology Innovation Institute have formed a strategic partnership to develop a next-generation agentic AI geospatial intelligence platform.

The collaboration aims to transform high-resolution satellite imagery into actionable intelligence for defence operations.

The platform will combine human oversight with advanced geospatial reasoning, enabling operators to interpret and respond to emerging situations faster and with greater precision.

Key initiatives include agentic reasoning systems powered by large language models, a mission-specific AI detector factory, and an autonomous multimodal fusion engine for persistent, all-weather monitoring.

Under the agreement, a joint team operating across France and the UAE will accelerate innovation within a unified operational structure.

Leaders from both organisations emphasise that the alliance strengthens sovereign geospatial intelligence capabilities and lays the foundations for decision intelligence in national security.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Valentino faces backlash over AI-generated handbag campaign

Italian fashion house Valentino has come under intense criticism after posting AI-generated advertisements for its DeVain handbag, with social media users calling the imagery ‘disturbing’ and ‘sloppy’. A BBC report describes how the brand’s digital-creative collaboration produced a surreal promotional video that quickly drew hundreds of negative comments on Instagram.

The campaign features morphing models, swirling bodies and shifting Valentino logos, all rendered by generative AI. Although the post clearly labels the material as AI-produced, many viewers noted that the brand’s reliance on the technology made the luxury product appear less appealing.

Commenters accused the company of prioritising efficiency over artistry and argued that advertising should showcase human creativity rather than automated visuals. Industry analysts have noted that the backlash reflects broader tensions within the creative economy.

Getty Images executive Dr Rebecca Swift said audiences often view AI-generated material as ‘less valuable’, mainly when used by luxury labels. Others warned that many consumers interpret the use of generative AI as a sign of cost-cutting rather than innovation.

Brands including H&M and Guess have faced similar criticism for recent AI-based promotional work, fuelling broader concerns about the displacement of models, photographers and stylists.

While AI is increasingly adopted across fashion to streamline design and marketing, experts say brands risk undermining the emotional connection that drives luxury purchasing. Analysts argue that without a compelling artistic vision at their core, AI-generated campaigns may make high-end labels feel less human at a time when customers are seeking more authenticity, not less.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Jorja Smith’s label challenges ‘AI clone’ vocals on viral track

A dispute has emerged after FAMM, the record label representing Jorja Smith, alleged that the viral dance track I Run by Haven used an unauthorised AI clone of the singer’s voice.

A BBC report describes how the song gained traction on TikTok before being removed from streaming platforms following copyright complaints.

The label said it wanted a share of royalties, arguing that both versions of the track, the original release and a re-recording with new vocals, infringed Smith’s rights and exploited the creative labour behind her catalogue.

FAMM said the issue was bigger than one artist, warning that fans had been misled and that unlabelled AI music risked becoming ‘the new normal’. Smith later shared the label’s statement, which characterised artists as ‘collateral damage’ in the race towards AI-driven production.

Producers behind I Run confirmed that AI was used to transform their own voices into a more soulful, feminine tone. Harrison Walker said he used Suno, generative software sometimes called the ‘ChatGPT for music’, to reshape his vocals, while fellow producer Waypoint admitted employing AI to achieve the final sound.

They maintain that the songwriting and production were fully human and shared project files to support their claim.

The controversy highlights broader tensions surrounding AI in music. Suno has acknowledged training its system on copyrighted material under the US ‘fair use’ doctrine, while record labels continue to challenge such practices.

Even as the AI version of I Run was barred from chart eligibility, its revised version reached the UK Top 40. At the same time, AI-generated acts such as Breaking Rust and hybrid AI-human projects like Velvet Sundown have demonstrated the growing commercial appeal of synthetic vocals.

Musicians and industry figures are increasingly urging stronger safeguards. FAMM said AI-assisted tracks should be clearly labelled, and added that it would distribute any royalties to Smith’s co-writers in proportion to their contributions to her catalogue, arguing that if the AI relied on her work, any compensation should flow back to it.

The debate continues as artists push back more publicly, including through symbolic protests such as last week’s vinyl release of silent tracks, which highlighted fears over weakened copyright protections.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!