Databricks has secured a fresh funding round that pushes its valuation beyond $100bn, cementing its place among the world’s most valuable private tech firms. The Series K deal marks a sharp rise from the company’s $62bn figure in late 2024 and underscores investor confidence in its long-term AI strategy.
The new capital will accelerate Databricks’ global expansion, fuel acquisitions in the AI space, and support product innovation. Upcoming launches include Agent Bricks, a platform for enterprise-grade AI agents, and Lakebase, a new operational database that extends the company’s ecosystem.
Chief executive Ali Ghodsi said the round was oversubscribed, reflecting strong investor demand. He emphasised that businesses can use their enterprise data to build secure AI apps and agents, momentum he said is driving Databricks’ growth across its 15,000 customers.
The company has also expanded its role in the broader AI ecosystem through partnerships with Microsoft, Google Cloud, Anthropic, SAP, and Palantir. Last year, it opened a European headquarters in London to cement the UK as a key market and strengthen ties with global enterprises.
Databricks has avoided confirming an IPO timeline, though Ghodsi told CNBC that investor appetite surged after design software firm Figma’s listing. With Klarna now eyeing a return to New York, Databricks’ soaring valuation highlights how leading AI firms continue to attract capital even as market conditions shift.
TeraWulf has secured a $3.2 billion financial backstop from Google to develop a 160-megawatt data centre at its Lake Mariner site in New York. Google will receive warrants for 32.5 million shares, lifting its stake in TeraWulf to about 14%.
Unlike its existing Bitcoin mining activities, the new deal focuses exclusively on AI and high-performance computing (HPC) workloads. TeraWulf confirmed it will maintain its Bitcoin mining operations but has no plans for expansion in that area.
The pivot reflects a broader trend in the mining industry, where companies are increasingly shifting capacity toward AI following the April 2024 halving, which cut block rewards in half.
Executives highlighted that while Bitcoin mining offers immediate cash flow and grid flexibility, the long-term growth lies in powering AI and HPC demand. Research from VanEck suggests that if miners redirected just 20% of their power toward AI hosting, the industry could see $13.9 billion in additional annual revenue.
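For a sense of the arithmetic behind such estimates, the sketch below models hosting revenue from redirected mining capacity. Every input (fleet capacity, hosting price) is a hypothetical assumption chosen for illustration; it does not reproduce VanEck’s own figures.

```python
# Back-of-envelope revenue model for redirecting mining capacity to AI hosting.
# All inputs are hypothetical assumptions, not VanEck's underlying figures.
capacity_mw = 1_000        # assumed total mining fleet capacity (MW)
fraction_to_ai = 0.20      # the 20% share cited above
hours_per_year = 8_760
price_per_mwh = 150.0      # assumed AI/HPC hosting price (USD per MWh)

additional_revenue = capacity_mw * fraction_to_ai * hours_per_year * price_per_mwh
print(f"Additional annual revenue: ${additional_revenue:,.0f}")  # ~$262.8 million
```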
TeraWulf’s leadership said the partnership with Google positions the company as a key player in building next-generation digital infrastructure.
Google has introduced a new feature in Docs that allows Workspace subscribers to turn written documents into audio using its Gemini AI assistant.
The tool produces natural-sounding voices, offers playback controls such as pausing and rewinding, and even highlights text as it is read. The rollout marks a step toward transforming Docs from a simple text editor into a multimedia platform that serves both accessibility and productivity needs.
Available under the Tools menu, the feature caters to auditory learners, professionals on the move, and users with visual impairments.
Gemini provides several AI voice options and synchronises narration with text, offering an audiobook-like experience that could change how people review drafts, collaborate remotely, or proofread reports.
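The Docs feature itself is available only through the interface, but for readers curious what programmatic text-to-audio looks like on Google’s stack, here is a minimal sketch using the separate Cloud Text-to-Speech API. It illustrates the general capability, not the Gemini-in-Docs feature, and the voice and output settings are arbitrary choices.

```python
# Minimal sketch: converting text to audio with Google's Cloud Text-to-Speech API.
# Illustrates programmatic text-to-audio in general, not the Docs/Gemini feature.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="This is a draft ready for review."),
    voice=texttospeech.VoiceSelectionParams(language_code="en-GB"),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)

with open("draft_audio.mp3", "wb") as out:
    out.write(response.audio_content)  # playable narration of the input text
```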
The audio tool is limited to select Workspace plans, including Business, Enterprise, and Education, reflecting Google’s strategy of tying advanced AI functions to premium tiers.
Analysts believe the integration could encourage organisations to upgrade, especially as Google seeks to keep pace with rivals such as Microsoft, which has similar Copilot features in Office.
Looking ahead, experts suggest Gemini’s audio capabilities could expand to real-time translation and interactive playback.
By weaving audio into Docs, Google strengthens its position in the growing competition over AI-powered productivity while pushing for more inclusive and efficient workflows.
Wyoming has launched the Frontier Stable Token (FRNT), becoming the first US state to issue a government-backed stablecoin. The initiative aims to modernise payments for citizens and businesses, offering a secure and efficient way to transact.
The token is fully reserved, backed by dollars and short-term treasuries held in trust, and structured to be 2% over-collateralised. State officials emphasised that this design strengthens confidence and avoids the risks often linked to privately issued stablecoins.
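As a simple illustration of what the 2% over-collateralisation rule means in practice, the sketch below checks whether hypothetical reserve and issuance figures meet the target. The numbers are invented for the example and are not FRNT data.

```python
# Illustrative check of the 2% over-collateralisation target described above.
# The token and reserve figures are hypothetical, not actual FRNT data.
OVER_COLLATERAL_RATIO = 1.02   # reserves must cover 102% of tokens outstanding

def is_fully_backed(tokens_outstanding: float, reserves_usd: float) -> bool:
    """True if reserves meet the stated over-collateralisation target."""
    return reserves_usd >= tokens_outstanding * OVER_COLLATERAL_RATIO

print(is_fully_backed(10_000_000, 10_200_000))  # True: 102% coverage
print(is_fully_backed(10_000_000, 10_100_000))  # False: only 101% coverage
```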
The launch was announced during the Wyoming Blockchain Symposium and coincided with new federal legislation, the GENIUS Act, which sets more explicit rules for stablecoin issuers.
Ahead of the rollout, Wyoming tested a blockchain-based payment to a government contractor, demonstrating the token’s ability to reduce costs and streamline transactions.
By introducing FRNT, Wyoming has positioned itself as a digital asset pioneer within the US. The move reflects growing confidence in stablecoins, which have already reached a $260 billion market and could expand to $1 trillion within years.
The EU has engaged in talks with the Bangladesh Telecommunication Regulatory Commission to strengthen cooperation on data protection, cybersecurity, and the country’s digital economy.
The meeting was led by EU Ambassador Michael Miller and BTRC Chairman Major General (retd) Md Emdad ul Bari.
The EU emphasised safeguarding fundamental rights while encouraging innovation and investment. With opportunities in broadband expansion, 5G deployment, and last-mile connectivity, the EU reaffirmed its commitment to supporting Bangladesh’s vision for a secure and inclusive digital future.
Both parties agreed to deepen collaboration, with the EU offering technical expertise under its Global Gateway strategy to help Bangladesh build a safer and more connected digital landscape.
Google’s Cloud Experience lead Hayete Gallot says developer interest in sovereign cloud solutions is rising sharply as AI adoption heightens concerns about data control. More clients are asking to control how and where their data is stored, processed, and encrypted within public cloud environments.
In July, Microsoft said it could not guarantee full cloud data sovereignty, increasing pressure on rivals to offer stronger protections.
Gallot noted that sovereignty is about more than where data sits: controls such as encryption, data ownership, and administrative access are now top priorities for businesses.
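One way to picture sovereignty beyond location is client-side encryption, where the customer, not the provider, holds the keys. The sketch below uses the open-source `cryptography` library to show the principle in generic terms; it is not Google’s sovereign-cloud tooling.

```python
# Generic illustration of customer-held ("hold your own key") encryption:
# data is encrypted before it leaves the customer's environment, so the cloud
# provider only ever stores ciphertext. Not Google's sovereign-cloud tooling.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # key stays with the customer, never uploaded
cipher = Fernet(key)

record = b"sensitive customer data"
ciphertext = cipher.encrypt(record)  # only this ciphertext is sent to the cloud

# Only the key holder can recover the original data:
assert cipher.decrypt(ciphertext) == record
```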
On AI, Gallot dismissed fears that assistants will replace developers, saying skills like prompt writing still require critical thinking.
She believes modern developers must adapt, comparing today’s AI tools to learning older languages like Pascal or Fortran.
Meta has introduced AI-powered translation tools for creators on Instagram and Facebook, allowing reels to be dubbed into other languages with automatic lip syncing.
The technology uses the creator’s voice instead of a generic substitute, ensuring tone and style remain natural while lip movements match the dubbed track.
The feature currently supports English-to-Spanish and Spanish-to-English, with more languages expected soon. On Facebook, it is limited to creators with at least 1,000 followers, while all public Instagram accounts can use it.
Viewers automatically see reels in their preferred language, although translations can be switched off in settings.
Through Meta Business Suite, creators can also upload up to 20 custom audio tracks per reel, offering manual control instead of relying only on automated translations. Audience insights segmented by language allow performance tracking across regions, helping creators expand their reach.
Meta has advised creators to prioritise face-to-camera reels with clear speech instead of noisy or overlapping dialogue.
The rollout follows a significant update to Meta’s Edits app, which added new editing tools such as real-time previews, silence-cutting and over 150 fresh fonts to improve the Reels production process.
South Korea is advancing plans for a won-denominated stablecoin as the Financial Services Commission (FSC) drafts a regulatory framework. The proposal will set rules for issuance, collateral, and controls, marking South Korea’s first unified approach to stablecoins.
Political and industry momentum has been growing under pro-crypto President Lee Jae-myung. Surveys show strong public interest, while USD-backed stablecoins dominate local trading and remittances.
Eight major banks are collaborating on a joint won-based token, seeking regulatory approval to maintain competitiveness and reduce reliance on foreign-issued coins.
The private sector has already launched South Korea’s first won-pegged stablecoin. On 5 August, entertainment platform fanC and software firm Initech unveiled KRWIN, pegged 1:1 to the Korean won.
The pilot tests transferability and real-world use in payments, remittances, and tourism, and a trademark application hints at plans for a broader rollout.
Regional interest in stablecoins is rising across Asia, with Japan and Hong Kong also exploring initiatives. Dollar-backed stablecoins like USDT and USDC still dominate, keeping competition and adoption timelines uncertain despite won-pegged token launches.
At least 5 billion people worldwide lack access to justice, a human right enshrined in international law. In many regions, particularly low- and middle-income countries, millions face barriers to justice, ranging from their socioeconomic position to failures of the legal system itself. Meanwhile, AI has entered the legal sector at full speed and may offer legitimate solutions to bridge this justice gap.
Through chatbots, automated document review, predictive legal analysis, and AI-enabled translation, AI holds promise to improve efficiency and accessibility. Yet its rapid uptake around the world also signals a deeper digitalisation of legal systems themselves.
While it may serve as a tool to break down access barriers, AI legal tools could also introduce the automation of bias in our judicial systems, unaccountable decision-making, and act as an accelerant to a widening digital divide. AI is capable of meaningfully expanding equitable justice, but its implementation must safeguard human rights principles.
Improving access to justice
Across the globe, AI legal assistance pilot programmes are underway. The UNHCR piloted an AI agent in Jordan to overcome legal communication barriers: it transcribes, translates, and organises refugee queries, helping users streamline caseload management, which is key to keeping operations running smoothly even under financial strain.
NGOs working to increase access to justice, such as Migrasia in Hong Kong, have begun using AI-powered chatbots to triage legal queries from migrant workers, offering 24/7 multilingual legal assistance.
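To make the triage step concrete, here is a minimal sketch of how an LLM-backed chatbot might route incoming queries into broad legal categories. It assumes the OpenAI Python client purely for illustration; the model name, categories, and prompt are invented and do not describe Migrasia’s actual system.

```python
# Illustrative LLM-based triage of legal queries into broad categories.
# Assumes the OpenAI Python client; categories and prompt are hypothetical.
from openai import OpenAI

CATEGORIES = ["wage dispute", "contract termination", "visa or immigration", "other"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage(query: str) -> str:
    """Classify an incoming query into one broad legal category."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Classify the user's legal question into exactly one of: "
                        + ", ".join(CATEGORIES) + ". Reply with the category only."},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content.strip().lower()

print(triage("My employer has not paid me for two months. What can I do?"))
```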
These tools are clearly designed to assist rather than replace human legal experts, and they are already showing the potential to significantly reduce delays by streamlining processes. In the UK, AI transcription tools are being used to give victims of serious sexual crimes access to judges’ sentencing remarks and explanations of legal language, enhancing transparency for victims, especially those seeking emotional closure.
Even though these programmes are still pilots, a UNESCO survey found that 44% of judicial workers across 96 countries already use AI tools, such as ChatGPT, for tasks like drafting and translating documents. The Moroccan judiciary, for example, has already integrated AI technology into its legal system.
AI tools help judges prepare judgments and streamline legal document preparation, allowing faster drafting in a multilingual environment. Soon, AI-powered case analysis based on prior case data may also provide legal experts with predictive outcomes. AI tools have the opportunity, and are already beginning, to break down barriers to justice and ultimately improve the just application of the law.
Risking human rights
While AI-powered legal assistance can provide affordable access, improve outreach to rural or marginalised communities, close linguistic divides, and streamline cases, it also poses a serious risk to human rights. The most prominent concerns surround bias and discrimination, as well as widening the digital divide.
Deploying AI without transparency can allow algorithmic systems to perpetuate systemic inequalities, such as racial or ethnic biases. Meanwhile, black box decision-making, through AI tools with unexplainable outputs, can make legal decisions difficult to challenge, undermining due process and the right to a fair trial.
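Bias of this kind can at least be measured. The sketch below computes one widely used fairness indicator, the disparate impact ratio (the ‘80% rule’), on invented outcome data; the groups and figures are hypothetical and not drawn from any real judicial system.

```python
# Illustrative fairness check: disparate impact ratio on hypothetical outcomes.
# A ratio below 0.8 for a group is a common red flag for potential bias.
def disparate_impact(outcomes: dict[str, list[int]], reference: str) -> dict[str, float]:
    """outcomes maps group -> list of 1 (favourable) / 0 (unfavourable) decisions."""
    rates = {group: sum(vals) / len(vals) for group, vals in outcomes.items()}
    return {group: rate / rates[reference] for group, rate in rates.items()}

sample = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% favourable outcomes
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% favourable outcomes
}
print(disparate_impact(sample, reference="group_a"))
# {'group_a': 1.0, 'group_b': 0.5} -> group_b falls well below the 0.8 threshold
```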
Experts emphasise that the integration of AI into legal systems must support human judgment rather than replace it outright. Whether AI is biased by its training data or simply becomes a black box over time, its use requires foresighted governance and meaningful human oversight.
Additionally, AI will greatly affect economic justice, especially for those in low-income or marginalised communities. Many legal professionals lack the training and skills needed to use AI tools effectively, and in many legal systems, lawyers, judges, clerks, and assistants do not feel confident explaining AI outputs or monitoring their use.
This education gap undermines the accountability and transparency needed to integrate AI meaningfully, and it can lead to misuse of the technology, such as unverified translations that result in legal errors.
While AI improves efficiency, it may erode public trust when legal actors fail to use it correctly or when the technology reflects systemic bias. The Texas judiciary in the US warned about this concern in an opinion detailing fears about integrating opaque systems into the administration of justice. Public trust in the legal system is already eroding in the US, with just over a third of Americans expressing confidence in 2024.
Incorporating AI into the legal system threatens to erode what public faith remains. Meanwhile, those without digital connectivity or literacy education may be further excluded from justice. Many AI tools are developed by for-profit actors, raising questions about the accessibility of justice in an AI-powered legal system. Furthermore, AI providers will have access to sensitive case data, which poses a risk of misuse and even surveillance.
The policy path forward
For AI to be integrated into legal systems and help bridge the justice gap, it must assist human judges, lawyers, and other legal actors rather than replace them. To do so, it must be transparent, accountable, and a supplement to human reasoning. UNESCO and some regional courts in Eastern Africa advocate judicial training programmes, thorough guidelines, and toolkits that promote the ethical use of AI.
The focus of legal AI education must be to improve AI literacy, teach bias awareness, and inform users of their digital rights. Legal actors must keep pace with the innovation and integration of AI; they belong at the core of policy discussions, as they understand existing norms and see firsthand how the technology affects human rights.
Other actors also have a role in this discussion. A multistakeholder approach centred on existing human rights frameworks, such as the Toronto Declaration, is the path to effective and workable policy. Closing the justice gap with AI hinges on the public having access to the technology and understanding how it is used in their legal systems. Solutions that demystify black box decisions will be key to maintaining and improving public confidence in legal systems.
The future of justice
AI has the transformative capability to help bridge the justice gap by expanding reach, streamlining operations, and reducing costs. It can become a tool for the fair application of justice and drive powerful improvements in the inclusiveness of our legal systems.
However, it also risks deepening inequalities and eroding public trust. AI integration must be governed by the human rights norms of transparency and accountability, with regulation built on education and discussion grounded in ethical frameworks. Now is the time to invest in digital literacy and legal empowerment, ensuring that AI tools are contestable and serve as human-centric support.