The ruling, from US District Court Judge Amit P. Mehta, bars Google from entering or maintaining exclusive deals that tie the distribution of its products, including Search, Chrome, and Gemini, to other apps or revenue agreements.
The tech giant will also have to share specific search data with rivals and offer search and search ad syndication services to competitors at standard rates.
The ruling comes a year after Judge Mehta found that Google had illegally maintained its monopoly in online search. The Department of Justice brought the case and pushed for stronger measures, including forcing Google to sell off its Chrome browser and Android operating system.
It also sought to end Google’s lucrative agreements with companies like Apple and Samsung, in which it pays billions to be the default search engine on their devices. The judge acknowledged during the trial that these default placements were ‘extremely valuable real estate’ that effectively locked out rivals.
A final judgement has not yet been issued, as Judge Mehta has given Google and the Department of Justice until 10 September to submit a revised plan. A technical committee will be established to help enforce the judgement, which will go into effect 60 days after entry and last for six years.
Experts say the ruling may influence a separate antitrust trial against Google’s advertising technology business, and that the search case itself is likely to face a lengthy appeals process, stretching into 2028.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) have announced a joint effort to clarify the rules for spot cryptocurrency trading. Regulators confirmed that US and foreign exchanges can list spot crypto products, including leveraged and margin products.
The guidance follows recommendations from the President’s Working Group on Digital Asset Markets, which called for rules that keep blockchain innovation within the country.
Regulators said they are ready to review filings, address custody and clearing, and ensure spot markets meet transparency and investor protection standards.
Under the new approach, major venues such as the New York Stock Exchange, Nasdaq, CME Group and Cboe Global Markets could seek to list spot crypto assets. Foreign boards of trade recognised by the CFTC may also be eligible.
The move highlights a policy shift under President Donald Trump’s administration, with Congress and the White House pressing for greater regulatory clarity.
In July, the House of Representatives passed the CLARITY Act, a bill on crypto market structure now before the Senate. Together with the regulators’ statement, the bill marks a key step in aligning US digital asset markets with established financial rules.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Elon Musk’s AI chatbot, Grok, has faced repeated changes to its political orientation, with updates shifting its answers towards more conservative views.
xAI, Musk’s company, initially promoted Grok as neutral and truth-seeking, but internal prompts have steered it on contentious topics. Adjustments included portraying declining fertility as the greatest threat to civilisation and downplaying right-wing violence.
Analyses of Grok’s responses by The New York Times showed that the July updates shifted answers to the right on government and economy, while some social responses remained left-leaning. Subsequent tweaks pulled it back closer to neutrality.
Critics say that system prompts, short instructions such as ‘be politically incorrect’, make it easy to adjust a model’s outputs, but also leave it prone to erratic or offensive responses. A July update saw Grok briefly endorse a controversial historical figure before xAI rolled the change back.
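The steering mechanism critics describe is lightweight in practice: a single sentence prepended to every conversation can shift a chatbot’s tone without retraining the model. Below is a minimal sketch of how a system prompt is attached to a chat request. It assumes an OpenAI-style chat-completions client; the model name and the exact prompt wording are illustrative placeholders, not xAI’s actual configuration.

```python
# Minimal sketch of how a system prompt steers a chatbot's outputs.
# Assumes an OpenAI-style chat-completions client; the model name and
# exact wording here are illustrative, not xAI's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY env variable

# A short steering instruction of the kind critics describe: one sentence
# prepended to every conversation is enough to shift the model's tone.
SYSTEM_PROMPT = (
    "You are a helpful assistant. Do not shy away from making claims "
    "that are politically incorrect, as long as they are well substantiated."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model; any chat model would do
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # the steering happens here
        {"role": "user", "content": "What is the greatest threat to civilisation?"},
    ],
)
print(response.choices[0].message.content)
```

Because the instruction sits outside the model’s weights, it can be changed or removed instantly, which helps explain both why Grok’s orientation could swing between updates and why a careless prompt can produce the erratic responses described above.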
The case highlights growing concerns about political bias in AI systems. Researchers argue that all chatbots reflect the worldviews of their training data, while companies increasingly face pressure to align them with user expectations or political demands.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Meta faces scrutiny after a Reuters investigation found that its AI tools had created deepfake chatbots and images of celebrities without consent. Some bots made flirtatious advances, encouraged meet-ups, and generated photorealistic sexualised images.
The affected celebrities include Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez.
The probe also uncovered a chatbot of 16-year-old actor Walker Scobell producing inappropriate images, raising serious child safety concerns. Meta admitted policy enforcement failures and deleted around a dozen of the bots shortly before Reuters published its report.
A spokesperson acknowledged that intimate depictions of adult celebrities and any sexualised content involving minors should not have been generated.
Following the revelations, Meta announced new safeguards to protect teenagers, including restricting access to certain AI characters and retraining models to reduce inappropriate content.
California Attorney General Rob Bonta called exposing children to sexualised content ‘indefensible,’ and experts warned Meta could face legal challenges over intellectual property and publicity laws.
The case highlights broader concerns about AI safety and ethical boundaries. It also raises questions about regulatory oversight as social media platforms deploy tools that can create realistic deepfake content without proper guardrails.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Apple is moving forward with its integrated approach to AI by testing an internal chatbot designed for retail training. The company is focusing on embedding AI into existing services rather than launching a consumer-facing chatbot like Google’s Gemini or OpenAI’s ChatGPT.
The new tool, Asa, is being tested within Apple’s SEED app, which offers training resources for store employees and authorised resellers. Asa is expected to improve learning by allowing staff to ask open-ended questions and receive tailored responses.
Screenshots shared by analyst Aaron Perris show Asa handling queries about device features, comparisons, and use cases. Although still in testing, the chatbot is expected to expand across Apple’s retail network in the coming weeks.
The development occurs amid broader AI tensions, as Elon Musk’s xAI sued Apple and OpenAI for allegedly colluding to limit competition. Apple’s focus on internal AI tools like Asa contrasts with Musk’s legal action, highlighting disputes over AI market dominance and platform integration.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
China has pledged to rein in excessive competition in AI, signalling Beijing’s desire to avoid wasteful investment while keeping the technology central to its economic strategy.
The National Development and Reform Commission stated that provinces should develop AI in a coordinated manner, leveraging local strengths to prevent duplication and overlap. Officials in China emphasised the importance of orderly flows of talent, capital, and resources.
The move follows President Xi Jinping’s warnings about unchecked local investment. Authorities aim to prevent the kind of overcapacity seen in electric vehicles, which has fuelled deflationary pressures in other industries.
While global investment in data centres has surged, Beijing is adopting a calibrated approach. The state also vowed stronger national planning and support for private firms, aiming to nurture new domestic leaders in AI.
At the same time, policymakers are pushing to attract private capital into traditional sectors, while considering more central spending on social projects to ease local government debt burdens and stimulate long-term consumption.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Mark Zuckerberg’s ambitious plan to assemble a dream team of AI researchers at Meta has instead created internal instability.
High-profile recruits poached from rival firms have begun leaving within weeks of joining, citing cultural clashes and frustration with the company’s working style. Their departures have disrupted projects and unsettled long-time executives.
Meta had hoped its aggressive hiring spree would help the company rival OpenAI, Google, and Anthropic in developing advanced AI systems.
Instead of strengthening the company’s position, the strategy has delayed projects and raised doubts about whether Meta can deliver on its promise of achieving superintelligence.
The new arrivals were given extensive autonomy, fuelling tensions with existing teams and creating leadership friction. Some staff viewed the hires as destabilising, while others expressed concern about the direction of the AI division.
The resulting turnover has left Meta struggling to maintain momentum in its most critical area of research.
As Meta faces mounting pressure to demonstrate progress in AI, the setbacks highlight the difficulty of retaining elite talent in a fiercely competitive field.
Zuckerberg’s recruitment drive, rather than propelling Meta ahead, risks slowing down the company’s ability to compete at the highest level of AI development.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI is preparing to build a significant new data centre in India as part of its Stargate AI infrastructure initiative. The move will expand the company’s presence in Asia and strengthen its operations in its second-largest market by user base.
OpenAI has already registered as a legal entity in India and begun assembling a local team.
The company plans to open its first office in New Delhi later this year. Details regarding the exact location and timeline of the proposed data centre remain unclear, though CEO Sam Altman may provide further information during his upcoming visit to India.
The project represents a strategic step to support the company’s growing regional AI ambitions.
OpenAI’s Stargate initiative, announced by US President Donald Trump in January, involves private sector investment of up to $500 billion for AI infrastructure, backed by SoftBank, OpenAI, and Oracle.
The initiative seeks to develop large-scale AI capabilities across major markets worldwide, with the India data centre potentially playing a key role in the efforts.
The expansion highlights OpenAI’s focus on scaling its AI infrastructure while meeting regional demand. The company intends to strengthen operational efficiency, improve service reliability, and support its long-term growth in Asia by establishing local offices and a significant data centre.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI company Anthropic has reported that its chatbot Claude was misused in cyber incidents, including attempts to carry out hacking operations and employment-related fraud.
The firm said its technology had been used to help write malicious code and assist threat actors in planning attacks. However, it stated that it had been able to disrupt the activity and notify the authorities, and that it is continuing to improve its monitoring and detection systems.
In one case, the company reported that AI-supported attacks targeted at least 17 organisations, including government entities. The attackers allegedly relied on the tool to support decision-making, from choosing which data to target to drafting ransom demands.
Experts note that the rise of so-called agentic AI, which can operate with greater autonomy, has increased concerns about potential misuse.
Anthropic also identified attempts to use AI models to support fraudulent applications for remote jobs at major companies. The AI was reportedly used to create convincing profiles, generate applications, and assist in work-related tasks once jobs had been secured.
Analysts suggest that AI can strengthen such schemes, but most cyber incidents still involve long-established techniques like phishing and exploiting software vulnerabilities.
Cybersecurity specialists emphasise the importance of proactive defence as AI tools evolve. They caution that organisations should treat AI platforms as sensitive systems requiring strong safeguards to prevent their exploitation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Beatoven.ai has launched Maestro, a generative AI model for instrumental music that will later expand to vocals and sound effects. The company claims Maestro is the first fully licensed AI music model, ensuring royalties for artists and rights holders.
Trained on licensed datasets from partners such as Rightsify and Symphonic Music, Maestro avoids scraping issues and guarantees attribution. Beatoven.ai, with two million users and 15 million tracks generated, says Maestro can be fine-tuned for new genres.
The platform also includes tools for catalogue owners, allowing labels and publishers to analyse music, generate metadata, and enhance back-catalogue discovery. CEO Mansoor Rahimat Khan said Maestro builds an ‘AI-powered music ecosystem’ designed to push creativity forward rather than mimic it.
Industry figures praised the approach. Ed Newton-Rex of Fairly Trained said Maestro proves AI can be ethical, while Musical AI’s Sean Power called it a fair licensing model. Beatoven.ai also plans to expand its API into gaming, film, and virtual production.
The launch highlights the wider debate over licensing versus scraping. Scraping often exploits copyrighted works without payment, while licensed datasets ensure royalties, higher-quality outputs, and long-term trust. Advocates argue that licensing offers a more sustainable and fairer path for GenAI music.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!