Coinbase secures approval to operate in India

Coinbase has officially registered with India’s Financial Intelligence Unit (FIU), allowing it to offer crypto trading services in the country, the company announced on Tuesday. The US-based exchange plans to launch its initial retail services later this year, followed by further investments and product rollouts. While a specific timeline has not been disclosed, Coinbase sees India as a key market with strong growth potential.

Interest in cryptocurrency has surged in India, particularly among young investors looking to supplement their incomes. Despite a 30% tax on crypto trading gains—one of the highest rates globally—the sector remains largely unregulated. Other major exchanges operating in the country include CoinDCX, Binance, and KuCoin.

India requires virtual asset service providers to register with the FIU and comply with anti-money laundering regulations. The government is currently reviewing its stance on crypto, influenced by global regulatory trends and recent policy shifts in the US. As the regulatory landscape evolves, Coinbase aims to establish a strong foothold in the Indian market while adhering to local compliance standards.

For more information on these topics, visit diplomacy.edu.

Allstate faces lawsuit for security failures in data breach

New York State has taken legal action against Allstate, accusing its National General unit of mishandling customer data security and failing to report a breach that exposed sensitive information.

The state’s Attorney General, Letitia James, filed the lawsuit in Manhattan, claiming that the breaches, which occurred in 2020 and 2021, resulted in hackers accessing the driver’s license numbers of over 360,000 people.

According to the lawsuit, National General did not notify affected drivers or state agencies about the first breach, which occurred between August and November 2020.

The second, larger breach was discovered three months later, in January 2021. James alleges that National General violated the state’s Stop Hacks and Improve Electronic Data Security Act by failing to protect customer information adequately.

In response, Allstate defended its actions, stating that it had resolved the issue years ago, secured its systems, and offered free credit monitoring to affected consumers.

The lawsuit seeks civil fines of $5,000 per violation, in addition to other remedies. This legal action follows similar penalties imposed on other US companies for data security lapses, including fines for Geico and Travelers.

US drops AI investment proposal against Google

The US Department of Justice (DOJ) has decided to drop its earlier proposal to force Alphabet, Google’s parent company, to sell its investments in AI companies, including its stake in Anthropic, a rival to OpenAI.

The proposal was originally included in a wider initiative to boost competition in the online search market. The DOJ now argues that restricting Google’s AI investments might lead to unintended consequences in the rapidly changing AI sector.

While this move represents a shift in the government’s approach, the DOJ and 38 state attorneys general are continuing their antitrust case against Google. They argue that Google holds an illegal monopoly in the search market and is distorting competition.

The government’s case includes demands for Google to divest its Chrome browser and implement other measures to foster competition.

Google has strongly opposed these efforts, stating that they would harm consumers, the economy, and national security. The company is also planning to appeal the proposals.

As part of the ongoing scrutiny, the DOJ’s latest proposal would require Google to notify the government of any future investments in generative AI, a move intended to curb further concentration of power in the sector.

This case is part of a broader wave of antitrust scrutiny facing major tech companies like Google, Apple, and Meta, as US regulators seek to rein in the market dominance of Big Tech.

Authors challenge Meta’s use of their books in AI training

A lawsuit filed by authors Richard Kadrey, Sarah Silverman, and Ta-Nehisi Coates against Meta has taken a significant step forward, with a federal judge ruling that the case can continue.

The authors allege that Meta used their books to train its Llama AI models without consent, violating their intellectual property rights.

They further claim that Meta intentionally removed copyright management information (CMI) from the works to conceal the alleged infringement.

Meta, however, defends its actions, arguing that the training of AI models qualifies as fair use and that the authors lack standing to sue.

Despite this, the judge allowed the lawsuit to move ahead, acknowledging that the authors’ claims suggest concrete injury, specifically regarding the removal of CMI to hide the use of copyrighted works.

While the lawsuit touches on several legal points, the judge dismissed claims related to the California Comprehensive Computer Data Access and Fraud Act, stating that there was no evidence of Meta accessing the authors’ computers or servers.

Meta’s defence team has continued to assert that the AI training practices were legally sound, though the ongoing case will likely provide more insight into the company’s stance on copyright.

The ruling adds to the growing list of copyright-related lawsuits involving AI models, including one filed by The New York Times against OpenAI. As the debate around AI and intellectual property rights intensifies, this case could set important precedents.

NHS looks into Medefer data flaw after security concerns

The NHS is investigating allegations that a software flaw at the private medical services company Medefer left patient data vulnerable to hacking.

The flaw, discovered in November, affected Medefer’s internal patient record system in the UK, which handles 1,500 NHS referrals monthly.

A software engineer who found the issue believes the vulnerability may have existed for six years, but Medefer denies this claim, stating no data has been compromised.

The engineer discovered that unprotected application programming interfaces (APIs) could have allowed outsiders to access sensitive patient information.

While Medefer has insisted that there is no evidence of any breach, it has commissioned an external security agency to review its systems. The agency confirmed that no breach was found, and the company asserts that the flaw was fixed within 48 hours of being discovered.

Cybersecurity experts have raised concerns about the potential risks posed by the flaw, emphasising that a proper investigation should have been conducted immediately.

Medefer reported the issue to the Information Commissioner’s Office (ICO) and the Care Quality Commission (CQC), both of which found no further action necessary. However, experts suggest that a more thorough response could have been beneficial given the sensitive nature of the data involved.

Labour probe launched into Scale AI’s pay and working conditions

The United States Department of Labor is investigating Scale AI, a data labelling startup backed by Nvidia, Amazon, and Meta, for its compliance with fair pay and working conditions under the Fair Labor Standards Act.

The inquiry began nearly a year ago during Joe Biden’s presidency, with officials examining whether the company meets federal labour regulations. Scale AI has been cooperating with the department to clarify its business practices and the evolving nature of the AI sector.

Founded in 2016, Scale AI plays a crucial role in training advanced AI models by providing accurately labelled data. The company also operates a platform where researchers exchange AI-related insights, with contributors in over 9,000 locations worldwide.

In response to the investigation, a company spokesperson stated that the majority of payments to contributors are made on time, with 90% of payment-related inquiries resolved within three days.

Valued at $14 billion following a late-stage funding round last year, Scale AI serves major clients such as OpenAI, Cohere, Microsoft, and Morgan Stanley.

The company insists that contributor feedback is overwhelmingly positive and maintains that it prioritises fair pay and support for its workforce.

Fast-delivery firms face antitrust scrutiny in India

Fast-delivery giants Zomato, Swiggy, and Zepto are facing an antitrust investigation in India over allegations of deep discounting practices that harm smaller retailers.

The All India Consumer Products Distributors Federation (AICPDF), which represents 400,000 distributors, has filed a case with the Competition Commission of India (CCI) to examine the business practices of these companies.

They claim that the discounting strategies of these platforms result in unfair pricing models that harm traditional retailers.

The quick-commerce sector in India, where products are delivered within minutes from local warehouses, has grown rapidly in recent years. However, this growth has come at the expense of brick-and-mortar stores, which cannot match the discounts offered by online platforms.

A recent survey showed a significant shift in consumer behaviour, with many shoppers reducing their purchases from supermarkets and independent stores due to the appeal of fast-delivery options.

The filing by the AICPDF, which has reviewed the pricing of several popular products, accuses companies like Zepto, Swiggy’s Instamart, and Zomato’s Blinkit of offering products at prices significantly lower than those available in traditional stores.

The practice has raised concerns about the long-term impact on local businesses. The CCI is now set to review the case, which may result in a formal investigation.

As India’s quick-commerce market continues to grow, estimated to reach $35 billion by 2030, the regulatory scrutiny of this sector is intensifying. The outcome of this case could shape the future of the industry, especially as companies like Zepto and Swiggy prepare for further expansion.

Google acknowledges AI being used for harmful content

Google has reported receiving over 250 complaints globally about its AI software being used to create deepfake terrorist content, according to Australia’s eSafety Commission.

The tech giant also acknowledged dozens of user reports alleging that its AI program, Gemini, was being exploited to generate child abuse material. Under Australian law, companies must provide regular updates on their efforts to minimise harm or risk hefty fines.

The eSafety Commission described Google’s disclosure as a ‘world-first insight’ into how AI tools may be misused to produce harmful and illegal content.

Between April 2023 and February 2024, Google received 258 reports of suspected AI-generated extremist material and 86 related to child exploitation. However, the company did not specify how many of these reports were verified.

A Google spokesperson stated that the company strictly prohibits AI-generated content related to terrorism, child abuse, and other illegal activities.

While it uses automated detection to remove AI-generated child exploitation material, the same system is not applied to extremist content.

Meanwhile, the regulator has previously fined platforms like X (formerly Twitter) and Telegram for failing to meet reporting requirements, with both companies planning to appeal.

UK artists raise alarm over AI law proposals

A new proposal by the UK government to alter copyright laws has sparked significant concern among artists, particularly in Devon. The changes would allow AI companies to use content found on the internet, including artwork, to train their models unless the creators opt out. Artists like Sarah McIntyre, an illustrator from Bovey Tracey, argue that such a shift could undermine their rights, making it harder for them to control the use of their work and potentially depriving them of income.

The Devon Artist Network has expressed strong opposition to these plans, warning that they could have a devastating impact on creative industries. They believe that creators should retain control over their work, without needing to actively opt out of its use by AI. While some, like Mike Phillips from the University of Plymouth in the UK, suggest that AI could help artists track copyright violations, the majority of artists remain wary of the proposed changes.

The Department for Science, Innovation and Technology has acknowledged the concerns and confirmed that no decisions have yet been made. However, it has stated that the current copyright framework is limiting the potential of both the creative and AI sectors. As consultations close, the future of the proposal remains uncertain.

Google warns breakup plans could harm economy and security

Google has urged the Trump administration to reconsider efforts to break up the company as part of ongoing antitrust lawsuits.

The meeting with government officials took place last week, according to a source familiar with the matter. The United States Department of Justice (DOJ) is pursuing two cases against Google, focusing on its dominance in search and advertising technology.

Executives at Google have expressed concerns that proposed remedies, including the potential divestment of the Chrome browser and changes to search engine agreements, could negatively impact the American economy and national security.

The DOJ has not yet commented on the discussions. A trial to determine appropriate remedies is set for April, with a final ruling expected in August.

President Trump’s administration is expected to take a softer approach to antitrust enforcement than his predecessor’s.

Industry experts believe this could lead to adjustments in the DOJ’s stance on breaking up Google, potentially reshaping the legal battle over its market power.
