Meta Platforms plans to invest up to $65 billion in 2025 to strengthen its artificial intelligence infrastructure, positioning itself against competitors OpenAI and Google. Chief Executive Mark Zuckerberg announced the plans, which include ramped-up hiring for AI roles and the development of a massive 2-gigawatt data centre with a footprint large enough to cover much of Manhattan.
The company, a significant buyer of Nvidia’s AI chips, aims to have over 1.3 million graphics processors in place by the end of the year. Meta intends to bring about 1 gigawatt of computing capacity online in 2025, a pivotal step in its strategy. Zuckerberg highlighted the transformative potential of AI, predicting its influence on Meta’s products and business over the coming years.
Competition in the AI sector has intensified, with companies like Microsoft and Amazon also committing tens of billions to AI infrastructure. Meta’s announcement follows news of Stargate, a $500 billion AI venture involving OpenAI, SoftBank, and Oracle. Analysts suggest Meta’s timing underscores its determination to remain a key player in the AI race.
Meta has distinguished itself with its open-source Llama AI models, which are freely accessible to consumers and businesses. Zuckerberg expects Meta’s AI assistant, already serving 600 million users, to reach over 1 billion by 2025. The planned investment significantly exceeds previous spending levels, signalling Meta’s commitment to leading in the rapidly evolving AI landscape.
Some US TikTok users are voicing concerns over what they perceive as heightened content moderation following the app’s return. The platform, owned by China’s ByteDance, faced a temporary ban over national security concerns before being revived through an executive order. Although TikTok insists its policies remain unchanged, many users report noticeable differences in their experience.
Content creators claim that livestreams are less frequent and posts are being flagged or removed for guideline violations at higher rates. Some allege the platform has been restricting searches, issuing misinformation warnings, and deleting previously acceptable content, such as comments mentioning ‘Free Palestine’ or referencing political figures. TikTok asserts it does not permit violent or hateful content and blames temporary instability during the restoration of its US operations.
Prominent creators have shared their struggles. Comedian Pat Loller reported his satirical video on Elon Musk faced sharing restrictions, while political commentator Danisha Carter’s account was permanently banned for alleged policy violations. Other users describe strikes against seemingly harmless content, fuelling suspicions that moderation may target specific identities or viewpoints.
The controversy has revived debates about censorship and freedom of speech on social media platforms. As TikTok navigates its future, including potential acquisition by a US buyer, creators and users alike question the impact of these changes on online expression.
Meta has come under scrutiny after its AI chatbot failed to identify the current US president correctly. Despite Donald Trump’s inauguration on Monday, the chatbot continued to name Joe Biden as president through Thursday. The error led Meta to activate its high-priority troubleshooting protocol, a ‘site event’, to address the issue urgently.
The incident marked at least the third emergency Meta faced this week during the US presidential transition. Other problems included users finding themselves following Trump administration profiles they had previously unfollowed, and hashtag search errors on Instagram. Meta attributed the following issue to delays in transferring White House accounts, which affected ‘unfollow’ requests.
Complaints also arose after searches for Democratic hashtags were blocked while Republican hashtags displayed results normally. Meta acknowledged the problem, saying it affected searches for a variety of hashtags across the platform. These errors come amid broader platform changes, including scrapping fact-checking programs and reshaping its leadership.
Critics have linked the missteps to perceived shifts in Meta’s political alignment. CEO Mark Zuckerberg’s attendance at Trump’s inauguration and recent strategic moves, such as appointing Trump allies to key positions, have fuelled debate over the platform’s neutrality.
Germany’s interior minister, Nancy Faeser, has called on social media companies to take stronger action against disinformation ahead of the federal parliamentary election on 23 February. Faeser urged platforms like YouTube, Facebook, Instagram, X, and TikTok to label AI-manipulated videos, clearly identify political advertising, and ensure compliance with European laws. She also emphasised the need for platforms to report and remove criminal content swiftly, including death threats.
Faeser met with representatives of major tech firms to underline the importance of transparency in algorithms, warning against the risk of online radicalisation, particularly among young people. Her concerns come amidst growing fears of disinformation campaigns, possibly originating from Russia, that could influence the upcoming election. She reiterated that platforms must ensure they do not fuel societal division through unchecked content.
Calls for greater accountability in the tech industry are gaining momentum. At the World Economic Forum in Davos, Spanish Prime Minister Pedro Sánchez criticised social media owners for enabling algorithms that erode democracy and “poison society.” Faeser’s warnings highlight the growing international demand for stronger regulations on social media to safeguard democratic processes.
Google secured an injunction from London’s High Court on Wednesday, preventing the enforcement of Russian legal judgments against the company. The rulings related to lawsuits filed by Russian entities, including Tsargrad TV and RT, over the closure of Google and YouTube accounts. Judge Andrew Henshaw granted the permanent injunction, citing Google’s terms and conditions, which require disputes to be resolved in English courts.
The Russian judgments included severe ‘astreinte’ penalties, which compounded daily and amounted to astronomical sums. Google’s lawyers argued that some fines levied on its Russian subsidiary reached as much as an undecillion roubles, a figure with 36 zeroes. Judge Henshaw noted that the fines far exceeded the entire world’s GDP, supporting the court’s decision to block their enforcement.
A Google spokesperson expressed satisfaction with the ruling, criticising Russia’s legal actions as efforts to restrict information access and penalise compliance with international sanctions. Since 2022, Google has taken measures such as blocking over 1,000 YouTube channels, including state-sponsored news outlets, and suspending monetisation of content promoting Russia’s actions in Ukraine.
The Indian government has issued notices to ride-hailing companies Ola and Uber, launching an investigation into allegations of price discrimination. Concerns have arisen over reports and user complaints suggesting that iPhone users are being charged significantly higher fares than Android users for the same rides. This investigation, led by the Central Consumer Protection Authority (CCPA), aims to determine whether these price discrepancies are indeed occurring and whether they constitute unfair trade practices.
The government has previously expressed strong opposition to differential pricing, deeming it an unfair and discriminatory practice. India is a crucial market for both Ola and Uber, with intense competition among various ride-hailing services. The outcome of this investigation could have significant implications for the industry, potentially impacting pricing models and consumer trust.
Beyond the ride-hailing sector, the CCPA will also examine potential pricing disparities in other sectors, including food delivery and online ticketing platforms. The broader investigation aims to identify and address any instances where consumers may be facing discriminatory pricing based on factors such as the device they use or other personal characteristics.
Ensuring fair and transparent pricing practices in the digital economy is crucial. As technology continues to shape our daily lives, it is essential to address concerns about potential algorithmic biases and discriminatory practices that may be embedded within digital platforms. The Indian government’s action sends a clear message that such practices will not be tolerated and that consumer protection remains a top priority.
LinkedIn, owned by Microsoft, faces a class-action lawsuit from Premium customers who allege that the platform improperly shared their private messages with third parties to train AI models. According to the complaint, LinkedIn introduced a new privacy setting last August that allowed users to control the sharing of their data, yet failed to adequately inform them that their messages were being used for AI training.
Customers claim that a stealthy update to LinkedIn’s privacy policy on 18 September outlined this data usage, while also stating that opting out of data sharing would not prevent past training from being utilised.
The plaintiffs, representing millions of Premium users, seek damages for breaches of contract and violations of California’s unfair competition laws. In addition, they demand compensation of $1,000 for each individual affected by alleged violations of the federal Stored Communications Act. The lawsuit highlights concerns over the potential misuse of customer data, asserting that LinkedIn deliberately obscured its practices to evade scrutiny regarding user privacy.
LinkedIn has denied the allegations, stating that the claims lack merit. The legal action arose just hours after President Donald Trump announced a significant AI investment initiative, backed by Microsoft and other major companies. The case, De La Torre v. LinkedIn Corp, was filed in federal district court in San Jose, California.
With privacy becoming an increasingly crucial issue, the implications of this lawsuit could resonate throughout the tech industry. Customers are scrutinising platforms’ commitments to safeguarding personal information, especially in the context of rapidly evolving AI technologies.
UK citizens will soon be able to carry essential documents, such as passports, driving licences, and birth certificates, in a digital wallet on their smartphones. The plan was unveiled by Peter Kyle, the Secretary of State for Science, Innovation and Technology, as part of a broader initiative to streamline interactions with government services. The digital wallet, set to launch in June, aims to simplify tasks like booking appointments and managing government communications.
Initially, the digital wallet will hold a driving licence and a veteran card, with plans to add other documents like student loans, vehicle tax, and benefits. The government is also working with the Home Office to include digital passports, although these will still exist alongside physical versions. The app will be linked to an individual’s ID and could be used for various tasks, such as sharing certification or claiming welfare discounts.
Security and privacy concerns have been addressed, with recovery systems in place for lost phones and strong data protection measures. Kyle emphasised that the app complies with current data laws and that features like facial recognition would enhance security. He also reassured that while the system will be convenient for smartphone users, efforts will be made to ensure those without internet access aren’t left behind.
The technology, developed in the six months since Labour took power, is part of a push to modernise government services. Kyle believes the new digital approach will help create a more efficient and user-friendly relationship between citizens and the state, transforming the public service experience.
OpenAI has told an Indian court that removing training data used for its ChatGPT service would conflict with its legal obligations in the United States. The company, backed by Microsoft, is defending a copyright lawsuit filed by Indian news agency ANI, which accuses OpenAI of using its content without permission and demands the deletion of ANI’s data from ChatGPT’s memory.
In a January 10 filing, OpenAI argued that Indian courts lack jurisdiction as the company has no physical presence or data servers in India. It also emphasised its legal obligation in the US to preserve training data while litigation is ongoing. OpenAI denied wrongdoing, asserting its systems make fair use of publicly available data, a stance it has maintained in similar copyright disputes globally.
ANI insists the Delhi court has the authority to rule on the case, citing concerns over unfair competition and alleging that ChatGPT reproduces its content verbatim. OpenAI, however, countered that ANI manipulated prompts to elicit such responses. The court is set to hear the case on January 28, marking a key moment in India’s scrutiny of AI and copyright law.
Britain’s Competition and Markets Authority (CMA) has opened an investigation into the dominance of Apple and Google in the smartphone ecosystem. The probe will examine their operating systems, app stores, and browsers to determine whether their ‘strategic market status’ stifles competition and innovation, particularly for businesses developing content and services.
CMA Chief Executive Sarah Cardell emphasised the potential for more competitive mobile ecosystems to drive innovation and boost economic growth in the UK. Both Apple and Google defended their practices, with Apple highlighting its ecosystem’s support for jobs in Britain and Google pointing to Android’s openness as a driver of choice and affordability.
The investigation, the CMA’s second under new regulatory powers, will explore whether Apple and Google are leveraging their dominance unfairly by prioritising their apps and services or imposing restrictive terms on developers. A conclusion is expected by October 22, 2025, as Britain continues to tighten its oversight of major tech companies.