Frank McCourt’s Project Liberty, together with a group of partners, has formally submitted a bid to acquire TikTok’s US assets from ByteDance. The consortium announced its intentions just ahead of ByteDance’s January 19 deadline to sell the platform or face a ban under legislation signed by President Joe Biden in April.
The group has secured sufficient financial backing, with interest from private equity funds, family offices, and high-net-worth individuals, as well as debt financing from a leading US bank. The proposed value of the deal has not been disclosed.
McCourt stated the goal is to keep TikTok accessible to millions of US users without relying on its current algorithm while preventing a ban. Efforts are underway to engage with ByteDance, President-elect Trump, and the incoming administration to finalise the deal.
Nvidia has voiced strong opposition to a reported plan by the Biden administration to impose new restrictions on the export of AI chips, urging the outgoing president to avoid making a decision that could impact the incoming Trump administration. The company warned that such measures would harm the US economy, hinder innovation, and benefit adversaries like China. Nvidia’s Vice President, Ned Finkle, called the policy a “last-minute” move that would leave a legacy of criticism from both US industry and the global community.
The proposed restrictions, as reported by Bloomberg, aim to limit AI chip exports to certain countries, particularly targeting China to prevent the enhancement of its military capabilities. While some nations would face outright bans, the rules would also cap the computing power that can be exported to others. The Biden administration has yet to confirm the details, and requests for comment from the White House and the Commerce Department went unanswered.
Industry groups, including the Information Technology Industry Council, which represents major tech firms like Amazon, Microsoft, and Meta, have expressed concern about the policy. They argue that it would impose arbitrary limitations on US companies’ global competitiveness and risk ceding market leadership to foreign rivals. Nvidia warned that these restrictions could push international markets toward alternative technologies, undermining the US technology sector.
President-elect Donald Trump, who begins his second term on January 20, previously enacted technology export restrictions to China during his first term, citing national security concerns. Nvidia’s statement reflects apprehension about the continuity of US policy on AI chip exports under the new administration.
A hacker claims to have breached US location tracking company Gravy Analytics, leaking around 1.4 gigabytes of data. The allegation, shared on a Russian-language cybercriminal forum, included screenshots suggesting data had been stolen. Verification was complicated because Gravy’s website remained offline and the company did not respond to messages.
Cybersecurity experts reviewing the leaked data found the breach credible. Marley Smith from RedSense and John Hammond from Huntress both confirmed the data appeared legitimate, though the hacker’s identity remains unclear.
The FTC expressed concerns that such data could be misused for stalking, blackmail, and espionage but declined to comment on the breach. FTC Chair Lina Khan recently warned that targeted advertising practices leave sensitive data highly vulnerable.
The FBI has raised alarms about the growing use of artificial intelligence in scams, particularly through deepfake technology. These AI-generated videos and audio clips can convincingly imitate real people, allowing criminals to impersonate family members, executives, or even law enforcement officials. Victims are often tricked into transferring money or disclosing personal information.
Deepfake scams are becoming more prevalent in the US due to the increasing accessibility of generative AI tools. Criminals exploit these technologies to craft realistic phishing emails, fake social media profiles, and fraudulent investment opportunities. Some have gone so far as to stage real-time deepfake video calls to enhance their deception.
To protect against these threats, experts recommend limiting the personal information shared online, enabling two-factor authentication, and verifying any unusual or urgent communications. The FBI stresses the importance of vigilance, especially as AI-driven scams become more sophisticated and harder to detect. By understanding these risks and adopting stronger security practices, individuals can safeguard themselves against the growing menace of deepfake fraud.
A prominent technology trade group has urged the Biden administration to reconsider a proposed rule that would restrict global access to US-made AI chips, warning that the measure could undermine America’s leadership in the AI sector. The Information Technology Industry Council (ITI), representing major companies like Amazon, Microsoft, and Meta, expressed concerns that the restrictions could unfairly limit US companies’ ability to compete globally while allowing foreign rivals to dominate the market.
The proposed rule, expected to be released as soon as Friday, is part of the Commerce Department’s broader strategy to regulate AI chip exports and prevent misuse, particularly by adversaries like China. The restrictions aim to curb the potential for AI to enhance China’s military capabilities. However, in a letter to Commerce Secretary Gina Raimondo, ITI CEO Jason Oxman criticised the administration’s urgency in finalising the rule, warning of ‘significant adverse consequences’ if implemented hastily. Oxman called for a more measured approach, such as issuing a proposed rule for public feedback rather than enacting an immediate policy.
Industry leaders have been vocal in their opposition, describing the draft rule as overly broad and damaging. The Semiconductor Industry Association raised similar concerns earlier this week, and Oracle’s Executive Vice President Ken Glueck slammed the measure as one of the most disruptive ever proposed for the US tech sector. Glueck argued the rule would impose sweeping regulations on the global commercial cloud industry, stifling innovation and growth.
While the administration has yet to comment on the matter, the growing pushback highlights the tension between safeguarding national security and maintaining US dominance in the rapidly evolving field of AI.
Meta Platforms has announced the termination of its US fact-checking program and eased restrictions on politically charged discussions, such as immigration and gender identity. The decision, which affects Facebook, Instagram, and Threads, marks a significant shift in the company’s content moderation strategy. CEO Mark Zuckerberg framed the move as a return to ‘free expression,’ citing recent US elections as a cultural tipping point. The changes come as Meta seeks to build rapport with the incoming Trump administration.
In place of fact-checking, Meta plans to adopt a ‘Community Notes’ system, similar to that used by Elon Musk’s platform X. The company will also scale back proactive monitoring of hate speech, relying instead on user reports, while continuing to address high-severity violations like terrorism and scams. Meta is also relocating some policy teams from California to other states, signalling a broader operational shift. The decision follows the promotion of Republican policy executive Joel Kaplan to head of global affairs and the appointment of Trump ally Dana White to Meta’s board.
The move has sparked criticism from fact-checking organisations and free speech advocates. Angie Drobnic Holan, head of the International Fact-Checking Network, pushed back against Zuckerberg’s claims of bias, asserting that fact-checkers provide context rather than censorship. Critics, including the Centre for Information Resilience, warn that the policy rollback could exacerbate disinformation. For now, the changes will apply only to the US, with Meta maintaining its fact-checking operations in regions like the European Union, where stricter tech regulations are in place.
As Meta rolls out its ‘Community Notes’ system, global scrutiny is expected to intensify. The European Commission, already investigating Musk’s X over similar practices, noted Meta’s announcement and emphasised compliance with the EU’s Digital Services Act, which mandates robust content regulation. While Meta navigates a complex regulatory and political landscape, the impact of its new policies on disinformation and public trust remains uncertain.
Amazon Web Services (AWS) has announced an $11 billion investment to build new data centres in Georgia, aiming to support the growing demand for cloud computing and AI technologies. The facilities, located in Butts and Douglas counties, are expected to create at least 550 high-skilled jobs and position Georgia as a leader in digital innovation.
The move highlights a broader trend among tech giants investing heavily in AI-driven advancements. Last week, Microsoft revealed an $80 billion plan for fiscal 2025 to expand data centres for AI training and cloud applications. These facilities are critical for supporting resource-intensive AI technologies like machine learning and generative models, which require vast computational power and specialised infrastructure.
The surge in AI infrastructure has also raised concerns about energy consumption. A report from the Electric Power Research Institute suggests data centres could account for up to 9% of US electricity usage by 2030. To address this, Amazon has secured energy supply agreements with utilities like Talen Energy in Pennsylvania and Entergy in Mississippi, ensuring reliable power for its expanding operations.
Amazon’s commitment underscores the growing importance of AI and cloud services, as companies race to meet the demands of a rapidly evolving technological landscape.
The White House unveiled a new label, the Cyber Trust Mark, for internet-connected devices like smart thermostats, baby monitors, and app-controlled lights. This new shield logo aims to help consumers evaluate the cybersecurity of these products, similar to how Energy Star labels indicate energy efficiency in appliances. Devices that display the Cyber Trust Mark will have met cybersecurity standards set by the US National Institute of Standards and Technology (NIST).
As more household items, from fitness trackers to smart ovens, become internet-connected, they offer convenience but also present new digital security risks. Anne Neuberger, US Deputy National Security Advisor for Cyber, explained that each connected device could potentially be targeted by cyber attackers. While the label is voluntary, officials hope consumers will prioritise security and demand the Cyber Trust Mark when making purchases.
The initiative will begin with consumer devices like cameras, with plans to expand to routers and smart meters. Products bearing the Cyber Trust Mark are expected to appear on store shelves later this year. Additionally, the Biden administration plans to issue an executive order by the end of the president’s term, requiring the US government to only purchase products with the label starting in 2027. The program has garnered bipartisan support, officials said.
AI startups have played a key role in reviving United States venture capital funding, with total capital raised in 2024 increasing by nearly 30% year-on-year, according to PitchBook. AI firms secured a record 46.4% of the $209 billion raised, a sharp rise from less than 10% a decade ago. The surge in investment has been driven by growing enthusiasm for AI technology, particularly since OpenAI’s ChatGPT gained widespread attention in late 2022. Major funding rounds, including $6.6 billion for OpenAI and $12 billion for Elon Musk’s xAI, highlight investor confidence in AI’s potential.
Despite the strong investment trends, analysts warn that maintaining such momentum could be challenging, especially for foundation model firms that require significant capital for computing power and expertise. Fundraising by venture capital firms themselves remains a hurdle, with only $76 billion raised in 2024, the lowest in five years. Exit values also remain well below their 2021 peak, although they improved from 2023’s seven-year low. While the IPO market did not recover as quickly as expected, year-end listings like ServiceTitan have provided some renewed optimism.
Hopes for a stronger IPO and M&A market are tied to the incoming administration of President-elect Donald Trump, which is expected to introduce policies favourable to technology and business. Industry experts believe more venture-backed companies could go public in the second half of 2025, helping to sustain the investment rebound. With AI continuing to dominate venture capital funding, the sector’s ability to meet ambitious business milestones will be critical to maintaining investor confidence.
The future of TikTok in the United States hangs in the balance as the Supreme Court prepares to hear arguments on 10 January over a law that could force the app to sever ties with its Chinese parent company, ByteDance, or face a ban. The case centres on whether the law violates the First Amendment, with TikTok and its creators arguing that it does, while the US government maintains that national security concerns justify the measure. If the government wins, TikTok has stated it would shut down its US operations by 19 January.
Creators who rely on TikTok for income are bracing for uncertainty. Many have taken to the platform to express their frustrations, fearing disruption to their businesses and online communities. Some are already diversifying their presence on other platforms like Instagram and YouTube, though they acknowledge TikTok’s unique algorithm has provided visibility and opportunities not found elsewhere. Industry experts believe many creators are adopting a wait-and-see approach, avoiding drastic moves until the Supreme Court reaches a decision.
The Biden administration has pushed for a resolution without success, while President-elect Donald Trump has asked the court to delay the ban so he can weigh in once in office. If the ban proceeds, app stores and internet providers will be required to stop supporting TikTok, ultimately rendering it unusable. TikTok has warned that even a temporary shutdown could lead to a sharp decline in users, potentially causing lasting damage to the platform. A ruling from the Supreme Court is expected in the coming weeks.