The FBI has raised alarms about the growing use of artificial intelligence in scams, particularly through deepfake technology. These AI-generated videos and audio clips can convincingly imitate real people, allowing criminals to impersonate family members, executives, or even law enforcement officials. Victims are often tricked into transferring money or disclosing personal information.
Deepfake scams are becoming more prevalent in the US due to the increasing accessibility of generative AI tools. Criminals exploit these technologies to craft realistic phishing emails, fake social media profiles, and fraudulent investment opportunities. Some have gone as far as generating real-time video calls to enhance their deception.
To protect against these threats, experts recommend limiting the personal information shared online, enabling two-factor authentication, and verifying any unusual or urgent communications. The FBI stresses the importance of vigilance, especially as AI-driven scams become more sophisticated and harder to detect. By understanding these risks and adopting stronger security practices, individuals can safeguard themselves against the growing menace of deepfake fraud.
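The two-factor authentication the experts recommend usually takes the form of time-based one-time passwords. As background (not part of the FBI guidance itself), here is a minimal stdlib-only sketch of the RFC 6238 TOTP algorithm that such authenticator apps implement; it is illustrative, not a production implementation:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, t=None, digits=6, period=30):
    """RFC 6238 time-based one-time password from a base32-encoded secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of whole time steps since the Unix epoch.
    counter = int((time.time() if t is None else t) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the digest.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


# RFC 6238 test vector: secret "12345678901234567890", T=59 seconds.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # → "287082"
```

Because the code is derived from a shared secret and the current time, a scammer who has cloned someone's voice or face still cannot produce a valid code, which is why verifying unusual requests through a second factor is effective against deepfake impersonation.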
A prominent technology trade group has urged the Biden administration to reconsider a proposed rule that would restrict global access to US-made AI chips, warning that the measure could undermine America’s leadership in the AI sector. The Information Technology Industry Council (ITI), representing major companies like Amazon, Microsoft, and Meta, expressed concerns that the restrictions could unfairly limit US companies’ ability to compete globally while allowing foreign rivals to dominate the market.
The proposed rule, expected to be released as soon as Friday, is part of the Commerce Department’s broader strategy to regulate AI chip exports and prevent misuse, particularly by adversaries like China. The restrictions aim to curb the potential for AI to enhance China’s military capabilities. However, in a letter to Commerce Secretary Gina Raimondo, ITI CEO Jason Oxman criticised the administration’s urgency in finalising the rule, warning of ‘significant adverse consequences’ if implemented hastily. Oxman called for a more measured approach, such as issuing a proposed rule for public feedback rather than enacting an immediate policy.
Industry leaders have been vocal in their opposition, describing the draft rule as overly broad and damaging. The Semiconductor Industry Association raised similar concerns earlier this week, and Oracle’s Executive Vice President Ken Glueck slammed the measure as one of the most disruptive ever proposed for the US tech sector. Glueck argued the rule would impose sweeping regulations on the global commercial cloud industry, stifling innovation and growth.
While the administration has yet to comment on the matter, the growing pushback highlights the tension between safeguarding national security and maintaining US dominance in the rapidly evolving field of AI.
Meta Platforms has announced the termination of its US fact-checking program and eased restrictions on politically charged discussions, such as immigration and gender identity. The decision, which affects Facebook, Instagram, and Threads, marks a significant shift in the company’s content moderation strategy. CEO Mark Zuckerberg framed the move as a return to ‘free expression,’ citing recent US elections as a cultural tipping point. The changes come as Meta seeks to build rapport with the incoming Trump administration.
In place of fact-checking, Meta plans to adopt a ‘Community Notes’ system, similar to that used by Elon Musk’s platform X. The company will also scale back proactive monitoring of hate speech, relying instead on user reports, while continuing to address high-severity violations like terrorism and scams. Meta is also relocating some policy teams from California to other states, signalling a broader operational shift. The decision follows the promotion of Republican policy executive Joel Kaplan to head of global affairs and the appointment of Trump ally Dana White to Meta’s board.
The move has sparked criticism from fact-checking organisations and free speech advocates. Angie Drobnic Holan, head of the International Fact-Checking Network, pushed back against Zuckerberg’s claims of bias, asserting that fact-checkers provide context rather than censorship. Critics, including the Centre for Information Resilience, warn that the policy rollback could exacerbate disinformation. For now, the changes will apply only to the US, with Meta maintaining its fact-checking operations in regions like the European Union, where stricter tech regulations are in place.
As Meta rolls out its ‘Community Notes’ system, global scrutiny is expected to intensify. The European Commission, already investigating Musk’s X over similar practices, noted Meta’s announcement and emphasised compliance with the EU’s Digital Services Act, which mandates robust content regulation. While Meta navigates a complex regulatory and political landscape, the impact of its new policies on disinformation and public trust remains uncertain.
Amazon Web Services (AWS) has announced an $11 billion investment to build new data centres in Georgia, aiming to support the growing demand for cloud computing and AI technologies. The facilities, located in Butts and Douglas counties, are expected to create at least 550 high-skilled jobs and position Georgia as a leader in digital innovation.
The move highlights a broader trend among tech giants investing heavily in AI-driven advancements. Last week, Microsoft revealed an $80 billion plan for fiscal 2025 to expand data centres for AI training and cloud applications. These facilities are critical for supporting resource-intensive AI technologies like machine learning and generative models, which require vast computational power and specialised infrastructure.
The surge in AI infrastructure has also raised concerns about energy consumption. A report from the Electric Power Research Institute suggests data centres could account for up to 9% of US electricity usage by 2030. To address this, Amazon has secured energy supply agreements with utilities like Talen Energy in Pennsylvania and Entergy in Mississippi, ensuring reliable power for its expanding operations.
Amazon’s commitment underscores the growing importance of AI and cloud services, as companies race to meet the demands of a rapidly evolving technological landscape.
The White House unveiled a new label, the Cyber Trust Mark, for internet-connected devices like smart thermostats, baby monitors, and app-controlled lights. This new shield logo aims to help consumers evaluate the cybersecurity of these products, similar to how Energy Star labels indicate energy efficiency in appliances. Devices that display the Cyber Trust Mark will have met cybersecurity standards set by the US National Institute of Standards and Technology (NIST).
As more household items, from fitness trackers to smart ovens, become internet-connected, they offer convenience but also present new digital security risks. Anne Neuberger, US Deputy National Security Advisor for Cyber, explained that each connected device could potentially be targeted by cyber attackers. While the label is voluntary, officials hope consumers will prioritise security and demand the Cyber Trust Mark when making purchases.
The initiative will begin with consumer devices like cameras, with plans to expand to routers and smart meters. Products bearing the Cyber Trust Mark are expected to appear on store shelves later this year. Additionally, the Biden administration plans to issue an executive order by the end of the president’s term, requiring the US government to only purchase products with the label starting in 2027. The program has garnered bipartisan support, officials said.
AI startups have played a key role in reviving United States venture capital funding, with total capital raised in 2024 increasing by nearly 30% year-on-year, according to PitchBook. AI firms secured a record 46.4% of the $209 billion raised, a sharp rise from less than 10% a decade ago. The surge in investment has been driven by growing enthusiasm for AI technology, particularly since OpenAI’s ChatGPT gained widespread attention in late 2022. Major funding rounds, including $6.6 billion for OpenAI and $12 billion for Elon Musk’s xAI, highlight investor confidence in AI’s potential.
Despite the strong investment trends, analysts warn that maintaining such momentum could be challenging, especially for foundation model firms that require significant capital for computing power and expertise. Fundraising by venture firms themselves remains under pressure, with only $76 billion in new fund commitments in 2024, the lowest in five years. Exit values also remain well below their 2021 peak, although they improved from 2023’s seven-year low. While the IPO market did not recover as quickly as expected, year-end listings like ServiceTitan have provided some renewed optimism.
Hopes for a stronger IPO and M&A market are tied to the incoming administration of President-elect Donald Trump, which is expected to introduce policies favourable to technology and business. Industry experts believe more venture-backed companies could go public in the second half of 2025, helping to sustain the investment rebound. With AI continuing to dominate venture capital funding, the sector’s ability to meet ambitious business milestones will be critical to maintaining investor confidence.
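The headline percentages above translate directly into dollar figures. A quick back-of-the-envelope check, using only the numbers reported here (the implied 2023 total assumes the "nearly 30%" growth figure is taken as roughly 30%):

```python
# Figures reported above (PitchBook, US venture capital, 2024)
total_2024 = 209e9   # total capital raised by startups in 2024
ai_share = 0.464     # record share captured by AI firms

ai_dollars = total_2024 * ai_share
print(f"AI startups: ~${ai_dollars / 1e9:.0f}B of the 2024 total")  # ~$97B

# "Nearly 30% year-on-year growth" implies a 2023 total of roughly:
implied_2023 = total_2024 / 1.30
print(f"Implied 2023 total: ~${implied_2023 / 1e9:.0f}B")  # ~$161B
```

In other words, AI firms alone attracted close to $97 billion, more than the entire implied market would have grown without them.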
The future of TikTok in the United States hangs in the balance as the Supreme Court prepares to hear arguments on 10 January over a law that could force the app to sever ties with its Chinese parent company, ByteDance, or face a ban. The case centres on whether the law violates the First Amendment, with TikTok and its creators arguing that it does, while the US government maintains that national security concerns justify the measure. If the government wins, TikTok has stated it would shut down its US operations by 19 January.
Creators who rely on TikTok for income are bracing for uncertainty. Many have taken to the platform to express their frustrations, fearing disruption to their businesses and online communities. Some are already diversifying their presence on other platforms like Instagram and YouTube, though they acknowledge TikTok’s unique algorithm has provided visibility and opportunities not found elsewhere. Industry experts believe many creators are adopting a wait-and-see approach, avoiding drastic moves until the Supreme Court reaches a decision.
The Biden administration has pushed for a resolution without success, while President-elect Donald Trump has asked the court to delay the ban so he can weigh in once in office. If the ban proceeds, app stores and internet providers will be required to stop supporting TikTok, ultimately rendering it unusable. TikTok has warned that even a temporary shutdown could lead to a sharp decline in users, potentially causing lasting damage to the platform. A ruling from the Supreme Court is expected in the coming weeks.
The Sixth Circuit Court of Appeals has struck down federal net neutrality rules, ruling that the US Federal Communications Commission (FCC) does not have the authority to regulate internet service providers (ISPs) in this way. The decision challenges the FCC’s attempt to reclassify ISPs as common carriers under Title II of the Communications Act, a move to prevent discrimination in internet traffic, such as slowing speeds or blocking content.
The court’s ruling follows the Supreme Court’s 2024 decision to eliminate Chevron deference, a legal principle that typically allows courts to defer to regulatory agencies’ interpretations. With this shift, the judges were free to question the FCC’s interpretation of the law and ultimately concluded that ISPs cannot be regulated as telecommunications services.
The decision has sparked a call for legislative action. FCC Chair Jessica Rosenworcel urged lawmakers to pass laws safeguarding net neutrality, reflecting public demand for a fair and open internet. Meanwhile, Republican figures, including FCC Commissioner Brendan Carr, celebrated the ruling, viewing it as a victory against government overreach in regulating the internet.
This legal setback comes as the Biden administration’s push for net neutrality faces increasing challenges, and it remains uncertain whether future attempts to reinstate the rules will succeed.
Do Kwon, the South Korean cryptocurrency entrepreneur responsible for the collapse of TerraUSD and Luna currencies, pleaded not guilty to US criminal fraud charges on Thursday. The plea followed his extradition from Montenegro earlier this week.
Kwon, co-founder of Terraform Labs, is accused of orchestrating a multi-billion-dollar fraud scheme that led to an estimated $40 billion loss in cryptocurrency value in 2022. Federal prosecutors in Manhattan unsealed a nine-count indictment against Kwon, charging him with securities fraud, wire fraud, commodities fraud, and conspiracy to commit money laundering.
The indictment claims Kwon deceived investors by falsely promoting TerraUSD as a stablecoin guaranteed to maintain its $1 value. Prosecutors allege that when TerraUSD’s value dropped in 2021, Kwon secretly enlisted a high-frequency trading firm to inflate the token’s price, misleading investors and artificially boosting its sister token, Luna.
These alleged misrepresentations drove substantial investment into Terraform Labs’ products, propelling Luna’s market value to $50 billion by early 2022. However, the scheme unravelled in May 2022 when TerraUSD and Luna crashed, causing turmoil in the broader cryptocurrency market.
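For readers unfamiliar with how TerraUSD was supposed to hold its peg: it was an algorithmic stablecoin, and the protocol let traders swap 1 UST for $1 worth of newly minted Luna (and vice versa) at any time, so that arbitrage would push the price back to $1. The sketch below is an illustrative simplification of that incentive, not Terraform Labs' actual code:

```python
def arbitrage_step(ust_price):
    """Return the profitable action for an arbitrageur at a given UST market price.

    Simplified model of TerraUSD's mint/burn peg: the protocol always valued
    1 UST at exactly $1 worth of Luna for swap purposes.
    """
    if ust_price < 1.0:
        # Buy cheap UST on the market, burn it for $1 of newly minted Luna.
        # Burning UST shrinks supply, pushing its price back toward $1.
        return f"burn UST for Luna: ~${1.0 - ust_price:.2f} profit per UST"
    if ust_price > 1.0:
        # Burn $1 of Luna to mint 1 UST and sell it above the peg.
        return f"mint UST from Luna: ~${ust_price - 1.0:.2f} profit per UST"
    return "peg holds: no arbitrage"


print(arbitrage_step(0.95))  # → burn UST for Luna: ~$0.05 profit per UST
```

The weakness of this design is that a sustained de-peg forces the protocol to mint ever more Luna, collapsing Luna's price, the "death spiral" that played out when both tokens crashed in May 2022.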
Kwon, 33, remains in custody in Manhattan after declining to seek bail during his initial court appearance. His trial is set to begin on 8 January. Kwon has faced mounting legal troubles, including a $4.55 billion settlement with the US Securities and Exchange Commission and a federal jury finding him liable for defrauding investors earlier this year.
His case is part of a broader crackdown on cryptocurrency figures, including FTX’s Sam Bankman-Fried and Celsius Network’s Alex Mashinsky, as US authorities tighten scrutiny over the volatile industry.
Citing national security concerns, the US Commerce Department has announced that it is considering new rules to restrict or ban Chinese-made drones. The proposed regulations, open for public comment until 4 March, aim to safeguard the drone supply chain against potential threats from China and Russia.
Officials warn that adversaries could exploit these devices to access sensitive US data remotely. China dominates the US commercial drone market, with DJI, the world’s largest drone manufacturer, accounting for more than half of all sales.
The Biden administration has already taken steps to curb Chinese drone activity. In December, President Joe Biden signed legislation requiring an investigation into whether drones from companies like DJI and Autel Robotics pose unacceptable security risks.
If the investigation does not clear them within a year, these companies could be barred from launching new products in the US. Additionally, DJI has faced scrutiny over alleged ties to Beijing’s military and accusations of violating the Uyghur Forced Labor Prevention Act, claims the company denies.
US Commerce Secretary Gina Raimondo hinted at measures similar to those targeting Chinese vehicles, focusing on drones with Chinese or Russian components. While DJI disputes allegations of data transmission and surveillance risks, US lawmakers remain concerned.
The evolving landscape underscores Washington’s broader efforts to address perceived security vulnerabilities in Chinese technology.