Spain’s Santander has launched its digital bank, Openbank, in the United States, aiming to expand its retail presence and fund up to $30 billion in auto loans. As one of the few European banks with a United States retail foothold, Santander hopes this move will help it compete more effectively in the market.
Santander’s US operations already hold over $45 billion in retail deposits and $60 billion in auto-related loans. The new digital bank aims to reduce funding costs by shifting away from more expensive wholesale funding. Openbank is offering a 5.25% yield on its savings accounts to attract US customers.
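For scale, the advertised rate translates into simple annual interest as follows (a minimal sketch; the deposit amount is hypothetical, and the announcement does not specify compounding terms):

```python
def yearly_interest(principal: float, apy: float = 0.0525) -> float:
    """Interest earned over one year at the quoted 5.25% annual yield."""
    return principal * apy

# A hypothetical $10,000 deposit at the advertised rate:
print(f"${yearly_interest(10_000):,.2f}")  # → $525.00
```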
Openbank’s launch is part of Santander’s broader global strategy to become a digital bank with branches, aiming to increase market share in a competitive US banking landscape. The bank has no immediate plans to re-enter the mortgage lending market, focusing instead on its digital offering.
Santander’s CEO for the US, Tim Wennes, emphasised that while hiring for Openbank will be limited, the bank will evaluate partnership opportunities to expand the platform. The digital shift comes as Santander seeks to boost returns from its US operations.
The US government is close to finalising rules restricting American investments in certain advanced technologies in China, particularly AI, semiconductors, microelectronics, and quantum computing. These regulations, which follow an executive order signed by President Joe Biden in August 2023, are designed to prevent US know-how from contributing to China’s military capabilities. The rules are under review by the Office of Management and Budget and are expected to be released soon, possibly before the US presidential election on 5 November.
The new regulations will require US investors to notify the Treasury Department about specific investments in sensitive technologies. While the rules will ban certain investments outright, they also include several exceptions: some publicly traded securities and certain types of debt financing, for example, will not fall under the restrictions. US companies and individuals will, however, be responsible for determining which of their transactions are subject to the new limits.
Earlier drafts of the rules, published in June, gave the public a chance to provide feedback and proposed banning AI investments that involved systems trained with substantial computing power. The final regulations are expected to provide additional clarity, particularly concerning the thresholds for restricted transactions in AI and the role of limited partners in such investments.
Experts like Laura Black, a former Treasury official, anticipate that the regulations will take effect at least 30 days after release. These measures reflect the US government’s growing focus on curbing China’s access to critical technologies while balancing the need for certain economic exceptions in mutual funds and syndicated debt financing sectors.
The upcoming release will be a significant step in the Biden administration’s broader effort to safeguard US technological advantage and national security interests in the face of growing competition from China.
The US Securities and Exchange Commission (SEC) has filed an appeal in its case against Ripple, though it does not challenge the court’s decision that XRP is not a security. Instead, the SEC’s appeal, submitted on 16 October, questions Ripple’s XRP sales on exchanges and personal sales by its executives, Brad Garlinghouse and Chris Larsen.
Ripple’s chief legal officer, Stuart Alderoty, clarified that the ruling regarding XRP’s status as a non-security remains unchanged. Ripple is set to file its own Form C in response within seven days, and both parties will agree on a briefing schedule for the ongoing case.
The legal process is expected to take up to 90 days, with the SEC required to file its first brief within that period. Ripple’s legal team remains confident as the case progresses.
US federal prosecutors are ramping up efforts to tackle the use of AI tools in creating child sexual abuse images, as they fear the technology could lead to a rise in illegal content. The Justice Department has already pursued two cases this year against individuals accused of using generative AI to produce explicit images of minors. James Silver, chief of the Department’s Computer Crime and Intellectual Property Section, anticipates more cases, cautioning against the normalisation of AI-generated abuse material.
Child safety advocates and prosecutors worry that AI systems can alter ordinary photos of children to produce abusive content, making it more challenging to identify and protect actual victims. The National Center for Missing and Exploited Children reports approximately 450 cases each month involving AI-generated abuse. While this number is small compared to the millions of online child exploitation reports received, it represents a concerning trend in the misuse of technology.
The legal framework is still evolving regarding cases involving AI-generated abuse, particularly when identifiable children are not depicted. Prosecutors are resorting to obscenity charges when traditional child pornography laws do not apply. This is evident in the case of Steven Anderegg, accused of using Stable Diffusion to create explicit images. Similarly, US Army soldier Seth Herrera faces child pornography charges for allegedly using AI chatbots to alter innocent photos into abusive content. Both defendants have pleaded not guilty.
Nonprofit groups like Thorn and All Tech Is Human are working with major tech companies, including Google, Amazon, Meta, OpenAI, and Stability AI, to prevent AI models from generating abusive content and to monitor their platforms. Thorn’s vice president, Rebecca Portnoff, emphasised that the issue is not just a future risk but a current problem, urging action during this critical period to prevent its escalation.
US Special Operations Command (SOCOM) is reportedly seeking technology to create convincing deepfake online personas. These artificial avatars would operate on social media and online platforms, featuring realistic expressions and high-quality images akin to government IDs. Joint Special Operations Command (JSOC) also seeks technologies to produce convincing facial and background videos, including ‘selfie videos’, to avoid detection by social media algorithms.
US state agencies have previously announced frameworks to combat foreign information manipulation, citing national security threats from these technologies. Despite recognising the global dangers posed by deepfakes, SOCOM’s initiative underscores a willingness to engage with the technology for potential military advantage.
Experts expressed concern over the ethical implications and the potential for increased misinformation, warning that deepfakes are inherently deceptive, with no legitimate applications beyond deceit, and that US adoption could encourage further misuse globally. Such practices also risk eroding public trust in government communications, a risk exacerbated by the perceived hypocrisy of deploying such technology.
Why does it matter?
This plan reflects an ongoing interest in leveraging digital manipulation for military purposes, despite previous incidents in which platforms like Meta dismantled similar US-linked networks. It also highlights a contradiction in the US stance on deepfakes, as Washington simultaneously condemns similar operations by countries like Russia and China.
Republican presidential candidate Donald Trump revealed that he spoke with Apple CEO Tim Cook about the financial penalties imposed on the tech giant by the European Union. Trump claimed that Cook informed him about a recent $15 billion fine from the EU, along with an additional $2 billion penalty, although Apple has not confirmed the details of the call.
The EU is investigating major tech companies to limit their influence and promote fair competition for smaller businesses. Recently, Apple encountered major challenges, including a court ruling that required the company to pay about $14 billion in back taxes to Ireland. Additionally, Apple was hit with a $2 billion antitrust fine for allegedly restricting competition in the music streaming sector via its App Store.
During the podcast with Patrick Bet-David, Trump expressed his commitment to protect American companies from what he described as unfair treatment. He stated, ‘Tim, I got to get elected first. But I’m not going to let them take advantage of our companies.’ Trump and Democrat Kamala Harris are currently in a tight race for the 5 November presidential election.
Funding for AI and cloud companies in the United States, Europe, and Israel is experiencing a resurgence, after three years of decline, and is expected to reach $79.2 billion by the end of 2024, according to venture capital firm Accel. This marks a 27% increase compared to the $62.5 billion invested in 2023. Generative AI is playing a major role, accounting for around 40% of this year’s investments.
Of the $56 billion invested in generative AI over the past two years, 80% went to US-based companies, with the remainder split between Europe and Israel. In the US, OpenAI, Anthropic, and Elon Musk’s xAI led funding rounds, while Europe saw significant investment in companies like Mistral, Aleph Alpha, and DeepL. AI foundation models attracted two-thirds of the total AI investment during this period.
Generative AI investment in Europe is growing quickly, rising from $2.4 billion in 2023 to $6.4 billion in 2024. By comparison, the US saw $25 billion in funding for private AI companies in 2024. Accel noted, however, that outside of AI, the focus in the tech industry has shifted towards profitability, signalling the end of an era of high growth in software.
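The digest’s funding figures can be cross-checked with quick arithmetic (all amounts are the article’s, in billions of dollars; the Europe growth percentage is derived from the quoted figures rather than stated in the report):

```python
# Year-over-year growth in AI/cloud funding, 2023 -> 2024 ($bn).
total_2023, total_2024 = 62.5, 79.2
yoy_growth = (total_2024 - total_2023) / total_2023
print(f"YoY growth: {yoy_growth:.0%}")  # matches the quoted ~27%

# US share of the $56bn in generative-AI funding over the past two years.
us_genai = 56.0 * 0.80
print(f"US GenAI funding: ${us_genai:.1f}bn")

# Europe's generative-AI growth, 2023 -> 2024, derived from the quoted figures.
europe_2023, europe_2024 = 2.4, 6.4
print(f"Europe GenAI growth: {europe_2024 / europe_2023 - 1:.0%}")
```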
Despite the shift, the booming AI sector is seen as transformative, with Accel partner Philippe Botteri comparing the current AI wave to other major technological shifts like the rise of broadband, mobile, and cloud computing.
The United States Federal Trade Commission (FTC) has introduced a new ‘click to cancel’ rule, designed to simplify the process of ending subscriptions. The rule mandates that businesses must make it just as easy for consumers to cancel a subscription as it is to sign up for one, and requires customer consent before renewing subscriptions or converting free trials into paid services.
Under the new regulations, businesses will no longer be allowed to force customers to navigate chatbots or agents to cancel subscriptions initiated via an app or website. The rule will take effect in about six months and aims to save consumers time and money by eliminating unnecessary hurdles. For subscriptions made in person, companies must provide an option to cancel by phone or online.
The FTC has previously sued Amazon and Adobe for making it difficult for consumers to cancel subscriptions. Amazon was accused of using misleading website designs to push people into automatic Prime renewals, while Adobe allegedly imposed hidden fees and unclear cancellation terms. Both companies have rejected the claims.
Similar measures have also been adopted in the United Kingdom. The Digital Markets, Competition and Consumers Act 2024 ensures that businesses must give clear information to customers before they enter into subscription agreements, and make it easier for them to cancel or end contracts.
The Cybersecurity Association of China (CSAC) has urged a security review of Intel’s products in China, alleging that the US chipmaker poses a national security risk. Although CSAC is an industry group, it has strong connections to the Chinese government, and its claims may prompt action from the Cyberspace Administration of China (CAC).
CSAC’s post on WeChat accuses Intel’s chips, including its Xeon processors used for AI, of containing vulnerabilities and backdoors allegedly tied to the US NSA. The group warns that using Intel products threatens China’s national security and critical infrastructure.
This recommendation comes amid growing US-China tensions over technology and trade. Last year, the CAC banned Chinese infrastructure operators from using products from Micron Technology after a security review, raising concerns that Intel could face a similar outcome.
Intel’s China unit responded, emphasising its commitment to product safety and quality. The company stated on its WeChat account that it will cooperate with authorities to clarify concerns. If the CAC carries out a security review, it could impact Intel’s sales in its significant Chinese market. Intel’s shares recently dropped 2.7% in US premarket trading.
Wolfspeed is set to receive $750 million in government grants for its new silicon carbide wafer manufacturing plant in North Carolina, as announced by the US Commerce Department. This funding news caused the US chipmaker’s shares to surge over 30%. The preliminary agreement requires Wolfspeed to strengthen its balance sheet to safeguard taxpayer funds.
Investment firms, led by Apollo Global Management, have pledged an additional $750 million in financing for Wolfspeed. The company produces energy-efficient chips using silicon carbide, crucial for applications like electric vehicles and renewable energy systems. As part of a larger $6 billion expansion plan, Wolfspeed aims to increase its manufacturing capacity in Marcy, New York.
Wolfspeed anticipates up to $1 billion in cash tax refunds from the advanced manufacturing tax credit under the CHIPS and Science Act. CEO Gregg Lowe highlighted the significance of Wolfspeed’s products to the US economy and national security. However, the company has encountered difficulties this year, with its stock plummeting nearly 75% due to a decline in electric vehicle demand. The grant remains subject to due diligence and is not yet finalised.