Taiwanese companies eye expansion in Texas

Taiwanese electronics companies are preparing to increase investments in Texas, with major announcements expected in May, coinciding with President Donald Trump’s first 100 days in office. Richard Lee, head of the Taiwan Electrical and Electronic Manufacturers’ Association, revealed that several large Taiwanese companies, particularly those in the AI server industry, are looking to expand their operations in Texas. This follows proactive efforts by Texas’ Republican-led government to attract Taiwanese investment.

The move comes as Trump has criticised Taiwan over its semiconductor industry and threatened tariffs on trading partners with which the US runs significant trade deficits, a measure that could target Taiwan. Last week, Taiwan’s President Lai Ching-te pledged to invest more in the US, adding to the momentum. Companies like Foxconn, Compal, and Inventec, which already have operations in Texas, are expected to announce further expansions, particularly to accommodate the growing demand for AI-related technologies.

Foxconn, which manufactures products for major tech companies like Apple and Nvidia, has already made a $33 million investment in land and factory buildings in Texas. With the demand for AI servers rising, Taiwanese manufacturers are eyeing Texas as a strategic location to meet the growing market needs. However, neither Foxconn nor Compal has yet provided specific details on their plans.

For more information on these topics, visit diplomacy.edu.

France’s ANSSI and international partners advocate risk-based approach for secure AI systems

The French National Cybersecurity Agency (ANSSI) has released new guidance on securing AI systems, emphasising a risk-based approach to AI deployment. International partners including Canada, Singapore, Germany, Italy, Norway, the United Kingdom and Estonia have co-signed the document.

The publication highlights the growing integration of AI across sectors and the need for organisations to assess and mitigate associated risks, particularly as they adopt large language models (LLMs).

ANSSI outlines key security challenges specific to AI, including vulnerabilities in data integrity, supply chain risks, and the potential for AI systems to be exploited as attack vectors. The report identifies major risks such as:

  • Compromises in AI hosting and management infrastructure
  • Supply chain attacks targeting AI components
  • Interconnections between AI and IT systems increasing attack surfaces
  • Long-term loss of control over AI-driven processes
  • Malfunctions affecting AI system reliability

To address these challenges, the document advocates for a structured approach to AI security, recommending that organisations:

  • Align AI system autonomy with risk assessments and operational criticality
  • Map AI supply chains and monitor interconnections with IT infrastructure
  • Implement continuous monitoring and maintenance of AI systems
  • Anticipate regulatory and technological developments impacting AI security
  • Strengthen training and awareness on AI-related risks

The publication also advises against using AI for automating critical actions without safeguards, urging organisations to conduct dedicated risk analyses and assess security measures at every stage of the AI system lifecycle.

For more information on these topics, visit diplomacy.edu.

Singapore unveils new AI governance initiatives to strengthen global safety standards

The Singapore government introduced three new AI governance initiatives to promote safety and global best practices. The initiatives include the Global AI Assurance Pilot, which focuses on testing generative AI applications; a joint testing report with Japan to enhance AI safety across different linguistic environments; and the publication of the Singapore AI Safety Red Teaming Challenge evaluation report, aimed at addressing AI performance across languages and cultures.

The announcement was made by Josephine Teo, Singapore’s Minister for Digital Development and Information, at the AI Action Summit (AIAS) in Paris. During her speech, Minister Teo emphasised Singapore’s commitment to fostering international collaboration on AI safety, noting the importance of understanding public concerns and ensuring AI systems are tested for safety and responsibility. She also highlighted the role of private sector partnerships in shaping AI use cases and risk management strategies.

The new initiatives include practical efforts to ensure AI models, particularly large language models (LLMs), are secure and culturally sensitive. The AI Assurance Pilot, for instance, will bring together global AI assurance vendors and companies deploying real-life GenAI applications to establish future standards for AI governance. The joint testing report with Japan aims to improve the safety of LLMs across multiple languages, addressing potential gaps in non-English safeguards. Additionally, the Red Teaming Challenge provided insights into AI performance and cultural bias, with participants testing LLMs for issues such as violent crime and privacy violations.

For more information on these topics, visit diplomacy.edu.

Apptronik expands humanoid robot production with new investment

AI robotics company Apptronik has raised $350 million in a funding round led by B Capital and Capital Factory, with participation from Google. The Texas-based firm is focused on scaling production of Apollo, its humanoid robot designed to perform warehouse and manufacturing tasks such as moving packages and handling logistics.

Apptronik is competing with major players like Tesla and Figure AI in the rapidly advancing field of humanoid robotics, where artificial intelligence is driving new breakthroughs. CEO Jeff Cardenas compared this moment in robotics to the rise of large language models in 2023, predicting that 2025 will see significant developments in automation.

The company plans to expand Apollo’s capabilities into other industries, including elder care and healthcare. It has also partnered with Google DeepMind’s robotics team and secured commercial agreements with Mercedes-Benz and GXO Logistics, positioning itself as a key player in the evolving robotics landscape.

For more information on these topics, visit diplomacy.edu.

Google’s India policy head resigns amid market challenges

Google’s head of public policy in India, Sreenivasa Reddy, has stepped down, marking the second high-profile exit from the role in two years. Reddy, who joined the company in September 2023 after stints at Microsoft and Apple, played a crucial role in navigating regulatory challenges while Google expanded its services in India. The company confirmed his departure but declined to provide further details.

India remains a critical market for Google, with the majority of the country’s smartphones running on its Android system. The tech giant has faced increasing scrutiny from regulators over antitrust issues, even as it continues to grow its presence with local manufacturing and AI investments.

In the interim, Iarla Flynn, Google’s policy head for northern Europe, will take over the role. The company reaffirmed its commitment to the Indian market, emphasising its long-term vision despite the ongoing leadership changes.

For more information on these topics, visit diplomacy.edu.

Elon Musk’s xAI unveils Grok-3, taking on AI giants

Elon Musk’s AI startup, xAI, has unveiled its latest AI model, Grok-3, which he claims is the most advanced chatbot technology to date. In a live-streamed presentation, Musk and his engineers demonstrated how Grok-3 outperforms competitors, including OpenAI’s GPT-4o and Google’s Gemini, across math, science, and coding benchmarks. With over ten times the computational power of its predecessor, Grok-3 completed pre-training in early January and is now continuously evolving, Musk said, promising visible improvements within just 24 hours.

A key innovation introduced with Grok-3 is DeepSearch, an advanced reasoning chatbot designed to enhance search capabilities by providing transparent explanations of how it processes queries. The feature allows users to conduct research, brainstorming, and data analysis with greater depth and clarity. The model is being rolled out immediately to X’s Premium+ subscribers, with an upcoming SuperGrok subscription planned for mobile and web platforms.

The launch marks another escalation in the rivalry between Musk’s xAI and OpenAI, the company he co-founded but later distanced himself from. Musk has been openly critical of OpenAI’s shift toward a for-profit model and recently filed lawsuits against the organisation, accusing it of betraying its founding principles. His bid to acquire OpenAI’s nonprofit arm for $97.4 billion was rejected last week, with OpenAI’s CEO, Sam Altman, dismissing the offer as an attempt to hinder the company’s progress.

Why does it matter?

The AI sector is experiencing an unprecedented investment boom, with xAI reportedly seeking to raise $10 billion in new funding, potentially pushing its valuation to $75 billion. Meanwhile, OpenAI is in talks to raise as much as $40 billion, which could boost its valuation to an astonishing $300 billion. These soaring numbers highlight the capital-intensive nature of AI development, with global tech giants and investment groups pouring billions into the race to dominate AI.

However, new challenges are emerging. Last month, Chinese AI firm DeepSeek introduced R1, an open-source model that matched or surpassed leading American AI systems on key industry benchmarks. The company claims it developed R1 at a fraction of the cost incurred by its US counterparts, suggesting that the dominance of firms like OpenAI and xAI could soon face disruption from more cost-efficient alternatives.

Study warns of AI’s role in fuelling bank runs

A new study from the UK has raised concerns about the risk of bank runs fuelled by AI-generated fake news spread on social media. The research, published by Say No to Disinfo and Fenimore Harper, highlights how generative AI can create false stories or memes suggesting that bank deposits are at risk, leading to panic withdrawals. The study found that a significant portion of UK bank customers would consider moving their money after seeing such disinformation, a risk amplified by the speed at which funds can be transferred through online banking.

The issue is gaining traction globally, with regulators and banks worried about the growing role of AI in spreading malicious content. Following the collapse of Silicon Valley Bank in 2023, which saw $42 billion in withdrawals within a day, financial institutions are increasingly focused on detecting disinformation that could trigger similar crises. The study estimates that a small investment in social media ads promoting fake content could drive millions of pounds in deposit withdrawals.

The report calls for banks to enhance their monitoring systems, integrating social media tracking with withdrawal monitoring to better identify when disinformation is affecting customer behaviour. Revolut, a UK fintech, has already implemented real-time monitoring of emerging threats and has urged financial institutions to prepare for such risks. While banks remain optimistic about AI’s potential, the financial stability challenges it poses remain a growing concern for regulators.

As financial institutions work to mitigate AI-related risks, the broader industry is also grappling with how to balance the benefits of AI with the threats it may pose. UK Finance, the industry body, emphasised that banks are making efforts to manage these risks, while regulators continue to monitor the situation closely.

For more information on these topics, visit diplomacy.edu.

Anthropic’s Claude tested as UK explores AI chatbot for public services

The UK government has partnered with AI startup Anthropic to explore the use of its chatbot, Claude, in public services. The collaboration aims to improve access to public information and streamline interactions for citizens.

Anthropic, a rival to ChatGPT creator OpenAI backed by tech giants Google and Amazon, has signed a memorandum of understanding with the government.

The initiative aligns with Prime Minister Keir Starmer’s ambition to establish the UK as a leader in AI and enhance public service efficiency through innovative technologies.

Technology minister Peter Kyle highlighted the importance of this partnership, emphasising its role in positioning the UK as a hub for advanced AI development.

Claude has already been employed by the European Parliament to simplify access to its archives, demonstrating its potential to reduce the time needed for document retrieval and analysis.

This step underscores Britain’s commitment to leveraging cutting-edge AI for the benefit of individuals and businesses nationwide.

For more information on these topics, visit diplomacy.edu.

Apple plans AI push in China

Apple is preparing to introduce its AI features to iPhones in China by mid-year. Efforts include significant software adaptations and collaboration with local partners to meet the country’s unique requirements.

Teams based in China and the US are actively working to customise the Apple Intelligence platform for the region. Insiders suggest the launch could happen as early as May, provided technical and regulatory challenges are resolved.

Regulatory compliance remains a critical hurdle for Apple. The project reflects the company’s growing emphasis on localising its technology for key international markets, including China.

For more information on these topics, visit diplomacy.edu.

US utilities boost spending to meet surging AI energy demand

US electric utilities are significantly increasing their capital investment plans to expand power generation and strengthen the grid as AI and cloud computing drive up electricity consumption.

Companies such as PPL Corp, Dominion, and Exelon have revised their spending plans upward, with PPL announcing a nearly 40% increase to $20 billion through 2028.

The surge in demand is largely fuelled by data centres, which are now being built at an unprecedented scale, reaching capacities of up to 1 gigawatt per site.

Utility executives have dismissed concerns that market disruptions, such as Chinese AI startup DeepSeek’s recent emergence, would weaken demand from major tech firms.

Instead, companies including American Electric Power (AEP) and Duke Energy have received assurances from technology customers that their expansion plans remain unchanged. AEP is considering adding $10 billion to its existing $54 billion capital plan, while Duke is increasing its five-year spending by $10 billion.

Electricity demand in the US is expected to reach record levels by 2026, driven not only by data centres but also by manufacturing and the electrification of sectors such as transportation.

While utilities race to expand power supplies, regulatory approval remains a challenge, and increased investment could lead to higher electricity costs for households and businesses.

Some utilities are also exploring whether data centres should bear a greater share of the costs associated with grid expansion.

For more information on these topics, visit diplomacy.edu.