India’s central bank has raised concerns over the increasing fraud in digital payments and announced new measures to improve security. Reserve Bank of India (RBI) Governor Sanjay Malhotra warned that cyber fraud and data breaches are becoming more frequent as banks and consumers adopt new technology. To counter this, the RBI will introduce exclusive website domain names to reduce the risk of deceptive online practices.
Fraudsters often use misleading domain names to trick users into revealing sensitive information or making fraudulent transactions. To enhance online security and credibility, the RBI will launch dedicated domains for financial institutions. Banks will use ‘bank.in’ while non-bank financial entities will operate under ‘fin.in’. These exclusive domains will provide a unique digital identity, making it easier for users to recognise legitimate platforms.
The Institute for Development and Research in Banking Technology (IDRBT) will oversee the registration process for these domains, with actual registrations set to begin in April 2025. The initiative is part of the RBI’s broader effort to strengthen cybersecurity and protect consumers in the rapidly growing digital payments sector.
More than 100 organisations, including Amnesty International and the AI Now Institute, have called on the AI industry and regulators to address the technology’s growing environmental impact. In an open letter published ahead of a major AI conference in Paris, the signatories highlight concerns over emissions, reliance on fossil fuels, and resource depletion caused by AI infrastructure.
The letter urges tech companies and governments to ensure that data centres operate without fossil fuels, warning that electricity demand from AI could double by 2026, reaching levels equivalent to Japan’s annual consumption. The expansion of AI infrastructure is also straining water and land resources, with data centres requiring vast amounts of water for cooling and humidity control. Transparency on AI’s full environmental impact is another key demand.
Despite these warnings, the US government appears committed to AI expansion, with President Donald Trump pushing for faster approvals of new power stations, including those reliant on coal. The letter’s signatories stress that unchecked AI growth disproportionately affects communities most vulnerable to climate change and call for a shift towards responsible and sustainable AI development.
Japanese startup ArkEdge Space revealed on Friday that it helped build an observation satellite for Taiwan’s space agency that has captured what may be the highest-quality Earth imagery from a spacecraft smaller than a suitcase. The optical satellite, ONGLAISAT, took 2.5-metre resolution images after being launched into orbit around 400 km above Earth in December.
Takayoshi Fukuyo, ArkEdge’s CEO, described the clarity of the images as comparable to aerial photography, despite the satellite’s small size. Black-and-white photos showing impressive detail, including images of Seattle suburbs and Argentina’s Patagonia, were released. The satellite, co-developed with the University of Tokyo, pairs optical equipment from Taiwan’s space agency with a compact cubesat platform.
ONGLAISAT’s mission will conclude in early March, but the optical technology demonstrated during the mission will contribute to future satellite projects. Taiwan, keen to strengthen its space infrastructure amid rising tensions with China, is also progressing with other space ventures, including weather satellites and satellite internet collaborations with Amazon’s Kuiper. Additionally, Taiwan’s space agency has deepened partnerships with Japanese space companies like Space One and ispace.
OpenAI announced on Thursday that it is evaluating US states as potential locations for data centres supporting its ambitious Stargate project, which aims to secure the US’s lead in the global AI race. The project is seen as crucial for ensuring that AI development remains democratic and open, rather than falling under authoritarian control, according to Chris Lehane, OpenAI’s chief global affairs officer.
Stargate, a venture backed by SoftBank, OpenAI, Oracle, and other investors, is set to receive up to $500 billion for AI infrastructure. A significant portion of this investment, $100 billion, will be deployed immediately, with the rest scheduled over the next few years. Texas has been designated as the flagship location for Stargate’s data centres. An initial site under construction in Abilene is expected to begin operations later this year.
The announcement follows the rise of DeepSeek, a Chinese AI model that challenges the traditional view that AI development requires large, specialised data centres. DeepSeek’s use of cheaper chips has raised concerns among investors, leading to a significant drop in tech stock values, including a record one-day loss of $593 billion in market value for Nvidia, the leading AI chipmaker.
OpenAI is considering data centre locations in approximately 16 states, with plans to expand the Stargate network to five to ten campuses in the coming months.
South Korea has temporarily blocked employee access to Chinese AI startup DeepSeek over security concerns. A government notice urged ministries and agencies to exercise caution when using AI services, including DeepSeek and ChatGPT. Korea Hydro & Nuclear Power, the defence ministry, and the foreign ministry have all imposed restrictions on DeepSeek access.
Australia and Taiwan have already banned DeepSeek from government devices, citing security risks. Italy previously ordered the company to block its chatbot over privacy concerns. Authorities in the US, India, and parts of Europe are also reviewing the implications of using the AI service. South Korea’s privacy watchdog plans to question DeepSeek on its handling of user data.
Korean businesses are also tightening restrictions on generative AI. Kakao Corp advised employees to avoid using DeepSeek, despite its recent partnership with OpenAI. SK Hynix has limited access to generative AI services, and Naver has asked employees not to use AI tools that store data externally.
DeepSeek has not yet responded to requests for comment. The company’s latest AI models, released last month, have drawn attention for their capabilities and cost efficiency. However, growing security concerns are leading governments and corporations to impose stricter controls on their use.
Elizabeth Kelly, the inaugural director of the United States AI Safety Institute, has stepped down from her role after a year overseeing efforts to measure and counter risks from advanced AI systems. During her tenure, the institute reached agreements with OpenAI and Anthropic to test their models before release and collaborated with global AI safety organisations.
The institute, created under former President Joe Biden’s administration, operates within the US Commerce Department’s National Institute of Standards and Technology. Since taking office, President Donald Trump has revoked Biden’s 2023 executive order on AI, raising questions about the institute’s future direction under the new administration.
Kelly did not comment further on her departure but expressed optimism in a LinkedIn post, stating that the institute’s mission remains crucial to the future of AI innovation. The White House has yet to clarify its plans for AI regulation and safety oversight.
Amazon is set to unveil its long-awaited generative AI-powered Alexa, with a preview event scheduled for 26 February in New York. The update marks the most significant overhaul since the voice assistant’s launch in 2014, aiming to improve user interactions with advanced AI-driven conversations. A final decision on the product’s readiness is expected at an internal meeting on 14 February.
The new AI capabilities will allow Alexa to handle multiple requests in sequence and act on behalf of users without direct input. While initially free for a limited number of users, Amazon is considering a monthly subscription fee of $5 to $10. The company will continue offering the existing version, known as Classic Alexa, though it has reportedly stopped adding new features to it.
Despite Alexa’s early success, usage has remained limited due to a lack of major updates in recent years. The generative AI revamp is designed to make Alexa more useful for tasks like shopping, scheduling, and entertainment. Analysts suggest that even a fraction of users subscribing to the service could generate significant revenue for Amazon.
The update will rely on AI software from Anthropic, a startup backed by Amazon’s $8 billion investment. Previous attempts to launch an improved Alexa were delayed due to concerns over accuracy and performance. With the upcoming release, Amazon hopes to re-establish Alexa as a key part of everyday digital interactions.
OpenAI is set to air its first-ever television advert during the upcoming Super Bowl, marking its entry into commercial advertising. The Wall Street Journal reported that the AI company will join other major tech firms in leveraging the massive Super Bowl audience to promote its brand. Google previously used the event to highlight its AI capabilities.
The Super Bowl is one of the most sought-after advertising platforms, with high costs reflecting its enormous reach. A 30-second slot for the 2025 game has sold for up to $8 million, an increase from $7 million last year.
The 2024 Super Bowl attracted an estimated 210 million viewers, and this year’s event will take place in New Orleans on 9 February at the Caesars Superdome.
OpenAI has seen rapid growth since launching ChatGPT in 2022, reaching over 300 million weekly active users. The company is in talks to raise up to $40 billion at a $300 billion valuation and recently appointed Kate Rouch as its first chief marketing officer. Microsoft holds a significant stake in the AI firm.
Luca Casarini, a prominent Italian migrant rescue activist, was warned by Meta that his phone had been targeted with spyware. The alert arrived via WhatsApp on the same day Meta accused surveillance firm Paragon Solutions of using advanced hacking methods to steal user data. Paragon, reportedly American-owned, has not responded to the allegations.
Casarini, who co-founded the Mediterranea Saving Humans charity, has faced legal action in Italy over his rescue work. He has also been a target of anti-migrant media and previously had his communications intercepted in a case related to alleged illegal immigration. He remains unaware of who attempted to hack his device or whether the attack had judicial approval.
The revelation follows a similar warning issued to Italian journalist Francesco Cancellato, whose investigative news outlet, Fanpage, recently exposed far-right sympathies within Prime Minister Giorgia Meloni’s political youth wing. Italy’s interior ministry has yet to comment on the situation.
Australia has banned Chinese AI startup DeepSeek from all government devices, citing security risks. The directive, issued by the Department of Home Affairs, requires all government entities to prevent the installation of DeepSeek’s applications and remove any existing instances from official systems. Home Affairs Minister Tony Burke stated that the immediate ban was necessary to safeguard Australia’s national security.
The move follows similar action taken by Italy and Taiwan, with other countries also reviewing potential risks posed by the AI firm. DeepSeek has drawn global attention for its cost-effective AI models, which have disrupted the industry by operating with lower hardware requirements than competitors. The rapid rise of the company has raised concerns over data security, particularly regarding its Chinese origins.
This is not the first time Australia has taken such action against a Chinese technology firm. Two years ago, the government imposed a nationwide ban on TikTok for similar security reasons. As scrutiny over AI intensifies, more governments may follow Australia’s lead in limiting DeepSeek’s reach within public sector networks.