Tether expands AI ambitions with new apps

Tether, the world’s largest stablecoin issuer, is diving deeper into the world of artificial intelligence (AI) with several new applications in development. Tether Data, the company’s AI division, is working on a range of tools including AI Translate, AI Voice Assistant, and AI Bitcoin Wallet Assistant. According to CEO Paolo Ardoino, the apps will focus on preserving user privacy and self-custodial control over both data and money.

The AI Bitcoin Wallet Assistant will let users manage their BTC wallet through a chatbot interface, performing tasks such as checking balances or making transactions. Meanwhile, the AI Translate tool will provide simple chatbot-based translation, and the AI Voice Assistant will deliver voice responses instead of text. Tether also plans to launch an open-source AI SDK platform compatible with various devices, including mobile phones and laptops.

Tether’s commitment to AI growth has been evident since 2023, with the company acquiring a stake in Northern Data Group, a European crypto miner specialising in cloud computing and generative AI. The firm also began a global recruitment drive for AI talent in March 2023, intending to innovate and set new industry standards.

The firm has been making significant strides in both the AI and crypto industries, as it reported record profits of $13 billion for 2024, and its USDT stablecoin has seen an all-time high market capitalisation of $141 billion. Tether’s AI platform is expected to launch by the end of Q1 2025.

AI giant OpenAI to debut Super Bowl commercial

OpenAI is set to air its first-ever television advert during the upcoming Super Bowl, marking its entry into commercial advertising. The Wall Street Journal reported that the AI company will join other major tech firms in leveraging the massive Super Bowl audience to promote its brand. Google previously used the event to highlight its AI capabilities.

The Super Bowl is one of the most sought-after advertising platforms, with high costs reflecting its enormous reach. A 30-second slot for the 2025 game has sold for up to $8 million, an increase from $7 million last year.

The 2024 Super Bowl attracted an estimated 210 million viewers, and this year’s event will take place in New Orleans on 9 February at the Caesars Superdome.

OpenAI has seen rapid growth since launching ChatGPT in 2022, reaching over 300 million weekly active users. The company is in talks to raise up to $40 billion at a $300 billion valuation and recently appointed Kate Rouch as its first chief marketing officer. Microsoft holds a significant stake in the AI firm.

Greece plans AI-focused worker retraining initiatives

Greece is taking steps to address the impact of AI on the labour market by strengthening its Labour Market Needs Assessment Mechanism and implementing retraining programs. Speaking at a conference in Brussels, Labour Minister Niki Kerameus highlighted the rapid pace of AI development and its transformative effects on the workforce. She emphasised the need for protective measures to ensure workers benefit fully from AI’s potential.

Kerameus outlined two key initiatives Greece is focusing on. The first involves mapping current and future labour market needs, especially for new skills and specialities driven by AI. To that end, the Ministry of Labour is enhancing its labour market needs diagnostic mechanism to track employee skills and labour market demands in real time.

The second initiative involves retraining programs to help workers adapt to the evolving job landscape. Kerameus reassured attendees that while AI will continue to change how people work, it should not be feared. Greece is prioritising skills programs, particularly in digital and green sectors, and aims to involve 10% of the active workforce in these initiatives by 2026.

OpenAI expands ChatGPT into education with California university deal

OpenAI is set to introduce an education-focused version of its chatbot to around 500,000 students and faculty at California State University. The rollout, covering 23 campuses, aims to provide personalised tutoring for students and administrative support for faculty members. The initiative is part of OpenAI’s broader effort to integrate its technology into education despite initial concerns about cheating and plagiarism.

Universities such as the Wharton School, the University of Texas at Austin, and the University of Oxford have already adopted ChatGPT Enterprise. In response, OpenAI launched ChatGPT Edu in May last year to cater specifically to academic institutions. The education sector has become a growing focus for AI companies, with Alphabet investing $120 million into AI education programs and preparing to introduce its Gemini chatbot into school-issued Google accounts for teenage students.

Competition in AI-driven education is intensifying. In the UK, Prime Minister Keir Starmer inaugurated the first Google-funded AI university in London, providing teens with AI and machine learning resources. As AI adoption in schools increases, major tech companies are vying for a dominant role in shaping the future of digital learning.

Security concerns lead to Australian ban on DeepSeek

Australia has banned Chinese AI startup DeepSeek from all government devices, citing security risks. The directive, issued by the Department of Home Affairs, requires all government entities to prevent the installation of DeepSeek’s applications and remove any existing instances from official systems. Home Affairs Minister Tony Burke stated that the immediate ban was necessary to safeguard Australia’s national security.

The move follows similar action taken by Italy and Taiwan, with other countries also reviewing potential risks posed by the AI firm. DeepSeek has drawn global attention for its cost-effective AI models, which have disrupted the industry by operating with lower hardware requirements than competitors. The rapid rise of the company has raised concerns over data security, particularly regarding its Chinese origins.

This is not the first time Australia has taken such action against a Chinese technology firm. Two years ago, the government imposed a nationwide ban on TikTok for similar security reasons. As scrutiny over AI intensifies, more governments may follow Australia’s lead in limiting DeepSeek’s reach within public sector networks.

Bloomberg: Google drops pledge to avoid harmful AI uses, including weapons

Google has removed a key passage from its AI principles that previously committed to steering clear of potentially harmful applications, including weapons. The now-missing section, titled ‘AI applications we will not pursue,’ explicitly stated that the company would not develop technologies likely to cause harm, as seen in archived versions of the page reviewed by Bloomberg.

The change has sparked concern among AI ethics experts. Margaret Mitchell, former co-lead of Google’s ethical AI team and now chief ethics scientist at Hugging Face, criticised the move. ‘Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically, it means Google will probably now work on deploying technology directly that can kill people,’ she said.

With ethics guardrails shifting, questions remain about how Google will navigate the evolving AI landscape—and whether its revised stance signals a broader industry trend toward prioritising market dominance over ethical considerations.

UK announces AI cyber code for companies developing and managing AI systems

The UK government has launched its Code of Practice for the Cyber Security of AI, a voluntary framework designed to enhance security in AI development. The code sets out 13 principles aimed at reducing risks such as AI-driven cyberattacks, system failures, and data vulnerabilities.

The guidelines apply to developers, system operators, and data custodians (any business, organisation, or individual that controls the permissions and integrity of data used by an AI model or system) responsible for creating, deploying, or managing AI systems. Companies that solely sell AI models or components fall under separate regulations. According to the Department for Science, Innovation, and Technology, the code will help ensure AI is developed and deployed securely while fostering innovation and economic growth.

Key recommendations include implementing AI security training, establishing recovery plans, conducting risk assessments, maintaining system inventories, and ensuring transparency about data usage. One of the principles calls for enabling human responsibility for AI systems, requiring that AI decisions be explainable and that users understand their responsibilities.

The code references existing standards and best practices for secure software development and security by design, and provides useful definitions.

The release of the code follows the UK’s AI Opportunities Action Plan, which outlines strategies to expand the nation’s AI sector and establish global leadership in the field. It also coincides with a call from the National Cyber Security Centre urging software vendors to eliminate ‘unforgivable vulnerabilities’ — security flaws that are easy and cost-effective to fix but are often overlooked in favour of speed and new features.

This code also builds on the NCSC’s Guidelines for Secure AI Development, published in November 2023 and endorsed by 19 international partners.

Altman explores AI partnerships with India’s IT Minister

OpenAI CEO Sam Altman met with India’s IT Minister Ashwini Vaishnaw on Wednesday to discuss India’s vision of developing a low-cost AI ecosystem. Vaishnaw shared on X that the meeting centred on India’s strategy to build a comprehensive AI stack, including GPUs, models, and applications. He noted that OpenAI expressed interest in collaborating on all three aspects.

Altman’s visit to India, his first since 2023, comes amid ongoing legal challenges the company faces in the country, which is its second-largest market by user numbers. Vaishnaw recently praised Chinese startup DeepSeek for its affordable AI assistant, drawing parallels between DeepSeek’s cost-effective approach and India’s goal of creating a budget-friendly AI model. Vaishnaw highlighted India’s ability to achieve major technological feats at a fraction of the cost, as demonstrated by its moon mission.

Altman’s trip also included stops in Japan and South Korea, where he secured deals with SoftBank and Kakao. In Seoul, he discussed the Stargate AI data centre project, a venture backed by US President Donald Trump, with SoftBank and Samsung.

EU supports OpenEuroLLM for open-source AI innovation

The European Commission has launched the OpenEuroLLM Project, a new initiative aimed at developing open-source, multilingual AI models. The project, which began on 1 February, is supported by a consortium of 20 European research institutions, companies, and EuroHPC centres. Coordinated by Jan Hajič from Charles University and co-led by Peter Sarlin of AMD Silo AI, the project is designed to produce large language models (LLMs) that are proficient in all EU languages and comply with the bloc’s regulatory framework.

The OpenEuroLLM Project has been awarded the Strategic Technologies for Europe Platform (STEP) Seal, a recognition granted to high-quality initiatives under the Digital Europe Programme. This endorsement highlights the project’s importance as a critical technology for Europe. The LLMs developed will be open-sourced, allowing their use for commercial, industrial, and public sector purposes. The project promises full transparency, with public access to documentation, training codes, and evaluation metrics once the models are released.

The initiative aims to democratise access to high-quality AI technologies, helping European companies remain competitive globally and empowering public organisations to deliver impactful services. While the timeline for model release and specific focus areas have not yet been detailed, the European Commission has already committed funding and anticipates attracting further investors in the coming weeks.

Google search to function more like an AI assistant

Google is set to transform its Search engine into a more advanced AI-driven assistant, CEO Sundar Pichai revealed during an earnings call. The company’s ongoing AI evolution began with the controversial ‘AI Overviews’ and is now expanding to include new capabilities developed by its research division, DeepMind. Google’s goal is to allow Search to browse the web, analyse information, and deliver direct answers, reducing reliance on traditional search results.

Among the upcoming innovations is Project Astra, a multimodal AI system capable of interpreting live video and responding to real-time questions. Another key development is Gemini Deep Research, an AI agent designed to generate in-depth reports, effectively automating research tasks that users previously conducted themselves. Additionally, Project Mariner could enable AI to interact with websites on behalf of users, potentially reshaping how people navigate the internet.

The shift towards AI-powered Search has sparked debate, particularly among businesses that depend on Google’s traffic and advertising. Google’s first attempt at AI integration resulted in embarrassing errors, such as incorrect and bizarre search responses. Despite initial setbacks, the company is pushing ahead, believing AI-enhanced Search will redefine how people find and interact with information online.