Chinese firm successfully deploys AI-powered satellite 

Earlier this month, a Chinese company successfully launched a satellite called WonderJourney-1A (WJ-1A) from the Jiuquan Satellite Launch Center in Inner Mongolia. What sets this satellite apart is its onboard AI system. The name WonderJourney is inspired by the ancient Chinese philosopher Zhuangzi, whose writings introduced the concept of the “Universe”.

The Chinese satellite is embedded with a brain of sorts: a powerful AI system known as the String Edge AI Platform. With its AI-powered smart operating system, the platform enables real-time observation and processing without sending data back to the ground.

Using AI onboard can enhance a satellite’s autonomy, its adaptability to changing conditions, and its ability to process and analyse data in real time, allowing for more accurate observations and faster emergency responses.

Why does it matter?

This innovation opens up new possibilities for space exploration, communication, scientific research, and practical applications such as smart car and drone connectivity, improved weather forecasting, and natural disaster warning and monitoring. Having an “AI assistant” in space will allow the system to continuously learn without sending massive amounts of data back to Earth.

US senators introduce two new AI bills

Amid growing interest in addressing technology issues, US senators introduced two separate bipartisan bills focused on AI on Thursday.

The first bill aims to clarify guidelines for the government’s use of artificial intelligence-based communication methods when interacting with the public. Essentially, the bill requires government agencies to inform the public when they are using AI to interact with them, while also enabling people to appeal decisions made by AI. The second bill proposes creating a dedicated entity responsible for monitoring and evaluating the country’s position in the latest AI developments to ensure continued competitiveness.

With the rise of AI, legislators have been discussing the need for new frameworks and rules to govern its application. Senate Majority Leader Chuck Schumer stated that he has organised educational briefings on AI for senators, including the first classified briefing on the topic, emphasising the importance of informing lawmakers about AI.

Russia’s Sberbank releases own Artificial Intelligence (AI) chatbot – GigaChat

Russian lender Sberbank has launched its own conversational Artificial Intelligence (AI) platform, GigaChat, to rival OpenAI’s ChatGPT. The technology is currently in invite-only testing mode and, according to Sberbank, can communicate more intelligently in Russian than foreign neural networks. GigaChat is part of Sberbank’s push into technology, as it seeks to reduce Russia’s reliance on imports and expand its offerings beyond traditional banking services. The launch of GigaChat follows the release of ChatGPT last year, which has led to a surge in the development of AI chatbots and other conversational interfaces in the technology sector.

According to reports, GigaChat is capable of understanding and processing natural language, and can be used to develop virtual assistants, chatbots, and other conversational interfaces. The platform uses machine learning algorithms to improve its responses and can be customised for different business needs.

Moderna and IBM to use AI and quantum computing to advance messenger RNA technology

Moderna and IBM have announced a partnership to use generative artificial intelligence and quantum computing to advance messenger RNA (mRNA) technology. This technology is at the core of Moderna’s Covid-19 vaccine, which has been highly effective in protecting against the virus. IBM’s quantum computing systems could help Moderna accelerate the discovery and creation of new mRNA vaccines and therapies, and IBM will provide experts to help Moderna scientists explore the use of quantum technologies in this area. Moderna will also have access to IBM’s generative AI model to design a new class of vaccines and therapies.

The agreement comes as Moderna looks to harness its mRNA technology to target other diseases beyond Covid. IBM is also investing in AI with new partnerships, including a deal with NASA to build AI foundation models to advance climate science.

The EU AI Act must address human rights concerns, urge human rights organisations

The European Parliament plans to propose stricter rules for foundation models, such as ChatGPT, under the AI Act to regulate AI based on its capacity to cause harm. The proposed rules include compliance requirements for foundation model providers, data governance measures, and transparency obligations. Downstream economic operators would become responsible for complying with the AI Act’s stricter regime if they modify a high-risk model.

However, the proposed EU AI Act has been criticised by human rights organisations for failing to ban many harmful and dangerous uses of AI in the context of immigration enforcement. Data-intensive technologies, including AI systems, are increasingly being used to make Europe’s borders impenetrable, pushing people towards more precarious and deadly routes, stripping them of their fundamental privacy rights, and unjustifiably prejudicing their claims to immigration status.

The organisations cite the European border agency Frontex as an example: it stands accused of complicity in grave human rights violations at many EU borders and is known to use various AI-powered technological systems to facilitate illegal pushback operations. They have therefore called on EU lawmakers to ensure that the legislation protects everyone, including asylum seekers, from dangerous and racist surveillance technologies, and that AI technologies are used to protect, not surveil.

The last few days have seen increased calls for stricter regulation of AI. A group of 12 European Union lawmakers working on legislation related to AI have called for a summit to discuss ways to control the development of advanced AI systems, stating that they were evolving faster than expected. In addition, forty-two German trade unions and associations have urged the European Union to strengthen draft AI rules due to concerns about generative AI, such as ChatGPT. ARTICLE 19, a human rights organisation, called for a ban on remote biometric surveillance and emotion recognition technologies in the AI Act.

The European Commission will finalise the details of AI rules over the coming months before they become legislation. The political agreement on the AI Act will be voted on by leading European Parliament committees on 26 April.

German trade unions urge the EU to strengthen rules for AI

Forty-two German trade unions and associations representing more than 140,000 authors and performers have urged the European Union to strengthen draft AI rules due to concerns about generative AI, such as ChatGPT. In a letter to the European Commission, European Council, and EU lawmakers, the unions called for the regulation of generative AI across the entire product cycle, and for providers of such technology to be held liable for all content generated and disseminated by the AI, especially for infringement of personal rights and copyrights, misinformation or discrimination.

The letter also highlighted the need to address questions of accountability, liability, and remuneration before irreversible harm occurs, and called for generative AI to be at the centre of any meaningful AI market regulation. The letter said providers of foundation models such as Microsoft, Alphabet’s Google, Amazon and Meta Platforms should not be allowed to operate central platform services to distribute digital content.

In a similar vein, a group of 12 European Union lawmakers working on legislation related to AI have called for a summit to discuss ways to control the development of advanced AI systems, stating that these systems are evolving faster than expected.

The European Commission will finalise the details of AI rules over the coming months before they become legislation. The political agreement on the AI Act will be voted on by leading European Parliament committees on 26 April. One of the primary topics under discussion among parliamentarians is whether to include general-purpose AI systems in the Act.

ARTICLE 19 urges EU to ban remote biometric surveillance in Artificial Intelligence (AI) Act

ARTICLE 19, an international human rights organisation, has called for a full ban on remote biometric surveillance and emotion recognition technologies ahead of the European Parliament’s vote on the EU Artificial Intelligence Act. The organisation has urged policymakers to strengthen human rights considerations in the Act and to be cautious about relying on standard-setting bodies to guide the implementation of crucial aspects of the Act. According to ARTICLE 19, there has been a rise in the number and types of AI systems being deployed in the EU to surveil people’s movements in public spaces on a mass scale, infringing on privacy and potentially deterring people from engaging in civic activities.

The organisation argues that emotion recognition technologies rest on discriminatory and pseudo-scientific foundations and are inconsistent with international human rights standards. The AI Act places a strong emphasis on developing technical standards to guide the implementation of its requirements, a task ARTICLE 19 believes will fall to European Standardisation Organisations, which are neither inclusive nor multistakeholder and offer limited opportunities for human rights experts to meaningfully participate in their processes.

ARTICLE 19 joins a list of actors who have called for stronger rules on AI products and services. In a similar vein, a group of 12 European Union lawmakers working on legislation related to AI have called for a summit to discuss ways to control the development of advanced AI systems, stating that these systems are evolving faster than expected. In addition, forty-two German trade unions and associations have urged the European Union to strengthen draft AI rules due to concerns about generative AI, such as ChatGPT.

Elon Musk plans to develop Artificial Intelligence (AI) model called ‘TruthGPT’

Elon Musk has revealed plans to develop an AI model called ‘TruthGPT’, which he claims will be a ‘maximum truth-seeking AI’ that seeks to understand the nature of the universe. The tech entrepreneur believes that such an AI would be unlikely to destroy humanity because it would view humans as an interesting part of the universe. However, it is unclear how far along the development of TruthGPT is at this point.

Musk has previously voiced concerns about the risks associated with large-scale AI models and has urged companies to pause ‘giant AI experiments’ that they cannot understand or control. Musk has raised concerns that ChatGPT is politically biased and told associates that he wants to create AI models that are more truth-seeking.

In addition, Elon Musk has recently created a new artificial intelligence company called X.AI Corp., incorporated in Nevada, USA. The business invokes the name of what Musk has described as his effort to create an ‘everything app’ called X.

EU lawmakers call for summit to regulate Artificial Intelligence (AI)

A group of 12 European Union lawmakers working on legislation related to AI have called for a summit to discuss ways to control the development of advanced AI systems, stating that they were evolving faster than expected. The MEPs urged US President Joe Biden and European Commission President Ursula von der Leyen to convene the meeting. In a letter, they also called for greater responsibility from AI firms.

The call follows an open letter from 1,000 technology figures, including Elon Musk, calling for a pause in the development of more powerful AI systems. The MEPs disagreed with some of the more alarmist statements in the letter but said that they agreed with the core message of the need for significant political action.

Over the past few weeks, legislators worldwide have been actively deliberating on how to govern the use of AI. China’s cyberspace regulator published draft measures for managing generative AI services, including a requirement that companies submit security assessments to authorities before launching new AI products to the public. The US government has also been seeking public comments on potential accountability measures for AI systems as questions loom about their impact on national security and education. Meanwhile, the European Commission proposed draft rules for an AI Act nearly two years ago, under which AI tools would be classified according to their perceived level of risk, from low to high.

The political agreement on the AI Act will be voted on by leading European Parliament committees on 26 April. The European Parliament intends to conclude its stance on the Act by May to commence negotiations with the EU Council and Commission. One of the primary topics under discussion among parliamentarians is whether to include general-purpose AI systems in the Act. Additionally, due to increasing concerns about generative AI systems such as ChatGPT, there is a possibility of introducing further specific provisions for such systems.

Switzerland and UK sign agreement to strengthen cooperation in innovation, including quantum computing and AI

The Swiss State Secretariat for Education, Research and Innovation and the UK Department for Business, Energy and Industrial Strategy signed a Memorandum of Understanding (MoU) to strengthen bilateral cooperation in the field of innovation and emerging technologies. The MoU aims to enable strengthened cooperation between the research and innovation communities in the two countries in areas such as deep science and deep tech, including artificial intelligence and quantum technology. Other envisioned areas of stronger cooperation include the commercialisation of innovative concepts, and science and innovation policy and diplomacy.