Cloud computing

AI and cloud computing

Together, AI and cloud computing enable advanced AI applications, scalable infrastructure, collaborative research, cost optimisation, and efficient resource management. But the misuse of AI can threaten the security of cloud infrastructure. 

The interplay between AI and cloud computing

The merging of AI’s capabilities with cloud-based computing environments, often referred to as the AI cloud, is already in progress. This can be seen in digital assistants, which combine AI technology with cloud resources and big data to deliver immediate services, such as facilitating purchases or providing real-time information like traffic and weather. Notable examples of this integration include virtual assistants like Siri, Amazon Alexa, and Google Home, as well as ChatGPT, which is powered by Microsoft Azure’s cloud infrastructure. Cloud computing underpins these services by providing AI algorithms with the computational power and scalability needed for large-scale data processing and complex computations. By using distributed computing in the cloud, AI tasks can be accelerated through parallel execution, reducing both development and deployment time. When edge computing and AI are combined, intelligence can be brought to the network edge, enabling real-time analysis and responsiveness. At the same time, AI techniques optimise cloud infrastructure management, enhancing performance and reducing costs.

AI, cloud computing and (data) security

The convergence of AI and cloud computing offers both opportunities and obstacles for security and privacy. On the one hand, AI-driven security solutions can contribute to improved cloud security by detecting threats, identifying anomalies, and utilising sophisticated encryption techniques. Furthermore, AI facilitates better authentication methods for cloud users. But AI can also pose threats to the security and privacy of cloud infrastructure and data. Malicious actors can target AI systems to manipulate or deceive AI algorithms, leading to unauthorised access and data breaches. AI can also be used to automate and accelerate cyberattacks against cloud computing systems, making them more difficult to detect and more likely to result in compromised data. Additionally, AI-powered malware and chatbots may be developed to deceive users and gain access to sensitive information.

Cloud computing caused a shift from storing data on the hard disks of our computers to storing it on servers in the cloud. Examples include your email account, which you can access from different devices, and any photos, videos, or documents you store online (even if your account is private).

Cloud computing offers ubiquitous access to all of our data and services from any device, anywhere there is an Internet connection.

The first wave of cloud computing started with the use of online mail servers (Gmail, Yahoo, etc.), social media applications (Facebook, Twitter, etc.), and online applications (wikis, blogs, Google Docs, etc.).

Apart from everyday applications, cloud computing is used extensively for business software. More and more of our digital assets are moving from our hard disks to the cloud. Due to their large server farms, tech giants such as Google, Microsoft, Apple, Amazon, and Facebook are among the main cloud computing players in the private sector.

Emerging technologies

AI and emerging technologies

Often referred to as an emerging technology, although it hasn’t been one for some time, AI has not only spread across different industries and sectors, but is also increasingly influencing our daily lives. AI is changing the dynamics of computing and accelerating the development of other emerging technologies, offering opportunities but also leading to new challenges.

AI to accelerate opportunities 

Data is a key ingredient for most emerging technologies, but it is AI that puts that data to use. AI allows designers to accurately predict, fine-tune, and adjust parameters for 3D printing, helping to control processes, avoid errors, and save time. In biotech, AI can be used to analyse large-scale genomic data for personalised medicine, accelerate drug discovery through predictive modelling, and support brain-computer interfaces used, for instance, in brain-controlled prosthetic limbs. AI can also improve the user experience in VR and AR, where it is used to create digital content, improve the display of digital information, and ensure safe interactions in mixed-reality settings.

AI increases challenges

AI-assisted reverse engineering can be used to reconstruct 3D-printed objects without the consent of their designers. AI systems in the field of biotechnology might be used to expose sensitive private data and breach privacy by analysing personal biometric information. The interplay between AI and advancements in neuroscience – such as brain-computer interfaces – raises questions of privacy, security, and even human autonomy. VR and AR devices also carry a high risk of AI-assisted data misuse: when digital environments and avatars are created with AI, the authenticity of content and digital identities is at stake.

We live in an era of fast technological progress, with new digital devices, applications, and tools being developed almost on a daily basis. 3D printing, augmented reality (AR) and virtual reality (VR), biotechnology, and quantum technology are some of the most rapidly advancing areas, with many implications for society.

How is 3D printing impacting current manufacturing business models, and what consequences does it have for the future of work? Is AR an opportunity to improve the provision of education, especially in remote areas? And what are the ethical boundaries within which biotechnology should operate? These are some of the policy questions linked to these emerging technologies.

Blockchain

AI and blockchain technology

AI has various applications in blockchain technology. It has the potential to enhance blockchain systems by analysing smart contracts, detecting fraud, optimising scalability, and enabling tokenisation, among other uses. But it also comes with challenges, for instance, in the form of AI-driven attacks aimed at exploiting blockchain vulnerabilities.

AI to complement blockchain technology

AI algorithms are already used to optimise the consensus mechanism used on cryptocurrency blockchains by analysing and enhancing the efficiency and effectiveness of the consensus protocols. Using machine learning algorithms and data analysis, AI can identify patterns, optimise parameters, and predict successful consensus strategies. Additionally, AI can help address challenges related to scalability and energy consumption.

AI can also enable the tokenisation of assets, facilitating the creation and management of digital assets on blockchain platforms. Asset management systems powered by AI can automate processes like asset valuation, portfolio management, and investment decision-making. Security issues associated with blockchain can be identified and mitigated using AI. For instance, AI is used to analyse patterns in DDoS attacks and identify possible security holes in the code. AI techniques are also employed to verify smart contracts and reduce the likelihood of exploits and vulnerabilities. Furthermore, by analysing transaction patterns and identifying suspicious behaviour, AI can detect fraudulent activities within blockchain networks and help prevent illicit activities such as fraud and money laundering. Additionally, AI can help enhance the privacy and security of blockchain networks by developing advanced encryption algorithms and employing privacy-preserving techniques to protect sensitive data and transactions.
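To make the fraud-detection idea above concrete, here is a minimal sketch of statistical anomaly detection on transaction amounts: learn what ‘normal’ looks like from history, then flag extreme outliers for review. The amounts and the threshold are illustrative assumptions, not tied to any specific blockchain platform, and real systems use far richer features than a single number.

```python
import statistics

def fit(amounts):
    # Learn the 'normal' profile from historical transaction amounts.
    return statistics.mean(amounts), statistics.stdev(amounts)

def is_suspicious(mean, stdev, amount, threshold=3.0):
    # Flag transactions more than `threshold` standard deviations from the mean.
    return abs(amount - mean) / stdev > threshold

history = [12.5, 9.9, 11.2, 10.4, 13.1, 8.7, 10.9, 12.0]  # illustrative data
mean, stdev = fit(history)
print(is_suspicious(mean, stdev, 11.0))   # typical amount -> False
print(is_suspicious(mean, stdev, 950.0))  # extreme outlier -> True
```

Production fraud detection replaces the hand-set threshold with models trained on labelled transactions, but the underlying idea is the same: deviation from learned normal behaviour triggers scrutiny.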

As is the case with other technologies, different blockchain systems are often incompatible with each other. AI solutions that enable different blockchains to communicate are in development and will potentially create new opportunities.

Challenges at the intersection of AI and blockchain 

The integration of AI and blockchain technology presents several challenges. Adversarial attacks are a significant concern, as AI can exploit blockchain system vulnerabilities and compromise security and integrity. The analytical capabilities of AI can potentially de-anonymise blockchain data, thereby raising privacy concerns. Additionally, the resource-intensive nature of AI systems often necessitates significant computational power; when integrated with blockchain systems, AI systems can exacerbate scalability and performance issues (i.e. the limited resources of blockchain networks may be strained by the processing power and storage requirements of AI tasks). Finally, governance and regulatory challenges arise when determining responsibility and accountability in decentralised AI-powered blockchain systems.


Digitalisation, e-commerce, and the emergence of e-money in our daily lives made the notion of non-physical currency quite common. Since the early 2000s, the idea of a digital payment system and a digital currency native to the Internet has become very attractive.

What is a blockchain? Simply put, it is a data ledger (think of an accounting ledger, which records every ‘in’ and ‘out’ transaction). The ledger is distributed, which means that many copies of the same ledger exist on computers worldwide. It is also secured by strong cryptography, protecting it from malicious actors attempting to change any information within the blockchain.
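The tamper-evidence this cryptography provides can be illustrated in a few lines of code. This is a deliberately minimal sketch of hash chaining only (no consensus, no network, invented transaction data): each block stores the hash of the previous one, so editing any historical entry invalidates every later link.

```python
import hashlib
import json

def block_hash(prev_hash, transactions):
    # Hash the block's contents together with the previous block's hash.
    payload = json.dumps({"prev": prev_hash, "tx": transactions}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(prev_hash, transactions):
    return {"prev": prev_hash, "tx": transactions,
            "hash": block_hash(prev_hash, transactions)}

def is_valid(chain):
    # Recompute every hash; any edited block breaks the chain after it.
    for prev, block in zip(chain, chain[1:]):
        if block["prev"] != block_hash(prev["prev"], prev["tx"]):
            return False
    return True

genesis = make_block("0" * 64, [{"from": "alice", "to": "bob", "amount": 5}])
chain = [genesis,
         make_block(genesis["hash"], [{"from": "bob", "to": "carol", "amount": 2}])]
print(is_valid(chain))             # True
chain[0]["tx"][0]["amount"] = 500  # tamper with recorded history...
print(is_valid(chain))             # ...and the chain no longer validates: False
```

Because each copy of the distributed ledger can run this check independently, tampering with one copy is detectable by everyone else.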

How was this technology born? In 1992, W. Scott Stornetta and Stuart Haber presented the idea of blocks of digital data chained by cryptography to prevent tampering with time-stamped documents. In 2008, an anonymous person known as Satoshi Nakamoto proposed a new payment system to a group of prominent cryptographers and mathematicians through a cypherpunk mailing list.

The proposal, titled Bitcoin: A Peer-to-Peer Electronic Cash System, was based on an online distributed ledger – verified by cryptography – functioning through a ‘proof-of-work’ consensus mechanism, the same technology that was being used to tackle spam. The term blockchain was not mentioned in the proposal; it was coined later on, with reference to Stornetta and Haber’s work.

How is new data added to a blockchain? Every computer (or node) synchronises the data through a consensus-based mechanism. Once data is added to a blockchain, it cannot be removed or altered unless there is consensus.

There are many types of blockchain databases. The main types are open blockchains and closed (private) blockchains.


Artificial intelligence

About AI: A brief introduction


Artificial intelligence (AI) might sound like something from a science fiction movie in which robots are ready to take over the world. While such robots are purely fixtures of science fiction (at least for now), AI is already part of our daily lives, whether we know it or not.

Think of your Google inbox: Some of the emails you receive end up in your spam folder, while others are marked as ‘social’ or ‘promotions’. How does this happen? Google uses AI algorithms to automatically filter and sort emails into categories. These algorithms can be seen as small programs that are trained to recognise certain elements within an email that make it likely to be a spam message, for example. When the algorithm identifies one or several of those elements, it marks the email as spam and sends it to your spam folder. Of course, algorithms do not work perfectly, but they are continuously improved. When you find a legitimate email in your spam folder, you can tell Google that it was wrongly marked as spam, and Google uses that information to improve how its algorithms work.
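As a rough illustration of how such a filter learns from labelled examples, here is a toy naive-Bayes-style scorer. It is a sketch only – the training messages are made up, and Google’s actual systems are vastly more sophisticated – but it shows the core mechanic: words seen more often in spam push a message’s score towards ‘spam’.

```python
import math
from collections import Counter

def train(messages):
    # Count how often each word appears in spam vs. legitimate ('ham') mail.
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in messages:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text, smoothing=1.0):
    # Sum log-odds per word: positive total means the words lean towards spam.
    spam_total = sum(counts["spam"].values())
    ham_total = sum(counts["ham"].values())
    score = 0.0
    for word in text.lower().split():
        p_spam = (counts["spam"][word] + smoothing) / (spam_total + smoothing)
        p_ham = (counts["ham"][word] + smoothing) / (ham_total + smoothing)
        score += math.log(p_spam / p_ham)
    return "spam" if score > 0 else "ham"

training_data = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the project team", "ham"),
]
model = train(training_data)
print(classify(model, "free prize money"))  # -> spam
print(classify(model, "project meeting"))   # -> ham
```

The user-feedback loop described above corresponds to adding the corrected message to the training data and retraining, so the counts – and future classifications – improve.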

AI is widely used in internet services: Search engines use AI to provide better search results; social media platforms rely on AI to automatically detect hate speech and other forms of harmful content; and online stores use AI to suggest products you are likely to be interested in based on your previous shopping habits. More complex forms of AI are used in manufacturing, transportation, agriculture, healthcare, and many other areas. Self-driving cars, programs able to recognise certain medical conditions with the accuracy of a doctor, systems developed to track and predict the impact of weather conditions on crops – they all rely on AI technologies.

As the name suggests, AI systems are embedded with some level of ‘intelligence’, which makes them capable of performing certain tasks or replicating specific behaviours that normally require human intelligence. What makes them ‘intelligent’ is a combination of data and algorithms. Let’s look at an example involving a technique called machine learning. Imagine a program able to recognise cars among millions of images. First, that program is fed a large number of car images. Algorithms then ‘study’ those images to discover patterns, and in particular the specific elements that characterise the image of a car. Through machine learning, algorithms ‘learn’ what a car looks like. Later on, when they are presented with millions of different images, they are able to identify the images that contain a car. This is, of course, a simplified example – there are far more complex AI systems out there. But essentially all of them involve some initial training data and an algorithm that learns from that data in order to be able to perform a task.
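That ‘learn from labelled examples’ loop can be sketched with one of the oldest learning algorithms, the perceptron. The two-number features below (standing in for image features such as ‘wheel-likeness’) are invented for illustration; real image classifiers learn millions of parameters from pixels, but the principle – adjust the model whenever it gets an example wrong – is the same.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    # examples: list of (features, label) with label +1 ('car') or -1 ('not car').
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation >= 0 else -1
            if prediction != label:  # learn only from mistakes
                weights = [w + lr * label * x for w, x in zip(weights, features)]
                bias += lr * label
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias >= 0 else -1

# Hypothetical features: high values = car-like, low values = not car-like.
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], -1), ([0.2, 0.1], -1)]
w, b = train_perceptron(data)
print(predict(w, b, [0.85, 0.75]))  # car-like input -> 1
print(predict(w, b, [0.15, 0.15]))  # not car-like -> -1
```

After training, the learned weights encode the ‘patterns’ the text describes: inputs resembling the positive examples are classified as cars, even if they were never seen during training.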

Some AI systems go beyond this, being able to learn and improve on their own. One famous example is DeepMind’s AlphaGo Zero: The program initially knows only the rules of the game of Go; it then plays against itself and learns from its successes and failures to become better and better.

Going back to where we started: Is AI really able to match human intelligence? In specific cases – like playing the game of Go – the answer is ‘yes’. That being said, what has been coined ‘artificial general intelligence’ (AGI) – advanced AI systems that can replicate human intellectual capabilities in order to perform complex and combined tasks – does not yet exist. Experts have divided opinions on whether AGI is something we will see in the near future, but it is certain that scientists and tech companies will continue to develop more and more complex AI systems.


The policy implications of AI

Applying AI for social good is a principle that many tech companies have adhered to. They see AI as a tool that can help address some of the world’s most pressing problems, in areas such as climate change and disease eradication. The technology and its many applications certainly carry significant potential for good, but there are also risks. Accordingly, the policy implications of AI advancements are far-reaching. While AI can generate economic growth, there are growing concerns over the significant disruptions it could bring to the labour market. Issues related to privacy, safety, and security are also in focus.

As innovations in the field continue, more and more AI standards and AI governance frameworks are being developed to help ensure that AI applications have minimal unintended consequences.


Social and economic

AI has significant potential to stimulate economic growth and contribute to sustainable development. But it also comes with disruptions and challenges.


Safety and security

AI applications bring into focus issues related to cybersecurity (from cybersecurity risks specific to AI systems to AI applications in cybersecurity), human safety, and national security.


Human rights

The uptake of AI raises profound implications for privacy and data protection, freedom of expression, freedom of assembly, non-discrimination, and other human rights and freedoms.


Ethical concerns

The involvement of AI algorithms in judgments and decision-making gives rise to concerns about ethics, fairness, justice, transparency, and accountability.

Governing AI

When debates on AI governance first emerged, one overarching question was whether AI-related challenges (in areas such as safety, privacy, and ethics) call for new legal and regulatory frameworks, or whether existing ones could be adapted to also cover AI. 

Applying and adapting existing regulation was seen by many as the most suitable approach. But as AI innovation accelerated and applications became more and more pervasive, AI-specific governance and regulatory initiatives started emerging at national, regional, and international levels.


US Blueprint for an AI Bill of Rights

The Blueprint for an AI Bill of Rights is a guide for a society that protects people from AI threats and uses technologies in ways that reinforce our highest values. Responding to the experiences of the American public, and informed by insights from researchers, technologists, advocates, journalists, and policymakers, this framework is accompanied by From Principles to Practice – a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualising these principles in the technological design process.


China’s Interim Measures for Generative Artificial Intelligence

Released in July 2023 and applicable starting 15 August 2023, the measures apply to ‘the use of generative AI to provide services for generating text, pictures, audio, video, and other content to the public in the People’s Republic of China’. The regulation covers issues related to intellectual property rights, data protection, transparency, and data labelling, among others.


EU’s AI Act

Proposed by the European Commission in April 2021, the EU AI Act was formally adopted by the Council of the European Union on 21 May 2024 and came into effect on 1 August of the same year. The regulation introduces a risk-based approach for AI systems: if an AI system poses unacceptable risks, it is banned; if an AI system comes with high risks (for instance, AI used in performing surgeries), it is strictly regulated; if an AI system involves only limited risks, the focus is placed on ensuring transparency for end users.


UNESCO Recommendation on AI Ethics

Adopted by UNESCO member states in November 2021, the recommendation outlines a series of values, principles, and actions to guide states in the formulation of their legislation, policies, and other instruments regarding AI. For instance, the document calls for action to guarantee individuals more privacy and data protection, by ensuring transparency, agency, and control over their personal data. Explicit bans on the use of AI systems for social scoring and mass surveillance are also highlighted, and there are provisions for ensuring that real-world biases are not replicated online.


OECD Recommendation on AI

Adopted by the OECD Council in May 2019, the recommendation encourages countries to promote and implement a series of principles for responsible stewardship of trustworthy AI, from inclusive growth and human-centred values to transparency, security, and accountability. Governments are further encouraged to invest in AI research and development, foster digital ecosystems for AI, shape enabling policy environments, build human capacities, and engage in international cooperation for trustworthy AI.


Council of Europe work on a Convention on AI and human rights

In 2021, the Committee of Ministers of the Council of Europe (CoE) approved the creation of a Committee on Artificial Intelligence (CAI), tasked with elaborating a legal instrument on the development, design, and application of AI systems based on the CoE’s standards on human rights, democracy, and the rule of law, and conducive to innovation. On 17 May 2024, the Committee of Ministers adopted the Framework Convention on AI, Human Rights, Democracy and the Rule of Law. The convention was opened for signature on 5 September 2024.


Group of Governmental Experts on Lethal Autonomous Weapons Systems

Within the UN System, the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) established a Group of Governmental Experts on emerging technologies in the area of lethal autonomous weapons systems (LAWS) to explore the technical, military, legal, and ethical implications of LAWS. The group has been convened on an annual basis since its creation. In 2019, it agreed on a series of guiding principles, which, among other issues, confirmed the application of international humanitarian law to the potential development and use of LAWS, and highlighted that human responsibility must be retained for decisions on the use of weapons systems.


Global Partnership on Artificial Intelligence

Launched in June 2020 and counting 29 members in 2024, the Global Partnership on Artificial Intelligence (GPAI) is a multistakeholder initiative dedicated to ‘sharing multidisciplinary research and identifying key issues among AI practitioners, with the objective of facilitating international collaboration, reducing duplication, acting as a global reference point for specific AI issues, and ultimately promoting trust in and the adoption of trustworthy AI’.


African Union Continental AI Strategy

Adopted by the African Union Executive Council on 18–19 July 2024, the Continental AI Strategy advocates for unified national approaches among AU member states to navigate the complexities of AI-driven transformation. It seeks to enhance regional and global cooperation, positioning Africa as a leader in inclusive and responsible AI development. The strategy emphasises a people-centric, development-oriented, and inclusive approach, structured around five key focus areas and fifteen policy recommendations.

AI standards as a bridge between technology and policy

Despite their technical nature – or rather because of it – standards have an important role to play in bridging technology and policy. In the words of three major standards developing organisations (SDOs), standards can ‘underpin regulatory frameworks and […] provide appropriate guardrails for responsible, safe and trustworthy AI development’. As hard regulations are being shaped to govern the development and use of AI, standards are increasingly seen as a mechanism to demonstrate compliance with legal provisions.

Right now, standards for AI are developed within a wide range of SDOs at national, regional, and international levels. In the EU, for instance, the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute (ETSI) are working on AI standards to complement the EU’s AI Act. At the International Telecommunication Union (ITU), several study groups and focus groups within its Telecommunication Standardization Sector (ITU-T) are carrying out standardisation and pre-standardisation work across issues as diverse as AI-enabled multimedia applications, AI for health, and AI for natural disaster management. And the Joint Technical Committee 1 on Information Technology – an initiative of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) – has a subcommittee dedicated to AI standards.


National AI strategies

As AI technologies continue to evolve at a fast pace and find more and more applications in various areas, countries are increasingly aware that they need to keep up with this evolution and take advantage of it. Many are developing national AI development strategies, as well as addressing the economic, social, and ethical implications of AI advancements. China, for example, released a national AI development plan in 2017, intended to help make the country the world leader in AI by 2030 and build a national AI industry worth US$150 billion. In the United Arab Emirates (UAE), the adoption of a national AI strategy was complemented by the appointment of a State Minister for AI to work on ‘making the UAE the world’s best prepared [country] for AI and other advanced technologies’. Canada, France, Germany, and Mauritius were among the first countries to launch national AI strategies. These are only a few examples; many more countries have adopted or are working on such plans and strategies, as the map below shows.

Last updated: September 2025

In depth: Africa and artificial intelligence

Africa is making strides towards a faster uptake of AI, and AI-related investments and innovation are advancing across the continent. Governments are adopting national AI strategies, regional and continental organisations are exploring the same, and there is increasing participation in global governance processes focused on various aspects of AI.


AI on the international level

The Council of Europe, the EU, OECD, and UNESCO are not the only international spaces where AI-related issues are discussed; the technology and its policy implications are now featured on the agenda of a wide range of international organisations and processes. Technical standards for AI are being developed at the ITU, the ISO, the IEC, and other standard-setting bodies. The ITU also hosts an annual AI for Good summit exploring the use of AI to accelerate progress towards sustainable development. UNICEF has begun working on using AI to realise and uphold children’s rights, while the International Labour Organization (ILO) is looking at the impact of AI automation on the world of work. The World Intellectual Property Organization (WIPO) is discussing intellectual property issues related to the development of AI, the World Health Organization (WHO) looks at the applications and implications of AI in healthcare, and the World Meteorological Organization (WMO) has been using AI in weather forecasting, natural hazard management, and disaster risk reduction.

As discussions on digital cooperation have advanced at the UN level, AI has been one of the topics addressed within this framework. The 2019 report of the UN High-Level Panel on Digital Cooperation tackles issues such as the impact of AI on labour markets, AI and human rights, and the impact of the misuse of AI on trust and social cohesion. The UN Secretary-General’s Roadmap on Digital Cooperation, issued in 2020, identifies gaps in international coordination, cooperation, and governance when it comes to AI. The Our Common Agenda report released by the Secretary-General in 2021 proposes the development of a Global Digital Compact (with principles for ‘an open, free and secure digital future for all’) which could, among other elements, promote the regulation of AI ‘to ensure that it is aligned with shared global values’.

AI and its governance dimensions have featured high on the agenda of bilateral and multilateral processes such as the EU-US Trade and Technology Council, G7, G20, and BRICS. Regional organisations such as the African Union (AU), the Association of Southeast Asian Nations (ASEAN), and the Organization of American States (OAS) are also paying increasing attention to leveraging the potential of AI for economic growth and sustainable development.

In recent years, annual meetings of the Internet Governance Forum (IGF) have featured AI among their main themes.


More on the policy implications of AI

The economic and social implications of AI

AI has significant potential to stimulate economic growth. In production processes, AI systems increase automation and make processes smarter, faster, and cheaper, bringing savings and increased efficiency. AI can improve the efficiency and quality of existing products and services, and can also generate new ones, thus leading to the creation of new markets. It is estimated that the AI industry could contribute up to US$15.7 trillion to the global economy by 2030. Beyond the economic potential, AI can also contribute to achieving the sustainable development goals (SDGs); for instance, AI can be used to detect water service lines containing hazardous substances (SDG 6 – clean water and sanitation), to optimise the supply and consumption of energy (SDG 7 – affordable and clean energy), and to analyse climate change data and generate climate modelling, helping to predict and prepare for disasters (SDG 13 – climate action). Across the private sector, companies have been launching programmes dedicated to fostering the role of AI in achieving sustainable development. Examples include IBM’s Science for Social Good, Google’s AI for Social Good, and Microsoft’s AI for Good projects.

For this potential to be fully realised, there is a need to ensure that the economic benefits of AI are broadly shared at a societal level, and that the possible negative implications are adequately addressed. The 2022 edition of the Government AI Readiness Index warns that ‘care needs to be taken to make sure that AI systems don’t just entrench old inequalities or disenfranchise people. In a global recession, these risks are evermore important.’ One significant risk is that of a new form of global digital divide, in which some countries reap the benefits of AI, while others are left behind. Estimates for 2030 show that North America and China will likely experience the largest economic gains from AI, while developing countries – with lower rates of AI adoption – will register only modest economic increases.

The disruptions that AI systems could bring to the labour market are another source of concern. Many studies estimate that automated systems will make some jobs obsolete and lead to unemployment. Such concerns have led to discussions about the introduction of a ‘universal basic income’ that would compensate individuals for disruptions brought to the labour market by robots and other AI systems. There are, however, also opposing views, according to which AI advancements will generate new jobs that will compensate for those lost, without affecting overall employment rates. One point on which there is broad agreement is the need to better adapt education and training systems to the new requirements of the job market. This entails not only preparing the new generations, but also allowing the current workforce to re-skill and up-skill.


AI, safety, and security

AI applications in the physical world (e.g. in transportation) bring into focus issues related to human safety, and the need to design systems that can properly react to unforeseen situations with minimal unintended consequences. Beyond self-driving cars, the (potential) development of other autonomous systems – such as lethal autonomous weapons systems – has sparked additional and intense debates on their implications for human safety.

AI also has implications in the cybersecurity field. In addition to the cybersecurity risks associated with AI systems (e.g. as AI is increasingly embedded in critical systems, these need to be secured against potential cyberattacks), the technology has a dual function: it can be used as a tool both to commit and to prevent cybercrime and other forms of cyberattacks. As the possibility of using AI to assist in cyberattacks grows, so does the integration of the technology into cybersecurity strategies. The same characteristics that make AI a powerful tool for perpetrating attacks also help to defend against them, raising hopes for levelling the playing field between attackers and cybersecurity experts.

Going a step further, AI is also looked at from the perspective of national security. The US Intelligence Community, for example, has included AI among the areas that could generate national security concerns, especially due to its potential applications in warfare and cyber defence, and its implications for national economic competitiveness.

Explore related digital policy topics and their links with AI

AI and human rights

AI systems work with enormous amounts of data, and this raises concerns regarding privacy and data protection. Online services such as social media platforms, e-commerce stores, and multimedia content providers collect information about users’ online habits, and use AI techniques such as machine learning to analyse the data and to ‘improve the user’s experience’ (for example, Netflix suggests movies you might want to watch based on movies you have already seen). AI-powered products such as smart speakers also involve the processing of user data, some of it of a personal nature. Facial recognition technologies embedded in public street cameras have direct privacy implications.

How is all of this data processed? Who has access to it and under what conditions? Are users even aware that their data is extensively used? These are only some of the questions generated by the increased use of personal data in the context of AI applications. What solutions are there to ensure that AI advancements do not come at the expense of user privacy? Strong privacy and data protection regulations (including in terms of enforcement), enhanced transparency and accountability for tech companies, and embedding privacy and data protection guarantees into AI applications during the design phase are some possible answers.

Algorithms, which power AI systems, could also have consequences for other human rights. For example, AI tools aimed at automatically detecting and removing hate speech from online platforms could negatively affect freedom of expression: even when such tools are trained on significant amounts of data, the algorithms could wrongly identify a text as hate speech. Complex algorithms and human-biased big data sets can serve to reinforce and amplify discrimination, especially against those who are already disadvantaged.

Explore related digital policy topics and their links with AI

Ethical concerns

As AI algorithms involve judgements and decision-making – replicating similar human processes – concerns are being raised regarding ethics, fairness, justice, transparency, and accountability. The risk of discrimination and bias in decisions made by or with the help of AI systems is one such concern, as illustrated in the debate over facial recognition technology (FRT). Several studies have shown that FRT programs present racial and gender biases, as the algorithms involved are largely trained on photos of males and white people. If law enforcement agencies rely on such technologies, this could lead to biased and discriminatory decisions, including false arrests.

One way of addressing concerns over AI ethics could be to combine ethical training for technologists (encouraging them to prioritise ethical considerations when creating AI systems) with the development of technical methods for designing AI systems in a way that allows them to avoid such risks (i.e. fairness, transparency, and accountability by design). The Institute of Electrical and Electronics Engineers’ Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems is one example of initiatives aimed at ensuring that technologists are educated, trained, and empowered to prioritise ethical considerations in the design and development of intelligent systems.

Researchers are carefully exploring the ethical challenges posed by AI and are working, for example, on the development of AI algorithms that can ‘explain themselves’. Being able to better understand how an algorithm makes a certain decision could also help improve that algorithm.


AI and other digital technologies and infrastructures


Telecom infrastructure

AI is used to optimise network performance, conduct predictive maintenance, dynamically allocate network resources, and improve customer experience, among others.


Internet of things

The interplay between AI and IoT can be seen in multiple applications, from smart home devices and vehicle autopilot systems to drones and smart cities applications.


Semiconductors

AI algorithms are used in the design of chips, for improved performance and power efficiency, for instance. In turn, semiconductors themselves are used in AI hardware and research.


Quantum computing

Although largely still a field of research, quantum computing promises enhanced computational power which, coupled with AI, can help address complex problems.


Other advanced technologies

AI techniques are increasingly used in the research and development of other emerging and advanced technologies, from 3D printing and virtual reality, to biotechnology and synthetic biology.


AI and semiconductors

Semiconductors and AI are closely intertwined. Semiconductors are the backbone of modern computing and are present in a vast array of electronic devices, from servers and data centres to smartphones and laptops. At the same time, AI is a quickly expanding technology that critically depends on computing power and data processing abilities.

AI for improving semiconductors

The synergy between semiconductors and AI is essential for advancing AI capabilities and propelling innovation in the semiconductor industry. Semiconductors serve as the foundation for AI technologies by providing the necessary computing power and data processing capabilities. AI algorithms rely on the processing capabilities of semiconductors, such as central processing units (CPUs), graphics processing units (GPUs), and specialised AI hardware, like tensor processing units (TPUs) and neural processing units (NPUs). These semiconductor components accelerate AI computations and enable the execution of complex AI algorithms.


AI also enables improved design, more effective manufacturing processes, and enhanced chip performance. AI can be used, for instance, to anticipate equipment breakdowns, optimise process variables, and enhance quality control, all of which can result in more effective and economical manufacturing processes. Overall, AI’s integration into the semiconductor industry can help enhance the entire lifecycle of semiconductor chips, making semiconductors more efficient, cost-effective, and reliable. AI has also become a valuable tool in semiconductor design, assisting chip engineers in navigating the complexities of their work.

Challenges in using AI in the semiconductor industry

The use of AI in the semiconductor industry presents several challenges, including increased complexity and costs. Specialised hardware and algorithms must be developed to effectively implement AI, necessitating significant investments in research, development, and infrastructure. The proprietary nature of AI algorithms, data, and models used for optimisation and design can potentially lead to infringements on intellectual property rights, thus raising ownership disputes. With AI systems being connected to networks, they are more susceptible to cyberattacks and unauthorised access.

Semiconductors, often referred to as microchips, or simply chips, are an essential component of electronic devices that have become an important part of our everyday life. We can find them in our smartphones, computers, TVs, vehicles, advanced medical equipment, military systems, and countless other applications. In 2023, the sales of semiconductors reached a record $526.8 billion, according to the Semiconductor Industry Association. It is estimated that we use 120 chips per person on the planet on average. For example, a typical car uses between 50 and 150. However, a modern electric vehicle can use up to 3,000.

Semiconductor chips power our world. They are a key component of nearly every electronic device we use and they also power factories in which these devices are produced. Think for a minute of all the encounters you have with electronic devices. How many have you seen or used in the last week? In the last 24 hours? Each has important components that have been manufactured with electronic materials.

pic 1
Image by axonite from Pixabay

To understand the important role of semiconductor chips we have to explain what they are and how they are designed and produced. A substance that does not conduct electricity is called an insulator. A substance that conducts electricity is called a conductor. Semiconductors are substances with the properties of both an insulator and a conductor. They control and manage the flow of electric current in electronic equipment and devices. 

The most used semiconductor is silicon. Using semiconductors, we can create discrete electronic components, such as diodes and transistors, as well as integrated circuits (ICs). An IC is a small device implementing several electronic functions. It is made up of two major parts: a tiny and very fragile silicon chip, and a package, which is intended to protect the internal silicon chip and to provide users with a practical way of handling the component. Semiconductor devices installed inside electronic appliances are thus essential components that keep the modern world functioning.


Types of chips

We can categorise types of chips according to the ICs used or to their functionality. 

Sorted by types of IC used, there are three types of chips:

  • Digital 
  • Analog 
  • Mixed

Most computer processors currently use digital circuits. These circuits usually combine transistors and logic gates. Digital circuits use digital, discrete signals that are generally based on a binary scheme. Two different voltages are assigned, each representing a different logical value. On the other hand, in analog circuits, voltage and current vary continuously at specified points in the circuit. Power supply chips are usually analog chips. Another application using analog circuits is communication systems. Mixed integrated circuits are typically digital chips with added technology for working with both analog and digital circuits. An analog-to-digital converter (ADC) and a digital-to-analog converter (DAC) are essential parts of these types of circuit. 
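The boundary between the analog and digital worlds described above can be illustrated with a minimal sketch of how a hypothetical 3-bit ADC quantises a continuous voltage into discrete binary codes, and how a DAC maps a code back to an approximate voltage. The function names and parameters are illustrative, not a real hardware API:

```python
# Sketch of analog-to-digital conversion: a continuous voltage is mapped
# to one of 2**bits discrete codes (here a hypothetical 3-bit, 5 V ADC).

def adc_sample(voltage, v_ref=5.0, bits=3):
    """Map a voltage in [0, v_ref) to one of 2**bits discrete codes."""
    levels = 2 ** bits                      # 8 levels for a 3-bit ADC
    code = int(voltage / v_ref * levels)    # truncate to the step below
    return min(max(code, 0), levels - 1)    # clamp to the valid code range

def dac_output(code, v_ref=5.0, bits=3):
    """Inverse mapping: reconstruct an approximate voltage from a code."""
    return code / (2 ** bits) * v_ref

print(adc_sample(3.2))   # a 3.2 V input falls into code 5
print(dac_output(5))     # which reconstructs to 3.125 V
```

The round trip loses information (3.2 V comes back as 3.125 V), which is exactly the quantisation error inherent in any ADC/DAC pair.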

pic 2
Image by Recklessstudios from Pixabay

Sorted according to functionality, categories of semiconductors include:

  • Memory chips 
  • Microprocessors 
  • Graphics processing units (GPUs)
  • Application-specific integrated circuits (ASICs) 
  • Systems-on-a-chip (SoCs)

Memory chips

The main function of semiconductor memory chips is to store data and programs on computers and data storage devices. Electronic semiconductor memory technology can be split into two main categories, based on the way in which the memory operates:

  1. Read-only memory (ROM)
  2. Random-access memory (RAM)

There are many types of ROM and RAM available. Their variety stems from the range of applications they serve and from the number of technologies available. This section contains a brief overview of the functionality of the main memory chip types.

table 1
Overview of main memory chip types
pic 3 Memory Sharing infographic July 2022
Semiconductor Memory Technologies by Diplo Creative Lab
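The basic behavioural difference between the two categories, ROM being fixed at manufacture while RAM can be rewritten in operation, can be sketched in a few lines of Python. The class names and methods here are purely illustrative, not a real hardware interface:

```python
# Illustrative sketch: ROM contents are fixed when the chip is made;
# RAM adds the ability to write new values at any address.

class ROM:
    def __init__(self, contents):
        self._cells = list(contents)   # fixed at "manufacture" time

    def read(self, addr):
        return self._cells[addr]

class RAM(ROM):
    def write(self, addr, value):      # RAM extends ROM with writability
        self._cells[addr] = value

ram = RAM([0, 0, 0])
ram.write(1, 42)
print(ram.read(1))   # 42

rom = ROM([7, 8, 9])
print(rom.read(0))   # 7
# rom.write(...) is not possible: the ROM class deliberately has no write method
```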

Microprocessors

Microprocessors are made of one or more central processing units (CPUs). Multiple CPUs can be found in computer servers, personal computers (PCs), tablets, and smartphones.

The 32- and 64-bit microprocessors in PCs and servers today are mostly based on x86 chip architectures, first developed decades ago. Mobile devices like smartphones typically use an ARM chip architecture. Less powerful 8-, 16-, and 24-bit microprocessors (called microcontrollers) are found in products such as toys and vehicles. We will address these architectures later in the Technology and production section as the first step of chip production.

Graphics processing units

A graphics processing unit (GPU), which is a type of microprocessor, renders the smooth graphics that most consumers expect in modern videos and games. GPU rendering is the use of a GPU to automatically generate two-dimensional or three-dimensional images from a model by means of computer programs.

A GPU can be used in combination with a CPU, where it can increase computer performance by taking over some of the more complex computations, such as rendering, from the CPU. This is a big improvement, since it accelerates how quickly applications can process data: the GPU can perform many calculations simultaneously. It also enables the development of more advanced software in fields such as machine learning and cryptocurrency mining.
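The performance idea above, applying the same operation to many data elements at once, can be conveyed with a small Python sketch. A real GPU has thousands of cores rather than a four-worker thread pool, and `shade_pixel` is a made-up stand-in for a rendering calculation, but the pattern is the same:

```python
# Data parallelism in miniature: the same per-pixel operation is applied
# sequentially (one CPU core) and in parallel (many GPU cores).

from concurrent.futures import ThreadPoolExecutor

def shade_pixel(p):
    """A stand-in for a per-pixel rendering calculation."""
    return (p * 3 + 1) % 256

pixels = list(range(8))

# Sequential: one pixel at a time, as a single CPU core would do it.
sequential = [shade_pixel(p) for p in pixels]

# Parallel: many pixels at once, as a GPU's many cores would do it.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(shade_pixel, pixels))

assert sequential == parallel  # identical result, computed concurrently
```

Because each pixel is independent of the others, the work divides cleanly across cores; this independence is what makes rendering, machine learning, and mining such good fits for GPUs.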

Application-specific integrated circuits

Application-specific integrated circuits (ASICs) are made for a specific purpose. They enable significant amounts of circuitry to be incorporated onto a single chip, decreasing the number of external components. They can be used in a wide range of applications, such as bitcoin mining, personal digital assistants, and environmental monitoring.

Systems-on-a-chip

The system-on-a-chip (SoC) is one of the newest types of IC chips, a single chip that contains all of the electronic components needed for an entire electronic or computer system. The capabilities of an SoC are more comprehensive than those of a microcontroller chip, because they almost always include a CPU with RAM, ROM, and input/output (I/O). The SoC may also integrate camera, graphics, and audio and video processing in a smartphone.


Technology and production

pic 4
Image by Ranjat M from Pixabay

Production phases

Most semiconductor companies focus on one of two main stages of production: manufacturing or design. Those focused solely on manufacturing/fabrication are called foundries (also known as fabs or semiconductor fabrication plants). Those focused on design are called fabless companies. Fabless companies such as Broadcom, Qualcomm, and HiSilicon (the in-house design firm of China’s Huawei) specialise in chip design and outsource fabrication, assembly, and packaging, contracting the Taiwan Semiconductor Manufacturing Company (TSMC) and others to fabricate for them. A third type of semiconductor company handles both manufacturing and design; these are called integrated device manufacturers (IDMs). Intel and Samsung are among the world’s biggest IDMs. Other semiconductor companies work on assembly and packaging, or on the manufacture of semiconductor equipment.

Process nodes and wafers

One term you might often notice when reading about chips is process node. This represents the standardised process used across a whole range of products. The semiconductor process is based on a set of steps to make an IC with transistors that have to meet certain levels of performance and size characteristics. Standardising the process allows faster production and improvement of these chips. Separate teams are not needed for each smaller group of products; the same solutions can be used for many products at the same time. This makes production more efficient. Creating a smaller process node means coming up with a new manufacturing process with smaller features and better tolerances by integrating new manufacturing technologies.

Process nodes are usually named with a number followed by the abbreviation for nanometer: 7nm, 10nm, 14nm, etc. Nowadays, there is no correlation between the name of the node and any feature of the CPU. TSMC’s vice president of corporate research, Dr Philip Wong, said of the node names: “It used to be the technology node, the node number, means something, some features on the wafer. Today, these numbers are just numbers. They’re like models in a car – it’s like BMW 5-series or Mazda 6. It doesn’t matter what the number is, it’s just a destination of the next technology, the name for it. So, let’s not confuse ourselves with the name of the node with what the technology actually offers.”

Another term you might run into is wafer: a thin slice of semiconductor, such as crystalline silicon, used for the fabrication of ICs. The larger the wafer, the more chips can be placed on it.

pic 5
Picture of a wafer by Samuel Faber from Pixabay
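A commonly used first-order estimate shows why wafer size and die size matter economically. The sketch below assumes a simple circular-wafer model, discounting partial dies lost at the edge, and ignores real-world factors such as yield and scribe lines:

```python
# First-order dies-per-wafer estimate: gross dies by area, minus an
# edge-loss term for partial dies around the wafer's circumference.

import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Approximate number of whole dies that fit on a circular wafer."""
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius ** 2 / die_area_mm2                    # by area alone
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

# A 300 mm wafer holds well over twice as many 100 mm^2 dies as a 200 mm wafer:
print(dies_per_wafer(300, 100))   # 640
print(dies_per_wafer(200, 100))   # 269
```

Doubling a die's area roughly halves the count while the edge losses grow, which is why shrinking the process node feeds directly into profit per wafer.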

Investment in producing semiconductor chips

A main goal in producing semiconductor chips is to make them as small as possible. With a smaller process node, chips are smaller and more of them fit on a wafer, which results in higher profit. However, transistors are physical objects; there is a physical limit to how small they can be.

The history of semiconductor chips

In 1965, Gordon Moore, the co-founder of Fairchild Semiconductor International Inc. and Intel (and former CEO of the latter), predicted that manufacturers would go from 65 to 65k transistors per processor in the next 10 years. Moore’s predictions of the exponential growth trajectory that the industry was on were captured in Moore’s Law, which states that the number of transistors in a dense IC doubles about every two years. The Law not only predicted increasing computer power; it was also a self-fulfilling prophecy. The improvement in semiconductors over the years attracted more investment in production, materials, and manpower, which in turn brought a lot of profit.
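The arithmetic behind these forecasts is plain exponential doubling. The sketch below reproduces Moore's original ten-year projection (which assumed annual doubling, giving the 65 to ~65k jump) alongside the later two-year formulation of the Law:

```python
# Moore's Law as arithmetic: transistor count doubles every
# `doubling_period` years from a given starting point.

def transistors(start_count, start_year, year, doubling_period=1):
    """Projected transistor count under a fixed doubling period."""
    doublings = (year - start_year) // doubling_period
    return start_count * 2 ** doublings

# Moore's original 1965 forecast assumed roughly annual doubling:
print(transistors(65, 1965, 1975))                      # 65 * 2**10 = 66,560 (~65k)

# The Law as later restated: doubling about every two years.
print(transistors(65, 1965, 1975, doubling_period=2))   # 65 * 2**5 = 2,080
```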


Steps in the chip production

chart 1
Chart of chip production split

Instruction Set Architecture 

As a first step, how the processor will perform its most basic instructions – for example, doing calculations or accessing memory – is defined. The Instruction Set Architecture (ISA) acts as an interface between the hardware and the software, specifying both what the processor is capable of doing and how it gets done. The main goal is to turn this model of how the CPU is controlled by the software into an industry standard. This allows processors and operating systems from multiple companies to follow the same standard and become interoperable.

For example, Windows, macOS, and Linux can run on a variety of Intel and AMD chips through the power of x86. The x86 ISA family was developed by Intel; it is the world’s predominant hardware platform for laptops, desktops, and servers. Mobile phones mostly use Arm chips. Arm is a reduced instruction set computing (RISC) architecture developed by the British company Arm Limited. Some companies, such as Samsung and Huawei, create their own chips. Intel and AMD own most of the x86 intellectual property and only licence their ISA to a single active competitor: VIA Technologies Inc., in Taiwan. Moving a complex operating system to a new ISA would take a lot of time.
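The idea of an ISA as a contract between hardware and software can be illustrated with a toy interpreter. The three-instruction set below (LOAD, ADD, STORE) is entirely hypothetical and vastly simpler than x86 or Arm, but it shows the principle: any "processor" implementing the same instruction set can run the same program unchanged.

```python
# Toy ISA sketch: the "software" is a list of instructions; the function
# below plays the role of a processor implementing the instruction set.

def run(program):
    """Execute a program against a tiny register file and memory."""
    registers = {"r0": 0, "r1": 0}
    memory = {}
    for op, *args in program:
        if op == "LOAD":          # LOAD reg, value  -> put a constant in a register
            registers[args[0]] = args[1]
        elif op == "ADD":         # ADD dest, src    -> dest += src
            registers[args[0]] += registers[args[1]]
        elif op == "STORE":       # STORE reg, addr  -> write register to memory
            memory[args[1]] = registers[args[0]]
    return memory

program = [
    ("LOAD", "r0", 2),
    ("LOAD", "r1", 3),
    ("ADD", "r0", "r1"),
    ("STORE", "r0", 0x10),
]
print(run(program))   # {16: 5}
```

A second, differently built interpreter honouring the same instruction definitions would produce the same memory state, which is exactly the interoperability an industry-standard ISA buys.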

Chip design

Today, circuit diagrams are created by companies. Some of them, like Intel and Samsung, manufacture what they design. However, most are fabless companies; they outsource the manufacturing to foundries. This allows them to focus only on the design part, while other parts of the process are left to other players. In addition, a lot of companies that use specialised chips now design their own, so that they don’t have to rely on Intel, for example, to create a chip that suits their needs. Examples include Apple, Samsung, and Huawei designing chips for their phones; Google for its AI framework TensorFlow; Microsoft and Amazon for their data centres.

Fabrication

In this step, the goal is to get the design onto silicon wafers. This is a complex process that also requires a lot of capital. It is extremely expensive to produce chips: manufacturers have to spend around 30%–50% of their revenue on capital expenditures, compared to the 3%–5% designers spend. In most cases fabrication is done by foundries, such as TSMC. As leaders in the field, foundries can use their machines across multiple production runs for different kinds of chips. Most competitors gave up on trying to compete with TSMC, since it didn’t make sense economically.

Some of the world leaders have decided to split their design and fabrication businesses, such as Samsung and Samsung Foundry, and AMD and GlobalFoundries. Even Intel might start outsourcing some of its manufacturing to an external foundry.

Equipment and software

Custom equipment and software are required for each chip. For example, extreme ultraviolet (EUV) lithography machines are required for lithography, the step in which the design is transferred onto the silicon wafer. The Dutch company ASML is the sole producer of high-end EUV machines. ASML CEO Peter Wennink said the company had sold a total of about 140 EUV systems over the past decade, each one now costing up to $200 million. TSMC buys around half of the machines ASML produces. This is just one example of a monopoly in the production of equipment.

Packaging and testing

Silicon wafers are cut up into individual chips, wires and connectors are attached, and the chips are put into protective housings. They’re tested for quality before being distributed and sold.

The future of semiconductor chips

As semiconductors get increasingly complex, it will be more and more expensive to compete in this space, creating a further concentration of power, which in turn creates economic and political tensions. Other factors, such as experiments with new materials for semiconductors, changes in the prices of metal materials, and the increase in development of new technologies in artificial intelligence (AI), internet of things (IoT), and similar fields will affect future sales and add new challenges and opportunities.


Supply chain disruption

The supply chain issue focuses on the ongoing global chip shortage, which started in 2020. The issue is simple: demand for ICs is greater than supply. Many companies and governments are searching for a solution to accelerate chip production. As a consequence of the supply chain disruption, prices for electrical devices have increased; production times are longer; and devices such as graphics cards, computers, video game consoles and gear, and automobiles are in short supply. 

Trend to fabless companies

The move towards the fabless model helped complicate the chip shortage. More and more major semiconductor companies are adopting the fabless model and outsourcing production to major manufacturers like TSMC and Samsung. For example, Intel has talked with Samsung and TSMC about outsourcing some chip production to them.

In 2020, the USA had captured 47% of the global market share of semiconductor sales, but only 12% of manufacturing, according to the Semiconductor Industry Association. The country has put semiconductors at the top of its diplomatic agenda as it tries to work out export policies with its partners. 

How COVID-19 created a global computer chip shortage

The global pandemic had a major influence on the chip shortage. COVID-19 forced people to work and do everything from home, which for most of us meant we needed to upgrade our computers, get better speakers and cameras, make home theatres, and play a lot of video games.

Most businesses struggled to set up remote work systems, and there was an increased need for cloud infrastructure. All this together, along with the pause in production during the lockdowns, caused a massive supply chain disruption for electronics companies. Some governments are now increasing their investments in this industry, so they can hopefully lessen the impact of the disruption.

The semiconductor production process is very complex. Typically, the lead time is over four months for products that are already established. Trying to switch to a new manufacturer can take over a year, specifically since chip designs need to match the manufacturer’s ability to produce those designs and make them function on a high level.

pic 6
Semiconductor development and production timelines by McKinsey & Company

How the car industry contributed to the chip shortage

Cars are getting more advanced each year and need more semiconductors: advanced semiconductors to run increasingly complex in-vehicle computer systems, and older, less advanced semiconductors for things like power steering.

During the pandemic, the auto supply chain was disrupted. Cars require custom chips, which are commissioned by automakers. Chips for phones are not in short supply on the same level, because phones are designed around standardised chips. Car manufacturers use custom components to prevent aftermarket profits for third parties. The lead time to build standard semiconductors is about six months; the lead time for custom chips is two to three years. The pause in production created massive delays for many vehicles.

For instance, car manufacturers cut chip orders in early 2020 as vehicle sales decreased. After sales recovered, demand for chips increased even more than expected in the second half of 2020, which meant chip manufacturers had to push production schedules back even further.

How the China–US trade war contributed to the chip shortage

At the base of this conflict stand two competing economic systems. The USA imports more from China than from any other country, and China is one of the largest export markets for US goods and services. Since September 2020, when the USA imposed restrictions on China’s Semiconductor Manufacturing International Corporation (SMIC), China’s largest chip manufacturer, it has been harder for SMIC to sell to companies that cooperate with the USA. Consequently, TSMC and Samsung chips were used more, creating an issue for those companies as they were already producing at maximum capacity.


Geopolitics

Map legend (countries by role in the semiconductor supply chain):

  • Only IDMs
  • Only fabrication
  • Fabrication and IDMs
  • Design and fabrication
  • All three (design, fabrication, and IDMs)
  • None of the three options

In the past couple of years, semiconductors have become a geopolitical issue. The strategic technology of semiconductors is not only the foundation of modern electronics, but also of the international economic balance of power. Production and supply are organised along a transnational supply chain distributed across the world, with multiple countries specialising in particular parts of the production chain.

Since the supply chain is so internationally distributed, there has been an increase in patent infringement lawsuits in this field, as well as lawsuits alleging misappropriation of intellectual property, such as GlobalFoundries seeking orders to prevent semiconductors produced by Taiwan-based TSMC with the allegedly infringing technology from being imported into the USA and Germany.

Policy measures for cooperation have been proposed, such as the EU Commission's proposed CHIPS Act to confront semiconductor shortages and strengthen Europe's technological leadership, and the World Semiconductor Council (WSC) series of policy proposals to strengthen the industry through greater international cooperation. Creating global policies and regulations that respect the national legal frameworks of each actor in the semiconductor industry is not easy to achieve, but there is a trend towards international cooperation, with policies being put in place.

pic 7
Image by Gerd Altmann from Pixabay

China 

China’s role in this supply chain is that of a vast consumer of semiconductors, importing a sizable percentage of the chips it uses. China still cannot meet its semiconductor needs domestically. However, it is working on building its own chain of production and wants to move to higher-value production.

The manufacturing industry is built on semiconductors. China uses them across a variety of electronic manufacturing sectors. Thus, any action taken against China in the semiconductor industry disrupts the production chain in many other sectors. For example, US export controls directed at Huawei have had a significant effect on the global smartphone market: they undermined Huawei’s capacity to deliver cutting-edge consumer devices, which cut its market share compared to its competitors, but also acted as a stimulus to the industry. Beijing has made chips a top priority in its next Five-Year Plan and will invest $1.4 trillion to develop the industry by 2025. In 2020, the country invested 407% more than the previous year. Its main goal is semiconductor independence, with 77% of the chips used in China coming from China.

USA 

The USA is home to most chip design companies, such as Qualcomm, Broadcom, Nvidia, and AMD. However, these companies increasingly have to rely on foreign companies for manufacturing. 

“We definitely believe there should be fabs of TSMC [and] Samsung being built in America, but we also believe the CHIPS Act should be preferential for U.S. IP [intellectual property] and U.S. companies like Intel,” said Patrick P. Gelsinger, CEO of Intel, in an online interview hosted by the Washington-based Atlantic Council think tank on 10 January 2022.

In 2022, the USA announced an investment of more than $20 billion to build two new chip plants in the state of Ohio. Construction is set to begin in late 2022, with production predicted to come onstream in 2025.

During the pandemic, the Biden administration presented its plans to end the supply chain crisis by restructuring the supply chain, starting with moving the production of certain elements to the USA. The USA will work more closely with trusted friends and partners, nations that share US values, so that its supply chain cannot be used as leverage against the country.

The US administration needs to look at both national and economic security. A thriving US semiconductor industry means a strong American economy, high-paying jobs, and a national ripple effect, such as the impact on transportation, with new vehicles increasingly relying on chips for safety and fuel efficiency.

Taiwan

Taiwan-based TSMC has a huge role in the global semiconductor supply chain. As the number one chip manufacturer, it has been building its market dominance for years. TSMC has set such a high standard for chip production that it will take a long time for a competitor to reach its level.

Taiwan doesn’t have the same trade issues that China has, since it cooperates with many countries. For example, TSMC committed to building a $12 billion fabrication plant in Arizona, USA, to start producing 5nm chips by 2024 (not 3nm, which will be the cutting edge then produced in Taiwan).

Building has begun; TSMC is hiring US engineers and sending them to Taiwan for training, although, according to Taiwan’s Minister of the National Development Council Ming-Hsin Kung, the pace of construction depends on Congress approving federal subsidies.

Taiwan is preparing to introduce tougher laws to protect the semiconductor industry from Chinese industrial espionage. “High-tech industry is the lifeline of Taiwan. However, the infiltration of the Chinese supply chain into Taiwan has become serious in recent years. They are luring away high-tech talent, stealing national critical technologies, circumventing Taiwan’s regulations, operating in Taiwan without approval and unlawfully investing in Taiwan, which is causing harm to Taiwan’s information technology security as well as the industry’s competitiveness,” said Lo Ping-cheng, Minister without Portfolio and Spokesperson for the Executive Yuan.

The overreliance on a single Taiwanese chip fabrication company carries supply chain risks for the broader semiconductor industry.

In February of 2022, Taiwanโ€™s Economy Minister Wang Mei-hua emphasised: “Taiwan will continue to be a trusted partner of the global semiconductor industry and help stabilise supply chain resilience.” The statement said that Taiwan has “tried its best” to help the EU and other partners resolve a global shortage of chips. TSMC has said it was still in the very early stages of assessing a potential manufacturing plant in Europe, as they are currently focusing on building chip factories in the USA.

South Korea 

Samsung is a major manufacturer of electronic components, semiconductors among them. Although the company is catching up with TSMC, that still leaves only two companies able to manufacture chips at the cutting edge of technology, and this situation will be difficult to change anytime soon. In 2021, Samsung held a 12% share of the global semiconductor market. In addition to exporting semiconductors to countries such as the USA and China, Samsung uses its own semiconductors in its other products and sells them to technology companies in South Korea.

South Korea has its own version of the CHIPS Act, offering state support to a domestic chip industry currently led by Samsung and SK Hynix. Unlike other global actors, South Korea's new chip law does not specify quantitative targets for how much the government's plans would cost or what their effect would be on economic growth and job creation. Under the law, South Korea's large corporations will receive a 6%–10% tax break for facility investment and a 30%–40% tax credit for research and development, while smaller companies will enjoy an even larger degree of tax relief.

The EU 

The semiconductor shortage, which left European companies without components and in some cases forced them to close, has prompted the EU to take steps towards doubling its chip manufacturing output to 20% of the global market by 2030. Security, energy efficiency, and the green transition are additional goals it is focusing on. The new Digital Compass Plan will fund various high-tech initiatives to boost digital sovereignty.

Emerging market opportunities, such as AI, edge computing, and digital transformation, are generating strong demand for chip production. New AI needs will bring new production models and collaboration. The EU's strengths are R&D, manufacturing equipment, and raw materials; its weaknesses lie in semiconductor IP and digital design, design tools, manufacturing, and packaging.

This is why the EU needs its own CHIPS Act. Its goals are to strengthen the EU's research and technology leadership; build and reinforce its capacity to innovate in the design, manufacture, and packaging of advanced chips; put in place an adequate framework to substantially increase production capacity by 2030; address the acute skills shortage; and develop an in-depth understanding of global semiconductor supply chains.

Three pillars of the CHIPS Act (per the European Semiconductor Board):

  1. Chips for Europe Initiative – infrastructure building in synergy with the EU's research programmes; support for start-ups and small and medium-sized enterprises (SMEs).
  2. Security of Supply – first-of-a-kind semiconductor production facilities.
  3. Monitoring and Crisis Response – monitoring and alerting, a crisis coordination mechanism with member states, and strong Commission powers in times of crisis.

Member states are enthusiastic about the initiative: they all understand its importance and feel the effects of the shortage. They have already begun working on these problems in an expert group, looking for possible solutions to such challenges.

The next steps

The prospect of other European or American firms offering comparable manufacturing within the next five years is unrealistic. The EU is promising to invest €43 billion by 2030, whereas TSMC will spend over $40 billion on capital expenditure in 2022 alone, and Samsung will try to match it. This gap in investment further underlines how hard it will be to catch up with the leaders of chip production.

U.S. Access Board calls for public comment on accessibility guidelines for SSTMs

Self-service transaction machines (SSTMs) and kiosks are a common feature in places of public accommodation, government offices, and other buildings and facilities. They typically have touchscreen interfaces with on-screen buttons or a keyboard. Without a physical keypad or other tactile controls, these machines are unusable by many people who are blind or have low vision if the information is not also provided audibly. They also frequently lack captioning and text equivalents for audible information.

The U.S. Access Board has issued an advance notice of proposed rulemaking (ANPRM) on supplemental accessibility guidelines for different types of SSTMs, including electronic self-service kiosks, for persons with disabilities.

The Board seeks comments on the accessibility of the various types of SSTMs, their use and design, their location, and the economic impact on small businesses, non-profits, and governmental entities of implementing accessible SSTMs.

Greek PM attends the presentation of a new e-portal for persons with disabilities

During the presentation of the national portal for persons with disabilities and the Centers for Certification of Disabilities (KEPA), Prime Minister Kyriakos Mitsotakis, referring to the evaluation system that requires periodic examinations of persons with disabilities, asked: ‘Why must someone go through repeated examinations for a problem that will accompany them for the rest of their lives? Why could we not call on digital technology to simplify the lives of fellow citizens we should care for more than others and prioritise serving them? How is it possible to summon them to a service dedicated to people with disabilities and have these services be inaccessible?’ These e-services will solve the problem of physical access and exhausting in-person procedures.
The premier also assured persons with disabilities and their families that this would not be a temporary help, but a policy with guaranteed funding from several sources.
In his address regarding the digital services, Labor & Social Affairs Minister Kostis Chatzidakis said: ‘The multiple visits to KEPAs are abolished. Up to now, an individual had to show up three or four times at a KEPA to certify their disability. Now, only one visit is required, during which they will be examined by a doctors’ committee. The rest will take place digitally.’

Judgments will be made available in a free text search portal with accessibility features: Justice Chandrachud

While speaking at the third Professor Shamnad Basheer Memorial Lecture, organised by LiveLaw, Justice DY Chandrachud said that the Supreme Court will join the National Judicial Data Grid in the near future and that all of its decisions will be made available in a free text search portal. Those judgments will have accessibility features built into them for the easy access of persons with disabilities.

Delivering the lecture on the topic Making Disability Rights Real: Addressing accessibility and more, Justice Chandrachud said that the e-Committee of the Supreme Court has been making efforts to make the digital infrastructure of the judicial system more accessible to persons with disabilities.

‘We have introduced audio-captchas on the Supreme Court as well as High Court websites to ensure that visually impaired professionals face no hindrance in looking up the cause list or the case status. We have also ensured that case files are readable and screen-reader-friendly to make them accessible to persons with disabilities. The e-Committee, in collaboration with the National Informatics Centre (NIC), has also created a judgment search portal accessible to persons with disabilities. Over seventy-five lakh judgments of the High Courts will be freely available for access. The visually challenged will not have to confront the unwillingness of private software developers to accommodate their needs,’ Justice Chandrachud said.

Freedom of speech is non-negotiable, says Vice President

During the inauguration of the third edition of Lokmanthan, Vice President Jagdeep Dhankhar said: ‘The freedom of expression in India is non-negotiable, and we cannot compromise on it. If we depart from this, we will compromise the sovereignty and wholesomeness of the country.’ He expressed regret that matters are being discussed on the streets instead of on the floor of the assembly.
While addressing the media, he said: ‘Another concerning trend that needs to be curtailed is the problem of pseudo-intellectuals. Can public space be allowed to be dominated by this category of people with the help of media-created eclipses?’

84 IT projects to support persons with disabilities deployed across Nigeria by the NCC

Between 2012 and 2020, the Nigerian Communications Commission (NCC) deployed 84 assistive information technology projects across the country. The E-Accessibility project seeks to meet the ICT needs of persons with disabilities in Nigeria by providing ICT tools, assistive technologies, training, and internet access in the identified locations. This was disclosed by the Executive Vice Chairman (EVC) of the Commission, Prof. Umar Danbatta, during a courtesy visit to the Commission's offices by a delegation from the National Commission for Persons with Disabilities (NCPWD).
In his remarks, the NCPWD's Executive Secretary, James Lalu, said the purpose of the agency's visit was to keep the NCC management abreast of its mandates and activities, and to seek greater collaboration for the benefit of the estimated 35.5 million persons with disabilities in Nigeria. ‘What we want to achieve is to make Nigeria a country that is comfortable for PLWD by ending discrimination and providing an adequate reporting system, and we have seen NCC as a strategic and important partner in this journey,’ he said.