Tallinn University of Technology leads European consortium in establishing AI safety center

With Estonia’s expertise in digital governance and a commitment to innovation, the center aims to address AI safety concerns by focusing on correctness, security, and ethical deployment.


A consortium of leading European universities, led by Tallinn University of Technology, is set to establish the Estonian Center for Safe and Trustworthy AI (ECSTAI), positioning itself at the forefront of the European conversation on AI safety. Estonia's experience in digital governance and its commitment to technological advancement are seen as advantages for hosting a centre dedicated to AI safety. ECSTAI's vision rests on three pillars: correctness, security, and ethical deployment, with the aim of establishing a new research discipline of AI safety engineering.

A recent seminar in Brussels, moderated by a former Estonian official, highlighted the need to balance innovation with ethical concerns in AI development. Panellists stressed the importance of combining innovation, regulation, and ethics, and of involving academia in developing best practices for Europe's tech ecosystem. They agreed that building trust in society is essential if AI is to deliver human-centric benefits rather than favouring a select elite.

Why does it matter?

As AI continues to evolve, safety hubs like ECSTAI are crucial to developing best practices. The centre's establishment signals a proactive effort to align AI advances in Europe with ethical principles and societal values. The EU has already demonstrated its regulatory prowess through initiatives such as the AI Act and the GDPR; these initiatives underline why the centre should concentrate on the governance aspects of AI while prioritising the cultivation of an innovation ecosystem within the EU.