Over 50 civil society organisations have sent a letter to the European Commission demanding that clear regulatory red lines be introduced in the Commission's upcoming legislative proposal on artificial intelligence (AI) to prevent uses of AI that violate fundamental rights. The signatories argue that, without appropriate limits on the use of AI-based technologies, governments and companies risk violating human rights and freedoms. In their view, the upcoming proposal should establish clear limits on what can be considered lawful uses of AI, in particular uses that lead to:

(a) biometric mass surveillance and the monitoring of public spaces;
(b) the exacerbation of structural discrimination, exclusion and collective harms;
(c) the manipulation or control of human behaviour, with associated threats to human dignity, agency and collective democracy.

The civil society groups call for an explicit ban on the indiscriminate or arbitrarily targeted use of biometrics in public or publicly accessible spaces, which can lead to mass surveillance. They also argue that legal restrictions or legislative red lines are needed to cover AI uses that contravene fundamental rights, including, inter alia, uses of AI at the border, predictive policing, systems that restrict access to social rights and benefits, and risk-assessment tools in the criminal justice context. Finally, they demand the explicit inclusion of marginalised and affected communities in the development of EU AI legislation and policy.