On 14 June 2020, the public consultation on the European Commission's White Paper on Artificial Intelligence closed. The White Paper, released in February 2020, outlines a regulatory framework for high-risk AI (see the Observatory article). Here we present the key points of the opinion contributed by Panoptykon Foundation, a Warsaw-based NGO whose mission is to protect fundamental rights in the context of surveillance and fast-changing information technologies.
1. The EU regulatory framework should cover *all* AI systems that are applied to humans and/or may affect them.
2. Instead of allowing all AI applications by default, the EU should set clear rules on which uses of AI are permitted and which are not, including explicit bans.
3. Impact on human rights should be assessed for every AI application (not only those in pre-defined sectors), publicly disclosed and, in the case of high risk, subject to review. We propose a "GDPR+" impact assessment regime.
4. People should know when they are dealing with AI. Deployers of AI should provide a general explanation of its logic, purpose, and potential risks, as is already the case with processed food and pharmaceuticals.
5. Whenever AI affects people in any way, they should be offered individual explanations, regardless of whether human oversight was involved, how significant the impact was, or whether personal data was processed and individuals can be singled out.
Read the whole text:
Call for opinions: organisations that contributed to the open consultation on the White Paper are invited to share their opinions with the Observatory. To share an opinion, please contact the Observatory team: Atia Cortés (firstname.lastname@example.org), Teresa Scantamburlo (email@example.com), Francesca Foffano (firstname.lastname@example.org).
- #White Paper
- #European Union
- #Panoptykon Foundation
- #open call