Consultation on the White Paper on Artificial Intelligence

2020

This page contains the questions from the European Commission's public consultation on its White Paper on Artificial Intelligence. It is intended as a working document for Wikimedians to collaboratively draft Wikimedia's answers to this legislative initiative of the EU.

The EU's survey will remain open until 14 June 2020, but we will only take into account input added here by 31 May 2020.

Contribution


The following documents were submitted by the FKAGEU (Free Knowledge Advocacy Group EU):
1. Survey answers
2. A paper on IPR and AI
3. A call for "public money, public code" within AI

Introduction


Artificial intelligence (AI) is a strategic technology that offers many benefits for citizens and the economy. It will change our lives by improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, improving the efficiency of production systems through predictive maintenance, increasing the security of Europeans and the protection of workers, and in many other ways that we can only begin to imagine.

At the same time, AI entails a number of potential risks, such as risks to safety, gender-based or other kinds of discrimination, opaque decision-making, or intrusion in our private lives.

The European approach for AI aims to promote Europe’s innovation capacity in the area of AI while supporting the development and uptake of ethical and trustworthy AI across the EU. According to this approach, AI should work for people and be a force for good in society.

For Europe to seize fully the opportunities that AI offers, it must develop and reinforce the necessary industrial and technological capacities. As set out in the accompanying European strategy for data, this also requires measures that will enable the EU to become a global hub for data.

Consultation


The current public consultation accompanies the White Paper on Artificial Intelligence - A European Approach, which aims to foster a European ecosystem of excellence and trust in AI, and a Report on the safety and liability aspects of AI. The White Paper proposes:

  • Measures that will streamline research, foster collaboration between Member States and increase investment into AI development and deployment;
  • Policy options for a future EU regulatory framework that would determine the types of legal requirements that would apply to relevant actors, with a particular focus on high-risk applications.

This consultation enables all European citizens, Member States and relevant stakeholders (including civil society, industry and academics) to provide their opinion on the White Paper and contribute to a European approach for AI. To this end, the following questionnaire is divided into three sections:

  • Section 1 refers to the specific actions proposed in Chapter 4 of the White Paper for building an ecosystem of excellence that can support the development and uptake of AI across the EU economy and public administration;
  • Section 2 refers to a series of options for a regulatory framework for AI, set out in Chapter 5 of the White Paper;
  • Section 3 refers to the Report on the safety and liability aspects of AI.

Other initiatives


A number of other EU initiatives address AI.

One of them is the own-initiative (i.e. non-legislative) report by the European Parliament, Framework of ethical aspects of artificial intelligence, robotics and related technologies (2020-04-21 draft); the IMCO committee held a meeting on it on 2020-05-18.

Section 1 - An ecosystem of excellence


To build an ecosystem of excellence that can support the development and uptake of AI across the EU economy, the White Paper proposes a series of actions.

In your opinion, how important are the six actions proposed in section 4 of the White Paper on AI?


(1-5: 1 is not important at all, 5 is very important)

  • Working with Member States
  • Focusing the efforts of the research and innovation community
  • Skills
  • Focus on SMEs
  • Partnership with the private sector
  • Promoting the adoption of AI by the public sector

Are there other actions that should be considered?

Comments
  • The public sector should use open source and open data exclusively and promote them. The same should apply to all research and start-ups funded by public money.
  • "Focusing the efforts of the research and innovation community" and "Promoting the adoption of AI by the public sector" (together with the promotion of free content and the release of algorithms, to be specified in further actions) should be emphasised by the Wiki community. --Mattia Luigi Nappi (talk)

  • Working with Member States - 4 The Member States need to be involved in order to have a shared idea of what is included under this broad notion of AI and the possibilities for their local organisations.
  • Focusing the efforts of the research and innovation community - 3 While research is important, most of it is already published publicly, even when done by companies like Google and Facebook.
  • Skills - 5 More important than research is the ability to apply the tools that are readily available in a responsible way. In my opinion, this is the most important part.
  • Focus on SMEs - 2 While it might be nice for SMEs to have AI, I think most of them are not ready for it and would be best served by premade tools that lower the amount of skill needed, rather than by a focused approach for them to implement it themselves.
  • Partnership with the private sector - 4 I think that the private sector has more need and use cases for AI, so cooperation is needed.
  • Promoting the adoption of AI by the public sector - 1 I think the public sector should be very conservative in its adoption of AI. In the Netherlands our tax agency has already had lawsuits concerning discrimination based on nationality brought against it. -- Robin Verhoef (talk)

  • ...

Revising the Coordinated Plan on AI (Action 1)


The Commission, taking into account the results of the public consultation on the White Paper, will propose to Member States a revision of the Coordinated Plan, to be adopted by the end of 2020.

In your opinion, how important is it in each of these areas to align policies and strengthen coordination as described in section 4.A of the White Paper?


(1-5: 1 is not important at all, 5 is very important)

  • Strengthen excellence in research
  • Establish world-reference testing facilities for AI
  • Promote the uptake of AI by business and the public sector
  • Increase the financing for start-ups innovating in AI
  • Develop skills for AI and adapt existing training programmes
  • Build up the European data space

Are there any other actions to strengthen the research and innovation community that should be given a priority?

Comments
  • The European Commission can help by promoting the best practices in the field of AI, i.e. the projects based entirely on free software and open data, which are instrumental for open science, open government and more generally for transparency, citizen empowerment and human rights. We can provide examples of best practices like Wikimedia software, Moses MT, Tesseract, GROBID and others useful for the promotion of culture. Nemo 08:41, 18 May 2020 (UTC)

---

  • Strengthen excellence in research - 5
  • Establish world-reference testing facilities for AI - 3
  • Promote the uptake of AI by business and the public sector - 2 AI should not be a solution in search of a problem, so I think promotion is not worth the effort.
  • Increase the financing for start-ups innovating in AI - 2 I believe that most AI start-ups do not have the capability to live up to the responsible-AI promise, and that there is enough private funding available for them.
  • Develop skills for AI and adapt existing training programmes - 4 A lack of skilled people is already a problem now so more training is a good idea.
  • Build up the European data space - 2 While creating a European data space for academic and public data is a good idea, this is already happening and I think that hoping for the sharing of private company data based on this is not viable.

Robin Verhoef

Focusing on Small and Medium Enterprises (SMEs)


The Commission will work with Member States to ensure that at least one digital innovation hub per Member State has a high degree of specialisation on AI.

In your opinion, how important are each of these tasks of the specialised Digital Innovation Hubs mentioned in section 4.D of the White Paper in relation to SMEs?


(1-5: 1 is not important at all, 5 is very important)

  • Help to raise SMEs' awareness about potential benefits of AI
  • Provide access to testing and reference facilities
  • Promote knowledge transfer and support the development of AI expertise for SMEs
  • Support partnerships between SMEs, larger enterprises and academia around AI projects
  • Provide information about equity financing for AI startups

Are there any other tasks that you consider important for specialised Digital Innovation Hubs?

Comments
  • It is important to underline that the commercial side of the private sector should not be favoured over non-profit associations (for example, "partnerships between SMEs, larger enterprises and academia around AI projects" does not consider non-profit organisations such as Wikimedia). --Mattia Luigi Nappi (talk)

  • ...

Section 2 - An ecosystem of trust


Chapter 5 of the White Paper sets out options for a regulatory framework for AI.

In your opinion, how important are the following concerns about AI?


(1-5: 1 is not important at all, 5 is very important)

  • AI may endanger safety
  • AI may breach fundamental rights (such as human dignity, privacy, data protection, freedom of expression, workers' rights etc.)
  • The use of AI may lead to discriminatory outcomes
  • AI may take actions for which the rationale cannot be explained
  • AI may make it more difficult for persons having suffered harm to obtain compensation
  • AI is not always accurate

Do you have any other concerns about AI that are not mentioned above? Please specify:

Comments
  1. AI may endanger safety (4 - and I would say that discrimination endangers folks' safety, whether by denying people jobs, housing or other opportunities that keep people safe and healthy)
  2. AI may breach fundamental rights (such as human dignity, privacy, data protection, freedom of expression, workers' rights etc.) (5 - and the right to be free from discrimination ought to be considered a fundamental right)
  3. The use of AI may lead to discriminatory outcomes (5)
  4. AI may take actions for which the rationale cannot be explained (3 - a lot of folks prioritize the right of explanation, but I'd rather have an algorithm that isn't discriminatory (or that isn't used at all if it is) than one whose biases can merely be explained.)
  5. AI may make it more difficult for persons having suffered harm to obtain compensation (4)
  6. AI is not always accurate (4)
  • Bad use of AI may endanger fundamental rights and may lead to discriminatory outcomes. Furthermore, it can place excessive responsibility on internet users who publish data containing information about other people (panorama photos containing images of people, etc.). --Mattia Luigi Nappi (talk)
  • Citizens should never be subjected to any state power making use of closed source or closed data AI. Any usage of secret/unpublished methods and data to limit a citizen's freedoms would infringe articles 20, 21, 41, 47 and 49 of the Charter of Fundamental Rights of the European Union. Nemo 08:42, 18 May 2020 (UTC)
    • This is partly reflected in the IMCO draft report: «7. Stresses that where public money contributes to the development or implementation of an algorithmic system, the code, the generated data -as far as it is non-personal- and the trained model should be public by default»; «access to data should be extended to appropriate parties notably independent researchers, media and civil society organisations»; «Notes that it is essential for the software documentation, the algorithms and data sets used to be fully accessible to market surveillance authorities, while respecting Union law; invites the Commission to assess if additional prerogatives should be given to market surveillance authorities in this respect;» Nemo 19:45, 18 May 2020 (UTC)
  • ...

Do you think that the concerns expressed above can be addressed by applicable EU legislation? If not, do you think that there should be specific new rules for AI systems?

  • Current legislation is fully sufficient
  • Current legislation may have some gaps
  • There is a need for new legislation
  • Other
  • No opinion
Comments
  • Fundamental rights must take precedence over laws of lesser rank, such as the database rights directive. Nemo 08:42, 18 May 2020 (UTC)
  • Some applications of AI systems ought to be banned, with face recognition used by law enforcement commonly identified among those candidates.

If you think that new rules are necessary for AI systems, do you agree that the introduction of new compulsory requirements should be limited to high-risk applications (where the possible harm caused by the AI system is particularly high)?

  • Yes
  • No
  • Other
  • No opinion

If you wish, please indicate the AI application or use that is most concerning (“high-risk”) from your perspective:

Comments
  • New rules are necessary, but it is probably far too early to define "high-risk" now. It is a question we will need to answer in a few years' time. We should start by applying the rules we already have, like the GDPR and rules on liability and product safety. If there are gaps, we should strive to create general rules. AI applications may become high-risk over time or depending on the context in which they are employed.

In your opinion, how important are the following mandatory requirements of a possible future regulatory framework for AI (as set out in section 5.D of the White Paper)?


(1-5: 1 is not important at all, 5 is very important)

  • The quality of training data sets
  • The keeping of records and data
  • Information on the purpose and the nature of AI systems
  • Robustness and accuracy of AI systems
  • Human oversight
  • Clear liability and safety rules
Comments
  1. The quality of training data sets (5 - the quality of training data is one of the outsized, hidden drivers behind biased algorithms. However, perfect data for an oppressive AI system should not be the goal, and some applications of AI systems ought to be off limits.)
  2. The keeping of records and data (3/4)
  3. Information on the purpose and the nature of AI systems (5 - computer scientists and engineers have a right to know what they're building, especially if an AI system can be used for benign as well as insidious purposes. This information should include not only details about training data, such as whether the data was collected consensually and knowingly, and the purpose of the algorithm, including dual-use applications that may be introduced down the line, but also information on the downstream buyers of AI systems.)
  4. Robustness and accuracy of AI systems (4 - with the caveat that a perfect system used to oppress should never be allowed)
  5. Human oversight (4 - humans are also significant introducers of biases into AI systems. This comes with the caveat that the humans involved in oversight ought to be invested in equity and justice, and the humans in judgment ought to reflect the full range of human experiences - not just those of the men who predominantly build AI systems.)
  6. Clear liability and safety rules (4 - with a slight reframing toward clear justice rules. Face recognition algorithms don't run afoul of liability or safety rules in many jurisdictions but, if used by certain people or agencies, can have significant implications for justice. To the extent that new laws are needed, those laws should focus on how to create a more just landscape, not just one that's easy for industry to navigate without internalizing the consequences of unjust AI systems.)

In addition to the existing EU legislation, in particular the data protection framework, including the General Data Protection Regulation and the Law Enforcement Directive, or, where relevant, the new possibly mandatory requirements foreseen above (see question above), do you think that the use of remote biometric identification systems (e.g. face recognition) and other technologies which may be used in public spaces needs to be subject to further EU-level guidelines or regulation:

  • No further guidelines or regulations are needed
  • Biometric identification systems should be allowed in publicly accessible spaces only in certain cases or if certain conditions are fulfilled (please specify)
  • Other special requirements in addition to those mentioned in the question above should be imposed (please specify)
  • Use of biometric identification systems in publicly accessible spaces, by way of exception to the current general prohibition, should not take place until a specific guideline or legislation at EU level is in place.
  • Biometric identification systems should never be allowed in publicly accessible spaces
  • No opinion
Comments
  • It is hard for us to answer such a question in a sensible and responsible way here. We would be happy to participate in a stand-alone consultation targeted solely at this and similar issues. --Dimi z (talk) 09:58, 8 June 2020 (UTC)

Do you believe that a voluntary labelling system (Section 5.G of the White Paper) would be useful for AI systems that are not considered high-risk, in addition to existing legislation?

  • Very much
  • Much
  • Rather not
  • Not at all
  • No opinion

Do you have any further suggestion on a voluntary labelling system?

Comments
  • Rather not. I am afraid that voluntary labelling could create confusion among users; it could put those who honestly want to show that they are using a low-risk AI at a disadvantage. --sNappy (talk) 23:56, 17 May 2020 (UTC)

  • ...

What is the best way to ensure that AI is trustworthy, secure and in respect of European values and rules?

  • Compliance of high-risk applications with the identified requirements should be self-assessed ex-ante (prior to putting the system on the market)
  • Compliance of high-risk applications should be assessed ex-ante by means of an external conformity assessment procedure
  • Ex-post market surveillance after the AI-enabled high-risk product or service has been put on the market and, where needed, enforcement by relevant competent authorities
  • A combination of ex-ante compliance and ex-post enforcement mechanisms
  • Other enforcement system
  • No opinion

Do you have any further suggestion on the assessment of compliance?

Comments
  • A combination of both methods would be better, in order to protect both users (who may have been harmed) and makers (who should not be held liable for bad use of AI only after several million operations have been performed at AI speed; they should at least have the option of avoiding this risk by undergoing a (low-cost) official conformity test before running the system publicly). --sNappy (talk) 23:56, 17 May 2020 (UTC)

  • ...

Section 3 - Safety and liability aspects of AI

  • Cyber risks
  • Personal security risks
  • Risks related to the loss of connectivity
  • Mental health risks

In your opinion, are there any further risks to be expanded on to provide more legal certainty?

Comments
  • I would add as a further risk: the risk of endangering people who are persecuted by unjust governments. (Worst-case example: I take a photo of a crowd and upload it to Commons, not knowing that it includes persecuted people; an unjust government can immediately find where they are via a face-recognition algorithm and find a way to kill them.) --sNappy (talk) 23:34, 17 May 2020 (UTC)

  • ...

Do you think that the safety legislative framework should consider new risk assessment procedures for products subject to important changes during their lifetime?

  • Yes
  • No
  • No opinion

Do you have any further considerations regarding risk assessment procedures?

Comments

  • ...

Do you think that the current EU legislative framework for liability (Product Liability Directive) should be amended to better cover the risks engendered by certain AI applications?

  • Yes
  • No
  • No opinion

Do you have any further considerations regarding the question above?

Comments
  • Even the producer of the software doesn't know what it is doing if it is not open source / not transparent. Otherwise it is the same as regular products and software.
  • It should be reaffirmed that companies are liable for everything they make. If they can't prove the mistake is not with them, the liability is with them; they can't say "we didn't know, the code was hidden and we can't explain what went wrong." If I buy a washing machine and it burns down, the producer can't tell me to go look for the cable manufacturer because the wiring melted down.
  • A problem with the PLD is that it is based on user expectations for products. There are no such expectations for new systems (AI), so the Product Liability Directive may not properly apply. This needs to be clarified: what are people entitled to expect?
  • There could be some liability protections for open source products.

  • An example of the lame excuses for which no new directive is probably needed: the recent scandal of the COVID-19 study published by two famous journals: «"It is important to understand the nature of this database," he added. "We are not responsible for the source data, thus the labor intensive task required for exporting the data from an EHR (electronic health record), converting it into the format required by our data dictionary, and fully de-identifying the data is done by the healthcare partner. Surgisphere does not reconcile languages or coding systems."». [1] Nemo 14:12, 5 June 2020 (UTC)

Do you think that the current national liability rules should be adapted for the operation of AI to better ensure proper compensation for damage and a fair allocation of liability?

  • Yes, for all AI applications
  • Yes, for specific AI applications
  • No
  • No opinion

Do you have any further considerations regarding the question above?

Comments
  • We believe that in some jurisdictions product liability also covers immaterial damages caused by software. We encourage the Commission to look into this and to try to harmonise the situation in favour of covering immaterial damages.