Google's AI ethics committee has barely folded, and now the European Union has released its AI code.


The European Commission has announced a pilot project to test its draft ethics rules for the development and application of AI technologies, with the aim of ensuring they can be implemented in practice.

The project also aims to collect feedback and to build international consensus around human-centric artificial intelligence, so that the topic can be taken up at the upcoming meetings of the Group of Seven (G7) and the Group of Twenty (G20).

The Commission's High-Level Expert Group on Artificial Intelligence, convened last summer and composed of 52 experts from industry, academia, and civil society, released a draft ethics code for trustworthy AI in December.

A revised version of the document was submitted to the Commission in March. Beyond the existing laws and regulations that AI systems must respect, it distills the experts' consultations into seven key requirements for trustworthy AI:

Human agency and oversight: AI systems should underpin a fair society by supporting human agency and fundamental rights, not by diminishing, restricting, or misleading human autonomy.

Technical robustness and safety: Trustworthy AI requires algorithms that are secure, reliable, and robust enough to deal with errors or inconsistencies during all phases of an AI system's life cycle.

Privacy and data governance: Citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.

Transparency: The traceability of AI systems should be ensured.

Diversity, non-discrimination, and fairness: AI systems should take into account the whole range of human abilities, skills, and needs, and should ensure accessibility.

Societal and environmental well-being: AI systems should be used to drive positive social change, sustainable development, and ecological responsibility.

Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

The next stage of the European Commission's AI ethics strategy is to observe how the draft rules work in practice across a broad range of pilot projects. The pilot involves a wide range of stakeholders, including international organizations and companies from outside the EU.

The European Commission said the pilot phase would start this summer. It invited companies and public organizations to register with its AI forum, the European AI Alliance, in order to be notified when the pilot begins.

Members of its High-Level Expert Group on Artificial Intelligence will also present and explain the guidelines to stakeholders in the member states. Panel members will present their work in detail during the third EU Digital Day in Brussels tomorrow.

In its official communication, "Building Trust in Human-Centric Artificial Intelligence", the Commission described the pilot plan as follows:

This work will have two strands: (i) piloting with stakeholders that develop or use AI, including public administrations; and (ii) a process of consultation, discussion, and awareness-raising among member states and stakeholder groups, including industry and services:

(i) Starting in June 2019, all stakeholders and individuals will be invited to test the assessment list and offer suggestions for improving it. In addition, the High-Level Expert Group on AI will hold in-depth discussions with private- and public-sector stakeholders to gather more detailed feedback on how the guidelines can be applied across a wide range of applications. All feedback on the workability of the guidelines will be evaluated by the end of 2019.

(ii) In parallel, the Commission will organize further outreach activities, giving representatives of the High-Level Expert Group more opportunities to present the guidelines and gathering additional feedback from stakeholders in the member states on the assessment of the guidelines.

Andrus Ansip, Vice-President for the Digital Single Market, commented in a statement: "The ethical dimension of AI is not a luxury feature or an add-on. Only with trust can our society fully benefit from these technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader in human-centric AI that people can trust."

Mariya Gabriel, the EU's Commissioner for Digital Economy and Society, added in a supporting statement: "Today we have taken an important step towards ethical and secure AI in the European Union. Grounded in EU values, we now have a solid foundation, built with the broad and constructive participation of many stakeholders from business, academia, and civil society. We will now put these requirements into practice and, at the same time, foster an international discussion on human-centric AI."

The Commission said that in early 2020, after the pilot phase, the AI expert group would review the assessment lists for the key requirements in light of the feedback received. Building on that review, the Commission will evaluate the pilot's results and propose next steps.

By autumn 2019, the Commission also plans to launch a network of AI research centres. In addition, it plans to set up networks of digital innovation hubs, and to foster discussions among member states and stakeholders on developing and implementing data-sharing models that make the best use of common data spaces.

These plans are part of the European Commission's artificial intelligence strategy of April 2018, which aims to raise combined public and private investment in AI to more than 20 billion euros per year over the next decade, while making more data available and nurturing relevant talent.

(Compiled by Hanbing from CNET; data cited from CNET.)

Source: Netease Intelligent. Responsible editor: Feng Zhenyu