A Global Consultation to Enhance the Toolkit for Responsible Artificial Intelligence (AI) Innovation in Law Enforcement
Are you developing AI tools for law enforcement agencies?
Join us and share your perspective on responsible AI innovation in law enforcement at a consultation exclusively for technology providers, scheduled to take place virtually on 3 November 2022.
For more information on the consultation, please download the Concept Note below.
Criteria for participation
This call is open to representatives of companies and organizations worldwide that are designing, developing and producing technologies to be used by law enforcement agencies. In this regard, participants are eligible if they:
- Are, or work for, a private company (including start-ups), a non-profit organization, an academic institution, a judicial institution or a think-tank, or are a member of a consortium.
- Design, develop, produce or trade AI technological tools for law enforcement agencies.
To ensure an optimal and inclusive environment for discussion, only a limited number of participants will be selected to join the consultation. Selection will be made on a first-come, first-served basis, with due consideration to geographical criteria (countries and regions) and technical criteria (the specific domain in which the applicant works).
How to participate
Interested industry partners should contact the AI Team of UNICRI at unicri.aicentre@un.org, and provide:
- The name of their company or organization.
- The location of their company or organization’s headquarters.
- A brief description of the company or organization's experience/mandate in providing technology to law enforcement agencies, highlighting, in particular, relevant AI tools they design, develop, produce or sell, or are about to.
- The role they perform in the entity they represent.
The call will close on 1 November 2022.
About our work
Although AI is considered a promising technology, its use by law enforcement agencies can be a highly sensitive and controversial subject. The absence of specific guidance and good practices on the use of AI in law enforcement underscores the need to advance AI governance.
To this end, the INTERPOL Innovation Centre and the Centre for AI and Robotics of the United Nations Interregional Crime and Justice Research Institute (UNICRI) undertook to develop the Toolkit for Responsible AI Innovation in Law Enforcement, with the financial support of the European Union.
The objective of the initiative is to fill these gaps in guidance and help law enforcement agencies across the globe develop, procure and deploy AI in a responsible manner. To do so, the Toolkit will include a series of practical, operationally oriented resources that support law enforcement agencies in pursuing responsible AI innovation, drawing on human rights and ethical principles related to policing, governance, transparency and accountability.
With a view to promoting these principles throughout the Toolkit's development and ensuring that the project is accepted by, and benefits from the insights of, all relevant stakeholders, INTERPOL and UNICRI are organizing a series of consultations with representatives from law enforcement agencies, industry, academia, the judiciary and prosecution services, and human rights experts.