Anticipate where and when a crime is likely to take place (risk in space)
Anticipate who is likely to commit a crime – or become a victim of one (risk in behavior)
Rely on current and historical data, such as arrest data or police reports, to predict future incidents of crime. This is done by using machine learning techniques to identify patterns or links across different variables in the data, such as time, location, and type of crime. The patterns that emerge are used to generate models that predict when and where future incidents of crime are likely to take place, which can assist law enforcement in making decisions about its operations, priority targets, and where to allocate resources. (A schematic sketch of this approach appears after this list.)
Rely on personal data, such as age, criminal history, and patterns of victimization, to predict who is most likely to become a perpetrator or a victim of crime. Individuals are then scored according to their ‘risk profiles’, which can assist law enforcement in identifying individuals who are considered ‘high risk’ and intervening before a crime takes place.
Are seen to be a cost-effective way to ensure police decisions are data-driven and that resources are allocated to areas where they are most needed. Supporters of place-based predictions believe these tools have the potential to reduce discriminatory forms of policing, such as racial profiling and excessive use of force, since officers will have access to more data when making decisions.
Are seen to be a cost-effective way to prevent crime and violence by using ‘early warning’ notification systems to track repeat offenders or victims. Supporters of people-based predictions believe these tools can help police identify those who are at risk before an incident occurs, which can fast-track early intervention and divert people away from the criminal justice system.
Rely solely on police data, creating a strong likelihood that bias about who commits crime and where it takes place will be amplified, since these technologies are not trained to identify biases embedded in the datasets. This can lead to the legitimisation of discriminatory practices, such as the over-policing of minor offences and the under-policing of high-risk communities. This is why those who defend place-based predictions argue that it is not only the quality – but also the source – of input data that matters.
Can perpetuate systemic forms of bias against individuals – including those with criminal records – and threaten privacy rights. For people who don’t have a criminal record, other types of information may be monitored, which could result in unreasonable attention being paid to them.
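To make the place-based approach concrete, the sketch below trains a toy classifier on synthetic grid-cell data: past incident counts per map cell become features, and the model ranks cells by predicted risk for the coming week. All data, features, and thresholds here are hypothetical inventions for illustration; real systems are far more elaborate, but the logic is the same.

```python
# Illustrative sketch of a place-based prediction model (hypothetical data).
# Each row is a map grid cell; the model scores the risk that at least one
# incident occurs there in the following week.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_cells = 500

# Hypothetical features: incidents last week, incidents over the prior month,
# and the share of past reports made at night (all synthetic).
last_week = rng.poisson(1.0, n_cells)
last_month = rng.poisson(4.0, n_cells)
night_share = rng.uniform(0, 1, n_cells)
X = np.column_stack([last_week, last_month, night_share])

# Synthetic label: whether an incident was recorded the following week.
# In real deployments this comes from police reports -- which is exactly
# where recording bias enters the training data.
y = (last_week + rng.poisson(0.5, n_cells) > 1).astype(int)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]

# Rank cells by predicted risk; the top cells become patrol 'hotspots'.
hotspots = np.argsort(risk)[::-1][:10]
print("Highest-risk grid cells:", hotspots)
```

Because the label is derived from recorded incidents, any bias in what gets reported or where police already patrol flows directly into the training signal – the feedback-loop concern raised in the criticism above.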
Focuses on what AI tools are being used for, such as promoting democratic values and increasing access to justice.
Focuses on how AI is being used (transparency, accountability, fairness, etc.).
People must understand what an AI system does, how it works, and the risks that are involved.
Ensures that systems are functioning properly and that someone takes responsibility when things go wrong.
Does not perpetuate bias or impose unfair or discriminatory outcomes on persons.
The power to decide whether to take action suggested by AI tools ultimately rests with humans.
Measures must be in place to protect the personal information of intended users, beneficiaries, and other stakeholders of AI-driven technologies.
Promoting well-being, preserving dignity and sustaining the planet.
Independent Fairness Testing (IFT) is used to detect forms of algorithmic bias that may create or reinforce discriminatory practices against disadvantaged groups of persons. Considered by experts to be one of the essential components of AI governance, IFT uses different metrics to measure the fairness of the model.
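As an illustration of what one such metric looks like in practice, the sketch below computes a demographic-parity-style comparison on hypothetical model outputs. The groups, rates, and thresholds are invented for the example; a real IFT exercise would apply several complementary metrics (e.g., equalized odds, calibration) rather than any single number.

```python
# Illustrative fairness check (hypothetical data): compare positive
# prediction rates across two groups -- a simple demographic-parity-style
# measure, one of many metrics a fairness test might use.
import numpy as np

rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1000)                        # protected attribute
flagged = rng.random(1000) < np.where(group == "A", 0.30, 0.18)  # model decisions

rate_a = flagged[group == "A"].mean()
rate_b = flagged[group == "B"].mean()

print(f"Flag rate, group A: {rate_a:.2f}")
print(f"Flag rate, group B: {rate_b:.2f}")
# Two common summaries of the gap between groups:
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2f}")
print(f"Disparate impact ratio: {min(rate_a, rate_b) / max(rate_a, rate_b):.2f}")
```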
Third-party auditing should be conducted regularly following the deployment of AI-driven technologies. Users of AI should invite independent and experienced third parties to review their algorithmic decision systems, which requires disclosing sufficient information to allow accurate testing, monitoring and feedback. Ultimately, the goal is to inform end-users and stakeholders that an algorithmic decision system was audited by a trusted third party and that it remains open to independent auditing in the future.
Social Impact Assessments (SIAs) measure the impact of AI-driven technologies on the social elements of life. SIAs have traditionally been conducted on affected groups of persons against six categories of metrics: (1) employment (including labor market standards and rights); (2) income; (3) access to services (including education, social services, etc.); (4) respect for fundamental rights (including equality); (5) public health; and (6) safety. It is important to note that this list is not exhaustive and should be adapted to the specific context.
Involves examining the machine learning process using an interdisciplinary team of experts. Ideally, this would involve pairing data scientists with social scientists; integrating traditional machine learning metrics with fairness measures; balancing representativeness with critical-mass constraints when sampling training data (see the sketch below); and keeping de-biasing in mind when building algorithmic models.
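One way to read the ‘representativeness versus critical mass’ trade-off: a purely proportional sample can leave small groups with too few examples for fairness measures to be statistically meaningful. A minimal sketch, with hypothetical group sizes and an assumed per-group floor:

```python
# Illustrative sampling sketch (hypothetical numbers): draw a training sample
# roughly proportional to group sizes, but guarantee each group a minimum
# 'critical mass' of examples so fairness metrics can be estimated.
population = {"group_a": 90_000, "group_b": 8_000, "group_c": 2_000}
sample_size = 5_000
min_per_group = 500  # critical-mass floor (an assumption for illustration)

total = sum(population.values())
quota = {g: max(min_per_group, round(sample_size * n / total))
         for g, n in population.items()}

# Note: the floors can push the total above sample_size; in practice the
# proportional shares would then be rescaled to fit the budget.
print(quota)  # {'group_a': 4500, 'group_b': 500, 'group_c': 500}
```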
Ensures privacy principles are embedded in products from their conception and throughout the development process. This entails: (1) using only the data needed to achieve a particular purpose; (2) letting people know about the personal data that is stored and giving them the ability to correct or delete it; (3) using anonymized data where possible so individuals cannot be linked back to the data; and (4) including restrictions at the outset on how data will be used or transferred.
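A minimal sketch of two of these practices – data minimization (1) and anonymization/pseudonymization (3) – assuming a hypothetical record layout. The field names, salt handling, and helper functions are illustrative, not a prescribed implementation.

```python
# Illustrative privacy-by-design sketch (hypothetical record layout):
# keep only the fields needed for the stated purpose and replace direct
# identifiers with salted one-way hashes before analysis.
import hashlib

FIELDS_NEEDED = {"incident_type", "district", "month"}  # purpose limitation
SALT = b"rotate-and-store-me-securely"  # assumption: managed outside the code

def minimise(record: dict) -> dict:
    """Drop every field not required for the analysis."""
    return {k: v for k, v in record.items() if k in FIELDS_NEEDED}

def pseudonymise(person_id: str) -> str:
    """Salted one-way hash so raw identifiers never enter the dataset."""
    return hashlib.sha256(SALT + person_id.encode()).hexdigest()[:16]

raw = {"person_id": "12345", "name": "Jane Doe",
       "incident_type": "theft", "district": "north", "month": "2023-04"}
clean = minimise(raw)
clean["subject_ref"] = pseudonymise(raw["person_id"])
print(clean)  # no name or raw ID remains in the working record
```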
Embeds ethical principles in the design, development, and deployment of AI-based solutions, which may require specific tasks to be completed at different stages of the development process. Drawing on expertise from diverse disciplines may be useful in identifying other ethical issues that could arise during deployment of the technology.
Institutional ‘readiness’ means more than political will and investment in AI. Readiness looks at the strength of existing infrastructure and the capacity of institutions to design, develop, deploy, and oversee their use of AI-driven technologies. This includes the digital literacy and skills of users, data infrastructure and connectivity, quality assurance and performance management systems, and cybersecurity protocols and procedural safeguards. Assessing institutional readiness is critical in all sectors, but when institutions are looking to deploy high-risk technologies, such as crime prediction tools, building capacity to improve readiness becomes urgent.
Procurement can become a key driver for the adoption of responsible AI if measures are taken to uphold ethical principles. Developing procurement guidelines and ethical frameworks can help mitigate some of the risks of AI – especially with high-risk technologies like crime prediction tools – and ensure AI is used in a responsible and ethical manner.
A type of impact assessment that measures the social consequences of a planned intervention or action, including the deployment of crime prediction tools or other AI-driven technologies. In this regard, SIAs are a systematic process of identifying, analysing, monitoring, and managing the intended and unintended consequences, as well as both the positive and negative social changes, arising from the use of AI.
The purpose of an SIA is to develop a better understanding of: (1) the landscape of risk in a given context; (2) how those risks interact with one another; (3) the social consequences they produce (both intended and unintended); and (4) who is most likely to benefit and who is most likely to be harmed. With this understanding, it becomes possible to develop targeted risk-mitigation strategies: practitioners can better assess the source and type of risk present, and so optimize the use of AI-driven technologies to expand the number of people they are designed to benefit.
SIAs aim to predict and assess the consequences of a proposed action or initiative before a decision to implement is made. This is critical for deploying crime prediction tools, which are classified as ‘high risk’ technologies with significant social consequences.
Conducted prior to using high-risk technologies. They should follow a similar process to an SIA, and the results should inform the decision on whether to deploy crime prediction tools. That decision may change if the anticipated risks in a particular institutional environment are assessed to be too high, if the training data is not sufficiently representative, or if the communities designated for crime prediction tools require more information, greater explicability, or more opportunities to engage with stakeholders.
Screening: identification of the risks and harms that might occur, and who will be affected according to protected grounds (e.g., gender, race, nationality, location).
Scoping: determination of the questions to be addressed, and of the methodology and indicators to be used, in all areas identified during screening.
Data collection: qualitative methods (interviews, consultations with affected groups) and quantitative methods (modeling, regression analysis).
Assessment: evaluation of the evidence against structural factors and other considerations (e.g., laws that may enable or constrain certain behaviors).
Reporting: presentation of findings to an entity that can hold the institution in question to account.
Follow-up: engagement with affected groups and implementing institutions about how the recommendations will be implemented.