As the New South Wales (NSW) Police Force accelerates its adoption of Artificial Intelligence (AI), the spotlight is intensifying on the crucial need for transparency and robust accountability measures. The establishment of a dedicated AI centre within the NSW Police signifies a strategic commitment to leveraging cutting-edge technology for law enforcement. However, this advancement must be carefully navigated to address the ethical and societal implications that AI implementation inherently introduces.
The concerns articulated by technology experts and civil liberties advocates highlight the potential pitfalls of unchecked AI implementation. At the heart of the matter is the use of predictive policing and risk-scoring systems, particularly those powered by tools from Australian intelligence software company Fivecast. While the promise of enhanced crime prevention is alluring, the reality is that these systems can perpetuate biases, compromise reliability, and ultimately infringe upon personal liberties.
Professor Toby Walsh, a leading AI expert from UNSW, rightly points to the limitations inherent in any "predictive" tool, emphasizing the risk of bias. His reference to the COMPAS system (Correctional Offender Management Profiling for Alternative Sanctions) serves as a stark reminder of how algorithmic bias can lead to discriminatory outcomes. The danger lies in feeding historical data, reflecting existing societal inequalities, into these systems, which then inadvertently amplify and perpetuate those inequalities in their predictions. This can result in disproportionate targeting of specific communities, exacerbating existing social tensions and eroding public trust in law enforcement.
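The amplification mechanism critics describe can be made concrete with a deliberately simplified model (all numbers hypothetical, not drawn from any real system): two neighbourhoods have the same true offence rate, but one starts with more recorded arrests because it was historically patrolled more heavily. A risk-ranking tool that concentrates patrols superlinearly on the "highest-risk" area then makes the initial skew grow with each cycle:

```python
# A deliberately simplified feedback-loop model (hypothetical numbers):
# neighbourhoods A and B have the SAME true offence rate, but A starts with
# more recorded arrests due to past over-policing. A risk-ranking tool that
# concentrates patrols on the "highest-risk" area responds superlinearly to
# arrest counts, so the historical skew compounds round after round.

def patrol_shares(arrests):
    """Allocate patrol shares superlinearly (squared weights) toward
    areas with more recorded arrests, mimicking a top-of-the-list focus."""
    weights = {area: n ** 2 for area, n in arrests.items()}
    total = sum(weights.values())
    return {area: w / total for area, w in weights.items()}

def simulate(rounds=5):
    arrests = {"A": 60.0, "B": 40.0}   # historical skew, not a real difference
    history = [arrests["A"] / sum(arrests.values())]
    for _ in range(rounds):
        shares = patrol_shares(arrests)
        # recorded arrests track patrol presence, not true offending
        arrests = {area: 100.0 * share for area, share in shares.items()}
        history.append(arrests["A"] / sum(arrests.values()))
    return history

for i, share in enumerate(simulate()):
    print(f"round {i}: A's share of recorded arrests = {share:.0%}")
```

In this toy model, A's share of recorded arrests climbs from 60% toward nearly 100% even though the underlying offence rates never differ; the point is not that any deployed tool works exactly this way, but that any allocation rule feeding on its own outputs needs independent ground truth to avoid this drift.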
Dr. Reuben Kirkham's concerns about the lack of transparency surrounding the use of these tools are equally valid. Without clear oversight and open communication, it becomes impossible to assess whether adequate safeguards govern these algorithmic assessments. The opaqueness surrounding how these systems operate and the data they utilize fuels suspicion and hinders meaningful public discourse. Furthermore, Dr. Kirkham's warning about the reliability of internet-sourced material underscores the vulnerability of these systems to manipulation and misinformation. The potential for malicious actors to influence police responses through the creation of artificial social media traffic is a serious threat that must be addressed proactively.
The case study detailing Fivecast's Onyx platform, used to analyze social media activity related to protests in Western Sydney, further underscores the need for vigilance. While the NSW Police Minister confirmed the force's use of Fivecast software for intelligence purposes, the lack of specific details regarding its application raises concerns. The invocation of intelligence-gathering methodology as a shield against scrutiny is insufficient, particularly when the technology in question has the potential to impact fundamental rights and freedoms.
The parallels drawn by Greens MLC Abigail Boyd to the Department of Homeland Security's use of Fivecast's Onyx platform in the United States further heighten concerns about potential mission creep and the erosion of civil liberties. The lack of clarity from the NSW Police regarding the use of AI-powered tools to predict crime, coupled with their refusal to confirm the shutdown of a previous AI recommendation system linked to youth crime and domestic violence watchlists, only adds to the atmosphere of uncertainty.
The functionalities described in Fivecast's case study—automated link analysis and AI risk detectors—raise serious questions about the scope and potential impact of these tools. The ability to assess individuals and online groups based on their social media activity could lead to unwarranted surveillance, chilling effects on free speech, and the creation of de facto watchlists based on algorithmic assessments.
Jonathan Hall Spence, principal solicitor at the Justice and Equity Centre, succinctly captures the crux of the issue: "when it comes to policing, where personal liberty is at stake, the risks are extremely high." The implementation of AI in law enforcement demands a proactive and collaborative approach. The NSW Police must engage with communities and civil society to ensure these tools are used fairly, appropriately, and in a manner that respects fundamental rights.
Moving forward, several key actions are necessary to address the challenges of transparency and accountability in the NSW Police's embrace of AI:
- Establish Clear Ethical Guidelines and Oversight Mechanisms: The NSW Police must develop and publicly release clear ethical guidelines governing the development, deployment, and use of AI-powered tools. These guidelines should be informed by ethical principles, human rights standards, and best practices in AI governance. Furthermore, an independent oversight body should be established to monitor compliance with these guidelines and investigate potential abuses.
- Promote Algorithmic Transparency: The NSW Police must strive for greater transparency regarding the algorithms used in their AI-powered tools. This includes providing clear explanations of how these algorithms work, the data they use, and the factors they consider in making predictions or assessments. While complete transparency may not be feasible due to security considerations, efforts should be made to provide as much information as possible without compromising operational effectiveness.
- Implement Robust Data Governance Practices: The NSW Police must implement robust data governance practices to ensure the accuracy, fairness, and security of the data used by their AI-powered tools. This includes implementing data validation procedures to identify and correct errors, ensuring that data is representative of the population being served, and protecting data from unauthorized access or misuse.
- Provide Training and Education: The NSW Police must provide comprehensive training and education to officers on the ethical and legal implications of using AI-powered tools. This training should emphasize the importance of human oversight, the potential for bias, and the need to exercise sound judgment in interpreting algorithmic outputs.
- Engage in Public Dialogue: The NSW Police must engage in ongoing public dialogue about the use of AI in law enforcement. This includes soliciting feedback from communities, civil society organizations, and technology experts, and being transparent about the benefits and risks of these technologies.
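The data governance recommendation above can be made concrete with one simple validation step: a representativeness audit that compares the demographic mix of a dataset against population benchmarks. The sketch below is illustrative only, with hypothetical group names and figures, not a description of any tool the NSW Police uses:

```python
# Illustrative representativeness audit (hypothetical figures): compare each
# group's share of a policing dataset against its share of the population and
# flag groups whose data share exceeds a chosen overrepresentation threshold.

def overrepresentation(dataset_counts, population_shares, threshold=1.5):
    """Return groups whose share of the data exceeds
    `threshold` times their share of the population."""
    total = sum(dataset_counts.values())
    flags = {}
    for group, count in dataset_counts.items():
        data_share = count / total
        ratio = data_share / population_shares[group]
        if ratio > threshold:
            flags[group] = round(ratio, 2)
    return flags

# Hypothetical example: group X is 10% of the population but 30% of records.
records = {"X": 300, "Y": 700}
census = {"X": 0.10, "Y": 0.90}
print(overrepresentation(records, census))   # flags X at 3x its population share
```

An audit like this does not fix a skewed dataset, but making its results routine and reviewable is exactly the kind of validation procedure and independent oversight the recommendations call for.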
By embracing these measures, the NSW Police can harness the potential of AI to enhance public safety while safeguarding fundamental rights and building public trust. The road ahead requires a commitment to transparency, accountability, and ethical leadership.