Last week, The Verge published an independent investigation into New Orleans’ predictive policing system, which Palantir Technologies developed and implemented in 2013 without the knowledge of the City Council.

Palantir, which is valued at over $20 billion, has always thrived on secrecy. It was founded with seed money from the CIA’s venture capital fund, and it focuses on data analytics; its clients include at least twelve U.S. government agencies, among them the Marine Corps and the FBI. The Verge reports that Palantir’s partnership with New Orleans stayed under the radar for so long because it was established through a philanthropic relationship with Mayor Mitch Landrieu’s NOLA for Life program. Influencing city police departments through philanthropy isn’t new for Palantir; its earlier contracts with the LAPD and the NYPD also grew out of charitable giving.

The predictive policing system Palantir developed for New Orleans isn’t completely new, either; it closely resembles one already deployed in Chicago. Predictive policing tries to make law enforcement proactive rather than reactive: using algorithms, police can forecast where future crime is likely to occur and take steps to prevent it. If this sounds eerily like the 2002 movie Minority Report, based on the Philip K. Dick short story, the reality is not far off. Chicago has gone a step further, using network analysis to assign individual citizens “police risk scores” and then mount proactive interventions, as sketched below. Proponents argue that predictive policing puts officers in the right place at the right time and can be effective in deterring repeat offenders.
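
To make the individual-scoring idea concrete, here is a deliberately simplified, hypothetical sketch in Python. The names, weights, and scoring formula are invented for illustration; Chicago’s actual model is far more elaborate and is not reproduced here.

```python
# A deliberately simplified, hypothetical risk-score sketch. The people,
# weights, and formula below are invented for illustration; this is NOT
# Chicago's actual scoring model.

# Co-arrest network: each person maps to associates arrested alongside them.
co_arrests = {
    "p1": {"p2", "p3"},
    "p2": {"p1"},
    "p3": {"p1", "p4"},
    "p4": {"p3"},
}
prior_arrests = {"p1": 4, "p2": 1, "p3": 0, "p4": 2}
shooting_involved = {"p2", "p4"}  # associates previously tied to gun violence

def risk_score(person: str) -> float:
    """Hypothetical score: one's own record plus exposure through the network."""
    associates = co_arrests.get(person, set())
    exposure = sum(a in shooting_involved for a in associates) / max(len(associates), 1)
    return 10 * prior_arrests[person] + 50 * exposure

for person in sorted(co_arrests):
    print(person, risk_score(person))
# p3, who has no arrests at all, outscores p4 (two arrests) purely through
# association -- the guilt-by-proximity effect that critics worry about.
```

Even this toy version shows how someone with no record of their own can be flagged purely through association. So, why not embrace Palantir and encourage predictive policing?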

There’s no transparency. In 2016, the Brennan Center for Justice filed suit against the NYPD for failing to provide information about its predictive policing system. Public records showed that the City had paid $2.5 million to Palantir, but nothing public described the parameters of the technology. Neither Palantir nor the NYPD had any incentive to be transparent. Moreover, as a private company, Palantir isn’t answerable to the public in the way a police department might eventually be, which makes information sharing even less likely.

Palantir’s desire for secrecy is perhaps understandable given that its technology is used by U.S. government agencies for counterterrorism and national security initiatives. But this reasoning only erodes the line between the roles of the military and of domestic police, a blurring most visible in the provision of military-grade weapons to police departments. As activist Ana Muniz commented, “The military is supposed to defend the territory from external enemies, that’s not the mission of the police—they’re not supposed to look at the population as an external enemy.”

Predictive policing amplifies racial bias. For predictive policing to be accurate, the data it relies on need to be accurate as well. Algorithms make predictions by analyzing patterns in an initial data set and then looking for those patterns in new data. But police data aren’t collected uniformly, and the initial data sets fed into these algorithms reflect institutional racial and income bias. For example, black men are far more likely to be stopped by police than white men. An algorithm trained on that data will disproportionately send police to heavily black neighborhoods and overstate the likelihood that black men will commit future crimes. Studies support this concern: an analysis of Oakland’s PredPol system showed that, despite a theoretically race-neutral algorithm, black neighborhoods would be targeted for drug crimes at twice the rate of white neighborhoods, even though health surveys estimate that illicit drug use is roughly equal across racial groups. Concerns about biased algorithms and artificial intelligence extend beyond law enforcement, but the harms are particularly acute here.
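
The feedback loop is easy to demonstrate. Below is a minimal, hypothetical simulation (illustrative only; this is not Palantir’s or PredPol’s actual model): two neighborhoods share an identical underlying offense rate, but one starts out patrolled twice as heavily, and incidents are only recorded where officers are present to observe them.

```python
import random

random.seed(0)

# Hypothetical simulation of the predictive-policing feedback loop.
# Both neighborhoods have the SAME true offense rate, but neighborhood A
# starts out patrolled twice as heavily as neighborhood B.
TRUE_RATE = 0.05                          # chance a patrol records an incident
patrol_share = {"A": 2 / 3, "B": 1 / 3}   # initial allocation of 300 patrols
recorded = {"A": 0, "B": 0}               # cumulative recorded incidents

for day in range(365):
    for hood in ("A", "B"):
        patrols = round(300 * patrol_share[hood])
        # Incidents are only *recorded* where officers are present to see them.
        recorded[hood] += sum(random.random() < TRUE_RATE for _ in range(patrols))
    # "Predictive" step: allocate tomorrow's patrols in proportion to past records.
    total = recorded["A"] + recorded["B"] or 1
    patrol_share = {hood: recorded[hood] / total for hood in recorded}

print(recorded)       # A's recorded count is roughly double B's
print(patrol_share)   # ...so the 2:1 patrol skew persists, despite equal true rates
```

Because the model never observes the offenses it isn’t positioned to see, the initial 2:1 skew in recorded incidents, and thus in patrols, persists indefinitely, much like the disparity found in the Oakland analysis.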

We have no proof that predictive policing works. Predictive policing is still fairly new, but the RAND Corporation published a 2014 study examining its impact in Shreveport, Louisiana. The researchers found no statistically significant reduction in crime attributable to predictive policing.

It creates serious privacy concerns. Palantir’s website claims that a core component of its mission is “protecting our fundamental rights to privacy and civil liberties.” Whether or not that’s true, implementation has looked rather different. In New Orleans, a political consultant who helped establish the city’s relationship with Palantir is quoted as saying, “Unless you’re the cousin of some drug dealer that went bad, you’re going to be okay.” That reads as surveillance of the innocent. In Chicago, proactive interventions sent police to the homes of citizens who had not committed any violent crimes to warn them of the consequences if they ever did.

Once we get in, can we get out? Maybe not when dealing with Palantir. BuzzFeed reports that when the NYPD tried to cancel its contract with Palantir in June 2017, the company dug in its heels and refused to provide its analysis in a standardized format that would work with other software. This fight points to a future in which cities battle private companies for access to their own data, or become so dependent on a single vendor that no alternatives remain.

“Predictive policing used to be the future, and now it is the present,” stated former NYPD commissioner William Bratton in 2016. Bratton is right: despite the concerns detailed above, it is unlikely that any police department will reverse course. Hopefully, scholarship and journalism can continue to bring to light ways communities can and should scrutinize the use of predictive policing. As Andrew Guthrie Ferguson noted in an article for the Washington University Law Review, “Without successful answers to . . . questions about data, methodology, scientific legitimacy, transparency, accountability, vision, practice, administration, and security, any predictive policing system remains open to criticism and challenge.” Finally, there is also a chance that predictive policing could become a tool for reducing bias and increasing safety and justice in American communities. But that won’t happen so long as its applications are shrouded in Palantir-supported secrecy.