Sexist, racist, and discriminatory artificial intelligence has a new opponent: the ACLU.
Earlier this month, the 97-year-old nonprofit advocacy organization launched a partnership with AI Now, a New York-based research initiative that studies the social consequences of artificial intelligence. “We are increasingly aware that AI-related issues impact virtually every civil rights and civil liberties issue that the ACLU works on,” Rachel Goodman, a staff attorney in the ACLU’s Racial Justice program, tells Co.Design.
AI is silently reshaping our entire society: our day-to-day work, the products we purchase, the news we read, how we vote, and how governments govern. But as anyone who’s searched endlessly through Netflix without finding anything to watch can attest, AI isn’t perfect. And while it’s easy to pause a movie when Netflix’s algorithm misjudges your tastes, the stakes are much higher when it comes to the algorithms used to decide more serious issues, like prison sentences, credit scores, or housing.
These algorithms are often proprietary: We don’t know exactly how they work or how they’re designed. This makes it virtually impossible to audit them, which is why research that digs into how AI is programmed is so crucial. In short, AI’s biases are civil liberties problems, so the partnership between AI Now and the ACLU is a natural one. Together, they hope to become a formidable force in achieving bias-free AI.
AI Now arose from a 2016 symposium, hosted by the White House and led by Microsoft researcher Kate Crawford and Google researcher Meredith Whittaker, that delved into the social and economic problems of AI–and issued recommendations on how to address them. Those suggestions included cross-disciplinary partnerships designed to spark new research and advocacy around these issues, of which its collaboration with the ACLU is one.
The ACLU is primarily concerned with three areas where AI is at work: criminal justice; equity as it relates to fair housing, fair lending, and fair credit; and surveillance. The partnership is nascent, so the organization is still formulating exactly how it will address these themes. For starters, it will launch a fellowship related to AI and form working groups around these areas. It will also host workshops to help determine its position on these issues–for instance, how to frame questions that arise as municipalities begin to adopt AI, and how to support civil liberties advocates as they look to the ACLU for guidance on how technology should be restricted, deployed, or designed.
Goodman points out that as AI matures and becomes more affordable, more organizations and jurisdictions are incorporating it into their practices, opening up the floodgates for more bias to enter society. “We’re at the [AI] adoption moment,” she says. “In some ways we’re at the beginning of the new era where the rules of the road are being established with respect to how AI is involved with government.”
In the criminal justice arena, the ACLU sees two pressing issues: predictive policing (essentially using data to predict where crime will happen and directing law enforcement resources to those areas) and risk assessment (using algorithms to suggest bail amounts, sentencing, and parole).
For example, Los Angeles began using a service called PredPol to determine areas where burglaries and car break-ins might occur. After the mathematical model–which uses past police reports–outlines where these crimes are likely to occur, the LAPD dispatches police to the area. The presence of police alone is a deterrent, and in some precincts crime dropped 25%. Oakland police, meanwhile, decided not to implement the same technology because the city was concerned about racial profiling.
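The worry behind Oakland’s decision is, in essence, a feedback loop: a model trained on past police reports keeps sending officers to the neighborhoods that were most heavily policed before, and the new reports those patrols generate make the same neighborhoods look even riskier. The toy Python sketch below is purely illustrative–it is not PredPol’s actual model, and every neighborhood and number in it is made up–but it shows how an initial skew in the data can compound on its own.

```python
# Hypothetical sketch of the predictive-policing feedback loop.
# Not PredPol's real model -- just an illustration of the concern.
import random

random.seed(0)

# Three made-up neighborhoods with the SAME true crime rate, but "A"
# starts with more historical reports because it was patrolled more.
report_counts = {"A": 30, "B": 10, "C": 10}
TRUE_CRIME_RATE = 0.2           # identical underlying rate everywhere
DETECTION_WITH_PATROL = 0.9     # patrolled incidents usually get reported
DETECTION_WITHOUT_PATROL = 0.2  # unpatrolled incidents rarely do

for month in range(12):
    # The "model": send patrols wherever past reports are highest.
    patrolled = max(report_counts, key=report_counts.get)

    for hood in report_counts:
        crime_occurred = random.random() < TRUE_CRIME_RATE
        detection = DETECTION_WITH_PATROL if hood == patrolled else DETECTION_WITHOUT_PATROL
        if crime_occurred and random.random() < detection:
            report_counts[hood] += 1  # new report feeds next month's "data"

print(report_counts)
# Despite identical true crime rates, "A" accumulates far more reports,
# so the model keeps directing patrols there -- the initial skew compounds.
```

In this toy version, the data never measures crime directly, only reported crime, which is exactly the kind of gap researchers and advocates want to be able to audit.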
“Governments are really being pushed to do more with less money and AI tools are, at least on a surface level, appealing ways to do that and make decisions efficiently,” Goodman says. “We want to see if there are appropriate roles [for AI] and to ensure tools are fair and free of racial biases. Those are hard questions and hard math problems.”
A second area that the ACLU is prioritizing is machine bias in financing and lending. The algorithms that determine insurance premiums are sometimes racially biased, and so are the systems that advertise predatory lending services. The FTC, for example, investigated how payday loan companies were targeting online advertising to specific racial and income groups. Meanwhile, Wells Fargo–which recently paid a $35 million racial bias settlement–is building out an artificial intelligence team.
“Housing providers and banks are adopting algorithmic tools on who gets a job or home or mortgage or loan at what rate and we know those tools very often incorporate all sorts of biases,” Goodman says, mentioning that this is important for racial justice, women’s rights, and LGBTQ rights.
The third area is surveillance, which covers policing tactics like facial recognition software and police body cameras, as well as broader monitoring of civilians and the personal freedoms it puts at risk. For example, police departments frame body cameras as a step toward accountability–but regulations on how footage is created, used, and handled are sparse. More cameras also mean more routine government surveillance of the public realm at large, and potential invasions of privacy. In New York, civil rights attorneys are challenging the city’s plan to deploy 23,000 body cameras by 2019.
“There is a whole universe of data trails that are increasingly picked up as we move through the physical world and there are implications in online and offline worlds for privacy rights and political freedoms,” Goodman says.
Goodman doesn’t think AI is doomed to be biased forever, and remains optimistic about the future.
In the short term, the initiative will develop a research agenda, facilitate conversations about AI issues that arise in the ACLU network, open lines of communication with developers who design these algorithms so that they can address bias during that process, and speak with governments that are thinking of adopting AI. The long game involves forming a legal agenda on how AI should be regulated and how to protect rights and liberties as the technology becomes more pervasive.
“Developers of these tools have been fairly isolated from conversations about legal and policy and ethical frameworks that are vital to the work they’re doing,” she says. “Many of the ill effects are not intentional. It comes from people designing technology in closed rooms in close conversations and not thinking of the real world.”
Through its AI Now partnership, the ACLU hopes technologists, algorithm designers, and civil libertarians will have better and more open communication. If bias in AI is addressed head on, accepted practices that perpetuate bias can be challenged and fixed. “We can inject legal principles earlier on in the conversation so ideally we aren’t in the position of arguing the illegality of technology that’s already being used,” Goodman says.
Additionally, they hope that a deeper understanding of AI can help the organization plan how best to apply existing legal protections to new technology.
While the partnership helps the ACLU better protect civil liberties by gaining technical understanding, it also gives AI Now a more comprehensive view of the real-world scenarios where the effects of algorithmic bias are experienced. Since it’s now formally dialed into the ACLU’s advocacy network, AI Now can direct research initiatives in a more informed, cross-disciplinary way.
“We just continually find that when you put legal thinkers in the room with people who are deep experts on the subjects of science, technology, and social issues, it makes us better advocates,” Goodman says.