Washington could be the first state to rein in automated decision-making

By DJ Pangburn

Governments are increasingly turning to automated systems to make decisions about criminal sentencing, Medicare eligibility, and school placement. Public officials and companies tout gains in speed and efficiency, and hope that automated decision-making can bring more fairness to bureaucratic processes.

But the drawbacks are coming into clearer focus. Computer algorithms can reinforce existing biases or introduce new ones. Studies have shown that facial recognition tools used by law enforcement could disproportionately impact African-Americans, that predictive policing software could unfairly target minority neighborhoods because of incomplete crime databases, and that algorithms can perpetuate racial disparities in insurance premiums.

Critics say the tools tend to exist in a legal Wild West, with little transparency or accountability: intellectual property laws and government secrecy prevent the public from auditing them, or in some cases from even knowing they exist.

Lawmakers in Washington State are taking steps to tame this digital frontier. On February 6th, legislators in the House held a hearing on an algorithmic accountability bill that would establish guidelines for the procurement and use of automated decision systems in government “in order to protect consumers, improve transparency, and create more market predictability.”

HB 1655 and SB 5527, as the companion bills are known in the Washington House and Senate, respectively, could create the first such statewide regulation in the country. New York City passed the U.S.’s first algorithmic accountability law in 2017.

“Automated decision systems are rapidly being adopted to make or assist in core decisions in a variety of government and business functions, including criminal justice, health care, education, employment, public benefits, insurance, and commerce,” reads the proposed House bill. “These automated decision systems are often deployed without public knowledge, are unregulated, and vendors selling the systems may require restrictive contractual provisions that undermine government transparency and accountability.”

Allowing these systems to make core government and business decisions “raises concerns about due process, fairness, accountability, and transparency, as well as other civil rights and liberties,” it says.

Shankar Narayan, Director of the Technology and Liberty Project at the ACLU of Washington, says that virtually every important decision made about a person may well include some component of machine decision-making, based on data that is unprotected or poorly protected.

Washington bill HB 1655 would introduce standards for automated decision systems

“The person doesn’t know what piece of data they hemorrhaged that is feeding that automated decision-making tool,” says Narayan. “All they might find is that their public benefits check suddenly disappears because an algorithm said they’re ineligible for it, or their time with a case worker is slashed because the algorithm said they were eligible for less time.”

The ACLU is one of 100 groups that have urged California jurisdictions and others across the country to suspend pretrial risk assessment tools until they have considered the risks and consequences, including racial bias. And ACLU-WA was instrumental in helping pass Seattle’s 2017 surveillance ordinance, one of many transparency rules that city governments have adopted in recent years.

A major provision of the new proposal would ensure that public-sector automated systems and the data sets they use are made “freely available by the vendor before, during, and after deployment for agency or independent third-party testing, auditing, or research.” Echoing recommendations made by scholars at the AI Now Institute at New York University, the bill aims to help ensure that an automated system’s “impacts, including potential bias, inaccuracy, or disparate impacts” can be understood and remedied.
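The bills don’t prescribe how such testing would work, but one common first-pass check an independent auditor might run is a disparate impact analysis, comparing a system’s approval rates across demographic groups. The Python sketch below is purely illustrative: the group labels, decision data, and 0.8 threshold (borrowed from the EEOC’s “four-fifths rule,” not from the bill) are all assumptions.

# Illustrative sketch of a disparate impact audit: compute each group's
# approval rate and its ratio to the most-favored group's rate.
from collections import defaultdict

def disparate_impact(records):
    """records: iterable of (group, approved) pairs, where approved is a bool."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in records:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    # Ratio of each group's approval rate to the most-favored group's rate.
    return rates, {g: r / best for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical decisions: group A is approved 80% of the time, group B 50%.
    decisions = ([("A", True)] * 80 + [("A", False)] * 20
                 + [("B", True)] * 50 + [("B", False)] * 50)
    rates, ratios = disparate_impact(decisions)
    for g in sorted(rates):
        flag = "FLAG" if ratios[g] < 0.8 else "ok"
        print(f"group {g}: approval {rates[g]:.0%}, ratio {ratios[g]:.2f} ({flag})")

Here group B’s ratio (0.50 / 0.80 = 0.625) falls below the four-fifths threshold, which is the kind of disparity the bills’ audit provisions are meant to surface.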

For Narayan, ACLU-WA, and other stakeholders, the proposed rules reflect homework that public agencies should be doing anyway. “The bill creates essentially a report card with some basic information about the tool that’s submitted to the [state’s] chief privacy officer who can then make that open for public comment,” he explains. “All of this happens before the tool is adopted.”

A number of community groups joined ACLU-WA in supporting HB 1655 and SB 5527. Latino organizations like El Centro de la Raza and Casa Latina, the local chapter of the American Muslim Empowerment Network, the Asian-American community groups Densho and the Asian Pacific Islander Coalition, and others signed onto an ACLU-WA letter expressing support for the companion bills. The Critical Platforms Studies Group, an interdisciplinary research collective at the University of Washington, is also a signatory of the letter.

At a hearing on Wednesday, several academics testified in support of the bills, including Katherine Pratt, a doctoral candidate in Electrical and Computer Engineering at the University of Washington.

“These programs are as biased as the humans that designed them,” she said, “and when the humans are almost exclusively affluent white men, it is difficult to claim they’re being designed with the population at large in mind.”

Not only would the bill require that the algorithm be “surfaced,” as Narayan says, but it would also put a fairness standard in place to prevent an algorithm from discriminating against a person. He says this provision would help companies start policing themselves, since they would know when they might be running afoul of Washington’s anti-discrimination laws.

“If you couldn’t discriminate against a group in the brick and mortar world in Washington because of our law against discrimination, you shouldn’t be able to make an end-run around the law with an algorithm,” says Narayan. “And I would hope that tech companies would agree with that.”

Although no one testified against the proposals, whether the bills will face opposition remains an open question. Government officials and tech executives have argued that too much transparency could imperil companies’ intellectual property and dissuade companies from working with governments, but HB 1655 and SB 5527 would not reveal the source code behind algorithmic decision-making tools. While Microsoft has publicly voiced support for fairness and accountability in technology generally, the other large Seattle tech company, Amazon, has been mostly quiet.

“It remains to be seen if the vendors even put a position down on the record,” says Narayan. “But I think it’s important to involve them in the discussion, and make sure that they are providing their perspectives so at the very least we know what’s comfortable for some of the vendors.”

“There is always pain in going from the Wild West without any regulation or transparency at all to having some rules,” he adds. “But I think ultimately these rules are going to allow tools that truly are legitimate and non-biased to be able to roll out, while legitimately eliminating tools that are biased.”

Opposition from vendors and government agencies ultimately limited the ambitions of New York City’s algorithmic accountability rules, according to Julia Powles, an associate professor of tech law and policy at the University of Western Australia Law School. The final law, as she wrote in The New Yorker in 2017, created a task force to develop recommendations on which types of systems should be regulated, how the public can weigh in, and how the government can address instances in which people are harmed by automated decision-making.

But the law includes no disclosure requirements, except for a reference to “making technical information . . . publicly available where appropriate.”

Freddi Goldstein, a spokeswoman for Mayor Bill de Blasio’s office, told Powles, “Publishing the proprietary information of a company with whom we contract would not only violate our agreement, it would also prohibit other companies from ever doing business with us, which would prevent us from trying innovative solutions to solve everyday problems through technology.”

Two recent cases in New York City, one dealing with predictive policing software and another related to surveillance of Black Lives Matter activists, suggest the public’s right to know trumps proprietary information. As Sonia Katyal, co-director of the Berkeley Center for Law and Technology, argues in a new paper, intellectual property laws that protect government-purchased software should not outstrip civil rights concerns.

No municipal or state government has gone as far as Washington has in attempting to rein in algorithms, says Narayan. He expects the proposals to influence other state and city legislatures.

“Lawmakers in other states are likely to be looking at this because I think that they’re making up their minds,” says Narayan. “The number of people that signed on to this bipartisan bill shows that there is a real interest. That in and of itself shows that people see this as something important and worthwhile.”
