Unpacking the Black Box: MFIA Tackles Algorithmic Accountability in Connecticut

Federal, state, and local governments and agencies increasingly use algorithms to influence or determine actions, policies, services, programs, employment, contracting, rulemaking, budgeting, and resource allocation. Algorithms not only reflect human bias; without proper checks and procedures, they can also systematically compound that bias to the further disadvantage of marginalized populations. While algorithms may not be wholly determinative in making such decisions, they exert a notable influence that must be subjected to scrutiny. Yet despite this increasing governmental reliance on algorithms, laws and regulations ensuring algorithmic accountability are almost non-existent, and the public remains uninformed about, or unaware of, the role algorithms play in federal, state, and local government.

MFIA’s Algorithmic Accountability Project seeks to evaluate the current level of transparency around the use of algorithms in Connecticut state and local government in order to inform legislative reform. With the ultimate aim of enhancing governmental transparency and accountability around decisions that affect constituents, the Project focuses on predictive and decision-making algorithms that, at a minimum, facilitate human decision-making.

Solutions to this problem range from requiring the disclosure of algorithms or algorithmic use under Freedom of Information Act (FOIA) statutes to establishing regulatory bodies. Government agencies could also be obligated to regularly issue algorithmic impact statements, similar to environmental impact statements. Any solution, however, requires a full appreciation of the problem at hand. To that end, the Algorithmic Accountability Project offers two useful case studies that illustrate the different degrees to which algorithm use is shielded from public scrutiny.

The Case Studies

Our research found that many government agencies in Connecticut could be using algorithms, but algorithm use in two agencies, the State Department of Education (SDE) and the Department of Children and Families (DCF), caught our particular interest. From the outset, we decided to use FOIA requests to (1) gain more information about any governmental algorithm use and (2) glean from the disclosures the current level of transparency around that use. However, because SDE and DCF present different degrees of information asymmetry, we took different approaches to the respective FOIA requests.

In the case of SDE, there is a complete lack of transparency regarding its algorithm use, prompting an exploratory FOIA request; in the case of DCF, which has publicly disclosed instances of algorithm use, we opted for a more targeted request. Both FOIA requests, however, sought the same basic information: whether algorithms are being used at all, whether the agency has taken active steps to measure any disparate impact resulting from their use, how the algorithms are audited, and how they are procured.
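To make concrete what "measuring disparate impact" could involve, the minimal sketch below applies the four-fifths rule, a common screening heuristic drawn from employment-discrimination analysis, to selection outcomes. The data, group labels, and function names are purely hypothetical illustrations of ours; we do not know what data or methods, if any, these agencies actually use.

    # Hypothetical illustration only: compare each group's selection rate to the
    # most-favored group's rate and flag ratios below 80% (the "four-fifths rule").
    from collections import Counter

    def selection_rates(outcomes):
        """outcomes: list of (group, selected) pairs -> {group: selection rate}."""
        applicants = Counter(group for group, _ in outcomes)
        selected = Counter(group for group, chosen in outcomes if chosen)
        return {group: selected[group] / applicants[group] for group in applicants}

    def four_fifths_flags(rates, threshold=0.8):
        """Flag groups whose selection rate falls below 80% of the highest rate."""
        top = max(rates.values())
        return {group: rate / top < threshold for group, rate in rates.items()}

    # Made-up lottery outcomes: (applicant group, received a seat?)
    outcomes = [("A", True)] * 60 + [("A", False)] * 40 \
             + [("B", True)] * 35 + [("B", False)] * 65
    rates = selection_rates(outcomes)
    print(rates)                     # {'A': 0.6, 'B': 0.35}
    print(four_fifths_flags(rates))  # {'A': False, 'B': True} -> group B is flagged

In practice, of course, the FOIA responses would determine which outcomes and demographic fields, if any, such an analysis could even be run on.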

The FOIA Request to SDE

Connecticut relies on various school choice lotteries to place children into public schools, magnet schools, and technical schools. Importantly, the school choice lottery system is the state’s primary vehicle for combating school segregation, pursuant to the landmark 1996 Hartford desegregation case Sheff v. O’Neill. Over the years, however, there have been allegations of inequity in the system. There have recently been complaints of 15,000-person waitlists for popular schools and of racial caps that prevent students of color from attending their preferred school even when seats are available. Moreover, concerns about the effectiveness of the Hartford desegregation efforts resulted in a new Sheff v. O’Neill settlement in 2020 that ultimately replaced racial and ethnic factors with socioeconomic factors in the magnet and technical school lotteries.

Public documents acknowledge that at least some of the lotteries are “computer-based.” This prompts many questions: to what extent are these decisions automated? What biases do such systems hold? Where is the accountability? These questions carry greater force not only because of the civil rights backdrop of the school choice lottery system, but also because the system may expand in the near future. In February, Governor Lamont proposed expanding the school choice lotteries to two more cities, Danbury and Norwalk. Accountability is needed both for the students already affected and for those who soon will be.

Though there has been some acknowledgment that the lotteries are “computer-based,” it is ultimately unclear to what extent algorithms are being used, or whether they are being used at all. The same is true of most government agencies, not only in Connecticut but across the country. Moreover, because any algorithm use could affect school desegregation, and thus the educational opportunities of students of color, there is real urgency to uncovering the truth about algorithm use in Connecticut’s school choice lottery system. For these reasons, we submitted an exploratory FOIA request to SDE that broadly seeks information about the following: the nature of the procurement of any algorithm used in the school lotteries; the effectiveness of any algorithms used; the disparate impact of any algorithms used; the data inputs any algorithms take and how those inputs are weighed; the validation procedures any algorithms are subjected to; any training materials for the algorithms; and the source code of any algorithms used.

The FOIA Request to DCF

DCF provides family services, including identifying potentially at-risk children and intervening as needed. Under its contract with two companies, Eckerd Connects and MindShare Technology, DCF uses an algorithm called Rapid Safety Feedback (RSF) to predict child safety outcomes, which licensed clinicians then use to review cases and determine whether follow-up is needed. Another component of the agency’s child protective services is “Careline,” a telephone hotline where people can report child abuse and neglect. It is unclear whether any algorithms used for Careline are part of the RSF model or a different set of technologies.

RSF’s use of predictive analytics has come under sharp criticism for the secrecy surrounding the workings of the algorithms themselves. In 2018, the Illinois Department of Children and Family Services terminated its contract with Eckerd and MindShare early after the program produced both false positives and false negatives. Part of the agency’s decision stemmed from Eckerd and MindShare’s refusal to reveal details about what goes into their formula, even after the deaths of children whose cases had not been flagged as high risk. The agency called the program unreliable: high-profile child deaths occurred with little warning from the software, while thousands of other children were marked as needing urgent protection. Child welfare agencies in other states, including Louisiana, Maine, Oklahoma, Tennessee, Ohio, and Indiana, have also used RSF, although Illinois was the most high-profile case.

Our goal with the FOIA request was to learn whether the concerns raised in Illinois also apply to DCF. Even though we know DCF uses RSF, we do not know whether it works the same way it did in Illinois. For example, even if the two states were using the same set of algorithms, program implementation and outcomes could vary based on agency personnel discretion and other factors. So although we know that algorithms are being used, we still lack crucial information about how the technology is being used. As with the FOIA request to SDE, our goal was to shed light on the agency’s practices, especially because the limited information we do have is concerning. Our FOIA request focused on the following: the nature of the procurement and agreement processes with Eckerd Connects, MindShare, and similar entities; how DCF uses such software and services; what data inputs the algorithm takes and how it weighs them; how accurate the predictive outcomes are and whether there has been any assessment of disparate impact; and correspondence between DCF and vendors, along with any materials, such as employee training documents, provided by the vendors.

Next Steps

As the Project heads into the summer, we are submitting a third FOIA request to the Connecticut Department of Administrative Services (DAS), which announced in 2019 that it would begin using artificial intelligence to sort through the first round of applicants for state positions. Since then, there has been no further public information about its implementation, so our FOIA request seeks to shed light on any possible discrimination or bias in hiring and employment decisions at the state level. This lack of information mirrors the SDE case study, so our request focuses on the following: the nature of the procurement and agreement processes with any relevant entities; how DAS uses such software and services; what data inputs the algorithm takes and how it weighs them; whether DAS looks for disparate impact; any employee training documents provided by the vendors; and any documents regarding the appeals process.

Going forward, we plan to use the information we receive from the government to shed light on how Connecticut deploys algorithms in its decision-making and to better understand any issues or discrepancies arising from current practices. Our hope is that this work will help inform potential legislation on algorithmic accountability in Connecticut. Possible regulatory mechanisms include requiring algorithmic impact statements; requiring auditing and oversight; and/or strengthening Connecticut’s FOIA standards to mandate the disclosure of algorithms. Ultimately, we hope to achieve more accountable and transparent algorithmic practices across the state, helping to pave the way toward a national standard.