As recently reported in The Chronicle of Social Change, Los Angeles County and other jurisdictions have been developing a new tool to help assess the risk to children who have been referred to the child welfare system.
Los Angeles, like many other jurisdictions, is using an earlier generation of risk and safety assessment tools. But at least in my experience in the District of Columbia, few professionals have any faith in the utility of these tools.
I am referring to the so-called Structured Decision Making (SDM) model for child protection, produced by the National Council on Crime and Delinquency (NCCD). The model includes six different assessments conducted at different stages of a case.
According to NCCD, the model is in use in 33 jurisdictions in the United States and in a total of 48 jurisdictions worldwide. As recently as May of this year, Texas adopted the SDM assessments for investigations and is planning to adopt the remaining assessments for ongoing services in the fall.
SDM relies on checklists that social workers complete based on their interviews and observations, as well as any other data they are able to obtain. Using this information, the computer applies “actuarial science” to generate a risk level or a recommended action.
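To make the mechanics concrete, here is a minimal sketch of how a checklist-to-risk-level actuarial score can work. The item names, weights, and cutoffs below are invented for illustration only; they are not the actual SDM instrument.

```python
# Hypothetical actuarial checklist scorer. Items, weights, and
# thresholds are made up for illustration -- not the real SDM tool.

def risk_level(answers: dict) -> str:
    """Sum the weights of every 'yes' answer and map the total to a level."""
    weights = {
        "prior_referral": 2,
        "caregiver_substance_abuse": 3,
        "child_under_five": 1,
        "domestic_violence_in_home": 2,
    }
    score = sum(weights[item] for item, yes in answers.items() if yes)
    if score >= 5:
        return "high"
    if score >= 2:
        return "moderate"
    return "low"

print(risk_level({
    "prior_referral": True,
    "caregiver_substance_abuse": True,
    "child_under_five": False,
    "domestic_violence_in_home": False,
}))  # 2 + 3 = 5, which maps to "high"
```

Note that a tool of this shape can only register whether an item is checked, not why, which is the limitation the critique below turns on.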
Unfortunately, these SDM assessments rely on yes-or-no questions that do not capture many important factors. Speaking to The Chronicle’s Holden Slattery, Los Angeles social worker Ruby Guillen complained that the yes-no framework fails to distinguish between a mother experiencing depression after her husband’s death and a mother suffering from schizophrenia and refusing medication.
She added that the instrument also fails to incorporate important risk factors, including the risk posed by secondary caregivers such as a mother’s boyfriend—a major concern in child welfare.
The head of child welfare in Los Angeles, Philip Browning, expressed concern that, because the process is manual, workers could manipulate it to produce a desired result. I know from personal experience that this is easy to do, especially since the checklists include poorly defined multiple-choice answers that leave workers no choice but to rely on their own judgment.
Alternatively, the worker can simply override the system-generated result. However, I discovered that neither was necessary in the District of Columbia. In one case, I decided to leave the system-generated recommendation in place even though we had already decided to override it, just to see if anyone cared. Not surprisingly, nobody did.
So why are jurisdictions using these instruments if they are useless and ignored? In the District of Columbia, the SDM is so tightly integrated with the entire case management process within the electronic database that it might be costly to remove it. But if the costs in time spent by social workers were calculated, removing SDM would probably save money.
It is possible that in jurisdictions where social workers are not required to have a Master’s Degree or a license, the SDM provides some needed guidance in making decisions. But integration within the data system does not explain why the state system in Texas would be adopting the tool today.
Automated risk assessment algorithms (whether old or new) are supposed to improve upon the ability of the social worker to make child protection decisions. But the SDM risk assessments do not provide any such improvement in my experience. Whether a new generation of risk assessments can do better remains to be seen.
The risk modeling experiment conducted in Los Angeles used a wide variety of data including parents’ mental health and substance abuse history. Given privacy rules, I am not sure how the Department of Children and Family Services is going to get access to all this information.
Regardless of the answer to that question, I hope that those developing the new tools will field-test them extensively to make sure that they don’t suffer from the same problems as SDM.
But let’s not wait until new tools are adopted to throw out SDM and similar first-generation risk assessment tools, at least in jurisdictions where they are useless and ignored.
There is a real cost to having workers complete useless assessments, and it is measured in time: The time it takes workers to complete them; supervisors to review them (if they bother to); and trainers to train workers to use them.
Worst, perhaps, is the effect on worker morale of having to complete busywork when there is so much real work to be done.
This column was published in the Chronicle of Social Change on July 30, 2015.