Big data, little kids: How technology is changing child welfare | LIVE STREAM



Good afternoon, ladies and gentlemen. We're going to get started. Hi there, good afternoon. My name is Naomi Schaefer Riley. I'm a resident fellow here at the American Enterprise Institute, and I'm thrilled to welcome you today to what I expect is going to be a fascinating and very productive conversation about the ways that our child welfare system can make better use of data.

In a recent interview with Bloomberg News, Steve Ballmer, who spent 14 years as CEO of Microsoft, described visiting a county department of mental health whose organizational system consisted of, quote, hundreds of sticky notes in a variety of colors, tracking things like different types of programs for different ages. Not surprisingly, the whole sticky-note system has failed to produce the results that we would like. Dysfunction within a single agency is only the beginning, however. In an era when a variety of different public and private agencies could be helping our most vulnerable families, it is unfortunate to find that our computer systems are rarely even speaking to each other. This makes it difficult for us to know which children are most at risk and what tools we even have to help them. The fact that data on education, child welfare, public housing, and other kinds of assistance to the poor are siloed in different agencies, different cities, and different counties, even within a single state, makes it difficult to keep children safe at the front end of the child welfare system and to help find permanent, loving homes for them if their parents are unable or unwilling to care for them.

But there's hope. Today I'm excited to introduce two teams of researchers who are on the cutting edge of using data to improve our child welfare outcomes. First we'll be hearing from Gian Gonzaga and Thea Ramirez. Gian is a senior consultant of research and data analytics at Adoption-Share and the director of data science for the Chan Zuckerberg Initiative. He co-created the first
iteration of the Family Match assessment, an online tool for matching prospective families and children while predicting successful long-term placements for adoption. Thea is the founder and CEO of Adoption-Share, a national website designed to bring efficiency and transparency to the adoption process; she was also named a Young Influencer by Catalyst. After that I will come up, we'll have a tiny break, and we will then hear from our second team of researchers, Emily Putnam-Hornstein and Rhema Vaithianathan (did I get it?). Emily is an associate professor of social work in the School of Social Work at the University of Southern California. She directs the Children's Data Network, a data and research collaborative focused on the linkage and analysis of administrative records. Rhema is a professor in the School of Economics at Auckland University of Technology and the co-director of the Centre for Social Data Analytics. Rhema recently led an international research team in developing a predictive risk modeling tool to assist child welfare call-screening decisions in Allegheny County, Pennsylvania. After we hear the two presentations, we're going to bring everyone together for a panel discussion, and then we will open it up for questions from the audience. So with that, I'm going to turn it over to Gian and Thea. Please enjoy.

Hi, thank you, and good afternoon. It is such an honor to be speaking here today. Thank you to Naomi and AEI for organizing this amazing event. The focus of our presentation, really in the theme of "big data, little kids," is showing how the Adoption-Share Family Match program is changing child welfare today. But first, a couple of introductions, and Gian, I'll have you come up here in just a second. My name is Thea Ramirez. I am the founder and CEO of a nonprofit organization called Adoption-Share. Our mission, why we exist, is to leverage technology to help create families, and
one of the ways that we're doing that is through our Family Match application, which we're super excited to share with you. I have a master's degree in clinical social work from Savannah State University, and I am super passionate about seeing change and reform come to the child welfare arena. Dr. Gian Gonzaga, would you like to introduce yourself?

Sure. I'm Dr. Gian Gonzaga, and I've been working with Thea and Adoption-Share for about five years now. My background is in social psychology; I have a degree from the University of California, Berkeley, and I'll be back talking to you in a few minutes.

So again, the focus of this will really be on our Adoption-Share Family Match program, but first, to set the stage: what is the problem that Adoption-Share is trying to solve through our Family Match program? I want to talk to you a little bit about the numbers right now in child welfare. As of August of 2018, our child welfare programs served a little under seven hundred thousand children. Four hundred forty-two thousand of those children are in formal foster care. A little under a quarter, about a hundred and twenty-three thousand of these children, have a designation or goal of waiting to be adopted, and as of right now in this fiscal year we're about 40 percent of the way to completing that goal by the end of June. Now, if these numbers sound familiar to you, that's because they are. In fact, when you look at the AFCARS data over the past decade, what you'll find is that every single year we're hitting about the same mark: we're hovering around getting about 50 percent of our kids with a goal or designation of waiting to be adopted actually adopted. This has been happening for well over a decade, and anytime you hit that kind of statistical consistency, it's important to ask the question why
and, beyond the why, the how: what is child welfare doing? What are the chief strategies right now that we're using to help kids get into permanency through adoption? What we have coined the child-centric approach is really the number one strategy that the United States is leveraging to get kids into permanency through adoption. Child-centric, just for a quick definition, is any strategy or initiative that puts the onus of responsibility on a child to, quote-unquote, sell him- or herself to a prospective adoptive parent. We've been doing this for almost two centuries, beginning with the orphan trains in 1850, through which we placed about two hundred and fifty thousand children in our country by taking kids who were on the streets of industrialized cities, putting them on trains, putting them up on platforms, and hoping that a family would take that child in. In fact, a quick FYI for you: the term "put up for adoption," which we now find offensive, actually harkens back to this era in which we literally put kids up on platforms. By 1919 there was a glimmer of hope with Dr. Jessie Taft, who wrote the first publication arguing that perhaps there was a more scientific way to get kids into families than just putting them up on these platforms. Though her research was limited to investigating the child attributes at play, and she didn't go into the family side of the problem, it was a glimmer of hope, and that was a hundred years ago. By 1929 we saw the end of the orphan trains. By 1945 we started seeing adoption exchanges bubble up across the country to publish pictures of predominantly African-American youth for whom families needed to be recruited, and by 1960 these same sorts of publications were being used for all hard-to-place kids. Then there's a jump to 2001, when we digitized this whole process, and now we have
listings of children waiting for adoption publicly accessible at any time, by anyone: regardless of your motivation or intent, you can access photo listings of children that are available today. Meanwhile, you actually have families that are waiting. There is incredible research done by an organization called Listening to Parents, which worked with Harvard University to identify that there are actually two hundred and forty thousand families that every single year call their local child welfare agency requesting information on how to adopt a child from foster care, but less than four percent are successful in achieving that goal. Now, that was in 2011, eight years ago, and our hypothesis is that the number of interested families has more than likely tripled with the advent of a number of organizations doing incredible work recruiting more and more foster families. So what unfortunately keeps getting reframed as a resource problem (if we just had one more family, if we could just find more families, we could solve this crisis) is really, at its core, a misplaced and incorrect diagnosis. The primary problem today in our foster care system is the inability of the current system to connect our waiting children with the families who can actually solve this problem for them, and either provide a stable placement while they're in foster care or a forever home through adoption. That is the primary problem that we at Adoption-Share feel best equipped to solve through the Family Match program.

Just a little bit about the solution. Family Match was created to be a data-driven application designed to promote permanency for youth through compatibility matching, kind of building on Dr.
Taft's publication back from 1919, but instead of looking only at the child attributes, also looking at the family attributes: what are the things at play that either promote disruption or success in foster and adoptive relationships? Beyond that, the ability to leverage this technology statewide, allowing a state to see all of its resource families in one centralized location, became the foundation for it.

Quickly, before I proceed, I just want to explain how it works. Family Match is actually a URL you can go to right now if you'd like. If you're a family with an approved home study, able to foster or adopt a child, you can create an account on Family Match and complete two compatibility assessments, which Dr. Gonzaga will talk a little bit about today, and caseworkers upload and enter the child. Once these assessments have been completed, both the family and the child enter the matching pool, and from there, when the match happens, our team actually follows that match, from match to placement and from placement to a finalized adoption. We have a series of touch points with both the caseworker and the family to assess how things are actually going, and we use that outcomes data to iterate on our current matching model, which again Dr. Gonzaga will share more about. Just quickly, before I hand it off to Dr.
Gonzaga: one of the paramount things about Family Match, this whole approach, if I can instill anything in you today, is that we are really flipping the script on the way America is matching kids and families, by being centrally family-focused. Families are our solutions. If we want stable placements, if we want families to actually enter this arena and provide foster and adoptive resource options, then we have to be family-focused. That is why we have carved out a broad swath of our technology for inviting families to demonstrate and share about themselves, to talk about their families, and to have some skin in the game, when, after all, this is their family that's at stake as well. The other core components, which Dr. Gian Gonzaga will share more about, are how Family Match is equipped to be a centralized repository (we'll speak to some of the strengths of that) and how it leverages the power of data and predictive analytics to make the most appropriate matches. So here to join me is Dr.
Gian Gonzaga.

Thank you, Thea. As we said, I'm going to talk a little bit more about the technology and the data applications behind this. One of the most exciting things when I was first introduced to this idea was how we could leverage technology to improve a system that already has all the core components needed to match families and children more effectively. I think one of the most underrated and potentially genius ideas here was bringing all of the resources you have into a centralized repository. In many locations now, you have families and children placed into a district or an agency, which severely limits the number of individuals you can match. And if you think about the central problem, it's not about the number of children you have in the system or the number of families you have in the system; it's the overall number of matches you can make. The more matches you can make, the more flexible a system you have, one that allows you to make trade-offs about which children are going to be placed with which families. Let's say, hypothetically, you have an agency in Florida, where we're active right now, that has 50 families and 50 children. They can make two thousand five hundred potential matches. If you had twenty agencies in the state of Florida, with a thousand families and a thousand children, you could now, in one centralized system, make a million matches, which creates a lot more flexibility. That scale allows you to match children and families more accurately and more effectively. What I'm going to turn to now is how you do that: what was the method we used to figure out what that system looks like? One of the things I did in my past was help build the matching system at eHarmony, and one of the things we based that system on is that there is a tremendously long literature on what makes relationships
successful over the long haul. There's an equally long scientific literature on the attributes that make families, that make foster care and adoptions, successful over the long term: when do they disrupt, and when are they successful? So our initial idea was to look back at the existing research literature, take advantage of the expertise that is already out there, and look at the attributes that have been scientifically validated to predict long-term disruption. These are just some of them: we know, for example, that personality makes a difference, that children's attributes make a difference, and that the context parents come in with makes a difference. Really, these attributes broke down into four major categories. First, and most well established, are the attributes of the child: children who have behavioral problems tend to do worse, and children who go through lots of placements tend to do worse over time. What has also been shown in the literature is that there are attributes of the family that make them stronger or weaker candidates to be matched with: do they have resources in their background, how committed are they to the relationship, do they have a support structure underneath them that is going to be able to help them? There is also a lot about the situation: what resources are actually available to the families to get through what is going to be a huge transition when they bring a new child into the household? And most importantly for us, there are attributes of the unique match between families and children that predict whether or not they're successful. For example, it's been shown that kids who are exceptionally sensitive, sort of shy, have to be placed into what are called attachment-sensitive families, families that are extra sensitive to their emotional needs, and if you don't place them there, their outcomes are
going to be a lot worse. It's also been shown, for example, that if you place a kid high in negativity into a household with a mother who tends to be a very authoritarian parent, that leads to bad outcomes. Now, that's not an attribute of the child or an attribute of the parent; both of those things can be successful if the placement is made correctly. It's an attribute of the match. So we took this to develop a matching system that allows us to take in the attributes of the children, the families, and the match, and predict what is going to be most successful.

I'm going to tell you a little bit about how this works. We currently have an algorithm that takes care of two things. The first is what we call deal breakers: what are the things that families are unable or unwilling to take on, because they don't have the resources or it's just not their preference? If a family does not want to take care of a teenager because they don't feel comfortable doing that, then why should we show them teenagers? We should screen through and figure out which of the children actually fit the attributes they have. This narrows the whole system down, so that instead of screening through a thousand matches, you screen through a hundred matches to see which ones are most likely to be successful. On top of that, we take the matches that are left and we score them. We look at the attributes of the child, the attributes of the family, and the attributes of the match, and we come up with a score that ranks their likelihood of disruption. Now, these are just hypothetical scores; this is not the real scoring system that we have. But it allows you to see which of these families are the most likely destinations for children to have the greatest likelihood of success. Now, this does not mean they will be successful in that
situation. It just means that, all other things being equal, they are more likely to be successful with this family than with another family. And it allows the caseworker to look at the most likely families, the ones that are most compatible, and then do their due diligence about whether or not that family is actually a good destination for that child. The most powerful thing, and the thing that our central repository and collective data system allows us to do, is to optimize the solution. When you are thinking only about where to place one child, you're not actually thinking about the trade-offs: if I take this child and put them with the best family, maybe that means other children in the system aren't going to be placed as well, or aren't going to have families at all. If a child has only a few options, you may want to take care of that child first, and let children with more options go with a family that isn't their highest compatibility score. This is not something we've currently put into the system, but as we scale and leverage the power of data over time, it will get stronger and stronger. Now, the great thing about this system is that even though we have based all of this on the existing literature, so we feel very confident that the attributes are the ones most critical to predicting long-term success, we know that the weightings of this algorithm are not optimal right now, because we have not yet looked at what the outcomes of those children are going to be. Family Match follows these children, follows these matches, follows the families that are created, and figures out which ones disrupt and which ones are most satisfied over time. Even though we do not currently have the data to do this, we will follow these families and retrain our algorithm models on actual outcomes, which allows us to be flexible to the type of relationship that we're trying to build.
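The two-stage screen described above, hard deal breakers first and then a compatibility score over the surviving pairs, can be sketched in a few lines of Python. Everything here is illustrative: the attribute names, thresholds, and weights are hypothetical stand-ins, not Adoption-Share's actual assessments or scoring model.

```python
# Minimal sketch of a two-stage matching screen: hard "deal breakers" first,
# then a weighted compatibility score over the remaining family-child pairs.
# All attributes and weights are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class Family:
    name: str
    max_child_age: int          # deal breaker: oldest age the family will accept
    open_to_disability: bool    # deal breaker
    support_score: float        # 0-1, strength of the family's support network
    attachment_sensitive: bool  # extra responsiveness to a child's emotional needs


@dataclass
class Child:
    name: str
    age: int
    has_disability: bool
    num_prior_placements: int
    highly_sensitive: bool


def passes_deal_breakers(f: Family, c: Child) -> bool:
    """Stage 1: drop pairs the family is unable or unwilling to consider."""
    if c.age > f.max_child_age:
        return False
    if c.has_disability and not f.open_to_disability:
        return False
    return True


def compatibility(f: Family, c: Child) -> float:
    """Stage 2: score surviving pairs; higher = lower predicted disruption risk.
    Weights are made up for illustration; a real system would retrain them
    on observed placement outcomes."""
    score = 0.0
    score += 2.0 * f.support_score                     # family attribute
    score -= 0.5 * c.num_prior_placements              # child attribute
    if c.highly_sensitive and f.attachment_sensitive:  # match attribute
        score += 1.5
    return score


def ranked_matches(families, children):
    """Build the full N x M candidate pool, prune it, and rank what's left."""
    pairs = [(f, c) for f in families for c in children
             if passes_deal_breakers(f, c)]
    return sorted(pairs, key=lambda fc: compatibility(*fc), reverse=True)
```

With 50 families and 50 children the raw pool is 50 x 50 = 2,500 candidate pairs, and a statewide pool of 1,000 of each is 1,000,000, which is why the deal-breaker screen runs before any scoring or caseworker review. The ranked list is decision support, not a decision: a caseworker still vets the top-scored families before a placement is made.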
Foster care is going to be different from adoption, and we need to account for contextual conditions that may vary from state to state or agency to agency. So, like Netflix, like Facebook, we will learn our way into making this algorithm better and stronger over time. And with that I'm going to turn back over to Thea to talk about... oh, sorry, I have one more slide. This leads to what we believe is going to be a virtuous cycle for our matching system. As we bring families into a centralized repository, we're able to make better matches more efficiently and place more children. By following them across time, we can retrain our model, and those successes lead to better word of mouth and an easier way of bringing people into the system and acquiring new families. As more children come into the system, it gets more efficient at making better and better matches, which leads to better outcomes, which leads to more kids coming out of the system, which leads to a stronger system that should draw more people in. And with that, I'm going to turn back to Thea, who can talk about some of the places where we've piloted this so far.

So, fiscal year 2018, which we're just about to conclude here at the end of June, has been quite an exciting year. We actually launched two pilots around the same time in July. Virginia was our first official statewide pilot, backed by the support of the General Assembly and the Virginia Department of Social Services, and around that same time we also launched a pilot in the state of Florida. Overachievers much? Just kidding. Okay, so in Virginia we really started our focus as a grassroots movement. Although we had strong support from the General Assembly and the Virginia Department of Social Services, Virginia, as a Commonwealth, is a decentralized state, meaning that we really had to take it on the road and pitch the pilot opportunity to
a number of DSS offices; in fact, there are about 120, so we had our work cut out for us. In that timeframe, within the past 10 to 11 months, we were able to successfully onboard about a hundred child-placement agencies, and about 50 percent of those are DSS offices, county-run offices. We also welcomed about 400 families and onboarded about 294 cases, kids who have had their assessments completed on their behalf by a caseworker. But although we had the support of the DSS and the General Assembly, and although we had a lot of enthusiasm and a lot of diversity among the families... I'll just pause here and tell you that, even piloting in two states right now, our family pool statistics have remained virtually the same. In both states we're seeing that 50 percent of our families are open to an older youth, 51 percent are interested in kids with disabilities, 80 percent are open to more than one child, 64 percent are interested in either gender, and 59 percent are open to all races. So in fact this narrative that continually gets pushed out, that we have kids that some people wouldn't want to be matched with, is quite frankly not true, and those families are just waiting there. So although we had that large diversity of families and the enthusiasm of the stakeholders in Virginia, we've really only achieved a handful of matches, placements, and finalized adoptions within this past year.

Florida, on the other hand, was a different story. In Florida we actually had more of a grass-tops scenario. We partnered with an organization called the Selfless Love Foundation, which made it possible for us to target the leadership at the CBCs, the community-based care organizations in Florida, and by getting the buy-in of the workers in each and every one of those agencies, we were able to bring on 95 percent of the CBCs within the first couple of months alone. That sort of environment and participation actually got us over the hurdle.
Within ten months we successfully matched a hundred and sixty kids. That's twenty percent, okay, twenty percent of the hardest-to-place kids in ten months, and that's with only about 50 percent of the CBCs using it having made at least one match. So if you think about the potential there for a moment: with every single CBC participating, as we continue our work in Florida, we could more than likely double that number in a very short amount of time. And the time to match, which is something that is not currently being tracked by either the state or the federal government, was just over two and a half months in Florida. That means the child had a TPR, had been waiting, and was then registered on Family Match, and their life cycle on Family Match, from registration to match, took about two and a half months. Now, why is that significant? A lot of our children in Family Match are the hardest-to-place kids. These are the kids who get to TPR without an identified family; these are kids who have been in the system for over two years without an identified family, and their journey in foster care started way before that TPR. So these kids have been in there the longest, and some of them are getting matched in as little as 18 days. As Gian has shown us, through the power of a centralized repository, the power of turning on a light switch and allowing placement workers to see the totality of all of their resource options, we can already prove out the efficacy on that front right away, which is really very exciting. Forty-six percent of our placements in Florida have celebrated finalized adoptions, and, as Gian said, we expect and anticipate that we will have some disruptions. In fact, right now we're heartened to know that about 10 percent is our disruption
rate currently, which, compared to the expected national reported disruption rate of between 15 and 25 percent, is encouraging. These are just some of the exciting things that have happened in Florida. In terms of the data we have on kids and families: we have over 1,100 children that have been registered lifetime (some of those kids have obviously been matched or adopted, or something has changed in their goal), and we've had 1,200 families registered lifetime, so we see a lot of potential to continue to grow.

But obviously, as I mentioned, we've been doing these two pilots, one in Virginia and one in Florida. What's the difference? I just want to close with a lessons-learned opportunity, in terms of what we have found to be the gas on the fire, the accelerator, to getting more and more kids into permanency, and what landscape is most ideal for this kind of technology and innovation to perform at an optimal rate. The first takeaway is that solving the connection problem is actually achievable. This is something we can actually do in our lifetime. We don't have to keep seeing AFCARS reports showing a 50 percent success rate; we can actually move the needle by employing this sort of technology and data. The second takeaway is that the structure of a state can greatly impact the speed at which we can resolve these connection issues. For states that have more of a county-run or county-administered setup, which many states in this country do, our hypothesis is that they're going to have a tougher road in terms of strengthening those connections and allowing the sharing of families from one county to the next to meet a need, whereas a state like Florida, although it's privatized, with CBC after CBC across the state, has kind of a
backbone that allows that sort of sharing and incentivizes collaboration between one region and another. The third is that data is absolutely key, as Dr. Gonzaga explained earlier, but also in terms of our own learnings: being able to leverage what we've learned and hand it over to our stakeholders in the states we're working in has actually informed better practices. In Florida, one particular agency was able to match almost 40 percent of their kids in as little as six months; seeing the efficiency of this, they rethought their own process flow, how their internal organization operated, to keep up with it. And in Virginia, the data has been able to build out a better narrative and story around three big challenges, or problem statements. One is that when families don't actually have access to their home studies, when a family is not given the ability to maintain ownership of that central, precious record, that family's journey in trying to adopt or foster is a slower road. Another is that collaborative relationships and cooperative agreements are huge, and can be game changers in a state where a lot of emphasis and power goes to the county level. And lastly, line delineation. In some states you have a single foster-to-adopt line, while in Florida we were able to see that families could be foster families, they could be adoptive families, or they could be dually licensed. In states that have just one centralized foster-to-adopt line, because foster care matching is a big deal and we have more demand for foster families as a resource, workers are more incentivized to just work with their contingent of foster families. So if you raise your hand and say, "I'd like to adopt," but really 90 percent of my problem as a caseworker is finding foster homes,
I might just put you at the bottom of the list, or maybe not even contact you or use you as a resource, because that's not my most pressing need. So oftentimes we see that kids who are waiting to be adopted in certain setups across our country actually have a lower likelihood of achieving those goals because of that structure.

Then lastly, in conclusion, and this might seem a little ironic because I am speaking at AEI and we're talking about data and technology, I'd be remiss if I didn't point out that technology is never the silver bullet. You can have the most amazing program, but if you're putting it on top of a broken process, it's not going to equal improved efficiency; it's not going to get you the goals that you need. And I think that's an important thing, and also a segue to my last lessons-learned point: the role and work of nonprofits is so imperative in this space, particularly as it relates to technology. I can't tell you enough, as a nonprofit founder and CEO, my desire is not to sell a license to a state, to sell a product and walk away and give you maybe one or two enhancements a year. Our goal is transformation. So for those of us in the child welfare community, to really embrace and rally around our nonprofit organizations that are moving the needle is paramount in making sure that we're able to build and posture our solutions to be unified, not fragmented subsets of products across our country. That work is incredibly important, and we need your support. We can't do this without the time, talent, and treasure that many of us have access to and could give to move the needle. So with that, thank you so much.

Thank you. I'm going to give you one quick note for those of you laypeople in the audience: TPR is termination of parental rights. We're going to try to stick to as few acronyms as possible so everyone has access to this, and
with that, I'm going to call up Emily and Rhema to talk about their research. Thank you so much for inviting us; it's been such a pleasure. Now, you're going to wonder why a New Zealander is here working in Allegheny: I just love pierogies and beer and I can't get enough of them. If you are not from Pittsburgh, a pierogi is this wonderful concoction of mashed potato with dough wrapped around it, because you can never get enough carbohydrates. That's my story and I'm sticking to it, so thank you very much. We've been developing the Allegheny Family Screening Tool now for over four or five years; I led an international team which won the RFP to do this. So I'm going to walk through very simply what this tool does and what the impact evaluation of it found, and then Emily is going to speak to the potential of this type of work in bigger settings and whether it's scalable. One of our favorite people is Erin Dalton, who is a deputy director at Allegheny County, and she loves a quote. She always sort of teases us as academics that we are constantly doing quotes, so she wanted a quote herself, and this is the quote that I think is the real million-dollar shot: "You know what's scarier than predictive risk modeling to help make important decisions? It's the way that we're making those decisions right now." For most people who live in the world of child welfare and child protection data and our child protection systems, we know that we're not doing the best that we can. We know that if someone gets called in on Monday and gets Dave, they get a different result than if they get called in on Friday and get Janet. We know there's lots of unintended, unwarranted variation in how we're looking after our families. And the striking thing is the sheer number of kids in the U.S. that
are being subjected to the child protection system. One in every three American children is investigated before they turn 18. One in three. Now, if no one in this room has been investigated, that tells you we have communities out there where everyone is investigated. One in two African-American kids is investigated before they turn 18. This is not just being called in; this is someone knocking on their door, talking to mom, talking to the children, maybe even talking to the teacher, about whether there is a case of child abuse or neglect. When we developed the child protection system, we never thought it was going to sweep in kids at such a rate. And why does it happen? It happens because every time there's a tragedy on the front page, and when the opioid crisis hit, we open the front gates of our child protection system and the calls come in, and they flood in. So behind the Allegheny Family Screening Tool there are two challenges that we're trying to deal with. One is that when calls about child abuse and neglect come in, we know that sometimes we're not paying attention to kids we should have paid attention to. How do we know that? Because we're constantly getting children in hospitals, in ICUs, with broken bones, and when you look at the history, you see that people were calling about those kids and we were, what's called, screening them out; that is, not doing anything about it. But we also have another problem: we have a whole bunch of kids coming into child protection where we're going out and investigating, sometimes even opening cases, on kids we really should never have gone out on. The challenge for us is which of these kids are which, and because the floodgates of child protection are so wide open, it's very hard for frontline workers to separate signal from noise. So the Allegheny Family Screening Tool is the use of algorithms, exactly like Thea and John have been doing, looking at the data we already have about the children whose
calls are coming in, to see if we can start stratifying, or identifying, the children who are at extreme risk and who are not at risk. Now, that sounds simple. It has been the most controversial project I have ever worked on; I will show you the scars, but I don't want to put you off. It was very controversial because people were concerned that this was going to exacerbate bias, that we were taking data from biased systems and we were going to exacerbate it. We were totally open and honest about everything, all the challenges, but we were also totally open and honest, as the agency was, about the huge problem that we in Allegheny had in front of us, that we as a country had, and, frankly, from some of the research I've done internationally, that we in the developed, high-income countries have with how we're opening our floodgates. So the Allegheny Family Screening Tool is a simple tool. When the call comes in, the call screening staff are the people who are required to decide whether to screen the child in, that is, go investigate, or screen the child out, that is, not do anything. Overall in the U.S.,
around 50 percent of calls are screened out. Now, with the screening tool, they get a screen, which is basically this, which tells you what the predictive risk model says is the risk that this child will be removed from home in the next two years. For people who are really interested, there was a very nice New York Times piece on this project; it's quite an in-depth article that asked lots of different voices, and I really encourage you to read it as a good insight into what the tool does. So in Allegheny, over a year, we have about 14,000 referrals to child welfare. Those are calls that come in to the county from parents, from doctors, from mandated reporters, saying we think there might be a case of abuse or neglect here. The county screens out half and screens in half. The way the Allegheny Family Screening Tool works is that when the call comes in, we have an unusual data system in Allegheny, which is integrated data: the system harvests data from a whole bunch of different systems, of which the child welfare system is only one. It can go find out whether the mom was on TANF, whether the dad had any jail history, and a whole bunch of other fields, for example homelessness history. It can harvest all those things and bring them to bear to create the risk score. The risk score is then predicting the probability that if you screen this child in, they will have an out-of-home placement within two years. The reason we chose out-of-home placement is because we wanted an outcome that was secured by someone outside the system; that is, a judge had to decide that this child was at such risk of abuse or neglect if they remained at home that the judge saw fit to say the child should be removed. So when we calibrated this model on what's called a holdout set, this is roughly what you see: on the horizontal axis are the scores that were given to the calls; on the vertical axis is the rate of placement, of removal, of the children. As you can see, of children who received a score of 1, only 2.4 percent
of those kids ended up being removed, but over 30 percent of those kids had been screened in. Historically, 30 percent of kids were screened in when there was very little evidence that they were going to end up having egregious outcomes. That's not to say that the screening in wasn't the thing that was solving this problem, but it was pretty clear. And similarly, of children who scored a 20 on our tool, half of them ended up being removed. I should say we've also calibrated this against our children's hospital data: a child who scores a 20 is 24 times more likely to be hospitalized for a maltreatment injury than children who score in the bottom, at 10 and below. Twenty-four times more likely. They are also more likely, when they become older, to be hospitalized for suicide and self-harm, about 17 times more likely. So even when we go to out-of-system data, we are finding that these are the children we should be, and ought to be, most concerned about. We've had a lot of interesting press and review of our tool, and this is sort of the balance. The ones on the left-hand side have been very positive about the fact that we've had independent ethical review, we've been incredibly transparent, and we work very closely with the community. Virginia Eubanks, in Automating Inequality, was a bit more negative, but she was negative about any use of these types of data in child welfare. So in general we've had lots of props for our approach. But it's only last month, really, that the evaluation came out. The evaluation was conducted independently of our research team; our research team is very committed to independent evaluation, and I just don't believe that we ought to be evaluating our own interventions. So the county managed the intervention, it was funded by foundations, and they put out a call for proposals. Dr. Goldhaber-Fiebert, an associate professor at Stanford in decision science, won the grant, and at an 18-month follow-up, his
first finding is that there were no unintended adverse consequences; there was no harm as a result of this tool, and there wasn't an increase in screening rates. The second finding is that there was, in his words, a moderate increase in the accuracy of who was screened in. So no more children were screened in, but the children who were screened in were in higher need, and it reduced the racial disparities in case openings. So the implementation of the tool actually did the reverse of what a lot of the critics were concerned about. Why did it do that? It did it through a slight reduction in the screening rates for Black and African-American children, while it increased the identification of white children who were at risk. So it did two things, and as a researcher this is exactly what I expect. I think these sorts of tools, if they're deployed correctly, are helpful in reversing unconscious bias, because think of the world we were living in before the tool: workers were seeing the race of the child. Now they're still seeing the race of the child, but they're also seeing that 50 percent of Black and African-American kids come in with a low score, and they're seeing white kids with a high score. So where previously they might have been using race to imply risk, the tool is able to offset some of those unconscious biases. So the next question we're going to ask is whether this type of work is scalable. Perfect. So as this Allegheny County work was rolling out, any number of other counties and jurisdictions in the U.S. were watching very closely, thinking about whether predictive analytics, predictive risk modeling, could also be deployed at the front end of their systems, and one of those states was California. So what I'm going to talk about is some work that we've done with the state and the counties of California over the last two years to really explore that question as to whether the model that was developed in Allegheny is scalable
and generalizable, and, most importantly, whether or not these models can be deployed with any degree of accuracy if one does not have the beautiful integrated data warehouse that defines Allegheny County. Allegheny County is truly unique in having made investments, going back 10-plus years, to integrate data holdings across the county, from behavioral health to child protection to 911 calls. All of that information was already available to frontline staff as they screened calls that came into the hotline, and therefore was also available for mining purposes when developing this algorithm. In California, where we have 58 counties, there is some amount of data that is shared locally at the county level, but what the state wanted to do first was have an open and transparent examination of whether the one data source that is available to every single worker in the state could be used to improve the accuracy of front-end decisions. That one data source is the historical child protection data that we have from our statewide, large-scale administrative data system. What we sought to do in California was very similar to Allegheny. We had some different outcomes we could look at, so we explored predicting everything from a future report of abuse and neglect, to three-plus referrals, a kind of chronicity measure, to foster care placement, and we were predicting that long-arc risk. One of the things that certainly surfaced in Allegheny was not only the sense that perhaps they were screening in, and out, the wrong children, but also the recognition that workers are probably best able to assess immediate safety and risk concerns when a call comes in that surfaces a very acute need; where we're far less good is in developing a kind of profile of what services will be needed when we're trying to look over that long arc. This is an algorithm where we were able to harvest three-hundred-plus predictors from
the administrative child protection data: everything from information about the focal victim child who was the subject of the call, things around their history of child protection involvement, and the nature of the allegation, to information about the other individuals who were on that call, everyone from siblings to parents to perpetrators. What I'm going to talk about are the findings that emerged from our attempt to predict foster care placement. As Rhema mentioned, that does feel like a good outcome to predict, because you have a little bit of that third-party influence, in that children are not removed and placed in foster care without the action of the courts. But one of the things I do want to point out is that you'll notice here that we're defining these as system outcomes. I want to fully acknowledge that we have not built a model, in any of these jurisdictions, that predicts true abuse and neglect. What we are able to do is observe which children are the most likely to have either chronic or future system involvement, because if we know that today, and we have the ability to deploy our preventive interventions differently, then hopefully we can reduce the likelihood of that future system involvement. And in the case of foster care placement, hopefully we can provide services now that would stabilize the family, protect the child, and reduce the need for an eventual removal. So we develop this model, and we take the underlying probability, and, largely for illustrative purposes, what we've done in California is simply use the risk score that's generated to distribute children and referrals into risk deciles. Very simply, everyone gets a score from one to ten. For children who have a score of 10, it means that the model believes they are in the top 10 percent in terms of the likelihood of foster care placement; a child with a score of 1 is a child with the lowest risk, the bottom 10 percent. We also like the decile approach
because, from a workload standpoint, we know our child protection agencies and frontline workers are strapped, right? And so we're also trying to think about how we can direct their attention effectively with tools such as this. So what do we see when we look at the differentiation that these risk deciles provide when it comes to future risk? When we run the model, and this, by the way, was built on around 3 million observations that we had for the state of California using historic data, the numbers I'm presenting here are based on a holdout set of records where we were looking to test the classification accuracy of the algorithm that had been developed. What we see is that when we look at the 10 percent of children whom the model, on the day the call came in, flagged as being at highest risk of this deep-end system penetration, 60 percent of those children ended up placed in foster care. I also want to really call your attention to the low end of the risk distribution, because one of the things to note is not just that fewer than one in a hundred children with a score of 1 ended up placed in foster care, but that if we look at the bottom 50 percent, we're seeing a rate of foster care placement that is less than one in a hundred, mirroring very much the data and the findings that we saw in Allegheny. So then the question is: what is the business use case for this, and how would it actually improve frontline practice above and beyond the tools that we have today? Because maybe what I'm showing you here is exactly what every single worker already knows; maybe they're already able to figure out who those tens are and who those ones are. To examine that, we looked at the highest-risk children according to the model and explored how many of those children were screened out without an investigation. The number is not huge, but we see that about one in five of those children was simply screened out without any kind of in-person
assessment by a social worker or a caseworker. At the other end of the risk distribution, what we see is that even though these were children the model said were quite low risk, we were screening in 63 percent of those children and families for investigation. So then we said, well, maybe those were good calls, right? Maybe there was something the model didn't see that was communicated to the caseworker, and they knew, and those were good calls. So what we did is we actually looked at the high-risk children who were screened out, to see what happened to them, and what we found is that one in two still ended up getting reported and placed in foster care. Okay, so that is a very, very high number; it's hard to argue that there was not a missed opportunity to do something. At the low end, we found that even though we screened in 63 percent of these children for investigation, which, to Rhema's point, is not without a real burden, not only on the agency in terms of resources but also on the family, fewer than one in a hundred ever ended up placed in foster care. Now, the low end of the risk distribution is a little tricky. It could be that the mere fact that a caseworker went out did something to remove the risk, but the literature would suggest that that is very unlikely to be the case. The other thing that we wanted to do: I mentioned at the outset that we are predicting future system involvement, and that we believe foster care placement is an important proxy for child safety and harm. But if it's not, then what we are potentially doing is orienting the system to identify children who are not at greatest risk, and we are missing the kids who in fact are. So one of the things that we did, just as with Allegheny County, was to validate the sensitivity of the model in picking up children who experience more objective forms of harm. We took all the children who were in our data, linked them to vital death records, and coded whether there was a maltreatment
fatality, and we also linked them to near-fatality reports that were filed with the state, indicating there was a very serious abuse or neglect injury. Here's what we found. If you look at the left-hand side, where you see the gray bars, that's just the distribution of children and referrals overall, because remember, we're putting kids in deciles. On the right-hand side of the screen, what you see is the distribution of children who experienced a near fatality or a fatality. So again, we took the few hundred kids who experienced that severe outcome, and we asked what their risk score would have been, had we had this tool in place, at the call that came in before the event itself, the near fatality or the fatality. What we found is that 59 percent of those children would have been flagged by the model as falling in that top decile of risk, and another 19 percent would have been flagged as falling in decile nine. So 78 percent of these cases would have been flagged as falling in the two highest risk deciles. Now, to be clear, there are still children who fall in low-risk deciles and die, so I think it would be very, very mistaken for us to hold out any of these tools as things that will prevent all child fatalities. But assuming that we have interventions and protocols that are positive and affect the safety of children, we would be reaching the children who clearly have the highest risk. We then did something else: we also decided to look at cancer deaths as a little bit of a counterfactual. We took from our data children who died from cancer, and we asked how they would have been scored by this algorithm, and what we see is that we have built an algorithm that appears to have absolutely no relationship to risk of cancer death, as we would expect and hope it would not, versus a model that is very sensitive to maltreatment near fatalities and fatalities. I only have a minute left, so I'm going to tick through this very quickly as my last slide, which is where I get to offer just
a few candid reflections on some of the challenges to moving the needle with this predictive risk modeling work. Rhema described it as some of the hardest work she's ever done; certainly true for me. It feels very uncomfortable to a lot of audiences that we would use data to inform our understanding of people's risk of something that's very uncomfortable for us to talk about, which is abuse and neglect. But that ignores an empirical reality, which is that we have the data today to do a much better job of figuring out which children are at risk. I've also been surprised by the number of attacks on any effort to use data to be more targeted and intentional in our approach to delivering services. People believe in and want universal services; I get it. It doesn't need to be an either/or, but that's often the way the debate is framed. I've also been surprised that we complain about our child protection system all the time, think about the headlines, and yet any time we try to innovate with data or new tools, we seem totally against any opportunity for moving the needle through new tools and new approaches. I also want to mention the racial and ethnic inequalities and disparities. It's amazing that there are some positive findings coming out of Allegheny, not because it's surprising, but because it's been so long since we've had anything to suggest we can actually move that. And when we attack predictive risk modeling as a tool that will reinforce racial bias, yes, we need to acknowledge that we have to be very careful, but we ignore the reality that we have human decision-makers with lots of bias as well. And finally, I do believe that we have a problem in the field where we tend to adopt a particular tool, a particular protocol, a particular program, and government has it, and we just stop there, and we use things that are 10, 15, 20 years old. So whether it's predictive risk modeling or other new interventions being deployed, I hope that
there is more investment in that work. Thank you. [Applause] I told you that was going to be fascinating. Well, thank you all; that was really interesting work, and unfortunately it is rare to see this kind of innovation in this field, so it's very refreshing to be able to hear from all of you today. So I want to start with this question of what barriers to innovation all of you have faced. We've talked a little bit about the territorial aspect, that different counties have different priorities. Is our decentralized system the biggest problem? Is there anything the federal government can do to help us get over this? Maybe each of you could address what you think are the biggest barriers to innovation; I'd love to hear from you on this. Well, I'm happy to start. I think that there are real challenges when we have existing tools that are threatened by innovations. So, to be clear, there are risk assessment tools that are used in child protection; they were developed, you know, 15 or 20 years ago, and anything that is new and being rolled out is a threat to, basically, the monopoly that those vendors have when it comes to those tools. So that's been one reality. I think the other reality is that our child protection agencies and our government agencies are so under a magnifying glass that there is a real fear of trying new things, because if it fails, it may be a very high-profile failure, and there is a long history of heads rolling any time a mistake is made. So there's just not much room for any missteps, and yet we continue to have many of the same missteps because we're not willing or able to give them the room to try new things. For me, I just echo what Emily has said. Definitely a barrier would be resistance to change. We are literally taking our technology and our ideas, but oftentimes, even when you get buy-in from the leadership,
you have to enlist the full support and participation of the person whose job it is to make that placement decision, and oftentimes what you're running up against, as Emily said, is decades of other types of practice, of other ways of solving the problem. So a lot of time and energy has to be coupled with the ability to get in the trenches with them, too. It can't just be something that you set on the table and walk away from and check in on a couple of years later; you have to build the relationships, you have to know the people and invest in those local communities, and that takes funding, which I think is the second biggest barrier to getting these sorts of ideas and innovations out there: not just providing the technology, the resource, but actually having the programmatic arm that's really helping root it into the community you're trying to serve. So my takeaway is, I think we have very high turnover in child welfare leadership, because we have a very immature attitude toward how to assess their performance, and that is really stifling innovation, because leaders can't get warm in their seats and can't build good community trust. And you need social license to innovate; you get social license by showing up and partnering with communities for the long haul. It's very hard for child welfare directors to get social license to innovate, because they're not there long enough. I think there are really two things. One is, as I think has been a compelling picture here, you can resolve a lot of problems when you have the right data and the right systems in place to be able to build the algorithms, but the systems themselves were built in a world where they were not interoperable with each other. So you see a lot of divided counties and agencies and states that have different systems that don't talk to each other, and I think one of the great things Emily has done
is to bring divergent systems in California together into one place, to be able to leverage the fact that there is data on whether or not kids face adverse outcomes that we can bring in; those systems didn't talk to each other until someone actually made the effort to do that. That limitation on the data means that you can't let the algorithm do all the things that are out there. I think the second thing actually lies a little bit on us: we think of algorithmic solutions, but we don't necessarily think about giving value back to the caseworkers themselves. How is it that this makes your job and your life easier? One of the things that I've discovered in other places is that as soon as you say, here's an algorithm, and guess what, it lets you do your job better, you can focus on what you need to do, and you don't have to go home every night feeling emotionally overtaxed because you couldn't get to everything, there's much more buy-in. So, as we said, this is not a silver bullet; you actually need to get buy-in, and you actually need to give value back to the people. Yeah, I don't know if you mentioned this, Thea, but I think something that people who are not in this world may not understand is that a lot of the caseworkers actually, in the short term, like the territorial position they take, because it's their job to find a bed for this child tonight, and if they give up those homes to a greater pool, they're worried that when it comes time to find the bed for the child tonight, the family they always turn to is not going to be there. So getting them to see these long-term outcomes is a harder sell. Right. What was interesting to us when we were rolling out in Florida was that our hypothesis was that we would have the most matches from outside the location where the child was situated, and what we found was that 45 percent of our matches,
I don't know if I mentioned this in the talk, were actually from out of county. So the majority of the matches were actually still happening where the child and family both had shared residency, and that was really compelling. It also alleviated some of the fears from workers who thought this was just a transactional system in which we're going to give away all of our families, when they're already so scared of losing people, and let them see that it could be a bidirectional focus and approach. And you understand where the caseworkers are coming from, because this is the system they have been dealing with for the entire duration of their careers, and it's very counterintuitive to think, oh, actually, if I put into this whole system, I have more options to place my children, not fewer. That's not the way they have been trained to think for the longest time. Okay. All of you mentioned, and Emily and Rhema, you touched on a little more, the race question. How much of the skepticism about innovation, or the concern about too much innovation, has to do with not wanting to go anywhere near the race issues in this country right now, and sort of tiptoeing past them, and making sure that you as a government official, or you as an agency, are not getting criticized for taking the wrong tack there? So I guess I can speak, because I'm the outsider. We actually did a very interesting research project where a couple of design colleagues and I went into U.S.
and New Zealand contexts, to families whose children had been removed and families of color, to understand why and how they felt uncomfortable about algorithms in child welfare. And what we found is that they had so little trust in the system that any innovation the system produced was endowed with the same level of distrust. It could be an innovation that we thought was actually going to improve things, or an innovation that was going to worsen things, but because the system was already so distrusted, any innovation that came out of that system was incredibly distrusted. That was a really interesting finding for us, because you realize that there are foundational issues we have to work on, the trust of the system, in order to get that social license. I think race is a really tricky topic, and I think it's terrifying for government officials to potentially push out or suggest a tool where we don't quite know what it will yield, and where we need to fully acknowledge that if we think our past practice was biased and did lead to the over-surveillance of, and over-intervention with, low-income and minority children and families, then yes, that is going to be baked in to some effect within the data. So I think the most important thing is that we just talk about it and be open about it. But what I will say, and these are some findings I didn't talk about: when we look at the data in California, the algorithm can help surface where those disparities, that unwarranted variation, are emerging in today's practice. So when we look at very low-risk families, and we look at low-risk Latino, African-American, and white children, what we see is that current practice is leading to systematically higher rates of screening in for our low-risk children who are Latino and African-American, and at the other end, we're seeing that we are systematically screening out more of our high-risk
white children so exactly what we see in Alleghany and so the conversation needs to be not as their bias and the data there is not is this going to solve everything it won't but how do we on the margins start to use tools like this so we can be more intentional in addressing some of the historic policies and I think when the systems are designed well and accountable to themselves for their outcome you can actually make changes to be able to address those issues but I think happens a lot of times in the current systems is you have a system those problems are there but because you are not looking at the long-term outcomes and connecting those those dots you actually never see that this thing is biased out there because the systems that we're building are actually being held accountable you can shine light and then make corrections and yes there are definitely systems out there that bacon bias but if you have a human in the loop system that's being very cautious about how it's being interpreted and being applied you can you can address that problem as it arises I want to ask you about the the algorithm itself that you're using I know you mentioned Thea that I think he said 59% of your of the families that you surveyed said they were open to adopting across racial lines is that one of the questions that you're asking in terms of a match and do you think that the fact that families can say that you know in their survey makes them more open about what they what they are and aren't willing to accept so I'll take the first part of that question and then John if you want to talk about that our algorithm part but we actually the questions around those deal-breakers there's kind of our code name which we're changing to conversation starters because what we're finding is actually what families like families say I want to adopt their first day in a class there's that you know they're being presented with a checklist you know tell us everything you wanted a child and they don't 
realize that at that moment in time, all of those things they've said are being cemented. So on our end, we ask: what was your home study actually licensed to provide services for? There are home studies that will state that the family can take a certain number of kids, or a certain gender — if there's a gender preference, it will bake in and say that this family has been approved for all genders, or just one. So what we do on our end is validate against that extra information, to make sure that the families in our matching pool are not only approved and eligible, but that what they've said their preferences are in fact matches what they've been approved — and legally found able — to support in a child.

I'll take it from there. In terms of preferences themselves, in multiple different contexts you find that preferences are very soft. People in dating relationships will say, "I don't ever want somebody who smokes." You say, "Well, would you take this person who fits all of your other criteria but smokes twice a year?" "Of course I'd take someone who fits my criteria and smokes twice a year." When you press people on that, you generally find that their range of options is wider than their stated preferences, and linking that into the system allows you to make those expansions.

If I could just add, Naomi — a lot of times when we talk about "matching systems" in foster care or adoption, what is really being described is a series of filtered searches, and depending on how the system has been wired on the back end, your age preference could be cutting you off at a day or two past the child's date of birth relative to the age range of the family that wants to adopt them. But it was interesting — we ran a couple more questions and queries on our total family pool and asked them about gender specifically, because that seemed to
be a big one: 64% are open to either male or female. We asked, if given information on a child of the opposite gender, would you be willing to accept placement — and 90% of our families said, "Oh, absolutely, we would strongly consider that." So these artificial parameters that we're self-imposing on families have, I think, done a massive injustice and eliminated swathes of families that otherwise could have been great candidates for our kids.

All right, well, I think we're going to open it up for questions now. We have a microphone, so please wait until the microphone comes, tell us your name, and please be sure to ask a question, not just make a statement. Any questions?

Hi, I'm from Mississippi. What happens with the risk model when the state is corrupt, and it's not necessarily the family? Because I have a feeling that's probably not built into your model. There's a case in Mississippi where the abuse was actually recorded, and the child was tracked long-term through their employment as an adult without it ever being disclosed. It's been reported to the police, and it still has not been cleaned up or addressed. I guarantee that's not in your risk model. Your model is only as good as reality. You can build the risk model and defend the risk model, but the problem is, if the state is corrupt, that is also an issue.

I wouldn't disagree. The models that have been deployed in Allegheny and in Colorado, and the proof-of-concept research that we were doing in California, were really focused on that very upstream triaging decision, where we've got partial information about a child's safety communicated by the caller, and the hotline screener is charged with making a very important decision: are we screening this in for at least additional eyes, or not? So in that sense, the risk tool wouldn't touch anything at the very back end at this point. But I hear you.

And can I just say — we train. One of the important things is to train staff on how they use the algorithm, and one of the ways we trained staff is by showing them things the algorithm got wrong, because it was really important to keep them hyper-vigilant to exactly that: there is context in the data, and there is sometimes data that is wrong.

Can you give me an example of what you would say about what the algorithm got wrong — how you would explain that to someone?

So we did simulations for our frontline workers in Colorado, and we stratified the cases based on whether they were true positives or false positives, and we made sure to stratify in enough of the false ones. Then they started to see cases where they had screened in and removed when the algorithm had told them not to — and they had followed their own judgment, and they were right and the algorithm was wrong. And they reflected that maybe they were seeing things in that day's allegations that they had been vigilant to and the algorithm had not. It's really important, when you train, to make sure they see enough of those, so that they're reflective in their practice and they understand.

And I think there are two very critical pieces that come out of your point and your question. One is that the models are only as good as the data, so you have to be very cautious about which data sources you're linking in; if those data sources are not quality, you can't produce a good model on poor data. The second is to never take away the efficacy of the caseworkers, or those who are on the front lines. They are the ones who are ultimately accountable for the decision. These are tools for them to try to make better decisions and focus their attention, but if they sense that something is wrong, you need to empower them to make that call at that point.

Thank you for the presentation. Chris Kingsley, at the Casey Foundation. You've been at some
pains to talk about — and this is really a question for Emily and Rhema — the application of your tool at the front end of the child welfare system. You also mentioned that one of the things that makes innovation in that system difficult is that agencies are under such pressure to ensure that no child is seriously harmed or dies in their care. So there must be huge incentives for agencies to want to apply these tools deeper in the system, to kids who are already known to child welfare. Can you talk a little bit about the potential application of these tools there, what's already being done in that space, and what limits you would place on what the data science supports and does not?

Sure. I know there are jurisdictions exploring applications further into the system, including, when we reunify families, whether we can identify those where there may be greater evidence of need for, say, wraparound services or continued supports after reunification to prevent re-entry. There are also use cases around ongoing quality assurance and supervision for open cases that may be more complex, where, knowing there's greater risk and more complexity, you would want to devote some additional resources and attention.

I also think it's very interesting with the new Family First legislation — which, for those of you who are not steeped in child welfare, is important federal legislation; I know it was the topic of a panel a few weeks ago. One of the things that will come up is that there will be new prevention dollars flowing from the feds down to the states, and the states are going to need to figure out how they define candidacy for those prevention services. The definition is that these need to be individuals at imminent risk of foster care placement. So one of the other conversations we've had with a number of jurisdictions is: maybe tools like this can help us define candidacy, so that we can draw down additional prevention dollars and
supports for families, where now we have an empirical basis for saying that this child is actually at risk of foster care placement, versus just trying to come up with our own definition.

Can I also jump in there? Because I'm a bit worried about wholesale prediction of foster placement, because there are good placements and bad ones. We've just been doing some linking work in New Zealand to try to get a handle on what features of a foster care placement we would all agree make a bad one. So far we've linked it to suicide and self-harm, to criminal justice involvement, to school dropout, to truancy — real, objective stuff — and we're trying to understand and unpack what things we really think are bad in foster care. Because at the moment it feels like everyone is saying foster care is bad, so we should just predict foster care, and reunification is good, so we should predict reunification — but even simple predictive odds ratios show that reunification is sometimes harmful for children. So I think we first need to know what the bad thing is before we know to avoid it.

I wanted to follow up just quickly. John, one of the things I know you're trying to avoid with the program is adoption disruption, and I wonder if you can try to quantify that. I think a lot of people are unaware of how many of these attempted adoptions don't work out, and why that is.

So what we've had access to is an understanding that of all of the 443,000 kids in foster care, about two-thirds will likely experience a minimum of three placement moves by their thirteenth month in care. If you can imagine the trauma that inflicts on the neurobiological development of a child — we now know, and the research is there to support it, which areas of the brain are most impacted whenever a child and family are separated, for whatever reason: it affects the area of the brain that controls emotional regulation and self-control. So with the behavioral issues that start to pop up, we're literally eradicating a child's ability, while in care, to form long-lasting attachments. So it's a huge issue, and it certainly impacts the adoption placement side, but also the front end, when a child first has to come into foster care. Our mission, our focus, is really: how can we use this data to best inform that first placement, so that it's the only placement until the child needs to either be reunified or some other case plan is created for that child.

Just one last question, unless there are any others. We talked a little about how there's not a lot of funding for programs like this. What are you seeing from foundations in terms of their interest in supporting this work, and who would you like to see supporting this work going forward?

Wow. Yeah, funding is absolutely huge. I think one of the barriers — well, actually, we've had a lot of interest from private individuals. One in particular that has really lit the way for us down in Florida is the Selfless Love Foundation, created by two philanthropists, Ed and Ashley Brown. But they're a brand-new foundation. We get a lot of excitement from new foundations and philanthropic people; where we start to hit friction is with the big, historically old foundations that typically require a one-to-two-year relationship build-up before you're even invited to apply for a grant. When you're trying to bring something like Family-Match to scale, it's not that we don't want to get to know you; it's that we don't have time to jump through those hoops just for the opportunity to apply. Certainly we
have our eyes on a number of organizations. I think hopefully one day maybe even the Chan Zuckerberg Initiative would love to step up — you have an in there — yeah, or the Ballmer foundation, certainly the people behind AEI. I know there are incredible resources and funding out there, but the problem is the ability to access them and actually scale this in a more simplified way.

And from a broader perspective, the funding is very complex, because technological solutions need support in multiple different ways. It's not just about building relationships or building a tool. There are compelling examples here of how you need policy work to be done, technology work to be done, and on-the-ground program work to be done, and a lot of the structures in place don't consider all of those different aspects.

The only thing I will say on the funding front is: you guys have a beautiful use case — it makes people feel good. The work that Rhema and I are doing, I think, is critical, but we have talked to so many foundations that are really interested in having those first conversations, and I think they're still a little bit hesitant. They're nervous to dive into predictive risk modeling because it's a little bit controversial and they're not quite sure what it will imply. But while I agree it's complex in terms of the technology and the development, in the whole big scheme of things it's actually a relatively small investment to help a jurisdiction get from the R&D side — the necessary model development — to implementation. We're not talking about hundreds of millions of dollars to design complex interoperable systems. We've shown that you can actually do this work and add value to current practice with just existing data, without even all of those cross-system builds, though those would be valuable.

And I do feel like the foundations don't understand child welfare as well as they do some other fields, so I think there's a really crucial role in educating them — just about the one-in-three number, just about how the system works. I would really welcome any opportunity to get foundations to ground zero with child welfare, and hopefully this is one of them.

Please join me in thanking our panel today.

[Applause]
