Future Ready PA Index – Spring 2017 Webinar


Good afternoon. Thank you for taking the time on this beautiful spring Friday to join us for updates on the Future Ready PA Index. My name is Matt Stem, and I'm the Deputy Secretary for the Office of Elementary and Secondary Education at the Pennsylvania Department of Education, and it's my pleasure to be here with you today. There are several things we'd like to accomplish over the next 30 or so minutes. We'll begin by discussing the purpose of the Future Ready PA Index. From there we're going to talk about single summative rating systems: a review of what they are and, in particular, what some of the limitations of such systems are. Then we're very excited to share a snapshot of our proposed Future Ready PA Index, including a sample indicator, some of the reporting elements that will be part of the system, and additional information. Finally, we'll discuss next steps, including a timeline of where the work goes from here and how this all rolls into our ESSA plan, as well as our plan for accountability in Pennsylvania.

The purpose of the proposed Future Ready PA Index is to establish a system of school performance measures that moves beyond point-in-time achievement and ensures clear identification of school contributions towards student learning, growth, and success in the classroom and beyond. This guiding vision has been central to our work around this system for the past year and a half. We've had over a thousand stakeholders involved in this work over that time, and before even moving into indicators and methodology, the work really began with assuring ourselves that we had a guiding vision. Today's webinar builds on the January 18th webinar, in which our focus was on the indicators in the system; today we're going to focus on methodology and how we take those indicators and publicly report school success. For those who were unable to participate in the January 18th webinar, we're going to move very briefly through the indicators we discussed at that time that will be part of the Future Ready PA Index. If you'd like a deeper dive into these indicators, that webinar is still published on our website in the same place where this video is housed, and we would certainly encourage you to go back and watch it for additional background.

There are three categories in the Future Ready PA Index into which the indicators are grouped: state assessment measures, on-track measures, and college and career readiness measures. In the first category, you'll see that the percent proficient and advanced on PSSAs and Keystone Exams in English language arts, math, and science is still a significant part of the assessment measures, as are PVAAS growth scores. I'll also draw your attention to the percent advanced on PSSA and Keystone Exams. This indicator used to be in the extra credit section but has been moved up to state assessment measures, so it's not a new indicator, but it is now located in a different place in the Index, and we'll talk a little more about that as we move forward. Under on-track measures there were really no changes: we have grade three reading indicators of success, grade seven math indicators of success, English language proficiency, and chronic absenteeism.
Under college and career measures, you'll see that we have the career standards benchmark; the percent proficient or advanced on industry standards-based competency assessments and/or industry-recognized credentials (here we're really talking about NOCTI, NIMS, and other occupational certificates); AP, IB, and college course offerings; graduation rate; and the post-secondary transition to school, military, or work. Here again I want to draw your attention to a couple of indicators that also used to be part of the extra credit section in our original proposal and are now being moved up into this section: the percent advanced on industry standards-based competency assessments and/or industry-recognized credentials, as well as the percent of non-CTE students graduating with at least one high-value industry-recognized credential. So these are no longer in the extra credit section but here in the category of college and career measures.

Prior versions of the proposed indicators over the last year included a single summative rating with associated weightings. As we shared in our January webinar, last November PDE began an extensive review of other methodologies to ensure that our proposed system would best meet the needs of our communities. In that review process, we looked closely at the benefits and limitations of various reporting systems. Most reporting systems fall into the broader category of single summative rating systems. Single summative rating systems essentially aggregate various measures, through policy-determined weightings, into a single summative score. There are several methodologies within those systems: some use a single numeric value as the summative rating, some states use A-F ratings, others use star ratings (for example, a school could receive between one and five stars), and still others use words or phrases to represent categories based on either score ranges or percentile rankings. What they all have in common, though, is the aggregation of various measures into one single summative rating.

There are some significant limitations inherent in these sorts of systems. First, single summative rating systems place policy values on each indicator in the accountability system, so raising the weight of one measure by default reduces the weight of other measures, and those weighting decisions are based on policy values. Also, single summative rating systems may mask transparency of performance on individual measures: higher performance on one indicator can offset lower performance on another. As an example, a school can receive an overall rating of an "A" while still having struggling subgroup populations. Single summative rating systems may also suggest or imply comparability where it does not exist, so the overall score may be easily misinterpreted or misused. It raises a question: is one school better than another if they're strong in different areas? We're going to look at an example of a situation like that in just a moment. Finally, single summative rating systems can tend to oversimplify what is a very complex system by combining dissimilar measures and funneling them into one rating. In other words, the score can be presented in a simple way, but the combinations of inputs and outputs across various measures make it difficult to justify some of the claims implied by the score.
I'd like to illustrate some of the limitations we just discussed. The following is based on a real example of two high-performing high schools with very similar achievement patterns. Take a look at the two high schools in front of us; we've tried to keep this simple, perhaps even a little oversimplified, just to illustrate the point. This is a partial list of indicators, and instead of showing individual scores we've lumped scores together to give a general sense of the performance of these two schools. Let's assume that both high school one and high school two have very similar performance across the majority of indicators, including proficiency, growth, and even things like industry credentials, AP/IB, and graduation. Let's say that both have very high proficiency scores, but high school one's proficiency scores are overall a few points higher than high school two's. High school two, however, has AP or IB scores that are significantly higher than high school one's, and its percent of students performing at the advanced level on Keystones is also significantly higher. So the question is, if we were to stop at this point and ask various individuals which is the better high school, would it be the high school with slightly higher proficiency scores, or would it be the high school with significantly more students scoring a three or higher on their AP exams, a four or higher on their IB exams, or at the advanced level on their Keystones? There's good rationale behind each answer, but in a system that weights proficiency as one of the highest weightings, high school one could end up with a ninety-nine while high school two ends up with a ninety-one. This is the type of thing we see play out, and it's difficult to make the claim that high school one is the "better" high school, and certainly exceptionally difficult to claim that high school one is eight points better than high school two. That would lead us to say the fix is to raise the weighting of AP, IB, and advanced on Keystones. That solves this issue, but when we make that policy decision to increase the weighting there, we by default decrease the weightings in other areas, even areas outside of proficiency, such as industry certificates, graduation, and other indicators. It becomes almost like a zero-sum game to get these weighted, and we know that policy decisions drive the allocation of resources. If the state places policy values through assigned weights, we are essentially telling schools where to invest their time, professional development, and resources. Although this example is based on two high-performing schools, the same limitations can be seen at schools across the score range.
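To make the weighting problem concrete, here is a minimal sketch in Python. The indicator scores and the two weighting schemes below are purely hypothetical illustrations, not figures from this webinar or from any PDE methodology; the point is only that, because the weights must sum to one, the ranking produced by a single summative score is driven by the policy weights rather than by the underlying data.

```python
# Minimal sketch (hypothetical scores and weights) of how a single summative
# rating depends on policy-determined weightings rather than on the data itself.

schools = {
    "High School 1": {"proficiency": 97, "growth": 90, "ap_ib": 72,
                      "pct_advanced": 65, "industry_credentials": 85, "graduation": 95},
    "High School 2": {"proficiency": 92, "growth": 90, "ap_ib": 92,
                      "pct_advanced": 85, "industry_credentials": 85, "graduation": 95},
}

# Two hypothetical weighting schemes. Each sums to 1.0, so raising one weight
# necessarily lowers the others (the zero-sum issue described above).
weighting_schemes = {
    "proficiency-heavy": {"proficiency": 0.50, "growth": 0.20, "ap_ib": 0.05,
                          "pct_advanced": 0.05, "industry_credentials": 0.10,
                          "graduation": 0.10},
    "advanced-heavy":    {"proficiency": 0.20, "growth": 0.20, "ap_ib": 0.20,
                          "pct_advanced": 0.20, "industry_credentials": 0.10,
                          "graduation": 0.10},
}

def summative(scores: dict, weights: dict) -> float:
    """Collapse all indicators into one number using the policy weights."""
    return sum(scores[k] * weights[k] for k in weights)

for scheme_name, weights in weighting_schemes.items():
    totals = {name: round(summative(scores, weights), 1)
              for name, scores in schools.items()}
    print(scheme_name, totals)
# The "better" school flips between the two schemes even though no student
# outcome changed; only the policy values assigned to each indicator did.
```

With these made-up numbers, high school one comes out ahead under the proficiency-heavy weights and high school two comes out ahead under the advanced-heavy weights, which is exactly why assigning those weights at the state level amounts to a policy judgment about which outcomes matter most.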
In addition to the limitations already discussed, there are other complexities when comparing schools' single scores. You have this issue, for example, around grade configurations. In Pennsylvania, as in most states, we have various elementary school models, so one elementary school or primary center may have only grade three, while another elementary school has grades three, four, and five. Knowing that achievement changes relative to grade level, and we see those patterns, how do we reliably produce a single score with the same sort of comparability when the same grade levels aren't being tested in those buildings? The same holds true at the middle school and junior high school levels, and very much so when comparing junior-senior high schools with senior high schools. This is something we do all the time: we take these single summative ratings and compare a junior-senior high school that has seventh and eighth grade PSSA scores included against a senior high school that has only Keystone scores applied to it, and in some cases even scores banked at grade eleven. So this becomes a real challenge and complexity when trying to use a single score to compare schools with different grade bands. Finally, there's the whole issue of subgroup performance. In a single rating system, how do you calculate subgroup weightings in a valid and reliable way given the vastly different sized populations that exist in each school? What makes sense for one school can create unintended consequences for others, so how do you appropriately weight those sorts of measures and do it in a valid and reliable way?

In light of the challenges related to single summative rating systems, the department began to consider the benefits of dashboard reporting as used by some other states. Dashboard accountability approaches are designed to present actual school performance on each measure in a public format without aggregating it into a single summative score. The dashboard approach treats the accountability system as a tool for continuous improvement. It maximizes transparency of performance on individual measures and, this is very important, it keeps dissimilar measures distinct, and thus avoids many of the weighting issues we've been discussing on the prior slides. We believe that a dashboard approach to reporting is the most effective way to communicate a school's progress to its community. For the Future Ready PA Index, we want, first of all, to be sure that we're communicating school progress in clear, concise terms. Areas of success, as well as areas in need of improvement, should be readily evident; it should be easy for any stakeholder to see what those areas are for each school. Additionally, overall school performance should be apparent, so if a school is in comprehensive support and improvement status, or targeted support and improvement status, that would be designated prominently on the front of each school's page. We also want to make sure that we're establishing context for each of the measures through comparison to statewide averages. A number by itself doesn't mean anything absent the broader context of the performance of other schools, and we're going to endeavor to provide that sort of context to make meaning from the scores and the data that we present.
We want to provide transparency around subgroup performance. Again, it should be easy to see how each subgroup is performing on each indicator, and stakeholders should not have to click through to a separate link or go to a different site to view subgroup performance; it should be right there, elevated as part of our tool. We also want to be sure that we're identifying values at the community level, not the state policy level. As in our earlier example with the two high schools, a community should be able to elevate the indicators that are best aligned to its local context, as opposed to having those indicators weighted at the state level. Finally, we need to be able to show progress towards state goals over time. This is really a way to hold ourselves accountable as a state for moving more and more Pennsylvania students towards our statewide goals over time.

Now we're going to look at a sample indicator. Understand that this is a conceptual illustration to demonstrate the data and designations that we're proposing. For this sample, we're looking at the percent proficient and advanced in English language arts; that's the indicator we're using just for this example. Over the next several slides, we'll focus on the specific design elements you see in front of you. Probably the easiest element to understand is school performance. School performance is essentially where we measure performance on each indicator; it will be reported for the all-student group and subgroups, and for the majority of indicators it will be represented as a percentage. Here you can see the school performance for the all-student group and then for each subgroup. If we highlight this, you'll see that in this sample school the percent proficient or advanced in English language arts was 84.3 percent, with economically disadvantaged students at 65.5 percent and English language learners at 11.8 percent, and it goes down from there. So it's very easy to see performance on this indicator by subgroup.

Next is the statewide average. The statewide average is reported for the all-student group and subgroups and applies to most but not all indicators. As we shared before, this is a critically important data point because absent this score, it's very difficult for stakeholders to make meaning out of school performance. Here we can see the statewide average laid out alongside school performance, and we'll highlight that. The statewide average for English language arts for all students is 62.5 percent proficient or advanced, and then you can see each of the subgroup populations: 45 percent for economically disadvantaged students, 11.6 percent for English language learners, and down it goes. Think about how much more meaning we can make for this particular school when we have the statewide averages to compare against: the performance of this school for the all-student group is 84.3 percent, and having the 62.5 percent statewide average alongside it makes much more meaning out of that number. The same holds true, for example, for the 65.5 percent of economically disadvantaged students in this school who scored proficient or advanced, compared to 45 percent statewide.
So now it really becomes clear that this school's numbers have more meaning when stakeholders, parents, and others can compare them with statewide averages without having to go anywhere else, having it all in one place. The statewide goal represents the state goals as identified in Pennsylvania's ESSA plan; the goals vary by indicator, and a statewide goal will apply to most but not all indicators. This is the goal mapped out to the year 2030, and this is where we as a state are holding ourselves accountable as we look at the statewide average and look to close that gap each year. Our statewide goal for English language arts, for example, knowing that our current average is 62.5 percent proficient or advanced, is 81 percent proficient or advanced. If we don't see that gap closing over time, then that's an indicator that at the state level we need to be sure we're creating the conditions for school success, including the appropriate levels of support, technical assistance, and certainly the resources and funding that allow our schools to best serve our students.

The last and very important elements to highlight are the school progress designations. These are applied at the all-group level as well as at the subgroup level. The highest designation that a group can receive, whether the all-group or a subgroup, is the meets or exceeds statewide goal designation. This means that for a particular indicator the statewide goal was met or exceeded outright. The second-highest designation is meets progress target. We're not going to get into the specifics of these calculations in this webinar, though we certainly will be sharing those details in the weeks ahead. But in very simple terms, there are two ways a group of students can meet the progress target. One is by hitting an interim yearly target, established by mapping out benchmark point-in-time scores that a school would need to achieve on the way towards the 2030 statewide goals. The second way is to demonstrate a percent of improvement over the prior year, with that percent again based on the improvement needed to close the gap towards the statewide goal; so even if the interim yearly target isn't hit, a certain percent of growth over the prior year would still allow a group of students to meet the progress target. The final designation is not meeting statewide goal or progress target, and this would apply to any group of students that didn't hit the statewide goal and didn't meet the progress target through either of the two ways it can be met.
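Purely as an illustration of the designation logic just described, here is a minimal sketch in Python. The webinar does not specify the actual interim yearly targets or the required year-over-year improvement, so the straight-line path toward the 2030 goal and the gap-closure rate assumed below are hypothetical placeholders, not PDE's calculations.

```python
# Illustrative sketch of the three progress designations described above.
# The interim-target and improvement formulas are assumptions for illustration;
# the actual calculations will be published separately by PDE.

STATEWIDE_GOAL = 81.0   # 2030 ELA goal: percent proficient or advanced
BASELINE_YEAR = 2017
GOAL_YEAR = 2030
# Assumption: close an equal share of the remaining gap each year.
ANNUAL_GAP_SHARE = 1 / (GOAL_YEAR - BASELINE_YEAR)

def interim_target(baseline: float, year: int) -> float:
    """Assumed interim yearly target on a straight-line path to the 2030 goal."""
    years_elapsed = year - BASELINE_YEAR
    return baseline + (STATEWIDE_GOAL - baseline) * ANNUAL_GAP_SHARE * years_elapsed

def progress_designation(current: float, prior: float, baseline: float, year: int) -> str:
    """Designation for one student group on one indicator."""
    if current >= STATEWIDE_GOAL:
        return "Meets or Exceeds Statewide Goal"
    # Way 1: hit the interim yearly target on the way to the statewide goal.
    if current >= interim_target(baseline, year):
        return "Meets Progress Target"
    # Way 2: improve over the prior year by enough to keep closing the gap
    # (the required gain used here is an assumed stand-in).
    required_gain = (STATEWIDE_GOAL - prior) * ANNUAL_GAP_SHARE
    if current - prior >= required_gain:
        return "Meets Progress Target"
    return "Not Meeting Statewide Goal or Progress Target"

# Hypothetical subgroup: 66.1% this year, 64.0% last year, 62.0% baseline.
print(progress_designation(current=66.1, prior=64.0, baseline=62.0, year=2019))
```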
If you look all the way to the right, you'll see a section entitled school progress towards statewide goal. Underneath it, you can see the progress designations for the all group and each of the subgroups, and we'll highlight that. In this school, the all group of students exceeded the statewide goal, and that's clearly labeled in dark blue in a way that elevates that designation. For each of the subgroups you'll see individual designations as well, and they're color-coded; the goal is to make it very easy for any stakeholder to see how the all group of students is performing, as well as individual subgroups, on any given indicator. So for example, here we see that our Black or African American subgroup, as well as our white subgroup, exceeded the statewide goal; our economically disadvantaged, Asian, and IEP students met the progress targets; and our English language learners, Hispanic students, and Native Hawaiian/Pacific Islander populations did not meet the statewide goal or the progress target.

There are some additional elements to highlight in terms of the functionality and view of the Future Ready PA Index. We are looking to include all of the indicators on one page for a full overview of the all-student group: having all of the state assessment measures, the on-track indicators, and the college and career readiness measures in one place, so that anyone viewing a school's performance does not have to click through multiple pages or multiple screens to get a sense of overall school performance. Again, ease of use and simplicity is key to our design. We would also endeavor to have drop-down menus for each of the all-group measures that allow subgroup performance to drop down and be viewed without having to click somewhere else. Finally, we would want to have prominent school classifications for any school designated as either comprehensive support and improvement or targeted support and improvement. This is just a very small example of what it can look like to have the indicators collapsed. This is a PowerPoint, so there's no scrolling for this example, but if you look at the state assessment measures and the way percent proficient and advanced is laid out, you can see that we have those three indicators in one place, with the all-student group school progress elevated and easily seen. From here we would have it set up so that you can click on any one indicator and see the drop-down of subgroup performance.

As we close, we want to take a final moment to address the various components of our accountability and reporting systems to ensure clarity. Pennsylvania, like most states, has historically had multiple accountability and data reporting systems for different purposes. Beginning in the fall of 2018, we would propose the following components to satisfy those various systems and purposes. For our public-facing report card, we would propose using the Future Ready PA Index as the front-facing indicator of school success. For federal accountability, we will be selecting indicators from the Future Ready PA Index to identify our comprehensive support and targeted support schools as required by ESSA. Rather than having a whole different methodology and a whole different set of indicators and calculations, we will identify those indicators in the Future Ready PA Index that are used for federal accountability, and we will flag them in a prominent way so that it's clear to anyone looking at a school's performance which are the federal reporting measures. Finally, in terms of educator evaluation, a building-level score is still required by Act 82; that score would be generated using the current formulas and weightings identified in regulation and would not be a part of the Future Ready PA Index. If there are ever changes to Act 82 in the future, we would respond accordingly, but that data will reside outside of the Future Ready PA Index, so any changes to Act 82 wouldn't require changes to the Future Ready PA Index.
In terms of next steps, there is much to be done in the weeks and months ahead. We're going to continue finalizing our indicator-level calculations, as well as our federal accountability calculations, and sharing them in more detail with all stakeholders. We'll continue to keep the field apprised of those developments and gather two-way feedback, including from other targeted groups; I know that we have sessions specifically for parents scheduled throughout May and into June, and we'll be sharing more details and getting the feedback that helps inform our design. We're looking to share a draft of the ESSA plan before the formal public review, so sometime in early summer we should have a draft of our ESSA plan to share and gain additional feedback. All the things we've been discussing will be part of that draft plan, and we hope to get significant feedback to ensure, again, that we're hearing from all stakeholders. We'll be submitting our plan on September 18th of this year, and we are also preparing technical assistance for educators and others to understand how to use the system, what the indicators mean, and how to ensure that the Future Ready PA Index is a useful tool to guide continuous improvement of our schools. We endeavor to have the Future Ready PA Index implemented in the fall of 2018. At the bottom of your screen you see the email address [email protected], and we really encourage you, if you're watching this now and have questions, thoughts, or suggestions, to email that address. We have a team of individuals that checks this email daily, and not only will we be able to respond to you, but more importantly we'll be able to take your feedback, bring it to the teams working so diligently on this system, and ensure that the feedback we're receiving informs our plan development. We believe that we have an excellent, once-in-a-decade opportunity to take Pennsylvania's forward-facing reporting system to the next level, and we look forward to collaborating with lawmakers, the State Board, educators, parents, industry partners, and others to make this new system a reality. Thank you for taking the time to be here with us today, and we wish everyone a wonderful weekend.

1 thought on “Future Ready PA Index – Spring 2017 Webinar”

  1. Analytical data is only valuable if there is real accountability shown through action to correct deficiencies. As it stands now, the public has very little confidence, based on the lack of follow-through in addressing past deficiencies, such as the huge percentage of public school children who are failing basic math. The numbers are reported, but the schools have no real plan of action to address them. It simply gets reported.
