
Guest Post: Delaware’s Education Insight Project (Part 2: What Will It Look Like?)

The following is a guest post by Mike Oboryshko. Mike is a Red Clay parent and a friend of mine and of this blog.

In Part 1, I introduced the Education Insight Project, and went over the RFPs that define the beginning of design and implementation. In Part 2, we will get into some more detail.

What do the dashboards look like, and what does Texas have to do with any of this?
Now, I’d like to be able to give you a list of all the dashboards, and tell you what they look like and how they work. But that would be missing the point of the RFP. The Dashboard RFP specifies that the job of figuring out what dashboards to build, and how they work, will be done by the winning vendor (together with DOE).

There is some direction though. The Dashboard RFP specifies that the dashboards will be developed according to a reference design created by the Michael and Susan Dell Foundation (MSDF). Yes, that Michael Dell, the billionaire computer guy from Texas (why is Texas always in the middle of these things?). Dell has created a foundation devoted to education, among other things, and one of its undertakings was to help Texas develop a new technology platform for its education system. From the Dashboard RFP:

The dashboard and its database will be based on design documentation funded by the Michael and Susan Dell Foundation that can be found here:  http://www.texasstudentdatasystem.org/dcd and here: http://www.districtconnections.com. This documentation will be revised and extended based on Delaware’s dashboard analysis and design process.

So the winning vendor’s job (together with DOE) is to review the Texas implementation, conduct further analysis together with Delaware stakeholders, and then figure out how to use or modify the Texas design to suit Delaware. We can copy and modify the Texas dashboards, or build our own. The point is that once the Data Warehouse is in place (see Part 1), new dashboards can be created fairly easily, once we know which ones we want to build.

I personally am hopeful there will be some dashboards aimed at parents, and that parents will be given a seat at the table when those dashboards are being designed. The Governor has in fact blogged about a Parent Dashboard, and expects it to be implemented later in the project. I have been trying to find out the scope and level of detail of student information in a Parent Dashboard. Will the current Home Access Center be replaced by a new dashboard, or will it co-exist with the new dashboards? I could not find an example of a Parent Dashboard in the Texas implementation.

To find out more about the MSDF Texas implementation, check out:
http://www.texasstudentdatasystem.org/dcd
http://www.districtconnections.com (documentation)

These links (on http://www.texasstudentdatasystem.org/dcd) give the best idea of what the Texas implementation looks like from the teacher’s point of view. I expect the Teacher Insight Dashboard will look a lot like this:

Elementary school
Middle school
High school

What is the data-driven approach?
Honestly, I don’t know that much about it. Professional educators will be able to describe the data-driven approach far better than I can. Basically, it is a common-sense approach that will be recognizable to anyone familiar with some form of process management: If you want to improve something, first you have to measure it. And you have to be able to see the data in different views, and work with it over time.

Importantly though, a data-driven approach is necessary to implement a meaningful accountability system. Without data, accountability is just politics. So if I were a teacher, I don’t think I would be anxious about the data-driven approach at all. Teachers, learn the system thoroughly and use it to your advantage. It can help you more than it can hurt you.

Suddenly you will have access to the same numbers and reports that are being used to evaluate you, and you can review them every day if you want. Once you know what numbers are being watched, you can stay on top of those numbers faster than your evaluators can. As long as they are the right numbers to be watching, this is a win-win-win: good for the students, good for school performance, and good for your own evaluations. And now that you know there is an easy-to-access data warehouse, you can ask for custom reporting whenever you need to make a case and need support. Ask for the data dictionary, and get a geek friend to help you. I suspect the teachers who don’t embrace the new system will be the ones always behind on their numbers.

How will the metrics be used?
One important use of metrics, arguably the most important, will be to track student performance to identify when a student needs intervention. The example below, taken from a graphic on the Texas web site, shows how metrics are used to identify a student who needs help with math. The point is that once you can see in one place how a student has been performing over time, you can recognize the need for intervention.

Tyson’s standardized math scores have declined “nearly to failing” over three years. Last year he got a D in Pre-Algebra, and this year he is failing Algebra I and has missed 6 of 30 math classes. In this example, the system is tracking multiple metrics over multiple years: Standardized scores, class attendance, course grades, and “benchmarks” (whatever they are).
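To make that concrete, here is a rough sketch of the kind of flagging rule such a system might apply. To be clear, this is my own illustration: the field names, thresholds, and numbers are assumptions I made up, not anything taken from the RFP or the Texas design documentation.

# Hypothetical sketch of a multi-year flagging rule over the kinds of
# metrics described above. All names and thresholds are my own assumptions.
from dataclasses import dataclass

@dataclass
class YearRecord:
    year: int
    standardized_math: float   # scaled score, say 0-100
    math_course_grade: str     # letter grade
    math_classes_missed: int
    math_classes_held: int

def needs_math_intervention(history):
    """Flag a student whose math metrics are trending down."""
    if len(history) < 2:
        return False
    latest, previous = history[-1], history[-2]
    declining_scores = latest.standardized_math < previous.standardized_math
    failing_grade = latest.math_course_grade in ("D", "F")
    poor_attendance = latest.math_classes_missed / latest.math_classes_held > 0.15
    # Any two of the three signals together raise the flag.
    return sum([declining_scores, failing_grade, poor_attendance]) >= 2

tyson = [
    YearRecord(2009, 72, "C", 3, 30),
    YearRecord(2010, 64, "D", 4, 30),   # Pre-Algebra
    YearRecord(2011, 58, "F", 6, 30),   # Algebra I, so far
]
print(needs_math_intervention(tyson))   # True -- but only after years of decline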

So this is great, right? The technology has flagged Tyson as needing intervention. The only problem is, Tyson’s six-year-old sister could have told you last year that Tyson needed help in math! Why did Tyson not receive intervention last year, after the first week he started bringing home D’s from Pre-Algebra?

Based on what I see here, it is because the metrics are tracking at the course-grade level, not at the daily assessment level. This is what designers of information systems call granularity. If the system’s metrics had been granular enough to track daily assignments and tests, failing assignments could have been flagged much earlier, in time for a more meaningful intervention: one that raises Tyson’s grade in Pre-Algebra and gives him a much better basis for success in Algebra I. Waiting for course grades and standardized tests to come in is not fast enough to help in a meaningful way. Perhaps the “benchmarks” are tracking daily grades? If so, then the tracking needs to rely more on that current data.
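To illustrate the granularity point, here is a toy example (again my own, with a made-up passing threshold and window) of how an assignment-level metric could raise a flag within weeks instead of waiting for a report card.

# Hypothetical sketch: with assignment-level data, a simple rolling average
# can flag trouble within a few weeks. Threshold and window are assumptions.
def flag_on_assignments(scores, window=5, passing=65.0):
    """Flag a student whose recent assignment average has dropped below passing."""
    if len(scores) < window:
        return False
    recent = scores[-window:]
    return sum(recent) / window < passing

# Weekly grades surface the problem long before a course grade would:
pre_algebra = [82, 78, 70, 64, 58, 55, 60, 52]
for week in range(5, len(pre_algebra) + 1):
    if flag_on_assignments(pre_algebra[:week]):
        print("Flag raised after assignment", week)
        break
# At course-grade granularity, the same student is invisible until the D arrives.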

Kids are growing up fast, and we can’t be waiting for last year’s data before we take action. Interventions need to be identified over weeks, not years, and then performed as micro-interventions that will be less costly and more likely to succeed.

Of course, tracking daily assignments is hard. It is harder to design and build such a system, and harder for the teachers to keep up with the necessary grading and data entry. But if we want to use this system for meaningful and timely interventions, we will have to do it.

More metrics
Another example of how metrics could be used to identify the right intervention occurred to me. One student can be getting assignment grades of “100, 0, 100, 0, 100,” while another student in the same class gets “60, 60, 60, 60, 60.” Both students fail the course with the same grade, but they require very different interventions.
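Here is a quick illustration of how easily the data separates those two cases, as long as the metrics reach down to the assignment level (the numbers below are made up purely for illustration):

# Hypothetical sketch: two students with the same failing average, but very
# different patterns. The spread and the zero count tell them apart.
from statistics import mean, pstdev

student_a = [100, 0, 100, 0, 100]   # does the work well, but not always
student_b = [60, 60, 60, 60, 60]    # always tries, never quite gets there

for name, scores in (("A", student_a), ("B", student_b)):
    print("Student %s: mean=%.0f, spread=%.0f, zeros=%d"
          % (name, mean(scores), pstdev(scores), scores.count(0)))

# Student A: mean=60, spread=49, zeros=2  -> intervention: get the work turned in
# Student B: mean=60, spread=0,  zeros=0  -> intervention: help with the material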

Issuing a failing grade to a student is a very serious matter. If a student receives a failing grade, we need to know why. Not at interims, not after the marking period, not next year, but NOW. If you are only tracking at the course grade level, all failing students look the same. Waiting for that D to come in, and then figuring out how to intervene, is too late.

So here’s my idea: Every time a teacher issues a failing grade, he or she should be required to enter a reason along with the grade. Just a short list of canned reason codes will be fine. Did the student answer all the questions wrong? Did the student lose the paper, or forget to turn it in? Did the teacher refuse to accept completed work a day late? Is the teacher just in a bad mood on Thursdays? Or will some teachers back down on issuing the failure, once they realize it will have to be explained on the record? How can we help students who are failing if we don’t know why?
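Here is a rough sketch of what that might look like inside a grading system. The reason codes and the rule are entirely my own invention, meant only to show how small a change this would be:

# Hypothetical sketch of the reason-code idea: a failing grade cannot be saved
# without a short, canned explanation. The codes below are examples I made up.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class FailReason(Enum):
    INCORRECT_WORK = "Answered most questions incorrectly"
    NOT_SUBMITTED = "Did not turn in the assignment"
    LATE_NOT_ACCEPTED = "Completed work refused as late"
    ABSENT = "Absent for the assessment"
    OTHER = "Other (comment required)"

@dataclass
class GradeEntry:
    student_id: str
    assignment_id: str
    score: float
    reason: Optional[FailReason] = None
    comment: str = ""

def save_grade(entry, passing=65.0):
    if entry.score < passing and entry.reason is None:
        raise ValueError("A failing grade requires a reason code.")
    if entry.reason is FailReason.OTHER and not entry.comment:
        raise ValueError("'Other' requires a comment.")
    # ... persist the entry to the grading system here ...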

The data will show what the problem is, as long as our metrics are tracking at the class assignment level. Is the student doing badly with assignments on Mondays? Is the teacher giving an inordinate amount of zeroes?
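With assignment-level records, those two questions become simple queries. The column names and the tiny data set below are my own assumptions, not taken from any real data dictionary:

# Hypothetical sketch of the kinds of questions assignment-level data answers.
import pandas as pd

grades = pd.DataFrame({
    "student_id": ["s1", "s1", "s1", "s2", "s2", "s2"],
    "teacher_id": ["t9", "t9", "t9", "t9", "t9", "t9"],
    "date": pd.to_datetime(["2011-10-03", "2011-10-05", "2011-10-10",
                            "2011-10-03", "2011-10-05", "2011-10-10"]),
    "score": [0, 85, 0, 70, 72, 68],
})

# Is a student doing badly on a particular weekday?
by_weekday = grades.groupby(["student_id",
                             grades["date"].dt.day_name()])["score"].mean()
print(by_weekday)

# Is a teacher giving an inordinate number of zeroes?
zero_rate = grades.assign(is_zero=grades["score"].eq(0)) \
                  .groupby("teacher_id")["is_zero"].mean()
print(zero_rate)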

Either way, for this metric to be meaningful it will have to be built into the online grading system. I suspect it is not. And the workshop for developing the metrics is happening in the very near future.

REL workshop
To develop the underlying metrics for the new system, Delaware will turn to the Regional Educational Laboratory (REL) Mid-Atlantic. REL is a research arm of the U.S. Department of Education, and this is exactly the sort of thing they exist for. From the RFP:

At the request of the Delaware Department of Education, the Regional Education Lab Mid-Atlantic is currently in the planning stages of conducting an expert roundtable on the development and implementation of student-level indicators and metrics for use through data dashboards. Invited experts will be nationally recognized authorities who are knowledgeable on both the research and implementation of data dashboard systems. The roundtable will be based on a series of discussions, intended to be engaging and interactive, focusing on what are good student-level indicators/metrics, what are they good indicators of (i.e., academic achievement, dropout, etc.), what indicators are most useful for various stakeholders (e.g., administrators, teachers, parents), and using visual presentations of data for ease of use and comprehension. The event will be held in Delaware for a predetermined list of DDOE and district personnel.

Dashboard RFP, Section 3.2.2

So this is where the performance metrics will be decided: whether we track students at the assignment level or the course-grade level, whether we track reasons for failing grades or not, what data is included in accountability measures. This is essential information that the geeks need to know to design the database and the applications. That is why the winning vendor is required to attend and integrate the workshop outcome into the design.

If you are feeling left out, don’t. I know well that one of the biggest causes of project failure is an endless requirements phase that requires consulting everyone in multiple rounds of revision and approval. And the political costs of screwing up the RTTT money would be very, very high, as would the loss to our students. In this RFP I can sense a team that knows what it wants and needs to move quickly.

On the other hand, it would be useful to at least allow meaningful written public input into the REL workshop, or at other suitable points in the process, along with a way to report back on the outcome of that input. In the current process, I don’t see parents having a seat at the table.

In Part 3, I will discuss the upcoming training sessions on the new technology for teachers and administrators (sooner than you think), and how any of this may or may not benefit parents, students, or teachers.
