
Goal Setting is Often an Act of Desperation: Part 6


In the final episode of the goal setting in classrooms series, John Dues and Andrew Stotz discuss the last three of the 10 Key Lessons for implementing Deming in schools. They finish up with the example of Jessica's 4th-grade science class.

TRANSCRIPT

0:00:02.4 Andrew Stotz: My name is Andrew Stotz, and I'll be your host as we continue our journey into the teachings of Dr. W. Edwards Deming. Today I'm continuing my discussion with John Dues, who is part of the new generation of educators striving to apply Dr. Deming's principles to unleash student joy in learning. This is episode six about goal setting through a Deming lens. John, take it away.

0:00:26.4 John Dues: Hey, Andrew, it's good to be back. Yeah, for the past handful of episodes or so, we've been talking about organizational goal setting. We covered these four conditions of healthy goal setting and then got into these 10 key lessons for data analysis. And then we've been looking at those 10 key lessons applied to an improvement project. We've been talking about a project that was completed by Jessica Cutler, who did a Continual Improvement Fellowship with us here at our schools. And if you remember, Jessica was attempting to improve the joy in learning of her students in her fourth grade science class. So last time we looked at lessons five through seven. Today we're gonna look at those final three lessons, eight, nine, and ten, applied to her project.

0:01:15.7 AS: It's exciting.

0:01:17.1 JD: Yeah. So we'll jump in here. We'll kind of do a description, a refresher of each lesson. And we'll kind of talk about how it was applied to her specific project, and we'll look at some of her data to kind of bring that alive for the folks that have video. Let's jump in with lesson number eight. So we've talked about this before, but lesson number eight was: more timely data is better for improvement purposes. So we've talked about this a lot. We've talked about something like state testing data. We've said, it can be useful, but it's not super useful for improvement purposes, because we don't get it until the year ends. And students in our case, have already gone on summer vacation by the time that data comes in. And analogous situations probably happen in lots of different sectors, where you get data that lags to the point that it's not really that useful for improvement purposes.

0:02:15.8 JD: So when we're trying to improve something, more frequent data is helpful because then we can sort of see if an intervention that we're trying is having an effect, the intended effect. We can learn that more quickly if we have more frequent data. And there's not a hard and fast rule, I don't think, for how frequently you should be gathering data. It just sort of needs to be in sync with the improvement context. I think that's the important thing. Whether it's daily or a couple times a day or weekly, or monthly, quarterly, whatever, it's gotta be in sync with whatever you're trying to improve.

0:02:50.5 AS: You made me think about a documentary I saw about how they do brain surgery and how the patient can't be sedated, because they're asking the patient questions like, do you feel this, and they're testing whether they're getting... They're trying to, let's say, get rid of a piece of a cancerous growth, and they wanna make sure that they're not getting into an area that's gonna damage their brain. And so, the feedback mechanism that they're getting through their tools and the feedback from the patient, it's horrifying to think of the whole thing.

0:03:27.7 JD: Yeah.

0:03:28.3 AS: It's a perfect example of why more timely data is useful for improvement purposes 'cause imagine if you didn't have that information, you knock the patient out, you get the cancerous growth, but who knows what you get in addition to that.

0:03:43.7 JD: Yeah, that's really interesting. I think that's certainly an extreme example, [laughter], but I think it's relevant. No matter what our context, that data allows us to understand what's going on, variation, trends, whether our system is stable, unstable, how we should go about improving. So it's not dissimilar from the doctors in that example.

0:04:06.8 AS: And it's indisputable, I would argue. But many people may be operating with data that's not timely. And so this is a reminder that we would pretty much always want that timely data. So that's lesson eight. Wow.

0:04:22.6 JD: Lesson eight. Yeah. And let's see how we can, I'll put a visualization on the screen so you can see what Jessica's data look like. All right. So now you can see. We've looked at these charts before. This is Jessica's process behavior chart for joy in science. So just to reorient, you have the joy percentage that students are feeling after a lesson on the x-axis, sorry, on the y-axis. On the x-axis, you have the school dates where they've collected this survey information from students in Jessica's class.

0:04:57.0 AS: Can you put that in Slide Show view?

0:05:00.4 JD: Yeah. I can do that. Yeah.

0:05:02.7 AS: Just it'll make it bigger, so for the...

0:05:06.5 JD: There you go.

0:05:07.8 AS: For the listeners out there, we're looking at a chart of daily, well, let's say it looks like daily data. There's probably weekends that are not in there because class is not on weekends, but it's the ups and downs of a chart that's moving within a relatively narrow range, and these are the scores that are coming from Jessica's surveying of the students each day, I believe. Correct?

0:05:34.2 JD: Yeah. So each day where Jessica is giving a survey to assess the joy in science that students are feeling, she's averaging all those students together. And then the plotted dot is the average of all the students' assessments of how much joy they felt in a particular science lesson.

0:05:54.7 AS: And that's the average. So for the listeners out there, John's got an average line down the middle of these various data points, and then he's also got a red line slightly above the highest point and a red line slightly below the lowest point. Maybe you can explain that a little bit more.

0:06:15.4 JD: Yeah. So with Jessica, you remember originally she started plotting on a line chart or a run chart when we just had a few data points, just to kind of get a sense of how things are moving so she could talk about it with her class. And over time what's happened is she's now got, at this point in the project, which she started in January, now this is sort of mid-March. And so she's collected two to three data points a week. So she doesn't survey the kids every day, just for time's sake, but she's getting two, three data points a week. And so by March, having started just a couple months ago, she's got 28 data points. So that sort of goes back to this idea of more timely data is better for improvement.

0:07:00.9 JD: And a lot of times, let's say a school district or a school does actually survey their students about what they think of their classes. That might happen at best once a semester or maybe once a year. And so at the end of the year you have one or two data points. So it's really hard to tell sort of what's actually going on. Compared to this, Jessica's got these 28 data points in just about two months or so of school. So she's got 28 data points to work with. And so what she and her students are doing with this data then, one, they can see how it's moving up and down. So we have, the blue dots are all the plotted points, like you said, the green line is the average running sort of through the middle of the data, and then those red lines are our process limits, the upper and lower natural process limits that sort of tell us the bounds of the system.

0:07:50.4 JD: And that's based on the difference between each successive pair of data points. But the most important thing is that as Jessica and her students are looking at this, initially, they're really just studying it and trying to sort of see how things are going from survey to survey. So one of the things that Deming talked about frequently is not tampering, which would be if you overreact to a single data point. So let's say, a couple of days in, it dips down from where it started and you say, oh my gosh, we gotta change things. And so that's what Deming is talking about. Not tampering, not overreacting to any single data point. Instead, look at this whole picture that you get from these 28 data points and then talk about...

0:08:41.5 JD: In Jessica's case she's talking about with her students, what can we learn from this data? What does the variation from point to point look like? If we keep using the system, the fourth grade science system, if we leave it as is, then we'll probably just keep getting data pretty similar to this over time, unless something more substantial changes either in the negative or the positive. So right now they...
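For readers who want to see the arithmetic behind those red lines, here is a minimal sketch in Python. This is not the tool Jessica's class used, just an illustration of the standard XmR (process behavior) chart calculation, with made-up joy percentages:

    # Minimal XmR (process behavior) chart arithmetic; the data are invented.
    joy = [74, 78, 71, 76, 80, 73, 77, 75, 72, 79]  # daily class averages, %

    mean = sum(joy) / len(joy)  # the green central line

    # Moving ranges: absolute difference between each successive pair of points
    moving_ranges = [abs(b - a) for a, b in zip(joy, joy[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)

    # 2.66 is the standard XmR scaling constant (3 / 1.128)
    upper = mean + 2.66 * avg_mr  # upper natural process limit (red line)
    lower = mean - 2.66 * avg_mr  # lower natural process limit (red line)

    print(f"average = {mean:.1f}%, limits = [{lower:.1f}%, {upper:.1f}%]")

The 2.66 constant converts the average moving range into three-sigma-equivalent limits, which is what makes the signal rules discussed later in the episode mathematically grounded.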

0:09:10.1 AS: And I think for the listeners, you can see that there's really no strong pattern that I can see from this. It's just, sometimes it seems like there are little trends and stuff like that. But I would say that the level of joy in the science classroom is pretty stable.

0:09:32.1 JD: Pretty stable. Yeah. Pretty high. It's bouncing around maybe a 76% average across those two and a half months or so. And so, you kind of consider this like the baseline. They've got a good solid baseline understanding of what joy looks like in this fourth grade science classroom. Did that stop sharing on your end?

0:10:00.2 AS: Yep.

0:10:00.2 JD: Okay, great. So that's lesson eight. So clearly she's gathered a lot of data in a pretty short amount of time. It's timely, it's useful, it's usable, it can be studied by her and her students. So we'll switch to lesson nine now. So now they've got a good amount of data. They got 28 data points. That's plenty of data to work with. So lesson nine is: clearly label the start date for an intervention directly in the chart. And remember from earlier episodes, not only are we collecting this data, we're actually putting this up on a screen on a smart board in the classroom, and Jessica and her students are studying this data together. They're actually looking at this exact chart and she's explaining, sort of like we just did for the listeners, what the chart means.

0:10:54.2 JD: And so over time, like once a week, she's putting this up on the smart board and now kids are getting used to, how do you read this data? What does this mean? What are all these dots? What do these numbers mean? What do these red lines mean? That type of thing. And so now that they've got enough data, now we can start talking about interventions. That's really what lesson nine is about. And the point here is that you want to clearly, explicitly mark, with literally a dashed line in the chart, the day that you're gonna try something new. So you insert this dashed vertical line, we'll take a look at it in a second, on the date the intervention started. And then we're also gonna probably label it something simple so we can remember what intervention we tried at that point in time.

0:11:42.7 JD: So what this then allows the team to do is very easily see the data that happened before the intervention and the data that happened after the implementation of this intervention or this change idea. And then once we've started this change and we start plotting points after the change has gone into effect, then we can start looking for those patterns in the data that we've talked about, those three rules that we've talked about across these episodes. And just to refresh, rule one would be if we see a single data point outside of either of the limits, rule two is if we see eight consecutive points all on one side of that green average line, and rule three is if we see three out of four dots in a row that are closer to one of the limits than they are to that central line.

0:12:38.3 JD: So again, those patterns tell us that something significant, mathematically improbable has happened. It's a change of big enough magnitude that you wouldn't have expected it otherwise. And when we see that pattern, we can be reasonably assured that the intervention that we've tried has worked.
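Those three rules are mechanical enough to check in code. Here is a rough sketch reusing the central line and limit names from the earlier snippet; again, this is an illustration, not the class's actual tooling:

    # Sketch of the three signal rules described above; 'center', 'lower',
    # and 'upper' are the central line and natural process limits.

    def rule_one(points, lower, upper):
        # Rule 1: any single point outside either limit
        return any(p < lower or p > upper for p in points)

    def rule_two(points, center):
        # Rule 2: eight consecutive points all on one side of the central line
        for i in range(len(points) - 7):
            window = points[i:i + 8]
            if all(p > center for p in window) or all(p < center for p in window):
                return True
        return False

    def rule_three(points, center, lower, upper):
        # Rule 3: three of four consecutive points closer to a limit than to
        # the central line (i.e., beyond the halfway mark toward that limit)
        hi_mark = (center + upper) / 2
        lo_mark = (center + lower) / 2
        for i in range(len(points) - 3):
            window = points[i:i + 4]
            if sum(p > hi_mark for p in window) >= 3:
                return True
            if sum(p < lo_mark for p in window) >= 3:
                return True
        return False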

0:12:56.0 AS: And let me ask you about the intervention for just a second, because I could imagine that if this project was going on, the first question is, Jessica's students obviously know that this experiment is going on?

0:13:08.3 JD: Yes.

0:13:09.8 AS: Because they're filling out a survey. And my first question is, do they know that there's an intervention happening? I would expect that it would be yes, because they're gonna feel or see that intervention. Correct?

0:13:25.1 JD: Sure. Yep.

0:13:25.2 AS: That's my first point that I want to think about. And the second point is, let's imagine now that everybody in the classroom has been seeing this chart, and everybody's excited, and they've got a lot of ideas about how they could improve. Jessica probably has a lot of ideas. So the temptation is to say, let's change these three things and see what happens.

0:13:46.5 JD: Yeah.

0:13:47.1 AS: Is it important that we only do one thing at a time, one intervention at a time, or not? So maybe those are two questions I have in my mind.

0:13:58.6 JD: Yeah, so to the first question, you're saying there might be some type of participant or...

0:14:02.3 AS: Bias.

0:14:03.3 JD: Observer effect, like they want this to happen. That's certainly possible. But speaking to the second question, what intervention do you go with? Do you go with one or do you go with multiple? If you remember, a couple of episodes ago we talked about, and we actually looked at, a fishbone diagram that Jessica and her students created, and they said, okay, what causes us to have low joy in class? And then they sort of mapped those, they categorized them, and there were different things like technology not working. If you remember, one was distractions, like other teachers walking into the room during the lesson. And one of them was classmates making a lot of noise, making noises during class and distracting me. And so they mapped out different causes. I think they probably came up with like 12 or 15 different causes as possibilities.

0:14:58.7 JD: And they actually voted as a class: which of these, if we worked on it, would have the biggest impact? So not every kid voted for it, but the item that the most kids thought would have the biggest impact was if we could somehow stop all the noises, basically. So they came up with that as a class, but it wasn't everybody's idea. But I think we've also talked about sort of the lessons from David Langford, where once kids see that you're gonna actually take this seriously, take their ideas seriously and start acting on them, they take the project pretty seriously too. So maybe not a perfect answer, but that's sort of what we...

0:15:38.0 AS: I was thinking that, ultimately, you could get short-term blips when you do an intervention and then it stabilizes possibly. That's one possibility. And the second thing I thought is, well, ultimately the objective, whether that's an output from a factory and you're improving that output, or whether that's the output related to joy in the classroom as an example, you want it to go up and stay up, and you want the students to see it and say, wow, look, it's happening. So, yeah.

0:16:11.7 JD: And there's different ways you can handle this. So this joy thing could go up to a certain point. They're like, I don't know if we can get any more joy, like, it's pretty high. And what you could do at that point is say, okay, I'm gonna assign a student to just sort of, every once in a while, we'll keep doing these surveys and we will sort of keep plotting the data, but we're not gonna talk about it a lot. I'm just gonna assign this as a student's job to plot the new data points. And we'll kind of measure it, but we won't keep up with the intervention 'cause we got it to a point that we're pretty happy with. And now as a class we may wanna switch our attention to something else.

0:16:45.2 JD: So say we've gotten into the winter months and attendance has dipped. Maybe we've been charting that and say, hey guys, we gotta kinda work on this. This has gone below sort of a level that's really good for learning. So let's think about as a group how we could come up with some ideas to raise that. So maybe you turn your attention to something else, 'cause you can't pay attention to everything at once.

0:17:07.2 AS: Yeah, and I think I could use an example from my Valuation Master Class Boot Camp, where students were asking for more personal feedback and I realized I couldn't really scale this class if I had to get stuck grading hundreds of assignments, basically. And that's when I came up with the concept of Feedback Friday, where one student from each team would present and then I would give feedback, I would give a critique, and it would be intense, and all students would be watching, it would be recorded, and all of a sudden all the issues related to wanting this personal feedback went away. And therefore, once I instituted it on a regular basis, I went on to the next issue and made sure that I didn't lose the progress that I had made, and continued to make Feedback Friday better and better.

0:17:56.2 JD: Yeah. Yeah. That's great. That's great. I'll share my screen so you can kinda see what this looked like in Jessica's class now, what the chart looks like now. So now you see that same chart, that same process behavior chart, the exact same one we were just looking at, except now you can see this dashed vertical line that marks the spot where the intervention we just talked about was started. And what the kids and Jessica are actually doing is running a PDSA cycle, a Plan-Do-Study-Act cycle. That's the experimental cycle in her class. And what they're running that PDSA on is, again, how can we put something in place to reduce the distracting noises. And so what the students actually said is, if we get a deduction for making noises, then there will be less noises. And so in the school's sort of management system, a deduction is sort of like a demerit.

0:19:00.0 JD: If you went to a Catholic school or something like that, you might remember demerits, and some public schools had them as well, but basically it's a minor infraction that goes home or gets communicated to parents at the end of the week. But the kids came up with this, so their basic premise, their plan, their prediction is: if there are less noises, we'll be able to enjoy science class. And if we give deductions for these noises, then there'll be less noises. So some people may push back, well, I don't think you should give deductions or something like that, and, fine, you could have that opinion. But I think the powerful point here is that the students created this, it was their idea. And so they're testing that idea to see if it actually has impact.
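As an aside for readers who want to capture a cycle like this in writing, one lightweight format is a simple structured record. This sketch is purely illustrative, not a form Jessica's class actually used; the field values paraphrase the experiment as described in this episode:

    # Purely illustrative record of a single PDSA cycle; the field values
    # paraphrase the class's experiment as described in this episode.
    from dataclasses import dataclass

    @dataclass
    class PDSACycle:
        plan: str        # the change idea and how it will be run
        prediction: str  # what the team expects to happen
        do: str          # what actually happened while running it
        study: str       # what the chart showed after the change
        act: str         # adopt, adapt, or abandon the change

    noise_cycle = PDSACycle(
        plan="Give a deduction for distracting noises during science class",
        prediction="Fewer noises, so higher joy-in-learning survey averages",
        do="Ran the deduction system starting March 19th",
        study="Watch the process behavior chart for one of the three signal rules",
        act="To be decided once enough post-change data is gathered",
    )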

0:19:44.8 JD: And they're learning to do that test in this scientific thinking way by using the Plan-Do-Study-Act cycle, and seeing if it actually has an impact on their data. So at the point where they draw this dashed line, let's call that March 19th, we can see a couple of additional data points have been gathered. So you can see the data went up from 3/18 to 3/21. So from March 18th to March 21st, it rose from about, let's call it 73% or so, up to about 76% on March 21st. And then the next day it rose another percent or two, so let's call that 78%.

0:20:28.1 JD: And so the trap here is you could say, okay, we did this intervention and it made things better. But the key point is, the data did go up, but we haven't gathered enough additional data to see one of those patterns that we talked about that would say, oh, this actually has had a significant change. Because before the dashed line, you can see data points that are as high or even higher than some of these ones that we see after the PDSA started. So it's too early to say one way or another if this intervention is having an impact. So we're not gonna overreact. You could see a place where you're so excited that it did go up for a couple of days from where it was on March 18th, before you started this experiment, but that's a trap. Because it's still just common cause data, still just bouncing around that average, still within the bounds of the red process limits that define the science system.

0:21:34.2 AS: I have an experiment going on in my latest Valuation Master Class Boot Camp, but in that case, it's a 6-week period that I'm testing, and then I see the outcome at the end of the six weeks to test whether my hypothesis was right or not. Whereas here it's real time, trying to understand what's happening. So yes, you can be tempted when it's real time to try to jump to conclusions, but when you say, well, okay, I can't really get the answer until I've run the test over a fixed time period, then you don't have as much of that temptation to draw a conclusion.

0:22:14.1 JD: Yeah. I should have actually taken this a step farther. I marked it with this Plan-Do-Study-Act cycle. What I should have done too is write "noises" or something like that, deduction for noises, some small annotation, so it'd be clear what this PDSA cycle is.

0:22:32.1 AS: In other words, you're saying identify the intervention by the vertical line, but also label it as to what that intervention was, which you've done before on the other chart. I remember.

0:22:42.1 JD: Yeah. And then, just looking at this when she puts it up on the smart board, the class can see it again too: oh yeah, that's when we ran that first intervention, and that was the intervention where we did deductions for noises. But the bigger point is that this never happens, where you have some data, you understand a system, you plan a systematic intervention, and then you gather more data right after it to see if it's having an impact. We never do that in education, ever. Never have I seen anything like this before. Just this little setup combining the process behavior chart with the Plan-Do-Study-Act cycle, I think, is very, very powerful and a very different approach than what school improvement...

0:23:33.4 AS: Exciting.

0:23:34.6 JD: Yeah, than the typical approach to school improvement. So I'll stop that share for a second there, and we can do a quick overview of lesson 10 and then jump back into the chart as more data has been gathered. So lesson 10 is: the purpose of data analysis is insight. Seems pretty straightforward. This is one of those key teachings from Dr. Donald Wheeler, who we've talked about. He taught us that the best analysis is the simplest analysis which provides the needed insight.

0:24:08.1 AS: So repeat lesson 10, again, the purpose of...

0:24:11.6 JD: The purpose of data analysis is insight.

0:24:14.7 AS: Yep.

0:24:15.6 JD: So just plotting the dots on the run chart and turning the run chart into the process behavior chart, that's the most straightforward method for understanding how our data is performing over time. We've talked about this a lot, but it's way more intuitive to understand the data and how it's moving than if you just stored it in a table or a spreadsheet. You've got to use these time sequence charts. That's very important.

0:24:42.2 AS: And I was just looking at the definition of insight, which is a clear, deep, and sometimes sudden understanding of a complicated problem or situation.

0:24:51.6 JD: Yeah. And I think that can happen, it's much more likely to happen, when you have the data visualized in this way than in the ways we typically present data, in just a table or a spreadsheet. And so in Jessica's case, we left off on March 22nd and they had done two surveys after the intervention. And so then of course what they do is continue over the next 4, 5, 6 weeks, gathering more of that data as they're running that intervention, and then we can switch back and see what that data is looking like now.

0:25:28.3 AS: Exciting.

0:25:30.3 JD: So we have this same chart with that additional data. So we have data all the way out to April 11th now. So they ran this PDSA for about three to four weeks.

0:25:47.9 AS: And that's 11 data points after the intervention. Okay.

0:25:54.0 JD: Yep. Purposeful. So what was I gonna say? Oh, yeah. So three, four weeks for a Plan-Do-Study-Act cycle, that's a pretty good amount of time. Two to four weeks, I've kind of found, is a sweet spot. Shorter than that, it's hard to get enough data back to see if your intervention has made a difference. Longer than that, and you're getting away from the sort of adaptability, the ability to build on an early intervention and make the tweaks you need to. So that two to four week time period for your PDSA seems like a sweet spot to me. So she's continued to collect this joy in learning data to see... Basically what she and her class are doing is seeing if their theory is correct. Does this idea of giving deductions for making noises have an impact? Is it effective?

0:26:44.0 JD: So if the data comes back and there is no change, no indication of improvement, then a lot of people will say, well, my experiment has failed. And my answer to that is, no, it hasn't failed. It might not have worked like you wanted, but you learn very quickly that the noise deduction is not going to work and you're gonna try some other thing, some other intervention. We learn very, very quickly, within three or four weeks, that we need to try something new. Now, in the case of Jessica's class, that's not what happened. So you can actually see that vertical dotted line is still at March 19th, and we have those 11 additional data points. And you can actually see, if you count, starting with March 21st, you count 1-2-3-4-5-6-7-8-9-10-11 data points that are above that green average line from before.

0:27:45.5 JD: So originally the red lines, the limits, and the central line would just run straight across. But once I see that eight or more of those points are on one side of that central line, then I actually shift the limits and the average line, 'cause I have a new system. I've shifted it up, and that actually is an indication that this intervention has worked. Now, for those that are watching, it doesn't appear that all the blue dots are above that green line, but they were before the shift. Remember, the shift indicates a new system. So I go back to the point where the first dot of the eight or more in a row occurred, and that's where I have indicated a new system with the shift in the limits and the central line. So their theory was actually correct. This idea of giving a deduction for noises actually worked to improve the joy in Jessica's science class. It was a successful experiment.
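Continuing the earlier snippets, here is a sketch of how that shift can be computed (illustrative only): when the eight-in-a-row rule fires and the cause is known, the central line and limits are recomputed from the first point of the run onward.

    # When eight-plus consecutive points fall on one side of the central line,
    # recompute the center and limits from the first point of that run onward.
    def shifted_limits(points, run_start):
        segment = points[run_start:]  # data belonging to the new system
        mean = sum(segment) / len(segment)  # new central line
        mrs = [abs(b - a) for a, b in zip(segment, segment[1:])]
        avg_mr = sum(mrs) / len(mrs)
        return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

    # e.g., if the run starts at the 29th plotted point (index 28):
    # center, lower, upper = shifted_limits(points, 28)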

0:28:52.7 AS: Can I draw on your chart there and ask some questions?

0:29:00.5 JD: Sure. Yeah.

0:29:00.6 AS: So one of my questions is: is it possible, for instance, that in the preliminary period, let's say the first 20 days or so, things were kind of stabilized, and then things potentially improved in the period before the intervention, so that the intervention caused an increase, but one that may not be as significant as it appears based on the most recent, let's say, 10 days or something like that? So that's my question on it. I'll delete my drawings there.

0:29:46.3 JD: Yeah, I think that's a fair question. So, the reason I didn't shift those before, despite the fact that you do see a pattern, is that I considered the period before the dotted line a baseline period where we were just collecting, 'cause they hadn't tried anything yet. So Dr. Wheeler has this series of four questions. In addition to seeing a signal, he's got these other questions that he typically asks, and they're yes/no questions. And you want the answer to all of those to be yes. And one of 'em is, do you know why an improvement or a decline happened? And if you don't, then you really shouldn't shift the limits. So that's why I didn't shift them before. I chose not to shift them until we actually did something, actually tried something.

0:30:33.2 AS: Which is basically saying that you're trying to get the voice of the students, a clear voice. And it may be that over the time of the... sorry, over the time of the initial data gathering, the repetition of it may have caused students to feel more joy in the classroom because they were being asked, and maybe that started to adjust a little bit up, and there's the baseline. So, yep. Okay.

0:31:01.6 JD: Yeah. And so this is sort of where the project ended for the fellowship that Jessica was doing. But if we could see what happened further out in the school year, either Jessica and the class could be sort of satisfied with where the joy in learning is at this point where the improvement occurred, or they could run another cycle, testing a tweaked version of that noise reduction PDSA, that intervention, or they could add something to it.

0:31:43.0 AS: Or they could have gone back to the fishbone diagram. Maybe the noise wasn't actually the number one contributor the students thought it would be, and maybe by looking at the next cause they could see, oh, hey, wait a minute, this may be a higher contributor, or not.

0:32:01.2 JD: Yeah. And when you dug into the actual plan, the specifics of how that noise deduction was going to work, there may be something in that plan that didn't go as planned, and that's where you would have to lean on the team, 'cause we've talked about the three sort of parts of the improvement team that you need. You need the frontline people: that's the students. You need the person with the authority to change the system: that's Jessica. And then someone with knowledge of the system, profound knowledge: that's me. Well, Jessica and her students are the ones in there every day. So they're gonna have learning about how that intervention went, and that would then inform the second cycle of the PDSA, whatever that was gonna be, whatever they're gonna work on next. The learning from the first cycle is gonna inform that next cycle.

0:32:51.4 JD: So the idea is that you don't just run a PDSA once but you repeatedly test interventions or change ideas until you get that system where you want it to be.

0:33:01.1 AS: So for the listeners and viewers out there, I bet you're thinking, gosh, Jessica's pretty lucky to have John help her go through this. And I think about lots of things that I want to talk to you about [laughter], about my testing in my own teaching, but also in my business. So I think one of the exciting things about this is that we do a lot of these things in our head sometimes. We think this will make a difference, but we're not usually going to this level of detail in the way that we actually perform the tests and try to see what the outcomes are.

0:33:43.9 JD: Yeah, I think for school people too, when we've attempted to improve schools, reform schools, what happens is we go really fast and the learning actually happens very slowly, and we don't really appreciate what it actually takes to change something in practice. And what happens then to the frontline people like teachers... The reformers have good intentions, but the people on the front line just get worn out, basically, and a lot of times nothing actually even improves. You just wear people out. You make these big changes go fast and wide in the system, and you don't really know exactly what to do on the ground. The opposite is what's happening in Jessica's classroom. They're actually learning fast by trying very small changes and getting feedback right in the place where that feedback needs to be given, right in the classroom, and then they can learn from that and make changes.

0:34:49.8 JD: And again, it may seem small. Maybe it doesn't seem that revolutionary to people, but to me it's a completely revolutionary, completely different way to do school improvement. It actually honors the expertise of the teacher in the classroom, it takes into account how students are experiencing a change, and then I'm providing a method that they can use to make that classroom better for everybody. And I think in doing so, students are more likely to find joy in their work, joy in their learning, and teachers are more likely to find joy in their work as well. So to me it's a win-win for all those involved.

0:35:34.9 AS: Fantastic. Well, should we wrap up there?

0:35:40.6 JD: Yeah, I think that's a good place to wrap up this particular series.

0:35:45.1 AS: And maybe you could just review the whole series of what we've done, just to make sure that everybody's clear, and so if somebody just came in on this one, they know a little bit of the flow of what they're gonna get in the prior ones.

0:36:00.4 JD: Yeah. So we did six episodes, and in those six episodes we started off just talking about what you need to have in place for healthy goal setting at an organizational level. And we said four conditions should be in place before you ever set a goal: you have to understand the capability of your system, you have to understand the variation within your system, you have to understand whether the system you're studying is stable, and then you have to have a logical answer to the question, by what method? By what method are you gonna bring about improvement, or by what method are you gonna get to this goal that you wanna set? So we talked about that, you gotta have these four conditions in place, and without those, we said, goal setting is often an act of desperation.

0:36:49.7 JD: And then from there, we started talking about these 10 key lessons for data analysis, so as you get the data about the goal and start to understand the conditions for that system or process, you can use those 10 data lessons to interpret the data you're looking at or studying. We basically did that over the first four episodes. In the last few episodes, what we've done is look at those lessons applied to Jessica's improvement project, and that's what we just wrapped up.

0:37:23.7 AS: I don't know about the listeners and viewers but for me this type of stuff just gets me excited about how we can improve the way we improve.

0:37:33.4 JD: Yeah. For sure.

0:37:34.9 AS: And that's exciting. So John, on behalf of everyone at the Deming Institute, I want to thank you again for this discussion. And for listeners, remember to go to deming.org to continue your journey. You can find John's book, Win-Win: W. Edwards Deming, the System of Profound Knowledge, and the Science of Improving Schools, on amazon.com. This is your host, Andrew Stotz, and I'll leave you with one of my favorite quotes from Dr. Deming: "People are entitled to joy in work."

  continue reading

185 эпизодов

Artwork
iconПоделиться
 
Manage episode 424109510 series 2320637
Контент предоставлен Darlene Suyematsu and The Deming Institute. Весь контент подкастов, включая эпизоды, графику и описания подкастов, загружается и предоставляется непосредственно компанией Darlene Suyematsu and The Deming Institute или ее партнером по платформе подкастов. Если вы считаете, что кто-то использует вашу работу, защищенную авторским правом, без вашего разрешения, вы можете выполнить процедуру, описанную здесь https://ru.player.fm/legal.

In the final episode of the goal setting in classrooms series, John Dues and Andrew Stotz discuss the last three of the 10 Key Lessons for implementing Deming in schools. They finish up with the example of Jessica's 4th-grade science class.

TRANSCRIPT

0:00:02.4 Andrew Stotz: My name is Andrew Stotz, and I'll be your host as we continue our journey into the teachings of Dr. W Edwards Deming. Today I'm continuing my discussion with John Dues, who is part of the new generation of educators striving to apply Dr. Deming's principles to unleash student joy in learning. This is episode six about goal setting through a Deming lens. John, take it away.

0:00:26.4 John Dues: Hey, Andrew, it's good to be back. Yeah, for the past handful of episodes or so, we've been talking about organizational goal setting. We covered these four conditions of healthy goal setting and then got into these 10 key lessons for data analysis. And then we've been looking at those 10 key lessons applied to an improvement project. And we've been talking about a project that was completed by Jessica Cutler and she did a Continual Improvement Fellowship with us here at our schools. And if you remember, Jessica was attempting to improve the joy in learning of her students in her fourth grade science class. So last time we looked at lessons five through seven. Today we're gonna look at those final three lessons, eight, nine and ten applied to her project.

0:01:15.7 AS: It's exciting.

0:01:17.1 JD: Yeah. So we'll jump in here. We'll kind of do a description, a refresher of each lesson. And we'll kind of talk about how it was applied to her specific project, and we'll look at some of her data to kind of bring that live for those of the folks that have video. Let's jump in with lesson number eight. So we've talked about this before, but lesson number eight was: more timely data is better for improvement purposes. So we've talked about this a lot. We've talked about something like state testing data. We've said, it can be useful, but it's not super useful for improvement purposes, because we don't get it until the year ends. And students in our case, have already gone on summer vacation by the time that data comes in. And you know that the analogous data probably happens in lots of different sectors where you get data that lags, to the point that it's not really that useful for improvement purposes.

0:02:15.8 JD: So when we're trying to improve something, more frequent data is helpful because then we can sort of see if an intervention that we're trying is having an effect, the intended effect. We can learn that more quickly if we have more frequent data. And so it's, there's not a hard and fast rule, I don't think for how frequently you should be gathering data. It just sort of needs to be in sync with the improvement context. I think that's the important thing. Whether it's daily or a couple times a day or weekly, or monthly, quarterly, whatever, it's gotta be in sync with whatever you're trying to improve.

0:02:50.5 AS: You made me think about a documentary I saw about, how they do brain surgery and how the patient can't be sedated because they're asking the patient questions about, do you feel this and they're testing whether they're getting... They're trying to, let's say, get rid of a piece of a cancerous growth, and they wanna make sure that they're not getting into an area that's gonna damage their brain. And so, the feedback mechanism that they're getting through their tools and the feedback from the patient, it's horrifying to think of the whole thing.

0:03:27.7 JD: Yeah.

0:03:28.3 AS: It's a perfect example of why more timely data is useful for improvement purposes 'cause imagine if you didn't have that information, you knock the patient out, you get the cancerous growth, but who knows what you get in addition to that.

0:03:43.7 JD: Yeah, that's really interesting. I think that's certainly an extreme example, [laughter], but I think it's relevant. No matter what our context, that data allows us to understand what's going on, variation, trends, whether our system is stable, unstable, how we should go about improving. So it's not dissimilar from the doctors in that example.

0:04:06.8 AS: And it's indisputable I think, I would argue. But yet many people may not, they may be operating with data that's not timely. And so this is a reminder that we would pretty much always want that timely data. So that's lesson eight. Wow.

0:04:22.6 JD: Lesson eight. Yeah. And let's see how we can, I'll put a visualization on the screen so you can see what Jessica's data look like. All right. So now you can see. We've looked at these charts before. This is Jessica's process behavior chart for joy in science. So just to reorient, you have the joy percentage that students are feeling after a lesson on the x-axis, sorry, on the y-axis. On the x-axis, you have the school dates where they've collected this survey information from students in Jessica's class.

0:04:57.0 AS: Can you put that in Slide Show view?

0:05:00.4 JD: Yeah. I can do that. Yeah.

0:05:02.7 AS: Just it'll make it bigger, so for the...

0:05:06.5 JD: There you go.

0:05:07.8 AS: For the listeners out there, we're looking at a chart of daily, well, let's say it looks like daily data. There's probably weekends that are not in there because class is not on weekends, but it's the ups and downs of a chart that's ranging between a pretty, a relatively narrow range, and these are the scores that are coming from Jessica's surveying of the students each day, I believe. Correct?

0:05:34.2 JD: Yeah. So each day where Jessica is giving a survey to assess the joy in science that students are feeling, then she's averaging all those students together. And then the plot, the dot is the average of all the students sort of assessment of how much joy they felt in a particular science lesson.

0:05:54.7 AS: And that's the average. So for the listeners out there John's got an average line down the middle of these various data points, and then he is also got a red line above and a red line below the, above the highest point and slightly below the lowest point. Maybe you can explain that a little bit more.

0:06:15.4 JD: Yeah. So with Jessica, you remember originally she started plotting on a line chart or a run chart when we just had a few data points just to kind of get a sense of how things are moving so she could talk about it with her class. And over time what's happened is she's now got, at this point in the project, which she started in January, now this is sort of mid-March. And so she's collected two to three data points a week. So she doesn't survey the kids every day just for time sake, but she's getting two, three data points a week. And so by March, she started just a couple months ago, she's got 28 data points. So that sort of goes back to this idea of more timely data is better for improvement.

0:07:00.9 JD: And a lot of times, let's say a school district or a school does actually survey their students about how, what they think of their classes. That might happen at best once a semester or maybe once a year. And so at the end of the year you have one or two data points. So it's really hard to tell sort of what's actually going on. Compared to this, Jessica's got these 28 data points in just about two months or so of school. So she's got 28 data points to work with. And so what her and her students are doing with this data then, one, they can see how it's moving up and down. So we have, the blue dots are all the plotted points, like you said, the green line is the average running sort of through the middle of the data, and then those red lines are our process limits, the upper and lower natural process limits that sort of tell us the bounds of the system.

0:07:50.4 JD: And that's based on the difference in each successive data point. But the most important thing is that as Jessica and her students are looking at this, initially, they're really just studying it and trying to sort of see how things are going from survey to survey. So one of the things that Deming talked about frequently is not tampering with data, which would be if you sort of, you overreact to a single data point. So let's say, a couple of days in, it dips down from where it started and you say, oh my gosh, we gotta change things. And so that's what Deming is talking about. Not tampering, not overreacting to any single data point. Instead look at this whole picture that you get from these 28 data points and then talk about...

0:08:41.5 JD: In Jessica's case she's talking about with her students, what can we learn from this data? What does the variation from point to point look like? If we keep using the system, the fourth grade science system, if we leave it as is, then we'll probably just keep getting data pretty similar to this over time, unless something more substantial changes either in the negative or the positive. So right now they...

0:09:10.1 AS: And I think for the listeners, it's, you can see that there's really no strong pattern that I can see from this. It's just, there's some, sometimes that there's, seems like there's little trends and stuff like that. But I would say that the level of joy in the science classroom is pretty stable.

0:09:32.1 JD: Pretty stable. Yeah. Pretty high. It's bouncing around maybe a 76% average across those two and a half months or so. And so, they, you kind of consider this like the baseline. They've got a good solid baseline understanding of what joy looks like in this fourth grade science classroom. Did that stop sharing on your end?

0:10:00.2 AS: Yep.

0:10:00.2 JD: Okay, great. So that's lesson eight. So clearly she's gathered a lot of data in a pretty short amount of time. It's timely, it's useful, it's usable, it can be studied by her and her students. So we'll switch it to lesson nine now. So now they've got a good amount of data. They got 28 data points. That's plenty of data to work with. So lesson nine is now we wanna clearly label the start date for an intervention directly in her chart. And remember from earlier episodes, not only are we collecting this data, we're actually putting this up on a screen on a smart board in the classroom, and Jessica and her students are studying this data together. They're actually looking at this, this exact chart and she's explaining sort of kind of like we just did to the listeners. She's explaining what the chart means.

0:10:54.2 JD: And so over time, like once a week she's putting this up on the smart board and now kids are getting used to, how do you read this data? What does this mean? What are all these dots? What do these numbers mean? What do these red lines mean? That type of thing. And so now that they've got enough data, now we can start talking about interventions. That's really what lesson nine is about. And the point here is that you want to clearly, explicitly with a literally like a dotted line in the chart to mark on the day that you're gonna try something new. So you insert this dashed vertical line, we'll take a look at it in a second, on the date the intervention started. And then we're also gonna probably label it something simple so we can remember what intervention we tried at that point in time.

0:11:42.7 JD: So what this then allows the team to do is then to very easily see the data that happened before the intervention and the data that happened after the implementation of this intervention or this change idea. And then once we've started this change and we start plotting points after the change has gone into effect, then we can start seeing or start looking for those patterns in the data that we've talked about, those different rules, those three rules that we've talked about across these episodes. And just to refresh, rule one would be if we see a single data point outside of either of the limits, rule two is if we see eight consecutive points on either side of that green average line, and rule three is if we see three out of four dots in a row that are closer to one of the limits than they are to that central line.

0:12:38.3 JD: So that again, those patterns tell us that something significant, mathematically improbable has happened. It's a big enough magnitude in change that you wouldn't have expected it otherwise. And when we see that pattern, we can be reasonably assured that that intervention that we've tried has worked.

0:12:56.0 AS: And let me ask you about the intervention for just a second because I could imagine that if this project was going on, first question is, does Jessica's students are, obviously know that this experiment is going on?

0:13:08.3 JD: Yes.

0:13:09.8 AS: Because they're filling out a survey. And my first question is, do they know that there's an intervention happening? I would expect that it would be yes, because they're gonna feel or see that intervention. Correct?

0:13:25.1 JD: Sure. Yep.

0:13:25.2 AS: That's my first point that I want to think about. And the second point is, let's imagine now that everybody in the classroom has been seeing this chart and they're, everybody's excited and they got a lot of ideas about how they could improve. Jessica probably has a lot of ideas. So the temptation is to say, let's change these three things and see what happens.

0:13:46.5 JD: Yeah.

0:13:47.1 AS: Is it important that we only do one thing at a time or that one intervention at a time or not? So maybe those are two questions I have in my mind.

0:13:58.6 JD: Yeah, so to the first question, are you, you're saying there there might be some type of participant or...

0:14:02.3 AS: Bias.

0:14:03.3 JD: Observer effect like that they want this to happen. That's certainly possible. But speaking to the second question, what intervention do you go with? Do you go with one or you go with multiple? If you remember a couple of episodes ago we talked about, and we actually looked at a fishbone diagram that Jessica and her students that they created and they said, okay, what causes us to have low joy in class? And then they sort of mapped those, they categorized them, and there were different things like technology not working. If you remember, one was like distractions, like other teachers walk into the room during the lesson. And one of them was others like classmates making a lot of noise, making noises during class and distracting me. And so they mapped out different causes. I think they probably came up with like 12 or 15 different causes as possibilities.

0:14:58.7 JD: And they actually voted as a class. Which of these, if we worked on one of these, which would have the biggest impact? So not every kid voted for it, but the majority or the item that the most kids thought would have the biggest impact was if we could somehow stop all the noises basically. So they came up with that as a class, but not, it wasn't everybody's idea. But I think we've also talked about sort of the lessons from David Langford where once kids see that you're gonna actually take this serious, take their ideas serious and start acting on them, they take the project pretty seriously too. So maybe not a perfect answer, but that's sort of what we...

0:15:38.0 AS: I was thinking that, ultimately you could get short-term blips when you do an intervention and then it stabilizes possibly. That's one possibility. And the second thing I thought is, well, I mean ultimately the objective, whether that's an output from a factory, and keeping, improving that output or whether that's the output related to joy in the classroom as an example, you want it to go up and stay up and you want the students to see it and say, wow, look, it's happening. So, yeah.

0:16:11.7 JD: And there's different ways you can handle this. So this joy thing could go up to a certain point. They're like, I don't know if we can get any more joy, like, it's pretty high. And what you could do at that point is say, okay, I'm gonna assign a student to just sort of, every once in a while, we'll keep doing these surveys and we will sort of keep plotting the data, but we're not gonna talk about a lot. I'm just gonna assign this as a student's job to plot the new data points. And we'll kind of, we'll kind of measure it, but we won't keep up with the intervention 'cause we got it to a point that we're pretty happy with. And now as a class we may wanna switch, switch our attention to something else.

0:16:45.2 JD: So we started getting into the winter months and attendance has dipped. Maybe we've been charting that and say, Hey guys, we gotta, gotta kinda work on this. This is gone below sort of a level that's really good for learning. So let's think about as a group how we could come up with some ideas to raise that. So maybe you turn your attention to something else, 'cause you can't pay attention to everything at once.

0:17:07.2 AS: Yeah, and I think I could use an example in my Valuation Master Class Boot Camp where students were asking for more personal feedback and I realized I couldn't really scale this class if I had to get stuck into hundreds of grading basically. And that's when I came up with the concept of feedback Friday, where one student from each team would present and then I would give feedback, I would give a critique and they would be intense and all students would be watching, it would be recorded, and all of a sudden all the issues related to wanting this personal feedback went away. And therefore, once I instituted it on a regular basis, I went on to the next issue and I made sure that I didn't lose the progress that I had made and continue to make feedback Friday better and better.

0:17:56.2 JD: Yeah. Yeah. That's great. That's great. I'll share my screen so you can kinda see what this looked like in Jessica's class now, what the chart looks like now. So now you see that same chart, that same process behavior chart, exact same one we were just looking at except now you can see this, this dashed vertical line that marks the spot where the intervention was started that we just talked about. And what the kids are actually doing, and Jessica are running a PDSA cycle, a Plan-Do-Study-Act cycle. That's the experimental cycle in her class. And what they're running that PDSA on is, again, how can we put something in place to reduce the distracting noises. And so what the students actually said is if we get a deduction for making noises, then there will be less noises. And so in the school's sort of management system, a deduction is sort of like a demerit.

0:19:00.0 JD: If you maybe went to a Catholic school or something like that, or some public schools had demerits as well, but basically it's like a minor infraction basically that goes home or that gets communicated to parents at the end of the week. But the kids came up with this so their basic premise is, their plan, their prediction is if there are less noises, we'll be able to enjoy science class. And if we give deductions for these noises, then there'll be less noises. So some people may push back, well, I don't think you should give deductions or something like that, but which, fine, you could have that opinion. But I think the powerful point here is this is, the students created this, it was their idea. And so they're testing that idea to see if it actually has impact.

0:19:44.8 JD: And they're learning to do that test in this scientific thinking way by using the Plan-Do-Study-Act cycle, and seeing if it actually has an impact on their data. So at the point where they draw this dashed line, let's call that March 19th, we can see a couple of additional data points have been gathered. So you can see the data went up from 3/18 to 3/21. So from March 18th to March 21st, rose from about, let's call it 73% or so, up to about 76% on March 21st. And then that next day it rose another percent or two and let's call that 78%.

0:20:28.1 JD: And so the trap here is you could say, okay, we did this intervention and it made things better. The data did go up, but the key point is we haven't gathered enough additional data to see one of those patterns we talked about that would say, oh, this actually is a significant change. Before the dashed line, you can see data points that are as high as or even higher than some of the ones we see after the PDSA started. So it's too early to say one way or another if this intervention is having an impact, and we're not gonna overreact. You could be so excited that it went up for a couple of days from where it was on March 18th before you started the experiment, but that's a trap. It's still just common cause data, still bouncing around that average, still within the bounds of the red process limits that define the science system.
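
To make the chart mechanics concrete: below is a minimal Python sketch of how the central line and those red process limits on an XmR (process behavior) chart are typically computed, following Wheeler's standard individuals-chart formulas. The joy percentages here are hypothetical stand-ins, not data from Jessica's class.

```python
# Minimal sketch of XmR (process behavior) chart limits, per Wheeler's
# standard individuals-chart formulas. All values are hypothetical.

def xmr_limits(values):
    """Return (lower limit, central line, upper limit) for an XmR chart."""
    center = sum(values) / len(values)                   # average of the data
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)     # average moving range
    upper = center + 2.66 * avg_mr                       # upper natural process limit
    lower = center - 2.66 * avg_mr                       # lower natural process limit
    return lower, center, upper

# Hypothetical baseline "joy in learning" survey percentages, one per class
baseline = [71, 74, 69, 73, 76, 70, 72, 75, 73, 71]
lo, center, hi = xmr_limits(baseline)
print(f"process limits: {lo:.1f}% to {hi:.1f}%, average {center:.1f}%")
```

With limits computed this way, a rise of a point or two, like the move from March 18th to March 21st, can be checked against the limits rather than celebrated on sight.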

0:21:34.2 AS: I have an experiment going on in my latest Valuation Master Class Boot Camp, but in that case it's a six-week period that I'm testing, and I see the outcome at the end of the six weeks to find out whether my hypothesis was right or not. Whereas here it's real time, trying to understand what's happening. So yes, you can be tempted when it's real time to jump to conclusions, but when you say, okay, I can't really get the answer until I've run the test for a fixed time period, then you don't have as much of that temptation to draw a conclusion.

0:22:14.1 JD: Yeah. I should have actually taken this a step further. I marked it with this Plan-Do-Study-Act cycle. What I should have done too is write "noises" or something like that, "deduction for noises," some small annotation, so it'd be clear what this PDSA cycle was testing.

0:22:32.1 AS: In other words, you're saying identify the intervention with the vertical line, but also label it with what that intervention was, which you've done before on the other chart. I remember.

0:22:42.1 JD: Yeah. And then just looking at this when she puts it up on the smart board for the class to see, it's, oh yeah, that's when we ran that first intervention, the one where we did deductions for noises. But the bigger point is that this never happens in education, where you have some data, you understand a system, you plan a systematic intervention, and then you gather more data right after it to see if it's having an impact. Never have I seen this before. Just this little setup, combining the process behavior chart with the Plan-Do-Study-Act cycle, I think is very, very powerful and a very different approach.
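
As one way to add the annotation John wishes he had included, here is a short matplotlib sketch that draws the dashed intervention line and labels the PDSA. The dates and percentages are hypothetical, chosen only to resemble the chart being described.

```python
# Sketch of annotating an intervention on a run chart with matplotlib.
# All dates and values are hypothetical illustrations.
import matplotlib.pyplot as plt

dates = ["3/14", "3/15", "3/18", "3/19", "3/21", "3/22"]
joy = [72, 74, 73, 74, 76, 78]           # percent of class reporting joy

fig, ax = plt.subplots()
ax.plot(dates, joy, marker="o")
start = dates.index("3/19")              # position of the PDSA start date
ax.axvline(x=start, linestyle="--", color="gray")
ax.annotate("PDSA 1: deduction for noises", xy=(start, 77))
ax.set_ylabel("Joy in learning (%)")
ax.set_ylim(68, 82)
plt.show()
```

A label like this means anyone glancing at the smart board later can tell which experiment each stretch of the chart belongs to.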

0:23:33.4 AS: Exciting.

0:23:34.6 JD: Yeah, very different from the typical approach to school improvement. So I'll stop that share for a second, and we can do a quick overview of lesson 10 and then jump back into the chart as more data has been gathered. So lesson 10 is: the purpose of data analysis is insight. Seems pretty straightforward. This is one of those key teachings from Dr. Donald Wheeler, who we've talked about. He taught us that the best analysis is the simplest analysis which provides the needed insight.

0:24:08.1 AS: So repeat lesson 10, again, the purpose of...

0:24:11.6 JD: The purpose of data analysis is insight.

0:24:14.7 AS: Yep.

0:24:15.6 JD: So just plotting the dots on the run chart and turning the run chart into the process behavior chart is the most straightforward method for understanding how our data is performing over time. We've talked about this a lot, but it's way more intuitive to see the data and how it's moving than if you just store it in a table or a spreadsheet. You've got to use these time sequence charts. That's very important.

0:24:42.2 AS: And I was just looking at the definition of insight, which is a clear, deep, and sometimes sudden understanding of a complicated problem or situation.

0:24:51.6 JD: Yeah. And I think that's much more likely to happen when you have the data visualized in this way than in the ways we typically present data, like a table or a spreadsheet. And so in Jessica's case, we left off on March 22nd, when they had done two surveys after the intervention. Then of course they continued gathering more of that data over the next several weeks as they were running that intervention, so we can switch back and see what that data looks like now.

0:25:28.3 AS: Exciting.

0:25:30.3 JD: So we have this same chart with that additional data, all the way out to April 11th now. So they ran this PDSA for about three to four weeks.

0:25:47.9 AS: And that's 11 data points after the intervention. Okay.

0:25:54.0 JD: Yep. Purposeful. So three to four weeks for a Plan-Do-Study-Act cycle is a pretty good amount of time. Two to four weeks, I've found, is a sweet spot. Shorter than that, it's hard to get enough data back to see if your intervention has made a difference. Longer than that, you're getting away from the adaptability, the ability to build on an early intervention and make the tweaks you need to. So that two-to-four-week time period for your PDSA seems like a sweet spot to me. So she's continued to collect this joy in learning data. Basically, what she and her class are doing is seeing if their theory is correct: does this idea of giving deductions for making noises have an impact? Is it effective?

0:26:44.0 JD: If the data comes back and there is no change, no indication of improvement, then a lot of people will say, well, my experiment has failed. And my answer to that is, no, it hasn't failed. It might not have worked like you wanted, but you learned very quickly, within three or four weeks, that the noise deduction is not going to work and you need to try some other intervention. Now, in the case of Jessica's class, that's not what happened. You can see that the vertical dotted line is still at March 19th, and we have those 11 additional data points. And if you count, starting with March 21st, you can count 11 data points in a row that are above that green average line from before.

0:27:45.5 JD: So originally the red lines, the limits, and the central line would just run straight across. But once I see that eight or more of those points are on one side of that central line, I actually shift the limits and the average line, 'cause I have a new system. I've shifted it up, and that is an indication that this intervention has worked. Now, for those that are watching, it doesn't appear that all the blue dots are above that green line, but they were before the shift; remember, the shift indicates a new system. So I go back to the point where the first dot of the eight or more in a row occurred, and that's where I've indicated a new system with the shift in the limits and the central line. So their theory was actually correct. This idea of giving a deduction for noises actually worked to improve the joy in Jessica's science class. It was a successful experiment.
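
The shift John describes follows a common run rule: eight or more consecutive points on one side of the central line signal that the system has changed, so the limits get recalculated from the start of that run. Here's a minimal sketch of that detection logic, written from the description in this episode rather than taken from the project's own tooling.

```python
# Sketch of the "eight or more in a row on one side of the central line"
# run rule. Scores are hypothetical post-intervention survey values.

def find_shift(values, center, run_length=8):
    """Return the index where a qualifying run begins, or None."""
    run_start, run_side = 0, 0
    for i, v in enumerate(values):
        side = 1 if v > center else (-1 if v < center else 0)
        if side != 0 and side == run_side:
            if i - run_start + 1 >= run_length:
                return run_start         # the new system starts here
        else:
            run_start, run_side = i, side
    return None

scores = [74, 76, 78, 77, 79, 80, 78, 81, 79, 82, 80]  # e.g. post-3/21 data
shift_at = find_shift(scores, center=73)               # old central line ~73%
if shift_at is not None:
    print(f"Signal: recalculate limits from point {shift_at} onward")
```

In Jessica's chart, this is why the limits and central line step up at March 21st, the first point of the run, rather than at the date the experiment began.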

0:28:52.7 AS: Can I draw on your chart there and ask some questions?

0:29:00.5 JD: Sure. Yeah.

0:29:00.6 AS: So my question is: is it possible that in the preliminary period, let's say the first 20 days or so, things were kind of stabilized, and then things potentially improved in the period just before the intervention? In that case the intervention caused an increase, but it may not be as significant as it appears, based on the most recent, let's say, 10 days before it. So that's my question. I'll delete my drawings there.

0:29:46.3 JD: Yeah, I think that's a fair question. The reason I didn't shift those before, even though you do see a pattern before the dotted line, is that I considered that period a baseline period where we were just collecting data, because they hadn't tried anything yet. Dr. Wheeler has this series of four questions that, in addition to seeing a signal, he typically asks. They're yes/no questions, and you want the answer to all of them to be yes. One of them is: do you know why an improvement or a decline happened? And if you don't, then you really shouldn't shift the limits. So that's why I didn't shift them before; I chose not to shift them until we actually tried something.

0:30:33.2 AS: Which is basically saying that you're trying to get a clear voice of the students. And it may be that over the time of the initial data gathering, the repetition of being asked caused students to feel a bit more joy in the classroom, and maybe that started to adjust the baseline up a little. Yep. Okay.

0:31:01.6 JD: Yeah. And this is where the project ended for the fellowship that Jessica was doing. But if we could see what happened further out in the school year, either Jessica and the class could be satisfied with where the joy in learning is at this point where the improvement occurred, or they could run another cycle, testing a tweaked version of that noise deduction PDSA, or adding something to it.

0:31:43.0 AS: Or they could have gone back to the fishbone diagram. The students thought noise would be the number one contributor, but maybe by looking at the next cause they could see, oh, hey, wait a minute, this may be a higher contributor, or not.

0:32:01.2 JD: Yeah. And when you dig into the specifics of the plan, how that noise deduction was going to work, there may be something that didn't go as planned, and that's where you'd have to lean on the improvement team. We've talked about the three parts of the improvement team that you need: the frontline people, that's the students; the person with the authority to change the system, that's Jessica; and someone with knowledge of the system, profound knowledge, that's me. Well, Jessica and her students are the ones in there every day, so they're gonna have learning about how that intervention went that would then inform the second cycle of the PDSA, whatever they're gonna work on next. The learning from the first cycle informs the next cycle.

0:32:51.4 JD: So the idea is that you don't just run a PDSA once but you repeatedly test interventions or change ideas until you get that system where you want it to be.

0:33:01.1 AS: So for the listeners and viewers out there, I bet you're thinking, gosh, Jessica's pretty lucky to have John help her go through this. And I think about lots of things that I want to talk to you about [laughter] regarding testing in my own teaching, but also in my business. I think that's one of the exciting things about this: we do a lot of these things in our head. We think, this will make a difference, but we're usually not going to this level of detail in the way we actually perform the tests and try to see what the outcomes are.

0:33:43.9 JD: Yeah, I think for school people too, when we've attempted to improve schools, to reform schools, what happens is we go really fast, the learning actually happens very slowly, and we don't really appreciate what it takes to change something in practice. The reformers have good intentions, but the frontline people like teachers just get worn out, and a lot of times nothing actually improves. You just wear people out. You make these big changes go fast and wide in the system, and you don't really know exactly what to do on the ground. The opposite is happening in Jessica's classroom. They're learning fast by trying very small changes and getting feedback right in the place where that feedback needs to be given, right in the classroom, and then they can learn from that and make changes.

0:34:49.8 JD: And again, it may seem small. Maybe it doesn't seem that revolutionary to people, but to me it's a completely revolutionary, completely different way to do school improvement. It honors the expertise of the teacher in the classroom, it takes into account how students are experiencing a change, and I'm providing a method they can use to make that classroom better for everybody. In doing so, students are more likely to find joy in their learning, and teachers are more likely to find joy in their work as well. So to me it's a win-win for all involved.

0:35:34.9 AS: Fantastic. Well, should we wrap up there?

0:35:40.6 JD: Yeah, I think that's a good place to wrap up this particular series.

0:35:45.1 AS: And maybe you could just review the whole series of what we've done, to make sure everybody's clear, so that if somebody just came in on this episode, they know a little bit of the flow of what they'll get in the prior ones.

0:36:00.4 JD: Yeah. So we did six episodes. We started off talking about what you need to have in place for healthy goal setting at an organizational level, and we laid out four conditions that should be in place before you ever set a goal: you have to understand the capability of your system, you have to understand the variation within your system, you have to know whether the system you're studying is stable, and you have to have a logical answer to the question, by what method? By what method are you gonna bring about improvement, by what method are you gonna get to this goal you wanna set? So we talked about having those four conditions in place, and we said that without them, goal setting is often an act of desperation.

0:36:49.7 JD: And from there, we started talking about these 10 key lessons for data analysis, so that as you get the data about the goal and start to understand the conditions of that system or process, you can use those 10 data lessons to interpret the data you're looking at or studying. We basically did that over the first four episodes. In the last few episodes, we've looked at those lessons applied to Jessica's improvement project, and that's what we just wrapped up.

0:37:23.7 AS: I don't know about the listeners and viewers but for me this type of stuff just gets me excited about how we can improve the way we improve.

0:37:33.4 JD: Yeah. For sure.

0:37:34.9 AS: And that's exciting. So John, on behalf of everyone at the Deming Institute, I want to thank you again for this discussion. For listeners, remember to go to deming.org to continue your journey. You can find John's book, Win-Win: W. Edwards Deming, the System of Profound Knowledge, and the Science of Improving Schools, on Amazon.com. This is your host, Andrew Stotz, and I'll leave you with one of my favorite quotes from Dr. Deming: "People are entitled to joy in work."
