
CHTU Update - 10.5.15 SLOs - setting growth targets and all that foolishness

Dear Colleagues,

For those of you who have to set your own growth targets for your SLO, I have posted a spreadsheet that is a slight modification of last year's on the CHTU website (go to chtu.org under the evaluation tab, or to chtu.oh.aft.org/evaluation).


Last year the spreadsheet took your pre-test scores and calculated targets using two methods.  Here is how they work and what is new:


1.  The Austin Formula – which takes the difference between the pre-test score and 100, divides it by 2, and adds that “growth” to the original score.

For example:  a pre-test score of 40 is 60 points away from 100.  Since half of 60 is 30, the target score is 40 + 30 = 70.

What I added this year is that you can change the divisor to 2.5 or 3 or whatever number you want.  You might look at the resulting target scores to see whether a different divisor makes the targets more reasonable.
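
If you want to see what the formula is doing outside the spreadsheet, here is a rough sketch of the calculation in Python.  This is my own illustration, not the spreadsheet's exact formula, and the function name and 100-point scale are just assumptions for the example:

    # Rough sketch of the Austin Formula as described above; not the actual
    # CHTU spreadsheet, and the names here are just for illustration.
    def austin_target(pretest, divisor=2.0, max_score=100):
        # Growth is the distance from the pre-test score to the maximum,
        # divided by the chosen divisor.
        growth = (max_score - pretest) / divisor
        return pretest + growth

    # The example from above: a pre-test of 40 gives a target of 70.
    print(austin_target(40))             # 70.0
    # A larger divisor (say 3) gives a gentler target of 60.
    print(austin_target(40, divisor=3))  # 60.0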


2.  The second method from last year was a statistical model.  It takes into account how the whole group of students scores and computes a standard deviation – I know you all probably took stats in college, but this is mathspeak for how spread out the scores are around the average.  In a normal distribution, about 68% of scores fall within 1 standard deviation of the average, and about 95% fall within 2 standard deviations.

Last year we used 80% as our reliability factor, meaning that each target score was the pre-test score plus 20% of the standard deviation.


This year you can change the reliability factor.  If you set it to 100%, you are saying that no growth is expected – the target is the same as the pre-test score.  If you set it to 0%, you are expecting a full standard deviation of growth – essentially putting little limit on what students can score.  Last year it was suggested we use 80% as a reliability factor based on work that now-retired math teacher Al Degennaro did in the pilot study of SLOs.
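
For those who like to see it spelled out, here is a rough sketch of that calculation as well.  Again, this is my own illustration rather than the spreadsheet itself; the 100-point cap and the sample scores are just assumptions:

    # Rough sketch of the statistical method described above: growth is
    # (1 - reliability factor) times the standard deviation of the group's
    # pre-test scores.  Not the actual spreadsheet; names are illustrative.
    from statistics import pstdev

    def statistical_targets(pretest_scores, reliability=0.80, max_score=100):
        sd = pstdev(pretest_scores)        # spread of the whole group's scores
        growth = (1 - reliability) * sd    # 80% reliability -> 20% of the SD
        return [min(score + growth, max_score) for score in pretest_scores]

    # Made-up example scores: each target is the pre-test plus 20% of the SD.
    scores = [35, 42, 50, 58, 65]
    print(statistical_targets(scores))
    print(statistical_targets(scores, reliability=1.0))  # no growth expected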


The fact is that this whole SLO business is nonsense.  Being able to predict what students will know right before spring break is impossible and silly.  BUT it is the law that we follow ridiculous rules that have little meaning and do little to help students learn.  So we may as well make it look like we have a special calculation that can make these predictions.  The more mathy it is, the better it must be, right?  It seems to me that making up growth targets would be just as viable.  In fact, if you use one of these methods and believe that a target is too high or too low, you can change it manually, as long as you can explain why you did so based on your knowledge of the student.  You can also exclude students at the end of the process if they were not in class for much of the time.  The state automatically excludes them at some unbelievably high number of absences, but you can write into your SLO that you are excluding any student who misses X number of days of class, excused or unexcused.

On that positive note, go to it.  Pre-test, calculate, and have fun.


In Union,
Ari Klein
CHTU President


slight addendum
For those of you using MAP or another norm-referenced test, the targets are set for you based on the period of instruction.  These targets can be changed manually, but because tens of thousands of data points go into each one, they should be more reliable overall than something you would make up.  You might still change a target based on your knowledge of your students, but you should be able to explain why the norm-referenced target does not apply in your situation.
Ari

