**Mood:** a-ok

**Now Playing:** Porpoise Song (The Monkees)

Well, I’ve been back from holiday for about 2 weeks now, and decided I should really post something in this blog.

**Problem: Between Subject * Within Subject Interaction After Adding Covariate**

I’ll just give an update on where I am at this point in time. Previously I was using maths confidence and order as covariates in the repeated measures on the box*problem*task total scores (for which I was getting a significant box*task interaction). John indicated that I really shouldn’t be using order as a covariate, since it is a nominal variable. I had only decided to use maths confidence and order as covariates because there appeared to be a significant difference in maths confidence depending on the order in which students answered the questions. Further, he indicated that there should not be a difference in the box*task interaction, as the covariate should only affect the between-subject effects and not the within-subject ones, and hence should not affect the box*task interaction.

Well, first of all, I wanted to repeat the repeated measures with only maths confidence as the covariate, to see what I would get. I tested it out and I still got a box*task interaction. But I was worried, because according to John this shouldn’t be happening. So, I looked up what this meant.

**Possible Reason: Assumption of Homogeneity of Slopes Violated**

I came across an editorial letter by Gilmore (2007) arguing that Anstey et al (2006), in their work on cataract removal, had used ANCOVAs inappropriately: their ANCOVA did not follow that of Winer (1971), they included the covariate in the interaction terms, and Gilmore indicated that this is a problem with SPSS (Resolution no. 22133). So, I looked up this resolution in the SPSS knowledgebase, and to be fair to SPSS and to Anstey et al (2006), the way it is written does not show in any way that the ANCOVA or MANCOVA done is wrong. But if the Winer (1971) ANCOVA has to be done, one should do it in two parts: first run the ANCOVA with the covariate to get the F-ratio for the between-subject effects, then remove the covariate and run the ANOVA to get the F-ratios for the within-subject effects and the interactions.

That got me worried, because done that way the covariate could not possibly affect the box*task interaction (task being a within-subject variable), and I had already run the analysis without maths confidence as a covariate (and I was fairly convinced that maths confidence was influencing this behaviour). Well, I looked back at the reply by Anstey et al (2007) to Gilmore (2007), and they said that using the SPSS calculations was perfectly fine; it is actually the more cautious approach, as it allows one to see whether there is any violation of the homogeneity-of-slopes assumption (an assumption that is necessary in ANCOVAs). A violation of this assumption occurs when there is a significant interaction between the within-subject variables and the covariate. So, I checked back over my repeated measures MANCOVA and sure enough there was an interaction between the within-subject variables (problem and task) and the covariate. That got me scared, since I had no idea how to deal with it. Looking through the internet, I found a paper by Delaney and Maxwell (1983) which suggested a way of dealing with heterogeneity of regression based on a method by Rogosa (1980). It looked a bit complicated (it involved picking points and doing some long calculations with them), so I decided to ignore that and try something else. (I had to go search the internet for this paper again, because I forgot to save it, but after a long search I found it eventually!)
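Out of curiosity, this kind of slope-homogeneity check can also be done outside SPSS. Here is a minimal sketch in Python with statsmodels, on made-up data (the column names, the simulated effect, and the sample size are all invented, not my actual dataset): if the covariate*factor interaction term is significant, the assumption is violated.

```python
# Sketch: checking the homogeneity-of-slopes assumption.
# All data here are simulated purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 40
df = pd.DataFrame({
    "maths_conf": rng.integers(1, 11, size=n),          # 1-10 rating
    "task": rng.choice(["debug", "predict"], size=n),   # hypothetical factor
})

# Build scores where the covariate's slope deliberately differs by task,
# i.e. a built-in violation of homogeneity of slopes.
slope = np.where(df["task"] == "debug", 2.0, 0.5)
df["score"] = 10 + slope * df["maths_conf"] + rng.normal(0, 2, size=n)

# Fit score ~ covariate * factor; a significant covariate:factor term
# means the slopes differ between factor levels (assumption violated).
model = smf.ols("score ~ maths_conf * task", data=df).fit()
p_interaction = model.pvalues["maths_conf:task[T.predict]"]
print(f"interaction p-value: {p_interaction:.4f}")
```

Because the slope difference was built into the simulated data, the interaction term comes out significant, which is exactly the red flag SPSS was showing me.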

**Solution: Recode Covariate**

Anyway, so I looked at alternative ways to deal with covariate variables, and one website suggested that you can split the covariate into groups. So, I decided to recode the maths confidence variable into two groups. Now, I had done this previously, arbitrarily (coding 1-5 as low and 6-10 as high), but I wasn’t certain that was the correct approach. So, I did a frequency distribution of maths confidence and noted there were two high points, one at 5 and one at 7, so it was a sort of bimodal distribution. Further, when I looked at the mean and the median, I noted the values were just over 6. So, I decided to recode maths confidence with values 1-6 as low and 7-10 as high. Whilst searching for the Delaney and Maxwell paper, I found a paper by Owen and Froman (2005) who were not keen on people ‘carving up’ their continuous variables. They did indicate that one of the most legitimate reasons for doing so is an ANCOVA where the assumption of homogeneity of slopes has been violated, but they still recommended trying to keep the continuous variable, perhaps using multiple regression instead, although they did recognise that this is a whole lot more complicated when dealing with repeated measures. They were also against using a single item to measure something (as I did for maths confidence); instead they think a series of questions with a mean score would be better. Whilst I agree with this, given the time constraint on the study (i.e. 2 hrs), I don’t think imposing a 10-item measure of maths confidence would have been that much more useful.
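For what it’s worth, the recode itself is a one-liner. A sketch in Python/pandas with made-up ratings (my actual data live in SPSS, and these numbers are invented):

```python
import pandas as pd

# Hypothetical maths-confidence ratings on a 1-10 scale.
conf = pd.Series([3, 5, 5, 6, 7, 7, 8, 4, 6, 9])

# Look at the distribution first to pick a cut point
# (I used the two modes at 5 and 7, plus a mean/median just over 6).
print(conf.value_counts().sort_index())
print("mean:", conf.mean(), "median:", conf.median())

# Recode 1-6 as 'low' and 7-10 as 'high'.
# pd.cut uses right-closed bins, so (0, 6] -> low and (6, 10] -> high.
conf_group = pd.cut(conf, bins=[0, 6, 10], labels=["low", "high"])
print(conf_group.value_counts())
```

The bin edges are the whole decision here; shifting the cut from 5/6 to 6/7 is exactly the change I made after looking at the frequency distribution.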

**Complication: Is the power enough?**

Anyway, after recoding maths confidence into these two groups I got 19 students each in the low and high confidence groups. Whilst that sounded good, one of my worries was how the confidence groups were distributed across the boxes, and unfortunately the black-box low confidence cell had only 4 students, and the glass-box high confidence cell also had 4 students; the remaining sub-groups had a distribution of 9, 9, 6, 6. So those seemed safe-ish enough, but I wasn’t certain whether this sample size would have sufficient power to detect a difference if there was one. This looked completely complicated now, because not only did I have to calculate power for a repeated measures design, but one in which there were unequal sample sizes!
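Checking how the recoded groups fall across the cells is just a cross-tabulation. A sketch with invented data (only two boxes here, and made-up counts, not my actual six-cell distribution):

```python
import pandas as pd

# Invented example: each row is a student with a box condition
# and a recoded confidence group.
df = pd.DataFrame({
    "box":        ["black"] * 13 + ["glass"] * 13,
    "confidence": ["low"] * 4 + ["high"] * 9 + ["low"] * 9 + ["high"] * 4,
})

# Cross-tabulate to see the cell size of every sub-group at a glance.
counts = pd.crosstab(df["box"], df["confidence"])
print(counts)
```

This is the table that made me worry: balanced marginals (13 per box, 13 per confidence level) can still hide very small individual cells.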

Well, I went back to Lenth’s power calculation Java applet and tried to see if I could do it there. If it was possible, I surely couldn’t figure out what to do! So, I gave up on trying that. I did a search on the internet for programmes that calculate power for repeated measures designs, and I came up with a programme called PASS, which I downloaded for a 7-day free trial period and ran all of my variations of sample size through. Thankfully, according to that programme, there was sufficient power for all my interactions (>0.9), and I also tested what my minimum sample size for each sub-group should have been; according to PASS, I could have done alright for power with 2 persons in each sub-group (>0.8). I wasn’t able to figure out whether PASS could tell me if there would be sufficient power to determine which group a difference was coming from in the interactions, but that doesn’t make a difference to me, as SPSS doesn’t do interaction differences when doing repeated measures.
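Since I couldn’t coax Lenth’s applet into this, another route I could have tried is brute-force simulation. Below is a rough sketch (not what PASS computes; the effect size and sd are invented) for the simplest case, a 2 (between) × 2 (within) mixed design with unequal group sizes. It uses the fact that in a 2×2 mixed design, the group*task interaction is equivalent to an independent-samples t-test on each subject’s within-subject difference score.

```python
# Sketch: simulation-based power for a 2x2 mixed-design interaction
# with unequal group sizes. Effect size and sd are invented numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def interaction_power(n1, n2, effect, sd, n_sims=2000, alpha=0.05):
    """Estimate power of the group*task interaction, tested as an
    independent t-test on each subject's task1 - task2 difference."""
    hits = 0
    for _ in range(n_sims):
        # Group 1 has a within-subject task effect of `effect`;
        # group 2 has none, so any interaction is the group difference
        # in these difference scores.
        d1 = rng.normal(effect, sd, size=n1)
        d2 = rng.normal(0.0, sd, size=n2)
        _, p = stats.ttest_ind(d1, d2)
        hits += p < alpha
    return hits / n_sims

# Hypothetical cell sizes matching my smallest cells (4 vs 9).
power = interaction_power(n1=4, n2=9, effect=3.0, sd=1.5)
print(f"estimated power: {power:.2f}")
```

The nice thing about simulating is that unequal cell sizes cost nothing extra; the awkward thing is that you have to guess plausible effect sizes, which PASS at least prompts you for.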

**Outcome**

Anyway, now that I knew there was sufficient power, I proceeded to use my recoded maths confidence as a between-subject variable, and the box*task interaction was found as previously! All good! Felt much better once all of this was sorted. I told John about my recoding, and whilst he doesn’t quite like it, he thinks that in this case it is alright, just to get around the violation of the assumption. I then did all of my other repeated measures MANOVAs using maths confidence and box as my between-subject variables, and did these for types of explanations as well as explorations, and got some interesting results, which I feel much happier about writing up … well, once I get motivated about writing it 😀 .