Teacher-led questioning, in particular oral questioning, is also identified as highly effective at promoting learning. Research on this stretches back to Kenneth Tobin’s (1987) work on wait time, in which he found that a teacher waiting at least three seconds after posing a question before expecting an answer ‘boosted cognitive level achievement’, because it gave pupils additional time to think of the answer. He noted that a longer wait time would be most effective when the teacher posed analytical questions rather than knowledge recall questions, a point also made by Fautley and Savage (2008) and Harrison (2011). Harrison (2011) emphasised the need for teachers to ‘plan questions effectively’, both to enhance pupils’ learning in their own classes and so that successful approaches could be shared with colleagues for other classes (p.227). Christine Chin (2006) identified a range of questioning techniques used by the Science teachers in her study, such as “comment-question” and “responsive questioning”, as well as feedback techniques, such as “explicit correction-direct instruction” (p.1326). She also highlighted the need for teachers to be affirming in their responses, for example by responding to an incorrect answer in a neutral manner, both to avoid knocking a pupil’s confidence and to examine whether the pupil could reach the answer with further prompting. While both Tobin and Chin argue for the benefits of teacher-led questioning, they also warn that it must be used with care, especially given the range of academic abilities within a class. Tobin noted that, following the implementation of longer wait time in teacher-led questioning, ‘a small number of target students monopolised classroom interactions’ (p.88). Similarly, Chin (2006) found that questioning techniques which prompted pupils towards answers were only beneficial with higher-ability pupils, and that lower-ability pupils fared better with direct correction. Myhill and Warren (2005) cautioned teachers not to ‘give clues to the ‘right’ answer’, since otherwise pupils would not ‘necessarily grasp the learning at the heart of the task’ (p.58). Fearn (2018) also emphasised that pupils need the requisite prior knowledge, indeed an ‘intimacy’ with that knowledge, to benefit from advanced teacher-led questioning; in her study of A Level students’ responses to Oxford admissions tests for History, those who performed better were those who had studied History most recently and most intensively (p.52). It is clear that questions should be sequenced in advance, starting with knowledge recall and progressing to analysis, so that pupils across the ability range can benefit. The evidence here further indicates that a teacher should only move on to analytical questioning once enough members of the class have demonstrated the requisite knowledge of the content in question.
Peer Assessment
Peer assessment is another effective Assessment for Learning strategy that can be employed by teachers. Marty-Snyder and Patton (2014) assert that peer assessment can help teachers not only to assess learning but also to evaluate their own teaching practice, which would help them to design more effective ways of enhancing pupil learning. In their study, they highlighted how students ‘appreciated receiving feedback more often from their peer’, which was aided further by discussion (p.30). Bryant (2015) and Bonner and Chen (2020) echo this idea that pupils enjoy peer assessment, noting that it aids motivation for both teachers and pupils, since both groups, according to Bonner and Chen, ‘learn each other’s expectations and direct their resources accordingly’. Endy, Pfleger and Srole (2018) found similarly in their study of university History students that peer assessment and support helped arrest a ‘decline in repeatable grades’ (p.95). They note that in all three classes they studied, ‘students singled out their appreciation of the groups and facilitators’ and became more confident (p.96). It seems that peer assessment can work very well alongside rubrics, since pupils feel invested in the process of learning when they can see, in the moment, the progress they and their peers are making. However, the authors also note some limitations to this strategy. Marty-Snyder and Patton (2014) found that while students preferred receiving feedback from their peers, the feedback they received from their university supervisor ‘was more beneficial to the learning process because it was more specific’ (p.30). While praising peer assessment as a useful tool, Bryant (2015) concurs that there is a risk that pupils might not be skilled enough to provide accurate feedback to their peers: he noted in his study that only some pupils marked their peers’ work accurately and effectively, and that he had to step in to ‘provide more support and guidance’ to the others (p.57). Costa and Kallick (2004) found that while peer assessment was generally beneficial, feedback should be ‘neutral and without value judgements’, a standard that would be hard to maintain in a large class of secondary school children (p.3). Endy, Pfleger and Srole’s study (2018) was conducted among a large group of university undergraduates whose mentors were often graduate students, a situation that could not be replicated in a secondary school classroom, and the authors note that their approach was best suited to ‘otherwise unwieldly classes of fifty or more students’ (p.99). Peer assessment should therefore be guided by strict rubrics, laid out first of all by the teacher, if it is to be most effective at enhancing learning.
Linking paragraph
From the literature that I read, I decided to investigate how effective the use of rubrics, oral teacher-led questioning and peer assessment would be in promoting learning. I chose to assess only these three strategies, and not a fourth, self-assessment, because from my reading I concluded that self-assessment is a strategy best examined over a longer period of time, a few weeks or months, and thus would not be an appropriate technique to assess in a four-lesson sequence (Thorne, 2015), especially with a class, like this one, that is lower-ability and can be disruptive.
I decided to use a low-stakes knowledge recall test at the start of lesson 1, in order to assess pupils’ knowledge of content learnt the previous week, which they would need in order to access the content of lessons 1 and 2. In designing this, I was influenced both by Hawkey (2015), who highlighted how regular low-stakes testing was essential to embed substantive knowledge, and by Wiliam and Black (1998), who emphasised that such testing would be of especial help to lower-ability pupils. In lessons 1 and 2, a double lesson, I then introduced scripted, oral teacher-led questioning. I noted Harrison’s (2011) stipulation that questions should be scripted in advance, and wanted to see whether increasing wait time, as advocated by Tobin (1987), would boost learning.
For lessons 3 and 4, a double lesson, I made a worksheet with nine questions (see Appendix 2) in order to assess how well displaying rubrics for the successful answering of questions would work, and how well peer assessment could promote learning by raising enjoyment and confidence. In particular, I wanted to assess whether displaying rubrics would help pupils to answer analysis questions better, as these require higher cognitive engagement and a deeper understanding of a topic. In this I was influenced by Harrison (2011), who noted that questions such as ‘Is it always true that green organisms photosynthesize?’ are more challenging and require greater understanding to answer than simple factual recall questions, such as ‘Which types of organisms photosynthesize?’ (p.226). I was also influenced by Andrade and Valtcheva (2009), who noted that pupils must be fully aware of success criteria in order to answer questions well and to improve. In asking pupils to mark each other’s work, I was influenced by Bonner and Chen (2020), who noted that pupils enjoy peer assessment and that it can boost their confidence. I thought this was particularly important given that the class is a lower-ability set and often responds well, in terms of both academic engagement and behaviour, when pupils have achieved tangible success with tasks.