Sunday, 8 October 2017

Research-based instructional principles and mastery learning

In a review spanning four decades of research in cognitive science, studies of master teachers and research on cognitive supports, Barak Rosenshine presents ten research-based instructional principles that all teachers should use (Rosenshine, 2012). These are as follows:

  1. Lesson starts: Begin a lesson with a short review of previous learning.
  2. Present new material in small steps with student practice after each step.
  3. Ask a large number of questions and check the responses of all students.
  4. Provide models.
  5. Guide student practice.
  6. Check for student understanding.
  7. Obtain a high success rate.
  8. Provide scaffolds for difficult tasks.
  9. Require and monitor independent practice.
  10. Engage students in weekly and monthly review.


These steps are summarised in the figure below. The rest of this blog uses Barak Rosenshine’s Principles of Instruction to outline how a typical lesson would be delivered according to the students’ current stage of mastery.

Lesson flow based on Barak Rosenshine's Principles of Instruction (Rosenshine, 2012)

Emerging stage of mastery

In the first lesson of (say) a six-lesson unit of work, the majority of students will be in the “Emerging” stage of mastery, understanding between 0 and 25% of the concepts required to reach mastery. The lesson should start with a Do Now, a review of prior knowledge which allows students to connect to this lesson’s content. Following the Do Now, the teacher should model the thought processes for the new concept or skill. Collins et al. stress the importance of making the teacher’s thinking visible by thinking out loud, in the process known as Cognitive Apprenticeship (Collins, et al., 1991).

When applying Cognitive Apprenticeship to Computing, we find that many of the tasks we perform involve implicit steps or thinking. An example of this is closing a tag in HTML as soon as we open it. We would model this by stating that “HTML tags generally occur in pairs and therefore it is good practice to create a closing tag immediately after creating an opening tag.” Later in the lesson, the teacher would check that students have internalised this habit by modelling the creation of (say) a paragraph, opening a <p> tag and then asking students, “As I’ve opened the <p> tag here, what should I do immediately, before I even write my paragraph?” The teacher should expect any student to be able to tell them that a closing </p> tag should be created. Unless we make these implicit habits explicit, our students will be lost, as they will not be able to make the invisible conceptual leap that exists in the mind of their expert teacher.
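
To make the same point in code, the minimal Python sketch below (the helper name and example content are invented for illustration) encodes the habit directly: the closing tag is written in the same step as the opening tag, so it can never be forgotten.

    # Hypothetical helper: the closing tag is produced together with the opening tag,
    # mirroring the habit of closing an HTML tag as soon as it is opened.
    def element(tag, content):
        return f"<{tag}>{content}</{tag}>"

    # Prints: <p>HTML tags generally occur in pairs.</p>
    print(element("p", "HTML tags generally occur in pairs."))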

The teacher will follow up teacher modelling with a worked example, presenting a finished product so that students know “what a good one looks like”. This is the model against which they will judge their work. They will know if they are on the right track by referring, mentally or literally, to the teacher’s model. Based on Rosenshine’s research, for new material the construction process should be broken down into small steps with student practice after each step. If, for example, students are studying spreadsheets, they should not be presented with four different functions to use in formulae, along with formatting and the creation of graphs, in one lesson. Rather, the first lesson might focus on arithmetic operators, and a spreadsheet might be designed which allows for practice of individual arithmetic operators before moving on to formulae which combine several operators and brackets.
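
A hedged sketch of this small-steps sequencing, written here in Python rather than in a spreadsheet (the school-fair scenario and figures are invented for illustration): each early task isolates a single arithmetic operator, and only a later task combines several operators and brackets.

    # Step 1: practise one operator at a time (invented figures).
    tickets_sold = 120
    ticket_price = 3.50
    income = tickets_sold * ticket_price        # multiplication only

    hall_hire = 80
    refreshments = 45
    costs = hall_hire + refreshments            # addition only

    # Later step: combine several operators and brackets in one formula,
    # once each operator has been practised on its own.
    profit_per_ticket = (income - costs) / tickets_sold

    print(income, costs, round(profit_per_ticket, 2))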


For each of the lessons, Rosenshine advises that students should be attaining a high success rate in their guided and independent practice, with success rates of around 80% on their practice tasks. One way to gauge the class’s current success rate and level of understanding is to stop the students after a set amount of time, model the correct processes and solutions on the board and ask students to mark their own work. At the end of this modelling and self-assessment, pupils may be asked to raise their hand if they achieved 50%, 60%, 70%, 80% or more. If it is clear that students have achieved a high success rate, the teacher can launch the next task or increase the complexity of the current task. Likewise, if it is clear that very few have achieved a high success rate, then the teacher may want to clarify some misconceptions.
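
A minimal sketch, assuming self-assessed percentage scores have been gathered from the show of hands (the scores, the 80% target and the whole-class threshold are invented for illustration), of how that decision point might be expressed:

    # Hypothetical self-assessed scores (percentages) collected from the class.
    scores = [80, 60, 90, 70, 85, 95, 50, 80, 75, 88]

    # Proportion of the class reaching the target success rate of roughly 80%.
    target = 80
    proportion_at_target = sum(score >= target for score in scores) / len(scores)

    if proportion_at_target >= 0.8:
        print("High success rate: launch the next task or increase its complexity.")
    else:
        print("Low success rate: pause and address misconceptions before moving on.")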


Developing stage of mastery

By the end of the first lesson, students should be developing a sense of mastery, and at this stage teacher modelling should be punctuated with increased questioning. The process of modelling at this stage will involve deconstruction of the task, activity or process. Royce Sadler, who has researched this area extensively, advises that exemplars should be used during the modelling process and that these should be authentic pieces of student work of varying quality (Sadler, 2002). John Sweller is another highly respected researcher, who has spent over thirty years researching Cognitive Load Theory. During this time he has also written extensively about worked examples versus problem solving. In a recent essay, Professor Sweller discusses the benefits of showing highly variable worked examples (Sweller, 2016); referencing the work of Paas and Van Merriënboer (Paas & Van Merriënboer, 1994), he states that learners who encounter highly variable worked examples learn more than those shown more similar worked examples. The differences in quality allow students to truly understand what is meant by quality; they make abstract specifications and criteria more concrete. There is no substitute for exemplars; Sadler emphatically states that exemplars convey messages that nothing else can.


In the second or third lesson of the unit, the teacher should still be providing scaffolding for difficult tasks. However, the intention should be that the scaffolding will be removed once the students achieve a high success rate. At this point, students should have the opportunity to complete joint construction through supervised or guided practice. This guided instruction and collaboration is supported by Pearson and Gallagher’s Gradual Release of Responsibility model (Pearson & Gallagher, 1983), along with Lewis and Wray’s research into literacy (Lewis & Wray, 2000) and Gibbons’ work on reading, writing and language acquisition (Gibbons, 2002). As students develop their level of understanding, it is worth closing the lesson with a formative assessment in the form of an exit ticket or low-stakes quiz.


Secure stage of mastery

By the third or fourth lesson in a six-lesson unit, in order for modelling to be truly effective, we need to encourage students to analyse the exemplars and form their own opinions of quality; by being able to judge quality accurately, students will be able to judge and improve the quality of their own work during independent practice. After initial teacher modelling, To and Carless recommend critical peer dialogue as an effective way for students to participate in this deconstruction and reconstruction process (To & Carless, 2016). To and Carless found in their research that peer dialogue and critique can provide a more supportive environment for peers to ask questions about an exemplar, thus greatly increasing participation. For reserved students, and students who may still fear failure in front of a whole class, pair or small-group discussion allows them to voice their opinions without fear of judgement from their peers. One area worth guiding students in is identifying weaknesses in exemplars, as students generally gravitate towards identifying strengths and rarely identify weaknesses.

Teacher guidance is certainly required during the modelling process, particularly during the early stages of a unit of work when it is highly likely that the teacher is the only expert in the room. When a teacher is leading a discussion about an exemplar, possible questions might include:


  • Who is the intended audience for this piece of work? How has the student ensured their work is user friendly and suitable for the audience and purpose?
  • How many marks would you give this student and why?
  • What keywords and technical vocabulary has this student used? What technical vocabulary should a student be using in this answer?
  • What might be a more efficient way of doing this?
  • What data structure could they use for this data set?
  • What three things would you do to improve this program/poster/report/essay/answer?
  • What are three strengths of this program/poster/report/essay/answer?
  • What feedback would you give this student to improve their work?
  • This piece of work scored five out of seven; what is missing to ensure the student gets full marks?
  • Are there any questions you wish to ask about the exemplars?
  • Why do you think the student has used this formula here? Can you explain their thinking?
  • What is the graph trying to show? How successfully has the student done this? How could it be improved?
  • If you were to pick out the strongest sentence or argument from this paragraph, what would it be?
  • What did you like about this film trailer?
  • What three techniques has the student used effectively to communicate the genre of their film to the audience?
  • List the graphic design rules and principles each of these exemplars has used.
  • What are the similarities between the different exemplars?
  • How might this exemplar be better or weaker than your own?
  • Can you summarise the key differences between the three exemplars we have looked at this morning?


The use of teacher-led modelling, peer discussion and individual deconstruction can take place during any unit of work. These varying methodologies do not necessarily need to take place in a linear sequence as the benefits of each methodology can be reaped regardless of the stage of mastery. However, one key finding based on experience and the literature review is that the choice and range of exemplars do need to be planned in advance.

Modelling methodologies, based on Carless & Chan (2016) and Sadler (2002)


As students move from novice to expert, the guidance provided to them should be gradually reduced (Renkl & Atkinson, 2003). Whilst problem solving is one of the key elements of Computational Thinking and is one of the skills encouraged by Knight and Benson (Knight & Benson, 2014) and Nuthall (Nuthall, 2007), Sweller and the researchers who have built on his Cognitive Load Theory have found that problem-solving tasks are not suitable for novices during the initial stages of cognitive skill acquisition, as they will experience cognitive overload (Sweller, 2016). Once novices have developed their understanding and skills through varied worked examples, their intrinsic cognitive load will decrease, allowing them to be exposed to problems which they are required to solve. Renkl and Atkinson go on to state that these initial problem-solving tasks will require scaffolding. Through gradual fading of both worked examples and scaffolding, students should come to complete problem-solving tasks independently.
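
A hedged sketch of what this fading might look like in a Computing context (the task, function names and missing step are invented for illustration): a full worked example, then a completion problem with one step removed, then the bare problem for independent practice.

    # Step 1: a full worked example shown and narrated by the teacher.
    def mean_full(numbers):
        total = sum(numbers)      # add up all the values
        count = len(numbers)      # count how many values there are
        return total / count      # divide the total by the count

    # Step 2: a completion problem - the scaffold keeps the structure
    # but removes one step for the student to supply.
    def mean_completion(numbers):
        total = sum(numbers)
        count = len(numbers)
        return ...                # student: return the mean using total and count

    # Step 3: the bare problem, attempted independently with no scaffold:
    # "Write a function that returns the mean of a list of numbers."
    print(mean_full([4, 8, 15, 16]))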

Rosenshine notes that whilst independent practice should be extensive and successful in order for skills and knowledge to become automatic, this independent practice should still involve monitoring; brief monitoring of no more than 30 seconds per student is appropriate. However, formative assessment can also be used to check the understanding of the whole class.

The expert teacher does not simply follow a rigid lesson structure and plan; the expert teacher will elicit feedback through frequent questioning and formative assessment techniques throughout the lesson and adapt their instruction accordingly.

“An assessment functions formatively to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited.” (Kingsbury, et al., 2011)

The use of formative assessment strategies by teachers is similar to the strategies employed by football managers, pilots and heart surgeons. Whilst all of these professionals approach their work with a plan, the plan is adapted in real time. In aviation, when a pilot checks their instruments and discovers that they have deviated from the flight plan, the pilot makes a course correction. The pilot checks these instruments regularly, not just towards the end of the flight. Similarly, in teaching, expert teachers should be regularly checking that their students are on track to reach their destination and, if not, the teacher should introduce a form of intervention to change their students’ trajectory and ensure a high success rate. Formative assessment should not take place only at the end of the lesson, as this does not leave sufficient time to offer immediate corrective feedback for students to act on. An expert teacher’s classroom is a responsive classroom, and corrections to the lesson delivery are made throughout the lesson.

At this secure level of mastery, teachers can also rely on peers to provide feedback, correction and instruction. Pair programming is an example of cooperative learning and peer instruction which is supported by research and industry (Williams & Kessler, 2002) (Hannay, et al., 2009) (Denner, et al., 2014) (Franklin, 2015). The theory behind pair programming is that the programmer (driver) is required to think aloud and the observing peer (navigator) reviews the program, offering advice and feedback based on the driver’s programming and thought processes. As both students regularly switch roles after timed intervals, their shared knowledge and understanding allows them to achieve more than if they were to program separately.
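
As a rough illustration of the rotation described above (the pair, the five-minute interval and the twenty-minute task are all invented for the example), the timing of the driver and navigator roles might be planned like this:

    # Hypothetical pair, rotation interval and task length.
    pair = ("Student A", "Student B")
    interval_minutes = 5
    task_minutes = 20

    for slot in range(task_minutes // interval_minutes):
        driver = pair[slot % 2]           # writes the code and thinks aloud
        navigator = pair[(slot + 1) % 2]  # reviews the code and offers feedback
        print(f"Minutes {slot * interval_minutes}-{(slot + 1) * interval_minutes}: "
              f"driver = {driver}, navigator = {navigator}")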

Mastery

The Gradual Release of Responsibility model (Pearson & Gallagher, 1983) states that students should move from teacher-led learning to student-led learning. At this final stage of mastery, Rosenshine states that independent practice is necessary in order to build fluency. The teacher plays an increasingly passive role, with much less intervention and much more observation and monitoring.
Learners at this stage should be independent in their thinking and application of knowledge and skills. Independent practice in the form of overlearning is what leads to fluency and automaticity. Rosenshine’s research suggests that the material used in independent practice may involve slight variations on the material covered during guided practice. In a Computing classroom, these variations might involve subtle changes in context or content, for example re-designing a poster for a different audience or using a similar selection algorithm for slightly different conditions.
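
A minimal sketch of such a variation, assuming a simple selection (if/elif/else) task (the grading and ticket-pricing scenarios and their boundaries are invented for illustration): the structure practised under guidance reappears in independent practice with slightly different conditions.

    # Guided practice: selection with one set of conditions.
    def award_grade(mark):
        if mark >= 70:
            return "Distinction"
        elif mark >= 50:
            return "Merit"
        else:
            return "Pass"

    # Independent practice: the same selection structure applied to a
    # slightly different context and set of conditions.
    def ticket_price(age):
        if age < 5:
            return 0.00
        elif age < 16:
            return 2.50
        else:
            return 5.00

    print(award_grade(64), ticket_price(12))
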
At the end of each unit of work, there should also be a summative assessment which tests the students’ stage of mastery by asking them to apply their skills or knowledge to a new scenario or context. These assessments should be conducted in exam conditions and should mirror the assessment requirements at GCSE or A-Level as closely as possible. In many cases, exam boards will release their old exam papers and also supply some specimen exam papers for new specifications. Teachers are encouraged to use these official materials wherever possible to ensure that the assessments are rigorous, accurate and appropriate.

Following a summative assessment, students should spend at least one lesson reviewing their assessment. It is here that the teacher should design resources and activities to correct misconceptions and misunderstandings. Some teachers like to have a digital mark book which feeds directly into a Personalised Learning Checklist (PLC) for each student. Dr Jasper Green (Network Lead for Science at Ark Schools) suggests an alternative approach: marking an assessment with the mark scheme at hand and annotating this mark scheme with misconceptions; it soon becomes apparent where the common misconceptions are for a certain unit and what these misconceptions are (Green, 2016).

Indeed, a combination of the two techniques can be used to record and analyse misconceptions.
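
A minimal sketch of how a digital mark book might feed this kind of question-level analysis (the questions, topics, scores and the 60% flagging threshold are all invented for illustration): averaging each question’s success rate across the class quickly shows where the misconceptions cluster.

    # Hypothetical mark book: marks available and each student's score per question.
    marks_available = {"Q1": 2, "Q2": 3, "Q3": 4}
    topics = {"Q1": "Primary keys", "Q2": "Validation", "Q3": "Queries"}
    mark_book = {
        "Student A": {"Q1": 2, "Q2": 1, "Q3": 2},
        "Student B": {"Q1": 2, "Q2": 2, "Q3": 1},
        "Student C": {"Q1": 1, "Q2": 1, "Q3": 1},
    }

    # Average success rate per question; low rates flag likely misconceptions.
    for question, available in marks_available.items():
        scored = sum(student[question] for student in mark_book.values())
        rate = scored / (available * len(mark_book))
        flag = "  <- reteach" if rate < 0.6 else ""
        print(f"{question} ({topics[question]}): {rate:.0%}{flag}")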

Before students start their corrections, they could also be given the opportunity to reflect on their learning and current level of understanding. Below is an example of a reflection sheet for a Database unit which is used during the Do Now and revisited as a plenary activity in the consolidation phase.

<End of excerpt>

----------------------------------------------------------------------------------------------------------
This is an excerpt from Teaching Computing in Secondary Schools. The full text is available from Routledge and Amazon.


References

Baxter, M., Knight, O. & Lau, W., 2016. GFS Teaching Handbook, London: Greenwich Free School.
Carless, D. & Chan, K. K. H., 2016. Managing dialogic use of exemplars. Assessment & Evaluation in Higher Education, 20 July, pp. 1-12.
Collins, A., Holum, A. & Seely Brown, J., 1991. Cognitive Apprenticeship: Making Thinking Visible. American Educator: The Professional Journal of the American Federation of Teachers, 15(Winter), pp. 38-46.
Denner, J., Werner, L., Shannon, C. & Ortiz, E., 2014. Pair Programming: Under What Conditions Is It Advantageous for Middle School Students?. Journal of Research on Technology in Education, 46(3), pp. 277-296.
Franklin, J. P., 2015. Perceptions by young people of Pair Programming when learning text languages, London: Axsied / King's College London.
Gibbons, P., 2002. Scaffolding Language, Scaffolding Learning. Portsmouth, NH: Heinemann.
Green, J., 2016. Question level analysis in science. [Online]
Available at: http://thescienceteacher.co.uk/question-level-analysis/
[Accessed 29 December 2016].
Hannay, J. E., Dybå, T., Arisholm, E. & Sjøberg, D. I., 2009. The effectiveness of pair programming: A meta-analysis. Information and Software Technology, 51(2009), pp. 1110-1122.
Kingsbury, G. G., Wiliam, D. & Wise, S. L., 2011. Connecting the Dots: Formative, Interim and Summative Assessment. College Park, Maryland, Northwest Evaluation Association (NWEA).
Knight, O. & Benson, D., 2014. Creating Outstanding Classrooms. Oxon: Routledge.
Lewis, M. & Wray, D., 2000. Literacy in the Secondary School. London: David Fulton Publishers Ltd.
Nuthall, G., 2007. The Hidden Lives of Learners. Wellington: NZCER Press.
Paas, F. G. W. C. & Van Merriënboer, J. J. G., 1994. Variability of Worked Examples and Transfer of Geometrical Problem Solving Skills: A Cognitive Load Approach. Journal of Educational Psychology, 86(1), pp. 122-133.
Pearson, P. D. & Gallagher, M. C., 1983. The Instruction of Reading Comprehension. Contemporary Educational Psychology, 8(3), pp. 317-344.
Renkl, A. & Atkinson, R. K., 2003. Structuring the Transition From Example Study to Problem Solving in Cognitive Skill Acquisition: A Cognitive Load Perspective. Educational Psychologist, 38(1), pp. 15-22.
Rosenshine, B., 2012. Principles of Instruction: Research-Based Strategies That All Teachers Should Know. American Educator, Issue Spring, pp. 12-39.
Sadler, D. R., 2002. Ah! … So That’s 'Quality'. In: P. L. Schwartz & G. Webb, eds. Assessment: Case studies, experience and practice from higher education. London: Kogan Page, pp. 130-136.
Sweller, J., 2016. Story of a Research Program. Education Review, 10 February, Volume 23, pp. 1-19.
To, J. & Carless, D., 2016. Making productive use of exemplars: Peer discussion and teacher guidance for positive transfer of strategies. Journal of Further and Higher Education, 40(6), pp. 746-764.
Williams, L. & Kessler, R., 2002. Pair Programming Illuminated. Boston, MA: Pearson.
