I've been through the previous listing from Hattie (2009: 297) and inserted the new items in rank order according to effect size (the new ones have no former rank). This is not an exact science: partly because the report is of an interview with Hattie which originally appeared in a Swedish magazine, and partly because it is not always clear whether, as in the case of the former no. 1, there has simply been a slight re-labelling of the category (quite likely, given that the database has grown by about half since Visible Learning was written), or whether a completely new candidate has made an entrance, as seems to be the case with “Change programs for conceptual understanding”.
I comment below on some thoughts arising from the listing; I'll leave “Collective self-efficacy” until last.
| Former rank | Domain | Influence | d (effect size) |
|---|---|---|---|
| – | – | Collective self-efficacy | 1.57 |
| 1 | Student | Self-report grades (Self-assessment of ratings / student expectations, 1.4) | 1.44 |
| 2 | Student | Piagetian programs | 1.28 |
| – | – | Change programs for conceptual understanding | 1.15 |
| – | – | Feedback on intervention | 1.07 |
| 3 | Teaching | Providing formative evaluation (Formative assessment, 0.98) | 0.90 |
| – | – | Teacher credibility | 0.90 |
| 4 | Teacher | Micro teaching (a.k.a. video analysis of tuition) | 0.88 |
| 5 | School | Acceleration | 0.88 |
| – | – | Classroom discussion | 0.82 |
| 6 | School | Classroom behavioral | 0.80 |
| 7 | Teaching | Comprehensive interventions for students with Special Educational Needs | 0.77 |
| 8 | Teacher | Teacher clarity | 0.75 |
| 9 | Teaching | Reciprocal teaching | 0.74 |
| 10 | Teaching | Feedback | 0.73 |
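A quick aside on the d statistic, for readers who have not met it. Hattie's effect sizes are standardized mean differences (Cohen's d), with d = 0.40 as his well-known “hinge point” above which an influence is deemed worth having. As a sketch only (the underlying meta-analyses aggregate many study designs, so this is a simplification of how individual figures are actually derived):

```latex
% Standardized mean difference (Cohen's d): the gap between group
% means expressed in units of the pooled standard deviation.
d = \frac{\bar{x}_{\mathrm{intervention}} - \bar{x}_{\mathrm{control}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

On that reading, d = 1.57 means the average member of the “high collective self-efficacy” group outscores roughly 94% of the comparison group; an extraordinarily large effect by the usual conventions, which is one reason the figure deserves scrutiny.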
The former no. 1, Self-report grades, remains near the top. As this post from docendo discimus points out, it is not really clear what it means, and indeed why Hattie does not make more of it if the effect size is so big. Like the self-efficacy point later, is it actually an “influence” or “intervention”? Is it not at least as likely that the ability to point accurately to one's likely level of achievement is simply an artefact of successful learning processes, a kind of meta-learning after the event rather than a condition of routine learning? Given the world-wide scope of the meta-analyses it is also at least possible that culture enters the equation quite a lot; here is my take on it from a while back (unfortunately the Chronicle article no longer seems to be available).
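There is a statistical reason to share that suspicion, though I have not checked the underlying studies myself: if the meta-analyses behind Self-report grades are correlational (students' predicted grades set against their achieved grades) rather than experimental, then the headline d comes from the standard conversion of a correlation into an effect size:

```latex
% Standard conversion of a correlation r into Cohen's d
% (Cohen, 1988). Note how quickly modest correlations become
% spectacular-looking effect sizes.
d = \frac{2r}{\sqrt{1 - r^2}}
```

A correlation of about 0.58 between self-estimates and later achievement already converts to a d of roughly 1.4, the size of the headline figure. On that reading the “effect” measures how accurately students can predict their own results, not the impact of anything a teacher could do, which fits the artefact interpretation above.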
Piagetian program[me]s: This is based on a meta-analysis from 1981, and if it is that powerful, it is again quite strange that it (a) does not feature more strongly in VL (it gets just a 13-line paragraph on page 43), and (b) does not appear to have attracted more recent corroboration; there is a 2008 reference cited, but that is to a two-page note on "Cognitive load theory and education technology". Docendo discimus also discusses this item here. Depending on the ages of the pupils, of course, there may be something here about the importance of enabling children to make the developmental stage transitions as effectively as possible; but that depends on the reliability of Piaget's developmental model and the accuracy with which the stages can be assessed, not to speak of the differentiation probably required. (See here for an introduction to Piaget.)
Change programs for conceptual understanding: Now this sounds more interesting. It's a new category, and I imagine Hattie will locate it in the “curricula” domain. In the previous ranking, curriculum did not appear until no. 15 (Vocabulary programmes; d = 0.67). In his 2013 lecture (link in Further Reading) Hattie says:
> A major argument in this discussion is that there should be more attention to the narrative of ‘learning’, as it is via developing ‘learning’ for all students that there will be subsequent effects on ‘achievement’. While there are many definitions of ‘learning’, the one that is the basis for this presentation is that learning is the process of developing sufficient surface knowledge to then move to deep or conceptual understanding. (My emphasis)

He goes on to endorse Biggs' SOLO taxonomy as a way of achieving this. This leads me to read “Change programs...” as an instruction, rather than as an endorsement of the packaged nostrums and course designs which purport to develop “thinking skills” and hence conceptual understanding. Indeed, it is interesting that in the quote above he starts with “developing sufficient surface knowledge to [...] move on”. Surface learning is commonly deprecated, but it nevertheless provides the foundation for conceptual understanding. (See VL pp. 28-9.)
I assume that formative assessment (0.98) is merely a re-wording of formative evaluation (0.90). There are several variations on the theme of feedback, of course.
Teacher credibility is a new entrant to the chart. It's a hot topic for me; see blog posts here, here and here. I'm interested to see that the effect size is as large as it is (0.90); in the brief note on the VL website, Hattie treats it as more of a hygiene factor (in Herzberg's sense) than a positive intervention:
> According to Hattie teacher credibility is vital to learning, and students are very perceptive about knowing which teachers can make a difference. [...] In an interview Hattie puts it like that: “If a teacher is not perceived as credible, the students just turn off.”

(Do read the full interview linked from the quote.)
Video analysis is probably simply a better term than “micro-teaching”, which is jargon and not self-explanatory.
So in terms of new entrants, that brings us to Collective self-efficacy, entering directly at the top with an amazing effect-size of 1.57. But if micro-teaching is jargon, what on earth is this?
You can't really blame Hattie: he did not invent the term, and it has been around since 1977 (ref. below). It originates with Albert Bandura, the pioneer of social learning theory, who defined self-efficacy as “people's judgments of their capabilities to organize and execute courses of action required to attain designated types of performances” (1986: 391). It is more specific and task-focused than general motivation, and of course in this case it is collective; the beliefs concern the team or institution, possibly the class, rather than just an individual.
If all that is a bit of a mouthful, it comes down to President Obama's (and Bob the Builder's) slogan: Yes, we can! (Bob got there first, in 1998.)
But as with Self-reported grades / Student expectations, the direction of causation is still not clear to me. It makes sense that a class with a “Yes, we can!” attitude is more likely to achieve than one without it. But surely collective self-efficacy is the product of many other smaller effects or influences, such as students' trust in each other as well as the teacher, a sense of prior achievement, high expectations... the list goes on and on. “Yes, we can!” is a condition and an aspiration for learning rather than something which can be created ex nihilo and applied. It has to be nurtured with and through the class, and certainly cannot be imposed.
- For a recent, short, accessible account of Hattie's view of learning, here is his keynote from the 2013 conference of the Australian Council for Educational Research: Understanding Learning: Lessons for learning, teaching and research.
- For a glossary of Hattie's terms: http://visible-learning.org/glossary/
- Bandura, A. (1977). “Self-efficacy: Toward a unifying theory of behavioral change”. Psychological Review, 84(2), 191-215.
- Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice Hall.
- For more on self-efficacy: http://www.positivepractices.com/Efficacy/SelfEfficacy.html
- Interestingly, Hattie was involved in direct empirical research on self-efficacy and the development of a measurement instrument: Dimmock, C. and Hattie, J. (1996). “School Principals’ Self‐Efficacy and its Measurement in a Context of Restructuring”. School Effectiveness and School Improvement: An International Journal of Research, Policy and Practice, 7(1), 62-75.