My teaching has evolved every year for the past three years, hopefully for the better. Here I want to share some of my experiences, which may interest other teachers as well as my students, current and past.
In 2012-13 I spent much energy discovering what works for good teaching. The first part of the survey was fruitless and confounding. Many things work to improve learning; indeed, everything works to improve learning. Giving detailed notes helps to improve learning (because students benefit from access to the material), but giving no notes also helps (because students now need to process the information themselves). The advice is vague and, worse yet, often contradictory.1
While this was going on, I ran my classes with laissez-faire policies inherited from years of teaching and learning at university. This "worked", by virtue of the fact that my students require limited management. I am lucky not to have students who play cards and do push-ups in class, or play the violin, or punch one another (or me) in the nose.
Also inherited from university teaching was the default habit of overhead slides. Before every class I would prepare elaborate, pretty, thoroughly animated PowerPoints (in Keynote) that I would share after class on the school network drive. This was usually ~2-3 hours of effort per class, and I told myself that I would re-use the slides year after year.
The ebb and flow of the course was indicated on a big display of Post-it notes outside the lab, of which I was very proud. (It was pretty.)
I wrote original questions for all tests, and they tended to push further than the straightforward "plug-things-into-formula" mold.2 I have continued doing this in the years since, and it has been a fantastic source of education.
Back to my survey of the teaching literature: in the second half I came across John Hattie’s meta-meta-studies. A quick explanation is warranted here. Let’s say there have been 342 original studies on, say, the differences between multiple-choice and short-answer tests; there are so many because they were done at different grade levels, in different subjects and geographical locations, across cultural divides, etc.3 A meta-study reviews these studies, and perhaps another 8878 (on essay writing, oral exams, etc.), to try to quantify what there is to be learnt about different testing techniques. Through this lens we see the overall effects, and resolve some of the contradictions.
Hattie’s meta-meta-study looks at ~900 of these meta-studies and asks what there is to be known about education, period. I have not gone down the path of verifying the statistics, but the secondary reviews seem to suggest that the methods and findings are sound.
I took home two things from Hattie’s work. First, some things do work. There is such a thing as teaching well. Nihilistic, cynical feelings about teaching are not warranted. There are techniques that are known to be beneficial, and teaching is a job that is too serious, too important, too urgent, and too awful otherwise to ignore them.
Second, it is almost impossible to tell whether things work for me. The “effect size” (how effective a particular change or technique is) is usually small even when enormous numbers of students were involved. I teach ~80 students a year, and this is a case where the Law of Small Numbers holds: a particularly strong class will unduly inflate the apparent effectiveness of my change (and vice versa). Whether test grades have improved year-on-year is not a suitable measure of the effectiveness of a change.4
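To make this concrete, here is a small simulation of my own (an illustration, not anything from Hattie): suppose a change has a true effect size of d = 0.2, a respectable figure on Hattie's scale, and I compare one ~80-student cohort against the next.

```python
import random
import statistics

def simulated_effect(n_students, true_d=0.2, seed=None):
    """Simulate one year's observed effect size with a cohort of n_students.

    Scores are standardized (mean 0, sd 1); the intervention shifts the
    mean by true_d standard deviations.
    """
    rng = random.Random(seed)
    before = [rng.gauss(0, 1) for _ in range(n_students)]
    after = [rng.gauss(true_d, 1) for _ in range(n_students)]
    return statistics.mean(after) - statistics.mean(before)

# A "true" effect of d = 0.2, observed over many simulated years:
observed = [simulated_effect(80, seed=year) for year in range(1000)]
print(f"true effect: 0.20, observed range: "
      f"{min(observed):+.2f} to {max(observed):+.2f}")
# With only ~80 students, a single year's cohort noise is comparable to
# the effect itself: some years the change looks harmful, others miraculous.
```

The spread of observed values dwarfs the true effect, which is exactly why year-on-year grade comparisons cannot tell me whether a change worked.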
For an evidence-based, data-driven guy like me this would normally be a deal-breaker. I detest both making changes whose success cannot, in a fundamental sense, be evaluated5 and changing things because one needs to change things.6 But the broader trends were sufficiently convincing to justify a leap of faith.
Hattie’s list suggests many effective techniques. Some of these (I think) I already use, some I cannot use, and yet others I could choose to implement, often in many different ways. I chose to focus on formative evaluations.
One of the significant drawbacks of a lecture (no matter how clear and engaging) is that there is limited ability for me to know what each student understood, and limited opportunity for students to know what they understood.
The most common remedy is to ask students if there are questions, which for the most part is hopelessly useless.9 There are many versions of the knowledge presented: one version in the teacher’s head, one version out of the teacher’s mouth, another version in each student’s head, and a different version 20 minutes later. The most important corrections are almost never the ones that the students know they need, nor the ones that someone who can already do it would anticipate.
To take an example from yoga: many poses require the “yogi grip”, where one hooks the index and middle fingers around the big toe. I would do it once, repeat it with an explanation, and then ask students to perform it. For something that seems as straightforward as it comes, I once saw four different wrong ways of doing it in a single class of 16. All the students were convinced that they were doing it exactly how I showed them. The goal of formative evaluations is to close this loop between what I think they know and what they actually know, and the best book I have found on this is Dylan Wiliam’s Embedded Formative Assessment, a book that I re-read every term.
Similar to how other teachers use clickers, I introduced Signal Cards: laminated, colored Post-it notes. The classes were still slides-based, but embedded inside them were multiple-choice slides that students quickly respond to by flashing their cards.
At the same time, I secured funds for 20 whiteboards (“slates”), and they were the best investment I have made as a teacher. Instead of slides with multiple-choice questions, other slides call for free-form responses. I learn so much from this. As with the “yogi grip” above, the common errors are not always obvious to someone who already knows, and this is the year I feel I made the most progress in my teaching.
Students learnt more too. First, I get to peer inside their little heads, find out what they have understood, and make suitable corrections. Second, the slates fully leverage the class time. You’ve all been in class: the teacher asks a question, “Why is fluorine the most electronegative element…” (heads spin, stomachs tighten) “…Sarah?” (Sarah gets butterflies. Everyone else relaxes and watches the show with a mixture of schadenfreude and sympathy, but no chemistry.) The next minute or two is a complete waste of time for everyone but Sarah. With the slates, the next minute or two fully engages everyone.
The bottleneck of the slate-work is me giving feedback to each student, and here I have taken a page from both Doug Lemov’s work and my experience as a scuba diver. Hand signals turn out to be extraordinarily efficient: with little training, students immediately catch “correct / good”, “almost”, “not at all”, “try the next one”, etc. with no verbalizing, and corrections can be made efficiently with simple eye contact and 2-3-syllable phrases such as “sig fig”, “check units”, and “sanity check”.
While still “slides-based”, the back-and-forth nature of the class now demands a different kind of slide organization. Instead of slides that try to be fully informative (and pretty), the slides settled into two patterns:10
- Lead-up. I applied the lead-up pattern when the “next thing” can be assembled from prior knowledge. In this pattern, I deploy a sequence of questions, each building on the one before, until a new concept is fully exposed.
- Inform-apply. When something new is introduced, I deliver a bite-sized piece of information (2-3 min.), then immediately get a response from the students, in the form of applying the new thing to solve a problem (if possible) or answering a simple question so I can verify their grasp.
To differentiate between different levels of chemistry ability, the free-form response slides have three components:11 a main question, a hint, and an extended version. I now spend less time on crafting pretty slides, but much more on structuring the learning process. These new Google Slides still took ~2 hr / class to prepare, often drafted first on paper. I also switched systematically to a black background to avoid the projector’s hotspot on the whiteboard.
The move to Google Slides was motivated by the ease of sharing and versioning. Keynote presentations do not always translate properly to PowerPoint, and our school network drive can be hit-and-miss for storage (some machines never seem to be able to access it properly). And when I edited the slides later to fill in more information, the different versions caused confusion. Having a single, always up-to-date link for distribution was useful.
This year the College also adopted Haiku Learning as our LMS.12 We were asked to use Haiku for our classes. I used Haiku as a portal where class resources were located, and each class had its own page where the Slides were embedded.
Since there was now a portal, I also made the course schedule digital, as a Google Sheet. This is easier to reference, and it syncs automatically without me going up to the lab to shuffle Post-its around. I miss my arts-and-crafts, but this was the right step.
(What do I think about Haiku? Three years on, I think we got it all wrong. It was a mistake to choose Haiku because it was simple to implement.13 It was a mistake to have no ground rules for how Haiku should be used.14 It was a mistake that we probably won’t recover from.15 And fundamentally, speaking as a tech-savvy guy, I think it was a mistake to push eLearning for the sake of “we do eLearning”. eLearning is not a panacea and, in most cases, not even a solution: an OECD study concludes that “students who use computers very frequently at school get worse results”, and that “there is no single country in which the internet is used frequently at school by a majority of students and where students’ performance improved”.)
The same year, Science was asked to give at least three unannounced quizzes per term. I agree with the intent behind quizzing, but on the ground it was disastrous. The Y2 students had not grown up with the system, and the quizzes killed them. I consoled many crying students that term, and was so fed up with the system that I was ready to renounce the new contract.
The Google Slides system worked, but I wasn’t happy with it. My classroom is arranged so that I can use either the overhead projector or the whiteboard; the whiteboard, for the most part, has more expressive power and serves the learning better. The slide system also meant a largely pre-ordained delivery, and decreased my ability to respond on the fly.
So I moved back to the whiteboard. Before each class I draw out the plan for the class in a notebook, continuing to base it around questions that probe and develop understanding using the slates. Each class flows from left to right and ends, when all goes according to plan, just as time is up and the board is filled. It’s elegant.16 I wish I had done this earlier, but in hindsight I could not have done it in the previous years. The ability to anticipate the time needed, choose appropriate questions, organize clearly, and weave in suitable improvisation was available to me this year and not before.17
I tested many whiteboard pens this year. I teach in a large room, and large whiteboard pens make a big difference to readability. To distinguish conceptual elements (e.g., a graph, annotations on the graph, and explanations of the annotations) I use ample color, and developed a color scheme from the limited palette. It doesn’t always work, but usually well enough.
The shift to the whiteboards came with collateral damage: writing out multiple-choice questions wastes time, so I ended up switching to a quick flick of thumbs-up / thumbs-down. This is a little sad, and in future years I may yet bring the cards back by pre-printing the questions on A3-sized paper. See, this summer, with the help of my friend Ricky, I commissioned 20 sets of wooden signal cards with embedded magnets. To be human is to worry about sunk cost.
The departmental demand for unannounced quizzes has since been tempered to just at least three quizzes (which may now be announced). Quizzes, and active retrieval in general, are known to be helpful for learning. It is the psychological cost of unannounced quizzes, coupled with their perceived impact on grades,18 that makes them unpalatable.
So I took advantage of the relaxed regulation19 and gave 11 announced quizzes that term. I got to give many corrective comments. Between setting quizzes and tests with their answer keys, and marking hundreds upon hundreds of quizzes and tests, the workload that term was insane.
Before going on, here’s a relevant sidebar. There was an interesting study in which students were given one of three kinds of feedback: (i) only grades, (ii) only comments, and (iii) both grades and comments. The expected finding is that comments provide much more benefit than grades alone; the unexpected finding is that when grades are given alongside comments, the benefits of the comments are nullified. Students simply look at the grades and ignore the comments.
To maximize the probability that students read the comments, I keep track of the quiz grades in my gradebook spreadsheet but omit them on the paper. Uptake is just OK: I recycle exact questions in later quizzes to find out whether students make the exact errors they made the first time around.20 It’s hit-and-miss; the feedback loop seems complete for most students, but it gets worse at the tail end. On the whole, the quizzes work, but at a tremendous cost.
It is also with these Y1s that I became very opinionated about computer use in class: there are no computers or phones in my class.21 Over the years I have come to the same conclusion as Clay Shirky: distraction websites are designed and engineered to exploit the human attention system, and we are looking at a second-hand-smoke effect, where David’s laptop distracts David and three other students.
One year later, it was absolutely the right thing to do. I wish I had put this hammer down in my first year. The only question in hindsight is why I had not done this before; why was I, indeed, uncomfortable with the idea?
This year also saw broad changes at the IB and College levels.
Chemistry-wise, the IB syllabus was updated, and students now prepare one Internal Assessment instead of a portfolio. Anticipating the change, and leveraging my expertise as an ICT workshop developer for the IBO, I made a substantial push in teaching simulations, databases, and molecular modelling. This trims down some of the wet-lab preparation work needed from the technicians, and also gives the students a plan B if their wet labs don’t work out. The jury is still out.
At the College level, we swapped to an 8-day, 8-block schedule. There are two major consequences for my chemistry teaching. The first is the introduction of “group blocks”, where subject groups take turns teaching a 1-hour block to the entire year group approximately once a month. In the sciences we use them to teach data-analysis skills and to introduce components of the internal assessments. Very gratifying when done right, but very difficult to get the organization, personal help, and differentiation right with a crowd of 130.
The second is the reduction of teaching time, down to 153 for higher level. I barked up and down the tree that this would be a disaster (for me), and I still think it will be a disaster (for me). A good teacher is maybe 20% more efficient than an average one, and society should throw flowers and money at teachers who are 40% more efficient than their peers. I am not so good that I am 50% more efficient than mine.
This year, at last, I looked back at what I had done last year, and it was good. My teaching has finally stabilized.
So I get to turn my attention to some neglected bits, like teaching notes. In the first two years I relied on supplying the slides to the students, and last year, photographs of the whiteboard. I question the usefulness of both. The importance, and utility, of notes lies not in the content but in the act of preparing them; it’s about the brain-on-paper time. This year I am providing guidance for students to take notes: a template modelled after my whiteboard plan, together with the questions that they will solve in class as part of the progression.22 I am hoping that this will scaffold their note-taking and eliminate the “copying the question” time (pen-on-paper but not brain-on-paper).
(You may notice that the spaces for answering the questions are on the back of the page. This is a tip I picked up from the design of Japanese math textbooks; the point is that students should be able to see their own solutions, but not immediately.)
And then, taking a page from my yoga teachers Monica and Gregor, I am finally starting to use a voice amplifier. I should have done that years ago.
Stabilized teaching means that I am also getting to work on things outside the class. At any one time I have several secret projects going on, and one of these is getting into prime time. Ploddingly, over the past year, I have put together a Moodle setup that works for chemistry, and my goal for this year is to build up a preliminary version. This is a system where students can:
- practice as much as they want,
- on programmatically generated questions (so there’s infinite replayability),
- on topics of their choice,
- in their own time,
- receive accessible feedback, and
- track their progress.
[insert video here]
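To give a flavor of what “programmatically generated” means, here is a minimal sketch of my own (the compound list and number ranges are illustrative, and this is not Moodle’s actual question format): each question is a template whose values are re-randomized per attempt, so the question bank never runs dry.

```python
import random

MOLAR_MASS = {"NaCl": 58.44, "KNO3": 101.10, "glucose": 180.16}  # g/mol

def make_concentration_question(rng):
    """Generate one randomized concentration question and its answer key.

    Template: dissolve m grams of X to make V cm3 of solution;
    find the concentration c in mol/dm3.
    """
    compound = rng.choice(list(MOLAR_MASS))
    mass_g = round(rng.uniform(1.0, 25.0), 2)
    volume_cm3 = rng.choice([100, 250, 500, 1000])
    moles = mass_g / MOLAR_MASS[compound]
    answer_mol_dm3 = moles / (volume_cm3 / 1000)
    text = (f"{mass_g} g of {compound} is dissolved in water to make "
            f"{volume_cm3} cm3 of solution. Find the concentration in mol/dm3.")
    return text, round(answer_mol_dm3, 3)

rng = random.Random(7)  # in practice, each attempt would get its own seed
for _ in range(3):
    question, key = make_concentration_question(rng)
    print(question, "->", key)
```

Every call yields a fresh variant with its marking key, which is what makes infinite replayability (and automated feedback) possible.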
Setting up the quizzes is an exacting process and, including testing, demands full attention for about 90 minutes per quiz. (Even then I still get some things wrong.) Between other demands at the College, I struggle to get quizzes prepared, but I am not going to let the perfect prevent the good from happening.
One month in, I am pleased to see it all coming together. This already blows the eLearning courses from big publishers (e.g., Oxford Baccalaureate) out of the water, and at times I feel like Luke Skywalker at the end of Episode IV.
There are three side effects to having an automated quiz system.
- The bad: I no longer get to see the students’ mistakes, and thus deprive myself of learning from them.
- The nice: I no longer need to pore over student mistakes for hours after each quiz. (That said, the time I save on marking is spent on creating the quizzes. But this will be a net positive next year.)
- The fabulous: I can see exactly how and when students practice.
Some elaboration on the last point. The system tracks all kinds of details, and I can discern how and when students practice. This has turned one of my deeply ingrained articles of faith upside down. I believed that students who struggle do so because they come from a challenging background (be it language, content, or study habits). I was astonished to see preliminary evidence telling me that, on the whole, students who don’t do well simply haven’t tried. They know there is a quiz the next day, and that the questions will come from the practice quiz, and they still will not spend 10 minutes looking at that quiz (or its answers). This is the kind of information I could never have gotten otherwise, and I hope to do a proper analysis of it in December or at the end of the year.
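The check itself is simple once the logs exist. Here is a sketch with invented data (the names, field layout, and 10-minute threshold are all my own illustration, not Moodle’s export format):

```python
from datetime import datetime

# Hypothetical attempt log: (student, quiz, finished_at, minutes_spent)
attempts = [
    ("Ana",  "stoichiometry", datetime(2016, 10, 3, 21, 15), 24),
    ("Ana",  "stoichiometry", datetime(2016, 10, 4, 7, 40), 12),
    ("Ben",  "stoichiometry", datetime(2016, 10, 4, 8, 55), 3),
    ("Carl", "bonding",       datetime(2016, 10, 1, 19, 5), 18),
]

def practice_minutes(attempts, quiz):
    """Total practice minutes per student on one practice quiz."""
    totals = {}
    for student, q, _, minutes in attempts:
        if q == quiz:
            totals[student] = totals.get(student, 0) + minutes
    return totals

roster = ["Ana", "Ben", "Dee"]
totals = practice_minutes(attempts, "stoichiometry")
# Who spent under 10 minutes on the practice quiz before the real one?
under = [s for s in roster if totals.get(s, 0) < 10]
print("practiced < 10 min:", under)  # practiced < 10 min: ['Ben', 'Dee']
```

Cross-referencing a list like `under` against quiz grades is exactly the comparison that upended my assumption about struggling students.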
Reflecting on my teaching is a cringe-worthy experience. There is so much that I have learnt; I hope I have at least done no harm to my former students. I am sure there is much more that I will learn; I hope one day I will be an expert teacher and not merely an experienced one. Compared to two years ago, I am much more grounded and my own man. Much of what I have tried started as a leap of faith, but now I am certain that I am doing right, and I am ready to pay out the rope to the bitter end. We shall revisit in 2018.