The advantages of phonics are, in fact, modest. Before getting into why, it’s important to be very clear about what I mean by ‘phonics’. It is a word which causes great controversy and yet, as is often the case with controversial terms, there is no clear consensus about what it actually refers to. My definition of ‘phonics’ is roughly as follows:
The intensive drilling of early readers in individual letter-sound correspondences, according to a carefully prescribed program, accompanied by exercises in blending and segmenting words, all of which is carried out independently of ‘real’ texts.
Most importantly, I take phonics primarily to be a method of instruction – it is about what teachers should be doing, not what is true about the domain of interest. Indeed, much of the confusion around this issue is caused by both advocates and detractors muddling this notion of phonics with something else – a theory of English orthography. According to this theory, spelling ‘works’ in a certain way, namely, it conforms to what I shall call the ‘alphabetic thesis’ (explained in more detail below).
But these two ideas – what is true about written English, and how best it should be taught – while evidently connected, are logically independent. It may well be that the best way to teach reading is phonics, but that the alphabetic thesis is false, or, equally, that the alphabetic thesis is true, but that there are other, better ways of teaching reading. As it turns out, I happen to believe that the first of these scenarios is the right one – phonics is moderately better than other ways of teaching reading (though still nowhere near as good as many would wish) but the alphabetic thesis is essentially false.
Thus, phonics is only modestly successful for two reasons: first, it is not actually as good as many people say it is (though probably still better than any available alternative). And, second, the constant confusion between it and the alphabetic thesis means that many of the ideas often associated with phonics are wrong and, indeed, potentially harmful.
With regard to the first point, that the ‘phonics’ approach to teaching reading is only modestly effective, very little argument is actually needed, since even key phonics advocates are willing to concede that, in the long run, phonics has only modest results. Here is a quote from a leading advocate of phonics instruction in Australia:
“The lower long-term effects of phonics interventions can be explained by the constrained nature of phonics. Once children have mastered decoding, other aspects of reading instruction become stronger variables in their reading ability”
Some more strident defenders of phonics may be inclined to point to the word ‘once’ in this quotation, as it may imply that mastering ‘decoding’ (a confusing term, which I address below, but which, it suffices to say here, phonics instruction is highly focussed on) is a kind of ‘hurdle point’ – that there will be no progress without it. Thus, students who are taught with phonics arrive at this hurdle point earlier, which, the implication runs, must be a significant advantage.
It’s important, though, to attend to the data itself, and not what one might hope is going on ‘behind’ it. While it may be the case that phonics-taught children make more early progress in being able to read simple texts (which makes a good deal of sense to me intuitively), those of us who recommend phonics must admit that this is of only limited importance when the ultimate aim of reading instruction – long-term reading ability – is brought into focus. The use of this word ‘once’ thus strikes me as a little rhetorical: it could be replaced with ‘whether or not’ and remain true to the data.
A whole range of ‘other aspects’ are thus more important in helping young people achieve a sophisticated level of literacy. What is more, the brutal truth, which many on both sides of this argument are for understandable reasons unwilling to confront, is that many of these factors are outside the control of the school system. Literacy is a form of language, and language is inextricably social in its character. Schools are not so much the cause of a society’s linguistic competence as a result of it. Universally highly academic schools will emerge only when society itself becomes universally highly academic.
I would therefore applaud phonics advocates for challenging methods of reading instruction which wholly ignore the sounds usually associated with letters (though I do wonder if there were ever really as many people using such methods as we are sometimes encouraged to think). What I take issue with, though, is the efforts sometimes made to squash the proposition that there are fundamental constraints on what any method of school instruction can achieve, and that these limits are chiefly set by social dynamics around literacy that are out of the classroom teacher’s hands. When teachers make this point, they are too often accused by the management class of having ‘low expectations’ of, for example, students who do not read much at home and for leisure. This is simply not fair, and in fact really rather sinister in its willingness to bully and humiliate those who are simply stating what is a quite obvious sociological fact.
I turn now to the alphabetic thesis, which is false, and which is in danger of causing harm. I should state that I consider this thesis to be quite distinct from the alphabetic principle, which is the trivially true observation that most writing systems are based on the way that written symbols can be associated with sounds of various kinds. The alphabetic thesis takes this notion too far, and assumes that the basic logical principle on which a linguistic capacity is founded necessarily determines the whole system. It is rather like saying that, because the American constitution continues to ‘work’, and because this constitution is based on the principle that everyone be treated equally, everyone in America actually is treated equally.
Put another way, the alphabetic thesis might be expressed as the conviction that writing is a code. A code is a symbolic system in which there is a one-to-one correspondence between symbols drawn from two distinct sets. Codes are very familiar to school children, who enjoy translating messages on the simple basis of, say, aligning ‘A’ with ‘1’, ‘B’ with ‘2’ and so on. The key point is that, when decoding a message, one can rely on ‘3’ always being a ‘C’ – there is never any doubt about what ‘value’ to assign it. And it is this definitive feature of a code that English orthography repeatedly fails to display.
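The guaranteed reliability of a true code can be sketched in a few lines of Python. The cipher below is my own toy illustration of the school-child’s ‘A’-with-‘1’ game, not anything drawn from a phonics programme:

```python
# A code in the strict sense: a one-to-one correspondence between two symbol sets.
cipher = {letter: str(i) for i, letter in
          enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ", start=1)}

# Because the mapping is one-to-one, it can be inverted without ambiguity:
# '3' is always 'C', no matter what symbols surround it.
decode = {value: letter for letter, value in cipher.items()}

encoded = [cipher[c] for c in "CAB"]           # ['3', '1', '2']
decoded = "".join(decode[v] for v in encoded)  # 'CAB', recovered exactly
print(decoded)
```

It is precisely this guaranteed invertibility, trivially available to the school child’s cipher, that English spelling lacks.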
The alphabetic thesis is perhaps best illustrated in practice by the ‘nonsense word’ test, which many supporters of phonics advocate as an important reading assessment. This test, in which children are given nonsense words and asked to ‘read’ them, relies on the assumption that sounds can be uncontroversially assigned to graphemes independently of word meaning and history. An important implication of the alphabetic thesis is that the spelling system must be synthetic: in a code, each symbol’s value remains the same no matter what symbols surround it. In the case of spelling, this would mean that written words could always be ‘broken down’ into a clear set of independent symbols, each of these independently assigned a sound value, and these values then ‘blended’ together to create whole spoken words. Syntheticity is typically used to justify phonics’ emphasis on isolated drilling of sound-letter correspondences (which, as I have said, is not in itself a bad thing as a means of initial instruction).
More informed practitioners of phonics generally understand that English has a ‘deep orthography’, meaning that the assigning of sounds to letters is under-determined (as it is for all languages – every writing system depends to some degree on the representation of sounds, but even ‘shallow’ orthographies fail to purely codify sound). Many are less explicitly aware, it seems to me, of the distinct notion of syntheticity, and how English spelling fails to meet this particular condition of being a code.
Reasons for Rejecting the Alphabetic Thesis tout court
1. Multi-valency – telling which sound a letter or letter-group represents.
It is widely understood that, in many cases, the same letter or letter group is used in English to represent more than one sound or, in some cases, no sound at all. This happens for a variety of reasons: the desire to preserve meanings when words are combined, ‘foreign’ spellings, and long-term phonological change (the Great Vowel Shift, dropped consonant sounds) being the most important. Many ‘decoding’ approaches address this problem by teaching a whole range of ‘advanced code’ letter-sound correspondences, but this comes at the price of historical fidelity (it side-steps the actual reason for the multi-valency), as well as raising the more basic question: how do you know which? Again, syntactic and semantic context must be key in how able readers solve these puzzles.
said – raid
2. ‘Squashed’ sounds – telling whether or not a letter represents a ‘true’ vowel.
Many of the vowels in multi-syllable words (and, in some contexts, single-syllable words) have no fixed pronunciation, and no important role in determining the meaning of the word in which they appear. The vowels which are like this are unstressed, so an awareness of where stress falls in a word is vital in identifying them. Stress, though, is not marked in English spelling, so an intuitive sense of stress, along with, again, semantic and syntactic context, must be used to help read these words. In the following examples, only the ‘e’ of the stressed syllable can be assigned the typical sound value of ‘e’; the other syllables are unstressed, and their vowels have no fixed sound value. The written word alone supplies no information about where the spoken stress occurs, and so it is impossible to know from text alone what the right pronunciation is (see ‘Spellings Encode Sounds’ for more detail on this distinction).
Elephant – Repentant – Recommend
3. Digraph Demarcation – telling where letter groups begin and end.
The concept of a digraph – a symbol consisting of more than one letter – is essential, but for novice readers it creates the problem of how graphemes are to be distinguished from one another: able readers rely on syntactic and semantic context to do this.
Gashouse Hothouse React Create
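The trouble which all three of these points create for a strictly code-based reader can be made concrete with a toy decoder. The grapheme table and the greedy matching rule below are my own illustrative assumptions, not a description of any real phonics programme, and the sound labels are informal:

```python
# A naive 'alphabetic thesis' decoder: one fixed sound per grapheme,
# matched greedily from left to right, longest grapheme first.
GRAPHEME_SOUNDS = {
    "ai": "/eɪ/", "sh": "/ʃ/", "ou": "/aʊ/",
    "a": "/æ/", "d": "/d/", "e": "/ɛ/", "g": "/ɡ/",
    "h": "/h/", "r": "/r/", "s": "/s/",
}

def naive_decode(word):
    """Assign each grapheme its single fixed value, longest match first."""
    sounds, i = [], 0
    while i < len(word):
        for length in (2, 1):
            grapheme = word[i:i + length]
            if grapheme in GRAPHEME_SOUNDS:
                sounds.append(GRAPHEME_SOUNDS[grapheme])
                i += length
                break
        else:
            sounds.append("?")  # no rule available for this letter
            i += 1
    return sounds

print(naive_decode("raid"))      # ['/r/', '/eɪ/', '/d/'] -- happens to be right
print(naive_decode("said"))      # ['/s/', '/eɪ/', '/d/'] -- wrong: here 'ai' is /ɛ/
print(naive_decode("gashouse"))  # finds a spurious 'sh' across the gas|house boundary
```

Enlarging the table does not help: ‘said’ and ‘sail’ give ‘ai’ different values in identical orthographic surroundings, so only knowledge from outside the spelling itself – the reader’s familiarity with the word – can settle the matter.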
Reasons for Rejecting Syntheticity
As explained above, these examples extend the issue of ‘multi-valency’, undermining the alphabetic thesis in a specific and significant way. They show that, in order to assign a sound to a letter, you have to keep in mind the letters and sounds which precede or follow it. There is thus no simple step-by-step ‘code-like’ algorithm for blending words – as with the examples above, readers must rely on other clues to read the words.
1. Orthographic context – the same letter represents different sounds depending on the graphemes which follow it.
Mating v Matting
Siting v Sitting
Hoping v Hopping
2. Phonetic context – a sound is said differently because of the sounds around it.
There are many paths in the country. The workshop was full of lathes.
He whips the children but blows kisses to them too.
John’s car is nicer than Pete’s.
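The first of these failures – orthographic context – can be sketched as a look-ahead rule. The function below is a deliberately crude illustration of my own, not a rule from any phonics scheme; its point is that the ‘a’ cannot be valued until the letters to its right have already been inspected, which is exactly what a symbol-by-symbol code forbids:

```python
# Syntheticity fails: the value of 'a' in 'mating' v 'matting' is determined
# by the letters which FOLLOW it, so no left-to-right, one-symbol-at-a-time
# assignment can succeed. A crude look-ahead for 'a' + consonant(s) + 'ing':

def a_value(word):
    """Return the sound of 'a' in words like mating/matting (illustrative only)."""
    i = word.index("a")
    if word[i + 1] == word[i + 2]:  # doubled consonant ahead of the vowel
        return "/æ/"                # 'short' a: matting, batting
    return "/eɪ/"                   # 'long' a: mating, baking

print(a_value("matting"))  # /æ/
print(a_value("mating"))   # /eɪ/
```

Notice that even this rule only works by abandoning syntheticity: two further symbols must be read before the first can be given its value.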
‘Decoding’ is therefore a pretty misleading term for what happens when we sound out words. Much as the know-how conveyed in phonics programmes is useful, it is far from being enough. Strong memorisation of the essential sound-letter relations is a good starting point, but readers also need to be flexible: to cycle between individual analysis of graphemes and analysis of their context, to be willing to try out different candidate lexemes, and, most importantly, to supplement their phonics knowledge with an awareness of stress, a broad vocabulary, and a fluency with syntax, particularly those syntactic structures more commonly found in, but certainly not exclusive to, written English.
None of this means, I stress again, that the drilling, segmenting and blending which characterise phonics programmes are mistakes – as we have said, this general approach to reading instruction is modestly better than any other. The point is that you can embrace the value of memorising this material in a pretty systematic way without buying into a rigid and false understanding of the degree to which this memorisation may be applied. The real danger, it seems to me, is in the peripheral ‘teacher talk’, and the messages it sends. More directive phonics programmes lean on the alphabetic thesis to shame teachers into avoiding saying things like ‘you may need to guess this one a bit’, ‘think about what word makes sense here’ or ‘you probably don’t say this word like that, but some people do’. Teachers are encouraged to avoid any language which smacks of vagueness or indeterminacy, and instead adopt choreographed routines which relentlessly support an implicit – and in some cases explicit – conviction that a total, reliable and consistent representational ‘code’ is being transmitted and memorised. The writers of such programmes believe that they are offering a more rigorous and ‘scientific’ approach, whereas in fact, as we have seen, the reverse is true.
As a final remark, I might speculate that the notion of reading as a somewhat precarious and guess-inflected activity would not surprise experts on speech reception. Both sides of the reading wars debate have failed to attend to just how precarious a business learning to listen is: it relies in much the same way as reading does on the brain’s capacity to piece together meaning using a complex, unreliable, and far from synthetic symbolic system. The more radical ‘whole language’ advocates saw listening as ‘effortless’, noticed the parallels with reading, and so supposed that reading could be easier than it actually is. But the proponents of phonics have actually fallen into the same trap, noticing the superficial ease of listening, and so assuming that a relentlessly systematic approach must be required for the supposedly less intuitive and wholly distinct task of reading.
The empirical evidence, I acknowledge, is sparse, but it seems to me it would be extraordinary if the brain did not make use of the same fuzzy-logic strategies, ad-hoc heuristics, and designed redundancies to gain familiarity with the written word as it does for the spoken one. Indeed, the fashion today is to refer to evolutionary psychology in explaining learning, and if we are to keep with this fashion, this would HAVE to be the case. The species’ experience with the written word is a drop in the ocean of the biological evolution of our brains: we simply have not had time to evolve brain structures purposely designed for written material.
Phonics is thus a useful gateway fiction, and no teacher should ignore the value of drilling, but there is a strong case for delaying introduction to any kind of phonic instruction until the child has a broad and strong enough active and passive spoken vocabulary to make it effective. What is more, once children have developed basic automaticity with regard to letter-sound relations, spending too much time on further drilling, particularly of the so-called ‘advanced code’, risks directing resources away from the much more important business of ensuring they have the lexical and syntactic knowledge and know-how required to become strong, long-term readers. Certainly, when they apply themselves in seriousness to the tricky business of spelling (see ‘Spellings Encode Sound’), they would be far better served being taught about the reality of English orthography, rather than asked to struggle with an inelegant and inaccurate theory of why words appear the way they do.