So another part of the dyslexia constellation is the area of audition. We've referred to this a bit in previous weeks. We've talked about the fact that one of the primary issues in dyslexia is a phonological deficit. And this is a sound-based problem, so it does give the hint that maybe other aspects of audition may be impacted. But this is, in fact, a very mixed area of research. The findings are not conclusive, and so we are going to talk a bit about what we are seeing and why this might be. And I think one of the issues here is that audition is actually an incredibly broad area of processing.

And just to be clear as well, when we're talking about audition, we're not necessarily talking about peripheral hearing per se, whether there's something going on with the actual ears themselves. It seems that if there is an issue with audition, it's going to be more at the level of the brain, and how the brain is making sense of the sounds that the ears transmit. And when we think about how the brain processes sound, it actually has lots of different areas doing different types of processing. This makes sense, because speech is just such a complex signal to process. Speech sounds, which as mature adults we hear very clearly, a [SOUND], have so many different elements that discriminate them from each other.

So, for example, one dimension on which speech sounds differ is duration. You can have a short vowel sound, and then you can have the same vowel lengthened, ahh. And those two sounds can create different meanings in a word.

Speech sounds also differ in frequency, sometimes called pitch. This is due to how air actually travels through our vocal tracts. When we're producing a sound, the air comes up through the vocal tract and out through the mouth. And it's actually kind of bouncing around as it goes; it can't go in a straight line, because our vocal tracts aren't purely straight. So the air bounces around, and this causes multiple resonances. Most speech sounds are therefore composed of several pitches at different levels, as a result of the various courses the air is taking. And so we need to be able to compute not just one frequency but multiple frequencies, and then work out the sound's identity from that pattern. You can get a sense of this with old telephones, because old telephones often chop off the highest frequencies. And this can sometimes make it difficult to hear the distinction between sounds such as [SOUND] and [SOUND], which have quite a lot of high-frequency elements to them. By chopping out the higher frequencies, those sounds become difficult to distinguish. So something a telephone can't quite master, our brain is doing all the time.

Another element on which sounds vary is their loudness, or rather their loudness patterns. Whenever we produce a syllable, the sound goes from a very low loudness, or amplitude, to a peak when the vowel comes. A vowel is a very open sound: usually our articulators, our tongues and things, kind of get out of the way and just let the air come through, and this makes vowels more resonant, higher-amplitude sounds. All syllables have a vowel, so typically the peak of the loudness in any syllable is at that vowel point. And people need to be able to distinguish the rate at which the loudness gets to its maximum amplitude, and what that amplitude is.
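To make those last two dimensions concrete, here is a minimal sketch, entirely my own construction rather than anything from the lecture, of how a syllable's loudness envelope and its rise time (how long the loudness takes to climb to its peak) might be measured. The synthetic syllable, the 30 Hz envelope smoothing, and the 10% rise threshold are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 16000                            # sample rate (Hz), an arbitrary choice
t = np.arange(0, 0.4, 1 / fs)         # 400 ms synthetic "syllable"

# Loudness contour: ramps up over ~80 ms (consonant giving way to the vowel),
# peaks, then decays. Noise stands in for the speech carrier.
envelope = np.clip(t / 0.08, 0, 1) * np.exp(-4 * np.maximum(t - 0.08, 0))
syllable = envelope * np.random.randn(t.size)

# Recover the loudness envelope from the waveform: magnitude of the analytic
# (Hilbert) signal, low-pass filtered so only slow amplitude changes remain.
b, a = butter(2, 30 / (fs / 2))       # 30 Hz low-pass for the envelope
smooth_env = filtfilt(b, a, np.abs(hilbert(syllable)))

# Rise time: how long the envelope takes to climb from 10% of its peak
# to the peak itself.
peak = np.argmax(smooth_env)
start = np.argmax(smooth_env[:peak] >= 0.1 * smooth_env[peak])
print(f"envelope rise time: {(peak - start) / fs * 1000:.0f} ms")
```

Amplitude-discrimination tasks of the kind mentioned later in this talk essentially probe how finely listeners can tell envelopes like this apart. And here is a second toy sketch, again mine, for the telephone point: two stand-in fricatives are built from band-limited noise, and a telephone-style 3.4 kHz low-pass is applied to see how much of each sound's energy survives. The frequency bands and the cutoff are likewise invented for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 16000
noise = np.random.randn(fs)  # one second of white noise

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

s_like = bandpass(noise, 4000, 7500)   # energy concentrated up high
f_like = bandpass(noise, 1000, 7500)   # flatter spread of energy

b, a = butter(4, 3400 / (fs / 2))      # telephone-style low-pass
for name, sound in [("s-like", s_like), ("f-like", f_like)]:
    kept = np.sum(filtfilt(b, a, sound) ** 2) / np.sum(sound ** 2)
    print(f"{name}: {kept:.0%} of energy survives the telephone")
# The s-like sound loses nearly all of its distinguishing energy, which is
# why such pairs can become hard to tell apart on an old line.
```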
And when you think about English, it's what we call a stress-based language: some syllables have a greater degree of stress than others. And although what constitutes stress is actually quite complex, one of the most salient factors is that a stressed syllable is simply louder than an unstressed one. So, for example, in the word binoculars, when I'm saying the noc, you can hear that it's a more salient, louder syllable. When I'm saying the following syllables, so [SOUND], the [SOUND] is just a very indistinct syllable, with less loudness. And this is a key gateway when children are learning a language: hearing the syllables and then working out what's within them. Stressed syllables can be more informative.

So there's lots going on here. And then we can add more layers of complexity, because although we can come up with the prototypical qualities of any one speech sound, when we're linking sounds together they actually influence each other, and the speech stream is a bit of a mush when it comes to it. So, for example, if I'm making the 'ar' vowel sound, that sound will have a certain group of frequencies and a certain duration. But if I put an M at the end, so [SOUND], then I'm really altering the quality of the [SOUND] in my preparation to make that [SOUND] sound. The vowel is getting influenced by the nasal quality of the [SOUND]. And yet our brains just make sense of this. They extrapolate; they work out, okay, that's still an 'ar', even though it's being intruded on a bit by the [SOUND]. So it's an epic feat that our brains are performing in distinguishing sounds at all.

And this suggests that there are multiple levels at which the system could go wrong. I think this is the conundrum around the relationship between auditory perception and phonological processing difficulties in dyslexia. Lots and lots of researchers have been looking at this area, but they've each taken a slightly different aspect of this perceptual task. They've perhaps looked at children of different ages, and potentially with different dyslexia profiles, given the heterogeneity of dyslexia. So I often think about the story of the blind men and the elephant when trying to make sense of this. It feels like auditory perception is the elephant, and we researchers are each blindly holding on to a different part of it, thinking that this is maybe the thing that's going wrong. And so people are thinking, well, I'm finding different things to you, when actually we're probably all looking at aspects of the same issue.

So, to summarize where the research is at in this area: when people study different aspects of auditory perception and ask whether these are impacted in dyslexia, most studies find that a subgroup of children do struggle with, say, duration discrimination or loudness discrimination. But it's typically never seen in all the children with dyslexia in a research sample. This suggests to us that auditory perception does sometimes seem to be compromised, but it's not a consistent link. We can't clearly say that the phonological problem is incontrovertibly coming from a lower-level auditory cause. But this doesn't mean that that's not what is actually happening.
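Coming back to the binoculars example for a moment: here is one more toy sketch, my own and with invented numbers, of the simple claim that loudness alone can often pick out the stressed syllable. The per-syllable amplitudes below are made up, with noc deliberately given the most energy, so this illustrates the measure (RMS amplitude) rather than any real data.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000
# Invented amplitudes for the four syllables of "binoculars".
amplitudes = {"bi": 0.3, "noc": 1.0, "u": 0.2, "lars": 0.4}

def rms(x):
    # Root-mean-square amplitude: a crude stand-in for loudness.
    return np.sqrt(np.mean(x ** 2))

loudness = {}
for name, amp in amplitudes.items():
    burst = amp * rng.standard_normal(int(0.15 * fs))  # 150 ms noise burst
    loudness[name] = rms(burst)

stressed = max(loudness, key=loudness.get)
print({k: round(v, 3) for k, v in loudness.items()})
print(f"loudest (candidate stressed) syllable: {stressed}")  # -> noc
```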
If you remember, I talked about a longitudinal study in Finland where they followed children with a genetic risk for dyslexia from the first week of life. By playing the infants Finnish speech syllables and looking at the neural responses to those syllables, they were finding strong correlations with later language development and later reading development. So this suggests that audition, and speech perception in particular, potentially really is quite key to the profile. It may just be that we're not capturing this trajectory very sensitively with our current measures and our piecemeal approach; when audition is such a complex thing, it's hard for us to do otherwise. But hopefully, in a few decades, we're going to be closer to what's really going on here.

So that's quite a long and complex story about how audition may link to phonological processing. But there are real intervention implications here, which is really the motivation. If we can find affected aspects of speech perception, or even of lower-level auditory perception not localized to speech, this could give us another strategy with which to help people with dyslexia: we could work on improving and tuning these early aspects of speech perception. This is actually something we looked at during my doctoral work with Professor Usha Goswami, on aspects of amplitude, or loudness, discrimination. We were looking to see if this was related to reading difficulties, which for many children we found it was. But then we asked, well, if we try to fine-tune amplitude discrimination, can we see knock-on effects on literacy? This was with some older primary school children, where we saw some positive effects, but they weren't as significant as the effects of working on literacy itself. So taking this study as an example, and looking at others, we're not in a position to say that any auditory intervention should take the place of actually working on the literacy. But it could be a useful supplement. And especially when I reflect on the children who are so disaffected with reading, I'm hoping this could ultimately be a useful way to help them: almost a new approach where they're not having to face lots of text, but where we can still see some kind of indirect gains. So hopefully, that's where things are going.

And one other researcher to mention here is Nina Kraus, who's based at Northwestern University in the US. She's doing some fascinating work looking at the underlying causes and mechanisms of auditory perception as they relate to different learning issues, and she's also interested in how auditory training can be used as an intervention. Nina's approach is very much to try to localize the problem to a specific level of the brain, because then we can bootstrap additional knowledge about that brain area to work out what some of the downstream effects might be. So I'd encourage you to also look at her website, which we've included a link to on our course site.