To optimize intervention in children with congenital or early hearing loss, we will develop a measure that reliably predicts spoken language competence. Using early, reliable, and low-impact methods, we will establish when developmental milestones in early speech perception are met and how these milestones predict subsequent spoken language development. Congenital or early-acquired hearing loss carries a significant risk of problems and/or delays in spoken language development and in other areas of development. To mitigate this risk, early hearing rehabilitation is coupled with interventions focused on spoken as well as sign language. The effects of such interventions, however, only become apparent after language assessments have been completed in later stages of development, when critical windows for foundational skills may already have closed. Our present inability to predict outcomes at the individual level early in spoken language acquisition highlights the need for a reliable, low-impact screening tool that can be deployed early to assess children's spoken language development, thus increasing the effectiveness of intervention and therapy by selecting the best path for the individual child. To develop reliable screening tools for delays in spoken language development, we must first establish when infants with hearing loss develop two fundamental skills underlying spoken word learning: phoneme discrimination and lexical processing efficiency. A diagnostic tool for monitoring deaf and hard-of-hearing infants' individual spoken language development furthermore requires that we know the predictors of spoken language competence in these infants, that is, the relationship between early phoneme discrimination and later lexical processing efficiency.
Finally, it requires knowledge of whether the presence of visual speech cues affects spoken language development, that is, to what extent the relationship between phoneme discrimination and lexical processing efficiency in infants with hearing loss is modulated by access to visual speech. The proposed project will use a combination of recently developed, reliable, and low-impact methods to establish if and when infants with hearing loss acquire fundamental skills in spoken language development, so that spoken language delays can be identified and mitigated earlier by ensuring access to the type of support best suited to each child's individual needs and skills. To establish the predictive role of the two key skills, we will investigate native-phoneme discrimination with the hybrid visual habituation procedure and lexical processing efficiency with the gaze-triggered looking-while-listening paradigm in infants with and without hearing loss at several time points. We can thus determine if and when important milestones in early speech perception are met in infants with hearing loss compared to infants with typical hearing. To determine whether phoneme discrimination is a good predictor of subsequent language development in infants with hearing loss, we will analyze the developmental relationship between auditory phoneme discrimination and lexical processing efficiency. Lastly, we will test whether the modality of speech input (auditory-only vs. auditory-visual) affects the developmental relationship between phoneme discrimination skills at test and lexical processing efficiency, to determine the importance of visual speech input for spoken language abilities.
The insights gained from this project will strengthen diagnostic procedures for assessing spoken language development in deaf and hard-of-hearing infants after hearing aid fitting and/or cochlear implantation, for use by audiological centers, cochlear implant teams, and centers for early intervention.