

    How to Assess the Five Components of Reading 

     

    A robust, comprehensive assessment plan can help you ensure that students are developing skills in all five components of reading: phonemic awareness, phonics, fluency, vocabulary, and comprehension.    

    • Universal screeners can identify which students are at risk of developing reading difficulties.  
    • Diagnostic assessments can help determine why a student is having reading problems, pinpointing which component needs intervention so the right supports can be planned. 
    • Progress-monitoring assessments can help you find out whether instruction, support, and interventions are working. Some experts recommend using progress-monitoring assessments as often as every month. 

     

    Assessing Phonemic Awareness  

    The first step in learning to read is becoming aware of how the 44 sounds of English can be put together to make different words. That process, known as phonological awareness, includes these increasingly challenging skills: 

    1. Breaking words into syllables 
    2. Rhyming 
    3. Recognizing when words start or end with the same sounds (alliteration) 
    4. Segmenting onset (first consonant or consonant blend in a word) from the rime (the vowel and final consonants) 
    5. Identifying the first and last sounds in a word 
    6. Blending separate sounds into words 
    7. Analyzing the separate sounds in a word 
    8. Manipulating sounds, such as replacing one sound with another to make a new word 

    Skills 4–8 are known as phonemic awareness. Researchers further subdivide phonemic awareness into synthesis, or the ability to blend phonemes into words, and analysis, or the ability to separate and work with the sounds in a word. Phonemic synthesis and phonemic analysis each contribute to reading ability in different ways.  

     

    Why Assess Phonemic Awareness? 

    Researchers recommend assessing phonological and phonemic awareness skills early because they can predict whether students are likely to have reading difficulties later. 

     

    How Is Phonemic Awareness Assessed? 

    Readers develop phonemic awareness skills gradually. The earliest skills to develop are often phoneme-matching skills, which emerge near the midpoint of the kindergarten school year. The last skill to develop is the ability to swap phonemes to make a new word.  

    Phonemic awareness tests usually involve tasks such as these:

    • matching: Which words have the same sound? (cat/cow/fox) 
    • isolating the first sound: What’s the first sound in “dog”?  
    • isolating the last sound: What’s the last sound in “him”? 
    • isolating the center sound: What’s the middle sound in “wet”? 
    • blending sounds: What word has the sounds /c/  /a/  /t/? 
    • segmenting sounds: What sounds make up the word “pig”? 
    • manipulating first sounds:  Can you say “hit” without the /h/? 
    • manipulating last sounds:  Can you say “storm” without the /m/? 
    • substituting sounds: What’s the new word when you change the /f/ in “fox” to /b/? 

     

    Learn more about the role of phonemic awareness in dyslexia evaluations. 

     

    Assessing Phonics  


    Phonics refers to a student’s ability to pair phonemes (units of sound) with graphemes (letters in written language). When students understand which letters and sounds match, they can decode words.

     

    Why Assess Phonics?    

    The ability to link sounds to letters quickly and efficiently is important because that skill is part of a larger capability known as orthographic mapping. Orthographic mapping is a mental process in which letter–sound associations and spelling patterns are stored in memory so they can be accessed automatically. Orthographic mapping makes words instantly familiar, so they don’t have to be sounded out each time we read them.    

     

    How Are Phonics Skills Assessed?  

    Phonics assessments generally include  

    • letter-naming tasks; 
    • real-word identification tasks; and 
    • nonsense-word (pseudoword) decoding tasks.  

    Research shows that nonsense-word reading may be a better indicator of a child’s decoding ability than real-word identification. That’s because some children who have less developed phonological skills can still recognize familiar sight words (Levlin et al., 2020). As words get longer and texts become more complicated, these students are likely to encounter reading difficulties because the underlying phonological-core deficits have been present from the start (Kilpatrick, 2015).  

    Some assessments, such as the Test of Word Reading Efficiency, Second Edition (TOWRE-2), include timed nonsense-word decoding subtests. It’s important to use timed measures because slow letter–sound decoding points to problems with orthographic mapping.  

     

    Assessing Fluency   


    Fluency is more than just the speed of reading. It also incorporates these skills: 

    • accuracy, which is reading aloud without making mistakes 
    • automaticity, which is quick and effortless decoding 
    • prosody, which is the ability to read expressively, pausing in the right places and changing the pace, tone, or emphasis of oral reading in ways that show you understand what you’re reading 

    Why Assess Fluency?     

    Fluency should increase as young readers gain experience. Good readers steadily build the number of words they can immediately identify. They apply what they’ve learned about letter–sound connections to new words they encounter. When reading is slow and laborious, it can be a sign that automaticity is not developing as it should. For that reason, some reading specialists compare fluency to the canary in the coal mine—it is an early sign of trouble. 

    Prosody has special significance. Studies have shown that prosody is closely related to comprehension. In fact, some studies show that prosody and comprehension influence each other. That is, students who can break the text into logical chunks as they’re reading can more easily store those chunks in memory. Likewise, understanding the text helps a reader know where they should pause or change tone in oral reading (Veenendaal et al., 2016). These skills depend on the ability to decode quickly and accurately (Kang et al., 2019).  

     

    How Is Fluency Assessed?   

    Speed and accuracy are often combined in a words-correct-per-minute (WCPM) score: the number of words a student reads correctly in one minute. Accuracy can also be reported on its own as the percentage of words the student reads correctly out of the total words in the test item. Both figures can be compared to oral reading fluency norms. 
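    As a rough sketch of that calculation (using hypothetical numbers, not drawn from any published norms), the arithmetic can be expressed in a few lines of Python:

```python
def wcpm(words_correct: int, minutes: float) -> float:
    """Words correct per minute: reading speed adjusted for errors."""
    return words_correct / minutes

def accuracy_pct(words_correct: int, total_words: int) -> float:
    """Percentage of words read correctly out of the total words in the passage."""
    return 100 * words_correct / total_words

# Hypothetical example: a student reads a 120-word passage in 1.5 minutes
# and makes 6 errors, so 114 words are read correctly.
print(wcpm(114, 1.5))          # prints 76.0 (words correct per minute)
print(accuracy_pct(114, 120))  # prints 95.0 (percent accuracy)
```

    Either figure could then be compared against grade-level oral reading fluency norms to flag students who fall below expectations.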

     Many reading fluency tests include: 

    • rapid automatized naming tasks, which ask readers to identify numbers, letters, colors, or objects; 
    • word-level fluency tasks, which ask readers to identify real and nonsense words; 
    • sentence-level fluency tasks, which ask readers to recite sentences aloud; and 
    • passage-level fluency tasks, which ask readers to read longer texts. 

    Some researchers have questioned the usefulness of sentence- and passage-level texts, since readers may be able to identify words based on the context in which they appear (Kilpatrick, 2015). The older a student is, the more likely it is that their background knowledge or reading experience will help them identify words. Using word lists may be a more accurate reflection of word-reading skills. 

     

    Download the infographic: Strategies to Improve Word-Reading Skill in Struggling Readers

     

    Assessing Vocabulary     

    Vocabulary refers to the words a person recognizes and understands. Vocabulary begins as an oral skill. Most children learn words by hearing them and later transfer that oral knowledge to written words as they learn to read. A student’s background knowledge helps them make connections between what a word sounds like, what a word looks like, and what a word means.   

     

    Why Assess Vocabulary?     


    Assessing vocabulary is complex, largely because educators and clinicians measure different kinds of vocabulary, each of which plays a role in the development of reading ability and reading comprehension (Elleman & Oslund, 2019). In other words, vocabulary affects both the words people can recognize and the words people can understand. Vocabulary knowledge may also help people learn to read: when readers encounter an ambiguous word, they can ask themselves which pronunciation makes sense in the given context.  

     

    How Is Vocabulary Assessed?  

    Different kinds of vocabulary are assessed directly and indirectly. Classroom teachers assess content area vocabulary with curriculum-based measures such as classroom projects and tests. Sight vocabularies, background knowledge vocabularies, and word-structure vocabularies factor into reading inventories, fluency assessments, comprehension tests, and other formal and informal reading assessments.  

     

    Could your student have dyslexia? This infographic can help you recognize dyslexia characteristics at different ages.

    Or download the Dyslexia Assessment Tool Kit here.

     

    Assessing Comprehension


    The “Simple View of Reading” states that comprehension is the result of two interactive factors (Gough & Tunmer, 1986). Reading experts often show this relationship as an equation: 

    word recognition  x  language comprehension = reading comprehension 

    Recent research highlights several factors that may influence this formula, including working memory, executive function, motivation, engagement, cultural context, and knowledge of reading strategies (Duke & Cartwright, 2021). 
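    The multiplicative form of the equation has a practical consequence: a severe weakness in either factor caps overall comprehension, no matter how strong the other factor is. A brief sketch (treating each factor as a hypothetical 0.0–1.0 proportion of skill, purely for illustration):

```python
def reading_comprehension(word_recognition: float, language_comprehension: float) -> float:
    """Simple View of Reading (Gough & Tunmer, 1986): comprehension is the
    product of the two factors, so a deficit in either one limits the result."""
    return word_recognition * language_comprehension

# Strong decoding and strong language comprehension:
print(round(reading_comprehension(0.9, 0.9), 2))  # prints 0.81
# Strong language comprehension but no decoding skill at all:
print(reading_comprehension(0.0, 0.9))            # prints 0.0
```

    The zero case captures the model’s key prediction: a student who cannot decode gets no meaning from print, however strong their oral language skills are. That is one reason evaluators assess both factors.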

     

    Why Assess Reading Comprehension? 

    People read for meaning. They read to build knowledge and to develop empathy. In many ways, comprehension is the reason we read. When clinicians and educators assess reading comprehension, they do so to find out whether readers understand  

    • what has been explicitly stated;  
    • what can be inferred; and 
    • what conclusions can be drawn from a text. 

    When an assessment reveals a need, you can assess other reading components to find out where specific deficits exist. Then you can design intervention plans and keep track of progress (Farrell et al., 2019). 

     

    How Should Reading Comprehension Be Assessed? 

    Reading comprehension is measured in different ways on different assessments. Students may be asked to 

    • retell a story to see how well they understand key ideas, details, and text structures; 
    • answer multiple-choice or open-ended questions about information and inferences; 
    • fill in missing words (cloze tasks) or choose the right word to match the meaning of a sentence (maze tasks); 
    • identify sentences with similar meanings (sentence verification tasks);
    • explain the meaning of figurative language; 
    • determine whether a social response is appropriate (pragmatic language); 
    • describe an author’s intent or purpose; or 
    • place events in sequential order (Cao et al., 2020). 

    Some measures also explore a reader’s knowledge of grammar, syntax, and vocabulary since these skills can also affect comprehension. 

    Assessing the five components of reading is a complex and ongoing process. That’s because reading is an interwoven set of abilities that develop at different rates. Looking carefully at strengths and needs as they change across each school year can help you pace and individualize instruction and intervention to match each student’s changing needs. 

     


  • Six Key Messages from NASP’s 2022 Position Statement on Identifying Specific Learning Disabilities 

     

    New guidelines promote identification based on the complexity of human behavior and learning. 

     

    In August 2022, the National Association of School Psychologists (NASP) published a revised position statement updating its recommendations for identifying students with specific learning disabilities (SLDs). Here are six important takeaways from the new position statement:  

    • Evaluations to identify students with SLDs should be comprehensive. They should be based on “multiple reliable and valid sources of data” gathered by a qualified, multidisciplinary team.  
    • Methods of identification should aim to reduce or eliminate the “disproportional identification of students of color, English learners, and economically disadvantaged students.”
    • Those involved in eligibility determinations should build frameworks to consider the implications of culture and language, the impacts of trauma, the effects of socioeconomic conditions, the presence of vision or hearing difficulties, and the quality of academic instruction—any of which could preclude identification of SLD.
    • Evaluation should consider the “whole child,” including academic performance trends measured by norm-referenced, criterion-referenced, and curriculum-based tools; cognitive processes; social–emotional skills; and oral language competencies. Observing classroom interactions and gathering information from caregivers and teachers is important.  
    • Evaluators should be aware of biases that could shape their judgment, and they should develop their expertise, consulting with experienced colleagues if needed. 
    • Identification should not be based solely on discrepancies between aptitude and achievement, since that approach can lead to incorrect identification and continued disparities. 

    The 2022 NASP position statement stresses the need for more research on evidence-based instruction, interventions, and identification methods. And it states clearly that multi-tiered support systems should undergird formal evaluations, no matter which identification methods are used.  

     

    To read the full position statement, click here 

     

     

     

    Research and Resources: 

    National Association of School Psychologists. (2022). Identification of students with specific learning disabilities [Position statement]. https://www.nasponline.org/x59975.xml

  •  

    How Do I Choose the Most Accurate Autism Test For My Client? 

     

    Identifying autism shouldn’t depend on a single assessment, no matter how thorough or precise that assessment is. A clear diagnosis—one your client can trust, and you can feel confident delivering—is the result of a comprehensive evaluation, generally involving a team of health professionals. The Centers for Disease Control and Prevention (CDC) recommends that clinicians gather information from several sources, including the parents’ account of their child’s development, diagnostic assessments, and professional observations (CDC, 2022).  

    The diagnostic tools you select are a critical part of a comprehensive evaluation. Here’s a quick primer on the types of assessments many clinicians use in an autism evaluation.  

     

    Developmental screening tools

    Screening tools aren’t used in a diagnostic evaluation. Instead, parents and health care providers use these quick questionnaires and checklists to identify children whose development may be atypical and who may have a higher likelihood of autism.  

     

    Rating scales

    Rating scales are typically used to determine how severe the symptoms or characteristics of a condition are. They often ask parents, the client, or observers to rate different behaviors or characteristics on a scale of progressive intensity.

     

    Interviews 

    Diagnostic interviews can be either structured, where the interviewer asks standardized questions in a pre-determined sequence, or semi-structured, where the interviewer has some flexibility to individualize follow-up questions (Mueller & Segal, 2015).

     

    Observations 

    Observations typically involve structured interactions between an evaluator and the person being evaluated. Interactions may involve objects, movements, or tasks—and the goal of each activity is to provide the child with opportunities to communicate with the observer (Park et al., 2018).   

    Most of the time, an autism evaluation includes more than one type of assessment so clinicians can create a complete picture of a child’s development. Given the range of available tools, how do you go about choosing which ones to use? Here are a few questions to consider as you weigh your options.

     

    Does the Test Identify Autism Accurately?  

    You’ll need a diagnostic tool that is sensitive, which means it correctly identifies autism as defined by the diagnostic criteria in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, Text Revision (DSM-5-TR). The test should also be specific, which means it identifies when behavior is typical. And it should be reliable, which means that repeated administrations produce the same results. In other words, you need a test that researchers and health professionals trust because it correctly identifies the characteristics of autism.  

    The Autism Diagnostic Observation Schedule, Second Edition (ADOS®-2) is considered by many to be the “gold standard” for autism assessments (Brian et al., 2019), though several other measures are also in common use.

     

    What Are the Characteristics of Your Client?   

    The American Psychological Association’s (APA) Guidelines for Psychological Assessment and Evaluation suggest that an evaluator consider the age, sex, ethnicity, and primary language of the client when deciding which assessment to use. The guidelines also recommend that evaluators know the demographics of the people included in the normative sample for the assessment.  

    The scores you obtain from an assessment may not be accurate if your client’s characteristics aren’t represented in the test’s norm group (APA, 2020). If the test relies on tasks that have no cultural familiarity to your client, the results could be similarly skewed. When a mismatched test provides misleading results, clients may not get the services they need—and unfair health disparities can result (Thunt, 2021).   

    Learn more: Why Are So Many Autistic Girls & Women Missing Out on Early Identification?    

    Is the Test Available in the Language You Need?   

    When people take assessments in languages they are still learning, the results may reflect the linguistic demands of the test rather than autism. Autism may be under-identified among students who are English-language learners, possibly because both autistic students and English-language learners sometimes show delays, challenges, or differences in 

    • social communication, 
    • pragmatic speech, 
    • language acquisition, 
    • nonverbal communication, and/or 
    • social behaviors. 

    If an autism assessment tool isn’t available in the language you need, researchers recommend using assessment tools that aren’t dependent on language. It’s also a good idea to interview multiple informants and to provide flexible time requirements during testing (Dennison et al., 2018).

     

    What Other Assessments Will You Need to Complete the Evaluation?   

    One of the goals of an evaluation is determining whether symptoms could be better explained by another health condition. For that reason, your diagnostic team will likely need to determine whether any other conditions are causing symptoms that look like autism characteristics; several common health conditions can complicate an autism evaluation in this way. 

    In addition to looking for comorbidities and overlapping symptoms, many clinicians also assess adaptive behavior skills. Studies show that parents aren’t just concerned about a formal diagnosis, but about the child’s daily functional needs—whatever the diagnosis turns out to be.

    Learn more: Study Highlights the Need to Assess Mental Health in Autistic Youth

    Where Will the Assessment Be Administered?    

    It’s important to consider where and how you’ll administer an assessment. The amount of time you’ll need to complete the evaluation and the cost of an assessment are two other practical aspects to consider.  

    The social distancing requirements of the COVID-19 pandemic spurred a dramatic surge in the use of telehealth assessments. In some instances, such as rural locations where health care and psychological services may be limited, telehealth services can improve access to care (Zwaigenbaum et al., 2021). Even so, not every assessment has been validated for use online (Spain et al., 2022). The ADOS-2 is validated for in-person use, but other auxiliary diagnostic tools are available for use as telehealth assessments. 

     

    What Training Will You Need to Feel Confident Administering, Scoring, and Interpreting the Test?     

    Becoming skilled at giving and interpreting assessments doesn’t happen overnight. It takes experience and ongoing training, especially as assessments are revised and updated in response to new research. As you make decisions about which tests to select, consider the resources available to help train you in the use of each tool. 

    Can you participate in webinars, workshops, or continuing education to better understand test items and how to interpret responses? Do you have access to seasoned assessment professionals to guide you, not only in choosing the best test for the situation but in using the tool to identify autism? Are there mentors at your clinic or in your school who can walk you through the process? 

    It’s important to say that trained, experienced professionals can identify autism without using a formal assessment—but in many settings, unlocking services and supports requires the use of a validated diagnostic tool. The better you understand your client’s characteristics and needs, the better you’ll be at choosing the best test for each person in your care.

     

     

    Further Reading on Autism

     

    Videos and Webinars on Autism

     

     

    Research and Resources:

     

    American Psychological Association. (2020). Guidelines for psychological assessment and evaluation. https://www.apa.org/about/policy/guidelines-psychological-assessment-evaluation.pdf  

    Brian, J. A., Zwaigenbaum, L., & Ip, A. (2019). Standards of diagnostic assessment for autism spectrum disorder. Paediatrics & Child Health, 24(7), 444–460. https://doi.org/10.1093/pch/pxz117 

    Centers for Disease Control and Prevention. (2022, April). Screening and diagnosis of autism spectrum disorder for healthcare providers. https://www.cdc.gov/ncbddd/autism/hcp-screening.html  

    Dennison, A., Hall, S., Leal, J., & Madres, D. (2018). ASD or ELL? Distinguishing differences in patterns of communication and behavior. Contemporary School Psychology, 23, 57–67. https://doi.org/10.1007/s40688-018-0206-x 

    Mueller, A. E., & Segal, D. L. (2015). Structured versus semi-structured versus unstructured interviews. In R. L. Cautin & S. O. Lilienfeld (Eds.), The Encyclopedia of clinical psychology. https://doi.org/10.1002/9781118625392.wbecp069 

    Park, H. S., Yi, S. Y., Yoon, S. A., & Hong, S. B. (2018). Comparison of the Autism Diagnostic Observation Schedule and Childhood Autism Rating Scale in the diagnosis of autism spectrum disorder: A preliminary study. Journal of Child & Adolescent Psychiatry, 29(4), 172–177. https://doi.org/10.5765/jkacap.180015 

    Spain, D., Stewart, G. R., Mason, D., Robinson, J., Capp, S. J., Gillan, N., Ensum, I., & Happé, F. (2022). Autism diagnostic assessments with children, adolescents, and adults prior to and during the COVID-19 pandemic: A cross-sectional survey of professionals. Frontiers in Psychiatry, 13, 789449. https://doi.org/10.3389/fpsyt.2022.789449 

    Thunt. (2021, February). The need for anti-racist psychological assessment. Fordham GSE News. https://gse.news.fordham.edu/blog/2021/02/12/the-need-for-anti-racist-psychological-assessment/ 

    Zwaigenbaum, L., Bishop, S., Stone, W. L., Ibanez, L., Halladay, A., Goldman, S., Kelly, A., Klaiman, C., Lai, M. C., Miller, M., Saulnier, C., Siper, P., Sohl, K., Warren, Z., & Wetherby, A. (2021). Rethinking autism spectrum disorder assessment for children during COVID-19 and beyond. Autism Research, 14(11), 2251–2259. https://doi.org/10.1002/aur.2615 

     

     

  •  

    The U.S. Preventive Services Task Force (USPSTF) recommends screening for anxiety in children and teens ages 8–18. The recommendation follows a systematic review of the potential benefits and harms of universal screening among children and teens. The task force concluded that the benefits of early identification and treatment warranted the recommendation. It comes on the heels of an earlier draft recommendation to screen all teens ages 12–18 for major depressive disorder (MDD).  

     

    Pandemic Fallout 

    Numerous studies point to the pandemic as a primary driver of the rising rates of anxiety among children and teens worldwide. Quarantines, social distancing, fears about illness and death, and remote learning upended normalcy and added stress, researchers found. One research review found that upwards of 40% of caregivers noticed signs of anxiety in their children during the pandemic (Śniadach et al., 2021). 

     

    Social Media Effects

    Other studies have identified social media outlets as a source of anxiety among children, teens, and young adults. Some researchers think spending excessive amounts of time on social media leads to poorer well-being (Riehm et al., 2019). Others say it’s the specific behaviors on social media that trigger anxiety. 

    Behavioral researchers agree that the COVID-19 pandemic disrupted normal social interactions among young people, sparking greater engagement with social media. More social media activity may raise the risks of 

    • peer conflict, 
    • online victimization, 
    • harassment, 
    • discrimination, 
    • overexposure to upsetting content, 
    • anxiety about their social media identity, 
    • unhealthy social comparison, and 
    • participation in destructive social movements (Hamilton et al., 2022). 

     

    Economic Stress 

    As rising inflation and economic woes imperil families in the wake of the pandemic, it’s likely that anxiety, stress, and depression will increase among children and teens. Studies that tracked child and teen mental health following global recessions found that, as economies worsened, mental health problems among children grew (Golberstein et al., 2019). 

     

    Screening for Anxiety and Depression  

    Universal anxiety and depression screenings can take place in healthcare or educational settings. Brief, standardized anxiety assessments can help you identify who’s at risk and rate the severity of a child’s or teen’s symptoms.  

    Revised Children's Manifest Anxiety Scale, Second Edition (RCMAS™-2) can be administered in 10–15 minutes, using elementary-level, yes-no questions. A short form cuts the administration time to 5 minutes. The RCMAS-2 is available in Spanish and English. 

    Reynolds Child Depression Scale, Second Edition (RCDS-2) tracks depression symptoms in children grades 2–6. The RCDS-2 can be completed in 10–15 minutes, with the short form taking just 2–3 minutes. 

    Reynolds Adolescent Depression Scale, Second Edition (RADS-2) takes 5–10 minutes to administer. The short form can be administered in 2–3 minutes. The RADS-2 was standardized with an ethnically diverse sample of U.S. and Canadian teens, stratified by age and sex.  

    Children’s Depression Inventory, Second Edition™ (CDI 2®) features self-reports, teacher reports, and parent reports. The CDI-2 can be administered in 5–15 minutes. The short form takes 5–10 minutes to complete. 

    If you’d like to learn more about how to screen for anxiety and depression in your clinic or school, talk to a WPS Assessment Consultant for expert guidance on quick, simple, validated measures that can help you—and the kids you serve. 

     

     

    Research and Resources:

     

    Golberstein, E., Gonzales, G., & Meara, E. (2019). How do economic downturns affect the mental health of children? Evidence from the National Health Interview Survey. Health Economics, 28(8), 955–970. https://doi.org/10.1002/hec.3885 

    Hamilton, J. L., Nesi, J., & Choukas-Bradley, S. (2022). Reexamining social media and socioemotional well-being among adolescents through the lens of the COVID-19 pandemic: A theoretical review and directions for future research. Perspectives on Psychological Science, 17(3), 662–679. https://doi.org/10.1177/17456916211014189 

    Riehm, K. E., Feder, K. A., Tormohlen, K. N., Crum, R. M., Young, A. S., Green, K. M., Pacek, L. R., La Flair, L. N., & Mojtabai, R. (2019). Associations between time spent using social media and internalizing and externalizing problems among US youth. JAMA Psychiatry, 76(12), 1266–1273. https://doi.org/10.1001/jamapsychiatry.2019.2325 

    Śniadach, J., Szymkowiak, S., Osip, P., & Waszkiewicz, N. (2021). Increased depression and anxiety disorders during the COVID-19 pandemic in children and adolescents: A literature review. Life, 11(11), 1188. https://doi.org/10.3390/life11111188 

    U.S. Preventive Services Task Force. (2022, April 12). Anxiety in children and adolescents: Screening. https://www.uspreventiveservicestaskforce.org/uspstf/draft-recommendation/screening-anxiety-children-adolescents#citation32 

    U.S. Preventive Services Task Force. (2022, April 12). Depression and suicide risk in children and adolescents: Screening. https://www.uspreventiveservicestaskforce.org/uspstf/draft-recommendation/screening-depression-suicide-risk-children-adolescents 

     

     

  •  

    Equitable Evaluations: How to Assess Students with Disabilities

      

    In February 2022, the American Psychological Association (APA) released new Guidelines for Assessment and Intervention with Persons with Disabilities. In it, the APA outlines six equitable assessment-specific recommendations to help clinicians and educators  

    • choose the most appropriate tests for each individual,  
    • provide appropriate accommodations and modifications,  
    • administer tests sensitively, and 
    • interpret the results accurately. 

    Here’s a summary of guidelines 12–18, which relate directly to assessment best practices. 

     

    GUIDELINE 12: Psychologists strive to consider the interactions among disability and other individual and contextual dimensions in determining the breadth of assessment. 

    A clear and accurate diagnosis is rarely the result of a single test, especially when disability-related factors can affect test outcomes. The APA recommends that practitioners assess traditional areas such as cognition, visual perception, motor skills, and personality. To add depth and dimension to those test results, it’s important to collect and analyze information from 

    • an individual’s educational, occupational, medical, social, cultural, and psychological background or records;  
    • interviews with the client and their family, school, health-care providers, and employers; and 
    • behavioral observations conducted in varied settings. 

    It’s also important to understand how much support a person with a disability has and how that support enables them to cope and function. The APA recommends that the clinician “assess various qualities in a person with a disability in context, rather than the disability alone” (emphasis in the original). 

     

    GUIDELINE 13: Psychologists strive to ensure the validity of assessments by considering disability-related factors when selecting assessment tools and evaluating test norms. 

    APA best practice is to select a test that has been standardized with the disability group you’re planning to assess. When a test has been normed without including people with certain disabilities, the results may not be as accurate.   

    Though normative samples increasingly represent our diverse population, it may be difficult to find tests that are a good match for a specific disability. Where that is the case, the APA recommends working with test publishers to find instruments that could provide as much relevant data as possible. It’s also a good idea to consult with colleagues who have more experience with disability evaluations.  

     

    GUIDELINE 14: Psychologists strive to provide appropriate accommodations to individuals with disabilities to optimize meaningful participation in the assessment process. 

    Not all people with disabilities will need accommodations for every assessment. But for those who do, accommodations can lead to more accurate results and more reliable diagnoses. If you’re not sure which accommodations a client needs, it’s a good idea to have an open conversation about what testing has been like for them in the past and what needs they have today.   

    An accommodation could change the format, presentation, or administration of a test. It shouldn’t, however, change the factor you’re measuring. For example, if a student taking a reading test used a device that enabled a much larger font to accommodate a vision problem, the test score would reflect the student’s ability to read. Without the accommodation, the test score might reflect the student’s ability to see. The aim of accommodation is to remove barriers so the test results aren’t skewed by factors related to the disability.  

    Some common accommodations include   

    • changing the format of a test from paper/pencil to computer,  
    • adding extra time,  
    • offering alternate ways for someone to respond to test items,  
    • choosing alternate assessments or subtests,  
    • using assistive technology devices, and  
    • providing distraction-free spaces.  

    The APA suggests that clinicians use accommodations that are: 

    • valid, 
    • appropriate, 
    • responsive to a student’s background, 
    • likely to make a test more accessible, and 
    • feasible in the circumstances. 

     

    GUIDELINE 15: Psychologists strive to validly assess individuals with disabilities by appropriately adapting test administration based on disability-related factors.  

    An assessment is intended to measure certain constructs—but if test results are affected by factors related to a disability, the outcomes aren’t a fair appraisal of the individual’s capabilities. Disabilities can affect people in lots of different areas, including the following: 

    • energy levels 
    • stamina 
    • strength 
    • motor coordination 
    • attention 
    • processing speed 
    • behavior 
    • communication 

    Medication side effects, bathroom habits, and pain can also disrupt a person’s performance on a test intended to measure other constructs. While planning an assessment, it’s a good idea to talk to your client about the best time of day to take a test. It might also be necessary to break the test into several sessions to avoid fatigue and minimize the influence of medication side effects. The aim is to create testing conditions that will lead to an accurate assessment of the desired construct.  

     

    GUIDELINE 16: Psychologists strive to validly interpret assessment results based on consideration of co-occurring factors impacting the performance of individuals with disabilities.  

    As you score and interpret test responses, it’s important to be aware of other health conditions that could be affecting the individual’s performance. For example, both anxiety and depression often occur alongside autism and ADHD. Those mental health conditions could affect test scores on certain assessments, even if accommodations are in place. Sleep disturbance is another health condition that commonly occurs with some disabilities.   

    Tests created according to universal design principles may alleviate some of these concerns. Universal design attempts to eliminate unimportant test features that could influence how a person performs on a test. For example, providing several ways for students to respond to test items could reduce barriers for people with some disabilities, such as vision or hearing loss.   

     

    GUIDELINE 17: Psychologists strive to conduct appropriate multimodal assessments to provide diverse information to support valid interpretation of assessment results. 

    How each person experiences a disability and interacts with the world is unique. Identifying needs, planning effective interventions, and building on strengths and supports requires the evaluator to gather different kinds of data. Integrating qualitative data from interviews and observations with quantitative data from standardized measures leads to a fully informed interpretation. 

    Functional assessments can be particularly useful. They can add information about many things: 

    • social behavior 
    • activities of daily living 
    • behavior patterns at home, school, and work 
    • communication skills 
    • motor skills 
    • academic functioning 

    Some practitioners pair clinical observations with functional assessments. If your observation is not tied to a functional rating scale, the APA recommends that you consider the 

    • purpose of the observation; 
    • specific constructs you want to explore; 
    • method you’ll use to measure the construct; 
    • amount of time you’ll need; 
    • best settings in which to observe the construct; 
    • people who should or shouldn’t be present during the observation; 
    • other factors that could disrupt or hinder demonstration of the construct; 
    • factors that could affect how the individual performs, including disability factors; and 
    • how the assessment data will be used. 

     

    GUIDELINE 18: Psychologists strive for accurate interpretation of assessment data by addressing personal biases and assumptions regarding individuals with disabilities. 

    Personal biases can interfere with the ability to accurately interpret assessments. Since many biases are unconscious, it can take effort to identify assumptions, stereotypes, and other kinds of automated thinking—and change them.  

    The APA recommends these five concrete steps for minimizing bias in assessments of people with disabilities: 

    • Form your professional judgments and decisions only after you’ve completed a comprehensive evaluation.  
    • Identify the biases you may have concerning disabilities.  
    • Consider more than just your initial hypotheses about the issues your client is experiencing. It’s important to test competing explanations to avoid confirmation bias. 
    • Build your background knowledge about the lived experience of people with disabilities. 
    • Include strengths and needs in your evaluation.  

    Close to 61 million adults in the U.S. are living with a disability. For each individual, disability is just one aspect of a complex intersectional identity. Eliminating the barriers to accurate assessment is an important step toward ensuring everyone has equal access to health care and educational services.  

     

     

    Research and Resources:

     

    American Psychological Association. (2022). Guidelines for assessment and intervention with persons with disabilities. https://www.apa.org/about/policy/guidelines-assessment-intervention-disabilities.pdf 

    Centers for Disease Control and Prevention. (2020, September 16). Disability impacts all of us. https://www.cdc.gov/ncbddd/disabilityandhealth/infographic-disability-impacts-all.html 

     

     

  •  

    Social and emotional skills are key indicators of healthy growth—so much so that the Centers for Disease Control and Prevention’s latest developmental checklists include 20 new social and emotional milestones—and that’s just from birth through age 5 (Zubler et al., 2022). Decades of research have shown that strong social and emotional skills predict success in school, at work, and in personal relationships across a lifetime.   

    With so much riding on this complex set of abilities, should students be screened for social and emotional competence as a matter of routine, just as they are for vision and hearing? For many experts, the answer is an emphatic “yes.” Here’s why.  

     

    First, a look at what constitutes social and emotional competence. 

    The Collaborative for Academic, Social, and Emotional Learning (CASEL) includes five broad competencies in its social and emotional learning (SEL) framework:  

    • self-awareness;
    • self-management; 
    • social awareness;  
    • relationship skills; and 
    • responsible decision-making. 

    These broad categories encompass many specific skills, including:  

    • thinking critically; 
    • managing emotions; 
    • solving problems;  
    • setting goals; 
    • acting with integrity; 
    • showing empathy; and 
    • standing up for the rights of others (CASEL, 2022). 

    These capabilities usually begin to develop at home. But for most students, school is the workshop where they’re honed. That’s why so many experts recommend social and emotional learning in the classroom. 

     

    What’s the case for universal screening?  

    Evidence suggests that the best place to begin a structured, intentional SEL program is with a reliable assessment. Here’s a summary of the benefits. 

     

    Screening identifies students who may need SEL support for academic success.   

    Having strong social and emotional skills is associated with higher academic achievement (Franco et al., 2017; Kim & Shin, 2021). Schoolwide assessments can help educators, families, and students celebrate strengths. They can also indicate where supports are needed—before unproductive patterns lead to academic delays. In short, screening counteracts the wait-to-fail model.  

     

    It’s also a first step toward addressing mental health and well-being.  

    Many studies have linked social and emotional skills to well-being. One example: A long-term study tracked Canadian students from age 5 through age 14. Roughly 40% of those students began school with social and emotional vulnerabilities linked to early-onset mental health conditions. Addressing social and emotional competence early reduces the risks of depression and anxiety later on (Thomson et al., 2019).  

    Universal screening can help clinicians, school psychologists, and families understand which coping skills to strengthen. That’s especially important for students who have experienced adverse childhood events, trauma, and other risk factors. 

     

    Screening is a chance to build authentic relationships with parents and families.   

    The most effective, long-lasting SEL programs are those that view “students, families, and communities as co-creators” of the program (CASEL, 2021). Many screening tools rely on scales completed by parents. But data collection and data sharing are a tiny part of a much larger opportunity. Districts can engage parents in identifying 

    • the most pressing areas of need; 
    • approaches that feel safe and welcoming;  
    • ways that cultural identities can be leveraged to build upon strengths; and  
    • resources available in the community.  

    As SEL initiatives produce results, parents can gauge success and offer feedback.  

     

    It’s a moment to invite students to the table to change their learning environments.  

    Screening is a first step to opening conversations with students about how they perceive school culture, as well as their own strengths, needs, and priorities. Studies have shown that when students are involved in data collection, data sharing, and decision-making, they build important competencies. They practice communicating. They use critical thinking skills as assistant researchers. They develop agency and self-efficacy. In short, they achieve some of the program’s aims by helping to shape the program (Halliday et al., 2019). 

     

    Assessment reveals whether an SEL program is working.  

    Screeners create a baseline. If you’ve chosen an assessment that’s sensitive to change, the results can show you whether your SEL program is achieving what you want it to achieve.   

    Compare assessment data to other measures such as behavior referrals, suspensions, and academic performance, and you can see the effects of social and emotional learning. Narrow your focus, and SEL data can give you a clear sense of whether interventions are working for individual students (National Practitioner Advisory Group, 2019).  

     

    So, how do you choose a screening tool?  

    The best SEL assessments are evidence-based, rigorous, and targeted to the competencies you want to develop. Here’s one to consider:  

    Right now, WPS is partnering with Thomas Schanding, PhD, Associate Professor at the University of British Columbia, to pilot the Social–Emotional Learning Skills Inventory Screener (SELSI), a universal screener that measures skills before and after interventions. It’s based on the CASEL five-competency framework.  

     

    Getting Started 

    SEL programs are underway in preschools in all 50 states and in K-12 school districts in 20 states. Researchers and stakeholders continue to talk about what works and what doesn’t. As the conversations continue, it’s important to keep sight of what research has already taught us: Learning to understand ourselves, interact with others, and make good decisions are competencies with lifelong impacts. Screening alone won’t equip every student with SEL skills. But it’s a very good place to start. 

     

    Want to learn more about the power of SEL screening? Take a look at what our researchers say.   

     

     

    Research and Resources:

     

    CASEL. (n.d.). What is the CASEL framework? https://casel.org/fundamentals-of-sel/what-is-the-casel-framework/ 

    CASEL. (2021, November). 2011–2021: 10 years of social and emotional learning in U.S. school districts. 

    CASEL. (2022). SEL policy at the state level. https://casel.org/systemic-implementation/sel-policy-at-the-state-level/ 

    Franco, M., Beja, M. J., Candeias, A., & Santos, N. (2017). Emotion understanding, social competence and school achievement in children from primary school in Portugal. Frontiers in Psychology, 8, 1376. https://doi.org/10.3389/fpsyg.2017.01376 

    Halliday, A. J., Kern, M. L., Garrett, D. K., & Turnbull, D. K. (2019). The student voice in well-being: A case study of participatory action research in positive education. Educational Action Research, 27(2), 173–196. https://doi.org/10.1080/09650792.2018.1436079 

    Kim, S. H., & Shin, S. (2021). Social-emotional competence and academic achievement of nursing students: A canonical correlation analysis. International Journal of Environmental Research and Public Health, 18(4), 1752. https://doi.org/10.3390/ijerph18041752 

    Lawson, G. M., McKenzie, M. E., Becker, K. D., Selby, L., & Hoover, S. A. (2019). The core components of evidence-based social emotional learning programs. Prevention Science, 20(4), 457–467. https://doi.org/10.1007/s11121-018-0953-y 

    National Practitioner Advisory Group. (2019). Making SEL assessment work: Ten practitioner beliefs. Collaborative for Academic, Social, and Emotional Learning and the American Institutes for Research. https://casel.s3.us-east-2.amazonaws.com/making-SEL-assessment-work.pdf 

    Thomson, K. C., Richardson, C. G., Gadermann, A. M., Emerson, S. D., Shoveller, J., & Guhn, M. (2019). Association of childhood social-emotional functioning profiles at school entry with early-onset mental health conditions. JAMA Network Open, 2(1), e186694. 

    Zubler, J. M., Wiggins, L. D., Macias, M. M., Whitaker, T. M., Shaw, J. S., Squires, J. K., Pajek, J. A., Wolf, R. B., Slaughter, K. S., Broughton, A. S., Gerndt, K. L., Mlodoch, B. J., & Lipkin, P. H. (2022). Evidence-informed milestones for developmental surveillance tools. Pediatrics, 149(3). https://publications.aap.org/pediatrics/article/149/3/e2021052138/184748/Evidence-Informed-Milestones-for-Developmental 

     

     

  •  

    How to Assess Six Essential Communication Skills

     

    Pragmatic language is an interwoven set of linguistic skills people use to communicate in social contexts. It encompasses social, emotional, verbal, nonverbal, and other abilities. When people have trouble with pragmatic language, they may not understand other people’s intentions, or they may misread clues to how people are feeling. They may not fully grasp the unwritten rules that govern one-on-one or group conversations. Social and academic problems often follow.   

    Pragmatic language differences have been linked to autism, ADHD, developmental language disorder, social communication disorder, and mental health difficulties (Andrés-Roqueta & Katsos, 2020; Çiray et al., 2022). For that reason, clinicians and educators often look for differences and deficits when they’re conducting a diagnostic evaluation.  

    As you might expect, assessing pragmatic language is complicated. Evaluators often measure individual abilities such as

    • starting, maintaining, and ending conversations; 
    • connecting with others through eye contact; 
    • communicating with body language, facial expressions, and gestures; 
    • giving information in a coherent and logical sequence; 
    • taking turns speaking; and 
    • understanding and signaling intent by varying speech rhythms and tone of voice. 

    As powerful as informal assessments can be, evaluators often include formal, direct assessments as part of a comprehensive evaluation. Research shows that combining formal and informal assessments is useful in designing targeted intervention plans (Wong et al., 2021). 

    When evaluators look at pragmatic skills, they’re generally assessing two primary domains, plus a bridge that connects the two domains: 

    • instrumental intent, which is the ability to recognize and communicate information
    • affective intent, which is the ability to recognize and communicate emotion
    • paralinguistic skills, which include the ability to decode and to use nonverbal signals that add meaning to people’s interactions 

     

    How to Assess Communication Skills: Six Essentials

    Let’s look at six constructs measured in formal assessments—constructs that will help you determine where to focus interventions for a particular client. 

    1. Instrumental performance appraisal. This skillset governs your awareness of social routines. It’s how people judge whether someone is communicating in socially appropriate ways. Can you recognize, for example, when someone is responding to gratitude or making requests according to social norms? Often, test instruments ask people to choose between appropriate and inappropriate responses to social situations.
    2. Social context appraisal. These skills involve correctly judging what other people’s feelings and intentions are. Social contexts are dynamic: People need to be able to notice changes in the setting, recognize when a conflict is arising, infer what other people are thinking, understand other people’s intent, show flexibility when routines change, and interpret irony, idioms, and other variables. 
    3. Paralinguistic decoding. This ability involves accurately reading “micro-expressions” that communicate meaning beyond what a person is saying. In fact, these nonverbal cues can help people understand when someone feels something completely contradictory to their verbal message. Well-developed paralinguistic skills help you respond appropriately to what people say—as well as what they don’t say.
    4. Instrumental performance. This skillset affects your ability to communicate information according to social norms. Can you, for example, introduce someone politely? Can you ask for help, directions, or permission in socially appropriate ways? When an evaluator assesses instrumental performance, they’re looking at the ability to adequately and appropriately communicate as a means to an end.
    5. Affective expression. This set of abilities controls how you express emotion as you’re communicating. Many everyday social situations require people to convey emotion. Someone might need to express regret, empathy, or gratitude in certain contexts. Affective expression is useful when you compliment, encourage, or support a friend or co-worker. These skills are vital to building and maintaining relationships.
    6. Paralinguistic signals. This group of skills governs the use of nonverbal forms of communication. It includes using facial expressions, gestures, and changes in the speed, rhythm, and tone of your voice to add meaning to what you’re saying. 

    One assessment that allows you to measure all six constructs is the Clinical Assessment of Pragmatics (CAPs™), which can be used with clients ages 7–18. The CAPs is video based. Its primary advantage is that it presents complex real-life social scenarios. It asks people to describe what’s happening in each interaction and explain how they’d respond.   

    Research shows that real-life social scenarios, which can involve lots of sensory stimulation and overlapping interactions, can test the limits of comprehension in autistic people (Kotila et al., 2020). Authentic, naturalistic interactions allow evaluators to track verbal and nonverbal responses and measure them against social norms. 

    Using formal and informal assessments to measure all six constructs can give you the data you need to identify your clients’ pragmatic skills, so you can tailor interventions to build their strengths and meet their specific pragmatic language needs. 

    For more on pragmatic skills assessment, view Unraveling the Complexities of Pragmatics. To learn more about the Clinical Assessment of Pragmatics (CAPs) and other pragmatic language assessments, visit our website or speak with a WPS assessment consultant. 

     

     


    Research and Resources:

     

    Andrés-Roqueta, C., & Katsos, N. (2020). A distinction between linguistic and social pragmatics helps the precise characterization of pragmatic challenges in children with autism spectrum disorders and developmental language disorder. Journal of Speech, Language, and Hearing Research, 63(5), 1494–1508. https://doi.org/10.1044/2020_JSLHR-19-00263  

    Çiray, R. O., Özyurt, G., Turan, S., Karagöz, E., Ermiş, Ç., Öztürk, Y., & Akay, A. (2022). The association between pragmatic language impairment, social cognition and emotion regulation skills in adolescents with ADHD. Nordic Journal of Psychiatry, 76(2), 89–95. https://doi.org/10.1080/08039488.2021.1938211 

    Kotila, A., Hyvärinen, A., Mäkinen, L., Leinonen, E., Hurtig, T., Ebeling, H., Korhonen, V., Kiviniemi, V. J., & Loukusa, S. (2020). Processing of pragmatic communication in ASD: A video-based brain imaging study. Scientific Reports, 10(1), 21739. https://doi.org/10.1038/s41598-020-78874-2 

    Wong, K., Lee, K., Tsze, S., Yu, W. S., Ng, I. H., Tong, M., & Law, T. (2021). Comparing early pragmatics in typically developing children and children with neurodevelopmental disorders. Journal of Autism and Developmental Disorders, 1–15. https://doi.org/10.1007/s10803-021-05261-9 

     

     

  •  

    On March 18, 2022, the American Psychiatric Association (APA) updated the diagnostic criteria for autism spectrum disorder in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, Text Revision (DSM-5-TR). Under criterion A, which describes differences in social communication and social interaction, the phrase “as manifested by the following” has been revised to read “as manifested by all of the following.” 

    The APA’s DSM-5-TR work group said its intent was to improve clarity and “maintain a high diagnostic threshold.” It could have been inferred, based on the previous wording, that the presence of any of the criteria would have met the diagnostic threshold (APA, 2022). The new wording aims to prevent that misreading. 

    The new phrasing means that to meet the diagnostic threshold, someone would have to experience persistent differences in all of these areas: 

    • social–emotional reciprocity  
    • nonverbal communication behaviors used in social interactions  
    • developing, maintaining, and understanding relationships  

    In addition to these social communication and interaction differences, at least two of four types of restricted or repetitive behaviors must also be present to meet the DSM-5-TR diagnostic criteria. Restricted or repetitive behaviors involve: 


    • patterns of movement or speech
    • sameness of routines or rituals 
    • special, highly focused interests 
    • strong responses to sensations in the environment 

    The text revision also contains another change: The DSM-5 asked clinicians to specify whether a person with an autism diagnosis also had “another neurodevelopmental, mental, or behavioral disorder.” The DSM-5-TR asks whether the autism diagnosis is associated with “another neurodevelopmental, mental, or behavioral problem.” Broadening this specifier allows clinicians to include information about associated problems that affect well-being but that may not be classified as disorders.  

    Neither of these revisions is expected to have a major impact on the number of people diagnosed with autism.  

     


     

    Research and Resources:

     

    American Psychiatric Association. (2022). Autism Spectrum Disorder. https://psychiatry.org/File%20Library/Psychiatrists/Practice/DSM/DSM-5-TR/APA-DSM5TR-AutismSpectrumDisorder.pdf  

    Centers for Disease Control and Prevention. (2022, April 6). Diagnostic Criteria. https://www.cdc.gov/ncbddd/autism/hcp-dsm.html 

     

     

  •  

    The best outcomes for clients and students happen when parents, caregivers, educators, and clinicians collaborate. But cultivating effective partnerships can be difficult. It’s hard to establish trust and foster open communication when people come to the table stressed, rushed, and worried. How can we prepare for parent and caregiver interactions to make it easier for everyone to fully engage in the evaluation and intervention process?  

     

    Be aware of the barriers that can get in the way of connection-building 

    Researchers point to several factors that can make it harder for parents, families, and caregivers to engage and connect with healthcare providers and educators. Some of these factors are: 

    • cultural differences 
    • language differences
    • mistrust of health care providers or educators
    • general life stress
    • stigma surrounding the learning problem or health condition
    • lack of understanding about how to navigate education or health care systems
    • anxiety over the cost, time, and skills involved in helping their child
    • power imbalances created by perceived differences in income, education, race, or immigration status

    Communication barriers like these can increase health and education inequities (Butler, 2021). Shrinking these barriers takes time and patience—people often bring years of negative experiences with them to a meeting. A good starting place is a warm welcome. It may also help to arrange seating so you and your team aren’t on one side of a table with the family on the other, which can feel oppositional. And it’s always a good practice to explicitly invite parents to ask questions and share thoughts when they’re ready.  

     

    Explain the why, how, and when for assessments

    Whether it’s universal screening or a diagnostic evaluation, understanding the purpose of an assessment helps parents and caregivers get on board. It’s also important to explain the services and benefits that may flow from an accurate diagnosis. When parents and caregivers can clearly see how an evaluation or assessment will help their child, it can decrease resistance and boost buy-in.  

    Many parents also want to know exactly how an evaluation will unfold. When roles and responsibilities are clearly defined and parents understand the steps in the process, it can ease anxieties and establish clear expectations.  

    A detailed schedule is another trust-builder. If a date needs to be changed, let parents know early. Set new dates that work with everyone’s schedule to minimize inconvenience.  

     

    Consider word choices 

    You may need to speak with parents about sensitive topics. It’s a good practice to think about how your word choices could shape the experience. Some parents experience shame or guilt around their child’s difficulties. Others may feel that the health system or school system has failed them. It’s especially important, therefore, to use inclusive, non-stigmatizing language to avoid more tension (Sim, 2021).  

    It’s also important to avoid ableist language (“She suffers from...,” “normal vs. abnormal,” “high-functioning vs. low-functioning”). You can find a primer in the National Center on Disability and Journalism’s Disability Language Style Guide. The American Psychological Association offers a detailed Inclusive Language Guide to help people avoid terms that are harmful to people in marginalized communities.  

     

    Build trust with trauma-informed practices  

    Early in your relationship with a new family or client, you may not know whether trauma has affected their lives. For that reason, it’s worth familiarizing yourself with trauma-sensitive communication strategies. Here are a few recommended practices:  

    • Protect the privacy of personally identifiable information. 
    • Ask parents how they would like to communicate with you, including which methods and which times of day are best. 
    • Provide documents and information in the preferred language and in a variety of accessible formats.
    • Communicate in a direct, sensitive, and respectful manner. 
    • Establish safe feedback methods so families can share what is and isn’t working for them.  
    • Build choice into the process for students and families.

     

    Practice cultural humility

    When you approach parent and caregiver interactions from a learning standpoint, asking questions about individual preferences, backgrounds, and experiences, you’re more likely to create meaningful connections (Stubbe, 2020). 

    Educate yourself about the culture of your clients—looking not for stereotypes but for social norms that might be part of the dynamic. The knowledge you gain may help you adapt your communication style or be aware of different perspectives on health conditions and interventions (Maul, 2016). 

     

    When you meet, start with positives 

    Studies show that parents want to hear more about what you see as their child’s strengths (Azad, 2018). Many assessments now identify abilities as well as deficits, so use these positives to set an uplifting tone. Emphasizing strengths can also help to reduce any stigma associated with learning problems or health conditions.  

    One additional benefit: Focusing on positive attributes sends a clear message that you see a person, not a problem or diagnosis. In some studies involving early childhood educators working with immigrant children, collaboration with parents was problematic when teachers saw differences as deficits instead of focusing on the ways immigrant children enrich the classroom (Licardo, 2021).   

     

    Use clear, accessible language 

    When you explain why a further evaluation is needed or inform parents of a finding, use terms that everyone is likely to understand. The information you’re presenting may be hard for people to fully take in, especially if the diagnosis comes as a shock. You’ll make it easier if you avoid educational or medical jargon, limit acronyms, and emphasize key points.  

    Because people respond differently, it’s a good idea to leave some space for people to process their feelings and ask questions if there’s something they haven’t understood. You may want to ask questions to be sure everyone has grasped key points.  

     

    Listen actively 

    Research shows that healthcare providers spend much more time explaining, asking questions, advocating, and negotiating roles than parents or caregivers do during visits (Giambra 2018). The same dynamic happens in educational settings (Gwernan-Jones 2015).  

    Yet research shows that people value health care professionals who genuinely listen to them (Washington 2019). Active listening cultivates empathy, which leads to better care and better outcomes (Haley 2017).  

    To create an environment where people feel heard, you can: 

    • ask open-ended questions (ones that can’t be answered with a simple yes or no) 
    • explicitly invite questions, either at the conference or afterward 
    • consider working with a medical interpreter, health advocate, or intermediary when there are language differences 
    • notice body language and other non-verbal messages 
    • ask clarifying questions if you’re not sure what you’re hearing
    • avoid thinking about your response while someone is speaking
    • repeat in your own words what you believe they’ve said
    • be aware of differences in communication brought about by personality, cultural differences, and neurodivergence

     

    Include actionable next steps 

    Assessments, evaluations, and diagnoses are starting points. One of the most valuable parts of a parent or caregiver conference is the opportunity to plan next steps together—to help people see a path forward and envision their role in creating it. As you collaborate on an intervention or treatment plan, you can: 

    • invite parents to help set goals 
    • list next steps and make sure it’s clear who’s responsible for each action
    • schedule follow-up meetings together
    • provide information about support groups and organizations
    • make sure parents and caregivers know how to reach you

    Many clinicians and educators also give families functional homework focused on developing specific skills. That’s not just because home-based interventions can be highly effective. It’s also because numerous studies have found that when parents collaborate with providers and participate directly in interventions, their own quality of life improves (Musetti 2021). 

     

    Take care of yourself, too 

    Parent and caregiver conferences aren’t always easy. These interactions can be especially hard if you: 

    • are new and building up your experience and confidence 
    • have recently experienced hardship, illness, or trauma yourself 
    • are advocating for someone with a similar condition in your personal life 
    • don’t feel supported in your work environment 
    • don’t personally enjoy face-to-face interactions

    It’s okay to spend some time thinking about what you need to build your comfort and confidence during parent and caregiver communications. Mentorship, shadowing experiences, role-playing, or scripting may help. Sharing responsibilities with other evaluation team members is also a good idea. And training with assessment providers can boost your confidence in your ability to deliver assessments and report results.  

    As with any skill, it helps to adopt a growth mentality. With deliberate practice and patience, you can develop the communication skills you need to build effective partnerships. 

    WPS welcomes every opportunity to support you in your professional journey.  

     

     

    Research and Resources:

     

    Azad, G., Wolk, C. B., & Mandell, D. S. (2018). Ideal interactions: Perspectives of parents and teachers of children with autism spectrum disorder. School Community Journal, 28(2), 63–84. 

    Butler, S.M., & Sheriff, N. (2021, February). How poor communication exacerbates health inequities—and what to do about it. https://www.brookings.edu/research/how-poor-communication-exacerbates-health-inequities-and-what-to-do-about-it/ 

    Giambra, B. K., Haas, S. M., Britto, M. T., & Lipstein, E. A. (2018). Exploration of parent-provider communication during clinic visits for children with chronic conditions. Journal of Pediatric Health Care, 32(1), 21–28. https://doi.org/10.1016/j.pedhc.2017.06.005 

    Gwernan-Jones, R., Moore, D.A., Garside, R., Richardson, M., Thompson-Coon, J., Rogers, M., Cooper, P., Stein, K. and Ford, T. (2015), ADHD, parent perspectives and parent–teacher relationships: Grounds for conflict. British Journal of Special Education, 42: 279-300. https://doi.org/10.1111/1467-8578.12087 

    Haley, B., Heo, S., Wright, P., Barone, C., Rettiganti, M.R., Anders, M. (2017). Relationships among active listening, self-awareness, empathy, and patient-centered care in associate and baccalaureate degree nursing students. Nursing Plus Open, 6, 11-16. https://www.sciencedirect.com/science/article/pii/S2352900816300231?via%3Dihub 

    Licardo, M. & Oliveira Leite, L. (2022) Collaboration with immigrant parents in early childhood education in Slovenia: How important are environmental conditions and skills of teachers? Cogent Education, 9:1, DOI: 10.1080/2331186X.2022.2034392 

    Maul, A. & Menschner, C. (2016). Key ingredients for successful trauma-informed care implementation. Center for Healthcare Strategies, Inc. https://www.samhsa.gov/sites/default/files/programs_campaigns/childrens_mental_health/atc-whitepaper-040616.pdf 

    McIntyre, L. L., & Brown, M. (2018). Examining the utilization and usefulness of social support for mothers with young children with autism spectrum disorder. Journal of Intellectual & Developmental Disability, 43(1), 93–101. https://doi.org/10.3109/13668250.2016.1262534 

    Musetti, A., Manari, T., Dioni, B., Raffin, C., Bravo, G., Mariani, R., Esposito, G., Dimitriou, D., Plazzi, G., Franceschini, C., & Corsano, P. (2021). Parental quality of life and involvement in intervention for children or adolescents with autism spectrum disorders: A systematic review. Journal of Personalized Medicine, 11(9), 894. https://doi.org/10.3390/jpm11090894 

    Russell, G., Kapp, S. K., Elliott, D., Elphick, C., Gwernan-Jones, R., & Owens, C. (2019). Mapping the autistic advantage from the accounts of adults diagnosed with autism: A qualitative study. Autism in Adulthood: Challenges and Management, 1(2), 124–133. https://doi.org/10.1089/aut.2018.0035 

    Sim, W. H., Toumbourou, J. W., Clancy, E. M., Westrupp, E. M., Benstead, M. L., & Yap, M. (2021). Strategies to increase uptake of parent education programs in preschool and school settings to improve child outcomes: A Delphi study. International Journal of Environmental Research and Public Health, 18(7), 3524. https://doi.org/10.3390/ijerph18073524 

    Smith-Young, J., Chafe, R., Audas, R., & Gustafson, D. L. (2022). "I know how to advocate": Parents' experiences in advocating for children and youth diagnosed with autism spectrum disorder. Health Services Insights, 15, 11786329221078803. https://doi.org/10.1177/11786329221078803 

    Stahmer, A. C., Vejnoska, S., Iadarola, S., Straiton, D., Segovia, F. R., Luelmo, P., Morgan, E. H., Lee, H. S., Javed, A., Bronstein, B., Hochheimer, S., Cho, E., Aranbarri, A., Mandell, D., Hassrick, E. M., Smith, T., & Kasari, C. (2019). Caregiver voices: Cross-cultural input on improving access to autism services. Journal of Racial and Ethnic Health Disparities, 6(4), 752–773. https://doi.org/10.1007/s40615-019-00575-y 

    Stubbe D. E. (2020). Practicing cultural competence and cultural humility in the care of diverse patients. Focus (American Psychiatric Publishing), 18(1), 49–51. https://doi.org/10.1176/appi.focus.20190041 

    Warren, N., Eatchel, B., Kirby, A. V., Diener, M., Wright, C., & D'Astous, V. (2021). Parent-identified strengths of autistic youth. Autism: The International Journal of Research and Practice, 25(1), 79–89. https://doi.org/10.1177/1362361320945556 

    Washington, K. T., Craig, K. W., Parker Oliver, D., Ruggeri, J. S., Brunk, S. R., Goldstein, A. K., & Demiris, G. (2019). Family caregivers' perspectives on communication with cancer care providers. Journal of Psychosocial Oncology, 37(6), 777–790. https://doi.org/10.1080/07347332.2019.1624674 

     

     

  •  

    Assessments are an integral part of a helping professional’s work. The results obtained are among the critical tools used in making high-stakes decisions, which can have a dramatic impact on the lives of those the helping professional serves. Thus, professionals rely on assessments to produce results that they can trust. There are many ways to measure whether a test is accurate. Most assessments include evaluations within the test manual describing multiple measures of validity and reliability. It is important for professionals to understand these results so that they can identify which tests to confidently add to their battery and for what purpose.

    This paper begins with a review of reliability and validity before shifting focus to sensitivity and specificity as a measure of validity. The aim is to inform the reader as to what sensitivity and specificity measure, how they are determined, and what factors should be considered when evaluating these results in assessment manuals. Various characteristics of the sample can significantly impact the results of validity studies as well as what cut points are used when interpreting scores. The Comprehensive Assessment of Spoken Language, Second Edition (CASL®-2) data were analyzed to demonstrate this. The results provide further support for the validity and utility of the CASL-2 as a diagnostic tool when measuring language abilities, while also remaining sensitive to those skills exhibited by individuals with mild symptoms of language impairment. 

     

    Reliability & Validity 

    Within the field of assessment, reliability refers to the consistency of a measure, and there are different types of reliability. If you gave the same test to the same person a few different times within a short period without changing any other features, would you get the same results? A common example is a scale. If you step on a scale to measure your weight, and then you step on it again in 30 minutes, the numbers should be relatively the same. Obtaining multiple results that correspond (within an expected range) provides evidence that your scale is consistent, or reliable. Another measure of accuracy is the validity of a test, which is the extent to which a test measures what it claims. Returning to the scale example, suppose you just had your weight taken at the doctor’s office, where you know the measurement is accurate. If your bathroom scale gives the same number as the doctor’s scale, your scale at home is valid.

    A measure can be valid, in that it is measuring what you intended to measure, but not reliable. Your scale at home reads the same weight as your doctor’s office visit. After a few minutes you step on your scale again, but this time your weight is 10 pounds more than what the doctor’s scale said. Your scale is valid (it is still measuring weight), but it is not reliable (you did not gain 10 pounds in 10 minutes). Alternatively, a measure can be reliable, giving you consistent results every time, but not valid. Your scale at home is giving you the same result every time, but it is showing your weight is 10 pounds more than the doctor’s office scale said just a short while ago. This time, the scale is consistent (giving the same results each time), but it is not valid (the weight is not accurate).

     

    Sensitivity & Specificity

    Publishers include multiple measures of validity and reliability within the manual as part of the psychometric properties of the test. These are intended to help the reader understand the full scope of the test’s function and to support the use of the test. Professionals should review these studies prior to use to confirm that it is both reliable AND valid for their intended purpose. This is particularly important when using a test for diagnostic purposes.

    Sensitivity and specificity are measures of validity. Sensitivity refers to a test’s ability to identify the presence of an actual deficit, condition, or disorder—a true positive result. Specificity refers to a test’s ability to identify the absence of an actual deficit, condition, or disorder—a true negative result. Figure 1 displays the possible test outcomes. Test results are considered “accurate” when the true positives and true negatives are both high, while the false positives and false negatives are both low. In terms of how sensitivity and specificity are calculated, sensitivity is the proportion of actual positives identified [True Positives/(True Positives + False Negatives)], and specificity is the proportion of actual negatives identified [True Negatives/(True Negatives + False Positives)].
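    In code form, the two formulas above reduce to one-line functions. This Python sketch is purely illustrative (the function names and the example counts are made up, not drawn from any assessment package):

    ```python
    def sensitivity(true_pos, false_neg):
        """Proportion of actual positives the test correctly flags."""
        return true_pos / (true_pos + false_neg)

    def specificity(true_neg, false_pos):
        """Proportion of actual negatives the test correctly clears."""
        return true_neg / (true_neg + false_pos)

    # A test that catches 45 of 50 real cases and clears 90 of 100 non-cases:
    print(sensitivity(45, 5))   # 0.9
    print(specificity(90, 10))  # 0.9
    ```

    Note that the two denominators partition the sample differently: sensitivity is computed only over people who truly have the condition, and specificity only over people who truly do not.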

    The car alarm is a real-world example often used to illustrate this concept. In an ideal situation, your car alarm sounds only when someone is trying to break into your car, scaring them off and deterring the break-in. This is the top left cell of Figure 1, a True Positive. Conversely, you expect that your car alarm does not go off when no one is breaking into your car. This is the bottom right cell of Figure 1, a True Negative. Still, we have all heard the incessant howl of a car alarm at 2 a.m. when there is no burglar present, but rather a cat has jumped onto the roof of the car, waking everyone except the owner of the vehicle. This is the top right cell, a False Positive. Perhaps worst of all is when you are within earshot of your car alarm and know it never sounded, yet you arrive at your parking space to find that your car has been stolen. This is the bottom left cell, a False Negative. 

     

     


    Figure 1. Categories of Positive and Negative Test Results.

     

     

    A perfect test would be one that is 100% sensitive (i.e., it identifies all people who actually have the condition of interest, top left cell) and 100% specific (it does not identify anyone who does not have the condition as having one, bottom right cell). However, most tests have some error rate (top right cell and bottom left cell), and there is usually a tradeoff between having a highly specific test and a highly sensitive one. Thinking again about the car alarm example, you don’t want it to go off every time a car drives by or a person walks near the car, or it would give you a lot of false positives. But you also don’t want the alarm to miss when someone attempts to steal the car and not go off, which would be a false negative. This is the tradeoff between sensitivity and specificity.

    The same reasoning is true for assessment results. You want a test that correctly identifies someone who has a significant and meaningful skill deficit as having one (True Positive), and those who do not, should be identified as not having a deficit (True Negative). You also want the instances of a test incorrectly identifying someone as having a deficit when they do not (False Positive) and incorrectly identifying someone as typically performing when they actually do have a deficit (False Negative) to be as low as possible. High levels of both sensitivity and specificity indicate that the test is accurately identifying those who have the presence or absence of the condition of interest, while not mistakenly under- or over-identifying individuals.

     

    Using Sensitivity & Specificity Statistics in Behavioral and Social Sciences

    Although sensitivity and specificity are increasingly used to support the accuracy of behavioral and psychological assessments, historically they have been used in medical and healthcare settings. This is particularly true for screening tests used to identify the likely presence or absence of a condition so that healthcare providers can make appropriate decisions about further testing and treatment (Trevethan, 2017). However, there are fundamental differences between medical conditions and psychological or behavioral conditions. These statistics were originally designed to detect the presence or absence of a condition, a yes or no to a diagnosis; a Covid-19 test, for example, returns a positive or negative result to indicate the presence or absence of the virus. In contrast, tests used in the behavioral and social sciences typically produce scores along an ordered range in which the distance between scores is often meaningful, indicating changes in the severity of symptoms. Many behavioral and psychological conditions present along a continuum or spectrum, with gray areas between the distinct ends of yes and no, condition present or not.

    While examining a variety of empirical studies has become an accepted framework for determining the validity of a diagnostic assessment (Dollaghan, 2004), Plante and Vance (1994) state that evaluating diagnostic accuracy should be the primary concern for a test used to identify language impairment. They refer to diagnostic accuracy as the test's ability to adequately identify those who have language impairments (the sensitivity) and identify those with typical language development (the specificity). To evaluate the sensitivity and specificity of behavioral and psychological assessments, a predetermined threshold must be used, often referred to as a “cutoff score” or “cut score” (Sheldrick et al., 2015). Again, this is because behavioral measures do not simply measure a condition with a yes or no category, but rather measure abilities or symptoms along a range. Often the cut scores are chosen in relation to the standard deviation (SD) of the test. For a test with a standard score mean of 100, 15 points in either direction is the standard deviation. Thus, a score of 85 would be 1SD below the mean and a score of 70 would be 2SDs below the mean.
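    The cut-score arithmetic described above is simple enough to sketch. Assuming the standard score mean of 100 and SD of 15 given in the text (the function name is illustrative):

    ```python
    # Converting "SDs below the mean" into cut scores for a test with
    # a standard score mean of 100 and an SD of 15.
    MEAN, SD = 100, 15

    def cut_score(sds_below_mean):
        """Standard score sitting the given number of SDs below the mean."""
        return MEAN - sds_below_mean * SD

    print(cut_score(1))    # 85
    print(cut_score(2))    # 70
    print(cut_score(1.5))  # 77.5
    ```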

    Rather than presenting sensitivity and specificity values for a single cut score, Betz, Eickhoff, and Sullivan (2013) recommend providing results for multiple values so that professionals can choose a cutoff score best suited to their population. Often organizations differ in their criteria for determining the eligibility of services and when to continue or cease services. Presenting a range of values allows for flexibility and provides a different calibration for how sensitive you want your “alarm” to be set. You can customize it to fit your service population and the intended purpose of the test.

     

    Interpreting Sensitivity & Specificity Statistics

    Now that you know what sensitivity and specificity are, how can you evaluate these statistics in the tests you use? Every test that presents sensitivity and specificity statistics has conducted these analyses using a specific sample of interest, usually a clinical group, in comparison to another sample, usually a typically developing group. These samples matter and factors like age, diagnosis, and severity of deficits can have profound impacts on these statistics. It is important to review any demographics and descriptive data given about the samples used for the sensitivity and specificity analysis. These statistics are calculated using the instances of true positives and true negatives found in the samples. Thus, if the sample used in the analysis is not representative of the intended audience of the test, these statistics are meaningless in terms of supporting the validity of the test for the population intended.

    Some characteristics to examine include demographics of the samples and the clinical features of the groups of interest. How many individuals were included in the study? Is the age range of those included in the analysis reflective of the entire age range of the test or were the analyses conducted on only a specific age group? How do the demographic characteristics of the samples included in the analysis compare to the overall test sample? How were the diagnoses determined for the clinical sample? What is the severity level of the cases included in the clinical sample?

    To examine how these features can affect sensitivity and specificity, see Figure 2. A fictional test was administered to 10 individuals, ages ranging from 5 to 21 years. Nine were previously diagnosed as having a speech-language impairment by nine separate speech-language pathologists (SLPs), following the federal requirements for eligibility under the Individuals with Disabilities Education Act (IDEA) and their state regulations. The demographic characteristics of the sample roughly matched recent US Census data, making it “representative” (although quite small). The SLPs made a clinical judgment as to whether the level of impairment was mild, moderate, or severe for each clinical case at the time of testing. Eight were rated as severely impaired and one as mildly impaired. One person was identified as typically developing. Their standard scores on the test (M = 100, SD = 15) are given below. 

     


    Figure 2. Fictional Test, Sample of 10 Individuals.

     

     

    Sensitivity and specificity differ depending on what value is used for the cutoff score. Using a cutoff of 2SDs below the mean, those who score above 70 would be considered typically developing while those who score at or below 70 are impaired. In this case, 8 out of the 9 individuals with diagnosed speech impairments were identified by this test. The sensitivity = 0.89 because 8 True Positives were correctly identified (score of 70 or below) divided by the 8 True Positives + 1 False Negative (mildly impaired case who was not identified). In other words, 89% of the sample was correctly identified as having a significant and meaningful language deficit. The one individual without a diagnosed speech-language impairment was correctly identified as not having a deficit in this example. The specificity is 1.00 because the one True Negative was correctly identified (score greater than 70) divided by the 1 True Negative + 0 False Positives. This means that 100% of the typically developing sample was correctly identified as not having a significant and meaningful language deficit.

    What if the cutoff score was moved to a standard score of 80? The one mild case would then be identified, and the sensitivity and specificity of the test would both be 1.00. The less stringent cutoff of 80 provides improved sensitivity, capturing all those previously diagnosed as speech impaired. Also, it does not increase the chances of overidentifying typically developing individuals (the specificity remains 1.00). The interpretive range for sensitivity and/or specificity states that .90–1.00 is good to excellent, while .80–.89 is considered acceptable for diagnostic measures (Plante & Vance, 1994). Thus, this fictional test would be perfect! But remember, this test’s results are based on only 10 individuals. That is a tiny sample size, and the majority (8 out of 10) have severe language impairments. How representative is this group of the population at large that you intend to test? Even though the numbers look excellent, you must consider the sample used in the data analysis to obtain these results.
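    The calculations above can be reproduced with a short sketch. The scores below are hypothetical stand-ins consistent with the narrative (Figure 2’s actual values are not reproduced in this text): eight severe cases scoring at or below 70, one mild case at 78, and one typically developing individual at 95.

    ```python
    # Hypothetical standard scores for the fictional 10-person sample.
    clinical = [55, 58, 60, 62, 64, 66, 68, 70, 78]  # last entry = mild case
    typical = [95]

    def sens_spec(clinical_scores, typical_scores, cutoff):
        """Sensitivity and specificity at a given cut score.

        A score at or below the cutoff counts as a positive (impaired) result.
        """
        tp = sum(score <= cutoff for score in clinical_scores)
        fn = len(clinical_scores) - tp
        tn = sum(score > cutoff for score in typical_scores)
        fp = len(typical_scores) - tn
        return tp / (tp + fn), tn / (tn + fp)

    for cutoff in (70, 80):
        sens, spec = sens_spec(clinical, typical, cutoff)
        print(f"cutoff {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")
    ```

    With these stand-in scores, the cutoff of 70 yields sensitivity .89 and specificity 1.00, and raising the cutoff to 80 captures the mild case as well, matching the walkthrough in the text.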

    Think about the referred individuals you see who do not already come to you with a diagnosis. If every possible case came through the door exhibiting severe deficits, a test would only confirm what you already can identify as a clinician. It is critical to examine the qualities of the sample used in analysis to determine if it is realistic and represents the diverse range of skills you might find in your population.

     

    Sensitivity & Specificity in the CASL-2

    The sensitivity and specificity analysis of the CASL-2 (Carrow-Woolfolk, 2017) is included alongside a variety of studies supporting the validity and reliability of the measure. The published analysis used the standardization sample compared to the clinical sample of 271 individuals, aged 3 to 21 years. Data collectors who participated in the standardization of the CASL-2 recruited individuals with the following disorders: expressive and/or receptive language disorder (n=72), hearing impairment (n=23), autism spectrum disorder (n=49), social (pragmatic) communication disorder (n=23), intellectual disability (n=36), learning disability (n=43), and developmental delay (n=25). To be included in the sample, these individuals needed to have a previously established clinical diagnosis (e.g., diagnosed by a professional according to federal and state regulations prior to participation in the study) and be receiving special services. Because of the inclusion criteria, the clinical sample was not expected to exactly replicate that of the U.S. Census demographic distribution. However, the sample does offer some diversity in terms of ethnicity and parental education level. Males outnumbered females, as is often the case in clinical samples. See Table 4.5 in the CASL-2 manual for the exact demographic composition of the clinical sample.

    The CASL-2 sensitivity and specificity analysis (presented in Table 5.20 of the CASL-2 manual) is replicated here for reference as Table 1. The analysis demonstrates the ability of the CASL-2 to accurately identify those with a clinical diagnosis from those who do not by using the CASL-2 General Language Ability Index (GLAI) standard score. It is important to note that the entire clinical sample was included in this published analysis. The rationale for including all clinical groups was to display the range of sensitivity and specificity in a very diverse population with a wide range of symptoms and ability.

     

     

    Table 1. Published CASL-2 Sensitivity and Specificity Values Using a Diverse Clinical Sample.

    SS cutoff   Sensitivity   Specificity
    70          .41           .99
    75          .47           .96
    80          .64           .91
    85          .74           .84
    90          .86           .76

     

     

    Notice that sensitivity does not reach the acceptable mark of .80 or greater (Plante & Vance, 1994) until the cutoff reaches a standard score of 90, which would be a lenient cutoff for most. Indeed, at a cutoff of 90, specificity drops to .76, meaning the risk of over-identifying those in the typically developing group increases. Sensitivity is much lower at the most stringent cutoffs of 70 and 75, which suggests that many mildly to moderately impaired individuals will not be identified using the CASL-2 GLAI score within the diverse group of clinical cases used in this analysis. The cutoff of 85 (1SD below the mean) provides a better balance, capturing 74% of the clinical sample (closer to the .80 recommendation) with an acceptable specificity rate: 84% of the typically developing sample is accurately identified. These values are representative of the most commonly used assessments of language, likely due to the diverse nature of the clinical samples included in these studies. 

    To demonstrate the impact that the sample of interest has on sensitivity and specificity, we analyzed the CASL-2 clinical sample again, but this time trimming the clinical group based on the type of clinical classification and the severity of the symptoms. The data collectors who participated in the CASL-2 standardization study rated the severity of symptoms for each of the participants who were previously given a clinical diagnosis. They rated everyone from the clinical group as mild, moderate, or severe based on the testing session where the practicing SLPs administered all available CASL-2 tests for the examinee’s age. Although this rating is somewhat subjective, we can presume the clinical judgment of the SLPs is sufficient to determine a discrepancy between those rated as mild compared to severe. As such, Table 2 presents the sensitivity and specificity values when the CASL-2 GLAI scores are compared for the typically developing standardization sample to only those who displayed moderate to severe symptoms, across all clinical groups.

     

     

    Table 2. CASL-2 Sensitivity and Specificity Values Using a Moderate to Severe Clinical Sample.

    SS cutoff   Sensitivity   Specificity
    70          .53           .99
    75          .64           .99
    80          .84           .98
    85          .92           .92
    90          .99           .82

    Note. The analyzed sample included 195 clinically diagnosed individuals and 2,194 typically developing individuals.

     

     

    These results demonstrate that a cutoff score as low as 80 would be acceptable, with a sensitivity of .84 and specificity of .98. This means that 84% of the clinical sample was correctly identified as having language deficits, even across a diverse clinical sample, while 16% were missed and not identified as belonging to the clinical group when they should have been. Almost all typically developing individuals (98%) were correctly identified as not having language deficits, and only 2% were identified as having a deficit when they actually did not, because their scores fell at or below the cutoff of 80.

    Using a very stringent cutoff score of 70 would capture only 53% of the clinical group in this case. This is due to the variance of the moderately impaired clinical group, whose scores fell between 71 and 85. This is highlighted by the increasing sensitivity values going from a cutoff of 70 to a cutoff of 85. As the cut score increases, more and more of the clinical cases are correctly identified, such that at a cut score of 85 (1SD below the mean) sensitivity increases to .92. This means that 92% of the clinical sample included in this analysis was accurately identified, while 92% of the typically developing group was correctly identified as not having a deficit. This suggests that using very stringent cut scores such as 1.5 to 2SDs below the mean increases the risk of false negatives, or under-identifying individuals with actual language impairments. There are nuances in symptomology, and less pronounced impairments may be overlooked when only 2SDs below the mean is considered the criterion for diagnosis. Those with milder yet significant conditions may be more likely to perform closer to the 1SD-below-the-mean range.

    All clinical classifications were included in the analysis above. However, we might not expect that certain groups, such as those with general learning disability or developmental delay, would show deficits specific to language. Although language difficulties may exist within their symptomology and related to their diagnosis, language deficits are not the focus of their condition. Thus, we again trimmed the moderate to severe clinical sample to only include those clinical conditions where language deficits are an expected and pronounced symptom (all groups except learning disability and developmental delay). These results are presented in Table 3.

     

     

    Table 3. CASL-2 Sensitivity and Specificity Values Using a Moderate to Severe Language-Impaired Clinical Sample.

    SS cutoff    Sensitivity    Specificity
    70           .60            .99
    75           .68            .99
    80           .89            .98
    85           .97            .92
    90           1.00           .82

    Note. The analyzed sample included 141 clinically diagnosed individuals and 2,194 typically developing individuals.

     

     

    These results demonstrate that a cutoff score of 80 would provide very good diagnostic accuracy, with a sensitivity of .89 and specificity of .98. This means that 89% of the clinical sample was correctly identified as having language deficits, with only 11% missed. Almost all typically developing individuals (98%) were correctly identified as not having language deficits, while only 2% scored below the cutoff of 80 and were flagged as having a deficit when they did not. Using the stringent cutoff of 70 would capture only 60% of the clinical group in this case. Again, this is due to the variance of the moderately impaired clinical group, who score between 1SD and 2SDs below the mean. The cutoff of 85 (1SD below the mean) improves sensitivity to .97, indicating that 97% of the clinical sample would be correctly identified, whereas only 3% would be missed because they score above 85. While sensitivity rises at a cutoff of 85, specificity decreases somewhat: 92% of the typically developing group would be correctly identified, while 8% would be incorrectly identified as having a language impairment.

    This tradeoff between sensitivity and specificity highlights how different cut points may be used in different situations. The cut scores of 80, 85, and 90 all provide acceptable sensitivity and specificity. Depending on the purpose of the test administration and the client being tested, one cut score may be more appropriate than another. For high-stakes eligibility cases, you may wish to use a more stringent cutoff of 80 (lower than 1SD below the mean), because approximately 89% of all individuals who exhibit moderate to severe language impairments will be accurately identified, while 98% of typically developing individuals will be accurately identified. In contrast, when testing for treatment planning or lower-stakes decisions, you may wish to use a higher cut score, such as 85 or 90, which is much more inclusive of those with milder symptoms of language impairment but may also slightly increase the chances of overidentifying those in the typically developing population.
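    The arithmetic behind these rates can be sketched in a few lines of code. The scores below are hypothetical standard scores (mean 100, SD 15) invented for illustration, not CASL-2 data, and `sensitivity_specificity` is an illustrative helper, not part of any scoring software.

    ```python
    # Minimal sketch (hypothetical data): sensitivity and specificity at one cut score.
    # A score below the cutoff is treated as "identified as impaired."

    def sensitivity_specificity(clinical_scores, typical_scores, cutoff):
        """Return (sensitivity, specificity) for a given standard-score cutoff."""
        # Sensitivity: proportion of the clinical group scoring below the cutoff.
        true_pos = sum(1 for s in clinical_scores if s < cutoff)
        sensitivity = true_pos / len(clinical_scores)
        # Specificity: proportion of the typical group scoring at or above the cutoff.
        true_neg = sum(1 for s in typical_scores if s >= cutoff)
        specificity = true_neg / len(typical_scores)
        return sensitivity, specificity

    # Hypothetical standard scores, not CASL-2 data.
    clinical = [62, 68, 71, 74, 78, 79, 82, 84, 88, 95]
    typical = [85, 90, 92, 96, 100, 103, 107, 110, 115, 120]

    sens, spec = sensitivity_specificity(clinical, typical, cutoff=80)
    print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
    ```

    With these made-up scores, the cutoff of 80 misses the four clinical cases scoring 82 and above, which is exactly the pattern driving the values in Table 3.
    
    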

     

    Summary

    Measures of test validity are an important part of the overall examination of an assessment’s psychometric properties. To diagnose with confidence and accuracy, one must use a test that demonstrates reliability and validity across a range of empirical studies. One such measure of validity is the analysis of sensitivity and specificity. These statistics reflect the ability of a test to accurately identify those who truly have a deficit, condition, or disorder (sensitivity), while also correctly identifying those who do not (specificity). However, behavioral and psychological tests such as direct performance assessments often produce scores on a continuous scale where differences in score values are meaningful, compared to the discrete, nominal scores of many medical tests.

    To calculate and use sensitivity and specificity values, cut points must be implemented as a threshold to determine the presence or absence of impairment. Generally, as you raise the cutoff score you correctly identify more and more impaired individuals (higher sensitivity), but you also increase your chances of over-identification (lower specificity). Conversely, very stringent cutoff scores risk under-identifying impaired individuals (lower sensitivity) but generally yield higher specificity (you are not mistakenly identifying unimpaired individuals). This tradeoff between sensitivity and specificity often determines which cut point to use as a threshold, depending on the purposes for which you intend to use the test results.
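    The tradeoff just described can be demonstrated with a quick sweep over cut scores. Again, the score lists are hypothetical, not CASL-2 data; the point is structural: each higher cutoff's "impaired" region contains the previous one's, so sensitivity can only rise and specificity can only fall as the cutoff increases.

    ```python
    # Sweep over cut scores to show the sensitivity/specificity tradeoff.
    # Scores are hypothetical standard scores, not CASL-2 data.

    def rates(clinical, typical, cutoff):
        # A score below the cutoff counts as "identified as impaired."
        sens = sum(s < cutoff for s in clinical) / len(clinical)
        spec = sum(s >= cutoff for s in typical) / len(typical)
        return sens, spec

    clinical = [62, 68, 71, 74, 78, 79, 82, 84, 88]      # diagnosed group
    typical = [78, 85, 90, 92, 96, 100, 103, 107, 110]   # typically developing group

    results = {c: rates(clinical, typical, c) for c in (70, 75, 80, 85, 90)}
    for cutoff, (sens, spec) in results.items():
        print(f"cutoff {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")
    ```

    Running this shows sensitivity climbing toward 1.00 while specificity drifts downward, mirroring the pattern in Table 3.
    
    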

    The samples used in a sensitivity and specificity analysis can have a dramatic impact on the resulting statistics. Professionals are encouraged to critically evaluate descriptive features of the samples, such as sample size, age, demographic characteristics, clinical diagnoses included, and severity of the clinical group being compared. The re-analysis of the CASL-2 clinical data presented here illustrates this point. The CASL-2 sensitivity and specificity values improved significantly when only those who exhibit moderate to severe symptoms were included in the clinical sample. Sensitivity improved even further once only those clinical groups expected to show deficits in language were included, rather than the more general clinical sample. These results provide further support for the validity of the CASL-2 as a diagnostic measure of language impairment, while also supporting its use in identifying those who demonstrate milder symptoms.

    There is utility in using sensitivity and specificity to evaluate assessments of direct performance and behavior, particularly for identifying those at risk for negative outcomes based on ability levels or behavior. However, sensitivity and specificity should not be treated as precise measures of validity on their own. Further, the interpretation of these statistics depends entirely on the threshold chosen as a cutoff (e.g., 1, 1.5, or 2SDs below the mean). Studies have shown that the chances of misclassifying individuals increase significantly as scores approach the decision threshold (Robins, 1985; Sheldrick et al., 2015; Spaulding, Plante, & Farinella, 2006; Swets, Dawes, & Monahan, 2000).

    Professionals should take a variety of cut points into consideration, moving from more to less stringent cutoffs depending on whether the goal of the assessment is diagnosis or treatment planning. As with all assessments, a single test score should not be used in isolation for diagnosis or treatment planning. Instead, assessment results should be used in concert with other data (e.g., other assessment results, parent and teacher interviews, review of available records, direct observation) to identify a disorder or disability. The fuller the picture we can capture, the better able we are to illuminate the path forward for those individuals who are seeking our guidance as helping professionals.

     


    Research and Resources:

     

    Betz, S. K., Eickhoff, J. R., & Sullivan, S. F. (2013). Factors influencing the selection of standardized tests for the diagnosis of specific language impairment. Language, Speech, and Hearing Services in Schools, 44, 133-146.

    Carrow-Woolfolk, E. (2017). Comprehensive Assessment of Spoken Language, Second Edition (CASL-2). Torrance, CA: Western Psychological Services.

    Dollaghan, C.A. (2004). Evidence-based practice in communication disorders: what do we know, and when do we know it? Journal of Communication Disorders, 37(5), 391-400.

    Plante, E. & Vance, R. (1994). Selection of preschool language tests: A data-based approach. Language, Speech, and Hearing Services in Schools, 25, 15-24.

    Robins, L.N. (1985). Epidemiology: reflections on testing the validity of psychiatric interviews. Archives of General Psychiatry, 42(9), 918–924.

    Sheldrick, R. C., Benneyan, J. C., Giserman-Kiss, I., Briggs-Gowan, M. J., Copeland, W., and Carter, A. S. (2015). Thresholds and accuracy in screening tools for early detection of psychopathology. The Journal of Child Psychology & Psychiatry, 56(9), 936-948.

    Spaulding, T. J., Plante, E., & Farinella, K. A. (2006). Eligibility criteria for language impairment: is the low end of normal always appropriate? Language, Speech, and Hearing Services in Schools, 37(1), 61-72.

    Swets, J.A., Dawes, R.M., & Monahan, J. (2000). Psychological science can improve diagnostic decisions. Psychological Science in the Public Interest, 1(1), 1–26.

    Trevethan, R. (2017). Sensitivity, specificity, and predictive values: foundations, liabilities, and pitfalls in research and practice. Frontiers in Public Health, November 20.

     

     

  •  

    In 2010, the U.S. Department of Justice adopted the Americans with Disabilities Act Standards for Accessible Design, mandating that electronic and information technology, such as websites, be accessible to those with disabilities, including visual, auditory, physical, speech, cognitive, language, learning, and neurological disabilities. This was an important step toward greater accessibility for the more than 3.8 million Americans with visual impairments, not to mention those with other disabilities.

     

    Consequences of Non-ADA-Compliant Websites

    More than a decade later, companies are still struggling to achieve compliance, and a 2020 study found that 98% of companies do not offer full accessibility services. Companies that don't comply with ADA standards are putting themselves at substantial risk of lawsuits and fines. The maximum civil penalty for a first violation under Title III is $75,000, with subsequent violations capped at $150,000. In most cases, companies choose to settle complaints out of court, but even those settlements can run into the thousands of dollars. Fortunately, a growing number of companies are making accessibility technology easier to adopt.

     

    Other Compliance Requirements

    There are two key sets of compliance requirements that organizations must meet: ADA compliance and WCAG compliance.

    ADA compliance focuses on ensuring that people with disabilities, using assistive technology, have the same level of access as their able-bodied counterparts.

    Web Content Accessibility Guidelines (WCAG) defines how to make Web content more accessible to people with disabilities.

    Section 508 compliance additionally applies to federal contractors.

     

    Common Triggers for ADA Compliant Website Investigations

    There are many website features that might trigger non-compliance. A few examples of these include:

    • Lack of alt tags, which describe images for visually impaired users
    • Insufficient text/background contrast
    • Site text that is not scalable
    • Menus and drop-down navigation that are not fully keyboard-accessible (often due to JavaScript) or do not properly support screen readers
    • Lack of "skip navigation" options for screen readers
    • Password requirements that do not support screen readers
    • Actions, like adding a product to a cart, that aren't designed to support screen readers
    • PDF content that cannot be read in HTML format
    • Phone numbers that lack a full description, potentially preventing users from understanding what the number is for
    • Site information, such as the company address and hours of operation, that is not labeled

     

    How to Make your Website ADA Compliant

    Fortunately for website owners, there is no shortage of companies that help remediate these problems, and you can learn more by searching for "ADA compliance software."

    After conducting our own research, we chose to work with a provider named accessiBe. We chose this company for its ease of implementation, price point, and specific functionality. Their innovative use of AI to label photos was ideal for our company website, as we have hundreds of product pages and photos that require work. We also appreciated that their software is developed and tested by people with disabilities.

    If you’re curious to learn more, look for the teal green accessibility button on the right-hand side of your screen. Clicking that button activates the accessibility menu, and you’ll be able to toggle the settings as needed.

As the pandemic winds down, many are celebrating a return to normalcy. But for practitioners, clinicians, and educators, the next challenge has just begun. The two years of instability have done more than cause stress; they have also taken a significant toll on the reading skills and social–emotional development of children, including those with special needs, who need your services now more than ever.

    We’ve compiled a list of self-care tips to help you stay resilient as you face the next wave of challenges brought on by the pandemic. Our team of trained assessment consultants is standing by to guide and train you on the most effective assessments and intervention resources. We also have a robust online grading and assessment platform that makes assessments and progress monitoring easier than ever before.

     

    Start with Self-Awareness

    It’s tempting to think that the risk of burnout or post-traumatic stress disorder is lower now that things are returning to normal, but that might not be the case. Stay attuned to your body and monitor for signs such as muscle tension, a clenched jaw, increased heart rate, or chest pressure. Other signs of burnout include irritability, a lack of empathy, and even an inability to connect with patients, students, research participants, or others. If you’re experiencing these symptoms, it may be time to seek help from a licensed professional.

     

    Connect with Colleagues

    Humans are social creatures, and spending time talking with colleagues not only helps us feel more connected but creates greater compassion and resilience in teams. When possible, make time to connect in enjoyable activities, whether it be a virtual coffee break or a restorative walk around the office, so that you begin to associate work with pleasurable activities rather than stressful ones.

     

    Upgrade (or Change) your Environment

    Many of us have developed an automatic negative response to our working space due to the high frequency of stress that we were exposed to in it. If you find a sense of dread setting in the moment you sit down at your desk, change things up! Add a plant, move your desk or hang up some new art, and set an intention to form a new, positive relationship with this workspace.

     

    Remind Yourself of the Good

    Burnout often shifts a person’s brain into a negativity bias, which is certainly not helped by the overwhelming amount of bad news in the media. This is where methods from Cognitive Behavioral Therapy can come in handy. Remind yourself of the positive impact you have made, and learn to recognize and reframe any negative thinking patterns you may have slipped into during the stresses of the past two years.

     

    Build Better Boundaries

    Just because boundaries were blurred during the pandemic doesn't mean they have to stay that way. Keep track of your daily activities, make sure that you're balancing your work day and your free time, and carve out meaningful pockets of relaxation and regeneration in your daily schedule to stay energized. Perhaps you no longer have a water cooler to gather around in the office, but could you take those ten minutes to sit outside and text a friend or colleague?

     

    Just Do One Thing

    After two years of people talking about self-care, you may feel that it's yet another "to do" on a long list, and that feeling is the very cornerstone of burnout: the treacherous combination of feeling unmotivated, detached, and dissatisfied. Fortunately, even small, self-focused steps forward can get the ball rolling in the right direction.