Results Driven Accountability: IDEA Part C Results Data in Determinations

OSEP is committed to implementing a results-driven accountability framework that leads to increased state and local capacity to improve results and functional outcomes for children with disabilities. As part of this effort, OSEP asked the Early Childhood Technical Assistance Center (ECTA, http://ECTACenter.org) to provide input on Individuals with Disabilities Education Act (IDEA) Part C results measures that could be used to review states' performance in improving results for infants and toddlers with disabilities who receive early intervention services. An explanation of ECTA's recommendations is contained in a presentation entitled Using Child Outcomes Data for Determinations, A Proposal. A more detailed account of the proposed approach is contained in ECTA's report entitled Documentation of the Recommended Analysis for Using Child Outcomes Data for IDEA Part C Determinations.
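At a high level, the proposal scores each state on two results components, data quality and child achievement, combines them into a single results score, and blends that score with the existing compliance score to group states for determinations. The sketch below illustrates only that structure; the function names, weights, and example values are stand-ins chosen for illustration, not ECTA's actual formula, which is documented in the reports linked above.

```python
# Simplified illustration of the structure of the proposed determination
# scoring. All names, weights, and example values here are assumptions for
# illustration; see ECTA's report for the actual methodology.

def results_score(data_quality, child_achievement, dq_weight=0.5):
    """Combine data quality and child achievement scores (each 0-1)
    into a single results score."""
    return dq_weight * data_quality + (1 - dq_weight) * child_achievement

def determination_score(compliance, results, compliance_weight=0.5):
    """Blend compliance and results scores (each 0-1) into the overall
    score used to group states for determinations."""
    return compliance_weight * compliance + (1 - compliance_weight) * results

# A hypothetical state with strong compliance but weaker results data:
overall = determination_score(compliance=0.9,
                              results=results_score(0.6, 0.7))
print(round(overall, 3))  # 0.775
```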

Feedback

Comment period now closed.

We would like your feedback on the proposed approach. What are the pros and cons of the proposed approach?  Are there other data sources that should be considered as we move forward in including results data in the Part C determinations process? Please include any additional information related to the use of results data in Part C determinations. Submit comments below by Friday, December 12, 2014. Submitting comments is voluntary and subject to ED blog comment policies.

RDA Home Page

Please also visit the RDA Home page for all the latest information and resources.

Posted by
Information Technology Specialist, U.S. Department of Education

67 Comments

  1. The National Association of the Deaf applauds the move to results-driven accountability rather than staying with the one-size-fits-all model. We do need states to “check-in” with the federal government, other states, and most importantly the states themselves on their performance annually with early intervention services/programs for babies/children with disabilities and their families.

    Number one, we urge OSEP to require disaggregating the data by type of disability in order to strengthen accountability for low-incidence disabilities and multiple disabilities. For example, is a deaf-blind child being provided services for both deafness and blindness? Often the state says the family has to choose which category comes first, which does not make sense, especially in early intervention: as we all know, the earlier a child receives services and support, the more successful the child will be in the long run. Yoshinaga-Itano, C., Sedey, A. L., Coulter, D. K., & Mehl, A. L. (1998). Language of early- and later-identified children with hearing loss. Pediatrics, 102(5), 1161-1171 (doi: 10.1542/peds.102.5.1161).

    Disaggregation also will assist states in identifying root causes of low performance in deaf and hard of hearing children (e.g., late identification, lack of access to language, need for family support in acquiring knowledge and language development support) and will help the federal government develop a better understanding of best practices.

    To assist with best practices, and to chime in with Barbara Raimondo of CEASD and Janet DesGeorges of Hands and Voices, the National Association of the Deaf also recommends utilizing the Joint Committee on Infant Hearing's multi-disciplinary guidelines for early intervention when it comes to measuring results for early intervention on the part of deaf and hard of hearing children and their families. We also would like to point to a pair of recently published articles about language deprivation to help determine how this can be measured by an SSIP to ensure deaf and hard of hearing children, along with deaf-blind children, have full access to language from birth: http://www.swarthmore.edu/SocSci/dnapoli1/lingarticles/Ensuring%20Language%20Acquisition.pdf (“Ensuring language acquisition for deaf children: what linguists can do”) and http://www.harmreductionjournal.com/content/9/1/16 (“Language acquisition for deaf children: Reducing the harms of zero tolerance to the use of alternative approaches”), along with our position paper on language acquisition at http://nad.org/position-statement-early-cognitive-and-language-development-and-education-dhh-children.

    We hope those resources will be beneficial, as we share a common goal of enhanced data collection and accountability on behalf of all deaf and hard of hearing babies and children.

    Finally, we recommend adding a data indicator of whether people with disabilities are included in states' early intervention work, because we cannot rely upon experts alone; it is far more beneficial to include individuals who have personal experience and insight to ensure we are on the correct path in serving all children with disabilities. Thank you.

  2. California’s Comments on the proposed Part C Results process for Child Outcomes in State Determinations

    California is concerned that the methodology proposed for using child outcomes data in State Determinations needs additional work to equitably evaluate the data quality and child achievement scores for States.
    California would like to highlight these concerns:
    • The proposal uses data from a fiscal year that has already ended
    • States are using their own approved methods for measuring and reporting child outcomes and should not be compared to other states
    • Variable eligibility criteria in each state affect the child outcomes
    • Eligibility criteria can change by statute in a state, and the data will take a few years to stabilize
    • The calculation of completeness of data should be determined by the state-identified denominator, since not all children in 618 exit data will meet the criteria for inclusion in the Annual Performance Report data (i.e., the child must receive 6 months of early intervention services)
    • If data quality receives a zero score, then all child outcome work is zeroed out; this does not weight data quality and child achievement equally (illustrated in the sketch after this list)
    • Progress category ranges for “a” and “e” have a rationale, but ranges for categories “b,” “c,” and “d” cannot be predicted, because each child improves along his or her own trajectory at a rate that cannot be scientifically predicted
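    To make the zero-score concern above concrete, here is a minimal sketch with hypothetical numbers; the 50/50 weights and the scores are assumptions for illustration, not figures from the proposal. A gating rule wipes out measured achievement entirely, while an equally weighted sum does not:

    ```python
    # Hypothetical scores illustrating the zero-out concern. Values and
    # weights are assumptions for illustration only.
    data_quality = 0.0   # the state misses the data quality standard
    achievement = 0.8    # but its children show strong measured progress

    # Gating rule: a zero on data quality zeroes out all child outcome work.
    gated = 0.0 if data_quality == 0 else 0.5 * (data_quality + achievement)

    # Equal weighting: achievement still counts for half.
    weighted = 0.5 * data_quality + 0.5 * achievement

    print(gated)     # 0.0
    print(weighted)  # 0.4
    ```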

    California is in support of ITCA’s analysis and recommendations for the current proposal.

  3. New York State (NYS) Part C Program: Thank you for the opportunity to submit comments on the Office of Special Education Programs' (OSEP) proposal to include child outcomes data in Part C determinations in 2015. New York State is one of the largest Part C programs in the nation, serving approximately 65,000 infants and toddlers with disabilities and their families annually. NYS Part C agrees with OSEP's efforts to focus on improved results for young children and their families as part of the determinations process; however, ensuring that data are of high quality and used in a way that accurately reflects child performance is critical.

    Many factors that influence child outcome data need to be considered, and these provide the context for the use of results in the determinations process. States use different methods, as approved by OSEP, for collecting and interpreting assessment data to measure and report child outcomes, which makes comparison of data across states difficult. Eligibility criteria are established by states within the federally required framework, and therefore the population of children served by Part C is heterogeneous. Many social, demographic, and child and family circumstances influence child development. The length of time children and families participate in Part C programs is relatively short, and service delivery parameters, such as level of service, also vary across states.

    The evidence base for the developmental progress that can be made by children in Part C programs is growing. NYS Part C is working hard to contribute to child outcomes data. More evidence-based knowledge and experience is needed to establish expectations for the level of progress that can be achieved by children participating in Part C programs. NYS Part C is completing a research project, funded by the U.S. Department of Health and Human Services' Maternal and Child Health Bureau research program, to model an approach to evaluating the impact of Part C participation on toddlers with ASD and their families relative to a comparison group of families, which we hope will contribute to this knowledge base.

    NYS Part C strongly endorses the comments and recommendations prepared and submitted by the Infant Toddler Coordinators Association (ITCA) on IDEA Part C results data in determinations. Seventy-four percent of ITCA membership participated and/or provided input into these recommendations. NYS Part C was an active participant in this process.

    New York State strongly agrees with and supports each of the following recommendations made by ITCA:

    1. Compliance and results data should not be weighted equally in June 2015 determinations. For the initial year of using Part C results data, compliance scores should be 70 percent of a state’s determination, with results data weighted at 30 percent of the determination. The use of results data should be increased by 5 percent per year over 6 years until compliance and results are weighted equally.

    2. Data quality and child achievement data should not be weighted equally in June 2015 determinations. For the initial year of using Part C results data, data quality should be 70 percent of the results calculation, with child achievement weighted at 30 percent. The use of child achievement data should be increased by 5 percent per year over 6 years until data quality and child achievement are weighted equally.

    3. Only comparisons of each state's performance to its own targets should be used during the initial years of incorporating results data into determinations. State differences, such as eligibility criteria and decision rules for compiling child outcome results into progress categories, could distort an individual state's performance when it is compared to a national aggregation of all data. As Indicator 3 child outcomes data collection and the SSIP process continue and the quality of states' outcome data improves, it is more likely that a valid process can be designed to compare child achievement data across states.

    4. Completeness of the data should be determined based on a denominator supplied by the state with accompanying rationale and documentation. The 618 exiting data used in the OSEP proposal do not represent an accurate count of the number of children for whom child outcomes exit ratings should be available.

    With respect to this recommendation, we note that given the size and scope of the NYS Part C program, a sampling methodology approved by OSEP is used to collect child outcomes data. This is the only feasible approach to collection of child outcomes data given that no additional funding has been made available to states to implement child outcomes measurement systems. In addition, it is well-established that use of a sampling methodology yields results that are as accurate as universal measurement systems, while offering a more cost-effective approach to program evaluation.

    Given that NYS Part C uses a sampling methodology for child outcome data, the 618 exit data would not be the appropriate denominator to use when assessing the completeness of NYS Part C child outcome data. Rather, the completeness of NYS Part C child outcome data should be based on the number of children in the sample who are exiting in the relevant program year and for whom outcome data should be available. As recommended, NYS Part C also collects data only on children in the sample who have received early intervention services for at least six months.

    NYS Part C carefully tracks and monitors entry and exit COS data for children in child outcomes samples. As a result, we are able to calculate the number of children reported for the outcome out of the number of children in the sample exiting in the relevant program year (PY). For example, in the 2012-13 PY, for all three child outcome indicators, NYS Part C had outcome data for 43% of children who were included in a child outcome sample, received Part C services for at least six months, and exited in the PY. In contrast, when the denominator used was all children exiting in the 2012-13 PY, the data supplied to NYS Part C by ECTA indicated that NYS Part C had outcome data for only 2% of children exiting in that year. NYS Part C would be unfairly disadvantaged if exiting data were used as the denominator for completeness of data, and would not receive credit for the significant level of effort and resources used to track and collect child outcomes data. (A worked illustration of this denominator effect appears at the end of this list.)

    5. Use only “a” and “e” progress categories to measure “out-of-range” scores for data quality.

    6. States’ performance data in child outcomes should be considered in determinations even if the state’s data quality does not meet all the standards for data quality.

    7. A different method for comparing state to state child achievement data should be developed over the next several years by OSEP through the federally-funded contractors with active stakeholder involvement.

    8. For the first two years, use only each state’s performance compared with the state’s targets as the measure of child achievement and exclude the “change over time in summary statements” measure.

    9. A methodology for considering family outcomes should be developed in the next several years and incorporated into the Part C determinations process.

    With respect to this recommendation, we wish to note that families are critical partners in all aspects of the NYS Part C program. Family-centered services, including involving families in all aspects of service delivery to their child and providing services to the family as needed to enhance and support their child's development, are a central tenet of the NYS Part C program. As such, NYS Part C believes that work should begin now on identifying a methodology to include family outcomes in the determinations process and that this should be accomplished in the next several years.
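    The denominator effect described under recommendation 4 is easy to see with made-up counts. The figures below are hypothetical and are not NYS's actual numbers, though they reproduce the same 43% versus 2% contrast:

    ```python
    # Hypothetical counts for a state that samples, showing how the choice
    # of denominator drives the completeness calculation.
    children_with_exit_ratings = 860   # sampled children with outcome data
    sample_exiting = 2_000             # sampled children exiting in the PY
                                       # with at least six months of services
    all_exiting_618 = 43_000           # every child in the 618 exit count

    print(children_with_exit_ratings / sample_exiting)   # 0.43 -> 43% complete
    print(children_with_exit_ratings / all_exiting_618)  # 0.02 -> 2% complete
    ```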

    Thank you again for the opportunity to submit comments.

  4. These comments are submitted on behalf of the National Association of State Directors of Special Education (NASDSE) in response to the request for feedback on proposed Part C determinations. NASDSE appreciates the opportunity to provide feedback. NASDSE strongly supports OSEP’s Results-Driven Accountability (RDA) initiative with its focus on results and welcomes the opportunity to work collaboratively with OSEP to move forward in a meaningful way.

    NASDSE has identified several areas where we believe further clarification is required:

    1. The Recommended Analysis would group states into three groups (lowest, middle and highest) but is not clear on how these groups would translate to the determination categories required by IDEA (e.g., meets requirements, needs assistance, etc.).
    2. The proposal does not describe how these new groupings will blend with the compliance indicators in order for OSEP to make its determinations.
    We do have several concerns about the proposed criteria:
    1. Data quality is a key component. However, we are concerned about using data quality as an evaluation component this year because the data states will be submitting on February 1, 2015, were collected some time ago. Using a new standard without giving states a ‘heads up’ means that some states will not be in a position to address problems with their data. We recognize that the issue of data quality is not new and that states should have been addressing the quality of their data for some time. Nevertheless, states may have been focusing more of their attention on other issues that have demanded immediate attention from the U.S. Department of Education. NASDSE therefore recommends that for 2015, compliance and results data should not be weighted equally.
    2. For very young children, there are a number of social, demographic and individual child and family circumstances, including the nature of a child’s disability, that can potentially impact outcomes for any Part C program. In addition, states set their own criteria for the program. Therefore, we have concerns about grouping states into categories based on their child outcome scores. We would much rather see states set high quality standards and high outcomes standards and be evaluated based on these standards. A state’s determination would then be based on how well it achieved those goals and not how well it achieved compared to other states, which may have had different starting points. If OSEP believes that a state is setting its standards too low, we believe that OSEP should work with a state (which should be developing these standards in conjunction with its stakeholders) to address any concerns that OSEP has regarding the standards set by the state.
    3. The proposal does not take into consideration variables that might impact a state’s data from year to year. For example, if a state improves its data quality, it could see a decrease in its results from the previous year. Also, for a state that already has strong, positive results, it might be difficult to demonstrate change. In addition, it most likely will take more than one year to demonstrate significant results. We therefore suggest that a state be given an opportunity to explain its data in a meaningful way (as is already done with the compliance indicators) and that if necessary, OSEP recognize these variables in making its determinations.
    4. We also have some concerns about reaching a single score based on the two factors, data quality and performance. We recommend that even if a state receives a low score on its data quality component, the state's child outcome data still be considered in making the state's determination.

    Again, NASDSE appreciates this opportunity to provide feedback.

  5. I support the use of results data in the Part C Determinations. The goal of improved outcomes for all children is something that providers of Part C strive towards. The opportunity to provide comment is sincerely appreciated.

    The equal weighting of the results data with the compliance data is premature. States have focused on the compliance indicators over the years, and the process to achieve better data was quite involved. Time is needed to improve results data, and a phased-in process is recommended. The varying resources of states and the resources provided by OSEP over the years have not yet yielded the data systems required to use equally weighted results data.

    Since eligibility varies across the states, the comparison of outcome data across states is not recommended. A new process should be designed to allow valid comparisons among states.

    A method for considering family outcomes should definitely be developed and included in the Part C determinations process. OSEP approves the family outcomes data for each state. A set of minimum criteria should be developed to allow the inclusion of these data in the future.

    Again, thank you for the opportunity to comment.

    • The Greater Los Angeles Area SELPA (GLAAS) Directors, representing 21 SELPAs, have concerns with various data methods that are proposed for the Results Driven Accountability (RDA) process for Parts B and C.

      In California, data collected by the DRDP assessment instrument (used by school districts) and data from Regional Center clinical evaluations come from markedly different assessment methods and tools. They cannot be combined to provide a valid score for the state of California. The varying assessment tools used across states do not lend themselves to a valid comparison across states.

      Serving young children in their natural environment is essential for improving outcomes for infants and toddlers. Data collected in a child’s natural environment at this young age is critical for providing an accurate portrait of the child’s abilities, achievement and growth.

  6. Thank you for the opportunity to provide input on the proposed approach for including results data in the Part C determinations process. Although we are members of New York State's Early Intervention Coordinating Council, our comments represent not the council but our individual opinions. We are proud of the Early Intervention Program in New York and the services and supports provided to infants and toddlers with disabilities and their families.

    Each year, ICCs across the nation are asked to review their state's performance plan and support the submission of the plan to OSEP. We recognize the tremendous effort that lead agencies expend in collecting, analyzing, and reporting the data, and we applaud the New York State Department of Health Bureau of Early Intervention for its work.

    The requirement that states collect child and family outcome data is a huge commitment of shrinking resources that has not been accompanied by increased federal funding. We support the availability of increased funding to allow states to enhance the quality of their data and its use to improve program quality. In addition, we urge a change in the funding formula to states that aligns with the number of infants and toddlers with disabilities served by the Part C program as opposed to the birth cohort.

    We are concerned that the comparison of data across states does not recognize that child outcome data are affected by many details that are often not contemplated when the data are reviewed. These details tell a “story” that is currently missing from the use of results in the determination process for states. As others have expressed in the comments, eligibility criteria differ from state to state, and consequently the population of children served by Part C varies, making the value of the information gleaned from comparisons questionable. Another major factor not considered is that each state decides the approach, methods, and tools for gathering and analyzing child outcomes. In addition, a state as large and diverse as New York has such significant cultural, linguistic, socio-economic, and geographic diversity that child and family outcomes are potentially affected and comparisons are difficult.

    We are especially troubled by scoring that is based on the degree to which infants and toddlers with disabilities close the gap between their skills and abilities and those of their typically developing peers. In particular, we are apprehensive about how the data will be used in the future for the population of infants and toddlers who do not make substantial progress or for whom age-appropriate expectations are not achievable.

    More and more, early intervention funding is based on allowable Medicaid and commercial insurance reimbursement. This shift, while philosophically appropriate, puts tension on the tenets of Part C, which has educational roots. Specifically, medical justification requirements, such as habilitative and rehabilitative criteria, may restrict services for infants and toddlers whose disability may limit measurable medical progress. The challenge will be for OSEP to recognize the mandate for this funding while balancing appropriate Part C services for all eligible children regardless of medically defined criteria.

    We appreciate and support the efforts of the Infant Toddler Coordinators Association (ITCA) and their letter submitted in response to the request for comments. We value the transparency in this process and acknowledge the Administration’s commitment to infants and toddlers with disabilities and their families and to early childhood education. We thank you again for the opportunity to comment and are available for follow up if necessary.

    Talina Jones, Parent
    NYS EICC Chair
    tajeemom@aol.com

    Steven Held, Executive Director
    Just Kids Early Childhood Learning Center
    NYS EICC Vice Chair
    jkschool@aol.com

    Margaret Sampson
    Family Initiative Coordinator
    NYS EICC Member
    msamp@ficsp.com

  7. I have considerable concern with both the methodology being used to comment on the proposed determinations and with the specific proposed content. In an apparent effort either to circumvent the official regulatory process or to reduce the paperwork burden on itself, OSEP has asked ECTA to solicit these blog comments. While helpful as feedback, these blog entries should not stand in place of an official action that would require OSEP to explicate its reasons for taking certain actions. The limited exposure of the blog would seem to indicate that feedback is desired only from those who may be in OSEP's inner circle and who may be less objective or critically analytic regarding both the methods employed and the proposed content.

    There is considerable variability among States regarding: child eligibility for Part C programs, assessment methodologies used, lead agency designation, and outcome expectations.

    In our state, the Part C lead agency has a completely different service delivery model from the SEA model. Increasing the required “score” for each summary statement in the proposed model is contraindicated when we can’t even validate the scores from program to program. Instead of creating an artificial measure to rank states, examining the validity of the various assessment instruments and child outcome measures used would be more helpful. This should be done prior to establishing a rubric for evaluating those outcomes. The current assessment instruments with greatly varying reliability and validity cannot be compared equivalently.

    In fact, some states do not assign outcomes ratings in a manner that assures that children receive the same ratings from different early intervention programs. There is too much variability in assignment of ratings within programs. Some use on-the-spot clinical ratings; others use objective measures sampled over time.

    Using the proposed one-size-fits-all outcome system, based on spurious recording of achievement regarding age-appropriate functioning for children with identified disabilities, sets a standard that is antithetical to the needs of the children we serve. The net result of such a system is to focus states' attention on numerical outcome scores rather than on the quality of child and family services. Providers work hard to make IFSP outcomes individual, appropriate, and measurable. States could better determine individual areas of need and target ways to improve child and family services.

    The pre- to post-test design is not a good design for measuring progress: the most appropriate tool for an infant may not be the best tool for a three-year-old. There is also confusion as to the weighting of the test result, which is a standardized score that is less likely to show change, against the assigned rating, which is a qualitative judgment made by evaluators who may or may not understand the ratings or how to weight the scores in making that judgment. So the scores that are collected are not reliable, they are not valid, they are not meaningful to instructional practice, and they are not useful for evaluating the quality of programs and services: they are useless.

    The justification for creating an overall total score simply for the purpose of ranking states is unclear. Child Achievement and Data Quality are wholly different variables, and should be treated as such. An intervention for poor data quality should not be the same as one for poor achievement. Each is important, but added together their meaning is obscured. This entire conceptual model needs to be revamped.

  8. The Statewide Parent Advocacy Network (SPAN) supports key aspects of the Results Driven Accountability (RDA) process for Part C, while at the same time continuing to support the need for compliance monitoring and findings based on compliance issues. Having a high quality early intervention system that produces positive child outcomes for those children receiving EI services does not mean much if eligible children are not identified, evaluated, and provided with services in a timely manner. We have reviewed the Early Childhood Technical Assistance Center (ECTA) proposal regarding the use of outcomes data for IDEA Part C determinations and are pleased to have the opportunity to submit these comments.

    We agree that the quality of data submitted by states to OSEP should have a relatively high weight within the determinations process. This will provide much-needed incentive for states with very poor quality data to improve their data collection, analysis, and reporting processes. We agree, however, with the National Disability Rights Network (NDRN) that a reasonably achievable threshold level of data accuracy and reliability should be set, rather than the ECTA proposal to create a “cut point” that would result in only a very small number of states being identified in the lowest tier.

    As a family-led organization that works to help families in the early intervention, special education, and transition processes understand these systems, partner with professionals, and advocate for their infants, toddlers, children and youth with disabilities, we strongly believe that family outcomes should be considered in the determination process. The very fact that the plan developed under Part C is an individualized FAMILY services plan underscores the importance of states achieving success in the family outcome components. We would strongly encourage OSEP, however, to require that states utilize a meaningful tool and strategy to collect and analyze this data, as currently states use many different approaches, some of which are clearly not providing us with valid and reliable information on which any decisions could be based. Further, we urge that OSEP review disaggregated data by race, ethnicity, socio-economic status, and limited English proficiency status and include the results of this review in its determinations. A state with a low percentage of non-white families should not escape identification as needing improvement just because its data on white, English-speaking families “look good,” if the data from non-white families are poor.

    We also agree with another commenter that OSEP should work to establish minimum criteria for data collection, e.g., adequate sampling plans and minimum response rates.

    SPAN also agrees with the NDRN that child find should be included in the monitoring process so that it remains an area of focus. It is important that babies and toddlers – including babies and toddlers of color, from immigrant and LEP families, and of low socio-economic status – be identified as early as possible to maximize their potential benefits from EI. While we understand that there are critical barriers to “child find” in many states (provider shortages, statewide hiring freezes, funding issues, eligibility criteria, etc.), state performance in this area must continue to be a priority.

    We note that the ECTA Center recommends exclusion of natural environments data, in part because in many states more than 90% of families are receiving services in “natural environments,” and in part because of state barriers like those noted above. SPAN is concerned about whether very young children are receiving their EI services in settings where typically developing children are found, such as child care centers. For this reason, and to avoid states backsliding on natural environments, we believe natural environments should be included as a determination factor. We also encourage the Department to encourage states to collect, analyze, and report on data that do not lump home-based services in with community/child care-based services, but rather to report natural environments data in both of these general categories. Families need the assistance of early intervention to help them make their communities, and the services, like child care, that should be available to all families including those who have young children with disabilities, more accessible to them.

    We look forward to your careful and thoughtful review of our comments and recommendations, and thank you again for this opportunity to provide feedback on the ECTA Center recommendations on Part C determinations.

  9. NDRN is pleased to support the Results Driven Accountability (RDA) process for Parts B and C, while acknowledging that there continue to be areas for improvement within these processes. We have reviewed the Early Childhood Technical Assistance Center (ECTA) proposal regarding the use of outcomes data for IDEA Part C Determinations and have these thoughts to share about it.

    First, we greatly appreciate that the proposal weights the quality of data submitted by the states to OSEP at a relatively high level within the determinations process. For too long, some states have submitted poor quality data with limited incentive for improvement, and it is impossible to effectively improve a system without accurate data. We do question why ECTA was apparently instructed that only a small number of states should populate the lowest tier, and was forced to work backward to create a cut point that would achieve that goal. It seems more logical to set a threshold at a reasonably achievable level of data accuracy and reliability (for example, a cut score of 34% for missing data), and then make state determinations in keeping with those levels. The sketch below illustrates the difference.
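    A minimal sketch of the two approaches follows; the state-by-state missing-data percentages are invented for illustration, and only the 34% cut score comes from the example in the paragraph above:

    ```python
    # Two ways to decide which states land in the lowest data quality tier.
    # The percentages below are hypothetical; 34% is the cut score suggested
    # in the paragraph above.
    missing_pct = [5, 8, 12, 15, 18, 22, 25, 30, 36, 41, 52]  # one per state

    # Working backward from a desired tier size (the approach questioned here):
    lowest_tier_size = 2
    backward_cut = sorted(missing_pct)[-lowest_tier_size]
    lowest_by_rank = [p for p in missing_pct if p >= backward_cut]

    # Fixing an achievable threshold first (the approach NDRN suggests):
    threshold = 34
    lowest_by_threshold = [p for p in missing_pct if p > threshold]

    print(lowest_by_rank)       # [41, 52] -> always exactly two states
    print(lowest_by_threshold)  # [36, 41, 52] -> however many miss the standard
    ```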

    There are two other sources of outcome data that should be considered in the determination process: Family Outcomes and Child Find. Both are critical to the process of evaluating Part C programming.

    With regard to the “Family Outcomes” measure: of all of the OSEP-funded programs, there is none more closely linked to family involvement than Part C. In fact, the plan produced, the “IFSP,” intentionally includes the word “family.” It is hard to imagine why the effectiveness and efficiency of the system would not be measured using this particular data element. If the data on this topic are currently not helpful, it is important to improve the collection methodology so that they are.

    With regard to the “Child Find” measure, the reasons stated below for not including this data are precisely the reasons why we should include it. The “Child Find” requirement would not have been included in the statute, and would not have remained in the monitoring process for these many years, if it were easy to achieve. It is both important and difficult, and for those reasons it should continue to be included in the monitoring process to ensure that it remains an area of focus. As with the Family Outcomes measure, while Child Find is important for all children, research has shown that it is critical for babies and toddlers to receive services as quickly as possible.

    (“Child Find efforts are strongly influenced by factors that impact state capacity to identify and serve children:
    • Provider shortages, statewide hiring freezes, funding issues, etc.
    • Changes made to ‘narrow’ eligibility criteria due to reduced fiscal resources”)

    Of all possible data elements, “Child Find” must be included in the Part C determination process.

    We are willing to forgo inclusion of Natural Environments (NE) data for now but are compelled to note that some of the reasons the ECTA Center gives for not including this data are similar to our comments about the Child Find analysis above. (“Some states continue to struggle with implementing NE in rural areas, provider shortages, driving distances, reimbursement rates for providers, etc.”) These are reasons why we should include this particular measure, rather than exclude it. If it were easy, it would not be necessary to include it in the determination process.

    Thank you for this opportunity to provide feedback.

  10. I support the Department's Results Driven Accountability initiative and the use of results in its determinations process. I offer the following recommendations for your consideration.

    1. OSEP should use family indicators (and/or outcomes) in its Part C determinations.
    In establishing the Part C (early intervention) program in 1986, Congress recognized “an urgent and substantial need” to “enhance the capacity of families to meet their child’s needs.”

    There are a number of federal programs that use “family data” in processes like determinations. The U.S. Department of Health and Human Services, for example, uses a number of such measures for its Maternal and Child Health Title V Block Grant program. All based on a survey, they include measures of the extent to which families partner in decision making at all levels, are satisfied with the services they receive, and report that community-based service systems are organized so they can use them easily. Similarly, the National Committee for Quality Assurance and the Centers for Medicare and Medicaid Services use various reported perceptions of clients in evaluating both private and public sector health plans.

    As ECTA notes in its proposal, there is state-to-state variation in the way the data is collected. Given that, OSEP should work to establish minimum criteria for data collection, e.g., surveying families in services for a minimum number of months, requiring adequate sampling plans and establishing minimum response rates. If this cannot be done in the short term, consider the measure as optional/voluntary (for a period of time) for states that currently meet such criteria, and have it account for some percentage of the overall total. Finally, ECTA also states that there is not much variation in what states are reporting on the family data, and thus, it would be difficult to distinguish states. The ability (or inability) of a measure to distinguish the performance of states should not necessarily disqualify it as a meaningful and credible measure, especially if it is fundamental to the program.

    2. OSEP should weight compliance indicators at 60 percent of the total and results at 40 percent in the 2015 determinations, and increase the weight of results by five percentage points each year thereafter until results are weighted at 60 percent (a worked schedule appears after this list). OSEP has placed a very strong emphasis on compliance in the past several years, and states have made considerable efforts to improve compliance, sometimes at the expense of focusing on results. This phased-in approach will allow states the time to identify and implement strategies to improve child outcomes.

    3. I support the proposal to give equal weight to data quality and child outcome results. I also think it is reasonable to consider performance on outcomes only if minimum criteria for data quality are met.

    4. OSEP should consider alternative methods of comparing state-to-state performance. The use of percentiles means some ranking of states, even if only in broad groupings. A more meaningful comparison is with a state's own past performance, and perhaps with targets if they are set more rigorously. We also know that there is considerable variation in states' eligibility criteria. It may be that the two different summary statements would “balance each other out” across states with different eligibility criteria. However, for the sake of transparency and establishing credible expectations among families, states, and broader stakeholder groups, the impact of eligibility criteria should be further explored and made public. In the longer term, more work should be done to address the issue of a reasonable ceiling for each of the summary statements; that is, at what point would we no longer expect significant increases in performance?
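    The phase-in proposed in recommendation 2 is simple to tabulate. The sketch below just works out the year-by-year weights implied by that recommendation; the end year follows from the arithmetic, not from the proposal:

    ```python
    # Year-by-year weights under recommendation 2: start at 60% compliance /
    # 40% results in 2015 and shift five percentage points per year until
    # results reach 60%.
    compliance, results = 60, 40
    year = 2015
    while True:
        print(year, f"compliance {compliance}%, results {results}%")
        if results == 60:
            break
        compliance, results = compliance - 5, results + 5
        year += 1
    # 2015: 60/40, 2016: 55/45, 2017: 50/50, 2018: 45/55, 2019: 40/60
    ```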

    Thank you for the opportunity to submit comments and for your consideration of these recommendations.

  11. I urge OSEP not to compare states to each other, because states set their own eligibility criteria. Additionally, I would urge OSEP to consider using family outcomes when measuring progress.

  12. CHILD FIND
    Child find must be evaluated both for the resources used and for the different ways child find is conducted. To provide early intervention for at-risk babies, they must be identified and referred not only from populations already exhibiting delays but also from populations with substance exposure and adverse family backgrounds, so that babies and toddlers with emerging social-emotional and developmental issues are picked up at an earlier age than preschool or kindergarten.

  13. Improving outcomes for infants, toddlers, children, and youth with disabilities and their families is essential. As such, moving from compliance to results is an important next step. The continued underemployment of adults with disabilities is embarrassing, and we must ensure that all educators, not just those who specialize in students with disabilities, are ready, willing, and able to support students with disabilities to have enviable and productive lives.

    However, the lack of adequate federal funding for both Part C and Part B is of immense concern. Additional data collection, analysis, and data-based decision-making require additional funds. If states use one-time infusions of state funds to upgrade their data systems or to make system enhancements to support professional development around child and family outcomes, will these funds be considered part of their Maintenance of Effort?

    With regard to this specific proposal:
    The use of results driven data for determinations should be phased in, and should only require states to compare their progress against their own data for at least the first two years, as target setting has been a locally driven process, and states will need time to implement improvement strategies to ensure improved data quality. Additionally, states which show progress in their data quality should also be recognized for these efforts.

    With regard to child achievement, it seems that more research is necessary to show what percentage of children should be improving and at what rate(s). Comparing broad-eligibility states to narrow-eligibility states makes no sense, as one should expect that narrow-eligibility states will have a higher percentage of children who enter with more significant delays and who exit with progress, but not with significant percentages for summary statement 2. Additionally, since states can self-select the instruments used to determine a child's rating and progress over time, and professionals report that this is still not a process they are comfortable doing with families, comparing one state's data to another's seems similarly difficult. OSEP should assist states in identifying methods to support improved determinations and progress calculations, but for the first two years it should compare states to themselves or return to the former practice of comparing states by eligibility definition.

  14. Disability Rights New Jersey is the designated protection and advocacy organization for people with disabilities in the State of New Jersey pursuant to the Developmental Disabilities Assistance and Bill of Rights Act, 42 U.S.C. §§ 15041 to 15045; the Protection and Advocacy for Individuals with Mental Illness Act, 42 U.S.C. §§ 10801 to 10851; and the Rehabilitation Act, 29 U.S.C. § 794e. DRNJ’s Board of Directors consists of a majority of people with disabilities and family members of people with disabilities. The Board has consistently identified the area of equal educational opportunity as a major goal of the disability community.

    DRNJ appreciates the opportunity to comment on the Department’s change toward using child outcome data to make Part C state determinations. The shift from looking only at procedural compliance data to using both child outcome and compliance data is an important step toward holding states accountable for the quality of their programs. While ensuring compliance with procedural matters is essential, the child outcomes are what matter most to families.

    As the Department moves forward, it is essential that it consider the overall growth of child outcomes over time within each state. In addition, there should be consideration of more than just exit data. Looking at child exit data is important in determining long-term gains in child outcomes, but other data points are equally important. The Department should also look at annual reporting data for children who are not yet leaving the system.

    Depending upon each state’s data collection process and eligibility definition, it may be difficult to measure states against each other in a meaningful way. It may be helpful to states and families for the Department to provide a comparison among states with similar eligibility criteria. This may be a more meaningful comparison than comparing all states to each other.

  15. In states that have narrow eligibility criteria, it would seem that the children receiving early intervention services would make less progress compared to children with less severe delays in other states. If data aren't generally collected on children who have been in early intervention for 6 months or less, shouldn't those children be excluded from the percentage of exit indicators that have been completed? Additionally, subjective rating scales are just that: subjective. Wouldn't having empirical data to compare at entry and exit from early intervention be more objective and measurable than a subjective rating indicator scale?

  16. Tennessee reviewed the ECTA Center's recommendations for using early childhood outcomes data for IDEA Part C Determinations and submits one comment for consideration, relative to 1. Data Quality: Missing Data. The ECTA Center recommends using 618 Exiting data as the basis for determining missing data. Clarification is requested on whether this recommendation takes into consideration the removal of:
    1. Children who have less than six-months of service in the program
    2. The following 618 exiting reasons where ECO exiting scores may not have been obtained:
    a. Deceased
    b. Moved out of state
    c. Attempts of contact unsuccessful
    d. And in some cases, withdrawal by parent or guardian

    Tennessee also would not recommend using family outcomes data as part of IDEA determinations. States use various survey tools, methods of collection, calculation standards, and analysis methods. It would be very difficult to assign a value to a state for comparison purposes, as is proposed with early childhood outcomes.

  17. Louisiana Part C: Comments on IDEA Part C Results Determinations Process
    Thank you for the opportunity to comment on the proposed process for including results data in State Determinations. We very much support a new process that reflects the support provided to children and families and demonstrates improved child outcomes, as intended in the IDEA 2004 legislation. Louisiana supports the recommendations submitted by the Infant Toddler Coordinators Association (ITCA), which collaborated with representatives from 37 states in preparing its comments and recommendations. In addition, we would like to recommend the following:
    1. Comment: Terminology. This may seem minor, but the descriptions of the sections for the components send the wrong message through their “labels.” For example, section 1 is called data quality but is titled “missing data.” As measures of quality, these labels should describe what is being measured and expected, not what is missing or lacking.

    Recommendation: Rename the sections:
    • 1. Available child exit scores
    • 2. Child assignment to progress categories

    2. Louisiana does not agree that the number of children for whom exit scores are available is a measure of quality. For our state, it measures only what is stated: children whose families chose for them to participate in the measurement process. We use a standard assessment as our measure, so the results are not based on a team review. If a family elects not to participate in the assessment, then that child will not be included in the scoring. We agree that this could be included in the results determination as described by others. We also support the ITCA recommendations for this measure, especially for allowing family reasons to remove children from the denominator.

    Recommendation: Rename the measure as recommended above, remove the Data Quality “label,” and allow use of Family Reasons to exclude children from the count.

    3. Comment/recommendation: Consider other options for how the measures are determined, specifically related to the scoring. Despite our working very closely with ITCA and its committees, we found it very difficult to use the descriptions/instructions provided to apply our state results to the process. If it is complex for a state to calculate even as an example, it will be very difficult to explain the process to our leadership and stakeholders, who do not have the same level of understanding of our data. Items 1 and 2 are simple, but the measurement gets more complex after that.
    Recommendation: Take some time to rethink the scoring and ranking process in order to simplify it.

    4. Comment/Recommendation: To add emphasis to the ITCA recommendation, Louisiana would like to stress that Item 5, Child Achievement: Change over time in summary statements, be eliminated from consideration as a results measure. This is a complex measurement (despite the use of spreadsheet calculators to determine statistical significance; a sketch of such a significance test appears after these recommendations) and difficult to explain to stakeholders. In addition, we do not agree that it measures what is intended. There is no evidence supporting the concept that change over time in aggregated state results demonstrates improved child outcomes, especially since the same cohort of children may not be compared across two years. If a state is already at its highest result, why would further change suggest improvement?

    5. Louisiana supports the ITCA recommendation to include Family Outcomes as a measure of quality. We understand OSEP's stated concerns about its use, but family outcomes are central to the intent of Part C. In the slides presented at this link, the Family Outcomes slides posed some questions/comments. One was that there is state-to-state variation in how the data are collected and that they represent “different populations and different timing.” The same observation applies to the child outcome data, yet states' child outcome results are being considered for results determinations.

    Recommendation: Include a measure for family outcomes as part of the results determination. A process can be developed over time to address the expressed concerns so that family results can be included, if not now, then in the future.
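    For readers unfamiliar with the calculation referenced in item 4: a two-proportion z-test is one standard way a spreadsheet might test whether a summary statement changed significantly between two years. The sketch below uses invented counts, and we do not claim it is the exact method in the proposal's calculators:

    ```python
    # Hypothetical two-proportion z-test for year-over-year change in a
    # summary statement. Counts are invented for illustration.
    from math import sqrt

    x1, n1 = 1200, 2000   # year 1: 60% of exiting children met the statement
    x2, n2 = 1300, 2000   # year 2: 65% -- but possibly a different cohort
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                        # pooled proportion
    z = (p2 - p1) / sqrt(p * (1 - p) * (1/n1 + 1/n2))
    print(round(z, 2))  # 3.27 -> "significant," yet it compares two
                        # different groups of children, as noted in item 4
    ```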

    Thank you again for the opportunity to comment on the recommendations.

  18. December 12, 2014

    Submitted electronically

    Melody Musgrove, Director
    Office of Special Education Programs
    Office of Special Education and Rehabilitative Services
    U.S. Department of Education
    400 Maryland Ave., SW
    Washington, DC 20202-7100

    RE: Comments on Proposal: “Using Child Outcomes Data for Determinations”

    Dear Ms. Musgrove:

    On behalf of our 90,000 member physical therapists, physical therapist assistants, and students of physical therapy, the American Physical Therapy Association (APTA) appreciates the opportunity to submit comments on the proposed methodology, Using Child Outcomes Data for Determinations from the Early Childhood Technical Assistance Center (ECTA).

    Physical therapists and physical therapist assistants work in Part C early intervention under IDEA, supporting families of infants and toddlers at risk for or with disabilities to promote their development, learning, and participation in family activities and routines as part of the Individualized Family Service Plan (IFSP). The Section on Pediatrics of the APTA represents pediatric physical therapists in all practice settings and has special interest groups focused specifically on early intervention and school-based practice. Accordingly, APTA has a strong interest in any policies related to IDEA regulations. We respectfully provide the following comments:

    Proposal Methodology

    APTA fully supports developing a framework to improve assessments of states' performance in measuring the effectiveness and outcomes of early intervention services. However, we have concerns about the ECTA proposal and have included a summary of our concerns below.

    General
    1. If funding of a program is based on a state’s determination score, the validity of the data could potentially be skewed. School-based service providers are already seeing this occur in school districts where the focus is on teaching students how to take a test versus teaching students the knowledge and skills they will need to pursue post-secondary education and/or employment.
    2. The current child outcome ratings are still somewhat subjective with no way to ensure reliability and validity of the data being submitted by the states.
    3. Children with a variety of medical diagnoses receive early intervention services, and many of these children have progressive conditions or will not progress at a rate equal to that of their peers without disabilities. How does this information get factored into the data? We believe the system needs a mechanism for states to report this information as a variable that will affect results, especially when the “a-e progress scales” are scored based on the degree to which children close the gap between their skills and abilities and those of their typically developing peers.
    4. We do not believe the current data reflects all the changes and positive outcomes observed and experienced by children and their families in early intervention.
    5. Although the focus of early intervention programs is on helping infants and toddlers and their families achieve outcomes, pediatric physical therapists have voiced concerns that they are spending less time providing intervention and more time on documentation and collecting data. The processes used to collect the data must not negatively influence the services infants and toddlers receive. Has OSEP considered alternative methods for collecting the data through the use of random samples and/or qualitative methodologies? We believe that further examination of the efficiency and validity of the data monitoring system is warranted.
    6. The process for identifying the states most in need of assistance is unclear. Based on the information presented, it appears that states with the lowest scores would receive assistance, but the type and amount of assistance is not specified. We are also concerned that only those with the lowest score would receive assistance.
    7. The lack of full funding for IDEA seriously impacts states’ ability to implement early intervention services for infants and toddlers at risk of or with disabilities and their families. The lack of full funding often results in states narrowing the eligibility criteria for early intervention services, further reducing access to such services.

    Data Quality

    8. The extent of missing data is one element of the data quality score; however, we believe evaluating the reasons or causes for the missing data is more important. What steps does OSEP take to examine this when making determinations?
    9. The “out of range” percentages for the “a through e progress” categories are the second component of data quality, but these percentages only reflect data quality when scores are entered incorrectly or fall outside the range of possibilities. Otherwise, these scores seem to be a measure of child achievement, not data quality.

    Child Achievement

    10. Many factors affect child achievement and unless states have similar characteristics in terms of state demographics and eligibility requirements for Part C programs, comparisons across states are not meaningful.
    11. The child achievement scores are based on the summary statements, but we question the reliability and validity of the summary statements. Because States base these decisions on different tools and processes, we are concerned that the data being used to evaluate state programs may not be reliable or valid.
    12. The change over time appears to focus only on comparing two years’ data versus reviewing trends over a longer period of time, which we believe would be more informative. If OSEP has collected this data since 2006-2007, what efforts are being made to look at trends within each state from 2006-present?
    13. There is a lack of rationale for the identified cut-offs; it appears the cut-offs were designed to identify the fewest number of states rather than to set a minimal bar for expected performance. We suggest setting a higher standard with appropriate rationale versus setting the bar at the lowest possible standard and then adjusting when states no longer fall below that level.

    Total Score

    14. The reasoning behind the total score is unclear.

    Other Outcomes and Data

    15. We suggest inclusion or consideration of factors, such as state poverty levels and Medicaid eligibility. We believe OSEP would be remiss by failing to include this information when evaluating state results.
    16. Eligibility requirements must be considered for early intervention programs when using data, as the populations receiving services vary across states. (See #3 and #9)
    17. One of the goals of early intervention programs is to enhance the capacity of families to address the needs and support the development of their infants and toddlers at risk for or with disabilities. Not including a measure of family outcomes as part of state determinations is counterintuitive and may have unforeseen consequences in hindering family-centered care.
    18. Although states report a high percentage of services in early intervention that are provided in the natural environment, we recognize definitions and interpretations vary across states. We also recognize the importance of this data and that the current data being collected is minimal. We recommend future consideration of how to capture this information in a more meaningful way.
    19. Regardless of the data collected, mechanisms must be in place to ensure the overall quality of data from states, and effort must go toward developing a system that allows more efficient, valid, and meaningful data collection without negatively impacting early intervention providers’ ability to serve infants and toddlers and their families.

    Conclusion

    APTA, including its Pediatric Section, supports the development of a results-driven accountability framework that is used to measure states’ performance results of their early intervention services. We believe a balanced approach can be taken that considers both child and family outcomes in the determinations process while, most importantly, ensuring that infants and toddlers and their families have access to the necessary early intervention services to promote their development, learning, and participation in family activities and routines and in community life.

    Physical therapy services promote the health and development of infants and toddlers at risk for or with developmental delays and disabilities. We commend the Department of Education for seeking input from stakeholders who are invested in supporting infants and toddlers and their families. APTA looks forward to working with the U.S. Department of Education’s Office of Special Education and Rehabilitative Services (OSERS) in its efforts to improve early intervention programs through Results Driven Accountability (RDA). Thank you for your consideration of our comments. If you have any questions, please contact Maria Jones, PT, PhD, Federal Affairs Liaison, Section on Pediatrics, at 405-271-2131 x46811 or maria-jones@ouhsc.edu or Deborah Crandall, J.D., Senior Regulatory Affairs Specialist, at 703-706-3177 or deborahcrandall@apta.org.

    Sincerely,

    Paul Rockar, Jr. PT, DPT, MS
    President

    PR/dc

  20. CEC Comments on IDEA Part C Results Data in Determinations

    December 12, 2014

    These comments are provided on behalf of the Council for Exceptional Children regarding the request for feedback on the proposed approach for including results data in the Part C determinations process. The Council for Exceptional Children (CEC) – an international community of educators, administrators, related service personnel, higher education faculty and researchers – is the voice and vision of special education. Our mission is to improve quality of life for individuals with exceptionalities and their families through professional excellence and advocacy.

    CEC is adopting the recommendations from CEC’s Division for Early Childhood as its official recommendations. CEC supports the Department’s Results Driven Accountability (RDA) initiative and recognizes that the determinations process is a critical component of this effort. CEC strongly supports the use of results in the determinations process and is committed to working with OSEP to shape the determinations process in a manner that will help states and local providers to move forward in improving child outcomes and family outcomes. CEC recognizes the difficulties inherent in the measurement process and supports a process that ensures the data are of high quality and used in a way that accurately reflects child achievement and positive family outcomes. CEC offers a number of recommendations to support achievement of this goal.

    CEC believes it is important to recognize that there are many factors that influence child outcome data, all of which need to be considered. These factors provide context for use of results in the determination process for states and local programs. First, states use different methods and tools for collecting and interpreting assessment data to measure and report child outcomes. Second, eligibility criteria are established by states and therefore the population of children served by Part C varies significantly from state to state. Finally, there are many social, demographic, and individual child and family circumstances that influence child development. These factors make comparison of data across states difficult.

    In addition, the length of time children and families participate in Part C programs is relatively short. We must always remember that each year the children whose progress is used to measure accountability are a different cohort from the year before. More research is needed to establish appropriate expectations for the level of progress that might be expected to be achieved by children participating in Part C, considering service frequency and intensity, as well as length of time in the program. CEC offers its assistance with this important research effort.

    CEC also notes that states and local programs continue to work on the quality of the data being reported on child outcomes. High quality data are necessary to ensure that reliable conclusions can be made regarding child outcomes. As data quality improves at the state and local level, with regard to both entry and exit assessment, annually reported child achievement data may sometimes decrease for a time before it increases.

    States and local programs are now actively engaged in completing intensive analyses on their child outcomes data as a part of the required State Systemic Improvement Plan (SSIP) process. As a part of these efforts, states and local programs are drilling down on data quality issues and designing improvement activities to respond to any issues identified. These efforts will systematically increase the quality of the data over the SSIP life span. As these efforts continue over the coming years, data will become more stable and be of higher quality.

    CEC offers the following recommendations:

    • Compliance and results data should not be weighted equally in June 2015 determinations. States and local programs have focused attention on compliance for the past six years, leading to significant improvement over time in state and local compliance. For the initial year of using Part C results data, compliance scores should be 70 percent of a state’s determination, with results data weighted at 30 percent of the determination. States and local programs will need time to identify and implement effective strategies to improve child outcomes. The use of results data should be increased by 5 percent per year over 6 years until compliance and results are weighted equally.

    • CEC supports the OSEP proposal to weight data quality and child achievement equally. It is reasonable to weight data quality and child achievement equally in June 2015.

    • “Missing” data should be determined based on a denominator supplied by the state with accompanying rationale and documentation. The 618 exiting data used in the OSEP proposal do not represent an accurate count of children for whom child outcomes exit ratings should be available. CEC recommends that states be asked to provide the denominator from which the “completeness of the data” will be calculated. States should be expected to have necessary documentation of this denominator, including adjustments made to the total number of children exiting the program. OSEP historically has allowed states to “adjust” raw data reported in compliance indicators to account for family circumstances impacting state performance. There are several factors that contribute to the availability of child outcome data on exit from the Early Intervention Program including:

    o According to OSEP directions, states are required to report outcomes data only for children who received at least six consecutive months of services in the Part C system. States should be permitted to subtract documented examples of children who did not receive services from the Part C program for at least 6 months.
    o States experience a variety of exceptional family circumstances that impact the number of children for whom exit ratings are available. States should be permitted to subtract documented examples of exceptional family circumstances from the denominator as well. This is appropriate and consistent with family-centered practices.

    • States’ performance data in child outcomes should be considered in determinations even if the state’s data quality does not meet all the standards for data quality. According to the OSEP proposal, a state’s child achievement data would only be considered if the state scored at least a 2 out of a possible 4 in the data quality component. CEC recommends that a state’s child achievement data be used in the calculation of its determination regardless of its score in the data quality calculation. CEC believes it is the responsibility of state and local programs to ensure high quality data are reported.

    • A different method for comparing state-to-state child achievement data should be developed over the next several years by OSEP through the federally-funded contractors with active stakeholder involvement. CEC is concerned about the current proposal’s use of percentiles to compare state-to-state performance on child achievement and believes the use of percentages when comparing state-to-state performance should be considered. CEC recommends that while states and local programs continue to work on enhancing data quality, OSEP work with its contractors to consider alternate methods using actual state data as it is reported over the next several years. Stakeholder input is necessary for this process as it proceeds and CEC is happy to assist in this effort.

    • For the first two years, use only each state’s performance compared with the state’s targets as the measure of child achievement and exclude the “change over time in summary statements” measure. Nationally, state and local current child achievement data are not stable enough to be used for comparing states to each other or to the state’s own performance over time. Using a concept of statistically significant change will not be valid until the data have more stability. As states and local programs work to increase the accuracy of their child outcomes measurement process, performance is likely to fluctuate.

    CEC is concerned, however, that targets may not be rigorous enough given current federal requirements related to target setting. CEC recommends that additional criteria be established by OSEP for quantifiable expectations for year-to-year targets. If there are more stringent requirements placed on target setting, CEC recommends that until the data are more stable, at least in the first few years, a state be compared against its state targets.

    Regardless of what decision is made by OSEP for 2015, going forward, it will be important to establish evidence-based expectations for the percent of children who will demonstrate substantial progress or achieve age-typical development as a result of participating in Part C programs. Part C programs provide services to children with a wide range of disabilities (and in some states risk factors) with impact on development and functioning ranging from mild to severe. Logically, there will be a “ceiling” which will be reached in states for both summary statements. Perhaps a percentage threshold should be set, after which increases would not be expected.

    • A methodology for considering family outcomes should be developed and incorporated into the Part C determinations process. CEC believes that both child outcomes and family outcomes should be considered in the Part C determinations process, consistent with the Part C statute “to enhance the capacity of families to meet the special needs of their infants and toddlers with disabilities.” There is strong and unequivocal research demonstrating the relationship between families’ self-perceptions, knowledge, skills, and supports, and the developmental progress of their children. Improved outcomes for families are important to ensuring the child’s continued positive outcomes over time. CEC recognizes this will not occur in time for the June 2015 process, but we request that work begin now on determining an accurate way of accomplishing this in the next several years. CEC is happy to assist in this effort as well.

    Thank you for the opportunity to submit comments. As always, CEC is available and willing to provide any additional information or clarification that may be needed. If we can be of further assistance, please contact me at debz@cec.sped.org.

    Sincerely,

    Deborah A. Ziegler, Ed.D.
    Director

    cc: Melody Musgrove, OSEP
    Ruth Ryder, OSEP
    Gregg Corr, OSEP
    Larry Wexler, OSEP

  21. DEC Comments on IDEA Part C Results Data in Determinations

    December 12, 2014

    These comments are provided on behalf of the Division for Early Childhood (DEC) of the Council for Exceptional Children regarding the request for feedback on the proposed approach for including results data in the Part C determinations process. DEC is a professional membership organization whose mission is to promote policies and advance evidence-based practices that support families and enhance the optimal development of young children (0-8) who have or are at risk for developmental delays and disabilities.

    DEC supports the Department’s Results Driven Accountability (RDA) initiative and recognizes that the determinations process is a critical component of this effort. DEC strongly supports the use of results in the determinations process and is committed to working with OSEP to shape the determinations process in a manner that will help states and local providers move forward in improving child outcomes and family outcomes. DEC recognizes the difficulties inherent in the measurement process and supports a process that ensures the data are of high quality and used in a way that accurately reflects child achievement and positive family outcomes. DEC offers a number of recommendations to support achievement of this goal.

    DEC believes it is important to recognize that there are many factors that influence child outcome data, all of which need to be considered. These factors provide context for use of results in the determination process for states and local programs. First, states use different methods and tools for collecting and interpreting assessment data to measure and report child outcomes. Second, eligibility criteria are established by states and therefore the population of children served by Part C varies significantly from state to state. Finally, there are many social, demographic, and individual child and family circumstances that influence child development. These factors make comparison of data across states difficult.

    In addition, the length of time children and families participate in Part C programs is relatively short. We must always remember that each year the cohort of children whose progress is used to measure accountability is different from the year before. More research is needed to establish appropriate expectations for the level of progress that might be expected to be achieved by children participating in Part C, considering service frequency and intensity, as well as length of time in the program. DEC offers its assistance with this important research effort.

    DEC also notes that states and local programs continue to work on the quality of the data being reported on child outcomes. High quality data are necessary to ensure that reliable conclusions can be made regarding child outcomes. As data quality improves at the state and local level, with regard to both entry and exit assessment, annually reported child achievement data may sometimes decrease for a time before it increases.

    States and local programs are now actively engaged in completing intensive analyses on their child outcomes data as a part of the required State Systemic Improvement Plan (SSIP) process. As a part of these efforts, states and local programs are drilling down on data quality issues and designing improvement activities to respond to any issues identified. These efforts will systematically increase the quality of the data over the SSIP life span. As these efforts continue over the coming years, data will become more stable and be of higher quality.

    DEC offers the following recommendations:

    • Compliance and results data should not be weighted equally in June 2015 determinations. States and local programs have focused attention on compliance for the past six years, leading to significant improvement over time in state and local compliance. For the initial year of using Part C results data, compliance scores should be 70 percent of a state’s determination, with results data weighted at 30 percent of the determination. States and local programs will need time to identify and implement effective strategies to improve child outcomes. The use of results data should be increased by 5 percent per year over 6 years until compliance and results are weighted equally.

    • DEC supports the OSEP proposal to weight data quality and child achievement equally. It is reasonable to weight data quality and child achievement equally in June 2015.

    • “Missing” data should be determined based on a denominator supplied by the state with accompanying rationale and documentation. The 618 exiting data used in the OSEP proposal do not represent an accurate count of children for whom child outcomes exit ratings should be available. DEC recommends that states be asked to provide the denominator from which the “completeness of the data” will be calculated. States should be expected to have necessary documentation of this denominator, including adjustments made to the total number of children exiting the program. OSEP historically has allowed states to “adjust” raw data reported in compliance indicators to account for family circumstances impacting state performance. There are several factors that contribute to the availability of child outcome data on exit from the Early Intervention Program including:

    o According to OSEP directions, states are required to report outcomes data only for children who received at least six consecutive months of services in the Part C system. States should be permitted to subtract documented examples of children who did not receive services from the Part C program for at least 6 months.
    o States experience a variety of exceptional family circumstances that impact the number of children for whom exit ratings are available. States should be permitted to subtract documented examples of exceptional family circumstances from the denominator as well. This is appropriate and consistent with family-centered practices.

    • States’ performance data in child outcomes should be considered in determinations even if the state’s data quality does not meet all the standards for data quality. According to the OSEP proposal, a state’s child achievement data would only be considered if the state scored at least a 2 out of a possible 4 in the data quality component. DEC recommends that a state’s child achievement data be used in the calculation of its determination regardless of its score in the data quality calculation. DEC believes it is the responsibility of state and local programs to ensure high quality data are reported.

    • A different method for comparing state-to-state child achievement data should be developed over the next several years by OSEP through the federally-funded contractors with active stakeholder involvement. DEC is concerned about the current proposal’s use of percentiles to compare state-to-state performance on child achievement and believes the use of percentages when comparing state-to-state performance should be considered. DEC recommends that while states and local programs continue to work on enhancing data quality, OSEP work with its contractors to consider alternate methods using actual state data as it is reported over the next several years. Stakeholder input is necessary for this process as it proceeds and DEC is happy to assist in this effort.

    • For the first two years, use only each state’s performance compared with the state’s targets as the measure of child achievement and exclude the “change over time in summary statements” measure. Nationally, state and local current child achievement data are not stable enough to be used for comparing states to each other or to the state’s own performance over time. Using a concept of statistically significant change will not be valid until the data have more stability. As states and local programs work to increase the accuracy of their child outcomes measurement process, performance is likely to fluctuate.

    DEC is concerned, however, that targets may not be rigorous enough given current federal requirements related to target setting. DEC recommends that additional criteria be established by OSEP for quantifiable expectations for year-to-year targets. If there are more stringent requirements placed on target setting, DEC recommends that until the data are more stable, at least in the first few years, a state be compared against its state targets.

    Regardless of what decision is made by OSEP for 2015, going forward, it will be important to establish evidence-based expectations for the percent of children who will demonstrate substantial progress or achieve age-typical development as a result of participating in Part C programs. Part C programs provide services to children with a wide range of disabilities (and in some states risk factors) with impact on development and functioning ranging from mild to severe. Logically, there will be a “ceiling” which will be reached in states for both summary statements. Perhaps a percentage threshold should be set, after which increases would not be expected.

    • A methodology for considering family outcomes should be developed and incorporated into the Part C determinations process. DEC believes that both child outcomes and family outcomes should be considered in the Part C determinations process, consistent with the Part C statute “to enhance the capacity of families to meet the special needs of their infants and toddlers with disabilities.” There is strong and unequivocal research demonstrating the relationship between families’ self-perceptions, knowledge, skills, and supports, and the developmental progress of their children. Improved outcomes for families are important to ensuring the child’s continued positive outcomes over time. DEC recognizes this will not occur in time for the June 2015 process, but we request that work begin now on determining an accurate way of accomplishing this in the next several years. DEC is happy to assist in this effort as well.

    Thank you for the opportunity to submit comments. As always, DEC is available and willing to provide any additional information or clarification that may be needed. If we can be of further assistance, please contact us via Sharon Walsh, DEC Governmental Relations Consultant, at walshtaylo@aol.com.

    Sincerely,

    Leah Weiner, Ed.D.
    DEC Executive Director

  22. Hello,

    I understand the pressures of GPRA-Mod but could we focus on the quality of both the child AND family outcomes data? The return rate for the family surveys (Indicator 4) is just as important as the participation rate for child outcome data (Indicator 3).

    Since the national data are still trending down for Ind. 3, the weighting of results data should be limited until a true baseline is achieved. Making determinations using these data may not change the slope of data that is still settling in.

    I am very concerned that there are states without adequate support to even identify all potentially eligible children and support their families. Results taken out of context are misleading. If a state serving only 1% of children under 3 meets requirements because it has high-quality child outcome data and positive change over time, while another state serving 3% needs assistance because it has poor child outcome data, are we not saying that it doesn’t matter whether you beat the bushes to find children with delays early, as long as your ratings at exit are high? That is a bad message.

    I believe the many partners at the TA centers could help develop a matrix using data from Inds. 2 (Settings), 3 (Child Outcomes), 4 (Family Outcomes), as well as 5 and 6 (Child Count). States could each be rated on their distance from the national mean: more than 1.5 StDev below = 0 points for each, within 1.5 StDev = 1 point for each, and more than 2 StDev above = 2 points for each.
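
    As a rough sketch only, here is how that point matrix might be computed in Python for a single indicator; the national statistics and state values are invented, and the band between +1.5 and +2 standard deviations above the mean, which the comment does not address, defaults to 1 point:

        # Hypothetical sketch of the standard-deviation point matrix
        # proposed above; national statistics and state values are invented.
        NATIONAL_MEAN, NATIONAL_SD = 55.0, 8.0  # one indicator, e.g. Ind. 3

        def band_points(value, national_mean, national_sd):
            # Bands as proposed: >1.5 SD below = 0; within 1.5 SD = 1;
            # >2 SD above = 2. The +1.5 to +2 SD band is unspecified in
            # the proposal and defaults to 1 point here.
            z = (value - national_mean) / national_sd
            if z < -1.5:
                return 0
            if z > 2:
                return 2
            return 1

        states = {"A": 40.0, "B": 52.0, "C": 58.0, "D": 67.0, "E": 72.0}
        print({s: band_points(v, NATIONAL_MEAN, NATIONAL_SD)
               for s, v in states.items()})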

    The states that truly Need Assistance ($$) and Intervention ($$) are the ones providing center-based child-focused therapy to fewer than 2% of the children under 3.

  23. The purpose of Part C is to serve both the child and the family; therefore, it is recommended that child outcomes and family outcomes be considered in the determinations process. Positive family outcomes should be considered equally in the determinations process, as the family is the main recipient of Part C services.

  24. The population of children served in Part C varies among the states due to differing eligibility criteria. Also, states use different tools and methods for collecting data on child outcomes. These two variables make comparing current child outcome data across states extremely difficult and unreasonable. Given the instability of current child outcome data, comparing current data across states could misrepresent a state’s performance relative to national data.

  25. I agree there is a need for a balance between compliance and results in the determinations process; however, it is critical to ensure high-quality, accurate data exist in order to truly measure the performance of children with disabilities. States’ Part C programs are working hard to improve data quality, but we are not yet at a place where child outcome data can be compared across states, or even within our state.

    Using child outcomes to measure results for FFY 2013-2019 seems premature when most states cite concerns with data quality and valid measurement of child outcomes as the root cause for selecting child outcomes as the State-Identified Measurable Result (SIMR) in the Part C State Systemic Improvement Plan (SSIP)!

  26. 1. These indicators are important areas to measure and appropriate to be used for accountability purposes.

    2. I would like to address the issue of assessment of deaf and hard of hearing infants and toddlers. Before the establishment of newborn hearing screening, the average age of identification of a deaf child was two-and-a-half. Because of newborn hearing screening, the average age has dropped dramatically, and today more than 5000 deaf and hard of hearing children are being identified annually through newborn hearing screening systems (CDC http://www.cdc.gov/ncbddd/hearingloss/2012-data/2012_ehdi_hsfs_summary_b.pdf). However, not all early intervention programs have the personnel and resources to properly serve them and their families. As required in IDEA, OSEP should ensure that

    • in assessing deaf and hard of hearing infants and toddlers, states use assessment tools that are valid and reliable for this population; and

    • evaluators and assessors of deaf and hard of hearing infants and toddlers are qualified to perform those evaluations and assessments.

    3. The rate of provision of services in “natural environments” would not be an appropriate outcome measure, as the decision as to where to provide services is not an outcome. As required by IDEA, the setting in which services are provided should be determined by the IFSP team. Specialized settings, such as center-based programs, are appropriate for many infants and toddlers and their families. Monitoring should focus on whether children and families are receiving appropriate services and children are progressing at the appropriate rate. For deaf and hard of hearing children, language acquisition is paramount. Placement in settings that support language acquisition and development should be encouraged, even if they do not fall within the legal definition of natural environment.

    Thank you for the opportunity to comment.

  27. We strive to help the family and not just the child. Developing a method to consider family outcomes as well as child outcomes is something that should be considered in the next several years and incorporated into the Part C determination process.

  28. It seems that the federal government, before changing rules on IDEA special education accountability, should actually be providing the full 42% agreed upon during the initial implementation of IDEA.

    The plan seems to be a cookie-cutter expectation that ignores that all children are not the same. It also concerns me that it will become a list of mandates that adds to the challenge for teachers, the group whose positions are already hardest to fill.

    • Please consider a matrix that not only looks at child outcomes but that factors in the percent of children being served.

      Data collection about child outcomes does no good for children not even being identified.

  29. I totally agree with Heather’s comments above. We spend too much money on designing tests, outcome criteria, and more work for these programs and teachers. It is about time that the people proposing and writing these outcome criteria and tests spend a month working with these children to see if they can achieve the goals they are setting for educators to meet. I highly doubt that any of them could meet their own goals, yet they are the first to write them and mandate that others meet them. Educators are already under enough stress and frustration meeting the CCSS, state standards, and the measurable guidelines already in place for infants and toddlers. It just seems like the money is being spent to keep jobs instead of being used directly in the classrooms where the front lines are. I would bet that a group of educators could design a curriculum that included goals, mastery limits, achievement scores, and a test for each grade level and have them met, because they would be realistic. The extra millions of dollars could go back into our schools.

  30. In general, I agree with examination of the quality of data as well as outcomes. However, one area of concern for data collection is transition from early intervention (EI) to preschool, particularly as it relates to Least Restrictive Environment (LRE). Families exiting EI aren’t yet aware of their child’s rights and the responsibilities of the districts. Parents are told about preschool disabled programs, not inclusive settings. The NJEIS statement on Natural Environments stated that once children are in segregated settings, even as early as EI, they tend to remain so throughout their adult lives. Families need to know that the first option should be the home school and that alternate placement only occurs if supports and services are unsuccessful. Upon exiting EI, families should be made aware of federally designated Parent Training and Information Centers that provide free technical assistance to help families navigate the educational system following early intervention.

  31. 1. Special Education needs resources and funding. It has been cut drastically.

    2. Early intervention is key here and also needs to be funded and resourced. It has been cut drastically.

    3. The federal government needs to leave it up to the states how to help their special needs populations. Each state is unique in the makeup of its special needs populations and should be given the funding to meet those needs.

    4. The ability to modify a child’s work/ASSESSMENTS in school SHOULD NOT BE CHANGED OR AMENDED. Teachers and states need to have the flexibility to meet the diverse needs of their special education populations.

  32. It seems important that, for the first two years, only each state’s performance compared with the state’s targets be used. Maybe also consider excluding the “change over time in summary statements” measure.

  33. ITCA Comments and Recommendations on IDEA Part C Results Data in Determinations
    December 12, 2014

    These comments are being submitted on behalf of the IDEA Infant & Toddler Coordinators Association (ITCA) regarding the request for feedback on the proposed approach for including results data in the Part C determinations process. The ITCA represents member states and other jurisdictions implementing Part C of the Individuals with Disabilities Education Act (IDEA) for infants and toddlers with and at risk for developmental delays and disabilities and their families.

    Since learning that OSEP would be including child outcome data in the determinations process, one of ITCA’s highest priorities has been to work with members to prepare recommendations to OSEP. In total, 74 percent of the membership (37 states) participated and/or provided input on these recommendations through the following activities:

    • Devoted a substantial part of our annual meeting to discussing this issue with members;
    • Had representatives in attendance at the focused meeting convened by OSEP in September, and transmitted notes on OSEP’s proposed approach to all members for their information and to request their input on key issues and recommendations;
    • Conducted several polls of state members to ask questions specifically related to the proposal, particularly regarding children who are not in Part C for at least 6 months;
    • Convened a work group of the Data Committee to assist with preparation of ITCA comments and recommendations and to model the proposed approach using their states’ data to assess its impact on states;
    • Convened multiple joint meetings with the data and legislative committees of ITCA that included representatives of 24 states;
    • Engaged the ITCA board in three meetings on this issue;
    • Surveyed member states to determine their support and receive comments on ITCA’s proposed recommendation on this proposal;
    • Shared proposed recommendations with partner associations and others; and
    • Finalized the attached ITCA letter of recommendations, which have been approved by the Board and strongly endorsed by an overwhelming majority of ITCA members who participated in this process.

    ITCA recognizes and supports state determinations as an important part of OSEP’s Results Driven Accountability (RDA) process for Part C. ITCA agrees that increasing efforts to focus on improved results for young children and their families, thereby facilitating a better balance between compliance and results as envisioned in IDEA 2004, is wise. ITCA strongly supports the use of results in the determinations process and is committed to working with OSEP to shape the determinations process in a manner that will help states move forward in improving child outcomes, while recognizing the difficulties inherent in the measurement process and ensuring a process that is equitable to all states.

    Ensuring the data are of high quality and used in a way that accurately reflects child performance is critical. ITCA must note that no additional federal funds were provided for this very costly investment in a child outcomes measurement process, and states have had varying resources with which to work on data quality, as discussed further below. ITCA offers a number of recommendations to support achievement of this goal.

    First, we offer a few observations to support our recommendations:

    ITCA believes it is important to recognize that there are many factors that influence child outcome data which need to be considered and provide the context for use of results in the determination process for states. First, states use different methods, as approved by OSEP, for collecting and interpreting assessment data to measure and report child outcomes, which make comparison of data across states difficult. Eligibility criteria are established by states within the federally required framework, and therefore the population of children served by Part C is heterogeneous. There are many social, demographic, and individual child and family circumstances that influence child development, and the length of time children and families participate in Part C programs is relatively short. The evidence base for developmental progress which can be made by children in Part C programs is growing and Part C programs are pleased to be in a leadership position to have and contribute to child outcomes data. At the same time, more evidence-based knowledge and experience is needed to establish expectations for the level of progress that can be achieved by children participating in Part C programs.

    Since baseline data for Indicator 3 child outcomes were reported in each state’s APR five years ago, state Part C systems have been working to improve data quality to ensure that reliable conclusions can be made regarding child outcomes. These ongoing improvement activities also help explain why a state’s data may not be in line with the national data and may continue to change as the improvement activities are implemented. These changes will affect the results of an individual state as well as the national data and therefore should not be used for such high-stakes purposes. State efforts include:

    • Increasing the number of children whose exit scores are available (completeness of the data);
    • Providing training and technical assistance to improve the accuracy of the measurement of child achievement;
    • Making adjustments, with input from stakeholders, in their approaches to measuring child outcomes, including changes in assessment approaches, tools, procedures and business rules used to derive progress categories for reporting purposes; and
    • Engaging in data analysis activities at the state and local level to enhance data quality and to use the data to inform strategies to improve results for infants and toddlers.

    States are now actively engaged in completing intensive analyses on their child outcomes data as a part of the required State Systemic Improvement Plan (SSIP) process. As a part of these efforts, states are drilling down on data quality issues and designing improvement activities to respond to any data quality issues identified. These efforts will systematically increase the quality of the data over the SSIP life span. As these efforts continue over the coming years, data will become more stable and be of higher quality.

    States have had varying resources to work on data quality. At one point OSEP had competitive grants for enhancements to data systems. In addition, the focus on early childhood data systems has redirected data efforts beyond IDEA. We note this because states have limited resources with which to improve the quality of child outcome data, and states would benefit from opportunities to compete for grants to improve child outcome data and results in the future.

    ITCA offers the following recommendations:

    1. Compliance and results data should not be weighted equally in June 2015 determinations.
    States have necessarily focused on the compliance indicators for the past six years, leading to significant improvement over time in state compliance. For the initial year of using Part C results data, compliance scores should be 70 percent of a state’s determination, with results data weighted at 30 percent of the determination. States will similarly need time to identify and implement effective strategies to improve child outcomes. The use of results data should be increased by 5 percent per year over 6 years until compliance and results are weighted equally. This phase-in period also will allow additional and necessary time for states’ data to become more stable and of higher quality.

    2. Data quality and child achievement data should not be weighted equally in June 2015 determinations.
    For the initial year of using Part C results data, data quality should be 70 percent of the results calculation, with child achievement weighted at 30 percent. The use of child achievement data should be increased by 5 percent per year over 6 years until data quality and child achievement are weighted equally; a worked schedule illustrating this phase-in appears after these recommendations. Over the 6 years, state data will become more stable as accuracy and quality increase.

    3. Only comparisons of each state’s performance to its own targets should be used during the initial years of incorporating results data into determinations.
    As discussed above, states’ data are not yet stable enough to be used for comparison across states. In addition, state differences, such as eligibility criteria and decision rules for compiling child outcome results into progress categories, mean that comparisons using the current data could misrepresent an individual state’s performance when compared to a national aggregation of all data. As Indicator 3 child outcomes data collection and the SSIP process continue and the quality of states’ outcome data improves, it is more likely that a valid process can be designed to compare child achievement data across states.

    4. “Completeness” of the data should be determined based on a denominator supplied by the state with accompanying rationale and documentation.
    The 618 exiting data used in the OSEP proposal do not represent an accurate count of the number of children for whom child outcomes exit ratings should be available. ITCA recommends that states be asked to provide the denominator from which the “completeness of the data” will be calculated. States should be expected to have necessary documentation of this denominator, including adjustments made to the total number of children exiting the program. OSEP historically has allowed states to “adjust” raw data reported in compliance indicators to account for family circumstances impacting state performance. Several factors contribute to the availability of child outcome data on exit from the Early Intervention Program, including the following (a small worked calculation follows this list):

    • According to OSEP directions, states are required to report outcomes data only for children who received at least six consecutive months of services in the Part C system. According to ITCA data from a number of states, between 20-45% of children who exited Part C did not receive at least 6 consecutive months of services. States should be permitted to subtract the number of children who leave Part C without receiving services for at least 6 months from the denominator used to calculate “completeness” of the data.
    • States experience a variety of exceptional family circumstances that impact the number of children for whom exit ratings are available. States should be permitted to subtract documented examples of exceptional family circumstances from the denominator as well.
    • Finally, one state continues to sample and should only be accountable for the number of children included in the OSEP-approved sampling methodology used in the state. States that sample should be permitted to use the number of children in the state’s approved sample as the denominator.
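
    A minimal sketch, with invented counts, of the completeness calculation this recommendation describes; the adjustments mirror the factors listed above:

        # Hypothetical completeness calculation using a state-supplied
        # denominator; every count below is invented for illustration.
        exiters_618 = 12000       # children reported exiting under Section 618
        under_6_months = 3600     # served fewer than 6 consecutive months
        exceptional_family = 250  # documented exceptional family circumstances
        ratings_available = 7600  # children with entry and exit outcome ratings

        denominator = exiters_618 - under_6_months - exceptional_family
        completeness = ratings_available / denominator
        print(f"Completeness: {completeness:.1%} of {denominator} expected ratings")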

    5. Use only “a” and “e” progress categories to measure “out-of-range” scores for data quality.
    ITCA supports the standards as indicated for expected patterns and ranges in progress categories “a” and “e”; this is consistent with standard practice for the consideration of outliers. However, we do not support the use of expected patterns for progress categories “b,” “c,” and “d,” as there is no empirical evidence for them.

    6. States’ performance data in child outcomes should be considered in determinations even if the state’s data quality does not meet all the standards for data quality. According to the proposal, a state’s child achievement data would only be considered if the state scored at least a 2 out of a possible 4 in the data quality component; a score of 0 or 1 would automatically give the state a zero in child achievement. ITCA recommends that a state’s data be used in the calculation of its determination regardless of its score in the data quality calculation.

    7. A different method for comparing state-to-state child achievement data should be developed over the next several years by OSEP through the federally-funded contractors with active stakeholder involvement.
    ITCA is concerned about the current proposal’s use of percentiles to compare state-to-state performance on child achievement. The 90th percentile seems too high a standard, and it seems to contradict the proposed data quality standard that “e” should not be too high. ITCA recommends that while states continue to work on enhancing data quality, OSEP work with its contractors to consider alternate methods using actual state data as it is reported over the next several years. Stakeholder input is necessary for this process as it proceeds.

    The use of percentiles is, in a way, arbitrary. States could be quite close to each other on child achievement measures and yet receive fewer points because a state is not in the 90th percentile. An approach such as summary statement percentage ranges provides states with a benchmark to strive for, which is absent under a percentile approach. One example using percentage ranges is:

    Summary statement percentage ranges would be used for each summary statement for each outcome area, similar to the process for “missing data” under data quality. These ranges might differ for the two summary statements, as shown in the example; the cut scores here are illustrative only, and a short sketch in code follows the example.

    Summary Statement #1, outcomes A, B, and C: 65% = 2 points.
    Summary Statement #2, outcomes A, B, and C: 45% = 2 points.
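
    A minimal sketch of how such percentage-range scoring might work, extending the illustrative cut scores above into full (equally hypothetical) ranges:

        # Hypothetical percentage-range scoring for the two summary
        # statements. Cut points extend the example above (65% for Summary
        # Statement 1, 45% for Summary Statement 2); none are OSEP-proposed.
        SS1_CUTS = [(65.0, 2), (50.0, 1)]  # >= 65% earns 2 points; >= 50%, 1
        SS2_CUTS = [(45.0, 2), (30.0, 1)]  # >= 45% earns 2 points; >= 30%, 1

        def range_points(percent, cuts):
            # Return the points earned for one summary statement percentage.
            for threshold, points in cuts:
                if percent >= threshold:
                    return points
            return 0

        # One illustrative state: percentages for outcome areas A, B, and C.
        ss1 = {"A": 68.2, "B": 61.0, "C": 70.5}
        ss2 = {"A": 47.9, "B": 41.3, "C": 52.0}
        total = (sum(range_points(p, SS1_CUTS) for p in ss1.values())
                 + sum(range_points(p, SS2_CUTS) for p in ss2.values()))
        print(f"Child achievement points: {total} of {2 * (len(ss1) + len(ss2))}")

    Unlike a percentile rank, a state can see exactly how far it is from the next range boundary, which is the benchmarking advantage described above.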

    There is strong sentiment nationally that the variability in state systems makes it impossible to reliably compare performance across states. The additional time period during which alternate methods are considered can be used to carefully consider how these state variables, including differences such as the tools and methods used to measure child outcomes, eligibility criteria, population, system of payments, etc., should be taken into account. This careful process, with stakeholder input and transparent data runs, will result in a methodology that, once implemented, is considered a valid, reliable, and accurate way of comparing states’ performance.

    8. For the first two years, use only each state’s performance compared with the state’s targets as the measure of child achievement and exclude the “change over time in summary statements” measure.
    Nationally, states’ current performance data are not stable enough to be used for comparing states to each other or to the state’s own performance over time. Using a concept of statistically significant change will not be valid until the data have more stability. As states work to increase the accuracy of their child outcomes measurement process, performance is likely to fluctuate. Targets are set with stakeholder input and with consideration of the important state variables that impact data quality and performance. ITCA recommends that until the data are more stable, at least in the first few years, a state be compared against its state targets.

    Either FFY 2013 performance could be compared to FFY 2012 targets, or consideration should be given to initiating this measure in the June 2016 determinations, when states’ FFY 2014 performance can be compared to their FFY 2014 targets.

    Regardless of what decision is made, going forward, it will be important to establish evidence-based expectations for the percent of children who will demonstrate substantial progress or achieve age-typical development as a result of participating in Part C programs. Part C programs provide services to children with a wide range of disabilities (and in some states risk factors) with impact on development and functioning ranging from mild to severe. Logically, there will be a “ceiling” which will be reached in states for both summary statements. Perhaps a percentage threshold should be set, after which increases would not be expected.

    9. A methodology for considering family outcomes should be developed in the next several years and incorporated into the Part C determinations process.
    ITCA believes that both child outcomes and family outcomes should be considered in the Part C determinations process, consistent with the purposes of Part C as stated in the statute. There is strong and unequivocal research demonstrating the relationship between families’ self-perceptions, knowledge, skills, and supports, and the developmental progress of their children. Including family outcomes in the determinations process reflects the primary role of the family in the achievement of positive child outcomes. ITCA recognizes that there is no acceptable way of incorporating family outcomes into the June 2015 process, but we request that work begin now on determining an accurate way of accomplishing this in the next several years.
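
    As a rough illustration of the phase-in arithmetic in recommendations 1 and 2 above, a minimal sketch in Python, assuming the 5-point shift begins in the second year (under this reading the weights reach parity in year five):

        # Hypothetical phase-in schedule: start at a 70/30 split and shift
        # 5 percentage points per year until the two components are equal.
        # Under this reading parity arrives in year 5, although the
        # recommendations describe the phase-in as spanning 6 years.
        def phase_in(major=70, minor=30, step=5):
            year = 1
            while major > minor:
                yield year, major, minor
                major, minor = major - step, minor + step
                year += 1
            yield year, major, minor  # the parity year

        for year, compliance, results in phase_in():
            print(f"Year {year}: compliance {compliance}% / results {results}%")

    The same schedule would apply to the data quality/child achievement split in recommendation 2.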

    Thank you for the opportunity to submit comments on the use of results in the determinations process. As always, ITCA is available and willing to provide any additional information or clarification that may be needed. ITCA looks forward to working closely with the Department as you proceed to incorporate results into the determinations process for Part C. Feel free to contact us by email at ideaitca@aol.com if we may be of further assistance.

    Donna Noyes, ITCA President
    Maureen Greer, ITCA Executive Director

    cc: Melody Musgrove, OSEP
    Ruth Ryder, OSEP
    Gregg Corr, OSEP
    Larry Wexler, OSEP

    • I am ever grateful for the thoughtful way that ITCA analyzes the impact of policy change on efforts in the field to make a difference in the lives of young children. As a former member of ITCA and a former Part C Coordinator, I believe these insightful recommendations merit careful attention.

  34. As OSEP shifts at least part of its focus to child outcomes, rather than solely compliance, problems with the quality of child outcomes data have become very apparent. Using current child outcomes data to determine a baseline seems inadequate and misleading. A “ramping up” period for child outcomes data quality seems to be in order.

    Also, I’ve received much feedback – with which I agree – about the COSF being a rather subjective tool with which to gather data. Because of this, a child who receives a certain set of scores at one site might have received a completely different set of scores at another site or in another state. Considering the high turnover in our field, it’s also very likely that the team that completed the initial COSF will not be the same team that completes the exit COSF. As long as the COSF is used, accurate data will be hard to obtain.

  35. Although accountability always sounds good, one needs to examine the starting points of the subjects as well as the difficulties presented. To hold states and programs accountable for things beyond their control, such as abilities, home life, and parent follow-up, is ludicrous.

  36. We support the use of results in the determinations process and appreciate the opportunity to comment on the proposed criteria for improving child and family outcomes; we also recognize the challenges in determining an equitable process across all states. We encourage OSEP to re-examine the criteria for both child outcomes and the results indicators included in the overall evaluation of Early Intervention Programs and the impact we have on children and their families.

    Calculation of Child Outcomes Criteria for Determinations
    1. Data Quality
    a. Missing Data
    – Definition of “missing data”: this calculation should not include data excluded for children served in an Early Intervention Program for less than 6 months, which is not reported according to OSEP instructions. These data are not missing due to error but are removed from the data according to OSEP calculation instructions. Use of 618 exit data for the denominator is an inaccurate measure of missing data, since it includes all children who left the program regardless of how long they were served (see the sketch after this list).
    – There is a lack of detail about how the cut-off percentages were established, especially given the recommendation for higher criteria. Provide additional detail about the process for establishing the identified percentage ranges.
    2. Child Achievement
    a. Summary Statements compared to other states
    – Were eligibility criteria discussed in the context of developing criteria for this measurement? The percent improvement reported in development for infants and toddlers would be expected to vary among states with different eligibility criteria.
    – The cut-off percentages for this measurement establish a very narrow range at the top and bottom. Provide additional explanation of the rationale for this selection, especially since it differs from the percentages established for missing data.
    – Due to the variability of systems across states, we recommend evaluating child achievement through a comparison of a state’s results over time, not a comparison across states.
    – State targets are not included in this proposal and should be incorporated into the process due to state efforts to set and achieve targets.
    b. Change over time
    – There is a difference between statistical significance and meaningful difference. This component should include an evaluation of change at a meaningful level.
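    A minimal sketch of the missing-data concern raised above: children served fewer than six months are excluded from the denominator per OSEP reporting instructions, so dividing by all 618 exits overstates the missing-data rate. The record fields used here (entry_date, exit_date, has_outcome_data) are hypothetical; actual state data layouts differ.

    ```python
    # Minimal sketch, assuming hypothetical record fields; actual 618
    # data layouts differ by state.
    from datetime import date

    def missing_data_rate(records, min_months=6):
        """Share of reportable exits lacking outcome data. Children served
        fewer than `min_months` are excluded from the denominator rather
        than counted as missing."""
        def months_served(r):
            e, x = r["entry_date"], r["exit_date"]
            return (x.year - e.year) * 12 + (x.month - e.month)

        reportable = [r for r in records if months_served(r) >= min_months]
        if not reportable:
            return 0.0
        missing = sum(1 for r in reportable if not r["has_outcome_data"])
        return missing / len(reportable)

    records = [
        {"entry_date": date(2013, 1, 15), "exit_date": date(2014, 3, 1), "has_outcome_data": True},
        {"entry_date": date(2014, 1, 10), "exit_date": date(2014, 4, 1), "has_outcome_data": False},  # <6 months: excluded
        {"entry_date": date(2013, 2, 1), "exit_date": date(2014, 2, 1), "has_outcome_data": False},
    ]

    # 1 of 2 reportable exits lacks data (50%); naively dividing by all
    # three 618 exits would report 2 of 3 (67%) and overstate the problem.
    print(missing_data_rate(records))  # 0.5
    ```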

    Limited Indicators included for Determinations
    The number of compliance indicators/criteria currently included (7) and the number of results indicators currently proposed (1) do not present a balanced evaluation of compliance relative to the results of the work of Early Intervention Programs.

    The Part C Program emphasizes, and research demonstrates, that family engagement and parental involvement are the most effective way to improve development and outcomes for infants and toddlers with delays or disabilities. It would be a disconnect for the OSEP determinations process to omit the measure of parents reporting the impact that Early Intervention Services have had on their knowledge and skills in supporting and improving their child’s development.
    “Research overwhelmingly supports that the most powerful impact upon developmental outcomes for children receiving early intervention is parental responsiveness to the child during early intervention” (Mahoney, 2009).
    The evaluation of Family Outcomes could include the criteria below, paralleling the established criteria for the measurement of child outcomes.
    1. Data Quality
    a. Overall response rate (from the APR instructions)
    This parallels the Missing Data criteria from the child outcomes proposal.
    – The number of surveys returned divided by the number of surveys disseminated, compared with a determined range of acceptable response rate percentages (see the sketch after this list).
    – Representativeness of data: family demographics, program characteristics, geographic variables, etc., compared with a determined range of acceptable responses.
    b. Out of Range
    This parallels the Data Quality criteria from the child outcomes proposal.
    – Did the state’s return fall within the determined high and low percentages? (i.e., did any state report 100% across all responses, or below 50%?)
    2. Family achievement
    a. Compared to other states.
    This could include a comparison of states using like surveys, similar to the reporting currently included in the APR Indicator Analysis published annually.
    b. Change over time.
    Similar to Child Outcomes, each state sets targets for this indicator, and those targets should be included in the determinations process. This parallels the Change over Time in Summary Statements criteria from the child outcomes proposal.
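    To make the parallel concrete, here is a minimal sketch of the two data quality checks suggested above, response rate and representativeness. The thresholds and demographic categories are illustrative assumptions, not established OSEP criteria.

    ```python
    # Minimal sketch of family-survey data quality checks.
    # Thresholds and categories are illustrative assumptions only.

    def response_rate(returned, disseminated):
        """Overall response rate: surveys returned / surveys disseminated."""
        return returned / disseminated if disseminated else 0.0

    def representativeness_gaps(respondents, population, tolerance=0.05):
        """Return categories where the respondents' share drifts more than
        `tolerance` from the served population's share."""
        return {
            k: abs(respondents.get(k, 0.0) - population[k])
            for k in population
            if abs(respondents.get(k, 0.0) - population[k]) > tolerance
        }

    rate = response_rate(returned=412, disseminated=1030)  # 0.40
    gaps = representativeness_gaps(
        respondents={"urban": 0.30, "rural": 0.70},
        population={"urban": 0.45, "rural": 0.55},
    )
    print(f"response rate: {rate:.0%}; categories out of tolerance: {gaps}")
    ```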

    Child Find measures should also be considered in calculating a state’s determination status as it relates to results indicators. Recent messaging from OSEP at Leadership conferences has emphasized the need to assure that programs are identifying all children potentially eligible for Early Intervention Services in order to have the potential to positively impact their development. Criteria should be established to include Indicator 5 and Indicator 6 in the determination process as they relate to results indicators.

  37. I perceive RDA as an attempt by federal education officials to move away from strictly compliance measures and include outcome-based measurements of child progress. While this sounds reasonable in theory, one wonders what requirements will no longer be requested as states take up this new format. Federal dollars have been inadequate from the start with regard to funding early childhood disability services and education. In other words, dollars best spent on services are being diverted to RDA, and funds were short to start with. In addition, I must ask: has compliance measurement failed? In my experience, federal requirements to date in this area have created a kind of dance where states pretty much make it up as they go, with some exceptions. There has been an overall lack of enforcement from the federal level toward states that fall short of minimum requirements outlined in IDEA guidance. Adding RDA into the mix will not resolve this. If states were truly required to meet the intent of the statute, I’m uncertain whether RDA would be necessary at all. The issue, as stated earlier, is and will remain quality. High-quality services lead to high-quality outcomes; the data are unequivocal on this point. States choose their RDA measure; my state’s choice will in no way ensure increased quality. The measure chosen was already a set of practices in place and will merely produce “feel good” reporting while families still have to fight to get their needs met. I’m still waiting for some common-sense leadership that will implement systems changes and produce enhanced child and family outcomes in EI services and supports.

  38. The scope of data collection does not include the incidence and impact of neglect and abuse experienced by children with disabilities. Existing data indicate that one in four will experience neglect and/or abuse by the time they are 18 years of age. The period of greatest risk is birth to three years. While early intervention professionals know they are mandatory reporters of suspected instances of neglect and/or abuse, they do not know how to prevent it. Prevention strategies, as presented on the Child Welfare Information Gateway, include the following: a) nurturing and attachment; b) knowledge of parenting and of child/youth development; c) parental resilience; d) social connections; e) concrete supports for parents; and f) social and emotional competence of children. The Hands & Voices O.U.R. Children Project is working to integrate these protective factors into the early intervention services of children who are deaf/hard of hearing. This inclusion represents a small yet significant enhancement of early intervention services for all children with disabilities. It is recommended that data collection concerning such services be expanded to include the degree to which services incorporate the protective factors. As professionals, we must work to prevent, rather than simply recognize and report, child neglect and/or abuse. As we now know from the CDC ACE (Adverse Childhood Experiences) studies, child neglect/abuse has a significant, negative lifelong impact upon an individual’s health, life, and performance.

  39. I think that we need to establish national standards for eligibility criteria and assessment tools. Standards vary widely across geographical areas, which makes program services and outcomes difficult to compare. Criteria indices designed for typical child development often fail to capture progress made by disabled students, as their timeline does not parallel that of typical peers. I think that criterion-referenced tests may best reflect progress, as goal achievement can be tracked without undue focus on average-age accomplishments. However, tests comparing the average age of accomplishment for basic child development can be useful as a means to demonstrate the need for intervention. The most important component of Early Start programs should be serving the families so that they learn how best to support their disabled children. How will more data and paperwork accomplish this?

  40. As we consider ways to determine the effectiveness of our Part C programs, it is important to find equitable ways to assess the variety of program services offered across the nation.
    1. While it is important to assess the effectiveness of tax dollars, using four single measures of quality applies a narrow metric to a diverse system that is designed to meet the individual needs of children and their families.
    2. States have varied eligibility for Part C programs; outcome expectations are significantly different based on identified child needs at entrance to Part C.
    3. Examining the validity of assessment instruments and child outcome measures would be helpful prior to establishing a rubric for evaluating those outcomes. States use varied assessment instruments whose reliability and validity differ greatly and cannot be compared equivalently.
    4. Using an outcome system based on achievement of age-appropriate functioning for children with identified disabilities sets a standard that is at odds with the needs of the children we serve.
    5. The net result of such a system is to focus states’ attention on numerical outcome scores rather than on the quality of child and family services. States could better determine individual areas of need and target ways to improve child and family services.

  41. I do not believe that the scores being reported have the level of validity necessary to make program decisions. In my state we are not assigning outcome ratings in a manner that would ensure that children would receive the same ratings at a different early intervention program. There is too much variability in the assignment of ratings between programs. In addition, there is variation in program eligibility in different geographical areas of the state, which makes COSF results even more suspect. Meanwhile, the state is considering increasing the required “score” for each summary statement when we can’t even validate the scores from program to program. This entire system needs to be revamped. Scores from a single test may not be ideal but could at least provide a level of validity that cannot be manipulated.

  42. Please consider States’ assessments as an option rather than requiring use of the COSF. The COSF is too subjective and lacks validity.

  43. This is a most troublesome evaluation design and assessment at many levels.

    First, there are problems with the assumption that special education is going to fix or prevent disabilities. The disabilities identified in the birth-to-three age range tend to be severe and pervasive. It is not uncommon for the expression of the disability to intensify as the child matures: the typically developing child acquires more skills while the child with the disability appears to be regressing.

    Second, the pre-/post- design is not a good design for measuring progress. The most appropriate tool for an infant may not be the best tool for the three-year-old.

    Third, there is confusion as to the weighting of the test result, which is a standardized score that is less likely to show change, versus the assignment of a rating, which is a judgment and qualitative measure by the evaluator(s), who may or may not understand the ratings or how to weight the scores in making this judgment. So the scores that are collected are not reliable, they are not valid, they are not meaningful to instructional practice, and they are not useful for evaluating the quality of programs and services – they are useless.

    Fourth, the data collection being conducted by the states is horrific at best. We send them the data and they fail to match it correctly; the state simply does not match the reporting on hundreds of our students in our county. Which goes back to the pre-/post- model as perhaps not the best method of tracking progress.

    Fifth, let us return to the intentions and assumptions of this data collection. The assumption here is that the state has the capacity to match the data correctly and that all variables are the same. Does early intervention make a difference in the developmental trajectory of children with disabilities? Yes – but the difference will vary based on the age of identification, the disability, the severity of disability, the intensity and quality of service, and the same SES factors that impact other achievement indicators. So unless you have truly informative assumptions about your data, the outcome measures as designed in your proposal are just another exercise in reporting to the government and not a tool for improvement. I hope you revise your practices and thinking toward a better evaluation design!

  44. Agree.

    I run a charity that provides scholarships as well as programs to include children in equine-assisted therapies – horseback riding. We do all sorts of things with them while they ride – it’s not just recreational! Some non-verbal children participate in our Strides program, which provides an iPad with communication apps to help them learn the importance of social communication and telling stories (not just asking for a cracker); self-expression is what we’re looking for. But some of our activities include education – math, for instance, where J***** will have his iPad pre-programmed and then be led into the arena with math problems presented. It’s AMAZING what our horses can do to motivate! One little guy last summer, age 6, non-verbal, autistic – his parents were there, and they didn’t know he knew his numbers. We did the math game, and guess what we found out? He not only KNEW his numbers and could identify them on the iPad, but he knew some simple math! So how do you go about documenting that with a data chart? And yes, it is time-consuming, which costs money – money that my programs do not have. Outcomes are not solely based on data points.

  45. Please make certain that all parents are referred to their local IDEA-funded Parent Center! We are here to assist parents through the maze of services and supports. It takes all of us working together to ensure parents have all they need to work with their child and to understand the systems that support them!

  46. We have become a data-driven society! There are not enough hours in a day to compile all the data that everyone wants to keep on children. Teaching is a second career for me, and the saddest part is that I am beginning to dislike the job! Too much paper, and no one at the top is actually looking at all this DATA! We need to be in the classroom helping students learn and enjoy their young lives.

    • While I agree that natural-environment and present-moment teaching is very important, how then will anyone prove progression? For example, a special education student might start the year needing hand-over-hand prompting with an aide directly beside them, yet at present that aide is sitting 10 feet back and only using gestural prompts – that is a lot of progress that no test is going to be able to show. If we are collecting data in different settings and for different behaviors, such as the above-mentioned on-task behavior, then we can show more than academic tasks on a test; we can show how students are completing tasks functionally and independently.

  47. I like the study. However, I taught in an urban school district for 10 years, and the tests were not equal to those in other, rural, or suburban districts. Many children were on different levels, which is very hard to teach. I know each state, and the districts within each state, has its own way of testing for data and determining who is eligible for services. I believe that with the way the data is to be determined, and NE being considered, as well as absences, this could work.

    The quality of data seems to fall within the mid to upper range, which one would assume is because of NEs or the lower urban or lower poverty levels. I believe there does need to be different criteria when it comes to the data equaling the children’s performance.

    I believe that the validity of each child’s performance would be reinforced if each state got involved. This would only be a big improvement if each state follows the rules and there are people from the state to implement it as well. This would prove validity. The absences would be discounted. NE is an independent variable over which there is no control, especially in urban areas. In addition, performance is lower because of NEs.

    Overall, I believe that if this is implemented universally, tailored to each state’s level and performance, supervised officially for validity, and children receive the services they need, it would work for them in their early years. The data and data summary should also be compiled on a monthly or quarterly basis and mandated for each state to report federally. The data and performance levels would help early intervention determine what difference or “intellectual disability” a child has, providing valuable information to assist that child with the appropriate educational plan and accommodations. In essence, this would assist early on in the IEP process and provide a head start toward having children and states perform at a mid to higher level.

    This data would have helped my son. By the end of third grade he was evaluated and diagnosed with a learning disability in reading, NOS. If the aforementioned had occurred, he wouldn’t be a year behind in 8th grade, at a 7th-grade level. This data and data summary early on would have assisted in observing his reading needs in order to avoid an IEP.

    I hope this helps.

    Thank you very much.

  48. It is amazing to me that anyone would believe that collecting more data is a magic bullet for determining which programs work and which don’t. Have we learned nothing from NCLB? The only positive outcome would be for the companies that create the assessments and collect the data, draining more money from the actual programs that serve our most at-risk students. These early intervention programs often provide the kinds of positive outcomes that are not easily quantified and have far-reaching family outcomes that cannot be captured in a data set. Using data for the purposes proposed will have the same negative outcomes as the NCLB and RTTT initiatives. It will cause people to behave in ways that are not in the best interest of children when funding is tied to outcomes based solely on data points. Those of us who work in these programs have seen the corruption possible when greedy people see dollars instead of children. Data collection on children is a slippery slope; we have no idea how this data will later be used to track these children.

  49. I really think that the most important part of this subject is assessment prior to these situations, in order to know exactly what is producing or has produced these disabilities. It is not just about providing solutions; it is also about preventing more cases.

  50. We have had students move in from different districts that are rating children really low at entrance. Example: giving speech-only kids low scores in all areas. We would never rate a speech-only kid low in any area except perhaps the social/communication outcome, and then only if their speech impairment affected their ability to get their needs met. The whole system is subjective. We already spend a great deal of time making IFSP outcomes individual, appropriate, and measurable. In addition, the MN outcomes are crosswalked with a criterion-referenced test. Our district uses the Infant Toddler Developmental Assessment. We would be following our students with this tool with or without the MN outcomes. Why not just look directly at those scores? Please look at the processes we already have in place instead of adding a new layer of paperwork to our already busy desk time. I went into special education to work with children, and I picked birth to three to work with families. I don’t enjoy paperwork, but I do it as a necessary part of my job; I just don’t want to take any more time away from prep and student time than I have to. In addition, I have at least a few students per year who have conditions that are degenerative or that will show progressive delays as the child ages. That does not mean the child is not progressing or that I am not an effective teacher. A child with Down syndrome will be at age level in the first few months but then fall further behind her peers as she ages. I know the outcomes allow progress to be shown even for those children; any other measurement that is selected would need to allow for these types of children/disorders.

  51. Helping us all understand the progress of early childhood special education programs will focus attention on those that are successful and those in need of help. This discussion will drive the energy of the program toward obtaining successful outcomes. Even if eligibility is different in each state, each state will be asked to determine whether it is getting what it wants from the funding. I am sure some consistent variables can be placed into the evaluation process to take into account the variability of state programs. If we do not assess, then what are WE all looking for from the federal program?

  52. I find that outcome data is not a valid method of measuring child achievement. Teachers often complete their outcomes quickly and do not use the data to inform what they’re doing. Can we put more of an emphasis on the quality of IFSPs? What I find most important is whether the content of the IFSP matches the needs of the child and family, whether the goals/outcomes are measurable and reflect high expectations, and whether children are making adequate progress.

  53. Missing Data – It’s not enough to look at the proportion of exiting children for whom data is provided. It’s very important to determine whether certain groups of children tend not to have data. For example, is missing achievement data associated with severity of delay, ethnicity, or parent education? If there is a tendency to provide data only for children with minor delays, achievement outcomes may be biased toward the best outcomes.

    Child Achievement – Family and child characteristics affect outcomes. Because states enroll different groups of children and families in Part C, there is no reason to think that states should have comparable child outcomes. In addition, before states can be compared on child achievement, it should be demonstrated that the processes by which child achievement scores are obtained are reliable across the nation.

    It’s also not clear why change over time in summary statements is used to assess quality. Is there a reason to believe that overall rates of child achievement in a state should improve over time?

    Total Score — The justification for creating an overall total score is unclear. Child Achievement and Data Quality are different variables, with very different meanings. Adding them together is like adding the results of high school student achievement tests and the proportion of students taking the test. Each is important, but added together their meaning is obscured.
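    The commenter’s point can be made concrete with a toy example (the scores below are invented, not from the proposal): two states with opposite profiles receive identical totals, so the total alone cannot distinguish them.

    ```python
    # Toy example: summing two different variables hides which one is weak.
    # Scores are invented for illustration only.
    state_a = {"data_quality": 4, "child_achievement": 1}  # complete data, weak results
    state_b = {"data_quality": 1, "child_achievement": 4}  # strong results, sparse data

    def total(scores):
        return scores["data_quality"] + scores["child_achievement"]

    print(total(state_a), total(state_b))    # 5 5
    print(total(state_a) == total(state_b))  # True -- the total cannot tell them apart
    ```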

  54. I like the fact that there will be some measure to help determine the effectiveness of early intervention services and to compare across states. It would be nice to see a measure that reflects Part C information being relayed to primary care providers, so that Part C services don’t appear to be provided within a silo. There needs to be communication between Part C and other providers.

  55. The eligibility requirements vary from state to state. Children with a 50% delay are going to have very different outcomes from children with a 25% delay who come to early intervention and reach age level with intervention.

  56. Using data to find out whether the program is working, or to find what new strategies need to be used to help individual students, is fine. However, using data on a group of children who have disabilities at different levels causes stress among teachers, because every student is not going to make gains as fast as others, and some groups of students don’t look like they are making gains at all when grouped together. You can’t expect a child with a 68 IQ to perform and achieve like a child with an average or above-average IQ. Some early intervention students are intellectually disabled, while others are high-functioning children with Asperger’s who have not yet learned how to deal with the world or with language expression.

  57. I like the fact that there will be some measure to help us determine the effectiveness of early intervention services and a way to compare outcomes across states. I do think that, to further contribute to the validity of your outcomes, the socioeconomic status of participating families might be included as a weight in your calculations. I could not determine whether this factor was already considered in your quantitative analysis or other data. A longitudinal aspect might also be considered, to examine children’s progress after they transition to Part B.

  58. To help our kids the most, start with urging Congress to fully fund IDEA. Data collection does no good if you don’t have the funds to adequately provide the accommodations, adaptations, modifications, and education (for kids, teachers, and paraprofessionals) that your data shows work best. Also, if you would design a format for grading goals like subject-matter grading, it would help not only program design; we could also move toward giving a functional diploma to those who qualify by meeting the standards of their IEP goals.

    • AMEN. The teachers and students don’t see the funding. The only things we receive that are different from the regular ed students are testing materials and a different computer program than the intervention students use.

  59. Please consider that some states serve at-risk children while other states have narrower eligibility criteria. Additionally, not every state uses the COSF to obtain results. The COSF may have elements that are more subjective than other assessments and result in higher scores. In both cases, a state with narrow eligibility criteria and different assessment techniques would be at a disadvantage when compared to states using other methods. Given that, the four pieces of the scoring should not be weighted equally.

Comments are closed.