
Are There Truly No Differences in Teacher Preparation Program Quality?


NOTE: New America's recommendations for HEA Title II data collection and reporting have evolved over time, particularly with regard to connecting the data reported to specific consequences. For example, we no longer recommend using the results of HEA Title II data reports to determine a preparation program's eligibility to offer TEACH grants.

For New America's most up-to-date recommendations on HEA Title II, please see our newest brief on the topic, or reach out to tooley@newamerica.org.

September is just around the corner, which is when the U.S. Department of Education (ED) is slated to release its final regulations for Title II of the Higher Education Act (HEA) governing teacher preparation. As released for public comment, the proposed regulations included a few highly contentious requirements, as Stephen Sawchuk of Education Week has reported. One of these requirements is that states rate teacher preparation programs (TPPs)—and hold them accountable—based in part on their graduates' performance in the classroom, which must include a measure of students' learning growth. Recently, a study of TPP quality in Missouri found no substantive differences in the effect of attending a given program on graduates' performance, as measured by graduates' impact on student achievement. In light of the Missouri study, can we expect new regulations to have any impact on teacher preparation program quality, or will they only lead to wasted time and effort?

At first blush, the Missouri research findings may seem like a cautionary tale: if there are no real differences in teacher preparation program quality, then new regulations could require states and institutions of higher education to go through the work of collecting data and assigning ratings for no good reason. But here are three important caveats to the study's findings:

  1. The study only investigated differences at the university level, not at the level that ED's proposed regulations would require: the specific teacher preparation programs within a university's school of education (e.g., the elementary education program versus the secondary mathematics program). Lumping all of an institution's various programs together may obscure differences that actually exist, as a strong program in one area could make up for a weak program in another.
  2. The authors (Koedel, Parson, Podgursky, and Ehlert, all in the economics department at the University of Missouri) only looked at "traditional" preparation programs in public institutions in the state of Missouri, and only at graduates who ended up teaching in elementary schools. In a footnote, the researchers explain how their "focus on traditional programs and on teachers moving into elementary schools reduces within-institution heterogeneity," but they fail to acknowledge that it could reduce the performance variation they found between institutions as well. And while current Title II regulations only require states to assess the performance of "traditional" preparation programs, ED's proposed rules would expand this to include alternative preparation programs as well (e.g., Teach For America).
  3. This research assessed program "quality" solely via a measure of teacher impact on student achievement, whereas ED's proposed regulations would allow states to determine which measures to use for this purpose, so long as student learning growth is factored in.

Additionally, as the authors of the Missouri study highlight, the absence of strong federal reporting and accountability requirements to date has left states, districts, and teacher preparation programs with "little incentive to innovate and improve," which could explain why they found very little differentiation in program quality. In fact, a report from the U.S. Government Accountability Office (GAO) found that seven states had no process in place to report on low-performing TPPs, despite the current requirement to do so under Title II of HEA. And while the other states did have assessment processes in place, most were not particularly rigorous. For example, many assessed alignment with their state's teaching standards (primarily via reviews of syllabi and course materials, as well as through interviews of TPP staff), but fewer than 10 used teacher evaluation or student assessment data.

(As an aside, many teacher preparation programs have expressed outrage at the National Council on Teacher Quality (NCTQ) rating teacher preparation programs based largely on reviews of syllabi and course materials, which they have deemed unfair. Sharon Robinson, head of the American Association of Colleges for Teacher Education, described NCTQ's approach as "little more than a document review—hardly adequate evidence to judge graduates' readiness to teach." This calls into question why preparation programs are pushing to keep states' preparation program rating systems unchanged at the same time they protest NCTQ's rating methods. A good guess: states are lenient, and NCTQ is not.)

Given GAO's findings that states' TPP quality assessment processes are light and loose, and that ED does not monitor the execution of these processes, it should come as no surprise that states rarely identify any programs as low-performing: from 2013 through 2014, GAO found that only six states identified at least one program as low-performing, and only 13 identified at least one as at risk of becoming low-performing. The fact that, currently, ED can only require states to assess and report quality for entire education schools at institutions of higher education—as opposed to the individual preparation programs within them, as the new proposed regulations would require—likely only exacerbates this issue, which persists despite the widespread understanding that most new teachers end up feeling unprepared during their first years on the job.

Unfortunately, little evidence exists to suggest that states and institutions of higher education will work to improve the quality of their teacher preparation programs without stronger federal oversight and interventions. Still, tougher federal regulations and oversight alone won't lead to improvement among teacher preparation programs within and across states. In order to improve the teacher preparation program landscape, state policymakers, districts, and preparation program providers must also play their part to ensure quality in the field. Look for our upcoming post delving into strategies these various stakeholders can employ to do just that.

More About the Authors

Kaylan Connally