Friday, May 26, 2023

 

Study finds school improvement plan (SIP) templates continue to be treated as compliance exercises rather than as mechanisms for spurring and sustaining improvement efforts in schools


SIP templates can espouse governmental entities’ perspectives on, requirements for, and recommendations about the school improvement planning process

Peer-Reviewed Publication

UNIVERSITY OF DELAWARE

LONG READ

School accountability policies from around the world list an array of mandates and recommendations to improve schools. One prevalent mandate, especially in the United States, calls for the development of a school improvement plan (SIP). Since the 1970s, many U.S. states have required that schools develop SIPs, and, in the 1990s, the U.S. federal government started to require that all state-designated underperforming schools develop SIPs (IASA, 1994; Odden & Dougherty, 1982). These school accountability policy mandates assert that SIPs are an improvement tool for educators to use to set direction, organize resources, and take actions to enhance school performance (Beach & Lindahl, 2007; Doud, 1995). Studies have found that higher-quality SIPs are positively—but often not significantly—correlated with better student achievement in English/language arts (ELA) and/or mathematics (Fernandez, 2011; Huber & Conway, 2015; Strunk et al., 2016; VanGronigen & Meyers, 2022). Yet, other work suggests that educators charged with developing SIPs consider the process to be more of a compliance exercise than a legitimate tool for improving their schools (Meyers & VanGronigen, 2019; Mintrop et al., 2001). As a result, educators create SIPs that are just “good enough” (Simon, 1957, p. xxv) to be approved by their school district or their state education agency (SEA) so they can check the SIP off their to-do list and return to work they deem more important (Duke, 2015; Duke et al., 2013).

Study Purpose and Research Questions

The U.S. federal government and many SEAs, scholars, and practitioners have developed a range of resources over the last 30 years to aid educators in crafting high-quality SIPs, from targeted professional workshops to extensive school improvement toolkits (Anfara et al., 2006; Rhim et al., 2007; Scott et al., 2009). In the present study, we focus on one resource: SIP templates. Prior work (e.g., Miller, 2003; Rentner et al., 2017; Rhim & Redding, 2011; White & Smith, 2010) has found that some SEAs devise their own SIP templates whereas other SEAs adopt an external SIP template, such as those from the Indistar® online planning platform. Still other SEAs provide their schools with no SIP template, leaving the design of SIPs up to school- and school district–level officials.

Research (e.g., Louis & Robinson, 2012) suggests that the design of school accountability policy mandates like SIPs influences the work of educators. Thus, the design and characteristics of SIP templates can signal to educators what is and is not important when developing and implementing a SIP (Mintrop et al., 2001). A SIP template that does not require an analysis of student attendance data, for example, may prompt fewer educators to address chronic absenteeism among students. A SIP template that does not call for family-school-community engagement strategies may see fewer educators invest time in building relationships with people outside their school. Despite their potential influence, though, few peer-reviewed empirical studies have intentionally investigated SIP templates in general, much less the specific influences SIP templates may have on school improvement efforts. The present study is in direct response to both this gap and calls from scholars (e.g., Bickmore et al., 2021; Dunaway et al., 2014) to better describe the SIP development and implementation process.

The broad purpose of the present study was to better understand the design and characteristics of SIP templates used in public schools around the United States. To strengthen our analysis and examine variation over time, we gathered SIP templates used before and after the 2015 passage of the Every Student Succeeds Act (ESSA). ESSA devolved some federal authority over school improvement efforts back to states, and we wanted to explore the potential influence of this devolution on SIP templates. In service of our purpose and desired examination of change over time, this exploratory qualitative content analysis study asked the following two research questions:

1. What are the design and characteristics of SIP templates used before and after ESSA’s passage?

2. How does the typical pre- and post-ESSA SIP template espouse the SIP development and implementation process?

Review of Relevant Literature

The use of improvement planning by schools and school districts has roots in two predominant places in the United States: (a) the strategic planning process from business and (b) the effective schools movement. Like strategic plans created by businesses, SIPs include goals that set a school’s overall direction, measurable objectives aligned to goals, strategies to meet objectives, and action steps to implement strategies (Duke et al., 2013). Unlike strategic plans, though, SIPs often have a one-year time horizon—meaning schools set goals every year—whereas strategic plans usually map out work over a 5- to 10-year time horizon (Beach & Lindahl, 2007). Consequently, SIP development and implementation has become an annual endeavor in many schools: educators develop a SIP at the start of the school year, educators implement SIP strategies and action steps during the school year, and a group of educators led by the principal performs an “autopsy” (Duke, 2015, p. 89) at the end of the school year to evaluate a school’s success in meeting SIP goals.

Mandates for School Improvement Planning

In 1994, the U.S. Congress passed the Improving America’s Schools Act (IASA), which required that all state-designated underperforming schools annually develop SIPs and charged school districts with reviewing those SIPs (IASA, 1994). The law also called upon state education agencies (SEAs) to provide technical assistance to underperforming schools, one area of which focused on SIPs. The No Child Left Behind (NCLB) Act of 2001 preserved both of these IASA mandates along with instituting new, high-stakes consequences for persistently underperforming schools, such as closure (NCLB, 2002). These federal mandates are layered on top of existing SIP mandates in many U.S. states, some of which have been in place since the 1970s (Odden & Dougherty, 1982).

Despite over 40 years of school accountability policy mandates to use SIPs as a tool for improving schools, the current peer-reviewed empirical literature base on the topic remains thin (Bickmore et al., 2021). Of the limited published work, studies since NCLB’s passage have more often examined the quality and effects of SIPs rather than the day-to-day implementation of SIPs. A synthesis of this extant research suggests that SIPs tend to be of low quality when assessed against research-informed criteria related to developing clear and coherent organizational plans (Curry, 2007; Meyers & VanGronigen, 2021; Mintrop & MacLellan, 2002; VanGronigen & Meyers, 2020). Other evidence suggests that higher-quality SIPs are correlated with better school performance, such as student scores on ELA and mathematics standardized tests (Huber & Conway, 2015; VanGronigen & Meyers, 2022). In Nevada public schools, for example, Fernandez (2011) found that SIPs with specific, time-bound goals and specific progress-monitoring indicators were associated with increased student scores on state standardized tests. Although these findings are encouraging, many of these researchers note considerable “noise” in attempting to assess the direct effects of SIP quality on outcomes of interest (e.g., student learning).

One recurring theme in the literature may offer insight into some of the reasons for low-quality SIPs and a lack of significant association between SIPs and school performance. Several studies (e.g., Meyers & VanGronigen, 2019; Mintrop et al., 2001) share how many educators view the SIP development and implementation process as a compliance exercise instead of a legitimate tool for improving their schools. Educators have been found to “satisfice” when developing SIPs, meaning they spend time drafting a “good enough” (Simon, 1957, p. xxv) SIP to meet school accountability policy mandates—and nothing more. The extent to which a good enough SIP actually leads to school improvement in addition to simply responding to policy mandates remains an open question and requires further study (Meyers & VanGronigen, 2019). Moreover, other research suggests many SIP goals target increases in student proficiency in ELA and mathematics (Anfara et al., 2006; Sun et al., 2019; VanGronigen & Meyers, 2017). These findings demonstrate that school accountability policy mandates, rather than the educators and community members in those schools, set the goals of many SIPs (Mintrop et al., 2001). These circumstances can short-circuit the SIP development and implementation process and lead to SIP developers actively or passively deferring to external school accountability policy mandates rather than spending important time identifying school-specific needs, such as rebuilding family-school-community linkages (Wronowski et al., 2022).

Templates for School Improvement Planning

Despite a nearly three-decade mandate that all underperforming schools develop SIPs, the U.S. federal government has provided little guidance on what information SIPs should include and how SIPs should be implemented (USED, 2006; see also Forte, 2010). As a result, numerous organizations—from SEAs and federally funded research centers to academics and school districts—have devised a vast array of resources to help educators and others develop and implement high-quality SIPs. These resources include professional learning experiences, resource kits, rubrics, reflection questions, worksheets, and templates (Rhim et al., 2007; VanGronigen et al., 2017). This latter resource—templates—can hold particular importance because what is and is not included in a template can shape how people interact with and use that template (e.g., Poniatowski & Neumann, 2020).

There is scant rigorous, empirical research specifically on SIP templates, which we found surprising given how often SIPs are proffered as a tool for improvement. Studies that examine or mention SIPs tend to provide few details about SIP template design and characteristics—and among studies that do, those details more often establish context and frame analyses than serve as a distinct focus of inquiry. Wronowski and colleagues (2022), for instance, examined SIPs from four U.S. states but included only one SIP templates-related paragraph per state. Aaron and colleagues (in press), on the other hand, offered considerable detail about a SIP template used by a U.S. professional services organization. They noted how this organization’s SIP template called for SIPs to include up to four goals along with accompanying root cause analyses and desired outcomes for each goal. Some states have created their own SIP template (Montana Department of Education, 2022) whereas other states have either adopted a SIP template devised by an outside organization—like the template described by Aaron and colleagues (in press)—or charged school- and school district–level officials with identifying and using their own SIP template (Duffy, 2001). The extent to which a SIP template is a required or suggested tool for improvement, though, is outside the scope of the present study; we are concerned solely with the design and characteristics of SIP templates from U.S. states.

Shifting Emphases from Recent Changes to Policy and Standards

In 2015, the U.S. Congress passed ESSA, and although the law still privileged student proficiency in ELA and mathematics, there were some key differences between ESSA and NCLB. First, states could broaden indicators of school performance past student proficiency in ELA and mathematics. ESSA permitted states to devise a school quality or student success (SQSS) indicator to help measure school performance, and indicators ranged from chronic absenteeism rates or school climate survey results to science proficiency or college and career readiness (Kostyo et al., 2018; Woods & Scott, 2017). Second, ESSA required that all state-designated underperforming schools use a needs assessment to inform their SIPs. In some states, SEAs performed the needs assessment while other states leveraged third-party vendors or school districts to perform the needs assessment (Cuiccio & Husby-Slater, 2018). This requirement in particular—which was not included in NCLB—was intended to help schools better identify a range of their needs instead of quickly jumping to student ELA and mathematics proficiency.

Also in 2015, the National Policy Board for Educational Administration (NPBEA) published the Professional Standards for Educational Leaders (PSEL) to replace the Educational Leadership Policy Standards (ELPS) (NPBEA, 2015). PSEL is a set of research-informed standards intended to guide state policymakers’ efforts in creating expectations for educational leaders, especially principals and assistant principals (Smylie & Murphy, 2018). Recognizing the evolving nature of the jobs of educational leaders, PSEL—compared to its ELPS predecessor—increased emphasis on educational leaders’ role with respect to family-school-community engagement; equity, social justice, and cultural responsivity; and students’ nonacademic outcomes, such as social-emotional learning (Murphy, 2016; see also Farley et al., 2019). As the next section details, policy mandates from NCLB and ESSA along with standards like PSEL create conditions that may influence SIP templates and how educators—especially educational leaders—interact with those templates to develop and implement SIPs.

Conceptual Framework

Recent shifts in federal educational policy and professional standards for educational leaders, in particular, have changed the environments in which states and educators operate. ESSA and PSEL—which differ from their respective predecessors—may prompt SIP developers to focus SIP goals, objectives, strategies, and action steps on efforts besides students’ ELA and mathematics proficiency, such as an ESSA-aligned SQSS indicator about student attendance or a PSEL-aligned goal about promoting a caring school culture. To empirically investigate these and other potential changes, the present study examined SIP templates devised before and after ESSA’s passage in 2015. Such an approach aimed to provide insight into whether SIP templates created after ESSA’s passage reflected some of the changes mentioned previously or were, in fact, more of the same from the NCLB era.

To frame the present study, we draw on Argyris and Schön’s (1974, 1978) conceptual work on espoused theory and theory-in-use (i.e., enacted theory). They posit that an espoused theory involves that which is communicated or professed while an enacted theory is observable through actions. Although these theories may be consistent with one another, it is possible that they may directly conflict without enactors being cognizant of the conflict. For example, a principal might say they value PSEL’s focus on family-school-community engagement and value frequent engagement with parents, but, in practice, the principal only hosts one parent meet-and-greet event during a school year. In this instance, the principal’s espoused theory—valuing family-school-community engagement—does not match the principal’s enacted theory.

By developing or endorsing resources like SIP templates, states—whether they are aware of it or not—publicly communicate an espoused theory about the SIP development and implementation process. The general contours of and specific prompts within SIP templates might be influenced by a number of aspects, such as federal mandates (e.g., NCLB), state statutes, or newly hired officials in an SEA’s school improvement office. What SEAs choose to include or not include in their SIP templates may exert considerable influence on how educators perceive and engage with the SIP development and implementation process.

If, for instance, a SIP template includes a box for a goal related to student ELA proficiency, research suggests that SIP developers are more likely to include an ELA-related goal (Anfara et al., 2006; Mintrop et al., 2001). Some SEAs may list a box specific to their ESSA SQSS indicator (e.g., chronic student absenteeism), which could prompt more schools to focus their SIPs on the indicator than they would have otherwise. If a SIP template does not explicitly prompt for a narrative about how a school intends to engage parents, families, and the wider community (such as those SIP templates examined by Wronowski and colleagues [2022]), then that school may be less likely to focus improvement efforts on family-school-community engagement unless the school’s principal is specifically aware of and seeks to implement PSEL’s emphasis on family-school-community engagement. As these examples illustrate, SIP templates, which we conceptually frame to be espoused theories of school improvement planning, can influence SIP developers’ enacted theories—that is, how SIP developers then go about working to improve their schools. The present study, however, investigates only espoused theories via SIP templates. A later section highlights how future research can leverage the present study’s findings to examine enacted theories about how SIPs are implemented in schools.

Methods

The purposes of the present study were to better understand the characteristics of SIP templates and how those templates espoused the SIP development and implementation process. To accomplish these purposes, we conducted a conventional content analysis (Krippendorff, 2004), which is an appropriate qualitative approach when not much is known about the phenomenon of interest (Hsieh & Shannon, 2005).

Sample, Data Collection Procedures, and Data Sources

To provide a broad understanding of SIP templates, we used a complete target population sampling method, which Patton (2002) recommends when research purposes call for learning about all participants in a group of interest. We considered each U.S. state to be a separate participant (N = 50).

The goal of our data collection efforts was to gather at least two documents from each state: (a) a blank SIP template or completed SIP from before the 2015–2016 school year, and (b) a blank SIP template or completed SIP from after the 2018–2019 school year. We referred to the first set of documents as “the prior era” and the second set as “the current era.” ESSA’s passage in December 2015 provided the time point to distinguish between the prior and current eras.

To start data collection efforts, three research team members visited each SEA’s website to identify SIP-related webpages. Some SEA websites like Florida’s had a publicly available database of all SIPs whereas other SEA websites included scant information about SIPs. In these latter cases, we performed additional website searches within and outside the SEA’s website to identify prior- and current-era SIPs, such as navigating through school district websites. These additional searches failed in 12 states, so one research team member emailed those states’ SEA improvement unit staff members to ask for needed documents. After attempts to contact all 50 states, we ultimately secured 44 SIPs from the prior era and 48 SIPs from the current era (see Table 3 in the findings section for the full list). These 92 documents served as our final data sources. Although we selected SIPs at random, no SIPs were specifically Title I schoolwide plans because the U.S. federal government sets the guidelines for those plans (USED, 2016). We were interested only in SIP templates that were not mandated by the U.S. federal government (i.e., developed and/or shared by SEAs).

Data Analysis

Following document analysis procedures (Hsieh & Shannon, 2005; Krippendorff, 2004), we used an integrated coding scheme that consisted of both deductive and inductive codes (Bradley et al., 2007). First, two research team members consulted the literature on school improvement generally and SIPs specifically to develop a list of deductive codes for SIP template characteristics, such as school mission/vision/purpose and the listing of goals, objectives, strategies, and action steps. One research team member then randomly selected 10 states with SIP templates from both eras (N = 20) for initial coding and to establish interrater reliability (see Shenton, 2004). Two different research team members then engaged in two rounds of coding of the initial random sample of 20 SIP templates. In the first round, researchers analyzed SIP templates using the list of deductive codes. In the second round, researchers inductively coded the initial random sample to account for characteristics potentially missed through deductive coding. Our entire research team then met to (a) discuss coding results from the initial random sample, review areas of agreement and disagreement, and refine codes as necessary (e.g., the granularity of coding SIP goals by focus area, such as mathematics, social-emotional learning, etc.) and (b) collectively finalize the integrated coding scheme, which ultimately included 103 SIP template characteristics (see Table 1 in the findings section for the full list).

Two research team members then used this refined coding scheme to analyze all 92 documents, first going through all 44 prior-era SIP templates and then all 48 current-era SIP templates. Interrater reliability was 93% (4,219 of 4,532 matches) for the prior era and 96% (4,865 of 5,047 matches) for the current era (Saldaña, 2015). We met every two weeks to discuss coding results as another research team member attempted to secure additional documents from some SEAs. Once all coding was finished, one research team member reviewed all coding results and completed the data reduction process (Miles & Huberman, 1994) by reviewing codes for consistency and combining open codes into axial and selective codes. In sum, we observed 737 unique instances of SIP template characteristics in the prior era and 663 unique instances of SIP template characteristics in the current era.
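To make the interrater reliability calculation concrete, here is a minimal sketch of the percent-agreement computation; the coder lists are hypothetical stand-ins, and only the reported match counts come from the study.

```python
# A minimal sketch of percent agreement between two coders. The coder lists
# are hypothetical; only the match counts (e.g., 4,219 of 4,532 for the
# prior era) are reported by the study.

def percent_agreement(coder_a: list[str], coder_b: list[str]) -> float:
    """Share of coding decisions on which two raters agree."""
    assert len(coder_a) == len(coder_b), "coders must rate the same items"
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Reproducing the reported era-level figures directly from the match counts:
print(f"prior era: {4219 / 4532:.0%}")    # -> 93%
print(f"current era: {4865 / 5047:.0%}")  # -> 96%
```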

Two research team members then used the literature on SIPs (e.g., Duke et al., 2013) to categorize all 103 SIP template characteristics into one of six “emphasis areas”: assessing current conditions, determining needs, setting direction, organizing resources, taking action, and evaluating progress. These two research team members separately deductively coded all SIP template characteristics with respect to emphasis areas and met to discuss coding results and resolve disagreements. After this discussion, one of these two team members reanalyzed all SIP template characteristics with respect to emphasis areas using the refined deductive scheme. Ultimately, of the 103 SIP template characteristics, 29 aligned with assessing current conditions, 14 with determining needs, 23 with setting direction, 13 with organizing resources, 11 with taking action, and 13 with evaluating progress. Because the present study examined SIP templates with respect to the SIP development and implementation process, we viewed the first four emphasis areas (assessing current conditions, determining needs, setting direction, organizing resources) as aligning with the SIP development aspect of the process and the last two emphasis areas (taking action, evaluating progress) as aligning with the SIP implementation aspect of the process.

Methodological Limitations

Two main limitations of our methodology warrant mention. First, we were unable to secure documents from six states for the prior era and two states for the current era despite repeated attempts. Some SEAs simply did not respond to multiple emails and phone calls requesting information. Consequently, our findings can speak to most, but not all, of the United States. Second, scholars (e.g., Bowen, 2009) note how documents—just like interviews and surveys—are self-reported information and may exhibit biases similar to those people exhibit when answering interview or survey questions, such as selective sharing. While triangulation with primary data collection methods (e.g., interviews) may have mitigated some of these biases, we made an intentional methodological decision to offer espoused portraits of SIP templates using only publicly available information. These espoused portraits are often representative of what school- and school district–level educators have to engage with and interpret as they develop and implement SIPs. In a later section, we describe how future research efforts can address some of these limitations and strengthen the present study’s findings.

Findings

Research Question 1: SIP Template Characteristics

Our first research question asked about the characteristics of SIP templates before and after ESSA’s passage (the prior era and the current era, respectively).

General SIP Template Characteristics

To get a general sense of the data, we calculated the overall prevalence of the 103 SIP template characteristics across the prior and current eras (see Table 1). Starting with the prior era, the most prevalent characteristics that appeared in at least half of states were a general description of goals (84%), a required ELA goal (68%), a general description of action steps (61%), a general description of strategies to implement goals (59%), and the school principal’s name (50%). On the other hand, we observed no instances of 12 characteristics across prior-era SIP templates, such as including early warning data, rationales for objectives, recommendations for future school years, expected results from improvement efforts, or a description of the school’s cultural competency plan. As a reminder, these characteristics came from our set of deductive codes derived from extant literature.

For the current era, the most prevalent characteristics appearing in at least half of states were a general description of goals (65%), a required ELA goal (54%), a general description of action steps (52%), and a required mathematics goal (52%). We observed no instances of 15 characteristics across current-era SIP templates, such as staff and community demographic data, several details related to objectives (e.g., rationale, supporting evidence, timeline), expected results of improvement efforts, a description of feedback loops between the school and parents and surrounding community, or a description of the school’s cultural competency plan.

To examine more nuanced changes in SIP template characteristic prevalence rates after ESSA’s passage, we ranked the prevalence of all characteristics within each era with Rank 1 being the most prevalent and Rank 103 being the least. Ranking changes from the prior to the current era permitted us to consider the extent to which ESSA’s mandates—such as SQSS indicators (e.g., students’ social-emotional learning) and needs assessments—were present in our sample of SIP templates. Looking to the ranking distribution’s tails, 11 characteristics decreased at least 26 ranks (i.e., one quartile) between eras while 12 characteristics increased at least 26 ranks. Fewer SIP templates used after ESSA’s passage included staff demographic data (↓65 ranks), a list of stakeholders involved in developing the SIP (↓48), explicit strategies to communicate information to parents (↓39), subject area test scores by student subgroup (↓38), a required science test score goal (↓38), a description of monitoring progress on meeting goals (↓33), and a required social studies test score goal (↓29). Conversely, more SIP templates used after ESSA’s passage included purpose statements (↑44), a required student-focused social-emotional learning (SEL) goal (↑39), measurable outcomes for strategies (↑37), supporting evidence for goals (↑33), supporting evidence for strategies (↑31), a timeline for meeting goals (↑30), progress benchmarks for action steps (↑28), and goals in the SMART goal format (↑27).
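The following minimal sketch illustrates this ranking procedure; the toy prevalence values (share of states including each characteristic) are hypothetical, not the study’s data, and the real analysis ranked all 103 characteristics per era.

```python
# A minimal sketch of the rank-change analysis described above. The toy
# prevalence dictionaries are hypothetical; the study ranked all 103
# characteristics within each era.

def rank(prevalence: dict[str, float]) -> dict[str, int]:
    """Rank 1 = most prevalent characteristic within an era."""
    ordered = sorted(prevalence, key=prevalence.get, reverse=True)
    return {name: i + 1 for i, name in enumerate(ordered)}

prior = {"goal description": 0.84, "staff demographics": 0.50, "SMART goals": 0.10}
current = {"goal description": 0.65, "staff demographics": 0.02, "SMART goals": 0.40}

prior_ranks, current_ranks = rank(prior), rank(current)
for name in prior:
    delta = prior_ranks[name] - current_ranks[name]  # positive = rose in rank
    print(f"{name}: rank {prior_ranks[name]} -> {current_ranks[name]} ({delta:+d})")
```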

We then categorized SIP template characteristic prevalence rates from both eras into the six emphasis areas of the SIP development and implementation process (see Table 2). Of the 737 characteristics we observed in prior era SIP templates, 31% focused on assessing current conditions, 10% on determining needs, 28% on setting direction, 8% on organizing resources, 16% on taking action, and 7% on evaluating progress. Of the 663 characteristics we observed in current-era SIP templates, 25% focused on assessing current conditions, 12% on determining needs, 33% on setting direction, 8% on organizing resources, 14% on taking action, and 8% on evaluating progress. Considering change over time, SIP templates used after ESSA’s passage had more characteristics that emphasized setting direction (+5%), evaluating progress (+2%), and determining needs (+1%)—and fewer characteristics that emphasized assessing current conditions (−6%) and taking action (−2%).

SIP Template Characteristics by State

Turning to findings by state, we calculated a general “coverage rate” for each SIP template from each era from each state: the number of characteristics observed in a SIP template divided by 103 (the total number of possible characteristics). For the prior era, coverage rates ranged from 7% (Arkansas, Hawaii, Iowa) to 29% (New Jersey) with an average of 16%, meaning a prior-era SIP template included—at most—30 of the 103 characteristics. For the current era, coverage rates ranged from 3% (Utah) to 25% (New Mexico) with an average of 14%, meaning a current-era SIP template included—at most—26 of the 103 characteristics.

To examine more nuanced changes in coverage rates after ESSA’s passage, we calculated the differences in coverage rates between eras for the 44 states with both prior- and current-era SIP templates. Across these 44 states, coverage rate changes ranged from a 19% decrease (Minnesota) to an 11% increase (Florida), with an average of −3%, suggesting that SIP templates used after ESSA’s passage included fewer characteristics compared to before ESSA’s passage. Looking to the coverage rate distribution’s tails, current-era SIP templates in nine states included at least 10% fewer characteristics compared to their prior-era SIP templates: Minnesota (−19%), Tennessee (−17%), Rhode Island (−15%), Texas (−13%), Michigan (−11%), Virginia (−11%), Illinois (−10%), New Jersey (−10%), and Utah (−10%). In contrast, only one state—Florida—included at least 10% more characteristics in its current-era SIP template compared to its prior-era SIP template.

We then explored state findings with respect to the six emphasis areas of the SIP development and implementation process. For each state’s prior- and current-era SIP template, we calculated “focus rates” to assess the extent to which a SIP template focused on the six emphasis areas. To calculate these focus rates, we divided each emphasis area’s observed characteristic count by the total number of observed characteristics in that SIP template. For example, Alabama’s prior-era SIP template included 29 total characteristics, eight of which aligned with the assessing current conditions emphasis area while two aligned with the determining needs emphasis area. The resulting focus rates for these two emphasis areas in Alabama’s prior-era SIP template were 28% (eight divided by 29) and 7% (two divided by 29), respectively.
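A minimal sketch of these two calculations, seeded with the Alabama prior-era figures reported above (29 observed characteristics, 8 aligned with assessing current conditions, 2 with determining needs); the derived coverage rate is our own arithmetic, not a figure reported in the text.

```python
# A minimal sketch of the coverage-rate and focus-rate calculations described
# above, using the reported Alabama prior-era figures.
TOTAL_CHARACTERISTICS = 103

total_observed = 29  # characteristics observed in Alabama's prior-era template
observed_by_area = {  # emphasis area -> observed count (partial listing)
    "assessing current conditions": 8,
    "determining needs": 2,
}

coverage_rate = total_observed / TOTAL_CHARACTERISTICS
focus_rates = {area: n / total_observed for area, n in observed_by_area.items()}

print(f"coverage rate: {coverage_rate:.0%}")  # -> 28% (derived, not reported)
for area, rate in focus_rates.items():
    print(f"{area}: {rate:.0%}")  # -> 28% and 7%, matching the text
```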

Across all 44 prior-era SIP templates, average focus rates for the six emphasis areas were 29% for assessing current conditions, 10% for determining needs, 31% for setting direction, 8% for organizing resources, 16% for taking action, and 6% for evaluating progress. Although all prior-era SIP templates had at least one characteristic emphasizing setting direction, various states had 0% focus rates for the other five emphasis areas: 4 states included no characteristics about assessing current conditions, 13 states included nothing about determining needs, 15 states included nothing about organizing resources, 2 states included nothing about taking action, and 15 states included nothing about evaluating progress. See Table 4 for a full listing of prior-era focus rates for each emphasis area by state.

Turning to the 48 current-era SIP templates, average focus rates for the six emphasis areas were 23% for assessing current conditions, 11% for determining needs, 36% for setting direction, 8% for organizing resources, 14% for taking action, and 8% for evaluating progress. Similar to the prior era, all current-era SIP templates included at least one characteristic emphasizing setting direction, but some states had 0% focus rates for the other five emphasis areas: 8 states included no characteristics about assessing current conditions, 13 states included nothing about determining needs, 18 states included nothing about organizing resources, 8 states included nothing about taking action, and 12 states included nothing about evaluating progress. See Table 5 for a full listing of current-era focus rates for each emphasis area by state.

Comparing focus rates between the 44 states with SIP templates from both eras, current-era SIP templates had higher focus rates in determining needs (+1%), setting direction (+5%), and evaluating progress (+2%) and lower focus rates in assessing current conditions (−6%) and taking action (−2%). Focus rates in organizing resources did not change between eras. The number of states with 0% focus rates in certain emphasis areas also changed between eras: four more states in the current era did not include characteristics about assessing current conditions, three more states did not include characteristics about organizing resources, and six more states did not include characteristics about taking action. Three fewer states in the current era did not include characteristics about evaluating progress while 0% focus rates between the eras remained the same for determining needs and setting direction. See Table 6 for a full listing of focus area rate changes between eras for each emphasis area by state.

Research Question 2: Espousals of the SIP Development and Implementation Process

Drawing upon our conceptual framework, our second research question asked how our sample of SIP templates—through their design and characteristics—espoused the SIP development and implementation process before and after ESSA’s passage. The next sections, though, do not make value judgments about whether such espousals were “good” or “bad.” Our goal with the present study was to describe, not evaluate. In a later section, we critically reflect upon these espousals and our findings more generally, especially with respect to extant literature.

To set the stage for these espousals, our coding scheme included SIP template characteristics about general school details (e.g., principal name); school demographic data; SIP development details; school performance data; early warning data; needs assessment data; goals; objectives; strategies; action steps; family and community engagement; budgeting; and other information, such as schools’ plans for staff professional learning, technology, and cultural competency. Extant research (e.g., Duke et al., 2013) suggests that the bulk of a SIP’s content focuses on sections related to “goals, objectives, strategies, and action steps,” which—for simplicity—we abbreviated as GOSAS. Across all SIP templates, we coded whether each GOSAS included a general description, a rationale for selection, evidence to support selection, progress benchmarks, measurable outcomes, timeline for completion, progress monitoring information, and those responsible for doing the work (see Table 1).
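As an illustration of how such a coding checklist can be represented, here is a minimal sketch; the element and attribute names paraphrase the text, and the study’s full instrument spans 103 characteristics (see Table 1).

```python
# A minimal sketch of the GOSAS coding checklist described above. Names
# paraphrase the text; the actual instrument contains 103 characteristics.

GOSAS_ELEMENTS = ["goals", "objectives", "strategies", "action steps"]
GOSAS_ATTRIBUTES = [
    "general description", "rationale for selection", "supporting evidence",
    "progress benchmarks", "measurable outcomes", "timeline for completion",
    "progress monitoring information", "persons responsible",
]

def blank_gosas_codes() -> dict[str, dict[str, bool]]:
    """One presence/absence code per GOSAS element-attribute pair."""
    return {el: {attr: False for attr in GOSAS_ATTRIBUTES} for el in GOSAS_ELEMENTS}

codes = blank_gosas_codes()
codes["goals"]["general description"] = True  # e.g., template prompts for goal text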

Prior-Era Espousal

The typical SIP template used during the NCLB era included approximately 17 of the 103 SIP template characteristics. A SIP template from the prior era often included a mission/vision/purpose statement; a required ELA goal; and general descriptions of goals, strategies, and action steps. A prior-era SIP template did not often include more granular details about GOSAS, especially information related to why a particular GOSAS was selected (e.g., a connection to prior school performance data or current needs assessment data) or how progress and ultimate success for a particular GOSAS would be measured (e.g., student formative assessment scores, end-of-year standardized test scores). A SIP template from the prior era also did not include early warning data or a cultural competency plan. Finally, 77% of the characteristics included in a typical prior-era SIP template emphasized developing the SIP (i.e., assessing current conditions, determining needs, setting direction, organizing resources) whereas 23% emphasized implementing the SIP (i.e., taking action, evaluating progress). Although this prior-era espousal aligns with NCLB’s (2002) focus on student achievement in ELA, the lack of SIP template characteristics related to SIP implementation—especially monitoring and measuring progress—suggests that SEAs charged educators more so with developing improvement efforts and less so with implementing those efforts. Such an espousal comports with extant literature published before ESSA’s passage asserting that educators develop SIPs and then rarely refer to them as implementation occurs during the school year (e.g., Duke, 2015; Duke et al., 2013).

Current-Era Espousal

The typical SIP template used during the ESSA era included approximately 14 of the 103 SIP template characteristics. Although a current-era SIP template included many of the same characteristics as a prior-era SIP template (e.g., a mission/vision/purpose statement, a required ELA goal), more states called for SIPs to include SMART goals, goals related to students’ nonacademic outcomes (e.g., behavior, social-emotional learning), and evidence for selecting strategies and setting measurable outcomes for strategies. Fewer states called for SIPs to include science and social studies goals, details about monitoring progress on meeting goals, subject area test scores disaggregated by student subgroups, and staff and community demographic data. Similar to the prior era, though, the typical current-era SIP template included no characteristics about a school’s cultural competency plan. Finally, fewer characteristics in a typical current-era SIP template emphasized assessing current conditions whereas more characteristics prompted educators to set direction. This current-era espousal aligned with some of ESSA’s tenets (Hale et al., 2017), such as drafting SIP goals related to more than student achievement in ELA and mathematics and considering evidence with respect to school improvement strategy selection. Despite these increased emphases, the typical current-era SIP template included fewer characteristics than its prior-era predecessor. The next section expounds upon positive and negative consequences of these changes between eras.

Discussion

ESSA’s Lackluster Influence

Given our interest in change over time, we start by returning to our conceptual framework to discuss differences in states’ espoused theories of school improvement planning between the prior era and the current era. Despite ESSA’s passage, the typical current-era SIP template—by and large—looked rather similar to the typical prior-era SIP template. As a result, the espoused SIP development and implementation process will likely remain rather similar during the current era.

Curiously, though, some states appeared to use their ESSA-granted autonomy to decrease the number of characteristics in their SIP templates. From one perspective, fewer SIP template characteristics can provide educators with more autonomy over school improvement efforts (see Mintrop & Sunderman, 2009), which was one of ESSA’s espoused goals (Portz & Beauchamp, 2022). Such autonomy can create conditions for educators to proactively identify and address internally developed, school-specific needs rather than reactively respond to mandates from externally developed school accountability policies (Altrichter & Kemethofer, 2015).

From a different perspective, fewer SIP template characteristics may prompt less attention on certain critical issues, especially equity. Current-era SIP templates from four states studied by Wronowski and colleagues (2022), for instance, included few characteristics related to enhancing educators’ cultural competency to better serve increasingly diverse student populations, shifting educators’ deficit views to better serve families and communities, or involving community members in school improvement efforts. Although fewer SIP template characteristics may enhance educator autonomy and promote educator professionalization, such omissions place greater responsibility on educators—especially educational leaders—to use their preparation to ensure improvement efforts address student, teacher, and community needs.

Relatedly, we observed that fewer SIP templates used after ESSA’s passage called for educators to provide some kind of evidence for GOSAS selection—a finding that stood in direct contrast to ESSA’s charge that select improvement strategies, especially those used in underperforming schools, needed to be supported by evidence (Hale et al., 2017). This finding aligns with recent work on states’ ESSA plans that found few SEAs themselves included evidence to support their espoused approaches to school improvement more generally (VanGronigen et al., 2022). This lack of modeling at the state level may prompt school- and school district–level officials to act similarly.

There was one bright spot, though—more SIP templates used after ESSA’s passage called for educators to include nonacademic goals, such as those focused on student behavior generally and SEL for students specifically. Our findings suggest that ESSA’s provision that states develop broader criteria to measure school performance took root in some states’ current-era SIP templates. Despite being incremental in the larger scheme of our findings, this encouraging change suggests that some states may have used their ESSA-granted autonomy to reshape their espoused theories of school improvement planning, signaling to educators that students’ nonacademic outcomes deserved attention alongside students’ academic outcomes.

Prioritizing Development Over Implementation

Numerous SIP templates from both eras prioritized the development rather than the implementation of SIPs, suggesting a continuation from NCLB to ESSA. Recall that we categorized 103 SIP template characteristics into one of six emphasis areas (assessing current conditions, determining needs, setting direction, organizing resources, taking action, and evaluating progress) and aligned the first four areas with developing SIPs and the last two areas with implementing SIPs. Although we recognize that there were fewer possible implementation-related SIP template characteristics, we nevertheless found the vast majority of SIP templates from both eras focused more on SIP development—not SIP implementation.

At first glance, this finding seems intuitive because the phrase “school improvement plan” literally includes the word “plan,” which can be a synonym for develop (see Beach & Lindahl, 2004). Yet, as Aaron and colleagues (in press) found in a study of Florida educators charged with developing and implementing SIPs, a skewed emphasis on SIP development raises a question about the extent to which educators spend too much time thinking about developing SIPs and too little time implementing SIPs. Such a question invites two perspectives. First, Bryk (2015) suggests that many educators engage in “solutionitis,” which is “jump[ing] to implement a policy or programmatic change before fully understanding the exact [challenge]” (p. 468). A SIP template that prioritizes implementation may encourage educators to spend less time on developing a rich understanding of their school’s unique, contextualized needs. Without such an understanding, wider school improvement efforts may end up being misaligned to school needs and, ultimately, less effective.

On the other hand, a second perspective questions the outsized emphasis on SIP development. Emphasizing SIP development at the expense of SIP implementation may lead to “analysis paralysis,” where educators may spend much of their already limited time gathering and analyzing various sources of data. Although these actions are certainly important, they may result in a SIP being what we term “frontloaded”—meaning a considerable amount of work went into developing a rich understanding of a school’s needs, but less time was spent discussing that understanding and then laying out the actual step-by-step work to address those needs. Consequently, educators may be left with few concrete strategies and action steps in their SIPs that can be readily implemented during the school year (see Aaron et al., in press).

Extending the previous point, more emphasis on SIP development rather than implementation—specifically the lack of detail on concrete strategies and action steps—may continue preventing educators from viewing their SIPs as a “living” document (Duke, 2015; Timar & Chyu, 2010). Prior work (e.g., Meyers & VanGronigen, 2019; Mintrop et al., 2001) shows that some educators draft SIPs that are just “good enough” (Simon, 1957, p. xxv) for approval by school district and/or SEA officials and, once approved, place those SIPs on a shelf and do not review them until conducting an end-of-school-year “autopsy” (Duke, 2015, p. 89) to assess the effectiveness of improvement efforts. If SIP templates put development and implementation on more equal footing, educators may be more likely to reference, edit, and refine their SIPs throughout the school year as they implement improvement efforts.

The Notion of “Surprise”

We close our discussion with the notion of “surprise” regarding our findings. Some findings were unsurprising and aligned with prior literature, such as the number of SIP templates from both eras that required specific goals tied to student ELA and mathematics performance (e.g., Forte, 2010). Other findings, though, surprised us and ran contrary to prior literature.

First, approximately 50% of SIP templates from each era lacked a prompt for a school vision, mission, or purpose statement. Scores of research studies and practitioner resources (e.g., Murphy & Torre, 2015; Stevenson, 2019) have highlighted just how essential it is for schools to create some kind of driving purpose for improvement efforts. Without explicit prompting and an overt espoused emphasis on that driving purpose, we wonder about the extent to which educators will draft some kind of overarching vision for improvement efforts. Second, no SIP templates from either era explicitly mentioned characteristics related to cultural competency, suggesting a missed opportunity among states to espouse the importance of schools—via improvement efforts—attending to issues of equity and social justice. These omissions also do not comport with increasing calls for, and attention to, identifying and addressing equity- and social justice–related issues in schools (Galloway & Ishimaru, 2015; Wronowski et al., 2022). Third, and related to the second point, fewer current-era SIP templates called for the inclusion of demographic data about students, staff, or the local community. This decreasing prevalence between eras elicits questions about the extent to which SIPs can be a tool for helping educators look for—and make efforts to address—potential disproportionalities within and between students, staff, and the local community on a range of issues. The reduced espoused focus among states on including and analyzing demographic data may lead to the continued persistence of an array of inequities in schools.

A final surprising finding was the considerable range of coverage rates of SIP template characteristics across states. This variation suggests that some states espouse the SIP development and implementation process in one way whereas other states espouse the process in a different way. Although such a finding is perhaps expected because of the U.S.’s decentralized educational system (Honig & Rainey, 2012), it could signal a lack of consensus on what “good planning” can and should look like in U.S. schools. Indeed, the divergence among states suggests that some states may not have consulted the (limited) literature and resources on school improvement planning. This variation raises concerns that educators in some states may continue to create low-quality SIPs and enact the SIP development and implementation process as a compliance exercise rather than a helpful way to carry out school improvement efforts (see Meyers & VanGronigen, 2019).

Implications, Recommendations for Future Work, and Conclusion

Our findings prompt several implications for policy, preparation, and practice. Regarding policy, states must develop and disseminate a coherent theory of action about how they think school improvement happens, where a SIP resides in that theory of action, and how educators—especially principals charged with leading the SIP development and implementation process—can be supported to develop and implement a SIP that recognizes both their school’s needs and the state’s theory of action. With our findings suggesting no specific cultural competency-related SIP template characteristics, for instance, would such a topic be on the typical principal’s radar when leading the SIP development and implementation process? This implication walks a fine line, though, between mandate and recommendation. Although states may not require certain characteristics in their SIP templates, they can still list them to prompt educators to reflect on key topics (e.g., equity) and encourage educators to consider those key topics in their SIPs.

Turning to preparation, principals are often the primary drivers of developing and implementing SIPs. As a result, educational leadership preparation programs (ELPPs) should allocate specific space and time in their programs of study for aspiring leaders to review, discuss, and critique their state’s SIP template. ELPPs could also share SIP templates from other states and prompt aspiring leaders to consider alternative ways the SIP development and implementation process may unfold and ultimately be accomplished. Moreover, ELPPs should provide aspiring leaders with explicit training that describes school improvement as a systems issue—and that even if a SIP template does not prompt for information about certain parts of the system (e.g., a reflection on early warning data), aspiring leaders should still consider the wider system when gathering and analyzing information for inclusion in their SIPs.

Extending the previous implications to practice, our findings suggest that educators in some states may receive very little guidance from their SIP templates about improvement efforts generally and the SIP development and implementation process specifically. Consequently, the onus of identifying and addressing schools’ unique, contextualized needs while also satisfying external mandates continues to rest mostly with school-level educators—not other actors in the system. As such, school district officials, in particular, should take an active role in supporting school-level SIP development and implementation efforts. First, these officials should emphasize to their school-level leaders the need to prioritize the SIP development and implementation process and the need for SIPs to comport with school district goals (e.g., those listed in a school district’s strategic plan) and school-specific needs. Second, school district officials should critically review SIPs early in their development to ensure alignment among state regulations (e.g., SIP template prompts), school district goals, and school-specific needs. Several meetings that occur before the start of the fall and spring semesters can provide important opportunities for school district officials to offer essential feedback before school-level leaders finalize and start implementing a SIP. Third, school district officials should spend more time with school-level leaders, especially principals, throughout the school year to monitor SIP implementation. These monitoring efforts, which school district officials should take responsibility for initiating and sustaining, may occur monthly and consist of reviewing a school’s progress toward meeting SIP goals and discussing whether any revisions to SIP contents are warranted based on implementation (e.g., new strategies or action steps). We recognize that these officials may not know how to support SIP-related efforts, though, so school-level officials—especially principals—may need to provide contextual insight to aid school district officials in providing feedback on early SIP development and later SIP implementation efforts.

We also recommend the continuation of this line of inquiry in future research. Future work could detail and compare how educators in a few states develop and implement SIPs. Colorado’s current-era SIP template, for instance, included several reflection prompts whereas New Mexico’s current-era SIP template was organized around plan-do-study-act (PDSA) cycles. Original qualitative data collection using interviews and/or focus groups could explore how educators interact with SIP templates and what subsequent SIP implementation looks like within and across states. Such work would offer insight into enacted theories of school improvement planning and be an excellent complement to the present study’s focus on espoused theories.

To close, the present study was among the first to specifically examine the characteristics of SIP templates used in states before and after ESSA’s passage. Although we identified some encouraging changes in what SIP templates prompted after ESSA’s passage, the typical SIP template used during the ESSA era looked much like the typical SIP template used during the NCLB era. Consequently, not much is poised to change in the near future with respect to the SIP development and implementation process. We nevertheless remain steadfast that SIP templates can be a tool to help educators identify and address a range of important issues in their schools, especially those related to equity and social justice. States, especially SEAs, occupy powerful positions to help shape that kind of work—and an intentionally developed, comprehensive SIP template is one tool that educators can use to foster more high-quality, equitable learning experiences for all students.

 

The Mediterranean Diet: Good for your health and your hip pocket


Peer-Reviewed Publication

UNIVERSITY OF SOUTH AUSTRALIA

Mediterranean Diet 

IMAGE: THE MEDITERRANEAN DIET IS NOT ONLY GOOD FOR YOUR HEALTH BUT ALSO FOR YOUR WEEKLY BUDGET.

CREDIT: "COLOURFUL VEGETABLES" BY ADACTIO IS LICENSED UNDER CC BY 2.0.

We’ve heard it time and time again – the Mediterranean diet is great for our health. But despite the significant health benefits of this eating plan, a common deterrent is often the expected costs, especially when budgets are tight.

Now, new research from the University of South Australia shows that the Mediterranean diet is not only good for your health but also for your weekly budget, saving a family of four $28 per week (or $1456 per year) compared to the typical Western diet.

The study compared the nutrition profile and weekly costs of three food baskets based on the typical Australian Western diet, the Mediterranean diet, and the Australian Guide to Healthy Eating (AGHE).

It found that the Mediterranean diet and the Australian Guide to Healthy Eating met recommendations for food groups, macronutrient distribution and key micronutrients associated with good health, but the typical Australian diet significantly lacked fibre, zinc, potassium, calcium, magnesium, vitamin E and vitamin B6, and had double the recommended salt intake.

The Mediterranean diet cost $78 per week for a single person household, $135 for a household of two, $211 for a family of three, and $285 for a family of four.

UniSA researcher and PhD candidate Ella Bracci says the research shows that a Mediterranean diet can be a viable and healthy option for cost-conscious families.

“Diet is one of the leading modifiable risk factors for chronic disease. Yet a significant number of Australians are still not consuming a balanced healthy diet,” Bracci says.

“Australians tend to eat a fair amount of food that’s high in fat, salt, and sugar, which reflects the Western diet. Unfortunately, this is also contributing to increased rates of type 2 diabetes, heart disease, obesity, and osteoporosis.

“To help combat unhealthy food choices, global agencies are increasingly endorsing plant-based diets such as the Mediterranean diet as their preferred guide to healthy eating. The challenge, however, has been getting Australians to adopt these diets, and one of the greatest barriers is perceived cost.

“The Mediterranean diet encourages eating fruits and veggies, whole grains, nuts, extra virgin olive oil, seeds and seafood, and there is a view that these foods are more expensive. And with cost of living being so high in Australia, it’s no surprise that people are being careful about where their hard-earned dollars go.

“This research shows how a Mediterranean diet can be a cost-effective option, letting people prioritise both their health and their hip pocket.”

The Australian Guide to Healthy Eating recommends that a balanced, healthy diet comprises five food groups: fruit, vegetables and legumes, breads and cereals, dairy foods, and meat (and alternatives).

Only 8% of Australians eat the recommended 375g of vegetables per day, with the average Australian consuming up to 35% of their daily energy from foods high in salt, added sugars and unhealthy fat.

UniSA’s Associate Professor Karen Murphy says healthy food shopping is more affordable than some may expect.

“Eating a balanced healthy diet doesn’t have to break the bank, but eating unhealthy food can damage your body,” Assoc Prof Murphy says.

“Whether you prefer to follow the Australian Guide to Healthy Eating or the Mediterranean diet, both provide the necessary nutrients and energy, but as this study shows, the Mediterranean diet is generally less expensive.

“As with anything, shopping around, looking out for specials and mark-downs, purchasing in season, or stocking up on frozen, dried, and canned produce can help reduce the costs of your weekly grocery shop, as can choosing home-brand or non-premium products.

“A $28 saving a week may not seem like much, but over a year this is nearly $1,500, which can make all the difference to your budget when times are tough.”

…………………………………………………………………………………………………………………………

Notes to editors:

Food baskets were developed to meet recommended serving sizes of food groups and to provide adequate macronutrient and micronutrient intake over a week. Four reference households were used to capture an array of different nutritional needs.

The study modelled data on four representative Australian households: household of four (two adults, two children, with a weekly income of $3670); household of three (one adult, two children, with a weekly income of $1835); household of two (elderly pensioners with a weekly income of $774); and a household of one (adult with a weekly income of $1835).

Comparative costs for the three weekly food baskets were:

People in household       Australian Western diet   Australian Guide to Healthy Eating   Mediterranean diet
1 adult                   $80                       $75                                  $78
2 older adults            $157                      $186                                 $135
1 adult, 2 children       $217                      $238                                 $211
2 adults, 2 children      $313                      $315                                 $285

  • Food basket costs were based on prices from Coles online.
  • The grocery list of foods and quantities is available on request.
  • An infographic comparing the food and costs of the Western diet and the Mediterranean diet is available on request.
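For readers who want to check the headline figures, the following minimal Python sketch (our illustration, not material from the study) reproduces the weekly and annual saving for a family of four from the weekly basket costs in the table above.

    WEEKS_PER_YEAR = 52

    # Weekly basket costs in AUD for a household of 2 adults and 2 children,
    # taken from the comparison table above.
    western_cost = 313
    mediterranean_cost = 285

    # Weekly saving from switching to the Mediterranean diet basket.
    weekly_saving = western_cost - mediterranean_cost   # $28 per week

    # Annualised saving, matching the figure reported in the release
    # ($28 x 52 = $1,456, i.e., "nearly $1,500").
    annual_saving = weekly_saving * WEEKS_PER_YEAR      # $1,456 per year

    print(f"Weekly saving: ${weekly_saving}; annual saving: ${annual_saving}")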

…………………………………………………………………………………………………………………………


Failed antibiotic now a game-changing weed killer for farmers

Peer-Reviewed Publication

UNIVERSITY OF ADELAIDE

IMAGE: (From left) Emily Mackie, Dr Andrew Barrow and Dr Tatiana Soares da Costa.

CREDIT: University of Adelaide

Weed killers of the future could soon be based on failed antibiotics.

A molecule initially developed to treat tuberculosis, but which failed to progress out of the lab as an antibiotic, is now showing promise as a powerful foe for weeds that invade our gardens and cost farmers billions of dollars each year.

While the failed antibiotic wasn’t fit for its original purpose, scientists at the University of Adelaide discovered that, by tweaking its structure, the molecule became effective at killing two of the most problematic weeds in Australia, annual ryegrass and wild radish, without harming bacterial or human cells.

“This discovery is a potential game changer for the agricultural industry. Many weeds are now resistant to the existing herbicides on the market, costing farmers billions of dollars each year,” said lead researcher Dr Tatiana Soares da Costa from the University of Adelaide’s Waite Research Institute.

“Using failed antibiotics as herbicides provides a short-cut for faster development of new, more effective weed killers that target damaging and invasive weeds that farmers find hard to control.”

Researchers at the University’s Herbicide and Antibiotic Innovation Lab discovered there were similarities between bacterial superbugs and weeds at a molecular level.

They exploited these similarities and, by chemically modifying the structure of a failed antibiotic, were able to block the production of the amino acid lysine, which is essential for weed growth.

“There are no commercially available herbicides on the market that work in this way. In fact, in the past 40 years, there have been hardly any new herbicides with new mechanisms of action that have entered the market,” said Dr Andrew Barrow, a postdoctoral researcher in Dr Soares da Costa’s team at the University of Adelaide’s Waite Research Institute.

It’s estimated that weeds cost the Australian agriculture industry more than $5 billion each year.

Annual ryegrass in particular is one of the most serious and costly weeds in southern Australia.

“The short-cut strategy saves valuable time and resources, and therefore could expedite the commercialisation of much needed new herbicides,” said Dr Soares da Costa.

“It’s also important to note that using failed antibiotics won’t drive antibiotic resistance because the herbicidal molecules we discovered don’t kill bacteria. They specifically target weeds, with no effects on human cells,” she said.

It’s not just farmers who could reap the benefits of this discovery. Researchers say it could also lead to the development of new weed killers to target pesky weeds growing in our backyards and driveways.

“Our re-purposing approach has the potential to discover herbicides with broad applications that can kill a variety of weeds,” said Dr Barrow.

This research has been published in the journal Communications Biology.

Dr Tatiana Soares da Costa and her team are now looking to discover more herbicidal molecules by re-purposing other failed antibiotics, and are partnering with industry to bring new, safe herbicides to market.

Funding for this research was provided by the Australian Research Council through a DECRA Fellowship and a Discovery Project awarded to Dr Tatiana Soares da Costa.

The first author on the paper is Emily Mackie, a PhD student in Dr Soares da Costa’s team, who is supported by scholarships from the Grains Research and Development Corporation and the Research Training Program. Co-authors include Dr Andrew Barrow, Dr Marie-Claire Giel, Dr Anthony Gendall and Dr Santosh Panjikar.

The Waite Research Institute stimulates and supports research and innovation across the University of Adelaide and its partners that builds capacity for Australia’s agriculture, food, and wine sectors.