Comparing Institutional Effectiveness Standards, Part II
Last month, we talked about what the seven accreditation regions in the U.S. have in common in terms of their standards regarding institutional effectiveness. This month, let’s talk about the differences in how the regions address institutional effectiveness.
Let’s bear in mind that we’re aiming at a moving target: the accreditation standards evolve over time. In fact, in just a few days (at the SACSCOC Annual Meeting, December 2-5), the Southern region is having its member institutions vote on what seems to be a pretty substantial revision of that region’s accreditation principles. The points I make below refer to the accreditation principles current as of this writing (i.e., SACSCOC’s 2012 Principles of Accreditation) rather than the revision going before the membership.
OK, with that caveat out of the way, here are some of the unique ways in which the seven accreditation regions talk about institutional effectiveness.
If you are an institution accredited by the New England Association of Schools and Colleges (NEASC), you can expect to have measures of student achievement prescribed for you by NEASC. Most of the other regions offer some student achievement measures as examples, but institutions are free to choose their own. Two of the other regions do prescribe student achievement measures, but NEASC prescribes the most: seven measures, compared with the three specified by those other regions. In case you're wondering, those seven achievement measures are: progression rate, retention rate, transfer rate, graduation rate, loan default rate, licensure exam pass rates, and employment rates.
If you are an institution accredited by the Middle States Commission on Higher Education (MSCHE), MSCHE specifies types of continuous improvement activities in which an institution or program can engage. These read like helpful examples rather than an exhaustive typology, and they should be useful when an institution is wrestling with the question, "Our students are doing well; how else are we supposed to improve?" The types of continuous improvement activities listed are: improving learning; improving curriculum or pedagogy; revising support services; professional development; planning and budgeting for programs and services; informing constituents about programs; and improving key indicators of success such as retention or graduation rates.
In the Southern Association of Colleges and Schools Commission on Colleges (SACSCOC), the accreditation principles are (as of this writing) broken into three sections: Core Requirements, Comprehensive Standards, and Federal Requirements. SACSCOC also expects to see documented continuous improvement in administrative departments, in addition to academic programs and student support departments; the other regions do not explicitly address a need for continuous improvement in an institution's administrative units, though I expect this to become a point of convergence in the near future. SACSCOC also requires its accredited institutions to develop and implement a Quality Enhancement Plan (QEP), a five-year plan designed to improve student learning or the environment for learning.
Institutions accredited by the Higher Learning Commission of the North Central Association (HLC) have three different accreditation "Pathways" to choose from: the Standard Pathway, the Open Pathway, and AQIP (the Academic Quality Improvement Program). Each of the three Pathways focuses on quality assurance and improvement, but AQIP places a greater emphasis on continuous improvement. An institution must be relatively evolved before the Commission will allow it onto the AQIP Pathway, and if an AQIP institution seems to be struggling, the reviewers may recommend that it revert to one of the other Pathways.
The Northwest Commission on Colleges and Universities (NWCCU) has a very interesting distinguishing characteristic: an institution accredited by NWCCU must identify "core themes" that represent each facet of its mission statement. The institution must establish measurable objectives for each core theme and use these indicators to track mission attainment over time. All the regions regard the institutional mission as very important, but NWCCU is unique in its expectation that an institution's performance vis-à-vis its mission be explicitly quantified. In my humble opinion, this is a very intriguing and potentially powerful way to operationalize an institution's mission, with the side benefit of helping to prevent "mission creep." Why have a long and intricate mission statement if you know you will be required to measure your performance on each facet of your mission?
Finally, we turn our attention to the Western Association of Schools and Colleges (WASC), or more properly the WASC Senior College and University Commission (WSCUC) and the Accrediting Commission for Community and Junior Colleges (ACCJC). Structurally, WASC is unique in that it is the only regional accreditor centered on a single state (California, along with Hawaii and the Pacific territories), and it is also the only regional accreditor with separate commissions for four-year and two-year colleges. WSCUC and ACCJC are not only separate commissions; they have distinct accreditation standards as well.
WSCUC and ACCJC share a focus on the program review process, more so than the other accreditation regions. In addition to its unique focus on senior institutions, WSCUC also lays out three "Core Commitments" that it expects all institutions to embrace: a Core Commitment to Student Learning and Success, a Core Commitment to Quality and Improvement, and a Core Commitment to Institutional Integrity, Sustainability, and Accountability.
ACCJC, with its unique focus on community and junior colleges, requires data on student learning outcomes to be disaggregated by student demographics (something not explicitly required by the other regions, though I suspect that peer reviewers in the other regions would welcome seeing institutions voluntarily take up this practice). ACCJC also requires that personnel evaluations address how college personnel use assessment results, a practice I have often heard recommended as a way to get people to "take assessment seriously." ACCJC requires a college's Board of Trustees to be engaged in its own continuous improvement process. And finally, the ACCJC standards explicitly state that, in the case of multi-college districts, district-level planning must be integrated with college-level planning.
Anyone who has spent time reading accreditation standards knows that they make for very dense reading. I have identified the key factors that, in my eyes, distinguish the accreditation regions' standards. What do you think? Does your accreditation region have additional unique characteristics?
Check back next month, when we won't be talking about accreditation standards – I promise!
Joe Baumann, M.S., Lead Consultant for SPOL