
ASSESSMENT DEFINITIONS AND GUIDELINES


Outcomes

Each program or unit participating in the assessment process uses an outcome-based approach. Academic programs use Student Learning Outcomes (SLOs) as the basis for their assessment program, while non-academic units may use a combination of non-student learning outcomes and SLOs. The SLOs for an embedded certificate should be differentiated from those of the corresponding degree program.

Student Learning Outcomes should describe what students will know or be able to do and should be stated using action verbs as described in Bloom's taxonomy. Clearly written SLOs articulate faculty or staff expectations about what knowledge is to be learned and what skills, attitudes, and behaviors are to be developed by students involved in the program or unit. SLOs should be meaningful and measurable.

Programs with external accrediting or certification agencies should follow those agencies' requirements or guidelines when creating or modifying program outcomes. In some cases, outcomes are mandated by the accrediting agency; in other cases, professional organizations provide guidelines for writing outcomes.

Non-academic units may have outcomes relating to co-curricular activities, student/client satisfaction, student engagement, unit performance, or student learning. Outcomes should be phrased not in terms of the activities the unit engages in, but in terms of the unit's desired results. As with student learning outcomes, outcomes from non-academic units should be meaningful and measurable.

Outcomes should be aligned, when appropriate, to the mission of the unit, the mission of the university, the core values of the university, CU's general education SLOs, the current strategic plan, and the sub-components of the Higher Learning Commission's criteria for accreditation.

Curriculum Mapping

Academic programs should develop a program curriculum map to ensure that students are given the opportunity to develop competence in program-level student learning outcomes in core (required) courses. In any given core course, a student learning outcome may be not covered, introduced (I), reinforced (R), or mastered (M). Curriculum mapping ensures that all SLOs are aligned with the core courses in the academic program. A curriculum map may reveal, for example, that one or more SLOs are not associated with any core course or that one or more core courses are not aligned with any SLO. The process of creating a curriculum map may lead programs to refine program SLOs or to reexamine which courses are required as core courses in the program. The program curriculum map should be updated whenever the SLOs or the required core courses for the program change.

Embedded certificate programs should use a check mark to indicate alignment between the core course and the SLO.

Starting in AY 2019-2020, all undergraduate programs were asked to develop a general education curriculum map. The general education curriculum map should include all core (required) courses for the program and the general education SLOs. Instead of the I, R, or M designations, a check mark should be used to indicate alignment between a core course and the indicated general education SLO. Embedded certificate programs and graduate programs do not need to complete a general education curriculum map.

Measures 

Each outcome should be assessed using at least one assessment measure. Academic programs should have direct measures of SLOs. Direct measures are those that result in a direct examination of students' work or performance. Examples of direct measures include standardized tests, licensure exams, portfolios containing students' work, presentations, locally developed tests or embedded test questions, written assignments, performances, etc. When deciding which measure best assesses a student learning outcome, consideration should be given to the expected skill level of the students, the reliability and validity of the instrument, and the motivation of the students to do well on the assessment measure. For assessment measures such as papers, presentations, essays, performances, portfolio artifacts, etc., a rubric should be used to ensure consistency in data collection.

Indirect measures are those in which individuals report how much they have learned, what they have learned, or how satisfied they are. Indirect measures are more likely to be used by non-academic units, although some academic programs may use indirect measures in addition to direct measures. An administrative measure gauges unit effectiveness in non-learning areas.

A measure for a non-academic unit may be designated as co-curricular if the associated SLO is a learning outcome aligned with a Supported Initiative found under the General Education Outcomes or Institutional Priorities (Mission Statement or Core Value Statement).  A co-curricular measure may be either a direct measure or an indirect measure.

Multiple measures for each outcome provide more reliable and meaningful information than a single measure. It is important to remember that student grades and course averages are measures of individual student performance in a class or on an assignment and are not measures of a specific SLO. Non-academic units are more likely to have one measure per outcome. For academic programs, the ideal is three direct measures of different types per SLO; however, depending on the structure of the academic program, not every program will be able to have three direct measures per outcome.

Benchmarks and Targets

For each assessment measure, program faculty or unit staff should use their professional judgment to determine a benchmark for the measure. The benchmark is the performance level that indicates acceptable performance for an individual student. The language in the benchmark should match the language used in the rubric or measure.

A target is the percentage of students or respondents expected to score at or above the benchmark. While the benchmark is the expectation for an individual student's performance, the target is tied to the overall performance of all assessed students. For example, a benchmark might be a score of 3 or higher on a 4-point rubric, with a target that 80 percent of assessed students meet that benchmark. When possible, targets should be given as a percentage of the students assessed, since the number of students may vary from one application of the assessment measure to another. Program faculty and unit staff should use their professional judgment to determine what percentage of students should score at or above the benchmark.

Once established, benchmarks and targets should generally not vary from one assessment cycle to the next, although it may be necessary to adjust a benchmark or target as the program's assessment process matures.

Data Collection, Findings, and Analysis of Findings

Data from each assessment instrument should be collected systematically and consistently. Depending on the size of the population, either all students or a sample of students should be assessed. If a sample is used, care should be taken to minimize biases (e.g., only collecting data in the Fall, only from daytime courses, or only from traditional courses). Sampling should ensure that every artifact has an equal chance of being chosen and that enough artifacts are collected for the data to be representative of the population.

Once data have been collected, they should be analyzed and discussed with appropriate faculty and staff. Data analysis should help identify strengths and weaknesses and should form the basis of discussion of course, program, or unit modifications. It should be determined whether the target was "Met," "Not Met," or "Not Reported This Period."

When displaying results, tables and graphs should be used. If a rubric is used to score the assessment measure, indicating the number and percent of students who scored at each performance level for each criterion can help identify strengths and weaknesses in student learning. If the assessment measure has multiple questions aligned with a SLO, examining how students perform on each question (item analysis) can also help identify strengths and weaknesses. When possible, it is informative to compare student performance with data from previous years or, if applicable, with national, regional, or state norms. If data have been collected for three or more years, a trend chart is desirable; for readability, a trend chart should include at most five years of data.

Action Items

Action Items should describe new activities or changes that will be implemented, not the status quo. Action Items should be driven by analysis of the data, especially when a target is not met, although an Action Item may be added even when the target is met.

The development and implementation of action items based on data collected for outcomes is commonly referred to as “closing the loop.” Especially when the data indicate that a target is not being met, faculty and staff should discuss what modifications can be made to help individuals meet the benchmark. For academic programs, Action Items tied to specific SLOs might focus on a particular course (re-ordering topics, changing how topics are presented, adding more emphasis to a topic), on students' learning styles and teaching methods (adding more activity-based assignments, adding group work, assigning additional homework, providing more feedback to students on their work), on a sequence of courses (introducing a topic earlier in the program of study, reinforcing the topic in additional courses, adding a prerequisite or co-requisite to a course), or on the program itself (removing or adding core courses). For non-academic units, Action Items tied to specific outcomes might focus on changing office practices, changing procedures, or providing additional training for staff.

Analysis Questions

Analysis Questions should be answered in complete sentences using proper spelling and grammar. The answers are used as the basis for a section in the Oklahoma State Regents for Higher Education's Annual Student Assessment Report, which is due each Fall, and should be written so that they are understandable to an external audience. Answers to these questions should be global in nature and give an overall picture of the program.