Executive Summary

The Task Force on Program Evaluation and Assessment was charged by the Provost to develop mechanisms for evaluation and assessment of the University's undergraduate and graduate programs and research units. The recommended Program Evaluation and Assessment Plan for the University of Illinois at Urbana-Champaign identified three interrelated levels of evaluation: external, campus, and local. The external level of evaluation refers to accreditation and external monitoring, focusing primarily on the University's accreditation by the North Central Association. The campus level of evaluation, that is, the Expanded Campus Profile, is required for campus-level planning and fiscal decision making. It is intended to provide comparative information and show trends across time for several indicators. The local level of evaluation is conducted by department-level units and is aimed at program self-improvement and local unit planning.

The North Central Association of Schools and Colleges' Commission on Institutions of Higher Education has recently begun to place special emphasis on the assessment of student academic achievement as part of its ten-year accreditation review. In concurrence with and in response to that emphasis, the Task Force recommends that the Provost, in conjunction with the UIUC Senate, charge a committee with oversight of student outcomes assessment, designate a program coordinator, and coordinate existing student outcome data collection efforts (see the recommendations under External Evaluation below).

The Program Evaluation and Assessment Plan includes, as a second level of evaluation, a system of campus indicators termed the Expanded Campus Profile. This is a system of centrally-provided indicators and unit-supplied information to be collected and reported annually for each department and aggregated at the college and campus levels. The campus indicator system provides information for central planning and decision making, and the campus indicators (i) apply to the wide variety of academic and research units on campus; (ii) track change over time; (iii) impose a minimal burden on units and campus; (iv) represent the resources, performance and quality of a unit; and (v) relate to local evaluations. The Task Force recommends that the Provost implement the Expanded Campus Profile system in academic year 1996-97, assign responsibility for compiling and disseminating the centrally-provided indicators to an administrative unit in the Provost's office, and provide for continued oversight and refinement of the system (see the recommendations under Campus Indicators below).

Local evaluations will be initiated and conducted voluntarily by units and aimed at unit self-improvement and unit planning. To meet these goals, the Task Force recommends that the Provost establish and fund an evaluation assistance group to help units with the development and implementation of local evaluations (see the recommendations under Local Evaluation below).

 


Introduction

Background

The Task Force on Program Evaluation and Assessment was established by the Provost in the Fall of 1994. The charge to the Task Force was to develop mechanisms for evaluating our University’s undergraduate and graduate education programs and research units and for assessing student academic achievement. During the academic year 1994-95, the Task Force devised a plan for program evaluation and assessment. The approach was intended to provide information for program planning and improvement, for state and federal reporting requirements, for accreditation reviews, for long-range or strategic planning, and for use in University resource allocation.

The 1994-95 Task Force polled all University units, which provided descriptions of the evaluation and assessment procedures they were using. Input ranged from qualitative descriptions and comments on assessment needs to detailed measures of student performance and procedures for faculty performance evaluation. This information provided the foundation for the Task Force's recommendations.

The Task Force developed an approach to evaluation intended to support the University's goal of promoting program quality while reducing the burden on units and maximizing accuracy. The recommended evaluations made extensive use of existing campus data sources. Recognizing that direct comparison between units is difficult because of significant disciplinary differences in the missions of units across the University, the proposed system included a diverse and flexible array of indicators to provide information for central decision making. The approach was devised to provide the ability to judge the effectiveness of each unit in a broader context, both internal and external to the University.

One of the many external accreditation bodies for the University is the North Central Association of Schools and Colleges (NCA). The NCA Commission on Institutions of Higher Education requires the colleges and universities under its purview to collect and use student outcome measurements. The requirement does not prescribe a particular pattern of evidence, yet the Commission expects that a program of assessment of student academic achievement exists and that its usefulness is a key indicator of how well an institution assesses its educational effectiveness. The approach presented by the 1994-95 Task Force was consistent with these objectives while avoiding duplication.

Given these considerations, the 1994-95 Task Force recommended a Program Evaluation and Assessment Plan for the University of Illinois at Urbana-Champaign. The Program Evaluation and Assessment Plan was unanimously approved by members of the Task Force and submitted to the Provost in the Spring of 1995. The current report presents the work of the reconstituted Task Force in 1995-96.

Program Evaluation and Assessment Framework

The Program Evaluation and Assessment Plan recommended in 1995 identified three interrelated levels—external, campus, and local. The external level of evaluation refers to accreditation and external monitoring and is typically reported to outside agencies such as the Illinois Board of Higher Education (IBHE), North Central Association (NCA), or other accrediting bodies. The campus level of evaluation is required for campus-level planning and fiscal decision making. It is intended to provide comparative information and to show trends across time for several key indicators. Local evaluations are initiated and conducted voluntarily by units and aimed at unit self-improvement and unit planning. Table 1 summarizes the framework of the Program Evaluation and Assessment Plan.

Activities of the Task Force

In the Fall of 1995, the Provost reconstituted the Task Force on Program Evaluation and Assessment and asked that it refine the campus indicators and begin implementing the Plan proposed the previous year. The Task Force activities follow the three levels of evaluation and assessment presented in Table 1.

Table 1. Framework for Program Evaluation and Assessment

External Evaluation

Purpose: Evaluation for accreditation and external monitoring purposes; mandated by and used for reporting to the Illinois Board of Higher Education and accrediting bodies.

Description: Specific requirements from the accrediting body; uses some information from campus and unit sources; data collection and reporting done by the appropriate university entity at intervals specified by the external body.

Campus Indicators

Purpose: Evaluation for central planning and fiscal decision making; mandated by campus administration and used for reports to unit directors, campus administration, and decision makers, with results linked to unit funding and local evaluations.

Description: Quantitative and qualitative indicators linked to mission, demand, centrality, quality, cost, diversity and outcomes; centralized mechanism for collecting, analyzing and reporting data; includes unit input, output, performance and cost measures; annual data collection and reporting done centrally at the campus level; compiled at the department level and aggregated at the college level; minimum demand on local units, as campus databases provide much of the needed data.

Local Evaluation

Purpose: Evaluation for unit planning and program improvement; initiated by the unit for internal use, with no required reporting outside of the unit, and with results linked to self-improvement and to the University's mission; can serve as unit-supplied information in the central indicator system.

Description: Conducted and entirely controlled at the local level; evaluation activities should be designed to help units find ways to improve their programs; highly participatory, to build local ownership and encourage use of the evaluation; procedures, example instruments and other aspects of the evaluation provided by the campus, but the actual design, conduct, and use of the evaluation reside at the local level; data collection and reporting done locally with significant faculty, student and staff participation; campus assistance available upon request; the unit determines the frequency of the evaluations.

External Evaluation: NCA and Student Outcomes Assessment

Work at the external level of evaluation addressed the requirements of the NCA for student outcomes assessment. The 1994-95 Task Force surveyed all academic units to determine what mechanisms they were using to assess student academic achievement. Members of the current Task Force visited six of the responding units to discuss the survey results. Two points became clear from the interviews: the survey results may under-represent the assessment activities conducted by the units, and few of the reported activities are direct measures of student academic achievement.

The Senior Survey, which is administered to graduating seniors every three years by the Office of Instructional Resources, was revised with the help of the Task Force to include a section on self-reported student outcomes (Appendix B). The new outcomes section asks students to rate their level of competency in various academic areas prior to enrolling at UIUC and again at the end of their academic career. The results of the March 1996 survey are now being analyzed by the Office of Instructional Resources.

The Task Force plans to continue its efforts to identify several assessment models that might be adopted or modified by campus units to meet their particular needs and at the same time meet the accreditation requirements of the North Central Association. It is clear that this effort will extend into 1996-1997.

Campus Indicators: Piloting the Expanded Campus Profile

The indicator system for campus evaluation recommended by the 1994-95 Task Force was based on the current campus profile, a centrally compiled set of quantitative indicators that has been in use at UIUC for the last decade. The 1995-96 Task Force proposed expanding the campus profile by adding a number of quantitative indicators and allowing for unit input. The Task Force began a pilot of the proposed indicator system by creating actual profiles for six units on campus, using the list of indicators proposed by the 1994-95 Task Force. The pilot units were selected to represent the diversity of units on campus in terms of mission, size, funding, complexity, and nature of scholarship. They were Animal Sciences, Art and Design, Economics, Political Science, Electrical and Computer Engineering, and Mathematics. The Task Force reviewed the unit profiles in order to refine the indicator system.

Once the prototype indicator system was endorsed by the Task Force, four of the six pilot units (representing four different colleges) were selected to participate in the second phase of piloting. This time the units were Animal Sciences, Art and Design, Electrical and Computer Engineering, and Political Science. In the second phase, unit directors and college administrators were sent their profiles in advance of a group interview where they were asked to comment upon the utility, accuracy, feasibility, and propriety of the proposed indicator system and the process for collecting and reporting. (Appendix C contains the letter to unit directors describing this meeting.) Two to five representatives from departments and colleges and at least three members of the Task Force attended each interview. The Task Force made additional revisions in the system based on feedback obtained in these interviews.

In addition to piloting the indicator system, the Task Force asked members of the Task Force on Graduate Education to review the proposed indicator system in light of that Task Force's charge to establish an ongoing process for promoting graduate program quality. Based on these discussions, changes were made to make the profile more consistent with the proposed recommendations for evaluating graduate programs.

Members of the Task Force also participated in discussions with Dean Sims of the Graduate College of the University of Iowa regarding the Profile of Graduate Programs system in use at Iowa. The discussion provided an opportunity to compare the proposed UIUC system with the one in place at Iowa and to obtain feedback on implementation and utilization issues.

Based on input from the pilot, feedback from the Task Force on Graduate Education and discussions with the full Task Force, an Expanded Campus Profile system was developed. The revised system is presented in a subsequent section.

Local Evaluation: Unit Sponsored and Initiated Evaluation

The information gathered by the 1994-95 Task Force included examples of local evaluation materials used by many campus units. Five of these reports were selected to exemplify certain desirable elements for use in local evaluations. The 1995-96 Task Force contacted the executive officers of the units that prepared these reports (Appendix D) and then conducted interviews with members of the units. The purpose of these contacts was to determine the motivating factors, procedures and outcomes of these exemplary evaluations. These interviews were conducted by two Task Force members in each case. A number of the recommendations for local evaluations are a direct result of advice gathered in these interviews.

 


Recommended Plan for Program Evaluation and Assessment

The following sections present the findings and recommendations for each of the three levels of evaluation.

External Evaluation: Student Outcomes Assessment

During the summer of 1995 the campus submitted the UIUC Plan for the Assessment of Academic Achievement to the North Central Association for review and comment. The NCA has not yet responded to the proposed plan, but it has published several documents indicating that it expects a great deal in this area from its member institutions and will look carefully at what units are doing on this front when it visits UIUC for its 1999 accreditation review.

The May 1996 article in Inside Illinois (Appendix A) explains what the NCA expects. A more complete description of what is expected by the NCA can be obtained from the Office of Instructional Resources.

Briefly, the NCA recommends that each institution have a program coordinator and an oversight committee. Units need to develop specific program objectives and then use various methods to determine whether students are achieving those objectives. Feedback is essential, and the results of the assessment should give faculty information useful for improving instruction and learning. Continuous improvement is the goal.

Recommendations

• The Provost, in conjunction with the UIUC Senate, should charge a committee with oversight of student outcomes assessment and designate a program coordinator for overseeing the student outcomes assessment efforts of the institution. This committee and coordinator should work hand-in-hand with campus units in developing assessment models and ensuring that the results of these efforts are being used to improve instruction and learning.

• Several units on campus currently collect student outcome information, e.g., through the Senior Survey and the University Graduate Survey. All such efforts should be coordinated so as to maximize the efficiency and utility of student outcomes assessment.

Campus Indicators: Expanded Campus Profile

As described in Table 1, the Program Evaluation and Assessment Plan included, as a second level of evaluation, a system of campus indicators termed the Expanded Campus Profile. This is a system of centrally-provided indicators and unit-supplied information that would be collected and reported annually for each department and aggregated at the college and campus levels. The purpose of the campus indicator system is to provide information for central planning and decision making. For that reason, campus indicators were chosen that (i) apply to the wide variety of academic and research units on campus; (ii) track change over time; (iii) impose a minimal burden on units and campus; (iv) represent the resources, performance and quality of a unit; and (v) relate to local evaluations.

An overview of the Expanded Campus Profile, the report format for the indicator system, is presented in Table 2 with a complete listing contained in Appendices E through H. Indicators in the Expanded Campus Profile include both qualitative and quantitative information, linked to the threefold mission of the university--research, teaching and service--and directly related to the criteria of quality, centrality, demand and cost. General categories include unit mission, organization, and resources; staff; budget; expenditures; research, creative and other scholarly activities; faculty accomplishments; instructional development; external perceptions; service; IBHE cost study; activity effort plan; student majors; student quality; degrees granted; student outcomes assessment; instructional units; instructional unit connectedness; section characteristics; faculty teaching activity and Instructional Course Evaluation System (ICES) evaluations.

These data can be used by the Provost and his designees, such as the Campus Budget Oversight Committee, to report to unit directors, campus administration, and other key decision makers. This information is one resource for making decisions regarding funding, program endorsement, restructuring, and reform. It is not expected that any one indicator will be used to make judgments about a unit or to compare units; instead, indicators would be examined in clusters or in total to convey a useful description of a unit’s performance. Units will have full access to information related to their unit. To aid in its interpretation, they will have the opportunity to annotate and respond to their report.

Campus level evaluation has links to external and local evaluation. For external reporting, many of the campus indicators will be useful for responding to information requests from IBHE, NCA and other accrediting bodies. For local evaluation, a unit's performance on campus indicators may encourage it to undertake a local evaluation so as to improve its performance in subsequent campus evaluations.

The format of the Expanded Campus Profile includes indicators that are to be collected in two formats: (i) centrally-provided indicators (Appendix E and F), which are compiled by the Provost’s office for all units annually; and (ii) unit-supplied information (Appendix G), which is supplied annually by unit heads with the aim of presenting a comprehensive picture of the unit, detailing its complexity and unique aspects.

Centrally-provided indicators (Table 2 and Appendix E and F) consist of quantitative indicators organized into three sections: Faculty, Staff, Budget and Space; Students and Degrees; and Teaching Activity. These indicators replace the existing Campus Profile and will be issued once each year in November. The Provost's office will collect and report the centrally-provided indicators using campus databases (such as those maintained by the Division of Management Information, Business Affairs, Admissions and Records, Office of Budget and Planning, Office of Instructional Resources, Graduate College, etc.) with minimal demands on local units. Reports will be developed for departments and then aggregated to produce college and campus reports.

Because it is important to document changes and trends over time, the centrally-provided indicators will, when possible, reflect data for the last decade.
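
To make the mechanics concrete, the following sketch shows one way department-level indicator records could be rolled up to the college level and reported across years. This is a minimal illustration only: the field names (dept, college, year, faculty_fte, ius) and the sample figures are hypothetical stand-ins, not the actual campus database schema or real unit data.

    from collections import defaultdict

    # Hypothetical department-level indicator records; the real system would
    # draw these from campus databases (DMI, Admissions and Records, etc.).
    records = [
        {"dept": "Economics", "college": "LAS", "year": 1995,
         "faculty_fte": 30.0, "ius": 15000},
        {"dept": "Economics", "college": "LAS", "year": 1996,
         "faculty_fte": 29.0, "ius": 14600},
        {"dept": "Mathematics", "college": "LAS", "year": 1995,
         "faculty_fte": 65.0, "ius": 31000},
        {"dept": "Mathematics", "college": "LAS", "year": 1996,
         "faculty_fte": 63.5, "ius": 30400},
    ]

    def report(records, level):
        """Total faculty FTE and IUs per (level, year); derive IUs per FTE."""
        totals = defaultdict(lambda: {"faculty_fte": 0.0, "ius": 0})
        for r in records:
            t = totals[(r[level], r["year"])]
            t["faculty_fte"] += r["faculty_fte"]
            t["ius"] += r["ius"]
        for (unit, year), t in sorted(totals.items()):
            ratio = t["ius"] / t["faculty_fte"] if t["faculty_fte"] else 0.0
            print(f"{unit} {year}: {t['faculty_fte']:.1f} FTE, "
                  f"{t['ius']} IUs, {ratio:.1f} IUs per FTE")

    report(records, "dept")     # department-level profile lines
    report(records, "college")  # the same records rolled up to the college

The same roll-up applied to every department record produces the campus report, and storing one record per unit per year accumulates the decade of trend data with no additional collection burden on units.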

To facilitate use, a summary containing a subset of key indicators (Appendix H) will accompany the centrally-provided indicators. The summary should provide a concise but comprehensive snapshot of a unit, with ready reference to the centrally-provided indicators or unit-supplied information for additional detail.

Summary pages and centrally-provided indicators for each department, college, and campus will be posted with restricted access on the World Wide Web and updated annually. In addition, a limited number of hard copies of college and campus reports will be printed for use by the Provost’s Office and Campus Budget Oversight Committee. The centrally-provided indicators information will be linked to unit-supplied information and other information deemed important by departments and colleges (see next section).

Unit-supplied information (Table 2 and Appendix G) is a set of quantitative and qualitative indicators organized into the following categories: general information, faculty accomplishments, student accomplishments, public perceptions, professional standing, and internal and external service. Recognizing that quantitative indicators alone are not sufficient to convey the scope, complexity and quality of a unit’s activities and accomplishments, the unit-supplied information is intended to bolster the centrally-provided indicators, explain patterns found in the quantitative data, and give a richer picture of the unit.

The centrally-provided and unit-supplied information should be routinely available at web sites. There are opposing opinions on the accessibility of such information, but the advantages of external scrutiny seem to outweigh the disadvantages. Moreover, because the unit can annotate its own web page where its unit-supplied information is posted, it has ample opportunity to elaborate on and explain its profile. The web site address will be forwarded to the office responsible for the centrally-provided indicators and used to link the unit-supplied information with the unit's centrally-provided indicators, which also will be located on the World Wide Web. The unit is responsible for updating information on the web site annually.

 

Table 2. Summary of the Expanded Campus Profile

Unit Faculty, Staff, Budget and Space

Unit mission, organization, and resources* - Short statements describing the unit mission to the University, state and discipline in the areas of research, service and teaching, and the distinct subdivisions that exist within the unit; the unit teaching model for both faculty and graduate assistants as discussed in "Campus as a Classroom"; quality estimates and assignable square feet.

Full-time staff - Tenure track faculty by type and rank, graduate assistants by responsibility, and nonacademic staff; includes FTE, headcount and appointments on state and on all funds.

Budget - The allocation of state budget to salaries by appointment; includes source of funds through budget reform; ratios of budget quantities to faculty FTE and to IUs, and of faculty FTE to IUs.

Expenditures - Expenditures by source of funds, including state, ICR, grants and contracts, gifts and endowments, etc.; number of PIs on all grants and contracts and the percent of faculty participating in such activities; ratios of grant and contract expenditures per faculty FTE and total expenditures per student.

Research, creative and other scholarly activities* - Frequency of activities for all unit faculty in areas denoted in Academic Affairs Communication No. 9; percent of unit faculty producing scholarly activities; total citations from standard references; complete listing (all authors in order, title, and publication) of five publications or other creative activities that represent the quality and nature of unit scholarship.

Faculty accomplishments* - List of recognitions received in research, teaching and service; major activities of the unit faculty in outside organizations; listing of the major committee activities of unit faculty, including University, national and international involvement.

Instructional development* - Activities related to resident instruction and continuing education as in Academic Affairs Communication No. 9.

External perceptions* - Findings from employer surveys; list of popular media articles and interviews; published national rankings of units, including evaluation criteria; list of accreditations possible for each unit and those that are held; responses from undergraduate and graduate alumni of 1, 5, and 10 years ago, drawn from the UI graduates survey.

Service* - Five examples of service to the University that represent the typical quality and nature of service activities; five examples of service external to the university that represent the typical quality and nature of external service activities; list of key constituents that use the scholarly output of the unit, major uses, and nature and magnitude of dissemination activities.

IBHE cost study - Breakdown of costs by teaching department and by student program.

Activity effort plan - Percent of FTE faculty by activity on state funds.

Students and Degrees

Student majors - Number and percent of group of majors by level; number of advisees; includes ratios of undergraduates to faculty FTE, graduates and professional students to FTE, and number of majors to FTE.

Student quality* - For undergraduates, the ACT composite scores and high school rank of all juniors; for graduate students, the number of graduate applications, graduate admissions, new graduate enrollments as well as the undergraduate GPA of enrollees, GRE scores and any special test scores; student awards, publications, placements and examination performances.

Degrees granted - Number granted and the mean terms to degree by level; ratios of degrees to faculty FTE.

Student outcomes assessment* - Type and frequency of student outcomes assessment conducted, including undergraduate and graduate evaluations.

Teaching Activity

Instructional units - Number and percent of IUs by level and by class type; percentage of faculty, graduate student and other teaching IUs by level; total IUs supported by this unit's state funds for undergraduate and graduate education as well as the percent of group undergraduate IUs.

Instructional unit connectedness - Percent of group IUs from undergraduates majoring in this unit, from undergraduates in another department in the college, and from undergraduates in another college.

Section characteristics - Number of sections taught at each level, including discovery sections, honors sections, and Comp II sections; includes statistics on section size.

Faculty teaching activity - Contact hours per term per faculty FTE, organized sections per term per faculty FTE, and number of individual instruction students per term per faculty FTE.

ICES evaluations - Distribution of faculty and TA ICES scores.

* contains some unit-supplied information detailed in Appendix G.

 

Recommendations

• The Provost will implement the Expanded Campus Profile system (Table 2 and Appendices E through H) in academic year 1996-97. It is anticipated that most centrally-provided indicators will be available in November of 1996. A few new indicators that require extensive programming, reformatting, reanalysis, or additional data collection may not be ready at that time but will be phased in the following year. Departments and colleges should begin to collect unit-supplied information and establish their web pages in September of 1996.

• The Provost has the responsibility for implementing and annually requesting the Expanded Campus Profile information. The responsibility for compiling and disseminating the centrally-provided indicators should reside within an administrative unit in the Provost’s office. The Provost should provide departments and colleges with an electronic template for submitting the unit-supplied information.

• The Provost should encourage electronic data collection and reporting mechanisms to minimize the costs and effort associated with preparing written reports.

• Refinements in the system will be needed to meet the information needs of campus decision makers. The Provost, in consultation with other vice chancellors, deans, the Campus Budget Oversight Committee, and other primary users of the system, should take responsibility for continued oversight of the campus indicator system. Continual review and refinement will be necessary through input from the IBHE, NCA, Campus Budget Oversight Committee and the Task Force on Graduate Education.

During their work over the last two years, members of the Task Force have developed a set of recommendations that affect the implementation, operation and utility of the campus indicator system. These recommendations address two broad issues: use of consistent definitions across units and databases, and improvement of campus databases.

Recommendations on Consistent Definitions

To enable valid comparisons across units or within units across time and to permit aggregation of information from departments to college and campus levels, indicators must be defined clearly and these definitions must be used consistently across campus. The following recommendations address several areas where consistent definitions are needed to increase the accuracy of the proposed indicator system.

• The campus uses several descriptors of instructional format: lecture, discussion, lecture/discussion, lab, lab/discussion, quiz, practicum, conference, and flight. Units are responsible for assigning these descriptors to sections and do so in different ways, making cross-unit comparisons problematic. To alleviate this problem, the campus should develop standard definitions for the full range of instructional options and then encourage their use.

• Units collect Activity Effort Plan (AEP) information annually and use it widely for external and campus reporting. There is significant variation across units in how AEPs are completed and how effort is assigned to the various categories. Definitions of AEP activities should be revised as needed, and units should be more conscientious about completing these forms. One strategy for refining this process is to have faculty routinely check their AEP forms for accuracy and provide feedback.

• Some personnel categories, such as visiting assistant professor, academic professional, etc., are so ambiguous that they have limited utility at the campus level. For example, some academic professionals have primary responsibility for research; others are responsible for teaching. If AEPs were completed accurately, percent time allocated to the various categories of the AEP could be used to describe these positions by function (research, instruction, administration, public service) rather than relying on ambiguous position titles.

• The category Indirect Instruction is missing from current AEPs. Because AEP information is reported to IBHE, such a category is needed to truly reflect the activity patterns of some faculty and should be added to the AEP format. The Task Force recommends that the AEP also be revised to include the following three categories: Academic Advising; Effort Related to Managing a Course, Laboratory, or Studio; and Other Indirect Instruction.

Database Improvement Recommendations

Because the proposed indicator system relies so heavily on existing campus databases, the accuracy and capacity of those databases are crucial to its feasibility and utility. This second set of recommendations addresses several areas where improvements are needed in existing databases.

• The University’s accounting systems need to be revised to reflect interdisciplinary efforts in research and teaching. For example, in the current system, expenditures from an interdisciplinary research project cannot be traced back to individual faculty or credited to their home departments. Revisions may include the use of subaccounts and other strategies for relating expenditures and commodities to specific departments or to individuals within departments.

• At present, the Grants and Contracts databases record only the first three investigators on external grants and contracts. To fully acknowledge all faculty and departments involved in collaborative projects, all investigators should be included in the database (a hypothetical schema sketch follows this list).

• In addition to reporting the actual space allocation per unit, the Office of Facility Planning and Management should regularly update its assessment of space quality. The Task Force recommends that the campus conduct regular (every two or three years) evaluation of space quality and revise the model for predicting the expected space allocation for units. The comparison of actual space allocation to space generated would be a valuable indicator to add to the system. In addition, the Space Report could be electronically linked to the centrally-provided indicators.

• To permit accurate computation of the time it takes students to obtain their degrees ("terms to degree"), the Graduate College should update its student databases regularly, especially the information on current degree objectives.

• The Graduate Admissions system appears to be missing the GRE scores of many applicants to programs that require the GRE. For such programs, these scores should be entered consistently so that they may be summarized for the centrally-provided indicators.

• ICES results should be linked to a unique identifier for each faculty member or teaching assistant, such as the social security number or an identifier developed for this purpose, to permit accurate estimates of the percentage of faculty and teaching assistants using ICES forms. The campus could then easily relate ICES results to individual sections.

• Courses requiring special effort on the part of the unit should be identifiable in the on-line registration system. Honors sections and Composition II courses are already indicated; discovery sections should also be flagged so that their IUs can be identified and reported.
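
As a purely illustrative sketch of the investigator-recording change suggested above, the following models grants and investigators as separate tables joined by a link table, so that a project can credit any number of investigators and their home departments. The table and column names are hypothetical, not the actual Grants and Contracts schema, and the same normalization idea applies to relating expenditures on interdisciplinary accounts back to departments.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE grant_award (grant_id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE investigator (inv_id INTEGER PRIMARY KEY, name TEXT, dept TEXT);
    -- Link table: a grant may credit any number of investigators,
    -- rather than reserving fixed fields for only the first three.
    CREATE TABLE grant_investigator (
        grant_id INTEGER REFERENCES grant_award(grant_id),
        inv_id   INTEGER REFERENCES investigator(inv_id),
        role     TEXT,
        PRIMARY KEY (grant_id, inv_id)
    );
    """)
    con.executemany("INSERT INTO investigator VALUES (?, ?, ?)",
                    [(1, "Investigator A", "Animal Sciences"),
                     (2, "Investigator B", "Economics"),
                     (3, "Investigator C", "Mathematics"),
                     (4, "Investigator D", "Political Science")])
    con.execute("INSERT INTO grant_award VALUES (1, 'Collaborative project')")
    con.executemany("INSERT INTO grant_investigator VALUES (1, ?, ?)",
                    [(1, "PI"), (2, "Co-PI"), (3, "Co-PI"), (4, "Co-PI")])

    # Every participating department is visible, not just the first three names.
    for dept, n in con.execute("""
        SELECT i.dept, COUNT(*) FROM grant_investigator gi
        JOIN investigator i ON i.inv_id = gi.inv_id GROUP BY i.dept"""):
        print(dept, n)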

Local Evaluation: Unit Sponsored and Initiated Evaluation

From the interviews with the five executive officers of the units that have conducted exemplary evaluations, it was clear that all of these units and their executive officers were committed to careful local evaluation and to discussing the results with members of their units, either privately or in meetings, as appropriate. Two of the larger units hired staff members whose primary responsibility was to gather and organize evaluative information from alumni, staff and students. In the remaining three units, a particular individual was designated to direct the evaluation with the assistance of faculty and/or staff committees. In three of the units, an advisory committee carefully determined the general structure and scope of the evaluation, while a particular individual implemented it with input and assistance from others. In no case was a committee really "in charge" of the evaluation.

The reasons cited as most important for conducting the evaluation were (i) allocation of scarce resources and long-range planning within the unit; (ii) the perceived need to provide periodic performance evaluations for faculty and staff within the unit and to discuss these evaluations with the individuals involved; and (iii) concerns raised by input such as faculty, alumni, or student surveys. Even in cases where the evaluation was required by an external unit or agency, these units seemed committed to using the resulting information for their own self-improvement.

Two external resources identified as potentially useful to the local evaluation process were funds for external consultants and assistance with survey construction and data reduction. All five executive officers affirmed the value and usefulness of local evaluations, and all said that they would be willing to provide consultation on the evaluation process to units implementing local evaluations.

Recommendations

• The Provost should establish and fund an evaluation assistance group to help units with the development and implementation of local evaluations. This group would provide staff assistance and small grants to units conducting local evaluations. This group will include faculty consultants from various campus units (for example, the five unit executive officers interviewed by the subcommittee have volunteered to serve as consultants to units conducting local evaluations), a group coordinator and research assistants, as needed. Units can directly and confidentially request financial and consultative assistance with local evaluations within established guidelines based on the size of the unit and the scope of the proposed evaluation. It is important that communications between this group and units be confidential.

• The local evaluation assistance group should operate on a pilot basis for the first year. Units requesting assistance grants must provide matching funds (which may include released time for unit staff assigned to the evaluation). The unit must also appoint a staff member with released time to direct the local evaluation. Part of the responsibility of the coordinator of the evaluation assistance group, especially during the first year, will be to initiate meetings with unit executive officers and/or unit faculty/staff groups to inform them about local unit evaluations.

• A local evaluation information letter (Appendix I), or a suitable variation of it, should be sent to all unit executive officers and unit Advisory/Executive Committee members. This letter invites units to consider conducting a local evaluation and outlines the type of help that is available.

 


Additional Recommendation: Evaluation of Administrative Units

This report and the efforts of this Task Force have focused on the evaluation of academic units, whose primary mission is teaching, research, and public service. However, the evaluation of administrative units is also critical in any institution interested in quality and effectiveness.

• The Task Force recommends that the central indicators in the areas of budget, expenditure, space, and staffing be collected for campus administrative units and disseminated in the same way as the indicators for academic units.

• The Task Force also recommends that productivity measures such as those outlined in Appendix A.1 of the Framework for Budget Reform be included for individual administrative units. Many of the measures cited in that appendix are unique to an administrative unit, and many of them are available only from the unit itself. The Task Force recommends that measures that are available from the administrative databases be provided centrally and that the remainder be supplied by the service units directly. In addition, other indicators are available from external sources.

 

 


07/96 - rs