Linking Program Assessment to Institutional Goals

Integration of institutional research-based planning and evaluation processes is a mechanism to improve institutional quality and effectiveness by focusing all university constituents on implementing and evaluating strategic initiatives. While educational program assessment to foster evidence-based improvements is strongly infused in the culture of many universities, drawing intentional connections between program assessment, which primarily focuses on student learning outcomes, and institutional strategic planning can be challenging for faculty. This paper highlights the assessment work of three diverse disciplines at a large public research institution whose faculty have articulated connections between their programs' student learning and program outcomes and elements of the university strategic plan and other organizational requirements. It then explores the benefits and challenges of explicitly linking outcomes or measures in program assessment to university planning.


Introduction
Virtually all colleges and universities engage in strategic planning and institutional assessment; indeed, program assessment is a requirement of regional accreditation. Similarly, strategic planning is critical to helping organizations lay out a path of growth and improvement, which often fuels fundraising and helps satisfy the demands of stakeholders and oversight committees and boards. Because strategic planning occurs at the institutional level (envisioning strategies for strengthening and transforming the institution's prominence, efficiency, and culture to meet present and future challenges), the strategic plan very often feels disconnected from the concerns and challenges facing specific academic departments and programs, which typically focus on improving student learning outcomes. This perceived disconnect between the broader institutional goals and the specific educational programs that collectively carry out the mission of the university or college (student learning, research innovation) can generate a sense of isolation, a kind of "us-against-them" mentality, and disinvestment in an institutional assessment process that may be seen as out of touch with program-level concerns and challenges.
In this paper, we argue that intentionally connecting program assessment, which primarily focuses on student learning outcomes, with institutional goals is mutually beneficial. Programs are better able to articulate their mission and role within the larger structure, and the institution is enhanced by having multiple academic units working toward these broad goals. Ultimately, students are the primary beneficiaries, as programs and organizations work collaboratively to serve their needs.

Strategic Planning and Assessment in Higher Education
Both strategic planning and assessment of student learning and programs in higher education gained traction in the United States during the 1980s as demands for greater accountability from state and federal governments and accrediting commissions intensified (Hinton 2012). While some argue that the two (learning outcomes and institutional planning processes) should and must be closely connected (Hinton 2012; Serban 2004), in reality, assessment and strategic planning serve different purposes and are frequently carried out by different personnel, and thus the overlap may be slight.
Strategic plans typically outline the organization's mission, values, and goals, along with strategies for achieving those goals, usually over the next 5-10 years. Calls for more "strategic" planning in higher education emerged as many colleges and universities were facing crises due to enrollment declines and shrinking financial support from governmental and business sources during the 1970s and early 1980s (Keller 1983). Keller (1983) proposed as a solution that strategies commonly employed in corporate and commercial settings, namely "strategic marketing planning," be adopted by institutions of higher education as a way for colleges and universities to plan for and survive the shifting landscape. As Kotler and Murphy (1981, p. 488) argued, "The future that appears to hold many threats for most colleges and universities should become less imposing with the judicious use of strategic planning." Strategic planning was a way to prioritize increasingly limited resources and promote greater focus within the institution (Hinton 2012).
Assessment's roots can be traced to the First National Conference on Assessment in Higher Education held in 1985, following two publications: Involvement in Learning (National Institute of Education, 1984) and Integrity in the College Curriculum (Association of American Colleges, 1985). Assessment practitioners and policy shapers gathered to discuss the reports' recommendations, which argued, based on scholarly research, that several conditions were needed to promote student achievement, including setting high expectations, involving students in active learning, and providing prompt and useful feedback. But the reports also emphasized that higher education institutions could benefit from feedback about their own performance. This final recommendation was consistent with voices within higher education that were focused on curriculum and pedagogical improvement to create a cohesive experience guided by ongoing scholarly measurement of student learning.
Though a handful of colleges and universities initiated attempts to measure student competencies, it was the publication of a report from the National Governors Association that fostered early response from state governing boards (National Governors Association, 1986). States mandated the use of standardized tests to compare across institutions (e.g., Texas) or required higher education institutions to establish their own approach to articulating, measuring, and gathering evidence about student learning (e.g., Colorado and Virginia). By the end of the decade, about two thirds of states had established requirements that institutions assess student learning (Banta, 1993; Ewell, 2002).
Reauthorization of the Higher Education Act in 1988, along with tight state budgets, spurred transfer of the public accountability agenda from state authorities to regional accreditors. Armed with new language in the act, many regional accreditors took up the charge, requiring by the early 1990s that all colleges and universities participate in assessment of student learning. Furthermore, colleges and universities were required to provide strategic plans. As Hinton (2012, p. 7) notes, "institutions began to find themselves under serious scrutiny during their reaccreditation processes if they did not have a working strategic plan and some form of assessment plan in place." Although strategic planning remains integral to institutions of higher education, Hinton (2012) notes that by the late 20th century, even those educational institutions that had forged successful plans and fruitful processes began to dismantle planning offices and focus instead on assessment initiatives. However, this shift from strategic planning to assessment came with its own challenges. Historically, assessment of student learning has been mired in a dual purpose: calls for accountability and calls for authentic study of teaching and learning to improve outcomes. Because the motivation to conduct assessment resulted from administrators' efforts to meet compliance standards, assessment of student learning was considered by many at colleges and universities as an add-on activity rather than an integral part of the teaching and learning process. Faculty with this perspective propagated the use of summative standardized tests and surveys of students.
Perhaps even more distressing, one result of the compliance agenda was the erroneous belief among faculty that assessment was divorced from their academic mission and scholarship and was merely something they did periodically to satisfy administrative requirements (Ewell, 2008; Astin & Antonio, 2012). Unfortunately, this focus on compliance hindered the pursuit of authentic assessment, which is systematic, ongoing, and both formative and summative in nature, aimed at understanding and improving explicitly stated student learning outcomes: what students should know, be able to do, and value. In this approach, faculty embed multiple measures into student assignments across a curriculum of study, so that student work, scored with a rubric or other scoring protocol, becomes the evidence used to improve curriculum design, pedagogy, and learning over time.
More recently, a survey of provosts or chief academic officers in 2009 revealed the most common use of assessment data remained regional or discipline accreditation (Kuh & Ewell, 2010). However, perhaps this study captured evidence of a turning point for assessment in higher education. Provost responses also showed a commitment to use assessment data to improve learning through revising learning goals, informing strategic planning, modifying general education curriculum, and improving instructional performance (Kuh & Ewell, 2010).
Making this paradigm shift in purpose can be challenging for institutional leaders. Executive leadership must carefully weigh the benefits and costs of investing in building and sustaining an ongoing, systematic and effective assessment culture and planning process focused on evidence-based improvement. Additionally, integrating and making connections between institutional planning processes that matter to members of the community can be daunting: educational program assessment is usually focused on using results gleaned from annual assessments analyzed over time to improve student learning outcomes rather than broader institutional goals prevalent in strategic planning. Serban (2004) notes that comprehensive models that "coherently integrates all levels, from courses and programs to the overall institution" (p. 26) are lacking. In short, institutional student learning outcomes (ISLOs) are often not emphasized in strategic planning, especially at larger universities whose mission includes and typically prioritizes research funding, posing challenges to those charged with program assessment to link their efforts to institutional goals.
Structuring program assessment so that faculty make intentional connections between the outcomes or measures in their educational program assessment plans and elements of the strategic plan can help bridge the two planning levels by linking program assessment to broader institutional and organizational goals. Benefits include harnessing the energies and expertise of all constituents in the institution to achieve broad planning goals through the ongoing practice of evidence-based decision making that promotes improvement in mission-driven institutional priorities. Here we describe the institutional effectiveness assessment model practiced at the University of Central Florida (UCF), a large public research institution that aims to foster links between program assessment and strategic planning. We also highlight the assessment work of three diverse disciplines within the institution where faculty have articulated connections between their programs' student learning and program outcomes and elements of the university strategic plan or their professional requirements.

The Institutional Effectiveness Assessment Model at the University of Central Florida
Leaders at UCF decided in 2000 to deepen investment in institutional planning processes as a mechanism to foster quality, innovation, and improvement. Consistent with its core mission and strategic plan, UCF implemented its own institutional effectiveness (IE) assessment policies and procedures. UCF faculty and staff members have defined expected outcomes, assessed the extent to which these outcomes were achieved, and modified and improved their academic programs and administrative units based on assessment results since 1994. By 1996, the faculty of each academic program and administrative unit had developed an assessment plan (mission, objectives, outcomes, and measures) and completed one cycle of reporting results and use of results. A three-year review cycle was instituted initially, followed by an annual review in 2000. This change was prompted by a memorandum from the president that restated the importance of assessment and established a new office, Operational Excellence and Assessment Support, to support assessment activities.
The UCF Institutional Effectiveness Assessment process is administered by Divisional Review Committees (DRCs) aligned to colleges and divisions. The UCF IE assessment model consists of two broad categories, academic programs and administrative units, and is used to guide assessment in both areas. Academic programs include undergraduate and graduate educational programs (with selected tracks) and certificates.
Assessment coordinators (faculty members) for each program work with program faculty to:
• develop a plan with student learning outcomes consistent with the mission using SMART guidelines;
• select and implement measures using MATURE guidelines; and
• analyze results and plan for improvements based on the results that are then assessed in the subsequent plan (that is, closing the loop).
The results and plan for improvement are documented in an assessment report.
The components of the assessment report that is submitted annually are as follows:
1. Results of the previous year's assessment plan (data and analysis).
2. A reflective statement about the results describing the implications of the findings and how the evidence can be used to make improvements. Reflections are based on a trend analysis of results for outcomes gleaned from annual assessment over time.
3. Multiple measures (one of which is a direct measure) per outcome, with performance criteria or targets that provide evidence about how well the outcomes are being achieved. Methodologically sound practices are employed by faculty to measure student learning and operational outcomes.
4. Results and plans submitted to DRCs for reviews designed to promote excellence in assessment and improvement based on the results. A web application report and review system houses common structured templates for assessment coordinators, DRC chairs, and DRC members. Using the UCF IE Assessment Rubrics, DRC members provide feedback to the coordinators about the assessment results and plans.
Each DRC is charged with working collaboratively with its programs or units to mentor the members of their assessment teams and to review the quality of the assessment reports based on established criteria. These criteria are defined in the UCF IE Assessment Rubrics, designed in 2009 and revised in 2013 by the University Assessment Committee as a tool for providing specific feedback on plans and results. Each program or unit is reviewed by multiple members of the DRC, often one member and the chair. Assessment coordinators then address the feedback and resubmit the results and plans to the DRC. The results and plans go through several review iterations prior to final approval by the DRC chair.
Broad-based participation is the foundation of the UCF assessment model and is characterized by the active involvement and contributions of faculty, staff, and administrators organized into DRCs aligned to the colleges and divisions. Each Divisional Review Committee has a chair who sits on the University Assessment Committee (UAC), which was established by the UCF President to support a process of continual self-evaluation and improvement. The primary purpose of the UAC is to oversee and assist academic and administrative units in conducting ongoing assessment to improve student learning and operations. The UAC ensures the quality of the reviews conducted by the DRCs through its oversight of the review process. The chairs of each of the 21 DRCs comprise the university-level committee. Annually, each member of the UAC presents a DRC report about the quality of the results and plans, containing examples of how the programs or units used assessment results to make improvements.
The expectation that program assessment coordinators make intentional connections to strategic planning was introduced to the university community by the UAC beginning with the drafting of the 2009-10 IE Assessment Plans. A strategic planning alignment criterion was included in the 2009 IE Assessment Plan Rubric. However, after several years of applying the 2009 IE Assessment Rubrics to academic program plans, DRC members observed that a more specific rubric criterion was needed to help faculty structure intentional connections between program student learning outcomes assessment and strategic planning. The IE Assessment Plan Rubric criterion related to strategic planning was revised in 2013 to foster deeper alignment between these institutional planning processes. The 2009 rubric criterion asked faculty members to "describe the relationship between the IE plan and the University's Strategic Plan." By contrast, in the revised 2013 IE Assessment Plan Rubric, the strategic planning criterion was redesigned for greater specificity, stating that "the plan explicitly links one or more outcomes or measures to strategic planning." An accompanying IE Assessment Plan Rubric narrative was also developed to provide additional guidance on the criterion. Further, a dedicated area was created in the IE Assessment Plan template for faculty to detail the strategic planning links. Finally, the IE Assessment Plan Rubric levels were adjusted in 2013 to increase rigor. The strategic planning criterion was one of two criteria that could be satisfied to earn a rubric rating of "Accomplished" on a five-point scale where 1 is "Beginning," 2 is "Emerging," 3 is "Maturing," 4 is "Accomplished," and 5 is "Exemplary." With the implementation of the most recent UCF strategic plan in 2017, the accompanying rubric narrative was revised to clarify alignment to the "promises" and "metrics" in the current plan.

UCF's Strategic Plan
Planning for the UCF Strategic Plan, or Collective Impact Statement, began in fall 2015, and the plan was implemented in summer 2017. Like most university strategic plans, many of its objectives are designed to enhance the reputation, prestige, and funding of the institution (e.g., attract $100 million in new funding) and are seemingly separate from the student-learning mission. Of the five broad, overarching "promises" outlined in the plan, one does address students and faculty: "Attract and cultivate exceptional and diverse faculty, students, and staff whose collective contributions strengthen us." One goal in the strategic plan also relates directly to student success: "increasing student access, success, and prominence" (University of Central Florida, 2017). Yet, because the strategic plan is aimed at these higher-level goals, it follows that the metrics and strategies associated with them are also broad (e.g., "enroll a student population whose family incomes reflect the distribution of the region"). As a result, there is a perceived disconnect between the day-to-day workings of individual faculty or departments/programs and the university's goals and strategies.
Thus, an assessment challenge confronting faculty and program directors is linking improvements at the departmental or program level to institutional goals. These challenges may be compounded even further when programs are accredited by a professional governing board (e.g., the Accreditation Commission for Education in Nursing or the Association for Behavior Analysis International). In addition to university goals, these programs must demonstrate that they have satisfied criteria outlined by the accrediting body. Despite these challenges, some faculty and programs have attempted to intentionally link their program assessment to these larger institutional goals. Here we review three such examples: the first (Criminal Justice) highlights efforts to directly link assessment of student learning to the strategic plan's goal of increasing student access and success; the second (Social Sciences) describes efforts to link program assessment (apart from student learning) to the strategic plan and its goals of increasing student success, diversity, and inclusion; and the third (Athletic Training Program) illustrates how one program linked assessment of student learning to both the university strategic plan's call for greater student access, success, and prominence, and the professional requirements dictated by discipline accreditation.

Linking Assessment of Student Learning to the Strategic Plan: The Criminal Justice Bachelor of Arts and Bachelor of Science Program
Part of the mission of the Department of Criminal Justice at UCF is to serve the university's strategic goal of providing the best undergraduate criminal justice education to students coming from diverse backgrounds. The more than 1,400 students majoring in Criminal Justice (CJ) are a blend of first-time-in-college students (FTIC, 42%) and transfer students (58%) from local area state colleges who are admitted with Associate of Arts or articulated Associate of Science degrees through the DirectConnect to UCF program, which guarantees admission to transfer students from partner colleges. CJ majors are required to complete core courses in the areas of policing, courts, corrections, research methods, statistics, and a Capstone Experience.
Criminal Justice was one of three pilot programs in which faculty implemented the Student Success Collaborative (SSC), established in 2015-16 to enhance student success, retention, and timely graduation. This program stems from an emphasis at the state level, and subsequently by top UCF administrators, on improving student success in these areas, and its focus aligns directly with the university's strategic plan regarding student success. SSC uses a predictive analytics platform to aid program directors, coordinators, faculty, and advisors in more effectively monitoring student success. SSC is used to pinpoint success or failure markers for struggling students in a timely manner to reduce or prevent course repeats, failures, and negative trajectories.
Reports are generated to inform program personnel about students who may be in jeopardy of falling short of the needed grade in an important success-marker class, or whose behavior may show a pattern across multiple courses that could indicate a more serious problem.
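At its core, this kind of report reduces to a threshold check over course records. The sketch below illustrates that logic only; the course numbers used as markers, the grade thresholds, and the record format are hypothetical assumptions, not the SSC platform's actual data model.

```python
# Hypothetical success-marker check: flag students whose grade in a
# designated marker course falls below the threshold associated with
# on-time graduation. Courses and thresholds are illustrative only.
SUCCESS_MARKERS = {"CCJ4701": 2.0, "CCJ4746": 2.0}  # course -> minimum grade (4.0 scale)

def flag_at_risk(records):
    """records: iterable of (student_id, course, grade) tuples.
    Returns the set of student ids with a below-threshold grade
    in any success-marker course."""
    flagged = set()
    for student_id, course, grade in records:
        threshold = SUCCESS_MARKERS.get(course)
        if threshold is not None and grade < threshold:
            flagged.add(student_id)
    return flagged

records = [
    ("s1", "CCJ4701", 3.0),  # above threshold -> not flagged
    ("s2", "CCJ4746", 1.7),  # below threshold -> flagged
    ("s3", "ENG1101", 1.0),  # not a marker course -> ignored
]
print(flag_at_risk(records))  # {'s2'}
```

A real platform would also look for patterns across multiple courses, as the text notes; this sketch shows only the single-course marker check.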
Contextual knowledge combined with SSC pilot program analytics confirmed that two required courses, Research Methods and Data Analysis, historically inhibited student success, retention, and timely graduation. Thus, in 2017, based on this evidence together with annual assessment results dating back to 2013-14 and recommendations from the program's 2013 review, the faculty decided to restructure the curriculum with a close eye on these two courses to ensure improved performance on program outcomes aimed at meeting our learning compacts. The faculty established Research Methods and Data Analysis committees to review course structure and curriculum, examined sibling programs at UCF (i.e., Psychology, Sociology, Political Science, Public Administration), and conducted a statewide analysis of institutions with CJ programs offering similar courses. The result was the addition of a one-hour weekly lab in both courses, making a hands-on experience a vital component of the courses, and the development of a fixed curriculum and datasets to be used by all students in these courses.
Two direct outcome measures in the program's assessment focus on learning in these two courses. As seen in Table 1, students were assessed on their ability to design a research project and consume CJ research, and on their ability to understand national crime databases (UCR, NCVS) and how crime data are collected and presented to the public. The direct measures supported improved student performance in these areas tied to the programmatic changes. Additionally, indirect measures of both outcomes were used to gauge student perceptions of success using the Graduating Senior Survey conducted annually.

Program Assessment Outcome 1 (Research Methods):
Criminal Justice students will demonstrate an ability to design a research project and intelligently consume the results of criminal justice research conducted and presented by others.

Measures:
1. Annually, a panel of CJ faculty will evaluate research projects to determine both the students' ability to design a research project and their ability to intelligently consume the results of criminal justice research. All research projects from all sections of CCJ4701 are reviewed. At least 75% of students will score 75% or higher on their research project evaluation. (Direct)
2. Annually, all graduating majors are asked to rate their level of agreement with the statement: "As a result of my Criminal Justice education at UCF, I am able to design a research project." Respondents may select strongly agree, agree, neutral, disagree, or strongly disagree. At least 80% of students will respond that they strongly agree or agree with the statement. (Indirect)

Program Assessment Outcome 2 (Data Analysis):
Criminal Justice students will demonstrate knowledge of national crime and victimization databases and how crime data are collected and presented to the public.

Measures:
1. Annually, papers and/or projects from all students enrolled in all sections of CCJ 4746 will be evaluated to determine if students demonstrate knowledge of national crime and victimization databases and how crime data are collected and presented to the public. At least 75% of students sampled will score 75% or higher on the evaluation. (Direct)
2. Annually, all graduating majors are asked to rate their level of agreement with the statement: "The criminal justice program at UCF has provided me with the knowledge of national crime and victimization databases and how crime data are collected and presented to the public." Respondents may select strongly agree, agree, neutral, disagree, or strongly disagree. At least 80% of students will respond that they strongly agree or agree with the statement. (Indirect)
Note. Information about this program may be found at https://www.ucf.edu/degree/criminal-justice-bs/

These efforts to restructure the curriculum and assessment to align more clearly with university goals surrounding student success, retention, and timely graduation have been beneficial to students. Although it is too early to provide sound long-term data, some preliminary highlights and anecdotal data lend encouragement to the department's efforts. Program assessment results for the revised 2018-19 curriculum, compared to 2016-17 results from before the program changes occurred, show signs of student learning gains in the ability to design a research project and intelligently consume the results of criminal justice research conducted and presented by others. In 2018-19, 84.4% (n=260/308) of students in the population scored 75% or higher on the research project using a rubric, compared to 79.2% in 2016-17. Anecdotally, conversations with core faculty and instructors teaching Research Methods and Data Analysis indicate that students appear to be grasping methodological concepts and data analytic skills more effectively because of the lab-based environments. Informal advising sessions and discussions with students also seem to indicate a more positive attitude toward these courses, along with more positive communication streams among students, leading to higher retention. Early data also suggest students are less likely to repeat these courses, and thus are more likely to graduate on time. Admittedly, it is early in the process, as the Criminal Justice program has had one year of comparative data since restructuring the program with a pointed eye on the research methods and data analysis courses.
Nonetheless, the program and curriculum changes noted above, stemming from a triangulated approach (program analytics, contextual knowledge, review of trends in previous annual assessment results, and recommendations from previous program reviews), assisted with meeting departmental goals and aligned the program with the larger institutional mission. Other programs may explore the use of data analytics linked with other assessment and implementation approaches to accomplish departmental outcomes and link to larger institutional goals.
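The performance criteria in the direct measures above (e.g., at least 75% of students scoring 75% or higher) amount to a simple proportion check against a target. A minimal sketch of that check, using hypothetical rubric scores:

```python
def criterion_met(scores, score_threshold=75.0, proportion_target=0.75):
    """Return True if at least `proportion_target` of students scored
    at or above `score_threshold` (e.g., 75% of students scoring 75%+)."""
    if not scores:
        return False  # no evidence, criterion cannot be met
    passing = sum(1 for s in scores if s >= score_threshold)
    return passing / len(scores) >= proportion_target

# Hypothetical rubric scores for one course section: 6 of 8 score 75 or higher
scores = [82, 91, 74, 88, 67, 95, 78, 80]
print(criterion_met(scores))  # True (6/8 = 75% meets the 75% target)
```

The same function covers the indirect survey measures by treating each "strongly agree"/"agree" response as a passing score and raising `proportion_target` to 0.80.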

Linking Program Assessment to the Strategic Plan: The Social Sciences Bachelor of Science Program
The Social Sciences program at UCF is an interdisciplinary program composed of programs within the social sciences (sociology, psychology, political science, communication, anthropology, and women's and gender studies). The major is structured so that students complete the requirements for the minor in three of the six programs, in addition to completing a course in basic statistics and a methodology course related to one of the student's areas of concentration. Currently, there are approximately 120 majors in the program.
Until 2017, the sole focus of the Social Sciences program assessment was to assess cognitive learning outcomes. During the semester of graduation, students were required to take an "exit exam" in which they were quizzed on their statistical literacy, methodological knowledge (especially concerning ethics), and knowledge of basic principles and concepts related to their three areas of concentration. This exam assessed outcomes corresponding to basic knowledge, comprehension, and to a lesser extent, application of concepts, according to Bloom's (1956) taxonomy. The exit exam was intended only for assessment purposes. That is, students were not required to pass the exam with any specific level of competency; they were simply required to complete the exam.
With the emergence of UCF's new strategic planning goals, the faculty began exploring ways to further align the program (and its assessment) to the larger institutional mission. Because the UCF strategic plan, as discussed earlier, is geared toward institutional outcomes and not student learning per se, bridging the gap between institutional goals and this relatively small academic program was challenging. Faculty decided to focus on two strategic planning goals that pertained to undergraduate students: "student success" and "student diversity and inclusiveness." It should be noted that the program continues to focus primarily on student learning outcomes, and that the new outcomes based on the strategic plan are in addition to the student learning outcomes that were previously created and are still in use.
As seen in Table 2, one UCF strategic planning goal concerning student success states that all students will participate in a positive, high-impact student experience. Here, "positive, high-impact student experience" includes research, internship, service-learning, or study abroad experiences. Thus, the program created a new assessment outcome and measures for the Social Sciences program, with the aim of aligning with the university goal by having Social Sciences majors participate in these high-impact experiences, including research, internships, and study abroad. It should be noted that, given the interdisciplinary nature of the Social Sciences program, the program director has no control over the departments or programs in which students minor; some departments have extensive internship or research opportunities available to students, while others do not. Given these challenges, the program created modest measures: 10 percent increases in each category (research experiences, internships, and study abroad).
The second approach to linking program goals to the broader university goals was through student diversity and inclusiveness (see Table 2). The university goal was to increase degree attainment of specific diverse student cohorts across all academic disciplines by 10%, and the program was likewise dedicated to attracting these students to the program and serving diverse students within the major. The measures developed to assess progress in this area were twofold: to increase by 10% the percentage of Social Sciences majors representing diverse and underrepresented groups, specifically students of color and transfer students; and to increase by 10% the percentage of graduates who represent diverse groups. The data used to measure diversity and high-impact experiences are provided by the university.

Strategic Planning Goal: Student Success (specifically, all students will participate in a positive, high-impact student experience)

Program Assessment Outcome 1: Social Sciences majors will participate in positive, high-impact experiences including research, internships and study abroad.

Measures:
1. There will be a 10% increase in the number of students who participate in research (Honors in the Major, Independent Directed Studies, Student Undergraduate Research Experience [SURE]).
2. There will be a 10% increase in the number of students who participate in internships (experiential learning).
3. There will be a 10% increase in the number of students who participate in study abroad.
Strategic Planning Goal: Student Diversity and Inclusiveness (specifically, increase by 10% the degree attainment of specific diverse student cohorts across all academic disciplines)

Program Assessment Outcome 2: Social Sciences will attract and serve diverse students to/within the major.

Measures:
1. The percentage of students representing diverse and underrepresented groups (students of color and transfer students) who major in Social Sciences will increase by 10%.
2. The percentage of graduates representing diverse groups will increase by 10%.
Note: Information about this program may be found at https://www.ucf.edu/degree/social-sciences-bs/

Although it is too early to document the Social Sciences program's success, by working meaningfully and intentionally not only to meet desired student learning outcomes but also in synergy with the university to achieve its promise, the program is better positioned to contribute to the larger institutional mission. By intentionally linking program outcomes to broader university goals and data, such as enrollment patterns and retention by demographic group, any program can evaluate how well it meshes with the path the larger institution has laid out, and whether it is doing enough to facilitate student success and attract diverse students.
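Both sets of measures above reduce to the same year-over-year comparison: has the count (or percentage) risen by at least 10% relative to the prior year? As a minimal illustration of that arithmetic (not part of the program's actual reporting workflow; the function name and all figures are hypothetical), the check could be sketched as:

```python
def met_10_percent_increase(prior: float, current: float) -> bool:
    """Return True if `current` is at least 10% higher than `prior`.

    The same check works for raw counts (e.g., number of students in
    internships) and for percentages (e.g., share of majors from
    underrepresented groups), since each measure is a relative
    increase over the prior year's value.
    """
    if prior <= 0:
        raise ValueError("prior-year value must be positive")
    return (current - prior) / prior >= 0.10

# Hypothetical prior-year vs. current-year counts for the Outcome 1 measures.
measures = {
    "research": (40, 46),       # e.g., Honors in the Major, SURE
    "internships": (60, 63),    # experiential learning placements
    "study_abroad": (10, 12),
}
results = {name: met_10_percent_increase(prior, current)
           for name, (prior, current) in measures.items()}
print(results)  # a 5% rise in internships would fall short of the target
```

Framing the target as a relative increase rather than an absolute count keeps the measure usable for both small categories (study abroad) and large ones (internships).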

Linking Assessment of Student Learning to the Strategic Plan and Accreditation: The Athletic Training Program
The Athletic Training (AT) Program at UCF illustrates the linking of professional standards to strategic planning and program-specific student learning outcomes. Athletic trainers are healthcare providers who serve in a primary care role within secondary schools, colleges and universities, outpatient rehabilitation facilities, industry and military, as well as any other location where physically active people sustain injuries and illnesses. Approximately 56 students are enrolled in the program.
Unlike the academic programs highlighted above, professional clinical programs such as the AT Program, whose graduates sit for a certification examination, are guided by a list of competencies. There are eight Professional Knowledge content areas in the 5th edition of the Educational Competencies (National Athletic Trainers' Association, 2011). The accreditor for athletic training programs requires that "There must be a comprehensive assessment plan to evaluate all aspects of the educational program." The AT Program interprets "comprehensive" to mean that the program needs to: 1) assess faculty and preceptors as teachers/mentors of the students; 2) assess clinical sites for their ability to provide appropriate experiences; 3) assess the curriculum to determine whether the program is preparing students in all aspects of practice; and 4) assess some of the "soft skills" that graduates need to be good athletic trainers. The first-time pass rate on the Board of Certification (BOC) examination provides important evidence of student success. Table 3 reveals the direct link between the professional standards (the eight competency content areas) and the university's strategic plan. All outcomes are student learning outcomes and collectively cover the eight areas. Direct measures include practical skills, exams, essays, and projects; indirect measures assess graduates' perceived confidence in their abilities across the eight areas.
These professional requirements and goals align with one of UCF's Strategic Plan Priority Metrics: increasing student access, success, and prominence. The AT Program increases success by ensuring that all graduates are well qualified to become entry-level practitioners. It increases prominence by ensuring that graduates are well prepared to pass the BOC examination at a rate that establishes UCF as a leader across the country.
Using this same approach, any academic program can review the professional standards documents from its national professional association (e.g., the American Speech-Language-Hearing Association's "Big 9" areas of practice or the Council on Social Work Education's "Nine Core Competencies and Behaviors") and use them to assess the program's curriculum. Doing so creates an additional linkage between the program's student learning outcomes and institutional goals surrounding prominence and student success.

Measures:
1. 90% of students will earn a grade of "B-" (80%) or better on the cumulative final competency examinations for each practicum course (ATR 3812L, 3822L, 4832L, 4842L). The first-time pass rate will meet or exceed the first-time pass rate for the prior year. Commonly missed questions will be identified and categorized so that an action plan for improvement can be implemented during the subsequent cycle. (Direct)
2. 90% of all students in the AT Program will earn a "B-" (80%) or better on the Psychosocial Intervention essay in the Case Studies in Sports Medicine course (ATR 4103). Scores will be adjusted for formatting errors (the rubric has 72 points related to content and 28 points related to structure and format; students can also lose 25% for late submission). This measure assesses the psychosocial strategies & referral content area. (Direct)
3. 90% of students will earn a grade of "B-" (80%) or better on the cumulative final examination for the Organization & Administration in Athletic Training course (ATR 4512C). This measure assesses the healthcare administration (HA) and professional development & responsibility (PD) content areas. (Direct)
4. 90% of graduating seniors will report on the AT Program Exit Survey (prior to graduation) that they "agree" or "strongly agree" that they are confident regarding their knowledge and ability to perform in the seven Professional Knowledge content areas measured in this outcome. Each mean score will meet (within 1 standard deviation) or exceed the mean score from the prior year. (Indirect)

Program Assessment Outcome: AT Program students will demonstrate information fluency and critical thinking through proficiency with the 5 steps of evidence-based medicine (EBM): defining a clinically relevant question, searching for best evidence, appraising evidence quality, applying evidence to practice, and evaluating the process.

Professional Goals and Corresponding Program Outcomes and Measures for the Athletic Training Program
Measures:
1. 90% of students will earn a grade of "B-" (80%) or better on the Therapeutic Modalities in Athletic Training (ATR 4302C) EBM Project. (Direct)
2. 90% of students will earn a grade of "B-" (80%) or better on EBP examination questions given on the Advanced Rehabilitation in Athletic Training (ATR 4315C) final examination. (Direct)
3. 90% of graduating students will "strongly agree" or "agree" that the AT Program fostered critical thinking skills and that they are able to provide care that is evidence-based. The mean scores will meet (within 1 standard deviation) or exceed the scores from the prior year. (Indirect)
4. 90% of graduating seniors will report on the AT Program Exit Survey (prior to graduation) that they "agree" or "strongly agree" that they are confident regarding their knowledge and ability to perform in the Professional Knowledge content area of Evidence-Based Medicine (EBM). The mean score will meet (within 1 standard deviation) or exceed the mean score from the prior year. (Indirect)
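Each AT measure combines a fixed proficiency threshold (at least 90% of students at "B-" or better) with a year-over-year trend rule (the current mean must exceed, or fall within one standard deviation of, the prior year's mean). A minimal sketch of those two checks, using entirely hypothetical scores and function names (not the program's actual reporting tool), might look like:

```python
from statistics import mean

def threshold_met(scores, cutoff=80.0, proportion=0.90):
    """True if at least `proportion` of scores reach `cutoff` (80% = "B-")."""
    passing = sum(1 for s in scores if s >= cutoff)
    return passing / len(scores) >= proportion

def trend_met(current_scores, prior_mean, prior_sd):
    """True if the current mean exceeds the prior mean or sits within
    one standard deviation below it, mirroring the exit-survey rule."""
    current_mean = mean(current_scores)
    return current_mean >= prior_mean or (prior_mean - current_mean) <= prior_sd

# Hypothetical EBM project scores for one cohort (out of 100).
scores = [95, 88, 82, 79, 91, 85, 90, 84, 87, 93]
print(threshold_met(scores))                              # 9 of 10 at "B-" or better
print(trend_met(scores, prior_mean=88.0, prior_sd=4.0))   # mean 87.4, within 1 SD of 88
```

Pairing a threshold with a trend rule guards against two different failure modes: a cohort that passes but slides downward year over year, and a single strong mean that masks a tail of struggling students.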

Conclusion
In this paper, we present three examples of programs from diverse disciplines that have confronted the challenge of linking program assessment to larger institutional goals, including those associated with professional accreditation. Admittedly, the linkages feel, at times, elusive, as the difficulties of bringing university- and organizational-level goals and metrics down to the department or program level persist. Such linkages would be more straightforward if institutions prioritized institutional student learning outcomes (ISLOs) (Serban, 2004), so that each program could link to some or all of these outcomes. Outside of teaching-oriented institutions, however, strategic plans often fail to include ISLOs, or include outcomes that do not apply across the range of programs. These disconnects reveal not only the different purposes of strategic planning and assessment but also the tension between accountability and authentic assessment that has historically played out within the larger arena of assessment in higher education. Yet, as seen here, as faculty become more intentional in designing assessments to align with larger institutional goals and strategies, the tension between these two approaches can be resolved. Programs can play a more direct role in influencing institutional goals and metrics related to student retention, diversity and inclusion, and similar outcomes. By working together in this manner, students, faculty, and administrators all stand to benefit from institutional assessment in higher education.