Trusteeship Magazine

Learning Metrics: How Can We Know That Students Know What They Are Supposed to Know?


Institutions can provide considerable information that helps boards be more accountable for educational quality.

This information generally addresses: 1) educational inputs, such as student and faculty characteristics; 2) educational processes, such as retention and graduation rates; or 3) educational outcomes, such as content knowledge, writing ability, and critical-thinking proficiencies.

Eight colleges and universities that have participated in a project sponsored by the Teagle Foundation have worked to identify the most appropriate and useful evidence for determining educational quality at their institutions.

What do our students learn? Do our students get what they pay for? Are our graduates ready to succeed? How do we know? These questions define higher education. And today, more than ever, such issues surrounding educational quality have risen to the top of the national agenda, stimulated by public concerns about the cost and value of a college degree. For both fiduciary and reputational reasons, boards must effectively oversee the educational quality of their institutions, including the appropriate ways to assess and measure it.

Student learning outcomes, as they are called, are the crux of educational quality: Did students learn what they were expected to learn? Was their learning an appropriate return on their investment of time and money? And how can we know? These are profoundly important and difficult questions that cannot be answered as succinctly and quantitatively as can questions about financial issues, which have more standard and established metrics.

Higher education as an industry is, in fact, only in the early stages of developing and implementing sophisticated, valid, and reliable assessments of student learning. The task is highly complex and likely to develop over a number of years. The number and diversity of learning-outcome expectations among programs and institutional missions make development of standardized tests difficult. Creating authentic assessments and metrics is costly, students are diverse, and expectations for what they will learn are wide-ranging. Most of the work must be done institution-by-institution, primarily by full-time faculty, because the question is not, “Did students learn anything?” Rather, the question is, “Did they learn what the institution says they should have learned?” These issues are at the heart of faculty responsibility, and they vary from one institution and program to another.

Yet while institutions cannot count learning as they count dollars, and direct measures of student learning outcomes are still emerging, institutions can still provide considerable information that helps board members and the public hold them accountable for educational quality. This information generally addresses one of three “domains” of quality:

  • Educational inputs, such as student and faculty characteristics;
  • Educational processes and experiences, such as retention and graduation rates and participation in high-impact practices; and
  • Educational outcomes, such as content knowledge, writing ability, and critical-thinking proficiencies.

Evidence within the third domain—student learning outcomes—concerns what students actually know or can do, and it can be direct or indirect. Direct evidence of student learning is typically derived from systematic analysis of their actual work—papers, performances, examinations, projects, presentations, or portfolios, for example. Indirect evidence is most often derived from surveys or interviews with students, alumni, or employers of the institution’s graduates.

Research and practice also demonstrate that learning is more likely to occur under certain conditions related to faculty members, students, and other inputs as well as the educational process itself. Assessing these conditions can further inform educational quality oversight. The most meaningful information for board oversight is a thoughtful combination of direct and indirect evidence that reflects the institution’s mission and educational goals. (See Figure 1 below.)

What Boards Can Know Now

Boards already receive important information about educational quality, although they may not think of it as such. Accreditation is a major source of external information about educational quality, as are academic program reviews. Examinations for professions and standardized tests can also provide insights. In addition, some of the indicators commonly employed on board dashboards are also useful.

Accreditation and Academic Program Review. One of six regional accreditors reviews all aspects of every institution, including educational quality. Accreditors have long required member institutions to demonstrate that they have the essential ingredients to gauge educational quality, assess student learning, and make improvements based on those assessments. Because regional accreditation is required for access to federal student aid, nearly all colleges and universities can use institutional accreditation as a source of information about educational quality.

Specialized program accreditation reports are additional external reviews that focus entirely on field-specific education. In some cases—especially the professions of engineering, medicine, and business—specialized accreditation has led the way in shifting the spotlight from educational inputs and processes to direct evidence of student learning. However, program accreditation is not available in all fields.

Accreditation reviews typically occur only every five to eight years, and they may take two or three years of work from start to finish. Generally speaking, accreditors attest to whether institutions are doing what they say they are doing. They examine educational inputs such as entering-student test scores and faculty qualifications. They examine dozens of internal resources and activities that represent widely accepted indicators of good education, such as those associated with the curriculum and instructional resources. They want to know how graduates perform on exit exams and whether they go on to appropriate advanced study or employment.

Accreditation requires massive amounts of data and information, much more in quantity and detail than governing boards need annually. Accreditation is a meaningful cornerstone, but it is too infrequent, complex, and varied to fulfill all of the requirements of educational quality oversight for governing boards. In addition to accreditation, then, governing boards need more frequent, succinct, high-level evidence of how the institution is ensuring quality.

Direct and Indirect Indicators. The most direct existing quantitative indicators of student learning outcomes are the examinations required for entry into a profession such as law, nursing, or teaching. Those examinations represent the best judgment of people in the field regarding what new practitioners should know and be able to do. The proportion of a program’s examinees who pass the examination is a direct indication of educational quality in that program. Programs at or near a 100 percent pass rate on such examinations can claim excellent student learning outcomes for that profession.
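A minimal sketch of how such a pass-rate indicator might be tallied appears below; the programs, years, and counts are hypothetical and serve only to illustrate the arithmetic.

```python
# Minimal sketch (hypothetical figures): computing licensure-exam pass rates
# by program for board-level review. Replace with the institution's own data.

exam_results = [
    {"program": "Nursing", "year": 2013, "examinees": 85, "passed": 82},
    {"program": "Law", "year": 2013, "examinees": 140, "passed": 126},
    {"program": "Teaching", "year": 2013, "examinees": 60, "passed": 57},
]

for result in exam_results:
    # The pass rate is simply the proportion of a program's examinees who passed.
    pass_rate = result["passed"] / result["examinees"]
    print(f'{result["program"]} ({result["year"]}): {pass_rate:.1%} pass rate')
```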

In addition, several highly regarded standardized instruments are now available to address some aspects of student learning. (See sidebar on “National Instruments for Gathering Evidence of Student Learning” below.) In a 2010 AGB survey, 68.9 percent of boards reported that the full board or a committee received such information to monitor student learning outcomes.

Most programs do not have licensure examinations, but acceptance into graduate programs can provide similar, though more subjective, information. Placement rates and satisfaction surveys of graduates and their employers provide useful information that can also help guide program improvements.

Many institutions use a dashboard to track key indicators of institutional health and strategic progress. Some indicators of educational quality may already be on the dashboard, especially those relating to educational inputs and processes. Higher retention and graduation rates suggest that the institution is meeting a variety of students’ needs and expectations, including educational quality. Based on research showing impact on student learning, some institutions track student engagement levels through surveys and monitor the use of high-impact teaching practices.
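As a rough illustration, assuming a single entering cohort with hypothetical counts, the sketch below shows how the retention and graduation rates that typically anchor such a dashboard are derived.

```python
# Minimal sketch (hypothetical cohort counts): deriving first-year retention and
# six-year graduation rates for a board dashboard. Figures are illustrative only.

entering_cohort = 1200      # first-time, full-time students entering in a given fall
retained_year_two = 1020    # members of that cohort still enrolled the next fall
graduated_in_six = 780      # members of that cohort earning a degree within six years

dashboard = {
    "First-year retention rate": retained_year_two / entering_cohort,
    "Six-year graduation rate": graduated_in_six / entering_cohort,
}

for indicator, value in dashboard.items():
    print(f"{indicator}: {value:.0%}")
```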

Evidence of Educational Quality Oversight: Eight Case Studies

How can boards strengthen their ability to oversee educational quality effectively? In 2011, the Teagle Foundation and AGB launched a project to help eight diverse institutions take their work on oversight of educational quality to the next level. One of the four project goals was to develop greater understanding of the evidence that would be most appropriate and useful for this work. Extensive information about the project and each institution is available on AGB’s website.

Figure 2, “Sample Board Indicators of Educational Quality,” provides a high-level summary of participating institutions’ educational-quality indicators. Many indicators are quite familiar to board members, but putting them together as an educational-quality cluster helps boards recognize their potential significance, see the whole picture quickly, and consider where they may need more information. (For more detail, see each institution’s dashboard on AGB’s website.)

All institutions that participated in the AGB-Teagle project use retention and graduation rates as part of their process of board oversight of educational quality. All institutions with programs requiring professional licensure use those examination results, too. The results of periodic academic program review are widely considered, as well.

Like academic program review, some quality indicators are complex and cannot be fully represented in a dashboard format. Listed below are the ways that each participating institution in the AGB-Teagle project assesses student learning, including changes and additions that it made as a result of the project.

Drake University. In the past, Drake presented academic dashboard data such as retention rates, graduation rates, and professional-examination pass rates to the board, but senior administrators became concerned that the language of metrics could interfere with meaningful engagement with academic quality. The information presented to the board now includes a hybrid of previous metrics, along with some additional information and discussion that focuses on a specific aspect of academic quality, such as the academic success of students by race or ethnicity, or assessment of students’ critical-thinking skills.

Metropolitan State University of Denver. In addition to retention and graduation information, the board receives the results of academic program reviews and one-year follow-up reports. Data on internships, service learning, and campus climate are also available. All academic programs are required to have a process to assess student learning outcomes. Faculty members in each academic program determine the appropriate student learning outcomes and the best sources of evidence of student achievement. The university is now considering how best to summarize results for board review.

Morgan State University. The Morgan State board asked for a dashboard to track progress on the strategic plan. Educational quality was built into the dashboard, the university plan, and the strategic plans of units within the university. The dashboard includes indirect measures such as enrollment, retention, and graduation rates. In addition, the university has provided board members with information about student performance on the university’s writing proficiency examination. Oral communication performance is also reviewed, and the university plans to identify additional indicators of educational quality.

Rhodes College. The Rhodes College board has a relatively deep understanding of educational quality as a result of reports, experiences, and discussions held over time. While continuing those activities, the board is adding an initiative to follow specific success markers through four stages of the student lifecycle and track participation in the following high-impact educational practices: first-year seminars and experiences, common intellectual experiences, learning communities, writing-intensive courses, collaborative assignments and projects, undergraduate research, diversity/global learning, service learning and community-based learning, internships, and capstone courses and projects. The college is also evaluating the quality of those practices. In addition, Rhodes uses national indicators such as the National Survey of Student Engagement (NSSE) and the Collegiate Learning Assessment (CLA) as well as local measurements (for example, rubrics for program-level assessment) in its assessment of educational quality. Discussions are underway regarding how to best summarize and share this information with the board.

Rochester Institute of Technology. RIT has developed a model that integrates its dashboard on academic quality into the institution’s strategic vision and assessment framework. In addition to an array of input and process metrics, the institution is developing indicators of learning outcomes to be included in the alumni survey in 2014. The board also reviews the institution’s results on the National Survey of Student Engagement, employer surveys, and co-op evaluations.

Salem State University. Board members at Salem State use a dashboard with inputs and educational process indicators, and they discuss academic-program and accreditation reviews and key quality issues regularly. Indicators of student learning outcomes are under development. Salem State participates in the Massachusetts Department of Higher Education “Vision Project,” which has a process to identify student learning indicators to help enhance student learning and success. Salem State also participates in Liberal Education and America’s Promise (LEAP), an initiative of the Association of American Colleges and Universities (AAC&U) that uses rubrics to assess student learning in liberal education.

St. Olaf College. St. Olaf has developed a matrix of indicators of educational quality for a broad array of inputs, processes, and outcomes. The section on student learning outcomes matches results from a variety of institutional-level assessment instruments with the college’s stated mission-based outcome expectations. Some of the indicators are derived from direct assessment of student work in courses and on nationally administered tests, such as the Collegiate Learning Assessment. Others are indirect, consisting of items or item clusters from high-quality surveys: the National Survey of Student Engagement, Higher Education Data Sharing Consortium (HEDS)-Alumni, and HEDS-Research Practices. (See Sidebar below.)

Valparaiso University. Valparaiso reports to the board on a variety of input, process, and outcome indicators, including results of academic program reviews and the percentage of the operating budget devoted to instruction and academic support relative to peers. Discussions that build mutual understanding between faculty members and board members about key quality issues, such as academic innovation and MOOCs (massive open online courses), are an important aspect of Valparaiso’s approach to board oversight of educational quality.

Variations among the eight institutions reflect each board’s prior experiences and culture, the college or university’s evolution in student learning assessment, and other factors. The approach used by one institution might make little sense at another. What is most similar among them is the commitment to more and better direct assessment of student learning at the institution level, the use of both direct and indirect evidence of student learning, and the engagement of board members not only with the indicators, but also with what they mean, how they are developed, and how the institution responds.

For example, suppose that the pass rate on a professional examination declines from 98 percent to 90 percent over a three-year period. Worthwhile board discussion might focus on what changes could have led to the decline, what has already been done to reverse the trend, whether employer surveys or placement rates have also suffered, and what it will take to support an effective action plan for recovery.

Evidence of Educational Quality: Questions for Board Members

  • How well does the board understand what the institution aims to accomplish in terms of learning outcomes?
  • Do we understand how accreditation works and what accreditations we do or do not have?
  • Does the institution use accreditation results to improve student learning and the environment for learning?
  • Does the board support efforts to improve educational quality where the evidence indicates improvement is needed?
  • Is educational quality here better than it used to be? How do we know?
  • Do we understand any special challenges to educational quality that may face the institution?
  • Which indicators of educational quality are most important to this institution? What performance levels does the institution need to achieve on those indicators?
  • Would an educational-quality dashboard be useful for us? Is our existing dashboard useful? Do we use educational quality evidence in our decision making?

Next Steps for Boards

Experiences of the eight institutions in the AGB-Teagle project confirm the value of selecting and assembling evidence to support board oversight of educational quality. The questions and discussions along the way are important learning experiences for all involved, and the resulting core set of key indicators provides compelling focal points for joint, ongoing work for continuous improvement. The AGB-Teagle project has reinforced that, in determining educational quality, boards must grapple with the following questions:

What evidence should we use?

Start with direct indicators of student learning outcomes that are appropriate for institution-level oversight, such as pass rates on professional examinations. Add indirect indicators of student learning outcomes, such as graduate and employer surveys. Determine which input and process indicators are most appropriate for the institution’s mission and goals and most likely to affect student learning outcomes. Engage board members, selected administrators, and faculty members in deciding what to include, and revisit those decisions as needed. Consider aligning indicators with key expectations the institution has for its graduates.

Select thoughtfully to develop the smallest reasonable number of sound indicators that are most meaningful for the institution. Do not be surprised if many of your indicators are similar to those of other institutions. Conversely, do not be surprised if some are quite different. The fundamental criterion is that they make sense for the institution at this time. Use existing data for indirect indicators, but encourage investment in new measures for direct student learning outcomes.

How can we get the most value from the evidence?

Many of the most important indicators cannot be well represented in numbers, and some of the numbers are less precise than what financial or enrollment information can provide. In most cases, it is more worthwhile to ask, for example, how a rubric works than to wonder whether a metric’s change from one year to the next is statistically significant.

Evidence means little unless board members gain some understanding of how the institution produces and assesses quality. Meaningful oversight requires both understanding and evidence. The right indicators are the ones that lead to the right interactions and follow-up.

Discuss the educational quality information contained in accreditation reports and academic program review. Use them as opportunities to build understanding about what it takes to produce and assess educational quality.

Finally, accept that much work remains to be done. As one participant put it, most institutions are “still struggling to find the critical 50,000-foot evidence that will tell the learning story effectively to the board.”

Peter T. Ewell, a national leader on educational quality and author of Making the Grade (AGB Press, 2nd edition, 2013), encourages board members to expect and demand a culture of evidence, recognize that educational quality evidence raises questions more often than it gives final answers, and review quality evidence as a regular part of board activity. Quality evidence can provide a common language and framework with which to build rewarding new collaborations among faculty, students, board members, and administrators on their most significant shared responsibility.

National Instruments for Gathering Evidence of Student Learning

Collegiate Learning Assessment (CLA)

Developed by the Council for Aid to Education, the CLA uses performance-based tasks to evaluate critical thinking skills of students. CLA+ measures critical thinking, problem solving, scientific and quantitative reasoning, writing, and the ability to critique and make arguments.

National Survey of Student Engagement (NSSE)

Survey items represent empirically confirmed “good practices” in undergraduate education, those associated with desired outcomes of college. “NSSE doesn’t assess student learning directly, but survey results point to areas where colleges and universities are performing well and aspects of the undergraduate experience that could be improved.”

Community College Survey of Student Engagement (CCSSE)

CCSSE is a “tool that helps institutions focus on good educational practice and identify areas in which they can improve their programs and services for students. . . . CCSSE asks about institutional practices and student behaviors that are highly correlated with student learning and retention.”

Collegiate Assessment of Academic Proficiency (CAAP)

ACT offers a “standardized, nationally normed assessment program that enables postsecondary institutions to assess, evaluate, and enhance student learning outcomes and general education program outcomes.”

Association of American Colleges and Universities’ (AAC&U) “Essential Learning Outcomes” Rubrics

Through its Liberal Education and America’s Promise (LEAP) program, AAC&U has identified a robust set of “Essential Learning Outcomes” representing the knowledge and proficiencies developed by a contemporary liberal education.

HEDS Alumni Survey

The HEDS Alumni Survey is designed to assess the long-term impact of teaching practices and institutional conditions on liberal-education outcomes such as critical thinking, information literacy, and problem solving. It also examines postgraduate employment outcomes, college debt, and college satisfaction.

HEDS Research Practices Survey

The HEDS Research Practices Survey is a short survey that collects information on students’ research experiences and assesses information literacy skills.
