Making Metrics Matter

How to Use Indicators to Govern Effectively

By AGB // Volume 19, Number 1 // January/February 2011

How can a board measure the progress of the higher-education institution that it governs? How do trustees know whether their college or university is meeting its goals and serving its mission? What data should they be tracking to make sound decisions?

Many institutions develop specific measures or indicators—often called “dashboards”—to inform boards and top administrators about the college or university’s current situation and performance and assist them in moving the institution ahead strategically. And, increasingly, institutions are using metrics not only to assess internal progress but also to respond to external calls for greater accountability from policy makers and the public.

In The Information Mosaic: Strategic Decision Making for Universities and Colleges (AGB Press, 2007), Gerald W. McLaughlin and Josetta S. McLaughlin note that identifying such indicators has enabled higher-education institutions “to create information out of an oversupply of data that threatened to overwhelm the decision maker.” Citing the changing nature of the higher-education environment, they add that “performance indicators must be revised on a regular basis. This is an activity through which boards of trustees can make substantial contributions to support their institution.”

Indeed, the process of developing and regularly reviewing dashboard indicators itself can be quite valuable. Selecting indicators and determining what other institutions to include in a comparison can engage administrators and board members in important educational and strategic discussions.

Trusteeship asked top leaders of three different institutions how they use strategic indicators and dashboards to govern their college or university. Clyde Allen, chair of the board of regents of the University of Minnesota System; Lawrence S. Bacow, president of Tufts University; and Laura Skandera Trombley, president of Pitzer College, shared their views.

Does your board have a particular set of data that it monitors year after year?

Allen: In 2000, our board approved an annual “University Plan, Performance, and Accountability Report,” which is issued each September. This year’s report is 110 pages plus appendices and tracks roughly 100 measures that encompass all aspects of the university’s mission and operations. Since the launch of strategic planning in 2004, we have also been refining a list of “key measures”—30 to 40 indicators that are broadly available, are comparable to those of our peers, and can be monitored regularly to ensure progress on our most important priorities.

Bacow: Nine years ago, the board and the administration agreed on a set of metrics to monitor the performance of the university. These indicators are designed to shed light on our progress toward our strategic objectives. They are summarized in a two-page dashboard that is presented to the board at each meeting. Over time, we have adjusted the dashboard to reflect changes in our strategic goals.

Trombley: For the past seven years, our board has used a dashboard containing data on finances, fund raising, students, faculty, staff, the budget, and our ranking in U.S. News & World Report. This year’s dashboard is 25 pages long, with a series of clear, concise graphs and with benchmarks and goals attached where appropriate.

What new indicators have emerged as especially important in recent times?

Trombley: Indicators that have emerged as particularly important to us at Pitzer are those concerning financial data: revenue and expenses, net-assets growth, total return in spending-rate comparison, endowment per full-time student, endowment spending as a percentage of operations, tuition discount rate, and expenditures per student.

Allen: Of course, in a challenging economy, financial indicators will continue to receive increasing attention. But the emerging areas of focus concern more qualitative indicators of academic excellence and public benefits, such as student-learning outcomes and the economic impact of university research and teaching. This new focus stems first from our own need to examine whether we are using resources wisely and to what effect. Clearly, however, this shift also is driven by increasing demands for accountability and the call for universities to demonstrate the value of higher education to society in an atmosphere of changing priorities at the individual and governmental levels.

How does the board use the information? Is there a set of indicators for each board meeting or a season for each subset of data? For example, does the board or its committees look at budgets for the next fiscal year in spring, graduation rates in fall?

Bacow: In reviewing the dashboard at each meeting, I focus on those indicators that have changed since the previous meeting, which are highlighted on the printout. That allows us to concentrate on new information. Since different indicators, such as admission results or aggregate annual research volume, change at different times, the board has a chance to discuss all the relevant categories over the course of the year.

Specific metrics are tied to specific objectives. For example, the board created a new metric to measure alumni engagement independent of fund raising. We now report to the board the aggregate number of “touches”—defined as attendance at a reunion, a regional gathering, or any other program for alumni—that we have each year with our graduates.

Trombley: The dashboard is produced once each year in the fall, after our date of record. The board treats it as an essential tool of trusteeship and consults it regularly; it accompanies all strategic discussions involving investment, capital, and campaign decisions. Every year at its fall meeting, the board splits into two small groups and meets with half of the vice presidential cabinet at a time, specifically to discuss the indicators that fall under the purview of those vice presidents. This two-hour meeting typically earns the highest evaluation among trustees, as the board believes that the best way to begin the year is with a holistic understanding of all of the drivers for the college.

Allen: Much of what comes before the board and its committees arrives at roughly the same time each year, based in part on the availability of data—for example, indicators pertaining to enrollment and the characteristics of incoming students are generally reported in October—and in part on our fiscal calendar and the state legislative schedule. We work hard to share and discuss this information in a timely manner ahead of key decision points, like the approval of annual budgets or state capital requests.

Does the board use trend data over time and comparative data on peer institutions to monitor institutional performance? Are specific measures linked to the strategic plan or an annual plan?

Allen: All of these key measures are linked directly to the university’s strategic plan, and each is indicative of several other operational measures monitored at the unit, department, college, or campus level. We use year-to-year data to demonstrate progress on specific objectives that move us toward our institutional goals, to show our effectiveness in addressing specific state priorities, and to make the case for public and private investment in the university. Comparative data help us assess our progress toward our institutional goals themselves: Are we a leading research university? Do we deliver a world-class education?

Bacow: The dashboard is constructed so that it reports the highest and lowest values of each data point for the last six years, as well as the current value. There is also an arrow indicating whether the current value is larger or smaller than the previous year’s value. The color of the arrow indicates whether that change is positive or negative from a policy perspective.
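
The row layout Bacow describes is simple enough to sketch in code. The Python snippet below is a minimal, hypothetical illustration of how one dashboard row might be assembled; the DashboardRow fields, the build_row helper, and the green/red/gray color convention are assumptions made for the example, not a description of Tufts' actual system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DashboardRow:
    name: str
    current: float
    high: float   # highest value over the prior six years
    low: float    # lowest value over the prior six years
    arrow: str    # "up", "down", or "flat" versus last year's value
    color: str    # "green" = favorable change, "red" = unfavorable, "gray" = unchanged

def build_row(name: str, history: List[float], current: float,
              higher_is_better: bool = True) -> DashboardRow:
    """history holds the six prior annual values, oldest first."""
    previous = history[-1]
    if current > previous:
        arrow, favorable = "up", higher_is_better
    elif current < previous:
        arrow, favorable = "down", not higher_is_better
    else:
        arrow, favorable = "flat", None
    color = "gray" if favorable is None else ("green" if favorable else "red")
    return DashboardRow(name, current, max(history), min(history), arrow, color)

# Example: six prior years of first-year retention, then this year's figure.
row = build_row("First-year retention (%)", [87, 88, 88, 89, 90, 91], 92)
```

Under this sketch, a two-page dashboard would simply be a collection of such rows grouped by topic, with the color and arrow giving trustees an at-a-glance read on direction and desirability of change.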

Some of the metrics in the dashboard are constructed with reference to peer institutions. For example, we report faculty salaries by discipline as a percentage of the average salary for our peer group in the same discipline. We strive to be at least at 100 percent of the average of our peer group. In addition to the dashboard, we also periodically benchmark Tufts against a specific set of 12 peer institutions that the board has agreed we will use as a reference point.
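
As a concrete, purely illustrative example of the peer-normalized salary metric Bacow mentions, the sketch below expresses each discipline's average salary as a percentage of a peer-group average and flags anything below the 100 percent target. The salary_vs_peers helper, the discipline names, and the dollar figures are all invented for the illustration, not actual Tufts or peer-group data.

```python
def salary_vs_peers(own_avg: dict, peer_avg: dict) -> dict:
    """Express each discipline's average salary as a percentage of the peer-group average."""
    return {d: 100.0 * own_avg[d] / peer_avg[d] for d in own_avg if d in peer_avg}

# Illustrative figures only.
own = {"Economics": 128_000, "Biology": 112_000, "English": 94_000}
peers = {"Economics": 125_000, "Biology": 118_000, "English": 96_000}

for discipline, pct in salary_vs_peers(own, peers).items():
    flag = "" if pct >= 100 else "  <-- below the 100 percent target"
    print(f"{discipline}: {pct:.1f}% of peer average{flag}")
```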

Are most of the dashboard measures related to financial performance? What about monitoring educational quality?

Trombley: Many of the dashboard measures are related to financial performance, but certainly not the majority. We look very carefully at issues concerning faculty diversity, for example, including the number of tenured faculty members as well as women and faculty of color.

Also, as an offshoot of the board dashboard, each of Pitzer’s vice presidents now maintains an internal dashboard specific to his or her area. These internal dashboards were created after a retreat of our administrative cabinet at which we read and discussed Jim Collins’ monograph, “Good to Great and the Social Sectors,” which shows how the good-to-great concepts can be successfully adapted to worlds in which success is not measured in economic terms. The vice presidents would agree that this has helped them grow professionally and has encouraged discussion among their staffs about just what they should be measuring and how.

Allen: We revisited the university’s key measures at the board’s November meeting, and we were pleased to see several new measures either in development or under consideration to better benchmark and measure educational performance, including placement of students in postbaccalaureate educational programs and postbaccalaureate employment.

Currently only five of more than three dozen key measures are specifically financial, although others, such as the one for carbon emissions—which is primarily related to the board’s sustainability policy but is tightly tied to energy consumption, fuel costs, square feet of space, and the condition of facilities—tell us something about the university’s revenues and expenditures as well. The best indicators tell a multifaceted story: one that can be taken at face value but whose component parts also reward closer examination.

Bacow: We monitor both financial performance and educational quality. For example, with respect to the latter, we report on the six-year undergraduate completion rate, the percentage of undergraduates engaged in research, the number of undergraduates completing senior honors theses, undergraduate satisfaction with advising, undergraduate satisfaction with career services, the number of major fellowships received by undergraduates (Rhodes, Marshall, Fulbright, Truman, and so on), the percentage of seniors going to graduate school, and the percentage of seniors with at least one job offer at commencement. In addition, we track overall undergraduate satisfaction with the Tufts experience as measured by our senior survey. We also monitor a series of financial-performance indicators, including endowment value, endowment growth, growth in net assets, total research volume, indirect-cost recovery, and income from our intellectual-property portfolio.

A good way to educate trustees about their responsibilities for monitoring performance is to engage them in the selection of dashboard indicators. It can reveal the institution’s priorities, values, and sore spots. Have you experienced this with your board?

Bacow: Reviewing the dashboard provides an opportunity to discuss our strategy at each meeting. We also periodically discuss the composition of the metrics in the dashboard and revise them as necessary. And we review from time to time whether the institutional peer group remains relevant for our analysis.

Trombley: After each section of the dashboard—admissions, advancement, and so on—trustees are asked whether they would like to see any graphs added or eliminated because they are redundant or no longer as useful as they once were. The board has been quite clear and helpful in making suggestions. For example, trustees recommended detailed tracking of our international programs and exchanges, indicators that have since proved quite important.

What have been especially valuable measures for you to monitor and how has the institution benefited from using such data? How has using data in decision-making really paid off?

Allen: For some time, our four-year graduation rates were unacceptably low. Internal research and reviews showed a number of factors that influenced those rates, and over the past several years, the board and the administration took action to address them. We set aggressive four-year graduation goals and passed a policy that provided students with a financial incentive to take a full course load. We made targeted investments to improve the first-year experience and academic advising of our students. We also invested in technologies to help students get the academic support they need and to help advisors see the early signs of struggling students. All of these decisions have been informed by and monitored using our key indicators and supporting measures. While there is still work to do, first-year retention has climbed to above 90 percent, and our graduation rates are up 17.5 percentage points since 2004.

Bacow: Although individual metrics are always valuable to look at, some of the more interesting conversations we have are about the interaction between metrics. For example, several years ago, we consciously decided to reduce reliance on early decision in admissions. We knew this would have an adverse effect on our yield and our selectivity. However, we have been able to demonstrate to the board how the quality of the entering class, as measured by average class rank and average SAT scores, has increased as a result of trading off yield and selectivity for quality.

Similarly, we report to the board a number of metrics on faculty quality, including how many members of the various national academies are on our faculty, as well as the percentage of new faculty hires from top-ranked graduate programs. I never miss an opportunity to point out to the board how strong the positive correlation is between faculty quality and our metrics for faculty compensation.

Trombley: We’ve found it particularly valuable to monitor our study-abroad programs—not only the number of students going overseas through our own programs but also exchange opportunities with foreign institutions. That has allowed us to focus on increasing the total number of students going abroad. In the past academic year, Pitzer was among the top 10 colleges in the total number of students sent overseas, with 75 percent of the senior class having at least one study-abroad opportunity. Also, it has been valuable to track our fund raising as we look forward to planning a campaign and announcing its public portion in the fall of 2011.

All in all, at this point I think my board could not imagine functioning at the high level it currently does without the dashboard. Without it, trustees would in many ways be captains of a boat with no wheel or rudder. It has become an essential part of our trustees’ portfolio.

What Performance Indicators Do Institutions and Their Boards Commonly Use?

By Dawn Geronimo Terkla, associate provost for institutional research and evaluation, Tufts University

Before I developed a final version of a dashboard for the Tufts board of trustees, I wanted to see what other institutions were doing. So, to evaluate the array of dashboards that have been created on campuses, two colleagues, from George Washington University and Northeastern University, and I collected samples from 66 public and private institutions across the country, ranging from small colleges to major research universities.

We solicited examples through various newsletters and listservs of the Association for Institutional Research and its regional affiliates and obtained additional samples through a Google search. We gathered a good cross-section of the dashboards and indicators that nonprofit higher-education institutions are using.

Dashboard indicators consist of a variety of measures that generally are related to the strategic mission of the institution or to the specific office developing the dashboard. Selecting the indicators is the most critical component. They should be 1) easy to understand, 2) relevant to the user, 3) strategic, 4) quantitative, 5) up-to-date with current information, and 6) not used in isolation. And, of course, the data underlying the indicators must be reliable.

We found that the number of indicators that institutions use and the actual measures they include vary greatly. Of the institutions we studied, the number ranged from as few as three to as many as 68. The average number of indicators was about 29. Few indicators are common to all dashboards, supporting the idea that institutions develop their indicators based on their specific strategic plan and institutional characteristics.

We grouped the measures into 11 broad categories, ordered by frequency of use. (See box below.)

Visual Presentation

The dashboards shared by our colleagues differed substantially in how they looked as well as in the types of indicators they tracked. Thirty-eight percent were one-page documents, and 15 percent ran two pages. The rest varied in length from three pages to a 50-page fact book. Seventy-eight percent contained longitudinal, trend data.

The best dashboards were organized by categories or topics, and included useful contextual information like trend data for the institution or average scores for a comparison group. Some included a goal or target, or arrows showing the direction of change and whether that was good, bad, or neutral.

Administrative Aspects

We also sent a short survey to about half of the initial respondents to find out who initially requested the dashboard, the primary audience, whether the dashboard was paper or electronic, whether access was restricted, the frequency of update, and the number of dashboards the institution had developed. Seventy-one percent responded.

In almost all cases, the president, provost, or board of trustees requested the dashboard. The primary audiences for all the respondents were the board, the president, and the deans. Most of the dashboards were electronic; some were also available in print. Institutions that had paper-only versions planned to develop electronic versions in the near future.

Three-quarters of the responding institutions had a single dashboard. Some examples of multiple dashboards included 1) one with student indicators and one with financial indicators, 2) one about the institution as a whole and one with just athletics indicators, 3) one for the institution as a whole and one for academic affairs, and 4) one for each school or college within the institution.

A Few Guidelines for Boards

By Robert A. Sevier

Over the past months I have talked to many board members and administrators and have identified not only a set of metrics to help boards lead their institutions but also five general guidelines for using such metrics:

1. Monitor your metrics over time. A one-time look is a snapshot—interesting, but often lacking context. Far more useful is a look at a consistent data point over months and years.

2. Determine peer institutions at the institutional level. It may make sense for a college to compare itself to a different group of institutions for, say, admissions than for endowment, given that various colleges may be peer institutions in one of those cases and not in the other. But you should always determine and review cohort groups in an organized fashion and not allow different units within your college to willy-nilly select other institutions to benchmark against. The creation of cohorts can become overly political. For example, faculty members may like to compare themselves against a peer group for academic salaries that shows they should be paid more.

When possible, settling on a single, small group of peers can often make comparisons more meaningful. These comparisons, coupled with longitudinal data, will give you a rich sense of both place and progress.

3. Never use cohort and benchmark data to make your institution more like your competitors. That flies in the face of one of your most important strategic goals: meaningful differentiation from your competitors. In particular, do not use competitor data to help you decide which academic programs to offer. Instead, use marketplace, student, and employer interest to help guide those decisions.

4. Have confidence in the number. You must fully understand how it was calculated and insist that administrators use the same methodology each year. In fact, as part of full disclosure, you might even insist that the calculation be included as a footnote to the metric.

5. Strike a balance between too many metrics and too few. Often, in an attempt to fully understand something, we keep adding layers of data. These layers often give us a false sense of security and cause us to lose track of the truly important numbers. Focus on the round-up numbers: the major metrics that summarize the performance of sub-metrics.

Robert A. Sevier is senior vice president, strategy, at Stamats, a higher-education research, planning, marketing communication, and consulting company. These guidelines were excerpted from a longer article on metrics and benchmarking, “24,” which can be found on the AGB Web site at www.agb.org/tship/24.
