Return on Investment
Return on Investment (ROI) overlaps with the concepts of Key Performance Indicators (KPIs) and stakeholders, discussed elsewhere. It nonetheless requires separate treatment, particularly because ROI is so sensitive to context.
Scientific inquiry often requires equipment for observation and measurement. Examples span all of human history, including ancient artifacts such as Stonehenge (to measure the passing seasons), sundials, sextants, telescopes, thermometers, and balances. Today, many scientific instruments are highly specialized and may require particular expertise to operate. Modern instruments of science can also be quite expensive. At the high end, consider the Mars rovers, the Hubble Space Telescope, and the Large Hadron Collider; these instruments cost billions of dollars to build and operate.
Another type of modern instrument that is costly to procure and operate is the supercomputer. Supercomputers are the principal mechanisms for analysis and discovery in a range of fields, from weather forecasting, to high-frequency financial trading, to the design of modern materials and chemical compounds.
Leadership-class supercomputers, such as those operated by the US Department of Energy, can cost hundreds of millions of dollars to build, and tens of millions of dollars per year to operate. Major operational costs include vendor support, personnel, and electricity. Smaller supercomputers, such as those found in many businesses and universities, might cost from a few hundred thousand to over a million dollars. This is a substantial cost, compared to other instruments found on university campuses.
As with any capital expenditure in an organization, it is desirable to assess the return on investment (ROI) for a supercomputer. There are a variety of approaches to assessing ROI, with substantial variation in different types of organizations. Fundamentally, the idea is to assert benefit compared to expense, over time.
Measuring the expense over time is not difficult, beyond choosing among approaches to depreciation and deciding which components to include in the expense. For example, a supercomputer that costs $1 million might be anticipated to have a useful lifespan of 5 years, implying $200,000 per year under straight-line depreciation. Operational costs, such as electricity, software, vendor support, and data center facility costs, can be accounted for similarly.
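This expense arithmetic can be sketched in a few lines of Python. The dollar figures below are hypothetical placeholders for illustration, not data from any actual center:

```python
# Hypothetical figures: a $1M system depreciated straight-line over
# 5 years, plus assumed annual operating costs.
purchase_price = 1_000_000        # capital cost in dollars
lifespan_years = 5

annual_operating = {              # all values assumed for illustration
    "electricity": 80_000,
    "vendor_support": 50_000,
    "software": 30_000,
    "facility": 40_000,
}

annual_depreciation = purchase_price / lifespan_years
annual_expense = annual_depreciation + sum(annual_operating.values())

print(f"Annual depreciation: ${annual_depreciation:,.0f}")   # $200,000
print(f"Total annual expense: ${annual_expense:,.0f}")       # $400,000
```

Real accounting would also apportion shared personnel costs, as discussed below, but the structure of the calculation is the same.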
Personnel costs might involve some judgment to assess. For example, university personnel might provide support to multiple instruments, and their effort might need to be proportionally assigned to a particular system. Generally speaking, organizations that are already paying personnel and incurring other expenses seem to have little difficulty in measuring those expenses.
The benefits are much more challenging to measure. One attempt was recently published by the industry analysis group IDC (www.hpcuserforum.com), which surveyed twelve academic research computing centers in the US, asking them to quantify dollars spent on supercomputing and dollars received via innovations. Within the context of companies focused on product development or discovery, “innovation” seems a reasonable basis for assessing fiscal benefit. Such companies are likely to have their own standards for assessing the outcomes of a variety of investments.
Innovation seems a poor measure for university environments, however. University outcomes include student graduation rates, the attraction of leading faculty members, extramural research funding, and intellectual products such as papers, presentations, and monographs.
Fiscal ROI is a less common measure of investment outcomes in university environments, yet within the CASC (Coalition for Academic Scientific Computation) membership it is commonly discussed as a target. At many universities there is no effort to measure ROI on other expensive facilities, such as electron microscopes, biological laboratories, and ionospheric research stations. As with supercomputers, these are multi-user instruments with long lifetimes, specialized staffing, and high operational costs.
Supercomputers have a few characteristics that make them more appealing targets for ROI assessment than other instruments:
- Most organizations have one or just a few such systems.
- They are often operated by dedicated staff, in some sort of department or other organizational unit.
- They are in a single location, so it’s easier to measure electrical utilization, square feet of data center space, and other drivers of costs.
- Due to the relatively short lifespan of 3-7 years, there is ongoing need to plan for the next new system and its associated expenses.
- The instruments themselves are good at counting things, such as the number of users, CPU hours, storage, etc.
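Because those counters are readily available, a center can fold them into a simple unit-cost figure such as cost per delivered core-hour. A minimal sketch, with every number assumed purely for illustration:

```python
# Derive a cost per delivered core-hour from counters the system
# already collects. All inputs are hypothetical examples.
annual_expense = 400_000          # assumed total annual expense, dollars
cores = 2_000                     # assumed core count
hours_per_year = 24 * 365         # 8,760 hours
utilization = 0.80                # assumed fraction of core-hours used

delivered_core_hours = cores * hours_per_year * utilization
cost_per_core_hour = annual_expense / delivered_core_hours

print(f"Delivered core-hours/year: {delivered_core_hours:,.0f}")
print(f"Cost per delivered core-hour: ${cost_per_core_hour:.4f}")
```

Numbers like these are useful for comparing against commercial cloud pricing or a peer institution, though, as argued below, they say nothing about academic benefit.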
What makes ROI a wicked problem for supercomputing is that the ease of accounting for expenses, and the good capability to measure utilization, are not matched by any clarity on how to measure academic benefits, let alone the fiscal value those benefits carry. This is a dilemma (in the sense of Rittel & Webber, 1973) for the organizations that operate supercomputers. CASC members report constant pressure in university environments to justify their expenses, even though most other university departments and equipment operators face no such demand for ROI justification.
Perhaps the most important advice in such situations is to take control of measuring outcomes. For example, “impact” might be assessed as a combination of utilization and outcomes. Standard university outcomes such as papers, student theses, and course enrollment are already measured in most university environments. Can these measures be applied to supercomputer users?
Impact might also be assessed relative to other units at the organization. Many supercomputing facilities serve 5-10% of the faculty, and 10% or more of graduate students. Such broad utilization is likely to exceed the impact of many other instruments.
Another unfortunate aspect of supercomputing is the limited lifecycle mentioned above: a new system must be acquired every few years. The cost efficiency of a computer declines dramatically as it ages, because vendor support becomes more expensive, component failures grow more frequent, and newer systems are more efficient (in FLOPS/watt or other measures) than older ones.
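The compounding effect of these aging factors can be illustrated with a toy model. The growth and downtime rates below are assumptions chosen for illustration, not measured values:

```python
# Toy aging model: support costs grow each year while delivered
# compute shrinks (more downtime from failures), so the cost per
# delivered unit of compute rises. All rates are assumptions.
base_annual_cost = 400_000        # assumed year-1 annual expense
support_growth = 0.15             # assumed ~15%/year support cost growth
availability_loss = 0.03          # assumed ~3%/year lost to downtime
base_performance = 1.0            # delivered compute, normalized to year 1

for year in range(1, 8):
    cost = base_annual_cost * (1 + support_growth) ** (year - 1)
    delivered = base_performance * (1 - availability_loss) ** (year - 1)
    ratio = cost / delivered / base_annual_cost
    print(f"Year {year}: relative cost per unit of compute = {ratio:.2f}")
```

Under these assumed rates, the cost per unit of delivered compute roughly doubles within six years, which is consistent with the common 3-7 year replacement cycle.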
ROI for supercomputing in academic environments is a wicked problem: it is subjective and subject to challenge by administrators or other stakeholders. The outcomes that are most easily measured (such as CPU hours utilized) are poorly aligned with those that matter most (such as scientific discovery). And the push to demonstrate ROI for supercomputing is seldom applied to other university facilities or equipment.
In the spreadsheet that accompanies this monograph, I have included spaces to track many of the academic outcomes that are of likely interest. Individual centers need to look within their host institution to determine which factors matter, and how to make use of measures that are already in place.
Citation: Rittel, H., and Webber, M. (1973). Dilemmas in a General Theory of Planning. Policy Sciences, 4, 155-169.