The gurus and pioneers of total quality management offer various definitions of quality in their extensive writings (see, for example, Crosby, 1984; Deming, 1986; Feigenbaum, 1991; Juran, 1992; Taguchi, 1989). Crosby (1984) defines quality as "conformance to requirements". Juran (1992) uses the phrase "fitness for use" and emphasizes that product features should meet the customer's needs and have the fewest deficiencies. In all of these definitions, the customer's requirements and use are paramount.
Before quality and customers became the major industry focus in the global market, the field of information systems paid special attention to user satisfaction as an important performance measure (Ives and Olson, 1984; Robey and Farrow, 1982). Cyert and March (1963) emphasized the importance of satisfying users' needs in information systems. Ives et al. have observed that "satisfaction of users with their information systems is a potentially measurable, and generally acceptable, surrogate for utility in decision making" (1983, p. 785). Furthermore, they, as well as Bailey and Pearson (1983), have found in their surveys that reliability is one of the significant dimensions of user satisfaction.
The information-systems literature in this respect can be divided into three broad and interrelated categories. The first category consists of work on the assessment of an information system solely from the perspective of its users. Bailey and Pearson (1983) developed survey instruments to quantify the assessment of information systems. The behavioural components of information systems have been studied by Alavi and Henderson (1981), Ginzberg (1981), Sage (1981), and Ives et al. (1983).
The second category is concerned with measuring system performance by adding indicators beyond users' responses. De and Hsu (1986) suggested applying a chance-constrained method to monitor system performance from usage patterns in order to determine which system components should be modified. Research interest in this area is in its infancy.
The third category relates quality to other dimensions of information systems. Ballou and Pazer (1987) analysed the cost-quality trade-off for information systems. Chikofsky and Rubenstein (1988) used the CASE environment to engineer reliability into the information systems life cycle. Gemoets and Mahmood (1990) investigated the effect of the quality of user documentation on user satisfaction. Spitzer (1991) discussed the importance of information systems in service quality. More recently, research on the application of quality techniques to the development of software systems has begun. Zultner (1990), Barnett and Raja (1995), Zahedi (1995), and Haag et al. (1996) discuss the application of quality function deployment (QFD) in the requirements analysis and design of software and information systems.
Efforts to quantify reliability in information systems have been mostly limited to the reliability of software products. Software reliability has been a major area of research that has produced more than 20 software-reliability models and an extensive body of literature in the last two decades. These models have been reviewed and categorized extensively (see, for example, Goel, 1983, 1985; Jelinski and Moranda, 1972; Malaiya and Srimani, 1990; Musa, 1971; Zahedi and Ashrafi, 1995).
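To give a flavour of these models, consider the classic Jelinski-Moranda model (cited above): a program is assumed to start with N faults, each contributing an equal amount phi to the failure rate, so the hazard before the i-th failure, after i-1 faults have been removed, is phi * (N - i + 1). The sketch below is illustrative only; the parameter values are not drawn from the paper.

```python
def jm_hazard(total_faults: int, phi: float, i: int) -> float:
    """Jelinski-Moranda failure rate before the i-th failure,
    after i-1 faults have already been removed."""
    return phi * (total_faults - (i - 1))

def expected_interfailure_times(total_faults: int, phi: float) -> list:
    """Expected time between successive failures: 1 / hazard."""
    return [1.0 / jm_hazard(total_faults, phi, i)
            for i in range(1, total_faults + 1)]

# Illustrative parameters: 5 initial faults, phi = 0.1 per hour.
times = expected_interfailure_times(total_faults=5, phi=0.1)
# As faults are removed the hazard falls, so the expected gap
# between failures grows: 2.0, 2.5, ..., 10.0 hours.
```

The growing inter-failure times capture the reliability-growth idea that underlies most of the models surveyed in this literature.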
On the other hand, models formalizing the reliability of information systems are almost non-existent. Although millions of dollars are spent on developing and deploying information systems, little attention has been paid to formal metrics of information system performance. A major exception to this observation is the work done by the SIM Working Group (1992), which provided guidelines for implementing quality assessment and planning tools in information systems. This paper follows the recommendation of the SIM Working Group by synthesizing customer-satisfaction measures and the system-performance data into a single reliability metric.
Before describing the metric, a few definitions will clarify the presentation. We refer to those who benefit from the services of an information system as "customers" rather than "users". This broadens the category of those who benefit from the system's services to include its internal as well as external customers (Zahedi, 1995).
Furthermore, we define an information system to include software, hardware, procedures, human services, inputs, and outputs. In other words, all factors that lead to the production of information services are considered to be parts of the system. However, in the example case discussed here, we focus on the software aspect of the information system.
We also use the terms utility and requirements. In this context, the customer's requirements are what the customer needs, while utility is the extent of the customer's satisfaction when those needs are fulfilled. We use utility in the same sense as in economics. Kriebel and Moore (1980) discuss the individual and social utility of information systems, and Boehm (1981, Ch. 20) models the value of information via statistical decision theory. The utility of an information system involves expected monetary as well as non-monetary benefits, such as increases in knowledge, control, and satisfaction. Ives et al. (1983, p. 788) define the reliability of a measure as "its stability over a variety of conditions". They categorize it into test-retest reliability and the amount of error in measurement. Both they and Bailey and Pearson (1983) use the second category as their definition of reliability. We expand the definition of reliability (or, more accurately, the unreliability of the system) to cover system faults that may or may not result in a system crash. The methodology, however, is general enough not to be affected by the choice of definition.
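Under this expanded definition, a system's observed unreliability can be estimated from a log of all faults rather than from crashes alone. The following sketch uses a wholly hypothetical fault log to contrast the two rates; none of the data or variable names come from the paper.

```python
# Hypothetical fault log: (timestamp in operating hours, caused_crash)
fault_log = [(12.0, False), (40.5, True), (88.0, False), (130.0, False)]
operating_hours = 200.0

# Expanded definition: every fault counts toward unreliability,
# whether or not it crashed the system.
fault_rate = len(fault_log) / operating_hours

# Narrow definition: only crashes count.
crash_rate = sum(1 for _, crashed in fault_log if crashed) / operating_hours

# Here fault_rate (0.02 faults/hour) is four times crash_rate
# (0.005 crashes/hour), so the choice of definition matters.
```

The point of the sketch is only that the two definitions can diverge substantially on the same operating history.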
The handbook on metrics published by the US Air Force (1991) differentiates between a metric and a measure: a metric is a combination of measures designed to depict an attribute of a system or entity. It lists the basic characteristics of a good quality metric as: meaningful to customers; reflecting organizational goals; simple, understandable, and logical; repeatable; capable of showing a trend; unambiguously defined; economical in data collection; timely; and driving appropriate action. The information-system attributes that are critical to quality metrics are responsiveness, performance, service to the customer, and value for the customer (Zahedi, 1995).
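To illustrate the metric-versus-measure distinction, a metric might be formed as a weighted combination of several normalized measures. The measure names, values, and weights below are hypothetical, chosen only to mirror the attributes listed above; they are not taken from the paper or the Air Force handbook.

```python
# Hypothetical normalized measures (each on a 0-1 scale).
measures = {
    "responsiveness": 0.90,  # e.g. fraction of requests answered on time
    "performance":    0.85,  # e.g. fraction of jobs meeting throughput target
    "service":        0.95,  # e.g. customer-survey satisfaction score
}

# Hypothetical importance weights, summing to 1.
weights = {"responsiveness": 0.4, "performance": 0.3, "service": 0.3}

# The metric combines the individual measures into one number.
quality_metric = sum(weights[name] * measures[name] for name in measures)
# 0.4*0.90 + 0.3*0.85 + 0.3*0.95 = 0.90
```

Each input is a measure; the weighted sum is the metric, a single number depicting an attribute of the system, which is what makes it repeatable and capable of showing a trend over successive reporting periods.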
In this paper, we develop a reliability metric for monitoring the performance of information systems that meets the above requirements. It combines customers' information requirements with the technical specification of the information system. This goes to the heart of the definition of quality: meeting customers' needs and expectations. A simplified integrated information system for manufacturing is used to show the application of the methodology. We also report an evaluation of the metric by 19 information system (IS) managers based on the attributes listed above.
Requirements hierarchy and utility values
The added value of an information system is its utility for customers. Otherwise, its technical elegance and good design will have little relevance. The question we address is how to combine the utility of an information system to its …