NIKOLAOS K. TSELIOS (1)
NIKOLAOS M. AVOURIS (1)
ANGELIQUE DIMITRACOPOULOU (2)
SOPHIA DASKALAKI (1)
Experimental results of a usability evaluation of a distance-learning system are presented in this article. An experiment is described that took place in the context of a university course. The main goal of the experiment was to evaluate the usability of the Testing and Self-evaluation component of the system. A complementary research goal was to explore the possible impact of system usability on student performance. For this purpose, two alternative software components that shared similar functionality but were implemented in different ways (IDLE, WebCT) were compared. The usability evaluation was based on user questionnaires. From this experiment a correlation between software usability and student performance emerged, underlining the importance of usability evaluation for systems supporting distance learning.
Recent years have witnessed the development of powerful new enabling technologies for distance and collaborative learning. Advances in network performance and the widespread use of the Internet have made it possible for high-quality educational material to become available to large numbers of potential learners. These technological advances have also accelerated the development of educational material for distance learning offered through the Web. Most universities and other educational institutions incorporate the Web into their traditional everyday activities and offer educational material in various forms for distance learning to a wider extramural audience. Yet this new use of computer technology in education has once more raised skepticism about the effectiveness of the process (Fitzelle & Trochim, 1996).
The World Wide Web (WWW or Web) is the technological environment that enabled and supported this process. There are many reasons for which the Web can be considered a suitable educational medium: it is easily accessible by many groups of learners; it supports multiple representations of educational material and various ways of storing and structuring this information; and it is powerful and easy to use as a publishing medium. Additionally, it has been widely accepted that the hypermedial structure of the Web can support learning. Some researchers characterize the Web as an active learning environment that supports creativity (Becker & Dwyer, 1994). According to Thuring, Hannemann, and Haake (1995), the Web encourages exploration of knowledge and browsing, behaviors that are strongly related to learning. The associative organization of information on the Web is similar to that of human memory, and the process of information retrieval from the Web presents similarities to human cognitive activities. However, a hypermedial space such as the Web cannot, on the basis of these features alone, be considered an effective tutoring environment. It is more appropriate to think of the Web as a powerful tool that can support learning if used in an appropriate way (Eklund, 1995; Alexander, 1995). This is because learning is a process that depends on other factors, such as the learner's motivation, previous experience, and the learning strategies that the individual has been supported to develop. The effectiveness of any educational environment cannot be considered independently of these aspects. It is widely accepted that effective learning is also related to educational environments and tools that give students incentives for active participation in the learning process. The characteristics of the tools used to support learning are therefore factors affecting the process.
One of the most important features of any software tool is its usability, that is, the effectiveness, efficiency, and satisfaction it affords the user in a given context of use and task. The usability of an educational environment is thus related to its pedagogical value (Kirkpatrick, 1994), and evaluation of its usability is part of the process of establishing its quality. However, evaluating the usability of a distance-learning environment is not an easy task. The effectiveness of usability evaluation techniques varies, depending to a great extent on the specific characteristics of the evaluated environment and the objectives of the evaluation study (Molich, Thomsen, Karyukina, Schmidt, Ede, van Oel, & Arcuri, 1999). Some of the most widely used techniques are heuristic evaluation (Nielsen, 1993; Levi & Conrad, 1996), field studies and observation (Tognazzini, 1992), questionnaire filling, interviews, logging of user performance under laboratory conditions, and so forth.
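Questionnaire-based evaluation of the kind mentioned above typically reduces a set of Likert-scale responses to a single usability score. As an illustrative sketch only — using the scoring rule of the System Usability Scale (SUS), a widely used instrument, and not necessarily the questionnaire employed in this study — the aggregation can be expressed as:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1st, 3rd, ...) are positively worded:
    contribution = response - 1.
    Even-numbered items are negatively worded:
    contribution = 5 - response.
    The summed contributions (range 0-40) are scaled to 0-100
    by multiplying by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A respondent who strongly agrees (5) with every positive item and
# strongly disagrees (1) with every negative item scores the maximum:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

The alternation between positively and negatively worded items is a deliberate design choice in such instruments, intended to counteract acquiescence bias in respondents.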
While there is a large corpus of theoretical and practical knowledge on software usability evaluation in general and educational software in particular (Squires & Preece, 1999; Avouris, Tselios, & Tatakis, 2001), there are no established techniques for the usability evaluation of distance-learning environments (Heines, 2000). This is due partly to the fact that distance learning is an area of relatively short history, characterized by a rapidly shifting technological context and by the inherent idiosyncrasies of the environments under evaluation. For instance, users of distance-learning tools, in contrast to users of traditional software, can access them from various computing and social contexts; logging their performance and actions presents technical difficulties; the proportion of novice users is relatively high; and, in general, the characteristics of typical users of distance-learning services cannot be easily predicted. According to Hayes (2000), usability evaluation of online course delivery systems should examine in particular the effort required by the user to take ownership of the system's functionality and should concentrate on ease of use. It should be mentioned here that other areas of web-based applications and tasks, such as information and multimedia content distribution and e-commerce applications, appear to face similar problems as far as usability evaluation is concerned, according to Nielsen (2000).
An Overview of the Article
The research reported here is part of an effort to delineate and expose some of these problems through a specific case study involving the usability evaluation of a module of a distance-learning environment, used under realistic educational conditions. A distance-learning software environment usually contains a number of components with different functionalities: modules for content presentation, for student communication with tutors and peers, for collaboration and interaction support, for active learning, and so forth. One of the components encountered most often in these environments is the Testing and Self-evaluation component. Such a module is usually simple in terms of functionality and interaction design: it contains a number of closed questions with a predetermined set of answers. User interaction and user tasks are trivial, and one might therefore expect usability not to be an important issue in this context. As a result, usability assessment of such components is not normally performed, due to the conventional and predictable character of the tools involved.
Within the framework of this research concerning the evaluation of distance-learning systems, one of the objectives was to establish a methodology that included suitable techniques for evaluating the various components of distance-learning educational environments and that related the effectiveness of the tools to their usability. This approach involved an extensive evaluation experiment with a distance-learning environment in use in the Department of Electrical and Computer Engineering (ECE) of the University of Patras, developed by our group: the Infotronic Distance Learning Environment (IDLE) (*) …
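The relationship between tool usability and student performance reported in the abstract is typically quantified with a correlation coefficient such as Pearson's r over paired (usability score, grade) observations. A minimal, self-contained sketch follows; the paired values in the usage example are invented for illustration and are not data from this study:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples.

    Returns a value in [-1, 1]: +1 for a perfect positive linear
    relationship, -1 for a perfect negative one, 0 for no linear trend.
    """
    n = len(xs)
    if n != len(ys) or n < 2:
        raise ValueError("need two equal-length samples with n >= 2")
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-student usability scores and exam grades
# (illustrative values only, not measurements from the study):
usability = [62.5, 70.0, 55.0, 80.0, 75.0]
grades = [6.0, 7.5, 5.5, 9.0, 8.0]
r = pearson_r(usability, grades)
```

Note that a positive r indicates association, not causation; establishing that usability differences drive performance differences requires the kind of controlled comparison between alternative implementations (IDLE vs. WebCT) that the experiment described here set up.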