This article describes the research experience of the author in working with a group of voluntary organizations in Swindon, England, to change policy makers' perspectives regarding the evaluation of local voluntary work. The focus is on how the author's applied research strategy contributed to policy change in Swindon. In the end, the ethnographic perspective in which this project was conducted became a vital tool for empowering local groups to develop their own approaches to evaluation, rather than succumbing to policy makers' attempts to impose their own methods.
Research, Lincoln and Guba (1986) argue, is a "type of disciplined inquiry undertaken to resolve some problem in order to achieve understanding or to facilitate action" (p. 549). Much applied research combines both aspects of Lincoln and Guba's definition: improving understanding of a local issue to initiate action. In many cases, the methods of research used in applied settings are an important source of this action and change. This article describes an applied research project that benefited from the use of qualitative, ethnographic methods of research. It becomes quite clear through the project description that the use of these methods in particular applied settings can assist in bridging the "gap between knowledge and action" that Patton (1983) claims often prevents clients from using the results of applied research.
The impetus for this study evolved from a growing expectation among policy makers in Swindon, England, that evaluation needed to become an integral part of negotiations with local voluntary groups. This article describes how the methodology I used for this study helped to clarify the reasons for this policy change and also allowed less powerful groups in the community to express their opinions and misgivings about the evaluation issue. The questions that I hoped this study would clarify included the following: What were the local policy implications of an increased emphasis on evaluation? To what extent were local voluntary workers already responsible for evaluating their work? What local issues concerning evaluation were not being raised that should be? And how could the voluntary sector become involved in changing local perspectives regarding evaluation?
As this project progressed, I became aware of the parallels between the methodology employed in this study and the approach to evaluation that I hoped to promote in Swindon. The ethnographic methods used to explore these research questions not only helped to provide the foundation for the development of more specialized evaluation skills in the community but also resulted in significant changes in the relationship between the local government and local community groups. The results of this project, which include an ongoing local evaluation project, further illustrate the benefits of conducting this applied project from a more qualitative and participative perspective.
When I first became involved with Swindon while attending graduate school at the University of Bath in 1986, I had originally intended to conduct an evaluation of a local women's health group. However, as I became aware of the tensions surrounding the issue of evaluation itself, I began to focus on the concerns and misgivings resulting from the funders' increasing expectations regarding evaluation. To local policy makers, the emphasis on evaluation appeared rather straightforward and logical given the restrictive climate in which it was being promoted. On closer examination, however, the evaluation of local voluntary work involved many complex issues relating not only to technique and method but also to underlying value structures and political agendas. Policy makers were concerned with questions of efficiency and value for money. Their focus on accountability and quantitative methods of evaluation ran counter to the voluntary sector's ways of working. That is, funders were not taking sufficient account of the consequences of using quantitative approaches to evaluate the work of voluntary organizations, which saw themselves in far more complex and comprehensive terms than numbers or statistics could capture. The policy implications of an increased emphasis on evaluation seemed to place local groups at a disadvantage because the process of evaluation was mystified and confused in such a way that community groups could not benefit from it.
While the issue of evaluating voluntary work was receiving far more attention from local policy makers than at any time in the past, this did not mean that local groups had never evaluated their work. As to the extent to which local groups were already responsible for evaluating their work, a number of evaluation initiatives had occurred at several different levels in Swindon's voluntary community. Two local resources for evaluation were the Student Unit and the Training Unit. Through these two projects, social-work students and community trainers provided research and evaluation support to local groups. These projects had limited value for local groups, however, as they represented a "one-time" resource without support for sustained effort. They also presented little opportunity for voluntary workers to exchange experiences and ideas across organizational boundaries. This was especially true for the smaller groups, which were more isolated from the usual channels of support in the community.
Additionally, several local groups became involved in evaluation by virtue of their affiliation with a national agency. In these cases, the national organizations would conduct evaluations that required input from local branches. The usefulness of these kinds of evaluation varied, depending on how well an organization's terms of reference and responsibilities were defined by the sponsor agency. Often a focus on the concerns of the national agency eclipsed any interest in developments and progress at the local level.
Evaluation was often initiated by volunteers and staff members themselves, either in preparation for an annual policy meeting or in response to an organizational crisis. While at times these evaluations uncovered useful information, systematic review rarely occurred between these meetings or crises.
Finally, some local evaluations occurred when local funding agencies became concerned with a particular group's goals and accomplishments. This type of evaluation caused the greatest dissension in Swindon and was the primary source of confusion and suspicion around the funders' more recent requirement that local voluntary groups evaluate their work. These "crisis-led" evaluations, usually imposed on groups during times of uncertainty or trouble, were viewed as disruptive, offering little back to local groups in terms of support or practical guidelines for incorporating recommendations.
The main factor driving the funders' increased concern with the evaluation of local voluntary work was the changing economy. Local governments were reevaluating their level of funding to voluntary groups, reflecting the economic pressures operating at the national level. Rate capping, which had restricted local authority spending for four years in a row, and the imminent abolition of the Urban Programme, a major funder of local voluntary work, also affected the local voluntary community. Local groups that were …