
Experiential research at-risk: The challenge of shifting traditional research paradigms

Bocarro, Jason; Richards, Anthony

Abstract

This article addresses one of the consistent themes within experiential education, namely the concern that the field lacks a strong research base. This concern has contributed to the lack of clarity that surrounds the field. Research and programmatic evaluations of adventure-based experiential programs may fail to produce results for a variety of reasons. This article explores two of them. The first is the breakdown of the program itself and its ever-changing methods of delivery. The second is the inappropriateness of research methodologies that follow standard procedures, which may be inconsistent with non-standard programs.

“Experience, that most brutal of teachers. But you learn, my god do you learn.”

C. S. Lewis

One of the constant concerns and criticisms in experiential education is the general acceptance that efforts to evaluate programs have been insubstantial (Shore, 1977; Warner, 1990). This has resulted in a lack of respect for the field, which has at times been accused of maintaining a "soft" perspective, unable to adapt to the spontaneity and unpredictable circumstances that often characterize such programs (Ewert, 1989). The importance of quality research should not be underestimated, as it can help legitimize the field and convince people outside it that experiential education is an effective medium for learning and for effecting positive behavioral change. Furthermore, as Ewert (1987) points out, research and evaluation can become useful tools in better understanding the impacts of experiential education programs.

The literature on evaluation and research of adventure-based experiential learning programs has been fairly limited. Research on Outward Bound programs, for example, is weak, with most studies focusing on outcome issues such as changes in self-concept and self-esteem, to the virtual exclusion of programmatic issues such as the nature of instruction and the duration of the course (Shore, 1977). The issue of research within this field is further complicated by unclear definitions of what evaluation is, as well as the absence of a universally accepted evaluation method (Davis-Berman & Berman, 1994). Warner (1990) describes how the majority of programmatic evaluation has been dominated by one-time outcome studies conducted by researchers, such as graduate students from a variety of departments, whose experimental designs were often weak. Ewert (1989, p. 17) states that the general view of research in experiential programs has been reserved, "due primarily to the overreliance on self-selected samples and measures using a self-report format." Furthermore, many programs suffer from a lack of comparison with a non-intervention group drawn from the same population (Rawson & McIntosh, 1991). These problems emphasize the fact that there are still substantial gaps in the body of knowledge within the field of adventure therapy programs. The failure to answer this broader array of issues has contributed to the misunderstanding and lack of clarity that surround the field (Gass, 1993).

This article describes the 1994 Youth L.I.V.E. adventure-based counseling program and the problems associated with the research of the program. The research methodology followed a traditional format and was initially influenced by the requirements of a university thesis committee. Previous experimental designs of other research into experiential learning programs were adapted. This article shows how these adaptations were inappropriate. The authors suggest that the field of experiential learning is not being well served by applying traditional methods to nontraditional programs. In particular, the adherence to “customary procedures” is a paradigm that must be changed.

The 1994 Youth L.I.V.E. Program

Youth L.I.V.E. (Living In Volunteer Experiences) was an adventure-based counseling program run in 1994 by the City of Halifax Recreation Department in Nova Scotia, Canada. The participants were drawn from so-called "at-risk" communities, in which dysfunctional families of low socio-economic status are prevalent. Participant selection into the L.I.V.E. program was based upon the evaluations of a selection committee composed of youth workers from the four Halifax recreation centers.

Participants were chosen by the youth workers from the various centers on the premise that they would respond positively to the program's aims and objectives and that these positive outcomes would be maximized. Through advertisements in local community centers, a shortlist of potential participants between the ages of thirteen and eighteen was invited to an interview. Twenty-five youth were selected for the Youth L.I.V.E. program based upon the organizers' perceptions of their potential success in the adventure-based counseling program; the remainder provided the basis for the control group.

The selected participants were divided into two groups, determined by geographic location, and participated in the program in staggered sequences. The primary program staff were the same for both groups. The first group comprised thirteen youth from two of the community centers; the second group comprised twelve youth from the other two centers. Each group was composed to provide equal representation of gender and an equitable distribution of racial diversity, based on the demographics of the four Halifax communities being served.

Although the experimental groups and the control group shared similar demographic characteristics and interest in the program, they were not exactly matched. Because participation was partially self-selected (voluntary participation was seen as an essential prerequisite to involvement in the program), the control group was drawn from the applicants who were not offered a place in the program. The fact that they were not selected may have affected their motivation and the quality of their responses. The program organizers were asked to select the most appropriate applicants for the control group in order to achieve as close a match as possible. Although the control group did not have the opportunity to participate in that year's program, it was assumed that the success of the program might result in increased funding and broader participation in subsequent years.

The project evolved out of the concept of programming for “at-risk” youth using an adventure medium and was designed in four stages. The first part of the program consisted of a seven-day wilderness adventure experience; the second part, a six-day service learning program; the third part, a seven-day inner-city adventure program; and finally, a year-long “Youth Helping Youth” phase.

Stage 1: The Wilderness Adventure Experience

The first part of the program consisted of a seven-day wilderness adventure experience. The individuals participated in physical and emotional challenges in a variety of outdoor settings. Throughout the experience, the youth were asked to set individual and group goals, take on emotional and physical challenges, discuss their personal experiences, and use real-world settings to solve problems.

The participants were encouraged to take part in a variety of activities. However, a "challenge by choice" philosophy was introduced, whereby each youth could choose whether or not to take on any given activity. A Full Value Contract was drawn up between the organizers and the youth at the end of the first night. The Full Value Contract is the process by which the group agreed to find positive value in the efforts of its members, with the following three commitments forming its backbone:

Agreement to work together as a group and to work toward individual and group goals.

Agreement to adhere to certain group behavior and safety guidelines.

Agreement to give and receive feedback, both positive and negative, and to work toward changing behavior when it was appropriate.

(Schoel, Prouty, & Radcliffe, 1988, p. 95)

All of the activities during the program were briefed and debriefed so that each significant event provided participants with time to reframe the experience. This enabled associations to be made to real-life challenges. The associations were achieved through the use of "therapeutic metaphors," which presented the participants with metaphoric learning situations paralleling the features of some real-life situation or issue that the youth needed to address.

Stage 2: The Service Learning Phase

The service learning phase of the program lasted for six days and consisted of various service learning projects designed to improve the lives of others while the participants learned through the experience. This phase combined individual service projects (where each youth was set a challenge within the community that catered to his/her interests) with projects undertaken in small groups. This part of the program attempted to give the youth a better understanding of the importance of serving their community.

Stage 3: The Inner-City Adventure Experience

The inner-city adventure program was a six-day continuation of the outdoor adventure experience using the community (city) as the adventure medium, rather than the outdoors. Examples of the inner-city activities included rock climbing, an individual solo, a cultural immersion day (exposure to a variety of different cultures), an eleven-hour epic activity, an inner-city hike, and a jail experience (participants were locked up in police cells for the night). The activities were selected to allow the youth to obtain a substantial understanding of their communities, the services available to them, and the strength and diversity of the people within their community.

Stage 4: The “Youth Helping Youth” Program

After completion of the three-week program, the City attempted to solicit qualified volunteers to provide leadership and life skills training for all the participants of the program. These volunteers would consist of experienced professionals from various fields, such as community services and the police and fire departments. It was then proposed that the recreation staff, youth partners, and mentors would plan, design, and implement strategies for motivating other "at-risk" youth to become involved in positive community activities. This part of the project would be called the "Youth Helping Youth" program.

Experimental Design

This study examined the effect that the Youth L.I.V.E. program had on the participants’ general level of self-esteem, self-esteem in social situations, self-efficacy, and sense of community involvement through the use of self-report questionnaires. The specific instruments employed were the Rosenberg Self-Esteem Scale, a validated shortened version of the Janis-Field Feelings of Inadequacy Scale (self-esteem in social situations), the self-efficacy subscale of the Social and Personal Responsibility Scale, and the Semantic Differential Scale on Being Active in Your Community. All of the self-report questionnaires used were based upon Conrad and Hedin’s (1981) national assessment of experiential education.

At the interview, all of the candidates were asked to complete the self-report questionnaires, which provided the baseline data for the study. During the program, some group debriefing interviews were conducted, asking questions relating to the program and the behavioral measures. It was intended that these responses would be used to enhance and interpret the quantitative data generated by the self-report questionnaires.
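To make the design concrete, the sketch below shows how baseline and post-program scores on one such scale might be compared. It is a minimal, hypothetical illustration in Python: the scores are invented, and the scoring convention (summing Likert-style items into a single scale total per participant) is an assumption made for the example, not a description of the actual instruments.

```python
# Minimal sketch of a pre/post self-report comparison (hypothetical data).
# Each value stands for one participant's total score on a scale such as
# the Rosenberg Self-Esteem Scale; the numbers below are invented.
import numpy as np
from scipy import stats

pre = np.array([22, 18, 25, 20, 19, 24, 21, 17, 23, 20])   # baseline (interview)
post = np.array([24, 19, 27, 23, 18, 26, 22, 20, 25, 21])  # after the program

# The same individuals are measured twice, so a paired t-test on the
# within-person differences is the natural comparison.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean change = {(post - pre).mean():.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Note that, without a comparison group, even a significant pre/post gain in such an analysis cannot be attributed to the program itself, a point that becomes central below.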

Research Limitations

The design and nature of experiential programs mean that most, if not all, have a small sample population. From a quantitative research perspective, this limits the generalizability of the findings, making most of the discussion speculative. The traditional logic of a statistically significant treatment effect assumes that if the treatment were replicated elsewhere, the researcher could expect to document the same results at another time. This type of analysis was originally designed for laboratory studies, where a high degree of control over experimental and environmental variables existed and a replication could therefore be expected to be identical, or nearly identical, to the original study (Warner, 1982). However, in the case of the Youth L.I.V.E. project, one could argue that most of the programming was unique and that the characteristics of the activities and leaders were an integral part of the definition of the specific interventions. Therefore, it would be naive to argue that another program in a different place, at another time, with different circumstances and different leaders, could actually replicate this study.
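The practical consequence of small samples can be made concrete with a simple power calculation. The sketch below is a hypothetical illustration using the statsmodels library; the group sizes match those reported later in this article (twenty-five participants and seventeen potential controls), while the effect sizes are conventional Cohen's d benchmarks chosen purely for illustration.

```python
# Sketch: statistical power of a two-group comparison at the scale of
# the Youth L.I.V.E. study (25 participants, 17 controls). The effect
# sizes are illustrative benchmarks, not estimates from the program.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):  # small, medium, large effects (Cohen's d)
    power = analysis.solve_power(effect_size=d, nobs1=25,
                                 ratio=17 / 25, alpha=0.05)
    print(f"d = {d}: power = {power:.2f}")
```

Under these assumptions, only a large effect has even a roughly two-in-three chance of reaching statistical significance; small and medium program effects would most likely go undetected, which is one more reason to treat findings from studies of this size with caution.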

Furthermore, one of the main difficulties associated with program research and evaluation can be the program design and the programmatic problems associated with that design. These problems are often out of the researcher's control and can force him/her to adapt, and sometimes spontaneously change, both the collection and analysis of the data.

Programmatic Problems of the Youth L.I.V.E. Program

The 1994 Youth L.I.V.E. adventure-based counseling program was an exploratory project that combined some of the successful concepts from the 1993 program with some changes. The three main differences were the implementation of a new three-stage format (wilderness, service learning, and inner city); the increase in the number of youth participants (from 10 in 1993 to 25 in 1994); and an accessible, site-based wilderness phase that did not contain any type of expedition. As a result of continual changes during the program itself, the process of data collection was impaired. Furthermore, the programmatic problems (which were not foreseen by either the program organizers or the researcher) forced the researcher to adapt, and sometimes spontaneously change, both the collection and the analysis of the data. In many instances, the perceived credibility of a program will directly affect the perceived credibility of the research. In this instance, the perceived inadequacies of the Youth L.I.V.E. program became linked to the research itself, and both the credibility of the research and the data collection suffered as a result.

The study began with forty-two youth attending interviews for the program. Of those forty-two, twenty-five were selected as participants and the remaining seventeen were considered an appropriate control group. It was proposed before the program began that the youth workers would help encourage these seventeen youth to serve as controls. The suggested incentive for participation as a control subject was some kind of adventure day, run by the individual youth center and organized by the youth worker in conjunction with the program organizers. However, the adventure day never materialized, and an added incentive of free swimming offered by one of the four youth centers produced a poor response. This resulted in the abandonment of the control group, which meant that the study could no longer control for the threat of history (the influence of contemporaneous outside events).

Another problem encountered was the high attrition rate among participants. Some participants began to choose which days to attend, particularly during the service learning and inner-city phases, which created difficulties in data collection. The strength of this treatment program depended on both the program content and the ability of the group leaders to deliver that content. Unfortunately, one of the two full-time program leaders had limited previous adventure-based counseling experience, which was reflected in the participants' feedback. Moreover, although both full-time program leaders were involved in all stages of the program, there were days when the program was run by part-time staff who had little experience in this area.

The difficulty of defining the characteristics of the intended program participants also posed a problem for the organizers and probably contributed to the high dropout rate. Each of the four youth workers had his/her own idea of what type of youth would fit the program's "at-risk" criteria. The outcome was a program containing youth at widely varying degrees of risk. Unfortunately, the program was not designed to be a panacea for all these different types of youth and could not cater to their different needs.

The final stage of the program, the "Youth Helping Youth" phase, failed to be implemented. As described above, the intention of this phase was to have each of the participants supported by an adult mentor, a volunteer drawn from a pool of experienced professionals. The non-implementation of this phase resulted in difficulties in tracking down some participants for the posttests (the follow-up tests administered after completion of the adventure-based counseling program).

The experiential nature of this data collection was not necessarily consistent with the style and method of other similar studies. The program itself was somewhat unpredictable and, therefore, the researcher was often at the convenience and "mercy" of the program and its organizers. For example, program-related problems created instances where the participants were unavailable or unwilling to either complete the self-report questionnaires or participate in the qualitative group debriefing interviews.

Conclusion and Implications Toward Future Research

The lack of understanding of the purpose and nature of the study by the program organizers culminated in a lack of respect for both the researcher and the research. Consequently, participant observation was limited (due to some of the program leaders not wanting a researcher out in the field for extended periods), and some of the qualitative group debriefing interviews were not conducted. It is recommended that future studies build a relationship with the program leaders before the commencement of the program so that both parties understand each other's roles and requirements. Ewert (1995) alludes to the issue of different needs between the two parties, which he describes as the researcher/practitioner gap. Often, practitioners and researchers are faced with different concerns that can result in criticism between the two parties. Ewert (1995) proposes that if practitioners want information that is useful and specific, they should be more receptive to the idea of cooperatively designing the research and evaluation.

An important problem in the investigation of experiential education has been the lack of appropriate measures and instruments (Cason & Gillis, 1994; Conrad & Hedin, 1995; Ewert, 1995; Ibbetson, 1994; Warner, 1982). There is often not enough consistency among programs to allow this type of research to be replicated (Mitten, 1994; Warner, 1982). The quantitative measures used in this study were useful in detecting important trends in this particular adventure-based counseling program, but they also exposed some obvious limitations. For example, newly developed scales are accused of lacking reliability, whereas "tried and trusted" scales are criticized for being outdated and, at times, inappropriate. A suggested route around this has been the recent popularity of using a qualitative approach alongside the quantitative questionnaires (Davis-Berman & Berman, 1994; Gass, 1993; Warner, 1990). Conrad and Hedin (1995, p. 402) suggest that interviews, observations, analysis, case studies, and ethnographies "…could be used to both triangulate and see beneath the findings of the paper and pencil tests." Ewert (1987) proposes a multimethod, multivariable approach that combines a number of variables with a number of methods. The use of a qualitative approach (either in pure form or combined with a quantitative approach) seems a more appropriate methodology for confronting some of the issues described above. Marshall and Rossman (1995) contend that a qualitative strategy allows researchers and program evaluators to maintain the required flexibility within their research design by devising a strategy that includes many traditional research elements alongside the right to modify and change that initial plan during data collection.
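One way to operationalize such triangulation is simply to keep the quantitative scores and the coded qualitative material linked at the level of the individual participant, so that each source can be read against the other. The sketch below is a hypothetical illustration in Python using pandas; the participant identifiers, change scores, and interview theme labels are all invented for the example.

```python
# Sketch of participant-level triangulation (all data invented):
# link questionnaire change scores with themes coded from the
# qualitative group debriefing interviews.
import pandas as pd

scores = pd.DataFrame({
    "participant": ["P01", "P02", "P03"],
    "self_esteem_change": [3, -1, 5],  # post minus pre scale score
})

themes = pd.DataFrame({
    "participant": ["P01", "P02", "P03"],
    "interview_theme": ["growing confidence",
                        "felt excluded by group",
                        "pride in service project"],
})

# One row per participant, pairing the numbers with the narrative.
print(scores.merge(themes, on="participant"))
```

A negative change score sitting beside a theme such as "felt excluded by group" points toward a program-process explanation that the paper and pencil tests alone could not supply, which is precisely the "seeing beneath" that Conrad and Hedin describe.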

Some interesting ethical dilemmas evolved out of the examination of the Youth L.I.V.E. program. Because there had been no clear definition of how the research would fit within the confines of the overall program, the researcher was often confronted by a number of conflicting issues. For example, some of the participants perceived the researcher as a staff member, which led to a conflict between the roles of participant observer and researcher. Furthermore, if the failure of the program directly affects the quality of the research, does the researcher have a right to at least some input into certain elements of the program design? And would the researcher be abusing the role of impartial observer by reporting justifiable concerns (unrelated to any research hypotheses) to program organizers and funders? These ethical dilemmas could have been alleviated, or at least minimized, had roles been more clearly defined beforehand and had the researcher and program organizers collaborated more closely in designing the research and evaluation process. Patton (1982) alludes to such dilemmas and states that there is no way of anticipating in advance what information will surface. He suggests that it is important to reach an agreement on what information will be provided by the researcher, what information will not be divulged, what should remain confidential, and what criteria should be applied in making those judgments. This point is crucial, as one's personal integrity and credibility may well be at stake.

Although we advocate a closer relationship between program organizers and researchers, it is important to recognize a potential ethical dilemma to which both Ewert (1987) and Scott (1997) allude. Where a number of stakeholders are associated with a program, there can be strong pressure to report success; indeed, this and related fields still seem to be plagued by ineffective programs using research and evaluation to hide their inadequacies. Thus, in certain cases, it may be in the "political interest" of researchers to provide programs with documentation of success in order to legitimize funding efforts, which in turn may provide further funding for the researcher. Similarly, it is important for researchers to be up front and honest about their fallibilities and weaknesses so that others can learn and profit from their experiences. Too often researchers conceal methodological flaws and mistakes in order to package a clean article submission. However, as many experiential educators have discovered, we often progress and gain the most knowledge through our biggest mistakes.

As the literature has revealed, adventure-based experiential programs have constantly been subjected to criticisms of being self-selective and of merely producing a self-fulfilling prophecy. However, little is known about what happens to participants who fail to complete a program or who have negative personal experiences. Furthermore, the majority of research has examined short-term changes, with very few long-term longitudinal studies. Gass (1993) points out that most professionals agree that this field lacks a strong research base. For the field to gain the acceptability it desires, more money and attention need to be directed toward long-term evaluation. Other critical areas of evaluation that need to be studied include location, transfer and facilitation strategies, leadership style, and the success of follow-up programs (Priest & Gass, 1997).

Although the data collection methods themselves did not cause the problems, the program's failure to validate the process and design of the research emphasized the continual need to develop new research techniques and measures to better understand and deal with the uniqueness of adventure-based experiential learning programs. Achieving this may require establishing a group of dedicated and interested individuals within the field to investigate and discuss research methodologies that are more compatible with, and consistent for, evaluating, assessing, and justifying the uniqueness and spontaneity that characterize these programs. This may require a paradigm shift away from what has traditionally been considered the "correct" way to conduct research.

The challenge for experiential researchers should not be to avoid evaluating poorly designed or unsuccessful programs. Instead, they should see this as a challenge to design new experiential research methodologies and techniques to prevent programmatic problems from interfering with their data collection and analysis but which still comply with the rigors demanded of quality research. Even though the process may be a frustrating and, at times, brutal experience, you will learn, my god will you learn.

References

Cason, D., & Gillis, H. L. (1994). A meta-analysis of outdoor adventure programming with adolescents. Journal of Experiential Education, 17(1), 40-47.

Conrad, D., & Hedin, D. (1981). Instruments and scoring guide of the experiential education evaluation project. Minneapolis, MN: Center for Youth Development and Research, University of Minnesota.

Conrad, D., & Hedin, D. (1995). National assessment of experiential education: Summary and implications. In R. J. Kraft & J. Kielsmeier (Eds.), Experiential learning in schools and higher education. Boulder, CO: Association for Experiential Education.

Davis-Berman, J., & Berman, D. S. (1994). Wilderness therapy: Foundations, theory and research. Dubuque, IA: Kendall/Hunt.

Ewert, A. (1987). Research in experiential education: An overview. Journal of Experiential Education, 10(2), 4-7.

Ewert, A. (1989). Outdoor adventure: Theory, models and foundations. Scottsdale, AZ: Venture.

Ewert, A. (1995). Research and evaluation of experiential learning. In R. J. Kraft & J. Kielsmeier (Eds.), Experiential learning in schools and higher education. Boulder, CO: Association for Experiential Education.

Gass, M. A. (1993). The evaluation and research of adventure therapy programs. In M. A. Gass (Ed.), Adventure therapy: Therapeutic applications of adventure programming. Dubuque, IA: Kendall/Hunt.

Ibbetson, A. (1994). Team building: An investigation of the effectiveness of an adventure-based experiential approach. Unpublished master's thesis, Dalhousie University, Nova Scotia, Canada.

Marshall, C., & Rossman, G. B. (1995). Designing qualitative research. London, England: Sage Publications.

Mitten, D. (1994). Ethical considerations in adventure therapy: A feminist critique. In E. Cole, E. Erdman, & E. D. Rothblum (Eds.), Wilderness therapy for women: The power of adventure. New York, NY: The Haworth Press.

Patton, M. Q. (1982). Practical evaluation. Beverly Hills, CA: Sage Publications.

Priest, S., & Gass, M. A. (1997). Effective leadership in adventure programming. Champaign, IL: Human Kinetics.

Rawson, H. E., & McIntosh, D. (1991). The effects of therapeutic camping on the self-esteem of children with severe behavior problems. Therapeutic Recreation Journal, 25(4), 41-49.

Schoel, J., Prouty, D., & Radcliffe, P. (1988). Islands of healing: A guide to adventure based counseling. Hamilton, MA: Project Adventure, Inc.

Scott, D. (1997). The formative nature of conducting research for a metropolitan park district. Applied Behavioral Science Review, 5(1), 25-39.

Shore, A. (1977). Outward Bound: A reference volume. Connecticut: Outward Bound.

Warner, A. H. (1982). A social and academic assessment of the outcomes of experiential education trips with elementary school children. Unpublished doctoral dissertation, Dalhousie University, Nova Scotia, Canada.

Warner, A. H. (1990). Program evaluation: Past, present and future. In J. Miles & S. Priest (Eds.), Adventure education (pp. 309-321). State College, PA: Venture.

Jason Bocarro, M.A., is a doctoral student in the Recreation, Park and Tourism Sciences Department at Texas A&M University, College Station, Texas 77843. (E-mail: jbocarro@rpts.tamu.edu). Anthony Richards, Ed.D., is a faculty member in the Health and Human Performance Department at Dalhousie University, Halifax, Nova Scotia. (E-mail: richarda@is.dal.ca)

Copyright Association for Experiential Education Sep/Oct 1998
