Developing a Student Evaluation of Advising Survey Instrument

Allen Zimmerman and Arnold Mokma

Abstract

A student evaluation of advising survey instrument was developed based on information obtained from the literature and the results of an advising-related workshop. The survey form was then used in a college-wide pilot program based on the voluntary participation of advisors and students. The survey instrument was found to be an appropriate tool for advisors to use for formative feedback from students. Results of the pilot program indicate that students rate highly the performance of their advisors. Students also rate highly their own performance in the advising process, although at a lower level than that of their advisors.

Introduction

Colleges and universities typically require that faculty obtain feedback about their courses and teaching performance from students via student evaluation of teaching (SET) survey instruments. When combined with other measures such as peer and self-evaluation, student input can be very valuable in improving the teaching process.

Often, a standardized SET instrument is developed at the college or university level for the purpose of summative evaluation of teaching. In addition, or as an alternative, many faculty use their own versions of SET instruments for the purpose of formative evaluation of teaching. In either case, the items that make up the instruments will have been identified as important characteristics of effective teaching. Therefore, the SET instruments can also serve as a guideline and reminder of the key attributes of good teaching.

As with teaching, the evaluation of advising (including the systematic collection of student input) can be an important and valuable technique for improving the process. Unfortunately, evaluation of advising in general, and the use of student evaluation of advising (SEA) instruments in particular, have not been common on college campuses (Lee, Polson, and Severy, 1992; Habley and Morales, 1998). However, evaluation of advising is receiving increased attention. As one prominent example, an Assessment of Advising Interest Group (www.advising.hawaii.edu/nacada/assessmentIG/aaig.asp) was formally created within the National Academic Advising Association (NACADA) in 2000.

Advising is not formally evaluated at the authors' college, and neither summative nor formative standardized SEA instruments are available. Therefore, as a professional improvement activity and with the advising theme of the 2002 NACTA Conference as the impetus, the authors decided to develop an in-house SEA survey instrument. The intention was to make the SEA form available on a voluntary basis to college advisors who wish to obtain formative student feedback. Advisors would also be able to use the instrument as a personal guideline and reminder of the key attributes of good advising.

Consulting the literature for general recommendations regarding the content of SEA instruments was a logical first step in the process of developing an in-house advising survey form. Hanson and Huston (1995) suggest that items on advising surveys completed by students address the two major roles of advisors – providing information and serving as counselors. Creamer and Scott (2000) state that availability, knowledge, and helpfulness are the core elements of advisor behavior that should be central to the evaluation of advisors by students. Cuseo (2002) maintains that specific items that comprise an advisor evaluation instrument should reflect the following four general core qualities of effective advisors: available/accessible, knowledgeable/helpful, personable/approachable, and counselor/mentor. He also provides detailed suggestions for structuring and refining the instrument.

Information provided by two other authors was also helpful regarding the composition of SEA survey instruments. Srebnik (1988) located and reviewed 12 SEA instruments, and briefly discusses each in terms of length, content, format, and use. Habley (2000) lists eight general goals for academic advising developed by a NACADA task force; data on the achievement of these goals are collected periodically in national surveys. These goals include assisting students in such areas as self-understanding and self-acceptance, evaluating and establishing life goals and educational plans, developing decision-making skills, obtaining necessary information, and identifying support services.

Pre-existing SEA forms offered another valuable source of information regarding the makeup of an in-house advising survey instrument. Complete SEA forms are printed in advising-related articles authored by Williams (1990) and Iaccino (1991). Several other survey instruments currently in use at colleges across the country appear on the website www.advising.hawaii.edu/nacada/assessmentIG/advising_Assess_tools.asp. Items that appear on SEA instruments are listed and results analyzed in advising-related articles by Leonhardy and Jimmerson (1992), Severy et al. (1994), Bedker and Young (1994), and Radhakrishna and Thomson (1997). SEA instruments used at two colleges of agriculture and data concerning student responses were presented at the 2002 NACTA Conference by Moore and Esbenshade (2003) and Barrick and Hernandez (2003). A proprietary comprehensive survey of academic advising with one section of items labeled "impressions of your advisor" is available from ACT (1997).

An additional source of information was the group participant output generated during an interactive workshop on developing a SEA survey instrument that was held at the 2002 NACTA Conference (Zimmerman, 2003). Major activities during the workshop included individual reflection and writing, small group discussion, and large group summary discussion. The output included final lists developed for three major headings (which were established in advance of the workshop), as shown in Figure 1, and a first draft of a SEA instrument by each participant based on the lists and other materials provided. The authors made use of the contents of Figure 1 during the development and refinement of the in-house SEA form.

The purpose of this article is to provide a copy of the in-house SEA survey instrument that was developed and to discuss the results of a pilot program using the form.

Methods

The first draft of the in-house SEA survey instrument that the authors developed contained 12 items using a Likert scale. Several of the items required students to self-assess their own performance as advisees. A section requesting written comments and recommendations was also included. During Winter Quarter 2003, a small group of students was asked to review this first version. Some minor changes and adjustments were made to the form based on their feedback. The survey instrument was then shared with attendees at a regularly scheduled faculty meeting (only faculty advise students at the college) held early in Spring Quarter 2003. Based on advisor feedback, the additional item “I consult with my advisor before I register on-line” was added and a minor change in wording was made in one other question. The final version of the SEA survey instrument is shown in Figure 2.

During the campus meeting, advisors were also solicited as participants in a pilot program using the SEA form. A total of 14 individuals (which represents about 50% of the advisors on campus) volunteered. A list of the advisees for each of these advisors was compiled based on an electronic search of all students enrolled during Spring Quarter 2003. The search yielded a total of 333 students. These students were then contacted by one of the authors (Mokma) via an email message which included a cover letter explaining the pilot program and the address of the web page that contained the SEA instrument. Students were asked to complete and return the web-based survey instrument electronically.

Due to a low response rate to the first mailing (probably because the response deadline was within five days of the mailing), a second mailing was sent six days later with the response deadline eight days after the mailing. A total of 101 completed surveys were received as a result of the two mailings, which is a 30.3% response rate. Since the number of surveys returned for some advisors was very small, only total results are discussed in this article.
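As a quick check of the arithmetic, the reported response rate follows directly from the counts above. The short sketch below (the variable names are illustrative only, not part of the original study) reproduces the 30.3% figure:

```python
# Counts taken from the text above.
surveys_sent = 333      # advisees contacted by e-mail
surveys_returned = 101  # completed surveys from both mailings

response_rate = 100 * surveys_returned / surveys_sent
print(f"Response rate: {response_rate:.1f}%")  # 30.3%
```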

Results and Discussion

Results of the SEA survey instrument pilot program in terms of mean, standard deviation, and rank for each of the 13 items on the form are shown in Figure 3. The numerical values are based on the following scale: 1 = agree strongly; 2 = agree; 3 = neutral; 4 = disagree; and 5 = disagree strongly.
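The per-item statistics summarized in Figure 3 can be reproduced from the raw Likert codes with a short script along the following lines. This is a hypothetical sketch rather than the authors' actual tabulation procedure (which is not described in the article); the sample responses are fabricated for illustration, and "not applicable" answers are excluded before computing each item's mean and standard deviation:

```python
import statistics

# Hypothetical raw data: one dict per returned survey, mapping item number (1-13)
# to the Likert code (1 = agree strongly ... 5 = disagree strongly), or None if the
# respondent marked "not applicable" for that item.
responses = [
    {1: 2, 2: 1, 3: 1, 4: 3, 5: 2, 6: 1, 7: 1, 8: 1, 9: 2, 10: 2, 11: 1, 12: 1, 13: 1},
    {1: 1, 2: 2, 3: 2, 4: 2, 5: 1, 6: 1, 7: 1, 8: 2, 9: 1, 10: 3, 11: 1, 12: 2, 13: 1},
    {1: 3, 2: 2, 3: 1, 4: 4, 5: 2, 6: 2, 7: 1, 8: 1, 9: 1, 10: 2, 11: None, 12: 1, 13: 2},
]

# Mean and sample standard deviation for each item, ignoring "not applicable" answers.
item_stats = {}
for item in range(1, 14):
    ratings = [r[item] for r in responses if r.get(item) is not None]
    item_stats[item] = (statistics.mean(ratings), statistics.stdev(ratings))

# Rank items from the lowest mean (strongest agreement) to the highest.
for rank, (item, (mean, sd)) in enumerate(
        sorted(item_stats.items(), key=lambda kv: kv[1][0]), start=1):
    print(f"Item {item:2d}: mean = {mean:.2f}, SD = {sd:.2f}, rank = {rank}")
```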

Of the 101 responses, 98 were usable; the remaining three respondents rated every item as "not applicable." Although responses covered the full range (1-5) for most items, ratings were typically in the 1-2 range, with only one item on the form having a mean value of 2.00 or higher. The standard deviations indicate that the responses for each item were relatively close, with only two of the items having an SD greater than 1.00. These overall results indicate that students rate their own and their advisors' performance in the advising process at the college highly. The results also suggest that the majority of the respondents are students who really like their advisor. This positive response may be due in part to the fact that participation in the survey was voluntary.

The first five items (#1-5) on the survey ask students to assess their own performance as advisees, and the final eight items (#6-13) involve advisor assessment. The means for the five items related to student participation in the advising process range from 1.62 to 2.00, and the SDs for these items range from 0.72 to 1.13. The rankings for these five items are among the six lowest in the survey. The mean value (2.00) for item #4 (keeping updated files of advising-related materials) suggests that advisors may need to emphasize more strongly to students the importance of this responsibility. The mean values (1.89 and 1.88) for item #1 (consulting with advisor before registering) and item #2 (taking the initiative in scheduling appointments) reflect the fact that students in the college can register for classes electronically without ever consulting their advisor.

The means for the eight items related to advisor performance range from 1.28 to 1.67, and the SDs for these items range from 0.71 to 0.94. The students rated their advisors best for their knowledge of courses and curriculum (item #7, mean value 1.28), their knowledge about academic policies and campus services (item #8, mean value 1.35), and being available and approachable (item #6, mean value 1.36). Students rated their advisors' performance lowest in regard to helping with career and education goals and plans (item #10, mean value 1.67). This may be due to the fact that all students in the college are required to declare a major prior to enrollment.

Sixty students (61.2% of those who completed the survey) provided written statements in the comments and recommendations section of the SEA instrument. The written comments provide insight to faculty on their advising strengths and offer helpful suggestions on how they can improve their advising practices.

The statements are generally complimentary of the advisors’ performance, with phrases such as “has a wealth of information for me,” “easy to talk to,” “would be lost without her help,” “don’t have to be afraid to go to him for help,” and “my mentor and friend” serving as typical examples. Statements like these indicate that most advisors are meeting the expectations of student advisees.

There are also some statements that raise concerns. Some students rarely met with their advisor because they felt the advisor was too busy and "didn't have time for me," or was "not approachable." Other students viewed advising primarily as a process for scheduling courses and never met with their advisor because they could figure out course schedules on their own.

A third category of comments is critical of the advisor but may reflect advisees being unprepared or not taking responsibility for their own recordkeeping. One student wrote, "Currently I'm in my second year here. I was told this is a two-year program. I have to come back next year for a quarter with 20 credit hours and then take another course in Winter Quarter. I don't understand why I have to come back."

Summary

Based on the pilot program, the SEA instrument should be an appropriate tool for advisors to use for formative student feedback. Although responses to the pilot survey were voluntary and thus most likely from those students who feel positive about their advisors, some of the students did use the full range of the Likert scale in their evaluations. Many of the students were also willing to take the time to submit written comments and recommendations. The survey instrument also provides advisors with a succinct guideline and reminder of the key attributes of good advising.

Results of the pilot program indicate that students rate highly the performance of their advisors. Students also rate highly their own involvement in the advising process, but not at the same level as that of their advisors. Although most of the student comments written on the forms were complimentary of advisors and the advising process, some of the statements raised concerns that need to be addressed.

Copies of the SEA survey instrument have been made available to all advisors for their own use if they choose to solicit formative feedback from students. There are also plans to continue to use the SEA form to evaluate advising college-wide based on the voluntary participation of advisors and students.

In order to increase student participation, future surveys may be conducted during Autumn Quarter, when there are many more students on campus. During the Spring Quarter pilot program, more than half of the students were off campus completing their internship requirement. Also, advisors who participate may want to voluntarily include their results in their annual reports. Conducting the survey in Autumn Quarter more closely conforms to the annual performance review cycle. It should be noted that any process involving the use of the survey instrument for summative evaluation would involve college-wide discussion and faculty approval. The negative aspect of choosing Autumn Quarter is that first-year students will be in their initial quarter of enrollment and may not be comfortable evaluating their advisor so soon.

Among the many benefits of developing the in-house SEA instrument has been the increased awareness in the college of the importance and attributes of good advising. Also, by making advisor participation in the pilot survey voluntary, the threat of "evaluation" was removed and the process has been viewed as one of professional development and improvement.

Academic advising is a very worthwhile and meaningful service to students. As such, it is worthy of being assessed to determine if the students’ needs are being met. Personnel at other colleges of agriculture that have not been soliciting student feedback concerning advising are encouraged to initiate a process to obtain such information as part of an overall program to improve advising on their campuses.

Literature Cited

ACT. 1997. Educational Services Division. Iowa City, IA.

Barrick, K. and A. Hernandez. 2003. Graduating senior perceptions of faculty advising. NACTA Jour. 47(1):54-55. (Abstr.)

Bedker, E. and A. Young. 1994. Advising in the 90s: Assessing the quality of the advisor/advisee relationship. NACTA Jour. 38(1):33-36.

Creamer, E. and D. Scott. 2000. Assessing individual advisor effectiveness. In: Gordon, V. and W. Habley (eds.). Academic Advising: A Comprehensive Handbook. San Francisco, CA: Jossey-Bass.

Cuseo, J. 2002 (19 Mar.). Assessment of academic advisors and advising programs. First-Year Assessment (FYA) Listserv Series [on-line serial]. http://www.brevard.edu/fyc/listserv

Habley, W. 2000. Current practices in academic advising. In: Gordon, V. and W. Habley (eds.). Academic Advising: A Comprehensive Handbook. San Francisco, CA: Jossey-Bass.

Habley, W. and R. Morales. 1998. Current practices in academic advising: Final report on ACT’s fifth national survey of academic advising. NACADA Monograph Series, no. 6. Manhattan, KS.

Hanson, G. and C. Huston. 1995. Academic advising and assessment. In: Reinarz, A. and E. White (eds.). Teaching Through Academic Advising: A Faculty Perspective. San Francisco, CA: Jossey-Bass.

Iaccino, J. 1991. Assessment and comparison of advising for freshmen and upperclassmen. Jour. of the Freshman Year Experience 3(2):75-90.

Lee, N., C. Polson, and L. Severy. 1992. The status of advising: Results of the 1992 Membership Survey. Paper presented at the NACADA Conference, Atlanta, GA.

Leonhardy, L. and R. Jimmerson. 1992. Advising needs as perceived by students, advisors, and administrators. NACTA Jour. 36(4):37-41.

Moore, J. and K. Esbenshade. 2003. Academic advising evaluation in the College of Agriculture and Life Sciences: A successful program. NACTA Jour. 47(1):54. (Abstr.)

Radhakrishna, R. and J. Thomson. 1997. Academic advising in agricultural and extension education: An empirical study. NACTA Jour. 41(3):15-18.

Severy, L., N. Lee, K. Carodine, L. Powers, and G. Mason. 1994. Rating scales for the evaluation of academic advisors. NACADA Jour. 14(2):121-129.

Srebnik, D. 1988. Academic advising evaluation: A review of assessment instruments. NACADA Jour. 8(1):52-62.

Williams, J. 1990. Student evaluation of advisors. NACTA Jour. 34(4):42-44.

Zimmerman, A. 2003. Developing a personalized

Allen Zimmerman¹ and Arnold Mokma²

The Ohio State University, ATI, Wooster, OH 44691-4000

¹Professor, 1328 Dover Road

²Assistant Director (retired)
