ASSESSMENT OF ACHIEVEMENT AND PERSONAL QUALITIES UNDER CONSTRUCTIONIST LEARNING ENVIRONMENT

Tangdhanakanond, Kamonwan

Constructionism as an educational concept asserts that students are particularly likely to construct knowledge and form new ideas when they engage in building tangible objects. Such objects are often products of a group project, and learning under constructionism is therefore project-based. Darunsikkhalai School in Bangkok, Thailand, provides total project-based education for its students. Portfolios are used to assess students’ academic and non-academic development. Twelve students served as the sample for the present study. Their portfolios were assessed three times (third week, sixth week, and ninth week) during a nine-week project period. The results indicated significant improvement (p < .05) in both academic and non-academic outcomes, with generally larger gains in the academic outcomes.

Behavioral psychology was a dominant influence on education for many years (Druin & Solomon, 1996; Tullavantana, 2002). According to the behaviorist viewpoint, teachers are the disseminators of information and students are passive recipients of the knowledge that teachers impart (Hay & Barab, 2001; Tullavantana, 2002).

During the 1970s, the prominence of behavioral psychology declined, giving way to the rapid rise of cognitive psychology (Gosling & Craik, 1999). Cognitivism emphasizes the learning processes in the minds of students. One cognitive learning theory, constructivism (proposed by Jean Piaget), argues that knowledge is not transmitted from teachers to students but constructed by students themselves when they interact with their environment (Bjorklund, 1995; Guzdial, 1997; Stager, 2001).

Constructionism, another cognitive learning theory, was developed by Seymour Papert, professor of learning research at the Media Laboratory of the Massachusetts Institute of Technology (Guzdial, 1997). Constructionism goes a step further than constructivism (Druin & Solomon, 1996; Petcharuksa, 2001; Tullavantana, 2002). It asserts that students are particularly likely to form new ideas and construct knowledge when they engage in building and manipulating objects or making products by themselves (Guzdial, 1997; Hay & Barab, 2001; Papert, 1980, 1984, 1993, 1999; Stager, 2001). Thus, while constructivism defines learning as the building of knowledge inside one’s head, constructionism suggests that the best way to ensure that such intellectual structures form is through the active construction of something outside one’s head: something tangible that others can see, critique, and possibly use (Guzdial, 1997; Stager, 2001). Social skill development is another benefit of learning through constructionism. Papert (1993) indicated that a constructionist learning environment also allows students to show, discuss, examine, and collaboratively reflect on the cognitive artifacts or products that they have created. This is how their content-area knowledge, habits of mind, and social skills are developed (Hay & Barab, 2001; Stager, 2001).

In Thailand, there have been dramatic changes in education since the enactment of the National Education Act A.D. 1999. This act introduced a new teaching style, shifting from teacher-centered to student-centered instruction. As a result, many schools have provided a constructionist learning environment for their students (Petcharuksa, 2001; Tullavantana, 2002). Among these schools, Darunsikkhalai School is the only full-scale constructionist school in Thailand that provides a totally project-based learning environment.

Darunsikkhalai School is a relatively new school that has used constructionism since its inception. It was established in November 2000, three years after Seymour Papert came to Thailand to lead a series of workshops on constructionist learning. The school was founded by King Mongkut’s University of Technology Thonburi in collaboration with the Suksapattana Foundation, the Thaicom Foundation, and the Future of Learning Group at the Media Laboratory of the Massachusetts Institute of Technology (Darunsikkhalai School, 2004). At the time of the study, the school had 23 students, ages 6-11. Each student was free to choose any project in which he or she was interested. The project time was from 8:30 a.m. to 2:30 p.m. every weekday for nine weeks. During this period, students worked on the selected projects in separate groups. The students in each group worked closely together in making products, starting from making a working plan, searching for information from various sources (e.g., the Internet, books, and knowledgeable people), designing products, and creating the products as planned. They learned from one another and helped solve problems that arose. An example of a student group project was the Natural Product project, in which products such as fruit juice, herbal ice cream, and aroma candles were made. Thirteen teachers who had been trained at conferences and workshops on constructionism served as facilitators in the student group projects. Learning of basic subjects such as mathematics and the Thai language took place at intervals throughout the group projects, with content derived from the projects themselves. This learning and teaching process is quite different from that of a typical school, where approximately same-aged students learn different subjects in different class periods (subject-based as opposed to project-based).

To assess students’ learning under total project-based education or constructionism, traditional testing (e.g., true-false, multiple-choice, fill-in, short-answer, and essay tests) may not be the most appropriate approach. Traditional tests fail to allow students to demonstrate the multidimensional aspects of what they have learned (Cole, Ryan, Kick & Mathies, 2000). The portfolio, however, is potentially an authentic assessment tool for assessing student learning as applied in complex, real-world situations (Benson & Barnett, 1999). A portfolio reflects many types of student performance, including individual abilities and characteristics, as well as growth and progress as seen through created products or artifacts (Aschbacher, 1990; Birenbaum, 1996; Moonkum, 2000; Poowipadawat, 2001). Learning under constructionism should therefore be assessed through portfolios.

The objectives of the present study were:

(1) To use portfolios to assess the Darunsikkhalai School students’ academic (mathematics and Thai) outcomes.

(2) To use portfolios to assess the students’ non-academic (emotional development, adversity handling, technology usage, and moral development) outcomes.

(3) To compare the students’ academic and non-academic progress.

Method

Participants

The population consisted of 23 students (ages 6-11 years) and 13 teachers at Darunsikkhalai School. Twelve older students (seven females and five males, ages 9-11 years) and their six teachers constituted the sample for the present study. Older students were selected because of their higher level of literacy and maturity.

Instruments

In the present study, portfolios were used to assess the learning and development of the students. Since student portfolios were not in place when the present study began, the researcher (the first author) collaborated with the teachers in developing the process of organizing student portfolios for the school. Scoring rubrics for assessing and evaluating learning outcomes through portfolio contents were also created. Three areas of outcomes were evaluated:

1. Mathematical skills: These included calculation and problem solving skills (CAL), as well as data presentation and analytical skills (DAT). The range of possible scores was 3-12 each.

2. Thai language skills: These included listening skill (LIS), speaking skill (SPE), reading skill (REA), and writing skill (WRI). The range of possible scores was 3-12 each.

3. Four quotients: These were desirable personal qualities that the school conceptualized and believed to be consistent with modern Thai culture. They included emotional quotient (EQ), reflecting emotional development; adversity quotient (AQ), reflecting ability to handle adversity; technology quotient (TQ), reflecting ability to use technology; and moral quotient (MQ), reflecting moral development. The range of possible scores was 1-4 each.

Note that the first two areas were academic-focused while the third area was desirable-characteristics-focused.

An example of the scoring rubric for the “technology quotient (TQ)”: Level 1 = not able to use electronic devices in project work; Level 2 = able to use electronic devices in project work but always needs close supervision; Level 3 = able to use electronic devices in project work but needs advice some of the time; and Level 4 = able to use electronic devices with hardly any advice.
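As an illustration only, such a rubric can be represented as a simple lookup table for record-keeping. The brief Python sketch below restates the four published TQ levels; the names (TQ_RUBRIC, describe_tq_level) are hypothetical and were not part of the study.

# Illustrative sketch: the published TQ rubric restated as a lookup table.
# The names here are hypothetical, not taken from the study.
TQ_RUBRIC = {
    1: "Not able to use electronic devices in project work",
    2: "Able to use electronic devices but always needs close supervision",
    3: "Able to use electronic devices but needs advice some of the time",
    4: "Able to use electronic devices with hardly any advice",
}

def describe_tq_level(level):
    """Return the rubric description for a TQ rating of 1-4."""
    if level not in TQ_RUBRIC:
        raise ValueError("TQ level must be an integer from 1 to 4")
    return TQ_RUBRIC[level]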

Three raters (the student himself or herself, a peer, and the teacher) evaluated each student’s academic and non-academic outcomes through the portfolio contents. A student’s final outcome score was the weighted average of these three ratings. The student’s self-rating was weighted 32%, the peer rating 28%, and the teacher rating 40%. These weights were derived by averaging the weights proposed by the teachers in a brainstorming session with them. The interrater reliabilities for the scoring rubrics of mathematics, Thai, and the four quotients were 0.93, 0.85, and 0.92, respectively.
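As a concrete illustration of this weighting, the short Python sketch below combines one student’s self, peer, and teacher ratings on a single outcome using the weights reported above; the names and the example ratings are hypothetical.

# Minimal sketch of the weighted combination of self, peer, and teacher ratings.
# The weights are those reported in the study; the example ratings are hypothetical.
RATER_WEIGHTS = {"self": 0.32, "peer": 0.28, "teacher": 0.40}

def weighted_outcome_score(ratings):
    """Weighted average of one student's self, peer, and teacher ratings."""
    return sum(RATER_WEIGHTS[rater] * ratings[rater] for rater in RATER_WEIGHTS)

# Hypothetical example: TQ ratings of 3 (self), 2 (peer), and 3 (teacher)
# yield 0.32 * 3 + 0.28 * 2 + 0.40 * 3 = 2.72.
print(weighted_outcome_score({"self": 3, "peer": 2, "teacher": 3}))  # 2.72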

The present study did not employ an available standardized test of emotional quotient (EQ), such as the Emotional Quotient Inventory (EQ-i) by Bar-On (1997), because such an instrument was constructed in a different socio-cultural context and therefore may not be appropriate for the Thai culture. Instead, the researcher collaborated with the teachers at Darunsikkhalai School in constructing their own scoring rubrics to assess the emotional quotient of the students, together with the other quotients (i.e., adversity quotient, technology quotient, and moral quotient). Another reason for the decision not to use Bar-On’s inventory was that it is a self-report-only measure, whereas the present study’s evaluation was based on three sources: self, peer, and teacher.

Procedure

The researcher worked with the teachers in explaining the portfolio process to the students. The portfolio process consisted of eight steps: (a) planning the portfolio, (b) collecting created products, (c) selecting satisfactory products, (d) evaluating the products, (e) revising the products, (f) integrating knowledge acquired from making the products, (g) evaluating the portfolio, and (h) presenting the portfolio to parents. The students created their own portfolios by following each step outlined above, with the assistance of their teachers, throughout the nine-week project period. The products in their individual portfolios were evaluated three times per project period (once every three weeks) based on the scoring rubrics. The rating scores of student outcomes at the three points in time were analyzed using one-way repeated-measures ANOVA to determine whether significant improvement occurred. Effect size analysis was also employed to compare academic and non-academic progress.
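For readers who wish to reproduce this kind of analysis, the following is a minimal computational sketch of a one-way repeated-measures ANOVA (Python with NumPy and SciPy). The data are hypothetical and the code is a standard textbook formulation, not the analysis software actually used in the study.

import numpy as np
from scipy.stats import f

def repeated_measures_anova(scores):
    """One-way repeated-measures ANOVA for a (subjects x occasions) score array."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape                                 # subjects, time points
    grand_mean = scores.mean()
    ss_time = n * np.sum((scores.mean(axis=0) - grand_mean) ** 2)
    ss_subj = k * np.sum((scores.mean(axis=1) - grand_mean) ** 2)
    ss_error = np.sum((scores - grand_mean) ** 2) - ss_time - ss_subj
    df_time, df_error = k - 1, (n - 1) * (k - 1)
    f_value = (ss_time / df_time) / (ss_error / df_error)
    p_value = f.sf(f_value, df_time, df_error)          # upper-tail probability
    return f_value, p_value

# Hypothetical data: 12 students assessed at 3 points on an outcome scored 3-12.
rng = np.random.default_rng(0)
scores = np.clip(np.cumsum(rng.integers(3, 7, size=(12, 3)), axis=1), 3, 12)
print(repeated_measures_anova(scores))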

Results

The analysis of student academic outcomes (mathematics and Thai language) and non-academic outcomes (the four quotients) at three different points in time was performed using one-way repeated-measures ANOVA. The means, standard deviations, and F-statistics are shown in Table 1. All F values were significant (p < .05). Bonferroni pair-wise comparisons, however, showed no significant increase (p > .05) in three (SPE, EQ, and TQ) of the 10 outcomes during the time 1-2 period. During the time 2-3 period, by contrast, significant increases (p < .05) occurred in the great majority of the outcomes.
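One common way to carry out Bonferroni-adjusted pair-wise follow-up comparisons such as these is with paired t-tests between adjacent assessment occasions at an adjusted alpha. The Python sketch below illustrates this approach; the helper name and data are hypothetical, and this is not necessarily the exact procedure used in the study.

import numpy as np
from scipy.stats import ttest_rel

def bonferroni_pairwise(scores, alpha=0.05):
    """Paired t-tests for time 1 vs. 2 and time 2 vs. 3 with Bonferroni correction."""
    scores = np.asarray(scores, dtype=float)
    pairs = [(0, 1), (1, 2)]
    adjusted_alpha = alpha / len(pairs)     # 0.025 when two comparisons are made
    results = {}
    for earlier, later in pairs:
        test = ttest_rel(scores[:, later], scores[:, earlier])
        results["time %d vs. %d" % (earlier + 1, later + 1)] = (
            test.statistic, test.pvalue, test.pvalue < adjusted_alpha)
    return results

# Hypothetical 12 x 3 score array (students x assessment occasions).
rng = np.random.default_rng(1)
scores = np.clip(np.cumsum(rng.integers(3, 7, size=(12, 3)), axis=1), 3, 12)
print(bonferroni_pairwise(scores))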

Since an academic outcome had a possible score range of 3-12 while a non-academic outcome had a possible score range of 1-4, comparing their differential gains required the use of a common standardized index such as the “effect size” (ES). An effect size (Glass, McGaw & Smith, 1981) is defined as the mean of the experimental group minus the mean of the control group, divided by the standard deviation of the control group (Light & Pillemer, 1984). In a one-group pre-test post-test design such as the present study, the same group serves as its own control. Therefore, an effect size is computed by subtracting the pre-test mean from the post-test mean and dividing the result by the pre-test standard deviation. The calculated effect sizes are also shown in Table 2. Note that the effect sizes were generally larger for the academic gains than for the desirable-characteristics (non-academic) gains, and larger in the time 2-3 period of measurement than in the time 1-2 period. The average effect size of gain for mathematics and Thai combined was 0.77 in the 1-2 period and 1.14 in the 2-3 period. The average effect size of gain for the four desirable characteristics was 0.44 in the 1-2 period and 0.92 in the 2-3 period.
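In symbols, the index used here is ES = (M_post - M_pre) / SD_pre, where M_pre and SD_pre are the mean and standard deviation at the earlier assessment and M_post is the mean at the later assessment. As a purely hypothetical illustration (not a value from Table 2), a mathematics mean that rose from 6.0 (SD = 1.5) at one assessment to 7.5 at the next would yield ES = (7.5 - 6.0) / 1.5 = 1.0.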

Discussion

The present study found that the raw score means in all aspects of learning increased from time 1 to time 2 and from time 2 to time 3. Pair-wise analysis (time 1 vs. 2, time 2 vs. 3) indicated significant differences in a great majority of the pairs. It is interesting to observe that the Bonferroni tests indicated more significant differences at the time 2-3 assessment than at the time 1-2 assessment. Moreover, the effect sizes of the gains were also generally larger in the time 2-3 period than in the time 1-2 period. This might indicate that (a) teachers became more effective towards the end of the students’ project-based learning, or (b) students had accumulated more products towards the end of the project and were therefore more likely to select better products to be evaluated.

When comparing academic and non-academic gains, it was found that the effect sizes for academic outcomes (mathematics and Thai combined) were higher than those for desirable characteristics (the four quotients combined) in both the first and the second time periods. It seems that gain in academic achievement is easier to obtain than gain in non-academic desirable personal characteristics. This is consistent with a number of other research studies of a similar nature. For example, Bereiter and Engelmann (1966, 1968) found that pre-post gains on tests with clearly identifiable content, such as language, reading, and arithmetic, were larger than gains on standardized IQ tests. This suggests that personality is more difficult to change than academic achievement.

Future Studies

Darunsikkhalai School is currently experimenting with constructionist or project-based learning. The true effect of this type of learning is difficult to assess without an appropriate “control” group. Ideally, we would need a comparable group of students at this school who could be randomly assigned to a control condition using a “traditional or typical” instructional approach. Since that is not possible, the next best thing may be to compare the achievement of students at Darunsikkhalai School with that of students at another school who have similar backgrounds. Such a comparison may be forthcoming. Currently, all students in Thailand are required to take a national test to have their academic achievement assessed. A comparison of national test scores between Darunsikkhalai students and other Thai students from schools of similar demographics is being planned.

References

Aschbacher, P. (1990). Performance assessment: State activity, interest and concerns. Applied Measurement in Education, 3(4), 275-288.

Bar-On, R. (1997). The Bar-On Emotional Quotient Inventory (EQ-i): A test of emotional intelligence. Toronto, Canada: Multi-Health System.

Benson, B., & Barnett, S. (1999). Student-led conferencing: Using showcase portfolios. Thousand Oaks, CA: Corwin Press.

Bereiter, C., & Engelmann, S. (1966). Teaching disadvantaged children in the preschool. Englewood Cliffs, NJ: Prentice Hall.

Bereiter, C., & Engelmann, S. (1968). An academically oriented preschool for disadvantaged children: Results from the initial experimental group. In D. W. Brison & J. Hill (Eds.), Psychology and early childhood education (no. 4, pp. 17-36). Ontario, Canada: Ontario Institute for Studies in Education.

Birenbaum, M. (1996). Assessment 2000: Towards a pluralistic approach to assessment. In M. Birenbaum & F. Dochy (Eds.), Alternatives in assessment of achievements, learning processes and prior knowledge (pp. 319-340). Boston, MA: Kluwer Academic Publishers.

Bjorklund, D. F. (1995). Children’s thinking: Developmental function and individual differences (2nd ed.). Georgia: International Thompson Publishing Company.

Cole, D. J., Ryan, C. W., Kick, F., & Mathies, B. K. (2000). Portfolios across the curriculum and beyond. Thousand Oaks, CA: Sage Publications.

Darunsikkhalai School (2004). Darunsikkhalai: School for innovative learning. Retrieved October 25, 2004, from http://e-school.kmutt.ac.th/

Druin, A., & Solomon, C. (1996). Designing multimedia environments for children: Computers, creativity, and kids. New York: John Wiley & Sons.

Glass, G. V., McGaw, B., & Smith, M. L. (1981). Meta-analysis of social research. Beverly Hills, CA: Sage.

Gosling, S. D., & Craik, K. H. (1999). An empirical analysis of trends in psychology. American Psychologist, 54(1), 117-128.

Guzdial, M. (1997). Constructivism vs. Constructionism. Retrieved October 1, 2004, from http://www.guzdial.cc.gatech.edu/commentary/construct.html

Hay, K. E., & Barab, S.A. (2001). Constructivism in practice: A comparison and contrast of apprenticeship and constructionist learning environments. Journal of the Learning Sciences, 10(3), 281-322.

Light, R. J., & Pillemer, D. B. (1984). Summing up: The science of reviewing research. Cambridge, MA: Harvard University Press.

Moonkum, S. (2000). Portfolio (13th ed.). Bangkok, Thailand: Parppim Publishing.

Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York: Basic Books.

Papert, S. (1984). New theories for new learning. School Psychology Review, 13(4), 422-428.

Papert, S. (1993). The children’s machine: Rethinking school in the age of the computer. New York: Basic Books.

Papert, S. (1999). What is Constructionism? Retrieved December 21, 2001, from http://lynx.dac.neu.edu/k/krudwall/constructioism.htm

Petcharuksa, S. (2001). Constructionism in Thailand. Bangkok, Thailand: Office of National Education Commission.

Poowipadawat, S. (2001). Child-centered learning and authentic assessment (2nd ed.). Chiangmai, Thailand: Knowledge Press.

Stager, G. S. (2001). Constructionism as a high-tech intervention strategy for at-risk learners. Paper presented at the National Educational Computing Conference, Chicago, IL, July 2001. (ERIC Document Reproduction Service No. ED 462 959)

Tabachnick, B. G., & Fidell, L. S. (1996). Using multivariate statistics (3rd ed.). New York: HarperCollins College Publishers.

Tullavantana, R. (2002). The strategic development of organizing instructional system based on constructionism of Thai higher education institution. Unpublished doctoral dissertation, Chulalongkorn University, Bangkok, Thailand.

KAMONWAN TANGDHANAKANOND

SOMWUNG PITIYANUWAT

Chulalongkorn University

TEARA ARCHWAMETY

University of Nebraska at Kearney

Please send all paper correspondence to:

Kamonwan Tangdhanakanond

C/O Dr. Teara Archwamety

Dept. of Counseling and School Psychology

College of Education

University of Nebraska at Kearney

Kearney, NE 68849

Please direct all e-mail messages to:

tkamonwan@hotmail.com, archwametyt@unk.edu
