Fuzzy logic, neural networks, genetic algorithms: Views of three artificial intelligence concepts used in modeling scientific systems

Sunal, Cynthia Szymanski

Students’ conceptions of three major artificial intelligence concepts used in the modeling of systems in science (fuzzy logic, neural networks, and genetic algorithms) were investigated before and after a higher education science course. Students initially explored their prior ideas related to the three concepts through active tasks. Then, laboratories, project work, computer modeling of scientific systems, and cooperative group work were used to help students construct key characteristics of each concept. Finally, they applied each concept in contexts different from those in which it had previously been studied. In postcourse interviews using a set of scenarios for each of the major course concepts, 49% of students’ applications included key characteristics of the concepts studied, versus 5% in precourse interviews. Although more frequent, students’ post interview applications were inconsistent, indicating a state of transition in their conceptual change. Applications were most consistent with scenarios deemed very familiar to the students, indicating the effects of context on conceptual change.

Scientists often model natural systems as a means of better understanding them. A Kansas prairie, for example, might be modeled to better understand the effects of prairie dogs on its plant life. Students can model systems as they study them and use modeling to address scientific problems. A system is “a collection of things and processes (and often people) that interact to perform some function” (American Association for the Advancement of Science [AAAS], 1993, p. 262). This unifying concept frames science content in the National Science Education Standards (National Research Council, 1996).

Scientists today frequently specify systems quantitatively, studying their theoretical behavior through computer simulations built on mathematical models. These rule-based programs use traditional if-then rules as the algorithmic device for modeling and have been used to solve difficult problems in chemistry, medicine, geology, and biology, among other fields (Goldberg, 1989). The public has seen the modeling of natural systems with computers by real scientists investigating the composition of a tornado and by the fictional scientists who built, and interact with, the android character “Data” in episodes of Star Trek: The Next Generation. Today’s students conceptualize components of artificial intelligence (AI) through their own experiences with computers and the media. Artificial intelligence is the capability of a device to perform functions normally associated with human intelligence, such as reasoning and optimization through experience. Such capability enables scientific systems to be modeled in order to assist in solving scientific problems.

Students have encountered basic concepts used in modeling systems in science and engineering through artificial intelligence (Thoresen, 1993). Such modeling can help students become aware of the types of scientific problems routinely solved using artificial intelligence techniques. This study was designed to document changes in the conceptions undergraduate elementary teacher education majors express about three key concepts used in modeling scientific systems through artificial intelligence techniques: fuzzy logic, neural networks, and genetic algorithms.

Fuzzy logic systems provide a mechanism by which subjective concepts are incorporated into if-then rules, producing a rule base akin to the “rule of thumb” approach generally used in human decision making. Fuzzy logic considers more than binary, either-or choices; decisions can be made within a range. For example, at 32 °C a person may feel hot. Does that mean that at 31.99 °C the person is not “hot”? There is a range over which a person is likely to feel hot. A fuzzy logic system recognizes that range and considers related factors such as humidity. A person may turn the air conditioner on at 27 °C one day but not turn it on the next day until the temperature reaches 29 °C, because the humidity is lower. Fuzzy logic operates over the range 0.0-1.0, with an infinite set of values. The key characteristic of fuzzy logic is the existence of a range rather than Boolean either-or values of 0.0 and 1.0. Conventional computer programs are based on binary, either-or logic; when scientists recognize that a system displays a range, they can adapt their programming to attempt to model that range (Yager, 1987).
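As an illustration of the range idea, the following minimal sketch (written in Python and not part of the course materials) defines a hypothetical membership function for “hot.” The anchor temperatures of 25 °C and 32 °C and the function name are illustrative assumptions, not values drawn from the course.

```python
# Hypothetical sketch of a fuzzy membership function for "hot".
# The anchor temperatures are illustrative assumptions, not course data.

def hot_membership(temp_c, low=25.0, high=32.0):
    """Return a degree of membership in "hot" over the range 0.0-1.0."""
    if temp_c <= low:
        return 0.0                      # clearly not hot
    if temp_c >= high:
        return 1.0                      # clearly hot
    # Linear ramp between the anchors: partial membership rather than yes/no
    return (temp_c - low) / (high - low)

# A Boolean rule flips abruptly at a single threshold; the fuzzy version
# treats 31.99 deg C as nearly as "hot" as 32 deg C instead of as "not hot".
for t in (24.0, 28.0, 31.99, 33.0):
    print(t, round(hot_membership(t), 2))
```

A fuzzy rule base combines several such membership functions (for temperature, humidity, and so on) rather than a single crisp threshold.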

Neural networks are loose models of the human brain that are “trained,” not programmed, with specific instructions for accomplishing a task. Like humans, neural networks learn to solve a problem by being shown numerous examples from which they are able to conceptualize. The neural network searches for similarity among the examples, using criteria to establish relevant similarities. The biological neuron serves as the model for the neural network. There are four basic components of a biological neuron: dendrites, soma, axon, and synapses. A biological neuron receives inputs from other sources (dendrites), combines them in some way (soma), performs a generally nonlinear operation on the result (axon), and then outputs the final result (synapses). When scientists construct neural networks, they make decisions about (a) the arrangement of artificial neurons in layers, (b) the types of connections among neurons in different layers as well as within a layer, (c) the way a neuron receives input and produces output, and (d) the strengths of connections within the network, which are set by allowing the network to learn appropriate connection weights from a training data set. The most common use of neural networks is to predict what will most likely happen. Among applications being developed or used today are bomb detectors installed in airports, which determine the presence of certain compounds from the chemical configurations of their components, and the identification of people at risk for a specific cancer. The key characteristics of neural networks are their loose modeling of the human brain, training for a specific task, and ability to learn to solve a problem.
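The following minimal sketch illustrates the training idea with a single artificial neuron that adjusts its connection weights from labeled examples. The toy task (logical OR), the learning rate, and the number of training passes are illustrative assumptions and are far simpler than the networks students work with in the course.

```python
import math
import random

# Sketch of "training, not programming": one artificial neuron
# (weights = connection strengths, sigmoid = nonlinear axon-like operation)
# learns a toy task from labeled examples.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training set: inputs (x1, x2) and target output (here, logical OR).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(1)
w = [random.uniform(-1, 1) for _ in range(2)]   # connection weights
b = random.uniform(-1, 1)                       # bias term
learning_rate = 0.5

for epoch in range(2000):                       # repeated exposure to examples
    for (x1, x2), target in examples:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        error = target - out
        grad = error * out * (1 - out)          # derivative of the sigmoid
        w[0] += learning_rate * grad * x1       # strengthen or weaken connections
        w[1] += learning_rate * grad * x2
        b += learning_rate * grad

for (x1, x2), target in examples:
    print((x1, x2), round(sigmoid(w[0] * x1 + w[1] * x2 + b), 2), "target:", target)
```

After training, the network reproduces the examples it was shown and can respond to inputs it was never explicitly programmed to handle; the same adjust-the-weights loop, scaled up to many neurons and layers, underlies the prediction applications described above.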

Genetic algorithms are search algorithms based on the mechanics by which natural organisms change and adapt over millions of years. They improve over time, working toward optimal solutions. Genetic algorithms learn from experience, discarding characteristics that do not facilitate an optimal solution. At a later time, a characteristic may be reintroduced into a system if it appears that it could now optimize a situation. Thus, genetic algorithms evolve through generations, working to drop out less favorable characteristics in order to achieve a model of the optimal solution or setting. Often, genetic algorithms are applied to search spaces, comprising all possible solutions to the problem at hand, that are too large to be exhaustively searched. The standard genetic algorithm generates an initial population of individuals. At every evolutionary step, or generation, the individuals are decoded and evaluated according to some predefined quality criterion, the fitness function. To form a new population (the next generation), individuals are selected according to their fitness. The introduction of new individuals into the population occurs through genetically inspired operators, crossover and mutation. Crossover is performed by exchanging parts of the encodings, or genomes, of two individuals to form new individuals. Mutation randomly samples new points in the search space by flipping bits at random with some small probability. Genetic algorithms are widely applied today to problems in fields as diverse as ecology, population genetics, and social systems. The key characteristics of genetic algorithms are that they evolve to solve problems, they improve over time by searching for optimal solutions, and they drop out unfavorable characteristics.
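The sketch below walks through one version of the standard loop just described: selection by fitness, crossover of genomes, and low-probability bit-flip mutation. The population size, genome length, and the bit-counting fitness function are illustrative assumptions chosen for brevity, not a model of any particular scientific system used in the course.

```python
import random

# Illustrative sketch of the standard genetic algorithm: a population of
# bit-string "genomes" evolves toward an optimum. The fitness function here
# (count of 1-bits) is a stand-in chosen only for brevity.

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 40, 0.02

def fitness(genome):
    return sum(genome)                       # predefined quality criterion

def select(population):
    # Tournament selection: fitter individuals are more likely to reproduce.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(parent1, parent2):
    # Exchange parts of two genomes at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return parent1[:cut] + parent2[cut:]

def mutate(genome):
    # Flip bits at random with small probability, sampling new search points.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best fitness after evolution:", fitness(best), "of", GENOME_LEN)
```

Because selection favors fitter genomes, unfavorable bit patterns tend to drop out of the population over the generations, while mutation keeps open the possibility of reintroducing characteristics lost earlier.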

Each of these three concepts is investigated in a problem-based, higher education course. Students are introduced to artificial intelligence systems in science through the computer modeling of a progression of natural systems. The course focuses on two main ideas, systems in science and the modeling of such systems.

This study investigated change in students’ understanding of the three major concepts of fuzzy logic, neural networks, and genetic algorithms prior to and after completion of the course. Watson, Prieto, and Dillon (1997) contended that conceptual change is marked by a state of transition. Accommodation does not occur across all contexts at the same time; it can be expected to occur first with the most familiar events, those with which students have the most experience. Earlier, White (1991) described concepts as continuously developing over many dimensions, a process that, in theory, is never ending. Others have noted that conceptual change includes elements such as developing precision in using relevant language, replacing aspects of old ideas with aspects of the new, incorporating the new concept, and sometimes retaining aspects of both the old and the new simultaneously (Garnett, Garnett, & Hackling, 1995; Watson et al., 1997). Bliss (1995) has described the effects of context on conceptual change. Since concepts develop over many dimensions and such development is marked by a state of transition, conceptual change has been explained as a dynamic process occurring over a period of time (Tyson, Venville, Harrison, & Treagust, 1997). This dynamic process takes into consideration the preinstructional conceptions of the student, the science content, and the path between them as the student constructs learning.

The concepts investigated in this study are process concepts used to explain how an event is structured (Chi, Slotta, & deLeeuw, 1994). Conceptual change can be expected to be lengthier and more often inconsistently applied with process concepts because they are constraint-based interactions. Since the research literature indicates that conceptual change is not quickly accomplished, the research question considered was the following: To what extent is there evidence of conceptual change in students’ explanations of three artificial intelligence concepts used in modeling scientific systems (fuzzy logic, neural networks, and genetic algorithms) before and after participation in an active investigation of these concepts within a higher education course?

Methods

Students’ conceptions of fuzzy logic, neural networks, and genetic algorithms as used in modeling systems were elicited by asking them to consider everyday situations during which an individual comes into contact with one of these concepts through its incorporation into an object or event. These situations were presented as scenarios ending with a question to be considered. The scenarios were not overt science problem situations. The National Science Education Standards stress science as being an everyday, relevant part of students’ lives, not just what occurs in a laboratory (National Research Council, 1996). Therefore, the study’s scenarios incorporated the concepts investigated within the context of experiences not considered “scientific” in an academic sense; for example, using an automatic teller machine (ATM) to withdraw money from a bank.

Pilot Study

A pilot study investigated the use of 20 scenarios (seven for genetic algorithms, seven for neural networks, and six for fuzzy logic) with 18 students not enrolled in the course. Each scenario’s content validity was established by having a panel of three AI researchers examine the scenarios and determine whether they could be discussed in terms of the key characteristics identified for each of the three major course concepts previously described. Based on these students’ responses, 11 scenarios were selected for this study: two scenarios on fuzzy logic systems, five on neural networks, and four on genetic algorithms (see appendix). Criteria for selection were students’ (a) ability to discuss the scenario, posing a rationale for the process by which the scenario’s event operates, even when unable to appropriately explain underlying concepts, (b) affirmation of familiarity with the context of the scenario, (c) description of the scenario as demonstrating scientific concepts, and (d) ability to make suggestions regarding which scientific concepts might be involved in the scenario even if these were not appropriate to the event. Of the 20 scenarios piloted, the responses of the 18 students met all of the criteria for 11 of the scenarios.

Subjects

Twelve students initially were chosen randomly from two sections of the course taught by the same instructor. One student declined to participate in the study, so 11 were interviewed by a graduate student at the beginning of the course and again at its completion. The interviewees were 20-40 years old, with an average age of 24, and included 2 males and 9 females. The students were elementary education majors in their sophomore year, and this was their third four-semester-credit-hour science course. The course prerequisite expectation was that one prior course had been completed with a grade of “C” or better in biological science and one in a physical science area. Another prerequisite expectation was the completion of at least nine credit hours of mathematics, including, at a minimum, successfully passing a course in college algebra or precalculus. The course’s work and the data collection were designed using these minimum criteria.

Course Description

The course emphasizes the process of doing science, building an awareness of science as more than a body of information conveyed by text and lecture. Students model natural systems using AI techniques to solve scientific problems, discussing applications of the AI techniques in scientific disciplines. The relationship between natural systems and AI systems is demonstrated, as is the role of modeling systems in modern science and engineering. The main course goals are to (a) develop understanding of how systems in nature can be modeled, (b) increase awareness of the types of problems routinely solved in the natural sciences using modeling, (c) introduce modeling using AI techniques, and (d) provide opportunities to solve scientific problems in a laboratory setting through the modeling of systems.

This course introduces modern technological tools to students who may have weak mathematical backgrounds, so students do not program the AI techniques themselves. Instead, students encounter systems through a variety of approaches, including designing and troubleshooting. To set up such encounters, the course is problem based, incorporating inquiry teaching methods using the learning cycle (Driver, 1986; Karplus, 1977; Marek & Cavallo, 1997; Renner, 1982). Instruction incorporates laboratory investigation, consistent use of computer technology as a tool for modeling the scientific systems investigated, cooperative groups, performance assessment, and an orientation to facilitating students’ conceptual change. Students first explore their prior knowledge of a system in nature through experiences with particular aspects of the natural system to be modeled. Next, students see how the systems under study can be computer modeled using an AI technique. Then, students use the resulting computer models to solve problems related to the natural sciences. They work with each of the three major course concepts over 12 hours of class time and during associated project work. There are nine major components in the course:

1. Introduction to technological tools, acquiring and turning in assignments via e-mail, and using the Internet and a course web page to obtain copies of course lecture material and to locate and acquire computer software implementing the AI techniques introduced.

2. Introduction to the basic theory of systems, developing personal definitions of a system, and comparing and contrasting them with standard definitions. In cooperative group activities students explore relationships between various human systems and develop a preliminary working model identifying those systems necessary for intelligence.

3. Overview of systems in science simulated by AI techniques through working with examples from geology, chemistry, physics, astronomy, and biology.

4. Use of tree diagrams by students to study the hierarchical structure by which animals are classified.

5. Drawing on personal experience in classifying animals to develop a set of rules for animal classification and discussing methods for modeling these rules (a simplified rule base of this kind is sketched after this list). Students expand this idea by working with another system using Induce-It (Inductive Solutions, Inc., 1992) software, in which an expert system successfully classifies mineral samples. Additional laboratories work with modeling rules for systems from chemistry, geology, and engineering.

6. Manipulating scientific concepts to solve complex problems using fuzzy logic with a focus on modeling systems. One laboratory works with a titration problem in a computer-simulated environment in which students use an imperfect pH sensor and identify the need for subjective descriptions as they develop rules for neutralizing a solution. They also examine the applicability of fuzzy logic using systems developed in chemistry, physics, and engineering.

7. Modeling the human brain as a system using neural networks. Students describe their impression of how the human brain works, diagramming how they think the brain makes a decision. Then, they develop an initial concept of a neural network and how it works using an Excel add-on spreadsheet interface, VB BackProp. This interface enables them to use a neural network to learn the functional relationship governing planetary dynamics without having to do all the math. Then, they implement a neural network to solve another problem, either determining the decay rate of nuclear material or the growth rate of a virus introduced into a population of organisms. Finally, they expand their ideas through investigating examples of neural network applications in chemistry, geology, astronomy, and engineering.

8. Modeling the human genetic system using genetic algorithms. Students use a genetic algorithm software package run in a spreadsheet, Premium Solver Platform Version 3.5 (Frontline Systems, 1993). They see the powerful search capabilities occurring in natural systems, solving one of several scientific problems, such as performing spectral analysis of chemical compounds. They then examine other systems in which problems can be solved using genetic algorithms.

9. Completing a course project demonstrating a fundamental understanding of one of the course topics. Students describe a system in science, a problem related to the system that has not been presented in class, and an AI technique that might be used to solve a problem related to the system.
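To illustrate the kind of if-then rule base students draft in component 5, the following sketch classifies animals with a few crisp rules. The attributes and rules are hypothetical simplifications introduced here for illustration; they are not the Induce-It expert system or the rule sets students actually produce.

```python
# Hypothetical if-then rule base of the kind students draft in component 5
# (simplified; not the actual Induce-It expert system or its rules).

def classify_animal(has_feathers, has_fur, lives_in_water, lays_eggs):
    if has_feathers:
        return "bird"
    if has_fur:
        return "mammal"
    if lives_in_water and lays_eggs:
        return "fish (or amphibian)"        # a rule students would refine
    return "unclassified: more rules needed"

print(classify_animal(has_feathers=True, has_fur=False,
                      lives_in_water=False, lays_eggs=True))   # bird
print(classify_animal(has_feathers=False, has_fur=False,
                      lives_in_water=True, lays_eggs=True))    # fish (or amphibian)
```

Rules of this either-or kind are the contrast point for the fuzzy, range-based rules students develop in component 6.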

Interviews

Each student was interviewed using the scenarios during an audiotaped, individually scheduled interview averaging 70 minutes in length. Precourse interviews occurred during the first week of classes in the semester, prior to any content coverage in the course. Postcourse interviews occurred within 3 days following the course’s completion. Students were probed for further explanation, including a real-life example similar to that in the scenario, and additional comments were welcomed.

Pre and post interview comments were matched. Evidence statements relating to the key characteristic(s) of each of the three major concepts were identified (Posner, Strike, Hewson, & Gertzog, 1982; Sadler, 1998). Two raters analyzed each interview with an interrater reliability of .89. The raters were trained using transcripts of pilot study interviews, for which an interrater reliability of .90 was achieved. When examining Scenario 1, which dealt with the concept of fuzzy logic as used in a camera photographing a user of an ATM, for example, Student 3 said, “The camera only takes a picture when it focuses. It is either on or off.” This response did not incorporate the key characteristic of fuzzy logic, a range of values. In the post interview, Student 3 said,

The camera is always on and it is programmed to sense movement all across a wide range from side to side and also across a range in depth. So it combines width and depth into a range of options and photographs anything that moves in those ranges, that kind of cuts across from one place in the ranges to another.

This post interview was scored as incorporating the key characteristic, a range of values.

Interview responses for the concept of neural networks were examined to identify the presence of three key characteristics: loose modeling of the human brain, training for a specific task, and ability to learn to solve a problem. In the pre interview for Scenario 3, in which interviewees described how a vending machine determines the amount of money deposited by a user, Student 6 said, “Coins roll in and drop into slots that accept their size. The machine gives a number to each slot and adds up all the times the slot is `hit.’ So if it equals the price for the item you want, it drops it for you to get.” This response did not contain any of the key characteristics of neural networks. In the post interview, Student 6 gave the following response containing all three key characteristics.

The machine has a hard problem because there are a lot of combinations of coins and bills. So it is programmed to recognize different coins and bills and combine them in a lot of ways [key characteristics: training for a specific task]. It is also programmed to solve a new problem like a mix of coins with, say, a silver dollar, it did not come across before. Maybe it searches for numerals or wording on the silver dollar to identify it [key characteristic: ability to learn to solve a new problem]. Then, it adds this information to what it knows so it can deal fast with it next time. If any combination is what the cost is for what you want, it drops it for you to get. It kind of thinks like we do, you meet a new problem and you try to find a way to solve it using some stuff you know but in a different way [key characteristic: loose modeling of the human brain].

For the major concept of genetic algorithms, the responses were examined to identify whether they contained the key characteristics of ability to evolve to solve problems, improve over time through searching for optimal solutions, and drop out unfavorable characteristics. Using Scenario 11 dealing with educational games’ adaptability to their users, in the pre interview Student 4 said, “The game has a training set it puts the child through so it can figure out what is easy to average for the child. Then, it gives the child only the things that are easy to average.” None of the key characteristics of genetic algorithms can be identified in this response. In the post interview, Student 4 said,

The game runs through a training set and figures out what’s easy or average or hard for the child. Then, it give the child average or sort-of hard stuff. When the child gets good at the game, it changes it so it is sort-of hard for the child again and it just keeps changing to keep the child interested [key characteristic: ability to evolve to solve problems]. When the child just can’t do something, it gives up after a few tries and does something else that could work [key characteristics: improve over time through searching for optimal solutions and drop out unfavorable characteristics]. If the child gets so he maybe can do it, the thing that was too hard, the machine tries it again.

Results

The interview results are presented in three sections in relation to the concepts investigated: fuzzy logic, neural networks, and genetic algorithms.

Fuzzy Logic Systems

The key characteristic of fuzzy logic examined in the interviews was whether the interviewees recognized a range of values governing the operation of the automatic video camera in Scenario 1 or a “point-and-click” camera in Scenario 2.

Scenario 1 dealt with how an automatic video camera lens turns to record someone making an ATM transaction. During pre interviews, all students said they thought the camera’s lens was programmed to respond to the presence of an individual at some distance from the camera but could not explain the programming. All the responses focused on a specific distance, using binary “either it is or it is not” explanations. Student 1 said, “A light sensor moves the camera whenever a shadow crosses it at a certain distance, recording the object creating the shadow.” The other interviewees said the camera had a distance focus mechanism and gave responses similar to Student 2’s: “When an object moves to a specific distance, it focuses on that object, recording it.” In post interviews, responses incorporating fuzzy logic were received from four students (36%). These four students described a range of values governing the operation of the camera. Student 4’s response is representative: “The camera responds to a range of movement, not to just a movement in a special place it is pointed at. It will notice if you come in from the side or the front or if you move fast or slow, as long as you are in a range in its view.”

Scenario 2 dealt with the use of an automated personal camera requiring the user only to point and click. In the pre interviews, two students said the camera was programmed to focus at a set distance, while the other students were unable to offer any explanation.

In the post interview, all expressed some ideas about how the camera worked and, of these, 2 (18%) appropriately incorporated the use of fuzzy logic, in which a range of values is considered. These 2 students identified distance of the object photographed and the light available as two major factors, with multiple interactions creating a range of values the camera must consider.

Neural Networks

The key characteristics considered in students’ responses to Scenarios 3-7 were neural networks’ loose modeling of the human brain, training for a specific task, and ability to learn to solve a problem. Scenario 3 responses will be described in some depth to demonstrate how responses were evaluated.

Scenario 3 asked how a vending machine determines the amount of money deposited by a user. During the pre interview Student 4 said, “I have no idea.” Student 5 said, “The machine scans a bill and enters its number in memory. If coins are used, it senses the diameter and mass of each coin and enters a value based on these.” The rest of the students based an explanation on the coins deposited, mentioning either the size or weight of the coins. Students 7 and 1 gave additional detail, saying the coins go through slots individualized for different sizes of coins. In each instance, they stated that the machine used either size or weight to determine what money was deposited and then it “knew” what change to return and whether to drop the potato chips the user was trying to purchase into a dispenser tray. None of the pre interview responses included appropriate description of any of the three characteristics of neural networks.

In the post interview, Student 5, who had suggested how the machine recognized the values of bills and coins, talked only about coins, saying, “Their weight is used and added up to determine a total deposited. From this value, the machine figures out how much money to give in change.” Student 4, who was unable to give any explanation in the pre interview, said, “The vending machine has a built-in calculator programmed to know how much change is deposited but I don’t know how this works.” Student 7 repeated a pre interview explanation identifying the size of the coins as the relevant variable: “The machine is programmed to match the size with a value. Then, it adds up the values. Next, it subtracts this sum from the required amount of money needed to purchase the item and returns it in change.” The other eight interviewees described the machine’s use of if-then rules. Student 8 gave a representative response incorporating all three key characteristics of neural networks, saying,

The machine uses if-then rules in a filter system comparing the coins deposited to known coins. Then, it decides what was deposited and compares this sum to the required amount. It works kind of like people’s brains work [key characteristic: loose modeling of the human brain]. We figure that if we have this, then that will work out to be this option but if we have something else then it means this other option will work out. The machine is trained to put together several options, not just limiting itself to a couple of variations [key characteristic: training for a specific task]. The machine can learn to solve a new problem like, say, it gets a bunch of pennies and never had all pennies. Its training sets it up so it can solve this new problem kind of like how people meet a new problem and so we have to figure it out from what we know about solving similar problems [key characteristic: ability to learn to solve a new problem].

The interviewees’ responses to Scenarios 3 through 7 are summarized in Table 1. As noted in Table 1, no students included all three key characteristics in their pre interview responses to Scenario 3, while 8 (73%) were able to do so in their post interviews. One student (9%) included all three key characteristics for Scenario 4 in the pre interview, while 8 (73%) did so in the post interview. In responding to Scenario 5, 1 student (9%) included the three key characteristics in the pre interview, while 2 (18%) did so in the post interview. Three students (27%) included the three key characteristics in their Scenario 6 pre interviews, while 8 (73%) did so in the post interviews. None of the pre interviews included all three key characteristics of neural networks in responses to Scenario 7, while 9 students (82%) included them in their post interviews.

Genetic Algorithms

The key characteristics considered in students’ responses to Scenarios 8-11 were genetic algorithms’ ability to evolve to solve problems, improve over time through searching for optimal solutions, and drop out unfavorable characteristics.

Scenario 8 asked students to discuss how a speech-activated computer works. In the pre interview, Student 1 suggested that the computer considered context factors such as “how you use words and inflections in your voice.” Then, this student said, “But unless it is hooked up to your body, I don’t know how it would actually look at anything.” Student 2 thought the computer had a dictionary it examines as a person types and then tries “to piece it together as a sentence. However, you talk so fast, I don’t know how it could look up the words that fast in its dictionary.” Other students also used a dictionary explanation. Student 4 said the computer “primarily focuses on adjectives and adverbs, then on the subject content words, and then determines the context of what you are talking about, but I am not sure how it does it all.” Student 3 refined the dictionary explanation further, considering voice inflections: “It doesn’t do whole words. It takes parts of words and associates voice inflection, the raising and lowering of your voice, with mood.”

During post interviews, Student 1, who had used voice inflections and a dictionary in the pre interview explanation, said,

We could put little sensors on your face and look at how your facial expressions are. We could look at the tones in your speech, learning over time how a certain person uses speech, the context, inflection in the voice [key characteristic: improve over time searching for optimal solutions]….What it could do is, if there’s five signals going, one signal could be the word itself or the letters of the word, one could be the tone of voice, and so on. It would give more weight to the tone than the word. Then, it would give a response. If it can’t figure out what you are saying, it can change what it does. It can say, “Well, this doesn’t work. How come? Maybe there’s a lot of background noise from cars going by outside so let’s drop out any noise that sounds like cars so maybe the person’s voice will be clearer.” Then, it drops out the car noise [key characteristic: dropping out unfavorable characteristics]. But because everybody’s speech is different, like, you might have a lisp or an accent, the computer probably isn’t that good when it is first programmed. It can get better, evolve, so it can handle more and more different kinds of speech and it gradually gets better at running on speech rather than by somebody typing in instructions [key characteristic: evolves to solve problems]. After a while, it might get so good that it can put back in some stuff that it took out earlier, like maybe it can figure out your speech even if it is real quiet where, before, it had to enhance your speech by making it louder to understand it.

Student 6 said,

It uses a genetic algorithm picking the best aspects and sending them to the next step [key characteristic: drops out unfavorable characteristics]. You mate two of each to get the best aspects from each one to form the next generation that is best because of positive aspects [key characteristic: improves over time searching for optimal solutions]. Then, the next generation consists of those new ones. It does the whole thing over again with the occasional mutation being stuck in to introduce a good aspect lost earlier [key characteristic: evolves to solve problems].

Student 3’s response was similar but added an example:

The computer might ask you to read something aloud and you would read it in your usual tone. That is one generation. It would have some way to respond, and if it responded incorrectly, you could say, “try it again,” or you chop it up reading very slowly [key characteristic: improves over time searching for optimal solutions]. If you read “cat” and it types “cap,” you would say, “try it again” [key characteristic: drops out unfavorable characteristics, evolves to solve problems].

The remaining post interviewees used the dictionary explanation with the computer examining the dictionary for matches with voice inflections through searching among several possible matches. These interviewees, however, did not explain how the process was accomplished.

As noted in Table 2, 1 student (9%) included all three key characteristics in the pre interview response to Scenario 8, while 4 (36%) did so in their post interview responses. No students included any of the three key characteristics in their pre interview responses to Scenarios 9, 10, or 11. One student (9%) included all three key characteristics in the post interview response to Scenario 9, with 2 (18%) doing so in Scenario 10’s post interviews and 9 (82%) doing so in Scenario 11’s post interviews.

Discussion and Conclusions

When considering the study’s findings, we found no differences by gender or by age. Among all interviewees, both appropriate and inappropriate responses were found in post interviews.

There were six instances of appropriate responses in the precourse interviews out of a total of 121 responses, with Student 5 giving two of these. During the postcourse interviews, 49% of responses indicated understanding and application of fuzzy logic, neural networks, and genetic algorithms, compared to 5% in the precourse interviews. This large overall increase indicates that some conceptual change occurred among these students in relation to the three major concepts investigated.

These students’ pre interview responses to the fuzzy logic scenarios suggest that they were familiar with the idea that computers are programmed but could not explain the factors considered in the programming or how those factors were structured within a program. The pre interview explanations suggest an on-off, binary view of how an ATM or an automatic point-and-click camera works. In the case of the ATM scenario, all students conceptualized the process as having one factor, distance or light, on which a decision was made. After investigating the concept in the course, six post interview responses explained a familiar event with an appropriate use of fuzzy logic, including the key characteristic of a range of values, but fuzzy logic explanations were not applied consistently.

During the pre interviews with the neural network scenarios, responses incorporating all three key characteristics were given by Student 5 to Scenarios 4 and 5. Three students, not including Student 5, used the key characteristics in responses to the pre interviews for Scenario 6. Post interview responses appropriately using the three key characteristics were found for each of the neural network scenarios. Scenario 5 had just two appropriate responses, while three other scenarios had eight appropriate responses and Scenario 7 had nine appropriate responses. All students incorporated a neural network into at least one appropriate response, with 91% doing so in three or more of the five neural network scenarios. More students used an appropriate explanation with these scenarios than had done so with the fuzzy logic scenarios; however, since there were more scenarios to which they could respond, this may partially account for the difference. As with fuzzy logic, students seemed to have undergone conceptual change, but it was not firmly established. Those students whose pre interviews included some use of neural networks demonstrated a wider, but still incomplete, application of the concept in post interviews. This variable usage suggests that the course enabled them to develop an understanding of neural networks incorporating components found among scientists but that this understanding may be in transition, since it was not consistently utilized.

Only 1 student incorporated the key characteristics of genetic algorithms into a pre interview response, for one of the four scenarios used to investigate this concept. In the post interviews, at least one appropriate response to this set of scenarios was given by 91% of the students. Scenario 11, dealing with educational computer games, received the most appropriate post interview responses.

The findings of this study indicate that an investigative, inquiry-oriented course can create conceptual change in adult students. The post interviews contain many more responses indicating understanding and application of the concepts of fuzzy logic, neural networks, and genetic algorithms than are found in the pre interviews. However, since about half of the post interview responses did not appropriately demonstrate an understanding and application of these concepts, the conceptual change was not consistently applied.

Among the three concepts, neural networks were most often applied in post interviews. However, they also appeared most often in pre interviews. This suggests an understanding of neural networks among some of the students prior to the course. A genetic algorithm is an abstract and complex concept incorporating fuzzy logic and neural networks. Nine of these interviewees appropriately used a genetic algorithm in their post interview response to Scenario 11 in reference to an education game, although only four appropriate responses were given to the other three scenarios.

The results support Watson et al.’s (1997) contention that conceptual change is marked by a state of transition. The concepts investigated in this study are process concepts used to explain how an event is structured (Chi et al., 1994). Because they are constraint-based interactions, the inconsistency evident in the results can be expected, since conceptual change can be lengthier with process concepts. Genetic algorithms are the most dynamic, causal, and constraint-based of the three concepts investigated in this study. The responses indicated that the interviewees most often viewed the genetic algorithm scenarios as static. Only in the educational game, Scenario 11, was a dynamic and constraint-based view widely evident. The context of the scenario appears to be fundamental to these students’ application of the concept (Bliss, 1995). For example, the educational game scenario elicited far more appropriate explanations than did the other genetic algorithm scenarios. Two of the neural network scenarios dealt with the vending machines commonly experienced in modern society. One neural network scenario dealt with identifying windshield wipers for one’s car, a common experience. Another neural network scenario dealt with applying for and receiving a loan, an experience many undergraduates have had as they seek financial assistance for higher education. The scenario with the fewest appropriate responses, the computer-designed fabric patterns, may reveal a lack of experience with industrial and artistic computer applications among these students.

Further research needs to identify elements of the scenarios that may be impacting these students’ ability to apply a concept. The findings suggest that personal experiences play an important part in conceptual change, thus supporting research done by many others. Since no individual is likely to be familiar with all contexts encountered, educators need to provide experiences that broaden and deepen individual experiences. But educators must also consider how to support students’ usage and application of a concept to less familiar contexts.

Modeling of scientific systems using computers can be expected to continue to expand in scientific research and theory building. It offers opportunities to explore complex interactions within a system. Student involvement in modeling scientific systems can be expected to become a more frequently used component of science education, because it enables students to manipulate a multitude of factors to determine their effects. To meaningfully incorporate modeling, students have to understand the concepts of fuzzy logic, neural networks, and genetic algorithms in a scientifically appropriate manner and apply them in many contexts.

Editors’ Note: This research was partially supported by the National Aeronautics and Space Administration’s Opportunities for Visionary Academics (NOVA) program.

References

American Association for the Advancement of Science. (1993). Benchmarks for science literacy. New York: Oxford University Press.

Bliss, J. (1995). Piaget and after: The case of learning science. Studies in Science Education, 25, 139-172.

Chi, M. T. H., Slotta, J. D. & deLeeuw, N. (1994). From things to processes: A theory of conceptual change for learning science concepts. Learning and Instruction, 4, 27-43.

Driver, R. (1986). The pupil as scientist. Milton Keynes, UK: Open University Press.

Frontline Systems, Inc. (1993). Premium Solver Platform Version 3.5 [Computer software]. Incline Village, NV: Author.

Garnett, P., Garnett, J., & Hackling, M. W. (1995). Students’ alternative conceptions in chemistry: A review of research and implications for teaching and learning. Studies in Science Education, 25, 69-95.

Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine learning. Reading, MA: Addison-Wesley.

Inductive Solutions, Inc. (1992). Induce-It [Computer software]. New York: Author.

Karplus, R. (1977). Science teaching and the development of reasoning. Berkeley: University of California Press.

Marek, E. A., & Cavallo, A. M. (1997). The learning cycle (Rev. ed.). Portsmouth, NH: Heinemann.

National Research Council. (1996). National science education standards. Washington, DC: National Academy Press.

Posner, G. J., Strike, K. A., Hewson, P. W., & Gertzog, W. A. (1982). Accommodation of a scientific conception: Toward a theory of conceptual change. Science Education, 66, 211-227.

Renner, J. (1982). The power of purpose. Science Education, 66, 709-716.

Sadler, P. M. (1998). Psychometric models of student conceptions in science: Reconciling qualitative studies and distractor-driven assessment instruments. Journal of Research in Science Teaching, 35, 265-296.

Thoresen, K. (1993). Principles in practice: Two cases of situated participatory design. In D. Schuler & A. Namioka (Eds.), Participatory design: Principles and practices (pp. 271-298). Hillsdale, NJ: Lawrence Erlbaum Associates.

Tyson, L. M., Venville, G. J., Harrison, A. G., & Treagust, D. F. (1997). A multidimensional framework for interpreting conceptual change events in the classroom. Science Education, 81, 387-404.

Watson, J. R., Prieto, T., & Dillon, J. S. (1997). Consistency of students’ explanations about combustion. Science Education, 81, 425-443.

White, R. T. (1991). Episodes and the purpose and conduct of practical work. In B. Woolnough (Ed.), Practical science (pp. 78-86). Milton Keynes, UK: Open University Press.

Yager, R. R. (1987). Fuzzy sets and applications: Selected papers by L. A. Zadeh. New York: John Wiley.

Cynthia Szymanski Sunal, Charles L. Karr, and Dennis W. Sunal The University of Alabama

Correspondence concerning this article should be addressed to Cynthia Szymanski Sunal, The University of Alabama, P. O. Box 870232, Tuscaloosa, AL 35487-0231.

Electronic mail may be sent via Internet to cvsunal@bamaed.ua.edu

Copyright School Science and Mathematics Association, Incorporated Feb 2003
