Back to the future: Engineering, computing, and ethics
Herkert, Joseph R
A few years ago, Bill Joy, a cofounder of Sun Microsystems and coauthor of the Java software specification, published a controversial article in Wired magazine in which he suggested that certain paths of scientific and technological research – genetic engineering, robotics, and nanotechnology – posed such great dangers to the future of human beings that we ought to think twice before proceeding down those paths. Joy believes that what distinguishes these technologies from earlier ones is their potential for self-replication, thus raising the specter of a “future [that] doesn’t need us.” However, not all technologists share Joy’s concern. For example, in a panel discussion of “humanoid robotics” that appeared in Discover, Marvin Minsky, one of the founders of the field of artificial intelligence, commented, “I don’t see anything wrong with human life being devalued if we have something better.”
Others, while not necessarily agreeing with Minsky’s optimistic outlook for robots, have dismissed Joy’s article as a naive statement of technological determinism. For example, in a recent review of Michael Crichton’s nanorobot thriller Prey, Freeman Dyson argues that “Joy ignores the long history of effective action by the international biological community to regulate and prohibit dangerous technologies.” Nonetheless, I find Joy’s article worthy of notice for a number of reasons. First, a leader in the technical community speaking out on ethical issues, though not unheard of, is certainly rare. Second, Joy’s focus on “macroethical” issues reflects a growing trend in engineering ethics. And third, the three problem areas cited by Joy – robotics, nanotechnology, and genetic engineering – indicate the growing need for greater collaboration among engineering ethicists and computer ethicists.
I started work as a consultant in the electric utility industry in the mid-1970s a few years after earning my bachelor’s degree in electrical engineering (and after a brief interlude studying creative writing). Though the first oil shock had just taken place, the utility industry was still barreling toward the future with plans to double generating capacity every ten years. In retrospect, I can identify many ethical issues that went unnoticed at the time. Conflicts of interest, such as in underestimation of costs in planning studies to perpetuate the need for consulting services, though not everyday occurrences, were clearly present. Construction flaws and survey errors were overlooked to maintain good relations with contractors and to avoid embarrassing other engineers. Public concerns about nuclear power were belittled. And while these events sometimes tugged at my conscience, engineering ethics was a subject that was never broached in my education or work experience. Hand calculators had replaced slide rules, but computer simulations were still uncommon. I recall being criticized by a supervisor for writing in a business-development prospectus that we would attack a particular problem using a digital computer. Computers, he scolded, are merely tools – it was our engineering expertise that made us attractive to clients.
By the time I returned to my graduate studies in the early 1980s, engineering ethics was emerging as a full-fledged branch of applied ethics. Federally funded collaborations among engineers and philosophers led to significant developments in research and teaching. While moral theories, grounded in philosophy, and engineering codes of ethics, grounded in part in engineering’s desire to earn respect as a “profession,” competed for the attention of scholars and teachers, the case study emerged as a principal mode of pedagogy. Issues covered ranged from conflict-of-interest cases and industrial secrets to protecting public health, safety, and welfare, which all contemporary codes of engineering ethics now list as of “paramount” importance. For the most part, the behavior of individual engineers and the internal workings of the engineering profession (or what now might be called “microethics”) received the most attention. The 1990s saw not only an explosion of textbooks and other print and online educational resources, but also recognition by the Accreditation Board for Engineering and Technology (ABET) that “professional and ethical responsibility” is one of eleven knowledge areas critical to a general engineering education.
One case study that has been a particular focal point for engineering ethics (and business ethics as well) has been the space shuttle Challenger explosion. Perhaps more has been written on this case than any other (and much more is certain to come, given the parallels to the recent space shuttle Columbia disaster).* Many classic engineering-ethics cases deal with disasters such as these, and like the Challenger case often focus on whistle-blowing and its usually negative consequences for the whistle-blower. Recently, however, more emphasis has been placed on cases with happier endings. The best known of such “good works” cases is the story of William LeMessurier, the chief structural designer of New York’s Citicorp building who, upon discovering flaws in the building’s construction, essentially blew the whistle on himself.
MACROETHICS AND MICROETHICS
Despite an occasional call for more concern among engineering ethicists for macroethical issues – that is, the social responsibility of the engineering profession and public policy concerning technology – engineering ethics until recently remained focused primarily on microethical issues. But this focus has begun to change. Political scientists Langdon Winner and Ned Woodhouse, for example, have called attention to pressing societal needs, such as over-consumption, that deserve more attention from engineering. And William Lynch and Ronald Kline have suggested that sociology and history should play a more prominent role in engineering-ethics education. In my own work, I have focused on the relationship of engineering ethics and public policy in such areas as risk assessment, sustainable development, and product liability.
Macroethics in engineering has also drawn some attention outside of academia. Many engineering organizations and engineering leaders have promoted the concept of sustainable development and the role of engineering in making it a reality, as highlighted in a document prepared by several U.S.-based engineering societies for the Johannesburg Earth Summit 2002:
Creating a sustainable world that provides a safe, secure, healthy life for all peoples is a priority for the U.S. engineering community. It is evident that U.S. engineering must increase its focus on sharing and disseminating information, knowledge and technology that provide access to minerals, materials, energy, water, food, and public health while addressing basic human needs. Engineers must deliver solutions that are technically viable, commercially feasible, and environmentally and socially sustainable.
Bill Wulf, President of the National Academy of Engineering (NAE) who, like Joy, is a well-respected leader in the engineering community, has also championed the cause of macroethics, with his concerns also focusing on nanotechnology, biotechnology, and information technology. Wulf is attempting to establish a Center for Engineering, Ethics, and Society at NAE with a primary focus on macroethical issues and social responsibilities of the engineering profession.
ENGINEERING AND COMPUTING – CONNECTIONS AND DIVERGENCE
Many, if not most, of the emerging macroethical issues in engineering intersect with the growing dependence of engineering on computing, and on information and communication technology (ICT) in general. With my colleague, Brian O’Connell, I have lately been working on comparing and contrasting the fields of engineering ethics and computing ethics. We began with the observation that computer ethics is much more relevant to engineering ethics than engineering-ethics texts would have one think. While most such texts recognize the importance of knowledge of environmental ethics to engineers of all disciplines, few give special treatment to computer ethics, which for the most part is taught only to computer scientists and computer engineers. When computing topics are covered in engineering-ethics texts, it is usually piecemeal, with no special significance placed on the revolutionary nature of computing and information technology. This lack is curious, given that computing is no longer merely a tool, as my engineering supervisor once chided me, but an integral component of contemporary engineering. As Wulf has noted:
The pervasive use of information technology in both the products and process of engineering . . . has the potential to change the practice of engineering significantly, and hence the education required to be an engineer. . . . As the power of computers . . . increases exponentially, more and more routine engineering functions will be codified and done by computers, simultaneously freeing the engineer from drudgery and demanding a higher level of creativity, knowledge, and skill [emphasis added].
Given the significance of ICT in engineering education and practice, engineering students of all disciplines, and not just computer engineers, stand to benefit from exposure to ethical issues that are standard fare in computer ethics, in such areas as privacy, intellectual property in the digital age, and computer-systems reliability.
Moving on to examining research, O’Connell and I have found that while the emergence of engineering ethics and computing ethics as academic fields of study occurred more or less concurrently, engineering ethicists seem more interested in and better prepared to deal with microethical issues, while computing ethicists are much more willing and able to take on macroethical concerns. It seems to us that this distinction results at least partly from the strong tradition of professional practice and professionalism in engineering and the important role it has played in the development of engineering ethics. Given that the roots of computing are more abstract and academic than those of engineering, computer ethics has developed in an atmosphere that does not parallel the professional traditions of engineering. On the other hand, unlike engineering ethicists, from the very beginning computer ethicists have more naturally turned to the broader social implications of ICT, as suggested in the goals of computer-ethics courses enumerated in Johnson’s classic text on computer ethics:
(1) to make students (especially future computer professionals) aware of the ethical issues surrounding computers;
(2) to heighten their sensitivity to ethical issues in the use of computers and in the practice of computing professions;
(3) to give them more than a superficial understanding of the ways in which computers (do and don’t) change society and the social environments in which they are used;
(4) to provide conceptual tools and develop analytical skills for sorting out what to do when in situations calling for ethical decision-making or for sorting out the likely impacts computer technology will have in this or that context.
It is not uncommon, for example, to find computer ethicists immersed in such issues as gender and ICT, the “digital divide,” electronic documents, online communities, information security, and design issues in ICT. To cite one example, the computer-ethics community reacted with great concern to the Bush Administration’s plans to mine ICT systems for information on terrorist activities at the probable expense of civil liberties.
FRUITFUL COLLABORATION AND CONTINUING CHALLENGES
Recognizing that the strengths of engineering and computing ethics are complementary, O’Connell and I have concluded that much can be gained from a more deliberate interaction among engineering and computer ethicists. Indeed, recent trends would suggest that both engineering and computing and their ethics counterparts are moving closer together as disciplinary boundaries blur and issues become more complex. For example, the accreditation board for computer science has recently been integrated with ABET. The Institute of Electrical and Electronics Engineers (IEEE), the world’s largest technical society, has 300,000 members in more than 150 countries, of which nearly 100,000 belong to the IEEE Computer Society. Perhaps most importantly in relation to ethics, the IEEE Computer Society and the Association for Computing Machinery recently collaborated to establish a Software Engineering Code of Ethics and Professional Practice that addresses both the microethical and macroethical responsibilities of software designers.
Joy’s examples of the ongoing revolutions in robotics, nanotechnology, and genetic engineering illustrate the convergence of engineering and computing and the need for ethical thinking in both fields that bridges the microethical and the macroethical. Indeed, efforts are underway to form a new interdisciplinary field of science with input from nanotechnology, biotechnology, information technology, and cognitive science (NBIC). Ethical dilemmas posed by NBIC developments range from maintaining professional competence as disciplinary boundaries are crossed to pushing the limits of what it means to be human. Less threatening technologies also pose challenges of mutual interest to engineering and computing ethicists. The explosion in wireless networking, for example, involves traditional microethical issues such as product safety and reliability, along with more global challenges such as preserving privacy and providing equitable access to information.
If all engineering and computing professionals were as thoughtful and prudent as Joy, there might be less of a need for ethicists to focus on these fields. Unfortunately, this is not the case. Most engineering practitioners, I fear, are no more aware of their ethical responsibilities than I was some twenty-five years ago. For example, of the 300,000 IEEE members, fewer than 2,000 are members of the Society on Social Implications of Technology, for more than twenty years one of IEEE’s “technical” societies, and most members have probably never read the IEEE Code of Ethics. The IEEE’s ethics activities, highly regarded by most outsiders, have waxed and waned throughout the years, reflecting an ongoing internal struggle within the professional societies between engineering professionalism and the corporations for which most engineers work. The IEEE’s Ethics and Member Conduct Committee and its members, for example, are currently prohibited by IEEE Bylaws from “provid[ing] advice to individuals.”
While significant inroads have been made in engineering and computing education in the area of professional ethics and social responsibility, and advocacy by business and academic leaders such as Joy and Wulf is becoming more prominent, it is ultimately rank-and-file engineers and computer scientists, and their professional societies, who must acknowledge and face head-on the traditional microethical responsibilities and emerging macroethical ones. The future surely does need us, but only if our sense of professional and social responsibility becomes more in tune with our technical achievements.
Acknowledgement
I am grateful to Brian M. O’Connell for helpful comments on a draft of this article.
* See for example Thomas G. White, Jr., “The Establishment of Blame in the Aftermath of a Technological Disaster: An Examination of the Apollo I and Challenger Disasters,” (National Forum, 81.1:24-29) – Editor.
Works Cited and Further Reading
Davis, M. Thinking Like an Engineer: Studies in the Ethics of a Profession. Oxford: Oxford University Press, 1998.
A Declaration by the US Engineering Community to the World Summit on Sustainable Development (available online at http://www.asme.org/gric/ps/2002/02-30.html).
Dyson, F. “The Future Needs Us!” The New York Review of Books 13 February 2003 (available online at http://www.nybooks.com/articles/16053).
Herkert, J. “Continuing and Emerging Issues in Engineering Ethics Education.” The Bridge 32.3 (2002): 8-13. (available online at http://www.nae.edu/NAE/naehome.nsf/weblinks/MKEZ-5F7SA4).
Johnson, D. Computer Ethics. 2nd ed. Englewood Cliffs, NJ: Prentice Hall, 1994.
Joy, B. “Why the Future Doesn’t Need Us.” Wired 8.04 (April 2000) (available online at http://www.wired.com/wired/archive/8.04/joy.html).
Lynch, W. and R. Kline. “Engineering Practice and Engineering Ethics.” Science, Technology and Human Values 25 (2000): 195-225.
Petit, C. et al. “The Future of Humanoid Robots.” Discover March 2000: 84-90 (available online at http://www.discover.com/mar_00/featfuture.html).
Web Clearinghouse for Engineering and Computing Ethics (available online at http://www4.ncsu.edu/~jherkert/ethicind.html).
Woodhouse, E. “Overconsumption as a Challenge for Ethically Responsible Engineering.” IEEE Technology and Society Magazine 20.3 (2001): 23-30.
Wulf, W. “Changing Nature of Engineering.” The Bridge 27.2 (1997). (Available online at http://www.nae.edu/nae/naehome.nsf/weblinks/NAEW-4NHMBD).
Joseph R. Herkert is associate professor of multidisciplinary studies at North Carolina State University where he teaches in the Science, Technology, and Society Program and is director of the Benjamin Franklin Scholars Program, a dual-degree program in engineering and humanities/social sciences. Dr. Herkert is editor of Social, Ethical and Policy Implications of Engineering: Selected Readings (Wiley/IEEE Press) and recently guest-edited special issues of IEEE Technology and Society Magazine on “Engineering Ethics: Continuing and Emerging Issues” and “Social Implications of Information and Communication Technology.” He may be contacted via e-mail at joe_herkert@ncsu.edu
Copyright National Forum: Phi Kappa Phi Journal Spring 2003