
Technology and ethics

DOI: 10.4324/9780415249126-L102-1
Version: v1, published online 1998

3. Anglo-American discussions

In contrast to the Continental approach, philosophers following an Anglo-American analytical tradition organize ethical discussions around particular technologies. For example, biomedical ethics includes the study of ethical implications of the use and development of advanced medical technologies (see Bioethics; Medical ethics); information technology ethics (also known as computer ethics) examines social and ethical ramifications of computers and high-speed digital networks (see Information technology and ethics); engineering ethics studies the professional responsibilities of engineers (see Engineering and ethics); and environmental ethics evaluates the effects of various technologies on the natural environment (see Environmental ethics). Work in these areas applies concepts and modes of analysis drawn from analytical moral philosophy and political theory even though, at times, the special problems of new technologies demand an extension of these concepts and methodology beyond their traditional usage.

Each sub-area of ethics and technology is marked by a distinctive community of discourse and unique set of issues. Nevertheless, a number of common themes cut across these specialized domains. Prominent among them are: equity or justice; the problem of risk; responsibility for technology; and the effect of technology on liberty and autonomy.

Historically, issues of social justice related to the distribution of technological goods and services were the first to become a focus of moral concern. This set of issues played a major role in the nineteenth-century rise of the labour movement, socialism, and state regulation of technology. More recently, equity issues have re-emerged in acute form in relation to biomedicine, environmental pollution and computers. On what basis should scarce medical resources such as donated or artificial organs be allocated? To what extent does protection of an environmental commons legitimately limit private ownership? How should access to information technologies be facilitated under democratic capitalist structures?

A different dimension of the problem of justly distributing technological benefits is the equitable distribution of technological costs and risks. The general problem of risk due to technology, however, goes beyond equity concerns and is part of the broader effort of technology assessment. Technologies often have unintended consequences which, if known in advance, might have altered decisions about their adoption. These consequences may include an increase in risk. Two scholars who have especially influenced work in this area are David Collingridge (1980) and Kristin Shrader-Frechette (1991). Collingridge is well known for his dilemma of technological (and risk) assessment: in the early development of a technology, when it is relatively easy to control its direction, we inevitably lack the knowledge to exercise reasonable control; yet by the time we have more experience and, with it, a better understanding of the risks, control has become difficult, if not impossible. Shrader-Frechette argues that persons should not be subjected to technological risk until they have clearly understood the risk and have granted their consent without being unduly constrained by economic or other external pressures. She contends that the concept of free and informed consent, as applied in medicine, is applicable to technology in general and ought to be part of what guides morally grounded public policy (see Risk; Risk assessment).

The issue of responsibility for the effects of technology involves two dominant lines of inquiry. One concerns the special responsibilities of technical professionals (see Professional ethics). Paul Durbin (1992), for instance, argues that engineers have an obligation to go outside their technical communities and lobby policymakers, as when physicists during the 1950s and early 1960s lobbied for a worldwide ban on the atmospheric testing of nuclear weapons, or when computer scientists in the United States opposed funding of the Strategic Defense Initiative (the ‘Star Wars’ project) in the 1980s. The second line of inquiry focuses on questions of blameworthiness and accountability when technological innovations or products cause individual or societal harms (see Responsibility; Responsibilities of scientists and intellectuals §3).

Finally, the invention and utilization of advanced technologies may be evaluated in terms of liberty and autonomy. In biomedicine, for example, there have been numerous efforts, in the name of liberty and autonomy, to work out the exact parameters of free and informed consent and to propose ways to institutionalize them (see Consent). In the area of information technology, liberty and autonomy are key to explaining the normative dimensions of privacy, as well as to prescribing the extent of freedom of speech over digital networks (see Privacy §2). Langdon Winner (1986) has discussed the relationship between technology, control and autonomy in more general terms, looking at the ways in which the technological design process and large technical systems, such as roads, restrict human agency, affect political and personal autonomy, and delimit the exercise of democratic citizenship.

Citing this article:
Mitcham, Carl and Helen Nissenbaum. ‘Anglo-American discussions’, in ‘Technology and ethics’ (1998), Routledge Encyclopedia of Philosophy, Taylor and Francis, doi:10.4324/9780415249126-L102-1, https://www.rep.routledge.com/articles/thematic/technology-and-ethics/v-1/sections/anglo-american-discussions.