A key development in the realm of information technologies is that they
are not only the object of moral deliberation but are also
beginning to be used as a tool in moral deliberation itself.
Since artificial intelligence technologies and applications are a kind
of automated problem solver, and moral deliberations are a kind of
problem, it was only a matter of time before automated moral reasoning
technologies emerged. This is still an emerging
technology, but it has a number of very interesting moral implications,
which are outlined below. The coming decades are likely to
see a number of advances in this area, and ethicists need to pay close
attention to these developments as they happen. Susan and Michael
Anderson have collected a number of articles on this topic in
their book Machine Ethics (2011), and Rocci Luppicini devotes a
section of his anthology, the Handbook of Research on Technoethics
(2009), to this topic.
Information Technology as a Model for Moral Discovery
Patrick Grim has been a longtime proponent of the idea that philosophy should use information technologies to automate and illustrate philosophical thought experiments (Grim et al. 1998; Grim 2004). Peter Danielson (1998) has also written extensively on this subject, beginning with his book Modeling Rationality, Morality, and Evolution; much of the early research in the computational theory of morality centered on using computer models to elucidate the emergence of cooperation between simple software AI or ALife agents (Sullins 2005).
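As a concrete illustration of the kind of simulation this early work relied on, the Python sketch below has simple software agents repeatedly play a prisoner's dilemma, pitting a reciprocating strategy against an unconditional defector and a random player. The strategies and payoff values here are illustrative assumptions only, not a reproduction of Grim's or Danielson's actual models.

import random

# Prisoner's dilemma payoffs: (my move, their move) -> my score
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def random_agent(opponent_history):
    return random.choice(["C", "D"])

def play(strategy_a, strategy_b, rounds=50):
    # Play an iterated prisoner's dilemma and return each agent's total score.
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each agent sees only the other's past moves
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

for name, rival in [("always_defect", always_defect),
                    ("random_agent", random_agent),
                    ("tit_for_tat", tit_for_tat)]:
    print("tit_for_tat vs", name, play(tit_for_tat, rival))

Over repeated rounds, pairs of reciprocating agents accumulate higher joint scores than pairings that include an unconditional defector, which is the basic result the early computational models of cooperation build on.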
Luciano Floridi and J. W. Sanders argue that information as it is
used in the theory of computation can serve as a powerful idea that can
help resolve some of the famous moral conundrums in philosophy such as
the nature of evil (1999, 2001). They propose that, along with
moral evil and natural evil, both concepts familiar to philosophy,
we add a third concept they call artificial evil (2001). Floridi
and Sanders contend that if we do this, then we can see that the actions of
artificial agents
…to be morally good or evil can be determined even in the absence of biologically sentient participants and thus allows artificial agents not only to perpetrate evil (and for that matter good) but conversely to ‘receive’ or ‘suffer from’ it. (Floridi and Sanders 2001)
Evil can then be equated
with something like information dissolution, where the irretrievable loss
of information is bad and the preservation of information is good
(Floridi and Sanders 2001). This idea can move us closer to a way of measuring the
moral impacts of any given action in an information environment.
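A toy sketch in Python can make this reading concrete: score an action in an information environment by how much information it irretrievably destroys versus preserves. This is an assumption made for illustration, not Floridi and Sanders' own formalism; the set-based measure and all names below are hypothetical.

def information_loss(before, after, backups=frozenset()):
    # Items present before the action, absent afterwards, and not recoverable
    # from any backup count as irretrievably lost.
    return len((before - after) - backups)

def moral_impact(before, after, backups=frozenset()):
    # On this toy reading, preservation of information counts in favor of an
    # action and irretrievable loss counts against it.
    return len(before & after) - information_loss(before, after, backups)

env = {"record_1", "record_2", "record_3"}
print(moral_impact(env, {"record_1"}))                                    # -1: two items lost for good
print(moral_impact(env, {"record_1"}, backups={"record_2", "record_3"}))  # 1: the loss is recoverable

On this measure, deleting records that survive nowhere else scores worse than deleting records that remain recoverable elsewhere, tracking the intuition that irretrievable loss is what matters morally.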
Information Technology as a Moral System
Early in the twentieth century the American philosopher John Dewey proposed a theory of inquiry based on the instrumental uses of technology. Dewey had an expansive definition of technology which included not only common tools and machines but also information systems such as logic, laws, and even language (Hickman 1990). Dewey argued that we are in a 'transactional' relationship with all of these technologies, within which we discover and construct our world (Hickman 1990). This is a helpful standpoint to take, as it allows us to advance the idea that an information technology of morality and ethics is not impossible, and to take seriously the idea that the relations and transactions between human agents and those that exist between humans and their artifacts have important ontological similarities. While Dewey could only dimly perceive the coming revolutions in information technologies, his theory is useful to us still because he proposed that ethics was not only a theory but a practice, and that solving problems in ethics is like solving problems in algebra (Hickman 1990). If he is right, then an interesting possibility arises, namely that ethics and morality are computable problems and that it should therefore be possible to create an information technology that can embody moral systems of thought.
In 1974 the philosopher Mario Bunge proposed that we take the notion
of a 'technoethics' seriously, arguing that moral
philosophers should emulate the way engineers approach a problem.
Engineers do not argue in terms of reasoning by categorical imperatives;
instead they use:
… the forms If A produces B, and you value B, chose to do A, and If A produces B and C produces D, and you prefer B to D, choose A rather than C. In short, the rules he comes up with are based on fact and value, I submit that this is the way moral rules ought to be fashioned, namely as rules of conduct deriving from scientific statements and value judgments. In short ethics could be conceived as a branch of technology. (Bunge 1977, 103)
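Read as code, Bunge's two rule forms amount to a small decision procedure: among candidate actions whose outcomes are known, pick the one whose outcome you value most. The Python sketch below is an illustrative reading, not Bunge's own formalism; the mappings and values are assumed for the example.

def choose(options, produces, value):
    # "If A produces B and C produces D, and you prefer B to D, choose A rather than C."
    # options  -- candidate actions, e.g. ["A", "C"]
    # produces -- known outcome of each action, e.g. {"A": "B", "C": "D"}
    # value    -- how much the agent values each outcome
    return max(options, key=lambda action: value[produces[action]])

# "If A produces B, and you value B, choose to do A."
print(choose(["A", "C"], {"A": "B", "C": "D"}, {"B": 10, "D": 2}))  # -> A

The rule is purely instrumental: the facts supply the produces mapping and the value judgments supply the value mapping, which is exactly the fact-plus-value structure Bunge describes.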
Taking this view seriously implies that the very act of building
information technologies is also the act of creating specific moral
systems within which human and artificial agents will, at least
occasionally, interact through moral transactions. Information
technologists may therefore be in the business of creating moral
systems whether they know it or not and whether or not they want that
responsibility.
Informational Organisms as Moral Agents
The most comprehensive literature arguing in favor of the prospect of using information technology to create artificial moral agents is that of Luciano Floridi (1999, 2002, 2003, 2010b, 2011b) and Floridi with Jeff W. Sanders (1999, 2001, 2004). Floridi (1999) recognizes that issues raised by the ethical impacts of information technologies strain our traditional moral theories. To relieve this friction he argues that what is needed is a broader philosophy of information (2002). After making this move, Floridi (2003) claims that information is a legitimate environment of its own, one that has its own intrinsic value that is in some ways similar to the natural environment and in other ways radically foreign; either way, the result is that information is itself worthy of ethical concern. Floridi (2003) uses these ideas to create a theoretical model of moral action using the logic of object-oriented programming.
His model has seven components: 1) the moral agent a, 2) the moral
patient p (or more appropriately, reagent), 3) the interactions of
these agents, 4) the agent's frame of information, 5) the factual
information available to the agent concerning the situation that agent
is attempting to navigate, 6) the environment the interaction is
occurring in, and 7) the situation in which the interaction occurs
(Floridi 2003, 3). Note that there is no assumption of the
ontology of the agents concerned in the moral relationship modeled
(Sullins 2009a).
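Since Floridi appeals to the logic of object-oriented programming, the seven components can be rendered schematically as classes. The Python sketch below is only an illustrative rendering under that assumption; the class and field names are hypothetical, not Floridi's own notation.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str                                   # 1) the moral agent a
    frame: dict = field(default_factory=dict)   # 4) the agent's frame of information
    facts: dict = field(default_factory=dict)   # 5) factual information about the situation

@dataclass
class Patient:
    name: str                                   # 2) the moral patient p (or reagent)

@dataclass
class Interaction:
    agent: Agent                                # 3) the interaction between agent and patient
    patient: Patient
    action: str

@dataclass
class Situation:
    environment: str                            # 6) the environment the interaction occurs in
    description: str                            # 7) the situation in which it occurs
    interactions: list = field(default_factory=list)

# Nothing in the model fixes the ontology of agent or patient: either slot
# could be filled by a human, an institution, or a piece of software.
a = Agent("agent_a", frame={"norms": ["preserve records"]}, facts={"records_backed_up": False})
p = Patient("patient_p")
s = Situation("shared database", "routine maintenance",
              [Interaction(a, p, "delete obsolete records")])

The point of the rendering is simply that the model is ontology-neutral, as noted above: the same structure accepts human or artificial occupants in either role.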
There is additional literature which critiques and expands the idea
of automated moral reasoning (Adam 2008; Anderson and Anderson 2011;
Johnson and Powers 2008; Schmidt 2007; Wallach and Allen 2010).
While scholars recognize that we are still some time from creating
information technology that would be unequivocally recognized as an
artificial moral agent, there are strong theoretical arguments in favor
of their eventual possibility, and such agents are therefore an appropriate
concern for those interested in the moral impacts of information
technologies.