The move from one set of dominant information technologies to
another is always morally contentious. Socrates lived during the
long transition from a largely oral tradition to a newer information
technology consisting of writing down words and information and
collecting those writings into scrolls and books. Famously, Socrates was somewhat antagonistic to writing and never wrote anything down himself. Ironically, we only know about
Socrates' argument against writing because his student Plato
ignored his teacher and wrote it down in a dialogue called
“Phaedrus” (Plato). Towards the end of this dialogue
Socrates discusses with his friend Phaedrus the
“…conditions which make it [writing] proper or improper” (section 274b–279c). Socrates tells a fable
of an Egyptian god named Theuth who gives the gift of writing to a
king named Thamus. Thamus is not pleased with the gift and
replies,
If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. (Phaedrus, section 275a)
Socrates, who was adept at quoting lines from poems and
epics and placing them into his conversations, fears that those who
rely on writing will never be able to truly understand and live by
these words. For Socrates there is something immoral or false
about writing. Books can provide information but they cannot, by
themselves, give you the wisdom you need to use or deeply understand
that information. Conversely, in an oral tradition you do not simply consult a library; you are the library, a living manifestation of the information you know by heart. For Socrates,
reading a book is nowhere near as insightful as talking with its
author. Written words,
…seem to talk to you as though they were intelligent, but if you ask them anything about what they say, from a desire to be instructed, they go on telling you the same thing forever. (Phaedrus, section 275d)
His criticism of writing may at first glance seem humorous, but the temptation to use mere recall and call it memory is becoming more and more prevalent in modern information technologies. Why learn anything when information is just an Internet search away? In order to avoid Socrates' worry, information technologies should do more than just provide access to information; they should also help foster wisdom and understanding.
The Fundamental Character of Information Technologies
Early in the information technology revolution Richard Mason suggested that the coming changes in information technologies would necessitate rethinking the social contract (Mason 1986). What he could not have known then was how often we would have to update the social contract as these technologies rapidly change. Information technologies change quickly and move in and out of fashion at a bewildering pace, which makes it difficult to list them all and catalog the moral impacts of each. The very fact that this change is so rapid and momentous has led some to argue that we need to deeply question the ethics of the process of developing emerging technologies (Moor 2008). It has also been argued that the ever-morphing nature of information technology is changing our ability to fully understand moral values as they change. Lorenzo Magnani claims that acquiring knowledge of how that change confounds our ability to reason morally “…has become a duty in our technological world” (Magnani 2007, 93). The legal theorist Larry Lessig warns that the pace of change in information technology is so rapid that it leaves the slow and deliberative processes of law and political policy behind, and in effect these technologies become lawless, or extralegal. By the time a law is written to curtail, for instance, some form of copyright infringement facilitated by a particular file-sharing technology, that technology has become outdated and users have moved on to something else that facilitates copyright infringement (Lessig 1999). But even given this rapid pace of change, it remains the case that information technologies and applications can all be categorized into at least three different types, each of which we will look at below.
All information technologies record (store), transmit (communicate),
organize and/or synthesize information. For example, a book
is a record of information, a telephone is used to communicate
information, and the Dewey decimal system organizes information.
Many information technologies can accomplish more than one of the above functions and, most notably, the computer can accomplish all of them, since it is a universal machine that can be programmed to emulate any form of information technology. Below we will look
at some specific example technologies and applications from each of
the three types of information technology listed above and track the
moral challenges that arise out of the use and design of these
specific technologies. In addition to the above we will need to
address the growing use of information environments such as massive
multiplayer games, which are environments completely composed of
information where people can develop alternate lives filled with
various forms of social activities. Finally, we will look not only at how information technology impacts our moral intuitions but also at how it might be changing the very nature of moral reasoning. We will also consider information as a technology of morality and how we might program applications and robots to interact with us in a more morally acceptable manner.
Moral Values in Information Recording
We live in a world rich in data, and the technology to record and store vast amounts of this data has grown rapidly. The primary moral concern here is that when we collect, store, and/or access information it should be done in a just manner that anyone can see is fair and in the best interests of all parties involved. As was mentioned above, each of us produces a vast amount of information every day that could be recorded and stored as useful data to be accessed later when needed. But moral conundrums arise when that collection, storage and use of our information is done by third parties without our knowledge or with only our tacit consent. The control of information is power. The social institutions that have traditionally exercised this power include religious organizations, universities, libraries, healthcare officials, government agencies, banks and corporations. These entities have access to stored information that gives them a certain amount of power over their customers and constituencies. Today each citizen has access to more and more of that stored information without needing to go through the traditional mediators of that information, and therefore holds a greater individual share of social power.
One of the great values of modern information technology is that it
makes the recording of information easy, and in some cases, it is done
automatically. Today, a growing number of people enter biometric
data such as blood pressure, calorie intake, exercise patterns, etc.
into applications designed to help them achieve a healthier
lifestyle. This type of data collection could become more
automated in the near future. There are already applications that
use the GPS tracking available in many phones to track the length and
duration of a user's walk or run. How long until a smartphone collects a running data stream of your blood pressure throughout the day, perhaps tagged with geo-location markers of particularly high or low readings? In one sense this could be immensely powerful data that could lead to much healthier lifestyle choices. But it could also be a serious breach of privacy if the information got into the wrong hands, which would be easily accomplished since third parties have access to information collected on smartphones and online applications. We will look
at some theories on how best to ethically communicate this recorded
information to preserve privacy. But here we must address a more subtle privacy breach: the collection and recording of data about a user without his or her knowledge or consent. When searching on the Internet, browser software records all manner of data about our visits to various websites, which can, for example, make webpages load faster the next time you visit them. Even the websites themselves use various means to record information when your computer has accessed them, and they may leave bits of information (commonly called cookies) on your computer which the site can use the next time you visit.
Some websites are able to detect which other sites you have visited or
which pages on the website you spend the most time on. If someone were following you around a library noting down this kind of information, you might find it uncomfortable or hostile, but online this
kind of behavior takes place behind the scenes and is barely noticed by
the casual user.
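To make this behind-the-scenes recording concrete, here is a minimal sketch in Python of how a website can recognize a returning visitor with a cookie and quietly accumulate a browsing profile. Everything in it is invented for illustration (the visitor_id cookie name, the pages, the in-memory log); it shows the general mechanism, not any real site's code.

```python
import uuid
from http.cookies import SimpleCookie

visit_log = {}  # visitor_id -> list of pages viewed (a toy browsing profile)

def handle_request(page, cookie_header=""):
    """Serve a page, tagging the visitor with an ID cookie if they are new."""
    cookie = SimpleCookie(cookie_header)
    if "visitor_id" in cookie:
        visitor_id = cookie["visitor_id"].value   # returning visitor
    else:
        visitor_id = uuid.uuid4().hex             # first visit: mint an ID
    visit_log.setdefault(visitor_id, []).append(page)  # record the page view
    # The Set-Cookie header quietly stores the ID in the user's browser.
    return {"Set-Cookie": f"visitor_id={visitor_id}"}, visit_log[visitor_id]

# First visit: the browser sends no cookie, so the site assigns one.
headers, history = handle_request("/home")
# Later visit: the browser returns the cookie automatically, letting the
# site link both page views to the same person.
headers, history = handle_request("/prices", headers["Set-Cookie"])
print(history)  # ['/home', '/prices'] -- a profile built with no visible sign
```

Nothing here asks for the user's attention; the browser returns the cookie on its own, which is why this differs so sharply from the stranger taking notes in the library.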
According to some professionals, information technology has all but
eliminated the private sphere. Scott McNealy of Sun Microsystems
famously announced in 1999: “You have zero privacy anyway.
Get over it” (Sprenger, 1999). Helen Nissenbaum observes that,
[w]here previously, physical barriers and inconvenience might have discouraged all but the most tenacious from ferreting out information, technology makes this available at the click of a button or for a few dollars (Nissenbaum 1997)
and since the time when
she wrote this the gathering of data has become more automated and
cheaper. Clearly, earlier theories of privacy that assumed the inviolability of physical walls no longer apply, but as Nissenbaum argues, personal autonomy and intimacy require us to protect privacy nonetheless (Nissenbaum 1997).
A final concern in this section is that information technologies are now storing user data in “the cloud,” meaning that the data is stored on a device remotely located from the user and not owned or operated by that user, but the data is then available from anywhere the user happens to be, on any device he or she happens to be using. This ease of access also has the result of making one's relationship to one's own data more tenuous, because of the uncertainty about the physical location of that data. Since personal data is crucially important to protect, the third parties that offer “cloud” services need to understand the responsibility of the trust the user is placing in them. If you upload all the photographs of your life to a service like Flickr and it were somehow to lose or delete them, that would be a tragic mistake that might not be repairable.
Moral Values in Communicating and Accessing Information
Information technology has forced us to rethink simple notions of privacy in favor of more complex theories that recognize both the benefits and risks of communicating all manner of information. The primary moral values of concern are privacy, ownership, trust and the veracity of the information being communicated.
Who has the final say over whether or not some information about a user is communicated? Who is allowed to sell your medical records, your financial records, your friend list, your browser history, etc.? If you do not have control over this process, then how can you claim a right to privacy? For instance, Alan Westin argued in
the very early decades of digital information technology that control
of access to one's personal information was the key to
maintaining privacy (Westin 1967). It follows that if we
care about privacy, then we should give all the control of access to
personal information to the individual. Most corporate entities
resist this notion, as information about users has become a primary commodity in the digital world, boosting the fortunes of corporations like Google or Facebook. There is a great deal of utility each of
us gains from the services of internet search companies. It might
actually be a fair exchange that they provide search results for free
based on collecting data from individual user behavior that helps them
rank the results. This service comes with advertising that is
directed at the user based on his or her search history. That is,
each user tacitly agrees to give up some privacy whenever they use the
service. If we follow the argument raised above that privacy is equivalent to information control, then we do seem to be ceding our privacy away little by little. Herman Tavani and James Moor (2004) argue that in some cases giving the user more control of their information may actually result in a greater loss of privacy. Their primary argument is that no one can actually control all of the information about oneself that is produced each day. If we focus only on the little bit we can control, we lose sight of the vast mountains of data we cannot (Tavani and Moor 2004). Tavani and Moor argue that privacy must be recognized by the third parties that do control your information, and that only if those parties have a commitment to protecting user privacy will we actually have any real privacy. Towards this end they suggest that we think in terms of restricted access to information rather than strict control of personal information (Tavani and Moor 2004).
Information security is also an important moral value that impacts
the communication and access of user information. If we grant the
control of our information to third parties in exchange for the
services they provide, then these entities must also be responsible for
restricting the access to that information by others who might use it
to harm us (see Epstein 2007; Magnani 2007; Tavani 2007). With
enough information, a person's entire identity might be stolen
and used to facilitate fraud and larceny. The victims of these
crimes can have their lives ruined as they try to rebuild such things
as their credit rating and bank accounts. This has led to
the design of computer systems that are more difficult to access and
the growth of a new industry dedicated to securing computer
systems.
The difficulty in obtaining complete digital security rests in the
fact that security is antithetical to the moral values of sharing and
openness that guided many of the early builders of information
technology. Steven Levy (1984) describes in his book,
“Hackers: Heroes of the Computer Revolution,” a kind of
“Hacker ethic,” that includes the idea that computers
should be freely accessible and decentralized in order to facilitate
“world improvement” and further social justice (Levy 1984;
see also Markoff 2005). So it seems that information technology embodies a strong dissonance between the competing values of security and openness, one rooted in the conflicting values of the people designing the technologies themselves.
This conflict in values has been debated by philosophers.
While many of the hackers interviewed by Levy argue that hacking is not
as dangerous as it seems and that it is mostly about gaining knowledge
of how systems work, Eugene Spafford counters that no computer break-in
is entirely harmless and that the harm precludes the possibility of
ethical hacking except in the most extreme cases (Spafford 2007).
Kenneth Himma largely agrees that hacking is unethical
but that politically motivated hacking or “Hacktivism” may
have some moral justification though he is hesitant to give his
complete endorsement of the practice due to the largely anonymous
nature of the speech entailed by the hacktivist protests (Himma
2007b). Mark Manion and Abby Goodrum agree that hacktivism could
be a special case of ethical hacking but warn that it should proceed in accordance with the moral norms set by the acts of civil disobedience
that marked the twentieth century or risk being classified as online
terrorism (Manion and Goodrum 2007).
A very similar value split plays out in other areas as well, particularly in debates over intellectual property rights and over pornography and censorship.
What information technology adds to these long standing moral debates
is the nearly effortless access to information that others might want
to control such as intellectual property, dangerous information and
pornography (Floridi 1999), along with the anonymity of both the user
and those providing access to the information (Nissenbaum 1999; Sullins
2010). For example, even though cases of bullying and stalking
occur regularly, the anonymous and remote actions of cyber-bullying and
cyberstalking make these behaviors much easier and the perpetrator less
likely to be caught. Arguably, this makes these unethical behaviors in cyberspace more likely, in that the design of cyberspace itself tacitly promotes unethical behavior (Adams 2002; Grodzinsky and Tavani 2002). Since the very design capabilities of information technology influence the lives of its users, the moral commitments of
the designers of these technologies may dictate the course society will
take and our commitments to certain moral values (Brey 2010; Bynum
2000; Ess 2009; Johnson 1985; Magnani 2007; Moor 1985; Spinello 2001;
Sullins 2010).
Assuming we are justified in granting access to some store of
information that we may be in control of, there is a duty to ensure
that that information is useful and accurate. If you use a number of different search engines to try to find some bit of information, each will return somewhat different results. This shows that not all searches are equal: it matters which search provider you use. All searches are filtered to some degree in order to
ensure that the information the search provider believes is most
important to the user is listed first. A great deal of trust is
placed in this filtering process and the actual formulas used by search
providers are closely held trade secrets. The hope is that these
decisions are morally justifiable, but it is difficult to know. If we are told a link will take us to one location on the web yet when we click it we are taken to some other place, we may feel that this is a breach of trust. This is often called “clickjacking”: malicious software can clickjack a browser by taking the user to a site other than the one expected, usually one rife with further links that will infect your machine, or sites that pay the clickjacker for bringing traffic to them (Hansen and Grossman, 2008). Again, the anonymity and ease of use that
information technology provides can facilitate deceitful
practices. Pettit (2009) suggests that this should cause us to
reevaluate the role that moral values such as trust and reliance play
in a world of information technology.
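The gap between where a link says it goes and where it actually leads can be illustrated with a short sketch. This Python fragment is a toy checker using only invented example addresses: it flags anchor tags whose visible text looks like one web address while the underlying href points to a different host, which is the basic shape of the deceptive links described above.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class DeceptiveLinkChecker(HTMLParser):
    """Flag links whose displayed text names one host but whose href leads elsewhere."""
    def __init__(self):
        super().__init__()
        self.current_href = None
        self.suspicious = []  # (shown text, real destination) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.current_href = dict(attrs).get("href")

    def handle_data(self, text):
        text = text.strip()
        # If the visible link text itself looks like a URL, compare the
        # host it displays with the host the href really points to.
        if self.current_href and text.startswith("http"):
            shown = urlparse(text).netloc
            actual = urlparse(self.current_href).netloc
            if shown and actual and shown != actual:
                self.suspicious.append((text, self.current_href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.current_href = None

# This link *displays* one address but silently leads to another.
page = '<a href="http://tracker.example/ads">http://mybank.example</a>'
checker = DeceptiveLinkChecker()
checker.feed(page)
print(checker.suspicious)  # [('http://mybank.example', 'http://tracker.example/ads')]
```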
Lastly in this section we must address the impact that access to information has on social justice. Information technology was
largely developed in the Western industrial societies during the
twentieth century. But even today the benefits of this technology have
not spread evenly around the world and to all socioeconomic
demographics. Certain societies and social classes have little to no access to the information easily available to those in wealthier and more developed nations, and some of those who have some access have
that access heavily censored by their own governments. This
situation has come to be called the “digital divide,” and
despite efforts to address this gap it may be growing wider.
While much of this gap is driven by economics,
Charles Ess notes that there is also a problem with the forces of a new
kind of cyber enabled colonialism and ethnocentrism that can limit the
desire of those outside the industrial West to participate in this new
“Global Metropolis” (Ess 2009). John Weckert
also notes that cultural differences in giving and taking offence play
a role in the design of more egalitarian information technologies
(Weckert 2007). Others argue that basic moral concerns like
privacy are weighed differently in Asian cultures (Hongladarom 2008; Lü 2005).
Moral Values in Organizing and Synthesizing Information
In addition to storing and communicating information, many information technologies automate the organizing of information as well as synthesizing or mechanically authoring or acting on new information. Norbert Wiener first developed a theory of automated information synthesis which he called Cybernetics (Wiener 1961 [1948]). Wiener realized that a machine could be designed to gather information about the world, derive logical conclusions about that information which would imply certain actions, and then implement those actions, all without any direct input from a human agent. Wiener quickly saw that if his vision of cybernetics was realized, there would be tremendous moral concerns raised by such machines, which he outlined in his book “The Human Use of Human Beings” (Wiener 1950). Wiener argued that, while this sort of technology could have drastic moral impacts, it was still possible to be proactive and guide the technology in ways that would increase the moral reasoning capabilities of both humans and machines (Bynum 2008).
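Wiener's loop of sensing, concluding, and acting can be pictured with a toy example. The sketch below is not Wiener's own formalism but a minimal thermostat-style feedback loop in his spirit, written in Python; the temperature readings, setpoint, and heating model are all invented for the illustration.

```python
def sense(room_temp):
    """Gather information about the world (here, read a temperature)."""
    return room_temp

def decide(temp, setpoint=20.0):
    """Derive a logical conclusion that implies an action."""
    return "heat_on" if temp < setpoint else "heat_off"

def act(action, room_temp):
    """Implement the action; this changes the very world the machine
    will sense on its next pass through the loop."""
    return room_temp + 0.5 if action == "heat_on" else room_temp - 0.1

room_temp = 17.0
for step in range(10):  # the closed loop runs with no human input
    action = decide(sense(room_temp))
    room_temp = act(action, room_temp)
    print(f"step {step}: {action}, temperature now {room_temp:.1f}")
```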
Machines make decisions that have moral impacts. Wendell
Wallach and Colin Allen tell an anecdote in their book
“Moral Machines” (2008). One of the authors left on a vacation and when he arrived overseas his credit card stopped working. Perplexed, he called the bank and learned that an automatic anti-theft program had decided that there was a high probability that the charges he was trying to make were from someone stealing his card, and that in order to protect him the machine had denied his credit card transactions. Here we have a situation where a piece of
information technology was making decisions about the probability of
nefarious activity happening that resulted in a small amount of harm to
the person that it was trying to help. Increasingly, machines
make important, life-changing financial decisions about people without much oversight from human agents. Whether or not you will be given a credit card or a mortgage loan, and the price you will have to pay for insurance, is very often determined by a machine. For instance, if you apply for a credit card, the machine will look at certain data points, like your salary, your credit record, the economic condition of the area you live in, etc., and then calculate the probability that you will default on your credit card. That probability either passes a threshold of acceptance or it does not, and this determines whether or not you are given the card. The machine can typically learn as well, making better judgments given the results of earlier decisions it has made. Machine learning and prediction are based on complex logic and mathematics. This complexity may result in slightly humorous examples of mistaken prediction, as told above, or the algorithm might interpret the data of someone's friends and acquaintances, his or her recent purchases, and other social data in a way that results in the mistaken classification of that person as a potential terrorist, thus altering that person's life in a powerfully negative way (Sullins 2010). It all depends on the design of the learning and prediction algorithm, something that is typically kept secret.
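A small sketch can make this kind of automated judgment concrete. The weights, threshold, and data points below are invented for illustration (real scoring models are, as just noted, closely guarded secrets), but the overall shape, a weighted score converted into a default probability and compared against a cutoff, is the standard structure of such systems.

```python
import math

# Hypothetical learned weights for three data points; negative weights
# mean higher values lower the predicted risk of default.
WEIGHTS = {"salary": -0.00002, "credit_score": -0.01, "region_risk": 1.5}
BIAS = 6.0
THRESHOLD = 0.2  # applications above 20% predicted default risk are denied

def default_probability(applicant):
    """Logistic model: a weighted sum of data points squashed into [0, 1]."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def decide(applicant):
    p = default_probability(applicant)
    return ("denied" if p > THRESHOLD else "approved"), p

applicant = {"salary": 55000, "credit_score": 640, "region_risk": 0.3}
decision, p = decide(applicant)
print(f"{decision} (predicted default risk {p:.1%})")  # denied (risk ~26%)
```

Everything morally significant here (which data points count, how each is weighted, where the threshold sits) lives in numbers the applicant never sees.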
The Moral Paradox of Information Technologies
Several of the issues raised above result from the moral paradox of information technologies. Many users want information to be quickly accessible and easy to use, and desire that it should come at as low a cost as possible, preferably free. But users also want important and sensitive information to be secure, stable and reliable. Maximizing our value of quick and low-cost access minimizes our ability to provide secure and high-quality information, and the reverse is true as well. Thus the designers of information technologies are constantly faced with making uncomfortable compromises. The digital culture pioneer Stewart Brand sums this up well in his famous quote:
In fall 1984, at the first Hackers' Conference, I said in one discussion session: “On the one hand information wants to be expensive, because it's so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other”.
Since these competing moral values are essentially impossible to
reconcile, they are likely to continue to be at the heart of moral
debates in the use and design of information technologies for the
foreseeable future.