In the section above, the focus was on the moral impacts of
information technologies on the individual user. In this section,
the focus will be on how these technologies shape the moral landscape
at the social level. At the turn of the century the term “Web
2.0” began to surface, referring to the new way that
the world wide web was being used as a medium for information sharing
and collaboration, as well as a change in the mindset of web designers
to include more interoperability and user-centered experiences on their
websites. This term has also become associated with “social
media” and “social networking.” While the original
design of the web by its creator Tim Berners-Lee was always one that
included notions of meeting others and collaboration, users were
finally ready to fully exploit those capabilities by 2004 when the
first Web 2.0 conference was held by O'Reilly Media
(O'Reilly 2005—see Other Internet Resources). This change has meant that a growing
number of people have begun to spend significant portions of their
lives online with other users, experiencing an unprecedented kind
of lifestyle. Social networking is now an important part of many
people's lives: massive numbers of people congregate on
sites like Facebook and interact with friends old and new, real and
virtual. The Internet also offers the immersive experience of
interacting with others in virtual worlds, where environments are
constructed from information. Just now emerging onto the scene
are technologies that will allow us to merge the real and the
virtual. This new “augmented reality” is facilitated
by the fact that many people now carry GPS-enabled smartphones and
other portable computers with them, upon which they can run applications
that let them interact with their surroundings and their computers at
the same time, perhaps looking at an item through the camera in their
device while the “app” calls up information about that
entity and displays it in a bubble above the item.
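As a rough illustration of the flow such an application follows, consider the sketch below. It is purely schematic: every function in it is a hypothetical stand-in for the device's sensors, an image-recognition step, and an information service, not a real AR library call.

```python
# A minimal sketch of the augmented-reality flow described above.
# All function names here are hypothetical stand-ins, not a real AR API.

def get_gps_location():
    """Stand-in for the phone's GPS reading."""
    return (38.3, -122.7)  # latitude, longitude (dummy values)

def identify_object(camera_frame, location):
    """Stand-in for image recognition: maps a camera frame plus a
    location to a label for the item being viewed."""
    return "storefront"

def lookup_info(label, location):
    """Stand-in for a query to some information service."""
    return f"Information about the {label} near {location}"

def render_bubble(text):
    """Stand-in for drawing an overlay bubble above the item on screen."""
    print(f"[bubble] {text}")

# One pass of the loop: camera frame in, annotated view out.
frame = object()  # placeholder for a real camera frame
location = get_gps_location()
label = identify_object(frame, location)
render_bubble(lookup_info(label, location))
```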
Each of these technologies comes with its own suite of new moral
challenges, some of which will be discussed below.
Social Media and Networking
Social networking is a term given to sites and applications that facilitate online social interactions that typically focus on sharing information with other users referred to as “friends.” The most famous of these sites today is Facebook. There are a number of moral values that these sites call into question. Shannon Vallor (2011) has reflected on how sites like Facebook change or even challenge our notion of friendship. Her analysis is based on the Aristotelian theory of friendship. Aristotle argued that humans realize a good and true life through virtuous friendships. Vallor notes that four key dimensions of Aristotle's ‘virtuous friendship,’ namely reciprocity, empathy, self-knowledge and the shared life, are found in online social media in ways that can actually strengthen friendship (Vallor 2011). Yet she argues that social media is not up to the task of facilitating what Aristotle calls ‘the shared life,’ and thus these media cannot fully support the Aristotelian notion of complete and virtuous friendship by themselves (Vallor 2011). Vallor offers a similar analysis of other Aristotelian virtues, such as patience, honesty and empathy, as they are fostered in online media (Vallor 2010). Johnny Hartz Søraker (2012) argues for a nuanced understanding of online friendship rather than a rush to normative judgment on the virtues of virtual friends.
There are, of course, privacy issues that abound in the use of
social media. James Parrish, following Mason (1986), recommends
four policies that a user of social media should follow to ensure
proper ethical concern for others' privacy:
- When sharing information on SNS (social network sites), it is not only necessary to consider the privacy of one's personal information, but the privacy of the information of others who may be tied to the information being shared.
- When sharing information on SNS, it is the responsibility of the one desiring to share information to verify the accuracy of the information before sharing it.
- A user of SNS should not post information about themselves that they feel they may want to retract at some future date. Furthermore, users of SNS should not post information that is the product of the mind of another individual unless they are given consent by that individual. In both cases, once the information is shared, it may be impossible to retract.
- It is the responsibility of the SNS user to determine the authenticity of a person or program before allowing the person or program access to the shared information. (Parrish 2010)
These systems are not typically designed to protect individual
privacy; indeed, since these services are typically free, there is a strong
economic drive for the service providers to harvest at least some
information about their users' activities on the site in order to
sell that information to advertisers for directed marketing.
Online Games and Worlds
The first moral impact one encounters when contemplating online games is the tendency for these games to portray violence. There are many news stories that claim a cause and effect relationship between violence in computer games and real violence. The claim that violence in video games has a causal connection to actual violence has been strongly critiqued by the social scientist Christopher J. Ferguson (Ferguson 2007). Mark Coeckelbergh argues that since this relationship is tenuous at best, the real issue at hand is the effect these games have on one's moral character (Coeckelbergh 2007). But Coeckelbergh goes on to claim that computer games can be designed to facilitate virtues like empathetic and cosmopolitan moral development, so he is not arguing against all games, just those where the violence inhibits moral growth (Coeckelbergh 2007). Marcus Schulzke (2010) holds a different opinion, suggesting that the violence in computer games is morally defensible. Schulzke's main claim is that actions in a virtual world are very different from actions in the real world: though a player may “kill” another player in a virtual world, that player is instantly back in the game and the two will almost certainly remain friends in the real world. Virtual violence is thus very different from real violence, a distinction gamers are comfortable with (Schulzke 2010). While virtual violence may seem palatable to some, Morgan Luck (2009) seeks a moral theory that might be able to allow the acceptance of virtual murder but that will not extend to other immoral acts such as pedophilia. Christopher Bartel (2011) is less worried about the distinction Luck attempts to draw; Bartel argues that virtual pedophilia is real child pornography, which is already morally reprehensible and illegal across the globe.
While violence is easy to see in online games, there is a much more
substantial moral issue at play, namely the politics of virtual
worlds. Peter Ludlow and Mark Wallace describe the initial moves to
online political culture in their book, The Second Life Herald:
The Virtual Tabloid that Witnessed the Dawn of the Metaverse
(2007). Ludlow and Wallace chronicle how the players in massive online
worlds have begun to form groups and guilds that often confound the
designers of the game and are at times in conflict with those that
make the game. Their contention is that designers rarely realize that
they are creating a space where people intend to live large portions
of their lives and engage in real economic and social activity, and
thus the designers have moral duties somewhat equivalent to those of
someone who writes a political constitution (Ludlow and Wallace
2007). According to Purcell (2008), there is little commitment to
democracy or egalitarianism in online games and this needs to change
if more and more of us are going to spend time living in these virtual
worlds.
The Lure of the Virtual Game Worlds
A persistent concern about the use of computers and especially computer games is that this could result in anti-social behavior and isolation. Yet studies might not support these hypotheses (Gibba et al. 1983). With the advent of massively multiplayer games as well as video games designed for families, the social isolation hypothesis is even harder to believe. These games do, however, raise gender equality issues. James Ivory used online reviews of games to complete a study showing that male characters outnumber female characters in games and that those female images that do appear in games tend to be overly sexualized (Ivory 2006). Soukup (2007) suggests that gameplay in these virtual worlds is most often oriented to masculine styles of play, thus potentially alienating women players. And those women that do participate in game play at the highest level play roles in gaming culture that are very different from those of the largely heterosexual white male gamers, often leveraging their sexuality to gain acceptance (Taylor et al. 2009). Additionally, Joan M. McMahon and Ronnie Cohen have studied how gender plays a role in the making of ethical decisions in the virtual online world, with women more likely to judge a questionable act as unethical than men (2009). Marcus Johansson suggests that we may be able to mitigate virtual immorality by punishing virtual crimes with virtual penalties in order to foster more ethical virtual communities (Johansson 2009).
The media has raised moral concerns about the way that childhood has
been altered by the use of information technology (see for example
Jones 2011). Many applications are now designed
specifically for toddlers encouraging them to interact with computers
from as early an age as possible. Since children may be
susceptible to media manipulation such as advertising, we have to ask
whether this practice is morally acceptable. Depending on the
particular application being used, it may encourage solitary play that
could lead to isolation, but other applications are more engaging, with
both the parents and the children playing together (Siraj-Blatchford 2010). It
should also be noted that pediatricians have advised that there are no
known benefits to early media use amongst young children but that there
are potential risks (Christakis 2009). Studies have shown that from
1998 to 2008, sedentary lifestyles amongst children in England have
resulted in the first measured decline in strength since World War Two
(Cohen et al. 2011). It is not clear if this decline is directly
attributable to information technology use but it may be a contributing
factor.
Malware, Spyware and Informational Warfare
Malware and computer virus threats are growing at an astonishing rate. Security industry professionals report that while certain types of malware attacks such as spam are falling out of fashion, newer types of attacks focused on mobile computing devices and the hacking of cloud computing infrastructure are on the rise, outstripping any small relief seen in the slowing down of older forms of attack (Cisco Systems 2011; Kaspersky Lab 2011). What is clear is that this type of activity will be with us for the foreseeable future. In addition to the largely criminal activity of malware production, we must also consider the related but more morally ambiguous activities of hacking, hacktivism, commercial spyware, and informational warfare. Each of these topics has its own suite of subtle moral ambiguities. We will now explore some of them here.
While there may be wide agreement that the conscious spreading of
malware is of questionable morality, there is an interesting question
as to the morality of malware protection and anti-virus software.
With the rise in malicious software there has been a corresponding
growth in the security industry which is now a multi-billion dollar
market. Even with all the money spent on security software there seems
to be no slowdown in virus production, in fact quite the opposite has
occurred. This raises an interesting business ethics concern: what
value are customers receiving for their money from the security
industry? The massive proliferation of malware has been shown to be
largely beyond the ability of anti-virus software to completely
mitigate. There is an important lag in the time between when a new
piece of malware is detected by the security community and the
eventual release of the security patch and malware removal tools.
The anti-virus modus operandi of receiving a sample, analyzing the sample, adding detection for the sample, performing quality assurance, creating an update, and finally sending the update to their users leaves a huge window of opportunity for the adversary … even assuming that anti-virus users update regularly. (Aycock and Sullins 2010)

This lag is constantly exploited by malware producers, and in this model there is an ever-present security hole that is impossible to fill. Thus it is important that security professionals do not overstate their ability to protect systems; by the time a new malicious program is discovered and patched, it has already done significant damage and there is currently no way to stop this (Aycock and Sullins 2010).
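To make the size of this window concrete, the following sketch simply adds up the stages Aycock and Sullins list. The stage durations here are invented placeholders, not measurements from any real vendor; the point is only that the stages sum to a window during which the malware runs unopposed.

```python
# An illustrative calculation of the "window of opportunity" described
# above. The stage durations are made-up placeholders for illustration,
# not figures from any real anti-virus vendor.

pipeline_hours = {
    "receive sample": 4,
    "analyze sample": 12,
    "add detection": 8,
    "quality assurance": 24,
    "create update": 4,
    "deliver update to users": 12,
}

window = sum(pipeline_hours.values())
for stage, hours in pipeline_hours.items():
    print(f"  {stage}: {hours} h")
print(f"Exposure window: {window} hours ({window / 24:.1f} days)")
# Even assuming users update promptly, the malware circulates
# undetected for the entire window.
```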
In the past most malware creation was the work of hobbyists and
amateurs, but this has changed and now much of this activity is
criminal in nature (Cisco Systems 2011; Kaspersky Lab 2011). Aycock
and Sullins (2010) argue that relying on a strong defense is not
enough; the situation requires a counteroffensive reply as well, and
they propose an ethically motivated malware research and creation
program. This is not an entirely new idea: it was originally
suggested by the computer scientist George Ledin in his editorial for
the Communications of the ACM, “Not Teaching Viruses
and Worms is Harmful” (2005). This idea does run counter to the
majority opinion regarding the ethics of learning and deploying
malware. Most computer scientists and researchers in information
ethics agree that all malware is unethical (Edgar 2003; Himma 2007a;
Neumann 2004; Spafford 1992; Spinello 2001). According to Aycock and
Sullins, these worries can be mitigated by open research into
understanding how malware is created in order to better fight this
threat (2010).
When malware and spyware is created by state actors, we enter the
world of informational warfare and a new set of moral concerns. Every
developed country in the world experiences daily cyber-attacks, with
the major target being the United States, which experiences a purported
1.8 billion attacks a month (Lovely 2010). The majority of these
attacks seem to be just probing for weaknesses, but they can devastate
a country's internet, as did the cyber-attacks on Estonia in 2007 and
those in Georgia in 2008. While the Estonian and
Georgian attacks were largely designed to obfuscate communication
within the target countries, more recently informational warfare has
been used to facilitate remote sabotage. The now famous Stuxnet virus
used to attack Iranian nuclear centrifuges is perhaps the first
example of weaponized software capable of remotely damaging
physical facilities (Cisco Systems 2011). The coming decade will
likely see many more cyber weapons deployed by state actors along
well-known political fault lines such as those between
Israel-America-Western Europe vs. Iran, and America-Western Europe vs.
China (Kaspersky Lab 2011). The moral challenge here is to determine
when these attacks are considered a severe enough challenge to the
sovereignty of a nation to justify military reactions and to react in
a justified and ethical manner to them (Arquilla 2010; Denning 2008;
Kaspersky Lab 2011).
The primary moral challenge of informational warfare is determining
how to use weaponized information technologies in a way that honors
our commitments to just and legal warfare. Since warfare is already a
morally questionable endeavor, it would be preferable if information
technologies could be leveraged to lessen violent combat. For
instance, one might argue that the Stuxnet virus did damage that in
generations before might have been accomplished by an air raid
incurring significant civilian casualties—and that so far there
have been no reported human casualties resulting from Stuxnet. The
malware known as “Flame” seems to be designed to aid in
espionage and one might argue that more accurate information given to
decision makers during wartime should help them make better decisions
on the battlefield. On the other hand, these new informational
warfare capabilities might allow states to engage in continual low
level conflict eschewing efforts for peacemaking which might require
political compromise.
Future Concerns
As was mentioned in the introduction above, information technologies are in a constant state of change and innovation. The internet technologies that have brought about so much social change were scarcely imaginable just decades before they appeared. Even though we may not be able to foresee all possible future information technologies, it is important to try to imagine the changes we are likely to see in emerging technologies. James Moor argues that moral philosophers need to pay particular attention to emerging technologies and help influence the design of these technologies early on before they adversely affect moral change (Moor 2005). Some potential technological concerns now follow.
Acceleration of Change
Information technology has exhibited an interesting growth pattern that has been observed since the founding of the industry. Intel engineer Gordon E. Moore noticed that the number of components that could be installed on an integrated circuit doubled every year at minimal economic cost, and he thought it might continue that way for another decade or so from the time he noticed it in 1965 (Moore 1965). History has shown his predictions were rather conservative. This doubling of speed and capabilities, along with a halving of cost, has continued roughly every 18 months since 1965 and shows little evidence of stopping; a rough numerical projection of this pattern is sketched below. And this phenomenon is not limited to computer chips but is also present in many other information technologies. The potential power of this accelerating change has captured the imagination of the noted inventor Ray Kurzweil. He has famously predicted that if this doubling of capabilities continues, and more and more technologies become information technologies, then there will come a point in time where the change from one generation of information technology to the next will become so massive that it will change everything about what it means to be human. At this moment, which he calls “the Singularity,” our technology will allow us to become a new posthuman species (Kurzweil 2006). If this is correct, there could be no more profound change to our moral values. There has been some support for this thesis from the technology community, with institutes such as the Singularity Institute, the Acceleration Studies Foundation, the Future of Humanity Institute, and H+. Reaction to this hypothesis from philosophy has been mixed but largely critical. For example, Mary Midgley (1992) argues that the belief that science and technology will bring us immortality and bodily transcendence is based on pseudoscientific beliefs and a deep fear of death. In a similar vein, Sullins (2000) argues that there is a quasi-religious aspect to the acceptance of transhumanism, and that the acceptance of the transhumanist hypothesis influences the values embedded in computer technologies, which can be dismissive or hostile to the human body. While many ethical systems place a primary moral value on preserving and protecting the natural, transhumanists do not see any value in defining what is natural and what is not, and they consider arguments to preserve some perceived natural state of the human body as an unthinking obstacle to progress. Not all philosophers are critical of transhumanism; as an example, Nick Bostrom (2008) of the Future of Humanity Institute at Oxford University argues that, putting aside the question of feasibility, we must conclude that there are forms of posthumanism that would lead to long and worthwhile lives and that it would be overall a very good thing for humans to become posthuman if it is at all possible.
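As a rough illustration of the growth pattern Moore described, the sketch below projects component counts under the conventional assumption of a doubling every 18 months. The 1965 starting figure of 64 components is only an order-of-magnitude placeholder, and the projection is illustrative rather than a claim about actual chip history.

```python
# A toy projection of Moore's observed growth pattern: capability
# doubling every 18 months. The starting count of 64 components is a
# rough placeholder for mid-1960s chips, used only for illustration.

def projected_count(start_count, start_year, year, doubling_months=18):
    """Components per chip, assuming a doubling every `doubling_months`."""
    months = (year - start_year) * 12
    return start_count * 2 ** (months / doubling_months)

for year in (1965, 1975, 1985, 2005):
    print(year, f"{projected_count(64, 1965, year):,.0f}")
# The count climbs from tens of components to billions within decades,
# which is the acceleration Kurzweil extrapolates from.
```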
Artificial Intelligence and Artificial Life
Artificial Intelligence (AI) refers to the many longstanding research projects directed at building information technologies that exhibit some or all aspects of human-level intelligence and problem solving. Artificial Life (ALife) is a project that is not as old as AI and is focused on developing information technologies and/or synthetic biological technologies that exhibit life functions typically found only in biological entities. A more complete description of logic and AI can be found in the entry on logic and artificial intelligence. ALife essentially sees biology as a kind of naturally occurring information technology that may be reverse-engineered and synthesized in other kinds of technologies. Both AI and ALife are vast research projects that defy simple explanation. Instead, the focus here is on the moral values that these technologies impact and the way some of these technologies are programmed to affect emotion and moral concern.
Artificial Intelligence
Alan Turing is credited with defining the research project that would come to be known as artificial intelligence in his seminal 1950 paper “Computing Machinery and Intelligence.” He described the “imitation game,” in which a computer attempts to convince a human interlocutor that it is not a computer but another human (Turing 1948, 1950). In 1950, he made the now famous claim that
I believe that in about fifty years' time … one will be able to speak of machines thinking without expecting to be contradicted.
A description of the test and its implications for philosophy outside
of moral values can be found in the entry on the Turing test.
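The structure of the imitation game itself is simple enough to sketch directly. The toy harness below pairs a human respondent with a trivial canned-answer program and asks a judge to tell them apart; the “machine” here would fool no one, so this is an illustration of the protocol only, not of a serious entrant.

```python
# A toy imitation game: a judge questions two unseen respondents, one
# human and one machine, then guesses which is which. The "machine" is
# a deliberately trivial canned-answer bot.

import random

def machine_respondent(question):
    # A trivial canned reply; a serious entrant would need far more.
    return "That is an interesting question. What do you think?"

def human_respondent(question):
    return input(f"  [human, answer '{question}'] > ")

# Randomly assign the machine and the human to the labels A and B,
# so the judge cannot tell from the label alone.
funcs = [machine_respondent, human_respondent]
random.shuffle(funcs)
respondents = dict(zip("AB", funcs))

for _ in range(3):
    question = input("judge, ask a question> ")
    for label in "AB":
        print(f"{label}: {respondents[label](question)}")

guess = input("judge, which one is the machine (A or B)? ").strip().upper()
print("correct" if respondents.get(guess) is machine_respondent else "incorrect")
```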
Turing's prediction may have been overly ambitious, and in fact
some have argued that we are nowhere near the completion of
Turing's dream. For example, Luciano Floridi (2011a) argues
that while AI has been very successful as a means of augmenting our own
intelligence, as a branch of cognitive science interested in
intelligence production it has been a dismal disappointment.
For argument's sake, assume Turing is correct even if he is
off in his estimation of when AI will succeed in creating a machine
that can converse with you. Yale professor David Gelernter
worries that there would be certain uncomfortable moral issues
raised. “You would have no grounds for treating it as a
being toward which you have moral duties rather than as a tool to be
used as you like” (Gelernter 2007). Gelernter
suggests that consciousness is a requirement for moral agency and that
we may treat anything without it in any way that we want without moral
regard. Sullins (2006) counters this argument by noting that
consciousness is not required for moral agency. For
instance, nonhuman animals and the other living and nonliving
things in our environment must be accorded certain moral rights, and
indeed, any Turing-capable AI would also have moral duties as well as
rights, regardless of its status as a conscious being (Sullins
2006).
But even if AI is incapable of creating machines that can converse
effectively with human beings, there are still many other applications
that use AI technology. Many of the information technologies we
discussed above, such as search, computer games, data mining, malware
filtering, and robotics, all utilize AI programming techniques.
Thus it may be premature to dismiss progress in the realm of AI.
Artificial Life
Artificial Life (ALife) is an outgrowth of AI and refers to the use of information technology to simulate or synthesize life functions. The problem of defining life has been an interest of philosophy since its founding. See the entry on life for a look at the concept of life and its philosophical ramifications. If scientists and technologists were to succeed in discovering the necessary and sufficient conditions for life and then successfully synthesize it in a machine or through synthetic biology, then we would be treading on territory that has significant moral impact. Mark Bedau has been tracing the philosophical implications of ALife for some time now and argues that there are two distinct forms of ALife, each of which would have different moral effects if and when we succeed in realizing these separate research agendas (Bedau 2004; Bedau and Parke 2009). One form of ALife is completely computational and is in fact the earliest form of ALife studied. This form of ALife is inspired by the work of the mathematician John von Neumann on self-replicating cellular automata, which von Neumann believed would lead to a computational understanding of biology and the life sciences (1966). The computer scientist Christopher Langton simplified von Neumann's model greatly and produced a simple cellular automaton called “Loops” in the early eighties, and he helped get the field off the ground by organizing the first few conferences on Artificial Life (1989). Artificial Life programs are quite different from AI programs. Where AI is intent on creating or enhancing intelligence, ALife is content with very simple-minded programs that display life functions rather than intelligence (a minimal example of such a program is sketched below). The primary moral concern here is that these programs are designed to self-reproduce and in that way resemble computer viruses; indeed, successful ALife programs could become vectors for malware. The second form of ALife is much more morally charged. This form of ALife is based on manipulating actual biological and biochemical processes in such a way as to produce novel life forms not seen in nature.
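Before turning to this second, biological form, it is worth seeing how little code the computational form requires. Langton's Loops are too involved to reproduce here, but the sketch below runs a one-dimensional cellular automaton in the same family (the well-known Rule 90, chosen purely as a familiar example): a simple local update rule that generates complex global structure of the sort computational ALife studies.

```python
# A one-dimensional elementary cellular automaton (Rule 90). This is
# not Langton's Loops, only a minimal illustration of the idea behind
# computational ALife: simple local rules, complex global behavior.

RULE = 90          # each new cell state is a bit of this number
WIDTH, STEPS = 63, 32

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # start from a single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    # The (left, center, right) neighborhood indexes a bit of RULE.
    cells = [
        (RULE >> (4 * cells[(i - 1) % WIDTH]
                  + 2 * cells[i]
                  + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```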
Scientists at the J. Craig Venter Institute were able to synthesize
an artificial bacterium called JCVI-syn1.0 in May of 2010.
While the media paid attention to this breakthrough, they tended to
focus on the potential ethical and social impacts of the creation of
artificial bacteria. Craig Venter himself launched a public
relations campaign trying to steer the conversation about issues
relating to creating life. This first episode in the synthesis of
life gives us a taste of the excitement and controversy that will be
generated when more viable and robust artificial protocells are
synthesized. The ethical concerns raised by Wet ALife, as this
kind of research is called, are more properly the jurisdiction of
bioethics.
But it is of some concern to us here in that Wet ALife is part of
the process of turning theories from the life sciences into information
technologies. This will tend to blur the boundaries between
bioethics and information ethics. Just as software ALife might
lead to dangerous malware, so too might Wet ALife lead to dangerous
bacteria or other disease agents. Critics suggest that there are
strong moral arguments against pursuing this technology and that we
should apply the precautionary principle here which states that if
there is any chance of a technology causing catastrophic harm, and
there is no scientific consensus suggesting that the harm will not
occur, then those who wish to develop that technology or pursue that
research must prove it to be harmless first (see Epstein 1980).
Mark Bedau and Mark Triant argue against too strong an adherence
to the precautionary principle by suggesting that instead we should opt
for moral courage in pursuing such an important step in human
understanding of life (2009). They appeal to the Aristotelian
notion of courage, not a headlong and foolhardy rush into the unknown,
but a resolute and careful step forward into the possibilities offered
by this research.
Robotics and Moral Values
Information technologies have not been content to remain confined to virtual worlds and software implementations. These technologies are also interacting directly with us through robotics applications. Robotics is an emerging technology but it has already produced a number of applications that have important moral implications. Technologies such as military robotics, medical robotics, personal robotics and the world of sex robots are just some of the already existent uses of robotics that impact on and express our moral commitments (see Capurro and Nagenborg 2009; Lin et al. 2011).
There have already been a number of valuable contributions to the
growing field of robotic ethics (roboethics). For example, in Wallach
and Allen's book Moral Machines: Teaching Robots Right from
Wrong (2010), the authors present ideas for the design and
programming of machines that can functionally reason on moral
questions as well as examples from the field of robotics where
engineers are trying to create machines that can behave in a morally
defensible way. The introduction of semi- and fully autonomous machines
into public life will not be simple. Towards this end, Wallach (2011)
has also contributed to the discussion on the role of philosophy in
helping to design public policy on the use and regulation of
robotics.
Military robotics has proven to be one of the most ethically
charged robotics applications. Today these machines are largely
remotely operated (telerobots) or semi-autonomous, but over time these
machines are likely to become more and more autonomous due to the
necessities of modern warfare (Singer 2009). In the first decade of war
in the 21st century robotic weaponry has been involved in
numerous killings of both soldiers and noncombatants, and this fact
alone is of deep moral concern. Gerhard Dabringer has conducted
numerous interviews with ethicists and technologists regarding the
implications of automated warfare (Dabringer 2010). Many
ethicists are cautious in their acceptance of automated warfare with
the provision that the technology is used to enhance just warfare
practices (see Lin et al. 2008; Sullins 2009b) but others have been
highly skeptical of the prospects of a just autonomous war due to
issues like the risk to civilians (Asaro 2008; Sharkey 2011).