Ethics of Creating Secure Anonymous Communities

March 2015 · 19 minute read

Written for CSC 300 – Professional Responsibilities, May 2004.

Abstract

Advances in communication technology have changed the way our world

operates. This paper discusses the ethical implications of developing software that

ensures privacy and anonymity for the general public. The balance between freedom

and security is a delicate one. Creating secure anonymous communities can undermine

law enforcement’s ability to secure us from crime and terrorism. Not developing such

technology could result in losing our rights and freedoms. This paper argues that privacy and anonymity are in the public interest, and that creating such technology is therefore ethical.

Introduction

Advances in communication technology have changed the way our world

operates. Governments have the ability to quickly share information, compile large

databases on their citizenry and monitor foreign and domestic electronic communication.

Industry can use these technologies for managing global enterprises. The public can

use these technologies to share information across huge distances with unprecedented

scale, time and ease. Criminals can also use these technologies to plan or execute illegal

activities. The very same technology that allows law enforcement to fight crime can

assist terrorists in attacking us. The temptation for a responsible government entrusted

with ensuring the security of its citizens is to restrict development and implementation

of such technology so that law enforcement can stay ahead. Privacy advocates and those

fearful of a large Orwellian government argue that restricting citizens’ rights to protect

themselves will result in losing their freedoms.

A Brief History

Before the 1970s, encryption was difficult to use in practice. Though much was known about how to create encryption schemes, the difficulties of key management made widespread

use, even in light of the growing use of computer terminals, unlikely. In 1975, Whitfield

Diffie and Martin Hellman invented the concept of public key encryption. [Levy, 1994]

Public key encryption solved the key management issue by splitting the key used for

encryption into two parts. Each user would have two keys, a public key and a private

key. Anything encrypted with the public key could be decrypted with the private key and

vice versa. This allowed protocols to develop where communication could be established

between two parties without the need to trade secret keys beforehand. Suddenly,

widespread use of encryption was possible.
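
To make the split-key idea concrete, here is a minimal sketch of the public/private key relationship using textbook RSA in Python. The primes are deliberately tiny and there is no padding, so this illustrates the arithmetic only; it is nothing like a secure implementation.

    # Textbook RSA with toy numbers: illustrates the key relationship only.
    p, q = 61, 53                 # small primes (wildly insecure; real keys are huge)
    n = p * q                     # the shared modulus
    phi = (p - 1) * (q - 1)
    e = 17                        # public exponent: (n, e) is the public key
    d = pow(e, -1, phi)           # private exponent: (n, d) is the private key

    message = 42                  # a message encoded as a number smaller than n
    ciphertext = pow(message, e, n)          # anyone may encrypt with the public key
    assert pow(ciphertext, d, n) == message  # only the private key decrypts

    # The relationship also works in reverse: a value produced with the
    # private key can be checked by anyone holding the public key, which
    # is the basis of digital signatures.
    signature = pow(message, d, n)
    assert pow(signature, e, n) == message

Because neither party ever has to transmit a secret key, two strangers can establish confidential communication, which is what made widespread encryption suddenly practical.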

In June of 1991, Phil Zimmerman released a program called Pretty Good Privacy

(PGP) [Bulkeley, 1994]. Zimmerman had become an electronic-freedom activist in 1990

when the FBI and NSA were pushing for tough regulation on encryption. According to

Zimmerman, “I did it to inoculate the body politic.” This free software allowed almost

anyone to use unbreakable encryption in their everyday electronic communication.

Introduced onto the Internet, thanks to a few of his friends, PGP spread across the world

ensuring the masses access to high-grade encryption. The federal government was of

course not pleased and tried for several years to prosecute Zimmerman under munitions export laws, but was unsuccessful.

In 2002, Michael Freedman and Robert Morris (of 1988 internet worm fame)

published a paper on Tarzan [Freedman, 2002]. Tarzan is a peer-to-peer anonymizing

network. It works as a layer allowing almost any IP-enabled application to work

anonymously, meaning it is almost impossible to track who is communicating with

whom. By using encryption and hop-by-hop routing, Tarzan allows users to route their

traffic through a random set of members in the Tarzan network, making the final

destination and real source difficult to determine. A node of this anonymizing network

receives traffic from another node and forwards it on to the next hop. Encryption hides the

actual contents of the traffic. A node also has no way of determining if the node it

received from was the actual source, or if the node it is delivering to is the actual

destination. Only by controlling all members of the anonymizing chain for a session of

traffic could a 3rd party trace a connection back to its real source and destination.

Such an application, if used extensively, would make network wiretaps useless. Without

a way to identify traffic with a real IP address there is no way to identify the actual user

of that IP address.
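
A rough sketch of the hop-by-hop idea in Python, with XOR standing in for real encryption and all names hypothetical: the sender wraps the payload in one layer per relay, each relay strips only its own layer, and no single relay sees both endpoints.

    # Layered, hop-by-hop relaying in the style of Tarzan. XOR with a
    # per-hop key stands in for real encryption; names are hypothetical.

    def xor_crypt(data: bytes, key: bytes) -> bytes:
        # Toy symmetric "cipher": XOR with a repeating key (self-inverting).
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    # A random path of relays, each sharing a key with the sender.
    path = [("relay_a", b"key-a"), ("relay_b", b"key-b"), ("relay_c", b"key-c")]

    payload = b"to:destination|hello"

    # The sender adds one layer per relay, innermost layer for the last hop.
    packet = payload
    for _, key in reversed(path):
        packet = xor_crypt(packet, key)

    # Each relay removes exactly one layer and forwards the remainder; it
    # cannot tell whether its neighbors are endpoints or just more relays.
    for _, key in path:
        packet = xor_crypt(packet, key)

    assert packet == payload  # only the final destination recovers the payload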

In 2003, Justin Frankel released a network application called Waste [Kushner,

2004]. Frankel, whose previous contributions include Winamp (one of the first popular

mp3 music players) and Gnutella (popular peer-to-peer file trading software, successor to

Napster), has been dedicated to releasing software that empowers the masses. When AOL

bought him out for $100 million, he continued development of Winamp under their

watch. Tension grew between Frankel and AOL as he released unauthorized products

such as Gnutella and Waste. AOL quickly shut him down. Frankel sees development of

his pet projects as good for AOL. “I mean, I’m a stockholder of the company. I want

them innovating. I want them doing things that are good for the world and being socially

conscious,” he says. Waste creates small, secure private networks that allow groups to

communicate and share files without fear of being eavesdropped on or tracked. By

trading public keys, a user is admitted into the small network and can trade files or messages

with other users on the network. Only by compromising one of the members of the

network can a 3rd party determine the actual IP address of the source and destination.
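
The admission model can be sketched as a whitelist of key fingerprints (a hypothetical simplification, not Waste’s actual protocol): a peer is accepted only if its public key hashes to a fingerprint the group traded out of band beforehand.

    import hashlib

    # Hypothetical sketch of Waste-style admission by traded public keys.
    def fingerprint(public_key: bytes) -> str:
        # A short digest of the key, easy to compare out of band.
        return hashlib.sha256(public_key).hexdigest()[:16]

    # Fingerprints of keys the group exchanged beforehand.
    trusted = {fingerprint(b"alice-public-key"),
               fingerprint(b"bob-public-key")}

    def admit(peer_public_key: bytes) -> bool:
        # Accept a connection only from a key traded beforehand.
        return fingerprint(peer_public_key) in trusted

    assert admit(b"alice-public-key")
    assert not admit(b"eve-public-key")    # strangers stay out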

What is a Secure Anonymous Community?

These examples of current development in security and privacy are steps toward truly secure, anonymous communities. The

technology is there, but the software has not yet been developed. Currently, large online

communities such as Gnutella, a peer-to-peer file sharing network, are very popular and

their use is widespread. These systems are currently very insecure and vulnerable to

monitoring. This is how the RIAA and MPAA are able to pursue people who choose to distribute copyrighted material. Even though these networks give the impression of

anonymity, it is still possible to determine the network address of a trader and associate

that with an actual user by requiring ISPs to give up that information. In the near future,

these peer-to-peer networks may be combined with anonymous networks such as Tarzan

and secure communities such as Waste so that both the identity and the contents of the

traffic are indistinguishable to outsiders.

File trading is not the only use for a secure anonymous community. Secure

newsgroup services could be created which do not require central servers. The identity

of contributors of ideas and opinions could be kept anonymous, or merely verified using authentication schemes. Anonymous email remailers were popular for a while

but suffer from the weakness of being a fixed target for enforcement where logs can be

subpoenaed to discover the identity of a user [EFF, 1996]. If a peer-to-peer anonymous

system were created, there would be no single point of failure or target for attack.

Additionally, with the use of anonymous digital cash [Chaum, 1992], entire underground

economies could flourish.
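
The blind signature behind such digital cash [Chaum, 1992] can be sketched with the same textbook RSA as before: the bank signs a blinded coin, so the unblinded signature verifies under the bank’s public key yet cannot be linked to the withdrawal. Toy numbers, no padding, illustration only.

    import math

    # Toy RSA blind signature, the mechanism behind Chaum-style digital cash.
    p, q = 61, 53
    n, phi = p * q, (p - 1) * (q - 1)
    e = 17                        # bank's public key (n, e)
    d = pow(e, -1, phi)           # bank's private signing key

    coin = 99                     # the coin's serial number, a number below n
    r = 19                        # user's secret blinding factor
    assert math.gcd(r, n) == 1

    blinded = (coin * pow(r, e, n)) % n    # user blinds the coin
    blind_sig = pow(blinded, d, n)         # bank signs without seeing `coin`
    sig = (blind_sig * pow(r, -1, n)) % n  # user strips the blinding factor

    # The signature verifies under the bank's public key, yet the bank never
    # saw `coin`, so spending it cannot be linked back to the withdrawal.
    assert pow(sig, e, n) == coin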

A secure anonymous community allows any information to be transmitted, traded

or received with no accountability or even any proof that a transaction occurred.

To Create or Not To Create?

Balancing the development of these technologies is a matter of both ethics and

politics. As technology professionals approaching this issue, we are better served by an ethical debate than a political one. The attitude our governing bodies have agreed upon for the time being

is not the end of this debate. The absence of laws restricting this technology does not settle the ethical question, and laws restricting such development must themselves be examined ethically.

Is it ethical to create technology that may undermine our government’s ability to protect

us? More specifically, is it ethical to create technology enabling the general public to

create large secure anonymous communities?

What is Ethical?

When we ask what is or isn’t ethical, we are trying to establish whether an action

is good or bad based on some value system. From a utilitarian point of view, the value

system is based on consequences [Johnson, 2001]. For an action to be good, the

consequences of that action must increase general happiness, while a bad action would

decrease happiness. The intention of the action is not really important. Meaning well

does not make the action ethical.

The central principle for many codes of ethics, including our own Software

Engineering Code of Ethics [IEEE-CS/ACM, 1999] is to not do harm to the public.

Principle 1 of the code states, “Software engineers shall act consistently with the public

interest.” This principle seems to have its fundamental roots in utilitarianism. If an action

is going to cause harm (or decrease happiness), then it is unethical to do it. We should

always be striving for the public good.

What makes rationalizing actions based on this utilitarian ethic complicated is the

difficulty in determining whether a behavior, in the long term, will increase happiness.

The consequences of any action are mostly impossible to estimate. Even the immediate

effects of an action may be vastly different from the long-term effects. Determining

the consequence is also difficult when dealing with behaviors that have global effects.

Considering the impact on different cultures and people we do not know or agree with is

difficult at best, and impossible in many situations.

A system that works more easily for us in most cases is a more deontological

approach. By valuing the nature of the decision and establishing a set of principles we

stick to, value judgments are easier. Essentially, these deontological values are based on

an estimate that in general, upholding these values increases happiness. Deontological

values may tell us that it is wrong to kill. This principle is widely accepted among the cultures of the earth. Not killing generally increases happiness. The underlying value system

still holds increasing happiness as its core principle. We abstract through deontological

arguments a set of principles that allows us to estimate the value of an action and enables us to make decisions, since it is close to impossible to reason fully using utilitarianism.

We do not have good rules for security, privacy and anonymity. There are always

exceptions when we should or should not allow these abilities. The core belief is that

we should allow them when it will increase happiness. Therefore, to determine whether

creating secure, anonymous communities is ethical, we must determine whether doing so

promotes the public good.

Anonymity and Community

In [Johnson, 1994] a study is described which seeks to answer what happens to

collaboration as community size and members’ anonymity grow. Groups of different sizes

were told the bill for their dinner would be shared equally, regardless of what each

member ordered. They were given the option of ordering expensive lobster or cheap

hot dogs. Not surprisingly, as the community size grew, more people were likely to

order the lobster knowing that they were somewhat anonymous in their order. Smaller

communities collaborated to reduce the size of the bill since the consequences of their

action could be seen more directly.
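
The incentive is simple arithmetic: with the bill split n ways, a diner who upgrades pays only 1/n of the extra cost while keeping all of the benefit, so defection gets cheaper as the group grows. A quick sketch with hypothetical prices:

    # Marginal cost of ordering lobster when the bill is split n ways.
    LOBSTER, HOT_DOG = 30.0, 5.0   # hypothetical menu prices

    for n in (2, 4, 8, 16):
        my_share_of_upgrade = (LOBSTER - HOT_DOG) / n
        print(f"group of {n:2d}: upgrading costs me only ${my_share_of_upgrade:.2f}")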

This study seems to suggest that people will act differently when they believe

they will not be accountable for their actions. According to [Johnson, 1994], this is

a reason not to encourage anonymous communication in society: developing collaborative communities requires accountability, and therefore the ability to be anonymous is destructive.

Others have argued that anonymity is necessary for liberty and freedom

[Walters, 2001]. Indeed, our own tradition of voting requires that the vote itself be private and anonymous. To ensure that unpopular and dissenting opinions are free to

be expressed without persecution, anonymity is required. According to the Supreme

Court, “Anonymous pamphlets, leaflets, brochures and even books have played an

important role in the progress of mankind. Persecuted groups and sects from time to

time throughout history have been able to criticize oppressive practices and laws either

anonymously or not at all.” [Supreme Court, 1960]

It is ethical to create technology to enable anonymous communication. Indeed it

may be true that anonymous communication can have some destructive consequences.

The potential benefits of allowing anonymous communication, however, are much greater. The

consequences of not allowing anonymous communication could be that unpopular ideas

and opinions will not be expressed. As John Stuart Mill pointed out, “‘the tyranny of the majority’ is now generally included among the evils against which society requires to be on its guard” [Mill, 1859]. It is in the public’s interest to allow anonymous

communication.

Only Criminals Need Privacy

One argument for not developing these technologies is the claim that primarily

criminals will use them. A secure anonymous community is most useful to those that

have something to hide. Average citizens, the argument goes, need not worry about some government agency looking over their shoulder. The Office of the White House Press Secretary under

the Clinton administration declared, “… if encryption technology is made freely

available worldwide, it would no doubt be used extensively by terrorists, drug dealers,

and other criminals to harm Americans both in the U.S. and abroad.” [Press Sec., 1994]

Stansfield Turner, former CIA director, has claimed that, “A threat could develop

… terrorism, narcotics, whatever … where the public would be pleased that all electronic

traffic was open to decryption. You can’t legislate something which forecloses the

possibility of meeting that kind of emergency.” [Barlow, 1992] Because of the threats to

our own security, privacy from law enforcement should not be guaranteed. It would be

irresponsible to allow a technology to have widespread use that would undermine the

government’s ability to protect us.

This argument is very similar to one used in the gun control debate: most such weapons seem to have no legitimate use other than murder, and the benefit to home security is very minor. The argument is usually followed by specific examples. In 1997, the

Sacramento County sheriff arrested a man accused of being a pedophile and confiscated

his computer equipment. Files on his hard drive, including what looked like a diary, were

indecipherable thanks to PGP [Denning, 1997]. This is very similar to the story of six-year-old Kayla, murdered by another six-year-old at school [CNN, 2000]. Imagine all the little sisters who would still be alive if we banned handguns. Imagine all the little sisters who would be alive or unmolested if we didn’t create technology to facilitate crimes

against them.

The other side of this argument is made in much the same way. Technology (and guns) can

be used to fight against criminals or even oppressive governments when our protectors

cannot help us. A gun debate may use an example of a gun owner saving his family from

a killer sneaking into their home. Zimmerman may use the anecdote provided by a PGP

user from Latvia as Russian freedom was hanging in the balance: “If dictatorship takes

over Russia, your PGP is widespread from Baltic to Far East now and will help

democratic people if necessary. Thanks.” [Bulkeley, 1994] Requiring law enforcement

access to all needed information is not good for the public. As Ron Rivest, co-inventor of the RSA encryption scheme, has said, “We have the largest information-based economy in

the world. We have lots of reasons for wanting to protect information and weakening our

encryption systems for the convenience of enforcement doesn’t serve the national

interest.” [Barlow, 1992]

The basis for valuing privacy in general is not well established. Even the U.S.

Constitution does not explicitly protect privacy. Despite the lack of official support,

privacy is considered to be a human rights issue [Walters, 2001]. Many consider privacy

to be integral to free expression, well-being and freedom. The U.S. Supreme Court decided in Katz v. United States that we do have a reasonable expectation of privacy in most

electronic communication [Supreme Court, 1967].

Principle 1.03 of the Software Engineering Code of Ethics lists “diminish privacy” among the results of our work that we should avoid. Principle 3.12 says, “Work to

develop software and related documents that respect the privacy of those who will be

affected by that software.” It seems clear that this code of ethics values privacy as a

human right and we should not infringe upon it.

Human rights are however not absolute [Walters, 2001]. There are situations

when human rights should be infringed upon. Currently we allow law enforcement to

violate the privacy of suspects when investigating crimes. This is done on a small scale

and only under very strict guidelines of what they can and cannot monitor. The FBI uses

a system called Carnivore to monitor computer networks [Kerr, 2000]. A standard packet

sniffer would capture all traffic and require an operator to sort through to find the

relevant packets. This would expose the FBI to information they are not authorized to

capture by a wiretap order. The Carnivore system allows the FBI to be stricter about what

they see so as not to accidentally violate the privacy of others.
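
The distinction can be sketched as a scope-limited capture filter (hypothetical, not Carnivore’s actual design): traffic outside the wiretap order is discarded before any human sees it, rather than captured and sorted afterward.

    # Hypothetical sketch of order-scoped capture: only packets within the
    # wiretap order's scope are kept; everything else is never recorded.

    order = {"target_ip": "10.0.0.7", "protocols": {"smtp"}}

    def in_scope(packet: dict) -> bool:
        involves_target = order["target_ip"] in (packet["src"], packet["dst"])
        return involves_target and packet["proto"] in order["protocols"]

    traffic = [
        {"src": "10.0.0.7", "dst": "10.0.0.9", "proto": "smtp"},  # covered
        {"src": "10.0.0.5", "dst": "10.0.0.9", "proto": "http"},  # not covered
    ]

    captured = [p for p in traffic if in_scope(p)]
    assert len(captured) == 1   # the unrelated HTTP traffic is never seen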

Use of modern encryption methods enables a person to thwart law enforcement’s

ability to monitor traffic, even if society has decided it is acceptable to violate privacy in

investigating a specific crime. Citizens of the United States have a guarantee that our

privacy will not be infringed unless we are suspected of some criminal offense. As

technology continues to spread and develop, infringing on privacy will become easier

[ACLU, 1993]. The temptation to violate privacy by automated monitoring of all

domestic electronic communication may at some point be too great. The immediate

benefit of security from criminals would be huge, but the effects on freedom could be

devastating.

A secure anonymous network for the masses ensures freedom of information. It

is a fundamental value of the free world that free expression promotes the greater good.

In the United States we seem to have protections against the unrestricted monitoring

or censoring of speech. In many other countries this is likely not the case. Privacy is

important; it is ethical to promote privacy.

The Red Queen

Leigh Van Valen introduced in [Van Valen, 1973] the Red Queen hypothesis

of evolutionary biology. The hypothesis proposes the idea of co-evolution where two

species that have a conflicting relationship will evolve to compete with each other and

just end up “running in place”. The phrase “Red Queen” comes from Lewis Carroll’s

Through the Looking Glass, where Alice discovers after running with the Red Queen that even though she is running very quickly, they never actually get anywhere. The Red

Queen responds saying, “Now, HERE, you see, it takes all the running YOU can do, to

keep in the same place.” [Carroll, 1872]

The Red Queen hypothesis can be easily extended to describe higher-level

conflicts such as the U.S./Soviet arms race and even our issue of restricting

technology to preserve an advantage for law enforcement. It would seem that society,

as a biological system, will continue to develop technology that offsets itself. We will

simultaneously develop technology to enable law enforcers and law breakers so that the

end result is again “running in place.”

The danger comes when one side of the conflict develops faster than the other, resulting in extinction for the weaker side. If technology that enables criminals develops

faster than the technology for protecting us from them, “extinction” could result. It would

seem that if we could, through government policy, restrict the development of technology for criminals, then crime would become extinct. Of course, it is not that simple. There

is another relationship that needs to be taken into account. If government is viewed as

a predator of freedom and the public good, public technology is required to counter the

abuses of the government. These two relationships are at odds with each other. While

law enforcement technology is good for eliminating criminals, it is bad for preserving

freedom. While technology that preserves freedom is good for the public, it makes

catching criminals difficult. Again we have a very delicate balancing act.

In keeping with the Red Queen hypothesis, if the development of technology allows for abuses and infringement of the public’s privacy, it is our ethical duty to provide technology that helps preserve it. Since we value privacy and

anonymity we must act to secure it. We cannot assume that since we have not yet had our

privacy infringed upon that it will not happen. The Red Queen will keep running; we have to run right alongside just to stay in the same place.

Conclusion

The technology to enable governments to monitor all electronic communication is

not far-off. In the United States, the results of law enforcement technology advances are

mostly good. We have enough checks and balances in place to reduce the threat of

abusing this power. However, those checks and balances are no guarantee. While giving the U.S. government the ability to monitor all communication, criminal or otherwise,

may promote security, the potential danger to freedom is too great. The potential for

abuse cannot be ignored.

It is in the public interest to create technology that protects the balance of power.

Ensuring the public has access to anonymity and privacy is critical for a free society. It is

impossible to know for sure what the eventual outcome of all technological advances will

be. Criminals will use technology to undermine our laws and cause some damage. Not

creating such technology is even more dangerous. Allowing only governments to have

advanced technology will undermine our own freedoms. In the long run, privacy and

freedom will result in the greater good and therefore it is our responsibility to continue to

promote this technology.

Bibliography

American Civil Liberties Union. “Cryptographic Issue Statements: Letter to the Computer System Security and Privacy Advisory Board,” May 28, 1993. Repr.

Building in Big Brother, Springer-Verlag New York, 1995.

Bulkeley, William M. “Genie is Out of the Bottle,” Wall Street Journal, April 28, 1994.

Barlow, John Perry. “Decrypting the Puzzle Palace,” Communications of the ACM, Vol. 35, Issue 7. July 1992.

Carroll, Lewis. Through the Looking Glass. Project Gutenberg, 1991. (orig. published 1872) http://www.cs.indiana.edu/metastuff/looking/looking.txt.gz

Chaum, David. “A Cryptographic Invention Known as a Blind Signature Permits Numbers to Serve as Electronic Cash…,” Scientific American, August 1992. Repr. http://www.eff.org/Privacy/Digital_money/chaum_privacy_id.article

CNN, “Clinton pushes Congress to pass new gun control legislation,” March 7, 2000. http://www.cnn.com/2000/US/03/07/clinton.guns.03/

Denning, Dorothy and Baugh, William Jr. “Encryption and Evolving Technologies As Tools of Organized Crime and Terrorism,” National Strategy Information

Center’s US Working Group on Organized Crime, July 1997. http://www.cs.georgetown.edu/~denning/crypto/oc-rpt.txt

Electronic Frontier Foundation. “Johan Helsingius closes his Internet remailer,” Press Release, August 1996. http://www.eff.org/Privacy/Anonymity/960830_penet_closure.announce

IEEE-CS/ACM Joint Task Force on Software Engineering Ethics and Professional

Practices. “Software Engineering Code of Ethics and Professional Practice,”

1999. http://www.computer.org/tab/seprof/code.htm

Johnson, David R. “The Unscrupulous Diner’s Dilemma and Anonymity in Cyberspace,” March 4, 1994. http://www.eff.org/Privacy/Anonymity/anonymity_online_johnson.article

Johnson, Deborah G. Computer Ethics. Prentice Hall, 2001.

Kerr, Donald. “Carnivore Diagnostic Tool,” Testimony of Donald M. Kerr, Assistant Director, Laboratory Division, FBI Before the United States Senate, The Committee on the Judiciary, September 6, 2000. http://www.fbi.gov/congress/congress00/kerr090600.htm

Kushner, David. “The World’s Most Dangerous Geek,” Rolling Stone Magazine, January 13, 2004.

Levy, Steven. “The Cypherpunks vs. Uncle Sam,” New York Times Magazine, June 12, 1994.

Mill, John S. “On Liberty (1859),” Harvard Classics, P.F. Collier & Son 1909. Repr. http://www.serendipity.li/jsmill/on_lib.html

Supreme Court of the United States, “Talley v. California”, March 7, 1960. http://www.epic.org/free_speech/talley_v_california.html

Supreme Court of the United States, “Katz v. United States”, Dec 18, 1967.

Van Valen, L. “A New Evolutionary Law,” Evolutionary Theory, 1, 1-30, 1973.

Walters, Gregory J. “Privacy and Security: An Ethical Analysis,” ACM SIGCAS Computers and Society, Volume 31, Issue 2. June 2001.

White House Office of the Press Secretary, “Statement of the Press Secretary,” February 4, 1994. Repr. Building in Big Brother, Springer-Verlag New York, 1995.