Title: On-Line Polling.
Subject(s): INTERNET (Computer network) -- Political aspects -- United States; TECHNOLOGICAL innovations
Source: Harvard International Journal of Press/Politics, Spring99, Vol. 4 Issue 2, p30, 15p
Author(s): Rosenblatt, Alan J.
Abstract: Explores the methodological limitations and implications of the potential use of the Internet for on-line polling. Obstacles that limit the selection of a representative on-line sample; Views of politicians and political analysts on electronic democracy; Discussion of the implications that solutions to these obstacles may have on the possibility of an electronic democracy.
AN: 1833847
ISSN: 1081-180X
Database: Academic Search Elite

ON-LINE POLLING

Methodological Limitations and Implications for Electronic Democracy

The rise of the Internet has excited many politicians and political analysts about the potential for on-line polling, in particular, and electronic democracy, in general. This essay explores the obstacles that limit the selection of a representative on-line sample. It further considers the implications that solutions to these obstacles may have on the possibility of an electronic democracy. Severe limitations to on-line sampling are revealed. Fixes to these problems may undermine the democratic principles that form the foundations of our political system.

In anticipation of the day when as many Americans as own a radio or television will have the computer-modem ability to go online, the White House Office of Communications sought the help of the Artificial Intelligence (AI) Laboratory at MIT. The AI Lab, under a pro bono contract, began developing a software program to analyze the contents of computer messages coming into the White House and to forward them to the appropriate office or agency. Onliners who send a message to the White House would be asked to follow a standard electronic form (name, address, organization, information being requested, approximate category for registering an opinion, etc.). By the millennium, White House computers could be programmed to produce a cumulative attitudinal analysis of all the messages received. The twenty-first century president would have available a daily poll of the online electorate. (Diamond and Silverman 1995:149)

The potential of on-line technology is causing many pollsters and wannabe pollsters to salivate like Pavlov's dog. The mention of millions of citizens connected to a communications backbone that allows instantaneous transmission of surveys across the polity at virtually no cost can create a feeling of omnipotence for survey researchers. Once all of America is wired to the Internet, it would seem that pollsters need only draw a random sample and e-mail a survey instrument out in one single, fluid effort. Within a few days, hundreds of responses would come streaming back. A quick sorting of the sample database would then identify the outstanding surveys. In as much time as it takes to compose a letter, a follow-up message could be sent out to the stragglers. With this technology, researchers could send another copy of the survey along with this letter at virtually no cost to the pollster. If needed, this process could be repeated again and again until the response rate reached an acceptable level.
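To make the mechanics of that follow-up cycle concrete, the sketch below (in Python) tracks which sampled addresses have not yet returned a survey and re-sends the instrument to them. The address list, sample size, and send_survey() helper are hypothetical illustrations, not part of any existing polling system.

    import random

    def outstanding(sample, responded):
        """Return the sampled addresses that have not yet sent back a survey."""
        return [addr for addr in sample if addr not in responded]

    def send_survey(addr):
        # Hypothetical delivery helper; a real system would hand the message
        # to a mail server here.
        print(f"(re)sending survey to {addr}")

    # Hypothetical sample frame of e-mail addresses and a random sample drawn from it.
    frame = [f"user{i}@example.org" for i in range(10_000)]
    sample = random.sample(frame, 500)
    responded = set()            # filled in as completed surveys come back

    # One follow-up round: re-send the instrument to the stragglers,
    # repeating in later rounds until the response rate is acceptable.
    for addr in outstanding(sample, responded):
        send_survey(addr)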

It seems that this would be a dream come true for public opinion researchers. At last, we would have a painless, inexpensive way to monitor the pulse of the nation. Or would we? This technology requires us to rethink how we approach the survey endeavor, as well as how we think about the role of the people in a democracy. On-line polling would offer both the opportunity and the necessity to construct an interactive survey instrument, an instrument that might promise greater citizen empowerment but might also deliver a flood of nonsense and paranoia. As for the instrument, we can create it with sophisticated skip patterns using the computer-aided survey systems already employed by telephone polling operations. This dynamic design may be necessary to ensure that respondents complete surveys, because on-line users tend to have short attention spans. Because the data are collected electronically, on-line polls also offer us the opportunity to rely more heavily on open-ended questions, whose responses can be quickly analyzed using any of several content-analysis software programs on the market. Sampling procedures must also be scrutinized. Much more so than with traditional methods, sampling on-line can be a nightmare filled with ghosts and avatars. In purely practical terms, the identification of a sample frame becomes a monumental, if not impossible, task.

Future efforts to overcome these obstacles are likely to be problematic, not just technically, but also in their effects on political debate and on the possibility of an electronic democracy. The technical discussion in this article focuses on sampling, saving instrument construction for another day; the philosophical discussion focuses on the feasibility of a cyberdemocracy.

This article makes two related arguments. The first is that attempts to conduct legitimate scientific on-line sampling for surveys face problems that in many ways are unique to the Internet. Techniques have been developed and are likely to be refined to allow better and better sampling of the on-line population, sampling far superior to self-selected samples. The second argument logically follows from the first. As on-line sampling techniques approach the standards required by the Central Limit Theorem, concessions to anonymity must be made. These are likely to be required not only for on-line polling, but also for any system of on-line voting, producing a system of confidential polling, a far cry from the anonymous poll. Such a transition would potentially have severe implications for society.

Sampling Issues

An important goal of good survey research is to select a sample that is representative of the population. The size of a sample alone is insufficient to warrant our faith in a poll's results. The fate of Literary Digest after it miscalled the Roosevelt-Landon presidential election in 1936 should forever warn us of the pitfalls of blind faith in large numbers. The Law of Large Numbers requires that a sample be large and random to ensure a low likelihood of bias. It is the assumption of randomness that poses the greatest obstacle to on-line polling.

For a sample to be random, each member of the population must have an equal chance of being selected for the sample. To operationalize this concept, a researcher must compile a sample frame, a physical list of the population. Names are then drawn from the sample frame by applying a fixed skip pattern from a random starting point.
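As a minimal sketch of that selection step, assuming the sample frame is already available as a simple list of names, the Python fragment below applies a fixed skip and a random start; the frame itself is a hypothetical stand-in.

    import random

    def systematic_sample(frame, n):
        """Select n names from the sample frame with a fixed skip pattern
        and a random starting point."""
        skip = len(frame) // n                 # fixed skip pattern
        start = random.randrange(skip)         # random starting point
        return [frame[start + i * skip] for i in range(n)]

    # Hypothetical frame; in practice this would be the full list of the population.
    frame = [f"person_{i}" for i in range(1_000_000)]
    sample = systematic_sample(frame, 1000)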

Although this process seems simple enough, the difficult part has always been creating the sample frame. Imagine the difficulty of the task if the population were all eligible voters in the United States. The sample frame would have to include well over 100 million people. Entering all of their names and addresses into a database would take years of work. Acquiring their names from a government agency would certainly save some time, but it might be expensive, and the addresses might not even be up-to-date. A national phone directory might provide some information, but it would not identify citizenship or how many people share a given phone number.

For national samples, pollsters often take a shortcut and create a probability sample. Instead of working from an individual-level list, the sample is randomly drawn from census tracts, streets within tracts, houses on the sampled streets, and people residing in each selected house. This process satisfies the randomness assumption and allows for the relatively quick and easy generation of samples.
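A toy illustration of that multistage draw, using hypothetical tract, street, house, and resident listings in place of real census data, might read as follows.

    import random

    # Hypothetical nested listings standing in for census data.
    tracts = {
        "tract_1": {"Elm St": ["12 Elm", "14 Elm"], "Oak St": ["3 Oak"]},
        "tract_2": {"Main St": ["101 Main", "105 Main"]},
    }
    residents = {
        "12 Elm": ["A. Smith", "B. Smith"],
        "14 Elm": ["C. Jones"],
        "3 Oak": ["D. Lee", "E. Lee"],
        "101 Main": ["F. Chan"],
        "105 Main": ["G. Park", "H. Park"],
    }

    def multistage_draw():
        """One draw: random tract, then street, then house, then resident."""
        tract = random.choice(list(tracts))
        street = random.choice(list(tracts[tract]))
        house = random.choice(tracts[tract][street])
        return random.choice(residents[house])

    respondent = multistage_draw()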

An alternative to this process, although often integrated with it, is a method of sampling called random-digit dialing. With the help of the phone companies, pollsters identify which exchanges are given to residences and which are given to businesses. Then the phone numbers are listed in sequence, and a skip value and random starting point are identified. When a phone is answered, the interviewer asks for a predetermined member of the household (oldest female of voting age, youngest male of voting age, etc.).
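A sketch of this procedure, again in Python, might look like the following; the exchange prefixes are hypothetical examples of exchanges a phone company might flag as residential, and the within-household selection is left to the interviewer.

    import random

    def rdd_numbers(residential_exchanges, n):
        """Random-digit-dialing sketch: list every number in the residential
        exchanges in sequence, then pick n of them with a fixed skip value
        and a random starting point."""
        all_numbers = [f"{exchange}-{suffix:04d}"
                       for exchange in residential_exchanges
                       for suffix in range(10_000)]     # suffixes 0000-9999
        skip = len(all_numbers) // n
        start = random.randrange(skip)
        return [all_numbers[start + i * skip] for i in range(n)]

    # Hypothetical residential exchanges (area code plus prefix).
    exchanges = ["703-555", "703-556", "202-555"]
    numbers_to_dial = rdd_numbers(exchanges, 30)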

These methods have problems of their own. For example, some telephone exchanges are assigned to both residences and businesses. Some homes do not have phones; some have two or more separate lines. Some people have more than one home; some have no home. For these reasons and others, even these commonly used methods of selecting representative samples fall short of the ideal.

On-Line Sampling

With regard to on-line sampling, the need remains for a complete sample frame and reliable sampling methods. Although it would seem that e-mail addresses would be easy to collect and, already in digital form, would be easy to compile into a database, there are many problems complicating the process.

The collection of e-mail addresses is the first task required for the sample frame. Perhaps the best way to start would be to get lists of addresses from the major Internet service providers (ISPs): America Online, UUNet, PSINet, BBN, Erols, Earthlink, Mindspring, and others. In addition to these, most universities, research institutions, and businesses provide e-mail accounts to their students and staff. Newest on the block are Web-based e-mail services like Hotmail, Yahoo!, Netscape, and Postmaster. These companies provide e-mail addresses that are accessible through the World Wide Web, no matter which ISP is used.

To compile a list of e-mail addresses, pollsters would either have to get directories from each of the e-mail services or use an on-line search engine to harvest addresses. Neither of these would be very successful. E-mail lists are proprietary information. Some companies release them only for a fee; others never release them. The other route is to use a search engine, yet even these are noticeably incomplete. My own e-mail addresses cannot be found using these search sites. These problems make it difficult to complete the sample frame.
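For a sense of what the harvesting route would involve, the fragment below extracts whatever addresses appear in a block of page text using a regular expression; the page text is a hypothetical stand-in for results returned by a search engine, and, as noted above, such harvesting misses many addresses.

    import re

    # Hypothetical text from a fetched results page; a real harvester would
    # download many such pages before extracting addresses.
    page_text = "Contact alan@example.org or webmaster@example.net for details."

    EMAIL_PATTERN = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
    addresses = set(EMAIL_PATTERN.findall(page_text))
    # addresses == {'alan@example.org', 'webmaster@example.net'}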

Beyond the sample frame, pollsters must contend with rampant multiplicity of addresses. Users often have one e-mail address at work and another from their home ISP. Especially since the rise of free, Web-based e-mail services, users are signing up for more than one e-mail address. Often, these addresses are acquired under false names, complete with false personality profiles (Turkle 1995). Even if we could generate a sample frame, we would be unable to identify these multiple accounts. Further, given the commitment that many of these people have to their various on-line personalities, it is reasonable to expect that many surveys would be answered "in character."

These are some of the drawbacks associated with getting a complete sample frame. So far, however, we have looked only at a world where all citizens are online. Someday we may have to confront these problems in earnest. Until then, we must deal with the current state of affairs in the on-line population.

Current Approaches to On-Line Population Sampling

Despite these difficulties, attempts have been made to survey the on-line population scientifically. These studies can be divided according to their basic strategies. Some generate random samples of citizens at large, using conventional sampling techniques. Then they extract a subset of the sample that goes on-line for their study sample. The other studies sample on-line directly, through various means of advertising and self-selection. In this way, they attempt to generate a representative sample with a nonprobabilistic method. Although most of the self-selected, on-line polls are worthless (Wu and Weaver 1997), some creative efforts have been made to overcome the expected bias produced by self-selected opinion polls (SLOP). This article focuses only on attempts to produce a representative sample of the on-line community.

Off-Line Random Sampling

The basic philosophy is to randomly sample a national population and then screen for on-line users. Because the original sample is randomly selected, the subsample of on-line users should be representative of the on-line subpopulation. This approach can be very expensive, though, because of the overhead and processing it requires. The experiences of two of the most prominent research firms using this approach offer illustrative examples.

In 1995, O'Reilly & Associates conducted a market study of on-line users. To attain a sample of 1,000 Internet users and 500 on-line service subscribers, the firm had to make 200,000 random-digit dialing (RDD) attempts and complete 32,000 screening interviews.(n1) Considering that around 10 to 20 percent of the population was on-line in 1995, this is an extremely low success rate.
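The yield implied by those figures can be worked out directly; the short calculation below simply restates the numbers reported above.

    # Figures reported for the 1995 O'Reilly & Associates study.
    attempts = 200_000            # random-digit dialing attempts
    screenings = 32_000           # completed screening interviews
    completes = 1_000 + 500       # Internet users plus on-line service subscribers

    print(f"completes per RDD attempt: {completes / attempts:.2%}")    # about 0.75%
    print(f"completes per screening:   {completes / screenings:.2%}")  # about 4.7%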

In 1996, the Times Mirror Center for the People and the Press (now the Pew Research Center for the People and the Press) was able to take advantage of its survey research infrastructure to build a sample of 1,003 on-line users. Using the combined random samples of several recent in-house surveys, the Center for the People and the Press was able to screen a random sample of on-line users from 6,000 to 7,000 earlier survey participants. Because they were already part of a random sample, albeit pooled from several distinct national samples using the same selection process, the resulting 1,003 on-line users surveyed should be representative of the on-line population.(n2)

These are two of the most prominent examples of surveys of the on-line community using conventional sampling methods. Theoretically, they should produce the most representative sample of the population. Although this may be true, these surveys simply fail to exploit the potential that has cyberspace cheerleaders jumping. They still require all of the overhead inherent in a real-world survey: phone calls, interviewers, data entry, and various other tasks that would supposedly be eliminated by a purely electronic, virtual-world survey process.

On-Line Nonprobabilistic Sampling

The real excitement about surveying on-line users is that the entire survey can be conducted electronically. The enormous reduction in overhead offered by the virtual environment has the potential to make surveys a far more powerful agenda-setting and policy-making device than ever before. On-line polling holds out the possibility of knowing the feelings of the people, at as deep a level as the survey questions allow, in very short order, assuming respondents reply promptly. No longer would we suffer the use of nonrepresentative "polls," often employed by politicians, such as walking the street, talking with a small portion of constituents, or tallying up comments from mail received at the office. The bias caused by the self-selected nature of these samples guarantees that when politicians think their fingers are on the pulse of the nation, those fingers are probably their thumbs.

The key to the success of on-line polling lies in our ability to draw representative samples directly on-line. As indicated earlier, question writing and data analysis remain unchanged in this environment. Data collection and entry become much faster due to the automation of the delivery, return, and data record generation processes. All that is required to put this phenomenon on the map is an effective sampling process.

There have been a handful of rigorous attempts to generate a representative sample of the on-line population. These attempts have used varying methods to solve the bias problem with varying success.

Bonnie Fisher and colleagues were among the earliest political scientists to attempt a direct sampling (1996a, 1996b). Their 1995 survey project employed a clever, but fatally flawed, solution to the bias problem: they took a random sample from fairly complete sample frames of Listserv and Usenet newsgroups and posted a form of their survey on those selected. It should have worked beautifully; however, because the very principle of random sampling conflicted with the Usenet principle of a newsgroup for every topic and a topic for every newsgroup, the participants of these groups rejected the presence of a randomly distributed survey instrument. Because the survey was off-topic within most of these groups, its posting was a violation of the rules of participation. The participants in these forums are committed to their rules for relevance; these collectives are more like the Senate, with strict rules that all comments must be relevant to the current topic, than like the House, which has no such requirements. In addition to nonresponsiveness and the occasional berating, the researchers were subjected to a wide variety of retaliations that are now possible in this electronic environment, including e-mail boxes filled with "flames" (nasty messages).

Fisher and colleagues' work makes it very clear that on-line random sampling is difficult, if not impossible. Recognizing this, the Graphics, Visualization, and Usability (GVU) Center at Georgia Tech developed a methodology for on-line sampling in 1994. Their ongoing study has been endorsed by the World Wide Web Consortium (W3C), "which exists to develop common standards for the evolution of the web."(n3)

The GVU study employs a methodology that is not based on random sampling. Instead, they have developed a method of sampling that exploits some of the unique characteristics of the Web. Like Fisher and colleagues, they post announcements on newsgroups--announcements only, and only on appropriate newsgroups (e.g., comp.internet.net-happenings, etc.). In addition, they post banners (advertisements that link to GVU's home page) on high-exposure pages like the home pages for Yahoo! and Netscape. On other high-exposure sites (e.g., Webcrawler, etc.), they randomly rotate their banners. They e-mail announcements out over their own Web-surveying mailing list, and they advertise their survey in the popular media (e.g., newspapers, trade magazines, etc.).

This approach, although not a true random sampling method, has a great deal of potential. By posting banners on the most heavily trafficked sites, like search engines and browser company home pages, GVU exposes itself to virtually all of the on-line population, increasing the chances of getting an unbiased sample. Despite this innovation, though, GVU is still dependent on self-selection.

In an attempt to offset the effects of selection bias, GVU offers a cash incentive. Each participant is entered into a drawing that provides several $250 awards. Although this does not eliminate the problem, it can increase the likelihood of participation of the more mainstream users. GVU reports that the introduction of the incentives did not greatly increase the number of responses, but it did increase the probability that the survey would be completed once begun.

This reveals one of the sad truths about on-line browsing. The medium is so distracting that the audience is often hard put to finish reading a piece before moving on to another site. When Michael Kinsley left the New Republic to become editor of the on-line magazine Slate, the move was lamented by many who thought that his ability to write pieces that people want to read all the way through would be wasted on-line.

The On-Line Community as a National Sample

Until such time as all citizens have Internet access, we must accept the fact that the on-line population is only a sample of the larger citizen population. Thus, to employ this community as a study sample, we must assess its representativeness. Does it look like the polity? If it does, it can be used to assess the political mood of the nation. If not, we cannot use it as a basis for valid inferences.

In the two-year period between January 1996 and January 1998, the percentage of Americans who used a personal computer at home, work, or school increased from 59 to 65 percent.(n4) Although this growth is modest, the absolute number of users is quite high. Over the same period, the percentage of Americans who go on-line grew from 21 to 37 percent.(n5) On-line use is thus growing faster than computer use, in both absolute and relative terms. This pattern suggests that an on-line user is quickly becoming indistinguishable from a computer user. In fact, consumers would be hard put these days to buy a computer that was not ready to connect to cyberspace. The population of computer users who do not go on-line is now fighting a battle of attrition.

These findings may give cyberspace cheerleaders great cause for celebration. It would seem that on-line usage is growing at such a fast rate that soon the whole population will be wired. When that happens, there will be no more concern for the worries of sampling. Unfortunately, the numbers suggest that computer usage is growing slowly and that it may take a while for the last third of the population to join the ranks of computer users. The percentage of respondents who claim to use a computer at home, work, or school over this period fluctuated between a low of 56 percent in July 1996 and a high of 66 percent in November 1997. Although it is true that the two-year period began at 59 percent and ended at 65 percent, it is unclear whether we have broken out of a plateau of usage (Table 1).

If we are at a plateau of computer use, the rapid growth of the on-line population will hit a ceiling at around 60 percent. It is also possible that the percentage of computer users includes a significant number of occasional, low-end users who are unlikely to go on-line. Additionally, many of those who go on-line may only have access via their employer, so although they are fair game for surveys, it may not be possible to offer them secure voting on their employer's computer, should electronic democracy evolve into a system of on-line voting. As a result, the upper limit for the percentage going on-line may be less than the percentage of computer users, and even that figure may overstate the viability of an on-line democracy.

So we are still faced with the question of sample bias. Is the on-line population a biased sample of the electorate? According to the 1996 Technology Survey, published by the Pew Research Center for the People and the Press, the on-line population does not look exactly like the national population, but it is not entirely different. Table 2 details the demographic comparisons between the on-line population and the nation at large. The key differences are sex, age, income, and education. According to their findings, the on-line population is more male, younger, richer, and better educated than the national population. It is also more likely to live in the suburbs, where people tend to be wealthier and more enamored by electronic conveniences. Note that race does not seem to be a factor in predicting whether a person goes on-line. Though these differences are significant, they are not huge. As for political preferences, the on-line population is slightly more Republican, both in party identification and in candidate preferences (Table 3).

The results of the GVU survey are difficult to compare with the Pew survey because of the dramatically different sampling methods employed. Just a rough comparison of the percentage of women on-line reveals the discrepancy. Whereas Pew found that 42 percent of the on-line population were women, GVU's 1996 findings showed about 31 percent women. Despite this inability to compare directly, we should note that according to the 1997 GVU survey, women increased their on-line presence by about seven percentage points. The newest Pew survey reveals a narrowing of the gap between men and women.(n6)

The GVU study has found the on-line population to be more educated than the Pew study did. The Eighth Survey found 46.96 percent with a college education, down about 7 points from the Seventh Survey,(n7) though the study includes European users, who apparently drive up the average. The GVU study also reports that on-line users tend to be more affluent than the national population.

One of the most interesting findings of the GVU study is that approximately 40 percent of on-line users falsify their personal information while online. Fourteen percent actually claim to falsify more than 25 percent of the time. This finding throws suspicion on any data gleaned automatically from online users. The implications for meaningful debate are serious. The impact this has on on-line voting is also of great concern.

As more of the population goes on-line, the degree of correspondence between the profiles of the on-line population and the national population will fluctuate. In the long run, if the entire nation truly does get on-line, these numbers will obviously converge. When they do, the possibility of an electronic democracy will loom ahead. What are the implications of such a prospect on our democratic culture?

Dilemmas of a Perfect World

The realization of a perfect world for an electronic democracy requires that every voting-eligible citizen be wired. This would allow fully accessible public debate and the creation of an on-line system in which each voter has a unique ID and can log onto the system only under his or her true identity. This would clearly solve the problem of citizens' weighing in more than once, preserving the principle of one person, one vote in cyberspace. We could go so far as to hold on-line elections, where all votes for political candidates and referendums would be cast on-line.

This type of system would also solve the sample frame problem that has plagued on-line sampling to date. Presumably, an official on-line voting system would require the existence of a master list of names and IDs, much the same as current voter registration rolls. This list would already be stored electronically, so generating random samples from it would only take a few minutes. Because these IDs would be a mutually exclusive and exhaustive accounting of the entire population of on-line voters, we would not have to worry about people with multiple identities or those with none.
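A minimal sketch of such a draw, assuming a hypothetical in-memory table of voter IDs as a stand-in for the master list, might look like this.

    import random
    import sqlite3

    # Hypothetical electronic voter roll: one row per registered on-line voter ID.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE voters (voter_id TEXT PRIMARY KEY)")
    conn.executemany("INSERT INTO voters VALUES (?)",
                     [(f"ID{i:08d}",) for i in range(100_000)])

    # Draw a simple random sample of IDs to receive a between-elections survey.
    all_ids = [row[0] for row in conn.execute("SELECT voter_id FROM voters")]
    survey_sample = random.sample(all_ids, 1_000)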

Once this electronic voter database is on-line, we could easily use sampling to assess public sentiment between elections. This would be the grand realization of the electronic town meeting heralded by Ross Perot and discussed in the opening quote from Edwin Diamond and Robert Silverman (1995). It would seem the best of all possible worlds. We would have an easy voting system, publicly accessible political debate, and the ability to employ true probability samples of the population. This would allow us to use samples with high response rates and therefore high levels of representativeness. It would also ensure that public debate included identifiable citizens who would be accountable for their own statements.

This digital utopia comes at a cost, however. The problems stem from the root requirement of the establishment of a national database of voter IDs. Presumably, this database would have to link voter IDs with Social Security numbers to verify the existence of a voter and that voter's eligibility (citizenship, history of institutionalization, etc.). Although the Social Security number could be left out of the publicly accessible database, it must exist in the main database for both initial verification and for continuing maintenance of the system.

With this requirement, the federal government, and any local government that is granted access to the data, would be able to link voter registration information with any other data tied to the Social Security number. Even the secrecy of the ballot would be threatened. Although we could create a system where one's vote is not tied to one's identification in practice, the nature of on-line transactions guarantees that linking one's ID to how one voted is technologically possible. In fact, the only assurance that this link would not be made is the word of the government. Even if the government were true to its word, suspicions of the government are rampant enough that this might have a chilling effect on voting participation.

The possible implications of a national identification system like this are truly frightening. The most outrageous possibility is the rise of an armed insurgency against the government. Already, we have witnessed the tragedy of the bombing of the federal building in Oklahoma City that housed, among others, the FBI offices for that area. Investigations revealed that this attack was a payback for the assault on Waco, Texas, two years earlier.

The connection between this bombing and the potential for insurgency spurred by a national identification system is insidious. Timothy McVeigh apparently learned how to make his bomb from a white supremacist novel, The Turner Diaries, written by William Pierce under the pseudonym Andrew MacDonald (1978). In this novel, the "Organization" bombs the FBI headquarters in Washington, D.C., with a truck bomb precisely because the building housed the database for a national identification card. In the novel, this ID was initially developed for the purpose of tracking gun owners and recovering their weapons.

Although the on-line democracy database described here would not be developed for this purpose initially, it is inevitable that many would make the connection. Part of the reason why we should expect this reaction from our more disgruntled citizens is that this system would follow the creation of an online database for the purpose of tracking guns, which is already under construction, funded by the Brady Act. In this act, the waiting period is only a temporary condition that will be removed once a nationwide electronic database is implemented for instantaneous background checks. Millions of dollars have already been allocated for this purpose.

Although the insurgency scenario is extreme, it is not completely unlikely. There is, additionally, a more likely negative implication of such a national identification system. Such a system essentially adds the political behavior of citizens to an already overwhelming amount of data tracking our behavior. Credit companies hold extensive databases of our financial histories: banks, supermarkets, and credit card companies maintain databases of our individual withdrawals and expenditures (Gandy 1993; Lyon 1994). Marketing research firms like Nielsen Media Research and Arbitron maintain detailed databases of what we watch and listen to on TV and radio (Larson 1992), and now automated counters and "cookies" track everything we access on the Internet.

These databases, along with many other industry-specific databases, are creating what David Lyon (1994) and William G. Staples (1997) call the surveillance society. Increasingly, everything we do is recorded, stored, and processed to allow businesses and government the ability to target advertising and policies to our inferred preferences. Although we may not have achieved the Orwellian equivalent of Big Brother yet, we certainly live in a world where many Little Brothers are watching us. It seems that to pursue the dream of an electronic democracy would facilitate the merger of the Little Brothers into a Big Brother.

The implications of a society built on the surveillance of its citizens are serious, indeed. Oscar Gandy Jr., expanding on the works of Jeremy Bentham and Michel Foucault, describes such a society as the "panoptic sort," a societywide manifestation of Bentham's classic Panopticon prison, where all inmates are under the constant apparent surveillance of the guards (Gandy 1993). The simple premise is that if a person perceives that he is being constantly watched, he will begin to censor his own behavior. As Foucault argues, a society based on this design creates a chilling atmosphere, as mass behavior is much more effectively controlled than in a society based on the punishment of transgressions (Foucault 1977).

The result of this panoptic sort is the mainstreaming of thought and behavior. Ironically, the homogenization of preferences that would occur defeats the purpose of an on-line democracy. If our goal is to allow the previously unheard voice of the underrepresented masses described by E.E. Schattschneider (1960), then the very mechanism for giving them their voice would be the mechanism that chills it.

Notes

(n1.) O'Reilly & Associates, "Defining the Internet Opportunity," 1995 <http://www.ora.com/research/users/index.html>.

(n2.) Pew Research Center for the People and the Press, "One-in-Ten Voters Online for Campaign '96" 1996 <http://www.people-press.org/tec96-1.htm>.

(n3.) GVU, Eighth WWW User Survey, 1997 <http://www.gvu.gatech.edu/user_surveys/survey_1997_10>.

(n4.) Pew Research Center for the People and the Press, "Education, Crime, Social Security Top National Priorities," 1998 <http://www.people-press.org/jan98que.htm>.

(n5.) Ibid.

(n6.) Pew Research Center for the People and the Press, "On-line Newcomers More Middle-Brow, Less Work-Oriented," 1999 <http://www.people-press.org/tech98sum.htm>.

(n7.) GVU, Eighth WWW User Survey.

Table 1 Computer and on-line usage (percentage of respondents)

Q.24 Do you use a computer at your workplace, at school, or at home on at least an occasional basis?

                       Jan.   Nov.   July   Apr.   Mar.   Feb.   Jan.
                       1998   1997   1996   1996   1996   1996   1996

Uses a PC                65     66     56     58     61     60     59
Does not use a PC        35     34     44     42     39     40     41
Don't know/refused        *      *      *      *      *      0      0
Total                   100    100    100    100    100    100    100

IF RESPONDENT ANSWERED YES IN Q. 24, ASK:

Q.24a Do you ever use a computer at work, school, or home to connect with other computers over the Internet, with the World Wide Web, or with information services such as America Online or Prodigy?

BASED ON TOTAL RESPONDENTS:

Goes on-line             37     36     23     21     22     21     21
Does not go on-line      28     29     33     37     39     39     38
Don't know/refused        0      1      0      *      0      *      0
Not a computer user      35     34     44     42     39     40     41
Total                   100    100    100    100    100    100    100

Source: The Pew Research Center for the People and the Press,
January 1998.

For the January 1998 study, N = 1,218. The N for the other studies ranges from 1,003 to 2,000.

Table 2 Demographic profile of on-line users

                        Total            On-line
                        Population (%)   Population (%)

SEX
  Male                       48               58
  Female                     52               42

RACE
  White                      85               86
  Nonwhite                   14               14
  Black                      10                9

AGE
  18-24                      12               23
  25-29                      10               14
  30-49                      42               51
  50+                        35               11

INCOME
  $75,000+                   10               19
  $50,000-$74,999            12               19
  $30,000-$49,999            25               28
  $20,000-$29,999            17               12
  Less than $20,000          23               13

EDUCATION
  College graduate           21               39
  Some college               23               30
  High school or less        56               30

REGION
  East                       20               23
  Midwest                    25               20
  South                      34               32
  West                       21               25

COMMUNITY SIZE
  Large city                 20               22
  Suburb                     23               31
  Small city/town            35               32
  Rural                      21               14

Source: The Pew Research Center for the People and the Press,
January 1996 (<http://www.people-press.org>).

Based on 4,475 interviews/1,082 on-line users.

Table 3 Political profile of on-line users

                                  Total            Weighted             On-Line
                                  Population (%)   Population[a] (%)    Population (%)

PARTY ID
  Republican                           29               31                  34
  Democrat                             33               31                  28
  Independent                          32               35                  36

1996 PRESIDENTIAL PREFERENCE[b]
  Clinton                              49               48                  47
  Dole                                 34               36                  38
  Perot                                11               12                  12

1996 CONGRESSIONAL PREFERENCE[b]
  Republican                           44               46                  50
  Democrat                             49               48                  47

Source: The Pew Research Center for the People and the Press,
January 1996 (<http://www.people-press.org>).

Based on 2,724 interviews/690 on-line users conducted July and
September 1996.

[a] Demographically balanced sample: For this analysis, a sample
of the public was weighted to match the age, sex, and educational
distribution of the on-line population. Comparisons were then
made between the political attitudes of this matched sample and
the on-line population.

[b] Among registered voters.

References

Diamond, Edwin, and Robert Silverman. 1995. White House to Your House: Media and Politics in Virtual America. Cambridge, MA: MIT Press.

Fisher, Bonnie, Michael Margolis, and David Resnick. 1996a. "Breaking Ground on the Virtual Frontier: Surveying Civic Life on the Internet." American Sociologist (Spring): 11-29.

Fisher, Bonnie, Michael Margolis, and David Resnick. 1996b. "Surveying the Internet: Democratic Theory and Civic Life in Cyberspace." Southeastern Political Review 24(3): 399-429.

Foucault, Michel. 1977. Discipline and Punish: The Birth of the Prison. New York: Vintage.

Gandy, Oscar, Jr. 1993. The Panoptic Sort: A Political Economy of Personal Information. Boulder, CO: Westview Press.

Larson, Erik. 1992. The Naked Consumer: How Our Private Lives Become Public Commodities. New York: Henry Holt.

Lyon, David. 1994. The Electronic Eye: The Rise of the Surveillance Society. Minneapolis: University of Minnesota Press.

MacDonald, Andrew. 1978. The Turner Diaries. Hillsboro, WV: National Vanguard Press.

Schattschneider, E.E. 1960. The Semi-Sovereign People: A Realist's View of Democracy in America. New York: Henry Holt.

Staples, William G. 1997. The Culture of Surveillance: Discipline and Control in the United States. New York: St. Martin's Press.

Turkle, Sherry. 1995. Life on the Screen: Identity in the Age of the Internet. New York: Simon and Schuster.

Wu, Wei, and David Weaver. 1997. "On-Line Democracy or On-Line Demagoguery? Public Opinion `Polls' on the Internet." Press/Politics 2(4):71-86.

Paper submitted June 2, 1998; accepted for publication October 15, 1998.


By Alan J. Rosenblatt

Alan J. Rosenblatt is Assistant Professor of Government and Politics at George Mason University in Fairfax, Virginia. His teaching and research focus on the politics of cyberspace.

Address: Department of Public and International Affairs, MSN 3F4, George Mason University, Fairfax, VA 22030-4444; phone: 703-993-1413 or 703-993-1400; e-mail: alan@postmaster.co.uk.


