SIGDOC '96 Keynote: Information, Economy, and Community.


I am on the English department faculty at the University of Virginia, where I teach literary theory, hypertext, American literature, and postmodernism (in various combinations). For the last seven years, I have been a co-editor of the first peer-reviewed electronic scholarly journal in the humanities, Postmodern Culture--a publication which I helped to invent, and which I have seen through the many changes in internet delivery mechanisms since 1990, from Listserv to FTP to Gopher to the World-Wide Web. For the last four years, I have been director of an institute which fosters computer-based research projects across the disciplines of the humanities--about 25 projects so far, in architecture, literary studies, religion, music, linguistics, history, art, archaeology, and other fields. At the Institute, I have worked with a number of industry partners, beginning with IBM, whose three-year equipment and personnel grant established the infrastructure of the Institute, and subsequently with Sun, Sprint, AT&T, Electronic Book Technologies, and other technology companies. What I have to say today, on the subject of information technology and its economies and communities, is framed by this experience and comes from this point of view.

Before I go on, I'd like to get a rough sense of your background and point of view, too:

--How many currently employed by educational institutions?

--How many currently employed by corporations?

--How many corporate employees have advanced degrees (or hours toward them), or have held an academic job in the past?

--How many academics who have requested or received support from corporations, consulted with corporate clients, or held a corporate job in the past?

--How many think academics are self-indulgent and unrealistic?

--How many think corporations are short-sighted and unwilling to take risks?


The theme and title of this conference is: "MARSHALING NEW TECHNOLOGICAL FORCES: BUILDING A CORPORATE, ACADEMIC, AND USER-ORIENTED TRIANGLE." I take it that this implies two things: first, that the corporate/academic/user partnership is something that has yet to be created; second, that technology is going to help build that partnership. There have been attempts to build these partnerships, though, and information technology alone has not been sufficient to accomplish the task. I want to suggest that the reason we're not satisfied with the results so far is that we haven't really tried to combine the respective communities on equal terms or equal footing, and we also haven't understood the economy within which those communities take shape.

I'd like to start by examining the successes and failures of an institutional framework that has tried to include all three parties, albeit in imperfect harmony: it's one that I know well, because I run it. The Institute for Advanced Technology in the Humanities is unusual, among humanities research computing organizations (indeed, among academic organizations of any kind), in that it is fundamentally interdisciplinary: rather than being focused on the work of a single author or the literature of a single discipline, it serves projects in very different--even apparently unrelated--fields. It is also unusual in that it brings computer professionals (programmers, systems engineers, systems analysts, user support specialists) and humanists (faculty and graduate students) into a long-term, collegial relationship with one another. This is important, because only prolonged contact permits each group to understand and appreciate the culture and expertise of the other. It takes time to learn a foreign language, or a foreign world-view, and immersion is the best and fastest way of doing it. When this cross-cultural experience takes hold, the results can be impressive: there is an old proverb that says "a fool can ask more questions than a thousand wise men can answer," and with respect to humanists and computer technology, the same is true: once humanists have understood that the computer is a general-purpose modeling tool, as well as a communications medium, they want it to do things that are well beyond the horizon of what's possible today. And as George Bernard Shaw said, "the reasonable man adapts himself to the world; the unreasonable man expects the world to adapt to him. Therefore, all progress depends on the unreasonable man." Or woman, I would add.

For their part, the computer professionals who mediate between the goals of the researchers and the limitations of the technology are often forced to think in new ways, outside established practices and familiar methods. And when the Institute staff develop new software applications to meet these needs, they do it in consultation with the people who will use that software--with the result that the software often develops in ways that the programmers, acting on initial specifications and their own understanding of the goals, would not have predicted.

In this scenario, academics are users as well as researchers, and the computer folks are researchers as well as product developers. Our users, and our programmers, are grappling with problems that are quite generalizable, and even though their demands might go beyond what the general public needs at the moment, I don't believe that they go beyond what will be needed in the future.

To take a specific example: we have been working for some time on developing a web-based Unicode browser that would use SGML markup to provide synchronized search and display of texts in multiple character sets. For most of the business world, at the moment, Latin-1 (the character set that covers English and most Western European languages) plus one other character set is sufficient--they may have a local audience that uses Hebrew or Japanese or Greek or Cyrillic, but everyone else uses English. Hence, Netscape, Java, and the Web in general do not support 16-bit character encoding and display, or don't do so in any useful way. Our effort in this area grew out of a religious studies project that wanted to study parallel texts in many different languages, but if we succeed in developing this tool, it will clearly have an immediate audience in other academic disciplines beyond religious studies, and it is not difficult to imagine a much larger audience taking advantage of the tool once it is available.
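The encoding constraint at the heart of this example can be sketched, anachronistically, in modern Python; the sample strings and language labels below are illustrative assumptions, not data from the IATH project:

```python
# Sketch of the encoding problem: Latin-1 covers English and most Western
# European scripts, but not Greek or Hebrew; a Unicode encoding such as
# UTF-16 can represent parallel texts in all of these scripts at once.

parallel_texts = {                     # illustrative sample strings
    "English": "In the beginning",
    "Greek": "Ἐν ἀρχῇ",
    "Hebrew": "בְּרֵאשִׁית",
}

for language, text in parallel_texts.items():
    try:
        text.encode("latin-1")         # succeeds only for Western European text
        fits_latin1 = True
    except UnicodeEncodeError:
        fits_latin1 = False
    utf16 = text.encode("utf-16-le")   # a 16-bit encoding handles every script here
    print(f"{language}: fits Latin-1: {fits_latin1}, UTF-16 bytes: {len(utf16)}")
```

Only the English sample survives the round trip into Latin-1; a browser limited to one 8-bit character set at a time simply cannot display these three texts side by side, which is the synchronization problem the tool described above set out to solve.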

Other examples of this kind would include our Java-based image annotation tool, Inote, a project spawned by the needs of literary and historical researchers, or our work in adapting SGML to the description of architectural and archaeological sites, or our application of three-dimensional modeling to humanistic data (for example, to recover a sense of scale when displaying artworks on the web, or more abstractly, to model and map census data from the Civil War). It is characteristic of humanities research to want to deal with data of very disparate types, to want to represent that data with a high degree of aesthetic as well as quantitative fidelity, and to want to relate that data actively and interactively to other research in other places and in other disciplines. These researchers want all the extant records, want to search them all at once, and want a coherent and concise display of the results. And indeed, we want them to want that, because if we can provide it for them, we can provide it for a much larger audience, who will want it too, once it occurs to them that they could.

What's missing from the picture at the Institute, though, is that third party, the commercial partner. This is not to say that our efforts have been unsupported by the industry: as I mentioned, IBM, Sun, and other information technology companies have provided us with significant resources. But what hasn't happened (yet) is the involvement of commercial partners as resident members of the IATH community: our contacts with technology companies are generally sporadic and intermittent, and there isn't a lot of feedback in either direction. The technology or funding that comes to us from these companies usually comes without a commitment of personnel, and often without a formal reporting arrangement. Even when we might want to provide information about the strengths and weaknesses of the technology we've been given, there is no channel for this kind of communication; if we want access to engineers or programmers, it is often difficult or impossible to get. In part, I think this is because technology companies still have a hard time imagining that humanistic research might have, or might produce, market value. In fairness, I note that they are helped to this conclusion, in many ways, by humanists themselves, who tend to regard the market as a taint, and business as anathema. But even in the sciences, where collaborative relationships between academic research and the business community have a longer history, we are hardly overwhelmed with examples of the rapid development and effective commercial deployment of products resulting from academic research.

For example, we have corporate-sponsored places like Bell Labs (now known as the company formerly known as Bell Labs) and Xerox PARC; we have places like MCNC here in the Triangle area and a number of state-sponsored technology transfer centers in other states, such as the Center for Innovative Technology in Virginia; and we have many university-based centers now that, in one way or another, try to bring together the research capabilities of university faculty with the product development and marketing capabilities of business. A particularly interesting example of this kind of joint research and development project is, I think, the Computer Applications and Software Engineering (CASE) Center at Syracuse University, one of New York State's original Centers for Advanced Technology. With (last year) $4 million in federal funds, $2 million in corporate support, and $1 million in state money, this center incubates and spins off small technology businesses, supports graduate students, involves faculty researchers, and works on product development with large corporate sponsors. What's notable about CASE is that, unlike most of the others, it actually brings into direct contact more than two of our three categories (business people, academics, users).

The oldest example of a corporate-sponsored research facility in the field of information technology is probably the organization formerly known as Bell Labs. In its heyday, these labs fostered pure research on terms that would once have sounded academic, but on a scale that no university of the time could equal. According to Samuel P. Morgan, a member of the Bell Labs Research Area beginning in 1947 and, during the early seventies, Director of the Computing Science Research Center,

If there aren't a certain number of things taken up in the organization every year that don't work out, we're not being sufficiently aggressive, or innovative, or we are not gambling enough. . . . We work on percentages, and of course this is true throughout any research organization. We've got to have a certain number of failures. . .

Many an academic longs in vain to hear this kind of rhetoric from deans and provosts today. In fact, only a very large accumulation of wealth makes it practical to gamble in this way, investing resources in research with no immediately obvious economic benefit; as universities have been "downsized," they have become increasingly concerned about the immediate marketability of research results, in no small part because the corporate and government sponsors of that research, themselves downsizing as fast as they can, impress that concern upon them.

If you went today to a corporate-sponsored research facility like Xerox PARC, you would still hear echoes of the emphasis on the importance of pure research, but you would also hear distinct overtones of a short-term market orientation. Speaking of Xerox PARC, its director, John Seely Brown, says: "We're trying to build a culture here where people are willing to take risks, in fact love to take risks, but on the other hand also care about having a major impact on the world." This is, it seems to me, a very straightforward statement of the issues: on the one hand, you want to foster innovation, but on the other hand, you want your research to have "a major impact on the world"--and one assumes that for the corporate sponsor that impact is expressed in the marketability and profitability of the products of this research.

The record shows, though, that organizations like PARC and, to a lesser extent, Bell Labs have always had the problem of not recognizing and seizing on the marketable products their experiments have produced--and their failure in this regard has, on the whole, been our good fortune, resulting in longer-term, less predictable, more generalized and more extensive benefits than would likely have been the case if the early patents had been secured and/or protected. The list of products in this category is impressive: the mouse, bitmapped graphics, the graphical user interface, Unix, PostScript, the PC architecture. The list becomes even more impressive if we add to it some of the enabling non-proprietary standards (ASCII, TCP/IP, SGML) and software products (Mosaic, Netscape, Java, Perl, and a huge host of shareware) that drive the development of cross-platform, networked information. These products don't all have the same history or the same status--some of them do have owners, some of them do retain patent rights, some of them do have elements for which one has to pay--but they have in common the surrender of immediate economic gain (sometimes inadvertently, sometimes partially, but sometimes deliberately and completely) in favor of widespread use--in short, they behave according to the imperative of our new and poorly understood information economy. The newer brand of collaborative research and development center, like the Syracuse example, no doubt has a higher rate of success in turning research ideas into patents and proprietary goods, but I think it is too early to tell whether such centers will have the same impact as these earlier, less "businesslike" ventures. Personally, I am inclined to doubt it.

If we are going to figure out how to bring academic researchers, corporations, and users together in the most effective, most felicitous, most productive way, we need to learn the principles of this economy and the lessons of its brief history. As John Perry Barlow puts it in his essay "The Economy of Ideas":

With physical goods, there is a direct correlation between scarcity and value. Gold is more valuable than wheat, even though you can't eat it. While this is not always the case, the situation with information is often precisely the reverse. Most soft goods increase in value as they become more common. Familiarity is an important asset in the world of information. It may often be true that the best way to raise demand for your product is to give it away.

I take the time to lay out this perspective and these examples because they represent an interesting point of commonality between what academics do and what business people do. Academics (especially in the humanities) get paid indirectly for their research--not on the basis of sales of their books, but on the basis of the familiarity that accrues to their ideas. In the case of academic research, economic property and intellectual property are often separable, and the former is often much less significant than the latter. In the information technology business, these things can also be separated, and one can often see that indirect profits are more significant than direct ones: you give away PostScript, and drive the sales of PostScript printers; you give away Netscape browsers, and drive the sales of server software and intranet applications; you give away a hardware architecture and drive the sales of an operating system (in the case of PCs) or vice versa (in the case of Unix). It is true, in both academic and commercial instances, that the indirect benefit may go to someone else, but it is also true that in the aggregate, an information economy grows best and most rapidly, to the benefit of the largest number of people, when it is based on free or low-cost exchanges, universal access to basic goods and services, and profusion rather than scarcity.

If you'll bear with me, I want to follow Barlow's argument one step further, through his comparison of the nature of value in physical and informational goods:

In the physical world, value depends heavily on possession or proximity in space. One owns the material that falls inside certain dimensional boundaries. The ability to act directly, exclusively, and as one wishes upon what falls inside those boundaries is the principal right of ownership. The relationship between value and scarcity is a limitation in space.

In the virtual world, proximity in time is a value determinant. An informational product is generally more valuable the closer the purchaser can place themselves to the moment of its expression, a limitation in time. Many kinds of information degrade rapidly with either time or reproduction. Relevance fades as the territory they map changes. Noise is introduced and bandwidth lost with passage away from the point where the information is first produced.

For academic researchers, access to new information, and even to work in progress, is the most valuable kind of access: in most academic disciplines, information that is more than a few years old is probably worthless. In the business world, the same is true, but the cycle can be much, much shorter: the clearest example of the same principle is the price you pay for stock quotes which you could get for free ten minutes later.

Capitalism thrives on difference, and it thrives by exploiting the difference between adjacent economies: it wants to produce goods cheaply, and sell them dearly. In a global economy, this practice is now called "outsourcing" and it is and has been the subject of much controversy in labor and treaty negotiations. Outsourcing is a practice, though, that depends, as Barlow says the physical property system in general does, on limitations of space: if you can hop across the border and buy cheap, it doesn't work. In an information economy, based on temporal rather than spatial limitations, the difference that will drive capital is lag-time. Maybe that's fine: if the broker will pay for information which the historian, later that afternoon, can have for free, both economies can still function. And there may be other differences to consider: for the academic researcher, a large number of failed experiments may provide more information than a single resounding success; a solution which takes you 80% of the way to addressing an intractable problem may be more valuable than one that takes you 100% of the way to addressing a commonplace and well understood one; innovative beta software, even if it crashes with regularity, might be more useful than a stable but more limited product; and so on, and on.

There is another kind of difference worth considering, in the attempt to understand information economy. This is not so much a temporal difference as a relational one, and it turns on the fluid relationship between users and information. In most of our thinking about the value of information, we fail to take this fluidity into account. This is why publishers, for example, fail to understand the absurdity of the widely feared scenario in which a user downloads and prints and xeroxes and distributes for free the information he or she has only once paid to access. But as any user will tell you, in a world of networked information, value inheres not in the part, but in the whole and in its potential. If I come to a large, dynamic database of information on two consecutive days, I most likely do so with two different purposes, two different queries, two different demands. The path that I take through that data is of value to me at the time I take it, but my results are only valuable to someone who comes (at nearly the same time) to the same database with the same query. The next day, my activities of the day before have not consumed the value of the resource, because I now have a different question, and require a different cross-section of the data. Unless I can download the entire database and the tools that help me use it, I can't appropriate (in the old, physical sense) its potential value, to me or to others.

With respect to users, there is one other important and fluid characteristic. To use myself as an example again, sometimes I am a general audience, and sometimes I am a specialist. On the web, in particular, I never want to pay for the information I seek out in my capacity as a general reader--I'm simply not motivated to do so. If I can't get it for free, I'll go look somewhere else, and if I can't find it anywhere, I'll go without it. On the other hand, when I am browsing as a specialist, I am highly motivated to find a particular piece of information--motivated enough to pay for it, especially if the transaction is immediate, the charge is low, and the relevance is high. This behavior implies a commercial practice which (I know from experience) most information providers (especially publishers) consider perverse: in short, it suggests that you should give away the information of interest to the largest number of people, and sell the information of interest to more limited audiences. If you are providing an online journal (to take an example at random), you might give away the current issue and license the archive of back issues; if you are putting William Blake online, you would give away Songs of Innocence and Experience, and sell The Marriage of Heaven and Hell; if you are publishing a scientific database, you would provide abstracts and results for free, and sell access to the data on which those results are based. The rationale for valuing readers or users who don't pay goes back to Barlow's point that "familiarity is an important asset in the world of information:" unless I have had significant free access to general information from a particular resource, I am not likely to know about it, value it, and return to it when I want more specialized information in the same vein. Moreover, the availability of the general resource may affect the direction of my specialization, and may shape the need I am later willing to pay to have filled. 

In his conclusion, Barlow says that "the economy of the future will be based on relationship rather than possession. It will be continuous rather than sequential." I would agree, but I would add that one will probably play a sequence of roles in these relationships, some of which can lead to the exchange of money, and others of which will not.

And this, it seems to me, is the key to understanding the emerging information economy--not only as a global phenomenon, but also as a local one that affects the everyday practice of the university and the corporation and the average user in very practical ways: it depends on motivations that sometimes exceed the property system. To return to the historical example of Unix (or actually, its predecessor, Multics), it is worth noting that communication--not the business of communications, but communication itself--was a motivating factor in the design of this operating system. As Dennis Ritchie recalls,

Even though Multics could not then support many users, it could support us, albeit at exorbitant cost. We didn't want to lose the pleasant niche we occupied, because no similar ones were available. . . . What we wanted to preserve was not just a good environment in which to do programming, but a system around which a fellowship could form. We knew from experience that the essence of communal computing, as supplied by remote-access, time-shared machines, is not just to type programs into a terminal instead of a keypunch, but to encourage close communication.

Information demands and depends on communication, and the two together are "a system around which fellowship could form." I would argue that if we understand the nature of information, the different economies in which it is distributed, and the ways in which its value exceeds the idea of property, we can in fact build a community which includes research, product development, and the user on equal terms.

What might this community look like, and how might it work? Well, oddly enough, I think it would have to be residential--communication takes place in many ways today, through many channels, but face-to-face communication is still our highest-bandwidth channel, and sharing work-space is still a good way to increase the odds of accidental discoveries and happy accidents. It would include the three categories of people we have been discussing--researchers, business people, and users--and it would do so on terms that provided rewards appropriate to each, in accordance with the different reward structures that apply to each. For the researcher, this might mean time to experiment and the luxury to fail; for the user, this might mean ready access to support and a role in shaping the tools; for the business person, the reward might simply be the opportunity to watch the other two go about their work and observe their successes and frustrations. It's worth remembering, too, that these roles might be mixed and sequenced in various ways: as in the IATH example, the researcher may be a user, and the product developer may be a colleague. In fact, one of the principal benefits of this sort of arrangement would be, it seems to me, that it makes possible that shifting of roles, depending on the situation. I even believe that there is a place in this community for computing humanists, not only as consumers of information and information technology, but also as researchers who pose some of the most unreasonable challenges to that technology, and who therefore are the best hope of technological progress.

With respect to the products of this collaboration, and the bad old property system, I am sure that there would be difficult issues to address, and I am sure they would best be addressed up front. The researcher is not going to want to be told what problems to address, the business person is not going to want to be deprived of a product, and the user is not going to want to be treated as a guinea pig. The institutions that fund this kind of collaboration are going to want to see profit fairly distributed, when there is any, and loss fairly shared when there isn't. But even here, face to face with grim reality, I think we should bear in mind the words of Thomas Jefferson (patron saint of the University of Virginia and of the Information Age):

He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. (Letters, qtd. in Barlow).

See also: Hal Varian, "The Economics of the Internet, Information Goods, Intellectual Property and Related Issues"; James Boyle, "Sold Out"; the Digital Future Coalition; Paulina Borsook, "Cyberselfish."