Rock climbing and the world of information: technologist Carl Ledbetter to headline SLA’s Annual Conference in Nashville – Ledbetter Interview

Suzi Hayes

Suzi Hayes: It’s very nice to meet you. We’re really looking forward to having you as our keynote speaker.

Carl Ledbetter: It’s good to meet you also. I’m really looking forward to the talk. I’m sure it will be a lot of fun; it’s a great audience for this theme.

SH: We have quite a variety of people, from the new people to the very experienced, and people who work in the arts as well as people who work in technology, so it’s a wide-ranging audience. I have a couple of questions I wanted to ask, and we’ll see where that leads the conversation.

I recently read an article on leadership lessons of a rock climber. I know that you recently did a presentation where you arrived by rappelling down a rock wall, and I was wondering whether you did that just to get people’s attention or if there were some message as part of that?

CL: It is a way to get people’s attention. I am a climber, and a U.S. Mountain Guides certified lead climber. But there is an aspect of climbing that touches, tangentially, on what I talk about in technology. I’m a technologist at heart, and there’s a lot of technology and technique in rock climbing, as there is in some of the other things that I do in my professional life. But what is important about both subjects–rock climbing and the technology topics we’re going to talk about during my speech–is interesting to a far larger audience than the people who are expert in the technical components of either. Rappelling down a rock wall is a dramatic way to capture attention at the beginning in order to segue into discussing the technology that allows it to happen safely.

SH: Would you elaborate on that a little bit?

CL: Well, certainly, with respect to what I was hoping to talk about in front of the SLA. There are a lot of people who think the issues that surround the Internet, or more generally networks beyond even the Internet, and the security and privacy issues that are associated with networks are extremely difficult technically. The complexities of networking involve all kinds of stuff down to very, very delicate issues having to do even with physics at the optical layer of the network, and continuing up into very abstruse mathematical theorems on things such as encryption technologies and the problems that swirl around security and privacy.

But as a technologist, what I’m always at pains to tell people is that although technology is important in making things work, it’s almost never the thing that drives us toward what we’re doing. And it’s almost never the answer–or at least not the complete answer–to the question of how we control the technologies that we’re creating.

By that I mean that the issues that surround security and privacy on networks, although they do have technical components, are essentially public policy problems. We have to decide what we want the technology to do in order to make the things that we do with that technology acceptable to us sociologically, politically, legally, and in a lot of other ways. And on those other grounds, there are a large number of people besides technologists who ought to be involved in the discussion. In fact, probably in most instances, it’s those other groups who should lead the discussion and control the outcome.

The problem we face here is due to what C. P. Snow called the two cultures, where technologists, on the one hand, and people who are not in technology, on the other, have a hard time talking effectively to each other about these issues. One of the things that I try very hard to do is to make sure that I bridge that gap. So whether it’s doing a Rubik’s Cube on stage or rappelling down a rock wall, what I’m attempting to do is illustrate that these technology issues are things that you can touch, even if you don’t understand the specific science that underlies them.

SH: That’s interesting, because those of us who are in the library and information profession also feel that it’s our job to bridge the gap.

CL: You bet. In fact, that’s why I’m so interested to be able to do this talk in front of your group. I think library professionals are literally at the forefront of where all of these issues come together.

SH: We think so too. Getting the attention of all of the powers that be is the hard part.

CL: Yes, absolutely. And by the way, that “getting the attention of the powers that be” will be part of what I talk about during my speech. For instance, I have a sort of funny and dramatic but also a difficult and cautionary story to tell about testifying in front of the joint House-Senate Committee on Science and Technology back in the early 1990s. Al Gore was the chairman of the committee at the time, so it was before he became Vice President in ’93.

I testified to them about existing U.S. laws that prohibited, for instance, the export by the U.S. computer industry of certain mathematical algorithms in software–what are called strong encryption technologies. The U.S. at the time actually classified the export of those strong encryption technologies under the same law and by the same mechanism that it used to control the proliferation of nuclear weapons. And yet, from a conference room on the Hill in the capital of the United States, I downloaded–imported–software that would have been illegal for anyone in the U.S. to export, from the website of a high school kid in Czechoslovakia (and it was still Czechoslovakia at the time). What I was pointing out to them is that it was trivially easy for me–or anyone else anywhere in the world–to get this technology over the Net from an 18-year-old in Eastern Europe, and yet it would be illegal–in fact, a serious felony–for me to export it from the United States, even to mail it back to where it had come from. That was a pretty absurd situation.

What struck me as I was going through all of that is what motivates me now to be interested in helping to translate between the public policy and technical worlds: The people in that conference room that day, mostly congressmen and senators and their staffs, were very bright, highly motivated people who were just trying to do the right thing. But they didn’t understand what the issues were. They were baffled by this stuff because they didn’t understand the technology components well enough to make good public policy decisions.

And, of course, people coming to present their case on this or any other complicated issue with a technical component, no matter which side of the debate they’re on, will be making their arguments for reasons having to do with their own interests–and that’s not necessarily bad–but whether it is economic interest or whatever else, these advocates will present to decisionmakers diametrically opposite views on what to do, even what is possible technically. It is very hard for the responsible decisionmakers to sort out the technical issues in a way that makes sense to them so that they can come to reasonable conclusions. As long as technology is so inaccessible to such decisionmakers, we as a society are vulnerable to having decisions made that are adverse to our long-term interest. And that’s why it is so important to find a way to bridge the gap between the two cultures, to help well-motivated people understand the implications of technology choices for public policy.

Of course, I have positions on most of these issues, some of them pretty strongly held. But what is more worrisome to me than the fact that the “powers that be” would institute things that are against the positions I hold–what really scares me–is the fact that they might do so without even understanding what those positions are, or the opposite of them, for that matter. It is terribly dangerous that people, now that the Net reaches just about everywhere, are still worried about the wrong things technically. For instance, when people talk about computer security, they all too frequently mean something like being afraid somebody is going to steal their credit card number when they make a purchase over the World Wide Web. Well, you’re far more likely to have your identity stolen by the 19-year-old kid who dropped out of high school two years ago and took your credit card last night at the restaurant than you are to have your credit card number stolen on an SSL link over the network. You ought to be worried about some other things if you’re using the Internet, things people almost never think about, but which are much more worrisome. Getting people to understand and focus on what the real threats are is important.

It’s also very important for people to understand that, although technology can do a lot of things, we may not want to do some of them. Every time you make a choice about what technology should do to make something more secure or more private, or whatever it is, you will be making a trade-off associated with that decision. You simply cannot stop all of the bad stuff–even if you can decide what things are bad, a pretty hard problem in itself–without incurring a cost in dollars, effort, or inconvenience. Every time you make something harder to do because you’re trying to prevent something bad from happening, you also make it harder to do the things you want to have happen, and these choices are very complicated. It’s not necessary to understand everything about the complications, and particularly about the technologies underlying them, but it is important for the people who make these decisions to understand the consequences of the choices, and those consequences usually have deep technical roots.

The sort of naive viewpoint that a lot of people have is that we technologists in the computer industry should just stop all of the bad stuff and let all of the good stuff go ahead. It’s just not that easy. There is always a trade-off involved in such decisions, and it’s important for good public policy decisionmaking that we make those trade-offs explicit and understandable. We all make such risk trade-offs all the time in our daily lives, even if we don’t realize it. For instance, every time there’s some horrendous accident–an airplane crash, for example–somebody pontificates about the fact that we have to make airplanes completely safe. Well, those of us who do this sort of thing for a living sit there and cringe when that happens, because it sets completely unreasonable, I’d even say irresponsible, expectations. We know that actually, no, that’s not what we’re trying to do. First, we can’t; and second, we as a society don’t really want that, because we don’t want the concomitant consequences. People would never tolerate it if that were the real objective, because eliminating all of the risk associated with whatever we’re doing–whether it’s flying in an airplane, driving a car, or using the Net–even if it were technically possible, would induce a level of inconvenience that would be unacceptable. What we really ought to be doing instead is to make the level of risk we tolerate acceptable while keeping the cost of offsetting any greater risk reasonable to society.

People say you can’t place a price on human life, for instance. Well, that’s actually not true. We do it all the time. The actuarial folks will tell you exactly what a human life is worth, at least on average. I know that’s emotionally very disturbing to a lot of people, and of course nobody literally intends to say that the value of a life is fully measurable in dollar terms, that it’s worth a million or a million and a half dollars. But the point is by our behavior, by our choices as a society, we are setting that value; we are making that trade-off about cost and risk all the time. In exactly the same way, we’re making choices, or not making them in certain cases now, about trade-offs between the cost and the risk for things that we’re trying to control on the computer, on the network.

The security and privacy issue risks that are associated with the network–every single one of them–have answers. We can stop just about anything bad, just about all the time, with a pretty high degree of confidence. But there’s a cost associated, both the cost of installing the technology in money and time and effort, and also the cost in what, by deploying that technology solution, we prohibit that we didn’t intend to prohibit–the unintended consequences of our choices. And this audience should be very familiar with those kinds of issues.

SH: Oh, yes. The filtering issue …

CL: Absolutely. So you’re a wonderful audience for me, because I often use those kinds of things as an example to show other audiences composed of people who have never thought about it why it’s so hard. So, for instance, if we want to prevent some really ugly stuff from happening on the Net–and of course, we do–we start thinking of technical ways to accomplish that goal. And one of the technology choices is to filter on certain key words or addresses. But if we do that, nobody can ever find out anything about the cost of real estate in Middlesex County, because the filter blocks access because “Middlesex” has the keyword “sex” in it. Yes, we could get around that by adding another layer of technical sophistication to the software, but we’re chasing our tail since that will introduce yet another kind of problem. It’s a real problem, and there’s no simple answer to it.
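
To make the over-blocking concrete, here is a minimal sketch in Python of the naive substring filter described above. The blocklist and URLs are invented purely for illustration.

```python
# A minimal sketch of naive keyword filtering; the blocklist and URLs
# here are invented for illustration.
BLOCKED_KEYWORDS = ["sex", "drugs"]  # hypothetical blocklist

def is_blocked(url: str) -> bool:
    """Flag a URL if any blocked keyword appears anywhere in it."""
    lowered = url.lower()
    return any(word in lowered for word in BLOCKED_KEYWORDS)

# A legitimate real-estate query is blocked because "Middlesex"
# contains the substring "sex" -- exactly the false positive above.
print(is_blocked("http://example.com/real-estate/middlesex-county"))  # True
print(is_blocked("http://example.com/real-estate/fairfax-county"))    # False
```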

SH: The article on rock climbing and leadership was discussing exactly the same points that you are. That yes, there are risks, and the risks may be horrendous, but the probability of that risk happening is so infinitesimal that it’s a risk you’re willing to take.

CL: Precisely. That’s such a good example. I use it all the time as a way of illustrating the point. For instance, when I do rappel down a rock wall as a way of getting on the stage, I always make some joke to introduce this issue of risk trade-offs. Inevitably someone will ask me why I go rock climbing, because they think it’s terribly risky. My answer is I do it because skiing is too dangerous. And, of course, everybody in the audience laughs because they go skiing, especially when I do it in Colorado or Utah. They would never go rock climbing. They’d think how dangerous climbing is. But I’ve never been hurt rock climbing, while almost everybody I know who’s a skier has been pretty badly hurt at some point. In skiing there is a pretty high probability of minor injuries, a moderately high probability of severe injuries, and quite a small probability of worse. In rock climbing (when it’s done right), there is also a small probability of terrible consequences, but there is actually a much lower risk of moderate or minor injury. So which is the more dangerous sport?

And that’s precisely the issue that you’ve raised in your question. There is some probability of a bad risk occurring in nearly everything. And you try to guard against the highest probability of the worst risk, but you can’t get all of the risk out, and you often can’t eliminate all we’d like without inducing some high probability of a less awful, but significant, inconvenience or risk. Public policy decisions should be based on a careful understanding and analysis of those risk and cost and inconvenience trade-offs. I have lots of really interesting examples that I think your audience will love, that will help people understand exactly the sweep of that, from the economic issues that are associated with our choices to other kinds of concerns that touch on things like criminal activity on the Net–what wrongdoing could be concealed by certain kinds of privacy or encryption technologies–and the trade-off of preventing those crimes against the inconvenience and loss of privacy of individuals when those technologies are compromised.
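
One way to see the skiing-versus-climbing point quantitatively is as an expected-harm calculation. The Python sketch below uses entirely invented probabilities and severity weights; it shows only the probability-times-severity structure of the trade-off, not real injury statistics.

```python
# Toy expected-harm comparison; every number here is invented purely
# to illustrate the structure of the risk argument.
def expected_harm(risks):
    """risks: list of (probability per outing, severity weight) pairs."""
    return sum(p * severity for p, severity in risks)

#            (minor injury)  (severe injury)  (catastrophic)
skiing   = [(0.05, 1),       (0.005, 50),     (0.00001, 10_000)]
climbing = [(0.005, 1),      (0.0005, 50),    (0.00001, 10_000)]

print(f"skiing:   {expected_harm(skiing):.3f}")    # 0.400
print(f"climbing: {expected_harm(climbing):.3f}")  # 0.130
# Same tiny probability of the worst outcome, but much lower expected
# harm overall -- the sense in which climbing can be the "safer" sport.
```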

SH: I thought I might change directions here. One of the reasons that we were interested in having you speak to us was because you’re known for your global perspective on information technology. And I know that you manage offices around the world. Do you have any insights on how to be an effective manager on a worldwide basis? Or if it matters where your management offices, or the people you’re working with, are located?

CL: It matters a great deal. And it is intimately connected with some of the issues we’re talking about on the Net. In fact, some of the examples that I’ve chosen to line up for this talk touch exactly on that issue. So let me give you a little preview of some of them.

When we do international banking, the standard that most of the world uses now is an encryption technology that was developed in the United States. It’s called the Data Encryption Standard, or DES. DES is very complex technically. It’s a very, very good algorithm for encrypting information, particularly those kinds of pieces of information concerning financial transactions that you wouldn’t want disrupted. It has all kinds of very clever mechanisms for confirmation and what’s called non-repudiation, all things that are good to do for secure financial transactions.
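
For readers who want to see what using DES looks like in practice, here is a minimal sketch using the pycryptodome Python library. The key, message, and mode choice are illustrative assumptions; real interbank protocols layer far more machinery on top.

```python
# Minimal DES encrypt/decrypt sketch using pycryptodome
# (pip install pycryptodome). Key and message are invented.
from Crypto.Cipher import DES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(8)   # 64-bit key on the wire; 56 effective bits
iv = get_random_bytes(8)    # initialization vector for CBC mode

cipher = DES.new(key, DES.MODE_CBC, iv)
ciphertext = cipher.encrypt(pad(b"transfer $1,000,000 to acct 42",
                                DES.block_size))

decipher = DES.new(key, DES.MODE_CBC, iv)
print(unpad(decipher.decrypt(ciphertext), DES.block_size))
# Encryption alone gives confidentiality; the confirmation and
# non-repudiation properties mentioned above need MACs and digital
# signatures layered on top of this.
```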

There’s a side note to this technology. When the original DES was introduced back in 1977 as what is called a 56-bit encryption technology (the larger the number of bits, the stronger the algorithm is theoretically), it was thought it would remain secure against available computing power for a long, long time. By 1998 the Electronic Frontier Foundation showed that with diligent effort you can actually crack DES–they did it in three days. It was difficult. It was costly to do it. It took some very smart people and a lot of computing power. And it’s not at all clear that it would be worth it to set up a computer to crack whatever comes through encrypted by DES on a general basis–which goes again to the risk versus cost trade-off, because you might expend all that money, time, and effort to crack a message only to discover that it was a message from my wife asking me to stop by the dog groomer to pick up the dog on the way home from work, in which case you would have spent thousands of dollars worth of computer time to find out something of no consequence. So the fact that DES can be cracked doesn’t mean it’s necessarily an inadequate technology.
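
The scale of that 1998 effort is easy to sanity-check with back-of-the-envelope arithmetic. The search rate below is an assumed round number, roughly in the range reported for the EFF’s machine, not a measurement.

```python
# Back-of-the-envelope brute-force arithmetic for 56-bit DES.
# The keys-per-second rate is an assumption for illustration.
keyspace = 2 ** 56            # possible 56-bit keys: ~7.2e16
keys_per_second = 9e10        # assumed aggregate search rate

full_scan_days = keyspace / keys_per_second / 86_400
print(f"full keyspace:  {full_scan_days:.1f} days")      # ~9.3 days
print(f"expected find:  {full_scan_days / 2:.1f} days")  # ~4.6 days
# On average the key turns up after searching half the keyspace, which
# is why a roughly three-day crack in 1998 was plausible -- and why
# every added key bit doubles the attacker's work.
```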

But if DES can be broken at all by computing equipment that is readily available, like a PC, then we’d want to know it so that we can provide greater levels of safety through stronger encryption. As computing power has increased since 1998, that has become an issue. I actually managed to decrypt a DES message myself a couple of years ago on my laptop–exactly the kind of laptop I’ll have with me during the talk–and I’ll tell that story. So the industry has adapted to that by deciding to use a stronger encryption technology called triple DES. But let’s put that aside, because you asked about the international implications, and there is one in the DES story.

The international issue for DES is that the French wouldn’t use it. Nor did they permit it to be used with financial transactions with French banks. And the reason for that is quite interesting. The French, from the days of de Gaulle in World War II, insisted upon having a completely different set of codes from all the rest of the allies because they were worried about French secrets–about keeping them secret even from allies. The French government’s official position about DES was that, although no professional cryptographer has ever found a way to do this, they were afraid that, because it was developed in the United States, the U.S. government, the U.S. national laboratories, the National Security Agency, or other U.S. businesses or agencies might have what’s called a back door to the code. In other words, the French were worried that the U.S. government or some agency of the government might use that back door to decrypt French financial and diplomatic information, and compromise French interests.

Now, just so people don’t get on their high horse about it, there are people other than the French who think that there might be a back door in DES. There are professionals in the United States, colleagues of mine, who, although they have never offered any proof and they have no evidence, are still skeptical that the United States would have put out this very strong encryption standard without having some way to break it. And the reason for that is that all of these people are programmers. It is almost a matter of routine–in fact, it’s standard practice in most software companies–that programmers put something into their code to allow them to break into the code when they’re maintaining it, a perfectly legitimate practice. So the notion that there wouldn’t be one in DES is something that many security professionals find hard to believe.

So there’s an interesting international problem. How do you convince the French that DES, or the triple DES standard, is secure enough to do diplomatic and economic transactions with banks in France?

SH: The stuff of spy novels.

CL: Yes, absolutely. It’s exactly like spy novels.

SH: And all of our spy novels tell us yes, there is a back door, and, of course, the evil-doer can find it.

CL: Absolutely. And it’s not all that far-fetched, since almost every software program I’ve ever been associated with in my career has had a back door. It’s a perfectly legitimate technique that allows the company that wrote the code to act in the interest of the customer in times of duress, when something’s going wrong and there’s a maintenance issue that has to be solved. So, although my belief in the case of DES is that there is no back door (some very smart people have tried to find one for years without success), it’s not a completely nutty concern.
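
For illustration, a maintenance back door can be as simple as a hard-coded bypass credential. This Python sketch is deliberately simplified and entirely invented; it shows why such a mechanism is both convenient for the vendor and dangerous to everyone else.

```python
# Deliberately simplified sketch of a hard-coded "maintenance back door."
# Every name and value here is invented for illustration.
MAINTENANCE_KEY = "xyzzy-support-override"  # hypothetical vendor secret

def authenticate(user: str, password: str, user_db: dict) -> bool:
    """Check a login; the first branch is the back door."""
    if password == MAINTENANCE_KEY:   # bypasses the user database entirely
        return True
    return user_db.get(user) == password

# Legitimate use: a support engineer recovers a locked-out customer.
# The danger: anyone who learns the constant -- from a leak, an insider,
# or reverse engineering -- gets in everywhere the product is deployed.
print(authenticate("anyone", "xyzzy-support-override", {}))  # True
```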

Here’s another example with international consequences. There is a burgeoning habit of software companies moving software engineering tasks offshore. Just last night on CNN, I saw that there is a big hubbub about the fact that some diligent reporter has discovered that IBM is thinking very seriously of moving lots of its software programming jobs to Ireland or Bangalore, India, or to Eastern Europe, as many companies have done. There are lots of places in India, Ukraine, and other parts of Eastern Europe with very well-trained programmers who are tremendous at writing code, especially if it’s Java- or Linux-based code. And American companies are moving programming over to these countries because it’s so much less expensive for them to do it. I mean literally a factor of four to five less expensive for the same level of talent and productivity.

So here you are, a company in the United States doing this. Or you’re the U.S. government looking over the shoulders of companies doing this. One of the most dangerous threats in this era of terrorism, one of the most severe threats against the United States, is the economic cost we’d bear if somebody could bring down the Net. If somebody could bring down the network, the mechanism in which such a large fraction today of the economic interest of the United States is involved, we’d be catastrophically hurt. So much business goes over the Net that bringing it down, or making people believe that it’s dangerous to do business there, would be terribly damaging to the United States.

So think about it–you’ve got people over in Bangalore, India, writing code. Some of them may not particularly like the United States. And maybe one of those programmers may decide he’s going to write a trap into some piece of code so that he can bring the application, or the whole network, down once the code is deployed by all of the banks in the United States. The banks don’t know about that, because this is a piece of code that’s written by a prestigious American company, like IBM or Sun or Microsoft or Novell or any of them. And yet there is some bad thing that’s been inserted in the code, and it’s managed to get through the testing process. And believe me, there are a lot of ways that can happen. How do we protect against that?

How do you put in place provisions that would allow companies, not only in the United States but in other places in the world, to know that there isn’t something bad embedded in software they have to rely on, especially if it’s not even known to the company that is responsible for writing and maintaining the code? That’s a very hard problem technically. If people had any idea of how big and how complex these software programs are that we all depend upon, and how vulnerable we all are to this risk, they would be shocked.
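
As a concrete illustration of how sabotage can slip through testing, consider malicious behavior gated on a condition that never fires during the test cycle. The Python sketch below is entirely invented; the point is that black-box tests can exercise only the behavior the trigger lets them see.

```python
# Toy "logic bomb": malicious behavior gated on a condition that
# ordinary testing never exercises. Date and names are invented.
import datetime

def post_transaction(amount: float) -> str:
    # Until the trigger date, the function behaves perfectly correctly,
    # so every unit test written today passes.
    if datetime.date.today() >= datetime.date(2030, 1, 1):
        raise RuntimeError("settlement network unavailable")  # sabotage
    return f"posted {amount:.2f}"

print(post_transaction(125.50))
# Defense has to come from code review, provenance controls, and build
# auditing -- the trigger is invisible to black-box tests run years
# before it fires.
```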

The number of known bugs–known errors, known problems–in code shipped by very good, reputable companies in the industry sometimes runs to tens of thousands. Microsoft, for instance, issued 72 different security patches for Windows XP in 2002, more than one a week, trying to fix the known problems. And there are many unknown ones, of course, that crop up all the time. Who’s to say that there isn’t one that’s been deliberately planted by a saboteur, and how would you guard against that?

SH: You were very active in the Microsoft lawsuits; that was another one of my questions. Personally, as an ordinary consumer in my workplace and at my home, I haven’t noticed anything being the least bit different since the Microsoft trials took place. How do things look from your point of view?

CL: Well, that’s a very complicated question. First of all, there are several different cases under way at the same time with regard to Microsoft. The original antitrust case that was brought against Microsoft did result in a consent decree that Microsoft signed which said that Microsoft, without admitting any guilt for anything in the past, would agree to do certain things in a different way. Every piece of evidence I have at the moment is that Microsoft is doing what it said it would do in that consent decree.

On the other hand, many of the things that Microsoft was charged with doing–and this is the complaint that continues from some of its competitors and from the states that remain in the case–are pretty old by the standards of an industry that reinvents itself every 18 months; all of the issues in the U.S. Department of Justice case date from 1998 or earlier, fully four generations ago in the industry. More interesting is the Microsoft case in Europe. Although it sounds the same in the general press, that is a very different case, brought against Microsoft by the European Commission. Many observers in the United States and Europe point out that Microsoft has gained the benefit of its previous wrongdoing in commingling–bundling–pieces of software together so that they would be difficult to separate, thereby driving other companies out of business and, as a direct result, causing the prices of software to rise (or not to fall) for consumers–which is the only issue that antitrust law actually addresses.

Microsoft still insists that there’s no way it could disassemble the various pieces of code that are part of its operating system–whether it’s the directory, or the media player, or the browser–which are today a part of Windows. I certainly have a technical opinion about whether or not that would have been possible to do, and I testified about that. But Microsoft and the Justice Department reached a settlement about how they were going to handle the issue going forward, so, in the United States at least, this is pretty much resolved, for the moment anyway.

But it’s a complicated issue that will arise again, with Microsoft or other companies, because the computer industry can’t really progress when there are too many different standards. The situation is very much like the railroads in Australia. I don’t know if you’ve ever tried to ride all the way across Australia from Sydney to Perth, but you literally have to change trains four times. And you might say, well, that’s no big deal, you’ve got to rest anyway, stretch your legs or whatever…. But the reason you have to change trains is that there are four different gauges of railway in Australia. They can’t run all the way across the continent on the same tracks, so they run tracks of different gauges into the same railroad stations, and you have to change trains to get on the different-sized track.

Similarly, the problem in the computing industry is that if you don’t have some common set of interoperability standards programmers can write to, nothing works. On the other hand, if you have only one standard, you’ve permitted a situation where you have a monopolist who’s in control of things and can pretty much dictate to the industry and set prices in an arbitrary way that’s injurious to consumers and to business.

What we’re really struggling with in the industry, and what the courts are struggling with, and, to be fair, what even Microsoft is struggling with, is to find a way to get to some reasonable number of standards and a set of interoperability conventions that work in a reasonable way most of the time, so that people can write software and expect it to work across the whole network.

The Microsoft case has been very interesting in this regard because Microsoft fell–by its practices, and maybe by the inattention of both government agencies and its competitors–into a situation where it was in control of several of those major standards, and then used that monopoly to try to leverage itself into control of some other key technologies and standards. There is a contrasting lesson in what happened to Apple. Apple took a very different approach. It had what is called a closed operating system. From the middle ’70s on, its operating system (OS) did not permit programmers to do things in the sort of free-form way, with lots of hooks into the guts of the operating system, that DOS, the original Microsoft OS, did. And the market pretty soundly voted on that choice: Apple has about 4 percent market share as a result. A lot of people–I would have been one of them 25 or 30 years ago–would have said, “I can’t do what I want to do with Apple’s OS, because I can’t see the things that I want to see.” By contrast, Microsoft opened up DOS. It got everybody to jump onto it. Then it moved users to Windows when it became successful. And once it had everybody on the flypaper, it rolled it up and put it in its pocket. We can hardly blame it for trying to do that, I guess. So this is a really complicated issue.

SH: Yes, indeed. I suspect we could go on talking about this for a good long while, but I want to go in a different direction once again. A large percentage of SLA membership comes from academic libraries. And I know that you have a background as a professor.

CL: I do. I was a professor and an academic dean, so yes, I’ve been there.

SH: And do you still teach?

CL: Well, whenever I get a chance. I do guest lectures occasionally, and I love to do it. The kind of talk I’m doing for you is an extension of that–it’s teaching, in a way.

SH: Do you see any noteworthy differences or commonalities in information-seeking concerns between the academic and the corporate environments?

CL: I do. There are certainly more similarities than there are differences, but there are differences, both in tradition and in necessity: the kinds of things you need to be able to do, and also the ways in which people choose to do them, differ in academia from the way they are in business or government. The academic world has always had a much greater tradition of open exchange of information. It’s really the currency of the realm for professors to publish results, to have those results be known, to further their own reputations and the reputations of their universities, their laboratories, and their departments.

So because of these traditions, and for other good reasons, the academic world tends to be considerably more open about the interchange of information than the business world, and certainly more so than the government world is, because in the commercial and government realms there are many economic, competitive, military, defense, or security issues for which certain information has to be much more strictly controlled. Again, we’re always fighting for the right balance, and the weighing mechanism is very different in these different arenas.

Moreover, the academic world is the origin of several of the movements that are affecting the computer industry in important and complex ways. For instance, publication of encryption algorithms, the rise of the open source movement, the Linux world, the 48 different open source licenses that are out there, the copyleft movement–I’m sure your folks will have heard of that, right?

SH: Yes.

CL: So those movements all either derive from, or are in large part fueled by, people who come out of or are associated in a strong way with the academic world. And many of those things that come from these movements have been beneficial to the industry because they sort of keep it balanced. In general, of course, the corporate world and the government world, and, particularly, the military or defense world have far greater reason for wanting to maintain confidentiality and security about what they’re doing. But they may also go too far in the other direction. And, of course, that’s the reason we have all of these battles with the Freedom of Information Act, over what access people ought to have, under what circumstances, to what information.

The academic world tends to fall predictably in most instances on one side of that debate. And the defense world tends to fall pretty predictably on the other side. And the commercial world tends to have a little bit of both, with its interest being generally motivated by protection of intellectual property and commercial interest.

One of the interesting sidelights of that particular debate is that in this era, a lot of what we do and invent in the commercial world is valuable in a way that was never anticipated by the people who wrote the legislation establishing the Patent Office many years ago. In almost all cases, if a commercial company has something that is really valuable, it may decline to patent it, because it turns out to be easier to defend it as a trade secret than to maintain it as a patent. Once a patent is published, there are so many ways around it. Basically, by writing a patent you’ve given others a blueprint not only of how to do what you’ve done, but also of how to find a way around it so they won’t have conflict issues in the Patent Office.

So here’s an example where the 250-year-old technology of how to describe patents and protect intellectual property is just woefully inadequate to the modern era. In fact, until a decade or so ago, it was not even possible to patent an algorithm, which, of course, is the single most important thing you can invent in computer programming.

SH: I bet a lot of people didn’t think of it that way. I know that some of us–those of us who worked in the industry–certainly are aware of that.

CL: Yes, if anything is really important, the company just keeps it secret. Mostly, what patent portfolios in the big companies provide is security against huge lawsuits for infringement of other, unrelated patents. When you see things like the collision that almost occurred between Digital Equipment and Intel a few years ago–they filed patent infringement suits against one another–you see the danger. In that case the lawyers took a look at what was going to happen, took a deep breath, and said, “Oh, my God, what have we done?” And they came to an agreement–they thought better of it and settled. So instead of having it blow up, it was all over in two or three months. That was smart, but also instructive: the discovery motions would have gone on for a decade, and it would have cost them both hundreds of millions of dollars to litigate the case. So now all of the big companies in the industry basically use their patent portfolios for what they call mutually assured destruction–I’ve literally heard it described in those words. Most of the big companies enter what are called cross-licensing agreements, which essentially ensure that the lawyers won’t run the company into a decade-long intellectual property dispute with another major company that would drag them both down and kill them.

This happens because patent law is now entirely inadequate for modern inventions, especially in the computing industry. The ability of the courts to decide issues of patent law that are, with today’s technology, so complex it’s brutal even to contemplate them is woefully inadequate. You’ve got judges making decisions about intellectual property issues based on science that would take them not the three months the case is allotted, but three semesters just to understand at a superficial level. Judges, for all their education–much less juries of laypeople–have no hope at all of understanding nanotechnology or NP-complete algorithms, and just can’t decide these cases.

SH: Yes, we call upon people who don’t have the background to do those things.

CL: Absolutely. And they do the very best they can. And under the circumstances, they do, on balance, a pretty good job. But I think we’re heading for an era in which we’ve got to do something remarkably different with intellectual property protection, and I don’t mean only U.S. copyright issues, but international patent issues as well, or we’re just going to grind ourselves into the ground.

SH: We’re learning it’s viewed quite differently outside the United States as well.

CL: Yes, indeed so. And some countries, of course, basically don’t even recognize our intellectual property concepts. The computer industry has tremendous problems with exports of software to certain countries. It was a really severe problem in India for many years; changes in Indian law about a decade ago have gone a long way toward fixing that, which is why you see India as a site where so many software companies are moving large programming groups. But today we have that problem in spades, in China in particular. We see it to a lesser extent, whether it’s just piracy or whether it’s worse than that, in certain countries in South America, and particularly in Eastern Europe.

SH: Well, you have certainly whetted my appetite for our keynote address. I’ve come to the end of my questions. Is there anything else that you want to share with the audience before they see you in person?

CL: Well, this is going to be a lot of fun. You asked me earlier if I like teaching. One of the roles that I frequently play is to stand up on stage and through a series of stories and vignettes try to make complex technical issues as entertaining as possible. I try to explain to people why and how they are complex, why they’re important, and why they’re not as obviously solvable, or solvable in as easy a way, as they might think.

So I’ve been licking my chops about this particular speech, because so much of that preamble is stuff that your folks will understand immediately by their own direct experience. We’ll be able to get at some much more interesting and entertaining stuff that’s a layer or two down from that, which I hope will help them understand exactly what they can do to be influential, as you said, to get the attention of the right people to understand and act on the problem.

My father was a minister and he was a very scholarly minister. He had advanced degrees in ancient Greek and Hebrew, as well as in church history, so I used to go to church two or three times a week, because that’s what you do when you’re a preacher’s kid. And I would listen to him give these extremely arcane sermons on subjects that had to do with the translation of this particular text in the King James Bible from the original Greek, and other unimaginably technical–some might say boring–topics that could be pretty dry to anyone but an expert. But his congregation was never bored, because my father was also a professional magician. In every sermon I ever heard him deliver, someplace during the middle of it, he would do a magic trick. The magic trick was always one that illustrated some aspect of the thing that he was trying to teach the congregation. So I learned that you can illustrate deeply complicated, technical, arcane issues in a way that can be appreciated by a lay audience, if you work hard enough at finding the right analogies to foster understanding.

I grew up to be a technologist, rather than a theologian, but my subject is also technical and just as inaccessible to nonexperts as anything my father’s sermons covered. The lesson he taught me is that the difficulty of the subject matter doesn’t excuse me from the responsibility to make my subject clear and intelligible when it is important for a larger purpose–in the case of technology, for having the public and various officials make good decisions about the use of technology in public policy. Finding a dramatic and accurate way to convey the important issues in a way that makes them accessible to a lay audience is my goal. After all, most of us are not going to go out and learn about the mathematics of encryption technologies, any more than my father’s audience was going to go out and learn Attic Greek. But we’re all going to be dramatically affected by the choices that are made in the next few years about security and privacy technologies on the Net, so we need an informed citizenry in order to get those decisions right.

It is wrong for people who are experts in a subject, no matter what it is, to think that they’re excused from the responsibility of making the way in which their subject contributes to the public policy debate accessible to those who need to be a part of that debate. So I try to find ways to close the gap. When people walk away from one of my talks, I hope they leave with an understanding of something really profound and important, even if they understand it through amusing examples rather than by having me write the equations on the wall. What I want to do is get to the point where we can discuss some of these interesting issues in technology, security, and identity, and the ways in which they can conflict with one another, in a way that will make it accessible so the members of the audience can contribute meaningfully to the public policy debates about these important decisions. And I’m particularly pleased to be talking to the SLA because I think they will have a very large role to play in influencing decisionmakers to act in the right way.

SH: It sounds wonderful to me. I think it’s going to be a really inspirational talk for launching our conference. Thank you for giving us this preview.

Suzi Hayes is the SLA 2004 Annual Conference Program Committee Chair. She can be contacted at suzihayes@earthlink.net

COPYRIGHT 2004 Special Libraries Association

COPYRIGHT 2004 Gale Group