
Why are computers so hard to use?

P. Lutus

(This is an old article I keep around to remind myself how much has changed. This article was written about the time of the earliest versions of Linux.)

"With each passing year, because of advances in computer technology, there are more things, each more sophisticated, that we aren't allowed to do any more." — P. Lutus

Computers are much smarter and cheaper than before, but they are no easier to use than when they were stupid and expensive. Why is this?

There are many reasons. Each of the reasons, taken by itself, is not enough to stop progress toward a simple, powerful, flexible computing environment, but all of them together explain why we are here. Here are some of them.

     Hardware: A fast, powerful, inexpensive processor

The Goal: Use modern materials technology to create the fastest, least expensive processor.
The Reality: The processor has to run older computer programs already in place (the "legacy" requirement). This means new processor features cannot interfere with old processor features, including old features that stand in the way of rapid progress.
The Solution: Take the risks required to abandon a successful but aging processor design. Persuade the computer buying public to replace its software along with its hardware. And smile while you say this.
Discussion: This problem affects the entire computer world, not just small computers. Software development is by far the most costly part of computer technology, and no one wants to throw away software that works. It is easier and less expensive to think of ways to make old software work on new computers than to write new software.

One partial solution is to write code that is more portable. This requires strict adherence to software development guidelines, something that everyone agrees is important, but no one ends up doing.

     Software: Flexible applications that meet people's needs

The Goal: Design programs and data structures that offer power and sophistication, but that also can be shaped to meet future needs the designers cannot imagine.
The Reality: Gigantic applications that contain many individual solutions crafted by software designers to meet specific needs, but that cannot be changed in the field to meet new needs.
The Solution: Make computer programmers use their programs. This may sound too simple to work, but it is obvious that programmers are not using their programs in the way that end users do. Secondary solution: Create tools that allow the design of new program functions in the field, in a way that the average user will understand.
Discussion: Programmers usually respond to a request for a new feature by writing the feature in the computing language with which they are most familiar. This is the easiest solution for the programmer, but it creates a perpetual dependency between end users and programmers — end users ask for something, programmers provide it, but in a way that solves only that one problem, and does not provide a general solution to problems of that type.

     Graphical User Environments: A step forward?

The Goal: Create a unified environment (the reality behind such labels as "windows") that standardizes the way that applications communicate with the keyboard, pointing device, display, printer, and file system. Share as much computer code as possible by creating a standard library structure, and encourage participation in the library system. Encourage programmers to write their programs in a standardized way, so that users can move between applications without changing how they work.
The Reality: Before "windows" et al., one had to test an application in every available environment to be sure it worked. After all, someone might be (for example) using an odd display adapter that behaved in a nonstandard way.

Now that we have "windows," nothing has changed. I recently wrote an application in C++ under a modern graphical environment, expecting to realize the benefits of this standardization. I ended up having to test my application in every version of the environment (it behaved differently in, and required revision for, each and every one), and to cap it off I then received bug reports from several people who owned a particular display adapter — I had to partially rewrite my program to accommodate that particular adapter when used in that particular version of the environment.

The Solution: Strict compliance with hardware and software design guidelines, so that all devices appear the same to all applications within the environment.
Discussion: Hardware designers must adhere to a set of strict rules, and resist adding non-standard features in a lame attempt to set themselves apart from their competitors. Hardware designers must also write and test software drivers for their adapters that are absolutely bulletproof — they must work exactly the same as all the other adapters, running all available applications, in all standard hardware configurations.

The foregoing point should be obvious, but it is almost never done. Browsing the Web, one regularly sees lists of incompatibilities that remind one of the bad old days of DOS — "if you are using computer X and adapter Y, then you can't run program Z."

     Object-Oriented Programming: Panacea or Hype?

The Goal: Redesign computing environments to focus on computer users and their needs instead of a focus on computer programs and their needs (in a manner of speaking).
The Reality: Instead of the old reality — a system filled with programs that wouldn't talk to each other — we now have data and programs enclosed in packages. They still won't talk to each other, but (positive sign) they are sitting closer to each other than they used to.
The Solution: Again, make computer programmers use their programs. In particular, programmers should try to perform a normal task, from beginning to end, as an end user would. And take notes.
Discussion: The current embodiment of object orientation is ridiculous. You can drag any object from any application and drop it on any other application, but there the similarity to the original goal ends. In general, the recipient application doesn't know what to do with the object it has received.

It isn't enough to create an object composed of disparate elements. There has to be a purpose to that assembly — the object should be more than the sum of its parts. In particular, it should be obvious to end users how to use this new ability to solve old problems in new ways.


Some aspects of the present situation cannot be avoided. There is an essential tension between a perfectly coordinated computing environment on one hand, and a democratic society enclosing a free, competitive marketplace on the other.

For example, the idea behind a common software library is a good one. Instead of requiring every programmer to write a particular routine anew, you can choose a very well written version of that routine and offer it to all through a sharing mechanism. The sharing mechanism in Windows is called the "Dynamic Linking Library," and the files containing the library have the suffix .DLL (in case you were curious).
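On the UNIX side the same idea appears as shared object (.so) files, loaded at run time. As a minimal sketch of the mechanism, and not a description of any particular product, here is a program that loads the standard C math library as a stand-in for any shared library; the fallback name "libm.so.6" is an assumption about a typical Linux system:

```python
import ctypes
import ctypes.util

# Locate the shared C math library -- the UNIX counterpart of a DLL.
# find_library can return None on unusual systems, so fall back to
# the common Linux library name (an assumption, not a guarantee).
libname = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libname)

# Declare the signature of one shared routine, then call it. Every
# application that loads this library shares this single, well-tested
# implementation of cos() instead of writing its own.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))
```

The appeal is exactly the one described above: one well-written routine, offered to all through a sharing mechanism.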

But in practice, this scheme isn't working very well. Most companies write their own DLL files and share them only within that company's applications. This saves some space for the end user, but far more space, and far more programming effort, would be saved if sharing were more general.

Also the library scheme has one very serious flaw: if the common library contains an error, all applications that use the library will fail at once. This actually happened recently — Microsoft released a new version of a DLL they maintain, but this particular file contained an inadvertent error. Suddenly software developers all over the world (including the author) were besieged with reports of an error, an error about which they could do nothing.

In some ways the shared library paradigm reflects (negatively) on the nature of human society. For the shared library method to work, we have to pretend to be members of a utopian society, one in which the most talented technologists act for the common good. In reality, when the scheme works, it is usually because someone acted for the common good accidentally.

The next technological breakthrough will not be in technology; it will be in marketing.
We already know how to create a great computer and a great environment. The problem lies in how to fund, produce and market these ideas. At the moment we are paying too much attention to issues of competition, intellectual rights and novelty for novelty's sake, and too little to the issue of optimal technical solutions.

The next big breakthrough will not be a smaller, faster, cheaper computer, although that will happen too. The next big breakthrough will be a method to unify the best ideas in computing with reliable financial backing and a persuasive marketing campaign based on the real needs of end users.

Will Microsoft be the source of this breakthrough? I personally don't think so. In my view, Microsoft is following a very conservative path, based on incremental improvements on what has worked in the past, a strong emphasis on continued high profitability, and little investment in alternative approaches.

These are the reasons why computer technology has produced no measurable productivity increase in the workplace — too much competition for the sake of competition, too little investment in basic research, too great a focus on the bottom line, too many hardware and software vendors competing for a thinning profit margin. And Microsoft, the one organization that can afford to act differently, instead acts like a very large version of a small software house — stature, but no vision.

     A digression — UNIX vs. Windows: A biased, subjective comparison

I personally feel that UNIX is the standard to which everything else is compared. This is called "irrational bias," and I won't try to justify it to you. I am particularly reminded of UNIX's innate superiority every time I try to set up a version of Windows. In setting up UNIX, you can use previous configuration files, you can automate the process using scripts, you can even copy a complete operating system from one machine to another with a reasonable expectation that it will run on the destination.

None of these is true of Windows — every time you set up Windows, you have to start from scratch. You must make hand entries for dozens of prompts, you have to be there when each prompt appears, and you can't automate any part of the process. In this sense, Windows is the most certain guarantee of the value of unskilled labor in modern times. (You can copy an entire hard drive to another hard drive using specialized techniques, thus cloning Windows, but I am talking about what you can do with Windows, not in spite of it.)

This lack of automation extends to the operation of Windows programs as well. If you want to replace a phrase in a long text file, you have to do it by hand, every time. If you want to convert a directory of graphic files from one graphic format to another, you have to load each graphic individually, convert it by pressing the same buttons in the same way, and save it again. If there are 100 graphic files in the directory, you perform the same hand motions 100 times.
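The kind of automation in question takes only a few lines of script. As an illustrative sketch (the file pattern and phrases are made up, not taken from any real project), here is the phrase-replacement task done once for an entire directory instead of once per file by hand:

```python
import pathlib

def replace_in_directory(directory, old, new, pattern="*.txt"):
    """Replace a phrase in every file matching pattern under directory.

    One script run takes the place of one hand edit per file,
    whether the directory holds one file or one hundred.
    """
    for path in pathlib.Path(directory).glob(pattern):
        path.write_text(path.read_text().replace(old, new))

if __name__ == "__main__":
    # Example invocation; "." and the phrases are placeholders.
    replace_in_directory(".", "old phrase", "new phrase")
```

The same loop shape handles the graphics-conversion example: substitute an image-conversion call for the text replacement, and 100 files cost the same keystrokes as one.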

     Here is my biased, subjective point-by-point comparison of UNIX and Windows:

Appearance
  UNIX: Ugly.
  Windows: Beautiful.

Flexibility
  UNIX: Very flexible, some say too flexible. Bends and then breaks with little warning.
  Windows: Not flexible. Breaks without bending first.

Graphics capability
  UNIX: Tentative, experimental.
  Windows: Innate.

Automation
  UNIX: Inherent.
  Windows: Practically nonexistent.

Communication with other file systems
  UNIX: Humbly coöperative.
  Windows: Belligerent.

Extend environment or applications to include new capabilities
  UNIX: Quick, but requires substantial expertise.
  Windows: (1) Never. (2) Maybe, if someone at Microsoft has the same idea at the same time, knows a programmer, and has clout. If you are not at Microsoft, ESP might work.

Write new applications
  UNIX: Easy: standard environment, obvious process, powerful environmental features and tools.
  Windows: Very complex: poor documentation, many gotchas. To succeed you have to be very smart and very single. Up there with the classic hard things of modern times: landing the Space Shuttle on a rainy night, hitting a major-league fastball, or explaining Dan Quayle to an extraterrestrial.

Cost
  UNIX: Free (Linux, FreeBSD, others). And even beyond free — the free vendors try to compete for your "business" by telling you all about the features of their free OS.
  Windows: Expensive — after all, Windows programming is hard, and Windows system programming is even harder. Someone has to pay…

Contribution to the progress of computer science
  UNIX: Substantial and ongoing, as a positive object lesson.
  Windows: Substantial and ongoing, as a negative object lesson.

Meets the needs of end users
  UNIX: Only if the "end user" is a congenital techie.
  Windows: Moderately, but may be training end users to expect too little from their computer environments. Offers too little automation and cleverness, requires too much manual labor. Demands that the user learn a lot, while in turn learning nothing from the user.

Future potential
  UNIX: Substantial, but needs a graphical user interface to keep up.
  Windows: Substantial, but needs some of the power and flexibility of UNIX for credibility.

These Pages Created and Maintained using Arachnophilia.

