









                      Towards the Future of Desktop Computing:
                      ________________________________________
                             Time to Debunk a Few Myths
                             __________________________


                 This author has been increasingly frustrated by a
            flurry of articles in the popular computer magazines which
            contain glib and oft-repeated assertions, liberal (to say
            the least) attempts to redefine history, and other
            statements which generally seem to be creating a groundswell
            of support to push the computer industry in a profoundly
            disturbing direction.  While one could easily dismiss one or
            two such articles, it has come to the point where somebody
            really must speak up and say something.

                 Note well that this article, although it will most
            likely be of no particular offense to the religious powers-
            that-be in Iran (and hopefully won't result in any death
            threats), is likely to step on a few toes, attack a few
            sacred cows, and offend somebody somewhere.  Be forewarned.

            A Critical Look at UNIX

                 There is probably nobody in the world of personal
            computing who hasn't heard about the UNIX operating system.
            In most cases, these people don't really know much about the
            UNIX system, other than that it's supposed to offer "true
            multi-user, multi-tasking" support, that it's supposed to be
            an "open system", portable to any kind of computer, and that
            it supposedly makes it possible to run your applications
            with no changes regardless of what kind of computer you
            might be presented with.

                 As in most cases where something sounds too good to be
            true, the critical eye would naturally be well advised to
            examine this "miracle" operating system closely to see where
            the inevitable negatives lie.  (It's astounding that this
            hasn't been done in the popular computer publications
            before now.)  Let's take a look at a few of them.

                 One continues to read numerous articles in the various
            trade magazines extolling the multitudinous virtues of all
            those tantalizingly near wonders:  UNIX, RISC machines,
            transputers, OS/2, the new NEXT machine, and the like.  For
            each, the implication is:  "gee, if I only had (fill in the
                                                  ____                 
            blank), I could finally do (fill in the blank with something
            you think you might like to do)."
                _____                        

                 As is so often the case, once one actually gets the
            thing one thinks would be so marvelous (and sometimes it
            takes a while even then for the truth to finally sink in),
            it turns out that the unforeseen negatives are usually
            enough to make the treasured new toy at least something
            less than was anticipated.

                 The purpose of this article, then, is to make a plea
            for at least a measure of sanity as we all rush headlong
            together into whatever lies in our collective future
            regarding desktop personal computing.

                 It's time to debunk a few myths, and attack a few
            sacred cows.

            Multiuser vs. Shared Processor
            ______________________________

                 It's important not to forget that a perfectly viable
            way to provide a multiuser system is to give each user a
            computer of their own.  In fact, nearly anybody today using
            a UNIX system from a serial terminal already has a
            processor of their own, since virtually all serial CRT
            terminals have microcomputers embedded in them.

            Does One Really Save Money Going "Shared Processor"?
            ____________________________________________________

                 In fact, if one looks at the price of a serial (even
            "dumb") terminal, and its underlying components, one quickly
            finds that most of the components in a serial terminal
            (power supply, CRT, keyboard, housing, display logic, RS-232
            serial interface, switches, hardware and connectors, etc.)
            are quite similar to those needed to build a diskless
            network workstation.  Add a LAN interface (ARCnet
            interfaces can be bought for a little over $100 each in
            single-unit quantities), another microprocessor (8088s
            presently sell for about $5-6 in single unit quantities and
            80286s for $50-$60), and some memory (typically no more than
            would be required to support the same user at a central
            shared processor) and you have what is needed to provide a
            completely dedicated processor.
                       _________           

                 The minuscule cost difference, if any, between
            cramming multiple users onto a single CPU and giving each
            user a processor of their own just doesn't justify the
            hassle of trying to split the computing power of a single
            microprocessor chip "n" ways.

            Machine Architectures
            _____________________

                 Going back to the origins of modern computing, one
            intriguingly constant thread has been the continual
            theoretical debate in academic circles about what
            constitutes the "best" architecture for a computer.

                 In the final analysis, just about any decent
            architecture can be made to work.  Just as is true for
            programs, some are more suitable for certain purposes than
            others are.  There is no one "perfect" word processor,
            "perfect" DBMS, or the like... it depends upon what one
            needs to do, what resources one is willing to devote to
            learning and using it, and what degree of long-term
            development is required.

                 In any case, hardware engineers are rarely content to
            leave well enough alone, even when they should have the
            wisdom to do so.  And many of them have very limited
            experience with serious systems-level programming and the
            resulting software implications of their creations.

                 One of the reasons UNIX appeals to so many hardware
            engineers and engineering-directed companies is that they
            feel it gives them "license" to create "nonstandard"
            processor architectures without themselves having to pay
            the considerable cost of developing (as a minimum) a new
            operating system to run on their pet
            creation.  It is clearly much cheaper and faster to port a
            version of UNIX than to create a well-crafted and perfectly
            adapted operating system from scratch.

            80x8x Architecture
            __________________

                 For better or for worse, the Intel 80x8x architecture
            has sold more computers than any other general-purpose
            computer architecture in history.  Its primary virtue comes
            from this standardization.

                 Those among us who are old enough to remember the IBM
            360 and the impact it made on the history of data processing
            most likely realize that its impact was primarily due to the
            architecture's range and longevity.

                 Range, because it was effectively the first time that a
            single processor instruction set had been available across a
            family of compatible computers having well over an order of
            magnitude difference in their power and cost.

                 Longevity, because for the first time, third-party
            software houses had a good probability of being able to
            design major, serious applications packages and be assured
            that the machine they were designing for would not be
            obsolete by the time the package was ready for market.

                 In fact, this longevity had a second major impact as
            well.  It meant that "multi-generation" developments
            occurred:  there was enough life in the architecture to
            develop really good compilers and other such development
                    ______                                          
            tools, which were subsequently used to develop even better
                                                           ___________
            debuggers and development tools, which then permitted really
                                                                   _____
            good applications, and so on.  Arriving at a truly rich
            applications development environment often involves at least
            three such levels of development.  Figuring perhaps two to
            three years for each such "generation" suggests that it
            takes six to ten years following the development of a given
            computer architecture before a really complete, rich and
            well-evolved environment results (things continue to become
            even more interesting after that).

                  The important thing about the 80x8x architecture is
            not its ultimate MIPS or FLOPS potential, or how many
            registers it has, or what its clock speed is, but that its
            huge population figures and compatibility across the range,
            from the lowly 4.77 MHz 8088 to the new 80486, have made it
            possible to walk into computer shops (even department
            stores!) and buy off-the-shelf copies of the most diverse
            and most capable world-class spreadsheets, comm packages,
            database managers, word processors, program development
            tools and other software that has ever been available for a
            single computer architecture, and all at prices that would
            have previously seemed a hopeless fantasy.

            RISC Architectures
            __________________

                 One of the current "hot topics" seems to be RISC-
            architecture machines, and RISC machines seem to have a
            somehow-incestuous relationship with UNIX.  Therefore, it
            seems to make sense to discuss them somewhat here.

            Compiled Code vs. Directly Executed Code

                 Most computer users have the instinctive understanding
            that compiled or native code is fast, and interpreters are
            slow.  Most of these same people don't realize that most
            microcomputers are internally implemented as microprogrammed
            interpreters.

                 In fact, the same is true for many mainframe computers.
            In the original IBM 360 series, for example, one had to go
            all the way up to the 360/75 (a million-dollar-plus
            mainframe computer) before one arrived at a machine which
            actually implemented the 360 instruction set in hardware...
            all the smaller 360 series machines ran microcoded
            interpreters which simply emulated the 360 instruction set.
            (The 360/30, for example, internally used eight-bit data
            paths!)

                 The raison d'etre of the RISC architecture is simply to
                     _____________                                      
            get back to a machine architecture simple enough that the
            extra overhead resulting from these microcoded interpreters
            can be eliminated.

            A Historical Note

                 Actually, today's 80x8x family architecture is the
            modern descendant of the Intel 8008.  The 8008, which was
            the world's first eight-bit microprocessor-on-a-chip, used
            an architecture which was originally designed by Datapoint
            Corporation (Computer Terminal Corporation in those days)
            staffers Vic Poor and Harry Pyle.  Both Texas Instruments
            and Intel were approached to engineer and build the chip,
            which Datapoint wanted for its terminals.  Originally,
            neither company was very interested, but finally both
            decided that it might help them to sell their memory chips.
            Of the two, only the Intel part was ever an engineering
            success, but it arrived too late.

                 By the time the Intel part was working and deliverable,
            Datapoint had already built up the processor in TTL instead.
            This processor, used in the Datapoint 2200 desktop personal
            computer system, implemented most of the simplified
            instruction set directly in hardware, and (despite its
            variable instruction lengths and the correspondingly
            variable number of machine cycles per instruction) it was
            perhaps the first desktop RISC machine.

            RISC Machines vs. Software Compatibility

                 Unless and until one RISC architecture becomes as
            overpoweringly universal as the 80x8x architecture has, and
            stays that way for a long enough period of time, RISC
            machines will only exhibit their true power on programs that
            their users have written using whatever programming tools
            are made available for that particular machine.

                 Unless two specific RISC machines have the same
            instruction set, one cannot simply pick up an application's
            object code and move a copy of it to a nearby RISC machine,
            whether they both run UNIX or not.

                 The ease with which one can buy third-party
            applications software within the PC-MS/DOS world and use
            it on clones built by any company in any country simply
            has no parallel in the RISC machine world, and it is
            important, as we hurtle towards diverging processor
            hardware architectures, that we fully realize what we will
            be giving up in the process.

            UNIX Applications Package Availability
            ______________________________________

                 Of course, an operating system is only useful if it
            allows one to run useful software.  Why should a person want
            to run UNIX on their computer?  Why, of course, to run all
            that wonderful UNIX software.  The astute among you might
            ask "what wonderful UNIX software?".  Ahh, that is a good
            question, indeed.

                 As an experiment, try the following: starting with the
            inside cover of this magazine, read through every single ad
            which mentions software, and make a list of the packages
            being offered.  In one column, put the software that runs
            under MS/DOS.  In the second column, put the software that
            runs under UNIX.

                 It won't take you long to realize the awful truth:  the
            best software packages for most personal computer
            applications don't run under UNIX, but under MS/DOS.  In
            fact, most MS/DOS packages are far superior to their UNIX
            equivalents (if there are any UNIX equivalents).
                                  ___                       

            Applications Package Pricing
            ____________________________

                 In the event that you actually found any UNIX packages
            during the experiment described above, it is also a
            fascinating exercise to look at their prices.  You quickly
            discover that UNIX software packages, when you find them,
            cost as a rule several times as much as their MS/DOS
            equivalents.  To add insult to injury, the MS/DOS packages
            generally tend to be better-written and come with better
            documentation as well.

            Applications Package Portability
            ________________________________

                 Given the supposedly near-miraculous portability of
            applications under UNIX, and its legendary aura (a glamorous
            and enchanting playground for programmers, as one might be
            led to believe), what possible reason could there be for
            such a strange state of affairs?

                 One of the many reasons is that, although UNIX itself
            is, relatively speaking, portable among machines with
            different architectures, the object code for applications
            written under it is certainly not.

            _______________________________________________________________
           |                                                               |
           |    True portability of commercial, off-the-shelf third-party  |
           |    applications is not guaranteed, regardless of whether or   |
           |    not you may be running UNIX, if the machines use           |
           |    dissimilar instruction sets.                               |
           |_______________________________________________________________|

                 Portability under UNIX is achieved by taking the C-
            language source for a given application, transferring it to
            the machine that you want to run it on, and compiling it
            with that machine's own C compiler, so that it will be
            usable on your specific computer system.

                 (Indeed, even this level of compatibility is often not
            what it's cracked up to be, since UNIX exists in a number of
            different flavors, and the "standard library routines" of
            those flavors tend to have numerous minor or major
            incompatibilities, lurking and ready to trip up the unwary.
            The appropriate standards committees have been working on
            this problem, but it will be some time (if ever) before
            UNIX is brought back into a coherent, uniform single
            version.)
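
                 As a tiny illustration of the sort of "flavor"
            differences involved, consider something as trivial as
            searching a string for a character:  BSD-derived systems
            historically provided index(), while System V provided
            strchr().  A program hoping to compile on both ends up
            wrapping even such small things in conditional
            compilation, as in the sketch below (the BSD macro is
            assumed to be defined by the build; the exact convention
            varies from shop to shop).

              /* Minimal sketch of the "flavor" insulation a portable
                 UNIX program may need.  The BSD macro here is just one
                 common convention, not a universal standard. */
              #include <stdio.h>

              #ifdef BSD
              #include <strings.h>        /* index() on BSD systems   */
              #define FIND_CHAR(s, c)  index((s), (c))
              #else
              #include <string.h>         /* strchr() on System V     */
              #define FIND_CHAR(s, c)  strchr((s), (c))
              #endif

              int main(void)
              {
                  char path[] = "/usr/local/bin";
                  char *slash = FIND_CHAR(path, '/');

                  if (slash != NULL)
                      printf("first slash at offset %d\n",
                             (int)(slash - path));
                  return 0;
              }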

                 This source-level-only portability (and even that is
                                                              ____   
            highly conditional upon your sticking within the same
            "flavor" of UNIX) can admittedly be very helpful for
            programs that you or your colleagues have written.  Hence,
            the interest in UNIX within university and research
            environments.  However, if you seriously think for even a
            minute that you're ever going to walk into a computer shop
            and see Lotus Development selling you a source-version of 1-
            2-3, or Ashton-Tate selling the source code to dBASE IV, or
            Microsoft selling the C source to Microsoft WORD, then
            you've got a screw loose somewhere.

                 The limited availability of major, world-class
            applications packages under UNIX explains why "bridges"
            between UNIX and MS/DOS hold such compelling appeal for
            UNIX users.  It is understandable that they would long for
            the ability to run MS/DOS software.

                 The same factors (but in reverse) also explain the
            extreme lack of interest on the part of most MS/DOS users
            in bridges allowing them to run UNIX software on their PCs.
            (It is curiously pathetic that most UNIX diehards seem so
            perplexed about the typical MS/DOS user's lack of demand for
            "bridges" to UNIX).

            UNIX Has Some Blemishes Too
            ___________________________

                 Those users who have had the opportunity to work with
            UNIX for a while know that it is not without its own
            peculiar collection of problems and disadvantages.

            Multi-User EQU Timesharing
            __________________________

                 Let's not forget that the modern and wonderful-sounding
            term "multi-user, multi-tasking" is merely a euphemism for
            the archaic and outmoded concept of "timesharing".

                 The UNIX system began life as an elegantly designed,
            efficient, compact, and classic minicomputer timesharing
            system.  Like all timesharing
            systems, its raison d'etre was that big machines were
                         _____________                           
            cheaper per available machine cycle than small computers
            were, and that at the time it was felt that typical users'
            workloads were not anywhere close to using the existing
            power of the minicomputer to its potential.  Therefore,
            especially given the high cost of computer power at the
            time, it made sense to try to make one computer serve
            multiple users simultaneously.

                 In the meantime, however, personal computers have
            changed the expectations of the users.  Applications have
            grown comparatively huge, and users' demands for computer
            processor time have grown as well.  Serious power-user
            spreadsheet recalcs, AUTOCAD redraws, desktop publishing,
            graphics, automated word processing of lengthy documents,
            and CAD/CAM applications, to name just a few, routinely
            create single-user processing demands that would have been
            hopelessly outrageous in the early days of the
            minicomputer timesharing environment for which UNIX was
            designed.

                 The increasing demand for machine cycles to support a
            single user, combined with the unbelievably reduced cost of
            those machine cycles, has made timesharing systems such as
            UNIX almost totally obsolete.

                 Certainly, UNIX can be used in single-user mode.  Many
            successful UNIX systems today are used primarily in that
            fashion.  However, the complexity, bulk and inefficiency
            that come from continuing to support multiple users still
            linger, a ghost from UNIX's past.  OS/2, at least,
            sensibly recognizes that multitasking (and not multi-user
            operation) is what is really called for in the
            modern world.  It is amazing that UNIX zealots seem to think
            that a today-obsolete "true multi-user" timesharing
                                   ____                        
            capability is a requirement for a modern operating system to
            be considered somehow "real".

            Multiple Users Connected via Dumb Serial Terminals
            __________________________________________________

                 One of the central concepts shaping UNIX is its
            determination to support many users, attached by serial
            terminals.  One of UNIX's breakthroughs at the time was the
            use of a configuration system (based on a database called
            TERMCAP) that allowed different users to have different
            kinds of serial terminals, with UNIX trying to make the
            differing characteristics of each kind of terminal
            invisible to applications software.  (This is sort of the
            same idea that resulted, under MS/DOS, in the huge
            profusion of printer drivers for third-party software
            packages.)

                 UNIX tries to perform this standardization at the
            system level instead, where it supposedly makes it
            ______                                            
            unnecessary for the application to have to be concerned
            about the differences between terminals.
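
                 For readers who have never run into it, the sketch
            below shows roughly how a program consults the termcap
            database for its terminal's capabilities.  It assumes the
            classic termcap routines (tgetent, tgetstr, tputs) are
            available in a termcap or curses library; their exact
            declarations vary slightly from flavor to flavor, which is
            rather the point.

              /* Rough sketch of termcap usage; the routines usually
                 live in -ltermcap or -lcurses.  Declarations are given
                 by hand here since header conventions vary. */
              #include <stdio.h>
              #include <stdlib.h>

              extern int   tgetent(char *bp, char *name);
              extern char *tgetstr(char *id, char **area);
              extern int   tputs(char *str, int affcnt, int (*outc)(int));

              static int put1(int c) { return putchar(c); }

              int main(void)
              {
                  char entry[1024];       /* raw termcap entry          */
                  char codes[1024];       /* decoded capability strings */
                  char *area = codes;
                  char *clear;
                  char *term = getenv("TERM");

                  if (term == NULL || tgetent(entry, term) != 1) {
                      fprintf(stderr, "no termcap entry found\n");
                      return 1;
                  }

                  /* "cl" is the clear-screen capability; not every
                     terminal has one, which is the whole problem. */
                  clear = tgetstr("cl", &area);
                  if (clear != NULL)
                      tputs(clear, 1, put1);
                  else
                      fprintf(stderr, "terminal cannot clear screen\n");
                  return 0;
              }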

            The Lowest-Common-Denominator Phenomenon

                 Inevitably, some types of operations are simply not
            easy to implement on some kinds of terminals.  (An example:
            some kinds of CRT terminals "consume" a character cell on
            the screen to turn on or off underlining or other special
            character attributes).  Just as some kinds of word
            processing operations are not attempted (even by the most
            sophisticated word processors) on some kinds of printers,
            much UNIX software has been written with the "lowest
            common denominator" serial terminal in mind, in order to
            support the widest possible user audience.  The result, in
            the core utilities at least, is often screen handling that
            is sorely lacking by modern standards.

            Speed of Serial RS-232 Terminals

                 One of the factors that made the modern generation of
            PC-style software packages feasible was the very high
            bandwidth available between the processor and the display
            screen.  This tight coupling of computer power and display
            made it practical to produce a new kind of applications
            software, such as word processors capable of reformatting
            all the subsequent text on a screen near-instantaneously as
            characters are added to a line at the top of the display.
            Even at 19,200 baud, rewriting a large display through a
            serial comm link is painfully slow, and is better avoided
            whenever possible.

            Graphics on Serial Terminals

                 These problems regarding performance and capability of
            a processor separated from its display by a slow serial link
            are nowhere more evident than when one wishes to perform
            high quality, modern graphics.  High resolution, bitmapped
            displays can easily require a megabyte or more of memory for
            display refresh purposes, and sending a megabyte of data
            across even a 56 kilobaud serial link takes on the order of
            two minutes.
                ________
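
                 A back-of-the-envelope calculation makes both of the
            above figures concrete.  The sketch assumes ten bits on
            the wire per character for the asynchronous case (start
            bit, eight data bits, stop bit); real links only do worse.

              /* Back-of-the-envelope serial link timing. */
              #include <stdio.h>

              int main(void)
              {
                  /* 80x25 text screen at 19,200 baud, ten bits per
                     character on the wire. */
                  double screen_secs = (80.0 * 25.0 * 10.0) / 19200.0;

                  /* One megabyte of bitmap data at 56 kilobaud,
                     ignoring framing overhead (the optimistic case). */
                  double bitmap_secs = (1024.0 * 1024.0 * 8.0) / 56000.0;

                  printf("full text screen redraw: %.1f seconds\n",
                         screen_secs);
                  printf("1 MB bitmap transfer: %.1f minutes\n",
                         bitmap_secs / 60.0);
                  return 0;
              }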

                 While serial terminals are often practical for specific
            applications where data communication between the processor
            and the terminal is necessarily very limited (such as when
            the terminals are geographically distributed and connected
            by modem), it is probably fair to say that the inherent
            performance and capability limitations caused by trying to
            use a serial "dumb" terminal connected to a remote computer,
            combined with the vanishingly small cost of true local
            computing cycles, have made the serial "dumb" terminal
            largely obsolete as a principal, full-function workstation.

            Mouse Support

                 The increasing interest in the use of performance-
            enhancement pointing devices, such as mice, also works to
            the disadvantage of serial "dumb" terminals.  Mice typically
            generate a fairly high-data-rate stream of information,
            which is relatively costly for a shared-processor
            "timesharing" system to transmit and handle.  The
            information that
            must be sent to most graphics terminals to support the mouse
            cursor that so effortlessly seems to glide across the screen
            must not be forgotten, either.

                 Although somebody must make one somewhere, this author
            has never noticed a serial terminal providing integral
            support for a mouse and allowing local mouse cursor control
            and shared use of the RS-232 link going back to the host
            processor.

                 It would therefore appear that the straightforward use
            of a mouse is one of the things one often must give up in
            attempting to use the highly-touted "true multi-user,
            multiterminal" characteristics of UNIX.

            Inefficient Sequential and Large File Handling
            ______________________________________________

                 One of the roles for which UNIX is often promoted is
            that of a high-powered network data server.  To those who
            are aware of UNIX's internal workings, this must surely be a
            curious choice, to say the least, since two of UNIX's weak
            spots are large file handling and database integrity.

            Inefficient Disk File Allocation

                 In order to achieve its generality and work on nearly
            any kind of disk drive, UNIX usually does not attempt to
            allocate a file in a large contiguous area of its disk, or
            to keep files and system data in a small number of
            physically contiguous cylinders.  In fact, once a UNIX
            system has been in use for a while, large files tend to
            become extremely fragmented, scattered in numerous small
            pieces hither and yon.  As a result, reading of large
            sequential files tends to be slowed by extremely heavy
            thrashing and seeking of the disk drive's read head.

            Inefficient Large File Random Access

                 UNIX was written with the "small is beautiful" design
            philosophy.  UNIX systems, therefore, typically use a
            multilevel indexing scheme to keep track of where all the
            various bits and pieces of a file have been strewn.  While
            this scheme is relatively efficient for small files with
            few pieces, very large databases cause the index to expand
            to more and more levels, making it necessary to traverse
            several levels of index before the desired data sector can
            actually be read or written.

                 In order to achieve decent disk throughput, UNIX
            systems are obliged to maintain a relatively large disk
            cache that tries to keep as much of this multi-level
            index structure as possible, for every opened file, in
            memory at all times.
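
                 To make the cost concrete, the sketch below works out
            how many levels of index must be read before a given byte
            of a file can be reached.  The numbers (512-byte blocks,
            ten direct pointers in the inode, then single, double and
            triple indirect blocks) are merely illustrative of the
            classic UNIX layout; actual implementations differ in the
            details.

              /* Sketch of a classic UNIX-style multilevel file index.
                 All sizes here are illustrative, not authoritative. */
              #include <stdio.h>

              #define BLOCK_SIZE     512L   /* bytes per disk block    */
              #define PTRS_PER_BLOCK 128L   /* 4-byte pointers/block   */
              #define DIRECT_PTRS    10L    /* pointers in the inode   */

              /* How many index blocks must be read before the data
                 block holding byte offset 'pos' can itself be read? */
              static int index_reads(long pos)
              {
                  long block = pos / BLOCK_SIZE;

                  if (block < DIRECT_PTRS)
                      return 0;                        /* direct       */
                  block -= DIRECT_PTRS;
                  if (block < PTRS_PER_BLOCK)
                      return 1;                        /* single ind.  */
                  block -= PTRS_PER_BLOCK;
                  if (block < PTRS_PER_BLOCK * PTRS_PER_BLOCK)
                      return 2;                        /* double ind.  */
                  return 3;                            /* triple ind.  */
              }

              int main(void)
              {
                  printf("byte 1,000:       %d index reads\n",
                         index_reads(1000L));
                  printf("byte 1,000,000:   %d index reads\n",
                         index_reads(1000000L));
                  printf("byte 100,000,000: %d index reads\n",
                         index_reads(100000000L));
                  return 0;
              }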

            Database Integrity

                 Unfortunately, while this disk cache reduces the
            negative effects of UNIX's inefficient disk organization, it
            causes other problems.  Most UNIX systems do not use what is
            called a "write-through" cache.

                 In other words, UNIX systems typically process disk
            write requests by marking the copy of the sector in the
            cache to indicate that it should eventually be written to
            the disk.  Typically, this write is physically done
            just before the sector buffer in question is to be reused to
            hold a newly requested record, and this is generally quite
            some time following the actual write request.

                 Systems and applications programmers who are designing
            linked disk data structures or other significant data bases
            often very carefully sequence the order in which they issue
            writes to disk sectors, in order to reduce or eliminate the
            severity of damage to data structures in the event of a
            system crash occurring at an inopportune time.

                 This careful attention to detail, however, is often
            made fruitless if the underlying operating system doesn't
            actually write the sectors to the physical disk in the
            "real" order in which the writes are requested.  The result
            can include corrupted system tables, as well as user
            database integrity problems and inconsistencies.

                 Of course, regardless of how internally secure an
            operating system is against accidental programming faults or
            even intentionally malicious programs, computing systems do
                                                                     __
            go down unexpectedly from time to time due to power or
            hardware faults.

                 UNIX solves its part of this problem with a special
            procedure, normally run during system startup, which
            looks around the disk and tries to figure out which of the
            system tables might have been incompletely updated when the
            system last went down.  It then makes a best-efforts attempt
            to fix these up, so that its system data structures on the
            disk are consistent.

                 Unfortunately, UNIX generally has no way to provide
            similar automatic recovery capability for user data file
            integrity and internal consistency.  Most UNIX systems have
            the ability to force disk writes to take place immediately
            when they are issued, which helps resolve this problem.
            However, the extreme overhead which results (UNIX programs
            don't generally try to reduce their writes, since they
                            ___                                   
            assume those writes are being "optimized away" by UNIX's
            cache) usually is sufficient to motivate users to simply
            cross their fingers and hope for the best.
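
                 For the curious, forcing a critical write out to the
            physical disk typically looks something like the sketch
            below on flavors that provide an fsync() call (or an
            equivalent synchronous-write open flag).  Whether your
            particular UNIX offers one, and what it costs, is exactly
            the sort of thing that varies.

              /* Sketch: force one record to the physical disk before
                 continuing.  Assumes fsync() exists on this flavor. */
              #include <stdio.h>
              #include <string.h>
              #include <fcntl.h>
              #include <unistd.h>

              int main(void)
              {
                  const char rec[] = "critical database record\n";
                  int fd = open("journal.dat",
                                O_WRONLY | O_CREAT | O_APPEND, 0644);

                  if (fd < 0) {
                      perror("open");
                      return 1;
                  }
                  if (write(fd, rec, strlen(rec)) < 0) {
                      perror("write");
                      return 1;
                  }

                  /* Without this, the data may sit in the buffer cache
                     indefinitely; with it, the program pays for a real
                     disk write before it continues. */
                  if (fsync(fd) < 0)
                      perror("fsync");

                  close(fd);
                  return 0;
              }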

            Table-Oriented Internal Structure
            _________________________________

                 Another unfortunate characteristic of UNIX that one
            rarely sees mentioned in the press is UNIX's internal
            fascination with the use of fixed-size tables for so many
            things.  Perhaps due to the natural way the C language
            allows for stepping through the "n" sequential elements of
            an array, UNIX systems tend to use arbitrarily-sized tables
            for internal storage purposes and do sequential searches of
            them, rather than using the more efficient and elegant
            dynamically allocated linked lists, trees, and such, which
            every computer science student learns about in their
            Information Structures classes.

                 This characteristic of UNIX shows through in arbitrary
            limits on how many of this-or-that one is allowed to have,
            even when the tables set aside for however-many-
            somethings-else are sitting almost completely unused.
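
                 The contrast being drawn looks roughly like the
            sketch below:  the fixed table refuses further entries at
            an arbitrary compile-time limit no matter how empty
            everything else may be, while the list grows only as it is
            actually used.  (The names and sizes are hypothetical,
            purely for illustration.)

              /* A fixed-size table of the sort criticized above,
                 versus a dynamically allocated linked list. */
              #include <stdio.h>
              #include <stdlib.h>

              #define MAX_THINGS 20        /* arbitrary built-in limit */

              struct thing { int id; };

              static struct thing table[MAX_THINGS]; /* held forever */
              static int table_used = 0;

              struct node {                /* grows only as needed */
                  struct thing item;
                  struct node *next;
              };
              static struct node *list_head = NULL;

              static int table_add(int id)
              {
                  if (table_used >= MAX_THINGS)
                      return -1;           /* "too many things" */
                  table[table_used++].id = id;
                  return 0;
              }

              static int list_add(int id)
              {
                  struct node *n = malloc(sizeof(*n));
                  if (n == NULL)
                      return -1;           /* only when memory is gone */
                  n->item.id = id;
                  n->next = list_head;
                  list_head = n;
                  return 0;
              }

              int main(void)
              {
                  int i;
                  for (i = 0; i < 25; i++) {
                      if (table_add(i) < 0)
                          printf("table full at item %d\n", i);
                      list_add(i);         /* keeps right on going */
                  }
                  return 0;
              }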

            Problems with Keyahead
            ______________________

                 Although some UNIX system somewhere has probably fixed
            the problem, every UNIX system the author has ever worked on
            has the undesirable characteristic that characters "keyed-
            ahead", or entered before the underlying program has
            requested them, are echoed to the screen in the "wrong"
            place.

                 Apparently, most UNIXes echo characters to the screen
            as the keys are actually pressed.  Since the program being
            run is not yet ready for these keystrokes, it has not
            correctly positioned the cursor on the screen to the
            beginning of the field into which the data is to be keyed.
            This results in difficulties in editing the fields being
            entered, and frequently causes further problems when user
            keystrokes appear in the middle of program-generated
            prompts, borders, strings or other information being output
            to the screen.

                 This problem causes all kinds of grief when one is
            trying to write a program to handle full-screen field-
            oriented data entry and the like, and explains part of the
            reason why UNIX programs are often forced to offer the user
            the option of redrawing the screen upon request.
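
                 The usual cure, for a program that wants full control
            of its screen, is to turn off the system's automatic echo
            and do the echoing itself once it is actually ready for
            the keystrokes.  On flavors providing the POSIX termios
            interface that looks roughly like the sketch below; older
            systems use ioctl() with sgtty or termio structures to
            much the same effect.

              /* Sketch: turn off automatic echo so a full-screen
                 program can echo keystrokes itself, where it wants
                 them.  Assumes the POSIX termios interface. */
              #include <stdio.h>
              #include <unistd.h>
              #include <termios.h>

              int main(void)
              {
                  struct termios saved, raw;
                  char c;

                  if (tcgetattr(STDIN_FILENO, &saved) < 0) {
                      perror("tcgetattr");
                      return 1;
                  }

                  raw = saved;
                  /* no echo, no canonical line buffering */
                  raw.c_lflag &= ~(ECHO | ICANON);
                  raw.c_cc[VMIN]  = 1;     /* one key at a time */
                  raw.c_cc[VTIME] = 0;
                  tcsetattr(STDIN_FILENO, TCSANOW, &raw);

                  /* The program now decides where input appears. */
                  if (read(STDIN_FILENO, &c, 1) == 1)
                      printf("you typed '%c', echoed where we "
                             "wanted it\n", c);

                  tcsetattr(STDIN_FILENO, TCSANOW, &saved); /* restore */
                  return 0;
              }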

            Ability of Other Tasks to Scramble Screens
            __________________________________________

                 The same problem occurs in the other direction as well.
            UNIX tasks are generally allowed to send information to the
            screen at any time, including while the user is trying to
            key in data at the keyboard.  In the absence of a windowing
            environment (hence UNIX users' overpowering interest in
            windowing), the occasional messages that inevitably pop up
            now and again from all those tasks (being "conveniently"
            executed in the background) appear on the screen right
            there, in the middle of whatever it is the user is trying to
            key in.  If (horrors!) the cursor position and the sending
            background program's messages conspire to cause the screen
            to be scrolled up, generally without the knowledge of the
            program the user is trying to use on the screen at the time,
            (and which thought it knew what was actually on the screen,
                       _______                                         
            and where) the result is often an unholy mess.

                 At least some competing systems, such as QNX, are
            intelligent enough to block the output of program-generated
            information to the screen temporarily while the user is
            actually typing.

            User-Hostile
            ____________

                 UNIX is itself near-legendary as being among the most
            user-hostile operating systems in current widespread use.
            This is due largely to its underlying design philosophy,
            dating back to the very early 1970s and the limited
            minicomputers of that time.

            UNIX Itself is User-Hostile

                 Although UNIX fans will profess that this user-
            hostility can be "hidden" underneath multiple layers of
            insulating software, in truth the underlying character of
            the system itself must invariably "show through" on
            occasion, whether in error handling, in the user facilities
            provided, or elsewhere.

                 MS-DOS users who have seen the familiar
            "Abort/Retry/Ignore" messages are well acquainted with this
            phenomenon.  (Another fine example from the MS-DOS world is
            the way in which Ctrl-S stops and continues screen output,
            during the TYPE command for example, but woe be to you if
            you have somehow managed to accidentally press some other
            keystroke... in this case, Ctrl-S will no longer work, and
            the only way to pause screen output is then usually to abort
            the program with Ctrl-Break).

                 Certainly, it is possible to hide most such things, and
            this is done in some packages, but the likelihood that one
            will never be confronted with even more puzzling or cryptic
            messages on UNIX systems is small indeed.

            Many UNIX Tools aren't much Better

                 Many of the UNIX-provided tools are pretty user-
            hostile, too.  Generic UNIX's program editor, VI, is a
            startlingly discouraging revelation to those frustrated
            souls who always considered WORDSTAR hard to learn and
            unfriendly to use.  Using VI, one can lose an entire day's
            painstaking work with terrifying ease.

                 As just one example, VI uses multi-keystroke commands
            where the exact meaning of a keystroke depends upon the
            state of the program, as established by the keystrokes that
            precede the keystroke in question.  Unfortunately, the
            preceding keystrokes are generally not displayed on the
            screen.  If you are afraid you might have hit the wrong
            preceding key by mistake, there is nothing on the screen to
            allay your fears.

                 If, when you enter the final character of your command,
            your worst fears are confirmed, you can sometimes get your
                                                    _________         
            data back, if you notice the problem right away.  However,
            VI offers nothing like the multi-level, generalized UNDO
            offered by the better program editors in the MS-DOS world,
            such as BRIEF.

                 UNIX programs tend to take all kinds of options using
            command line switches.  A classic example is your normal C
            compiler.  While command-line switches are a potentially
            fine way of choosing options in a program, and are nearly
            obligatory if the program is to be runnable unattended from
            a batch or MAKE file, your average C compiler (Microsoft C
            under MS-DOS is a good example) has so many options
            available that the letters have long since lost any possible
            mnemonic value.  To make matters worse, the burgeoning
            number of options has made it necessary to make option
            flags case-sensitive!

                 Many of the other tools and other facilities that are
            typically provided with UNIX systems are so poorly
            documented (if documented at all) or so arcane that only a
            very few users are ever likely to use them to any advantage
            whatsoever.  As an example, I'd be very interested to see
            what percentage of UNIX users have ever even attempted to
            use YACC or LEX, let alone used them successfully.

            Bulky
            _____

                 This points to another basic problem with UNIX.
            Originally, the system started out as a tight, well-crafted
            (if limited) operating system suitable to run (and support
            multiple users!) on a (large for the time) 32K minicomputer.
            What happened?

                 People who've moved from one house to another several
            times in a relatively short period know well the purgative
            effect of a move.  While keeping something around that you
            don't really need is relatively easy if you've not got a
            good reason to take the trouble to throw it away, things
            that you don't really need tend to get thrown away when you
            move.

                 Part of the problem is that UNIX's "portability" has
            insulated it from its "moves".  Programming tools that
            somebody wrote once upon a time and thought might be useful
            tend to accumulate and get passed along as UNIX gets ported
            from one user and machine to another.

                 UNIX's development background in university
            environments has also played a part.  Highly prolific
            student programmers (often showing a distaste for mundane
            and uninteresting aspects of the programming challenge such
            as documentation) are only too pleased to add their own
            personal contributions to "the operating system", and these
            contributions are often primarily of academic interest.

                 People who don't understand what all this stuff does
            (and this means nearly everybody) are loath to be the one to
            throw it away, for fear that someone further down the line
            might miss it and complain that it was missing.

                 I think it is interesting to see the occasional
            complaints by pioneering OS/2 users about how OS/2 cannot be
            booted from a 1.2 megabyte diskette (since the system is too
            big to fit enough of it on the diskette to complete the boot
            process).

                 Recent articles in BYTE have mentioned in passing that
            Apple's A/UX uses 70 megabytes of the 80 megabyte hard disk
            on which it is shipped, and that the NEXT computer's optical
            disk (250 megabytes worth) is already two-thirds full when
            shipped.  This should certainly at least raise an eyebrow of
            a user who considered Microsoft WORD 4.0, shipped on nine
                                                                 ____
            360K diskettes, a large package.  (Those 70 megabytes of
            Apple's A/UX would consume some two hundred diskettes!
                                       __________________________ 
            Let's not even consider the number of diskettes it would
            have taken to distribute the NEXT computer's UNIX operating
            system and utilities).  Care to guess how long it would take
            to learn to actually use all that junk?

            Internally Fragile
            __________________

                 The UNIX system's long history and large number of
            programmers making their individual contributions have
            also had an effect.  Like any system which has been expanded
            and enhanced far beyond its original design, UNIX has become
            "fragile".  Making any significant change in the way it
            works is likely to cause problems with some section of code,
            somewhere, which counts on its continuing to work in the way
            it used to.  This phenomenon is very familiar to any
            programmers who have done their share of maintenance on an
            older system.

                 Given the enormous number of individual programmers who
            have had their hands in this particular pie and stirred this
            particular pot, and largely without any kind of centralized
            coordination, it is amazing and a credit to its original
            designers that UNIX continues to work at all.

            But UNIX Can Be Modified (I suppose).
            _____________________________________

                 A common point made by UNIX fans is that anybody who
            doesn't like the way that UNIX does something can change it
            so it works differently.  This is seldom really a viable
            option.  UNIX gurus (those with enough knowledge of the UNIX
            system overall to allow them to modify it in significant
            ways with confidence that they aren't breaking it in some
            unforeseen way) are nearly as rare as any other operating
            system gurus are.  Most users are not willing to pay the
            cost of such expertise and time.

            So, Why UNIX?
            _____________

                 Given the numerous problems that seem endemic to UNIX,
            "If UNIX is so bad, why does anybody use it, anyhow?" is a
            fair question to ask.

            It Is Appropriate for Teaching
            ______________________________

                 In educational environments, UNIX is often taught
            primarily because it is a relatively classic example of a
            timesharing operating system.  It has been around long
            enough that there are numerous suitable texts available
            describing it.  Source copies of key parts of the system are
            widely available for teaching purposes.  Since nearly all of
            UNIX is written in C, the source can be understood by
            students not yet having learned much about assembly language
            and computer internal architecture.  There are runnable
            versions available for nearly any kind of computer system
            that the Computer Science Department is likely to have
            available for students to use.

            For Better or For Worse, It Is Widely Used
            __________________________________________

                 It is curious that those tighter, more contemporary and
            better-focused alternative operating systems, such as QNX
            (which, in particular, has perhaps the best local area
            networking support of any PC-compatible operating system), have
            failed to achieve the degree of interest and market share
            that they arguably deserve.

                 It is also rather strange that more companies who have
            discovered UNIX's various shortcomings don't switch to
            something else.  Perhaps one explanation is that the
            executives responsible, having "bucked the system" enough to
            switch to UNIX in the first place, are reluctant to admit
            that the system they fought so hard to put in place was a
            big mistake.  Sort of reminds one of the old tale of the
            Emperor who bought an expensive and evidently wonderful
            new suit of clothing.  It must be wonderful, right?
            Everybody says so.

                 The UNIX system, I guess, has mostly a lot of inertia
            going for it (just as MS/DOS does).  Despite UNIX's many
            flaws, there are a lot of people who manage to use it, and
            companies who are happy (given the low cost compared to the
            alternatives) to offer it with their products.  There just
            aren't a lot of major companies lining up to write a better
            system to displace it.

            WHY
            ___

                 Why do so many companies offer UNIX?

            The Desire to Lock-in the Customer

                 It is important that one not forget what an
            unprecedented event in the history of computing the PC
            phenomenon is:  where object code (and even hardware
            subsystems) can be copied or plugged into nearly any
            compatible system made by any of literally hundreds of
            companies worldwide, and work.

                 As mentioned before, hardware engineers often resent
            being restrained to compatibility with what they've done in
            the past.  Companies, likewise, are interested in creating
            systems that are unique enough that their customers are
            effectively "locked in" to the company's product line.  It
            is not in a company's interest that a customer can simply
            transfer all his programs from his existing computers to the
            machines offered by a competitor at a lower price.  This
            "commoditization" of computer hardware creates fierce
            competition, reducing profit margins of suppliers.

                 Offering a machine with a different hardware
            architecture or different instruction set makes it more
            difficult for users to move their applications off that
            vendor's equipment and onto that of a competitor.  Often,
            especially for longstanding, complex applications, the
            original programmer is gone and the package is relatively
            poorly understood by maintenance programmers, if any.
            Sometimes, original source files may have been lost or
            become sufficiently disorganized that nobody really
            remembers how to recompile and reconstruct the application
            any more.

                 Having a different bus or some other unique hardware
            feature allows a company both to differentiate its product
            and to reduce competition for add-on boards
            and other peripheral equipment.  It is widely felt by
            observers that IBM's MCA, introduced with the PS/2 line, was
            primarily motivated by an (only partially successful)
            attempt to provide this kind of "lock-in".

            _______________________________________________________________
           |                                                               |
           |    There is a reason, folks, why most UNIX systems always     |
           |    seem to be based on hardware that is more expensive and    |
           |    less "standard" than the kind of gear that runs MS-DOS.    |
           |    Offering nonstandard hardware and UNIX allows both         |
           |    manufacturers and dealers to jack up their profit margins  |
           |    beyond what they could hope for selling the commodity-     |
           |    type "standard" stuff, where competition is so ferocious.  |
           |_______________________________________________________________|

                 It is both satisfying to those frustrated creative
            people in the Engineering Department, and quick and
            relatively cheap, to differentiate a company's hardware
            enough to render competing companies' products
            incompatible.

            Less Expensive to Port Than Write a New One

                 Companies who create a computer system that is not
            hardware-compatible with another pre-existing one have an
            immediate problem:  a computer system without software makes
            a great boat anchor, but isn't useful for much else.

                 Creating an operating system from scratch is a lengthy
            and expensive operation.  It requires gurus of a high order,
            who are hard to find and expensive, as well as difficult and
            frustrating to try to manage.  Such efforts are often
            fraught with delays.  Even when everything is clean and
            working, it still remains to write complete and coherent
            technical documentation for the new system.

                 UNIX, because it is relatively quick and easy to port
            to a new and nonstandard computer system, allows a company
            to reduce these costs and delays.  The main reason that so
            many companies offer UNIX with their products is not because
            it is a great operating system, but because it's got name
            recognition and it's cheap to do so.

            Perceived Market

                 Because nearly everybody has at least heard of UNIX, a
            company offering it can offer a "recognized name" and talk
            glowingly about "compatibility", "portability" and "open
            systems", to an established market base, while actually
            offering an otherwise non-compatible and non-standard
            hardware product.  Such companies, therefore, believe that
            there is an established "UNIX market" that they can
            sell to without having to pre-educate their potential
            customers about the merits of their particular operating
            system.

                 Even those customers who've never actually attempted to
            generate or use a UNIX system seem to think that something
            that is so widely talked about must obviously be really
            great.  Therefore, they aren't put off by the prospect of
            trying to learn and use it.  This is the "grass is always
            greener on the other side" phenomenon.

            Let's Talk about LANs and Ethernet
            __________________________________

                 With a little bit of reflection, it should be obvious
            that, for most purposes, it doesn't really matter whether a
            number of users share a single processor or each have their
            own, as long as they have access to shared transaction-
            capable databases, a common library of applications, and
            comprehensive communications facilities that allow the
            individual processors to work together.

                 Since local area networks and the distributed
            processing solution they make possible provide the primary
            alternative to the obsolete big-processor-with-multiple-
            terminals timesharing approach, it is appropriate that we
            discuss them here as well.

            A Bit of History...
            ___________________

                 Ethernet's origins seem to date back to the early
            1970s, not much later than the time when UNIX was
            originally being written.  There was a project about that
            time at the University of Hawaii, called the Aloha Project,
                                                         _____________ 
            that involved using radio transceivers to allow shared
            communications between computers across a radio link
            (through the "Ether", hence the subsequent name).  The idea
            was that there would be a very large number of computers
            involved, and that the use of the communications channel by
            each would be very infrequent (hence, a polling scheme would
            be impractical).  The result was the idea of using a CSMA-
            style scheme to determine who could use the channel when:
            computers would passively monitor the given radio channel,
            and only "fire up" their transmitter when they actually had
            something to send.  Ethernet seems to be basically the Aloha
                                                                   _____
            Project, but moving the transmission to a cable medium
            _______                                               
            instead of radio transmission.
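
                 As an illustration only (the function names here are
            invented for the example, and real Ethernet adds collision
            detection to the basic listen-before-talking idea), the
            scheme might be sketched in C roughly as follows:

                extern int  channel_busy(void);
                extern void transmit(const void *frame, int len);
                extern int  collision_detected(void);
                extern void random_backoff(void);

                /* CSMA in miniature: passively monitor the channel,
                   "fire up" the transmitter only when the channel seems
                   idle, and if another station happened to transmit at
                   the same instant, back off and try again. */
                void csma_send(const void *frame, int len)
                {
                    for (;;) {
                        while (channel_busy())
                            ;                  /* listen; don't talk yet */
                        transmit(frame, len);
                        if (!collision_detected())
                            return;            /* frame got through */
                        random_backoff();      /* collision: wait, retry */
                    }
                }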

                 Work apparently continued on Ethernet and its
            internetworking protocols at Xerox PARC, and interesting
            papers popped up from time to time.  However, the system
            initially stayed internal to Xerox (and apparently some
            academic institutions that were experimenting with it).

                 About the same time, research into a common-cable,
            shareable communications network was also under way at
            Datapoint Corporation in San Antonio, Texas.  The initially
            implemented version of this system was relatively slow
            (about 7000 baud) and consequently did not allow interleaved
            access (its use was limited to one pair of users at a time);
            it was primarily used for sending files between processors
            and to a shared printer.

                 Datapoint's R&D department included a number of
            researchers also interested in ham radio (notably Harry
            Pyle, already mentioned as one of the architects who helped
            develop the architecture for what became the Intel 8008).
            The intriguing ideas embodied in the Aloha Project, along
            with a growing dissatisfaction with the limitations of the
            initial "combus" system, resulted in experiments which would
            ultimately produce the first commercially successful, high-
            speed local area network cabling system for desktop
            computers, Datapoint's ARCnet.

                 Unlike Ethernet, Datapoint's system used a token-
            passing approach (which was considerably more efficient
            under heavy load, when performance was the most critical).
            Datapoint researchers had also experimented with a "tap"-
            type approach using a simple electrical bus (such as most
            Ethernet systems unfortunately still use to this day), but
            decided against it for reliability and maintainability
            reasons.  Instead, Datapoint chose to use a unique
            interconnected-stars approach based upon an active hub,
                                                        __________ 
            which provided electrical isolation between every cable in
            the system (each cable would be individually driven, making
            the network as a whole immune to any given cable being
            shorted).  This approach also made for easier and less
            expensive installation, allowed nodes to be physically and
            logically added to or removed from the network at any time
            without disturbing the nodes already present, permitted very
            large networks, and made for relatively quick and
            straightforward diagnosis and isolation of any failed cables
            or other network components.
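
                 For contrast, a token-passing node might be sketched
            like this (again the routine names are invented; this is an
            illustration, not Datapoint's actual implementation):

                extern void wait_for_token(int my_id);
                extern int  have_queued_message(void);
                extern void transmit_one_message(void);
                extern void pass_token(int next_id);

                /* Token passing in miniature: a node transmits only
                   while it holds the token, then hands the token to the
                   next node.  There are no collisions at all, and under
                   heavy load every node still gets a bounded turn. */
                void token_loop(int my_id, int next_id)
                {
                    for (;;) {
                        wait_for_token(my_id);
                        if (have_queued_message())
                            transmit_one_message();
                        pass_token(next_id);
                    }
                }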

                 At the time, most existing networks (the US Defense
            Department's ARPAnet and DEC's DECnet, for example) were
            heterogeneous networks.  Although holding the promise of
            being able to support transparent database access and
            transaction processing using remote resources, they in fact
            primarily ended up being used for two main purposes:  file
            transfer and terminal-front-end (i.e. allowing a terminal
            attached to a local minicomputer system to act as if it was
            attached directly to a remote computer system).  Access
            through the network was usually accomplished by connecting
            either to a remote computer system (in terminal front-end
            mode) or to a remotely located file (supplying
            the codeword for that file).  They were all communications
                                                        ______________
            networks as opposed to processing networks.
            ________               ___________________ 

                 Again, Datapoint took a different and radical approach
            (and one that has been copied by most successful local area
            networks that followed).  Datapoint's local area network
            operating system, the ARC System, was the first one to be
            based upon the concept of a remote disk volume.  This made
            it possible to establish a connection to a complete
            interrelated collection of files by a single network
            operation, using a single codeword instead of having to
            supply a codeword for every one of possibly hundreds of
            files, and considerably reduced the overhead of networking.
            This made it possible, for the first time, to put the bulk
            of the processing power in the workstation, tightly coupled
            to the screen and user interface, and produce a diskless,
            dedicated-processor, desktop personal computer.  Only the
            physical disk drive was remoted (even operating system
            functions like file allocation were distributed to the
            user's workstation), and for the first time, existing user
            applications programs had the possibility of access to
            remote databases located in one or more different locations,
            transparently in terms of programming and with access times
            comparable to those for the same data stored locally.
            Datapoint's ARC System, the first commercially
            successful local area network, was hailed as the "most
            significant innovation in data processing since the
            minicomputer".
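
                 To make the distinction concrete (the calls below are
            invented for illustration and are not the actual ARC System
            interface), the difference is essentially this:

                /* One network operation and one codeword attach a whole
                   remote volume, rather than one codeword per file. */
                extern int mount_remote_volume(const char *volume,
                                               const char *codeword);
                extern int open_remote_file(const char *file,
                                            const char *codeword);

                void attach_payroll_volume(void)
                {
                    /* A single operation, and the whole interrelated
                       collection of files becomes visible to ordinary
                       applications as if it were local: */
                    mount_remote_volume("PAYROLL", "codeword");

                    /* ...instead of a separate connection and codeword
                       for every one of possibly hundreds of files:
                       open_remote_file("EMPLOYEE.MASTER", "codeword1");
                       open_remote_file("TIMECARDS.1988",  "codeword2");
                       ...and so on.  */
                }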

                 The obvious use of ARCnet was to make it possible to
            issue transaction-oriented updates to shared databases, but
            one other major advantage became apparent in those early
            days of LANs.  Previously, maintenance of utility and
            applications packages, including installing new software and
            updating to new releases, had to be done by each individual
            user independently.  This resulted in a confusing mixture of
            old and new releases, incompatible versions, and other
            problems, not to mention the time wasted in trying to
            maintain multiple copies of everything.  Suddenly, once the
            LAN was in place, a newly installed utility program was
            immediately available to every user on the network, in a
            natural and automatic way.

                 Although users of popular LANs such as Novell's
            Advanced Netware today take many innovative features
            originating in the ARC System for granted, other advanced
            and unique features of the ARC System, such as its Secure
            Mount feature, its fully distributed global enqueue system
            (for controlling access to multiple, distributed shared
            resources), and conjoint networking features, are still to
            be matched by more recent LAN software.

                 Contrary to some recent articles in the popular
            computer magazines which claimed that local area networking
            originated on [high-powered] "workstations", beginning with
            the Apollo workstations of the early 1980s, Datapoint had
            already by 1980 installed literally thousands of ARCnet
            local area networks in businesses worldwide, linking tens of
            thousands of personal desktop computers, starting from its
            first commercial customer installation of the ARC System at
            Chase Manhattan Bank in New York, in 1977.

                 It was this massive success of ARCnet in the commercial
            marketplace that encouraged Datapoint to commission the
            development of the COM9026, the first commercially available
            LSI LAN smart-interface-on-a-chip, by Standard Microsystems,
            and this was a crucial step in ARCnet's continuing success
            and widespread use in the LAN marketplace.

                 In spite of Datapoint's initial reluctance to reveal
            details of the internal workings of the ARCnet local area
            network (in retrospect, this was probably a mistake), it is
            significant that ARCnet to this day is supported by several
            hundred (and growing) different equipment and software
            manufacturers and is easily one of the handful of most
            popular networking systems in the LAN marketplace.  In fact,
            the worldwide base of ARCnet nodes (as of this writing) is
            approximately 1.4 million nodes (compare this to Ethernet's
            figure of only about 600,000 nodes).  In Canada, fully 85%
            of all LAN nodes installed are ARCnet.

            Let's Talk about Ethernet
            _________________________

                 The same UNIX-style "since everyone is talking about
            it, it must be good" perception also applies to Ethernet.
            Let's take a look at some of Ethernet's weaknesses.

            Inherently Less Secure
            ______________________

                 One problem with the design of Ethernet is that (due to
            the "dumb" low-level access allowed to the network and
            messages being sent on it) most Ethernet implementations
            allow malicious programs to look at general message traffic
            on the network.

                 Not only is the message traffic on Ethernet systems
            susceptible to casual snooping, but this author, at least,
            is not aware of any mechanism built into Ethernet and most
            networks based upon it to prevent the possibility of
            malicious programs inserting themselves between a user
            workstation and its server, thereby being able to examine
            data traffic, not to mention being able thereby to modify,
            suppress, or insert spurious transactions among those
            legitimate transactions being performed.  (In fact,
            virtually all LANs which permit transparent bridging via
            "gateways" between networks are probably so fundamentally
            compromised in terms of their ultimate security as to be
            deeply disturbing to anyone who has seriously considered the
            ramifications).

                 On Datapoint's ARC System, in contrast, the intelligent
            Resource Interface Modules (what Datapoint called its
            hardware interface to the ARCnet) physically prevented
            programs, malicious or not, from being able to read messages
            not directed to their node on the network.  Further,
            Datapoint's Secure Mount feature both allowed resource
            connections to be established on the network, with codewords
            validated without ever actually sending the codeword across
            the network, and inherently defeated the ability of
            malicious programs to insert themselves between the user and
            the shared file server.
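
                 One generic way such a "codeword never crosses the
            wire" check can be arranged (this is purely an illustration
            of the principle, not a description of Datapoint's actual
            mechanism) is for the server to send a random challenge and
            for the workstation to reply with a one-way function of the
            challenge and the codeword:

                #include <stdint.h>

                /* Toy one-way mixing function, for illustration only. */
                static uint32_t mix(uint32_t h, const char *s)
                {
                    while (*s)
                        h = (h * 16777619u) ^ (uint32_t)(unsigned char)*s++;
                    return h;
                }

                /* Workstation side: answer the challenge using the
                   codeword, which itself never appears on the cable. */
                uint32_t answer_challenge(uint32_t challenge,
                                          const char *codeword)
                {
                    return mix(challenge, codeword);
                }

                /* Server side: it also knows the codeword, so it can
                   compute the expected answer and simply compare. */
                int codeword_is_valid(uint32_t challenge, uint32_t reply,
                                      const char *codeword)
                {
                    return reply == answer_challenge(challenge, codeword);
                }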

            Significantly More Costly to Install
            ____________________________________

                 Ethernet systems also tend to be substantially more
            expensive to install and support than ARCnet cabling
            systems.

            Expensive Cabling

                 Initially, Ethernet used a ludicrously large cable, which
            was expensive, bulky, and difficult to install.  Costly taps
            were installed into holes physically bored into the cable,
            and subsequently could not be removed without leaving the
            cable exposed to the corrosive aspects of air, moisture, and
            other contaminants.

                 Although more recent "thin-cable" Ethernet
            installations now use a more practical cable and less costly
            BNC connectors, they still require both a cable going in and
            a cable going out of nearly every processor on the network.
            This adds to cable costs and makes installation more
            difficult.  ARCnet, in contrast, requires only a simple,
            single run of RG-62 coax cable (whose cost per installed
            foot is comparable to two-conductor twisted pair and usually
            cheaper than the double twisted pairs used by some other
            LANs).

                 ARCnet, like "thin-cable" Ethernet but unlike many
            other LANs, uses inexpensive and widely available, standard
            BNC connectors.

            Expensive Adapters

                 Not only is Ethernet usually more expensive to cable
            than ARCnet, but its adapters generally cost more, as well.
            While it is easy to pay $400-$600 each or more for Ethernet
            interfaces, ARCnet interfaces are being sold by numerous
            vendors for prices between $85 and $150.  Even after
            considering the added cost of the requisite active and/or
            passive hubs, ARCnet interfacing costs can easily run
            between only $100 and $200 per node, a very low price given
            the superior performance and reliability of ARCnet.

            More Difficult to Configure
            ___________________________

                 The inflexibility with which Ethernets must be cabled
            also makes them more difficult to configure, both during
            initial installation and when nodes are added later.

                 As an example, "thin-wire" Ethernet requires that a
            given cable segment "pass-through" each node on that
            segment.  It is difficult at the time of initial
            installation to know for certain where later workstations
            might be desired, so that adequate coax can be provided to
            facilitate the future addition.  If the original wires are
            too short, one runs the risk of having to completely rerun
            those cables when installing new nodes at a later date.

                 Even worse, installing new nodes on "thin-wire"
            Ethernet requires taking the entire network down in order to
            add the new node.  As networks get larger and users
            (especially those using diskless workstations) more
            dependent upon them, this need to take the network down to
            add new nodes becomes less and less acceptable.

                 One can reduce the intolerability of this problem by
            organizing an Ethernet into separate segments connected by
            repeaters.  With judicious segmenting, it is then only
            necessary to take down a single segment (which could still
            inconvenience as many as hundreds of users), but even this
            approach is complicated and expensive.  And, no matter what,
            cutting the cable to install a new user in the middle of a
            segment will still inevitably take down everyone whose
            computer has to communicate through the portion of the bus
            that has been temporarily severed.

            Inherently Inefficient for Many Common Cases
            ____________________________________________

                 Although one might initially suspect that ARCnet (which
            runs at 2.5 million bits per second) would perform at one
            quarter the rate of Ethernet's ten million bits per second,
            this is another case (like comparing the MIPS rate promised
            by RISC machines directly against the MIPS rate of CISC or
            Complex Instruction Set Computers) where things are less
            obvious than they might initially appear.  In actual fact,
            although ARCnet does run at roughly only one-quarter the raw
            bit rate of Ethernet systems, well-implemented ARCnet
            systems typically deliver an actual performance perhaps
            eighty percent or better of that seen on an Ethernet system.
            How is this possible?  There is, of course, an explanation.

            More Complex Protocols Required

                 The Ethernet system uses a very low-level network
            protocol, which embodies little inherent error control.
            Therefore, to achieve reliable communications, many more
            messages must be sent and received at the operating system
            level.  (The Ethernet specification as published by Xerox
            makes enlightening reading, particularly all the cases where
            it comments on problems remaining "to be handled by higher-
            level software").

                 ARCnet's intelligent controller, implemented in a
            single LSI chip, handles most of the chores of network
            support, including many tasks that would otherwise require
            the attention of the workstation and server
            processors.  This can (in networks taking good advantage of
            ARCnet's inherent robustness) greatly simplify higher level
            network protocols and give surprisingly effective throughput
            figures.
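
                 To see why raw bit rate alone settles very little,
            consider a toy model.  The per-block overhead figures below
            are assumed purely for illustration (they are not
            measurements of either network); the point is the shape of
            the result, not the exact numbers:

                #include <stdio.h>

                /* Effective time per 1000-byte block = time on the wire
                   plus per-block software overhead at the two ends
                   (protocol handling, acknowledgments, interrupts). */
                int main(void)
                {
                    double block_bits = 1000.0 * 8.0;

                    double enet_rate = 10.0e6, enet_ovhd = 2.0e-3;
                    double arc_rate  = 2.5e6,  arc_ovhd  = 0.5e-3;

                    double enet_time = block_bits / enet_rate + enet_ovhd;
                    double arc_time  = block_bits / arc_rate  + arc_ovhd;

                    printf("Ethernet: %.2f ms/block   ARCnet: %.2f ms/block\n",
                           enet_time * 1e3, arc_time * 1e3);
                    printf("ARCnet delivers about %.0f%% of Ethernet's "
                           "effective rate in this model\n",
                           100.0 * enet_time / arc_time);
                    return 0;
                }

            With assumed overheads like these, a four-to-one advantage
            in raw bit rate shrinks to a difference of roughly a quarter
            in delivered throughput.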

            Entire Packets Sent When Fruitless

                 One example of a situation where Ethernet wastes
            network bandwidth is in its sending of large data blocks on
            the network when the intended receiver is not in a position
            to receive them.  The ARCnet network, on the other hand,
            automatically verifies that the receiver is ready to receive
            a given data block before sending it across the network.
            This helps to reduce the time spent on retransmitting large
            data blocks, and improves network efficiency.
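
                 In driver terms, the "ask before you send" discipline
            might look roughly like this (the routine names are invented
            for illustration; this is not the actual controller
            firmware):

                enum reply { BUFFER_FREE, BUFFER_BUSY, NO_REPLY };

                extern enum reply enquire_free_buffer(int dest);
                extern void send_block(int dest, const void *data, int len);

                /* The short enquiry costs only a few bytes on the cable;
                   the full data block goes out only after the receiver
                   confirms it has a buffer free to accept it. */
                int send_when_ready(int dest, const void *data, int len)
                {
                    if (enquire_free_buffer(dest) != BUFFER_FREE)
                        return 0;          /* retry on a later turn */
                    send_block(dest, data, len);
                    return 1;
                }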

            Less Efficient under Heavy Loading

                 Much has already been written about Ethernet systems'
            characteristic of starting to thrash as their presented load
            approaches saturation, much in the fashion of poorly
            designed virtual-memory operating systems trying to
            run too many programs in too little real memory.

                 To be fair, Ethernet was originally designed for usage
            where the total number of nodes was very large (such as, for
                                                ____                    
            example, every home in a town) and used only infrequently.
            If one had, say, ten thousand network nodes and each one
            sent three short messages per day (a city-wide teletex
            network might be an example of such a system), then CSMA/CD
            would probably be the ideal protocol to use.

                 However, in heavy business use, where efficiency
            demands that resources be used as effectively as possible,
            overall throughput that reaches a point beyond which it
            rapidly declines as load increases further is a most
            undesirable characteristic of Ethernet.  ARCnet, on the
            other hand, has the characteristic that its efficiency
            increases smoothly and asymptotically toward a maximum
            value as presented load approaches, and even temporarily
            exceeds, saturation levels.

                 One other area that has rarely been mentioned in the
            press is that Ethernet systems' throughput efficiency also
            declines somewhat as a function of the physical size of the
            network, as this increases the size of the window in which
            undetected collisions can occur.
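
                 A rough illustration, using assumed order-of-magnitude
            figures (signal propagation in coaxial cable is on the order
            of five microseconds per kilometer, and a station remains
            vulnerable to collision for roughly the round-trip time to
            the farthest station):

                #include <stdio.h>

                /* Compare the collision window with the time needed to
                   send a short 100-byte message at 10 million bits per
                   second. */
                int main(void)
                {
                    double prop_us_per_km = 5.0;   /* assumed figure */
                    double msg_us = 100.0 * 8.0 / 10.0;
                    double km;

                    for (km = 0.5; km <= 2.5; km += 1.0)
                        printf("%.1f km span: %4.0f us window = %4.1f%% "
                               "of a short message\n",
                               km, 2.0 * km * prop_us_per_km,
                               100.0 * 2.0 * km * prop_us_per_km / msg_us);
                    return 0;
                }

            The larger the physical span, the larger the fraction of
            each short message during which a collision can still slip
            in undetected.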

            Fatal Flaw?  Diagnosis and Isolation of Network Problems
            ________________________________________________________

           _______________________________________________________________

                 The most serious problem with Ethernet is probably not
            its security failings or performance problems, but the fact
            that fault diagnosis and isolation on Ethernet systems are
            so difficult.
           _______________________________________________________________

                 Let's start by recognizing a basic and immutable truth
            about Local Area Networks:  they grow.  Once installed, the
            easy incremental growth characteristics make it feasible to
            keep adding more processors to perform new functions, and to
            integrate and connect more users within the company.  The
            LAN quickly becomes absolutely crucial to daily company
            operations, just as much so as continued operation of the
            telephone system or electrical power.  As a result, the LAN
            must be absolutely reliable and extended periods of downtime
            cannot be tolerated.

                 No responsible electrician would wire an entire large
            office building, or an entire company, on one single
            electrical circuit, even if the circuit had sufficient
            capacity.  One good reason is because a single short would
            take down the entire building.  However, it's amazing how
            many otherwise responsible data processing people are
            content to connect a large network of computers, possibly
            the entire company, via one single electrical Ethernet
            cable, where one single short anywhere on that cable can
            likewise take down the whole shebang.

           _______________________________________________________________

                 Most Ethernet systems can be completely and totally
            brought to their knees by something as simple as a paperclip
            inserted in a plug-in cable connector by a disgruntled
            employee.
           _______________________________________________________________

                 Shorting of the Ethernet bus cable is not the only
            danger.  The fact that numerous computers are electrically
            connected to one single, electrically common bus cable (as
            is used by "thin-wire" Ethernet) also means that this same
            wire provides a common path for not only data, but also for
            possible damaging high-voltage surges.  A high-powered high-
            voltage surge on a "thin-wire" Ethernet cable could
            conceivably kill literally every single Ethernet interface
            board in every computer attached to it.  Acquiring and
            installing replacement cards in each of these PCs could take
            days, during which time the unfortunate company could be
            effectively out of business.  (It must be said here that
            installing a totally-fiber-optic version of Ethernet

            eliminates this possibility, but the increased cost is so
            high as to make this academic for most non-military users).

                 This type of accidental or malicious sabotage can
            usually be performed from the privacy of nearly any network
            user's office.  If the offending object is removed before it
            is finally discovered, the network manager may never be able
            to find out what the problem was that took the network down,
            or where it came from.

                 What's worse, Ethernet systems generally are cabled in
            such a way that they have no centralized points to which
            those responsible for maintaining the network can go to
            quickly find the problem and isolate it from the rest of the
            network so that company operations can be resumed.  Although
            simple shorts are one possible type of fault which can be
            localized by using an expensive device called a time-domain
                                                            ___________
            reflectometer, I would guess that something much less than
            _____________                                             
            five percent of Ethernet installations have such a piece of
            test equipment on-site.  During the wait, the company is
            effectively out of business.

                 Many other types of problems which can occur with
            Ethernets can only be resolved by laboriously following the
            cable, something like following Hansel and Gretel's trail of
            cookie crumbs, as it snakes through suspended ceilings,
            through locked offices, raised floors and utility closets,
            and disconnecting each attached computer, one at a time,
            until the network "comes alive" once more.  (Footnote 1)
            This may not be such a big deal during working hours, when
            access to offices is relatively easy, but can cause real
            problems in the 76% of the hours of the week outside of the
            normal 40-hour work week, when people have gone home and
            entire company departments and offices are locked up for the
            night or weekend.

                 ARCnet systems, on the other hand, use interconnected
            stars based upon active hubs.  This hierarchical cabling
            approach, almost like a b-tree file system, can make it easy
            and fast to break a malfunctioning network down into
            functional subsets and rapidly isolate failed sections of
            the network from the rest.

                 And of course, ARCnet's unique active hub approach,
            which electrically isolates each cable from every other,
            makes a paperclip or staple jammed into a user workstation's
            ARCnet cable go totally unnoticed by the rest of the
            network, which continues to function normally.  ARCnet
            protects the network, such that only that user's own
            workstation will be affected by such a problem.

            ____________________

                 1.  Remember the old-fashioned kind of Christmas-tree
            lights they used to make, where when one bulb blew, the
            whole string quit?  Remember what a pain it was to find the
            bad bulb?  And if you were unlucky enough to have two or
            more bad bulbs, the resulting hassle was often enough to put
            one out of the Christmas spirit, and permanently!

                 How can it be, now that ARCnet has been used for over
            eleven years worldwide and well proven with over a million
            network nodes in use, that Ethernet still hasn't adopted
            ARCnet's active hub cabling approach to improve its ease of
            installation, reliability and maintainability?  Good
            question.  (Actually, there are a few companies which offer
            multiport, "active hubs" for Ethernet, but these tend to be
            extremely expensive and all-too-rarely discussed or used in
            installed Ethernet systems).

            So Why is Everybody Pushing Ethernet?
            _____________________________________

                 Given the numerous problems that plague Ethernet, why
            is it that Ethernet is used by so many vendors?

            Good Installed Base
            ___________________

                 Ethernet has a relatively large installed base (though
            only about half the installed base of ARCnet) in educational
            and research environments, where possible downtime is not as
            potentially disastrous as it is in the commercial, business
            environment.  This seems logical, since Ethernet seems to be
            inextricably tied to UNIX, which is also most successful in
            the same educational and research environments.

            Visibility
            __________

                 Ethernet has several large corporate vendors who seem
            to be willing to spend large sums of money on publicizing
            it.  These include such biggies as DEC and Intel.  Although
            many "biggies" are certainly involved with ARCnet (NCR is in
            fact the "second source" for the ARCnet LSI controller
            chip), none of them has been inclined to spend a lot of
            money promoting ARCnet in the press.  Many naive users (and
            vendors) have moved towards Ethernet simply by default, not
            having enough experience with it to see the downside.

                 Also, many ARCnet vendors "private label" their network
            without calling it ARCnet.  Therefore, many users are
            already using ARCnet without knowing it.  This fragmentation
            makes it look like ARCnet has a smaller market share than it
            actually does.

            The Standards Issue
            ___________________

                 One point that has moved some vendors to push Ethernet
            is that Ethernet is covered by various international
            standards.

                 ARCnet has not (yet) had the formality of an ANSI, ISO
            or IEEE standards number applied to it.  One reason that
            ARCnet vendors haven't been too concerned about this
            oversight is that it just hasn't seemed very necessary.
            Even without an "official" standard number, the fact is that
            the 1.4 million ARCnet nodes already existing worldwide are
            already compatible.

            Ethernet's Weakness
            ___________________

                 Ethernet's numerous weaknesses are, ironically, another
            draw for some companies.  At a recent major computer show,
            more than one vendor replied with refreshing candor that
            they were involved with Ethernet precisely because Ethernet
            was so unreliable and difficult to maintain (in troublesome
            cases, at least) that it fairly begged for all manner of
            expensive analyzers, testers, monitors, and other add-ons,
            which these companies were only too happy to sell to the
            desperate users needing them.

                 Of course, most companies don't realize that they need
            these things until after they've made the network decision,
            and therefore have not had the opportunity to factor these
            high-cost add-ons into their cost comparisons when choosing
            which LAN to install.

            Ethernet Works Okay Most of the Time
            ____________________________________

                 Like a four-pack-a-day smoker who thinks he's
            invulnerable because he hasn't come down with lung cancer
            (yet), some Ethernet users believe that Ethernet must be
            okay since they have used it for (fill in some time period)
            and haven't yet experienced the problems to which it is
            prone.

                 This sort of reminds one of the Soviets, who for
            decades insisted that their nuclear reactors were so
                                  _____                         
            inherently safe that they didn't need the enormous and
            costly concrete-and-steel containment vessels used by all
            Western countries.  This worked fine until that inevitable
            day when all the little "impossible" things lined up just so
                                                                 _______
            at the same time, and Chernobyl happened.

                 Like disk head crashes, which happen infrequently but
            are never pleasant when they do, people like to think that
            it won't happen to them.  It's just easier to not think
                               ____                                
            about what could happen.  However, (usually after having
                       _____                                        
            learned the lesson the hard way, and sometimes more than
            once), successful companies realize that they must take a
                                                          ____       
            defensive posture and avoid using inherently unstable and
            electrically vulnerable technologies for critical systems
            upon which day-to-day corporate operation depends, and
            especially when more reliable alternatives are readily
            available.

            Profitability
            _____________

                 The fact that ARCnet equipment has largely achieved
            commodity status has also worked to entice some vendors to
            try to sell Ethernet instead.  Numerous Taiwanese and other
            clonemakers now crank out ARCnet boards in huge volumes and
            at astonishingly low prices (at least one vendor is quoting
            less than $50 each in quantities of 50).

                 Just as IBM tried to shake off the low-cost clones by
            shifting to the MCA bus, some network vendors and dealers
            have decided to push Ethernet simply because Ethernet (and
            Token-Ring) boards and equipment are more expensive and
            therefore sustain better margins and profitability.

                 More than one dealer at the computer show mentioned
            earlier replied to the effect that "...of course, ARCnet
            is better.  However, ARCnet boards sell for $200 less than
            Ethernet boards.  If a customer installing a new network
            with ten nodes comes in and asks about Ethernet boards, why
            should I tell him about ARCnet and lose the extra profit I
            can make by selling him the Ethernet cards he already thinks
            he wants?"

            A Brief Word About Productivity Enhancement
            ___________________________________________

                 In a recent article in a popular magazine, the author
            made the comment that upgrading his PC into a "full
            workstation" (at a direct cost of nearly $8K) and changing
            to a windowed multitasking environment had improved his
            productivity, even though the work he was actually doing
            continued to be basically the same.

                 Like the electric train set which will finally be
                                                        _______   
            satisfying when one adds just a few more sections of track,
                                            ___                        
            or a few more buildings to the layout, or another
            transformer and another locomotive and set of cars,
            continued hardware upgrades are an easy next step
            in the search for that elusive "Holy Grail", the ideal,
            productive, development system.

                 However, there are other, largely overlooked,
            productivity enhancers available for those MS-DOS
            programmers who are creative enough to find and use them.

                 One sure way to judge the potential productivity of any
            programmer is to look to see what kinds of software tools
            are in their toolbox.  A productive programmer's toolbox
            will contain a good, serious editor (such as BRIEF), at
            least one high-quality compiler for production work
            (typically at present Microsoft C or Turbo C, although one
            of the dBASE compatible compilers can substitute for more
            business-applications types) and another, higher-powered
            general programming tool for the rapid and effective

            development of one-shot programs where programming time is
            more important than execution time (such as SNOBOL4+ or
            ICON).

                 (NOTE: don't confuse SNOBOL4+, an MS-DOS product, with
            the confusingly named, mysterious, and poorly-documented
            thing called SNOBOL that is often shipped with UNIX systems.
            Whatever the latter is, it doesn't seem to be the same.)

                 It amazes me to see how many programmers waste time
            struggling to use a low-level and tedious tool like C (if
            they try to use an automated method at all) to create these
            one-shot type programs.  Programmers who realize the value
            of trading off cheap machine time to save costly and limited
            programming time, but who haven't yet discovered truly
            high-powered general-purpose programming languages like
            SNOBOL4+ or ICON are wasting a substantial amount of their
            own time and their employer's money.  This is true whether
            it's because they are writing one-shot programs using less-
            powerful tools, or not writing such one-shot programs
                               ___                               
            because it would take so long to do so using more primitive
            but more familiar tools.

                 It's hard to imagine any hardware upgrade that could
            conceivably deliver as great a productivity enhancement as
            the $50-$100 it costs to acquire a copy of SNOBOL4+ or ICON.

            So, Where Are We Really Going?
            ______________________________

                 We've covered a lot of ground in this article.  Where
            does this author think that data processing is going in the
            not-too-distant future?  Here we go:

            Networking, Clearly
            ___________________

                 It should be clear by now that the rapidly increasing
            power and declining cost of microprocessors make their
            incorporation and dedicated use in workstations inevitable.
            Local Area Networks, of whatever flavor, are pretty
            obviously the vehicle of choice for allowing those high-
            powered personal workstations to share data and
            applications, as well as to pass work-in-progress from one
            employee or department to another.

            Multiprocessing
            _______________

                 I think that a major ongoing effort will be not to find
            new ways to cram multiple users onto a shared processor (I
            feel that those days are past us, and good riddance), but to
            figure out ways to make practical use of multiple processors
            concurrently to support a single user.
            Why?  Well, reading from the ads in the back of the popular
            magazines, an 8088 or 8086 costs (retail) $3-4 and has a

            Norton SI (yes, I know that's a flawed number but at least
            it's something) rating of 1 to 2.  80286's cost about $60
            and have a Norton SI rating of maybe 7 to 10.  80386's cost
            something like $400-$500 and have a Norton SI rating of 15
            to 25.  A little experimenting with these numbers should
            make further explanation on this point superfluous.
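
                 A little of that experimenting, using the midpoints of
            the figures just quoted (the arithmetic, not the exact
            prices, is the point):

                #include <stdio.h>

                /* Rough retail prices and Norton SI figures quoted in
                   the text above (midpoints of the quoted ranges). */
                int main(void)
                {
                    const char *name[]  = { "8088", "80286", "80386" };
                    double      price[] = { 3.5,    60.0,    450.0   };
                    double      si[]    = { 1.5,    8.5,     20.0    };
                    int i;

                    for (i = 0; i < 3; i++)
                        printf("%-6s $%6.2f  SI %4.1f  SI per dollar %6.3f\n",
                               name[i], price[i], si[i], si[i] / price[i]);
                    return 0;
                }

            On raw performance per dollar, a handful of the cheapest
            processors working together easily outruns a single
            expensive one, which is exactly what makes multiprocessing
            so attractive.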

            Mainframes
            __________

                 The same general principle ("you can have as much
            processing power as cheap as you want, as long as you don't
            insist on having it all in the same place"), I feel,
            ultimately dooms classical mainframes for most business
            applications.

                 Major companies don't try to operate using one super-
            employee.  Instead, they set up a corporate and management
            structure and systems that coordinate and route
            communication and workflow among numerous ordinary-human-
            level employees in such a fashion that the necessary work
            all gets done in an orderly fashion.  Work is distributed,
            processed, collected and passed along, until the total job
            gets done.

                 Most data processing people don't understand, and often
            joke about, management's peculiar fascination with
            organizational charts and the like.  However, DP types need
            to realize that management has already solved the kinds of
            problems in the use of numerous small processors
            (employees!) to solve big problems that DP people profess to
            consider difficult or impossible.  Once management
            eventually realizes how desperately their management and
            organizational skills are needed for designing a suitable
            organizational structure for an army of small computers
            working together to meet corporate goals, we will be well on
            the way to solving these problems.

            Multitasking vs. Multiprocessing
            ________________________________

                 Multitasking, in the classical sense, is another
            expression of an attempt to share one processor among
            several tasks.  It ultimately makes more sense to spin most
            of these tasks off to one or more separate processors, to
            run in real, rather than in simulated, parallel.
                   ____                                     

                 As an intermediate step, we'll probably have to pass
            through a stage where we struggle along with OS/2 and
            something like "old fashioned" timesharing-style
            multitasking.  During this time, we'll all figure out what
            things we and other users really want to do in parallel.
            Hopefully, out of these discoveries, we'll eventually end up
            with suitable personal multiprocessors that really do the
            things we really want, and do them the way they should be done.

            Multiwindowing
            ______________

                 It seems safe to say that computing can only get more
            visual, and this suggests increasing emphasis on multi-
            window-type screen interfaces and increasing use of bit-
            mapped graphics, hopefully with dedicated processors to
            handle the increasingly heavy overheads involved in managing
            such things.

                 (It is unfortunate but must be stated that all of these
            systems seem to have in common the fact that they are all
            difficult to write programs for).

            Back-End Batch-Type Processing
            ______________________________

                 It is important not to forget that many business
            programming requirements are not inherently interactive, but
            are better served by batch-type processing.  This major
            class of business problems is not well suited to WIMP-type
            interfaces, but prefers some kind of advanced batch job
            facility.  As mainframes become increasingly squeezed out of
            the corporate data processing mainstream, I think we'll be
            seeing increased attention being given to new and more
            effective ways to handle large-scale batch processing using
            distributed computers, including more effective ways to
            distribute large batch workloads onto teams of smaller
            computers.

            UNIX Won't Go Away Until We Get Something Better
            ________________________________________________

                 Finally, I don't expect UNIX to go away anytime real
            soon.  As disappointing as that is, the costs involved in
            developing a major, competitive operating system are, simply
            put, very high, and not susceptible to yielding a quick
            return on that investment.  Given the notoriously short-term
            view of corporate America, I don't see a bunch of serious
            contenders lining up to provide us with the (badly needed)
            replacement.

                 Up to this point, I've been ignoring the Macintosh and
            its successors.  This isn't out of some kind of "ignore it
            and pretend it doesn't exist" mentality, but rather because
            Apple has successfully (up to now) managed to keep it as a
            single-vendor product, and such single-vendor things usually
            tend to be a niche product, big though that niche might
            eventually be.

                 OS/2 is obviously eventually going to be a success, to
            one degree or another, and will provide an alternative to
            UNIX for those users who desperately want multitasking, are
            willing to pay the price, and don't want to fight UNIX to
            get it.  Hopefully, Gordon Letwin also has designed OS/2
            internals to make it easier to spin off various system and
            applications tasks to other cooperating processors once
            those become more commonly available:  this could prove to
            be OS/2's real eventual ace-in-the-hole.

                 Meanwhile, I expect MS-DOS to continue its dominance
            for the immediate foreseeable future.  While one could
            certainly imagine better operating systems, at least MS-DOS
            has the characteristic that it doesn't stand in the way of a
            programmer making the machine do what it needs to do.  Any
            operating system which, by benign neglect or otherwise,
            results in the greatest flood of first-class third-party
            software and packages that the world has ever
            seen clearly can't be dismissed as any kind of a failure.

                 Are there any dark horses?  Yes, but they're very dark,
            indeed.  QNX, which at least seems to have the most elegant
            potential this author has seen for multiprocessing and local
            area networking on PCs, seems to have carved out a small but
            profitable niche for itself in the market.  However, in the
            blind rush to support the other "guaranteed biggies", QNX
            seems to be ignored by most of the major players.

                 Datapoint, one of the original innovators, seems to
            have largely faded from the scene.  Although they are
            supposed to have a very advanced multitasking and networking
            operating system running on PCs these days, nobody seems to
            know anything about it, and it's never written up in any of
            the popular magazines.  In Datapoint's current condition,
            weakened following a series of shocking management blunders
            some years back, the company is probably not in a strong
            enough position to grab a major market share with its
            operating system product, no matter how good it might be.

                 The only visible way out, in the absence of a major
            hardware shakeup such as the arrival in force of neural-
            network computers or some such that renders all the current
            operating systems obsolete, seems to be some well-funded,
            long-term-view company coming forward with both the money to
            create a truly contemporary new operating system standard
            and the deep-pocketed staying power to make it a success.
            In today's corporate America, the only likely-sounding
            source for such a development would be Japan Inc., and I'd
            rather not even think about that.

           _______________________________________________________________

                 About the author:  Gordon E. Peterson is an outspoken
            and admittedly opinionated computer consultant and systems
            developer.  He is best known as having been the creator of
            the systems software for the world's first commercially
            successful LAN, the ARC System, in 1976 and 1977.  Most
            recently, he has been working as a consultant through
            Infopoint Systems in Texas and Natural Intelligence in
            Maine, on various assignments in Europe, and can

            occasionally be found pursuing his other great passion,
            traveling on ocean liners.

