nettime's_roving_reporter on 10 Jul 2000 00:09:40 -0000


<nettime> The Cathedral and the Bizarre


                       The Cathedral and the Bizarre
   07 July 2000
   by Jeff Lewis
   Contributing Columnist
   The OpenSource movement is one of those things I both love and hate. I
   love the concept - it harkens back to the 70s and how I got into
   computing. It was a time when almost no one actually owned a computer;
   we shared access to systems which ran Unix and MTS and other esoteric
   OSes, or we borrowed time on PDP-8s, and PDP-11s when the University
   would let us have it. We didn't compete about brand of computer - who
   could afford our own computer? - we engaged in friendly competition
   trying to improve each other's code.
   That all changed in the late 70s when a young programmer actually had
   the audacity to sell his BASIC interpreter to the other programmers
   rather than just giving it and the source code for it away. He went
   away and sold it to Altair and Apple and well... the rest, as they
   say, is history. With the IBM-PC making microcomputing respectable in
   1981 (running an OS owned by, although not written by, that same young
   programmer), the die was cast and computing was changed in a
   fundamental way. Creating software, even for recreational purposes,
   was tied to making money - and nothing kills the notion of community
   faster than putting a price on it.
   Some people, like Richard Stallman, have always tried to keep a bit of
   this spirit alive - admittedly, it must be like fighting uphill in an
   avalanche. But it wasn't until Linux that the OpenSource movement
   really kicked in. Sure, there were lots of other OpenSource projects
   before Linux - BSD, GNU - and while Linux relies on GNU for a lot of
   its tools, the truth is that none of those projects ever captured
   people's hearts the way Linux has.
   Which leads me to the 'hate' part. There's a growing fanaticism within
   the OpenSource community which is starting to smell almost as bad as
   the fanaticism it tries to combat.
   Bruno! Never kill a customer! 
   Eric Raymond's comments at MacHack were wonderfully telling in several
   ways. He criticised Mac programmers for being too focussed on user
   interface and criticised MacOS for intertwining the UI with system
   functionality, making it harder for new programmers to get on board
   writing MacOS apps.
   Interestingly, the chief problems with Linux, from a consumer
   perspective, are that it has no standardised UI, its tools are simply
   too difficult to use and configure, and it requires far too much
   upfront learning to get up to speed. The last is the most telling:
   the Linux model moves the cost of learning from the developer to the
   user.
   As for upfront developer time, well, that's true - it is harder to
   learn MacOS than it is to learn, say, the ANSI C libraries... but in
   truth, Raymond overstates the issue. In fact, MacOS can be divided
   into several large chunks. I regularly write 'console' apps in MacOS
   to do little tasks which don't warrant a lot of UI. I also have
   several shells for drag and drop functionality which handle all that
   sort of thing. In fact, I tend to stick to the ANSI C libraries. There are
   some deficits in MacOS, relative to the 'standard' libraries - MacOS
   has overwhelmingly complex networking support compared to sockets and
   streams, but it also offers more complex functionality.
   Ironically, it's the focus on UI, perhaps taken to obsession, which
   gives MacOS its edge over... well, it has to be said this way - over
   all other OSes. Windows comes close, but misses simply because Windows
   developers really do not have that obsession to make the UI perfect.
   Linux and the various X-Window interfaces rarely even come close. The
   obsession with sticking to a standard behaviour means that MacOS users
   experience a consistency of behaviour that no other OS can offer
   (although again, Windows is getting closer - Linux is not even close).
   Why the obsession? Simple. The customer is the reason for the
   software. The customer must never be punished for choosing your
   software. That means that the UI experience must be compelling. It has
   to be refined. It should, as much as possible, anticipate without
   being annoying (which is where Windows applications - especially
   Microsoft's - fall down... They're just a little too eager to
   anticipate and help). The customer isn't likely to be using just your
   software, which means you have to play nice with the other software.
   You can't strongarm the customer by only supporting a proprietary
   format. You can't have a wildly different UI and behaviour because
   you'll throw the user off.
   At least Raymond does admit that the OpenSource community can learn
   something about UI from Mac programmers... I hope they can also learn
   something about real world customers.
   Herding Cats 
   Sure, it's getting better, but let's put this in perspective: Linux
   has been around for five or six years. The IBM-PC was created in 1981.
   The Lisa was released in 1983. The Mac in 1984. By 1985, Microsoft,
   Digital Research, Commodore-Amiga and Atari (by way of DRI) all had
   GUI based computers, or GUI based interfaces for their computers, and
   had established guidelines for UI. Admittedly, Microsoft's was
   particularly bad (trust me, Windows 1.0 was stunningly horrible), but
   then Microsoft - even though their own developers hated it - managed
   to evolve and improve it. In five years, we went from no computer
   having a UI to all computers used by the general public having a GUI
   based OS - and three of the four being pretty decent ones at that.
   Linux, on the other hand, pops up around 1995 after the GUI market had
   twelve years of development - and promptly reinvents the wheel and
   badly at that. (In defense of Linus Torvalds, he wasn't trying to
   build a consumer OS, he was trying to create a freely available
   Unix-like kernel in the spirit of Andrew Tanenbaum's Minix, which had
   been the hot mini-OS on Atari and Amiga computers...)
   The core 'advantage' Raymond puts forward, many programmers improving
   the code, many eyes looking for bugs, is also its chief weakness.
   Programmers - at least the kind who are likely to get involved in an
   OpenSource project - are a notoriously independent lot. They love to
   do their own thing and always think they have a better way to do
   things. Normally, this is an advantage because, with filtering and
   discipline, it is the fountain of creativity which drives this
   industry. Unchecked and unfiltered, though, it becomes unbridled
   chaos. As a result, you have no fewer than six different desktop
   systems, two different configuration systems, and tools whose command
   line options change not only from flavour to flavour but from
   revision to revision. Perl is the best example of this - when
   they went to version five, they changed the language syntax in a way
   which broke existing code. Perl itself is a testimony to the
   OpenSource mindset - it's a gruesome mishmash of inconsistent syntax
   and function calls - definitely a product designed by committee - but
   one wherein each member clearly wasn't listening to anyone else.
   Raymond touts the stability of Linux as proof of the OpenSource
   concept, but that's a bit misleading. The core of Linux was written by
   one person - Linus Torvalds. Moreover, there is a small group who
   shepherds the contributions to the kernel to keep it stable and clean.
   In other words, there's a priesthood at the top of the bazaar. If you
   check into each successful OpenSource project, you see the same thing:
   a small group of referees who filter the input and weed out the bad
   ideas. The bazaar has cops. The chaos is contained.
   When you get a company the size of Apple or Microsoft, you have
   thousands of developers who do peer review of code. You have the
   referees who determine what goes into a product and what doesn't...
   but they have one thing that the OpenSource method doesn't: they have
   markets to answer to.
   Who ordered that? 
   See, when commercial developers create a product, they start by trying
   to solve a problem that customers need solved. The focus is always on
   the customer. What do they need? What do they want (which isn't always
   the same thing :)? Can they use this software? How much handholding
   will they need to make this work? Can we make this experience so great
   that the competition can't sway them back?
   This last is another flaw with OpenSource - the lack of competition.
   Let me start by saying that it's very weird to be saying this - I'm
   not a big fan of the competitive model, but there are times when it
   works. The advantage of competition is that it keeps you on your toes.
   It forces you to pay attention to what the consumers want. Yet in
   OpenSource, there isn't really any direct competition. Everything goes
   into the pool, good and bad. If you have a good idea, everyone else
   gets that good idea and it gets merged into all products - even the
   bad ones. This means that the real differentiation for OpenSource
   isn't marketshare or even marketability, but simply whether or not it
   can garner a following.
   That tends to happen whenever someone first solves a problem. People
   jump on the bandwagon and promote the software, and the crowd grows
   and grows - often way out of proportion to the quality of the
   solution. Perl, again, is an excellent example of this. It's really a
   terrible language - badly.. ok... not designed, clumsy and arbitrary.
   But it works - it's better than shell scripting, and more powerful
   than awk... and it was the first serious attempt at such a language. So it
   went platinum with a bullet.
   The problem there is that the 'capitalist trench' problem is just as
   real in OpenSource as it is in commercial products: once a group buys
   into a specific solution, the cost of changing grows with time. That's
   true even if the software is 'free' because the maintenance costs and
   time to convert to another solution are not.
   You want fries with that operating system? 
   The other core point Raymond tried to make is that software is a
   service industry. To be honest, this is a disturbing concept in
   several ways.
   First, let me ask you a question: if you make your living by selling
   service on software, what's the motivation to make the software as
   easy to operate and maintain as possible? The answer? Well - not much.
   And so we have Linux. Very powerful. Very flexible. Very hard for
   average computer users to configure and maintain.
   Now, that's not to say that commercial developers always have a
   motivation to make their software easier... I would argue that any
   software which needs a certification program is bad software. Any
   software which needs an entire section at Chapters (errr, Computer
   Literacy... Barnes and Noble) devoted to rack after rack of MCSE
   books, cheat cards and training CDs (and it's not just Microsoft -
   Novell started this idea, but everyone is getting into it - Cisco just
   joined the gang) is badly written from a user's perspective.
   Actually, let me go back to Cisco for a moment. Cisco used to have a
   great freeware configuration tool. You could download it from their
   website. That tool has quietly vanished and has been replaced with
   CiscoWorks and of course, Cisco certified training. A very ominous
   sign.
   There are also many kinds of software which have lifespans just too
   short to make it sensible to bind the customer in a service agreement.
   Games are a good example. Most games are one-time sales. You can do a
   little add-on business with cheat books or tie-in products, but for every Quake,
   there's a thousand other games which get a moment of success, then
   fade away.
   While Raymond cites Doom as an example of a product which revived
   itself by going OpenSource, he conveniently forgets that Doom was a
   very successful commercial product for a long time - both as shareware
   and commercialware. It was after it had saturated its market and had
   been surpassed by newer games and technologies that the authors
   decided it could be put into OpenSource, both to give new game
   developers a hand learning some of the tricks and to let people who
   just wanted to play around have a toy. It did generate an aftermarket,
   but it's questionable how much revenue that aftermarket produced.
   He does cite the large number of addon and tie-in products Doom
   generated, and yes, that's an argument for opening up a product to
   some degree - I'm all for open formats and interchangeable files, but
   that's a long leap from fully open source on a commercial product.
   Quality means many things 
   OpenSource fans tend to believe that they are quality fanatics, but
   what definition of quality are we using? To them, the most important
   kind of quality is stability - software shouldn't crash. The problem
   is that OpenSource software does crash. Does it crash less often than
   say, a Mac app? Well, probably, but most OpenSource projects tend to
   be much simpler and smaller than Mac apps and in general, tend to be
   very minimalistic. So they have the 'quality' of simplicity - for the
   programmer - but lack quality for the end user.
   The argument is that since everyone who uses the software has the
   source, if you find a bug you can fix it then submit the bug fix to
   someone and it becomes part of the product - if you can figure out who
   to contact. But there again, we see a confusion as to who the
   customer is: most people who use computers do not know how to program
   in BASIC, let alone C or C++, and more importantly - they do not want
   to. They buy a computer to solve a problem - to get work done - not to
   debug someone's code.
   Ironically, when commercial developers release applications which are
   clearly not 100%, we accuse them of forcing the customer to be beta
   testers, but in a sense, OpenSource assumes you're not only going to
   be a tester; you're going to be a programmer and fix the bug!
   Raymond points out that Windows 2000, which reportedly shipped with
   63,000 bugs, shows that OpenSource works because under Brooks's
   "Law", the number of bugs is roughly proportional to the square of
   the number of programmers. Since Linux has 40,000 developers working
   on it, there should be on the order of 1.6 billion bugs in Linux.
   The flaw is that he perceives
   Linux as a collection of small projects, but sees Win2K as a single
   monolithic application - much as he seems to see MacOS. In reality,
   Win2K and MacOS aren't monolithic. They are composed of many smaller
   parts which are handled by smaller teams. Much like Linux.
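   The square-law arithmetic above can be checked in a few lines (a
   minimal sketch; the function name and the bare n-squared scaling rule
   are illustrative assumptions, since the argument as reported leaves
   the constant of proportionality unstated):

```python
# A back-of-envelope check of the square-law reductio described above.
# The pure n**2 scaling rule is the premise being tested; the constant
# of proportionality is unknown, so only the relative scale is meaningful.
def implied_bugs(programmers: int) -> int:
    """Bug count implied by a pure square-of-programmers scaling rule."""
    return programmers ** 2

linux_developers = 40_000
print(f"{implied_bugs(linux_developers):,}")  # prints 1,600,000,000
```

   Whether bugs actually scale this way is exactly what the paragraph
   disputes; the number only shows how extreme the premise is when Linux
   is (mis)treated as one monolithic project.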
   As for comparing bug counts - at least Microsoft has a bug count. If
   Raymond had bothered to check the number, he'd have found that a
   rather large proportion of the 63,000 bugs are cosmetic - and none
   were considered 'showstoppers'. We don't even have a way to determine
   the real bug count for Linux since there's no central repository for
   this sort of information.
   There is another system... 
   Apple has taken the fruit of the OpenSource movement - BSD Unix, and
   added their own proprietary components which address the shortcomings
   of BSD, from the perspective of the typical Mac user. They're bringing
   UI consistency to the 'Wild West' and making a UI a required part of
   the Unix experience. Old time Unix fans will find this unpleasant and
   even undesirable, but this is the future.
   Eric Raymond made one other comment: that the elitism of Mac
   programmers is a danger to its long term survival... and I ask him -
   where has he been? This same 'elitism' is what pulled us into a GUI
   based world of computing. It's what made computing accessible to the
   average person. This sounds like a weird echo of the 'Apple is doomed'
   argument we used to hear so often - but in case he's missed it - Apple
   is still here and doing better than ever.
   Yes, OpenSource is important. Yes, it has much to contribute and
   provides a way for non-commercial development to happen. Personally,
   as I've said before, I think it's the most important new development
   in computing... but it's not the future until it learns who the
   customer is. Apple has made a couple of slips in that area with Aqua
   and the new Quicktime, but at least they admit that they made an error
   and have backed off. They embraced OpenSource and made it work for
   them.
   Not bad for a cathedral.
   Jeff Lewis has been a programmer analyst for 24 years and started
   programming commercially around the time Unix and the original
   microprocessor were released. He's worked on systems from DEC
   PDP-8/e's to microprocessors to mainframes, although these days he
   limits himself to Macs and PCs for the most part. He wrote for a
   Canadian national computer paper for five years, and his goal is to
   foster mutual understanding, or at least tolerance, between
   platform-centric users, and to remind people that computers are just
   tools, not lifestyles, and definitely should not be religions.

#  distributed via <nettime>: no commercial use without permission
#  <nettime> is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: and "info nettime-l" in the msg body
#  archive: contact: