Misunderstanding Computers

Why do we insist on seeing the computer as a magic box for controlling other people?
Why do we want so much to control others when we won't control ourselves?

Computer memory is just fancy paper, CPUs are just fancy pens with fancy erasers, and the network is just a fancy backyard fence.

(original post -- defining computers site)

Sunday, February 20, 2022

When the IBM 9000 Scientific Actually Existed (PC History and Context)

(This is not about the more recent mainframe called the Enterprise System/9000. It's about an early-1980s workstation, originally aimed at scientific work, called the System 9000.)

I was sure that the 68000-based IBM System 9000 scientific workstation actually predated the 8088-based IBM PC model 5150, but all sorts of articles say it started development after the machine we know as the IBM PC and was released in 1982 or 1983.

Nope, nope, and nope. I rediscovered something today (late Fri. night, Feb. 19, '22, when I probably really should have been going to bed). My memory was not wrong:

http://www.columbia.edu/cu/computinghistory/cs9000.html

(A picture says a lot. I'd like to post one here, but the only one I can find is the one at the link above, which doesn't offer liberal use terms. It shows a vertically stacked system: a plotter on top of a system unit that includes function keys on a slanted panel above the keyboard tray, and, mounted above the plotter, a monitor with floppy disk drives to its left. It looks like something you'd use in a lab.)

So there were running prototypes in 1980. Not released until 1982/83, but running prototypes in 1980.

IBM shot itself in the foot on this one. Big time. 

Effectively gave the industry to Bill Gates and Microsoft.

And I have to remind myself of a little bit of religious mystery: 

Microsoft was the only company -- well, one of the few companies -- willing to sell a defective operating system, which allowed a lot of people to make money off fixing Microsoft's problems. (That's the short version. Maybe I shouldn't say "defective"; "unfinished" would probably be a better way to put it.) And it was good enough to let some people start keeping their family history work on their computers, among other things. That's why God (or natural consequence, for you atheists and agnostics out there) let Microsoft take over the market.

(I should note that shooting yourself in the foot is what this world is for. Shooting yourself in the foot doesn't have to be a fatal error, and, even though it's painful, it can be a good experience if you learn from it.)

Unpacking that:

First, when talking about software, unfinished and defective mean essentially the same thing -- that it doesn't (yet) perfectly satisfy the customer's needs. 

But there is no such thing as finished software. If you have a piece of software that's in use, someone is going to be regularly finding ways in which it doesn't meet spec, and other ways in which it could be extended and improved. 

Even now, software always has defects. Nobody sells truly finished software. When talking about software, finished means dead. 

But back in the 1970s and 1980s, most companies in the nascent software market intended to at least try to make their products free of known defects before they put them on the market. Microsoft, on the other hand, was willing to put products out that were known (by their own estimate) to be only 80% finished. That meant that customers could be using the product while Microsoft worked on a better version.

(It also meant customer frustration because of overly optimistic interpretation of sales literature claims. I got bitten hard several times by that, and, yes, the pain is still there. The second time, yeah, I should have been more wary. The third time, I guess it was my fault for taking the claims too literally yet again. Talk about painful mistakes, but I've learned that Microsoft's stuff doesn't work very well for me.)

In the 1990s, Microsoft made too much of this principle with their 80/20 rule of getting a product out the door when 80% of the function was implemented, and letting the customer help figure out the remaining 20%. All too often, it was closer to 20/80 in my opinion, but even that is not exactly wrong in the agile approach to technology. 

("Agile" is actually a discipline that I approve of in concept, if not in extant implementations. But I'm being a little more general than Agile techniques here.)

Back in the 1980s, other companies, including IBM, still tended to try too hard to give a product polish before turning it over to the customer. That approach may make for more satisfied engineers (maybe), but it gives the customer less say in how the product should develop, and less opportunity to revise and refine their requirements early on, while those requirements are easier to rework.

Microsoft BASIC is one example of this principle. (And Tiny BASIC is an even more extreme example.)

Dartmouth stripped down the definition of the Fortran (Formula Translator) language to produce the definition of the BASIC (Beginners' All-purpose Symbolic Instruction Code) language. Fortran was too complicated (for some meaning of complicated) for ordinary people to understand, so they made it simpler. More importantly, Fortran had to be compiled, meaning only people with a compiler could use it, putting it even further out of reach of ordinary people.

But even the Dartmouth definition of BASIC was more than your usual user thought they wanted at the time. A programmable calculator with a large screen was just fine for an awful lot of purposes, and was more than they/we had before then.

So Paul Allen and Bill Gates borrowed (with tacit non-disapproval) a certain company's PDP minicomputer at night for a couple of months and worked up a stripped-down derivative of a derivative of Dartmouth's BASIC. (Under the present intellectual property regime, they would probably have owed a lot of royalties to Dartmouth and DEC, among others.) They got it running on the very early microcomputers, starting with the 8080 and continuing to the 6502, 6800, and many others.

(If I've made it sound easy, it wasn't. They were definitely ignoring their studies during the two months they worked on that first 8080 version, drawing on understanding acquired from previous experience elsewhere and putting in long hours the whole time.)

Microsoft BASIC was very incomplete. But it filled a big need. And customers were able to give them feedback, which was very important. (A different, somewhat freely distributable version of BASIC, Tiny BASIC, was even more incomplete, but it filled a similar big need in a more varied, but smaller overall market.)

Family history, at the time, was a field in which the professionals had very arcane rules about which paper and ink to use, the format of the data, which data to include, and so forth. As incomplete as anyone's implementation of BASIC was, programs written in BASIC could help the researcher get that data into computer files fairly correctly and fairly painlessly.

(Family and personal history are a couple of the hidden real reasons for the need for personal computing systems.)

The same sort of thing happened with Microsoft DOS and Windows operating systems. They were incomplete and even defective in many ways, but they provided a framework under which a variety of programs useful in business applications could be written and shared/sold. 

CP/M from Digital Research was more complete than DOS, but more expensive. So was the Macintosh, from Apple. (Microware's OS-9/6809 was very nicely done for the time and, on Radio Shack's Color Computer, was priced more within reach, but it had an image of being too technical for the average user, and Microware really wasn't trying hard to sell it in the general market.)

Essentially, the incomplete (or defective) nature of Microsoft's products provided a virtual meeting ground for the cooperation of a large number of smart people to fix, enhance, and extend the products.

Similar things could be said for Commodore's 6502-based computers, but they had the limits of an 8-bit architecture, and Jack Tramiel and Commodore's board of directors were way too content just selling lots of cheap stuff and letting the customers figure out what to do with it. 

When Commodore picked up the (68000-based) Amiga, they didn't have a way for people to bridge the gap between their earlier 8-bit stuff and the Amiga.

One thing that can be said about Microsoft, they understood managing and selling upgrades.

Incompleteness and even defects can be a valuable feature of a technological product.

So this much was not bad, really. Things started going seriously south when Microsoft got too big for their britches in the mid-1980s, and went permanently south when they refused to give up their by-then-established tacit monopoly in the mid-to-late 1990s.

Compare this to Apple's Macintosh. Apple had good technological reasons to keep closer control at the time: even the 68000 didn't have enough oomph to provide a stable graphical user experience if too many people got their hands in the pie. But the technological reasons notwithstanding, the lack of approachability ultimately hurt the Macintosh's acceptance in the marketplace, more so than the price to the end user.

Intel's development of the 8086 followed a similar pattern of required upgrades, moving from less complete to more, although they almost killed themselves with the 80286.

Both Intel and Microsoft are now eating the consequences of trying too hard to be the only dog. Too big, too heavy, too much momentum in the technology of the past. And we, the market, are eating the consequences of letting them get into that position.

All the big companies are in the way. When a company gets too big, it can't help but get in the way, especially when it is busy trying to establish or keep a tacit monopoly position in its market. (I'm very concerned about Google, even though I use their stuff, and even though it works for me now in a way Microsoft's stuff never did. I'm still looking for alternatives. Monopolies are not a good thing.)

I've wandered from the topic. 

Yes, using the 68000 would have required IBM to work harder to keep focused on a limited introductory feature set similar to the 8088 IBM PC.

So what about the IBM 9000 and Motorola's 68000 and the IBM 5150?

Could IBM have based their personal computer offering on the 68000 instead of the 8088 and been successful?

The IBM 9000 has been regularly taken up, along with Apple's Lisa, as an example of how developing a PC based on the 68000 would be prohibitively expensive for the personal computer market. 

But both are more of an example of how the 68000 allowed an awful lot more to be done than the 8086 -- so much more that over-specification became way too easy. It had lots of address space and decent speed at 16 bits, and it wasn't slowed down significantly at 32 bits. That made it hard to tell the idealists in the company (the board of directors and the sales crew) who wanted to add features, "No, we can't do that yet," until it was too late: the product had departed significantly from what the customers wanted and was way over budget and way past the delivery date or market window. That's a significant part of the reason both the 9000 and the Lisa did not come out in 1980, and ultimately did not do well in the market.

The Macintosh is a counter-example. Much tighter focus in development, more accessible entry price, more accessible product. (And borrowing heavily from the lessons of the Lisa.) 

I often say that the 68000 may have been "too good" for the market at the time, since it seems to have required someone with Steve Jobs' focus and tenacity, plus the lessons of the Lisa, to successfully develop the Macintosh.

The 9000 was targeted at the scientific community, and it was intended to be a "complete" (meaning all parts IBM) solution. That kept it off the market too long and kept the price high.

Could IBM have stripped down the 9000 design and built a machine comparable to the IBM PC with a 68000 and sold it at a comparable price? Or even started over with a simpler goal and successfully developed a 68000-based PC?

People have been "explaining" that it would have been "prohibitively expensive" for an awfully long time.

Sure, the 9000 was significantly more expensive than a PC, but it came with significantly more stuff. Expensive stuff. A complete, (relatively) solid real-time OS in ROM, for instance: 128K of ROM vs. the 8088 PC's 20K of ROM holding only BIOS and BASIC. The base configuration included a disk, integrated into the OS, vs. no disk and no OS beyond BASIC in the original IBM PC, and 128K of RAM vs. the IBM PC's base 16K (not just double, but 8 times the amount).

The base configuration of the 9000 came with floppy disk storage, touch-panel display, keyboard designed for laboratory use, ports to interface it to scientific instruments, and something called memory management. All of that was very expensive stuff at the time. (I don't remember if the plotter was standard in the base configuration.) 

And it was expensive to put any of that stuff on an 8088 PC.

Speaking of memory management, some people thought the segment registers in the 8086 were for "memory management", but that's just plain wrong. They were not designed for the same class of function. No mechanism to control class of access, no bounds registers to set segment size, no help when trying to page things in and out of memory. More of a cheap substitute for bank switching. Again, it was not a bad thing, just not what some people thought it was.

FWIW, the 68000 didn't need bank switching at the time because of the large address bus. And it came with 8 address registers, 7 of which could easily be used as true segment registers without the clumsy 16-byte granularity of the 8086 segment registers. Segment size still had to be enforced in software on the 68000, rather than hardware.
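To make that concrete, here is a minimal C sketch of the two address calculations -- just the arithmetic, not anybody's actual firmware. The function names and the software bounds check are my own illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* 8086 real mode: the segment register just supplies a 16-byte-granular
 * base within 1 MB.  No access classes, no bounds registers -- any code
 * can load any value into a segment register at any time. */
static uint32_t phys_8086(uint16_t segment, uint16_t offset)
{
    return (((uint32_t)segment << 4) + offset) & 0xFFFFFu; /* 20 address lines */
}

/* 68000 style: an address register holds a byte-granular base anywhere in
 * the (externally 24-bit) address space; the program adds offsets to it.
 * Any "segment size" still has to be enforced in software, as noted above. */
static uint32_t ea_68000(uint32_t base, uint32_t offset, uint32_t soft_limit)
{
    if (offset >= soft_limit) {              /* the hardware won't stop you */
        fprintf(stderr, "offset out of (software-enforced) bounds\n");
        return base;                         /* arbitrary choice for the sketch */
    }
    return (base + offset) & 0xFFFFFFu;
}

int main(void)
{
    printf("8086  0x1234:0x0010      -> 0x%05X\n",
           (unsigned)phys_8086(0x1234, 0x0010));
    printf("68000 base 0x020000+0x10 -> 0x%06X\n",
           (unsigned)ea_68000(0x020000u, 0x10u, 0x8000u));
    return 0;
}
```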

As the guy at the link I posted above said, strip the 9000 down to the kind of machine the original PC was and it would have been very competitive in 1980.

How much more would it have cost than the original IBM PC?

8 extra data lines. 4 extra address lines. Twelve extra traces on the mainboard, twelve more lines in the expansion bus, two extra buffers. And they could have fudged on the extra address lines -- left them off the first model and kept it limited to a single megabyte of address space. A megabyte of address space was huge at the time.

Other than the bus connectors, less than ten dollars, including markup.

Bus connectors were often mentioned as a blocking point at the time. IBM had sources for the bus connector they used in the System/23 Datamaster, and the connector was not overly expensive. But, with just 52 lines, it was only big enough for 8 data bits, 20 address bits, and the control signals and power lines they wanted. They would have had to either find a wider connector or use a second connector, like the one later added for the AT bus, from the outset. Forty dollars (after markup) for just the bus connectors seemed large to them, I guess, even though I think they should have been able to see that the cost would come down at the volumes they would be purchasing, even with their woefully underestimated demand.

RAM chips? The original board laid them out in four banks of eight. Arranging that as two banks of sixteen instead would not have killed any budgets. The one minor disadvantage was that you physically had to start with twice the base configuration RAM, 16 chips instead of 8. (The maximum on the mainboard would not have changed.) But that does not seriously harm the end price, either. Calculated as a portion of the base sticker price of the 8088 IBM PC, it would have added something like fifty dollars.

BIOS ROM? The original had four 2K ROMs, didn't it? Arrange those in 16-bit wide pairs and you're done. Same count of ROMs, one extra buffer. Three bucks added for liberal markup on the buffer and PC board traces. (I know there were engineers who felt that it was somehow sacrilege to use a pair of 8-bit ROMs instead of a single 16-bit wide ROM, but, no, it wasn't. And, like I say, it did not really add significantly to the cost to do it in pairs, if you're going to have four ROMs anyway.)

The ROM for BASIC? Yeah, that would have had to be done as a pair of ROMs, so add -- I think it was -- twenty dollars at manufacturing prices plus markup for two 8K-by-8 ROMs, instead of 15 dollars for a single 16K-by-8 ROM.

Would the 68000's much-decried lower code density have meant more ROM?

No. Code density on the 68000 was not worse than on the 8086, unless you deliberately wrote the 68000 code like you were transliterating 8086 code. 

The 68000 does not have as good code density as the 8-bit 6809, but the 6809 is exceptionally code dense when programmed by someone who knows what he is doing. Different question.

If you were using compilers of the time to do the coding, compilers for the 68000 were often really weak at using the 68000's instruction set and addressing modes. The compiler engineers seemed to be writing their code generators as if they were targeting some other, much more limited CPU.

If an engineer did the BIOS and BASIC in assembler and didn't bother to learn the CPU, sure, he would get similarly bad results. But the 68000 had a regular instruction set. It shouldn't take more than, say, eight hours of playing around with a prototyping kit (such as Motorola's inexpensive ECB) to understand.

Ah. Microsoft. Yeah. Their BASIC for the 6809 was not a model of either space or cycle efficiency. Apparently they did just map 8080 registers to 6809 registers and key 8080 instructions to something close in the 6809 instruction/addressing mode repertoire, and did the conversion automatically with no cleanup afterward. So, if they had gone that route, using Microsoft's BASIC, they'd have had to use a second 16K ROM. Add 15 dollars.

Do I fault Microsoft for their BASIC for 6809? Yeah, I guess I do, a little. It's a dog. Sorry. Some people think it's representative of 6809 code. No.

Peripheral chips? I've heard people talk about lack of 16-bit peripheral chips for the 68000. Why do they talk about that? I don't know.

The 68000 specifically included instructions to support the use of 8-bit port parts without any additional buffers or data bus translation. Hanging a 6821 on a 68000 was literally no more difficult than hanging one on a 6800 or 6809. Likewise the 6845 video controller that got used in the original PC. And non-Motorola parts would be no more difficult. Completely beside the point.
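For what it's worth, here is roughly what that looks like from the software side: a sketch, with a made-up memory map, of a 6821 PIA wired to the low data byte of a 68000 bus so that its four registers land on consecutive odd addresses. The register layout and the data-direction setup follow the 6821 datasheet; the base address and the init sequence are just illustrative.

```c
#include <stdint.h>

/* Hypothetical memory map: a 6821 PIA wired to the low data byte of a 68000
 * bus, so its four 8-bit registers land on consecutive odd addresses (byte
 * accesses to odd addresses simply assert the lower data strobe).  The base
 * address is invented for this sketch. */
#define PIA_BASE 0xF00001u

#define PIA_PRA  (*(volatile uint8_t *)(PIA_BASE + 0)) /* port A data / DDRA */
#define PIA_CRA  (*(volatile uint8_t *)(PIA_BASE + 2)) /* port A control     */
#define PIA_PRB  (*(volatile uint8_t *)(PIA_BASE + 4)) /* port B data / DDRB */
#define PIA_CRB  (*(volatile uint8_t *)(PIA_BASE + 6)) /* port B control     */

void pia_init(void)
{
    PIA_CRA = 0x00;   /* CRA bit 2 = 0: address the data direction register  */
    PIA_PRA = 0xFF;   /* all eight port A lines as outputs                   */
    PIA_CRA = 0x04;   /* CRA bit 2 = 1: back to the peripheral data register */
}
```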

Price of the processor? Yes. But not the four-times price that is often tossed around -- that's a small-lot price. IBM's projected manufacturing volume, underestimated as it was, still would have allowed much better pricing on the CPU. So the 68000 would have added about a hundred dollars to the cost of the first run.

Availability? Motorola was always conservative on availability estimates. It was not an actual problem, although I suppose some of IBM's purchasing department might not have known that.

Scaling the operating system down from the 9000's? That could have been a problem.

But IBM ultimately infamously went to a third party for the OS of the 5150 PC, anyway. 

Operating systems aren't that hard to port to a CPU as capable as the 68000. I'm sure Microware would have been game for porting OS-9 to the 68000 a couple of years earlier than actually happened. OS-9 was a good OS that was cheap enough for Radio Shack to sell for the 6809-based Color Computer, and it took Microware less than a year to go from the 6800 to the 6809 with both OS-9/6809 and Basic09 in 1979/1980. On to the 68000 in 1980/1981 instead of 1981/1982? Not a problem.

Some people worry about the position-independent coding used in OS-9, but the 68000, like the 6809, directly supports PC-relative addressing (IP-relative, in Intel-speak), so you don't need a linking loader. No need for relocation tables. A module can load at any address and be linked and used by multiple processes, themselves loaded at arbitrary locations, without patching long lists of addresses.

You point an index/address register at the base of the module; the caller knows the offsets, and the callee doesn't care where it is loaded. Everything is relative. That's why OS-9 can be real-time and multitasking with no MMU.
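As a rough analogy in C (this is not the real OS-9 module format, just a toy with invented fields): if every internal reference is stored as an offset from the module's own start, the blob can be copied to any address and used without a relocation pass.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy illustration (not the real OS-9 module format): inside the "module",
 * everything is recorded as an offset from the module's own start, never as
 * an absolute address, so the blob can be loaded anywhere with no relocation
 * table and no patching. */

struct toy_header {
    uint32_t name_off;      /* offset of the module name, from module start */
    uint32_t greeting_off;  /* offset of a string standing in for an entry  */
};

static void use_module(const uint8_t *base)
{
    struct toy_header h;
    memcpy(&h, base, sizeof h);             /* the caller only needs the base */
    printf("module %s says: %s\n",
           (const char *)(base + h.name_off),
           (const char *)(base + h.greeting_off));
}

int main(void)
{
    /* Build the blob once, with offsets instead of pointers... */
    uint8_t blob[64] = { 0 };
    struct toy_header h = { sizeof h, sizeof h + 8 };
    memcpy(blob, &h, sizeof h);
    memcpy(blob + h.name_off, "toymod", 7);
    memcpy(blob + h.greeting_off, "hello", 6);

    /* ...then "load" it at two different, arbitrary addresses. */
    uint8_t *copy_a = malloc(sizeof blob);
    uint8_t *copy_b = malloc(sizeof blob);
    memcpy(copy_a, blob, sizeof blob);
    memcpy(copy_b, blob, sizeof blob);

    use_module(copy_a);   /* same code, same offsets...                  */
    use_module(copy_b);   /* ...different load address, no fixups needed */

    free(copy_a);
    free(copy_b);
    return 0;
}
```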

Another possible third-party source for an OS and a BASIC was Technical Systems Consultants, the producers of FLEX (similar to CP/M) and UniFLEX (like a stripped-down Unix, but in a different way from OS-9 -- not real-time, not position-independent). They knew their way around assembly language on Motorola's CPUs, too.

Several possible third-party sources.

Total price increase? $200, maybe $250.

Pushing the price up to $1750 from $1500 was not prohibitive, not even a problem, if they had simply done the necessary market research.
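Just to make the arithmetic explicit, here is a little C tally of the increments guessed at above. Every figure is this post's own back-of-the-envelope estimate (markup included), not a sourced 1981 price, and the grouping of line items is mine.

```c
#include <stdio.h>
#include <stddef.h>

/* Rough tally of the cost increments estimated above for a hypothetical
 * 68000-based 5150.  Every number is this post's back-of-the-envelope
 * guess (markup included), not a sourced 1981 price. */
int main(void)
{
    static const struct { const char *item; double usd; } deltas[] = {
        { "wider data path: traces plus two buffers",        10.0 },
        { "wider (or doubled) expansion bus connectors",      40.0 },
        { "32K base RAM (16 chips) instead of 16K (8)",       50.0 },
        { "extra buffer for paired BIOS ROMs",                 3.0 },
        { "BASIC in two 8Kx8 ROMs vs. one 16Kx8",              5.0 },
        { "68000 at projected-volume pricing vs. the 8088",  100.0 },
    };
    double total = 0.0;
    for (size_t i = 0; i < sizeof deltas / sizeof deltas[0]; i++) {
        printf("%-48s $%6.2f\n", deltas[i].item, deltas[i].usd);
        total += deltas[i].usd;
    }
    printf("%-48s $%6.2f   (cf. the $200-$250 guess above)\n", "total", total);
    return 0;
}
```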

This sounds theoretical? 

I did some tentative design work about that time, similar to Motorola's ECB prototyping board for the 68000. Comparing it to the IBM PC specs, using factory-volume prices on the CPU, they could have built a 68000-based equivalent with a base configuration of 32K of RAM instead of 16K (to make the 16-bit data bus easy) and still sold it at about 200, maybe 250 dollars more than the original IBM PC, without losing money. The market would have accepted that.

I mean look at what people were paying for OS-9/6809 systems in 1981. Well, okay, those systems came with at least one disk drive in base configuration for something more than $2000 total. But the acceptance would have been there.

I remember from conversations with people who worked at IBM that there was concern about the PC competing with the System/34. That was a sort-of valid concern, really -- and the only thing that might have required them to set the price above $2000. But, no, the System/34 was a lot more than just a CPU, just like the 9000 was. It was misplaced concern.

I've mentioned the 6809 above, and I'll mention it again here. Motorola shot themselves in the foot by failing to upgrade the 6809 to be a true 16-bit CPU, either with expanded index registers, or with full-width segment registers, to break the 64K address barrier without external bank switching.

They also shot themselves in the foot by over-designing the 68020. Way overdone, that one. If they had gone directly in 1984 to what they eventually called the CPU32 instead, the early market for RISC CPUs would have taken a temporary but serious ding.

Was the 8088 PC a mistake, then?

Not exactly. 

I rather think that going only with the 68000 would have been a mistake of a different sort, and I'm pretty sure I would have complaints about that, as well.

I think I've hinted at some of the reasons above, but I can sort-of summarize by noting that the reason monopolies are bad in technological fields is that all the technology we have is unfinished, broken outside the context in which it was designed. We can't escape that. Solutions are always context-specific. And technology that is too powerful from the start can really get in the way.

So I say all of this, and yet I say that going only with the 8088-based 5150 we know and love as the IBM PC was a mistake.

I personally think that, if they had been really smart, they would have released both the 8088 PC and a 68000 PC at the same time, running relatively compatible OSes and software. Deliberately sowing the field with variety keeps options open for everyone.

Variety helps the market more than it hinders it. But that is a separate topic for a separate rant another day. (Or an alternate-reality novel. This is part of a theme I'm trying to explore in a novel that I never seem to have time to work on any more.)
