April 8, 2005 (Lecture 33)

Naming, Name Spaces, and the Plan 9 Distributed Operating System

In a distributed environment, naming is a critical issue. There might be oil (a.k.a. black gold) under my feet -- but if I don't know that it is there, it doesn't matter to me.

In a distributed system, if I have no way of naming a resource, I can't use it. So, it is critical that all resources can be (uniquely) named, that these names can be found, and that they can be assigned and managed in a reasonable way.

One of the first projects to realize this, and work toward a solution, was Plan 9, a distributed operating system project at AT&T, and one that involved many of the original developers of UNIX.

The Goal of Plan 9

Okay guys, this is going to sound bizarre. And, to be honest, it does to me, too. One of the original goals of Plan 9 was to emulate a time-sharing "mainframe" using a network of workstations. Yep, you've got that right -- go from what we now have -- to what we abandoned many years ago. One slogan of the project was to "build a UNIX out of a lot of little systems, not a system out of a lot of little UNIXes."

But, to understand this better, we need to put things into a historical perspective. In the early to mid 1980s, users wanted their own machines -- but those machines were very, very expensive to administer and maintain. And, to be honest about it, companies hadn't yet accepted that cost the way they, to a significant extent, now have. Quite simply, IT organizations existed within machine rooms -- they didn't reach to the desktop.

We also need to understand that at this point in time, little systems cost more per unit of processing power than big ones. Huge mainframes, and emerging shared-memory multiprocessors were very cost-effective. Workstations lost to the economies of scale -- both in hardware and software.

So, the Plan 9 approach was to, more-or-less, use the workstations as "thin clients". You guys may never have seen a thin client -- they haven't been popular since the 80s, maybe the very early 90s in the special case of X terminals. They were basically stripped-down personal computers with network connectivity. They were basically super-smart terminals -- but did no processing except that related to events on the client system, for example, display rendering. They were a keyboard, a display, and some presentation smarts -- but no computational or storage resources.

This approach let the processing be done by bigger, cost-effective machines. At this time, those were things like mainframes, and shared-memory multiprocessors. It also put a cap on the amount of support required at the desktop. The personal machines looked personal -- but really had nothing requiring support. And, since they were doing so little, they needed less frequent upgrades -- defraying some of the added cost.

Okay, so, in the end, personal computers became dirt cheap and IT people became ubiquitous. Plan 9's model seems ridiculous. And, it probably was -- even then. By this time, Ken Olsen, head of Digital Equipment Corp (DEC), had already been shamed for declaring that there was no reason for the average person to have a computer at home, and Bill Gates was starting to receive heat for the notion that 640K was more than enough memory for any and every purpose.

Another odd thing about Plan 9 was the backup system -- everything was written to a Write Once, Read Many (WORM) drive each morning. They believed this drive to be an infinite resource -- the manufacturer would offer bigger ones faster than they could use them. Ouch.

But this said, we did learn from Plan 9 -- and we'll take a look at some of the forward thinking that it exhibited.

Namespaces

Traditional UNIX systems hide many things behind the file system interface. For example, most devices are represented as files and can be manipulated using the file system interface. There are also other types of special files, such as pipes, one tool for interprocess communication.
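To make the pipe example concrete, here is a minimal Python sketch of the idea: once the pipe exists, the two processes (here, one process playing both roles) talk to it through the very same read() and write() calls used for ordinary files.

```python
import os

# A pipe is just a pair of file descriptors: whatever is written to
# the write end can be read from the read end, using the same
# read()/write() calls used for ordinary files.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello through a pipe")
os.close(write_fd)  # closing the write end signals end-of-data

data = os.read(read_fd, 1024)
os.close(read_fd)
print(data.decode())  # → hello through a pipe
```

Nothing about the reading side knows it is talking to a pipe rather than a file -- that is exactly the uniformity Plan 9 pushed much further.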

Plan 9 took this three steps further -- it hid much more behind the file system, it abstracted away network locations behind the file system, and it allowed a private view of the world through the file system.

Almost everything was viewed by software as a file. For example, the screen was updated by writing raster updates to /dev/bitblt, text was written by writing to /dev/cons, and the mouse was read by reading from /dev/mouse. This actually isn't that far of a reach from UNIX, where we (now) have things like /dev/mouse and have always enjoyed /dev/tty. But, it is true that UNIX special-cases such files with kernel-level code, whereas this functionality was part of Plan 9's view of the world -- everything is within the file server.
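Of course, /dev/bitblt, /dev/cons, and /dev/mouse only exist on Plan 9. But the everything-is-a-file discipline survives on any UNIX-like system, and we can sketch it there. In the sketch below, /dev/urandom and /dev/null stand in for the Plan 9 devices -- the point is simply that "talking to a device" is ordinary open/read/write, with no device-specific system calls in sight.

```python
import os

# Reading a device is just an ordinary read() on an ordinary
# file descriptor. (/dev/urandom stands in for Plan 9's /dev/mouse.)
fd = os.open("/dev/urandom", os.O_RDONLY)
noise = os.read(fd, 8)
os.close(fd)

# Writing a device is just an ordinary write().
# (/dev/null stands in for Plan 9's /dev/cons.)
fd = os.open("/dev/null", os.O_WRONLY)
os.write(fd, b"discarded")
os.close(fd)

print(len(noise))  # → 8
```

The difference the notes point out is where this uniformity lives: in UNIX it is implemented by device-specific kernel code behind the file interface, while in Plan 9 every such name was served by a file server speaking one protocol.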

But one very neat thing is that this view of the world through the file system was unique to each session. If a user had multiple sessions open, each had its own instance of the user's view of the world -- its own /dev/mouse, for example. And, this worked so well that the window manager could be run in a window beneath another instance of the window manager, &c. So, not only could the file system be customized per user -- it could be customized per session.

And, when network resources came into play, such as files within the distributed file system, this, too, was abstracted away. We now, with AFS and NFS, take this for granted. And Plan 9 wasn't the first -- but it was an early adopter. And, unlike AFS and NFS, where the file system's transparency was added to the OS after the fact, it was part of Plan 9's regular file system.

Ever used /proc? The "virtual file system" that contains kernel tunables, system state information, and per-process information -- all protected with appropriate file system permissions? If not, take a look within /proc on a UNIX or Linux system -- all the details at your fingertips. The idea actually predates Plan 9 -- a simple /proc first appeared in Eighth Edition Research UNIX -- but Plan 9 took it much further, and its version influenced the /proc found in modern UNIXes and Linux.
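On a Linux box, for instance, a process can read its own state out of /proc with nothing fancier than open() and read(). (This sketch is Linux-specific -- the layout of /proc differs, or /proc is absent, on other systems.)

```python
# Per-process state exposed as an ordinary readable file: the special
# name "self" refers to whichever process does the reading.
with open("/proc/self/status") as f:
    status = f.read()

# Pick out a couple of the human-readable fields.
for line in status.splitlines():
    if line.startswith(("Name:", "Pid:")):
        print(line)
```

No system call beyond the file interface is needed -- which is exactly the design point the lecture is crediting to the /proc lineage.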

So Much More...

Okay -- let's drop the Plan 9 terminology here. The top half of what Plan 9 called a file system wasn't a file system. It was a distributed directory service. And this is where Plan 9 gets tons of credit.

They realized, in a very early distributed system, that it is imperative to be able to locate and manage resources without worrying about their location, and that, despite being part of a distributed system, resources must be organized, or organizable, within the context of the user. How different is a Plan 9 file name representing a non-file object from the name of a service, within a distributed environment, that can be located and accessed with the help of a directory service?

This is why Plan 9 is important. Forget that time-sharing emulation stuff and the funny specialized resources for not-so-specialized problems strategy.

Wanna' Know More?