Tuesday, February 23, 2010 (Lecture 11)

Introduction: What Is a Distributed File System (DFS)?

Today we'll discuss the construction of the distributed version of another one of our old friends from operating systems: The file system. So, what is a DFS?

Let's start to answer that question by reminding ourselves about regular, vanilla flavored, traditional file systems. What are they? What do they do? Let's begin by asking ourselves a more fundamental question: What is a file?

A file is a collection of data organized by the user. The data within a file isn't necessarily meaningful to the operating system. Instead, a file is created by the user and meaningful to the user. It is the job of the operating system to maintain this unit, without understanding or caring why.

A file system is the component of an operating system that is responsible for managing files. File systems typically implement persistent storage, although volatile file systems are also possible (/proc is such an example). So what do we mean when we say manage files? Naming them, organizing them into directories, mapping them onto blocks of storage, deciding who may access them, and supporting the everyday operations -- create, open, read, write, close, and delete.

So what is it that a DFS does? Well, basically the same thing. The big difference isn't what it does but the environment in which it lives. A traditional file system typically has all of the users and all of the storage resident on the same machine. A distributed file system typically operates in an environment where the data may be spread out across many, many hosts on a network -- and the users of the system may be equally distributed.

For the purposes of our conversation, we'll assume that each node of our distributed system has a rudimentary local file system. Our goal will be to coordinate these file systems and hide their existence from the user. The user should, as often as possible, believe that they are using a local file system. This means that the naming scheme needs to hide the machine names, &c. This also means that the system should mitigate the latency imposed by network communication. And the system should reduce the increased risk of accidental and malicious damage to user data associated with vulnerabilities in network communication. We could actually continue this list. To do so, pick a property of a local file system and append "transparency".

Why Would We Want a DFS?

There are many reasons that we might want a DFS. Some of the more common ones: users on different machines can share data, a user can reach the same files from any machine, storage can be pooled across many hosts rather than being limited to one machine's disks, administration and backup can be centralized, and data can be replicated so that it survives the failure of any single host.

How Do We Build a DFS?

Let's take a look at how to design a DFS. What questions do we need to ask? Well, let's walk through some of the bigger design decisions.

Grafting a Name Space Into the Tree

For the most part, it isn't too hard to tie a distributed file system into the name space of the local file system. The process of associating a remote directory tree with a particular point in the local directory tree is known as mounting; the local directory that represents the root of the remote directory is typically known as the mount point. Directories are often mounted as the result of an action on the part of an administrator. In the context of UNIX systems, this can be via the "mount" command. The mounts may be done "by hand" or as part of the system initialization. Many UNIX systems automatically associate remote directories with local ones based on "/etc/vfstab". Other mechanisms may also be available. Solaris, for example, can bind remote directories to local mount points "as needed" and "on demand" through the "automounter".
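For concreteness, here's roughly what this looks like in practice. The server name and exported path below are made up, and the exact command flags and vfstab fields vary from system to system:

  # Mount a remote directory by hand (Linux syntax; Solaris uses "mount -F nfs"):
  mount -t nfs fileserver:/export/home /home

  # A matching /etc/vfstab entry (Solaris-style), mounted automatically at boot:
  # device to mount          fsck dev   mount point   FS type   fsck pass   at boot   options
  fileserver:/export/home    -          /home         nfs       -           yes       rw,bg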

Since mounting hides the location of files from the user, it allows the files to be spread out across multiple servers and/or replicated, without the user's knowledge. AFS implements read-only replication. Coda supports read-write replication.

Implementing The Operations

Once the name spaces are glued together, the operations on the distributed file system need to be defined. In other words, once the system knows that the user wants to access a directory entry that lives somewhere else, how does it know what to do? In the context of UNIX and UNIX-like operating systems, this is typically done via the virtual file system (VFS) interface. Recall from operating systems that VFS and vnodes provide an easy way to keep the same interface and the same operations, but to implement them differently. Many other operating systems provide similar object-oriented interfaces.

In a traditional operating system, the operations on a vnode access a local device and local cache. In a similar fashion, we can write operations to implement the VFS and vnode interfaces that will go across the network to get our data. On systems that don't have as nice an interface, we have to do more heavy kernel hacking -- but implementation is still possible. We just implement the DFS using whatever tools are provided to implement a local file system, but ultimately look to the network for the data.
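Just to make the idea concrete, here is a user-level sketch (in Python, not kernel code) of a vnode-like object whose read operation goes across the network instead of to a local disk. The server stub and its fetch_block() RPC are assumptions for illustration, not a real interface:

  # Sketch: a "vnode"-like object whose read() goes to the network instead of
  # a local disk.  The server stub and fetch_block() are hypothetical.
  class RemoteVnode:
      def __init__(self, server, file_id, block_size=8192):
          self.server = server          # stub object that talks to the file server
          self.file_id = file_id        # the server's name for this file
          self.block_size = block_size

      def read(self, offset, length):
          """Same interface a local vnode's read would offer; different implementation."""
          data = b""
          while length > 0:
              block_no = offset // self.block_size
              block = self.server.fetch_block(self.file_id, block_no)   # network RPC
              start = offset % self.block_size
              chunk = block[start:start + length]
              if not chunk:
                  break                 # ran off the end of the file
              data += chunk
              offset += len(chunk)
              length -= len(chunk)
          return data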

Unit of Transit

So, now that we can move data across the network, how much do we move? Well, there are two intuitive answers to this question: whole files and blocks. Since a file is the unit of storage known to the user, it might make sense to move a whole file each time the user asks to access one. This makes sense, because we don't know how the file is organized, so we don't know what the user might want to do -- except that the user has indicated that the data within the file is related. Another option is to use a unit that is convenient to the file system. Perhaps we agree to a particular block size, and move only one block at a time, on demand.

Whole-file systems are convenient in the sense that once the file arrives, the user will not encounter any delays. Block-based systems are convenient because the user can "get started" after the first block arrives and does not need to wait for the whole (and perhaps very large) file to arrive. As we'll talk about soon, it is easier to maintain file-level coherency if a whole-file-based system is used and block-level coherency if a block-based system is used. But, in practice, the user is not likely to notice the difference between block-level and file-level systems -- the overwhelming majority of files are smaller than a single block.
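As a rough sketch of the difference, assuming the same kind of hypothetical server stub as above (fetch_file() and fetch_block() are made-up calls, not a real protocol):

  # Sketch of the two units of transfer.
  def open_whole_file(server, name, cache):
      # Whole-file: pay the full transfer cost up front; all later reads are local.
      cache[name] = server.fetch_file(name)

  def read_block_on_demand(server, name, block_no, cache):
      # Block-based: fetch only the block the user touches, when it is touched.
      key = (name, block_no)
      if key not in cache:
          cache[key] = server.fetch_block(name, block_no)
      return cache[key]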

Reads, Writes and Coherency

For the moment, let's assume that there is no caching and that a user can atomically read a file, modify it, and write it back. If there is only one user, it doesn't matter what he or she might do, or in what order he or she does it -- the file will always be as expected when all is said and done.

But if multiple users are playing with the file, this is not the case. In a file-based system, if two users write to two different parts of the file, the final result will be the file from the perspective of one user, or the other, but not both. This is because one user's write is destroyed when the other user's copy of the whole file is subsequently written back. In a block-based system, if the writes lie in different blocks, the final file may represent the changes of both users -- this is because only the block containing each write is written back -- not the whole file.

This might be good, because neither user's change is lost. This might also be bad, because the final file isn't what either user expects. It might be better to have one broken application than two broken applications. This is completely a semantic decision.
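Here is a tiny, self-contained illustration of that difference, using a made-up eight-byte file split into two four-byte "blocks":

  original = bytearray(b"AAAABBBB")          # two 4-byte "blocks"

  # Each client edits its own copy of the file.
  copy1 = bytearray(original); copy1[0:4] = b"XXXX"   # client 1 edits block 0
  copy2 = bytearray(original); copy2[4:8] = b"YYYY"   # client 2 edits block 1

  # Whole-file write-back: whoever closes last wins.
  server_file = bytes(copy1)
  server_file = bytes(copy2)                 # client 1's change is gone
  print(server_file)                         # b'AAAAYYYY'

  # Block-level write-back: only the dirty blocks are written.
  server_blocks = [original[0:4], original[4:8]]
  server_blocks[0] = copy1[0:4]              # client 1 writes back block 0 only
  server_blocks[1] = copy2[4:8]              # client 2 writes back block 1 only
  print(b"".join(server_blocks))             # b'XXXXYYYY' -- both changes survive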

In a traditional UNIX file system, the unit of sharing is typically a byte. Unless flock(), lockf(), or some other voluntary locking is used, files are not protected from inconsistency caused by multiple users and conflicting writes.
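As an example of such voluntary locking, Python exposes flock() through the fcntl module. The file name below is made up, and note that a process which simply skips this step can still write the file -- the lock only coordinates cooperating processes:

  import fcntl

  # Advisory (voluntary) locking around an update.
  with open("shared.dat", "r+b") as f:
      fcntl.flock(f, fcntl.LOCK_EX)      # block until we hold an exclusive lock
      try:
          f.seek(0)
          f.write(b"new contents")
      finally:
          fcntl.flock(f, fcntl.LOCK_UN)  # release the lock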

The Andrew File System (AFS) versions 1 and 2 implemented whole-file semantics, as does Coda. NFS and AFS version 3 implement block-level semantics. None of these popular systems implements byte-level semantics the way a local file system does. This is because the network overhead of sending a message is too high to justify such a small payload.

AFS changed from file-level to block-level access to reduce the lead time associated with opening and closing a file, and also to improve cache efficiency (more soon). It is unclear if actual workloads, or simply perception, prompted the change. My own belief is that it was simply perception (good NFS salespeople?). This is because most files are smaller than a block -- databases should not be distributed via a DFS!

Caching and Server State

There is typically a fairly high latency associated with the network. We really don't want to ask the file server for help each time the user accesses a file -- we need caching. If only one user would access a unit (file or block) at a time, this wouldn't be hard. Upon an open, we could simply move it to the client, let the client play with it, and then move it back (if written) or simply invalidate the cached copy (if read).

But life gets much harder in a multiuser system. What happens if two different users want to read the same file? Well, I suppose we could just give both of them copies of the file, or block, as appropriate. But then, what happens if one of them, or another user somewhere else, writes to the file? How do the clients holding cached copies know?

Well, we can apply one of several solutions to this problem. The first solution is to apply the Ostrich principle -- we could just hope it doesn't happen and live with the inconsistency if it does. This isn't a bad technique, because, in practice, files are rarely written by multiple users -- never mind concurrently.

Or, perhaps we can periodically validate our cache by checking with the server: "I have checksum [blah]. Is this still valid?" or "I have timestamp [blah]. Is this still valid?" In this case we can eventually detect a change and invalidate the cache, but there is a window of vulnerability. But, the good thing is that the user can access the data without bringing it back across the network each time. (And, even that approach can still allow for inconsistency, since validation takes non-zero time.) The cache-and-validate approach is the technique used by NFS.
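A minimal sketch of the cache-and-validate idea, assuming a hypothetical server stub with fetch() and get_mtime() calls and a made-up revalidation interval:

  import time

  VALIDATE_INTERVAL = 3.0   # made-up number of seconds between checks with the server

  class CachedFile:
      def __init__(self, server, name):
          self.server = server
          self.name = name
          self.data = server.fetch(name)
          self.mtime = server.get_mtime(name)
          self.checked = time.time()

      def read(self):
          # Only go back to the server if the entry is old enough to recheck.
          if time.time() - self.checked > VALIDATE_INTERVAL:
              current = self.server.get_mtime(self.name)
              if current != self.mtime:                 # someone changed the file
                  self.data = self.server.fetch(self.name)
                  self.mtime = current
              self.checked = time.time()
          return self.data              # a window of inconsistency remains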

An alternative to this approach is used by AFS and Coda. These systems employ smarter servers that keep track of the users. They do this by issuing a callback promise to each client as it collects a file/block. The server promises the client that it will be informed immediately if the file changes. This notice allows the client to invalidate its cache entry.

The callback-based approach is optimistic and saves the network overhead of the periodic validations, while ensuring a stricter consistency model. But it does complicate the server and reduce its robustness. This is because the server must keep track of the callbacks, and problems can arise if the server fails, forgets about these promises, and restarts. Typically callbacks have time limits associated with them, to reduce the amount of damage in the event that a client crashes and doesn't close() the file, or the server fails and cannot issue a callback.
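A sketch of the server's side of this bookkeeping, with a hypothetical notify() RPC back to the clients and a made-up time limit; real AFS and Coda servers are of course considerably more involved:

  import time

  CALLBACK_LIFETIME = 600.0   # promises expire so a dead client can't pin state forever

  class CallbackServer:
      def __init__(self):
          self.promises = {}   # file name -> {client: expiry time}

      def fetch(self, client, name, files):
          # Hand out the file along with a callback promise.
          self.promises.setdefault(name, {})[client] = time.time() + CALLBACK_LIFETIME
          return files[name]

      def store(self, writer, name, data, files):
          files[name] = data
          # Break the promise: tell every other caching client to invalidate.
          for client, expires in self.promises.pop(name, {}).items():
              if client is not writer and time.time() < expires:
                  client.notify(name)          # "your cached copy is stale"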

Coda and Disconnected Operation

Let's talk a little more about how AFS-2 (whole file) is organized. Typically files are collected into volumes, which are nothing more than collections of files grouped together for administrative purposes. A server can hold one or more volumes. Read-only volumes may be replicated, so that a back-up copy is used if the primary copy becomes unreachable. Since writable volumes cannot be replicated, write conflicts aren't a problem. Additionally, a client might have a cached copy of certain files.

Now, if a volume server should become unreachable, a client can continue to work out of its cache -- at least until it misses or tries to write a file back to the server. At this point, things break down, either because the client cannot get the files it needs, or because the client cannot inform the server of the changes, so the client cannot close the file -- or the server might violate a callback promise to other clients.

Now, let's hypothesize, for just a moment, that we make sure that clients have all of the files that they need, and that we also allow them to close a file even if they cannot contact the server to inform it of the change. How could we make this work?

Well, Coda has something called the hoard daemon whose job is to keep certain files in the client's cache, requesting them as necessary, just in case the client should later find itself unable to communicate with the server. The user can explicitly specify which files should be hoarded, or build a list by watching his/her activity using a tool called spy. This is the easy part (well, in some sense).

But what do we do about the delayed writes? When we can eventually tell the server about them, what do we do? Someone else may have found themselves in the same position. In order to solve this problem, Coda keeps a version number with each file and another associated with each volume. This version number simply tracks the number of times a file has been modified. Before a client writes a file to the server, it checks the version of the file on the server. If that version number matches the version number of the file that the client read before the write, the client is safe and can send the new version of the file. The server can then increment the version number.

If the version number has increased, the client missed a callback promise. It cannot simply send its version -- it is based on an older copy than what is already there. This is called a conflict. Coda allows the user to send the file, but labels it as a conflict. These conflicts typically must be resolved by the user before the file can be used. The user is basically asked, "Which one is the 'real' one?" The other one can, of course, be saved under a different name. There are some tools that can automatically merge multiple versions of certain files to resolve conflicts, but this is only possible for certain, well-structured files -- and each one must be hand coded on an application-by-application basis.
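A hedged sketch of that check at reintegration time, using a single integer version number per file as described above; the server object and its get_version()/store()/flag_conflict() calls are made up for illustration:

  def reintegrate(server, name, new_data, version_seen_at_read):
      current = server.get_version(name)
      if current == version_seen_at_read:
          # No one else wrote while we were disconnected: safe to send.
          server.store(name, new_data)              # server bumps the version number
          return "stored"
      else:
          # We missed a callback; our update is based on a stale copy.
          server.flag_conflict(name, new_data)      # user must pick the "real" one
          return "conflict"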

Conflicts aren't typically a big problem, because shared files are typically not written by multiple users. As a result, the very small chance of a conflict isn't a bad price to pay for working through a failure.

Coda and Replication

Coda actually implements replication for writable volumes. The collection of servers that maintain copies of a particular volume is known as a Volume Storage Group (VSG). Let's talk about how this is implemented. Coda actually doesn't maintain a single version number for each file and volume. It maintains a vector of version numbers, known as the Coda Version Vector (CVV) for each file and the Volume Version Vector (VVV) for each volume.

This version vector contains one entry for each replica. Each entry is the version number of the file on that replica. In the perfect case, the entry for each replica will be identical. The client actually requests the file in a multi-step process:

  1. It asks all replicas for their version number
  2. It then asks the replica with the greatest version number for the file
  3. If the servers don't agree about the file's version, the client can direct the servers to update a replica that is behind, or inform them of a conflict. CVVs are compared just like vector timestamps (see the sketch below). A conflict exists if two CVVs are concurrent, because concurrent vectors indicate that each server involved has seen some changes to the file, but not all changes.
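Here is a minimal sketch of that comparison; the vectors are just lists of integers:

  # Compare two Coda Version Vectors the way vector timestamps are compared.
  def compare_cvv(a, b):
      """Return 'equal', 'a_dominates', 'b_dominates', or 'concurrent' (conflict)."""
      a_ge = all(x >= y for x, y in zip(a, b))   # a saw everything b saw
      b_ge = all(y >= x for x, y in zip(a, b))   # b saw everything a saw
      if a_ge and b_ge:
          return "equal"
      if a_ge:
          return "a_dominates"        # b is just behind; bring it up to date
      if b_ge:
          return "b_dominates"
      return "concurrent"             # each side has changes the other missed: conflict

  print(compare_cvv([2, 2, 1, 1], [1, 1, 1, 1]))   # a_dominates
  print(compare_cvv([2, 2, 1, 1], [1, 1, 2, 2]))   # concurrent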

In the perfect case, when the client writes a file, it does it in a multi-step process (sketched in code after the list):

  1. The client sends the file to all servers, along with the original CVV.
  2. Each server increments its entry in the file's CVV and ACKs the client.
  3. The client merges the entries from all of the servers and sends the new CVV back to each server.
  4. If a conflict is detected, the client can inform the servers, so that it can be resolved automatically, or flagged for resolution by the user.
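Sketched as code, with hypothetical store() and set_cvv() calls on each server object; store() is assumed to return the CVV with that server's own entry incremented:

  def write_file(servers, name, data, cvv):
      # 1. Send the file and the original CVV to every reachable server.
      replies = [s.store(name, data, list(cvv)) for s in servers]
      # 2. Each server has incremented its own entry; merge the replies element-wise.
      merged = [max(column) for column in zip(*replies)]
      # 3. Send the merged CVV back to each server.
      for s in servers:
          s.set_cvv(name, merged)
      return merged

Starting from <1,1,1,1> with all four replicas reachable, the replies would be <2,1,1,1>, <1,2,1,1>, <1,1,2,1>, and <1,1,1,2>, and the element-wise merge yields <2,2,2,2>. With only servers 1 and 2 reachable, the merge yields <2,2,1,1>, exactly as in the partition example below.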

Given this process, let's consider what happens if one or more servers should fail. In this case, the client cannot contact the failed server, so it temporarily forgets about it. The collection of volume servers that the client can communicate with is known as the Available Volume Storage Group (AVSG). The AVSG is a subset of the VSG.

In the event that the AVSG is smaller than the VSG, the client does nothing special. It goes through the same process as before, but only involves those servers in the AVSG.

Eventually, when the partitioned or failed server becomes accessible, it will be added back to the AVSG. At this point, it will be involved in reads and writes. When this happens, the client will begin to notice any writes the returning server missed, because that server's CVV will be behind the others in the group. This will be automatically fixed by the next write operation.

Coda clients also periodically poll the members of their VSG. If they find that hosts have appeared that are not currently in their AVSG, they add them. When they add a server in the VSG back to the AVSG, they must compare the VVVs. If the new server's VVV does not match the client's copy of the VVV, there is a conflict. To force a resolution of this conflict, the client drops all callbacks in the volume. This is because the server had updates while it was disconnected, but the client (because it couldn't talk to the server) missed the callbacks.

Now, let's assume that the network is partitioned. Let's say that half of the network is accessible to one client and the other half to the other client. If these clients play with different files, everything works as it did above. But if they play with the same files, a write-write conflict will occur. The servers in each partition will update their own version numbers, but not the other. For example, we could see the following:

  Initial:      Server 1: <1,1,1,1>
                Server 2: <1,1,1,1>
                Server 3: <1,1,1,1>
                Server 4: <1,1,1,1>

  --------- Partition {1,2} and {3,4} ----------
  Write at 1/2: Server 1: <2,2,1,1>
                Server 2: <2,2,1,1>

  Write at 3/4: Server 3: <1,1,2,2>
                Server 4: <1,1,2,2>

  --------- Partition repaired ----------
  Read (ouch!): Server 1: <2,2,1,1>
                Server 2: <2,2,1,1>
                Server 3: <1,1,2,2>
                Server 4: <1,1,2,2>

The next time a client does a read (or a write), it will detect the inconsistency. This inconsistency cannot be resolved automatically and must be repaired by the user. Coda simply flags it and requests the user's involvement before permitting a subsequent access.

Notice that replication does not affect the model we discussed before -- Coda's disconnected mode is not affected. The version vectors work just like the individual version numbers in this respect.

Coda and Weakly Connected Mode

Coda operates under the assumption that servers are busy and clients have plenty of free time. This is the reason that replication and conflict detection are client-driven in Coda. But this assumption does not always hold. It would be prohibitively time-consuming to implement client-based replication over a modem line. It might also be prohibitively expensive over a low-bandwidth wireless connection.

Coda handles this situation with what is known as weakly connected mode. If the Coda client finds itself with a limited connection to the servers, it picks one of the servers and sends it the update. The server then propagates the change to the other servers, if possible, and the version vectors are updated on the servers, as appropriate. Basically the update generates a server-server version conflict and triggers the server to resolve it.