It is assumed that you will be setting up both a server and a client. If you are just setting up a client to work off of somebody else's server (say in your department), you can skip to Section 4. However, every client that is set up requires modifications on the server to authorize that client (unless the server setup is done in a very insecure way), so even if you are not setting up a server you may wish to read this section to get an idea what kinds of authorization problems to look out for.
Setting up the server will be done in two steps: Setting up the configuration files for NFS, and then starting the NFS services.
There are three main configuration files you will need to edit to set up an NFS server: /etc/exports, /etc/hosts.allow, and /etc/hosts.deny. Strictly speaking, you only need to edit /etc/exports to get NFS to work, but you would be left with an extremely insecure setup. You may also need to edit your startup scripts; see Section 3.3.3 for more on that.
This file contains a list of entries; each entry indicates a volume that is shared and how it is shared. Check the man pages (man exports) for a complete description of all the setup options for the file, although the description here will probably satisfy most people's needs.
An entry in /etc/exports will typically look like this:
    directory machine1(option11,option12) machine2(option21,option22)
where
directory: the directory that you want to share. It may be an entire volume, though it need not be. If you share a directory, then all directories under it within the same file system will be shared as well.
machine1 and machine2: the client machines that will have access to the directory. The machines may be listed by their IP address or their DNS name (e.g., machine.company.com or 192.168.0.8). Using IP addresses is more reliable and more secure.
option11, option12, etc.: the option listing for each machine describes what kind of access that machine will have. Important options are:
ro: The directory is shared read only; the client machine will not be able to write to it. This is the default.
rw: The client machine will have read and write access to the directory.
no_root_squash: By default, any file request made by user root on the client machine is treated as if it is made by user nobody on the server. (Exactly which UID the request is mapped to depends on the UID of user "nobody" on the server, not the client.) If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the system as root on the server. This can have serious security implications, although it may be necessary if you want to perform any administrative work on the client machine that involves the exported directories. You should not specify this option without a good reason.
no_subtree_check: If only part of a volume is exported, a routine called subtree checking verifies that a file that is requested from the client is in the appropriate part of the volume. If the entire volume is exported, disabling this check will speed up transfers.
sync: By default, a Version 2 NFS server will tell a client machine that a file write is complete when NFS has finished handing the write over to the filesystem; however, the file system may not sync it to the disk, even if the client makes a sync() call on the file system. The default behavior may therefore cause file corruption if the server reboots. This option forces the filesystem to sync to disk every time NFS completes a write operation. It slows down write times substantially but may be necessary if you are running NFS Version 2 in a production environment. Version 3 NFS has a commit operation that the client can call that actually will result in a disk sync on the server end.
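Options are combined in a comma-separated list inside the parentheses. As a purely illustrative sketch (the host and option choices here are assumptions, not recommendations), an entry giving one client read-write access with synchronous writes might look like:

    /home   192.168.0.1(rw,sync)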
Suppose we have two client machines, slave1 and slave2, that have IP addresses 192.168.0.1 and 192.168.0.2, respectively. We wish to share our software binaries and home directories with these machines. A typical setup for /etc/exports might look like this:
    /usr/local   192.168.0.1(ro) 192.168.0.2(ro)
    /home        192.168.0.1(rw) 192.168.0.2(rw)
Here we are sharing /usr/local read-only to slave1 and slave2, because it probably contains our software and there may not be benefits to allowing slave1 and slave2 to write to it that outweigh security concerns. On the other hand, home directories need to be exported read-write if users are to save work on them.
If you have a large installation, you may find that you have a bunch of computers all on the same local network that require access to your server. There are a few ways of simplifying references to large numbers of machines. First, you can give access to a range of machines at once by specifying a network and a netmask. For example, if you wanted to allow access to all the machines with IP addresses between 192.168.0.0 and 192.168.0.255 then you could have the entries:
    /usr/local   192.168.0.0/255.255.255.0(ro)
    /home        192.168.0.0/255.255.255.0(rw)
See the Networking-Overview HOWTO for further information about how netmasks work, and you may also wish to look at the man pages for init and hosts.allow.
Second, you can use NIS netgroups in your entry. To specify a netgroup in your exports file, simply prepend the name of the netgroup with an "@". See the NIS HOWTO for details on how netgroups work.
Third, you can use wildcards such as *.foo.com or 192.168. instead of hostnames; see the example below.
However, you should keep in mind that any of these simplifications could cause a security risk if there are machines in your netgroup or local network that you do not trust completely.
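To illustrate the netgroup and wildcard forms, assuming a hypothetical NIS netgroup named trusted-hosts, the corresponding /etc/exports entries might look like:

    /usr/local   @trusted-hosts(ro)
    /home        *.foo.com(rw)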
A few cautions are in order about what cannot (or should not) be exported. First, if a directory is exported, its parent and child directories cannot be exported if they are in the same filesystem. However, exporting both should not be necessary because listing the parent directory in the /etc/exports file will cause all underlying directories within that file system to be exported.
Second, it is a poor idea to export a FAT or VFAT (i.e., MS-DOS or Windows 95/98) filesystem with NFS. FAT is not designed for use on a multi-user machine, and as a result, operations that depend on permissions will not work well. Moreover, some of the underlying filesystem design is reported to work poorly with NFS's expectations.
Third, device or other special files may not export correctly to non-Linux clients. See Section 8 for details on particular operating systems.
These two files specify which computers on the network can use services on your machine. Each line of the file is an entry listing a service and a set of machines. When the server gets a request from a machine, it does the following:
It first checks hosts.allow to see if the machine matches a description listed in there. If it does, then the machine is allowed access.
If the machine does not match an entry in hosts.allow, the server then checks hosts.deny to see if the client matches a listing in there. If it does then the machine is denied access.
If the client matches no listings in either file, then it is allowed access.
In addition to controlling access to services handled by inetd (such as telnet and FTP), these files can also control access to NFS by restricting connections to the daemons that provide NFS services. Restrictions are done on a per-service basis.
The first daemon to restrict access to is the portmapper. This daemon essentially just tells requesting clients how to find all the NFS services on the system. Restricting access to the portmapper is the best defense against someone breaking into your system through NFS because completely unauthorized clients won't know where to find the NFS daemons. However, there are two things to watch out for. First, restricting portmapper isn't enough if the intruder already knows for some reason how to find those daemons. And second, if you are running NIS, restricting portmapper will also restrict requests to NIS. That should usually be harmless since you usually want to restrict NFS and NIS in a similar way, but just be cautioned. (Running NIS is generally a good idea if you are running NFS, because the client machines need a way of knowing who owns what files on the exported volumes. Of course there are other ways of doing this such as syncing password files. See the NIS HOWTO for information on setting up NIS.)
In general it is a good idea with NFS (as with most internet services) to explicitly deny access to hosts that you don't need to allow access to.
The first step in doing this is to add the following entry to /etc/hosts.deny:
    portmap:ALL
Starting with nfs-utils 0.2.0, you can be a bit more careful by controlling access to individual daemons. It's a good precaution since an intruder will often be able to weasel around the portmapper. If you have a newer version of NFS-utils, add entries for each of the NFS daemons (see the next section to find out what these daemons are; for now just put entries for them in hosts.deny):
    lockd:ALL
    mountd:ALL
    rquotad:ALL
    statd:ALL
Even if you have an older version of nfs-utils, adding these entries is at worst harmless (since they will just be ignored) and at best will save you some trouble when you upgrade. Some sys admins choose to put the entry ALL:ALL in the file /etc/hosts.deny, which causes any service that looks at these files to deny access to all hosts unless it is explicitly allowed. While this is more secure behavior, it may also get you in trouble later, when you install a new service, forget that you put the entry there, and can't figure out for the life of you why it won't work.
Next, we need to add an entry to hosts.allow to give any hosts access that we want to have access. (If we just leave the above lines in hosts.deny then nobody will have access to NFS.) Entries in hosts.allow follow the format

    service: host [or network/netmask] , host [or network/netmask]

Here, host is the IP address of a potential client; it may be possible in some versions to use the DNS name of the host, but it is strongly deprecated.
Suppose we have the setup above and we just want to allow access to slave1.foo.com and slave2.foo.com, and suppose that the IP addresses of these machines are 192.168.0.1 and 192.168.0.2, respectively. We could add the following entry to /etc/hosts.allow:

    portmap: 192.168.0.1 , 192.168.0.2
For recent nfs-utils versions, we would also add the following (again, these entries are harmless even if they are not supported):
    lockd: 192.168.0.1 , 192.168.0.2
    rquotad: 192.168.0.1 , 192.168.0.2
    mountd: 192.168.0.1 , 192.168.0.2
    statd: 192.168.0.1 , 192.168.0.2
If you intend to run NFS on a large number of machines in a local network, /etc/hosts.allow also allows for network/netmask style entries in the same manner as /etc/exports above.
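For example, reusing the hypothetical 192.168.0.0/255.255.255.0 network from the exports example above, the corresponding /etc/hosts.allow entries might look like:

    portmap: 192.168.0.0/255.255.255.0
    lockd: 192.168.0.0/255.255.255.0
    rquotad: 192.168.0.0/255.255.255.0
    mountd: 192.168.0.0/255.255.255.0
    statd: 192.168.0.0/255.255.255.0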
The NFS server should now be configured and we can start it running. First, you will need to have the appropriate packages installed. This consists mainly of a new enough kernel and a new enough version of the nfs-utils package. See Section 2.4 if you are in doubt.
Next, before you can start NFS, you will need to have TCP/IP networking functioning correctly on your machine. If you can use telnet, FTP, and so on, then chances are your TCP networking is fine.
That said, with most recent Linux distributions you may be able to get NFS up and running simply by rebooting your machine, and the startup scripts should detect that you have set up your /etc/exports file and will start up NFS correctly. If you try this, see Section 3.4 Verifying that NFS is running. If this does not work, or if you are not in a position to reboot your machine, then the following section will tell you which daemons need to be started in order to run NFS services. If for some reason nfsd was already running when you edited your configuration files above, you will have to flush your configuration; see Section 3.5 for details.
NFS depends on the portmapper daemon, either called portmap or rpc.portmap. It will need to be started first. It should be located in /sbin but is sometimes in /usr/sbin. Most recent Linux distributions start this daemon in the boot scripts, but it is worth making sure that it is running before you begin working with NFS (just type ps aux | grep portmap).
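For example, you could check for it and, if necessary, start it by hand; the init script location below is only an assumption and differs between distributions:

    # confirm that the portmapper is running
    ps aux | grep portmap

    # if it is not, start it (the script path is distribution-dependent)
    /etc/rc.d/init.d/portmap start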
NFS serving is taken care of by five daemons: rpc.nfsd, which does most of the work; rpc.lockd and rpc.statd, which handle file locking; rpc.mountd, which handles the initial mount requests, and rpc.rquotad, which handles user file quotas on exported volumes. Starting with 2.2.18, lockd is called by nfsd upon demand, so you do not need to worry about starting it yourself. statd will need to be started separately. Most recent Linux distributions will have startup scripts for these daemons.
The daemons are all part of the nfs-utils package, and may be either in the /sbin directory or the /usr/sbin directory.
If your distribution does not include them in the startup scripts, then you should add them, configured to start in the following order:
    1. rpc.portmap
    2. rpc.mountd, rpc.nfsd
    3. rpc.statd, rpc.lockd (if necessary), rpc.rquotad
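As a minimal sketch of that startup order (the daemon paths and the thread count passed to rpc.nfsd are assumptions; your distribution's init scripts will look different):

    #!/bin/sh
    # rough startup sequence; adjust /sbin vs. /usr/sbin for your system
    /sbin/portmap
    /usr/sbin/rpc.mountd
    /usr/sbin/rpc.nfsd 8        # 8 server threads is an arbitrary example
    /usr/sbin/rpc.statd
    /usr/sbin/rpc.rquotad
    # rpc.lockd only needs starting explicitly on kernels older than 2.2.18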
The nfs-utils package has sample startup scripts for RedHat and Debian. If you are using a different distribution, in general you can just copy the RedHat script, but you will probably have to take out the line that says:
    . ../init.d/functions
To verify that NFS is running, query the portmapper with the command rpcinfo -p to find out what services it is providing. You should get something like this:
    program vers proto   port
     100000    2   tcp    111  portmapper
     100000    2   udp    111  portmapper
     100011    1   udp    749  rquotad
     100011    2   udp    749  rquotad
     100005    1   udp    759  mountd
     100005    1   tcp    761  mountd
     100005    2   udp    764  mountd
     100005    2   tcp    766  mountd
     100005    3   udp    769  mountd
     100005    3   tcp    771  mountd
     100003    2   udp   2049  nfs
     100003    3   udp   2049  nfs
     300019    1   tcp    830  amd
     300019    1   udp    831  amd
     100024    1   udp    944  status
     100024    1   tcp    946  status
     100021    1   udp   1042  nlockmgr
     100021    3   udp   1042  nlockmgr
     100021    4   udp   1042  nlockmgr
     100021    1   tcp   1629  nlockmgr
     100021    3   tcp   1629  nlockmgr
     100021    4   tcp   1629  nlockmgr
This says that we have NFS versions 2 and 3, rpc.statd version 1, network lock manager (the service name for rpc.lockd) versions 1, 3, and 4. There are also different service listings depending on whether NFS is travelling over TCP or UDP. Linux systems use UDP by default unless TCP is explicitly requested; however other OSes such as Solaris default to TCP.
If you do not at least see a line that says "portmapper", a line that says "nfs", and a line that says "mountd" then you will need to backtrack and try again to start up the daemons (see Section 7, Troubleshooting, if this still doesn't work).
If you do see these services listed, then you should be ready to set up NFS clients to access files from your server.
If you come back and change your /etc/exports file, the changes you make may not take effect immediately. You should run the command exportfs -ra to force nfsd to re-read the /etc/exports file. If you can't find the exportfs command, then you can kill nfsd with the -HUP flag (see the man pages for kill for details).
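For example (using pidof to locate the daemon is just one approach, mentioned here as an assumption rather than the only method):

    # preferred: make the NFS server re-read /etc/exports
    exportfs -ra

    # fallback if exportfs is not available: send nfsd a HUP signal
    kill -HUP `pidof rpc.nfsd`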
If that still doesn't work, don't forget to check hosts.allow to make sure you haven't forgotten to list any new client machines there. Also check the host listings on any firewalls you may have set up (see Section 7 for more details on firewalls and NFS).