Protocol Components: Remote Procedure Call (RPC) Protocol

RPC is a simple client/server application protocol. RPC defines the interaction between a client, which formats a request for execution by the server, and the server, which executes the client's request on the local system. The server performs whatever processing is required and returns the data and control of the procedure to the client. Sun developed RPC for use in NFS, but it has since been employed quite usefully by many other client/server-based products.

The rpcbind daemon (a process that runs in the background waiting for requests) runs on both the client and the server and is responsible for implementing RPC protocol exchanges between hosts on the network.

A service is a set of RPC procedures that have been grouped together into programs. Each service is identified by a unique number, which allows more than one service to operate at any given time. An application that needs to use a service can use the different programs that make up the service to perform specific actions. For example, when designing an NFS service, one program might be responsible for determining a file's attributes, and another program might be responsible for the actual transfer of data between the client and server computers.

The unique service number is used to identify different network services that
run on a particular system, and the mapping for this is usually found in the
file /etc/rpc. 
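
For example, the first few entries of a typical /etc/rpc file map well-known service names to their program numbers (the exact contents vary from system to system):

portmapper      100000  portmap sunrpc
rstatd          100001  rstat rup perfmeter
rusersd         100002  rusers
nfs             100003  nfsprog
ypserv          100004  ypprog
mountd          100005  mount showmount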

The NFS Protocol and Mount Protocol

The NFS protocol is a set of procedures (called primitives) that are executed via RPC to allow an action to be performed on a remote computer. NFS is a stateless protocol, which means that the server does not have to maintain information about the state of each client. If the server (or the network) fails, the client needs only to repeat the operation. The server doesn't have to rebuild any data tables or other structures to recover the state of a client after a failure.

Note
Certain operations, such as file or record locking, do require a stateful protocol
of some sort, and many implementations of NFS accomplish this by using
another protocol to handle the specific function. NFS itself is composed of a
set of procedures that deal only with file access.

The RPC procedures that make up the NFS protocol are the following:

 Null: The "do nothing" routine. It is provided in all RPC services and is used for testing and timing operations.

 Get File Attributes: Gets the file attributes of a file on a remote system.

 Set File Attributes: Sets the file attributes of a file on the remote server.

 Get File System Root: No longer used; the Mount protocol performs this function instead.

 Look Up a Filename: Returns a file handle used to access a file.

 Read From Symbolic Link: Returns information about symbolic links to a file on the remote server.

 Read From File: Reads data from a file on a remote system.

 Write to Cache: Cache feature to be included in version 3 of the protocol.

 Write to File: Writes data to a file on a remote server.

 Create File: Creates a file on the remote server.

 Remove File: Deletes a file on the remote server.

 Rename File: Renames a file on the remote server.

 Create Link to File: Creates a hard link (in the same file system) to a file.

 Create Symbolic Link: Creates a symbolic link, which can be used to link a file across file systems. A symbolic link is a pointer to a file.

 Create Directory: Creates a directory on the remote server.

 Remove Directory: Deletes an empty directory on the remote server.

 Read From Directory: Obtains a list of files from a directory on the server.

 Get File System Attributes: Returns information about the file system on the remote server, such as the total size and available free space.

There is no provision in these procedures to open or close a file. Because NFS is a stateless protocol, it doesn't handle file opens or closes. The Mount protocol performs this function and returns a file handle to NFS. The mountd daemon runs on the server and is responsible for maintaining a list of current client mounts. Most implementations of NFS recover from client crashes by having the client send a message to the NFS server when it reboots, telling the server to unmount all of that client's previous connections.

When compared to the NFS protocol, the Mount protocol consists of only a very few procedures:

 Null: The "do nothing" procedure, just like the one listed under the NFS protocol.

 MNT: Mounts a file system and returns to the client a file handle and the name of the remote file system.

 UMNT: The opposite of the MNT procedure. It unmounts a file system and removes the reference to it from the server's table.

 UMNTALL: Similar to the UMNT procedure, but this one unmounts all remote file systems that are being used by the NFS client.

 EXPORT: Displays a list of exported file systems.

 DUMP: Displays a list of file systems on a server that are currently mounted by a client.
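
On many systems you can exercise the EXPORT and DUMP procedures from the command line with the showmount utility, which queries the mountd daemon on a server (the hostname zira below is just an example):

showmount -e zira    # EXPORT: list the file systems zira is exporting
showmount -a zira    # DUMP: list clients and the directories they have mounted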

Configuring NFS Servers and Clients

The biod daemon runs on the client system and communicates with the remote NFS server. The daemon also processes the data that is transferred between the NFS client and the NFS server. The RPC daemon must also be running, and either UDP or TCP needs to be available, depending on which one your version of NFS uses as a transport. Using the mount command, users can mount a file system offered by an NFS server, provided that the server does not prevent them from doing so.

Note

The commands shown in the following sections might differ from one version
of Unix to another. As always with Unix or Linux, consult the man pages to
determine the exact syntax for commands and the locations of files mentioned
in relation to the commands.

NFS Client Daemons

On the client side of the NFS process, there are actually three daemon
processes that are used. The first is biod, which stands for block input/output
daemon. This daemon processes the input/output with the NFS server on
behalf of the user process that is making requests of the remote file system. If
you use NFS heavily on a client, you can improve performance by starting up
more than one biod daemon. The syntax used to start the daemon is as
follows:
/etc/biod [number of daemon processes]

This daemon is usually started in the /etc/rc.local startup file. Modify this file if you want to permanently change the number of daemons running on the client system. You can first experiment by running the command interactively to determine how many daemons you need to start, and then place the necessary commands in the startup file.
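
For example, you might experiment along these lines; the daemon count of 4 is purely illustrative, and the exact path of the startup file varies between systems:

/etc/biod 4                           # try four block I/O daemons interactively
echo '/etc/biod 4' >> /etc/rc.local   # once satisfied, make the change permanent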

When deciding performance issues, remember that on a heavily loaded client, making a change in one place might result in poorer performance from another part of the system. So don't assume that you need a lot of extra daemons running unless you can first show that they are needed and do improve performance. Each daemon process is like any other process running on the system, and it uses up system resources, especially memory. Begin with one or two daemons if you are using a workstation dedicated to one user. For a multiple-user computer, test your performance by increasing the number of daemons until NFS performance is satisfactory (all the while checking, of course, other performance indicators to be sure that the overall system impact is justified).

Although having multiple daemons means that NFS requests can be processed in parallel, remember that the network itself might be a bottleneck. Additional biod daemons will not increase throughput when the network itself is the limiting factor.

Also note that the biod daemon is a client process. You should not run it on an
NFS server unless that server is also a client of another NFS server.

In addition to the biod daemon, the lockd and statd daemons also run on the client. For more information on these, see the section "Server-Side Daemons," later in this chapter.

The mount Command

The mount command is used to mount a local file system, and you can also use the command to mount a remote NFS file system. The syntax for using mount to make available a file system being exported by an NFS server is as follows:
mount -F nfs -o options machine:filesystem mountpoint

In some versions of Unix, the syntax for mounting a remote NFS file system is
a little different. For example, in SCO Unix you use a lowercase f and an
uppercase NFS:
mount -f NFS -o options machine:filesystem mountpoint

In BSD Unix, there is a command called mountnfs, which uses the mount system call to perform most of its functions. This version of the mount command comes with a lot of additional parameters, including the capability to specify on the mount command line whether to use UDP or TCP as the underlying transport mechanism.

The value you supply for machine:filesystem should be the hostname of the remote server that is exporting the file system you want to mount for machine; substitute the name of the file system for filesystem. The following example causes the remote file system on host zira, called /usr/projectx/docs, to be made accessible in the local file system hierarchy at the /usr/docs directory:
mount -F nfs -o ro zira:/usr/projectx/docs /usr/docs

This is the same way you mount other local file systems into the local
hierarchy. Under the /usr/docs directory, you can access any other
subdirectories that exist on host zira under the /usr/projectx/docs directory.

The -o parameter can be used to specify options for the mount command. In the preceding example, the ro option was used to make the remote file system read-only for users on the local computer.

Other options that can be used when mounting a remote file system include
the following:

 rw: Mounts the file system for local read-write access. This is the default.

 ro: Mounts the file system for local read-only access.

 suid: Allows setuid execution.

 nosuid: Disallows setuid execution.

 timeo=x: Specifies a timeout value (in tenths of a second). The mount command will fail if it cannot mount the remote file system within this time limit.

 retry=x: The mount command will attempt to mount the remote file system x times, with each attempt lasting for the length of time specified by the timeo parameter.

 soft: Causes an error to be returned if the mount is unsuccessful. Opposite of the hard option.

 hard: Causes the mount attempt to continue until it succeeds. Opposite of the soft option.
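
Putting several of these options together, a cautious version of the earlier example might look like this (the timeout and retry values are arbitrary illustrations):

mount -F nfs -o rw,hard,timeo=30,retry=5 zira:/usr/projectx/docs /usr/docs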

For more command-line parameters and options, see the man page for
the mount command for your particular system.

Caution
A computer can be an NFS server, an NFS client, or perhaps both a server
and a client. However, you should not try to mount an exported file system on
the same server that is exporting it. This can lead to looping problems,
causing unpredictable behavior.

The mountpoint is the path to the location in the local file system where the remote NFS file system will appear, and this path must exist before the mount command is issued. Any files existing in the mountpoint directory will no longer be accessible to users after a remote file system is attached to the directory with the mount command, so do not use just any directory. Note that the files are not lost; they reappear when the remote file system is unmounted.

Server-Side Daemons

The nfsd daemon process handles requests from NFS clients for the server.
The nfsd daemon interprets requests and sends them to the I/O system to
perform the requests' actual functions. The daemon communicates with
the biod daemon on the client, processing requests and returning data to the
requestor's daemon.

An NFS server will usually be set up to serve multiple clients. You can set up
multiple copies of the nfsd daemon on the server so that the server can handle
multiple client requests in a timely manner.

The syntax for the command to start the daemon is as follows:
/etc/nfsd [number of nfs daemons to start]

For example, to start up five copies of the nfsd daemon at boot time, modify your startup scripts to include the following command:
/etc/nfsd 5

Unix systems and the utilities that are closely associated with them are continually being updated or improved. Some newer versions use threads, making it possible for a daemon to be implemented as a multithreaded process capable of handling many requests at one time. Digital Unix 4.0 (later HP Tru64 UNIX) is an operating system that provides a multithreaded NFS server daemon.

Other daemons the NFS server runs include the lockd daemon to handle file
locking and the statd daemon to help coordinate the status of current file
locks.

Configuring Server Daemons

For an NFS server, choose a computer that has the hardware capabilities needed to support your network clients. If the NFS server will be used to allow clients to view seldom-used documentation, a less-powerful hardware configuration might be all you need. If the server is going to be used to export a large number of directories, say from a powerful disk storage subsystem, the hardware requirements become much more important. You will have to make capacity judgments concerning the CPU power, disk subsystems, and network adapter card performance.

Setting up an NFS server is a simple task. Create a list of the directories that
are to be exported, and place entries for these in the /etc/exports file on the
server. At boot time the exportfs program starts and obtains information from
this file. The exportfs program uses this data to make exported directories
available to clients that make requests.

Sharing File Systems: The exportfs Command

At system boot time, the exportfs program is usually started by the /sbin/init.d/nfs.server script file, but this can vary, depending on the particular implementation of Unix you are using. The exportfs program reads the information in the /etc/exports configuration file.

The syntax for this command varies, depending on what actions you want to
perform:
/usr/sbin/exportfs [-auv]
/usr/sbin/exportfs [-uv] [dir ...]
/usr/sbin/exportfs -i [-o options] [-v] [dir ...]

The parameters and options you can use with this command are listed here:
 -a: Causes exportfs to read the /etc/exports file and export all directories for which it finds an entry. When used with the -u parameter, it causes all directories to be unexported.

 -i: Tells exportfs to ignore the options listed in the /etc/exports file; instead, you supply the options to be associated with each exported directory on the command line (with -o).

 -u: Used to stop exporting a directory (or all directories if used with the -a option).

 -v: Tells exportfs to operate in "verbose" mode, giving you additional feedback in response to your commands.

The options you can specify after the -o qualifier are the same as you use in
the /etc/exports file (see the following section, "Configuration Files").

To export or unexport (stop sharing) all entries found in the /etc/exports file, use the -a or -u option. This is probably the most often used form because you can specify the other options you need on a per-directory basis in the /etc/exports file. This example causes all directories listed in /etc/exports to be available for use by remote clients:
exportfs -a

The following example causes your NFS server to stop sharing all the
directories listed for export in the /etc/exports file:
exportfs -au

The second form can be used to export or unexport (stop exporting) a particular directory (or directories) instead of all directories. You specify the directories on the command line. You can use this form if you want to stop sharing a particular directory because of system problems or maintenance, for example. Using the following syntax causes the NFS server to stop sharing the /etc/users/accounting directory with remote users:
exportfs -u /etc/users/accounting
The next form of the command can be used to ignore the options found in
the /etc/exports file. Instead, you can supply them (using the -o parameter) on
the command line. You will probably use this in special cases because you
could just as easily change the options in the /etc/exports file if the change
were a permanent one. If, for example, you decided that you wanted to make
an exported directory that is currently set to be read-write to be read-only, you
could use the following command:
exportfs -o ro /etc/users/purch

You can also dismount and mount remote file systems using different options
when troubleshooting or when researching the commands you will need when
preparing to upgrade a network segment where connections need to change.

If changes are made to the /etc/exports file while the system is running, use the exportfs command (with the -a parameter) to make the changes take effect. To get a list of directories that are currently being exported, execute the command with no options, and it will show you the list.
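
Using the example entries from the "Configuration Files" section that follows, such a check might look like this (the exact output format varies between implementations):

# exportfs
/etc/users/acctpay -access=acct
/etc/users/docs -ro
/etc/users/reports/monthend -rw=ono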

Of course, it is not necessarily a good idea to make changes on-the-fly without keeping track of the connections. When you decide to perform online testing to mount or dismount file systems, be sure that you are not going to impact any users who are currently making productive use of the resources. To make testing more foolproof and to provide a quick back-out procedure, copy the /etc/exports file first so that you keep a safe starting version, then make your changes to /etc/exports and load them with the exportfs -a command. If you determine that something has been done incorrectly, you can simply restore the backup copy and reload it to return to the status quo.
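
A minimal sketch of that back-out procedure, assuming an arbitrary backup filename:

cp /etc/exports /etc/exports.save   # keep a known-good copy
vi /etc/exports                     # make the changes
exportfs -a                         # load the new configuration
# if something goes wrong, restore the saved copy and reload it:
cp /etc/exports.save /etc/exports
exportfs -a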

Configuration Files

To make a file system or a directory in a file system available for export, add
the pathnames to the /etc/exports file. The format for an entry in this file is as
follows:
directory [-option, ...]

The term directory is a pathname for the directory you want to share with
other systems. The options you can include are the following:
 ro: Makes the directory available to remote users in read-only mode. The default is read-write, and remote users can change data in files on your system if you do not specify ro here.

 rw=hostnames: Specifies a specific host or hosts that you want to have read-write access. If a host is not included in hostnames, it will have only read access to the exported file system.

 anon=uid: Use this parameter to set the uid (user ID) that will be used for anonymous users, if allowed.

 root=hostnames: Users who have root access on a system listed in hostnames can gain root access on the exported file system.

 access=client: Specifies a client that can have mount access to this file system.

For example:
/etc/users/acctpay -access=acct
/etc/users/docs -ro
/etc/users/reports/monthend -rw=ono

In this file, the first directory, /etc/users/acctpay, which stores accounts payable files, will be shared with a group called acct (the accounting department). The /docs directory can be accessed by anyone in read-only mode. The /reports/monthend directory can be accessed in read-only mode by most users, but users on the computer whose hostname is ono will have read-write access.

Caution

You should give considerable thought to the matter before using NFS to
export sensitive or critical data. If the information could cause great harm if it
were to be altered or exposed, you should not treat it lightly and make it
available on the network via NFS. NFS is better suited for ordinary user data
files and programs, directories, or other resources that are shared by a large
number of users. There are not enough security mechanisms in place when
using many implementations of NFS to make it a candidate for a high-security
environment.
Automounting File Systems

The Mount protocol takes care of the details of making a connection for the NFS client to the NFS server. This means that it is necessary to use the mount command to make the remote file system available at a mountpoint in the local file system. To make this process even easier, the automountd daemon was created. This daemon listens for NFS requests and mounts a remote file system locally on an as-needed basis. The mounted condition usually persists for a specified number of minutes (the default is usually five minutes) in order to satisfy any further requests.

As with other daemons, the automountd daemon is started at boot time in the /etc/rc.local file. You can enter it as a command after the system is up and running, if needed. When a client computer tries to access a file that is referenced in an automount map, the automountd daemon checks to see whether the file system for that directory is currently mounted. If it is not, the daemon temporarily mounts the file system so that the user's request can be fulfilled.

The automount map is a file that tells the daemon where the file system to be mounted is located and where it should be mounted in the local file system. Options can also be included for the mount process, for example, to make it read-write or read-only. The automountd daemon mounts a file system under the mountpoint /tmp_mnt. It then creates a symbolic link that appears to the user as part of his file system.

Mounting File Systems Using the automount Command

The /etc/rc.local file usually contains the command used to start the automountd daemon. This daemon is responsible for processing NFS mount requests as they are defined in special files called map files.

The syntax for the automount command is as follows:

automount [-mnTv] [-D name=value] [-f master-file]
[-M mount-directory] [-tl duration] [-tm interval]
[-tw interval] [directory mapname [-mount-options]]
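
As an illustration, a simple indirect map and the command that activates it might look like the following; the map filename and entries are hypothetical, and map syntax varies between automounter implementations, so check your man pages:

# /etc/auto.docs: key, mount options, server:path
projectx   -ro   zira:/usr/projectx/docs

automount /docs /etc/auto.docs

With this in place, a reference to /docs/projectx triggers a temporary mount of zira:/usr/projectx/docs.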
3. Setting Up an NFS Server
3.1. Introduction to the server setup

It is assumed that you will be setting up both a server and a client. If you are just
setting up a client to work off of somebody else's server (say in your department), you
can skip to Section 4. However, every client that is set up requires modifications on
the server to authorize that client (unless the server setup is done in a very insecure
way), so even if you are not setting up a server you may wish to read this section to
get an idea what kinds of authorization problems to look out for.
Setting up the server will be done in two steps: Setting up the configuration files for
NFS, and then starting the NFS services.

3.2. Setting up the Configuration Files

There are three main configuration files you will need to edit to set up an NFS server: /etc/exports, /etc/hosts.allow, and /etc/hosts.deny. Strictly speaking, you only need to edit /etc/exports to get NFS to work, but you would be left with an extremely insecure setup. You may also need to edit your startup scripts; see Section 3.3.3 for more on that.

3.2.1. /etc/exports

This file contains a list of entries; each entry indicates a volume that is shared and how it is shared. Check the man pages (man exports) for a complete description of all the setup options for the file, although the description here will probably satisfy most people's needs.

An entry in /etc/exports will typically look like this:

directory machine1(option11,option12) machine2(option21,option22)

where

directory

the directory that you want to share. It may be an entire volume though it need
not be. If you share a directory, then all directories under it within the same file
system will be shared as well.

machine1 and machine2

client machines that will have access to the directory. The machines may be
listed by their DNS address or their IP address
(e.g., machine.company.com or 192.168.0.8). Using IP addresses is more
reliable and more secure. If you need to use DNS addresses, and they do not
seem to be resolving to the right machine, see Section 7.3.

optionxx
the option listing for each machine will describe what kind of access that
machine will have. Important options are:

 ro: The directory is shared read only; the client machine will not be able
to write to it. This is the default.

 rw: The client machine will have read and write access to the directory.

 no_root_squash: By default, any file request made by user root on the client machine is treated as if it is made by user nobody on the server. (Exactly which UID the request is mapped to depends on the UID of user "nobody" on the server, not the client.) If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the system as root on the server. This can have serious security implications, although it may be necessary if you want to perform any administrative work on the client machine that involves the exported directories. You should not specify this option without a good reason.

 no_subtree_check: If only part of a volume is exported, a routine called subtree checking verifies that a file that is requested from the client is in the appropriate part of the volume. If the entire volume is exported, disabling this check will speed up transfers.

 sync: By default, all but the most recent version (version 1.11) of the exportfs command will use async behavior, telling a client machine that a file write is complete (that is, has been written to stable storage) when NFS has finished handing the write over to the filesystem. This behavior may cause data corruption if the server reboots, and the sync option prevents this. See Section 5.9 for a complete discussion of sync and async behavior.

Suppose we have two client machines, slave1 and slave2, that have IP addresses 192.168.0.1 and 192.168.0.2, respectively. We wish to share our software binaries and home directories with these machines. A typical setup for /etc/exports might look like this:

/usr/local 192.168.0.1(ro) 192.168.0.2(ro)
/home 192.168.0.1(rw) 192.168.0.2(rw)
Here we are sharing /usr/local read-only to slave1 and slave2, because it probably
contains our software and there may not be benefits to allowing slave1 and slave2 to
write to it that outweigh security concerns. On the other hand, home directories need
to be exported read-write if users are to save work on them.

If you have a large installation, you may find that you have a bunch of computers all
on the same local network that require access to your server. There are a few ways of
simplifying references to large numbers of machines. First, you can give access to a
range of machines at once by specifying a network and a netmask. For example, if
you wanted to allow access to all the machines with IP addresses
between 192.168.0.0 and 192.168.0.255 then you could have the entries:

/usr/local 192.168.0.0/255.255.255.0(ro)
/home 192.168.0.0/255.255.255.0(rw)

See the Networking-Overview HOWTO for further information about how netmasks work, and you may also wish to look at the man pages for init and hosts.allow.

Second, you can use NIS netgroups in your entry. To specify a netgroup in your
exports file, simply prepend the name of the netgroup with an "@". See the NIS
HOWTO for details on how netgroups work.
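
For example, if you had defined a netgroup named trusted (a hypothetical name), an exports entry might read:

/usr/local @trusted(ro)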

Third, you can use wildcards such as *.foo.com or 192.168. instead of hostnames. There were problems with wildcard implementation in the 2.2 kernel series that were fixed in kernel 2.2.19.

However, you should keep in mind that any of these simplifications could cause a
security risk if there are machines in your netgroup or local network that you do not
trust completely.

A few cautions are in order about what cannot (or should not) be exported. First, if a
directory is exported, its parent and child directories cannot be exported if they are in
the same filesystem. However, exporting both should not be necessary because listing
the parent directory in the /etc/exports file will cause all underlying directories
within that file system to be exported.

Second, it is a poor idea to export a FAT or VFAT (i.e., MS-DOS or Windows 95/98)
filesystem with NFS. FAT is not designed for use on a multi-user machine, and as a
result, operations that depend on permissions will not work well. Moreover, some of
the underlying filesystem design is reported to work poorly with NFS's expectations.
Third, device or other special files may not export correctly to non-Linux clients.
See Section 8 for details on particular operating systems.

3.2.2. /etc/hosts.allow and /etc/hosts.deny

These two files specify which computers on the network can use services on your
machine. Each line of the file contains a single entry listing a service and a set of
machines. When the server gets a request from a machine, it does the following:

 It first checks hosts.allow to see if the machine matches a description listed in there. If it does, then the machine is allowed access.

 If the machine does not match an entry in hosts.allow, the server then
checks hosts.deny to see if the client matches a listing in there. If it does then
the machine is denied access.

 If the client matches no listings in either file, then it is allowed access.

In addition to controlling access to services handled by inetd (such as telnet and FTP), this file can also control access to NFS by restricting connections to the daemons that provide NFS services. Restrictions are done on a per-service basis.

The first daemon to restrict access to is the portmapper. This daemon essentially just
tells requesting clients how to find all the NFS services on the system. Restricting
access to the portmapper is the best defense against someone breaking into your
system through NFS because completely unauthorized clients won't know where to
find the NFS daemons. However, there are two things to watch out for. First,
restricting portmapper isn't enough if the intruder already knows for some reason how
to find those daemons. And second, if you are running NIS, restricting portmapper
will also restrict requests to NIS. That should usually be harmless since you usually
want to restrict NFS and NIS in a similar way, but just be cautioned. (Running NIS is
generally a good idea if you are running NFS, because the client machines need a way
of knowing who owns what files on the exported volumes. Of course there are other
ways of doing this such as syncing password files. See the NIS HOWTO for
information on setting up NIS.)

In general it is a good idea with NFS (as with most internet services) to explicitly
deny access to IP addresses that you don't need to allow access to.

The first step in doing this is to add the following entry to /etc/hosts.deny:
portmap:ALL

Starting with nfs-utils 0.2.0, you can be a bit more careful by controlling access to individual daemons. It's a good precaution since an intruder will often be able to weasel around the portmapper. If you have a newer version of nfs-utils, add entries for each of the NFS daemons (see the next section to find out what these daemons are; for now just put entries for them in hosts.deny):

lockd:ALL
mountd:ALL
rquotad:ALL
statd:ALL

Even if you have an older version of nfs-utils, adding these entries is at worst
harmless (since they will just be ignored) and at best will save you some trouble when
you upgrade. Some sys admins choose to put the entry ALL:ALL in the
file /etc/hosts.deny, which causes any service that looks at these files to deny access
to all hosts unless it is explicitly allowed. While this is more secure behavior, it may
also get you in trouble when you are installing new services, you forget you put it
there, and you can't figure out for the life of you why they won't work.

Next, we need to add an entry to hosts.allow to give any hosts access that we want to have access. (If we just leave the above lines in hosts.deny, then nobody will have access to NFS.) Entries in hosts.allow follow the format

service: host [or network/netmask] , host [or network/netmask]

Here, host is the IP address of a potential client; it may be possible in some versions to use the DNS name of the host, but it is strongly discouraged.

Suppose we have the setup above and we just want to allow access
to slave1.foo.com and slave2.foo.com, and suppose that the IP addresses of these
machines are 192.168.0.1 and 192.168.0.2, respectively. We could add the following
entry to /etc/hosts.allow:

portmap: 192.168.0.1 , 192.168.0.2

For recent nfs-utils versions, we would also add the following (again, these entries are
harmless even if they are not supported):
lockd: 192.168.0.1 , 192.168.0.2
rquotad: 192.168.0.1 , 192.168.0.2
mountd: 192.168.0.1 , 192.168.0.2
statd: 192.168.0.1 , 192.168.0.2

If you intend to run NFS on a large number of machines in a local network, /etc/hosts.allow also allows for network/netmask style entries in the same manner as /etc/exports above.
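
For example, to admit the whole 192.168.0.0 network from the earlier examples:

portmap: 192.168.0.0/255.255.255.0
lockd: 192.168.0.0/255.255.255.0
mountd: 192.168.0.0/255.255.255.0
rquotad: 192.168.0.0/255.255.255.0
statd: 192.168.0.0/255.255.255.0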

3.3. Getting the services started

If the NFS services are already running and you have changed the configuration, distributions that manage daemons with service scripts (Red Hat style) let you stop and restart everything in the correct order:

# service nfs stop
# service nfslock stop
# service portmap stop
# service portmap start
# service nfslock start
# service nfs start

3.3.1. Pre-requisites

The NFS server should now be configured and we can start it running. First, you will
need to have the appropriate packages installed. This consists mainly of a new enough
kernel and a new enough version of the nfs-utils package. See Section 2.4 if you are in
doubt.

Next, before you can start NFS, you will need to have TCP/IP networking functioning
correctly on your machine. If you can use telnet, FTP, and so on, then chances are
your TCP networking is fine.

That said, with most recent Linux distributions you may be able to get NFS up and running simply by rebooting your machine, and the startup scripts should detect that you have set up your /etc/exports file and will start up NFS correctly. If you try this, see Section 3.4, Verifying that NFS is running. If this does not work, or if you are not in a position to reboot your machine, then the following section will tell you which daemons need to be started in order to run NFS services. If for some reason nfsd was already running when you edited your configuration files above, you will have to flush your configuration; see Section 3.5 for details.
3.3.2. Starting the Portmapper

NFS depends on the portmapper daemon, either called portmap or rpc.portmap. It will need to be started first. It should be located in /sbin but is sometimes in /usr/sbin. Most recent Linux distributions start this daemon in the boot scripts, but it is worth making sure that it is running before you begin working with NFS (just type ps aux | grep portmap).

3.3.3. The Daemons

NFS serving is taken care of by five daemons: rpc.nfsd, which does most of the work; rpc.lockd and rpc.statd, which handle file locking; rpc.mountd, which handles the initial mount requests; and rpc.rquotad, which handles user file quotas on exported volumes. Starting with kernel 2.2.18, lockd is called by nfsd upon demand, so you do not need to worry about starting it yourself. statd will need to be started separately. Most recent Linux distributions will have startup scripts for these daemons.

The daemons are all part of the nfs-utils package, and may be either in
the /sbin directory or the /usr/sbin directory.

If your distribution does not include them in the startup scripts, then you should
add them, configured to start in the following order:

rpc.portmap

rpc.mountd, rpc.nfsd

rpc.statd, rpc.lockd (if necessary), and rpc.rquotad

The nfs-utils package has sample startup scripts for RedHat and Debian. If you are
using a different distribution, in general you can just copy the RedHat script, but you
will probably have to take out the line that says:

. ../init.d/functions

to avoid getting error messages.


3.4. Verifying that NFS is running

To do this, query the portmapper with the command rpcinfo -p to find out what
services it is providing. You should get something like this:

   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100011    1   udp    749  rquotad
    100011    2   udp    749  rquotad
    100005    1   udp    759  mountd
    100005    1   tcp    761  mountd
    100005    2   udp    764  mountd
    100005    2   tcp    766  mountd
    100005    3   udp    769  mountd
    100005    3   tcp    771  mountd
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    300019    1   tcp    830  amd
    300019    1   udp    831  amd
    100024    1   udp    944  status
    100024    1   tcp    946  status
    100021    1   udp   1042  nlockmgr
    100021    3   udp   1042  nlockmgr
    100021    4   udp   1042  nlockmgr
    100021    1   tcp   1629  nlockmgr
    100021    3   tcp   1629  nlockmgr
    100021    4   tcp   1629  nlockmgr

This says that we have NFS versions 2 and 3, rpc.statd version 1, network lock
manager (the service name for rpc.lockd) versions 1, 3, and 4. There are also different
service listings depending on whether NFS is travelling over TCP or UDP. Linux
systems use UDP by default unless TCP is explicitly requested; however other OSes
such as Solaris default to TCP.

If you do not at least see a line that says portmapper, a line that says nfs, and a line
that says mountd then you will need to backtrack and try again to start up the daemons
(see Section 7, Troubleshooting, if this still doesn't work).

If you do see these services listed, then you should be ready to set up NFS clients to
access files from your server.
3.5. Making changes to /etc/exports later on

If you come back and change your /etc/exports file, the changes you make may not take effect immediately. You should run the command exportfs -ra to force nfsd to re-read the /etc/exports file. If you can't find the exportfs command, then you can kill nfsd with the -HUP flag (see the man pages for kill for details).

If that still doesn't work, don't forget to check hosts.allow to make sure you haven't forgotten to list any new client machines there. Also check the host listings on any firewalls you may have set up (see Section 7 and Section 6 for more details on firewalls and NFS).

4. Setting up an NFS Client


4.1. Mounting remote directories

Before beginning, you should double-check to make sure your mount program is new
enough (version 2.10m if you want to use Version 3 NFS), and that the client machine
supports NFS mounting, though most standard distributions do. If you are using a 2.2
or later kernel with the /proc filesystem you can check the latter by reading the
file /proc/filesystems and making sure there is a line containing nfs. If not,
typing insmod nfs may make it magically appear if NFS has been compiled as a
module; otherwise, you will need to build (or download) a kernel that has NFS
support built in. In general, kernels that do not have NFS compiled in will give a very
specific error when the mount command below is run.
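
A quick way to run this check from a root shell, assuming /proc is mounted: first look for a line containing nfs, and if there is none, try loading the module:

# grep nfs /proc/filesystems
# insmod nfs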

To begin using machine as an NFS client, you will need the portmapper running on
that machine, and to use NFS file locking, you will also
need rpc.statd and rpc.lockd running on both the client and the server. Most recent
distributions start those services by default at boot time; if yours doesn't, see Section
3.2 for information on how to start them up.

With portmap, lockd, and statd running, you should now be able to mount the remote directory from your server just the way you mount a local hard drive, with the mount command. Continuing our example from the previous section, suppose our server above is called master.foo.com, and we want to mount the /home directory on slave1.foo.com. Then, all we have to do, from the root prompt on slave1.foo.com, is type:
# mount master.foo.com:/home /mnt/home

and the directory /home on master will appear as the directory /mnt/home on slave1. (Note that this assumes we have created the directory /mnt/home as an empty mount point beforehand.)
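
Putting the two steps together (the mkdir is only needed the first time):

# mkdir -p /mnt/home
# mount master.foo.com:/home /mnt/home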

If this does not work, see the Troubleshooting section (Section 7).

You can get rid of the file system by typing

# umount /mnt/home

just like you would for a local file system.

4.2. Getting NFS File Systems to Be Mounted at Boot Time

NFS file systems can be added to your /etc/fstab file the same way local file systems can, so that they mount when your system starts up. The only difference is that the file system type will be set to nfs and the dump and fsck order (the last two entries) will have to be set to zero. So for our example above, the entry in /etc/fstab would look like:
# device mountpoint fs-type options dump fsckorder
...
master.foo.com:/home /mnt/home nfs rw 0 0
...

See the man pages for fstab if you are unfamiliar with the syntax of this file. If you
are using an automounter such as amd or autofs, the options in the corresponding
fields of your mount listings should look very similar if not identical.

At this point you should have NFS working, though a few tweaks may still be
necessary to get it to work well. You should also read Section 6 to be sure your setup
is reasonably secure.
4.3. Mount options
4.3.1. Soft vs. Hard Mounting

There are some options you should consider adding at once. They govern the way the NFS client handles a server crash or network outage. One of the cool things about NFS is that it can handle this gracefully, provided you set up the clients right. There are two distinct failure modes:

soft

If a file request fails, the NFS client will report an error to the process on the client machine requesting the file access. Some programs can handle this with composure; most won't. We do not recommend using this setting; it is a recipe for corrupted files and lost data. You should especially not use this for mail disks --- if you value your mail, that is.

hard

The program accessing a file on a NFS mounted file system will hang when the
server crashes. The process cannot be interrupted or killed (except by a "sure
kill") unless you also specify intr. When the NFS server is back online the
program will continue undisturbed from where it was. We recommend
using hard,intr on all NFS mounted file systems.

Picking up from the previous example, the fstab entry would now look like:
# device mountpoint fs-type options dump fsckord
...
master.foo.com:/home /mnt/home nfs rw,hard,intr 0 0
...

4.3.2. Setting Block Size to Optimize Transfer Speeds

The rsize and wsize mount options specify the size of the chunks of data that the
client and server pass back and forth to each other.

The defaults may be too big or too small; there is no size that works well on all or most setups. On the one hand, some combinations of Linux kernels and network cards (largely on older machines) cannot handle blocks that large. On the other hand, if they can handle larger blocks, a bigger size might be faster.
Getting the block size right is an important factor in performance and is a must if you
are planning to use the NFS server in a production environment. See Section 5 for
details.
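
For example, a commonly tried starting point is 8192-byte blocks; whether this helps depends on your kernel, network card, and server, so measure transfer speeds before and after. The mount below reuses the names from Section 4.1:

# mount -o rw,hard,intr,rsize=8192,wsize=8192 master.foo.com:/home /mnt/home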
