If you are new to Linux, we recommend that you first complete the previous tutorials in the
LPI certification 101 and 102 exam prep series before continuing:
• 101 series, Part 1: Linux fundamentals
• 101 series, Part 2: Basic administration
• 101 series, Part 3: Intermediate administration
• 101 series, Part 4: Advanced administration
• 102 series, Part 1: Compiling sources and managing packages
• 102 series, Part 2: Compiling and configuring the kernel
• 102 series, Part 3: Networking
The data going between the telnet client and server isn't encrypted, and can thus be read by
anyone snooping the network. Not only that, but authentication (the sending of your
password to the server) is performed in plain text, making it a trivial matter for someone
capturing your network data to get instant access to your password. In fact, using a network
sniffer, it's possible for someone to reconstruct your entire telnet session, seeing everything
on the screen that you saw!
Obviously, these tools that were designed with the assumption that the network was secure
and unsniffable are inappropriate for today's distributed and public networks.
Secure shell
A better solution was needed, and that solution came in the form of a tool called secure shell,
or ssh. The most popular modern incarnation of this tool is available in the openssh
package, available for virtually every Linux distribution, not to mention many other systems.
What sets ssh apart from its insecure cousins is that it encrypts all communications between
the client and the server using strong encryption. This makes it extremely difficult for an
eavesdropper to monitor the communications between the client and server. In this way,
ssh provides its service as advertised -- it is a secure shell. In fact, ssh has excellent
"all-round" security -- even authentication takes advantage of encryption and various key
exchange strategies to ensure that the user's password cannot be easily grabbed by anyone
monitoring data being transmitted over the network.
In this age of the ubiquitous Internet, ssh is a valuable tool for enhancing network
security when using Linux systems. Most security-savvy network admins discourage -- or
disallow outright -- the use of telnet and rsh on their systems, because ssh is such a
capable and secure replacement.
Using ssh
Generally, most distributions' openssh packages can be used without any manual
configuration. After installing openssh, you'll have a couple of binaries. One is, of course,
ssh -- the secure shell client that can be used to connect to any system running sshd, the
secure shell server. To use ssh, you typically start a session by typing something like:
$ ssh drobbins@remotebox
Above, I instruct ssh to log in as the "drobbins" user on remotebox. As with telnet,
you'll be prompted for a password; after entering it, you'll be presented with a new login
session on the remote system.
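Besides interactive logins, ssh can also run a single command on the remote system and then
exit. For example (the command here is just an illustration), to check how long remotebox
has been up, I could type:
$ ssh drobbins@remotebox uptime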
Starting sshd
If you want to allow ssh connections to your machine, you'll need to start the sshd server.
To start the sshd server, you would typically use the rc-script that came with the openssh
package, typing something like:
# /etc/init.d/sshd start
or
# /etc/rc.d/init.d/sshd start
If necessary, you can adjust configuration options for sshd by modifying the
/etc/ssh/sshd_config file. For more information on the various options available, type man
sshd.
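For example, a few commonly adjusted sshd_config settings look like this (the values shown
are just illustrations, not recommendations):
Port 22
PermitRootLogin no
PasswordAuthentication yes
After editing /etc/ssh/sshd_config, restart sshd (for instance, by stopping and starting it
with the rc-script shown above) so that the new settings take effect.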
Secure copy
The openssh package also comes with a handy tool called scp, which stands for "secure
copy". You can use this command to securely copy files to and from various systems on the
network. For example, if I wanted to copy ~/foo.txt to my home directory on remotebox, I
could type:
$ scp ~/foo.txt drobbins@remotebox:
After I enter my password for remotebox, the copy will be performed. Or, if I
wanted to copy a file called bar.txt in remotebox's /tmp directory to my current working
directory on my local system, I could type:
$ scp drobbins@remotebox:/tmp/bar.txt .
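scp can also copy entire directory trees when given the -r option. For example (the
directory name here is purely illustrative), to copy a local directory into my home
directory on remotebox, I could type:
$ scp -r ~/projects drobbins@remotebox: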
Section 3. NFS
Introducing NFS
The Network File System (NFS) is a technology that allows the transparent sharing of files
between UNIX and Linux systems connected via a Local Area Network, or LAN. NFS has
been around for a long time; it's well known and used extensively in the Linux and UNIX
worlds. In particular, NFS is often used to share home directories among many machines on
the network, providing a consistent environment for a user when he or she logs in to a
machine (*any* machine) on the LAN. Thanks to NFS, it's possible to mount remote
filesystem trees and have them fully integrated into a system's local filesystem. NFS'
transparency and maturity is what makes it such a useful and popular choice for network file
sharing under Linux.
NFS basics
To share files using NFS, you first need to set up an NFS server. This NFS server can then
"export" filesystems. When a filesystem is exported, it means that it is made available to be
accessed by other systems on the LAN. Then, any authorized system that is also set up as
an NFS client can mount this exported filesystem using the standard "mount" command.
After the mount completes, the remote filesystem is "grafted in" in the same way that a
locally-mounted filesystem (like /mnt/cdrom) would be after it is mounted. The fact that all of
the file data is being read from the NFS server rather than from a disk is simply not an issue
to any standard Linux application. Everything simply works.
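For example, an NFS client that should mount a server's exported /home tree at every boot
might have a line like this in its /etc/fstab (the hostname, paths, and mount options here
are purely illustrative):
inventor:/home   /home   nfs   rw,hard,intr   0 0
Once such a line is in place, the filesystem can also be mounted on demand by typing
mount /home.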
Attributes of NFS
Shared NFS filesystems have a number of interesting attributes. The first "nifty attribute" is a
result of NFS' stateless design. Because client access to the NFS server is stateless in
nature, it's possible for the NFS server to reboot without causing client applications to crash
or fail. All access to remote NFS files will simply "pause" until the server comes back online.
Also, because of NFS' stateless design, NFS servers can handle large numbers of clients
without any additional overhead besides that of transferring the actual file data over the
network. In other words, NFS performance is dependent on the amount of NFS data being
transferred over the network, rather than the number of machines that happen to be
requesting said data.
Securing NFS
It's important to mention that NFS versions 2 and 3 have some very clear security limitations.
They were designed to be used in a specific environment -- a secure, trusted LAN. In
particular, NFS 2 and 3 were designed to be used on a LAN where "root" access to the
machines is limited to administrators. Due to the design of NFS 2 and NFS 3, if a
malicious user has "root" access to a machine on your LAN, he or she will be capable of
bypassing NFS security and will very likely be able to access or even modify files on the NFS
server that he or she wouldn't otherwise be able to touch. For this reason, NFS should not
be deployed casually. If you're going to use NFS on your LAN, great -- but set up a firewall
first. Make sure that people outside your LAN won't be able to access your NFS server.
Then, make sure that your internal LAN is relatively secure, and that you are fully aware of all
the hosts participating in your LAN. Once your LAN's security has been thoroughly reviewed
and (if necessary) improved, you're ready to safely use NFS (see Part 7 of this tutorial series
for more on this).
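As a rough sketch of that kind of firewalling (the eth0 external interface and the
port-based rules here are assumptions for illustration, not a complete recipe), you could
use iptables to drop portmapper and NFS traffic arriving from outside:
# iptables -A INPUT -i eth0 -p tcp --dport 111 -j DROP
# iptables -A INPUT -i eth0 -p udp --dport 111 -j DROP
# iptables -A INPUT -i eth0 -p tcp --dport 2049 -j DROP
# iptables -A INPUT -i eth0 -p udp --dport 2049 -j DROP
Keep in mind that helper daemons such as mountd register with the portmapper on dynamically
assigned ports, so a default-deny policy on your external interface is a more robust
approach than filtering individual ports.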
Now that our NFS server has NFS support in the kernel, it's time to set up an /etc/exports
file. The /etc/exports file will describe the local filesystems that will be made available for
export, as well as which hosts will be able to access these filesystems, and whether they will
be exported as read/write or read-only. It will also allow us to specify other options that
control NFS behavior.
But before we look at the format of the /etc/exports file, a big fat implementation warning is in
order! The NFS implementation in the Linux kernel only allows the export of one local
directory per filesystem. This means that if both /usr and /home are on the same ext3
filesystem (using /dev/hda6, for example), then you can't have both /usr and /home export
lines in /etc/exports. If you try to add these lines, you'll see errors when your
/etc/exports file gets reread (which will happen if you type exportfs -ra after your NFS
server is up and running).
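With that caveat in mind, here is what a simple /etc/exports file might look like. (The
listing below is a sketch that matches the description that follows; the exact comment text
and whitespace are illustrative.)
# /etc/exports: NFS filesystems that this server exports. See exports(5).
/           192.168.1.9(rw,no_root_squash)
/mnt/backup 192.168.1.9(rw,no_root_squash)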
As you can see, the first line in my /etc/exports file is a comment. On the second line, I select
my root ("/") filesystem for export. Note that while this exports everything under "/", it will not
export any other local filesystem. For example, if my NFS server has a CD-ROM mounted at
/mnt/cdrom, the contents of the CDROM will not be available unless they are exported
explicitly in /etc/exports. Now, notice the third line in my /etc/exports file. On this line, I export
/mnt/backup; as you might guess, /mnt/backup is on a separate filesystem from /, and it
contains a backup of my system. Each line also has a "192.168.1.9(rw,no_root_squash)" on
it. This information tells nfsd to only make these exports available to the NFS client with the
IP address of 192.168.1.9. It also tells nfsd to make these filesystems writeable as well as
readable by NFS client systems, and (via the "no_root_squash" option) instructs the NFS
server to allow the remote NFS client's superuser account true "root" access to the
filesystems.
You aren't limited to exporting to a single IP address; you can also export to an entire
network of hosts. Here's a sample /etc/exports line that does just that:
/ 192.168.1.1/24(rw,no_root_squash)
In this sample /etc/exports file, I use a host mask of /24 to mask out the last eight bits in the
IP address I specify. It's also very important that there is no space between the IP address
specification and the "(", or NFS will interpret your information incorrectly. And, as you might
guess, there are other options that one can specify besides "rw" and "no_root_squash"; type
"man exports" for a complete list.
NFS is an RPC-based service and depends on the portmapper, so the portmapper (as well as
the NFS daemons themselves) must be running before your exports can be used; most
distributions provide rc-scripts for this, much like the sshd rc-script shown earlier. You
can see which RPC services have registered with the portmapper by typing rpcinfo -p:
# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 32802 status
100024 1 tcp 46049 status
# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 32768 status
100024 1 tcp 32768 status
You can also perform this check from a remote system by typing rpcinfo -p myhost, as
follows:
# rpcinfo -p sidekick
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 32768 status
100024 1 tcp 32768 status
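To mount one of inventor's exported filesystems on the client, you use the standard mount
command. For example, on sidekick (and assuming the /mnt/nfs mountpoint already exists),
you could type something like:
# mount inventor:/ /mnt/nfs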
Inventor's root filesystem will now be mounted on sidekick at /mnt/nfs; you should now be
able to type cd /mnt/nfs and look around inside and see inventor's files. Again, note that if
inventor's /home tree is on another filesystem, then /mnt/nfs/home will not contain anything --
another mount (as well as another entry in inventor's /etc/exports file) will be required to
access that data.
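Similarly, to mount inventor's /usr tree on sidekick (again assuming a suitable local
mountpoint already exists), you could type something like:
# mount inventor:/usr /mnt/usr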
Inventor's /usr tree will now be NFS mounted to the pre-existing /mnt/usr directory. It's
important to note again that inventor's /etc/exports file didn't need to explicitly export /usr; it
was included "for free" in our "/" export line.
The best thing you can do to improve your NFS skills is to try setting up your own NFS 3
server and client(s) -- the experience will be invaluable. The second-best thing you can do is
to read the Linux NFS HOWTO, which is quite good.
Learn more about what ssh is capable of in the developerWorks series on ssh: Part 1, Part 2,
and Part 3. Also be sure to visit the home of openssh at http://www.openssh.com, which is
an excellent place to continue your study of this important tool.
Samba is another important networked file-sharing technology. For more information about
Samba, read the developerWorks Samba articles: the first "Key concepts" article, Samba
installation article, and Samba configuration article.
Once you're up to speed on Samba, spend some time studying the Linux DNS HOWTO. The
LPI 102 exam is also going to expect that you have some familiarity with Sendmail. Red Hat
has a good Red Hat Sendmail HOWTO that will help you get started.
The Linux Network Administrators guide, available from Linuxdoc.org's "Guides" section, is a
good complement to this series of tutorials -- give it a read! You may also find Eric S.
Raymond's Unix and Internet Fundamentals HOWTO to be helpful.
In the Bash by example article series on developerWorks, learn how to use bash
programming constructs to write your own bash scripts. This series (particularly parts 1 and
2) is excellent additional preparation for the LPI exam.
The Technical FAQ for Linux Users by Mark Chapman is a 50-page in-depth list of
frequently asked Linux questions, along with detailed answers. The FAQ itself is in PDF
(Acrobat) format. If you're a beginning or intermediate Linux user, you really owe it to yourself
to check this FAQ out. The Linux glossary for Linux users, also from Mark, is excellent as well.
If you're not too familiar with the vi editor, you should check out Daniel's tutorial on Vi. This
developerWorks tutorial will give you a gentle yet fast-paced introduction to this powerful text
editor. Consider this must-read material if you don't know how to use vi.
For more information on the Linux Professional Institute, visit the LPI home page.
Your feedback
We look forward to getting your feedback on this tutorial. Additionally, you are welcome to
contact the lead author, Daniel Robbins, directly at drobbins@gentoo.org.
Colophon
This tutorial was written entirely in XML, using the developerWorks Toot-O-Matic tutorial
generator. The open source Toot-O-Matic tool is an XSLT stylesheet and several XSLT
extension functions that convert an XML file into a number of HTML pages, a zip file, JPEG
heading graphics, and two PDF files. Our ability to generate multiple text and binary formats
from a single source file illustrates the power and flexibility of XML. (It also saves our
production team a great deal of time and effort.)