LFS201 Labs - V2020 06 11
Essentials of System Administration
Version 2020-06-11
The training materials provided or developed by The Linux Foundation in connection with the training services are protected
by copyright and other intellectual property rights.
Open source code incorporated herein may have other copyright holders and is used pursuant to the applicable open source
license.
The training materials are provided for individual use by participants in the form in which they are provided. They may not be
copied, modified, distributed to non-participants or used to provide training to others without the prior written consent of The
Linux Foundation.
No part of this publication may be reproduced, photocopied, stored on a retrieval system, or transmitted without express prior
written consent.
Published by:
No representations or warranties are made with respect to the contents or use of this material, and any express or implied
warranties of merchantability or fitness for any particular purpose are specifically disclaimed.
Although third-party application software packages may be referenced herein, this is for demonstration purposes only and
shall not constitute an endorsement of any of these software applications.
Linux is a registered trademark of Linus Torvalds. Other trademarks within this course material are the property of their
respective owners.
If there are any questions about proper and fair use of the material herein, please contact:
training@linuxfoundation.org
Contents
1 Introduction
1.1 Labs
3 Processes
3.1 Labs
4 Signals
4.1 Labs
6 RPM
6.1 Labs
7 dpkg
7.1 Labs
8 yum
8.1 Labs
9 zypper
9.1 Labs
10 APT
10.1 Labs
11 System Monitoring
11.1 Labs
12 Process Monitoring
12.1 Labs
14 I/O Monitoring and Tuning
14.1 Labs
15 I/O Scheduling **
15.1 Labs
17 Disk Partitioning
17.1 Labs
22 Encrypting Disks
22.1 Labs
24 RAID **
24.1 Labs
26 Kernel Modules
26.1 Labs
28 Virtualization Overview
28.1 Labs
36 Firewalls
36.1 Labs
38 GRUB
38.1 Labs
Introduction
1.1 Labs
Thus, the sensible procedure is to configure things such that single commands may be run with superuser privilege, by using
the sudo mechanism. With sudo the user only needs to know their own password and never needs to know the root password.
If you are using a distribution such as Ubuntu, you may not need to do this lab to get sudo configured properly for the course.
However, you should still make sure you understand the procedure.
To check if your system is already configured to let the user account you are using run sudo, just do a simple command like:
$ sudo ls
You should be prompted for your user password and then the command should execute. If instead you get an error message,
you need to execute the following procedure.
Launch a root shell by typing su and then giving the root password, not your user password.
On all recent Linux distributions you should navigate to the /etc/sudoers.d subdirectory and create a file, usually with the
name of the user to whom root wishes to grant sudo access. However, this convention is not actually necessary as sudo will
scan all files in this directory as needed. The file can simply contain:
An older practice (which certainly still works) is to add such a line at the end of the file /etc/sudoers. It is best to do so using
the visudo program, which is careful about making sure you use the right syntax in your edit.
You probably also need to set proper permissions on the file by typing:
(Note some Linux distributions may require 400 instead of 440 for the permissions.)
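For example, assuming you want to grant access to a user named student (a hypothetical name; substitute the actual account), the file /etc/sudoers.d/student could contain just the single line:

```
student ALL=(ALL)     ALL
```

The permissions can then be set with chmod 440 /etc/sudoers.d/student (or 400 where required, as noted above).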
After you have done these steps, exit the root shell by typing exit and then try to do sudo ls again.
There are many other ways an administrator can configure sudo, including specifying only certain permissions for certain
users, limiting searched paths etc. The /etc/sudoers file is very well self-documented.
However, there is one more setting we highly recommend you do, even if your system already has sudo configured. Most
distributions establish a different path for finding executables for normal users as compared to the root user. In particular, the
directories /sbin and /usr/sbin are not searched, since sudo inherits the PATH of the invoking user, not that of the root user.
Without this, in this course we would have to constantly remind you of the full path to many system administration utilities;
any enhancement to security is probably not worth the extra typing and the effort of figuring out which directories these programs are in.
Consequently, we suggest you add the following line to the .bashrc file in your home directory:
PATH=$PATH:/usr/sbin:/sbin
If you log out and then log in again (you don’t have to reboot) this will be fully effective.
Linux Filesystem Tree Layout
2.1 Labs
$ du --help
Solution 2.1
To obtain a full list of directories under / along with their size:
4.3M /home
16K /lost+found
39M /etc
4.0K /srv
3.6M /root
178M /opt
138M /boot
6.1G /usr
1.1G /var
16K /mnt
4.0K /media
869M /tmp
8.4G /
• --max-depth=1: Just go down one level from / and sum up everything recursively underneath in the tree.
• -x Stay on one filesystem; don’t look at directories that are not on the / partition. In this case that means ignore:
/dev /proc /run /sys
because these are pseudo-filesystems which exist in memory only; they are just empty mount points when the system
is not running. Because this was done on a RHEL system, the following mount points are also not followed:
/bin /sbin /lib /lib64
since they are just symbolically linked to their counterparts under /usr.
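Putting these flags together, the command that produced the listing above was presumably something like the following (run as root, since many directories are not readable by normal users; the exact invocation was not shown in the original):

```shell
# -x: stay on the / filesystem   -h: human-readable sizes
# --max-depth=1: report one level below / and the total for / itself
du -xh --max-depth=1 / 2>/dev/null
```
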
Please Note
Exactly what you see in this exercise will depend on your kernel version, so you may not match the output shown
precisely.
1. As root, cd into /proc and do a directory listing. This should display a number of files and directories:
$ cd /proc
$ ls -F
1/ 128/ 1510/ 20/ 2411/ 30895/ 53/ 6925/ 802/ 951/ kmsg
10/ 129/ 1511/ 2015/ 2425/ 31/ 54/ 7/ 81/ 952/ kpagecgroup
1002/ 13/ 1512/ 2022/ 2436/ 31449/ 55/ 70/ 813/ 957/ kpagecount
1007/ 130/ 1513/ 2023/ 2444/ 32/ 56/ 702/ 814/ 97/ kpageflags
10540/ 131/ 1514/ 20300/ 2451/ 33/ 58/ 709/ 816/ 9742/ loadavg
10590/ 13172/ 152/ 20354/ 2457/ 34/ 585/ 71/ 817/ 98/ locks
10798/ 132/ 15552/ 20380/ 2489/ 35/ 59/ 718/ 82/ 99/ meminfo
10805/ 133/ 15663/ 20388/ 25/ 36/ 60/ 719/ 83/ 9923/ misc
10806/ 134/ 15737/ 20392/ 2503/ 37/ 61/ 72/ 834/ acpi/ modules
10809/ 135/ 159/ 20396/ 2504/ 374/ 6193/ 721/ 835/ asound/ mounts@
10810/ 136/ 15981/ 2086/ 2531/ 379/ 62/ 723/ 84/ buddyinfo mtrr
10813/ 137/ 16/ 2090/ 2546/ 38/ 63/ 725/ 841/ bus/ net@
10894/ 138/ 162/ 211/ 2549/ 380/ 634/ 727/ 842/ cgroups pagetypeinfo
10925/ 1384/ 1632/ 22/ 2562/ 40/ 64/ 73/ 85/ cmdline partitions
10932/ 1385/ 1636/ 2205/ 25794/ 41/ 65/ 7300/ 857/ config.gz sched_debug
10934/ 1387/ 166/ 2209/ 26/ 42/ 662/ 74/ 86/ consoles scsi/
10935/ 139/ 1670/ 2212/ 2610/ 43/ 663/ 757/ 864/ cpuinfo self@
10941/ 1390/ 17/ 2232/ 26108/ 44/ 665/ 758/ 867/ crypto slabinfo
10983/ 1393/ 17271/ 2238/ 2619/ 4435/ 666/ 76/ 87/ devices softirqs
10998/ 14/ 17361/ 2296/ 2624/ 45/ 67/ 761/ 88/ diskstats stat
11/ 140/ 1793/ 2298/ 2627/ 46/ 670/ 762/ 881/ dma swaps
11047/ 1410/ 18/ 23/ 2644/ 468/ 671/ 765/ 886/ driver/ sys/
1105/ 1415/ 1831/ 23042/ 2645/ 47/ 673/ 766/ 887/ execdomains sysrq-trigger
1121/ 1429/ 18880/ 2344/ 2679/ 470/ 674/ 768/ 888/ fb sysvipc/
1123/ 1437/ 18903/ 2348/ 27/ 484/ 678/ 769/ 889/ filesystems thread-self@
1135/ 1445/ 19/ 2353/ 2706/ 49/ 679/ 77/ 89/ fs/ timer_list
11420/ 146/ 19392/ 2354/ 2762/ 492/ 68/ 771/ 9/ interrupts timer_stats
11499/ 1463/ 19488/ 2365/ 28/ 493/ 682/ 78/ 90/ iomem tty/
11515/ 147/ 1954/ 23683/ 2858/ 5/ 683/ 79/ 92/ ioports uptime
11530/ 1476/ 1963/ 2370/ 28730/ 50/ 686/ 793/ 921/ irq/ version
1163/ 148/ 19727/ 2372/ 28734/ 51/ 687/ 794/ 928/ kallsyms vmallocinfo
1164/ 1485/ 19734/ 2374/ 29/ 510/ 69/ 8/ 930/ kcore vmstat
12/ 149/ 19984/ 24/ 2973/ 514/ 690/ 80/ 931/ keys zoneinfo
127/ 15/ 2/ 2406/ 3/ 52/ 691/ 801/ 944/ key-users
Notice many of the directory names are numbers; each corresponds to a running process and the name is the process
ID. An important subdirectory we will discuss later is /proc/sys, under which many system parameters can be examined
or modified.
• /proc/cpuinfo:
• /proc/meminfo:
• /proc/mounts:
• /proc/swaps:
• /proc/version:
• /proc/partitions:
• /proc/interrupts:
The names give a pretty good idea about what information they reveal.
Note that this information is not being constantly updated; it is obtained only when one wants to look at it.
3. Take a peek at any random process directory (if it is not a process you own some of the information might be limited
unless you use sudo):
$ ls -F 4435
attr/ coredump_filter gid_map mountinfo oom_score_adj sessionid syscall
autogroup cpuset io mounts pagemap setgroups task/
auxv cwd@ limits mountstats personality smaps timerslack_ns
cgroup environ loginuid net/ projid_map stack uid_map
clear_refs exe@ map_files/ ns/ root@ stat wchan
cmdline fd/ maps oom_adj sched statm
comm fdinfo/ mem oom_score schedstat status
Take a look at some of the fields in here such as: cmdline, cwd, environ, mem, and status
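For instance, using the shell's own entry via /proc/self (process numbers vary from system to system):

```shell
# cmdline holds the NUL-separated argument list; make it readable
tr '\0' ' ' < /proc/self/cmdline; echo
# status holds human-readable process information, including the state
grep '^State' /proc/self/status
```
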
Processes
3.1 Labs
$ help ulimit
1. Start a new shell by typing bash (or opening a new terminal) so that your changes are only effective in the new shell.
View the current limit on the number of open files and explicitly view the hard and soft limits.
2. Set the limit to the hard limit value and verify if it worked.
3. Set the limit to a lower value, such as 2048, and verify that it worked.
4. Try to set the limit back to the previous value. Did it work?
Solution 3.1
1. $ bash
$ ulimit -n
1024
$ ulimit -S -n
1024
$ ulimit -H -n
4096
2. $ ulimit -n hard
$ ulimit -n
4096
3. $ ulimit -n 2048
$ ulimit -n
2048
4. $ ulimit -n 4096
bash: ulimit: open files: cannot modify limit: Operation not permitted
$ ulimit -n
2048
System V IPC comprises three mechanisms:
1. Shared Memory Segments
2. Semaphores
3. Message Queues
More modern programs tend to use POSIX IPC methods for all three of these mechanisms, but there are still plenty of System
V IPC applications found in the wild.
$ ipcs
Note almost all of the currently running shared memory segments have a key of 0 (also known as IPC_PRIVATE) which means
they are only shared between processes in a parent/child relationship. Furthermore, all but one are marked for destruction
when there are no further attachments.
One can gain further information about the processes that have created the segments and last attached to them with:
$ ipcs -p
Thus, by doing:
$ ipcs
....
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
....
0x00000000 622601 coop 600 2097152 2 dest
0x0000001a 13303818 coop 666 8196 0
....
shows a shared memory segment with no attachments and not marked for destruction. Thus it might persist forever, leaking
memory if no subsequent process attaches to it.
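Such a stale segment can be deleted by its shmid with ipcrm. As a safe sketch (creating a throwaway segment with ipcmk rather than touching a real application's segment):

```shell
# create a private 4 KB segment and capture its shmid
shmid=$(ipcmk -M 4096 | grep -o '[0-9]\+' | head -n 1)
ipcs -m -i "$shmid"    # inspect it
ipcrm -m "$shmid"      # remove it, as you would for a leaked segment
```
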
Signals
4.1 Labs
signals.c
1 /*
2 * Examining Signal Priorities.
3 *
4 * In the below, do not send or handle either of the signals SIGKILL
5 * or SIGSTOP.
6 *
7 * Write a C program that includes a signal handler that can handle
8 * any signal. The handler should avoid making any system calls (such
9 * as those that might occur doing I/O).
10 *
11 * The handler should simply store the sequence of signals as they
12 * come in, and update a counter array for each signal that indicates
13 * how many times the signal has been handled.
14 *
15 * The program should begin by suspending processing of all signals
16 * (using sigprocmask()).
17 *
18 * It should then install the new set of signal handlers (which can be
19 * the same for all signals, registering them with the sigaction()
20 * interface).
21 *
22 * Then the program should send every possible signal to itself multiple
23 * times, using the raise() function.
24 *
58 #include <stdio.h>
59 #include <unistd.h>
60 #include <signal.h>
61 #include <stdlib.h>
62 #include <string.h>
63 #include <pthread.h>
64
65 #define NUMSIGS 64
66
92 /*
93 * Now, use sigaction to create references to local signal
93 * handlers and raise the signal to myself
95 */
96
97 printf
98 ("\nInstalling signal handler and Raising signal for signal number:\n\n");
99 for (signum = 1; signum <= NUMSIGS; signum++) {
100 if (signum == SIGKILL || signum == SIGSTOP || signum == 32
101 || signum == 33) {
102 printf(" --");
103 } else {
104 sigaction(signum, &sigact, &oldact);
105 /* send the signal 3 times! */
106 rc = raise(signum);
107 rc = raise(signum);
108 rc = raise(signum);
109 if (rc) {
110 printf("Failed on Signal %d\n", signum);
111 } else {
112 printf("%4d", signum);
113 }
114 }
115 if (signum % 16 == 0)
116 printf("\n");
117 }
118 fflush(stdout);
119
145 sig_count[sig]++;
146 signumbuf[line] = sig;
147 sigcountbuf[line] = sig_count[sig];
148 line++;
149 }
• Does not send the signals SIGKILL or SIGSTOP, which can not be handled and will always terminate a program.
• Stores the sequence of signals as they come in, and updates a counter array for each signal that indicates how many
times the signal has been handled.
• Begins by suspending processing of all signals and then installs a new set of signal handlers for all signals.
• Sends every possible signal to itself multiple times and then unblocks signal handling and the queued up signal handlers
will be called.
• If more than one of a given signal is raised while the process has blocked it, does the process receive it multiple times?
Does the behavior of real time signals differ from normal signals?
• Are all signals received by the process, or are some handled before they reach it?
One signal, SIGCONT (18 on x86) may not get through; can you figure out why?
Please Note
On some Linux distributions signals 32 and 33 can not be blocked and will cause the program to fail. Even though
system header files indicate SIGRTMIN=32, the command kill -l indicates SIGRTMIN=34.
Note that POSIX says one should use signal names, not numbers, which are allowed to be completely implementation
dependent.
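You can check the naming and numbering on your own system; the shell's kill builtin translates between them:

```shell
kill -l          # list every signal name with its number
kill -l SIGTERM  # translate a name to its number; prints 15 on Linux (in bash)
```
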
Package Management Systems
5.1 Labs
Let’s get a feel for how git works and how easy it is to use. For now we will just make our own local project.
1. First we create a working directory and then initialize git to work with it:
$ mkdir git-test
$ cd git-test
$ git init
2. Initializing the project creates a .git directory which will contain all the version control information; the main directories
included in the project remain untouched. The initial contents of this directory look like:
$ ls -l .git
total 40
drwxrwxr-x 2 coop coop 4096 Dec 30 13:59 branches/
-rw-rw-r-- 1 coop coop 92 Dec 30 13:59 config
-rw-rw-r-- 1 coop coop 58 Dec 30 13:59 description
-rw-rw-r-- 1 coop coop 23 Dec 30 13:59 HEAD
Later we will describe the contents of this directory and its subdirectories; for the most part they start out empty.
$ git status
On branch master
Initial commit
Changes to be committed:
(use "git rm --cached <file>..." to unstage)
Notice it is telling us that our file is staged but not yet committed.
This must be done for each new project unless you have it predefined in a global configuration file.
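The identity settings referred to here are made with git config; add --global to define them once for all projects. A sketch, using the example identity that appears in the log output below:

```shell
git init -q git-test 2>/dev/null; cd git-test   # or cd into your existing repository
git config user.name "A Genius"
git config user.email "a_genius@linux.com"
git config user.name    # verify the setting
```
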
6. Now let’s modify the file, and then see the history of differences:
If you do not specify an identifying message to accompany the commit with the -m option, you will jump into an editor
to put some content in. You must do this or the commit will be rejected. The editor chosen will be the one set in your
EDITOR environment variable, which can be overridden by setting GIT_EDITOR.
8. You can see your history with:
$ git log
commit eafad66304ebbcd6acfe69843d246de3d8f6b9cc
Author: A Genius <a_genius@linux.com>
Date: Wed Dec 30 11:07:19 2009 -0600
My initial commit
and you can see the information got in there. You will note the long hexadecimal string which is the commit number; it
is a 160-bit, 40-digit unique identifier. git cares about these beasts, not file names.
9. You are now free to modify the already existing file and add new files with git add. But they are only staged until you do
another git commit.
10. Now that was not so bad. But we have only scratched the surface.
RPM
6.1 Labs
This lab will work equally well on Red Hat and SUSE-based systems.
2. List information about the package including all the files it contains.
Solution 6.1
Note a fancier form that combines these two steps would be:
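The combined form itself was elided here; presumably it nests the file-owner query inside a full package query, along these lines (RPM-based systems only):

```shell
# -qf: find which package owns the file; -qil: print that package's info and file list
rpm -qil $(rpm -qf /etc/logrotate.conf)
```
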
3. $ rpm -V logrotate
..?...... /etc/cron.daily/logrotate
S.5....T. c /etc/logrotate.conf
On RedHat
4. On RHEL 8:
$ sudo rpm -e logrotate
error: Failed dependencies:
logrotate is needed by (installed) vsftpd-3.0.3-28.el8.x86_64
logrotate >= 3.5.2 is needed by (installed) rsyslog-8.37.0-13.el8.x86_64
On openSUSE
On OpenSUSE-Leap 15.1:
$ sudo rpm -e logrotate
error: Failed dependencies:
logrotate is needed by (installed) xdm-1.1.11-lp151.13.2.x86_64
logrotate is needed by (installed) wpa_supplicant-2.6-lp151.4.4.x86_64
logrotate is needed by (installed) chrony-3.2-lp151.8.6.x86_64
logrotate is needed by (installed) net-snmp-5.7.3-lp151.7.5.x86_64
logrotate is needed by (installed) syslog-service-2.0-lp151.3.3.noarch
logrotate is needed by (installed) vsftpd-3.0.3-lp151.6.3.x86_64
logrotate is needed by (installed) libvirt-daemon-5.1.0-lp151.7.6.1.x86_64
logrotate is needed by (installed) iscsiuio-0.7.8.2-lp151.13.6.1.x86_64
logrotate is needed by (installed) mcelog-1.60-lp151.2.3.1.x86_64
Note that the exact package dependency tree depends on both the distribution and choice of installed software.
This lab will work equally well on Red Hat and SUSE-based systems.
1. Backup the contents of /var/lib/rpm as the rebuild process will overwrite the contents. If you neglect to do this and
something goes wrong you are in serious trouble.
3. Compare the new contents of the directory with the backed up contents; don’t examine the actual file contents as they
are binary data, but note the number and names of the files.
4. Get a listing of all rpms on the system. You may want to compare this list with one generated before you actually do the
rebuild procedure. If the query command worked, your new database files should be fine.
5. Compare again the two directory contents. Do they have the same files now?
6. You could delete the backup (probably about 100 MB in size) but you may want to keep it around for a while to make
sure your system is behaving properly before trashing it.
Solution 6.2
V 2020-06-11 © Copyright the Linux Foundation 2020. All rights reserved.
1. $ cd /var/lib
$ sudo cp -a rpm rpm_BACKUP
3. $ ls -l rpm rpm_BACKUP
5. $ ls -l rpm rpm_BACKUP
6. Probably you should not do this until you are sure the system is fine!
$ sudo rm -rf rpm_BACKUP
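For reference, the commands for the elided steps 2 and 4 are the standard database rebuild and query (a sketch; run on an RPM-based system):

```shell
sudo rpm --rebuilddb    # step 2: rebuild the RPM database in /var/lib/rpm
rpm -qa | sort | head   # step 4: list installed packages to verify the new database
```
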
dpkg
7.1 Labs
2. List information about the package including all the files it contains.
Solution 7.1
1. $ dpkg -S /etc/logrotate.conf
logrotate: /etc/logrotate.conf
2. $ dpkg -L logrotate
...
3. $ dpkg -V logrotate
yum
8.1 Labs
1. Check to see if there are any available updates for your system.
3. List all installed kernel-related packages, and list all installed or available ones.
4. Install the httpd-devel package, or anything else you might not have installed yet. Doing a simple:
$ sudo yum list
will let you see a complete list; you may want to give a wildcard argument to narrow the list.
Solution 8.1
Try the commands you used above both as root and as a regular user. Do you notice any difference?
Solution 8.2
Please Note
Depending on your distribution version, you may get some permission errors if you do not use sudo with the following
commands, even though we are just getting information.
$ sudo yum search bash
$ sudo yum list bash
$ sudo yum info bash
$ sudo yum deplist bash
Please Note
On RHEL you may get some permission errors if you don’t use sudo with some of the following commands, even when
we are just getting information.
1. Use the following command to list all package groups available on your system:
$ yum grouplist
2. Identify the Backup Client group and generate the information about this group using the command
$ yum groupinfo "Backup Client"
3. Install using:
4. Identify a package group that’s currently installed on your system and that you don’t need. Remove it using yum
groupremove as in:
$ sudo yum groupremove "Backup Client"
Note you will be prompted to confirm removal so you can safely type the command to see how it works.
You may find that the groupremove does not remove everything that was installed; whether this is a bug or a feature
can be discussed.
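For completeness, the install command elided in step 3 is presumably the group analogue of the remove just shown:

```shell
sudo yum groupinstall "Backup Client"
```
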
“Webmin is a web-based interface for system administration for Unix. Using any modern web browser, you can setup user
accounts, Apache, DNS, file sharing and much more. Webmin removes the need to manually edit Unix configuration files like
/etc/passwd, and lets you manage a system from the console or remotely.”
We are going to create a repository for installation and upgrade. While we could simply go to the download page and get the
current rpm, that would not automatically give us any upgrades.
1. Create a new repository file called webmin.repo in the /etc/yum.repos.d directory. It should contain the following:
webmin.repo
[Webmin]
name=Webmin Distribution Neutral
baseurl=http://download.webmin.com/download/yum
mirrorlist=http://download.webmin.com/download/yum/mirrorlist
enabled=1
gpgcheck=0
(Note you can also cut and paste the contents from http://www.webmin.com/download.html.)
2. Install the webmin package.
$ sudo yum install webmin
zypper
9.1 Labs
On openSUSE
To do these labs you need to have access to a system that is zypper-based, such as SUSE, or openSUSE.
1. Check to see if there are any available updates for your system.
4. List all installed kernel-related packages, and list all installed or available ones.
5. Install the apache2-devel package, or anything else you might not have installed yet. (Note httpd is apache2 on SUSE
systems.) Doing a simple:
$ sudo zypper search
will let you see a complete list; you may want to give a wildcard argument to narrow the list.
Solution 9.1
1. $ zypper list-updates
3. $ zypper repos
Try the commands you used above both as root and as a regular user. Do you notice any difference?
Solution 9.2
Without the -d option only packages with bash in their actual name are reported. You may have to do zypper info on
the package to see where bash is mentioned.
2. $ zypper info --requires bash
will give a list of the packages bash requires. Perhaps the easiest way to see what depends on having bash installed is to do
$ sudo zypper remove --dry-run bash
For this exercise bash is a bad choice since it is so integral to the system; you really can’t remove it anyway.
APT
10.1 Labs
2. List all installed kernel-related packages, and list all installed or available ones.
3. Install the apache2-dev package, or anything else you might not have installed yet. Doing a simple:
$ apt-cache pkgnames
will let you see a complete list; you may want to give a wildcard argument to narrow the list.
Solution 10.1
1. First synchronize the package index files with remote repositories:
$ sudo apt update
To actually upgrade:
$ sudo apt upgrade
$ sudo apt -u upgrade
(You can also use dist-upgrade as discussed earlier.) Only the first form will try to do the installations.
2. $ apt-cache search "kernel"
$ apt-cache search -n "kernel"
$ apt-cache pkgnames "kernel"
The second and third forms only find packages that have kernel in their name.
$ dpkg --get-selections "*kernel*"
to get only installed packages. Note that on Debian-based systems you probably should use linux not kernel for
kernel-related packages as they don’t usually have kernel in their name.
3. $ sudo apt install apache2-dev
Try the commands you used above both as root and as a regular user. Do you notice any difference?
Solution 10.2
You can then easily install them like regular single packages, as in:
System Monitoring
11.1 Labs
stress-ng is essentially an enhanced version of stress, and it respects the same syntax and options. It is actively maintained:
see https://wiki.ubuntu.com/Kernel/Reference/stress-ng
All major distributions should have stress-ng in their packaging systems. However, for RHEL/CentOS it needs to be obtained
from the EPEL repository. As of this writing there is no package in the EPEL 8 repository, but you can install the one from
EPEL 7 without a problem.
$ stress-ng --help
$ info stress-ng
$ stress-ng -c 8 -i 4 -m 6 -t 20s
will:
• Fork off 8 CPU-intensive processes (-c 8), each spinning on CPU-bound computations.
• Fork off 4 I/O-intensive processes (-i 4), each spinning on sync().
• Fork off 6 memory-intensive processes (-m 6), each spinning on malloc(), allocating 256 MB by default. The size can be
changed as in --vm-bytes 128M.
• Run the test for 20 seconds (-t 20s).
After installing stress-ng, you may want to start up your system’s graphical system monitor, which you can find on your
application menu, or run from the command line, which is probably gnome-system-monitor or ksysguard.
Now begin to put stress on the system. The exact numbers you use will depend on your system’s resources, such as the
number of CPUs and the amount of RAM.
$ stress-ng -m 4 -t 20s
Play with combinations of the switches and see how they impact each other. You may find the stress-ng program useful to
simulate various high load conditions.
Process Monitoring
12.1 Labs
1. Run ps with the options -ef. Then run it again with the options aux. Note the differences in the output.
2. Run ps so that only the process ID, priority, nice value, and the process command line are displayed.
3. Start a new bash session by typing bash at the command line. Start another bash session using the nice command
but this time giving it a nice value of 10.
4. Run ps as in step 2 to note the differences in priority and nice values. Note the process ID of the two bash sessions.
5. Change the nice value of one of the bash sessions to 15 using renice. Once again, observe the change in priority and
nice values.
6. Run top and watch the output as it changes. Hit q to stop the program.
Solution 12.1
1. $ ps -ef
$ ps aux
2. $ ps -o pid,pri,ni,cmd
PID PRI NI CMD
2389 19 0 bash
22079 19 0 ps -o pid,pri,ni,cmd
3. $ bash
$ nice -n 10 bash
4. $ ps -o pid,pri,ni,cmd
PID PRI NI CMD
2389 19 0 bash
22115 19 0 bash
22171 9 10 bash
22227 9 10 ps -o pid,pri,ni,cmd
5. $ renice 15 -p 22171
$ ps -o pid,pri,ni,cmd
PID PRI NI CMD
2389 19 0 bash
22115 19 0 bash
22171 4 15 bash
22246 4 15 ps -o pid,pri,ni,cmd
6. $ top
1. Use dd to start a background process which reads from /dev/urandom and writes to /dev/null.
2. Check the process state with ps. What state is it in?
3. Bring the process to the foreground using the fg command. Then hit Ctrl-Z. What does this do? Look at the process
state again, what is it?
4. Run jobs to list the shell’s jobs and their states.
5. Bring the job back to the foreground, then terminate it using kill from another window.
Solution 12.2
1. $ dd if=/dev/urandom of=/dev/null &
2. $ ps -C dd -o pid,cmd,stat
PID CMD STAT
25899 dd if=/dev/urandom of=/dev/ R
Should be S or R.
3. $ fg
$ ^Z
$ ps -C dd -o pid,cmd,stat
PID CMD STAT
25899 dd if=/dev/urandom of=/dev/ T
State should be T.
4. $ jobs
[1]+ Stopped dd if=/dev/urandom of=/dev/null
5. Bring the job back to the foreground, then kill it using the kill command from another window.
$ fg
$ kill 25899
Memory Monitoring and Usage
13.1 Labs
$ sudo /sbin/swapoff -a
Make sure you turn it back on later, when we are done, with
$ sudo /sbin/swapon -a
Now we are going to put the system under increasing memory pressure. One way to do this is to exploit the stress-ng program
we installed earlier, running it with arguments such as:
$ stress-ng -m 12 -t 10s
You should see the OOM (Out of Memory) killer swoop in and try to kill processes in a struggle to stay alive. You can see what
is going on by running dmesg or monitoring /var/log/messages or /var/log/syslog, or through graphical interfaces that
expose the system logs.
I/O Monitoring and Tuning
14.1 Labs
Results can be read from the terminal window or directed to a file, and can also be saved in csv (comma-separated value)
format. Companion programs, bon_csv2html and bon_csv2txt, can be used to convert the csv output to html and plain text formats.
We recommend you read the man page for bonnie++ before using it, as it has quite a few options regarding which tests to
perform and how exhaustive and stressful they should be. A quick synopsis is obtained with:
$ bonnie++ --help
Version: 1.96
$ sudo bonnie++ -n 0 -u 0 -r 100 -f -b -d /mnt
where:
• -n 0 means skip the file creation tests.
• -u 0 means run as root (user id 0).
• -r 100 pretends the machine has only 100 MB of memory, so the test files stay small.
• -f means fast mode, which skips the slow per-character I/O tests.
• -b means do a fsync after every write, which forces flushing to disk rather than just writing to cache.
• -d /mnt just specifies the directory to place the temporary file created; make sure it has enough space, in this case 300
MB, available.
If you don’t supply a figure for your memory size, the program will figure out how much the system has and will create a testing
file two to three times as large. We are not doing that here because it takes much longer; a smaller run is enough to get a feel for things.
On RedHat
On an RHEL/CentOS system:
1.97,1.97,c7,1,1544116090,300M,,,,286399,15,293907,14,,,+++++,+++,3143,12,,,,,,,,,,,,,,,,,,,\
14us,15us,,18us,59226us,,,,,,
On Ubuntu
On an Ubuntu system, running as a virtual machine under hypervisor on the same physical machine:
$ sudo bonnie++ -n 0 -u 0 -r 100 -f -b -d /mnt
1.97,1.97,ubuntu,1,1544120438,300M,,,,230788,94,+++++,+++,,,+++++,+++,+++++,+++,,,,,,,,,,,,,,,,,,,\
11210us,606us,,631us,4365us,,,,,,
After reading the documentation, try longer and larger, more ambitious tests. Try some of the tests we turned off. If your
system is behaving well, save the results for future benchmarking comparisons when the system is sick.
It can be downloaded from http://sourceforge.net/projects/fsmark/. Once you have obtained the tarball, you can
unpack it and compile it with:
$ make
Read the README file as we are only going to touch the surface.
If the compile fails with an error like:
$ make
....
/usr/bin/ld: cannot find -lc
it is because you haven’t installed the static version of glibc. You can do this on Red Hat-based systems by doing:
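The install command was elided here; on Red Hat-based systems the package is glibc-static (cf. the dnf line further below), so presumably:

```shell
sudo yum install glibc-static
```
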
On Debian-based systems the relevant static library is installed along with the shared one so no additional package needs to
be sought.
• If you already have /etc/yum.repos.d/CentOS-PowerTools.repo on your system, then just make sure it has
enabled=1 set in the file. Otherwise we have made a copy of this file and placed it in the SOLUTIONS section for
this course:
CentOS-PowerTools.repo
[PowerTools]
name=CentOS-$releasever - PowerTools
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=PowerTools&infra=$infra
#baseurl=http://mirror.centos.org/$contentdir/$releasever/PowerTools/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
• On RHEL 8 you can get away with adding this file to the repository directory, but when you install you will probably
have to do it as:
# dnf install --nogpgcheck glibc-static
or go to mirror sites and find the latest copies of the PowerTools repository and download and install both
glibc-static and libxcrypt-static
• RHEL 8 considers static linking to be a security risk and discourages the use of any static packages, so you
may want to uninstall them when done.
For a test we are going to create 2500 files, each 10 KB in size, and after each write we’ll perform an fsync to flush out to
disk. This can be done in the /tmp directory with the command:
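The fs_mark invocation itself was elided above; it is along the lines of fs_mark -d /tmp -n 2500 -s 10240 (check its help output, as options vary between versions). If you do not have fs_mark handy, the same kind of workload can be approximated in plain shell:

```shell
# create 2500 files of 10 KB each, forcing each one out to disk,
# while iostat monitors the device in another terminal
mkdir -p /tmp/fsmark-test && cd /tmp/fsmark-test
for n in $(seq 1 2500) ; do
    # conv=fsync forces an fsync(2) after writing each file
    dd if=/dev/zero of=file$n bs=1024 count=10 conv=fsync 2>/dev/null
done
echo "created $(ls | wc -l) files"
```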
$ iostat -x -d /dev/sda 2 10
The numbers you should surely note are the number of files per second reported by fs_mark and the bandwidth percentage
utilized as reported by iostat. If this is approaching 100 percent, you are I/O-bound.
Depending on what kind of filesystem you are using, you may be able to get improved results by changing the mount options.
For example, for ext3 or ext4 you can try:

$ sudo mount -o remount,barrier=0 /tmp
Please Note
• In the above mount command you should use whatever partition /tmp belongs to, such as /.
Note that these options may cause problems if you have a power failure, or other ungraceful system shutdown; i.e., there is
likely to be a trade-off between stability and speed.
Documentation about some of the mount options can be found with the kernel source under Documentation/filesystems
and the man page for mount.
I/O Scheduling **
15.1 Labs
lab_iosched.sh
#!/bin/bash
NMAX=8
NMEGS=100
[[ -n $1 ]] && NMAX=$1
[[ -n $2 ]] && NMEGS=$2
TIMEFORMAT="%R %U %S"
##############################################################
# simple test of parallel reads
do_read_test(){
    for n in $(seq 1 $NMAX) ; do
        cat file$n > /dev/null &
    done
    # wait for previous jobs to finish
    wait
}
##############################################################
# simple test of parallel writes
do_write_test(){
    for n in $(seq 1 $NMAX) ; do
        # copy one file (cached after the first read) to multiple outputs
        cp file1 fileout$n &
    done
    # wait for previous jobs to finish
    wait
}
##############################################################
# begin the actual work
If you are taking the online self-paced version of this course, the script is available for download from your Lab screen.
Because changing the I/O scheduler is a privileged operation, you will have to run it as:

$ sudo ./lab_iosched.sh
How it works
The script:
• Cycles through the available I/O schedulers on a hard disk while doing a configurable number of parallel reads
and writes of files of a configurable size. (Note that exactly which schedulers are available will depend on how the
kernel has been configured and compiled, and may vary quite a bit from machine to machine.)
• Makes sure, when testing reads, it is actually reading from disk and not from cached pages of memory; the cache
is flushed out by doing (as root):

$ echo 3 > /proc/sys/vm/drop_caches

before doing the reads. The script does a cat into /dev/null to avoid writing to disk.
• Makes sure all reads are complete before obtaining timing information; this is done by issuing a wait command
under the shell.
• Tests writes by simply copying a file (which will be in cached memory after the first read) multiple times simulta-
neously. To make sure it has waited for all writes to complete before getting timing information, it issues a sync
call.
The provided script takes two arguments. The first is the number of simultaneous reads and writes to perform. The second is
the size (in MB) of each file.
This script must be run as root as it echoes values into the /proc and /sys directory trees.
Extra Credit
For additional exploring you might try changing some of the tunable parameters and see how results vary.
Linux Filesystems and the VFS
16.1 Labs
Essentially, tmpfs functions as a ramdisk; it resides purely in memory. But it has some nice properties that old-fashioned
ramdisk implementations did not have:
1. The filesystem adjusts its size (and thus the memory that is used) dynamically; it starts at zero and expands as necessary
up to the maximum size it was mounted with.
2. If your RAM gets exhausted, tmpfs can utilize swap space. (You still can’t try to put more in the filesystem than its
maximum capacity allows, however.)
3. tmpfs does not require having a normal filesystem placed in it, such as ext3 or vfat; it has its own methods for dealing
with files and I/O that are aware that it is really just space in memory (it is not actually a block device), and as such are
optimized for speed.
Thus there is no need to pre-format the filesystem with a mkfs command; you merely have to mount it and use it.
Mount a new instance of tmpfs anywhere on your directory structure with a command like:

$ sudo mkdir -p /mnt/tmpfs
$ sudo mount -t tmpfs none /mnt/tmpfs
See how much space the filesystem has been given and how much it is using:
$ df -h /mnt/tmpfs
You should see it has been allotted a default value of half of your RAM; however, the usage is zero, and will only start to grow
as you place files on /mnt/tmpfs.
You could change the allotted size as a mount option, as in:

$ sudo mount -t tmpfs -o size=1G none /mnt/tmpfs
You might try filling it up until you reach full capacity and see what happens. Do not forget to unmount when you are done
with:

$ sudo umount /mnt/tmpfs

Most Linux distributions mount an instance of tmpfs at /dev/shm:

$ df -h /dev/shm
Many applications use this; for example, POSIX shared memory is implemented on top of it as an inter-process communication
mechanism. Any user can create, read and write files in /dev/shm, so it is a good place to create temporary files in memory.
Create some files in /dev/shm and note how the filesystem is filling up with df.
In addition, many distributions mount multiple instances of tmpfs; for example, on a RHEL system:
Notice this was run on a system with 16 GB of RAM, so clearly you cannot have all these tmpfs filesystems actually using the
default ~8 GB they have each been allotted!
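A convenient way to see all of the tmpfs instances at once is findmnt (part of util-linux):

```shell
# show every mounted tmpfs instance with its size and current usage
findmnt -t tmpfs -o TARGET,SIZE,USED,AVAIL
```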
Please Note
Some distributions (such as Fedora) may (by default) mount /tmp as a tmpfs system; in such cases one has to avoid
putting large files in /tmp to avoid running out of memory. Or one can disable this behavior as we discussed earlier
when describing /tmp.
Disk Partitioning
17.1 Labs
The exercises in this section will make simple use of mkfs for formatting filesystems, and mount for mounting them at
places in the root filesystem tree. These commands will be explained in detail in the next session.
In the first three exercises in this section, we will use the loop device mechanism with or without the parted program.
Very Important
For the purposes of later exercises in this course you will need unpartitioned disk space. It need not be large;
one or two GB will certainly suffice.
If you are using your own native machine, you either have it or you do not. If you do not, you will have to shrink a partition
and the filesystem on it (first!) and then make the space available, using gparted and/or the steps we have outlined or will
outline.
Please Note
If you have real physical unpartitioned disk space you do not need to do the following procedures, but it is still a very
useful learning exercise.
In this first exercise, we are going to create a file that will be used as a container for a full hard disk partition image, and
which for all intents and purposes can be used like a real hard disk partition. In the following exercise, we will show how to put
more than one partition on it and have it behave as an entire disk.
1. Create a large empty file to serve as the container:

$ dd if=/dev/zero of=imagefile bs=1M count=1024

You can make a much smaller file if you like or do not have that much available space in the partition you are creating
the file on.
2. Put a filesystem on it:
$ mkfs.ext4 imagefile
mke2fs 1.42.9 (28-Dec-2013)
imagefile is not a block special device.
Proceed anyway? (y,n) y
Discarding device blocks: done
.....
Of course you can format with a different filesystem, doing mkfs.ext3, mkfs.vfat, mkfs.xfs etc.
3. Mount it somewhere:
$ mkdir mntpoint
$ sudo mount -o loop imagefile mntpoint
You can now use this to your heart’s content, putting files etc. on it.
4. When you are done unmount it with:
$ sudo umount mntpoint
We will discuss losetup in a subsequent exercise, and you can use /dev/loop[0-7] but you have to be careful they are not
already in use, as we will explain.
You should note that using a loop device file instead of a real partition can be useful, but it is pretty worthless for doing any
kind of measurements or benchmarking. This is because you are placing one filesystem layer on top of another, which can
only have a negative effect on performance, and mostly you are just measuring the behavior of the underlying filesystem the
image file is created on.
The next level of complication is to divide the container file into multiple partitions, each of which can be used to hold a
filesystem, or a swap area.
You can reuse the image file created in the previous exercise or create a new one.
1. Run fdisk on the image file:

$ fdisk -C 130 imagefile

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
The -C 130 sets the number of phony cylinders in the drive, and is only necessary in old versions of fdisk, which
unfortunately you will find on RHEL 6. However, it will do no harm on other distributions.
2. Type m to get a list of commands:
m
Command (m for help): m
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
g create a new empty GPT partition table
G create an IRIX (SGI) partition table
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)
3. Create a new primary partition and make it 256 MB (or whatever size you would like):
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-2097151, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151): +256M
Partition 1 of type Linux and of size 256 MiB is set
Syncing disks.
While this has given us some good practice, we haven’t yet seen a way to use the two partitions we just created. We’ll start
over in the next exercise with a method that lets us do so.
You should read the man pages for losetup and parted before doing the following procedures.
Once again, you can reuse the image file or, better still, zero it out and start freshly or with another file.
1. Attach the image file to the first available loop device:

$ export LOOPDEV=$(sudo losetup -f)
$ sudo losetup $LOOPDEV imagefile

where the first command finds the first free loop device. The reason to do this is you may already be using one or more
loop devices. For example, on the system that this is being written on, before the above commands are executed:
$ losetup -a
/dev/loop0: []: (/usr/src/KERNELS.sqfs)
a squashfs compressed, read-only filesystem is already mounted using /dev/loop0. (The output of this command will
vary with distribution.) If we were to ignore this and use losetup on /dev/loop0, we would almost certainly corrupt the
file.
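The partitioning step itself was elided; a sketch using parted non-interactively follows. The device name /dev/loop1 and the three-way split are assumptions chosen to match the partition sizes shown below:

```shell
# run as root after attaching the image file to /dev/loop1
partition_loop() {
    parted -s /dev/loop1 mklabel msdos
    parted -s /dev/loop1 mkpart primary ext2 0% 25%
    parted -s /dev/loop1 mkpart primary ext2 25% 50%
    parted -s /dev/loop1 mkpart primary fat32 50% 100%
}
```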
$ fdisk -l /dev/loop1
5. What happens next depends on what distribution you are on. For example, on RHEL and Ubuntu you will find new
device nodes have been created:
$ ls -l /dev/loop1*
brw-rw---- 1 root disk 7, 1 Oct 7 14:54 /dev/loop1
brw-rw---- 1 root disk 259, 0 Oct 7 14:54 /dev/loop1p1
brw-rw---- 1 root disk 259, 3 Oct 7 14:54 /dev/loop1p2
brw-rw---- 1 root disk 259, 4 Oct 7 14:54 /dev/loop1p3
6. Put filesystems on the partitions:

$ sudo mkfs -t ext3 /dev/loop1p1
$ sudo mkfs -t ext4 /dev/loop1p2
$ sudo mkfs -t vfat /dev/loop1p3

7. Mount all three of them:

$ mkdir mnt1 mnt2 mnt3
$ sudo mount /dev/loop1p1 mnt1
$ sudo mount /dev/loop1p2 mnt2
$ sudo mount /dev/loop1p3 mnt3

$ df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 29G 8.5G 19G 32% /
....
/dev/loop1p1 ext3 233M 2.1M 219M 1% mnt1
/dev/loop1p2 ext4 233M 2.1M 215M 1% mnt2
/dev/loop1p3 vfat 489M 0 489M 0% mnt3
8. After using the filesystems to your heart’s content you can unwind it all:
$ sudo umount mnt1 mnt2 mnt3
$ rmdir mnt1 mnt2 mnt3
$ sudo losetup -d /dev/loop1
Filesystem Features: Attributes, Creating, Checking, Mounting
18.1 Labs
3. Compare the contents of /tmp/appendit with /etc/hosts; there should not be any differences.
4. Try to add the append-only attribute to /tmp/appendit by using chattr. You should see an error here. Why?
5. As root, retry adding the append-only attribute; this time it should work. Look at the file’s extended attributes by using
lsattr.
6. As a normal user, try to use cat to copy over the contents of /etc/passwd to /tmp/appendit. You should get an
error. Why?
7. Try the same thing again as root. You should also get an error. Why?
8. As the normal user, again use the append redirection operator (>>) and try appending the /etc/passwd file to /tmp/
appendit. This should work. Examine the resulting file to confirm.
9. As root, set the immutable attribute on /tmp/appendit, and look at the extended attributes again.
10. Try appending output to /tmp/appendit, try renaming the file, creating a hard link to the file, and deleting the file as
both the normal user and as root.
11. We can remove this file by removing the extended attributes. Do so.
Solution 18.1
1. $ cd /tmp
$ touch appendit
$ ls -l appendit
4. $ chattr +a appendit
chattr: Operation not permitted while setting flags on appendit
7. $ sudo su
$ cat /etc/passwd > appendit
bash: appendit: Operation not permitted
$ exit
10. $ mv appendit appendit.rename
mv: cannot move `appendit' to `appendit.rename': Operation not permitted
$ ln appendit appendit.hardlink
ln: creating hard link `appendit.hardlink' => `appendit': Operation not permitted
$ rm -f appendit
rm: cannot remove `appendit': Operation not permitted
$ sudo su
$ echo hello >> appendit
-bash: appendit: Permission denied
$ mv appendit appendit.rename
mv: cannot move `appendit' to `appendit.rename': Operation not permitted
$ ln appendit appendit.hardlink
ln: creating hard link `appendit.hardlink' => `appendit': Operation not permitted
$ rm -f appendit
$ exit
11. $ sudo su
$ lsattr appendit
----ia-------e- appendit
$ chattr -ia appendit
$ rm appendit
$ ls appendit
ls: cannot access appendit: No such file or directory
1. Use fdisk to create a new 250 MB partition on your system, probably on /dev/sda. Or create a file full of zeros to use
as a loopback file to simulate a new partition.
2. Use mkfs to format a new filesystem on the partition or loopback file just created. Do this three times, changing the block
size each time. Note the locations of the superblocks, the number of block groups and any other pertinent information,
for each case.
3. Create a new subdirectory (say /mnt/tempdir) and mount the new filesystem at this location. Verify it has been
mounted.
5. Try to create a file in the mounted directory. You should get an error here. Why?
7. Add a line to your /etc/fstab file so that the filesystem will be mounted at boot time.
9. Modify the configuration for the new filesystem so that binary files may not be executed from the filesystem (change
defaults to noexec in the /mnt/tempdir entry). Then remount the filesystem and copy an executable file (such as
/bin/ls) to /mnt/tempdir and try to run it. You should get an error: why?
When you are done you will probably want to clean up by removing the entry from /etc/fstab.
Solution 18.2
1. We won’t show the detailed steps in fdisk, as it is all ground covered earlier. We will assume the partition created is
/dev/sda11, just to have something to show.
$ sudo fdisk /dev/sda
.....
w
$ sudo partprobe -s
Sometimes the partprobe won’t work, and to be sure the system knows about the new partition you have to reboot.
2. $ sudo mkfs -t ext4 -v /dev/sda11
$ sudo mkfs -t ext4 -b 2048 -v /dev/sda11
$ sudo mkfs -t ext4 -b 4096 -v /dev/sda11
Note the -v flag (verbose) will give the requested information; you will see that for a small partition like this the default is
1024 byte blocks.
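If you would rather not experiment on a real partition, the same block-size comparison can be made on a small scratch image file, with no root privileges needed; a sketch (file names are placeholders):

```shell
# make a 64 MB scratch image and format it twice with different block sizes,
# comparing the number of superblock copies (primary plus backups) each time
dd if=/dev/zero of=/tmp/blk.img bs=1M count=64 2>/dev/null
for bs in 1024 4096 ; do
    mkfs -t ext4 -F -q -b $bs /tmp/blk.img
    echo "block size $bs:"
    dumpe2fs /tmp/blk.img 2>/dev/null | grep -c "superblock at"
done
```

With the smaller block size there are more block groups, and hence more backup superblocks.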
3. $ sudo mkdir /mnt/tempdir
$ sudo mount /dev/sda11 /mnt/tempdir
$ mount | grep tempdir
If you get an error while unmounting, make sure you are not currently in the directory.
5. $ sudo touch /mnt/tempdir/afile
Then do:
$ sudo mount -o remount /mnt/tempdir
$ sudo cp /bin/ls /mnt/tempdir
$ /mnt/tempdir/ls
bash: /mnt/tempdir/ls: Permission denied
If you used a loopback file rather than a real partition, the procedure is the same; when running mkfs you will get warned
that this is a file and not a partition, just proceed.
Note the -v flag (verbose) will give the requested information; you will see that for a small partition like this the default is
1024 byte blocks.
3. $ sudo mkdir /mnt/tempdir
$ sudo mount -o loop /imagefile /mnt/tempdir
$ mount | grep tempdir
If you get an error while unmounting, make sure you are not currently in the directory.
5. $ sudo touch /mnt/tempdir/afile
in /etc/fstab
/imagefile /mnt/tempdir ext4 loop 1 2
in /etc/fstab
/imagefile /mnt/tempdir ext4 loop,noexec 1 2
Then do:
$ sudo mount -o remount /mnt/tempdir
$ sudo cp /bin/ls /mnt/tempdir
$ /mnt/tempdir/ls
bash: /mnt/tempdir/ls: Permission denied
Filesystem Features: Swap, Quotas, Usage
19.1 Labs
$ cat /proc/swaps
We will now add more swap space by adding either a new partition or a file. To use a file we can do:

$ dd if=/dev/zero of=swpfile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.30576 s, 822 MB/s
$ mkswap swpfile
(For a real partition just feed mkswap the partition name, but be aware all data on it will be erased!)
Activate the new swap space:

$ sudo swapon swpfile

and verify it is being used:

$ cat /proc/swaps
Note the Priority field; swap partitions or files of lower priority will not be used until higher priority ones are filled.
Remove the swap file from use and delete it to save space:

$ sudo swapoff swpfile
$ rm swpfile
Please Note
The subsection describing this material was marked as optional, so you may not have covered the material necessary
to do this exercise.
1. Change the entry in /etc/fstab for your new filesystem to use user quotas (change noexec to usrquota in the entry
for /mnt/tempdir). Then remount the filesystem.
2. Initialize quotas on the new filesystem, and then turn the quota checking system on.
3. Now set some quota limits for the normal user account: a soft limit of 500 blocks and a hard limit of 1000 blocks.
4. As the normal user, attempt to use dd to create some files to exceed the quota limits. Create bigfile1 (200 blocks)
and bigfile2 (400 blocks).
You should get a warning. Why?
Solution 19.2
1. Change /etc/fstab to have one of the following two lines according to whether you are using a real partition or a
loopback file:
in /etc/fstab
/dev/sda11 /mnt/tempdir ext4 usrquota 1 2
/imagefile /mnt/tempdir ext4 loop,usrquota 1 2
Then remount:

$ sudo mount -o remount /mnt/tempdir
$ sudo chmod 777 /mnt/tempdir

(You won't normally do the chmod line above, but we are doing it to make the next part easier, by letting the normal user
create files in /mnt/tempdir.)
5. $ cd /mnt/tempdir
$ dd if=/dev/zero of=bigfile1 bs=1024 count=200
200+0 records in
200+0 records out
204800 bytes (205 kB) copied, 0.000349604 s, 586 MB/s
$ quota
Disk quotas for user student (uid 500):
Filesystem blocks quota limit grace files quota limit grace
/dev/sda11 200 500 1000 1 0 0
$ quota
Disk quotas for user student (uid 500):
Filesystem blocks quota limit grace files quota limit grace
/dev/sda11 1000* 500 1000 6days 3 0 0
$ ls -l
total 1068
-rw------- 1 root root 7168 Dec 10 18:56 aquota.user
-rw-rw-r-- 1 student student 204800 Dec 10 18:58 bigfile1
-rw-rw-r-- 1 student student 409600 Dec 10 18:58 bigfile2
-rw-rw-r-- 1 student student 409600 Dec 10 19:01 bigfile3
drwx------ 2 root root 16384 Dec 10 18:47 lost+found
-rwxr-xr-x 1 root root 41216 Dec 10 18:52 more
The Ext2/Ext3/Ext4 Filesystems
20.1 Labs
However, native filesystems in UNIX-type operating systems, including Linux, tend not to suffer serious problems with filesys-
tem fragmentation.
This is primarily because they do not try to cram files onto the innermost disk regions where access times are faster. Instead,
they spread free space out throughout the disk, so that when a file has to be created there is a much better chance that a
region of free blocks big enough can be found to contain the entire file in either just one or a small number of pieces.
For modern hardware, the concept of innermost disk regions is obscured by the hardware anyway.
Don’t do this
For SSDs defragmentation can actually shorten the lifespan of the storage media due to finite read/erase/write cycles.
On smart operating systems defragmentation is turned off by default on SSD drives.
Furthermore, the newer journalling filesystems (including ext4) work with extents (large contiguous regions) by design.
$ sudo e4defrag
e4defrag is part of the e2fsprogs package and should be on all modern Linux distributions. The basic usage is
e4defrag [-c] [-v] target, where:
• -c: Analyze only, reporting the current fragmentation without changing anything.
• -v: Be verbose.
The target can be:
• A file
• A directory
• An entire device
Examples:
Success: [ 112/152 ]
Failure: [ 40/152 ]
Try running e4defrag on various files, directories, and entire devices, always trying with -c first.
You will generally find that Linux filesystems only tend to need defragmentation when they get very full (over 90 percent or so),
or when they are small and have relatively large files, as when a boot partition is used.
In the following, you can work on the imagefile used earlier, or you can substitute /dev/sdaX (using whatever partition the
filesystem you want to modify is mounted on) for imagefile.
1. Using dumpe2fs, obtain information about the filesystem whose properties you want to adjust.
Display:
• The maximum mount count setting (after which a filesystem check will be forced.)
• The Check interval (the amount of time after which a filesystem check is forced)
• The number of blocks reserved, and the total number of blocks
2. Change:
• The maximum mount count setting to 30
• The check interval to three weeks
• The percentage of blocks reserved to 10 percent
3. Using dumpe2fs, once again obtain information about the filesystem and compare with the original output.
Solution 20.2
2. $ tune2fs -c 30 imagefile
tune2fs 1.42.9 (28-Dec-2013)
Setting maximal mount count to 30
$ tune2fs -i 3w imagefile
tune2fs 1.42.9 (28-Dec-2013)
Setting interval between checks to 1814400 seconds
$ tune2fs -m 10 imagefile
tune2fs 1.42.9 (28-Dec-2013)
Setting reserved blocks percentage to 10% (26214 blocks)
14c14
< Reserved block count: 13107
---
> Reserved block count: 26214
29c29
< Last write time: Wed Oct 26 14:26:19 2016
---
> Last write time: Wed Oct 26 14:26:20 2016
31c31
< Maximum mount count: -1
---
> Maximum mount count: 30
33c33,34
< Check interval: 0 (<none>)
---
> Check interval: 1814400 (3 weeks)
> Next check after: Wed Nov 16 13:26:16 2016
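The diff shown above compares dumpe2fs output captured before and after the tune2fs calls. The whole sequence can be reproduced on a scratch image file without root; a sketch (all file names are placeholders):

```shell
# build a small ext4 image and snapshot its superblock parameters
dd if=/dev/zero of=/tmp/img bs=1M count=64 2>/dev/null
mkfs -t ext4 -F -q /tmp/img
dumpe2fs -h /tmp/img 2>/dev/null > /tmp/before

# adjust the mount-count, check-interval and reserved-blocks settings
tune2fs -c 30 /tmp/img > /dev/null
tune2fs -i 3w /tmp/img > /dev/null
tune2fs -m 10 /tmp/img > /dev/null

dumpe2fs -h /tmp/img 2>/dev/null > /tmp/after
diff /tmp/before /tmp/after || true   # diff exits nonzero when the files differ
```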
The XFS and Btrfs Filesystems **
21.1 Labs
Please Note
We do not have a detailed lab exercise you can do with xfs; many systems still will not have the kernel modules and
relevant user utilities installed. However, if your Linux kernel and distribution does support it, you can easily create a
filesystem with mkfs -t xfs.
You can see which utility programs are available for it with:

$ man -k xfs
Read about these utility programs and see if you can play with them on the filesystem you created.
Please Note
We do not have a detailed lab exercise you can do with btrfs; many systems still will not have the kernel modules
and relevant user utilities installed. However, if your Linux kernel and distribution support it, you can easily create a
filesystem with mkfs -t btrfs.
You can find out about available btrfs-related utilities either by just typing btrfs:
$ btrfs
Command groups:
subvolume manage subvolumes: create, delete, list, etc
filesystem overall filesystem tasks and information
balance balance data across devices, or change block groups using filters
device manage and query devices in the filesystem
scrub verify checksums of data and metadata
rescue toolbox for specific rescue operations
inspect-internal query various internal information
property modify properties of filesystem objects
quota manage filesystem quota settings
qgroup manage quota groups
replace replace a device in the filesystem
Commands:
check Check structural integrity of a filesystem (unmounted).
restore Try to restore files from a damaged filesystem (unmounted)
send Send the subvolume(s) to stdout.
receive Receive subvolumes from a stream
help Display help information
version Display btrfs-progs version
or with:

$ btrfs --help
Read about these utility programs and see if you can play with them on the filesystem you created.
Similarly, the command

$ man -k btrfs

will show the available btrfs-related utilities.
Encrypting Disks
22.1 Labs
In this exercise, you will encrypt a partition on the disk in order to provide a measure of security in the event that the
hard drive or laptop is stolen. Reviewing the cryptsetup documentation first would be a good idea (man cryptsetup
and cryptsetup --help).
1. Create a new partition for the encrypted block device with fdisk. Make sure the kernel is aware of the new partition
table. A reboot will do this but there are other methods.
2. Format the partition with cryptsetup using LUKS for the crypto layer.
3. Create the un-encrypted pass through device by opening the encrypted block device, i.e., secret-disk.
4. Add an entry to /etc/crypttab so that the system prompts for the passphrase on reboot.
Solution 22.1
1. Create a new partition (in what follows, /dev/sda4 to be concrete) and then either issue:
$ sudo partprobe -s
to have the system re-read the modified partition table, or reboot (which is far safer).
Note: If you can’t use a real partition, use the technique in the previous chapter to use a loop device or image file for
the same purpose.
2. $ sudo cryptsetup luksFormat /dev/sda4
in /etc/crypttab
secret-disk /dev/sda4
in /etc/fstab
/dev/mapper/secret-disk /secret ext4 defaults 1 2
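The elided steps between formatting and rebooting (opening the encrypted device, putting a filesystem on it, and mounting it) can be sketched as follows, shown as a function to run as root, using the names already chosen above:

```shell
# run as root; will prompt for the LUKS passphrase
finish_secret_disk() {
    cryptsetup luksOpen /dev/sda4 secret-disk   # creates /dev/mapper/secret-disk
    mkfs -t ext4 /dev/mapper/secret-disk
    mkdir -p /secret
    mount /dev/mapper/secret-disk /secret
}
```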
9. Reboot.
The process for encrypting is similar to the previous exercise, except we will not create a file system on the encrypted block
device.
In this case, we are also going to use the existing swap device by first de-activating it and then formatting it for use as an
encrypted swap device. It would be a little bit safer to use a fresh partition below, or you can safely reuse the encrypted
partition you set up in the previous exercise. At the end we explain what to do if you have problems restoring.
You may want to revert back to the original unencrypted partition when we are done by just running mkswap on it again when
it is not being used, as well as reverting the changes in the configuration files, /etc/crypttab and /etc/fstab.
1. Find out what partition you are currently using for swap and then deactivate it:
$ cat /proc/swaps
Filename Type Size Used Priority
/dev/sda11 partition 4193776 0 -1
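The next steps (deactivating, encrypting, and re-activating the swap device) can be sketched as follows, shown as a function to run as root. This uses a plain dm-crypt mapping keyed from /dev/urandom with default cipher settings; the names match the crypttab entry discussed below:

```shell
# run as root; THIS DESTROYS the existing swap contents on /dev/sda11
encrypt_swap() {
    swapoff /dev/sda11
    cryptsetup open --type plain --key-file /dev/urandom /dev/sda11 swapcrypt
    mkswap /dev/mapper/swapcrypt
    swapon /dev/mapper/swapcrypt
}
```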
5. To ensure the encrypted swap partition can be activated at boot you need to do two things:
(a) Add a line to /etc/crypttab so that the system prompts for the passphrase on reboot:
in /etc/crypttab
swapcrypt /dev/sda11 /dev/urandom swap

(Note /dev/urandom is preferred over /dev/random for reasons involving potential entropy shortages, as discussed in the
man page for crypttab.) You don't need more detailed options (such as specifying the cipher and key size), but the man
page gives examples of what more you can do.
(b) Add an entry to /etc/fstab so that the swap device is activated on boot.
in /etc/fstab
/dev/mapper/swapcrypt none swap defaults 0 0
If the swapon command fails, it is likely because /etc/fstab no longer properly describes the swap partition. If the partition
is described in there by its actual device node (/dev/sda11) there won't be a problem. Otherwise, you can fix it by changing
the line in there to be:

in /etc/fstab
/dev/sda11 swap swap defaults 0 0

rather than, for example:

in /etc/fstab
LABEL=SWAP swap swap defaults 0 0
Logical Volume Management (LVM)
23.1 Labs
3. Create a volume group named myvg and add the two physical volumes to it. Use the default extent size.
4. Allocate a 300 MB logical volume named mylvm from volume group myvg.
Solution 23.1
1. Execute:
$ sudo fdisk /dev/sda
using whatever hard disk is appropriate, and create the two partitions. While in fdisk, typing t will let you set the partition
type to 8e. While it doesn’t matter if you don’t set the type, it is a good idea to lessen confusion. Use w to rewrite the
partition table and exit, and then
$ sudo partprobe -s
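The remaining steps (registering the physical volumes, creating the volume group and logical volume, and putting a filesystem on it) can be sketched as follows, shown as a function to run as root; the partition names /dev/sdaX and /dev/sdaY are placeholders for whatever you created:

```shell
# run as root; substitute your real partition names
build_lvm() {
    pvcreate /dev/sdaX /dev/sdaY        # register the physical volumes
    vgcreate myvg /dev/sdaX /dev/sdaY   # volume group with default extent size
    lvcreate -L 300M -n mylvm myvg      # 300 MB logical volume
    mkfs -t ext4 /dev/myvg/mylvm        # format, then mount it
    mkdir -p /mylvm
    mount /dev/myvg/mylvm /mylvm
}
```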
If you want the mount to be persistent, edit /etc/fstab to include the line:
in /etc/fstab
/dev/myvg/mylvm /mylvm ext4 defaults 0 0
6. $ sudo lvdisplay
7. $ df -h
$ sudo lvresize -r -L 350M /dev/myvg/mylvm
$ df -h
or
$ sudo lvresize -r -L +50M /dev/myvg/mylvm
RAID **
24.1 Labs
The process will be the same whether the partitions are on one drive or several (although there is obviously little reason to
actually create a RAID on a single device).
1. Create two 200 MB partitions of type raid (fd) either on your hard disk using fdisk, or using LVM.
3. Format the RAID device as an ext4 filesystem. Then mount it at /myraid and make the mount persistent.
4. Place the information about /dev/md0 in /etc/mdadm.conf, using mdadm. (Depending on your distribution, this file
may not previously exist.)
Solution 24.1
1. If you need to create new partitions do:
$ sudo fdisk /dev/sda
and create the partitions as we have done before. For purposes of being definite, we will call them /dev/sdaX and
/dev/sdaY. You will need to run partprobe or kpartx or reboot after you are done to make sure the system is properly
aware of the new partitions.
2. $ sudo mdadm -C /dev/md0 --level=1 --raid-disks=2 /dev/sdaX /dev/sdaY
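Steps 3 and 4 can be sketched as follows, shown as a function to run as root:

```shell
# run as root once /dev/md0 exists
finish_raid() {
    mkfs -t ext4 /dev/md0                      # step 3: format the array
    mkdir -p /myraid
    mount /dev/md0 /myraid
    mdadm --detail --scan >> /etc/mdadm.conf   # step 4: record the array
}
```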
in /etc/fstab
/dev/md0 /myraid ext4 defaults 0 0
5. $ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 dm-14[1] dm-13[0]
204736 blocks [2/2] [UU]
Please Note
You should probably verify that with a reboot, the RAID volume is mounted automatically. When you are done, you
probably will want to clean up by removing the line from /etc/fstab, and then getting rid of the partitions.
Kernel Services and Configuration
25.1 Labs
2. Check the current value of net.ipv4.icmp_echo_ignore_all, which is used to turn on and off whether your system
will respond to ping. A value of 0 allows your system to respond to pings.
3. Set the value to 1 using the sysctl command line utility and then check if pings are responded to.
4. Set the value back to 0 and show the original behavior is restored.
5. Now change the value by modifying /etc/sysctl.conf and force the system to activate this setting file without a reboot.
You will probably want to reset your system to have its original behavior when you are done.
Solution 25.1
You can use either localhost, 127.0.0.1 (the loopback address), or your actual IP address as the target of ping below.
1. $ ping localhost
2. $ sysctl net.ipv4.icmp_echo_ignore_all
in /etc/sysctl.conf
net.ipv4.icmp_echo_ignore_all=1
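The command-line portion of steps 3 through 5 can be sketched as follows, shown as a function to run as root:

```shell
# run as root; icmp_echo_ignore_all controls whether pings get replies
toggle_ping_demo() {
    sysctl net.ipv4.icmp_echo_ignore_all=1   # step 3: stop answering pings
    ping -c 2 localhost                      # should now get no replies
    sysctl net.ipv4.icmp_echo_ignore_all=0   # step 4: restore
    sysctl -p                                # step 5: re-read /etc/sysctl.conf
}
```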
6. $ sysctl net.ipv4.icmp_echo_ignore_all
$ ping localhost
Since the changes to /etc/sysctl.conf are persistent, you probably want to restore things to its previous state.
However, when the PID reaches the value shown in /proc/sys/kernel/pid_max, which is conventionally 32768 (32K), PIDs
will wrap around to lower numbers. If nothing else, this means you can't have more than 32K processes on the system, since
there are only that many slots for PIDs.
3. Reset pid_max to a lower value than the ones currently being issued.
Solution 25.2
Very Important
In the below we are going to use two methods, one involving sysctl, the other directly echoing values to /proc/sys/
kernel/pid_max. Note that the echo method requires you to be root; sudo won’t work. We’ll leave it to you to figure
out why, if you don’t already know!
1. $ sysctl kernel.pid_max
$ cat /proc/sys/kernel/pid_max
2. Type:
$ cat &
[1] 29222
$ kill -9 29222
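Step 3 was elided; either method below works (24000 is just an example value lower than the PID shown above):

```shell
# run as root
lower_pid_max() {
    sysctl kernel.pid_max=24000
    # or equivalently (sudo won't help with the redirection):
    # echo 24000 > /proc/sys/kernel/pid_max
}
```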
4. $ cat &
[2] 311
$ kill -9 311
Please Note
Note that when starting over, the kernel begins at PID=300, not a lower value. You might notice that assigning PIDs to
new processes is actually not trivial; since the system may have already turned over, the kernel always has to check
when generating new PIDs that the PID is not already in use. The Linux kernel has a very efficient way of doing this
that does not depend on the number of processes on the system.
Kernel Modules
26.1 Labs
3. Re-list all loaded kernel modules and see if your module was indeed loaded.
Solution 26.1
1. $ lsmod
2. In the following, substitute whatever module name you used for e1000e. Either of these methods works but, of course,
the second is easier.
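The two methods can be sketched as follows, shown as a function to run as root; the e1000e path is an example and varies with kernel version:

```shell
# run as root; the two commands are alternatives, not a sequence
load_module_demo() {
    # method 1: insmod needs the full path to the .ko file and does
    # not resolve dependencies
    insmod /lib/modules/$(uname -r)/kernel/drivers/net/ethernet/intel/e1000e/e1000e.ko
    # method 2: modprobe finds the module by name and loads any
    # dependencies first
    modprobe e1000e
}
```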
27.1 Labs
Solution 27.1
1. Create a file named /etc/udev/rules.d/75-myusb.rules and have it include just one line of content:
$ cat /etc/udev/rules.d/75-myusb.rules
SUBSYSTEM=="usb", SYMLINK+="myusb"
Do not use the deprecated key value BUS in place of SUBSYSTEM, as recent versions of udev have removed it.
Note that the name of this file really does not matter. If there were an ACTION component to the rule, the system would execute
it; look at other rules for examples.
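As a purely illustrative sketch of what an ACTION component might look like (the helper script path /usr/local/bin/log-usb.sh is hypothetical, not part of the lab):

```
# /etc/udev/rules.d/75-myusb.rules -- illustrative only
# ACTION matches the uevent type; RUN+= executes a program when the rule fires.
SUBSYSTEM=="usb", ACTION=="add", SYMLINK+="myusb", RUN+="/usr/local/bin/log-usb.sh"
```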
2. Plug in a device.
3. $ ls -lF /dev | grep myusb
Virtualization Overview
28.1 Labs
Very Important
• The following labs are best run on a physical machine running Linux natively.
• It may be possible to run them within a Virtual Machine running under a hypervisor, such as VMware, VirtualBox,
or even KVM. However, this requires nested virtualization to be working properly.
• Whether or not this works depends on the particular hypervisor used, the underlying host operating system (i.e.,
Windows, Mac OS or Linux) as well as the particular variant, such as which Linux or Windows version as well
as the particular kernel.
• Furthermore, it also depends on your particular hardware. For example, we have found nested virtualization
working with VMware on various x86_64 machines, but with Oracle VirtualBox only on some.
• If this works, performance will be poor as compared to running on native hardware, but that is not important for
the simple demonstrative exercises we will do.
• Your mileage will vary! If it does not work, we cannot be responsible for helping you get it rolling.
1. First check that you have hardware virtualization available and enabled:
$ grep -e vmx -e svm /proc/cpuinfo
where vmx indicates an Intel CPU and svm an AMD CPU. If you do not see either one of these:
• If you are on a physical machine, maybe you can fix this. Reboot your machine and see if you can turn on
virtualization in the BIOS settings. Note that some IT personnel may make this impossible for “security” reasons,
so try to get that policy changed.
• You are on a virtual machine running under a hypervisor, and you do not have nested virtualization operable.
2. If for either of these reasons, you do not have hardware virtualization, you may be able to run virt-manager, but with
weak performance.
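The check in step 1 can be wrapped in a short count, which is handy for scripting; a sketch:

```shell
# Count logical CPUs advertising hardware virtualization extensions.
# vmx = Intel VT-x, svm = AMD-V; 0 means no (or BIOS-disabled) support.
n=$(grep -c -E 'vmx|svm' /proc/cpuinfo || true)
echo "CPUs with vmx/svm: ${n}"
```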
3. You need all relevant packages installed on your system. One can work hard to construct an exact list. However, exact
names and requirements change with time, and most enterprise distributions ship with all (or almost all) of the software
you need.
4. The easiest and best procedure is to run the script we have already supplied to you:
Very Important
• Do not run libvirtd at the same time as another hypervisor as dire consequences are likely to arise. This can
easily include crashing your system and doing damage to any virtual machines being used.
• We recommend both stopping and disabling your other hypervisor as in:
$ sudo systemctl stop vmware
$ sudo systemctl disable vmware
or
$ sudo systemctl stop vboxdrv
$ sudo systemctl disable vboxdrv
Exercise 28.2: Using virt-manager with KVM to Install a Virtual Machine and
Run it
In this exercise we will use pre-built ISO images from TinyCoreLinux (https://www.tinycorelinux.net) because they
are cooked up very nicely and are quite small.
If you would like, you can substitute any installation iso image for another Linux distribution, such as Debian, CentOS,
Ubuntu, Fedora, OpenSUSE etc. The basic steps will be identical and only differ when you get to the installation phase for
building your new VM, which is no different from building any fresh installation on an actual physical machine.
We will give step-by-step instructions with screen capture images; if you feel confident, try just launching virt-manager
and working your way through the necessary steps, as the GUI is reasonably clearly constructed.
3. We have included three different iso install images from TinyCoreLinux in the RESOURCES/s_28 directory:
Core-current.iso
CorePlus-current.iso
TinyCore-current.iso
(You can check and see if there are newer versions upstream at https://www.tinycorelinux.net but these should
be fine.)
CorePlus-current is the largest and most robust, and we will use it because it installs with full graphics. The others are considerably
quicker to use, however.
Navigate through your file system and pick the desired image:
4. Next you have to request the amount of memory and number of CPUs or cores to use. These images are pretty minimal.
A choice of 256 MB is more than enough; you may have fun seeing how low you can go!
5. Next you have to configure the location and size of the virtual disk for the VM being created. You actually need very little for
TinyCoreLinux, but the GUI will not let you choose less than 0.1 GB (about 100 MB). (From the command line it is easy
to configure less space.)
If you do not click on Select or Create custom storage your image will be placed in /var/lib/libvirt/images.
Since images can be quite large, you might want to configure to put it elsewhere. Or you can replace the images
directory in /var/lib/libvirt with a symbolic link to somewhere else, as in:
$ cd /var/lib/libvirt
$ sudo mv images images_ORIGINAL
$ sudo mkdir /tmp/images
$ sudo ln -s /tmp/images images
(You probably want a different location for the images files than /tmp, but you get the idea.)
6. You are now ready to begin installation of your own VM from the TinyCoreLinux installation disk:
Please Note
We recommend clicking on Customize configuration before install. While you may want to make other changes,
the mouse pointer is configured by default to be a PS/2 device; it is better to add a USB tablet input pointer.
You can make other choices for the graphical interface; here we just choose the first one, the default, and hit return.
8. This will take a while and eventually you will see the following screen:
It is not obvious what to do here, but you need to see the icons at the bottom so you should resize and make the screen
taller:
9. Click on the terminal icon (or right-click on the background and open up a terminal). Note that the font is unfortunately
microscopic. Then type
tc-install
in the window.
10. Select Whole disk and sda and click the forward arrow:
11. Things will crank for a while and each step will be reflected in the output window.
When installation is complete you can go to the File menu and shut down the virtual machine.
12. Start up virt-manager again (if you have killed it) and you should now see something like:
Right click on the VM and open and run. Your new virtual machine should be up and running! (If you get confused and
think you are running the original install image, you can verify it is not that by noting there is no tc-install program in the
new disk image.
Containers Overview
29.1 Labs
Overview
In this exercise, we will install, run and test the docker package, and follow with getting and deploying httpd, the
Apache web server container.
1. Make sure Docker is installed (or emulated with podman). Pick the right command for your distribution:
$ sudo yum install docker # RHEL/CentOS 7
$ sudo dnf install podman podman-docker # RHEL/CentOS 8, Fedora
$ sudo apt install docker.io # Ubuntu, Debian
$ sudo zypper install docker # OpenSUSE
Reinstall Docker?
• If you get strange errors at later points in the exercise, you might find it useful to reinstall docker. We have
observed cases (for example, with RHEL 7) where docker configurations were broken after a system upgrade.
Very Important
You can skip to the next step on podman-based systems as there is no docker service to start!
You may want to verify that it is running properly with systemctl status docker:
If you see anything indicating failure you should inspect /var/log/messages or whatever other logging file you have
on your system for clues. If you are running a standard distribution kernel you should be fine, but if you are running a
custom Linux kernel, you will likely have to select the proper configuration options, especially as regards networking.
This is too complicated to go into here, so please stay with a distribution supplied kernel unless you want a challenging
exercise!
3. Search for the httpd container, with
$ sudo docker search apache
(You could have used httpd instead of apache in the above command with very similar results.)
From now on we will not show detailed output since if you have gotten this far, things should be fine.
This may take a couple of minutes while all the components download.
5. List the installed containers:
$ sudo docker images
7. Start the httpd docker container. The terminal will appear to hang as it is now connected to the httpd daemon.
c7:/tmp>sudo docker run httpd
AH00558: httpd: Could not reliably determine the server's fully qualified domain name,
using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
......
8. You can open a graphical web browser pointing to the IP address in the above output. (Do not use the address shown
in the output above!)
Or you can use a text-based browser (especially if you are not in a graphical environment) by opening up a new terminal
window (do not kill the one in which the docker httpd container is running!) and doing one of the following commands:
$ lynx http://172.17.0.2
$ w3m http://172.17.0.2
$ elinks http://172.17.0.2
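An alternative that avoids depending on the container's internal IP is to publish a port on the host; a hedged sketch (the container name demo-httpd and host port 8080 are arbitrary choices, not from the lab):

```shell
# Run httpd detached, with container port 80 published on host port 8080.
sudo docker run -d --name demo-httpd -p 8080:80 httpd
curl -s http://localhost:8080/ | head -n 3
# Clean up the test container when done.
sudo docker rm -f demo-httpd
```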
10. This will leave images and their associated storage under either /var/lib/docker or /var/lib/containers depend-
ing on your particular system and distribution. If you do not need to reuse them you can clean up with:
c7:/tmp>sudo docker rmi -f docker.io/httpd
Untagged: docker.io/httpd:latest
Untagged: docker.io/httpd@sha256:cf774f082e92e582d02acdb76dc84e61dcf5394a90f99119d1ae39bcecbff075
Deleted: sha256:cf6b6d2e846326d2e49e12961ee0f63d8b5386980b5d3a11b8283151602fa756
On some systems you may also need to remove the associated stopped containers; in that case the output will also contain a Deleted Containers: section listing their IDs.
30.1 Labs
1. Examine /etc/passwd and /etc/shadow, comparing the fields in each file, especially for the normal user account.
What is the same and what is different?
It should fail, because a password for user1 was never established.
4. Set the password for user1 to user1pw and then try to login again as user1.
6. Look at the /etc/default/useradd file and see what the current defaults are set to. Also look at the /etc/login.defs
file.
7. Create a user account for user2 which will use the Korn shell (ksh) as its default shell. (If you do not have /bin/ksh,
install it or use the C shell at /bin/csh.) Set the password to user2pw.
8. Look at /etc/shadow. What is the current expiration date for the user1 account?
9. Use chage to set the account expiration date of user1 to December 1, 2013.
Look at /etc/shadow to see what the new expiration date is.
Solution 30.1
(You can use any normal user name in the place of student.) About the only thing that matches is the user name field.
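A quick way to line the fields up is to print them by name; a minimal sketch for one account (root here; substitute any user):

```shell
# /etc/passwd fields: name:password:UID:GID:comment:home:shell
# The password field is normally just "x"; the actual hash lives in
# /etc/shadow, which is readable only by root.
awk -F: '$1 == "root" { printf "user=%s uid=%s gid=%s home=%s shell=%s\n", $1, $3, $4, $6, $7 }' /etc/passwd
```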
2. $ sudo useradd user1
3. $ ssh user1@localhost
user1@localhost's password:
Note you may have to first start up the sshd service as in:
$ sudo service sshd restart
or
$ sudo systemctl restart sshd.service
$ cat /etc/login.defs
....
We don’t reproduce the second file as it is rather long, but examine it on your system.
7. $ sudo useradd -s /bin/ksh user2
$ sudo passwd user2
Changing password for user user2.
New password:
user1:$6$OBE1mPMw$CIc7urbQ9ZSnyiniVOeJxKqLFu8fz4whfEexVem2TFpucuwRN1CCHZ19XGhj4qVujslRIS.P4aCXd/y1U4utv.:16372:0:99999:7::16040:
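The 16040 in the next-to-last field is the account expiration date stored as days since the epoch (1970-01-01). Assuming GNU date, you can verify that it corresponds to the December 1, 2013 date set with chage:

```shell
# Convert a calendar date to the days-since-epoch value chage stores in
# field 8 of /etc/shadow; -u keeps the result timezone-independent.
echo $(( $(date -u -d 2013-12-01 +%s) / 86400 ))   # prints 16040
```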
2. Set up a restricted account and verify its restricted nature, then clean up.
Please Note
• On some distributions, notably some Ubuntu-based versions, there is a bug which prevents this lab from behaving
properly. On Red Hat-based distributions, the correct behaviour described above is observed.
• As noted earlier, the use of restricted shells is deprecated as they are really not secure and there are better
methods available. However, you may run into them which is why we discuss this facility.
Solution 30.2
1. c7:/tmp>rbash -r
c7:/tmp>cd $HOME
rbash: cd: restricted
c7:/tmp>PATH=$PATH:/tmp
rbash: PATH: readonly variable
c7:/tmp>exit
exit
c7:/home/coop>sudo su - fool
Last failed login: Tue Oct 25 14:15:54 CDT 2016 on pts/1
There was 1 failed login attempt since the last successful login.
Attempting to create directory /home/fool/perl5
Group Management
31.1 Labs
1. Create two new user accounts (rocky and bullwinkle in the below) and make sure they have home directories.
2. Create two new groups, friends and bosses (with a GID of 490). Look at /etc/group. See what GID was given to
each new group.
4. Login as rocky. Create a directory called somedir and set its group ownership to bosses. (Use chgrp, which will be
discussed in the next session.)
(You will probably need to add execute permission for all on rocky’s home directory.)
5. Login as bullwinkle and try to create a file in /home/rocky/somedir called somefile using the touch command.
Can you do this? No, because of the group ownership and the chmod a+x on the directory.
6. Add bullwinkle to the bosses group and try again. Note that you will have to log out and log back in again for the new
group membership to take effect.
Solution 31.1
1. $ sudo useradd -m rocky
$ sudo useradd -m bullwinkle
$ sudo passwd rocky
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
$ ls -l /home
total 12
drwxr-xr-x 2 bullwinkle bullwinkle 4096 Oct 30 09:39 bullwinkle
drwxr-xr-x 2 rocky rocky 4096 Oct 30 09:39 rocky
drwxr-xr-x 20 student student 4096 Oct 30 09:18 student
$ chmod a+x .
$ exit
32.1 Labs
It is possible to either give permissions directly, or add or subtract permissions. The syntax is pretty obvious. Try the following
examples:
$ ls -l afile
$ touch afile
$ ls -l afile
which shows it is created by default with both read and write permissions for owner and group, but only read for world.
In fact, at the operating system level the default permissions given when creating a file or directory are actually read/write for
owner, group and world (0666); the default values have actually been modified by the current umask.
$ umask
0002
which is the most conventional value set by system administrators for users. This value is combined with the file creation
permissions to get the actual result; i.e.,
Try modifying the umask and creating new files and see the resulting permissions, as in:
$ umask 0022
$ touch afile2
$ umask 0666
$ touch afile3
$ ls -l afile*
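The combination rule is simply: resulting mode = requested mode AND NOT umask. A minimal sketch you can run in any scratch directory:

```shell
# With umask 0027 a newly created file gets 0666 & ~0027 = 0640 (rw-r-----).
umask 0027
f=/tmp/umask_demo_$$
rm -f "$f"
touch "$f"
stat -c '%a' "$f"    # prints 640
rm -f "$f"
```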
1. Create a file using your usual user name and run getfacl on it to see its properties.
2. Create a new user account with default properties (or reuse one from previous exercises).
3. Login as that user and try to add a line to the file you created in the first step. This should fail.
4. Use setfacl to make the file writeable by the new user and try again.
5. Use setfacl to make the file not readable by the new user and try again.
6. Clean up as necessary.
Solution 32.3
It is probably easiest to open two terminal windows: one to work in as your normal user account, and the other as the secondary
one.
1. In window 1:
$ echo This is a file > /tmp/afile
$ getfacl /tmp/afile
getfacl: Removing leading '/' from absolute path names
# file: tmp/afile
# owner: coop
# group: coop
user::rw-
group::rw-
other::r--
2. In window 1:
$ sudo useradd fool
$ sudo passwd fool
...
3. In window 2:
$ sudo su - fool
$ echo another line > /tmp/afile
-bash: /tmp/afile: Permission denied
4. In window 1:
$ setfacl -m u:fool:rw /tmp/afile
$ getfacl /tmp/afile
In window 2:
$ echo another line > /tmp/afile
5. In window 1:
$ setfacl -m u:fool:w /tmp/afile
In window 2:
$ echo another line > /tmp/afile
-bash: /tmp/afile: Permission denied
6. Cleaning up:
$ rm /tmp/afile
$ sudo userdel -r fool
Network Addresses
34.1 Labs
Please Note
There are no lab exercises in this chapter. It just sets the stage for the following section on network configuration, which
has several labs.
35.1 Labs
Please Note
You may have to use a different network interface name than eth0. You can most easily do this exercise with nmtui or
your system’s graphical interface. We will present a command-line solution, but be aware that the details may not exactly
fit your distribution.
1. Show your current IP address, default route and DNS settings for eth0. Keep a copy of them for resetting later.
2. Bring down eth0 and reconfigure it to use a static address instead of DHCP, using the information you just recorded.
3. Bring the interface back up, and configure the nameserver resolver with the information that you noted before.
Verify your hostname and then ping it.
You will probably want to restore your configuration when you are done.
Solution 35.1
or
$ ifconfig eth0
$ route -n
$ cp /etc/resolv.conf resolv.conf.keep
or
$ sudo ifconfig eth0 down
On RedHat / CentOS
Make sure the following is in /etc/sysconfig/network-scripts/ifcfg-eth0 on Red Hat-based systems:
in /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=noted from step 1
NETMASK=noted from step 1
GATEWAY=noted from step 1
in /etc/sysconfig/network or /etc/network/interfaces
or
$ sudo ifconfig eth0 up
4. $ sudo reboot
$ ping hostname
1. Open /etc/hosts and add an entry for mysystem.mydomain that will point to the IP address associated with your
network card.
2. Add a second entry that will make all references to ad.doubleclick.net point to 127.0.0.1.
3. As an optional exercise, download the hosts file from http://winhelp2002.mvps.org/hosts2.htm, or more directly
from http://winhelp2002.mvps.org/hosts.txt, and install it on your system. Do you notice any difference using
your browser with and without the new hosts file in place?
Solution 35.2
1. $ sudo sh -c "echo 192.168.1.180 mysystem.mydomain >> /etc/hosts"
$ ping mysystem.mydomain
3. $ wget http://winhelp2002.mvps.org/hosts.txt
--2014-11-01 08:57:12-- http://winhelp2002.mvps.org/hosts.txt
Resolving winhelp2002.mvps.org (winhelp2002.mvps.org)... 216.155.126.40
Connecting to winhelp2002.mvps.org (winhelp2002.mvps.org)|216.155.126.40|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 514744 (503K) [text/plain]
Saving to: hosts.txt
shows the address as 172.16.2.135. Note that this command shows all information about the connection, and you could
have specified the UUID instead of the NAME, as in:
$ nmcli con show 1c46bf37-2e4c-460d-8b20-421540f7d0e2
$ ping -c 3 172.16.2.140
PING 172.16.2.140 (172.16.2.140) 56(84) bytes of data.
64 bytes from 172.16.2.140: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 172.16.2.140: icmp_seq=2 ttl=64 time=0.034 ms
64 bytes from 172.16.2.140: icmp_seq=3 ttl=64 time=0.032 ms
1. Begin by examining your current routing tables, using both route and ip:
$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.16.2.2 0.0.0.0 UG 100 0 0 ens33
link-local * 255.255.0.0 U 1000 0 0 ens33
172.16.2.0 * 255.255.255.0 U 100 0 0 ens33
192.168.122.0 * 255.255.255.0 U 0 0 0 virbr0
$ ip route
default via 172.16.2.2 dev ens33 proto static metric 100
169.254.0.0/16 dev ens33 scope link metric 1000
172.16.2.0/24 dev ens33 proto kernel scope link src 172.16.2.135 metric 100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.16.2.2 0.0.0.0 UG 100 0 0 ens33
link-local * 255.255.0.0 U 1000 0 0 ens33
172.16.2.0 * 255.255.255.0 U 100 0 0 ens33
192.168.100.0 172.16.2.1 255.255.255.0 UG 100 0 0 ens33
192.168.122.0 * 255.255.255.0 U 0 0 0 virbr0
5. Reboot and verify the route has taken effect (i.e., it is persistent). If so, remove it:
$ route
6. Note you can set a route with either route or ip from the command line, but it will not survive a reboot, as in:
$ sudo ip route add 192.168.100.0/24 via 172.16.2.1
$ sudo route
....
You can verify that a route established this way is not persistent.
Firewalls
36.1 Labs
/usr/sbin/firewalld
/usr/bin/firewall-cmd
If you fail to find the program, then you need to install it in the usual way for your distribution, with one of the following:
If this fails, the firewalld package is not available for your distribution. In this case you will have to install from source.
To do this, go to https://fedorahosted.org/firewalld/, where you can get the git source repository or easily
download the most recent tarball.
Then you have to follow the common procedure for installing from source (using whatever the current version is):
You will have to deal with any inadequacies that come up in the ./configure step, such as missing libraries etc. When you
install from a packaging system, the distribution takes care of this for you, but from source it can be problematic. If you have
run the Linux Foundation’s ready-for.sh script on your system, you are unlikely to have problems.
$ firewall-cmd --help
For a more detailed explanation of anything which piques your interest, do man firewall-cmd, which explains things more deeply,
and man firewalld, which gives an overview as well as a listing of other man pages that describe the various configuration
files in /etc and elucidate concepts such as zones and services.
Solution 36.3
$ sudo firewall-cmd --zone=public --add-service=http
success
success
dhcpv6-client ssh
after adding the new services, they would disappear from the list! This curious behavior is because we did not include the
--permanent flag when adding the services, and the --reload option reloads the known persistent services only.
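The persistent pattern is therefore to pass --permanent and then reload; a hedged sketch (requires root and a running firewalld):

```shell
# Write the rule to the permanent configuration, then reload so it is
# also active now; it will survive both --reload and a reboot.
sudo firewall-cmd --zone=public --add-service=http --permanent
sudo firewall-cmd --reload
sudo firewall-cmd --zone=public --list-services
```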
We have concentrated on the command line approach simply because we want to be distribution-flexible. However, for most
relatively simple firewall configuration tasks, you can probably do them efficiently with less memorization from the GUI.
Once you launch the firewall configuration GUI, do the previous exercise of adding http and https to the public zone, and
verify that it has taken effect.
Make sure you take the time to understand the graphical interface.
37.1 Labs
Solution 37.1
GRUB
38.1 Labs
Please Note
This exercise requires that it be run from the console (i.e., not over SSH).
1. Reboot your machine and go into the GRUB interactive shell by hitting e (or whatever other key is required, as listed on
your screen).
2. Make your system boot into non-graphical mode. How you do this depends on the system.
On traditional systems that respect runlevels (which we will talk about in the next section), you can append a 3 to the
kernel command line in the specific entry you pick from the GRUB menu of choices. This will still work on systemd
systems that bother to emulate SysVinit runlevels.
On some other systems you may need to append text instead.
4. After the system is fully operational in non-graphical mode, bring it up to graphical mode. Depending on your system,
one of the following commands should do it:
39.1 Labs
First we have to create the service-specific script; you can create one of your own for fun, or, to get the procedure down, just
(as root) create a file named /etc/init.d/fake_service (which can be extracted from your downloaded SOLUTIONS file as
fake_service) containing the following:
/etc/init.d/fake_service
#!/bin/bash
# fake_service
# Starts up, writes to a dummy file, and exits
#
# chkconfig: 35 69 31
# description: This service doesn't do anything.
# Source the configuration file
. /etc/sysconfig/fake_service
case "$1" in
start) echo "Running fake_service in start mode..."
touch /var/lock/subsys/fake_service
echo "$0 start at $(date)" >> /var/log/fake_service.log
if [ "${VAR1}" = "true" ]
then
echo "VAR1 set to true" >> /var/log/fake_service.log
fi
echo
;;
stop)
echo "Running the fake_service script in stop mode..."
echo "$0 stop at $(date)" >> /var/log/fake_service.log
if [ "${VAR2}" = "true" ]
then
echo "VAR2 = true" >> /var/log/fake_service.log
fi
rm -f /var/lock/subsys/fake_service
echo
;;
*)
echo "Usage: fake_service {start | stop}"
exit 1
esac
exit 0
If you are taking the online self-paced version of this course, the script is available for download from your Lab screen.
Make the file above executable and give other proper permissions:
You’ll notice the script sources the file /etc/sysconfig/fake_service. (On non-RHEL systems you should change this to
/etc/default/fake_service.) Create it and give it the following contents:
Test to see if the script works properly by running the following commands:
For fun you can add additional modes like restart to the script file; look at other scripts in the directory to get examples of
what to do.
Next we will want the ability to start fake_service whenever the system starts, and stop it when it shuts down. If you
do:
you will get an error as it hasn’t been set up yet for this. You can easily do this with:
To test this completely you’ll have to reboot the system to see if it comes on automatically. You can also try varying the
runlevels in which the service is running.
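On chkconfig-based systems the registration commands are the standard ones below (a sketch; on Debian-family systems update-rc.d plays the same role):

```shell
# Register the script at the runlevels named in its "# chkconfig:" header,
# then enable it and confirm its per-runlevel settings.
sudo chkconfig --add fake_service
sudo chkconfig fake_service on
chkconfig --list fake_service
```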
The analogous procedure is to create (as root) a file directly under /etc/systemd/system or somewhere else in that directory
tree; distributions have somewhat varying tastes on this. For example, a very minimal file named /etc/systemd/system/fake2.service
(which can be extracted from your downloaded SOLUTIONS file as fake2.service) contains the following:
fake2.service
[Unit]
Description=fake2
After=network.target
[Service]
ExecStart=/bin/sh -c '/bin/echo I am starting the fake2 service ; /bin/sleep 30'
ExecStop=/bin/echo I am stopping the fake2 service
[Install]
WantedBy=multi-user.target
Now there are many things that can go in this unit file. The After=network.target means the service should start only after
the network does, while WantedBy=multi-user.target means it should start when we reach multi-user mode; this
is equivalent to runlevels 2 and 3 in SysVinit. Note that graphical.target would correlate with runlevel 5.
Now all we have to do to start, stop and check the service status are to issue the commands:
If you are fiddling with the unit file while doing this you’ll need to reload things with:
(use /var/log/syslog on Ubuntu) either in the background or in another window while the service is running.
To set things up so the service turns on or off on system boot:
Once again, you really need to reboot to make sure it has taken effect.
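Putting the lifecycle together, and assuming the unit was installed as /etc/systemd/system/fake2.service as above, the standard systemctl invocations are:

```shell
sudo systemctl daemon-reload             # pick up a new or edited unit file
sudo systemctl start fake2.service
systemctl status fake2.service --no-pager
sudo systemctl stop fake2.service
sudo systemctl enable fake2.service      # start automatically at boot
```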
40.1 Labs
1. Create a directory called backup and in it place a compressed tar archive of all the files under /usr/include, with the
highest-level directory being include. You can use any compression method (gzip, bzip2 or xz).
3. Create a directory called restore and unpack and decompress the archive.
4. Compare the contents with the original directory the archive was made from.
Solution 40.1
1. $ mkdir /tmp/backup
$ cd /usr ; tar zcvf /tmp/backup/include.tar.gz include
$ cd /usr ; tar jcvf /tmp/backup/include.tar.bz2 include
$ cd /usr ; tar Jcvf /tmp/backup/include.tar.xz include
or
$ tar -C /usr -zcf include.tar.gz include
$ tar -C /usr -jcf include.tar.bz2 include
$ tar -C /usr -Jcf include.tar.xz include
2. $ ls -lh include.tar.*
c7:/tmp/backup>ls -lh
total 17M
-rw-rw-r-- 1 coop coop 5.3M Jul 18 08:17 include.tar.bz2
-rw-rw-r-- 1 coop coop 6.7M Jul 18 08:16 include.tar.gz
-rw-rw-r-- 1 coop coop 4.5M Jul 18 08:18 include.tar.xz
c7:/tmp/backup>
Note it is not necessary to give the j, J, or z option when decompressing; tar is smart enough to figure out what is
needed.
4. $ cd .. ; mkdir restore ; cd restore
$ tar xvf ../backup/include.tar.bz2
include/
include/unistd.h
include/re_comp.h
include/regex.h
include/link
.....
$ diff -qr include /usr/include
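The create/extract/compare cycle above can be sketched end to end on a small scratch directory, which is a cheap way to convince yourself an archive is sound:

```shell
# Round trip: archive a directory, extract elsewhere, verify with diff -qr.
src=$(mktemp -d)
echo hello > "$src/afile"
tar -C "$(dirname "$src")" -czf /tmp/demo-backup.tar.gz "$(basename "$src")"
dest=$(mktemp -d)
tar -C "$dest" -xzf /tmp/demo-backup.tar.gz
diff -qr "$src" "$dest/$(basename "$src")" && echo "round trip OK"
rm -rf "$src" "$dest" /tmp/demo-backup.tar.gz
```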
1. Create a directory called backup and in it place a compressed cpio archive of all the files under /usr/include, with
the highest-level directory being include. You can use any compression method (gzip, bzip2 or xz).
3. Create a directory called restore and unpack and decompress the archive.
4. Compare the contents with the original directory the archive was made from.
Solution 40.2
1. $ (cd /usr ; find include | cpio -c -o > /home/student/backup/include.cpio)
82318 blocks
$ ls -lh include*
total 64M
-rw-rw-r-- 1 coop coop 41M Nov 3 15:26 include.cpio
-rw-rw-r-- 1 coop coop 6.7M Nov 3 15:28 include.cpio.gz
-rw-rw-r-- 1 coop coop 5.3M Nov 3 14:44 include.tar.bz2
-rw-rw-r-- 1 coop coop 6.8M Nov 3 14:44 include.tar.gz
-rw-rw-r-- 1 coop coop 4.7M Nov 3 14:46 include.tar.xz
Note the redirection of input; the archive is not an argument. One could also do:
$ cd ../restore
$ cat ../backup/include.cpio | cpio -ivt
$ gunzip -c include.cpio.gz | cpio -ivt
3. $ rm -rf include
$ cpio -id < ../backup/include.cpio
$ ls -lR include
or
$ cpio -idv < ../backup/include.cpio
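The same round trip works with cpio; note the archive always travels through standard input or output, never as an argument. A sketch (assuming cpio is installed):

```shell
src=$(mktemp -d)
echo hi > "$src/f"
# -o writes an archive of the file names read on stdin; -c selects a portable format.
(cd "$(dirname "$src")" && find "$(basename "$src")" | cpio -c -o) > /tmp/demo.cpio
dest=$(mktemp -d)
(cd "$dest" && cpio -id) < /tmp/demo.cpio
diff -qr "$src" "$dest/$(basename "$src")"
rm -rf "$src" "$dest" /tmp/demo.cpio
```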
1. Using rsync, we will again create a complete copy of /usr/include in your backup directory:
$ rm -rf include
$ rsync -av /usr/include .
sending incremental file list
include/
include/FlexLexer.h
include/_G_config.h
include/a.out.h
include/aio.h
.....
2. Let’s run the command a second time and see if it does anything:
$ rsync -av /usr/include .
sending incremental file list
3. One confusing thing about rsync is that you might have expected the right command to be:
$ rsync -av /usr/include include
sending incremental file list
...
However, if you do this, you’ll find it actually creates a new directory, include/include!
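This is rsync's trailing-slash rule: a source written as include copies the directory itself into the destination, while include/ copies only its contents. A sketch (assuming rsync is installed):

```shell
src=$(mktemp -d); touch "$src/a"
d1=$(mktemp -d); d2=$(mktemp -d)
rsync -a "$src"  "$d1"   # no trailing slash: creates d1/<srcname>/a
rsync -a "$src/" "$d2"   # trailing slash:    creates d2/a directly
ls "$d1/$(basename "$src")/a" "$d2/a"
rm -rf "$src" "$d1" "$d2"
```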
4. To get rid of the extra files you can use the --delete option:
$ rsync -av --delete /usr/include .
5. For another simple exercise, remove a subdirectory tree in your backup copy and then run rsync again with and without
the --dry-run option:
$ rm -rf include/xen
$ rsync -av --delete --dry-run /usr/include .
sending incremental file list
include/
include/xen/
include/xen/evtchn.h
include/xen/privcmd.h
#!/bin/sh
set -x
which will work on a local machine as well as over the network. Note the important -x option, which stops rsync from
crossing filesystem boundaries.
Extra Credit
For more fun, if you have access to more than one computer, try doing these steps with source and destination on
different machines.
41.1 Labs
Please Note
This exercise can only be performed on a system (such as RHEL) where SELinux is installed. While it is possible to
install it on Debian-based distributions, such as Ubuntu, it is not an easy task and it is not often done.
1. Verify SELinux is enabled and in enforcing mode, by executing getenforce and sestatus. If not, edit /etc/selinux/
config, reboot, and check again.
2. Install the httpd package (if not already present) which provides the Apache web server, and then verify that it is
working:
(You can also use lynx or elinks etc. as the browser, or use your graphical browser such as firefox or chrome, in this
and succeeding steps.)
3. As superuser, create a small file in /var/www/html:
$ sudo sh -c "echo file1 > /var/www/html/file1.html"
Now create another small file in root's home directory and move it to /var/www/html. (Do not copy it, move it!) Then
try to view it:
$ sudo sh -c "echo file2 > /root/file2.html"
$ sudo mv /root/file2.html /var/www/html
$ elinks -dump http://localhost/file2.html
Forbidden
Please Note
This exercise can only be performed on a system (such as Ubuntu) where AppArmor is installed.
The below was tested on Ubuntu, but should work on other AppArmor-enabled systems, such as OpenSUSE, where
the apt commands should be replaced by zypper.
On Ubuntu, the /bin/ping utility runs with SUID enabled. For this exercise, we will copy ping to ping-x and adjust the
capabilities so the program functions.
Then we will build an AppArmor profile, install it, and verify that nothing has changed. Modifying the AppArmor profile
to add capabilities will then allow the program more functionality.
2. Create a copy of ping (called ping-x) and verify it has no initial special permissions or capabilities. Furthermore, it
cannot work when executed by student, a normal user:
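The elided commands presumably resemble the following sketch, which copies ping, confirms there are no file capabilities, and then grants cap_net_raw so ping-x can open raw sockets:

```
$ sudo cp /bin/ping /bin/ping-x
$ getcap /bin/ping-x              # no output: no capabilities set
$ ping-x -c1 127.0.0.1
ping: socket: Operation not permitted
$ sudo setcap cap_net_raw+ep /bin/ping-x
$ ping-x -c1 127.0.0.1            # now succeeds
```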
The output from aa-status is long, so we can grep for the interesting lines:
student@ubuntu:~$ sudo aa-status | grep -e "^[[:alnum:]]" -e ping
We can see ping has a profile that is loaded and enabled for enforcement.
5. Next we will construct a new profile for ping-x. This step requires two terminal windows.
The first window (window1) will run the aa-genprof command, which generates an AppArmor profile by scanning
/var/log/syslog for AppArmor messages.
The second window (window2) will be used to run ping-x. (See the man page for aa-genprof for additional information.)
In window1:
student@ubuntu:~$ sudo aa-genprof /bin/ping-x
Writing updated profile for /bin/ping-x.
Setting /bin/ping-x to complain mode.
Profiling: /bin/ping-x
In window2:
student@ubuntu:~$ ping-x -c3 -4 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.120 ms
In window1:
Now that the ping-x command has completed, we must instruct aa-genprof to scan for the required information to be added
to the profile. It may take several scans to collect all of the information for the profile.
Enter S to scan:
Reading log entries from /var/log/syslog.
Updating AppArmor profiles in /etc/apparmor.d.
Complain-mode changes:
Profile: /bin/ping-x
Capability: net_raw
Severity: 8
[1 - capability net_raw,]
(A)llow / [(D)eny] / (I)gnore / Audi(t) / Abo(r)t / (F)inish
Profile: /bin/ping-x
Network Family: inet
Socket Type: raw
Profile: /bin/ping-x
Network Family: inet
Socket Type: dgram
[1 - #include <abstractions/nameservice>]
2 - network inet dgram,
(A)llow / [(D)eny] / (I)gnore / Audi(t) / Abo(r)t / (F)inish
The following local profiles were changed. Would you like to save them?
[1 - /bin/ping-x]
(S)ave Changes / Save Selec(t)ed Profile / [(V)iew Changes] / View Changes b/w (C)lean profiles / Abo(r)t
Profiling: /bin/ping-x
Enter F to finish:
/bin/ping-x {
#include <abstractions/base>
#include <abstractions/nameservice>
capability net_raw,
/bin/ping-x mr,
/lib/x86_64-linux-gnu/ld-*.so mr,
7. The aa-genprof utility installs and activates the new policy, so it should be ready to use; the policies can also be
reloaded on demand with the systemctl reload apparmor command. To avoid any potential issues, and to verify the
changes will survive, reboot the system.
Once the system has restarted, as the user student, verify ping-x still functions with the new profile enabled. Ping
localhost by IP address:
student@ubuntu:~$ ping-x -c3 -4 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.043 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.095 ms
8. This should work as expected. The profile is very specific, and AppArmor will not allow functionality outside of the
specified parameters. To verify AppArmor is protecting this application, try to ping the IPV6 localhost address.
This should fail:
student@ubuntu:~$ ping-x -c3 -6 ::1
ping: socket: Permission denied
(Note: the -6 option means use only IPv6, and ::1 is the localhost address in IPv6.)
The output indicates there is a socket issue. If the system log is examined it will be discovered that our ping-x program
has no access to IPv6 within AppArmor:
766:104): apparmor="DENIED" operation="create" profile="/bin/ping-x"
pid=2709 comm="ping-x" family="inet6" sock_type="raw" protocol=58
requested_mask="create" denied_mask="create"
9. To correct this deficiency, re-run aa-genprof as we did earlier; in window2, ping the IPv6 loopback address
(ping-x -c3 -6 ::1) and allow the additional entries when prompted.
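The resulting profile additions for IPv6 would presumably be along these lines (a sketch of entries in /etc/apparmor.d; the exact rules depend on what aa-genprof finds in the log):

```
network inet6 raw,
network inet6 dgram,
```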
42.1 Labs
2. Copy an executable file to it from somewhere else on your system and test that it works in the new location.
4. Test if the executable still works. It should give you an error because of the noexec mount option.
5. Clean up.
Solution 42.1
or
$ sudo mount -o noexec,remount image mountpoint
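The elided earlier steps of the solution (create a loopback image, make a filesystem, mount it with noexec) might be sketched as follows; image and mountpoint are the names used in the rest of the solution, and the size is arbitrary:

```
$ dd if=/dev/zero of=image bs=1M count=64
$ mkfs.ext3 image
$ mkdir mountpoint
$ sudo mount -o loop,rw,noexec image mountpoint
$ sudo cp /bin/ls mountpoint/
```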
4. $ mountpoint/ls
Note that this is not persistent. To make it persistent you would need to add the option to /etc/fstab with a line like:
in /etc/fstab
/home/student/image /home/student/mountpoint ext3 loop,rw,noexec 0 0
writeit.c

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>

$ make writeit
or equivalently
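The equivalent direct invocation (what make's implicit rule runs, with cc defaulting to the system compiler) would be something like:

```
$ cc writeit.c -o writeit
```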
If (as a normal user) you try to run this program on a file owned by root you’ll get
wrote -1 bytes
$ sudo ./writeit
wrote 15 bytes
Thus, the root user was able to overwrite the file it owned, but a normal user could not.
Note that changing the owner of writeit to root does not help:
wrote -1 bytes
By setting the setuid bit you can make any normal user capable of doing it:
wrote 15 bytes
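The commands elided above presumably set root ownership and the setuid bit on the executable, along these lines:

```
$ sudo chown root writeit
$ sudo chmod u+s writeit
$ ./writeit
wrote 15 bytes
```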
Please Note
You may be asking: why didn't we just write a script to do such an operation, rather than write and compile an
executable program?
Under Linux, setting the setuid bit on an executable script does nothing; it would only take effect if you set the
setuid bit on the shell itself (such as bash), which would be a big mistake: anything running from then on would have
escalated privileges!
Basic Troubleshooting
43.1 Labs
Please Note
There are no lab exercises in this chapter. It just summarizes points discussed earlier about configuring and
monitoring the system, and sets the stage for the following section on system rescue, which has several
labs.
System Rescue
44.1 Labs
Very Important
In the following exercises we are going to deliberately damage the system and then recover through the use of rescue
media. Thus, it is obviously prudent to make sure you can indeed boot off the rescue media before you try anything
more ambitious.
So first make sure you have rescue media: either a dedicated rescue/recovery image, or an install or Live image on
either an optical disk or a USB drive.
Boot off it and make sure you know how to force the system to boot off the rescue media (you are likely to have to fiddle
with the BIOS settings), and when the system boots, choose rescue mode.
Please Note
If you are using a virtual machine, the procedure is logically the same with two differences:
• Getting to the BIOS might be difficult depending on the hypervisor you use. Some of them require very rapid
keystrokes, so read the documentation and make sure you know how to do it.
• You can use a physical optical disk or drive, making sure the virtual machine settings have it mounted, and if it
is USB you may have some other hurdles to make sure the virtual machine can claim the physical device. It is
usually easier to simply connect a .iso image file directly to the virtual machine.
If you are working with a virtual machine, things are obviously less dangerous, and if you are afraid of corrupting the
system in an unfixable way, simply make a backup copy of the virtual machine image before you do these exercises;
you can always restore from it later.
Very Important
Do not do the following exercises unless you are sure you can boot your system off rescue/recovery media!
On a BLSCFG System
You can corrupt the command line by editing /etc/grub2/grubenv instead.
2. Reboot the machine. The system will fail to boot, saying something like No root device was found. You will also see
that a panic occurred.
3. Insert the installation or Live DVD, CD, or USB drive into your machine (or use network boot media, if you have
access to a functioning installation server). Reboot again. When the boot menu appears, choose to enter rescue mode.
4. As an alternative, you can try selecting a rescue image from the GRUB menu; most distributions offer this. You’ll get the
same experience as using rescue media, but it will not always work. For example, if the root filesystem is damaged it
will be impossible to do anything.
5. In rescue mode, agree when asked to search for filesystems. If prompted, open a shell, and explore the rescue system
by running utilities such as mount and ps.
6. Repair your broken system by fixing your GRUB configuration file, either by editing it or restoring from a backup copy.
7. Type exit to return to the installer, remove the boot media, and follow the instructions on how to reboot. Reboot your
machine. It should come up normally.
1. As root (not with sudo), change the root password. We will pretend we don’t know what the new password is.
2. Log out and try to login again as root using the old password. Obviously you will fail.
3. Boot using the rescue media, and select Rescue when given the option. Let it mount filesystems and then go to a
command line shell.
4. Go into your chroot-ed environment (so you have normal access to your systems):
$ chroot /mnt/sysimage
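Inside the chroot-ed environment, reset the password before exiting; for the root account this is simply:

```
$ passwd root
```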
5. Exit, remove the rescue media, and reboot; you should be able to log in normally now.
Very Important
This exercise is dangerous and could lead to an unusable system. Make sure you really understand things before
doing it.
Please Note
The following instructions are for an MBR system. If you have GPT, you need to use sgdisk with the --backup-file and
--load-backup options, as discussed in the partitioning chapter.
Be careful: make sure you issue the exact command above and that the file saved has the right length:
$ sudo ls -l /root/mbrsave
-rw-r--r-- 1 root root 446 Nov 12 07:54 mbrsave
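The save command referred to is presumably a dd of the first 446 bytes of the boot drive (the boot-code portion of the MBR, leaving the partition table alone, which matches the file length shown); /dev/sda is an assumed device name:

```
$ sudo dd if=/dev/sda of=/root/mbrsave bs=446 count=1    # save
$ sudo dd if=/root/mbrsave of=/dev/sda bs=446 count=1    # restore, from rescue
```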
5. Exit from the rescue environment and reboot. The system should boot properly now.
Please Note
This exercise has been specifically written for Red Hat-based systems. You should be able to easily construct the
appropriate substitutions for other distribution families.
or
$ rpm -e zsh
Note we have chosen a package that generally has no dependencies, to simplify matters. If you choose one that does
have dependencies, you will have to watch your step below, so that anything else you remove is reinstalled as needed.
The --force option tells rpm to proceed even in the face of conflicts, such as reinstalling a package version it
already knows about. Note that if the install image is much older than your system, which has had many updates, the
whole procedure might collapse!
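The elided reinstall command is presumably of this form, run against the mounted install media (the path is a placeholder assumption):

```
$ sudo rpm -ivh --force /path/to/install/media/Packages/zsh-*.rpm
```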
6. $ zsh
....
[coop@q7]/tmp/LFS201%