need to take the security of your installation more seriously. In this chapter, we will first discuss
security policies, highlighting various points to consider when defining such a policy, and outlin-
ing some of the threats to your system and to you as a security professional. We will also discuss
security measures for desktop and laptop systems and focus on firewalls and packet filtering. Fi-
nally, we will discuss monitoring tools and strategies and show you how to best implement them
to detect potential threats to your system.
It is impractical to discuss security in broad strokes since the idea represents a vast range of con-
cepts, tools, and procedures, none of which apply universally. Choosing among them requires a
precise idea of what your goals are. Securing a system starts with answering a few questions. Rush-
ing headlong into implementing an arbitrary set of tools runs the risk of focusing on the wrong
aspects of security.
It is usually best to determine a specific goal. A good approach to help with that determination
starts with the following questions:
• What are you trying to protect? The security policy will be different depending on whether
you want to protect computers or data. In the latter case, you also need to know which data.
• What are you trying to protect against? Is it leakage of confidential data? Accidental data
loss? Revenue loss caused by disruption of service?
• Also, who are you trying to protect against? Security measures will be quite different for
guarding against a typo by a regular user of the system versus protecting against a deter-
mined external attacker group.
The term ”risk” is customarily used to refer collectively to these three factors: what to protect,
what should be prevented, and who might make this happen. Modeling the risk requires answers
to these three questions. From this risk model, a security policy can be constructed and the policy
can be implemented with concrete actions.
Permanent Questioning Bruce Schneier, a world expert in security matters (not only computer security), tries
to counter one of security’s most important myths with a motto: “Security is a process,
not a product.” Assets to be protected change over time and so do threats and the
means available to potential attackers. Even if a security policy has initially been
perfectly designed and implemented, you should never rest on your laurels. The risk
components evolve and the response to that risk must evolve accordingly.
Extra constraints are also worth taking into account as they can restrict the range of available
policies. How far are you willing to go to secure a system? This question has a major impact on
which policy to implement. Too often, the answer is only defined in terms of monetary cost, but other elements should also be considered, such as the inconvenience imposed on legitimate users or a degradation in performance.
7.2. Possible Security Measures

As the previous section explained, there is no single response to the question of how to secure Kali
Linux. It all depends on how you use it and what you are trying to protect.
7.2.1. On a Server
If you run Kali Linux on a publicly accessible server, you most likely want to secure network ser-
vices by changing any default passwords that might be configured (see section 7.3, “Securing Net-
work Services” [page 159]) and possibly also by restricting their access with a firewall (see sec-
tion 7.4, “Firewall or Packet Filtering” [page 159]).
If you hand out user accounts either directly on the server or on one of the services, you want to
ensure that you set strong passwords (they should resist brute-force attacks). At the same time,
you might want to set up fail2ban, which will make it much harder to brute-force passwords over
the network (by filtering away IP addresses that exceed a limit of failed login attempts). Install
fail2ban with apt update followed by apt install fail2ban.
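As a minimal sketch, the installation and a basic jail tuned for SSH might look like the following; the jail settings shown here (placed in a jail.local override) are illustrative values, not Kali defaults, so adapt them to your needs:

# apt update
# apt install fail2ban
# cat > /etc/fail2ban/jail.local <<EOF
[sshd]
enabled  = true
maxretry = 5
bantime  = 600
EOF
# systemctl restart fail2ban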
If you run web services, you probably want to host them over HTTPS to prevent network interme-
diaries from sniffing your traffic (which might include authentication cookies).
7.2.2. On a Laptop
The laptop of a penetration tester is not subject to the same risks as a public server: for instance,
you are less likely to be subject to random scans from script kiddies and even when you are, you
probably won’t have any network services enabled.
Real risk often arises when you travel from one customer to the next. For example, your laptop
could be stolen while traveling or seized by customs. That is why you most likely want to use full
disk encryption (see section 4.2.2, “Installation on a Fully Encrypted File System” [page 88]) and
possibly also setup the “nuke” feature (see “Adding a Nuke Password for Extra Safety” [page 250]):
the data that you have collected during your engagements are confidential and require the utmost
protection.
You may also need firewall rules (see section 7.4, “Firewall or Packet Filtering” [page 159]) but not
for the same purpose as on the server. You might want to forbid all outbound traffic except the
traffic generated by your VPN access. This is meant as a safety net, so that when the VPN is down,
you immediately notice it (instead of falling back to the local network access). That way, you
do not divulge the IP addresses of your customers when you browse the web or do other online
activities. In addition, if you are performing a local internal engagement, it is best to remain in
control of all of your activity to reduce the noise you create on the network, which can alert the
customer and their defense systems.
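A rough sketch of such a safety net, using the iptables syntax covered in section 7.4, might look like this. The VPN endpoint address (203.0.113.10), port (1194/udp), and tunnel interface (tun0) are placeholders for this example, and a real ruleset would also need to allow DNS and DHCP as appropriate:

# iptables -P OUTPUT DROP
# iptables -A OUTPUT -o lo -j ACCEPT
# iptables -A OUTPUT -o tun0 -j ACCEPT
# iptables -A OUTPUT -p udp -d 203.0.113.10 --dport 1194 -j ACCEPT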
7.3. Securing Network Services

In general, it is a good idea to disable services that you do not use. Kali makes it easy to do this
since network services are disabled by default.
As long as services remain disabled, they do not pose any security threat. However, you must be
careful when you enable them because:
• there is no firewall by default, so if they listen on all network interfaces, they are effectively
publicly available.
• some services have no authentication credentials and let you set them on first use; others
have default (and thus widely known) credentials preset. Make sure to (re)set any password
to something that only you know.
• many services run as root with full administrator privileges, so the consequences of unauthorized access or of a security breach are usually severe.
Default Credentials We won’t list here all the tools that come with default credentials; instead, you should
check the README.Debian file of the respective packages, as well as kali.org/docs/1
and tools.kali.org2, to see if the service needs some special care to be secured.
SSH Service If you run in live mode, the password of the kali account is "kali". Thus you should
not enable SSH before changing the password of the kali account, or before having
tweaked its configuration to disallow password-based logins.
You may also want to generate new host SSH keys, if you installed Kali by a pre-
generated image. This is covered in “Generating New SSH Host Keys ” [page 115].
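A minimal sketch of those precautions might look like the following; review each step against your own configuration before enabling the service:

# passwd kali                         # replace the default password
# sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
# rm /etc/ssh/ssh_host_*              # discard pre-generated host keys
# dpkg-reconfigure openssh-server     # regenerate them
# systemctl enable --now ssh.service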
7.4. Firewall or Packet Filtering

A firewall is a piece of computer equipment with hardware, software, or both that parses the in-
coming or outgoing network packets (coming to or leaving from a local network) and only lets
through those matching certain predefined conditions.
A filtering network gateway is a type of firewall that protects an entire network. It is usually
installed on a dedicated machine configured as a gateway for the network so that it can parse all
packets that pass in and out of the network. Alternatively, a local firewall is a software service that
runs on one particular machine in order to filter or limit access to some services on that machine,
or possibly to prevent outgoing connections by rogue software that a user could, willingly or not,
have installed.
1 https://www.kali.org/docs/introduction/default-credentials/
2 https://tools.kali.org/
Netfilter uses four distinct tables, which store rules regulating three kinds of operations on packets:
• filter concerns filtering rules (accepting, refusing, or ignoring a packet);
• nat concerns translation of the source or destination addresses and ports of packets;
• mangle concerns other changes to IP packets (including the ToS field and options);
• raw allows other manual modifications on packets before they reach the connection tracking system.
Each table contains lists of rules called chains. The firewall uses standard chains to handle packets
based on predefined circumstances. The administrator can create other chains, which will only
be used when referred by one of the standard chains (either directly or indirectly).
The filter table has three standard chains:
• INPUT: concerns packets whose destination is the firewall itself;
• OUTPUT: concerns packets emitted by the firewall;
• FORWARD: concerns packets transiting through the firewall (which is neither their source nor their destination).
These chains are illustrated in Figure 7.1, “How Netfilter Chains are Called” [page 161].
Each chain is a list of rules; each rule is a set of conditions and an action to perform when the
conditions are met. When processing a packet, the firewall scans the appropriate chain, one rule
after another, and when the conditions for one rule are met, it jumps (hence the -j option in the
commands described in section 7.4.2.2, “Rules” [page 163]) to the specified action to continue processing.
The most common behaviors are standardized and dedicated actions exist for them. Taking one
of these standard actions interrupts the processing of the chain, since the packet's fate is already
sealed (barring an exception mentioned below). Listed below are the Netfilter actions.
• ACCEPT: allow the packet to go on its way.
• REJECT: reject the packet with an Internet control message protocol (ICMP) error packet
(the --reject-with type option of iptables determines the type of error to send).
• DROP: delete (ignore) the packet.
• LOG: log (via syslogd) a message with a description of the packet. Note that this action
does not interrupt processing, and the execution of the chain continues at the next rule,
which is why logging refused packets requires both a LOG and a REJECT/DROP rule (an
example follows this list of actions). Common parameters associated with logging include:
– --log-level, with default value warning, indicates the syslog severity level.
– --log-prefix allows specifying a text prefix to differentiate between logged messages.
– --log-tcp-sequence, --log-tcp-options, and --log-ip-options indicate extra data to be in-
tegrated into the message: respectively, the TCP sequence number, TCP options, and
IP options.
• ULOG: log a message via ulogd, which can be better adapted and more efficient than
syslogd for handling large numbers of messages; note that this action, like LOG, also re-
turns processing to the next rule in the calling chain.
• chain_name: jump to the given chain and evaluate its rules.
• SNAT (only in the nat table): apply Source Network Address Translation (SNAT). Extra options
describe the exact changes to apply, including the --to-source address:port option, which
defines the new source IP address and/or port.
• DNAT (only in the nat table): apply Destination Network Address Translation (DNAT). Extra op-
tions describe the exact changes to apply, including the --to-destination address:port option,
which defines the new destination IP address and/or port.
• MASQUERADE (only in the nat table): apply masquerading (a special case of Source NAT).
• REDIRECT (only in the nat table): transparently redirect a packet to a given port of the
firewall itself; this can be used to set up a transparent web proxy that works with no con-
figuration on the client side, since the client thinks it connects to the recipient whereas the
communications actually go through the proxy. The --to-ports port(s) option indicates the
port, or port range, where the packets should be redirected.
Other actions, particularly those concerning the mangle table, are outside the scope of this text.
The iptables(8) and ip6tables(8) manual pages have a comprehensive list.
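For instance (as mentioned for the LOG action above), logging and then discarding packets from a given network requires two rules on the same condition; the source network 198.51.100.0/24 is only a placeholder here:

# iptables -A INPUT -s 198.51.100.0/24 -j LOG --log-prefix "blocked: "
# iptables -A INPUT -s 198.51.100.0/24 -j DROP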
What is ICMP? Internet Control Message Protocol (ICMP) is the protocol used to transmit ancillary
information on communications. It tests network connectivity with the ping com-
mand, which sends an ICMP echo request message, which the recipient is meant to
answer with an ICMP echo reply message. It signals a firewall rejecting a packet, indi-
cates an overflow in a receive buffer, proposes a better route for the next packets in the
connection, and so on. This protocol is defined by several RFC documents. RFC777
and RFC792 were the first, but many others extended and/or revised the protocol.
è http://www.faqs.org/rfcs/rfc777.html
è http://www.faqs.org/rfcs/rfc792.html
For reference, a receive buffer is a small memory zone storing data between the time
it arrives from the network and the time the kernel handles it. If this zone is full, new
data cannot be received and ICMP signals the problem so that the emitter can slow
down its transfer rate (which should ideally reach an equilibrium after some time).
Note that although an IPv4 network can work without ICMP, ICMPv6 is strictly re-
quired for an IPv6 network, since it combines several functions that were, in the IPv4
world, spread across ICMPv4, Internet Group Membership Protocol (IGMP), and Ad-
dress Resolution Protocol (ARP). ICMPv6 is defined in RFC4443.
è http://www.faqs.org/rfcs/rfc4443.html
The iptables and ip6tables commands are used to manipulate tables, chains, and rules. Their
-t table option indicates which table to operate on (by default, filter).
Commands
The major options for interacting with chains are listed below:
• -L chain lists the rules in the chain. This is commonly used with the -n option to disable
name resolution (for example, iptables -n -L INPUT will display the rules related to in-
coming packets).
• -N chain creates a new chain. You can create new chains for a number of purposes, including
testing a new network service or fending off a network attack.
• -X chain deletes an empty and unused chain (for example, iptables -X ddos-attack).
• -A chain rule adds a rule at the end of the given chain. Remember that rules are processed
from top to bottom so be sure to keep this in mind when adding rules.
• -I chain rule_num rule inserts a rule before the rule number rule_num. As with the -A option,
keep the processing order in mind when inserting new rules into a chain.
• -D chain rule_num (or -D chain rule) deletes a rule in a chain; the first syntax identifies the
rule to be deleted by its number (iptables -L --line-numbers will display these num-
bers), while the latter identifies it by its contents.
• -F chain flushes a chain (deletes all its rules). For example, to delete all of the rules related
to outgoing packets, you would run iptables -F OUTPUT. If no chain is mentioned, all the
rules in the table are deleted.
• -P chain action defines the default action, or “policy” for a given chain; note that only stan-
dard chains can have such a policy. To drop all incoming traffic by default, you would run
iptables -P INPUT DROP.
Rules
Each rule is expressed as conditions -j action action_options. If several conditions are described
in the same rule, then the criterion is the conjunction (logical AND) of the conditions, which is at
least as restrictive as each individual condition.
The -p protocol condition matches the protocol field of the IP packet. The most common values
are tcp, udp, icmp, and icmpv6. This condition can be complemented with conditions on the TCP
ports, with clauses such as --source-port port and --destination-port port .
The -s address or -s network/mask condition matches the source address of the packet. Corre-
spondingly, -d address or -d network/mask matches the destination address.
The -i interface condition selects packets coming from the given network interface. -o interface
selects packets going out on a specific interface.
The --state state condition matches the state of a packet in a connection (this requires the
ipt_conntrack kernel module, for connection tracking). The NEW state describes a packet start-
ing a new connection, ESTABLISHED matches packets belonging to an already existing connec-
tion, and RELATED matches packets initiating a new connection related to an existing one (which
is useful for the ftp-data connections in the “active” mode of the FTP protocol).
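As a short sketch, a common stateful pattern is to accept packets that belong to connections already established (or related to them) and let a restrictive default policy handle everything else; do not apply this on a remote machine without first allowing your own management traffic:

# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# iptables -P INPUT DROP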
There are many available options for iptables and ip6tables and mastering them all requires
a great deal of study and experience. However, one of the options you will use most often is the
one to block malicious network traffic from a host or range of hosts. For example, to silently block
incoming traffic from the IP address 10.0.1.5 and the 31.13.74.0/24 class C subnet:
# iptables -A INPUT -s 10.0.1.5 -j DROP
# iptables -A INPUT -s 31.13.74.0/24 -j DROP
# iptables -n -L INPUT
Chain INPUT (policy ACCEPT)
target prot opt source destination
DROP all -- 10.0.1.5 0.0.0.0/0
DROP all -- 31.13.74.0/24 0.0.0.0/0
Another commonly-used iptables command is to permit network traffic for a specific service or
port. To allow users to connect to SSH, HTTP, and IMAP, you could run the following commands:
# iptables -A INPUT -m state --state NEW -p tcp --dport 22 -j ACCEPT
# iptables -A INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT
# iptables -A INPUT -m state --state NEW -p tcp --dport 143 -j ACCEPT
# iptables -n -L INPUT
Chain INPUT (policy ACCEPT)
target prot opt source destination
DROP all -- 10.0.1.5 0.0.0.0/0
DROP all -- 31.13.74.0/24 0.0.0.0/0
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:143
It is considered to be good computer hygiene to clean up old and unnecessary rules. The easiest
way to delete iptables rules is to reference the rules by line number, which you can retrieve with the --line-numbers option of iptables -L; be aware that deleting a rule renumbers the rules that follow it in the same chain.
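A minimal sketch, reusing the chain from the earlier examples (output abridged):

# iptables -n -L INPUT --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination
1    DROP       all  --  10.0.1.5             0.0.0.0/0
2    DROP       all  --  31.13.74.0/24        0.0.0.0/0
# iptables -D INPUT 2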
There are more specific conditions, which depend on the generic conditions described above. For
more information, refer to the iptables(8) and ip6tables(8) manual pages.
Each rule creation requires one invocation of iptables or ip6tables. Typing these commands
manually can be tedious, so the calls are usually stored in a script so that the system is automati-
cally configured the same way every time the machine boots. This script can be written by hand
but it can also be interesting to prepare it with a high-level tool such as fwbuilder.
# apt install fwbuilder
The principle is simple. In the first step, describe all the elements that will be involved in the
actual rules:
• the firewall itself, with its network interfaces;
• the networks, with their corresponding IP address ranges;
• the servers;
• the ports belonging to the services hosted on the servers.
Next, create the rules with simple drag-and-drop actions on the objects as shown in Figure 7.2,
“Fwbuilder’s Main Window” [page 166]. A few contextual menus can change the condition (negat-
ing it, for instance). Then the action needs to be chosen and configured.
As far as IPv6 is concerned, you can either create two distinct rulesets for IPv4 and IPv6, or create
only one and let fwbuilder translate the rules according to the addresses assigned to the objects.
fwbuilder will generate a script configuring the firewall according to the rules that you have
defined. Its modular architecture gives it the ability to generate scripts targeting different systems
including iptables for Linux, ipf for FreeBSD, and pf for OpenBSD.
In order to implement the firewall rules each time the machine is booted, you will need to register
the configuration script in an up directive of the /etc/network/interfaces file. In the following
example, the script is stored under /usr/local/etc/arrakis.fw (arrakis being the hostname of
the machine).
auto eth0
iface eth0 inet static
address 192.168.0.1
network 192.168.0.0
netmask 255.255.255.0
broadcast 192.168.0.255
up /usr/local/etc/arrakis.fw
7.5. Monitoring and Logging

Data confidentiality and protection is an important aspect of security but it is equally important
to ensure availability of services. As an administrator and security practitioner, you must ensure
that everything works as expected, and it is your responsibility to detect anomalous behavior and
service degradation in a timely manner. Monitoring and logging software plays a key role in this
aspect of security, providing insight into what is happening on the system and the network.
In this section, we will review some tools that can be used to monitor several aspects of a Kali
system.
The logcheck program monitors log files every hour by default and sends unusual log messages
in emails to the administrator for further analysis.
The list of monitored files is stored in /etc/logcheck/logcheck.logfiles. The default values
work fine if the /etc/rsyslog.conf file has not been completely overhauled.
logcheck can report in various levels of detail: paranoid, server, and workstation. paranoid is very
verbose and should probably be restricted to specific servers such as firewalls. server is the default
mode and is recommended for most servers. workstation is obviously designed for workstations
and is extremely terse, filtering out more messages than the other options.
In all three cases, logcheck should probably be customized to exclude some extra messages (de-
pending on installed services), unless you really want to receive hourly batches of long unin-
teresting emails. Since the message selection mechanism is rather complex, /usr/share/doc/
logcheck-database/README.logcheck-database.gz is a required—if challenging—read. The applied rules fall into several types:
• those that qualify a message as a cracking attempt (stored in a file in the /etc/logcheck/
cracking.d/ directory);
• those canceling such a qualification (/etc/logcheck/cracking.ignore.d/);
• those classifying a message as a security alert (/etc/logcheck/violations.d/);
• those canceling that classification (/etc/logcheck/violations.ignore.d/);
• finally, those applying to the remaining messages (considered as system events).
Curly Brackets {} in a Braces in a bash command serve several different functions. In the example
Command above, they are used as a shorthand for repeating parts of the command; bash
expands the command before executing it.
The example below has the same outcome: three files created in our
home directory.
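The original example is not reproduced here; a minimal illustration, with hypothetical file names, would be:

$ touch ~/file{1,2,3}.txt
$ ls ~/file*.txt
/home/kali/file1.txt  /home/kali/file2.txt  /home/kali/file3.txt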
top is an interactive tool that displays a list of currently running processes. The default sorting
is based on the current amount of processor use and can be obtained with the P key. Other sort
orders include a sort by occupied memory (M key), by total processor time (T key), and by process
identifier (N key). The k key kills a process by entering its process identifier. The r key changes
the priority of a process.
When the system seems to be overloaded, top is a great tool to see which processes are compet-
ing for processor time or consuming too much memory. In particular, it is often interesting to
check if the processes consuming resources match the real services that the machine is known to
host. An unknown process running as the ”www-data” user should really stand out and be inves-
tigated since it’s probably an instance of software installed and executed on the system through
a vulnerability in a web application.
top is a very flexible tool and its manual page gives details on how to customize its display and
adapt it to your personal needs and habits.
The xfce4-taskmanager graphical tool is similar to top and provides roughly the same features.
GNOME users can use gnome-system-monitor and KDE users ksysguard, which are both similar as well.
Once a system is installed and configured, most system files should stay relatively static until
the system is upgraded. Therefore, it is a good idea to monitor changes in system files since any
unexpected change could be cause for alarm and should be investigated. This section presents a
few of the most common tools used to monitor system files, detect changes, and optionally notify
you as the administrator of the system.
dpkg --verify (or dpkg -V) is an interesting tool since it displays the system files that have
been modified (potentially by an attacker), but this output should be taken with a grain of salt. To
do its job, dpkg relies on checksums stored in its own database which is stored on the hard disk
(found in /var/lib/dpkg/info/package.md5sums). A thorough attacker will therefore modify
these files so they contain the new checksums for the subverted files, or an advanced attacker
will compromise the package on your Debian mirror. To protect against this class of attack, use
APT’s digital signature verification system (see section 8.3.6, “Validating Package Authenticity”
[page 208]) to properly verify the packages.
Running dpkg -V will verify all installed packages and will print out a line for each file that fails
verification. Each character denotes a test on some specific meta-data. Unfortunately, dpkg does
not store the meta-data needed for most tests and will thus output question marks for them. Cur-
rently only the checksum test can yield a 5 on the third character (when it fails).
# dpkg -V
??5?????? /lib/systemd/system/ssh.service
??5?????? c /etc/libvirt/qemu/networks/default.xml
??5?????? c /etc/lvm/lvm.conf
??5?????? c /etc/salt/roster
In the example above, dpkg reports a change to SSH’s service file that the administrator made to
the packaged file instead of using an appropriate /etc/systemd/system/ssh.service override file.
The Advanced Intrusion Detection Environment (AIDE) tool checks file integrity and detects any
change against a previously-recorded image of the valid system. The image is stored as a database
(/var/lib/aide/aide.db) containing the relevant information on all files of the system (finger-
prints, permissions, timestamps, and so on).
You can install AIDE by running apt update followed by apt install aide. You will first initial-
ize the database with aideinit; it will then run daily (via the /etc/cron.daily/aide script) to
check that nothing relevant changed. When changes are detected, AIDE records them in log files
(/var/log/aide/*.log) and sends its findings to the administrator by email.
Protecting the Database Since AIDE uses a local database to compare the states of the files, the validity of
its results is directly linked to the validity of the database. If an attacker gets root
permissions on a compromised system, they will be able to replace the database and
cover their tracks. One way to prevent this subversion is to store the reference data
on read-only storage media.
You can use options in /etc/default/aide to tweak the behavior of the aide package. The
AIDE configuration proper is stored in /etc/aide/aide.conf and /etc/aide/aide.conf.d/ (ac-
tually, these files are only used by update-aide.conf to generate /var/lib/aide/aide.conf.autogenerated).
The configuration indicates which properties of which files need to be checked.
For instance, the contents of log files change routinely, and such changes can be ignored as long
as the permissions of these files stay the same, but both contents and permissions of executable
programs must be constant. Although not very complex, the configuration syntax is not fully
intuitive and we recommend reading the aide.conf(5) manual page for more details.
A new version of the database is generated daily in /var/lib/aide/aide.db.new; if all recorded
changes were legitimate, it can be used to replace the reference database.
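A short sketch of the workflow described in this section, using the Debian default paths; review the reported changes before promoting the new database:

# apt update && apt install aide
# aideinit                                            # build the initial reference database
# cp /var/lib/aide/aide.db.new /var/lib/aide/aide.db  # accept the daily database after review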
Tripwire is very similar to AIDE; even the configuration file syntax is almost the same. The main
addition provided by tripwire is a mechanism to sign the configuration file so that an attacker
cannot make it point at a different version of the reference database.
Samhain also offers similar features as well as some functions to help detect rootkits (see “The
checksecurity and chkrootkit/rkhunter packages” [page 171] below). It can also be deployed globally
on a network and record its traces on a central server (with a signature).
7.6. Summary
In this chapter, we took a look at the concept of security policies, highlighting various points to
consider when defining such a policy and outlining some of the threats to your system and to you
personally, as a security professional. We discussed desktop and laptop security measures as well
as firewalls and packet filtering. Finally, we reviewed monitoring tools and strategies and showed
how to best implement them to detect potential threats to your system.
Summary Tips:
• Take time to define a comprehensive security policy.
• Real risk often arises when you travel from one customer to the next. For example, your lap-
top could be stolen while traveling or seized by customs. Prepare for these unfortunate pos-
sibilities by using full disk encryption (see section 4.2.2, “Installation on a Fully Encrypted
File System” [page 88]) and consider the nuke feature (see “Adding a Nuke Password for
Extra Safety” [page 250]) to protect your clients' data.
• Disable services that you do not use. Kali makes it easy to do this since all external network
services are disabled by default.
• If you are running Kali on a publicly accessible server, change any default passwords for
services that might be configured (see section 7.3, “Securing Network Services” [page 159])
and restrict their access with a firewall (see section 7.4, “Firewall or Packet Filtering” [page
159]) prior to launching them.
• Use fail2ban to detect and block password-guessing attacks and remote brute force password
attacks.
• If you run web services, host them over HTTPS to prevent network intermediaries from
sniffing your traffic (which might include authentication cookies).
• The Linux kernel embeds the netfilter firewall. There is no turn-key solution for configuring
any firewall, since network and user requirements differ. However, you can control netfilter
from user space with the iptables and ip6tables commands.
• Implement firewall rules (see section 7.4, “Firewall or Packet Filtering” [page 159]) to forbid
all outbound traffic except the traffic generated by your VPN access. This is meant as a safety net, so that when the VPN is down, you notice it immediately (instead of falling back to the local network access).
Keywords: dpkg, apt, sources.list, Upgrades, Package repositories

Chapter 8
Debian Package Management

Contents:
Introduction to APT 176
Basic Package Interaction 181
Advanced APT Configuration and Usage 200
APT Package Reference: Digging Deeper into the Debian Package System 210
Summary 222
After the basics of Linux, it is time to learn the package management system of a Debian-based
distribution. In such distributions, including Kali, the Debian package is the canonical way to
make software available to end-users. Understanding the package management system will give
you a great deal of insight into how Kali is structured, enable you to more effectively troubleshoot
issues, and help you quickly locate help and documentation for the wide array of tools and utilities
included in Kali Linux.
In this chapter, we will introduce the Debian package management system and introduce dpkg and
the APT suite of tools. One of the primary strengths of Kali Linux lies in the flexibility of its package
management system, which leverages these tools to provide near-seamless installation, upgrades,
removal, and manipulation of application software, and even of the base operating system itself.
It is critical that you understand how this system works to get the most out of Kali and streamline
your efforts. The days of painful compilations, disastrous upgrades, and debugging gcc, make, and
configure problems are long gone. However, the number of available applications has exploded
and you need to understand the tools designed to take advantage of them. This is also a critical
skill because there are a number of security tools that, due to licensing or other issues, cannot be
included in Kali but have Debian packages available for download. It is important that you know
how to process and install these packages and how they impact the system, especially when things
do not go as expected.
We will begin with some basic overviews of APT, describe the structure and contents of binary and
source packages, take a look at some basic tools and scenarios, and then dig deeper to help you
wring every ounce of utility from this spectacular package system and suite of tools.
Let’s begin with some basic definitions, an overview, and some history about Debian packages,
starting with dpkg and APT.
A Debian package is a compressed archive of a software application. A binary package (a .deb file)
contains files that can be directly used (such as programs or documentation), while a source pack-
age contains the source code for the software and the instructions required for building a binary
package. A Debian package contains the application’s files as well as other metadata including the
names of the dependencies the application needs, as well as scripts that enable the execution of
commands at different stages in the package’s lifecycle (installation, upgrades, and removal).
The dpkg tool was designed to process and install .deb packages, but if it encountered an unsat-
isfied dependency (like a missing library) that would prevent the package from installing, dpkg
would simply list the missing dependency, because it had no awareness or built-in logic to find or
process the packages that might satisfy those dependencies. The Advanced Package Tool (APT) fills that gap: given a properly configured list of package sources, it resolves dependencies automatically, downloads the required packages, and hands them to dpkg for installation.
Package Source and The word source can be ambiguous. A source package—a package containing the
Source Package source code of a program—should not be confused with a package source—a repos-
itory (website, FTP server, CD/DVD-ROM, local directory, etc.) that contains pack-
ages.
The sources.list file is the key configuration file for defining package sources, and it is impor-
tant to understand how it is laid out and how to configure it since APT will not function without a
properly defined list of package sources. Let’s discuss its syntax, take a look at the various repos-
itories that are used by Kali Linux, and discuss mirrors and mirror redirection, then you will be
ready to put APT to use.
Each active line of the /etc/apt/sources.list file (and of the /etc/apt/sources.list.d/*.
list files) contains the description of a source, made of three parts separated by spaces. Com-
mented lines begin with a # character:
#deb cdrom:[Debian GNU/Linux 2020.3 _Kali-rolling_ - Official Snapshot amd64 LIVE/
å INSTALL Binary 20200728-20:25]/ kali-last-snapshot contrib main non-free
Let’s take a look at the syntax of this file. The first field indicates the source type:
• deb for binary packages,
• deb-src for source packages.
The second field gives the base URL of the source: this can consist of a Kali mirror or any other
package archive set up by a third party. The URL can start with file:// to indicate a local source
installed in the system’s file hierarchy, with http:// to indicate a source accessible from a web
server, or with ftp:// for a source available on an FTP server. The URL can also start with cdrom:
for CD/DVD-ROM/Blu-ray disc-based installations, although this is less frequent since network-
based installation methods are more and more common.
The cdrom entries describe the CD/DVD-ROM you have. Contrary to other entries, a CD/DVD-ROM
is not always available, since it has to be inserted into the drive and usually only one disc can be
read at a time. For those reasons, these sources are managed in a slightly different way and need to
be added with the apt-cdrom program, usually executed with the add parameter. The latter will
then request the disc to be inserted in the drive and will browse its contents looking for Packages
files. It will use these files to update its database of available packages (this operation is usually
done by the apt update command). After that, APT will request the disc if it needs a package
stored on it.
The syntax of the last field depends on the structure of the repository. In the simplest cases, you
can easily indicate a subdirectory (with a required trailing slash) of the desired source (this is often
a simple “./”, which refers to the absence of a subdirectory—the packages are then directly at the
specified URL). In the most common cases, however, the repository is structured like a Debian archive, and this field names the chosen distribution followed by the components (sections) to enable.
A standard sources.list file for a system running Kali Linux refers to one repository (kali-rolling)
and the three previously mentioned components: main, contrib, and non-free:
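The corresponding sources.list line is not reproduced here; it normally looks like the following (the mirror URL may differ if you have hardcoded a specific mirror):

deb http://http.kali.org/kali kali-rolling main contrib non-free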
The kali-rolling repository is the main repository for end-users. It should always contain installable and recent packages.
It is managed by a tool that merges Debian Testing and the Kali-specific packages in a way that
ensures that each package’s dependencies can be satisfied within kali-rolling. In other words,
barring any bug in maintainer scripts, all packages should be installable.
Since Debian Testing evolves daily, so does Kali Rolling. The Kali-specific packages are also regu-
larly updated as we monitor the upstream releases of the most important packages.
1 https://www.debian.org/social_contract#guidelines
The kali-dev repository is not for public use. It is a space where Kali developers resolve dependency prob-
lems arising from the merge of the Kali-specific packages into Debian Testing.
It is also the place where updated packages land first, so if you need an update that was released
recently and that has not yet reached kali-rolling, you might be able to grab it from this repository.
This is not recommended for regular users.
The sources.list extracts above refer to http.kali.org: this is a server running MirrorBrain2 ,
which will redirect your HTTP requests to an official mirror close to you. MirrorBrain monitors
each mirror to ensure that they are working and up-to-date; it will always redirect you to a good
mirror.
Debugging a Mirror If you have a problem with the mirror (for instance because apt update fails), you
Redirection can use curl -sI to see where you are being redirected:
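For example (the exact output and the redirect target will vary depending on the mirror chosen for you):

$ curl -sI http://http.kali.org/kali/dists/kali-rolling/Release
HTTP/1.1 302 Found
Location: http://kali.download/kali/dists/kali-rolling/Release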
If the problem persists, you can edit /etc/apt/sources.list and hardcode the
name of another known working mirror in place of (or before) the http.kali.org
entry.
2 http://mirrorbrain.org/
Armed with a basic understanding of the APT landscape, let’s take a look at some basic package
interactions including the initialization of APT; installation, removal, and purging of packages;
and upgrading of the Kali Linux system. Then let’s venture from the command line to take a look
at some graphical APT tools.
APT is a vast project and tool set, whose original plans included a graphical interface. From a client
perspective, it is centered around the command-line tool apt-get as well as apt, which was later
developed to overcome design flaws of apt-get.
There are graphical alternatives developed by third parties, including synaptic and aptitude,
which we will discuss later. We tend to prefer apt, which we use in the examples that follow. We
will, however, detail some of the major syntax differences between tools, as they arise.
When working with APT, you should first download the list of currently available packages with
apt update. Depending on the speed of your connection, this can take some time because the various
package lists, source lists, and translation files have grown in size alongside Kali's development. Of
course, CD/DVD-ROM installation sets install much more quickly, because they are local to your
machine.
Thanks to the thoughtful design of the Debian package system, you can install packages, with or
without their dependencies, fairly easily. Let’s take a look at package installation with dpkg and
apt.
dpkg is the core tool that you will use (either directly or indirectly through APT) when you need
to install a package. It is also a go-to choice if you are operating offline, since it doesn’t require an
Internet connection. Remember, dpkg will not install any dependencies that the package might
require. To install a package with dpkg, simply provide the -i or --install option and the path to
the .deb. This implies that you have previously downloaded (or obtained in some other way) the
.deb file of the package to install.
# dpkg -i man-db_2.9.3-2_amd64.deb
(Reading database ... 309317 files and directories currently installed.)
Preparing to unpack man-db_2.9.3-2_amd64.deb ...
Unpacking man-db (2.9.3-2) over (2.9.3-2) ...
Setting up man-db (2.9.3-2) ...
Updating database of manual pages ...
man-db.service is a disabled or a static unit not running, not starting it.
Processing triggers for kali-menu (2020.4.0) ...
Processing triggers for mime-support (3.64) ...
We can see the different steps performed by dpkg and can see at what point any error may have
occurred. The -i or --install option performs two steps automatically: it unpacks the package and
runs the configuration scripts. You can perform these two steps independently (as apt does behind
the scenes) with the --unpack and --configure options, respectively:
# dpkg --unpack man-db_2.9.3-2_amd64.deb
(Reading database ... 309317 files and directories currently installed.)
Preparing to unpack man-db_2.9.3-2_amd64.deb ...
Unpacking man-db (2.9.3-2) over (2.9.3-2) ...
Processing triggers for kali-menu (2020.4.0) ...
Processing triggers for mime-support (3.64) ...
# dpkg --configure man-db
Setting up man-db (2.9.3-2) ...
Updating database of manual pages ...
Note that the “Processing triggers” lines refer to code that is automatically executed whenever a
package adds, removes, or modifies files in some monitored directories. For instance, the mime-
support package monitors /usr/lib/mime/packages and executes the update-mime command whenever something changes in that directory.
A frequent error, which you are bound to encounter sooner or later, is a file collision. When a
package contains a file that is already installed by another package, dpkg will refuse to install it.
The following types of messages will then appear:
Unpacking libgdm (from .../libgdm_3.8.3-2_amd64.deb) ...
dpkg: error processing /var/cache/apt/archives/libgdm_3.8.3-2_amd64.deb (--unpack):
å trying to overwrite ’/usr/bin/gdmflexiserver’, which is also in package gdm3
å 3.4.1-9
In this case, if you think that replacing this file is not a significant risk to the stability of your
system (which is usually the case), you can use --force-overwrite to overwrite the file.
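Continuing the hypothetical libgdm example above, the invocation would look like this:

# dpkg -i --force-overwrite /var/cache/apt/archives/libgdm_3.8.3-2_amd64.deb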
While there are many available --force-* options, only --force-overwrite is likely to be used regu-
larly. These options exist for exceptional situations, and it is better to leave them alone as much as
possible in order to respect the rules imposed by the packaging mechanism. Do not forget, these
rules ensure the consistency and stability of your system.
Although APT is much more advanced than dpkg and does a lot more behind the scenes, you will
find that interacting with packages is quite simple. You can add a package to the system with a
simple apt install package. APT will automatically install the necessary dependencies:
# apt install kali-tools-gpu
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
oclgausscrack truecrack
The following NEW packages will be installed:
  kali-tools-gpu oclgausscrack truecrack
Need to get 2,602 kB of archives.
After this operation, 2,898 kB of additional disk space will be used.
Get:1 http://kali.download/kali kali-rolling/main amd64 oclgausscrack amd64 1.3-1kali3
å [30.7 kB]
Get:2 http://kali.download/kali kali-rolling/main amd64 truecrack amd64 3.6+git20150326
å -0kali1 [2,558 kB]
You can also use apt-get install package, or aptitude install package. For simple pack-
age installation, they do essentially the same thing. As you will see later, the differences are more
meaningful for upgrades or when dependency resolution does not have a perfect solution.
If sources.list lists several distributions, you can specify the package version with apt install
package=version, but indicating its distribution of origin (kali-rolling or kali-dev) with apt
install package/distribution is usually preferred.
As with dpkg, you can also instruct apt to forcibly install a package and overwrite files with --force-
overwrite, but the syntax is a bit strange since you are passing the argument through to dpkg:
# apt -o Dpkg::Options::="--force-overwrite" install zsh
As a rolling distribution, Kali Linux has spectacular upgrade capabilities. In this section, we will
take a look at how simple it is to upgrade Kali, and we will discuss strategies for planning your
updates.
We recommend regular upgrades, because they will install the latest security updates. To upgrade,
use apt update followed by either apt upgrade, apt-get upgrade, or aptitude safe-upgrade.
These commands look for installed packages that can be upgraded without removing any packages.
In other words, the goal is to ensure the least intrusive upgrade possible. The apt-get upgrade command is the strictest: it never installs new packages nor removes installed ones, whereas apt upgrade may install additional packages when required to complete an upgrade (it will not remove any either).
Be Aware of Important To anticipate some of these problems, you can install the apt-listchanges package,
Changes which displays information about possible problems at the beginning of a pack-
age upgrade. This information is compiled by the package maintainers and put in
/usr/share/doc/package/NEWS.Debian files for your benefit. Reading these files
(possibly through apt-listchanges) should help you avoid nasty surprises.
Since becoming a rolling distribution, Kali can receive upgrades several times a day. However,
that might not be the best strategy. So, how often should you upgrade Kali Linux? There is no
hard rule but there are some guidelines that can help you. You should upgrade:
• When you are aware of a security issue that is fixed in an update.
• When you suspect that an updated version might fix a bug that you are experiencing.
• Before reporting a bug to make sure it is still present in the latest version that you have
available.
• Often enough to get the security fixes that you have not heard about.
There are also cases where it is best to not upgrade. For example, it might not be a good idea to
upgrade:
• If you can’t afford any breakage (for example, because you go offline, or because you are
about to give a presentation with your computer); it is best to do the upgrade later, when
you have enough time to troubleshoot any issue introduced in the process.
• If a disruptive change happened recently (or is still ongoing) and you fear that all issues
have not yet been discovered. For example, when a new GNOME version is released, not all component packages are updated at the same time, so you might end up with a partially upgraded desktop until the transition settles.
Removing a package is even simpler than installing one. Let’s take a look at how to remove a
package with dpkg and apt.
To remove a package with dpkg, supply the -r or --remove option, followed by the name of a pack-
age. This removal is not, however, complete: all of the configuration files, maintainer scripts, log
files (system logs), data generated by the service (such as the content of an LDAP server directory
or the content of a database for an SQL server), and most other user data handled by the pack-
age remain intact. The remove option makes it easy to uninstall a program and later re-install it
with the same configuration. Also remember that dependencies are not removed. Consider this
example:
# dpkg --remove kali-tools-gpu
(Reading database ... 108172 files and directories currently installed.)
Removing kali-tools-gpu (2021.1.0) ...
You can also remove packages from the system with apt remove package. APT will automatically
delete the packages that depend on the package that is being removed. Like the dpkg example,
configuration files and user data will not be removed.
Through the addition of suffixes to package names, you can use apt (or apt-get and aptitude)
to install certain packages and remove others on the same command line. With an apt install
command, add “-” to the names of the packages you wish to remove. With an apt remove com-
mand, add “+” to the names of the packages you wish to install.
The next example shows two different ways to install package1 and to remove package2.
# apt install package1 package2-
[...]
# apt remove package1+ package2
This can also be used to exclude packages that would otherwise be installed, for example due to
a Recommends (discussed later in section 8.4.1.3, “Recommends, Suggests, and Enhances Fields”
[page 214]). In general, the dependency solver will use that information as a hint to look for alter-
native solutions.
To remove all data associated with a package, you can purge the package with the dpkg -P
package, or apt purge package commands. This will completely remove the package and all
user data, and in the case of apt, will delete dependencies as well.
# dpkg -r debian-cd
(Reading database ... 333606 files and directories currently installed.)
Removing debian-cd (3.1.32) ...
# dpkg -P debian-cd
(Reading database ... 332950 files and directories currently installed.)
Removing debian-cd (3.1.32) ...
Purging configuration files for debian-cd (3.1.32) ...
Warning! Given the definitive nature of purge, do not execute it lightly. You will lose everything
associated with that package.
Next, let’s take a look at some of the tools that can be used to inspect Debian packages. We will
learn of dpkg, apt, and apt-cache commands that can be used to query and visualize the package
database.
We will begin with several dpkg options that query the internal dpkg database. This database
resides on the filesystem at /var/lib/dpkg and contains multiple sections including configura-
tion scripts (/var/lib/dpkg/info), a list of files the package installed (/var/lib/dpkg/info/*.list),
and the status of each package that has been installed (/var/lib/dpkg/status). You can
use dpkg to interact with the files in this database. Note that most options are available in a long
version (one or more relevant words, preceded by a double dash) and a short version (a single
letter, often the initial of one word from the long version, and preceded by a single dash). This
convention is so common that it is a POSIX standard.
First, let’s take a look at --listfiles package (or -L), which lists the files that were installed by the
specified package:
$ dpkg -L base-passwd
/.
Next, dpkg --search file (or -S), finds any packages containing the file or path passed in the
argument. For example, to find the package containing /bin/date:
$ dpkg -S /bin/date
coreutils: /bin/date
The dpkg --status package (or -s) command displays the headers of an installed package. For
example, to search the headers for the coreutils package:
$ dpkg -s coreutils
Package: coreutils
Status: install ok installed
Priority: required
Section: utils
Installed-Size: 13855
Maintainer: Michael Stone <mstone@debian.org>
Architecture: amd64
Multi-Arch: foreign
Version: 8.23-3
Replaces: mktemp, realpath, timeout
The dpkg --list (or -l) command displays the list of packages known to the system and their
installation status. You can also use grep on the output to search for certain fields, or provide
wildcards (such as b*) to search for packages that match a particular partial search string. This
will show a summary of the packages. For example, to show a summary list of all packages that
start with ’b’:
$ dpkg -l ’b*’
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-=========================-==========================-============-=====================
ii backdoor-factory 3.4.2+dfsg-4 all Patch 32/64 bits ELF & win32/64 binaries with shellcode
un backupninja <none> <none> (no description available)
un backuppc <none> <none> (no description available)
un balsa <none> <none> (no description available)
un base <none> <none> (no description available)
[...]
The dpkg --contents file.deb (or -c) command lists all the files in a particular .deb file:
$ dpkg -c /var/cache/apt/archives/gpg_2.2.20-1_amd64.deb
drwxr-xr-x root/root 0 2020-03-23 15:05 ./
drwxr-xr-x root/root 0 2020-03-23 15:05 ./usr/
drwxr-xr-x root/root 0 2020-03-23 15:05 ./usr/bin/
-rwxr-xr-x root/root 1062832 2020-03-23 15:05 ./usr/bin/gpg
drwxr-xr-x root/root 0 2020-03-23 15:05 ./usr/share/
drwxr-xr-x root/root 0 2020-03-23 15:05 ./usr/share/doc/
drwxr-xr-x root/root 0 2020-03-23 15:05 ./usr/share/doc/gpg/
-rw-r--r-- root/root 677 2020-03-23 14:45 ./usr/share/doc/gpg/NEWS.Debian.gz
-rw-r--r-- root/root 24353 2020-03-23 15:05 ./usr/share/doc/gpg/changelog.Debian.gz
-rw-r--r-- root/root 384091 2020-03-20 11:38 ./usr/share/doc/gpg/changelog.gz
The dpkg --info file.deb (or -I) command displays the headers of the specified .deb file:
$ dpkg -I /var/cache/apt/archives/gpg_2.2.20-1_amd64.deb
new Debian package, version 2.0.
size 894224 bytes: control archive=1160 bytes.
1219 bytes, 25 lines control
374 bytes, 6 lines md5sums
Package: gpg
Source: gnupg2
Version: 2.2.20-1
Architecture: amd64
Maintainer: Debian GnuPG Maintainers <pkg-gnupg-maint@lists.alioth.debian.org>
Installed-Size: 1505
Depends: gpgconf (= 2.2.20-1), libassuan0 (>= 2.5.0), libbz2-1.0, libc6 (>= 2.25),
å libgcrypt20 (>= 1.8.0), libgpg-error0 (>= 1.35), libreadline8 (>= 6.0),
å libsqlite3-0 (>= 3.7.15), zlib1g (>= 1:1.1.4)
Recommends: gnupg (= 2.2.20-1)
Breaks: gnupg (<< 2.1.21-4)
Replaces: gnupg (<< 2.1.21-4)
Section: utils
Priority: optional
Multi-Arch: foreign
Homepage: https://www.gnupg.org/
Description: GNU Privacy Guard -- minimalist public key operations
GnuPG is GNU’s tool for secure communication and data storage.
It can be used to encrypt data and to create digital signatures.
It includes an advanced key management facility and is compliant
with the proposed OpenPGP Internet standard as described in RFC4880.
[...]
You can also use dpkg to compare package version numbers with the --compare-versions option,
which is often called by external programs, including configuration scripts executed by dpkg itself.
This option requires three parameters: a version number, a comparison operator, and a second
version number. The different possible operators are: lt (strictly less than), le (less than or equal
to), eq (equal), ne (not equal), ge (greater than or equal to), and gt (strictly greater than). If the
comparison is correct, dpkg returns 0 (success); if not, it gives a non-zero return value (indicating
failure). Consider these comparisons:
$ dpkg --compare-versions 1.2-3 gt 1.1-4
$ echo $?
0
$ dpkg --compare-versions 2.6.0pre3-1 lt 2.6.0-1
$ echo $?
1
Note the unexpected failure of the last comparison: for dpkg, the string ”pre” (usually denoting
a pre-release) has no particular meaning, and dpkg simply interprets it as a string, in which case
”2.6.0pre3-1” is alphabetically greater than ”2.6.0-1”. When we want a package’s version number
to indicate that it is a pre-release, we use the tilde character, “~”:
$ dpkg --compare-versions 2.6.0~pre3-1 lt 2.6.0-1
$ echo $?
0
The apt-cache command can display much of the information stored in APT’s internal database.
This information is a sort of cache since it is gathered from the different sources listed in the
sources.list file. This happens during the apt update operation.
VOCABULARY A cache is a temporary storage system used to speed up frequent data access when
Cache the usual access method is expensive (performance-wise). This concept can be applied
in numerous situations and at different scales, from the core of microprocessors up to
high-end storage systems.
In the case of APT, the reference Packages files are those located on Debian mirrors.
That said, it would be very ineffective to push every search through the online package
databases. That is why APT stores a copy of those files (in /var/lib/apt/lists/)
and searches are done within those local files. Similarly, /var/cache/apt/archives/
contains a cached copy of already downloaded packages to avoid downloading them
again if you need to reinstall them.
To avoid excessive disk usage when you upgrade frequently, you should regularly sort
through the /var/cache/apt/archives/ directory. Two commands can be used for
this: apt clean (or apt-get clean) entirely empties the directory; apt autoclean
(apt-get autoclean) only removes packages that can no longer be downloaded be-
cause they have disappeared from the mirror and are therefore useless. Note that the
configuration parameter APT::Clean-Installed can be used to prevent the removal
of .deb files that are currently installed. Also, note that apt drops the downloaded
files once they have been installed, so this matters mainly when you use other tools.
The apt-cache command can do keyword-based package searches with apt-cache search
keyword. It can also display the headers of the package’s available versions with apt-cache show
package. This command provides the package’s description, its dependencies, and the name of its maintainer.
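For instance (output omitted), you could search for packages related to a keyword and then inspect one of the results; the package name here is just an example:

$ apt-cache search intrusion detection
$ apt-cache show aide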
An Alternative: axi-cache apt-cache search is a very rudimentary tool, basically implementing grep on package
descriptions. It often returns too many results, or none at all when you include too many
keywords.
axi-cache search term, on the other hand, provides better results, sorted by rele-
vancy. It uses the Xapian search engine and is part of the apt-xapian-index package,
which indexes all package information (and more, like the .desktop files from all De-
bian packages). It knows about tags and returns results in a matter of milliseconds.
Some features are more rarely used. For instance, apt-cache policy displays the priorities
of package sources as well as those of individual packages. Another example is apt-cache
dumpavail, which displays the headers of all available versions of all packages. apt-cache
pkgnames displays the list of all the packages that appear at least once in the cache.
8.2.6. Troubleshooting
Sooner or later, you will run into a problem when interacting with a package. In this section, we
will outline some basic troubleshooting steps that you can take and provide some tools that will
lead you closer to a potential solution.
In spite of the Debian or Kali maintainers’ best efforts, a system upgrade isn’t always as smooth
as we would hope. New software versions may be incompatible with previous ones (for instance,
their default behavior or their data format may have changed), or bugs may slip through the cracks
despite the testing performed by package maintainers and Debian Unstable users.
Leveraging Bug Reports You might sometimes find that a new version of software doesn’t work
at all. This generally happens if the application isn’t particularly popular and hasn’t been tested
enough. The first thing to do is to have a look at the Kali bug tracker3 and at the Debian bug
tracking system4 at https://bugs.debian.org/package, and check whether the problem has already
been reported. If it hasn’t, you should report it yourself (see section 6.3, “Filing a Good Bug Report”
[page 134] for detailed instructions). If it is already known, the bug report and the associated
messages are usually an excellent source of information related to the bug. In some cases, a patch
already exists and has been made available in the bug report itself; you can then recompile a fixed
version of the broken package locally (see section 9.1, “Modifying Kali Packages” [page 228]). In
other cases, users may have found a workaround for the problem and shared their insights about
it in their replies to the report; those instructions may help you work around the problem until a
fix or patch is released. In a best-case scenario, the package may have already been fixed and you
may find details in the bug report.
Downgrading to a Working Version When the problem is a clear regression (where the former
version worked), you can try to downgrade the package. In this case, you will need a copy of the
old version. If you have access to the old version in one of the repositories configured in APT, you
can use a simple one-liner command to downgrade (see section 8.2.2.2, “Installing Packages with
APT” [page 183]). But with Kali’s rolling release, you will usually only find a single version of each
package at any one time.
You can still try to find the old .deb file and install it manually with dpkg. Old .deb files can be
found in multiple places:
• In the pool directory on your usual Kali mirror (removed and obsolete packages are kept
for three to four days to avoid problems with users not having the latest package indices)
3
https://bugs.kali.org/
4
https://bugs.debian.org/
The dpkg tool keeps a log of all of its actions in /var/log/dpkg.log. This log is extremely verbose,
since it details all the stages of each package. In addition to offering a way to track dpkg’s behav-
ior, it helps to keep a history of the development of the system: you can find the exact moment
when each package has been installed or updated, and this information can be extremely useful
in understanding a recent change in behavior. Additionally, with all versions being recorded, it is
easy to cross-check the information with the changelog.Debian.gz for the packages in question, or
even with online bug reports.
# tail /var/log/dpkg.log
2021-01-06 23:16:37 status installed kali-tools-gpu:amd64 2021.1.0
2021-01-06 23:16:37 remove kali-tools-gpu:amd64 2021.1.0 <none>
2021-01-06 23:16:37 status half-configured kali-tools-gpu:amd64 2021.1.0
2021-01-06 23:16:37 status half-installed kali-tools-gpu:amd64 2021.1.0
2021-01-06 23:16:37 status config-files kali-tools-gpu:amd64 2021.1.0
2021-01-06 23:16:37 status not-installed kali-tools-gpu:amd64 <none>
When you mistakenly damage your system by removing or modifying certain files, the easiest way
to restore them is to reinstall the affected package. Unfortunately, the packaging system finds that
the package is already installed and politely refuses to reinstall it. To avoid this, use the --reinstall
option of the apt and apt-get commands. The following command reinstalls postfix even if it is
already present:
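# apt --reinstall install postfix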
The aptitude command line is slightly different but achieves the same result with aptitude
reinstall postfix. The dpkg command does not prevent re-installation, but it is rarely called
directly.
Do Not Use apt --reinstall to Recover from an Attack
Using apt --reinstall to restore packages modified during an attack will certainly not
recover the system as it was.
After an attack, you can’t rely on anything: dpkg and apt might have been replaced by
malicious programs, not reinstalling the files as you would like them to. The attacker
might also have altered or created files outside the control of dpkg.
Remember that you can specify a specific distribution with apt as well, which means you can roll
back to an older version of a package (if for instance you know that it works well), provided that
it is still available in one of the sources referenced by the sources.list file:
# apt install w3af/kali-dev
If you are not careful, the use of a --force-* option or some other malfunction can lead to a system
where the APT family of commands will refuse to function. In effect, some of these options allow
installation of a package when a dependency is not met, or when there is a conflict. The result is
an inconsistent system from the point of view of dependencies, and the APT commands will refuse
to execute any action except those that will bring the system back to a consistent state (this often
consists of installing the missing dependency or removing a problematic package). This usually
results in a message like this one, obtained after installing a new version of rdesktop while ignoring
its dependency on a newer version of libc6:
# apt full-upgrade
[...]
You might want to run ’apt-get -f install’ to correct these.
The following packages have unmet dependencies:
rdesktop: Depends: libc6 (>= 2.14) but 2.3.6.ds1-13etch7 is installed
E: Unmet dependencies. Try using -f.
If you are a courageous administrator who is certain of the correctness of your analysis, you may
choose to ignore a dependency or conflict and use the corresponding --force-* option. In this case,
if you want to be able to continue to use apt or aptitude, you must edit /var/lib/dpkg/status
to delete or modify the dependency, or conflict, that you have chosen to override.
This manipulation is an ugly hack and should never be used, except in the most extreme case of
necessity. Quite frequently, a more fitting solution is to recompile the package that is causing
the problem (see section 9.1, “Modifying Kali Packages” [page 228]).
APT is a C++ program whose code mainly resides in the libapt-pkg shared library. This shared
library opened the door to the creation of user interfaces (front-ends), since its code can easily be
reused. Historically, apt-get was only designed as a test front-end for libapt-pkg but its success
tends to obscure this fact.
Over time, despite the popularity of command line interfaces like apt and apt-get, various graph-
ical interfaces were developed. We will take a look at two of those interfaces in this section:
aptitude and synaptic.
Aptitude
Aptitude, shown in Figure 8.1, “The aptitude package manager” [page 196], is an interactive pro-
gram that can be used in semi-graphical mode on the console. You can browse the list of installed
and available packages, look up all the information, and select packages to install or remove. The
program is designed specifically to be used by administrators so its default behavior is much more
intelligent than APT’s, and its interface much easier to understand.
aptitude’s Documentation This section does not cover the finer details of using aptitude; rather, it focuses on
giving you a user survival kit. aptitude is rather well documented and we advise
you to use its complete manual, available in the aptitude-doc-en package (see /usr/
share/doc/aptitude/html/en/index.html) or online at
https://www.debian.org/doc/manuals/aptitude/.
To search for a package, you can type / followed by a search pattern. This pattern matches the
name of the package but can also be applied to the description (if preceded by ~d), to the section
(with ~s), or to other characteristics detailed in the documentation. The same patterns can filter
the list of displayed packages: type the l key (as in limit) and enter the pattern.
Managing the automatic flag of Debian packages (see section 8.3.4, “Tracking Automatically In-
stalled Packages” [page 205]) is a breeze with aptitude. It is possible to browse the list of installed
packages and mark packages as automatic with Shift+m or you can remove the mark with the m
key. Automatic packages are displayed with an “A” in the list of packages. This feature also offers
a simple way to visualize the packages in use on a machine, without all the libraries and depen-
dencies that you don’t really care about. The related pattern that can be used with l (to activate
the filter mode) is ~i!~M. It specifies that you only want to see installed packages (~i) not marked
as automatic (!~M).
Using aptitude on the Command-Line Interface
Most of Aptitude’s features are accessible via the interactive interface as well as via the
command line. These command lines will seem familiar to regular users of apt-get
and apt-cache.
The advanced features of aptitude are also available on the command-line. You can
use the same package search patterns as in the interactive version. For example, if
you want to clean up the list of manually installed packages, and if you know that
none of the locally installed programs require any particular libraries or Perl modules,
you can mark the corresponding packages as automatic with a single command:
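# aptitude markauto '~slibs|~sperl'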
Here, you can clearly see the power of the search pattern system of aptitude, which
enables the instant selection of all the packages in the libs and perl sections.
Beware, if some packages are marked as automatic and if no other package depends
on them, they will be removed immediately (after a confirmation request).
Better Solver Algorithms To conclude this section, let’s note that aptitude has more elaborate
algorithms compared to apt when it comes to resolving difficult situations. When a set of actions
is requested and when these combined actions would lead to an incoherent system, aptitude
evaluates several possible scenarios and presents them in order of decreasing relevance. However,
these algorithms are not foolproof. Fortunately, there is always the possibility to manually select
the actions to perform. When the currently selected actions lead to contradictions, the upper part
of the screen indicates a number of broken packages, and you can navigate directly to those
packages from within the interface.
Aptitude’s Log Like dpkg, aptitude keeps a trace of executed actions in its logfile (/var/log/
aptitude). However, since both commands work at a very different level, you cannot
find the same information in their respective logfiles. While dpkg logs all the opera-
tions executed on individual packages step by step, aptitude gives a broader view of
high-level operations like a system-wide upgrade.
Beware, this logfile only contains a summary of operations performed by aptitude.
If other front-ends (or even dpkg itself) are occasionally used, then aptitude’s log
will only contain a partial view of the operations, so you can’t rely on it to build a
trustworthy history of the system.
Synaptic
Synaptic is a graphical package manager that features a clean and efficient graphical interface
(shown in Figure 8.2, “synaptic Package Manager” [page 200]) based on GTK+. Its many ready-to-
use filters give fast access to newly available packages, installed packages, upgradable packages,
obsolete packages, and so on. If you browse through these lists, you can select the operations to
be done on the packages (install, upgrade, remove, purge); these operations are not performed
immediately, but put into a task list. A single click on a button then validates the operations and
they are performed in one go.
Now it is time to dive into some more advanced topics. First, we will take a look at advanced
configuration of APT, which will allow you to set more permanent options that will apply to APT
tools. We will then show how package priorities can be manipulated, which opens the door for
advanced fine-tuned, customized updates and upgrades. We will also show how to handle multiple
distributions so that you can start experimenting with packages coming from other distributions.
Next, we will take a look at how to track automatically installed packages, a capability that enables
you to manage packages that are installed through dependencies. We will also explain how multi-
arch support opens the door for running packages built for various hardware architectures. Last
but not least, we will discuss the cryptographic protocols and utilities in place that will let you
validate each package’s authenticity.
Before we dive into the configuration of APT, let’s take a moment to discuss the configuration
mechanism of a Debian-based system. Historically, configuration was handled by dedicated con-
figuration files. However, in modern Linux systems like Debian and Kali, configuration directories
with the .d suffix are becoming more commonly used. Each such directory represents a configuration
file split into multiple snippets; in the case of APT, every file dropped into /etc/apt/apt.conf.d/
is read as a set of configuration directives.
Beware of Configuration Files Generated from .d Directories
While APT has native support of its /etc/apt/apt.conf.d directory, this is not al-
ways the case. For some applications (like exim, for example), the .d directory is a
Debian-specific addition used as input to dynamically generate the canonical configu-
ration file used by the application. In those cases, the packages provide an “update-*”
command (for example: update-exim4.conf) that will concatenate the files from the
.d directory and overwrite the main configuration file.
In those cases, you must not manually edit the main configuration file as your changes
will be lost on the next execution of the update-* command, and you must also not
forget to run the former command after having edited a file out of the .d directory
(or your changes will not be used).
Armed with an understanding of the .d configuration mechanism, let’s talk about how you
can leverage it to configure APT. As we have discussed, you can alter APT’s behavior through
command-line arguments to dpkg like this example, which performs a forced overwrite install of
zsh:
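# apt -o Dpkg::Options::="--force-overwrite" install zsh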
Obviously this is very cumbersome, especially if you use options frequently, but you can also use
the .d directory configuration structure to configure certain aspects of APT by adding directives
to a file in the /etc/apt/apt.conf.d/ directory. For example, this (and any other) directive can
easily be added to a file in /etc/apt/apt.conf.d/. The name of this file is somewhat arbitrary,
but a common convention is to use either local or 99local:
$ cat /etc/apt/apt.conf.d/99local
Dpkg::Options {
"--force-overwrite";
}
There are many other helpful configuration options and we certainly can’t cover them all, but
one we will touch on involves network connectivity. For example, if you can only access the web
through a proxy, you can point APT at it with the Acquire::http::Proxy directive.
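A minimal sketch of such a setting, assuming a hypothetical proxy reachable at proxy.example.com
on port 3128 (the file name, host, and port are placeholders to adapt to your environment):
$ cat /etc/apt/apt.conf.d/99proxy
Acquire::http::Proxy "http://proxy.example.com:3128";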
One of the most important aspects in the configuration of APT is the management of the priori-
ties associated with each package source. For instance, you might want to extend your Kali Linux
system with one or two newer packages from Debian Unstable or Debian Experimental. It is pos-
sible to assign a priority to each available package (the same package can have several priorities
depending on its version or the distribution providing it). These priorities will influence APT’s
behavior: for each package, it will always select the version with the highest priority (except if
this version is older than the installed one and its priority is less than 1000).
APT defines several default priorities. Each installed package version has a priority of 100. A non-
installed version has a priority of 500 by default but it can jump to 990 if it is part of the target
release (defined with the -t command-line option or the APT::Default-Release configuration direc-
tive).
You can modify the priorities by adding entries in the /etc/apt/preferences file with the names
of the affected packages, their version, their origin and their new priority.
APT will never install an older version of a package (that is, a package whose version number is
lower than the one of the currently installed package) except when its priority is higher than 1000.
APT will always install the highest priority package that follows this constraint. If two packages
have the same priority, APT installs the newest one (whose version number is the highest). If
two packages of the same version have the same priority but differ in their content, APT installs the
version that is not installed (this rule has been created to cover the case of a package update
without the increment of the revision number, which is usually required).
In more concrete terms, a package whose priority is less than 0 will never be installed. A package
with a priority ranging between 0 and 100 will only be installed if no other version of the package is
already installed. With a priority between 100 and 500, the package will only be installed if there
is no other newer version installed or available in another distribution. A package of priority
between 501 and 990 will only be installed if there is no newer version installed or available in the
target distribution. With a priority between 990 and 1000, the package will be installed except if
the installed version is newer. A priority greater than 1000 will always lead to the installation of
the package even if it forces APT to downgrade to an older version.
When APT checks /etc/apt/preferences, it first takes into account the most specific entries
(often those specifying the concerned package), then the more generic ones (including for exam-
ple all the packages of a distribution). If several generic entries exist, the first match is used. The
available selection criteria include the package’s name and the source providing it. Every package
source is identified by the information contained in the Release file that APT downloads together
with the Packages files; this is where criteria such as the origin (o=) and the archive name (a=)
used in the examples below come from.
Priority of Debian Experimental
If you listed Debian experimental in your sources.list file, the corresponding pack-
ages will almost never be installed because their default APT priority is 1. This
is of course a specific case, designed to keep users from installing experimental
packages by mistake. The packages can only be installed by typing apt install
package/experimental, assuming of course that you are aware of the risks and po-
tential headaches of life on the edge. It is still possible (though not recommended)
to treat packages of experimental like those of other distributions by giving them a
priority of 500. This is done with a specific entry in /etc/apt/preferences:
Package: *
Pin: release a=experimental
Pin-Priority: 500
Let’s suppose that you only want to use packages from Kali and that you only want Debian pack-
ages installed when explicitly requested. You could write the following entries in the /etc/apt/
preferences file (or in any file in /etc/apt/preferences.d/):
Package: *
Pin: release o=Kali
Pin-Priority: 900
Package: *
Pin: release o=Debian
Pin-Priority: -10
In the last two examples, you have seen a=experimental, which defines the name of the selected
distribution, and o=Kali and o=Debian, which limit the scope to packages whose origin is Kali or
Debian, respectively.
Let’s now assume that you have a server with several local programs depending on the version 5.22
of Perl and that you want to ensure that upgrades will not install another version of it. You could
use this entry:
Package: perl
Pin: version 5.22*
Pin-Priority: 1001
The reference documentation for this configuration file is available in the manual page
apt_preferences(5), which you can display with man apt_preferences.
Given that apt is such a marvelous tool, you will likely want to dive in and start experimenting
with packages coming from other distributions. For example, after installing a Kali Rolling system,
you might want to try out a software package available in Kali Dev, Debian Unstable, or Debian
Experimental without diverging too much from the system’s initial state.
Even if you will occasionally encounter problems while mixing packages from different distribu-
tions, apt manages such coexistence very well and limits risks very effectively (provided that the
package dependencies are accurate). First, list all distributions used in /etc/apt/sources.list
and define your reference distribution with the APT::Default-Release parameter (see section 8.2.3,
“Upgrading Kali Linux” [page 184]).
Let’s suppose that Kali Rolling is your reference distribution but that Kali Dev and Debian
Unstable are also listed in your sources.list file. In this case, you can use apt install
package/unstable to install a package from Debian Unstable. If the installation fails due to some
unsatisfiable dependencies, let it solve those dependencies within Unstable by adding the -t un-
stable parameter.
In this situation, upgrades (upgrade and full-upgrade) are done within Kali Rolling except for
packages already upgraded to another distribution: those will follow updates available in the other
distributions. We will explain this behavior with the help of the default priorities set by APT below.
Do not hesitate to use apt-cache policy (see sidebar “Using apt-cache policy” [page 204]) to
verify the given priorities.
Everything relies on the fact that APT only considers packages of higher or equal version than the
installed package (assuming that /etc/apt/preferences has not been used to force priorities
higher than 1000 for some packages).
Using apt-cache policy To gain a better understanding of the mechanism of priority, do not hesitate to exe-
cute apt-cache policy to display the default priority associated with each package
source. You can also use apt-cache policy package to display the priorities of all
available versions of a given package.
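To give an idea of what to expect, the output for a single package has roughly the following shape
(the version numbers, priorities, and repository URL below are only illustrative; your system will
show its own values):
$ apt-cache policy zsh
zsh:
  Installed: 5.8-6
  Candidate: 5.8-6
  Version table:
 *** 5.8-6 500
        500 http://kali.download/kali kali-rolling/main amd64 Packages
        100 /var/lib/dpkg/status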
One of the essential functionalities of apt is the tracking of packages installed only through de-
pendencies. These packages are called automatic and often include libraries.
With this information, when packages are removed, the package managers can compute a list of
automatic packages that are no longer needed (because there are no manually installed packages
depending on them). The command apt autoremove will get rid of those packages. Aptitude does
not have this command because it removes them automatically as soon as they are identified. In
all cases, the tools display a clear message listing the affected packages.
It is a good habit to mark as automatic any package that you don’t need directly so that they
are automatically removed when they aren’t necessary anymore. You can use apt-mark auto
package to mark the given package as automatic, whereas apt-mark manual package does the
opposite. aptitude markauto and aptitude unmarkauto work in the same way, although they
offer more features for marking many packages at once (see section 8.2.7.1, “Aptitude” [page 196]).
The console-based interactive interface of aptitude also makes it easy to review the automatic
flag on many packages.
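For example, to flag a library that was installed by hand as automatic (the package name is only an
illustration; pick whatever applies on your system):
# apt-mark auto libpcap0.8
libpcap0.8 set to automatically installed.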
You might want to know why an automatically installed package is present on the system. To get
this information from the command line, you can use aptitude why package (apt and apt-get
have no similar feature):
$ aptitude why python-debian
i aptitude Recommends apt-xapian-index
i A apt-xapian-index Depends python-debian (>= 0.1.15)
All Debian packages have an Architecture field in their control information. This field can contain
either “all” (for packages that are architecture-independent) or the name of the architecture that
it targets (like amd64 or armhf). In the latter case, by default, dpkg will only install the package
if its architecture matches the host’s architecture as returned by dpkg --print-architecture.
This restriction ensures that you do not end up with binaries compiled for an incorrect architec-
ture. Everything would be perfect except that (some) computers can run binaries for multiple
architectures, either natively (an amd64 system can run i386 binaries) or through emulators.
Enabling Multi-Arch
Multi-arch support for dpkg allows users to define foreign architectures that can be installed on
the current system. This is easily done with dpkg --add-architecture, as in the example below
where the i386 architecture needs to be added to the amd64 system in order to run Microsoft Win-
dows applications using Wine5. There is a corresponding dpkg --remove-architecture to drop
support of a foreign architecture, but it can only be used when no packages of this architecture
remain installed.
# dpkg --print-architecture
amd64
# wine
it looks like wine32 is missing, you should install it.
multiarch needs to be enabled first. as root, please
execute "dpkg --add-architecture i386 && apt-get update &&
apt-get install wine32"
Usage: wine PROGRAM [ARGUMENTS...] Run the specified program
wine --help Display this help and exit
wine --version Output version information and exit
# dpkg --add-architecture i386
# dpkg --print-foreign-architectures
i386
# apt update
[...]
# apt install wine32
[...]
Setting up libwine:i386 (1.8.6-5) ...
Setting up vdpau-driver-all:i386 (1.1.1-6) ...
Setting up wine32:i386 (1.8.6-5) ...
5
https://www.winehq.org/
APT will automatically detect when dpkg has been configured to support foreign architectures
and will start downloading the corresponding Packages files during its update process.
Foreign packages can then be installed with apt install package:architecture.
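For instance, once the i386 architecture has been added as shown above, you could install the 32-bit
C++ runtime alongside its 64-bit counterpart (the package name is merely an example of a library
packaged for multi-arch):
# apt install libstdc++6:i386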
Using Proprietary i386 Binaries on amd64
There are multiple use cases for multi-arch, but the most popular one is the possibility
to execute 32-bit binaries (i386) on 64-bit systems (amd64), in particular since several
popular proprietary applications (like Skype) are only provided in 32-bit versions.
To make multi-arch actually useful and usable, libraries had to be repackaged and moved to an
architecture-specific directory so that multiple copies (targeting different architectures) can be
installed alongside one another. Such updated packages contain the Multi-Arch: same header field
to tell the packaging system that the various architectures of the package can be safely co-installed
(and that those packages can only satisfy dependencies of packages of the same architecture).
$ dpkg -s libwine
dpkg-query: error: --status needs a valid package name but ’libwine’ is not: ambiguous
å package name ’libwine’ with more than one installed instance
It is worth noting that Multi-Arch: same packages must have their names qualified with their
architecture to be unambiguously identifiable. These packages may also share files with other
instances of the same package installed for different architectures; dpkg ensures that any shared
files are bit-for-bit identical across the co-installed instances.
System upgrades are very sensitive operations and you really want to ensure that you only install
official packages from the Kali repositories. If the Kali mirror you are using has been compro-
mised, a computer cracker could try to add malicious code to an otherwise legitimate package.
Such a package, if installed, could do anything the cracker designed it to do including disclose
passwords or confidential information. To circumvent this risk, Kali provides a tamper-proof seal
to guarantee—at install time—that a package really comes from its official maintainer and hasn’t
been modified by a third party.
The seal works with a chain of cryptographic hashes and a signature. The signed file is the Release
file, provided by the Kali mirrors. It contains a list of the Packages files (including their com-
pressed forms, Packages.gz and Packages.xz, and the incremental versions), along with their
MD5, SHA1, and SHA256 hashes, which ensures that the files haven’t been tampered with. These
Packages files contain a list of the Debian packages available on the mirror along with their hashes,
which ensures in turn that the contents of the packages themselves haven’t been altered either.
The trusted keys are managed with the apt-key command found in the apt package. This program
maintains a keyring of GnuPG public keys, which are used to verify signatures in the Release.gpg
files available on the mirrors. It can be used to add new keys manually (when non-official mirrors
are needed). Generally however, only the official Kali keys are needed. These keys are automati-
cally kept up-to-date by the kali-archive-keyring package (which puts the corresponding keyrings
in /etc/apt/trusted.gpg.d). However, the first installation of this particular package requires
caution: even if the package is signed like any other, the signature cannot be verified externally.
Cautious administrators should therefore check the fingerprints of imported keys before trusting
them to install new packages:
# apt-key fingerprint
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
/etc/apt/trusted.gpg.d/debian-archive-buster-automatic.gpg
----------------------------------------------------------
pub rsa4096 2019-04-14 [SC] [expires: 2027-04-12]
80D1 5823 B7FD 1561 F9F7 BCDD DC30 D7C2 3CBB ABEE
uid [ unknown] Debian Archive Automatic Signing Key (10/buster) <ftpmaster@debian.org>
/etc/apt/trusted.gpg.d/debian-archive-buster-security-automatic.gpg
-------------------------------------------------------------------
pub rsa4096 2019-04-14 [SC] [expires: 2027-04-12]
5E61 B217 265D A980 7A23 C5FF 4DFA B270 CAA9 6DFA
uid [ unknown] Debian Security Archive Automatic Signing Key (10/buster) <ftpmaster@debian.org>
sub rsa4096 2019-04-14 [S] [expires: 2027-04-12]
/etc/apt/trusted.gpg.d/debian-archive-buster-stable.gpg
-------------------------------------------------------
pub rsa4096 2019-02-05 [SC] [expires: 2027-02-03]
6D33 866E DD8F FA41 C014 3AED DCC9 EFBF 77E1 1517
uid [ unknown] Debian Stable Release Key (10/buster) <debian-release@lists.debian.org>
/etc/apt/trusted.gpg.d/debian-archive-jessie-automatic.gpg
----------------------------------------------------------
pub rsa4096 2014-11-21 [SC] [expires: 2022-11-19]
126C 0D24 BD8A 2942 CC7D F8AC 7638 D044 2B90 D010
uid [ unknown] Debian Archive Automatic Signing Key (8/jessie) <ftpmaster@debian.org>
/etc/apt/trusted.gpg.d/debian-archive-jessie-security-automatic.gpg
-------------------------------------------------------------------
pub rsa4096 2014-11-21 [SC] [expires: 2022-11-19]
D211 6914 1CEC D440 F2EB 8DDA 9D6D 8F6B C857 C906
uid [ unknown] Debian Security Archive Automatic Signing Key (8/jessie) <ftpmaster@debian.org>
/etc/apt/trusted.gpg.d/debian-archive-jessie-stable.gpg
-------------------------------------------------------
pub rsa4096 2013-08-17 [SC] [expires: 2021-08-15]
75DD C3C4 A499 F1A1 8CB5 F3C8 CBF8 D6FD 518E 17E1
uid [ unknown] Jessie Stable Release Key <debian-release@lists.debian.org>
/etc/apt/trusted.gpg.d/debian-archive-stretch-automatic.gpg
-----------------------------------------------------------
pub rsa4096 2017-05-22 [SC] [expires: 2025-05-20]
E1CF 20DD FFE4 B89E 8026 58F1 E0B1 1894 F66A EC98
uid [ unknown] Debian Archive Automatic Signing Key (9/stretch) <ftpmaster@debian.org>
sub rsa4096 2017-05-22 [S] [expires: 2025-05-20]
/etc/apt/trusted.gpg.d/debian-archive-stretch-security-automatic.gpg
--------------------------------------------------------------------
pub rsa4096 2017-05-22 [SC] [expires: 2025-05-20]
6ED6 F5CB 5FA6 FB2F 460A E88E EDA0 D238 8AE2 2BA9
uid [ unknown] Debian Security Archive Automatic Signing Key (9/stretch) <ftpmaster@debian.org>
sub rsa4096 2017-05-22 [S] [expires: 2025-05-20]
/etc/apt/trusted.gpg.d/debian-archive-stretch-stable.gpg
--------------------------------------------------------
pub rsa4096 2017-05-20 [SC] [expires: 2025-05-18]
067E 3C45 6BAE 240A CEE8 8F6F EF0F 382A 1A7B 6500
uid [ unknown] Debian Stable Release Key (9/stretch) <debian-release@lists.debian.org>
/etc/apt/trusted.gpg.d/kali-archive-keyring.gpg
-----------------------------------------------
pub rsa4096 2012-03-05 [SC] [expires: 2023-01-16]
44C6 513A 8E4F B3D3 0875 F758 ED44 4FF0 7D8D 0BF6
uid [ unknown] Kali Linux Repository <devel@kali.org>
sub rsa4096 2012-03-05 [E] [expires: 2023-01-16]
When a third-party package source is added to the sources.list file, APT needs to be told to
trust the corresponding GPG authentication key (otherwise it will keep complaining that it can’t
ensure the authenticity of the packages coming from that repository). The first step is of course
to obtain that key through a trustworthy channel; it can then be installed as a keyring file in
/etc/apt/trusted.gpg.d/ (the deprecated apt-key add command used to serve this purpose).
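A minimal sketch of that step, assuming a purely hypothetical repository that publishes its
ASCII-armored key at https://repo.example.com/key.asc (the URL and the keyring file name are
placeholders):
# wget -qO- https://repo.example.com/key.asc | gpg --dearmor > /etc/apt/trusted.gpg.d/example-repo.gpg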
8.4. APT Package Reference: Digging Deeper into the Debian Package System
Now it is time to dive really deep into Debian and Kali’s package system. At this point, we are
going to move beyond tools and syntax and focus more on the nuts and bolts of the packaging
system. This behind-the-scenes view will help you understand how APT works at its foundation
and will give you insight into how to seriously streamline and customize your Kali system. You
may not necessarily memorize all the material in this section, but the walk-through and reference
material will serve you well as you grow in your mastery of the Kali Linux system.
So far, you have interacted with APT’s package data through the various tools designed to interface
with it. Next, we will dig deeper and take a look inside the packages and look at the internal meta-
information (or information about other information) used by the package management tools.
This combination of a file archive and of meta-information is directly visible in the structure of a
.deb file, which is simply an ar archive, concatenating three files:
$ ar t /var/cache/apt/archives/apt_1.4~beta1_amd64.deb
debian-binary
control.tar.gz
data.tar.xz
The debian-binary file contains a single version number describing the format of the archive:
$ ar p /var/cache/apt/archives/apt_1.4~beta1_amd64.deb debian-binary
2.0
The control.tar.gz archive contains the package’s meta-information (the control file and the
maintainer scripts described later in this section). Finally, the data.tar.xz archive (the
compression format might vary) contains the actual files to be installed on the file system:
$ ar p /var/cache/apt/archives/apt_1.4~beta1_amd64.deb data.tar.xz | tar -tJf -
./
./etc/
./etc/apt/
./etc/apt/apt.conf.d/
./etc/apt/apt.conf.d/01autoremove
./etc/apt/preferences.d/
./etc/apt/sources.list.d/
./etc/apt/trusted.gpg.d/
./etc/cron.daily/
./etc/cron.daily/apt-compat
./etc/kernel/
./etc/kernel/postinst.d/
./etc/kernel/postinst.d/apt-auto-removal
./etc/logrotate.d/
./etc/logrotate.d/apt
./lib/
./lib/systemd/
[...]
Note that in this example, you are viewing a .deb package in APT’s archive cache and that your
archive may contain files with different version numbers than what is shown.
In this section, we will introduce this meta-information contained in each package and show you
how to leverage it.
We will begin by looking at the control file, which is contained in the control.tar.gz archive.
The control file contains the most vital information about the package. It uses a structure similar
to email headers and can be viewed with the dpkg -I command. For example, the control file
for apt looks like this:
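To give you an idea of its general shape, here is a trimmed and simplified sketch of that output (the
field values shown are only illustrative and will differ with whatever version sits in your cache):
$ dpkg -I /var/cache/apt/archives/apt_1.4~beta1_amd64.deb control
Package: apt
Version: 1.4~beta1
Architecture: amd64
Maintainer: APT Development Team <deity@lists.debian.org>
Depends: adduser, gpgv | gpgv2 | gpgv1, libapt-pkg5.0 (>= 1.3~rc2), libc6 (>= 2.15), [...]
Priority: important
Section: admin
Description: commandline package manager
[...]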
In this section, we will walk you through the control file and explain the various fields. Each of
these will give you a better understanding of the packaging system, give you more fine-tuned
configuration control, and provide you with insight needed to troubleshoot problems that may
occur.
The package dependencies are defined in the Depends field in the package header. This is a list of
conditions to be met for the package to work correctly—this information is used by tools such as
apt in order to install the required libraries, in appropriate versions fulfilling the dependencies of
the package to be installed. For each dependency, you can restrict the range of versions that meet
that condition. In other words, it is possible to express the fact that you need the package libc6
in a version equal to or greater than a given number, written for example as libc6 (>= 2.15).
In a list of conditions to be met, the comma serves as a separator, interpreted as a logical “AND.”
In conditions, the vertical bar (“|”) expresses a logical “OR” (it is an inclusive “OR,” not an exclu-
sive “either/or”). The “OR” carries greater priority than “AND” and can be used as many times as necessary.
Thus, the dependency “(A OR B) AND C” is written A | B, C. In contrast, the expression “A OR (B
AND C)” should be written as “(A OR B) AND (A OR C)”, since the Depends field does not tolerate
parentheses that change the order of priorities between the logical operators “OR” and “AND”.
It would thus be written A | B, A | C. See https://www.debian.org/doc/debian-policy/
ch-relationships.html for more information.
The dependencies system is a good mechanism for guaranteeing the operation of a program but it
has another use with metapackages. These are empty packages that only describe dependencies.
They facilitate the installation of a consistent group of programs preselected by the metapackage
maintainer; as such, apt install metapackage will automatically install all of these programs
using the metapackage’s dependencies. The gnome, kali-tools-wireless, and kali-linux-large packages
are examples of metapackages. For more information on Kali’s metapackages, see https://tools.
kali.org/kali-metapackages.
Pre-dependencies, which are listed in the Pre-Depends field in the package headers, complete the
normal dependencies; their syntax is identical. A normal dependency indicates that the pack-
age in question must be unpacked and configured before configuration of the package declaring
the dependency. A pre-dependency stipulates that the package in question must be unpacked
and configured before execution of the pre-installation script of the package declaring the pre-
dependency, that is before its installation.
A pre-dependency is very demanding for apt because it adds a strict constraint on the ordering of
the packages to install. As such, pre-dependencies are discouraged unless absolutely necessary. It
is even recommended to consult other developers on debian-devel@lists.debian.org before adding
a pre-dependency as it is generally possible to find another solution as a work-around.
The Recommends and Suggests fields describe dependencies that are not compulsory. The rec-
ommended dependencies, the most important, considerably improve the functionality offered by
the package but are not indispensable to its operation. The suggested dependencies, of secondary
importance, indicate that certain packages may complement and increase their respective utility,
but it is perfectly reasonable to install one without the others.
You should always install the recommended packages unless you know exactly why you do not
need them. Conversely, it is not necessary to install suggested packages unless you know why you
need them.
The Enhances field also describes a suggestion, but in a different context. It is indeed located in
the suggested package, and not in the package that benefits from the suggestion. Its interest lies
in that it is possible to add a suggestion without having to modify the package that is concerned.
Thus, all add-ons, plug-ins, and other extensions of a program can then appear in the list of sug-
gestions related to the software. Although it has existed for several years, this last field is still
largely ignored by programs such as apt or synaptic. The original goal was to let a package like
xul-ext-adblock-plus (a Firefox extension) declare Enhances: firefox, firefox-esr and thus appear in
the list of suggested packages associated to firefox and firefox-esr.
The Conflicts field indicates when a package cannot be installed simultaneously with another. The
most common reasons for this are that both packages include a file of the same name, provide the
same service on the same transmission control protocol (TCP) port, or would hinder each other’s
operation.
dpkg will refuse to install a package if it triggers a conflict with an already installed package,
except if the new package specifies that it will replace the installed package, in which case dpkg
will choose to replace the old package with the new one. APT always follows your instructions: if
you choose to install a new package, it will automatically offer to uninstall the package that poses
a problem.
The Breaks field has an effect similar to that of the Conflicts field, but with a special meaning. It
signals that the installation of a package will break another package (or particular versions of it).
In general, this incompatibility between two packages is transitory and the Breaks relationship
specifically refers to the incompatible versions.
The Provides field introduces the very interesting concept of a virtual package. It has many roles, but two are
of particular importance. The first role consists in using a virtual package to associate a generic
service with it (the package provides the service). The second indicates that a package completely
replaces another and that for this purpose, it can also satisfy the dependencies that the other
would satisfy. It is thus possible to create a substitution package without having to use the same
package name.
Metapackage and Virtual Package
It is essential to clearly distinguish metapackages from virtual packages. The former
are real packages (including real .deb files), whose only purpose is to express depen-
dencies.
Virtual packages, however, do not exist physically; they are only a means of identify-
ing real packages based on common, logical criteria (for example, service provided, or
compatibility with a standard program or a pre-existing package).
Providing a Service Let’s discuss the first case in greater detail with an example: all mail
servers, such as postfix or sendmail, are said to provide the mail-transport-agent virtual package.
Thus, any package that needs this service to be functional (e.g. a mailing list manager, such as
smartlist or sympa) simply states in its dependencies that it requires a mail-transport-agent instead
of specifying a large yet incomplete list of possible solutions. Furthermore, it is useless to install
two mail servers on the same machine, which is why each of these packages declares a conflict
with the mail-transport-agent virtual package. A conflict between a package and itself is ignored
by the system, but this technique will prohibit the installation of two mail servers side by side.
Interchangeability with Another Package The Provides field is also interesting when the con-
tent of a package is included in a larger package. For example, the libdigest-md5-perl Perl module
was an optional module in Perl 5.6, and has been integrated as standard in Perl 5.8. As such, the
package perl has since version 5.8 declared Provides: libdigest-md5-perl so that the dependencies
on this package are met if the system has Perl 5.8 (or newer). The libdigest-md5-perl package itself
was deleted, since it no longer had any purpose when old Perl versions were removed.
This feature is very useful, since it is never possible to anticipate the vagaries of development
and it is necessary to be able to adjust to the renaming (and other automatic replacement) of
obsolete software.
The Replaces field indicates that the package contains files that are also present in another pack-
age, but that the package is legitimately entitled to replace them. Without this specification, dpkg
fails, stating that it cannot overwrite the files of another package (technically, it is possible to
force it to do so with the --force-overwrite option, but that is not considered standard operation).
This allows identification of potential problems and requires the maintainer to study the matter
prior to choosing whether to add such a field.
The use of this field is justified when package names change or when a package is included in
another. This also happens when the maintainer decides to distribute files differently among
various binary packages produced from the same source package: a replaced file no longer belongs
to the old package, but only to the new one.
If all of the files in an installed package have been replaced, the package is considered to be re-
moved. Finally, this field also encourages dpkg to remove the replaced package where there is a
conflict.
In addition to the control file, the control.tar.gz archive for each Debian package may contain
a number of scripts (postinst, postrm, preinst, prerm) called by dpkg at different stages in the
processing of a package. We can use dpkg -I to show these files as they reside in a .deb package
archive:
$ dpkg -I /var/cache/apt/archives/zsh_5.3-1_amd64.deb | head
new debian package, version 2.0.
size 814486 bytes: control archive=2557 bytes.
838 bytes, 20 lines control
3327 bytes, 43 lines md5sums
969 bytes, 41 lines * postinst #!/bin/sh
348 bytes, 20 lines * postrm #!/bin/sh
175 bytes, 5 lines * preinst #!/bin/sh
175 bytes, 5 lines * prerm #!/bin/sh
Package: zsh
Version: 5.3-1
$ dpkg -I zsh_5.3-1_amd64.deb preinst
#!/bin/sh
set -e
# Automatically added by dh_installdeb
dpkg-maintscript-helper symlink_to_dir /usr/share/doc/zsh zsh-common 5.0.7-3 -- ”$@”
# End automatically added section
The Debian Policy6 describes each of these files in detail, specifying the scripts called and the
arguments they receive. These sequences may be complicated, since if one of the scripts fails,
dpkg will try to return to a satisfactory state by canceling the installation or removal in progress
(insofar as it is possible).
The dpkg Database You can traverse the dpkg database on the filesystem at /var/lib/dpkg/. This di-
rectory contains a running record of all the packages that have been installed on
the system. All of the configuration scripts for installed packages are stored in the
/var/lib/dpkg/info/ directory, in the form of a file prefixed with the package’s
name:
$ ls /var/lib/dpkg/info/zsh.*
/var/lib/dpkg/info/zsh.list
/var/lib/dpkg/info/zsh.md5sums
/var/lib/dpkg/info/zsh.postinst
/var/lib/dpkg/info/zsh.postrm
/var/lib/dpkg/info/zsh.preinst
/var/lib/dpkg/info/zsh.prerm
This directory also includes a file with the .list extension for each package, contain-
ing the list of files that belong to that package:
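For example (output abbreviated; the exact list of files depends on the packaged version):
$ head /var/lib/dpkg/info/zsh.list
/.
/bin
/bin/zsh
/bin/zsh5
[...]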
6
https://www.debian.org/doc/debian-policy/ch-maintainerscripts.html
The /var/lib/dpkg/status file contains a series of data blocks (in the format of
the well-known RFC 2822 mail headers) describing the status of
each package. The information from the control file of the installed packages is also
replicated there.
$ more /var/lib/dpkg/status
Package: gnome-characters
Status: install ok installed
Priority: optional
Section: gnome
Installed-Size: 1785
Maintainer: Debian GNOME Maintainers <pkg-gnome-
å maintainers@lists.alioth.debian.org>
Architecture: amd64
Version: 3.20.1-1
[...]
Let’s discuss the configuration scripts and see how they interact. In general, the preinst script is
executed prior to installation of the package, while the postinst follows it. Likewise, prerm is
invoked before removal of a package and postrm afterwards. An update of a package is equivalent
to removal of the previous version and installation of the new one. It is not possible to describe in
detail all the possible scenarios here but we will discuss the most common two: an installation/up-
date and a removal.
These sequences can be quite confusing, but a visual representation may help. Manoj Srivastava
made some diagrams7 explaining how the configuration scripts are called by dpkg. Similar dia-
grams have also been developed by the Debian Women project8 ; they are a bit simpler to under-
stand, but less complete.
7
https://people.debian.org/~srivasta/MaintainerScripts.html
8
https://wiki.debian.org/MaintainerScripts
[Figure: Package Removal (sequence of maintainer script calls during removal)]
The debconf Tool The debconf tool was created to resolve a recurring problem in Debian. All Debian
packages unable to function without a minimum of configuration used to ask ques-
tions with calls to the echo and read commands in postinst shell scripts (and other
similar scripts). This forced the installer to babysit large installations or updates in
order to respond to various configuration queries as they arose. These manual inter-
actions have now been almost entirely dispensed with, thanks to debconf.
The debconf tool has many interesting features: It requires the developer to specify
user interaction; it allows localization of all the displayed strings (all translations are
stored in the templates file describing the interactions); it provides different fron-
tends for questions (text mode, graphical mode, non-interactive); and it allows cre-
ation of a central database of responses to share the same configuration with several
computers. The most important feature is that all of the questions can be presented in
a row, all at once, prior to starting a long installation or update process. Now, you can
go about your business while the system handles the installation on its own, without
having to stay there staring at the screen, waiting for questions to pop up.
In addition to the maintainer scripts and control data already mentioned in the previous sections,
the control.tar.gz archive of a Debian package may contain other interesting files:
# ar p /var/cache/apt/archives/bash_4.4-2_amd64.deb control.tar.gz | tar -tzf -
./
./conffiles
./control
./md5sums
./postinst
./postrm
./preinst
./prerm
The first—md5sums—contains the MD5 checksums for all of the package’s files. Its main advantage
is that it allows dpkg --verify to check if these files have been modified since their installation.
Note that when this file doesn’t exist, dpkg will generate it dynamically at installation time (and
store it in the dpkg database just like other control files). The conffiles file, for its part, lists the
package files that must be handled as configuration files: dpkg will preserve any local modifications
to them and will not silently overwrite them during an upgrade.
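For instance, you can display the conffiles file of a package in the cache with the same dpkg -I
mechanism used earlier (the bash entries shown here are typical but may vary between versions):
$ dpkg -I /var/cache/apt/archives/bash_4.4-2_amd64.deb conffiles
/etc/bash.bashrc
/etc/skel/.bash_logout
/etc/skel/.bashrc
/etc/skel/.profile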
Options such as --force-confdef (take the default action for a configuration file when one exists)
and --force-confold (keep the currently installed version of a configuration file) apply to dpkg,
but most of the time the administrator will work directly with the aptitude or apt programs. It
is, thus, necessary to know the syntax used to indicate the options to pass to the dpkg command
(their command line interfaces are very similar).
# apt -o DPkg::options::="--force-confdef" -o DPkg::options::="--force-confold" full-
å upgrade
These options can be stored directly in apt’s configuration. To do so, simply write the following
line in the /etc/apt/apt.conf.d/local file:
DPkg::options { "--force-confdef"; "--force-confold"; }
Including this option in the configuration file means that it will also be used in a graphical inter-
face such as aptitude.
Conversely, you can also force dpkg to ask configuration file questions. The --force-confask
option instructs dpkg to display the questions about the configuration files, even in cases where
they would not normally be necessary. Thus, when reinstalling a package with this option, dpkg
will ask the questions again for all of the configuration files modified by the administrator. This is
very convenient, especially for reinstalling the original configuration file if it has been deleted and
no other copy is available.
8.5. Summary
In this section, we learned more about the Debian package system, discussed the Advanced Pack-
age Tool (APT) and dpkg, learned about basic package interaction, advanced APT configuration
and usage, and dug deeper into the Debian package system with a brief reference of the .deb file
format. We looked at the control file, configuration scripts, checksums, and the conffiles file.
Summary Tips:
A Debian package is a compressed archive of a software application. It contains the application’s
files as well as other metadata including the names of the dependencies that the application needs
plus any scripts that enable the execution of commands at different stages in the package’s lifecy-
cle (installation, removal, upgrades).
The dpkg tool, contrary to apt and apt-get (of the APT family), has no knowledge of all the avail-
able packages that could be used to fulfill package dependencies. Thus, to manage Debian pack-
ages, you will likely use the latter tools as they are able to automatically resolve dependency issues.
You can use APT to install and remove applications, update packages, and even upgrade your
entire system. Here are the key points that you should know about APT and its configuration:
• The sources.list file is the key configuration file for defining package sources (or reposi-
tories that contain packages).
• Debian and Kali use three sections to differentiate packages according to the licenses chosen
by the authors of each work: main contains all packages that fully comply with the Debian
Free Software Guidelines9 ; non-free contains software that does not (entirely) conform to
the Free Software Guidelines but can nevertheless be distributed without restrictions; and
contrib (contributions) includes open source software that cannot function without some
non-free elements.
• Kali maintains several repositories including: kali-rolling, which is the main repository for
end-users and should always contain installable and recent packages; and kali-dev, which is
used by Kali developers and is not for public use.
• When working with APT, you should first download the list of currently-available packages
with apt update.
• You can add a package to the system with a simple apt install package. APT will auto-
matically install the necessary dependencies.
• To remove a package use apt remove package. It will also remove the reverse dependen-
cies of the package (i.e. packages that depend on the package to be removed).
9
https://www.debian.org/social_contract#guidelines
As an advanced user, you can create files in /etc/apt/apt.conf.d/ to configure certain aspects
of APT. You can also manage package priorities, track automatically installed packages, work with
several distributions or architectures at once, use cryptographic signatures to validate packages,
and upgrade files using the techniques outlined in this chapter.
In spite of the Debian or Kali maintainers’ best efforts, a system upgrade isn’t always as smooth as
we would hope. When this happens, you can look at the Kali bug tracker10 and at the Debian bug
tracking system11 at https://bugs.debian.org/package to check whether the problem has already
been reported.
10
https://bugs.kali.org/
11
https://bugs.debian.org/
Custom packages
Custom kernel
Custom images
live-build
Persistence
Chapter 9
Advanced Usage
Contents
Modifying Kali Packages 228 Recompiling the Linux Kernel 237 Building Custom Kali Live ISO Images 241
Adding Persistence to the Live ISO with a USB Key 246 Summary 251
Kali has been built as a highly modular and customizable penetration testing platform and allows
for some fairly advanced customization and usage. Customizations can happen at multiple lev-
els, beginning at the source code level. The sources of all Kali packages are publicly available. In
this chapter, we will show how you can retrieve packages, modify them, and build your own cus-
tomized packages out of them. The Linux kernel is somewhat of a special case and as such, it is
covered in a dedicated section (section 9.2, “Recompiling the Linux Kernel” [page 237]), where we
will discuss where to find sources, how to configure the kernel build, and finally how to compile
it and how to build the associated kernel packages.
The second level of customization is in the process of building live ISO images. We will show how
the live-build tool offers plenty of hooks and configuration options to customize the resulting
ISO image, including the possibility to use custom Debian packages in place of the packages avail-
able on mirrors.
We will also discuss how you can create a persistent live ISO built onto a USB key that will preserve
files and operating system changes between reboots.
Modifying Kali packages is usually a task for Kali contributors and developers: they update pack-
ages with new upstream versions, they tweak the default configuration for a better integration in
the distribution, or they fix bugs reported by users. But you might have specific needs not fulfilled
by the official packages and knowing how to build a modified package can thus be very valuable.
You might wonder why you need to bother with the package at all. After all, if you have to modify
a piece of software, you can always grab its source code (usually with git) and run the modified
version directly from the source checkout. This is fine when it is possible and when you use your
home directory for this purpose, but if your application requires a system-wide setup (for example,
with a make install step) then it will pollute your file system with files unknown to dpkg and will
soon create problems that cannot be caught by package dependencies. Furthermore, with proper
packages you will be able to share your changes and deploy them on multiple computers much
more easily or revert the changes after having discovered that they were not working as well as
you hoped.
So when would you want to modify a package? Let’s take a look at a few examples. First, we
will assume that you are a heavy user of Social-Engineer Toolkit (SET) and you noticed a new
upstream release but the Kali developers are all busy for a conference and you want to try it out
immediately. You want to update the package yourself. In another case, we will assume that you
are struggling to get your MIFARE NFC card working and you want to rebuild “libfreefare” to
enable debug messages in order to have actionable data to provide in a bug report that you are
currently preparing. In a last case, we will assume that the “pyrit” program fails with a cryptic
error message. After a web search, you find a commit that you expect to fix your problem in the
upstream GitHub repository and you want to rebuild the package with this fix applied.
Rebuilding a Kali package starts with getting its source code. A source package is composed of
multiple files: the main file is the *.dsc (Debian Source Control) file as it lists the other accompanying
files, which can be *.tar.{gz,bz2,xz}, sometimes *.diff.gz, or *.debian.tar.{gz,bz2,xz}
files.
The source packages are stored on Kali mirrors that are available over HTTP. You could use your
web browser to download all the required files but the easiest way to accomplish this is to use
the apt source source_package_name command. This command requires a deb-src line in the
/etc/apt/sources.list file and up-to-date index files (accomplished by running apt update).
By default, Kali doesn’t add the required line as few Kali users actually need to retrieve source
packages but you can easily add it (see sample file in section 8.1.3, “Kali Repositories” [page 179]
and the associated explanations in section 8.1.2, “Understanding the sources.list File” [page
178]).
$ apt source libfreefare
Reading package lists... Done
NOTICE: ’libfreefare’ packaging is maintained in the ’Git’ version control system at:
git://anonscm.debian.org/collab-maint/libnfc.git
Please use:
git clone git://anonscm.debian.org/collab-maint/libnfc.git
to retrieve the latest (possibly unreleased) updates to the package.
Need to get 119 kB of source archives.
Get:1 http://kali.download/kali kali-rolling/main libfreefare 0.4.0-2.1 (dsc) [2,144 B]
Get:2 http://kali.download/kali kali-rolling/main libfreefare 0.4.0-2.1 (tar) [113 kB]
Get:3 http://kali.download/kali kali-rolling/main libfreefare 0.4.0-2.1 (diff) [3,732 B]
Fetched 119 kB in 1s (100 kB/s)
dpkg-source: info: extracting libfreefare in libfreefare-0.4.0
dpkg-source: info: unpacking libfreefare_0.4.0.orig.tar.gz
dpkg-source: info: unpacking libfreefare_0.4.0-2.1.debian.tar.xz
$ cd libfreefare-0.4.0
$ ls
AUTHORS cmake configure.ac COPYING examples libfreefare m4 NEWS test
ChangeLog CMakeLists.txt contrib debian HACKING libfreefare.pc.in Makefile.am README TODO
$ ls debian
changelog compat control copyright libfreefare0.install libfreefare-bin.install libfreefare-dev.install libfreefare-doc.install README.Source rules source watch
In this example, while we received the source package from a Kali mirror, the package is the
same as in Debian since the version string doesn’t contain “kali”. This means that no Kali-specific
changes have been applied.
If you need a specific version of the source package, which is currently not available in the repos-
itories listed in /etc/apt/sources.list, then the easiest way to download it is to find out the
URL of its .dsc file by looking it up on https://pkg.kali.org/ and then handing that URL over
to dget (from the devscripts package).
$ dget http://http.kali.org/pool/main/libf/libfreefare/libfreefare_0.4.0+0~git1439352548.ffde4d-1.dsc
dget: retrieving http://http.kali.org/pool/main/libf/libfreefare/libfreefare_0.4.0+0~git1439352548.ffde4d-1.dsc
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 362 100 362 0 0 1117 0 --:--:-- --:--:-- --:--:-- 1120
100 1935 100 1935 0 0 3252 0 --:--:-- --:--:-- --:--:-- 3252
dget: retrieving http://http.kali.org/pool/main/libf/libfreefare/libfreefare_0.4.0+0~git1439352548.ffde4d.orig.tar.gz
[...]
libfreefare_0.4.0+0~git1439352548.ffde4d-1.dsc:
dscverify: libfreefare_0.4.0+0~git1439352548.ffde4d-1.dsc failed signature check:
gpg: WARNING: no command supplied. Trying to guess what you mean ...
gpg: Signature made Wed 12 Aug 2015 12:14:03 AM EDT
gpg: using RSA key 43EF73F4BD8096DA
gpg: Can’t check signature: No public key
Validation FAILED!!
$ dpkg-source -x libfreefare_0.4.0+0~git1439352548.ffde4d-1.dsc
gpgv: Signature made Wed 12 Aug 2015 12:14:03 AM EDT
gpgv: using RSA key 43EF73F4BD8096DA
gpgv: Can’t check signature: No public key
dpkg-source: warning: failed to verify signature on ./libfreefare_0.4.0+0~git1439352548.ffde4d-1.dsc
dpkg-source: info: extracting libfreefare in libfreefare-0.4.0+0~git1439352548.ffde4d
dpkg-source: info: unpacking libfreefare_0.4.0+0~git1439352548.ffde4d.orig.tar.gz
dpkg-source: info: unpacking libfreefare_0.4.0+0~git1439352548.ffde4d-1.debian.tar.xz
It is worth noting that dget did not automatically extract the source package because it could
not verify the PGP signature on the source package. Thus we did that step manually with
dpkg-source -x dsc-file. You can also force the source package extraction by passing the
--allow-unauthenticated (or -u) option. Conversely, you can use --download-only to skip the source
package extraction step.
Retrieving Sources from Git
You might have noticed that the apt source invocation tells you about a possible Git
repository used to maintain the package. It might point to a Debian Git repository or
to a Kali Git repository.
All Kali-specific packages are maintained in Git repositories hosted on
gitlab.com/kalilinux/packages1 . You can retrieve the sources from those repositories with git
clone https://gitlab.com/kalilinux/packages/source-package.git.
You can use the Git repositories as another way to retrieve the sources and thus
(mostly) follow the other instructions from this section. But when Kali developers
work with those repositories, they use another packaging workflow and use tools
from the git-buildpackage package that we will not cover here. You can learn more
about those tools here:
è https://honk.sigxcpu.org/piki/projects/git-buildpackage/
Now that you have the sources, you still need to install build dependencies. They will be necessary
to build the desired binary packages but are also likely required for partial builds that you might
want to run to test the changes while you make them.
Each source package declares its build dependencies in the Build-Depends field of the debian/
control file. Let’s instruct apt to install those (assuming that you are in a directory containing
an unpacked source package):
$ sudo apt build-dep ./
Note, using directory ’./’ to get the build dependencies
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
autoconf automake autopoint autotools-dev debhelper dh-autoreconf
1
https://gitlab.com/kalilinux/packages
In this sample, all build dependencies can be satisfied with packages available to APT. This might
not always be the case as the tool building kali-rolling does not ensure installability of build de-
pendencies (only dependencies of binary packages are taken into account). In practice, binary
dependencies and build dependencies are often tightly coupled and most packages will have their
build dependencies satisfiable.
We can’t cover all the possible changes that you might want to make to a given package in this
section. This would amount to teaching you all the nitty gritty2 details of Debian packaging. How-
ever, we will cover the three common use cases presented earlier and we will explain some of the
unavoidable parts (like maintaining the changelog file).
The first thing to do is to change the package version number so that the rebuilt packages can be
distinguished from the original packages provided by Debian or Kali. To achieve this, we usually
add a suffix identifying the entity (person or company) applying the changes. Since buxy is my
IRC nickname, I will use it as a suffix. Such a change is best effected with the dch command (Debian
CHangelog) from the devscripts package, with a command such as dch --local buxy. This invokes
a text editor (sensible-editor, which runs the editor assigned in the VISUAL or EDITOR environ-
ment variables, or /usr/bin/editor otherwise), which allows you to document the differences
introduced by this rebuild. This editor shows that dch really did change the debian/changelog
file:
$ head -n 1 debian/changelog
libfreefare (0.4.0-2) unstable; urgency=low
$ dch --local buxy
[...]
$ head debian/changelog
libfreefare (0.4.0-2buxy1) UNRELEASED; urgency=medium
2
https://www.debian.org/doc/manuals/maint-guide/
If you make such changes regularly, you might want to set the DEBFULLNAME and DEBEMAIL
environment variables to your full name and your email, respectively. Their values will be used
by many packaging tools, including dch, which will embed them on the trailer line of each changelog
entry (starting with “ -- ”).
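For example (the name and address below are placeholders), you could add the following to your
shell startup file:
$ export DEBFULLNAME="Your Name"
$ export DEBEMAIL="you@example.com"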
Applying a Patch
In one of our use cases, we have downloaded the pyrit source package and we want to apply a patch
that we found in the upstream Git repository. This is a common operation and it should always be
simple. Unfortunately, patches can be handled in different ways depending on the source package
format and on the Git packaging workflow in use (when Git is used to maintain the package).
With an Unpacked Source Package You have run apt source pyrit and you have a pyrit-0.4.0
directory. You can apply your patch directly with patch -p1 < patch-file.
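As a minimal sketch (the patch file name is hypothetical), applying an upstream fix from inside the
unpacked directory could look like this:
$ cd pyrit-0.4.0
$ patch -p1 < ../fix-for-scapy-2.3.patch
patching file cpyrit/pckttools.py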
At this point, you have manually patched the source code and you can already build binary pack-
ages of your modified version (see section 9.1.4, “Starting the Build” [page 236]). But if you
try to build an updated source package, it will fail, complaining about “unexpected upstream
changes.” This is because pyrit (like a majority of the source packages) uses the source format (see
debian/source/format file) known as 3.0 (quilt), where changes to the upstream code must be
recorded in separate patches stored in debian/patches/ and where the debian/patches/series
file indicates the order in which patches must be applied. You can register your changes in a new
patch by running dpkg-source --commit:
$ dpkg-source --commit
dpkg-source: info: local changes detected, the modified files are:
pyrit-0.4.0/cpyrit/pckttools.py
Enter the desired patch name: fix-for-scapy-2.3.patch
Quilt Patch Series
This patch management convention has been popularized by a tool named quilt
and the “3.0 (quilt)” source package format is thus compatible with this tool—with
the small deviation that it uses debian/patches instead of patches. This tool is
available in the package of the same name and you can find a nice tutorial here:
è https://raphaelhertzog.com/2012/08/08/
how-to-use-quilt-to-manage-patches-in-debian-packages/
If the source package uses the 1.0 or 3.0 (native) source format, then there is no requirement to
register your upstream changes in a patch. They are automatically bundled in the resulting source
package.
With a Git Repository If you have used Git to retrieve the source package, the situation is even
more complicated. There are multiple Git workflows and associated tools, and obviously not all
Debian packages are using the same workflows and tools. The distinction already explained about
source format is still relevant but you must also check whether patches are pre-applied in the
source tree or whether they are only stored in debian/patches (in this case, they are then applied
at build time).
The most popular tool is git-buildpackage. It is what we use to manage all repositories on git-
lab.com/kalilinux/packages. When you use it, patches are not pre-applied in the source tree but
they are stored in debian/patches. You can manually add patches in that directory and list them
in debian/patches/series but users of git-buildpackage tend to use gbp pq to edit the entire
patch series as a single branch that you can extend or rebase to your liking. Check the manual
pages for gbp-pq(1) to learn how to invoke it.
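As a rough sketch of that workflow (see gbp-pq(1) for the exact options):
$ gbp pq import     # turn debian/patches into a patch-queue branch
  ... edit files and commit your fix on that branch ...
$ gbp pq export     # regenerate debian/patches from the branch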
git-dpm (with associated command of the same name) is another git packaging tool that you can
find in use. It records metadata in debian/.git-dpm and keeps patches applied in the source tree
by merging a constantly-rebased branch that it builds out of the content of debian/patches.
You usually have to tweak build options when you want to enable an optional feature or behavior
that is not activated in the official package, or when you want to customize parameters that are
set at build time through a ./configure option or through variables set in the build environment.
In those cases, the changes are usually limited to debian/rules, which drives the steps in the
package build process. In the simplest cases, the lines concerning the initial configuration
(./configure …) or the actual build ($(MAKE) … or make …) are easy to spot. If these commands are not
invoked explicitly, they are probably run behind the scenes by the dh sequencer, in which case you
can supply extra options through the corresponding override targets (such as
override_dh_auto_configure) in debian/rules.
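As a minimal sketch, assuming the package uses the dh sequencer and that the software supports an
--enable-debug configure flag (both are assumptions to check against the actual package),
debian/rules could pass the extra option like this (recipe lines must be indented with tabs):
#!/usr/bin/make -f

%:
	dh $@

# Pass an extra flag to ./configure (the flag name is only an example)
override_dh_auto_configure:
	dh_auto_configure -- --enable-debug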
Let's now look at another example, this time about packaging a new upstream version. Say
you are a SET power user and you noticed a new upstream release (7.4.5) that is not yet available
in Kali (which only has version 7.4.4). You want to build an updated package and try it out. This is a
minor version bump, so you don't expect the update to require any change at the packaging
level.
To update the source package, you extract the new source tarball next to the current source pack-
age and you copy the debian directory from the current source package to the new one. Then
you bump the version in debian/changelog.
$ apt source set
Reading package lists... Done
NOTICE: ’set’ packaging is maintained in the ’Git’ version control system at:
https://gitlab.com/kalilinux/packages/set.git
Please use:
git clone https://gitlab.com/kalilinux/packages/set.git
to retrieve the latest (possibly unreleased) updates to the package.
Need to get 42.3 MB of source archives.
[...]
dpkg-source: warning: failed to verify signature on ./set_7.4.4-0kali1.dsc
dpkg-source: info: extracting set in set-7.4.4
dpkg-source: info: unpacking set_7.4.4.orig.tar.gz
dpkg-source: info: unpacking set_7.4.4-0kali1.debian.tar.xz
dpkg-source: info: applying edit-config-file
dpkg-source: info: applying fix-path-interpreter.patch
$ wget https://github.com/trustedsec/social-engineer-toolkit/archive/7.4.5.tar.gz -O set_7.4.5.orig.tar.gz
[...]
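The remaining steps are sketched below, assuming that the GitHub tarball unpacks into a
social-engineer-toolkit-7.4.5 directory (the version suffix used with dch is also just an example):
$ tar xf set_7.4.5.orig.tar.gz
$ cp -a set-7.4.4/debian social-engineer-toolkit-7.4.5/
$ cd social-engineer-toolkit-7.4.5
$ dch -v 7.4.5-0kali1~test1 "New upstream release (local test build)"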
When all the needed changes have been applied to the sources, you can start generating the actual
binary package or .deb file. The whole process is managed by the dpkg-buildpackage command
and it looks like this:
$ dpkg-buildpackage -us -uc -b
dpkg-buildpackage: source package libfreefare
dpkg-buildpackage: source version 0.4.0-2buxy1
dpkg-buildpackage: source distribution UNRELEASED
dpkg-buildpackage: source changed by Raphael Hertzog <buxy@kali.org>
dpkg-buildpackage: host architecture amd64
[...]
dh_builddeb
dpkg-deb: building package ’libfreefare0-dbgsym’ in ’../libfreefare0-dbgsym_0.4.0-2buxy1_amd64.deb’.
dpkg-deb: building package ’libfreefare0’ in ’../libfreefare0_0.4.0-2buxy1_amd64.deb’.
dpkg-deb: building package ’libfreefare-dev’ in ’../libfreefare-dev_0.4.0-2buxy1_amd64.deb’.
dpkg-deb: building package ’libfreefare-bin-dbgsym’ in ’../libfreefare-bin-dbgsym_0.4.0-2buxy1_amd64.deb’.
dpkg-deb: building package ’libfreefare-bin’ in ’../libfreefare-bin_0.4.0-2buxy1_amd64.deb’.
dpkg-deb: building package ’libfreefare-doc’ in ’../libfreefare-doc_0.4.0-2buxy1_all.deb’.
dpkg-genchanges -b >../libfreefare_0.4.0-2buxy1_amd64.changes
dpkg-genchanges: binary-only upload (no source code included)
dpkg-source --after-build libfreefare-0.4.0
dpkg-buildpackage: binary-only upload (no source included)
The -us -uc options disable signatures on some of the generated files (.dsc, .changes) because this
operation will fail if you do not have a GnuPG key associated with the identity you have put in the
changelog file. The -b option asks for a “binary-only build.” In this case, the source package (.dsc)
will not be created, only the binary (.deb) packages will. Use this option to avoid failures during
the source package build: if you haven’t properly recorded your changes in the patch management
system, it might complain and interrupt the build process.
As suggested by dpkg-deb’s messages, the generated binary packages are now available in the
parent directory (the one that hosts the directory of the source package). You can install them
with dpkg -i or apt install.
$ sudo apt install ../libfreefare0_0.4.0-2buxy1_amd64.deb \
../libfreefare-bin_0.4.0-2buxy1_amd64.deb
Reading package lists... Done
We prefer apt install over dpkg -i as it will deal with missing dependencies gracefully. But
not so long ago, you had to use dpkg as apt was not able to deal with .deb files outside of any
repository.
dpkg-buildpackage Wrappers
More often than not, Debian developers use a higher-level program such as debuild;
this runs dpkg-buildpackage as usual, but it also adds an invocation of a program
(lintian) that runs many checks to validate the generated package against the Debian
policy3 . This script also cleans up the environment so that local environment
variables do not pollute the package build. The debuild command is one of the tools
in the devscripts suite, which share some consistency and configuration to make the
maintainers' task easier.
9.2. Recompiling the Linux Kernel

The kernels provided by Kali include the largest possible number of features, as well as the maximum
number of drivers, in order to cover the broadest spectrum of existing hardware configurations.
This is why some users prefer to recompile the kernel in order to include only what they
specifically need. There are two reasons for this choice. First, it is a way to optimize memory
consumption since all kernel code, even if it is never used, occupies physical memory. Because
the statically compiled portions of the kernel are never moved to swap space, an overall decrease
in system performance will result from having drivers and features built in that are never used.
Second, reducing the number of drivers and kernel features reduces the risk of security problems
since only a fraction of the available kernel code is being run.
3
https://www.debian.org/doc/debian-policy/
If you choose to compile your own kernel, you must accept the conse-
quences: Kali cannot ensure security updates for your custom kernel. By
keeping the kernel provided by Kali, you benefit from updates prepared by
the Debian Project.
Recompilation of the kernel is also necessary if you want to use certain features that are only
available as patches (and not included in the standard kernel version).
The Debian Kernel Handbook
The Debian kernel team maintains the Debian Kernel Handbook (also available in the
debian-kernel-handbook package) with comprehensive documentation about most
kernel-related tasks and about how official Debian kernel packages are handled. This
is the first place you should look into if you need more information than what is
provided in this section.
è https://kernel-team.pages.debian.net/kernel-handbook/
Unsurprisingly, Debian and Kali manage the kernel in the form of a package, which is not how ker-
nels have traditionally been compiled and installed. Since the kernel remains under the control of
the packaging system, it can then be removed cleanly, or deployed on several machines. Further-
more, the scripts associated with these packages automate the interaction with the bootloader
and the initrd generator.
The upstream Linux sources contain everything needed to build a Debian package of the kernel
but you still need to install the build-essential package to ensure that you have the tools required to
build a Debian package. Furthermore, the configuration step for the kernel requires the libncurses5-
dev package. Finally, the fakeroot package will enable creation of the Debian package without need-
ing administrative privileges.
# apt install build-essential libncurses5-dev fakeroot
Since the Linux kernel sources are available as a package, you can retrieve them by installing the
linux-source-version package. The apt-cache search ^linux-source command should list the
latest kernel version packaged by Kali. Note that the source code contained in these packages is
installed as a compressed archive in /usr/src; extract it into another directory (such as ~/kernel)
rather than directly in /usr/src.
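As a sketch, using the 4.9 kernel version that appears in the examples below (adjust the version to
whatever apt-cache search reports; the archive name follows Debian's usual layout):
# apt install linux-source-4.9
$ mkdir -p ~/kernel && cd ~/kernel
$ tar -xaf /usr/src/linux-source-4.9.tar.xz
$ cd linux-source-4.9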
The next step consists of configuring the kernel according to your needs. The exact procedure
depends on the goals.
If you only intend to make limited changes, a convenient starting point is the configuration of the
kernel provided by Kali, which you can copy into your kernel source tree:
$ cp /boot/config-4.9.0-kali1-amd64 ~/kernel/linux-source-4.9/.config
make menuconfig compiles and launches a text-mode kernel configuration interface (this is
where the libncurses5-dev package is required), which allows navigating the many available ker-
nel options in a hierarchical structure. Pressing the Space key changes the value of the selected
option, and Enter validates the button selected at the bottom of the screen; Select returns to the
selected sub-menu; Exit closes the current screen and moves back up in the hierarchy; Help will
display more detailed information on the role of the selected option. The arrow keys allow mov-
ing within the list of options and buttons. To exit the configuration program, choose Exit from
the main menu. The program then offers to save the changes that you have made; accept if you
are satisfied with your choices.
Other interfaces have similar features but they work within more modern graphical interfaces,
such as make xconfig, which uses a Qt graphical interface, and make gconfig, which uses GTK+.
The former requires libqt4-dev, while the latter depends on libglade2-dev and libgtk2.0-dev.
Dealing with Outdated .config Files
When you provide a .config file that has been generated with another (usually older)
kernel version, you will have to update it. You can do so with make oldconfig, which
will interactively ask you the questions corresponding to the new configuration options.
If you want to use the default answer to all those questions, you can use make
olddefconfig. With make oldnoconfig, it will assume a negative answer to all questions.
Clean Up Before Rebuilding
If you have already compiled a kernel in the directory and wish to rebuild everything
from scratch (for example because you substantially changed the kernel configuration),
you will have to run make clean to remove the compiled files. make distclean
removes even more generated files, including your .config file, so make sure to back
it up first.
Once the kernel configuration is ready, a simple make deb-pkg will generate up to five Debian
packages in standard .deb format: linux-image-version, which contains the kernel image and the
associated modules; linux-headers-version, which contains the header files required to build ex-
ternal modules; linux-firmware-image-version, which contains the firmware files needed by some
drivers (this package might be missing when you build from the kernel sources provided by De-
bian or Kali); linux-image-version-dbg, which contains the debugging symbols for the kernel image
and its modules; and linux-libc-dev, which contains headers relevant to some user-space libraries
like GNU’s C library (glibc).
The version is defined by the concatenation of the upstream version (as defined by the variables
VERSION, PATCHLEVEL, SUBLEVEL, and EXTRAVERSION in the Makefile), of the LOCALVERSION
configuration parameter, and of the LOCALVERSION environment variable. The package
version reuses the same version string with an appended revision that is regularly incremented
(and stored in .version), except if you override it with the KDEB_PKGVERSION environment
variable.
$ make deb-pkg LOCALVERSION=-custom KDEB_PKGVERSION=$(make kernelversion)-1
[...]
$ ls ../*.deb
../linux-headers-4.9.0-kali1-custom_4.9.2-1_amd64.deb
../linux-image-4.9.0-kali1-custom_4.9.2-1_amd64.deb
../linux-image-4.9.0-kali1-custom-dbg_4.9.2-1_amd64.deb
../linux-libc-dev_4.9.2-1_amd64.deb
To actually use the built kernel, the only step left is to install the required packages with dpkg -i
file.deb. The “linux-image” package is required; you only have to install the “linux-headers”
package if you have some external kernel modules to build, which is the case if you have some
“*-dkms” packages installed (check with dpkg -l "*-dkms" | grep ^ii). The other packages
are generally not needed (unless you know why you need them)!
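For example, to install the image package produced above (file name taken from the previous
listing):
# dpkg -i ../linux-image-4.9.0-kali1-custom_4.9.2-1_amd64.deb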
9.3. Building Custom Kali Live ISO Images

Kali Linux has a ton of functionality and flexibility right out of the box. Once Kali is installed, you
can perform all sorts of amazing feats with a little guidance, creativity, patience, and practice. One
of the most powerful customizations, however, is building your own Kali live ISO image with
live-build.
The first step is to install the packages needed and to retrieve the Git repository with the Kali
live-build configuration:
# apt install curl git live-build
[...]
# git clone https://gitlab.com/kalilinux/build-scripts/live-build-config.git
[...]
# cd live-build-config
# ls
auto bin build_all.sh build.sh kali-config README.md simple-cdd
At this point, you can already create an updated (but unmodified) Kali live ISO image just by run-
ning ./build.sh --verbose. The build will take a long time to complete as it will download all
the packages to include. When finished, you will find the freshly created ISO image in the new
images directory.
The build.sh live-build wrapper that we provide is responsible for setting up the config directory
that live-build expects to find. It can put in place different configurations depending on
its --variant option.
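For instance, assuming a kali-config/variant-gnome directory exists in the configuration tree, the
following would build the GNOME variant:
# ./build.sh --verbose --variant gnome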
This concept of variant allows for some high-level pre-defined customizations but if you take the
time to read through the Debian Live System Manual8 , you will discover many other ways to cus-
tomize the images, just by changing the content of the appropriate sub-directory of kali-config.
The following sections will provide some examples.
Metapackages are empty packages whose sole purpose is to have many dependencies on other
packages. They make it easier to install sets of packages that you often want to install together.
The kali-meta source package builds all the metapackages provided by Kali Linux.
8
https://live-team.pages.debian.net/live-manual/html/live-manual.en.html
You can leverage these metapackages when you create custom package lists for live-build. The
full list of available metapackages and the tools they include can be found at
https://tools.kali.org/kali-metapackages.
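As a hypothetical sketch (both the variant directory and the metapackage names are examples to
adapt), a custom package list could look like this:
# kali-config/variant-custom/package-lists/kali.list.chroot
kali-linux-default
kali-tools-wireless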
Debconf Preseeding of Installed Packages
You can provide Debconf preseed files (see section 4.3.2, “Creating a Preseed File”
[page 97] for explanations) as preseed/*.cfg files. They will be used to configure
the packages installed in the live file system.
live-build offers hooks that can be executed at different steps of the build process. Chroot
hooks are executable scripts that you install as hooks/live/*.chroot files in your config tree
and that are executed within the chroot. While chroot is the command that lets you temporarily
change the operating system's root directory to a directory of your choice, the term is also used
by extension to designate a directory hosting a full (alternate) file system tree. This is the case
here with live-build, where the chroot directory is the directory where the live file system is
being prepared. Since applications started in a chroot can't see outside of that directory, the
same goes for the chroot hooks: you can only use and modify what is available in that chroot
environment. We rely on those hooks to perform multiple Kali-specific customizations (see
kali-config/common/hooks/live/kali-hacks.chroot).
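A minimal chroot hook could look like the following sketch (the file name and the tweak it performs
are purely illustrative):
#!/bin/sh
# kali-config/common/hooks/live/99-custom-tweaks.chroot (hypothetical name)
# Runs inside the chroot while the live file system is being built.
set -e
echo "Built from a custom live-build configuration" > /etc/motd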
Binary hooks (hooks/live/*.binary) are executed in the context of the build process (and not
chrooted anywhere) at the end of the process. You can modify the content of the ISO image built
but not of the live file system since at this point, it has already been generated. We use this feature
in Kali to make some changes to the default isolinux configuration generated by live-build. For
example, see kali-config/common/hooks/live/persistence.binary where we add the boot
menu entries enabling persistence.
Another very common customization is to add files either in the live file system or in the ISO
image.
You can add files to the live file system by putting them at their expected location below the
includes.chroot config directory. For example, we provide
kali-config/common/includes.chroot/usr/lib/live/config/0031-kali-password, which ends up as
/usr/lib/live/config/0031-kali-password in the live file system.
Live-Boot Hooks
Scripts installed as /lib/live/config/XXXX-name are executed by the init script
of the live-boot package. They reconfigure many aspects of the system to be suited
for a live system. You can add scripts of your own to customize your live system at
run-time; this mechanism is notably used to implement custom boot parameters.
You can add files to the ISO image by putting them at their expected location below the
includes.binary config directory. For example, we provide
kali-config/common/includes.binary/isolinux/splash.png to override the background image used by
the Isolinux bootloader (which is stored in /isolinux/splash.png in the filesystem of the ISO image).
9.4. Adding Persistence to the Live ISO with a USB Key

Next, we will discuss the steps required to add persistence to a Kali USB key. The nature of a live
system is to be ephemeral. All data stored on the live system and all the changes made are lost
when you reboot. To remedy this, you can use a feature of live-boot called persistence, which is
enabled when the boot parameters include the persistence keyword.
Since modifying the boot menu is a non-trivial task, the Kali live image includes two menu entries
by default that enable persistence: Live USB Persistence and Live USB Encrypted Persistence, as
shown in Figure 9.1, “Persistence Menu Entries” [page 246].
When this feature is enabled, live-boot will scan all partitions looking for file systems labeled
persistence (a label which can be overridden with the persistence-label=value boot parameter) and
it will then set up persistence for the directories listed in the persistence.conf file found in
that partition (one directory per line). The special value “/ union” enables full persistence of all
directories with a union mount, an overlay that stores only the changes when compared to the
underlying file system. The data of the persisted directories are stored in the file system that
contains the corresponding persistence.conf file.
In this section, we assume that you have prepared a Kali live USB key by following the instructions
at section 2.1.4, “Copying the Image on a DVD-ROM or USB Key” [page 19] and that you have used
a USB key big enough to hold the ISO image (roughly 4 GB) and the data of the directories that you
want to persist. We also assume that the USB key is recognized by Linux as /dev/sdb and that it
only contains the two partitions that are part of the default ISO image (/dev/sdb1 and /dev/sdb2).
Be very careful when performing this procedure. You can easily destroy important data if you
re-partition the wrong drive.
To add a new partition, you must know the size of the image that you copied so that you can make
the new partition start after the live image. Then use parted to actually create the partition. The
commands below analyze the ISO image named kali-linux-2020.3-live-amd64.iso, which is
assumed to be present on the USB key as well:
# parted /dev/sdb print
Model: SanDisk Cruzer Glide (scsi)
Disk /dev/sdb: 16.0GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
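The third partition is then created right after the end of the live image. As a sketch (the 4000MB
start value is only an example and must lie past the end of your ISO image on the key):
# parted /dev/sdb mkpart primary 4000MB 100%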
With the new /dev/sdb3 partition in place, format it with an ext4 filesystem labelled “persistence”
with the help of the mkfs.ext4 command (and its -L option to set the label). The partition is then
mounted on the /mnt directory and you add the required persistence.conf configuration file, with
“/ union” as its content to enable full persistence.
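Assuming the new partition is /dev/sdb3 as above, this amounts to the following (the same pattern
appears again in the multi-store example at the end of this section):
# mkfs.ext4 -L persistence /dev/sdb3
# mount /dev/sdb3 /mnt
# echo "/ union" > /mnt/persistence.conf
# umount /mnt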
The USB key is now ready and can be booted with the “Live USB Persistence” boot menu entry.
live-boot is also able to handle persistence file systems on encrypted partitions. You can thus
protect the data of your persistent directories by creating a LUKS encrypted partition holding the
persistence data.
The initial steps are the same up to the creation of the partition but instead of formatting it with
an ext4 file system, use cryptsetup to initialize it as a LUKS container. Then open that container
and set up the ext4 file system in the same way as in the non-encrypted setup, but instead of using
the /dev/sdb3 partition, use the virtual partition created by cryptsetup. This virtual partition
represents the decrypted content of the encrypted partition and is available in /dev/mapper
under the name that you assigned to it. In the example below, we will use the name kali_persistence.
Again, ensure that you are using the correct drive and partition.
# cryptsetup --verbose --verify-passphrase luksFormat /dev/sdb3
WARNING!
========
This will overwrite data on /dev/sdb3 irrevocably.
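The remaining steps mirror the unencrypted setup but operate on the virtual partition exposed by
cryptsetup; as a sketch (patterned on the multi-store example later in this section):
# cryptsetup luksOpen /dev/sdb3 kali_persistence
# mkfs.ext4 -L persistence /dev/mapper/kali_persistence
# mount /dev/mapper/kali_persistence /mnt
# echo "/ union" > /mnt/persistence.conf
# umount /mnt
# cryptsetup luksClose /dev/mapper/kali_persistence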
If you have multiple use-cases for your Kali live system, you can use multiple filesystems with dif-
ferent labels and indicate on the boot command line which (set of) filesystems should be used for
the persistence feature: this is done with the help of the persistence-label=label boot parameter.
Let’s assume that you are a professional pen-tester. When you work for a customer, you use an
encrypted persistence partition to protect the confidentiality of your data in case the USB key is
stolen or compromised. At the same time, you want to be able to showcase Kali and some pro-
motional material stored in an unencrypted partition of the same USB key. Since you don’t want
to manually edit the boot parameters on each boot, you want to build a custom live image with
dedicated boot menu entries.
The first step is to build the custom live ISO (following section 9.3, “Building Custom Kali Live
ISO Images” [page 241] and in particular section 9.3.4, “Using Hooks to Tweak the Contents of
the Image” [page 245]). The main customization is to modify kali-config/common/hooks/live/
persistence-menu.binary to make it look like this (note the persistence-label parameters):
#!/bin/sh
if [ ! -d isolinux ]; then
cd binary
fi

# append the new menu entries to the isolinux menu configuration
# (the target file name is an assumption; adapt it to your configuration)
cat >>isolinux/live.cfg <<END
label live-demo
	menu label ^Live USB with Demo Data
	linux /live/vmlinuz
	initrd /live/initrd.img
	append boot=live username=root hostname=kali persistence-label=demo persistence

label live-work
	menu label ^Live USB with Work Data
	linux /live/vmlinuz
	initrd /live/initrd.img
	append boot=live username=root hostname=kali persistence-label=work persistence-encryption=luks persistence
END
Next, we will build our custom ISO and copy it to the USB key. Then we will create and initialize the
two partitions and file systems that will be used for persistence. The first partition is unencrypted
(labeled “demo”), and the second is encrypted (labeled “work”). Assuming /dev/sdb is our USB
key and the size of our custom ISO image is 3000 MB, it would look like this:
# parted /dev/sdb mkpart primary 3000MB 55%
# parted /dev/sdb mkpart primary 55% 100%
# mkfs.ext4 -L demo /dev/sdb3
[...]
# mount /dev/sdb3 /mnt
# echo "/ union" >/mnt/persistence.conf
# umount /mnt
# cryptsetup --verbose --verify-passphrase luksFormat /dev/sdb4
[...]
# cryptsetup luksOpen /dev/sdb4 kali_persistence
[...]
# mkfs.ext4 -L work /dev/mapper/kali_persistence
[...]
# mount /dev/mapper/kali_persistence /mnt
# echo "/ union" >/mnt/persistence.conf
# umount /mnt
# cryptsetup luksClose /dev/mapper/kali_persistence
And that’s all. You can now boot the USB key and select from the new boot menu entries as needed!
Adding a Nuke Password for Extra Safety
Kali provides a cryptsetup-nuke-password package that modifies cryptsetup's early
boot scripts to implement a new feature: you can set a nuke password which, when
used, will destroy all keys used to manage the encrypted partition.
# dpkg-reconfigure cryptsetup-nuke-password
More information about this feature can be found in the following tutorial:
è https://www.kali.org/tutorials/nuke-kali-linux-luks/
9.5. Summary
In this chapter, we learned about modifying Kali source packages, which are the basic building
blocks of all applications shipped in Kali. We also discovered how to customize and install the Kali
kernel. Then we covered the live-build environment and showed how to build a customized
Kali Linux ISO image. Finally, we demonstrated how to add both encrypted and unencrypted
persistence to Kali live USB keys.
Modifying Kali packages is usually a task for Kali contributors and developers, but you might have
specific needs not fulfilled by the official packages and knowing how to build a modified package
can be very valuable, especially if you want to share your changes, deploy them internally, or
cleanly roll the software back to a previous state.
When you need to modify a piece of software, it might be tempting to download the source, make
the changes, and use the modified software. However, if your application requires a system-wide
setup (e.g. with a make install step), then it will pollute your file system with files unknown to
dpkg and will soon create problems that cannot be caught by package dependencies. In addition,
this type of software modification is more tedious to share.
When creating a modified package, the general process is always the same: grab the source package,
extract it, make your changes, and then build the package. For each step, there are often
multiple tools that can handle the task.
To start rebuilding a Kali package, first download the source package, which is composed of a
*.dsc (Debian Source Control) file and of additional files referenced from that control file.
Additionally, you can use dget (from the devscripts package) to download a .dsc file directly
together with its accompanying files. For Kali-specific packages whose sources are hosted in a
Git repository on gitlab.com/kalilinux/packages9 , you can retrieve the sources with git clone
https://gitlab.com/kalilinux/packages/source-package.git.
After downloading sources, install the packages listed in the source package’s build dependencies
with sudo apt build-dep ./. This command must be run from the package’s source directory.
Updates to a source package consist of a combination of some of the following steps:
• Changing the version number with dch --local version-identifier to distinguish your
package from the original (this first step is required); dch can also be used to modify other
package details.
• Applying a patch with patch -p1 < patch-file or modifying quilt's patch series.
• Tweaking build options, usually found in the package's debian/rules file or in other files of
the debian/ directory.
After modifying a source package, you can build the binary package with dpkg-buildpackage
-us -uc -b from the source directory, which will generate an unsigned binary package. The pack-
age can then be installed with dpkg -i package-name_version_arch.deb.
As an advanced user, you may wish to recompile the Kali kernel. You may want to slim down the
standard Kali kernel, which is loaded with many features and drivers, add non-standard drivers
or features, or apply kernel patches. Beware though: a misconfigured kernel may destabilize your
system and you must be prepared to accept that Kali cannot ensure security updates for your
custom kernel.
For most kernel modifications, you will need to install a few packages with apt install
build-essential libncurses5-dev fakeroot.
The command apt-cache search ^linux-source should list the latest kernel version packaged
by Kali, and apt install linux-source-version-number installs a compressed archive of the
kernel source into /usr/src.
The source files should be extracted with tar -xaf into a directory other than /usr/src (such as
~/kernel).
When the time comes to configure your kernel, keep these points in mind:
• Unless you intend to build a minimal kernel, the easiest starting point is the configuration of
the kernel provided by Kali, copied from the relevant /boot/config-* file to .config in your
kernel source tree.
• make menuconfig launches a text-mode configuration interface; make xconfig and make
gconfig offer Qt and GTK+ graphical equivalents.
• When reusing a .config file generated by an older kernel version, update it with make
oldconfig (interactive), make olddefconfig (default answers), or make oldnoconfig (negative
answers).
• Run make clean before rebuilding a kernel in a tree where you have already compiled one;
make distclean also removes your .config file.
• Once the configuration is ready, make deb-pkg generates up to five Debian packages; install
the resulting linux-image package with dpkg -i.
9
https://gitlab.com/kalilinux/packages
9.5.3. Summary Tips for Building Custom Kali Live ISO Images
Official Kali ISO images are built with live-build10 , which is a set of scripts that allows for the
complete automation and customization of all facets of ISO image creation.
Your Kali system must be completely up-to-date before using live-build.
The Kali live-build configuration can be retrieved from Kali’s Git repositories
with two commands: apt install curl git live-build followed by git clone
https://gitlab.com/kalilinux/build-scripts/live-build-config.git
To generate an updated but unmodified Kali live ISO image, simply run ./build.sh --verbose.
The build will take a long time to complete as it will download all the packages to include. When
finished, you will find the new ISO image in the images directory. If you add --variant variant
to the command line, it will build the given variant of the Kali ISO image. The various variants are
defined by their configuration directories kali-config/variant-*. The main image is the Xfce
variant.
There are several ways to customize your ISO by modifying live-build’s configuration directory:
• Packages can be added to (or removed from) a live ISO by modifying the
package-lists/*.list.chroot files.
• Custom packages can be included in the live image by placing the .deb files in a
packages.chroot directory. Their installation can be preseeded with preseed/*.cfg files.
10
https://live-team.pages.debian.net/live-manual/html/live-manual/index.en.html