
1. Whetting Your Appetite


If you do much work on computers, eventually you find that there’s some
task you’d like to automate. For example, you may wish to perform a search-
and-replace over a large number of text files, or rename and rearrange a
bunch of photo files in a complicated way. Perhaps you’d like to write a small
custom database, or a specialized GUI application, or a simple game.

If you’re a professional software developer, you may have to work with
several C/C++/Java libraries but find the usual write/compile/test/re-compile
cycle is too slow. Perhaps you’re writing a test suite for such a library and
find writing the testing code a tedious task. Or maybe you’ve written a
program that could use an extension language, and you don’t want to design
and implement a whole new language for your application.

Python is just the language for you.

You could write a Unix shell script or Windows batch files for some of these
tasks, but shell scripts are best at moving around files and changing text
data, not well-suited for GUI applications or games. You could write a
C/C++/Java program, but it can take a lot of development time to get even a first-
draft program. Python is simpler to use, available on Windows, macOS, and
Unix operating systems, and will help you get the job done more quickly.

Python is simple to use, but it is a real programming language, offering much
more structure and support for large programs than shell scripts or batch
files can offer. On the other hand, Python also offers much more error
checking than C, and, being a very-high-level language, it has high-level data
types built in, such as flexible arrays and dictionaries. Because of its more
general data types Python is applicable to a much larger problem domain
than Awk or even Perl, yet many things are at least as easy in Python as in
those languages.

Python allows you to split your program into modules that can be reused in
other Python programs. It comes with a large collection of standard modules
that you can use as the basis of your programs — or as examples to start
learning to program in Python. Some of these modules provide things like file
I/O, system calls, sockets, and even interfaces to graphical user interface
toolkits like Tk.

Python is an interpreted language, which can save you considerable time
during program development because no compilation and linking is
necessary. The interpreter can be used interactively, which makes it easy to
experiment with features of the language, to write throw-away programs, or
to test functions during bottom-up program development. It is also a handy
desk calculator.

Python enables programs to be written compactly and readably. Programs
written in Python are typically much shorter than equivalent C, C++, or Java
programs, for several reasons:

 the high-level data types allow you to express complex operations in a
single statement;
 statement grouping is done by indentation instead of beginning and
ending brackets;
 no variable or argument declarations are necessary.

Python is extensible: if you know how to program in C it is easy to add a new
built-in function or module to the interpreter, either to perform critical
operations at maximum speed, or to link Python programs to libraries that
may only be available in binary form (such as a vendor-specific graphics
library). Once you are really hooked, you can link the Python interpreter into
an application written in C and use it as an extension or command language
for that application.

By the way, the language is named after the BBC show “Monty Python’s
Flying Circus” and has nothing to do with reptiles. Making references to
Monty Python skits in documentation is not only allowed, it is encouraged!

Now that you are all excited about Python, you’ll want to examine it in some
more detail. Since the best way to learn a language is to use it, the tutorial
invites you to play with the Python interpreter as you read.

In the next chapter, the mechanics of using the interpreter are explained.
This is rather mundane information, but essential for trying out the examples
shown later.

The rest of the tutorial introduces various features of the Python language
and system through examples, beginning with simple expressions,
statements and data types, through functions and modules, and finally
touching upon advanced concepts like exceptions and user-defined classes.

2. Using the Python Interpreter


2.1. Invoking the Interpreter
The Python interpreter is usually installed as /usr/local/bin/python3.12 on those
machines where it is available; putting /usr/local/bin in your Unix shell’s search path
makes it possible to start it by typing the command:

python3.12

to the shell. [1] Since the choice of the directory where the interpreter lives is an installation
option, other places are possible; check with your local Python guru or system administrator.
(E.g., /usr/local/python is a popular alternative location.)

On Windows machines where you have installed Python from the Microsoft Store,
the python3.12 command will be available. If you have the py.exe launcher installed, you can
use the py command. See Excursus: Setting environment variables for other ways to launch
Python.

Typing an end-of-file character (Control-D on Unix, Control-Z on Windows) at the primary
prompt causes the interpreter to exit with a zero exit status. If that doesn’t work, you can exit the
interpreter by typing the following command: quit().

The interpreter’s line-editing features include interactive editing, history substitution and code
completion on systems that support the GNU Readline library. Perhaps the quickest check to see
whether command line editing is supported is typing Control-P to the first Python prompt you
get. If it beeps, you have command line editing; see Appendix Interactive Input Editing and
History Substitution for an introduction to the keys. If nothing appears to happen, or if ^P is
echoed, command line editing isn’t available; you’ll only be able to use backspace to remove
characters from the current line.

The interpreter operates somewhat like the Unix shell: when called with standard input
connected to a tty device, it reads and executes commands interactively; when called with a file
name argument or with a file as standard input, it reads and executes a script from that file.

A second way of starting the interpreter is python -c command [arg] ..., which
executes the statement(s) in command, analogous to the shell’s -c option. Since Python
statements often contain spaces or other characters that are special to the shell, it is usually
advised to quote command in its entirety.

Some Python modules are also useful as scripts. These can be invoked using python -
m module [arg] ..., which executes the source file for module as if you had spelled out its
full name on the command line.
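
As a quick, hypothetical illustration of both forms (the exact interpreter name depends on how Python was installed on your system):

$ python3.12 -c "import sys; print(sys.version_info[:2])"
(3, 12)
$ python3.12 -m http.server 8000   # run the standard library's http.server module as a script
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...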

When a script file is used, it is sometimes useful to be able to run the script and enter interactive
mode afterwards. This can be done by passing -i before the script.
All command line options are described in Command line and environment.

2.1.1. Argument Passing


When known to the interpreter, the script name and additional arguments thereafter are turned
into a list of strings and assigned to the argv variable in the sys module. You can access this
list by executing import sys. The length of the list is at least one; when no script and no
arguments are given, sys.argv[0] is an empty string. When the script name is given
as '-' (meaning standard input), sys.argv[0] is set to '-'. When -c command is
used, sys.argv[0] is set to '-c'. When -m module is used, sys.argv[0] is set to the full
name of the located module. Options found after -c command or -m module are not consumed
by the Python interpreter’s option processing but left in sys.argv for the command or module
to handle.
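
For example, a tiny script (hypothetical file name args.py) that does nothing but echo its argument list:

# args.py -- print the argument list the interpreter received
import sys
print(sys.argv)

Running it might look like this:

$ python3.12 args.py one two three
['args.py', 'one', 'two', 'three']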

2.1.2. Interactive Mode


When commands are read from a tty, the interpreter is said to be in interactive mode. In this
mode it prompts for the next command with the primary prompt, usually three greater-than signs
(>>>); for continuation lines it prompts with the secondary prompt, by default three dots (...).
The interpreter prints a welcome message stating its version number and a copyright notice
before printing the first prompt:

$ python3.12
Python 3.12 (default, April 4 2022, 09:25:04)
[GCC 10.2.0] on linux
Type "help", "copyright", "credits" or "license" for more
information.
>>>

Continuation lines are needed when entering a multi-line construct. As an example, take a look at
this if statement:

>>>

>>> the_world_is_flat = True
>>> if the_world_is_flat:
...     print("Be careful not to fall off!")
...
Be careful not to fall off!

For more on interactive mode, see Interactive Mode.


2.2. The Interpreter and Its Environment
2.2.1. Source Code Encoding
By default, Python source files are treated as encoded in UTF-8. In that encoding, characters of
most languages in the world can be used simultaneously in string literals, identifiers and
comments — although the standard library only uses ASCII characters for identifiers, a
convention that any portable code should follow. To display all these characters properly, your
editor must recognize that the file is UTF-8, and it must use a font that supports all the characters
in the file.

To declare an encoding other than the default one, a special comment line should be added as
the first line of the file. The syntax is as follows:

# -*- coding: encoding -*-

where encoding is one of the valid codecs supported by Python.

For example, to declare that Windows-1252 encoding is to be used, the first line of your source
code file should be:

# -*- coding: cp1252 -*-

One exception to the first line rule is when the source code starts with a UNIX “shebang” line. In
this case, the encoding declaration should be added as the second line of the file. For example:

#!/usr/bin/env python3
# -*- coding: cp1252 -*-

Footnotes

[1]

On Unix, the Python 3.x interpreter is by default not installed with the executable
named python, so that it does not conflict with a simultaneously installed Python 2.x executable.

3. An Informal Introduction to Python


In the following examples, input and output are distinguished by the
presence or absence of prompts (>>> and …): to repeat the example, you
must type everything after the prompt, when the prompt appears; lines that
do not begin with a prompt are output from the interpreter. Note that a
secondary prompt on a line by itself in an example means you must type a
blank line; this is used to end a multi-line command.

You can toggle the display of prompts and output by clicking on >>> in the
upper-right corner of an example box. If you hide the prompts and output for
an example, then you can easily copy and paste the input lines into your
interpreter.

Many of the examples in this manual, even those entered at the interactive
prompt, include comments. Comments in Python start with the hash
character, #, and extend to the end of the physical line. A comment may
appear at the start of a line or following whitespace or code, but not within a
string literal. A hash character within a string literal is just a hash character.
Since comments are to clarify code and are not interpreted by Python, they
may be omitted when typing in examples.

Some examples:

# this is the first comment
spam = 1  # and this is the second comment
          # ... and now a third!
text = "# This is not a comment because it's inside quotes."

3.1. Using Python as a Calculator


Let’s try some simple Python commands. Start the interpreter and wait for the primary
prompt, >>>. (It shouldn’t take long.)

3.1.1. Numbers
The interpreter acts as a simple calculator: you can type an expression at it and it will write the
value. Expression syntax is straightforward: the operators +, -, * and / can be used to perform
arithmetic; parentheses (()) can be used for grouping. For example:

>>>

>>> 2 + 2
4
>>> 50 - 5*6
20
>>> (50 - 5*6) / 4
5.0
>>> 8 / 5 # division always returns a floating-point number
1.6

The integer numbers (e.g. 2, 4, 20) have type int, the ones with a fractional part
(e.g. 5.0, 1.6) have type float. We will see more about numeric types later in the tutorial.
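
If you are ever unsure which type a value has, the built-in type() function will tell you:

>>> type(2)
<class 'int'>
>>> type(8 / 5)
<class 'float'>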

Division (/) always returns a float. To do floor division and get an integer result you can use
the // operator; to calculate the remainder you can use %:

>>>

>>> 17 / 3  # classic division returns a float
5.666666666666667
>>> 17 // 3  # floor division discards the fractional part
5
>>> 17 % 3 # the % operator returns the remainder of the division
2
>>> 5 * 3 + 2 # floored quotient * divisor + remainder
17

With Python, it is possible to use the ** operator to calculate powers [1]:

>>>

>>> 5 ** 2 # 5 squared
25
>>> 2 ** 7 # 2 to the power of 7
128

The equal sign (=) is used to assign a value to a variable. Afterwards, no result is displayed
before the next interactive prompt:

>>>

>>> width = 20
>>> height = 5 * 9
>>> width * height
900

If a variable is not “defined” (assigned a value), trying to use it will give you an error:
>>>

>>> n  # try to access an undefined variable
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'n' is not defined

There is full support for floating point; operators with mixed type operands convert the integer
operand to floating point:

>>>

>>> 4 * 3.75 - 1
14.0

In interactive mode, the last printed expression is assigned to the variable _. This means that
when you are using Python as a desk calculator, it is somewhat easier to continue calculations,
for example:

>>>

>>> tax = 12.5 / 100
>>> price = 100.50
>>> price * tax
12.5625
>>> price + _
113.0625
>>> round(_, 2)
113.06

This variable should be treated as read-only by the user. Don’t explicitly assign a value to it —
you would create an independent local variable with the same name masking the built-in variable
with its magic behavior.

In addition to int and float, Python supports other types of numbers, such
as Decimal and Fraction. Python also has built-in support for complex numbers, and uses
the j or J suffix to indicate the imaginary part (e.g. 3+5j).
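
A brief, illustrative taste of these types (Decimal and Fraction come from the standard-library decimal and fractions modules):

>>> from decimal import Decimal
>>> from fractions import Fraction
>>> Decimal('0.1') + Decimal('0.2')   # exact decimal arithmetic
Decimal('0.3')
>>> Fraction(1, 3) + Fraction(1, 6)   # exact rational arithmetic
Fraction(1, 2)
>>> (3 + 5j) * 1j                     # complex numbers
(-5+3j)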

3.1.2. Text
Python can manipulate text (represented by type str, so-called “strings”) as well as numbers.
This includes characters “!”, words “rabbit”, names “Paris”, sentences
“Got your back.”, etc. “Yay! :)”. They can be enclosed in single quotes ('...') or
double quotes ("...") with the same result [2].

>>>

>>> 'spam eggs'  # single quotes
'spam eggs'
>>> "Paris rabbit got your back :)! Yay!"  # double quotes
'Paris rabbit got your back :)! Yay!'
>>> '1975'  # digits and numerals enclosed in quotes are also strings
'1975'

To quote a quote, we need to “escape” it, by preceding it with \. Alternatively, we can use the
other type of quotation marks:

>>>

>>> 'doesn\'t'  # use \' to escape the single quote...
"doesn't"
>>> "doesn't" # ...or use double quotes instead
"doesn't"
>>> '"Yes," they said.'
'"Yes," they said.'
>>> "\"Yes,\" they said."
'"Yes," they said.'
>>> '"Isn\'t," they said.'
'"Isn\'t," they said.'

In the Python shell, the string definition and output string can look different.
The print() function produces a more readable output, by omitting the enclosing quotes and
by printing escaped and special characters:

>>>

>>> s = 'First line.\nSecond line.'  # \n means newline
>>> s  # without print(), special characters are included in the string
'First line.\nSecond line.'
>>> print(s)  # with print(), special characters are interpreted, so \n produces a new line
First line.
Second line.

If you don’t want characters prefaced by \ to be interpreted as special characters, you can
use raw strings by adding an r before the first quote:

>>>

>>> print('C:\some\name')  # here \n means newline!
C:\some
ame
>>> print(r'C:\some\name') # note the r before the quote
C:\some\name

There is one subtle aspect to raw strings: a raw string may not end in an odd number
of \ characters; see the FAQ entry for more information and workarounds.
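
A minimal sketch of the limitation and one common workaround (appending the final backslash as an ordinary, non-raw literal):

>>> # r'C:\some\path\'        # would be a SyntaxError: a raw string may not end in a single backslash
>>> r'C:\some\path' '\\'      # workaround: add the trailing backslash as a regular literal
'C:\\some\\path\\'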

String literals can span multiple lines. One way is using triple-
quotes: """...""" or '''...'''. End of lines are automatically included in the string, but
it’s possible to prevent this by adding a \ at the end of the line. The following example:

print("""\
Usage: thingy [OPTIONS]
-h Display this usage message
-H hostname Hostname to connect to
""")

produces the following output (note that the initial newline is not included):

Usage: thingy [OPTIONS]
     -h                        Display this usage message
     -H hostname               Hostname to connect to

Strings can be concatenated (glued together) with the + operator, and repeated with *:

>>>

>>> # 3 times 'un', followed by 'ium'
>>> 3 * 'un' + 'ium'
'unununium'

Two or more string literals (i.e. the ones enclosed between quotes) next to each other are
automatically concatenated.

>>>
>>> 'Py' 'thon'
'Python'

This feature is particularly useful when you want to break long strings:

>>>

>>> text = ('Put several strings within parentheses '
...         'to have them joined together.')
>>> text
'Put several strings within parentheses to have them joined together.'

This only works with two literals though, not with variables or expressions:

>>>

>>> prefix = 'Py'
>>> prefix 'thon'  # can't concatenate a variable and a string literal
  File "<stdin>", line 1
    prefix 'thon'
           ^^^^^^
SyntaxError: invalid syntax
>>> ('un' * 3) 'ium'
  File "<stdin>", line 1
    ('un' * 3) 'ium'
               ^^^^^
SyntaxError: invalid syntax

If you want to concatenate variables or a variable and a literal, use +:

>>>

>>> prefix + 'thon'
'Python'

Strings can be indexed (subscripted), with the first character having index 0. There is no separate
character type; a character is simply a string of size one:

>>>
>>> word = 'Python'
>>> word[0] # character in position 0
'P'
>>> word[5] # character in position 5
'n'

Indices may also be negative numbers, to start counting from the right:

>>>

>>> word[-1]  # last character
'n'
>>> word[-2] # second-last character
'o'
>>> word[-6]
'P'

Note that since -0 is the same as 0, negative indices start from -1.

In addition to indexing, slicing is also supported. While indexing is used to obtain individual
characters, slicing allows you to obtain a substring:

>>>

>>> word[0:2]  # characters from position 0 (included) to 2 (excluded)
'Py'
>>> word[2:5]  # characters from position 2 (included) to 5 (excluded)
'tho'

Slice indices have useful defaults; an omitted first index defaults to zero, an omitted second
index defaults to the size of the string being sliced.

>>>

>>> word[:2]   # characters from the beginning to position 2 (excluded)
'Py'
>>> word[4:]   # characters from position 4 (included) to the end
'on'
>>> word[-2:]  # characters from the second-last (included) to the end
'on'

Note how the start is always included, and the end always excluded. This makes sure
that s[:i] + s[i:] is always equal to s:

>>>

>>> word[:2] + word[2:]
'Python'
>>> word[:4] + word[4:]
'Python'

One way to remember how slices work is to think of the indices as pointing between characters,
with the left edge of the first character numbered 0. Then the right edge of the last character of a
string of n characters has index n, for example:

 +---+---+---+---+---+---+
 | P | y | t | h | o | n |
 +---+---+---+---+---+---+
 0   1   2   3   4   5   6
-6  -5  -4  -3  -2  -1

The first row of numbers gives the position of the indices 0…6 in the string; the second row
gives the corresponding negative indices. The slice from i to j consists of all characters between
the edges labeled i and j, respectively.

For non-negative indices, the length of a slice is the difference of the indices, if both are within
bounds. For example, the length of word[1:3] is 2.

Attempting to use an index that is too large will result in an error:

>>>

>>> word[42]  # the word only has 6 characters
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: string index out of range

However, out of range slice indexes are handled gracefully when used for slicing:

>>>
>>> word[4:42]
'on'
>>> word[42:]
''

Python strings cannot be changed — they are immutable. Therefore, assigning to an indexed
position in the string results in an error:

>>>

>>> word[0] = 'J'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' object does not support item assignment
>>> word[2:] = 'py'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' object does not support item assignment

If you need a different string, you should create a new one:

>>>

>>> 'J' + word[1:]
'Jython'
>>> word[:2] + 'py'
'Pypy'

The built-in function len() returns the length of a string:

>>>

>>> s = 'supercalifragilisticexpialidocious'
>>> len(s)
34
See also
Text Sequence Type — str

Strings are examples of sequence types, and support the common operations supported by
such types.
String Methods

Strings support a large number of methods for basic transformations and searching.
f-strings

String literals that have embedded expressions.


Format String Syntax

Information about string formatting with str.format().


printf-style String Formatting

The old formatting operations invoked when strings are the left operand of the % operator
are described in more detail here.
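
As a small, illustrative taste of what those pages cover, a few widely used methods and both formatting styles (values chosen only for the example):

>>> 'defenestrate'.upper()
'DEFENESTRATE'
>>> 'Paris rabbit'.replace('rabbit', 'python')
'Paris python'
>>> name, amount = 'spam', 4
>>> f'{name} costs {amount} pence'              # f-string: expressions embedded in the literal
'spam costs 4 pence'
>>> '{} costs {} pence'.format(name, amount)    # the str.format() equivalent
'spam costs 4 pence'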
3.1.3. Lists
Python knows a number of compound data types, used to group together
other values. The most versatile is the list, which can be written as a list
of comma-separated values (items) between square brackets. Lists might
contain items of different types, but usually the items all have the same
type.

>>>

>>> squares = [1, 4, 9, 16, 25]
>>> squares
[1, 4, 9, 16, 25]

Like strings (and all other built-in sequence types), lists can be indexed
and sliced:

>>>

>>> squares[0]  # indexing returns the item
1
>>> squares[-1]
25
>>> squares[-3:] # slicing returns a new list
[9, 16, 25]

Lists also support operations like concatenation:

>>>

>>> squares + [36, 49, 64, 81, 100]
[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]

Unlike strings, which are immutable, lists are a mutable type, i.e. it is
possible to change their content:

>>>

>>> cubes = [1, 8, 27, 65, 125]  # something's wrong here
>>> 4 ** 3  # the cube of 4 is 64, not 65!
64
>>> cubes[3] = 64 # replace the wrong value
>>> cubes
[1, 8, 27, 64, 125]

You can also add new items at the end of the list, by using
the list.append() method (we will see more about methods later):

>>>

>>> cubes.append(216)  # add the cube of 6
>>> cubes.append(7 ** 3) # and the cube of 7
>>> cubes
[1, 8, 27, 64, 125, 216, 343]

Simple assignment in Python never copies data. When you assign a list to
a variable, the variable refers to the existing list. Any changes you make
to the list through one variable will be seen through all other variables
that refer to it:

>>>

>>> rgb = ["Red", "Green", "Blue"]


>>> rgba = rgb
>>> id(rgb) == id(rgba) # they reference the same
object
True
>>> rgba.append("Alph")
>>> rgb
["Red", "Green", "Blue", "Alph"]

All slice operations return a new list containing the requested elements.
This means that the following slice returns a shallow copy of the list:

>>>
>>> correct_rgba = rgba[:]
>>> correct_rgba[-1] = "Alpha"
>>> correct_rgba
["Red", "Green", "Blue", "Alpha"]
>>> rgba
["Red", "Green", "Blue", "Alph"]

Assignment to slices is also possible, and this can even change the size of
the list or clear it entirely:

>>>

>>> letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
>>> letters
['a', 'b', 'c', 'd', 'e', 'f', 'g']
>>> # replace some values
>>> letters[2:5] = ['C', 'D', 'E']
>>> letters
['a', 'b', 'C', 'D', 'E', 'f', 'g']
>>> # now remove them
>>> letters[2:5] = []
>>> letters
['a', 'b', 'f', 'g']
>>> # clear the list by replacing all the elements with an empty list
>>> letters[:] = []
>>> letters
[]

The built-in function len() also applies to lists:

>>>

>>> letters = ['a', 'b', 'c', 'd']
>>> len(letters)
4

It is possible to nest lists (create lists containing other lists), for example:

>>>

>>> a = ['a', 'b', 'c']
>>> n = [1, 2, 3]
>>> x = [a, n]
>>> x
[['a', 'b', 'c'], [1, 2, 3]]
>>> x[0]
['a', 'b', 'c']
>>> x[0][1]
'b'

3.2. First Steps Towards Programming
Of course, we can use Python for more complicated tasks than adding
two and two together. For instance, we can write an initial sub-sequence
of the Fibonacci series as follows:

>>>

>>> # Fibonacci series:
>>> # the sum of two elements defines the next
>>> a, b = 0, 1
>>> while a < 10:
...     print(a)
...     a, b = b, a+b
...
0
1
1
2
3
5
8

This example introduces several new features.

 The first line contains a multiple assignment: the
variables a and b simultaneously get the new values 0 and 1. On
the last line this is used again, demonstrating that the expressions
on the right-hand side are all evaluated first before any of the
assignments take place. The right-hand side expressions are
evaluated from the left to the right.

 The while loop executes as long as the condition
(here: a < 10) remains true. In Python, like in C, any non-zero
integer value is true; zero is false. The condition may also be a
string or list value, in fact any sequence; anything with a non-zero
length is true, empty sequences are false. The test used in the
example is a simple comparison. The standard comparison
operators are written the same as in C: < (less than), > (greater
than), == (equal to), <= (less than or equal to), >= (greater than or
equal to) and != (not equal to).
 The body of the loop is indented: indentation is Python’s way of
grouping statements. At the interactive prompt, you have to type a
tab or space(s) for each indented line. In practice you will prepare
more complicated input for Python with a text editor; all decent
text editors have an auto-indent facility. When a compound
statement is entered interactively, it must be followed by a blank
line to indicate completion (since the parser cannot guess when
you have typed the last line). Note that each line within a basic
block must be indented by the same amount.
 The print() function writes the value of the argument(s) it is
given. It differs from just writing the expression you want to write
(as we did earlier in the calculator examples) in the way it handles
multiple arguments, floating-point quantities, and strings. Strings
are printed without quotes, and a space is inserted between items,
so you can format things nicely, like this:

>>>

>>> i = 256*256
>>> print('The value of i is', i)
The value of i is 65536

The keyword argument end can be used to avoid the newline after
the output, or end the output with a different string:

>>>

>>> a, b = 0, 1
>>> while a < 1000:
...     print(a, end=',')
...     a, b = b, a+b
...
0,1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,

Footnotes
[1]

Since ** has higher precedence than -, -3**2 will be interpreted
as -(3**2) and thus result in -9. To avoid this and get 9, you can
use (-3)**2.
[2]

Unlike other languages, special characters such as \n have the same
meaning with both single ('...') and double ("...") quotes. The
only difference between the two is that within single quotes you
don’t need to escape " (but you have to escape \') and vice versa.

4. More Control Flow Tools


As well as the while statement just introduced, Python uses a few more that
we will encounter in this chapter.

4.1. if Statements
Perhaps the most well-known statement type is the if statement. For example:

>>>

>>> x = int(input("Please enter an integer: "))
Please enter an integer: 42
>>> if x < 0:
...     x = 0
...     print('Negative changed to zero')
... elif x == 0:
...     print('Zero')
... elif x == 1:
...     print('Single')
... else:
...     print('More')
...
More

There can be zero or more elif parts, and the else part is optional. The keyword ‘elif’ is
short for ‘else if’, and is useful to avoid excessive indentation. An if … elif … elif …
sequence is a substitute for the switch or case statements found in other languages.

If you’re comparing the same value to several constants, or checking for specific types or
attributes, you may also find the match statement useful. For more details see match Statements.
4.2. for Statements
The for statement in Python differs a bit from what you may be used to in C or Pascal. Rather
than always iterating over an arithmetic progression of numbers (like in Pascal), or giving the
user the ability to define both the iteration step and halting condition (as C),
Python’s for statement iterates over the items of any sequence (a list or a string), in the order
that they appear in the sequence. For example (no pun intended):

>>>

>>> # Measure some strings:
>>> words = ['cat', 'window', 'defenestrate']
>>> for w in words:
...     print(w, len(w))
...
cat 3
window 6
defenestrate 12

Code that modifies a collection while iterating over that same collection can be tricky to get
right. Instead, it is usually more straight-forward to loop over a copy of the collection or to create
a new collection:

# Create a sample collection
users = {'Hans': 'active', 'Éléonore': 'inactive', '景太郎': 'active'}

# Strategy: Iterate over a copy
for user, status in users.copy().items():
    if status == 'inactive':
        del users[user]

# Strategy: Create a new collection
active_users = {}
for user, status in users.items():
    if status == 'active':
        active_users[user] = status

4.3. The range() Function


If you do need to iterate over a sequence of numbers, the built-in function range() comes in
handy. It generates arithmetic progressions:

>>>
>>> for i in range(5):
...     print(i)
...
0
1
2
3
4

The given end point is never part of the generated sequence; range(10) generates 10 values,
the legal indices for items of a sequence of length 10. It is possible to let the range start at
another number, or to specify a different increment (even negative; sometimes this is called the
‘step’):

>>>

>>> list(range(5, 10))
[5, 6, 7, 8, 9]
>>> list(range(0, 10, 3))
[0, 3, 6, 9]
>>> list(range(-10, -100, -30))
[-10, -40, -70]

To iterate over the indices of a sequence, you can combine range() and len() as follows:

>>>

>>> a = ['Mary', 'had', 'a', 'little', 'lamb']
>>> for i in range(len(a)):
...     print(i, a[i])
...
0 Mary
1 had
2 a
3 little
4 lamb

In most such cases, however, it is convenient to use the enumerate() function, see Looping
Techniques.
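
For comparison, the same loop written with enumerate(), which yields index/value pairs directly:

>>> for i, word in enumerate(['Mary', 'had', 'a', 'little', 'lamb']):
...     print(i, word)
...
0 Mary
1 had
2 a
3 little
4 lamb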

A strange thing happens if you just print a range:


>>>

>>> range(10)
range(0, 10)

In many ways the object returned by range() behaves as if it is a list, but in fact it isn’t. It is an
object which returns the successive items of the desired sequence when you iterate over it, but it
doesn’t really make the list, thus saving space.

We say such an object is iterable, that is, suitable as a target for functions and constructs that
expect something from which they can obtain successive items until the supply is exhausted. We
have seen that the for statement is such a construct, while an example of a function that takes an
iterable is sum():

>>>

>>> sum(range(4)) # 0 + 1 + 2 + 3
6

Later we will see more functions that return iterables and take iterables as arguments. In
chapter Data Structures, we will discuss in more detail about list().
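
For instance, list() consumes any iterable and builds an actual list from it:

>>> list(range(10))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]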

4.4. break and continue Statements, and else Clauses on Loops
The break statement breaks out of the innermost enclosing for or while loop.

A for or while loop can include an else clause.

In a for loop, the else clause is executed after the loop reaches its final iteration.

In a while loop, it’s executed after the loop’s condition becomes false.

In either kind of loop, the else clause is not executed if the loop was terminated by a break.

This is exemplified in the following for loop, which searches for prime numbers:

>>>

>>> for n in range(2, 10):
...     for x in range(2, n):
...         if n % x == 0:
...             print(n, 'equals', x, '*', n//x)
...             break
...     else:
...         # loop fell through without finding a factor
...         print(n, 'is a prime number')
...
2 is a prime number
3 is a prime number
4 equals 2 * 2
5 is a prime number
6 equals 2 * 3
7 is a prime number
8 equals 2 * 4
9 equals 3 * 3

(Yes, this is the correct code. Look closely: the else clause belongs to
the for loop, not the if statement.)

When used with a loop, the else clause has more in common with the else clause of
a try statement than it does with that of if statements: a try statement’s else clause runs
when no exception occurs, and a loop’s else clause runs when no break occurs. For more on
the try statement and exceptions, see Handling Exceptions.
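
A small, contrived sketch of the while form, just to show when its else clause runs:

>>> n = 3
>>> while n > 0:
...     print(n)
...     n = n - 1
... else:
...     print('loop ended without a break')
...
3
2
1
loop ended without a break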

The continue statement, also borrowed from C, continues with the next iteration of the loop:

>>>

>>> for num in range(2, 10):
...     if num % 2 == 0:
...         print("Found an even number", num)
...         continue
...     print("Found an odd number", num)
...
Found an even number 2
Found an odd number 3
Found an even number 4
Found an odd number 5
Found an even number 6
Found an odd number 7
Found an even number 8
Found an odd number 9
4.5. pass Statements
The pass statement does nothing. It can be used when a statement is required syntactically but
the program requires no action. For example:

>>>

>>> while True:
...     pass  # Busy-wait for keyboard interrupt (Ctrl+C)
...

This is commonly used for creating minimal classes:

>>>

>>> class MyEmptyClass:
...     pass
...

Another place pass can be used is as a place-holder for a function or conditional body when you
are working on new code, allowing you to keep thinking at a more abstract level. The pass is
silently ignored:

>>>

>>> def initlog(*args):
...     pass   # Remember to implement this!
...

4.6. match Statements


A match statement takes an expression and compares its value to successive patterns given as
one or more case blocks. This is superficially similar to a switch statement in C, Java or
JavaScript (and many other languages), but it’s more similar to pattern matching in languages
like Rust or Haskell. Only the first pattern that matches gets executed and it can also extract
components (sequence elements or object attributes) from the value into variables.

The simplest form compares a subject value against one or more literals:

def http_error(status):
    match status:
        case 400:
            return "Bad request"
        case 404:
            return "Not found"
        case 418:
            return "I'm a teapot"
        case _:
            return "Something's wrong with the internet"

Note the last block: the “variable name” _ acts as a wildcard and never fails to match. If no case
matches, none of the branches is executed.

You can combine several literals in a single pattern using | (“or”):

case 401 | 403 | 404:
    return "Not allowed"

Patterns can look like unpacking assignments, and can be used to bind variables:

# point is an (x, y) tuple
match point:
    case (0, 0):
        print("Origin")
    case (0, y):
        print(f"Y={y}")
    case (x, 0):
        print(f"X={x}")
    case (x, y):
        print(f"X={x}, Y={y}")
    case _:
        raise ValueError("Not a point")

Study that one carefully! The first pattern has two literals, and can be thought of as an extension
of the literal pattern shown above. But the next two patterns combine a literal and a variable, and
the variable binds a value from the subject (point). The fourth pattern captures two values,
which makes it conceptually similar to the unpacking assignment (x, y) = point.

If you are using classes to structure your data you can use the class name followed by an
argument list resembling a constructor, but with the ability to capture attributes into variables:

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

def where_is(point):
    match point:
        case Point(x=0, y=0):
            print("Origin")
        case Point(x=0, y=y):
            print(f"Y={y}")
        case Point(x=x, y=0):
            print(f"X={x}")
        case Point():
            print("Somewhere else")
        case _:
            print("Not a point")

You can use positional parameters with some builtin classes that provide an ordering for their
attributes (e.g. dataclasses). You can also define a specific position for attributes in patterns by
setting the __match_args__ special attribute in your classes. If it’s set to (“x”, “y”), the
following patterns are all equivalent (and all bind the y attribute to the var variable):

Point(1, var)
Point(1, y=var)
Point(x=1, y=var)
Point(y=var, x=1)

A recommended way to read patterns is to look at them as an extended form of what you would
put on the left of an assignment, to understand which variables would be set to what. Only the
standalone names (like var above) are assigned to by a match statement. Dotted names
(like foo.bar), attribute names (the x= and y= above) or class names (recognized by the “(…)”
next to them like Point above) are never assigned to.

Patterns can be arbitrarily nested. For example, if we have a short list of Points,
with __match_args__ added, we could match it like this:

class Point:
    __match_args__ = ('x', 'y')
    def __init__(self, x, y):
        self.x = x
        self.y = y

match points:
    case []:
        print("No points")
    case [Point(0, 0)]:
        print("The origin")
    case [Point(x, y)]:
        print(f"Single point {x}, {y}")
    case [Point(0, y1), Point(0, y2)]:
        print(f"Two on the Y axis at {y1}, {y2}")
    case _:
        print("Something else")

We can add an if clause to a pattern, known as a “guard”. If the guard is false, match goes on
to try the next case block. Note that value capture happens before the guard is evaluated:

match point:
    case Point(x, y) if x == y:
        print(f"Y=X at {x}")
    case Point(x, y):
        print(f"Not on the diagonal")

Several other key features of this statement:

 Like unpacking assignments, tuple and list patterns have exactly the same meaning and
actually match arbitrary sequences. An important exception is that they don’t match
iterators or strings.

 Sequence patterns support extended
unpacking: [x, y, *rest] and (x, y, *rest) work similar to unpacking
assignments. The name after * may also be _, so (x, y, *_) matches a sequence of at
least two items without binding the remaining items.
 Mapping patterns: {"bandwidth": b, "latency": l} captures
the "bandwidth" and "latency" values from a dictionary. Unlike sequence patterns,
extra keys are ignored. An unpacking like **rest is also supported. (But **_ would be
redundant, so it is not allowed.)
 Subpatterns may be captured using the as keyword:

  case (Point(x1, y1), Point(x2, y2) as p2): ...

will capture the second element of the input as p2 (as long as the input is a sequence of
two points)

 Most literals are compared by equality, however the
singletons True, False and None are compared by identity.
 Patterns may use named constants. These must be dotted names to prevent them from
being interpreted as a capture variable:

from enum import Enum
class Color(Enum):
    RED = 'red'
    GREEN = 'green'
    BLUE = 'blue'

color = Color(input("Enter your choice of 'red', 'blue' or 'green': "))

match color:
    case Color.RED:
        print("I see red!")
    case Color.GREEN:
        print("Grass is green")
    case Color.BLUE:
        print("I'm feeling the blues :(")

For a more detailed explanation and additional examples, you can look into PEP 636 which is
written in a tutorial format.
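
As one last small sketch combining two of the features above, a mapping pattern with a guard (the config dictionary and its keys are purely illustrative):

config = {"bandwidth": 100, "latency": 20}

match config:
    case {"bandwidth": b, "latency": l} if l < 50:
        print(f"fast link: {b} Mbit/s at {l} ms")
    case {"bandwidth": b, "latency": l}:
        print(f"slow link: {b} Mbit/s at {l} ms")
    case _:
        print("not a link description")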

4.7. Defining Functions


We can create a function that writes the Fibonacci series to an arbitrary boundary:

>>>

>>> def fib(n):    # write Fibonacci series up to n
...     """Print a Fibonacci series up to n."""
...     a, b = 0, 1
...     while a < n:
...         print(a, end=' ')
...         a, b = b, a+b
...     print()
...
>>> # Now call the function we just defined:
>>> fib(2000)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597

The keyword def introduces a function definition. It must be followed by the function name and
the parenthesized list of formal parameters. The statements that form the body of the function
start at the next line, and must be indented.

The first statement of the function body can optionally be a string literal; this string literal is the
function’s documentation string, or docstring. (More about docstrings can be found in the
section Documentation Strings.) There are tools which use docstrings to automatically produce
online or printed documentation, or to let the user interactively browse through code; it’s good
practice to include docstrings in code that you write, so make a habit of it.

The execution of a function introduces a new symbol table used for the local variables of the
function. More precisely, all variable assignments in a function store the value in the local
symbol table; whereas variable references first look in the local symbol table, then in the local
symbol tables of enclosing functions, then in the global symbol table, and finally in the table of
built-in names. Thus, global variables and variables of enclosing functions cannot be directly
assigned a value within a function (unless, for global variables, named in a global statement,
or, for variables of enclosing functions, named in a nonlocal statement), although they may be
referenced.

The actual parameters (arguments) to a function call are introduced in the local symbol table of
the called function when it is called; thus, arguments are passed using call by value (where
the value is always an object reference, not the value of the object). [1] When a function calls
another function, or calls itself recursively, a new local symbol table is created for that call.

A function definition associates the function name with the function object in the current symbol
table. The interpreter recognizes the object pointed to by that name as a user-defined function.
Other names can also point to that same function object and can also be used to access the
function:

>>>

>>> fib
<function fib at 10042ed0>
>>> f = fib
>>> f(100)
0 1 1 2 3 5 8 13 21 34 55 89

Coming from other languages, you might object that fib is not a function but a procedure since
it doesn’t return a value. In fact, even functions without a return statement do return a value,
albeit a rather boring one. This value is called None (it’s a built-in name). Writing the
value None is normally suppressed by the interpreter if it would be the only value written. You
can see it if you really want to using print():

>>>

>>> fib(0)
>>> print(fib(0))
None

It is simple to write a function that returns a list of the numbers of the Fibonacci series, instead of
printing it:
>>>

>>> def fib2(n):  # return Fibonacci series up to n
...     """Return a list containing the Fibonacci series up to n."""
...     result = []
...     a, b = 0, 1
...     while a < n:
...         result.append(a)    # see below
...         a, b = b, a+b
...     return result
...
>>> f100 = fib2(100) # call it
>>> f100 # write the result
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]

This example, as usual, demonstrates some new Python features:

 The return statement returns with a value from a function. return without an
expression argument returns None. Falling off the end of a function also returns None.
 The statement result.append(a) calls a method of the list object result. A method
is a function that ‘belongs’ to an object and is named obj.methodname, where obj is
some object (this may be an expression), and methodname is the name of a method that
is defined by the object’s type. Different types define different methods. Methods of
different types may have the same name without causing ambiguity. (It is possible to
define your own object types and methods, using classes, see Classes) The
method append() shown in the example is defined for list objects; it adds a new
element at the end of the list. In this example it is equivalent
to result = result + [a], but more efficient.
4.8. More on Defining Functions
It is also possible to define functions with a variable number of arguments. There are three
forms, which can be combined.

4.8.1. Default Argument Values


The most useful form is to specify a default value for one or more arguments. This creates a
function that can be called with fewer arguments than it is defined to allow. For example:

def ask_ok(prompt, retries=4, reminder='Please try again!'):
    while True:
        reply = input(prompt)
        if reply in {'y', 'ye', 'yes'}:
            return True
        if reply in {'n', 'no', 'nop', 'nope'}:
            return False
        retries = retries - 1
        if retries < 0:
            raise ValueError('invalid user response')
        print(reminder)

This function can be called in several ways:

 giving only the mandatory
argument: ask_ok('Do you really want to quit?')
 giving one of the optional
arguments: ask_ok('OK to overwrite the file?', 2)
 or even giving all
arguments: ask_ok('OK to overwrite the file?', 2, 'Come on, only
yes or no!')

This example also introduces the in keyword. This tests whether or not a sequence contains a
certain value.
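
For example:

>>> 'ye' in {'y', 'ye', 'yes'}
True
>>> 'maybe' in {'y', 'ye', 'yes'}
False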

The default values are evaluated at the point of function definition in the defining scope, so that

i = 5

def f(arg=i):
    print(arg)

i = 6
f()

will print 5.

Important warning: The default value is evaluated only once. This makes a difference when the
default is a mutable object such as a list, dictionary, or instances of most classes. For example,
the following function accumulates the arguments passed to it on subsequent calls:

def f(a, L=[]):
    L.append(a)
    return L

print(f(1))
print(f(2))
print(f(3))

This will print

[1]
[1, 2]
[1, 2, 3]

If you don’t want the default to be shared between subsequent calls, you can write the function
like this instead:

def f(a, L=None):
    if L is None:
        L = []
    L.append(a)
    return L

4.8.2. Keyword Arguments


Functions can also be called using keyword arguments of the form kwarg=value. For instance,
the following function:

def parrot(voltage, state='a stiff', action='voom', type='Norwegian Blue'):
    print("-- This parrot wouldn't", action, end=' ')
    print("if you put", voltage, "volts through it.")
    print("-- Lovely plumage, the", type)
    print("-- It's", state, "!")

accepts one required argument (voltage) and three optional arguments (state, action,
and type). This function can be called in any of the following ways:

parrot(1000)                                          # 1 positional argument
parrot(voltage=1000)                                  # 1 keyword argument
parrot(voltage=1000000, action='VOOOOOM')             # 2 keyword arguments
parrot(action='VOOOOOM', voltage=1000000)             # 2 keyword arguments
parrot('a million', 'bereft of life', 'jump')         # 3 positional arguments
parrot('a thousand', state='pushing up the daisies')  # 1 positional, 1 keyword

but all the following calls would be invalid:

parrot()                     # required argument missing
parrot(voltage=5.0, 'dead')  # non-keyword argument after a keyword argument
parrot(110, voltage=220)     # duplicate value for the same argument
parrot(actor='John Cleese')  # unknown keyword argument

In a function call, keyword arguments must follow positional arguments. All the keyword
arguments passed must match one of the arguments accepted by the function (e.g. actor is not a
valid argument for the parrot function), and their order is not important. This also includes
non-optional arguments (e.g. parrot(voltage=1000) is valid too). No argument may
receive a value more than once. Here’s an example that fails due to this restriction:

>>>

>>> def function(a):
...     pass
...
>>> function(0, a=0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: function() got multiple values for argument 'a'

When a final formal parameter of the form **name is present, it receives a dictionary
(see Mapping Types — dict) containing all keyword arguments except for those corresponding
to a formal parameter. This may be combined with a formal parameter of the
form *name (described in the next subsection) which receives a tuple containing the positional
arguments beyond the formal parameter list. (*name must occur before **name.) For example,
if we define a function like this:

def cheeseshop(kind, *arguments, **keywords):
    print("-- Do you have any", kind, "?")
    print("-- I'm sorry, we're all out of", kind)
    for arg in arguments:
        print(arg)
    print("-" * 40)
    for kw in keywords:
        print(kw, ":", keywords[kw])

It could be called like this:

cheeseshop("Limburger", "It's very runny, sir.",


"It's really very, VERY runny, sir.",
shopkeeper="Michael Palin",
client="John Cleese",
sketch="Cheese Shop Sketch")

and of course it would print:

-- Do you have any Limburger ?
-- I'm sorry, we're all out of Limburger
It's very runny, sir.
It's really very, VERY runny, sir.
----------------------------------------
shopkeeper : Michael Palin
client : John Cleese
sketch : Cheese Shop Sketch

Note that the order in which the keyword arguments are printed is guaranteed to match the order
in which they were provided in the function call.

4.8.3. Special parameters


By default, arguments may be passed to a Python function either by position or explicitly by
keyword. For readability and performance, it makes sense to restrict the way arguments can be
passed so that a developer need only look at the function definition to determine if items are
passed by position, by position or keyword, or by keyword.

A function definition may look like:

def f(pos1, pos2, /, pos_or_kwd, *, kwd1, kwd2):
      -----------    ----------     ----------
        |             |                  |
        |        Positional or keyword   |
        |                                - Keyword only
         -- Positional only

where / and * are optional. If used, these symbols indicate the kind of parameter by how the
arguments may be passed to the function: positional-only, positional-or-keyword, and keyword-
only. Keyword parameters are also referred to as named parameters.

4.8.3.1. Positional-or-Keyword Arguments


If / and * are not present in the function definition, arguments may be passed to a function by
position or by keyword.

4.8.3.2. Positional-Only Parameters


Looking at this in a bit more detail, it is possible to mark certain parameters as positional-only.
If positional-only, the parameters’ order matters, and the parameters cannot be passed by
keyword. Positional-only parameters are placed before a / (forward-slash). The / is used to
logically separate the positional-only parameters from the rest of the parameters. If there is
no / in the function definition, there are no positional-only parameters.

Parameters following the / may be positional-or-keyword or keyword-only.

4.8.3.3. Keyword-Only Arguments


To mark parameters as keyword-only, indicating the parameters must be passed by keyword
argument, place an * in the arguments list just before the first keyword-only parameter.

4.8.3.4. Function Examples


Consider the following example function definitions paying close attention to the
markers / and *:

>>>

>>> def standard_arg(arg):
...     print(arg)
...
>>> def pos_only_arg(arg, /):
...     print(arg)
...
>>> def kwd_only_arg(*, arg):
...     print(arg)
...
>>> def combined_example(pos_only, /, standard, *, kwd_only):
...     print(pos_only, standard, kwd_only)
...

The first function definition, standard_arg, the most familiar form, places no restrictions on
the calling convention and arguments may be passed by position or keyword:

>>>

>>> standard_arg(2)
2

>>> standard_arg(arg=2)
2

The second function pos_only_arg is restricted to only use positional parameters as there is
a / in the function definition:

>>>

>>> pos_only_arg(1)
1

>>> pos_only_arg(arg=1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: pos_only_arg() got some positional-only arguments passed as keyword arguments: 'arg'

The third function kwd_only_arg only allows keyword arguments as indicated by a * in the
function definition:

>>>

>>> kwd_only_arg(3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: kwd_only_arg() takes 0 positional arguments but 1 was given

>>> kwd_only_arg(arg=3)
3

And the last uses all three calling conventions in the same function definition:

>>>
>>> combined_example(1, 2, 3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: combined_example() takes 2 positional arguments but 3 were given

>>> combined_example(1, 2, kwd_only=3)
1 2 3

>>> combined_example(1, standard=2, kwd_only=3)
1 2 3

>>> combined_example(pos_only=1, standard=2, kwd_only=3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: combined_example() got some positional-only arguments passed as keyword arguments: 'pos_only'

Finally, consider this function definition which has a potential collision between the positional
argument name and **kwds which has name as a key:

def foo(name, **kwds):
    return 'name' in kwds

There is no possible call that will make it return True as the keyword 'name' will always bind
to the first parameter. For example:

>>>

>>> foo(1, **{'name': 2})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: foo() got multiple values for argument 'name'
>>>

But using / (positional only arguments), it is possible since it allows name as a positional
argument and 'name' as a key in the keyword arguments:

>>>

>>> def foo(name, /, **kwds):
...     return 'name' in kwds
...
>>> foo(1, **{'name': 2})
True

In other words, the names of positional-only parameters can be used in **kwds without
ambiguity.

4.8.3.5. Recap
The use case will determine which parameters to use in the function definition:

def f(pos1, pos2, /, pos_or_kwd, *, kwd1, kwd2):

As guidance:

 Use positional-only if you want the name of the parameters to not be available to the
user. This is useful when parameter names have no real meaning, if you want to enforce
the order of the arguments when the function is called or if you need to take some
positional parameters and arbitrary keywords.
 Use keyword-only when names have meaning and the function definition is more
understandable by being explicit with names or you want to prevent users relying on the
position of the argument being passed.
 For an API, use positional-only to prevent breaking API changes if the parameter’s name
is modified in the future.
4.8.4. Arbitrary Argument Lists
Finally, the least frequently used option is to specify that a function can be called with an
arbitrary number of arguments. These arguments will be wrapped up in a tuple (see Tuples and
Sequences). Before the variable number of arguments, zero or more normal arguments may
occur.

def write_multiple_items(file, separator, *args):
    file.write(separator.join(args))

Normally, these variadic arguments will be last in the list of formal parameters, because they
scoop up all remaining input arguments that are passed to the function. Any formal parameters
which occur after the *args parameter are ‘keyword-only’ arguments, meaning that they can
only be used as keywords rather than positional arguments.

>>>

>>> def concat(*args, sep="/"):


... return sep.join(args)
...
>>> concat("earth", "mars", "venus")
'earth/mars/venus'
>>> concat("earth", "mars", "venus", sep=".")
'earth.mars.venus'

4.8.5. Unpacking Argument Lists


The reverse situation occurs when the arguments are already in a list or tuple but need to be
unpacked for a function call requiring separate positional arguments. For instance, the built-
in range() function expects separate start and stop arguments. If they are not available
separately, write the function call with the *-operator to unpack the arguments out of a list or
tuple:

>>>

>>> list(range(3, 6))            # normal call with separate arguments
[3, 4, 5]
>>> args = [3, 6]
>>> list(range(*args))           # call with arguments unpacked from a list
[3, 4, 5]

In the same fashion, dictionaries can deliver keyword arguments with the **-operator:

>>>

>>> def parrot(voltage, state='a stiff', action='voom'):
...     print("-- This parrot wouldn't", action, end=' ')
...     print("if you put", voltage, "volts through it.", end=' ')
...     print("E's", state, "!")
...
>>> d = {"voltage": "four million", "state": "bleedin' demised", "action": "VOOM"}
>>> parrot(**d)
-- This parrot wouldn't VOOM if you put four million volts through it. E's bleedin' demised !

4.8.6. Lambda Expressions


Small anonymous functions can be created with the lambda keyword. This function returns the
sum of its two arguments: lambda a, b: a+b. Lambda functions can be used wherever
function objects are required. They are syntactically restricted to a single expression.
Semantically, they are just syntactic sugar for a normal function definition. Like nested function
definitions, lambda functions can reference variables from the containing scope:

>>>

>>> def make_incrementor(n):
...     return lambda x: x + n
...
>>> f = make_incrementor(42)
>>> f(0)
42
>>> f(1)
43

The above example uses a lambda expression to return a function. Another use is to pass a small
function as an argument:

>>>

>>> pairs = [(1, 'one'), (2, 'two'), (3, 'three'), (4, 'four')]
>>> pairs.sort(key=lambda pair: pair[1])
>>> pairs
[(4, 'four'), (1, 'one'), (3, 'three'), (2, 'two')]

4.8.7. Documentation Strings


Here are some conventions about the content and formatting of documentation strings.

The first line should always be a short, concise summary of the object’s purpose. For brevity, it
should not explicitly state the object’s name or type, since these are available by other means
(except if the name happens to be a verb describing a function’s operation). This line should
begin with a capital letter and end with a period.

If there are more lines in the documentation string, the second line should be blank, visually
separating the summary from the rest of the description. The following lines should be one or
more paragraphs describing the object’s calling conventions, its side effects, etc.

The Python parser does not strip indentation from multi-line string literals in Python, so tools
that process documentation have to strip indentation if desired. This is done using the following
convention. The first non-blank line after the first line of the string determines the amount of
indentation for the entire documentation string. (We can’t use the first line since it is generally
adjacent to the string’s opening quotes so its indentation is not apparent in the string literal.)
Whitespace “equivalent” to this indentation is then stripped from the start of all lines of the
string. Lines that are indented less should not occur, but if they occur all their leading whitespace
should be stripped. Equivalence of whitespace should be tested after expansion of tabs (to 8
spaces, normally).
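
As a hedged aside (not part of the original text), the standard library already implements this
convention: inspect.cleandoc() strips the indentation as described, so documentation tools can
normalize a docstring before further processing.

import inspect

doc = """Summary line.

        Indented details that a processing tool should dedent.
        """

print(inspect.cleandoc(doc))
# Summary line.
#
# Indented details that a processing tool should dedent.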

Here is an example of a multi-line docstring:

>>>

>>> def my_function():
...     """Do nothing, but document it.
...
...     No, really, it doesn't do anything.
...     """
...     pass
...
>>> print(my_function.__doc__)
Do nothing, but document it.

No, really, it doesn't do anything.

4.8.8. Function Annotations


Function annotations are completely optional metadata information about the types used by user-
defined functions (see PEP 3107 and PEP 484 for more information).

Annotations are stored in the __annotations__ attribute of the function as a dictionary and
have no effect on any other part of the function. Parameter annotations are defined by a colon
after the parameter name, followed by an expression evaluating to the value of the annotation.
Return annotations are defined by a literal ->, followed by an expression, between the parameter
list and the colon denoting the end of the def statement. The following example has a required
argument, an optional argument, and the return value annotated:

>>>

>>> def f(ham: str, eggs: str = 'eggs') -> str:
...     print("Annotations:", f.__annotations__)
...     print("Arguments:", ham, eggs)
...     return ham + ' and ' + eggs
...
>>> f('spam')
Annotations: {'ham': <class 'str'>, 'return': <class 'str'>,
'eggs': <class 'str'>}
Arguments: spam eggs
'spam and eggs'
4.9. Intermezzo: Coding Style
Now that you are about to write longer, more complex pieces of Python, it is a good time to talk
about coding style. Most languages can be written (or more concisely, formatted) in different
styles; some are more readable than others. Making it easy for others to read your code is always
a good idea, and adopting a nice coding style helps tremendously for that.

For Python, PEP 8 has emerged as the style guide that most projects adhere to; it promotes a
very readable and eye-pleasing coding style. Every Python developer should read it at some
point; here are the most important points extracted for you:

 Use 4-space indentation, and no tabs.

4 spaces are a good compromise between small indentation (allows greater nesting depth)
and large indentation (easier to read). Tabs introduce confusion, and are best left out.

 Wrap lines so that they don’t exceed 79 characters.

This helps users with small displays and makes it possible to have several code files side-
by-side on larger displays.

 Use blank lines to separate functions and classes, and larger blocks of code inside
functions.
 When possible, put comments on a line of their own.
 Use docstrings.
 Use spaces around operators and after commas, but not directly inside bracketing
constructs: a = f(1, 2) + g(3, 4).
 Name your classes and functions consistently; the convention is to
use UpperCamelCase for classes and lowercase_with_underscores for
functions and methods. Always use self as the name for the first method argument
(see A First Look at Classes for more on classes and methods).
 Don’t use fancy encodings if your code is meant to be used in international environments.
Python’s default, UTF-8, or even plain ASCII work best in any case.
 Likewise, don’t use non-ASCII characters in identifiers if there is only the slightest
chance people speaking a different language will read or maintain the code.
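
Pulling these points together, a small sketch in that style might look like this (the names and
behaviour are invented purely for illustration):

def average_score(scores, precision=2):
    """Return the mean of scores, rounded to the given precision."""
    total = sum(scores)
    return round(total / len(scores), precision)


class ScoreBoard:
    """Collect scores and report their average."""

    def __init__(self):
        self.scores = []

    def add_score(self, value):
        self.scores.append(value)

    def average(self):
        return average_score(self.scores)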

Footnotes

[1]

Actually, call by object reference would be a better description, since if a mutable object is
passed, the caller will see any changes the callee makes to it (items inserted into a list).
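
A minimal sketch of what that footnote means in practice (the function and list are invented for
illustration):

def add_item(items):
    items.append('new')      # mutates the very object the caller passed in

data = ['old']
add_item(data)
print(data)                  # the caller sees the change: ['old', 'new']
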
5. Data Structures
This chapter describes some things you’ve learned about already in more
detail, and adds some new things as well.

5.1. More on Lists


The list data type has some more methods. Here are all of the methods of
list objects:

list.append(x)

Add an item to the end of the list. Equivalent to a[len(a):] = [x].


list.extend(iterable)

Extend the list by appending all the items from the iterable. Equivalent
to a[len(a):] = iterable.
list.insert(i, x)

Insert an item at a given position. The first argument is the index of the
element before which to insert, so a.insert(0, x) inserts at the front
of the list, and a.insert(len(a), x) is equivalent to a.append(x).
list.remove(x)

Remove the first item from the list whose value is equal to x. It raises
a ValueError if there is no such item.
list.pop([i])

Remove the item at the given position in the list, and return it. If no
index is specified, a.pop() removes and returns the last item in the
list. It raises an IndexError if the list is empty or the index is outside
the list range.
list.clear()

Remove all items from the list. Equivalent to del a[:].


list.index(x[, start[, end]])

Return zero-based index in the list of the first item whose value is
equal to x. Raises a ValueError if there is no such item.

The optional arguments start and end are interpreted as in the slice
notation and are used to limit the search to a particular subsequence
of the list. The returned index is computed relative to the beginning of
the full sequence rather than the start argument.
list.count(x)

Return the number of times x appears in the list.


list.sort(*, key=None, reverse=False)

Sort the items of the list in place (the arguments can be used for sort
customization, see sorted() for their explanation).
list.reverse()

Reverse the elements of the list in place.


list.copy()

Return a shallow copy of the list. Equivalent to a[:].

An example that uses most of the list methods:

>>>

>>> fruits = ['orange', 'apple', 'pear', 'banana', 'kiwi', 'apple', 'banana']
>>> fruits.count('apple')
2
>>> fruits.count('tangerine')
0
>>> fruits.index('banana')
3
>>> fruits.index('banana', 4)  # Find next banana starting at position 4
6
>>> fruits.reverse()
>>> fruits
['banana', 'apple', 'kiwi', 'banana', 'pear', 'apple', 'orange']
>>> fruits.append('grape')
>>> fruits
['banana', 'apple', 'kiwi', 'banana', 'pear', 'apple', 'orange', 'grape']
>>> fruits.sort()
>>> fruits
['apple', 'apple', 'banana', 'banana', 'grape', 'kiwi', 'orange', 'pear']
>>> fruits.pop()
'pear'

You might have noticed that methods like insert, remove or sort that only modify the list have no
return value printed – they return the default None. [1] This is a design principle for all mutable
data structures in Python.

Another thing you might notice is that not all data can be sorted or compared. For
instance, [None, 'hello', 10] doesn’t sort because integers can’t be compared to strings
and None can’t be compared to other types. Also, there are some types that don’t have a defined
ordering relation. For example, 3+4j < 5+7j isn’t a valid comparison.
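
To see the failure concretely, here is an illustrative session (the exact error message depends on
which elements the sort happens to compare first):

>>>

>>> sorted([None, 'hello', 10])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: '<' not supported between instances of 'str' and 'NoneType'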

5.1.1. Using Lists as Stacks
The list methods make it very easy to use a list as a stack, where the last element added is the
first element retrieved (“last-in, first-out”). To add an item to the top of the stack,
use append(). To retrieve an item from the top of the stack, use pop() without an explicit
index. For example:

>>>

>>> stack = [3, 4, 5]
>>> stack.append(6)
>>> stack.append(7)
>>> stack
[3, 4, 5, 6, 7]
>>> stack.pop()
7
>>> stack
[3, 4, 5, 6]
>>> stack.pop()
6
>>> stack.pop()
5
>>> stack
[3, 4]

5.1.2. Using Lists as Queues
It is also possible to use a list as a queue, where the first element added is the first element
retrieved (“first-in, first-out”); however, lists are not efficient for this purpose. While appends
and pops from the end of list are fast, doing inserts or pops from the beginning of a list is slow
(because all of the other elements have to be shifted by one).

To implement a queue, use collections.deque which was designed to have fast appends and
pops from both ends. For example:

>>>

>>> from collections import deque
>>> queue = deque(["Eric", "John", "Michael"])
>>> queue.append("Terry")           # Terry arrives
>>> queue.append("Graham")          # Graham arrives
>>> queue.popleft()                 # The first to arrive now leaves
'Eric'
>>> queue.popleft()                 # The second to arrive now leaves
'John'
>>> queue                           # Remaining queue in order of arrival
deque(['Michael', 'Terry', 'Graham'])

5.1.3. List Comprehensions
List comprehensions provide a concise way to create lists. Common applications are to make new
lists where each element is the result of some operations applied to each member of another
sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.

For example, assume we want to create a list of squares, like:

>>>

>>> squares = []
>>> for x in range(10):
...     squares.append(x**2)
...
>>> squares
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

Note that this creates (or overwrites) a variable named x that still exists after the loop
completes. We can calculate the list of squares without any side effects using:

squares = list(map(lambda x: x**2, range(10)))

or, equivalently:

squares = [x**2 for x in range(10)]

which is more concise and readable.

A list comprehension consists of brackets containing an expression followed by a for clause,
then zero or more for or if clauses. The result will be a new list resulting from evaluating the
expression in the context of the for and if clauses which follow it. For example, this listcomp
combines the elements of two lists if they are not equal:

>>>

>>> [(x, y) for x in [1,2,3] for y in [3,1,4] if x != y]
[(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]

and it’s equivalent to:

>>>

>>> combs = []
>>> for x in [1,2,3]:
...     for y in [3,1,4]:
...         if x != y:
...             combs.append((x, y))
...
>>> combs
[(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]

Note how the order of the for and if statements is the same in both these snippets.

If the expression is a tuple (e.g. the (x, y) in the previous example), it must be parenthesized.

>>>

>>> vec = [-4, -2, 0, 2, 4]
>>> # create a new list with the values doubled
>>> [x*2 for x in vec]
[-8, -4, 0, 4, 8]
>>> # filter the list to exclude negative numbers
>>> [x for x in vec if x >= 0]
[0, 2, 4]
>>> # apply a function to all the elements
>>> [abs(x) for x in vec]
[4, 2, 0, 2, 4]
>>> # call a method on each element
>>> freshfruit = ['  banana', '  loganberry ', 'passion fruit  ']
>>> [weapon.strip() for weapon in freshfruit]
['banana', 'loganberry', 'passion fruit']
>>> # create a list of 2-tuples like (number, square)
>>> [(x, x**2) for x in range(6)]
[(0, 0), (1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]
>>> # the tuple must be parenthesized, otherwise an error is raised
>>> [x, x**2 for x in range(6)]
  File "<stdin>", line 1
    [x, x**2 for x in range(6)]
     ^^^^^^^
SyntaxError: did you forget parentheses around the comprehension target?
>>> # flatten a list using a listcomp with two 'for'
>>> vec = [[1,2,3], [4,5,6], [7,8,9]]
>>> [num for elem in vec for num in elem]
[1, 2, 3, 4, 5, 6, 7, 8, 9]

List comprehensions can contain complex expressions and nested functions:

>>>

>>> from math import pi
>>> [str(round(pi, i)) for i in range(1, 6)]
['3.1', '3.14', '3.142', '3.1416', '3.14159']

5.1.4. Nested List Comprehensions
The initial expression in a list comprehension can be any arbitrary expression, including another
list comprehension.

Consider the following example of a 3x4 matrix implemented as a list of 3 lists of length 4:

>>>

>>> matrix = [
...     [1, 2, 3, 4],
...     [5, 6, 7, 8],
...     [9, 10, 11, 12],
... ]

The following list comprehension will transpose rows and columns:

>>>

>>> [[row[i] for row in matrix] for i in range(4)]
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]

As we saw in the previous section, the inner list comprehension is evaluated in the context of
the for that follows it, so this example is equivalent to:

>>>

>>> transposed = []
>>> for i in range(4):
...     transposed.append([row[i] for row in matrix])
...
>>> transposed
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]

which, in turn, is the same as:

>>>

>>> transposed = []
>>> for i in range(4):
...     # the following 3 lines implement the nested listcomp
...     transposed_row = []
...     for row in matrix:
...         transposed_row.append(row[i])
...     transposed.append(transposed_row)
...
>>> transposed
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]

In the real world, you should prefer built-in functions to complex flow statements.
The zip() function would do a great job for this use case:

>>>

>>> list(zip(*matrix))
[(1, 5, 9), (2, 6, 10), (3, 7, 11), (4, 8, 12)]

See Unpacking Argument Lists for details on the asterisk in this line.

5.2. The del statement
There is a way to remove an item from a list given its index instead of its value:
the del statement. This differs from the pop() method which returns a value.
The del statement can also be used to remove slices from a list or clear the entire list (which we
did earlier by assignment of an empty list to the slice). For example:

>>>

>>> a = [-1, 1, 66.25, 333, 333, 1234.5]
>>> del a[0]
>>> a
[1, 66.25, 333, 333, 1234.5]
>>> del a[2:4]
>>> a
[1, 66.25, 1234.5]
>>> del a[:]
>>> a
[]

del can also be used to delete entire variables:

>>>

>>> del a

Referencing the name a hereafter is an error (at least until another value is assigned to it). We’ll
find other uses for del later.

5.3. Tuples and Sequences
We saw that lists and strings have many common properties, such as indexing and slicing
operations. They are two examples of sequence data types (see Sequence Types — list, tuple,
range). Since Python is an evolving language, other sequence data types may be added. There is
also another standard sequence data type: the tuple.

A tuple consists of a number of values separated by commas, for instance:

>>>

>>> t = 12345, 54321, 'hello!'
>>> t[0]
12345
>>> t
(12345, 54321, 'hello!')
>>> # Tuples may be nested:
>>> u = t, (1, 2, 3, 4, 5)
>>> u
((12345, 54321, 'hello!'), (1, 2, 3, 4, 5))
>>> # Tuples are immutable:
>>> t[0] = 88888
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
>>> # but they can contain mutable objects:
>>> v = ([1, 2, 3], [3, 2, 1])
>>> v
([1, 2, 3], [3, 2, 1])

As you see, on output tuples are always enclosed in parentheses, so that nested tuples are
interpreted correctly; they may be input with or without surrounding parentheses, although often
parentheses are necessary anyway (if the tuple is part of a larger expression). It is not possible
to assign to the individual items of a tuple, however it is possible to create tuples which contain
mutable objects, such as lists.

Though tuples may seem similar to lists, they are often used in different situations and for
different purposes. Tuples are immutable, and usually contain a heterogeneous sequence of
elements that are accessed via unpacking (see later in this section) or indexing (or even by
attribute in the case of namedtuples). Lists are mutable, and their elements are usually
homogeneous and are accessed by iterating over the list.

A special problem is the construction of tuples containing 0 or 1 items: the syntax has some extra
quirks to accommodate these. Empty tuples are constructed by an empty pair of parentheses; a
tuple with one item is constructed by following a value with a comma (it is not sufficient to
enclose a single value in parentheses). Ugly, but effective. For example:

>>>

>>> empty = ()
>>> singleton = 'hello',    # <-- note trailing comma
>>> len(empty)
0
>>> len(singleton)
1
>>> singleton
('hello',)

The statement t = 12345, 54321, 'hello!' is an example of tuple packing: the
values 12345, 54321 and 'hello!' are packed together in a tuple. The reverse operation is also
possible:

>>>

>>> x, y, z = t

This is called, appropriately enough, sequence unpacking and works for any sequence on the
right-hand side. Sequence unpacking requires that there are as many variables on the left side of
the equals sign as there are elements in the sequence. Note that multiple assignment is really just
a combination of tuple packing and sequence unpacking.
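
As a small aside (not part of the original example), this combination of packing on the right and
unpacking on the left is also what makes the familiar variable swap work:

>>>

>>> a, b = 1, 2
>>> a, b = b, a      # pack (b, a) into a tuple, then unpack it into a and b
>>> a, b
(2, 1)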

5.4. Sets
Python also includes a data type for sets. A set is an unordered collection with no duplicate
elements. Basic uses include membership testing and eliminating duplicate entries. Set objects
also support mathematical operations like union, intersection, difference, and symmetric
difference.

Curly braces or the set() function can be used to create sets. Note: to create an empty set you
have to use set(), not {}; the latter creates an empty dictionary, a data structure that we discuss
in the next section.

Here is a brief demonstration:

>>>

>>> basket = {'apple', 'orange', 'apple', 'pear', 'orange', 'banana'}
>>> print(basket)                      # show that duplicates have been removed
{'orange', 'banana', 'pear', 'apple'}
>>> 'orange' in basket                 # fast membership testing
True
>>> 'crabgrass' in basket
False

>>> # Demonstrate set operations on unique letters from two words
>>>
>>> a = set('abracadabra')
>>> b = set('alacazam')
>>> a                                  # unique letters in a
{'a', 'r', 'b', 'c', 'd'}
>>> a - b                              # letters in a but not in b
{'r', 'd', 'b'}
>>> a | b                              # letters in a or b or both
{'a', 'c', 'r', 'd', 'b', 'm', 'z', 'l'}
>>> a & b                              # letters in both a and b
{'a', 'c'}
>>> a ^ b                              # letters in a or b but not both
{'r', 'd', 'b', 'm', 'z', 'l'}

Similarly to list comprehensions, set comprehensions are also supported:

>>>

>>> a = {x for x in 'abracadabra' if x not in 'abc'}
>>> a
{'r', 'd'}

5.5. Dictionaries
Another useful data type built into Python is the dictionary (see Mapping Types — dict).
Dictionaries are sometimes found in other languages as “associative memories” or “associative
arrays”. Unlike sequences, which are indexed by a range of numbers, dictionaries are indexed
by keys, which can be any immutable type; strings and numbers can always be keys. Tuples can be
used as keys if they contain only strings, numbers, or tuples; if a tuple contains any mutable
object either directly or indirectly, it cannot be used as a key. You can’t use lists as keys,
since lists can be modified in place using index assignments, slice assignments, or methods
like append() and extend().

It is best to think of a dictionary as a set of key: value pairs, with the requirement that the
keys are unique (within one dictionary). A pair of braces creates an empty dictionary: {}. Placing
a comma-separated list of key:value pairs within the braces adds initial key:value pairs to the
dictionary; this is also the way dictionaries are written on output.

The main operations on a dictionary are storing a value with some key and extracting the value
given the key. It is also possible to delete a key:value pair with del. If you store using a key
that is already in use, the old value associated with that key is forgotten. It is an error to
extract a value using a non-existent key.

Performing list(d) on a dictionary returns a list of all the keys used in the dictionary, in
insertion order (if you want it sorted, just use sorted(d) instead). To check whether a single key
is in the dictionary, use the in keyword.

Here is a small example using a dictionary:

>>>

>>> tel = {'jack': 4098, 'sape': 4139}
>>> tel['guido'] = 4127
>>> tel
{'jack': 4098, 'sape': 4139, 'guido': 4127}
>>> tel['jack']
4098
>>> del tel['sape']
>>> tel['irv'] = 4127
>>> tel
{'jack': 4098, 'guido': 4127, 'irv': 4127}
>>> list(tel)
['jack', 'guido', 'irv']
>>> sorted(tel)
['guido', 'irv', 'jack']
>>> 'guido' in tel
True
>>> 'jack' not in tel
False

The dict() constructor builds dictionaries directly from sequences of key-value pairs:

>>>

>>> dict([('sape', 4139), ('guido', 4127), ('jack', 4098)])
{'sape': 4139, 'guido': 4127, 'jack': 4098}

In addition, dict comprehensions can be used to create dictionaries from arbitrary key and value
expressions:

>>>

>>> {x: x**2 for x in (2, 4, 6)}
{2: 4, 4: 16, 6: 36}

When the keys are simple strings, it is sometimes easier to specify pairs using keyword arguments:

>>>

>>> dict(sape=4139, guido=4127, jack=4098)
{'sape': 4139, 'guido': 4127, 'jack': 4098}

5.6. Looping Techniques
When looping through dictionaries, the key and corresponding value can be retrieved at the same
time using the items() method.

>>>

>>> knights = {'gallahad': 'the pure', 'robin': 'the brave'}
>>> for k, v in knights.items():
...     print(k, v)
...
gallahad the pure
robin the brave

When looping through a sequence, the position index and corresponding value can be retrieved at
the same time using the enumerate() function.

>>>

>>> for i, v in enumerate(['tic', 'tac', 'toe']):
...     print(i, v)
...
0 tic
1 tac
2 toe

To loop over two or more sequences at the same time, the entries can be paired with
the zip() function.

>>>

>>> questions = ['name', 'quest', 'favorite color']
>>> answers = ['lancelot', 'the holy grail', 'blue']
>>> for q, a in zip(questions, answers):
...     print('What is your {0}? It is {1}.'.format(q, a))
...
What is your name? It is lancelot.
What is your quest? It is the holy grail.
What is your favorite color? It is blue.

To loop over a sequence in reverse, first specify the sequence in a forward direction and then call
the reversed() function.

>>>

>>> for i in reversed(range(1, 10, 2)):
...     print(i)
...
9
7
5
3
1

To loop over a sequence in sorted order, use the sorted() function which returns a new sorted
list while leaving the source unaltered.

>>>

>>> basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
>>> for i in sorted(basket):
...     print(i)
...
apple
apple
banana
orange
orange
pear

Using set() on a sequence eliminates duplicate elements. The use of sorted() in combination
with set() over a sequence is an idiomatic way to loop over unique elements of the sequence in
sorted order.

>>>

>>> basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
>>> for f in sorted(set(basket)):
...     print(f)
...
apple
banana
orange
pear

It is sometimes tempting to change a list while you are looping over it; however, it is often
simpler and safer to create a new list instead.

>>>

>>> import math
>>> raw_data = [56.2, float('NaN'), 51.7, 55.3, 52.5, float('NaN'), 47.8]
>>> filtered_data = []
>>> for value in raw_data:
...     if not math.isnan(value):
...         filtered_data.append(value)
...
>>> filtered_data
[56.2, 51.7, 55.3, 52.5, 47.8]

5.7. More on Conditions
The conditions used in while and if statements can contain any operators, not just comparisons.

The comparison operators in and not in are membership tests that determine whether a value is
in (or not in) a container. The operators is and is not compare whether two objects are really
the same object. All comparison operators have the same priority, which is lower than that of all
numerical operators.

Comparisons can be chained. For example, a < b == c tests whether a is less than b and
moreover b equals c.
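
For instance (a quick invented check, not from the original text):

>>>

>>> a, b, c = 1, 2, 2
>>> a < b == c       # equivalent to (a < b) and (b == c)
True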

Comparisons may be combined using the Boolean operators and and or, and the outcome of a
comparison (or of any other Boolean expression) may be negated with not. These have lower
priorities than comparison operators; between them, not has the highest priority and or the
lowest, so that A and not B or C is equivalent to (A and (not B)) or C. As always,
parentheses can be used to express the desired composition.

The Boolean operators and and or are so-called short-circuit operators: their arguments are
evaluated from left to right, and evaluation stops as soon as the outcome is determined. For
example, if A and C are true but B is false, A and B and C does not evaluate the expression C.
When used as a general value and not as a Boolean, the return value of a short-circuit operator is
the last evaluated argument.
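
Two quick illustrations of those properties (a sketch, not from the original text):

>>>

>>> 0 and 1/0        # the outcome is already decided, so 1/0 is never evaluated
0
>>> 3 and 5          # used as a general value: the last evaluated argument is returned
5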

It is possible to assign the result of a comparison or other Boolean expression to a variable. For
example,

>>>

>>> string1, string2, string3 = '', 'Trondheim', 'Hammer Dance'
>>> non_null = string1 or string2 or string3
>>> non_null
'Trondheim'

Note that in Python, unlike C, assignment inside expressions must be done explicitly with
the walrus operator :=. This avoids a common class of problems encountered in C programs:
typing = in an expression when == was intended.

5.8. Comparing Sequences and Other Types
Sequence objects typically may be compared to other objects with the same sequence type. The
comparison uses lexicographical ordering: first the first two items are compared, and if they
differ this determines the outcome of the comparison; if they are equal, the next two items are
compared, and so on, until either sequence is exhausted. If two items to be compared are
themselves sequences of the same type, the lexicographical comparison is carried out recursively.
If all items of two sequences compare equal, the sequences are considered equal. If one sequence
is an initial sub-sequence of the other, the shorter sequence is the smaller (lesser) one.
Lexicographical ordering for strings uses the Unicode code point number to order individual
characters. Some examples of comparisons between sequences of the same type:

(1, 2, 3)              < (1, 2, 4)
[1, 2, 3]              < [1, 2, 4]
'ABC' < 'C' < 'Pascal' < 'Python'
(1, 2, 3, 4)           < (1, 2, 4)
(1, 2)                 < (1, 2, -1)
(1, 2, 3)             == (1.0, 2.0, 3.0)
(1, 2, ('aa', 'ab'))   < (1, 2, ('abc', 'a'), 4)

Note that comparing objects of different types with < or > is legal provided that the objects have
appropriate comparison methods. For example, mixed numeric types are compared according to their
numeric value, so 0 equals 0.0, etc. Otherwise, rather than providing an arbitrary ordering, the
interpreter will raise a TypeError exception.

Footnotes

[1]

Other languages may return the mutated object, which allows method chaining, such
as d->insert("a")->remove("b")->sort();.

6. Modules
If you quit from the Python interpreter and enter it again, the definitions you
have made (functions and variables) are lost. Therefore, if you want to write
a somewhat longer program, you are better off using a text editor to prepare
the input for the interpreter and running it with that file as input instead. This
is known as creating a script. As your program gets longer, you may want to
split it into several files for easier maintenance. You may also want to use a
handy function that you’ve written in several programs without copying its
definition into each program.

To support this, Python has a way to put definitions in a file and use them in
a script or in an interactive instance of the interpreter. Such a file is called
a module; definitions from a module can be imported into other modules or
into the main module (the collection of variables that you have access to in a
script executed at the top level and in calculator mode).

A module is a file containing Python definitions and statements. The file


name is the module name with the suffix .py appended. Within a module,
the module’s name (as a string) is available as the value of the global
variable __name__. For instance, use your favorite text editor to create a file
called fibo.py in the current directory with the following contents:

# Fibonacci numbers module

def fib(n):    # write Fibonacci series up to n
    a, b = 0, 1
    while a < n:
        print(a, end=' ')
        a, b = b, a+b
    print()

def fib2(n):   # return Fibonacci series up to n
    result = []
    a, b = 0, 1
    while a < n:
        result.append(a)
        a, b = b, a+b
    return result

Now enter the Python interpreter and import this module with the following
command:

>>>

>>> import fibo

This does not add the names of the functions defined in fibo directly to the
current namespace (see Python Scopes and Namespaces for more details); it
only adds the module name fibo there. Using the module name you can
access the functions:

>>>

>>> fibo.fib(1000)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987
>>> fibo.fib2(100)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
>>> fibo.__name__
'fibo'

If you intend to use a function often you can assign it to a local name:

>>>

>>> fib = fibo.fib


>>> fib(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377
6.1. More on Modules
A module can contain executable statements as well as function definitions. These statements are
intended to initialize the module. They are executed only the first time the module name is
encountered in an import statement. [1] (They are also run if the file is executed as a script.)

Each module has its own private namespace, which is used as the global namespace by all
functions defined in the module. Thus, the author of a module can use global variables in the
module without worrying about accidental clashes with a user’s global variables. On the other
hand, if you know what you are doing you can touch a module’s global variables with the same
notation used to refer to its functions, modname.itemname.
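
For instance (a hedged sketch: the counter attribute is invented here, since fibo itself defines
only the two functions shown above):

>>>

>>> import fibo
>>> fibo.counter = 0        # create or rebind a global inside the fibo module
>>> fibo.counter += 1
>>> fibo.counter
1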

Modules can import other modules. It is customary but not required to place
all import statements at the beginning of a module (or script, for that matter). The imported
module names, if placed at the top level of a module (outside any functions or classes), are added
to the module’s global namespace.

There is a variant of the import statement that imports names from a module directly into the
importing module’s namespace. For example:

>>>

>>> from fibo import fib, fib2


>>> fib(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377

This does not introduce the module name from which the imports are taken in the local
namespace (so in the example, fibo is not defined).

There is even a variant to import all names that a module defines:

>>>

>>> from fibo import *


>>> fib(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377

This imports all names except those beginning with an underscore (_). In most cases Python
programmers do not use this facility since it introduces an unknown set of names into the
interpreter, possibly hiding some things you have already defined.

Note that in general the practice of importing * from a module or package is frowned upon, since
it often causes poorly readable code. However, it is okay to use it to save typing in interactive
sessions.
If the module name is followed by as, then the name following as is bound directly to the
imported module.

>>>

>>> import fibo as fib


>>> fib.fib(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377

This is effectively importing the module in the same way that import fibo will do, with the
only difference of it being available as fib.

It can also be used when utilising from with similar effects:

>>>

>>> from fibo import fib as fibonacci


>>> fibonacci(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377
Note

For efficiency reasons, each module is only imported once per interpreter session. Therefore, if
you change your modules, you must restart the interpreter – or, if it’s just one module you want
to test interactively, use importlib.reload(),
e.g. import importlib; importlib.reload(modulename).
6.1.1. Executing modules as scripts
When you run a Python module with

python fibo.py <arguments>

the code in the module will be executed, just as if you imported it, but with the __name__ set
to "__main__". That means that by adding this code at the end of your module:

if __name__ == "__main__":
    import sys
    fib(int(sys.argv[1]))

you can make the file usable as a script as well as an importable module, because the code that
parses the command line only runs if the module is executed as the “main” file:
$ python fibo.py 50
0 1 1 2 3 5 8 13 21 34

If the module is imported, the code is not run:

>>>

>>> import fibo


>>>

This is often used either to provide a convenient user interface to a module, or for testing
purposes (running the module as a script executes a test suite).

6.1.2. The Module Search Path


When a module named spam is imported, the interpreter first searches for a built-in module with
that name. These module names are listed in sys.builtin_module_names. If not found, it
then searches for a file named spam.py in a list of directories given by the
variable sys.path. sys.path is initialized from these locations:

 The directory containing the input script (or the current directory when no file is
specified).
 PYTHONPATH (a list of directory names, with the same syntax as the shell
variable PATH).
 The installation-dependent default (by convention including a site-
packages directory, handled by the site module).

More details are at The initialization of the sys.path module search path.

Note

On file systems which support symlinks, the directory containing the input script is calculated
after the symlink is followed. In other words the directory containing the symlink is not added to
the module search path.

After initialization, Python programs can modify sys.path. The directory containing the script
being run is placed at the beginning of the search path, ahead of the standard library path. This
means that scripts in that directory will be loaded instead of modules of the same name in the
library directory. This is an error unless the replacement is intended. See section Standard
Modules for more information.
6.1.3. “Compiled” Python files
To speed up loading modules, Python caches the compiled version of each module in
the __pycache__ directory under the name module.version.pyc, where the version
encodes the format of the compiled file; it generally contains the Python version number. For
example, in CPython release 3.3 the compiled version of spam.py would be cached
as __pycache__/spam.cpython-33.pyc. This naming convention allows compiled
modules from different releases and different versions of Python to coexist.

Python checks the modification date of the source against the compiled version to see if it’s out
of date and needs to be recompiled. This is a completely automatic process. Also, the compiled
modules are platform-independent, so the same library can be shared among systems with
different architectures.

Python does not check the cache in two circumstances. First, it always recompiles and does not
store the result for the module that’s loaded directly from the command line. Second, it does not
check the cache if there is no source module. To support a non-source (compiled only)
distribution, the compiled module must be in the source directory, and there must not be a source
module.

Some tips for experts:

 You can use the -O or -OO switches on the Python command to reduce the size of a
compiled module. The -O switch removes assert statements, the -OO switch removes
both assert statements and __doc__ strings. Since some programs may rely on having
these available, you should only use this option if you know what you’re doing.
“Optimized” modules have an opt- tag and are usually smaller. Future releases may
change the effects of optimization.
 A program doesn’t run any faster when it is read from a .pyc file than when it is read
from a .py file; the only thing that’s faster about .pyc files is the speed with which they
are loaded.
 The module compileall can create .pyc files for all modules in a directory.
 There is more detail on this process, including a flow chart of the decisions, in PEP 3147.
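
For instance, the compileall module mentioned above can be run from the command line; a typical
invocation (sketched here, run from the directory whose modules you want compiled) is:

$ python -m compileall .

The resulting .pyc files are written to __pycache__ directories next to the sources, as described
above.
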
6.2. Standard Modules
Python comes with a library of standard modules, described in a separate document, the Python
Library Reference (“Library Reference” hereafter). Some modules are built into the interpreter;
these provide access to operations that are not part of the core of the language but are
nevertheless built in, either for efficiency or to provide access to operating system primitives
such as system calls. The set of such modules is a configuration option which also depends on
the underlying platform. For example, the winreg module is only provided on Windows
systems. One particular module deserves some attention: sys, which is built into every Python
interpreter. The variables sys.ps1 and sys.ps2 define the strings used as primary and
secondary prompts:
>>>

>>> import sys


>>> sys.ps1
'>>> '
>>> sys.ps2
'... '
>>> sys.ps1 = 'C> '
C> print('Yuck!')
Yuck!
C>

These two variables are only defined if the interpreter is in interactive mode.

The variable sys.path is a list of strings that determines the interpreter’s search path for
modules. It is initialized to a default path taken from the environment variable PYTHONPATH, or
from a built-in default if PYTHONPATH is not set. You can modify it using standard list
operations:

>>>

>>> import sys


>>> sys.path.append('/ufs/guido/lib/python')

6.3. The dir() Function


The built-in function dir() is used to find out which names a module defines. It returns a sorted
list of strings:

>>>

>>> import fibo, sys


>>> dir(fibo)
['__name__', 'fib', 'fib2']
>>> dir(sys)
['__breakpointhook__', '__displayhook__', '__doc__',
'__excepthook__',
'__interactivehook__', '__loader__', '__name__', '__package__',
'__spec__',
'__stderr__', '__stdin__', '__stdout__', '__unraisablehook__',
'_clear_type_cache', '_current_frames', '_debugmallocstats',
'_framework',
'_getframe', '_git', '_home', '_xoptions', 'abiflags',
'addaudithook',
'api_version', 'argv', 'audit', 'base_exec_prefix', 'base_prefix',
'breakpointhook', 'builtin_module_names', 'byteorder',
'call_tracing',
'callstats', 'copyright', 'displayhook', 'dont_write_bytecode',
'exc_info',
'excepthook', 'exec_prefix', 'executable', 'exit', 'flags',
'float_info',
'float_repr_style', 'get_asyncgen_hooks',
'get_coroutine_origin_tracking_depth',
'getallocatedblocks', 'getdefaultencoding', 'getdlopenflags',
'getfilesystemencodeerrors', 'getfilesystemencoding',
'getprofile',
'getrecursionlimit', 'getrefcount', 'getsizeof',
'getswitchinterval',
'gettrace', 'hash_info', 'hexversion', 'implementation',
'int_info',
'intern', 'is_finalizing', 'last_traceback', 'last_type',
'last_value',
'maxsize', 'maxunicode', 'meta_path', 'modules', 'path',
'path_hooks',
'path_importer_cache', 'platform', 'prefix', 'ps1', 'ps2',
'pycache_prefix',
'set_asyncgen_hooks', 'set_coroutine_origin_tracking_depth',
'setdlopenflags',
'setprofile', 'setrecursionlimit', 'setswitchinterval',
'settrace', 'stderr',
'stdin', 'stdout', 'thread_info', 'unraisablehook', 'version',
'version_info',
'warnoptions']

Without arguments, dir() lists the names you have defined currently:

>>>

>>> a = [1, 2, 3, 4, 5]
>>> import fibo
>>> fib = fibo.fib
>>> dir()
['__builtins__', '__name__', 'a', 'fib', 'fibo', 'sys']

Note that it lists all types of names: variables, modules, functions, etc.
dir() does not list the names of built-in functions and variables. If you want a list of those, they
are defined in the standard module builtins:

>>>

>>> import builtins


>>> dir(builtins)
['ArithmeticError', 'AssertionError', 'AttributeError',
'BaseException',
'BlockingIOError', 'BrokenPipeError', 'BufferError',
'BytesWarning',
'ChildProcessError', 'ConnectionAbortedError', 'ConnectionError',
'ConnectionRefusedError', 'ConnectionResetError',
'DeprecationWarning',
'EOFError', 'Ellipsis', 'EnvironmentError', 'Exception', 'False',
'FileExistsError', 'FileNotFoundError', 'FloatingPointError',
'FutureWarning', 'GeneratorExit', 'IOError', 'ImportError',
'ImportWarning', 'IndentationError', 'IndexError',
'InterruptedError',
'IsADirectoryError', 'KeyError', 'KeyboardInterrupt',
'LookupError',
'MemoryError', 'NameError', 'None', 'NotADirectoryError',
'NotImplemented',
'NotImplementedError', 'OSError', 'OverflowError',
'PendingDeprecationWarning', 'PermissionError',
'ProcessLookupError',
'ReferenceError', 'ResourceWarning', 'RuntimeError',
'RuntimeWarning',
'StopIteration', 'SyntaxError', 'SyntaxWarning', 'SystemError',
'SystemExit', 'TabError', 'TimeoutError', 'True', 'TypeError',
'UnboundLocalError', 'UnicodeDecodeError', 'UnicodeEncodeError',
'UnicodeError', 'UnicodeTranslateError', 'UnicodeWarning',
'UserWarning',
'ValueError', 'Warning', 'ZeroDivisionError', '_',
'__build_class__',
'__debug__', '__doc__', '__import__', '__name__', '__package__',
'abs',
'all', 'any', 'ascii', 'bin', 'bool', 'bytearray', 'bytes',
'callable',
'chr', 'classmethod', 'compile', 'complex', 'copyright',
'credits',
'delattr', 'dict', 'dir', 'divmod', 'enumerate', 'eval', 'exec',
'exit',
'filter', 'float', 'format', 'frozenset', 'getattr', 'globals',
'hasattr',
'hash', 'help', 'hex', 'id', 'input', 'int', 'isinstance',
'issubclass',
'iter', 'len', 'license', 'list', 'locals', 'map', 'max',
'memoryview',
'min', 'next', 'object', 'oct', 'open', 'ord', 'pow', 'print',
'property',
'quit', 'range', 'repr', 'reversed', 'round', 'set', 'setattr',
'slice',
'sorted', 'staticmethod', 'str', 'sum', 'super', 'tuple', 'type',
'vars',
'zip']

6.4. Packages
Packages are a way of structuring Python’s module namespace by using “dotted module names”.
For example, the module name A.B designates a submodule named B in a package named A. Just
like the use of modules saves the authors of different modules from having to worry about each
other’s global variable names, the use of dotted module names saves the authors of multi-module
packages like NumPy or Pillow from having to worry about each other’s module names.

Suppose you want to design a collection of modules (a “package”) for the uniform handling of
sound files and sound data. There are many different sound file formats (usually recognized by
their extension, for example: .wav, .aiff, .au), so you may need to create and maintain a
growing collection of modules for the conversion between the various file formats. There are
also many different operations you might want to perform on sound data (such as mixing, adding
echo, applying an equalizer function, creating an artificial stereo effect), so in addition you will
be writing a never-ending stream of modules to perform these operations. Here’s a possible
structure for your package (expressed in terms of a hierarchical filesystem):

sound/                          Top-level package
      __init__.py               Initialize the sound package
      formats/                  Subpackage for file format conversions
              __init__.py
              wavread.py
              wavwrite.py
              aiffread.py
              aiffwrite.py
              auread.py
              auwrite.py
              ...
      effects/                  Subpackage for sound effects
              __init__.py
              echo.py
              surround.py
              reverse.py
              ...
      filters/                  Subpackage for filters
              __init__.py
              equalizer.py
              vocoder.py
              karaoke.py
              ...

When importing the package, Python searches through the directories on sys.path looking for
the package subdirectory.

The __init__.py files are required to make Python treat directories containing the file as
packages (unless using a namespace package, a relatively advanced feature). This prevents
directories with a common name, such as string, from unintentionally hiding valid modules
that occur later on the module search path. In the simplest case, __init__.py can just be an
empty file, but it can also execute initialization code for the package or set
the __all__ variable, described later.

Users of the package can import individual modules from the package, for example:

import sound.effects.echo

This loads the submodule sound.effects.echo. It must be referenced with its full name.

sound.effects.echo.echofilter(input, output, delay=0.7, atten=4)

An alternative way of importing the submodule is:

from sound.effects import echo

This also loads the submodule echo, and makes it available without its package prefix, so it can
be used as follows:

echo.echofilter(input, output, delay=0.7, atten=4)

Yet another variation is to import the desired function or variable directly:


from sound.effects.echo import echofilter

Again, this loads the submodule echo, but this makes its function echofilter() directly
available:

echofilter(input, output, delay=0.7, atten=4)

Note that when using from package import item, the item can be either a submodule (or
subpackage) of the package, or some other name defined in the package, like a function, class or
variable. The import statement first tests whether the item is defined in the package; if not, it
assumes it is a module and attempts to load it. If it fails to find it, an ImportError exception is
raised.

Contrarily, when using syntax like import item.subitem.subsubitem, each item except
for the last must be a package; the last item can be a module or a package but can’t be a class or
function or variable defined in the previous item.

6.4.1. Importing * From a Package


Now what happens when the user writes from sound.effects import *? Ideally, one
would hope that this somehow goes out to the filesystem, finds which submodules are present in
the package, and imports them all. This could take a long time and importing sub-modules might
have unwanted side-effects that should only happen when the sub-module is explicitly imported.

The only solution is for the package author to provide an explicit index of the package.
The import statement uses the following convention: if a package’s __init__.py code
defines a list named __all__, it is taken to be the list of module names that should be imported
when from package import * is encountered. It is up to the package author to keep this
list up-to-date when a new version of the package is released. Package authors may also decide
not to support it, if they don’t see a use for importing * from their package. For example, the
file sound/effects/__init__.py could contain the following code:

__all__ = ["echo", "surround", "reverse"]

This would mean that from sound.effects import * would import the three named
submodules of the sound.effects package.

Be aware that submodules might become shadowed by locally defined names. For example, if
you added a reverse function to the sound/effects/__init__.py file,
the from sound.effects import * would only import the two
submodules echo and surround, but not the reverse submodule, because it is shadowed by
the locally defined reverse function:
__all__ = [
    "echo",      # refers to the 'echo.py' file
    "surround",  # refers to the 'surround.py' file
    "reverse",   # !!! refers to the 'reverse' function now !!!
]

def reverse(msg: str):  # <-- this name shadows the 'reverse.py' submodule
    return msg[::-1]    # in the case of a 'from sound.effects import *'

If __all__ is not defined, the statement from sound.effects import * does not import
all submodules from the package sound.effects into the current namespace; it only ensures
that the package sound.effects has been imported (possibly running any initialization code
in __init__.py) and then imports whatever names are defined in the package. This includes
any names defined (and submodules explicitly loaded) by __init__.py. It also includes any
submodules of the package that were explicitly loaded by previous import statements. Consider
this code:

import sound.effects.echo
import sound.effects.surround
from sound.effects import *

In this example, the echo and surround modules are imported in the current namespace
because they are defined in the sound.effects package when
the from...import statement is executed. (This also works when __all__ is defined.)

Although certain modules are designed to export only names that follow certain patterns when
you use import *, it is still considered bad practice in production code.

Remember, there is nothing wrong with


using from package import specific_submodule! In fact, this is the recommended
notation unless the importing module needs to use submodules with the same name from
different packages.

6.4.2. Intra-package References


When packages are structured into subpackages (as with the sound package in the example),
you can use absolute imports to refer to submodules of sibling packages. For example, if the
module sound.filters.vocoder needs to use the echo module in
the sound.effects package, it can use from sound.effects import echo.
You can also write relative imports, with the from module import name form of import
statement. These imports use leading dots to indicate the current and parent packages involved in
the relative import. From the surround module for example, you might use:

from . import echo


from .. import formats
from ..filters import equalizer

Note that relative imports are based on the name of the current module. Since the name of the
main module is always "__main__", modules intended for use as the main module of a Python
application must always use absolute imports.

6.4.3. Packages in Multiple Directories


Packages support one more special attribute, __path__. This is initialized to be a list containing
the name of the directory holding the package’s __init__.py before the code in that file is
executed. This variable can be modified; doing so affects future searches for modules and
subpackages contained in the package.

While this feature is not often needed, it can be used to extend the set of modules found in a
package.

Footnotes

[1]

In fact function definitions are also ‘statements’ that are ‘executed’; the execution of a
module-level function definition adds the function name to the module’s global namespace.

7. Input and Output


There are several ways to present the output of a program; data can be
printed in a human-readable form, or written to a file for future use. This
chapter will discuss some of the possibilities.

7.1. Fancier Output Formatting


So far we’ve encountered two ways of writing values: expression statements and
the print() function. (A third way is using the write() method of file objects; the standard
output file can be referenced as sys.stdout. See the Library Reference for more information
on this.)

Often you’ll want more control over the formatting of your output than simply printing space-
separated values. There are several ways to format output.

 To use formatted string literals, begin a string with f or F before the opening quotation
mark or triple quotation mark. Inside this string, you can write a Python expression
between { and } characters that can refer to variables or literal values.

>>>

>>> year = 2016


>>> event = 'Referendum'
>>> f'Results of the {year} {event}'
'Results of the 2016 Referendum'

 The str.format() method of strings requires more manual effort. You’ll still
use { and } to mark where a variable will be substituted and can provide detailed
formatting directives, but you’ll also need to provide the information to be formatted. In
the following code block there are two examples of how to format variables:

>>>

>>> yes_votes = 42_572_654


>>> total_votes = 85_705_149
>>> percentage = yes_votes / total_votes
>>> '{:-9} YES votes {:2.2%}'.format(yes_votes, percentage)
' 42572654 YES votes 49.67%'

Notice how the yes_votes are padded with spaces and a negative sign only for
negative numbers. The example also prints percentage multiplied by 100, with 2
decimal places and followed by a percent sign (see Format Specification Mini-
Language for details).

 Finally, you can do all the string handling yourself by using string slicing and
concatenation operations to create any layout you can imagine. The string type has some
methods that perform useful operations for padding strings to a given column width.

When you don’t need fancy output but just want a quick display of some variables for debugging
purposes, you can convert any value to a string with the repr() or str() functions.

The str() function is meant to return representations of values which are fairly human-
readable, while repr() is meant to generate representations which can be read by the
interpreter (or will force a SyntaxError if there is no equivalent syntax). For objects which
don’t have a particular representation for human consumption, str() will return the same value
as repr(). Many values, such as numbers or structures like lists and dictionaries, have the same
representation using either function. Strings, in particular, have two distinct representations.

Some examples:

>>>

>>> s = 'Hello, world.'


>>> str(s)
'Hello, world.'
>>> repr(s)
"'Hello, world.'"
>>> str(1/7)
'0.14285714285714285'
>>> x = 10 * 3.25
>>> y = 200 * 200
>>> s = 'The value of x is ' + repr(x) + ', and y is ' + repr(y) + '...'
>>> print(s)
The value of x is 32.5, and y is 40000...
>>> # The repr() of a string adds string quotes and backslashes:
>>> hello = 'hello, world\n'
>>> hellos = repr(hello)
>>> print(hellos)
'hello, world\n'
>>> # The argument to repr() may be any Python object:
>>> repr((x, y, ('spam', 'eggs')))
"(32.5, 40000, ('spam', 'eggs'))"

The string module contains a Template class that offers yet another way to substitute values
into strings, using placeholders like $x and replacing them with values from a dictionary, but
offers much less control of the formatting.
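
A brief illustration of that class (an invented example, not taken from the original text):

>>>

>>> from string import Template
>>> t = Template('$village villagers sent $$10 to the cause.')
>>> t.substitute(village='Nottingham')
'Nottingham villagers sent $10 to the cause.'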

7.1.1. Formatted String Literals


Formatted string literals (also called f-strings for short) let you include the value of Python
expressions inside a string by prefixing the string with f or F and writing expressions
as {expression}.

An optional format specifier can follow the expression. This allows greater control over how the
value is formatted. The following example rounds pi to three places after the decimal:
>>>

>>> import math


>>> print(f'The value of pi is approximately {math.pi:.3f}.')
The value of pi is approximately 3.142.

Passing an integer after the ':' will cause that field to be a minimum number of characters
wide. This is useful for making columns line up.

>>>

>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 7678}
>>> for name, phone in table.items():
...     print(f'{name:10} ==> {phone:10d}')
...
Sjoerd     ==>       4127
Jack       ==>       4098
Dcab       ==>       7678

Other modifiers can be used to convert the value before it is formatted. '!a' applies ascii(),
'!s' applies str(), and '!r' applies repr():

>>>

>>> animals = 'eels'


>>> print(f'My hovercraft is full of {animals}.')
My hovercraft is full of eels.
>>> print(f'My hovercraft is full of {animals!r}.')
My hovercraft is full of 'eels'.

The = specifier can be used to expand an expression to the text of the expression, an equal sign,
then the representation of the evaluated expression:

>>>

>>> bugs = 'roaches'


>>> count = 13
>>> area = 'living room'
>>> print(f'Debugging {bugs=} {count=} {area=}')
Debugging bugs='roaches' count=13 area='living room'

See self-documenting expressions for more information on the = specifier. For a reference on
these format specifications, see the reference guide for the Format Specification Mini-Language.
7.1.2. The String format() Method
Basic usage of the str.format() method looks like this:

>>>

>>> print('We are the {} who say "{}!"'.format('knights', 'Ni'))


We are the knights who say "Ni!"

The brackets and characters within them (called format fields) are replaced with the objects
passed into the str.format() method. A number in the brackets can be used to refer to the
position of the object passed into the str.format() method.

>>>

>>> print('{0} and {1}'.format('spam', 'eggs'))


spam and eggs
>>> print('{1} and {0}'.format('spam', 'eggs'))
eggs and spam

If keyword arguments are used in the str.format() method, their values are referred to by
using the name of the argument.

>>>

>>> print('This {food} is {adjective}.'.format(
...       food='spam', adjective='absolutely horrible'))
This spam is absolutely horrible.

Positional and keyword arguments can be arbitrarily combined:

>>>

>>> print('The story of {0}, {1}, and {other}.'.format('Bill', 'Manfred',
...                                                    other='Georg'))
The story of Bill, Manfred, and Georg.

If you have a really long format string that you don’t want to split up, it would be nice if you
could reference the variables to be formatted by name instead of by position. This can be done by
simply passing the dict and using square brackets '[]' to access the keys.
>>>

>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678}


>>> print('Jack: {0[Jack]:d}; Sjoerd: {0[Sjoerd]:d}; '
... 'Dcab: {0[Dcab]:d}'.format(table))
Jack: 4098; Sjoerd: 4127; Dcab: 8637678

This could also be done by passing the table dictionary as keyword arguments with
the ** notation.

>>>

>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678}


>>> print('Jack: {Jack:d}; Sjoerd: {Sjoerd:d}; Dcab: {Dcab:d}'.format(**table))
Jack: 4098; Sjoerd: 4127; Dcab: 8637678

This is particularly useful in combination with the built-in function vars(), which returns a
dictionary containing all local variables:

>>>

>>> table = {k: str(v) for k, v in vars().items()}


>>> message = " ".join([f'{k}: ' + '{' + k +'};' for k in
table.keys()])
>>> print(message.format(**table))
__name__: __main__; __doc__: None; __package__: None;
__loader__: ...

As an example, the following lines produce a tidily aligned set of columns giving integers and
their squares and cubes:

>>>

>>> for x in range(1, 11):
...     print('{0:2d} {1:3d} {2:4d}'.format(x, x*x, x*x*x))
...
 1   1    1
 2   4    8
 3   9   27
 4  16   64
 5  25  125
 6  36  216
 7  49  343
 8  64  512
 9  81  729
10 100 1000

For a complete overview of string formatting with str.format(), see Format String Syntax.

7.1.3. Manual String Formatting


Here’s the same table of squares and cubes, formatted manually:

>>>

>>> for x in range(1, 11):
...     print(repr(x).rjust(2), repr(x*x).rjust(3), end=' ')
...     # Note use of 'end' on previous line
...     print(repr(x*x*x).rjust(4))
...
 1   1    1
 2   4    8
 3   9   27
 4  16   64
 5  25  125
 6  36  216
 7  49  343
 8  64  512
 9  81  729
10 100 1000

(Note that the one space between each column was added by the way print() works: it always
adds spaces between its arguments.)

The str.rjust() method of string objects right-justifies a string in a field of a given width by
padding it with spaces on the left. There are similar
methods str.ljust() and str.center(). These methods do not write anything, they just
return a new string. If the input string is too long, they don’t truncate it, but return it unchanged;
this will mess up your column lay-out but that’s usually better than the alternative, which would
be lying about a value. (If you really want truncation you can always add a slice operation, as
in x.ljust(n)[:n].)
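
A short interactive sketch of these methods, including the truncation idiom mentioned above (the strings are arbitrary examples):

>>> 'py'.rjust(6)
'    py'
>>> 'py'.center(6)
'  py  '
>>> 'pythonista'.ljust(6)        # too long: returned unchanged
'pythonista'
>>> 'pythonista'.ljust(6)[:6]    # explicit truncation via a slice
'python'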

There is another method, str.zfill(), which pads a numeric string on the left with zeros. It
understands about plus and minus signs:
>>>

>>> '12'.zfill(5)
'00012'
>>> '-3.14'.zfill(7)
'-003.14'
>>> '3.14159265359'.zfill(5)
'3.14159265359'

7.1.4. Old string formatting


The % operator (modulo) can also be used for string formatting.
Given format % values (where format is a string), % conversion specifications in format are
replaced with zero or more elements of values. This operation is commonly known as string
interpolation. For example:

>>>

>>> import math


>>> print('The value of pi is approximately %5.3f.' % math.pi)
The value of pi is approximately 3.142.

More information can be found in the printf-style String Formatting section.

7.2. Reading and Writing Files


open() returns a file object, and is most commonly used with two positional arguments and one
keyword argument: open(filename, mode, encoding=None)

>>>

>>> f = open('workfile', 'w', encoding="utf-8")

The first argument is a string containing the filename. The second argument is another string
containing a few characters describing the way in which the file will be used. mode can
be 'r' when the file will only be read, 'w' for only writing (an existing file with the same name
will be erased), and 'a' opens the file for appending; any data written to the file is automatically
added to the end. 'r+' opens the file for both reading and writing. The mode argument is
optional; 'r' will be assumed if it’s omitted.

Normally, files are opened in text mode, which means you read and write strings from and to the
file, encoded in a specific encoding. If encoding is not specified, the default is
platform dependent (see open()). Because UTF-8 is the modern de-facto
standard, encoding="utf-8" is recommended unless you know that you need to use a
different encoding. Appending a 'b' to the mode opens the file in binary mode. Binary mode
data is read and written as bytes objects. You cannot specify encoding when opening a file in
binary mode.

In text mode, the default when reading is to convert platform-specific line endings (\n on
Unix, \r\n on Windows) to just \n. When writing in text mode, the default is to convert
occurrences of \n back to platform-specific line endings. This behind-the-scenes modification to
file data is fine for text files, but will corrupt binary data like that in JPEG or EXE files. Be very
careful to use binary mode when reading and writing such files.

It is good practice to use the with keyword when dealing with file objects. The advantage is that
the file is properly closed after its suite finishes, even if an exception is raised at some point.
Using with is also much shorter than writing equivalent try-finally blocks:

>>>

>>> with open('workfile', encoding="utf-8") as f:


... read_data = f.read()

>>> # We can check that the file has been automatically closed.
>>> f.closed
True

If you’re not using the with keyword, then you should call f.close() to close the file and
immediately free up any system resources used by it.

Warning

Calling f.write() without using the with keyword or calling f.close() might result in
the arguments of f.write() not being completely written to the disk, even if the program exits
successfully.

After a file object is closed, either by a with statement or by calling f.close(), attempts to
use the file object will automatically fail.

>>>

>>> f.close()
>>> f.read()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: I/O operation on closed file.
7.2.1. Methods of File Objects
The rest of the examples in this section will assume that a file object called f has already been
created.

To read a file’s contents, call f.read(size), which reads some quantity of data and returns it
as a string (in text mode) or bytes object (in binary mode). size is an optional numeric argument.
When size is omitted or negative, the entire contents of the file will be read and returned; it’s
your problem if the file is twice as large as your machine’s memory. Otherwise, at
most size characters (in text mode) or size bytes (in binary mode) are read and returned. If the
end of the file has been reached, f.read() will return an empty string ('').

>>>

>>> f.read()
'This is the entire file.\n'
>>> f.read()
''

f.readline() reads a single line from the file; a newline character (\n) is left at the end of
the string, and is only omitted on the last line of the file if the file doesn’t end in a newline. This
makes the return value unambiguous; if f.readline() returns an empty string, the end of the
file has been reached, while a blank line is represented by '\n', a string containing only a single
newline.

>>>

>>> f.readline()
'This is the first line of the file.\n'
>>> f.readline()
'Second line of the file\n'
>>> f.readline()
''

For reading lines from a file, you can loop over the file object. This is memory efficient, fast, and
leads to simple code:

>>>

>>> for line in f:
...     print(line, end='')
...
This is the first line of the file.
Second line of the file

If you want to read all the lines of a file in a list you can also
use list(f) or f.readlines().
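
For example, assuming the same small file as above and that the position has been rewound with f.seek(0), both forms return the lines as a list:

>>> f.seek(0)
0
>>> f.readlines()
['This is the first line of the file.\n', 'Second line of the file\n']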

f.write(string) writes the contents of string to the file, returning the number of characters
written.

>>>

>>> f.write('This is a test\n')


15

Other types of objects need to be converted – either to a string (in text mode) or a bytes object (in
binary mode) – before writing them:

>>>

>>> value = ('the answer', 42)


>>> s = str(value) # convert the tuple to string
>>> f.write(s)
18

f.tell() returns an integer giving the file object’s current position in the file represented as
number of bytes from the beginning of the file when in binary mode and an opaque number
when in text mode.

To change the file object’s position, use f.seek(offset, whence). The position is
computed from adding offset to a reference point; the reference point is selected by
the whence argument. A whence value of 0 measures from the beginning of the file, 1 uses the
current file position, and 2 uses the end of the file as the reference point. whence can be omitted
and defaults to 0, using the beginning of the file as the reference point.

>>>

>>> f = open('workfile', 'rb+')


>>> f.write(b'0123456789abcdef')
16
>>> f.seek(5) # Go to the 6th byte in the file
5
>>> f.read(1)
b'5'
>>> f.seek(-3, 2) # Go to the 3rd byte before the end
13
>>> f.read(1)
b'd'

In text files (those opened without a b in the mode string), only seeks relative to the beginning of
the file are allowed (the exception being seeking to the very file end with seek(0, 2)) and the
only valid offset values are those returned from the f.tell(), or zero. Any other offset value
produces undefined behaviour.

File objects have some additional methods, such as isatty() and truncate() which are less
frequently used; consult the Library Reference for a complete guide to file objects.

7.2.2. Saving structured data with json


Strings can easily be written to and read from a file. Numbers take a bit more effort, since
the read() method only returns strings, which will have to be passed to a function like int(),
which takes a string like '123' and returns its numeric value 123. When you want to save more
complex data types like nested lists and dictionaries, parsing and serializing by hand becomes
complicated.

Rather than having users constantly writing and debugging code to save complicated data types
to files, Python allows you to use the popular data interchange format called JSON (JavaScript
Object Notation). The standard module called json can take Python data hierarchies, and
convert them to string representations; this process is called serializing. Reconstructing the data
from the string representation is called deserializing. Between serializing and deserializing, the
string representing the object may have been stored in a file or data, or sent over a network
connection to some distant machine.

Note

The JSON format is commonly used by modern applications to allow for data exchange. Many
programmers are already familiar with it, which makes it a good choice for interoperability.

If you have an object x, you can view its JSON string representation with a simple line of code:

>>>

>>> import json


>>> x = [1, 'simple', 'list']
>>> json.dumps(x)
'[1, "simple", "list"]'
Another variant of the dumps() function, called dump(), simply serializes the object to a text
file. So if f is a text file object opened for writing, we can do this:

json.dump(x, f)

To decode the object again, if f is a binary file or text file object which has been opened for
reading:

x = json.load(f)
Note

JSON files must be encoded in UTF-8. Use encoding="utf-8" when opening a JSON file as
a text file, for both reading and writing.

This simple serialization technique can handle lists and dictionaries, but serializing arbitrary
class instances in JSON requires a bit of extra effort. The reference for the json module
contains an explanation of this.
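
Putting the pieces together, a minimal sketch of a complete round trip through a file might look like this (the filename is just an example):

import json

x = [1, 'simple', 'list']

with open('data.json', 'w', encoding='utf-8') as f:
    json.dump(x, f)              # serialize x to the file

with open('data.json', 'r', encoding='utf-8') as f:
    y = json.load(f)             # deserialize it again

print(y == x)                    # True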

See also

pickle - the pickle module

Contrary to JSON, pickle is a protocol which allows the serialization of arbitrarily complex
Python objects. As such, it is specific to Python and cannot be used to communicate with
applications written in other languages. It is also insecure by default: deserializing pickle data
coming from an untrusted source can execute arbitrary code, if the data was crafted by a skilled
attacker.

8. Errors and Exceptions


Until now error messages haven’t been more than mentioned, but if you
have tried out the examples you have probably seen some. There are (at
least) two distinguishable kinds of errors: syntax errors and exceptions.

8.1. Syntax Errors


Syntax errors, also known as parsing errors, are perhaps the most common kind of complaint you
get while you are still learning Python:

>>>
>>> while True print('Hello world')
File "<stdin>", line 1
while True print('Hello world')
^^^^^
SyntaxError: invalid syntax

The parser repeats the offending line and displays little arrows pointing at the token in the line
where the error was detected. The error may be caused by the absence of a token before the
indicated token. In the example, the error is detected at the function print(), since a colon
(':') is missing before it. File name and line number are printed so you know where to look in
case the input came from a script.

8.2. Exceptions
Even if a statement or expression is syntactically correct, it may cause an error when an attempt
is made to execute it. Errors detected during execution are called exceptions and are not
unconditionally fatal: you will soon learn how to handle them in Python programs. Most
exceptions are not handled by programs, however, and result in error messages as shown here:

>>>

>>> 10 * (1/0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ZeroDivisionError: division by zero
>>> 4 + spam*3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'spam' is not defined
>>> '2' + 2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can only concatenate str (not "int") to str

The last line of the error message indicates what happened. Exceptions come in different types,
and the type is printed as part of the message: the types in the example
are ZeroDivisionError, NameError and TypeError. The string printed as the exception
type is the name of the built-in exception that occurred. This is true for all built-in exceptions,
but need not be true for user-defined exceptions (although it is a useful convention). Standard
exception names are built-in identifiers (not reserved keywords).

The rest of the line provides detail based on the type of exception and what caused it.
The preceding part of the error message shows the context where the exception occurred, in the
form of a stack traceback. In general it contains a stack traceback listing source lines; however, it
will not display lines read from standard input.

Built-in Exceptions lists the built-in exceptions and their meanings.

8.3. Handling Exceptions


It is possible to write programs that handle selected exceptions. Look at the following example,
which asks the user for input until a valid integer has been entered, but allows the user to
interrupt the program (using Control-C or whatever the operating system supports); note that a
user-generated interruption is signalled by raising the KeyboardInterrupt exception.

>>>

>>> while True:
...     try:
...         x = int(input("Please enter a number: "))
...         break
...     except ValueError:
...         print("Oops! That was no valid number. Try again...")
...

The try statement works as follows.

 First, the try clause (the statement(s) between the try and except keywords) is
executed.
 If no exception occurs, the except clause is skipped and execution of the try statement is
finished.
 If an exception occurs during execution of the try clause, the rest of the clause is
skipped. Then, if its type matches the exception named after the except keyword,
the except clause is executed, and then execution continues after the try/except block.
 If an exception occurs which does not match the exception named in the except clause, it
is passed on to outer try statements; if no handler is found, it is an unhandled
exception and execution stops with an error message.

A try statement may have more than one except clause, to specify handlers for different
exceptions. At most one handler will be executed. Handlers only handle exceptions that occur in
the corresponding try clause, not in other handlers of the same try statement. An except
clause may name multiple exceptions as a parenthesized tuple, for example:

... except (RuntimeError, TypeError, NameError):
...     pass
A class in an except clause matches exceptions which are instances of the class itself or one of
its derived classes (but not the other way around — an except clause listing a derived class does
not match instances of its base classes). For example, the following code will print B, C, D in
that order:

class B(Exception):
    pass

class C(B):
    pass

class D(C):
    pass

for cls in [B, C, D]:
    try:
        raise cls()
    except D:
        print("D")
    except C:
        print("C")
    except B:
        print("B")

Note that if the except clauses were reversed (with except B first), it would have printed B, B,
B — the first matching except clause is triggered.

When an exception occurs, it may have associated values, also known as the
exception’s arguments. The presence and types of the arguments depend on the exception type.

The except clause may specify a variable after the exception name. The variable is bound to the
exception instance which typically has an args attribute that stores the arguments. For
convenience, builtin exception types define __str__() to print all the arguments without
explicitly accessing .args.

>>>

>>> try:
...     raise Exception('spam', 'eggs')
... except Exception as inst:
...     print(type(inst))    # the exception type
...     print(inst.args)     # arguments stored in .args
...     print(inst)          # __str__ allows args to be printed directly,
...                          # but may be overridden in exception subclasses
...     x, y = inst.args     # unpack args
...     print('x =', x)
...     print('y =', y)
...
<class 'Exception'>
('spam', 'eggs')
('spam', 'eggs')
x = spam
y = eggs

The exception’s __str__() output is printed as the last part (‘detail’) of the message for
unhandled exceptions.

BaseException is the common base class of all exceptions. One of its


subclasses, Exception, is the base class of all the non-fatal exceptions. Exceptions which are
not subclasses of Exception are not typically handled, because they are used to indicate that
the program should terminate. They include SystemExit which is raised
by sys.exit() and KeyboardInterrupt which is raised when a user wishes to interrupt
the program.

Exception can be used as a wildcard that catches (almost) everything. However, it is good
practice to be as specific as possible with the types of exceptions that we intend to handle, and to
allow any unexpected exceptions to propagate on.

The most common pattern for handling Exception is to print or log the exception and then re-
raise it (allowing a caller to handle the exception as well):

import sys

try:
    f = open('myfile.txt')
    s = f.readline()
    i = int(s.strip())
except OSError as err:
    print("OS error:", err)
except ValueError:
    print("Could not convert data to an integer.")
except Exception as err:
    print(f"Unexpected {err=}, {type(err)=}")
    raise
The try … except statement has an optional else clause, which, when present, must follow
all except clauses. It is useful for code that must be executed if the try clause does not raise an
exception. For example:

for arg in sys.argv[1:]:
    try:
        f = open(arg, 'r')
    except OSError:
        print('cannot open', arg)
    else:
        print(arg, 'has', len(f.readlines()), 'lines')
        f.close()

The use of the else clause is better than adding additional code to the try clause because it
avoids accidentally catching an exception that wasn’t raised by the code being protected by
the try … except statement.

Exception handlers do not handle only exceptions that occur immediately in the try clause, but
also those that occur inside functions that are called (even indirectly) in the try clause. For
example:

>>>

>>> def this_fails():
...     x = 1/0
...
>>> try:
...     this_fails()
... except ZeroDivisionError as err:
...     print('Handling run-time error:', err)
...
Handling run-time error: division by zero

8.4. Raising Exceptions


The raise statement allows the programmer to force a specified exception to occur. For
example:

>>>

>>> raise NameError('HiThere')


Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: HiThere
The sole argument to raise indicates the exception to be raised. This must be either an
exception instance or an exception class (a class that derives from BaseException, such
as Exception or one of its subclasses). If an exception class is passed, it will be implicitly
instantiated by calling its constructor with no arguments:

raise ValueError # shorthand for 'raise ValueError()'

If you need to determine whether an exception was raised but don’t intend to handle it, a simpler
form of the raise statement allows you to re-raise the exception:

>>>

>>> try:
...     raise NameError('HiThere')
... except NameError:
...     print('An exception flew by!')
...     raise
...
An exception flew by!
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
NameError: HiThere

8.5. Exception Chaining


If an unhandled exception occurs inside an except section, it will have the exception being
handled attached to it and included in the error message:

>>>

>>> try:
...     open("database.sqlite")
... except OSError:
...     raise RuntimeError("unable to handle error")
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
FileNotFoundError: [Errno 2] No such file or directory: 'database.sqlite'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "<stdin>", line 4, in <module>
RuntimeError: unable to handle error

To indicate that an exception is a direct consequence of another, the raise statement allows an
optional from clause:

# exc must be an exception instance or None.
raise RuntimeError from exc

This can be useful when you are transforming exceptions. For example:

>>>

>>> def func():
...     raise ConnectionError
...
>>> try:
...     func()
... except ConnectionError as exc:
...     raise RuntimeError('Failed to open database') from exc
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "<stdin>", line 2, in func
ConnectionError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "<stdin>", line 4, in <module>
RuntimeError: Failed to open database

It also allows disabling automatic exception chaining using the from None idiom:

>>>

>>> try:
...     open('database.sqlite')
... except OSError:
...     raise RuntimeError from None
...
Traceback (most recent call last):
File "<stdin>", line 4, in <module>
RuntimeError

For more information about chaining mechanics, see Built-in Exceptions.

8.6. User-defined Exceptions


Programs may name their own exceptions by creating a new exception class (see Classes for
more about Python classes). Exceptions should typically be derived from the Exception class,
either directly or indirectly.

Exception classes can be defined which do anything any other class can do, but are usually kept
simple, often only offering a number of attributes that allow information about the error to be
extracted by handlers for the exception.

Most exceptions are defined with names that end in “Error”, similar to the naming of the
standard exceptions.

Many standard modules define their own exceptions to report errors that may occur in functions
they define.
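
A minimal sketch of such an exception class (the class name and the extra attribute are purely illustrative):

class DatabaseError(Exception):
    """Raised when a database operation fails."""

    def __init__(self, message, query=None):
        super().__init__(message)
        self.query = query       # extra context a handler can inspect

try:
    raise DatabaseError('connection lost', query='SELECT 1')
except DatabaseError as err:
    print(err, err.query)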

8.7. Defining Clean-up Actions


The try statement has another optional clause which is intended to define clean-up actions that
must be executed under all circumstances. For example:

>>>

>>> try:
...     raise KeyboardInterrupt
... finally:
...     print('Goodbye, world!')
...
Goodbye, world!
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
KeyboardInterrupt

If a finally clause is present, the finally clause will execute as the last task before
the try statement completes. The finally clause runs whether or not the try statement
produces an exception. The following points discuss more complex cases when an exception
occurs:
 If an exception occurs during execution of the try clause, the exception may be handled
by an except clause. If the exception is not handled by an except clause, the
exception is re-raised after the finally clause has been executed.
 An exception could occur during execution of an except or else clause. Again, the
exception is re-raised after the finally clause has been executed.
 If the finally clause executes a break, continue or return statement, exceptions
are not re-raised.
 If the try statement reaches a break, continue or return statement,
the finally clause will execute just prior to
the break, continue or return statement’s execution.
 If a finally clause includes a return statement, the returned value will be the one
from the finally clause’s return statement, not the value from
the try clause’s return statement.

For example:

>>>

>>> def bool_return():
...     try:
...         return True
...     finally:
...         return False
...
>>> bool_return()
False

A more complicated example:

>>>

>>> def divide(x, y):
...     try:
...         result = x / y
...     except ZeroDivisionError:
...         print("division by zero!")
...     else:
...         print("result is", result)
...     finally:
...         print("executing finally clause")
...
>>> divide(2, 1)
result is 2.0
executing finally clause
>>> divide(2, 0)
division by zero!
executing finally clause
>>> divide("2", "1")
executing finally clause
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in divide
TypeError: unsupported operand type(s) for /: 'str' and 'str'

As you can see, the finally clause is executed in any event. The TypeError raised by
dividing two strings is not handled by the except clause and therefore re-raised after
the finally clause has been executed.

In real world applications, the finally clause is useful for releasing external resources (such as
files or network connections), regardless of whether the use of the resource was successful.

8.8. Predefined Clean-up Actions


Some objects define standard clean-up actions to be undertaken when the object is no longer
needed, regardless of whether or not the operation using the object succeeded or failed. Look at
the following example, which tries to open a file and print its contents to the screen.

for line in open("myfile.txt"):


print(line, end="")

The problem with this code is that it leaves the file open for an indeterminate amount of time
after this part of the code has finished executing. This is not an issue in simple scripts, but can be
a problem for larger applications. The with statement allows objects like files to be used in a
way that ensures they are always cleaned up promptly and correctly.

with open("myfile.txt") as f:
for line in f:
print(line, end="")

After the statement is executed, the file f is always closed, even if a problem was encountered
while processing the lines. Objects which, like files, provide predefined clean-up actions will
indicate this in their documentation.
8.9. Raising and Handling Multiple Unrelated
Exceptions
There are situations where it is necessary to report several exceptions that have occurred. This is
often the case in concurrency frameworks, when several tasks may have failed in parallel, but
there are also other use cases where it is desirable to continue execution and collect multiple
errors rather than raise the first exception.

The builtin ExceptionGroup wraps a list of exception instances so that they can be raised
together. It is an exception itself, so it can be caught like any other exception.

>>>

>>> def f():
...     excs = [OSError('error 1'), SystemError('error 2')]
...     raise ExceptionGroup('there were problems', excs)
...
>>> f()
+ Exception Group Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| File "<stdin>", line 3, in f
| ExceptionGroup: there were problems
+-+---------------- 1 ----------------
| OSError: error 1
+---------------- 2 ----------------
| SystemError: error 2
+------------------------------------
>>> try:
...     f()
... except Exception as e:
...     print(f'caught {type(e)}: e')
...
caught <class 'ExceptionGroup'>: e
>>>

By using except* instead of except, we can selectively handle only the exceptions in the
group that match a certain type. In the following example, which shows a nested exception
group, each except* clause extracts from the group exceptions of a certain type while letting
all other exceptions propagate to other clauses and eventually to be reraised.

>>>
>>> def f():
...     raise ExceptionGroup(
...         "group1",
...         [
...             OSError(1),
...             SystemError(2),
...             ExceptionGroup(
...                 "group2",
...                 [
...                     OSError(3),
...                     RecursionError(4)
...                 ]
...             )
...         ]
...     )
...
>>> try:
...     f()
... except* OSError as e:
...     print("There were OSErrors")
... except* SystemError as e:
...     print("There were SystemErrors")
...
There were OSErrors
There were SystemErrors
+ Exception Group Traceback (most recent call last):
| File "<stdin>", line 2, in <module>
| File "<stdin>", line 2, in f
| ExceptionGroup: group1
+-+---------------- 1 ----------------
| ExceptionGroup: group2
+-+---------------- 1 ----------------
| RecursionError: 4
+------------------------------------
>>>

Note that the exceptions nested in an exception group must be instances, not types. This is
because in practice the exceptions would typically be ones that have already been raised and
caught by the program, along the following pattern:

>>>

>>> excs = []
... for test in tests:
...     try:
...         test.run()
...     except Exception as e:
...         excs.append(e)
...
>>> if excs:
...     raise ExceptionGroup("Test Failures", excs)
...

8.10. Enriching Exceptions with Notes


When an exception is created in order to be raised, it is usually initialized with information that
describes the error that has occurred. There are cases where it is useful to add information after
the exception was caught. For this purpose, exceptions have a method add_note(note) that
accepts a string and adds it to the exception’s notes list. The standard traceback rendering
includes all notes, in the order they were added, after the exception.

>>>

>>> try:
...     raise TypeError('bad type')
... except Exception as e:
...     e.add_note('Add some information')
...     e.add_note('Add some more information')
...     raise
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
TypeError: bad type
Add some information
Add some more information
>>>

For example, when collecting exceptions into an exception group, we may want to add context
information for the individual errors. In the following each exception in the group has a note
indicating when this error has occurred.

>>>

>>> def f():
...     raise OSError('operation failed')
...
>>> excs = []
>>> for i in range(3):
...     try:
...         f()
...     except Exception as e:
...         e.add_note(f'Happened in Iteration {i+1}')
...         excs.append(e)
...
>>> raise ExceptionGroup('We have some problems', excs)
+ Exception Group Traceback (most recent call last):
| File "<stdin>", line 1, in <module>
| ExceptionGroup: We have some problems (3 sub-exceptions)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "<stdin>", line 3, in <module>
| File "<stdin>", line 2, in f
| OSError: operation failed
| Happened in Iteration 1
+---------------- 2 ----------------
| Traceback (most recent call last):
| File "<stdin>", line 3, in <module>
| File "<stdin>", line 2, in f
| OSError: operation failed
| Happened in Iteration 2
+---------------- 3 ----------------
| Traceback (most recent call last):
| File "<stdin>", line 3, in <module>
| File "<stdin>", line 2, in f
| OSError: operation failed
| Happened in Iteration 3
+------------------------------------

9. Classes
Classes provide a means of bundling data and functionality together.
Creating a new class creates a new type of object, allowing new instances of
that type to be made. Each class instance can have attributes attached to it
for maintaining its state. Class instances can also have methods (defined by
its class) for modifying its state.
Compared with other programming languages, Python’s class mechanism
adds classes with a minimum of new syntax and semantics. It is a mixture of
the class mechanisms found in C++ and Modula-3. Python classes provide all
the standard features of Object Oriented Programming: the class inheritance
mechanism allows multiple base classes, a derived class can override any
methods of its base class or classes, and a method can call the method of a
base class with the same name. Objects can contain arbitrary amounts and
kinds of data. As is true for modules, classes partake of the dynamic nature
of Python: they are created at runtime, and can be modified further after
creation.

In C++ terminology, normally class members (including the data members)
are public (except see below Private Variables), and all member functions
are virtual. As in Modula-3, there are no shorthands for referencing the
object’s members from its methods: the method function is declared with an
explicit first argument representing the object, which is provided implicitly by
the call. As in Smalltalk, classes themselves are objects. This provides
semantics for importing and renaming. Unlike C++ and Modula-3, built-in
types can be used as base classes for extension by the user. Also, like in
C++, most built-in operators with special syntax (arithmetic operators,
subscripting etc.) can be redefined for class instances.

(Lacking universally accepted terminology to talk about classes, I will make
occasional use of Smalltalk and C++ terms. I would use Modula-3 terms,
since its object-oriented semantics are closer to those of Python than C++,
but I expect that few readers have heard of it.)

9.1. A Word About Names and Objects


Objects have individuality, and multiple names (in multiple scopes) can be bound to the same
object. This is known as aliasing in other languages. This is usually not appreciated on a first
glance at Python, and can be safely ignored when dealing with immutable basic types (numbers,
strings, tuples). However, aliasing has a possibly surprising effect on the semantics of Python
code involving mutable objects such as lists, dictionaries, and most other types. This is usually
used to the benefit of the program, since aliases behave like pointers in some respects. For
example, passing an object is cheap since only a pointer is passed by the implementation; and if a
function modifies an object passed as an argument, the caller will see the change — this
eliminates the need for two different argument passing mechanisms as in Pascal.

9.2. Python Scopes and Namespaces


Before introducing classes, I first have to tell you something about Python’s scope rules. Class
definitions play some neat tricks with namespaces, and you need to know how scopes and
namespaces work to fully understand what’s going on. Incidentally, knowledge about this subject
is useful for any advanced Python programmer.

Let’s begin with some definitions.

A namespace is a mapping from names to objects. Most namespaces are currently implemented
as Python dictionaries, but that’s normally not noticeable in any way (except for performance),
and it may change in the future. Examples of namespaces are: the set of built-in names
(containing functions such as abs(), and built-in exception names); the global names in a
module; and the local names in a function invocation. In a sense the set of attributes of an object
also form a namespace. The important thing to know about namespaces is that there is absolutely
no relation between names in different namespaces; for instance, two different modules may both
define a function maximize without confusion — users of the modules must prefix it with the
module name.

By the way, I use the word attribute for any name following a dot — for example, in the
expression z.real, real is an attribute of the object z. Strictly speaking, references to names
in modules are attribute references: in the expression modname.funcname, modname is a
module object and funcname is an attribute of it. In this case there happens to be a
straightforward mapping between the module’s attributes and the global names defined in the
module: they share the same namespace! [1]

Attributes may be read-only or writable. In the latter case, assignment to attributes is possible.
Module attributes are writable: you can write modname.the_answer = 42. Writable
attributes may also be deleted with the del statement. For
example, del modname.the_answer will remove the attribute the_answer from the
object named by modname.

Namespaces are created at different moments and have different lifetimes. The namespace
containing the built-in names is created when the Python interpreter starts up, and is never
deleted. The global namespace for a module is created when the module definition is read in;
normally, module namespaces also last until the interpreter quits. The statements executed by the
top-level invocation of the interpreter, either read from a script file or interactively, are
considered part of a module called __main__, so they have their own global namespace. (The
built-in names actually also live in a module; this is called builtins.)

The local namespace for a function is created when the function is called, and deleted when the
function returns or raises an exception that is not handled within the function. (Actually,
forgetting would be a better way to describe what actually happens.) Of course, recursive
invocations each have their own local namespace.

A scope is a textual region of a Python program where a namespace is directly accessible.
“Directly accessible” here means that an unqualified reference to a name attempts to find the
name in the namespace.

Although scopes are determined statically, they are used dynamically. At any time during
execution, there are 3 or 4 nested scopes whose namespaces are directly accessible:

 the innermost scope, which is searched first, contains the local names
 the scopes of any enclosing functions, which are searched starting with the nearest
enclosing scope, contain non-local, but also non-global names
 the next-to-last scope contains the current module’s global names
 the outermost scope (searched last) is the namespace containing built-in names

If a name is declared global, then all references and assignments go directly to the next-to-last
scope containing the module’s global names. To rebind variables found outside of the innermost
scope, the nonlocal statement can be used; if not declared nonlocal, those variables are read-
only (an attempt to write to such a variable will simply create a new local variable in the
innermost scope, leaving the identically named outer variable unchanged).

Usually, the local scope references the local names of the (textually) current function. Outside
functions, the local scope references the same namespace as the global scope: the module’s
namespace. Class definitions place yet another namespace in the local scope.

It is important to realize that scopes are determined textually: the global scope of a function
defined in a module is that module’s namespace, no matter from where or by what alias the
function is called. On the other hand, the actual search for names is done dynamically, at run
time — however, the language definition is evolving towards static name resolution, at
“compile” time, so don’t rely on dynamic name resolution! (In fact, local variables are already
determined statically.)

A special quirk of Python is that – if no global or nonlocal statement is in effect –
assignments to names always go into the innermost scope. Assignments do not copy data — they
just bind names to objects. The same is true for deletions: the statement del x removes the
binding of x from the namespace referenced by the local scope. In fact, all operations that
introduce new names use the local scope: in particular, import statements and function
definitions bind the module or function name in the local scope.

The global statement can be used to indicate that particular variables live in the global scope
and should be rebound there; the nonlocal statement indicates that particular variables live in
an enclosing scope and should be rebound there.

9.2.1. Scopes and Namespaces Example


This is an example demonstrating how to reference the different scopes and namespaces, and
how global and nonlocal affect variable binding:

def scope_test():
    def do_local():
        spam = "local spam"

    def do_nonlocal():
        nonlocal spam
        spam = "nonlocal spam"

    def do_global():
        global spam
        spam = "global spam"

    spam = "test spam"
    do_local()
    print("After local assignment:", spam)
    do_nonlocal()
    print("After nonlocal assignment:", spam)
    do_global()
    print("After global assignment:", spam)

scope_test()
print("In global scope:", spam)

The output of the example code is:

After local assignment: test spam
After nonlocal assignment: nonlocal spam
After global assignment: nonlocal spam
In global scope: global spam

Note how the local assignment (which is default) didn’t change scope_test's binding of spam.
The nonlocal assignment changed scope_test's binding of spam, and the global assignment
changed the module-level binding.

You can also see that there was no previous binding for spam before the global assignment.

9.3. A First Look at Classes


Classes introduce a little bit of new syntax, three new object types, and some new semantics.

9.3.1. Class Definition Syntax


The simplest form of class definition looks like this:

class ClassName:
    <statement-1>
    .
    .
    .
    <statement-N>

Class definitions, like function definitions (def statements) must be executed before they have
any effect. (You could conceivably place a class definition in a branch of an if statement, or
inside a function.)

In practice, the statements inside a class definition will usually be function definitions, but other
statements are allowed, and sometimes useful — we’ll come back to this later. The function
definitions inside a class normally have a peculiar form of argument list, dictated by the calling
conventions for methods — again, this is explained later.

When a class definition is entered, a new namespace is created, and used as the local scope —
thus, all assignments to local variables go into this new namespace. In particular, function
definitions bind the name of the new function here.

When a class definition is left normally (via the end), a class object is created. This is basically a
wrapper around the contents of the namespace created by the class definition; we’ll learn more
about class objects in the next section. The original local scope (the one in effect just before the
class definition was entered) is reinstated, and the class object is bound here to the class name
given in the class definition header (ClassName in the example).

9.3.2. Class Objects


Class objects support two kinds of operations: attribute references and instantiation.

Attribute references use the standard syntax used for all attribute references in
Python: obj.name. Valid attribute names are all the names that were in the class’s namespace
when the class object was created. So, if the class definition looked like this:

class MyClass:
    """A simple example class"""
    i = 12345

    def f(self):
        return 'hello world'

then MyClass.i and MyClass.f are valid attribute references, returning an integer and a
function object, respectively. Class attributes can also be assigned to, so you can change the
value of MyClass.i by assignment. __doc__ is also a valid attribute, returning the docstring
belonging to the class: "A simple example class".
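
For instance, with the MyClass definition above, a short interactive session could look like this:

>>> MyClass.i
12345
>>> MyClass.__doc__
'A simple example class'
>>> MyClass.i = 54321        # class attributes can be reassigned
>>> MyClass.i
54321
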
Class instantiation uses function notation. Just pretend that the class object is a parameterless
function that returns a new instance of the class. For example (assuming the above class):

x = MyClass()

creates a new instance of the class and assigns this object to the local variable x.

The instantiation operation (“calling” a class object) creates an empty object. Many classes like
to create objects with instances customized to a specific initial state. Therefore a class may
define a special method named __init__(), like this:

def __init__(self):
    self.data = []

When a class defines an __init__() method, class instantiation automatically
invokes __init__() for the newly created class instance. So in this example, a new, initialized
instance can be obtained by:

x = MyClass()

Of course, the __init__() method may have arguments for greater flexibility. In that case,
arguments given to the class instantiation operator are passed on to __init__(). For example,

>>>

>>> class Complex:
...     def __init__(self, realpart, imagpart):
...         self.r = realpart
...         self.i = imagpart
...
>>> x = Complex(3.0, -4.5)
>>> x.r, x.i
(3.0, -4.5)

9.3.3. Instance Objects


Now what can we do with instance objects? The only operations understood by instance objects
are attribute references. There are two kinds of valid attribute names: data attributes and
methods.

data attributes correspond to “instance variables” in Smalltalk, and to “data members” in C++.
Data attributes need not be declared; like local variables, they spring into existence when they
are first assigned to. For example, if x is the instance of MyClass created above, the following
piece of code will print the value 16, without leaving a trace:

x.counter = 1
while x.counter < 10:
    x.counter = x.counter * 2
print(x.counter)
del x.counter

The other kind of instance attribute reference is a method. A method is a function that “belongs
to” an object.

Valid method names of an instance object depend on its class. By definition, all attributes of a
class that are function objects define corresponding methods of its instances. So in our
example, x.f is a valid method reference, since MyClass.f is a function, but x.i is not,
since MyClass.i is not. But x.f is not the same thing as MyClass.f — it is a method object,
not a function object.

9.3.4. Method Objects


Usually, a method is called right after it is bound:

x.f()

In the MyClass example, this will return the string 'hello world'. However, it is not
necessary to call a method right away: x.f is a method object, and can be stored away and
called at a later time. For example:

xf = x.f
while True:
    print(xf())

will continue to print hello world until the end of time.

What exactly happens when a method is called? You may have noticed that x.f() was called
without an argument above, even though the function definition for f() specified an argument.
What happened to the argument? Surely Python raises an exception when a function that requires
an argument is called without any — even if the argument isn’t actually used…

Actually, you may have guessed the answer: the special thing about methods is that the instance
object is passed as the first argument of the function. In our example, the call x.f() is exactly
equivalent to MyClass.f(x). In general, calling a method with a list of n arguments is
equivalent to calling the corresponding function with an argument list that is created by inserting
the method’s instance object before the first argument.

In general, methods work as follows. When a non-data attribute of an instance is referenced, the
instance’s class is searched. If the name denotes a valid class attribute that is a function object,
references to both the instance object and the function object are packed into a method object.
When the method object is called with an argument list, a new argument list is constructed from
the instance object and the argument list, and the function object is called with this new
argument list.
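
To make the equivalence concrete, here is a brief sketch using the MyClass instance from the earlier example:

>>> x = MyClass()
>>> x.f()
'hello world'
>>> MyClass.f(x)    # the same call, with the instance passed explicitly
'hello world'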

9.3.5. Class and Instance Variables


Generally speaking, instance variables are for data unique to each instance and class variables
are for attributes and methods shared by all instances of the class:

class Dog:

    kind = 'canine'          # class variable shared by all instances

    def __init__(self, name):
        self.name = name     # instance variable unique to each instance

>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.kind # shared by all dogs
'canine'
>>> e.kind # shared by all dogs
'canine'
>>> d.name # unique to d
'Fido'
>>> e.name # unique to e
'Buddy'

As discussed in A Word About Names and Objects, shared data can have possibly surprising
effects when it involves mutable objects such as lists and dictionaries. For example, the tricks list
in the following code should not be used as a class variable because just a single list would be
shared by all Dog instances:

class Dog:

    tricks = []              # mistaken use of a class variable

    def __init__(self, name):
        self.name = name

    def add_trick(self, trick):
        self.tricks.append(trick)

>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.add_trick('roll over')
>>> e.add_trick('play dead')
>>> d.tricks # unexpectedly shared by all dogs
['roll over', 'play dead']

Correct design of the class should use an instance variable instead:

class Dog:

    def __init__(self, name):
        self.name = name
        self.tricks = []     # creates a new empty list for each dog

    def add_trick(self, trick):
        self.tricks.append(trick)

>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.add_trick('roll over')
>>> e.add_trick('play dead')
>>> d.tricks
['roll over']
>>> e.tricks
['play dead']

9.4. Random Remarks


If the same attribute name occurs in both an instance and in a class, then attribute lookup
prioritizes the instance:

>>>

>>> class Warehouse:
...     purpose = 'storage'
...     region = 'west'
...
>>> w1 = Warehouse()
>>> print(w1.purpose, w1.region)
storage west
>>> w2 = Warehouse()
>>> w2.region = 'east'
>>> print(w2.purpose, w2.region)
storage east

Data attributes may be referenced by methods as well as by ordinary users (“clients”) of an
object. In other words, classes are not usable to implement pure abstract data types. In fact,
nothing in Python makes it possible to enforce data hiding — it is all based upon convention.
(On the other hand, the Python implementation, written in C, can completely hide
implementation details and control access to an object if necessary; this can be used by
extensions to Python written in C.)

Clients should use data attributes with care — clients may mess up invariants maintained by the
methods by stamping on their data attributes. Note that clients may add data attributes of their
own to an instance object without affecting the validity of the methods, as long as name conflicts
are avoided — again, a naming convention can save a lot of headaches here.

There is no shorthand for referencing data attributes (or other methods!) from within methods. I
find that this actually increases the readability of methods: there is no chance of confusing local
variables and instance variables when glancing through a method.

Often, the first argument of a method is called self. This is nothing more than a convention: the
name self has absolutely no special meaning to Python. Note, however, that by not following
the convention your code may be less readable to other Python programmers, and it is also
conceivable that a class browser program might be written that relies upon such a convention.

Any function object that is a class attribute defines a method for instances of that class. It is not
necessary that the function definition is textually enclosed in the class definition: assigning a
function object to a local variable in the class is also ok. For example:

# Function defined outside the class
def f1(self, x, y):
    return min(x, x+y)

class C:
    f = f1

    def g(self):
        return 'hello world'

    h = g

Now f, g and h are all attributes of class C that refer to function objects, and consequently they
are all methods of instances of C — h being exactly equivalent to g. Note that this practice
usually only serves to confuse the reader of a program.

Methods may call other methods by using method attributes of the self argument:

class Bag:
    def __init__(self):
        self.data = []

    def add(self, x):
        self.data.append(x)

    def addtwice(self, x):
        self.add(x)
        self.add(x)

Methods may reference global names in the same way as ordinary functions. The global scope
associated with a method is the module containing its definition. (A class is never used as a
global scope.) While one rarely encounters a good reason for using global data in a method, there
are many legitimate uses of the global scope: for one thing, functions and modules imported into
the global scope can be used by methods, as well as functions and classes defined in it. Usually,
the class containing the method is itself defined in this global scope, and in the next section we’ll
find some good reasons why a method would want to reference its own class.

Each value is an object, and therefore has a class (also called its type). It is stored
as object.__class__.
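
For example, a small interactive illustration:

>>> n = 42
>>> n.__class__
<class 'int'>
>>> n.__class__ is type(n)
True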

9.5. Inheritance
Of course, a language feature would not be worthy of the name “class” without supporting
inheritance. The syntax for a derived class definition looks like this:

class DerivedClassName(BaseClassName):
    <statement-1>
    .
    .
    .
    <statement-N>
The name BaseClassName must be defined in a namespace accessible from the scope
containing the derived class definition. In place of a base class name, other arbitrary expressions
are also allowed. This can be useful, for example, when the base class is defined in another
module:

class DerivedClassName(modname.BaseClassName):

Execution of a derived class definition proceeds the same as for a base class. When the class
object is constructed, the base class is remembered. This is used for resolving attribute
references: if a requested attribute is not found in the class, the search proceeds to look in the
base class. This rule is applied recursively if the base class itself is derived from some other
class.

There’s nothing special about instantiation of derived classes: DerivedClassName() creates
a new instance of the class. Method references are resolved as follows: the corresponding class
attribute is searched, descending down the chain of base classes if necessary, and the method
reference is valid if this yields a function object.

Derived classes may override methods of their base classes. Because methods have no special
privileges when calling other methods of the same object, a method of a base class that calls
another method defined in the same base class may end up calling a method of a derived class
that overrides it. (For C++ programmers: all methods in Python are effectively virtual.)

An overriding method in a derived class may in fact want to extend rather than simply replace
the base class method of the same name. There is a simple way to call the base class method
directly: just call BaseClassName.methodname(self, arguments). This is
occasionally useful to clients as well. (Note that this only works if the base class is accessible
as BaseClassName in the global scope.)
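
A brief sketch of this pattern (the class and method names are illustrative only):

class Base:
    def greet(self):
        return 'Hello'

class Derived(Base):
    def greet(self):
        # extend, rather than replace, the base class method
        return Base.greet(self) + ' from Derived'

print(Derived().greet())    # Hello from Derived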

Python has two built-in functions that work with inheritance:

 Use isinstance() to check an instance’s type: isinstance(obj, int) will
be True only if obj.__class__ is int or some class derived from int.
 Use issubclass() to check class
inheritance: issubclass(bool, int) is True since bool is a subclass of int.
However, issubclass(float, int) is False since float is not a subclass
of int.
9.5.1. Multiple Inheritance
Python supports a form of multiple inheritance as well. A class definition with multiple base
classes looks like this:

class DerivedClassName(Base1, Base2, Base3):
    <statement-1>
    .
    .
    .
    <statement-N>

For most purposes, in the simplest cases, you can think of the search for attributes inherited from
a parent class as depth-first, left-to-right, not searching twice in the same class where there is an
overlap in the hierarchy. Thus, if an attribute is not found in DerivedClassName, it is
searched for in Base1, then (recursively) in the base classes of Base1, and if it was not found
there, it was searched for in Base2, and so on.

In fact, it is slightly more complex than that; the method resolution order changes dynamically to
support cooperative calls to super(). This approach is known in some other multiple-
inheritance languages as call-next-method and is more powerful than the super call found in
single-inheritance languages.

Dynamic ordering is necessary because all cases of multiple inheritance exhibit one or more
diamond relationships (where at least one of the parent classes can be accessed through multiple
paths from the bottommost class). For example, all classes inherit from object, so any case of
multiple inheritance provides more than one path to reach object. To keep the base classes
from being accessed more than once, the dynamic algorithm linearizes the search order in a way
that preserves the left-to-right ordering specified in each class, that calls each parent only once,
and that is monotonic (meaning that a class can be subclassed without affecting the precedence
order of its parents). Taken together, these properties make it possible to design reliable and
extensible classes with multiple inheritance. For more detail, see The Python 2.3 Method
Resolution Order.
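As a small sketch of how the linearization behaves (the class names A, B, C and D are invented for this example), the computed order is available as the __mro__ attribute, and super() follows it for cooperative calls:

class A:
    def greet(self):
        return "A"

class B(A):
    def greet(self):
        return "B->" + super().greet()

class C(A):
    def greet(self):
        return "C->" + super().greet()

class D(B, C):
    def greet(self):
        return "D->" + super().greet()

print([cls.__name__ for cls in D.__mro__])   # ['D', 'B', 'C', 'A', 'object']
print(D().greet())                           # D->B->C->A

Note how the call from B.greet() continues to C rather than directly to A, because C follows B in D's method resolution order.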

9.6. Private Variables


“Private” instance variables that cannot be accessed except from inside an object don’t exist in
Python. However, there is a convention that is followed by most Python code: a name prefixed
with an underscore (e.g. _spam) should be treated as a non-public part of the API (whether it is a
function, a method or a data member). It should be considered an implementation detail and
subject to change without notice.

Since there is a valid use-case for class-private members (namely to avoid name clashes of
names with names defined by subclasses), there is limited support for such a mechanism,
called name mangling. Any identifier of the form __spam (at least two leading underscores, at
most one trailing underscore) is textually replaced with _classname__spam,
where classname is the current class name with leading underscore(s) stripped. This mangling
is done without regard to the syntactic position of the identifier, as long as it occurs within the
definition of a class.

See also
The private name mangling specifications for details and special cases.

Name mangling is helpful for letting subclasses override methods without breaking intraclass
method calls. For example:

class Mapping:
    def __init__(self, iterable):
        self.items_list = []
        self.__update(iterable)

    def update(self, iterable):
        for item in iterable:
            self.items_list.append(item)

    __update = update   # private copy of original update() method

class MappingSubclass(Mapping):

    def update(self, keys, values):
        # provides new signature for update()
        # but does not break __init__()
        for item in zip(keys, values):
            self.items_list.append(item)

The above example would work even if MappingSubclass were to introduce
a __update identifier, since it is replaced with _Mapping__update in the Mapping class
and _MappingSubclass__update in the MappingSubclass class respectively.

Note that the mangling rules are designed mostly to avoid accidents; it still is possible to access
or modify a variable that is considered private. This can even be useful in special circumstances,
such as in the debugger.

Notice that code passed to exec() or eval() does not consider the classname of the invoking
class to be the current class; this is similar to the effect of the global statement, the effect of
which is likewise restricted to code that is byte-compiled together. The same restriction applies
to getattr(), setattr() and delattr(), as well as when
referencing __dict__ directly.
9.7. Odds and Ends
Sometimes it is useful to have a data type similar to the Pascal “record” or C “struct”, bundling
together a few named data items. The idiomatic approach is to use dataclasses for this
purpose:

from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    dept: str
    salary: int

>>> john = Employee('john', 'computer lab', 1000)
>>> john.dept
'computer lab'
>>> john.salary
1000

A piece of Python code that expects a particular abstract data type can often be passed a class
that emulates the methods of that data type instead. For instance, if you have a function that
formats some data from a file object, you can define a class with
methods read() and readline() that get the data from a string buffer instead, and pass it as
an argument.
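A minimal sketch of that idea (the StringFile class and the first_word() function are invented for this example; the standard library's io.StringIO already provides a complete implementation of the same idea):

class StringFile:
    """Emulates just enough of a file object: read() and readline()."""
    def __init__(self, text):
        self.lines = text.splitlines(keepends=True)
        self.pos = 0

    def read(self):
        rest = ''.join(self.lines[self.pos:])
        self.pos = len(self.lines)
        return rest

    def readline(self):
        if self.pos >= len(self.lines):
            return ''
        line = self.lines[self.pos]
        self.pos += 1
        return line

def first_word(f):
    # a stand-in for a function that expects a file-like object
    return f.readline().split()[0]

print(first_word(StringFile("hello world\nsecond line\n")))    # hello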

Instance method objects have attributes, too: m.__self__ is the instance object with the
method m(), and m.__func__ is the function object corresponding to the method.
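A quick interactive illustration (using a throwaway class defined just for this example):

>>> class Greeter:
...     def hello(self):
...         return 'hi'
...
>>> g = Greeter()
>>> m = g.hello
>>> m.__self__ is g
True
>>> m.__func__ is Greeter.hello
True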

9.8. Iterators
By now you have probably noticed that most container objects can be looped over using
a for statement:

for element in [1, 2, 3]:
    print(element)
for element in (1, 2, 3):
    print(element)
for key in {'one': 1, 'two': 2}:
    print(key)
for char in "123":
    print(char)
for line in open("myfile.txt"):
    print(line, end='')

This style of access is clear, concise, and convenient. The use of iterators pervades and unifies
Python. Behind the scenes, the for statement calls iter() on the container object. The
function returns an iterator object that defines the method __next__() which accesses
elements in the container one at a time. When there are no more elements, __next__() raises
a StopIteration exception which tells the for loop to terminate. You can call
the __next__() method using the next() built-in function; this example shows how it all
works:

>>>

>>> s = 'abc'
>>> it = iter(s)
>>> it
<str_iterator object at 0x10c90e650>
>>> next(it)
'a'
>>> next(it)
'b'
>>> next(it)
'c'
>>> next(it)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    next(it)
StopIteration

Having seen the mechanics behind the iterator protocol, it is easy to add iterator behavior to your
classes. Define an __iter__() method which returns an object with a __next__() method.
If the class defines __next__(), then __iter__() can just return self:

class Reverse:
    """Iterator for looping over a sequence backwards."""
    def __init__(self, data):
        self.data = data
        self.index = len(data)

    def __iter__(self):
        return self

    def __next__(self):
        if self.index == 0:
            raise StopIteration
        self.index = self.index - 1
        return self.data[self.index]

>>> rev = Reverse('spam')
>>> iter(rev)
<__main__.Reverse object at 0x00A1DB50>
>>> for char in rev:
...     print(char)
...
m
a
p
s

9.9. Generators
Generators are a simple and powerful tool for creating iterators. They are written like regular
functions but use the yield statement whenever they want to return data. Each time next() is
called on it, the generator resumes where it left off (it remembers all the data values and which
statement was last executed). An example shows that generators can be trivially easy to create:

def reverse(data):
    for index in range(len(data)-1, -1, -1):
        yield data[index]

>>> for char in reverse('golf'):
...     print(char)
...
f
l
o
g

Anything that can be done with generators can also be done with class-based iterators as
described in the previous section. What makes generators so compact is that
the __iter__() and __next__() methods are created automatically.
Another key feature is that the local variables and execution state are automatically saved
between calls. This made the function easier to write and much more clear than an approach
using instance variables like self.index and self.data.

In addition to automatic method creation and saving program state, when generators terminate,
they automatically raise StopIteration. In combination, these features make it easy to create
iterators with no more effort than writing a regular function.

9.10. Generator Expressions


Some simple generators can be coded succinctly as expressions using a syntax similar to list
comprehensions but with parentheses instead of square brackets. These expressions are designed
for situations where the generator is used right away by an enclosing function. Generator
expressions are more compact but less versatile than full generator definitions and tend to be
more memory friendly than equivalent list comprehensions.

Examples:

>>>

>>> sum(i*i for i in range(10))                 # sum of squares
285

>>> xvec = [10, 20, 30]
>>> yvec = [7, 5, 3]
>>> sum(x*y for x, y in zip(xvec, yvec))        # dot product
260

>>> unique_words = set(word for line in page for word in line.split())

>>> valedictorian = max((student.gpa, student.name) for student in graduates)

>>> data = 'golf'
>>> list(data[i] for i in range(len(data)-1, -1, -1))
['f', 'l', 'o', 'g']

Footnotes

[1]
Except for one thing. Module objects have a secret read-only attribute
called __dict__ which returns the dictionary used to implement the module’s namespace;
the name __dict__ is an attribute but not a global name. Obviously, using this violates the
abstraction of namespace implementation, and should be restricted to things like post-mortem debuggers.

10. Brief Tour of the Standard Library


10.1. Operating System Interface
The os module provides dozens of functions for interacting with the operating system:

>>>

>>> import os
>>> os.getcwd()                    # Return the current working directory
'C:\\Python312'
>>> os.chdir('/server/accesslogs') # Change current working directory
>>> os.system('mkdir today')       # Run the command mkdir in the system shell
0

Be sure to use the import os style instead of from os import *. This will
keep os.open() from shadowing the built-in open() function which operates much
differently.

The built-in dir() and help() functions are useful as interactive aids for working with large
modules like os:

>>>

>>> import os
>>> dir(os)
<returns a list of all module functions>
>>> help(os)
<returns an extensive manual page created from the module's docstrings>

For daily file and directory management tasks, the shutil module provides a higher level
interface that is easier to use:

>>>
>>> import shutil
>>> shutil.copyfile('data.db', 'archive.db')
'archive.db'
>>> shutil.move('/build/executables', 'installdir')
'installdir'

10.2. File Wildcards


The glob module provides a function for making file lists from directory wildcard searches:

>>>

>>> import glob
>>> glob.glob('*.py')
['primes.py', 'random.py', 'quote.py']

10.3. Command Line Arguments


Common utility scripts often need to process command line arguments. These arguments are
stored in the sys module’s argv attribute as a list. For instance, let’s take the
following demo.py file:

# File demo.py
import sys
print(sys.argv)

Here is the output from running python demo.py one two three at the command line:

['demo.py', 'one', 'two', 'three']

The argparse module provides a more sophisticated mechanism to process command line
arguments. The following script extracts one or more filenames and an optional number of lines
to be displayed:

import argparse

parser = argparse.ArgumentParser(
    prog='top',
    description='Show top lines from each file')
parser.add_argument('filenames', nargs='+')
parser.add_argument('-l', '--lines', type=int, default=10)
args = parser.parse_args()
print(args)

When run at the command line with python top.py --lines=5 alpha.txt beta.txt, the script
sets args.lines to 5 and args.filenames to ['alpha.txt', 'beta.txt'].

10.4. Error Output Redirection and Program Termination
The sys module also has attributes for stdin, stdout, and stderr. The latter is useful for emitting
warnings and error messages to make them visible even when stdout has been redirected:

>>>

>>> sys.stderr.write('Warning, log file not found starting a new one\n')
Warning, log file not found starting a new one

The most direct way to terminate a script is to use sys.exit().

10.5. String Pattern Matching


The re module provides regular expression tools for advanced string processing. For complex
matching and manipulation, regular expressions offer succinct, optimized solutions:

>>>

>>> import re
>>> re.findall(r'\bf[a-z]*', 'which foot or hand fell fastest')
['foot', 'fell', 'fastest']
>>> re.sub(r'(\b[a-z]+) \1', r'\1', 'cat in the the hat')
'cat in the hat'

When only simple capabilities are needed, string methods are preferred because they are easier to
read and debug:

>>>

>>> 'tea for too'.replace('too', 'two')
'tea for two'
10.6. Mathematics
The math module gives access to the underlying C library functions for floating-point math:

>>>

>>> import math
>>> math.cos(math.pi / 4)
0.70710678118654757
>>> math.log(1024, 2)
10.0

The random module provides tools for making random selections:

>>>

>>> import random
>>> random.choice(['apple', 'pear', 'banana'])
'apple'
>>> random.sample(range(100), 10) # sampling without replacement
[30, 83, 16, 4, 8, 81, 41, 50, 18, 33]
>>> random.random() # random float
0.17970987693706186
>>> random.randrange(6) # random integer chosen from range(6)
4

The statistics module calculates basic statistical properties (the mean, median, variance,
etc.) of numeric data:

>>>

>>> import statistics
>>> data = [2.75, 1.75, 1.25, 0.25, 0.5, 1.25, 3.5]
>>> statistics.mean(data)
1.6071428571428572
>>> statistics.median(data)
1.25
>>> statistics.variance(data)
1.3720238095238095

The SciPy project <https://scipy.org> has many other modules for numerical computations.
10.7. Internet Access
There are a number of modules for accessing the internet and processing internet protocols. Two
of the simplest are urllib.request for retrieving data from URLs and smtplib for sending
mail:

>>>

>>> from urllib.request import urlopen
>>> with urlopen('http://worldtimeapi.org/api/timezone/etc/UTC.txt') as response:
...     for line in response:
...         line = line.decode()              # Convert bytes to a str
...         if line.startswith('datetime'):
...             print(line.rstrip())          # Remove trailing newline
...
datetime: 2022-01-01T01:36:47.689215+00:00

>>> import smtplib
>>> server = smtplib.SMTP('localhost')
>>> server.sendmail('soothsayer@example.org', 'jcaesar@example.org',
... """To: jcaesar@example.org
... From: soothsayer@example.org
...
... Beware the Ides of March.
... """)
>>> server.quit()

(Note that the second example needs a mailserver running on localhost.)

10.8. Dates and Times


The datetime module supplies classes for manipulating dates and times in both simple and
complex ways. While date and time arithmetic is supported, the focus of the implementation is
on efficient member extraction for output formatting and manipulation. The module also
supports objects that are timezone aware.

>>>
>>> # dates are easily constructed and formatted
>>> from datetime import date
>>> now = date.today()
>>> now
datetime.date(2003, 12, 2)
>>> now.strftime("%m-%d-%y. %d %b %Y is a %A on the %d day of %B.")
'12-02-03. 02 Dec 2003 is a Tuesday on the 02 day of December.'

>>> # dates support calendar arithmetic
>>> birthday = date(1964, 7, 31)
>>> age = now - birthday
>>> age.days
14368

10.9. Data Compression


Common data archiving and compression formats are directly supported by modules
including: zlib, gzip, bz2, lzma, zipfile and tarfile.

>>>

>>> import zlib
>>> s = b'witch which has which witches wrist watch'
>>> len(s)
41
>>> t = zlib.compress(s)
>>> len(t)
37
>>> zlib.decompress(t)
b'witch which has which witches wrist watch'
>>> zlib.crc32(s)
226805979

10.10. Performance Measurement


Some Python users develop a deep interest in knowing the relative performance of different
approaches to the same problem. Python provides a measurement tool that answers those
questions immediately.

For example, it may be tempting to use the tuple packing and unpacking feature instead of the
traditional approach to swapping arguments. The timeit module quickly demonstrates a
modest performance advantage:

>>>
>>> from timeit import Timer
>>> Timer('t=a; a=b; b=t', 'a=1; b=2').timeit()
0.57535828626024577
>>> Timer('a,b = b,a', 'a=1; b=2').timeit()
0.54962537085770791

In contrast to timeit’s fine level of granularity, the profile and pstats modules provide
tools for identifying time critical sections in larger blocks of code.
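For example, a minimal sketch (the work() function is just a stand-in) that profiles a call with cProfile, the C implementation of profile, and prints the most expensive entries with pstats:

import cProfile
import pstats

def work():
    return sum(i * i for i in range(100_000))

cProfile.run('work()', 'work.prof')             # write profiling data to a file
stats = pstats.Stats('work.prof')
stats.sort_stats('cumulative').print_stats(5)   # show the five most expensive entries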

10.11. Quality Control


One approach for developing high quality software is to write tests for each function as it is
developed and to run those tests frequently during the development process.

The doctest module provides a tool for scanning a module and validating tests embedded in a
program’s docstrings. Test construction is as simple as cutting-and-pasting a typical call along
with its results into the docstring. This improves the documentation by providing the user with an
example and it allows the doctest module to make sure the code remains true to the
documentation:

def average(values):
    """Computes the arithmetic mean of a list of numbers.

    >>> print(average([20, 30, 70]))
    40.0
    """
    return sum(values) / len(values)

import doctest
doctest.testmod()   # automatically validate the embedded tests

The unittest module is not as effortless as the doctest module, but it allows a more
comprehensive set of tests to be maintained in a separate file:

import unittest

class TestStatisticalFunctions(unittest.TestCase):

    def test_average(self):
        self.assertEqual(average([20, 30, 70]), 40.0)
        self.assertEqual(round(average([1, 5, 7]), 1), 4.3)
        with self.assertRaises(ZeroDivisionError):
            average([])
        with self.assertRaises(TypeError):
            average(20, 30, 70)

unittest.main()  # Calling from the command line invokes all tests

10.12. Batteries Included


Python has a “batteries included” philosophy. This is best seen through the sophisticated and
robust capabilities of its larger packages. For example:

 The xmlrpc.client and xmlrpc.server modules make implementing remote procedure calls into an almost trivial task. Despite the modules’ names, no direct knowledge or handling of XML is needed.
 The email package is a library for managing email messages, including MIME and
other RFC 2822-based message documents. Unlike smtplib and poplib which
actually send and receive messages, the email package has a complete toolset for building
or decoding complex message structures (including attachments) and for implementing
internet encoding and header protocols.
 The json package provides robust support for parsing this popular data interchange
format. The csv module supports direct reading and writing of files in Comma-Separated
Value format, commonly supported by databases and spreadsheets. XML processing is
supported by the xml.etree.ElementTree, xml.dom and xml.sax packages.
Together, these modules and packages greatly simplify data interchange between Python
applications and other tools.
 The sqlite3 module is a wrapper for the SQLite database library, providing a
persistent database that can be updated and accessed using slightly nonstandard SQL
syntax.
 Internationalization is supported by a number of modules including gettext, locale,
and the codecs package.

11. Brief Tour of the Standard Library — Part II
This second tour covers more advanced modules that support professional
programming needs. These modules rarely occur in small scripts.

11.1. Output Formatting


The reprlib module provides a version of repr() customized for abbreviated displays of
large or deeply nested containers:
>>>

>>> import reprlib
>>> reprlib.repr(set('supercalifragilisticexpialidocious'))
"{'a', 'c', 'd', 'e', 'f', 'g', ...}"

The pprint module offers more sophisticated control over printing both built-in and user
defined objects in a way that is readable by the interpreter. When the result is longer than one
line, the “pretty printer” adds line breaks and indentation to more clearly reveal data structure:

>>>

>>> import pprint
>>> t = [[[['black', 'cyan'], 'white', ['green', 'red']], [['magenta',
...     'yellow'], 'blue']]]
...
>>> pprint.pprint(t, width=30)
[[[['black', 'cyan'],
   'white',
   ['green', 'red']],
  [['magenta', 'yellow'],
   'blue']]]

The textwrap module formats paragraphs of text to fit a given screen width:

>>>

>>> import textwrap
>>> doc = """The wrap() method is just like fill() except that it returns
... a list of strings instead of one big string with newlines to separate
... the wrapped lines."""
...
>>> print(textwrap.fill(doc, width=40))
The wrap() method is just like fill()
except that it returns a list of strings
instead of one big string with newlines
to separate the wrapped lines.

The locale module accesses a database of culture specific data formats. The grouping attribute
of locale’s format function provides a direct way of formatting numbers with group separators:
>>>

>>> import locale
>>> locale.setlocale(locale.LC_ALL, 'English_United States.1252')
'English_United States.1252'
>>> conv = locale.localeconv()      # get a mapping of conventions
>>> x = 1234567.8
>>> locale.format_string("%d", x, grouping=True)
'1,234,567'
>>> locale.format_string("%s%.*f", (conv['currency_symbol'],
...                      conv['frac_digits'], x), grouping=True)
'$1,234,567.80'

11.2. Templating
The string module includes a versatile Template class with a simplified syntax suitable for
editing by end-users. This allows users to customize their applications without having to alter the
application.

The format uses placeholder names formed by $ with valid Python identifiers (alphanumeric
characters and underscores). Surrounding the placeholder with braces allows it to be followed by
more alphanumeric letters with no intervening spaces. Writing $$ creates a single escaped $:

>>>

>>> from string import Template
>>> t = Template('${village}folk send $$10 to $cause.')
>>> t.substitute(village='Nottingham', cause='the ditch fund')
'Nottinghamfolk send $10 to the ditch fund.'

The substitute() method raises a KeyError when a placeholder is not supplied in a
dictionary or a keyword argument. For mail-merge style applications, user supplied data may be
incomplete, and the safe_substitute() method may be more appropriate; it will leave
placeholders unchanged if data is missing:

>>>

>>> t = Template('Return the $item to $owner.')
>>> d = dict(item='unladen swallow')
>>> t.substitute(d)
Traceback (most recent call last):
...
KeyError: 'owner'
>>> t.safe_substitute(d)
'Return the unladen swallow to $owner.'

Template subclasses can specify a custom delimiter. For example, a batch renaming utility for a
photo browser may elect to use percent signs for placeholders such as the current date, image
sequence number, or file format:

>>>

>>> import time, os.path
>>> photofiles = ['img_1074.jpg', 'img_1076.jpg', 'img_1077.jpg']
>>> class BatchRename(Template):
...     delimiter = '%'
...
>>> fmt = input('Enter rename style (%d-date %n-seqnum %f-format): ')
Enter rename style (%d-date %n-seqnum %f-format): Ashley_%n%f

>>> t = BatchRename(fmt)
>>> date = time.strftime('%d%b%y')
>>> for i, filename in enumerate(photofiles):
...     base, ext = os.path.splitext(filename)
...     newname = t.substitute(d=date, n=i, f=ext)
...     print('{0} --> {1}'.format(filename, newname))

img_1074.jpg --> Ashley_0.jpg
img_1076.jpg --> Ashley_1.jpg
img_1077.jpg --> Ashley_2.jpg

Another application for templating is separating program logic from the details of multiple
output formats. This makes it possible to substitute custom templates for XML files, plain text
reports, and HTML web reports.

11.3. Working with Binary Data Record Layouts


The struct module provides pack() and unpack() functions for working with variable
length binary record formats. The following example shows how to loop through header
information in a ZIP file without using the zipfile module. Pack
codes "H" and "I" represent two and four byte unsigned numbers respectively.
The "<" indicates that they are standard size and in little-endian byte order:

import struct

with open('myfile.zip', 'rb') as f:
    data = f.read()

start = 0
for i in range(3):                      # show the first 3 file headers
    start += 14
    fields = struct.unpack('<IIIHH', data[start:start+16])
    crc32, comp_size, uncomp_size, filenamesize, extra_size = fields

    start += 16
    filename = data[start:start+filenamesize]
    start += filenamesize
    extra = data[start:start+extra_size]
    print(filename, hex(crc32), comp_size, uncomp_size)

    start += extra_size + comp_size     # skip to the next header

11.4. Multi-threading
Threading is a technique for decoupling tasks which are not sequentially dependent. Threads can
be used to improve the responsiveness of applications that accept user input while other tasks run
in the background. A related use case is running I/O in parallel with computations in another
thread.

The following code shows how the high level threading module can run tasks in background
while the main program continues to run:

import threading, zipfile

class AsyncZip(threading.Thread):
    def __init__(self, infile, outfile):
        threading.Thread.__init__(self)
        self.infile = infile
        self.outfile = outfile

    def run(self):
        f = zipfile.ZipFile(self.outfile, 'w', zipfile.ZIP_DEFLATED)
        f.write(self.infile)
        f.close()
        print('Finished background zip of:', self.infile)

background = AsyncZip('mydata.txt', 'myarchive.zip')
background.start()
print('The main program continues to run in foreground.')

background.join()    # Wait for the background task to finish
print('Main program waited until background was done.')

The principal challenge of multi-threaded applications is coordinating threads that share data or
other resources. To that end, the threading module provides a number of synchronization
primitives including locks, events, condition variables, and semaphores.

While those tools are powerful, minor design errors can result in problems that are difficult to
reproduce. So, the preferred approach to task coordination is to concentrate all access to a
resource in a single thread and then use the queue module to feed that thread with requests from
other threads. Applications using Queue objects for inter-thread communication and
coordination are easier to design, more readable, and more reliable.
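A minimal sketch of that pattern (the queue name, the sentinel value, and the trivial "work" are invented for this example): one worker thread owns the resource, and other threads only put requests on the queue:

import threading, queue

requests = queue.Queue()

def worker():
    # the single thread that touches the shared resource (here, just printing)
    while True:
        item = requests.get()
        if item is None:            # sentinel: no more work
            break
        print('processing', item)
        requests.task_done()

t = threading.Thread(target=worker)
t.start()

for i in range(3):
    requests.put(i)                 # other threads feed the worker via the queue

requests.put(None)                  # ask the worker to finish
t.join()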

11.5. Logging
The logging module offers a full featured and flexible logging system. At its simplest, log
messages are sent to a file or to sys.stderr:

import logging
logging.debug('Debugging information')
logging.info('Informational message')
logging.warning('Warning:config file %s not found', 'server.conf')
logging.error('Error occurred')
logging.critical('Critical error -- shutting down')

This produces the following output:

WARNING:root:Warning:config file server.conf not found
ERROR:root:Error occurred
CRITICAL:root:Critical error -- shutting down

By default, informational and debugging messages are suppressed and the output is sent to
standard error. Other output options include routing messages through email, datagrams, sockets,
or to an HTTP Server. New filters can select different routing based on message
priority: DEBUG, INFO, WARNING, ERROR, and CRITICAL.

The logging system can be configured directly from Python or can be loaded from a user editable
configuration file for customized logging without altering the application.
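For instance, a small sketch using logging.basicConfig() to change the level and destination (the file name app.log is arbitrary):

import logging

# send DEBUG and above to a file instead of standard error,
# with a timestamped message format
logging.basicConfig(filename='app.log',
                    level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s')

logging.debug('This message now appears in app.log')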
11.6. Weak References
Python does automatic memory management (reference counting for most objects and garbage
collection to eliminate cycles). The memory is freed shortly after the last reference to it has been
eliminated.

This approach works fine for most applications but occasionally there is a need to track objects
only as long as they are being used by something else. Unfortunately, just tracking them creates a
reference that makes them permanent. The weakref module provides tools for tracking objects
without creating a reference. When the object is no longer needed, it is automatically removed
from a weakref table and a callback is triggered for weakref objects. Typical applications include
caching objects that are expensive to create:

>>>

>>> import weakref, gc
>>> class A:
...     def __init__(self, value):
...         self.value = value
...     def __repr__(self):
...         return str(self.value)
...
>>> a = A(10)                   # create a reference
>>> d = weakref.WeakValueDictionary()
>>> d['primary'] = a            # does not create a reference
>>> d['primary']                # fetch the object if it is still alive
10
>>> del a                       # remove the one reference
>>> gc.collect()                # run garbage collection right away
0
>>> d['primary']                # entry was automatically removed
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    d['primary']                # entry was automatically removed
  File "C:/python312/lib/weakref.py", line 46, in __getitem__
    o = self.data[key]()
KeyError: 'primary'

11.7. Tools for Working with Lists


Many data structure needs can be met with the built-in list type. However, sometimes there is a
need for alternative implementations with different performance trade-offs.
The array module provides an array object that is like a list that stores only homogeneous
data and stores it more compactly. The following example shows an array of numbers stored as
two byte unsigned binary numbers (typecode "H") rather than the usual 16 bytes per entry for
regular lists of Python int objects:

>>>

>>> from array import array
>>> a = array('H', [4000, 10, 700, 22222])
>>> sum(a)
26932
>>> a[1:3]
array('H', [10, 700])

The collections module provides a deque object that is like a list with faster appends and
pops from the left side but slower lookups in the middle. These objects are well suited for
implementing queues and breadth first tree searches:

>>>

>>> from collections import deque
>>> d = deque(["task1", "task2", "task3"])
>>> d.append("task4")
>>> print("Handling", d.popleft())
Handling task1

unsearched = deque([starting_node])
def breadth_first_search(unsearched):
    node = unsearched.popleft()
    for m in gen_moves(node):
        if is_goal(m):
            return m
        unsearched.append(m)

In addition to alternative list implementations, the library also offers other tools such as
the bisect module with functions for manipulating sorted lists:

>>>

>>> import bisect
>>> scores = [(100, 'perl'), (200, 'tcl'), (400, 'lua'), (500, 'python')]
>>> bisect.insort(scores, (300, 'ruby'))
>>> scores
[(100, 'perl'), (200, 'tcl'), (300, 'ruby'), (400, 'lua'), (500, 'python')]

The heapq module provides functions for implementing heaps based on regular lists. The lowest
valued entry is always kept at position zero. This is useful for applications which repeatedly
access the smallest element but do not want to run a full list sort:

>>>

>>> from heapq import heapify, heappop, heappush
>>> data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0]
>>> heapify(data)                      # rearrange the list into heap order
>>> heappush(data, -5)                 # add a new entry
>>> [heappop(data) for i in range(3)]  # fetch the three smallest entries
[-5, 0, 1]

11.8. Decimal Floating-Point Arithmetic


The decimal module offers a Decimal datatype for decimal floating-point arithmetic.
Compared to the built-in float implementation of binary floating point, the class is especially
helpful for

 financial applications and other uses which require exact decimal representation,
 control over precision,
 control over rounding to meet legal or regulatory requirements,
 tracking of significant decimal places, or
 applications where the user expects the results to match calculations done by hand.

For example, calculating a 5% tax on a 70 cent phone charge gives different results in decimal
floating point and binary floating point. The difference becomes significant if the results are
rounded to the nearest cent:

>>>

>>> from decimal import *
>>> round(Decimal('0.70') * Decimal('1.05'), 2)
Decimal('0.74')
>>> round(.70 * 1.05, 2)
0.73

The Decimal result keeps a trailing zero, automatically inferring four place significance from
multiplicands with two place significance. Decimal reproduces mathematics as done by hand and
avoids issues that can arise when binary floating point cannot exactly represent decimal
quantities.

Exact representation enables the Decimal class to perform modulo calculations and equality
tests that are unsuitable for binary floating point:

>>>

>>> Decimal('1.00') % Decimal('.10')
Decimal('0.00')
>>> 1.00 % 0.10
0.09999999999999995

>>> sum([Decimal('0.1')]*10) == Decimal('1.0')
True
>>> 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 == 1.0
False

The decimal module provides arithmetic with as much precision as needed:

>>>

>>> getcontext().prec = 36
>>> Decimal(1) / Decimal(7)
Decimal('0.142857142857142857142857142857142857')

12. Virtual Environments and Packages
12.1. Introduction
Python applications will often use packages and modules that don’t come as part of the standard
library. Applications will sometimes need a specific version of a library, because the application
may require that a particular bug has been fixed or the application may be written using an
obsolete version of the library’s interface.

This means it may not be possible for one Python installation to meet the requirements of every
application. If application A needs version 1.0 of a particular module but application B needs
version 2.0, then the requirements are in conflict and installing either version 1.0 or 2.0 will
leave one application unable to run.
The solution for this problem is to create a virtual environment, a self-contained directory tree
that contains a Python installation for a particular version of Python, plus a number of additional
packages.

Different applications can then use different virtual environments. To resolve the earlier example
of conflicting requirements, application A can have its own virtual environment with version 1.0
installed while application B has another virtual environment with version 2.0. If application B
requires a library be upgraded to version 3.0, this will not affect application A’s environment.

12.2. Creating Virtual Environments


The module used to create and manage virtual environments is called venv. venv will install
the Python version from which the command was run (as reported by the --version option).
For instance, executing the command with python3.12 will install version 3.12.

To create a virtual environment, decide upon a directory where you want to place it, and run
the venv module as a script with the directory path:

python -m venv tutorial-env

This will create the tutorial-env directory if it doesn’t exist, and also create directories
inside it containing a copy of the Python interpreter and various supporting files.

A common directory location for a virtual environment is .venv. This name keeps the directory
typically hidden in your shell and thus out of the way while giving it a name that explains why
the directory exists. It also prevents clashing with .env environment variable definition files
that some tooling supports.

Once you’ve created a virtual environment, you may activate it.

On Windows, run:

tutorial-env\Scripts\activate

On Unix or MacOS, run:

source tutorial-env/bin/activate

(This script is written for the bash shell. If you use the csh or fish shells, there are
alternate activate.csh and activate.fish scripts you should use instead.)
Activating the virtual environment will change your shell’s prompt to show what virtual
environment you’re using, and modify the environment so that running python will get you that
particular version and installation of Python. For example:

$ source ~/envs/tutorial-env/bin/activate
(tutorial-env) $ python
Python 3.5.1 (default, May 6 2016, 10:59:36)
...
>>> import sys
>>> sys.path
['', '/usr/local/lib/python35.zip', ...,
'~/envs/tutorial-env/lib/python3.5/site-packages']
>>>

To deactivate a virtual environment, type:

deactivate

into the terminal.

12.3. Managing Packages with pip


You can install, upgrade, and remove packages using a program called pip. By default pip will
install packages from the Python Package Index. You can browse the Python Package Index by
going to it in your web browser.

pip has a number of subcommands: “install”, “uninstall”, “freeze”, etc. (Consult the Installing
Python Modules guide for complete documentation for pip.)

You can install the latest version of a package by specifying a package’s name:

(tutorial-env) $ python -m pip install novas
Collecting novas
  Downloading novas-3.1.1.3.tar.gz (136kB)
Installing collected packages: novas
  Running setup.py install for novas
Successfully installed novas-3.1.1.3

You can also install a specific version of a package by giving the package name followed
by == and the version number:

(tutorial-env) $ python -m pip install requests==2.6.0
Collecting requests==2.6.0
  Using cached requests-2.6.0-py2.py3-none-any.whl
Installing collected packages: requests
Successfully installed requests-2.6.0

If you re-run this command, pip will notice that the requested version is already installed and do
nothing. You can supply a different version number to get that version, or you can
run python -m pip install --upgrade to upgrade the package to the latest version:

(tutorial-env) $ python -m pip install --upgrade requests
Collecting requests
Installing collected packages: requests
  Found existing installation: requests 2.6.0
    Uninstalling requests-2.6.0:
      Successfully uninstalled requests-2.6.0
Successfully installed requests-2.7.0

python -m pip uninstall followed by one or more package names will remove the
packages from the virtual environment.

python -m pip show will display information about a particular package:

(tutorial-env) $ python -m pip show requests
---
Metadata-Version: 2.0
Name: requests
Version: 2.7.0
Summary: Python HTTP for Humans.
Home-page: http://python-requests.org
Author: Kenneth Reitz
Author-email: me@kennethreitz.com
License: Apache 2.0
Location: /Users/akuchling/envs/tutorial-env/lib/python3.4/site-packages
Requires:

python -m pip list will display all of the packages installed in the virtual environment:

(tutorial-env) $ python -m pip list
novas (3.1.1.3)
numpy (1.9.2)
pip (7.0.3)
requests (2.7.0)
setuptools (16.0)

python -m pip freeze will produce a similar list of the installed packages, but the output
uses the format that python -m pip install expects. A common convention is to put this
list in a requirements.txt file:

(tutorial-env) $ python -m pip freeze > requirements.txt
(tutorial-env) $ cat requirements.txt
novas==3.1.1.3
numpy==1.9.2
requests==2.7.0

The requirements.txt can then be committed to version control and shipped as part of an
application. Users can then install all the necessary packages with install -r:

(tutorial-env) $ python -m pip install -r requirements.txt
Collecting novas==3.1.1.3 (from -r requirements.txt (line 1))
  ...
Collecting numpy==1.9.2 (from -r requirements.txt (line 2))
  ...
Collecting requests==2.7.0 (from -r requirements.txt (line 3))
  ...
Installing collected packages: novas, numpy, requests
  Running setup.py install for novas
Successfully installed novas-3.1.1.3 numpy-1.9.2 requests-2.7.0

pip has many more options. Consult the Installing Python Modules guide for complete
documentation for pip. When you’ve written a package and want to make it available on the
Python Package Index, consult the Python packaging user guide.

13. What Now?


Reading this tutorial has probably reinforced your interest in using Python —
you should be eager to apply Python to solving your real-world problems.
Where should you go to learn more?

This tutorial is part of Python’s documentation set. Some other documents in
the set are:
 The Python Standard Library:

You should browse through this manual, which gives complete (though
terse) reference material about types, functions, and the modules in
the standard library. The standard Python distribution includes a lot of
additional code. There are modules to read Unix mailboxes, retrieve
documents via HTTP, generate random numbers, parse command-line
options, compress data, and many other tasks. Skimming through the
Library Reference will give you an idea of what’s available.

 Installing Python Modules explains how to install additional modules
written by other Python users.
 The Python Language Reference: A detailed explanation of Python’s
syntax and semantics. It’s heavy reading, but is useful as a complete
guide to the language itself.

More Python resources:

 https://www.python.org: The major Python web site. It contains code,
documentation, and pointers to Python-related pages around the web.
 https://docs.python.org: Fast access to Python’s documentation.
 https://pypi.org: The Python Package Index, previously also nicknamed
the Cheese Shop [1], is an index of user-created Python modules that
are available for download. Once you begin releasing code, you can
register it here so that others can find it.
 https://code.activestate.com/recipes/langs/python/ : The Python
Cookbook is a sizable collection of code examples, larger modules, and
useful scripts. Particularly notable contributions are collected in a book
also titled Python Cookbook (O’Reilly & Associates, ISBN 0-596-00797-
3.)
 https://pyvideo.org collects links to Python-related videos from
conferences and user-group meetings.
 https://scipy.org: The Scientific Python project includes modules for
fast array computations and manipulations plus a host of packages for
such things as linear algebra, Fourier transforms, non-linear solvers,
random number distributions, statistical analysis and the like.

For Python-related questions and problem reports, you can post to the
newsgroup comp.lang.python, or send them to the mailing list at
python-list@python.org. The newsgroup and mailing list are gatewayed, so
messages posted to one will automatically be forwarded to the other. There
are hundreds of postings a day, asking (and answering) questions,
suggesting new features, and announcing new modules. Mailing list archives
are available at https://mail.python.org/pipermail/.
Before posting, be sure to check the list of Frequently Asked Questions (also
called the FAQ). The FAQ answers many of the questions that come up again
and again, and may already contain the solution for your problem.

Footnotes

[1]

“Cheese Shop” is a Monty Python sketch: a customer enters a cheese shop, but whatever
cheese he asks for, the clerk says it’s missing.

14. Interactive Input Editing and History Substitution
Some versions of the Python interpreter support editing of the current input
line and history substitution, similar to facilities found in the Korn shell and
the GNU Bash shell. This is implemented using the GNU Readline library,
which supports various styles of editing. This library has its own
documentation which we won’t duplicate here.

14.1. Tab Completion and History Editing


Completion of variable and module names is automatically enabled at interpreter startup so that
the Tab key invokes the completion function; it looks at Python statement names, the current
local variables, and the available module names. For dotted expressions such as string.a, it
will evaluate the expression up to the final '.' and then suggest completions from the attributes
of the resulting object. Note that this may execute application-defined code if an object with
a __getattr__() method is part of the expression. The default configuration also saves your
history into a file named .python_history in your user directory. The history will be
available again during the next interactive interpreter session.

14.2. Alternatives to the Interactive Interpreter


This facility is an enormous step forward compared to earlier versions of the interpreter;
however, some wishes are left: It would be nice if the proper indentation were suggested on
continuation lines (the parser knows if an indent token is required next). The completion
mechanism might use the interpreter’s symbol table. A command to check (or even suggest)
matching parentheses, quotes, etc., would also be useful.

One alternative enhanced interactive interpreter that has been around for quite some time
is IPython, which features tab completion, object exploration and advanced history management.
It can also be thoroughly customized and embedded into other applications. Another similar
enhanced interactive environment is bpython.

15. Floating-Point Arithmetic: Issues and Limitations
Floating-point numbers are represented in computer hardware as base 2
(binary) fractions. For example, the decimal fraction 0.625 has value 6/10 +
2/100 + 5/1000, and in the same way the binary fraction 0.101 has value
1/2 + 0/4 + 1/8. These two fractions have identical values, the only real
difference being that the first is written in base 10 fractional notation, and
the second in base 2.

Unfortunately, most decimal fractions cannot be represented exactly as
binary fractions. A consequence is that, in general, the decimal floating-point
numbers you enter are only approximated by the binary floating-point
numbers actually stored in the machine.

The problem is easier to understand at first in base 10. Consider the fraction
1/3. You can approximate that as a base 10 fraction:

0.3

or, better,

0.33

or, better,

0.333

and so on. No matter how many digits you’re willing to write down, the result
will never be exactly 1/3, but will be an increasingly better approximation of
1/3.

In the same way, no matter how many base 2 digits you’re willing to use, the
decimal value 0.1 cannot be represented exactly as a base 2 fraction. In
base 2, 1/10 is the infinitely repeating fraction
0.0001100110011001100110011001100110011001100110011...

Stop at any finite number of bits, and you get an approximation. On most
machines today, floats are approximated using a binary fraction with the
numerator using the first 53 bits starting with the most significant bit and
with the denominator as a power of two. In the case of 1/10, the binary
fraction is 3602879701896397 / 2 ** 55 which is close to but not exactly
equal to the true value of 1/10.

Many users are not aware of the approximation because of the way values
are displayed. Python only prints a decimal approximation to the true
decimal value of the binary approximation stored by the machine. On most
machines, if Python were to print the true decimal value of the binary
approximation stored for 0.1, it would have to display:

>>>

>>> 0.1
0.1000000000000000055511151231257827021181583404541015625

That is more digits than most people find useful, so Python keeps the
number of digits manageable by displaying a rounded value instead:

>>>

>>> 1 / 10
0.1

Just remember, even though the printed result looks like the exact value of
1/10, the actual stored value is the nearest representable binary fraction.

Interestingly, there are many different decimal numbers that share the same
nearest approximate binary fraction. For example, the numbers 0.1,
0.10000000000000001, and
0.1000000000000000055511151231257827021181583404541015625 are all approximated
by 3602879701896397 / 2 ** 55. Since all of these decimal values share the
same approximation, any one of them could be displayed while still
preserving the invariant eval(repr(x)) == x.
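A brief interactive check of that invariant:

>>> repr(0.1)
'0.1'
>>> eval(repr(0.1)) == 0.1
True
>>> 0.1 == 0.10000000000000001     # both literals map to the same binary fraction
True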

Historically, the Python prompt and built-in repr() function would choose the
one with 17 significant digits, 0.10000000000000001. Starting with Python
3.1, Python (on most systems) is now able to choose the shortest of these
and simply display 0.1.
Note that this is in the very nature of binary floating point: this is not a bug in
Python, and it is not a bug in your code either. You’ll see the same kind of
thing in all languages that support your hardware’s floating-point arithmetic
(although some languages may not display the difference by default, or in all
output modes).

For more pleasant output, you may wish to use string formatting to produce
a limited number of significant digits:

>>>

>>> format(math.pi, '.12g')  # give 12 significant digits
'3.14159265359'

>>> format(math.pi, '.2f')   # give 2 digits after the point
'3.14'

>>> repr(math.pi)
'3.141592653589793'

It’s important to realize that this is, in a real sense, an illusion: you’re simply
rounding the display of the true machine value.

One illusion may beget another. For example, since 0.1 is not exactly 1/10,
summing three values of 0.1 may not yield exactly 0.3, either:

>>>

>>> 0.1 + 0.1 + 0.1 == 0.3
False

Also, since the 0.1 cannot get any closer to the exact value of 1/10 and 0.3
cannot get any closer to the exact value of 3/10, then pre-rounding
with round() function cannot help:

>>>

>>> round(0.1, 1) + round(0.1, 1) + round(0.1, 1) == round(0.3, 1)
False

Though the numbers cannot be made closer to their intended exact values,
the math.isclose() function can be useful for comparing inexact values:

>>>
>>> math.isclose(0.1 + 0.1 + 0.1, 0.3)
True

Alternatively, the round() function can be used to compare rough
approximations:

>>>

>>> round(math.pi, ndigits=2) == round(22 / 7, ndigits=2)
True

Binary floating-point arithmetic holds many surprises like this. The problem
with “0.1” is explained in precise detail below, in the “Representation Error”
section. See Examples of Floating Point Problems for a pleasant summary of
how binary floating point works and the kinds of problems commonly
encountered in practice. Also see The Perils of Floating Point for a more
complete account of other common surprises.

As that says near the end, “there are no easy answers.” Still, don’t be unduly
wary of floating point! The errors in Python float operations are inherited
from the floating-point hardware, and on most machines are on the order of
no more than 1 part in 2**53 per operation. That’s more than adequate for
most tasks, but you do need to keep in mind that it’s not decimal arithmetic
and that every float operation can suffer a new rounding error.

While pathological cases do exist, for most casual use of floating-point
arithmetic you’ll see the result you expect in the end if you simply round the
display of your final results to the number of decimal digits you
expect. str() usually suffices, and for finer control see
the str.format() method’s format specifiers in Format String Syntax.

For use cases which require exact decimal representation, try using
the decimal module which implements decimal arithmetic suitable for
accounting applications and high-precision applications.

Another form of exact arithmetic is supported by the fractions module,
which implements arithmetic based on rational numbers (so numbers
like 1/3 can be represented exactly).
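For example, a short session with the fractions module:

>>> from fractions import Fraction
>>> Fraction(1, 3) + Fraction(1, 6)
Fraction(1, 2)
>>> Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)   # exact, unlike 0.1 + 0.2
True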

If you are a heavy user of floating-point operations you should take a look at
the NumPy package and many other packages for mathematical and
statistical operations supplied by the SciPy project. See <https://scipy.org>.
Python provides tools that may help on those rare occasions when you
really do want to know the exact value of a float.
The float.as_integer_ratio() method expresses the value of a float as a
fraction:

>>>

>>> x = 3.14159
>>> x.as_integer_ratio()
(3537115888337719, 1125899906842624)

Since the ratio is exact, it can be used to losslessly recreate the original
value:

>>>

>>> x == 3537115888337719 / 1125899906842624
True

The float.hex() method expresses a float in hexadecimal (base 16), again
giving the exact value stored by your computer:

>>>

>>> x.hex()
'0x1.921f9f01b866ep+1'

This precise hexadecimal representation can be used to reconstruct the float
value exactly:

>>>

>>> x == float.fromhex('0x1.921f9f01b866ep+1')
True

Since the representation is exact, it is useful for reliably porting values
across different versions of Python (platform independence) and exchanging
data with other languages that support the same format (such as Java and
C99).

Another helpful tool is the sum() function which helps mitigate loss-of-
precision during summation. It uses extended precision for intermediate
rounding steps as values are added onto a running total. That can make a
difference in overall accuracy so that the errors do not accumulate to the
point where they affect the final total:

>>>

>>> 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 == 1.0
False
>>> sum([0.1] * 10) == 1.0
True

The math.fsum() goes further and tracks all of the “lost digits” as values are
added onto a running total so that the result has only a single rounding. This
is slower than sum() but will be more accurate in uncommon cases where
large magnitude inputs mostly cancel each other out leaving a final sum
near zero:

>>>

>>> arr = [-0.10430216751806065, -266310978.67179024, 143401161448607.16,
...        -143401161400469.7, 266262841.31058735, -0.003244936839808227]
>>> float(sum(map(Fraction, arr)))   # Exact summation with single rounding
8.042173697819788e-13
>>> math.fsum(arr)                   # Single rounding
8.042173697819788e-13
>>> sum(arr)                         # Multiple roundings in extended precision
8.042178034628478e-13
>>> total = 0.0
>>> for x in arr:
...     total += x                   # Multiple roundings in standard precision
...
>>> total                            # Straight addition has no correct digits!
-0.0051575902860057365
15.1. Representation Error
This section explains the “0.1” example in detail, and shows how you can perform an exact
analysis of cases like this yourself. Basic familiarity with binary floating-point representation is
assumed.

Representation error refers to the fact that some (most, actually) decimal fractions cannot be
represented exactly as binary (base 2) fractions. This is the chief reason why Python (or Perl, C,
C++, Java, Fortran, and many others) often won’t display the exact decimal number you expect.

Why is that? 1/10 is not exactly representable as a binary fraction. Since at least 2000, almost all
machines use IEEE 754 binary floating-point arithmetic, and almost all platforms map Python
floats to IEEE 754 binary64 “double precision” values. IEEE 754 binary64 values contain 53 bits
of precision, so on input the computer strives to convert 0.1 to the closest fraction it can of the
form J/2**N where J is an integer containing exactly 53 bits. Rewriting

1 / 10 ~= J / (2**N)

as

J ~= 2**N / 10

and recalling that J has exactly 53 bits (is >= 2**52 but < 2**53), the best value for N is 56:

>>>

>>> 2**52 <= 2**56 // 10 < 2**53
True

That is, 56 is the only value for N that leaves J with exactly 53 bits. The best possible value
for J is then that quotient rounded:

>>>

>>> q, r = divmod(2**56, 10)
>>> r
6

Since the remainder is more than half of 10, the best approximation is obtained by rounding up:

>>>
>>> q+1
7205759403792794

Therefore the best possible approximation to 1/10 in IEEE 754 double precision is:

7205759403792794 / 2 ** 56

Dividing both the numerator and denominator by two reduces the fraction to:

3602879701896397 / 2 ** 55

Note that since we rounded up, this is actually a little bit larger than 1/10; if we had not rounded
up, the quotient would have been a little bit smaller than 1/10. But in no case can it
be exactly 1/10!

So the computer never “sees” 1/10: what it sees is the exact fraction given above, the best IEEE
754 double approximation it can get:

>>>

>>> 0.1 * 2 ** 55
3602879701896397.0

If we multiply that fraction by 10**55, we can see the value out to 55 decimal digits:

>>>

>>> 3602879701896397 * 10 ** 55 // 2 ** 55
1000000000000000055511151231257827021181583404541015625

meaning that the exact number stored in the computer is equal to the decimal value
0.1000000000000000055511151231257827021181583404541015625. Instead of displaying the
full decimal value, many languages (including older versions of Python), round the result to 17
significant digits:

>>>

>>> format(0.1, '.17f')
'0.10000000000000001'

The fractions and decimal modules make these calculations easy:


>>>

>>> from decimal import Decimal
>>> from fractions import Fraction

>>> Fraction.from_float(0.1)
Fraction(3602879701896397, 36028797018963968)

>>> (0.1).as_integer_ratio()
(3602879701896397, 36028797018963968)

>>> Decimal.from_float(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')

>>> format(Decimal.from_float(0.1), '.17')
'0.10000000000000001'

16. Appendix
16.1. Interactive Mode
16.1.1. Error Handling
When an error occurs, the interpreter prints an error message and a stack trace. In interactive
mode, it then returns to the primary prompt; when input came from a file, it exits with a nonzero
exit status after printing the stack trace. (Exceptions handled by an except clause in
a try statement are not errors in this context.) Some errors are unconditionally fatal and cause
an exit with a nonzero exit status; this applies to internal inconsistencies and some cases of
running out of memory. All error messages are written to the standard error stream; normal
output from executed commands is written to standard output.

Typing the interrupt character (usually Control-C or Delete) to the primary or secondary
prompt cancels the input and returns to the primary prompt. [1] Typing an interrupt while a
command is executing raises the KeyboardInterrupt exception, which may be handled by
a try statement.

16.1.2. Executable Python Scripts


On BSD’ish Unix systems, Python scripts can be made directly executable, like shell scripts, by
putting the line
#!/usr/bin/env python3

(assuming that the interpreter is on the user’s PATH) at the beginning of the script and giving the
file an executable mode. The #! must be the first two characters of the file. On some platforms,
this first line must end with a Unix-style line ending ('\n'), not a Windows ('\r\n') line
ending. Note that the hash, or pound, character, '#', is used to start a comment in Python.

The script can be given an executable mode, or permission, using the chmod command.

$ chmod +x myscript.py

On Windows systems, there is no notion of an “executable mode”. The Python installer


automatically associates .py files with python.exe so that a double-click on a Python file will
run it as a script. The extension can also be .pyw; in that case, the console window that normally
appears is suppressed.
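
Putting the pieces together, a hypothetical myscript.py could look like the sketch below; after chmod +x myscript.py it can be run on Unix as ./myscript.py (the file name and message are only examples):

#!/usr/bin/env python3
# myscript.py - a minimal directly executable script
import sys

print("Running under Python", sys.version.split()[0])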

16.1.3. The Interactive Startup File


When you use Python interactively, it is frequently handy to have some standard commands
executed every time the interpreter is started. You can do this by setting an environment variable
named PYTHONSTARTUP to the name of a file containing your start-up commands. This is
similar to the .profile feature of the Unix shells.

This file is only read in interactive sessions, not when Python reads commands from a script, and
not when /dev/tty is given as the explicit source of commands (which otherwise behaves like
an interactive session). It is executed in the same namespace where interactive commands are
executed, so that objects that it defines or imports can be used without qualification in the
interactive session. You can also change the prompts sys.ps1 and sys.ps2 in this file.

If you want to read an additional start-up file from the current directory, you can program this in
the global start-up file using code like
if os.path.isfile('.pythonrc.py'): exec(open('.pythonrc.py').read()).
If you want to use the startup file in a script, you must do this explicitly in the script:

import os
filename = os.environ.get('PYTHONSTARTUP')
if filename and os.path.isfile(filename):
    with open(filename) as fobj:
        startup_file = fobj.read()
    exec(startup_file)
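
As an illustration only, a start-up file pointed to by PYTHONSTARTUP might change the prompts and pre-import a module; the file name and contents below are an assumption, not a prescribed format:

# ~/.pythonstartup (referenced by the PYTHONSTARTUP environment variable)
import sys
import os       # pre-imported so it is available in every interactive session

sys.ps1 = 'py> '    # primary prompt
sys.ps2 = '.... '   # secondary prompt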
16.1.4. The Customization Modules
Python provides two hooks to let you customize it: sitecustomize and usercustomize. To see how
it works, you need first to find the location of your user site-packages directory. Start Python and
run this code:

>>> import site
>>> site.getusersitepackages()
'/home/user/.local/lib/python3.x/site-packages'

Now you can create a file named usercustomize.py in that directory and put anything you
want in it. It will affect every invocation of Python, unless it is started with the -s option to
disable the automatic import.

sitecustomize works in the same way, but is typically created by an administrator of the
computer in the global site-packages directory, and is imported before usercustomize. See the
documentation of the site module for more details.
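
For example, a hypothetical usercustomize.py placed in the user site-packages directory could enable faulthandler for every interpreter run (unless -s is used); this is only a sketch of what such a file might contain:

# usercustomize.py - imported automatically at startup (unless Python is run with -s)
import faulthandler

# Dump a traceback on fatal errors such as segfaults.
faulthandler.enable()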

Footnotes

[1]

A problem with the GNU Readline package may prevent this.

Python Setup and Usage


This part of the documentation is devoted to general information on the
setup of the Python environment on different platforms, the invocation of the
interpreter and things that make working with Python easier.

 1. Command line and environment


o 1.1. Command line
 1.1.1. Interface options
 1.1.2. Generic options
 1.1.3. Miscellaneous options
 1.1.4. Options you shouldn’t use
o 1.2. Environment variables
 1.2.1. Debug-mode variables
 2. Using Python on Unix platforms
o 2.1. Getting and installing the latest version of Python
 2.1.1. On Linux
 2.1.2. On FreeBSD and OpenBSD
o 2.2. Building Python
o 2.3. Python-related paths and files
o 2.4. Miscellaneous
o 2.5. Custom OpenSSL
 3. Configure Python
o 3.1. Build Requirements
o 3.2. Generated files
 3.2.1. configure script
o 3.3. Configure Options
 3.3.1. General Options
 3.3.2. WebAssembly Options
 3.3.3. Install Options
 3.3.4. Performance options
 3.3.5. Python Debug Build
 3.3.6. Debug options
 3.3.7. Linker options
 3.3.8. Libraries options
 3.3.9. Security Options
 3.3.10. macOS Options
 3.3.11. Cross Compiling Options
o 3.4. Python Build System
 3.4.1. Main files of the build system
 3.4.2. Main build steps
 3.4.3. Main Makefile targets
 3.4.4. C extensions
o 3.5. Compiler and linker flags
 3.5.1. Preprocessor flags
 3.5.2. Compiler flags
 3.5.3. Linker flags
 4. Using Python on Windows
o 4.1. The full installer
 4.1.1. Installation steps
 4.1.2. Removing the MAX_PATH Limitation
 4.1.3. Installing Without UI
 4.1.4. Installing Without Downloading
 4.1.5. Modifying an install
o 4.2. The Microsoft Store package
 4.2.1. Known issues
 4.2.1.1. Redirection of local data, registry, and
temporary paths
o 4.3. The nuget.org packages
o 4.4. The embeddable package
 4.4.1. Python Application
 4.4.2. Embedding Python
o 4.5. Alternative bundles
o 4.6. Configuring Python
 4.6.1. Excursus: Setting environment variables
 4.6.2. Finding the Python executable
o 4.7. UTF-8 mode
o 4.8. Python Launcher for Windows
 4.8.1. Getting started
 4.8.1.1. From the command-line
 4.8.1.2. Virtual environments
 4.8.1.3. From a script
 4.8.1.4. From file associations
 4.8.2. Shebang Lines
 4.8.3. Arguments in shebang lines
 4.8.4. Customization
 4.8.4.1. Customization via INI files
 4.8.4.2. Customizing default Python versions
 4.8.5. Diagnostics
 4.8.6. Dry Run
 4.8.7. Install on demand
 4.8.8. Return codes
o 4.9. Finding modules
o 4.10. Additional modules
 4.10.1. PyWin32
 4.10.2. cx_Freeze
o 4.11. Compiling Python on Windows
o 4.12. Other Platforms
 5. Using Python on a Mac
o 5.1. Getting and Installing Python
 5.1.1. How to run a Python script
 5.1.2. Running scripts with a GUI
 5.1.3. Configuration
o 5.2. The IDE
o 5.3. Installing Additional Python Packages
o 5.4. GUI Programming
o 5.5. Distributing Python Applications
o 5.6. Other Resources
 6. Editors and IDEs

2. Using Python on Unix platforms
2.1. Getting and installing the latest version of
Python
2.1.1. On Linux
Python comes preinstalled on most Linux distributions, and is available as a package on all
others. However there are certain features you might want to use that are not available on your
distro’s package. You can easily compile the latest version of Python from source.

In the event that Python doesn’t come preinstalled and isn’t in the repositories as well, you can
easily make packages for your own distro. Have a look at the following links:

See also
https://www.debian.org/doc/manuals/maint-guide/first.en.html

for Debian users


https://en.opensuse.org/Portal:Packaging

for OpenSuse users


https://docs.fedoraproject.org/en-US/package-maintainers/
Packaging_Tutorial_GNU_Hello/

for Fedora users


https://slackbook.org/html/package-management-making-packages.html

for Slackware users


2.1.2. On FreeBSD and OpenBSD
 FreeBSD users, to add the package use:

  pkg install python3

 OpenBSD users, to add the package use:

  pkg_add -r python

  pkg_add ftp://ftp.openbsd.org/pub/OpenBSD/4.2/packages/<insert your architecture here>/python-<version>.tgz

For example i386 users get the 2.5.1 version of Python using:

  pkg_add ftp://ftp.openbsd.org/pub/OpenBSD/4.2/packages/i386/python-2.5.1p2.tgz

2.2. Building Python


If you want to compile CPython yourself, first thing you should do is get
the source. You can download either the latest release’s source or just grab a
fresh clone. (If you want to contribute patches, you will need a clone.)

The build process consists of the usual commands:

./configure
make
make install

Configuration options and caveats for specific Unix platforms are extensively
documented in the README.rst file in the root of the Python source tree.

Warning

make install can overwrite or masquerade


the python3 binary. make altinstall is therefore recommended instead
of make install since it only
installs exec_prefix/bin/pythonversion.
2.3. Python-related paths and files
These are subject to difference depending on local installation
conventions; prefix and exec_prefix are installation-dependent and
should be interpreted as for GNU software; they may be the same.

For example, on most Linux systems, the default for both is /usr.

File/directory: exec_prefix/bin/python3
Meaning: Recommended location of the interpreter.

File/directory: prefix/lib/pythonversion, exec_prefix/lib/pythonversion
Meaning: Recommended locations of the directories containing the standard modules.

File/directory: prefix/include/pythonversion, exec_prefix/include/pythonversion
Meaning: Recommended locations of the directories containing the include files needed for
developing Python extensions and embedding the interpreter.

2.4. Miscellaneous
To easily use Python scripts on Unix, you need to make them executable, e.g.
with

$ chmod +x script

and put an appropriate Shebang line at the top of the script. A good choice is
usually

#!/usr/bin/env python3

which searches for the Python interpreter in the whole PATH. However, some
Unices may not have the env command, so you may need to
hardcode /usr/bin/python3 as the interpreter path.

To use shell commands in your Python scripts, look at


the subprocess module.
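
As a small illustration (the uname invocation is only an example command), running a shell command and capturing its output with subprocess might look like this:

import subprocess

# Run a command, capture its output as text, and raise if it fails.
result = subprocess.run(["uname", "-a"], capture_output=True, text=True, check=True)
print(result.stdout)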
2.5. Custom OpenSSL
1. To use your vendor's OpenSSL configuration and system trust store, locate the directory
   with the openssl.cnf file or symlink in /etc. On most distributions the file is either
   in /etc/ssl or /etc/pki/tls. The directory should also contain a cert.pem file and/or
   a certs directory.

   $ find /etc/ -name openssl.cnf -printf "%h\n"
   /etc/ssl

2. Download, build, and install OpenSSL. Make sure you use install_sw and not install.
   The install_sw target does not override openssl.cnf.

   $ curl -O https://www.openssl.org/source/openssl-VERSION.tar.gz
   $ tar xzf openssl-VERSION
   $ pushd openssl-VERSION
   $ ./config \
       --prefix=/usr/local/custom-openssl \
       --libdir=lib \
       --openssldir=/etc/ssl
   $ make -j1 depend
   $ make -j8
   $ make install_sw
   $ popd

3. Build Python with the custom OpenSSL (see the configure --with-openssl
   and --with-openssl-rpath options):

   $ pushd python-3.x.x
   $ ./configure -C \
       --with-openssl=/usr/local/custom-openssl \
       --with-openssl-rpath=auto \
       --prefix=/usr/local/python-3.x.x
   $ make -j8
   $ make altinstall
Note

Patch releases of OpenSSL have a backwards compatible ABI. You don’t


need to recompile Python to update OpenSSL. It’s sufficient to replace the
custom OpenSSL installation with a newer version.
3. Configure Python
3.1. Build Requirements
Features required to build CPython:

 A C11 compiler. Optional C11 features are not required.


 Support for IEEE 754 floating-point numbers and floating-point Not-a-Number (NaN).
 Support for threads.
 OpenSSL 1.1.1 or newer for the ssl and hashlib modules.
 On Windows, Microsoft Visual Studio 2017 or later is required.

Changed in version 3.5: On Windows, Visual Studio 2015 or later is required.

Changed in version 3.6: Selected C99 features are now required,


like <stdint.h> and static inline functions.

Changed in version 3.7: Thread support and OpenSSL 1.0.2 are now required.

Changed in version 3.10: OpenSSL 1.1.1 is now required.

Changed in version 3.11: C11 compiler, IEEE 754 and NaN support are now required. On
Windows, Visual Studio 2017 or later is required.

See also PEP 7 “Style Guide for C Code” and PEP 11 “CPython platform support”.

3.2. Generated files


To reduce build dependencies, Python source code contains multiple generated files. Commands
to regenerate all generated files:

make regen-all
make regen-stdlib-module-names
make regen-limited-abi
make regen-configure

The Makefile.pre.in file documents generated files, their inputs, and tools used to
regenerate them. Search for regen-* make targets.
3.2.1. configure script
The make regen-configure command regenerates the aclocal.m4 file and
the configure script using the Tools/build/regen-configure.sh shell script which
uses an Ubuntu container to get the same tools versions and have a reproducible output.

The container is optional, the following command can be run locally:

autoreconf -ivf -Werror

The generated files can change depending on the exact autoconf-


archive, aclocal and pkg-config versions.

3.3. Configure Options


List all ./configure script options using:

./configure --help

See also the Misc/SpecialBuilds.txt in the Python source distribution.

3.3.1. General Options


--enable-loadable-sqlite-extensions

Support loadable extensions in the _sqlite extension module (default is no) of


the sqlite3 module.

See the sqlite3.Connection.enable_load_extension() method of


the sqlite3 module.

Added in version 3.6.
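
As a sketch of how the feature is used at the Python level (the extension file name below is hypothetical, and the calls only work if Python was built with this option):

import sqlite3

con = sqlite3.connect(":memory:")
con.enable_load_extension(True)   # only available with --enable-loadable-sqlite-extensions
con.load_extension("./fts5")      # hypothetical extension library path
con.enable_load_extension(False)
con.close()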

--disable-ipv6

Disable IPv6 support (enabled by default if supported), see the socket module.
--enable-big-digits=[15|30]

Define the size in bits of Python int digits: 15 or 30 bits.

By default, the digit size is 30.

Define the PYLONG_BITS_IN_DIGIT to 15 or 30.


See sys.int_info.bits_per_digit.
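
The digit size chosen at build time can be inspected from a running interpreter; the value shown is the usual default:

>>> import sys
>>> sys.int_info.bits_per_digit
30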
--with-suffix=SUFFIX

Set the Python executable suffix to SUFFIX.

The default suffix is .exe on Windows and macOS (python.exe executable), .js on
Emscripten node, .html on Emscripten browser, .wasm on WASI, and an empty string
on other platforms (python executable).

Changed in version 3.11: The default suffix on WASM platform is one


of .js, .html or .wasm.

--with-tzpath=<list of absolute paths separated by pathsep>

Select the default time zone search path for zoneinfo.TZPATH. See the Compile-time
configuration of the zoneinfo module.

Default: /usr/share/zoneinfo:/usr/lib/zoneinfo:/usr/share/lib/
zoneinfo:/etc/zoneinfo.

See os.pathsep path separator.

Added in version 3.9.
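
The compiled-in search path can be checked at runtime; the tuple shown below is what a build using the default value reports and may differ on your system:

>>> import zoneinfo
>>> zoneinfo.TZPATH
('/usr/share/zoneinfo', '/usr/lib/zoneinfo', '/usr/share/lib/zoneinfo', '/etc/zoneinfo')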

--without-decimal-contextvar

Build the _decimal extension module using a thread-local context rather than a
coroutine-local context (default), see the decimal module.

See decimal.HAVE_CONTEXTVAR and the contextvars module.

Added in version 3.9.

--with-dbmliborder=<list of backend names>

Override order to check db backends for the dbm module

A valid value is a colon (:) separated string with the backend names:

 ndbm;
 gdbm;
 bdb.
--without-c-locale-coercion

Disable C locale coercion to a UTF-8 based locale (enabled by default).


Don’t define the PY_COERCE_C_LOCALE macro.

See PYTHONCOERCECLOCALE and the PEP 538.


--without-freelists

Disable all freelists except the empty tuple singleton.

Added in version 3.11.

--with-platlibdir=DIRNAME

Python library directory name (default is lib).

Fedora and SuSE use lib64 on 64-bit platforms.

See sys.platlibdir.

Added in version 3.9.

--with-wheel-pkg-dir=PATH

Directory of wheel packages used by the ensurepip module (none by default).

Some Linux distribution packaging policies recommend against bundling dependencies.


For example, Fedora installs wheel packages in the /usr/share/python-
wheels/ directory and does not install the ensurepip._bundled package.

Added in version 3.10.

--with-pkg-config=[check|yes|no]

Whether configure should use pkg-config to detect build dependencies.

 check (default): pkg-config is optional


 yes: pkg-config is mandatory
 no: configure does not use pkg-config even when present

Added in version 3.11.

--enable-pystats

Turn on internal statistics gathering.


The statistics will be dumped to an arbitrary (probably unique) file in /tmp/py_stats/,
or C:\temp\py_stats\ on Windows. If that directory does not exist, results will be
printed on stdout.

Use Tools/scripts/summarize_stats.py to read the stats.

Added in version 3.11.

3.3.2. WebAssembly Options
--with-emscripten-target=[browser|node]

Set build flavor for wasm32-emscripten.

 browser (default): preload minimal stdlib, default MEMFS.


 node: NODERAWFS and pthread support.

Added in version 3.11.

--enable-wasm-dynamic-linking

Turn on dynamic linking support for WASM.

Dynamic linking enables dlopen. File size of the executable increases due to limited
dead code elimination and additional features.

Added in version 3.11.

--enable-wasm-pthreads

Turn on pthreads support for WASM.

Added in version 3.11.

3.3.3. Install Options
--prefix=PREFIX

Install architecture-independent files in PREFIX. On Unix, it defaults to /usr/local.


This value can be retrieved at runtime using sys.prefix.

As an example, one can use --prefix="$HOME/.local/" to install a Python in its


home directory.
--exec-prefix=EPREFIX

Install architecture-dependent files in EPREFIX, defaults to --prefix.

This value can be retrieved at runtime using sys.exec_prefix.
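
Both values can be inspected from a running interpreter; the /usr/local paths shown here are just the common default and depend on the configure options used:

>>> import sys
>>> sys.prefix
'/usr/local'
>>> sys.exec_prefix
'/usr/local'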


--disable-test-modules

Don’t build nor install test modules, like the test package or the _testcapi extension
module (built and installed by default).

Added in version 3.10.

--with-ensurepip=[upgrade|install|no]

Select the ensurepip command run on Python installation:

 upgrade (default): run python -m ensurepip --altinstall --


upgrade command.
 install: run python -m ensurepip --altinstall command;
 no: don’t run ensurepip;

Added in version 3.6.


3.3.4. Performance options
Configuring Python using --enable-optimizations --with-lto (PGO + LTO) is recommended
for best performance. The experimental --enable-bolt flag can also be used to improve
performance.

--enable-optimizations

Enable Profile Guided Optimization (PGO) using PROFILE_TASK (disabled by default).

The C compiler Clang requires llvm-profdata program for PGO. On macOS, GCC
also requires it: GCC is just an alias to Clang on macOS.

Disable also semantic interposition in libpython if --enable-shared and GCC is


used: add -fno-semantic-interposition to the compiler and linker flags.

Note

During the build, you may encounter compiler warnings about profile data not being
available for some source files. These warnings are harmless, as only a subset of the code
is exercised during profile data acquisition. To disable these warnings on Clang,
manually suppress them by adding -Wno-profile-instr-unprofiled to CFLAGS.

Added in version 3.6.

Changed in version 3.10: Use -fno-semantic-interposition on GCC.


PROFILE_TASK

Environment variable used in the Makefile: Python command line arguments for the PGO
generation task.

Default: -m test --pgo --timeout=$(TESTTIMEOUT).

Added in version 3.8.

--with-lto=[full|thin|no|yes]

Enable Link Time Optimization (LTO) in any build (disabled by default).

The C compiler Clang requires llvm-ar for LTO (ar on macOS), as well as an LTO-
aware linker (ld.gold or lld).

Added in version 3.6.

Added in version 3.11: To use ThinLTO feature, use --with-lto=thin on Clang.

Changed in version 3.12: Use ThinLTO as the default optimization policy on Clang if the
compiler accepts the flag.
--enable-bolt

Enable usage of the BOLT post-link binary optimizer (disabled by default).

BOLT is part of the LLVM project but is not always included in their binary
distributions. This flag requires that llvm-bolt and merge-fdata are available.

BOLT is still a fairly new project so this flag should be considered experimental for now.
Because this tool operates on machine code its success is dependent on a combination of
the build environment + the other optimization configure args + the CPU architecture,
and not all combinations are supported. BOLT versions before LLVM 16 are known to
crash BOLT under some scenarios. Use of LLVM 16 or newer for BOLT optimization is
strongly encouraged.

The BOLT_INSTRUMENT_FLAGS and BOLT_APPLY_FLAGS configure variables can


be defined to override the default set of arguments for llvm-bolt to instrument and apply
BOLT data to binaries, respectively.

Added in version 3.12.

--with-computed-gotos

Enable computed gotos in evaluation loop (enabled by default on supported compilers).


--without-pymalloc

Disable the specialized Python memory allocator pymalloc (enabled by default).

See also PYTHONMALLOC environment variable.


--without-doc-strings

Disable static documentation strings to reduce the memory footprint (enabled by default).
Documentation strings defined in Python are not affected.

Don’t define the WITH_DOC_STRINGS macro.

See the PyDoc_STRVAR() macro.


--enable-profiling

Enable C-level code profiling with gprof (disabled by default).


--with-strict-overflow

Add -fstrict-overflow to the C compiler flags (by default we add -fno-strict-overflow instead).
3.3.5. Python Debug Build
A debug build is Python built with the --with-pydebug configure option.

Effects of a debug build:

 Display all warnings by default: the list of default warning filters is empty in the warnings module.
 Add d to sys.abiflags.
 Add the sys.gettotalrefcount() function.
 Add the -X showrefcount command line option.
 Add the -d command line option and the PYTHONDEBUG environment variable to debug the parser.
 Add support for the __lltrace__ variable: enable low-level tracing in the bytecode evaluation loop if the variable is defined.
 Install debug hooks on memory allocators to detect buffer overflow and other memory errors.
 Define the Py_DEBUG and Py_REF_DEBUG macros.
 Add runtime checks: code surrounded by #ifdef Py_DEBUG and #endif. Enable assert(...) and _PyObject_ASSERT(...) assertions: don't set the NDEBUG macro (see also the --with-assertions configure option). Main checks:
o Add sanity checks on the function arguments.
o Unicode and int objects are created with their memory filled with a pattern to detect usage of uninitialized objects.
o Ensure that functions which can clear or replace the current exception are not called with an exception raised.
o Check that deallocator functions don't change the current exception.
o The garbage collector (gc.collect() function) runs some basic checks on objects' consistency.
o The Py_SAFE_DOWNCAST() macro checks for integer underflow and overflow when downcasting from wide types to narrow types.

See also the Python Development Mode and the --with-trace-refs configure option.

Changed in version 3.8: Release builds and debug builds are now ABI
compatible: defining the Py_DEBUG macro no longer
implies the Py_TRACE_REFS macro
(see the --with-trace-refs option), which introduces the
only ABI incompatibility.
3.3.6. Debug options
--with-pydebug

Build Python in debug mode: define the Py_DEBUG macro (disabled by default).
--with-trace-refs

Enable tracing references for debugging purpose (disabled by default).

Effects:

 Define the Py_TRACE_REFS macro.


 Add sys.getobjects() function.
 Add PYTHONDUMPREFS environment variable.

This build is not ABI compatible with release build (default build) or debug build
(Py_DEBUG and Py_REF_DEBUG macros).

Added in version 3.8.


--with-assertions

Build with C assertions enabled (default is


no): assert(...); and _PyObject_ASSERT(...);.

If set, the NDEBUG macro is not defined in the OPT compiler variable.

See also the --with-pydebug option (debug build) which also enables assertions.

Added in version 3.6.

--with-valgrind

Enable Valgrind support (default is no).


--with-dtrace

Enable DTrace support (default is no).

See Instrumenting CPython with DTrace and SystemTap.

Added in version 3.6.

--with-address-sanitizer

Enable AddressSanitizer memory error detector, asan (default is no).

Added in version 3.6.

--with-memory-sanitizer

Enable MemorySanitizer allocation error detector, msan (default is no).

Added in version 3.6.

--with-undefined-behavior-sanitizer

Enable UndefinedBehaviorSanitizer undefined behaviour detector, ubsan (default is no).

Added in version 3.6.

3.3.7. Linker options
--enable-shared

Enable building a shared Python library: libpython (default is no).


--without-static-libpython
Do not build libpythonMAJOR.MINOR.a and do not install python.o (built and
enabled by default).

Added in version 3.10.

3.3.8. Libraries options
--with-libs='lib1 ...'

Link against additional libraries (default is no).


--with-system-expat

Build the pyexpat module using an installed expat library (default is no).
--with-system-libmpdec

Build the _decimal extension module using an installed mpdec library, see
the decimal module (default is no).

Added in version 3.3.

--with-readline=editline

Use editline library for backend of the readline module.

Define the WITH_EDITLINE macro.

Added in version 3.10.

--without-readline

Don’t build the readline module (built by default).

Don’t define the HAVE_LIBREADLINE macro.

Added in version 3.10.

--with-libm=STRING

Override libm math library to STRING (default is system-dependent).


--with-libc=STRING

Override libc C library to STRING (default is system-dependent).


--with-openssl=DIR

Root of the OpenSSL directory.

Added in version 3.7.


--with-openssl-rpath=[no|auto|DIR]

Set runtime library directory (rpath) for OpenSSL libraries:

 no (default): don’t set rpath;


 auto: auto-detect rpath from --with-openssl and pkg-config;
 DIR: set an explicit rpath.

Added in version 3.10.

3.3.9. Security Options
--with-hash-algorithm=[fnv|siphash13|siphash24]

Select hash algorithm for use in Python/pyhash.c:

 siphash13 (default);
 siphash24;
 fnv.

Added in version 3.4.

Added in version 3.11: siphash13 is added and it is the new default.

--with-builtin-hashlib-hashes=md5,sha1,sha256,sha512,sha3,blake2

Built-in hash modules:

 md5;
 sha1;
 sha256;
 sha512;
 sha3 (with shake);
 blake2.

Added in version 3.9.

--with-ssl-default-suites=[python|openssl|STRING]

Override the OpenSSL default cipher suites string:

 python (default): use Python’s preferred selection;


 openssl: leave OpenSSL’s defaults untouched;
 STRING: use a custom string

See the ssl module.


Added in version 3.7.

Changed in version 3.10: The settings python and STRING also set TLS 1.2 as
minimum protocol version.

3.3.10. macOS Options
See Mac/README.rst.

--enable-universalsdk

--enable-universalsdk=SDKDIR

Create a universal binary build. SDKDIR specifies which macOS SDK should be used to
perform the build (default is no).
--enable-framework

--enable-framework=INSTALLDIR

Create a Python.framework rather than a traditional Unix install.


Optional INSTALLDIR specifies the installation path (default is no).
--with-universal-archs=ARCH

Specify the kind of universal binary that should be created. This option is only valid
when --enable-universalsdk is set.

Options:

 universal2;
 32-bit;
 64-bit;
 3-way;
 intel;
 intel-32;
 intel-64;
 all.
--with-framework-name=FRAMEWORK

Specify the name for the python framework on macOS; only valid when --enable-
framework is set (default: Python).
3.3.11. Cross Compiling Options
Cross compiling, also known as cross building, can be used to build Python for another CPU
architecture or platform. Cross compiling requires a Python interpreter for the build platform;
the version of the build Python must match the version of the cross compiled host Python.
--build=BUILD

configure for building on BUILD, usually guessed by config.guess.


--host=HOST

cross-compile to build programs to run on HOST (target platform)


--with-build-python=path/to/python

path to build python binary for cross compiling

Added in version 3.11.

CONFIG_SITE

An environment variable that points to a file with configure overrides.

Example config.site file:

# config.site-aarch64
ac_cv_buggy_getaddrinfo=no
ac_cv_file__dev_ptmx=yes
ac_cv_file__dev_ptc=no

Cross compiling example:

CONFIG_SITE=config.site-aarch64 ../configure \
    --build=x86_64-pc-linux-gnu \
    --host=aarch64-unknown-linux-gnu \
    --with-build-python=../build/python

3.4. Python Build System
3.4.1. Main files of the build system
 configure: configure script based on configure.ac; it creates Makefile and pyconfig.h.
 Makefile.pre.in: template used to create Makefile.
 pyconfig.h.in: template used to create pyconfig.h.
 Modules/Setup: C extensions built by the Makefile using the Modules/makesetup shell script.
3.4.2. Main build steps
 C files (.c) are built as object files (.o).
 A static libpython library (.a) is created from the object files.
 python.o and the static libpython library are linked into the final python program.
 C extensions are built by the Makefile (see Modules/Setup).
3.4.3. Main Makefile targets
 make: build Python with the standard library.
 make platform: build the python program without the standard library extension modules.
 make profile-opt: build Python using Profile Guided Optimization (PGO) with
the make command's PROFILE_TASK.
 make buildbottest: build Python and run the test suite the same way the buildbots do
(test timeout default: 20 minutes, controlled by the TESTTIMEOUT variable).
 make install: build and install Python.
 make regen-all: regenerate (almost) all generated files.
 make clean: remove built files.
 make distclean: same as make clean, but also remove files created by the configure script.
3.4.4. C extensions
Some C extensions are built as built-in modules, like the sys module. They are built with
the Py_BUILD_CORE_BUILTIN macro defined. Built-in modules have no __file__ attribute:

>>> import sys
>>> sys
<module 'sys' (built-in)>
>>> sys.__file__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'sys' has no attribute '__file__'

Other C extensions are built as dynamic libraries, like the _asyncio module. They are built
with the Py_BUILD_CORE_MODULE macro defined. Example on Linux x86-64:

>>> import _asyncio
>>> _asyncio
<module '_asyncio' from '/usr/lib64/python3.9/lib-dynload/_asyncio.cpython-39-x86_64-linux-gnu.so'>
>>> _asyncio.__file__
'/usr/lib64/python3.9/lib-dynload/_asyncio.cpython-39-x86_64-linux-gnu.so'

Modules/Setup is used to generate Makefile targets to build C extensions. At the beginning of
the file, C extensions are built as built-in modules. Extensions defined after
the *shared* marker are built as dynamic libraries.

The PyAPI_FUNC(), PyAPI_DATA() and PyMODINIT_FUNC macros are defined differently
depending on whether the Py_BUILD_CORE_MODULE macro is defined:

 Use Py_EXPORTED_SYMBOL if the Py_BUILD_CORE_MODULE is defined
 Use Py_IMPORTED_SYMBOL otherwise.

If the Py_BUILD_CORE_BUILTIN macro is used by mistake on a C extension built as a shared
library, its PyInit_xxx() function is not exported, causing an ImportError on
import.
3.5. Compiler and linker flags
Options set by the ./configure script and environment variables, and used by Makefile.

3.5.1. Preprocessor flags
CONFIGURE_CPPFLAGS

Value of CPPFLAGS variable passed to the ./configure script.

Added in version 3.6.

CPPFLAGS

(Objective) C/C++ preprocessor flags, e.g. -Iinclude_dir if you have headers in a


nonstandard directory include_dir.

Both CPPFLAGS and LDFLAGS need to contain the shell’s value to be able to build
extension modules using the directories specified in the environment variables.
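
The flags that were baked into a given interpreter can be inspected at runtime with sysconfig; this small sketch only prints whatever that build recorded, so the output depends entirely on how Python was configured:

import sysconfig

# Print a few build-time variables recorded in the Makefile / pyconfig data.
for name in ("CC", "CPPFLAGS", "CFLAGS", "LDFLAGS"):
    print(name, "=", sysconfig.get_config_var(name))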
BASECPPFLAGS

Added in version 3.4.

PY_CPPFLAGS

Extra preprocessor flags added for building the interpreter object files.

Default: $(BASECPPFLAGS) -I. -I$(srcdir)/Include $(CONFIGURE_CPPFLAGS) $(CPPFLAGS).

Added in version 3.2.

3.5.2. Compiler flags
CC

C compiler command.

Example: gcc -pthread.


CXX

C++ compiler command.

Example: g++ -pthread.


CFLAGS
C compiler flags.
CFLAGS_NODIST

CFLAGS_NODIST is used for building the interpreter and stdlib C extensions. Use it
when a compiler flag should not be part of CFLAGS once Python is installed (gh-65320).

In particular, CFLAGS should not contain:

 the compiler flag -I (for setting the search path for include files). The -I flags
are processed from left to right, and any flags in CFLAGS would take precedence
over user- and package-supplied -I flags.
 hardening flags such as -Werror because distributions cannot control whether
packages installed by users conform to such heightened standards.

Added in version 3.5.

COMPILEALL_OPTS

Options passed to the compileall command line when building PYC files
in make install. Default: -j0.

Added in version 3.12.

EXTRA_CFLAGS

Extra C compiler flags.


CONFIGURE_CFLAGS

Value of CFLAGS variable passed to the ./configure script.

Added in version 3.2.

CONFIGURE_CFLAGS_NODIST

Value of CFLAGS_NODIST variable passed to the ./configure script.

Added in version 3.5.

BASECFLAGS

Base compiler flags.

OPT

Optimization flags.

CFLAGS_ALIASING

Strict or non-strict aliasing flags used to compile Python/dtoa.c.

Added in version 3.7.

CCSHARED

Compiler flags used to build a shared library.

For example, -fPIC is used on Linux and on BSD.

CFLAGSFORSHARED

Extra C flags added for building the interpreter object files.

Default: $(CCSHARED) when --enable-shared is used, or an empty string otherwise.

PY_CFLAGS

Default: $(BASECFLAGS) $(OPT) $(CONFIGURE_CFLAGS) $(CFLAGS) $(EXTRA_CFLAGS).

PY_CFLAGS_NODIST

Default: $(CONFIGURE_CFLAGS_NODIST) $(CFLAGS_NODIST) -I$(srcdir)/Include/internal.

Added in version 3.5.

PY_STDMODULE_CFLAGS

C flags used for building the interpreter object files.

Default: $(PY_CFLAGS) $(PY_CFLAGS_NODIST) $(PY_CPPFLAGS) $(CFLAGSFORSHARED).

Added in version 3.7.

PY_CORE_CFLAGS

Default: $(PY_STDMODULE_CFLAGS) -DPy_BUILD_CORE.

Added in version 3.2.

PY_BUILTIN_MODULE_CFLAGS

Compiler flags to build a standard library extension module as a built-in module, like
the posix module.

Default: $(PY_STDMODULE_CFLAGS) -DPy_BUILD_CORE_BUILTIN.

Added in version 3.8.

PURIFY

Purify command. Purify is a memory debugger program.

Default: empty string (not used).

3.5.3. Linker flags
LINKCC

Linker command used to build programs like python and _testembed.

Default: $(PURIFY) $(CC).

CONFIGURE_LDFLAGS

Value of LDFLAGS variable passed to the ./configure script.

Avoid assigning CFLAGS, LDFLAGS, etc. so users can use them on the command line to
append to these values without stomping the pre-set values.

Added in version 3.2.

LDFLAGS_NODIST

LDFLAGS_NODIST is used in the same manner as CFLAGS_NODIST. Use it when a
linker flag should not be part of LDFLAGS once Python is installed (gh-65320).

In particular, LDFLAGS should not contain:

 the compiler flag -L (for setting the search path for libraries). The -L flags are
processed from left to right, and any flags in LDFLAGS would take precedence
over user- and package-supplied -L flags.

CONFIGURE_LDFLAGS_NODIST

Value of LDFLAGS_NODIST variable passed to the ./configure script.

Added in version 3.8.

LDFLAGS

Linker flags, e.g. -Llib_dir if you have libraries in a nonstandard directory lib_dir.

Both CPPFLAGS and LDFLAGS need to contain the shell's value to be able to build
extension modules using the directories specified in the environment variables.

LIBS

Linker flags to pass libraries to the linker when linking the Python executable.

Example: -lrt.

LDSHARED

Command to build a shared library.

Default: @LDSHARED@ $(PY_LDFLAGS).

BLDSHARED

Command to build libpython shared library.

Default: @BLDSHARED@ $(PY_CORE_LDFLAGS).

PY_LDFLAGS

Default: $(CONFIGURE_LDFLAGS) $(LDFLAGS).

PY_LDFLAGS_NODIST

Default: $(CONFIGURE_LDFLAGS_NODIST) $(LDFLAGS_NODIST).

Added in version 3.8.

PY_CORE_LDFLAGS

Linker flags used for building the interpreter object files.

Added in version 3.8.

4. Using Python on Windows


This document aims to give an overview of Windows-specific behaviour you
should know about when using Python on Microsoft Windows.

Unlike most Unix systems and services, Windows does not include a system
supported installation of Python. To make Python available, the CPython
team has compiled Windows installers with every release for many years.
These installers are primarily intended to add a per-user installation of
Python, with the core interpreter and library being used by a single user. The
installer is also able to install for all users of a single machine, and a
separate ZIP file is available for application-local distributions.
As specified in PEP 11, a Python release only supports a Windows platform
while Microsoft considers the platform under extended support. This means
that Python 3.12 supports Windows 8.1 and newer. If you require Windows 7
support, please install Python 3.8.

There are a number of different installers available for Windows, each with
certain benefits and downsides.

The full installer contains all components and is the best option for
developers using Python for any kind of project.

The Microsoft Store package is a simple installation of Python that is suitable


for running scripts and packages, and using IDLE or other development
environments. It requires Windows 10 and above, but can be safely installed
without corrupting other programs. It also provides many convenient
commands for launching Python and its tools.

The nuget.org packages are lightweight installations intended for continuous


integration systems. It can be used to build Python packages or run scripts,
but is not updateable and has no user interface tools.

The embeddable package is a minimal package of Python suitable for


embedding into a larger application.

4.1. The full installer


4.1.1. Installation steps
Four Python 3.12 installers are available for download - two each for the 32-bit and 64-bit
versions of the interpreter. The web installer is a small initial download, and it will automatically
download the required components as necessary. The offline installer includes the components
necessary for a default installation and only requires an internet connection for optional features.
See Installing Without Downloading for other ways to avoid downloading during installation.

After starting the installer, one of two options may be selected:


If you select “Install Now”:

 You will not need to be an administrator (unless a system update for the C Runtime
Library is required or you install the Python Launcher for Windows for all users)
 Python will be installed into your user directory
 The Python Launcher for Windows will be installed according to the option at the bottom
of the first page
 The standard library, test suite, launcher and pip will be installed
 If selected, the install directory will be added to your PATH
 Shortcuts will only be visible for the current user

Selecting “Customize installation” will allow you to select the features to install, the installation
location and other options or post-install actions. To install debugging symbols or binaries, you
will need to use this option.

To perform an all-users installation, you should select “Customize installation”. In this case:

 You may be required to provide administrative credentials or approval


 Python will be installed into the Program Files directory
 The Python Launcher for Windows will be installed into the Windows directory
 Optional features may be selected during installation
 The standard library can be pre-compiled to bytecode
 If selected, the install directory will be added to the system PATH
 Shortcuts are available for all users
4.1.2. Removing the MAX_PATH Limitation
Windows historically has limited path lengths to 260 characters. This meant that paths longer
than this would not resolve and errors would result.

In the latest versions of Windows, this limitation can be expanded to approximately 32,000
characters. Your administrator will need to activate the “Enable Win32 long paths” group policy,
or set LongPathsEnabled to 1 in the registry key HKEY_LOCAL_MACHINE\SYSTEM\
CurrentControlSet\Control\FileSystem.

This allows the open() function, the os module and most other path functionality to accept and
return paths longer than 260 characters.

After changing the above option, no further configuration is required.

Changed in version 3.6: Support for long paths was enabled in Python.

4.1.3. Installing Without UI


All of the options available in the installer UI can also be specified from the command line,
allowing scripted installers to replicate an installation on many machines without user
interaction. These options may also be set without suppressing the UI in order to change some of
the defaults.

The following options (found by executing the installer with /?) can be passed into the installer:

Name Description
/passive to display progress without requiring user interaction
/quiet to install/uninstall without displaying any UI
/simple to prevent user customization
/uninstall to remove Python (without confirmation)
/layout
to pre-download all components
[directory]
/log [filename] to specify log files location

All other options are passed as name=value, where the value is usually 0 to disable a
feature, 1 to enable a feature, or a path. The full list of available options is shown below.
Name: InstallAllUsers
Description: Perform a system-wide installation.
Default: 0

Name: TargetDir
Description: The installation directory
Default: Selected based on InstallAllUsers

Name: DefaultAllUsersTargetDir
Description: The default installation directory for all-user installs
Default: %ProgramFiles%\Python X.Y or %ProgramFiles(x86)%\Python X.Y

Name: DefaultJustForMeTargetDir
Description: The default install directory for just-for-me installs
Default: %LocalAppData%\Programs\Python\PythonXY or %LocalAppData%\Programs\Python\PythonXY-32 or %LocalAppData%\Programs\Python\PythonXY-64

Name: DefaultCustomTargetDir
Description: The default custom install directory displayed in the UI
Default: (empty)

Name: AssociateFiles
Description: Create file associations if the launcher is also installed.
Default: 1

Name: CompileAll
Description: Compile all .py files to .pyc.
Default: 0

Name: PrependPath
Description: Prepend install and Scripts directories to PATH and add .PY to PATHEXT
Default: 0

Name: AppendPath
Description: Append install and Scripts directories to PATH and add .PY to PATHEXT
Default: 0

Name: Shortcuts
Description: Create shortcuts for the interpreter, documentation and IDLE if installed.
Default: 1

Name: Include_doc
Description: Install Python manual
Default: 1

Name: Include_debug
Description: Install debug binaries
Default: 0

Name: Include_dev
Description: Install developer headers and libraries. Omitting this may lead to an unusable installation.
Default: 1

Name: Include_exe
Description: Install python.exe and related files. Omitting this may lead to an unusable installation.
Default: 1

Name: Include_launcher
Description: Install Python Launcher for Windows.
Default: 1

Name: InstallLauncherAllUsers
Description: Installs the launcher for all users. Also requires Include_launcher to be set to 1
Default: 1

Name: Include_lib
Description: Install standard library and extension modules. Omitting this may lead to an unusable installation.
Default: 1

Name: Include_pip
Description: Install bundled pip and setuptools
Default: 1

Name: Include_symbols
Description: Install debugging symbols (*.pdb)
Default: 0

Name: Include_tcltk
Description: Install Tcl/Tk support and IDLE
Default: 1

Name: Include_test
Description: Install standard library test suite
Default: 1

Name: Include_tools
Description: Install utility scripts
Default: 1

Name: LauncherOnly
Description: Only installs the launcher. This will override most other options.
Default: 0

Name: SimpleInstall
Description: Disable most install UI
Default: 0

Name: SimpleInstallDescription
Description: A custom message to display when the simplified install UI is used.
Default: (empty)

For example, to silently install a default, system-wide Python installation, you could use the
following command (from an elevated command prompt):

python-3.9.0.exe /quiet InstallAllUsers=1 PrependPath=1 Include_test=0

To allow users to easily install a personal copy of Python without the test suite, you could
provide a shortcut with the following command. This will display a simplified initial page and
disallow customization:

python-3.9.0.exe InstallAllUsers=0 Include_launcher=0 Include_test=0
    SimpleInstall=1 SimpleInstallDescription="Just for me, no test suite."
(Note that omitting the launcher also omits file associations, and is only recommended for per-
user installs when there is also a system-wide installation that included the launcher.)

The options listed above can also be provided in a file named unattend.xml alongside the
executable. This file specifies a list of options and values. When a value is provided as an
attribute, it will be converted to a number if possible. Values provided as element text are always
left as strings. This example file sets the same options as the previous example:

<Options>
<Option Name="InstallAllUsers" Value="no" />
<Option Name="Include_launcher" Value="0" />
<Option Name="Include_test" Value="no" />
<Option Name="SimpleInstall" Value="yes" />
<Option Name="SimpleInstallDescription">Just for me, no test
suite</Option>
</Options>

4.1.4. Installing Without Downloading


As some features of Python are not included in the initial installer download, selecting those
features may require an internet connection. To avoid this need, all possible components may be
downloaded on-demand to create a complete layout that will no longer require an internet
connection regardless of the selected features. Note that this download may be bigger than
required, but where a large number of installations are going to be performed it is very useful to
have a locally cached copy.

Execute the following command from Command Prompt to download all possible required files.
Remember to substitute python-3.9.0.exe for the actual name of your installer, and to
create layouts in their own directories to avoid collisions between files with the same name.

python-3.9.0.exe /layout [optional target directory]

You may also specify the /quiet option to hide the progress display.

4.1.5. Modifying an install


Once Python has been installed, you can add or remove features through the Programs and
Features tool that is part of Windows. Select the Python entry and choose “Uninstall/Change” to
open the installer in maintenance mode.

“Modify” allows you to add or remove features by modifying the checkboxes - unchanged
checkboxes will not install or remove anything. Some options cannot be changed in this mode,
such as the install directory; to modify these, you will need to remove and then reinstall Python
completely.
“Repair” will verify all the files that should be installed using the current settings and replace any
that have been removed or modified.

“Uninstall” will remove Python entirely, with the exception of the Python Launcher for
Windows, which has its own entry in Programs and Features.

4.2. The Microsoft Store package


Added in version 3.7.2.

The Microsoft Store package is an easily installable Python interpreter that is intended mainly for
interactive use, for example, by students.

To install the package, ensure you have the latest Windows 10 updates and search the Microsoft
Store app for “Python 3.12”. Ensure that the app you select is published by the Python Software
Foundation, and install it.

Warning

Python will always be available for free on the Microsoft Store. If you are asked to pay for it,
you have not selected the correct package.

After installation, Python may be launched by finding it in Start. Alternatively, it will be


available from any Command Prompt or PowerShell session by typing python. Further, pip and
IDLE may be used by typing pip or idle. IDLE can also be found in Start.

All three commands are also available with version number suffixes, for example,
as python3.exe and python3.x.exe as well as python.exe (where 3.x is the specific
version you want to launch, such as 3.12). Open “Manage App Execution Aliases” through Start
to select which version of Python is associated with each command. It is recommended to make
sure that pip and idle are consistent with whichever version of python is selected.

Virtual environments can be created with python -m venv and activated and used as normal.

If you have installed another version of Python and added it to your PATH variable, it will be
available as python.exe rather than the one from the Microsoft Store. To access the new
installation, use python3.exe or python3.x.exe.

The py.exe launcher will detect this Python installation, but will prefer installations from the
traditional installer.

To remove Python, open Settings and use Apps and Features, or else find Python in Start and
right-click to select Uninstall. Uninstalling will remove all packages you installed directly into
this Python installation, but will not remove any virtual environments.
4.2.1. Known issues
4.2.1.1. Redirection of local data, registry, and temporary paths
Because of restrictions on Microsoft Store apps, Python scripts may not have full write access to
shared locations such as TEMP and the registry. Instead, it will write to a private copy. If your
scripts must modify the shared locations, you will need to install the full installer.

At runtime, Python will use a private copy of well-known Windows folders and the registry. For
example, if the environment variable %APPDATA% is c:\Users\<user>\AppData\, then
when writing to C:\Users\<user>\AppData\Local will write to C:\Users\<user>\
AppData\Local\Packages\
PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\Local\.

When reading files, Windows will return the file from the private folder, or if that does not exist,
the real Windows directory. For example reading C:\Windows\System32 returns the contents
of C:\Windows\System32 plus the contents of C:\Program Files\WindowsApps\
package_name\VFS\SystemX86.

You can find the real path of any existing file using os.path.realpath():

>>> import os
>>> test_file = 'C:\\Users\\example\\AppData\\Local\\test.txt'
>>> os.path.realpath(test_file)
'C:\\Users\\example\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\Local\\test.txt'

When writing to the Windows Registry, the following behaviors exist:

 Reading from HKLM\\Software is allowed and results are merged with


the registry.dat file in the package.
 Writing to HKLM\\Software is not allowed if the corresponding key/value exists, i.e.
modifying existing keys.
 Writing to HKLM\\Software is allowed as long as a corresponding key/value does not
exist in the package and the user has the correct access permissions.

For more detail on the technical basis for these limitations, please consult Microsoft’s
documentation on packaged full-trust apps, currently available
at docs.microsoft.com/en-us/windows/msix/desktop/desktop-to-uwp-behind-the-scenes
4.3. The nuget.org packages
Added in version 3.5.2.

The nuget.org package is a reduced size Python environment intended for use on continuous
integration and build systems that do not have a system-wide install of Python. While nuget is
“the package manager for .NET”, it also works perfectly fine for packages containing build-time
tools.

Visit nuget.org for the most up-to-date information on using nuget. What follows is a summary
that is sufficient for Python developers.

The nuget.exe command line tool may be downloaded directly


from https://aka.ms/nugetclidl, for example, using curl or PowerShell. With the tool,
the latest version of Python for 64-bit or 32-bit machines is installed using:

nuget.exe install python -ExcludeVersion -OutputDirectory .


nuget.exe install pythonx86 -ExcludeVersion -OutputDirectory .

To select a particular version, add a -Version 3.x.y. The output directory may be changed
from ., and the package will be installed into a subdirectory. By default, the subdirectory is
named the same as the package, and without the -ExcludeVersion option this name will
include the specific version installed. Inside the subdirectory is a tools directory that contains
the Python installation:

# Without -ExcludeVersion
> .\python.3.5.2\tools\python.exe -V
Python 3.5.2

# With -ExcludeVersion
> .\python\tools\python.exe -V
Python 3.5.2

In general, nuget packages are not upgradeable, and newer versions should be installed side-by-
side and referenced using the full path. Alternatively, delete the package directory manually and
install it again. Many CI systems will do this automatically if they do not preserve files between
builds.

Alongside the tools directory is a build\native directory. This contains a MSBuild


properties file python.props that can be used in a C++ project to reference the Python install.
Including the settings will automatically use the headers and import libraries in your build.

The package information pages on nuget.org are www.nuget.org/packages/python for the 64-bit
version and www.nuget.org/packages/pythonx86 for the 32-bit version.
4.4. The embeddable package
Added in version 3.5.

The embedded distribution is a ZIP file containing a minimal Python environment. It is intended
for acting as part of another application, rather than being directly accessed by end-users.

When extracted, the embedded distribution is (almost) fully isolated from the user’s system,
including environment variables, system registry settings, and installed packages. The standard
library is included as pre-compiled and optimized .pyc files in a ZIP,
and python3.dll, python37.dll, python.exe and pythonw.exe are all provided.
Tcl/tk (including all dependents, such as Idle), pip and the Python documentation are not
included.

Note

The embedded distribution does not include the Microsoft C Runtime and it is the responsibility
of the application installer to provide this. The runtime may have already been installed on a
user’s system previously or automatically via Windows Update, and can be detected by
finding ucrtbase.dll in the system directory.

Third-party packages should be installed by the application installer alongside the embedded
distribution. Using pip to manage dependencies as for a regular Python installation is not
supported with this distribution, though with some care it may be possible to include and use pip
for automatic updates. In general, third-party packages should be treated as part of the
application (“vendoring”) so that the developer can ensure compatibility with newer versions
before providing updates to users.

The two recommended use cases for this distribution are described below.

4.4.1. Python Application


An application written in Python does not necessarily require users to be aware of that fact. The
embedded distribution may be used in this case to include a private version of Python in an
install package. Depending on how transparent it should be (or conversely, how professional it
should appear), there are two options.

Using a specialized executable as a launcher requires some coding, but provides the most
transparent experience for users. With a customized launcher, there are no obvious indications
that the program is running on Python: icons can be customized, company and version
information can be specified, and file associations behave properly. In most cases, a custom
launcher should simply be able to call Py_Main with a hard-coded command line.
The simpler approach is to provide a batch file or generated shortcut that directly calls
the python.exe or pythonw.exe with the required command-line arguments. In this case,
the application will appear to be Python and not its actual name, and users may have trouble
distinguishing it from other running Python processes or file associations.

With the latter approach, packages should be installed as directories alongside the Python
executable to ensure they are available on the path. With the specialized launcher, packages can
be located in other locations as there is an opportunity to specify the search path before
launching the application.

4.4.2. Embedding Python


Applications written in native code often require some form of scripting language, and the
embedded Python distribution can be used for this purpose. In general, the majority of the
application is in native code, and some part will either invoke python.exe or directly
use python3.dll. For either case, extracting the embedded distribution to a subdirectory of
the application installation is sufficient to provide a loadable Python interpreter.

As with the application use, packages can be installed to any location as there is an opportunity
to specify search paths before initializing the interpreter. Otherwise, there is no fundamental
differences between using the embedded distribution and a regular installation.

4.5. Alternative bundles


Besides the standard CPython distribution, there are modified packages including additional
functionality. The following is a list of popular versions and their key features:

ActivePython

Installer with multi-platform compatibility, documentation, PyWin32

Anaconda

Popular scientific modules (such as numpy, scipy and pandas) and the conda package
manager.

Enthought Deployment Manager

“The Next Generation Python Environment and Package Manager”.

Previously Enthought provided Canopy, but it reached end of life in 2016.

WinPython
Windows-specific distribution with prebuilt scientific packages and tools for building
packages.

Note that these packages may not include the latest versions of Python or
other libraries, and are not maintained or supported by the core Python team.

4.6. Configuring Python


To run Python conveniently from a command prompt, you might consider
changing some default environment variables in Windows. While the installer
provides an option to configure the PATH and PATHEXT variables for you,
this is only reliable for a single, system-wide installation. If you regularly use
multiple versions of Python, consider using the Python Launcher for
Windows.

4.6.1. Excursus: Setting environment


variables
Windows allows environment variables to be configured permanently at both
the User level and the System level, or temporarily in a command prompt.

To temporarily set environment variables, open Command Prompt and use


the set command:

C:\>set PATH=C:\Program Files\Python 3.9;%PATH%


C:\>set PYTHONPATH=%PYTHONPATH%;C:\My_python_lib
C:\>python

These changes will apply to any further commands executed in that console,
and will be inherited by any applications started from the console.

Including the variable name within percent signs will expand to the existing
value, allowing you to add your new value at either the start or the end.
Modifying PATH by adding the directory containing python.exe to the start is
a common way to ensure the correct version of Python is launched.

To permanently modify the default environment variables, click Start and


search for ‘edit environment variables’, or open System properties, Advanced
system settings and click the Environment Variables button. In this dialog,
you can add or modify User and System variables. To change System
variables, you need non-restricted access to your machine (i.e. Administrator
rights).
Note

Windows will concatenate User variables after System variables, which may
cause unexpected results when modifying PATH.

The PYTHONPATH variable is used by all versions of Python, so you should


not permanently configure it unless the listed paths only include code that is
compatible with all of your installed Python versions.
See also
https://docs.microsoft.com/en-us/windows/win32/procthread/environment-
variables

Overview of environment variables on Windows


https://docs.microsoft.com/en-us/windows-server/administration/windows-
commands/set_1

The set command, for temporarily modifying environment variables


https://docs.microsoft.com/en-us/windows-server/administration/
windows-commands/setx

The setx command, for permanently modifying environment variables


4.6.2. Finding the Python
executable
Changed in version 3.5.

Besides using the automatically created start menu entry for the
Python interpreter, you might want to start Python in the
command prompt. The installer has an option to set that up for
you.

On the first page of the installer, an option labelled “Add


Python to PATH” may be selected to have the installer add the
install location into the PATH. The location of
the Scripts\ folder is also added. This allows you to
type python to run the interpreter, and pip for the package
installer. Thus, you can also execute your scripts with command
line options, see Command line documentation.

If you don’t enable this option at install time, you can always re-
run the installer, select Modify, and enable it. Alternatively, you
can manually modify the PATH using the directions in Excursus:
Setting environment variables. You need to set
your PATH environment variable to include the directory of
your Python installation, delimited by a semicolon from other
entries. An example variable could look like this (assuming the
first two entries already existed):

C:\WINDOWS\system32;C:\WINDOWS;C:\Program Files\Python 3.9

4.7. UTF-8 mode


Added in version 3.7.

Windows still uses legacy encodings for the system encoding


(the ANSI Code Page). Python uses it for the default encoding
of text files (e.g. locale.getencoding()).

This may cause issues because UTF-8 is widely used on the


internet and most Unix systems, including WSL (Windows
Subsystem for Linux).

You can use the Python UTF-8 Mode to change the default text
encoding to UTF-8. You can enable the Python UTF-8
Mode via the -X utf8 command line option, or
the PYTHONUTF8=1 environment variable.
See PYTHONUTF8 for enabling UTF-8 mode, and Excursus:
Setting environment variables for how to modify environment
variables.

When the Python UTF-8 Mode is enabled, you can still use the
system encoding (the ANSI Code Page) via the “mbcs” codec.
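
From within Python you can check whether UTF-8 mode is active and what the current locale encoding is; this small sketch uses only standard-library calls (locale.getencoding() requires Python 3.11 or newer):

import sys
import locale

print("UTF-8 mode enabled:", bool(sys.flags.utf8_mode))
print("Locale encoding:", locale.getencoding())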

Note that adding PYTHONUTF8=1 to the default environment


variables will affect all Python 3.7+ applications on your
system. If you have any Python 3.7+ applications which rely on
the legacy system encoding, it is recommended to set the
environment variable temporarily or use the -
X utf8 command line option.

Note

Even when UTF-8 mode is disabled, Python uses UTF-8 by


default on Windows for:
 Console I/O including standard I/O (see PEP 528 for
details).
 The filesystem encoding (see PEP 529 for details).
4.8. Python Launcher for
Windows
Added in version 3.3.

The Python launcher for Windows is a utility which aids in locating and executing
different Python versions. It allows scripts (or the command-line) to indicate a
preference for a specific Python version, and will locate and execute that version.

Unlike the PATH variable, the launcher will correctly select the
most appropriate version of Python. It will prefer per-user
installations over system-wide ones, and orders by language
version rather than using the most recently installed version.

The launcher was originally specified in PEP 397.

4.8.1. Getting started


4.8.1.1. From the command-line
Changed in version 3.6.

System-wide installations of Python 3.3 and later will put the launcher on your PATH.
The launcher is compatible with all available versions of Python, so it does not matter
which version is installed. To check that the launcher is available, execute the
following command in Command Prompt:

py

You should find that the latest version of Python you have
installed is started - it can be exited as normal, and any
additional command-line arguments specified will be sent
directly to Python.

If you have multiple versions of Python installed (e.g., 3.7 and 3.12) you will have
noticed that Python 3.12 was started - to launch Python 3.7, try the command:

py -3.7
If you want the latest version of Python 2 you have installed, try
the command:

py -2

If you see the following error, you do not have the launcher installed:

'py' is not recognized as an internal or external command,
operable program or batch file.

The command:

py --list

displays the currently installed version(s) of Python.

The -x.y argument is the short form of the -V:Company/Tag argument, which allows
selecting a specific Python runtime, including those that may have come from somewhere
other than python.org. Any runtime registered by following PEP 514 will be discoverable.
The --list command lists all available runtimes using the -V: format.

When using the -V: argument, specifying the Company will limit selection to runtimes
from that provider, while specifying only the Tag will select from all providers. Note
that omitting the slash implies a tag:

# Select any '3.*' tagged runtime
py -V:3

# Select any 'PythonCore' released runtime
py -V:PythonCore/

# Select PythonCore's latest Python 3 runtime
py -V:PythonCore/3
The short form of the argument (-3) only ever selects from core
Python releases, and not other distributions. However, the
longer form (-V:3) will select from any.

The Company is matched on the full string, case-insensitive. The Tag is matched on
either the full string, or a prefix, provided the next character is a dot or a hyphen.
This allows -V:3.1 to match 3.1-32, but not 3.10. Tags are sorted using numerical
ordering (3.10 is newer than 3.1), but are compared using text (-V:3.01 does not
match 3.1).

4.8.1.2. Virtual environments


Added in version 3.5.

If the launcher is run with no explicit Python version specification, and a virtual
environment (created with the standard library venv module or the external virtualenv
tool) is active, the launcher will run the virtual environment's interpreter rather than
the global one. To run the global interpreter, either deactivate the virtual environment,
or explicitly specify the global Python version.
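
For illustration, a sketch of a Command Prompt session; the directory name .venv and the
version 3.12 are placeholders for whatever you actually use:

rem Create and activate a virtual environment
py -m venv .venv
.venv\Scripts\activate

rem While the environment is active, a bare "py" runs its interpreter
py

rem Deactivate it, or name a version explicitly, to get a global interpreter again
deactivate
py -3.12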

4.8.1.3. From a script


Let's create a test Python script - create a file called hello.py with the following
contents:

#! python
import sys
sys.stdout.write("hello from Python %s\n" % (sys.version,))

From the directory in which hello.py lives, execute the command:

py hello.py

You should notice the version number of your latest Python 2.x
installation is printed. Now try changing the first line to be:

#! python3
Re-executing the command should now print the latest Python
3.x information. As with the above command-line examples,
you can specify a more explicit version qualifier. Assuming you
have Python 3.7 installed, try changing the first line
to #! python3.7 and you should find the 3.7 version
information printed.

Note that unlike interactive use, a bare “python” will use the
latest version of Python 2.x that you have installed. This is for
backward compatibility and for compatibility with Unix, where
the command python typically refers to Python 2.

4.8.1.4. From file associations


The launcher should have been associated with Python files
(i.e. .py, .pyw, .pyc files) when it was installed. This means
that when you double-click on one of these files from Windows
explorer the launcher will be used, and therefore you can use the
same facilities described above to have the script specify the
version which should be used.

The key benefit of this is that a single launcher can support multiple Python versions
at the same time depending on the contents of the first line.

4.8.2. Shebang Lines


If the first line of a script file starts with #!, it is known as a
“shebang” line. Linux and other Unix-like operating systems
have native support for such lines and they are commonly used
on such systems to indicate how a script should be executed.
This launcher allows the same facilities to be used with Python
scripts on Windows and the examples above demonstrate their
use.

To allow shebang lines in Python scripts to be portable between Unix and Windows, this
launcher supports a number of 'virtual' commands to specify which interpreter to use.
The supported virtual commands are:

 /usr/bin/env
 /usr/bin/python
 /usr/local/bin/python
 python

For example, if the first line of your script starts with

#! /usr/bin/python

The default Python will be located and used. As many Python scripts written to work on
Unix will already have this line, you should find these scripts can be used by the
launcher without modification. If you are writing a new script on Windows which you hope
will be useful on Unix, you should use one of the shebang lines starting with /usr.

Any of the above virtual commands can be suffixed with an explicit version (either just
the major version, or the major and minor version). Furthermore the 32-bit version can be
requested by adding “-32” after the minor version. I.e. /usr/bin/python3.7-32 will
request usage of the 32-bit python 3.7.

Added in version 3.7: Beginning with python launcher 3.7 it is possible to request the
64-bit version by the “-64” suffix. Furthermore it is possible to specify a major
version and architecture without minor (i.e. /usr/bin/python3-64).

Changed in version 3.11: The “-64” suffix is deprecated, and now implies “any
architecture that is not provably i386/32-bit”. To request a specific environment, use
the new -V:TAG argument with the complete tag.

The /usr/bin/env form of shebang line has one further
special property. Before looking for installed Python
interpreters, this form will search the executable PATH for a
Python executable matching the name provided as the first
argument. This corresponds to the behaviour of the
Unix env program, which performs a PATH search. If an
executable matching the first argument after the env command
cannot be found, but the argument starts with python, it will
be handled as described for the other virtual commands. The
environment variable PYLAUNCHER_NO_SEARCH_PATH may
be set (to any value) to skip this search of PATH.

Shebang lines that do not match any of these patterns are looked
up in the [commands] section of the launcher’s .INI file. This
may be used to handle certain commands in a way that makes
sense for your system. The name of the command must be a
single argument (no spaces in the shebang executable), and the
value substituted is the full path to the executable (additional
arguments specified in the .INI will be quoted as part of the
filename).
[commands]
/bin/xpython=C:\Program Files\XPython\python.exe

Any commands not found in the .INI file are treated
as Windows executable paths that are absolute or relative to the
directory containing the script file. This is a convenience for
Windows-only scripts, such as those generated by an installer,
since the behavior is not compatible with Unix-style shells.
These paths may be quoted, and may include multiple
arguments, after which the path to the script and any additional
arguments will be appended.

4.8.3. Arguments in shebang lines


The shebang lines can also specify additional options to be
passed to the Python interpreter. For example, if you have a
shebang line:

#! /usr/bin/python -v

Then Python will be started with the -v option.
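
For example, a small script that relies on the -v option from its shebang line might look
like this (the file name is illustrative; -v simply makes the interpreter trace imports):

#! /usr/bin/python3 -v
import sys
# Started via the launcher, the interpreter receives the -v option from the
# shebang line, so import tracing messages appear on stderr before this output.
print(sys.version)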

4.8.4. Customization
4.8.4.1. Customization via INI files
Two .ini files will be searched by the launcher - py.ini in the
current user's application data directory (%LOCALAPPDATA% or $env:LocalAppData)
and py.ini in the same
the ‘console’ version of the launcher (i.e. py.exe) and for the
‘windows’ version (i.e. pyw.exe).

Customization specified in the “application directory” will have precedence over the one
next to the executable, so a user, who may not have write access to the .ini file next to
the launcher, can override commands in that global .ini file.

4.8.4.2. Customizing default Python versions
In some cases, a version qualifier can be included in a command
to dictate which version of Python will be used by the
command. A version qualifier starts with a major version
number and can optionally be followed by a period (‘.’) and a
minor version specifier. Furthermore it is possible to specify if a
32 or 64 bit implementation shall be requested by adding “-32”
or “-64”.

For example, a shebang line of #!python has no version qualifier, while #!python3 has a
version qualifier which specifies only a major version.

If no version qualifiers are found in a command, the
environment variable PY_PYTHON can be set to specify the
default version qualifier. If it is not set, the default is “3”. The
variable can specify any value that may be passed on the
command line, such as “3”, “3.7”, “3.7-32” or “3.7-64”. (Note
that the “-64” option is only available with the launcher
included with Python 3.7 or newer.)

If no minor version qualifiers are found, the environment
variable PY_PYTHON{major} (where {major} is the current
major version qualifier as determined above) can be set to
specify the full version. If no such option is found, the launcher
will enumerate the installed Python versions and use the latest
minor release found for the major version, which is likely,
although not guaranteed, to be the most recently installed
version in that family.

On 64-bit Windows with both 32-bit and 64-bit
implementations of the same (major.minor) Python version
installed, the 64-bit version will always be preferred. This will
be true for both 32-bit and 64-bit implementations of the
launcher - a 32-bit launcher will prefer to execute a 64-bit
Python installation of the specified version if available. This is
so the behavior of the launcher can be predicted knowing only
what versions are installed on the PC and without regard to the
order in which they were installed (i.e., without knowing
whether a 32 or 64-bit version of Python and corresponding
launcher was installed last). As noted above, an optional “-32”
or “-64” suffix can be used on a version specifier to change this
behaviour.

Examples:

 If no relevant options are set, the commands python and python2 will use the latest Python 2.x version installed and the command python3 will use the latest Python 3.x installed.
 The command python3.7 will not consult any options at all as the versions are fully specified.
 If PY_PYTHON=3, the commands python and python3 will both use the latest installed Python 3 version.
 If PY_PYTHON=3.7-32, the command python will use the 32-bit implementation of 3.7 whereas the command python3 will use the latest installed Python (PY_PYTHON was not considered at all as a major version was specified.)
 If PY_PYTHON=3 and PY_PYTHON3=3.7, the commands python and python3 will both use specifically 3.7.

In addition to environment variables, the same settings can be
configured in the .INI file used by the launcher. The section in
the INI file is called [defaults] and the key name will be the
same as the environment variables without the
leading PY_ prefix (and note that the key names in the INI file
are case insensitive.) The contents of an environment variable
will override things specified in the INI file.

For example:

 Setting PY_PYTHON=3.7 is equivalent to the INI file containing:

[defaults]
python=3.7

 Setting PY_PYTHON=3 and PY_PYTHON3=3.7 is equivalent to the INI file containing:

[defaults]
python=3
python3=3.7

4.8.5. Diagnostics
If an environment variable PYLAUNCHER_DEBUG is set (to any
value), the launcher will print diagnostic information to stderr
(i.e. to the console). While this information manages to be
simultaneously verbose and terse, it should allow you to see
what versions of Python were located, why a particular version
was chosen and the exact command-line used to execute the
target Python. It is primarily intended for testing and debugging.
4.8.6. Dry Run
If an environment variable PYLAUNCHER_DRYRUN is set (to
any value), the launcher will output the command it would have
run, but will not actually launch Python. This may be useful for
tools that want to use the launcher to detect and then launch
Python directly. Note that the command written to standard
output is always encoded using UTF-8, and may not render
correctly in the console.
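
For example, a sketch of a Command Prompt session using the dry-run variable; the script
name is a placeholder and the exact command printed depends on your installation:

set PYLAUNCHER_DRYRUN=1
py -3 hello.py
set PYLAUNCHER_DRYRUN=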

4.8.7. Install on demand


If an environment variable PYLAUNCHER_ALLOW_INSTALL is
set (to any value), and the requested Python version is not
installed but is available on the Microsoft Store, the launcher
will attempt to install it. This may require user interaction to
complete, and you may need to run the command again.

An additional PYLAUNCHER_ALWAYS_INSTALL variable
causes the launcher to always try to install Python, even if it is
detected. This is mainly intended for testing (and should be used
with PYLAUNCHER_DRYRUN).

4.8.8. Return codes


The following exit codes may be returned by the Python
launcher. Unfortunately, there is no way to distinguish these
from the exit code of Python itself.

The names of codes are as used in the sources, and are only for
reference. There is no way to access or resolve them apart from
reading this page. Entries are listed in alphabetical order of
names.

Name                Value   Description

RC_BAD_VENV_CFG     107     A pyvenv.cfg was found but is corrupt.
RC_CREATE_PROCESS   101     Failed to launch Python.
RC_INSTALLING       111     An install was started, but the command will
                            need to be re-run after it completes.
RC_INTERNAL_ERROR   109     Unexpected error. Please report a bug.
RC_NO_COMMANDLINE   108     Unable to obtain command line from the
                            operating system.
RC_NO_PYTHON        103     Unable to locate the requested version.
RC_NO_VENV_CFG      106     A pyvenv.cfg was required but not found.
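
Because these values overlap with ordinary process exit codes, a caller can only treat
them as a hint. A hedged sketch of checking for RC_NO_PYTHON from Python itself; the
requested version 3.4 is just an example of something unlikely to be installed:

import subprocess

# Ask the launcher for a version that is probably not installed.
result = subprocess.run(["py", "-V:3.4", "-c", "pass"])
if result.returncode == 103:    # RC_NO_PYTHON, but could also come from Python itself
    print("the requested Python version could not be located")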

4.9. Finding modules


These notes supplement the description at The initialization of
the sys.path module search path with detailed Windows notes.

When no ._pth file is found, this is how sys.path is populated on Windows:

 An empty entry is added at the start, which corresponds to the current directory.
 If the environment variable PYTHONPATH exists, as
described in Environment variables, its entries are added
next. Note that on Windows, paths in this variable must
be separated by semicolons, to distinguish them from the
colon used in drive identifiers (C:\ etc.).
 Additional “application paths” can be added in the registry as subkeys of \SOFTWARE\Python\PythonCore{version}\PythonPath under both the HKEY_CURRENT_USER and HKEY_LOCAL_MACHINE hives. Subkeys which have semicolon-delimited path strings as their default value will cause each path to be added to sys.path. (Note that all known installers only use HKLM, so HKCU is typically empty.)
 If the environment variable PYTHONHOME is set, it is
assumed as “Python Home”. Otherwise, the path of the
main Python executable is used to locate a “landmark
file” (either Lib\os.py or pythonXY.zip) to deduce
the “Python Home”. If a Python home is found, the
relevant sub-directories added
to sys.path (Lib, plat-win, etc) are based on that
folder. Otherwise, the core Python path is constructed
from the PythonPath stored in the registry.
 If the Python Home cannot be located,
no PYTHONPATH is specified in the environment, and no
registry entries can be found, a default path with relative
entries is used (e.g. .\Lib;.\plat-win, etc).
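
To see what these rules produced on a particular machine, it is enough to print the
result; this snippet only inspects sys.path and makes no assumptions beyond the rules
above:

import sys

for entry in sys.path:
    # The empty first entry corresponds to the current directory
    print(entry or "<current directory>")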

If a pyvenv.cfg file is found alongside the main executable or in the directory one level
above the executable, the following variations apply:

 If home is an absolute path and PYTHONHOME is not set, this path is used instead of the path to the main executable when deducing the home location.

The end result of all this is:

 When running python.exe, or any other .exe in the main Python directory (either an installed version, or directly from the PCbuild directory), the core path is deduced, and the core paths in the registry are ignored. Other “application paths” in the registry are always read.
 When Python is hosted in another .exe (different directory, embedded via COM, etc), the “Python Home” will not be deduced, so the core path from the registry is used. Other “application paths” in the registry are always read.
 If Python can't find its home and there are no registry values (frozen .exe, some very strange installation setup) you get a path with some default, but relative, paths.

For those who want to bundle Python into their application or distribution, the following
advice will prevent conflicts with other installations:

 Include a ._pth file alongside your executable containing the directories to include (a sample file is sketched after this list). This will ignore paths listed in the registry and environment variables, and also ignore site unless import site is listed.
 If you are loading python3.dll or python37.dll in your own executable, explicitly call Py_SetPath() or (at least) Py_SetProgramName() before Py_Initialize().
 Clear and/or overwrite PYTHONPATH and
set PYTHONHOME before launching python.exe from
your application.
 If you cannot use the previous suggestions (for example,
you are a distribution that allows people to
run python.exe directly), ensure that the landmark
file (Lib\os.py) exists in your install directory. (Note
that it will not be detected inside a ZIP file, but a
correctly named ZIP file will be detected instead.)
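
As an illustration of the first suggestion, a sketch of a python._pth file placed next to
your executable; the ZIP name and directory layout are assumptions about a particular
bundle, not requirements:

python39.zip
.
Lib
import site

Each listed path (relative to the file) is added to sys.path, and the import site line
opts back in to site initialisation, which is otherwise skipped when a ._pth file is
present.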

These will ensure that the files in a system-wide installation will
not take precedence over the copy of the standard library
bundled with your application. Otherwise, your users may
experience problems using your application. Note that the first
suggestion is the best, as the others may still be susceptible to
non-standard paths in the registry and user site-packages.

Changed in version 3.6: Adds ._pth file support and removes the applocal option
from pyvenv.cfg.

Changed in version 3.6: Adds pythonXX.zip as a potential landmark when directly
adjacent to the executable.

Deprecated since version 3.6: Modules specified in the registry
under Modules (not PythonPath) may be imported
by importlib.machinery.WindowsRegistryFinder.
This finder is enabled on Windows in 3.6.0 and earlier, but may
need to be explicitly added to sys.meta_path in the future.

4.10. Additional modules


Even though Python aims to be portable among all platforms,
there are features that are unique to Windows. A couple of
modules, both in the standard library and external, and snippets
exist to use these features.

The Windows-specific standard modules are documented in MS Windows Specific Services.

4.10.1. PyWin32
The PyWin32 module by Mark Hammond is a collection of
modules for advanced Windows-specific support. This includes
utilities for:

 Component Object Model (COM)
 Win32 API calls
 Registry
 Event log
 Microsoft Foundation Classes (MFC) user interfaces

PythonWin is a sample MFC application shipped with PyWin32. It is an embeddable IDE
with a built-in debugger.
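
As a small, hedged example of the Win32 API wrappers (assuming PyWin32 has been
installed, for instance with pip install pywin32):

import win32api

# Query the local computer name through the Win32 API
print(win32api.GetComputerName())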

See also

Win32 How Do I…?
    by Tim Golden

Python and COM
    by David and Paul Boddie


4.10.2. cx_Freeze
cx_Freeze wraps Python scripts into executable
Windows programs (*.exe files). When you have
done this, you can distribute your application without
requiring your users to install Python.
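
For orientation, a minimal sketch of a cx_Freeze setup script (setup.py); the project
name and the hello.py script are placeholders, and newer cx_Freeze releases also offer
other entry points:

from cx_Freeze import setup, Executable

setup(
    name="hello",
    version="0.1",
    description="Example frozen application",
    executables=[Executable("hello.py")],
)

Running python setup.py build should then produce a build directory containing the
executable and its supporting files.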

4.11. Compiling Python on Windows

If you want to compile CPython yourself, the first thing you should do is get the
source. You can download either the latest release's source or just grab a fresh
checkout.

The source tree contains a build solution and project
files for Microsoft Visual Studio, which is the
compiler used to build the official Python releases.
These files are in the PCbuild directory.

Check PCbuild/readme.txt for general information on the build process.

For extension modules, consult Building C and C++ Extensions on Windows.

4.12. Other Platforms


With ongoing development of Python, some platforms
that used to be supported earlier are no longer
supported (due to the lack of users or developers).
Check PEP 11 for details on all unsupported
platforms.

 Windows CE is no longer supported since Python 3 (if it ever was).
 The Cygwin installer offers to install the Python interpreter as well.

See Python for Windows for detailed information about platforms with pre-compiled
installers.

5. Using Python on a Mac


Author:

Bob Savage <bobsavage@mac.com>

Python on a Mac running macOS is in principle very similar to Python on any
other Unix platform, but there are a number of additional features such as
the integrated development environment (IDE) and the Package Manager
that are worth pointing out.

5.1. Getting and Installing Python


macOS used to come with Python 2.7 pre-installed between versions 10.8 and 12.3. You are
invited to install the most recent version of Python 3 from the Python website. A current
“universal2 binary” build of Python, which runs natively on the Mac’s new Apple Silicon and
legacy Intel processors, is available there.

What you get after installing is a number of things:

 A Python 3.12 folder in your Applications folder. In here you find IDLE, the
development environment that is a standard part of official Python distributions;
and Python Launcher, which handles double-clicking Python scripts from the Finder.
 A framework /Library/Frameworks/Python.framework, which includes the
Python executable and libraries. The installer adds this location to your shell path. To
uninstall Python, you can remove these three things. A symlink to the Python executable
is placed in /usr/local/bin/.
Note

On macOS 10.8-12.3, the Apple-provided build of Python is installed
in /System/Library/Frameworks/Python.framework and /usr/bin/python,
respectively. You should never modify or delete these, as they are Apple-controlled and are used
by Apple- or third-party software. Remember that if you choose to install a newer Python version
from python.org, you will have two different but functional Python installations on your
computer, so it will be important that your paths and usages are consistent with what you want to
do.

IDLE includes a Help menu that allows you to access Python documentation. If you are
completely new to Python you should start reading the tutorial introduction in that document.

If you are familiar with Python on other Unix platforms you should read the section on running
Python scripts from the Unix shell.

5.1.1. How to run a Python script


Your best way to get started with Python on macOS is through the IDLE integrated development
environment; see section The IDE and use the Help menu when the IDE is running.

If you want to run Python scripts from the Terminal window command line or from the Finder
you first need an editor to create your script. macOS comes with a number of standard Unix
command line editors, vim and nano among them. If you want a more Mac-like editor, BBEdit from
Bare Bones Software (see https://www.barebones.com/products/bbedit/index.html) is a good
choice, as is TextMate (see https://macromates.com). Other editors
include MacVim (https://macvim.org) and Aquamacs (https://aquamacs.org).

To run your script from the Terminal window you must make sure that /usr/local/bin is in
your shell search path.

To run your script from the Finder you have two options:

 Drag it to Python Launcher.


 Select Python Launcher as the default application to open your script (or
any .py script) through the finder Info window and double-click it. Python
Launcher has various preferences to control how your script is launched. Option-
dragging allows you to change these for one invocation, or use its Preferences menu to
change things globally.
5.1.2. Running scripts with a GUI
With older versions of Python, there is one macOS quirk that you need to be aware of: programs
that talk to the Aqua window manager (in other words, anything that has a GUI) need to be run
in a special way. Use pythonw instead of python to start such scripts.

With Python 3.9, you can use either python or pythonw.


5.1.3. Configuration
Python on macOS honors all standard Unix environment variables such as PYTHONPATH, but
setting these variables for programs started from the Finder is non-standard as the Finder does
not read your .profile or .cshrc at startup. You need to create a
file ~/.MacOSX/environment.plist. See Apple’s Technical Q&A QA1067 for details.

For more information on installing Python packages, see section Installing Additional Python
Packages.

5.2. The IDE


Python ships with the standard IDLE development environment. A good introduction to using
IDLE can be found at https://www.hashcollision.org/hkn/python/idle_intro/index.html.

5.3. Installing Additional Python Packages


This section has moved to the Python Packaging User Guide.

5.4. GUI Programming


There are several options for building GUI applications on the Mac with Python.

PyObjC is a Python binding to Apple's Objective-C/Cocoa framework, which is the foundation
of most modern Mac development. Information on PyObjC is available from pyobjc.

The standard Python GUI toolkit is tkinter, based on the cross-platform Tk toolkit
(https://www.tcl.tk). An Aqua-native version of Tk is bundled with macOS by Apple, and the
latest version can be downloaded and installed from https://www.activestate.com; it can also be
built from source.
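
As a minimal illustration, a tkinter script that opens a single window; it uses only the
standard library, so it should run with the python.org installer described above:

import tkinter as tk

root = tk.Tk()
root.title("Hello from Tk")
tk.Label(root, text="Hello, macOS!").pack(padx=20, pady=20)
root.mainloop()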

A number of alternative macOS GUI toolkits are available:

 PySide: Official Python bindings to the Qt GUI toolkit.
 PyQt: Alternative Python bindings to Qt.
 Kivy: A cross-platform GUI toolkit that supports desktop and mobile platforms.
 Toga: Part of the BeeWare Project; supports desktop, mobile, web and console apps.
 wxPython: A cross-platform toolkit that supports desktop operating systems.
5.5. Distributing Python Applications
A range of tools exist for converting your Python code into a standalone distributable
application:
 py2app: Supports creating macOS .app bundles from a Python project.
 Briefcase: Part of the BeeWare Project; a cross-platform packaging tool that supports
creation of .app bundles on macOS, as well as managing signing and notarization.
 PyInstaller: A cross-platform packaging tool that creates a single file or folder as a
distributable artifact.
5.6. Other Resources
The Pythonmac-SIG mailing list is an excellent support resource for Python users and developers
on the Mac:

https://www.python.org/community/sigs/current/pythonmac-sig/

Another useful resource is the MacPython wiki:

https://wiki.python.org/moin/MacPython

6. Editors and IDEs


There are a number of IDEs that support the Python programming language.
Many editors and IDEs provide syntax highlighting, debugging tools, and PEP
8 checks.

Please go to Python Editors and Integrated Development Environments for a
comprehensive list.

The Python Language Reference


This reference manual describes the syntax and “core semantics” of the
language. It is terse, but attempts to be exact and complete. The semantics
of non-essential built-in object types and of the built-in functions and
modules are described in The Python Standard Library. For an informal
introduction to the language, see The Python Tutorial. For C or C++
programmers, two additional manuals exist: Extending and Embedding the
Python Interpreter describes the high-level picture of how to write a Python
extension module, and the Python/C API Reference Manual describes the
interfaces available to C/C++ programmers in detail.

 1. Introduction
o 1.1. Alternate Implementations
o 1.2. Notation
 2. Lexical analysis
o 2.1. Line structure
o 2.2. Other tokens
o 2.3. Identifiers and keywords
o 2.4. Literals
o 2.5. Operators
o 2.6. Delimiters
 3. Data model
o 3.1. Objects, values and types
o 3.2. The standard type hierarchy
o 3.3. Special method names
o 3.4. Coroutines
 4. Execution model
o 4.1. Structure of a program
o 4.2. Naming and binding
o 4.3. Exceptions
 5. The import system
o 5.1. importlib
o 5.2. Packages
o 5.3. Searching
o 5.4. Loading
o 5.5. The Path Based Finder
o 5.6. Replacing the standard import system
o 5.7. Package Relative Imports
o 5.8. Special considerations for __main__
o 5.9. References
 6. Expressions
o 6.1. Arithmetic conversions
o 6.2. Atoms
o 6.3. Primaries
o 6.4. Await expression
o 6.5. The power operator
o 6.6. Unary arithmetic and bitwise operations
o 6.7. Binary arithmetic operations
o 6.8. Shifting operations
o 6.9. Binary bitwise operations
o 6.10. Comparisons
o 6.11. Boolean operations
o 6.12. Assignment expressions
o 6.13. Conditional expressions
o 6.14. Lambdas
o 6.15. Expression lists
o 6.16. Evaluation order
o 6.17. Operator precedence
 7. Simple statements
o 7.1. Expression statements
o 7.2. Assignment statements
o 7.3. The assert statement
o 7.4. The pass statement
o 7.5. The del statement
o 7.6. The return statement
o 7.7. The yield statement
o 7.8. The raise statement
o 7.9. The break statement
o 7.10. The continue statement
o 7.11. The import statement
o 7.12. The global statement
o 7.13. The nonlocal statement
o 7.14. The type statement
 8. Compound statements
o 8.1. The if statement
o 8.2. The while statement
o 8.3. The for statement
o 8.4. The try statement
o 8.5. The with statement
o 8.6. The match statement
o 8.7. Function definitions
o 8.8. Class definitions
o 8.9. Coroutines
o 8.10. Type parameter lists
 9. Top-level components
o 9.1. Complete Python programs
o 9.2. File input
o 9.3. Interactive input
o 9.4. Expression input
 10. Full Grammar specification

1. Introduction
This reference manual describes the Python programming language. It is not
intended as a tutorial.

While I am trying to be as precise as possible, I chose to use English rather
than formal specifications for everything except syntax and lexical analysis.
This should make the document more understandable to the average reader,
but will leave room for ambiguities. Consequently, if you were coming from
Mars and tried to re-implement Python from this document alone, you might
have to guess things and in fact you would probably end up implementing
quite a different language. On the other hand, if you are using Python and
wonder what the precise rules about a particular area of the language are,
you should definitely be able to find them here. If you would like to see a
more formal definition of the language, maybe you could volunteer your time
— or invent a cloning machine :-).

It is dangerous to add too many implementation details to a language
reference document — the implementation may change, and other
implementations of the same language may work differently. On the other
hand, CPython is the one Python implementation in widespread use
(although alternate implementations continue to gain support), and its
particular quirks are sometimes worth being mentioned, especially where the
implementation imposes additional limitations. Therefore, you’ll find short
“implementation notes” sprinkled throughout the text.

Every Python implementation comes with a number of built-in and standard
modules. These are documented in The Python Standard Library. A few built-in
modules are mentioned when they interact in a significant way with the
language definition.

1.1. Alternate Implementations


Though there is one Python implementation which is by far the most popular, there are some
alternate implementations which are of particular interest to different audiences.

Known implementations include:

CPython

This is the original and most-maintained implementation of Python, written in C. New
language features generally appear here first.

Jython

Python implemented in Java. This implementation can be used as a scripting language for
Java applications, or can be used to create applications using the Java class libraries. It is
also often used to create tests for Java libraries. More information can be found at the
Jython website.

Python for .NET

This implementation actually uses the CPython implementation, but is a managed .NET
application and makes .NET libraries available. It was created by Brian Lloyd. For more
information, see the Python for .NET home page.
IronPython

An alternate Python for .NET. Unlike Python.NET, this is a complete Python
implementation that generates IL, and compiles Python code directly to .NET assemblies.
It was created by Jim Hugunin, the original creator of Jython. For more information,
see the IronPython website.

PyPy

An implementation of Python written completely in Python. It supports several advanced
features not found in other implementations like stackless support and a Just in Time
compiler. One of the goals of the project is to encourage experimentation with the
language itself by making it easier to modify the interpreter (since it is written in Python).
Additional information is available on the PyPy project’s home page.

Each of these implementations varies in some way from the language as
documented in this manual, or introduces specific information beyond
what’s covered in the standard Python documentation. Please refer to the
implementation-specific documentation to determine what else you need
to know about the specific implementation you’re using.

1.2. Notation
The descriptions of lexical analysis and syntax use a modified Backus–
Naur form (BNF) grammar notation. This uses the following style of
definition:

name      ::= lc_letter (lc_letter | "_")*
lc_letter ::= "a"..."z"

The first line says that a name is an lc_letter followed by a sequence
of zero or more lc_letters and underscores. An lc_letter in turn
is any of the single characters 'a' through 'z'. (This rule is actually
adhered to for the names defined in lexical and grammar rules in this
document.)

Each rule begins with a name (which is the name defined by the rule)
and ::=. A vertical bar (|) is used to separate alternatives; it is the least
binding operator in this notation. A star (*) means zero or more
repetitions of the preceding item; likewise, a plus (+) means one or more
repetitions, and a phrase enclosed in square brackets ([ ]) means zero or
one occurrences (in other words, the enclosed phrase is optional).
The * and + operators bind as tightly as possible; parentheses are used for
grouping. Literal strings are enclosed in quotes. White space is only
meaningful to separate tokens. Rules are normally contained on a single
line; rules with many alternatives may be formatted alternatively with
each line after the first beginning with a vertical bar.

In lexical definitions (as the example above), two more conventions are
used: Two literal characters separated by three dots mean a choice of any
single character in the given (inclusive) range of ASCII characters. A
phrase between angular brackets (<...>) gives an informal description
of the symbol defined; e.g., this could be used to describe the notion of
‘control character’ if needed.

Even though the notation used is almost the same, there is a big difference between the
meaning of lexical and syntactic definitions: a lexical definition operates on the
individual characters of the input source, while a syntax definition operates on the
stream of tokens generated by the lexical analysis. All uses of BNF in the next chapter
(“Lexical Analysis”) are lexical definitions; uses in subsequent chapters are syntactic
definitions.

2. Lexical analysis
A Python program is read by a parser. Input to the parser is a stream
of tokens, generated by the lexical analyzer. This chapter describes how the
lexical analyzer breaks a file into tokens.

Python reads program text as Unicode code points; the encoding of a source
file can be given by an encoding declaration and defaults to UTF-8, see PEP
3120 for details. If the source file cannot be decoded, a SyntaxError is
raised.

2.1. Line structure


A Python program is divided into a number of logical lines.
2.1.1. Logical lines
The end of a logical line is represented by the token NEWLINE. Statements cannot cross logical
line boundaries except where NEWLINE is allowed by the syntax (e.g., between statements in
compound statements). A logical line is constructed from one or more physical lines by
following the explicit or implicit line joining rules.

2.1.2. Physical lines


A physical line is a sequence of characters terminated by an end-of-line sequence. In source files
and strings, any of the standard platform line termination sequences can be used - the Unix form
using ASCII LF (linefeed), the Windows form using the ASCII sequence CR LF (return
followed by linefeed), or the old Macintosh form using the ASCII CR (return) character. All of
these forms can be used equally, regardless of platform. The end of input also serves as an
implicit terminator for the final physical line.

When embedding Python, source code strings should be passed to Python APIs using the
standard C conventions for newline characters (the \n character, representing ASCII LF, is the
line terminator).

2.1.3. Comments
A comment starts with a hash character (#) that is not part of a string literal, and ends at the end
of the physical line. A comment signifies the end of the logical line unless the implicit line
joining rules are invoked. Comments are ignored by the syntax.

2.1.4. Encoding declarations


If a comment in the first or second line of the Python script matches the regular
expression coding[=:]\s*([-\w.]+), this comment is processed as an encoding
declaration; the first group of this expression names the encoding of the source code file. The
encoding declaration must appear on a line of its own. If it is the second line, the first line must
also be a comment-only line. The recommended forms of an encoding expression are

# -*- coding: <encoding-name> -*-

which is recognized also by GNU Emacs, and

# vim:fileencoding=<encoding-name>

which is recognized by Bram Moolenaar’s VIM.


If no encoding declaration is found, the default encoding is UTF-8. If the implicit or explicit
encoding of a file is UTF-8, an initial UTF-8 byte-order mark (b'\xef\xbb\xbf') is ignored rather
than being a syntax error.

If an encoding is declared, the encoding name must be recognized by Python (see Standard
Encodings). The encoding is used for all lexical analysis, including string literals, comments and
identifiers.

2.1.5. Explicit line joining


Two or more physical lines may be joined into logical lines using backslash characters (\), as
follows: when a physical line ends in a backslash that is not part of a string literal or comment, it
is joined with the following forming a single logical line, deleting the backslash and the
following end-of-line character. For example:

if 1900 < year < 2100 and 1 <= month <= 12 \
   and 1 <= day <= 31 and 0 <= hour < 24 \
   and 0 <= minute < 60 and 0 <= second < 60:   # Looks like a valid date
        return 1

A line ending in a backslash cannot carry a comment. A backslash does not continue a comment.
A backslash does not continue a token except for string literals (i.e., tokens other than string
literals cannot be split across physical lines using a backslash). A backslash is illegal elsewhere
on a line outside a string literal.

2.1.6. Implicit line joining


Expressions in parentheses, square brackets or curly braces can be split over more than one
physical line without using backslashes. For example:

month_names = ['Januari', 'Februari', 'Maart',      # These are the
               'April',   'Mei',      'Juni',       # Dutch names
               'Juli',    'Augustus', 'September',  # for the months
               'Oktober', 'November', 'December']   # of the year

Implicitly continued lines can carry comments. The indentation of the continuation lines is not
important. Blank continuation lines are allowed. There is no NEWLINE token between implicit
continuation lines. Implicitly continued lines can also occur within triple-quoted strings (see
below); in that case they cannot carry comments.
2.1.7. Blank lines
A logical line that contains only spaces, tabs, formfeeds and possibly a comment, is ignored (i.e.,
no NEWLINE token is generated). During interactive input of statements, handling of a blank
line may differ depending on the implementation of the read-eval-print loop. In the standard
interactive interpreter, an entirely blank logical line (i.e. one containing not even whitespace or a
comment) terminates a multi-line statement.

2.1.8. Indentation
Leading whitespace (spaces and tabs) at the beginning of a logical line is used to compute the
indentation level of the line, which in turn is used to determine the grouping of statements.

Tabs are replaced (from left to right) by one to eight spaces such that the total number of
characters up to and including the replacement is a multiple of eight (this is intended to be the
same rule as used by Unix). The total number of spaces preceding the first non-blank character
then determines the line’s indentation. Indentation cannot be split over multiple physical lines
using backslashes; the whitespace up to the first backslash determines the indentation.

Indentation is rejected as inconsistent if a source file mixes tabs and spaces in a way that makes
the meaning dependent on the worth of a tab in spaces; a TabError is raised in that case.

Cross-platform compatibility note: because of the nature of text editors on non-UNIX


platforms, it is unwise to use a mixture of spaces and tabs for the indentation in a single source
file. It should also be noted that different platforms may explicitly limit the maximum
indentation level.

A formfeed character may be present at the start of the line; it will be ignored for the indentation
calculations above. Formfeed characters occurring elsewhere in the leading whitespace have an
undefined effect (for instance, they may reset the space count to zero).

The indentation levels of consecutive lines are used to generate INDENT and DEDENT tokens,
using a stack, as follows.

Before the first line of the file is read, a single zero is pushed on the stack; this will never be
popped off again. The numbers pushed on the stack will always be strictly increasing from
bottom to top. At the beginning of each logical line, the line’s indentation level is compared to
the top of the stack. If it is equal, nothing happens. If it is larger, it is pushed on the stack, and
one INDENT token is generated. If it is smaller, it must be one of the numbers occurring on the
stack; all numbers on the stack that are larger are popped off, and for each number popped off a
DEDENT token is generated. At the end of the file, a DEDENT token is generated for each
number remaining on the stack that is larger than zero.

Here is an example of a correctly (though confusingly) indented piece of Python code:


def perm(l):
        # Compute the list of all permutations of l
    if len(l) <= 1:
                  return [l]
    r = []
    for i in range(len(l)):
             s = l[:i] + l[i+1:]
             p = perm(s)
             for x in p:
              r.append(l[i:i+1] + x)
    return r

The following example shows various indentation errors:

 def perm(l):                       # error: first line indented
for i in range(len(l)):             # error: not indented
    s = l[:i] + l[i+1:]
        p = perm(l[:i] + l[i+1:])   # error: unexpected indent
        for x in p:
                r.append(l[i:i+1] + x)
            return r                # error: inconsistent dedent

(Actually, the first three errors are detected by the parser; only the last error is found by the
lexical analyzer — the indentation of return r does not match a level popped off the stack.)

2.1.9. Whitespace between tokens


Except at the beginning of a logical line or in string literals, the whitespace characters space, tab
and formfeed can be used interchangeably to separate tokens. Whitespace is needed between two
tokens only if their concatenation could otherwise be interpreted as a different token (e.g., ab is
one token, but a b is two tokens).

2.2. Other tokens


Besides NEWLINE, INDENT and DEDENT, the following categories of tokens
exist: identifiers, keywords, literals, operators, and delimiters. Whitespace characters (other than
line terminators, discussed earlier) are not tokens, but serve to delimit tokens. Where ambiguity
exists, a token comprises the longest possible string that forms a legal token, when read from left
to right.
2.3. Identifiers and keywords
Identifiers (also referred to as names) are described by the following lexical definitions.

The syntax of identifiers in Python is based on the Unicode standard annex UAX-31, with
elaboration and changes as defined below; see also PEP 3131 for further details.

Within the ASCII range (U+0001..U+007F), the valid characters for identifiers are the same as in
Python 2.x: the uppercase and lowercase letters A through Z, the underscore _ and, except for the
first character, the digits 0 through 9.

Python 3.0 introduces additional characters from outside the ASCII range (see PEP 3131). For
these characters, the classification uses the version of the Unicode Character Database as
included in the unicodedata module.

Identifiers are unlimited in length. Case is significant.

identifier   ::= xid_start xid_continue*
id_start     ::= <all characters in general categories Lu, Ll, Lt, Lm, Lo, Nl,
                  the underscore, and characters with the Other_ID_Start property>
id_continue  ::= <all characters in id_start, plus characters in the categories
                  Mn, Mc, Nd, Pc and others with the Other_ID_Continue property>
xid_start    ::= <all characters in id_start whose NFKC normalization is in
                  "id_start xid_continue*">
xid_continue ::= <all characters in id_continue whose NFKC normalization is in
                  "id_continue*">

The Unicode category codes mentioned above stand for:

 Lu - uppercase letters
 Ll - lowercase letters
 Lt - titlecase letters
 Lm - modifier letters
 Lo - other letters
 Nl - letter numbers
 Mn - nonspacing marks
 Mc - spacing combining marks
 Nd - decimal numbers
 Pc - connector punctuations
 Other_ID_Start - explicit list of characters in PropList.txt to support backwards
compatibility
 Other_ID_Continue - likewise
All identifiers are converted into the normal form NFKC while parsing; comparison of identifiers
is based on NFKC.
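
For example, the following snippet shows NFKC normalization folding a compatibility
character; the ligature is chosen only as an illustration:

import unicodedata

# U+FB01 (LATIN SMALL LIGATURE FI) normalizes to the two letters "fi", so an
# identifier written with the ligature names the same thing as one written with "fi".
print(unicodedata.normalize("NFKC", "\ufb01name"))   # prints 'finame'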

A non-normative HTML file listing all valid identifier characters for Unicode 15.0.0 can be
found at https://www.unicode.org/Public/15.0.0/ucd/DerivedCoreProperties.txt

2.3.1. Keywords
The following identifiers are used as reserved words, or keywords of the language, and cannot be
used as ordinary identifiers. They must be spelled exactly as written here:

False      await      else       import     pass
None       break      except     in         raise
True       class      finally    is         return
and        continue   for        lambda     try
as         def        from       nonlocal   while
assert     del        global     not        with
async      elif       if         or         yield

2.3.2. Soft Keywords


Added in version 3.10.

Some identifiers are only reserved under specific contexts. These are known as soft keywords.
The identifiers match, case, type and _ can syntactically act as keywords in certain contexts,
but this distinction is done at the parser level, not when tokenizing.

As soft keywords, their use in the grammar is possible while still preserving compatibility with
existing code that uses these names as identifier names.

match, case, and _ are used in the match statement. type is used in the type statement.

Changed in version 3.12: type is now a soft keyword.

2.3.3. Reserved classes of identifiers


Certain classes of identifiers (besides keywords) have special meanings. These classes are
identified by the patterns of leading and trailing underscore characters:

_*

Not imported by from module import *.

_
In a case pattern within a match statement, _ is a soft keyword that denotes a wildcard.

Separately, the interactive interpreter makes the result of the last evaluation available in
the variable _. (It is stored in the builtins module, alongside built-in functions
like print.)

Elsewhere, _ is a regular identifier. It is often used to name “special” items, but it is not
special to Python itself.

Note

The name _ is often used in conjunction with internationalization; refer to the
documentation for the gettext module for more information on this convention.

It is also commonly used for unused variables.


__*__

System-defined names, informally known as “dunder” names. These names are defined
by the interpreter and its implementation (including the standard library). Current system
names are discussed in the Special method names section and elsewhere. More will likely
be defined in future versions of Python. Any use of __*__ names, in any context, that
does not follow explicitly documented use, is subject to breakage without warning.

__*

Class-private names. Names in this category, when used within the context of a class
definition, are re-written to use a mangled form to help avoid name clashes between
“private” attributes of base and derived classes. See section Identifiers (Names).
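
A short sketch of the mangling (the class and attribute names are arbitrary):

class Base:
    def __init__(self):
        self.__token = 1            # stored as _Base__token

class Derived(Base):
    def __init__(self):
        super().__init__()
        self.__token = 2            # stored as _Derived__token, so no clash

d = Derived()
print(d._Base__token, d._Derived__token)   # prints: 1 2
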
2.4. Literals
Literals are notations for constant values of some built-in types.

2.4.1. String and Bytes literals


String literals are described by the following lexical definitions:

stringliteral   ::= [stringprefix](shortstring | longstring)
stringprefix    ::= "r" | "u" | "R" | "U" | "f" | "F"
                    | "fr" | "Fr" | "fR" | "FR" | "rf" | "rF" | "Rf" | "RF"
shortstring     ::= "'" shortstringitem* "'" | '"' shortstringitem* '"'
longstring      ::= "'''" longstringitem* "'''" | '"""' longstringitem* '"""'
shortstringitem ::= shortstringchar | stringescapeseq
longstringitem  ::= longstringchar | stringescapeseq
shortstringchar ::= <any source character except "\" or newline or the quote>
longstringchar  ::= <any source character except "\">
stringescapeseq ::= "\" <any source character>

bytesliteral    ::= bytesprefix(shortbytes | longbytes)
bytesprefix     ::= "b" | "B" | "br" | "Br" | "bR" | "BR" | "rb" | "rB" | "Rb" | "RB"
shortbytes      ::= "'" shortbytesitem* "'" | '"' shortbytesitem* '"'
longbytes       ::= "'''" longbytesitem* "'''" | '"""' longbytesitem* '"""'
shortbytesitem  ::= shortbyteschar | bytesescapeseq
longbytesitem   ::= longbyteschar | bytesescapeseq
shortbyteschar  ::= <any ASCII character except "\" or newline or the quote>
longbyteschar   ::= <any ASCII character except "\">
bytesescapeseq  ::= "\" <any ASCII character>

One syntactic restriction not indicated by these productions is that whitespace
is not allowed between the stringprefix or bytesprefix and the rest of
the literal. The source character set is defined by the encoding declaration; it is
UTF-8 if no encoding declaration is given in the source file; see
section Encoding declarations.

In plain English: Both types of literals can be enclosed in matching single
quotes (') or double quotes ("). They can also be enclosed in matching groups
of three single or double quotes (these are generally referred to as triple-quoted
strings). The backslash (\) character is used to give special meaning to
otherwise ordinary characters like n, which means 'newline' when escaped (\n).
It can also be used to escape characters that otherwise have a special
meaning, such as newline, backslash itself, or the quote character. See escape
sequences below for examples.

Bytes literals are always prefixed with 'b' or 'B'; they produce an instance
of the bytes type instead of the str type. They may only contain ASCII
characters; bytes with a numeric value of 128 or greater must be expressed
with escapes.

Both string and bytes literals may optionally be prefixed with a
letter 'r' or 'R'; such constructs are called raw string literals and raw bytes
literals respectively and treat backslashes as literal characters. As a result, in
raw string literals, '\U' and '\u' escapes are not treated specially.

Added in version 3.3: The 'rb' prefix of raw bytes literals has been added as
a synonym of 'br'.

Support for the unicode legacy literal (u'value') was reintroduced to
simplify the maintenance of dual Python 2.x and 3.x codebases. See PEP
414 for more information.

A string literal with 'f' or 'F' in its prefix is a formatted string literal;
see f-strings. The 'f' may be combined with 'r', but not with 'b' or 'u',
therefore raw formatted strings are possible, but formatted bytes literals are
not.

In triple-quoted literals, unescaped newlines and quotes are allowed (and are
retained), except that three unescaped quotes in a row terminate the literal. (A
“quote” is the character used to open the literal, i.e. either ' or ".)

2.4.1.1. Escape sequences


Unless an 'r' or 'R' prefix is present, escape sequences in string and bytes
literals are interpreted according to rules similar to those used by Standard C.
The recognized escape sequences are:

Escape Sequence    Meaning                                          Notes

\<newline>         Backslash and newline ignored                    (1)
\\                 Backslash (\)
\'                 Single quote (')
\"                 Double quote (")
\a                 ASCII Bell (BEL)
\b                 ASCII Backspace (BS)
\f                 ASCII Formfeed (FF)
\n                 ASCII Linefeed (LF)
\r                 ASCII Carriage Return (CR)
\t                 ASCII Horizontal Tab (TAB)
\v                 ASCII Vertical Tab (VT)
\ooo               Character with octal value ooo                   (2,4)
\xhh               Character with hex value hh                      (3,4)

Escape sequences only recognized in string literals are:

Escape Sequence    Meaning                                          Notes

\N{name}           Character named name in the Unicode database     (5)
\uxxxx             Character with 16-bit hex value xxxx             (6)
\Uxxxxxxxx         Character with 32-bit hex value xxxxxxxx         (7)

Notes:

1. A backslash can be added at the end of a line to ignore the newline:

>>> 'This string will not include \
... backslashes or newline characters.'
'This string will not include backslashes or newline characters.'

The same result can be achieved using triple-quoted strings, or
parentheses and string literal concatenation.

2. As in Standard C, up to three octal digits are accepted.

Changed in version 3.11: Octal escapes with value larger
than 0o377 produce a DeprecationWarning.

Changed in version 3.12: Octal escapes with value larger
than 0o377 produce a SyntaxWarning. In a future Python version
they will eventually be a SyntaxError.

3. Unlike in Standard C, exactly two hex digits are required.


4. In a bytes literal, hexadecimal and octal escapes denote the byte with
the given value. In a string literal, these escapes denote a Unicode
character with the given value.
5. Changed in version 3.3: Support for name aliases [1] has been added.
6. Exactly four hex digits are required.
7. Any Unicode character can be encoded this way. Exactly eight hex
digits are required.

Unlike Standard C, all unrecognized escape sequences are left in the string
unchanged, i.e., the backslash is left in the result. (This behavior is useful
when debugging: if an escape sequence is mistyped, the resulting output is
more easily recognized as broken.) It is also important to note that the escape
sequences only recognized in string literals fall into the category of
unrecognized escapes for bytes literals.

Changed in version 3.6: Unrecognized escape sequences produce
a DeprecationWarning.

Changed in version 3.12: Unrecognized escape sequences produce
a SyntaxWarning. In a future Python version they will eventually
be a SyntaxError.

Even in a raw literal, quotes can be escaped with a backslash, but the
backslash remains in the result; for example, r"\"" is a valid string literal
consisting of two characters: a backslash and a double quote; r"\" is not a
valid string literal (even a raw string cannot end in an odd number of
backslashes). Specifically, a raw literal cannot end in a single
backslash (since the backslash would escape the following quote character).
Note also that a single backslash followed by a newline is interpreted as those
two characters as part of the literal, not as a line continuation.

2.4.2. String literal concatenation


Multiple adjacent string or bytes literals (delimited by whitespace), possibly
using different quoting conventions, are allowed, and their meaning is the
same as their concatenation. Thus, "hello" 'world' is equivalent
to "helloworld". This feature can be used to reduce the number of
backslashes needed, to split long strings conveniently across long lines, or
even to add comments to parts of strings, for example:
re.compile("[A-Za-z_]"       # letter or underscore
           "[A-Za-z0-9_]*"   # letter, digit or underscore
          )

Note that this feature is defined at the syntactical level, but implemented at
compile time. The ‘+’ operator must be used to concatenate string expressions
at run time. Also note that literal concatenation can use different quoting
styles for each component (even mixing raw strings and triple quoted strings),
and formatted string literals may be concatenated with plain string literals.
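For example, the following minimal sketch (the variable names are illustrative)
contrasts compile-time literal concatenation with run-time concatenation using '+':

parts = "Py" "thon"        # adjacent literals are joined when the source is compiled
prefix = "Py"
name = prefix + "thon"     # expressions must be joined with '+' at run time
print(parts == name)       # True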

2.4.3. f-strings
Added in version 3.6.

A formatted string literal or f-string is a string literal that is prefixed


with 'f' or 'F'. These strings may contain replacement fields, which are
expressions delimited by curly braces {}. While other string literals always
have a constant value, formatted strings are really expressions evaluated at run
time.

Escape sequences are decoded like in ordinary string literals (except when a
literal is also marked as a raw string). After decoding, the grammar for the
contents of the string is:

f_string          ::= (literal_char | "{{" | "}}" | replacement_field)*
replacement_field ::= "{" f_expression ["="] ["!" conversion] [":" format_spec] "}"
f_expression      ::= (conditional_expression | "*" or_expr)
                        ("," conditional_expression | "," "*" or_expr)* [","]
                      | yield_expression
conversion        ::= "s" | "r" | "a"
format_spec       ::= (literal_char | replacement_field)*
literal_char      ::= <any code point except "{", "}" or NULL>

The parts of the string outside curly braces are treated literally, except that any
doubled curly braces '{{' or '}}' are replaced with the corresponding
single curly brace. A single opening curly bracket '{' marks a replacement
field, which starts with a Python expression. To display both the expression
text and its value after evaluation, (useful in debugging), an equal
sign '=' may be added after the expression. A conversion field, introduced
by an exclamation point '!' may follow. A format specifier may also be
appended, introduced by a colon ':'. A replacement field ends with a closing
curly bracket '}'.

Expressions in formatted string literals are treated like regular Python


expressions surrounded by parentheses, with a few exceptions. An empty
expression is not allowed, and both lambda and assignment
expressions := must be surrounded by explicit parentheses. Each expression
is evaluated in the context where the formatted string literal appears, in order
from left to right. Replacement expressions can contain newlines in both
single-quoted and triple-quoted f-strings and they can contain comments.
Everything that comes after a # inside a replacement field is a comment (even
closing braces and quotes). In that case, replacement fields must be closed in a
different line.

>>> f"abc{a # This is a comment }"


... + 3}"
'abc5'

Changed in version 3.7: Prior to Python 3.7, an await expression and


comprehensions containing an async for clause were illegal in the
expressions in formatted string literals due to a problem with the
implementation.

Changed in version 3.12: Prior to Python 3.12, comments were not allowed
inside f-string replacement fields.

When the equal sign '=' is provided, the output will have the expression text,
the '=' and the evaluated value. Spaces after the opening brace '{', within
the expression and after the '=' are all retained in the output. By default,
the '=' causes the repr() of the expression to be provided, unless there is a
format specified. When a format is specified it defaults to the str() of the
expression unless a conversion '!r' is declared.

Added in version 3.8: The equal sign '='.

If a conversion is specified, the result of evaluating the expression is


converted before formatting. Conversion '!s' calls str() on the result,
'!r' calls repr(), and '!a' calls ascii().

The result is then formatted using the format() protocol. The format
specifier is passed to the __format__() method of the expression or
conversion result. An empty string is passed when the format specifier is
omitted. The formatted result is then included in the final value of the whole
string.

Top-level format specifiers may include nested replacement fields. These


nested fields may include their own conversion fields and format specifiers,
but may not include more deeply nested replacement fields. The format
specifier mini-language is the same as that used by
the str.format() method.

Formatted string literals may be concatenated, but replacement fields cannot


be split across literals.

Some examples of formatted string literals:

>>>

>>> name = "Fred"


>>> f"He said his name is {name!r}."
"He said his name is 'Fred'."
>>> f"He said his name is {repr(name)}." # repr() is
equivalent to !r
"He said his name is 'Fred'."
>>> width = 10
>>> precision = 4
>>> value = decimal.Decimal("12.34567")
>>> f"result: {value:{width}.{precision}}" # nested
fields
'result: 12.35'
>>> today = datetime(year=2017, month=1, day=27)
>>> f"{today:%B %d, %Y}" # using date format
specifier
'January 27, 2017'
>>> f"{today=:%B %d, %Y}" # using date format
specifier and debugging
'today=January 27, 2017'
>>> number = 1024
>>> f"{number:#0x}" # using integer format specifier
'0x400'
>>> foo = "bar"
>>> f"{ foo = }" # preserves whitespace
" foo = 'bar'"
>>> line = "The mill's closed"
>>> f"{line = }"
'line = "The mill\'s closed"'
>>> f"{line = :20}"
"line = The mill's closed "
>>> f"{line = !r:20}"
'line = "The mill\'s closed" '

Reusing the outer f-string quoting type inside a replacement field is permitted:

>>>

>>> a = dict(x=2)
>>> f"abc {a["x"]} def"
'abc 2 def'

Changed in version 3.12: Prior to Python 3.12, reuse of the same quoting type
of the outer f-string inside a replacement field was not possible.

Backslashes are also allowed in replacement fields and are evaluated the same
way as in any other context:

>>>

>>> a = ["a", "b", "c"]


>>> print(f"List a contains:\n{"\n".join(a)}")
List a contains:
a
b
c

Changed in version 3.12: Prior to Python 3.12, backslashes were not


permitted inside an f-string replacement field.

Formatted string literals cannot be used as docstrings, even if they do not


include expressions.

>>>

>>> def foo():
...     f"Not a docstring"
...
>>> foo.__doc__ is None
True
See also PEP 498 for the proposal that added formatted string literals,
and str.format(), which uses a related format string mechanism.

2.4.4. Numeric literals


There are three types of numeric literals: integers, floating-point numbers, and
imaginary numbers. There are no complex literals (complex numbers can be
formed by adding a real number and an imaginary number).

Note that numeric literals do not include a sign; a phrase like -1 is actually an
expression composed of the unary operator ‘-’ and the literal 1.

2.4.5. Integer literals


Integer literals are described by the following lexical definitions:

integer      ::= decinteger | bininteger | octinteger | hexinteger
decinteger   ::= nonzerodigit (["_"] digit)* | "0"+ (["_"] "0")*
bininteger   ::= "0" ("b" | "B") (["_"] bindigit)+
octinteger   ::= "0" ("o" | "O") (["_"] octdigit)+
hexinteger   ::= "0" ("x" | "X") (["_"] hexdigit)+
nonzerodigit ::= "1"..."9"
digit        ::= "0"..."9"
bindigit     ::= "0" | "1"
octdigit     ::= "0"..."7"
hexdigit     ::= digit | "a"..."f" | "A"..."F"

There is no limit for the length of integer literals apart from what can be
stored in available memory.

Underscores are ignored for determining the numeric value of the literal. They
can be used to group digits for enhanced readability. One underscore can
occur between digits, and after base specifiers like 0x.

Note that leading zeros in a non-zero decimal number are not allowed. This is
for disambiguation with C-style octal literals, which Python used before
version 3.0.

Some examples of integer literals:


7      2147483647                       0o177     0b100110111
3      79228162514264337593543950336   0o377     0xdeadbeef
       100_000_000_000                 0b_1110_0101

Changed in version 3.6: Underscores are now allowed for grouping purposes
in literals.

2.4.6. Floating-point literals


Floating-point literals are described by the following lexical definitions:

floatnumber   ::= pointfloat | exponentfloat
pointfloat    ::= [digitpart] fraction | digitpart "."
exponentfloat ::= (digitpart | pointfloat) exponent
digitpart     ::= digit (["_"] digit)*
fraction      ::= "." digitpart
exponent      ::= ("e" | "E") ["+" | "-"] digitpart

Note that the integer and exponent parts are always interpreted using radix 10.
For example, 077e010 is legal, and denotes the same number as 77e10. The
allowed range of floating-point literals is implementation-dependent. As in
integer literals, underscores are supported for digit grouping.

Some examples of floating-point literals:

3.14    10.    .001    1e100    3.14e-10    0e0    3.14_15_93

Changed in version 3.6: Underscores are now allowed for grouping purposes
in literals.

2.4.7. Imaginary literals


Imaginary literals are described by the following lexical definitions:

imagnumber ::= (floatnumber | digitpart) ("j" | "J")

An imaginary literal yields a complex number with a real part of 0.0. Complex
numbers are represented as a pair of floating-point numbers and have the same
restrictions on their range. To create a complex number with a nonzero real
part, add a floating-point number to it, e.g., (3+4j). Some examples of
imaginary literals:

3.14j    10.j    10j    .001j    1e100j    3.14e-10j    3.14_15_93j

2.5. Operators
The following tokens are operators:

+       -       *       **      /       //      %       @
<<      >>      &       |       ^       ~       :=
<       >       <=      >=      ==      !=

2.6. Delimiters
The following tokens serve as delimiters in the grammar:

( ) [ ] { }
, : ! . ; @ =
-> += -= *= /= //= %=
@= &= |= ^= >>= <<= **=

The period can also occur in floating-point and imaginary literals. A sequence
of three periods has a special meaning as an ellipsis literal. The second half of
the list, the augmented assignment operators, serve lexically as delimiters, but
also perform an operation.

The following printing ASCII characters have special meaning as part of other
tokens or are otherwise significant to the lexical analyzer:

' " # \

The following printing ASCII characters are not used in Python. Their
occurrence outside string literals and comments is an unconditional error:

$ ? `

Footnotes

[1]
https://www.unicode.org/Public/15.0.0/ucd/NameAliases.txt


3. Data model
3.1. Objects, values and types
Objects are Python’s abstraction for data. All data in a Python program is represented by objects
or by relations between objects. (In a sense, and in conformance to Von Neumann’s model of a
“stored program computer”, code is also represented by objects.)

Every object has an identity, a type and a value. An object’s identity never changes once it has
been created; you may think of it as the object’s address in memory. The is operator compares
the identity of two objects; the id() function returns an integer representing its identity.

CPython implementation detail: For CPython, id(x) is the memory address where x is
stored.

An object’s type determines the operations that the object supports (e.g., “does it have a
length?”) and also defines the possible values for objects of that type. The type() function
returns an object’s type (which is an object itself). Like its identity, an object’s type is also
unchangeable. [1]

The value of some objects can change. Objects whose value can change are said to be mutable;
objects whose value is unchangeable once they are created are called immutable. (The value of
an immutable container object that contains a reference to a mutable object can change when the
latter’s value is changed; however the container is still considered immutable, because the
collection of objects it contains cannot be changed. So, immutability is not strictly the same as
having an unchangeable value, it is more subtle.) An object’s mutability is determined by its
type; for instance, numbers, strings and tuples are immutable, while dictionaries and lists are
mutable.

Objects are never explicitly destroyed; however, when they become unreachable they may be
garbage-collected. An implementation is allowed to postpone garbage collection or omit it
altogether — it is a matter of implementation quality how garbage collection is implemented, as
long as no objects are collected that are still reachable.

CPython implementation detail: CPython currently uses a reference-counting scheme with


(optional) delayed detection of cyclically linked garbage, which collects most objects as soon as
they become unreachable, but is not guaranteed to collect garbage containing circular references.
See the documentation of the gc module for information on controlling the collection of cyclic
garbage. Other implementations act differently and CPython may change. Do not depend on
immediate finalization of objects when they become unreachable (so you should always close
files explicitly).

Note that the use of the implementation’s tracing or debugging facilities may keep objects alive
that would normally be collectable. Also note that catching an exception with a try…
except statement may keep objects alive.

Some objects contain references to “external” resources such as open files or windows. It is
understood that these resources are freed when the object is garbage-collected, but since garbage
collection is not guaranteed to happen, such objects also provide an explicit way to release the
external resource, usually a close() method. Programs are strongly recommended to explicitly
close such objects. The try…finally statement and the with statement provide convenient
ways to do this.

Some objects contain references to other objects; these are called containers. Examples of
containers are tuples, lists and dictionaries. The references are part of a container’s value. In
most cases, when we talk about the value of a container, we imply the values, not the identities
of the contained objects; however, when we talk about the mutability of a container, only the
identities of the immediately contained objects are implied. So, if an immutable container (like a
tuple) contains a reference to a mutable object, its value changes if that mutable object is
changed.

Types affect almost all aspects of object behavior. Even the importance of object identity is
affected in some sense: for immutable types, operations that compute new values may actually
return a reference to any existing object with the same type and value, while for mutable objects
this is not allowed. For example, after a = 1; b = 1, a and b may or may not refer to the
same object with the value one, depending on the implementation. This is because int is an
immutable type, so the reference to 1 can be reused. This behaviour depends on the
implementation used, so should not be relied upon, but is something to be aware of when making
use of object identity tests. However, after c = []; d = [], c and d are guaranteed to refer to
two different, unique, newly created empty lists. (Note that e = f = [] assigns
the same object to both e and f.)

3.2. The standard type hierarchy


Below is a list of the types that are built into Python. Extension modules (written in C, Java, or
other languages, depending on the implementation) can define additional types. Future versions
of Python may add types to the type hierarchy (e.g., rational numbers, efficiently stored arrays of
integers, etc.), although such additions will often be provided via the standard library instead.

Some of the type descriptions below contain a paragraph listing ‘special attributes.’ These are
attributes that provide access to the implementation and are not intended for general use. Their
definition may change in the future.
3.2.1. None
This type has a single value. There is a single object with this value. This object is accessed
through the built-in name None. It is used to signify the absence of a value in many situations,
e.g., it is returned from functions that don’t explicitly return anything. Its truth value is false.

3.2.2. NotImplemented
This type has a single value. There is a single object with this value. This object is accessed
through the built-in name NotImplemented. Numeric methods and rich comparison methods
should return this value if they do not implement the operation for the operands provided. (The
interpreter will then try the reflected operation, or some other fallback, depending on the
operator.) It should not be evaluated in a boolean context.

See Implementing the arithmetic operations for more details.

Changed in version 3.9: Evaluating NotImplemented in a boolean context is deprecated.


While it currently evaluates as true, it will emit a DeprecationWarning. It will raise
a TypeError in a future version of Python.

3.2.3. Ellipsis
This type has a single value. There is a single object with this value. This object is accessed
through the literal ... or the built-in name Ellipsis. Its truth value is true.

3.2.4. numbers.Number
These are created by numeric literals and returned as results by arithmetic operators and
arithmetic built-in functions. Numeric objects are immutable; once created their value never
changes. Python numbers are of course strongly related to mathematical numbers, but subject to
the limitations of numerical representation in computers.

The string representations of the numeric classes, computed by __repr__() and __str__(),
have the following properties:

 They are valid numeric literals which, when passed to their class constructor, produce an
object having the value of the original numeric.
 The representation is in base 10, when possible.
 Leading zeros, possibly excepting a single zero before a decimal point, are not shown.
 Trailing zeros, possibly excepting a single zero after a decimal point, are not shown.
 A sign is shown only when the number is negative.

Python distinguishes between integers, floating-point numbers, and complex numbers:


3.2.4.1. numbers.Integral
These represent elements from the mathematical set of integers (positive and negative).

Note

The rules for integer representation are intended to give the most meaningful interpretation of
shift and mask operations involving negative integers.

There are two types of integers:

Integers (int)

These represent numbers in an unlimited range, subject to available (virtual) memory


only. For the purpose of shift and mask operations, a binary representation is assumed,
and negative numbers are represented in a variant of 2’s complement which gives the
illusion of an infinite string of sign bits extending to the left.

Booleans (bool)

These represent the truth values False and True. The two objects representing the
values False and True are the only Boolean objects. The Boolean type is a subtype of
the integer type, and Boolean values behave like the values 0 and 1, respectively, in
almost all contexts, the exception being that when converted to a string, the
strings "False" or "True" are returned, respectively.
3.2.4.2. numbers.Real (float)
These represent machine-level double precision floating-point numbers. You are at the
mercy of the underlying machine architecture (and C or Java implementation) for the
accepted range and handling of overflow. Python does not support single-precision
floating-point numbers; the savings in processor and memory usage that are usually the
reason for using these are dwarfed by the overhead of using objects in Python, so there
is no reason to complicate the language with two kinds of floating-point numbers.

3.2.4.3. numbers.Complex (complex)


These represent complex numbers as a pair of machine-level double precision floating-
point numbers. The same caveats apply as for floating-point numbers. The real and
imaginary parts of a complex number z can be retrieved through the read-only
attributes z.real and z.imag.
3.2.5. Sequences
These represent finite ordered sets indexed by non-negative numbers. The built-in
function len() returns the number of items of a sequence. When the length of a
sequence is n, the index set contains the numbers 0, 1, …, n-1. Item i of sequence a is
selected by a[i]. Some sequences, including built-in sequences, interpret negative
subscripts by adding the sequence length. For example, a[-2] equals a[n-2], the
second to last item of sequence a with length n.

Sequences also support slicing: a[i:j] selects all items with index k such
that i <= k < j. When used as an expression, a slice is a sequence of the same type. The
comment above about negative indexes also applies to negative slice positions.

Some sequences also support “extended slicing” with a third “step”


parameter: a[i:j:k] selects all items of a with
index x where x = i + n*k, n >= 0 and i <= x < j.

Sequences are distinguished according to their mutability:

3.2.5.1. Immutable sequences


An object of an immutable sequence type cannot change once it is created. (If the
object contains references to other objects, these other objects may be mutable and may
be changed; however, the collection of objects directly referenced by an immutable
object cannot change.)

The following types are immutable sequences:

Strings

A string is a sequence of values that represent Unicode code points. All the code points in
the range U+0000 - U+10FFFF can be represented in a string. Python doesn’t have
a char type; instead, every code point in the string is represented as a string object with
length 1. The built-in function ord() converts a code point from its string form to an
integer in the range 0 - 10FFFF; chr() converts an integer in the
range 0 - 10FFFF to the corresponding length 1 string object. str.encode() can be
used to convert a str to bytes using the given text encoding,
and bytes.decode() can be used to achieve the opposite.
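As an illustrative sketch of these conversions (the sample text is arbitrary):

print(ord("a"))                        # 97
print(chr(0x10FFFF))                   # '\U0010ffff', the highest code point
print("café".encode("utf-8"))          # b'caf\xc3\xa9'
print(b"caf\xc3\xa9".decode("utf-8"))  # 'café'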

Tuples

The items of a tuple are arbitrary Python objects. Tuples of two or more items are formed
by comma-separated lists of expressions. A tuple of one item (a ‘singleton’) can be
formed by affixing a comma to an expression (an expression by itself does not create a
tuple, since parentheses must be usable for grouping of expressions). An empty tuple can
be formed by an empty pair of parentheses.
Bytes

A bytes object is an immutable array. The items are 8-bit bytes, represented by integers in
the range 0 <= x < 256. Bytes literals (like b'abc') and the built-
in bytes() constructor can be used to create bytes objects. Also, bytes objects can be
decoded to strings via the decode() method.
3.2.5.2. Mutable sequences
Mutable sequences can be changed after they are created. The
subscription and slicing notations can be used as the target of assignment
and del (delete) statements.

Note

The collections and array modules provide additional examples of
mutable sequence types.

There are currently two intrinsic mutable sequence types:

Lists

The items of a list are arbitrary Python objects. Lists are formed by placing a comma-
separated list of expressions in square brackets. (Note that there are no special cases
needed to form lists of length 0 or 1.)

Byte Arrays

A bytearray object is a mutable array. They are created by the built-


in bytearray() constructor. Aside from being mutable (and hence unhashable), byte
arrays otherwise provide the same interface and functionality as
immutable bytes objects.
3.2.6. Set types
These represent unordered, finite sets of unique, immutable
objects. As such, they cannot be indexed by any subscript.
However, they can be iterated over, and the built-in
function len() returns the number of items in a set. Common
uses for sets are fast membership testing, removing duplicates
from a sequence, and computing mathematical operations such
as intersection, union, difference, and symmetric difference.

For set elements, the same immutability rules apply as for


dictionary keys. Note that numeric types obey the normal rules
for numeric comparison: if two numbers compare equal
(e.g., 1 and 1.0), only one of them can be contained in a set.
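A small sketch of this rule (the literal values are arbitrary):

s = {1.0, 1, 2}          # 1 and 1.0 compare equal, so only the first one is kept
print(len(s))            # 2
print(1 in s, 1.0 in s)  # True True -- membership testing uses the same equality rules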

There are currently two intrinsic set types:

Sets

These represent a mutable set. They are created by the built-in set() constructor and
can be modified afterwards by several methods, such as add().

Frozen sets

These represent an immutable set. They are created by the built-


in frozenset() constructor. As a frozenset is immutable and hashable, it can be used
again as an element of another set, or as a dictionary key.
3.2.7. Mappings
These represent finite sets of objects indexed by
arbitrary index sets. The subscript
notation a[k] selects the item indexed by k from the
mapping a; this can be used in expressions and as the
target of assignments or del statements. The built-in
function len() returns the number of items in a
mapping.

There is currently a single intrinsic mapping type:

3.2.7.1. Dictionaries
These represent finite sets of objects indexed by nearly
arbitrary values. The only types of values not
acceptable as keys are values containing lists or
dictionaries or other mutable types that are compared
by value rather than by object identity, the reason
being that the efficient implementation of dictionaries
requires a key’s hash value to remain constant.
Numeric types used for keys obey the normal rules for
numeric comparison: if two numbers compare equal
(e.g., 1 and 1.0) then they can be used
interchangeably to index the same dictionary entry.

Dictionaries preserve insertion order, meaning that


keys will be produced in the same order they were
added sequentially over the dictionary. Replacing an
existing key does not change the order, however
removing a key and re-inserting it will add it to the end
instead of keeping its old place.

Dictionaries are mutable; they can be created by


the {} notation (see section Dictionary displays).
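A brief sketch of the ordering behaviour described above (the keys and values are
arbitrary):

d = {"a": 1, "b": 2}
d["a"] = 10        # replacing an existing key keeps its original position
d["c"] = 3
del d["b"]
d["b"] = 2         # removing and re-inserting a key moves it to the end
print(list(d))     # ['a', 'c', 'b']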

The extension
modules dbm.ndbm and dbm.gnu provide additional
examples of mapping types, as does
the collections module.

Changed in version 3.7: Dictionaries did not preserve


insertion order in versions of Python before 3.6. In
CPython 3.6, insertion order was preserved, but it was
considered an implementation detail at that time rather
than a language guarantee.

3.2.8. Callable types


These are the types to which the function call
operation (see section Calls) can be applied:

3.2.8.1. User-defined functions


A user-defined function object is created by a function
definition (see section Function definitions). It should
be called with an argument list containing the same
number of items as the function’s formal parameter
list.

3.2.8.1.1. Special read-only attributes

function.__globals__

A reference to the dictionary that holds the function's global variables – the global
namespace of the module in which the function was defined.

function.__closure__

None or a tuple of cells that contain bindings for the function's free variables.

A cell object has the attribute cell_contents. This can be used to get the value of
the cell, as well as set the value.

3.2.8.1.2. Special writable attributes

Most of these attributes check the type of the assigned value:

function.__doc__

The function's documentation string, or None if unavailable. Not inherited by
subclasses.

function.__name__

The function's name. See also: __name__ attributes.

function.__qualname__

The function's qualified name. See also: __qualname__ attributes.

Added in version 3.3.

function.__module__

The name of the module the function was defined in, or None if unavailable.

function.__defaults__

A tuple containing default parameter values for those parameters that have defaults,
or None if no parameters have a default value.

function.__code__

The code object representing the compiled function body.

function.__dict__

The namespace supporting arbitrary function attributes. See also: __dict__ attributes.

function.__annotations__

A dictionary containing annotations of parameters. The keys of the dictionary are the
parameter names, and 'return' for the return annotation, if provided. See also:
Annotations Best Practices.

function.__kwdefaults__

A dictionary containing defaults for keyword-only parameters.

function.__type_params__

A tuple containing the type parameters of a generic function.

Added in version 3.12.

Function objects also support getting and setting


arbitrary attributes, which can be used, for example, to
attach metadata to functions. Regular attribute dot-
notation is used to get and set such attributes.

CPython implementation detail: CPython’s current


implementation only supports function attributes on
user-defined functions. Function attributes on built-in
functions may be supported in the future.

Additional information about a function’s definition


can be retrieved from its code object (accessible via
the __code__ attribute).
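As a minimal sketch of these attributes (the function greet and the category attribute
are purely illustrative):

def greet(name, punctuation="!"):
    """Return a short greeting."""
    return "Hello, " + name + punctuation

print(greet.__name__)              # 'greet'
print(greet.__doc__)               # 'Return a short greeting.'
print(greet.__defaults__)          # ('!',)
print(greet.__code__.co_varnames)  # ('name', 'punctuation')

greet.category = "demo"            # arbitrary metadata, stored in greet.__dict__
print(greet.__dict__)              # {'category': 'demo'}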

3.2.8.2. Instance methods


An instance method object combines a class, a class
instance and any callable object (normally a user-
defined function).
Special read-only attributes:

method.__self__

Refers to the class instance object to which the method is bound.

method.__func__

Refers to the original function object.

method.__doc__

The method's documentation (same as method.__func__.__doc__). A string if the
original function had a docstring, else None.

method.__name__

The name of the method (same as method.__func__.__name__).

method.__module__

The name of the module the method was defined in, or None if unavailable.
Methods also support accessing (but not setting) the
arbitrary function attributes on the underlying function
object.

User-defined method objects may be created when


getting an attribute of a class (perhaps via an instance
of that class), if that attribute is a user-defined function
object or a classmethod object.

When an instance method object is created by


retrieving a user-defined function object from a class
via one of its instances, its __self__ attribute is the
instance, and the method object is said to be bound.
The new method’s __func__ attribute is the original
function object.

When an instance method object is created by


retrieving a classmethod object from a class or
instance, its __self__ attribute is the class itself, and
its __func__ attribute is the function object
underlying the class method.

When an instance method object is called, the


underlying function (__func__) is called, inserting
the class instance (__self__) in front of the
argument list. For instance, when C is a class which
contains a definition for a function f(), and x is an
instance of C, calling x.f(1) is equivalent to
calling C.f(x, 1).
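A minimal sketch of this binding (the class C and method f mirror the example above):

class C:
    def f(self, n):
        return n + 1

x = C()
print(x.f(1) == C.f(x, 1))   # True: the bound method inserts x in front of the arguments
print(x.f.__self__ is x)     # True: the instance is stored on the method object
print(x.f.__func__ is C.f)   # True: the underlying function object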

When an instance method object is derived from


a classmethod object, the “class instance” stored
in __self__ will actually be the class itself, so that
calling either x.f(1) or C.f(1) is equivalent to
calling f(C,1) where f is the underlying function.

It is important to note that user-defined functions


which are attributes of a class instance are not
converted to bound methods; this only happens when
the function is an attribute of the class.

3.2.8.3. Generator functions


A function or method which uses the yield statement
(see section The yield statement) is called a generator
function. Such a function, when called, always returns
an iterator object which can be used to execute the
body of the function: calling the
iterator’s iterator.__next__() method will
cause the function to execute until it provides a value
using the yield statement. When the function
executes a return statement or falls off the end,
a StopIteration exception is raised and the
iterator will have reached the end of the set of values
to be returned.
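A small illustrative generator (the name count_up_to is arbitrary):

def count_up_to(n):
    i = 0
    while i < n:
        yield i        # execution pauses here until the next __next__() call
        i += 1

it = count_up_to(3)
print(next(it))        # 0
print(next(it))        # 1
print(list(it))        # [2] -- the generator is now exhausted; another next() raises StopIteration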

3.2.8.4. Coroutine functions


A function or method which is defined
using async def is called a coroutine function. Such
a function, when called, returns a coroutine object. It
may contain await expressions, as well
as async with and async for statements. See
also the Coroutine Objects section.

3.2.8.5. Asynchronous generator functions
A function or method which is defined
using async def and which uses
the yield statement is called an asynchronous
generator function. Such a function, when called,
returns an asynchronous iterator object which can be
used in an async for statement to execute the body
of the function.

Calling the asynchronous


iterator’s aiterator.__anext__ method will
return an awaitable which when awaited will execute
until it provides a value using the yield expression.
When the function executes an
empty return statement or falls off the end,
a StopAsyncIteration exception is raised and the
asynchronous iterator will have reached the end of the
set of values to be yielded.
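A minimal sketch of an asynchronous generator consumed with async for (the names
aticks and main are illustrative):

import asyncio

async def aticks(n):
    for i in range(n):
        yield i                    # yield inside 'async def' makes this an asynchronous generator

async def main():
    async for value in aticks(3):
        print(value)               # prints 0, 1, 2

asyncio.run(main())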

3.2.8.6. Built-in functions


A built-in function object is a wrapper around a C
function. Examples of built-in functions
are len() and math.sin() (math is a standard
built-in module). The number and type of the
arguments are determined by the C function. Special
read-only attributes:

 __doc__ is the function’s documentation


string, or None if unavailable.
See function.__doc__.
 __name__ is the function’s name.
See function.__name__.
 __self__ is set to None (but see the next
item).
 __module__ is the name of the module the
function was defined in or None if unavailable.
See function.__module__.
3.2.8.7. Built-in methods
This is really a different disguise of a built-in function,
this time containing an object passed to the C function
as an implicit extra argument. An example of a built-in
method is alist.append(), assuming alist is a list
object. In this case, the special read-only
attribute __self__ is set to the object denoted
by alist. (The attribute has the same semantics as it
does with other instance methods.)

3.2.8.8. Classes
Classes are callable. These objects normally act as
factories for new instances of themselves, but
variations are possible for class types that
override __new__(). The arguments of the call are
passed to __new__() and, in the typical case,
to __init__() to initialize the new instance.

3.2.8.9. Class Instances


Instances of arbitrary classes can be made callable by
defining a __call__() method in their class.
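A minimal sketch (the class Adder is illustrative):

class Adder:
    def __init__(self, n):
        self.n = n

    def __call__(self, x):   # instances become callable through __call__()
        return x + self.n

add_five = Adder(5)
print(add_five(10))          # 15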

3.2.9. Modules
Modules are a basic organizational unit of Python
code, and are created by the import system as invoked
either by the import statement, or by calling
functions such
as importlib.import_module() and built-
in __import__(). A module object has a namespace
implemented by a dictionary object (this is the
dictionary referenced by the __globals__ attribute
of functions defined in the module). Attribute
references are translated to lookups in this dictionary,
e.g., m.x is equivalent to m.__dict__["x"]. A
module object does not contain the code object used to
initialize the module (since it isn’t needed once the
initialization is done).

Attribute assignment updates the module’s namespace


dictionary, e.g., m.x = 1 is equivalent
to m.__dict__["x"] = 1.

Predefined (writable) attributes:

__name__

The module’s name.

__doc__

The module’s documentation string, or None if unavailable.

__file__

The pathname of the file from which the module was loaded, if it was loaded from a file.
The __file__ attribute may be missing for certain types of modules, such as C modules
that are statically linked into the interpreter. For extension modules loaded dynamically
from a shared library, it’s the pathname of the shared library file.

__annotations__

A dictionary containing variable annotations collected during module body execution.


For best practices on working with __annotations__, please see Annotations Best
Practices.

Special read-only
attribute: __dict__ is the
module’s namespace as a dictionary
object.

CPython implementation
detail: Because of the way CPython
clears module dictionaries, the
module dictionary will be cleared
when the module falls out of scope
even if the dictionary still has live
references. To avoid this, copy the
dictionary or keep the module
around while using its dictionary
directly.
3.2.10. Custom
classes
Custom class types are typically
created by class definitions (see
section Class definitions). A class
has a namespace implemented by a
dictionary object. Class attribute
references are translated to lookups
in this dictionary, e.g., C.x is
translated
to C.__dict__["x"] (although
there are a number of hooks which
allow for other means of locating
attributes). When the attribute name
is not found there, the attribute
search continues in the base classes.
This search of the base classes uses
the C3 method resolution order
which behaves correctly even in the
presence of ‘diamond’ inheritance
structures where there are multiple
inheritance paths leading back to a
common ancestor. Additional
details on the C3 MRO used by
Python can be found at The Python
2.3 Method Resolution Order.

When a class attribute reference (for


class C, say) would yield a class
method object, it is transformed into
an instance method object
whose __self__ attribute is C.
When it would yield
a staticmethod object, it is
transformed into the object wrapped
by the static method object. See
section Implementing
Descriptors for another way in
which attributes retrieved from a
class may differ from those actually
contained in its __dict__.

Class attribute assignments update


the class’s dictionary, never the
dictionary of a base class.

A class object can be called (see


above) to yield a class instance (see
below).

Special attributes:

__name__

The class name.

__module__

The name of the module in which the class was defined.

__dict__

The dictionary containing the class’s namespace.

__bases__

A tuple containing the base classes, in the order of their occurrence in the base class list.

__doc__

The class’s documentation string, or None if undefined.

__annotat
ions__

A dictionary containing variable annotations collected during class body execution. For
best practices on working with __annotations__, please see Annotations Best
Practices.

__typ
e_par
ams__

A tuple containing the type parameters of a generic class.


3.2.11. Class instances

A class instance is created by calling a class object (see above). A class instance has a
namespace implemented as a dictionary which is the first place in which attribute
references are searched. When an attribute is not found there, and the instance's class
has an attribute by that name, the search continues with the class attributes. If a class
attribute is found that is a user-defined function object, it is transformed into an
instance method object whose __self__ attribute is the instance. Static method and class
method objects are also transformed; see above under "Classes". See section Implementing
Descriptors for another way in which attributes of a class retrieved via its instances
may differ from the objects actually stored in the class's __dict__. If no class attribute
is found, and the object's class has a __getattr__() method, that is called to satisfy
the lookup.

Attribute assignments and deletions update the instance's dictionary, never a class's
dictionary. If the class has a __setattr__() or __delattr__() method, this is called
instead of updating the instance dictionary directly.

Class instances can pretend to be numbers, sequences, or mappings if they have methods
with certain special names. See section Special method names.

Special attributes: __dict__ is the attribute dictionary; __class__ is the instance's
class.
3.2.12. I/O objects (also known as file objects)

A file object represents an open file. Various shortcuts are available to create file
objects: the open() built-in function, and also os.popen(), os.fdopen(), and
the makefile() method of socket objects (and perhaps by other functions or methods
provided by extension modules).

The objects sys.stdin, sys.stdout and sys.stderr are initialized to file objects
corresponding to the interpreter's standard input, output and error streams; they are
all open in text mode and therefore follow the interface defined by
the io.TextIOBase abstract class.
3.2.13. Internal types

A few types used internally by the interpreter are exposed to the user. Their
definitions may change with future versions of the interpreter, but they are mentioned
here for completeness.

3.2.13.1. Code objects

Code objects represent byte-compiled executable Python code, or bytecode. The difference
between a code object and a function object is that the function object contains an
explicit reference to the function's globals (the module in which it was defined), while
a code object contains no context; also the default argument values are stored in the
function object, not in the code object (because they represent values calculated at
run-time). Unlike function objects, code objects are immutable and contain no references
(directly or indirectly) to mutable objects.

3.2.13.1.1. Special read-only attributes

codeobject.co_name

The function name.

codeobject.co_qualname

The fully qualified function name.

Added in version 3.11.

codeobject.co_argcount

The total number of positional parameters (including positional-only parameters and
parameters with default values) that the function has.

codeobject.co_posonlyargcount

The number of positional-only parameters (including arguments with default values) that
the function has.

codeobject.co_kwonlyargcount

The number of keyword-only parameters (including arguments with default values) that
the function has.

codeobject.co_nlocals

The number of local variables used by the function (including parameters).

codeobject.co_varnames

A tuple containing the names of the local variables in the function (starting with the
parameter names).

codeobject.co_cellvars

A tuple containing the names of local variables that are referenced by nested functions
inside the function.

codeobject.co_freevars

A tuple containing the names of free variables in the function.

codeobject.co_code

A string representing the sequence of bytecode instructions in the function.

codeobject.co_consts

A tuple containing the literals used by the bytecode in the function.

codeobject.co_names

A tuple containing the names used by the bytecode in the function.

codeobject.co_filename

The name of the file from which the code was compiled.

codeobject.co_firstlineno

The line number of the first line of the function.

codeobject.co_lnotab

A string encoding the mapping from bytecode offsets to line numbers.

Deprecated since version 3.12: This attribute is deprecated and may be removed in a
future version; use codeobject.co_lines() instead.

codeobject.co_stacksize

The required stack size of the code object.

codeobject.co_flags

An integer encoding a number of flags for the interpreter.

The following flag bits are defined for co_flags: bit 0x04 is set if the function uses
the *arguments syntax to accept an arbitrary number of positional arguments;
bit 0x08 is set if the function uses the **keywords syntax to accept arbitrary keyword
arguments; bit 0x20 is set if the function is a generator. See Code Objects Bit Flags
for details on the semantics of each flag that might be present.

Future feature declarations (from __future__ import division) also use bits
in co_flags to indicate whether a code object was compiled with a particular feature
enabled: bit 0x2000 is set if the function was compiled with future division enabled;
bits 0x10 and 0x1000 were used in earlier versions of Python.

Other bits in co_flags are reserved for internal use.

If a code object represents a function, the first item in co_consts is the documentation
string of the function, or None if undefined.

3.2.13.1.2. Methods on code objects

codeobject.co_positions()
Returns an iterable over the source code positions of each bytecode instruction in the
code object.

The iterator returns tuples containing


the (start_line, end_line, start_column, end_column). The i-th tuple
corresponds to the position of the source code that compiled to the i-th code unit. Column
information is 0-indexed utf-8 byte offsets on the given source line.

This positional information can be missing. A non-exhaustive list of cases where this
may happen:

 Running the interpreter with -X no_debug_ranges.


 Loading a pyc file compiled while using -X no_debug_ranges.
 Position tuples corresponding to artificial instructions.
 Line and column numbers that can’t be represented due to implementation
specific limitations.

When this occurs, some or all of the tuple elements can be None.

Added in version 3.11.

Note

This feature requires storing column positions in code objects which may result in a small
increase of disk usage of compiled Python files or interpreter memory usage. To avoid
storing the extra information and/or deactivate printing the extra traceback information,
the -X no_debug_ranges command line flag or
the PYTHONNODEBUGRANGES environment variable can be used.
codeobject.co_lines()
Returns an iterator that yields information about successive ranges of bytecodes. Each
item yielded is a (start, end, lineno) tuple:

 start (an int) represents the offset (inclusive) of the start of the bytecode range
 end (an int) represents the offset (exclusive) of the end of the bytecode range
 lineno is an int representing the line number of the bytecode range, or None if
the bytecodes in the given range have no line number

The items yielded will have the following properties:

 The first range yielded will have a start of 0.


 The (start, end) ranges will be non-decreasing and consecutive. That is, for
any pair of tuples, the start of the second will be equal to the end of the first.
 No range will be backwards: end >= start for all triples.
 The last tuple yielded will have end equal to the size of the bytecode.

Zero-width ranges, where start == end, are allowed. Zero-width ranges are used for
lines that are present in the source code, but have been eliminated by
the bytecode compiler.

Added in version 3.10.

See also
PEP 626 - Precise line numbers for debugging and other tools.

The PEP that introduced the co_lines() method.


codeobject.replace(**kwargs)

Return a copy of the code object with new values for the specified fields.

Added in version 3.8.
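A small sketch of replace() (the function f is illustrative; co_name is one of the
replaceable fields):

def f():
    return 1

new_code = f.__code__.replace(co_name="g")    # copy of the code object under a new name
print(f.__code__.co_name, new_code.co_name)   # f g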

3.2.13.2. Frame objects

Frame objects represent execution frames. They may occur in traceback objects, and are
also passed to registered trace functions.

3.2.13.2.1. Special read-only attributes

frame.f_back (the previous stack frame, towards the caller, or None), frame.f_code (the
code object being executed in this frame), frame.f_locals (the mapping used by the frame
to look up local variables), frame.f_globals (the dictionary used for global variables),
frame.f_builtins (the dictionary used for built-in names) and frame.f_lasti (the index
of the last attempted instruction in the bytecode).

3.2.13.2.2. Special writable attributes

frame.f_trace (the function called for various events during code execution, or None),
frame.f_trace_lines (controls whether per-line trace events are emitted),
frame.f_trace_opcodes (controls whether per-opcode trace events are requested) and
frame.f_lineno (the current line number of the frame; a trace function may write to it
to jump within the frame).

3.2.13.2.3. Frame object methods

Frame
objects
support
one
method:

frame.c
lear()

This method clears all references to local variables held by the frame. Also, if the frame
belonged to a generator, the generator is finalized. This helps break reference cycles
involving frame objects (for example when catching an exception and storing
its traceback for later use).

RuntimeError is raised if the frame is currently executing.

Added in version 3.4.

3.2.13.3. Traceback objects

Traceback objects represent the stack trace of an exception. A traceback object is
implicitly created when an exception occurs, and may also be explicitly created by
calling types.TracebackType.

Changed in version 3.7: Traceback objects can now be explicitly instantiated from
Python code.

For implicitly created tracebacks, when the search for an exception handler unwinds the
execution stack, at each unwound level a traceback object is inserted in front of the
current traceback. When an exception handler is entered, the stack trace is made
available to the program. (See section The try statement.) It is accessible as the third
item of the tuple returned by sys.exc_info(), and as the __traceback__ attribute of the
caught exception.

When the program contains no suitable handler, the stack trace is written (nicely
formatted) to the standard error stream; if the interpreter is interactive, it is also
made available to the user as sys.last_traceback.

For explicitly created tracebacks, it is up to the creator of the traceback to determine
how the tb_next attributes should be linked to form a full stack trace.

Special read-only attributes: traceback.tb_frame points to the execution frame of the
current level, traceback.tb_lineno gives the line number where the exception occurred,
and traceback.tb_lasti indicates the precise instruction.

The line number and last instruction in the traceback may differ from the line number of
its frame object if the exception occurred in a try statement with no matching except
clause or with a finally clause.

traceback.tb_next

The special writable attribute tb_next is the next level in the stack trace (towards the
frame where the exception occurred), or None if there is no next level.

Changed in version 3.7: This attribute is now writable

3.2.13.4. Slice objects

Slice objects are used to represent slices for __getitem__() methods. They are also
created by the built-in slice() function.

Special read-only attributes: start is the lower bound; stop is the upper
bound; step is the step value; each is None if omitted. These attributes can have
any type.

Slice objects support one method:

slice.indices(self, length)
This method takes a single integer argument length and computes information about the
slice that the slice object would describe if applied to a sequence of length items. It
returns a tuple of three integers; respectively these are the start and stop indices and
the step or stride length of the slice. Missing or out-of-bounds indices are handled in a
manner consistent with regular slices.
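For example (the slice bounds are arbitrary):

s = slice(1, 10, 2)
print(s.indices(5))                    # (1, 5, 2) -- clipped for a sequence of length 5
start, stop, step = s.indices(5)
print(list(range(start, stop, step)))  # [1, 3]
print("abcde"[s])                      # 'bd' -- the same items a regular slice selects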
3.2.13.5. Static method objects

Static method objects provide a way of defeating the transformation of function objects
to method objects described above. A static method object is a wrapper around any other
object, usually a user-defined method object. When a static method object is retrieved
from a class or a class instance, the object actually returned is the wrapped object,
which is not subject to any further transformation. Static method objects are also
callable. Static method objects are created by the built-in staticmethod() constructor.

3.2.13.6. Class method objects

A class method object, like a static method object, is a wrapper around another object
that alters the way in which that object is retrieved from classes and class instances.
The behaviour of class method objects upon such retrieval is described above, under
"instance methods". Class method objects are created by the built-in classmethod()
constructor.

3.3. Special method names

A class can implement certain operations that are invoked by special syntax (such as
arithmetic operations or subscripting and slicing) by defining methods with special
names. This is Python's approach to operator overloading, allowing classes to define
their own behavior with respect to language operators. For instance, if a class defines
a method named __getitem__(), and x is an instance of this class, then x[i] is roughly
equivalent to type(x).__getitem__(x, i). Except where mentioned, attempts to execute an
operation raise an exception when no appropriate method is defined
(typically AttributeError or TypeError).

Setting a special method to None indicates that the corresponding operation is not
available. For example, if a class sets __iter__() to None, the class is not iterable,
so calling iter() on its instances will raise a TypeError (without falling back
to __getitem__()). [2]

When implementing a class that emulates any built-in type, it is important that the
emulation only be implemented to the degree that it makes sense for the object being
modelled. For example, some sequences may work well with retrieval of individual
elements, but extracting a slice may not make sense. (One example of this is
the NodeList interface in the W3C's Document Object Model.)

3.3.1. Basic customization

object.__new__(cls[, ...])

Called to create a new instance of class cls. __new__() is a static method (special-cased
so you need not declare it as such) that takes the class of which an instance was requested
as its first argument. The remaining arguments are those passed to the object constructor
expression (the call to the class). The return value of __new__() should be the new
object instance (usually an instance of cls).
Typical implementations create a new instance of the class by invoking the
superclass’s __new__() method using super().__new__(cls[, ...]) with
appropriate arguments and then modifying the newly created instance as necessary before
returning it.

If __new__() is invoked during object construction and it returns an instance of cls,


then the new instance’s __init__() method will be invoked
like __init__(self[, ...]), where self is the new instance and the remaining
arguments are the same as were passed to the object constructor.

If __new__() does not return an instance of cls, then the new


instance’s __init__() method will not be invoked.

__new__() is intended mainly to allow subclasses of immutable types (like int, str, or
tuple) to customize instance creation. It is also commonly overridden in custom
metaclasses in order to customize class creation.
object.__init__(self[, ...])

Called after the instance has been created (by __new__()), but before it is returned to
the caller. The arguments are those passed to the class constructor expression. If a base
class has an __init__() method, the derived class’s __init__() method, if any,
must explicitly call it to ensure proper initialization of the base class part of the instance;
for example: super().__init__([args...]).

Because __new__() and __init__() work together in constructing objects


(__new__() to create it, and __init__() to customize it), no non-None value may be
returned by __init__(); doing so will cause a TypeError to be raised at runtime.
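A minimal sketch of the two methods cooperating for an immutable type (the Point class
is purely illustrative):

class Point(tuple):
    def __new__(cls, x, y):
        # Immutable state must be fixed in __new__, before the instance exists.
        return super().__new__(cls, (x, y))

    def __init__(self, x, y):
        # __init__ runs afterwards and must return None; it may attach extra state.
        self.label = f"({x}, {y})"

p = Point(1, 2)
print(p, p.label)    # (1, 2) (1, 2)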
object.__del__(self)

Called when the instance is about to be destroyed. This is also called a finalizer or
(improperly) a destructor. If a base class has a __del__() method, the derived
class’s __del__() method, if any, must explicitly call it to ensure proper deletion of the
base class part of the instance.

It is possible (though not recommended!) for the __del__() method to postpone


destruction of the instance by creating a new reference to it. This is called
object resurrection. It is implementation-dependent whether __del__() is called a
second time when a resurrected object is about to be destroyed; the
current CPython implementation only calls it once.

It is not guaranteed that __del__() methods are called for objects that still exist when
the interpreter exits. weakref.finalize provides a straightforward way to register a
cleanup function to be called when an object is garbage collected.

Note
del x doesn’t directly call x.__del__() — the former decrements the reference
count for x by one, and the latter is only called when x’s reference count reaches zero.
CPython implementation detail: It is possible for a reference cycle to prevent the
reference count of an object from going to zero. In this case, the cycle will be later
detected and deleted by the cyclic garbage collector. A common cause of reference cycles
is when an exception has been caught in a local variable. The frame’s locals then
reference the exception, which references its own traceback, which references the locals
of all frames caught in the traceback.
See also

Documentation for the gc module.


Warning

Due to the precarious circumstances under which __del__() methods are invoked,
exceptions that occur during their execution are ignored, and a warning is printed
to sys.stderr instead. In particular:
 __del__() can be invoked when arbitrary code is being executed, including
from any arbitrary thread. If __del__() needs to take a lock or invoke any other
blocking resource, it may deadlock as the resource may already be taken by the
code that gets interrupted to execute __del__().
 __del__() can be executed during interpreter shutdown. As a consequence, the
global variables it needs to access (including other modules) may already have
been deleted or set to None. Python guarantees that globals whose name begins
with a single underscore are deleted from their module before other globals are
deleted; if no other references to such globals exist, this may help in assuring that
imported modules are still available at the time when the __del__() method is
called.
object.__repr__(self)

Called by the repr() built-in function to compute the “official” string representation of
an object. If at all possible, this should look like a valid Python expression that could be
used to recreate an object with the same value (given an appropriate environment). If this
is not possible, a string of the form <...some useful description...> should
be returned. The return value must be a string object. If a class defines __repr__() but
not __str__(), then __repr__() is also used when an “informal” string
representation of instances of that class is required.

This is typically used for debugging, so it is important that the representation is


information-rich and unambiguous.

object.__str__(self)
Called by str(object) and the built-in functions format() and print() to
compute the “informal” or nicely printable string representation of an object. The return
value must be a string object.

This method differs from object.__repr__() in that there is no expectation


that __str__() return a valid Python expression: a more convenient or concise
representation can be used.

The default implementation defined by the built-in


type object calls object.__repr__().
object.__bytes__(self)

Called by bytes to compute a byte-string representation of an object. This should return


a bytes object.

object.__format__(self, format_spec)

Called by the format() built-in function, and by extension, evaluation of formatted


string literals and the str.format() method, to produce a “formatted” string
representation of an object. The format_spec argument is a string that contains a
description of the formatting options desired. The interpretation of
the format_spec argument is up to the type implementing __format__(), however
most classes will either delegate formatting to one of the built-in types, or use a similar
formatting option syntax.

See Format Specification Mini-Language for a description of the standard formatting syntax.

The return value must be a string object.

Changed in version 3.4: The __format__ method of object itself raises a TypeError if passed any non-empty string.

Changed in version 3.7: object.__format__(x, '') is now equivalent to str(x) rather than format(str(x), '').
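A minimal sketch of a class that interprets its own format specifier and delegates anything else to the underlying float (the Temperature class and the 'f' specifier are invented for illustration):

class Temperature:
    def __init__(self, celsius):
        self.celsius = celsius

    def __format__(self, format_spec):
        if format_spec == 'f':
            # Hypothetical spec meaning "render in Fahrenheit".
            return f'{self.celsius * 9 / 5 + 32:.1f}F'
        # Otherwise delegate to float's formatting rules.
        return format(self.celsius, format_spec)

print(f'{Temperature(21.5):f}')      # 70.7F
print(f'{Temperature(21.5):.2f}')    # 21.50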

object.__lt__(self, other)

object.__le__(self, other)

object.__eq__(self, other)

object.__ne__(self, other)

object.__gt__(self, other)

object.__ge__(self, other)

These are the so-called “rich comparison” methods. The correspondence between
operator symbols and method names is as
follows: x<y calls x.__lt__(y), x<=y calls x.__le__(y), x==y calls x.__eq__(
y), x!=y calls x.__ne__(y), x>y calls x.__gt__(y),
and x>=y calls x.__ge__(y).

A rich comparison method may return the singleton NotImplemented if it does not
implement the operation for a given pair of arguments. By
convention, False and True are returned for a successful comparison. However, these
methods can return any value, so if the comparison operator is used in a Boolean context
(e.g., in the condition of an if statement), Python will call bool() on the value to
determine if the result is true or false.

By default, object implements __eq__() by using is, returning NotImplemented in the case of a false comparison: True if x is y else NotImplemented. For __ne__(), by default it delegates to __eq__() and inverts the result unless it is NotImplemented.
There are no other implied relationships among the comparison operators or default
implementations; for example, the truth of (x<y or x==y) does not imply x<=y. To
automatically generate ordering operations from a single root operation,
see functools.total_ordering().

See the paragraph on __hash__() for some important notes on creating hashable objects which support custom comparison operations and are usable as dictionary keys.

There are no swapped-argument versions of these methods (to be used when the left
argument does not support the operation but the right argument does);
rather, __lt__() and __gt__() are each other’s
reflection, __le__() and __ge__() are each other’s reflection,
and __eq__() and __ne__() are their own reflection. If the operands are of different
types, and the right operand’s type is a direct or indirect subclass of the left operand’s
type, the reflected method of the right operand has priority, otherwise the left operand’s
method has priority. Virtual subclassing is not considered.

When no appropriate method returns any value other than NotImplemented, the == and != operators will fall back to is and is not, respectively.
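A minimal sketch: define __eq__() and __lt__() (returning NotImplemented for unsupported operands) and let functools.total_ordering() supply the remaining comparisons (the Version class is invented for illustration):

import functools

@functools.total_ordering
class Version:
    def __init__(self, major, minor):
        self.key = (major, minor)

    def __eq__(self, other):
        if not isinstance(other, Version):
            return NotImplemented      # let the other operand try
        return self.key == other.key

    def __lt__(self, other):
        if not isinstance(other, Version):
            return NotImplemented
        return self.key < other.key

assert Version(3, 12) >= Version(3, 4)   # __ge__ supplied by total_ordering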
object.__hash__(self)

Called by built-in function hash() and for operations on members of hashed collections
including set, frozenset, and dict. The __hash__() method should return an
integer. The only required property is that objects which compare equal have the same
hash value; it is advised to mix together the hash values of the components of the object
that also play a part in comparison of objects by packing them into a tuple and hashing
the tuple. Example:

def __hash__(self):
    return hash((self.name, self.nick, self.color))
Note

hash() truncates the value returned from an object’s custom __hash__() method to
the size of a Py_ssize_t. This is typically 8 bytes on 64-bit builds and 4 bytes on 32-
bit builds. If an object’s __hash__() must interoperate on builds of different bit sizes,
be sure to check the width on all supported builds. An easy way to do this is
with python -c "import sys; print(sys.hash_info.width)".

If a class does not define an __eq__() method it should not define a __hash__() operation either; if it defines __eq__() but not __hash__(), its
instances will not be usable as items in hashable collections. If a class defines mutable
objects and implements an __eq__() method, it should not implement __hash__(),
since the implementation of hashable collections requires that a key’s hash value is
immutable (if the object’s hash value changes, it will be in the wrong hash bucket).

User-defined classes have __eq__() and __hash__() methods by default; with them,
all objects compare unequal (except with themselves) and x.__hash__() returns an
appropriate value such that x == y implies both
that x is y and hash(x) == hash(y).

A class that overrides __eq__() and does not define __hash__() will have
its __hash__() implicitly set to None. When the __hash__() method of a class
is None, instances of the class will raise an appropriate TypeError when a program
attempts to retrieve their hash value, and will also be correctly identified as unhashable
when checking isinstance(obj, collections.abc.Hashable).

If a class that overrides __eq__() needs to retain the implementation of __hash__() from a parent class, the interpreter must be told this explicitly by setting __hash__ = <ParentClass>.__hash__.

If a class that does not override __eq__() wishes to suppress hash support, it should
include __hash__ = None in the class definition. A class which defines its
own __hash__() that explicitly raises a TypeError would be incorrectly identified as
hashable by an isinstance(obj, collections.abc.Hashable) call.
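For instance, a class whose equality is based on immutable state can define __hash__() alongside __eq__(), while a mutable variant should disable hashing; a sketch with invented class names:

class Color:
    def __init__(self, name, code):
        self.name, self.code = name, code

    def __eq__(self, other):
        if not isinstance(other, Color):
            return NotImplemented
        return (self.name, self.code) == (other.name, other.code)

    def __hash__(self):
        # Hash the same components that take part in equality.
        return hash((self.name, self.code))

class MutableColor(Color):
    __hash__ = None    # instances compare equal but are unhashable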

Note
By default, the __hash__() values of str and bytes objects are “salted” with an
unpredictable random value. Although they remain constant within an individual Python
process, they are not predictable between repeated invocations of Python.

This is intended to provide protection against a denial-of-service caused by carefully chosen inputs that exploit the worst-case performance of a dict insertion, which is O(n²). See http://ocert.org/advisories/ocert-2011-003.html for details.

Changing hash values affects the iteration order of sets. Python has never made
guarantees about this ordering (and it typically varies between 32-bit and 64-bit builds).

See also PYTHONHASHSEED.

Changed in version 3.3: Hash randomization is enabled by default.

object.__bool__(self)

Called to implement truth value testing and the built-in operation bool(); should
return False or True. When this method is not defined, __len__() is called, if it is
defined, and the object is considered true if its result is nonzero. If a class defines
neither __len__() nor __bool__(), all its instances are considered true.
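A minimal sketch showing both paths, an object with __bool__() and one whose truth value falls back to __len__() (the class names are invented):

class Flag:
    def __init__(self, value):
        self.value = value

    def __bool__(self):
        return bool(self.value)

class Bag:
    def __init__(self, items=()):
        self.items = list(items)

    def __len__(self):
        return len(self.items)

assert not Flag(0)
assert not Bag()        # falls back to __len__() == 0
assert Bag([1, 2])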
3.3.2. Customizing attribute access

The following methods can be defined to customize the meaning of attribute access (use of, assignment to, or deletion of x.name) for class instances.

object.__getattr__(self, name)

Called when the default attribute access fails with an AttributeError (either __getattribute__() raises an AttributeError because name is not an instance attribute or an attribute in the class tree for self; or __get__() of a name property raises AttributeError). This method should either return the (computed) attribute value or raise an AttributeError exception.

Note that if the attribute is found through the normal mechanism, __getattr__() is
not called. (This is an intentional asymmetry
between __getattr__() and __setattr__().) This is done both for efficiency
reasons and because otherwise __getattr__() would have no way to access other
attributes of the instance. Note that at least for instance variables, you can fake total
control by not inserting any values in the instance attribute dictionary (but instead
inserting them in another object). See the __getattribute__() method below for a
way to actually get total control over attribute access.
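A minimal sketch of a lazily computed attribute: __getattr__() is only consulted when normal lookup fails, so caching the result in the instance dictionary means it runs at most once (the Config class is invented for illustration):

class Config:
    def __getattr__(self, name):
        if name == 'data':
            # Expensive computation performed on first access only.
            value = {'debug': False}
            self.data = value    # cache; later lookups bypass __getattr__
            return value
        raise AttributeError(name)

c = Config()
print(c.data)    # computed here
print(c.data)    # served from the instance dictionary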
object.__getattribute__(self, name)
Called unconditionally to implement attribute accesses for instances of the class. If the
class also defines __getattr__(), the latter will not be called
unless __getattribute__() either calls it explicitly or raises an AttributeError.
This method should return the (computed) attribute value or raise
an AttributeError exception. In order to avoid infinite recursion in this method, its
implementation should always call the base class method with the same name to access
any attributes it needs, for example, object.__getattribute__(self, name).

Note

This method may still be bypassed when looking up special methods as the result of
implicit invocation via language syntax or built-in functions. See Special method lookup.

For certain sensitive attribute accesses, raises an auditing event object.__getattr__ with arguments obj and name.

object.__setattr__(self, name, value)

Called when an attribute assignment is attempted. This is called instead of the normal
mechanism (i.e. store the value in the instance dictionary). name is the attribute
name, value is the value to be assigned to it.

If __setattr__() wants to assign to an instance attribute, it should call the base class
method with the same name, for
example, object.__setattr__(self, name, value).

For certain sensitive attribute assignments, raises an auditing event object.__setattr__ with arguments obj, name, value.
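As an illustration of the delegation pattern described above, a sketch of a validating __setattr__() that hands storage back to object.__setattr__() to avoid recursion (the Positive class is invented):

class Positive:
    def __setattr__(self, name, value):
        if name == 'amount' and value < 0:
            raise ValueError('amount must be non-negative')
        # Store via the base class; writing to self.__dict__ directly
        # would also work, but this stays descriptor-aware.
        object.__setattr__(self, name, value)

p = Positive()
p.amount = 10       # ok
# p.amount = -1     # would raise ValueError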

object.__delattr__(self, name)

Like __setattr__() but for attribute deletion instead of assignment. This should only
be implemented if del obj.name is meaningful for the object.

For certain sensitive attribute deletions, raises an auditing event object.__delattr__ with arguments obj and name.

object.__dir__(self)

Called when dir() is called on the object. An iterable must be returned. dir() converts the returned iterable to a list and sorts it.
3.3.2.1. Customizing module attribute access

Special names __getattr__ and __dir__ can be also used to customize access to module attributes. The __getattr__ function at the module level should accept one argument which is the name of an attribute and return the computed value or raise an AttributeError. If an attribute is not found on a module object through the normal lookup, i.e. object.__getattribute__(), then __getattr__ is searched in the module __dict__ before raising an AttributeError. If found, it is called with the attribute name and the result is returned.

The __dir__ function should accept no arguments, and return an iterable of strings that represents the names accessible on module. If present, this function overrides the standard dir() search on a module.

For a more fine grained customization of the module behavior (setting attributes, properties, etc.), one can set the __class__ attribute of a module object to a subclass of types.ModuleType. For example:

import sys
from types import ModuleType

class VerboseModule(ModuleType):
    def __repr__(self):
        return f'Verbose {self.__name__}'

    def __setattr__(self, attr, value):
        print(f'Setting {attr}...')
        super().__setattr__(attr, value)

sys.modules[__name__].__class__ = VerboseModule

Note

Defining module __getattr__ and setting module __class__ only affect lookups made using the attribute access syntax; directly accessing the module globals (whether by code within the module, or via a reference to the module's globals dictionary) is unaffected.

Changed in version 3.5: __class__ module attribute is now writable.

Added in version 3.7: __getattr__ and __dir__ module attributes.

See also
PEP 562 - Module __getattr__ and __dir__

Describes the __getattr__ and __dir__ functions on modules.


3.3.2.2. Implementing Descriptors

The following methods only apply when an instance of the class containing the method (a so-called descriptor class) appears in an owner class (the descriptor must be in either the owner's class dictionary or in the class dictionary for one of its parents). In the examples below, "the attribute" refers to the attribute whose name is the key of the property in the owner class' __dict__.

object.__get__(self, instance, owner=None)

Called to get the attribute of the owner class (class attribute access) or of an instance of
that class (instance attribute access). The optional owner argument is the owner class,
while instance is the instance that the attribute was accessed through, or None when the
attribute is accessed through the owner.

This method should return the computed attribute value or raise an AttributeError exception.

PEP 252 specifies that __get__() is callable with one or two arguments. Python’s own
built-in descriptors support this specification; however, it is likely that some third-party
tools have descriptors that require both arguments. Python’s
own __getattribute__() implementation always passes in both arguments whether
they are required or not.
object.__set__(self, instance, value)

Called to set the attribute on an instance instance of the owner class to a new
value, value.

Note, adding __set__() or __delete__() changes the kind of descriptor to a "data descriptor". See Invoking Descriptors for more details.
object.__delete__(self, instance)

Called to delete the attribute on an instance instance of the owner class.

Instances of descriptors may also have the __objclass__ attribute present:

object.__objclass__

The attribute __objclass__ is interpreted by the inspect module as specifying the
class where this object was defined (setting this appropriately can assist in runtime
introspection of dynamic class attributes). For callables, it may indicate that an instance
of the given type (or a subclass) is expected or required as the first positional argument
(for example, CPython sets this attribute for unbound methods that are implemented in
C).
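As an illustration, a small data descriptor combining __set_name__(), __get__() and __set__() (the Typed and Order classes are invented, not part of the standard library):

class Typed:
    def __init__(self, expected_type):
        self.expected_type = expected_type

    def __set_name__(self, owner, name):
        self.name = name                       # attribute name in the owner

    def __get__(self, instance, owner=None):
        if instance is None:
            return self                        # class attribute access
        return instance.__dict__[self.name]

    def __set__(self, instance, value):
        if not isinstance(value, self.expected_type):
            raise TypeError(f'{self.name} must be {self.expected_type.__name__}')
        instance.__dict__[self.name] = value

class Order:
    quantity = Typed(int)

o = Order()
o.quantity = 3
# o.quantity = 'three'   # would raise TypeError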
3.3.2.3. Invoking Descriptors

In general, a descriptor is an object attribute with "binding behavior", one whose attribute access has been overridden by methods in the descriptor protocol: __get__(), __set__(), and __delete__(). If any of those methods are defined for an object, it is said to be a descriptor.

The default behavior for attribute access is to get, set, or delete the attribute from an object's dictionary. For instance, a.x has a lookup chain starting with a.__dict__['x'], then type(a).__dict__['x'], and continuing through the base classes of type(a) excluding metaclasses.

However, if the looked-up value is an object defining one of the descriptor methods, then Python may override the default behavior and invoke the descriptor method instead. Where this occurs in the precedence chain depends on which descriptor methods were defined and how they were called.

The starting point for descriptor invocation is a binding, a.x. How the arguments are assembled depends on a:

Direct Call

The simplest and least common call is when user code directly invokes a descriptor
method: x.__get__(a).

Instance Binding

If binding to an object instance, a.x is transformed into the


call: type(a).__dict__['x'].__get__(a, type(a)).

Class Binding

If binding to a class, A.x is transformed into the


call: A.__dict__['x'].__get__(None, A).

Super Binding

A dotted lookup such as super(A, a).x searches a.__class__.__mro__ for a


base class B following A and then returns B.__dict__['x'].__get__(a, A). If
not a descriptor, x is returned unchanged.

For instance bindings, the precedence of descriptor invocation depends on which descriptor methods are defined. A descriptor can define any combination of __get__(), __set__() and __delete__(). If it does not define __get__(), then accessing the attribute will return the descriptor object itself unless there is a value in the object's instance dictionary. If the descriptor defines __set__() and/or __delete__(), it is a data descriptor; if it defines neither, it is a non-data descriptor. Data descriptors with __get__() and __set__() (and/or __delete__()) defined always override a redefinition in an instance dictionary. In contrast, non-data descriptors can be overridden by instances.

Python methods (including those decorated with @staticmethod and @classmethod) are implemented as non-data descriptors. Accordingly, instances can redefine and override methods. This allows individual instances to acquire behaviors that differ from other instances of the same class.

The property() function is implemented as a data descriptor. Accordingly, instances cannot override the behavior of a property.

3.3.2.4. __slots__

__slots__ allow us to explicitly declare data members (like properties) and deny the creation of __dict__ and __weakref__ (unless explicitly declared in __slots__ or available in a parent).

The space saved over using __dict__ can be significant. Attribute lookup speed can be significantly improved as well.

object.__slots__

This class variable can be assigned a string, iterable, or sequence of strings with variable
names used by instances. __slots__ reserves space for the declared variables and prevents
the automatic creation of __dict__ and __weakref__ for each instance.

Notes on using __slots__:

- When inheriting from a class without __slots__, the __dict__ and __weakref__ attributes of the instances will always be accessible.
- Without a __dict__ variable, instances cannot have new variables not listed in the __slots__ definition. Attempts to assign to an unlisted variable name raise AttributeError. If dynamic assignment of new variables is desired, then add '__dict__' to the sequence of strings in the __slots__ declaration.
- Without a __weakref__ variable for each instance, classes defining __slots__ do not support weak references to their instances. If weak reference support is needed, then add '__weakref__' to the sequence of strings in the __slots__ declaration.
- __slots__ are implemented at the class level by creating descriptors for each variable name. As a result, class attributes cannot be used to set default values for instance variables defined by __slots__; otherwise, the class attribute would overwrite the descriptor assignment.
- The action of a __slots__ declaration is not limited to the class where it is defined. __slots__ declared in parents are available in child classes. However, child subclasses will get a __dict__ and __weakref__ unless they also define __slots__ (which should only contain names of any additional slots).
- If a class defines a slot also defined in a base class, the instance variable defined by the base class slot is inaccessible (except by retrieving its descriptor directly from the base class). This renders the meaning of the program undefined; a check may be added in the future to prevent this.
- TypeError will be raised if nonempty __slots__ are defined for a class derived from a "variable-length" built-in type such as int, bytes, and tuple.
- Any non-string iterable can be assigned to __slots__.
- If a dictionary is used to assign __slots__, the dictionary keys will be used as the slot names. The values of the dictionary can be used to provide per-attribute docstrings.
- __class__ assignment works only if both classes have the same __slots__.
- Multiple inheritance with multiple slotted parent classes can be used, but only one parent is allowed to have attributes created by slots (the other bases must have empty slot layouts); violations raise TypeError.
- If an iterator is used for __slots__ then a descriptor is created for each of the iterator's values. However, the __slots__ attribute will be an empty iterator.
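A minimal sketch showing the effect of __slots__ on instance attribute creation (the class name is invented):

class PointWithSlots:
    __slots__ = ('x', 'y')     # no per-instance __dict__ is created

    def __init__(self, x, y):
        self.x, self.y = x, y

p = PointWithSlots(1, 2)
# p.z = 3   # would raise AttributeError, 'z' is not a declared slot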
3.3.3. Customizing class creation

Whenever a class inherits from another class, __init_subclass__() is called on the parent class. This way, it is possible to write classes which change the behavior of subclasses. This is closely related to class decorators, but where class decorators only affect the specific class they're applied to, __init_subclass__ solely applies to future subclasses of the class defining the method.

classmethod object.__init_subclass__(cls)
This method is called whenever the containing class is subclassed. cls is then the new
subclass. If defined as a normal instance method, this method is implicitly converted to a
class method.

Keyword arguments which are given to a new class are passed to the parent
class’s __init_subclass__. For compatibility with other classes
using __init_subclass__, one should take out the needed keyword arguments and
pass the others over to the base class, as in:

class Philosopher:
    def __init_subclass__(cls, /, default_name, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.default_name = default_name

class AustralianPhilosopher(Philosopher, default_name="Bruce"):
    pass

The default implementation object.__init_subclass__ does nothing, but raises an error if it is called with any arguments.

Note

The metaclass hint metaclass is consumed by the rest of the type machinery, and is
never passed to __init_subclass__ implementations. The actual metaclass (rather
than the explicit hint) can be accessed as type(cls).

Added in version 3.6.

When a class is created, type.__new__() scans the class variables and makes callbacks to those with a __set_name__() hook.

object.__set_name__(self, owner, name)

Automatically called at the time the owning class owner is created. The object has been
assigned to name in that class:

class A:
    x = C()  # Automatically calls: x.__set_name__(A, 'x')

If the class variable is assigned after the class is created, __set_name__() will not be
called automatically. If needed, __set_name__() can be called directly:
class A:
    pass

c = C()
A.x = c                  # The hook is not called
c.__set_name__(A, 'x')   # Manually invoke the hook

See Creating the class object for more details.

Added in version 3.6.

3.3.3.1. Metaclasses

By default, classes are constructed using type(). The class body is executed in a new namespace and the class name is bound locally to the result of type(name, bases, namespace).

The class creation process can be customized by passing the metaclass keyword argument in the class definition line, or by inheriting from an existing class that included such an argument. In the following example, both MyClass and MySubclass are instances of Meta:

class Meta(type):
    pass

class MyClass(metaclass=Meta):
    pass

class MySubclass(MyClass):
    pass

Any other keyword arguments that are specified in the class definition are passed through to all metaclass operations described below.

When a class definition is executed, the following steps occur:

- MRO entries are resolved;
- the appropriate metaclass is determined;
- the class namespace is prepared;
- the class body is executed;
- the class object is created.
3.3.3.2. Resolving MRO entries

object.__mro_entries__(self, bases)

If a base that appears in a class definition is not an instance of type, then an __mro_entries__() method is searched on the base. If
an __mro_entries__() method is found, the base is substituted with the result of a
call to __mro_entries__() when creating the class. The method is called with the
original bases tuple passed to the bases parameter, and must return a tuple of classes that
will be used instead of the base. The returned tuple may be empty: in these cases, the
original base is ignored.
See also
types.resolve_bases()

Dynamically resolve bases that are not instances of type.


types.get_original_bases()

Retrieve a class’s “original bases” prior to modifications by __mro_entries__().


PEP 560

Core support for typing module and generic types.


3.3.3.3. Determining the appropriate metaclass

The appropriate metaclass for a class definition is determined as follows:

- if no bases and no explicit metaclass are given, then type() is used;
- if an explicit metaclass is given and it is not an instance of type(), then it is used directly as the metaclass;
- if an instance of type() is given as the explicit metaclass, or bases are defined, then the most derived metaclass is used.

The most derived metaclass is selected from the explicitly specified metaclass (if any) and the metaclasses (i.e. type(cls)) of all specified base classes. The most derived metaclass is one which is a subtype of all of these candidate metaclasses. If none of the candidate metaclasses meets that criterion, then the class definition will fail with TypeError.

3.3.3.4. Preparing the class namespace

Once the appropriate metaclass has been identified, the class namespace is prepared. If the metaclass has a __prepare__ attribute, it is called as namespace = metaclass.__prepare__(name, bases, **kwds) (where the additional keyword arguments, if any, come from the class definition). The __prepare__ method should be implemented as a classmethod. The namespace returned by __prepare__ is passed in to __new__, but when the final class object is created the namespace is copied into a new dict.

If the metaclass has no __prepare__ attribute, then the class namespace is initialised as an empty ordered mapping.

See also
PEP 3115 - Metaclasses in Python 3000

Introduced the __prepare__ namespace hook


3.3.3.5. Executing the class body

The class body is executed (approximately) as exec(body, globals(), namespace). The key difference from a normal call to exec() is that lexical scoping allows the class body (including any methods) to reference names from the current and outer scopes when the class definition occurs inside a function.

However, even when the class definition occurs inside a function, methods defined inside the class still cannot see names defined at the class scope. Class variables must be accessed through the first parameter of instance or class methods, or through the implicit lexically scoped __class__ reference described in the next section.
3.3.3.6. Creating the class object

Once the class namespace has been populated by executing the class body, the class object is created by calling metaclass(name, bases, namespace, **kwds) (the additional keywords passed here are the same as those passed to __prepare__).

This class object is the one that will be referenced by the zero-argument form of super(). __class__ is an implicit closure reference created by the compiler if any methods in a class body refer to either __class__ or super. This allows the zero argument form of super() to correctly identify the class being defined based on lexical scoping, while the class or instance that was used to make the current call is identified based on the first argument passed to the method.

CPython implementation detail: In CPython 3.6 and later, the __class__ cell is passed to the metaclass as a __classcell__ entry in the class namespace. If present, this must be propagated up to the type.__new__ call in order for the class to be initialised correctly.

When using the default metaclass type, or any metaclass that ultimately calls type.__new__, the following additional customization steps are invoked after creating the class object:

1. The type.__new__ method collects all of the attributes in the class namespace that define a __set_name__() method;
2. Those __set_name__ methods are called with the class being defined and the assigned name of that particular attribute;
3. The __init_subclass__() hook is called on the immediate parent of the new class in its method resolution order.

After the class object is created, it is passed to the class decorators included in the class definition (if any) and the resulting object is bound in the local namespace as the defined class.

When a new class is created by type.__new__, the object provided as the namespace parameter is copied to a new ordered mapping and the original object is discarded. The new copy is wrapped in a read-only proxy, which becomes the __dict__ attribute of the class object.

See also
PEP 3135 - New super

Describes the implicit __class__ closure reference


3.3.3.7. Uses for metaclasses

The potential uses for metaclasses are boundless. Some ideas that have been explored include enum, logging, interface checking, automatic delegation, automatic property creation, proxies, frameworks, and automatic resource locking/synchronization.

3.3.4. Customizing instance and subclass checks

The following methods are used to override the default behavior of the isinstance() and issubclass() built-in functions.

In particular, the metaclass abc.ABCMeta implements these methods in order to allow the addition of Abstract Base Classes (ABCs) as "virtual base classes" to any class or type (including built-in types), including other ABCs.

class.__instancecheck__(self, instance)

Return true if instance should be considered a (direct or indirect) instance of class. If defined, called to implement isinstance(instance, class).
class.__subclasscheck__(self, subclass)

Return true if subclass should be considered a (direct or indirect) subclass of class. If defined, called to implement issubclass(subclass, class).

Note that these methods are looked up on the type (metaclass) of a class. They cannot be defined as class methods in the actual class. This is consistent with the lookup of special methods that are called on instances, only in this case the instance is itself a class.

See also
PEP 3119 - Introducing Abstract Base Classes

Includes the specification for customizing isinstance() and issubclass() behavior
through __instancecheck__() and __subclasscheck__(), with motivation for
this functionality in the context of adding Abstract Base Classes (see the abc module) to
the language.
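A sketch of a metaclass that defines __instancecheck__() so isinstance() is driven by duck typing rather than inheritance (the DuckMeta, Duck and Robot names are invented for illustration):

class DuckMeta(type):
    def __instancecheck__(cls, instance):
        # Consider anything with a callable quack() method an instance.
        return callable(getattr(instance, 'quack', None))

class Duck(metaclass=DuckMeta):
    pass

class Robot:
    def quack(self):
        return 'beep'

assert isinstance(Robot(), Duck)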
3.3.5. Emulating generic types

When using type annotations, it is often useful to parameterize a generic type using Python's square-brackets notation. For example, the annotation list[int] might be used to signify a list in which all the elements are of type int.

See also
PEP 484 - Type Hints

Introducing Python’s framework for type annotations


Generic Alias Types

Documentation for objects representing parameterized generic classes


Generics, user-defined generics and typing.Generic

Documentation on how to implement generic classes that can be parameterized at runtime


and understood by static type-checkers.

A class can generally only be parameterized if it defines the special class method __class_getitem__().

classmethod object.__class_getitem__(cls, key)

Return an object representing the specialization of a generic class by type arguments found in key.

When defined on a class, __class_getitem__() is automatically a class method. As such, there is no need for it to be decorated with @classmethod when it is defined.
3.3.5.1. The purpose of __class_getitem__

The purpose of __class_getitem__() is to allow runtime parameterization of standard-library generic classes in order to more easily apply type hints to these classes.

To implement custom generic classes that can be parameterized at runtime and understood by static type-checkers, users should either inherit from a standard library class that already implements __class_getitem__(), or inherit from typing.Generic, which has its own implementation of __class_getitem__().

Custom implementations of __class_getitem__() on classes defined outside of the standard library may not be understood by third-party type-checkers such as mypy. Using __class_getitem__() on any class for purposes other than type hinting is discouraged.

3.3.5.2. __class_getitem__ versus __getitem__

Usually, the subscription of an object using square brackets will call the __getitem__() instance method defined on the object's class. However, if the object being subscribed is itself a class, the class method __class_getitem__() may be called instead. __class_getitem__() should return a GenericAlias object if it is properly defined.

Presented with the expression obj[x], the Python interpreter follows something like the following process to decide whether __getitem__() or __class_getitem__() should be called:

from inspect import isclass

def subscribe(obj, x):
    """Return the result of the expression 'obj[x]'"""

    class_of_obj = type(obj)

    # If the class of obj defines __getitem__,
    # call class_of_obj.__getitem__(obj, x)
    if hasattr(class_of_obj, '__getitem__'):
        return class_of_obj.__getitem__(obj, x)

    # Else, if obj is a class and defines __class_getitem__,
    # call obj.__class_getitem__(x)
    elif isclass(obj) and hasattr(obj, '__class_getitem__'):
        return obj.__class_getitem__(x)

    # Else, raise an exception
    else:
        raise TypeError(
            f"'{class_of_obj.__name__}' object is not subscriptable"
        )

In Python, all classes are themselves instances of other classes. The class of a class is known as that class's metaclass, and most classes have the type class as their metaclass. type does not define __getitem__(), meaning that expressions such as list[int] result in __class_getitem__() being called:

>>> # list has class "type" as its metaclass, like most classes:
>>> type(list)
<class 'type'>
>>> type(dict) == type(list) == type(tuple) == type(str) == type(bytes)
True
>>> # "list[int]" calls "list.__class_getitem__(int)"
>>> list[int]
list[int]
>>> # list.__class_getitem__ returns a GenericAlias object:
>>> type(list[int])
<class 'types.GenericAlias'>
However, if a class has a custom metaclass that defines __getitem__(), subscribing the class may result in different behaviour. An example of this can be found in the enum module:

>>> from enum import Enum
>>> class Menu(Enum):
...     """A breakfast menu"""
...     SPAM = 'spam'
...     BACON = 'bacon'
...
>>> # Enum classes have a custom metaclass:
>>> type(Menu)
<class 'enum.EnumMeta'>
>>> # EnumMeta defines __getitem__(),
>>> # so __class_getitem__() is not called,
>>> # and the result is not a GenericAlias object:
>>> Menu['SPAM']
<Menu.SPAM: 'spam'>
>>> type(Menu['SPAM'])
<enum 'Menu'>
See also
PEP 560 - Core Support for typing module and generic types

Introducing __class_getitem__(), and outlining when a subscription results


in __class_getitem__() being called instead of __getitem__()
3.3.6. Emulating callable objects

object.__call__(self[, args...])

Called when the instance is "called" as a function; if this method is defined, x(arg1, arg2, ...) roughly translates to type(x).__call__(x, arg1, ...).
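A minimal sketch of a callable instance that keeps state between calls (the Counter class is invented for illustration):

class Counter:
    def __init__(self):
        self.count = 0

    def __call__(self, step=1):
        self.count += step
        return self.count

tick = Counter()
tick()
tick()
assert tick(3) == 5     # type(tick).__call__(tick, 3) under the hood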
3.3.7. Emulating container types

The following methods can be defined to implement container objects. Containers usually are sequences (such as lists or tuples) or mappings (like dictionaries), but can represent other containers as well. The first set of methods is used either to emulate a sequence or to emulate a mapping; the difference is that for a sequence, the allowable keys should be the integers k for which 0 <= k < N where N is the length of the sequence, or slice objects, which define a range of items. It is also recommended that mappings provide the methods keys(), values(), items(), get(), clear(), setdefault(), pop(), popitem(), copy(), and update() behaving similar to those for Python's standard dictionary objects. The collections.abc module provides a MutableMapping abstract base class to help create those methods from a base set of __getitem__(), __setitem__(), __delitem__(), and keys(). Mutable sequences should provide methods append(), count(), index(), extend(), insert(), pop(), remove(), reverse() and sort(), like Python standard list objects. Finally, sequence types should implement addition (meaning concatenation) and multiplication (meaning repetition) by defining the methods __add__(), __radd__(), __iadd__(), __mul__(), __rmul__() and __imul__() described below; they should not define other numerical operators. It is recommended that both mappings and sequences implement the __contains__() method to allow efficient use of the in operator; for mappings, in should search the mapping's keys; for sequences, it should search through the values. It is further recommended that both mappings and sequences implement the __iter__() method to allow efficient iteration through the container; for mappings, __iter__() should iterate through the object's keys; for sequences, it should iterate through the values.

object.__len__(self)

Called to implement the built-in function len(). Should return the length of the object,
an integer >= 0. Also, an object that doesn’t define a __bool__() method and
whose __len__() method returns zero is considered to be false in a Boolean context.

CPython implementation detail: In CPython, the length is required to be at most sys.maxsize. If the length is larger than sys.maxsize some features (such
as len()) may raise OverflowError. To prevent raising OverflowError by truth
value testing, an object must define a __bool__() method.
object.__length_hint__(self)

Called to implement operator.length_hint(). Should return an estimated length for the object (which may be greater or less than the actual length). The length must be an
integer >= 0. The return value may also be NotImplemented, which is treated the same
as if the __length_hint__ method didn’t exist at all. This method is purely an
optimization and is never required for correctness.

Added in version 3.4.

Note

Slicing is done exclusively with the following three methods. A call like

a[1:2] = b

is translated to

a[slice(1, 2, None)] = b

and so forth. Missing slice items are always filled in with None.
object.__getitem__(self, key)

Called to implement evaluation of self[key]. For sequence types, the accepted keys
should be integers. Optionally, they may support slice objects as well. Negative index
support is also optional. If key is of an inappropriate type, TypeError may be raised;
if key is a value outside the set of indexes for the sequence (after any special
interpretation of negative values), IndexError should be raised. For mapping types,
if key is missing (not in the container), KeyError should be raised.

Note

for loops expect that an IndexError will be raised for illegal indexes to allow proper
detection of the end of the sequence.
Note

When subscripting a class, the special class method __class_getitem__() may be called instead of __getitem__(). See __class_getitem__ versus __getitem__ for more
details.
object.__setitem__(self, key, value)

Called to implement assignment to self[key]. Same note as for __getitem__(). This should only be implemented for mappings if the objects support changes to the
values for keys, or if new keys can be added, or for sequences if elements can be
replaced. The same exceptions should be raised for improper key values as for
the __getitem__() method.
object.__delitem__(self, key)

Called to implement deletion of self[key]. Same note as for __getitem__(). This should only be implemented for mappings if the objects support removal of keys, or for
sequences if elements can be removed from the sequence. The same exceptions should be
raised for improper key values as for the __getitem__() method.

object.__missing__(self, key)

Called by dict.__getitem__() to implement self[key] for dict subclasses when key is not in the dictionary.

object.__iter__(self)

This method is called when an iterator is required for a container. This method should
return a new iterator object that can iterate over all the objects in the container. For
mappings, it should iterate over the keys of the container.
object.__reversed__(self)

Called (if present) by the reversed() built-in to implement reverse iteration. It should
return a new iterator object that iterates over all the objects in the container in reverse
order.

If the __reversed__() method is not provided, the reversed() built-in will fall
back to using the sequence protocol (__len__() and __getitem__()). Objects that
support the sequence protocol should only provide __reversed__() if they can
provide an implementation that is more efficient than the one provided by reversed().

object.__contains__(self, item)

Called to implement membership test operators. Should return true if item is in self, false
otherwise. For mapping objects, this should consider the keys of the mapping rather than
the values or the key-item pairs.

For objects that don’t define __contains__(), the membership test first tries iteration
via __iter__(), then the old sequence iteration protocol via __getitem__(),
see this section in the language reference.
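A sketch of a small read-only sequence implementing just __len__() and __getitem__(); iteration, in, and reversed() then work through the fallbacks described above (the Countdown class is invented for illustration):

class Countdown:
    def __init__(self, start):
        self.start = start

    def __len__(self):
        return self.start

    def __getitem__(self, index):
        if not 0 <= index < self.start:
            raise IndexError(index)   # lets for loops detect the end
        return self.start - index

c = Countdown(3)
assert list(c) == [3, 2, 1]           # iteration via __getitem__
assert 2 in c                         # membership via the same fallback
assert list(reversed(c)) == [1, 2, 3]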
3.3.8. Emulating numeric types

The following methods can be defined to emulate numeric objects. Methods corresponding to operations that are not supported by the particular kind of number implemented (e.g., bitwise operations for non-integral numbers) should be left undefined.

These methods are called to implement the binary arithmetic operations
(+, -, *, @, /, //, %, divmod(), pow(), **, <<, >>, &, ^, |). For instance, to evaluate
the expression x + y, where x is an instance of a class that has
an __add__() method, type(x).__add__(x, y) is called.
The __divmod__() method should be the equivalent to
using __floordiv__() and __mod__(); it should not be related
to __truediv__(). Note that __pow__() should be defined to accept an optional
third argument if the ternary version of the built-in pow() function is to be supported.

If one of those methods does not support the operation with the supplied arguments, it
should return NotImplemented.
These methods are called to implement the binary arithmetic operations
(+, -, *, @, /, //, %, divmod(), pow(), **, <<, >>, &, ^, |) with reflected (swapped)
operands. These functions are only called if the left operand does not support the
corresponding operation [3] and the operands are of different types. [4] For instance, to
evaluate the expression x - y, where y is an instance of a class that has
an __rsub__() method, type(y).__rsub__(y, x) is called
if type(x).__sub__(x, y) returns NotImplemented.

Note that ternary pow() will not try calling __rpow__() (the coercion rules would
become too complicated).

Note

If the right operand’s type is a subclass of the left operand’s type and that subclass
provides a different implementation of the reflected method for the operation, this
method will be called before the left operand’s non-reflected method. This behavior
allows subclasses to override their ancestors’ operations.
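A sketch showing the forward/reflected pair: each side returns NotImplemented for operands it does not understand, letting the interpreter try the other operand and, failing that, raise TypeError (the Meters class is invented for illustration):

class Meters:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        if isinstance(other, Meters):
            return Meters(self.value + other.value)
        if isinstance(other, (int, float)):
            return Meters(self.value + other)
        return NotImplemented

    def __radd__(self, other):
        # Called for e.g. 2 + Meters(3) after int.__add__ returns NotImplemented.
        return self.__add__(other)

assert (Meters(3) + 2).value == 5
assert (2 + Meters(3)).value == 5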
These methods are called to implement the augmented arithmetic assignments ( +=, -
=, *=, @=, /=, //=, %=, **=, <<=, >>=, &=, ^=, |=). These methods should attempt to
do the operation in-place (modifying self) and return the result (which could be, but does
not have to be, self). If a specific method is not defined, or if that method
returns NotImplemented, the augmented assignment falls back to the normal methods.
For instance, if x is an instance of a class with an __iadd__() method, x += y is
equivalent to x = x.__iadd__(y). If __iadd__() does not exist, or
if x.__iadd__(y) returns NotImplemented, x.__add__(y) and y.__radd__(
x) are considered, as with the evaluation of x + y. In certain situations, augmented
assignment can result in unexpected errors (see Why does a_tuple[i] += [‘item’] raise an
exception when the addition works?), but this behavior is in fact part of the data model.

Called to implement the unary arithmetic operations (-, +, abs() and ~).

Called to implement the built-in functions complex(), int() and float(). Should
return a value of the appropriate type.

object.__index__(self)

Called to implement operator.index(), and whenever Python needs to losslessly convert the numeric object to an integer object (such as in slicing, or in the built-
in bin(), hex() and oct() functions). Presence of this method indicates that the
numeric object is an integer type. Must return an integer.

If __int__(), __float__() and __complex__() are not defined then corresponding built-in functions int(), float() and complex() fall back
to __index__().
Called to implement the built-in
function round() and math functions trunc(), floor() and ceil().
Unless ndigits is passed to __round__() all these methods should return the value of
the object truncated to an Integral (typically an int).

The built-in function int() falls back to __trunc__() if neither __int__() nor __index__() is defined.

Changed in version 3.11: The delegation of int() to __trunc__() is deprecated.

3.3.9. With Statement Context Managers

A context manager is an object that defines the runtime context to be established when executing a with statement. The context manager handles the entry into, and the exit from, the desired runtime context for the execution of the block of code. Context managers are normally invoked using the with statement (described in section The with statement), but can also be used by directly invoking their methods.

Typical uses of context managers include saving and restoring various kinds of global state, locking and unlocking resources, closing opened files, etc.

object.__enter__(self)
Enter the runtime context related to this object. The with statement will bind this
method’s return value to the target(s) specified in the as clause of the statement, if any.

object.__exit__(self, exc_type, exc_value, traceback)

Exit the runtime context related to this object. The parameters describe the exception that
caused the context to be exited. If the context was exited without an exception, all three
arguments will be None.

If an exception is supplied, and the method wishes to suppress the exception (i.e., prevent
it from being propagated), it should return a true value. Otherwise, the exception will be
processed normally upon exit from this method.

Note that __exit__() methods should not reraise the passed-in exception; this is the
caller’s responsibility.
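A minimal sketch of a context manager that times the block it wraps; it does not suppress exceptions (the Timer class is invented for illustration):

import time

class Timer:
    def __enter__(self):
        self.start = time.perf_counter()
        return self                          # bound to the 'as' target

    def __exit__(self, exc_type, exc_value, traceback):
        self.elapsed = time.perf_counter() - self.start
        return None                          # do not suppress exceptions

with Timer() as t:
    sum(range(1000))
print(t.elapsed)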

See also
PEP 343 - The "with" statement

The specification, background, and examples for the Python with statement.

3.3.10. Customizing positional arguments in class pattern matching

When using a class name in a pattern, positional arguments in the pattern are not allowed by default, i.e. case MyClass(x, y) is typically invalid without special support in MyClass. To be able to use that kind of pattern, the class needs to define a __match_args__ attribute.

object.__match_args__
This class variable can be assigned a tuple of strings. When this class is used in a class
pattern with positional arguments, each positional argument will be converted into a
keyword argument, using the corresponding value in __match_args__ as the keyword.
The absence of this attribute is equivalent to setting it to ().

Note

The number of positional arguments in a pattern should be smaller than or equal to the number of elements in __match_args__; if it is larger, the pattern match attempt will raise a TypeError.
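A sketch of __match_args__ in use; positional subpatterns in the class pattern are matched against the named attributes, in order (the Point class is invented for illustration):

class Point:
    __match_args__ = ('x', 'y')

    def __init__(self, x, y):
        self.x, self.y = x, y

def describe(p):
    match p:
        case Point(0, 0):
            return 'origin'
        case Point(x, 0):
            return f'on the x-axis at {x}'
        case Point(x, y):
            return f'at ({x}, {y})'

assert describe(Point(2, 0)) == 'on the x-axis at 2'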

See also
PEP 634 - Structural Pattern Matching

The specification for the Python match statement.

3.3.11. Emulating buffer types

The buffer protocol provides a way for Python objects to expose efficient access to a low-level memory array. This protocol is implemented by builtin types such as bytes and memoryview, and third-party libraries may define additional buffer types. While buffer types are usually implemented in C, it is also possible to implement the protocol in Python.

object.__buffer__(self, flags)

Called when a buffer is requested from self (for example, by
the memoryview constructor). The flags argument is an integer representing the kind of
buffer requested, affecting for example whether the returned buffer is read-only or
writable. inspect.BufferFlags provides a convenient way to interpret the flags.
The method must return a memoryview object.

object.__release_buffer__(self, buffer)

Called when a buffer is no longer needed. The buffer argument is a memoryview object
that was previously returned by __buffer__(). The method must release any
resources associated with the buffer. This method should return None. Buffer objects that
do not need to perform any cleanup are not required to implement this method.

See also
PEP 688 - Making the buffer protocol accessible in Python

Introduces the Python __buffer__ and __release_buffer__ methods.

collections.abc.Buffer

ABC for buffer types.


3.3.12. Special method lookup

For custom classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object's type, not in the object's instance dictionary. Special methods such as __hash__() and __repr__() are implemented by all objects, including type objects; if their implicit lookup used the conventional lookup process, they would fail when invoked on the type object itself. Implicit special method lookup therefore also bypasses the __getattribute__() machinery, which provides significant scope for speed optimisations within the interpreter at the cost of some flexibility (the special method must be set on the class object itself in order to be consistently invoked by the interpreter).

3.4. Coroutines

3.4.1. Awaitable Objects

An awaitable object generally implements an __await__() method. Coroutine objects returned from async def functions are awaitable.

object.__await__(self)

Must return an iterator. Should be used to implement awaitable objects. For instance, asyncio.Future implements this method to be compatible with the await expression.

Note

The language doesn’t place any restriction on the type or value of the objects yielded by
the iterator returned by __await__, as this is specific to the implementation of the
asynchronous execution framework (e.g. asyncio) that will be managing
the awaitable object.

3.4.2. Coroutine Objects

Coroutine objects are awaitable objects. A coroutine's execution can be controlled by calling __await__() and iterating over the result. When the coroutine has finished executing and returns, the iterator raises StopIteration, and the exception's value attribute holds the return value. If the coroutine raises an exception, it is propagated by the iterator. Coroutines should not directly raise unhandled StopIteration exceptions.

Coroutines also have the methods listed below, which are analogous to those of generators. However, unlike generators, coroutines do not directly support iteration.

coroutine.send(value)
Starts or resumes execution of the coroutine. If value is None, this is equivalent to advancing the iterator returned by __await__(). If value is not None, this method
delegates to the send() method of the iterator that caused the coroutine to suspend. The
result (return value, StopIteration, or other exception) is the same as when iterating
over the __await__() return value, described above.

coroutine.throw(value)
coroutine.throw(type[, value[, traceback]])

Raises the specified exception in the coroutine. This method delegates to the throw() method of the iterator that caused the coroutine to suspend, if it has such a
method. Otherwise, the exception is raised at the suspension point. The result (return
value, StopIteration, or other exception) is the same as when iterating over
the __await__() return value, described above. If the exception is not caught in the
coroutine, it propagates back to the caller.

Changed in version 3.12: The second signature (type[, value[, traceback]]) is deprecated
and may be removed in a future version of Python.

coroutine.close()

Causes the coroutine to clean itself up and exit. If the coroutine is suspended, this method
first delegates to the close() method of the iterator that caused the coroutine to
suspend, if it has such a method. Then it raises GeneratorExit at the suspension
point, causing the coroutine to immediately clean itself up. Finally, the coroutine is
marked as having finished executing, even if it was never started.

Coroutine objects are automatically closed using the above process when they are about
to be destroyed.

3.4.3. Asynchronous Iterators

An asynchronous iterator can call asynchronous code in its __anext__ method.

object.__aiter__(self)

Must return an asynchronous iterator object.

object.__anext__(self)

Must return an awaitable resulting in a next value of the iterator. Should raise
a StopAsyncIteration error when the iteration is over.
3.4.4. Asynchronous Context Managers

An asynchronous context manager is a context manager that is able to suspend execution in its __aenter__ and __aexit__ methods.

object.__aenter__(self)

Semantically similar to __enter__(), the only difference being that it must return
an awaitable.

object.__aexit__(self, exc_type, exc_value, traceback)

Semantically similar to __exit__(), the only difference being that it must return
an awaitable.
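A sketch combining the asynchronous protocols: a single class usable both with async for and async with (the Ticker class is invented for illustration):

import asyncio

class Ticker:
    def __init__(self, limit):
        self.limit, self.count = limit, 0

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.count >= self.limit:
            raise StopAsyncIteration
        self.count += 1
        await asyncio.sleep(0)        # cooperative suspension point
        return self.count

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc_value, traceback):
        return None                   # do not suppress exceptions

async def main():
    async with Ticker(3) as t:
        return [n async for n in t]

assert asyncio.run(main()) == [1, 2, 3]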

Footnotes

[1] It is possible in some cases to change an object's type, under certain controlled conditions. It generally isn't a good idea though, since it can lead to some very strange behaviour if it is handled incorrectly.

[2] The __hash__(), __iter__(), __reversed__(), __contains__(), __class_getitem__() and __fspath__() methods have special handling for this; others will still raise a TypeError, but may do so by relying on the behavior that None is not callable.

[3] "Does not support" here means that the class has no such method, or the method returns NotImplemented. Do not set the method to None if you want to force fallback to the right operand's reflected method; that will instead have the opposite effect of explicitly blocking such fallback.

[4] For operands of the same type, it is assumed that if the non-reflected method (such as __add__()) fails then the overall operation is not supported, which is why the reflected method is not called.
