Python
You could write a Unix shell script or Windows batch files for some of these
tasks, but shell scripts are best at moving around files and changing text
data, not well-suited for GUI applications or games. You could write a C/C++/Java
program, but it can take a lot of development time to get even a first-draft
program. Python is simpler to use, available on Windows, macOS, and
Unix operating systems, and will help you get the job done more quickly.
Python allows you to split your program into modules that can be reused in
other Python programs. It comes with a large collection of standard modules
that you can use as the basis of your programs — or as examples to start
learning to program in Python. Some of these modules provide things like file
I/O, system calls, sockets, and even interfaces to graphical user interface
toolkits like Tk.
By the way, the language is named after the BBC show “Monty Python’s
Flying Circus” and has nothing to do with reptiles. Making references to
Monty Python skits in documentation is not only allowed, it is encouraged!
Now that you are all excited about Python, you’ll want to examine it in some
more detail. Since the best way to learn a language is to use it, the tutorial
invites you to play with the Python interpreter as you read.
In the next chapter, the mechanics of using the interpreter are explained.
This is rather mundane information, but essential for trying out the examples
shown later.
The rest of the tutorial introduces various features of the Python language
and system through examples, beginning with simple expressions,
statements and data types, through functions and modules, and finally
touching upon advanced concepts like exceptions and user-defined classes.
Typing the command
python3.12
to the shell starts the interpreter. [1] Since the choice of the directory where the interpreter lives is an installation
option, other places are possible; check with your local Python guru or system administrator.
(E.g., /usr/local/python is a popular alternative location.)
On Windows machines where you have installed Python from the Microsoft Store,
the python3.12 command will be available. If you have the py.exe launcher installed, you can
use the py command. See Excursus: Setting environment variables for other ways to launch
Python.
The interpreter’s line-editing features include interactive editing, history substitution and code
completion on systems that support the GNU Readline library. Perhaps the quickest check to see
whether command line editing is supported is typing Control-P to the first Python prompt you
get. If it beeps, you have command line editing; see Appendix Interactive Input Editing and
History Substitution for an introduction to the keys. If nothing appears to happen, or if ^P is
echoed, command line editing isn’t available; you’ll only be able to use backspace to remove
characters from the current line.
The interpreter operates somewhat like the Unix shell: when called with standard input
connected to a tty device, it reads and executes commands interactively; when called with a file
name argument or with a file as standard input, it reads and executes a script from that file.
A second way of starting the interpreter is python -c command [arg] ..., which
executes the statement(s) in command, analogous to the shell’s -c option. Since Python
statements often contain spaces or other characters that are special to the shell, it is usually
advised to quote command in its entirety.
Some Python modules are also useful as scripts. These can be invoked
using python -m module [arg] ..., which executes the source file for module as if you
had spelled out its full name on the command line.
When a script file is used, it is sometimes useful to be able to run the script and enter interactive
mode afterwards. This can be done by passing -i before the script.
All command line options are described in Command line and environment.
$ python3.12
Python 3.12 (default, April 4 2022, 09:25:04)
[GCC 10.2.0] on linux
Type "help", "copyright", "credits" or "license" for more
information.
>>>
Continuation lines are needed when entering a multi-line construct. As an example, take a look at
this if statement:
>>>
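A minimal multi-line construct of this kind, written in script form (the flag name is chosen for illustration), might be:

```python
the_world_is_flat = True
if the_world_is_flat:
    # the indented body is entered on continuation lines,
    # for which the interactive interpreter prompts with "..."
    print("Be careful not to fall off!")
```

At the interactive prompt, the body line would follow the `...` continuation prompt, and a blank line ends the construct.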
To declare an encoding other than the default one, a special comment line should be added as
the first line of the file. The syntax is as follows:
For example, to declare that Windows-1252 encoding is to be used, the first line of your source
code file should be:
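The declaration takes the form `# -*- coding: encoding -*-`. For Windows-1252, a sketch of such a file (the printed message is illustrative):

```python
# -*- coding: cp1252 -*-
# With this first line, the interpreter decodes the rest of this
# source file as Windows-1252 instead of the default UTF-8.
msg = "encoding declared"
print(msg)
```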
One exception to the first line rule is when the source code starts with a UNIX “shebang” line. In
this case, the encoding declaration should be added as the second line of the file. For example:
#!/usr/bin/env python3
# -*- coding: cp1252 -*-
Footnotes
[1]
On Unix, the Python 3.x interpreter is by default not installed with the executable
named python, so that it does not conflict with a simultaneously installed Python 2.x
executable.
You can toggle the display of prompts and output by clicking on >>> in the
upper-right corner of an example box. If you hide the prompts and output for
an example, then you can easily copy and paste the input lines into your
interpreter.
Many of the examples in this manual, even those entered at the interactive
prompt, include comments. Comments in Python start with the hash
character, #, and extend to the end of the physical line. A comment may
appear at the start of a line or following whitespace or code, but not within a
string literal. A hash character within a string literal is just a hash character.
Since comments are to clarify code and are not interpreted by Python, they
may be omitted when typing in examples.
Some examples:
3.1.1. Numbers
The interpreter acts as a simple calculator: you can type an expression at it and it will write the
value. Expression syntax is straightforward: the operators +, -, * and / can be used to perform
arithmetic; parentheses (()) can be used for grouping. For example:
>>>
>>> 2 + 2
4
>>> 50 - 5*6
20
>>> (50 - 5*6) / 4
5.0
>>> 8 / 5 # division always returns a floating-point number
1.6
The integer numbers (e.g. 2, 4, 20) have type int, the ones with a fractional part
(e.g. 5.0, 1.6) have type float. We will see more about numeric types later in the tutorial.
Division (/) always returns a float. To do floor division and get an integer result you can use
the // operator; to calculate the remainder you can use %:
>>>
>>>
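For example (values chosen for illustration):

```python
print(17 / 3)    # classic division returns a float: 5.666666666666667
print(17 // 3)   # floor division discards the fractional part: 5
print(17 % 3)    # the % operator returns the remainder: 2
print(5 * 3 + 2) # floored quotient * divisor + remainder: 17
```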
>>> 5 ** 2 # 5 squared
25
>>> 2 ** 7 # 2 to the power of 7
128
The equal sign (=) is used to assign a value to a variable. Afterwards, no result is displayed
before the next interactive prompt:
>>>
>>> width = 20
>>> height = 5 * 9
>>> width * height
900
If a variable is not “defined” (assigned a value), trying to use it will give you an error:
>>>
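For instance (the name `n` is illustrative; it has never been assigned):

```python
try:
    print(n)  # n was never assigned a value
except NameError as exc:
    message = str(exc)
print(message)  # name 'n' is not defined
```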
There is full support for floating point; operators with mixed type operands convert the integer
operand to floating point:
>>>
>>> 4 * 3.75 - 1
14.0
In interactive mode, the last printed expression is assigned to the variable _. This means that
when you are using Python as a desk calculator, it is somewhat easier to continue calculations,
for example:
>>>
This variable should be treated as read-only by the user. Don’t explicitly assign a value to it —
you would create an independent local variable with the same name masking the built-in variable
with its magic behavior.
In addition to int and float, Python supports other types of numbers, such
as Decimal and Fraction. Python also has built-in support for complex numbers, and uses
the j or J suffix to indicate the imaginary part (e.g. 3+5j).
3.1.2. Text
Python can manipulate text (represented by type str, so-called “strings”) as well as numbers.
This includes characters “!”, words “rabbit”, names “Paris”, sentences
“Got your back.”, etc. “Yay! :)”. They can be enclosed in single quotes ('...') or
double quotes ("...") with the same result [2].
>>>
To quote a quote, we need to “escape” it, by preceding it with \. Alternatively, we can use the
other type of quotation marks:
>>>
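For example:

```python
a = 'doesn\'t'  # use \' to escape the single quote...
b = "doesn't"   # ...or use the other kind of quotation marks instead
print(a)
print(b)
```

Both spellings produce the same string.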
In the Python shell, the string definition and output string can look different.
The print() function produces a more readable output, by omitting the enclosing quotes and
by printing escaped and special characters:
>>>
>>>
There is one subtle aspect to raw strings: a raw string may not end in an odd number
of \ characters; see the FAQ entry for more information and workarounds.
String literals can span multiple lines. One way is using
triple-quotes: """...""" or '''...'''. End of lines are automatically included in the string, but
it’s possible to prevent this by adding a \ at the end of the line. The following example:
print("""\
Usage: thingy [OPTIONS]
-h Display this usage message
-H hostname Hostname to connect to
""")
produces the following output (note that the initial newline is not included):
Strings can be concatenated (glued together) with the + operator, and repeated with *:
>>>
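For example:

```python
word = 3 * 'un' + 'ium'  # 3 repetitions of 'un', followed by 'ium'
print(word)  # unununium
```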
Two or more string literals (i.e. the ones enclosed between quotes) next to each other are
automatically concatenated.
>>>
>>> 'Py' 'thon'
'Python'
This feature is particularly useful when you want to break long strings:
>>>
This only works with two literals though, not with variables or expressions:
>>>
>>>
Strings can be indexed (subscripted), with the first character having index 0. There is no separate
character type; a character is simply a string of size one:
>>>
>>> word = 'Python'
>>> word[0] # character in position 0
'P'
>>> word[5] # character in position 5
'n'
Indices may also be negative numbers, to start counting from the right:
>>>
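For example:

```python
word = 'Python'
print(word[-1])  # last character: 'n'
print(word[-2])  # second-last character: 'o'
print(word[-6])  # first character again: 'P'
```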
Note that since -0 is the same as 0, negative indices start from -1.
In addition to indexing, slicing is also supported. While indexing is used to obtain individual
characters, slicing allows you to obtain a substring:
>>>
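For example:

```python
word = 'Python'
print(word[0:2])  # characters from position 0 (included) to 2 (excluded): 'Py'
print(word[2:5])  # characters from position 2 (included) to 5 (excluded): 'tho'
```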
Slice indices have useful defaults; an omitted first index defaults to zero, an omitted second
index defaults to the size of the string being sliced.
>>>
Note how the start is always included, and the end always excluded. This makes sure
that s[:i] + s[i:] is always equal to s:
>>>
One way to remember how slices work is to think of the indices as pointing between characters,
with the left edge of the first character numbered 0. Then the right edge of the last character of a
string of n characters has index n, for example:
+---+---+---+---+---+---+
| P | y | t | h | o | n |
+---+---+---+---+---+---+
0 1 2 3 4 5 6
-6 -5 -4 -3 -2 -1
The first row of numbers gives the position of the indices 0…6 in the string; the second row
gives the corresponding negative indices. The slice from i to j consists of all characters between
the edges labeled i and j, respectively.
For non-negative indices, the length of a slice is the difference of the indices, if both are within
bounds. For example, the length of word[1:3] is 2.
>>>
However, out of range slice indexes are handled gracefully when used for slicing:
>>>
>>> word[4:42]
'on'
>>> word[42:]
''
Python strings cannot be changed — they are immutable. Therefore, assigning to an indexed
position in the string results in an error:
>>>
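For example:

```python
word = 'Python'
try:
    word[0] = 'J'          # strings are immutable: this raises TypeError
except TypeError as exc:
    error = str(exc)
print(error)               # 'str' object does not support item assignment
new_word = 'J' + word[1:]  # if you need a different string, build a new one
print(new_word)            # Jython
```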
>>>
>>>
>>> s = 'supercalifragilisticexpialidocious'
>>> len(s)
34
See also
Text Sequence Type — str
Strings are examples of sequence types, and support the common operations supported by
such types.
String Methods
Strings support a large number of methods for basic transformations and searching.
f-strings
String literals that have embedded expressions.
printf-style String Formatting
The old formatting operations invoked when strings are the left operand of the % operator
are described in more detail here.
3.1.3. Lists
Python knows a number of compound data types, used to group together
other values. The most versatile is the list, which can be written as a list
of comma-separated values (items) between square brackets. Lists might
contain items of different types, but usually the items all have the same
type.
>>>
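For example:

```python
squares = [1, 4, 9, 16, 25]
print(squares)
```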
Like strings (and all other built-in sequence types), lists can be indexed
and sliced:
>>>
>>>
>>>
You can also add new items at the end of the list, by using
the list.append() method (we will see more about methods later):
>>>
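For example:

```python
cubes = [1, 8, 27]
cubes.append(4 ** 3)  # add the cube of 4 at the end of the list
print(cubes)          # [1, 8, 27, 64]
```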
Simple assignment in Python never copies data. When you assign a list to
a variable, the variable refers to the existing list. Any changes you make
to the list through one variable will be seen through all other variables
that refer to it:
>>>
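For example (the deliberately misspelled "Alph" sets up the slice-copy example that follows):

```python
rgb = ["Red", "Green", "Blue"]
rgba = rgb           # no copy is made: both names refer to the same list
print(rgba is rgb)   # True
rgba.append("Alph")
print(rgb)           # ['Red', 'Green', 'Blue', 'Alph'] — the change is visible through rgb too
```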
All slice operations return a new list containing the requested elements.
This means that the following slice returns a shallow copy of the list:
>>>
>>> correct_rgba = rgba[:]
>>> correct_rgba[-1] = "Alpha"
>>> correct_rgba
["Red", "Green", "Blue", "Alpha"]
>>> rgba
["Red", "Green", "Blue", "Alph"]
Assignment to slices is also possible, and this can even change the size of
the list or clear it entirely:
>>>
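For example:

```python
letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
letters[2:5] = ['C', 'D', 'E']  # replace some values
print(letters)                  # ['a', 'b', 'C', 'D', 'E', 'f', 'g']
letters[2:5] = []               # now remove them
print(letters)                  # ['a', 'b', 'f', 'g']
letters[:] = []                 # clear the list by replacing all elements
print(letters)                  # []
```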
>>>
It is possible to nest lists (create lists containing other lists), for example:
>>>
>>>
>>>
>>> i = 256*256
>>> print('The value of i is', i)
The value of i is 65536
The keyword argument end can be used to avoid the newline after
the output, or end the output with a different string:
>>>
>>> a, b = 0, 1
>>> while a < 1000:
... print(a, end=',')
... a, b = b, a+b
...
0,1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,
4.1. if Statements
Perhaps the most well-known statement type is the if statement. For example:
>>>
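A sketch in script form (the value of `x` is hard-coded here as a stand-in for user input such as `int(input(...))`):

```python
x = -7  # stand-in for user input
if x < 0:
    result = 'Negative'
elif x == 0:
    result = 'Zero'
else:
    result = 'Positive'
print(result)  # Negative
```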
There can be zero or more elif parts, and the else part is optional. The keyword ‘elif’ is
short for ‘else if’, and is useful to avoid excessive indentation. An if … elif … elif …
sequence is a substitute for the switch or case statements found in other languages.
If you’re comparing the same value to several constants, or checking for specific types or
attributes, you may also find the match statement useful. For more details see match Statements.
4.2. for Statements
The for statement in Python differs a bit from what you may be used to in C or Pascal. Rather
than always iterating over an arithmetic progression of numbers (like in Pascal), or giving the
user the ability to define both the iteration step and halting condition (as C),
Python’s for statement iterates over the items of any sequence (a list or a string), in the order
that they appear in the sequence. For example (no pun intended):
>>>
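For example:

```python
words = ['cat', 'window', 'defenestrate']
for w in words:
    print(w, len(w))  # each item of the sequence, in order
```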
Code that modifies a collection while iterating over that same collection can be tricky to get
right. Instead, it is usually more straightforward to loop over a copy of the collection or to create
a new collection:
>>>
>>> for i in range(5):
... print(i)
...
0
1
2
3
4
The given end point is never part of the generated sequence; range(10) generates 10 values,
the legal indices for items of a sequence of length 10. It is possible to let the range start at
another number, or to specify a different increment (even negative; sometimes this is called the
‘step’):
>>>
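For example:

```python
print(list(range(5, 10)))           # [5, 6, 7, 8, 9]
print(list(range(0, 10, 3)))        # step 3: [0, 3, 6, 9]
print(list(range(-10, -100, -30)))  # negative step: [-10, -40, -70]
```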
To iterate over the indices of a sequence, you can combine range() and len() as follows:
>>>
In most such cases, however, it is convenient to use the enumerate() function, see Looping
Techniques.
>>> range(10)
range(0, 10)
In many ways the object returned by range() behaves as if it is a list, but in fact it isn’t. It is an
object which returns the successive items of the desired sequence when you iterate over it, but it
doesn’t really make the list, thus saving space.
We say such an object is iterable, that is, suitable as a target for functions and constructs that
expect something from which they can obtain successive items until the supply is exhausted. We
have seen that the for statement is such a construct, while an example of a function that takes an
iterable is sum():
>>>
>>> sum(range(4)) # 0 + 1 + 2 + 3
6
Later we will see more functions that return iterables and take iterables as arguments. In
chapter Data Structures, we will discuss list() in more detail.
In a for loop, the else clause is executed after the loop reaches its final iteration.
In a while loop, it’s executed after the loop’s condition becomes false.
In either kind of loop, the else clause is not executed if the loop was terminated by a break.
This is exemplified in the following for loop, which searches for prime numbers:
>>>
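A sketch of that search (the `primes` list is added here so the result can be inspected; the original example only prints):

```python
primes = []
for n in range(2, 10):
    for x in range(2, n):
        if n % x == 0:
            print(n, 'equals', x, '*', n // x)
            break
    else:
        # the inner loop fell through without finding a factor
        primes.append(n)
        print(n, 'is a prime number')
```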
(Yes, this is the correct code. Look closely: the else clause belongs to
the for loop, not the if statement.)
When used with a loop, the else clause has more in common with the else clause of
a try statement than it does with that of if statements: a try statement’s else clause runs
when no exception occurs, and a loop’s else clause runs when no break occurs. For more on
the try statement and exceptions, see Handling Exceptions.
The continue statement, also borrowed from C, continues with the next iteration of the loop:
>>>
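For example (the `evens` list is added here so the result can be inspected):

```python
evens = []
for num in range(2, 10):
    if num % 2 == 0:
        evens.append(num)
        continue  # skip straight to the next iteration
    print("Found an odd number", num)
print(evens)  # [2, 4, 6, 8]
```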
>>>
>>>
Another place pass can be used is as a place-holder for a function or conditional body when you
are working on new code, allowing you to keep thinking at a more abstract level. The pass is
silently ignored:
>>>
The simplest form compares a subject value against one or more literals:
def http_error(status):
match status:
case 400:
return "Bad request"
case 404:
return "Not found"
case 418:
return "I'm a teapot"
case _:
return "Something's wrong with the internet"
Note the last block: the “variable name” _ acts as a wildcard and never fails to match. If no case
matches, none of the branches is executed.
Patterns can look like unpacking assignments, and can be used to bind variables:
Study that one carefully! The first pattern has two literals, and can be thought of as an extension
of the literal pattern shown above. But the next two patterns combine a literal and a variable, and
the variable binds a value from the subject (point). The fourth pattern captures two values,
which makes it conceptually similar to the unpacking assignment (x, y) = point.
If you are using classes to structure your data you can use the class name followed by an
argument list resembling a constructor, but with the ability to capture attributes into variables:
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
def where_is(point):
match point:
case Point(x=0, y=0):
print("Origin")
case Point(x=0, y=y):
print(f"Y={y}")
case Point(x=x, y=0):
print(f"X={x}")
case Point():
print("Somewhere else")
case _:
print("Not a point")
You can use positional parameters with some builtin classes that provide an ordering for their
attributes (e.g. dataclasses). You can also define a specific position for attributes in patterns by
setting the __match_args__ special attribute in your classes. If it's set to ("x", "y"), the
following patterns are all equivalent (and all bind the y attribute to the var variable):
Point(1, var)
Point(1, y=var)
Point(x=1, y=var)
Point(y=var, x=1)
A recommended way to read patterns is to look at them as an extended form of what you would
put on the left of an assignment, to understand which variables would be set to what. Only the
standalone names (like var above) are assigned to by a match statement. Dotted names
(like foo.bar), attribute names (the x= and y= above) or class names (recognized by the “(…)”
next to them like Point above) are never assigned to.
Patterns can be arbitrarily nested. For example, if we have a short list of Points,
with __match_args__ added, we could match it like this:
class Point:
__match_args__ = ('x', 'y')
def __init__(self, x, y):
self.x = x
self.y = y
match points:
case []:
print("No points")
case [Point(0, 0)]:
print("The origin")
case [Point(x, y)]:
print(f"Single point {x}, {y}")
case [Point(0, y1), Point(0, y2)]:
print(f"Two on the Y axis at {y1}, {y2}")
case _:
print("Something else")
We can add an if clause to a pattern, known as a “guard”. If the guard is false, match goes on
to try the next case block. Note that value capture happens before the guard is evaluated:
match point:
case Point(x, y) if x == y:
print(f"Y=X at {x}")
case Point(x, y):
print(f"Not on the diagonal")
Like unpacking assignments, tuple and list patterns have exactly the same meaning and
actually match arbitrary sequences. An important exception is that they don’t match
iterators or strings.
Subpatterns may be captured using the as keyword: a pattern such
as case (Point(x1, y1), Point(x2, y2) as p2) will capture the second element of the
input as p2 (as long as the input is a sequence of two points).
For a more detailed explanation and additional examples, you can look into PEP 636 which is
written in a tutorial format.
>>>
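The running example discussed below, a function that writes the Fibonacci series, can be sketched as:

```python
def fib(n):
    """Print a Fibonacci series less than n."""
    a, b = 0, 1
    while a < n:
        print(a, end=' ')
        a, b = b, a + b
    print()

fib(100)  # 0 1 1 2 3 5 8 13 21 34 55 89
```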
The keyword def introduces a function definition. It must be followed by the function name and
the parenthesized list of formal parameters. The statements that form the body of the function
start at the next line, and must be indented.
The first statement of the function body can optionally be a string literal; this string literal is the
function’s documentation string, or docstring. (More about docstrings can be found in the
section Documentation Strings.) There are tools which use docstrings to automatically produce
online or printed documentation, or to let the user interactively browse through code; it’s good
practice to include docstrings in code that you write, so make a habit of it.
The execution of a function introduces a new symbol table used for the local variables of the
function. More precisely, all variable assignments in a function store the value in the local
symbol table; whereas variable references first look in the local symbol table, then in the local
symbol tables of enclosing functions, then in the global symbol table, and finally in the table of
built-in names. Thus, global variables and variables of enclosing functions cannot be directly
assigned a value within a function (unless, for global variables, named in a global statement,
or, for variables of enclosing functions, named in a nonlocal statement), although they may be
referenced.
The actual parameters (arguments) to a function call are introduced in the local symbol table of
the called function when it is called; thus, arguments are passed using call by value (where
the value is always an object reference, not the value of the object). [1] When a function calls
another function, or calls itself recursively, a new local symbol table is created for that call.
A function definition associates the function name with the function object in the current symbol
table. The interpreter recognizes the object pointed to by that name as a user-defined function.
Other names can also point to that same function object and can also be used to access the
function:
>>>
>>> fib
<function fib at 10042ed0>
>>> f = fib
>>> f(100)
0 1 1 2 3 5 8 13 21 34 55 89
Coming from other languages, you might object that fib is not a function but a procedure since
it doesn’t return a value. In fact, even functions without a return statement do return a value,
albeit a rather boring one. This value is called None (it’s a built-in name). Writing the
value None is normally suppressed by the interpreter if it would be the only value written. You
can see it if you really want to using print():
>>>
>>> fib(0)
>>> print(fib(0))
None
It is simple to write a function that returns a list of the numbers of the Fibonacci series, instead of
printing it:
>>>
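A sketch of such a function:

```python
def fib2(n):
    """Return a list containing the Fibonacci series up to n."""
    result = []
    a, b = 0, 1
    while a < n:
        result.append(a)  # calls a method of the list object result
        a, b = b, a + b
    return result

print(fib2(100))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```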
The return statement returns with a value from a function. return without an
expression argument returns None. Falling off the end of a function also returns None.
The statement result.append(a) calls a method of the list object result. A method
is a function that ‘belongs’ to an object and is named obj.methodname, where obj is
some object (this may be an expression), and methodname is the name of a method that
is defined by the object’s type. Different types define different methods. Methods of
different types may have the same name without causing ambiguity. (It is possible to
define your own object types and methods, using classes, see Classes) The
method append() shown in the example is defined for list objects; it adds a new
element at the end of the list. In this example it is equivalent
to result = result + [a], but more efficient.
4.8. More on Defining Functions
It is also possible to define functions with a variable number of arguments. There are three
forms, which can be combined.
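The default-argument example discussed next might look like the following sketch, a prompt loop with a default retry count (names and messages are illustrative):

```python
def ask_ok(prompt, retries=4, reminder='Please try again!'):
    while True:
        reply = input(prompt)
        if reply in {'y', 'ye', 'yes'}:
            return True
        if reply in {'n', 'no', 'nop', 'nope'}:
            return False
        retries = retries - 1
        if retries < 0:
            raise ValueError('invalid user response')
        print(reminder)
```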
This example also introduces the in keyword. This tests whether or not a sequence contains a
certain value.
The default values are evaluated at the point of function definition in the defining scope, so that
i = 5
def f(arg=i):
print(arg)
i = 6
f()
will print 5.
Important warning: The default value is evaluated only once. This makes a difference when the
default is a mutable object such as a list, dictionary, or instances of most classes. For example,
the following function accumulates the arguments passed to it on subsequent calls:
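A minimal sketch of such an accumulating function:

```python
def f(a, L=[]):
    # L is created once, at definition time, and then shared between calls
    L.append(a)
    return L
```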
print(f(1))
print(f(2))
print(f(3))
[1]
[1, 2]
[1, 2, 3]
If you don’t want the default to be shared between subsequent calls, you can write the function
like this instead:
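A common idiom is to default to None and create the list inside the function:

```python
def f(a, L=None):
    if L is None:  # a fresh list on every call
        L = []
    L.append(a)
    return L

print(f(1))  # [1]
print(f(2))  # [2] — nothing is shared between calls
```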
accepts one required argument (voltage) and three optional arguments (state, action,
and type). This function can be called in any of the following ways:
parrot(1000)                                          # 1 positional argument
parrot(voltage=1000)                                  # 1 keyword argument
parrot(voltage=1000000, action='VOOOOOM')             # 2 keyword arguments
parrot(action='VOOOOOM', voltage=1000000)             # 2 keyword arguments
parrot('a million', 'bereft of life', 'jump')         # 3 positional arguments
parrot('a thousand', state='pushing up the daisies')  # 1 positional, 1 keyword
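For reference, a definition matching the signature described above might look like this (the printed lines are illustrative, in the Monty Python spirit):

```python
def parrot(voltage, state='a stiff', action='voom', type='Norwegian Blue'):
    print("-- This parrot wouldn't", action, end=' ')
    print("if you put", voltage, "volts through it.")
    print("-- Lovely plumage, the", type)
    print("-- It's", state, "!")

parrot(1000)
```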
In a function call, keyword arguments must follow positional arguments. All the keyword
arguments passed must match one of the arguments accepted by the function (e.g. actor is not a
valid argument for the parrot function), and their order is not important. This also includes
non-optional arguments (e.g. parrot(voltage=1000) is valid too). No argument may
receive a value more than once. Here’s an example that fails due to this restriction:
>>>
When a final formal parameter of the form **name is present, it receives a dictionary
(see Mapping Types — dict) containing all keyword arguments except for those corresponding
to a formal parameter. This may be combined with a formal parameter of the
form *name (described in the next subsection) which receives a tuple containing the positional
arguments beyond the formal parameter list. (*name must occur before **name.) For example,
if we define a function like this:
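A sketch of such a definition, combining *name and **name (names and messages are illustrative):

```python
def cheeseshop(kind, *arguments, **keywords):
    print("-- Do you have any", kind, "?")
    print("-- I'm sorry, we're all out of", kind)
    for arg in arguments:           # extra positional arguments, as a tuple
        print(arg)
    print("-" * 40)
    for kw in keywords:             # extra keyword arguments, as a dict
        print(kw, ":", keywords[kw])

cheeseshop("Limburger", "It's very runny, sir.",
           shopkeeper="Michael Palin", client="John Cleese")
```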
Note that the order in which the keyword arguments are printed is guaranteed to match the order
in which they were provided in the function call.
>>>
>>>
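The definitions used by the calls below might be sketched as follows (a `/` marks the end of positional-only parameters, a `*` the start of keyword-only ones):

```python
def standard_arg(arg):                # no restriction on the calling convention
    print(arg)

def pos_only_arg(arg, /):             # arg may only be passed positionally
    print(arg)

def kwd_only_arg(*, arg):             # arg may only be passed by keyword
    print(arg)

def combined_example(pos_only, /, standard, *, kwd_only):
    print(pos_only, standard, kwd_only)
```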
>>> standard_arg(2)
2
>>> standard_arg(arg=2)
2
The second function pos_only_arg is restricted to only use positional parameters as there is
a / in the function definition:
>>>
>>> pos_only_arg(1)
1
>>> pos_only_arg(arg=1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: pos_only_arg() got some positional-only arguments passed
as keyword arguments: 'arg'
The third function kwd_only_args only allows keyword arguments as indicated by a * in the
function definition:
>>>
>>> kwd_only_arg(3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: kwd_only_arg() takes 0 positional arguments but 1 was
given
>>> kwd_only_arg(arg=3)
3
And the last uses all three calling conventions in the same function definition:
>>>
>>> combined_example(1, 2, 3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: combined_example() takes 2 positional arguments but 3
were given
Finally, consider this function definition which has a potential collision between the positional
argument name and **kwds which has name as a key:
There is no possible call that will make it return True as the keyword 'name' will always bind
to the first parameter. For example:
>>>
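A sketch of the failing call (the function name is illustrative):

```python
def foo(name, **kwds):
    return 'name' in kwds

try:
    foo(1, **{'name': 2})
except TypeError as exc:
    print(exc)  # foo() got multiple values for argument 'name'
```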
But using / (positional only arguments), it is possible since it allows name as a positional
argument and 'name' as a key in the keyword arguments:
>>>
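With a `/` in the definition the collision disappears:

```python
def foo(name, /, **kwds):
    return 'name' in kwds  # name is positional-only, so 'name' may appear in kwds

print(foo(1, **{'name': 2}))  # True
```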
In other words, the names of positional-only parameters can be used in **kwds without
ambiguity.
4.8.3.5. Recap
The use case will determine which parameters to use in the function definition:
As guidance:
Use positional-only if you want the name of the parameters to not be available to the
user. This is useful when parameter names have no real meaning, if you want to enforce
the order of the arguments when the function is called or if you need to take some
positional parameters and arbitrary keywords.
Use keyword-only when names have meaning and the function definition is more
understandable by being explicit with names or you want to prevent users relying on the
position of the argument being passed.
For an API, use positional-only to prevent breaking API changes if the parameter’s name
is modified in the future.
4.8.4. Arbitrary Argument Lists
Finally, the least frequently used option is to specify that a function can be called with an
arbitrary number of arguments. These arguments will be wrapped up in a tuple (see Tuples and
Sequences). Before the variable number of arguments, zero or more normal arguments may
occur.
Normally, these variadic arguments will be last in the list of formal parameters, because they
scoop up all remaining input arguments that are passed to the function. Any formal parameters
which occur after the *args parameter are ‘keyword-only’ arguments, meaning that they can
only be used as keywords rather than positional arguments.
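A sketch of a variadic function with a keyword-only parameter after *args:

```python
def concat(*args, sep="/"):
    # sep comes after *args, so it can only be passed by keyword
    return sep.join(args)

print(concat("earth", "mars", "venus"))           # earth/mars/venus
print(concat("earth", "mars", "venus", sep="."))  # earth.mars.venus
```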
>>>
>>>
In the same fashion, dictionaries can deliver keyword arguments with the **-operator:
>>>
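For example (the function and messages are illustrative):

```python
def parrot(voltage, state='a stiff', action='voom'):
    print("-- This parrot wouldn't", action, end=' ')
    print("if you put", voltage, "volts through it.", end=' ')
    print("E's", state, "!")

d = {"voltage": "four million", "state": "bleedin' demised", "action": "VOOM"}
parrot(**d)  # each key/value pair becomes a keyword argument
```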
>>>
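The example referred to below, a function that builds and returns another function, can be sketched as:

```python
def make_incrementor(n):
    return lambda x: x + n  # the lambda closes over n

f = make_incrementor(42)
print(f(0))  # 42
print(f(1))  # 43
```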
The above example uses a lambda expression to return a function. Another use is to pass a small
function as an argument:
>>>
>>> pairs = [(1, 'one'), (2, 'two'), (3, 'three'), (4, 'four')]
>>> pairs.sort(key=lambda pair: pair[1])
>>> pairs
[(4, 'four'), (1, 'one'), (3, 'three'), (2, 'two')]
The first line should always be a short, concise summary of the object’s purpose. For brevity, it
should not explicitly state the object’s name or type, since these are available by other means
(except if the name happens to be a verb describing a function’s operation). This line should
begin with a capital letter and end with a period.
If there are more lines in the documentation string, the second line should be blank, visually
separating the summary from the rest of the description. The following lines should be one or
more paragraphs describing the object’s calling conventions, its side effects, etc.
The Python parser does not strip indentation from multi-line string literals, so tools
that process documentation have to strip indentation if desired. This is done using the following
convention. The first non-blank line after the first line of the string determines the amount of
indentation for the entire documentation string. (We can’t use the first line since it is generally
adjacent to the string’s opening quotes so its indentation is not apparent in the string literal.)
Whitespace “equivalent” to this indentation is then stripped from the start of all lines of the
string. Lines that are indented less should not occur, but if they occur all their leading whitespace
should be stripped. Equivalence of whitespace should be tested after expansion of tabs (to 8
spaces, normally).
>>>
Annotations are stored in the __annotations__ attribute of the function as a dictionary and
have no effect on any other part of the function. Parameter annotations are defined by a colon
after the parameter name, followed by an expression evaluating to the value of the annotation.
Return annotations are defined by a literal ->, followed by an expression, between the parameter
list and the colon denoting the end of the def statement. The following example has a required
argument, an optional argument, and the return value annotated:
>>>
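A sketch of such an annotated function (names are illustrative):

```python
def f(ham: str, eggs: str = 'eggs') -> str:
    print("Annotations:", f.__annotations__)
    return ham + ' and ' + eggs

print(f('spam'))  # spam and eggs
```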
For Python, PEP 8 has emerged as the style guide that most projects adhere to; it promotes a
very readable and eye-pleasing coding style. Every Python developer should read it at some
point; here are the most important points extracted for you:
Use 4-space indentation, and no tabs. 4 spaces are a good compromise between small
indentation (allows greater nesting depth) and large indentation (easier to read). Tabs
introduce confusion, and are best left out.
Wrap lines so that they don't exceed 79 characters. This helps users with small displays
and makes it possible to have several code files side-by-side on larger displays.
Use blank lines to separate functions and classes, and larger blocks of code inside
functions.
When possible, put comments on a line of their own.
Use docstrings.
Use spaces around operators and after commas, but not directly inside bracketing
constructs: a = f(1, 2) + g(3, 4).
Name your classes and functions consistently; the convention is to
use UpperCamelCase for classes and lowercase_with_underscores for
functions and methods. Always use self as the name for the first method argument
(see A First Look at Classes for more on classes and methods).
Don’t use fancy encodings if your code is meant to be used in international environments.
Python’s default, UTF-8, or even plain ASCII work best in any case.
Likewise, don’t use non-ASCII characters in identifiers if there is only the slightest
chance people speaking a different language will read or maintain the code.
Footnotes
[1]
Actually, call by object reference would be a better description, since if a mutable object is
passed, the caller will see any changes the callee makes to it (items inserted into a list).
5. Data Structures
This chapter describes some things you’ve learned about already in more
detail, and adds some new things as well.
list.append(x)
Add an item to the end of the list. Equivalent to a[len(a):] = [x].
list.extend(iterable)
Extend the list by appending all the items from the iterable. Equivalent to a[len(a):] = iterable.
list.insert(i, x)
Insert an item at a given position. The first argument is the index of the element before which to insert, so a.insert(0, x) inserts at the front of the list, and a.insert(len(a), x) is equivalent to a.append(x).
list.remove(x)
Remove the first item from the list whose value is equal to x. It raises a ValueError if there is no such item.
list.pop([i])
Remove the item at the given position in the list, and return it. If no index is specified, a.pop() removes and returns the last item in the list. It raises an IndexError if the list is empty or the index is outside the list range.
list.clear()
Remove all items from the list. Equivalent to del a[:].
list.index(x[, start[, end]])
Return zero-based index in the list of the first item whose value is equal to x. Raises a ValueError if there is no such item. The optional arguments start and end are interpreted as in the slice notation and are used to limit the search to a particular subsequence of the list. The returned index is computed relative to the beginning of the full sequence rather than the start argument.
list.count(x)
Return the number of times x appears in the list.
list.sort(*, key=None, reverse=False)
Sort the items of the list in place (the arguments can be used for sort customization, see sorted() for their explanation).
list.reverse()
Reverse the elements of the list in place.
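The methods above can be exercised together like this (a small sketch; the sample data is arbitrary, chosen only for illustration):

```python
# A quick tour of the list methods described above.
fruits = ['orange', 'apple', 'pear', 'banana', 'kiwi', 'apple', 'banana']
print(fruits.count('apple'))      # how many times 'apple' occurs -> 2
print(fruits.index('banana'))     # first 'banana' -> index 3
print(fruits.index('banana', 4))  # next 'banana', searching from index 4 -> 6
fruits.reverse()                  # reverses in place; returns None
print(fruits)
fruits.append('grape')            # add to the end
fruits.sort()                     # sorts in place, ascending
print(fruits)
print(fruits.pop())               # removes and returns the last item -> 'pear'
```

Note that methods like sort(), reverse() and append() modify the list in place and return None rather than a new list.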
To implement a queue, use collections.deque which was designed to have fast appends and pops from both ends. For example:
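A minimal sketch of such a queue (the sample names are arbitrary):

```python
from collections import deque

# A FIFO queue: append at the right, pop from the left.
queue = deque(["Eric", "John", "Michael"])
queue.append("Terry")      # Terry arrives
queue.append("Graham")     # Graham arrives
first = queue.popleft()    # the first to arrive now leaves
second = queue.popleft()   # the second to arrive now leaves
print(first, second)       # Eric John
print(queue)               # deque(['Michael', 'Terry', 'Graham'])
```

Both append() and popleft() run in roughly constant time, whereas inserting or popping at the front of a plain list requires shifting every element.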
5.1.3. List Comprehensions
List comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.
For example, assume we want to create a list of squares, like:

>>> squares = []
>>> for x in range(10):
...     squares.append(x**2)
...
>>> squares
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

We can obtain the same result with:

squares = list(map(lambda x: x**2, range(10)))

or, equivalently:

squares = [x**2 for x in range(10)]

which is more concise and readable.
>>> combs = []
>>> for x in [1, 2, 3]:
...     for y in [3, 1, 4]:
...         if x != y:
...             combs.append((x, y))
...
>>> combs
[(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]

This is equivalent to the list comprehension:

combs = [(x, y) for x in [1, 2, 3] for y in [3, 1, 4] if x != y]

Note how the order of the for and if statements is the same in both these snippets.
Consider the following example of a 3x4 matrix implemented as a list of 3 lists of length 4:

>>> matrix = [
...     [1, 2, 3, 4],
...     [5, 6, 7, 8],
...     [9, 10, 11, 12],
... ]

The following list comprehension will transpose rows and columns:

>>> [[row[i] for row in matrix] for i in range(4)]
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]

which is the same as:

>>> transposed = []
>>> for i in range(4):
...     transposed.append([row[i] for row in matrix])
...
>>> transposed
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]

which, in turn, is the same as:

>>> transposed = []
>>> for i in range(4):
...     # the following 3 lines implement the nested listcomp
...     transposed_row = []
...     for row in matrix:
...         transposed_row.append(row[i])
...     transposed.append(transposed_row)
...
>>> transposed
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]

In the real world, you should prefer built-in functions to complex flow statements. The zip() function would do a great job for this use case:

>>> list(zip(*matrix))
[(1, 5, 9), (2, 6, 10), (3, 7, 11), (4, 8, 12)]
5.2. The del statement
There is a way to remove an item from a list given its index instead of its value: the del statement. It can also be used to remove slices from a list or clear the entire list:

>>> a = [-1, 1, 66.25, 333, 333, 1234.5]
>>> del a[0]
>>> a
[1, 66.25, 333, 333, 1234.5]
>>> del a[2:4]
>>> a
[1, 66.25, 1234.5]
>>> del a[:]
>>> a
[]

del can also be used to delete entire variables:

>>> del a

Referencing the name a hereafter is an error (at least until another value is assigned to it).
5.3. Tuples and Sequences
Empty tuples are constructed by an empty pair of parentheses; a tuple with one item is constructed by following a value with a comma (it is not sufficient to enclose a single value in parentheses). Ugly, but effective. For example:

>>> empty = ()
>>> singleton = 'hello',    # <-- note trailing comma
>>> len(empty)
0
>>> len(singleton)
1
>>> singleton
('hello',)

The statement t = 12345, 54321, 'hello!' is an example of tuple packing: the values 12345, 54321 and 'hello!' are packed together in a tuple. The reverse operation is also possible:

>>> x, y, z = t

This is called, appropriately enough, sequence unpacking and works for any sequence on the right-hand side. Sequence unpacking requires that there are as many variables on the left side of the equals sign as there are elements in the sequence.
5.4. Sets
Python also includes a data type for sets. A set is an unordered collection with no duplicate elements. Basic uses include membership testing and eliminating duplicate entries. Set objects also support mathematical operations like union, intersection, difference, and symmetric difference.

Similarly to list comprehensions, set comprehensions are also supported:

>>> a = {x for x in 'abracadabra' if x not in 'abc'}
>>> a
{'r', 'd'}
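The mathematical operations mentioned above can be sketched like this (the two sample words are arbitrary; sorted() is used only to make the output order deterministic, since sets are unordered):

```python
a = set('abracadabra')   # unique letters in the first word
b = set('alacazam')      # unique letters in the second word
print(sorted(a))         # ['a', 'b', 'c', 'd', 'r']
print(sorted(a - b))     # difference: letters in a but not in b
print(sorted(a | b))     # union: letters in a or b or both
print(sorted(a & b))     # intersection: letters in both a and b
print(sorted(a ^ b))     # symmetric difference: in a or b but not both
```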
5.5. Dictionaries
Another useful data type built into Python is the dictionary (see Mapping Types — dict). Dictionaries are sometimes found in other languages as "associative memories" or "associative arrays". Unlike sequences, which are indexed by a range of numbers, dictionaries are indexed by keys, which can be any immutable type; strings and numbers can always be keys. Tuples can be used as keys if they contain only strings, numbers, or tuples; if a tuple contains any mutable object either directly or indirectly, it cannot be used as a key. You can't use lists as keys, since lists can be modified in place using index assignments, slice assignments, or methods like append() and extend().
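A small sketch of basic dictionary operations (the names and numbers are arbitrary sample data):

```python
# A tiny phone book: store, look up, delete, and test membership by key.
tel = {'jack': 4098, 'sape': 4139}
tel['guido'] = 4127            # store a new key/value pair
print(tel['jack'])             # look up by key -> 4098
del tel['sape']                # delete a key/value pair
tel['irv'] = 4127
print(sorted(tel))             # the keys, in sorted order
print('guido' in tel)          # membership test on keys -> True
print('jack' not in tel)       # -> False
```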
5.6. Looping Techniques
When looping through dictionaries, the key and corresponding value can be retrieved at the same time using the items() method.

>>> knights = {'gallahad': 'the pure', 'robin': 'the brave'}
>>> for k, v in knights.items():
...     print(k, v)
...
gallahad the pure
robin the brave

When looping through a sequence, the position index and corresponding value can be retrieved at the same time using the enumerate() function.

>>> for i, v in enumerate(['tic', 'tac', 'toe']):
...     print(i, v)
...
0 tic
1 tac
2 toe
5.7. More on Conditions
The comparison operators in and not in are membership tests that determine whether a value is in (or not in) a container. The operators is and is not compare whether two objects are really the same object. All comparison operators have the same priority, which is lower than that of all numerical operators.

The Boolean operators and and or are so-called short-circuit operators: their arguments are evaluated from left to right, and evaluation stops as soon as the outcome is determined. For example, if A and C are true but B is false, A and B and C does not evaluate the expression C. When used as a general value and not as a Boolean, the return value of a short-circuit operator is the last evaluated argument.
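The "last evaluated argument" behaviour can be used to pick the first non-empty value in a chain (a small sketch; the variable names and strings are arbitrary):

```python
# `or` stops at the first true operand and returns it unchanged,
# so it can select the first non-empty string.
string1, string2, string3 = '', 'Trondheim', 'Hammer Dance'
non_null = string1 or string2 or string3
print(non_null)   # 'Trondheim' -- evaluation stopped at the first true value
```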
5.8. Comparing Sequences and Other Types
Sequence objects typically may be compared to other objects with the same sequence type. The comparison uses lexicographical ordering: first the first two items are compared, and if they differ this determines the outcome of the comparison; if they are equal, the next two items are compared, and so on, until either sequence is exhausted. If two items to be compared are themselves sequences of the same type, the lexicographical comparison is carried out recursively. If all items of two sequences compare equal, the sequences are considered equal. If one sequence is an initial sub-sequence of the other, the shorter sequence is the smaller (lesser) one. Lexicographical ordering for strings uses the Unicode code point number to order individual characters. Some examples of comparisons between sequences of the same type:
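For instance, each of the following comparisons evaluates to True under the rules above (the operands are arbitrary sample sequences):

```python
print((1, 2, 3) < (1, 2, 4))                      # third items decide
print([1, 2, 3] < [1, 2, 4])                      # same rule for lists
print('ABC' < 'C' < 'Pascal' < 'Python')          # code-point ordering
print((1, 2, 3, 4) < (1, 2, 4))                   # 3 < 4 decides before length
print((1, 2) < (1, 2, -1))                        # shorter prefix is lesser
print((1, 2, 3) == (1.0, 2.0, 3.0))               # mixed numeric types compare equal
print((1, 2, ('aa', 'ab')) < (1, 2, ('abc', 'a'), 4))  # recursive comparison
```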
6. Modules
If you quit from the Python interpreter and enter it again, the definitions you
have made (functions and variables) are lost. Therefore, if you want to write
a somewhat longer program, you are better off using a text editor to prepare
the input for the interpreter and running it with that file as input instead. This
is known as creating a script. As your program gets longer, you may want to
split it into several files for easier maintenance. You may also want to use a
handy function that you’ve written in several programs without copying its
definition into each program.
To support this, Python has a way to put definitions in a file and use them in
a script or in an interactive instance of the interpreter. Such a file is called
a module; definitions from a module can be imported into other modules or
into the main module (the collection of variables that you have access to in a
script executed at the top level and in calculator mode).
For example, use your favorite text editor to create a file called fibo.py in the current directory with the following contents:

# Fibonacci numbers module

def fib(n):    # write Fibonacci series up to n
    a, b = 0, 1
    while a < n:
        print(a, end=' ')
        a, b = b, a+b
    print()

def fib2(n):   # return Fibonacci series up to n
    result = []
    a, b = 0, 1
    while a < n:
        result.append(a)
        a, b = b, a+b
    return result

Now enter the Python interpreter and import this module with the following command:

>>> import fibo
This does not add the names of the functions defined in fibo directly to the
current namespace (see Python Scopes and Namespaces for more details); it
only adds the module name fibo there. Using the module name you can
access the functions:
>>> fibo.fib(1000)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987
>>> fibo.fib2(100)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
>>> fibo.__name__
'fibo'
If you intend to use a function often you can assign it to a local name:

>>> fib = fibo.fib
>>> fib(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377
Each module has its own private namespace, which is used as the global namespace by all
functions defined in the module. Thus, the author of a module can use global variables in the
module without worrying about accidental clashes with a user’s global variables. On the other
hand, if you know what you are doing you can touch a module’s global variables with the same
notation used to refer to its functions, modname.itemname.
Modules can import other modules. It is customary but not required to place
all import statements at the beginning of a module (or script, for that matter). The imported
module names, if placed at the top level of a module (outside any functions or classes), are added
to the module’s global namespace.
There is a variant of the import statement that imports names from a module directly into the
importing module’s namespace. For example:
>>> from fibo import fib, fib2
>>> fib(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377
This does not introduce the module name from which the imports are taken in the local
namespace (so in the example, fibo is not defined).
There is even a variant to import all names that a module defines:

>>> from fibo import *
>>> fib(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377
This imports all names except those beginning with an underscore (_). In most cases Python
programmers do not use this facility since it introduces an unknown set of names into the
interpreter, possibly hiding some things you have already defined.
Note that in general the practice of importing * from a module or package is frowned upon, since
it often causes poorly readable code. However, it is okay to use it to save typing in interactive
sessions.
If the module name is followed by as, then the name following as is bound directly to the
imported module.
>>> import fibo as fib
>>> fib.fib(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377
This is effectively importing the module in the same way that import fibo will do, with the
only difference of it being available as fib.
It can also be used when utilising from with similar effects:

>>> from fibo import fib as fibonacci
>>> fibonacci(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377
For efficiency reasons, each module is only imported once per interpreter session. Therefore, if
you change your modules, you must restart the interpreter – or, if it’s just one module you want
to test interactively, use importlib.reload(),
e.g. import importlib; importlib.reload(modulename).
6.1.1. Executing modules as scripts
When you run a Python module with

python fibo.py <arguments>

the code in the module will be executed, just as if you imported it, but with the __name__ set
to "__main__". That means that by adding this code at the end of your module:
if __name__ == "__main__":
    import sys
    fib(int(sys.argv[1]))
you can make the file usable as a script as well as an importable module, because the code that
parses the command line only runs if the module is executed as the “main” file:
$ python fibo.py 50
0 1 1 2 3 5 8 13 21 34
If the module is imported, the code is not run:

>>> import fibo
>>>
This is often used either to provide a convenient user interface to a module, or for testing
purposes (running the module as a script executes a test suite).
6.1.2. The Module Search Path
When a module named spam is imported, the interpreter first searches for a built-in module with that name. If not found, it then searches for a file named spam.py in a list of directories given by the variable sys.path. sys.path is initialized from these locations:

- The directory containing the input script (or the current directory when no file is specified).
- PYTHONPATH (a list of directory names, with the same syntax as the shell variable PATH).
- The installation-dependent default (by convention including a site-packages directory, handled by the site module).
More details are at The initialization of the sys.path module search path.
Note
On file systems which support symlinks, the directory containing the input script is calculated
after the symlink is followed. In other words the directory containing the symlink is not added to
the module search path.
After initialization, Python programs can modify sys.path. The directory containing the script
being run is placed at the beginning of the search path, ahead of the standard library path. This
means that scripts in that directory will be loaded instead of modules of the same name in the
library directory. This is an error unless the replacement is intended. See section Standard
Modules for more information.
6.1.3. “Compiled” Python files
To speed up loading modules, Python caches the compiled version of each module in
the __pycache__ directory under the name module.version.pyc, where the version
encodes the format of the compiled file; it generally contains the Python version number. For
example, in CPython release 3.3 the compiled version of spam.py would be cached
as __pycache__/spam.cpython-33.pyc. This naming convention allows compiled
modules from different releases and different versions of Python to coexist.
Python checks the modification date of the source against the compiled version to see if it’s out
of date and needs to be recompiled. This is a completely automatic process. Also, the compiled
modules are platform-independent, so the same library can be shared among systems with
different architectures.
Python does not check the cache in two circumstances. First, it always recompiles and does not
store the result for the module that’s loaded directly from the command line. Second, it does not
check the cache if there is no source module. To support a non-source (compiled only)
distribution, the compiled module must be in the source directory, and there must not be a source
module.
Some tips for experts:

- You can use the -O or -OO switches on the Python command to reduce the size of a compiled module. The -O switch removes assert statements, the -OO switch removes both assert statements and __doc__ strings. Since some programs may rely on having these available, you should only use this option if you know what you're doing. "Optimized" modules have an opt- tag and are usually smaller. Future releases may change the effects of optimization.
- A program doesn't run any faster when it is read from a .pyc file than when it is read from a .py file; the only thing that's faster about .pyc files is the speed with which they are loaded.
- The module compileall can create .pyc files for all modules in a directory.
- There is more detail on this process, including a flow chart of the decisions, in PEP 3147.
6.2. Standard Modules
Python comes with a library of standard modules, described in a separate document, the Python
Library Reference (“Library Reference” hereafter). Some modules are built into the interpreter;
these provide access to operations that are not part of the core of the language but are
nevertheless built in, either for efficiency or to provide access to operating system primitives
such as system calls. The set of such modules is a configuration option which also depends on
the underlying platform. For example, the winreg module is only provided on Windows
systems. One particular module deserves some attention: sys, which is built into every Python
interpreter. The variables sys.ps1 and sys.ps2 define the strings used as primary and
secondary prompts:
>>> import sys
>>> sys.ps1
'>>> '
>>> sys.ps2
'... '
>>> sys.ps1 = 'C> '
C> print('Yuck!')
Yuck!
C>
These two variables are only defined if the interpreter is in interactive mode.
The variable sys.path is a list of strings that determines the interpreter’s search path for
modules. It is initialized to a default path taken from the environment variable PYTHONPATH, or
from a built-in default if PYTHONPATH is not set. You can modify it using standard list
operations:
>>> import sys
>>> sys.path.append('/ufs/guido/lib/python')
6.3. The dir() Function
The built-in function dir() is used to find out which names a module defines. It returns a sorted list of strings. Without arguments, dir() lists the names you have defined currently:
>>> a = [1, 2, 3, 4, 5]
>>> import fibo
>>> fib = fibo.fib
>>> dir()
['__builtins__', '__name__', 'a', 'fib', 'fibo', 'sys']
Note that it lists all types of names: variables, modules, functions, etc.
dir() does not list the names of built-in functions and variables. If you want a list of those, they
are defined in the standard module builtins:
>>> import builtins
>>> dir(builtins)
6.4. Packages
Packages are a way of structuring Python’s module namespace by using “dotted module names”.
For example, the module name A.B designates a submodule named B in a package named A. Just
like the use of modules saves the authors of different modules from having to worry about each
other’s global variable names, the use of dotted module names saves the authors of multi-module
packages like NumPy or Pillow from having to worry about each other’s module names.
Suppose you want to design a collection of modules (a “package”) for the uniform handling of
sound files and sound data. There are many different sound file formats (usually recognized by
their extension, for example: .wav, .aiff, .au), so you may need to create and maintain a
growing collection of modules for the conversion between the various file formats. There are
also many different operations you might want to perform on sound data (such as mixing, adding
echo, applying an equalizer function, creating an artificial stereo effect), so in addition you will
be writing a never-ending stream of modules to perform these operations. Here’s a possible
structure for your package (expressed in terms of a hierarchical filesystem):
sound/                      Top-level package
      __init__.py           Initialize the sound package
      formats/              Subpackage for file format conversions
              __init__.py
              wavread.py
              wavwrite.py
              ...
      effects/              Subpackage for sound effects
              __init__.py
              echo.py
              surround.py
              reverse.py
              ...
      filters/              Subpackage for filters
              __init__.py
              equalizer.py
              ...
When importing the package, Python searches through the directories on sys.path looking for
the package subdirectory.
The __init__.py files are required to make Python treat directories containing the file as
packages (unless using a namespace package, a relatively advanced feature). This prevents
directories with a common name, such as string, from unintentionally hiding valid modules
that occur later on the module search path. In the simplest case, __init__.py can just be an
empty file, but it can also execute initialization code for the package or set
the __all__ variable, described later.
Users of the package can import individual modules from the package, for example:
import sound.effects.echo
This loads the submodule sound.effects.echo. It must be referenced with its full name.
An alternative way of importing the submodule is:

from sound.effects import echo

This also loads the submodule echo, and makes it available without its package prefix, so it can be used as follows:

echo.echofilter(input, output, delay=0.7, atten=4)

Yet another variation is to import the desired function or variable directly:

from sound.effects.echo import echofilter

Again, this loads the submodule echo, but this makes its function echofilter() directly available:

echofilter(input, output, delay=0.7, atten=4)
Note that when using from package import item, the item can be either a submodule (or
subpackage) of the package, or some other name defined in the package, like a function, class or
variable. The import statement first tests whether the item is defined in the package; if not, it
assumes it is a module and attempts to load it. If it fails to find it, an ImportError exception is
raised.
Contrarily, when using syntax like import item.subitem.subsubitem, each item except
for the last must be a package; the last item can be a module or a package but can’t be a class or
function or variable defined in the previous item.
6.4.1. Importing * From a Package
What happens when the user writes from sound.effects import *? Ideally, one would hope that this somehow goes out to the filesystem, finds which submodules are present in the package, and imports them all. This could take a long time, and importing sub-modules might have unwanted side effects that should only happen when the sub-module is explicitly imported. The only solution is for the package author to provide an explicit index of the package.
The import statement uses the following convention: if a package’s __init__.py code
defines a list named __all__, it is taken to be the list of module names that should be imported
when from package import * is encountered. It is up to the package author to keep this
list up-to-date when a new version of the package is released. Package authors may also decide
not to support it, if they don’t see a use for importing * from their package. For example, the
file sound/effects/__init__.py could contain the following code:
__all__ = ["echo", "surround", "reverse"]
This would mean that from sound.effects import * would import the three named
submodules of the sound.effects package.
Be aware that submodules might become shadowed by locally defined names. For example, if
you added a reverse function to the sound/effects/__init__.py file,
the from sound.effects import * would only import the two
submodules echo and surround, but not the reverse submodule, because it is shadowed by
the locally defined reverse function:
__all__ = [
"echo", # refers to the 'echo.py' file
"surround", # refers to the 'surround.py' file
"reverse", # !!! refers to the 'reverse' function now !!!
]
If __all__ is not defined, the statement from sound.effects import * does not import
all submodules from the package sound.effects into the current namespace; it only ensures
that the package sound.effects has been imported (possibly running any initialization code
in __init__.py) and then imports whatever names are defined in the package. This includes
any names defined (and submodules explicitly loaded) by __init__.py. It also includes any
submodules of the package that were explicitly loaded by previous import statements. Consider
this code:
import sound.effects.echo
import sound.effects.surround
from sound.effects import *
In this example, the echo and surround modules are imported in the current namespace
because they are defined in the sound.effects package when
the from...import statement is executed. (This also works when __all__ is defined.)
Although certain modules are designed to export only names that follow certain patterns when
you use import *, it is still considered bad practice in production code.
When packages are structured into subpackages (as with the sound package in the example), you can use absolute imports to refer to submodules of sibling packages. You can also write relative imports, with the from module import name form of import statement. These imports use leading dots to indicate the current and parent packages involved in the relative import. From the surround module for example, you might use:

from . import echo
from .. import formats
from ..filters import equalizer

Note that relative imports are based on the name of the current module. Since the name of the main module is always "__main__", modules intended for use as the main module of a Python application must always use absolute imports.

Packages support one more special attribute, __path__, which is initialized to a list containing the name of the directory holding the package's __init__.py before the code in that file is executed. While this feature is not often needed, it can be used to extend the set of modules found in a package.
Footnotes
[1]
In fact function definitions are also ‘statements’ that are ‘executed’; the execution of a
module-level function definition adds the function name to the module’s global namespace.
7. Input and Output
7.1. Fancier Output Formatting
Often you’ll want more control over the formatting of your output than simply printing space-
separated values. There are several ways to format output.
To use formatted string literals, begin a string with f or F before the opening quotation
mark or triple quotation mark. Inside this string, you can write a Python expression
between { and } characters that can refer to variables or literal values.
>>> year = 2016
>>> event = 'Referendum'
>>> f'Results of the {year} {event}'
'Results of the 2016 Referendum'
The str.format() method of strings requires more manual effort. You’ll still
use { and } to mark where a variable will be substituted and can provide detailed
formatting directives, but you’ll also need to provide the information to be formatted. In
the following code block there are two examples of how to format variables:
>>> yes_votes = 42_572_654
>>> total_votes = 85_705_149
>>> percentage = yes_votes / total_votes
>>> '{:-9} YES votes  {:2.2%}'.format(yes_votes, percentage)
' 42572654 YES votes  49.67%'
Notice how the yes_votes are padded with spaces and a negative sign only for
negative numbers. The example also prints percentage multiplied by 100, with 2
decimal places and followed by a percent sign (see Format Specification Mini-
Language for details).
Finally, you can do all the string handling yourself by using string slicing and
concatenation operations to create any layout you can imagine. The string type has some
methods that perform useful operations for padding strings to a given column width.
When you don’t need fancy output but just want a quick display of some variables for debugging
purposes, you can convert any value to a string with the repr() or str() functions.
The str() function is meant to return representations of values which are fairly human-
readable, while repr() is meant to generate representations which can be read by the
interpreter (or will force a SyntaxError if there is no equivalent syntax). For objects which
don’t have a particular representation for human consumption, str() will return the same value
as repr(). Many values, such as numbers or structures like lists and dictionaries, have the same
representation using either function. Strings, in particular, have two distinct representations.
Some examples:
>>> s = 'Hello, world.'
>>> str(s)
'Hello, world.'
>>> repr(s)
"'Hello, world.'"
>>> str(1/7)
'0.14285714285714285'
The string module contains a Template class that offers yet another way to substitute values
into strings, using placeholders like $x and replacing them with values from a dictionary, but
offers much less control of the formatting.
An optional format specifier can follow the expression. This allows greater control over how the
value is formatted. The following example rounds pi to three places after the decimal:
>>> import math
>>> print(f'The value of pi is approximately {math.pi:.3f}.')
The value of pi is approximately 3.142.
Passing an integer after the ':' will cause that field to be a minimum number of characters
wide. This is useful for making columns line up.
>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 7678}
>>> for name, phone in table.items():
...     print(f'{name:10} ==> {phone:10d}')
...
Sjoerd     ==>       4127
Jack       ==>       4098
Dcab       ==>       7678
Other modifiers can be used to convert the value before it is formatted. '!a' applies ascii(), '!s' applies str(), and '!r' applies repr():

>>> animals = 'eels'
>>> print(f'My hovercraft is full of {animals}.')
My hovercraft is full of eels.
>>> print(f'My hovercraft is full of {animals!r}.')
My hovercraft is full of 'eels'.
The = specifier can be used to expand an expression to the text of the expression, an equal sign,
then the representation of the evaluated expression:
>>> bugs = 'roaches'
>>> count = 13
>>> area = 'living room'
>>> print(f'Debugging {bugs=} {count=} {area=}')
Debugging bugs='roaches' count=13 area='living room'
See self-documenting expressions for more information on the = specifier. For a reference on
these format specifications, see the reference guide for the Format Specification Mini-Language.
7.1.2. The String format() Method
Basic usage of the str.format() method looks like this:
>>> print('We are the {} who say "{}!"'.format('knights', 'Ni'))
We are the knights who say "Ni!"
The brackets and characters within them (called format fields) are replaced with the objects
passed into the str.format() method. A number in the brackets can be used to refer to the
position of the object passed into the str.format() method.
>>> print('{0} and {1}'.format('spam', 'eggs'))
spam and eggs
>>> print('{1} and {0}'.format('spam', 'eggs'))
eggs and spam
If keyword arguments are used in the str.format() method, their values are referred to by
using the name of the argument.
>>> print('This {food} is {adjective}.'.format(food='spam', adjective='absolutely horrible'))
This spam is absolutely horrible.

Positional and keyword arguments can be arbitrarily combined:

>>> print('The story of {0}, {1}, and {other}.'.format('Bill', 'Manfred', other='Georg'))
The story of Bill, Manfred, and Georg.
If you have a really long format string that you don’t want to split up, it would be nice if you
could reference the variables to be formatted by name instead of by position. This can be done by
simply passing the dict and using square brackets '[]' to access the keys.
>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678}
>>> print('Jack: {0[Jack]:d}; Sjoerd: {0[Sjoerd]:d}; Dcab: {0[Dcab]:d}'.format(table))
Jack: 4098; Sjoerd: 4127; Dcab: 8637678
This could also be done by passing the table dictionary as keyword arguments with
the ** notation.
>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678}
>>> print('Jack: {Jack:d}; Sjoerd: {Sjoerd:d}; Dcab: {Dcab:d}'.format(**table))
Jack: 4098; Sjoerd: 4127; Dcab: 8637678
This is particularly useful in combination with the built-in function vars(), which returns a
dictionary containing all local variables:
As an example, the following lines produce a tidily aligned set of columns giving integers and
their squares and cubes:
>>> for x in range(1, 11):
...     print('{0:2d} {1:3d} {2:4d}'.format(x, x*x, x*x*x))
...
 1   1    1
 2   4    8
 3   9   27
 4  16   64
 5  25  125
 6  36  216
 7  49  343
 8  64  512
 9  81  729
10 100 1000
For a complete overview of string formatting with str.format(), see Format String Syntax.
7.1.3. Manual String Formatting
Here's the same table of squares and cubes, formatted manually:

>>> for x in range(1, 11):
...     print(repr(x).rjust(2), repr(x*x).rjust(3), end=' ')
...     print(repr(x*x*x).rjust(4))
...
 1   1    1
 2   4    8
 3   9   27
 4  16   64
 5  25  125
 6  36  216
 7  49  343
 8  64  512
 9  81  729
10 100 1000
(Note that the one space between each column was added by the way print() works: it always
adds spaces between its arguments.)
The str.rjust() method of string objects right-justifies a string in a field of a given width by
padding it with spaces on the left. There are similar
methods str.ljust() and str.center(). These methods do not write anything, they just
return a new string. If the input string is too long, they don’t truncate it, but return it unchanged;
this will mess up your column lay-out but that’s usually better than the alternative, which would
be lying about a value. (If you really want truncation you can always add a slice operation, as
in x.ljust(n)[:n].)
There is another method, str.zfill(), which pads a numeric string on the left with zeros. It
understands about plus and minus signs:
>>> '12'.zfill(5)
'00012'
>>> '-3.14'.zfill(7)
'-003.14'
>>> '3.14159265359'.zfill(5)
'3.14159265359'
7.2. Reading and Writing Files
open() returns a file object, and is most commonly used with two positional arguments and one keyword argument: open(filename, mode, encoding=None)

>>> f = open('workfile', 'w', encoding="utf-8")
The first argument is a string containing the filename. The second argument is another string
containing a few characters describing the way in which the file will be used. mode can
be 'r' when the file will only be read, 'w' for only writing (an existing file with the same name
will be erased), and 'a' opens the file for appending; any data written to the file is automatically
added to the end. 'r+' opens the file for both reading and writing. The mode argument is
optional; 'r' will be assumed if it’s omitted.
Normally, files are opened in text mode, that means, you read and write strings from and to the
file, which are encoded in a specific encoding. If encoding is not specified, the default is
platform dependent (see open()). Because UTF-8 is the modern de-facto
standard, encoding="utf-8" is recommended unless you know that you need to use a
different encoding. Appending a 'b' to the mode opens the file in binary mode. Binary mode
data is read and written as bytes objects. You can not specify encoding when opening file in
binary mode.
In text mode, the default when reading is to convert platform-specific line endings (\n on
Unix, \r\n on Windows) to just \n. When writing in text mode, the default is to convert
occurrences of \n back to platform-specific line endings. This behind-the-scenes modification to
file data is fine for text files, but will corrupt binary data like that in JPEG or EXE files. Be very
careful to use binary mode when reading and writing such files.
It is good practice to use the with keyword when dealing with file objects. The advantage is that
the file is properly closed after its suite finishes, even if an exception is raised at some point.
Using with is also much shorter than writing equivalent try-finally blocks:
>>> with open('workfile', encoding="utf-8") as f:
...     read_data = f.read()
>>> # We can check that the file has been automatically closed.
>>> f.closed
True
If you’re not using the with keyword, then you should call f.close() to close the file and
immediately free up any system resources used by it.
Warning
Calling f.write() without using the with keyword or calling f.close() might result in
the arguments of f.write() not being completely written to the disk, even if the program exits
successfully.
After a file object is closed, either by a with statement or by calling f.close(), attempts to
use the file object will automatically fail.
>>> f.close()
>>> f.read()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: I/O operation on closed file.
7.2.1. Methods of File Objects
The rest of the examples in this section will assume that a file object called f has already been
created.
To read a file’s contents, call f.read(size), which reads some quantity of data and returns it
as a string (in text mode) or bytes object (in binary mode). size is an optional numeric argument.
When size is omitted or negative, the entire contents of the file will be read and returned; it’s
your problem if the file is twice as large as your machine’s memory. Otherwise, at
most size characters (in text mode) or size bytes (in binary mode) are read and returned. If the
end of the file has been reached, f.read() will return an empty string ('').
>>> f.read()
'This is the entire file.\n'
>>> f.read()
''
f.readline() reads a single line from the file; a newline character (\n) is left at the end of
the string, and is only omitted on the last line of the file if the file doesn’t end in a newline. This
makes the return value unambiguous; if f.readline() returns an empty string, the end of the
file has been reached, while a blank line is represented by '\n', a string containing only a single
newline.
>>> f.readline()
'This is the first line of the file.\n'
>>> f.readline()
'Second line of the file\n'
>>> f.readline()
''
For reading lines from a file, you can loop over the file object. This is memory efficient, fast, and
leads to simple code:
>>> for line in f:
...     print(line, end='')
...
This is the first line of the file.
Second line of the file
If you want to read all the lines of a file in a list you can also
use list(f) or f.readlines().
f.write(string) writes the contents of string to the file, returning the number of characters
written.
>>> f.write('This is a test\n')
15
Other types of objects need to be converted – either to a string (in text mode) or a bytes object (in
binary mode) – before writing them:
>>> value = ('the answer', 42)
>>> s = str(value)  # convert the tuple to string
>>> f.write(s)
18
f.tell() returns an integer giving the file object’s current position in the file represented as
number of bytes from the beginning of the file when in binary mode and an opaque number
when in text mode.
To change the file object’s position, use f.seek(offset, whence). The position is
computed from adding offset to a reference point; the reference point is selected by
the whence argument. A whence value of 0 measures from the beginning of the file, 1 uses the
current file position, and 2 uses the end of the file as the reference point. whence can be omitted
and defaults to 0, using the beginning of the file as the reference point.
>>> f = open('workfile', 'rb+')
>>> f.write(b'0123456789abcdef')
16
>>> f.seek(5)      # Go to the 6th byte in the file
5
>>> f.read(1)
b'5'
>>> f.seek(-3, 2)  # Go to the 3rd byte before the end
13
>>> f.read(1)
b'd'
In text files (those opened without a b in the mode string), only seeks relative to the beginning of
the file are allowed (the exception being seeking to the very file end with seek(0, 2)) and the
only valid offset values are those returned from the f.tell(), or zero. Any other offset value
produces undefined behaviour.
File objects have some additional methods, such as isatty() and truncate() which are less
frequently used; consult the Library Reference for a complete guide to file objects.
Rather than having users constantly writing and debugging code to save complicated data types
to files, Python allows you to use the popular data interchange format called JSON (JavaScript
Object Notation). The standard module called json can take Python data hierarchies, and
convert them to string representations; this process is called serializing. Reconstructing the data
from the string representation is called deserializing. Between serializing and deserializing, the
string representing the object may have been stored in a file, or sent over a network
connection to some distant machine.
Note
The JSON format is commonly used by modern applications to allow for data exchange. Many
programmers are already familiar with it, which makes it a good choice for interoperability.
If you have an object x, you can view its JSON string representation with a simple line of code:
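The missing one-liner presumably used json.dumps(); a sketch with an illustrative value for x:

```python
import json

# json.dumps() returns the JSON string representation of an object.
x = [1, 'simple', 'list']
print(json.dumps(x))  # [1, "simple", "list"]
```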
The related function json.dump() serializes the object directly to a text file. So if f is a text file
object opened for writing, we can do this:
json.dump(x, f)
To decode the object again, if f is a binary file or text file object which has been opened for
reading:
x = json.load(f)
Note
JSON files must be encoded in UTF-8. Use encoding="utf-8" when opening a JSON file as
a text file for both reading and writing.
This simple serialization technique can handle lists and dictionaries, but serializing arbitrary
class instances in JSON requires a bit of extra effort. The reference for the json module
contains an explanation of this.
See also
Contrary to JSON, pickle is a protocol which allows the serialization of arbitrarily complex
Python objects. As such, it is specific to Python and cannot be used to communicate with
applications written in other languages. It is also insecure by default: deserializing pickle data
coming from an untrusted source can execute arbitrary code, if the data was crafted by a skilled
attacker.
>>> while True print('Hello world')
File "<stdin>", line 1
while True print('Hello world')
^^^^^
SyntaxError: invalid syntax
The parser repeats the offending line and displays little arrows pointing at the token in the line
where the error was detected. The error may be caused by the absence of a token before the
indicated token. In the example, the error is detected at the function print(), since a colon
(':') is missing before it. File name and line number are printed so you know where to look in
case the input came from a script.
8.2. Exceptions
Even if a statement or expression is syntactically correct, it may cause an error when an attempt
is made to execute it. Errors detected during execution are called exceptions and are not
unconditionally fatal: you will soon learn how to handle them in Python programs. Most
exceptions are not handled by programs, however, and result in error messages as shown here:
>>> 10 * (1/0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ZeroDivisionError: division by zero
>>> 4 + spam*3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'spam' is not defined
>>> '2' + 2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can only concatenate str (not "int") to str
The last line of the error message indicates what happened. Exceptions come in different types,
and the type is printed as part of the message: the types in the example
are ZeroDivisionError, NameError and TypeError. The string printed as the exception
type is the name of the built-in exception that occurred. This is true for all built-in exceptions,
but need not be true for user-defined exceptions (although it is a useful convention). Standard
exception names are built-in identifiers (not reserved keywords).
The rest of the line provides detail based on the type of exception and what caused it.
The preceding part of the error message shows the context where the exception occurred, in the
form of a stack traceback. In general it contains a stack traceback listing source lines; however, it
will not display lines read from standard input.
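The example that belonged here was stripped. A self-contained sketch of a try statement follows; the function and the values list (standing in for repeated user input) are assumptions for illustration. The points below describe how such a statement works:

```python
# Try each candidate until one parses as an integer; ValueError is handled.
def parse_first_number(values):
    for v in values:
        try:
            return int(v)
        except ValueError:
            print(f"{v!r} is not a valid number. Trying the next one...")
    return None

result = parse_first_number(["oops", "3.x", "42"])
print(result)  # 42
```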
First, the try clause (the statement(s) between the try and except keywords) is
executed.
If no exception occurs, the except clause is skipped and execution of the try statement is
finished.
If an exception occurs during execution of the try clause, the rest of the clause is
skipped. Then, if its type matches the exception named after the except keyword,
the except clause is executed, and then execution continues after the try/except block.
If an exception occurs which does not match the exception named in the except clause, it
is passed on to outer try statements; if no handler is found, it is an unhandled
exception and execution stops with an error message.
A try statement may have more than one except clause, to specify handlers for different
exceptions. At most one handler will be executed. Handlers only handle exceptions that occur in
the corresponding try clause, not in other handlers of the same try statement. An except
clause may name multiple exceptions as a parenthesized tuple, for example:
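The tuple example itself was stripped; a runnable sketch (the function and exception types chosen are illustrative):

```python
# One except clause can handle several exception types at once.
def safe_div(a, b):
    try:
        return a / b
    except (ZeroDivisionError, TypeError) as err:
        return f"error: {type(err).__name__}"

print(safe_div(10, 2))    # 5.0
print(safe_div(10, 0))    # error: ZeroDivisionError
print(safe_div(10, "x"))  # error: TypeError
```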
class B(Exception):
    pass

class C(B):
    pass

class D(C):
    pass
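The driver loop for this hierarchy was stripped. A reconstruction consistent with the remark that follows (an except clause matches a class and its subclasses, so with the handlers ordered most-specific first this prints B, C, D); the class definitions are repeated so the sketch runs on its own:

```python
class B(Exception):
    pass

class C(B):
    pass

class D(C):
    pass

caught = []
for cls in [B, C, D]:
    try:
        raise cls()
    except D:
        caught.append("D")
    except C:
        caught.append("C")
    except B:
        caught.append("B")

print(caught)  # ['B', 'C', 'D']
```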
Note that if the except clauses were reversed (with except B first), it would have printed B, B,
B — the first matching except clause is triggered.
When an exception occurs, it may have associated values, also known as the
exception’s arguments. The presence and types of the arguments depend on the exception type.
The except clause may specify a variable after the exception name. The variable is bound to the
exception instance which typically has an args attribute that stores the arguments. For
convenience, builtin exception types define __str__() to print all the arguments without
explicitly accessing .args.
>>>
>>> try:
...     raise Exception('spam', 'eggs')
... except Exception as inst:
...     print(type(inst))    # the exception type
...     print(inst.args)     # arguments stored in .args
...     print(inst)          # __str__ allows args to be printed directly,
...                          # but may be overridden in exception subclasses
...     x, y = inst.args     # unpack args
...     print('x =', x)
...     print('y =', y)
...
<class 'Exception'>
('spam', 'eggs')
('spam', 'eggs')
x = spam
y = eggs
The exception’s __str__() output is printed as the last part (‘detail’) of the message for
unhandled exceptions.
Exception can be used as a wildcard that catches (almost) everything. However, it is good
practice to be as specific as possible with the types of exceptions that we intend to handle, and to
allow any unexpected exceptions to propagate on.
The most common pattern for handling Exception is to print or log the exception and then re-
raise it (allowing a caller to handle the exception as well):
import sys

try:
    f = open('myfile.txt')
    s = f.readline()
    i = int(s.strip())
except OSError as err:
    print("OS error:", err)
except ValueError:
    print("Could not convert data to an integer.")
except Exception as err:
    print(f"Unexpected {err=}, {type(err)=}")
    raise
The try … except statement has an optional else clause, which, when present, must follow
all except clauses. It is useful for code that must be executed if the try clause does not raise an
exception. For example:
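The example was stripped; a self-contained sketch (the file names are hypothetical, and the block creates one of them so it runs unattended):

```python
# Setup for illustration: one file exists, the other does not.
with open("present.txt", "w", encoding="utf-8") as fh:
    fh.write("one\ntwo\n")

# The else clause runs only when the try clause raises no exception.
results = []
for name in ["no_such_file.txt", "present.txt"]:
    try:
        f = open(name, encoding="utf-8")
    except OSError:
        results.append(f"cannot open {name}")
    else:
        results.append(f"{name} has {len(f.readlines())} lines")
        f.close()

print(results)
```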
The use of the else clause is better than adding additional code to the try clause because it
avoids accidentally catching an exception that wasn’t raised by the code being protected by
the try … except statement.
Exception handlers do not handle only exceptions that occur immediately in the try clause, but
also those that occur inside functions that are called (even indirectly) in the try clause. For
example:
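The stripped example is likely along these lines, showing a handler catching an exception raised inside a called function:

```python
# The handler catches exceptions raised (even indirectly) in called functions.
def this_fails():
    x = 1 / 0

try:
    this_fails()
except ZeroDivisionError as err:
    print("Handling run-time error:", err)
```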
If you need to determine whether an exception was raised but don’t intend to handle it, a simpler
form of the raise statement allows you to re-raise the exception:
>>> try:
...     raise NameError('HiThere')
... except NameError:
...     print('An exception flew by!')
...     raise
...
An exception flew by!
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
NameError: HiThere
>>> try:
...     open("database.sqlite")
... except OSError:
...     raise RuntimeError("unable to handle error")
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
FileNotFoundError: [Errno 2] No such file or directory: 'database.sqlite'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 4, in <module>
RuntimeError: unable to handle error
To indicate that an exception is a direct consequence of another, the raise statement allows an
optional from clause:
This can be useful when you are transforming exceptions. For example:
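The example was stripped; a sketch of chaining with the from clause (func() and the error message are illustrative; the outer try is added here only so the block runs to completion, where the tutorial's version lets the chained traceback print):

```python
def func():
    raise ConnectionError

try:
    try:
        func()
    except ConnectionError as exc:
        raise RuntimeError("Failed to open database") from exc
except RuntimeError as err:
    # The original exception is attached as __cause__.
    print("cause:", type(err.__cause__).__name__)  # cause: ConnectionError
```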
It also allows disabling automatic exception chaining using the from None idiom:
>>> try:
...     open('database.sqlite')
... except OSError:
...     raise RuntimeError from None
...
Traceback (most recent call last):
File "<stdin>", line 4, in <module>
RuntimeError
Exception classes can be defined which do anything any other class can do, but are usually kept
simple, often only offering a number of attributes that allow information about the error to be
extracted by handlers for the exception.
Most exceptions are defined with names that end in “Error”, similar to the naming of the
standard exceptions.
Many standard modules define their own exceptions to report errors that may occur in functions
they define.
>>> try:
...     raise KeyboardInterrupt
... finally:
...     print('Goodbye, world!')
...
Goodbye, world!
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
KeyboardInterrupt
If a finally clause is present, the finally clause will execute as the last task before
the try statement completes. The finally clause runs whether or not the try statement
produces an exception. The following points discuss more complex cases when an exception
occurs:
If an exception occurs during execution of the try clause, the exception may be handled
by an except clause. If the exception is not handled by an except clause, the
exception is re-raised after the finally clause has been executed.
An exception could occur during execution of an except or else clause. Again, the
exception is re-raised after the finally clause has been executed.
If the finally clause executes a break, continue or return statement, exceptions
are not re-raised.
If the try statement reaches a break, continue or return statement,
the finally clause will execute just prior to
the break, continue or return statement’s execution.
If a finally clause includes a return statement, the returned value will be the one
from the finally clause’s return statement, not the value from
the try clause’s return statement.
For example:
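The example was stripped; a sketch consistent with the discussion that follows, where dividing two strings raises a TypeError that the except clause does not handle:

```python
# finally runs in every case: success, handled error, and unhandled error.
def divide(x, y):
    try:
        result = x / y
    except ZeroDivisionError:
        print("division by zero!")
    else:
        print("result is", result)
    finally:
        print("executing finally clause")

divide(2, 1)   # result is 2.0, then the finally message
divide(2, 0)   # division by zero!, then the finally message
# divide("2", "1") would raise TypeError, but only after finally runs.
```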
As you can see, the finally clause is executed in any event. The TypeError raised by
dividing two strings is not handled by the except clause and therefore re-raised after
the finally clause has been executed.
In real world applications, the finally clause is useful for releasing external resources (such as
files or network connections), regardless of whether the use of the resource was successful.
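The code criticized in the next paragraph was stripped; it is presumably along these lines (the setup that creates the file is added here only so the sketch runs on its own):

```python
# Setup for illustration: create the file the snippet reads.
with open("myfile.txt", "w", encoding="utf-8") as f:
    f.write("first line\nsecond line\n")

# The problematic pattern: the file object is opened but never explicitly closed.
for line in open("myfile.txt"):
    print(line, end="")
```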
The problem with this code is that it leaves the file open for an indeterminate amount of time
after this part of the code has finished executing. This is not an issue in simple scripts, but can be
a problem for larger applications. The with statement allows objects like files to be used in a
way that ensures they are always cleaned up promptly and correctly.
with open("myfile.txt") as f:
    for line in f:
        print(line, end="")
After the statement is executed, the file f is always closed, even if a problem was encountered
while processing the lines. Objects which, like files, provide predefined clean-up actions will
indicate this in their documentation.
8.9. Raising and Handling Multiple Unrelated Exceptions
There are situations where it is necessary to report several exceptions that have occurred. This is
often the case in concurrency frameworks, when several tasks may have failed in parallel, but
there are also other use cases where it is desirable to continue execution and collect multiple
errors rather than raise the first exception.
The builtin ExceptionGroup wraps a list of exception instances so that they can be raised
together. It is an exception itself, so it can be caught like any other exception.
By using except* instead of except, we can selectively handle only the exceptions in the
group that match a certain type. In the following example, which shows a nested exception
group, each except* clause extracts from the group exceptions of a certain type while letting
all other exceptions propagate to other clauses and eventually to be reraised.
>>> def f():
...     raise ExceptionGroup(
...         "group1",
...         [
...             OSError(1),
...             SystemError(2),
...             ExceptionGroup(
...                 "group2",
...                 [
...                     OSError(3),
...                     RecursionError(4)
...                 ]
...             )
...         ]
...     )
...
>>> try:
...     f()
... except* OSError as e:
...     print("There were OSErrors")
... except* SystemError as e:
...     print("There were SystemErrors")
...
There were OSErrors
There were SystemErrors
  + Exception Group Traceback (most recent call last):
  |   File "<stdin>", line 2, in <module>
  |   File "<stdin>", line 2, in f
  | ExceptionGroup: group1
  +-+---------------- 1 ----------------
    | ExceptionGroup: group2
    +-+---------------- 1 ----------------
      | RecursionError: 4
      +------------------------------------
Note that the exceptions nested in an exception group must be instances, not types. This is
because in practice the exceptions would typically be ones that have already been raised and
caught by the program, along the following pattern:
>>> excs = []
... for test in tests:
...     try:
...         test.run()
...     except Exception as e:
...         excs.append(e)
...
>>> if excs:
...     raise ExceptionGroup("Test Failures", excs)
...
>>> try:
...     raise TypeError('bad type')
... except Exception as e:
...     e.add_note('Add some information')
...     e.add_note('Add some more information')
...     raise
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
TypeError: bad type
Add some information
Add some more information
For example, when collecting exceptions into an exception group, we may want to add context
information for the individual errors. In the following example, each exception in the group has a
note indicating when the error occurred.
9. Classes
Classes provide a means of bundling data and functionality together.
Creating a new class creates a new type of object, allowing new instances of
that type to be made. Each class instance can have attributes attached to it
for maintaining its state. Class instances can also have methods (defined by
its class) for modifying its state.
Compared with other programming languages, Python’s class mechanism
adds classes with a minimum of new syntax and semantics. It is a mixture of
the class mechanisms found in C++ and Modula-3. Python classes provide all
the standard features of Object Oriented Programming: the class inheritance
mechanism allows multiple base classes, a derived class can override any
methods of its base class or classes, and a method can call the method of a
base class with the same name. Objects can contain arbitrary amounts and
kinds of data. As is true for modules, classes partake of the dynamic nature
of Python: they are created at runtime, and can be modified further after
creation.
A namespace is a mapping from names to objects. Most namespaces are currently implemented
as Python dictionaries, but that’s normally not noticeable in any way (except for performance),
and it may change in the future. Examples of namespaces are: the set of built-in names
(containing functions such as abs(), and built-in exception names); the global names in a
module; and the local names in a function invocation. In a sense the set of attributes of an object
also form a namespace. The important thing to know about namespaces is that there is absolutely
no relation between names in different namespaces; for instance, two different modules may both
define a function maximize without confusion — users of the modules must prefix it with the
module name.
By the way, I use the word attribute for any name following a dot — for example, in the
expression z.real, real is an attribute of the object z. Strictly speaking, references to names
in modules are attribute references: in the expression modname.funcname, modname is a
module object and funcname is an attribute of it. In this case there happens to be a
straightforward mapping between the module’s attributes and the global names defined in the
module: they share the same namespace! [1]
Attributes may be read-only or writable. In the latter case, assignment to attributes is possible.
Module attributes are writable: you can write modname.the_answer = 42. Writable
attributes may also be deleted with the del statement. For
example, del modname.the_answer will remove the attribute the_answer from the
object named by modname.
Namespaces are created at different moments and have different lifetimes. The namespace
containing the built-in names is created when the Python interpreter starts up, and is never
deleted. The global namespace for a module is created when the module definition is read in;
normally, module namespaces also last until the interpreter quits. The statements executed by the
top-level invocation of the interpreter, either read from a script file or interactively, are
considered part of a module called __main__, so they have their own global namespace. (The
built-in names actually also live in a module; this is called builtins.)
The local namespace for a function is created when the function is called, and deleted when the
function returns or raises an exception that is not handled within the function. (Actually,
forgetting would be a better way to describe what actually happens.) Of course, recursive
invocations each have their own local namespace.
the innermost scope, which is searched first, contains the local names
the scopes of any enclosing functions, which are searched starting with the nearest
enclosing scope, contain non-local, but also non-global names
the next-to-last scope contains the current module’s global names
the outermost scope (searched last) is the namespace containing built-in names
If a name is declared global, then all references and assignments go directly to the next-to-last
scope containing the module’s global names. To rebind variables found outside of the innermost
scope, the nonlocal statement can be used; if not declared nonlocal, those variables are read-
only (an attempt to write to such a variable will simply create a new local variable in the
innermost scope, leaving the identically named outer variable unchanged).
Usually, the local scope references the local names of the (textually) current function. Outside
functions, the local scope references the same namespace as the global scope: the module’s
namespace. Class definitions place yet another namespace in the local scope.
It is important to realize that scopes are determined textually: the global scope of a function
defined in a module is that module’s namespace, no matter from where or by what alias the
function is called. On the other hand, the actual search for names is done dynamically, at run
time — however, the language definition is evolving towards static name resolution, at
“compile” time, so don’t rely on dynamic name resolution! (In fact, local variables are already
determined statically.)
The global statement can be used to indicate that particular variables live in the global scope
and should be rebound there; the nonlocal statement indicates that particular variables live in
an enclosing scope and should be rebound there.
def scope_test():
    def do_local():
        spam = "local spam"

    def do_nonlocal():
        nonlocal spam
        spam = "nonlocal spam"

    def do_global():
        global spam
        spam = "global spam"

    spam = "test spam"
    do_local()
    print("After local assignment:", spam)
    do_nonlocal()
    print("After nonlocal assignment:", spam)
    do_global()
    print("After global assignment:", spam)

scope_test()
print("In global scope:", spam)
Note how the local assignment (which is default) didn’t change scope_test's binding of spam.
The nonlocal assignment changed scope_test's binding of spam, and the global assignment
changed the module-level binding.
You can also see that there was no previous binding for spam before the global assignment.
class ClassName:
    <statement-1>
    .
    .
    .
    <statement-N>
Class definitions, like function definitions (def statements) must be executed before they have
any effect. (You could conceivably place a class definition in a branch of an if statement, or
inside a function.)
In practice, the statements inside a class definition will usually be function definitions, but other
statements are allowed, and sometimes useful — we’ll come back to this later. The function
definitions inside a class normally have a peculiar form of argument list, dictated by the calling
conventions for methods — again, this is explained later.
When a class definition is entered, a new namespace is created, and used as the local scope —
thus, all assignments to local variables go into this new namespace. In particular, function
definitions bind the name of the new function here.
When a class definition is left normally (via the end), a class object is created. This is basically a
wrapper around the contents of the namespace created by the class definition; we’ll learn more
about class objects in the next section. The original local scope (the one in effect just before the
class definition was entered) is reinstated, and the class object is bound here to the class name
given in the class definition header (ClassName in the example).
Attribute references use the standard syntax used for all attribute references in
Python: obj.name. Valid attribute names are all the names that were in the class’s namespace
when the class object was created. So, if the class definition looked like this:
class MyClass:
    """A simple example class"""
    i = 12345

    def f(self):
        return 'hello world'
then MyClass.i and MyClass.f are valid attribute references, returning an integer and a
function object, respectively. Class attributes can also be assigned to, so you can change the
value of MyClass.i by assignment. __doc__ is also a valid attribute, returning the docstring
belonging to the class: "A simple example class".
Class instantiation uses function notation. Just pretend that the class object is a parameterless
function that returns a new instance of the class. For example (assuming the above class):
x = MyClass()
creates a new instance of the class and assigns this object to the local variable x.
The instantiation operation (“calling” a class object) creates an empty object. Many classes like
to create objects with instances customized to a specific initial state. Therefore a class may
define a special method named __init__(), like this:
    def __init__(self):
        self.data = []
When a class defines an __init__() method, class instantiation automatically
invokes __init__() for the newly created class instance, so a new, initialized instance can be
obtained by:
x = MyClass()
Of course, the __init__() method may have arguments for greater flexibility. In that case,
arguments given to the class instantiation operator are passed on to __init__(). For example,
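The example was stripped; a sketch of an __init__() that takes arguments (the class name and attributes follow the tutorial's usual complex-number illustration):

```python
# Arguments to the class call are passed on to __init__().
class Complex:
    def __init__(self, realpart, imagpart):
        self.r = realpart
        self.i = imagpart

x = Complex(3.0, -4.5)
print(x.r, x.i)  # 3.0 -4.5
```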
data attributes correspond to “instance variables” in Smalltalk, and to “data members” in C++.
Data attributes need not be declared; like local variables, they spring into existence when they
are first assigned to. For example, if x is the instance of MyClass created above, the following
piece of code will print the value 16, without leaving a trace:
x.counter = 1
while x.counter < 10:
x.counter = x.counter * 2
print(x.counter)
del x.counter
The other kind of instance attribute reference is a method. A method is a function that “belongs
to” an object.
Valid method names of an instance object depend on its class. By definition, all attributes of a
class that are function objects define corresponding methods of its instances. So in our
example, x.f is a valid method reference, since MyClass.f is a function, but x.i is not,
since MyClass.i is not. But x.f is not the same thing as MyClass.f — it is a method object,
not a function object.
x.f()
In the MyClass example, this will return the string 'hello world'. However, it is not
necessary to call a method right away: x.f is a method object, and can be stored away and
called at a later time. For example:
xf = x.f
while True:
    print(xf())
What exactly happens when a method is called? You may have noticed that x.f() was called
without an argument above, even though the function definition for f() specified an argument.
What happened to the argument? Surely Python raises an exception when a function that requires
an argument is called without any — even if the argument isn’t actually used…
Actually, you may have guessed the answer: the special thing about methods is that the instance
object is passed as the first argument of the function. In our example, the call x.f() is exactly
equivalent to MyClass.f(x). In general, calling a method with a list of n arguments is
equivalent to calling the corresponding function with an argument list that is created by inserting
the method’s instance object before the first argument.
In general, methods work as follows. When a non-data attribute of an instance is referenced, the
instance’s class is searched. If the name denotes a valid class attribute that is a function object,
references to both the instance object and the function object are packed into a method object.
When the method object is called with an argument list, a new argument list is constructed from
the instance object and the argument list, and the function object is called with this new
argument list.
class Dog:

    kind = 'canine'         # class variable shared by all instances

    def __init__(self, name):
        self.name = name    # instance variable unique to each instance
>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.kind # shared by all dogs
'canine'
>>> e.kind # shared by all dogs
'canine'
>>> d.name # unique to d
'Fido'
>>> e.name # unique to e
'Buddy'
As discussed in A Word About Names and Objects, shared data can have possibly surprising
effects with involving mutable objects such as lists and dictionaries. For example, the tricks list
in the following code should not be used as a class variable because just a single list would be
shared by all Dog instances:
class Dog:

    tricks = []             # mistaken use of a class variable

    def __init__(self, name):
        self.name = name

    def add_trick(self, trick):
        self.tricks.append(trick)
>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.add_trick('roll over')
>>> e.add_trick('play dead')
>>> d.tricks # unexpectedly shared by all dogs
['roll over', 'play dead']
Correct design of the class should use an instance variable instead:
class Dog:

    def __init__(self, name):
        self.name = name
        self.tricks = []    # creates a new empty list for each dog

    def add_trick(self, trick):
        self.tricks.append(trick)
>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.add_trick('roll over')
>>> e.add_trick('play dead')
>>> d.tricks
['roll over']
>>> e.tricks
['play dead']
Clients should use data attributes with care — clients may mess up invariants maintained by the
methods by stamping on their data attributes. Note that clients may add data attributes of their
own to an instance object without affecting the validity of the methods, as long as name conflicts
are avoided — again, a naming convention can save a lot of headaches here.
There is no shorthand for referencing data attributes (or other methods!) from within methods. I
find that this actually increases the readability of methods: there is no chance of confusing local
variables and instance variables when glancing through a method.
Often, the first argument of a method is called self. This is nothing more than a convention: the
name self has absolutely no special meaning to Python. Note, however, that by not following
the convention your code may be less readable to other Python programmers, and it is also
conceivable that a class browser program might be written that relies upon such a convention.
Any function object that is a class attribute defines a method for instances of that class. It is not
necessary that the function definition is textually enclosed in the class definition: assigning a
function object to a local variable in the class is also ok. For example:
# Function defined outside the class
def f1(self, x, y):
    return min(x, x+y)

class C:
    f = f1

    def g(self):
        return 'hello world'

    h = g
Now f, g and h are all attributes of class C that refer to function objects, and consequently they
are all methods of instances of C — h being exactly equivalent to g. Note that this practice
usually only serves to confuse the reader of a program.
Methods may call other methods by using method attributes of the self argument:
class Bag:
    def __init__(self):
        self.data = []

    def add(self, x):
        self.data.append(x)

    def addtwice(self, x):
        self.add(x)
        self.add(x)
Methods may reference global names in the same way as ordinary functions. The global scope
associated with a method is the module containing its definition. (A class is never used as a
global scope.) While one rarely encounters a good reason for using global data in a method, there
are many legitimate uses of the global scope: for one thing, functions and modules imported into
the global scope can be used by methods, as well as functions and classes defined in it. Usually,
the class containing the method is itself defined in this global scope, and in the next section we’ll
find some good reasons why a method would want to reference its own class.
Each value is an object, and therefore has a class (also called its type). It is stored
as object.__class__.
9.5. Inheritance
Of course, a language feature would not be worthy of the name “class” without supporting
inheritance. The syntax for a derived class definition looks like this:
class DerivedClassName(BaseClassName):
    <statement-1>
    .
    .
    .
    <statement-N>
The name BaseClassName must be defined in a namespace accessible from the scope
containing the derived class definition. In place of a base class name, other arbitrary expressions
are also allowed. This can be useful, for example, when the base class is defined in another
module:
class DerivedClassName(modname.BaseClassName):
Execution of a derived class definition proceeds the same as for a base class. When the class
object is constructed, the base class is remembered. This is used for resolving attribute
references: if a requested attribute is not found in the class, the search proceeds to look in the
base class. This rule is applied recursively if the base class itself is derived from some other
class.
Derived classes may override methods of their base classes. Because methods have no special
privileges when calling other methods of the same object, a method of a base class that calls
another method defined in the same base class may end up calling a method of a derived class
that overrides it. (For C++ programmers: all methods in Python are effectively virtual.)
An overriding method in a derived class may in fact want to extend rather than simply replace
the base class method of the same name. There is a simple way to call the base class method
directly: just call BaseClassName.methodname(self, arguments). This is
occasionally useful to clients as well. (Note that this only works if the base class is accessible
as BaseClassName in the global scope.)
For most purposes, in the simplest cases, you can think of the search for attributes inherited from
a parent class as depth-first, left-to-right, not searching twice in the same class where there is an
overlap in the hierarchy. Thus, if an attribute is not found in DerivedClassName, it is
searched for in Base1, then (recursively) in the base classes of Base1, and if it is not found
there, it is searched for in Base2, and so on.
In fact, it is slightly more complex than that; the method resolution order changes dynamically to
support cooperative calls to super(). This approach is known in some other multiple-
inheritance languages as call-next-method and is more powerful than the super call found in
single-inheritance languages.
Dynamic ordering is necessary because all cases of multiple inheritance exhibit one or more
diamond relationships (where at least one of the parent classes can be accessed through multiple
paths from the bottommost class). For example, all classes inherit from object, so any case of
multiple inheritance provides more than one path to reach object. To keep the base classes
from being accessed more than once, the dynamic algorithm linearizes the search order in a way
that preserves the left-to-right ordering specified in each class, that calls each parent only once,
and that is monotonic (meaning that a class can be subclassed without affecting the precedence
order of its parents). Taken together, these properties make it possible to design reliable and
extensible classes with multiple inheritance. For more detail, see The Python 2.3 Method
Resolution Order.
Since there is a valid use-case for class-private members (namely to avoid name clashes of
names with names defined by subclasses), there is limited support for such a mechanism,
called name mangling. Any identifier of the form __spam (at least two leading underscores, at
most one trailing underscore) is textually replaced with _classname__spam,
where classname is the current class name with leading underscore(s) stripped. This mangling
is done without regard to the syntactic position of the identifier, as long as it occurs within the
definition of a class.
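A quick way to see the mangling in action (the Account class here is a made-up illustration, not part of the tutorial text):

```python
class Account:
    def __init__(self):
        self.__balance = 0   # stored under the mangled name _Account__balance

acct = Account()
print(hasattr(acct, '__balance'))          # False: no attribute by the plain name
print(hasattr(acct, '_Account__balance'))  # True: the mangled name is what exists
print(acct._Account__balance)              # 0
```

Outside the class body no mangling is applied, so the plain name `__balance` simply does not exist on the instance.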
See also
The private name mangling specifications for details and special cases.
Name mangling is helpful for letting subclasses override methods without breaking intraclass
method calls. For example:
class Mapping:
    def __init__(self, iterable):
        self.items_list = []
        self.__update(iterable)

    def update(self, iterable):
        for item in iterable:
            self.items_list.append(item)

    __update = update   # private copy of original update() method

class MappingSubclass(Mapping):

    def update(self, keys, values):
        # provides new signature for update()
        # but does not break __init__()
        for item in zip(keys, values):
            self.items_list.append(item)
Note that the mangling rules are designed mostly to avoid accidents; it still is possible to access
or modify a variable that is considered private. This can even be useful in special circumstances,
such as in the debugger.
Notice that code passed to exec() or eval() does not consider the classname of the invoking
class to be the current class; this is similar to the effect of the global statement, the effect of
which is likewise restricted to code that is byte-compiled together. The same restriction applies
to getattr(), setattr() and delattr(), as well as when
referencing __dict__ directly.
9.7. Odds and Ends
Sometimes it is useful to have a data type similar to the Pascal “record” or C “struct”, bundling
together a few named data items. The idiomatic approach is to use dataclasses for this
purpose:
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    dept: str
    salary: int

>>> john = Employee('john', 'computer lab', 1000)
>>> john.dept
'computer lab'
>>> john.salary
1000
A piece of Python code that expects a particular abstract data type can often be passed a class
that emulates the methods of that data type instead. For instance, if you have a function that
formats some data from a file object, you can define a class with
methods read() and readline() that get the data from a string buffer instead, and pass it as
an argument.
Instance method objects have attributes, too: m.__self__ is the instance object with the
method m(), and m.__func__ is the function object corresponding to the method.
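A small sketch of those two attributes (the Greeter class is a made-up example):

```python
class Greeter:
    def hello(self):
        return 'hi'

g = Greeter()
m = g.hello                          # a bound method object
print(m.__self__ is g)               # True: the instance the method is bound to
print(m.__func__ is Greeter.hello)   # True: the underlying function object
print(m.__func__(g))                 # 'hi' -- calling the function explicitly
```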
9.8. Iterators
By now you have probably noticed that most container objects can be looped over using
a for statement:

for element in [1, 2, 3]:
    print(element)
for key in {'one': 1, 'two': 2}:
    print(key)
for char in "123":
    print(char)
for line in open("myfile.txt"):
    print(line, end='')
This style of access is clear, concise, and convenient. The use of iterators pervades and unifies
Python. Behind the scenes, the for statement calls iter() on the container object. The
function returns an iterator object that defines the method __next__() which accesses
elements in the container one at a time. When there are no more elements, __next__() raises
a StopIteration exception which tells the for loop to terminate. You can call
the __next__() method using the next() built-in function; this example shows how it all
works:
>>>
>>> s = 'abc'
>>> it = iter(s)
>>> it
<str_iterator object at 0x10c90e650>
>>> next(it)
'a'
>>> next(it)
'b'
>>> next(it)
'c'
>>> next(it)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    next(it)
StopIteration
Having seen the mechanics behind the iterator protocol, it is easy to add iterator behavior to your
classes. Define an __iter__() method which returns an object with a __next__() method.
If the class defines __next__(), then __iter__() can just return self:
class Reverse:
    """Iterator for looping over a sequence backwards."""
    def __init__(self, data):
        self.data = data
        self.index = len(data)

    def __iter__(self):
        return self

    def __next__(self):
        if self.index == 0:
            raise StopIteration
        self.index = self.index - 1
        return self.data[self.index]

>>> rev = Reverse('spam')
>>> for char in rev:
...     print(char)
...
m
a
p
s
9.9. Generators
Generators are a simple and powerful tool for creating iterators. They are written like regular
functions but use the yield statement whenever they want to return data. Each time next() is
called on it, the generator resumes where it left off (it remembers all the data values and which
statement was last executed). An example shows that generators can be trivially easy to create:
def reverse(data):
    for index in range(len(data)-1, -1, -1):
        yield data[index]
>>> for char in reverse('golf'):
...     print(char)
...
f
l
o
g
Anything that can be done with generators can also be done with class-based iterators as
described in the previous section. What makes generators so compact is that
the __iter__() and __next__() methods are created automatically.
Another key feature is that the local variables and execution state are automatically saved
between calls. This made the function easier to write and much more clear than an approach
using instance variables like self.index and self.data.
In addition to automatic method creation and saving program state, when generators terminate,
they automatically raise StopIteration. In combination, these features make it easy to create
iterators with no more effort than writing a regular function.
Examples:
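Generators can also be written inline as generator expressions, with a syntax similar to list comprehensions; a couple of sketches (the vectors are made-up data):

```python
# Sum of squares, computed lazily without building a list:
print(sum(i * i for i in range(10)))            # 285

# Dot product of two vectors:
xvec = [10, 20, 30]
yvec = [7, 5, 3]
print(sum(x * y for x, y in zip(xvec, yvec)))   # 260
```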
Footnotes
[1]
Except for one thing. Module objects have a secret read-only attribute
called __dict__ which returns the dictionary used to implement the module’s namespace;
the name __dict__ is an attribute but not a global name. Obviously, using this violates the
abstraction of namespace implementation, and should be restricted to things like post-mortem debuggers.
>>>
>>> import os
>>> os.getcwd() # Return the current working directory
'C:\\Python312'
>>> os.chdir('/server/accesslogs')   # Change current working directory
>>> os.system('mkdir today')   # Run the command mkdir in the system shell
0
Be sure to use the import os style instead of from os import *. This will
keep os.open() from shadowing the built-in open() function which operates much
differently.
The built-in dir() and help() functions are useful as interactive aids for working with large
modules like os:
>>>
>>> import os
>>> dir(os)
<returns a list of all module functions>
>>> help(os)
<returns an extensive manual page created from the module's docstrings>
For daily file and directory management tasks, the shutil module provides a higher level
interface that is easier to use:
>>>
>>> import shutil
>>> shutil.copyfile('data.db', 'archive.db')
'archive.db'
>>> shutil.move('/build/executables', 'installdir')
'installdir'
>>>
# File demo.py
import sys
print(sys.argv)
Here is the output from running python demo.py one two three at the command line:

['demo.py', 'one', 'two', 'three']
The argparse module provides a more sophisticated mechanism to process command line
arguments. The following script extracts one or more filenames and an optional number of lines
to be displayed:
import argparse

parser = argparse.ArgumentParser(
    prog='top',
    description='Show top lines from each file')
parser.add_argument('filenames', nargs='+')
parser.add_argument('-l', '--lines', type=int, default=10)
args = parser.parse_args()
print(args)
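parse_args() also accepts an explicit argument list, which makes the behavior easy to check without touching sys.argv (the filenames below are made up):

```python
import argparse

parser = argparse.ArgumentParser(
    prog='top',
    description='Show top lines from each file')
parser.add_argument('filenames', nargs='+')
parser.add_argument('-l', '--lines', type=int, default=10)

# Passing a list instead of reading sys.argv:
args = parser.parse_args(['--lines', '5', 'alpha.txt', 'beta.txt'])
print(args.lines)       # 5
print(args.filenames)   # ['alpha.txt', 'beta.txt']
```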
>>>
>>> import re
>>> re.findall(r'\bf[a-z]*', 'which foot or hand fell fastest')
['foot', 'fell', 'fastest']
>>> re.sub(r'(\b[a-z]+) \1', r'\1', 'cat in the the hat')
'cat in the hat'
When only simple capabilities are needed, string methods are preferred because they are easier to
read and debug:
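For instance, the substitution above needs no regular expression when the target is a fixed string; a plain string method does the job:

```python
# A fixed-string replacement needs no regular expression:
print('tea for too'.replace('too', 'two'))   # tea for two
```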
The statistics module calculates basic statistical properties (the mean, median, variance,
etc.) of numeric data:
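A short sketch with made-up sample data:

```python
import statistics

data = [2.75, 1.75, 1.25, 0.25, 0.5, 1.25, 3.5]
print(statistics.mean(data))      # 1.6071428571428572
print(statistics.median(data))    # 1.25
print(statistics.variance(data))  # sample variance of the data
```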
The SciPy project <https://scipy.org> has many other modules for numerical computations.
10.7. Internet Access
There are a number of modules for accessing the internet and processing internet protocols. Two
of the simplest are urllib.request for retrieving data from URLs and smtplib for sending
mail:
>>>
>>>
>>> # dates are easily constructed and formatted
>>> from datetime import date
>>> now = date.today()
>>> now
datetime.date(2003, 12, 2)
>>> now.strftime("%m-%d-%y. %d %b %Y is a %A on the %d day of %B.")
'12-02-03. 02 Dec 2003 is a Tuesday on the 02 day of December.'
For example, it may be tempting to use the tuple packing and unpacking feature instead of the
traditional approach to swapping arguments. The timeit module quickly demonstrates a
modest performance advantage:
>>>
>>> from timeit import Timer
>>> Timer('t=a; a=b; b=t', 'a=1; b=2').timeit()
0.57535828626024577
>>> Timer('a,b = b,a', 'a=1; b=2').timeit()
0.54962537085770791
In contrast to timeit’s fine level of granularity, the profile and pstats modules provide
tools for identifying time critical sections in larger blocks of code.
The doctest module provides a tool for scanning a module and validating tests embedded in a
program’s docstrings. Test construction is as simple as cutting-and-pasting a typical call along
with its results into the docstring. This improves the documentation by providing the user with an
example and it allows the doctest module to make sure the code remains true to the
documentation:
def average(values):
    """Computes the arithmetic mean of a list of numbers.

    >>> print(average([20, 30, 70]))
    40.0
    """
    return sum(values) / len(values)

import doctest
doctest.testmod()   # automatically validate the embedded tests
The unittest module is not as effortless as the doctest module, but it allows a more
comprehensive set of tests to be maintained in a separate file:
import unittest

class TestStatisticalFunctions(unittest.TestCase):

    def test_average(self):
        self.assertEqual(average([20, 30, 70]), 40.0)
        self.assertEqual(round(average([1, 5, 7]), 1), 4.3)
        with self.assertRaises(ZeroDivisionError):
            average([])
        with self.assertRaises(TypeError):
            average(20, 30, 70)

unittest.main()  # Calling from the command line invokes all tests
The pprint module offers more sophisticated control over printing both built-in and user
defined objects in a way that is readable by the interpreter. When the result is longer than one
line, the “pretty printer” adds line breaks and indentation to more clearly reveal data structure:
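A sketch with a made-up nested list, printed at a narrow width to force the line breaks:

```python
import pprint

t = [[[['black', 'cyan'], 'white', ['green', 'red']],
      [['magenta', 'yellow'], 'blue']]]
# At width=30 the structure will not fit on one line, so pprint
# breaks and indents it to mirror the nesting:
pprint.pprint(t, width=30)
```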
The textwrap module formats paragraphs of text to fit a given screen width:
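A sketch wrapping a made-up paragraph to 40 columns:

```python
import textwrap

doc = """The wrap() method is just like fill() except that it returns
a list of strings instead of one big string with newlines to separate
the wrapped lines."""
# fill() returns a single string with newlines inserted so that no
# line exceeds the requested width:
print(textwrap.fill(doc, width=40))
```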
The locale module accesses a database of culture specific data formats. The grouping attribute
of locale’s format_string() function provides a direct way of formatting numbers with group separators:
>>>
11.2. Templating
The string module includes a versatile Template class with a simplified syntax suitable for
editing by end-users. This allows users to customize their applications without having to alter the
application.
The format uses placeholder names formed by $ with valid Python identifiers (alphanumeric
characters and underscores). Surrounding the placeholder with braces allows it to be followed by
more alphanumeric letters with no intervening spaces. Writing $$ creates a single escaped $:
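A sketch of those placeholder rules (the village/cause values are made up):

```python
from string import Template

t = Template('${village}folk send $$10 to $cause.')
print(t.substitute(village='Nottingham', cause='the ditch fund'))
# Nottinghamfolk send $10 to the ditch fund.

# safe_substitute() leaves unknown placeholders unchanged instead of
# raising KeyError:
print(t.safe_substitute(village='Nottingham'))
# Nottinghamfolk send $10 to $cause.
```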
Template subclasses can specify a custom delimiter. For example, a batch renaming utility for a
photo browser may elect to use percent signs for placeholders such as the current date, image
sequence number, or file format:
>>>
>>> import time, os.path
>>> photofiles = ['img_1074.jpg', 'img_1076.jpg', 'img_1077.jpg']
>>> class BatchRename(Template):
...     delimiter = '%'
...
>>> fmt = input('Enter rename style (%d-date %n-seqnum %f-format):  ')
Enter rename style (%d-date %n-seqnum %f-format):  Ashley_%n%f
>>> t = BatchRename(fmt)
>>> date = time.strftime('%d%b%y')
>>> for i, filename in enumerate(photofiles):
... base, ext = os.path.splitext(filename)
... newname = t.substitute(d=date, n=i, f=ext)
... print('{0} --> {1}'.format(filename, newname))
Another application for templating is separating program logic from the details of multiple
output formats. This makes it possible to substitute custom templates for XML files, plain text
reports, and HTML web reports.
11.3. Working with Binary Data Record Layouts
The struct module provides pack() and unpack() functions for working with variable length
binary record formats. The following example shows how to loop through header information in a
ZIP file without using the zipfile module. Pack codes "H" and "I" represent two and four byte
unsigned numbers respectively. The "<" indicates that they are standard size and in little-endian
byte order:

import struct

with open('myfile.zip', 'rb') as f:
    data = f.read()

start = 0
for i in range(3):                      # show the first 3 file headers
    start += 14
    fields = struct.unpack('<IIIHH', data[start:start+16])
    crc32, comp_size, uncomp_size, filenamesize, extra_size = fields

    start += 16
    filename = data[start:start+filenamesize]
    start += filenamesize
    extra = data[start:start+extra_size]
    print(filename, hex(crc32), comp_size, uncomp_size)
11.4. Multi-threading
Threading is a technique for decoupling tasks which are not sequentially dependent. Threads can
be used to improve the responsiveness of applications that accept user input while other tasks run
in the background. A related use case is running I/O in parallel with computations in another
thread.
The following code shows how the high level threading module can run tasks in background
while the main program continues to run:
import threading, zipfile

class AsyncZip(threading.Thread):
    def __init__(self, infile, outfile):
        threading.Thread.__init__(self)
        self.infile = infile
        self.outfile = outfile

    def run(self):
        f = zipfile.ZipFile(self.outfile, 'w', zipfile.ZIP_DEFLATED)
        f.write(self.infile)
        f.close()
        print('Finished background zip of:', self.infile)

background = AsyncZip('mydata.txt', 'myarchive.zip')
background.start()
print('The main program continues to run in foreground.')

background.join()    # Wait for the background task to finish
print('Main program waited until background was done.')
The principal challenge of multi-threaded applications is coordinating threads that share data or
other resources. To that end, the threading module provides a number of synchronization
primitives including locks, events, condition variables, and semaphores.
While those tools are powerful, minor design errors can result in problems that are difficult to
reproduce. So, the preferred approach to task coordination is to concentrate all access to a
resource in a single thread and then use the queue module to feed that thread with requests from
other threads. Applications using Queue objects for inter-thread communication and
coordination are easier to design, more readable, and more reliable.
11.5. Logging
The logging module offers a full featured and flexible logging system. At its simplest, log
messages are sent to a file or to sys.stderr:
import logging
logging.debug('Debugging information')
logging.info('Informational message')
logging.warning('Warning:config file %s not found', 'server.conf')
logging.error('Error occurred')
logging.critical('Critical error -- shutting down')
By default, informational and debugging messages are suppressed and the output is sent to
standard error. Other output options include routing messages through email, datagrams, sockets,
or to an HTTP Server. New filters can select different routing based on message
priority: DEBUG, INFO, WARNING, ERROR, and CRITICAL.
The logging system can be configured directly from Python or can be loaded from a user editable
configuration file for customized logging without altering the application.
11.6. Weak References
Python does automatic memory management (reference counting for most objects and garbage
collection to eliminate cycles). The memory is freed shortly after the last reference to it has been
eliminated.
This approach works fine for most applications but occasionally there is a need to track objects
only as long as they are being used by something else. Unfortunately, just tracking them creates a
reference that makes them permanent. The weakref module provides tools for tracking objects
without creating a reference. When the object is no longer needed, it is automatically removed
from a weakref table and a callback is triggered for weakref objects. Typical applications include
caching objects that are expensive to create:
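A minimal sketch of that caching pattern, relying on CPython's reference counting (the BigImage class and the 'primary' key are made-up illustrations):

```python
import gc
import weakref

class BigImage:
    """Stand-in for an object that is expensive to create."""
    def __init__(self, value):
        self.value = value

img = BigImage(10)
cache = weakref.WeakValueDictionary()
cache['primary'] = img           # does not create a strong reference
print(cache['primary'].value)    # 10: the object is still alive

del img                          # drop the only strong reference
gc.collect()                     # ensure the object is reclaimed
print('primary' in cache)        # False: the entry was removed automatically
```

On CPython the entry disappears as soon as the last strong reference is gone; other implementations may reclaim it later.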
11.7. Tools for Working with Lists
Many data structure needs can be met with the built-in list type. However, sometimes there is a
need for alternative implementations with different performance trade-offs.
The collections module provides a deque object that is like a list with faster appends and
pops from the left side but slower lookups in the middle. These objects are well suited for
implementing queues and breadth first tree searches:
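A sketch of a simple task queue (the task names are made up):

```python
from collections import deque

queue = deque(["task1", "task2", "task3"])
queue.append("task4")               # O(1) append on the right
print("Handling", queue.popleft())  # O(1) pop on the left: task1
```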
In addition to alternative list implementations, the library also offers other tools such as
the bisect module with functions for manipulating sorted lists:
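For example, insort() inserts a new entry while keeping the list sorted (the score data is made up):

```python
import bisect

scores = [(100, 'perl'), (200, 'tcl'), (400, 'lua'), (500, 'python')]
bisect.insort(scores, (300, 'ruby'))   # binary-search for the spot, then insert
print(scores)
```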
The heapq module provides functions for implementing heaps based on regular lists. The lowest
valued entry is always kept at position zero. This is useful for applications which repeatedly
access the smallest element but do not want to run a full list sort:
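A sketch with made-up data, popping the three smallest entries:

```python
from heapq import heapify, heappop, heappush

data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0]
heapify(data)                    # rearrange the list into heap order
heappush(data, -5)               # add a new entry
print([heappop(data) for i in range(3)])   # [-5, 0, 1]
```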
11.8. Decimal Floating-Point Arithmetic
The decimal module offers a Decimal datatype for decimal floating-point arithmetic. Compared
to the built-in float implementation of binary floating point, the class is especially helpful for
financial applications and other uses which require exact decimal representation,
control over precision,
control over rounding to meet legal or regulatory requirements,
tracking of significant decimal places, or
applications where the user expects the results to match calculations done by hand.
For example, calculating a 5% tax on a 70 cent phone charge gives different results in decimal
floating point and binary floating point. The difference becomes significant if the results are
rounded to the nearest cent:
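A sketch of that tax calculation in both representations:

```python
from decimal import Decimal

print(Decimal('0.70') * Decimal('1.05'))            # Decimal('0.7350')
print(.70 * 1.05)                                   # 0.7349999999999999
print(round(Decimal('0.70') * Decimal('1.05'), 2))  # Decimal('0.74')
print(round(.70 * 1.05, 2))                         # 0.73
```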
The Decimal result keeps a trailing zero, automatically inferring four place significance from
multiplicands with two place significance. Decimal reproduces mathematics as done by hand and
avoids issues that can arise when binary floating point cannot exactly represent decimal
quantities.
Exact representation enables the Decimal class to perform modulo calculations and equality
tests that are unsuitable for binary floating point:
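For instance:

```python
from decimal import Decimal

print(Decimal('1.00') % Decimal('.10'))              # Decimal('0.00')
print(1.00 % 0.10)                                   # a tiny nonzero remainder
print(sum([Decimal('0.1')] * 10) == Decimal('1.0'))  # True
```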
The decimal module implements arithmetic with as much precision as needed:
>>>
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 36
>>> Decimal(1) / Decimal(7)
Decimal('0.142857142857142857142857142857142857')
12. Virtual Environments and Packages
12.1. Introduction
Python applications will often use packages and modules that don’t come as part of the standard
library. Applications will sometimes need a specific version of a library, because the application
may require that a particular bug has been fixed or the application may be written using an
obsolete version of the library’s interface.
This means it may not be possible for one Python installation to meet the requirements of every
application. If application A needs version 1.0 of a particular module but application B needs
version 2.0, then the requirements are in conflict and installing either version 1.0 or 2.0 will
leave one application unable to run.
The solution for this problem is to create a virtual environment, a self-contained directory tree
that contains a Python installation for a particular version of Python, plus a number of additional
packages.
Different applications can then use different virtual environments. To resolve the earlier example
of conflicting requirements, application A can have its own virtual environment with version 1.0
installed while application B has another virtual environment with version 2.0. If application B
requires a library be upgraded to version 3.0, this will not affect application A’s environment.
To create a virtual environment, decide upon a directory where you want to place it, and run
the venv module as a script with the directory path:
python -m venv tutorial-env
This will create the tutorial-env directory if it doesn’t exist, and also create directories
inside it containing a copy of the Python interpreter and various supporting files.
A common directory location for a virtual environment is .venv. This name keeps the directory
typically hidden in your shell and thus out of the way while giving it a name that explains why
the directory exists. It also prevents clashing with .env environment variable definition files
that some tooling supports.
On Windows, run:
tutorial-env\Scripts\activate
On Unix or MacOS, run:
source tutorial-env/bin/activate
(This script is written for the bash shell. If you use the csh or fish shells, there are
alternate activate.csh and activate.fish scripts you should use instead.)
Activating the virtual environment will change your shell’s prompt to show what virtual
environment you’re using, and modify the environment so that running python will get you that
particular version and installation of Python. For example:
$ source ~/envs/tutorial-env/bin/activate
(tutorial-env) $ python
Python 3.5.1 (default, May 6 2016, 10:59:36)
...
>>> import sys
>>> sys.path
['', '/usr/local/lib/python35.zip', ...,
'~/envs/tutorial-env/lib/python3.5/site-packages']
>>>
To deactivate a virtual environment, type:
deactivate
into the terminal.
12.2. Managing Packages with pip
You can install, upgrade, and remove packages using a program called pip. By default pip will
install packages from the Python Package Index.
pip has a number of subcommands: “install”, “uninstall”, “freeze”, etc. (Consult the Installing
Python Modules guide for complete documentation for pip.)
You can install the latest version of a package by specifying a package’s name:
python -m pip install novas
You can also install a specific version of a package by giving the package name followed
by == and the version number:
python -m pip install requests==2.6.0
If you re-run this command, pip will notice that the requested version is already installed and do
nothing. You can supply a different version number to get that version, or you can
run python -m pip install --upgrade to upgrade the package to the latest version:
python -m pip install --upgrade requests
python -m pip uninstall followed by one or more package names will remove the
packages from the virtual environment.
python -m pip list will display all of the packages installed in the virtual environment:
python -m pip freeze will produce a similar list of the installed packages, but the output
uses the format that python -m pip install expects. A common convention is to put this
list in a requirements.txt file:
The requirements.txt can then be committed to version control and shipped as part of an
application. Users can then install all the necessary packages with install -r:
python -m pip install -r requirements.txt
pip has many more options. Consult the Installing Python Modules guide for complete
documentation for pip. When you’ve written a package and want to make it available on the
Python Package Index, consult the Python packaging user guide.
You should browse through this manual, which gives complete (though
terse) reference material about types, functions, and the modules in
the standard library. The standard Python distribution includes a lot of
additional code. There are modules to read Unix mailboxes, retrieve
documents via HTTP, generate random numbers, parse command-line
options, compress data, and many other tasks. Skimming through the
Library Reference will give you an idea of what’s available.
For Python-related questions and problem reports, you can post to the
newsgroup comp.lang.python, or send them to the mailing list at
python-list@python.org. The newsgroup and mailing list are gatewayed, so
messages posted to one will automatically be forwarded to the other. There
are hundreds of postings a day, asking (and answering) questions,
suggesting new features, and announcing new modules. Mailing list archives
are available at https://mail.python.org/pipermail/.
Before posting, be sure to check the list of Frequently Asked Questions (also
called the FAQ). The FAQ answers many of the questions that come up again
and again, and may already contain the solution for your problem.
Footnotes
[1]
“Cheese Shop” is a Monty Python sketch: a customer enters a cheese shop, but whatever
cheese he asks for, the clerk says it’s missing.
One alternative enhanced interactive interpreter that has been around for quite some time
is IPython, which features tab completion, object exploration and advanced history management.
It can also be thoroughly customized and embedded into other applications. Another similar
enhanced interactive environment is bpython.
15. Floating-Point Arithmetic: Issues and Limitations
Floating-point numbers are represented in computer hardware as base 2 (binary)
fractions. Unfortunately, most decimal fractions cannot be represented exactly
as binary fractions, so the decimal numbers you enter are in general only
approximated by the binary floating-point numbers actually stored in the
machine.
The problem is easier to understand at first in base 10. Consider the fraction
1/3. You can approximate that as a base 10 fraction:
0.3
or, better,
0.33
or, better,
0.333
and so on. No matter how many digits you’re willing to write down, the result
will never be exactly 1/3, but will be an increasingly better approximation of
1/3.
In the same way, no matter how many base 2 digits you’re willing to use, the
decimal value 0.1 cannot be represented exactly as a base 2 fraction. In
base 2, 1/10 is the infinitely repeating fraction
0.0001100110011001100110011001100110011001100110011...
Stop at any finite number of bits, and you get an approximation. On most
machines today, floats are approximated using a binary fraction with the
numerator using the first 53 bits starting with the most significant bit and
with the denominator as a power of two. In the case of 1/10, the binary
fraction is 3602879701896397 / 2 ** 55 which is close to but not exactly
equal to the true value of 1/10.
Many users are not aware of the approximation because of the way values
are displayed. Python only prints a decimal approximation to the true
decimal value of the binary approximation stored by the machine. On most
machines, if Python were to print the true decimal value of the binary
approximation stored for 0.1, it would have to display:
>>>
>>> 0.1
0.1000000000000000055511151231257827021181583404541015625
That is more digits than most people find useful, so Python keeps the
number of digits manageable by displaying a rounded value instead:
>>>
>>> 1 / 10
0.1
Just remember, even though the printed result looks like the exact value of
1/10, the actual stored value is the nearest representable binary fraction.
Interestingly, there are many different decimal numbers that share the same
nearest approximate binary fraction. For example, the
numbers 0.1 and 0.10000000000000001 and 0.10000000000000000555111512
31257827021181583404541015625 are all approximated
by 3602879701896397 / 2 ** 55. Since all of these decimal values share the
same approximation, any one of them could be displayed while still
preserving the invariant eval(repr(x)) == x.
Historically, the Python prompt and built-in repr() function would choose the
one with 17 significant digits, 0.10000000000000001. Starting with Python
3.1, Python (on most systems) is now able to choose the shortest of these
and simply display 0.1.
Note that this is in the very nature of binary floating point: this is not a bug in
Python, and it is not a bug in your code either. You’ll see the same kind of
thing in all languages that support your hardware’s floating-point arithmetic
(although some languages may not display the difference by default, or in all
output modes).
For more pleasant output, you may wish to use string formatting to produce
a limited number of significant digits:
>>>
>>> import math
>>> format(math.pi, '.12g')  # give 12 significant digits
'3.14159265359'
>>> format(math.pi, '.2f')   # give 2 digits after the point
'3.14'
>>> repr(math.pi)
'3.141592653589793'
It’s important to realize that this is, in a real sense, an illusion: you’re simply
rounding the display of the true machine value.
One illusion may beget another. For example, since 0.1 is not exactly 1/10,
summing three values of 0.1 may not yield exactly 0.3, either:
Also, since 0.1 cannot get any closer to the exact value of 1/10 and 0.3
cannot get any closer to the exact value of 3/10, pre-rounding with
the round() function cannot help:
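Both effects are easy to demonstrate:

```python
# Summing three stored values of 0.1 does not give exactly 0.3:
print(0.1 + 0.1 + 0.1 == 0.3)    # False

# Pre-rounding each term does not help, because round(0.1, 1) is the
# same stored binary value as 0.1:
print(round(0.1, 1) + round(0.1, 1) + round(0.1, 1) == round(0.3, 1))  # False
```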
Though the numbers cannot be made closer to their intended exact values,
the math.isclose() function can be useful for comparing inexact values:
>>>
>>> math.isclose(0.1 + 0.1 + 0.1, 0.3)
True
>>>
Binary floating-point arithmetic holds many surprises like this. The problem
with “0.1” is explained in precise detail below, in the “Representation Error”
section. See Examples of Floating Point Problems for a pleasant summary of
how binary floating point works and the kinds of problems commonly
encountered in practice. Also see The Perils of Floating Point for a more
complete account of other common surprises.
As that says near the end, “there are no easy answers.” Still, don’t be unduly
wary of floating point! The errors in Python float operations are inherited
from the floating-point hardware, and on most machines are on the order of
no more than 1 part in 2**53 per operation. That’s more than adequate for
most tasks, but you do need to keep in mind that it’s not decimal arithmetic
and that every float operation can suffer a new rounding error.
For use cases which require exact decimal representation, try using
the decimal module which implements decimal arithmetic suitable for
accounting applications and high-precision applications.
If you are a heavy user of floating-point operations you should take a look at
the NumPy package and many other packages for mathematical and
statistical operations supplied by the SciPy project. See <https://scipy.org>.
Python provides tools that may help on those rare occasions when you
really do want to know the exact value of a float.
The float.as_integer_ratio() method expresses the value of a float as a
fraction:
>>>
>>> x = 3.14159
>>> x.as_integer_ratio()
(3537115888337719, 1125899906842624)
Since the ratio is exact, it can be used to losslessly recreate the original
value:
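For example:

```python
x = 3.14159
num, den = x.as_integer_ratio()
print(num, '/', den)     # the exact integer ratio stored for x
print(x == num / den)    # True: the ratio reproduces the float exactly
```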
The float.hex() method expresses a float in hexadecimal (base 16), again
giving the exact value stored by your computer:
>>>
>>> x.hex()
'0x1.921f9f01b866ep+1'
This precise hexadecimal representation can be used to reconstruct the float
value exactly:
>>>
>>> x == float.fromhex('0x1.921f9f01b866ep+1')
True
Another helpful tool is the sum() function which helps mitigate
loss-of-precision during summation. It uses extended precision for intermediate
rounding steps as values are added onto a running total. That can make a
difference in overall accuracy so that the errors do not accumulate to the
point where they affect the final total:
>>>
>>> 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 == 1.0
False
>>> sum([0.1] * 10) == 1.0
True
The math.fsum() function goes further and tracks all of the “lost digits” as values are
added onto a running total so that the result has only a single rounding. This
is slower than sum() but will be more accurate in uncommon cases where
large magnitude inputs mostly cancel each other out leaving a final sum
near zero:
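A minimal sketch of such a cancellation (the values are made up):

```python
import math

# 1e16 and -1e16 cancel exactly; the challenge is not losing the 1.0
# along the way. fsum() tracks the lost low-order digits:
values = [1e16, 1.0, -1e16]
print(math.fsum(values))   # 1.0
```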
15.1. Representation Error
This section explains the “0.1” example in detail, and shows how you can perform an exact
analysis of cases like this yourself.
Representation error refers to the fact that some (most, actually) decimal fractions cannot be
represented exactly as binary (base 2) fractions. This is the chief reason why Python (or Perl, C,
C++, Java, Fortran, and many others) often won’t display the exact decimal number you expect.
Why is that? 1/10 is not exactly representable as a binary fraction. Since at least 2000, almost all
machines use IEEE 754 binary floating-point arithmetic, and almost all platforms map Python
floats to IEEE 754 binary64 “double precision” values. IEEE 754 binary64 values contain 53 bits
of precision, so on input the computer strives to convert 0.1 to the closest fraction it can of the
form J/2**N where J is an integer containing exactly 53 bits. Rewriting
1 / 10 ~= J / (2**N)
as
J ~= 2**N / 10
and recalling that J has exactly 53 bits (is >= 2**52 but < 2**53), the best value for N is 56:
That is, 56 is the only value for N that leaves J with exactly 53 bits. The best possible value
for J is then that quotient rounded:
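Both steps are a one-liner to check:

```python
# N == 56 is the only choice that keeps J in the 53-bit range:
print(2**52 <= 2**56 // 10 < 2**53)   # True

# The best J is the quotient of 2**56 / 10, rounded using the remainder:
q, r = divmod(2**56, 10)
print(q, r)   # 7205759403792793 6
```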
Since the remainder is more than half of 10, the best approximation is obtained by rounding up:
>>>
>>> q+1
7205759403792794
Therefore the best possible approximation to 1/10 in IEEE 754 double precision is:
7205759403792794 / 2 ** 56
Dividing both the numerator and denominator by two reduces the fraction to:
3602879701896397 / 2 ** 55
Note that since we rounded up, this is actually a little bit larger than 1/10; if we had not rounded
up, the quotient would have been a little bit smaller than 1/10. But in no case can it
be exactly 1/10!
So the computer never “sees” 1/10: what it sees is the exact fraction given above, the best IEEE
754 double approximation it can get:
>>>
>>> 0.1 * 2 ** 55
3602879701896397.0
If we multiply that fraction by 10**55, we can see the value out to 55 decimal digits:
>>>
>>> 3602879701896397 * 10 ** 55 // 2 ** 55
1000000000000000055511151231257827021181583404541015625
meaning that the exact number stored in the computer is equal to the decimal value
0.1000000000000000055511151231257827021181583404541015625. Instead of displaying the
full decimal value, many languages (including older versions of Python), round the result to 17
significant digits:
>>> format(0.1, '.17f')
'0.10000000000000001'
The fractions and decimal modules make these calculations easy:
>>>
>>> from fractions import Fraction
>>> from decimal import Decimal
>>> Fraction.from_float(0.1)
Fraction(3602879701896397, 36028797018963968)
>>> (0.1).as_integer_ratio()
(3602879701896397, 36028797018963968)
>>> Decimal.from_float(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
16. Appendix
16.1. Interactive Mode
16.1.1. Error Handling
When an error occurs, the interpreter prints an error message and a stack trace. In interactive
mode, it then returns to the primary prompt; when input came from a file, it exits with a nonzero
exit status after printing the stack trace. (Exceptions handled by an except clause in
a try statement are not errors in this context.) Some errors are unconditionally fatal and cause
an exit with a nonzero exit status; this applies to internal inconsistencies and some cases of
running out of memory. All error messages are written to the standard error stream; normal
output from executed commands is written to standard output.
Typing the interrupt character (usually Control-C or Delete) to the primary or secondary
prompt cancels the input and returns to the primary prompt. [1] Typing an interrupt while a
command is executing raises the KeyboardInterrupt exception, which may be handled by
a try statement.
16.1.2. Executable Python Scripts
On BSD’ish Unix systems, Python scripts can be made directly executable, like shell scripts, by
putting the line
#!/usr/bin/env python3
(assuming that the interpreter is on the user’s PATH) at the beginning of the script and giving the
file an executable mode. The #! must be the first two characters of the file. On some platforms,
this first line must end with a Unix-style line ending ('\n'), not a Windows ('\r\n') line
ending. Note that the hash, or pound, character, '#', is used to start a comment in Python.
The script can be given an executable mode, or permission, using the chmod command.
$ chmod +x myscript.py
16.1.3. The Interactive Startup File
When you use Python interactively, it is frequently handy to have some standard commands
executed every time the interpreter is started. You can do this by setting an environment variable
named PYTHONSTARTUP to the name of a file containing your start-up commands.
This file is only read in interactive sessions, not when Python reads commands from a script, and
not when /dev/tty is given as the explicit source of commands (which otherwise behaves like
an interactive session). It is executed in the same namespace where interactive commands are
executed, so that objects that it defines or imports can be used without qualification in the
interactive session. You can also change the prompts sys.ps1 and sys.ps2 in this file.
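For instance, a start-up file (a hypothetical ~/.pythonrc.py pointed to by PYTHONSTARTUP) could replace the default >>> and ... prompts:

```python
# Example contents of a PYTHONSTARTUP file. sys.ps1 and sys.ps2 are
# only consulted in interactive mode, but assigning them is harmless
# elsewhere.
import sys

sys.ps1 = 'py> '    # primary prompt, normally '>>> '
sys.ps2 = '.... '   # secondary (continuation) prompt, normally '... '
```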
If you want to read an additional start-up file from the current directory, you can program this in
the global start-up file using code like:
if os.path.isfile('.pythonrc.py'): exec(open('.pythonrc.py').read())
If you want to use the startup file in a script, you must do this explicitly in the script:
import os
filename = os.environ.get('PYTHONSTARTUP')
if filename and os.path.isfile(filename):
    with open(filename) as fobj:
        startup_file = fobj.read()
    exec(startup_file)
16.1.4. The Customization Modules
Python provides two hooks to let you customize it: sitecustomize and usercustomize. To see how
it works, you first need to find the location of your user site-packages directory. Start Python and
run this code:
>>> import site
>>> site.getusersitepackages()
'/home/user/.local/lib/python3.12/site-packages'
Now you can create a file named usercustomize.py in that directory and put anything you
want in it. It will affect every invocation of Python, unless it is started with the -s option to
disable the automatic import.
sitecustomize works in the same way, but is typically created by an administrator of the
computer in the global site-packages directory, and is imported before usercustomize. See the
documentation of the site module for more details.
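As a sketch, a usercustomize.py might extend sys.path with a project directory (the path below is a made-up example):

```python
# Hypothetical contents of usercustomize.py in the user site-packages
# directory. It runs at every interpreter start-up unless Python is
# invoked with the -s option.
import sys

EXTRA_DIR = '/opt/myproject/lib'    # assumed, project-specific path
if EXTRA_DIR not in sys.path:
    sys.path.append(EXTRA_DIR)
```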
Footnotes
[1]
https://docs.python.org/3.12/using/cmdline.html
2. Using Python on Unix platforms
2.1. Getting and installing the latest version of Python
2.1.1. On Linux
Python comes preinstalled on most Linux distributions, and is available as a package on all
others. However, there are certain features you might want to use that are not available in your
distro’s package. You can easily compile the latest version of Python from source.
In the event that Python doesn’t come preinstalled and isn’t in the repositories either, you can
easily make packages for your own distro. Have a look at the following links:
See also
https://www.debian.org/doc/manuals/maint-guide/first.en.html
For example, i386 users get the 2.5.1 version of Python using:
pkg_add ftp://ftp.openbsd.org/pub/OpenBSD/4.2/packages/i386/python-2.5.1p2.tgz
./configure
make
make install
Configuration options and caveats for specific Unix platforms are extensively
documented in the README.rst file in the root of the Python source tree.
Warning
make install can overwrite or masquerade the python3 binary. make altinstall is
therefore recommended instead of make install since it only
installs exec_prefix/bin/pythonversion.
For example, on most Linux systems, the default for both is /usr.
File/directory — Meaning

exec_prefix/bin/python3
    Recommended location of the interpreter.

prefix/lib/pythonversion, exec_prefix/lib/pythonversion
    Recommended locations of the directories containing the standard modules.

prefix/include/pythonversion, exec_prefix/include/pythonversion
    Recommended locations of the directories containing the include files needed for
    developing Python extensions and embedding the interpreter.
2.4. Miscellaneous
To easily use Python scripts on Unix, you need to make them executable, e.g.
with
$ chmod +x script
and put an appropriate Shebang line at the top of the script. A good choice is
usually
#!/usr/bin/env python3
which searches for the Python interpreter in the whole PATH. However, some
Unices may not have the env command, so you may need to
hardcode /usr/bin/python3 as the interpreter path.
16. Build Python with custom OpenSSL (see the configure --with-openssl
and --with-openssl-rpath options)
17. $ pushd python-3.x.x
18. $ ./configure -C \
19. --with-openssl=/usr/local/custom-openssl \
20. --with-openssl-rpath=auto \
21. --prefix=/usr/local/python-3.x.x
22. $ make -j8
23. $ make altinstall
Note
Changed in version 3.7: Thread support and OpenSSL 1.0.2 are now required.
Changed in version 3.11: C11 compiler, IEEE 754 and NaN support are now required. On
Windows, Visual Studio 2017 or later is required.
See also PEP 7 “Style Guide for C Code” and PEP 11 “CPython platform support”.
make regen-all
make regen-stdlib-module-names
make regen-limited-abi
make regen-configure
The Makefile.pre.in file documents generated files, their inputs, and tools used to
regenerate them. Search for regen-* make targets.
3.2.1. configure script
The make regen-configure command regenerates the aclocal.m4 file and
the configure script using the Tools/build/regen-configure.sh shell script which
uses an Ubuntu container to get the same tools versions and have a reproducible output.
./configure --help
--disable-ipv6
Disable IPv6 support (enabled by default if supported), see the socket module.
--enable-big-digits=[15|30]
Define the size in bits of Python int digits: 15 or 30.
--with-suffix=SUFFIX
Set the Python executable suffix to SUFFIX. The default suffix is .exe on Windows and
macOS (python.exe executable), .js on Emscripten node, .html on Emscripten
browser, .wasm on WASI, and an empty string on other platforms (python executable).
--with-tzpath=<list of absolute paths separated by pathsep>
Select the default time zone search path for zoneinfo.TZPATH. See the Compile-time
configuration of the zoneinfo module.
Default: /usr/share/zoneinfo:/usr/lib/zoneinfo:/usr/share/lib/
zoneinfo:/etc/zoneinfo.
--without-decimal-contextvar
Build the _decimal extension module using a thread-local context rather than a
coroutine-local context (default), see the decimal module.
--with-dbmliborder=<list of backend names>
Override order to check db backends for the dbm module. A valid value is a colon (:)
separated string with the backend names:
ndbm;
gdbm;
bdb.
--without-c-locale-coercion
Disable C locale coercion to a UTF-8 based locale (enabled by default).
--with-platlibdir=DIRNAME
Python library directory name (default is lib). See sys.platlibdir.
--with-wheel-pkg-dir=PATH
Directory of wheel packages used by the ensurepip module (none by default).
--with-pkg-config=[check|yes|no]
Whether configure should use pkg-config to detect build dependencies.
--enable-pystats
Turn on internal Python performance statistics gathering (default is no).
3.3.2. WebAssembly Options
--with-emscripten-target=[browser|node]
--enable-wasm-dynamic-linking
Dynamic linking enables dlopen. File size of the executable increases due to limited
dead code elimination and additional features.
--enable-wasm-pthreads
3.3.3. Install Options
--prefix=PREFIX
--disable-test-modules
Don’t build nor install test modules, like the test package or the _testcapi extension
module (built and installed by default).
--with-ensurepip=[upgrade|install|no]
--enable-optimizations
The C compiler Clang requires the llvm-profdata program for PGO. On macOS, GCC
also requires it: GCC is just an alias to Clang on macOS.
Note
During the build, you may encounter compiler warnings about profile data not being
available for some source files. These warnings are harmless, as only a subset of the code
is exercised during profile data acquisition. To disable these warnings on Clang,
manually suppress them by adding -Wno-profile-instr-unprofiled to CFLAGS.
PROFILE_TASK
Environment variable used in the Makefile: Python command line arguments for the PGO
generation task.
--with-lto=[full|thin|no|yes]
The C compiler Clang requires llvm-ar for LTO (ar on macOS), as well as an LTO-
aware linker (ld.gold or lld).
Changed in version 3.12: Use ThinLTO as the default optimization policy on Clang if the
compiler accepts the flag.
--enable-bolt
BOLT is part of the LLVM project but is not always included in their binary
distributions. This flag requires that llvm-bolt and merge-fdata are available.
BOLT is still a fairly new project, so this flag should be considered experimental for now.
Because this tool operates on machine code, its success depends on a combination of the
build environment, the other optimization configure arguments, and the CPU architecture,
and not all combinations are supported. BOLT versions before LLVM 16 are known to
crash under some scenarios. Use of LLVM 16 or newer for BOLT optimization is
strongly encouraged.
--with-computed-gotos
--without-doc-strings
Disable static documentation strings to reduce the memory footprint (enabled by default).
Documentation strings defined in Python are not affected.
3.3.6. Debug build
--with-pydebug
Build Python in debug mode: define the Py_DEBUG macro (disabled by default).
Effects of a debug build:
Display all warnings by default: the list of default warning filters is empty
in the warnings module.
Add d to sys.abiflags.
Add the sys.gettotalrefcount() function.
Add the -X showrefcount command line option.
Add the -d command line option and the PYTHONDEBUG environment
variable to debug the parser.
Add support for the __lltrace__ variable: enable low-level tracing in the bytecode
evaluation loop if the variable is defined.
Install debug hooks on memory allocators to detect buffer overflow
and other memory errors.
Define Py_DEBUG and Py_REF_DEBUG macros.
Add runtime checks: code surrounded by #ifdef Py_DEBUG and #endif.
Enable assert(...) and _PyObject_ASSERT(...) assertions: don't
set the NDEBUG macro (see also the --with-assertions configure
option). Main effects:
o Add sanity checks on the function arguments.
o Unicode and int objects are created with their memory filled with a pattern to detect usage
of uninitialized objects.
o Ensure that functions which can clear or replace the current exception are not called
with an exception raised.
o Check that deallocator functions don't change the current exception.
o The garbage collector (gc.collect() function) runs some basic
checks on objects' consistency.
o The Py_SAFE_DOWNCAST() macro checks for integer underflow and overflow when
downcasting from wide types to narrow types.
Changed in version 3.8: Release builds and debug builds are now ABI
compatible: defining the Py_DEBUG macro no longer
implies the Py_TRACE_REFS macro (see the --with-trace-
refs option), which introduces the only ABI incompatibility.
--with-trace-refs
Enable tracing references for debugging purposes (disabled by default).
Effects:
Define the Py_TRACE_REFS macro.
Add a sys.getobjects() function.
Add support for the PYTHONDUMPREFS environment variable.
This build is not ABI compatible with release build (default build) or debug build
(Py_DEBUG and Py_REF_DEBUG macros).
--with-assertions
Build with C assertions enabled (default is no).
If set, the NDEBUG macro is not defined in the OPT compiler variable.
See also the --with-pydebug option (debug build) which also enables assertions.
--with-valgrind
Enable Valgrind support (default is no).
--with-address-sanitizer
Enable AddressSanitizer memory error detector, asan (default is no).
--with-memory-sanitizer
Enable MemorySanitizer allocation error detector, msan (default is no).
--with-undefined-behavior-sanitizer
Enable UndefinedBehaviorSanitizer undefined behaviour detector, ubsan (default is no).
3.3.7. Linker options
--enable-shared
Enable building a shared Python library: libpython (default is no).
3.3.8. Libraries options
--with-libs='lib1 ...'
Link against additional libraries (default is no).
--with-system-expat
Build the pyexpat module using an installed expat library (default is no).
--with-system-libmpdec
Build the _decimal extension module using an installed mpdec library, see
the decimal module (default is no).
--with-readline=readline|editline
Designate a backend library for the readline module (default is readline).
--without-readline
Don't build the readline module (built by default).
--with-libm=STRING
Override libm math library to STRING (default is system-dependent).
3.3.9. Security options
--with-hash-algorithm=[fnv|siphash13|siphash24]
Select hash algorithm for use in Python/pyhash.c:
siphash13 (default);
siphash24;
fnv.
--with-builtin-hashlib-hashes=md5,sha1,sha256,sha512,sha3,blake2
Built-in hash modules:
md5;
sha1;
sha256;
sha512;
sha3 (with shake);
blake2.
--with-ssl-default-suites=[python|openssl|STRING]
Override the OpenSSL default cipher suites string.
Changed in version 3.10: The settings python and STRING also set TLS 1.2 as
minimum protocol version.
3.3.10. macOS options
See Mac/README.rst.
--enable-universalsdk
--enable-universalsdk=SDKDIR
Create a universal binary build. SDKDIR specifies which macOS SDK should be used to
perform the build (default is no).
--enable-framework
--enable-framework=INSTALLDIR
Create a Python.framework rather than a traditional Unix install (default is no).
--with-universal-archs=ARCH
Specify the kind of universal binary that should be created. This option is only valid
when --enable-universalsdk is set.
Options:
universal2;
32-bit;
64-bit;
3-way;
intel;
intel-32;
intel-64;
all.
--with-framework-name=FRAMEWORK
Specify the name for the python framework on macOS (default: Python). Only valid
when --enable-framework is set.
3.3.11. Cross compiling options
Cross compiling, also known as cross building, can be used to build Python for another CPU
architecture or platform. Cross compiling requires a Python interpreter for the build platform.
--build=BUILD
configure for building on BUILD, usually guessed by config.guess.
CONFIG_SITE
Environment variable pointing to a file with configure overrides.
Example of config.site file:
# config.site-aarch64
ac_cv_buggy_getaddrinfo=no
ac_cv_file__dev_ptmx=yes
ac_cv_file__dev_ptc=no
Cross compiling example:
CONFIG_SITE=config.site-aarch64 ../configure \
    --build=x86_64-pc-linux-gnu \
    --host=aarch64-unknown-linux-gnu \
    --with-build-python=python3.12
3.4. Python Build System
3.4.1. Main files of the build system
configure: configure script based on configure.ac.
Makefile.pre.in: Makefile template, used by configure to create the Makefile.
pyconfig.h.in: template used by configure to create pyconfig.h.
Modules/Setup: C extensions built by the Makefile.
3.4.2. Main build steps
C files (.c) are built as object files (.o).
A static libpython library (.a) is created from object files.
python.o and the static libpython library are linked into the final program python.
C extensions are built by the Makefile (see Modules/Setup).
3.4.3. Main Makefile targets
make: Build Python with the standard library.
make platform: build the python program, but don't build the standard library
extension modules.
make profile-opt: build Python using Profile Guided Optimization (PGO). You can
use the configure --enable-optimizations option to make this the default target of
the make command.
make buildbottest: Build Python and run the Python test suite, the same way as
buildbots test Python. Set TESTTIMEOUT variable (in seconds) to change the test timeout
(1200 by default: 20 minutes).
make install: Build and install Python.
make regen-all: Regenerate (almost) all generated files.
make clean: Remove built files.
make distclean: Same as make clean, but also remove files created by
the configure script.
3.4.4. C extensions
Some C extensions are built as built-in modules, like the sys module. They are built with
the Py_BUILD_CORE_BUILTIN macro defined. Built-in modules have
no __file__ attribute:
>>> import sys
>>> sys
<module 'sys' (built-in)>
>>> sys.__file__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'sys' has no attribute '__file__'
Other C extensions are built as dynamic libraries, like the _asyncio module. They are built
with the Py_BUILD_CORE_MODULE macro defined. Example on Linux x86-64:
>>> import _asyncio
>>> _asyncio
<module '_asyncio' from '/usr/lib64/python3.9/lib-dynload/_asyncio.cpython-39-x86_64-linux-gnu.so'>
>>> _asyncio.__file__
'/usr/lib64/python3.9/lib-dynload/_asyncio.cpython-39-x86_64-linux-gnu.so'
Modules/Setup is used to generate Makefile targets to build C extensions. At the beginning
of the file, C extensions are built as built-in modules. Extensions defined after
the *shared* marker are built as dynamic libraries.
The PyAPI_FUNC(), PyAPI_DATA() and PyMODINIT_FUNC macros are defined
differently depending on whether the Py_BUILD_CORE_MODULE macro is defined:
Use Py_EXPORTED_SYMBOL if the Py_BUILD_CORE_MODULE is defined
Use Py_IMPORTED_SYMBOL otherwise.
If the Py_BUILD_CORE_BUILTIN macro is used by mistake on a C extension built as a
shared library, its PyInit_xxx() function is not exported, causing an ImportError on
import.
3.5. Compiler and linker flags
Options set by the ./configure script and environment variables, used by the Makefile.
3.5.1. Preprocessor flags
CONFIGURE_CPPFLAGS
Value of the CPPFLAGS variable passed to the ./configure script.
CPPFLAGS
(Objective) C/C++ preprocessor flags, e.g. -I<include dir> if you have headers in a
nonstandard directory <include dir>.
Both CPPFLAGS and LDFLAGS need to contain the shell’s value to be able to build
extension modules using the directories specified in the environment variables.
BASECPPFLAGS
PY_CPPFLAGS
Extra preprocessor flags added for building the interpreter object files.
3.5.2. Compiler flags
CC
C compiler command.
CFLAGS_NODIST is used for building the interpreter and stdlib C extensions. Use it
when a compiler flag should not be part of CFLAGS once Python is installed (gh-65320).
In particular, CFLAGS should not contain:
the compiler flag -I (for setting the search path for include files). The -I flags
are processed from left to right, and any flags in CFLAGS would take precedence
over user- and package-supplied -I flags.
hardening flags such as -Werror because distributions cannot control whether
packages installed by users conform to such heightened standards.
COMPILEALL_OPTS
Options passed to the compileall command line when building PYC files
in make install. Default: -j0.
EXTRA_CFLAGS
CONFIGURE_CFLAGS
OPT
Optimization flags.
CFLAGS_ALIASING
Strict or non-strict aliasing flags used to compile Python/dtoa.c.
PY_BUILTIN_MODULE_CFLAGS
Compiler flags to build a standard library extension module as a built-in module, like
the posix module.
Default: $(PY_STDMODULE_CFLAGS) -DPy_BUILD_CORE_BUILTIN.
Avoid assigning CFLAGS, LDFLAGS, etc. so users can use them on the command line to
append to these values without stomping the pre-set values.
In particular, LDFLAGS should not contain:
the compiler flag -L (for setting the search path for libraries). The -L flags are
processed from left to right, and any flags in LDFLAGS would take precedence
over user- and package-supplied -L flags.
LDFLAGS
Linker flags, e.g. -Llib_dir if you have libraries in a nonstandard directory lib_dir.
Both CPPFLAGS and LDFLAGS need to contain the shell’s value to be able to build
extension modules using the directories specified in the environment variables.
LIBS
Linker flags to pass libraries to the linker when linking the Python executable.
Example: -lrt.
Unlike most Unix systems and services, Windows does not include a system
supported installation of Python. To make Python available, the CPython
team has compiled Windows installers with every release for many years.
These installers are primarily intended to add a per-user installation of
Python, with the core interpreter and library being used by a single user. The
installer is also able to install for all users of a single machine, and a
separate ZIP file is available for application-local distributions.
As specified in PEP 11, a Python release only supports a Windows platform
while Microsoft considers the platform under extended support. This means
that Python 3.12 supports Windows 8.1 and newer. If you require Windows 7
support, please install Python 3.8.
There are a number of different installers available for Windows, each with
certain benefits and downsides.
The full installer contains all components and is the best option for
developers using Python for any kind of project.
You will not need to be an administrator (unless a system update for the C Runtime
Library is required or you install the Python Launcher for Windows for all users)
Python will be installed into your user directory
The Python Launcher for Windows will be installed according to the option at the bottom
of the first page
The standard library, test suite, launcher and pip will be installed
If selected, the install directory will be added to your PATH
Shortcuts will only be visible for the current user
Selecting “Customize installation” will allow you to select the features to install, the installation
location and other options or post-install actions. To install debugging symbols or binaries, you
will need to use this option.
To perform an all-users installation, you should select “Customize installation”. In this case:
Windows historically has limited path lengths to 260 characters. In the latest versions of
Windows, this limitation can be expanded to approximately 32,000 characters. Your
administrator will need to activate the “Enable Win32 long paths” group policy,
or set LongPathsEnabled to 1 in the registry key HKEY_LOCAL_MACHINE\SYSTEM\
CurrentControlSet\Control\FileSystem.
This allows the open() function, the os module and most other path functionality to accept and
return paths longer than 260 characters.
Changed in version 3.6: Support for long paths was enabled in Python.
The following options (found by executing the installer with /?) can be passed into the installer:
Name Description
/passive to display progress without requiring user interaction
/quiet to install/uninstall without displaying any UI
/simple to prevent user customization
/uninstall to remove Python (without confirmation)
/layout [directory] to pre-download all components
/log [filename] to specify log files location
All other options are passed as name=value, where the value is usually 0 to disable a
feature, 1 to enable a feature, or a path. The full list of available options is shown below.
Name / Description / Default

InstallAllUsers
    Perform a system-wide installation. Default: 0
TargetDir
    The installation directory. Default: selected based on InstallAllUsers
DefaultAllUsersTargetDir
    The default installation directory for all-user installs.
    Default: %ProgramFiles%\Python X.Y or %ProgramFiles(x86)%\Python X.Y
DefaultJustForMeTargetDir
    The default install directory for just-for-me installs.
    Default: %LocalAppData%\Programs\Python\PythonXY,
    %LocalAppData%\Programs\Python\PythonXY-32 or
    %LocalAppData%\Programs\Python\PythonXY-64
DefaultCustomTargetDir
    The default custom install directory displayed in the UI. Default: (empty)
AssociateFiles
    Create file associations if the launcher is also installed. Default: 1
CompileAll
    Compile all .py files to .pyc. Default: 0
PrependPath
    Prepend install and Scripts directories to PATH and add .PY to PATHEXT. Default: 0
AppendPath
    Append install and Scripts directories to PATH and add .PY to PATHEXT. Default: 0
Shortcuts
    Create shortcuts for the interpreter, documentation and IDLE if installed. Default: 1
Include_doc
    Install Python manual. Default: 1
Include_debug
    Install debug binaries. Default: 0
Include_dev
    Install developer headers and libraries. Omitting this may lead to an unusable
    installation. Default: 1
Include_exe
    Install python.exe and related files. Omitting this may lead to an unusable
    installation. Default: 1
Include_launcher
    Install Python Launcher for Windows. Default: 1
InstallLauncherAllUsers
    Installs the launcher for all users. Also requires Include_launcher to be set
    to 1. Default: 1
Include_lib
    Install standard library and extension modules. Omitting this may lead to an
    unusable installation. Default: 1
Include_pip
    Install bundled pip and setuptools. Default: 1
Include_symbols
    Install debugging symbols (*.pdb). Default: 0
Include_tcltk
    Install Tcl/Tk support and IDLE. Default: 1
Include_test
    Install standard library test suite. Default: 1
Include_tools
    Install utility scripts. Default: 1
LauncherOnly
    Only installs the launcher. This will override most other options. Default: 0
SimpleInstall
    Disable most install UI. Default: 0
SimpleInstallDescription
    A custom message to display when the simplified install UI is used. Default: (empty)
For example, to silently install a default, system-wide Python installation, you could use the
following command (from an elevated command prompt):
python-3.9.0.exe /quiet InstallAllUsers=1 PrependPath=1 Include_test=0
To allow users to easily install a personal copy of Python without the test suite, you could
provide a shortcut with the following command. This will display a simplified initial page and
disallow customization:
python-3.9.0.exe InstallAllUsers=0 Include_launcher=0 Include_test=0 SimpleInstall=1 SimpleInstallDescription="Just for me, no test suite."
The options listed above can also be provided in a file named unattend.xml alongside the
executable. This file specifies a list of options and values. When a value is provided as an
attribute, it will be converted to a number if possible. Values provided as element text are always
left as strings. This example file sets the same options as the previous example:
<Options>
<Option Name="InstallAllUsers" Value="no" />
<Option Name="Include_launcher" Value="0" />
<Option Name="Include_test" Value="no" />
<Option Name="SimpleInstall" Value="yes" />
<Option Name="SimpleInstallDescription">Just for me, no test suite</Option>
</Options>
Execute the following command from Command Prompt to download all possible required files.
Remember to substitute python-3.9.0.exe for the actual name of your installer, and to
create layouts in their own directories to avoid collisions between files with the same name.
python-3.9.0.exe /layout [optional target directory]
You may also specify the /quiet option to hide the progress display.
“Modify” allows you to add or remove features by modifying the checkboxes - unchanged
checkboxes will not install or remove anything. Some options cannot be changed in this mode,
such as the install directory; to modify these, you will need to remove and then reinstall Python
completely.
“Repair” will verify all the files that should be installed using the current settings and replace any
that have been removed or modified.
“Uninstall” will remove Python entirely, with the exception of the Python Launcher for
Windows, which has its own entry in Programs and Features.
The Microsoft Store package is an easily installable Python interpreter that is intended mainly for
interactive use, for example, by students.
To install the package, ensure you have the latest Windows 10 updates and search the Microsoft
Store app for “Python 3.12”. Ensure that the app you select is published by the Python Software
Foundation, and install it.
Warning
Python will always be available for free on the Microsoft Store. If you are asked to pay for it,
you have not selected the correct package.
Three commands, python, pip and idle, are installed by the package; all three are also
available with version number suffixes, for example, as python3.exe
and python3.x.exe as well as python.exe (where 3.x is the specific
version you want to launch, such as 3.12). Open “Manage App Execution Aliases” through Start
to select which version of Python is associated with each command. It is recommended to make
sure that pip and idle are consistent with whichever version of python is selected.
Virtual environments can be created with python -m venv and activated and used as normal.
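The same environments can also be created from code with the standard venv module; a minimal sketch that builds one in a temporary directory:

```python
# Programmatic equivalent of "python -m venv demo-env".
import os
import tempfile
import venv

target = os.path.join(tempfile.mkdtemp(), 'demo-env')
venv.create(target, with_pip=False)   # with_pip=False keeps the demo fast

# A pyvenv.cfg file marks the directory as a virtual environment.
print(os.path.exists(os.path.join(target, 'pyvenv.cfg')))   # prints True
```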
If you have installed another version of Python and added it to your PATH variable, it will be
available as python.exe rather than the one from the Microsoft Store. To access the new
installation, use python3.exe or python3.x.exe.
The py.exe launcher will detect this Python installation, but will prefer installations from the
traditional installer.
To remove Python, open Settings and use Apps and Features, or else find Python in Start and
right-click to select Uninstall. Uninstalling will remove all packages you installed directly into
this Python installation, but will not remove any virtual environments.
4.2.1. Known issues
4.2.1.1. Redirection of local data, registry, and temporary paths
Because of restrictions on Microsoft Store apps, Python scripts may not have full write access to
shared locations such as TEMP and the registry. Instead, it will write to a private copy. If your
scripts must modify the shared locations, you will need to install the full installer.
At runtime, Python will use a private copy of well-known Windows folders and the registry. For
example, if the environment variable %APPDATA% is c:\Users\<user>\AppData\, then
writes to C:\Users\<user>\AppData\Local will actually land in C:\Users\<user>\
AppData\Local\Packages\
PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\Local\.
When reading files, Windows will return the file from the private folder, or if that does not exist,
the real Windows directory. For example, reading C:\Windows\System32 returns the contents
of C:\Windows\System32 plus the contents of C:\Program Files\WindowsApps\
package_name\VFS\SystemX86.
You can find the real path of any existing file using os.path.realpath():
>>> import os
>>> test_file = 'C:\\Users\\example\\AppData\\Local\\test.txt'
>>> os.path.realpath(test_file)
'C:\\Users\\example\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\Local\\test.txt'
For more detail on the technical basis for these limitations, please consult Microsoft’s
documentation on packaged full-trust apps, currently available
at docs.microsoft.com/en-us/windows/msix/desktop/desktop-to-uwp-behind-the-scenes
4.3. The nuget.org packages
Added in version 3.5.2.
The nuget.org package is a reduced size Python environment intended for use on continuous
integration and build systems that do not have a system-wide install of Python. While nuget is
“the package manager for .NET”, it also works perfectly fine for packages containing build-time
tools.
Visit nuget.org for the most up-to-date information on using nuget. What follows is a summary
that is sufficient for Python developers.
To select a particular version, add a -Version 3.x.y. The output directory may be changed
from ., and the package will be installed into a subdirectory. By default, the subdirectory is
named the same as the package, and without the -ExcludeVersion option this name will
include the specific version installed. Inside the subdirectory is a tools directory that contains
the Python installation:
# Without -ExcludeVersion
> .\python.3.5.2\tools\python.exe -V
Python 3.5.2
# With -ExcludeVersion
> .\python\tools\python.exe -V
Python 3.5.2
In general, nuget packages are not upgradeable, and newer versions should be installed side-by-
side and referenced using the full path. Alternatively, delete the package directory manually and
install it again. Many CI systems will do this automatically if they do not preserve files between
builds.
The package information pages on nuget.org are www.nuget.org/packages/python for the 64-bit
version and www.nuget.org/packages/pythonx86 for the 32-bit version.
4.4. The embeddable package
Added in version 3.5.
The embedded distribution is a ZIP file containing a minimal Python environment. It is intended
for acting as part of another application, rather than being directly accessed by end-users.
When extracted, the embedded distribution is (almost) fully isolated from the user’s system,
including environment variables, system registry settings, and installed packages. The standard
library is included as pre-compiled and optimized .pyc files in a ZIP,
and python3.dll, python37.dll, python.exe and pythonw.exe are all provided.
Tcl/tk (including all dependents, such as Idle), pip and the Python documentation are not
included.
Note
The embedded distribution does not include the Microsoft C Runtime and it is the responsibility
of the application installer to provide this. The runtime may have already been installed on a
user’s system previously or automatically via Windows Update, and can be detected by
finding ucrtbase.dll in the system directory.
Third-party packages should be installed by the application installer alongside the embedded
distribution. Using pip to manage dependencies as for a regular Python installation is not
supported with this distribution, though with some care it may be possible to include and use pip
for automatic updates. In general, third-party packages should be treated as part of the
application (“vendoring”) so that the developer can ensure compatibility with newer versions
before providing updates to users.
The two recommended use cases for this distribution are described below.
Using a specialized executable as a launcher requires some coding, but provides the most
transparent experience for users. With a customized launcher, there are no obvious indications
that the program is running on Python: icons can be customized, company and version
information can be specified, and file associations behave properly. In most cases, a custom
launcher should simply be able to call Py_Main with a hard-coded command line.
The simpler approach is to provide a batch file or generated shortcut that directly calls
the python.exe or pythonw.exe with the required command-line arguments. In this case,
the application will appear to be Python and not its actual name, and users may have trouble
distinguishing it from other running Python processes or file associations.
With the latter approach, packages should be installed as directories alongside the Python
executable to ensure they are available on the path. With the specialized launcher, packages can
be located in other locations as there is an opportunity to specify the search path before
launching the application.
As with the application use, packages can be installed to any location as there is an opportunity
to specify search paths before initializing the interpreter. Otherwise, there are no fundamental
differences between using the embedded distribution and a regular installation.
ActivePython
Installer with multi-platform compatibility, documentation, PyWin32.
Anaconda
Popular scientific modules (such as numpy, scipy and pandas) and the conda package
manager.
WinPython
Windows-specific distribution with prebuilt scientific packages and tools for building
packages.
Note that these packages may not include the latest versions of Python or
other libraries, and are not maintained or supported by the core Python team.
These changes will apply to any further commands executed in that console,
and will be inherited by any applications started from the console.
Including the variable name within percent signs will expand to the existing
value, allowing you to add your new value at either the start or the end.
Modifying PATH by adding the directory containing python.exe to the start is
a common way to ensure the correct version of Python is launched.
Windows will concatenate User variables after System variables, which may
cause unexpected results when modifying PATH.
Besides using the automatically created start menu entry for the
Python interpreter, you might want to start Python in the
command prompt. The installer has an option to set that up for
you.
If you don’t enable this option at install time, you can always re-
run the installer, select Modify, and enable it. Alternatively, you
can manually modify the PATH using the directions in Excursus:
Setting environment variables. You need to set
your PATH environment variable to include the directory of
your Python installation, delimited by a semicolon from other
entries. An example variable could look like this (assuming the
first two entries already existed):
C:\WINDOWS\system32;C:\WINDOWS;C:\Program Files\Python 3.9
You can use the Python UTF-8 Mode to change the default text
encoding to UTF-8. You can enable the Python UTF-8
Mode via the -X utf8 command line option, or
the PYTHONUTF8=1 environment variable.
See PYTHONUTF8 for enabling UTF-8 mode, and Excursus:
Setting environment variables for how to modify environment
variables.
When the Python UTF-8 Mode is enabled, you can still use the
system encoding (the ANSI Code Page) via the “mbcs” codec.
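Whether UTF-8 Mode is active can be checked from within Python itself; a minimal sketch:

```python
import sys

# sys.flags.utf8_mode is 1 when UTF-8 Mode is enabled (via -X utf8 or
# PYTHONUTF8=1) and 0 otherwise.
print(sys.flags.utf8_mode)
```

For example, py -X utf8 -c "import sys; print(sys.flags.utf8_mode)" should print 1.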
Note
Unlike the PATH variable, the launcher will correctly select the
most appropriate version of Python. It will prefer per-user
installations over system-wide ones, and orders by language
version rather than using the most recently installed version.
py
You should find that the latest version of Python you have
installed is started - it can be exited as normal, and any
additional command-line arguments specified will be sent
directly to Python.
py -3.7
If you want the latest version of Python 2 you have installed, try
the command:
py -2
If you see the following error, you do not have the launcher
installed:
'py' is not recognized as an internal or external command,
operable program or batch file.
The command:
py --list
displays the currently installed version(s) of Python.
#! python
import sys
sys.stdout.write("hello from Python %s\n" % (sys.version,))
py hello.py
You should notice the version number of your latest Python 2.x
installation is printed. Now try changing the first line to be:
#! python3
Re-executing the command should now print the latest Python
3.x information. As with the above command-line examples,
you can specify a more explicit version qualifier. Assuming you
have Python 3.7 installed, try changing the first line
to #! python3.7 and you should find the 3.7 version
information printed.
Note that unlike interactive use, a bare “python” will use the
latest version of Python 2.x that you have installed. This is for
backward compatibility and for compatibility with Unix, where
the command python typically refers to Python 2.
/usr/bin/env
/usr/bin/python
/usr/local/bin/python
python
Shebang lines that do not match any of these patterns are looked
up in the [commands] section of the launcher’s .INI file. This
may be used to handle certain commands in a way that makes
sense for your system. The name of the command must be a
single argument (no spaces in the shebang executable), and the
value substituted is the full path to the executable (additional
arguments specified in the .INI will be quoted as part of the
filename).
[commands]
/bin/xpython=C:\Program Files\XPython\python.exe
#! /usr/bin/python -v
4.8.4. Customization
4.8.4.1. Customization via INI files
Two .ini files will be searched by the launcher - py.ini in the
current user’s application data directory (%LOCALAPPDATA%
or $env:LocalAppData) and py.ini in the same directory as the
launcher. The same .ini files are used for both the ‘console’ version
of the launcher (i.e. py.exe) and for the ‘windows’ version
(i.e. pyw.exe).
Examples:
For example:
4.8.5. Diagnostics
If an environment variable PYLAUNCHER_DEBUG is set (to any
value), the launcher will print diagnostic information to stderr
(i.e. to the console). While this information manages to be
simultaneously verbose and terse, it should allow you to see
what versions of Python were located, why a particular version
was chosen and the exact command-line used to execute the
target Python. It is primarily intended for testing and debugging.
4.8.6. Dry Run
If an environment variable PYLAUNCHER_DRYRUN is set (to
any value), the launcher will output the command it would have
run, but will not actually launch Python. This may be useful for
tools that want to use the launcher to detect and then launch
Python directly. Note that the command written to standard
output is always encoded using UTF-8, and may not render
correctly in the console.
The names of codes are as used in the sources, and are only for
reference. There is no way to access or resolve them apart from
reading this page. Entries are listed in alphabetical order of
names.
4.10.1. PyWin32
The PyWin32 module by Mark Hammond is a collection of
modules for advanced Windows-specific support. This includes
utilities for:
See also
Win32 How Do I…?
by Tim Golden
Python and COM
A Python 3.12 folder in your Applications folder. In here you find IDLE, the
development environment that is a standard part of official Python distributions;
and Python Launcher, which handles double-clicking Python scripts from the Finder.
A framework /Library/Frameworks/Python.framework, which includes the
Python executable and libraries. The installer adds this location to your shell path. A
symlink to the Python executable is placed in /usr/local/bin/. To uninstall Python,
you can remove these three things.
Note
IDLE includes a Help menu that allows you to access Python documentation. If you are
completely new to Python you should start reading the tutorial introduction in that document.
If you are familiar with Python on other Unix platforms you should read the section on running
Python scripts from the Unix shell.
If you want to run Python scripts from the Terminal window command line or from the Finder
you first need an editor to create your script. macOS comes with a number of standard Unix
command line editors, vim and nano among them. If you want a more Mac-like editor, BBEdit from
Bare Bones Software (see https://www.barebones.com/products/bbedit/index.html) is a good
choice, as is TextMate (see https://macromates.com). Other editors
include MacVim (https://macvim.org) and Aquamacs (https://aquamacs.org).
To run your script from the Terminal window you must make sure that /usr/local/bin is in
your shell search path.
To run your script from the Finder you have two options:
For more information on installing Python packages, see section Installing Additional Python
Packages.
The standard Python GUI toolkit is tkinter, based on the cross-platform Tk toolkit
(https://www.tcl.tk). An Aqua-native version of Tk is bundled with macOS by Apple, and the
latest version can be downloaded and installed from https://www.activestate.com; it can also be
built from source.
https://www.python.org/community/sigs/current/pythonmac-sig/
https://wiki.python.org/moin/MacPython
1. Introduction
o 1.1. Alternate Implementations
o 1.2. Notation
2. Lexical analysis
o 2.1. Line structure
o 2.2. Other tokens
o 2.3. Identifiers and keywords
o 2.4. Literals
o 2.5. Operators
o 2.6. Delimiters
3. Data model
o 3.1. Objects, values and types
o 3.2. The standard type hierarchy
o 3.3. Special method names
o 3.4. Coroutines
4. Execution model
o 4.1. Structure of a program
o 4.2. Naming and binding
o 4.3. Exceptions
5. The import system
o 5.1. importlib
o 5.2. Packages
o 5.3. Searching
o 5.4. Loading
o 5.5. The Path Based Finder
o 5.6. Replacing the standard import system
o 5.7. Package Relative Imports
o 5.8. Special considerations for __main__
o 5.9. References
6. Expressions
o 6.1. Arithmetic conversions
o 6.2. Atoms
o 6.3. Primaries
o 6.4. Await expression
o 6.5. The power operator
o 6.6. Unary arithmetic and bitwise operations
o 6.7. Binary arithmetic operations
o 6.8. Shifting operations
o 6.9. Binary bitwise operations
o 6.10. Comparisons
o 6.11. Boolean operations
o 6.12. Assignment expressions
o 6.13. Conditional expressions
o 6.14. Lambdas
o 6.15. Expression lists
o 6.16. Evaluation order
o 6.17. Operator precedence
7. Simple statements
o 7.1. Expression statements
o 7.2. Assignment statements
o 7.3. The assert statement
o 7.4. The pass statement
o 7.5. The del statement
o 7.6. The return statement
o 7.7. The yield statement
o 7.8. The raise statement
o 7.9. The break statement
o 7.10. The continue statement
o 7.11. The import statement
o 7.12. The global statement
o 7.13. The nonlocal statement
o 7.14. The type statement
8. Compound statements
o 8.1. The if statement
o 8.2. The while statement
o 8.3. The for statement
o 8.4. The try statement
o 8.5. The with statement
o 8.6. The match statement
o 8.7. Function definitions
o 8.8. Class definitions
o 8.9. Coroutines
o 8.10. Type parameter lists
9. Top-level components
o 9.1. Complete Python programs
o 9.2. File input
o 9.3. Interactive input
o 9.4. Expression input
10. Full Grammar specification
1. Introduction
This reference manual describes the Python programming language. It is not
intended as a tutorial.
CPython
Jython
Python implemented in Java. This implementation can be used as a scripting language for
Java applications, or can be used to create applications using the Java class libraries. It is
also often used to create tests for Java libraries. More information can be found at the
Jython website.
Python for .NET
This implementation actually uses the CPython implementation, but is a managed .NET
application and makes .NET libraries available. It was created by Brian Lloyd. For more
information, see the Python for .NET home page.
IronPython
PyPy
1.2. Notation
The descriptions of lexical analysis and syntax use a modified Backus–
Naur form (BNF) grammar notation. This uses the following style of
definition:
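A representative pair of definitions in this notation (the rule names here are illustrative) looks like this:

```
name      ::= lc_letter (lc_letter | "_")*
lc_letter ::= "a"..."z"
```

The first rule says a name is a lowercase letter followed by zero or more lowercase letters or underscores; the second uses the character-range convention described below.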
Each rule begins with a name (which is the name defined by the rule)
and ::=. A vertical bar (|) is used to separate alternatives; it is the least
binding operator in this notation. A star (*) means zero or more
repetitions of the preceding item; likewise, a plus (+) means one or more
repetitions, and a phrase enclosed in square brackets ([ ]) means zero or
one occurrences (in other words, the enclosed phrase is optional).
The * and + operators bind as tightly as possible; parentheses are used for
grouping. Literal strings are enclosed in quotes. White space is only
meaningful to separate tokens. Rules are normally contained on a single
line; rules with many alternatives may be formatted alternatively with
each line after the first beginning with a vertical bar.
In lexical definitions (as the example above), two more conventions are
used: Two literal characters separated by three dots mean a choice of any
single character in the given (inclusive) range of ASCII characters. A
phrase between angular brackets (<...>) gives an informal description
of the symbol defined; e.g., this could be used to describe the notion of
‘control character’ if needed.
Python reads program text as Unicode code points; the encoding of a source
file can be given by an encoding declaration and defaults to UTF-8, see PEP
3120 for details. If the source file cannot be decoded, a SyntaxError is
raised.
When embedding Python, source code strings should be passed to Python APIs using the
standard C conventions for newline characters (the \n character, representing ASCII LF, is the
line terminator).
2.1.3. Comments
A comment starts with a hash character (#) that is not part of a string literal, and ends at the end
of the physical line. A comment signifies the end of the logical line unless the implicit line
joining rules are invoked. Comments are ignored by the syntax.
# vim:fileencoding=<encoding-name>
If an encoding is declared, the encoding name must be recognized by Python (see Standard
Encodings). The encoding is used for all lexical analysis, including string literals, comments and
identifiers.
A line ending in a backslash cannot carry a comment. A backslash does not continue a comment.
A backslash does not continue a token except for string literals (i.e., tokens other than string
literals cannot be split across physical lines using a backslash). A backslash is illegal elsewhere
on a line outside a string literal.
Implicitly continued lines can carry comments. The indentation of the continuation lines is not
important. Blank continuation lines are allowed. There is no NEWLINE token between implicit
continuation lines. Implicitly continued lines can also occur within triple-quoted strings (see
below); in that case they cannot carry comments.
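The implicit line joining rules described above can be seen in a short sketch; the list contents are just example data:

```python
# Implicit line joining: an expression inside parentheses, square
# brackets, or curly braces may span several physical lines. The
# continuation lines can carry comments, and their indentation does
# not matter to the parser.
month_names = ['Januari', 'Februari', 'Maart',      # These are the
               'April',   'Mei',      'Juni',       # Dutch names
               'Juli',    'Augustus', 'September',  # for the months
               'Oktober', 'November', 'December']   # of the year
print(len(month_names))  # 12
```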
2.1.7. Blank lines
A logical line that contains only spaces, tabs, formfeeds and possibly a comment, is ignored (i.e.,
no NEWLINE token is generated). During interactive input of statements, handling of a blank
line may differ depending on the implementation of the read-eval-print loop. In the standard
interactive interpreter, an entirely blank logical line (i.e. one containing not even whitespace or a
comment) terminates a multi-line statement.
2.1.8. Indentation
Leading whitespace (spaces and tabs) at the beginning of a logical line is used to compute the
indentation level of the line, which in turn is used to determine the grouping of statements.
Tabs are replaced (from left to right) by one to eight spaces such that the total number of
characters up to and including the replacement is a multiple of eight (this is intended to be the
same rule as used by Unix). The total number of spaces preceding the first non-blank character
then determines the line’s indentation. Indentation cannot be split over multiple physical lines
using backslashes; the whitespace up to the first backslash determines the indentation.
Indentation is rejected as inconsistent if a source file mixes tabs and spaces in a way that makes
the meaning dependent on the worth of a tab in spaces; a TabError is raised in that case.
A formfeed character may be present at the start of the line; it will be ignored for the indentation
calculations above. Formfeed characters occurring elsewhere in the leading whitespace have an
undefined effect (for instance, they may reset the space count to zero).
The indentation levels of consecutive lines are used to generate INDENT and DEDENT tokens,
using a stack, as follows.
Before the first line of the file is read, a single zero is pushed on the stack; this will never be
popped off again. The numbers pushed on the stack will always be strictly increasing from
bottom to top. At the beginning of each logical line, the line’s indentation level is compared to
the top of the stack. If it is equal, nothing happens. If it is larger, it is pushed on the stack, and
one INDENT token is generated. If it is smaller, it must be one of the numbers occurring on the
stack; all numbers on the stack that are larger are popped off, and for each number popped off a
DEDENT token is generated. At the end of the file, a DEDENT token is generated for each
number remaining on the stack that is larger than zero.
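The stack algorithm can be observed directly with the standard tokenize module; the source string below is only an illustration:

```python
import io
import tokenize

# Count the INDENT/DEDENT tokens produced by the indentation stack.
# 'z = 3' returns to column 0, popping two levels at once, so two
# DEDENT tokens are generated back to back.
src = "if True:\n    x = 1\n    if x:\n        y = 2\nz = 3\n"
names = [tokenize.tok_name[tok.type]
         for tok in tokenize.generate_tokens(io.StringIO(src).readline)]
print(names.count("INDENT"), names.count("DEDENT"))  # 2 2
```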
(Actually, the first three errors are detected by the parser; only the last error is found by the
lexical analyzer — the indentation of return r does not match a level popped off the stack.)
The syntax of identifiers in Python is based on the Unicode standard annex UAX-31, with
elaboration and changes as defined below; see also PEP 3131 for further details.
Within the ASCII range (U+0001..U+007F), the valid characters for identifiers are the same as in
Python 2.x: the uppercase and lowercase letters A through Z, the underscore _ and, except for the
first character, the digits 0 through 9.
Python 3.0 introduces additional characters from outside the ASCII range (see PEP 3131). For
these characters, the classification uses the version of the Unicode Character Database as
included in the unicodedata module.
Lu - uppercase letters
Ll - lowercase letters
Lt - titlecase letters
Lm - modifier letters
Lo - other letters
Nl - letter numbers
Mn - nonspacing marks
Mc - spacing combining marks
Nd - decimal numbers
Pc - connector punctuations
Other_ID_Start - explicit list of characters in PropList.txt to support backwards
compatibility
Other_ID_Continue - likewise
All identifiers are converted into the normal form NFKC while parsing; comparison of identifiers
is based on NFKC.
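The NFKC normalization of identifiers can be demonstrated with a compatibility character; the use of exec here is only to keep the demonstration self-contained:

```python
import unicodedata

# U+FB01 is the single-character ligature 'fi'. NFKC maps it to the
# two-character sequence 'fi', so as an identifier it names the
# ordinary variable fi.
assert unicodedata.normalize("NFKC", "\ufb01") == "fi"
ns = {}
exec("\ufb01 = 1", ns)           # the parser normalizes the name
print("fi" in ns, ns.get("fi"))  # True 1
```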
A non-normative HTML file listing all valid identifier characters for Unicode 15.0.0 can be
found at https://www.unicode.org/Public/15.0.0/ucd/DerivedCoreProperties.txt
2.3.1. Keywords
The following identifiers are used as reserved words, or keywords of the language, and cannot be
used as ordinary identifiers. They must be spelled exactly as written here:
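The exact keyword list tracks the interpreter version, so rather than transcribing it, it can be queried from the standard keyword module:

```python
import keyword

# Hard keywords (reserved words) and soft keywords for this interpreter.
print(len(keyword.kwlist), keyword.kwlist[:5])
print(keyword.softkwlist)  # e.g. ['_', 'case', 'match', 'type'] on 3.12
print(keyword.iskeyword("while"), keyword.iskeyword("print"))  # True False
```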
Some identifiers are only reserved under specific contexts. These are known as soft keywords.
The identifiers match, case, type and _ can syntactically act as keywords in certain contexts,
but this distinction is done at the parser level, not when tokenizing.
As soft keywords, their use in the grammar is possible while still preserving compatibility with
existing code that uses these names as identifier names.
match, case, and _ are used in the match statement. type is used in the type statement.
_*
_
In a case pattern within a match statement, _ is a soft keyword that denotes a wildcard.
Separately, the interactive interpreter makes the result of the last evaluation available in
the variable _. (It is stored in the builtins module, alongside built-in functions
like print.)
Elsewhere, _ is a regular identifier. It is often used to name “special” items, but it is not
special to Python itself.
Note
__*__
System-defined names, informally known as “dunder” names. These names are defined
by the interpreter and its implementation (including the standard library). Current system
names are discussed in the Special method names section and elsewhere. More will likely
be defined in future versions of Python. Any use of __*__ names, in any context, that
does not follow explicitly documented use, is subject to breakage without warning.
__*
Class-private names. Names in this category, when used within the context of a class
definition, are re-written to use a mangled form to help avoid name clashes between
“private” attributes of base and derived classes. See section Identifiers (Names).
2.4. Literals
Literals are notations for constant values of some built-in types.
Bytes literals are always prefixed with 'b' or 'B'; they produce an instance
of the bytes type instead of the str type. They may only contain ASCII
characters; bytes with a numeric value of 128 or greater must be expressed
with escapes.
Added in version 3.3: The 'rb' prefix of raw bytes literals has been added as
a synonym of 'br'.
A string literal with 'f' or 'F' in its prefix is a formatted string literal;
see f-strings. The 'f' may be combined with 'r', but not with 'b' or 'u',
therefore raw formatted strings are possible, but formatted bytes literals are
not.
In triple-quoted literals, unescaped newlines and quotes are allowed (and are
retained), except that three unescaped quotes in a row terminate the literal. (A
“quote” is the character used to open the literal, i.e. either ' or ".)
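The prefixes and quoting rules above can be sketched in a few literals:

```python
raw = r"C:\new"               # raw string: the backslash is kept, not \n
print(len(raw))               # 6
data = b"abc"                 # bytes literal: ASCII characters only
fstr = f"{1 + 1}"             # formatted string literal
triple = """line one
has "quotes" and newlines"""  # retained as written
print(fstr, type(data).__name__)
```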
Escape Sequence   Meaning                                          Notes
\<newline>        Backslash and newline ignored                    (1)
\\                Backslash (\)
\N{name}          Character named name in the Unicode database     (5)
\uxxxx            Character with 16-bit hex value xxxx             (6)
\Uxxxxxxxx        Character with 32-bit hex value xxxxxxxx         (7)
Notes:
Unlike Standard C, all unrecognized escape sequences are left in the string
unchanged, i.e., the backslash is left in the result. (This behavior is useful
when debugging: if an escape sequence is mistyped, the resulting output is
more easily recognized as broken.) It is also important to note that the escape
sequences only recognized in string literals fall into the category of
unrecognized escapes for bytes literals.
Even in a raw literal, quotes can be escaped with a backslash, but the
backslash remains in the result; for example, r"\"" is a valid string literal
consisting of two characters: a backslash and a double quote; r"\" is not a
valid string literal (even a raw string cannot end in an odd number of
backslashes). Specifically, a raw literal cannot end in a single
backslash (since the backslash would escape the following quote character).
Note also that a single backslash followed by a newline is interpreted as those
two characters as part of the literal, not as a line continuation.
Note that this feature is defined at the syntactical level, but implemented at
compile time. The ‘+’ operator must be used to concatenate string expressions
at run time. Also note that literal concatenation can use different quoting
styles for each component (even mixing raw strings and triple quoted strings),
and formatted string literals may be concatenated with plain string literals.
2.4.3. f-strings
Added in version 3.6.
Escape sequences are decoded like in ordinary string literals (except when a
literal is also marked as a raw string). After decoding, the grammar for the
contents of the string is:
The parts of the string outside curly braces are treated literally, except that any
doubled curly braces '{{' or '}}' are replaced with the corresponding
single curly brace. A single opening curly bracket '{' marks a replacement
field, which starts with a Python expression. To display both the expression
text and its value after evaluation (useful in debugging), an equal
sign '=' may be added after the expression. A conversion field, introduced
by an exclamation point '!', may follow. A format specifier may also be
appended, introduced by a colon ':'. A replacement field ends with a closing
curly bracket '}'.
Changed in version 3.12: Prior to Python 3.12, comments were not allowed
inside f-string replacement fields.
When the equal sign '=' is provided, the output will have the expression text,
the '=' and the evaluated value. Spaces after the opening brace '{', within
the expression and after the '=' are all retained in the output. By default,
the '=' causes the repr() of the expression to be provided, unless there is a
format specified. When a format is specified it defaults to the str() of the
expression unless a conversion '!r' is declared.
The result is then formatted using the format() protocol. The format
specifier is passed to the __format__() method of the expression or
conversion result. An empty string is passed when the format specifier is
omitted. The formatted result is then included in the final value of the whole
string.
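The '=' form, format specifiers, and conversion fields described above combine as follows:

```python
value = 2.5
print(f"{value=}")       # value=2.5  — expression text, '=', then repr()
print(f"{value:.3f}")    # 2.500      — spec passed to __format__()
print(f"{value!r:>8}")   # '     2.5' — conversion first, then formatting
```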
Reusing the outer f-string quoting type inside a replacement field is permitted:
>>> a = dict(x=2)
>>> f"abc {a["x"]} def"
'abc 2 def'
Changed in version 3.12: Prior to Python 3.12, reuse of the same quoting type
of the outer f-string inside a replacement field was not possible.
Backslashes are also allowed in replacement fields and are evaluated the same
way as in any other context:
Note that numeric literals do not include a sign; a phrase like -1 is actually an
expression composed of the unary operator ‘-’ and the literal 1.
There is no limit for the length of integer literals apart from what can be
stored in available memory.
Underscores are ignored for determining the numeric value of the literal. They
can be used to group digits for enhanced readability. One underscore can
occur between digits, and after base specifiers like 0x.
Note that leading zeros in a non-zero decimal number are not allowed. This is
for disambiguation with C-style octal literals, which Python used before
version 3.0.
Changed in version 3.6: Underscores are now allowed for grouping purposes
in literals.
Note that the integer and exponent parts are always interpreted using radix 10.
For example, 077e010 is legal, and denotes the same number as 77e10. The
allowed range of floating-point literals is implementation-dependent. As in
integer literals, underscores are supported for digit grouping.
Changed in version 3.6: Underscores are now allowed for grouping purposes
in literals.
An imaginary literal yields a complex number with a real part of 0.0. Complex
numbers are represented as a pair of floating-point numbers and have the same
restrictions on their range. To create a complex number with a nonzero real
part, add a floating-point number to it, e.g., (3+4j). Some examples of
imaginary literals:
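The rules above for underscores, leading zeros in float literals, decimal exponents, and imaginary literals can be checked directly:

```python
print(1_000_000)           # underscores group digits for readability
print(0x_FF)               # underscore after a base specifier: 255
print(077e010 == 77e10)    # exponent parts are always base 10: True
z = 3 + 4j                 # imaginary literal plus a real part
print(z.real, z.imag)      # 3.0 4.0
```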
2.5. Operators
The following tokens are operators:
+ - * ** / // %
@
<< >> & | ^ ~ :=
< > <= >= == !=
2.6. Delimiters
The following tokens serve as delimiters in the grammar:
( ) [ ] { }
, : ! . ; @ =
-> += -= *= /= //= %=
@= &= |= ^= >>= <<= **=
The period can also occur in floating-point and imaginary literals. A sequence
of three periods has a special meaning as an ellipsis literal. The second half of
the list, the augmented assignment operators, serve lexically as delimiters, but
also perform an operation.
The following printing ASCII characters have special meaning as part of other
tokens or are otherwise significant to the lexical analyzer:
' " # \
The following printing ASCII characters are not used in Python. Their
occurrence outside string literals and comments is an unconditional error:
$ ? `
Footnotes
[1]
https://www.unicode.org/Public/15.0.0/ucd/NameAliases.txt
3. Data model
3.1. Objects, values and types
Objects are Python’s abstraction for data. All data in a Python program is represented by objects
or by relations between objects. (In a sense, and in conformance to Von Neumann’s model of a
“stored program computer”, code is also represented by objects.)
Every object has an identity, a type and a value. An object’s identity never changes once it has
been created; you may think of it as the object’s address in memory. The is operator compares
the identity of two objects; the id() function returns an integer representing its identity.
CPython implementation detail: For CPython, id(x) is the memory address where x is
stored.
An object’s type determines the operations that the object supports (e.g., “does it have a
length?”) and also defines the possible values for objects of that type. The type() function
returns an object’s type (which is an object itself). Like its identity, an object’s type is also
unchangeable. [1]
The value of some objects can change. Objects whose value can change are said to be mutable;
objects whose value is unchangeable once they are created are called immutable. (The value of
an immutable container object that contains a reference to a mutable object can change when the
latter’s value is changed; however the container is still considered immutable, because the
collection of objects it contains cannot be changed. So, immutability is not strictly the same as
having an unchangeable value, it is more subtle.) An object’s mutability is determined by its
type; for instance, numbers, strings and tuples are immutable, while dictionaries and lists are
mutable.
Objects are never explicitly destroyed; however, when they become unreachable they may be
garbage-collected. An implementation is allowed to postpone garbage collection or omit it
altogether — it is a matter of implementation quality how garbage collection is implemented, as
long as no objects are collected that are still reachable.
Note that the use of the implementation’s tracing or debugging facilities may keep objects alive
that would normally be collectable. Also note that catching an exception with a try…
except statement may keep objects alive.
Some objects contain references to “external” resources such as open files or windows. It is
understood that these resources are freed when the object is garbage-collected, but since garbage
collection is not guaranteed to happen, such objects also provide an explicit way to release the
external resource, usually a close() method. Programs are strongly recommended to explicitly
close such objects. The try…finally statement and the with statement provide convenient
ways to do this.
Some objects contain references to other objects; these are called containers. Examples of
containers are tuples, lists and dictionaries. The references are part of a container’s value. In
most cases, when we talk about the value of a container, we imply the values, not the identities
of the contained objects; however, when we talk about the mutability of a container, only the
identities of the immediately contained objects are implied. So, if an immutable container (like a
tuple) contains a reference to a mutable object, its value changes if that mutable object is
changed.
Types affect almost all aspects of object behavior. Even the importance of object identity is
affected in some sense: for immutable types, operations that compute new values may actually
return a reference to any existing object with the same type and value, while for mutable objects
this is not allowed. For example, after a = 1; b = 1, a and b may or may not refer to the
same object with the value one, depending on the implementation. This is because int is an
immutable type, so the reference to 1 can be reused. This behaviour depends on the
implementation used, so should not be relied upon, but is something to be aware of when making
use of object identity tests. However, after c = []; d = [], c and d are guaranteed to refer to
two different, unique, newly created empty lists. (Note that e = f = [] assigns
the same object to both e and f.)
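The identity guarantees described in this paragraph are easy to observe:

```python
a = 1; b = 1
print(a is b)   # may be True: CPython reuses small ints (not guaranteed)
c = []; d = []
print(c is d)   # False: two distinct, newly created lists
e = f = []
print(e is f)   # True: one list object bound to two names
e.append(1)
print(f)        # [1] — f sees the change; it is the same object
```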
Some of the type descriptions below contain a paragraph listing ‘special attributes.’ These are
attributes that provide access to the implementation and are not intended for general use. Their
definition may change in the future.
3.2.1. None
This type has a single value. There is a single object with this value. This object is accessed
through the built-in name None. It is used to signify the absence of a value in many situations,
e.g., it is returned from functions that don’t explicitly return anything. Its truth value is false.
3.2.2. NotImplemented
This type has a single value. There is a single object with this value. This object is accessed
through the built-in name NotImplemented. Numeric methods and rich comparison methods
should return this value if they do not implement the operation for the operands provided. (The
interpreter will then try the reflected operation, or some other fallback, depending on the
operator.) It should not be evaluated in a boolean context.
3.2.3. Ellipsis
This type has a single value. There is a single object with this value. This object is accessed
through the literal ... or the built-in name Ellipsis. Its truth value is true.
3.2.4. numbers.Number
These are created by numeric literals and returned as results by arithmetic operators and
arithmetic built-in functions. Numeric objects are immutable; once created their value never
changes. Python numbers are of course strongly related to mathematical numbers, but subject to
the limitations of numerical representation in computers.
The string representations of the numeric classes, computed by __repr__() and __str__(),
have the following properties:
They are valid numeric literals which, when passed to their class constructor, produce an
object having the value of the original numeric.
The representation is in base 10, when possible.
Leading zeros, possibly excepting a single zero before a decimal point, are not shown.
Trailing zeros, possibly excepting a single zero after a decimal point, are not shown.
A sign is shown only when the number is negative.
Note
The rules for integer representation are intended to give the most meaningful interpretation of
shift and mask operations involving negative integers.
Integers (int)
Booleans (bool)
These represent the truth values False and True. The two objects representing the
values False and True are the only Boolean objects. The Boolean type is a subtype of
the integer type, and Boolean values behave like the values 0 and 1, respectively, in
almost all contexts, the exception being that when converted to a string, the
strings "False" or "True" are returned, respectively.
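A brief illustration of bool as an integer subtype, including the string-conversion exception:

```python
print(True + True)            # 2: bools behave like the integers 1 and 0
print(["a", "b"][False])      # 'a': usable wherever an index is expected
print(str(True), str(False))  # 'True' 'False': the one exception
print(isinstance(True, int))  # True: bool is a subtype of int
```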
3.2.4.2. numbers.Real (float)
These represent machine-level double precision floating-point numbers. You are at the
mercy of the underlying machine architecture (and C or Java implementation) for the
accepted range and handling of overflow. Python does not support single-precision
floating-point numbers; the savings in processor and memory usage that are usually the
reason for using these are dwarfed by the overhead of using objects in Python, so there
is no reason to complicate the language with two kinds of floating-point numbers.
Sequences also support slicing: a[i:j] selects all items with index k such
that i <= k < j. When used as an expression, a slice is a sequence of the same type. The
comment above about negative indexes also applies to negative slice positions.
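The slicing rule i <= k < j, and its behavior with negative positions, in a short example:

```python
a = ["p", "y", "t", "h", "o", "n"]
print(a[1:3])    # ['y', 't'] — items with index k where 1 <= k < 3
print(a[-3:-1])  # ['h', 'o'] — negative positions count from the end
print(a[2:2])    # []         — an empty slice of the same type
```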
Strings
A string is a sequence of values that represent Unicode code points. All the code points in
the range U+0000 - U+10FFFF can be represented in a string. Python doesn’t have
a char type; instead, every code point in the string is represented as a string object with
length 1. The built-in function ord() converts a code point from its string form to an
integer in the range 0 - 10FFFF; chr() converts an integer in the
range 0 - 10FFFF to the corresponding length 1 string object. str.encode() can be
used to convert a str to bytes using the given text encoding,
and bytes.decode() can be used to achieve the opposite.
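The ord()/chr() correspondence and the encode/decode round trip look like this:

```python
print(ord("€"))                        # 8364 — code point as an integer
print(len(chr(0x10FFFF)))              # 1 — the highest code point, still
                                       # a length-1 string
s = "héllo"
encoded = s.encode("utf-8")            # str -> bytes
print(encoded)
print(encoded.decode("utf-8") == s)    # True — the round trip
```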
Tuples
The items of a tuple are arbitrary Python objects. Tuples of two or more items are formed
by comma-separated lists of expressions. A tuple of one item (a ‘singleton’) can be
formed by affixing a comma to an expression (an expression by itself does not create a
tuple, since parentheses must be usable for grouping of expressions). An empty tuple can
be formed by an empty pair of parentheses.
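The tuple-forming rules above, including the singleton comma, in practice:

```python
singleton = 1,          # the trailing comma makes the tuple
print(type(singleton).__name__, len(singleton))  # tuple 1
empty = ()              # empty pair of parentheses
print(len(empty))       # 0
grouped = (1 + 2) * 3   # here the parentheses only group; no tuple
print(grouped)          # 9
```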
Bytes
A bytes object is an immutable array. The items are 8-bit bytes, represented by integers in
the range 0 <= x < 256. Bytes literals (like b'abc') and the built-
in bytes() constructor can be used to create bytes objects. Also, bytes objects can be
decoded to strings via the decode() method.
3.2.5.2. Mutable sequences
Mutable sequences can be changed after they are created. The
subscription and slicing notations can be used as the target of assignment
and del (delete) statements.
Note
Lists
The items of a list are arbitrary Python objects. Lists are formed by placing a comma-
separated list of expressions in square brackets. (Note that there are no special cases
needed to form lists of length 0 or 1.)
Byte Arrays
Sets
These represent a mutable set. They are created by the built-in set() constructor and
can be modified afterwards by several methods, such as add().
Frozen sets
3.2.7.1. Dictionaries
These represent finite sets of objects indexed by nearly
arbitrary values. The only types of values not
acceptable as keys are values containing lists or
dictionaries or other mutable types that are compared
by value rather than by object identity, the reason
being that the efficient implementation of dictionaries
requires a key’s hash value to remain constant.
Numeric types used for keys obey the normal rules for
numeric comparison: if two numbers compare equal
(e.g., 1 and 1.0) then they can be used
interchangeably to index the same dictionary entry.
The extension
modules dbm.ndbm and dbm.gnu provide additional
examples of mapping types, as does
the collections module.
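The key rules above can be demonstrated in a few lines: equal numbers index the same entry, and mutable values are rejected as keys:

```python
d = {}
d[1] = "int"
d[1.0] = "float"        # 1 == 1.0 and hash(1) == hash(1.0): same entry
print(d)                # {1: 'float'} — one entry, value replaced
try:
    d[[1, 2]] = "list"  # lists compare by value and are mutable
except TypeError as exc:
    print("unhashable:", exc)
```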
Attribute                    Meaning

function.__doc__             The function’s documentation string, or None if
                             unavailable. Not inherited by subclasses.

function.__name__            The function’s name. See also: __name__ attributes.

function.__qualname__        The function’s qualified name. See
                             also: __qualname__ attributes.
                             Added in version 3.3.

function.__module__          The name of the module the function was defined
                             in, or None if unavailable.

function.__defaults__        A tuple containing default parameter values for
                             those parameters that have defaults, or None if
                             no parameters have a default value.

function.__code__            The code object representing the compiled
                             function body.

function.__dict__            The namespace supporting arbitrary function
                             attributes. See also: __dict__ attributes.

function.__annotations__     A dictionary containing annotations of
                             parameters. The keys of the dictionary are the
                             parameter names, and 'return' for the return
                             annotation, if provided. See also: Annotations
                             Best Practices.

function.__kwdefaults__      A dictionary containing defaults for keyword-only
                             parameters.

function.__type_params__     A tuple containing the type parameters of a
                             generic function.
                             Added in version 3.12.
3.2.8.8. Classes
Classes are callable. These objects normally act as
factories for new instances of themselves, but
variations are possible for class types that
override __new__(). The arguments of the call are
passed to __new__() and, in the typical case,
to __init__() to initialize the new instance.
3.2.9. Modules
Modules are a basic organizational unit of Python
code, and are created by the import system as invoked
either by the import statement, or by calling
functions such
as importlib.import_module() and built-
in __import__(). A module object has a namespace
implemented by a dictionary object (this is the
dictionary referenced by the __globals__ attribute
of functions defined in the module). Attribute
references are translated to lookups in this dictionary,
e.g., m.x is equivalent to m.__dict__["x"]. A
module object does not contain the code object used to
initialize the module (since it isn’t needed once the
initialization is done).
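The equivalence between attribute access and the namespace dictionary can be observed directly:

```python
import math

# Attribute access on a module is a lookup in its namespace dictionary:
# m.x is equivalent to m.__dict__["x"].
assert math.pi is math.__dict__["pi"]
assert math.__name__ == "math"
```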
__name__
The module’s name.
__doc__
The module’s documentation string, or None if unavailable.
__file__
The pathname of the file from which the module was loaded, if it was loaded from a file.
The __file__ attribute may be missing for certain types of modules, such as C modules
that are statically linked into the interpreter. For extension modules loaded dynamically
from a shared library, it’s the pathname of the shared library file.
__annotations__
A dictionary containing variable annotations collected during module body execution. For best practices on working with __annotations__, please see Annotations Best Practices.
Special read-only
attribute: __dict__ is the
module’s namespace as a dictionary
object.
CPython implementation
detail: Because of the way CPython
clears module dictionaries, the
module dictionary will be cleared
when the module falls out of scope
even if the dictionary still has live
references. To avoid this, copy the
dictionary or keep the module
around while using its dictionary
directly.
3.2.10. Custom
classes
Custom class types are typically
created by class definitions (see
section Class definitions). A class
has a namespace implemented by a
dictionary object. Class attribute
references are translated to lookups
in this dictionary, e.g., C.x is
translated
to C.__dict__["x"] (although
there are a number of hooks which
allow for other means of locating
attributes). When the attribute name
is not found there, the attribute
search continues in the base classes.
This search of the base classes uses
the C3 method resolution order
which behaves correctly even in the
presence of ‘diamond’ inheritance
structures where there are multiple
inheritance paths leading back to a
common ancestor. Additional
details on the C3 MRO used by
Python can be found at The Python
2.3 Method Resolution Order.
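A small diamond-inheritance sketch makes the C3 ordering concrete (hypothetical class names):

```python
class A: ...
class B(A): ...
class C(A): ...
class D(B, C): ...   # two inheritance paths back to A

# C3 linearization lists each class exactly once; B precedes C
# because of the base-class order written in D's definition.
assert [cls.__name__ for cls in D.__mro__] == ["D", "B", "C", "A", "object"]
```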
Special attributes:

__name__
The class name.
__module__
The name of the module in which the class was defined.
__dict__
The dictionary containing the class’s namespace.
__bases__
A tuple containing the base classes, in the order of their occurrence in the base class list.
__doc__
The class’s documentation string, or None if undefined.
__annotations__
A dictionary containing variable annotations collected during class body execution. For
best practices on working with __annotations__, please see Annotations Best
Practices.
__type_params__
A tuple containing the type parameters of a generic class.
Added in version 3.12.
3.2.11. Class instances

Attribute assignments and deletions update the instance’s dictionary, never a class’s dictionary. If the class has a __setattr__() or __delattr__() method, this is called instead of updating the instance dictionary directly.

Class instances can pretend to be numbers, sequences, or mappings if they have methods with certain special names. See section Special method names.

Special attributes: __dict__ is the attribute dictionary; __class__ is the instance’s class.
3.2.12. I/O objects (also known as file objects)

A file object represents an open file. Various shortcuts are available to create file objects: the open() built-in function, and also os.popen(), os.fdopen(), and the makefile() method of socket objects (and perhaps by other functions or methods provided by extension modules).

The objects sys.stdin, sys.stdout and sys.stderr are initialized to file objects corresponding to the interpreter’s standard input, output and error streams; they are all open in text mode and therefore follow the interface defined by the io.TextIOBase abstract class.
3.2.13. Internal types

A few types used internally by the interpreter are exposed to the user. Their definitions may change with future versions of the interpreter, but they are mentioned here for completeness.
3.2.13.1. Code objects

Code objects represent byte-compiled executable Python code, or bytecode. The difference between a code object and a function object is that the function object contains an explicit reference to the function’s globals (the module in which it was defined), while a code object contains no context; also the default argument values are stored in the function object, not in the code object (because they represent values calculated at run-time). Unlike function objects, code objects are immutable and contain no references (directly or indirectly) to mutable objects.
3.2.13.1.1. Special read-only attributes

codeobject.co_name
The function name.

codeobject.co_qualname
The fully qualified function name.
Added in version 3.11.

codeobject.co_argcount
The total number of positional parameters (including positional-only parameters and parameters with default values) that the function has.

codeobject.co_posonlyargcount
The number of positional-only parameters (including arguments with default values) that the function has.

codeobject.co_kwonlyargcount
The number of keyword-only parameters (including arguments with default values) that the function has.

codeobject.co_nlocals
The number of local variables used by the function (including parameters).

codeobject.co_varnames
A tuple containing the names of the local variables in the function (starting with the parameter names).

codeobject.co_cellvars
A tuple containing the names of local variables that are referenced by nested functions inside the function.

codeobject.co_freevars
A tuple containing the names of free variables in the function.

codeobject.co_code
A string representing the sequence of bytecode instructions in the function.

codeobject.co_consts
A tuple containing the literals used by the bytecode in the function.

codeobject.co_names
A tuple containing the names used by the bytecode in the function.

codeobject.co_filename
The name of the file from which the code was compiled.

codeobject.co_firstlineno
The line number of the first line of the function.

codeobject.co_lnotab
A string encoding the mapping from bytecode offsets to line numbers. For details, see the source code of the interpreter.
Deprecated since version 3.12: This attribute of code objects is deprecated, and may be removed in Python 3.14.

codeobject.co_stacksize
The required stack size of the code object.

codeobject.co_flags
An integer encoding a number of flags for the interpreter.
The following flag bits are defined for co_flags: bit 0x04 is set if the function uses the *arguments syntax to accept an arbitrary number of positional arguments; bit 0x08 is set if the function uses the **keywords syntax to accept arbitrary keyword arguments; bit 0x20 is set if the function is a generator. See Code Objects Bit Flags for details on the semantics of each flag that might be present.

Future feature declarations (from __future__ import division) also use bits in co_flags to indicate whether a code object was compiled with a particular feature enabled: bit 0x2000 is set if the function was compiled with future division enabled; bits 0x10 and 0x1000 were used in earlier versions of Python.

Other bits in co_flags are reserved for internal use.

If a code object represents a function, the first item in co_consts is the documentation string of the function, or None if undefined.
3.2.13.1.2. Methods on code objects

codeobject.co_positions()
Returns an iterable over the source code positions of each bytecode instruction in the
code object.
This positional information can be missing. A non-exhaustive list of cases where this may happen:
When this occurs, some or all of the tuple elements can be None.
Note
This feature requires storing column positions in code objects which may result in a small
increase of disk usage of compiled Python files or interpreter memory usage. To avoid
storing the extra information and/or deactivate printing the extra traceback information,
the -X no_debug_ranges command line flag or
the PYTHONNODEBUGRANGES environment variable can be used.
codeobject.co_lines()
Returns an iterator that yields information about successive ranges of bytecodes. Each
item yielded is a (start, end, lineno) tuple:
start (an int) represents the offset (inclusive) of the start of the bytecode range
end (an int) represents the offset (exclusive) of the end of the bytecode range
lineno is an int representing the line number of the bytecode range, or None if
the bytecodes in the given range have no line number
Zero-width ranges, where start == end, are allowed. Zero-width ranges are used for
lines that are present in the source code, but have been eliminated by
the bytecode compiler.
See also
PEP 626 - Precise line numbers for debugging and other tools.
codeobject.replace(**kwargs)
Return a copy of the code object with new values for the specified fields.
3.2.13.2. Frame objects

Frame objects represent execution frames. They may occur in traceback objects, and are also passed to registered trace functions.

3.2.13.2.1. Special read-only attributes

3.2.13.2.2. Special writable attributes

3.2.13.2.3. Frame object methods

Frame objects support one method:

frame.clear()
This method clears all references to local variables held by the frame. Also, if the frame
belonged to a generator, the generator is finalized. This helps break reference cycles
involving frame objects (for example when catching an exception and storing
its traceback for later use).
3.2.13.3. Traceback objects

Traceback objects represent the stack trace of an exception. A traceback object is implicitly created when an exception occurs, and may also be explicitly created by calling types.TracebackType.

Changed in version 3.7: Traceback objects can now be explicitly instantiated from Python code.

For implicitly created tracebacks, when the search for an exception handler unwinds the execution stack, at each unwound level a traceback object is inserted in front of the current traceback. When an exception handler is entered, the stack trace is made available to the program. (See section The try statement.) It is accessible as the third item of the tuple returned by sys.exc_info(), and as the __traceback__ attribute of the caught exception.

When the program contains no suitable handler, the stack trace is written (nicely formatted) to the standard error stream; if the interpreter is interactive, it is also made available to the user as sys.last_traceback.

For explicitly created tracebacks, it is up to the creator of the traceback to determine how the tb_next attributes should be linked to form a full stack trace.

Special read-only attributes:

The line number and last instruction in the traceback may differ from the line number of its frame object if the exception occurred in a try statement with no matching except clause or with a finally clause.

traceback.tb_next
The special writable attribute tb_next is the next level in the stack trace (towards the
frame where the exception occurred), or None if there is no next level.
3.2.13.4. Slice objects

Slice objects are used to represent slices for __getitem__() methods. They are also created by the built-in slice() function.

Special read-only attributes: start is the lower bound; stop is the upper bound; step is the step value; each is None if omitted. These attributes can have any type.

Slice objects support one method:

slice.indices(self, length)
This method takes a single integer argument length and computes information about the
slice that the slice object would describe if applied to a sequence of length items. It
returns a tuple of three integers; respectively these are the start and stop indices and
the step or stride length of the slice. Missing or out-of-bounds indices are handled in a
manner consistent with regular slices.
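A quick illustration of indices() with the built-in slice type:

```python
# For a sequence of length 10, an open-ended slice with step 2
# is normalized to explicit start/stop/step values:
assert slice(None, None, 2).indices(10) == (0, 10, 2)

# Negative and out-of-bounds values are clamped the same way
# regular slicing clamps them:
assert slice(-3, 100).indices(10) == (7, 10, 1)
```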
3.2.13.5. Static method objects

Static method objects provide a way of defeating the transformation of function objects to method objects described above. A static method object is a wrapper around any other object, usually a user-defined method object. When a static method object is retrieved from a class or a class instance, the object actually returned is the wrapped object, which is not subject to any further transformation. Static method objects are also callable. Static method objects are created by the built-in staticmethod() constructor.
3.2.13.6. Class method objects

A class method object, like a static method object, is a wrapper around another object that alters the way in which that object is retrieved from classes and class instances. The behaviour of class method objects upon such retrieval is described above, under “instance methods”. Class method objects are created by the built-in classmethod() constructor.
3.3. Special method names

A class can implement certain operations that are invoked by special syntax (such as arithmetic operations or subscripting and slicing) by defining methods with special names. This is Python’s approach to operator overloading, allowing classes to define their own behavior with respect to language operators. For instance, if a class defines a method named __getitem__(), and x is an instance of this class, then x[i] is roughly equivalent to type(x).__getitem__(x, i). Except where mentioned, attempts to execute an operation raise an exception when no appropriate method is defined (typically AttributeError or TypeError).

Setting a special method to None indicates that the corresponding operation is not available. For example, if a class sets __iter__() to None, the class is not iterable, so calling iter() on its instances will raise a TypeError (without falling back to __getitem__()). [2]

When implementing a class that emulates any built-in type, it is important that the emulation only be implemented to the degree that it makes sense for the object being modelled. For example, some sequences may work well with retrieval of individual elements, but extracting a slice may not make sense. (One example of this is the NodeList interface in the W3C’s Document Object Model.)
3.3.1. Basic customization

object.__new__(cls[, ...])
Called to create a new instance of class cls. __new__() is a static method (special-cased
so you need not declare it as such) that takes the class of which an instance was requested
as its first argument. The remaining arguments are those passed to the object constructor
expression (the call to the class). The return value of __new__() should be the new
object instance (usually an instance of cls).
Typical implementations create a new instance of the class by invoking the
superclass’s __new__() method using super().__new__(cls[, ...]) with
appropriate arguments and then modifying the newly created instance as necessary before
returning it.
__new__() is intended mainly to allow subclasses of immutable types (like int, str, or
tuple) to customize instance creation. It is also commonly overridden in custom
metaclasses in order to customize class creation.
object.__init__(self[, ...])
Called after the instance has been created (by __new__()), but before it is returned to
the caller. The arguments are those passed to the class constructor expression. If a base
class has an __init__() method, the derived class’s __init__() method, if any,
must explicitly call it to ensure proper initialization of the base class part of the instance;
for example: super().__init__([args...]).
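A minimal sketch of the recommended pattern (hypothetical Base/Derived names):

```python
class Base:
    def __init__(self, name):
        self.name = name

class Derived(Base):
    def __init__(self, name, value):
        super().__init__(name)   # explicitly initialize the Base part
        self.value = value

d = Derived("spam", 42)
assert (d.name, d.value) == ("spam", 42)
```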
object.__del__(self)
Called when the instance is about to be destroyed. This is also called a finalizer or
(improperly) a destructor. If a base class has a __del__() method, the derived
class’s __del__() method, if any, must explicitly call it to ensure proper deletion of the
base class part of the instance.
It is not guaranteed that __del__() methods are called for objects that still exist when
the interpreter exits. weakref.finalize provides a straightforward way to register a
cleanup function to be called when an object is garbage collected.
Note
del x doesn’t directly call x.__del__() — the former decrements the reference
count for x by one, and the latter is only called when x’s reference count reaches zero.
CPython implementation detail: It is possible for a reference cycle to prevent the
reference count of an object from going to zero. In this case, the cycle will be later
detected and deleted by the cyclic garbage collector. A common cause of reference cycles
is when an exception has been caught in a local variable. The frame’s locals then
reference the exception, which references its own traceback, which references the locals
of all frames caught in the traceback.
See also
Documentation for the gc module.
Due to the precarious circumstances under which __del__() methods are invoked,
exceptions that occur during their execution are ignored, and a warning is printed
to sys.stderr instead. In particular:
__del__() can be invoked when arbitrary code is being executed, including
from any arbitrary thread. If __del__() needs to take a lock or invoke any other
blocking resource, it may deadlock as the resource may already be taken by the
code that gets interrupted to execute __del__().
__del__() can be executed during interpreter shutdown. As a consequence, the
global variables it needs to access (including other modules) may already have
been deleted or set to None. Python guarantees that globals whose name begins
with a single underscore are deleted from their module before other globals are
deleted; if no other references to such globals exist, this may help in assuring that
imported modules are still available at the time when the __del__() method is
called.
object.__repr__(self)
Called by the repr() built-in function to compute the “official” string representation of
an object. If at all possible, this should look like a valid Python expression that could be
used to recreate an object with the same value (given an appropriate environment). If this
is not possible, a string of the form <...some useful description...> should
be returned. The return value must be a string object. If a class defines __repr__() but
not __str__(), then __repr__() is also used when an “informal” string
representation of instances of that class is required.
object.__str__(self)
Called by str(object) and the built-in functions format() and print() to
compute the “informal” or nicely printable string representation of an object. The return
value must be a string object.
object.__format__(self, format_spec)
Called by the format() built-in function, and by extension, evaluation of formatted string literals and the str.format() method, to produce a “formatted” string representation of an object. The return value must be a string object.

object.__lt__(self, other)
object.__le__(self, other)
object.__eq__(self, other)
object.__ne__(self, other)
object.__gt__(self, other)
object.__ge__(self, other)
These are the so-called “rich comparison” methods. The correspondence between
operator symbols and method names is as
follows: x<y calls x.__lt__(y), x<=y calls x.__le__(y), x==y calls x.__eq__(
y), x!=y calls x.__ne__(y), x>y calls x.__gt__(y),
and x>=y calls x.__ge__(y).
A rich comparison method may return the singleton NotImplemented if it does not
implement the operation for a given pair of arguments. By
convention, False and True are returned for a successful comparison. However, these
methods can return any value, so if the comparison operator is used in a Boolean context
(e.g., in the condition of an if statement), Python will call bool() on the value to
determine if the result is true or false.
There are no swapped-argument versions of these methods (to be used when the left
argument does not support the operation but the right argument does);
rather, __lt__() and __gt__() are each other’s
reflection, __le__() and __ge__() are each other’s reflection,
and __eq__() and __ne__() are their own reflection. If the operands are of different
types, and the right operand’s type is a direct or indirect subclass of the left operand’s
type, the reflected method of the right operand has priority, otherwise the left operand’s
method has priority. Virtual subclassing is not considered.
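A small sketch of the NotImplemented convention (hypothetical Version class; functools.total_ordering fills in the remaining comparisons from __eq__ and __lt__):

```python
import functools

@functools.total_ordering
class Version:
    def __init__(self, major, minor):
        self.key = (major, minor)

    def __eq__(self, other):
        if not isinstance(other, Version):
            return NotImplemented   # let the other operand's method try
        return self.key == other.key

    def __lt__(self, other):
        if not isinstance(other, Version):
            return NotImplemented
        return self.key < other.key

assert Version(3, 12) > Version(3, 11)
# Both __eq__ methods return NotImplemented for mixed types,
# so != falls back to identity and evaluates to True:
assert Version(3, 12) != "3.12"
```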
object.__hash__(self)
Called by built-in function hash() and for operations on members of hashed collections
including set, frozenset, and dict. The __hash__() method should return an
integer. The only required property is that objects which compare equal have the same
hash value; it is advised to mix together the hash values of the components of the object
that also play a part in comparison of objects by packing them into a tuple and hashing
the tuple. Example:
def __hash__(self):
return hash((self.name, self.nick, self.color))
Note
hash() truncates the value returned from an object’s custom __hash__() method to
the size of a Py_ssize_t. This is typically 8 bytes on 64-bit builds and 4 bytes on 32-
bit builds. If an object’s __hash__() must interoperate on builds of different bit sizes,
be sure to check the width on all supported builds. An easy way to do this is
with python -c "import sys; print(sys.hash_info.width)".
User-defined classes have __eq__() and __hash__() methods by default; with them,
all objects compare unequal (except with themselves) and x.__hash__() returns an
appropriate value such that x == y implies both
that x is y and hash(x) == hash(y).
A class that overrides __eq__() and does not define __hash__() will have
its __hash__() implicitly set to None. When the __hash__() method of a class
is None, instances of the class will raise an appropriate TypeError when a program
attempts to retrieve their hash value, and will also be correctly identified as unhashable
when checking isinstance(obj, collections.abc.Hashable).
If a class that does not override __eq__() wishes to suppress hash support, it should
include __hash__ = None in the class definition. A class which defines its
own __hash__() that explicitly raises a TypeError would be incorrectly identified as
hashable by an isinstance(obj, collections.abc.Hashable) call.
Note
By default, the __hash__() values of str and bytes objects are “salted” with an
unpredictable random value. Although they remain constant within an individual Python
process, they are not predictable between repeated invocations of Python.
Changing hash values affects the iteration order of sets. Python has never made
guarantees about this ordering (and it typically varies between 32-bit and 64-bit builds).
object.__bool__(self)
Called to implement truth value testing and the built-in operation bool(); should
return False or True. When this method is not defined, __len__() is called, if it is
defined, and the object is considered true if its result is nonzero. If a class defines
neither __len__() nor __bool__(), all its instances are considered true.
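The fallback to __len__() can be seen with a hypothetical container class:

```python
class Box:
    def __init__(self, items):
        self._items = list(items)

    def __len__(self):
        return len(self._items)

# No __bool__ is defined, so truth testing falls back to __len__():
assert not Box([])        # length 0 -> false
assert Box([1])           # nonzero length -> true
assert len(Box([1, 2])) == 2
```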
3.3.2. Customizing attribute access

The following methods can be defined to customize the meaning of attribute access (use of, assignment to, or deletion of x.name) for class instances.

object.__getattr__(self, name)
Called when the default attribute access fails with an AttributeError. This method should either return the (computed) attribute value or raise an AttributeError exception.
Note that if the attribute is found through the normal mechanism, __getattr__() is
not called. (This is an intentional asymmetry
between __getattr__() and __setattr__().) This is done both for efficiency
reasons and because otherwise __getattr__() would have no way to access other
attributes of the instance. Note that at least for instance variables, you can fake total
control by not inserting any values in the instance attribute dictionary (but instead
inserting them in another object). See the __getattribute__() method below for a
way to actually get total control over attribute access.
object.__getattribute__(self, name)
Called unconditionally to implement attribute accesses for instances of the class. If the
class also defines __getattr__(), the latter will not be called
unless __getattribute__() either calls it explicitly or raises an AttributeError.
This method should return the (computed) attribute value or raise
an AttributeError exception. In order to avoid infinite recursion in this method, its
implementation should always call the base class method with the same name to access
any attributes it needs, for example, object.__getattribute__(self, name).
Note
This method may still be bypassed when looking up special methods as the result of
implicit invocation via language syntax or built-in functions. See Special method lookup.
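A minimal sketch of the always-delegate pattern (hypothetical Traced class):

```python
class Traced:
    def __init__(self):
        self._accessed = []
        self.x = 7

    def __getattribute__(self, name):
        # Record public attribute accesses, then always delegate to the
        # base class implementation to avoid infinite recursion.
        if not name.startswith("_"):
            object.__getattribute__(self, "_accessed").append(name)
        return object.__getattribute__(self, name)

t = Traced()
assert t.x == 7
assert t._accessed == ["x"]
```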
object.__setattr__(self, name, value)
Called when an attribute assignment is attempted. This is called instead of the normal
mechanism (i.e. store the value in the instance dictionary). name is the attribute
name, value is the value to be assigned to it.
If __setattr__() wants to assign to an instance attribute, it should call the base class
method with the same name, for
example, object.__setattr__(self, name, value).
object.__delattr__(self, name)
Like __setattr__() but for attribute deletion instead of assignment. This should only
be implemented if del obj.name is meaningful for the object.
object.__dir__(self)
Called when dir() is called on the object. An iterable must be returned. dir() converts the returned iterable to a list and sorts it.

3.3.2.1. Customizing module attribute access

Special names __getattr__ and __dir__ can be also used to customize access to module attributes. The __dir__ function should accept no arguments, and return an iterable of strings that represents the names accessible on module. If present, this function overrides the standard dir() search on a module.

For a more fine grained customization of the module behavior (setting attributes, properties, etc.), one can set the __class__ attribute of a module object to a subclass of types.ModuleType. For example:

import sys
from types import ModuleType

class VerboseModule(ModuleType):
    def __repr__(self):
        return f'Verbose {self.__name__}'

    def __setattr__(self, attr, value):
        print(f'Setting {attr}...')
        super().__setattr__(attr, value)

sys.modules[__name__].__class__ = VerboseModule

Note
Defining module __getattr__ and setting module __class__ only affect lookups made using the attribute access syntax; directly accessing the module globals is unaffected.

Changed in version 3.5: __class__ module attribute is now writable.

Added in version 3.7: __getattr__ and __dir__ module attributes.

See also
PEP 562 - Module __getattr__ and __dir__
object.__get__(self, instance, owner=None)
Called to get the attribute of the owner class (class attribute access) or of an instance of
that class (instance attribute access). The optional owner argument is the owner class,
while instance is the instance that the attribute was accessed through, or None when the
attribute is accessed through the owner.
PEP 252 specifies that __get__() is callable with one or two arguments. Python’s own
built-in descriptors support this specification; however, it is likely that some third-party
tools have descriptors that require both arguments. Python’s
own __getattribute__() implementation always passes in both arguments whether
they are required or not.
object.__set__(self, instance, value)
Called to set the attribute on an instance instance of the owner class to a new
value, value.
Instances of descriptors may also have the __objclass__ attribute present.

However, if the looked-up value is an object defining one of the descriptor methods, then Python may override the default behavior and invoke the descriptor method instead. Where this occurs in the precedence chain depends on which descriptor methods were defined and how they were called.

Direct Call
The simplest and least common call is when user code directly invokes a descriptor method: x.__get__(a).

Instance Binding
Class Binding
Super Binding

Python methods (including those decorated with @staticmethod and @classmethod) are implemented as non-data descriptors. Accordingly, instances can redefine and override methods. This allows individual instances to acquire behaviors that differ from other instances of the same class.

The property() function is implemented as a data descriptor.
3.3.2.4. __slots__

__slots__ allow us to explicitly declare data members (like properties) and deny the creation of __dict__ and __weakref__ (unless explicitly declared in __slots__ or available in a parent).

object.__slots__
This class variable can be assigned a string, iterable, or sequence of strings with variable
names used by instances. __slots__ reserves space for the declared variables and prevents
the automatic creation of __dict__ and __weakref__ for each instance.
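A minimal sketch with a hypothetical Point class:

```python
class Point:
    __slots__ = ("x", "y")   # only these two instance variables exist

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
assert (p.x, p.y) == (1, 2)
assert not hasattr(p, "__dict__")   # no per-instance dict was created
try:
    p.z = 3                          # names outside __slots__ are rejected
except AttributeError:
    pass
else:
    raise AssertionError("assignment to unlisted name should fail")
```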
Notes on using __slots__:

- When inheriting from a class without __slots__, the __dict__ and __weakref__ attribute of the instances will always be accessible.
- Without a __dict__ variable, instances cannot have new variables not listed in the __slots__ definition. Attempts to assign to an unlisted variable name raises AttributeError. If dynamic assignment of new variables is desired, then add '__dict__' to the sequence of strings in the __slots__ declaration.
- Without a __weakref__ variable for each instance, classes defining __slots__ do not support weak references to its instances. If weak reference support is needed, then add '__weakref__' to the sequence of strings in the __slots__ declaration.
- __slots__ are implemented at the class level by creating descriptors for each variable name. As a result, class attributes cannot be used to set default values for instance variables defined by __slots__; otherwise, the class attribute would overwrite the descriptor assignment.
- The action of a __slots__ declaration is not limited to the class where it is defined. Variables declared in parents are available in child classes. However, child subclasses will get a __dict__ and __weakref__ unless they also define __slots__ (which should only contain names of any additional slots).
- If a class defines a slot also defined in a base class, the instance variable defined by the base class slot is inaccessible (except by retrieving its descriptor directly from the base class). This renders the meaning of the program undefined. In the future, a check may be added to prevent this.
- TypeError will be raised if nonempty __slots__ are defined for a class derived from a “variable-length” built-in type such as int, bytes, and tuple.
- Any non-string iterable can be assigned to __slots__.
- If a dictionary is used to assign __slots__, the dictionary keys will be used as the slot names. The values of the dictionary can be used to provide per-attribute docstrings that will be recognised by inspect.getdoc() and displayed in the output of help().
- __class__ assignment works only if both classes have the same __slots__.
- Multiple inheritance with multiple slotted parent classes can be used, but only one parent is allowed to have attributes created by slots (the other bases must have empty slot layouts); violations raise TypeError.
- If an iterator is used for __slots__ then a descriptor is created for each of the iterator’s values. However, the __slots__ attribute will be an empty iterator.
3.3.3. Customizing class creation

Whenever a class inherits from another class, __init_subclass__() is called on the parent class. This way, it is possible to write classes which change the behavior of subclasses. This is closely related to class decorators, but where class decorators only affect the specific class they’re applied to, __init_subclass__ solely applies to future subclasses of the class defining the method.

classmethod object.__init_subclass__(cls)
This method is called whenever the containing class is subclassed. cls is then the new
subclass. If defined as a normal instance method, this method is implicitly converted to a
class method.
Keyword arguments which are given to a new class are passed to the parent
class’s __init_subclass__. For compatibility with other classes
using __init_subclass__, one should take out the needed keyword arguments and
pass the others over to the base class, as in:
class Philosopher:
def __init_subclass__(cls, /, default_name, **kwargs):
super().__init_subclass__(**kwargs)
cls.default_name = default_name
class AustralianPhilosopher(Philosopher,
default_name="Bruce"):
pass
Note
The metaclass hint metaclass is consumed by the rest of the type machinery, and is
never passed to __init_subclass__ implementations. The actual metaclass (rather
than the explicit hint) can be accessed as type(cls).
When a class is created, type.__new__() scans the class variables and makes callbacks to those with a __set_name__() hook.

object.__set_name__(self, owner, name)
Automatically called at the time the owning class owner is created. The object has been
assigned to name in that class:
class A:
    x = C()  # Automatically calls: x.__set_name__(A, 'x')
If the class variable is assigned after the class is created, __set_name__() will not be
called automatically. If needed, __set_name__() can be called directly:
class A:
    pass

c = C()
A.x = c                  # The hook is not called
c.__set_name__(A, 'x')   # Manually invoke the hook
3.3.3.1. Metaclasses

By default, classes are constructed using type(). The class body is executed in a new namespace and the class name is bound locally to the result of type(name, bases, namespace).

The class creation process can be customized by passing the metaclass keyword argument in the class definition line, or by inheriting from an existing class that included such an argument. In the following example, both MyClass and MySubclass are instances of Meta:

class Meta(type):
    pass

class MyClass(metaclass=Meta):
    pass

class MySubclass(MyClass):
    pass

When a class definition is executed, the following steps occur:

- MRO entries are resolved;
- the appropriate metaclass is determined;
- the class namespace is prepared;
- the class body is executed;
- the class object is created.
3.3.3.2. Resolving MRO entries

object.__mro_entries__(self, bases)

3.3.3.3. Determining the appropriate metaclass

The appropriate metaclass for a class definition is determined as follows:

- if no bases and no explicit metaclass are given, then type() is used;
- if an explicit metaclass is given and it is not an instance of type(), then it is used directly as the metaclass;
- if an instance of type() is given as the explicit metaclass, or bases are defined, then the most derived metaclass is used.
3.3.3.4. Preparing the class namespace

Once the appropriate metaclass is identified, then the class namespace is prepared. If the metaclass has a __prepare__ attribute, it is called as namespace = metaclass.__prepare__(name, bases, **kwds) (where the additional keyword arguments, if any, come from the class definition). The __prepare__ namespace is passed in to __new__, but when the final class object is created the namespace is copied into a new dict.

If the metaclass has no __prepare__ attribute, then the class namespace is initialised as an empty ordered mapping.

See also
PEP 3115 - Metaclasses in Python 3000
3.3.3.5. Executing the class body

The class body is executed (approximately) as exec(body, globals(), namespace). However, even when the class definition occurs inside a function, methods defined inside the class still cannot see names defined at the class scope; class variables must be accessed through the first parameter of instance or class methods, or through the implicit lexically scoped __class__ reference described in the next section.
3.3.3.6. Creating the class object

Once the class namespace has been populated by executing the class body, the class object is created by calling metaclass(name, bases, namespace, **kwds) (the additional keywords passed here are the same as those passed to __prepare__).

CPython implementation detail: In CPython 3.6 and later, the __class__ cell is passed to the metaclass as a __classcell__ entry in the class namespace. If present, this must be propagated up to the type.__new__ call in order for the class to be initialised correctly.

When using the default metaclass type, or any metaclass that ultimately calls type.__new__, the following additional customization steps are invoked after creating the class object:

1. The type.__new__ method collects all of the attributes in the class namespace that define a __set_name__() method;
2. Those __set_name__ methods are called with the class being defined and the assigned name of that particular attribute;
3. The __init_subclass__() hook is called on the immediate parent of the new class in its method resolution order.

See also
PEP 3135 - New super
3.3.4. Customizing instance and subclass checks

The following methods are used to override the default behavior of the isinstance() and issubclass() built-in functions.

In particular, the metaclass abc.ABCMeta implements these methods in order to allow the addition of Abstract Base Classes (ABCs) as “virtual base classes” to any class or type (including built-in types).

class.__instancecheck__(self, instance)

See also
PEP 3119 - Introducing Abstract Base Classes

3.3.5. Emulating generic types

See also
PEP 484 - Type Hints

classmethod object.__class_getitem__(cls, item)

Custom implementations of __class_getitem__() on classes defined outside of the standard library may not be understood by third-party type-checkers such as mypy. Using __class_getitem__() on any class for purposes other than type hinting is discouraged.
3.3.5.2. __class_getitem__ versus __getitem__

Usually, the subscription of an object using square brackets will call the __getitem__() instance method defined on the object’s class. However, if the object being subscribed is itself a class, the class method __class_getitem__() may be called instead.

Presented with the expression obj[x], the Python interpreter follows something like the following process to decide whether __getitem__() or __class_getitem__() should be called:

from inspect import isclass

def subscribe(obj, x):
    """Return the result of the expression 'obj[x]'"""
    class_of_obj = type(obj)
    # If the class of obj defines __getitem__,
    # call class_of_obj.__getitem__(obj, x)
    if hasattr(class_of_obj, '__getitem__'):
        return class_of_obj.__getitem__(obj, x)
    # Else, if obj is a class and defines __class_getitem__,
    # call obj.__class_getitem__(x)
    elif isclass(obj) and hasattr(obj, '__class_getitem__'):
        return obj.__class_getitem__(x)
    # Else, raise an exception
    else:
        raise TypeError(
            f"'{class_of_obj.__name__}' object is not subscriptable"
        )
In Python, all classes are themselves instances of other classes. Most classes have type as their metaclass, and type does not define __getitem__(), so subscribing such classes calls __class_getitem__():

>>> # list has class "type" as its metaclass, like most classes:
>>> type(list)
<class 'type'>
>>> type(dict) == type(list) == type(tuple) == type(str) == type(bytes)
True
>>> # "list[int]" calls "list.__class_getitem__(int)"
>>> list[int]
list[int]
>>> # list.__class_getitem__ returns a GenericAlias object:
>>> type(list[int])
<class 'types.GenericAlias'>
However, if a class has a custom metaclass that defines __getitem__(), subscribing the class may result in different behaviour. An example of this can be found in the enum module:

>>> from enum import Enum
>>> class Menu(Enum):
...     """A breakfast menu"""
...     SPAM = 'spam'
...     BACON = 'bacon'
...
>>> # Enum classes have a custom metaclass:
>>> type(Menu)
<class 'enum.EnumMeta'>
>>> # EnumMeta defines __getitem__,
>>> # so __class_getitem__ is not called,
>>> # and the result is not a GenericAlias object:
>>> Menu['SPAM']
<Menu.SPAM: 'spam'>
>>> type(Menu['SPAM'])
<enum 'Menu'>
See also
PEP 560 - Core Support for typing module and generic types

object.__len__(self)
Called to implement the built-in function len(). Should return the length of the object,
an integer >= 0. Also, an object that doesn’t define a __bool__() method and
whose __len__() method returns zero is considered to be false in a Boolean context.
Note
Slicing is done exclusively with the following three methods. A call like

a[1:2] = b

is translated to

a[slice(1, 2, None)] = b

and so forth. Missing slice items are always filled in with None.
object.__getitem__(self, key)
Called to implement evaluation of self[key]. For sequence types, the accepted keys
should be integers. Optionally, they may support slice objects as well. Negative index
support is also optional. If key is of an inappropriate type, TypeError may be raised;
if key is a value outside the set of indexes for the sequence (after any special
interpretation of negative values), IndexError should be raised. For mapping types,
if key is missing (not in the container), KeyError should be raised.
Note
for loops expect that an IndexError will be raised for illegal indexes to allow proper
detection of the end of the sequence.
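The end-of-sequence convention can be sketched with a hypothetical Squares class (iteration here relies on the legacy sequence protocol, since no __iter__() is defined):

```python
class Squares:
    def __init__(self, n):
        self.n = n

    def __getitem__(self, i):
        if not isinstance(i, int):
            raise TypeError("index must be an integer")
        if not 0 <= i < self.n:
            raise IndexError(i)   # lets for-loops detect the end
        return i * i

assert Squares(4)[3] == 9
# Iteration probes indexes 0, 1, 2, ... until IndexError is raised:
assert list(Squares(4)) == [0, 1, 4, 9]
```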
object.__iter__(self)
This method is called when an iterator is required for a container. This method should
return a new iterator object that can iterate over all the objects in the container. For
mappings, it should iterate over the keys of the container.
object.__reversed__(self)
Called (if present) by the reversed() built-in to implement reverse iteration. It should
return a new iterator object that iterates over all the objects in the container in reverse
order.
If the __reversed__() method is not provided, the reversed() built-in will fall
back to using the sequence protocol (__len__() and __getitem__()). Objects that
support the sequence protocol should only provide __reversed__() if they can
provide an implementation that is more efficient than the one provided by reversed().
object.__contains__(self, item)
Called to implement membership test operators. Should return true if item is in self, false
otherwise. For mapping objects, this should consider the keys of the mapping rather than
the values or the key-item pairs.
For objects that don’t define __contains__(), the membership test first tries iteration
via __iter__(), then the old sequence iteration protocol via __getitem__(),
see this section in the language reference.
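A minimal sketch with a hypothetical Evens class:

```python
class Evens:
    def __contains__(self, item):
        # Membership is decided by a predicate, not by stored items
        return isinstance(item, int) and item % 2 == 0

assert 4 in Evens()
assert 3 not in Evens()
assert "4" not in Evens()   # non-ints are rejected by the type check
```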
object.__add__(self, other)
object.__sub__(self, other)
object.__mul__(self, other)
object.__matmul__(self, other)
object.__truediv__(self, other)
object.__floordiv__(self, other)
object.__mod__(self, other)
object.__divmod__(self, other)
object.__pow__(self, other[, modulo])
object.__lshift__(self, other)
object.__rshift__(self, other)
object.__and__(self, other)
object.__xor__(self, other)
object.__or__(self, other)
These methods are called to implement the binary arithmetic operations
(+, -, *, @, /, //, %, divmod(), pow(), **, <<, >>, &, ^, |). For instance, to evaluate
the expression x + y, where x is an instance of a class that has
an __add__() method, type(x).__add__(x, y) is called.
The __divmod__() method should be the equivalent to
using __floordiv__() and __mod__(); it should not be related
to __truediv__(). Note that __pow__() should be defined to accept an optional
third argument if the ternary version of the built-in pow() function is to be supported.
If one of those methods does not support the operation with the supplied arguments, it
should return NotImplemented.
object.__radd__(self, other)
object.__rsub__(self, other)
object.__rmul__(self, other)
object.__rmatmul__(self, other)
object.__rtruediv__(self, other)
object.__rfloordiv__(self, other)
object.__rmod__(self, other)
object.__rdivmod__(self, other)
object.__rpow__(self, other[, modulo])
object.__rlshift__(self, other)
object.__rrshift__(self, other)
object.__rand__(self, other)
object.__rxor__(self, other)
object.__ror__(self, other)
These methods are called to implement the binary arithmetic operations
(+, -, *, @, /, //, %, divmod(), pow(), **, <<, >>, &, ^, |) with reflected (swapped)
operands. These functions are only called if the left operand does not support the
corresponding operation [3] and the operands are of different types. [4] For instance, to
evaluate the expression x - y, where y is an instance of a class that has
an __rsub__() method, type(y).__rsub__(y, x) is called
if type(x).__sub__(x, y) returns NotImplemented.
Note that ternary pow() will not try calling __rpow__() (the coercion rules would
become too complicated).
Note
If the right operand’s type is a subclass of the left operand’s type and that subclass
provides a different implementation of the reflected method for the operation, this
method will be called before the left operand’s non-reflected method. This behavior
allows subclasses to override their ancestors’ operations.
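The reflected path can be demonstrated with a hypothetical `Celsius` class: in `100 - Celsius(37)`, `int.__sub__()` returns NotImplemented, so `Celsius.__rsub__()` is called.

```python
class Celsius:
    def __init__(self, degrees):
        self.degrees = degrees

    def __rsub__(self, other):
        # Called for `other - self` once type(other).__sub__ returns
        # NotImplemented (int does not know how to subtract a Celsius).
        return other - self.degrees


print(100 - Celsius(37))  # 63
```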
These methods are called to implement the augmented arithmetic assignments (+=, -=, *=, @=, /=, //=, %=, **=, <<=, >>=, &=, ^=, |=). These methods should attempt to
do the operation in-place (modifying self) and return the result (which could be, but does
not have to be, self). If a specific method is not defined, or if that method
returns NotImplemented, the augmented assignment falls back to the normal methods.
For instance, if x is an instance of a class with an __iadd__() method, x += y is
equivalent to x = x.__iadd__(y). If __iadd__() does not exist, or
if x.__iadd__(y) returns NotImplemented, x.__add__(y) and y.__radd__(
x) are considered, as with the evaluation of x + y. In certain situations, augmented
assignment can result in unexpected errors (see Why does a_tuple[i] += ['item'] raise an
exception when the addition works?), but this behavior is in fact part of the data model.
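Both the in-place path and the fallback can be sketched as follows (the `Tally` class is hypothetical):

```python
class Tally:
    def __init__(self):
        self.items = []

    def __iadd__(self, other):
        self.items.append(other)  # mutate in place...
        return self               # ...and return the result (here, self)


t = Tally()
t += "a"
t += "b"
print(t.items)  # ['a', 'b']

# Without __iadd__(), `x += y` falls back to `x = x + y`:
nums = (1, 2)
nums += (3,)   # tuples are immutable, so this rebinds to a new object
print(nums)    # (1, 2, 3)
```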
Called to implement the unary arithmetic operations (-, +, abs() and ~).
Called to implement the built-in functions complex(), int() and float(). Should
return a value of the appropriate type.
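A short sketch covering both the unary operators and the conversion hooks (the `Ratio` class is hypothetical):

```python
class Ratio:
    def __init__(self, num, den):
        self.num, self.den = num, den

    def __neg__(self):              # unary minus
        return Ratio(-self.num, self.den)

    def __int__(self):              # int(r)
        return self.num // self.den

    def __float__(self):            # float(r)
        return self.num / self.den


r = Ratio(7, 2)
print(int(r), float(r))  # 3 3.5
print(int(-r))           # -4  (floor division of -7 by 2)
```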
Context managers are normally invoked using the with statement (described in section The with statement), but can also be used by directly invoking their methods.
Enter the runtime context related to this object. The with statement will bind this
method’s return value to the target(s) specified in the as clause of the statement, if any.
Exit the runtime context related to this object. The parameters describe the exception that
caused the context to be exited. If the context was exited without an exception, all three
arguments will be None.
If an exception is supplied, and the method wishes to suppress the exception (i.e., prevent
it from being propagated), it should return a true value. Otherwise, the exception will be
processed normally upon exit from this method.
Note that __exit__() methods should not reraise the passed-in exception; this is the
caller’s responsibility.
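Both methods can be sketched in a minimal context manager (the `suppress_type` class is hypothetical; the standard library's contextlib.suppress does the same job):

```python
class suppress_type:
    """Context manager that suppresses one exception type."""
    def __init__(self, exc_type):
        self.exc_type = exc_type

    def __enter__(self):
        return self  # bound to the `as` target, if any

    def __exit__(self, exc_type, exc_value, traceback):
        # Returning a true value suppresses the exception;
        # returning None or False lets it propagate.
        return exc_type is not None and issubclass(exc_type, self.exc_type)


with suppress_type(KeyError):
    {}["missing"]
print("still running")  # the KeyError was suppressed
```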
The specification, background, and examples for the Python with statement.
When using a class name in a pattern, positional arguments in the pattern are not allowed by default. To support them, the class needs to define a __match_args__ attribute.
This class variable can be assigned a tuple of strings. When this class is used in a class
pattern with positional arguments, each positional argument will be converted into a
keyword argument, using the corresponding value in __match_args__ as the keyword.
The absence of this attribute is equivalent to setting it to ().
The number of positional arguments in a pattern must be smaller than or equal to the number of elements in __match_args__; if it is larger, the pattern match attempt will raise a TypeError.
Called when a buffer is no longer needed. The buffer argument is a memoryview object
that was previously returned by __buffer__(). The method must release any
resources associated with the buffer. This method should return None. Buffer objects that
do not need to perform any cleanup are not required to implement this method.
If the implicit lookup of these methods used the conventional lookup process, they would fail when invoked on the type object itself (special methods must be set on the class object itself in order to be consistently invoked by the interpreter).
Note
The language doesn’t place any restriction on the type or value of the objects yielded by
the iterator returned by __await__, as this is specific to the implementation of the
asynchronous execution framework (e.g. asyncio) that will be managing
the awaitable object.
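A minimal awaitable can be sketched like this (the `Ready` class is hypothetical): __await__ is written as a generator, and the value returned from it becomes the result of the await expression.

```python
import asyncio


class Ready:
    """Awaitable that resolves immediately to a value."""
    def __init__(self, value):
        self.value = value

    def __await__(self):
        if False:
            yield  # makes this a generator function; it never actually yields
        return self.value  # becomes StopIteration.value, i.e. the await result


async def main():
    return await Ready(42)


print(asyncio.run(main()))  # 42
```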
When the coroutine finishes execution, the iterator raises StopIteration, and the exception’s value attribute holds the return value. If the coroutine raises an exception, it is propagated by the iterator.
Changed in version 3.12: The second signature (type[, value[, traceback]]) is deprecated
and may be removed in a future version of Python.
Causes the coroutine to clean itself up and exit. If the coroutine is suspended, this method
first delegates to the close() method of the iterator that caused the coroutine to
suspend, if it has such a method. Then it raises GeneratorExit at the suspension
point, causing the coroutine to immediately clean itself up. Finally, the coroutine is
marked as having finished executing, even if it was never started.
Coroutine objects are automatically closed using the above process when they are about
to be destroyed.
Must return an awaitable resulting in a next value of the iterator. Should raise
a StopAsyncIteration error when the iteration is over.
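Together with __aiter__(), this is the asynchronous iteration protocol, sketched here with a hypothetical `ACountdown` class driven by an async for loop:

```python
import asyncio


class ACountdown:
    def __init__(self, n):
        self.n = n

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.n <= 0:
            raise StopAsyncIteration  # signals the end of iteration
        self.n -= 1
        return self.n + 1


async def main():
    result = []
    async for i in ACountdown(3):
        result.append(i)
    return result


print(asyncio.run(main()))  # [3, 2, 1]
```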
Semantically similar to __enter__(), the only difference being that it must return
an awaitable.
Semantically similar to __exit__(), the only difference being that it must return
an awaitable.
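An asynchronous context manager can be sketched as follows (the `AsyncTimer` class is hypothetical); both methods are coroutines, and the object is used with async with:

```python
import asyncio


class AsyncTimer:
    async def __aenter__(self):
        self.start = asyncio.get_running_loop().time()
        return self  # bound to the `as` target

    async def __aexit__(self, exc_type, exc_value, traceback):
        self.elapsed = asyncio.get_running_loop().time() - self.start
        return False  # do not suppress exceptions


async def main():
    async with AsyncTimer() as t:
        await asyncio.sleep(0)
    return t.elapsed


print(asyncio.run(main()) >= 0)  # True
```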
It is possible in some cases to change an object’s type, under certain controlled conditions. It generally isn’t a good idea though, since it can lead to some very strange behaviour if it is handled incorrectly.
“Does not support” here means that the class has no such method, or the method returns NotImplemented. Do not set the method to None if you want to force fallback to the right operand’s reflected method—that will instead have the opposite effect of explicitly blocking such fallback.