
Commit 32608f0

asyncio/taskgroup: Merge to current master.
Signed-off-by: Matthias Urlichs <matthias@urlichs.de>
2 parents: db85427 + 6d448ed

14 files changed (+1386, -10 lines)


docs/library/asyncio.rst

Lines changed: 100 additions & 3 deletions
@@ -112,6 +112,76 @@ class Task
     ignore this exception. Cleanup code may be run by trapping it, or via
     ``try ... finally``.

+class TaskGroup
+---------------
+
+See Nathaniel J. Smith's `essay on Structured Concurrency
+<https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/>`_
+for an introduction to why you should use taskgroups instead of starting
+"naked" tasks.
+
+.. note::
+    His "nursery" objects are called "taskgroup" in asyncio; the
+    equivalent of a "go statement" is `Loop.create_task`.
+
+.. class:: TaskGroup()
+
+    This object is an async context manager holding a group of tasks.
+    Tasks can be added to the group using `TaskGroup.create_task`.
+
+    If a task belonging to the group fails, the remaining tasks in the
+    group are cancelled with an :exc:`asyncio.CancelledError` exception.
+    (This also holds for the code within the context manager's block.)
+    No further tasks can then be added to the group.
+
+    When there is no exception, leaving the context manager waits for
+    the taskgroup's member tasks to end before proceeding. It does not
+    cancel these tasks and does not prevent the creation of new tasks.
+
+.. method:: TaskGroup.create_task(coroutine)
+
+    Create a subtask that executes *coroutine* as part of this taskgroup.
+
+    Returns the new task.
+
+.. method:: TaskGroup.cancel()
+
+    Stop the taskgroup, i.e. cancel all its tasks.
+
+    This method is equivalent to cancelling the task responsible for the
+    body of the taskgroup, *if* that is what the task is currently doing.
+
+.. exception:: Cancelled
+
+    This exception is raised in a task whose taskgroup is being cancelled.
+
+    This is a subclass of ``BaseException``; it should never be caught.
+
+.. exception:: ExceptionGroup
+
+    If multiple subtasks raise exceptions in parallel, it is unclear which
+    of them should be propagated. Thus an `ExceptionGroup` exception
+    collects them and is raised instead.
+
+.. method:: ExceptionGroup.split(typ)
+
+    This method can be used to filter the exceptions within an exception group.
+    It returns two lists: the first contains those sub-exceptions which
+    match *typ*, the second those which do not.
+
+    *typ* can be an exception class, a tuple of exception classes, or a
+    callable that returns ``True`` if the exception passed to it should be
+    returned in the first list.
+
+    MicroPython does not support CPython 3.11's ``except*`` syntax for
+    handling exception groups.
+
+.. exception:: BaseExceptionGroup
+
+    Like `ExceptionGroup`, but used if one of the sub-exceptions is not a
+    subclass of `Exception`.
+
 class Event
 -----------
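For illustration, a minimal usage sketch of the TaskGroup API documented above. It is not part of the commit; it assumes the asyncio.TaskGroup export added by this patch and MicroPython's sleep_ms extension, and the worker coroutine is hypothetical.

import asyncio


async def worker(n):
    # Pretend to do some work, then report back.
    await asyncio.sleep_ms(10 * (n + 1))
    print("worker", n, "done")


async def main():
    # Leaving the "async with" block waits for all member tasks to finish.
    async with asyncio.TaskGroup() as tg:
        for n in range(3):
            tg.create_task(worker(n))
    print("all workers finished")


asyncio.run(main())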
@@ -215,22 +285,41 @@ TCP stream connections

 .. function:: start_server(callback, host, port, backlog=5, ssl=None)

-    Start a TCP server on the given *host* and *port*. The *callback* will be
-    called with incoming, accepted connections, and be passed 2 arguments: reader
-    and writer streams for the connection.
+    Start a TCP server on the given *host* and *port*. For each incoming,
+    accepted connection, *callback* will be called in a new task with
+    2 arguments: reader and writer streams for the connection.
+
+    This function does **not** co-operate well with taskgroups.
+    If you use them, you should use a function like `run_server`
+    (supplied in ``examples/run_server.py``) instead.

     If *ssl* is a `ssl.SSLContext` object, this context is used to create the transport.

     Returns a `Server` object.

     This is a coroutine.

+.. function:: run_server(callback, host, port, backlog=5, taskgroup=None)
+
+    Start a TCP server on the given *host* and *port*. For each incoming,
+    accepted connection, *callback* will be called in a new task with
+    2 arguments: reader and writer streams for the connection.
+
+    The new task is started in *taskgroup*. An internal taskgroup will be
+    used if none is passed in.
+
+    This is a coroutine. It does not return unless cancelled.
+
 .. class:: Stream()

     This represents a TCP stream connection. To minimise code this class implements
     both a reader and a writer, and both ``StreamReader`` and ``StreamWriter`` alias to
     this class.

+    This class should be used as an async context manager. Leaving the context
+    will close the connection.
+
 .. method:: Stream.get_extra_info(v)

     Get extra information about the stream, given by *v*. The valid values for *v* are:
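As an editorial illustration of the run_server pattern described above (not part of this commit): a tiny echo server whose connection handlers run inside the server's internal taskgroup. It assumes examples/run_server.py has been copied onto the import path as run_server, and relies on reader and writer being the same Stream object.

import asyncio
from run_server import run_server  # assumed copy of examples/run_server.py


async def echo(reader, writer):
    # Each connection handler runs as its own task in the server's taskgroup.
    async with writer:  # leaving the block closes the connection
        data = await reader.read(64)
        writer.write(data)
        await writer.drain()


# With taskgroup=None, run_server manages an internal TaskGroup and only
# returns once it is cancelled.
asyncio.run(run_server(echo, "0.0.0.0", 8080))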
@@ -240,6 +329,12 @@ TCP stream connections

     Close the stream.

+    Depending on the stream's concrete implementation, this call may do
+    nothing. You should call the `wait_closed` coroutine immediately
+    afterwards.
+
+    Streams are closed implicitly when used as an async context manager.
+
 .. method:: Stream.wait_closed()

     Wait for the stream to close.
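A short editorial sketch of the close idiom described above (not from the diff; the shutdown helper is hypothetical):

async def shutdown(stream):
    # close() may be a no-op for some Stream implementations, so always
    # follow it with wait_closed() to make sure the socket is released.
    stream.close()
    await stream.wait_closed()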
@@ -326,6 +421,8 @@ Event Loop

     Create a task from the given *coro* and return the new `Task` object.

+    You should not call this method when you're using taskgroups.
+
 .. method:: Loop.run_forever()

     Run the event loop until `stop()` is called.

examples/run_server.py

Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
+import asyncio
+import socket
+from asyncio.stream import Stream
+
+
+# Helper to run a TCP stream server.
+# Callbacks (i.e. connection handlers) may run in a different taskgroup.
+async def run_server(cb, host, port, backlog=5, taskgroup=None):
+    # Create and bind server socket.
+    host = socket.getaddrinfo(host, port)[0]  # TODO this is blocking!
+    s = socket.socket()
+    s.setblocking(False)
+    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+    s.bind(host[-1])
+    s.listen(backlog)
+
+    if taskgroup is None:
+        from asyncio import TaskGroup
+
+        async with TaskGroup() as tg:
+            await _run_server(tg, s, cb)
+    else:
+        await _run_server(taskgroup, s, cb)
+
+
+async def _run_server(tg, s, cb):
+    try:
+        while True:
+            yield asyncio._io_queue.queue_read(s)
+            try:
+                s2, addr = s.accept()
+            except Exception:
+                # Ignore a failed accept
+                continue
+
+            s2.setblocking(False)
+            s2s = Stream(s2, {"peername": addr})
+            tg.create_task(cb(s2s, s2s))
+    finally:
+        s.close()
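An editorial sketch of the taskgroup= parameter in action (not part of this commit; the import path and handler are hypothetical): passing an existing group makes the listener and all connection handlers members of that group, so they are torn down together.

import asyncio
from run_server import run_server  # assumed copy of the example above


async def handler(reader, writer):
    async with writer:
        writer.write(b"hello\r\n")
        await writer.drain()


async def main():
    async with asyncio.TaskGroup() as tg:
        # Listener and handlers share one taskgroup; cancelling the group
        # (or a failure in any member) stops the whole server at once.
        tg.create_task(run_server(handler, "0.0.0.0", 8080, taskgroup=tg))


asyncio.run(main())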

extmod/asyncio/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -16,6 +16,7 @@
     "start_server": "stream",
     "StreamReader": "stream",
     "StreamWriter": "stream",
+    "TaskGroup": "taskgroup",
 }

extmod/asyncio/core.py

Lines changed: 24 additions & 4 deletions
@@ -15,6 +15,25 @@
 # Exceptions


+class BaseExceptionGroup(BaseException):
+    def split(self, typ):
+        a, b = [], []
+        if isinstance(typ, (type, tuple)):  # typ is a class or tuple of classes, not an instance
+            for err in self.args[1]:
+                (a if isinstance(err, typ) else b).append(err)
+        else:
+            for err in self.args[1]:
+                (a if typ(err) else b).append(err)
+        return a, b
+
+
+class ExceptionGroup(Exception):  # TODO cannot also inherit from BaseExceptionGroup
+    pass
+
+
+ExceptionGroup.split = BaseExceptionGroup.split
+
+
 class CancelledError(BaseException):
     pass

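A hedged illustration of consuming the two lists returned by split() (editorial, not from this commit). It assumes ExceptionGroup and TimeoutError are re-exported as asyncio.ExceptionGroup and asyncio.TimeoutError, and it builds the group by hand with the CPython-style (message, exceptions) arguments that split()'s use of args[1] implies; in real code the group would come out of a failing TaskGroup.

import asyncio

eg = asyncio.ExceptionGroup(
    "demo", [asyncio.TimeoutError("no answer"), ValueError("bad reading")]
)

# Keep the timeouts we know how to handle, inspect the rest.
timeouts, others = eg.split(asyncio.TimeoutError)
print("timed out:", timeouts)
print("unexpected:", others)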
@@ -32,7 +51,7 @@ class TimeoutError(Exception):


 # "Yield" once, then raise StopIteration
-class SingletonGenerator:
+class SleepHandler:
     def __init__(self):
         self.state = None
         self.exc = StopIteration()
@@ -51,9 +70,10 @@ def __next__(self):


 # Pause task execution for the given time (integer in milliseconds, uPy extension)
-# Use a SingletonGenerator to do it without allocating on the heap
-def sleep_ms(t, sgen=SingletonGenerator()):
-    assert sgen.state is None
+# Try not to allocate a SleepHandler on the heap if possible
+def sleep_ms(t, sgen=SleepHandler()):
+    if sgen.state is not None:  # the static one is busy
+        sgen = SleepHandler()
     sgen.state = ticks_add(ticks(), max(0, t))
     return sgen

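The busy check above replaces the old assert: if the shared handler is still armed, for example because a previously returned sleep has not been consumed yet, a fresh SleepHandler is allocated instead of failing. A small editorial sketch, not part of the commit:

import asyncio


async def main():
    # Obtain two sleep awaitables before awaiting either of them; the
    # second call finds the shared handler busy and gets its own.
    first = asyncio.sleep_ms(100)
    second = asyncio.sleep_ms(200)
    await first
    await second


asyncio.run(main())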
extmod/asyncio/manifest.py

Lines changed: 1 addition & 0 deletions
@@ -9,6 +9,7 @@
         "funcs.py",
         "lock.py",
         "stream.py",
+        "taskgroup.py",
     ),
     base_path="..",
     opt=3,

extmod/asyncio/stream.py

Lines changed: 11 additions & 2 deletions
@@ -5,6 +5,7 @@


 class Stream:
+    # The underlying socket must have an idempotent `close` method.
     def __init__(self, s, e={}):
         self.s = s
         self.e = e
@@ -13,11 +14,19 @@ def __init__(self, s, e={}):
     def get_extra_info(self, v):
         return self.e[v]

+    async def __aenter__(self):
+        return self
+
+    async def __aexit__(self, exc_type, exc, tb):
+        self.close()
+
     def close(self):
-        pass
+        # The (old) CPython idiom is to call `close`, then immediately
+        # follow up with `await stream.wait_closed`.
+        self.s.close()

     async def wait_closed(self):
-        # TODO yield?
+        # XXX yield?
         self.s.close()

     # async
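An editorial sketch of the new context-manager behaviour from the client side (not part of this commit; it assumes asyncio.open_connection and the existing Stream write/drain/readline API, and the peer address is made up):

import asyncio


async def ping(host, port):
    reader, writer = await asyncio.open_connection(host, port)
    # reader and writer are the same Stream; leaving the "async with"
    # block closes the underlying socket via __aexit__.
    async with writer:
        writer.write(b"ping\r\n")
        await writer.drain()
        print(await reader.readline())


asyncio.run(ping("192.168.0.10", 1234))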
