Backport py3.11 asyncio's taskgroups. #8791

Open
Wants to merge 26 commits into base: master
Commits (26):
- 441d54a extmod/uasyncio: Backport py3.11 asyncio's taskgroups. (smurfix, Jun 20, 2022)
- 51f2494 extmod/uasyncio: Task needs to be hashable. (smurfix, Jun 22, 2022)
- ff9c669 extmod/uasyncio/taskgroup: Rewrite state machine. (smurfix, Sep 2, 2023)
- fb5d591 extmod/uasyncio/core: Handle colliding sleep_ms calls. (smurfix, Sep 2, 2023)
- e1c7829 extmod/asyncio/taskgroup.py: State machine update. (smurfix, Sep 2, 2023)
- 4beafa4 tests/run-test: Conditionally skip uasyncio_taskgroup. (smurfix, Feb 17, 2023)
- 2440fe4 tests/extmod/asyncio_taskgroup: Improve test output. (smurfix, Sep 2, 2023)
- 8dc64b0 extmod/asyncio/core: Implement BaseExceptionGroup.split(). (smurfix, Sep 2, 2023)
- d0eed1e extmod/asyncio/taskgroup: Fixed a typo. (smurfix, Sep 2, 2023)
- 1740d74 extmod/asyncio/taskgroup: Colliding cancel vs. exception. (smurfix, Sep 2, 2023)
- c4947ce tests/extmod/asyncio_taskgroup: Do not nest taskgroups in native tasks. (smurfix, Sep 2, 2023)
- 36ae50a tests/run-tests: Heap lock and native crashes. (smurfix, Sep 2, 2023)
- e82d9b7 tests/run-tests: Exclude thread_exc1 test. (smurfix, Sep 2, 2023)
- d95f4b2 extmod/asyncio/taskgroup: Exiting vs. aborting taskgroups. (smurfix, Sep 2, 2023)
- 4e56794 extmod/asyncio/taskgroup: NO-OP cancelling an inactive taskgroup. (smurfix, Sep 3, 2023)
- 12defaf extmod/asyncio/stream: Handle closing a stream. (smurfix, Sep 3, 2023)
- ce9ea8f docs/library/asyncio: Add a section on taskgroups. (smurfix, Sep 3, 2023)
- d1bdd99 Merge branch 'master' of https://github.com/micropython/micropython i… (smurfix, Oct 2, 2023)
- e022a1c extmod/asyncio: Merge to current main branch. (smurfix, Aug 27, 2024)
- 8aadd43 asyncio/core: Fix merge problem. (smurfix, Aug 28, 2024)
- 1391bcf docs/asyncio: Fix capitalization error. (smurfix, Aug 28, 2024)
- b61cc3e asyncio/stream: Spurious "await". (smurfix, Aug 28, 2024)
- 59eb4ba asyncio: Move `run_server` to examples. (smurfix, Aug 28, 2024)
- f4f510b webassembly/asyncio: Add support for taskgroups. (smurfix, Aug 28, 2024)
- 6d448ed ports/webassembly: Add exception groups to its asyncio module. (smurfix, Sep 13, 2024)
- 32608f0 asyncio/taskgroup: Merge to current master. (smurfix, Apr 11, 2025)
2 changes: 2 additions & 0 deletions extmod/asyncio/__init__.py
@@ -14,8 +14,10 @@
"Lock": "lock",
"open_connection": "stream",
"start_server": "stream",
"run_server": "stream",
"StreamReader": "stream",
"StreamWriter": "stream",
"TaskGroup": "taskgroup",
}


31 changes: 27 additions & 4 deletions extmod/asyncio/core.py
@@ -15,6 +15,25 @@
# Exceptions


class BaseExceptionGroup(BaseException):
def split(self, typ):
a, b = [], []
if isinstance(typ, (BaseException, tuple)):
for err in self.args[1]:
(a if isinstance(err, typ) else b).append(err)
else:
for err in self.args[1]:
(a if typ(err) else b).append(err)
return a, b


class ExceptionGroup(Exception): # TODO cannot also inherit from BaseExceptionGroup
pass


ExceptionGroup.split = BaseExceptionGroup.split
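A standalone sketch (plain CPython, hypothetical `MiniExceptionGroup` name, simplified to the tuple-of-types and predicate cases) of how this `split()` helper partitions exceptions. It assumes `args[1]` holds the member exceptions, as in `ExceptionGroup(msg, excs)`, and note that unlike CPython's `split()` it returns two plain lists rather than two new exception groups:

```python
# Standalone sketch (hypothetical MiniExceptionGroup name) mirroring the
# split() above: args[1] holds the member exceptions, and the result is a
# (matched, rest) pair of lists, not two new exception groups.

class MiniExceptionGroup(BaseException):
    def split(self, typ):
        a, b = [], []
        if isinstance(typ, tuple):  # tuple of exception types
            for err in self.args[1]:
                (a if isinstance(err, typ) else b).append(err)
        else:  # predicate callable
            for err in self.args[1]:
                (a if typ(err) else b).append(err)
        return a, b

eg = MiniExceptionGroup("demo", [ValueError("v"), OSError("o"), KeyError("k")])
matched, rest = eg.split((ValueError, KeyError))
assert [type(e) for e in matched] == [ValueError, KeyError]
assert [type(e) for e in rest] == [OSError]
```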


class CancelledError(BaseException):
pass

@@ -32,7 +51,7 @@ class TimeoutError(Exception):


# "Yield" once, then raise StopIteration
class SingletonGenerator:
class SleepHandler:
def __init__(self):
self.state = None
self.exc = StopIteration()
@@ -51,9 +70,10 @@ def __next__(self):


# Pause task execution for the given time (integer in milliseconds, uPy extension)
# Use a SingletonGenerator to do it without allocating on the heap
def sleep_ms(t, sgen=SingletonGenerator()):
assert sgen.state is None
# Try not to allocate a SleepHandler on the heap if possible
def sleep_ms(t, sgen=SleepHandler()):
if sgen.state is not None: # the static one is busy
sgen = SleepHandler()
Contributor:
Could you give some details on when / how the singleton is already in use?
It seems like it'll be confusing that sometimes sleep will allocate and sometimes it won't.

Contributor (author):

Of course. Simply start two tasks and let both of them sleep at the same time.

I don't think it's confusing; this is entirely internal to sleep_ms.

Contributor:

I very regularly have a large number of tasks with each of them sleeping for different lengths of time... I haven't noticed any issue.

What problem / symptoms do you see?

Contributor (author) @smurfix, Sep 4, 2023:

I initially found this bug via test 12 in tests/extmod/asyncio_taskgroup.py, according to my notes, though I don't remember details.

Test 12 no longer errors out when the change we're discussing here is removed, but adding a print statement to the if clause does show that the problem is still triggered by the test, and that it does cause the sleep time of some other task to be modified and/or ignored.

--- extmod/asyncio/core.py
+++ extmod/asyncio/core.py
@@ -73,7 +73,8 @@ class SleepHandler:
 # Try not to allocate a SleepHandler on the heap if possible
 def sleep_ms(t, sgen=SleepHandler()):
     if sgen.state is not None:  # the static one is busy
-        sgen = SleepHandler()
+        print("XXX WARN SLEEP XXX",sgen.state,ticks_add(ticks(), max(0, t)))
+        # sgen = SleepHandler()
     sgen.state = ticks_add(ticks(), max(0, t))
     return sgen
 

The original code depends on the scheduler to iterate the "singleton" object that's yielded by sleep_ms immediately, i.e. before any other task which also happens to be scheduled calls sleep_ms. This condition seems to hold most of the time, but not always.

Contributor:

The original code depends on the scheduler to iterate the "singleton" object that's yielded by sleep_ms immediately, i.e. before any other task which also happens to be scheduled calls sleep_ms. This condition seems to hold most of the time, but not always.

Ah, this is quite interesting. I had wondered how the Singleton could ever work to provide multiple different sleep function usages... I can see how multiple tasks started back to back (like in a new taskgroup setup) would be more likely to trigger this situation of multiple definitions before the task loop makes it back around to the start to process the sleeps.

Member:

This definitely needs to be understood better. It should be that the scheduler always iterates the singleton immediately. If that's no longer the case then we need to understand why and document it.

Contributor (author):

I can only assume that the "always iterate immediately" assumption was never true; we only thought it was. After all, building a test that actually fails when it's not is a nontrivial exercise, hence the print("XXX") patch that demonstrates the issue.
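The collision being discussed can be sketched in plain CPython (hypothetical simplified names, an explicit `now` argument instead of `ticks()`): if two tasks obtain the sleep object before the scheduler iterates the first one, the second call would overwrite the first task's deadline, which is exactly what the fall-back allocation prevents.

```python
# Sketch (plain CPython, not MicroPython internals) of the fall-back in the
# patch above: the module-level default argument acts as the "singleton", and
# a fresh object is allocated only when it is still in use.

class SleepHandler:
    def __init__(self):
        self.state = None  # wake-up deadline, or None when free

def sleep_ms(t, now, sgen=SleepHandler()):
    if sgen.state is not None:  # the static one is busy
        sgen = SleepHandler()
    sgen.state = now + max(0, t)
    return sgen

a = sleep_ms(100, now=0)  # first task sleeps; the singleton is now busy
b = sleep_ms(5, now=0)    # second task sleeps before the first is iterated
assert a is not b         # a fresh handler was allocated for the second call
assert a.state == 100 and b.state == 5  # neither deadline was clobbered
```

Without the `if sgen.state is not None` branch, both calls would return the same object and `a.state` would end up as 5, silently shortening the first task's sleep.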

sgen.state = ticks_add(ticks(), max(0, t))
return sgen

@@ -222,6 +242,9 @@ def run_until_complete(main_task=None):
_exc_context["exception"] = exc
_exc_context["future"] = t
Loop.call_exception_handler(_exc_context)
# XXX if we do await it later,
# leaving t.data as None will cause a fault.
t.data = exc


# Create a new task from a coroutine and run it until it finishes
1 change: 1 addition & 0 deletions extmod/asyncio/manifest.py
@@ -9,6 +9,7 @@
"funcs.py",
"lock.py",
"stream.py",
"taskgroup.py",
),
base_path="..",
opt=3,
46 changes: 44 additions & 2 deletions extmod/asyncio/stream.py
@@ -17,7 +17,8 @@ async def __aenter__(self):
return self

async def __aexit__(self, exc_type, exc, tb):
await self.close()
self.s.close()
pass

def close(self):
pass
@@ -152,7 +153,7 @@ async def _serve(self, s, cb):


# Helper function to start a TCP stream server, running as a new task
# TODO could use an accept-callback on socket read activity instead of creating a task
# DOES NOT USE TASKGROUPS. Use run_server instead
async def start_server(cb, host, port, backlog=5):
import socket

@@ -170,6 +171,47 @@ async def start_server(cb, host, port, backlog=5):
return srv


# Helper task to run a TCP stream server.
# Callbacks (i.e. connection handlers) may run in a different taskgroup.
async def run_server(cb, host, port, backlog=5, taskgroup=None):
Contributor:

So this function doesn't look like it's equivalent to any CPython function, is it?

Does CPython handle start_server differently when it's run from within a TaskGroup?

Contributor (author) @smurfix, Sep 3, 2023:

CPython doesn't have a taskgroup-aware way to run a server; it simply starts a new task for each client (for some value of "simply": its code is rather complex for historical reasons related to their protocol/transport split, which in hindsight was a rather bad design decision).

start_server doesn't play well with taskgroups. A taskgroup guarantees that when you leave it, all of its resources, subtasks and whatnot are done with. As soon as you start a non-taskgrouped task, which start_server of course does, all those nice guarantees go out the window. You can't work around this problem without introducing a race condition.

Yes, this is a MicroPython extension and thus something that we don't really want to do, but CPython doesn't yet have an equivalent.

Contributor (author):

… which reminds me that the whole taskgroup thing needs documentation. Duh. Working on it.

Contributor:

So on CPython, if you run start_server in a TaskGroup it'll run, but not get cleaned up if the TaskGroup exits?

Do you know if this has been raised as an issue for CPython?

Contributor (author):

I have no idea. I plan to investigate that (and submit a PR if necessary).

The "problem" is that most likely the issue hasn't come up yet because people who use taskgroups tend to do that with anyio or even trio. Both have their own wrappers to address this problem, which unfortunately are not a good fit for MicroPython.

Contributor @projectgus, Aug 28, 2024:

Did you end up bringing this up with CPython, @smurfix? It's probably the biggest remaining hurdle in trying to keep the asyncio module contents close to CPython.

I guess the alternative is to hide this in a different module somewhere (maybe in micropython-lib). However if CPython were up for solving this as well, that would be ideal.

Contributor (author):

No, I didn't yet; no free time to shepherd a feature like that, especially since taskgroups are still an under-utilized (and, among old-time asyncio adherents, poorly understood) feature.

I'll drop it for now and move the code to examples/ until we find a better home for it.

import socket

# Create and bind server socket.
host = socket.getaddrinfo(host, port)[0] # TODO this is blocking!
s = socket.socket()
s.setblocking(False)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(host[-1])
s.listen(backlog)

if taskgroup is None:
from . import TaskGroup

async with TaskGroup() as tg:
await _run_server(tg, s, cb)
else:
await _run_server(taskgroup, s, cb)


async def _run_server(tg, s, cb):
while True:
try:
yield core._io_queue.queue_read(s)
except core.CancelledError:
# Shutdown server
s.close()
return
try:
s2, addr = s.accept()
except Exception:
# Ignore a failed accept
continue

s2.setblocking(False)
s2s = Stream(s2, {"peername": addr})
tg.create_task(cb(s2s, s2s))


################################################################################
# Legacy uasyncio compatibility
