Usage

Using trio from asyncio, or vice versa, requires two steps:

  • Set up a main loop that supports both flavors

  • Use cross-domain function calls to switch between them

Because Trio and asyncio differ in some key semantics, most notably how they handle tasks and cancellation, their domains are usually kept strictly separate – i.e., you need to call a wrapper that translates from one flavor to the other. While trio-asyncio includes a wrapper that allows you to ignore that separation in some limited cases, you probably should not use it in non-trivial programs.

Startup and shutdown

Trio main loop

Typically, you start with a Trio program which you need to extend with asyncio code.

Before:

import trio

trio.run(async_main, *args)

After:

import trio
import trio_asyncio

trio_asyncio.run(async_main, *args)

Note that async_main here still must be a Trio-flavored async function! trio_asyncio.run() is trio.run() plus an additional asyncio context (which you can take advantage of using aio_as_trio()).

Equivalently, wrap your main function (or any other code that needs to talk to asyncio) in a trio_asyncio.open_loop() block:

import asyncio
import trio
import trio_asyncio

async def async_main_wrapper(*args):
    async with trio_asyncio.open_loop() as loop:
        assert loop == asyncio.get_event_loop()
        await async_main(*args)

trio.run(async_main_wrapper, *args)

In either case, within async_main, calls to asyncio.get_event_loop() will return the currently-running TrioEventLoop instance. (Since asyncio code uses the result of get_event_loop() as the default event loop in effectively all cases, this means you don’t need to pass loop= arguments around explicitly.)

TrioEventLoop has a few trio-asyncio specific methods in addition to the usual asyncio.AbstractEventLoop interface; these are documented in the appropriate sections below. In general you don’t need to care about any of them, though, as you can just use aio_as_trio() to run asyncio code. (See below for more details.)

async with trio_asyncio.open_loop(queue_len=None)

Returns a Trio-flavored async context manager which provides an asyncio event loop running on top of Trio.

The context manager evaluates to a new TrioEventLoop object.

Entering the context manager is not enough on its own to immediately run asyncio code; it just provides the context that makes running that code possible. You additionally need to wrap any asyncio functions that you want to run in aio_as_trio().

If you provide a queue_len, then any attempt to enqueue more than that many asyncio callbacks near-simultaneously (including, for example, new task creations) will fail. There is no way to backpressure asyncio callback registration, so the best we can do if the queue length is exceeded is raise an exception (trio.WouldBlock), which is likely to crash your whole program. It is suggested to leave the queue_len at its default of None (unlimited) unless you need to enforce hard constraints on memory use.

Exiting the context manager will attempt to do an orderly shutdown of the tasks it contains, analogously to asyncio.run(). Both asyncio-flavored tasks and Trio-flavored tasks (the latter started using trio_as_future(), run_trio_task(), trio_as_aio(), etc) are cancelled simultaneously, and the loop waits for them to exit in response to this cancellation before proceeding. All call_soon() callbacks that are submitted before exiting the context manager will run before starting this shutdown sequence, and all callbacks that are submitted before the last task exits will run before the loop closes. The exact point at which the loop stops running callbacks is not specified.

Warning

As with asyncio.run(), asyncio-flavored tasks that are started after exiting the context manager (such as by another task as it unwinds) may or may not be cancelled, and will be abandoned if they survive the shutdown sequence. This may lead to unclosed resources, stderr spew about “coroutine ignored GeneratorExit”, etc. Trio-flavored tasks do not have this hazard.

Example usage:

async def async_main(*args):
    async with trio_asyncio.open_loop() as loop:
        # async part of your main program here
        await trio.sleep(1)
        await trio_asyncio.aio_as_trio(asyncio.sleep)(2)

trio_asyncio.run(proc, *args, queue_len=None, **trio_run_options)

Run a Trio-flavored async function in a context that has an asyncio event loop also available.

This is exactly equivalent to using trio.run() plus wrapping the body of proc in async with trio_asyncio.open_loop():. See the documentation of open_loop() for more on the queue_len argument, which should usually be left at its default of None.

Stopping

The asyncio main loop will be stopped automatically when the code within async with open_loop() / trio_asyncio.run() exits. trio-asyncio will process all outstanding callbacks and terminate. As in asyncio, callbacks which are added during this step will be ignored.

You cannot restart the loop, nor would you want to. You can always make another loop if you need one.
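
For example, here is a minimal sketch: once one loop's block exits, simply open a fresh loop the next time you need asyncio.

import asyncio
import trio
import trio_asyncio

async def main():
    # First loop: runs some asyncio code, then shuts down when the block exits.
    async with trio_asyncio.open_loop():
        await trio_asyncio.aio_as_trio(asyncio.sleep)(0.1)

    await trio.sleep(0.1)  # no asyncio loop is available here

    # Need asyncio again? Open another loop.
    async with trio_asyncio.open_loop():
        await trio_asyncio.aio_as_trio(asyncio.sleep)(0.1)

trio.run(main)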

Asyncio main loop

Sometimes you instead start with asyncio code which you wish to extend with some Trio portions. The best-supported approach here is to wrap your entire asyncio program in a Trio event loop. In other words, you should transform this code:

def main():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(async_main())

or (Python 3.7 and later):

def main():
    asyncio.run(async_main())

to this:

def main():
    trio_asyncio.run(trio_asyncio.aio_as_trio(async_main))

If your program makes multiple calls to run_until_complete() and/or run_forever(), or if the call to asyncio.run() is hidden inside a library you’re using, then this may be a somewhat challenging transformation. In such cases, you can instead keep the old approach (get_event_loop() + run_until_complete()) unchanged, and if you’ve imported trio_asyncio (and not changed the asyncio event loop policy) you’ll still be able to use trio_as_aio() to run Trio code from within your asyncio-flavored functions. This is referred to internally as a “sync loop” (SyncTrioEventLoop), as contrasted with the “async loop” that you use when you start from an existing Trio run. The sync loop is implemented using the greenlet library to switch out of a Trio run that has not yet completed, so it is less well-supported than the approach where you start in Trio. But as of trio-asyncio 0.14.0, we do think it should generally work.
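
As a rough sketch (assuming trio-asyncio's default event loop policy is in effect, which happens simply by importing trio_asyncio), the unchanged asyncio entry points keep working and Trio code is reached through trio_as_aio():

import asyncio
import trio
import trio_asyncio  # importing this installs the trio-asyncio loop policy
from trio_asyncio import trio_as_aio

async def trio_part():
    await trio.sleep(0.1)
    return "from Trio"

async def aio_main():
    # asyncio-flavored code; Trio calls go through trio_as_aio()
    print(await trio_as_aio(trio_part)())

# Unchanged asyncio startup: with trio_asyncio imported, this loop is a
# "sync loop" (SyncTrioEventLoop) under the hood.
loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(aio_main())
finally:
    loop.close()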

Compatibility issues

Loop implementations

There are replacement event loops for asyncio, such as uvloop. trio-asyncio is not compatible with them.

Multithreading

trio-asyncio monkey-patches asyncio’s loop policy to be thread-local. This lets you use uvloop in one thread while running trio_asyncio in another.

Interrupting the asyncio loop

A trio-asyncio event loop created with open_loop() does not support run_until_complete or run_forever. If you need these features, you might be able to get away with using a “sync loop” as explained above, but it’s better to refactor your program so all of its async code runs within a single event loop invocation. For example, you might replace:

async def setup():
    pass # … start your services
async def shutdown():
    pass # … terminate services and clean up
    loop.stop()

loop = asyncio.get_event_loop()
loop.run_until_complete(setup())
loop.run_forever()

with:

stopped_event = trio.Event()
async def setup():
    pass # … start your services
async def cleanup():
    pass # … terminate services and clean up
async def shutdown():
    stopped_event.set()

async def async_main():
    await aio_as_trio(setup)()
    await stopped_event.wait()
    await aio_as_trio(cleanup)()
trio_asyncio.run(async_main)

Detecting the current function’s flavor

sniffio.current_async_library() correctly reports “asyncio” or “trio” when called from a trio-asyncio program, based on the flavor of function that’s calling it.

However, this feature should generally not be necessary, because you should know whether each function in your program is asyncio-flavored or Trio-flavored. (The two have different semantics, especially surrounding cancellation.) It’s provided mainly so that your trio-asyncio program can safely depend on libraries that use sniffio to support both flavors. It can also be helpful if you want to assert that you’re in the mode you think you’re in, using

assert sniffio.current_async_library() == "trio"

(or "asyncio") to detect mismatched flavors while porting code from asyncio to Trio.
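
For instance, in this minimal sketch each flavor reports itself correctly even while one calls into the other:

import asyncio
import sniffio
import trio_asyncio

async def aio_part():
    assert sniffio.current_async_library() == "asyncio"
    await asyncio.sleep(0)

async def trio_main():
    assert sniffio.current_async_library() == "trio"
    await trio_asyncio.aio_as_trio(aio_part)()

trio_asyncio.run(trio_main)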

Cross-calling

First, a bit of background.

For historical reasons, calling an async function (of any flavor) is a two-step process – that is, given

async def proc():
    pass

a call to await proc() does two things:

  • proc() returns an awaitable, i.e. something that has an __await__ method.

  • await proc() then hooks this awaitable up to your event loop, so that it can do whatever combination of execution and cooperative blocking it desires. (Technically, __await__() returns an iterable, which is iterated until it has been exhausted, and each yielded object is sent through to the event loop.)

asyncio traditionally uses awaitables for indirect procedure calls, so you often see the pattern:

async def some_code():
    pass
async def run(proc):
    await proc
await run(some_code())

This method has a problem: it decouples creating the awaitable from running it. If you decide to add code to run() that retries running proc when it encounters a specific error, you’re out of luck.

Trio, in contrast, uses (async) callables:

async def some_code():
    pass
async def run(proc):
    await proc()
await run(some_code)

Here, calling proc multiple times from within run is not a problem.
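
A brief sketch of why this matters, using a hypothetical retry helper: because proc is an async callable, every attempt produces a fresh coroutine, whereas a coroutine object that has already been awaited cannot be awaited again.

async def run_with_retry(proc, attempts=3):
    # `proc` is an async callable, so each attempt gets a fresh coroutine.
    for attempt in range(attempts):
        try:
            return await proc()
        except ConnectionError:
            if attempt == attempts - 1:
                raise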

trio-asyncio adheres to Trio conventions, but the asyncio way is also supported when possible.

Calling asyncio from Trio

Wrap the callable, awaitable, generator, or iterator in trio_asyncio.aio_as_trio().

Thus, you can call an asyncio function from Trio as follows:

async def aio_sleep(sec=1):
    await asyncio.sleep(sec)
async def trio_sleep(sec=2):
    await aio_as_trio(aio_sleep)(sec)
trio_asyncio.run(trio_sleep, 3)

or use as a decorator to pre-wrap:

@aio_as_trio
async def trio_sleep(sec=1):
    await asyncio.sleep(sec)
trio_asyncio.run(trio_sleep, 3)

or pass an awaitable:

async def aio_sleep(sec=1):
    await asyncio.sleep(sec)
async def trio_sleep(sec=2):
    await aio_as_trio(aio_sleep(sec))
trio_asyncio.run(trio_sleep, 3)

If you have a choice between aio_as_trio(foo)(bar) and aio_as_trio(foo(bar)), choose the former. If foo() is an async function defined with async def, it doesn’t matter; they behave equivalently. But if foo() is a synchronous wrapper that does anything before delegating to an async function, the first approach will let the synchronous part of foo() determine the current asyncio task, and the second will not. The difference is relevant in practice for popular libraries such as aiohttp.
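
To illustrate, here is a hypothetical sketch (fetch and _fetch_impl are made-up names, not part of any real library):

import asyncio
from trio_asyncio import aio_as_trio

async def _fetch_impl(url):
    # hypothetical asyncio-flavored implementation
    await asyncio.sleep(0.1)
    return url

def fetch(url):
    # hypothetical synchronous wrapper: this setup only runs in the asyncio
    # context when you use the aio_as_trio(fetch)(url) form
    print("synchronous setup before the coroutine starts")
    return _fetch_impl(url)

async def trio_caller():
    # Preferred: fetch()'s synchronous part runs inside the asyncio context.
    await aio_as_trio(fetch)("https://example.com")

    # Not recommended for wrappers like this one; the synchronous part
    # would run in Trio context instead:
    # await aio_as_trio(fetch("https://example.com"))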

aio_as_trio() also accepts asyncio.Futures:

async def aio_sleep(sec=1):
    await asyncio.sleep(sec)
    return 42
async def trio_sleep(sec=2):
    f = aio_sleep(1)
    f = asyncio.ensure_future(f)
    r = await aio_as_trio(f)
    assert r == 42
trio_asyncio.run(trio_sleep, 3)

as well as async iterators (such as async generators):

async def aio_slow():
    n = 0
    while True:
        await asyncio.sleep(n)
        yield n
        n += 1
async def printer():
    async for n in aio_as_trio(aio_slow()):
        print(n)
trio_asyncio.run(printer)

and async context managers:

class AsyncCtx:
    async def __aenter__(self):
        await asyncio.sleep(1)
        return self
    async def __aexit__(self, *tb):
        await asyncio.sleep(1)
    async def delay(self, sec=1):
        await asyncio.sleep(sec)
async def trio_ctx():
    async with aio_as_trio(AsyncCtx()) as ctx:
        print("within")
        await aio_as_trio(ctx.delay)(2)
trio_asyncio.run(trio_ctx)

As you can see from the above example, aio_as_trio() handles wrapping the context entry and exit, but it doesn’t know anything about async methods that may exist on the object to which the context evaluates. You still need to treat them as asyncio methods and wrap them appropriately when you call them.

Note that creating the async context manager or async iterator is not itself an asynchronous process; i.e., AsyncCtx.__init__ or __aiter__ is a normal synchronous procedure. Only the __aenter__ and __aexit__ methods of an async context manager, or the __anext__ method of an async iterator, are asynchronous. This is why you need to wrap the context manager or iterator itself – unlike with a simple procedure call, you cannot wrap the call for generating the context handler.

Thus, the following code will not work:

async def trio_ctx():
    async with aio_as_trio(AsyncCtx)() as ctx:  # no!
        print("within")
    async for n in aio_as_trio(aio_slow)():  # also no!
        print(n)

trio_asyncio.aio_as_trio(proc, *, loop=None)

Return a Trio-flavored wrapper for an asyncio-flavored awaitable, async function, async context manager, or async iterator.

Alias: asyncio_as_trio()

This is the primary interface for calling asyncio code from Trio code. You can also use it as a decorator on an asyncio-flavored async function; the decorated function will be callable from Trio-flavored code without additional boilerplate.

Note that while adapting coroutines, i.e.:

await aio_as_trio(proc(*args))

is supported (because asyncio uses them a lot) they’re not a good idea because setting up the coroutine won’t run within an asyncio context. If possible, use:

await aio_as_trio(proc)(*args)

instead.

Too complicated?

There’s also a somewhat-magic wrapper (trio_asyncio.allow_asyncio()) which, as the name implies, allows you to directly call asyncio-flavored functions from a function that is otherwise Trio-flavored.

async def hybrid():
    await trio.sleep(1)
    await asyncio.sleep(1)
    print("Well, that worked")
trio_asyncio.run(trio_asyncio.allow_asyncio, hybrid)

This method works for one-off code. However, there are a couple of semantic differences between asyncio and Trio which trio_asyncio.allow_asyncio() is unable to account for. Additionally, the transparency support is only one-way; you can’t transparently call Trio from a function that’s used by asyncio callers. Thus, you really should not use it for “real” programs or libraries.

await trio_asyncio.allow_asyncio(fn, *args)

Execute await fn(*args) in a context that allows fn to call both Trio-flavored and asyncio-flavored functions without marking which ones are which.

This is a Trio-flavored async function. There is no asyncio-flavored equivalent.

This wrapper allows you to indiscriminately mix Trio and asyncio functions, generators, or iterators:

import trio
import asyncio
import trio_asyncio

async def hello(loop):
    await asyncio.sleep(1)
    print("Hello")
    await trio.sleep(1)
    print("World")

async def main():
    async with trio_asyncio.open_loop() as loop:
        await trio_asyncio.allow_asyncio(hello, loop)
trio.run(main)

Unfortunately, there are issues with cancellation (specifically, asyncio functions will see trio.Cancelled instead of concurrent.futures.CancelledError). Thus, this mode is not the default.

Calling Trio from asyncio

Wrap the callable, generator, or iterator in trio_asyncio.trio_as_aio().

Thus, you can call a Trio function from asyncio as follows:

async def trio_sleep(sec=1):
    await trio.sleep(sec)
async def aio_sleep(sec=2):
    await trio_as_aio(trio_sleep)(sec)
trio_asyncio.run(aio_as_trio(aio_sleep), 3)

or use a decorator to pre-wrap:

@trio_as_aio
async def aio_sleep(sec=2):
    await trio.sleep(sec)
trio_asyncio.run(aio_as_trio(aio_sleep), 3)

In contrast to aio_as_trio(), using an awaitable is not supported because that’s not an idiom Trio uses.

Calling a function wrapped with trio_as_aio() returns a regular asyncio.Future. Thus, you can call it from a synchronous context (e.g. a callback hook). Of course, you’re responsible for catching any errors – either arrange to await the future, or use add_done_callback():

async def trio_sleep(sec=1):
    await trio.sleep(sec)
    return 42
def cb(f):
    assert f.result() == 42
async def aio_sleep(sec=2):
    f = trio_as_aio(trio_sleep)(1)
    f.add_done_callback(cb)
    r = await f
    assert r == 42
trio_asyncio.run(aio_as_trio(aio_sleep), 3)

You can wrap async context managers and async iterables just like with aio_as_trio().
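
For example, here is a rough sketch of wrapping a Trio-flavored async generator for consumption from asyncio-flavored code (assuming trio_as_aio() accepts an async generator object just as aio_as_trio() does above):

import trio
import trio_asyncio
from trio_asyncio import trio_as_aio

async def trio_ticker(count):
    # Trio-flavored async generator
    for n in range(count):
        await trio.sleep(0.1)
        yield n

async def aio_consumer():
    # asyncio-flavored code iterating over the wrapped Trio generator
    async for n in trio_as_aio(trio_ticker(3)):
        print(n)

trio_asyncio.run(trio_asyncio.aio_as_trio(aio_consumer))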

trio_asyncio.trio_as_aio(proc, *, loop=None)

Return an asyncio-flavored wrapper for a Trio-flavored async function, async context manager, or async iterator.

Alias: trio_as_asyncio()

This is the primary interface for calling Trio code from asyncio code. You can also use it as a decorator on a Trio-flavored async function; the decorated function will be callable from asyncio-flavored code without additional boilerplate.

Note that adapting coroutines, i.e.:

await trio_as_aio(proc(*args))

is not supported, because Trio does not expose the existence of coroutine objects in its API. Instead, use:

await trio_as_aio(proc)(*args)

Or if you already have proc(*args) as a single object coro for some reason:

await trio_as_aio(lambda: coro)()

Warning

Be careful when using this to wrap an async context manager. There is currently no mechanism for running the entry and exit in the same Trio task, so if the async context manager wraps a nursery, havoc is likely to result. That is, instead of:

async def some_aio_func():
    async with trio_asyncio.trio_as_aio(trio.open_nursery()) as nursery:
        ...  # code that uses nursery -- this will blow up

do something like:

async def some_aio_func():
    @trio_asyncio.aio_as_trio
    async def aio_body(nursery):
        ...  # code that uses nursery -- this will work

    @trio_asyncio.trio_as_aio
    async def trio_body():
        async with trio.open_nursery() as nursery:
            await aio_body(nursery)

    await trio_body()

Trio background tasks

If you want to start a Trio task that should be monitored by trio_asyncio (i.e. an uncaught error will propagate to, and terminate, the asyncio event loop) instead of having its result wrapped in an asyncio.Future, use run_trio_task().
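
A minimal sketch, called from asyncio-flavored code (note that there is no Future to await; an uncaught exception in the task terminates the loop instead):

import trio
import trio_asyncio

async def monitor():
    # Trio-flavored background task; an uncaught exception here propagates
    # to the trio-asyncio event loop and terminates it.
    while True:
        await trio.sleep(60)

async def aio_main():
    trio_asyncio.run_trio_task(monitor)  # fire and forget
    ...  # rest of your asyncio-flavored code

# run via e.g. trio_asyncio.run(trio_asyncio.aio_as_trio(aio_main))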

Multiple asyncio loops

trio-asyncio supports running multiple concurrent asyncio loops in different Trio tasks in the same thread. You may even nest them.

This means that you can write a trio-ish wrapper around an asyncio-using library without regard to whether the main loop or another library also uses trio-asyncio.
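
A rough sketch with two sibling Trio tasks, each running its own asyncio loop:

import asyncio
import trio
import trio_asyncio

async def worker(name):
    # Each worker gets its own, independent asyncio event loop.
    async with trio_asyncio.open_loop():
        await trio_asyncio.aio_as_trio(asyncio.sleep)(0.1)
        print(name, "done")

async def main():
    async with trio.open_nursery() as nursery:
        nursery.start_soon(worker, "a")
        nursery.start_soon(worker, "b")

trio.run(main)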

You can use the event loop’s autoclose() method to tell trio-asyncio to auto-close a file descriptor when the loop terminates. This setting only applies to file descriptors that have been submitted to a loop’s add_reader() or add_writer() methods. As such, this method is mainly useful for servers and should be used to supplement, rather than replace, a finally: handler or a with closing(...): block.

Errors and cancellations

Errors and cancellations are propagated almost-transparently.

For errors, this is straightforward: if a cross-called function terminates with an exception, it continues to propagate out of the cross-call.

Cancellations are also propagated whenever possible (a short sketch follows this list). This means

  • the task started with run_trio() is cancelled when you cancel the future which run_trio() returns

  • if the task started with run_trio() is cancelled, the future gets cancelled

  • the future passed to run_aio_future() is cancelled when the Trio code calling it is cancelled

  • However, when the future passed to run_aio_future() is cancelled (i.e., when the task associated with it raises asyncio.CancelledError), that exception is passed along unchanged.

    This asymmetry is intentional since the code that waits for the future often is not within the cancellation context of the part that created it. Cancelling the future would thus impact the wrong (sub)task.
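
Here is a minimal sketch of the first two points, cancelling the future returned by run_trio() from asyncio-flavored code:

import asyncio
import trio
import trio_asyncio

async def trio_worker():
    try:
        await trio.sleep(10)
    except trio.Cancelled:
        print("Trio task was cancelled")
        raise

async def aio_main():
    fut = trio_asyncio.run_trio(trio_worker)
    await asyncio.sleep(0.1)
    fut.cancel()  # cancels the Trio task behind the future
    try:
        await fut
    except asyncio.CancelledError:
        print("future reports the cancellation")

trio_asyncio.run(trio_asyncio.aio_as_trio(aio_main))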

asyncio feature support notes

Deferred calls

call_soon() and friends work as usual.

Worker threads

run_in_executor() works as usual.

There is one caveat: the executor must be either None or an instance of trio_asyncio.TrioExecutor.
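
For example, a minimal sketch (the blocking function is just an illustration):

import asyncio
import time
import trio_asyncio

def blocking_io():
    time.sleep(1)  # stands in for real blocking work
    return "done"

async def aio_main():
    loop = asyncio.get_event_loop()
    executor = trio_asyncio.TrioExecutor(max_workers=4)
    result = await loop.run_in_executor(executor, blocking_io)
    assert result == "done"

trio_asyncio.run(trio_asyncio.aio_as_trio(aio_main))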

class trio_asyncio.TrioExecutor(limiter=None, thread_name_prefix=None, max_workers=None)

An executor that runs its job in a Trio worker thread.

Bases: concurrent.futures.ThreadPoolExecutor

Parameters:
  • limiter (trio.CapacityLimiter or None) – If specified, use this capacity limiter to control the number of threads in which this executor can be running jobs.

  • thread_name_prefix – unused

  • max_workers (int or None) – If specified and limiter is not specified, create a new trio.CapacityLimiter with this value as its limit, and use that as the limiter.

File descriptors

add_reader() and add_writer() work as usual, if you really need them. Behind the scenes, these calls create a Trio task which waits for readability/writability and then runs the callback.

You might consider converting code using these calls to native Trio tasks.

Signals

add_signal_handler() works as usual.

Subprocesses

create_subprocess_exec() and create_subprocess_shell() work as usual.

You might want to convert these calls to use native Trio subprocesses.

Custom child watchers are not supported.

Low-level API reference

class trio_asyncio.BaseTrioEventLoop(queue_len=None)

An asyncio event loop that runs on top of Trio.

Bases: asyncio.SelectorEventLoop

All event loops created by trio-asyncio are of a type derived from BaseTrioEventLoop.

Parameters:

queue_len – The maximum length of the internal event queue. The default of None means unlimited. A limit should be specified only if you would rather crash your program than use too much memory, because it’s not feasible to enforce graceful backpressure here.

staticmethod run_aio_future(fut)

Alias for trio_asyncio.run_aio_future().

This is a Trio-flavored async function.

await run_aio_coroutine(coro)

Schedule an asyncio-flavored coroutine for execution on this loop by wrapping it in an asyncio.Task. Wait for it to complete, then return or raise its result.

Cancelling the current Trio scope will cancel the coroutine, which will throw a single asyncio.CancelledError into the coroutine (just like the usual asyncio behavior). If the coroutine then exits with a CancelledError exception, the call to run_aio_coroutine() will raise trio.Cancelled. But if it exits with CancelledError when the current Trio scope was not cancelled, the CancelledError will be passed along unchanged.

This is a Trio-flavored async function.

trio_as_future(proc, *args)

Start a new Trio task to run await proc(*args) asynchronously. Return an asyncio.Future that will resolve to the value or exception produced by that call.

Errors raised by the Trio call will only be used to resolve the returned Future; they won’t be propagated in any other way. Thus, if you want to notice exceptions, you had better not lose track of the returned Future. The easiest way to do this is to immediately await it in an asyncio-flavored function: await loop.trio_as_future(trio_func, *args).

Note that it’s the awaiting of the returned future, not the call to trio_as_future() itself, that’s asyncio-flavored. You can call trio_as_future() in a Trio-flavored function or even a synchronous context, as long as you plan to do something with the returned Future other than immediately awaiting it.

Cancelling the future will cancel the Trio task running your function, or prevent it from starting if that is still possible. If the Trio task exits due to this cancellation, the future will resolve to an asyncio.CancelledError.

Parameters:
  • proc – a Trio-flavored async function

  • args – arguments for proc

Returns:

an asyncio.Future which will resolve to the result of the call to proc
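
A minimal sketch, from asyncio-flavored code running under a trio-asyncio loop:

import asyncio
import trio
import trio_asyncio

async def trio_double(x):
    await trio.sleep(0)
    return 2 * x

async def aio_caller():
    loop = asyncio.get_event_loop()
    result = await loop.trio_as_future(trio_double, 21)
    assert result == 42

trio_asyncio.run(trio_asyncio.aio_as_trio(aio_caller))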

run_trio_task(proc, *args)

Start a new Trio task to run await proc(*args) asynchronously. If it raises an exception, allow the exception to propagate out of the trio-asyncio event loop (thus terminating it).

Parameters:
  • proc – a Trio-flavored async function

  • args – arguments for proc

Returns:

an asyncio.Handle which can be used to cancel the background task

await synchronize()

Suspend execution until all callbacks previously scheduled using call_soon() have been processed.

This is a Trio-flavored async function.

From asyncio, call await trio_as_aio(loop.synchronize)() instead of await asyncio.sleep(0) if you need to process all queued callbacks.

autoclose(fd)

Mark a file descriptor so that it’s auto-closed along with this loop.

This is a safety measure. You should also use appropriate finalizers.

Calling this method twice on the same file descriptor has no effect.

Parameters:

fd – Either an integer (Unix file descriptor) or an object with a fileno method providing one.

no_autoclose(fd)

Un-mark a file descriptor so that it’s no longer auto-closed along with this loop.

Call this method either before closing the file descriptor, or when passing it to code out of this loop’s scope.

Parameters:

fd – Either an integer (Unix file descriptor) or an object with a fileno() method providing one.

Raises:

KeyError – if the descriptor is not marked to be auto-closed.

await wait_stopped()

Wait until the event loop has stopped.

This is a Trio-flavored async function. You should call it from somewhere outside the async with open_loop() block to avoid a deadlock (the event loop can’t stop until all Trio tasks started within its scope have exited).

class trio_asyncio.TrioEventLoop(queue_len=None)

Bases: BaseTrioEventLoop

An asyncio event loop that runs on top of Trio, opened from within Trio code using open_loop().

trio_asyncio.current_loop

A contextvars.ContextVar whose value is the TrioEventLoop created by the nearest enclosing async with open_loop(): block. This is the same event loop that will be returned by calls to asyncio.get_event_loop(). If current_loop’s value is None, then asyncio.get_event_loop() will raise an error in Trio context. (Outside Trio context its value is always None and asyncio.get_event_loop() uses different logic.)

It is OK to modify this if you want the current scope to use a different trio-asyncio event loop, but make sure not to let your modifications leak past their intended scope.

await trio_asyncio.run_aio_future(future)

Wait for an asyncio-flavored future to become done, then return or raise its result.

Cancelling the current Trio scope will cancel the future. If this results in the future resolving to an asyncio.CancelledError exception, the call to run_aio_future() will raise trio.Cancelled. But if the future resolves to CancelledError when the current Trio scope was not cancelled, the CancelledError will be passed along unchanged.

This is a Trio-flavored async function.
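
A minimal sketch, inside an open_loop() block:

import trio
import trio_asyncio

async def trio_main():
    async with trio_asyncio.open_loop() as loop:
        fut = loop.create_future()
        loop.call_later(0.1, fut.set_result, "hi")
        assert await trio_asyncio.run_aio_future(fut) == "hi"

trio.run(trio_main)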

async for ... in trio_asyncio.run_aio_generator(loop, async_generator)

Return a Trio-flavored async iterator which wraps the given asyncio-flavored async iterator (usually an async generator, but doesn’t have to be). The asyncio tasks that perform each iteration of async_generator will run in loop.

await trio_asyncio.run_aio_coroutine(coro)

Alias for a call to run_aio_coroutine() on the event loop returned by asyncio.get_event_loop().

This is a Trio-flavored async function which takes an asyncio-flavored coroutine object.

trio_asyncio.run_trio(proc, *args)

Alias for a call to trio_as_future() on the event loop returned by asyncio.get_event_loop().

This is a synchronous function which takes a Trio-flavored async function and returns an asyncio Future.

trio_asyncio.run_trio_task(proc, *args)

Alias for a call to run_trio_task() on the event loop returned by asyncio.get_event_loop().

This is a synchronous function which takes a Trio-flavored async function and returns nothing (the handle returned by BaseTrioEventLoop.run_trio_task is discarded). An uncaught error will propagate to, and terminate, the trio-asyncio loop.

exception trio_asyncio.TrioAsyncioDeprecationWarning

Warning emitted if you use deprecated trio-asyncio functionality.

This inherits from FutureWarning, not DeprecationWarning, for the same reasons described for trio.TrioDeprecationWarning.