18.5. asyncio – Asynchronous I/O, event loop, coroutines and tasks

New in version 3.4.

Source code: Lib/asyncio/


This module provides infrastructure for writing single-threaded concurrent code using coroutines, multiplexing I/O access over sockets and other resources, running network clients and servers, and other related primitives.

Here is a more detailed list of the package contents:

  • a pluggable event loop with various system-specific implementations;
  • transport and protocol abstractions (similar to those in Twisted);
  • concrete support for TCP, UDP, SSL, subprocess pipes, delayed calls, and others (some may be system-dependent);
  • a Future class that mimics the one in the concurrent.futures module, but adapted for use with the event loop;
  • coroutines and tasks based on yield from (PEP 380), to help write concurrent code in a sequential fashion;
  • cancellation support for Futures and coroutines;
  • synchronization primitives for use between coroutines in a single thread, mimicking those in the threading module;
  • an interface for passing work off to a threadpool, for times when you absolutely, positively have to use a library that makes blocking I/O calls.

18.5.1. Disclaimer

Full documentation is not yet ready; we hope to have it written before Python 3.4 leaves beta. Until then, the best reference is PEP 3156. For a motivational primer on transports and protocols, see PEP 3153.

18.5.2. Event loops

The event loop is the central execution device provided by asyncio. It provides multiple facilities, including:

  • Registering, executing and cancelling delayed calls (timeouts)
  • Creating client and server transports for various kinds of communication
  • Launching subprocesses and the associated transports for communication with an external program
  • Delegating costly function calls to a pool of threads

18.5.2.1. Getting an event loop

The easiest way to get an event loop is to call the get_event_loop() function.

asyncio.get_event_loop()

Get the event loop for the current context. Returns an event loop object implementing the BaseEventLoop interface, or raises an exception if no event loop has been set for the current context and the current policy does not specify to create one. It should never return None.

18.5.2.2. Run an event loop

BaseEventLoop.run_forever()

Run until stop() is called.

BaseEventLoop.run_until_complete(future)

Run until the Future is done.

If the argument is a coroutine, it is wrapped in a Task.

Return the Future’s result, or raise its exception.
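
For example, a coroutine can be run to completion and its result retrieved like this; a minimal sketch, where the compute() coroutine is made up for illustration:

import asyncio

@asyncio.coroutine
def compute():
    yield from asyncio.sleep(1)
    return 42

loop = asyncio.get_event_loop()
# The coroutine is wrapped in a Task and run until it is done
result = loop.run_until_complete(compute())
print(result)  # 42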

BaseEventLoop.is_running()

Return True if the event loop is currently running.

BaseEventLoop.stop()

Stop running the event loop.

Every callback scheduled before stop() is called will run. Callbacks scheduled after stop() is called won’t run now; however, they will run if run_forever() is called again later.

BaseEventLoop.close()

Close the event loop.

This clears the queues and shuts down the executor, but does not wait for the executor to finish.

18.5.2.3. Calls

BaseEventLoop.call_soon(callback, *args)

Arrange for a callback to be called as soon as possible.

This operates as a FIFO queue: callbacks are called in the order in which they are registered. Each callback will be called exactly once.

Any positional arguments after the callback will be passed to the callback when it is called.

BaseEventLoop.call_soon_threadsafe(callback, *args)

Like call_soon(), but thread safe.
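
A minimal sketch combining call_soon(), stop() and run_forever(); the greet() callback and its arguments are made up for illustration:

import asyncio

def greet(name, loop):
    print('Hello', name)
    loop.stop()  # make run_forever() return

loop = asyncio.get_event_loop()
# Positional arguments after the callback are passed to it when called
loop.call_soon(greet, 'World', loop)
loop.run_forever()
loop.close()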

18.5.2.4. Delayed calls

The event loop has its own internal clock for computing timeouts. Which clock is used depends on the (platform-specific) event loop implementation; ideally it is a monotonic clock. This will generally be a different clock than time.time().

BaseEventLoop.call_later(delay, callback, *args)

Arrange for the callback to be called after the given delay seconds (either an int or float).

A “handle” is returned: an opaque object with a cancel() method that can be used to cancel the call.

callback will be called exactly once per call to call_later(). If two callbacks are scheduled for exactly the same time, it is undefined which will be called first.

The optional positional args will be passed to the callback when it is called. If you want the callback to be called with some named arguments, use a closure or functools.partial().
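
For example, to pass a keyword argument to a delayed callback, bind it with functools.partial(); the log() callback below is hypothetical:

import asyncio
import functools

def log(message, *, level='INFO'):
    print(level, message)

loop = asyncio.get_event_loop()
# call_later() only forwards positional arguments, so the keyword
# argument is bound with functools.partial()
loop.call_later(1.0, functools.partial(log, 'one second elapsed',
                                       level='DEBUG'))
loop.call_later(1.5, loop.stop)
loop.run_forever()
loop.close()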

BaseEventLoop.call_at(when, callback, *args)

Arrange for the callback to be called at the given absolute timestamp when (an int or float), using the same time reference as time().

This method’s behavior is the same as call_later().

BaseEventLoop.time()

Return the current time, as a float value, according to the event loop’s internal clock.
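
A sketch scheduling a callback at an absolute timestamp computed from the loop’s own clock; the tick() callback is made up for illustration:

import asyncio

def tick(loop):
    print('tick at', loop.time())
    loop.stop()

loop = asyncio.get_event_loop()
# call_at() expects a timestamp on the loop's clock, not time.time()
loop.call_at(loop.time() + 2.0, tick, loop)
loop.run_forever()
loop.close()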

18.5.2.5. Executor

Call a function in an Executor (pool of threads or pool of processes). By default, an event loop uses a thread pool executor (ThreadPoolExecutor).

BaseEventLoop.run_in_executor(executor, callback, *args)

Arrange for a callback to be called in the specified executor.

executor is an Executor instance; if executor is None, the default executor is used.

BaseEventLoop.set_default_executor(executor)

Set the default executor used by run_in_executor().
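
A sketch delegating a blocking call to the default thread pool executor; the blocking_io() function stands in for any library call that blocks:

import asyncio
import time

def blocking_io():
    time.sleep(1)          # placeholder for a blocking library call
    return 'done'

@asyncio.coroutine
def main(loop):
    # Passing None selects the default (thread pool) executor
    result = yield from loop.run_in_executor(None, blocking_io)
    print(result)

loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))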

18.5.2.6. Creating listening connections

BaseEventLoop.create_server(protocol_factory, host=None, port=None, *, family=socket.AF_UNSPEC, flags=socket.AI_PASSIVE, sock=None, backlog=100, ssl=None, reuse_address=None)

XXX

  • protocol_factory
  • host, port
  • family
  • flags
  • sock
  • backlog: the maximum number of queued connections; it should be at least 0. The maximum value is system-dependent (usually 5), and the minimum value is forced to 0.
  • ssl: True or ssl.SSLContext
  • reuse_address: if True, set socket.SO_REUSEADDR option on the listening socket. Default value: True on POSIX systems, False on Windows.

This method returns a coroutine.
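
Since create_server() returns a coroutine, it is typically driven with run_until_complete(); a minimal sketch (the host and port are arbitrary, and passing asyncio.Protocol directly simply accepts connections and ignores their data):

import asyncio

loop = asyncio.get_event_loop()
coro = loop.create_server(asyncio.Protocol, '127.0.0.1', 8888)
server = loop.run_until_complete(coro)
print('serving on', server.sockets[0].getsockname())
try:
    loop.run_forever()
finally:
    server.close()
    loop.close()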

BaseEventLoop.create_datagram_endpoint(protocol_factory, local_addr=None, remote_addr=None, *, family=0, proto=0, flags=0)

XXX

This method returns a coroutine.

18.5.2.7. Creating connections

BaseEventLoop.create_connection(protocol_factory, host=None, port=None, **options)

Create a streaming transport connection to a given Internet host and port. protocol_factory must be a callable returning a protocol instance.

This method returns a coroutine which will try to establish the connection in the background. When successful, the coroutine returns a (transport, protocol) pair.

The chronological synopsis of the underlying operation is as follows:

  1. The connection is established, and a transport is created to represent it.
  2. protocol_factory is called without arguments and must return a protocol instance.
  3. The protocol instance is tied to the transport, and its connection_made() method is called.
  4. The coroutine returns successfully with the (transport, protocol) pair.

The created transport is an implementation-dependent bidirectional stream.

Note

protocol_factory can be any kind of callable, not necessarily a class. For example, if you want to use a pre-created protocol instance, you can pass lambda: my_protocol.

options are optional keyword arguments that change how the connection is created:

  • ssl: if given and not false, an SSL/TLS transport is created (by default a plain TCP transport is created). If ssl is a ssl.SSLContext object, this context is used to create the transport; if ssl is True, a context with some unspecified default settings is used.
  • server_hostname is only for use together with ssl, and sets or overrides the hostname that the target server’s certificate will be matched against. By default the value of the host argument is used. If host is empty, there is no default and you must pass a value for server_hostname. If server_hostname is an empty string, hostname matching is disabled (which is a serious security risk, allowing for man-in-the-middle attacks).
  • family, proto, flags are the optional address family, protocol and flags to be passed through to getaddrinfo() for host resolution. If given, these should all be integers from the corresponding socket module constants.
  • sock, if given, should be an existing, already connected socket.socket object to be used by the transport. If sock is given, none of host, port, family, proto, flags and local_addr should be specified.
  • local_addr, if given, is a (local_host, local_port) tuple used to bind the socket locally. The local_host and local_port are looked up using getaddrinfo(), similarly to host and port.
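
A minimal usage sketch, assuming a server is already listening on the given host and port; the PingClient protocol class is made up for illustration:

import asyncio

class PingClient(asyncio.Protocol):
    def connection_made(self, transport):
        transport.write(b'ping')

    def data_received(self, data):
        print('received:', data)

loop = asyncio.get_event_loop()
coro = loop.create_connection(PingClient, '127.0.0.1', 8888)
transport, protocol = loop.run_until_complete(coro)
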
BaseEventLoop.connect_read_pipe(protocol_factory, pipe)

XXX

This method returns a coroutine.

BaseEventLoop.connect_write_pipe(protocol_factory, pipe)

XXX

This method returns a coroutine.

18.5.2.8. Resolve name

BaseEventLoop.getaddrinfo(host, port, *, family=0, type=0, proto=0, flags=0)

XXX

BaseEventLoop.getnameinfo(sockaddr, flags=0)

XXX

18.5.2.9. Running subprocesses

Run subprocesses asynchronously using the subprocess module.

BaseEventLoop.subprocess_shell(protocol_factory, cmd, *, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=False, shell=True, bufsize=0, **kwargs)

XXX

This method returns a coroutine.

See the constructor of the subprocess.Popen class for parameters.

BaseEventLoop.subprocess_exec(protocol_factory, *args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=False, shell=False, bufsize=0, **kwargs)

XXX

This method returns a coroutine.

See the constructor of the subprocess.Popen class for parameters.
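
A hedged sketch of subprocess_exec() with a minimal SubprocessProtocol; the PrintOutput protocol and the child command are made up for illustration (note that on Windows the default event loop does not support subprocesses):

import asyncio
import sys

class PrintOutput(asyncio.SubprocessProtocol):
    def __init__(self, exit_future):
        self.exit_future = exit_future

    def pipe_data_received(self, fd, data):
        # fd 1 is the child's stdout, fd 2 its stderr
        print('fd %d:' % fd, data.decode(), end='')

    def process_exited(self):
        self.exit_future.set_result(True)

@asyncio.coroutine
def run_child(loop):
    exit_future = asyncio.Future(loop=loop)
    transport, protocol = yield from loop.subprocess_exec(
        lambda: PrintOutput(exit_future),
        sys.executable, '-c', 'print("hello from the child")')
    yield from exit_future       # wait until the child exits
    transport.close()

loop = asyncio.get_event_loop()
loop.run_until_complete(run_child(loop))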

18.5.3. Protocols

asyncio provides base classes that you can subclass to implement your network protocols. Those classes are used in conjunction with transports (see below): the protocol parses incoming data and asks for the writing of outgoing data, while the transport is responsible for the actual I/O and buffering.

When subclassing a protocol class, it is recommended you override certain methods. Those methods are callbacks: they will be called by the transport on certain events (for example when some data is received); you shouldn’t call them yourself, unless you are implementing a transport.

Note

All callbacks have default implementations, which are empty. Therefore, you only need to implement the callbacks for the events in which you are interested.

18.5.3.1. Protocol classes

class asyncio.Protocol

The base class for implementing streaming protocols (for use with e.g. TCP and SSL transports).

class asyncio.DatagramProtocol

The base class for implementing datagram protocols (for use with e.g. UDP transports).

class asyncio.SubprocessProtocol

The base class for implementing protocols communicating with child processes (through a set of unidirectional pipes).

18.5.3.2. Connection callbacks

These callbacks may be called on Protocol and SubprocessProtocol instances:

BaseProtocol.connection_made(transport)

Called when a connection is made.

The transport argument is the transport representing the connection. You are responsible for storing it somewhere (e.g. as an attribute) if you need to.

BaseProtocol.connection_lost(exc)

Called when the connection is lost or closed.

The argument is either an exception object or None. The latter means a regular EOF was received, or the connection was aborted or closed by this side of the connection.

connection_made() and connection_lost() are called exactly once per successful connection. All other callbacks will be called between those two methods, which allows for easier resource management in your protocol implementation.

The following callbacks may be called only on SubprocessProtocol instances:

SubprocessProtocol.pipe_data_received(fd, data)

Called when the child process writes data into its stdout or stderr pipe. fd is the integer file descriptor of the pipe. data is a non-empty bytes object containing the data.

SubprocessProtocol.pipe_connection_lost(fd, exc)

Called when one of the pipes communicating with the child process is closed. fd is the integer file descriptor that was closed.

SubprocessProtocol.process_exited()

Called when the child process has exited.

18.5.3.3. Data reception callbacks

18.5.3.3.1. Streaming protocols

The following callbacks are called on Protocol instances:

Protocol.data_received(data)

Called when some data is received. data is a non-empty bytes object containing the incoming data.

Note

Whether the data is buffered, chunked or reassembled depends on the transport. In general, you shouldn’t rely on specific semantics and instead make your parsing generic and flexible enough. However, data is always received in the correct order.

Protocol.eof_received()

Called when the other end signals it won’t send any more data (for example by calling write_eof(), if the other end also uses asyncio).

This method may return a false value (including None), in which case the transport will close itself. Conversely, if this method returns a true value, closing the transport is up to the protocol. Since the default implementation returns None, it implicitly closes the connection.

Note

Some transports such as SSL don’t support half-closed connections, in which case returning true from this method will not prevent closing the connection.

data_received() can be called an arbitrary number of times during a connection. However, eof_received() is called at most once and, if called, data_received() won’t be called after it.

18.5.3.3.2. Datagram protocols

The following callbacks are called on DatagramProtocol instances.

DatagramProtocol.datagram_received(data, addr)

Called when a datagram is received. data is a bytes object containing the incoming data. addr is the address of the peer sending the data; the exact format depends on the transport.

DatagramProtocol.error_received(exc)

Called when a previous send or receive operation raises an OSError. exc is the OSError instance.

This method is called in rare conditions, when the transport (e.g. UDP) detects that a datagram couldn’t be delivered to its recipient. In many conditions though, undeliverable datagrams will be silently dropped.
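
For illustration, a minimal UDP echo protocol; the UdpEcho class name and the port are arbitrary:

import asyncio

class UdpEcho(asyncio.DatagramProtocol):
    def connection_made(self, transport):
        self.transport = transport

    def datagram_received(self, data, addr):
        print('received', data, 'from', addr)
        self.transport.sendto(data, addr)   # echo the datagram back

    def error_received(self, exc):
        print('send or receive failed:', exc)

loop = asyncio.get_event_loop()
coro = loop.create_datagram_endpoint(UdpEcho, local_addr=('127.0.0.1', 9999))
transport, protocol = loop.run_until_complete(coro)
loop.run_forever()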

18.5.3.4. Flow control callbacks

These callbacks may be called on Protocol and SubprocessProtocol instances:

BaseProtocol.pause_writing()

Called when the transport’s buffer goes over the high-water mark.

BaseProtocol.resume_writing()

Called when the transport’s buffer drains below the low-water mark.

pause_writing() and resume_writing() calls are paired – pause_writing() is called once when the buffer goes strictly over the high-water mark (even if subsequent writes increase the buffer size even more), and eventually resume_writing() is called once when the buffer size reaches the low-water mark.

Note

If the buffer size equals the high-water mark, pause_writing() is not called – it must go strictly over. Conversely, resume_writing() is called when the buffer size is equal to or lower than the low-water mark. These end conditions are important to ensure that things go as expected when either mark is zero.

18.5.4. Transports

Transports are classes provided by asyncio in order to abstract various kinds of communication channels. You generally won’t instantiate a transport yourself; instead, you will call a BaseEventLoop method which will create the transport and try to initiate the underlying communication channel, calling you back when it succeeds.

Once the communication channel is established, a transport is always paired with a protocol instance. The protocol can then call the transport’s methods for various purposes.

asyncio currently implements transports for TCP, UDP, SSL, and subprocess pipes. The methods available on a transport depend on the transport’s kind.

18.5.4.1. Methods common to all transports

BaseTransport.close()

Close the transport. If the transport has a buffer for outgoing data, buffered data will be flushed asynchronously. No more data will be received. After all buffered data is flushed, the protocol’s connection_lost() method will be called with None as its argument.

BaseTransport.get_extra_info(name, default=None)

Return optional transport information. name is a string representing the piece of transport-specific information to get, default is the value to return if the information doesn’t exist.

This method allows transport implementations to easily expose channel-specific information.
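
For example, a protocol can look up the address of its peer; 'peername' is a key commonly provided by socket transports, and the available keys are transport-specific:

import asyncio

class PeerLogger(asyncio.Protocol):
    def connection_made(self, transport):
        peername = transport.get_extra_info('peername')
        print('connection from', peername)
        self.transport = transport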

18.5.4.2. Methods of readable streaming transports

ReadTransport.pause_reading()

Pause the receiving end of the transport. No data will be passed to the protocol’s data_received() method until resume_reading() is called.

ReadTransport.resume_reading()

Resume the receiving end. The protocol’s data_received() method will be called once again if some data is available for reading.

18.5.4.3. Methods of writable streaming transports

WriteTransport.write(data)

Write some data bytes to the transport.

This method does not block; it buffers the data and arranges for it to be sent out asynchronously.

WriteTransport.writelines(list_of_data)

Write a list (or any iterable) of data bytes to the transport. This is functionally equivalent to calling write() on each element yielded by the iterable, but may be implemented more efficiently.

WriteTransport.write_eof()

Close the write end of the transport after flushing buffered data. Data may still be received.

This method can raise NotImplementedError if the transport (e.g. SSL) doesn’t support half-closes.

WriteTransport.can_write_eof()

Return True if the transport supports write_eof(), False if not.

WriteTransport.abort()

Close the transport immediately, without waiting for pending operations to complete. Buffered data will be lost. No more data will be received. The protocol’s connection_lost() method will eventually be called with None as its argument.

WriteTransport.set_write_buffer_limits(high=None, low=None)

Set the high- and low-water limits for write flow control.

These two values control when the protocol’s pause_writing() and resume_writing() methods are called. If specified, the low-water limit must be less than or equal to the high-water limit. Neither high nor low can be negative.

The defaults are implementation-specific. If only the high-water limit is given, the low-water limit defaults to an implementation-specific value less than or equal to the high-water limit. Setting high to zero forces low to zero as well, and causes pause_writing() to be called whenever the buffer becomes non-empty. Setting low to zero causes resume_writing() to be called only once the buffer is empty. Use of zero for either limit is generally sub-optimal as it reduces opportunities for doing I/O and computation concurrently.

WriteTransport.get_write_buffer_size()

Return the current size of the output buffer used by the transport.

18.5.4.4. Methods of datagram transports

DatagramTransport.sendto(data, addr=None)

Send the data bytes to the remote peer given by addr (a transport-dependent target address). If addr is None, the data is sent to the target address given on transport creation.

This method does not block; it buffers the data and arranges for it to be sent out asynchronously.

DatagramTransport.abort()

Close the transport immediately, without waiting for pending operations to complete. Buffered data will be lost. No more data will be received. The protocol’s connection_lost() method will eventually be called with None as its argument.

18.5.4.5. Methods of subprocess transports

BaseSubprocessTransport.get_pid()

Return the subprocess process id as an integer.

BaseSubprocessTransport.get_returncode()

Return the subprocess returncode as an integer or None if it hasn’t returned, similarly to the subprocess.Popen.returncode attribute.

BaseSubprocessTransport.get_pipe_transport(fd)

Return the transport for the communication pipe corresponding to the integer file descriptor fd. The return value can be a readable or writable streaming transport, depending on the fd. If fd doesn’t correspond to a pipe belonging to this transport, None is returned.

BaseSubprocessTransport.send_signal(signal)

Send the signal number to the subprocess, as in subprocess.Popen.send_signal().

BaseSubprocessTransport.terminate()

Ask the subprocess to stop, as in subprocess.Popen.terminate(). This method is an alias for the close() method.

On POSIX systems, this method sends SIGTERM to the subprocess. On Windows, the Windows API function TerminateProcess() is called to stop the subprocess.

BaseSubprocessTransport.kill()

Kill the subprocess, as in subprocess.Popen.kill().

On POSIX systems, the function sends SIGKILL to the subprocess. On Windows, this method is an alias for terminate().

18.5.5. Task functions

asyncio.as_completed(fs, *, loop=None, timeout=None)

Return an iterator whose values, when waited for, are Future instances.

Raises TimeoutError if the timeout occurs before all Futures are done.

Example:

for f in as_completed(fs):
    result = yield from f  # The 'yield from' may raise
    # Use result

Note

The futures f are not necessarily members of fs.

asyncio.async(coro_or_future, *, loop=None)

Wrap a coroutine in a future.

If the argument is a Future, it is returned directly.

asyncio.gather(*coros_or_futures, loop=None, return_exceptions=False)

Return a future aggregating results from the given coroutines or futures.

All futures must share the same event loop. If all the tasks are done successfully, the returned future’s result is the list of results (in the order of the original sequence, not necessarily the order of results arrival). If return_exceptions is True, exceptions in the tasks are treated the same as successful results, and gathered in the result list; otherwise, the first raised exception will be immediately propagated to the returned future.

Cancellation: if the outer Future is cancelled, all children (that have not completed yet) are also cancelled. If any child is cancelled, this is treated as if it raised CancelledError – the outer Future is not cancelled in this case. (This is to prevent the cancellation of one child to cause other children to be cancelled.)
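
A sketch gathering the results of two coroutines; the delayed_value() helper is made up for illustration:

import asyncio

@asyncio.coroutine
def delayed_value(value, delay):
    yield from asyncio.sleep(delay)
    return value

@asyncio.coroutine
def main():
    # Results are returned in the order the coroutines were passed in,
    # not in the order they finished
    results = yield from asyncio.gather(delayed_value('a', 0.2),
                                        delayed_value('b', 0.1))
    print(results)   # ['a', 'b']

loop = asyncio.get_event_loop()
loop.run_until_complete(main())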

asyncio.iscoroutinefunction(func)

Return True if func is a decorated coroutine function.

asyncio.iscoroutine(obj)

Return True if obj is a coroutine object.

asyncio.sleep(delay, result=None, *, loop=None)

Create a coroutine that completes after the given time, in seconds. If result is provided, it is produced to the caller when the coroutine completes.

asyncio.shield(arg, *, loop=None)

Wait for a future, shielding it from cancellation.

The statement:

res = yield from shield(something())

is exactly equivalent to the statement:

res = yield from something()

except that if the coroutine containing it is cancelled, the task running in something() is not cancelled. From the point of view of something(), the cancellation did not happen. But its caller is still cancelled, so the yield-from expression still raises CancelledError. Note: If something() is cancelled by other means this will still cancel shield().

If you want to completely ignore cancellation (not recommended) you can combine shield() with a try/except clause, as follows:

try:
    res = yield from shield(something())
except CancelledError:
    res = None

18.5.6. Task

class asyncio.Task(coro, *, loop=None)

A coroutine wrapped in a Future.

classmethod all_tasks(loop=None)

Return a set of all tasks for an event loop.

By default all tasks for the current event loop are returned.

cancel()

Cancel the task.

get_stack(*, limit=None)

Return the list of stack frames for this task’s coroutine.

If the coroutine is active, this returns the stack where it is suspended. If the coroutine has completed successfully or was cancelled, this returns an empty list. If the coroutine was terminated by an exception, this returns the list of traceback frames.

The frames are always ordered from oldest to newest.

The optional limit gives the maximum number of frames to return; by default all available frames are returned. Its meaning differs depending on whether a stack or a traceback is returned: the newest frames of a stack are returned, but the oldest frames of a traceback are returned. (This matches the behavior of the traceback module.)

For reasons beyond our control, only one stack frame is returned for a suspended coroutine.

print_stack(*, limit=None, file=None)

Print the stack or traceback for this task’s coroutine.

This produces output similar to that of the traceback module, for the frames retrieved by get_stack(). The limit argument is passed to get_stack(). The file argument is an I/O stream to which the output goes; by default it goes to sys.stderr.
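
A brief sketch creating and cancelling a Task; the slow() coroutine is made up for illustration:

import asyncio

@asyncio.coroutine
def slow():
    yield from asyncio.sleep(10)

@asyncio.coroutine
def main():
    task = asyncio.Task(slow())      # wrap the coroutine in a Task and schedule it
    yield from asyncio.sleep(0.1)    # let the task start running
    task.cancel()                    # request cancellation
    try:
        yield from task              # wait until the cancellation is processed
    except asyncio.CancelledError:
        print('slow() was cancelled')

loop = asyncio.get_event_loop()
loop.run_until_complete(main())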

18.5.7. Coroutines

A coroutine is a generator that follows certain conventions. For documentation purposes, all coroutines should be decorated with @asyncio.coroutine, but this cannot be strictly enforced.

Coroutines use the yield from syntax introduced in PEP 380, instead of the original yield syntax.

The word “coroutine”, like the word “generator”, is used for two different (though related) concepts:

  • The function that defines a coroutine (a function definition decorated with asyncio.coroutine). If disambiguation is needed we will call this a coroutine function.
  • The object obtained by calling a coroutine function. This object represents a computation or an I/O operation (usually a combination) that will complete eventually. If disambiguation is needed we will call it a coroutine object.

Things a coroutine can do:

  • result = yield from future – suspends the coroutine until the future is done, then returns the future’s result, or raises an exception, which will be propagated. (If the future is cancelled, it will raise a CancelledError exception.) Note that tasks are futures, and everything said about futures also applies to tasks.
  • result = yield from coroutine – wait for another coroutine to produce a result (or raise an exception, which will be propagated). The coroutine expression must be a call to another coroutine.
  • return expression – produce a result to the coroutine that is waiting for this one using yield from.
  • raise exception – raise an exception in the coroutine that is waiting for this one using yield from.

Calling a coroutine does not start its code running – it is just a generator, and the coroutine object returned by the call is really a generator object, which doesn’t do anything until you iterate over it. In the case of a coroutine object, there are two basic ways to start it running: call yield from coroutine from another coroutine (assuming the other coroutine is already running!), or convert it to a Task.

Coroutines (and tasks) can only run when the event loop is running.
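
A small sketch of one coroutine waiting for another with yield from; both coroutines are made up for illustration:

import asyncio

@asyncio.coroutine
def add(a, b):
    yield from asyncio.sleep(0.1)   # pretend the computation takes a while
    return a + b

@asyncio.coroutine
def main():
    total = yield from add(1, 2)    # wait for the other coroutine's result
    print('total =', total)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())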

18.5.8. Synchronization primitives

class asyncio.Lock(*, loop=None)

Primitive lock objects.

A primitive lock is a synchronization primitive that is not owned by a particular coroutine when locked. A primitive lock is in one of two states, ‘locked’ or ‘unlocked’.

It is created in the unlocked state. It has two basic methods, acquire() and release(). When the state is unlocked, acquire() changes the state to locked and returns immediately. When the state is locked, acquire() blocks until a call to release() in another coroutine changes it to unlocked, then the acquire() call resets it to locked and returns. The release() method should only be called in the locked state; it changes the state to unlocked and returns immediately. If an attempt is made to release an unlocked lock, a RuntimeError will be raised.

When more than one coroutine is blocked in acquire() waiting for the state to turn to unlocked, only one coroutine proceeds when a release() call resets the state to unlocked: the first coroutine that blocked in acquire() is the one that proceeds.

acquire() is a coroutine and should be called with yield from.

Locks also support the context manager protocol; (yield from lock) should be used as the context manager expression.

Usage:

lock = Lock()
...
yield from lock
try:
    ...
finally:
    lock.release()

Context manager usage:

lock = Lock()
...
with (yield from lock):
     ...

Lock objects can be tested for locking state:

if not lock.locked():
    yield from lock
else:
    # the lock is already acquired by another coroutine
    ...

locked()

Return True if the lock is acquired.

acquire()

Acquire a lock.

This method blocks until the lock is unlocked, then sets it to locked and returns True.

This method returns a coroutine.

release()

Release a lock.

When the lock is locked, reset it to unlocked, and return. If any other coroutines are blocked waiting for the lock to become unlocked, allow exactly one of them to proceed.

When invoked on an unlocked lock, a RuntimeError is raised.

There is no return value.

class asyncio.Event(*, loop=None)

An Event implementation, asynchronous equivalent to threading.Event.

Class implementing event objects. An event manages a flag that can be set to true with the set() method and reset to false with the clear() method. The wait() method blocks until the flag is true. The flag is initially false.

is_set()

Return True if and only if the internal flag is true.

set()

Set the internal flag to true. All coroutines waiting for it to become true are awakened. Coroutines that call wait() once the flag is true will not block at all.

clear()

Reset the internal flag to false. Subsequently, coroutines calling wait() will block until set() is called to set the internal flag to true again.

wait()

Block until the internal flag is true.

If the internal flag is true on entry, return True immediately. Otherwise, block until another coroutine calls set() to set the flag to true, then return True.

This method returns a coroutine.
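
A short sketch of one coroutine waiting for an event that another coroutine sets; the names are made up for illustration:

import asyncio

@asyncio.coroutine
def waiter(event):
    print('waiting for the event')
    yield from event.wait()
    print('event was set')

@asyncio.coroutine
def setter(event):
    yield from asyncio.sleep(0.1)
    event.set()                      # wake up every waiter

loop = asyncio.get_event_loop()
event = asyncio.Event()
loop.run_until_complete(asyncio.gather(waiter(event), setter(event)))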

class asyncio.Condition(*, loop=None)

A Condition implementation, asynchronous equivalent to threading.Condition.

This class implements condition variable objects. A condition variable allows one or more coroutines to wait until they are notified by another coroutine.

A new Lock object is created and used as the underlying lock.

wait()

Wait until notified.

If the calling coroutine has not acquired the lock when this method is called, a RuntimeError is raised.

This method releases the underlying lock, and then blocks until it is awakened by a notify() or notify_all() call for the same condition variable in another coroutine. Once awakened, it re-acquires the lock and returns True.

This method returns a coroutine.

wait_for(predicate)

Wait until a predicate becomes true.

The predicate should be a callable whose result will be interpreted as a boolean value. The final predicate value is the return value.

This method returns a coroutine.

notify(n=1)

By default, wake up one coroutine waiting on this condition, if any. If the calling coroutine has not acquired the lock when this method is called, a RuntimeError is raised.

This method wakes up at most n of the coroutines waiting for the condition variable; it is a no-op if no coroutines are waiting.

Note

An awakened coroutine does not actually return from its wait() call until it can reacquire the lock. Since notify() does not release the lock, its caller should.

notify_all()

Wake up all coroutines waiting on this condition. This method acts like notify(), but wakes up all waiting coroutines instead of one. If the calling coroutine has not acquired the lock when this method is called, a RuntimeError is raised.
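
A sketch of a consumer waiting on a condition while a producer notifies it, using the same (yield from ...) context manager idiom shown for Lock above; the shared list and coroutine names are made up for illustration:

import asyncio

@asyncio.coroutine
def consumer(cond, items):
    with (yield from cond):              # acquire the underlying lock
        yield from cond.wait_for(lambda: items)
        print('got', items.pop())

@asyncio.coroutine
def producer(cond, items):
    yield from asyncio.sleep(0.1)
    with (yield from cond):
        items.append('work item')
        cond.notify()                    # wake up one waiting coroutine

loop = asyncio.get_event_loop()
cond = asyncio.Condition()
items = []
loop.run_until_complete(asyncio.gather(consumer(cond, items),
                                       producer(cond, items)))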

class asyncio.Semaphore(value=1, *, loop=None)

A Semaphore implementation.

A semaphore manages an internal counter which is decremented by each acquire() call and incremented by each release() call. The counter can never go below zero; when acquire() finds that it is zero, it blocks, waiting until some other coroutine calls release().

Semaphores also support the context manager protocol.

The optional argument gives the initial value for the internal counter; it defaults to 1. If the value given is less than 0, ValueError is raised.

locked()

Return True if the semaphore cannot be acquired immediately.

acquire()

Acquire a semaphore.

If the internal counter is larger than zero on entry, decrement it by one and return True immediately. If it is zero on entry, block, waiting until some other coroutine has called release() to make it larger than 0, and then return True.

This method returns a coroutine.

release()

Release a semaphore, incrementing the internal counter by one. When it was zero on entry and another coroutine is waiting for it to become larger than zero again, wake up that coroutine.
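
A sketch limiting concurrency with a semaphore, using the same (yield from ...) context manager idiom as Lock; the worker() coroutine is made up for illustration:

import asyncio

@asyncio.coroutine
def worker(semaphore, n):
    with (yield from semaphore):         # at most two workers run at once
        print('worker', n, 'running')
        yield from asyncio.sleep(0.1)

loop = asyncio.get_event_loop()
semaphore = asyncio.Semaphore(2)
loop.run_until_complete(
    asyncio.gather(*[worker(semaphore, n) for n in range(5)]))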

class asyncio.BoundedSemaphore(value=1, *, loop=None)

A bounded semaphore implementation. Inherits from Semaphore.

This raises ValueError in release() if it would increase the value above the initial value.

class asyncio.Queue(maxsize=0, *, loop=None)

A queue, useful for coordinating producer and consumer coroutines.

If maxsize is less than or equal to zero, the queue size is infinite. If it is an integer greater than 0, then yield from put() will block when the queue reaches maxsize, until an item is removed by get().

Unlike the standard library queue module, you can reliably know this Queue’s size with qsize(), since your single-threaded asyncio application won’t be interrupted between calling qsize() and doing an operation on the Queue.

empty()

Return True if the queue is empty, False otherwise.

full()

Return True if there are maxsize items in the queue.

Note

If the Queue was initialized with maxsize=0 (the default), then full() is never True.

get()

Remove and return an item from the queue.

If you yield from get(), wait until an item is available.

This method returns a coroutine.

get_nowait()

Remove and return an item from the queue.

Return an item if one is immediately available, else raise Empty.

put(item)

Put an item into the queue.

If you yield from put(), wait until a free slot is available before adding item.

This method returns a coroutine.

put_nowait(item)

Put an item into the queue without blocking.

If no free slot is immediately available, raise Full.

qsize()

Number of items in the queue.

maxsize

Number of items allowed in the queue.
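
A sketch of a producer and a consumer coordinating through a bounded queue; the coroutines are made up for illustration:

import asyncio

@asyncio.coroutine
def producer(queue):
    for i in range(3):
        yield from queue.put(i)          # blocks while the queue is full

@asyncio.coroutine
def consumer(queue):
    for _ in range(3):
        item = yield from queue.get()    # blocks while the queue is empty
        print('consumed', item)

loop = asyncio.get_event_loop()
queue = asyncio.Queue(maxsize=1)
loop.run_until_complete(asyncio.gather(producer(queue), consumer(queue)))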

class asyncio.PriorityQueue

A subclass of Queue; retrieves entries in priority order (lowest first).

Entries are typically tuples of the form: (priority number, data).

class asyncio.LifoQueue

A subclass of Queue that retrieves most recently added entries first.

class asyncio.JoinableQueue

A subclass of Queue with task_done() and join() methods.

task_done()

Indicate that a formerly enqueued task is complete.

Used by queue consumers. For each get() used to fetch a task, a subsequent call to task_done() tells the queue that the processing on the task is complete.

If a join() is currently blocking, it will resume when all items have been processed (meaning that a task_done() call was received for every item that had been put() into the queue).

Raises ValueError if called more times than there were items placed in the queue.

join()

Block until all items in the queue have been gotten and processed.

The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer coroutine calls task_done() to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks.

This method returns a coroutine.
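
A sketch using task_done() and join() to wait until every queued item has been processed; the worker() coroutine is made up for illustration:

import asyncio

@asyncio.coroutine
def worker(queue):
    while True:
        item = yield from queue.get()
        print('processing', item)
        queue.task_done()                # one task_done() call per get()

@asyncio.coroutine
def main():
    queue = asyncio.JoinableQueue()
    for i in range(3):
        queue.put_nowait(i)
    task = asyncio.Task(worker(queue))
    yield from queue.join()              # wait until every item is processed
    task.cancel()                        # the worker loops forever, stop it
    try:
        yield from task
    except asyncio.CancelledError:
        pass

loop = asyncio.get_event_loop()
loop.run_until_complete(main())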

18.5.9. Examples

18.5.9.1. Hello World (callback)

Print Hello World every two seconds, using a callback:

import asyncio

def print_and_repeat(loop):
    print('Hello World')
    loop.call_later(2, print_and_repeat, loop)

loop = asyncio.get_event_loop()
print_and_repeat(loop)
loop.run_forever()

18.5.9.2. Hello World (coroutine)

Print Hello World every two seconds, using a coroutine:

import asyncio

@asyncio.coroutine
def greet_every_two_seconds():
    while True:
        print('Hello World')
        yield from asyncio.sleep(2)

loop = asyncio.get_event_loop()
loop.run_until_complete(greet_every_two_seconds())

18.5.9.3. Echo server

A Protocol implementing an echo server:

import asyncio

class EchoServer(asyncio.Protocol):

    TIMEOUT = 5.0

    def timeout(self):
        print('connection timeout, closing.')
        self.transport.close()

    def connection_made(self, transport):
        print('connection made')
        self.transport = transport

        # start 5 seconds timeout timer
        self.h_timeout = asyncio.get_event_loop().call_later(
            self.TIMEOUT, self.timeout)

    def data_received(self, data):
        print('data received: ', data.decode())
        self.transport.write(b'Re: ' + data)

        # restart timeout timer
        self.h_timeout.cancel()
        self.h_timeout = asyncio.get_event_loop().call_later(
            self.TIMEOUT, self.timeout)

    def eof_received(self):
        pass

    def connection_lost(self, exc):
        print('connection lost:', exc)
        self.h_timeout.cancel()
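
To actually serve connections with this protocol, the server can be started with create_server(); a minimal sketch, with an arbitrary host and port:

import asyncio

loop = asyncio.get_event_loop()
server = loop.run_until_complete(
    loop.create_server(EchoServer, '127.0.0.1', 8888))
loop.run_forever()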