std::async
Defined in header <future>
(1)
template< class Function, class... Args >
std::future<std::result_of_t<std::decay_t<Function>(std::decay_t<Args>...)>>
    async( Function&& f, Args&&... args );                                        (since C++11) (until C++17)

template< class Function, class... Args >
std::future<std::invoke_result_t<std::decay_t<Function>, std::decay_t<Args>...>>
    async( Function&& f, Args&&... args );                                        (since C++17) (until C++20)

template< class Function, class... Args >
[[nodiscard]] std::future<std::invoke_result_t<std::decay_t<Function>, std::decay_t<Args>...>>
    async( Function&& f, Args&&... args );                                        (since C++20)

(2)
template< class Function, class... Args >
std::future<std::result_of_t<std::decay_t<Function>(std::decay_t<Args>...)>>
    async( std::launch policy, Function&& f, Args&&... args );                    (since C++11) (until C++17)

template< class Function, class... Args >
std::future<std::invoke_result_t<std::decay_t<Function>, std::decay_t<Args>...>>
    async( std::launch policy, Function&& f, Args&&... args );                    (since C++17) (until C++20)

template< class Function, class... Args >
[[nodiscard]] std::future<std::invoke_result_t<std::decay_t<Function>, std::decay_t<Args>...>>
    async( std::launch policy, Function&& f, Args&&... args );                    (since C++20)
The template function async runs the function f asynchronously (potentially in a separate thread which may be part of a thread pool) and returns a std::future that will eventually hold the result of that function call. f may be executed in another thread, or it may be run synchronously when the resulting std::future is queried for a value.

Overload (1) behaves as if overload (2) were called with policy set to std::launch::async | std::launch::deferred. Overload (2) calls f with arguments args according to a specific launch policy policy:

- If the async flag is set (i.e. (policy & std::launch::async) != 0), then async executes the callable object f on a new thread of execution (with all thread-locals initialized) as if spawned by std::thread(std::forward<F>(f), std::forward<Args>(args)...), except that if the function f returns a value or throws an exception, it is stored in the shared state accessible through the std::future that async returns to the caller.
- If the deferred flag is set (i.e. (policy & std::launch::deferred) != 0), then async converts f and args... the same way as the std::thread constructor, but does not spawn a new thread of execution. Instead, lazy evaluation is performed: the first call to a non-timed wait function on the std::future that async returned to the caller will cause the copy of f to be invoked (as an rvalue) with the copies of args... (also passed as rvalues) in the current thread (which does not have to be the thread that originally called std::async). The result or exception is placed in the shared state associated with the future, and only then is it made ready. All further accesses to the same std::future will return the result immediately.
- If both the std::launch::async and std::launch::deferred flags are set in policy, it is up to the implementation whether to perform asynchronous execution or lazy evaluation.
- If neither std::launch::async nor std::launch::deferred, nor any implementation-defined policy flag is set in policy, the behavior is undefined. (since C++14)
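A minimal sketch contrasting the two policy flags (illustrative only; the thread ids printed depend on the implementation): with std::launch::async the callable starts on a new thread, while with std::launch::deferred it runs in whichever thread first waits on the future.

#include <future>
#include <iostream>
#include <thread>

int main()
{
    // async flag: the lambda starts on a new thread of execution
    auto eager = std::async(std::launch::async, []{ return std::this_thread::get_id(); });

    // deferred flag: the lambda does not run until the future is waited on,
    // and then runs in the waiting thread
    auto lazy = std::async(std::launch::deferred, []{ return std::this_thread::get_id(); });

    std::cout << "main thread:     " << std::this_thread::get_id() << '\n'
              << "async policy:    " << eager.get() << '\n'   // a different thread id
              << "deferred policy: " << lazy.get()  << '\n';  // same id as the main thread
}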
In any case, the call to std::async synchronizes-with (as defined in std::memory_order) the call to f, and the completion of f is sequenced-before making the shared state ready. If the async policy is chosen, the associated thread completion synchronizes-with the successful return from the first function that is waiting on the shared state, or with the return of the last function that releases the shared state, whichever comes first.
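A brief sketch, not part of the reference text, of what these guarantees mean in practice: a plain (non-atomic) object written by the asynchronous task can be read after get() returns without any extra synchronization, because the task's completion synchronizes-with that return.

#include <future>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> data;  // plain, non-atomic object

    auto done = std::async(std::launch::async, [&data]
    {
        data.assign(1000, 42);  // written by the asynchronous task
    });

    done.get();                         // the task's completion synchronizes-with this return
    std::cout << data.size() << '\n';   // safe to read; prints 1000
}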
Parameters
f       - Callable object to call
args... - parameters to pass to f
policy  - bitmask value, where individual bits control the allowed methods of execution

Type requirements
- Function, Args must meet the requirements of MoveConstructible.
Return value
std::future referring to the shared state created by this call to std::async.
Exceptions
Throws std::system_error with error condition std::errc::resource_unavailable_try_again if the launch policy equals std::launch::async and the implementation is unable to start a new thread (if the policy is async|deferred or has additional bits set, it will fall back to deferred or the implementation-defined policies in this case), or std::bad_alloc if memory for the internal data structures could not be allocated.
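A sketch of how that failure might be handled when std::launch::async is requested explicitly (the recovery strategy shown is only illustrative):

#include <future>
#include <iostream>
#include <system_error>

int main()
{
    try
    {
        auto fut = std::async(std::launch::async, []{ return 42; });
        std::cout << fut.get() << '\n';
    }
    catch (const std::system_error& e)
    {
        // thrown when a new thread cannot be started
        if (e.code() == std::errc::resource_unavailable_try_again)
            std::cerr << "could not start a thread: " << e.what() << '\n';
    }
}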
Notes
The implementation may extend the behavior of the first overload of std::async by enabling additional (implementation-defined) bits in the default launch policy.
Examples of implementation-defined launch policies are the sync policy (execute immediately, within the async call) and the task policy (similar to async, but thread-locals are not cleared).
If the std::future obtained from std::async is not moved from or bound to a reference, the destructor of the std::future will block at the end of the full expression until the asynchronous operation completes, essentially making code such as the following synchronous:

std::async(std::launch::async, []{ f(); }); // temporary's dtor waits for f()
std::async(std::launch::async, []{ g(); }); // does not start until f() completes

(note that the destructors of std::futures obtained by means other than a call to std::async never block)
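A sketch of the usual way to keep the calls genuinely concurrent: bind each returned future to a named variable so its destructor does not block until the end of the enclosing scope (f and g here are placeholder tasks, not library functions):

#include <future>
#include <iostream>

int f() { return 1; }  // placeholder tasks
int g() { return 2; }

int main()
{
    // Both tasks may run concurrently: neither future is a temporary
    // whose destructor would block at the end of the full expression.
    auto a = std::async(std::launch::async, f);
    auto b = std::async(std::launch::async, g);

    std::cout << a.get() + b.get() << '\n';  // blocks here, not earlier
}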
Defect reports
The following behavior-changing defect reports were applied retroactively to previously published C++ standards.
DR | Applied to | Behavior as published | Correct behavior |
---|---|---|---|
LWG 2021 | C++11 | return type incorrect and value category of arguments unclear in the deferred case | corrected return type and clarified that rvalues are used |
Example
#include <iostream>
#include <vector>
#include <algorithm>
#include <numeric>
#include <future>
#include <string>
#include <mutex>

std::mutex m;

struct X
{
    void foo(int i, const std::string& str)
    {
        std::lock_guard<std::mutex> lk(m);
        std::cout << str << ' ' << i << '\n';
    }

    void bar(const std::string& str)
    {
        std::lock_guard<std::mutex> lk(m);
        std::cout << str << '\n';
    }

    int operator()(int i)
    {
        std::lock_guard<std::mutex> lk(m);
        std::cout << i << '\n';
        return i + 10;
    }
};

template <typename RandomIt>
int parallel_sum(RandomIt beg, RandomIt end)
{
    auto len = end - beg;
    if (len < 1000)
        return std::accumulate(beg, end, 0);

    RandomIt mid = beg + len / 2;
    auto handle = std::async(std::launch::async, parallel_sum<RandomIt>, mid, end);
    int sum = parallel_sum(beg, mid);
    return sum + handle.get();
}

int main()
{
    std::vector<int> v(10000, 1);
    std::cout << "The sum is " << parallel_sum(v.begin(), v.end()) << '\n';

    X x;
    // Calls (&x)->foo(42, "Hello") with default policy:
    // may print "Hello 42" concurrently or defer execution
    auto a1 = std::async(&X::foo, &x, 42, "Hello");
    // Calls x.bar("world!") with deferred policy
    // prints "world!" when a2.get() or a2.wait() is called
    auto a2 = std::async(std::launch::deferred, &X::bar, x, "world!");
    // Calls X()(43); with async policy
    // prints "43" concurrently
    auto a3 = std::async(std::launch::async, X(), 43);
    a2.wait();                     // prints "world!"
    std::cout << a3.get() << '\n'; // prints "53"
} // if a1 is not done at this point, destructor of a1 prints "Hello 42" here
Possible output:
The sum is 10000
43
world!
53
Hello 42