c++ - Correct way to wait on a std::atomic<int> in Windows?
The following code works but has a problem:
#include <atomic>
#include "windows.h"

std::atomic<int> foo;

DWORD WINAPI baz(void *)
{
    Sleep(10000);
    foo.store(1);
    return 0;
}

int main()
{
    foo.store(0);
    HANDLE h = CreateThread(NULL, 0, baz, NULL, 0, NULL);
    while (!foo.load()) {
        Sleep(0);
    }
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}
The program uses maximum CPU while waiting.
If I change Sleep(0); to Sleep(1); it uses 0% CPU, but I'm worried about a couple of things:
- This introduces an unnecessary delay into the program: it wastes microseconds if the flag is set in between polls.
- It might still be consuming more system resources than necessary, in order to wake up and call load() every millisecond.

Is there a better way?
Background: I have code working that uses Win32 events to wake a thread, waiting with WaitForMultipleObjects, and I'm wondering if I can use std::atomic flags instead, with the aim of perhaps making the code simpler, faster, and/or more portable. I don't know how the OS implements WaitForSingleObject and WaitForMultipleObjects, e.g. whether it uses something like Sleep(1) internally or whether it has a smarter technique available.
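For reference, a minimal sketch of that event-based pattern might look something like the following (the event handle, the worker function, and the 10-second sleep are placeholders standing in for my real code):

#include "windows.h"

HANDLE g_event;  // auto-reset event used to wake the main thread (placeholder name)

DWORD WINAPI worker(void *)
{
    Sleep(10000);        // stand-in for the real work
    SetEvent(g_event);   // signal the waiting thread
    return 0;
}

int main()
{
    // auto-reset event, initially non-signalled
    g_event = CreateEvent(NULL, FALSE, FALSE, NULL);
    HANDLE h = CreateThread(NULL, 0, worker, NULL, 0, NULL);

    // blocks without spinning until SetEvent is called
    WaitForSingleObject(g_event, INFINITE);

    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    CloseHandle(g_event);
    return 0;
}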
Note: the atomic<int> is lock-free; the generated assembly for the loop is:
        movq    __imp_Sleep(%rip), %rbx
        movq    %rax, %rsi
        jmp     .L4
        .p2align 4,,10
.L5:
        xorl    %ecx, %ecx
        call    *%rbx
.L4:
        movl    foo(%rip), %edx
        testl   %edx, %edx
        je      .L5
You shouldn't wait on a std::atomic; they're not designed for that. If you want a non-busy wait, you want a std::condition_variable.

A std::condition_variable is designed to be able to wait until it is signalled without using any CPU, and to wake up immediately.

Their usage is a little more verbose and you need to couple them with a mutex, but once you're used to them they're quite powerful:
#include <condition_variable>
#include <mutex>
#include <thread>
#include <chrono>   // for std::chrono::seconds

std::condition_variable cv;
std::mutex lock;
int foo;

void baz()
{
    std::this_thread::sleep_for(std::chrono::seconds(10));
    {
        // modify the shared flag while holding the mutex
        auto ul = std::unique_lock<std::mutex>(lock);
        foo = 1;
    }
    // wake the waiting thread
    cv.notify_one();
}

int main()
{
    foo = 0;
    auto thread = std::thread(baz);
    {
        auto ul = std::unique_lock<std::mutex>(lock);
        // blocks without using CPU until foo becomes non-zero
        cv.wait(ul, [](){ return foo != 0; });
    }
    thread.join();
    return 0;
}
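If you want to keep the flag as a std::atomic<int> (as in your original code), you can still combine it with the condition variable. Here's a sketch of that variation, not part of the code above, assuming the flag may also be read elsewhere without the mutex: the store itself can happen outside the lock, but you still need to acquire and release the mutex before notifying, otherwise the notification can slip in between the waiter's predicate check and the moment it actually blocks:

#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

std::condition_variable cv;
std::mutex lock;
std::atomic<int> foo{0};

void baz()
{
    std::this_thread::sleep_for(std::chrono::seconds(10));
    foo.store(1);   // the atomic flag itself doesn't need the mutex
    {
        // briefly taking the mutex guarantees the waiter has either already
        // seen foo == 1 in its predicate check or is fully blocked in wait()
        std::lock_guard<std::mutex> lk(lock);
    }
    cv.notify_one();
}

int main()
{
    auto thread = std::thread(baz);
    {
        std::unique_lock<std::mutex> ul(lock);
        cv.wait(ul, [] { return foo.load() != 0; });
    }
    thread.join();
    return 0;
}

The simplest rule of thumb is still the version above: write the flag while holding the mutex and you can't get the ordering wrong.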