Weekly C++ 10 - std::thread (III)

Hi friends, after a long break we continue our weekly C++ posts. This is the third article in the series we started earlier about the std::thread library. If you have not read the other articles yet, I suggest you read them through the following links, especially the first one:

Weekly C++ 7- std::thread (I)

Weekly C++ 8- std::thread (II)

Weekly C++ 10- std::thread (III)

Introduction:

Let’s start with the subject of this article. In my previous posts, I covered basic std::thread usage and its utilities. In this article, I will tell you about the synchronization structures, which have a very important place in multithreaded software development. As a matter of fact, std::atomic, which I mentioned in my previous article, is one of these synchronization structures; although it does not cover every situation, it can be used for some of the problems I will describe here.

Now, if you like, let’s look at a simple example of accessing a shared resource, which I mentioned in the previous paragraph:
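A minimal sketch of what such a method might look like (the fixed array size and the function name storeItem are placeholders of mine; the variable names match the listing below):

#include <array>
#include <cstddef>

constexpr std::size_t kCapacity = 100;          // assumed capacity, for illustration only
std::array<int, kCapacity> storage{};           // shared storage
std::size_t foundFreeIndex = 0;                 // index of the next free slot (shared)

// Stores the given item in the first free slot of the shared array.
// Note that there is no synchronization here, which is exactly the problem discussed below.
void storeItem(int item)
{
    std::size_t freeSlotIndex = foundFreeIndex; // read the current free index
    storage[freeSlotIndex]    = item;           // write the item into that slot
    foundFreeIndex++;                           // advance the free index
}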

The method above simply stores a given number in the first free slot it finds in a simple array. Let’s take a look at how this code behaves when executed in parallel. In the following listing, I tried to give a sample instruction sequence that could occur if two different threads called this method at the same time (the exact interleaving can of course vary).

Thread 1 Execution:

1) freeSlotIndex = foundFreeIndex;

4) storage[freeSlotIndex] = item;

5) foundFreeIndex++;

Thread 2 Execution:

2) freeSlotIndex = foundFreeIndex;

3) storage[freeSlotIndex] = item;

6) foundFreeIndex++;

As you can see, depending on the execution order, the value written into a supposedly empty slot can be overwritten by the other thread. This problem is called a race condition in the multithreaded programming world. Such problems can occur whenever shared resources are accessed. Of course, the problem does not show up on every run (believe me, it is much better for you if it does occur and you catch it 🙂). The code regions that can lead to such problems are called critical sections. Finding these race conditions is not always easy, precisely because they do not always occur. To prevent these problems, we protect critical sections. I would like you to pay particular attention to the purpose of the access: if every party only reads the shared data, there is no problem and they can all read safely. But if at least one party writes to this data, then problems may arise. Let’s keep that note aside.

We looked at a problem that might occur in the example above, but what kinds of problems can we face if we do not protect critical sections? Let’s see:

  • Unsynchronized Data Access: The example given above actually falls into this group. If more than one thread reads and writes shared data in parallel, the result depends on which one happens to write first.
  • Half-written Data: Similarly, while one thread is writing data, another thread can read it right in the middle of the writing process. You end up with neither the old nor the new value 🙂 Let me try to explain this with a very simple example:

long long x = 0;

A thread changes this data as follows:

x = -1;

The other reads as follows:

std::cout << x;

This is exactly an example of the problem we are talking about: on a 32-bit platform, for instance, the 64-bit write may be performed as two separate 32-bit writes, so the reader can observe a half-written value.

  • Reordered Statements: Statements in individual threads may be reordered by the compiler or the processor for performance or similar reasons. On their own (when the code runs sequentially) these reorderings are not a problem, but in parallel runs the expected behavior may not be observed, as the sketch below shows.
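To make this concrete, here is a small, hypothetical sketch (the names producer, consumer, data and ready are mine): if the two writes in producer() are reordered, or simply because the unsynchronized access is a data race, consumer() may observe ready == true while data is still 0.

#include <iostream>
#include <thread>

int  data  = 0;
bool ready = false;              // plain bool, no synchronization on purpose

void producer()
{
    data  = 42;                  // (A) prepare the data
    ready = true;                // (B) signal readiness; (A) and (B) may be reordered
}

void consumer()
{
    while (!ready) {}            // busy wait on the flag (this is also a data race)
    std::cout << data << '\n';   // may print 0 if the writes were reordered
}

int main()
{
    std::thread t1(producer);
    std::thread t2(consumer);
    t1.join();
    t2.join();
}

Declaring ready as std::atomic<bool>, or protecting both variables with one of the structures below, removes both the reordering problem and the data race.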

For more detailed information on these issues (especially on such problems and the approaches to them), please see the books I mentioned in my first article.

Now let’s look at std::mutex, the first of these structures.

std::mutex:

A mutex, short for mutual exclusion, is a structure that regulates simultaneous access to a shared resource. A thread locks the std::mutex before accessing the resource, and while it holds the lock, other threads are blocked from accessing the resource until the owning thread releases the lock. The concept was first described by Edsger W. Dijkstra. It is the most basic synchronization structure that provides exclusive access to a shared resource.
To use the std::mutex class, you need to include the <mutex> header file. Below, I tried to summarize the important classes you can use for mutual exclusion, together with the C++ version that introduced them, with brief explanations. Reference information can be found here. Then we will look at how to use them with sample code. For more detailed examples and uses, I will give you the addresses of the resources in the last section.

 

Basic mutex classes - I

  • std::mutex (C++11): Provides basic mutual exclusion facility.
  • std::timed_mutex (C++11): Provides mutual exclusion facility which implements locking with a timeout.
  • std::recursive_mutex (C++11): Provides mutual exclusion facility which can be locked recursively by the same thread.
  • std::recursive_timed_mutex (C++11): Provides mutual exclusion facility which can be locked recursively by the same thread and implements locking with a timeout.
  • std::lock_guard (C++11): Implements a strictly scope-based mutex ownership wrapper.
  • std::unique_lock (C++11): Implements a movable mutex ownership wrapper.

Independent Functions

  • std::try_lock (C++11): Attempts to obtain ownership of mutexes via repeated calls to try_lock.
  • std::lock (C++11): Locks the specified mutexes, blocks if any are unavailable.

As I mentioned in my first article, the std::thread library was originally introduced with C++11, and so were the classes listed above. The additions that came with C++14 and C++17 are as follows:

Basic mutex classes - II

  • std::shared_mutex (C++17): Provides shared mutual exclusion facility.
  • std::shared_timed_mutex (C++14): Provides shared mutual exclusion facility and implements locking with a timeout.
  • std::scoped_lock (C++17): Deadlock-avoiding RAII wrapper for multiple mutexes (see https://stackoverflow.com/questions/17113619/whats-the-best-way-to-lock-multiple-stdmutexes/17113678).
  • std::shared_lock (C++14): Implements a movable shared mutex ownership wrapper.

Well, now that we have learned what ammunition we have, let’s take a look at how we can use it.

For your first multi-threaded program, I’m sure all of you wanted to print something to the console, and the first result you got was probably a mess. Let’s look at how we can fix this by simply using the basic structures above. As you might expect, in this case the shared resource is the standard output stream.
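A minimal sketch along these lines (the function names and the sleep calls are my own additions to make the interleaving visible) could be:

#include <chrono>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

std::mutex gCoutMutex;

// Version 1: explicit lock() / unlock() calls.
void printWithLockUnlock(const std::string& name, int value)
{
    gCoutMutex.lock();
    std::cout << name << " : ";
    std::this_thread::sleep_for(std::chrono::milliseconds(10)); // make interleaving visible
    std::cout << value << '\n';
    gCoutMutex.unlock();
}

// Version 2: std::scoped_lock releases the mutex automatically (RAII).
void printWithScopedLock(const std::string& name, int value)
{
    std::scoped_lock lock(gCoutMutex);
    std::cout << name << " : ";
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    std::cout << value << '\n';
}

int main()
{
    std::thread t1([] { for (int i = 0; i < 5; ++i) printWithScopedLock("thread 1", i); });
    std::thread t2([] { for (int i = 0; i < 5; ++i) printWithLockUnlock("thread 2", i); });
    t1.join();
    t2.join();
}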

In the code given above, both methods do the same thing, but std::scoped_lock is more readable (in my opinion, of course 🙂). It also eliminates incorrect use of the std::mutex lock() and unlock() calls. Here I added a few things to the code (the sleep calls and the split output) to expose the troublesome situation 🙂 In simple applications, std::cout may appear to work without any hassle, but you can see what happens when you comment out the locking lines above.

Now let’s take a look at the situations that require std::recursive_mutex, although they are not as common as the first case. This need usually arises when each method locks the mutex and these methods need to call each other. Let’s look at an example:
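A sketch of such a class might look like the following (the class name Calculator and the member mValue are placeholders; multiply(), divide(), both() and mMutex follow the discussion below):

#include <mutex>

class Calculator
{
public:
    void multiply(int factor)
    {
        std::scoped_lock lock(mMutex);
        mValue *= factor;
    }

    void divide(int divisor)
    {
        std::scoped_lock lock(mMutex);
        mValue /= divisor;
    }

    // both() locks mMutex and then calls multiply()/divide(), which try to
    // lock the same mutex again from the same thread.
    void both(int factor, int divisor)
    {
        std::scoped_lock lock(mMutex);
        multiply(factor);
        divide(divisor);
    }

private:
    std::mutex mMutex;   // replace with std::recursive_mutex to make both() work
    int        mValue = 100;
};

Calling both() from any thread is enough to trigger the problem described below when mMutex is a plain std::mutex.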

As you can see in this example, when you call the multiply() or divide() method on its own, you will not encounter any problems. But when you call the both() method, the application blocks. Why do you think that is?

The both() method locks mMutex first, and when it calls multiply(), it tries to lock the same mutex again; this is what we call a deadlock. The same thread cannot lock a std::mutex twice. Now replace all std::mutex uses with std::recursive_mutex and run it again, and you won’t have any problems.

In the uses above, the calling thread is blocked if the mutex is already locked. Instead, you may want to simply check whether you can lock the mutex. In this case, you can use try_lock() and its variants. Let’s look at an example (similar to the one on the reference page):
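A simplified sketch in the spirit of that example (the names gCounter and tryToIncrease are mine):

#include <iostream>
#include <mutex>
#include <thread>

std::mutex gCounterMutex;
int gCounter = 0;

void tryToIncrease()
{
    for (int i = 0; i < 100000; ++i)
    {
        if (gCounterMutex.try_lock())   // returns immediately instead of blocking
        {
            ++gCounter;
            gCounterMutex.unlock();
        }
        // if try_lock() fails, we simply skip this round instead of waiting
    }
}

int main()
{
    std::thread t1(tryToIncrease);
    std::thread t2(tryToIncrease);
    t1.join();
    t2.join();
    std::cout << gCounter << " successful increments\n";
}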

Now let’s take a look at another use and then close the mutex chapter. This one is about std::timed_mutex. This class offers the try_lock_for() and try_lock_until() APIs in addition to the standard std::mutex APIs. try_lock_for() tries to lock the given mutex; if the mutex is already locked, instead of waiting forever it waits for it to become unlocked for the given duration. If it succeeds it returns true, otherwise it returns false. try_lock_until() provides similar behavior, except that it takes a time point instead of a duration. Let’s look at an example along the lines of the one on the reference page:
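A small sketch of try_lock_for() in action (the 100 ms timeout and the worker function are my own choices):

#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

std::timed_mutex gTimedMutex;

void worker(int id)
{
    using namespace std::chrono_literals;

    // Wait at most 100 ms for the lock instead of blocking forever.
    if (gTimedMutex.try_lock_for(100ms))
    {
        std::cout << "thread " << id << " got the lock\n";
        std::this_thread::sleep_for(250ms);   // hold the lock longer than the other thread waits
        gTimedMutex.unlock();
    }
    else
    {
        std::cout << "thread " << id << " gave up waiting\n";
    }
}

int main()
{
    std::thread t1(worker, 1);
    std::thread t2(worker, 2);
    t1.join();
    t2.join();
}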

Sometimes you want a piece of code to be executed only once, even when it is called from different threads. The library offers the std::call_once function for such situations. Let’s look at an example:
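A minimal sketch (assuming a doSomething() function like the one discussed below, with a global std::once_flag):

#include <iostream>
#include <mutex>
#include <thread>

std::once_flag gInitFlag;

void doSomething()
{
    // The lambda below runs exactly once, no matter how many threads call doSomething().
    std::call_once(gInitFlag, [] { std::cout << "one-time initialization\n"; });
}

int main()
{
    std::thread t1(doSomething);
    std::thread t2(doSomething);
    std::thread t3(doSomething);
    t1.join();
    t2.join();
    t3.join();
}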

In the example code above, the first part of the doSomething() method is executed only once, thanks to the associated std::once_flag.

With this usage, we have covered quite a lot about mutexes. Is there more? Of course, there are many more topics (you can have a look at the resources for more detailed information, or you can always consult Google 🙂), but I think these are enough to get you started. In the next section we will look at another important structure, std::condition_variable.

std::condition_variable:

Another important structure provided with the thread library is std::condition_variable. When you develop multithreaded software, you frequently run into situations where one thread waits for another and only does its work, or continues working, when a condition is met. This is exactly the case where std::condition_variable can be used. Of course, you could say that the std::future mechanism can also be used to retrieve data from a thread, but its sole purpose is to return data from a different thread or to report a failure; in some cases, we need a better mechanism.


We haven’t talked about it before, but we can actually handle this situation by having a thread check the condition in a loop. This approach is called busy waiting or polling. Let’s take a look at it right away:
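A minimal sketch of this busy-wait approach (the flag name gDataReady and the poll interval are my own choices; std::atomic is used so that the polling itself is at least free of data races):

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> gDataReady{false};

void producer()
{
    std::this_thread::sleep_for(std::chrono::seconds(1)); // simulate some work
    gDataReady = true;                                    // publish the result
}

void consumer()
{
    // Busy wait / polling: keep checking the flag until it becomes true.
    while (!gDataReady)
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(10)); // poll interval
    }
    std::cout << "data is ready\n";
}

int main()
{
    std::thread t1(producer);
    std::thread t2(consumer);
    t1.join();
    t2.join();
}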

As you can see here, the condition is checked continuously. Although this may seem like a small job, short polling intervals waste processor time, and longer intervals may cause additional delays. Here, you can use std::condition_variable instead. So what are these structures? Let’s look at the reference documentation, where a good definition is provided:

The condition_variable class is a synchronization primitive that can be used to block a thread, or multiple threads at the same time, until another thread both modifies a shared variable (the condition), and notifies the condition_variable.

Each condition variable is necessarily associated with a mutex. To use condition variables, you need to include the <mutex> and <condition_variable> header files. For cases similar to the one given above:

  • For one or more threads to signal that the condition is met, depending on how many waiting threads should be notified:
    • either the notify_one() or the notify_all() API is called.
  • For a thread to wait for, or be made aware of, the specified condition:
    • the wait() API is used.

Now let’s rewrite the first example using these APIs:
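A sketch of the same producer/consumer scenario rewritten with std::condition_variable (the names are kept from the sketch above):

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex              gMutex;
std::condition_variable gCondVar;
bool                    gDataReady = false;

void producer()
{
    std::this_thread::sleep_for(std::chrono::seconds(1)); // simulate some work
    {
        std::lock_guard<std::mutex> lock(gMutex);
        gDataReady = true;           // modify the shared condition under the mutex
    }
    gCondVar.notify_one();           // wake up the waiting thread
}

void consumer()
{
    std::unique_lock<std::mutex> lock(gMutex);
    // The predicate guards against spurious wake-ups (see below).
    gCondVar.wait(lock, [] { return gDataReady; });
    std::cout << "data is ready\n";
}

int main()
{
    std::thread t1(producer);
    std::thread t2(consumer);
    t1.join();
    t2.join();
}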

It is worth mentioning an important detail here. These condition variables may sometimes stop blocking the waiting thread even though no other thread has notified them; these are called spurious wakeups. In such cases, it is necessary to check the relevant condition again (which is what the predicate overload of wait() does for you). This case is explained nicely with examples on http://www.modernescpp.com/index.php/c-core-guidelines-be-aware-of-the-traps-of-condition-variables, so you can take a look there.

Now let’s take a look at the use of conditional variable and other synchronization structures in a queue class that can be used by multi-thread applications:

With this example, we have seen how most of these synchronization structures can be used together.

The condition variables described here have a few more APIs, but their use is similar; they only provide some additional conveniences (such as waiting for a given duration or until a given time point, via wait_for and wait_until). You can take a look at https://en.cppreference.com/w/cpp/thread/condition_variable for such features.

Conclusion:

Yes, friends, our three-post thread adventure is finally coming to an end. With these three articles, I tried to convey the most important structures of the C++ thread library to you. The next step, of course, is to use them. Although I’m done with this library, there will be other posts on multithreaded programming.

I am yazilimperver; plentiful and fun coding to you all 🙂

References:
