In software development, one of the more interesting topics to understand, in my opinion, is multi-threading, or more generically, concurrency (and yes, there's a big difference between the two). To understand concurrency is to understand, at some level, how operating systems perform concurrent operations, how processors handle things like cache coherency, and so on. However, it's easy to learn the language of concurrency and still not actually understand what you're talking about. In this post, I want to discuss what a locking mechanism is, why locks matter, and the difference between a mutex and a semaphore.
When writing concurrent code, the first question you must ask is "can I actually make this bit of code concurrent?" In some cases the answer is yes and you can be on your way; in other cases it's a no; and in still other cases it's a yes, but not all of the code can be executed concurrently. In that third case, the reason you typically can't make an entire piece of code concurrent has to do with something called critical sections. A critical section is a portion of code, typically one dealing with a shared resource, that cannot safely be worked on by multiple threads at once and therefore needs single-threaded execution. Sometimes a critical section must be handled by exactly one thread, and sometimes it can be handled by up to n threads, and if you've read my Dropbox API v2 integration guide for Xamarin Android, you'll know why that distinction matters. Suffice it to say, not everything can easily be multi-threaded.
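To make the critical section problem concrete, here's a small sketch in Python (standing in for whatever language you work in). A shared-counter increment is really three steps: read, add, write back. The snippet below simulates one bad interleaving of two threads deterministically, with the variable names being purely illustrative:

```python
# Sketch: why "counter += 1" on shared data is a critical section.
# The increment is really read -> add -> write; if two threads
# interleave those steps, one update is silently lost. This simulates
# the interleaving by hand rather than racing real threads.

counter = 0

# Thread A reads the current value...
a_read = counter          # A sees 0
# ...but before A writes back, thread B runs its whole increment.
b_read = counter          # B also sees 0
counter = b_read + 1      # B writes 1
# Thread A resumes, unaware of B's update, and overwrites it.
counter = a_read + 1      # A writes 1 as well

print(counter)  # 1, even though two increments ran
```

That lost update is exactly what a lock prevents: it forces the whole read-add-write sequence to complete before another thread may start it.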
In any case, a critical section that can have either one thread or up to n threads working on it at once requires what is called a lock. A lock is a mechanism that signals to other threads whether or not they are clear to work on that particular section of code. Think stoplights: green means go, red means wait. It's quite literally the same idea when it comes to locking mechanisms.
There are different types of locking mechanisms, as I mentioned earlier, and one such mechanism is the mutex. Mutex may sound like a funky word, but it actually comes from two words mashed together: mutual and exclusion. This lock is called a mutex, or mutually exclusive lock, because it only allows one particular thread to work on a piece of code at any given time. That could mean only one thread can execute a particular loop, or only one thread can work on a portion of image data, but the idea is that a mutex ensures only one thread will ever work on some particular thing at a time. In Win32 there are mutex objects, in C# you can use a mutex via the System.Threading namespace, and most languages with lower-level operating system access, like C#, C++, C, Objective-C, etc., will have some way of letting you use a mutex.
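Here's a minimal sketch of a mutex in action, using Python's `threading.Lock` (the same idea as a C# or Win32 mutex, just in a language that's easy to demo). Four threads hammer one shared counter, and the lock guarantees only one of them is inside the increment at a time:

```python
import threading

# Sketch of a mutex: threading.Lock lets exactly one thread at a time
# into the critical section that updates the shared counter.

counter = 0
lock = threading.Lock()  # the mutex

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # acquire; blocks while another thread holds it
            counter += 1  # critical section: one thread at a time
        # the lock is released automatically when the with-block exits

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 -- no lost updates
```

Without the `with lock:` line, some of those 400,000 increments could be lost to the read-modify-write interleaving described above.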
If a mutex allows only one thread to work on one piece of code at a time, then conversely there must be a locking mechanism that allows a limited number of threads to work on a single piece of code, right? That's correct, and that type of lock is the semaphore. A semaphore lets multiple threads access a particular critical section, up to a set limit. Why, you may be asking, is this useful? I had a scenario recently where a semaphore truly was the only option to solve my problem. I had a task to perform asynchronous file uploads to Dropbox from an app, but there was a problem: you can only upload a set number of files to Dropbox at any given time, otherwise you trigger a rate limit exception. So what did I do? I created a semaphore that allowed up to three tasks to execute at once, and when one of those three tasks finished, the semaphore allowed the next task to execute; in this case, each task uploaded a file to Dropbox. Basically, I've found semaphores very useful whenever you need to limit the number of hits a server gets. There are certainly other uses as well, but that's the practical application I've found for them.
A word of caution
Locking mechanisms can cause all sorts of issues: blocking, deadlocking, and general application overhead and code complexity. That last issue may have you scratching your head, but in managed environments where a garbage collector is running, the collector must stop and scan every thread's stack before it can clean up memory. So if you have seven threads running instead of one, the garbage collector has seven stacks to stop and analyze instead of one. This can be a pretty major performance hit in some cases, though not all. My point isn't that threads are bad; they're great, actually. My point is that you shouldn't assume that making something multi-threaded automatically makes it better.
So in conclusion, there are two main types of locks in concurrent programming, mutexes and semaphores, and both have very specific and important uses. The point of this post was to share the very basics of concurrent locks, and hopefully I'll be able to get around to doing more posts on concurrent programming, so look out for those in the future. Hopefully this was a helpful and easy-to-follow explanation of the topic, and if you have any questions, feel free to reach out and I'll be happy to answer them. Thanks, and until next time.