Semaphores in General
Semaphores have been around since the early 1960s, when Edsger Dijkstra developed the concept as part of a multitasking operating system. At a high level, semaphores are used to control access to a common object. They're useful for synchronizing access and avoiding race conditions in concurrent systems. Generally, there are two types of semaphores:
- Counting semaphores: allow up to n resources to access the "semaphore" object concurrently.
- Binary semaphores: restrict access to the "semaphore" object to a single resource at a time. Commonly used to implement locks.
Semaphores control access by knowing how many threads/processes may access an object concurrently and by keeping track of how many additional threads/processes can still acquire access at any given point in time.
When working with semaphores, it is typical to have a couple of methods or properties available for use:
- Wait: decrements the count of remaining slots, acquiring access to the object
- Signal/Release: increments the count of remaining slots, giving access back so other threads/processes can acquire it
The common usage pattern is to first call `Wait()` to acquire access to the resource. Once the process is finished with the resource, it's important to call `Release()` so that other processes can acquire access if needed.
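As a minimal sketch of that pattern (using the `SemaphoreSlim` class covered below, with an arbitrary count of two slots), the acquire and release calls are usually wrapped in `try`/`finally` so the slot is always returned:

```csharp
using System;
using System.Threading;

class WaitReleaseSketch
{
    // A counting semaphore with two slots: at most two threads run the guarded section at once.
    private static readonly SemaphoreSlim Semaphore = new SemaphoreSlim(2, 2);

    public static void UseSharedResource()
    {
        Semaphore.Wait();          // decrement: acquire a slot (blocks if none are free)
        try
        {
            // ... work with the shared resource ...
        }
        finally
        {
            Semaphore.Release();   // increment: return the slot, even if an exception was thrown
        }
    }
}
```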
Semaphores in C#
The .NET framework provides two options for using semaphores in C#. Both are similar but contain features that make them more or less useful in certain scenarios:
- `SemaphoreSlim`
- `Semaphore`
I'll briefly touch on both classes, starting with `SemaphoreSlim` since it is the recommended approach for achieving thread synchronization within an application.
SemaphoreSlim
The `SemaphoreSlim` class is the lightweight alternative to the `Semaphore` class (note "Slim" in the name). When creating a semaphore by instantiating a new `SemaphoreSlim` object, we create a local semaphore. The semaphore's locality means it only controls access for threads within the application that created it. In the next section, we'll discuss named semaphores and how they behave differently.
`SemaphoreSlim` follows the expected API of a semaphore, with methods to decrement the count using `WaitAsync()` or `Wait()` and a method to increment the count with `Release()`. At any point, we can check the semaphore's current count with the `CurrentCount` property.
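As a quick sketch of how those members interact (the three-slot count here is arbitrary):

```csharp
using System;
using System.Threading;

var semaphore = new SemaphoreSlim(initialCount: 3, maxCount: 3);

Console.WriteLine(semaphore.CurrentCount);   // 3 - all slots free

await semaphore.WaitAsync();                 // decrements the count asynchronously
semaphore.Wait();                            // synchronous equivalent, decrements again
Console.WriteLine(semaphore.CurrentCount);   // 1

semaphore.Release();                         // increments the count
semaphore.Release();
Console.WriteLine(semaphore.CurrentCount);   // 3 again
```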
Semaphore
I mentioned that `SemaphoreSlim` is the preferred option for controlling access to resources within an application. So where does the `Semaphore` class come into play? The `Semaphore` class provides "named semaphores", which can control access to resources at the operating system level. This differs from a local semaphore, which only controls access within the scope of a single application. The `Semaphore` class can also create local semaphores, but in that case `SemaphoreSlim` is preferred.
Since `Semaphore` provides quite different functionality from `SemaphoreSlim`, it's not surprising that the API for using it is a little different. `Semaphore` inherits from the `WaitHandle` class, which provides the expected `WaitOne()` method for waiting on (decrementing) the semaphore. `WaitHandle` is used to encapsulate OS-level objects that wait for access to shared resources. `Semaphore` also provides a `Release()` method that functions similarly to the `SemaphoreSlim` method.
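As a rough sketch of what that looks like for a named semaphore (the semaphore name and the two-slot count below are made up for illustration):

```csharp
using System;
using System.Threading;

// Any process on the same machine that creates or opens a semaphore with this name shares it.
// Note: named (system-wide) semaphores are a Windows feature; on other platforms this constructor may throw.
using var semaphore = new Semaphore(2, 2, "MyAppNamedSemaphore");

semaphore.WaitOne();        // decrement: blocks until a slot is free, even across processes
try
{
    // ... work with the machine-wide shared resource ...
}
finally
{
    semaphore.Release();    // increment: return the slot for other threads/processes
}
```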
From my point of view, `SemaphoreSlim` should be chosen over `Semaphore` when possible. If the use case requires controlling access to resources outside of the scope of the application, then `Semaphore` is the way to go.
C# Semaphore in Practice
This code example shows how we can use a semaphore as a "throttle" for performing asynchronous tasks concurrently. I'd like to note that the ideal way to do this would be to use the `Parallel.ForEachAsync` method, which was introduced in .NET 6. Unfortunately, not everyone is lucky enough to always use a modern version of .NET, so this code example is a little workaround.
```csharp
// This is the value used to throttle the number of concurrent tasks
var maxParallelism = 5;

// The semaphore implements IDisposable, so we use a using declaration
// To limit the number of concurrent tasks we use maxParallelism for both initial and maximum count parameters
using var resource = new SemaphoreSlim(maxParallelism, maxParallelism);

var tasks = Enumerable.Range(0, 10).Select(async i =>
{
    await resource.WaitAsync();
    try
    {
        await DoSomeWork(i);
    }
    finally
    {
        resource.Release();
    }
});

// Wait for all tasks to complete
await Task.WhenAll(tasks);

static async Task DoSomeWork(int i)
{
    await Task.Delay(1000);
    Console.WriteLine($"Done with work {i}");
}
```
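For comparison, here is roughly what the same throttling looks like on .NET 6 or later with `Parallel.ForEachAsync`, reusing the `DoSomeWork` helper from the example above; `MaxDegreeOfParallelism` plays the role of the semaphore's count:

```csharp
// Rough .NET 6+ equivalent of the semaphore-based throttle above.
var options = new ParallelOptions { MaxDegreeOfParallelism = 5 };

await Parallel.ForEachAsync(Enumerable.Range(0, 10), options, async (i, cancellationToken) =>
{
    await DoSomeWork(i);
});
```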
I'll be the first to admit that concurrency and parallelism are hard to do right in any language, but when implemented thoughtfully they can improve performance significantly.
Resources
- https://en.wikipedia.org/wiki/Semaphore_(programming)
- https://en.wikipedia.org/wiki/Dining_philosophers_problem
- https://learn.microsoft.com/en-us/dotnet/api/system.threading.semaphoreslim?view=net-7.0
- https://learn.microsoft.com/en-us/dotnet/api/system.threading.semaphore?view=net-7.0