In this article, we are going to learn what Distributed Caching is and how to implement distributed caching in an ASP.NET Core application.
Let’s start.
Distributed Caching in ASP.NET Core
Distributed caching is a mechanism in which we maintain the cache as an external service that multiple app servers can use. This is different from In-Memory Caching where we cache the data in the app server’s memory. Distributed caching can greatly improve the performance and scalability of an app and is a great choice when we host our application on multiple servers or in the cloud.
ASP.NET Core supports several distributed cache implementations, and it is easy to switch implementations later by changing only the configuration. Regardless of the implementation we choose, we always work with the distributed cache through the IDistributedCache interface.
Different Distributed Caching Services in ASP.NET Core
Now let’s have a look at the different distributed caching services that ASP.NET Core supports:
Distributed Memory Cache
This is the simplest implementation of IDistributedCache and stores data in the app server's memory. While it is not a real distributed cache, it is a great choice for developing and testing our application's caching logic. It is also helpful when we want to start with a single server in production but need the flexibility to scale out to multiple servers later. The distributed memory cache lets us move to a true distributed caching solution in the future by simply switching to a different implementation.
To enable distributed memory caching, we just need to add one line of code to the Program class:
builder.Services.AddDistributedMemoryCache();
After that, we can work with the cache through the IDistributedCache interface. We are going to see how to do that in the implementation section.
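As a quick preview, here is a minimal sketch of how IDistributedCache works, constructing the in-memory MemoryDistributedCache by hand instead of resolving it through dependency injection (the "greeting" key and values are just illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Options;

// MemoryDistributedCache is the implementation that
// AddDistributedMemoryCache() registers; here we create it directly.
IDistributedCache cache = new MemoryDistributedCache(
    Options.Create(new MemoryDistributedCacheOptions()));

// SetStringAsync/GetStringAsync are convenience extensions that
// handle the byte[] conversion for us.
await cache.SetStringAsync("greeting", "Hello, cache!");

var cached = await cache.GetStringAsync("greeting");   // "Hello, cache!"

// Removing the key makes subsequent reads return null.
await cache.RemoveAsync("greeting");
var missing = await cache.GetStringAsync("greeting");  // null
```

In the application itself we never construct the cache manually like this; we inject IDistributedCache and let the registered implementation do the work.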
SQL Server Cache
The distributed SQL Server cache implementation of IDistributedCache stores data in a SQL Server database. We can use the sql-cache tool to configure a SQL Server database for distributed caching support. With the sql-cache create command, we can create a table for storing the cache items:
dotnet sql-cache create "Data Source=.;Initial Catalog=MyDB;Integrated Security=True;" dbo CacheItems
Note that we need to provide the connection string, schema, and table names as parameters, and it will create the table and index.
After this step, we just need to register the distributed SQL Server cache service in the Program class to enable it in our application:
builder.Services.AddDistributedSqlServerCache(options =>
{
    options.ConnectionString = builder.Configuration.GetConnectionString("DBConnectionString");
    options.SchemaName = "dbo";
    options.TableName = "CacheItems";
});
This requires us to add a reference to the Microsoft.Extensions.Caching.SqlServer NuGet package. Note that we need to provide the same connection string, schema, and table names in the Program class as well.
Redis Cache
Redis (Remote Dictionary Server) is one of the most popular open-source in-memory caching services that stores data as key-value pairs. We can either install Redis on one of our servers or use one of the cloud-based Redis services like the Azure Cache for Redis. In this example, we are going to use Azure Cache for Redis. We are going to explain the configuration process later in the article.
Once we configure the Redis service, we can integrate it into our application by calling the AddStackExchangeRedisCache() method in the Program class:
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = builder.Configuration["ConnectionString:Redis"];
    options.InstanceName = "SampleInstance";
});
Of course, we have to add a reference to the Microsoft.Extensions.Caching.StackExchangeRedis NuGet package and provide the Redis connection string and instance name.
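For the "ConnectionString:Redis" lookup in the code above to resolve, the connection string needs to live under a matching section in the appsettings file. A minimal sketch of the expected shape (the cache host name and access key are placeholders, not real values):

```json
{
  "ConnectionString": {
    "Redis": "<your-cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False"
  }
}
```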
NCache Cache
NCache is another popular open-source caching implementation that is completely developed in .NET. We can configure it either locally or as a distributed cluster.
To configure NCache, we first have to install the NCache NuGet package. Then, we need to create a client configuration file and specify the cache cluster configuration. After that, we can call the AddNCacheDistributedCache() method in the Program class to add NCache support:
builder.Services.AddNCacheDistributedCache(configuration =>
{
    configuration.CacheName = "TestNCache";
    configuration.EnableLogs = true;
    configuration.ExceptionsEnabled = true;
});
This requires the Alachisoft.NCache.Caching.Distributed namespace, and we need to specify a cache name.
We have learned how to enable different distributed caching services. Next, we are going to look at how to implement caching in an ASP.NET Core application.
Implementing Distributed Caching in ASP.NET Core
To implement distributed caching, first, let's create an ASP.NET Core Web API by following the ASP.NET Core Web API with EF Core Code-First Approach article. Once the API is ready, let's implement the cache.
The built-in methods of the IDistributedCache interface work with byte arrays. For convenience, let's first add an extension class, DistributedCacheExtensions, with some generic methods that we can reuse:
using System.Text;
using System.Text.Json;
using System.Text.Json.Serialization;
using Microsoft.Extensions.Caching.Distributed;

public static class DistributedCacheExtensions
{
    public static Task SetAsync<T>(this IDistributedCache cache, string key, T value)
    {
        return SetAsync(cache, key, value, new DistributedCacheEntryOptions());
    }

    public static Task SetAsync<T>(this IDistributedCache cache, string key, T value,
        DistributedCacheEntryOptions options)
    {
        var bytes = Encoding.UTF8.GetBytes(JsonSerializer.Serialize(value, GetJsonSerializerOptions()));

        return cache.SetAsync(key, bytes, options);
    }

    public static bool TryGetValue<T>(this IDistributedCache cache, string key, out T? value)
    {
        var val = cache.Get(key);

        value = default;

        if (val == null) return false;

        value = JsonSerializer.Deserialize<T>(val, GetJsonSerializerOptions());

        return true;
    }

    private static JsonSerializerOptions GetJsonSerializerOptions()
    {
        return new JsonSerializerOptions()
        {
            PropertyNamingPolicy = null,
            WriteIndented = true,
            AllowTrailingCommas = true,
            DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
        };
    }
}
The first SetAsync() method is just a wrapper without the options parameter, while the second overload serializes the passed object to a byte array and passes it to the SetAsync() method of the IDistributedCache interface.
Similarly, the generic TryGetValue() method gets the value from the cache as a byte array and deserializes it into the respective type.
GetJsonSerializerOptions() is a helper method for initializing the JSON serializer options.
Very Important Note
For the sake of the example, we create a new GetJsonSerializerOptions() method to return the options for our JSON serializer. But in large-scale applications with many calls that must be cached, it is better to create a static field:
private static readonly JsonSerializerOptions _serializerOptions = new JsonSerializerOptions
{
    PropertyNamingPolicy = null,
    WriteIndented = true,
    AllowTrailingCommas = true,
    DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull
};
And then just reuse it with each call. This improves both the speed and the memory allocation profile of large applications. You can read more about this in Matt's comment.
Let’s Continue
Next, we are going to modify the controller to add the caching support:
private const string employeeListCacheKey = "employeeList";
private readonly IDataRepository<Employee> _dataRepository;
private readonly IDistributedCache _cache;
private readonly ILogger<EmployeeController> _logger;
private static readonly SemaphoreSlim semaphore = new(1, 1);

public EmployeeController(IDataRepository<Employee> dataRepository,
    IDistributedCache cache,
    ILogger<EmployeeController> logger)
{
    _dataRepository = dataRepository ?? throw new ArgumentNullException(nameof(dataRepository));
    _cache = cache ?? throw new ArgumentNullException(nameof(cache));
    _logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
First, we inject IDistributedCache and ILogger into the controller. Along with that, we introduce a SemaphoreSlim object for handling concurrency.
Then, let’s add the Get action:
[HttpGet]
public async Task<IActionResult> GetAsync()
{
    _logger.Log(LogLevel.Information, "Trying to fetch the list of employees from cache.");

    if (_cache.TryGetValue(employeeListCacheKey, out IEnumerable<Employee>? employees))
    {
        _logger.Log(LogLevel.Information, "Employee list found in cache.");
    }
    else
    {
        try
        {
            await semaphore.WaitAsync();

            if (_cache.TryGetValue(employeeListCacheKey, out employees))
            {
                _logger.Log(LogLevel.Information, "Employee list found in cache.");
            }
            else
            {
                _logger.Log(LogLevel.Information, "Employee list not found in cache. Fetching from database.");

                employees = _dataRepository.GetAll();

                var cacheEntryOptions = new DistributedCacheEntryOptions()
                    .SetSlidingExpiration(TimeSpan.FromSeconds(60))
                    .SetAbsoluteExpiration(TimeSpan.FromSeconds(3600));

                await _cache.SetAsync(employeeListCacheKey, employees, cacheEntryOptions);
            }
        }
        finally
        {
            semaphore.Release();
        }
    }

    return Ok(employees);
}
In the GetAsync() method, we check for the employee data in the cache. If it is not available, we retrieve the data from the database and set it in the cache. To keep this thread-safe, we proceed only once the thread enters the semaphore, and we release the semaphore in the finally block before exiting the method. Note that we have also set sliding and absolute expiration values, which control when the item expires from the cache.
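To see the two expiration modes in isolation, here is a small sketch using the in-memory MemoryDistributedCache directly (the "demo" key, the short sliding window, and the delay are illustrative values chosen so the effect is observable quickly):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Options;

IDistributedCache cache = new MemoryDistributedCache(
    Options.Create(new MemoryDistributedCacheOptions()));

// Sliding expiration resets on every read; absolute expiration is a
// hard upper bound regardless of how often the entry is accessed.
var options = new DistributedCacheEntryOptions()
    .SetSlidingExpiration(TimeSpan.FromMilliseconds(100))
    .SetAbsoluteExpiration(TimeSpan.FromSeconds(30));

await cache.SetStringAsync("demo", "cached value", options);

var before = await cache.GetStringAsync("demo");  // entry still present

// After the sliding window passes with no reads, the entry is evicted.
await Task.Delay(400);
var after = await cache.GetStringAsync("demo");   // null
```

In the controller above, the 60-second sliding window keeps frequently read data cached, while the 3600-second absolute cap guarantees the list is refreshed from the database at least once an hour.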
Lastly, let’s add the POST action:
[HttpPost]
public IActionResult Post([FromBody] Employee employee)
{
    if (employee == null)
    {
        return BadRequest("Employee is null.");
    }

    _dataRepository.Add(employee);

    _cache.Remove(employeeListCacheKey);

    return new ObjectResult(employee) { StatusCode = (int)HttpStatusCode.Created };
}
Here, we remove the cache entry along with inserting an employee record.
Testing the Distributed Caching
During the development and testing phases, we can configure the Distributed Memory Caching to verify that our caching implementation works as expected. We have explained how to configure distributed memory caching in the Distributed Memory Cache section while discussing the different distributed caching services in ASP.NET Core. Let’s follow the same steps to configure it.
After making the changes, let's run the application and navigate to /api/employee:
info: Trying to fetch the list of employees from cache.
info: Employee list not found in cache. Fetching from database.
The first call will take a few seconds as the data is not yet available in the cache:
Status 200 OK Time: 4.89 s Size: 591 B
But the subsequent requests will fetch the data from the cache:
info: Trying to fetch the list of employees from cache.
info: Employee list found in cache.
And we can observe that we get the response very quickly as well:
Status 200 OK Time: 31 ms Size: 591 B
Now we’re going to see how to change the implementation and integrate the Redis service.
Implementing Redis
Redis is a great choice for maintaining cache in production environments. Let’s create a Redis cache and integrate it into our application.
Azure Cache for Redis is a scalable Redis service offering high-speed data access and we are going to use that in this example. For creating a Redis service, we can navigate to the Azure Portal and create a new Azure Cache for Redis instance. While creating the service, let’s leave the default options for most of the fields, but provide a unique DNS name for the cache. Also, since the Redis cache is generally expensive due to the high-end and expensive hardware beneath it, let’s choose the most basic plan for this example.
Once we create the service, we need to copy the connection string from the Access Keys menu and paste it into the ConnectionString section of our appsettings file. After that, we can configure the Redis cache in the Program class as described in the earlier Redis Cache section.
Testing the Redis Implementation
Let’s test the application once again. We can observe that once the data is available in the cache, we get the response very quickly. However, the time required to access data from Redis will be slightly higher than the time required to get data from the distributed memory cache as we still need to make the call to an external service.
An interesting behavior that we can notice here is that the application is able to fetch data from the cache even after it restarts. This is because the cache, being an external service, is unaffected by changes to the application server. Moreover, Redis offers higher-performance tiers that are more expensive but provide very quick data access, so we have the flexibility to upgrade later as our application's requirements grow.
Pros and Cons of Distributed Caching
Now let’s discuss a few pros and cons of distributed caching:
Pros:
- A distributed cache is simpler to maintain, especially when we have multiple app servers
- Distributed caching services are generally very reliable and high-performing, as they do not depend on the app server's resources
- With distributed caching, there is no risk of data loss even when the app server is restarted
Cons:
- Accessing a distributed cache service will not be as fast as accessing an in-memory cache
- Distributed caching services are expensive as they require high-end hardware
- With distributed caching, there is a risk of a single point of failure. To overcome that, we need to set up a cache cluster, which is more complex to implement and makes the solution even more expensive
Guidelines on Using Distributed Caching
Most kinds of caching services depend on in-memory storage to retrieve data quickly. Since memory is a limited and expensive resource, we should cache only the data that is absolutely required. Moreover, when our app is hosted on multiple servers or in the cloud, a distributed cache is the way to go.
While implementing a distributed cache, we can make use of the Distributed Memory Cache during the development and testing phases. We can always change this to one of the true distributed caching service implementations later.
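One way to make that switch painless is to choose the implementation at registration time. A sketch (the environment check and the configuration key names are assumptions for illustration, matching the registrations shown earlier in the article):

```csharp
// In Program.cs: use the in-memory implementation during development,
// and Redis everywhere else. Controllers depend only on
// IDistributedCache, so they are unaffected by the switch.
if (builder.Environment.IsDevelopment())
{
    builder.Services.AddDistributedMemoryCache();
}
else
{
    builder.Services.AddStackExchangeRedisCache(options =>
    {
        options.Configuration = builder.Configuration["ConnectionString:Redis"];
        options.InstanceName = "SampleInstance";
    });
}
```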
While deciding on the right distributed caching service for our application, we need to consider factors like the existing infrastructure, performance requirements, and cost. Generally speaking, a Redis cache provides higher performance and quicker response times than a SQL Server cache, but it is also more expensive. So we have to do a cost vs. performance analysis before deciding.
Similarly, if we are using the SQL Server distributed cache, we should use separate databases for the cache and the app's data storage; otherwise, each would impact the other's performance. We highly recommend using a dedicated SQL Server instance for maintaining the distributed cache.
Conclusion
In this article, we have looked at how distributed caching works and some of the different distributed caching services available for ASP.NET Core. Additionally, we learned how to integrate an ASP.NET Core application with Redis Cache. Finally, we looked at a few pros and cons of distributed caching and some guidelines on using those as well.