In this article, we are going to talk about caching basics and how to implement In-Memory Caching in ASP.NET Core applications.
Let’s start.
What is Caching?
Caching is the technique of storing frequently accessed data in a temporary location for quicker access in the future. It can significantly improve the performance of an application by reducing the time spent repeatedly connecting to the data source and sending data across the network. Caching works best with data that doesn’t change frequently but takes time to populate. Once cached, we can fetch this data very quickly. That said, we should never blindly depend on cached data, and there should always be a fall-back mechanism. Moreover, we should periodically refresh the data stored in the cache so that it doesn’t become stale.
What is In-Memory Caching in ASP.NET Core?
ASP.NET Core supports two types of caching out of the box:
- In-Memory Caching – This stores data on the application server memory.
- Distributed Caching – This stores data on an external service that multiple application servers can share.
In-Memory Caching in ASP.NET Core is the simplest form of caching, in which the application stores data in the memory of the web server. It is based on the IMemoryCache interface, which represents a cache object stored in the application’s memory. Since the application maintains the in-memory cache in server memory, if we want to run the app on multiple servers, we should ensure sessions are sticky. A sticky session is a mechanism in which we make all requests from a client go to the same server.
Implementing an In-Memory Cache
Now let’s see how we can implement In-Memory caching in an ASP.NET Core application. Let’s start by creating an ASP.NET Core Web API following the EF Core code-first approach.
Once the API is ready, we are going to modify the employee listing endpoint and add caching support to it:
[Route("api/[controller]")] [ApiController] public class EmployeeController : ControllerBase { private const string employeeListCacheKey = "employeeList"; private readonly IDataRepository<Employee> _dataRepository; private IMemoryCache _cache; private ILogger<EmployeeController> _logger; public EmployeeController(IDataRepository<Employee> dataRepository, IMemoryCache cache, ILogger<EmployeeController> logger) { _dataRepository = dataRepository ?? throw new ArgumentNullException(nameof(dataRepository)); _cache = cache ?? throw new ArgumentNullException(nameof(cache)); _logger = logger ?? throw new ArgumentNullException(nameof(logger)); } [HttpGet] public async Task<IActionResult> GetAsync() { _logger.Log(LogLevel.Information, "Trying to fetch the list of employees from cache."); if (_cache.TryGetValue(employeeListCacheKey, out IEnumerable<Employee> employees)) { _logger.Log(LogLevel.Information, "Employee list found in cache."); } else { _logger.Log(LogLevel.Information, "Employee list not found in cache. Fetching from database."); employees = _dataRepository.GetAll(); var cacheEntryOptions = new MemoryCacheEntryOptions() .SetSlidingExpiration(TimeSpan.FromSeconds(60)) .SetAbsoluteExpiration(TimeSpan.FromSeconds(3600)) .SetPriority(CacheItemPriority.Normal) .SetSize(1024); _cache.Set(employeeListCacheKey, employees, cacheEntryOptions); } return Ok(employees); } }
First, we inject the IMemoryCache and ILogger into the EmployeeController. Then, in the listing action method, we check if the employeeList data is available in the cache. If the data is present in the cache, we retrieve it from there. If not, we fetch the data from the database and, at the same time, populate the cache. Additionally, we set a sliding expiration of 1 minute and an absolute expiration of 1 hour using the MemoryCacheEntryOptions. We’ll learn more about MemoryCacheEntryOptions in the next section.
For most types of apps, IMemoryCache is enabled out of the box. For example, calling AddMvc(), AddControllersWithViews(), AddRazorPages(), AddMvcCore().AddRazorViewEngine(), etc. enables the IMemoryCache. However, for apps that don’t call any of these methods, we may need to call AddMemoryCache() in the Program class. Of course, if we’re using an older version of .NET that comes with the Startup class, we need to call AddMemoryCache() in the Startup class instead.
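For reference, here is a minimal sketch of what that registration might look like in the Program class of a Web API project (the surrounding registrations are illustrative):

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
// Registers IMemoryCache in the DI container so it can be injected into controllers
builder.Services.AddMemoryCache();

var app = builder.Build();
app.MapControllers();
app.Run();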
Configuring the Cache Options
We can configure the behavior of in-memory caching using the MemoryCacheEntryOptions object. MemoryCacheEntryOptions exposes several methods to set different cache properties:
var cacheEntryOptions = new MemoryCacheEntryOptions()
    .SetSlidingExpiration(TimeSpan.FromSeconds(60))
    .SetAbsoluteExpiration(TimeSpan.FromSeconds(3600))
    .SetPriority(CacheItemPriority.Normal);
- SlidingExpiration – This determines how long a cache entry can remain inactive before it is removed from the cache. It is a good practice to set this to a lower value, like 1 minute or so. We can use the SetSlidingExpiration() method to set this value.
- AbsoluteExpiration – The problem with sliding expiration is that if we keep accessing the cache entry, it will never expire. Absolute expiration solves this by making sure that the cache entry expires at an absolute time, irrespective of whether it is still active or not. It is a good practice to set this to a higher value, like 1 hour or so. We can use the SetAbsoluteExpiration() method to set this value. A good caching strategy is to use a combination of sliding and absolute expiration.
- Priority – This sets the priority of the cached object. By default, the priority is Normal, but we can set it to Low, High, NeverRemove, etc., depending on what priority we need to assign to the cache entry. We can use the SetPriority() method to set this value. As the server tries to free up memory, the priority we set for a cache item determines whether it will be removed from the cache.
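Beyond these three, MemoryCacheEntryOptions also lets us register a post-eviction callback, which is handy for observing why an entry left the cache (expiration, manual removal, memory pressure, and so on). A minimal sketch:

var cacheEntryOptions = new MemoryCacheEntryOptions()
    .SetSlidingExpiration(TimeSpan.FromSeconds(60))
    .SetPriority(CacheItemPriority.High)
    .RegisterPostEvictionCallback((key, value, reason, state) =>
    {
        // The reason tells us whether the entry expired, was removed manually, etc.
        Console.WriteLine($"Cache entry '{key}' was evicted. Reason: {reason}.");
    });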
Setting a Size Limit on Memory Cache
While using a MemoryCache instance, there is an option to specify a size limit. The cache size limit does not have a defined unit of measure; instead, it caps the total of the sizes we assign to the individual cache entries. Even though specifying a size limit for the MemoryCache is completely optional, once we set a size limit for the cache, we should specify a size for all cache entries. Similarly, if no cache size limit is set, the sizes set on individual cache entries are ignored.
For setting a size limit on the cache, we need to create a custom MemoryCache instance:
var cache = new MemoryCache(new MemoryCacheOptions
{
    SizeLimit = 1024,
});
In this example, we create a custom MemoryCache instance with a size limit of 1024. Now, while creating individual cache entries, it is mandatory to specify a size, or else the cache will throw an exception:
var options = new MemoryCacheEntryOptions().SetSize(2);
cache.Set("myKey1", "123", options);
cache.Set("myKey2", "456", options);
We can create cache entries of different sizes, but once the sum of all entry sizes reaches the SizeLimit, the cache cannot accept any more entries. For instance, in this example, we could create 1024 entries of size 1, 512 entries of size 2, 256 entries of size 4, and so on. The idea is that we can assign varying sizes to different cache entries depending on the application’s requirements.
An interesting thing to note here is that once the cache reaches its size limit, it does not remove the oldest entry to make room for new ones. Instead, it simply ignores the new entries, and the cache insert operation does not throw an error either. So we should take extra care while designing a cache with a size limit, or else cache-related issues won’t be easy to troubleshoot later.
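When working directly with the concrete MemoryCache class, as in the example above, one way to make room again is its Compact() method, which evicts a given percentage of the cache. A small sketch:

// Attempt to evict 25% of the cache's current size,
// starting with expired and lowest-priority entries
cache.Compact(0.25);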
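Also note that when IMemoryCache is resolved through dependency injection, as in our controller, we don’t have to construct MemoryCache ourselves; the same limit can be applied when registering the service in the Program class:

builder.Services.AddMemoryCache(options =>
{
    // Applies to the shared IMemoryCache instance injected across the app
    options.SizeLimit = 1024;
});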
Testing the In-Memory Caching
Now it’s time to test our application and see In-Memory caching in action. Let’s run the application and navigate to the /api/Employee endpoint. When we access it for the first time, it may take a few seconds to pull the records from the database, depending on where we host the database, how much data the result contains, the network speed, etc.:
Status: 200 OK
Time: 3.67 s
Size: 451 B
From the logs, it is evident that the application connects with the database and fetches the data:
info: InMemoryCacheExample.Controllers.EmployeeController[0]
      Trying to fetch the list of employees from cache.
info: InMemoryCacheExample.Controllers.EmployeeController[0]
      Employee list not found in cache. Fetching from database.
info: Microsoft.EntityFrameworkCore.Database.Command[20101]
      Executed DbCommand (355ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
      SELECT [e].[EmployeeId], [e].[DateOfBirth], [e].[Email], [e].[FirstName], [e].[LastName], [e].[PhoneNumber]
      FROM [Employees] AS [e]
Remember that, while doing so, the application puts the result into the cache as well. To verify that, let’s execute the same endpoint once again:
Status: 200 OK
Time: 22 ms
Size: 451 B
This time we can see that we get results very fast. On examining the logs, we can verify that this time it pulls the employee list from the cache:
info: InMemoryCacheExample.Controllers.EmployeeController[0]
      Trying to fetch the list of employees from cache.
info: InMemoryCacheExample.Controllers.EmployeeController[0]
      Employee list found in cache.
In this example, by implementing an In-Memory Cache, we get results more than 150 times faster. That’s a great performance boost!
However, in a real-world project, the gain will depend on many external factors, such as where we host the database, how fast the network is, and how big the data is, so there could be variations. In any case, there is no doubt that we can improve application performance by a great deal by caching this type of frequently accessed data in memory.
Removing Data From In-Memory Cache
The .NET Core runtime will remove the In-Memory cache items automatically in certain scenarios:
- When the application server is running short of memory, the .NET Core runtime will initiate the clean-up of In-Memory cache items other than the ones set with NeverRemove priority.
- Once we set a sliding expiration, entries that remain inactive for that period will expire. Similarly, once we set an absolute expiration, all entries will expire by that time at the latest.
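For completeness, SetAbsoluteExpiration() has two overloads, so we can express the absolute expiration either relative to now or as a fixed point in time; a quick illustration:

// Expires 1 hour after the entry is created
var relativeOptions = new MemoryCacheEntryOptions()
    .SetAbsoluteExpiration(TimeSpan.FromHours(1));

// Expires at a specific moment, e.g. midnight UTC tomorrow
var fixedOptions = new MemoryCacheEntryOptions()
    .SetAbsoluteExpiration(DateTimeOffset.UtcNow.Date.AddDays(1));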
Apart from this, there is an option to manually remove an item from the In-Memory cache if we want to. For instance, in our example, we might want to manually invalidate the cache when a new employee record is inserted into the database. We can use the Remove() method of the IMemoryCache to do that in the POST method:
[HttpPost]
public IActionResult Post([FromBody] Employee employee)
{
    if (employee == null)
    {
        return BadRequest("Employee is null.");
    }

    _dataRepository.Add(employee);
    // Invalidate the cached list so the next GET fetches fresh data
    _cache.Remove(employeeListCacheKey);

    return new ObjectResult(employee) { StatusCode = (int)HttpStatusCode.Created };
}
This will remove the employee list from the cache when a new employee is inserted into the database.
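If we ever need to invalidate several related entries at once rather than a single key, one common pattern (shown here only as a sketch, not part of our sample project) is to attach an expiration token to each entry and cancel it to evict them all together:

// A shared token source; cancelling it evicts every entry linked to it
private static CancellationTokenSource _resetCacheToken = new CancellationTokenSource();

// When adding entries, link them to the token:
var cacheEntryOptions = new MemoryCacheEntryOptions()
    .SetSize(1024)
    .AddExpirationToken(new CancellationChangeToken(_resetCacheToken.Token));
_cache.Set(employeeListCacheKey, employees, cacheEntryOptions);

// To invalidate all linked entries at once:
_resetCacheToken.Cancel();
_resetCacheToken = new CancellationTokenSource();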
Managing Concurrent Access of In-Memory Cache
Now let’s assume that multiple users try to access the data in the In-Memory cache at the same time. Even though the IMemoryCache is thread-safe, it is prone to race conditions. For instance, if the cache is empty and two users try to access the data at the same time, there is a chance that both of them fetch the data from the database and populate the cache. This is not desirable. To solve these kinds of issues, we need to implement a locking mechanism for the cache.
For implementing locking for the cache, we can use the SemaphoreSlim class, which is a lightweight version of the Semaphore class. It helps us control the number of threads that can access a resource concurrently. Let’s declare a SemaphoreSlim object in the controller to implement locking of the cache:
private static readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);
Now let’s modify the listing endpoint to implement locking of cache:
[HttpGet]
public async Task<IActionResult> GetAsync()
{
    _logger.Log(LogLevel.Information, "Trying to fetch the list of employees from cache.");
    if (_cache.TryGetValue(employeeListCacheKey, out IEnumerable<Employee> employees))
    {
        _logger.Log(LogLevel.Information, "Employee list found in cache.");
    }
    else
    {
        try
        {
            await semaphore.WaitAsync();
            // Re-check: another thread may have populated the cache while we waited
            if (_cache.TryGetValue(employeeListCacheKey, out employees))
            {
                _logger.Log(LogLevel.Information, "Employee list found in cache.");
            }
            else
            {
                _logger.Log(LogLevel.Information, "Employee list not found in cache. Fetching from database.");
                employees = _dataRepository.GetAll();

                var cacheEntryOptions = new MemoryCacheEntryOptions()
                    .SetSlidingExpiration(TimeSpan.FromSeconds(60))
                    .SetAbsoluteExpiration(TimeSpan.FromSeconds(3600))
                    .SetPriority(CacheItemPriority.Normal)
                    .SetSize(1024);

                _cache.Set(employeeListCacheKey, employees, cacheEntryOptions);
            }
        }
        finally
        {
            semaphore.Release();
        }
    }

    return Ok(employees);
}
Here, if the entry is not found in the cache, the thread waits until it can enter the semaphore. Once inside, it checks whether another thread has populated the cache entry in the meantime. If the entry is still not available, we proceed to fetch the data from the database and populate the cache. Finally, it is very important to release the semaphore so that other threads can continue.
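As an aside, IMemoryCache also ships with GetOrCreate()/GetOrCreateAsync() extension methods that collapse the lookup and population into a single call. Note, however, that they don’t synchronize concurrent callers, so under a race the factory delegate may still run more than once; that is exactly what the semaphore above prevents. A sketch of the convenience form:

var employees = _cache.GetOrCreate(employeeListCacheKey, entry =>
{
    // The factory runs only when the key is missing; configure the new entry here
    entry.SlidingExpiration = TimeSpan.FromSeconds(60);
    entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(3600);
    entry.SetSize(1024);
    return _dataRepository.GetAll();
});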
Pros & Cons of In-Memory Caching
We have seen how In-Memory caching improves the performance of data access. However, it has some limitations as well that we need to be aware of. Let’s take a look at some of the pros and cons of In-Memory caching.
Pros:
- Faster Data Access – When we’re accessing data from the cache, it will be very fast as no additional network communication is involved outside of the application.
- Highly Reliable – An in-memory cache is considered highly reliable as it resides within the app server’s memory. The cache will work fine as long as the application is running.
- Easy to Implement – We can implement an in-memory cache in a few simple steps, without any additional infrastructure or third-party components, which makes it a good option for small to mid-scale apps.
Cons:
- Sticky Session Overhead – For large-scale apps running on multiple application servers, there will be an overhead to maintain sticky sessions.
- Server Resource Consumption – If not properly configured, it may consume a lot of the app server’s resources, especially memory.
In-Memory Caching Guidelines
Here are some important guidelines to consider while implementing In-Memory caching:
We should write and test our apps in such a way that they never solely depend on cached data. There should always be a fall-back mechanism to fetch the data from the actual data source in case a cached item is unavailable or expired.
It is recommended to always restrict the size of the cache to limit the consumption of server memory. The ASP.NET Core runtime does not limit the cache size based on memory pressure, so it is up to us as developers to set a limit on the cache size.
We should use expiration to limit cache duration. It is always a good practice to design a caching strategy by using a combination of absolute expiration and sliding expiration depending on the application context.
Conclusion
In this article, we’ve learned the basics of caching and how In-Memory Caching works in ASP.NET Core. We’ve also looked at how to implement In-Memory Caching in an ASP.NET Core application, its pros and cons, and guidelines for using it.