In this article, we are going to discuss the use of a Content Security Policy (commonly referred to as “CSP”).

To download the source code for this article, you can visit our GitHub repository.

While we can apply a CSP to any web application running in a browser, in this article we will specifically focus on ASP.NET Core.

Why Do We Need a Content Security Policy?

The Content Security Policy helps tackle common browser exploits, such as XSS (cross-site scripting) and Clickjacking. While there are various methods of addressing each of these problems individually, the Content Security Policy gives the browser clear instructions on how to handle them, using a set of common standards and directives.


Let’s note that while there are techniques we apply on the server/API side (such as CSRF tokens), the Content Security Policy is all about the browser, so it only applies to web applications, not API applications. A good security policy, however, includes security at all levels.

Before we dive into how the Content Security Policy works, let’s look at some common security violations that can occur. First, let’s set up a standard new ASP.NET Core Web Application. We can use it to demonstrate different violations, as well as to apply the Content Security Policy to mitigate these attacks.

Cross-Site Scripting (XSS)

Cross-Site Scripting attacks are code injection attacks where a malicious actor executes code (usually JavaScript) in the user’s browser, with the predominant motivation of stealing the user’s data stored in the browser, such as cookies or session tokens. Usually, these attacks surface via an HTML form field, where a bad actor enters malicious JavaScript that the browser then executes.

On the Index.cshtml page, let’s add the following code to the end of the <div> tag:

User ID: @Html.Raw(Request.Query["userId"])

In this code, we simply take the input from the query string and render it on the UI. Seems simple, right? If we run the app with the URL https://localhost:7197/?UserId=123, our app correctly displays “User ID: 123” on the page. This would be a common scenario if we had a link on our site, or maybe in an email.

But what happens if we open the URL:

https://localhost:7197/?UserId=%3Cscript%20type=%22text/javascript%22%3E%20var%20adr%20=%20alert(escape(document.cookie));%20%3C/script%3E

Uh oh. The browser has just alerted our cookies. Imagine a malicious actor sent an email to one of our users pretending to be our site and included the same link, except that instead of alerting the cookies, the script POSTed them to the attacker’s own server. The user’s credentials in the cookie would be submitted to the attacker, who could then use a browser to log in as that user and perform actions on their behalf.

Shortly, we will look at how a Content Security Policy can help block this attack. For now, let’s move to another type of attack, Clickjacking.

Clickjacking

Clickjacking (also known as UI Redressing) is a browser attack in which a bad actor overlays malicious content on top of legitimate content and tricks the user into clicking it.

To demonstrate this, let’s add a new standard HTML page and modify it as follows:

<html>
<head>
    <title>Clickjack testing</title>
</head>
<body>
    <p>Oh no! Our website is vulnerable to clickjacking :(</p>
    <iframe src="https://localhost:7197/" width="500" height="500"></iframe>
</body>
</html>

If we open it in the browser, we can see our test application embedded in the page. Why is this a bad thing? Consider if a bad actor enhanced this sample HTML page, made it look exactly like our website, overlaid an invisible button on top of a button on our site, and then emailed the link to our user. The user might think they are visiting our site, but in fact, it’s a malicious one. When the user clicks a button thinking it’s ours, they actually click the bad actor’s button. The click is “jacked”, and the bad actor gains control over the user’s behavior (for example, stealing data again).

Now that we’ve demonstrated two commonly used browser attacks, let’s jump into Content Security Policy, and mitigate against these attacks.

What is the Content Security Policy?

The Content Security Policy is a technique that instructs browsers how to treat resources like scripts, images, and other content. Our current application does not have a Content Security Policy in place, and we have just demonstrated how attacks can exploit this. CSP is supported in all major browsers, but as with any newer web feature, it takes time for the browsers to catch up. However, all the techniques we will use in this article are compatible with most major browsers.

Setting the Content Security Policy Header

The simplest way to set up a Content Security Policy is through a header sent by the web server.

Let’s open the Index.cshtml.cs code-behind file, and update the OnGet() method:

public void OnGet()
{
    Response.Headers.Add("Content-Security-Policy", "default-src 'self';");
}

This is the basic setup of a CSP. The header must be named Content-Security-Policy, and the value of the header is a string containing the policy. A Content Security Policy should contain a default-src directive, which gives the browser a fallback for how all sources should be treated in the absence of more specific directives for scripts, styles, and images, as we’ll discuss shortly. The value of 'self' instructs the browser that only sources from the current site’s origin (excluding subdomains) can be executed.
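
Setting the header in OnGet() only covers this one page. If we want every response to carry the header, one option (a minimal sketch, not part of the article’s sample project) is a small piece of inline middleware registered in Program.cs:

app.Use(async (context, next) =>
{
    // Append the CSP header to every outgoing response (sketch; adjust the policy as needed)
    context.Response.Headers.Append("Content-Security-Policy", "default-src 'self';");

    await next();
});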

Let’s run our application again, and open the same XSS URL as before:

https://localhost:7197/?UserId=%3Cscript%20type=%22text/javascript%22%3E%20var%20adr%20=%20alert(escape(document.cookie));%20%3C/script%3E

This time, there is no JavaScript alert. If we open the browser console, we notice the error:

Refused to execute inline script because it violates the following Content Security Policy directive: "default-src 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-AMGj/XnLVSuXPUHA+OvmCNh1kKvSgPfWsh09/T79YAo='), or a nonce ('nonce-...') is required to enable inline execution. Note also that 'script-src' was not explicitly set, so 'default-src' is used as a fallback.

This is exactly what we want. The browser did not execute the malicious inline script, because it did not come from our origin. In one fell swoop, we have hardened our website and prevented this XSS exploit from being possible.

It’s worth noting another way to set up a CSP header is via the <meta> element:

<meta http-equiv="Content-Security-Policy" content="default-src 'self';" />

However, with this technique, we don’t get access to certain features (such as reporting, which we’ll discuss later in the article), so where possible it’s best to use the HTTP header.

Let’s move on to some more configurations of the Content Security Policy header.

Configuration of the Content Security Policy

We currently have the most basic CSP setup, only allowing sources from our origin. However, a common setup for most web applications is to host static content on an external CDN. Our current CSP won’t allow this behavior, but we can make a quick adjustment to account for it:

default-src 'self' cdn.ourdomain.com;

We now allow sources from our origin, as well as from our CDN. We could even extend this further by using wildcards and setting the value to *.ourdomain.com.
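
For instance, the wildcarded version of the same policy would look like this:

default-src 'self' *.ourdomain.com;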

Now let’s look at how we can get more granular, and adjust our CSP to treat scripts separately.

Configuring Scripts, Styles, and Images in our Content Security Policy

As with most security principles, it’s better to be as specific as possible for the tightest security setup, and then open up as needed. Therefore, instead of allowing all sources from our domain, let’s be more explicit and only allow JavaScript to execute from our scripts domain:

default-src 'self'; script-src 'self' scripts.ourdomain.com;

Opening our XSS exploit URL again, we see the script blocked. That’s because the script is inline, and not from scripts.ourdomain.com.

Similarly, we can specify only styles from a particular domain:

default-src 'self'; script-src 'self' scripts.ourdomain.com; style-src 'self' styles.ourdomain.com

We can do the same for images:

default-src 'self'; script-src 'self' scripts.ourdomain.com; style-src 'self' styles.ourdomain.com; img-src 'self' images.ourdomain.com
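
As the policy grows, the raw header string can become hard to read. Purely as a readability sketch (not something the article requires), we could compose the value from its individual directives in OnGet():

public void OnGet()
{
    // Build the policy from individual directives, then join them into a single header value (sketch)
    var directives = new[]
    {
        "default-src 'self'",
        "script-src 'self' scripts.ourdomain.com",
        "style-src 'self' styles.ourdomain.com",
        "img-src 'self' images.ourdomain.com"
    };

    Response.Headers.Add("Content-Security-Policy", string.Join("; ", directives));
}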

We can prove all this is working, by making some adjustments to Index.cshtml:

@page
@model IndexModel
@{
    ViewData["Title"] = "Home page";
}

<style type="text/css">
 body {
     color: red;
 }
</style>

<script type="text/javascript">
    document.onload = alert('hi!');
</script>

<div class="text-center">
    <h1 class="display-4">Welcome</h1>
    <p>Learn about 
        <a href="https://docs.microsoft.com/aspnet/core">building Web apps with ASP.NET Core</a>.
    </p>
    
    User ID: @Html.Raw(Request.Query["userId"])

    <img src="https://example.com/img.jpg"/>
    
</div>

We’re adding some inline CSS and JavaScript and loading an image from an external URL. If we load the page, we see that none of it takes effect. Notice that the browser console yields a bunch of errors stating the content has been blocked because it doesn’t adhere to our CSP.

If we do want inline sources to load, we can allow this via the 'unsafe-inline' source value:

default-src 'self'; script-src 'self' 'unsafe-inline' scripts.ourdomain.com; style-src 'self' 'unsafe-inline' styles.ourdomain.com; img-src 'self' 'unsafe-inline' images.ourdomain.com;

Loading the page again, we see the popup and the red text (the image still isn’t loading, as it’s not 'self', inline, or from our domain). Using 'unsafe-inline' (as the name implies) isn’t recommended, and it’s best to explicitly set the domains (or 'self') to load content from.
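
A safer alternative to 'unsafe-inline', hinted at in the browser error we saw earlier, is a per-request nonce. As a minimal sketch (not part of the sample project, and the Nonce property name is our own), the page model could generate a nonce and allow only scripts that carry it:

public string Nonce { get; private set; } = string.Empty;

public void OnGet()
{
    // Generate a random per-request nonce and only allow inline scripts that declare it (sketch)
    Nonce = Convert.ToBase64String(System.Security.Cryptography.RandomNumberGenerator.GetBytes(16));

    Response.Headers.Add("Content-Security-Policy",
        $"default-src 'self'; script-src 'self' 'nonce-{Nonce}';");
}

The inline script in Index.cshtml would then declare the same value via a nonce="@Model.Nonce" attribute.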

Now that we have configured how content is being loaded, let’s revisit our clickjacking example in the next section.

Prevent Clickjacking with a Content Security Policy

If we open our clickjacking.html page again, we see the exploit is still there. This is now very simple to solve with CSP, using the frame-ancestors directive:

default-src 'self'; script-src 'self' 'unsafe-inline' scripts.ourdomain.com; style-src 'self' 'unsafe-inline' styles.ourdomain.com; img-src 'self' 'unsafe-inline' images.ourdomain.com; frame-ancestors 'none';

Let’s load the page. The iframe does not render, and an error exists in the console:

Refused to frame 'https://localhost:7197/' because an ancestor violates the following Content Security Policy directive: "frame-ancestors 'none'".

The frame-ancestors directive indicates which parents are able to load our site in an iframe. In this case, we have said 'none', but if we do have a valid use case, we could specify those sources using the same technique we used for scripts and styles. For example, maybe we want to load our page on a partner site via an iframe.
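
In that case, the directive could be relaxed like so (the partner domain below is purely illustrative):

frame-ancestors 'self' partner.example.com;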

In the next section, let’s look at how to test our Content Security Policy in a less intrusive way.

Testing & Reporting our Content Security Policy

When we add a Content Security Policy to an existing site, the site itself may break due to existing problems with the way content loads. This can quickly become overwhelming and make it hard to apply a CSP gradually. Furthermore, we might want to enforce the CSP only in production environments, and not in dev/test.

For this purpose, we can set up the CSP to “report only”, by changing the header:

Response.Headers.Add("Content-Security-Policy-Report-Only", "default-src 'self'; script-src 'self' 'unsafe-inline' scripts.ourdomain.com; style-src 'self' 'unsafe-inline' styles.ourdomain; img-src 'self' 'unsafe-inline' images.ourdomain.com; frame-ancestors 'none';");

When we run our site now, we still see the errors in the console, but they are prefixed with [Report Only]. This is a great approach for an initial setup, as we can discover the issues with our site, fix them, and then enable the CSP in non-report mode once we are happy.
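
To tie this to environments, one option (sketched here with a simplified page model, not part of the sample project) is to inject IWebHostEnvironment and pick the header name based on the current environment:

public class IndexModel : PageModel
{
    private readonly IWebHostEnvironment _environment;

    public IndexModel(IWebHostEnvironment environment)
    {
        _environment = environment;
    }

    public void OnGet()
    {
        // Enforce the policy in production, report-only everywhere else (sketch)
        var headerName = _environment.IsProduction()
            ? "Content-Security-Policy"
            : "Content-Security-Policy-Report-Only";

        Response.Headers.Add(headerName, "default-src 'self';");
    }
}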

We can take this one step further, and actually create an endpoint to store these CSP violations. This is useful for production, where we can turn on the CSP in non-report mode, but capture errors for analysis.

Creating an Endpoint to Capture CSP Violations

To do that, we need to make some slight modifications to our app. First, let’s enable controllers by modifying Program.cs:

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddControllers(options =>
{
    var jsonInputFormatter = options.InputFormatters
        .OfType<Microsoft.AspNetCore.Mvc.Formatters.SystemTextJsonInputFormatter>()
        .Single();
    jsonInputFormatter.SupportedMediaTypes.Add("application/csp-report");
});
builder.Services.AddRazorPages();

var app = builder.Build();

// Configure the HTTP request pipeline.
if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error");
    app.UseHsts();
}

app.UseHttpsRedirection();
app.UseStaticFiles();

app.UseRouting();

app.UseAuthorization();

app.UseEndpoints(endpoints =>
{
    endpoints.MapRazorPages();
    endpoints.MapControllers();
});

app.Run();

First off, we add controllers, and we also extend the built-in System.Text.Json input formatter. This is because the browser sends the CSP report with the special content type application/csp-report, which ASP.NET Core will not accept by default. We then call app.UseEndpoints() to map both Razor Pages and controllers.

Let’s create a folder called Controllers and add a new controller called CspController:

public class CspController : Controller
{
    private readonly ILogger<CspController> _logger;

    public CspController(ILogger<CspController> logger)
    {
        _logger = logger;
    }

    [HttpPost("csp-violations")]
    public IActionResult CSPReport([FromBody] CspViolation cspViolation)
    {
        _logger.LogWarning($"URI: {cspViolation.CspReport.DocumentUri}, Blocked: {cspViolation.CspReport.BlockedUri}");
     
        return Ok();
    }
}

Here, we take the CSP violation from the POST body and log it to the console.
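
The CspViolation class isn’t shown above; the full version is in the source code linked at the start of the article. As a rough sketch, a model matching the standard csp-report JSON payload the browser sends could look like this (the actual classes in the repository may differ):

using System.Text.Json.Serialization;

public class CspViolation
{
    // The browser wraps the report in a "csp-report" property
    [JsonPropertyName("csp-report")]
    public CspReport CspReport { get; set; } = new();
}

public class CspReport
{
    [JsonPropertyName("document-uri")]
    public string DocumentUri { get; set; } = string.Empty;

    [JsonPropertyName("blocked-uri")]
    public string BlockedUri { get; set; } = string.Empty;

    [JsonPropertyName("violated-directive")]
    public string ViolatedDirective { get; set; } = string.Empty;

    [JsonPropertyName("original-policy")]
    public string OriginalPolicy { get; set; } = string.Empty;
}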

Let’s now revert our CSP setup to work on non-report mode again:

Response.Headers.Add("Content-Security-Policy", "default-src 'self'; script-src 'self' 'unsafe-inline' scripts.ourdomain.com; style-src 'self' 'unsafe-inline' styles.ourdomain; img-src 'self' 'unsafe-inline' images.ourdomain.com; frame-ancestors 'none'; report-uri /csp-violations");

If we run our app again and then check the console window, we can see the CSP violations coming through:

warn: ContentSecurityPolicySample.Controllers.CspController[0]
      URI: https://localhost:7197/, Blocked: https://example.com/img.jpg
warn: ContentSecurityPolicySample.Controllers.CspController[0]
      URI: https://localhost:7197/, Blocked: wss://localhost:44395/ContentSecurityPolicySample/

In a real-world scenario, we would log to something more durable, such as Application Insights or a SQL database. However, for demonstration purposes, this is fine.

Conclusion

In this article, we looked at how a Content Security Policy helps mitigate common browser attacks such as XSS and Clickjacking. Security is an ever-growing concern for web developers, so it’s critical that we leverage features like CSP to keep our users’ data safe. Combining client-side security like CSP with server-side security like CSRF tokens, SQL injection prevention, and general server protection (e.g. firewalls) ensures we have a well-rounded security strategy.

Hopefully, you can now go away and enable CSP on your web applications and take another step to improve your security posture.
