Hardware-accelerated SIMD operations are almost a necessity for writing high-performance code today. Prior to .NET Core 2.1, the only way to tap into the power of vectorization in .NET was the System.Numerics package, and the Vector<T> type in particular (see the post SIMD with C# for the best introduction to this topic). Vector<T> provides a great performance boost when applicable, but it offers a fairly limited set of operations. Modern CPUs have hundreds of complex SIMD instructions, while Vector<T> exposes only the most important ones, such as addition, multiplication, and comparison, among several others. That’s why the .NET team decided to implement something much more powerful.
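
Vector<T> addition, for example, looks roughly like this (a minimal sketch of my own; the method name, array-based overloads, and loop structure are illustrative, not taken from the post mentioned above):

// Requires the System.Numerics namespace.
static void Add(float[] left, float[] right, float[] result)
{
  int i = 0;
  // Process Vector<float>.Count elements per iteration (the width depends on the hardware).
  for (; i <= left.Length - Vector<float>.Count; i += Vector<float>.Count)
  {
    var sum = new Vector<float>(left, i) + new Vector<float>(right, i);
    sum.CopyTo(result, i);
  }
  // Scalar tail for the remaining elements.
  for (; i < left.Length; i++)
  {
    result[i] = left[i] + right[i];
  }
}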

.NET Core 2.1 is one of the most exciting .NET releases in recent years, especially for people interested in writing high-performance code. With Span<T>, Memory<T>, and friends, working with any kind of memory is easier, safer, and more efficient than ever before. Even though the addition of the Span<T> and Memory<T> types and APIs is arguably the most important change in .NET Core 2.1, the feature I’m personally most excited about is the support for platform-dependent intrinsics in the CoreCLR, exposed to users through the System.Runtime.Intrinsics namespace in the experimental System.Runtime.Intrinsics.Experimental package.

At the heart of the new library are the Vector64<T>, Vector128<T>, and Vector256<T> types, and the static functions operating on them. Intrinsic functions are grouped into static classes, each of them representing one instruction set. The System.Runtime.Intrinsics.Arm.Arm64 namespace currently defines the Aes, Base, Sha1, Sha256, and Simd classes, while System.Runtime.Intrinsics.X86 defines the Aes, Avx, Avx2, Bmi1, Bmi2, Fma, Lzcnt, Pclmulqdq, Popcnt, Sse, Sse2, Sse3, Sse41, Sse42, and Ssse3 classes. Be warned, though, that even though these instruction sets are part of the public API, not all of the intrinsics are supported by the CoreCLR at the moment: if you are lucky, you will get a PlatformNotSupportedException for an unsupported instruction; otherwise, your program will just crash.
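
Because of that, any code calling these classes should be guarded by the IsSupported check and fall back to a portable path otherwise. A minimal sketch of that pattern (Sha256.IsSupported is real API, shown later in this post; the two helper methods are hypothetical names of mine):

static void HashBlock(uint[] state, byte[] block)
{
  if (Sha256.IsSupported)
  {
    // Fast path: uses the Arm64 SHA-256 intrinsics discussed below.
    HashBlockWithIntrinsics(state, block);
  }
  else
  {
    // Portable path: a plain managed implementation.
    HashBlockManaged(state, block);
  }
}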

The API is still unstable, and the documentation non-existent (the best way to explore the library is to read the CoreCLR and CoreFX source code). Even though the implementation is still in progress, and many intrinsics are not yet supported, much can be accomplished with what is currently available.

Egor Bogatov has already implemented many interesting general-purpose algorithms using SSE/AVX instructions in his Intrinsics Playground repository, so you should definitely check it out. I’m more interested in cryptographic applications of intrinsics, so that’s the topic I’ll be focusing on in this series of posts. Originally, I wanted to talk about AES instructions on x86 first, but the JIT doesn’t support them yet. On the other hand, support for ARM intrinsics is much better at the moment, so in this introductory post I’m going to talk about implementing SHA-256 on ARMv8.

ARMv8 SHA-256 intrinsics

The ARMv8 Cryptography Extension contains instructions that accelerate the execution of the AES, SHA-1, and SHA-256 algorithms. The instructions we are currently interested in are:

  • SHA256H (SHA256 hash update accelerator)
  • SHA256H2 (SHA256 hash update accelerator, upper part)
  • SHA256SU0 (SHA256 schedule update accelerator, first part)
  • SHA256SU1 (SHA256 schedule update accelerator, second part)

Unless you are writing assembly code by hand, these instructions are rarely used directly. The ARM C Language Extensions provide an easier and more portable way of dealing with them in the form of C/C++ data types and functions. The crypto intrinsics for the SHA-256 algorithm are defined like this:

// Performs SHA256 hash update (part 1)
uint32x4_t vsha256hq_u32(uint32x4_t hash_abcd, uint32x4_t hash_efgh, uint32x4_t wk);

// Performs SHA256 hash update (part 2)
uint32x4_t vsha256h2q_u32(uint32x4_t hash_efgh, uint32x4_t hash_abcd, uint32x4_t wk);

// Performs SHA256 schedule update 0
uint32x4_t vsha256su0q_u32(uint32x4_t w0_3, uint32x4_t w4_7);

// Performs SHA256 schedule update 1
uint32x4_t vsha256su1q_u32(uint32x4_t tw0_3, uint32x4_t w8_11, uint32x4_t w12_15);

The C# API directly mirrors the functions listed above. The Arm64 namespace defines the following class:

public static class Sha256
{
  public static bool IsSupported { get; }
  public static Vector128<uint> HashLower(Vector128<uint> hash_abcd, Vector128<uint> hash_efgh, Vector128<uint> wk);
  public static Vector128<uint> HashUpper(Vector128<uint> hash_efgh, Vector128<uint> hash_abcd, Vector128<uint> wk);
  public static Vector128<uint> SchedulePart1(Vector128<uint> w0_3, Vector128<uint> w4_7);
  public static Vector128<uint> SchedulePart2(Vector128<uint> w0_3, Vector128<uint> w8_11, Vector128<uint> w12_15);
}

As you can see, intrinsics in C# are first-class citizens of the language. This is huge, as the only other modern programming language with comparable support for intrinsics is Rust, with its simd module (you can achieve the same goal by using Go’s assembler, which is how its standard library often accelerates critical operations, but writing assembly by hand doesn’t come close to the ease of use of a static type system).

SHA-256 algorithm

Now that we are familiar with the available SHA-256 functions, how do we actually implement the algorithm? SHA-256 hash computation is defined in NIST’s FIPS PUB 180-4, but a more user-friendly definition can be found on Wikipedia. As you can see in the Wikipedia definition, the two main parts of the algorithm are the computation of the message schedule:

for i from 16 to 63
  s0 := (w[i-15] rightrotate 7) xor (w[i-15] rightrotate 18) xor (w[i-15] rightshift 3)
  s1 := (w[i-2] rightrotate 17) xor (w[i-2] rightrotate 19) xor (w[i-2] rightshift 10)
  w[i] := w[i-16] + s0 + w[i-7] + s1

and the main loop of the compression function:

for i from 0 to 63
  S1 := (e rightrotate 6) xor (e rightrotate 11) xor (e rightrotate 25)
  ch := (e and f) xor ((not e) and g)
  temp1 := h + S1 + ch + k[i] + w[i]
  S0 := (a rightrotate 2) xor (a rightrotate 13) xor (a rightrotate 22)
  maj := (a and b) xor (a and c) xor (b and c)
  temp2 := S0 + maj
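
For reference, a direct scalar transcription of those two loops into C# looks roughly like this (my own sketch; it also includes the per-round state rotation and the final feedback that the excerpt above elides, k is the standard array of 64 round constants, and w already holds the 16 message words of the block in its first 16 slots):

static void CompressBlock(uint[] state, uint[] w, uint[] k)
{
  // Message schedule expansion.
  for (int i = 16; i < 64; i++)
  {
    uint s0 = RotateRight(w[i - 15], 7) ^ RotateRight(w[i - 15], 18) ^ (w[i - 15] >> 3);
    uint s1 = RotateRight(w[i - 2], 17) ^ RotateRight(w[i - 2], 19) ^ (w[i - 2] >> 10);
    w[i] = w[i - 16] + s0 + w[i - 7] + s1;
  }

  uint a = state[0], b = state[1], c = state[2], d = state[3];
  uint e = state[4], f = state[5], g = state[6], h = state[7];

  // Main loop of the compression function.
  for (int i = 0; i < 64; i++)
  {
    uint S1 = RotateRight(e, 6) ^ RotateRight(e, 11) ^ RotateRight(e, 25);
    uint ch = (e & f) ^ (~e & g);
    uint temp1 = h + S1 + ch + k[i] + w[i];
    uint S0 = RotateRight(a, 2) ^ RotateRight(a, 13) ^ RotateRight(a, 22);
    uint maj = (a & b) ^ (a & c) ^ (b & c);
    uint temp2 = S0 + maj;

    h = g; g = f; f = e; e = d + temp1;
    d = c; c = b; b = a; a = temp1 + temp2;
  }

  // Add the compressed chunk back into the hash state.
  state[0] += a; state[1] += b; state[2] += c; state[3] += d;
  state[4] += e; state[5] += f; state[6] += g; state[7] += h;
}

static uint RotateRight(uint x, int n) => (x >> n) | (x << (32 - n));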

The SchedulePart1 and SchedulePart2 functions are designed to accelerate the first loop; HashLower and HashUpper accelerate the second. However, it’s far from obvious how to use them in practice. The ARM documentation is useless here, as it only lists the instructions/intrinsics but doesn’t show any usage examples (Intel is far superior in this respect). Fortunately, somebody else figured this stuff out before me. The SHA-Intrinsics GitHub repository contains source code for all compression functions from the SHA family, using both Intel and ARMv8 SHA intrinsics (my code is basically a C# port of the C code in sha256-arm.c).

C# implementation

The SHA-256 algorithm splits the message into 512-bit blocks, and then calls the compression function on each block. The compression function processes the block in 64 rounds, but intrinsic functions improve on that by working four rounds at a time in the following way:

/* Rounds 0-3 */
MSG0 = vsha256su0q_u32(MSG0, MSG1);
TMP2 = STATE0;
TMP1 = vaddq_u32(MSG1, vld1q_u32(&K[0x04]));
STATE0 = vsha256hq_u32(STATE0, STATE1, TMP0);
STATE1 = vsha256h2q_u32(STATE1, TMP2, TMP0);
MSG0 = vsha256su1q_u32(MSG0, MSG2, MSG3);

We are already familiar with vsha256su0q_u32, vsha256su1q_u32, vsha256hq_u32, and vsha256h2q_u32, but what about vaddq_u32 and vld1q_u32? You can probably infer from their names and usage that the first one adds two 128-bit vectors, and the second one loads a 128-bit vector from memory into a register. How do we achieve that in C#? The Arm64 namespace defines the Simd class, which contains more than one hundred functions. Among them is this one:

public static class Simd
{
  public static Vector128<T> Add<T>(Vector128<T> left, Vector128<T> right);
}

It looks like we now know how to add two vectors, but how do we load one? The Sse2 class in the X86 namespace contains a bunch of LoadVector128 overloads, but there are no ARM equivalents. Could it be that ARM intrinsics are completely useless in the current state of the library? Fortunately, that’s not the case.

The Unsafe class from the System.Runtime.CompilerServices.Unsafe package contains low-level functions for direct manipulation of pointers and references. One of the functions in this class is particularly interesting:

public static T ReadUnaligned<T>(ref byte source);

As you can probably guess, this function reads a value of type T from any given memory location. You can use it to read integers from a byte array, for example, but is the .NET Core JIT smart enough to replace the ReadUnaligned<Vector128<uint>> call with the correct load instructions? To my surprise, the answer is yes! That means the following line of code works like a charm:

var temp = Unsafe.ReadUnaligned<Vector128<uint>>(ref Unsafe.As<uint, byte>(ref k[0x00]));
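
If you find that line too noisy, the load can be wrapped in a small helper (a convenience method of my own, not part of the API; the snippets below keep the raw call):

// Reads a Vector128<uint> from the given element offset of a uint array.
// Requires the System.Runtime.CompilerServices and System.Runtime.Intrinsics namespaces.
static Vector128<uint> LoadVector128(uint[] array, int offset)
{
  return Unsafe.ReadUnaligned<Vector128<uint>>(ref Unsafe.As<uint, byte>(ref array[offset]));
}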

With the last obstacle out of our way, we can finally write the core processing unit of the compression function in C#:

// Rounds 0-3
wk = Simd.Add(msg0, Unsafe.ReadUnaligned<Vector128<uint>>(ref Unsafe.As<uint, byte>(ref k[0x00])));
msg0 = Sha256.SchedulePart1(msg0, msg1);
msg0 = Sha256.SchedulePart2(msg0, msg2, msg3);
temp_abcd = hash_abcd;
hash_abcd = Sha256.HashLower(hash_abcd, hash_efgh, wk);
hash_efgh = Sha256.HashUpper(hash_efgh, temp_abcd, wk);
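
The same six-line pattern then repeats for the rest of the rounds, rotating through the four message vectors and the next four round constants (the schedule updates are dropped for the last sixteen rounds, since no further message words are needed). As a sketch of how the port continues, rounds 4-7 look roughly like this:

// Rounds 4-7
wk = Simd.Add(msg1, Unsafe.ReadUnaligned<Vector128<uint>>(ref Unsafe.As<uint, byte>(ref k[0x04])));
msg1 = Sha256.SchedulePart1(msg1, msg2);
msg1 = Sha256.SchedulePart2(msg1, msg3, msg0);
temp_abcd = hash_abcd;
hash_abcd = Sha256.HashLower(hash_abcd, hash_efgh, wk);
hash_efgh = Sha256.HashUpper(hash_efgh, temp_abcd, wk);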

Does this mean we are done? Not so fast! I still haven’t told you how to load the message block into the registers in the first place. SHA-256 treats the message as a sequence of big-endian 32-bit words, but ARM runs little-endian, so before processing the message we have to reverse the byte order of each four-byte word in the 128-bit vectors, like this:

/* Reverse for little endian */
MSG0 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG0)));
MSG1 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG1)));
MSG2 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG2)));
MSG3 = vreinterpretq_u32_u8(vrev32q_u8(vreinterpretq_u8_u32(MSG3)));

Notice the vrev32q_u8 intrinsic, which does exactly that. Unfortunately, the C# API doesn’t expose it. There is an API proposal to add more SIMD intrinsics, but nobody knows when it will be implemented. Until then, the only way to reverse the endianness is to do it manually (by calling BinaryPrimitives.ReverseEndianness, for example). The trouble with that approach is that it makes the implementation slower, which I will talk about in a moment.
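
Concretely, the manual fix-up looks something like the following sketch (the variable names are mine; message is a ReadOnlySpan<byte> over the current 64-byte block, and ReadUInt32BigEndian performs the same byte swap that ReverseEndianness would on a little-endian machine):

// Load the sixteen 32-bit message words as big-endian values.
// Requires the System.Buffers.Binary namespace for BinaryPrimitives.
Span<uint> block = stackalloc uint[16];
for (int i = 0; i < 16; i++)
{
  block[i] = BinaryPrimitives.ReadUInt32BigEndian(message.Slice(i * 4, 4));
}

var msg0 = Unsafe.ReadUnaligned<Vector128<uint>>(ref Unsafe.As<uint, byte>(ref block[0]));
var msg1 = Unsafe.ReadUnaligned<Vector128<uint>>(ref Unsafe.As<uint, byte>(ref block[4]));
var msg2 = Unsafe.ReadUnaligned<Vector128<uint>>(ref Unsafe.As<uint, byte>(ref block[8]));
var msg3 = Unsafe.ReadUnaligned<Vector128<uint>>(ref Unsafe.As<uint, byte>(ref block[12]));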

Performance

Before measuring the performance, we have to actually run the code somehow (I suspect most people don’t have ARMv8 servers with cryptography extensions lying around at home). The simplest way I found was to purchase a monthly subscription for one Scaleway ARMv8 SSD Cloud Server, which costs only $3 (if you have some money burning a hole in your pocket, you could also rent two Cavium ThunderX bare-metal beasts for $0.50 per hour).

The good news was that after setting up the .NET Core 2.1 SDK on Debian 9, my code was running and producing correct results. The bad news was that the .NET SDK doesn’t really work as advertised on ARM64 devices: the runtime executes programs correctly, but the compiler fails to build even the simplest project without giving any useful error details. Why do you need a compiler for benchmarking, you might ask?

Benchmarking is really hard. The de facto standard for benchmarking .NET code is BenchmarkDotNet, and you should use it whenever possible. How does BenchmarkDotNet work?

BenchmarkRunner generates an isolated project per each benchmark method/job/params and builds it in Release mode.

In other words, if you don’t have a working compiler, you can’t run benchmarks. At least that’s what I thought the first few times my benchmarks failed to run. At some point I almost decided to give up on collecting correct performance numbers, but I tried one last time to find something in the BenchmarkDotNet docs before finally surrendering. I was lucky, because the last section of the documentation, the one on Toolchains, had the information I needed:

InProcessToolchain is our toolchain which does not generate any new executable. It emits IL on the fly and runs it from within the process itself. It can be useful if you want to run the benchmarks very fast or if you want to run them for a framework which we don’t support. An example could be a local build of CoreCLR.
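
In practice that just means decorating the benchmark class with [InProcess]. A sketch of the setup (the benchmark names and bodies are simplified relative to my actual code, and Sha256Arm.ComputeHash is a hypothetical entry point for the intrinsics implementation):

// Requires the BenchmarkDotNet.Attributes and BenchmarkDotNet.Running namespaces.
[InProcess]
public class Sha256Benchmarks
{
  private byte[] _data;

  [Params(1024, 1024 * 1024)]
  public int Size { get; set; }

  [GlobalSetup]
  public void Setup() => _data = new byte[Size];

  [Benchmark(Baseline = true)]
  public byte[] OpenSsl() => System.Security.Cryptography.SHA256.Create().ComputeHash(_data);

  [Benchmark]
  public byte[] Intrinsics() => Sha256Arm.ComputeHash(_data);

  public static void Main() => BenchmarkRunner.Run<Sha256Benchmarks>();
}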

Hooray for InProcessAttribute! Now let’s measure the performance! First, we have to establish a baseline against which we will compare our implementation. The .NET Core standard library implements SHA-256, but it does so by forwarding all calls to an external crypto provider, which on Linux is OpenSSL. OpenSSL already uses the ARMv8 SHA-256 intrinsics in its implementation, so that’s not something we can beat. That’s why I also wanted to throw a managed SHA-256 implementation into the mix, just so we can see the difference between a naive implementation and an accelerated one. I chose BouncyCastle for that purpose (fun fact: .NET Standard defines the SHA256Managed class, but the .NET Core version of it is just a proxy to the SHA256 class, which we already know uses OpenSSL). Here are the numbers for hashing 1K and 1M chunks of data:

Method         | Mean          | Error       | StdDev
-------------- | ------------- | ----------- | -----------
OpenSsl1K      | 6.720 us      | 0.0378 us   | 0.0316 us
Intrinsics1K   | 6.673 us      | 0.0482 us   | 0.0451 us
BouncyCastle1K | 89.930 us     | 0.2948 us   | 0.2462 us
OpenSsl1M      | 3,382.769 us  | 17.0888 us  | 15.9849 us
Intrinsics1M   | 5,851.249 us  | 19.3270 us  | 18.0785 us
BouncyCastle1M | 86,494.253 us | 377.8969 us | 353.4850 us

The results are very close to what I was expecting. The managed implementation didn’t stand a chance against the specialized instructions. The lack of the vrev32q_u8 intrinsic meant the C# code had to be slower once you hash a chunk of data big enough that the overhead of calling OpenSSL functions becomes negligible. But how much of the slowdown was really due to the missing intrinsic? I commented out the code that was reversing the endianness manually and reran the benchmarks:

Method       | Mean         | Error     | StdDev
------------ | ------------ | --------- | ----------
OpenSsl1K    | 6.742 us     | 0.0334 us | 0.0279 us
Intrinsics1K | 5.149 us     | 0.0633 us | 0.0592 us
OpenSsl1M    | 3,372.214 us | 8.8382 us | 8.2672 us
Intrinsics1M | 4,179.431 us | 9.3587 us | 8.7542 us

The gap was much smaller this time. I have no idea what’s causing it. My guess is that the JIT may not be generating optimal load instructions for the Unsafe.ReadUnaligned call, or that some of the other function calls are not correctly inlined. Viewing the instructions generated by the JIT requires cross-compiling the CoreCLR for ARM64 (I’m aware of the Disassembly Diagnoser, but it works only on Windows), which I will do in a future post in this series, as this post was only a warm-up.

Conclusion

You can find the complete C# implementation of SHA-256 in my ARMv8 SHA-256 intrinsics GitHub repository (it currently serves only as a demo, not as a production-ready implementation). This post is only the beginning—stay tuned for more intrinsic goodies in the following weeks!