LUKS and Intel AES-NI performance, part 2

I recently bought a laptop with an Intel i7-3720QM. The laptop has two disks: a 120GB Vertex SSD and the original WD Scorpio 7200rpm HDD. I had to check the LUKS performance of this setup (since I use LUKS on a daily basis). It looks like this (bonnie++ results, of course):

HT on, WD Scorpio 320GB 7200rpm, lvm, ext4, no encryption:

AES-NI on, HT on, WD Scorpio 320GB 7200rpm, lvm, cbc-essiv:sha256 256bit key, ext4:

AES-NI on, HT on, WD Scorpio 320GB 7200rpm, lvm, aes-xts-plain64 512bit key, ext4:

AES-NI off, HT on, WD Scorpio 320GB 7200rpm, lvm, cbc-essiv:sha1 256bit key, ext4:

AES-NI off, HT off, WD Scorpio 320GB 7200rpm, lvm, cbc-essiv:sha1 256bit key, ext4:

As before, reads from the LUKS device are faster than from the unencrypted disk, and so are rewrites (because of the faster reads). I can't unload the AES-NI modules at the moment, because even swap uses them, so I can't measure plain64 without AES-NI. But for the hard drive there is almost no performance loss (around 8% for writes). With a 7200rpm hard drive it hardly matters for throughput whether you use the aesni module or the default aes-asm kernel module (reads are less than 10% slower without AES-NI). Two more observations, though: using the aesni module results in lower system load and lower iowait, and as the last two results show, enabling HT gives better performance. There is almost no difference between 512-bit xts-plain64 and 256-bit cbc-essiv with AES-NI on, so the choice is obvious: xts-plain64 :).
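If you just want to compare the two cipher specs without redoing full disk benchmarks, newer cryptsetup releases (1.6+, so not the one in Ubuntu 12.04) ship a `benchmark` subcommand that measures raw in-kernel cipher throughput, not disk I/O. A guarded sketch:

```shell
# Compare raw AES throughput for the two LUKS cipher specs discussed here.
# This measures the crypto layer only, not the block device, so it shows
# the ceiling AES-NI gives you, independent of the disk.
if command -v cryptsetup >/dev/null 2>&1; then
  cryptsetup benchmark -c aes-xts-plain64 -s 512
  cryptsetup benchmark -c aes-cbc-essiv:sha256 -s 256
else
  echo "cryptsetup not installed"
fi
```

Note that XTS splits its key in half, so `-s 512` gives 256-bit effective AES strength, comparable to the 256-bit CBC setup.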

Now time for the SSD results:

HT on, Vertex 3 120GB, lvm, ext4, no encryption:

AES-NI off, HT on, Vertex 3 120GB, lvm, aes-xts-plain64 512bit key, ext4:

AES-NI on, HT on, Vertex 3 120GB, lvm, aes-xts-plain64 512bit key, ext4:

AES-NI on, HT off, Vertex 3 120GB, lvm, aes-xts-plain64 512bit key, ext4:

AES-NI on, HT on, Vertex 3 120GB, lvm, aes-cbc-essiv:sha256 256bit key, ext4:

Fastest Vertex result ever (not on Ubuntu, but during SSD alignment with sysresccd):
HT on, Vertex 3 120GB, lvm, ext4, no encryption:

The SSD is properly aligned (I hope). I'm comparing all the Ubuntu results (i.e. not the last one). As you can see, there is a huge performance loss with the SSD. The unencrypted device managed 460MB/s writes and 469MB/s reads. The fastest encrypted setup, aes-xts-plain64 with a 512-bit key, managed 134MB/s writes and more than 350MB/s reads. That's roughly a 70% write-speed loss and a 24% read loss. It looks like a disaster, but it's still faster than the unencrypted HDD, and the seek rate is 50x faster!!! Again, AES-NI gives lower system load and lower iowait. HT enabled gives better write rates. Reads are again slower with AES-NI off, and in the SSD's case they are much slower (almost 40%). xts-plain64 with a 512-bit key is even a bit faster than cbc-essiv with a 256-bit key, so the obvious choice for the SSD is xts-plain64 too. But it looks like AES-NI in the i7-3720QM is not fast enough to keep up with the SSD's full write speed (or maybe it's some SSD characteristic?). My earlier tests with an Intel Xeon X5650 showed better write performance with a cached hardware RAID of regular hard drives. Maybe the twice-as-large cache in the X5650, QPI, and faster memory access had much more impact there. Hard to say when you only get the hardware for a few hours of testing.
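The write-loss figure follows directly from the quoted throughput numbers; for example:

```shell
# Write-speed loss: unencrypted 460 MB/s vs aes-xts-plain64 134 MB/s
awk 'BEGIN { printf "write loss: %.0f%%\n", (460 - 134) / 460 * 100 }'
# prints: write loss: 71%
```

The read loss is computed the same way from 469MB/s vs the encrypted read rate.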

Lastly, HT is a good choice when encrypting disks under Ubuntu 12.04 with LUKS. Using the AES-NI module is also a GOOD idea (better performance) for both HDD and SSD. But using LUKS with an SSD causes a big performance loss (at least with the Intel i7-3720QM). Of course I'm staying with LUKS on my SSD, because I like my data to stay private. An encrypted SSD is still faster than an unencrypted HDD, so I'm not crying that much :).
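If you want to check the same two things on your own box (whether the CPU supports AES-NI, and whether the optimized module is active), a minimal sketch for Linux:

```shell
# Check whether the CPU advertises the AES-NI instruction set
# (flag "aes" in /proc/cpuinfo).
if grep -qw aes /proc/cpuinfo; then
  echo "CPU: AES-NI supported"
else
  echo "CPU: AES-NI not supported"
fi

# Check whether the optimized module (aesni_intel) is loaded.
# Note: it may also be compiled into the kernel, in which case
# it won't show up in /proc/modules.
if grep -q '^aesni_intel' /proc/modules; then
  echo "module: aesni_intel loaded"
else
  echo "module: aesni_intel not loaded"
fi
```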

I would like to test some Xeon with AES-NI against an SSD or some really fast storage (an FC array), to see whether there is more AES-NI power in better CPUs. But sadly I don't have access to such hardware at the moment.

And one more thing: the read speed from the AES-NI xts-plain64 encrypted swap device is around 500MB/s. The fastest I've seen is ~540MB/s and the slowest around 380MB/s. These figures are from resuming from hibernation.
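For a quick spot-check of sequential read speed (rather than a full bonnie++ run), a plain dd read works; against a real LUKS mapping you would read from the `/dev/mapper/<name>` device and add `iflag=direct` to bypass the page cache. The sketch below reads a temporary file instead, so it's safe to copy and run as-is:

```shell
# Create a small temporary file, then time a sequential read of it.
# dd prints its throughput summary on stderr, hence the 2>&1.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 2>/dev/null
dd if="$f" of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f "$f"
```

This reads from cache here, so the number is only meaningful against a real block device with `iflag=direct`.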

Part 1 here:

All of the above results are in the bonnie database: , so you can compare them with different setups.


Thanks for putting these benchmarks together.

I just wanted to point out that the main factor in your write-performance loss is not exactly the encryption overhead, but rather the impact that encrypted (incompressible) data has on that generation of SandForce controller. The controller uses dedup/compression, so something like bonnie++ writing zeros will get much higher throughput than, say, a compressed video file or encrypted data.

Your numbers are a bit lower than the incompressible write test for this drive at AnandTech, but the loss relative to writing zeros is comparable.

A lot of drives use SandForce or similar controllers, so it's definitely something to keep in mind if you're planning to encrypt an SSD at the OS layer.

Great stuff! Thanks for posting all this info
