Open-CAS vs dm-cache, dm-writecache, bcache #1221
Hi @mikabytes, To be more precise, here are some details about Open CAS that you may find useful:
You can find more info in the Open CAS documentation (something the other solutions sometimes lack). Moreover, Open CAS is actively developed, maintained, and tested, as it is used in many commercial production environments, which in general makes it more reliable than the other solutions. The main disadvantage compared to the alternatives is that Open CAS is not built into the kernel, but the installation process should take no more effort than issuing a few simple commands. Hope this answers your question, but feel free to ask any follow-up if needed. :)
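To give a feel for how little setup is involved, a minimal Open CAS configuration is just two `casadm` invocations (a sketch only; the device names and cache id here are placeholders, not from this thread):

```shell
# Hypothetical device names; adjust to your system. Requires root.
# Start a write-back cache (id 1) on the fast device...
casadm --start-cache --cache-device /dev/nvme0n1 --cache-id 1 --cache-mode wb
# ...then attach the slow backing device as a "core"; the cached
# volume appears as /dev/cas1-1 and is used like any block device.
casadm --add-core --cache-id 1 --core-device /dev/sdb
```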
One more thing worth mentioning: the Open CAS "engine", called OCF (Open CAS Framework), is part of SPDK, software that aims to increase storage performance even further by bypassing the kernel and performing all storage operations in userspace.
Thank you for the detailed answer. That's excellent. Looking forward to giving it a good go once kernel 5.13+ support lands.
Hi @mikabytes, Open CAS v22.6 was released a few days ago. To see the recent changes, please take a look at the release notes.
Hi, I saw it mentioned in #1414 and #1433 that preemptive mode is required in order to use Open CAS. Is that still the case, and are there any plans to change that in the future? I ask because quite a few Linux kernels now ship with preemptive mode enabled by default.
This was on an Ubuntu 22.04 server installation.
@mikabytes did you ever set this up, and what were your findings? I am also looking to experiment with Open-CAS on my Proxmox node to compare it to bcache performance. I haven't found much information online about anyone setting this up on Proxmox.
Hi @TheLinuxGuy While I did evaluate the other options, I never got around to trying Open-CAS once the Linux kernel support landed. I concluded that this kind of caching was an ill fit for my use case. Most of my big data is rarely accessed, and the data that is frequently accessed follows a known pattern. So, the ideal solution ended up being a script that retires data to rotational drives every night. I'm overlaying the devices with MergerFS, so it's all transparent from the application layer. Since my initial post, a few years have passed and the price of SSDs has kept dropping. Now half my storage is already SSD, further decreasing my need for adaptive caching strategies. Sometimes the simplest solution is the best. I'd look into something like Open-CAS again if I had a large dataset that was less predictable, though.
Hi all, I just want to share the results of my comparison of bcache vs OpenCAS with respect to flush optimization, specifically merging neighboring dirty sectors and writing them out once during a flush. TL;DR: OpenCAS is better.

We all know that most of an HDD's latency is just waiting until the required part of the disk surface moves under the heads. That's why the random IOPS of an ordinary HDD is about 100-200 (even a disk with ideal heads has to wait on average half of one revolution, which gives 7200rpm/60seconds*2 = 240 reads/writes per second). Internal cache and modern firmware can increase this number, but the order of magnitude remains.

My idea was to create a very slow block device using the "delay" target of device mapper (dm-delay), then use it as the backing device (standing in for an HDD) and use a RAM disk as the SSD cache.
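The setup described above can be sketched roughly as follows (a hypothetical reconstruction, not the author's exact script; it requires root and the brd, loop, and dm-delay kernel modules, with sizes matching the first experiment below):

```shell
# 512 MiB RAM disk to play the role of the SSD cache
modprobe brd rd_nr=1 rd_size=524288            # creates /dev/ram0
# 32 MiB backing file on a loop device to play the role of the HDD
truncate -s 32M /tmp/slow.img
LOOP=$(losetup --find --show /tmp/slow.img)
SECTORS=$(blockdev --getsz "$LOOP")
# dm-delay table: reads delayed by 0 ms, writes delayed by 1000 ms
dmsetup create delayed_disk \
    --table "0 $SECTORS delay $LOOP 0 0 $LOOP 0 1000"
# the artificially slow device is now /dev/mapper/delayed_disk
```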
Then I confirmed that writing either a 4K or a 128K block to the delayed disk took roughly the same time: 1 second. I.e. the delay does not depend on request size. bcache:
OpenCAS:
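The original setup snippets under "bcache:" and "OpenCAS:" did not survive; a hypothetical sketch of what each setup typically looks like (device names assumed from the description above, cache id 1 assumed for OpenCAS) is:

```shell
# bcache: delayed disk as backing device, RAM disk as cache.
make-bcache -B /dev/mapper/delayed_disk        # creates /dev/bcache0
make-bcache -C /dev/ram0                       # prints a cache set UUID
# attach the cache set to the backing device (UUID is a placeholder),
# then switch to writeback so dirty data accumulates in the cache:
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
echo writeback > /sys/block/bcache0/bcache/cache_mode

# OpenCAS: RAM disk as cache, delayed disk as core, write-back mode.
casadm --start-cache --cache-device /dev/ram0 --cache-id 1 --cache-mode wb
casadm --add-core --cache-id 1 --core-device /dev/mapper/delayed_disk
```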
The test was simple; I used fio with these parameters: random writes only (readwrite=randwrite), data size equal to the size of delayed_disk:
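The exact fio command line is missing; a plausible reconstruction matching the stated parameters (block size, queue depth, and target device are assumptions) would be:

```shell
# random writes over the whole 32 MiB device, issued through the
# cached device (/dev/bcache0 for bcache, /dev/cas1-1 for OpenCAS)
fio --name=randwrite-test --filename=/dev/bcache0 \
    --readwrite=randwrite --bs=4k --size=32M \
    --ioengine=libaio --iodepth=16 --direct=1
```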
After the test I started a flush;
for OpenCAS:
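The flush commands themselves were lost. A sketch of how a full flush of dirty data is usually triggered on each (assuming the device names and cache id from the setup above):

```shell
# bcache: setting writeback_percent to 0 makes the writeback thread
# flush all dirty data down to the backing device...
echo 0 > /sys/block/bcache0/bcache/writeback_percent
# ...then watch dirty_data until it reaches 0
cat /sys/block/bcache0/bcache/dirty_data

# OpenCAS: flush the whole cache (blocks until the flush completes)
casadm --flush-cache --cache-id 1
```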
And eventually the flush completes. Results for delayed_disk with size = 32MB, write data size = 32MB (the full disk), RAM disk size = 512MB, and write delay = 1000ms:
For delayed_disk with size = 256MB, write data size = 64MB (25% of the disk), RAM disk size = 512MB, and write delay = 1000ms:
Obviously OpenCAS is the winner. OpenCAS merges and flushes neighboring dirty sectors 10 times faster than bcache. In other words, OpenCAS issues up to 10 times fewer IOPS to the HDD than bcache in this artificial but illustrative experiment. Hope the results can help someone choose.

PS: Btw, the IOPS while running fio before the flush also differ a lot:
PPS: If someone wants to repeat the tests, the attachment contains the scripts I used,
and one parameter for the write delay of delayed_disk (milliseconds):
Question
Is this project aimed at solving the same problem as device-mapper caching strategies such as dm-cache, dm-writecache, or bcache? If it is, what need does Open-CAS serve that isn't already met? If not, please help me understand in which situations Open-CAS is preferable.
Motivation
I am currently investigating how to improve my homelab datacenter. So far, I've gone through Ceph, DRBD, and GlusterFS, backed by RAIDed drives, bare drives, SSDs, HDDs, and NVMes. It is clear that the limited IOPS of HDDs is a common problem no matter what software strategy I use. I could go for a pure SSD/NVMe solution, but then I'd have all these rotational drives just lying around... It would be good to keep using them at least until they break on their own.
I have yet to fully test any of these mentioned caching solutions.
Thank you.