smartctl causing errors in dmesg with SAS SSD drives

For some time, I've observed weird behavior with several SAS SSD drives. Whenever I checked their info using 'smartctl -a /dev/sdX', an error message appeared in dmesg:

[  510.392322] sd 11:0:9:0: [sdk] tag#3990 Sense Key : Recovered Error [current]
[  510.392336] sd 11:0:9:0: [sdk] tag#3990 Add. Sense: Grown defect list not found
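
The issue is trivially easy to trigger: every SMART query on an affected drive adds a fresh pair of these lines to the kernel log (the device name below is simply the one from the log above):

smartctl -a /dev/sdk > /dev/null
dmesg | tail -n 2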

I finally decided to investigate.

Read more…

Server-side file copying between SMB shares on Linux

I needed to do a simple task: copy a file between two SMB shares residing on the same server. It would be wasteful to transfer the file back and forth between the server and the client machine over the network, so why not use the server-side copy feature?

Trying the regular 'cp' command copies the file across the network, so there must be some other way. After a little googling, I found on the Samba wiki (https://wiki.samba.org/index.php/Server-Side_Copy) that Linux does in fact support server-side copying. The site mentioned 'cp --reflink' - which did not work for me - along with a program called 'cloner' - which I could not find for the life of me. It's time for a little digging.
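
For the record, this is roughly what I was trying - the share names, mount points and file name are made up for illustration: both shares mounted over CIFS on the client, a plain copy that goes through the network, and the reflink variant suggested by the wiki:

mount -t cifs //server/share1 /mnt/share1 -o username=me
mount -t cifs //server/share2 /mnt/share2 -o username=me

# plain copy - data travels server -> client -> server
cp /mnt/share1/big.iso /mnt/share2/

# the wiki's suggestion - did not work for me
cp --reflink=always /mnt/share1/big.iso /mnt/share2/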

Read more…

Exploring cryptsetup key generation

When playing with the Linux device mapper, trying to mount an encrypted file, I naturally started utilizing cryptsetup. I knew the passphrase that was used to encrypt the file and managed to mount it correctly, but checking the mapped device's table info with the '--showkeys' option got me thinking: knowing the passphrase, could I manage to do without cryptsetup and mount the volume directly, specifying only the 'table' parameters?
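
To make the question concrete, here is a minimal sketch of the idea - the mapping name, sizes, cipher and backing device are all made up for illustration. dmsetup can print the active crypt mapping, master key included, and in principle it accepts a similar line back when creating a mapping by hand:

dmsetup table --showkeys secret
# 0 204800 crypt aes-xts-plain64 <long hex key> 0 7:1 4096

dmsetup create secret2 --table "0 204800 crypt aes-xts-plain64 <long hex key> 0 /dev/loop0 4096"

The open question is whether that hex key can be reproduced from the passphrase alone - which is exactly the question posed above.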

Let's start from the beginning.

Read more…

Mapping ZFS disk access patterns

I was curious just how important TRIMming SSD drives is when using them with ZFS. We know that classic NAND-based drives experience performance degradation as they fill up with data. But if only a small area of the storage is ever used, with the rest left untouched, would it still slow down over time?
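
One way to gather such a map - assuming blktrace is available and the pool sits on a single disk, /dev/sda being a purely hypothetical name - is to record a block-level trace while the pool is in use and then dump it to text, since every request in the dump carries its sector offset and size:

# trace the disk for 10 minutes while the pool is doing its usual work
blktrace -d /dev/sda -w 600 -o zfs_trace
# convert the binary trace to text for further processing
blkparse -i zfs_trace > zfs_trace.txt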

Read more…

Common pitfall when benchmarking ZFS with fio

Let's say you build yourself a new ZFS pool on top of some pretty fast NVMe drives and want to benchmark it to see how well it performs. You create a zvol and fire up fio to sequentially read some data from it. But anticipating a large number of IOPS, you don't want your CPU to bottleneck the performance, so naturally you include --numjobs=8 to be sure you get the most out of your NAND gates. Fio completes and TA-DAH: IOPS through the roof. But wait a minute... Your pool, consisting of three NVMe drives each capable of 3.2 GB/s sequential read, is being read at a rate of 24 GB/s! Obviously the disk vendor would not understate its product's performance, so something must be wrong with the test.
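
For reference, the run described above looks more or less like this - the pool name, zvol name and every flag except --numjobs are just a guess at a typical invocation:

fio --name=seqread --filename=/dev/zvol/tank/bench \
    --rw=read --bs=1M --numjobs=8 \
    --runtime=60 --time_based --group_reporting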

Read more…