I actually think that the low write rate of the Samsung consumer SSDs is rather a good sign for the drives' firmware implementation. When a cheap consumer-class SSD like the Kingston V300 or the Intel 535 can achieve such high sustained random 4K performance, I get skeptical. Such a...
I use that exact setup, so I definitely know it works. And you won't need the sideband, as the sideband info will be transmitted in-band from the controller to the expander. The sideband is required if you connect the backplane directly to the controller and want to control the LEDs.
dd is not a benchmark. You have no control over syncs and queue depths. There is a post from me in this subforum on how to use fio to properly benchmark sequential write speed.
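I don't have the linked post at hand, but a fio job file along these lines exercises sustained sequential writes with the page cache bypassed (the filename, size, and queue depth here are my assumptions; adjust them to your setup):

```ini
[global]
ioengine=libaio
direct=1        ; bypass the page cache, unlike dd
bs=1M
rw=write
iodepth=16      ; keep the drive's queue filled
size=4G

[seqwrite]
filename=/mnt/test/fio-testfile
```

Run it with `fio seqwrite.fio` and read the `bw=` figure from the write summary; because `direct=1` is set, the result is not inflated by RAM caching the way a naive `dd` run is.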
Are you sure you should get more on an encrypted block device? The processor is somewhat old and does not have hardware...
Usually all drives at 7200 rpm and above will exceed safe operating temperature (~50°C) without active cooling.
There are a few exceptions, notably cases where they are screwed to more solid metal parts.
How can proc be using space? It is a virtual filesystem not located on disk.
Unless Solaris/OpenIndiana works completely different than any *nix I ever used.
please post the output of
zfs list -r -t all rpool
du -h -x -d 1 /
(note that I use mainly Linux and do not know whether the...
The M.2 850 EVO is a SATA drive, it will not work in a PCIe-only slot.
Your options are Plextor M6e, Samsung XP941, Samsung SM951 and the HyperX Predator.
All four are good, with the SM951 as the fastest.
I'm not sure something like that exists.
There are some copy stations around, but you would need something that is filesystem-aware.
I do not know of a single one that is.
My solution to your problem would be a well scripted/automated PC with some hotswap trays.
Resizing filesystems with bad...
A single fast PCIe SSD like the Intel P3700 will blow any caching solution out of the water. If you need redundancy buy two and make a RAID1.
Caching only makes sense if the footprint of the workload fits on the SSD cache.
This drive has not been around long enough to make such an assessment.
I personally considered them as "storage" SSDs for my server, but the 850 EVO has much better burst and random read performance for almost the same price.
Is there a reason to replace the 840 EVO? If you do it because of the read...
The MTBF values are too similar to be based on actual failure rates.
Maybe they calculate them based on the failure rates of the individual parts. A simple mistake in the firmware, however, can destroy a drive much faster.
The 840 EVO still has one of the best price/GB ratios around.
Samsung drives seem to be very resilient against sudden power loss. I had to replace an MX100 in a specific laptop that often had filesystem corruption or bluescreens after suspend-to-disk. Since I replaced the drive with an 840 EVO...
How they are specified and rated can be read in various datasheets and press releases.
What I mean are actual technical differences, which are not published.
You would either have to talk with the engineers or open the drives yourself.
Patrick seemed to have some insight.
Following this logic you cannot buy Intel, Samsung, Crucial or OCZ. All of these have had problems with SSDs, some rather devastating (Intel 8MB bug, Crucial m4 bricks, several types of OCZ drives).
The twisted pair drivers consume significantly more power than DAC transceivers.
If it is for point-to-point and short range, the X520 is preferable.
If it is for a bit longer range and you can use fibre and can get the SFP+ modules cheap on eBay, that is even better.
It is a budget drive, although one of the better ones. The price is comparable to the 840/850 EVO and MX100 drives.
If you look at the better drives like Intel 730 or Samsung 840/850 Pro, those are 50% more expensive.
I cannot comment on the MX200, but this behaviour is observable on a lot of budget SSDs.
The sector ranges that contain actual data have a much slower read speed than the empty (and TRIMmed) space, which basically reads at line speed.
I have seen this on the MX100 as well.
I could not observe...
The LSI RAID controllers are not cheap; if you are willing to spend that much, it would be better to skip the RAID route and use an NVMe drive. The Intel P3x00 series is a good recommendation, depending on the required write speed.
The Samsung EVO drives are not a good choice for that type of...
I doubt that the driver gets properly tested for things like sleep mode, even if the basic support is generally there.
Server hardware not running = no revenue, so this is a feature that not many people will actually use.
I have been using Linux mdraid for almost 15 years now and basically used external checksums for media files (CRC32) since the beginning. I never encountered a single corrupted file on a software RAID5/6.
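The external-checksum workflow described above can be sketched in a few lines of Python; this is a hypothetical example of the approach, not the tooling actually used:

```python
import sys
import zlib


def crc32_of_file(path, chunk_size=1 << 20):
    """Compute the CRC32 of a file, reading it in 1 MiB chunks
    so even large media files never need to fit in RAM."""
    crc = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF


if __name__ == "__main__":
    # Print checksums in the common "crc32  filename" format,
    # suitable for storing next to the media files and diffing later.
    for p in sys.argv[1:]:
        print(f"{crc32_of_file(p):08x}  {p}")
```

Storing the printed lines alongside the files lets a periodic re-run detect silent corruption that the RAID layer itself would never notice.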
There have been many unreadable sectors on disks, but since the disk does not read such...
I don't think you can do bait-and-switch tactics for OEM-only products, at least not without major consequences for future contracts.
The companies that will buy these SSDs will be well aware of the specs.
Or do they plan to make these SSDs a retail product after all? At least the name does not...
While the math is wrong, the validity of the argument remains.
Highly compressed video really behaves like noise. It is impossible to deduplicate.
Even when it is lossless, good lossless compression is similar to noise as well.
We should not forget that lossless compression already is...
I agree with his other arguments, but this number seems wrong.
There are 256^(128*1024) possible 128K blocks. This is roughly 10^315000, a number with over 315,000 digits.
Note that the observable universe is estimated to contain about 10^80 protons.
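The digit count can be verified without computing the full number, using log10:

```python
import math

# A 128K block has 128 * 1024 bytes = 1,048,576 bits,
# so there are 2**1048576 possible blocks.
bits = 128 * 1024 * 8

# Number of decimal digits of 2**bits is floor(bits * log10(2)) + 1.
digits = int(bits * math.log10(2)) + 1
print(digits)  # 315653
```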
These are two conflicting mechanics, dedup works well with small block sizes while compression is better with larger blocks.
To make effective use of dedup the NTFS cluster size (4k by default) should be at least as large as the ZFS volume block size (8k by default).
I did actually try to determine the space savings of my 12 GB media library with zdb. The deduplication space saving ratio was less than 1 percent. If you approach even 2x with dedup, it is because of large duplicate files, which can be taken care of by much simpler means - SnapRaid does it for...
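The "compressed media behaves like noise" point is easy to demonstrate: chunk the data into ZFS-record-sized blocks and count unique block hashes. This sketch uses random bytes as a stand-in for compressed video, and the 8K block size is illustrative:

```python
import hashlib
import os


def dedup_ratio(data: bytes, block_size: int = 8192) -> float:
    """Ratio of total blocks to unique blocks (1.0 = no savings),
    mimicking what block-level dedup could achieve at best."""
    hashes = {hashlib.sha256(data[i:i + block_size]).digest()
              for i in range(0, len(data), block_size)}
    total = (len(data) + block_size - 1) // block_size
    return total / len(hashes)


# Noise-like data: every 8K block is unique, so dedup saves nothing.
noise = os.urandom(1024 * 1024)
print(dedup_ratio(noise))      # 1.0

# A whole-file duplicate dedups trivially, but so would a simpler tool.
print(dedup_ratio(noise * 4))  # 4.0
```

This matches the zdb observation above: unless whole files repeat, block-level dedup on compressed media finds almost nothing.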
What do you mean with "PCI-x but should work in PCI-e"?
PCI-X and PCIe are not mechanically or electrically compatible.
PCI-X may work in legacy PCI, however.
FC provides block-level links, comparable to what iSCSI does.
You cannot get normal network traffic across that link.
Yes, the UDMA CRC errors are way too high for a single power loss, and they will probably increase further if you take the checksum errors into account.
Check SATA cables, power cables and PSU.
The controller or the controller ports could also have issues.
So many parallel distributed errors on a newly created pool? You should check the SMART data of the disks and your power supply. Maybe the sudden power loss damaged it - unlikely if it is a quality PSU, but I had a dead Seasonic PSU after a blackout some years ago.