Deleted member 330132
Guest
War Thunder folder: 3,384 files, 33.7 GB (33.5 GB on disk).
That makes the average file size about 10 MB. So, a 2 MB chunk size?
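The arithmetic behind that average, using the numbers from the folder properties above (treating GB as GiB):

```python
# Rough average-file-size arithmetic from the War Thunder folder numbers.
total_bytes = 33.7 * 1024**3   # 33.7 GB reported by the OS
file_count = 3384

avg_mb = total_bytes / file_count / 1024**2
print(f"average file size: {avg_mb:.1f} MB")  # works out to roughly 10 MB
```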
Couldn't they logically make a RAID 10 write go across all six disks first and then do the redundancy afterwards, like a file sync? It could split the data across all disks and both stripes, leaving a logical area to sync after the initial write, then sync the rest of the data later as a background, sync-like activity. It would leave data vulnerable for a while and could hold up other data, but it would get the initial write performance up for home computers or certain applications. Data placed across both stripes in the right pattern could act like a mirror in various ways depending on the setup. The array would write in an n1 pattern instead of n2, with time afterwards to sync as long as the data stays healthy; it would just have to read across both stripes intelligently. It could be useful for anything with an internet or other external way to check the files afterwards, like Steam or other offsite file backups or downloads. Maybe it could even use an onsite backup as a cache, holding commonly used data chunks for checking if desired.
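The idea above can be sketched as a toy model: stripe incoming chunks across all six disks with no redundancy up front, then copy each chunk to its mirror partner later. This is purely illustrative of the post's proposal, not how real RAID controllers behave (a real RAID 10 writes each chunk to both halves of its mirror pair immediately); the pairing layout is an assumption.

```python
# Toy model of "write n1 now, mirror later" on a 6-disk RAID 10.
# Assumed mirror pairs: (0,3), (1,4), (2,5).

DISKS = 6

def initial_write(chunks):
    """Spread chunks across all six disks; mirror copies are deferred."""
    disks = [[] for _ in range(DISKS)]
    pending = []  # (source_disk, chunk) copies still owed to a mirror
    for i, chunk in enumerate(chunks):
        d = i % DISKS
        disks[d].append(chunk)
        pending.append((d, chunk))
    return disks, pending

def background_sync(disks, pending):
    """Later, like a file sync, copy each chunk to its mirror partner."""
    for src, chunk in pending:
        mirror = (src + DISKS // 2) % DISKS
        disks[mirror].append(chunk)
    pending.clear()

disks, pending = initial_write([f"chunk{i}" for i in range(12)])
# Window of vulnerability: chunks exist on only one disk until the sync runs.
background_sync(disks, pending)
# Now every chunk exists on two disks, as in normal RAID 10.
```

The trade-off the post describes shows up directly: twelve chunks cost twelve disk writes up front instead of twenty-four, with the other twelve deferred to idle time.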
That, or make every file transfer a single large file and copy/check the old file(s) before syncing or deleting the source of the copy job. You could move any bit-level checking until after the transfer and keep the files physically inside one large file, maybe with metadata so other software can find the individual components. The software could contain the start/stop data itself, or the file system could add it to the file.
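A minimal sketch of that "one large file" idea: pack many small files into a single blob with start/length records so individual files can be located again, and defer per-file checking until after the transfer. The format and field names here are invented for illustration only.

```python
# Pack files into one blob plus an index, then verify after transfer.
import hashlib

def pack(files):
    """files: dict of name -> bytes. Returns (blob, index)."""
    blob = bytearray()
    index = {}
    for name, data in files.items():
        start = len(blob)
        blob.extend(data)
        # Record start offset, length, and a hash for the deferred check.
        index[name] = (start, len(data), hashlib.sha256(data).hexdigest())
    return bytes(blob), index

def verify(blob, index):
    """After the transfer, check every packed file against its hash."""
    for name, (start, length, digest) in index.items():
        data = blob[start:start + length]
        if hashlib.sha256(data).hexdigest() != digest:
            return False
    return True

blob, index = pack({"a.txt": b"hello", "b.txt": b"world"})
assert verify(blob, index)  # only delete the source once this passes
```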
I forgot one thing. I read something about turning "direct" something on or off for file transfers to speed up writes, but I wasn't sure what it was. If that's the write cache, is it useful to turn the write cache off for certain things?
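It's hard to say from that description whether the setting was direct I/O (bypassing the OS page cache) or the volume's write cache. The closest thing in ordinary software is forcing a write past the OS cache with `fsync`, which trades speed for durability; a small illustrative sketch:

```python
# Writes normally land in the OS write cache and hit the disk later.
# fsync forces them to the disk now -- safer for critical data, slower
# for bulk transfers, which is the trade-off the question is about.
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"important bytes")
    f.flush()             # push Python's own buffer down to the OS
    os.fsync(f.fileno())  # ask the OS to flush its cache to the disk
os.remove(f.name)
```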
Last edited by a moderator: