Is it normal for 100GB M-Discs to take 6x longer to read from than to write to?

OS: Win 10 Pro 21H2 64-bit
HW: ASUS BW-16D1HT rev. 3.10 internal Blu-ray writer
(Yeah, I know, that's probably the last firmware update, a.k.a. the one designed to make you want to buy a new one. IF . . . IF it is, then I'll remember that when looking for a new writer.)
BD: Verbatim 100GB BDXL M-Disc
(I bought them from Amazon, and the boxes have Japanese writing all over them despite going to an American address. Hopefully that doesn't mean the quality of the discs is any less.)

I've been trying to back up my data to 100GB M-Discs. Aside from other problems I've encountered, like finding software that fully works, it takes about 4 hours to fill a 100GB M-Disc, but then it takes about 24 hours to copy-paste all of that newly burned data from the 100GB M-Disc back to the HDD.

If that's expected because M-Disc materials by nature take 6x longer to read, no problem. I just don't want the data to degrade and become impossible to copy. Whenever HDDs/SSDs take this long to read/copy, it's because the data is about to become inaccessible; if your HDD or SSD is this slow, that data is going to be unreadable within months.

Has anyone else encountered this? Does it take you roughly 4 hours to burn and roughly 24 hours to copy-paste a 100GB M-Disc? In case you're wondering, I do a CRC check of the copy-pasted data to confirm the data is still copyable from the newly burned disc.

I've checked this with both the BDXL M-Disc drive and a non-M-Disc BDXL drive; both take about 24 hours to copy-paste all of the data.
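The CRC check described above amounts to something like the following minimal Python sketch. This is an illustration of the idea, not the OP's actual tool; the function names and chunk size are made up:

```python
import zlib
from pathlib import Path

def crc32_file(path, chunk_size=1 << 20):
    """Compute the CRC-32 of one file, reading in 1 MiB chunks."""
    crc = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

def verify_copy(src_dir, dst_dir):
    """Return the relative paths whose CRC differs between two trees
    (e.g. the burned disc's files vs. the copy on the HDD)."""
    mismatches = []
    for src in Path(src_dir).rglob("*"):
        if src.is_file():
            rel = src.relative_to(src_dir)
            dst = Path(dst_dir) / rel
            if not dst.is_file() or crc32_file(src) != crc32_file(dst):
                mismatches.append(rel)
    return mismatches
```

An empty result means every file on the copy matched the original.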
 
Just off the top of my head, I wonder if it's simply that when writing to the disc it was doing so sequentially, while reading back arbitrary data from different parts of the physical disc (particularly if there are many small files) it's doing so non-sequentially, which might be slowing the whole process.

The same general idea occurs with hard drives, where sequential blocks of data can be read/written more quickly than files spread across different parts of the disk. It's why some like to add thousands of small files to a single archive file to speed up transfers.
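The "thousands of small files into a single archive" trick is a one-liner with Python's standard library. A minimal sketch (the function name and paths are illustrative):

```python
import tarfile
from pathlib import Path

def pack_tree(src_dir, archive_path):
    """Bundle a directory of many small files into one tar archive,
    so the drive deals with a single large sequential stream instead
    of thousands of tiny seeks."""
    with tarfile.open(archive_path, "w") as tar:
        tar.add(src_dir, arcname=Path(src_dir).name)
```

Burning (and later reading) the one `.tar` then becomes a sequential transfer; the files are unpacked on the fast drive afterwards.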
 
I wonder if it's simply that when writing to the disc it was doing so sequentially, while reading back arbitrary data from different parts of the physical disc (particularly if there are many small files) it's doing so non-sequentially, which might be slowing the whole process.
Crazy test thought for OP: make an ISO copy of the disc on your hard drive. It's a sequential, block-by-block read, so it would either confirm or eliminate random read slowness as a factor in the slow reading from the drive.
 
It's probably random read slowness. Building an image or reading from an image on a hard drive has always been way faster than a filesystem copy off an optical disc. Try making an ISO of it like Grebuloner suggested; I bet it'll be much faster. If you have a ton of files on an optical disc and want to copy all of them, it's usually faster to make an ISO image, mount the image, and copy the files off the image than it is to copy files directly from the optical drive.
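The image-then-copy approach boils down to one big sequential block copy, conceptually like `dd if=/dev/sr0 of=disc.iso bs=4M`. A minimal Python sketch of the same idea; the device path is an assumption (something like `/dev/sr0` on Linux or `\\.\D:` on Windows, typically requiring elevated privileges), not a tested recipe:

```python
def image_disc(device_path, image_path, block_size=4 * 1024 * 1024):
    """Stream a raw device (or any file) to an image file in large,
    strictly sequential blocks: no per-file directory lookups, no seeking."""
    with open(device_path, "rb") as src, open(image_path, "wb") as dst:
        while block := src.read(block_size):
            dst.write(block)
```

Once the image exists on a fast drive, it can be mounted (or opened with an archive tool) and the individual files copied out at hard-drive speed.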
 
Just off the top of my head, I wonder if it's simply that when writing to the disc it was doing so sequentially, while reading back arbitrary data from different parts of the physical disc (particularly if there are many small files) it's doing so non-sequentially, which might be slowing the whole process.

The same general idea occurs with hard drives, where sequential blocks of data can be read/written more quickly than files spread across different parts of the disk. It's why some like to add thousands of small files to a single archive file to speed up transfers.
You might just be right.

After reading your comment, I paid more attention to the copy-paste progress window, and I've since written a couple different sets of data to another two of the 100GB M-Discs that I had purchased.

The progress window seems to show the slowest, byte-level transfer rates when copying many small files, such as XML files that were inside the profile folders of various programs I had used.

The first set of data, the one I wrote this post about, was over 90,000 files. This second one, which didn't take all day, is over 14,000 files. The third one is only over 2,000 files, and it only took about an hour to copy-paste.

All three groups of data are within several hundred million bytes of 100 billion bytes; with the OS's base-1024 rounding, that shows as close to, but just under, 93GB. With the first set of data, I used software to try to get within bytes of the maximum while still being able to finalize. Being so close to the maximum while still being able to finalize probably wasn't the issue. It was probably what you said: all the small files.

Thanks a lot! 🤩
Crazy test thought for OP: make an ISO copy of the disc on your hard drive. It's a sequential, block-by-block read, so it would either confirm or eliminate random read slowness as a factor in the slow reading from the drive.
Nice idea.

Combining what you and Okatis said gave me another idea. Maybe the solution to prevent a ridiculous ~24-hour copy-paste is to write larger archive files, like ISOs or zips, to the optical media, with the contents still inside their archives.

Right now, the burning software is extracting the contents of the ISO files that I create with ImgBurn. Maybe I should have it write the ISO file directly as-is, but that might be too large for my liking; one 93GB file is quite a lot. I might try this with multiple zip files instead.

Thanks.
It's probably random read slowness. Building an image or reading from an image on a hard drive has always been way faster than a filesystem copy off an optical disc. Try making an ISO of it like Grebuloner suggested; I bet it'll be much faster. If you have a ton of files on an optical disc and want to copy all of them, it's usually faster to make an ISO image, mount the image, and copy the files off the image than it is to copy files directly from the optical drive.
Thanks; the idea Grebuloner and Okatis gave me didn't actually dawn on me until I read your comment.

Doesn't your burning software have a verify option?
Yeah, but I'm not going to just trust it. I want a second software unrelated to the first to tell me the same thing. This is my most important data, you know.
 
Combining what you and Okatis said gave me another idea. Maybe the solution to prevent a ridiculous ~24-hour copy-paste is to write larger archive files, like ISOs or zips, to the optical media, with the contents still inside their archives.
Yeah, that's what I was going to recommend trying: 7-Zip, zip, or tar the files into a few large archives and see if it reads faster. I believe zip and 7z files also store a checksum with the archive to check data integrity; I'm not sure about tar.
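Zip does in fact store a CRC-32 for every member, so a burned archive can check itself. A rough Python sketch of that round trip (function and file names are illustrative); `testzip()` re-reads every member and compares it against the stored CRC:

```python
import zipfile

def archive_and_check(files, archive_path):
    """Write files into a zip, then re-open the archive and let the
    per-member CRC-32s confirm it is internally consistent."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            zf.write(f)
    with zipfile.ZipFile(archive_path) as zf:
        return zf.testzip()  # None means every member's CRC matched
```

Running `testzip()` against the archive after it is read back from the disc gives a quick integrity pass without unpacking anything.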
 
Yeah, that's what I was going to recommend trying: 7-Zip, zip, or tar the files into a few large archives and see if it reads faster. I believe zip and 7z files also store a checksum with the archive to check data integrity; I'm not sure about tar.
For data that's already compressed, like a giant pile of JPEGs, I'd recommend finding an archiver that supports a no-compression mode. Trying to compress JPEGs just takes forever (a ton of CPU time) and does basically nothing.

One time back in the '90s, I was working for a little start-up remote-control webcam company. We dumped like 20,000 JPEGs on a CD-R. It burned no problem, but then someone tried to copy all the JPEGs off. The drive cranked on it for several hours, then died. I ended up copying it to the hard drive on a Linux box with dd, then mounted it loopback to get the files off.
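The no-compression mode mentioned above exists in the zip format itself: "stored" members are written byte-for-byte, yet the container still records a CRC-32 per file. A hedged Python sketch (names are illustrative):

```python
import zipfile

def store_precompressed(files, archive_path):
    """Bundle already-compressed files (JPEGs, MP4s, ...) without
    recompressing them: ZIP_STORED just concatenates the bytes, so it
    costs almost no CPU, but each member still gets a CRC-32."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_STORED) as zf:
        for f in files:
            zf.write(f)
```

You get the single-large-file read pattern and the integrity check while skipping the pointless deflate pass over data that won't shrink.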
 
Crazy test thought for OP: make an ISO copy of the disc on your hard drive. It's a sequential block by block read, so it would either confirm or eliminate random read slowness as a factor for slow reading from the drive.

Holy freaking crepes!!! WT bananas! How in the world does somebody tell you, "It's taking 24 hours to copy-paste everything," and then you think, "Hmmm, well then making an ISO of it should only take you about an hour."

How the hockey does something like this make sense to you?!?!

Holy mother of Jesus, and protestants think also James and Jude?!

How does this work?!

Seriously! Are you people research computer scientists with specialties in optical media and transfer rates?!?!?!

You know, I only said "Nice idea" as a thank you for giving me an idea to try, but I didn't think in my wildest hopes it would actually work!

Really, it took 1 hour and 9 minutes using ImgBurn to create an ISO of the disc that takes 24 hours to copy-paste. Then I used the 7-Zip manager to run a CRC on the data within the ISO by selecting all of the folders and files, and the CRC checks out; it's all there. Then copy-pasting from the ISO, as suggested, takes a normal amount of time.

I'm truly amazed.

Here, you deserve the star-eyed smileys, 🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭
The crying ones are gushing tears of joy and thankfulness.


🤩🤩🤩😭😭😭 - These ones are for Zandor.
 
Yeah, that's what I was going to recommend trying: 7-Zip, zip, or tar the files into a few large archives and see if it reads faster. I believe zip and 7z files also store a checksum with the archive to check data integrity; I'm not sure about tar.

For data that's already compressed, like a giant pile of JPEGs, I'd recommend finding an archiver that supports a no-compression mode. Trying to compress JPEGs just takes forever (a ton of CPU time) and does basically nothing.

One time back in the '90s, I was working for a little start-up remote-control webcam company. We dumped like 20,000 JPEGs on a CD-R. It burned no problem, but then someone tried to copy all the JPEGs off. The drive cranked on it for several hours, then died. I ended up copying it to the hard drive on a Linux box with dd, then mounted it loopback to get the files off.

WinRAR has that, FYI :).

As I understand it, if you're trying to preserve data for as long as possible, it's always better to do so uncompressed.

The reason is that each bit in the compressed data affects multiple locations in the uncompressed data.
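That intuition is easy to demonstrate: flip one bit in raw data and exactly one byte changes, but flip one bit mid-stream in compressed data and decompression typically fails outright (or garbles everything from that point on). A small Python illustration using zlib; the sample data and flip position are arbitrary:

```python
import zlib

original = bytes(range(256)) * 64          # 16 KiB of sample data
compressed = zlib.compress(original)

def flip_bit(data, index):
    """Return a copy of `data` with the lowest bit of byte `index` flipped."""
    b = bytearray(data)
    b[index] ^= 0x01
    return bytes(b)

# One flipped bit in the raw data corrupts exactly one byte.
damaged_raw = flip_bit(original, len(original) // 2)
raw_errors = sum(a != b for a, b in zip(original, damaged_raw))

# One flipped bit mid-stream in the compressed data usually makes the
# rest of the stream undecodable (zlib also checks an Adler-32 at the end).
try:
    recovered = zlib.decompress(flip_bit(compressed, len(compressed) // 2))
except zlib.error:
    recovered = None  # decompression failed outright
```

So a single media error in an uncompressed archive costs one byte; in a compressed one it can cost the whole file.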
 
Holy freaking crepes!!! WT bananas! How in the world does somebody tell you, "It's taking 24 hours to copy-paste everything," and then you think, "Hmmm, well then making an ISO of it should only take you about an hour."

How the hockey does something like this make sense to you?!?!

Holy mother of Jesus, and protestants think also James and Jude?!

How does this work?!

Seriously! Are you people research computer scientists with specialties in optical media and transfer rates?!?!?!

You know, I only said "Nice idea" as a thank you for giving me an idea to try, but I didn't think in my wildest hopes it would actually work!

Really, it took 1 hour and 9 minutes using ImgBurn to create an ISO of the disc that takes 24 hours to copy-paste. Then I used the 7-Zip manager to run a CRC on the data within the ISO by selecting all of the folders and files, and the CRC checks out; it's all there. Then copy-pasting from the ISO, as suggested, takes a normal amount of time.

I'm truly amazed.

Here, you deserve the star-eyed smileys, 🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭
The crying ones are gushing tears of joy and thankfulness.


🤩🤩🤩😭😭😭 - These ones are for Zandor.
Lol, you're welcome! Glad it worked out for you!
 
LOL, because it stops after every one of your itty-bitty files and doesn't necessarily seek them in order of their position on the disk. If you image it, it will read the data sequentially without slowing down.
I have noticed the same thing with flash drives and SD cards. Copy a bunch of tiny files and you are going to have a bad time. I have had a ~100MB folder full of ~100,000 Guitar Pro tabs for 18 years or so. You don't want to copy that to flash...
 
Just off the top of my head, I wonder if it's simply that when writing to the disc it was doing so sequentially, while reading back arbitrary data from different parts of the physical disc (particularly if there are many small files) it's doing so non-sequentially, which might be slowing the whole process.

The same general idea occurs with hard drives, where sequential blocks of data can be read/written more quickly than files spread across different parts of the disk. It's why some like to add thousands of small files to a single archive file to speed up transfers.
Crazy test thought for OP: make an ISO copy of the disc on your hard drive. It's a sequential, block-by-block read, so it would either confirm or eliminate random read slowness as a factor in the slow reading from the drive.
Absolutely agree. Whenever I use a CD/DVD, the absolute first thing I do is make an ISO of it on my SSD. It makes using them so much easier and faster. If you get those X-ray/MRI DVDs, always make an ISO or rip the files so you can use them from your SSD. Same concept here: make an ISO, then take what files you need from the ISO. It will save you a lot of time and headache, assuming you need a significant portion of the files.
 
You might just be right.

After reading your comment, I paid more attention to the copy-paste progress window, and I've since written a couple different sets of data to another two of the 100GB M-Discs that I had purchased.

The progress window seems to show the slowest, byte-level transfer rates when copying many small files, such as XML files that were inside the profile folders of various programs I had used.

The first set of data, the one I wrote this post about, was over 90,000 files. This second one, which didn't take all day, is over 14,000 files. The third one is only over 2,000 files, and it only took about an hour to copy-paste.

All three groups of data are within several hundred million bytes of 100 billion bytes; with the OS's base-1024 rounding, that shows as close to, but just under, 93GB. With the first set of data, I used software to try to get within bytes of the maximum while still being able to finalize. Being so close to the maximum while still being able to finalize probably wasn't the issue. It was probably what you said: all the small files.

Thanks a lot! 🤩

Nice idea.

Combining what you and Okatis said gave me another idea. Maybe the solution to prevent a ridiculous ~24-hour copy-paste is to write larger archive files, like ISOs or zips, to the optical media, with the contents still inside their archives.

Right now, the burning software is extracting the contents of the ISO files that I create with ImgBurn. Maybe I should have it write the ISO file directly as-is, but that might be too large for my liking; one 93GB file is quite a lot. I might try this with multiple zip files instead.

Thanks.

Thanks; the idea Grebuloner and Okatis gave me didn't actually dawn on me until I read your comment.


Yeah, but I'm not going to just trust it. I want a second software unrelated to the first to tell me the same thing. This is my most important data, you know.
I actually write zips to my discs because a lot of file attributes are lost if you directly write files that have things like the system or hidden attribute set; at least that's how it was with CDs.

For bit-by-bit comparison of the copy to the original, WinMerge and WinDiff are my go-tos. There used to be nice command-line software called xcomp and xcompare, but they broke when long file names were introduced. I preferred those, as they followed an xcopy /s command-line structure, so it was quick to set up and then work on something else. Keep in mind that it's best for both drives to read fully sequentially with no seeking, unless they're SSDs, where seeking doesn't carry as high a penalty.
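A bit-for-bit comparison of the kind xcomp or WinMerge performs can be sketched in a few lines of Python, reading both files in large chunks so each drive streams sequentially (the function name and chunk size are illustrative):

```python
def files_identical(path_a, path_b, chunk_size=1 << 20):
    """Sequentially compare two files byte-for-byte in 1 MiB chunks.
    Unequal lengths fall out naturally: the shorter file's final read
    (or empty read at EOF) won't match the other's chunk."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            a, b = fa.read(chunk_size), fb.read(chunk_size)
            if a != b:
                return False
            if not a:          # both hit EOF together: identical
                return True
```

Walking two directory trees and calling this per file pair gives the same "compare afterwards" guarantee without any archive format involved.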
 
For data that's already compressed, like a giant pile of JPEGs, I'd recommend finding an archiver that supports a no-compression mode. Trying to compress JPEGs just takes forever (a ton of CPU time) and does basically nothing.

One time back in the '90s, I was working for a little start-up remote-control webcam company. We dumped like 20,000 JPEGs on a CD-R. It burned no problem, but then someone tried to copy all the JPEGs off. The drive cranked on it for several hours, then died. I ended up copying it to the hard drive on a Linux box with dd, then mounted it loopback to get the files off.
You can still get some extra compression on already-compressed stuff by using the extreme/ultra type of switches, but it's only an additional 1-2%. Still, the CPU time should be trivial on today's systems, and if you're going to generate a CRC with an archive anyway, you might as well get that extra compression.

Ouch--poor drive. :(
 
Holy freaking crepes!!! WT bananas! How in the world does somebody tell you, "It's taking 24 hours to copy-paste everything," and then you think, "Hmmm, well then making an ISO of it should only take you about an hour."

How the hockey does something like this make sense to you?!?!

Holy mother of Jesus, and protestants think also James and Jude?!

How does this work?!

Seriously! Are you people research computer scientists with specialties in optical media and transfer rates?!?!?!

You know, I only said "Nice idea" as a thank you for giving me an idea to try, but I didn't think in my wildest hopes it would actually work!

Really, it took 1 hour and 9 minutes using ImgBurn to create an ISO of the disc that takes 24 hours to copy-paste. Then I used the 7-Zip manager to run a CRC on the data within the ISO by selecting all of the folders and files, and the CRC checks out; it's all there. Then copy-pasting from the ISO, as suggested, takes a normal amount of time.

I'm truly amazed.

Here, you deserve the star-eyed smileys, 🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩🤩😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭😭
The crying ones are gushing tears of joy and thankfulness.


🤩🤩🤩😭😭😭 - These ones are for Zandor.
Those of us who lived through the days of optical seek times remember such methods all too well. ;)
 
As I understand it, if you're trying to preserve data for as long as possible, it's always better to do so uncompressed.

The reason is that each bit in the compressed data affects multiple locations in the uncompressed data.
Makes sense, because a single bit error is going to cause a lot more havoc when the data is compressed. This is why uncompressed video (and even images) can have multiple bit flips without you being able to tell in the viewable product.

If I'm doing something for pure archival, I'd really only be zipping for the file attributes. At least, that's what I remember I used to do. I gave up on CDs once I realized you basically have to keep them online/nearline to avoid hidden or surprise data losses.
 
LOL, because it stops after every one of your itty-bitty files and doesn't necessarily seek them in order of their position on the disk. If you image it, it will read the data sequentially without slowing down.
I have noticed the same thing with flash drives and SD cards. Copy a bunch of tiny files and you are going to have a bad time. I have had a ~100MB folder full of ~100,000 Guitar Pro tabs for 18 years or so. You don't want to copy that to flash...
Hard drives have the same issue too, but for some reason they're used to it, or just built better, so they typically won't fail, unlike other media.
 
Absolutely agree. Whenever I use a CD/DVD, the absolute first thing I do is make an ISO of it on my SSD. It makes using them so much easier and faster. If you get those X-ray/MRI DVDs, always make an ISO or rip the files so you can use them from your SSD. Same concept here: make an ISO, then take what files you need from the ISO. It will save you a lot of time and headache, assuming you need a significant portion of the files.
I think we also found back in the day that xcopy was faster than Windows Explorer, since it copied files in the order they natively sat on the drive versus something Explorer had 'sorted'. I could be remembering this wrong, though, as that's nearly 20 years ago now...
 
I think we also found back in the day that xcopy was faster than Windows Explorer, since it copied files in the order they natively sat on the drive versus something Explorer had 'sorted'. I could be remembering this wrong, though, as that's nearly 20 years ago now...
I just use TeraCopy and use SHA-512 for the check. No idea if it's better than what you're using; never heard of what you're using. TeraCopy is a godsend for me, though.
 
I just use TeraCopy and use SHA-512 for the check. No idea if it's better than what you're using; never heard of what you're using. TeraCopy is a godsend for me, though.
If it works for you, that works for me. (y) There are a thousand different ways to copy files, and these days they're all about the same speed.
 
Those of us who lived through the days of optical seek times remember such methods all too well. ;)
Oh yes! Many, MANY CD-Rs burned in my day, and MANY small files (usually pictures), sitting there listening to your CD drive scanning back and forth, back and forth... waiting to see smoke! :D
 
I was more curious about the automated CRC features than speed.
The problem I have with a CRC as a sidecar file, or even in flight, is that it doesn't 100% eliminate errors. The only way to really be sure is to compare the files afterwards, and not just right after the file copy, but somewhere down the road as well.
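One way to do the "check again down the road" part is a sidecar manifest of per-file hashes, written at burn time and re-checked months or years later. A minimal Python sketch; the JSON layout and function names are assumptions, and the manifest should live outside the tree it describes:

```python
import hashlib
import json
from pathlib import Path

def write_sidecar(root, sidecar_path):
    """Record a SHA-256 digest for every file under `root` in a
    JSON manifest stored at `sidecar_path`."""
    digests = {}
    for p in sorted(Path(root).rglob("*")):
        if p.is_file():
            rel = str(p.relative_to(root))
            digests[rel] = hashlib.sha256(p.read_bytes()).hexdigest()
    Path(sidecar_path).write_text(json.dumps(digests, indent=2))

def reverify(root, sidecar_path):
    """Later on: return the relative paths that are missing or no
    longer match the digests recorded in the manifest."""
    digests = json.loads(Path(sidecar_path).read_text())
    bad = []
    for rel, d in digests.items():
        p = Path(root) / rel
        if not p.is_file() or hashlib.sha256(p.read_bytes()).hexdigest() != d:
            bad.append(rel)
    return bad
```

Re-running `reverify` against the disc every year or so turns silent degradation into a visible list of files to restore from another backup.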
 
Oh yes! Many, MANY CD-Rs burned in my day, and MANY small files (usually pictures), sitting there listening to your CD drive scanning back and forth, back and forth... waiting to see smoke! :D
Yeah, it was the surest way to ruin those cheap IDE 'tray' drives (which then became the standard :rolleyes:). I think only our NEC MultiSpin 6x got killed by seek city. We had it repaired too, but it was sadly never the same. :(
 
Yeah, it was the surest way to ruin those cheap IDE 'tray' drives (which then became the standard :rolleyes:). I think only our NEC MultiSpin 6x got killed by seek city. We had it repaired too, but it was sadly never the same. :(
My Ricoh, I think a 4x burner? That thing rocked; I burned literally HUNDREDS of CDs with that thing when I had it! (I downloaded a lot of Linux ISOs ;) ;) and burned everything.)
 
My Ricoh, I think a 4x burner? That thing rocked; I burned literally HUNDREDS of CDs with that thing when I had it! (I downloaded a lot of Linux ISOs ;) ;) and burned everything.)
That was probably a rebrand of the Yamaha 4x4x, which was the KING of CD-R drives. I still have ours: full SCSI and caddied, the way they should have remained...
 
So, since this thread started, what have people's M-Disc experiences been? I made a copy of all of my extremely important photos and home videos, and I was working on a practice restore. I have over 20 discs that store 100GB each, and they all read super slowly. I've been listening to CD drives for a long time; normally, when I hear one read over and over and over and barely make any progress, it means it's having a hard time reading the disc, and it isn't much longer until the disc fails. Every one of these discs is doing that, which really leaves me doubtful that these discs will last 1,000+ years.

I read how they tested them for 1,000 years: they exposed some discs to extreme heat, cold, all spectra of light, and some other harsh conditions for 24 hours, and of all the discs put through the test, only the M-Discs survived. All others failed. But they were tested for 24 hours, and the conclusion was that they will last 1,000 years. That's a big stretch, if you ask me, to assume a 24-hour test can prove something will last 1,000 years. I won't be around in 1,000 years, but I would like to know that 30+ years from now, if I went to use them in my 80s, they'd still be readable. I have my doubts.

These discs aren't my only backups. But, they're my latest backup.

But what worries me the most is how hard a time the drive seems to be having reading these discs, and they're only about 6 months old.
 
I've heard seeking like this too, and a good number of times it's actually the drive. Have you tried reading them in another drive?
 