# 500TB in silica glass about the size of a CD

#### KD5ZXG


Though a caption originally attached to the news photo said something like 1.5 GByte per 8.8 mm square.
How exactly does that figure to 500TB? Best check the math, because I only extrapolate ~193GB.
https://phys.org/news/2021-10-high-speed-laser-method-terabytes-cd-sized.html
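For what it's worth, here's my back-of-the-envelope, assuming a standard 120 mm disc with the usable area starting around a 22.5 mm radius (my assumption about the geometry, not anything stated in the article):

```python
import math

# Caption figure: 1.5 GB per 8.8 mm square patch (read as 8.8 mm x 8.8 mm).
patch_area_mm2 = 8.8 ** 2
gb_per_patch = 1.5

# Assumed usable annulus on a CD-sized disc: outer radius 60 mm,
# inner radius ~22.5 mm (roughly where the data area starts on a real CD).
outer_r, inner_r = 60.0, 22.5
disc_area_mm2 = math.pi * (outer_r**2 - inner_r**2)

total_gb = disc_area_mm2 / patch_area_mm2 * gb_per_patch
print(f"{total_gb:.0f} GB")  # ≈ 188 GB: same ballpark as my ~193GB, nowhere near 500TB
```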

I found the same article at SciTech Daily, this time with a proper reference to the original source.
https://www.osapublishing.org/optica/fulltext.cfm?uri=optica-8-11-1365&id=462661
"10^6 voxels/s, corresponding to a demonstrated fast information recording of ∼225kB/s
and a potentially high-density data storage of ∼500TB/disk." So that agrees with the 500TB claim.

Any relationship to this old thread? Or maybe they dredged up an obsolete photo and caption?

Yeah, that's what confused me: new article + old photo. Makes sense now...

25TB high-density-platter HDDs were supposed to be the norm by now, but SSDs are proving they don't have the capacity, just speed.

If that "fast information recording of ∼225kB/s"
means the speed you write to it, it's still a good while before being practical.

500TB would be roughly 25,700 days of writing non-stop (and maybe why they used theory instead of just testing it to evaluate the potential size).
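A quick sanity check of that figure, assuming the ∼225kB/s really is the total sustained write rate (a single beam, no parallelism):

```python
capacity_bytes = 500e12   # the claimed 500 TB
write_rate = 225e3        # the paper's demonstrated ~225 kB/s

seconds = capacity_bytes / write_rate
days = seconds / 86400
years = days / 365.25
print(f"{days:,.0f} days ≈ {years:.0f} years")  # ≈ 25,720 days, about 70 years
```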

If that "fast information recording of ∼225kB/s"
means the speed you write to it, it's still a good while before being practical.

500TB would be roughly 25,700 days of writing non-stop (and maybe why they used theory instead of just testing it to evaluate the potential size).
It’s a stable storage medium that doesn’t degrade over time. You could store data on it for billions of years with no degradation. Degrading storage is a problem that plagues libraries and museums everywhere: their data, research, and documents are literally rotting away, and they spend obscene amounts restoring photographs and transferring data between discs, tape, and HDDs, just because those rot over time.

Well, you can always use many of them if the speed is too low for your type of data. My "impractical" comment was about whether 500TB instead of 100TB has any value at that write speed for the purpose above: in the time it would take to fill even 100TB, one can imagine a much better option would be there to buy.

25TB high-density-platter HDDs were supposed to be the norm by now, but SSDs are proving they don't have the capacity, just speed.
And as we know, we all desire SPEED, right?

Well, you can always use many of them if the speed is too low for your type of data. My "impractical" comment was about whether 500TB instead of 100TB has any value at that write speed for the purpose above: in the time it would take to fill even 100TB, one can imagine a much better option would be there to buy.

You are ignoring the point again: it is not about speed, it is about longevity. Glass is the most practical method to store data indefinitely (or near enough). This is a huge boon to any massive long-term data store; they won't be bothered by the write time.

You are ignoring the point again: it is not about speed, it is about longevity. Glass is the most practical method to store data indefinitely (or near enough). This is a huge boon to any massive long-term data store; they won't be bothered by the write time.
You're missing his point. Capacity is useless if it is never possible to reach that capacity writing data continuously at some abysmal rate.

Everyone can see the value of having data integrity over time. There is no value in a 500TB drive vs a 100GB drive if it takes decades to write 100GB. Even less so for critical data backups, as I would assume it's preferred to write quickly and then store the drive cold for a long time.

You're missing his point. Capacity is useless if it is never possible to reach that capacity writing data continuously at some abysmal rate.

Everyone can see the value of having data integrity over time. There is no value in a 500TB drive vs a 100GB drive if it takes decades to write 100GB. Even less so for critical data backups, as I would assume it's preferred to write quickly and then store the drive cold for a long time.
The practical application for this is not a 10x10 sheet; it's something half the size of a credit card. And these are basically write-once items, where a special write device can write multiple data streams at once. These are long-term archival storage, not for an active system; it's expensive microfiche for archives. Microsoft and Hitachi have been tackling 5D storage since 2013. They nailed it down in 2018, when the first two discs were loaded with the entire Arch Mission Foundation's repository. One was gifted to Elon Musk, where it lives in his personal library; the second is currently in the glovebox of a Tesla Roadster that resides in orbit.

Edit: Supposedly a third one of these discs was loaded with the Arch Mission Foundation's database in 2019, and it was successfully crashed into the Moon as part of the Israeli Beresheet project.

It's an experimental prototype; I'm sure they'll figure out the write speed. It seems they're at the stage of just proving the thing works, and usability refinements always come. If, by the time the first product is available, throughput is only increased by 100x (8.5 months to write 500TB) or 1000x (25 days), that would still be worth it for institutions that wish to create extremely long-term archives of large datasets that won't change. I doubt we'll see these replacing tapes for commercial backups anytime soon, but even with low bandwidth it seems worth it for the "preserving cultural knowledge for thousands of years" type of project. The kinds of organizations that make those archives have an almost religious approach (see the Svalbard Seed Vault); I think they could handle waiting a while for a disc write if their project is operating on long timescales.

You can already buy M-Disc, which is cheap, lasts 1,000 years, goes up to 100GB, and exists today.

https://www.mdisc.com/

It's an experimental prototype; I'm sure they'll figure out the write speed. It seems they're at the stage of just proving the thing works, and usability refinements always come. If, by the time the first product is available, throughput is only increased by 100x (8.5 months to write 500TB) or 1000x (25 days), that would still be worth it for institutions that wish to create extremely long-term archives of large datasets that won't change. I doubt we'll see these replacing tapes for commercial backups anytime soon, but even with low bandwidth it seems worth it for the "preserving cultural knowledge for thousands of years" type of project. The kinds of organizations that make those archives have an almost religious approach (see the Svalbard Seed Vault); I think they could handle waiting a while for a disc write if their project is operating on long timescales.
The article is not about the storage medium, those discs have been in use since 2018, the article is about the new laser optical effects and how they got the write speed 4x faster so they can fill that disc in 60 days now.

From the article:
The researchers used their new method to write 5 gigabytes of text data onto a silica glass disc about the size of a conventional compact disc with nearly 100% readout accuracy. Each voxel contained four bits of information, and every two voxels corresponded to a text character. With the writing density available from the method, the disc would be able to hold 500 terabytes of data. With upgrades to the system that allow parallel writing, the researchers say it should be feasible to write this amount of data in about 60 days.
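Working backwards from that quote, and assuming the 60-day figure applies to the full 500TB, the implied throughput and parallelism over the demonstrated single-beam rate would be roughly:

```python
capacity = 500e12            # 500 TB target from the quote
target_seconds = 60 * 86400  # the quoted ~60 days

needed_rate = capacity / target_seconds   # bytes/s needed overall
single_beam = 225e3                       # demonstrated ~225 kB/s per beam
parallelism = needed_rate / single_beam   # on the order of ~430 parallel channels

print(f"{needed_rate/1e6:.0f} MB/s, ~{parallelism:.0f}x parallel")
```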

The math checks out; it's eBay math, where 2TB USB drives for \$30 somehow only write about 30GB before filling up.

Still waiting on the memristor flash storage from 2014

You are ignoring the point again: it is not about speed, it is about longevity. Glass is the most practical method to store data indefinitely (or near enough). This is a huge boon to any massive long-term data store; they won't be bothered by the write time.
Not sure how I am ignoring any point by saying that if it writes at the quoted speed, having that capacity sounds impractical. I would argue you do not seem to address the point: below a certain speed, either you acquire information faster than you can write it, making the backup irrelevant, or you need to keep it on something else for years anyway.

The correct answer to my objection is simply that they sound confident they could make it thousands of times faster: "With upgrades to the system that allow parallel writing, the researchers say it should be feasible to write this amount of data in about 60 days."

They've been talking about this shit since I was in college. Can it be done? Sure. Economically scalable to anything useful to consumers? Yes, just after graphene hits store shelves en masse.

Write speed isn't remotely the point. This has absolutely nothing to do with data usage but rather data storage.

Something like this is a holy grail for backups. People think things on the internet are forever, but data is really, REALLY fragile right now. It is so ridiculously easy to have data lost forever due to a single drive failure at the wrong time. Physical storage like a piece of paper or acetate is shockingly more stable; even if a book gets dunked into a water tank, it might be salvageable.

My research data doesn't need speed as long as I know someone in 100 years can actually USE it.

You're missing his point. Capacity is useless if it is never possible to reach that capacity writing data continuously at some abysmal rate.

Everyone can see the value of having data integrity over time. There is no value in a 500TB drive vs a 100GB drive if it takes decades to write 100GB. Even less so for critical data backups, as I would assume it's preferred to write quickly and then store the drive cold for a long time.

This isn't critical data backups; it's archival of historical data and knowledge. As long as the write speed is faster than the degradation of existing media, why does it matter how long it takes?

Not sure how I am ignoring any point by saying that if it writes at the quoted speed, having that capacity sounds impractical. I would argue you do not seem to address the point: below a certain speed, either you acquire information faster than you can write it, making the backup irrelevant, or you need to keep it on something else for years anyway.

The correct answer to my objection is simply that they sound confident they could make it thousands of times faster: "With upgrades to the system that allow parallel writing, the researchers say it should be feasible to write this amount of data in about 60 days."
The correct answer is that you have an incorrect assumption about its use. See the above post.

The correct answer is that you have an incorrect assumption about its use. See the above post.
Under what usage would a medium that takes some 70 years to fill, writing non-stop at max speed 24/7, be practical, instead of buying one that would take 5-10 years and, after that time, buying the newer, better, less expensive version? (Well, maybe on a long space voyage.)

Write speed isn't remotely the point. This has absolutely nothing to do with data usage but rather data storage.

Something like this is a holy grail for backups. People think things on the internet are forever, but data is really, REALLY fragile right now. It is so ridiculously easy to have data lost forever due to a single drive failure at the wrong time. Physical storage like a piece of paper or acetate is shockingly more stable; even if a book gets dunked into a water tank, it might be salvageable.

My research data doesn't need speed as long as I know someone in 100 years can actually USE it.
The write speed is kind of the point; the whole article is about how they invented a new type of low-power laser that can be used in an array, which took the write time down from hundreds of days to 60…

But yes, this is for archival data that will sit on a shelf, or in a bunker, or in a crater on the Moon, or in a car's glovebox in low orbit…

Under what usage would a medium that takes some 70 years to fill, writing non-stop at max speed 24/7, be practical, instead of buying one that would take 5-10 years and, after that time, buying the newer, better, less expensive version? (Well, maybe on a long space voyage.)
Dude, the article is about how they got the write speed up to the point where they can fill the disc in 60 days… We are already using these discs: we have them in orbit, on the surface of the Moon, in Elon Musk's personal library, and one scheduled for the doomsday vault in the Arctic.

As long as the write speed is faster than the degradation of existing media, why does it matter how long it takes?
Would kind of suck, though, to wait 10 years and get a bad burn.

Would kind of suck, though, to wait 10 years and get a bad burn.
60 days, according to the article; the new laser they invented can be used in an array.

Dude, the article is about how they got the write speed up to the point where they can fill the disc in 60 days… We are already using these discs: we have them in orbit, on the surface of the Moon, in Elon Musk's personal library, and one scheduled for the doomsday vault in the Arctic.
I am fully aware of that; what it has to do with my point, I am not sure. 500TB would not be practical at roughly 7x the speed of a floppy disk.

People can simply answer whether the "fast information recording of ∼225kB/s" number is misleading, or whether they are writing multiple streams in parallel, etc...

If one argues that there is a use for a medium that takes decades to fill at that writing speed (instead of buying a better one a decade later, when a smaller one would still not be filled), do so.

The message was:
"If that 'fast information recording of ∼225kB/s' means the speed you write to it, it's still a good while before being practical."

The "if" at the beginning was not rhetorical; it was quite literal.

I am fully aware of that; what it has to do with my point, I am not sure. 500TB would not be practical at roughly 7x the speed of a floppy disk.

People can simply answer whether the "fast information recording of ∼225kB/s" number is misleading, or whether they are writing multiple streams in parallel, etc...

If one argues that there is a use for a medium that takes decades to fill at that writing speed (instead of buying a better one a decade later, when a smaller one would still not be filled), do so.

The message was:
"If that 'fast information recording of ∼225kB/s' means the speed you write to it, it's still a good while before being practical."

The "if" at the beginning was not rhetorical; it was quite literal.
The storage medium's strengths are that it can literally withstand a nuclear blast; we have them in orbit, on the Moon, in doomsday vaults. These discs have been in use for the past several years. The disc is not the new part of the article. The breakthrough here is that they discovered a new property of light that let them create a new laser which uses far less energy and can be used in an array, allowing them to take the existing write time of over a year down to a few months.

“The researchers used their new method to write 5 gigabytes of text data onto a silica glass disc about the size of a conventional compact disc with nearly 100% readout accuracy. Each voxel contained four bits of information, and every two voxels corresponded to a text character. With the writing density available from the method, the disc would be able to hold 500 terabytes of data. With upgrades to the system that allow parallel writing, the researchers say it should be feasible to write this amount of data in about 60 days.”

That line is ambiguous. Are they writing 5GB or 500TB in 60 days? I guess 5GB is not bad if it is text data.

There is a lot of text information, news articles, social media, blog posts, source code, books, etc. that could easily fit in 5GB.

But they would need to greatly improve, several orders of magnitude, the write speed before it is practical.

Again, I can write 100GB on an M-Disc right now (I have the burner) in about 2 hours, and those last 1,000 years.

Granted, it is an optical format, so it is still susceptible to scratches, fire, etc., but in a water/fireproof safe, I think that would be pretty solid.
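For comparison, a rough sketch of the two write rates in play, taking the 100GB-in-about-2-hours M-Disc figure above at face value:

```python
mdisc_rate = 100e9 / (2 * 3600)  # 100 GB in ~2 hours, as claimed above
glass_rate = 225e3               # the paper's demonstrated single-beam ~225 kB/s

print(f"M-Disc ≈ {mdisc_rate/1e6:.1f} MB/s, "
      f"~{mdisc_rate/glass_rate:.0f}x the demonstrated glass rate")
```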

This isn't critical data backups, its archival of historical data and knowledge. As long as the write speed is faster than the degradation of existing media then why does it matter how long it takes?
It still doesn't make sense, since RAID exists and planning for failures is easier than making something that can't fail.

It still doesn't make sense, since RAID exists and planning for failures is easier than making something that can't fail.
A war, natural disaster, or even an extreme solar event can vaporize most methods of backup. Sure, the stars have to align for most major failures like this, but it is entirely plausible.
This kind of thing isn't for tomorrow's backup, or even end-of-your-life backups. It's storage for the future. Say we get whacked by a rock and infrastructure is toast. By the time people recover enough to be able to maintain HDD/SSD/tape tech, you now have the ability to bring up historical data and get a head start on rebuilding. It's also the only really viable generation-ship information storage to ensure history is retained.

While it's cool they figured out how to write faster, write speed was never the point. It's just gravy on cool science.

It still doesn't make sense, since RAID exists and planning for failures is easier than making something that can't fail.
That works for active data, but not for offline or archival data. Big libraries spend huge amounts just transferring data from old CDs to new DVDs, from the old DVDs to the new Blu-rays, from the old Blu-rays to the new storage server, then from storage server to storage server as they upgrade. And with each transfer there is some degradation, so then they are spending money on touching up and repairing digital files. These discs are basically modern bomb-proof microfiche: you put the data on there, and it can sit in an archive on a shelf, untouched and unseen, for the rest of this planet's lifetime, and it will still be exactly the same as the day it was first written. Archival libraries have been paying for this research since 2013 and have been getting them installed since 2018. RAID arrays and backups take tech; they are complex, and they require constant monitoring, supervision, maintenance, and planning. These discs need a shelf.

You are ignoring the point again: it is not about speed, it is about longevity. Glass is the most practical method to store data indefinitely (or near enough). This is a huge boon to any massive long-term data store; they won't be bothered by the write time.

Not true at all. Glass is a liquid with a very, very high viscosity, and if we need molecular stability to store info, glass is NOT it. Windows are thicker at the bottom than at the top after decades. That's material flowing.

We should probably write our data on bismuth crystals if we want stability.

It’s a stable storage medium that doesn’t degrade over time. You could store data on it for billions of years with no degradation. Degrading storage is a problem that plagues libraries and museums everywhere: their data, research, and documents are literally rotting away, and they spend obscene amounts restoring photographs and transferring data between discs, tape, and HDDs, just because those rot over time.
The biggest threat to the data, I'd imagine, even if it were a viable, stable storage medium, is whether, on such a long future timeline, it would be both readable by the available tech and recognized as a storage format by those who encountered it.

I could foresee a scenario where something important ends up somewhere else and is mistaken for some etched decorative piece. With Voyager's Golden Record they considered this aspect and attempted to draw attention to how it could be read (albeit in a high-level way). Of course, at this point it's all at the proof-of-concept stage, but it's still interesting to consider.

Not true at all. Glass is a liquid with a very, very high viscosity, and if we need molecular stability to store info, glass is NOT it. Windows are thicker at the bottom than at the top after decades. That's material flowing.

We should probably write our data on bismuth crystals if we want stability.
That's not exactly true. While technically glass does flow, it's far, far too slow to have any effect on any human timescale. The old windows that are thicker at the bottom are that way because of how they were manufactured, and a convention of installing them with the thicker part down, not because of changes over time. In fact, if you look at actual medieval windows, you can sometimes notice ones that have the thicker part at the top because they were originally installed upside down.

If you were given the task of digitizing a library, would you wait until you had scanned every book before writing to disc, or would you burn them as you went along?
That writing speed would be fine if you burned as you went along.

Not true at all. Glass is a liquid with a very, very high viscosity, and if we need molecular stability to store info, glass is NOT it. Windows are thicker at the bottom than at the top after decades. That's material flowing.

We should probably write our data on bismuth crystals if we want stability.
"Glass" isn't always glass. In fact, most non-window "glass" is actually ceramic.

The thickness issues pre-1800s were mostly due to the fact that glass manufacture basically consisted of blowing air into a cylinder and then cutting it. Flat glass wasn't manufactured with any real quality until much later.

The glass used in this project is effectively a crystal. Look at any diamond to see how long they remain exactly the same.

The biggest threat to the data, I'd imagine, even if it were a viable, stable storage medium, is whether, on such a long future timeline, it would be both readable by the available tech and recognized as a storage format by those who encountered it.

I could foresee a scenario where something important ends up somewhere else and is mistaken for some etched decorative piece. With Voyager's Golden Record they considered this aspect and attempted to draw attention to how it could be read (albeit in a high-level way). Of course, at this point it's all at the proof-of-concept stage, but it's still interesting to consider.
Well, the Arch Mission Foundation did put one on the Moon, and Elon Musk, who is a supporter of the project, put one of these discs loaded with their data in the glovebox of that Roadster they put in orbit. So it's not impossible…

That's not exactly true. While technically glass does flow, it's far, far too slow to have any effect on any human timescale. The old windows that are thicker at the bottom are that way because of how they were manufactured, and a convention of installing them with the thicker part down, not because of changes over time. In fact, if you look at actual medieval windows, you can sometimes notice ones that have the thicker part at the top because they were originally installed upside down.
On a human timescale, sure, but the statement I was replying to said billions of years.

That line is ambiguous. Are they writing 5GB or 500TB in 60 days? I guess 5GB is not bad if it is text data.

There is a lot of text information, news articles, social media, blog posts, source code, books, etc. that could easily fit in 5GB.

But they would need to greatly improve, several orders of magnitude, the write speed before it is practical.

Again, I can write 100GB on an M-Disc right now (I have the burner) in about 2 hours, and those last 1,000 years.

Granted, it is an optical format, so it is still susceptible to scratches, fire, etc., but in a water/fireproof safe, I think that would be pretty solid.
It’s for the 500TB, should their new lasers be installed in the appropriate array. But yeah, an M-Disc stored out of direct sunlight at room temperature will last a long time if not damaged, so for many things it is an awesome solution.
These silicate discs are good for archives and museums. Over time things get thrown in boxes, moved and shipped, handled, catalogued, organized, and then reboxed, by everyone from professional curators to hung-over college students. These can take that sort of punishment and more. I recall a demo Hitachi put on back in 2015 or so, where they took a disc of this stuff maybe 1.5” in diameter, placed it on a cinder block, smashed both disc and cinder block with a sledgehammer, picked the disc up, dusted it off, put it in the reader, and verified the data integrity.

On a human timescale, sure, but the statement I was replying to said billions of years.
From the page I linked to:
"A mathematical model shows it would take longer than the universe has existed for room-temperature cathedral glass to rearrange itself to appear melted."
I don't know how they define "melted", but it's plausible that data stored in glass could be retrieved after billions of years.

From the page I linked to:

I don't know how they define "melted", but it's plausible that data stored in glass could be retrieved after billions of years.
Retrieved, sure, but not necessarily directly read. Glass is brittle; it has its purposes, but as a handled storage medium, I'm not so sure…