Nanoscale 'Abacus' Uses Pulses of Light Instead of Wooden Beads to Perform Calculations

DooKey
[H]F Junkie (joined Apr 25, 2001; 12,892 messages)
This pioneering new technique could pave the way to new, more powerful computers that combine computing and storage functions in one element -- a move away from conventional computers that treat these two functions as separate. In my opinion, it's inventions like these that are destined to make the supercomputers of the future more powerful than we can imagine today. Pair this with AI and the sky is the limit. Read the study here.

"Computing with light - and not with electrons, as is the case with traditional computers -means that we can develop much faster systems which can be connected using integrated optical waveguides." adds co-author Prof. Harish Bhaskaran from the University of Oxford.
 
Light-based computers have never really taken off. There was some work on an organic glass compound to make a sort of gate array, but it never went anywhere. Photons get very messy, very quickly.
 
If they figure out non-destructive reads, this could be the next big leap. The only thing giving me pause is that the cycle endurance seems a bit low: 10^15 cycles sounds big, but not when we're talking "potential GHz operations." The data persistence measured in years is neat, though, and it also means little to no standby current draw.
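
To put that endurance number in perspective, here's a quick back-of-envelope sketch in Python. The 10^15 cycles comes from the article; the sustained 1 GHz write rate is just the quoted "potential," so treat both as illustrative assumptions:

[code]
# Back-of-envelope check on the endurance figures from the article.
# Both numbers below are assumptions for illustration, not measured specs.

ENDURANCE_CYCLES = 1e15   # quoted cycle endurance
WRITE_RATE_HZ = 1e9       # the "potential GHz operations" figure

lifetime_s = ENDURANCE_CYCLES / WRITE_RATE_HZ
print(f"Cell lifetime at 1 GHz writes: {lifetime_s:.0f} s "
      f"(~{lifetime_s / 86400:.1f} days)")
# -> ~1,000,000 s, or roughly 11.6 days of nonstop writes to a single
# cell. That's why wear leveling would matter here just like in flash.
[/code]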

For those that don't want to RTA, it's essentially an optical abacus built in a phase-change material that can be manipulated into many states using light pulses, down to the picosecond in its current form. Using a square grid of the devices connected with waveguides, you can use intersecting pulses to perform add/subtract/multiply/divide (ASMD) operations with actual decimal numbers (two digits, as demonstrated). According to the study you could also use base 16 or whatever, depending on how many states you can assign per cell.

The really big deal is that the result isn't just computed but stored as well, just like an abacus, so there's no need to take that result and move it somewhere else if you're going to run another operation against it or use it in other calculations. I wouldn't think of it replacing RAM and/or block storage, rather eliminating the juggle fest of current CPU cache levels. It also looks like these should be able to be stacked. If they can really scale it, things could get interesting.
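
For anyone who wants to play with the idea, here's a toy Python model of the abacus part as I understand it: each cell is a multilevel phase-change element, a write pulse bumps it up one state, and overflow carries into the next cell, so the row's state IS the running result. The names, the base, and the carry handling are my own simplifications, not the paper's actual device:

[code]
# Toy model of the "optical abacus": a row of multilevel cells where each
# pulse bumps one cell's state and overflow carries to the next cell.
# Simplified for intuition only -- not the paper's actual scheme.

class AbacusRow:
    def __init__(self, n_cells: int, base: int = 10):
        # base = number of distinguishable phase states per cell;
        # per the study, base 10, base 16, etc. are all fair game.
        self.base = base
        self.cells = [0] * n_cells  # least-significant cell first

    def pulse(self, cell: int, count: int = 1) -> None:
        """Apply `count` write pulses to one cell, carrying on overflow.
        Carry past the last cell is silently dropped in this toy."""
        self.cells[cell] += count
        while cell < len(self.cells) and self.cells[cell] >= self.base:
            carry, self.cells[cell] = divmod(self.cells[cell], self.base)
            cell += 1
            if cell < len(self.cells):
                self.cells[cell] += carry

    def value(self) -> int:
        """Read the row back out (assumes reads are non-destructive)."""
        return sum(d * self.base**i for i, d in enumerate(self.cells))

row = AbacusRow(n_cells=4)
row.pulse(0, 37)   # "add 37" as pulses into the ones cell
row.pulse(0, 58)   # add 58 on top; the result just stays in the cells
print(row.cells, "=", row.value())   # [5, 9, 0, 0] = 95
[/code]

Run it and you get [5, 9, 0, 0] = 95, with no separate store step. In the real device the "pulse" would be an optical write and the readout presumably a low-power probe, but the carry bookkeeping is the same idea.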

It may not replace the mainstream PC any time soon, but I could see some specialty applications really benefiting from this once the software layer is worked out. Simulations come to mind: if you could get a large enough array to fit the whole data set, you could run it to quiescence much faster than having to constantly juggle parts of it around through RAM/cache. If you couldn't, then clusters with direct optical interconnects should still scale better than what we have today. I'm still trying to wrap my head around the data density, RAM caching, wear leveling, and software ramifications, but that seems more appropriate for a lazy afternoon with a nice single malt.
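
Here's the kind of napkin math I mean on the simulation side: a conventional iterative solver streams its whole state through the memory bus every iteration, which in-place compute would skip. Every number below (state size, iteration count, bandwidth) is made up purely for illustration:

[code]
# Rough feel for why compute-in-memory helps an iterative simulation.
# All figures are hypothetical, chosen only to make the point.

state_bytes = 8 * 10**9   # assumed 8 GB simulation state
iterations = 10_000       # iterate until quiescence
mem_bw = 50e9             # assumed ~50 GB/s effective DRAM bandwidth

# Each iteration reads and writes the full state across the memory bus.
traffic = 2 * state_bytes * iterations
print(f"Data shuttled: {traffic / 1e12:.0f} TB, "
      f"memory-bus time alone: {traffic / mem_bw / 3600:.1f} h")
# -> 160 TB and ~0.9 h of pure data movement that an in-place array
# would simply not do -- assuming it's dense enough to hold the set.
[/code]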

In the meantime, I'm still waiting for that Mr. Fusion I pre-ordered...
 