IBM's Holey Optochip

HardOCP News

IBM's Holey Optochip is capable of transferring a terabit of information per second, and it's the holes that make it go faster.

Known as the "Holey Optochip," the prototype optical chipset can transfer the equivalent of 500 high-definition movies a second, or the entire U.S. Library of Congress Web archive in an hour, Big Blue said. The innovation is possible because IBM's scientists figured out that, by drilling 48 minuscule holes in a standard quarter-inch silicon CMOS chip, they were able to ramp up data transfer rates well beyond what was previously possible.
 
Did anyone else actually read the article? The amount of cluelessness this reporter has is unimaginable. He doesn't once state what the holes actually do, and he makes gross misrepresentations of how fast the "chip" is. 1 Terabit/s =/= 500 HD movies/sec

CNet needs to dump this clown of a reporter like a prom night dumpster baby.
 
Are these holes supposed to act in a similar fashion to Myelin sheath cells along an axon in a nerve cell? Because that would be brilliant if that sort of organic design started to make its way into electronics/computing.
 
He doesn't once state what the holes actually do.

Agreed. I was really looking forward to seeing what the holes achieve. Now I'm still as clueless as when I read the article. The author didn't explain crap.

Now I'm left thinking it's just another way to reach my "unlimited" data cap on my portable device in under a second. ;)
 
Are these holes supposed to act in a similar fashion to Myelin sheath cells along an axon in a nerve cell? Because that would be brilliant if that sort of organic design started to make its way into electronics/computing.


You just said that.
 
Looks like the chip actually transmits light pulses (information) across the open spaces using (really small) photodiodes.
 
Are these holes supposed to act in a similar fashion to Myelin sheath cells along an axon in a nerve cell? Because that would be brilliant if that sort of organic design started to make its way into electronics/computing.

Holy shit, the birth of the Terminator neural net processor! :eek:


Anyway, pretty cool stuff, IBM. Now get those to market by year end for dirt cheap.
 
Looks like the chip actually transmits light pulses (information) across the open spaces using (really small) photodiodes.

What do you do for a living? Out of curiosity. You seem to know a lot about this field.
 
What do you do for a living? Out of curiosity. You seem to know a lot about this field.

Mechanical Engineering and Business Economics Major (Third Year). I do IT work part time to get through college. I get most of my information from curious and diligent reading.
 
Did anyone else actually read the article? The amount of cluelessness this reporter has is unimaginable. He doesn't once state what the holes actually do, and he makes gross misrepresentations of how fast the "chip" is. 1 Terabit/s =/= 500 HD movies/sec

CNet needs to dump this clown of a reporter like a prom night dumpster baby.
Yeah, the reporter really doesn't know what they are talking about. Most of the analogies are taken directly from the IBM press release. Also, LOL at the idea of drilling through a silicon wafer. :p
 
It looks like an answer to the quest for easily made silicon that can read optical signals. It skips the steps of converting light to electrical signals. I believe Intel's Light Peak was looking for something like this too.
 
Seriously, /mindblown. I want to know how that works. Like, seriously, how on earth can drilling holes in a microchip make it that much faster? And why did it take us 30 years to stumble on this?

Also, mad props to IBM. They are one of the most innovative, forward-thinking companies in the world, and they really do not get enough credit for it.
 
Sweet, so we now see a glimpse of the future where you will hit your data cap in about a second :D
 
A bit more on the holes:

A single 90-nanometer IBM CMOS transceiver IC with 24 receiver and 24 transmitter circuits becomes a Holey Optochip with the fabrication of forty-eight through-silicon holes, or “optical vias” – one for each transmitter and receiver channel. Simple post-processing on completed CMOS wafers with all devices and standard wiring levels results in an entire wafer populated with Holey Optochips. The transceiver chip measures only 5.2 mm x 5.8 mm. Twenty-four channel, industry-standard 850-nm VCSEL (vertical cavity surface emitting laser) and photodiode arrays are directly flip-chip soldered to the Optochip. This direct packaging produces high-performance, chip-scale optical engines. The Holey Optochips are designed for direct coupling to a standard 48-channel multimode fiber array through an efficient microlens optical system that can be assembled with conventional high-volume packaging tools.

http://www.physorg.com/news/2012-03-holey-optochip-trillion-bits-power.html
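
As a rough sketch of why those numbers are impressive, here is some back-of-the-envelope Python using the die size and channel count from that quote; the ~1 Tb/s aggregate rate is taken from the article headline, and the density figure is my own arithmetic rather than anything IBM quotes.

# Back-of-the-envelope figures from the quoted die size and channel count.
# The ~1 Tb/s aggregate rate is an assumption taken from the article headline.
die_area_cm2 = 5.2 * 5.8 / 100        # 5.2 mm x 5.8 mm die = ~0.30 cm^2
optical_vias = 48                     # 24 transmit + 24 receive channels
aggregate_tbps = 1.0                  # ~1 terabit per second headline figure

print(round(aggregate_tbps / die_area_cm2, 1), "Tb/s per cm^2 of die area")   # ~3.3
print(round(aggregate_tbps * 1000 / optical_vias, 1), "Gb/s per optical via") # ~20.8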
 
I remember drilling holes in my motherboard to make a CPU cooler fit. It didn't make it any faster; it just made it stop working. Then again, I only drilled four, not 48.
 
Sign me up for the "WTF do the holes actually do" camp. Thanks IBM! And don't forget to put an 11 on the volume dial!!
 
Are these holes supposed to act in a similar fashion to Myelin sheath cells along an axon in a nerve cell? Because that would be brilliant if that sort of organic design started to make its way into electronics/computing.

I was about to say that!
 
I just tried this on my gaming computer. I drilled 48 holes in my Core i7, and now it won't boot. I call BS on this article...

/s/
 
[image: speed holes.jpg]
 
From Raffin's post / quote:

photodiode arrays are directly flip-chip soldered to the Optochip


That's why the holes are there: without them, the photodiode array would be completely blocked by the flip-chip soldering to this chip. With the holes, the light signals going in and out of the photodiode array, which is basically sandwiched up against this chip, can pass right through unblocked.

And by flip-chip mounting the photodiode array, the interconnects between the two devices are basically next to nothing (with almost no parasitic capacitance), which is how they are able to get the speed to such ridiculous levels and still have very good power use.
 
1Tb/s = 2.5 DL Blu-ray discs per second, not 500... Maybe these HD movies he is talking about are very, very short...

Standard Blu-ray encoding is about 20Mbps, so one second of transfer is about 14 hours of video, or roughly 7 movies. Math fail for the win :D
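
For anyone who wants to check, both of those figures work out; this assumes decimal units, 50 GB for a dual-layer disc, and the ~20 Mb/s bitrate mentioned above:

# Sanity check on the two figures above, using decimal units throughout.
bits_per_second = 1e12                     # 1 Tb/s
bytes_per_second = bits_per_second / 8     # 125 GB/s

dl_bluray_bytes = 50e9                     # dual-layer Blu-ray, 50 GB
print(bytes_per_second / dl_bluray_bytes, "DL Blu-ray discs per second")  # 2.5

bluray_bitrate_bps = 20e6                  # ~20 Mb/s Blu-ray video stream
print(round(bits_per_second / bluray_bitrate_bps / 3600, 1),
      "hours of video per second of transfer")                            # 13.9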
 
1Tbps = 125GB/s =/= 500 HD movies per second. Reporter fail.
 
1Tb/s = 2.5 DL Blu-ray discs per second, not 500... Maybe these HD movies he is talking about are very, very short...

Standard Blu-ray encoding is about 20Mbps, so one second of transfer is about 14 hours of video, or roughly 7 movies. Math fail for the win :D

That's per thread, and maybe this new chip is capable of 20 simultaneous threads. :eek:

LOL
 
That's per thread, and maybe this new chip is capable of 20 simultaneous threads. :eek:

LOL

Not sure what you mean by threads in this case. This is not a general purpose processor. It has "48 channels, each moving 20 gigabits per second, for a total of 960 gigabits".
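
Just to tie the thread's numbers together, the per-channel figure and the headline figure are the same claim after rounding; this is plain arithmetic on the quoted specs:

channels = 48                  # 24 transmit + 24 receive
gbps_per_channel = 20          # quoted per-channel line rate
aggregate_gbps = channels * gbps_per_channel                      # 960 Gb/s
print(aggregate_gbps, "Gb/s, i.e.", aggregate_gbps / 8, "GB/s")   # 960 Gb/s, i.e. 120.0 GB/s
# IBM rounds 960 Gb/s up to "a terabit per second", which is where the
# 125 GB/s figure earlier in the thread comes from.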
 
Seriously, /mindblown. I want to know how that works. Like, seriously, how on earth can drilling holes in a microchip make it that much faster? And why did it take us 30 years to stumble on this?

Also, mad props to IBM. They are one of the most innovative, forward-thinking companies in the world, and they really do not get enough credit for it.
The thing is, you don't drill holes in a silicon wafer. With 48 holes per die on a 12" wafer, you are looking at well over 100,000 holes per wafer, with a drill bit less than 0.1mm in diameter. Try doing that and you will probably just shatter every single wafer before you finish drilling. The holes are made by etching, which is a pretty standard process for MEMS or through-silicon vias.
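
A rough count, assuming a 300 mm (12") wafer and the 5.2 mm x 5.8 mm die size from the press release, and ignoring edge loss and scribe lines:

import math

# Rough per-wafer hole count; the wafer and die sizes are assumptions taken
# from the thread, and edge loss / scribe lines are ignored.
wafer_area_mm2 = math.pi * (300 / 2) ** 2               # ~70,700 mm^2
die_area_mm2 = 5.2 * 5.8                                # ~30.2 mm^2
dies_per_wafer = int(wafer_area_mm2 // die_area_mm2)    # ~2,300 dies
print(dies_per_wafer * 48, "optical vias per wafer")    # ~112,000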

What this paper is proposing is a chip-scale packaging technique to better interface the optical transceivers with the CMOS die. I have not looked at the current packaging techniques for optical transceivers, but I suspect they are designed to be wirebonded so the lasers point out of the package. What they are doing here is flip-chipping the optical transceivers onto the CMOS die and then flip-chipping that onto the PCB. The holes are needed because, when you flip-chip the transceiver, the lasers face the CMOS die. The advantage of this technique is that you simplify the packaging by not needing to wirebond, and you also shorten the path to the transmitters on the CMOS die that drive the lasers, which reduces power and design complexity. What isn't talked about is how the CMOS die is now supposed to be cooled. If this were done on a microprocessor, you would somehow need to figure out how to route the optical fibers through the heatsink without degrading its thermal transfer characteristics.
 
I would like to know how it works; my guess is optical wavelength tuning, on much the same principle as RF tuning in radar array antennae.


1Tb/s = 2.5 DL Blu-ray discs per second, not 500... Maybe these HD movies he is talking about are very, very short...

Standard Blu-ray encoding is about 20Mbps, so one second of transfer is about 14 hours of video, or roughly 7 movies. Math fail for the win :D

Stop with the bullshit already please!

If an "HD movie" means around 2.5GB of compressed video rather than a full Blu-ray rip, then it can do ~50 movies per second, which means the article was only off by one decimal place, which is probably a simple typo that you dolts just can't seem to stop making a big deal of.
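
Since the whole argument really comes down to what an "HD movie" weighs, here is the spread in one place; the file sizes below are illustrative assumptions, since the article never says what it counted as a movie:

# 1 Tb/s expressed as "HD movies per second" for a few assumed file sizes.
# The sizes are illustrative assumptions, not anything IBM stated.
link_gb_per_second = 1e12 / 8 / 1e9        # 125 GB/s

for label, size_gb in [("~2.5 GB compressed HD rip", 2.5),
                       ("25 GB single-layer Blu-ray", 25),
                       ("50 GB dual-layer Blu-ray", 50)]:
    print(f"{label}: {link_gb_per_second / size_gb:.1f} movies per second")
# You only get the article's "500 movies per second" if a movie is assumed
# to be roughly 0.25 GB.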
 
Now here is the strange part. It looks like they did the same thing two years ago, just at a lower speed. The link to the presentation is given here: http://domino.research.ibm.com/comm/research_people.nsf/pages/rylyakov.pubs.html/$FILE/OFC_10_Holey_Transceiver_Presentation_v2.pdf I am having a hard time figuring out which paper at this year's conference the press release is referring to. None of the papers from IBM seem to match the press release.
 