PCoIP - Can someone explain?

We use PCoIP systems from Verari

The nice thing is the blade machines are in our data center and all we have to deploy to the user is the 'puck' device.

If one blade dies or has a problem, we reconfigure the puck to point to another blade and the user is all set: no running out to bring a new machine and configure it, just point the user to a spare.

One thing to note... see how that says 10/100/1000?
We are replacing ClearCube blades with the Verari PCoIP systems. In one of our areas we have ~400 ClearCubes; if we replaced those with 400 Verari systems and everyone started watching some YouTube video or something else that hogged bandwidth...

We tested watching IPTV on a PCoIP device while also having Outlook and a few IE windows open, and the puck pulled ~90 Mb/s. Multiply that by 400 machines and it would hog our 10 Gb core connection.
With the Verari systems, you can limit the puck and blade PCoIP speeds; we use 3 Mb/s on the pucks and 10 Mb/s on the blades.
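A quick back-of-envelope sketch of that math (numbers taken from this post; just illustrative):

```python
# Can 400 unthrottled pucks saturate a 10 Gb core? Everything in Mb/s.
PUCKS = 400
UNTHROTTLED_MBPS = 90   # measured: IPTV + Outlook + a few IE windows
THROTTLED_MBPS = 3      # our per-puck PCoIP limit
CORE_MBPS = 10_000      # 10 Gb core connection

unthrottled_total = PUCKS * UNTHROTTLED_MBPS  # 36,000 Mb/s
throttled_total = PUCKS * THROTTLED_MBPS      # 1,200 Mb/s

print(f"unthrottled: {unthrottled_total} Mb/s "
      f"({unthrottled_total / CORE_MBPS:.1f}x the core)")
print(f"throttled:   {throttled_total} Mb/s "
      f"({100 * throttled_total / CORE_MBPS:.0f}% of the core)")
```

So unthrottled pucks would need 3.6x our core capacity, while the 3 Mb/s limit keeps the worst case at 12% of it.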

Another nice thing is that the clarity on the users' screens is great, even being half a mile from the blade: no lag, no hiccups, no problems!

The Verari uses standard Intel mobos with a 'Connexxus' PCoIP user device; it looks very similar to that EVGA device.
I like the technology, especially that you can turn any machine with a PCIe slot into a PCoIP host.

http://www.verari.com/connexxus.asp
 
I have tested EVGA products as part of our take-to-market and they work great. For full disclosure, I am the Director of Biz Dev at Teradici (so no bias here ; ) .

PCoIP is a protocol for delivering a full desktop over standard IP networks. There are some high-level similarities to Voice-over-IP, but PC-over-IP delivers not only 2-way HD audio but USB and the user display(s) as well.

PCoIP uses breakthrough graphics compression that is custom-built for delivering a user desktop over IP networks. It is designed to support all graphics (full-frame-rate 3D for design engineering, video gaming, etc.), all media (HD video, Microsoft formats, YouTube, Google, QuickTime, Flash, etc.), all USB peripherals (a big problem for thin clients today), and all OSes on the host PC/server. There are no special drivers required in the host PC/server and no drivers at all on the desktop (which makes it easy to manage).

To deliver a desktop over a network you cannot just use JPEG picture compression, as the bandwidth required would be way too high, especially for 3D graphics and video that run at 24, 30, or (for graphics) up to 60 frames per second. Nor can you just use MPEG video compression: while it would be great for any movie playing on part or all of the screen, MPEG compression destroys text (especially anti-aliased and ClearType text), makes still pictures fuzzy, and would wreck 3D graphic clarity. So Teradici has developed a compression scheme specific to the types of content you get on a PC display: text, still pictures, video, and 3D graphics.
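To illustrate the idea (this is just a toy sketch, not Teradici's actual algorithm, and `classify_tile`/`pick_codec` are made-up names): classify each screen tile by its content and route it to a codec suited to that content, since one codec for the whole screen fails somewhere.

```python
# Toy sketch of content-adaptive display compression: a lossy transform
# codec blurs text, while a lossless codec wastes bits on photos, so
# classify each tile first and pick a codec per tile.

def classify_tile(tile):
    """Crude classifier for a tile of grayscale pixel values.

    Hypothetical heuristic: very few distinct values suggests
    text/UI (flat background + sharp glyph edges); otherwise
    treat it as continuous-tone picture content.
    """
    distinct = len(set(tile))
    if distinct <= 4:
        return "text"
    return "picture"

def pick_codec(kind):
    # Text keeps its edges with a lossless codec; continuous-tone
    # content compresses far better with a lossy transform codec.
    return {"text": "lossless", "picture": "lossy-transform"}[kind]

ui_tile = [0] * 60 + [255] * 4   # mostly background plus some glyph pixels
photo_tile = list(range(64))     # smooth gradient, 64 distinct values

print(pick_codec(classify_tile(ui_tile)))     # lossless
print(pick_codec(classify_tile(photo_tile)))  # lossy-transform
```

A real implementation would also detect motion so video regions get temporal compression, but the per-region decision is the key difference from whole-screen JPEG or MPEG.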

By compressing the display image at the host PC/server you avoid the application interoperability issues that have plagued thin clients for years, and you can quickly adapt to real networks (temporary increases or decreases in available bandwidth).

For most applications of PCoIP technology the user cannot tell that their PC is not at their desk anymore.

Long-distance WAN connections are supported too. Over those, users can tell their PC is not at their desk; however, if they need a long-distance remote connection, PCoIP delivers many features that give a substantially better user experience than other remote desktop solutions (i.e., RDP, ICA, RGS, etc.).

Today PCoIP is implemented in hardware: you add a PCIe card to your host PC and a dumb/stateless device (aka a zero client) at the user's desk. Going forward, software versions of PCoIP will be integrated into virtualized desktop solutions from VMware (called VMware View), and Teradici is developing new hardware built from the ground up for delivering highly virtualized desktops over standard IP networks.

Note: the longest test to date was an oil & gas application over 8,000 km (Houston, USA to London, UK).

Let me know if you have any questions. - Stu
 
Just to mention, this doesn't look like thin clients; it's a 1-to-1 system, so you always need the high-end computer on the other side, and you can't run multiple PCoIP sessions off one machine/server.

That's what I got out of it.
 
You are correct, the first-generation chipset is 1:1 (one at the user desk and one in the host PC), but the host PC does not need to be high-end. The host PC can be at any CPU or graphics performance level (high-end, or lower-end mobile chipsets). Some OEMs are considering using many Intel Atoms on a dense blade; I expect more public info on this over the next couple of quarters.

When VMware View is available with software PCoIP, the solution can be virtualized with software on the host server and, at the desk, either software (the VMware View client with PCoIP) or hardware like the Samsung 930ND (19" integrated display) or the EVGA PCoIP desktop appliance. A future chip will be highly virtualized in the server to help scale the number of users per server.
 
If you want to play games on your HDTV without the noisy machine in your living room, check this out: playing Crysis remotely on your HDTV using a Linksys 802.11n wireless gaming extender: http://www.youtube.com/watch?v=7FAoJfU4Iwo

Of course, wired is better since you won't get out-of-order packets, but 802.11n works too.

You can now buy the PCIe card for your gaming machine and the appliance that connects to your TV (if you don't have a DVI input on your TV, you'll need an HDMI-to-DVI converter; they're cheap).

http://www.cdw.com/shop/search/results.aspx?key=PCoIP&SortBy=MostRelevant&searchscope=All
 
Thank you. This has been very enlightening. I was under the assumption, as most are, that you still needed a high-end system on the client side; I didn't realize you were doing full compression end to end, so all that's required is the client-side card. I work in engineering at an ISP. While graphics-intensive applications aren't that important to us, this might have some uses. I'll look into getting a presentation approved.

 
Just in case anyone is interested, we have the user device (portal) in Los Angeles, CA and the machine (host) in Bristol, CT.
There is a slight lag in mouse movement, but nothing that hampers its use.
Full-screen video works great too.
(CA to CT is a dual 10 Gb circuit, I believe.)
http://en.wikipedia.org/wiki/L.A._Live#Broadcasting
 
I'm a bit late, eh? :)

Has there been any new development when it comes to this?
Or is there any other alternative?

Is there a way to do this not 1-to-1, but 1 server (or cluster) to multiple workstations, with a complete 3D environment just like the product described?
 
Two years late, but you get credit for searching.

Read up on virtual machines and remote desktops. Plenty of stuff out there nowadays. I've used Citrix in the past for a few clients.
 
VMware View now supports software PCoIP, so you can run multiple VDI clients on a single VMware ESX host out to multiple thin-client devices. We've deployed a good many.
 
XenDesktop caught my attention.

Would you say a 1 Gbit/s pipe would be enough for 10 workstations?
 
Should be.

XenDesktop is much more I/O intensive. From everything I've read, XenDesktop works better in high-latency/low-bandwidth situations compared to VMware View.

You can test out XenDesktop; they offer a freebie version which can run on an ESXi box.

They also offer a one-to-many VDI if you use their Provisioning Services.
 
More than enough. We run way more than that over WAN connections. The ICA protocol that XenDesktop uses is very good.
 
The freebie is what caught my attention.

I'm still getting used to this new terminology, since virtualization is not really my "thing". But Google and hardforum are my friends, right? :)

Also, since I intend to expand (the number of workstations will increase), would you say I can run 20 workstations through a 1 Gbit pipe?

Everything beyond that will mean I have more than 20 engineers employed, meaning I'll probably be generating more money, meaning I'll be able to invest in network infrastructure.
 
Citrix has some documentation on how much bandwidth is required per VDI session, although of course that number depends on what applications the users run.

Seeing as I haven't fully deployed a Citrix environment yet, I can't answer the question.

But talking to a counterpart in a different department, they already have a large Citrix environment, using VDI to remote sites over their (relatively slow) WAN links. It sounded like basically all their users were on a VDI and a dumb terminal.
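For a rough feel of the budget, here is a sketch. The per-session figures below are my illustrative assumptions, not Citrix's published numbers; check their sizing docs for real ones.

```python
# Per-seat bandwidth budget for 20 workstations on a 1 Gbit pipe.
PIPE_MBPS = 1000
SEATS = 20

budget = PIPE_MBPS / SEATS  # 50 Mb/s per seat

# Hypothetical per-session averages in Mb/s, by workload type.
workloads = {"office apps": 0.3, "multimedia-heavy": 5.0, "3D CAD": 20.0}

print(f"per-seat budget: {budget:.0f} Mb/s")
for name, mbps in workloads.items():
    total = SEATS * mbps
    print(f"{name}: {total:.0f} Mb/s total "
          f"({100 * total / PIPE_MBPS:.0f}% of pipe)")
```

Even the heavy case leaves headroom on the pipe itself; in practice the host's disk and CPU tend to be the bottleneck before the LAN is.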
 
Do I need one PCoIP host card for each zero client?
 
Yes, one host card connects to one zero client. We just deployed some in Texas (Longhorn Network) and they connect to machines back in CT.
 