Games that don't have Fisheye with Eyefinity?

No they don't. Some games may default to an FOV of 75, or cap at 100. BFBC2's default FOV according to the .ini file is 55.

BC2 exposes the FOV vertically, not horizontally like some other games.

A 55-degree vertical FOV effectively scales to something like 85 degrees horizontal at 16:9. When you calculate the perspective, you don't need to scale the vertical FOV for widescreen; just give it the aspect ratio.
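For reference, the vertical-to-horizontal conversion goes through the tangent rather than a straight multiply. A quick sketch in Python (the function name is mine, just for illustration):

```python
import math

def vfov_to_hfov(vfov_deg, aspect):
    """Convert a vertical FOV to the equivalent horizontal FOV
    for a given aspect ratio (width / height)."""
    half_v = math.radians(vfov_deg) / 2
    return math.degrees(2 * math.atan(math.tan(half_v) * aspect))

# BFBC2's default 55-degree vertical FOV at 16:9:
print(round(vfov_to_hfov(55, 16 / 9), 1))  # ~85.6 degrees horizontal
```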
 
Well, this has been a very informative read. I would like to thank everyone for putting in their piece, and I will no longer bitch, moan or complain about it unless the developers don't correct it in a few years. I'll still have 3 monitors for workflow reasons though (2 for work, 1 for movies =P). Unknown AND Tudz: IMHO you both have VERY good examples of what is going on and what needs to be corrected. So, for now, I'll just try to adjust the FOV the best that I can and put these monitors closer. I'm going to need a tri-screen stand for that, but that's another topic.

This had me thinking though. If you think about real, everyday cameras, they also have a one-point perspective. But why don't they ever have a fish-eye effect? Look at the lens: it's curved to compensate for that. From what I have read, Unknown is trying to find a fix. Is it possible to add a post-process modifier to the camera that will correct the fish-eye effect like real-world cameras do? I know, easier said than done, but it's just an idea. I have VERY little knowledge of programming (Q-Basic from high school, back in the day). Programming a "fix" like that has to be a real bitch!
 
You cannot treat the screens as windows for some very obvious reasons. When looking through a window, you're seeing free-standing three dimensional objects with your own eyes. As has been established, your eyes are incapable of looking away from their center point (you cannot look directly at your own peripheral vision) so the image is always correct.

A display shows a 2D projection from a fixed virtual camera. You may only observe the image in all its optically correct glory if you match the position of your eyes and the direction of your gaze exactly to the virtual camera's parameters. As soon as you leave this small area of correctness, you start to see the image from off-angle. The perspective is wrong, so you see distortion and it becomes immediately obvious that it's a flat surface and not a window.
Ok, so let's grant that your peripheral vision is indeed stretched compared to your center vision.

Why does a 3D object (the one behind the window, for example) behave differently from the 2D outline of it drawn directly on the window?

So, here's the setup: I have a window off to the side of my peripheral vision, and I have a bottle standing behind it. The bottle may appear stretched, maybe not, I'm not sure, but I'll grant you that in my vision that bottle is distorted in some way, since I'm not staring directly at it; this is really irrelevant to what I'm asking. Now, an artist draws an outline of that same bottle on the window in front of it. He draws it perfectly, so that from my perspective, from the corner of my eye, the image he drew perfectly matches that bottle. I am not looking at the bottle directly; I see it in my peripheral vision.

Let's pretend that he is using a translucent crayon. This way, after he draws the outline, I can see that it perfectly outlines every detail of the bottle behind it. Again, I can't look at it directly, but I can see a general outline that matches.

Now, I understand that in order to draw that bottle, the artist may have had to distort it, maybe not, I'm not sure. Again, it doesn't really matter for my question.

So, either way, there the bottle stands, PERFECTLY aligned to its completely flat sketch in front of it, so that the viewer sees them as one. This sketch, you must grant, can easily be reproduced on a flat screen. But enough of that.

Here's my question: if the viewer now rotates his vision, and only rotates it, why should the outline not still perfectly outline the bottle behind it? The viewer only rotates his vision; he simply focuses directly on the bottle. Let's not forget that before, he was simply staring straight ahead, somewhere other than the bottle, still completely in focus, but not staring at the bottle.

Why or how does the bottle or outline now appear different from its partner?
My question is not why either one looks different from when I was staring straight ahead.

My question is why they have NOT taken on the same distortions, if any at all, so that the object is still aligned with its outline. What discrimination takes place to allow the 2D flat surface and the 3D object to somehow misalign themselves, when as far as my eye is concerned, the two were showing the same image before? Yet simply by looking directly at the bottle and its outline, they have somehow come out of alignment.
 
I don't believe there would be any misalignment. The human eye doesn't 'flatten' light rays which strike the retina's receptors, because the retina itself is curved. This same phenomenon doesn't hold with planar displays and perspective projections, for obvious reasons. Thus, as the camera rotates along each axis, objects appear closer at the edges of the frustum and more distant near the center.
 
If they truly do not come out of alignment, if, even by looking around, the images are still aligned, then it can be argued that my artist can now come back, completely detail the outline of the bottle with how it should look, and take away the bottle, for I have an image that visually represents that bottle whether I look directly at it or not.

As long as I do not move from my fixed point, as long as my eye is still in that fixed location, then that window with the flat 2D outline painted on it correctly portrays what the bottle behind it would look like, regardless of whether or not I'm looking directly at the bottle. Now, the focus will clearly be off, and stereoscopic vision will pick up that the outline is not only closer than the actual bottle but actually flat. However, for the sake of a non-stereoscopic image with focus disregarded (assume everything is completely in focus, since most 3D games usually are anyway), that detailed outline correctly portrays the bottle behind it.
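The bottle argument can actually be checked with a little vector math: the painted outline point and the real bottle point lie on the same ray from the eye, and rotating your gaze only applies one common rotation to every incoming ray, so it cannot separate two points that share a ray. A small numerical sketch (all the coordinates here are arbitrary, just for illustration):

```python
import numpy as np

eye = np.zeros(3)                      # fixed eye position at the origin
bottle_pt = np.array([1.5, 0.4, 6.0])  # a point on the 3D bottle (x, y, z)
window_z = 2.0                         # the flat window sits at z = 2

# Paint the outline point where the eye->bottle ray crosses the window.
t = window_z / bottle_pt[2]
painted_pt = eye + t * (bottle_pt - eye)

def direction(p, eye):
    """Unit direction of the ray from the eye to point p."""
    d = p - eye
    return d / np.linalg.norm(d)

# Rotating the gaze is just one rotation R applied to every ray direction;
# it cannot separate two points that lie on the SAME ray.
theta = np.radians(30)                 # rotate gaze 30 degrees about y
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])

d_bottle = R @ direction(bottle_pt, eye)
d_paint = R @ direction(painted_pt, eye)
print(np.allclose(d_bottle, d_paint))  # True: bottle and outline stay aligned
```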

Am I accurate in saying this?
 
There is nothing to disagree on, these are facts:

Once again, just because you say it's a fact doesn't make it a fact. I still think you're wrong; you haven't proved anything. At this stage I honestly don't think you understand how perspective and projection work. :p

I'll explain it using your picture as a springboard.

JvDe7.gif

Alright, sweet, let's look at these pictures for a second. The projection lines from one image ARE NOT the same projection lines in the next image; read on to understand why.

What do the lines represent? Hmm? They are projection lines for objects whose edges are parallel to a ray of light passing from your eye to the center of the screen. So let's imagine for a second you have 1 screen (the centre screen in your top image), and you are standing in the middle of a road...

1withtext.png


The perspective lines on your image represent lines which are parallel to the arrow in my picture above.

So what you see on the screen is this (I've drawn in some kerbs on the side of the road)...

97857482.jpg


Notice which lines on the building are parallel to the "perspective lines" which are going to the vanishing point. Also note which lines on the building appear horizontal in the perspective image: they are the ones which are PERPENDICULAR to the perspective lines when viewed from the bird's-eye view.

NOW, let's add a 2nd screen/window/picture to your right. This time, the perspective lines are the ones parallel to the new arrow I've drawn on this picture.

2withtext.png


And what you view on the 2nd screen is this...

26359168.png


NOW, BEFORE YOU START WHINING "Oh Tudz, you're wrong, you can't stitch those 2 pictures together, wah wah wah", pay close attention to this.

Print the damn pictures out, and look at them in this arrangement...

paperarrange.png


Now if you're viewing it on an A4 sheet it will probably be a bit small so you might have to close one eye to get yourself in the correct viewing position.

Now, the blue lines on one image should line up with the blue lines on the other image, and the red lines on one image will line up with the red lines on the 2nd image. ALSO, everything will be perfectly sized and look correctly proportioned, REGARDLESS OF WHICH SCREEN YOU LOOK AT, AS LONG AS YOUR VIEW POINT IS LOCATED AS IN THE ABOVE PICTURE.

(You can view the images on a multiscreen setup too, but I haven't included any bezel correction in the image so you won't be able to line it up perfectly.)

That's because what is a straight horizontal line in one image is, in the other image, one of your angled "perspective lines".

So your image above with your perspective lines should look like this. (NOTE: this is the case where the screens are at 90 degrees to each other, like the image above. They don't NEED to be at 90 degrees to each other, but if they aren't, the projection lines get more complicated and I'd have to use some ugly maths to figure out the correct projection.) So just to be clear, the case where the projection lines of one image are the horizontal lines of the 2nd image is a special case where the screens are at 90 degrees to each other. This does not mean they HAVE to be at 90 degrees; they only have to be for the above example, it's just easier to demonstrate when they are.

straightlines.png


The projection lines on the left picture (in red) are extended as they should (correctly) be extended onto the 2nd monitor. The projection lines on the right monitor (in blue) are extended onto the left monitor in their correct projection. You could add another screen on either side of that and further extend the projection lines.
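If it helps, the 90-degree case can be verified numerically: a world point lying on the seam direction between the two screens projects to the exact same point on both planes, which is why the printed pictures stitch cleanly. A rough sketch (geometry simplified, with the eye at the origin; the setup is my own illustration, not taken from the pictures):

```python
import numpy as np

d = 1.0  # eye-to-screen distance; eye at the origin

def project_center(p):
    """Intersect the eye->p ray with the center screen plane z = d."""
    t = d / p[2]
    return t * p            # resulting point has z == d

def project_side(p):
    """Intersect the eye->p ray with the right screen plane x = d
    (a screen at 90 degrees to the center one)."""
    t = d / p[0]
    return t * p            # resulting point has x == d

# A point lying exactly on the seam direction between the screens (x == z).
P = np.array([10.0, 2.0, 10.0])
on_center = project_center(P)
on_side = project_side(P)
print(np.allclose(on_center, on_side))  # True: the two images meet at the seam
```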


Now, if you still think I'm wrong, I can't help but think you either don't understand anything beyond flat-panel projection or you didn't print off those pictures and view them in the correct position. I showed this to someone a few minutes ago (drew 2 pictures on A3 sheets by hand, projected on 2 different planes) and their response was "oh cool, everything looks right once you angle them".

You need to stop thinking in a single flat plane. Obviously I can only show you pictures in a single plane, but you need to start thinking about how a 3D world would be projected onto some sort of viewing device, and you'll realise there is a "correct" projection for ANY arrangement of viewing devices... the one I present here is just the one which I feel is most appropriate for gaming purposes.
 
Ok, I know there's been no response to my above post, but if you still don't get what I'm rambling on about, this is my last attempt to explain why it is optically correct.

Firstly, let's look at WHY stretched side monitors are optically correct when the monitors are arranged in a flat plane. So, why is this image correct (for a flat plane)...

perspectiveunknown.jpg


Rather than explain it in words, let's look back at the 3D world I made using CAD. If we take that 3D virtual world, insert a 2D window between a virtual observer and the scene, and draw lines from the 3D objects to the observer, we get this... (I've only done the left side of the world coz I'm lazy.)

Now, lets see where those lines intersect our 2D window, and where they intersect, place points.

Join those points, and we see how the boxes are projected onto our 2D window.

34770267.jpg


12257351.jpg


Now, if we look at the "screen" from the front at a distance away from the screen, we see the image is skewed on the sides (what would be your side monitors in an Eyefinity setup). Like this...

86997014.jpg


This is optically correct, and indeed you can see the thin black lines that joined our objects to the gamer have actually appeared as perspective lines. However, through the magic of 3D CAD, we can place our virtual camera where the virtual gamer is sitting to see what a correctly seated real-life gamer would see, and we see that it lines up perfectly with the virtual world.

Note that the BLUE lines are actually displayed on our 2D window, but they perfectly match the real boxes behind; this is because it is optically correct.

16178896.jpg


If you don't believe the blue lines are on the 2D screen, see what happens when I slightly move the camera away from where the gamer should be seated.

87902476.jpg


Now we can do the exact same thing, except instead of a single flat screen (or 3 screens arranged flat, if you think of it as Eyefinity ;)) we move to screens angled toward the gamer. We draw the same lines, see the points where they intersect our flat windows, and draw lines between those points, exactly as we did for the flat-screen setup.

4angled.jpg


If you have been paying attention, you'll note this produces the EXACT same images on each "screen" I posted a while back.

Now, as we did with the flat panel, through the magic of 3D CAD, we place our virtual camera where the virtual gamer would be seated in order to see what the real life gamer will see when correctly seated relative to the monitors.

5angledfov.jpg


Shazzam!! It's exactly the same as the flat panel of monitors. A nice, correctly proportioned image. This is because what is projected on the flat screens, as I have been displaying in my previous posts, IS OPTICALLY CORRECT. Here it is zoomed in a bit so you can see...

51angledfovzoom.jpg


The difference is, to get an FOV of, say, 130 degrees using a flat array, you have to sit unrealistically close to your monitors. A comfortable viewing distance will probably only give you 100-110 degrees of FOV. However, to get 130 degrees using the angled-monitor setup, you can sit a comfortable distance away from your monitors and still get a wide FOV, and indeed it leaves the door open for a 5-screen setup with an effective FOV of well over 180 degrees.
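The trade-off described here follows from the flat-screen relation d = (W/2) / tan(hFOV/2). A quick sketch (the screen width is a made-up example figure):

```python
import math

def viewing_distance(total_width, hfov_deg):
    """Eye-to-screen distance at which a flat display of the given
    total width subtends the given horizontal FOV."""
    return (total_width / 2) / math.tan(math.radians(hfov_deg) / 2)

w = 1.5  # e.g. three flat 0.5 m monitors side by side (assumed, metres)
print(round(viewing_distance(w, 130), 2))  # ~0.35 m: uncomfortably close
print(round(viewing_distance(w, 100), 2))  # ~0.63 m: a typical desk distance
```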

Ok, I am now officially retiring from this thread :) I've spent hours making bloody pictures; if you still don't get it then I can't explain it any other way that will help you. :p

P.S. In case anyone is wondering, the CAD program I used is capable of displaying both a parallel projection and also a perspective image, I flipped from one to another depending on what was most appropriate... for the "gamer perspective" images I used perspective cameras in CAD.
 
Ok, I've been gone from hardforums for a while, but I came back because I was trying to set up an Eyefinity display and everything is, IMO, crappy. I consider all this FOV talk silly. This is all digital. They should just be able to make a 3-camera array and tell the game to automatically set the cameras equidistant from each other, shooting straight ahead, to handle each screen. PROBLEM. SOLVED.

Do a simple check whether someone is running a normal res or an ultrawide multimonitor res and pick the number of cameras accordingly. Hell, let people set the number of displays in game so it knows how many subdivisions to make. Yes, it won't cover people with different size/res displays, but it's not unreasonable to expect people to triple up if they expect to run a tri-monitor setup. I understand this means that the side displays will move the most, since all the cameras are moving about the 1st camera. I don't really care. It's not impossible and they should code for it. The stretching that happens in every game I've tried with Eyefinity is just stupid and not at all realistic. I don't care if it's how things are "supposed to look".

Alternatively they could run three cameras from different angles behind the same point. It really wouldn't be that tough for them to just let you set the angle of the side cameras so they match with the positioning of your display. This is something relatively basic in comparison to all the complicated physics games do. Anyone who says the "fisheye" method is better is just defending a system that can't handle more advanced setups. People have multimonitors - it's about time we get multi-cameras in game.
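For what it's worth, the "let you set the angle of the side cameras" idea largely reduces to giving each screen's camera a yaw offset equal to the per-screen FOV, so the three frustums tile without overlap. A toy sketch of what such a setting might compute (the function and parameters are hypothetical, not from any real engine):

```python
def camera_yaws(num_screens, per_screen_hfov_deg):
    """Yaw angle (degrees) for each screen's camera, assuming every
    camera renders per_screen_hfov_deg and the screens tile the total FOV."""
    mid = (num_screens - 1) / 2
    return [(i - mid) * per_screen_hfov_deg for i in range(num_screens)]

print(camera_yaws(3, 60))  # [-60.0, 0.0, 60.0]
```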

P.S. - Turdz, the fallacy of the flat plane layout is that it assumes that everyone wants to just stare at the center monitor. The fact is, unless your monitors are actually glued directly in front of your eyeballs, you're going to look around, and you may want to look at your character's periphery to see what's there without having it blurred. If you're only going to stare at the center monitor, you get relatively little benefit from Eyefinity in the first place. The whole point is to increase your range of vision. If you're looking at a picture that's blown up, it's not warped on the wall. If you're watching a movie, they don't stretch content for a wide aspect ratio; they just pick a spot further back to shoot from. If you're looking at monitors, it's reasonable to expect that you want to be looking at flat picture planes, instead of some hokey warpage that "emulates your peripheral vision". It's a terrible argument and just amounts to saying that Eyefinity is too new, so it's OK for devs to be lazy and not support it properly.
 
Alternatively they could run three cameras from different angles behind the same point. It really wouldn't be that tough for them to just let you set the angle of the side cameras so they match with the positioning of your display. This is something relatively basic in comparison to all the complicated physics games do. Anyone who says the "fisheye" method is better is just defending a system that can't handle more advanced setups. People have multimonitors - it's about time we get multi-cameras in game.
Yeah, that would be ideal, but unfortunately that's not the way it is.

P.S. - Turdz,
nice typo :). Try to get it correct next time.

the fallacy of the flat plane layout is that it assumes that everyone wants to just stare at the center monitor.
Nay, the flat plane layout does not assume you stare at the center monitor; it assumes you have the monitors set up correctly (flat) and that you are the correct distance from the monitors, OR that the FOV of the game is set correctly. Otherwise, the side images WILL be distorted, regardless of whether or not you are looking at them directly.

The correct setup is very hard to attain in current games with current setups, but it will allow you to stare at the side screens all you want without distortion. The reason it's hard to attain is that all the games I've played usually set a default FOV that is too high for an Eyefinity setup unless you sit really close to your monitors.
 
Well, I think a big part of the problem is that the human eye is not a camera. It doesn't fisheye things in your peripheral vision. Don't believe me? Take a pencil or pen, horizontal or vertical. Stare straight ahead and see if you notice any difference in size as you move it across; it should stay the same even inches from your eyes (note: make sure to use a pen that's big and dull, a thick marker is ideal, so you don't poke your eye out). Point is that you don't. Maybe if stuff is literally glued to your eyes you would, but if you're looking at monitors from the recommended viewing distance (1-3 ft away), your eye effectively becomes a plane rather than a funky camera with side slots to see out of.

I think all the people defending Eyefinity stretch are not realizing that justifying it based on how the camera sees is irrelevant to how humans see. The best way to think about it is like when you're driving a car. They use flat pieces of glass that are optically clear and don't distort anything, because it's assumed that your eyes will process everything on their own. When you have monitors a few feet out (most people who are not professional gamers and also do productivity things), your eyes become like a pair of planar cameras rather than the point camera that everyone seems to use as justification. You need to have the monitors extremely close (which is supposed to be an unhealthy distance anyway) for that fisheye effect to disappear. And even then, your normal peripheral vision is actually considerably narrow. Your eyes see out front primarily, and you see by turning, not by seeing out of the whites of your eyes. Any fisheye you might normally see would be limited to the extreme edges of your vision, not front and center like happens with Eyefinity.
 
Well, I think a big part of the problem is that the human eye is not a camera. It doesn't fisheye things in your peripheral vision. Don't believe me? Take a pencil or pen, horizontal or vertical. Stare straight ahead and see if you notice any difference in size as you move it across; it should stay the same even inches from your eyes (note: make sure to use a pen that's big and dull, a thick marker is ideal, so you don't poke your eye out). Point is that you don't. Maybe if stuff is literally glued to your eyes you would, but if you're looking at monitors from the recommended viewing distance (1-3 ft away), your eye effectively becomes a plane rather than a funky camera with side slots to see out of.

I think all the people defending Eyefinity stretch are not realizing that justifying it based on how the camera sees is irrelevant to how humans see. The best way to think about it is like when you're driving a car. They use flat pieces of glass that are optically clear and don't distort anything, because it's assumed that your eyes will process everything on their own. When you have monitors a few feet out (most people who are not professional gamers and also do productivity things), your eyes become like a pair of planar cameras rather than the point camera that everyone seems to use as justification. You need to have the monitors extremely close (which is supposed to be an unhealthy distance anyway) for that fisheye effect to disappear. And even then, your normal peripheral vision is actually considerably narrow. Your eyes see out front primarily, and you see by turning, not by seeing out of the whites of your eyes. Any fisheye you might normally see would be limited to the extreme edges of your vision, not front and center like happens with Eyefinity.
I'm not sure what the difference is between a point camera and a plane camera.

However, the justification that the screen must distort everything on the sides in order for it to look normal to you is not based on how your eyes perceive information.

It is completely irrelevant to the argument whether or not your eyes actually distort images as they approach the periphery of your vision.

Here's a very good image by Unknown One at the beginning of this thread:


This issue comes up so often, I decided to whip up a handy visual aid so everyone can better understand what's happening, and WHY it's happening.

unled1rc.jpg

The argument is based on a very simple premise: the apparent shape and size of objects is dictated by the angle at which their light rays hit our camera or eye.

This holds regardless of whether or not there is distortion by our eyes, and regardless of whether we're using a camera, an eye, or anything else.

When all is said and done, the positional information for light is based purely on the angle at which it enters.

There are two important things on Unknown One's image.

1) The 3 flat screens (it doesn't have to be 3; it can be one giant screen). This is our display, and it is flat.
2) The little circle. This is not an eye, or a camera, or anything of that nature. All that circle is doing is allowing you to measure the angle before the light hits the actual perceiver (which is a camera, or an eye, or whatever; it doesn't matter).

In order for two objects to appear the same, they MUST subtend the same angle. This is why the moon and the sun look about the same size even though they are vastly different in actual size (whether seen by the naked eye or by a camera, whether from your periphery or straight on). As the image demonstrates, the ONLY way for a flat screen to show an object at the side of the screen as identical to an object in the center is for the side object to be drawn distorted on the screen itself.
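The moon/sun example is easy to check: angular diameter is 2*atan(radius / distance). A quick sketch using approximate mean figures:

```python
import math

def angular_diameter_deg(radius_km, distance_km):
    """Angle subtended at the observer, in degrees."""
    return math.degrees(2 * math.atan(radius_km / distance_km))

moon = angular_diameter_deg(1_737, 384_400)       # mean radius and distance
sun = angular_diameter_deg(696_000, 149_600_000)
print(round(moon, 2), round(sun, 2))  # both roughly half a degree
```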

I don't like using that phrase, because in reality there is nothing distorted about the image; it's how it's supposed to look. But because everyone likes to look at screens perpendicularly, they decided this is "distorted".

If the object were drawn "normally" on the screen, looking at it from an angle would make it completely distorted, and instead of looking through a window into a gaming world, you're stuck looking at a flat canvas, and you're way too close to the painting.

Edit: here's another image that I created before I saw Unknown One's. I personally like Unknown One's better, but either way, here you go:


The trees are roughly the same size, and the giant circle shows that they are equidistant from the observer. Note that as the rays travel from the trees to the observer, they pass through a window, or in our case, our display. Notice that the side tree will have to be painted much wider on our screen than the center tree.
 
Well, here is what I'm saying. If you're in a car, vehicle, airplane, etc., when you look out a window, you're usually still looking straight ahead. So even though you're moving, you turn your head and you're still looking straight ahead. Presumably, with an Eyefinity display, people want to actually look at the other displays and not just the center one, as you would in a vehicle (hence why you have side windows, etc.). I get that this means you're basically looking at a flat canvas in this case, but you're normally looking at essentially a flat canvas anyway, at least in how your brain perceives things. When you turn your head in a car, you see a flat image (well, two that get mixed into one for depth). When you turn your head with Eyefinity, the effect is that your eyes are looking through the corners of your eyes instead of straight ahead. The whole point is to give you more screen real estate, so that you can look at any visible point without having to turn the camera as much. It's far preferable, IMHO, to have each screen look independently correct when you're looking at it than to have the center correct and the sides wildly distorted (assuming you have normal vision and don't have the side screens perpendicular to the sides of your head and the other one directly in front, so that you're basically in a cube).

Basically, I see no reason why they shouldn't allow people to let all 3 screens function as a single canvas if that's what they want. That's what I want (especially in tactics, non-FPS-type games), and I shouldn't have to suffer fisheye just because someone else thinks that's "correct". If they want to work on improving all the camera angles etc. to have tri-cameras for each display, that would be fine too, and they can work on letting people customize the angles so it actually works. But at the core, it needs to let you choose to display a canvas if that's what you want. Otherwise it's worthless to me. For most games, because of the fisheye, everything is getting stretched and textures get warped as a result. Even gold-certified games fisheye and stretch on the edges, which is useless for games like DAO, Total War games, etc., where you actually want a more panoramic "canvas" view so you can see everything on the battlefield.

Fisheye pretty much just wastes real estate and looks terrible. What they need to do, like I said, is have a flat canvas option that's always available, and then on top of that work on a multi-camera setup so that each screen outputs an undistorted perpendicular image without looking like a canvas. Anything short of that and it's a gimmick. I thought this would be awesome and fully developed by now, but after trying it, it's pretty clear that it's an afterthought to nearly everyone in the industry, and to be perfectly honest, I have little interest in playing any games on it. I would much rather play on a single monitor with working, non-stretched textures and objects, and mouse movements that work without being distorted. Which brings up another important point that I don't see addressed here: they should have a toggle so it knows to switch between absolute positioning and front-monitor positioning. For FPS games, you almost never want the mouse moving off the front monitor, and it just screws up your movement when it does. For a tactics game, it depends on whether you're trying to issue commands or change the camera angle. Actually, it shouldn't be that tough to implement a solution where camera motion is limited to the center monitor but absolute mouse positioning can go wherever you point it.

As you can tell, I'm pretty fed up with this. I don't really care why people claim fisheye is normal. I understand the geometric explanation, but it's wrong if you aren't staring at the center monitor only, period. The whole point of Eyefinity is to allow you to turn your head and see more without having to move or turn the camera. This is essentially nullified by fisheye, because it means you're basically staring out of the corners of your eyes when you look at either of the side monitors, even though they should normally appear to look the same as the front monitor. It doesn't matter if a distorted fisheye is "more accurate to how you see". The point of Eyefinity is so that your head can act as an internal camera within a viewing room that can itself rotate. If I want to be limited to the center monitor, I'll just play the darn thing on the center monitor. As is, the side monitors are wasted.
 
Put more succinctly, basically what I'm saying is that everyone here defending fisheye is misinterpreting the point of multi-display gaming. Traditional gaming is based on the premise of a flat, straight-on canvas perspective, where you see something and you use the mouse or keyboard to rotate the view. The whole point of having multiple monitors is so that you can rotate your physical head within that virtual camera and see whatever you want without distortion (in addition to being able to rotate via camera controls). In this respect it is much more like my vehicle analogy than the "this is how you normally see" explanations people love throwing out there. IMHO that explanation is technically correct as far as it goes, but it totally misses the point unless you have the monitors directly in front of your eyeballs. The way multimonitors are set up in practice, your head is an internal camera that's free-moving within a "robot camera". As such, the most accurate setup is with 3 cameras firing 3 images to 3 screens (or more, depending on the overall number of screens). Short of that, a "canvas" will be a lot more accurate and enjoyable for most content, even if it does "unrealistically flatten things".
 
Yes, I suppose for a lot of non-first-person games one may prefer having a large flat top-down view. It would definitely be cool if all of these different options were implemented.
 
Yes, I suppose for a lot of non-first-person games one may prefer having a large flat top-down view. It would definitely be cool if all of these different options were implemented.

Well, the fisheye is plain disgusting. I tried it, and a canvas is FAR FAR preferable. I tried Dragon Age 2 again ("gold"-certified game). It was fun for about an hour, as long as I didn't try to move between the tactical and FPS views like I always did in DAO in single-monitor mode, and kept the camera very close to the ground. After that, the motion finally got to me and I was physically nauseous all night. I'll try again, maybe this weekend, when it won't kill me if I'm not productive the next day.
 
Well, the fisheye is plain disgusting. I tried it, and a canvas is FAR FAR preferable. I tried Dragon Age 2 again ("gold"-certified game). It was fun for about an hour, as long as I didn't try to move between the tactical and FPS views like I always did in DAO in single-monitor mode, and kept the camera very close to the ground. After that, the motion finally got to me and I was physically nauseous all night. I'll try again, maybe this weekend, when it won't kill me if I'm not productive the next day.
Try to play a game where you can lower the field of view (FOV). Lower it until the side screens do not look distorted when you look directly at them from your playing position. I don't know which games specifically support modifying the FOV, honestly, so you're on your own here (GZDoom supports it; Half-Life 2 and Source mods support it, though it may be considered a cheat).

Your other option is to sit closer to the screen until the side screens no longer look distorted; unfortunately, you will probably have to get too close for this to be a comfortable option.

Edit: I forgot to add: if you want to minimize the fisheye effect, flatten out your two side monitors. Make sure all the monitors are in a single plane, not angled towards you.
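For anyone who would rather compute the number than eyeball it, here is a minimal sketch of the geometry behind the advice above (Python; the monitor widths and viewing distance are hypothetical, not taken from anyone's actual rig). With all screens in a single plane, there is exactly one horizontal FOV at which the render matches what a real window of that size would show:

```python
import math

def correct_horizontal_fov(total_width, eye_distance):
    """Horizontal FOV (degrees) at which a flat, coplanar monitor
    array matches the rendered perspective exactly.  At this FOV the
    screen plane behaves like a true window, so the side screens show
    no apparent fisheye from the playing position.  Width and distance
    just need to share a unit (inches here)."""
    return math.degrees(2 * math.atan((total_width / 2) / eye_distance))

# Hypothetical rig: three 24"-wide panels edge to edge, eyes 24" away.
print(correct_horizontal_fov(3 * 24, 24))  # ~112.6 degrees
print(correct_horizontal_fov(24, 24))      # single panel: ~53.1 degrees
```

Note how fast the required FOV falls as you move back: at 48" the same three panels need only about 74 degrees, which is why the "sit closer" and "lower the FOV" suggestions are two sides of the same trade-off.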
 
Alright gents, I have read through pretty much this entire thread and understood the majority of it. I have some things to say:

1) The camera seems to assume your face is pressed up against the middle of the center monitor. With the monitors placed in a straight line, if I put my face right up against the middle monitor and look at the objects at the outside edges of the outer monitors, they look perfect. This provides some additional insight into how the camera in 3D games is working... and it is stupid for Eyefinity.

2) With the monitors in a straight line as many have suggested the distortion is minimized, but the side monitors are now almost completely useless for anything you do on your PC besides play games in eyefinity. Also the immersion is hardly improved at all and the benefit of peripheral vision is minimal.

I think putting the monitors in a straight line is ridiculous, and apparently just about everyone else does too... Google "eyefinity triple monitors" and look at what you get. Every pic features the side monitors angled toward the player. This includes $50,000 gaming setups.

3) The argument that some fisheye effect would still be apparent even if you used three cameras, one per screen with the side ones angled out, is a bit silly. Right now it is awful; imperfect is a lot better than awful.

4) Logically speaking, there should be an adjustment possible to allow for both the player's distance from the center monitor and the angle at which the monitors are set up.

5) We know what the problem is. This thread has explored and demonstrated the problems inherent to the triple monitor setup very effectively with diagrams by some very educated and intelligent people to explain things. What we need to focus on is a solution!

The way I see it, if we could increase the field of view (only possible in some games) and then apply a reverse fisheye filter to the image post-process (I don't know if this is possible/practical), it would go a long way toward fixing this (it would have to apply to the 3D image but not any UI overlay). Many widescreen TVs have a feature that stretches a 4:3 image to cover the whole screen. Initially most stretched the image uniformly across the screen, which looked awful, but later many had a mode that maintained the aspect ratio in the center and progressively stretched the image more and more toward the edges. The result was that people (usually the most noticeable/irritating thing to see deformed) in the center of the screen looked fine, but a person near the edge of the screen was stretched horribly. I'm sure most of us have seen this.

We need eyefinity to do the exact opposite of that. We need to leave the center of the image alone, then progressively compress the image horizontally as we move toward the edges of the image. If we had a slider we'd be able to adjust it till it looks good to us.

This would pretty much necessitate increased field of view since the progressively horizontally compressed image would take up less space on the side screens. This would open up more real estate for increased field of view. This would be an added benefit.

My solution isn't anywhere near perfect, and it's not optimal, but it would be a lot better than what we've got. A cylindrical image would probably be much better still; again, not perfect, but a lot less irritating distortion than what we have currently. The best solution would be a three (or more) camera setup, as many argued earlier in this thread. Yes, there would be some distortion, but right now we've got a TON of HORRIBLE distortion!
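The "leave the center alone, compress toward the edges" remap described above is essentially a planar-to-cylindrical conversion, and the core of it is one line of trigonometry. A rough Python sketch (the normalized coordinates and the 110-degree FOV are assumptions for illustration, not anything a shipping driver uses):

```python
import math

def planar_to_cylindrical_sample(u_out, half_fov):
    """For a cylindrical output column u_out in [-1, 1] (linear in
    view angle), return the normalized column in [-1, 1] of the
    ordinary planar render to sample from.  Output columns near the
    edges pull from planar columns spaced progressively farther
    apart, which is exactly 'untouched center, squeezed edges'."""
    angle = u_out * half_fov                       # view angle of this column
    return math.tan(angle) / math.tan(half_fov)

half = math.radians(55)                            # assumed 110-degree horizontal FOV
print(planar_to_cylindrical_sample(0.0, half))     # center column is unchanged
print(planar_to_cylindrical_sample(1.0, half))     # outer edges still line up
print(planar_to_cylindrical_sample(0.5, half))     # ~0.36: midway column pulled inward
```

A post-process shader would apply this per column of the 3D image, leaving the HUD overlay alone as suggested above; a slider could expose `half_fov` so each player tunes it until it looks right from their chair.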

We know what the problem is, or at least a lot of us do (some persist in failing to understand), so let's have these knowledgeable and intelligent individuals put on their thinking caps, think outside the box, and try to figure out how this problem might be made less irritating with third-party software or within the video card drivers/software.

Google will tell you definitively that people want their triple-monitor setups to wrap around them. Stop arguing for the impractical idea of keeping the monitors inline and figure out how to make it work in the logical wraparound configuration.

One more question: is it possible to use CURRENT head-tracking equipment and software in the way suggested earlier in this thread? What I mean is to have the focal point change to where you are looking without actually moving the camera. That way the distortion always stays in your peripheral vision, even when you are looking at a side monitor. This sounds at least somewhat workable, but I don't think there is any support for it presently. If there is, I might have to start looking into what these setups cost.

PS - for reference, I'm running triple 28" widescreen monitors, and the tip of my nose is typically about 23" from the center screen. Ideally the side monitors would be angled so as to be perpendicular to my line of sight when I turn my head and look at the center of them. This provides the best and most comfortable use of the desktop space when doing things other than gaming, and should also be pretty nice for gaming.

Thanks for the space.

Grymm
 

The main problem is that the game sees your monitor setup as a single flat plane. Actually solving any kind of 'fisheye' effect would require a complete change in how resolutions are detected and how things are rendered.

We would need 'smart' monitors and display drivers; basically, your monitors would have to communicate their orientation relative to each other to the display driver. I guess you would need something like this:

You have three displays, labeled Left, Center and Right, each with sides A, B, C and D; Center is perpendicular to the central viewpoint (give it a measure of 0°);
Left is angled +~45° relative to Center and its side D is shared with Center side A;
Right is angled -~45° and its side A is shared with Center side D.

The viewport would have to be treated like the inside of a hexagonal prism.
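Under the Left/Center/Right layout above, the per-display camera directions fall out of simple arithmetic. A hedged sketch in Python (the function name and signature are illustrative, not any real driver API): the driver would render one view per yaw instead of one wide, distorted view.

```python
def camera_yaws(num_displays, angle_between_deg):
    """Yaw offset (degrees) of each display's virtual camera,
    centered on the middle display.  Three displays hinged at
    ~45 degrees to each other give [-45, 0, +45]; five displays
    would extend the same pattern outward."""
    mid = (num_displays - 1) / 2
    return [(i - mid) * angle_between_deg for i in range(num_displays)]

print(camera_yaws(3, 45))   # [-45.0, 0.0, 45.0]
print(camera_yaws(5, 30))   # [-60.0, -30.0, 0.0, 30.0, 60.0]
```

Each camera would then get its own frustum sized to its screen, which is what makes the hexagonal-prism viewport work without the single-plane stretch.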
 
You guys are somewhat...obsessed, no? It's a little annoying in some games but I've found no reason to complain the vast majority of the time. I wonder whether some people's eyes or brains find it more annoying - akin to the 3D situation.

If it really bothers you line up three IPS in portrait and call it a day.
 
iRacing has the option to render each screen with a separate camera. You type in all the relevant details of display angle, size, and distance, and it creates three cameras that approximate a cylindrical panoramic projection. The result is a largely distortion-free, natural sense of space. The only downside is that it is slower than a single camera.

Example:
http://www.youtube.com/watch?v=Yxm45qzEAnM&list=UUO1mkLrbg-bcEZbXIqWPVuQ&index=3&feature=plcp
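For the curious, the kind of per-screen parameters derived from those physical measurements can be sketched in a few lines. This is illustrative Python under assumed units, not iRacing's actual configuration format or internals:

```python
import math

def per_screen_camera(screen_width, eye_distance, mount_angle_deg):
    """Return (yaw, horizontal FOV), both in degrees, for one
    display's virtual camera, given the screen's physical width,
    the eye-to-screen distance, and the angle the screen is
    mounted at.  Width and distance share a unit (inches here)."""
    hfov = math.degrees(2 * math.atan((screen_width / 2) / eye_distance))
    return (mount_angle_deg, hfov)

# Hypothetical rig: 24"-wide screens 24" from the eyes, sides swung out 45 degrees.
for angle in (-45, 0, 45):
    print(per_screen_camera(24, 24, angle))   # each camera: ~53.1-degree FOV
```

Three narrow frustums aimed along the actual screen directions is what produces the approximately cylindrical result, at the cost of rendering the scene three times.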
 
You guys are somewhat...obsessed, no? It's a little annoying in some games but I've found no reason to complain the vast majority of the time. I wonder whether some people's eyes or brains find it more annoying - akin to the 3D situation.

If it really bothers you line up three IPS in portrait and call it a day.

Not only does it look awful when you look to your left or right at one of the other screens (which, by the way, IS the way an awful lot of people want/expect to use it), but the stretch also means you get far less total field of view than would fit on the same three screens if it were not stretched.

I do understand why it is stretched, and I do understand that without a curved screen it can't be near perfect. I (and many others) am still of the opinion that this can and should be made to work "better" for people using Eyefinity the way it is advertised and demoed.
 
iRacing has the option to render each screen with a separate camera. You type in all the relevant details of display angle, size, and distance, and it creates three cameras that approximate a cylindrical panoramic projection. The result is a largely distortion-free, natural sense of space. The only downside is that it is slower than a single camera.

Example:
http://www.youtube.com/watch?v=Yxm45qzEAnM&list=UUO1mkLrbg-bcEZbXIqWPVuQ&index=3&feature=plcp

Thank you!! Now all the naysayers claiming that it can't be done without having awful transitions etc. will have to accept that this works. Now we just need to get it to do that in all the rest of the games we play =/

It is an issue that this solution takes considerably more computing power. As it is, my system screams in pain in some games if I crank Eyefinity up to max resolution with all the settings maxed, but a shortcoming of my hardware is my problem, not theirs; these folks provided a satisfactory solution.
 
Thank you!! Now all the naysayers claiming that it can't be done without having awful transitions etc. will have to accept that this works. Now we just need to get it to do that in all the rest of the games we play =/

It is an issue that this solution takes considerably more computing power. As it is, my system screams in pain in some games if I crank Eyefinity up to max resolution with all the settings maxed, but a shortcoming of my hardware is my problem, not theirs; these folks provided a satisfactory solution.
I agree with you. There should definitely be an option to customize your displays. Having a flat plane of three monitors is ridiculous.
 