R600 = WET DREAM

trxjw said:
Still looks like R600 + RD600 + Conroe for my first new build since 2002. Hopefully it will be out in time for Christmas... but I know it won't be. Not because I am "in the know"... but simply because I don't have that kinda luck. :(

Why are so many people stuck on the RD600 chipset? Dual x16 slots with x16 lanes aren't going to give you more performance using CrossFire. Additionally, no manufacturer has been able to surpass Intel for chipset stability and driver quality.
 
Blauman said:
While not huge, I do have a 20.1" WS LCD, and the top-tier current cards can play all games at the monitor's native res with all the goodies turned on. Now, if you are going to go with a 24" and up, then go Xfire or SLI, but the new gen will more than likely do it with a single-card solution. I am just not an early adopter of fresh new hardware.
I have (had) the same size monitor and I disagree: I cannot play CS:S online with all goodies on at 60fps min at the monitor's native resolution.
 
drizzt81 said:
I have (had) the same size monitor and I disagree: I cannot play CS:S online with all goodies on at 60fps min at the monitor's native resolution.

I haven't tried since I added my second card, but with one card I'll agree: I couldn't run Quake 4 at 4x AA with 16x AF at my monitor's native resolution and get acceptable performance.

Also, FEAR will absolutely not run fast enough for me at 1680x1050 with 4x AA and 16x AF either.

I just built this machine so there aren't a lot of games installed, but I would be willing to bet there are more that won't give me a constant 60FPS, or even be guaranteed of staying over 30FPS 100% of the time, at that resolution with AA and AF turned on at medium settings with only one card active.
 
Dan_D said:
Why are so many people stuck on the RD600 chipset? Dual x16 slots with x16 lanes aren't going to give you more performance using CrossFire. Additionally, no manufacturer has been able to surpass Intel for chipset stability and driver quality.

I'm looking to benefit possibly in the future from the (rumored) dedicated x16 slot for physics processing on the RD600 chipset. I personally don't upgrade more than once every couple of years, so I would like to keep myself open for a physics card if the technology becomes prominent.
:)

Frankly, just moving from AGP to PCIe right now would give me better performance lol. I'm not even worried about the x8 vs x16 debate right now. I'm still running an XP 2600+ and a 9800 Pro on 4x AGP. :eek:

I'm still saving the money for my build, so honestly I'm trying to get the most for my money by the end of this year. It doesn't make sense for me to buy a DX9 computer right now because I won't have the money to dedicate to DX10 until 2008 at the earliest. I'm a student, and between school loans and supplies, plus constantly upgrading software, I really need to spend my money wisely.
 
Dan_D said:
Why are so many people stuck on the RD600 chipset? Dual x16 slots with x16 lanes aren't going to give you more performance using CrossFire. Additionally, no manufacturer has been able to surpass Intel for chipset stability and driver quality.

Amen to that. I am camping on my i975X for a long while... until it can be proven that there are real benefits to moving to something with more BW on the peripheral bus... wewt.
 
"True 512-bit memoryinterface (Nvidia G80 = 384-bit)
1024 MB GDDR4 mem (1.25Ghz), bandwidth 160GB/s (G80 = 86GB/s)
80nm"
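Quick sanity check on those bandwidth figures, assuming standard double-data-rate math and the ~900 MHz GDDR3 implied by the quoted 86 GB/s G80 number:

# Peak bandwidth = bus width in bytes * memory clock * 2 (double data rate)
def peak_bw_gb_s(bus_bits, mem_clock_ghz):
    return bus_bits / 8 * mem_clock_ghz * 2

print(peak_bw_gb_s(512, 1.25))  # rumored R600: 160.0 GB/s
print(peak_bw_gb_s(384, 0.90))  # G80 GTX:      86.4 GB/s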


Like we used to say back in '80s NYC... OH SWEAT!!!!!! :D


With four PCIe x8 slots on a 4x4 RD790 motherboard???? :eek:
 
(cf)Eclipse said:
If it can do >11k in '06 at stock, I'll be impressed :D

I'm pretty sure the leaked G80 numbers are 12k at stock... so it had better do at least that well.
 
DaCoOlNeSs said:
The only reason it has free 4x AA is that it probably has the same on-die memory as the Xbox 360.

Not likely; an EDRAM system doesn't make any sense for PCs.

The 10MB of EDRAM in the 360 is not enough to hold a full 720p 4xAA framebuffer, so the solution is to split the screen into 2-3 tiles. However, this has a slight performance hit, and, more to the point, the game engine has to be built that way from the ground up. It's called predicated tiling.
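Rough math behind the tiling requirement, assuming 32-bit color plus 32-bit depth/stencil stored per sample (the usual way the 360's eDRAM usage is described):

# Approximate 720p 4xMSAA back-buffer footprint in eDRAM
width, height    = 1280, 720
samples          = 4          # 4x MSAA
bytes_per_sample = 4 + 4      # 32-bit color + 32-bit depth/stencil

total_mib = width * height * samples * bytes_per_sample / 2**20
print(f"{total_mib:.1f} MiB")  # ~28 MiB, so it has to be rendered as ~3 tiles of 10 MB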

Microsoft can mandate this on a console: it's fixed hardware and everyone will do it. However, it doesn't work on the PC. Old games won't have it, and even new games may not use it, as ATI cards (let alone theoretical brand-new ones that would use EDRAM) are <50% of the market in PCs. It would be years before all games used predicated tiling, and that's if developers didn't tell ATI to jump off a bridge. In the meantime the EDRAM would be useless, and a giant waste of money.

Alternatively, you could just use a huge amount of EDRAM to avoid tiling; however, this would get extremely costly extremely quickly. I believe some high-resolution framebuffers (something like 2560x1600 with 8xAA) would total 60 MB and more. That much EDRAM would be ridiculously expensive; it would take up a whole current-size die or more by itself.

ATI put a lot of things into their new memory system with R520/R580 that made AA a lot closer to "free" anyway. The X1K cards took much less of a hit with AA enabled than previous cards, or Nvidia cards. However, that new memory system was complex and expensive, so again it's not really "free". Just pointing out that "free" 4xAA is already nearly here in the X1K series, in the sense that the hit isn't that big anyway.
 
nobody_here said:
To be honest, I think the 384-bit memory interface in the proposed G80 specs is going to be more than enough to handle things; 512-bit will be wasted IMO.

Well, first, these specs should be taken with a grain of salt, as there's a very good chance they're fake.

Having said that, I think the hope/expectation would be that the rest of the chip can actually use those absurd memory specifications as well, which would make it a real monster.

It's not so much about the specs themselves as about what kind of chip would need those specs.

Some have speculated that ATI is really trying to break out of the cycle of each competitor being within 5-10% of the other. These specs would point in that direction...

However, they're still most likely false until proven otherwise...
 
Sharky974 said:
Not likely; an EDRAM system doesn't make any sense for PCs.

The 10MB of EDRAM in the 360 is not enough to hold a full 720p 4xAA framebuffer, so the solution is to split the screen into 2-3 tiles. However, this has a slight performance hit, and, more to the point, the game engine has to be built that way from the ground up. It's called predicated tiling.

Microsoft can mandate this on a console: it's fixed hardware and everyone will do it. However, it doesn't work on the PC. Old games won't have it, and even new games may not use it, as ATI cards (let alone theoretical brand-new ones that would use EDRAM) are <50% of the market in PCs. It would be years before all games used predicated tiling, and that's if developers didn't tell ATI to jump off a bridge. In the meantime the EDRAM would be useless, and a giant waste of money.

Alternatively, you could just use a huge amount of EDRAM to avoid tiling; however, this would get extremely costly extremely quickly. I believe some high-resolution framebuffers (something like 2560x1600 with 8xAA) would total 60 MB and more. That much EDRAM would be ridiculously expensive; it would take up a whole current-size die or more by itself.

ATI put a lot of things into their new memory system with R520/R580 that made AA a lot closer to "free" anyway. The X1K cards took much less of a hit with AA enabled than previous cards, or Nvidia cards. However, that new memory system was complex and expensive, so again it's not really "free". Just pointing out that "free" 4xAA is already nearly here in the X1K series, in the sense that the hit isn't that big anyway.


100% QFT. The main problem with 512-bit memory buses is PCB tracing. It basically gets so dense that standard PCB manufacturing becomes a problem, i.e. very expensive PCBs...

We're only at the start of the GDDR4 cycle, with our first true 384-bit card about to hit. I'd be surprised if R600 were 512-bit, but I could be wrong.

Edit: 512-bit though, damn!!
 
nobody_here said:
To be honest, I think the 384-bit memory interface in the proposed G80 specs is going to be more than enough to handle things; 512-bit will be wasted IMO.

I am sure that the engineers at ATi who design GPUs for a living took the width of the memory bus into consideration when they started thinking about the R600.
 
ManicOne said:
100% QFT. The main problem with 512-bit memory buses is PCB tracing. It basically gets so dense that standard PCB manufacturing becomes a problem, i.e. very expensive PCBs...

We're only at the start of the GDDR4 cycle, with our first true 384-bit card about to hit. I'd be surprised if R600 were 512-bit, but I could be wrong.

Edit: 512-bit though, damn!!

Yeah, I made note of that back on the first page of this thread. I figure at least two additional PCB layers would be needed.
 
PRIME1 said:
Well, if NVIDIA can pull off a hard launch on November 7th, no amount of marketing will trump having actual product to play with.

Very true!

Based on this information I will wait to see what/when it releases. I have a funny feeling about the "128 stream shader" marketing spin from NV right now and have not seen anything from them regarding improving their IQ (mainly AF).
 
drizzt81 said:
I am sure that the engineers at ATi who design GPUs for a living took the width of the memory bus into consideration when they started thinking about the R600.


Well, yeah, but consider other card features in the past that were nearly completely useless and only added to the cost, but still helped cards sell better because of marketing. Just to name a couple:

SM3.0 support?
512MB memory capacity?
etc., etc.

These features eventually did become useful, but the first cards to come out with them basically didn't benefit from them, because the rest of the card/system wasn't powerful enough to use them.

That's what I see with the 384-bit vs. 512-bit memory interface and 768MB vs. 1024MB memory capacity thing: we are barely seeing 512MB of memory capacity being utilized as it is. The likelihood that we will see the need for a 512-bit bus and 1024MB of video memory is pretty slim until the next gen comes out; I think 768MB on a 384-bit bus will more likely be the super high-end requirement for a select few games, or close to it anyway. And of course, like in the SM3.0 example, by the time the rest of things are in place to take advantage of it, both major GPU designers will have a newer, faster, better card that can actually use it, and by then both will be running 512-bit buses with 1024MB capacity.

Like the poster above, I am more interested in seeing super-high-quality modes for AA/AF and new forms of transparency AA, etc., with minimal performance impact :)
 
If these rumored specs have any validity, we're talking about one expensive card!

R1ckCa1n said:
I have a funny feeling about the "128 stream shader" marketing spin from NV right now and have not seen anything from them regarding improving their IQ (mainly AF).
What marketing spin? nVidia hasn't officially announced G80/8800, let alone started marketing it. We have specifications and little else at this point.
 
nobody_here said:
Well, yeah, but consider other card features in the past that were nearly completely useless and only added to the cost, but still helped cards sell better because of marketing. Just to name a couple:

SM3.0 support?
512MB memory capacity?
etc., etc.
If there had never been a first-gen card to introduce SM3.0, do you think it would ever have been adopted? Without hardware to support certain features, it's unlikely that anyone will ever produce software to take advantage of them.
nobody_here said:
That's what I see with the 384-bit vs. 512-bit memory interface and 768MB vs. 1024MB memory capacity thing: we are barely seeing 512MB of memory capacity being utilized as it is.
Indeed, with the exception of a few games, 512MB appears to be a waste. However, it's important to realize that software companies are trying to sell games to everybody. If a game required 512MiB, less than 10% of the market would be able to play it. Also, if you think that 512MiB is useless, you have the freedom not to purchase a card with 512MiB.
nobody_here said:
The likelihood that we will see the need for a 512-bit bus and 1024MB of video memory is pretty slim until the next gen comes out; I think 768MB on a 384-bit bus will more likely be the super high-end requirement for a select few games, or close [...]

I am a bit confused by your post: you are aware that the width of a bus and the amount of addressable memory are independent of each other, at least to a certain degree? It is not very difficult to build a 2-bit memory bus that could address tebibytes of RAM.
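To illustrate (the numbers below are made up purely for illustration): peak bandwidth depends only on bus width and transfer rate, while capacity depends on how many chips you attach and how dense they are.

# Bandwidth and capacity are set by different knobs (illustrative numbers only)
def bandwidth_gb_s(bus_bits, transfers_per_s_g):
    return bus_bits / 8 * transfers_per_s_g        # GB/s

def capacity_mb(num_chips, chip_density_mbit):
    return num_chips * chip_density_mbit / 8       # MB

# Narrow bus, lots of memory:
print(bandwidth_gb_s(64, 2.0),  capacity_mb(16, 1024))  # 16.0 GB/s, 2048.0 MB
# Wide bus, little memory:
print(bandwidth_gb_s(512, 2.5), capacity_mb(8, 256))    # 160.0 GB/s, 256.0 MB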

phide said:
What marketing spin? nVidia hasn't officially announced G80/8800, let alone started marketing it. We have rumored specifications and little else at this point.
fixed and QFT.
 
nobody_here said:
That's what I see with the 384-bit vs. 512-bit memory interface and 768MB vs. 1024MB memory capacity thing: we are barely seeing 512MB of memory capacity being utilized as it is. The likelihood that we will see the need for a 512-bit bus and 1024MB of video memory is pretty slim until the next gen comes out; I think 768MB on a 384-bit bus will more likely be the super high-end requirement for a select few games, or close to it anyway. And of course, like in the SM3.0 example, by the time the rest of things are in place to take advantage of it, both major GPU designers will have a newer, faster, better card that can actually use it, and by then both will be running 512-bit buses with 1024MB capacity.

A true 512-bit memory interface = more bandwidth, which makes any game go faster (newer or older games)... As for 768MB or 1024MB cards, I think it would only matter if games needed more textures, and for most games it won't matter whether you have 768MB or 1024MB...

Always say yes to more bandwidth, but only maybe to more VRAM...
 
drizzt81 said:
If there had never been a first-gen card to introduce SM3.0, do you think it would ever have been adopted? Without hardware to support certain features, it's unlikely that anyone will ever produce software to take advantage of them.

Yep, which is the good thing that came from the GeForce 6 series supporting it, even though it was slightly before its time in terms of having the horsepower to fully utilize it.

Indeed, with the exception of a few games, 512MB appears to be a waste. However, it's important to realize that software companies are trying to sell games to everybody. If a game required 512MiB, less than 10% of the market would be able to play it. Also, if you think that 512MiB is useless, you have the freedom not to purchase a card with 512MiB.

I never did own one; 256MB cards (as long as they had a 256-bit memory bus) were more than adequate for what I wanted, which included playing all my games at 1600x1200 with AA/AF, etc. However, with this generation I would want at least 512MB of memory on the card I purchase, if not 768MB. Paying extra for an unused 256MB of memory would be fruitless; I am just saying I am glad there is a choice between 512MB and 1024MB, because right in the middle should be the sweet spot IMO.


I am a bit confused by your post: you are aware that the width of a bus and the amount of addressable memory are independent of each other, at least to a certain degree? It is not very difficult to build a 2-bit memory bus that could address tebibytes of RAM.

What's to be confused about? Of course memory capacity and memory bus width are two different things, hence my reference to both. My point was that I don't see any game needing 1024MB of memory AND a 512-bit memory bus at the same time, or even fully needing one or the other as an independent requirement. And unless ATI changes the way they are rumored to be doing their high-end cards, users who choose to buy ATI hardware will have no choice but to pay extra for unused memory capacity and bandwidth.

You do realize how much the extra 128 bits of bus width and 256MB of memory are going to add to production costs; the PCB investment will be huge, comparatively. That's why I think NV's more "conservative" approach is going to be more worthwhile, at least as far as speculation goes at this point.



:D
 
nobody_here said:
yep, which is the good thing that came from the GeForce 6 series supporting it, even though it was slightly before it's time in regards to having the horsepower to fully utilize it
The SM3.0 debate back then was tricky. For the most part, aside from full-fledged displacement mapping, SM3.0 was a very performance-oriented evolutionary step. Geometry instancing and dynamic branching were the main draws of SM3.0 at the time, and we've come to find out that they offer little in the way of performance (though with geometry instancing, it's nearly impossible to tell what kind of benefit it brings to the table). Displacement mapping, increased shader length limits and some small changes in lighting precision were brought to the table with SM3.0 as well, but I can't name a single game that uses displacement maps yet (Flight Simulator X, maybe?), and few have even touched virtual displacement maps/parallax maps. I also don't believe we've hit the SM2.0 shader length roadblocks to this day, but of that I'm not sure.

So, did the 6800 series have enough power to utilize SM3.0? Unquestionably -- and they had a slight performance advantage when playing around with it.
 
phide said:
The SM3.0 debate back then was tricky. For the most part, aside from full-fledged displacement mapping, SM3.0 was a very performance-oriented evolutionary step. Geometry instancing and dynamic branching were the main draws of SM3.0 at the time, and we've come to find out that they offer little in the way of performance (though with geometry instancing, it's nearly impossible to tell what kind of benefit it brings to the table). Displacement mapping, increased shader length limits and some small changes in lighting precision were brought to the table with SM3.0 as well, but I can't name a single game that uses displacement maps yet (Flight Simulator X, maybe?), and few have even touched virtual displacement maps/parallax maps. I also don't believe we've hit the SM2.0 shader length roadblocks to this day, but of that I'm not sure.

So, did the 6800 series have enough power to utilize SM3.0? Unquestionably -- and they had a slight performance advantage when playing around with it.

The only thing the GeForce 6 series had a problem with as far as performance was concerned was HDR. You really needed SLI'd 6800 GTs or Ultras to do it at any decent resolution. The 7800s and 7900s were FAR better at it, but that was the only area the 6 series lacked in.
 
Looks like, if these specs are true, ATI is being conservative with the manufacturing process this time.
 
If ATI delays their release, it will be the 7800/X1800 all over again. Even if the R600 turns out faster than the G80 (like the X1800 XT vs. the 7800 GTX), people who can't wait will still buy the G80, just like everyone bought a 7800 GT/GTX. The only time everyone purchased an ATI card was with the release of the X1900. It will be the same thing this time around. However, if ATI does release within a reasonable window of NVIDIA, then the R600 would be the better purchase.
 
Endurancevm said:
If ATI delays their release, it will be the 7800/X1800 all over again. Even if the R600 turns out faster than the G80 (like the X1800 XT vs. the 7800 GTX), people who can't wait will still buy the G80, just like everyone bought a 7800 GT/GTX. The only time everyone purchased an ATI card was with the release of the X1900. It will be the same thing this time around. However, if ATI does release within a reasonable window of NVIDIA, then the R600 would be the better purchase.


If they can keep the price in line; that's my biggest concern. If they are actually bringing a full 512-bit bus, 1024MB GDDR4 card to the table, it's not going to be cheap, and definitely not cheaper than the G80. Or, at the least, the profit margin will be very, very thin because of the much higher manufacturing costs.

Don't get me wrong: if the R600 is 512-bit/1024MB and costs the same as the 768MB G80 while performing the same in framerates and quality, it would be a must-have. But if the only difference is the memory capacity and bus bandwidth yet it costs $100 more, that's only worth it to die-hards who will only buy ATI anyway.
 
Endurancevm said:
If ATI delays their release, it will be the 7800/X1800 all over again. Even if the R600 turns out faster than the G80 (like the X1800 XT vs. the 7800 GTX), people who can't wait will still buy the G80, just like everyone bought a 7800 GT/GTX. The only time everyone purchased an ATI card was with the release of the X1900. It will be the same thing this time around. However, if ATI does release within a reasonable window of NVIDIA, then the R600 would be the better purchase.

But the X1800 XT was barely faster than the 7800 GTX, if at all.

R600 had better do way better than that to be worth the delay...
 
Sharky974 said:
But the X1800 XT was barely faster than the 7800 GTX, if at all.

R600 had better do way better than that to be worth the delay...

I think it was better. I just got one that I was selling and it's good: 700/800 in CCC at stock volts. Good card IMO, but then again I've only tested a GTX, never owned one.
 
Endurancevm said:
If ATI delays their release, it will be the 7800/X1800 all over again. Even if the R600 turns out faster than the G80 (like the X1800 XT vs. the 7800 GTX), people who can't wait will still buy the G80, just like everyone bought a 7800 GT/GTX. The only time everyone purchased an ATI card was with the release of the X1900. It will be the same thing this time around. However, if ATI does release within a reasonable window of NVIDIA, then the R600 would be the better purchase.
The problem this time is that they will miss out on all the holiday money.
 
Sharky974 said:
But the X1800 XT was barely faster than the 7800 GTX, if at all.

R600 had better do way better than that to be worth the delay...

After driver updates the X1800 XT was competitive with the 7800 GTX 512.
 
Big Fat Duck said:
R600 had better be faster than G80, since it's coming out like 2-3 months after, right?

Not always, and it's hard to tell; ATi's and nVidia's technologies have become so different that they are really hard to compare until both are in the hands of consumers.
 
What's the point of having more than 512MB? Some games can use more than 256MB at HD resolutions, but there's no way any are going over 512MB for a while. 768MB on the G80 is more than enough, and the memory difference between that and the R600 won't bring any performance benefits.
 
Ever play Oblivion? Check out the debug stats when you have a chance -- 350+MB of texture usage is very typical. Factor in buffers, and you're down to very little storage headroom there. 768MB is welcome, and I see 1024MB being semi-important in as little as 9-12 months. Not a terrible investment, I don't think, and nVidia is perhaps making a good move by saving the memory increase for their refresh part, which means it will be more attractive to consumers without them having to do much re-design legwork (perhaps they won't even have to respin the GPU). By upping to more modules, nVidia has the capability to move from a 384-bit bus to a 512-bit bus without a tremendous fuss, which is, at the very least, attractive from a marketing perspective.
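As a rough illustration of that headroom argument (the 350MB texture figure is from above; the resolution, AA level and per-sample sizes are assumptions, and compression is ignored):

# Crude VRAM budget at 2560x1600 with 4x MSAA (uncompressed, illustrative)
textures_mb    = 350                      # typical Oblivion texture usage cited above
width, height  = 2560, 1600
samples        = 4
framebuffer_mb = width * height * samples * (4 + 4) / 2**20  # color + Z per sample

for card_mb in (512, 768, 1024):
    print(card_mb, "MB card ->", round(card_mb - textures_mb - framebuffer_mb), "MB headroom")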

ATi is really dumping all their nuts in one basket with R600, it seems like. I'm impressed with the rumoured specs, and if ATi improves their already proven IQ, they have an excellent product on their hands. The problems I see, however, are supply issues with GDDR4 (a gig per card is a tall order to fill), power consumption, the possibilities of the refresh part and, inevitably, the price tag.
 
phide said:
Ever play Oblivion? Check out the debug stats when you have a chance -- 350+MB of texture usage is very typical. Factor in buffers, and you're down to very little storage headroom there. 768MB is welcome, and I see 1024MB being semi-important in as little as 9-12 months. Not a terrible investment, I don't think, and nVidia is perhaps making a good move by saving the memory increase for their refresh part, which means it will be more attractive to consumers without them having to do much re-design legwork (perhaps they won't even have to respin the GPU). By upping to more modules, nVidia has the capability to move from a 384-bit bus to a 512-bit bus without a tremendous fuss, which is, at the very least, attractive from a marketing perspective.

ATi is really dumping all their nuts in one basket with R600, it seems like. I'm impressed with the rumoured specs, and if ATi improves their already proven IQ, they have an excellent product on their hands. The problems I see, however, are supply issues with GDDR4 (a gig per card is a tall order to fill), power consumption, the possibilities of the refresh part and, inevitably, the price tag.

I think whoever mentioned the 7800 GTX vs. X1800 XT situation is right.
G80 vs. R600 might end up being the same. G80 comes out first, takes the lead and pushes DX9 performance to a new level with no competition, plus it's the only card with DX10 support. When Vista comes out, along with R600, NVIDIA will have had time to tweak the G80 and, if needed, they'll counter R600 with an improved G80.

I too am "worried" about the power consumption and price of R600. If these rumored specs are true, the chip is going to be huge, and it will certainly get very hot. I really don't know what will cool it. Plus 1 GB of GDDR4? I agree with you, phide: ATI seems to be putting "all their nuts in one basket" here, since R600 CAN'T be cheap to produce with these rumored specs. So after its launch they had better conquer the market entirely, or their expenses will be far greater than their profit.
 
It might be a bigger chip, put out more heat, and cost more, but if it performs that much faster there might not be anything Nvidia can do to catch it. With the sheer amount of bandwidth R600 would have, G80 might not be able to come close to it without some major changes.

We very well could end up with the only battle being price/performance. If ATI really is planning on a 512-bit bus, I'd think they would have some way to actually use all that bandwidth. The other question is how efficient each card actually is. With both camps apparently going toward unified pipelines, we have no idea what kind of gains they will achieve from unification alone.

I was also under the impression that yields on GDDR4 were very good, so supply might not be as much of a problem as some might think.
 
OK, look: first off, ATi is late. Secondly, the G80 architecture is made so it can be upgraded to a 512-bit bus easily (the GPU itself; not talking about the PCB). This is why you have the weird amounts of VRAM and bus widths on the GTS and GTX versions: the bus width, the VRAM, and the number of grouped stream processors (think of a stream processor group as an array of pipelines; an array can have x number of shader ALUs) are all interconnected.

The dev version of the G80 was a 512MB, 256-bit bus card; the GTS is a 640MB, 320-bit bus card; the GTX is a 768MB, 384-bit card. Two more arrays of stream processors would force the G80, or a derivative, to have another 128 bits of bus and another 256MB of VRAM.
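Put another way, those configurations work out to 64-bit memory partitions carrying 128MB each (that per-partition layout is inferred from the numbers above, not something nVidia has confirmed):

# G80 bus width and VRAM scale together with the number of memory partitions
PARTITION_BITS   = 64
MB_PER_PARTITION = 128   # e.g. two 512 Mbit chips per partition (assumed)

for partitions in (4, 5, 6):
    print(partitions, "partitions ->",
          partitions * PARTITION_BITS, "bit bus,",
          partitions * MB_PER_PARTITION, "MB")
# 4 -> 256-bit / 512 MB (dev board), 5 -> 320-bit / 640 MB (GTS), 6 -> 384-bit / 768 MB (GTX)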

I'm not going to rule out ATi going with a 512-bit bus, but that would be one large chip, around the same size if not bigger than the R580 on 80nm. Added to this, the PCB complexity will be huge; is there enough space to make the interconnects on the PCB? I'm not sure there is, and two more layers, as someone else suggested, isn't an easy task either.

Sorry, I didn't mean the same size or larger than the R580; replace R580 with a 90nm G80.
 
Anarchist4000 said:
It might be a bigger chip, put out more heat, and cost more, but if it performs that much faster there might not be anything Nvidia can do to catch it. With the sheer amount of bandwidth R600 would have, G80 might not be able to come close to it without some major changes.

We very well could end up with the only battle being price/performance. If ATI really is planning on a 512-bit bus, I'd think they would have some way to actually use all that bandwidth. The other question is how efficient each card actually is. With both camps apparently going toward unified pipelines, we have no idea what kind of gains they will achieve from unification alone.

I was also under the impression that yields on GDDR4 were very good, so supply might not be as much of a problem as some might think.

Even if the G80 out in November can't really match R600, it won't matter, because G80 will certainly be improved by the time R600 is out. NVIDIA's been very smart in maintaining the advantage they've had over ATI since the launch of the GeForce 7 series, and I believe they'll have a few surprises to counter the R600 if needed.

We have no idea how the R600 will perform against G80, but the fact is, a chip of that complexity will surely draw too much power.
ATI has been making cards that get too hot and consume too much power for a while now, and keeping up this trend is OK?
I thought new products existed to improve on what the previous products offered and also to fix their faults.
 
R600 = wet dream... FOR THE POWER COMPANIES!!!

(jk, jk, I couldn't resist. See, this is what happens after all the AMD zealots hassled me about my Prescott; I'm now anal about power usage)
 
Unlikely they are true.




What is the need for 512-bit? That is a huge engineering waste right now, considering you can use GDDR4 memory at around... 4GHz and come up with the same bandwidth. What's the point of trying to manufacture a 512-bit card?
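For example, reading "4GHz" as a ~4 GT/s effective data rate, a narrower bus already gets into the same ballpark (the bus widths below are just illustrative):

# Same bandwidth target, different width/speed trade-offs (illustrative)
def bw(bus_bits, gt_per_s):
    return bus_bits / 8 * gt_per_s   # GB/s

print(bw(512, 2.5))   # 160.0 GB/s - rumored R600 (GDDR4 at 1.25 GHz)
print(bw(320, 4.0))   # 160.0 GB/s - narrower bus, faster GDDR4
print(bw(256, 4.0))   # 128.0 GB/s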



If it is, in fact, 512-bit... it's gonna cost 700 bucks, maybe more.
 