The 16GB of HBM2 on the Radeon VII Is Needed for Real World 4K Video Production

Discussion in '[H]ard|OCP Front Page News' started by cageymaru, Feb 7, 2019.

  1. M76

    M76 [H]ardForum Junkie

    Messages:
    8,563
    Joined:
    Jun 12, 2012
    I might have worded it incorrectly. I don't mean the app needs to crash or BSOD. What I don't want an app to do is guess the RAM requirement before even trying the task, and refuse the task based on that guess. There are many tasks whose exact RAM requirement depends on the data itself and can only be determined by actually doing the task.
     
  2. DocNo

    DocNo Gawd

    Messages:
    625
    Joined:
    Apr 23, 2012
    Who are you to argue whether people "need" something? I can tell you time is money, and even if these cards only shave 10% off the time spent working with video, they will more than pay for themselves in a matter of months. And if they flat-out enable editing that doesn't work at all on cards with less memory, that's far more than just "helping".
     
    Revdarian and Red Falcon like this.
  3. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    8,197
    Joined:
    Jun 13, 2003
    If that's the case, then the users in question would have already purchased a proper professional card.
     
  4. Cerulean

    Cerulean [H]ardForum Junkie

    Messages:
    9,238
    Joined:
    Jul 27, 2006
    If you are serious about video production, do yourself a favor: do the research and realize that professional cards have nothing on non-professional cards in terms of rendering performance. Professional cards won't provide any benefit or justification there.

    EDIT: I did some more digging and found an enlightening explanation of why some would choose a professional card over a non-professional one. Non-professional cards have more relaxed tolerances and are not as strong in FP64 calculations. Because of these two factors, the quality of renders is slightly lower than if rendering with a CPU. The other disadvantage against CPUs is that once you run out of VRAM, you run out of VRAM, whereas CPUs have a lot of DRAM available to them. For the majority of videos out there, non-professional hardware is more than adequate. But if you are a studio where, due to the risk/impact/demands of your client and audience, there cannot be even the slightest, most minute flaw on any frame, you will want FP64 with minimal tolerance for errors. That means you either render on the CPU or with a professional card.

    See Mark Sin's post at https://www.quora.com/Why-are-CPUs-more-important-in-final-rendering-than-GPUs. It is a little lengthy, but it is worth the read for the technically minded who want to understand why GPUs can output lower quality than CPUs.
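
    To make the precision point concrete, here is a tiny standalone demo (not from any renderer, just plain C++ showing how single-precision accumulation drifts while double precision holds):

        #include <cstdio>

        int main() {
            float  f = 0.0f;
            double d = 0.0;
            // Add 0.0001 ten million times; the exact sum is 1000.
            for (int i = 0; i < 10000000; ++i) {
                f += 1.0e-4f;  // FP32: each add rounds, and the error piles up
                d += 1.0e-4;   // FP64: rounding stays far below display precision
            }
            printf("float : %.6f\n", f);  // lands visibly off from 1000
            printf("double: %.6f\n", d);  // essentially exact at this precision
            return 0;
        }

    Compositing and color math do this kind of accumulation per pixel, per frame, which is where the FP32/FP64 difference shows up.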
     
    Last edited: Feb 11, 2019
  5. aokman

    aokman Gawd

    Messages:
    668
    Joined:
    Jan 3, 2012
    Premiere is such inefficient horseshit lol
     
    mikeo likes this.
  6. DocNo

    DocNo Gawd

    Messages:
    625
    Joined:
    Apr 23, 2012
    Why? You think businesses like throwing away money they don't need to spend just for labels like "proper professional card"?!?

    lol - some may, but they usually aren't in business long doing stupid stuff like that.
     
    Red Falcon likes this.
  7. Algrim

    Algrim [H]ard|Gawd

    Messages:
    1,326
    Joined:
    Jun 1, 2016
    Most gaming cards don't have or support ECC memory, whereas professional cards do. For the use cases where that matters, you don't have much choice. For workloads that don't need such precision, gaming cards can be usable or even desirable (overclocked gaming cards can be compute monsters).
     
  8. Tsumi

    Tsumi [H]ardForum Junkie

    Messages:
    12,861
    Joined:
    Mar 18, 2010
    When something is labeled professional, the primary factor is not performance, but stability, reliability, and support. That is what businesses pay for, and often preventing lost working time is worth the additional cost of "professional."
     
    N4CR and IdiotInCharge like this.
  9. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    8,197
    Joined:
    Jun 13, 2003
    Yup. There's a thin margin where you do enough work that a high-end gaming card makes sense, but not so much that a professional card does.
     
  10. DocNo

    DocNo Gawd

    Messages:
    625
    Joined:
    Apr 23, 2012
    Not always - it just depends on the company, the product and the costs involved.

    The amount of "IBM" level businesses that will just blindly play for things like "Professional" nomenclatures are tiny compared to the overall number of smaller businesses out there.

    Don't think so? Apple didn't overtake Microsoft in the enterprise.

    As with anything, there are pros and cons. Maybe in the final render path you put in the pro card - but in the editing bay? Lots of my friends are over the moon with this new card and the new capabilities it will give them. For them it wasn't a matter of paying more for the pro card; it was a matter of having this card or nothing, since the "pro" cards are simply out of the question.

    How that's a bad thing still mystifies me, but in forums like these, people like you make these arguments, so here we are, I guess.
     
    Cerulean likes this.
  11. Tsumi

    Tsumi [H]ardForum Junkie

    Messages:
    12,861
    Joined:
    Mar 18, 2010
    Again, we are not saying it's a bad thing. We are saying its purpose is extremely limited and doesn't do much good for gamers, which is what the majority of us here are. It's not a halo card, it's not a great-value card; it's a meh card at an even more meh price that offers only one special thing to an extremely small niche of people. I don't understand how hard that is for you to comprehend.

    While the number of businesses that "blindly" pay for things (I assure you, they don't; they have the cost-benefit already figured out) might be small, the volume they purchase is far from insignificant. Apple may not have taken over the enterprise, but they do offer a lot of enterprise-level services, which goes to show that the enterprise cannot be ignored.

    I mean, we gave AMD hell for Bulldozer. It offered great multi-threaded performance for the price, but was so meh everywhere else that it was slammed. Same thing here. Great 4K video rendering capabilities for the price, meh everywhere else.
     
  12. gamerk2

    gamerk2 [H]ard|Gawd

    Messages:
    1,490
    Joined:
    Jul 9, 2012
    First off, you pretty much never work with the entire buffer in one go; you typically only need to allocate a few MB at a time. Secondly, you only need to break up the data in the cases where it won't fit, so there's zero performance loss otherwise. And I'll note again that the alternative is "don't do it at all," so arguing about performance is kind of ironic.
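
    As a sketch of what "break up the data" means in practice (every function name below is a hypothetical placeholder, not Adobe's actual code):

        #include <algorithm>
        #include <cstddef>

        // Stubs for illustration; a real app would call its GPU API here.
        static void upload_to_vram(const unsigned char*, std::size_t) {}
        static void run_effect(std::size_t, std::size_t) {}
        static void read_back(unsigned char*, std::size_t) {}

        // Process a frame in horizontal slices sized to the VRAM budget.
        // When the whole frame fits, the loop runs once: no extra cost.
        void process_frame(const unsigned char* in, unsigned char* out,
                           std::size_t row_bytes, std::size_t rows,
                           std::size_t vram_budget) {
            std::size_t chunk = std::max<std::size_t>(1, vram_budget / row_bytes);
            for (std::size_t y = 0; y < rows; y += chunk) {
                std::size_t n = std::min(chunk, rows - y);
                upload_to_vram(in + y * row_bytes, n * row_bytes);
                run_effect(y, n);  // GPU work on just this slice
                read_back(out + y * row_bytes, n * row_bytes);
            }
        }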
     
  13. gamerk2

    gamerk2 [H]ard|Gawd

    Messages:
    1,490
    Joined:
    Jul 9, 2012
    Which is how ALL memory management works. The application requests a memory allocation of X size from the hardware and either gets an address to the start of a block of RAM, or an error if the request cannot be met. All Adobe needs to do here is move some data out of VRAM when a memory request fails and retry the request (which is pretty much how paging works at the OS level).
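
    In rough C++, using the real CUDA runtime allocator to stand in for whatever Premiere uses internally (evict_one_buffer is a made-up placeholder for the application's own cache eviction):

        #include <cuda_runtime.h>
        #include <cstddef>

        // Stub for illustration: a real app would release one of its own
        // cached VRAM buffers and return false once nothing is left.
        static bool evict_one_buffer() { return false; }

        void* vram_alloc_with_retry(std::size_t bytes) {
            void* p = nullptr;
            while (cudaMalloc(&p, bytes) != cudaSuccess) {
                cudaGetLastError();        // clear the error before retrying
                if (!evict_one_buffer())   // nothing left to evict, so let
                    return nullptr;        // the caller fall back to system RAM
            }
            return p;
        }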
     
  14. M76

    M76 [H]ardForum Junkie

    Messages:
    8,563
    Joined:
    Jun 12, 2012
    Obviously not, based on the examples.
     
  15. Nobu

    Nobu 2[H]4U

    Messages:
    2,539
    Joined:
    Jun 7, 2007
    It's not an easy problem to solve. When you're working with large data sets (rendering 4K video means working with uncompressed image data, lots of textures, meshes, etc.), the program cannot always know in advance how much memory will be required to render a frame, and a lot of data needs to be in memory at once in order to complete the operation.
    https://devtalk.nvidia.com/default/...error-thrown-by-the-driver-instead-of-opengl/
    https://blender.stackexchange.com/q...of-memory-how-to-identify-the-problem-objects
    Edit: and from a comment on Dan's question:
    http://blender.stackexchange.com/a/61421/1853

    Look at it this way: you have a timeline with, say, two 4K videos, an effect, and a transition between the two. You also have a 1080p video embedded in one of the two streams, using a green-screen effect. You need to have the 1080p video, the two 4K videos, the previous rendered frame (maybe multiple), the combined 4K and 1080p frame, and the current render buffer in video memory. You also have to use video memory for each of the individual operations (sometimes at pixel granularity, sometimes multiple pixels, sometimes the whole frame). Sometimes an effect requires you to render the next frame before the current one, or at the same time. You have to do this 60 times a second (or more), and you want them to guess the amount of VRAM needed each time? Or fetch a frame from system memory each time?

    The alternative is reducing the data size, removing effects, changing encoding settings (possibly reducing quality), or doing the rendering on the CPU.
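
    For a sense of scale, a back-of-the-envelope sketch of just the frame buffers in that timeline example (assuming 8-bit RGBA; 10/16-bit or float pipelines are 2-4x larger, and effect scratch buffers come on top):

        #include <cstdio>

        int main() {
            const double mib = 1024.0 * 1024.0;
            double uhd = 3840.0 * 2160.0 * 4 / mib;  // ~31.6 MiB per 4K frame
            double fhd = 1920.0 * 1080.0 * 4 / mib;  // ~7.9 MiB per 1080p frame
            // Two 4K sources, the 1080p insert, the previous rendered frame,
            // the composited frame, and the current render target.
            double working_set = 2 * uhd + fhd + 3 * uhd;
            printf("~%.0f MiB in flight per frame, before scratch buffers\n",
                   working_set);
            return 0;
        }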
     
    Last edited: Feb 12, 2019
    DocNo, Cerulean and N4CR like this.
  16. Shagittarius

    Shagittarius n00b

    Messages:
    55
    Joined:
    May 3, 2016
    Because one company with a use for, say, half a dozen of these cards is such a huge market indicator. I think the kiddies don't know what a mass-market product is.
     
  17. gamerk2

    gamerk2 [H]ard|Gawd

    Messages:
    1,490
    Joined:
    Jul 9, 2012
    The failure in thinking here is the assumption that all of these need to be in VRAM at the same time. Yes, shuffling data across the PCI-E bus can certainly slow the operation down, but that's still preferable to the application crashing and the work never getting done.

    Both the OS and graphics APIs have mechanisms for recovering from memory allocation errors; it's up to the developer to use them properly.
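
    OpenGL, for example, reports a failed allocation through glGetError rather than killing the process. A minimal sketch of checking for it (assumes a working GL context and a previously clear error queue; the fallback policy is up to the app):

        #include <GLES2/gl2.h>  // any header declaring the buffer API works

        // Try to create a buffer of the requested size; OpenGL reports
        // failure via GL_OUT_OF_MEMORY rather than crashing the app.
        bool try_alloc_buffer(GLsizeiptr bytes, GLuint* out) {
            glGenBuffers(1, out);
            glBindBuffer(GL_ARRAY_BUFFER, *out);
            glBufferData(GL_ARRAY_BUFFER, bytes, nullptr, GL_DYNAMIC_DRAW);
            if (glGetError() == GL_OUT_OF_MEMORY) {
                glDeleteBuffers(1, out);  // clean up, then let the caller
                return false;             // retry with smaller chunks
            }
            return true;
        }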
     
    Algrim, mikeo and IdiotInCharge like this.