" The public beta is just around the corner and weâre aiming for a 1.0 launch in the first quarter of 2014. In terms of features, weâre about to release some very significant optimizations mostly related to blended (layered) materials. "
Q1. Redshift is a GPU-accelerated renderer. Why did you develop a new renderer?
GPUs have a massive amount of raw computational power and high memory bandwidth compared to CPUs, and rendering is computationally expensive. Faster rendering can often translate directly into money saved. Faster rendering also means shorter iteration times, which directly lead to better results. We developed Redshift because we're big believers in the power of the GPU, and we felt that the existing GPU renderers did not cover enough of the features and flexibility that users want or need.
Q2. Rendering engine algorithms are often relatively simple, however the integration into DCC ( 3ds Max , Maya ... ) is something much longer. Can you confirm this analysis?
The basic rendering algorithms are indeed pretty simple. However, once you enable enough flexibility/customization in the rendering pipeline, things can quickly become more complicated. And once you start bringing the GPU into the equation, many of these simple algorithms can actually be quite difficult to implement in a way that uses the GPU efficiently.
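As a rough illustration of how simple the core of a rendering algorithm can be, here is a toy Monte Carlo estimator (my own sketch, not Redshift code) that computes the irradiance at a surface point under a constant-radiance sky by uniformly sampling the hemisphere. A production renderer wraps a loop like this in visibility tests, BRDFs, and acceleration structures, which is where the real complexity lives:

```python
import math
import random

def sky_irradiance(radiance: float, samples: int = 200_000) -> float:
    """Monte Carlo estimate of irradiance from a constant-radiance sky.

    Uniformly samples directions on the hemisphere above the surface;
    each sample is weighted by cos(theta) and divided by the uniform
    hemisphere PDF, 1 / (2*pi). The analytic answer is pi * radiance.
    """
    total = 0.0
    for _ in range(samples):
        # For uniform hemisphere sampling, cos(theta) is itself
        # uniformly distributed on [0, 1].
        cos_theta = random.random()
        total += radiance * cos_theta
    return total / samples * 2.0 * math.pi

random.seed(0)
estimate = sky_irradiance(1.0)
print(estimate)  # should land close to pi (~3.1416)
```

The estimator converges to pi * radiance, the closed-form irradiance for this setup, which makes the sketch easy to sanity-check.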
To achieve a high level of DCC integration, you need a reasonably complete feature set on the rendering side, otherwise all the knobs and switches in the DCC aren't going to do much! Light linking, per-object visibility settings, flexible shading networks with texturable inputs, etc.
Sometimes a less-than-stellar DCC integration can simply be due to a lack of support of these kinds of mini features on the part of the renderer.
Now of course, building a solid DCC integration is very challenging. Properly and thoroughly translating and handling a broad range of DCC settings requires a lot of work and testing. One of the most challenging aspects is that you're dealing with the DCC APIs, which can be buggy, slow, incomplete, poorly documented, or any combination of the above. You almost always have to resort to dirty hacks and workarounds, which themselves can introduce bugs or hurt performance. On top of that, DCC plugins can be more challenging to debug compared to full applications. If your code is causing the DCC to crash, you often don't know much more than "when I do this, it crashes", so it can be a bit of a wild goose chase to track down the source of the bug. In many cases you aren't even sure whether your code is doing something wrong or the crash is caused by a bug in the DCC itself.
So is doing a solid DCC integration for a renderer harder/longer than writing the renderer itself? In our experience, I would say no, but certainly the DCC integration is not to be underestimated in terms of difficulty and complexity.
Q3. What about the benefits of the GPU in terms of speed? In the end, what are the speed gains for users? CPU engines such as Mental Ray or Arnold do not seem to suffer much from GPU competitors?
It gets very messy very quickly when you try to do fair and empirical performance comparisons of GPU and CPU renderers. There are simply too many variables! Even comparing CPU renderers is murky. Is Vray faster than Arnold? I'm pretty sure the answer is "it depends". The scene complexity, lighting, textures and rendering techniques all play very significant roles in the frame time. And when you try to compare GPU renderers to CPU renderers, you're throwing hardware into the mix. If you compare Redshift to Arnold on a top-of-the-line 12-core Xeon workstation with a low-end Quadro card, you probably won't be too impressed :)
That being said, given the computational power and memory bandwidth of today's GPUs, the performance advantage of a GPU renderer can be very significant. Depending on the hardware, we're hearing from our users that Redshift is anywhere from 4-5 times up to 30-40 times faster than CPU renderers. But again, "your mileage may vary", so to speak, so we don't like making direct comparisons to other renderers. Funnily enough, there was a thread on a popular 3D forum recently where the author of a GPU renderer (not Redshift!) published very direct comparisons between his renderer and other popular renderers. He got some pretty strong responses, and not in a good way. We obviously do our own internal testing against most renderers out there and know exactly how many times faster we are in a variety of shading/lighting situations on specific hardware configurations, but we really want our users or potential users to find out for themselves what kind of performance gains they can expect with Redshift by running their own tests.
As for why we're not seeing GPU renderers take any significant market share away from CPU renderers... Well, when you're talking about Mental Ray, Vray or even Arnold, these are mature products that have had pretty complex pipelines built around them over time. Plus, as CPU renderers, they're easily programmable, which is likely a must-have for any large-ish studio. Also, at least in the case of Mental Ray and Vray, there are a ton of resources online to help get the most out of these renderers, from tutorials to shaders to tools. We believe, however, that the main reason GPU renderers haven't been widely adopted is that they're typically pretty thin on so-called "production-ready" features, which is of course very vague, but true. The GPU renderers we've seen so far feel like toys beside the big boys. We're trying to change that with Redshift.
Q4. Redshift uses CUDA; is it also based on OptiX? Can you explain the reasons for those choices? Does it definitively close the door to AMD GPUs?
Redshift is indeed running CUDA, but it does not use OptiX. We're using our own GPU code for ray tracing, shading, etc. We considered and rejected OptiX as a choice very early because we did not want to rely on a 3rd party library for such a fundamental part of our technology. We didn't want to get into a situation where we were waiting on NVidia to fix something or add a feature. Also, at least at the time we were looking at OptiX, it felt a bit "rigid" and we were finding ourselves wondering if certain things we wanted to achieve would simply not be possible with OptiX.
Now, just because Redshift makes heavy use of CUDA (and likely always will), the door is by no means closed on AMD GPUs! Developing an OpenCL version of Redshift is on our long-term roadmap, though we haven't put much actual dev time into it yet, as we've been focusing on features and stability.
Q5. An important issue is GPU memory management. How does Redshift solve this problem?
Redshift does out-of-core texturing and ray tracing. This means that textures and geometry are split into "tiles" which are sent to the GPU as required. If the GPU memory is full, tiles are swapped out to make space. It's conceptually very simple and similar to paging data from RAM to your hard disk. Of course, if you are constantly swapping tiles out to make room for the tiles you need, you take a performance hit, but this mostly affects geometry. Texturing, on the other hand, works very well. We've done tests with many GBs worth of texture data and never ran into excessive swapping. For geometry, going out of core can translate into a bigger performance hit, but we're working on several new ways to deal with this, some of which should be ready in the next few months. Generally speaking, Redshift tries to be really efficient when it comes to memory management.
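Conceptually, the tile swapping described here behaves like any paging scheme. Here is a minimal LRU tile-cache sketch (my own illustration with hypothetical names, not Redshift's actual implementation), where requesting a tile that is not resident triggers an "upload" and evicts the least recently used tile when memory is full:

```python
from collections import OrderedDict

class TileCache:
    """Toy LRU cache standing in for GPU-resident texture/geometry tiles.

    `capacity` is the number of tiles that fit in "GPU memory"; fetching a
    missing tile counts as a CPU-to-GPU upload and evicts the least
    recently used tile when the cache is full.
    """
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.resident = OrderedDict()  # tile_id -> tile data, LRU-ordered
        self.uploads = 0               # counts simulated CPU -> GPU transfers

    def fetch(self, tile_id, load_tile):
        if tile_id in self.resident:
            self.resident.move_to_end(tile_id)  # mark as recently used
            return self.resident[tile_id]
        if len(self.resident) >= self.capacity:
            self.resident.popitem(last=False)   # evict the LRU tile
        self.uploads += 1
        self.resident[tile_id] = load_tile(tile_id)
        return self.resident[tile_id]

cache = TileCache(capacity=2)
for tid in [0, 1, 0, 2, 1]:     # tile 1 is evicted when tile 2 arrives
    cache.fetch(tid, lambda t: f"tile-{t}")
print(cache.uploads)            # 4 uploads: 0, 1, 2, then 1 again
```

The performance hit the answer mentions corresponds to a high upload count here: a working set larger than `capacity`, touched in the wrong order, forces repeated re-uploads of the same tiles.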
Q6. From a qualitative point of view, how do images from the Redshift GPU renderer compare to images produced by a CPU engine?
In terms of precision/quality, I'd say there are no differences. We get asked about this a lot, probably because people associate GPU rendering with video game graphics, which obviously look nothing like high-quality pre-rendered content. I'm not trying to put down games - they're looking absolutely amazing and getting better all the time, and in fact we come from a video game development background - but it's just not the same sport. Nothing about Redshift is like a game renderer. Redshift implements "offline" rendering techniques accurately, without any sort of shortcuts.
Q7. Is Redshift ready for large projects: multi-GPU rendering, network rendering, render farms...?
Multi-GPU rendering is supported. Network rendering is not natively supported, but many of our (very resourceful!) users have successfully set up Redshift render farms using Royal Render. Anyone interested can find the information they need in our forums. In the future, we'll have native support for network and distributed rendering. We originally planned to implement this earlier in development but, given that solutions already exist, we decided to focus on features that have no obvious workarounds.
Q8. Today, many unbiased engines are emerging (Octane, Cycles, Arion, iray...). Why develop a biased engine? Can Redshift evolve into a physically correct renderer?
I am going to use the term "physically plausible" instead of "physically correct" because it's less absolute (you could easily argue that no renderer is physically correct) and more appropriate for what renderers are trying to do, which is make pretty and (sometimes) believable pictures rather than run physics simulations. Anyway, Redshift is certainly already capable of physically plausible rendering. For example, Redshift supports IES lights, physically plausible materials and even "brute-force" GI. Biased rendering and physically plausible rendering are not mutually exclusive. "Biased" means giving users the choice of rendering techniques that might be less precise (compared to unbiased rendering) but are faster and less noisy. "Biased" can sometimes also mean "flexible". While some 3D artists need to emulate reality as closely as possible (where an unbiased renderer excels), many care more about flexibility and speed than about how closely the image matches reality in a theoretical sense.
Q9. We enjoyed the integration of Redshift into Softimage. From the render region to pass rendering, everything seems perfectly integrated. Do the Maya and 3ds Max versions benefit from the same level of integration?
Thanks! As I said before, solid DCC integration can be tricky to get right, but we strongly believe it's critical. Maya has a similar level of integration to Softimage. The 3ds Max version is currently in development, and we expect a similar level of integration there too.
Q10. Redshift is currently in beta but already seems to be used in production. Did you expect such fast adoption?
I think Redshift is the renderer many people expected when they first heard about GPU-accelerated rendering... but never quite received! To be honest with you, we weren't 100% sure how users would react to Redshift, but we weren't entirely surprised by the excitement.
Like you said, Redshift has already been used in several productions. If you've been on our forums, you know that we (the Redshift team) are very active answering questions, offering advice, fixing bugs as quickly as possible - often the same day - and adding the odd missing feature. We want users to feel confident that, if they use Redshift and get "stuck" somewhere, they won't be left out in the cold.
Q11. What are the next milestones for Redshift (RC, new features...)?
The public beta is just around the corner and we're aiming for a 1.0 launch in the first quarter of 2014. In terms of features, we're about to release some very significant optimizations, mostly related to blended (layered) materials. Next up we have hair, ICE strands and support for ICE attributes in the render tree, some of which are at least partially implemented. And of course, we're fixing bugs and clearing up small annoyances on an ongoing basis.
Q12. You come from the game industry. It's strange that you are now focused on precalculated rendering, isn't it?
It might seem strange, but I would say that our experience in games is what allowed us to make Redshift. We've been working intimately with the GPU for many, many years, which gave us a really solid technical foundation on the hardware side. Also, we come to offline rendering with perhaps a more open mind, which allows us to think outside the box. The other thing to remember is that a significant amount of precalculated rendering happens in the video game industry. All those "Call of Duty" lightmaps are pre-rendered! Our technical lead, Panos, had already developed a full-featured lightmap generator from scratch well before we started throwing around ideas about Redshift.
Q13. Cloud services are becoming more popular; do you have any plans in this area?
"The cloud" is a popular topic these days! We are indeed thinking about a cloud version of Redshift, but I'd say this is at least a year out. So there's nothing to announce just yet.
Q14. What do you think of the realtime/precalculated convergence? Do you have any other projects?
I'm not totally convinced it's converging, in the sense that I doubt we'll be rendering final frames for Hollywood in real time any time soon. But certainly realtime rendering is making huge strides in quality, with new solutions for GI, fuzzy reflections, more accurate DOF, motion blur, etc. But let's not kid ourselves: it's not really even close to pre-rendered quality. And yes, as hardware and software improve, "offline" rendering is getting faster. But despite some exaggerated marketing claims, it's nowhere near realtime. And let's not forget that audience expectations are actually outpacing the improvements. Last time I checked, Shrek's law still holds up - CPU render hours for feature-length animated films double every 3 years. Try watching something you remember as looking awesome 10 years ago. You might not be so impressed today.
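A doubling every 3 years compounds fast. As a back-of-the-envelope sketch (the baseline figure is hypothetical; only the 3-year doubling period comes from the quoted "Shrek's law"):

```python
def render_hours(baseline_hours: float, years: float,
                 doubling_period: float = 3.0) -> float:
    """Compound growth under a doubling law: total render hours
    double every `doubling_period` years."""
    return baseline_hours * 2.0 ** (years / doubling_period)

# With a hypothetical 1M-hour baseline, a decade of "Shrek's law"
# multiplies the compute budget by 2^(10/3), roughly 10x.
print(render_hours(1_000_000, 10) / 1_000_000)  # ~10.08
```

This is why faster hardware alone never quite catches up: the 10x gained from a decade of renderer and hardware progress is roughly consumed by the 10x growth in audience-driven scene complexity.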
Of course, that is not to say that the advances in realtime rendering are not going to be useful to film and television production and other areas that use "offline" rendering. Previz is seeing a huge benefit from realtime tech, and I'm sure realtime will make its way deeper into the pipeline. Given our video game experience, we are obviously thinking about all these things all the time ;)