Comments
…the VRAM dedicated to emulating RTX on older cards is about 1 GB
It seems GPU tech has reached its limits.
That's why they are turning to AI for lower power consumption.
They could increase raw performance, but that would just increase the wattage a lot more.
Maybe the tech has almost reached its limit.
I would bet the 6090 will lean more on AI than on raw performance.
I am not sure what you mean here - performance is improved, but they are headlining the improvements in AI-related tasks since that is what is currently in demand.
Performance continues to improve and always has room for further enhancement. However, as GPU technology approaches its efficiency limits, achieving double or triple the raw performance will increasingly require a proportional increase in power consumption.
Since frames per second (FPS) is often the standard metric for GPU performance, it may become more efficient to rely on AI to maintain high FPS rather than depending solely on raw hardware capabilities. AI could enable GPUs to deliver similar performance at a significantly lower power cost through generated "fake" frames.
This situation parallels the evolution of binary supercomputers. In the past, supercomputers required enormous amounts of space and power. Over time, advancements in technology made them smaller and more energy-efficient. But as binary computing approaches its efficiency ceiling, doubling the capability of current supercomputers might require twice the space and power unless a breakthrough, such as quantum computing, emerges.
(ChatGPT helped me compose better wording, hahaha...)
So that's why I think with the 6090 and above, we'll probably see more AI enhancement rather than raw performance gains, due to power consumption considerations.
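A rough back-of-envelope illustration of that trade-off (the linear power-to-FPS scaling and all the numbers below are my own simplifying assumptions, not measured figures):

# Illustrative only: assume raw frame rate scales roughly linearly with board power
# once a GPU generation is near its efficiency limit (a simplifying assumption).
baseline_fps = 60        # hypothetical native frame rate
baseline_watts = 450     # hypothetical board power at that frame rate
target_fps = 180         # tripling raw performance the "brute force" way
raw_route_watts = baseline_watts * (target_fps / baseline_fps)
print(f"~{raw_route_watts:.0f} W for {target_fps} FPS")  # ~1350 W under this assumption

Frame generation aims to hit a similar displayed FPS without paying that raw-rendering power cost.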
It doesn't include an Iray benchmark - not sure that's currently possible - but this video seems like it would have some relevant benchmarks for us.
So, if we can expect an Iray performance boost comparable to what we see from his Blender and Maya Arnold results, it would be around 40%. Maybe Iray will do better, since it's an Nvidia-developed render engine.
My feeling is that AI enhancement isn't so much about power consumption; it's more about what currently brings them money, and that is AI acceleration. Graphics became a side business. They improve AI performance, and their R&D goes that way since that's where their market currently is. "Raw performance" improvement seems to be more due to small tweaks.
That's why there's little improvement in performance compared to the 4000 series when they turn DLSS off. And that's why they "improved" DLSS frame generation to generate not one but three "fake" frames. It's simply that the AI part is better than in the last generation, so it can do that in the same amount of time.
As for power consumption, I just watched the Hardware Unboxed video about the MSI RTX 5090; it uses 750W. That's not really considerate of power consumption. Sure, you get more frames than the 4090, but they're a product of DLSS Multi Frame Generation, so it seems the only real improvement is in that part of the GPU.
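To put rough numbers on the frame-generation point (a hypothetical sketch; the 4x factor just reflects one rendered frame plus three generated ones, and real scaling varies per game):

# Hypothetical sketch: back out the actually rendered FPS from the displayed FPS
# when frame generation inserts extra AI frames between rendered ones.
def rendered_fps(displayed_fps, generated_per_rendered):
    return displayed_fps / (1 + generated_per_rendered)

print(rendered_fps(240, 3))  # DLSS 4 Multi Frame Generation: 240 displayed -> 60 rendered
print(rendered_fps(120, 1))  # older single-frame generation: 120 displayed -> 60 rendered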
I think for the GPUs and CPUs the biggest factor in performance improvement has been decreasing the size of the inner workings. The smaller nanometer components are able to do the same amount of work with less power, so they can do even more work with the same amount of power. If they hit a plateau, it will probably be because they can't go from 2nm to 1nm or something like that. Then they will have to either increase power consumption or let the marketing team hype up how the LED lighting effects improve performance.
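For what it's worth, the usual first-order reason smaller transistors help is the dynamic-power relation P ≈ C·V²·f; a toy illustration (the capacitance and voltage numbers below are made up, not real node data):

# Dynamic switching power is roughly P ~ C * V^2 * f (a standard first-order approximation).
# Smaller nodes tend to lower both switched capacitance C and operating voltage V,
# so the same clock f costs less power. Numbers are illustrative only.
def dynamic_power(c_farads, volts, freq_hz):
    return c_farads * volts ** 2 * freq_hz

old_node = dynamic_power(1.0e-9, 1.10, 2.5e9)
new_node = dynamic_power(0.8e-9, 1.00, 2.5e9)
print(f"new node uses ~{new_node / old_node:.0%} of the old node's dynamic power")  # ~66%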
It is impossible to benchmark a GPU in all fairness if we enable AI. AI does not represent the frame rate of a GPU. AI frames are fake frames. It's similar to Photoshop when we color in a background versus letting AI fill it with calculated pixels. AI is simply calculating pixels to render, and what we get may not be as consistent or stable as what one would get from the GPU itself. The 4090 has fake frames too, but not nearly as many, so essentially the 4090 would benchmark better against a 5090 in a fair comparison.
Okay everyone, head to this page for more info
https://www.neowin.net/news/specs-appeal-comparing-nvidia-rtx-5000-series-to-rtx-4000-and-3000/
Hope this helps
We need a much, much faster raw ray tracing render engine.
I just got a 4060 Ti with 16GB of VRAM for a really good price, and that is my first 40-series card. I won't touch a 50 series till the prices come down and we find out if they are gonna burn up LOL. I would look into getting a good deal on a 40-series card now that everyone is gonna go 50 series.
I'm interested in inZOI, so ACE AI making smarter NPCs is welcome.
That's what I have, and for Daz I'm quite content with it. Both my monitors are 1080p and max out at 60fps, so in gaming I never seem to max it out; I can run ultra on most of what I have with no issues. Going above and beyond that seems overkill for me, but I know some people have far higher resolution monitors than mine, so the demands might be different for them. For a still-rendering program like Daz, a 5090 would be like buying a Ford Super Duty F-450 even though you don't have anything big to haul. Complete with horrible gas mileage. Or in this case, starving for electricity.
And that rusty rail yard would still probably stutter in the viewport, haha.
On my PNY GeForce RTX 4070 12GB, I did some FHD renders of a Genesis 9 character in a T-pose using one of Colm Jackson's Iray rendering light setup products, and it does help speed up render times and gets competent lighting quickly relative to my own attempts. I still feel render times for FHD should be 30 seconds or less; that seems a few generations of cards off yet.