The human eye can only see 1 frame per 18 hours so I consider this reasonably fast.
Are you a tree?
Nah, just The Lorax.
You may need to consult a doctor.
I once took 12+ hours to raytrace a scene on an 8 MHz Amiga, only to realize it didn’t have any light sources and so was pitch black.
I share that memory. At least twice
It’s not that the bear dances well, it’s that the bear dances at all.
60 frames per 42.5 days, playable.
You could get a totally playable fps if you play in geological time scale
Edit: not really fpS, as the s stands for second, but …
Frames Per Stratum
Nice, hitting that sweet spot at 42 fpm (frames per month).
I mean, that’s pretty fucking impressive imo. I figured an RT frame would take days to render on hardware that old.
And back when that computer was contemporary, it would have. We’ve learned a hell of a lot since Nvidia announced they had cracked real-time ray tracing all those years ago.
Now write it in Z80 assembly instead of BASIC and see how much faster you can get it to run.
So true.
When I switched from BASIC to assembler on a Trash 80 Model 1, it was truly night and day.
This computer is illegal in Florida, Texas, and Russia.
Is this true? Sounds like there’s a story you’re not telling us
It’s a rainbow thing 🏳️‍🌈
Ha! You got me.
It’s like playing chess by mail, but with Doom.
I misread it at first as 17 frames per hour and thought, “Yeah, that’s reasonably fast.”
My brain initially assumed 17 fps and I was like dayamnnnn
700 years’ worth of compute to do about an hour of gaming that I just did on my PC at home in real time … damn.
Did I math it right? I was averaging about 100 fps in Hogwarts for about an hour.
Say you generated 8,640,000 frames (a full day at 100 fps). At 17 h a frame, that’s roughly 16,767 years.
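For anyone who wants to check the math, here it is as a quick Python calculation (my own arithmetic, assuming 17 hours per frame throughout):

```python
# Sanity-checking the thread's numbers (Python as a calculator).
HOURS_PER_FRAME = 17

# One hour of gaming at 100 fps -> the "700 years" claim.
frames_per_hour = 100 * 3600                           # 360,000 frames
years = frames_per_hour * HOURS_PER_FRAME / 24 / 365
print(round(years))                                    # ~699, so "700 years" checks out

# A full day at 100 fps -> the "16,767 years" figure.
frames_per_day = 100 * 86_400                          # 8,640,000 frames
years = frames_per_day * HOURS_PER_FRAME / 24 / 365
print(round(years))                                    # ~16,767
```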
tbf that’s probably on par with how Cyberpunk 2077 performed at release
Still remember loading games from cassette tapes on this thing and the Z80.
“is it still loading or did it fail?”
ah, plus ça change…
Since the dedicated card in my hybrid graphics setup is broken, my gaming experience is almost the same as with this one.
What resolution? I’m guessing 64x48?
The strain of going from a 32 × 22 image to a 256 × 176 one is evident in how much longer this second image took to render: from 879.75 seconds (nearly 15 minutes) to 61,529.88 seconds (over 17 hours). Luckily, some optimisations and time-saving tweaks meant this could be brought down to 8,089.52 seconds, or roughly two and a quarter hours.
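Back-of-the-envelope, that jump is almost exactly the increase in pixel count (my own arithmetic, not from the article):

```python
# Render time vs pixel count for the two ZX81 images.
small_pixels = 32 * 22      # 704 pixels
large_pixels = 256 * 176    # 45,056 pixels

print(large_pixels / small_pixels)           # 64.0x more pixels
print(879.75 * large_pixels / small_pixels)  # ~56,304 s predicted vs 61,529.88 s measured
```

Within about 10%, which fits the “time scales with pixel count” point made further down the thread.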
Those are really reasonable values. I guess my laptop would take that long to render a 4K image as well.
How fast your laptop can render it really depends on the complexity of the frame being rendered.
Ray tracing speed primarily depends on the number of pixels, not the complexity of the scene.
The complexity of your scene makes a huge difference. If your scene has fewer things for light to bounce off of, the ray tracing goes much faster.
(Source: I do blender renders with cycles)
So I’m not exactly sure how Blender implements this. There are a few details that can make a huge difference. Just for starters, is Blender doing 100% ray tracing here, or is it a hybrid model with a rasterizer? Rasterizers tend to scale with the number of objects, while ray tracing scales with the number of pixels. A hybrid will be, obviously, somewhere in between.
Then there is how it calculates intersections. There is a way to very quickly test rays against AABBs (axis-aligned bounding boxes, basically boxes that surround your more complicated objects), but it takes a little effort to implement this and get the data structures right. You can actually do Good Enough sometimes by testing every ray against every AABB and then doing the more expensive intersection checks against what’s left, but there’s a certain scale where that breaks down.
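For the curious, a minimal sketch of that quick AABB test (the standard “slab” method; the names are mine, and this isn’t Blender’s code):

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Return True if a ray (origin + t*direction, t >= 0) hits the box."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        if direction[axis] == 0.0:
            # Ray is parallel to this slab: it misses unless the origin
            # already lies between the two planes.
            if not (box_min[axis] <= origin[axis] <= box_max[axis]):
                return False
        else:
            t1 = (box_min[axis] - origin[axis]) / direction[axis]
            t2 = (box_max[axis] - origin[axis]) / direction[axis]
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
            if t_near > t_far:
                return False
    return True

# The "Good Enough" brute force: test the ray against every object's box,
# then only run the expensive exact intersection on the survivors.
def candidates(origin, direction, objects):
    return [o for o in objects
            if ray_hits_aabb(origin, direction, o["min"], o["max"])]
```

The brute-force version is O(rays × objects), which is exactly the scale where it breaks down; real engines put the boxes in a hierarchy (a BVH) so each ray only touches a handful of them.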
Blender is generally very well done from what little I know of it, but I’m not sure how it handles all these tradeoffs.
So, as far as I know the Cycles engine does ray tracing until it hits a noise threshold, then does AI denoising for the final cleanup. You can see where the more visually complex parts of your render are, because they take a lot longer to render down to a less noisy state. I don’t know the specifics of how it works under the hood, but given that complex parts of your image take longer to render to an acceptable threshold than simpler parts, it seems obvious to me that render time scales with complexity.
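A toy version of “sample until the noise drops below a threshold” shows why that happens (hypothetical code, not how Cycles is actually implemented):

```python
import random
import statistics

def shade(pixel):
    # Stand-in for tracing one path through a pixel; noisy by construction.
    return random.gauss(pixel["true_value"], pixel["variance"] ** 0.5)

def render_pixel(pixel, noise_threshold=0.01, min_samples=16, max_samples=4096):
    samples = [shade(pixel) for _ in range(min_samples)]
    while len(samples) < max_samples:
        # Standard error of the mean as a crude noise estimate.
        err = statistics.stdev(samples) / len(samples) ** 0.5
        if err < noise_threshold:
            break
        samples.append(shade(pixel))
    return statistics.fmean(samples), len(samples)

# A "complex" (high-variance) pixel burns far more samples than a flat one.
flat = {"true_value": 0.5, "variance": 0.001}
shiny = {"true_value": 0.5, "variance": 0.5}
print(render_pixel(flat)[1], render_pixel(shiny)[1])
```

The high-variance pixel hits the sample cap while the flat one stops almost immediately, which is the “render time scales with complexity” effect in miniature.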