Yesterday we shared one of our latest advances: real-time rendering of unbounded NeRFs on the web (i.e. in the browser)
unbounded = everything the camera sees during capture, as much as possible, as far as possible
captures.lumalabs.ai/unbounded
pro tip: when visiting captures.lumalabs.ai/unbounded on a laptop/PC
be sure to use the WASD keys to move around the scene, the mouse to rotate, Escape to reset the scene, and the segmented & mesh view toggles at the top right
but it's still possible to view unbounded scenes on mobile too, with a more limited set of controls
you can tap and drag to rotate, use two fingers to move the scene, switch the segmented & mesh view toggles, etc.
captures.lumalabs.ai/unbounded
Imagine re-living precious life moments exactly the way they happened, anytime you want: the place you used to play around as a kid, a spot in your high school, etc. That's the kind of thing unbounded NeRFs bring to the table
captures.lumalabs.ai/ebullient-reindeer-5X-387?mode=slf
It's just been amazing and mind-boggling to see the use cases people are talking to us about. This is the next evolution of 3D capture, reconstruction, rendering, and more.
just look at Ben Affleck’s bike 🏍
twitter.com/gravicle/status/1559355279640211456/video/1
We also announced an amazing gallery of captures made with the Luma app, with links to the actual real-time viewer on the web.
We will feature NeRF captures created with Luma by our early users and partners as a showcase of what’s possible.
captures.lumalabs.ai/
so, let's take a step back and understand what exactly they are. I mean, what is a NeRF, in simple words?
we all know what an image is, what JPEGs, PNGs, etc. are, right? they're basically the pixels/colors seen by a camera, stored in a file. now imagine that for an entire 3D scene/capture
Every point in space in the capture can look different from each viewpoint; the appearance is view-dependent.
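To make that concrete, here's a toy sketch of the core NeRF idea (not Luma's renderer; the names `toy_scene` and `render_ray` are illustrative): the scene is a function that maps a 3D position and a viewing direction to a color and a density, and a pixel is rendered by accumulating that function along a camera ray.

```python
# Toy sketch of the NeRF idea, not a real implementation:
# scene = F(position, view_direction) -> (color, density)
# pixel = volume rendering of F along a camera ray
import numpy as np

def toy_scene(position, view_direction):
    """Stand-in for the learned network F: returns (rgb, density).

    Here: a fuzzy sphere of radius 1 at the origin whose color shifts
    slightly with viewing direction (the "view-dependent" part).
    """
    dist = np.linalg.norm(position)
    density = 5.0 if dist < 1.0 else 0.0          # semi-opaque inside the sphere
    base_rgb = np.array([0.8, 0.3, 0.3])
    tint = 0.2 * max(view_direction[2], 0.0)      # color depends on where you look from
    return base_rgb + tint, density

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Classic volume rendering: march along the ray, accumulate color."""
    ts = np.linspace(near, far, n_samples)
    delta = ts[1] - ts[0]
    color = np.zeros(3)
    transmittance = 1.0                            # how much light still gets through
    for t in ts:
        point = origin + t * direction
        rgb, sigma = toy_scene(point, direction)
        alpha = 1.0 - np.exp(-sigma * delta)       # chance this sample blocks the ray
        color += transmittance * alpha * rgb
        transmittance *= (1.0 - alpha)
    return np.clip(color, 0.0, 1.0)

# One pixel seen from a camera at z = -3 looking toward the origin:
pixel = render_ray(origin=np.array([0.0, 0.0, -3.0]),
                   direction=np.array([0.0, 0.0, 1.0]))
print(pixel)
```

In a real NeRF the hand-written `toy_scene` is replaced by a neural network trained from the capture photos, which is what lets the same point return different colors depending on the viewing direction.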