Real-time PathTracing with global illumination and progressive rendering, all on top of the Three.js WebGL framework.
-
Geometry Showcase Demo demonstrating some primitive shapes for ray tracing.
-
Ocean and Sky Demo mixes ray tracing with ray marching and models an enormous calm ocean underneath a realistic physical sky. Now has more photo-realistic procedural clouds!
-
Billiard Table Demo shows support for loading image textures (i.e. .jpg .png) to be used for materials. The billiard table cloth and two types of wood textures are demonstrated.
-
Cornell Box Demo This demo renders the famous old Cornell Box, but at 30-60 FPS - even on mobile!
For comparison, here is a real photograph of the original Cornell Box vs. a rendering with the three.js PathTracer:
-
Volumetric Rendering Demo renders objects inside a volume of dust/fog/etc. Notice the cool volumetric caustics from the glass sphere on the left, rendered almost instantly!
-
Terrain Demo combines traditional raytracing with raymarching to render stunning outdoor environments in real time! The land is procedurally generated and can be altered with simple parameters. Total number of triangles processed for these worlds: 2! (for the screen-size quad) :-)
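For anyone curious how terrain like this can be rendered without any triangles, here is a minimal sketch of the general ray-marching idea (my own illustration with a made-up height function, not the demo's actual shader):

```glsl
// March a ray forward until it dips below a procedural height function.
// terrainHeight() is a hypothetical placeholder for any 2D noise function.
float terrainHeight(vec2 xz) { return 10.0 * sin(xz.x * 0.01) * cos(xz.y * 0.01); }

float marchTerrain(vec3 rayOrigin, vec3 rayDirection) {
    float t = 0.0;
    for (int i = 0; i < 300; i++) {
        vec3 p = rayOrigin + rayDirection * t;
        float gap = p.y - terrainHeight(p.xz);
        if (gap < 0.001) return t; // ray has reached the terrain surface
        t += gap * 0.5;            // step proportionally to the gap, for safety
    }
    return -1.0; // no hit
}
```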
-
Arctic Circle Demo I was experimenting with my ray marching engine and what types of environments I could get out of it by just altering some parameters. When the scene first opens, it's almost like you're transported to the far north! The time of year for this demo is summer - notice how the sun never quite sets below the horizon.
-
Planet Demo (W.I.P.) takes raymarching and raytracing to the extreme and renders an entire Earth-like planet with a physically-based atmosphere! Still a work in progress, the terrain is procedurally generated. Although the mountains/lakes are too repetitious (W.I.P.), this simulation demonstrates the power of path tracing: you can hover above the planet at high orbit (5,000 km altitude), then drop all the way down and land your camera right on top of a single rock or a single lake water wave (1 meter). All planet/atmosphere measurements are to scale. The level of detail possible with raytracing is extraordinary!
-
Quadric Geometry Demo showing different quadric (mathematical) ray tracing shapes.
-
Water Rendering Demo Renders photo-realistic water and simulates waves at 30-60 FPS. Unlike other traditional engines/renderers, no triangle meshes are needed - the water surface is achieved through ray marching.
-
BVH Point Light Source Demo Demonstrates the use of a point light to illuminate the famous Stanford Bunny (30,000+ triangles!). Normally a dark scene like this with a very bright small light would be very noisy, but thanks to randomized direct light targeting, the image converges almost instantly!
-
BVH Spot Light Source Demo A similar scene but this time a bright spotlight in the air is aimed at the Stanford Bunny, making him the star of the scene! The spotlight is made out of dark metal on the outside and a reflective metal on the inside. Notice the light falloff on the checkered floor.
-
Animated BVH Model Demo not only loads and renders a 15,000+ triangle GLTF model with correct PBR materials (albedo, emissive, metallicRoughness, and normal maps), but it also translates and rotates the entire model and its BVH structure in real time! Loading and ray tracing bone animations for rigged models is still under investigation, but getting rigid models to move, rotate, and scale arbitrarily was a huge step forward for the pathtracing game engine!
-
HDRI Environment Demo shows how to load an equirectangular HDRI map to use as the scene's surrounding environment. This demo also uses the optimized BVH accelerator to load the famous Stanford Dragon model consisting of 100,000 triangles and renders the scene in real time! I also added a material and color picker so you can instantly change the dragon's material type (glass, metal, ceramic) as well as its material color without missing a beat! Note: please allow 5-10 seconds to download the large HDR image
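For reference, here is the standard equirectangular lookup that a demo like this typically performs (a hedged sketch - the uniform name tHDRTexture is illustrative, not necessarily this repo's): a ray's world-space direction is converted to longitude/latitude texture coordinates.

```glsl
#define PI 3.14159265358979

// Convert a world-space ray direction into (u, v) coordinates
// on an equirectangular HDRI map.
vec2 equirectUV(vec3 rayDirection) {
    float u = atan(rayDirection.z, rayDirection.x) * (0.5 / PI) + 0.5; // longitude -> [0,1]
    float v = acos(clamp(rayDirection.y, -1.0, 1.0)) / PI;            // latitude  -> [0,1]
    return vec2(u, v); // flip u or v as needed for your texture's orientation
}

// usage: vec3 envColor = texture(tHDRTexture, equirectUV(rayDirection)).rgb;
```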
-
BVH Visualizer Demo Lets you peek under the hood of the BVH acceleration structure and see how the various axis-aligned bounding boxes are built all the way from the large surrounding root node box (level 0), to the small leaf node boxes (level 14+), to the individual triangles of the model that are contained within those leaf node boxes. This demo loads the famous Stanford Dragon (100,000 triangles!) and renders it as a purple light source inside yellow glass bounding boxes of its BVH.
-
GLTF Viewer Demo This cool viewer not only loads models in glTF format, but also uses three.js' RGBELoader to load an equirectangular HDR image as the background and for global lighting. Many thanks to github user n2k3 for his awesome contributions to this viewer! He implemented a slick loading animation as well as a GUI panel that allows you to change the sun angle, sun intensity, sun color, HDR intensity, and HDR exposure.
The following section deals with different techniques in Constructive Solid Geometry (CSG) - taking one 3D mathematical shape and adding, removing, or intersecting a second shape.
The above image was my inspiration to embark on the years-long (and still ongoing!) journey to implement a complete library of analytically ray-traced mathematical shapes that can be rendered in realtime inside a browser. The image is a computer screen grab from an old cinema magazine article showing how the vintage CG company MAGI made their iconic imagery for the 1982 movie, TRON. I saw that movie in theaters when it came out (I was 9 years old, ha) and at first I thought, since it was a Disney movie, that their artists had hand-drawn all the crazy scenes and sci-fi vehicles. As the end credits rolled though, it said 'computer imagery and animation by MAGI'. Mind blown! At 9 years old in the early 1980's, I hadn't seen anything like that in a movie - I couldn't even comprehend how they made all those cool scenes/vehicles inside of a computer! The film really piqued my interest in computer graphics, and nearly 40 years later, I am happy to report that my quest to be able to render all the shapes that MAGI could has been largely successful! If you combine the right primitives with my CSG viewer demo below, you should be able to recreate almost every shape that you see in the picture above, and maybe even in the movie (after I implement multiple recursion of the CSG operations!). For those who are interested in the math, these are all quadric shapes - shapes that can be defined implicitly (i.e., a unit sphere: x² + y² + z² - 1 = 0) and reduced to a quadratic equation in the ray's 't' value, which can be easily solved by a computer to quickly find the roots (t0, t1). Using these mathematical primitives, MAGI was able to construct all the cool vehicles featured in the movie. An interesting side note: they did not use triangles/polygon modeling like the CG industry does today - it was mainly these math shapes with pixel-perfect, continuous-looking curves. Also noteworthy is that they used ray tracing to render the final animations; each frame took 30 minutes to multiple hours. Well, I'm happy to say that you won't have to wait that long to see an image now - my shapes render at 30-60 FPS inside your browser, even on mobile! ;-)
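To make that math concrete, here is a minimal GLSL sketch of the unit-sphere case (my own illustration, not code lifted from the demos): substituting the ray P(t) = O + t·D into the implicit equation yields a quadratic in t whose roots are the entry and exit points.

```glsl
// Quadric idea for the unit sphere x^2 + y^2 + z^2 - 1 = 0:
// plugging in P(t) = O + t*D gives a*t^2 + b*t + c = 0; solve for (t0, t1).
float intersectUnitSphere(vec3 O, vec3 D) {
    float a = dot(D, D);
    float b = 2.0 * dot(O, D);
    float c = dot(O, O) - 1.0;
    float disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;      // ray misses the sphere entirely
    float sq = sqrt(disc);
    float t0 = (-b - sq) / (2.0 * a); // near root (entry point)
    float t1 = (-b + sq) / (2.0 * a); // far root (exit point)
    if (t0 > 0.0) return t0;
    if (t1 > 0.0) return t1;          // camera is inside the sphere
    return -1.0;                      // sphere is behind the ray
}
```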
- Constructive Solid Geometry Viewer
This viewer allows you to easily experiment with different CSG configurations while seeing the results path-traced in real time! You can select a CSG Operation from the Operations list - Union (A+B), Difference (A-B), or Intersection (A^B). Briefly, a 'Union' operation means that the outside of shape A is fused with the outside of shape B, creating a new single shape with a single interior volume. A 'Difference' operation means that shape B is cut out of shape A (shape B by itself will be invisible, but its influence will be visible as a section missing from shape A where the two overlap). An 'Intersection' operation means that wherever shape A touches shape B, a new shape/volume will be created (the two shapes must overlap, otherwise no new shape will be seen). I added a detailed and fully-featured GUI menu system so that you can easily modify the CSG Operation type, both shapes' Transforms (Position, Scale, Skew, Rotation), both shapes' base geometry (Sphere, Box, Cylinder, Cone, Paraboloid, etc.), their material type (Diffuse, Transparent Refractive, Metal, ClearCoat Diffuse), and their RGB material color. I have spent hours trying various configuration possibilities, which are seemingly endless (ha)! I hope that you too will have fun experimenting with this viewer and seeing what new shapes you can create!
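Under the hood, ray-traced CSG boils down to combining the entry/exit distances (t-intervals) of the two shapes along the ray. Here is a simplified sketch of the three operations (an assumed form for illustration; the actual viewer also handles normals, materials, and many more interval cases):

```glsl
// Each shape's hit is an interval: A = (tEnter, tExit), B likewise.
// vec2(-1.0) signals an empty result.

vec2 csgIntersection(vec2 A, vec2 B) {
    // only where both intervals overlap
    float t0 = max(A.x, B.x);
    float t1 = min(A.y, B.y);
    return (t0 < t1) ? vec2(t0, t1) : vec2(-1.0);
}

vec2 csgUnion(vec2 A, vec2 B) {
    // enter at the earlier entry, exit at the later exit
    // (assumes overlapping intervals; disjoint shapes yield two spans in a full implementation)
    return vec2(min(A.x, B.x), max(A.y, B.y));
}

vec2 csgDifference(vec2 A, vec2 B) {
    // A minus B: keep the front part of A that lies before B begins...
    if (B.x > A.x) return vec2(A.x, min(A.y, B.x));
    // ...otherwise keep the part of A behind B's exit, if any
    // (a full implementation tracks both possible spans)
    float t0 = max(A.x, B.y);
    return (t0 < A.y) ? vec2(t0, A.y) : vec2(-1.0);
}
```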
All of the following 4 demos feature a large dark glass sculpture in the center of the room, which shows Ellipsoid vs. Sphere CSG.
Along the back wall, a study in Box vs. Sphere CSG: CSG_Museum Demo #1
Along the right wall, a glass-encased monolith, and a study in Sphere vs. Cylinder CSG: CSG_Museum Demo #2
Along the wall behind the camera, a study in Ellipsoid vs. Sphere CSG: CSG_Museum Demo #3
Along the left wall, a study in Box vs. Cone CSG: CSG_Museum Demo #4
Important note! - There is a hidden Easter Egg in one of these 4 Museum demo rooms. Happy hunting!
-
Switching Materials Demo This demo showcases different surface material possibilities. The materials that are featured are: Diffuse (matte wall paint/chalk), Refractive (glass/water), Specular (aluminum/gold), ClearCoat (billiard ball, plastic, porcelain), Car clearCoat (painted metal with clear coat), Translucent (skin/balloons, etc.), and shiny SubSurface scattering (polished Jade/wax/marble, etc.)
-
Material Roughness Demo Demonstrates increasing levels of roughness on different materials. From left to right, roughness on the leftmost sphere is set at 0.0, then 0.1, 0.2, 0.3, and so on, all the way to the maximum of 1.0 roughness on the rightmost sphere. The demo starts out with a cyan clearcoat plastic-like material, but you can choose different material presets from the selection menu, as well as change the material color in realtime. I have researched and improved the importance sampling of specular lobes for various amounts of roughness, which results in very fast convergence, especially for smooth to medium-rough materials. Try all the presets for yourself!
-
Transforming Quadric Geometry Demo Using the game engine version of the three.js path tracer, this demo shows how to create multiple objects (a bunch of THREE.Object3D()'s, each with its own transform) on the JavaScript side when initializing three.js, and then send the objects over to the GPU for realtime pathtracing. The nice thing about having my pathtracer sitting on top of three.js is that I can use its built-in transformations such as Translate, Rotate, and Scale. Since these shapes are mathematical (all quadrics), I also included clipping parameters so you can have partial shapes and can even animate the cutting process! Note: this demo may take several seconds to compile
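The shader side of this pattern is the standard inverse-transform trick: instead of transforming the shape, you transform the ray. A small sketch under assumed names (uInvModelMatrix is a hypothetical uniform fed from the object's world matrix inverse on the JavaScript side; intersectUnitSphere is the quadric sketch from earlier):

```glsl
// World-to-local transform for one object, updated from three.js each frame
uniform mat4 uInvModelMatrix;

float intersectTransformedSphere(vec3 rayOrigin, vec3 rayDirection) {
    // move the ray into the object's local space, then intersect the plain unit shape
    vec3 localOrigin    = (uInvModelMatrix * vec4(rayOrigin, 1.0)).xyz;
    vec3 localDirection = (uInvModelMatrix * vec4(rayDirection, 0.0)).xyz;
    // localDirection is deliberately left un-normalized so that the
    // returned t value remains valid back in world space
    return intersectUnitSphere(localOrigin, localDirection);
}
```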
Arthur Appel is credited with the first formal mention of Ray Tracing (raycasting and shadow rays, shown above) in his 1968 paper Some Techniques for Shading Machine Renderings of Solids, written while working at IBM Research (TJW Center). Mr. Appel used this new technique to help visualize machine parts and architectural concepts on printed paper in black and white. The scene data was sent to an IBM 1627 (Calcomp) digital plotter that cleverly used text characters (like '+') with different spacing and brightness to convey the various shades of the sides of a 3D model under a virtual light source. Here are a few examples of Mr. Appel's digital plot renderings from his 1968 paper:
For reference, here is a link to all the images featured in the research paper: Original Appel Renderings (click on the 'View All 14 Figures and Tables' button below the first images).
And here is a demo that lets you literally 'jump into' Appel's 1968 research paper and experience his groundbreaking techniques of per-pixel raycasting and shadow rays:
Scenes that used to take several minutes on Appel's digital plotting device now run at 60 fps in your browser! I think Arthur would get a kick out of dragging the sunlight around in real time on his classic scenes!
Until now (2021), actual photos of Arthur Appel were not publicly available (none can be found with a thorough internet search). All that was known was that he was working at IBM Research (TJW Center) at the time he wrote this seminal 1968 paper. I really wanted to see what Mr. Appel looked like, and to share and celebrate his image and contributions to the field of Ray Tracing and Rendering. With a little hesitation at first, I reached out to the IBM Corporate Archives in New York to see if they might have any remaining employee portraits of Arthur Appel. I'm so glad I did, because I met (via email) a wonderful IBM Archive employee, Max Campbell, who kindly searched the entire archives and found 2 rarely-seen photos of Mr. Appel. Since these images are copyrighted by IBM (and NOT a part of my repo's CC License), Max also kindly and graciously helped me to obtain permission from IBM to share these historic photos of the man who started it all! Click on the images to see the full resolution photos:
Arthur Appel, from the IBM Research Employee Gallery, ca. 1982
Reprint Courtesy of IBM Corporation ©
Arthur Appel demonstrating display architecture, from IBM Research Magazine ca. 1983
Reprint Courtesy of IBM Corporation ©
Many thanks to Max Campbell at IBM Research Archives for locating these rare photos and helping me to obtain permission to share them with everyone who is interested in ray tracing! It is so nice to be able to finally put a face with the name of one of my ray tracing heroes. Thank you Arthur Appel for your historic contributions to the field of Computer Graphics!
While working at Bell Labs and writing his now-famous paper An Improved Illumination Model for Shaded Display, J. Turner Whitted created an iconic ray traced scene which showcased his novel methods for producing more realistic images with a computer. Beginning work in 1978, he rendered a handful of scenes featuring spheres and planes with various materials and reflectivity, so that these images would be included in his paper (which would be published in June 1980). Then for an upcoming SIGGRAPH conference submission, Whitted decided to create an animated sequence of individual rendered images. Thus the first ever ray traced animation was born! This style of putting together single frames of pre-rendered images would continue through a great lineage of movies such as Tron, Toy Story, Cars, all the way to current animated feature films.
Vintage 1979 Video: 'The Compleat Angler' by J. Turner Whitted
Although this movie appears as a smooth animation, it took around 45 minutes to render each individual frame back in 1979! Fast forward to today: using WebGL 2.0 and the parallel processing power of GPUs, here is the same iconic scene rendered 60 times a second in your browser!:
Thank you Dr. Whitted for your pioneering computer graphics work and for helping to start the rendered animation industry!
In 1986 James T. Kajiya published his famous paper The Rendering Equation, in which he presented an elegant and profound unifying integral equation for rendering. Since the equation is infinitely recursive and hopelessly multidimensional, he suggests using Monte Carlo integration (sampling and averaging) in order to converge on a solution. Thus Monte Carlo path tracing was born, which this repo follows very closely. At the end of his paper he included a sample rendered image that demonstrates global illumination through Monte Carlo path tracing:
And here is the same scene from 1986, rendered in real-time:
- In December of 1997, Eric Veach published his seminal PhD thesis on methods for light transport: http://graphics.stanford.edu/papers/veach_thesis/ In Chapter 10, entitled Bi-Directional Path Tracing, Veach outlines a novel way to deal with difficult path tracing scenarios involving hidden light sources (i.e. cove lighting, recessed lighting, spotlights, etc.). Instead of just shooting rays from the camera like we normally do, we also shoot rays from the light sources, and then later join the camera paths to the light paths. Although his full method is difficult to implement on GPUs because of its memory storage requirements, I took the basic idea and applied it to real-time path tracing of his classic test scene with hidden light sources. For reference, here is a rendering made by Veach for his 1997 paper:
And here is the same room rendered in real-time by the three.js path tracer:
The following classic scene rendering comes from later in the same paper by Veach. This scene is intentionally difficult to converge because there is no direct light, only indirect light hitting the walls and ceiling from a crack in the doorway. Further complicating things, the caustics from the glass teapot on the coffee table must be captured without being able to directly connect with the light source.
And here is that scene rendered in real-time by the three.js path tracer: Try pressing 'E' and 'R' to open and close the door!
I only had the above images to go on - there are no scene dimension specifications that I am aware of. However, I feel that I have captured the essence and purpose of his test scene rooms. I think Veach would be interested to know that his scenes, which probably took several minutes if not hours to render back in the 1990's, are now rendering in real time in a web browser! :-D
For more intuition and a direct comparison between regular path tracing and bi-directional path tracing, here is the old Cornell Box scene again but this time there is a blocker panel that blocks almost all of the light source in the ceiling. The naive approach is just to path trace normally and hope that the camera rays will be lucky enough to find the light source:
- Naive Approach to Blocked Light Source As we can painfully see, we will have to wait a long time to get a decent image! Enter Bi-Directional path tracing to the rescue!:
- Bi-Directional Approach to Blocked Light Source Like magic, the difficult scene comes into focus - in real-time!
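For intuition about what the bi-directional approach is doing, here is a heavily condensed sketch (every helper name is a hypothetical placeholder, not this repo's actual shader code): record one vertex of a light path first, then try to connect each diffuse camera-path vertex to it with a visibility (shadow) ray. The full Veach method with proper weighting is far more involved; this only illustrates the basic connection idea.

```glsl
vec3 traceBiDirectional(vec3 rayOrigin, vec3 rayDirection) {
    // step 1: shoot a ray from the light and record one light-path vertex
    vec3 lightPos, lightNor, lightThroughput;
    createLightPathVertex(lightPos, lightNor, lightThroughput);

    vec3 accumColor = vec3(0);
    vec3 mask = vec3(1); // running camera-path throughput
    for (int bounces = 0; bounces < 4; bounces++) {
        vec3 hitPos, hitNor, hitColor;
        bool isDiffuse;
        if (!intersectScene(rayOrigin, rayDirection, hitPos, hitNor, hitColor, isDiffuse))
            break;
        // step 2: try to join this camera vertex to the light-path vertex
        if (isDiffuse && isVisible(hitPos, lightPos)) { // the connection shadow ray
            vec3 toLight = lightPos - hitPos;
            float d2 = dot(toLight, toLight);
            vec3 L = toLight * inversesqrt(d2);
            // geometry term couples the camera vertex to the light vertex
            float G = max(dot(hitNor, L), 0.0) * max(dot(lightNor, -L), 0.0) / d2;
            accumColor += mask * hitColor * lightThroughput * G;
        }
        mask *= hitColor; // continue the camera path with a diffuse bounce
        rayOrigin = hitPos + hitNor * 0.001;
        rayDirection = sampleDiffuseBounce(hitNor); // hypothetical sampler
    }
    return accumColor;
}
```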
Before I got into this world of path tracing, I was a 3D game programmer (and still am, although path tracing is consuming most of my coding time!). My first game was way back in 1998, using OpenGL 1 and the C language, back when these new things called graphics cards were all the rage (see my old Binary Brotherz page)! Although using OpenGL back then and WebGL today was/is cool, I always wanted more in terms of lighting, shadows, reflections, diffuse color sharing, etc., in my game engines than I could get from rasterizing graphics APIs. Well, fast forward to 2019 and NVIDIA is releasing graphics cards dedicated to real-time ray tracing! I couldn't have imagined this back in the 90's! However, at the time I'm writing this, NVIDIA is only doing specular ray tracing as a bonus feature on top of the old rasterization technique. I wanted to see if I could 'overclock' my full path tracer's convergence so that you could see the beautiful light effects in real time, being able to possibly move a game character or 1st-person camera through a path-traced dynamic game environment at 30-60 fps, even on mobile. If you're willing to sacrifice some ultimate physical reality (like perfectly converged reflected/refracted caustics), then you can have this!:
- Future Game Engine PathTracer Demo
To my knowledge, this is just about as fast as I can push the path tracing engine and WebGL in general, and still retain good lighting, accurate reflections, and almost instant image convergence. As computers get faster, this will be the heart of future game rendering engines - a simple path tracer that is just around 500 to 1000 lines of code, is easy to maintain and debug, and which gives photo-realistic real-time results! I already have some ideas for some simple 3d games that can use this technology. I'll keep you posted!
Update: 1/21/2021
In 1986 when I was 13 years old and on my Commodore 64 (I know, I'm old), Geoff Crammond released his masterpiece, The Sentinel. This iconic game featured true 3D filled polygons (an amazing feat running on underpowered 80's hardware!) and had a haunting look and atmosphere like no other before it (or after). This was the first game that I played that truly immersed me, surrounding the player from all angles with its sterile, other-worldly environment. I've always wanted to pay homage to my favorite game of all time, while maybe adding some of my personal path tracing touch to it. So it is with much joy that I present, The Sentinel: 2nd Look. This fully path traced remake contains a random landscape generator (which I had to figure out from looking at the classic over several months), an added day cycle, pixel-perfect raytraced shadows on the terrain and game objects, object self-shadowing, and true raytraced reflections on the white/black connector panels of the landscape.
Creating this remake forced me to figure out how to make a dynamic top-level BVH over many moving, rotating game objects/models, each with its own unique BVH for its own triangle geometry. I'm happy to report that not only does my new system work, it can completely rebuild and update the whole top-level BVH in a split second, allowing for more complex, path traced dynamic game environments! As of now, this project is a W.I.P. (gameplay and game logic to be added soon), but I just wanted to share this passion project of mine, as well as the major technical step forward (in BVH technology) that will allow a wider range of real time games and applications to be path traced right inside your browser!
- The Sentinel: 2nd Look (W.I.P.) game on Desktop, click to capture Mouse
- The Sentinel: 2nd Look project
Update: 12/18/2020
- Continuing my series of path traced games for desktop and mobile, I happily present: Path Traced Pong! The iconic game of Pong holds a special place in my heart as it was my first computer game experience as a 6 year old in 1979, played on my brand new Atari 2600! My version of Pong brings the classic into 3D, and is played inside the CG-famous 'Cornell Box'. Path Traced Pong features real time raytraced reflections, soft shadows, transparency, dynamic light sources, and path traced global illumination. As with AntiGravity Pool, I made a dedicated repository for just this new game. I must say, once you start playing, it's hard to stop! I didn't realize how addictive it would become!
- Path Traced Pong game on Desktop, click to capture Mouse
- Path Traced Pong project
Update: 5/24/2019
- I am pleased to announce the first ever path traced game for desktop and mobile: AntiGravity Pool! If you've ever played American 8-ball before, then you already know how to play - except that gravity has been shut off! LOL. I tried to imagine how our distant future descendants would enjoy the game of billiards while in the HoloDeck. Warping the 2D classic pool table into a 3D cube presents some unique and interesting challenges for the player. AntiGravity Pool features real-time raytraced reflections, soft shadows, and path traced global illumination from 8 light sources (which is challenging for path tracers). Since it uses a physics engine and various custom components, I decided to create a dedicated repository for just this new game. Be sure to check it out!
- AntiGravity Pool game on Desktop, press SPACEBAR to shoot! :)
- AntiGravity Pool project
A random sample rendering from the three.js pathtracing renderer as it was back in 2015!
- Real-time interactive Path Tracing at 30-60 FPS in your browser - even on your smartphone! ( What?! )
- First-Person camera navigation through the 3D scene.
- When the camera is still, the renderer switches to progressive rendering mode and converges on a highest-quality photo-realistic result!
- The accumulated render image will converge at around 500-3,000 samples (lower for simple scenes, higher for complex scenes).
- My custom randomized Direct Light targeting now makes images render/converge almost instantly!
- Both Uni-Directional (normal) and Bi-Directional path tracing approaches available for different lighting situations.
- Support for: Spheres, Planes, Discs, Quads, Triangles, and quadrics such as Cylinders, Cones, Ellipsoids, Paraboloids, Hyperboloids, Capsules, and Rings/Tori. Parametric/procedural surfaces (i.e. terrain, clouds, waves, etc.) are handled through Raymarching.
- Constructive Solid Geometry (CSG) allows you to combine 2 shapes using operations like addition, subtraction, and overlap.
- Support for loading models in .gltf and .glb formats
- BVH (Bounding Volume Hierarchy) greatly speeds up rendering of triangle models in gltf/glb format (tested up to 800,000 triangles!)
- Current material options: Metallic (mirrors, gold, etc.), Transparent (glass, water, etc.), Diffuse (matte, chalk, etc.), ClearCoat (cars, plastic, polished wood, billiard balls, etc.), Translucent (skin, leaves, cloth, etc.), Subsurface w/ shiny coat (jelly beans, cherries, teeth, polished Jade, etc.)
- Solid transparent objects (i.e. glass tables, glass sculptures, tanks filled with water or other fluid, etc.) now obey the Beer-Lambert law for ray color/energy attenuation (see the short sketch just after this list).
- Support for PBR materials on models in gltf format (albedo diffuse, emissive, metallicRoughness, and normal maps)
- Diffuse/Matte objects use Monte Carlo integration (a random process, hence the visual noise) to sample the unit hemisphere oriented around the normal of the ray-object hitpoint and collect any light that is being received. This is the key difference between path tracing and simple old-fashioned ray tracing, and it is what produces realistic global illumination effects such as color bleeding/sharing between diffuse objects and refractive caustics from specular/glass/water objects. (A minimal sampling sketch appears just after this list.)
- Camera has Depth of Field with real-time adjustable Focal Distance and Aperture Size settings for a still-photography or cinematic look.
- SuperSampling gives beautiful, clean Anti-Aliasing (no jagged edges!)
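As promised in the features list above, here is Beer-Lambert attenuation in miniature (a hedged sketch of the standard formula, not the repo's exact code): as a ray travels a distance through a colored transparent medium, its energy falls off exponentially per color channel.

```glsl
// Attenuate a ray's running color mask after it travels distanceInMedium
// units through an absorbing medium (absorptionCoeff is per-channel).
vec3 beerLambert(vec3 rayColorMask, vec3 absorptionCoeff, float distanceInMedium) {
    return rayColorMask * exp(-absorptionCoeff * distanceInMedium);
}

// e.g. red glass: absorptionCoeff = vec3(0.0, 4.0, 4.0) absorbs green and blue,
// so thicker glass appears more deeply red
```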
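And here is a minimal version of the diffuse Monte Carlo sampling step mentioned above - the classic cosine-weighted hemisphere sample around the surface normal (rand() stands in for whatever random number generator the shader uses):

```glsl
// Return a random direction in the hemisphere around 'normal',
// weighted by the cosine of the angle to the normal (Lambert's law).
vec3 cosWeightedSample(vec3 normal) {
    float r1 = 2.0 * 3.14159265 * rand(); // random azimuth angle
    float r2 = rand();                    // random radius^2
    float r  = sqrt(r2);
    // build an orthonormal basis (u, v, normal) around the surface normal
    vec3 u = normalize(cross(abs(normal.x) > 0.1 ? vec3(0, 1, 0) : vec3(1, 0, 0), normal));
    vec3 v = cross(normal, u);
    return normalize(u * (r * cos(r1)) + v * (r * sin(r1)) + normal * sqrt(1.0 - r2));
}
```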
The following demos show what I have been experimenting with most recently. They might not work 100% and might have small visual artifacts that I am trying to fix. I just wanted to share some more possible areas in the world of path tracing! :-)
Some pretty interesting shapes can be obtained by deforming objects and/or warping the ray space (position and direction) around these objects. This demo applies a twist warp to the spheres and mirror box and randomizes the positional space of the top purple sphere, creating an acceptable representation of a little cloud.
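Here is a small sketch of the kind of ray-space warp used in that demo (an assumed form, not the demo's exact code): rotate each sample point around the Y axis by an angle proportional to its height, so straight objects appear twisted.

```glsl
// Twist-warp a point: the rotation angle grows linearly with height (k = twist amount).
vec3 twistWarp(vec3 p, float k) {
    float angle = k * p.y;
    float c = cos(angle), s = sin(angle);
    return vec3(c * p.x - s * p.z, p.y, s * p.x + c * p.z);
}

// during raymarching, evaluate the *warped* point against the un-warped shape each step
```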
When rendering/raytracing Terrain, you can either raymarch a Perlin noise texture (as I have demonstrated in the above Terrain_Rendering and Planet_Rendering demos), or you can just load in a large pre-existing triangle terrain mesh and raytrace it in the traditional way. Both have their advantages and disadvantages. However, if you want to go the classical raytracing route, to make the land contours a little more convincing, there needs to be a lot of triangles! The following WIP preview demo uses the BVH acceleration structure to load in and quickly render a huge terrain mesh consisting of no less than 734,464 triangles! It really pushes my BVH code to the max - we're pretty near a million triangles here, pathtracing in WebGL! For now I just stuck a checker texture across the terrain and the environment is simply a large skylight dome. But the good news is that it doesn't crash the browser, and it runs slightly above 20 fps even on my humble laptop - it's amazing that all of this is happening inside a browser webpage! Note: because of the large BVH data set that needs to be built at startup, this demo might take a few seconds to compile - please be patient, it's worth the wait! ;-)
- BVH Large Terrain Demo (W.I.P.) Note: due to the large data set, it might take a few seconds or more to load and compile
Inspired by an older Shadertoy demo by user koiava that I came across - https://www.shadertoy.com/view/MtBSzd - I noticed that my mobile device didn't have any problems with that particular demo's 1000 triangles. I copied/edited/optimized the traversal code, and then I did the unthinkable (for me anyway): I sent down over 2 million triangles to the engine to be raytraced, then raytraced yet again for the reflection/shadow ray pass (so effectively 4,200,000 triangles in a single frame)... and my Samsung S9 still runs at nearly 60 fps! It didn't even blink an eye. Compilation takes maybe 1 second. I couldn't believe what I was seeing at first.
A technical note about what you are seeing: the data arrives at the fragment shader through a 1024x1024 heightmap texture (I randomly chose a DinoIsland.png heightmap, but it can be anything, even a realtime video texture). The acceleration structure handles sampling the texture and stepping the ray through each sample cell. The number of cells is up to you. At first I tried 32x32 cells; each cell is a square, and each of the 4 corners of that square is a vertex shared by 2 triangles sandwiched together back-to-back. So to get the number of triangles you must raytrace, multiply 32 cells wide by 32 cells high by 2 triangles per cell: 32 x 32 x 2 = 2,048 triangles representing the polygon heightmap. Now 2,048 triangles sounds like a lot, and it is for raytracing, but the resulting mesh looks like an old-school low-poly terrain - it is not detailed enough. On a whim, I tried a resolution of 1024, so each little texel of the 1024x1024 source texture image gets its own quad cell, and 2 triangles for every one of those quad cells. So now we have 1024 x 1024 x 2, or 2,097,152 triangles every frame! And since the grid looks up the texture to get the triangle vertices every frame, you can animate the height/depth of the displacement, and even play an HD video (saved as textures) with an embossed effect on the terrain in real time!
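To visualize that cell layout, here is a sketch of how one grid cell's 2 triangles can be fetched on the fly (the texture name, grid size, and vertex layout here are assumptions for illustration, not the demo's exact code):

```glsl
uniform sampler2D tHeightmap;   // hypothetical 1024x1024 height texture
const float GRID_SIZE = 1024.0; // one quad cell per texel

// The 4 texel corners of a cell become the 4 shared vertices of 2 triangles.
void cellTriangles(vec2 cell, out vec3 v00, out vec3 v10, out vec3 v01, out vec3 v11) {
    vec2 uv = cell / GRID_SIZE;
    float texel = 1.0 / GRID_SIZE;
    // vec3(x, z, height).xzy puts the sampled height into the y component
    v00 = vec3(cell,                  texture(tHeightmap, uv).r).xzy;
    v10 = vec3(cell + vec2(1.0, 0.0), texture(tHeightmap, uv + vec2(texel, 0.0)).r).xzy;
    v01 = vec3(cell + vec2(0.0, 1.0), texture(tHeightmap, uv + vec2(0.0, texel)).r).xzy;
    v11 = vec3(cell + vec2(1.0, 1.0), texture(tHeightmap, uv + vec2(texel)).r).xzy;
    // triangle 1: (v00, v10, v11)   triangle 2: (v00, v11, v01)
}
```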
Oddly, it turns out that my Mobile device (a Samsung S9) trounces my laptop at this demo. The humble old laptop achieves maybe 20fps, whereas the newer smartphone rocks at 60fps. It may have to do with the cheap integrated graphics of my laptop, but in any case, this is a true testament to the power of modern smartphone GPUs!
-
December 16th, 2021: New Classic raytracing demo in the 'Ray Tracing History/Classic Scenes' section! Arthur Appel is credited with the first formal mention and use of raytracing (raycasting and shadow rays) in his famous 1968 research paper, entitled Some Techniques for Shading Machine Renderings of Solids. In this groundbreaking paper, Appel creates renderings of machine parts and architecture by casting a ray through every single pixel of the view plane (your screen). If these primary rays don't hit anything, the white background color is returned. If a ray hits a surface, a secondary ray is spawned from the hit point on the surface, aimed in the direction of the sunlight. If this 2nd ray (known as a shadow ray) escapes the scene and reaches the sky without running into anything, the surface point must be in full sunlight and is shaded according to the usual diffuse Lambert law. However, if this secondary ray hits anything at all on its way toward the sunlight, the surface point must be in shadow, and a dark color is applied. Even though it is primitive by today's graphics standards, Appel's accompanying imagery looks more realistic and visually clear than anything that had come before, in terms of the play of light and shadow on scene objects. His images inspired me to recreate these iconic scenes from scratch. There were a couple of challenges I had to face: namely, how to geometrically reconstruct the various models, and then how to shade those models in a retro-looking fashion that mimics the 1960's digital plotter, with its limited graphics capabilities, that produced the originals. If you look closely at Appel's original images, all of the shading is made up of nothing but tiny plus (+) signs! Arthur cleverly used different sizes and spacings of these tiny plus signs to differentiate the various grades of brightness/darkness. As for the first hurdle (geometry reconstruction), I relied on my previous CSG library, which is able to combine multiple shapes with 3 different operations - Union, Difference, and Intersection. I ended up only needing Union and Difference (combining shapes and cutting away from shapes, respectively), but I had to add extra code and algorithms to the CSG routines in order to faithfully recreate the classic models in the research paper. As for the second challenge (the retro 1960's digital plotter shading look), I went into MS Paint and created 5 little plus signs of various sizes and thicknesses, so that they could be used as repeating textures for the models' surfaces. When you put all this together with my edge detector/renderer, I think the result is very close to the original! I had fun trying to reverse-engineer these classic images. I use the phrase 'reverse-engineer' lightly, as I will never fully comprehend the 1960's hardware that Arthur Appel had to work with. In his paper he mentions that it took several minutes for these images to be printed out by his digital plotter. I think Mr. Appel would get a kick out of trying our three.js pathtracing renderer version and watching it render 60 times a second, on a cell phone! And I almost know for a fact that if he were able to drag the sunlight direction slider around and watch the lighting and shadows change in real time, it would bring a smile to his face for sure! :-)
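In shader terms, Appel's whole technique fits in a handful of lines. A minimal sketch (helper names like intersectScene() and intersectsAnything() are hypothetical placeholders, not this repo's actual functions):

```glsl
// One primary ray per pixel, then one shadow ray from the hit point toward the sun.
vec3 shadePixel(vec3 rayOrigin, vec3 rayDirection, vec3 sunDirection) {
    vec3 hitPos, hitNormal;
    if (!intersectScene(rayOrigin, rayDirection, hitPos, hitNormal))
        return vec3(1.0); // primary ray escaped: white background

    // secondary (shadow) ray: does anything block the path to the sun?
    if (intersectsAnything(hitPos + hitNormal * 0.001, sunDirection))
        return vec3(0.05); // in shadow: dark

    // in sunlight: classic Lambert diffuse shading
    return vec3(max(dot(hitNormal, sunDirection), 0.0));
}
```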
-
October 28th, 2021: Major breakthrough! If you have been following my project here on GitHub, you were probably painfully aware that my heavier BVH demos (glTF triangle models) simply would not work at all on mobile devices like tablets and cell phones. The BVH demos wouldn't even compile, or would just crash after a few seconds, even on my (2021) Samsung Galaxy S21 - and worse yet, there were no errors in the console. I had no idea why these particular demos would not work. This bug had been haunting me for years, literally. Well, a recent Twitter post by Garrett Johnson caught my eye because he was having similar problems while developing his BVH system for three.js. We got into a discussion about it, chalked it up to a lack of mobile precision, and I was ready to give up. Not Garrett however; he is a robotics expert at NASA and knows all about low-level details and machine precision. He couldn't let this go, because mobile devices reportedly have just as much floating point precision as desktops. Yet there was still a discrepancy. Many posts and a multi-expert discussion later (Ken Russell from Khronos, Romain Guy from Google, someone who knows mobile chips, etc...), it was discovered that for some reason (which I can't quite fathom), the designers of Adreno mobile chips treat all members of structs{} as mediump or lowp precision, even if I specifically put 'highp' precision at the beginning of my shaders! I had structs all over the place - stuff like Ray{vec3 origin; vec3 direction;}, Intersection{vec3 hitPos, etc.}, BVHNode{vec4 data, etc.}. So the problem was that when traversing the very precise BVH, my mobile devices couldn't even get started because floating point round-off errors were flying all over the place! All this was happening unbeknownst to me because I thought everything was in highp precision, as I had instructed on every shader - not so! As soon as I got rid of all my structs and unpacked them into separate variables, like vec3 rayOrigin instead of ray.origin for example, everything magically started to work! It was such a joy to see the Stanford Bunny (30,000+ triangles) path traced at 60 fps... on my phone! Many thanks to Garrett Johnson (@garrettkjohnson on Twitter) for helping me solve this multi-year bug!
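The fix can be summarized in a few lines of GLSL (a simplified illustration of the change described above):

```glsl
precision highp float;

// Before (struct members were silently demoted to mediump/lowp on Adreno
// mobile GPUs, breaking the precision-sensitive BVH traversal):
//   struct Ray { vec3 origin; vec3 direction; };
//   Ray ray;
//   ray.origin = cameraPosition;

// After (plain variables honor the global highp declaration above):
vec3 rayOrigin;
vec3 rayDirection;
```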
-
May 2nd, 2021: My first attempt at a de-noiser! Well actually, it's more of a 'noise-smoother', but it still makes a big difference! In 2021 you may have seen some of NVIDIA's demos/partnered games that use their real-time A.I. deep-learning denoising technology. Games like 'Minecraft RTX', 'Quake RTX', and the 'Marbles' demo feature this proprietary denoiser, which is basically magic. I really don't understand how it accepts such raw, incomplete, noisy input and produces a cleaned, refined image at 1080p. Even if my humble attempt at a de-noiser never approaches NVIDIA's level of sophistication, their technology and latest games and demos served as inspiration to try my hand at it for this project. I am happy to report that the initial results are promising! If you take a look at the Geometry_Showcase Demo, Quadric_Geometry_Showcase Demo, Cornell_Box Demo, and Billiard_Table Demo in particular and drag the camera around with your mouse, you'll notice that the moving, rotating image is much less noisy, and when the camera does come to a full stop, the image converges almost instantly! A little technical note on this new denoising feature: if you take a look at Screen_Output_Fragment.glsl in the 'shaders' folder, you'll see that I have added a simple 9x9 box blur filter that averages the current center pixel with its neighboring pixels, but only if the demo's path tracing fragment shader has said it's ok to blur. Specifically, if it is a Diffuse (DIFF) surface or the diffuse portion of a clear-coated diffuse surface (COAT), then it is allowed to blur with its neighbors, which greatly helps convergence speed on surfaces like room walls, floors, and ceilings. However, if it is a smooth mirror or glass specular surface, or if it lies on a boundary edge between 2 pixels with greatly different intersection data - like the corner of a room where the surface normals change abruptly, the boundary between different scene objects, or an abrupt color change (like 2 differently colored squares of a checkerboard texture) - then the blur filter is disabled for that pixel and it remains razor sharp. I have found that with some minor tweaks you can get the best of both worlds: sharp edges where it counts, and blurred, softened noise on diffuse surfaces where all we care about is the final blended color anyway. In order to differentiate between edges and flat, smooth non-edge areas, I employ the handy GLSL built-in functions dFdx() and dFdy(). These derivative functions let you peek at the current pixel's immediate neighbors, both vertically and horizontally, to compare their rays' intersection results such as surface normals, surface color, and object IDs. If any of these differ past a threshold, an edge is recorded, and later, when the box blur filter is applied, it avoids these noted edges. This is called edge detection and is an absolute necessity for even the most basic denoiser like mine. NVIDIA's denoiser takes this pixel averaging/smoothing to a whole other magical level by training A.I. software on noisy input images vs. their fully cleaned-up, converged counterparts. Although I'll probably never understand how they do this, I am quite pleased with my humble first attempt. I think you'll agree that even this basic addition to the Three.js PathTracing Renderer makes a big difference in smoothness and convergence speed, especially when the image is moving or dynamic! :-D
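Here is the edge-detection idea from that paragraph in condensed form (variable names are illustrative, not the exact ones found in Screen_Output_Fragment.glsl): compare the current pixel's intersection data with its screen-space neighbors via dFdx()/dFdy(), and only allow the box blur where no edge is found.

```glsl
// Large derivatives in normal, color, or object ID indicate an edge
// between this pixel and its horizontal/vertical neighbors.
float edgeAmount(vec3 surfaceNormal, vec3 surfaceColor, float objectID) {
    float normalDifference = length(dFdx(surfaceNormal)) + length(dFdy(surfaceNormal));
    float colorDifference  = length(dFdx(surfaceColor))  + length(dFdy(surfaceColor));
    float idDifference     = abs(dFdx(objectID))         + abs(dFdy(objectID));
    return normalDifference + colorDifference + idDifference;
}

// bool okToBlur = isDiffuseOrCoatSurface && (edgeAmount(n, c, id) < EDGE_THRESHOLD);
```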
-
January 21st, 2021: New W.I.P. path traced game with new BVH technology! I am happy to present my fully path traced remake of the 1986 classic 'The Sentinel' by Geoff Crammond: it's called 'The Sentinel: 2nd Look'. As stated above in the games section, 'The Sentinel' is my all-time favorite game for several reasons. First, as a 13-year-old back in 1986 playing on my Commodore 64, 'The Sentinel' was the first true 3D filled-polygon game that I had ever seen. Experiencing Geoff Crammond's worlds changed how I viewed computer games, and in my opinion, his unique creation rose above what it meant to be a computer game back in the 80's (the age of coin-op arcades), even rising to the level of high art. I don't know how Geoff Crammond got the brilliant idea to create this haunting, immersive game world, but its feel and imagery have stuck with me all these years. I've always wanted to pay homage to this iconic game by bringing it to the modern era with a 30 fps framerate (the original played like a slideshow, understandably!), as well as maybe add my own path tracing touch and technology to it. From the start, a major hurdle to overcome was being able to support many arbitrarily positioned/rotated dynamic game objects, each with its own personal BVH for its own triangle geometry. The solution was to create a dynamic, over-arching top-level BVH that contains all the bounding boxes of all the possible game objects. I call it a 'BVH for the BVHs'! ;-) This top-level BVH contains all the objects' bounding volumes and pointers to their individual movement/rotation matrix transform data, and can be updated once every frame if needed, allowing for more complex, dynamic game environments to be path traced in real time. I had been thinking about doing this for years, but working on this 'Sentinel' passion project kind of forced me to come up with a solution that could work inside the browser on commodity hardware. Technically speaking, the new top-level BVH is created as a set of Vector4 uniforms to be quickly fed to the GPU (once every frame if needed). Another set of Matrix4 uniforms contains the inverse matrix transform data for each game object and can also be written to on every frame if needed. When the top-level BVH is traversed, the viewing ray is tested against the over-arching BVH to see if it hits a leaf, which in this case is a game object. The ray then gets pointed to the real-time inverse matrix transform data for that leaf object, so it can be transformed into the object's local space. Finally, it is tested against that particular object's own personal triangle BVH, which resides in a static data texture that was built at game start-up and does not change during the lifetime of the application. This last part operates like all of my other, older BVH demos in this repo. The new part is the addition of the dynamic top-level BVH over all the 'regular', static model triangle BVHs. As of now, 'The Sentinel: 2nd Look' is a W.I.P. (gameplay to be added soon), but I just had to share the exciting news of the new dynamic top-level BVH, which opens the door to a lot more games and applications that can be path traced in real time inside your browser! More updates coming soon! :)
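A bare-bones sketch of that two-level traversal (uniform and helper names like uTopLevelBoxes, hitsBox(), and traverseTriangleBVH() are hypothetical stand-ins, and a simple linear walk stands in for the real top-level BVH descent):

```glsl
#define MAX_OBJECTS 16
#define INFINITY 1000000.0

// min/max box corners per game object, rebuilt on the CPU every frame if needed
uniform vec4 uTopLevelBoxes[2 * MAX_OBJECTS];
// world-to-local inverse transform per object, also updatable every frame
uniform mat4 uInvObjectMatrices[MAX_OBJECTS];

float traverseScene(vec3 rayOrigin, vec3 rayDirection) {
    float nearestT = INFINITY;
    for (int i = 0; i < MAX_OBJECTS; i++) {
        if (!hitsBox(uTopLevelBoxes[2 * i].xyz, uTopLevelBoxes[2 * i + 1].xyz,
                     rayOrigin, rayDirection))
            continue; // ray misses this game object's world-space bounds
        // move the ray into the object's local space...
        vec3 localOrigin    = (uInvObjectMatrices[i] * vec4(rayOrigin, 1.0)).xyz;
        vec3 localDirection = (uInvObjectMatrices[i] * vec4(rayDirection, 0.0)).xyz;
        // ...then descend into that object's own static triangle BVH
        nearestT = min(nearestT, traverseTriangleBVH(i, localOrigin, localDirection));
    }
    return nearestT;
}
```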
-
December 18th, 2020: The 'Games' section above has a new game: Path Traced Pong! Continuing the series of real-time path traced games for any device with a browser (including mobile!), Path Traced Pong brings the iconic game of Pong into 3D, played inside the CG-famous 'Cornell Box'. Throughout my journey into 3D graphics, starting with C and OpenGL 1.0 under Windows 98, I have now made 3 versions of this game. The first version was back in 1999, the second version was with a young three.js around late 2013, and now in 2020, Path Traced Pong uses the latest version of three.js and features real-time path traced global illumination with raytraced reflections, soft shadows, and dynamic light sources. It is fun to play on desktop, but even more fun when you're on the go - you can play it on your phone! Each demo in this repo, and more recently, each new game that I try to program, presents me with new and interesting challenges, especially when trying to path trace a fast-paced game on any commodity hardware. I learned a little more about successfully rendering dynamic, quick-moving game objects, as well as more efficiently sampling dynamic light sources (such as the bright white game ball as it bounces against the room walls). My 2 older versions featured basic network multiplayer over sockets - in the future it would be nice to add WebSockets capability to these path traced games. This would bring exciting new possibilities: players would not only be able to play a quick game online together, they would experience real time path traced game worlds! I have more ideas for other simple, yet fun, 3D games that would benefit from this technology. As always, I'll keep you posted! :)
- For simple scenes without gltf models, instead of scene description hard-coded in the path tracing shader, let the scene be defined using familiar Three.js mesh creation commands (3/4/21 made progress in this area with my new CSG_Viewer demo - taught me how to create and manipulate object matrices like Three.js does)
- Figure out how to save pathtraced results into texture maps to be used for lightmaps (optimized baked lighting for static geometry)
- Dynamic Scene description/BVH rigged model animation streamed real-time to the GPU path tracer (1/21/21 made progress in this area by working on my new game The Sentinel: 2nd Look. Features a dynamic top-level BVH that can change and update itself every animation frame)
- This began as a port of Kevin Beason's brilliant 'smallPT' ('small PathTracer') to the Three.js WebGL framework: http://www.kevinbeason.com/smallpt/ Kevin's original 'smallPT' only supports spheres of various sizes and is meant to render offline, saving the image to a PPM text file (not real-time). I have so far added features such as real-time progressive rendering on any device with a browser, FirstPerson Camera controls with Depth of Field, more Ray-Primitive object intersection support (such as planes, triangles, quadrics, CSG shapes, etc.), loading and rendering glTF triangle models, static and dynamic GPU BVH acceleration structures, and support for additional materials like ClearCoat and SubSurface.
More examples, features, and content to come!