Final Project Page
Authors: Aishik, Tanush, Joshua, Destiny
Abstract:
Our project presents physically accurate rendering and simulation of thin-film materials (specifically bubbles). In particular, we produced realistic videos showcasing the varying iridescence and movement of bubble surfaces.
These results were achieved by splitting the work into four logical subparts:
- Bubble mesh generation and physical simulation
- In this part we implemented the simulation using the dynamics specified in [Ishida 2017]. This also required isotropic remeshing once the new position of each vertex in the mesh was deduced.
- Developed a way to export the mesh into a .obj file at a constant time interval so it could be ingested by the renderer and rendered in a scene.
- Later, features were added to this simulator, including external forces such as a wind field modeled using curl noise built from Perlin noise.
- Accurate bubble (thin-film) rendering
- Built a custom BSDF function on top of the project 3-1 Pathtracer code that outputs a physically accurate reflectance and transmittance for incoming rays.
- The function is able to produce these values at every wavelength, allowing for a spectral ray tracer implementation.
- Since incoming rays both reflect and refract, we probabilistically chose whether to sample the reflected or the transmitted ray, with probability proportional to its intensity.
- Environment Map Lighting Support
- Augmented the Pathtracer to support rendering using an environment map so lighting can be defined by a chosen HDRI file.
- Animation, by compiling the fully rendered images at each timestep.
- Required us to write scripts that converted the .obj files into intermediate .scene files containing the mesh information as well as transform information for each .obj file.
- Updated the Pathtracer to ingest .scene files and add each mesh to its internal representation for the rendering pipeline.
- Compiled rendered frames into a video using ffmpeg
Technical Approach:
Part 1: Mesh Creation and Re-Organization
The first part of this was bubble mesh generation and simulation. We borrowed the geometric approach from Ishida's 2017 paper "A Hyperbolic Geometric Flow for Evolving Films and Foams". The core idea behind it is a mean-curvature flow, where surface points are accelerated along their normal depending on the mean curvature at those points. This captures the idea that bubbles in the real world tend to minimize their surface area due to surface tension. Points where the bubble is bulging outward get pushed in, and points where the bubble is sunken in get pushed outwards.
Specifically, the equation of motion is given below, where $H$ denotes mean curvature (taken positive where the surface bulges outward along the normal), $\mathbf{n}$ denotes the surface normal, $p$ is a term describing the internal pressure of the bubble, and $\sigma$ is a constant relating to the bubble's surface tension coefficient.

$$\frac{D^2 \mathbf{x}}{Dt^2} = -\sigma H\,\mathbf{n} + p\,\mathbf{n}$$

Hyperbolic geometric flow on its own, corresponding to the first term $-\sigma H\,\mathbf{n}$, doesn't preserve the enclosed volume of the bubble, so the corrective pressure term $p\,\mathbf{n}$ is added to remedy this. In practice, the simulator doesn't calculate $p$ directly, and simply tries to ensure that the enclosed volume is constant by adjusting the mesh after each timestep.
To discretize the continuous hyperbolic geometric flow, we follow Ishida in using a triangle mesh and the well-studied cotangent discretization of mean curvature for triangle meshes to implement the dynamics:

$$H\mathbf{n} \approx M^{-1} L\,\mathbf{x}$$

where $M$ denotes the mass matrix, $L$ the cotangent matrix, and $\mathbf{x}$ the vertex positions. This discretization upholds an important invariant studied in differential geometry (the Gauss-Bonnet theorem) and is widely used in computer graphics.
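To make the discretization concrete, here is a minimal sketch of a single flow update using libigl's cotangent and mass matrices with Eigen. This is not our actual simulator code: the function name and surface-tension constant are illustrative, and the volume correction, pressure term, external forces, and remeshing are all omitted.

```cpp
// Minimal sketch of one hyperbolic-geometric-flow update (volume correction,
// pressure term, external forces, and remeshing are omitted here).
#include <igl/cotmatrix.h>
#include <igl/massmatrix.h>
#include <Eigen/Dense>
#include <Eigen/Sparse>

void flow_step(Eigen::MatrixXd &V,        // #V x 3 vertex positions
               Eigen::MatrixXd &Vel,      // #V x 3 vertex velocities
               const Eigen::MatrixXi &F,  // #F x 3 triangle indices
               double sigma,              // surface-tension constant
               double dt) {
  Eigen::SparseMatrix<double> L, M;
  igl::cotmatrix(V, F, L);                                 // cotangent matrix
  igl::massmatrix(V, F, igl::MASSMATRIX_TYPE_VORONOI, M);  // lumped mass matrix

  // Discrete mean-curvature normal Hn ~ M^{-1} L V (sign conventions vary;
  // flip the sign of the acceleration below if the surface grows instead of shrinking).
  Eigen::SimplicialLLT<Eigen::SparseMatrix<double>> solver(M);
  Eigen::MatrixXd LV = L * V;
  Eigen::MatrixXd Hn = solver.solve(LV);

  // Symplectic Euler: accelerate along the mean-curvature normal, then advect.
  Vel += dt * sigma * Hn;
  V   += dt * Vel;
}
```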
Ishida’s work goes much further in extending this flow to non-manifold geometry, replacing the mean-curvature normal with a variational surface area functional and a means of computing it on a triangle mesh.
These dynamics were implemented using the libigl framework, which provides OpenGL skeleton rendering code and a variety of geometric / mesh processing operations to speed up prototyping. The C++ Eigen library was used to perform linear algebra operations.
To preserve numerical stability while running the simulation, we have to periodically remesh and remove degenerate geometry. We used an external library that performs isotropic remeshing, modifying it to prevent volume loss during the projection step after tangential Laplacian smoothing, as well as hooking into its edge-split and edge-collapse routines to interpolate the velocity of newly created geometry.
Ishida’s paper utilizes a software package known as LosTopos for this purpose, as it supports multimaterial, non-manifold surface tracking and remeshing. Unfortunately, lack of documentation and time meant we were unable to pursue this route.
We also included two external forces within the simulation: wind and gravity. A wind field $\mathbf{w}$ was created by computing the curl of a vector potential $\boldsymbol{\psi}$ whose three components are independent samples of 3D Perlin noise:

$$\mathbf{w} = \nabla \times \boldsymbol{\psi}$$

By the identity $\nabla \cdot (\nabla \times \boldsymbol{\psi}) = 0$, we know that $\mathbf{w}$ is divergence-free, which is expected due to the incompressibility of air. Along with gravity, $\mathbf{w}$ is coupled into the bubble dynamics during symplectic Euler integration ($\mathbf{v}_{t+1} = \mathbf{v}_t + \mathbf{a}_t\,\Delta t$, $\mathbf{x}_{t+1} = \mathbf{x}_t + \mathbf{v}_{t+1}\,\Delta t$) as part of the acceleration $\mathbf{a}_t$.
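Below is a rough sketch of how such a divergence-free wind field can be evaluated. The `perlin3` function is a placeholder for whichever 3D Perlin noise implementation is available (ours differs in detail), and the offsets and finite-difference step size are arbitrary choices for illustration.

```cpp
// Sketch of a divergence-free wind field w = curl(psi), where each component of
// the vector potential psi is an independently offset 3D Perlin noise field.
#include <Eigen/Dense>

double perlin3(const Eigen::Vector3d &p);  // hypothetical 3D Perlin noise in [-1, 1]

Eigen::Vector3d potential(const Eigen::Vector3d &p) {
  // Offset each component so the three noise fields are decorrelated.
  return Eigen::Vector3d(perlin3(p),
                         perlin3(p + Eigen::Vector3d(31.4, 0.0, 0.0)),
                         perlin3(p + Eigen::Vector3d(0.0, 27.1, 0.0)));
}

Eigen::Vector3d wind(const Eigen::Vector3d &p, double eps = 1e-3) {
  // Numerical curl via central differences: w = curl(psi).
  auto d = [&](int axis) -> Eigen::Vector3d {
    Eigen::Vector3d e = Eigen::Vector3d::Zero();
    e[axis] = eps;
    return (potential(p + e) - potential(p - e)) / (2.0 * eps);
  };
  Eigen::Vector3d dx = d(0), dy = d(1), dz = d(2);
  return Eigen::Vector3d(dy.z() - dz.y(),   // d(psi_z)/dy - d(psi_y)/dz
                         dz.x() - dx.z(),   // d(psi_x)/dz - d(psi_z)/dx
                         dx.y() - dy.x());  // d(psi_y)/dx - d(psi_x)/dy
}
```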
Finally, we implemented a small .obj export module so the resulting meshes could be used within our pathtracer.
Future Extensions
A future extension would be implementing multi-bubble systems and interactions. Unfortunately, the nature of non-manifold meshes makes them hard to operate on, so it might be interesting to pursue a point-based method instead, where the mesh surface is discretized as a point cloud. This is the approach taken in many recent bubble simulation papers based on the SPH method of Computational Fluid Dynamics, but it hasn’t been explored with the geometric-flow-based bubble simulation approach described here. Methods for computing surface normals (essentially via PCA), mean curvature, and volume ("shrink-wrapping") exist for point clouds, so in principle hyperbolic geometric flow could be extended to that domain. Doing so avoids the need for costly remeshing operations and the general headache of working with non-manifold meshes. We were unable to explore this due to time constraints.
Part 2: Rendering, Reflections, and Lighting
In order to accurately render the bubble we relied on ray tracing, extending the Pathtracer code implemented in Homework 3. The main challenge in realistically rendering thin-film objects was describing a BSDF class that can handle the reflection and refraction dynamics of thin-film materials.
The underlying physics of thin-film interference is what gives bubbles their vibrant colors. Specifically, the wave nature of light interacting with an optical medium leads to interference effects, as shown in the diagram below.
Each incoming ray of light can bounce and transmit multiple times. Since we assume the film thickness $d$ is on the order of nanometers, approximating this behavior with a BSDF is still reasonable, but we need to include the contributions of the first, second, third, and subsequent internal bounces in the reflectance term, and correspondingly in the transmittance term.
The physically accurate way to compute the contribution of these bounces is using Fresnel Coefficients and Airy Summation. Fresnel coefficients describe the change in amplitude and phase of the electromagnetic waves at a boundary between two optically different mediums, and are derived from solving plane-wave solutions of Maxwell’s Equations.
$$r_s = \frac{n_1\cos\theta_i - n_2\cos\theta_t}{n_1\cos\theta_i + n_2\cos\theta_t}, \qquad r_p = \frac{n_2\cos\theta_i - n_1\cos\theta_t}{n_2\cos\theta_i + n_1\cos\theta_t}$$

$$t_s = \frac{2 n_1\cos\theta_i}{n_1\cos\theta_i + n_2\cos\theta_t}, \qquad t_p = \frac{2 n_1\cos\theta_i}{n_2\cos\theta_i + n_1\cos\theta_t}$$

Here the $n$'s are indices of refraction, and $\theta_i$ and $\theta_t$ are the incident and transmitted angles respectively (calculated using Snell's Law). Note that these coefficients can be complex valued, in which case they describe a phase shift; in the cases of total internal reflection or conductors these equations are not fully accurate, but that is not a concern for our basic case of a dielectric film between two layers of air.
Refraction and distance traveled (the “optical path difference”) change the phase of the light for each subsequent reflection and refraction. If two reflections are out of phase, then they will destructively interfere, so light of that wavelength has lower reflectance.
To capture the contribution of all of the bounces, we compute the Airy summation, which simply adds up all of the different bounces, multiplying in an additional factor of $r_{21} r_{23} e^{i\Delta\phi}$ with each one:

$$r = r_{12} + t_{12}\,r_{23}\,t_{21}\,e^{i\Delta\phi} + t_{12}\,r_{23}\,(r_{21} r_{23})\,t_{21}\,e^{2i\Delta\phi} + \cdots$$

$$t = t_{12}\,t_{23}\,e^{i\Delta\phi/2} + t_{12}\,r_{23}\,r_{21}\,t_{23}\,e^{3i\Delta\phi/2} + \cdots$$

The $r$'s and $t$'s are the Fresnel coefficients from earlier (subscripts denote the interface, e.g. $r_{12}$ for air-to-film), and $\Delta\phi = \frac{4\pi n_2 d \cos\theta_t}{\lambda}$ is the phase shift due to the optical path difference. Since one light wave travels a different distance than the other, the two move out of phase in proportion to the distance traveled, which is both angle and wavelength dependent. Because of the representation as complex numbers, this simple expression captures destructive and constructive interference effects.
We can rewrite these as geometric series:

$$r = r_{12} + \frac{t_{12}\,t_{21}\,r_{23}\,e^{i\Delta\phi}}{1 - r_{21} r_{23}\,e^{i\Delta\phi}}, \qquad t = \frac{t_{12}\,t_{23}\,e^{i\Delta\phi/2}}{1 - r_{21} r_{23}\,e^{i\Delta\phi}}$$
To convert these into reflectance and transmittance intensities, we just take their squared magnitudes, $R = |r|^2$ and $T = |t|^2$ (the usual $\frac{n_3\cos\theta_3}{n_1\cos\theta_1}$ prefactor on $T$ is 1 here since both outer media are air). We make a simplifying assumption (made by most raytracers, even production ones) that light is unpolarized, so simply taking the average of the reflectance / transmittance for $s$- and $p$-polarized light is sufficient.
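The sketch below ties these pieces together for a single wavelength: Fresnel coefficients at each interface, the closed-form Airy summation, and the conversion to unpolarized reflectance and transmittance. It assumes an air / film / air stack with a real index of refraction, and the function and variable names are illustrative rather than the actual BubbleBSDF code.

```cpp
// Sketch of per-wavelength thin-film reflectance/transmittance for an
// air / film / air stack (n1 = n3 = 1). Illustrative names, not BubbleBSDF.
#include <algorithm>
#include <cmath>
#include <complex>

struct RT { double R, T; };

RT thin_film_RT(double lambda_nm,   // wavelength in nm
                double d_nm,        // film thickness in nm
                double n_film,      // film index of refraction
                double cos_theta1)  // cosine of incident angle (in air)
{
  using cd = std::complex<double>;
  const double kPi = 3.14159265358979323846;
  const double n1 = 1.0, n2 = n_film;

  // Snell's law for the angle inside the film.
  double sin_theta1 = std::sqrt(std::max(0.0, 1.0 - cos_theta1 * cos_theta1));
  double sin_theta2 = n1 / n2 * sin_theta1;
  double cos_theta2 = std::sqrt(std::max(0.0, 1.0 - sin_theta2 * sin_theta2));

  // Phase shift from the optical path difference across the film.
  double phi = 4.0 * kPi * n2 * d_nm * cos_theta2 / lambda_nm;
  cd e_half = std::exp(cd(0.0, 0.5 * phi));  // single pass through the film
  cd e = e_half * e_half;                    // double pass (one internal bounce)

  double R = 0.0, T = 0.0;
  for (int pol = 0; pol < 2; ++pol) {        // 0 = s-polarized, 1 = p-polarized
    // Fresnel amplitude coefficients at the air->film (12) and film->air (21)
    // interfaces; for air on both sides, r23 = r21 and t23 = t21.
    double a = pol ? n2 * cos_theta1 : n1 * cos_theta1;
    double b = pol ? n1 * cos_theta2 : n2 * cos_theta2;
    double r12 = (a - b) / (a + b);
    double r21 = -r12;
    double t12 = 2.0 * n1 * cos_theta1 / (a + b);
    double t21 = 2.0 * n2 * cos_theta2 / (a + b);

    // Airy summation in closed form (geometric series over internal bounces).
    cd r = r12 + (t12 * t21 * r21 * e) / (1.0 - r21 * r21 * e);
    cd t = (t12 * t21 * e_half) / (1.0 - r21 * r21 * e);

    // T = |t|^2 since the outer media match; average the two polarizations.
    R += 0.5 * std::norm(r);
    T += 0.5 * std::norm(t);
  }
  return {R, T};
}
```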
Much of the implementation required carefully translating the physical equations described above, which govern the intensity of reflection and refraction in thin-film objects, into code for the newly created BubbleBSDF class. Since the directions of the transmitted and reflected rays are fully determined by the incident direction and the IOR values of the thin film, this is a delta BSDF.
At a high level, we implemented functions that give the reflectance at a point given the wavelength of the ray, the thickness of the film at that point, and the IOR values of the film. We sample one wavelength per color channel (at 700 nm, 546.1 nm, and 435.8 nm, where the RGB color matching functions achieve their maxima). Once we have these reflectance and transmittance values at our intersection point, the next issue is choosing whether to sample the transmitted ray or the reflected ray as the next bounce. This decision was made probabilistically, choosing the ray with probability proportional to its intensity (i.e. importance sampling based on the reflectance / transmittance values), as sketched below.
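Here is a minimal sketch of that reflect-vs-transmit decision. It is not the actual BubbleBSDF::sample_f signature, and in our implementation the reflectance and transmittance are per-channel rather than scalars, but the weighting logic is the same: dividing by the selection probability keeps the Monte Carlo estimator unbiased (both branches end up weighted by R + T).

```cpp
// Sketch of importance-sampling the next bounce by reflectance / transmittance.
#include <random>

struct Sample { bool reflected; double weight; };

Sample choose_branch(double R, double T, std::mt19937 &rng) {
  std::uniform_real_distribution<double> u(0.0, 1.0);
  double p_reflect = R / (R + T);
  if (u(rng) < p_reflect) {
    return {true,  R / p_reflect};          // reflected ray, weighted by R / p
  } else {
    return {false, T / (1.0 - p_reflect)};  // transmitted ray, weighted by T / (1 - p)
  }
}
```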
Interestingly, the current BSDF implementation is general enough to produce transmittance and reflectance values at arbitrary $\lambda$, $d$, and $n$ values, leaving room for more complex rendering techniques: varying the thickness of the surface (possibly informed by an underlying bubble simulation using Lagrangian advection of thickness as in [Ishida 2020] or an SPH-solver approach like MELP), sampling a spectrum of wavelengths instead of just a constant wavelength for each color channel, or supporting different types of thin-film surfaces (by varying the $n$ value).
Future Extensions
Our current strategy of sampling just one wavelength to determine the reflectance / transmittance in the RGB spectrum is not very physically accurate, as demonstrated by [Belcour and Barla 2017], often leading to out-of-gamut colors and clipping. Ideally, we would convert the class pathtracer to perform semi-spectral rendering like PBRT in order to maximize the visual fidelity of the thin-film material, but this was too out of scope / time-consuming.
Additionally, varying the thickness of the film is another way to get a lot more realism: incorporating the effect of gravity on film thickness and the turbulent flow that occurs on the surface of the bubble can add an extra layer of depth to the visual realism.
Part 3: Environment and Scene Setup
To create realistic and interesting lighting and reflections we opted for HDRI maps, which hold real-world lighting data captured in high dynamic range in a spherically-projected 2D texture. Rather than placing the bubble in a Cornell box, where the lighting effects are quite simple and thus lead to less interesting visual effects on the bubble, we thought it would prove more beneficial to use an environment map to create a more dynamic visual. This meant that we also needed to augment the Pathtracer to support environment lighting. Luckily, the project 3-2 Pathtracer already had much of the skeleton to read in .exr files and apply the setup required to map each point in the sampled environment space to a radiance value for the raytracer. All that needed to be implemented were a few functions that initialized the marginal and conditional probability (PDF/CDF) data structures used in the sample_dir and sample_L functions when sampling a direction toward the environment map. It also turned out that the starter code did not have native support for environment map light sampling on delta BSDFs, so the main ray tracing method had to be augmented to provide that functionality, which finally gave the Pathtracer a working environment map lighting feature.
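As a rough illustration of what those initialization functions compute (the names and data layout here are ours, not the exact starter-code fields), the environment map's luminance is turned into a marginal distribution over rows and a conditional distribution over columns within each row, weighted by sin(theta) to account for the equirectangular mapping:

```cpp
// Sketch of building piecewise-constant sampling distributions for an
// environment map: marginal CDF over rows (theta), conditional CDF over
// columns (phi) within each row.
#include <cmath>
#include <vector>

void build_env_distributions(const std::vector<double> &luminance, // w*h pixel luminances
                             int w, int h,
                             std::vector<double> &pdf,             // w*h joint pdf
                             std::vector<double> &marginal_cdf,    // h values
                             std::vector<std::vector<double>> &cond_cdf) // h rows of w
{
  const double kPi = 3.14159265358979323846;
  pdf.assign(w * h, 0.0);
  double total = 0.0;
  for (int y = 0; y < h; ++y) {
    double sin_theta = std::sin(kPi * (y + 0.5) / h);   // solid-angle weighting
    for (int x = 0; x < w; ++x) {
      pdf[y * w + x] = luminance[y * w + x] * sin_theta;
      total += pdf[y * w + x];
    }
  }
  for (double &v : pdf) v /= total;                      // normalize the joint pdf

  marginal_cdf.assign(h, 0.0);
  cond_cdf.assign(h, std::vector<double>(w, 0.0));
  double row_acc = 0.0;
  for (int y = 0; y < h; ++y) {
    double row_sum = 0.0;
    for (int x = 0; x < w; ++x) row_sum += pdf[y * w + x];
    row_acc += row_sum;
    marginal_cdf[y] = row_acc;                           // P(row <= y)
    double c = 0.0;
    for (int x = 0; x < w; ++x) {
      c += (row_sum > 0.0) ? pdf[y * w + x] / row_sum : 0.0;
      cond_cdf[y][x] = c;                                // P(col <= x | row = y)
    }
  }
}
```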
Part 4: Animation
In order to connect the mesh and render portions of our code, we first use the mesh creation and remeshing scripts to output object files. These object files do not hold transformation or camera rotation data, both important inputs into the renderer for changing camera angles as well as for the precise placement of the mesh in the rendering world space. As such, we take those object files and put them through a custom-written Python script that automates the entire pipeline, turning them into .scene files, which attach metadata, including transforms and camera positions, to the existing object data. This script also reformats this information so the pathtracer is able to use it. The pathtracer then takes in the object data, transforms the given object, and applies the render function to it. The Python script runner.py also takes care of running this rendering loop for every object file in the directory, removing the need to manually invoke the Pathtracer for every frame with the necessary arguments, including the output filename. In this manner, we could just run the Python program and then, as a postcondition, expect the output directory to be populated with all the rendered scene images. Finally, to produce the nice-looking videos, we took the rendered frames and fed them into the CLI tool ffmpeg to get the final output video, completing the entire pipeline.
Results:
Cube Bubble Render:
This was one of our rendered images of the bubble at a particular simulation timestep, using a cube-shaped bubble as the initial starting point. The background uses the HDRI map mentioned earlier.
Floating Bubble Video:
Here is a short animation of a floating bubble, consisting of a sequence of rendered images. You can clearly see the reflection of umbrellas in the bubble, although there is quite a bit of noise. With more time, we would like to create a script that takes each image and denoises it as well.
Bubble with camera spin
Bubble with camera spin + less noise
References:
While we were creating the bubble object, we considered two different types of models: a mesh-based system and a point-cloud system. For the mesh version of the bubble, we found it beneficial to familiarize ourselves with mesh representations and similar ideas from lecture and Project 2. Along with this, we used libraries such as libigl, which focuses on geometry processing, to work with the triangle mesh structure and achieve the results that we wanted. When simulating using these mesh structures we looked to articles such as "A Hyperbolic Geometric Flow for Evolving Films and Foams" and "Animation of Soap Bubble Dynamics, Cluster Formation, and Collision", both of which rely on mesh systems, as references.
Figure from Animation of Soap Bubble Dynamics, Cluster Formation, and Collision:
Figure from A Hyperbolic Geometric Flow for Evolving Films and Foams:
We also considered creating a point-cloud-based system, for which we referred to the articles 3D Object Detection from Point Cloud, Deep learning on point clouds for 3D object detection, and Physically Based Rendering Techniques to Visualize Thin-Film Smoothed Particle Hydrodynamics Fluid Simulations. However, we found it to be incompatible with the structures we were already working with. Furthermore, many of the other articles on bubble meshing and simulation, as outlined in the paragraph above, already used mesh systems, giving that scheme a stronger basis and more implementation options. As such, we decided to continue the project using our previous system.
Figure from 3D Object Detection from Point Cloud
When working on rendering the bubble, in order to properly represent the behavior of light and reflections on thin films, we used concepts such as the Fresnel equations from the article Thin Film Interference for Computer Graphics, as well as the paper “A Practical Extension to Microfacet Theory for the Modeling of Varying Iridescence” [Belcour and Barla 2017]. Both outline a thin-film version of the BRDF class and provide further guidance regarding sampling wavelengths.
Image from Thin Film Interference for Computer Graphics
When creating wind and camera movements we looked to articles and forums, such as Curl-Noise for Procedural Fluid Flow, discussing curl and Perlin noise, which allowed us to simulate the flow of air, an external force in our bubble animation. We made a few changes, as our bubble objects were not as rigid as the sphere objects those sources had used.
Figure from Curl-Noise for Procedural Fluid Flow
When walking around Berkeley, we saw some individuals making giant bubbles, so we decided to take a few pictures and videos to use as both a reference and a comparison for our simulated bubbles.
Actual Bubble Slow Motion Video Reference
Final Vid
Final Slides
Media Folder
Contributions
Tanush: Worked on the Pathtracer, the obj-file connector code, the environment-mapping augmentation to the Pathtracer, the BubbleBSDF inside the Pathtracer, the overarching automated runner Python code, and the written deliverables.
Joshua: Implemented bubble simulation, mesh export + mesh import for pathtracer, initial experiments with thin-film interference calculations, denoising support
Destiny: Researched and wrote out information for website and slides, put up project website, worked on debugging and providing assistance where necessary.
Aishik: Active in the initial research and proposal process. Contributed to writeups and provided help whenever necessary.