senpie

Nothing much for today either. I was again playing with multi-threading and noticed I had a bug. The issue was that several cores were computing the result, but because of data races I would get a poor-quality picture. I didn't check the output image, which is why I didn't notice it yesterday. The idea is that even though each core does only one sample, running on 8 cores would mean I have 8 samples per pixel when averaged. However, because the random generator was shared, I wouldn't actually get 8 samples: some sequences would get corrupted and I would end up with fewer. Here is the code I finally ended up with:

#include <functional>
#include <random>
#include <thread>

static std::hash<std::thread::id> hasher;
static std::uniform_real_distribution<double> distribution(0.0, 1.0);

// Returns a uniformly distributed double in [0, 1).
inline double random_double() {
  // One mt19937 per thread, seeded with the hash of that thread's id,
  // so each thread draws from its own independent sequence.
  static thread_local std::mt19937 generator(
    static_cast<unsigned>(hasher(std::this_thread::get_id()))
  );
  return distribution(generator);
}

I have static thread_local, which means each thread gets its own random number generator. Furthermore, the constructor receives the hash of the current thread id, resulting in a different seed for each thread, so sampling on different threads actually makes sense. Nevertheless, there is a case where the hash could repeat and my threads' work would be redundant. Fortunately, for my use case, since I use very few threads, six on Windows and eight on Mac (4 efficiency cores, 4 performance cores), and all threads start "at the same time", it is unlikely that an id would repeat. On that note, I think the code I wrote that distributes the tasks to the threads still looks kind of ugly, and I can do better. For that specific purpose, I resumed reading Bjarne's book, specifically the "Threads and Tasks" section, to look for a better alternative. In the meantime, let's enjoy more renders of balls. This time in full HD, with 80 samples per pixel. Why balls again, you may ask? Because I am too lazy to write code for loading meshes and handling ray-to-triangle intersection, but I will do it eventually, most probably tomorrow. This time the image took 25 minutes to render, which is quite good considering the other render took me 4 hours. Note the previous render was 120x675.
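On that task distribution note, here is a rough sketch of the std::async flavour I am considering after skimming the "Threads and Tasks" section; render_rows and the chunking scheme are hypothetical, not my current code:

#include <algorithm>
#include <future>
#include <vector>

// Hypothetical helper: would render scanlines [row_begin, row_end) into a shared framebuffer.
void render_rows(int row_begin, int row_end) {
  (void)row_begin; (void)row_end; // per-pixel ray tracing would go here
}

// Split the image into row chunks and let std::async run each chunk on its own thread.
void render_parallel(int image_height, int thread_count) {
  std::vector<std::future<void>> tasks;
  const int rows_per_task = (image_height + thread_count - 1) / thread_count;
  for (int row = 0; row < image_height; row += rows_per_task) {
    const int row_end = std::min(row + rows_per_task, image_height);
    tasks.push_back(std::async(std::launch::async, render_rows, row, row_end));
  }
  for (auto& task : tasks) task.get(); // block until every chunk is done
}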

Render of balls, 80 samples per pixel, max depth 50, 1920x1080

I have finally added multi-threading support. A screenshot (CPU utilization on a MacBook Pro 13 M1 while rendering in multi-threading mode) shows 100% utilization of my CPU resources. There was a 5x improvement in speed, which is amazing considering my computer has 6 cores ( tested on Windows ). That's it for today, I will share more insights tomorrow!

I am now on a path of darkness, and no tutorial shall help me. That is, I have finished the tutorial and I am experimenting on my own, so there is no one to hold my hand and tell me whether I am doing something right or wrong. Speaking of someone to hold my hand, this post has been sponsored by HedgeTheHog#andranik3949, who was kind enough to help me when I was completely lost debugging my code. Wish I could say the same for the compiler... The issue was that I was trying to use std::bind to pass a reference to my world to the render function. HedgeTheHog found that "the arguments to bind are copied or moved, and are never passed by reference unless wrapped in std::ref or std::cref". Therefore, one solution is to force passing by reference with std::ref, as in auto f = std::bind(func, std::ref(world));, then call f();. Another workaround is to use std::placeholders::_1, as in auto f = std::bind(func, std::placeholders::_1);, and then pass the world in the function call itself, f(world); ( a tiny self-contained example is at the end of this post ). There are some other errors I have yet to battle, but I will talk about them after I find a fix. The second challenge I have to face is to somehow use local instances of random generators. "Why?" you may ask. Because, if several threads use the same random number generator, it's gonna be a bottleneck, since random generators usually maintain some kind of inner state, and every update to that state invalidates the CPU cache across all of the cores. Someone smart reading this may think "Aha! Just use static thread_local instead of static". Unfortunately, that alone is not enough, because I would have the same seed in every instance. I need to figure out a way to have different seeds on each thread without making my code super ugly. That's it for today, see you!
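To make the two fixes concrete, here is a minimal sketch with a hypothetical render function standing in for mine:

#include <functional>
#include <iostream>

struct World { int object_count = 3; };

// Hypothetical stand-in for the real render function, which takes the world by reference.
void render(const World& world) {
  std::cout << "rendering " << world.object_count << " objects\n";
}

int main() {
  World world;

  // Workaround 1: wrap the argument in std::ref so bind stores a reference, not a copy.
  auto f1 = std::bind(render, std::ref(world));
  f1();

  // Workaround 2: leave a placeholder and pass the world at call time.
  auto f2 = std::bind(render, std::placeholders::_1);
  f2(world);

  return 0;
}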

Today, I have spent extra hours to finish up the project. The final result looks super cool. Since I haven't yet added support for multi-threading, this scene took around four hours to render. It had 500 samples per pixel, with a max depth of 50 rays. Final render. For the last day, I have added defocus blur.

I am not sure yet what I want to add to this project, but I will decide soon. That's it for today, see you tomorrow!

Almost done with the series! The next step will be to add simple quality-of-life improvements. Here is the list of stuff done ( again in reverse chronological order ):

* Added camera controls with lookfrom and lookat parameters.
* Added glass material ( see the reflectance sketch after this list ).
* Added metal material fuzziness property.
* Added materials.
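On the glass material: the part I found most interesting is deciding how often a ray reflects instead of refracting, which the series handles with Schlick's approximation. A minimal sketch of that formula ( self-contained; only the name reflectance follows the book ):

#include <cmath>

// Schlick's approximation for the reflectance of a dielectric:
// how much light reflects (rather than refracts) at a given viewing angle.
inline double reflectance(double cosine, double refraction_index) {
  double r0 = (1 - refraction_index) / (1 + refraction_index);
  r0 = r0 * r0;
  return r0 + (1 - r0) * std::pow(1 - cosine, 5);
}

// Usage idea: if reflectance(cos_theta, ri) is greater than a random double in [0, 1), reflect; otherwise refract.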

Yet again, below is the evolution of the output image after each major change ( in chronological order ). Fuzzy metal.

Glass attempt.

FOV experiment.

Camera controls.

Zoomed in.

That's it for today. Code is as always available on my GitHub page. I have implemented some more stuff, but there is currently a bug, so I will leave it for tomorrow. See you!

Ray tracing rocks! Here is the list of stuff I've done today ( again in reverse chronological order ):

* Added gamma correction.
* Removed an unnecessary sample.
* Added true Lambertian reflection.
* Patched the shadow acne problem.
* Added a ray bounce limit.
* Added a simple diffuse material.
* Added basic anti-aliasing ( see the sketch after this list ).
* Moved renderer logic into a separate camera class.
* Cleaned up and added support for OS X.
* Fixed the parameter list for main.
* Added a utility interval class.
* Improved main. Fixed a sphere hit bug.
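The basic anti-aliasing above is nothing fancier than shooting several jittered rays per pixel and averaging the results. A rough sketch of the idea, where sample_color is a hypothetical stand-in for the actual ray tracing:

#include <random>

struct color { double r, g, b; };

// Hypothetical stand-in for tracing one ray through image coordinates (u, v);
// here it just returns a gradient so the sketch compiles on its own.
color sample_color(double u, double v) { return {u, v, 0.25}; }

color pixel_color(int i, int j, int image_width, int image_height, int samples_per_pixel) {
  static std::mt19937 gen(12345);
  std::uniform_real_distribution<double> jitter(0.0, 1.0);

  color sum{0, 0, 0};
  for (int s = 0; s < samples_per_pixel; ++s) {
    // Random sub-pixel offset, mapped to [0, 1] image coordinates.
    double u = (i + jitter(gen)) / (image_width - 1);
    double v = (j + jitter(gen)) / (image_height - 1);
    color c = sample_color(u, v);
    sum.r += c.r; sum.g += c.g; sum.b += c.b;
  }
  return {sum.r / samples_per_pixel, sum.g / samples_per_pixel, sum.b / samples_per_pixel};
}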

Yet again, below is the evolution of the output image after each major change ( in chronological order ). With a ground ( actually another sphere ). If I hadn't had that bug yesterday, you would have seen this one already.

Sphere with basic diffuse and uniform light bounce.

Sphere after shadow acne fix. This one was an interesting case of floating-point rounding errors. Sometimes a ray would bounce from a point slightly offset from the actual hit point, so the new ray's origin could end up a bit above or below the surface. When it ended up below, that area was reported as extra dark.
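The actual fix is tiny: only accept intersections whose ray parameter t is above a small epsilon, so a ray whose origin landed marginally below the surface cannot immediately re-hit that same surface. A sketch in the spirit of the interval utility from the list above ( names are illustrative ):

#include <limits>

// Valid range for the ray parameter t when searching for the closest hit.
struct interval {
  double min, max;
  bool surrounds(double t) const { return min < t && t < max; }
};

// Starting at 0.001 instead of 0 discards self-intersections caused by
// floating point rounding, which is what removes the shadow acne.
const interval hit_range{0.001, std::numeric_limits<double>::infinity()};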

True Lambertian reflection. Actually, it is much simpler than it sounds.
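For the curious, the whole trick is that the scatter direction is just the surface normal plus a random unit vector. A self-contained sketch with a throwaway vec3 ( not my actual classes ):

#include <cmath>
#include <random>

struct vec3 { double x, y, z; };

vec3 operator+(vec3 a, vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// A uniformly random direction on the unit sphere (via a normalized Gaussian sample).
vec3 random_unit_vector() {
  static thread_local std::mt19937 gen(std::random_device{}());
  std::normal_distribution<double> d(0.0, 1.0);
  vec3 v{d(gen), d(gen), d(gen)};
  double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
  return {v.x / len, v.y / len, v.z / len};
}

// True Lambertian scattering: adding a random unit vector to the normal
// biases bounce directions toward the normal with a cos(theta) distribution.
vec3 lambertian_scatter_direction(vec3 normal) {
  return normal + random_unit_vector();
}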

Gamma correction!
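Gamma correction itself ends up being a one-liner; assuming the usual gamma 2 from the book, every linear color component just gets a square root before it is written out:

#include <cmath>

// Transform a linear color component in [0, 1] to gamma 2 space before output.
inline double linear_to_gamma(double linear_component) {
  return linear_component > 0.0 ? std::sqrt(linear_component) : 0.0;
}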

That's it for today. Code is as always available on my GitHub page. I have implemented some more stuff, but there is currently a bug, so I will leave it for tomorrow. See you!

Building a ray tracer is so much fun! I literally cannot stop coding. From the original red-green gradient I now already have a cute little sphere. Following is the list of stuff I have implemented today, in reverse order ( the feature on top is the most recent ):

    * Added hittable class for management.
    * Stores front/back face information.
    * Added abstract hittable class, which represents everything that can be intersected with a ray.
    * Implemented hittable in the sphere class.
    * Simplified sphere_hit code.
    * Added sphere normals. Drawing sphere normals with the normal's xyz as rgb.
    * Added sphere and sphere intersection. Returns red for the sphere ( see the sketch after this list ).
    * Added camera, viewport and ray.
    * Simple vertical gradient from light blue to white.
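The sphere intersection mentioned above boils down to checking whether a quadratic in the ray parameter t has a real root. A compact sketch with a throwaway vec3 ( illustrative, not my actual code ):

struct vec3 { double x, y, z; };

vec3 operator-(vec3 a, vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Substituting origin + t * direction into the sphere equation gives a quadratic
// in t; a non-negative discriminant means the ray hits the sphere. The early
// "returns red for the sphere" version only needs this yes/no answer.
bool hit_sphere(vec3 center, double radius, vec3 origin, vec3 direction) {
  vec3 oc = origin - center;
  double a = dot(direction, direction);
  double b = 2.0 * dot(oc, direction);
  double c = dot(oc, oc) - radius * radius;
  return b * b - 4 * a * c >= 0;
}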

And below is the evolution of the output image after each major change ( in chronological order ). Blue gradient.

Returns a red color if the ray intersects the sphere.

Sphere normals, using the normal's xyz as rgb for debugging.

That's it for today. Code is as always available on my GitHub page. I have implemented some more stuff, but there is currently a bug, so I will leave it for tomorrow. See you!

Woah! Already 40 days have passed. Today I continued working on the ray tracer project. Firstly, since my project is not so complicated yet, only three files, I decided to simplify the output structure: instead of the ./out/build directory structure, I switched to a simple ./out. Secondly, I implemented a very basic vec3 class and its utilities, since it is impractical to do computer graphics without it :) It has three components, operator overloading for clean calculations, length, dot product, and cross product operations. It was fun to write down the cross product formula off the top of my head, felt kinda empowering ( a rough sketch of the class is at the end of this post ). Finally, I fixed a tiny bug where I would write the final status "Done!" into the image file. I have no clue how the image was not corrupted and apps could still open it. That's it for today! I do not have a visual update, so I will not include any files. You can see the full commit here.
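For reference, a minimal sketch of what such a vec3 looks like; the real class has more operators, but the dot and cross products are the fun parts:

#include <cmath>

struct vec3 {
  double x, y, z;

  vec3 operator+(const vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
  vec3 operator-(const vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
  vec3 operator*(double t) const { return {x * t, y * t, z * t}; }

  double length() const { return std::sqrt(x * x + y * y + z * z); }
};

// Dot product: how much two vectors point in the same direction.
inline double dot(const vec3& a, const vec3& b) {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Cross product: a vector perpendicular to both a and b.
inline vec3 cross(const vec3& a, const vec3& b) {
  return {a.y * b.z - a.z * b.y,
          a.z * b.x - a.x * b.z,
          a.x * b.y - a.y * b.x};
}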

I've finally come to the point where I'm quite confident in my C++ knowledge and can start working on simple projects to battle-test it. For that reason, I've started following a programming series that teaches you to make a ray tracer in a weekend. I have spent most of the time setting up the project. For that I have learned the basics of CMake, although I am not quite happy with my current setup and will modify it in the future. The code is available on my GitHub page, if you wanna check it out. For now I have followed the introduction and can generate a .ppm (portable pixmap format) file with a simple red-green gradient image. Here is a preview, although browsers don't seem to support .ppm, so I converted it to a .png instead.

256x256 red-green gradient
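The program behind that image is just the introduction's nested loop writing a plain-text PPM: a header, then one RGB triplet per pixel. A minimal sketch of the idea, written from memory rather than copied from my repo:

#include <iostream>

int main() {
  const int image_width = 256;
  const int image_height = 256;

  // PPM header: ASCII format, dimensions, and maximum color value.
  std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";

  for (int j = 0; j < image_height; ++j) {
    for (int i = 0; i < image_width; ++i) {
      // Red grows left to right, green top to bottom; blue stays fixed.
      double r = double(i) / (image_width - 1);
      double g = double(j) / (image_height - 1);
      double b = 0.0;

      std::cout << int(255.999 * r) << ' '
                << int(255.999 * g) << ' '
                << int(255.999 * b) << '\n';
    }
  }
  return 0;
}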

Finally, here is the code snippet I promised you yesterday, which evaluates to true if the template parameter list is homogeneous.

#include <iostream>

// Hand-rolled version of std::is_same: false in the general case...
template<typename U, typename V>
struct is_same {
    static const bool value = false;
};

// ...and true when both parameters are the same type.
template<typename U>
struct is_same<U, U> {
    static const bool value = true;
};

// Recursive case: the list is homogeneous if the first two types match
// and the rest of the list (starting from the second) is homogeneous too.
template<typename First, typename Second, typename... Tail>
struct is_homogeneous {
    static const bool value = is_same<First, Second>::value && is_homogeneous<Second, Tail...>::value;
};

// Base case: exactly two types left.
template<typename First, typename Second>
struct is_homogeneous<First, Second> {
    static const bool value = is_same<First, Second>::value;
};

int main ()
{
    std::cout << std::boolalpha;
    std::cout << is_homogeneous<int, int, int>::value << std::endl; // gives true
    std::cout << is_homogeneous<int, bool, int>::value << std::endl; // gives false
    return 0;
}

That's it for today, see you tomorrow!

It was a bit risky since I had only ten more minutes until the day ended; however, I can announce with ease: TEMPLATES NO MORE!!! Well, at least for now, but if I need more advanced knowledge I will return to them. I have watched all the classes from the C++ series mentioned in recent blogs on this topic and learned some fancy magicary-trickery techniques. I can now say with confidence that, compared to C++ templates, Java generics are a baby toy. Not that that is good or bad; personally, I would want to avoid templates/generics as much as possible. Tomorrow I will finally start with my C++ ray tracer series. As an end note, I will leave a code snippet which uses templates to tell whether the provided type list is homogeneous or not.

p.s. nvm time is ending, I will post it as a part of the next blog post. See you!