[{"content":" ","date":"6 August 2025","externalUrl":null,"permalink":"/","section":"","summary":"","title":"","type":"page"},{"content":"","date":"6 August 2025","externalUrl":null,"permalink":"/articles/","section":"Articles","summary":"","title":"Articles","type":"articles"},{"content":"","date":"6 August 2025","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","date":"6 August 2025","externalUrl":null,"permalink":"/categories/project/","section":"Categories","summary":"","title":"Project","type":"categories"},{"content":"","date":"6 August 2025","externalUrl":null,"permalink":"/tags/project/","section":"Tags","summary":"","title":"Project","type":"tags"},{"content":"","date":"6 August 2025","externalUrl":null,"permalink":"/projects/","section":"Projects","summary":"","title":"Projects","type":"projects"},{"content":"","date":"6 August 2025","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":" Coming soon\u0026hellip; # ","date":"6 August 2025","externalUrl":null,"permalink":"/articles/uecolortoolsarticle/","section":"Articles","summary":"","title":"UE Color Tools ","type":"articles"},{"content":" Overview # This is an Unreal Engine plugin that provides a vectorscope, a histogram and a waveform, tools which are available by default in lots of other production software.\nAll those tools could prove useful in the production pipeline, helping artists get objective insights into the colorization of a scene, without having to exit UE.\nThe plugin is now available on FAB, and it is free for personal use. 
You can find it here.\nGoals # Learn more about color theory and apply it Make a professional tool, both aesthetically and functionally Distribute the plugin for free on FAB Implementation # Coming soon\u0026hellip;\n","date":"6 August 2025","externalUrl":null,"permalink":"/projects/ue5colortools/","section":"Projects","summary":"Work in Progress\u0026hellip; - UE5 plugin providing 3 color inspection tools (vectorscope, histogram, waveform)","title":"UE Color Tools Plugin","type":"projects"},{"content":" Umbra Mortis # Throughout this project I learned a bunch of new things, such as developing multiplayer games and using networking concepts, using the Scene View Extension to add custom shader passes, or using Niagara.\nVisual Effects # Since our team lacked a VFX artist, I took on this role, applying my shader experience to using Niagara. I would regularly receive feedback from the Visual Artists in our team, ensuring that the effects were cohesive with the rest of the game.\nAmmo Box # Bell Aura # Disease Visualization # Key Pickup # Bell AOE Visualization # Dissolving Effect # Canals Water # The water is reactive to the environment, creating waves around objects that come into contact with it.\nTHE STINKY FISH (my proudest achievement so far) # Gameplay # Lobby Practice Targets # Headshot Hitmarkers # Working on this led to the discovery that the shooting was not reliable and accurate for all the players in a session, which we eventually fixed.\nTeammate Outline Post Process # Downed Post Process # Networking # The networking aspect of the game made development somewhat more difficult; even some of the visual elements required replicating various parameters.\nResearch # In the early stages of the project, when the vision was not yet clear, I decided to experiment with the Scene View Extension class in Unreal Engine 5. This allows injecting custom shader passes at different stages of the rendering pipeline. 
This allowed me to create some effects that we did not end up using in the game, but that are worth showing.\nI made use of a depth stencil buffer to create a ghosting effect and a scanning effect.\nThe research I did with those led me to start working on this project:\nUE Color Tools Plugin 6 August 2025\u0026middot;102 words Work in Progress\u0026hellip; - UE5 plugin providing 3 color inspection tools (vectorscope, histogram, waveform) ","date":"6 July 2025","externalUrl":null,"permalink":"/rabbithole/umbramortiswork/","section":"Rabbitholes","summary":"Multiplayer zombie FPS made in UE5","title":"My contribution to Umbra Mortis","type":"rabbithole"},{"content":"","date":"6 July 2025","externalUrl":null,"permalink":"/rabbithole/","section":"Rabbitholes","summary":"","title":"Rabbitholes","type":"rabbithole"},{"content":" Role Duration Platform Team size Engine Generalist 1 Year Windows 6 Programmers, 6 Designers, 10 Artists Unreal Engine 5 Overview # A multiplayer zombie shooter, developed with Unreal Engine 5. 
This was a one-year-long project I worked on in my third year at BUAS.\nGoals # Work in a much larger team than before Learn how to use UE5 at a much deeper level Contribute to releasing a nice game on Steam My contributions # My contribution to Umbra Mortis 6 July 2025\u0026middot;320 words Multiplayer zombie FPS made in UE5 Links: # Umbra Mortis Steam Page\n","date":"6 July 2025","externalUrl":null,"permalink":"/projects/umbramortis/","section":"Projects","summary":"Multiplayer zombie FPS made in UE5","type":"projects","title":"Umbra Mortis"},{"content":" PEPI Renderer # A renderer developed in 16 weeks (on top of a hybrid ray tracing renderer, which took another 16 weeks) for the PEPI engine, which was then used to create Owlet, a small RTS game.\nRay-Traced Shadows # I made use of my hybrid ray tracing pipeline to trace shadow rays, allowing for soft shadows.\nBloom # I made use of compute shaders, doing one horizontal and one vertical pass, creating the bloom effect for colors with emissive values.\nBlended Skeletal Animations # The renderer allows smoothly blending between multiple skeletal animations.\nUI System Porting # The UI system was initially implemented in OpenGL (by another programmer in our team). I ported this complex system to DX12, keeping all its functionality intact.\nMesh Instancing # The renderer makes use of instanced rendering, allowing all identical meshes to be rendered in a single draw call.
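Conceptually, the batching step behind instanced rendering groups per-instance transforms by mesh before submission, so each unique mesh needs only one draw. A hedged CPU-side sketch (the names and types are hypothetical, not the PEPI engine's actual code):

```cpp
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

// Illustrative per-instance data; a real engine would also carry material indices.
struct Transform { float m[16]; };

using InstanceBatches = std::unordered_map<uint32_t, std::vector<Transform>>;

// Group scene objects by mesh ID so each unique mesh is drawn once,
// DrawIndexedInstanced-style, with its instance buffer holding all transforms.
InstanceBatches BuildBatches(const std::vector<std::pair<uint32_t, Transform>>& scene)
{
    InstanceBatches batches;
    for (const auto& [meshId, xform] : scene)
        batches[meshId].push_back(xform); // one entry per instance
    return batches;
    // Each batch then becomes a single draw call, e.g.:
    //   UploadInstanceBuffer(batches[id]);
    //   cmdList->DrawIndexedInstanced(indexCount, batches[id].size(), 0, 0, 0);
}
```

With a thousand identical units on screen, this turns a thousand draw calls into one per mesh type, which is why the technique suits RTS games so well.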
Great for particle systems, but also great for RTS games that have a bunch of identical meshes.\nCompatibility with Editor Tools # The renderer can render in real time all the changes made by the editor tools, such as light editing, mesh and material modification, and the terrain editor.\nIndexed Materials # This allows instanced rendering while still supporting different materials on instanced meshes.\nMip-Mapping # Made use of compute shaders to generate mip maps for all the textures.\nSteam Deck Support # For the first 8 weeks of the project, the engine could run on a Steam Deck. We eventually abandoned this, as the project requirements no longer asked for multi-platform support.\n","date":"4 September 2024","externalUrl":null,"permalink":"/rabbithole/pepirenderer/","section":"Rabbitholes","summary":"DX12/DXR renderer built specifically for RTS games","title":"PEPI Engine Renderer ","type":"rabbithole"},{"content":" Overview # Role Duration Platform Team size Graphics API Graphics Programmer 16 weeks Windows + Steam Deck 8 Programmers, 3 Designers, 3 Artists DX12/DXR Owlet is an RTS game developed using the PEPI custom engine. I worked as a graphics programmer in the team, building the foundation of the PEPI renderer (first 8 weeks) and then collaborating with another graphics programmer to build the tools our artists needed for the game and to optimize the renderer's performance (for 8 more weeks). 
The PEPI renderer was built on top of my Hybrid RayTracing renderer, making use of some of its features.\nGoals # Experiencing working in a larger team of programmers Developing tools requested by other disciplines in the team Contributing to building a fully working game engine, later to be used in making a game My Contributions: # PEPI Engine Renderer 4 September 2024\u0026middot;266 words DX12/DXR renderer built specifically for RTS games Links: # Owlet Trailer\nOwlet Itch Page\n","date":"1 July 2024","externalUrl":null,"permalink":"/projects/owlet/","section":"Projects","summary":"RTS game made using the PEPI custom engine","title":"Owlet / PEPI engine","type":"projects"},{"content":" Overview # This was my second project in my second year at BUAS, a deferred hybrid ray tracer I built on top of a basic PBR DX12 renderer (which was my first project of Y2), using DXR.\nGoals # Further exploring ray tracing concepts and learning how to use DXR\nGetting to understand and implement a deferred renderer\nGetting to write an article about it\nExploring a modern rendering technique\nFeatures # Deferred rendering\nPBR materials\nSoft Shadows\nAmbient occlusion\nReflections\nImplementation # I have also written an article about the technicalities. You can check it out here if interested:\nImplementing a Hybrid Ray Tracer with DXR 4 December 2023\u0026middot;3171 words A cool project I made ","date":"4 December 2023","externalUrl":null,"permalink":"/projects/hybridraytracer/","section":"Projects","summary":"A hybrid ray tracer built with DXR","title":"DXR Hybrid Ray tracer","type":"projects"},{"content":" Overview # If you\u0026rsquo;re here, I\u0026rsquo;m assuming you\u0026rsquo;re interested in ray tracing. Cool. 
This article documents the creation of Hybrid RayTracer, which proved to be a great learning exercise when it comes to DX12, Ray Tracing and rendering in general.\nI am going to describe my journey and the concepts I learned, along with the problems I encountered.\nThis project was my first experience with 3D ray tracing (and DXR), so please take all the information with a grain of salt.\nIf you are not experienced, I recommend watching this video as an introduction to ray tracing. While my article lays down the fundamental concepts, additional visual context can enhance your understanding.\nThe plan: # My focus was on building a ray tracer that would run in real time. For that purpose, I chose to implement the following features:\nHybrid Pipeline\nShadows\nReflections\nAmbient Occlusion\nRefractions\nSupport for multiple lights\nIn the limited time I had (8 weeks), I only managed to implement the bolded features above.\nSome basic theory # Hybrid Ray Tracing # A hybrid ray tracing pipeline involves a rasterization pass, creating a G-Buffer for scene reconstruction in the ray tracer.\nUsing a rasterizer to write a G-Buffer # A G-Buffer comprises multiple textures that hold data about the scene’s geometry from the camera\u0026rsquo;s viewpoint. This step uses a rasterizer to render multiple render targets containing screen space information about the scene.\nFor my project, I used four render targets to store the world positions of pixels, surface normals, albedo color, and material information (roughness and metallic properties).\nWhy do this? # Rasterizers are fast, so leveraging a G-Buffer to avoid the need for primary rays(the rays traced from the camera towards the scene) in the ray tracer boosts performance, particularly on older hardware not optimized for ray tracing.\nUsing the G-Buffer in the ray tracer # Normally, a ray has to be traced for each of the pixels on the screen, from the camera to the scene. 
Each ray has to be checked against the geometry in the scene and return values like color or distance.\nAll of the data collected with the primary rays can also be obtained from the G-Buffer at a lower performance cost.\nTextures in the ray generation shader provide access to this data, aiding in tracing other ray types like shadow, reflection, and ambient occlusion rays.\nCapabilities of this Ray Tracer # This distributed ray tracer traces multiple samples for each effect and integrates their contributions. While this process introduces noise due to its stochastic nature, more samples yield results closer to realistic illumination. Samples from past frames can also be used for refinement.\nShadows # Realistic shadows require a smooth falloff, as lights in reality are not point sources but areas, causing the light to fall unevenly around an object\u0026rsquo;s shadow, making it softer towards the edges.\nSoft shadows are achieved by tracing multiple samples towards the light source. Each ray targets a random point on the light source, making samples near the shadow's edge less likely to intersect the shadow-casting object.\nBy summing up all the sampled results, the shadow becomes smoother towards the edges.\nAmbient Occlusion # Ambient occlusion defines how much of the ambient light is occluded by the surrounding environment. This is achieved by tracing rays in random directions from a surface and using the number of hits and the distances to impacted geometry to determine surface occlusion.\nReflections and PBR # Reflections are the part of indirect lighting affected by the angle of incidence (between the light ray and the surface normal) and the angle of the camera relative to the reflected ray.\nIn ray tracing, light calculation is reversed compared to real life. 
A ray sent from the camera to a surface uses the angle with the surface normal to compute the reflected ray, which then interacts with other illuminated geometry.\nPBR (Physically Based Rendering) enhances realism. It defines material properties with two values: roughness and metallic. The Cook-Torrance microfacet model, a widely used illumination model, considers surfaces as collections of tiny, perfectly reflecting microfacets.\nRoughness affects the scattering of these microfacet angles. In ray tracing, this translates to sampling random microfacet directions, with the distribution based on a mathematical formula. We then use the angle of the microfacets instead of the surface normal to calculate the reflected rays. Different angles lead to varied light contributions, altering the material\u0026rsquo;s appearance.\nThe metallic value influences each reflection ray\u0026rsquo;s contribution.\nFirefly reduction # Firefly reduction is a method of filtering noise, targeting the elimination of abnormally bright pixels. These bright spots, often referred to as \u0026lsquo;fireflies\u0026rsquo;, arise from sampling directions that, while low in probability, yield disproportionately high energy. Removing these bright pixels does remove some energy from the scene, but this loss can be a worthwhile sacrifice for the significant improvement it brings to image quality.\nMy journey implementing all this in DXR # Setting up DXR # As a code base for this project, I used my previous DX12 rasterizer, setting up the DXR pipeline, shader tables, resource descriptor heap, and acceleration structures using NVIDIA\u0026rsquo;s DXR helpers. 
Here you can find the Nvidia tutorials on doing all this.\nThe helpers they provided had some issues with the shader table alignment, requiring manual adjustments in \u0026ldquo;ShaderBindingTableGenerator.cpp\u0026rdquo;, changing the miss shader entry size to align to 64 bytes.\nShadows and Random numbers on GPU # Once I reached this stage, I decided to deviate from my initial plan and prioritize the implementation of shadows. This approach would provide a feature to test the effectiveness of the hybrid pipeline.\nImplementing Shadows # To implement this, I used a function to generate a random point within a unit sphere. This vector is then scaled by the size of the light source and added to the light\u0026rsquo;s position. Using this calculated position, I determine the direction and length of the ray from the shaded surface to the random point.\nfor (int i = 0; i \u0026lt; sample_count; i++)\n{\n    //generating a random point in a sphere\n    float3 sphere = RandomInUnitSphere(i * 7127 + seed1, i * 20749 + seed2, i * 6841 + seed3);\n    float3 lightPos = light.light_position + sphere * light.size;\n    float3 lightDir = lightPos - hitLocation; I made sure to avoid calling a closest hit shader since it would be unnecessary (free performance!). I use 2 flags for the TraceRay function to bypass unnecessary checks, resulting in only the miss shader being called, which sets the payload to false.\n...\nShadowHitInfo shadowPayload;\nshadowPayload.isHit = true; // by default, isHit is set to true, and is only changed if the miss shader is executed\nuint rayFlags = RAY_FLAG_ACCEPT_FIRST_HIT_AND_END_SEARCH | RAY_FLAG_SKIP_CLOSEST_HIT_SHADER;\nTraceRay(SceneBVH, rayFlags, 0xFF, 0, 0, 0, ray, shadowPayload);\n... When the miss shader is invoked, a counter tracking the unoccluded samples is incremented. After iterating through all the samples, the counter is divided by the total number of samples, yielding the shadow\u0026rsquo;s intensity at that specific surface point. 
This value is then multiplied with the direct illumination radiance.\n...\n...\n    // incrementing the counter if no geometry is hit\n    counter += !shadowPayload.isHit;\n}\nreturn (float(counter) / float(sample_count)); My project only supports a single point light, which makes things much simpler. Here are some directions for multiple light support.\nRandom numbers on the GPU # Generating random numbers presented a unique challenge, as I discovered HLSL lacks a built-in function for this purpose. Based on my research, the best function for this is a PCG hash: it has great distribution and is fast compared to other methods.\nuint pcg_hash(uint input)\n{\n    uint state = input * 747796405u + 2891336453u;\n    uint word = ((state \u0026gt;\u0026gt; ((state \u0026gt;\u0026gt; 28u) + 4u)) ^ state) * 277803737u;\n    return (word \u0026gt;\u0026gt; 22u) ^ word;\n} This function returns pseudo-random numbers, requiring a distinct seed for each pixel or sample to ensure randomness. Various elements can be used to construct this seed, such as the launch index of the ray (for good distribution, both x and y of the launch index should be used) or the world position of the fragment.\nFor temporal accumulation, incorporating the frame index into the seed proved necessary. Additionally, varying the seed with each sample\u0026rsquo;s index helps generate distinct values for each. 
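Since the PCG hash is pure integer arithmetic, the same routine can be transcribed to C++ and sanity-checked on the CPU; a direct port of the HLSL above (the port itself is illustrative, not part of the project):

```cpp
#include <cstdint>

// Direct C++ transcription of the HLSL pcg_hash shown above.
uint32_t pcg_hash(uint32_t input)
{
    uint32_t state = input * 747796405u + 2891336453u;
    uint32_t word  = ((state >> ((state >> 28u) + 4u)) ^ state) * 277803737u;
    return (word >> 22u) ^ word;
}
```

The same seed always maps to the same value, which is exactly why the per-pixel and per-sample seed variation described here is needed to avoid visible patterns.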
To minimize correlation between these values, it\u0026rsquo;s ideal to multiply the seed components with large prime numbers.\n//precalculating the invariable part of the seed outside the loop float seed1 = launchIndex.x * 4057 + launchIndex.y * 17929 + frame * 7919; float seed2 = launchIndex.x * 7919 + launchIndex.y * 5801 + frame * 4273; float seed3 = launchIndex.x * 5801 + launchIndex.y * 7127 + frame * 13591; for (int i = 0; i \u0026lt; sample_count; i++) { //using the precalculated seed and the sample index float3 sphere = RandomInUnitSphere(i * 7127 + seed1, i * 20749 + seed2, i * 6841 + seed3); Reflections # For the reflections, I used a GGX microfacet distribution function that would generate the normal of a microfacet within a cone. The angle of these microfacets is linked to the material\u0026rsquo;s roughness. The generated microfacet is used to reflect the ray originating from the camera and then trace the ray in this newly calculated direction.\n// the GGX distribution function for the rays float3 GGXMicrofacet(uint randSeed, uint randSeed2, float roughness, float3 normal) { //generating random values based on the seed float2 randVal; randVal.x = float(pcg_hash(randSeed)) / 0xFFFFFFFF; randVal.y = float(pcg_hash(randSeed2)) / 0xFFFFFFFF; //calculate tangent/bitangent based on the normal float3 up = abs(normal.y) \u0026lt; 0.999 ? 
float3(0, 0, 1) : float3(1, 0, 0); float3 tangent = normalize(cross(up, normal)); float3 bitangent = cross(normal, tangent); //sample the normal distribution float a = roughness * roughness; float a2 = a * a; float cosThetaH = sqrt(max(0.0f, (1.0 - randVal.x) / ((a2 - 1.0) * randVal.x + 1))); float sinThetaH = sqrt(max(0.0f, 1.0f - cosThetaH * cosThetaH)); float phiH = randVal.y * 3.14159f * 2.0f; return tangent * (sinThetaH * cos(phiH)) + bitangent * (sinThetaH * sin(phiH)) + normal * cosThetaH; } //generate a microfacet float3 H = GGXMicrofacet(pcg_hash(i * 7127 + rand1), pcg_hash(i * 20749 + rand2), roughness, normal); //reflect the incident Ray(or the view ray) float3 L = normalize(2.f * dot(normalize(-incidentRay), H) * H - normalize(-incidentRay)); RayDesc ray; ray.Origin = hitLocation; ray.Direction = L; ... ... //trace the ray using the ray description TraceRay(SceneBVH, RAY_FLAG_NONE, 0xFF, 0, 0, 1, ray, relfectionPayload); Tracing the Reflection Ray # When tracing the reflection ray, the hit shader returns the color and distance to the point of impact. Utilizing this distance, along with the ray\u0026rsquo;s direction and origin, I calculated the precise location of the hit. This enabled me to trace additional shadow rays, making reflections more accurate. The traced shadows will be multiplied with the contribution of the ray, enriching the final illumination effect.\nfloat3 r_hitLocation; r_hitLocation = hitLocation + L * relfectionPayload.colorAndDistance.w; float shadow = TraceShadowRays(r_hitLocation, reflected_shadow_SC, launchIndex, frame_index); Light Contribution and Cook-Torrance BRDF # For the contribution of the incoming light, I used a Cook-Torrance BRDF(Bidirectional Reflectance Distribution Function ). This involved the computation of the GGX normal distribution, geometry factor, and Fresnel effect – key components that influence how light interacts with surfaces. 
More on PBR here.\n// calculating all the dot products required for this BRDF float NoV = clamp(dot(normal, normalize(-incidentRay)), 0.0, 1.0); float NoL = clamp(dot(normal, L), 0.0, 1.0); float NoH = clamp(dot(normal, H), 0.0, 1.0); float VoH = clamp(dot(normalize(-incidentRay), H), 0.0, 1.0); float LdotH = clamp(dot(L, H), 0.0, 1.0); float3 F = fresnelSchlick(VoH, F0); float D = DistributionGGX(NoH, max(roughness, 0.00001)); float G = GeometrySmith(NoV, NoL, roughness); float3 BRDF = D * F * G / max(0.00001, (4.0 * max(NoV, 0.00001) * max(NoL, 0.00001))); Calculating Reflection Probabilities # Achieving accurate reflections necessitated the calculation of probabilities for choosing specific directions for the reflection rays. For this, I utilized a probability density function based on the GGX distribution, ensuring that each reflection direction contributed to the final image with consideration to its likelihood.\nfloat GGXProb = D * NoH / (4 * LdotH); Integrating Contributions for Final Pixel Color # After adding up all the contributions from all the samples, I divided the result by the number of samples. The returned value is added to the direct illumination, forming the final color of the pixel.\n//adding the sample contribution in the loop color_r = color_r + relfectionPayload.colorAndDistance.rgb * (NoL * GGXBRDF / (GGXProb)) * shadow; } return (color_r) / float(reflection_SC); Hybrid pipeline # For my hybrid pipeline, I am using 4 buffers. For each of them I stored one render target descriptor in a descriptor heap designated for render targets, and one UAV descriptor in the descriptor heap designated for the resources used in ray tracing pass(UAV\u0026rsquo;s are nice because they allow both reading and writing). 
This way, I can use the same resource for both the rasterizer and the ray tracer, without having to copy anything.\n//Creating the RTV descriptors\nCD3DX12_CPU_DESCRIPTOR_HANDLE rtvHandle(m_RTVDescriptorHeap-\u0026gt;GetCPUDescriptorHandleForHeapStart());\nm_posBuffer = CreateRTVBuffer(device, rtvHandle);\nm_normalBuffer = CreateRTVBuffer(device, rtvHandle);\n//the same for all the other buffers\n\n//creating the UAV descriptors\nD3D12_CPU_DESCRIPTOR_HANDLE srvHandle = m_srvUavHeap-\u0026gt;GetCPUDescriptorHandleForHeapStart();\n{\n    D3D12_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};\n    uavDesc.ViewDimension = D3D12_UAV_DIMENSION_TEXTURE2D;\n    m_device_manager-\u0026gt;m_Device-\u0026gt;CreateUnorderedAccessView(m_device_manager-\u0026gt;m_posBuffer.Get(), nullptr, \u0026amp;uavDesc, srvHandle);\n    srvHandle.ptr += m_device_manager-\u0026gt;m_Device-\u0026gt;GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);\n}//this part is the same for all buffers Binding Render Targets and Managing Resource States # The render targets are bound to the pipeline through the OMSetRenderTargets function. 
It is crucial to accurately reference the position within the descriptor heap of the render targets when creating the CPU descriptor handles.\n//creating an array of CPU descriptor handles for the buffers CD3DX12_CPU_DESCRIPTOR_HANDLE rtv[4]; rtv[0] = CD3DX12_CPU_DESCRIPTOR_HANDLE(m_device_manager-\u0026gt;m_RTVDescriptorHeap-\u0026gt;GetCPUDescriptorHandleForHeapStart(), 3, m_device_manager-\u0026gt;m_RTVDescriptorSize); rtv[1] = CD3DX12_CPU_DESCRIPTOR_HANDLE(m_device_manager-\u0026gt;m_RTVDescriptorHeap-\u0026gt;GetCPUDescriptorHandleForHeapStart(), 4, m_device_manager-\u0026gt;m_RTVDescriptorSize); rtv[2] = CD3DX12_CPU_DESCRIPTOR_HANDLE(m_device_manager-\u0026gt;m_RTVDescriptorHeap-\u0026gt;GetCPUDescriptorHandleForHeapStart(), 5, m_device_manager-\u0026gt;m_RTVDescriptorSize); rtv[3] = CD3DX12_CPU_DESCRIPTOR_HANDLE(m_device_manager-\u0026gt;m_RTVDescriptorHeap-\u0026gt;GetCPUDescriptorHandleForHeapStart(), 6, m_device_manager-\u0026gt;m_RTVDescriptorSize); // binding them to the pipeline m_device_manager-\u0026gt;GetCommandList()-\u0026gt;OMSetRenderTargets(_countof(rtv), rtv, FALSE, \u0026amp;dsvHandle); For developers utilizing ImGUI, it\u0026rsquo;s important to bind your back buffers using OMSetRenderTargets before executing the rendering process for ImGUI.\nAn important aspect of managing these resources is the transition of their states. Post-rasterization, the resource states are shifted from RTV (Render Target View) to UAV, and then reverted to the RTV state after the ray tracing pass.\n//transitioning the resource states CD3DX12_RESOURCE_BARRIER barrierrr5 = CD3DX12_RESOURCE_BARRIER::Transition( m_device_manager-\u0026gt;m_posBuffer.Get(), D3D12_RESOURCE_STATE_RENDER_TARGET, D3D12_RESOURCE_STATE_UNORDERED_ACCESS); CD3DX12_RESOURCE_BARRIER barrierrr6 = CD3DX12_RESOURCE_BARRIER::Transition( m_device_manager-\u0026gt;m_normalBuffer.Get(), D3D12_RESOURCE_STATE_RENDER_TARGET, D3D12_RESOURCE_STATE_UNORDERED_ACCESS); ... ... 
{ D3D12_RESOURCE_BARRIER barriers[5] = {barrierrr5, barrierrr6, barrierrr7, barrierrr8, barrierrr9}; m_device_manager-\u0026gt;GetCommandList()-\u0026gt;ResourceBarrier(5, barriers); } Preparing for Ray Tracing # Prior to calling DispatchRays, the descriptor heap containing the views for the necessary resources must be bound to the pipeline. This step is crucial for the seamless functioning of the ray tracing process.\nstd::vector\u0026lt;ID3D12DescriptorHeap*\u0026gt; heaps = {m_resource_manager-\u0026gt;m_srvUavHeap.Get()}; m_device_manager-\u0026gt;GetCommandList()-\u0026gt;SetDescriptorHeaps(static_cast\u0026lt;UINT\u0026gt;(heaps.size()), heaps.data()); Utilizing the G-Buffer # An efficient approach in this pipeline is the use of the G-Buffer as an SRV (Shader Resource View) during the ray tracing pass. Given that no writing is performed on these resources at this stage, an UAV is not necessary.\nAccessing the G-Buffer in the ray generation shader is very convenient. The texture is declared as an array, with the offset from the first UAV in the descriptor heap serving as an index to access the desired texture.\nRWTexture2D\u0026lt;float4\u0026gt; uavTextures[] : register(u0); The data collected from the texture is then used for the computation of the shadows, reflections, ambient occlusion and direct lighting.\n// extracting the data from the shaders float3 hitLocation = uavTextures[1][launchIndex].rgb; float3 albedo = uavTextures[3][launchIndex].rgb; float3 norm = uavTextures[2][launchIndex].rgb; float roughness = uavTextures[4][launchIndex].g; float metallic = uavTextures[4][launchIndex].b; Ambient Occlusion # Ambient occlusion was the simplest feature to implement, yet it\u0026rsquo;s a feature with a great impact, adding depth and realism to the scene.\nGenerating Rays for Ambient Occlusion # The process begins with the generation of a random point on a hemisphere, oriented around the surface normal at each point of the geometry. 
The ray is traced in the direction of these generated points.\n//generate random direction in the hemisphere\nfloat3 dir = RandomInHemisphere(normal, float(pcg_hash(i * 7127 + randAo1)), float(pcg_hash(i * 20749 + randAo2)));\n// set the ray description\nRayDesc ray;\nray.Origin = hitLocation;\nray.Direction = normalize(dir);\nray.TMin = 0.01f;\nray.TMax = g_RayLength; No Hit Scenario: If a ray does not intersect any geometry, the point is unobstructed and receives full ambient light. In such cases, a counter is incremented by 1, representing complete exposure to ambient light.\nHit Scenario: When a ray intersects geometry, the hit shader returns the distance to the point of impact. This distance is used to calculate the amount of occlusion. The counter is incremented by the ratio between the returned distance and the ray\u0026rsquo;s maximum length.\n//calculate the contribution of every ray based on distance\nif (aoPayload.distance \u0026gt; 0) // the miss shader returns -1, the hit shader returns the actual distance\n{\n    float occlusionFactor = (aoPayload.distance / g_RayLength);\n    occlusion += pow(occlusionFactor, g_OcclusionPower);\n}\nelse\n    occlusion += 1; The counter is then divided by the number of samples, and the result is multiplied with the direct illumination radiance.\nfinal_color = radiance * shadow * ambient_occlusion + reflection + emissive; Firefly Reduction # I used a very crude algorithm that uses the brightness of the surrounding pixels. I only cap the brightness of the reflection, since that is the only effect that creates fireflies in my application.\nThis algorithm uses a user-input value as a threshold, keeping pixels that are not bright from being changed. 
I calculate the average color of the surrounding pixels and then use that value to bring the current pixel\u0026rsquo;s brightness down.\n//check the brightness of our current pixel\nif (dot(reflection, reflection) \u0026gt; firefly_reduction)\n{\n    //calculate the average color of the 8 surrounding pixels\n    float3 med = (uavTextures[0][launchIndex + uint2(0, 1)] + uavTextures[0][launchIndex + uint2(0, -1)] + uavTextures[0][launchIndex + uint2(1, 0)] + uavTextures[0][launchIndex + uint2(-1, 0)] + uavTextures[0][launchIndex + uint2(-1, -1)] + uavTextures[0][launchIndex + uint2(-1, 1)] + uavTextures[0][launchIndex + uint2(1, 1)] + uavTextures[0][launchIndex + uint2(1, -1)]) / 8;\n    //if the current pixel is brighter than the average, its color value is multiplied by the brightness of the surrounding pixels\n    if (dot(reflection, reflection) \u0026gt; dot(med, med))\n        reflection = reflection * dot(med, med);\n} Conclusions # The information presented here is minimal, but hopefully it sparks interest in this field. This project proved to be a great source of knowledge for me, and I thoroughly recommend giving it a try.\nBased on my experience, DX12 was a great API for this purpose. Admittedly, the learning stage was challenging. Yet, once mastered, DX12 proved to be an exceptionally powerful tool.\nThis project lays the groundwork for further exploration and the integration of more advanced features.\nFor those seeking to dive deeper into this field, here are some of the resources I used:\nThis helped me implement PBR: a series of tutorials on creating a DXR hybrid renderer\nNvidia tutorial series for DXR: Lays down the basics of using DXR\nRay Tracing Gems: A series of papers on ray tracing. 
Part VI talks about Hybrid approaches.\nMicrosoft documentation for DXR: Could prove useful if the Nvidia helpers create issues or if you don\u0026rsquo;t want to use them.\n","date":"4 December 2023","externalUrl":null,"permalink":"/articles/hybridrtarticle/","section":"Articles","summary":"A cool project I made","title":"Implementing a Hybrid Ray Tracer with DXR","type":"articles"},{"content":" Duration Platform Team size Language 8 weeks Windows Solo C++ Overview # This is a 2D Ray Tracer I implemented as my third project at BUAS in my first year. The renderer operates entirely on the CPU, making use of an OpenGL template to display the image. I then built a simple car game to showcase the capabilities of the renderer.\nGoals # This was my first contact with ray tracing, helping me grasp its fundamental concepts\nImproving C++ skills and gaining experience with CPU-based algorithm optimization techniques\nFeatures # Hard/Soft shadow support\nSupport for mirror surfaces\nRay intersection for lines and circles\nOptimized to support a large number of primitives/lights\nPoint/Spot lights\nA really cool car game that makes use of those features\n","date":"4 September 2023","externalUrl":null,"permalink":"/projects/2draytracer/","section":"Projects","summary":"2D Ray-Tracing renderer","title":"2D Ray Tracer","type":"projects"},{"content":" Overview # Role Duration Platform Team size Generalist 8 weeks Windows 3 Programmers, 5 Designers, 5 Artists Galactic Goo is a platform puzzle game made in UE5 in my first year. This was my first experience working in a multidisciplinary team.\nGoals # Experiencing work within a team and collaborating with multiple disciplines. 
Getting more experience with Unreal Engine\nLearning how to apply Agile/Scrum practices\nMy contribution: # My contribution on Galactic Goo 4 September 2023\u0026middot;424 words Links: # Galactic Goo Trailer\nGalactic Goo Itch Page\n","date":"4 September 2023","externalUrl":null,"permalink":"/projects/galacticgoo/","section":"Projects","summary":"Puzzle-platform game made with UE5","title":"Galactic Goo ","type":"projects"},{"content":" Galactic Goo # Many of my contributions to Galactic Goo directly impact the gameplay and the aesthetics of the game. Below is a brief description of my work.\nCamera # The game was heavily inspired by Mercury Meltdown, which we used as an example for the camera and player movement.\nThe example below showcases a comparison between Mercury Meltdown and our previous camera movement, which did not match the vision.\nI eventually managed to replicate the correct camera rotation by making the camera rotate around the camera arm axis rather than around its own.\nI implemented the movement by applying a gravitational force in the desired direction to the player actor, which uses a sphere for collision.\nVisual Effects # Slime Simulation # Once the project theme was decided, I took the initiative to experiment with Niagara Fluids.\nI created multiple iterations with different behaviours, making it easier to decide on the desired look.\nSimulation customization # The game gives the player the option to change the slime color. I made sure that the simulation color is accessible and linked it to the UI.\nMenu Animations # As the project deadline was approaching, I took on some aesthetic tasks, like animating the elements of the main menu to give it more life.\nSmoke # The final rocket animation needed some smoke, so I used Niagara to add that.\nGame Mechanics # The game has multiple mechanics that required some visual feedback.\nFans # Throughout the levels, fans can be found, which push the slime in one direction. 
I used a force module in Niagara to apply a light push to the particles in the required direction, making it much more visually satisfying.\nLosing Mass # As the slime travels around, it leaves trails behind, causing it to lose mass. This is visually conveyed by shrinking the size of the slime. Likewise, when the slime regains its mass, it regains its size as well.\nThis is done by having a sphere spawner that continuously creates new particles, while a smaller sphere at the center kills particles. The rate at which these two operations are performed determines the size of the slime, and I had to carefully determine which ratios would produce the best results.\nSplitting Mechanic # The splitting mechanic allows the player to leave smaller slime blobs behind, leading to some mass loss. Those smaller blobs have differently tuned particle spawn/kill ratios, ensuring that they are always visible, with consistent behavior and minimal performance impact.\n","date":"4 September 2023","externalUrl":null,"permalink":"/rabbithole/galacticgoowork/","section":"Rabbitholes","summary":"","title":"My contribution on Galactic Goo ","type":"rabbithole"},{"content":" Duration Platform Team size Language 8 weeks Windows Solo C++ Overview # My very first project at BUAS in Year 1. 
A 2D RTS tank game, rendered on the CPU and making use of an OpenGL template to display the image.\nGoals # Getting exposed to some basic concepts involved in game development, like AI implementation, sprite rendering and collision detection\nFeatures # The player can select and control the actions of the units\nUnit grouping\nEnemy AI and individual tank AI\nUnit local collision\nProcedural map generation, with rivers, valleys and forests ","date":"4 September 2022","externalUrl":null,"permalink":"/projects/2drts/","section":"Projects","summary":"2D RTS tank game","title":"2D RTS","type":"projects"},{"content":" Duration Platform Team size Language Tools 8 weeks Linux (Raspberry Pi) Solo C++ Visual Studio Overview # This was my second project at BUAS. We took the first project up a notch, turning it into a 3D game running on a Raspberry Pi and making use of a physics engine.\nGoals # Getting some hands-on experience with OpenGL ES in an embedded environment\nFirst interaction with a physics engine\nLearning how to use ImGui, preparing me for future projects\nFeatures # Running on a Raspberry Pi, rendered with OpenGL ES\nUsing the Bullet Physics library\nProcedurally generated map\nCombat AI controlling physical objects\nMultiple game modes\n","date":"4 September 2022","externalUrl":null,"permalink":"/projects/3drts/","section":"Projects","summary":"3D RTS tank game on a Raspberry Pi","title":"3D RTS","type":"projects"},{"content":" I am mostly confused. But I also enjoy coding, painting, playing guitar and making music. I treat everything I do as a form of art. Did I mention I\u0026rsquo;m a philosopher? Some other potentially useful information about me # My name is Bogdan Deparateanu. I am a Games student in Y4 at BUAS. I have specialized as a graphics programmer, but I am a flexible learner, open to trying out new things. I also have some experience working in multidisciplinary teams and developing games in Unreal Engine 5. 
Languages Graphics APIs Tools \u0026 Engines Debugging \u0026 Profiling ","externalUrl":"","permalink":"/about/","section":"","summary":"","title":"About Me","type":"page"},{"content":"","externalUrl":null,"permalink":"/categories/about-me/","section":"Categories","summary":"","title":"About Me","type":"categories"},{"content":"","externalUrl":null,"permalink":"/tags/about-me/","section":"Tags","summary":"","title":"About Me","type":"tags"},{"content":"","externalUrl":null,"permalink":"/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"","externalUrl":null,"permalink":"/series/","section":"Series","summary":"","title":"Series","type":"series"}]