Luke Harris XR

Treat this site as a collection of technical developments and workflows in digital and technical art.

genAI - archViz



I’ve started experimenting with Stable Diffusion and LoRA models trained specifically on architectural renderings.

Using just the IFC CAD model, I was able to place some people and cameras into the 3D scene and quickly render out a depth and line pass, forgoing the need to add lighting, materials, shaders and post-processing.

These passes were then fed into ControlNet with text prompts to output the final renders. The images speak for themselves: while not perfect, they communicate the exterior site effectively without too much uncanny valley.
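For anyone curious about the mechanics, below is a minimal sketch of a depth-conditioned ControlNet pass using the Hugging Face diffusers library. This isn’t the exact setup used for these renders; the checkpoints, prompt and file names are placeholders.

# Minimal sketch: depth pass -> ControlNet -> render (placeholder names)
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# Load a depth-conditioned ControlNet and a base SD checkpoint
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The depth pass rendered straight from the IFC model: no lighting or materials
depth_pass = Image.open("exterior_cam01_depth.png")

image = pipe(
    prompt="exterior architectural photo, overcast daylight, photorealistic",
    negative_prompt="cartoon, blurry, distorted geometry",
    image=depth_pass,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("exterior_cam01_sd.png")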



Observations:

  • It’s still difficult to control material finishes without a lot of masking and manual prompting. Next time I’ll render out an object mask and see if I can use it to get more consistency with finishes (see the sketch after this list)

  • All rendering was done in a day, compared to roughly a week of work doing it manually, “the old-fashioned way”
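A rough sketch of that object-mask idea, assuming a diffusers inpainting pipeline with the mask rendered straight from the 3D scene. The checkpoint, prompt and file names are placeholders, not the actual workflow.

# Rough sketch: use an object mask from the 3D scene to re-prompt one material region
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

base_render = Image.open("exterior_cam01_sd.png").convert("RGB")
facade_mask = Image.open("exterior_cam01_facade_mask.png").convert("L")  # white = repaint

result = pipe(
    prompt="weathered corten steel cladding, matte finish",
    image=base_render,
    mask_image=facade_mask,
    num_inference_steps=30,
).images[0]

result.save("exterior_cam01_facade_fix.png")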


Note: the artwork still needed to be rendered with a conventional rendering/lighting system (FStorm), as surface finish and placement had to be precise.




AI mocap / Edutech AR


While at TAFE NSW, I researched AI motion capture tools that work from video input (move.ai / plask.ai). Although the services were limited to 10-second clips, I managed to blend them all together using procedural animation tools in Houdini.
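As a simplified illustration of the blending step (outside of Houdini), here is a crossfade of overlapping joint-rotation curves between two consecutive clips. The array shapes, frame rate and overlap length are assumptions; in practice rotations would be blended as quaternions rather than raw Euler angles, which Houdini's KineFX/CHOPs tools handle.

# Illustrative only: crossfade two mocap clips over an overlap window
import numpy as np

def blend_clips(clip_a: np.ndarray, clip_b: np.ndarray, overlap: int) -> np.ndarray:
    """Crossfade clip_b onto the tail of clip_a.

    Both clips are (frames, joints, 3) arrays of joint rotations; the last
    `overlap` frames of clip_a are blended with the first `overlap` frames
    of clip_b using a linear weight ramp.
    """
    w = np.linspace(0.0, 1.0, overlap)[:, None, None]  # per-frame blend weights
    blended = (1.0 - w) * clip_a[-overlap:] + w * clip_b[:overlap]
    return np.concatenate([clip_a[:-overlap], blended, clip_b[overlap:]], axis=0)

# e.g. two 10-second clips at 30 fps, blended over a 15-frame overlap
clip_a = np.random.rand(300, 24, 3)
clip_b = np.random.rand(300, 24, 3)
full_take = blend_clips(clip_a, clip_b, overlap=15)
print(full_take.shape)  # (585, 24, 3)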

Stats

  •  1.2 GB BIM model (19 embedded working files)
  •  700k objects condensed down into 75 meshes with material data (see the sketch after this list)
  •  32 million polys down to 4 million (unfortunately the fire service piping didn’t make the import due to too much detail)
  •  All processed on a laptop with 32 GB of RAM (unheard of 5 years ago)
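A hedged sketch of the condensing step implied by the stats above: group the exported geometry by material and merge each group into a single mesh. The trimesh library and the file path here are illustrative only, not the actual pipeline.

# Illustrative sketch: merge exported BIM geometry into one mesh per material
from collections import defaultdict
import trimesh

scene = trimesh.load("campus_model.glb")  # placeholder export of the BIM model

groups = defaultdict(list)
for name, geom in scene.geometry.items():
    # Bucket each object by its material name (fall back to a default bucket)
    material = getattr(geom.visual, "material", None)
    key = getattr(material, "name", None) or "default"
    groups[key].append(geom)

merged = {key: trimesh.util.concatenate(meshes) for key, meshes in groups.items()}
print(f"{len(scene.geometry)} objects -> {len(merged)} merged meshes")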


genAI ArchViz



I've been upskilling in Stable Diffusion lately, specifically using screenshots of 3D scenes as image prompts for generating “realistic photos”.

While there are still some caveats, the results are fantastic for the minimal time spent. These examples took between 5 and 15 minutes each once I was happy with my prompts and checkpoint models. No inpainting.
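A minimal sketch of the screenshot-to-photo step, assuming the diffusers img2img pipeline; the checkpoint, prompt and denoising strength are illustrative rather than the exact settings used.

# Minimal sketch: viewport screenshot as an image prompt (img2img)
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

screenshot = Image.open("viewport_screenshot.png").convert("RGB")

result = pipe(
    prompt="realistic architectural photograph, golden hour, 35mm",
    image=screenshot,
    strength=0.55,          # how far the result may drift from the screenshot
    guidance_scale=7.0,
    num_inference_steps=30,
).images[0]

result.save("viewport_photo.png")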

In the future, rendering will be as much about combining 3D models, prompt refinement, and materiality as it is about simply pressing a button.

Have I wasted the last 15 years learning material shaders, lighting systems, and offline rendering?

More tests to come...