This is a series of drawings that were made with photogrammetry software and a Grasshopper script.
First, I take a video of a space as I walk through it. The video is usually 1 or 2 minutes long.
I bring the video into Photoshop and convert it into a sequence of still images using the Export > Render Video > Photoshop Image Sequence tool.
I use COLMAP, a Structure-from-Motion pipeline, to build a point cloud of the space from the image sequence.
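Run from the command line, the COLMAP stage looks roughly like this (the `project/` paths are placeholders; the same steps are also available through COLMAP's GUI):

```shell
# Detect features in every frame of the image sequence.
colmap feature_extractor \
    --database_path project/database.db \
    --image_path project/frames

# Match features between frames. For video walkthroughs,
# colmap sequential_matcher is often a faster alternative.
colmap exhaustive_matcher \
    --database_path project/database.db

# Incrementally reconstruct camera poses and a sparse point cloud.
colmap mapper \
    --database_path project/database.db \
    --image_path project/frames \
    --output_path project/sparse

# Convert the binary model to plain-text files
# (cameras.txt, images.txt, points3D.txt) for downstream scripts.
colmap model_converter \
    --input_path project/sparse/0 \
    --output_path project/sparse/0 \
    --output_type TXT
```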
I use a Grasshopper script to import the point cloud from COLMAP's file formats into Rhino.
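The import step comes down to reading COLMAP's text export. Here is a minimal sketch in plain Python of parsing a `points3D.txt` file, assuming COLMAP's TXT model format, where each data line holds a point ID, position, color, reprojection error, and track:

```python
def parse_points3d(lines):
    """Parse COLMAP points3D.txt lines into positions and colors.

    Each non-comment line has the form:
    POINT3D_ID X Y Z R G B ERROR TRACK[...]
    """
    points, colors = [], []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and header comments
        fields = line.split()
        x, y, z = map(float, fields[1:4])
        r, g, b = map(int, fields[4:7])
        points.append((x, y, z))
        colors.append((r, g, b))
    return points, colors
```

Inside Grasshopper the same parsing would run in a scripting component, with the tuples handed off to Rhino as point objects.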
Another script connects the points in the point cloud based on their proximity to each other. I export images of these meshes at varying densities back into Photoshop and blend them.
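The proximity rule can be sketched as a simple distance threshold: connect every pair of points closer than some cutoff. A minimal pure-Python version (the actual script runs in Grasshopper on Rhino geometry; `max_dist` here is a hypothetical parameter, not the script's real name for it):

```python
import math

def proximity_edges(points, max_dist):
    """Return index pairs (i, j) of points closer than max_dist.

    Naive O(n^2) pass over all pairs; fine for sparse clouds,
    while a spatial index (e.g. a k-d tree) would suit dense ones.
    """
    edges = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= max_dist:
                edges.append((i, j))
    return edges
```

Raising the cutoff produces a denser tangle of connections, lowering it a sparser one, which is what makes exporting the meshes at varying densities and blending them possible.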
Sometimes I keep the real colors of the scene; other times I exaggerate certain color families.
The results are usually unpredictable and vary with what the camera happened to pick up that day. The light falling on objects and the direction of the camera both influence the point cloud that gets constructed. The whole process usually takes less than an hour per image, making each one an interesting snapshot of what a space was like at a particular time.
The resulting meshes can be thought of as what the camera and computer “see” when they try to make sense of the space. Computer vision picks up on the most visually dense and recognizable pixels in the image, rather than uniform planes and fields of color. That’s why the images tend to be made of edges and other high-contrast or detailed objects like street lights, foliage, and ornamental facades.