Confused Object
Random found objects taped together and plastered over. Results scanned and modelled with phone camera and photogrammetry software. Facets and vectors of 3D models reduced. Simplified models composed for series of 2D prints.
Concrete tower and sound object. Hysteria listens to internet audio and parses it with the open-source CMU Sphinx2 speech recognition engine. It then plays those sounds it has identified as laughter.
Transcript: Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha. Ha.
One camera points up. Another points down. A third tracks the moon. Video installation. Photographs.
Random 3D model downloaded and reduced to a single vector. That reduction is undone one vector at a time until a form begins to take shape. A wireframe is then extracted from the model, and passed to a Grasshopper definition that automatically calculates and models the nodes.
The nodes are fabricated with a 3D printer. The vectors are measured and aluminium pipe is cut to those dimensions. The pipes and their respective nodes are fitted together.
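The pipe-cutting step could be sketched as follows, assuming the wireframe arrives as a list of edges, each a pair of 3D points in millimetres. The function names and the node-radius allowance are hypothetical; the actual node modelling happens in the Grasshopper definition.

```python
import math

def edge_length(a, b):
    # Straight-line distance between two 3D points (millimetres assumed).
    return math.dist(a, b)

def cut_list(edges, node_radius=20.0):
    # Pipe length per edge, shortened at both ends to allow for the
    # printed nodes. node_radius is an illustrative assumption.
    return [round(edge_length(a, b) - 2 * node_radius, 1) for a, b in edges]

edges = [((0, 0, 0), (300, 0, 0)), ((300, 0, 0), (300, 400, 0))]
print(cut_list(edges))  # [260.0, 360.0]
```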
I never know what the model is or where it comes from.
2014 – ongoing. Series of photograms. 325mm x 325mm.
Plots of the darkest points in the night sky. The computer takes photographs of the night sky and examines each pixel for its luminance level. It then joins the darkest points to make constellations. The results are plotted with a green laser on black and white photographic paper in the darkroom.
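The darkest-point search could be sketched as below, assuming the night-sky photograph is an 8-bit grayscale image held as a nested list. Function names and the joining rule (consecutive points in darkness order) are assumptions.

```python
def darkest_points(image, count=10):
    # Collect (luminance, x, y) for every pixel, then keep the darkest.
    pixels = [(value, x, y)
              for y, row in enumerate(image)
              for x, value in enumerate(row)]
    pixels.sort()  # lowest luminance first
    return [(x, y) for value, x, y in pixels[:count]]

def constellation(image, count=10):
    # Join the darkest points into line segments, in order of darkness.
    points = darkest_points(image, count)
    return list(zip(points, points[1:]))

sky = [[30, 12, 200],
       [255, 3, 90],
       [180, 7, 60]]
print(darkest_points(sky, 3))  # coordinates of the three darkest pixels
```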
Portraits using Metropolis code to rearrange pixels according to their HSL (Hue, Saturation and Luminance) levels.
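One possible reading of a Metropolis-style rearrangement is sketched below: propose random pixel swaps and accept them when they move the image toward an ordering by lightness, occasionally accepting a worsening swap with Metropolis probability. The energy function, temperature, and acceptance rule here are assumptions, not the artist's actual code.

```python
import colorsys
import math
import random

def lightness(rgb):
    # HSL lightness of an 8-bit RGB pixel, in [0, 1].
    r, g, b = (c / 255.0 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return l

def metropolis_sort(pixels, steps=20000, temperature=0.05):
    # Rearrange a flat list of RGB pixels toward ascending lightness.
    pixels = list(pixels)
    for _ in range(steps):
        i, j = sorted(random.sample(range(len(pixels)), 2))
        # delta > 0 means the pair is out of order; swapping improves it.
        delta = lightness(pixels[i]) - lightness(pixels[j])
        if delta > 0 or random.random() < math.exp(delta / temperature):
            pixels[i], pixels[j] = pixels[j], pixels[i]
    return pixels
```

At high temperature the image stays close to noise; as the temperature drops the rearrangement approaches a strict lightness sort.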
The Mission takes images from a live video feed of the sky through a 6-inch refractor telescope and uses them as a score for a seven-channel theremin sound composition.
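A hypothetical sketch of reading the sky image as a score: split each frame into seven vertical strips and map each strip's mean brightness to a pitch for one theremin channel. The strip layout and the brightness-to-pitch mapping are assumptions, not the work's actual scoring rule.

```python
def channel_pitches(image, channels=7, low=100.0, high=1000.0):
    # image: 2D list of 0-255 grayscale values.
    # Returns one pitch (Hz) per channel, scaled by strip brightness.
    width = len(image[0])
    strip = width // channels
    pitches = []
    for c in range(channels):
        values = [row[x] for row in image
                  for x in range(c * strip, (c + 1) * strip)]
        brightness = sum(values) / len(values) / 255.0
        pitches.append(low + brightness * (high - low))
    return pitches
```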
Internet work that collects the latest images from all-sky cameras from across the world and composites them into a single view. Updated every 10 minutes.
Computer-controlled bank of signal generators that speaks by modulating sine waves to match the frequencies of the human voice. Each voice consists of 4 generators, each of which produces waves within a different band of frequencies. Signal sings out a text of its own composition.
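The four-generator voice could be sketched as additive synthesis: one sine per band, summed per sample. The band edges, frequencies, and amplitudes below are illustrative assumptions, not taken from the work.

```python
import math

SAMPLE_RATE = 44100
BANDS = [(80, 300), (300, 1000), (1000, 2500), (2500, 5000)]  # Hz, assumed

def voice_sample(t, frequencies, amplitudes):
    # Sum the four sine generators at time t (seconds).
    return sum(a * math.sin(2 * math.pi * f * t)
               for f, a in zip(frequencies, amplitudes))

def render(frequencies, amplitudes, duration=0.01):
    n = int(SAMPLE_RATE * duration)
    return [voice_sample(i / SAMPLE_RATE, frequencies, amplitudes)
            for i in range(n)]

# One generator per band: a rough, vowel-like spectrum (illustrative).
freqs = [110, 600, 1700, 3200]
amps = [1.0, 0.5, 0.25, 0.1]
samples = render(freqs, amps)
```

Speech would then come from sliding each generator's frequency and amplitude over time within its band, rather than holding them fixed as here.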
Variety of work that began with an attempt to look out into the universe in all directions at once.
Sky-cam images from approximate antipodes stitched together into a binocular video that redirects its focus every 7 days. A day is mapped to approximately a minute of video. PE: 7 Days is an Exeter Phoenix Digital Art Commission.
2009 – ongoing. Series of digital inkjet prints. 1000mm x 1000mm.
Photographs of urban architecture rearranged according to the HSL – Hue, Saturation and Lightness – values of the individual pixels.
The Atrocity Machine searches through and speaks DNA sequences. Within the range of 4 bases to 5 billion bases, the machine accesses and spells out the organic structure of any living creature.
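The spelling step could be sketched as expanding each base symbol into its spoken name. The base-to-name mapping is standard biochemistry; the sequence source and the speech output are outside this sketch.

```python
# Standard names of the four DNA bases.
BASE_NAMES = {"A": "adenine", "C": "cytosine", "G": "guanine", "T": "thymine"}

def spell(sequence):
    # Expand a DNA string into the words a voice would speak.
    return " ".join(BASE_NAMES[base] for base in sequence.upper())

print(spell("gattaca"))
# guanine adenine thymine thymine adenine cytosine adenine
```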