In the creator’s own words: “Machinic Doodles is a live, interactive drawing installation that facilitates collaboration between a human and a robot named NORAA, a machine that is learning how to draw. It explores how we communicate ideas through the strokes of a drawing, and how might a machine also be taught to draw through learning, instead of via pre-programmed, explicit instruction.”
NORAA reads the drawn strokes and classifies them using Google Creative Lab’s QuickDraw. If the predicted category matches an existing Magenta sketch-rnn model, that model is used; otherwise a semantically similar model is selected, since QuickDraw can classify 345 categories whereas sketch-rnn has 130 models (including custom-trained ones). Once a sketch-rnn model is selected, NORAA draws the prediction, incorporating the user’s input strokes.
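The selection step described above can be sketched roughly as follows. This is a minimal illustration, not the project’s actual code: the category sets, the `SIMILAR` fallback table, and the function name are all hypothetical stand-ins.

```python
# Hypothetical sketch of the model-selection step. The category lists are
# tiny illustrative subsets of QuickDraw's 345 and sketch-rnn's 130.
QUICKDRAW_CATEGORIES = {"cat", "dog", "lion", "car", "bus"}
SKETCH_RNN_MODELS = {"cat", "dog", "bus"}

# Assumed hand-curated table mapping unmatched QuickDraw labels to a
# semantically similar sketch-rnn model.
SIMILAR = {"lion": "cat", "car": "bus"}

def select_model(label: str) -> str:
    """Return the sketch-rnn model to draw with for a classified label."""
    if label in SKETCH_RNN_MODELS:
        return label                  # exact match: same category exists
    return SIMILAR.get(label, "cat")  # otherwise fall back to a similar model

print(select_model("dog"))   # exact match
print(select_model("lion"))  # no "lion" model, falls back to "cat"
```

In practice the similarity mapping would need to cover every QuickDraw category that lacks a sketch-rnn counterpart, whether curated by hand or derived from word embeddings.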
My role covered overall system design, supporting electronics (wiring, soldering, firmware programming), Dynamixel smart-servo communication, QuickDraw classification (based on the official TensorFlow RNN tutorial), and scripts to batch-train multiple sketch-rnn models.
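A batch-training script of the kind mentioned above might look something like this. It simply builds one training command per category for Magenta’s sketch-rnn trainer; the flag names (`--data_dir`, `--log_root`, `--hparams`) follow Magenta’s `sketch_rnn_train` entry point but should be treated as assumptions and checked against the installed version, and the paths are placeholders.

```python
# Hypothetical helper that generates one sketch-rnn training command per
# QuickDraw category, so many models can be trained in sequence.
import shlex

def build_commands(categories, data_dir="datasets", log_root="checkpoints"):
    """Return a list of shell command strings, one per category."""
    cmds = []
    for cat in categories:
        cmds.append(
            "sketch_rnn_train "
            f"--data_dir={shlex.quote(data_dir)} "
            f"--log_root={shlex.quote(log_root + '/' + cat)} "
            f"--hparams=data_set=[{cat}.npz]"
        )
    return cmds

for cmd in build_commands(["cat", "dog"]):
    print(cmd)  # pipe into a shell, or run each with subprocess.run
```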
The project was exhibited at the V&A as part of the London Design Festival, and was featured on creativeapplications.net as well as in the NeurIPS conference art gallery.