Jessica In invited me to collaborate on her project: NORAA.
In the creator’s own words:
“Machinic Doodles is a live, interactive drawing installation that facilitates collaboration between a human and a robot named NORAA, a machine that is learning how to draw. It explores how we communicate ideas through the strokes of a drawing, and how might a machine also be taught to draw through learning, instead of via pre-programmed, explicit instruction.”
NORAA reads user-drawn strokes (via robot kinematics) and uses Google Creative Labs NYC's QuickDraw Dataset to classify them. If the predicted category matches an existing magenta sketch-rnn model, that model is used; otherwise a semantically similar model is selected, since QuickDraw classifies 345 categories whereas sketch-rnn covers 130 (including custom-trained models). Once a sketch-rnn model is selected, NORAA draws the prediction, incorporating the user's input.
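The selection step can be pictured as a lookup with a fallback. Below is a minimal, hypothetical Python sketch of that logic; the names (`AVAILABLE_MODELS`, `FALLBACKS`, `select_model`) and the hand-curated similarity map are illustrative assumptions, not the project's actual code.

```python
from typing import Optional

# Hypothetical model-selection step: QuickDraw predicts one of 345 labels,
# but only ~130 sketch-rnn models exist, so unmatched labels fall back to
# a semantically similar category. All names and values are illustrative.

AVAILABLE_MODELS = {"cat", "bicycle", "flamingo", "owl"}  # sketch-rnn models on disk

# Assumption: similarity is a curated mapping from QuickDraw-only
# categories to the nearest available sketch-rnn model.
FALLBACKS = {
    "tiger": "cat",
    "motorbike": "bicycle",
    "swan": "flamingo",
}

def select_model(predicted_category: str) -> Optional[str]:
    """Return the sketch-rnn model that will finish the user's drawing."""
    if predicted_category in AVAILABLE_MODELS:
        return predicted_category                # exact match
    return FALLBACKS.get(predicted_category)     # similar category, or None

print(select_model("tiger"))  # -> "cat"
```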
The project was exhibited at the V&A as part of the London Digital Festival, featured on creativeapplications.net and in the NeurIPS conference art gallery, and is installed at the Ars Electronica Center in Linz. NORAA was also part of the SIGGRAPH Asia Art Gallery.
My role covered the overall system design, supporting electronics (wiring, soldering, firmware programming), Dynamixel smart servo communication, QuickDraw classification (based on the official TensorFlow RNN tutorial), and scripts to batch-train multiple sketch-rnn models (sketched below).
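As a rough idea of what such a batch-training script can look like, here is a minimal sketch that loops over QuickDraw categories and invokes magenta's sketch_rnn_train entry point once per category; the category list, paths, and num_steps value are assumptions for illustration, not the project's actual configuration.

```python
import subprocess

# Hypothetical batch-training driver for sketch-rnn. Assumes magenta's
# sketch_rnn_train module and its --data_dir / --log_root / --hparams
# flags; categories, paths, and num_steps are illustrative only.
CATEGORIES = ["cat", "bicycle", "flamingo"]  # one <category>.npz per entry

for category in CATEGORIES:
    subprocess.run(
        [
            "python", "-m", "magenta.models.sketch_rnn.sketch_rnn_train",
            "--data_dir=datasets",                        # holds e.g. cat.npz
            f"--log_root=models/{category}",              # one checkpoint dir per model
            f"--hparams=data_set=[{category}.npz],num_steps=50000",
        ],
        check=True,  # abort the batch if a training run fails
    )
```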
Custom parts were carefully machined by Sam Price.
Precision servo motion consulting by James McVay.
I would like to thank Nick Fox-Gieg, Jonas Jongejan and Kyle McDonald for generously offering their time to meet and provide wise advice on QuickDraw classification and SketchRNN model training, and of course David Ha for his support on GitHub.