Publications

Environment-Scale Fabrication: Replicating Outdoor Climbing Experiences

CHI 2017 (Paper)

Emily Whiting, Nada Ouf, Liane Makatura, Christos Mousas, Zhenyu Shu & Ladislav Kavan

 

Other Research


In Progress: Honors Research Thesis

2016-2017

An exciting exercise in computational fabrication research: currently under wraps (or sawdust, as the case may be), but many more details are coming soon!

A snapshot of our final UI, in the midst of a user's mobile creation.

Balancing Act: An Interactive Tool for Fabricating Calder-Style Hanging Mobiles

Spring 2016  |  Prof. Emily Whiting  |  CS89: Computational Fabrication

Project Duration: 4 weeks

Teammates: Catherine Most, Gloria Li

This project posed several interesting challenges, as we sought a program that enabled intuitive yet unrestricted creative control over custom mobile design while also guaranteeing a fabricable result that matches the specified configuration. I worked with two partners and was primarily responsible for proposing our novel design paradigm. I also implemented the underlying data structures, physical calculations, and interactive elements of our prototype, and contributed to the final demos and write-up for presentation.
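To give a flavor of the physics involved, here is an illustrative sketch (not our prototype's actual code): a hanging mobile can be modeled as a binary tree in which each internal node is a rod carrying two sub-mobiles, and static equilibrium determines where along each rod the suspension string must attach.

```python
# Illustrative sketch (not the prototype's actual implementation): computing
# the pivot point that balances each rod of a Calder-style hanging mobile.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    mass: float = 0.0                # mass of a leaf ornament
    rod_length: float = 0.0          # length of the rod at an internal node
    left: Optional["Node"] = None    # sub-mobile hanging from the left end
    right: Optional["Node"] = None   # sub-mobile hanging from the right end

def total_mass(node: Node) -> float:
    """Total mass of a subtree (rod and string mass ignored for simplicity)."""
    if node.left is None and node.right is None:
        return node.mass
    return total_mass(node.left) + total_mass(node.right)

def balance_pivot(node: Node) -> float:
    """Distance from the rod's left end to the balancing suspension point.

    Static equilibrium requires equal torques about the pivot:
        m_left * d_left = m_right * d_right,  with  d_left + d_right = L.
    """
    m_left, m_right = total_mass(node.left), total_mass(node.right)
    return node.rod_length * m_right / (m_left + m_right)

# Example: a 10 g and a 30 g ornament on a 20 cm rod balance 15 cm from the
# lighter side, since 10 * 15 == 30 * 5.
mobile = Node(rod_length=20.0, left=Node(mass=10.0), right=Node(mass=30.0))
print(balance_pivot(mobile))  # -> 15.0
```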

Demo  |  Code  |  Paper

 
t-SNE visualization depicting CGI/photo separation achieved by the fine-tuned features of AlexNet, after being trained on our nearest-neighbors training set (as explained in the paper).

Deep Forensics: Using CNNs to Differentiate CGI from Photographic Images

Winter 2016  |  Prof. Lorenzo Torresani  |  CS89: Visual Recognition

Project Duration: 5 weeks

Teammate: Shruti Agarwal

For our final project, my partner and I explored whether deep convolutional neural networks (CNNs) could learn a representation capable of discriminating between photographs and computer-generated images directly from the image pixels themselves. We created a custom dataset for this purpose, which we were encouraged to polish for public release to the research community. After obtaining results that averaged 70% accuracy over several strategically devised (and intentionally challenging) conditions, we were convinced that CNNs can be a powerful and effective tool for this task. We are actively refining our experiments and approaches in the hope of obtaining publishable results.
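As a rough illustration of the general approach (a minimal PyTorch sketch, not our actual pipeline or dataset, which used the nearest-neighbors construction described in the paper): an ImageNet-pretrained AlexNet can be fine-tuned as a two-way photo-vs-CGI classifier by replacing its final layer. The cgi_vs_photo directory name below is a hypothetical placeholder.

```python
# Minimal sketch: fine-tuning an ImageNet-pretrained AlexNet as a binary
# CGI-vs-photo classifier. The path "cgi_vs_photo/train" is a hypothetical
# placeholder, not the project's released dataset.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard AlexNet preprocessing: resize, center-crop, ImageNet normalization.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Expects one subfolder per class (e.g. cgi/ and photo/).
train_set = datasets.ImageFolder("cgi_vs_photo/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)   # replace the 1000-way head with a 2-way head

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:              # one epoch shown; repeat as needed
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Swapping out (or unfreezing) only the later layers in this way is a standard transfer-learning recipe; the resulting features can then be visualized with t-SNE, as in the figure above.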

Paper