Building a winner at TechCrunch Disrupt

Disrupt is a 20-hour hackathon that took place this September in San Francisco. This post describes an unusual hardware/software project and how it managed to win a handful of prizes.

Inside Disrupt

The dream

For the past year, I'd been doing a lot of high-altitude balloon projects, even sending a potato to 100,000 ft via Kickstarter. I was looking for a way to parlay high-altitude imaging into social good, with an idea on the backburner for balloons to monitor crop health and water in infrared.

Near-infrared imaging can spot plants stressed by irrigation problems and other causes, which is particularly topical here in California due to the drought. Healthy, well-watered, photosynthesizing plants strongly scatter solar radiation in the near-infrared.

After some debate, I convinced a few friends it was worth building. We called it Harvest, a toolkit that helps farmers understand plant health and spot water waste using infrared aerial photography.

In less than 20 hours, we built:

  1. a cheap infrared imaging device,

  2. processing software that runs NDVI analysis on the infrared images to indicate whether plants are healthy, and

  3. an interface to tie it all together.

Harvest NDVI

Harvest's Normalized Difference Vegetation Index (NDVI) view.


The project sounds complex but was very doable once broken into parts. The hardest constraint was that we needed to finish the hardware and collect images within the first ~6 hours, before sundown.

I had ordered a balloon, rope, etc. beforehand, as well as a cheap Canon point-and-shoot from eBay and a replacement "infrablue" filter, which lets near-infrared light through to the sensor, where the camera records it in the red channel. The total cost was $12 for the used camera, $10 for the filter, $10 for a small weather balloon (on sale), and $10 for 500 ft of rope, or about $42 altogether.

It was a pain to find helium in SF, but we managed to get 55 cu ft, more than enough (thanks, SF Party).

I flashed the camera with the Canon Hack Development Kit (CHDK) firmware and put a Lua script on it that adjusted the focus and flash and, most importantly, took a picture every 10 seconds.

The field test

We went to a nearby park and got to work. Though it wasn't a farm, we figured there'd be enough grass/trees/pavement to show variation in watering and plant happiness.

Everything was secured with rubber bands and tape and we filled up the balloon.

It was a 5ft, 150g weather balloon.

The camera was just put in a cardboard box with a hole.

Serious cardboard cutting.

Tying off a weather balloon is always pretty scary. We managed not to let everything fly away by accident.

After being secured to a rope, the balloon was tethered to the ground. The payload flew smoothly, balanced by the downward-facing camera.

Field test balloon in flight

Field test in progress over Dogpatch, SF.

We didn't fly it to the full 500 feet because the balloon was underinflated and we were worried about power lines. It would've been too annoying to undo the seal and fill up the balloon more.

The software

Now that we had a bunch of near-infrared images of a nearby park, we had to process them. NDVI is a well-known technique; for every pixel:

NDVI = (IR - R) / (IR + R)
IR = pixel values from the infrared band
R = pixel values from the red band

PublicLab is an excellent resource for DIY-NDVI projects and provided a lot of the knowledge and inspiration necessary to complete this. They have code, pre-made kits, and a great community around aerial observation and data collection.

I used the Python Imaging Library (PIL) to do this, scaled the coloring to best fit our field test data, and built a pipeline that converted all images.
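As a rough sketch of that pipeline (not the exact hackathon code; the function names, the red-to-green color ramp, and the red-as-NIR/blue-as-visible channel choice are assumptions in the PublicLab infrablue style), the per-pixel pass looks like:

```python
import numpy as np
from PIL import Image

def ndvi(ir, vis):
    """Per-pixel NDVI = (IR - VIS) / (IR + VIS), 0 where the sum is 0."""
    ir = np.asarray(ir, dtype=np.float64)
    vis = np.asarray(vis, dtype=np.float64)
    denom = ir + vis
    return np.where(denom > 0, (ir - vis) / np.where(denom > 0, denom, 1), 0.0)

def colorize(path, out_path):
    """Read an infrablue photo and write a red-to-green NDVI rendering."""
    arr = np.asarray(Image.open(path).convert("RGB"))
    # With the infrablue filter, the red channel records near-infrared;
    # the blue channel stands in for the visible band (PublicLab convention).
    v = ndvi(arr[..., 0], arr[..., 2])
    g = ((v + 1) / 2 * 255).astype(np.uint8)  # scale [-1, 1] -> [0, 255]
    rgb = np.stack([255 - g, g, np.zeros_like(g)], axis=-1)
    Image.fromarray(rgb).save(out_path)
```

Batch conversion is then just a loop over the flight's photos, calling `colorize` on each.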

The frontend then displayed these images nicely:

Infrared and processed NDVI imagery.

Our frontend whiz also did this awesome slider view.

And we incorporated satellite imagery from LandSat.

The pitch

It helps to start your pitch early, practice pitching to strangers, and build a good landing page. These were all done by ~10pm the first day.

These needs were compounded by the fact that Disrupt is only 20 hours and everything was judged solely on 1-minute presentations with no Q&A. This was a bit surprising; I think science-fair-style judging, like at LAUNCH or YC Hacks, does a better job of fully evaluating a hack.

I worried the balloon was too much of a gimmick. It was just a demo; the meat of the project is not balloon-related at all and I wanted people to realize that. Our device could go on an airplane, drone, or even a very long pole!

Overall, the pitch basically went fine. There is only so much you can say/do/screw up in 60 seconds.

The outcome

We won prizes from CircleCI, Weather Underground, and PCH. We were 3rd overall, so we got to return later that week and present to the conference as hackathon winners. All in all, I felt silly carrying around a red balloon all weekend but it was totally worth it to see the idea come to fruition and achieve recognition.

With the CircleCI team, who singlehandedly restored my faith in CI.

The future of Harvest

I feel a Q&A would've helped this hack shine more, plus maybe a modified pitch. The only feedback we got from judges was that it was an interesting idea, not a ready-made business, which makes me think the goal of the Disrupt Hackathon could be better clarified.

Agricultural crop loss is a huge problem measured in billions of dollars. Existing businesses do thermal and infrared imaging via airplane, drone, and satellite at anywhere from 100x to 10,000x the cost. Every farmer we spoke with at the local market said they were interested in the product, so I think there's a potentially sizable market for a cheap infrared imaging solution that requires no training.

Irrigation problems in a field.

Some interesting takeaways from surveying farmers:

  1. Agriculture is not technologically backwards by any stretch. One farmer said infrared imaging tech sounds like it's far away, but he would've said the same thing about GPS 10 years ago. Now he can't imagine planting without GPS. Farmers are eager to adopt technology where it helps.

  2. Farmers constantly worry about their crops. One described how she compulsively checks the few statistics she has. She would love to be able to do more electronic monitoring.

  3. Watering is the most important thing to understand in agriculture. This makes sense, but you don't appreciate it until you talk to farmers. Different plants like to be watered in different ways, irrigation systems leak and fail in different ways, and catching the variety of possible problems is hard. One farmer remarked that we would've caught a recent pump problem, which would've been huge.

Hackathon tips

Hackathons are great but can be overwhelming. Here are some thoughts:

  1. Pitch strangers early. People at unrelated sponsor booths are usually willing to listen.

  2. Optimize for demos. For a 20 hour event, you need to be realistic about cutting corners. This means terrible code is sometimes ok.

  3. Sleep a lot. Powering through the night is unhealthy and I doubt the returns are worth it. We slept about 7 hours.

  4. Talk with sponsors and try to get a relationship going. This helps problem solve and puts your project in their minds.

That's it...for now

If you liked this, follow me on twitter, check out my other projects, or read another hackathon post.

80,000 visitors on Christmas morning: a post-mortem

I woke up on Christmas morning to many emails and tweets about Asterank, a 3D space visualization. Intrigued, I checked Mixpanel and was shocked: in less than 8 hours overnight, the site had received over 75,000 uniques thanks to a reddit submission.

Then, to my horror, I realized the site had been down for about 3 hours.

Quick background: Asterank runs on nginx, node, and mongo on EC2 with Cloudflare for a CDN. The 3D visualization loads a static page and grabs data via AJAX.

Mongo crashes

Server logs immediately told me mongo wasn't running. I had seen mongo crashes like this before, caused by large, unique queries on a server with limited RAM.

When mongo crashes due to some combination of insufficient CPU/RAM, it leaves a .lock file in /var/lib/mongo. If this file exists, mongo refuses to start and tells you to repair the database. Unfortunately, the repair wouldn't work in my crashed state, so the only solution was to delete the lock file manually.


Although mongo was back up, the site's endpoints were not returning results. I spent about 15 minutes trying to figure out the issue and frustratedly restarting node. I also purged the Cloudflare cache, which was ineffective because AJAX results are not considered static data.

Finally, I remembered that I'd configured nginx to cache things. Clearing the cache manually and restarting nginx did the trick, and the site was finally online after a little more than 3 hours of downtime.

The Aftermath

The site was back up, but the traffic and reddit post had understandably stagnated because of the 3 hours of downtime. About 20,000 unique visitors had gone to the site and seen an error message. Although I continue to get traffic from reddit and other sources, it is nowhere near the 2+ visitors per second I was getting overnight.

Things I did right

nginx caching

My nginx cache setup saved multiple mongo queries per second until it was brought down by a few too many unique queries, which are generated on the main site. It was essential to handling the massive increase in traffic for as long as I did.


Cloudflare

I started using Cloudflare on a whim a couple of weeks ago, in conjunction with my move off a tiny 512MB linode. This was fortuitous, as it saved me a ton of bandwidth and vastly improved load times under heavy traffic.

Things I did wrong

Accurate HTTP status for AJAX endpoints

If I had programmed my AJAX endpoints to return a 500 on mongo failure, it's possible this downtime would have been avoided altogether. Cloudflare has an Always Online feature that serves cached pages when the origin goes down. nginx would also have cached the failed results for a much shorter period of time.
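The fix is small. A hypothetical Python sketch of the idea (the real endpoint ran on node, and `query_rankings` is a stand-in for the mongo call):

```python
import json

def query_rankings():
    """Stand-in for the real mongo query; raises when mongod is down."""
    raise ConnectionError("mongod is not running")

def handle_rankings(query=query_rankings):
    """Return (status, body) for an AJAX endpoint."""
    try:
        return 200, json.dumps(query())
    except ConnectionError:
        # An honest 500 lets the caches in front of the app (nginx,
        # Cloudflare) tell failures apart from good responses, instead
        # of caching the error payload as if it were valid data.
        return 500, json.dumps({"error": "database unavailable"})
```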

Unnecessary dependency on mongo

The 3D portion of the site could be made completely static. The interface constrains users to 3 possible queries, each for a different scoring function. I should have cron'ed mongo results to a file every 24 hours or so. This is an obvious optimization and would have protected the visualization from a mongo failure.


Recovery script not running

Since I knew that mongo could die like this, I had written a script to recover automatically. But I wasn't running it, because I'd recently upgraded servers and the crash had stopped happening.

For additional peace of mind, I could have set up an endpoint that checks the health of the mongo server. This could have been done with Pingdom or even manually with a cron + my free sms api (shameless plug).

Should I have been prepared?

In the tech community, the emphasis tends to be on moving fast, iterating, and getting eyeballs and feedback. People apply this advice to side projects, but in my case it would've been good to prepare more.

I could kick myself for disappointing more than 20,000 people, but Asterank is a science project that doesn't generate any revenue. Making reddit's frontpage was unthinkable until it actually happened.

Should I have shown the world right away, or should I have spent a couple of days optimizing for a single, improbable event? Who knows.