Wednesday, July 10, 2013

Speed up Hadoop development with progressive testing

Debugging Hadoop jobs can be a huge pain.  The cycle time is slow, and error messages are often uninformative -- especially if you're using Hadoop streaming or working on EMR.

I once found myself trying to debug a job that took a full six hours to fail.  It took more than a week -- a whole week! -- to find and fix the problem.  Of course, I was doing other things at the same time, but the need to constantly check up on the status of the job was a huge drain on my energy and productivity.  It was a Very Bad Week.

Painful experiences like this have taught me to follow a test-driven approach to Hadoop development.  Whenever I'm working on a new Hadoop-based data pipe, my goal is to isolate the six distinct kinds of problems that arise along the way:

  1. Explore the data: The pipe must accept data in a given format, which might not be fully understood at the outset.
  2. Test basic logic: The pipe must execute the intended data transformation for "normal" data. 
  3. Test edge cases: The pipe must deal gracefully with missing or misformatted fields, rare divide-by-zeroes, and other edge cases.  (Steps 2 and 3 are sketched in code just after this list.)
  4. Test deployment parameters: The pipe must be deployable on Hadoop, with all the right filenames, code dependencies, and permissions.
  5. Test cluster performance: For big enough jobs, the pipe must run efficiently.  If not, we need to tune or scale up the cluster.
  6. Test scheduling parameters: Once pipes are built, routine jobs must be scheduled and executed.
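
Here's roughly what I mean by a "pipe," written with mrJob.  The input format (tab-delimited lines of user_id, event, duration) and the field names are invented for this example, but the point stands: the parsing and edge-case handling in steps 2 and 3 are plain Python, so you can exercise them locally long before Hadoop gets involved.

    from mrjob.job import MRJob


    class AverageDurationJob(MRJob):
        """Toy pipe: average the duration field per user, tolerating bad records."""

        def mapper(self, _, line):
            fields = line.split('\t')
            # Step 3: missing or misformatted fields get counted and skipped, not fatal.
            if len(fields) < 3:
                self.increment_counter('quality', 'short_record')
                return
            user_id, _event, duration = fields[0], fields[1], fields[2]
            try:
                yield user_id, float(duration)
            except ValueError:
                self.increment_counter('quality', 'bad_duration')

        def reducer(self, user_id, durations):
            values = list(durations)
            # Defensive guard; in practice every key that reaches the reducer
            # has at least one valid value, so this never divides by zero.
            if values:
                yield user_id, sum(values) / len(values)


    if __name__ == '__main__':
        AverageDurationJob.run()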

Each of these steps requires different test data and different methods for trapping and diagnosing errors.  Therefore, the goal is to (1) tackle problems one at a time, and (2) solve each kind of problem in the environment with the fastest cycle time.

Steps 1 through 3 should be solved locally, using progressively larger data sets.  Steps 4 and 5 must be run remotely, again using progressively larger data sets.
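
With an mrJob-style pipe, "progressively larger data sets" mostly means changing which input and which runner you hand the job.  Here's a hedged sketch of a local test harness for the hypothetical AverageDurationJob above, using the inline runner (the sample records are made up, and the output-parsing helpers are mrJob's documented testing API, which has shifted a bit across versions):

    import unittest
    from io import BytesIO

    from average_duration_job import AverageDurationJob  # the hypothetical pipe above


    class TestAverageDurationJob(unittest.TestCase):

        def run_job(self, raw_input):
            # The inline runner executes the whole pipe in-process: fast cycle
            # time, real stack traces, and an ordinary debugger if you need one.
            job = AverageDurationJob(['-r', 'inline', '--no-conf', '-'])
            job.sandbox(stdin=BytesIO(raw_input))
            results = {}
            with job.make_runner() as runner:
                runner.run()
                for key, value in job.parse_output(runner.cat_output()):
                    results[key] = value
            return results

        def test_basic_logic(self):
            results = self.run_job(b'alice\tclick\t2.0\nalice\tclick\t4.0\n')
            self.assertAlmostEqual(results['alice'], 3.0)

        def test_edge_cases(self):
            # Bad durations and short records should be skipped, not crash the pipe.
            results = self.run_job(b'bob\tclick\toops\nshort_record\n')
            self.assertEqual(results, {})


    if __name__ == '__main__':
        unittest.main()

Once these tests pass on a small sample, point the same script at a bigger local sample, and then at the cluster (-r hadoop) or EMR (-r emr) with no code changes -- that's where step 4 begins.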

Step 6 depends on your scheduling system and has a very slow cycle time (i.e., you must wait a day to find out whether your daily jobs run on the proper schedule).  However, it's independent of Hadoop, so you can build, test, and deploy it separately.  (There may be some crossover with #4, but you can test that with small data sets.)
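
For example, the date arithmetic behind a daily job is plain Python, so you can check it in milliseconds instead of waiting for tomorrow's run (the bucket name and partition layout here are invented):

    from datetime import date, timedelta


    def daily_input_path(run_date, root='s3://my-bucket/events'):
        """Hypothetical convention: a daily job processes yesterday's partition."""
        target = run_date - timedelta(days=1)
        return '{0}/dt={1:%Y-%m-%d}/'.format(root, target)


    # Unit-testable scheduling glue: no cluster, no waiting a day.
    assert daily_input_path(date(2013, 7, 10)) == 's3://my-bucket/events/dt=2013-07-09/'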

Going through six different rounds of testing may seem like overkill, but in my experience it's absolutely worth it.  Very likely, you'll encounter at least one new bug/mistake/unanticipated case at each stage.  Progressive testing ensures that each bug is dealt with as quickly as possible, and prevents them from ganging up on you.

Other suggestions:
  • Definitely use an abstraction layer that allows you to seamlessly deploy local code to your staging and production clusters.  Cascalog and mrJob are good examples.  Otherwise, you'll find yourself solving steps 2 and 3 all over again in deployment.
  • Config files and object-oriented code can reduce a lot of headaches in step 4.  Most of your deployment hooks can be written once and saved in a config file.  If you have strong naming conventions, then most of your filenames can be constructed (and tested) programmatically.  It's amazing how many hours you can waste debugging a simple typo in Hadoop.  Good OOP will spare you many of these headaches.  (A sketch of what I mean follows this list.)
  • Part of the beauty of Hive and HBase is that they abstract away most of the potential pitfalls on the deployment side, especially in step 4.  By the same token, tools like Azkaban and Oozie can take a lot of the pain out of step 6.  (Be careful, though -- each of these scheduling tools has its limitations.)
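
To make the config-plus-naming-conventions point concrete, here's a rough sketch (the JSON keys and path layout are invented; adapt them to your own conventions):

    import json


    class PipeConfig(object):
        """Hypothetical deployment config: hooks live in one JSON file, and
        filenames are derived from naming conventions instead of typed by hand."""

        def __init__(self, path):
            # e.g. {"bucket": "s3://my-bucket", "env": "staging"}
            with open(path) as f:
                self._conf = json.load(f)

        def _path(self, pipe_name, kind, run_date):
            return '{bucket}/{env}/{pipe}/{kind}/dt={date:%Y-%m-%d}/'.format(
                bucket=self._conf['bucket'], env=self._conf['env'],
                pipe=pipe_name, kind=kind, date=run_date)

        def input_path(self, pipe_name, run_date):
            return self._path(pipe_name, 'input', run_date)

        def output_path(self, pipe_name, run_date):
            return self._path(pipe_name, 'output', run_date)

Because every path comes from the same template, a typo can only live in one place, and a unit test can assert that the generated paths match the convention before anything is submitted to Hadoop.  Switching from staging to production becomes a one-line config change rather than a round of find-and-replace.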