
Abstract: Big Data depends on supercomputing environments capable of storing and processing massive amounts of data. Hadoop is the most prominent Big Data platform in production today; it is an impressive architecture providing tremendous capabilities. Take a deep dive into this architecture to discover how it really works, and from that knowledge gain insights into data lifecycle management and job optimization for both MapReduce and Spark compute jobs. Improve your value to the Big Data team by learning the functions and systems of Hadoop.
Bio: Pushing data from small to large to huge summarizes Will's 20-year career in technology. A natural educator, he shares his enthusiasm for the technology of Hadoop with clear explanations mixed with acute insights, wrapped in humorous anecdotes from the many lessons he has learned over the years. Will currently works at the center of the Big Data vortex for Hortonworks, where he travels the globe teaching Hadoop engineers.

William Dailey
Senior Hadoop Engineer and Educator at Hortonworks
