Ryan Boyd

Co-founder at MotherDuck

Ryan Boyd is a Boulder-based software engineer, data + authNZ geek and technology executive. He's currently a co-founder at MotherDuck, where they're making data analytics fun, frictionless and ducking awesome. He previously led developer relations teams at Databricks, Neo4j and Google Cloud. He's the author of O'Reilly's Getting Started with OAuth 2.0. Ryan advises B2B SaaS startups on growth marketing and developer relations as a Partner at Hypergrowth Partners.

All Sessions by Ryan Boyd

Data Infrastructure through the Lens of Scale, Performance and Usability

Data Engineering | All Levels

Silicon Valley engineers and engineering challenges have ruled the data world for the last 20 years. The net result is data infrastructure companies competing to build the highest-scale, fastest systems for processing enormous amounts of data, usability be damned. But we don't all have movie libraries the size of Netflix's, search indexes the size of Google's, or social graphs the size of Meta's. This talk explores the changes in hardware and mindset that enable a new breed of software, optimized for the 95% of us who do not have petabytes to process daily.

I worked on Google BigQuery in 2012. At the time, the largest EC2 machine offered 60.5GB of memory; today, EC2 machines offer 25TB of RAM. Our software design for data services, centered on distributed architectures, hasn't taken that roughly 400x increase in available memory into account. At the same time, our laptops have become far more powerful, with today's MacBook Pro offering 16x the RAM of the models available in 2012.

Shouldn't our data infrastructure adapt to take advantage of this local compute? What does this change in hardware and software mean for the user experience? Instead of focusing on consensus algorithms for large-scale distributed compute, can our engineers focus on making data more accessible and more usable, and on reducing the time between “problem statement” and “answer”? That's the dream I'm exploring and where I want to push our industry over the next 5 years.
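
As one concrete illustration of the local-compute argument above (a minimal sketch, not part of the talk materials): an embedded, single-node analytics engine such as DuckDB can run a full analytical query on a laptop with nothing to provision. DuckDB, the file name, and the column names here are illustrative assumptions, not something the abstract specifies.

```python
# Minimal sketch: single-node analytics on a laptop with an embedded engine.
# DuckDB is assumed purely as an example; 'events.parquet' and its columns
# are hypothetical.
import duckdb

con = duckdb.connect()  # in-process, in-memory database: no cluster, no server

# Scan a local Parquet file directly; the engine parallelizes the query
# across the laptop's cores.
result = con.execute("""
    SELECT event_type,
           COUNT(*)         AS events,
           AVG(duration_ms) AS avg_duration_ms
    FROM 'events.parquet'
    GROUP BY event_type
    ORDER BY events DESC
""").fetchdf()

print(result)
```

The point is the workflow, not the engine: when the data fits on a single powerful machine, the time from "problem statement" to "answer" shrinks because there is no infrastructure to stand up first.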
