
Abstract: Microscopes are an age-old technology that lets us visualize and study phenomena too small to be perceived by the human eye. Macroscopes, by contrast, are systems designed to reveal spatial and temporal patterns so large or so slow that they escape normal human perception. We can get some sense of these when we fly, or when we look at time-lapse imagery. But real progress requires both the ability to collect planetary-scale information over time and the ability to harness modern compute technologies for interactive visualization and interrogation of such data.
The convergence of three technologies is finally allowing the construction of macroscopes and making them practical for daily use in both scientific and business contexts. First, we have the advent of LIDAR, GPS-equipped cell phones, and CubeSats, which supply raw observation data from nearly everywhere, capturing both human and natural activities. Second, we have machine learning methods that can classify patterns of movement or pixels; applied to geodata, we call this "geoML." Last but not least, we have modern computing architectures that move algorithms to the data and stream highly distilled information to client applications.
In this presentation, we explore three applications of the macroscope concept: monitoring the health of hundreds of millions of individual trees, assessing the exposure of static and moving assets to weather, and analyzing ship movement patterns.
Bio: Abhishek Damera's work as a Data Scientist at OmniSci involves using state-of-the-art machine learning algorithms to capture underlying trends in geospatial data. Prior to this, he completed his Master's in Transportation Engineering at UC Berkeley, where most of his work focused on classifying roads according to vehicular speed profiles.