Big Data Super Bowl
by Dan Gatti, Big Data I/O Forum
How many teams are going to win the Big Data Super Bowl? There is $200 million available to teams that field a high-scoring offense and can query, visualize, and disseminate big data.
Collaboration and creativity focused on a cross-section of agencies will result in making the playoffs. How can you improve the velocity of response to disasters such as the recent earthquakes, oil spills, fires, and tornadoes?
Is there a solution that makes first responders' jobs easier by querying big data from USGS, DOE, DHS, NASA, NSF, and NIH? The all-star team will be the one that can visualize the what-if scenarios in Dr. Eric Frost’s VizLab at San Diego State and disseminate the results to the disaster team.
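For a sense of what querying USGS data can look like in practice, here is a minimal sketch that pulls events from the USGS earthquake catalog via its public FDSN event web service. The endpoint is real and documented; the date range and magnitude floor are arbitrary choices for illustration:

```python
import json
import urllib.parse
import urllib.request

# USGS FDSN event web service (public, documented endpoint).
# The query parameters below are illustrative choices, not requirements.
BASE_URL = "https://earthquake.usgs.gov/fdsnws/event/1/query"

params = {
    "format": "geojson",
    "starttime": "2012-01-01",
    "endtime": "2012-01-31",
    "minmagnitude": 5.0,
}

url = BASE_URL + "?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    catalog = json.load(resp)

# Each GeoJSON feature is one event; print magnitude and location so an
# analyst (or responder) can scan the strongest events in the window.
for feature in catalog["features"]:
    props = feature["properties"]
    print(f"M{props['mag']:.1f}  {props['place']}")
```

The same pattern (parameterized HTTP query returning GeoJSON) is how downstream visualization tools would ingest these feeds.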
From climate change to earthquake response paradigms to real-time response capabilities for natural disasters, there is a tremendous need for new investments in technologies that fall under the purview of the USGS.
From Datanami: The US Geological Survey has doled out grants to fund big geological science projects via its John Wesley Powell Center for Analysis and Synthesis. According to the center, it hopes to spur new ways of thinking about Earth system science by providing scientists with “a place and time for in-depth analysis, state of the art computing capabilities, and collaborative tools invaluable for making sense of huge data sets.”
Some examples of accepted projects using “big data” are:
- Understanding and Managing for Resilience in the Face of Global Change – This project will use Great Lakes deep-water fisheries and invertebrate annual survey data collected since 1927 to look for regime shifts in ecological systems (a minimal regime-shift sketch follows this list). The researchers will also use extensive data collected from the Great Barrier Reef.
- Mercury Cycling, Bioaccumulation, and Risk Across Western North America – This project relies on extensive remote sensing data of land cover types to model the probable locations of mercury methylation for the western U.S., Canada, and Mexico (see the second sketch after this list). Through a compilation of decades of data records on mercury, the scientists will conduct a tri-national synthesis of mercury cycling and bioaccumulation throughout western North America to quantify the influence of land use, habitat, and climatological factors on mercury risk.
- Global Probabilistic Modeling of Earthquake Recurrence Rates and Maximum Magnitudes – This project will use the Global Earthquake Model (GEM) databases, including the global instrumental earthquake catalog, global active faults and seismic source database, global earthquake consequences database, and new vulnerability estimation and data capture tools. The research goal is to improve models of seismic threats to humanity, and base U.S. hazard estimates on a more robust global dataset and analysis (a recurrence-rate sketch closes out the examples below).
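As promised above, here is a minimal regime-shift sketch. It flags candidate shift points in an annual series by comparing the means of adjacent windows; this is a deliberately simple stand-in for the formal tests ecologists use (e.g., Rodionov's sequential t-test method), and the series, window size, and threshold are all invented for illustration:

```python
import numpy as np

def mean_shift_changepoints(series, window=10, z_thresh=3.0):
    """Flag candidate regime shifts in an annual series.

    Compares the mean of the trailing `window` years against the mean of
    the following `window` years; a shift is flagged where the difference
    exceeds z_thresh pooled standard errors.
    """
    x = np.asarray(series, dtype=float)
    shifts = []
    for t in range(window, len(x) - window):
        before, after = x[t - window:t], x[t:t + window]
        pooled_se = np.sqrt(before.var(ddof=1) / window +
                            after.var(ddof=1) / window)
        if pooled_se > 0 and abs(after.mean() - before.mean()) / pooled_se > z_thresh:
            shifts.append(t)
    return shifts

# Toy series standing in for an annual survey index (e.g., invertebrate
# abundance since 1927): a regime drop is injected at year index 45.
rng = np.random.default_rng(1)
series = np.r_[rng.normal(10, 1, 45), rng.normal(6, 1, 40)]
print(mean_shift_changepoints(series))  # indices near 45 get flagged
```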
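For the mercury project, the step of turning land-cover covariates into "probable locations of mercury methylation" can be sketched as a simple probability-of-occurrence model. Everything here is hypothetical: the covariates, coefficients, and data are simulated stand-ins for real remote-sensing inputs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulate per-grid-cell land-cover fractions and presence/absence of
# methylation; in the real project these would come from remote sensing.
rng = np.random.default_rng(2)
n = 2000
X = rng.random((n, 3))  # columns: wetland, forest, cropland fraction
logit = -3.0 + 5.0 * X[:, 0] + 1.0 * X[:, 1]  # wetlands dominate (assumed)
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # simulated presence/absence

model = LogisticRegression().fit(X, y)

# Predicted methylation probability for two hypothetical grid cells:
cells = np.array([[0.80, 0.10, 0.10],   # wetland-dominated cell
                  [0.05, 0.90, 0.05]])  # forest-dominated cell
print(model.predict_proba(cells)[:, 1])
```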
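Finally, "earthquake recurrence rates" usually means fitting the Gutenberg–Richter relation log10 N(≥M) = a − bM to a catalog. The sketch below uses Aki's maximum-likelihood b-value estimator on a simulated catalog; the GEM databases named above would be the real inputs, and the completeness magnitude and 50-year catalog span are assumptions made for the example:

```python
import numpy as np

def gutenberg_richter_fit(mags, m_c, bin_width=0.1):
    """Fit log10 N(>=M) = a - b*M above completeness magnitude m_c.

    Uses Aki's maximum-likelihood b-value estimator with the standard
    half-bin correction for binned magnitudes.
    """
    m = np.asarray(mags)
    m = m[m >= m_c]  # keep only the complete part of the catalog
    b = np.log10(np.e) / (m.mean() - (m_c - bin_width / 2.0))
    a = np.log10(len(m)) + b * m_c  # anchor the line at N(>= m_c)
    return a, b

# Toy catalog: exponentially distributed magnitudes (i.e., G-R-like) with
# a true b-value near 1.0; real inputs would be the GEM global catalog.
rng = np.random.default_rng(0)
mags = 4.0 + rng.exponential(scale=1.0 / np.log(10), size=5000)

a, b = gutenberg_richter_fit(mags, m_c=4.0)
print(f"a = {a:.2f}, b = {b:.2f}")

# Annual recurrence rate of M >= 6 events, assuming (for this example
# only) that the catalog spans 50 years:
rate_m6 = 10 ** (a - b * 6.0) / 50.0
print(f"expected M>=6 events per year: {rate_m6:.2f}")
```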