Los Angeles Geospatial Summit

Earlier this year I attended the Los Angeles Geospatial Summit. The event is for anyone interested in or working in the fields of geospatial science, technology and applications.

I arrived during the first session of student papers, so I only caught part of the paper on rooftop community gardens. The next student paper was by Steve Strand, a GIS analyst for the Orange County Water District. His paper was about logging data while participating in autocross and visualizing it in 3D as a space-time path. Yes, autocross! Strand recorded the driver's (his own) heart rate and speed while racing, used Bluetooth sensor data to improve location accuracy, and generated his visualizations from the automatically logged table data.

GIS-enabled heart rate data. Image by Steve Strand.
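As a rough sketch of what such a space-time path might look like in code (my own illustration, not Strand's actual workflow), position can be plotted in x/y with time on the vertical axis and heart rate mapped to color; all the lap data below is fabricated:

```python
import numpy as np
import matplotlib.pyplot as plt

# Fabricated lap data: x/y position (m), elapsed time (s), heart rate (bpm).
t = np.linspace(0, 60, 200)
x = 40 * np.cos(t / 10)
y = 25 * np.sin(t / 5)
heart_rate = 120 + 40 * np.abs(np.sin(t / 8))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# Space-time path: ground track in x/y, time as the vertical axis,
# heart rate driving the color of each point.
sc = ax.scatter(x, y, t, c=heart_rate, cmap="inferno", s=8)
fig.colorbar(sc, label="heart rate (bpm)")
ax.set_xlabel("x (m)")
ax.set_ylabel("y (m)")
ax.set_zlabel("time (s)")
plt.show()
```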

Next up was the panel on Unmanned Aerial Systems.  Val Vaughn and Mark Vogel discussed “How aerospace is using multi-rotor drones to supplement data needs for image processing R&D”.

Then Dr. Bingcai Zhang talked about “GPU + Drones + 3D Imaging for Precision Farming”. He defined a GPU as a graphics processing unit and stressed that combining GPUs, drones and 3D imaging can make this kind of data collection affordable. He named several uses for it, including agriculture, disaster management, surveying, animal observation, landfill surveys, forestry conservation, environmental management, construction and mining. His main focus, though, was on how farmers can use such technology for precision farming, including variable-rate fertilizer application that could cut fertilizer usage by about 27%. He also mentioned CUDA (Compute Unified Device Architecture), which runs on GPUs.
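As a rough sketch of the variable-rate idea (my own illustration, not Zhang's implementation), a fertilizer prescription map can be derived by classifying a vegetation index such as NDVI per grid cell; the thresholds and rates below are made-up placeholders:

```python
import numpy as np

# Hypothetical NDVI raster for a field (one value per grid cell).
# In practice this would come from drone imagery processed into orthomosaics.
ndvi = np.array([
    [0.21, 0.35, 0.62],
    [0.48, 0.71, 0.55],
    [0.30, 0.66, 0.80],
])

# Made-up prescription: weaker vegetation gets more fertilizer (kg/ha).
bins = [0.0, 0.4, 0.6, 1.0]            # NDVI class edges (placeholder thresholds)
rates = np.array([120.0, 80.0, 40.0])  # rate per class (placeholder values)

# np.digitize assigns each cell to an NDVI class; index into the rate table.
prescription = rates[np.digitize(ndvi, bins) - 1]
print(prescription)
```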

Mark Sarojak was the final presenter for this panel. His talk was on “Big Data for Little Platforms”. He first cited the daily data volumes of online platforms, from roughly 1 petabyte per day for a platform like YouTube down to 500 terabytes or 100 gigabytes per day for others. In terms of truly big data, there is imagery from satellites such as WorldView-2, and for the government there are unmanned aircraft such as the RQ-7 Shadow and RQ-4 Global Hawk used for military surveillance. Commercial uses of big data include oil and gas, utility monitoring, and so on, while personal uses include toys and hobbyist platforms. More data is thus a product of better collection platforms, and denser data a result of sensors getting better. A growing user base also drives the commercialization of Unmanned Aerial Systems in fields such as agriculture, automotive, construction, financial analysis, insurance, retail, safety and transportation.

The next panel featured GIS and community health. The big picture presented here was cancer control: how do we use cancer surveillance data to stop people dying from cancer, and how can the spatial aspect help? Every occurrence of cancer in a population is counted and described in terms of the person (demographics), the tumor (biology) and the place. Cancer control is thus a way to understand disease etiology (cause and effect). There is also a need to identify people at risk of recurrence.

The first place examined was the world; you can view more data at globalcancersolution.org. The second place was Los Angeles County, whose registry has been truly population-based since 1972, with 1.4 million records and high-quality geocoding of diagnosis addresses. The population at risk is estimated using census tracts and block groups, broken down by combinations of age, sex and race/ethnicity. Describing cancer occurrence by place raises confidentiality concerns (cases must not be identifiable from the maps). The panel also discussed targeting ad hoc screening efforts: 22% of Los Angeles census tracts accounted for 94% of late-stage disease and 25% of in situ cases.
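A hedged sketch of how that kind of targeting could be computed (hypothetical column names and counts, not the registry's actual workflow): rank tracts by late-stage case count and keep the smallest set that covers a target share of cases.

```python
import pandas as pd

# Hypothetical tract-level counts; names and values are illustrative only.
cases = pd.DataFrame({
    "tract": ["A", "B", "C", "D", "E"],
    "late_stage": [120, 90, 15, 10, 5],
})

# Rank tracts by late-stage burden, then take tracts until cumulative
# coverage reaches the 90% target.
ranked = cases.sort_values("late_stage", ascending=False)
ranked["cum_share"] = ranked["late_stage"].cumsum() / ranked["late_stage"].sum()
target_tracts = ranked[ranked["cum_share"].shift(fill_value=0) < 0.90]
print(target_tracts[["tract", "late_stage", "cum_share"]])
```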

Robert Fassett discussed two huge threats to our nation's continued prosperity. One is healthcare waste, totaling about $765 billion in 2012. The other is obesity, which can lead to diabetes; it is predicted that by 2030, 30% of the population will have diabetes. One way to help combat this is Geopulse, which has three features: audience, proximity and context. Geotagging can also help locate where such healthcare waste and obesity are occurring, to better prevent them in the future.

Estella Geraghty from ESRI spoke about “Telling Your Mission Critical Stories with Greater Impact”. She started by asking why maps are so compelling: maps communicate, and interaction is powerful. New technology is extending GIS, allowing health professionals to inform and educate. Because health issues are complex, a key goal is to engage people through persuasion, beliefs and behaviors; in other words, action and interaction. It is therefore important to tell the right, accurate story. One example she brought up was the measles story map called “The state of vaccinations”. One thing to note, though, is how well these stories and maps predict the information people will need and remember; they can include news issues and must tell the whole story. Another story map shown was on low-birth-weight babies. Geraghty ended by mentioning the “Health Enterprise Data Flow” shared between ESRI and New Mexico, an open data program that allows ESRI to collect data that can help New Mexico with health situations.

The next session of student papers started with Adarsh Sinha's “Mapping Social Media”. Specifically, he looked at mapping people's moods, asking “What would we use to map mood? Data.” This involves categorizing, processing and visualizing data, with questions answered using sentiment analysis, i.e., asking how people feel. Problems, though, involve context, data access, accuracy, books being judged by their covers, and the increasing use of such information for government surveillance.
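As a minimal sketch of the mood-mapping idea (my own example, not Sinha's pipeline), geotagged posts can be scored with an off-the-shelf sentiment model such as NLTK's VADER and then plotted by location; the posts below are invented:

```python
# pip install nltk
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

# Hypothetical geotagged posts: (latitude, longitude, text).
posts = [
    (34.05, -118.24, "Loving the sunshine downtown today!"),
    (34.02, -118.49, "Traffic on the 10 is absolutely miserable."),
]

analyzer = SentimentIntensityAnalyzer()
for lat, lon, text in posts:
    # compound ranges from -1 (negative mood) to +1 (positive mood)
    score = analyzer.polarity_scores(text)["compound"]
    print(f"({lat}, {lon}) mood={score:+.2f}")
```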

Michael Quant spoke about “Rail Equipment Locator Maps for LA Metro Rail Maintenance Department”. Los Angeles is the second largest U.S. city. There are currently six rail lines (light and heavy): the Blue, Expo, Gold, Green, Purple and Red. Several lines are being extended, the Expo to Santa Monica, the Gold to the Foothills and the Purple to Westwood, and new lines such as the Crenshaw/LAX line are being built. Quant then noted that miles of high-voltage electrical cable run along poles beside the railways. Rail expansion means new equipment and newly trained personnel, and equipment locations can be difficult to determine without experience or an engineering background, so rail maintenance staff have requested maps of electrical power equipment. This is where maps come in.

LA Metro Lines. Image from Metro website.

The maps must be designed around standard paper sizes (letter, legal, tabloid) and fold to pocket size for portability. They are printed as color copies and placed at multiple distribution sites, with quantities monitored daily; from May 2013 to January 2015, 2,200 maps were distributed, and other departments use them as well. Updates come through user requests and can cover rail station name changes and environmental changes, such as where multiple rail lines intersect. When creating the maps, duplicate spatial attribute information can occur. There is also a web/mobile platform called SpeedFind, with secure login and a web page for each detailed map. The team uses ArcGIS Online/Server and can combine desktop GIS tools with local database management to create better maps.

Arman Majidi discussed “GIS Analysis of Industry Clusters –” (I wasn't able to get the whole title). He assessed the reliance of industry clusters on freight infrastructure and analyzed the impact of freight on economic growth. Industry clusters are groups of related industries in a defined geographic area that share consumer markets. His methods used GIS to visualize economic, infrastructure and freight data flows. One example he provided was the impact of BNSF (a railway company) on Minnesota's economy. As for the Inland Empire, 50% of freight enters through Southern California ports, making the Inland Empire the center for logistics in the U.S. He also mentioned the National Freight Atlas Project.

The Inland Empire. Map from Google Maps.

Sarah Gehring talked about “Semi-automated Visualization of Spatial Information in Unstructured Text”. The goal of the study was to create a semi-automated geoparsing, data integration and visualization approach for converting locations mentioned in text-based news articles into mapped locations. Her methods included extracting spatial entities from text, assigning each location a spatial reference and identifying relationships between locations.
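A minimal sketch of the geoparsing step (my own illustration using spaCy and geopy, not necessarily Gehring's toolchain): named-entity recognition pulls place names out of the text, and a geocoder assigns them coordinates.

```python
# pip install spacy geopy && python -m spacy download en_core_web_sm
import spacy
from geopy.geocoders import Nominatim

nlp = spacy.load("en_core_web_sm")
geolocator = Nominatim(user_agent="geoparse-demo")

text = "Wildfire smoke from Ventura County drifted over Los Angeles on Tuesday."

# Extract spatial entities (GPE = countries/cities/states, LOC = other locations).
doc = nlp(text)
places = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")]

# Assign each place a spatial reference via geocoding.
for place in places:
    loc = geolocator.geocode(place)
    if loc:
        print(f"{place}: ({loc.latitude:.4f}, {loc.longitude:.4f})")
```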

A case study was presented by Isaiah Mack and Jon Gipson, titled “Comparing Consumer and Mapping Grade GPS Receivers and Applications”. They discussed the differences between consumer- and mapping-grade receivers. Most smartphones now have GNSS (Global Navigation Satellite System) support, with quick convergence but high variability. Mapping-grade GPS adds features such as attribute collection and differential correction; one example device is the Geo 7X. Beyond mapping-grade receivers there are also survey-grade units, which are converging with the mapping-grade ones; the choice comes down to whether a device is position-based or accuracy-based. Workplace trends include BYOD (bring your own device) alongside company-owned and employee-owned devices. One device in use is the iBeacon.

Geo 7 series. Image from Trimble website.

Assisted GPS units can achieve positioning accuracy of 5-8 meters, while Wi-Fi positioning is around 74 meters and cell-network positioning around 600 meters. There are several software platforms, such as TerraSync/PFO (Pathfinder Office), ArcPad/Positions and TerraFlex. TerraFlex's pros are its price, automatic background layers, ease of learning and use, and cloud syncing, and the advanced tier adds data uploads and updates; the con is its reliance on cellphones for background layers. The final point stressed was that without corrections, no device is accurate.
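That last point can be illustrated with a toy differential correction (a simplified sketch of the concept, not any vendor's algorithm): a base station at a surveyed location measures its own apparent position, and the resulting offset is removed from the rover's fix. All coordinates below are made up.

```python
# Toy differential correction: coordinates as (lat, lon) in degrees.
base_known     = (34.052200, -118.243700)  # surveyed base-station position
base_measured  = (34.052245, -118.243652)  # what the base's GPS reports right now
rover_measured = (34.060110, -118.250040)  # uncorrected rover fix

# The base's measurement error, assumed shared by the nearby rover.
error = (base_measured[0] - base_known[0],
         base_measured[1] - base_known[1])

# Subtract the shared error from the rover's fix.
rover_corrected = (rover_measured[0] - error[0],
                   rover_measured[1] - error[1])
print(rover_corrected)  # (34.060065, -118.250088)
```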

Chris Mattmann ended the event by speaking about “Open Source Big Data Ecosystem Challenges”. He asked, “How do we handle 700 TB in a sea of data?” Scientists will not change their algorithms for overly large data sets. He also discussed comparing petabytes of data in a variety of formats with petabytes of remote sensing data; one example he gave was cataloging all of NASA's current planetary science data. Long-term objectives include a focus on data science through the Journal of Big Data, the Apache Software Foundation and Tika (a content analysis/detection toolkit).
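Tika's role, detecting content types and extracting text across many formats, can be sketched with its Python bindings; the file path here is a placeholder:

```python
# pip install tika  (needs Java; starts a local Tika server on first use)
from tika import parser

# Placeholder path: any of the hundreds of formats Tika understands.
parsed = parser.from_file("example_document.pdf")

print(parsed["metadata"].get("Content-Type"))  # detected MIME type
print((parsed["content"] or "")[:200])         # first 200 chars of extracted text
```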
