Tuesday, November 24, 2015

Lab 8: Developing a Project for Arc Collector and Gathering Data

Introduction

In this week's lab we created geodatabases for use in Arc Collector, an app that lets us collect attribute data tied to GPS points on mobile devices. Smartphones and mobile devices are well suited to collecting geospatial data in the app because their computing power far exceeds that of most GPS units, and their connectivity allows access to online data, which is useful for syncing information and updating geodatabases on the fly. A working data collection map in Arc Collector has several prerequisites. First, you must have an ESRI online account and create a geodatabase with feature classes, domains, fields, and subtypes. This map must then be published to ArcGIS Online and added to a web map. If all the necessary feature services are set up correctly, the map can be accessed from a mobile platform and new data points can be added to the automatically updating map. Having preset domains and subtypes speeds up data acquisition and cuts down on human error. In total we created three geodatabases for this lab: the first was an in-class tutorial to familiarize ourselves with the basics, the second was an on-campus feature collection geodatabase of our choice, and the third was an off-campus feature collection geodatabase of our choice.

Study Area/Methods

During the in-class tutorial our professor showed us the basics of creating a new geodatabase with domains and feature classes linked to those domains. To do so we selected a parent folder and chose "New > File Geodatabase." Once the geodatabase was created, the domains could be set up: inside the geodatabase's properties menu, under the Domains tab, each domain is named and given a description of its function. In the domain properties section the field type is chosen, as well as the domain type. If coded values are chosen as the domain type, the codes are named and given descriptions in the coded values section below. These are the values that can be quickly selected in the Arc Collector app when filling in feature attribute information.

Next, inside the geodatabase, we chose "New > Feature Class." Doing so prompts you to name the feature class, give it an alias, and choose what type of feature is to be stored (in our case, point features). A projection is then chosen; we used WGS 1984 Web Mercator (auxiliary sphere). The default x,y tolerance is accepted, and lastly the fields must be created. The fields hold the attribute data of the features to be collected and can be any of several types, including short or long integer, float, double, text, date, BLOB, GUID, or raster. In the field properties, default values and length limits can be set, and the field can be tied to a domain in the geodatabase, which allows for quick input by users. After accepting a few more default settings, the feature class is created. Before doing anything else, the feature class has to be symbolized appropriately for what the data points need to convey, taking size and color into account.

The feature class can then be shared to the ArcGIS Online database that an ESRI online member has access to. To do so you must first be logged into an ESRI account, then choose "Share As > Service" under the File menu. "Publish a service" is selected, followed by choosing a connection (either the default hosted services server or another server), and the service is named. Inside the service editor window that appears, the tiled mapping option is turned off and feature access is turned on, which is what allows end users to manipulate and add data to the service. In the feature access tab, the Create, Delete, Query, Sync, and Update operations must all be selected to ensure full functionality. In the item description tab, a summary and tags are required; a description, access and use constraints, and credits are optional. Once this has all been finalized you can select the Analyze button and then the Publish button, which exports the service to the user's ArcGIS Online account, where it can be accessed from the ArcGIS Online website.
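For reference, the same setup can be scripted rather than clicked through. Below is a minimal arcpy sketch of the geodatabase, domain, and domain-linked field workflow described above; the folder path, domain name, coded values, and field names are hypothetical stand-ins rather than the exact ones from the tutorial.

```python
import arcpy

# Hypothetical workspace; substitute your own parent folder.
folder = r"C:\GIS\Lab8"
gdb = arcpy.management.CreateFileGDB(folder, "collector_lab.gdb").getOutput(0)

# Coded-value domain for quick attribute entry in Arc Collector.
arcpy.management.CreateDomain(gdb, "BinSize", "Size of garbage bin",
                              "TEXT", "CODED")
for code, desc in [("S", "Small"), ("M", "Medium"), ("L", "Large")]:
    arcpy.management.AddCodedValueToDomain(gdb, "BinSize", code, desc)

# Point feature class in WGS 1984 Web Mercator (auxiliary sphere), WKID 3857.
sr = arcpy.SpatialReference(3857)
fc = arcpy.management.CreateFeatureclass(gdb, "GarbageBins", "POINT",
                                         spatial_reference=sr).getOutput(0)

# Add a field and tie it to the domain so users pick from preset values.
arcpy.management.AddField(fc, "bin_size", "TEXT", field_length=10)
arcpy.management.AssignDomainToField(fc, "bin_size", "BinSize")
```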

The next step in the process is to access the ArcGIS Online site and log into your account. Once in, click on the Map ribbon and zoom to the desired study area extent. Then, after choosing a basemap, select the add layer function and search for the previously published features in the My Content section. After testing that the edit function works properly and all the desired fields are usable for attribute input, save the map and enter the desired map title, tags, summary, and save folder destination. Lastly, the application settings can be customized by selecting the "About this Map" icon; selecting the share button allows either everyone, your organization, or subgroups of your organization to access and edit the map. Once this is complete the map is fully accessible from the Arc Collector app and can be used to gather spatial attribute points in the field.

By following the previously outlined methodology for creating and publishing custom geodatabases with point features, I then created two maps: one for on campus and the other for off campus. My on-campus map used a simple garbage receptacle marker but had several fields and domains linked to the geodatabase. The fields were material composition (combination of materials, metal exterior, cobbled aggregate rock exterior, wooden exterior, or plastic exterior), bin size (small, medium, or large), date, recycling combo (either a singular unit or a combination recycling and trash unit), and smokers' receptacle (whether the unit was a smokers' receptacle or contained one). The study area for this map was the general area of the UWEC lower campus from Hibbard to Davies.

Figure 1: Garbage receptacle data table after collection
My second, off-campus map covered the sidewalk features lining a block of Water Street. The first field was the feature type (sitting bench, stop sign, parking sign, no biking sign, garbage bin, street lamp, or tree); the second field was its condition (well maintained, minimal wear and tear, or unkempt/weathered/vandalized). The final field was the position on the sidewalk (either closer to the road or farther from the road).

Figure 2: Off-campus Water Street sidewalk features data table after collection


Results

On-campus trash bins map

http://uwec.maps.arcgis.com/home/webmap/viewer.html?webmap=9494669fa20648419b1160299e73d5e1

Figure 3: On-campus trash bin collector map after data collection, in the ArcGIS Online map viewer


Off-campus Water Street sidewalk features

http://uwec.maps.arcgis.com/home/webmap/viewer.html?webmap=58dbfd8153fa4733bbd0d54513c000a7

Figure 4: Off-campus Water Street sidewalk features after data collection, in the ArcGIS Online map viewer


Conclusion

Fields and domains in geodatabases make data collection in the field extremely efficient and easy, so long as they are properly formatted. With Arc Collector we were able to quickly make a working map and use it in the field with limited programming knowledge. One major bonus of collecting data to an automatically updating database this way is the added mobility and processing speed of having the app directly on our smartphones, as opposed to carrying a bulky GPS device. The intuitive app interface made for quick data acquisition. Overall, Arc Collector seems to be a very powerful medium for collecting geospatial data, and the option of having multiple users and simultaneous data syncing makes it a highly efficient platform for group projects.

Monday, November 9, 2015

Lab 7: Conducting a Topographic Survey with a Dual-Frequency GPS and Topcon Total Station

Introduction

In these two conjoined labs we were introduced to the procedures and units involved in survey-grade GPS analysis. The four units we used were the Tesla (a handheld unit with a touch screen interface capable of on-the-fly sub-centimeter GPS accuracy), the HIPER (a high-accuracy GPS receiver), the MIFI (a 4G mobile hotspot device), and the Topcon Total Station (a survey-grade optical laser distance and bearing measurement device). Using all four units, my lab partner and I were to survey a roughly 25 x 25 meter study area and plot 100 points (first lab) and 25 points (second lab) over its surface. The resulting data would be akin to our first and second labs, where we used rudimentary surveying methods in our topographic sandboxes, but in this case the x,y,z points would be highly accurate survey-grade data. After all our classmates had collected their data, our data set could be combined with other groups' data sets covering the same study area.

Study area/Methods

Dual-Frequency GPS

The study area for this lab was (for my group) the campus mall extent inside the surrounding sidewalk at UWEC. Once we had collected the necessary equipment from our department storage room (HIPER, Tesla, MIFI, and mounting tripod) we set out to the mall and began setup. Once the Tesla handheld was powered on, we entered the Magnet Field program used for surveying and created a new job with our group's identification information. Then we configured the GPS to use the HIPER SR RTK NET OC to ensure we were getting the highest possible accuracy for our location and elevation data. The projection was kept as UTM North Zone 15 90W, along with the datum NAD83(2011) and the same geoid. After accepting the defaults for the remaining options and confirming the selection, we hit the home button to return to the main menu screen. Then we entered the Connect submenu and chose to connect to the HIPER, first making sure the MIFI device was turned on and connected to the HIPER, which would further enhance localization accuracy by providing a pinpointed 4G hotspot.

After returning to the home menu we began collecting data points by entering the Survey menu and then the Topo menu. We were group 1, so our data points began at 100 and would run to 200. The height of the unit was recorded as 2 m and the code for the data points was set to elev (elevation). To ensure accuracy, we set the number of averaged readings per single end-result data point to 10. Once these prerequisite fields were filled out, we began collecting points, relocating around the campus mall and making sure we covered its whole extent until we reached our 100-point cap. Because the Tesla unit was stuck in demo mode, which caps the points collected per job at 25, we actually ended up needing four separate jobs, which we later merged into a master data set for use in ArcMap geoprocessing of topographic elevation maps. The point data acquired over the four data sets was then exported onto a thumb drive as a .txt file, which, after slight alteration of the field names, was easily compatible with ArcMap's option to create a feature layer from x,y coordinates. The feature layer was then used to create a topographic map of the campus mall using spline interpolation and also exported as a TIFF for 3D analysis in ArcScene.
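For reference, the tail end of this workflow (text file in, spline surface out) can be sketched in arcpy. This is a minimal sketch assuming hypothetical paths and field names for the exported .txt file; XYTableToPoint is the current ArcGIS Pro tool, while in the ArcMap era the equivalent was Make XY Event Layer followed by an export. Spline requires the Spatial Analyst extension.

```python
import arcpy

arcpy.CheckOutExtension("Spatial")  # Spline is a Spatial Analyst tool

# Hypothetical paths and field names; substitute the exported .txt file.
table = r"C:\GIS\Lab7\tesla_points.txt"
gdb = r"C:\GIS\Lab7\lab7.gdb"

# Build a point feature class from the x,y,z columns of the text file
# (NAD83 UTM zone 15N, EPSG:26915, matching the job's projection).
points = arcpy.management.XYTableToPoint(
    table, gdb + r"\mall_points", "X", "Y", "ELEV",
    arcpy.SpatialReference(26915))

# Interpolate a continuous elevation surface through the measured points.
surface = arcpy.sa.Spline(points, "ELEV")
surface.save(gdb + r"\mall_spline")
```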


Figure 1: Campus mall microtopography and x,y points in ArcMap from Tesla/HIPER/MIFI using spline interpolation

Figure 2: Campus mall microtopography in ArcMap from Tesla/HIPER/MIFI using spline interpolation

Figure 3: Campus mall 3D microtopography in ArcScene from Tesla/HIPER/MIFI

Topcon Total Station

On our second outing to the campus mall we were instructed to use the Topcon Total Station, prism rod, Tesla handheld unit, and MIFI portable 4G hotspot to gather location and elevation data points. The major difference this time was that instead of using the Tesla as a GPS to gather the points, it would be connected over Bluetooth to the Total Station, which would derive each point from its distance and bearing measurements relative to the occupied point and backsights. To gather the occupied point (exactly where the center of the Total Station would be positioned) and the backsights (denoted points used as a reference for true north by the Total Station's internal computer), we followed the same workflow used in the first outing with the Tesla, HIPER, and MIFI. Once those points were collected, the Total Station was set up atop a sturdy tripod with adjustable legs. Care had to be taken to get the unit exactly level for the Total Station to function properly, a process that entailed adjusting two of the legs, then the leveling screws, and then the legs again. Once the station was set up, the Tesla had to be restarted and the Total Station turned on with Bluetooth activated so the Tesla could sync to it. The HIPER had to be disconnected beforehand, and once the Tesla and Total Station were connected we designated the backsight we would be using, following the same workflow as for the normal points.

Afterwards we began collecting data points using the Total Station's optical distancing laser. One group member (myself) moved to each desired point with the prism rod while the second group member (Ally) re-positioned the optical lens to aim directly at the prism mirrors. The third group member (Matt) would then hit the record button on the Tesla, and so long as the optical laser hit directly within the prism rod's center the data point would be collected. This continued for 21 points (due to the demo mode cap of 25 and the already recorded occupied point and three backsights) across the campus mall. Once we had finished and disassembled the Total Station, we transferred the data points to a thumb drive as a .txt file and renamed the attribute fields to better suit ArcMap's create feature class from x,y workflow. This feature class was then also used to create a topographic map using spline interpolation and exported as a TIFF for 3D analysis in ArcScene.
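To make the Total Station's geometry concrete, here is a small worked sketch of how a distance-and-azimuth shot reduces to coordinates relative to the occupied point. All values below are hypothetical, and real instruments also apply corrections (prism constant, curvature, refraction) that are omitted here.

```python
import math

# Hypothetical occupied point in UTM zone 15N (easting, northing, elevation).
e0, n0, z0 = 619200.0, 4963150.0, 240.0
hi, hr = 1.55, 2.00        # instrument height and prism-rod height (m)

# One hypothetical shot: slope distance (m), azimuth and zenith angle (deg).
slope_dist, azimuth, zenith = 42.30, 137.5, 88.2

# Reduce the slope distance to horizontal and vertical components.
horiz = slope_dist * math.sin(math.radians(zenith))
vert = slope_dist * math.cos(math.radians(zenith))

# Azimuth is measured clockwise from north, so east uses sin and north cos.
e = e0 + horiz * math.sin(math.radians(azimuth))
n = n0 + horiz * math.cos(math.radians(azimuth))
z = z0 + hi + vert - hr

print(f"E={e:.2f} N={n:.2f} Z={z:.2f}")
```
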
Figure 4: Campus mall microtopography and x,y points in ArcMap from Total Station

Figure 5: Campus mall microtopography in ArcMap from Total Station

Figure 6: Campus mall 3D microtopography in ArcScene from Total Station

Results/Discussion

Based on the results of spline interpolation on the two data sets, it is hard to determine any reliable difference in accuracy that can be used for comparison. The first data set, using the Tesla/HIPER/MIFI, was a combination of several groups' x,y data sets and has much higher accuracy due to the larger number of elevation points used during interpolation. That higher point density makes for a much more accurate representation of the campus mall, as opposed to the low point density of the Total Station data set, which produced an inaccurate and generalized topography with only 16 points taken (3 backsights and 1 occupied point took up 4 of the total 20 allowed in demo mode). The extent of the Total Station survey is also somewhat smaller (data points do not extend to the left quarter of the campus mall study area) because of the difficulty of sighting the Total Station's optics to the prism during the low-light hours of the evening, which forced us to take elevation points closer to the occupied point. These elevation points could not be combined with other groups' because few groups shared their data sets in our department's communal temp folder.

Conclusion

The two methods of collecting microtopography data, the dual-frequency GPS and the Total Station, each have differing pros and cons. The dual-frequency GPS method has much more mobility and is not impeded by line of sight, since the Tesla/HIPER/MIFI can be taken to any location and set up for point collection. This can also be a drawback in cases of uneven or easily shifted terrain such as sand or loose gravel and dirt. The Total Station suffers from a lack of mobility because it works from a single occupied point, but with practice its setup and takedown can be greatly accelerated, and from several occupied points it can gather more points at great distances in a shorter period of time, points that are less affected by uneven terrain so long as the occupied point is on sturdy ground. Each method had its own technical difficulties, such as connections between devices and data point collection (the Total Station needing exact line of sight to the prism rod or else the point would not collect). Overall, each method of microtopography surveying has its preferred uses based on its strengths and weaknesses, but neither can render the other obsolete given the vastly differing scopes and goals of the surveying projects that make use of them.

Sunday, November 1, 2015

Lab 6: Navigation with Map and Compass

Introduction

In this week's lab we did the second half of the navigation lab, using our previously constructed navigational maps in UTM and decimal degrees grid formats to navigate the UWEC Priory. To navigate we used course points that had been mapped beforehand by past students, referencing them against our navigation maps and using directional bearing, map distance, a compass, and a GPS unit as a last-resort reference to navigate our group's designated course. In a previous class session we had measured our pace, the average length of two steps, which would be used in calculations to find distances between points and determine our location with respect to our desired destinations.

Methods/Study Area

The study area for this lab encompassed the extent of the woodlands surrounding the UWEC Priory, as denoted in our navigational maps by a feature class supplied by our professor. When we arrived at the Priory parking lot we were given the printed maps we had submitted to our professor, as well as our navigational course's points in decimal degrees, which we used to mark on our navigation maps the locations we would be plotting a course to with our compass. After marking the five points on the maps we measured the distances between them, starting from the corner of the parking lot, which was easily identifiable on the navigation map, and continuing to each subsequent point. The distance between points was measured using the centimeter markings on our compass and converted to the number of paces needed, based on our group's pace counter's average paces per 100 m. From each point we then measured the bearing to the next on our navigation map and pointed the compass toward that azimuth relative to true north. My role was azimuth control, which entailed staying back at the previous point and maintaining a straight line to the desired forward point, be it a landmark or the expected destination. As our pace counter ventured outward and counted his paces he would relay his distance information back to me, and I would log the total number of paces taken and inform the pace counter and the "leap frogger" (the secondary pace counter) how many paces remained before we reached the destination point. Trekking through the woods while maintaining a constant and reliable pace proved difficult, and several times we missed our desired end location and had to search for the bright orange/pink marker showing where the navigation course point should be.

Results/Discussion

By the end of the class session we had found only two of the five points on our group's navigational course. Some of the fault lay in our navigational abilities and our misplaced confidence in our ability to use a compass, which caused us to take more time than planned to find the points, but some lay in the real-life placement of the navigation point banners. After finding the two that we could, we checked our current GPS location and found that the given coordinates were off by a large degree, which had misplaced our point on the navigation maps and thus made our bearing erroneous. In the end, we as a class were not able to complete the second portion of the lab, which would have involved running another part of the navigational course using a GPS to home in on the points, because other groups had equal difficulty navigating the rough terrain and finding navigation point ribbons that were hidden or had been removed by locals.

Conclusion

Reflecting on our experience in the field, our group should have double-checked our methods for setting a course with compass bearings with our professor in order to make proper use of our time in the field. Too much time was spent troubleshooting and second-guessing our location, which threw off the acquisition of the rest of our points, and in the end we gave up and headed back in frustration once we had run out of time to complete the first portion of the navigation lab. I was surprised that my time spent in Boy Scouts, and the fact that I am an Eagle Scout, did not help our situation, but in honesty we only rarely did orienteering, so it had been several years since I had done any similar compass navigation exercises.

Sunday, October 25, 2015

Lab 5: Development of a Field Navigation Map and Learning Distance/Bearing Navigation

Introduction

The overall purpose of this lab was to give us the tools and resources necessary to conduct distance-bearing navigation at the Priory in the upcoming lab, and there were several concepts to learn and maps to make in order to do so properly. These included the proper use and dissemination of coordinate systems (be it State Plane or UTM), how to accurately measure our pace count, and how to construct a field-ready map for distance-bearing navigation with a compass. Each component of the preparation for an actual field outing carries heavy weight, because if even one element is inaccurate or unusable then the entire process can fail. For example, if the pace count is off then any distances measured in the field will be faulty, and if the map grid lines or topography are shifted then the whole operation will be offset. By the end of this lab we were prepared with the knowledge and maps needed to conduct the following Lab 6 at the Priory.

Methods

Once we had gotten acquainted with the groups of three we would be in for Lab 6, we headed out to the edge of the Phillips parking lot and measured 100 meters along the sidewalk going toward the Davies Center. We then counted the number of steps it took to walk to the 100-meter mark and divided that number by two (or simply counted only the steps taken by our right foot), which could then be used as a reference for determining distances in the field. For me it was 58 right-foot steps for every 100 meters traveled, which I could later use to calculate the number of paces between points on the navigational course at the Priory.
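As a quick worked example of how this pace count gets used in the field (the 240 m course leg below is hypothetical):

```python
# Pace count measured in this lab: 58 right-foot paces per 100 m.
PACES_PER_100M = 58

def paces_needed(distance_m: float) -> float:
    """Convert a ground distance in meters to a number of paces."""
    return distance_m / 100.0 * PACES_PER_100M

# Hypothetical course leg of 240 m measured off the navigation map.
print(paces_needed(240))  # -> 139.2, so count roughly 139 paces
```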

Next was the larger portion of the lab, in which we constructed two maps of the extent of the Priory navigational course using data given to us by our professor, which included a box outlining the Priory's extent, a basemap, and topographic contour intervals (5 meter or 2 foot) for the terrain. The first map was to have an overlaid UTM grid with 50-meter spacing, to which I added the 5-meter contour line layer as a reference for the terrain elevation while navigating the course. The second map was created using a geographic coordinates grid, which displays the grid in decimal degrees. Either map could be used for the upcoming Lab 6, depending on what type of coordinates were supplied to us.

Results

Figure 1: UTM grid system

Figure 2: Geographic coordinates grid system


Discussion

In regards to the coordinate systems used, the geographic coordinate system for each of the maps is GCS_North_American_1983, as it was supplied to us in the basemap and should be used as the coordinate system reference for the rest of the overlaid data. However, the projected coordinate systems for the two maps differ due to the necessity of having two different grid systems, one based on UTM at 50-meter spacing and the other a geographic coordinate grid in decimal degrees.

For each map, GCS_North_American_1983 ensures higher localized accuracy in North America than a worldwide GCS. For the first map (Figure 1), the NAD 1983 UTM Zone 15N projection is the North American Datum Universal Transverse Mercator, zone 15, northern hemisphere, which uses a two-dimensional Cartesian coordinate system to pinpoint locations on the Earth's surface. This system divides the Earth into sixty zones, each six degrees of longitude across. It was developed by the US Army Corps of Engineers in the 1940s and is based on an ellipsoidal model of the Earth. Our study area at the Priory lies within zone 15, just west of the zone boundary that bisects Wisconsin. For the second map (Figure 2), the NAD83_Wisconsin_Transverse_Mercator projection is designed as a single zone for the whole state and has an accuracy of less than 1 meter. This projection is based on the Transverse Mercator projection but is localized to Wisconsin for higher accuracy.
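As a small illustration of how the UTM zones are laid out, the zone number can be computed directly from longitude; the Eau Claire longitude below is approximate.

```python
def utm_zone(longitude_deg: float) -> int:
    """UTM zones are 6 degrees wide, numbered 1-60 starting at 180W."""
    return int((longitude_deg + 180.0) // 6) + 1

# Approximate longitude of Eau Claire, WI: about 91.5 degrees west.
print(utm_zone(-91.5))  # -> 15, matching the NAD 1983 UTM Zone 15N choice
```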

Saturday, October 17, 2015

Lab 4: Unmanned Aerial System Mission Planning

Introduction

In this lab we got a whirlwind introduction to the use of Unmanned Aerial Systems and their various components, along with software for planning flights and utilizing the gathered data, such as Mission Planner, RealFlight Simulator, and Pix4D. Proper foreknowledge about the differing platforms and programs is key when considering a geographic issue that requires a UAS to solve; having the right UAV, sensor, mission plan, and software to process the data can spell either success or disaster, with the attendant loss of time and money. UAS is a powerful tool capable of increasing the efficiency and safety of existing projects, or enabling the creation of new ones, due to the relatively low cost of materials and the high mobility of UAV models.

Methods

Part 1: UAS demonstration and flight

For the first hands-on demo of the lab, our professor showcased the numerous features, pros, cons, and functionalities of the fixed-wing and multi-rotor UAS platforms.

The fixed-wing platform he showed us boasted a built-in flight controller that relays to a base station and flies the aircraft autonomously based on the programmed flight mission. It has an Rx radio receiver for manual control, battery power, a personal ozone monitor, a camera attachment, and a bottom hook for bungee launching. The pros of the system are that a fixed-wing model allows flight times as long as 1.5 hours on its lithium polymer battery and can capture long swaths of data. The cons are that the batteries are highly volatile, it requires a large turn radius at high speeds, it needs open launching and landing zones, and it requires more skill than a multi-rotor to control manually.

Figure 1: Fixed wing platform in flight

The next platform our professor showed us was a pair of multi-rotors, one with four rotors and the other, a far more sizable model, with ten. Each came with sensor attachments, battery placements, and fixed landing gear. The pros of these platforms are vertical launches in confined spaces, higher agility in wind, and the ability to maintain specific heights for precision data capture. The cons of multi-rotors are their very limited flight times and travel speeds.

Figure 2: Multi-rotor platform with two camera attachments
After demoing those platforms we headed out below the student footbridge onto the point bar to watch our professor fly his DJI Phantom and capture images of the surrounding terrain for use in our Pix4D section later on. The Phantom showcased how agile and maneuverable multi-rotors can be and how useful those features are when it comes to real-world remote sensing applications.

Part 2: Mission Planner

The Mission Planner software is used for creating detailed flight plans with many adjustable variables, such as the UAS platform you wish to use, the camera or sensor, the study area, altitudes, and several others we did not cover in detail in this lab.

For my flight plans I chose to survey the entirety of the Chippewa Valley Technical College's parking lot, a study area of 0.04 square kilometers. The UAS platform I chose was the 3DR Y6 multi-rotor, due to the relatively small study area and the mobility of the platform, but I had to raise the travel speed to 10 m/s in order to stay under the 8-10 minute limit imposed by its battery life.
Figure 3: 3DR Y6 multi-rotor model
Flight plan mission 1 took place at 100 m altitude and had an overall flight time of 2 minutes. I used the Canon SX230 HS as my sensor, which has a 35 mm focal length, giving it a moderately narrow field of view for capturing images and therefore requiring a higher altitude. The ground resolution would be 3.08 cm/pixel, which, although unnecessarily detailed, will allow for better analysis of the study area. The number of images captured would be 18, in portrait mode, in order to reduce the total number of images needed to cover the entire parking lot. Due to the small study area and the moderately high capture altitude, this flight mission has only six flight lines, which greatly reduces the flight time.

Figure 4: Flight plan mission 1 using the 3DR Y6 and Canon SX230 HS
Flight plan mission 2 took place at an altitude of 50 m, giving it a minimum flight time of 4 minutes. For a sensor I used the NEX-5 16 mm fixed zoom camera, which due to its 16 mm focal length has an 83-degree field of view, allowing lower-elevation image capture without increasing the flight time. The ground resolution would be 1.60 cm/pixel and the number of images taken in portrait mode would be 44. Due to the lower elevation, this flight plan requires far more flight lines than the first mission, totaling 11.

Figure 5: Flight plan mission 2 using the NEX-5 16 mm lens
Flight plan mission 3 took place at 150 m altitude, giving it a brief flight time of only 1 minute. The camera I used was the NX1000, which has a variable zoom covering 20-50 mm focal lengths (we used 50 mm), giving it a very narrow field of view and forcing image capture at much higher altitudes than with the other cameras. The ground resolution would be 3.23 cm/pixel and the number of images would be 7, in landscape mode. This mission takes only three flight lines to complete due to its high altitude and the camera's narrow field of view.

Figure 6: Flight plan mission 3 using the NX1000 50 mm camera
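For reference, the ground resolutions Mission Planner reports follow the standard ground sample distance relationship. The sketch below shows that arithmetic with hypothetical sensor parameters (a generic small-sensor compact camera at mission 1's altitude), since the exact sensor dimensions Mission Planner assumes for each camera were not recorded in this lab.

```python
def gsd_cm_per_px(sensor_width_mm: float, focal_mm: float,
                  altitude_m: float, image_width_px: int) -> float:
    """Ground sample distance: ground width covered per pixel, in cm."""
    ground_width_m = sensor_width_mm / focal_mm * altitude_m
    return ground_width_m / image_width_px * 100.0

# Hypothetical compact camera: 6.2 mm sensor width, 5 mm true focal
# length, 4000 px wide images, flown at mission 1's 100 m altitude.
print(round(gsd_cm_per_px(6.2, 5.0, 100.0, 4000), 2))  # -> 3.1 cm/px
```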

Part 3: Real flight simulator

The platforms I chose to practice with were the ElectriFly ElectroStreak fixed wing and the DJI Phantom multi-rotor.

The ElectroStreak was relatively easy to use due to its lightweight frame and simple control mechanisms; it had a top speed of 38 mph, a flight time topping out around 10-15 minutes, and a fairly stable flight path. This platform would be useful for very brief missions over lengthy study areas, and due to its small turn radius it can make return passes with ease.

The DJI Phantom was moderately easy to use but had several more nuanced features in its control scheme that took some getting used to. Its speed maxed out around 15-18 mph when I disabled the self-stabilizing function to allow for sharper trajectory angles. The flight time peaked around 8-10 minutes and the flight patterns were extremely stable. Due to its ability to take off vertically and hover at fixed altitudes without moving, this multi-rotor would be excellent for remote sensing of land surfaces, or any other physical features for that matter, thanks to its maneuverability and relative ease of operation. It does have limitations on how long it can be used over large study areas due to its battery life, but then again so do all multi-rotors.

Part 4: Pix4D

After our professor had collected and stored the aerial imagery of the point bar underneath the student footbridge, we were able to use those photos in the Pix4D program, which creates 3D imagery from remotely sensed features by matching up pixels across overlapping photos and accounting for image depth. The resulting images from my processing show the outline of an encircled "24" made out of rocks on the point bar, along with the slight changes in elevation throughout the scene. The two products my processing produced were a digital surface model (DSM) and an orthomosaic. Reviewing the results, the images have a moderate level of accuracy, given that they account for each individual rock in the study area, but they still lack definition in some areas.




Discussion

Consultation scenario: Power line company

Based on the supplied information about how expensive helicopters can be and the necessity of having a nearby airport, I would highly recommend employing a UAS specialist with knowledge of multi-rotor platforms, or perhaps purchasing one unit and becoming proficient with it through field experience or flight simulators. The cost of one or two multi-rotor platforms and the needed training would be far more economical than recurring payments to a helicopter company for the imaging, given that the fuel to fly the helicopters far exceeds the cost of charging a battery. Depending on the study area and what needs to be inspected, I would suggest a digital camera with a very short focal length (15 mm and below) in order to have a large field of view for up-close image capture of the full width of each tower. If any other sensors must be attached to the platform, it may be advisable to purchase a larger model with a higher weight-bearing capacity. Lastly, I would recommend the use of Pix4D to create a high-resolution three-dimensional model of the towers, providing an intuitive way to inspect and allocate resources to potential problem areas.


UAS systems and software:

With the field of remote sensing booming, and unmanned aerial systems becoming more of a business and less of a hobby, there is an ever-expanding array of platform options for any given project, and the nuances of each fill niches that can only be served by certain models. Fixed-wing platforms excel at distance, speed, flight time, and handling, as I experienced in the flight simulator; still, their limitations leave many opportunities open to the booming market of multi-rotors. From what I experienced with the Mission Planner software, each has its place in differing assignments based on the study area, takeoff and landing availability, and which sensors are being used. In regards to sensors, it seemed that when the UAS was equipped with cameras with smaller fields of view due to longer lenses, the mission altitude had to be increased, which depending on weather patterns and climate may be a hazard for the platform and an incurred risk for the UAS operator. Lastly, with regard to Pix4D, it was amazing to see that a program exists that can create three-dimensional imagery from two-dimensional images, and how expansive its applications could be.

Sources

http://low-powerdesign.com/donovansbrain/wp-content/uploads/2013/06/arduplane.jpg

http://best-quadcopter.net/wp-content/uploads/2015/06/quadcopter-with-camera-2.jpg

Sunday, October 4, 2015

Lab 3: Conducting a Distance Azimuth Survey

Introduction

In this lab we were given an introduction on how to conduct a survey of spatial points using distance and azimuth measurements from a fixed focal position. This method of gathering data can be very handy, considering that you will not always be able to rely on accurate GPS technology: you may not be able to get satellite coverage, your study area may be obstructed by overhead cover such as tree canopies, or you may simply not have access to a GPS unit. To conduct a distance-azimuth survey all you need is a compass and a rangefinder, and using ArcGIS tools it is possible to construct an array of points more accurate than many standard GPS units would produce. To introduce the process before we conducted our own surveys, our professor gave us a quick tutorial outside Phillips Science Hall using the TruLazer rangefinder with its integrated compass.

Methods

For our study area, my group partner and I chose Carson Park, specifically the crest of a hill behind the baseball area, next to the playground equipment. This vantage point gave us plenty of features to record, and we had just enough space between the trees overhead to get a GPS signal for our x,y point in the form of latitude (y) and longitude (x) on our Garmin eTrex.

Figure 1: Study area in Carson Park for conducting the distance azimuth survey
Once we had our work station set up and a focal point chosen, I began listing off the object name, distance, and bearing for each feature we chose, and my lab partner Josie would record them onto a printed-off Excel sheet. We recorded information for features such as trees (small, medium, and large), cars, signs, buildings, benches, light posts, and other park utilities. After we had reached 100 features we transcribed the data into an Excel file for use in ArcGIS.

Figure 2: Excel table after having feature data transcribed
To use the Excel file data, we first constructed a file geodatabase and imported the spreadsheet into it. Then we ran the "Bearing Distance To Line" tool, which used our distance and azimuth fields to create lines radiating from our focal point to where the features lay. We then added a basemap for comparing our recorded data to the real-life surroundings, and based on the location of our point data the surveying techniques had been accurate and a success.

Figure 3: Line data of our features radiating from our focal point
Then we converted our line data to points using the "Feature Vertices To Points" tool, which gave us the final product we were aiming for.
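A minimal arcpy sketch of these two geoprocessing steps, assuming a hypothetical geodatabase, an imported table named survey, and illustrative field names (our actual field names differed):

```python
import arcpy

gdb = r"C:\GIS\Lab3\carson_park.gdb"      # hypothetical geodatabase
table = gdb + r"\survey"                  # imported Excel survey table

# Every row carries the focal point's coordinates plus the measured
# distance (m) and azimuth (degrees) to one feature.
lines = arcpy.management.BearingDistanceToLine(
    table, gdb + r"\survey_lines",
    "origin_x", "origin_y", "distance", "METERS",
    "azimuth", "DEGREES", "GEODESIC",
    spatial_reference=arcpy.SpatialReference(4326))

# Keep only each line's far endpoint, i.e. the surveyed feature's
# location (this tool requires an Advanced license).
arcpy.management.FeatureVerticesToPoints(
    lines, gdb + r"\survey_points", "END")
```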

Figure 4: Point data for our features
Afterwards I symbolized the feature points by their object IDs to help show the distribution of specific feature types in relation to the basemap. Many of the points were trees, benches, and signs, but we also had a few singular points such as the baseball-related buildings.

Figure 5: Carson park feature points


Discussion

Looking at the point data in relation to our basemap, it is evident that although most of our features line up with where they belong, some are outliers, likely caused by measurement error with the rangefinder. The farther out the feature was and the smaller its outline, the more likely I was to miss the actual object and measure the distance to something behind it.

Conclusion

By the end of the lab it was easy to see how useful a skill the distance-azimuth survey method could be, considering how little technology it uses, yet it was also evident that the method has several limitations and time-consuming constraints (such as the TruLazer's inability to operate correctly in areas with electromagnetic interference). Despite some user error with the equipment, our generated points were very close to their real-life counterparts on the basemap.

Wednesday, September 23, 2015

Lab 2: Visualizing and refining terrain survey

Introduction

This second lab was a revision and refinement of the methodology and surveying techniques used in our first lab to map the terrain of our sandbox. The goal was to assess and alter our surveying and measurement techniques depending on how accurate our initial Excel data was, and to either revisit our first sandbox or construct a new landscape if the old one had been destroyed during the week (ours was, so we created a new study area). Following our in-class discussion and demonstration of each group's survey techniques, our group found that our current techniques would suffice for the second rendition of the lab, so long as we took our professor's advice and measured key features more extensively. We decided that additional points would be measured around sharp changes in elevation to ensure that their topography was as accurate as possible, which would result in a more defined map and 3D elevation model in ArcScene. Our first sandbox's Excel x,y data looked relatively accurate in ArcMap when we ran the spline spatial analyst tool on it, because we had used a 14 by 15 grid of points (210 total), and this would be further augmented by a higher point density on key features.


Methods

Our first study area had been under the UWEC footbridge, but the sand there was not especially malleable or damp enough to hold its form, so we chose to set up our second sandbox in the Putnam Hall sand volleyball court. Luckily for us there had been slight precipitation that morning and it was an overcast day, so we had the whole court to ourselves and did not have to dodge incoming volleyballs. We set up the 4-foot by 4-foot wooden perimeter as before, making sure the study area was level, and set about creating our hill, ridge, valley, depression, and plain features.
Figure 1: Construction of survey area

We measured and marked our 14 by 15 grid every 4 and 8 centimeters (initial measurements moved inward by 1 cm to allow for whole-number grid squares), and this time we laid string only across the x axis, because we would be using the segmented ruler as a mobile y-axis marker for where each measurement would be taken.
Figure 2: Measuring x y grid
While taking measurements on flat or gradually sloped terrain we recorded elevation data every 8 centimeters, but once we approached our key features we switched to recording every 4 centimeters. This time, instead of putting our data into an x,y grid akin to Battleship, we used three separate x, y, and z fields, which could more easily be brought into an ArcMap document (we had quite a bit of trouble reformatting our previous Excel table to ensure compatibility with the software). Once all 238 points had been measured and recorded in the correct table format, they were transcribed into an Excel spreadsheet that we could then analyze using interpolation techniques.

Once the Excel data table was imported into ArcMap, we created a feature class from the x,y fields, which we could then run through several interpolation methods; after each interpolation finished we exported the layer as an image file to view in ArcScene.
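A minimal arcpy sketch of the batch of interpolations compared below, assuming a hypothetical geodatabase holding a sandbox_points feature class with a z field; optional parameters such as cell size are left at their defaults, and Spatial Analyst plus 3D Analyst licenses are required:

```python
import arcpy
from arcpy.sa import Idw, NaturalNeighbor, Kriging, Spline, KrigingModelOrdinary

arcpy.CheckOutExtension("Spatial")  # raster interpolators
arcpy.CheckOutExtension("3D")       # CreateTin

gdb = r"C:\GIS\Lab2\sandbox.gdb"    # hypothetical geodatabase
points = gdb + r"\sandbox_points"   # x,y point feature class with a "z" field

# Run each raster interpolator over the same elevation points.
Idw(points, "z").save(gdb + r"\elev_idw")
NaturalNeighbor(points, "z").save(gdb + r"\elev_natural")
Kriging(points, "z", KrigingModelOrdinary("SPHERICAL")).save(gdb + r"\elev_krig")
Spline(points, "z").save(gdb + r"\elev_spline")

# A TIN is vector-based and lives in a folder rather than a geodatabase.
arcpy.ddd.CreateTin(r"C:\GIS\Lab2\elev_tin",
                    in_features=[[points, "z", "Mass_Points", ""]])
```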

The first interpolation method was IDW, which interpolates a raster surface from our points using an inverse distance weighted technique. An advantage of IDW is that it works well with dense point collections; the disadvantages are that it requires points in close proximity (large gaps between points cause a generalized surface) and that the terrain model has little smoothing between points, which creates an unrealistic-looking surface. Compared to the real-life elevations of our study area, this interpolation method does not fit the landscape very well due to its frequent bumps in elevation.

Figure 3: Overhead symbolized view of IDW interpolation in ArcMap

Figure 4: Oblique view of IDW interpolation in ArcScene
The second method was Natural Neighbor interpolation, which uses the same basic weighted-average approach as IDW but smooths the elevations between data points to give the landforms a more realistic look. The advantage of Natural Neighbor is that it is a simple interpolation method capable of handling large point datasets; its disadvantage is that it still distorts the elevation model enough to look unrealistic. This method fits our real-life elevations quite a bit better, but elevations between data points are still skewed and look unnaturally pointed.

Figure 5: Overhead view of Natural Neighbors interpolation in ArcMap

Figure 6: Oblique view of Natural Neighbors interpolation in ArcScene
The third interpolation technique was Kriging, which takes into account spatial correlation that can help explain variation in the surface. The advantages of Kriging are that it is very useful in fields such as soil science and geology; its disadvantage is that for it to be accurate you have to know the spatially correlated distance or directional bias. Kriging is not a method well suited to our lab, given how it represented our elevation features, despite its usefulness in other fields of work.

Figure 7: Overhead view of Kriging interpolation in ArcMap

Figure 8: Oblique view of Kriging interpolation in ArcScene
The fourth interpolation method was Spline, which estimates elevation values using a mathematical function that minimizes the overall curvature of the surface, resulting in a smooth surface passing exactly through the data points' elevations. Its advantage is that it can accurately predict ridges and valleys in data sets; its disadvantage is that its usefulness is limited to specific fields of work. Spline gave us the most accurate 3D model representation of our point data and mirrored the real-life study area's landscape the best.

Figure 9: Overhead view of Spline interpolation in ArcMap

Figure 10: Oblique view of Spline interpolation in ArcScene

The last interpolation method was the TIN (Triangulated Irregular Network), a vector-based digital geographic data structure composed of triangulated vertices connected to one another to form triangles, each displaying its vertices' elevations in relation to neighboring vertices. The advantage of a TIN is that it requires fewer stored data points than a DEM, making data input more flexible. The disadvantage is that it is less suitable for certain GIS applications such as surface slope and aspect. This method created a fairly accurate representation despite its angular facets, because it shows distinct elevation changes and shadows, which helps visualize the terrain.

Figure 11: Overhead view of TIN interpolation in ArcMap

Figure 12: Oblique view of TIN interpolation in ArcScene
Of all the interpolation methods we used, Spline gave the most accurate representation of the real-life sandbox terrain.

Discussion

The terrain elevation data we gathered in our second lab was a much better representation of the actual physical surface than our first attempt, because we now knew to hone in on specific features and how to catalog our data points properly and efficiently. Our study area was much more suitable to our needs this time, which allowed us to create a study environment with both higher and lower elevation features. Although we were not able to resample our old study area, since it had been destroyed by pedestrian traffic and rain, we used our experience from the first attempt to reduce the number of errors and mistakes made in the new one. With the maps produced by the five interpolation methods we now know, the analysis of our study area's physical features is much more nuanced and can be approached from several angles depending on which model we use.

Conclusion

Over the two weeks it took to conduct these labs, my knowledge of rudimentary surveying skills has grown and been refined through trial and error, consulting my fellow peers and professor, and tinkering with ESRI programs like ArcMap and ArcScene. I now have several skill sets that will easily transfer to other classes I am currently taking, such as Web GIS and Computer Mapping, and to future geography classes. Having to conduct elevation surveys twice using rudimentary and time-consuming methods has given me renewed appreciation for the technology that automates those processes, and has instilled a curiosity about how they work that will hopefully be satisfied in future labs.