Sunday, November 29, 2015

Activity 8: Developing a Project on ArcCollector

Introduction


Smartphones now match, and in some cases exceed, the capabilities of many GPS units. They can locate coordinates and access online information, and with the development of the ArcCollector app, they enable you to map information and update it live online. The objective of this exercise was to gain practice creating a map with appropriate features and domains and collecting data points using the ArcCollector app. In my project, I mapped squirrels around lower campus to see if there was any distributional pattern among types of squirrels and whether there were any behavioral tendencies. Data collection was done on Friday, November 27, 2015 between 2 pm and 3:30 pm. The study area consisted of the entire University of Wisconsin-Eau Claire lower campus region on the southern side of the Chippewa River footbridge and the southern path in Putnam Park behind Davies Center. The temperature was approximately 25 degrees F (-3.9 C) and the weather was sunny and calm.


Methods


To begin the activity, I first created a geodatabase to store the domains and then created domains and a feature class in ArcCatalog. I created domains to set acceptable values for attributes regarding the squirrels I would be mapping later. For each domain, I had to choose a type: range domain (set allowable range of values for an attribute) or coded domain (set codes as values for an attribute). I created a coded domain for the squirrel color (grey, red, black, or white), a range domain for date, a range domain for temperature, a coded domain for behavior (eating, watching, or burying food), and a coded domain for location (ground, tree, or garbage can).
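The difference between the two domain types can be illustrated with a small validation sketch in plain Python (not arcpy): a coded domain enumerates the allowed values, while a range domain only bounds them. The domain names and coded values below follow the squirrel feature class described above; the temperature bounds are hypothetical stand-ins.

```python
# Coded domains: the attribute may only take one of the listed values.
CODED_DOMAINS = {
    "Color": {"grey", "red", "black", "white"},
    "Behavior": {"eating", "watching", "burying food"},
    "Location": {"ground", "tree", "garbage can"},
}

# Range domains: only a minimum and maximum are stored.
# These bounds are illustrative assumptions, not the ones I actually set.
RANGE_DOMAINS = {
    "Temperature": (-40.0, 110.0),  # degrees F
}

def is_valid(attribute, value):
    """Return True if `value` is allowed by the domain on `attribute`."""
    if attribute in CODED_DOMAINS:
        return value in CODED_DOMAINS[attribute]
    if attribute in RANGE_DOMAINS:
        lo, hi = RANGE_DOMAINS[attribute]
        return lo <= value <= hi
    return True  # no domain set: any value is accepted
```

This is exactly the check ArcCollector performs in the field: a coded domain shows up as a pick list, while a range domain simply rejects out-of-bounds entries.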

I then created a point feature class using these domains and named it "Squirrels," as this feature class would later be used in the ArcCollector app to collect data points on squirrel locations and behavior. Once this was finished, I published the feature class to ArcGIS Online by following the ArcGIS instructions for preparing data. Once the feature class was published, I created a map in ArcGIS Online by adding a basemap, setting the address to the UWEC campus address so it would zoom in on the study area, and adding the published feature class to the map. The steps to prepare a map for ArcCollector can be found in the ArcGIS documentation (see Sources). It was then ready for data collection.

The data collection process ran smoothly, as it is as simple as clicking the plus symbol to add a point and choosing the correct coded value, or entering the correct range value, for each of the attributes set by the domains created previously. I walked the entire area of the southern lower campus and added all the squirrels I could visually spot. I did not add squirrels if I walked a particular route a second time; this was a measure taken to eliminate double entries. However, I realized I had made my domains rather poorly. For instance, I had one variable labeled incorrectly, and I observed more behaviors than the few I had set as codes for the behavior domain.

To fix this, I went back to ArcMap and edited my domains to re-label the incorrect domain and add the code "running" under the Behavior domain. I re-published the feature class to ArcGIS Online and created a new map on the ArcGIS website. I then collected data in the same manner as before with the new feature class (Fig. 1). Only 22 squirrels were found and mapped. This time, there were no errors in the domains.

Fig. 1: Final domains within the geodatabase used to create the feature class. This figure shows the Behavior domain highlighted at the top and its code values listed at the bottom.

Results


The resulting map showed 22 squirrels found throughout the University of Wisconsin-Eau Claire lower campus on November 27 (Fig. 2). Interestingly, the squirrels were distributed in a clustered manner--they were found in groupings rather than spread randomly across the study area. The map also suggested a potential territory based on squirrel color: black squirrels were found only in one area near Katherine Hall, where few grey, red, or white squirrels were seen, while grey and red squirrels were abundant in all other groups and no black squirrels were found in those areas. The map can be found either by following this link:
http://uwec.maps.arcgis.com/home/item.html?id=67ec369bb4d542a786ae164e0ad5fcdf or by logging on to ArcGIS Online and locating the map in either the UWEC group or the public gallery. It is published as "Aumann_Collector_Map." Metadata for the squirrel feature class was created in ArcGIS (Fig. 3).



Fig. 2: Map of squirrels by color on the lower campus of UW-Eau Claire using the feature class and map published on ArcGIS Online. Cartographic aids (legend, compass rose, etc.) were added after the online map was brought into ArcMap for desktop.
Fig. 3: Metadata of Squirrel feature class after the online map was brought into ArcGIS.

Discussion


After completing this activity, I found that using the mobile app for data collection is a great alternative to a GPS unit. It allows you to update the data on a basemap on the fly and is very user-friendly. However, you cannot edit domains once they are uploaded to ArcGIS Online, so it is crucial to have a detailed, well-built feature class before uploading it and collecting data. Creating your own geodatabase and feature class and setting domains is not as complicated as it may sound, though it is very easy to miss details when setting the domains. The biggest problem is not knowing what you could run into in the field and not accounting for all the unknown factors. I think it would be best practice to test the potential domains before creating the final product for your data collection process, or at least to add a couple of fields to log occurrences of your study subject with attributes not accounted for in the domains, so you can go back and edit them after your data collection is completed.

Conclusion


During this activity, I practiced creating a geodatabase with a feature class governed by domains I set within the geodatabase, and publishing this feature class to ArcGIS Online to create a map. I took this map and collected data using the ArcCollector mobile app. I learned that preparation is key before data collection, and that it is easy to forget, or not even be aware of, factors needed for the data collection process when setting the domains. In future projects, I will make sure a preliminary test is done in the field before creating the final map, or that fields are added to the feature class in which I can store locations and edit the attributes after the data collection process is finished.


Sources:


"Prepare your data in ArcGIS for Desktop" Instructions: http://doc.arcgis.com/en/collector/windows/create-maps/prepare-data-desktop.htm




Sunday, November 22, 2015

Activity 7: Topographic Surveys with Dual-Frequency GPS and Total Station

Introduction


The goal of this activity was to learn how to set up and utilize both a Dual-Frequency GPS and a Total Station to run topographic surveys. During the first week of this activity, we used a Dual-Frequency GPS and created a TIN from the resulting topographic data. During the second week, we did the same with a Total Station. 

Our study area was within the campus common areas on the University of Wisconsin-Eau Claire campus. My group surveyed the outskirts of the commons area along Little Niagara Creek in front of Davies Center. The first week, my group surveyed the eastern portion of the creek near Phillips Hall on the east side of the bridge (Fig. 1), and the second week, my group surveyed the western side of the same bridge (Fig. 2). The first week, using the Dual-Frequency GPS unit, we surveyed on November 5 from 9 am to 11 am; it was cloudy and chilly, but there was no wind. The second week, using the Total Station, we surveyed on November 12 from 12 pm to 2 pm, and there was a significant amount of wind.

Fig. 1: Study area for week one using the Dual-Frequency GPS to survey
the eastern side of the bridge crossing Little Niagara nearest to Phillips Hall.
Fig. 2: Study area for week two using the Total Station to survey
the western side of the bridge crossing Little Niagara nearest to Phillips Hall.

Methods


Equipment

Equipment used for surveying with the Dual-Frequency GPS included the TopCon HiPer S4, the TopCon Tesla, a MiFi portable hotspot, and a tripod stand on which to attach both TopCon devices (Fig. 3). The TopCon HiPer S4 served as the GPS receiver unit and screwed into the top of our tripod stand to ensure the height from the ground was consistent. We used the TopCon Tesla to create our files and record our data, and the MiFi portable hotspot to ensure we always had a network connection. For week two, the Total Station setup included both the MiFi portable hotspot and the Tesla, as well as the TopCon Total Station situated on a tripod stand, and a prism (Fig. 4). With these pieces of equipment, we used the Tesla to record points, the MiFi to maintain the connection, and the Total Station to shoot points toward the prism to gather positional data. Because the Total Station did not record its elevation from the ground automatically, we needed to measure the elevation of the Total Station manually.

Fig. 3: TopCon HiPer S4 (left), TopCon Tesla (center), and MiFi Portable Hotspot (right) comprised the Dual-Frequency GPS unit from which we surveyed topography the first week.

Fig. 4: The TopCon Total Station (left) and the Prism (right) in addition to the equipment listed in use for the Dual-Frequency GPS survey were used to survey topography the second week.


Recording Points with the Dual-Frequency GPS


Fig. 5: Collecting points using the Tesla after leveling the tripod.
My partner, Scott, and I set up our tripod stand in our study area with all necessary equipment attached. Because the TopCon HiPer S4 collected elevation data on its own, we did not need to measure this manually. We simply created our job in the Tesla, leveled the tripod with the attached equipment, and gathered the point with the Tesla. This was a fairly simple process once we created our job, as all that was necessary was to level the tripod and press the "collect" button. We did this 100 times to collect a total of 100 points (Fig. 5). In this lab, one person leveled the tripod while the other collected the points on the Tesla. However, because the Tesla was in demo mode, we were forced to create 4 jobs, each collecting a maximum of 25 points, to complete the data set. While collecting points, we tried to stay fairly regular in spacing between points, except in areas where the slope of the landscape was more drastic. In these areas, we tried to collect more points to accurately survey the slope.


Fig. 6: Screen shot of a portion of the resulting
combined text file table later imported to ArcMap.


At the end of the collection process, we saved our data and exported each job to a file. We then transferred the files from the Tesla to the computer via USB. The resulting text tables had to be combined and then normalized into one single table with headings transferable from a text file to ArcMap. To do this, we copied and pasted the data from three of the job text files into the fourth, then altered the top row of text to include the name of the point, the latitude (N), the longitude (E), and the height, with appropriate commas separating each column (Fig. 6). To properly import it into ArcMap, we needed to specify our X value as the longitude (E), our Y value as the latitude (N), and our Z value as the height (Z) in the Import XY Data window.
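The merge-and-normalize step above (done by hand in a text editor) can be sketched as a short script. The file contents and column names below are illustrative stand-ins for the actual Tesla exports: each job file carries its own header row, and only the first one is kept.

```python
import csv
import io

# Stand-ins for the exported job text files; the real exports had up to
# 25 point rows each. Column names follow the normalized table headings.
job_files = [
    "Name,Lat (N),Lon (E),Ht (Z)\nP1,44.7980,-91.5000,240.1\n",
    "Name,Lat (N),Lon (E),Ht (Z)\nP26,44.7981,-91.5002,240.3\n",
]

def merge_jobs(files):
    """Combine several job tables into one header plus a single row list."""
    rows = []
    header = None
    for text in files:
        reader = csv.reader(io.StringIO(text))
        file_header = next(reader)   # read and discard each file's header
        if header is None:
            header = file_header     # keep only the first header row
        rows.extend(reader)
    return header, rows

header, rows = merge_jobs(job_files)
```

The resulting single table, written back out as comma-separated text, is what ArcMap's Import XY Data window expects.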






Recording Points with the Total Station


For this portion of the activity, we were broken into groups of 3, as opposed to the groups of 2 used for the first portion. This was because it was easier to work the equipment with three people--one to shoot the Total Station at the prism, one to hold the prism over the point being collected, and one to collect the points with the Tesla.

To begin this survey, we first collected a back point in order to establish the location of the Total Station. We did this by collecting one point using the same method and equipment we used during the Dual-Frequency GPS survey. This back point was logged in the same job as the other collected points. We then began to set up the Total Station--the most time-consuming portion of the activity. We first positioned the Total Station atop the tripod and began to level it so that the laser from the Total Station facing the ground sat over our desired point--the occupancy point (Fig. 7).

Fig. 7: The laser from the Total Station lined
up over the occupancy point.
Once the Total Station was leveled on the tripod stand over the occupancy point, we leveled the Total Station itself. We swiveled the Total Station in all three directions it allowed in order to point the laser on one of the faces of the Total Station with which we shot the prism to record the data. We leveled the Total Station when it faced each of these directions by twisting the circular knobs on its base, which we positioned at "neutral" to begin leveling properly (Fig. 8). We made sure to twist only one knob each time we re-directed the Total Station so as not to interfere with previous levelings.

Fig. 8: My group mate, Peter, leveling the Total Station by twisting the knobs at its base.

When we finished leveling the Total Station, we began shooting our points with the laser from the Total Station to the prism. We made sure we were neither too far from nor too close to the Total Station, so the laser could be received and sent back relatively quickly and without trouble. We sampled both sides of the river and the edge of the river itself. This time, we recorded only 25 points in one job. The resulting table had to be normalized in the same manner as the text file table from the Dual-Frequency GPS (Fig. 8). We then imported the data as XY data and specified the X, Y, and Z values. From the resulting point data, I created a TIN and a breakline to characterize the river edge.

Fig. 8: Total Station normalized text file table.

Results


I created a TIN for the Dual-Frequency GPS and a TIN for the Total Station survey points and displayed them on a map (Fig. 9). Metadata was added to each TIN (Fig. 10).

Fig. 9: TINs of the Dual Frequency GPS and the Total Station survey points.

Fig. 10: Metadata created for Dual Frequency GPS TIN (top) and Total Station TIN (bottom).

Discussion


Because both the Dual-Frequency GPS and the Total Station are survey-grade instruments, I was interested in comparing the accuracy of the two. However, because we operated only on demo versions of the Tesla software, and because of the amount of time these tasks required, I was not able to collect the same number of points for both TINs. As I was also in different groups for the Dual-Frequency GPS portion of this lab and the Total Station portion, I did not get the opportunity to survey the same area. However, I was able to survey areas in very close proximity and with similar elevation characteristics for both portions of the lab. With this, I can at least say that both pieces of equipment detected very similar elevations around the bank of Little Niagara.

Because the back point was included in the TIN, however, I am inclined to say our result for the Total Station may be less accurate, as there were not enough points to characterize the space between our study area and the included back point. This may be why the elevations near the sidewalk nearest the TINs in the north are so different.

The Total Station was much more time-consuming to set up than the Dual-Frequency GPS. Setup time for the Total Station was approximately 1 hour, while setup for the Dual-Frequency GPS was almost instant. However, because the Dual-Frequency GPS unit needed to be physically moved to each survey point and re-leveled for every point, the Total Station may be a better choice if many points need to be collected. Once setup for the Total Station was complete, collecting the points took only approximately 1-1.5 minutes per point.
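A back-of-envelope comparison shows where the trade-off flips. The Total Station figures (about 60 minutes of setup, 1.5 minutes per point) come from our experience above; the Dual-Frequency GPS per-point time, which covers moving and re-leveling the tripod, is an assumed figure we did not measure.

```python
def total_minutes(setup_min, per_point_min, n_points):
    """Total survey time: one-time setup plus per-point collection."""
    return setup_min + per_point_min * n_points

TS_SETUP, TS_PER_POINT = 60, 1.5    # Total Station, from our survey
GPS_SETUP, GPS_PER_POINT = 0, 3.0   # GPS: near-instant setup; 3 min/point
                                    # is a hypothetical assumption

# First point count at which the Total Station becomes the faster option.
breakeven = next(
    n for n in range(1, 1000)
    if total_minutes(TS_SETUP, TS_PER_POINT, n)
    < total_minutes(GPS_SETUP, GPS_PER_POINT, n)
)
```

Under these assumed numbers, the Total Station's setup cost pays off somewhere past a few dozen points, which matches the intuition that it suits larger surveys.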

A drawback to the time-efficiency of the Total Station is that its accuracy and efficiency are fairly weather-dependent. The prism is held on a monopole and faced toward the laser on the Total Station. The prism must not move, or the point will either not be collected or will be collected improperly. While we were out collecting points with the Total Station, my group had some difficulty at the beginning with the wind twisting the prism face away from the Total Station, resulting in an inability to collect. It is also rather difficult to keep the monopole steady in such winds, and occasionally the pole would sway too much and again result in the Total Station's inability to collect the point.


Conclusion

Through this exercise, I gained experience using the Dual-Frequency GPS unit and the Total Station unit to collect elevation data. Overall, both instruments have relatively accurate data collecting capabilities, as both are survey-grade. However, they vary in setup time and data collection time--the Total Station takes more time to set up but less time to collect individual points than the Dual-Frequency GPS. The Total Station is more sensitive, though, and may not collect data points if it is too windy or if the person holding the monopole with the prism is not steady enough. Both pieces of equipment have advantages and disadvantages, and ultimately it is the data collector's choice which to use. Knowledge of these differences, however, is important before beginning a project.



Sunday, November 1, 2015

Activity 6 Navigation: Priory Navigation

Introduction


This week's lab is a continuation of last week's exercise developing navigational maps. For this lab, the class was split into groups of 3 and used the maps created in last week's lab to navigate through the Priory in Eau Claire, Wisconsin. Our objective was to find 5 locations previously marked by the professor and other classes with pink ribbons labeled with a site number. To do so, we were limited to our maps and a standard compass equipped with a ruler, using a GPS (Fig. 1) only to log our trail for later evaluation and in cases where no marked location could be found (as the locations were marked in previous years, ribbons may have fallen off or been blown away by the wind). The intention of this lab was to help us gain experience using basic navigational tools for situations in which newer technology fails or is inaccessible. We also used this lab to evaluate the effectiveness of the maps created in the previous lab.

Fig. 1: Standard compass (left) equipped with ruler on the edge to plan our routes and
navigate and a GPS unit (right) to log our track.

Methods


The first step was to mark on our maps the coordinates of the five locations given to us by the professor (Fig. 3). We then compared our mapped locations with each group member's locations to make sure we had marked each location correctly. We then chose a starting point from which to navigate to our first location. We chose to start from a tree near the entrance of the parking lot closest to the forested area of the Priory, as this was an easily found location on the map and we could correctly determine the distance and direction we would need to travel to find our first marked location.

Fig. 3: Mapping coordinates of our 5 locations and determining routes.

After mapping our locations and determining a starting point, we had to plan our routes to each of the 5 locations. This was done by using the compass edge to draw straight lines between each point location on the map, measuring the length of each route, and converting it from centimeters to meters using the scale on the map (1 cm : 35 m) (Fig. 5). The pace count was determined as we navigated from point to point, since the person counting paces and keeping direction with the compass switched periodically throughout the lab. It was important to determine the pace count between locations on the fly, as opposed to at the beginning of the lab, because each person has a different pace count. For instance, I walked 67 steps per 100 meters, while another groupmate, Katie, walked 65 steps per 100 meters.

Fig. 5: Scott connecting each point location and measuring route length to plan our routes.

Next, we began navigating through the forest to our 5 locations. We used our UTM map as opposed to our decimal degrees map, as it was easier and more accurate to calculate our distances in meters and pace count conversions using UTM. It is important to note that each group member played a different role during the navigation portion of this exercise. One person counted paces, holding the compass close to their body to make sure they were walking in the correct direction while keeping track of the distance traveled. Another person stayed behind to make sure the navigator was traveling in the correct direction, since a person who must avoid brush or trees can skew their angle of travel. The last person either walked with the navigator, keeping count of paces and verifying direction, or removed brush from the path ahead. Normally the third person would hold the compass and walk with the person counting paces; however, we thought it was easier for the person counting paces to look at the compass themselves and for another person to help clear the path, as the terrain was fairly rugged and we wanted to limit the error coming from pace changes due to obstacles en route.

Our route to point 1 was calculated to be approximately 280 meters in length, as the line drawn on the map from our starting point to point 1 was 8 cm (8 cm × 35 m/cm = 280 m). I was the first to count paces and navigate, so we used my pace count (67 steps per 100 meters) to calculate the number of paces it would take to cover 280 meters and determined it would take approximately 188 steps (280 m × 67 steps/100 m = 187.6 steps). After the bearing was set in the correct direction toward the first location, I started navigating toward the first point with the compass held to my chest, counting paces, while my groupmates made sure I kept the heading I had started on and cleared the path in front of me as best they could (Fig. 6). We followed this same method for locating the remaining 4 locations, switching roles periodically.
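The route arithmetic above can be sketched as a small script, using the map scale (1 cm : 35 m) and a personal pace count in steps per 100 m:

```python
MAP_SCALE_M_PER_CM = 35  # 1 cm on the map = 35 m on the ground

def route_distance_m(map_cm):
    """Convert a measured map distance (cm) to ground distance (m)."""
    return map_cm * MAP_SCALE_M_PER_CM

def paces_needed(distance_m, steps_per_100m):
    """Steps required to cover a ground distance at a given pace count."""
    return round(distance_m * steps_per_100m / 100)

dist = route_distance_m(8)       # 8 cm measured on the map
steps = paces_needed(dist, 67)   # my pace count: 67 steps per 100 m
```

With my groupmate Katie's pace count (65 steps per 100 m), the same route would instead come out to `paces_needed(280, 65)` steps, which is why the pace count had to be recalculated whenever the navigator switched.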

Fig. 6: Navigating to the first location with Katie in front clearing the brush, myself counting paces and keeping an eye on the compass, and Scott making sure we did not alter our direction unknowingly.

Some routes between locations were across steep ravines. In these instances, we estimated approximately how much the slope would change our pace count by looking at the distance between our feet upon taking a first step on the slope and comparing it to the distance between our feet on a normal step in a flatter terrain.

In instances where trees, both upright and fallen, created obstacles in the direction we needed to travel, we would stop to visualize the straightest path possible and choose an easily recognizable feature along the line of bearing, to ensure we stayed true to the bearing and to help eliminate some pace count error while avoiding obstacles.

Results


We were able to successfully locate only 3 of our 5 marked locations, and not all were found right away. For the first location, after walking the correct number of paces in what we thought was the correct direction, the marker was nowhere to be found. Katie wandered ahead to look for the marker while Scott and I stayed back to hold our current position so as not to get lost. Katie found a marker; however, judging from the map's contour lines, we figured this was not the marker we were searching for. We consulted the GPS unit and found we were headed in the correct direction but had not traveled enough steps to reach the marked location. We continued on to find another marker. From here, we calculated the number of paces to the next location with Scott as the navigator and pace counter. However, upon arriving at the next location, we realized it was actually our location 1 marker rather than our location 2 marker. Because these locations lay along a similar bearing from the origin, we must still not have traveled far enough to reach our location 1 marker the first time. From here, we readjusted our bearing and used the same pace calculations to reach our location 2 marker.

Upon reaching the supposed location of our second marker, it was nowhere to be seen either. We again consulted the GPS unit and discovered our pace count was correct despite the steep ravines traversed in the process, but we had traveled slightly farther west than was needed. We headed in the correct direction but were still unable to locate the marker until we relied entirely on the GPS unit, as opposed to the compass, to find it.

Our 3rd location marker was found with little difficulty; our distance in paces was correct and we were only minimally off the bearing. Our fourth marker was never found; however, we found a tree in the exact location the marker should have been, using the GPS to verify. This must mean we had found the location without trouble, but the marker had disappeared. For location marker 5, we experienced the same problem--finding the location where it was supposed to be, but with no marker. From there, we navigated back to our origin point successfully.

Discussion


After navigating to our designated locations, I felt the maps we used required adjusting. Though ornamental, the imagery basemap did not aid our navigation through the wooded regions and was less important than I had originally assumed, though it was helpful in finding an origin point from which to start navigating to each of our marked locations. However, our contour lines, which we relied on the most, could stand to be of higher precision, as a big problem we faced was the accuracy of our pace counting over steep terrain. Higher-precision contours could also have helped us locate the first and second marked locations, as we should have been able to tell from the slopes of the terrain where we were in relation to the marked locations.

Conclusion


This lab allowed us to practice useful navigation techniques using traditional pace counting and compass methods along with operation of a GPS in the field. The traditional methods of pace counting and compass reading can be used in situations where a GPS cannot receive a signal. One thing to always consider when navigating with this technique is to adjust estimated pace count distances according to the slope of the terrain and the number of obstacles in your straight path. Using landmarks such as distinct trees and rocks is useful in maintaining a straight course. It is always useful to make sure prior planning is done correctly and in a detailed manner. If you are selecting or creating your own map to navigate a study area, make sure the information and the type of projection are suitable for your study area.







Friday, October 23, 2015

Activity 5: Development of a Field Navigation Map

Introduction


In next week's exercise, we will be navigating through the Priory in Eau Claire. In order to prepare, just as in any other navigational task, we needed to create a tool to help us navigate. Sometimes this tool is a GPS, a map, or even the sun and stars, which seafarers relied on as late as the 1700s. For our navigation through the Priory, we made a map from which we can find locations by treating foot steps as an approximate, defined distance (the pace count method), along with a compass and a GPS.

Methods and Results


We were placed randomly into groups of three. Each group member created two maps of their own: one using the UTM coordinate system for our area and one using WGS84 in decimal degrees. We then handed in our best map for printing to use in next week's lab.

We used data from a Priory Geodatabase created by Dr. J. Hupy. Our first step was to decide which features to include on our maps. From this data, I decided to use Priory aerial imagery for a basemap and 12-foot interval contour lines as any interval less than that cluttered the map and any more would not give us an accurate reading of elevation changes. I also used the Priory boundary to depict our study area and clip contour lines to the area of interest. I left out other irrelevant feature classes in the geodatabase such as the no shooting zone. 

I added all the necessary map information including:
  • north arrow
  • scale bar (meters)
  • RF scale
  • projection name
  • coordinate system name
  • labeled grid
  • basemap
  • list of sources
  • my name

The first map utilized WGS84 (World Geodetic System 1984) (Fig. 1). WGS84 is a coordinate system in which coordinates are generated using the Earth's ellipsoid shape as a spatial reference (2). The position units are given in decimal degrees. Maps utilizing a geographic coordinate system like WGS84 are typically used for operations over large study areas.
Fig. 1: Priory Map utilizing GCS_WGS_1984.


For the second map, I created gridlines representing coordinates in NAD 1983 UTM Zone 15N. The Universal Transverse Mercator (UTM) grid splits the world into 60 zones, each only 6 degrees of longitude wide. This lessens the distortion due to projection from the spheroid shape of the Earth to a flat map surface (1). Because the grid lines cover a more localized region, with the associated minimal distortion, UTM is typically used for smaller study areas such as states and counties. Metadata for both maps was then created in Map Document Properties (Fig. 3).
Fig. 2: Priory map utilizing NAD 1983 UTM Zone 15N.
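The zone for a given longitude follows directly from the 6-degree scheme described above; a quick sketch (ignoring the standard exceptions around Norway and Svalbard):

```python
import math

def utm_zone(lon_deg):
    """UTM zone number: 60 zones, each 6 degrees of longitude wide,
    numbered eastward starting from 180 degrees W."""
    return int(math.floor((lon_deg + 180) / 6)) % 60 + 1

# Eau Claire, WI sits near 91.5 degrees W, which falls in Zone 15,
# matching the NAD 1983 UTM Zone 15N grid used on the map.
zone = utm_zone(-91.5)
```

The narrow zone width is exactly why the projection distortion stays small over an area the size of the Priory.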


Fig. 3: Metadata for the GCS map (right) and the UTM map (left).


Discussion


For this project, we had to think critically about the amount and type of information we included in our maps in order to navigate effectively and map points within a relatively accurate margin in a later exercise. I decided to include 12-foot contour lines, a boundary of the study area, and aerial imagery. Having too much information on the map would crowd it and make navigating more difficult. It is for this reason that I included 12-foot contour lines as opposed to 2-foot contour lines to show elevation data useful in pace counting, as your stride changes with any slope. I added aerial imagery as a basemap, as this may help in visually finding features, and a boundary of our study area as a good measure of the area we should remain inside during our navigation exercise.

Conclusion


This week's lab was meant to solidify knowledge of designing maps. It is critical to understand coordinate systems, projections, and grids in order to create a map that will be most useful in a particular scenario. It is also important to understand the reason for making the map and what it will be used for, as this will affect the amount and type of information that must be represented. The purpose of each map dictates how the map should be designed, and understanding this purpose is necessary to create a map that efficiently accomplishes its goals.


References


1) http://pubs.usgs.gov/fs/2001/0077/report.pdf
2) http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf


Thursday, October 15, 2015

Activity 4: Unmanned Aerial Systems Mission Planning

Introduction


Unmanned aerial systems (UAS) are useful tools for collecting a variety of data types. They can be used to collect atmospheric measurements such as temperature, ozone content, and humidity. They can also be used to create aerial maps of specific features of interest such as mines, college campuses, construction sites, and logging locations. Though valuable in the field as a relatively quick method of acquiring quality data, unmanned aerial systems require extensive planning before use. In this lab, our objective was to become familiar enough with different types of UASs and with mission planning, through demonstration flights and computer software, to apply our knowledge to a hypothetical scenario in which a client seeks advice on UAS types to complete their own data collection objectives.

Methods and Results

Demonstration Flights

At the start of the activity, our professor, Dr. Hupy, familiarized us with the two current types of UASs: fixed-wing systems (Fig. 1) and rotary-wing systems (Fig. 2).

Fixed Wing Systems

Fixed-wing systems are named for their immovable wings. They usually have one wing over the top of the body and are driven through the air by propellers at the nose, much like commercial aircraft. A major advantage of these UASs is their ability to fly longer distances, at higher speeds, and for longer durations than multi-rotors: they can fly up to 1.5 hours and at no less than 14 meters/second. This lets them cover larger areas in a shorter amount of time. In addition, they are generally able to carry heavier equipment, though this depends on the particular UAS.
Alongside these advantages, fixed-wing systems have some disadvantages one must take into account before choosing them for a job. One consideration is the space they require to launch, land, and change direction. To launch, a fixed-wing must have enough room to build speed on the ground and gain lift. Similarly, it needs room to slow to a halt after landing, and changing direction requires a minimum of 160 feet.
Fig. 1: Fixed-wing unmanned aerial system. Picture taken from http://www.buildadrone.co.uk/what-do-i-need.html.
Multi-Rotor Systems

Multi-rotor systems more closely resemble helicopters. They typically have between one and six rotors that generate lift. A major advantage of the multi-rotor system is that its rotors allow it to move easily in any direction. Unlike fixed-wing systems, multi-rotors can change direction without any required space to do so. Similarly, they need no space for launch and landing, as they can move straight up and down. These characteristics, along with their automatic adjustment for wind, make multi-rotor systems very easy to fly. The ability to hover, adjust for wind, and change direction easily allows a multi-rotor to capture detailed data and focus on areas with complex features.
Disadvantages of multi-rotor systems include their slow speed (they do not fly over 12 meters/second), their short flight time (generally around 30 minutes), and their cost. Because of their more complex design, multi-rotors are typically more expensive than their fixed-wing counterparts.
Fig. 2: Multi-rotor unmanned aerial system. Picture taken from http://www.dohenydrones.com/the-drone-for-you-fixed-wing-versus-rotary-wing.

Flying the Quadcopter

After learning about the types of unmanned aerial systems and their qualities, the class took the university's DJI Phantom quadcopter (a four-rotor multi-rotor system) to map features along a portion of the Chippewa River floodplain in Eau Claire, Wisconsin, below the campus footbridge. The DJI Phantom was flown manually as we captured aerial photos of several features in the floodplain: a straight stretch of the floodplain running parallel with the river, the landscapes our class sculpted in Activity 1, and a "24" symbol made of rocks (Fig. 3). This UAS type was appropriate for our data collection because we were focused on obtaining high resolution and accuracy for a few small features within a small area. Speed and long flight times were not required. We needed a UAS that was easily maneuverable at varying altitudes and could hover over small features to obtain many photographs of each one. A total of 322 images were taken across the three features, with emphasis on the features themselves and little attention to spatial connectivity between them. Since the features were relatively small, we were able to collect photos with enough overlap to avoid gaps in the spatial data. Images were later uploaded and processed into maps using Pix4D.

Fig. 3: Flying the DJI Phantom quadcopter to collect aerial photography of a stretch of the Chippewa River floodplain running parallel with the river.


Using Software

Three different computer software programs were used to complete this activity. Pix4D was used to create DSM and orthomosaic maps from the DJI Phantom aerial photographs, MissionPlanner was used to investigate parameters of planning or programming automatic flight plans, and RealFlight Flight Simulator was used for practice in flying UAS's and observing flight behaviors of the two different types of UAS's. 

Pix4D

After the aerial imagery was taken with the DJI Phantom quadcopter, we each chose a single feature in the dataset from which to create a DSM (digital surface model) and orthomosaic. I decided to map the "24" symbol. A total of 91 images were collected of the "24" symbol specifically, but only 20-30 images were necessary to create an accurate DSM and orthomosaic; beyond roughly 30 images, the extra processing time yields diminishing returns in accuracy. Processing aerial photographs in Pix4D takes considerable time and demands substantial processing power from the computer running the program, so it is best to use only as many photographs as needed for an accurate product. For this reason, I uploaded 28 photos that were taken consecutively by the quadcopter, so as to avoid missing portions of the feature, and that varied little in coloration, since clouds were moving in front of and away from the sun as the photos were captured.

Fig. 4: Finished product in Pix4D of the orthomosaic map. This shows both the overlay of the photos and the mosaic itself.

Fig. 5: The DSM finished product of the "24" symbol feature. This displays elevation where dark color indicates lower elevation and lighter color indicates higher elevation.

Fig. 6: Orthomosaic. This contains information on the elevation and depicts real-life map of the feature of interest--the "24" symbol. 

The resulting DSM and orthomosaic show little error. In the DSM, the elevation data, symbolized by a gradient color scheme from black to white (dark colors indicating low elevation and light colors indicating high elevation), accurately depicts the elevation changes in the feature (Fig. 5). The orthomosaic seamlessly blends the borders of each individual aerial photograph. The difference in coloration of the land is due to differences in coloration among the photos used to create the orthomosaic (Fig. 6). This could have been avoided by selecting only photos of similar coloration: photos taken when the sun was either shining or blocked by clouds, but not both. Metadata for both the DSM (Fig. 7) and the orthomosaic (Fig. 8) are pictured below.

Fig. 7: Metadata for the finished DSM.

Fig. 8: Metadata for the finished orthomosaic.



Mission Planner

We used Mission Planner to investigate the parameters involved in pre-planning a UAS flight. This pre-planning is useful before both programmed automatic flights and manual flights, as it is always important to consider parameters and plan before flying a UAS. In Mission Planner, we adjusted the camera angle, altitude, and speed of the UAS to see how these affected the estimated flight time and the number of flight paths.
Upon adjusting these parameters, I noted a few things to take into consideration when planning a UAS flight. The higher the altitude of the UAS, the fewer flight paths are required to fully cover the study area. This is because as the UAS gains altitude, the camera's field of view covers more ground (Fig. 9).

Fig. 9: A camera's field of view (FOV) covers more ground as distance from the object increases. Photo found at https://www.melown.com/maps/docs/acquisition-guide.html.


Because the camera's wider ground coverage leads to fewer required flight paths, the estimated flight time also decreases: the UAS does not need to fly as far. However, if you are using the UAS to build digital surface models, altitude must be weighed carefully, since the higher the altitude, the lower the resolution of your DSM.
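The altitude-to-coverage relationship above can be sketched with a little trigonometry. The snippet below is a rough planning aid, not Mission Planner's actual calculation; the field-of-view and sidelap values are illustrative assumptions.

```python
import math

def footprint_width(altitude_m, fov_deg):
    """Ground width seen by a nadir-pointing camera at a given altitude."""
    return 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)

def flight_lines(area_width_m, altitude_m, fov_deg, sidelap=0.6):
    """Parallel passes needed to cover an area width, keeping a fixed
    fractional sidelap between adjacent passes."""
    spacing = footprint_width(altitude_m, fov_deg) * (1 - sidelap)
    return math.ceil(area_width_m / spacing)

# Doubling the altitude doubles the footprint, halving the passes needed:
flight_lines(1000, 50, 84)   # 28 passes
flight_lines(1000, 100, 84)  # 14 passes
```

Fewer passes at higher altitude is exactly the trade-off seen in Mission Planner: shorter flight time, but coarser ground resolution.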


RealFlight Flight Simulator

The last software program we used was RealFlight Flight Simulator. This software allows you to practice flying different types of UASs and observe their flight behaviors, such as ease of operation, speed, flight time, and stability. We were required to fly two different UAS platforms and observe these behaviors. I chose to fly the "Cap 232," a fixed-wing system, and the "Classic," a quadcopter multi-rotor system (Fig. 10).

Fig. 10: The two different types of UASs flown in RealFlight Flight Simulator. On the right is a fixed-wing system, the "Cap 232," and on the left is a multi-rotor system, the "Classic."

The "Classic" quadcopter was a very easy craft to maneuver, as it could hold a stable position and altitude even with your hands off the controller or against wind. It was also incredibly easy to change direction, as it had a zero turn radius and "pivoted" rather than "turned." When pivoting, however, it is difficult to keep track of which part of the quadcopter is the front, making it hard to gauge how far to move the controls to turn toward your destination. The maximum speed this UAS could fly was only 14 MPH, and only when moving in a single direction: up, down, or side to side; diagonal movement slowed the UAS slightly. Its minimum speed was 8 MPH. This particular UAS ran on a battery with a 2500 mAh capacity. After 30 minutes, only 636 mAh remained, indicating this system has a maximum flight time of just over 30 minutes.
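Those battery numbers support a quick endurance check. Assuming a constant drain rate (real lithium packs taper, and you always land with a reserve, so the practical limit is shorter), a linear extrapolation looks like this:

```python
def endurance_min(capacity_mah, remaining_mah, elapsed_min):
    """Extrapolate total flight time from the average drain rate so far.
    Assumes a constant drain, which real batteries only approximate."""
    drain_per_min = (capacity_mah - remaining_mah) / elapsed_min
    return capacity_mah / drain_per_min

endurance_min(2500, 636, 30)  # ~40 min on paper; a landing reserve cuts this down
```

The gap between the ~40-minute paper figure and the practical "just over 30 minutes" is the reserve you keep so the craft never falls out of the sky on an empty pack.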

The "Cap 232" fixed-wing system was harder to maneuver. It must keep flying and cannot hover like the quadcopter. It took more time and space for this system to gain altitude; however, it was still able to maintain a specific altitude with ease. This craft's top speed was 82 MPH, so if a flight plan required many flight paths, it would take some skill to fly manually. It also had a large turning radius and required considerable space to change direction. This particular system ran on fuel, with a total of 15.8 oz in the tank. I was unable to fly this craft as long as expected: after only approximately 20 minutes of constant movement, it had run out of fuel. For a project flying a fixed-wing system, battery power rather than fuel would be more effective for obtaining longer flights.

Scenarios


Each UAS type has a set of advantages and disadvantages, as the previous sections of this blog have shown, and those qualities are advantages or disadvantages only relative to the project at hand. For instance, someone gathering aerial photographs for a digital surface model of a small area will find the speed of a fixed-wing system a disadvantage, while someone wishing to collect data on atmospheric particle content over multiple miles will find that same speed advantageous. For this reason, once all the information on unmanned aerial systems was collected, we were instructed to apply it to one of seven real-life scenarios Dr. Hupy has encountered as a consultant on UAS projects, to demonstrate our ability to recommend a UAS based on its qualities and the demands of the project. The scenario is as follows:

"An atmospheric chemist is looking to place an ozone monitor, and other meteorological instruments onboard a UAS. She wants to put this over Lake Michigan, and would like to have this platform up as long as possible, and out several miles if she can."

I would suggest a fixed-wing system for this project. Because the client wants to collect data over a large study area, a fixed-wing system is well suited to covering the long distances required, and it can stay in flight about three times as long as a rotary-wing system.


I used Mission Planner to assess the distance a fixed-wing system could fly in less than its estimated battery life of one hour and thirty minutes, to make sure significant mileage could be covered without losing the UAS over Lake Michigan (Fig. 11). Even with multiple flight paths for data resolution (not strictly necessary for ozone data), I was able to hypothetically fly a fixed-wing UAS over 30.32 miles in approximately 51 minutes while accounting for the space required by its turning radius.
Fig. 11: A potential flight plan to collect ozone data over Lake Michigan.

If the client were to fly a straight path instead, she would be able to cover a longer distance on each flight and collect even more data. I would suggest keeping the flight time to approximately one hour rather than running the UAS for the full hour and a half, to account for the extra battery power consumed when correcting against wind.
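That one-hour budget can be sanity-checked with a back-of-the-envelope range formula. The 14 m/s cruise speed is the fixed-wing minimum quoted earlier; the one-third battery reserve is my own assumption, not a Mission Planner output.

```python
def max_range_miles(cruise_mps, endurance_minutes, reserve_frac=1/3):
    """Straight-line track distance at cruise speed, holding back a
    fraction of the battery as a safety reserve."""
    usable_s = endurance_minutes * 60 * (1 - reserve_frac)
    return cruise_mps * usable_s / 1609.34  # metres per mile

max_range_miles(14, 90)  # ~31 miles of track on a 60-minute usable budget
```

At minimum cruise speed this roughly matches the 30.32-mile, 51-minute plan above, which suggests the flight plan leaves a comfortable margin.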

Several online sources suggest different altitudes for ozone data collection, so I suggest either flying multiple flights, each at one steady altitude, or shortening the distance covered so the UAS can fly at several different altitudes in one continuous flight. Keep in mind that with a fixed-wing system, changing altitude is relatively easy, but it takes a certain amount of distance to reach a new altitude and depletes battery life. Multi-rotor systems manage altitude changes more easily; however, they cannot cover long distances or fly for long periods, both of which this project requires.

Additionally, since the client needs to carry an ozone monitor along with other meteorological instruments, the fixed-wing system is the better fit, as fixed wings are able to carry heavier payloads than multi-rotors.

Discussion


The two types of unmanned aerial systems, fixed-wing and rotary-wing, provide a means of data collection that is taking off fast in the geospatial community. Their advantages and disadvantages are relative to the type of project you are doing, as both have qualities suited to different tasks. Fixed-wing systems are a good fit for projects requiring data collection over large study areas without a high degree of resolution on particular land features: projects like the ozone scenario above, or data collection for general maps such as those used for transportation or land-use mapping. Rotary-wing systems are great for small study areas in which high resolution is required for all or some features: projects like creating digital surface models of one or a few features, or mapping a nesting site of endangered birds. Both systems can be of great help when collecting data; their benefit depends entirely on the project you apply them to.

Conclusion


Unmanned aerial systems are useful for gathering many types of data across a variety of fields, limited only by our imaginations. The system itself can be thought of as merely the vehicle; the data it collects depends on the instruments you have it carry. Cameras can produce visual maps for applications such as transportation or agriculture as well as elevation models, while meteorological instruments can measure ozone, air quality, and humidity, among other things. These systems are becoming of great use not only to the geographic community but to biologists, city planners, and geologists, among others. As technology advances, we can expect improvements in these systems and a widening scope of their capabilities. Becoming familiar with them now will make using them in projects easier and more efficient.

Sources


http://onlinelibrary.wiley.com/doi/10.1002/asl2.496/full
http://uas.noaa.gov/news/skywisp.html
https://www.melown.com/maps/docs/acquisition-guide.html
http://www.dohenydrones.com/the-drone-for-you-fixed-wing-versus-rotary-wing


Sunday, October 4, 2015

Activity 3: Distance Azimuth Survey

Introduction


This week's lecture focused on problems that can occur in the field while collecting data. Technology often fails; a GPS, for instance, may not work well in a densely forested region that interferes with satellite connection, so we must be prepared with methods to counter any unexpected failure of technology. It is for this reason that this week's lab focused on the distance/azimuth survey, a method that does not rely heavily on technology. It is possible to create a map of a local study area with little or no GPS signal: a researcher can view satellite imagery of the study area after collecting points with a distance/azimuth device to find the origin GPS point, provided the origin is an easily recognizable feature. All objects in the study area will then be accurate relative to that origin, as long as the researcher can measure each feature's distance and azimuth from it. To gain experience with this method, my partner and I went out to the campus commons to take distance and azimuth measurements, build a data table, and upload it to ArcMap, where we ran several tools on the data to create a map of our feature locations.

Methods


Choosing the Study Area:

We were assigned to obtain 100 data points from a local area of our choice and create a map of the data, including distance and azimuth measurements and attribute data for the features we mapped. My partner, Scott Nesbit, and I chose the University of Wisconsin-Eau Claire campus commons as our study site because it is a large, open area easily seen on the satellite imagery we could use as a basemap (Fig. 1). It also contains a large number of easily identifiable objects we could use to map our 100 points. Within the study area, we chose an origin point from which to gather distance and azimuth data in a standing position (Fig. 2). This origin point had an unobstructed view of most of the landscape and was little hindered by elevation differences within the site.

Fig. 1: The Study area (University of Wisconsin-Eau Claire Commons Area).

Fig. 2: Origin point from which all our data points were collected. It was an easily identifiable and open area.

Collecting Data:


On September 3, 2015 at 3pm, we went out to our study area to collect data points using a TruPulse 360, a laser rangefinder capable of measuring both distance and azimuth, and a GPS unit to find the latitude and longitude of our origin point. We used the SD mode on the TruPulse 360 for distance and the AZ mode for azimuth.


Fig. 3: Equipment used to take the Distance, Azimuth, and Origin Point data. On the left is the GPS unit, on the right is the TruPulse 360 laser range finder.


We then organized a data table in Excel to include the XY coordinates of the origin point, distance (meters), azimuth (degrees), and a feature ID describing the type of feature being mapped (Fig. 4). We entered all data directly into the laptop so it could be uploaded to ArcMap after collection. The objects we mapped included the large rock benches, large trees, signs, and all lamp posts within view. Some lamp posts were not recorded because the laser was obstructed by objects in front of them. Because the lamp posts were also a narrow target to hit with the laser, we aimed for the top of each post, where it is widest. Lamp-post data was not entered until several re-shots with the TruPulse 360 confirmed that the distance and azimuth readings were correct. We gathered a total of 107 data points.

Fig. 4: Data table in Excel showing the XY, object ID, distance, and azimuth columns.

Once out of the field, we each created our own map in ArcMap by first importing our data table from Excel. I then ran the Bearing Distance To Line tool, found under Data Management Tools in ArcMap (Fig. 5), to create the distance lines from our data table (Fig. 6).
Fig. 5: Bearing Distance To Line Tool showing selections for all parameters. 


Fig. 6: Bearing Distance To Line Tool output showing lines generated from the Distance and Azimuth data.
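The geometry the tool applies to each record can be sketched in a few lines. This is a hypothetical planar stand-in for what Bearing Distance To Line computes, assuming the origin is in a projected coordinate system and azimuths are measured clockwise from grid north:

```python
import math

def polar_to_xy(x0, y0, distance_m, azimuth_deg):
    """Project one distance/azimuth shot from the origin into planar XY."""
    az = math.radians(azimuth_deg)
    return (x0 + distance_m * math.sin(az),   # east component
            y0 + distance_m * math.cos(az))   # north component

polar_to_xy(0.0, 0.0, 100.0, 90.0)  # a shot 100 m due east of the origin
```

This also makes clear why a small azimuth error grows with distance: the positional error is roughly distance times the angular error in radians.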


After obtaining our line features, I used the Feature Vertices To Points tool, under Data Management Tools, to convert the endpoints of our distance lines into a point feature class (Fig. 7).

Fig. 7: Feature Vertices to Points tool output displaying the endpoints of the previously generated lines as a new feature class containing the locations of all objects.


These feature class points are symbolized as Row 1-5 benches, lamp posts, trees, and ground signs. Because the Bearing Distance To Line tool does not retain the attribute data, I recovered this information with a table join in which the Feature Vertices To Points feature class was the destination table and the original data table was the source table. I joined the tables on their ID, listed as "ThingID" in our data table to avoid nonrecognition by ArcMap, as the common attribute. I was then able to symbolize the different objects within this feature class (Fig. 8). A basemap was then imported into ArcMap from the University's geospatial data file (Band 1 image from the 2709_29 folder under City_EauClaire_3in in County Data) to show the accuracy of our data. As the last step, metadata was created for the data (Fig. 9).

Fig. 8: Symbolizing the Feature Vertices to Point feature class.


Fig. 9: Metadata of Feature Vertices to Point output.

Analyzing the Data:


The features in our map did not match well with the imagery basemap (Fig. 10). Suspecting that the coordinate systems did not match, I used the Define Projection tool on the distance lines as well as the Feature Vertices To Points features. This did not change the outcome by any significant measure. The data appears skewed in such a way that if it were to pivot westward about the origin, it might line up better with the features in the imagery. This inaccuracy could be due to magnetic declination; however, the data collected farther from the origin point seems too skewed for a declination correction alone to account for the error.
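If declination were the culprit, the fix would be to adjust each azimuth before rerunning Bearing Distance To Line. A sketch of that correction follows; the 1.5° value is purely illustrative, as the real declination for the site and date would come from a lookup such as NOAA's declination calculator:

```python
def true_azimuth(magnetic_az_deg, declination_deg):
    """Convert a magnetic azimuth to a true azimuth.
    East declination is positive, west declination is negative."""
    return (magnetic_az_deg + declination_deg) % 360

true_azimuth(359.0, 1.5)   # wraps around north to 0.5
true_azimuth(45.0, -1.5)   # a west declination shifts the azimuth to 43.5
```

Because this correction is a fixed rotation about the origin, it cannot explain error that grows with distance, which is consistent with the conclusion that declination alone does not account for the skew.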



Fig. 10: Final map showing all features within the campus commons. You can see the westward shift of the data points and their increasing skew as distance from the origin increases.







Discussion:


Our study area proved to be a good place to collect data, as it was relatively flat and open, with little obstruction of our view of the objects. There was some obstruction when shooting distant lamp posts, but we compensated by firing the laser multiple times for each object until we could gauge the most accurate reading. Even so, there was considerable inaccuracy in the more distant features of our map, which could be due to the equipment. I would suggest using multiple origin points to collect the distance and azimuth data, to lessen the effect of distance on the accuracy of the TruPulse 360 readings. Another problem appeared to be magnetic declination: all data points were shifted westward, and this was not corrected by defining a projection for the data to match the GCS of the basemap.

Conclusion


This lab gave me experience with the distance/azimuth survey method and made me aware of potential problems that may arise in the field, as well as the usefulness of alternative surveying methods for working around those problems and saving time. This method allowed my partner and me to collect data points in about half the time it took my group in previous labs. It could be used wherever many data points must be collected in a small area: rather than collecting a GPS point for every feature, a distance/azimuth survey can calculate the location of each feature from only one or a few GPS points.





Sunday, September 27, 2015

Activity 2: Visualizing and Refining Terrain Survey

Introduction:

This lab activity is a continuation of our previous lab, Activity 1: Creating a Coordinate System, in which we surveyed a study area using a grid system for the purpose of creating a digital elevation model. In this activity, we exported the data collected during Activity 1 into ArcMap 10.3.1, created 3D models using interpolation tools, and imported them into ArcScene 10.3.1. We then reviewed the results to locate portions of our study area requiring resampling, collected additional data in the areas lacking clarity in the 3D models, and recreated the models for a more accurate image.


Methods:

We began creating our 3D models by loading our Excel data into ArcMap and displaying the XY data, where X and Y represent the location within the grid system and a defined Z coordinate holds the depth at each XY location. We then exported this data so we could run our four interpolation tools: IDW, Kriging, Spline, and Natural Neighbor. Each tool created a raster from the elevation data we had collected and let us see where our data did not adequately depict the study area surface.

Assessing the Interpolation Tools:


Inverse Distance Weighted (IDW):
This interpolation tool creates a smooth surface from point values by calculating each cell value with an inverse distance function that weights the points surrounding the location of interest.

 
Figure 1: Inverse distance weighted (IDW) interpolation results. The left image depicts the raster file displayed in ArcMap and the right image depicts the 3D model displayed in ArcScene. These outputs depict our survey study area, however, the surface is not smooth enough to give an accurate image. There is also a loss of detail within the valley region of our study area.
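The weighting idea behind IDW can be shown in a few lines. This is a bare-bones sketch of the method, not ArcMap's implementation (which adds search radii, barriers, and other options):

```python
import math

def idw(samples, xq, yq, power=2):
    """Estimate a value at (xq, yq) from (x, y, z) samples, weighting
    each sample by inverse distance raised to `power`."""
    num = den = 0.0
    for x, y, z in samples:
        d = math.hypot(x - xq, y - yq)
        if d == 0:
            return z  # query point coincides with a sample
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

pts = [(0, 0, 10.0), (2, 0, 20.0)]
idw(pts, 1, 0)  # equidistant from both samples -> 15.0
```

Because every estimate is a weighted average of the samples, IDW can never predict a value above the highest sample or below the lowest, which is one reason it tends to flatten sharp features like our valley.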



Spline:
The Spline tool fits a mathematical function through the points nearest the location of interest while minimizing the curvature of the surface between them. The result is a smooth surface that passes through all point values.

 


Figure 2: Spline interpolation outputs. The left image shows the raster file in ArcMap, while the right shows the 3D model in ArcScene. These outputs depict an accurate image of our study area, particularly in the two hills; however, there is significant loss of accuracy in the valley.

Kriging:
Kriging weights surrounding measured values to predict an unmeasured location using a statistical model. The weight of each value is based on both the distance between points and the overall arrangement of the points. The result is an accurate, smoothed surface.

 



Figure 3: Kriging outputs. The left image shows the raster file in ArcMap and the right image shows the 3D model in ArcScene. These outputs display all landforms within our study area but with a significant lack of detail, most notably in the valley region.

Natural Neighbor:
The Natural Neighbor interpolation tool estimates a value from only a close subset of surrounding sample points, weighting each by its proportionate area. It is a localized interpolation method.

 
Figure 4: Natural Neighbor outputs. The left image depicts the natural neighbor raster file in ArcMap and the right image displays the 3D model in ArcScene. The outputs show detail in the two hills, the depression, and some detailing on the slope of the ridge, but lack significant detailing of the valley.



Each interpolation tool showed a lack of accuracy in the detailing of the valley surface of our study area. For this reason, we decided to resample this portion of our study area.

Resampling 

On September 22, 2015 between 3pm and 6pm, we headed back out to our study area. Because there was little sign of erosion on our landscape, we decided to simply resample the area as it was.

Figure 5: A new grid was created only over the valley of our study
area and elevation data was only collected in this region.
The grid cells were smaller than the first data collection session.



We created a grid over the valley portion of our landscape with smaller cell sizes (4 cm x 4 cm) to provide more detail on the surface of this region. Elevation data was collected only in this portion of the study area and recorded in Excel as negative values. A total of 392 new data points were gathered and added to the previously collected data, creating an overall stratified grid for our study area.









Rerunning Interpolation Tools

After the new data was added to the previously collected data in Excel, we imported the data into ArcMap, displayed the XY data, and defined the Z coordinate as our elevation data.


Figure 6: The metadata added to the XY data feature class.
We exported the data and added metadata to the resulting feature class to provide information on the data before running interpolation tools. In this manner, the metadata could carry over to each resulting file.










The resulting raster files and 3D models showed improved accuracy in the valley landform of our study area. We then chose the interpolation tool that best depicted the study area. The Kriging tool oversimplified most of our landforms, resulting in a great loss of detail, while the IDW tool exaggerated them, producing inaccurate peaks and valleys. Spline depicted the study area surface well but not quite as accurately as Natural Neighbor. Thus, Natural Neighbor was the best tool to display our elevation data.





Figure 7: Natural Neighbor Interpolation output in ArcMap (left) as a raster file and in ArcScene (right). These outputs show great detail in the valley region of our study area as well as all other landforms. This tool was chosen to depict our study area surface because it accurately smoothed our surface and connected the data points without leveling peaks and low areas. 



Discussion:

After running interpolation tools on the data collected in Activity 1, we discovered that all outputs showed a loss of detail in the valley landform within our study area. We proceeded to resample the study area using a stratified grid approach, sampling only the portion that housed the valley landform and with a smaller grid cell size. We then added the newly collected data to the data from the first activity to create our stratified grid and used it in ArcMap and ArcScene to recreate our raster files and 3D models. The resulting outputs all showed improved detailing of the valley landform and an accurate depiction of the remaining landscape. Of all the interpolation outputs, Natural Neighbor provided the most accurate image of our study area, connecting our elevation data points smoothly while still providing a high level of detail on all landforms.

The resulting Natural Neighbor output, though resembling our study area quite well and showing notable improvement in the accuracy of the valley, still lacks some detailing in the slope of the ridge. Our stratified grid method worked well for gathering more data on the valley feature; however, I think the model would be even more accurate if a smaller grid cell size were applied to all features except the plain.


Conclusion:

During this activity, we applied critical thinking to assess the results of interpolation tools and developed a method to improve our output while still using previously collected data. We now know how to run interpolation tools in ArcMap 10.3.1 and import the output into ArcScene 10.3.1 to create digital elevation models. We have also gained experience interpreting the output of these tools and catching errors and areas of lesser clarity.