Monday, February 29, 2016

Lab 4: Geodatabases, Attributes, and Domains

Introduction

One of the unique components of ESRI products is the geodatabase. The geodatabase allows users to assemble large amounts of varying data in a normalized, inclusive structure.  In the GIS industry, having a properly set up geodatabase can make working with geospatial data much more consistent and reliable.  To create this level of consistency, it's important that the creator of the geodatabase use domains when creating or importing the attribute information into the fields of the geospatial data that makes up a feature class.  In this lab, the steps will be outlined for how to create a very simple geodatabase, along with a brief example of how geodatabases can be used to simplify data collection in the field by exporting them into ArcPad. 

ArcPad, in conjunction with Juno GPS devices, was used to create a geodatabase designed for collecting micro-climate data.  By uploading the geodatabase via ArcPad onto the Juno, students could, once in the field, fill out the information that we as a class decided was important for collecting climate data over a small space. 

The information that we decided to collect was as follows: 

1. GCS location
2. Group number 
3. Temperature ( F )
4. Cardinal direction
5. Relative humidity 
6. Dew point
7. Date
8. Notes 


The rest of this lab will discuss the detailed components of creating these fields for a point feature class, as well as the data collection process. This includes creating domains and assigning other limiting factors to the data so that it is less likely to be erroneously misrepresented. 

Methods


The first step is to create a personal geodatabase in an organized location using ArcCatalog.  Once that is done, the point feature class for micro-climates can be created by simply right clicking on the new geodatabase in the Catalog Tree > New > Feature Class, and selecting a point feature class.
Once that is complete, the user can establish the fields and what type of data each field represents. Figure 1, below, shows all the relevant field attributes and the datatype of each field for the newly created micro-climate feature class.

Figure 1: Attributes of the micro-climate feature class 
Notice, in the figure, that there is a good range of datatypes that make up this feature class.  As such, some of the fields can be easily corrupted by a simple slip of the finger during data entry, creating an erroneous point. To lower the risk of this, it's recommended that the user establish domains and limiting ranges that will lessen the likelihood of such bad data being published.


To create domains, the user need only right click on the geodatabase in the Catalog Tree > Domains.  Once on the domains page, the user can establish domains that can be applied to feature classes for varying data types to limit the input options.  Once created, the user can go back into the properties of the micro-climate feature class and begin assigning them to the necessary fields. Table 1, below, shows the domains created and the field attributes those domains were assigned to.
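The two kinds of domains used here can be sketched in plain Python (not arcpy): a coded-value domain restricts a field to a fixed list of choices, while a range domain restricts a numeric field to an interval. The field names and value ranges below are illustrative assumptions, not the exact domains from the lab.

```python
# Illustrative sketch of coded-value and range domains in plain Python.
# Field names and bounds are assumptions for demonstration only.

CODED_DOMAINS = {
    "CardinalDirection": {"N", "NE", "E", "SE", "S", "SW", "W", "NW"},
}

RANGE_DOMAINS = {
    "TempF": (-40.0, 120.0),      # a plausible Fahrenheit range
    "RelHumidity": (0.0, 100.0),  # percent
}

def validate(field, value):
    """Return True if `value` is allowed for `field` under its domain."""
    if field in CODED_DOMAINS:
        return value in CODED_DOMAINS[field]
    if field in RANGE_DOMAINS:
        low, high = RANGE_DOMAINS[field]
        return low <= value <= high
    return True  # fields without a domain (e.g. Notes) accept anything

print(validate("CardinalDirection", "NE"))  # prints True
print(validate("TempF", 250))               # out of range, prints False
```

This mirrors what the geodatabase does automatically in ArcPad: a fat-fingered 250-degree temperature simply cannot be entered.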

Table 1: List of domains available for feature classes within the microclimate.gdb
   
Now that the necessary domains have been created, they can be assigned to the appropriate fields, which is the final step before exporting the geodatabase to ArcPad and ultimately going into the field to collect the data. Of the field attributes that were created in conjunction with this feature class, several, but not all, were assigned domains. Table 2, below, shows these relationships.

Table 2: Field attributes and their associated domains

Now that our feature class is as we want it, with appropriate domains defining the specific ranges and sub-types it can incorporate, the feature class can be exported to ArcPad. To do this, the ArcPad extension must be turned on in ArcMap.  Once a connection is established between the Juno device and the computer, the file containing the feature class can be exported into ArcPad.

Note: The order of creating the domains versus the feature class attributes is arbitrary.  The user can create either one before the other, depending on preference. 

For this project, the point feature classes were exported to the Juno along with a TIFF file of the UWEC campus.  This image was stored in the device as a background TIFF while the points were exported into the device as an editable feature.  As such, the TIFF is only used as a reference for students as they go about collecting GPS points. In the field, students could begin recording point locations once their Juno devices were adequately triangulated by the satellites. Once that was accomplished, students could begin recording points on the ArcPad Juno interface using the pen tool attached to the Juno device.  Once a point was taken, a box would subsequently open up that contained the field attributes previously made in the lab.

Using Kestrel weather monitoring devices, students were able to record the temperature, humidity, dew point, and directional data needed to populate the attribute table.  Using these two devices, students could take the requisite climate information at various points and input it into the Juno in real time using the default values and domains established earlier in the creation of the geodatabase and micro-climate feature class.  Upon taking a point location, an attribute box would open up in the Juno, and students could then populate the various fields by recording the data taken from the Kestrel, with the confidence of knowing incorrect information would not be allowed in because of the domains established prior.

Discussion

This past summer I worked an internship in West Africa, where much of what I did was standardize the data collection process of locating and taking area measurements of cocoa farms in the country of Liberia.  The software I was working with there was the open-source QGIS, so I did not have access to geodatabases, which are unique to ESRI products.  As a result, domains could not be used in direct connection with the GIS software, and extra measures had to be taken to ensure that data was collected the right way, without errors.  

Comparing projects like this lab to the experiences I had in Liberia, not having access to this sort of organizational element in the data collection process can be both problematic and time consuming to deal with.  All that being said, you have to work with the resources that are available, and a fairly efficient system was created, but ESRI's geodatabase domain system would have allowed the collection process to be even simpler and more hands-on than the system I created at the time.

Conclusion 

Although this is not my first time creating a geodatabase, it is always good to continually improve upon your knowledge of working with geodatabases. Establishing domains is the geospatial equivalent of having all your ducks in a row.  It also highlights one of the most important features of ESRI products: you can use parameters within geodatabases to create limiting extents for what users can input into attribute fields when creating data and feature classes.  This becomes especially important when you're collecting a relatively high amount of data per observation and have more opportunities to enter data with mistakes. Using domains provides a safeguard by lowering the ways erroneous data can be input.  

Domains were also especially important for me when collecting the micro-climate data, because I had never collected climate data and was unfamiliar with the process.  However, because of the domains and sub-types the class had previously established, I was able to collect several points in the field outside of Phillips Hall.  Having worked with the Juno device and the attribute menu, I feel very comfortable that the larger data collection project that will be conducted for next week's lab will go very smoothly. 


Thursday, February 25, 2016

Lab 3: Navigation Map

Introduction 


In the initial part of this two-part lab activity, students were required to construct a navigation map document composed of two maps.  Ultimately, my partner Audrey and I will use the maps we created, in conjunction with our pace count, to complete a navigational course, which will be the second part of this lab exercise. The first portion of this lab is essentially getting our resources in order so that we are prepared to complete phase two to the best of our ability.  The methodology discussed in this lab report will detail the creation of navigational maps, collecting our pace count, and discussing important factors of our navigation maps.  

Methods

Pace Count
As a class, on 2/16/16 we utilized the unseasonably warm weather to record our pace count. The class left the classroom to conduct this activity at about 3:30 PM, and we reconvened outside of the Phillips science hall.  My partner Audrey and I were tasked with creating the 100 m course the class was to use to record their individual pace counts.  To do this, we had to use two 50 m long measuring tapes and put them end to end.  While the whole class went, I stayed at the junction of both measuring tapes and held them in place to provide the rest of the class with an accurate 100 m distance.  After they were done, someone took over my position, and I was able to conduct my two trials, both of which yielded 60 strides for 100 m.  

The definition of a 'stride' that we as a class used was every time the right foot made a forward plant.
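The arithmetic behind the pace count is simple: at 60 strides per 100 m, each stride covers 100/60 ≈ 1.67 m. A small sketch of the conversion (using my measured count; the helper names are just for illustration):

```python
# Pace-count conversions, using my measured 60 strides per 100 m.

STRIDES_PER_100M = 60

def strides_to_meters(strides):
    """Distance covered after a given number of strides."""
    return strides * 100.0 / STRIDES_PER_100M

def meters_to_strides(meters):
    """Strides needed (rounded) to cover a given distance."""
    return round(meters * STRIDES_PER_100M / 100.0)

print(strides_to_meters(90))   # prints 150.0 (meters after 90 strides)
print(meters_to_strides(250))  # prints 150 (strides to cover 250 m)
```

In the field, this is what lets us estimate distance between navigation points without any instrument beyond counting.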

After about 30 minutes, everyone had their pace counts and we went back into the computer lab to start working on our navigational maps.

Navigation Maps 

A navigation map is a map that preserves direction in order to allow those who use it to get from point A to points B and C by providing an accurate representation of feature locations in a given area.  In principle, there is no steadfast rule about what essential elements are needed to create a navigation map. However, there are guidelines that one should follow if a navigation map is to be of any use for navigational purposes.  The general rule is this: make sure your map contains only features and details that are helpful to those relying on it to navigate.  

The area assigned to the class to map is what is known as The Priory.  The Priory is a plot of land owned by the University of Wisconsin-Eau Claire, located just south of the city of Eau Claire along a highway.  The land is used for university research and is composed of a mix of open land, forest, and building structures.  Along with these varying land types, the area also has varying topography: it has both hills and flat land.  When dealing with varying topography, it's important to somehow reference that in the map.  Likewise, when dealing with varying land types, both natural and man-made, those also must be referred to in the map.

On top of mapping the actual area itself, it's also important to include an appropriate reference grid that allows users to report a generally accurate coordinate for their assumed position.

To make sure that both of these elements are highlighted in a navigation map, the map needs to have elements that make both of these things apparent to the user.  For this reason, I decided to use imagery of the area, contour lines, and a boundary box as the main features for the Priory's navigational map. Below, in figure 1, is what I ended up creating.

figure 1: On the left is a navigational map of the Priory.  On the right is a reference map providing context for where the Priory is in the greater Eau Claire area.

Discussion 

As you can see, the map here shows the Priory at two different scales.  Doing this allows detailed elements of the Priory to be highlighted while also providing context for where the Priory is in the greater Eau Claire area. The more detailed map, on the left, was assigned a UTM grid that provides 50 x 50 meter grid blocks. The important numbers are the larger black ones, since they are the only ones that vary within the small area that makes up the Priory (up to the thousands value).   The map on the right uses GCS degree coordinates, which better suit areas represented at smaller scales.
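The way a 50 m UTM grid supports position reporting can be sketched in a few lines: a user's easting/northing pair snaps down to the southwest corner of the grid block containing it. The coordinates below are hypothetical, purely for illustration.

```python
# Snapping a UTM easting/northing (hypothetical values) to its 50 m
# grid block, as a user would do when reporting a position off the map.

GRID = 50  # meters per grid block, matching the map's 50 x 50 m grid

def grid_cell(easting, northing):
    """Return the southwest corner of the 50 m block containing a point."""
    return (easting // GRID * GRID, northing // GRID * GRID)

print(grid_cell(617432, 4957286))  # prints (617400, 4957250)
```

Because UTM is in meters, this integer arithmetic is all it takes; the same operation in GCS degrees would be far less intuitive, which is part of why UTM suits the large-scale map.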

Overall, I think the maps I created could be used to serviceably navigate the area.  However, I think there could have been a better way to represent both the topography and the land cover in a way that doesn't compromise the detail of the other. The contour lines, especially in the hilly regions, hide the land cover beneath and show little detail besides the presence of a hill.

Perhaps the best approach would have been to digitize the land cover based on its differing types (grassland, natural forest, planted forest, building, parking lot, etc.) and overlay that with contour lines at a less precise interval, having lines every 5-8 feet instead of every 2 feet.  That way, a user could more clearly see the nature of the land cover while also understanding the topography around them.  If available, another interesting way to do this would be to create three maps, using both of the current maps but removing the contour lines.  The third map would contain a lidar-based DEM showing highly accurate elevation information.  This new third map would be at the same scale as the map with the imagery, with the same grid, so that the user could refer to them both simultaneously.

Conclusion 

The key to making a good navigation map is to not do too much.  It's important that what the user is seeing is not overcrowded with details; the map needs only what is necessary.  In that sense, making a good navigation map is not that difficult, yet all the while, it's equally easy to mess one up. Along with the map, it's also important to include information about the spatial reference system being used, along with information on where the data was obtained, through what organization, and for what use.  

Sunday, February 14, 2016

Lab 2: Visualizing and Refining Terrain Survey

Introduction

In the previous lab, my group members and I created a terrain out of snow in a 2.34 x 1.12 m planter box. This lab's purpose is to take the data we collected and input it in such a way that it can be turned into a digital terrain model using ArcMap and ArcScene. Using a variety of interpolation methods, we are to choose the one method we like best, recreate our terrain, and take a denser point recording of the parts of our terrain that were not portrayed accurately in the original data set.  To do this, it will be necessary for us to in some manner shorten some of the intervals on the y and/or x axis.  This increased density will provide a better representation of the true terrain.  With that new data, a final terrain model will be created using the interpolation method we as a group decided represented our surface best.

The terrain models were created using:
  •   IDW
  •   Natural Neighbors
  •   Kriging
  •   Spline
  •   TIN
Upon reviewing the products of these interpolation processes, it was apparent that the roughest data was in an area that spans the whole y axis and about 1/5 of the total x axis.  This area, in our initial design, was intended to represent a brief, slightly rounded ridge formation, with valleys on either side.  Henceforth, this blog will discuss the methods of collection and results acquired from visualizing, refining, and interpolating our data into a 3D digital format.

Methods

After analyzing our results from the first data collection, my group members and I decided that we were happy with about 80% of our data, and that the remaining 20% was the data from the ridge.  As such, when redoing the data, we determined that the only area that needed a finer grid over it was the ridge area. We defined the ridge area as any part of the landscape that showed a sharp contrast in height from both of the adjacent valleys.  The first thing we did, in preparation for our measurements, was to recreate our landscape within the planter box as best we could so that it more or less resembled the initial terrain made in lab 1.

The replication was conducted on Wednesday, February 10th at 9 AM. My group member Andrew and I were the two who recreated the terrain and collected the new data.  Alexander's responsibility was to convert said data into an appropriate three-column Excel file. The conditions were somewhat mild in comparison to the last collection.  There was little wind or cloud cover, and the temperature stayed at 14 degrees during our 1 hour and 20 minute collection process.

To recreate the terrain, my group members and I used our hands and shovels to recreate the varying hills, slopes, ridges, valleys, and depressions.  Once we were satisfied with the replication, we used water from a water bottle to help compact the snow surface and freeze the formation in place.  Doing this allowed us to be less concerned about accidentally damaging our landscape as we took our measurements or created our grid. The next thing we did was create our grid, 10 cm per interval in the x and y directions, excluding the areas of the ridge, which we reclassified as a point per 5 cm, not 10 cm, on the x axis.  The final product produced a grid made to collect 305 points, compared to the previous lab in which there were 242.  We then recollected the data of the ridge by taking measurements from the newly created, denser surface grid. 

Before creating our final interpolation model, we as a group had to explore the various interpolation models available within ArcMap.  Using the original data from lab one, we interpolated our results using the IDW, Kriging, Natural Neighbor, Spline, and TIN operations.  After doing this, we as a group decided which interpolation modeled our terrain most accurately, and used that method on our second point data set.

Interpolating the Initial Data - understanding the process and discussing results 

IDW: The Inverse Distance Weighted interpolation method determines cell values using a linearly weighted combination of sample points. The weight assigned is a function of the distance of an input point from the output cell. The greater the distance, the less influence the point has on that cell.   This type of interpolation requires a high density of data points to be able to create a good representation of a surface.  As figure 2 displays below, the IDW of our initial data collection produced a very lumpy DSM that misrepresented the true surface we created.
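The weighting idea above can be shown in a bare-bones sketch (plain Python with a power parameter of 2; ArcMap's implementation has more options, such as search radii, that are omitted here):

```python
import math

# Minimal IDW sketch: each sample's z is weighted by 1/d^p, so nearer
# points dominate the estimate. Sample values below are made up.

def idw(x, y, points, p=2):
    """points: list of (px, py, z). Returns interpolated z at (x, y)."""
    num = den = 0.0
    for px, py, z in points:
        d = math.hypot(x - px, y - py)
        if d == 0:
            return z  # exactly on a sample point
        w = 1.0 / d ** p
        num += w * z
        den += w
    return num / den

# Four corner samples of a tilted surface; the center is a weighted blend.
samples = [(0, 0, -5.0), (10, 0, -5.0), (0, 10, -1.0), (10, 10, -1.0)]
print(idw(5, 5, samples))  # equidistant from all four, prints -3.0
```

This also makes the "requires a high density of data points" caveat concrete: with only four samples, everything between them is just a blend of those four values, which is exactly the lumpiness we saw in our sparse first dataset.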

Figure 2: IDW interpolation





Natural Neighbors: The Natural Neighbor interpolation method uses weighted averages, and the equation used to create the model is very similar to the one used in IDW.  This method uses local coordinates to define the amount of influence any scatter point will have on another point in the output of the model.  Referring to the map in figure 3 below, the surface created is similar to the product of the IDW interpolation, but is smoother and less lumpy.  Again, though, the area that is the most different from the actual planter terrain is the ridge area near the middle.

figure 3: Natural Neighbor Interpolation 


Kriging: The Kriging method is a very powerful, statistically based method of interpolation.  Kriging assumes that the distance or direction between sample points reflects a spatial correlation that can be used to explain variation in the surface. It fits a function to a specified number of points, or all points within a specified radius, in order to determine the output value. To get these predicted values, Kriging derives a relationship between points using sample measurements and a sophisticated weighted averaging technique.  Referring to the Kriging map below in figure 4, we as a group liked how smooth and gradual this particular DSM was; it was the most similar to the actual terrain of the planter box of all the interpolation methods thus far.  The terrain we created was very smooth, with few if any sharp features.  With that being said, the ridge is again misrepresented, and is shown as discontinuous across the entirety of the y axis, when in actuality it was continuous.
figure 4: Kriging Interpolation 



Spline:
The Spline interpolation method estimates values using a mathematical function that minimizes the overall surface curvature.  Spline models are therefore typically very smooth and pass directly through the input points.  In essence, it is like bending a piece of rubber to flow through two points while also minimizing the total amount of curvature between them. It is very effective at showing surfaces that gradually change over space, like temperature models.  

figure 5: Spline Interpolation


TIN:
The Triangulated Irregular Network interpolation is very different from the other methods previously discussed. It uses points to create triangle-based geometry that then makes up the surface model.  The three vertices of each triangle are represented by the exact locations of data points, and the plane of the triangle between them is interpolated based on the z values of each point.  Referring to figure 6 below, the rigidness of the TIN interpolation creates a very sharp and abrupt spatial representation of the planter box terrain.   As a group, we decided that because of its rigidness it was a poor model for the smooth terrain we had created in the field.  With that being said, the TIN interpolation method did the best job at representing the ridge, which so far had been grossly misrepresented by all the other interpolation methods. 


Re-interpolating Planter Surface with New Data


Once obtained, the new data was loaded into Excel, with each grid space represented by a corresponding x, y, and z value. Once completed, the table was added to a newly created personal geodatabase and turned into a point layer. 
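The three-column table step can be sketched in plain Python before it ever touches a geodatabase: parse the x, y, z columns into point tuples. The column names and sample values below are made up for illustration.

```python
import csv
import io

# Parsing a three-column (x, y, z) table into point tuples, as a
# stand-in for the Excel -> geodatabase point-layer step.
# Sample values and column names are illustrative assumptions.

data = io.StringIO("x,y,z\n0,0,-4.5\n10,0,-3.0\n20,0,-2.5\n")

points = []
for row in csv.DictReader(data):
    points.append((float(row["x"]), float(row["y"]), float(row["z"])))

print(len(points))  # prints 3
print(points[0])    # prints (0.0, 0.0, -4.5)
```

Keeping the table this simple (one row per grid cell, three numeric columns) is what makes the import into a point feature class painless.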

Next, we had to determine what interpolation and re-sampling measures we were going to employ to get a better result.   

The decision as to what interpolation method to use for our second data set was objectively based on which method best represented the majority of our created terrain.  Given the opportunity to resample a portion or all of the planter box, we knew that if we created a finer grid from which to take our z measurements, we could fix the misrepresentation in the interpolated DSMs.  Coming to that conclusion, we as a group decided to decrease the distance between grid spaces by shortening the increments to 5 cm, as opposed to 10 cm, on the x axis for the portion that contained the ridge feature.  The data collected can be seen below in the form of a point feature class, in figure 6. 

figure 6: Point feature class created from the CSV file of collected points. Notice the portion of the layer that is denser; that is the region where the ridge is.
With this newly created, denser data, we felt that an Ordinary Kriging interpolation would yield the most accurate results. 

Results: Ordinary Kriging of 2nd planter terrain point data set

The final map made was by far the best representation of the true terrain created thus far in this exercise.  This map can be seen below in figure 7.  As you can see, the ridge is now continuous across the width of the y axis and varies very little at the point of plateau across its y extent. This, ultimately, was what we set out to do when we recreated our terrain and data collection process.  Apart from the newly created ridge, we were also very happy with the overall smoothness that resulted from the Kriging process.  Because the surface we were working with on our terrain was snow, the continuous, smooth nature of a Kriging DSM works very well. 


figure 7: Final Kriging interpolation with new data. 


Data Discussion

During our data re-collection, our goal was to condense the point cloud over the ridge in order to create more values that would in turn produce better results with the interpolation.  A denser point cloud allows the weighted averaging operations to model the surface with higher accuracy and consistency.   Given that we reduced the size of the grid on the x axis to produce better results over the ridge, doing the same to the y axis would have led to an even denser dataset, which would further enable the technology within ArcMap to create accurate representations.  A concern I have with us only making the x values smaller is that the grid we ended up creating became more rectangular over the ridge area.  Because the cells were not uniform squares, this created more room for human error, in that we could have been inconsistent in which part of the rectangle we were taking our re-measurements from.  I would like to think that we were pretty consistent, but this factor increases the opportunity for us to add a level of unwanted randomness to our collection process.  Another thing we could have done is been more precise when collecting our z values.  For both datasets, we recorded all values to the nearest 1/2 cm. Increasing that precision to the nearest 1/4 cm would have further increased the precision of our data and would further enable the computer models to produce more accurate results.
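The precision point can be made concrete with a quick sketch of rounding a reading to the nearest 1/2 cm versus 1/4 cm (plain Python, with a made-up reading of 3.7 cm):

```python
# Rounding a measurement to a given increment, to compare the 1/2 cm
# precision we used against a hypothetical 1/4 cm precision.

def round_to(value, increment):
    """Round `value` to the nearest multiple of `increment`."""
    return round(value / increment) * increment

reading = 3.7  # a hypothetical true height in cm
print(round_to(reading, 0.5))   # prints 3.5  (what we would have recorded)
print(round_to(reading, 0.25))  # prints 3.75 (the finer alternative)
```

At 1/2 cm precision, each recorded z value can be off by up to 0.25 cm; halving the increment halves that worst-case error, which accumulates across hundreds of points in the interpolated surface.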

Monday, February 1, 2016

Lab 1: Creation of a Digital Elevation Surface

Introduction

The objective of this activity was to introduce students to the steps required to create an elevation model for a landscape using measurements of individual grid squares.  This field work was conducted by myself and two other individuals, Andrew Faris and Alexander Kerr.  Students were provided with a set of tools and were left to their own discretion on how to use the tools to set up a landscape, create a grid, and take measurements of the surface properties.  The materials that we were provided with included tacks, yarn, meter sticks, rulers, and a foldable measuring stick.  Students were instructed to create landscapes out of snow within planter boxes located within the courtyard of the Phillips building, on the University of Wisconsin-Eau Claire's lower campus. An image captured from Google Earth in figure 1 shows this location more specifically.   

Figure 1: The site of this field project was the courtyard of the Phillips Science Building, located on the lower campus of the University of Wisconsin-Eau Claire.

We as a group conducted our measurements on the morning of January 27th; we convened at 9:30 AM and were finished taking our measurements by 11:40 AM. The conditions got increasingly colder and windier during our time outside.  At the start, the temperature on my mobile phone, a Samsung Galaxy Prime, indicated that it was 21 degrees with a windchill of 15.  By the end of our time, the temperature read 18 degrees with a windchill of 8. 

Methods

figure 2: grid created over the planter box
The first thing we did in preparation for our measurements was to create our landscape within the planter box.  To do this, my group members and I used our hands and shovels to create varying hills, slopes, ridges, valleys, and depressions.  Once we were satisfied with what we had made, we used water from a water bottle to help compact the snow surface and freeze the formation in place.  Doing this allowed us to be less concerned about accidentally damaging our landscape as we took our measurements or created our grid. The next thing we did was create our grid, the final product of which can be seen in figure 2 below to the right.  To create this grid, my group members and I decided that it was best to have the x axis divided into 22 sections and the y axis into 11 sections.  For both axes, the divisions were made every 10 cm so that each box covered roughly 100 square centimeters. However, because the dimensions of the planter box were not evenly divisible by 10, the row on the right side of the box has a y dimension of 12 cm and the column on the near side had an x dimension of 14 cm (reference the image in figure 2); otherwise, 210 of the 242 cells are 10 cm square. To label the grid blocks, we used a numbering system that began at 0 and went up incrementally by one until the end of the planter in both the x and y directions.  The point of origin can be seen in figure 2 near the top left of the image. The zero point that we decided on for the z value would be the top of the planter.

In terms of collecting heights, my group members and I used two rulers to collect the heights of the surface for each cell. If a cell's surface was below the flush surface of the planter, a ruler was laid on the wood and extended out over the cell, providing a visual place marker on the vertical ruler showing us how far below the planter top the surface was.  If the surface was visibly above the planter, the ruler used as the visual place marker was held by me as I tried to make it as level as possible with the other ruler, which was propped up on top of the planter board to keep it steady.  Where the horizontal ruler met the vertical ruler was the height of the surface above the planter for that cell block.  I did the best I could to keep the horizontal ruler level, and also tried to measure from the centermost point of each cell, regardless of whether the cell's surface was below, above, or level with the edge of the planter.

Results

As we did our measurements, the figures were recorded in a field notebook.  After that, the numbers were entered into an Excel table. This table can be seen in figure 3 below.

figure 3: The numbers in each cell represent the z value of the surface in the planter. The redder, the lower; the greener, the higher.


 
Discussion of Data 

Given the cold conditions, there was some level of haste to this whole operation.  Near the end of the collecting process, everyone was ready to get out of there and was likely working at a faster pace, and not being as careful with our measurements.  Also, when collecting the surface heights of portions that were above the planter surface, a bend in the ruler, or a poor job of holding the ruler level by me, could have led us to record figures that were either too low or too high. Going forward, we as a group should consider things like this and alter our methods accordingly.  I think in the future, on days when the conditions are not good, it would be wise to take breaks inside so that we can stay focused on what we are doing and not on trying to keep adequate warm blood flowing to our hands and feet. Otherwise, suck it up and be ready to face the elements. 

Conclusion

The surface has been made, the data collected, and the results discussed, but is this enough?  In the next lab we will learn how to bring this table into an ArcMap geodatabase, and ultimately conduct geospatial operations on it so that we can better represent the surface we created. By re-creating this project in a digital format, we can then begin to compare and contrast this data collection method with others, and see which ones could be more appropriate for similar projects going forward.