Tuesday, June 30, 2020

Corridor Analysis and Least Cost Path

The second map I've produced for Applications in GIS is a corridor analysis of potential black bear movement between two protected areas of Coronado National Forest.  This sort of analysis is useful for conservation efforts, giving environmentalists and park rangers some idea of where black bears are most likely to be encountered.

For the creation of this corridor I took into account three factors: distance from roads (bears prefer to be farther away), elevation (bears prefer elevations of 1,200 to 2,000 meters), and land cover (certain types of land are more suitable for black bears).  For the road distance, I took the roads shapefile and used the Euclidean Distance tool.  From there I reclassified the distances appropriately to create a new raster.  I also used the Reclassify tool on the elevation and land cover rasters to take suitability into account.

Then I combined these rasters using the Weighted Overlay tool.  I gave a weight of 60% to land cover and 20% to each of the two remaining rasters.  This creates a raster that shows the general suitability of an area for a black bear.
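The weighted combination itself is simple arithmetic.  Here is a minimal pure-Python sketch for a single cell (the actual Weighted Overlay tool operates on whole rasters at once, and the scores below are made up for illustration):

```python
# Hypothetical reclassified suitability scores (1-10) for one raster cell
landcover = 8
road_distance = 5
elevation = 7

# Weighted overlay: 60% land cover, 20% each for road distance and elevation
suitability = round(0.6 * landcover + 0.2 * road_distance + 0.2 * elevation, 1)
print(suitability)  # 7.2
```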

For the creation of a least cost raster, I used the Reclassify tool to invert the values.  The new values, or the cost surface, were calculated as 10 minus the suitability score.  After that it was necessary to create two cost distance rasters: the cost surface was run through the Cost Distance tool twice, once from each protected area.  Finally, these two cost distance rasters were combined using the Corridor tool to create the desired corridor output.
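The inversion step can be sketched in plain Python (the real Reclassify tool applies this to every raster cell; the suitability values here are invented):

```python
# Hypothetical suitability scores (1 = poor habitat, 10 = excellent)
suitability = [3, 7, 10, 5]

# Invert to a cost surface: high suitability means low travel cost
cost = [10 - s for s in suitability]
print(cost)  # [7, 3, 0, 5]
```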

The darker the red the more likely a black bear is to travel within that area when going between the two protected forested zones.
The next step in getting the map ready for production is to determine the desired extent of the corridor.  I did this by finding the lowest overall value in the corridor and multiplying it by 1.1, 1.2, and 1.3 for each of the red bands where the bears are the most likely to travel.
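The three band thresholds are just multiples of the corridor's minimum value.  A quick sketch with a made-up minimum:

```python
# Hypothetical lowest accumulated cost value in the corridor raster
min_cost = 120000

# Thresholds for the three corridor bands (110%, 120%, 130% of the minimum)
bands = [round(min_cost * f) for f in (1.1, 1.2, 1.3)]
print(bands)  # [132000, 144000, 156000]
```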

With this map it is much easier for the park rangers to know where the bears are likely to travel, especially in areas near roads.  They can put up signage or other warnings to alert travelers that they are more likely to see a bear there.  Or they could petition for changes such as a wildlife bridge, which serves to keep both people and animals safer.

Friday, June 26, 2020

Suitability Analysis

This is the first week of Applications in GIS.  For just starting out we have a pretty rigorous initial project - land suitability analysis.  Land suitability analysis is where certain factors are taken into account, such as slope or land cover type, and assigned values corresponding to their qualities and the desired usage of the land.  These different attributes are combined with the weighted overlay tool to find the areas that most fit the desired properties.

This particular map was designed with land development in mind.  If a developer wanted to buy up empty lots in an undeveloped area there are several things they would want to take into account before making their purchase.  This spatial analysis takes into account slope (lower the better), land cover (which ones are easier to develop), soil type (certain types are more conducive to easy construction), distance from rivers (don't want to be on top of a stream in case it floods), and distance from roads (closer is better).  Using spatial analysis it is possible to assign all these traits certain values and combine them together to identify the best regions.

The map on the left treats all the properties with equal weight, while the map on the right puts greater emphasis on slope and less on distance from rivers and roads.  Using alternative weights can be useful, as building on a steep slope could be much costlier and less desirable than living farther from a road.  The resulting areas are slightly different, especially in the equal-weight map, where areas by the river are mostly excluded from the prime valuation.  Comparing different methods is useful in judging what decision is best for the project.
This map shows how weighting all the different traits equally versus giving them different weights produces a different final result.
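The difference between the two maps comes down to the weight vector used in the overlay.  A small pure-Python sketch with made-up scores for a single steep parcel (the weights and values are illustrative, not the lab's actual ones):

```python
# Hypothetical suitability scores (1-5) for one candidate parcel
scores = {"slope": 2, "landcover": 4, "soil": 5, "rivers": 3, "roads": 4}

# Equal weights versus a scheme that emphasizes slope
equal = {k: 0.2 for k in scores}
alt = {"slope": 0.4, "landcover": 0.2, "soil": 0.2, "rivers": 0.1, "roads": 0.1}

equal_score = sum(scores[k] * equal[k] for k in scores)
alt_score = sum(scores[k] * alt[k] for k in scores)

# The steep parcel rates worse under the slope-heavy weighting
print(equal_score, alt_score)
```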
This first assignment really feels like we are starting to get to the core of GIS.  There is a lot less step-by-step guidance in the homework at this point.  I am starting to feel more confident in my skills as time goes on.

Thursday, June 25, 2020

About Me

Hello!

My name is Leigh Koszarsky.  I received my undergraduate degree from the University of Rochester in history and anthropology.  My Master's degree is from UMass Boston in Historical Archaeology.  My goal with learning GIS is to eventually be able to apply it back to my work in archaeology.

I enjoy illustration, reading, and playing Pokemon Go.

I also made a story map here.


Saturday, June 20, 2020

Other Python Applications

This blog is typically dedicated to my GIS work, but since I've had lots of time during COVID-19 quarantine to work on other things, I thought I'd use this space to show off something else I've made in Python.  I made a Twitter bot that generates characters for the popular tabletop game Dungeons and Dragons.  This bot comes up with a name, personality trait, fantasy race, and occupation for the character.  It also generates a short bio and gives them a list of stats.

It started out as a very simple few lines of code to see if I could recreate rolling a six-sided die:
A d6 is just a normal 6-sided die like you would use in a normal board game.  The "quant" parameter asks for how many dice you want to roll.  By default it is set to just one.
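Since the original code appears as a screenshot, here is a reconstruction of what such a function might look like, keeping the `quant` parameter described in the caption (the exact implementation is my guess):

```python
import random

def d6(quant=1):
    # Roll `quant` six-sided dice and return the total
    return sum(random.randint(1, 6) for _ in range(quant))

print(d6())   # a single roll, somewhere from 1 to 6
print(d6(3))  # three dice, somewhere from 3 to 18
```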
Well that was easy enough.  What can I do with this function?

A Dungeons and Dragons character has six different stats: strength, dexterity, constitution, intelligence, wisdom, and charisma.  Typically these stats range from 0 (very poor) to 20 (excellent).  These stats determine how likely a character is to succeed or fail at a certain task.  Throwing a javelin requires strength, while picking a lock involves dexterity.  The next logical step with this program is to use dice rolling to assign numbers to these stats.  The most straightforward way to roll for stats is simply rolling three six-sided dice for each stat.  In Python this looks as follows:

This calls the d6 function six times and appends the value generated with three virtual dice to a list.
Great!  But what if you wanted a more complicated method of rolling?  Some methods are designed to generate more powerful or more balanced characters.  I ended up developing eleven different roll methods in all (some coming from this source and others of my own creation).

Here's one of the more complicated ones.  This one rolls four six-sided dice for each stat and drops the lowest of the four dice in the set.  The tricky part comes in where the roll method dictates that at least one of the stats must be above 15.  I achieve this with if any(num > 15 for num in rollstats).
If any value in the list is above 15 the code will continue and return the list.  If not the function becomes recursive and runs itself again until it meets the desired criteria.
This method continues recursively until it returns a list where at least one value is above 15.
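A reconstruction of that method (again, the original is a screenshot, so the function names and structure are my assumptions):

```python
import random

def roll_4d6_drop_lowest():
    # Roll four dice, drop the lowest, and total the remaining three
    dice = sorted(random.randint(1, 6) for _ in range(4))
    return sum(dice[1:])

def heroic_stats():
    # Roll all six stats, then re-roll the whole set
    # until at least one stat is above 15
    rollstats = [roll_4d6_drop_lowest() for _ in range(6)]
    if any(num > 15 for num in rollstats):
        return rollstats
    return heroic_stats()  # recursive retry

print(heroic_stats())
```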
Now that I have different ways of assigning stats to a character, the next step is to give them a name and some basic attributes.  I accomplish this by making different lists - firstname, lastname, trait, fantasyrace, and occupation.  I fill these up with as many different values as I can think of and then use random.choice on each list to pick out a value for each one.

I added an extra level of depth here by making each fantasy race a tuple instead of just a simple string value.  Why a tuple?  Because each species has slightly different stat modifiers.  Orcs are strong and dwarves are tough, so those extra boosts should be reflected in their stats on top of those they roll for.  Calling on a certain position in each tuple to add these values is much faster and easier than writing a complicated if/else statement that achieves the same thing.
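A sketch of the tuple idea (the races and modifier values here are invented for illustration, not taken from the bot or the game rules):

```python
import random

# (name, strength bonus, constitution bonus) -- hypothetical modifiers
fantasyrace = [
    ("orc", 2, 0),
    ("dwarf", 0, 2),
    ("human", 1, 1),
]

race = random.choice(fantasyrace)
strength, constitution = 10, 10  # stand-ins for rolled stats

# Index into the tuple instead of branching on the race name
strength += race[1]
constitution += race[2]
print(race[0], strength, constitution)
```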

There are a lot of different species available to play as in the game, each with their own unique traits.
After this, to add a bit of flavor to the characters, I gave them a short bio.  The script picks one of over forty sentence templates and fills it with different nouns from a variety of lists.  This adds variety to the characters and gives the game master a little bit more to work with in terms of providing the character with a backstory or set of motivations.
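The template-filling step might look something like this (the templates and word lists below are placeholders of my own, not the bot's actual data):

```python
import random

templates = [
    "They once lost a {item} in a bet with a {creature}.",
    "They are searching for the {creature} that stole their {item}.",
]
items = ["lute", "map", "signet ring"]
creatures = ["dragon", "goblin", "sphinx"]

# Pick a sentence template, then fill its slots with random nouns
bio = random.choice(templates).format(
    item=random.choice(items),
    creature=random.choice(creatures),
)
print(bio)
```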

Once all the individual pieces are created they are assigned to a single variable that makes up the whole character.  With a little bit of work in Git, Heroku, and getting a Twitter developer account approved, the script was ready!  On Heroku the script is scheduled to tweet out a new character four times a day.  The code itself can be viewed here on GitHub.
This guy is pretty strong as long as no one puts cream in his coffee.




Friday, June 19, 2020

Working with Rasters

To complement the previous module, where we had vector data, this module works with raster data.  This was an optional assignment, but I decided to do it since I thought it would be useful to know how arcpy can operate on rasters.

The assignment was to create a raster that identifies the areas of a map that fit all of the following criteria: forested landcover, slope between 5 degrees and 20 degrees, and aspect between 150 degrees and 270 degrees.  This sort of output would be useful in identifying things like where a particular endangered species of bird might nest, and could be used to restrict construction or mining activities.

After importing the appropriate modules and setting up the environment, the first thing to do was to make sure that the Spatial Analyst extension was available.  The Spatial Analyst extension is a separately licensed feature, and the number an institution has available (if any) depends on how many licenses it holds and how many are currently in use.  I used a simple if/else statement to check this.  If a license is available, the program checks it out, performs the desired functions, then checks it back in at the end so others can use it.

The next step was to use the Reclassify tool to reassign the various forested landcover values all to one common value.  I then used Extract by Attributes on this raster so that only cells with the assigned value existed in the new raster.  This creates a raster with only the appropriately forested areas.

Then I took the elevation raster and assigned it to a variable using the Raster function.  I used this variable to create the slope and aspect rasters using Spatial Analyst's Slope and Aspect functions.  I made sure that the Slope function was set to degrees, since that's the unit the restrictions in the next step are measured in.

I used map algebra to restrict the rasters created in the previous step.  Four temporary rasters were made: where the slope is greater than 5 degrees, where the slope is less than 20 degrees, where the aspect is greater than 150 degrees, and where the aspect is less than 270 degrees.  The slope and aspect rasters were then combined with the "&" operator to create a new raster with all of the restrictions.  I then combined that raster with the extracted landcover values raster for the final product.  Up to this point, all the created rasters had been temporary.  Since this is the final raster that I want, I made sure to save it with the .save method.
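A pure-Python sketch of that map-algebra logic on a handful of cells (arcpy applies the same comparisons across the whole raster at once; the slope and aspect values here are invented):

```python
# Hypothetical slope (degrees) and aspect (degrees) values for four cells
slope = [3.0, 12.5, 18.0, 25.0]
aspect = [200.0, 100.0, 180.0, 220.0]

# Combine the four restrictions with "&", cell by cell
suitable = [
    (5 < s) & (s < 20) & (150 < a) & (a < 270)
    for s, a in zip(slope, aspect)
]
print(suitable)  # [False, False, True, False]
```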

The green shows the suitable land that fits all the criteria.

Monday, June 15, 2020

Working with Geometries


The lesson this week was working with geometries.  This is a very useful and practical skill, as you can use Python to transcribe data into a text file, or to take data and use it to create something within ArcGIS.  When you have data for hundreds of polygons, each with multiple points, this is much easier than doing it all manually.

Our assignment was to create a .txt file within python and then write data from a river shapefile to that document.  This is done by creating a search cursor and setting the appropriate parameters that we will use to loop through each feature.

This process uses a nested loop.  The outer loop uses the cursor to iterate through each row.  We then need an inner loop to iterate through every point of that feature.  Within the outer loop, I created a variable called "vertexID" and set it to zero.  In the inner loop it increments by one for every point, labeling each point within a feature.  In the nested loop we use the .getPart() method on the row to create an array of points for each feature.  At this point I have the program transcribe the object ID, vertex ID, X coordinate, Y coordinate, and feature name to a line of text in the created file.  I also have the script print out each of these lines in the console so I can see the program's progress and check for errors.
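Stripped of arcpy, the nested-loop pattern looks like this (the rows and river names below are made-up stand-ins for what the search cursor would return):

```python
# Each row: (object ID, feature name, list of (x, y) vertices)
rows = [
    (1, "Miakka River", [(407000.1, 3087000.5), (407010.7, 3087020.2)]),
    (2, "Pony Creek", [(408500.0, 3090100.3)]),
]

lines = []
for oid, name, points in rows:      # outer loop: one pass per feature
    vertexID = 0
    for x, y in points:             # inner loop: one pass per vertex
        vertexID += 1
        lines.append(f"{oid} {vertexID} {x} {y} {name}")

# Write one line of text per vertex, mirroring the assignment's .txt output
with open("rivers.txt", "w") as f:
    f.write("\n".join(lines))
print(lines)
```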

The output of the program, showing each point within each feature.
A flow chart demonstrating how the nested loops work within the program.




Friday, June 12, 2020

Exploring & Manipulating Data

The focus of this week's assignment was to use python to create a geodatabase, populate it with copied files, and then learn how to use the search cursor function and how dictionaries operate.

Creating a file geodatabase uses arcpy's CreateFileGDB_management function.  The parameters are the output location of the fGDB and its name.
Code displaying that the fGDB was made.

Next was the matter of populating the fGDB.  We already had the files we needed elsewhere, and it is easy to copy each one over with a loop.  First, I made a list and populated it with arcpy.ListFeatureClasses().  Then I iterated over each feature class in the list.  Within the loop I used arcpy.CopyFeatures_management().  The first parameter was the feature class the loop was currently on; the second was made by concatenating the output environment path with the fGDB name, with the file's basename (from arcpy's Describe function) added on at the end.

Each feature class being copied over to the new fGDB.
Now that the files have all been copied over, we have to change the workspace to the fGDB.  The next task is to iterate over all the county seats in the cities feature class.  This is done by creating a search cursor.  A search cursor takes the parameters of a feature class and a SQL query.  For this query I specified that the feature type must be equal to 'County Seat'.  As it goes through the loop, the program prints the name, feature type, and population of the city in the year 2000.
The successful output of the search cursor.
Lastly, we learned the basic functionality of Python dictionaries.  I started by creating a blank dictionary and then populating it with the same search cursor as in the previous step.  With the loop established, just one more line of code assigns a key based on the city name with the population count as the corresponding value.
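The dictionary-building line is the interesting part; here it is sketched with plain tuples standing in for the cursor rows (the city names and populations are invented placeholders):

```python
# Stand-ins for (NAME, FEATURE, POP_2000) rows a search cursor would yield
rows = [
    ("Boise", "County Seat", 185787),
    ("Caldwell", "County Seat", 25967),
]

county_seats = {}
for name, feature, pop2000 in rows:
    county_seats[name] = pop2000  # one line: city name -> population

print(county_seats)  # {'Boise': 185787, 'Caldwell': 25967}
```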

The creation and contents of the dictionary.

The flow chart showing the list of steps the code runs through from creation of the fGDB to the dictionary output.

Wednesday, June 3, 2020

Geoprocessing

This week we started to apply our python knowledge within ArcGIS itself.  Using scripting can make doing actions much faster and less monotonous than doing everything manually.

In the first part of the lab, we used ArcGIS Pro's ModelBuilder to create a multistep process that outputs a shapefile.  With ModelBuilder you can drag and drop files and tools and combine them to perform actions in a certain order.  For our lab we made a clip of all the soils within the extent of a basin.  We then used the Select tool with a SQL query to pick out all the soils that were classified as "not prime farmland".  Lastly, we used the generated not-prime-farmland shapefile to erase those areas from the soils clipped to the basin.  This created a final product where all the good farmland within the basin area remained.

For the second part of the lab, we shifted gears and used the Spyder IDE.  There were three primary goals of the program I wrote: assign XY coordinates to a hospitals shapefile, create a 1000m buffer around those hospitals, and create a dissolved 1000m buffer around the hospitals.  This entailed importing the arcpy module into the freshly created script.  Then I set the workspace environment and enabled overwriting of output files, which made it much easier to re-run the script over and over while tweaking it.

Adding the XY coordinates to the hospitals was a matter of just running the appropriate tool with the correct shapefile fed into the parameter.  The buffers were somewhat more involved since they take more parameters.  The first buffer needed three parameters: the file to create the buffer around, the path of the output file, and the size of the buffer itself.  The dissolved buffer took an extra three parameters, two of which were left blank since they don't apply to this situation, and the last one set to 'ALL' so that the buffers would dissolve.  A dissolved buffer is where any overlap between buffers is combined so that the output polygons are melded together.

Lastly, I used the GetMessages function, which stated the start time of each procedure, whether it was successful, and how long each method took.  This shows that the program ran correctly, though I also double-checked in ArcGIS Pro itself.


The program that gave the hospitals shapefile XY coordinates and also created the buffer layers successfully running.
Overall, I thought scripting would be much more intimidating for someone with a minimal computer science background than it turned out to be.  I can see how even fairly straightforward applications of Python make operations in ArcGIS much easier and faster, and I look forward to learning more applications.