---
pagetitle: "spatial joins with sf"
editor_options:
chunk_output_type: console
---
<br>
## Spatial Joins in R with `sf`
Some of the most common and useful geospatial operations are **joins** based on some component of the spatial topology. For example, you may want to find the attributes of points that are associated with or fall within certain polygons on the landscape...like bus stops in a county or river gaging stations within a watershed.
Spatial joins are based on the intersection between two spatial objects, often points and polygons. There are many ways we can join objects, which may include [specific options](https://r-spatial.github.io/sf/reference/geos_binary_pred.html) like *crosses*, *near*, *within*, *touches*, etc. The point is, we can do all this in R! Robin Lovelace et al. have a great online book available at <https://geocompr.robinlovelace.net/spatial-operations.html> that covers some of this material. Check it out!
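To get a feel for how these predicates differ, here's a minimal sketch with toy geometries (a hypothetical unit square and two points; assumes `sf` is installed):

```r
library(sf)

# a unit square polygon (as an sfc geometry column)
sq <- st_sfc(st_polygon(list(rbind(c(0, 0), c(1, 0), c(1, 1), c(0, 1), c(0, 0)))))
# one point inside the square, one point outside
pts <- st_sfc(st_point(c(0.5, 0.5)), st_point(c(2, 2)))

# each predicate returns a sparse list: for each point, the indices of matching polygons
st_intersects(pts, sq) # point 1 matches polygon 1; point 2 matches nothing
st_within(pts, sq)     # same result here, since point 1 is fully inside the square
```

Swapping in `st_touches()` or `st_crosses()` works the same way; each predicate just answers a different topological question.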
Let's **load the libraries** we're going to need first.
```{r loadlibs, echo=T, eval=F}
library(here)
library(sf)
library(dplyr)
library(viridis)
library(ggplot2)
library(USAboundaries)
library(rnaturalearth)
library(GSODR)
library(ggrepel)
library(cowplot)
```
```{r libseval, eval=T, echo=F, message=F, show=FALSE}
suppressPackageStartupMessages({
library(here);
library(sf);
library(dplyr);
library(viridis);
library(ggplot2);
library(USAboundaries);
library(rnaturalearth);
library(GSODR);
library(ggrepel);
library(cowplot)
})
```
### Polygon Data
We'll be using California and CA counties pulled from the `USAboundaries` package.
```{r getBoundaryDat, echo=T, eval=T, purl=FALSE}
# get USA states, filter out Puerto Rico, Alaska, and Hawaii for now
us <- us_states(resolution = "low") %>%
filter(!state_abbr %in% c("PR", "AK", "HI"))
# get CA boundary with high definition
ca <- USAboundaries::us_states(resolution = "high", states = "CA")
# make a box around CA (a grid with an n=1) for inset
ca_box <- st_make_grid(ca, n = 1)
# get CA county boundary
ca_co <- USAboundaries::us_counties(resolution = "high", states = "CA")
# make sure we have all the pieces with a quick test plot
plot(us$geometry)
plot(ca$geometry, add=T, col="gray50", border="maroon")
plot(ca_co$geometry, add=T, border="pink", col=NA)
plot(ca_box, add=T, border="red3", col=NA, lwd=2)
```
<br>
### Point Data
Now we have some polygon data to work with...let's add some climate data and practice joining polygons to points and points to polygons! First let's use the [`GSODR` (Global Surface Summary of the Day) package](https://ropensci.github.io/GSODR/) to get global climate station locations. Then we can join to a few specific states/counties, and plot. First the GSOD data:
```{r getGSODR, echo=TRUE, eval=FALSE}
# load the station metadata file from GSODR (this loads `isd_history` in your
# R session)
load(system.file("extdata", "isd_history.rda", package = "GSODR"))
# make spatial
isd_history <- as.data.frame(isd_history) %>%
st_as_sf(coords=c("LON","LAT"), crs=4326, remove=FALSE)
# filter to US and CA, many sites out in buoys along coast
isd_history_ca <- dplyr::filter(isd_history, CTRY=="US", STATE=="CA")
```
Note that this is a fairly large set of point data, with `28,104` observations globally. Let's map it to see how dense this dataset actually is, using world country outlines from the `rnaturalearth` package. Because the points are so dense, we'll plot them first, then add the country outlines on top.
```{r, echo=F, eval=T}
load(paste0(here::here(),"/data/isd_files.rda"))
```
```{r plotGSODR, echo=TRUE, eval=TRUE, purl=FALSE}
# view!
library(rnaturalearth)
library(rnaturalearthdata)
world <- ne_countries(scale = "medium", returnclass = "sf")
plot(isd_history$geometry, pch=16, cex=0.2, col="gray50")
plot(world$geometry, add=T)
title("GSOD Climate Stations")
```
That's a lot of points! Let's look at just California to make this a little more manageable.
```{r tstPlotCAISD, echo=TRUE, eval=TRUE, purl=FALSE, warning=FALSE}
# look at CA sites only
plot(isd_history_ca$geometry, cex=0.5)
plot(ca$geometry, col=alpha("gray", 0.5), border="#440154FF", lwd=1.5, add=TRUE)
plot(isd_history_ca$geometry, add=T, pch=21, bg="#21908CFF", cex=0.7, col="black")
title("GSOD Climate Stations labeled as CA")
```
Great, now we have a dataframe in our environment that has both global climate station locations, and only stations associated with California, USA. You'll notice there are a number of stations that fall outside of the CA border, largely those associated with buoys along the coast.
<hr>
## Spatial Joins
### Select *POLYGONS* containing *POINTS*
This first approach only selects polygons that contain points. For demonstration's sake, let's use the larger global point dataset. Note this does not modify the polygon dataframe in any way (i.e., it doesn't add attributes, update, or summarize anything). It only selects, or *filters* to, the polygons that contain points using a spatial join.
```{r polyptJoin, echo=TRUE, eval=TRUE, purl=FALSE, warning=FALSE}
# Get CA county POLYGONS only that contain ISD points within county boundaries
# does not bring attributes from points forward
ca_co_isd_poly <- ca_co[isd_history, ]
plot(ca_co_isd_poly$geometry, col=alpha("blue",0.3))
```
### Anti-Join Non-Matching Objects
So most counties have at least one point present. What if we specifically wanted to find the counties that *don't* have a GSOD climate station in them? We can use something called an "`anti_join`", which does precisely that: it identifies the items that don't have a match. There are a few possible ways to do this, but the most flexible I've found is the approach below, because it's easy to return whichever spatial object you prefer (e.g., points, polygons, lines).
The key is to use the same subsetting `[ ]` option, but wrap `st_intersects()` in `!lengths()` to get a logical vector flagging all the *non-matching* objects. We are essentially filtering by this vector, so this doesn't actually add any data from one layer to the other; it simply keeps the features where there aren't any overlapping bits.
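The `!lengths()` trick is plain base R. `st_intersects()` returns a list-like object with one element per feature, holding the indices of whatever that feature intersected; `lengths()` counts those matches, and `!` turns the zero counts into `TRUE`. A tiny sketch with a made-up match list (standing in for a real `st_intersects()` result):

```r
# pretend this came from st_intersects(): feature 1 hit nothing,
# feature 2 hit polygon 3, and feature 3 hit polygons 1 and 2
matches <- list(integer(0), 3L, c(1L, 2L))

lengths(matches)   # number of matches per feature: 0 1 2
!lengths(matches)  # zero coerces to FALSE, so ! flags the non-matches: TRUE FALSE FALSE
```

Subsetting a layer with that logical vector keeps only the features with zero matches.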
```{r polyptAntiJoin, echo=TRUE, eval=TRUE, purl=FALSE, warning=FALSE}
# Find ISD points that DON'T fall within county boundaries
ca_co_poly_anti <- isd_history_ca[!lengths(st_intersects(isd_history_ca, ca_co)), ]
# Find Counties that don't contain ISD points
ca_co_poly_anti2 <- ca_co[!lengths(st_intersects(ca_co, isd_history_ca)), ]
# Plot it!
plot(ca_co$geometry, col=alpha("pink",0.3))
plot(ca_co_poly_anti$geometry, pch=15, add=T, col="maroon")
plot(ca_co_poly_anti2$geometry, add=T, col="darkgreen")
title("Anti-joins with CA GSOD Climate Stations:
Points outside of county boundaries (squares),
Counties with no stations (green)")
```
<br>
### Join Attributes: *POINTS* inside *POLYGONS*
Great, what about joining the data attributes? Let's look for points that fall within CA counties, and add ATTRIBUTES from the county polygons to the climate station points. Just a reminder, here's the data columns (or *attributes*) in the polygon dataset:
```{r headCACOUNTY, echo=FALSE, eval=TRUE, purl=FALSE, warning=FALSE}
head(as.data.frame(ca_co))
```
<br>
So in this case, let's say we want to add the county **`name`** attribute to our POINT dataset, which looks like this (notice there's no `county` field or `name` field):
```{r headISDpt, echo=FALSE, eval=TRUE, purl=FALSE, warning=FALSE}
head(as.data.frame(isd_history))
```
<br>
So to spatially join the county `name` attribute with the appropriate point locations, let's use `st_join`. If we use `left=TRUE` here, our result will retain *all* the points in the dataset rather than just the spatial overlaps (where points fall inside polygons). So `left=TRUE` is essentially a `dplyr::left_join`, and `left=FALSE` is equivalent to a `dplyr::inner_join`.
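The analogy holds for regular (non-spatial) tables too. Here's a minimal sketch with made-up data (`stn_tbl` and `cnty_tbl` are hypothetical), assuming `dplyr` is loaded:

```r
library(dplyr)

# toy tables: station C has no matching county
stn_tbl  <- data.frame(stn = c("A", "B", "C"), cnty = c(1, 2, NA))
cnty_tbl <- data.frame(cnty = c(1, 2), name = c("Yolo", "Napa"))

left_join(stn_tbl, cnty_tbl, by = "cnty")  # 3 rows: keeps station C, with name = NA
inner_join(stn_tbl, cnty_tbl, by = "cnty") # 2 rows: drops the non-matching station
```

`st_join` behaves the same way, except the "key" is the spatial predicate rather than a shared column.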
```{r polyJoinAttribs, echo=TRUE, eval=TRUE, purl=FALSE, warning=FALSE, message=FALSE}
# For POINTS that fall within CA_counties, adds ATTRIBUTES, retains ALL pts if left=TRUE, otherwise uses inner_join
isd_ca_co_pts <- st_join(isd_history, left = FALSE, ca_co["name"]) # join points
# plot
plot(isd_ca_co_pts$geometry, pch=21, cex=0.7, col="purple", bg="gray80")
plot(ca_co$geometry, border="gray20", col=NA, add=T)
```
```{r headISDpt2, echo=FALSE, eval=TRUE, purl=FALSE, warning=FALSE}
head(as.data.frame(isd_ca_co_pts))
```
<br>
Now we have only points that fall *inside* of a CA county, **AND** the data frame has a new column/attribute called "`name`" (every climate station point now has a CA county name associated with it). We could easily specify additional columns inside our `st_join` function; if we don't specify any columns, then all columns from the polygon dataframe that spatially matched the points would be added to the points dataframe.
> `isd_ca_co_pts <- st_join(isd_history, left = FALSE, ca_co) # join all columns`
<hr>
## Practice with Climate Data Example!
Hopefully the above was useful...but let's put it into practice by using some spatial joins to select and download climate data from the `GSODR` package, and then make some visualizations. To start, let's take a look at which stations have data between 1980 and 2018. Check the [`GSODR`](https://ropensci.github.io/GSODR/articles/GSODR.html) vignette for more details; I'm just applying some of the commands described there.
<br>
### Check Stations for Data Availability
Here we check which stations contain data within a given date range. Some of these stations go back to the 1930s, but we'll focus on 1980--2018.
```{r checkStations, echo=TRUE, eval=TRUE, purl=FALSE, warning=FALSE}
# check availability for a date range
stations <- isd_ca_co_pts[isd_ca_co_pts$BEGIN <= 19800101 &
isd_ca_co_pts$END >= 20181231, ]
# Plot where these stations are?
ggplot() +
geom_sf(data=ca, fill="steelblue", alpha=0.4) +
geom_sf(data=ca_co, color="gray50", lwd=0.4, alpha=0.4) +
geom_sf(data = stations, size = 2, pch=21, color="thistle1", fill="black") +
theme_bw()+
ggtitle("GSOD Stations with data 1980-2018")
```
<br>
### Calculate Stations Per County
Looks like there are 53 stations, and some counties have more than one. Let's apply our spatial join powers to filter this list down a bit. We'll:
- Summarize our station data using the spatially joined county `name` attribute so we can calculate how many stations we have per county
- Create a dataset that includes only stations from counties with a single station
- Create a dataset that contains stations within a set distance from the centroid of the county
We'll mainly use some basic `dplyr` here, which is possible because as `sf` objects, these are still simple data frames.
```{r Calcpercounty, echo=TRUE, eval=TRUE, purl=FALSE, message=FALSE, warning=FALSE}
#class(stations) # sf object
# calc number per county
stations_n <- stations %>%
rename(cnty_name=name) %>% # make the column name clearer
group_by(cnty_name) %>%
mutate(n=n()) %>%
ungroup()
#class(stations_n) # still sf object
# join back to county layer with a spatial join, using left=FALSE (same as "inner_join")
ca_co_stations <- st_join(ca_co, stations_n, left = FALSE)
# plot!
ggplot() +
geom_sf(data=ca) +
geom_sf(data=ca_co_stations, aes(fill=as.factor(n)), lwd=0.4, alpha=0.4) +
theme_bw()+
scale_fill_viridis_d("No. of Stations")+
ggtitle("Number of GSOD Stations with data 1980-2018 per County")
```
<br>
### Pick Station Nearest the Centroid of County
Well great, what do we do for counties with multiple stations? How about picking the station nearest the centroid of each county? Steps:
1. We need to work with just counties with more than one station.
2. We need to add the centroid of each of the counties in question.
3. We need to select the station nearest the centroid.
For **Step 2**, we're going to use the `purrr` package to `map` (i.e., apply) the `st_centroid` function over each county in our dataframe. This is the equivalent of a **for-loop**; it just looks very different.
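If the `map_dbl()` syntax is unfamiliar, here's the same toy computation written both ways (a made-up example; assumes `purrr` is loaded). The `~ .x^2` formula is shorthand for `function(x) x^2`:

```r
library(purrr)

x <- c(1, 2, 3)

# for-loop version: preallocate, then fill element by element
out <- numeric(length(x))
for (i in seq_along(x)) {
  out[i] <- x[i]^2
}

# purrr version: one line, always returns a numeric (double) vector
out2 <- map_dbl(x, ~ .x^2)

identical(out, out2) # TRUE
```

In the chunk below, the same pattern maps `st_centroid` over each row's geometry, pulling out the X (`[[1]]`) and Y (`[[2]]`) coordinates.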
```{r filterpercounty, echo=TRUE, eval=TRUE, purl=FALSE, warning=FALSE}
# STEP 1: filter to stations with n>1
stations_n2 <- stations_n %>% filter(n > 1)
# STEP 2: Use "purrr" package with sf to get Centroid points for our counties with n>1
library(purrr)
ca_co_centroid_pts <- ca_co_stations %>%
filter(n > 1) %>%
# add centroid values using the geometry column
mutate(cent_lon=map_dbl(geometry, ~st_centroid(.x)[[1]]),
cent_lat=map_dbl(geometry, ~st_centroid(.x)[[2]])) %>%
as.data.frame() # convert to simple dataframe
# make back into an sf object, but only using the centroid X/Ys we just made
ca_co_centroid_pts <- st_as_sf(ca_co_centroid_pts, coords = c("cent_lon","cent_lat"), crs=4326, remove = FALSE)
# plot!
ggplot() +
geom_sf(data=ca_co) +
geom_sf(data=ca_co_centroid_pts, aes(fill=as.factor(n)), pch=21, size=4) +
theme_bw()+
scale_fill_viridis_d("No. of Stations") +
labs(subtitle = "Centroids for Counties with >1 GSOD Station")
```
> **Step 3**: This is the trickiest part...
There are probably a few different ways to do this, but let's try the one that seems simplest and uses the fewest lines of code. A handy function (`st_nn`, from the `nngeo` package) finds nearest neighbors between points/lines/polygons, and can return distances as well. So let's get the station nearest the county centroid for all the counties with stations \> 1.
```{r nearestCent, echo=TRUE, eval=TRUE, purl=FALSE, warning=FALSE}
library(nngeo)
# find the first nearest neighbor GSOD station for each centroid, returns a list
nearest_stations <- st_nn(ca_co_centroid_pts, stations_n2, returnDist = T, k = 1, progress = FALSE)
# add distances and centroid ID and make into a dataframe, get only distinct IDs
nearest_stations_df <- tibble(dist=nearest_stations$dist, centrID=do.call(rbind, nearest_stations$nn)) %>%
dplyr::distinct(.keep_all = T)
# make a df of stations in counties where n=1
stations_n1 <- stations_n %>% filter(n==1)
# now select the "nearest to centroid" stations and bind to the n1 dataset
stations_final <- stations_n2[nearest_stations_df$centrID,] %>%
rbind(stations_n1)
# Plot it to be sure
ggplot() +
geom_sf(data=ca_co) +
geom_sf(data=ca_co_centroid_pts, pch=21, size=2) +
geom_sf(data=stations_n2, col="orange", alpha=0.9, pch=17, size=1.8) +
geom_sf(data=stations_final, col="blue3", alpha=0.9) +
theme_bw() + labs(x="Lon",y="Lat")+
geom_label_repel(data=stations_final, aes(x=st_coordinates(geometry)[,1],
y=st_coordinates(geometry)[,2], label=CALL), size=2.5) +
labs(caption = "County Centroids (open circle),
Final Stations Selected (blue circle),
Stations n>1 (orange)")
```
That was a lot! But it looks like it all works, and now we have a final set of stations we can use to download data. For simplicity in this example, I'm picking three stations: the one with the lowest elevation, the one with the highest elevation, and the one with the greatest latitude.
<br>
### Download Climate Data
Now we can use our station list to download daily data for each station for any period of record we want. Note: this code works, but it took 3-4 minutes to run. To speed things up, I've saved the file [here](https://github.com/ryanpeek/mapping-in-R-workshop/raw/master/data/CA_stations_GSOD_19800101-20181231.rda), or you can just grab the summarized data shown in the next section.
```{r getClimDat, echo=TRUE, eval=FALSE, purl=FALSE, warning=FALSE}
# Get the highest- and lowest-elevation station IDs, plus the northernmost (KCEC):
(stationIDs <- stations_final %>%
   filter(min(ELEV_M)==ELEV_M | max(ELEV_M)==ELEV_M | CALL=="KCEC") %>%
   as.data.frame() %>%
   select(STNID, CALL))
# get data:
climdata <- get_GSOD(station = stationIDs$STNID, years = c(1980:2018))
# save it!!
save(climdata, file = "data/CA_stations_GSOD_19800101-20181231.rda")
```
<hr>
### Summarize and Visualize!
Finally, we have data! Let's summarize and visualize it. I'll aggregate these data to monthly means so we can see different timing patterns in precipitation and temperature across the state.
```{r loadGSOD, eval=T, echo=T, purl=FALSE}
load(paste0(here::here(),"/data/CA_stations_GSOD_19800101-20181231.rda"))
climdata_stations <- dplyr::distinct(climdata, CALL, .keep_all = TRUE) %>%
st_as_sf(coords=c("LON","LAT"), crs=4326, remove = F)
```
#### Monthly Average
Let's calculate monthly averages and then see how they compare across these different stations.
```{r visMonthly, eval=F, echo=T, purl=FALSE}
# MONTH: filter missing data out:
clim_month <- climdata %>%
filter(!is.na(PRCP)) %>%
group_by(CALL, MONTH) %>%
summarize_at(.vars=c("TEMP","PRCP"), .funs = list(min=min, mean=mean, max=max)) %>%
ungroup() %>%
mutate(CALL=forcats::fct_relevel(CALL,c("KCEC","KBLU","KTRM"))) # relevel order
# monthly prcp
mPPT <- ggplot() + geom_col(data=clim_month, aes(x=MONTH, y=PRCP_mean, fill=PRCP_mean), show.legend = T)+
theme_minimal() + labs(y="", x="")+
theme(plot.background = element_blank(),
#legend.position = "left",
legend.position = c(-0.25, 0.55),
legend.key.height = unit(.15,units = "in"),
legend.key.width = unit(.1, units = "in"),
panel.border = element_blank(),
axis.text.y = element_blank(),
plot.margin = unit(c(0, 0, 0 ,0), "mm")) +
scale_fill_viridis_c("Mean \nPPT(in)") +
coord_polar() +
facet_wrap(clim_month$CALL~., nrow = 3)
# make a map
siteMap <- ggplot() +
geom_sf(data=ca_co, fill="sienna",alpha=0.3) +
geom_sf(data=climdata_stations, fill="dodgerblue", alpha=0.9, pch=21, size=4) +
geom_label_repel(data=climdata_stations, aes(x=LON, y=LAT, label=CALL), size=3.7) +
theme_bw() + theme(panel.border = element_blank()) +
coord_sf(datum = NA)+
labs(subtitle="Monthly Precipitation for GSOD Stations, 1980-2018", y="",x="",
caption="Polar plots of monthly precip for each station")
# Plot it together using cowplot
ggdraw(siteMap) +
draw_plot(mPPT, width = 0.6, height = .53, x = 0.48, y = 0.42)
ggsave("img/monthly_ppt_map.png", width = 6, height = 8, units = "in", dpi=300)
```
```{r ggPPTMapOut, eval=TRUE, echo=FALSE, out.height="100%", out.width="100%", purl=FALSE}
knitr::include_graphics(paste0(here::here(), "/img/monthly_ppt_map.png"))
```
<br>
#### Daily
Let's do the same thing for temperature!
```{r visDaily, eval=F, echo=T, purl=FALSE}
# YDAY: filter
clim_day <- climdata %>%
filter(!is.na(PRCP)) %>%
group_by(CALL, YDAY) %>%
summarize_at(.vars=vars(TEMP,PRCP), .funs = list(min=min, mean=mean, max=max))
# daily mean temp
(dTEMP <- ggplot() + geom_col(data=clim_day, aes(x=YDAY, y=TEMP_mean, fill=TEMP_mean), show.legend = T)+
theme_minimal() + labs(y="", x="") +
scale_x_continuous(breaks=c(1, 90, 180, 270),
labels=c("Jan","Apr","Jul","Oct")) +
theme(plot.background = element_blank(),
legend.position = c(-0.25, 0.55),
legend.key.height = unit(.15,units = "in"),
legend.key.width = unit(.1, units = "in"),
panel.border = element_blank(),
axis.text.y = element_blank(),
plot.margin = unit(c(0, 0, 0 ,0), "mm")) +
scale_fill_viridis_c("Mean \nTemp(C)") +
coord_polar() +
facet_wrap(CALL~., nrow = 3))
siteMap2 <- ggplot() +
geom_sf(data=ca_co, fill="sienna",alpha=0.3) +
geom_sf(data=climdata_stations, fill="dodgerblue", alpha=0.9, pch=21, size=4) +
geom_label_repel(data=climdata_stations, aes(x=LON, y=LAT, label=CALL), size=3.7) +
theme_bw() + theme(panel.border = element_blank()) +
coord_sf(datum = NA)+
labs(subtitle="Daily Mean Temperature for GSOD Stations, 1980-2018", y="",x="",
caption="Polar plots of daily temp for each station")
# Plot it together using cowplot
ggdraw(siteMap2) +
draw_plot(dTEMP, width = 0.63, height = .53, x = 0.48, y = 0.42)
ggsave("img/daily_temp_map.png", width = 6, height = 8, units = "in", dpi=300)
```
```{r ggTEMPMapOut, eval=TRUE, echo=FALSE, out.height="100%", out.width="100%", purl=FALSE}
knitr::include_graphics(paste0(here::here(), "/img/daily_temp_map.png"))
```
### Summary
California is an interesting place because we get most of our water during the winter, and typically have a long warm/dry period through the summer (a common Mediterranean climate pattern). It's also striking to see the difference in mean precipitation (over a \~40-year period) across the state.
Hopefully there were some useful tidbits in this lesson/post. Please let me know if you have questions/comments (see the Contact tab).