@@ -643,12 +637,12 @@ i Use the conflicted package (<http://conflicted.r-lib.org/>) to force all
If you inspect our chart, you can see that we’re getting closer, but it’s not really that helpful to compare the totals. What we need to do is get percentages that can be compared side by side. This is easy to do using another dplyr feature, mutate():
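To see the idea, here is a minimal sketch (the table and its column names are invented for illustration; the census dataframe has its own): mutate() appends a new column computed from existing ones, so dividing each count by the column total yields comparable percentages.

```{r}
# Hypothetical data: a long-format table with one row per group.
library(dplyr)

df <- tibble::tibble(
  religion = c("Christian", "Muslim", "No religion"),
  count    = c(500, 60, 440)   # illustrative counts, not census figures
)

# mutate() adds a derived column; each count divided by the column total:
df <- df %>% mutate(percent = count / sum(count) * 100)
df
```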
Until recently, most data science books didn’t have a section on geospatial data. It was considered a specialist form of research best left to GIS technicians who tended to use proprietary tools like ArcGIS. This has changed significantly in the past five years, but you’ll still be hard pressed to find an introduction to the subject which strays very far from a few simple data sets (mostly of the USA) and relatively uncomplicated geospatial operations. I actually first began learning R back in 2013, right when open source geospatial research tools were being developed with quite a lot more energy, and geospatial data remains my personal favourite data science playground, so in this book we’re going to go much deeper than is usual. There are also good reasons to take things a few steps further given the particular forms of data and inquiry that religion takes us into.
Geospatial data is, in its most basic form, about working with maps. This means that most of your data can be a quite simple dataframe, e.g. just a list of names or categories associated with a set of X and Y coordinates. Once you have a set of items, however, things get interesting very quickly, as you can layer data sets on top of one another. We’re going to begin this chapter by developing a geolocated data set of churches in the UK. This information is readily and freely available online thanks to the UK Ordnance Survey, a quasi-governmental agency which maintains the various (now digital) maps of Britain. Lucky for us, the Ordnance Survey has an open data product that anyone can use!
Before we begin, there are some key things we should note about geospatial data. Geospatial data tends to fall into one of two kinds: points and polygons. Points can be any kind of feature: a house, a church, a pub, someone’s favourite ancient oak tree, or some kind of sacred relic. Polygons tend to be associated with wider areas, and as such can be used to describe large features, e.g. an ocean, a local authority, or a mountain, as well as demographic features, like a census Output Area with associated census summaries. Points are very simple data representations: just an X and Y coordinate. Polygons are much more complex, often containing dozens or even thousands of points.
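To make the distinction concrete, here is a small sketch using the sf package (the package this chapter relies on for geospatial work; the coordinates are invented for illustration):

```{r}
library(sf)

# A point is a single X/Y coordinate pair:
church <- st_point(c(-1.9, 52.5))

# A polygon is a closed ring of points (the first vertex repeats at the end):
area <- st_polygon(list(rbind(c(0, 0), c(1, 0), c(1, 1), c(0, 1), c(0, 0))))

st_geometry_type(church)  # POINT
st_geometry_type(area)    # POLYGON
```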
The most complex aspect of point data relates to the ways that coordinates are encoded, as they will always need to be associated with a coordinate reference system (CRS) which describes how they are situated with respect to the planet Earth. The most common CRS is WGS84, though for our data sets we’ll also come into contact with the OSGB36 British National Grid, a specifically British coordinate reference system. There are dozens of CRSs, each usually mapping onto a specific geographical region. Bearing in mind the way that you need to use a CRS to understand how coordinates can be associated with specific parts of the earth, you can see how this is a bit like survey data, where you also need a “codebook” to understand what the specific response values map onto, e.g. a “1” means “strongly agree” and so on. We also saw, in a previous chapter, how some forms of data have the codebook already baked into the factor data, simplifying the process of interpreting the data. In a similar way, some types of geospatial data sets come with the CRS “baked in”, while we’ll need to define the CRS for other types.
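As a rough sketch of how this works in practice with sf: a bare coordinate pair means nothing until we declare which CRS it belongs to, and once declared it can be re-expressed in another. (EPSG:4326 is the code for WGS84; EPSG:27700 is the Ordnance Survey’s British National Grid.)

```{r}
library(sf)

# Declare the "codebook": treat these numbers as WGS84 longitude/latitude...
pt <- st_sfc(st_point(c(-1.9, 52.5)), crs = 4326)

# ...and re-project the same location into British National Grid metres:
st_transform(pt, 27700)
```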
Here are some of the most common types of geospatial data files:

- CSV: “comma separated values”, a plain text file containing various coordinates
- Shapefile: a legacy file format, often still in use, but being replaced by others for a variety of good reasons. For more on this see http://switchfromshapefile.org/
- Geopackage: one of the more recent ways of packaging up geospatial data. Geopackages can contain a wide variety of different data and are easily portable.
- GeoJSON: a file format commonly used in other forms of coding; “JSON” (an acronym for JavaScript Object Notation) is meant to be easily interchangeable across various platforms. GeoJSON is an augmented version of JSON data with coordinates added in.
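In practice the difference between these formats mostly shows up at import time. A hedged sketch follows (the file names are hypothetical): sf’s st_read() picks up the CRS embedded in a geopackage or GeoJSON file automatically, whereas a CSV is just text, so we have to declare the CRS ourselves when converting it.

```{r}
library(sf)

# Formats with the CRS "baked in" (placeholder file names):
# churches <- st_read("churches.gpkg")     # geopackage
# churches <- st_read("churches.geojson")  # GeoJSON

# A plain CSV of coordinates must be converted, with the CRS declared by hand:
# df <- read.csv("churches.csv")
# churches <- st_as_sf(df, coords = c("x", "y"), crs = 27700)
```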
Now that you have a sense of some of the basic aspects of geospatial data, let’s dive in and do a bit of learning in action.
References
diff --git a/docs/search.json b/docs/search.json
index aa2099a..5c80665 100644
--- a/docs/search.json
+++ b/docs/search.json
@@ -60,7 +60,7 @@
"href": "chapter_1.html#your-first-project-the-uk-census",
"title": "2 The 2021 UK Census",
"section": "2.1 Your first project: the UK Census",
- "text": "2.1 Your first project: the UK Census\nLet’s start by importing some data into R. Because R is what is called an object-oriented programming language, we’ll always take our information and give it a home inside a named object. There are many different kinds of objects, which you can specify, but usually R will assign a type that seems to fit best.\nIf you’d like to explore this all in a bit more depth, you can find a very helpful summary in R for Data Science, chapter 8, “data import”.\nIn the example below, we’re going to read in data from a comma separated value file (“csv”) which has rows of information on separate lines in a text file with each column separated by a comma. This is one of the standard plain text file formats. R has a function you can use to import this efficiently called “read.csv”. Each line of code in R usually starts with the object, and then follows with instructions on what we’re going to put inside it, where that comes from, and how to format it:\n\nsetwd(\"/Users/kidwellj/gits/hacking_religion_textbook/hacking_religion\")\nlibrary(here) # much better way to manage working paths in R across multiple instances\n\nhere() starts at /Users/kidwellj/gits/hacking_religion_textbook\n\nlibrary(tidyverse)\n\n-- Attaching core tidyverse packages ------------------------ tidyverse 2.0.0 --\nv dplyr 1.1.3 v readr 2.1.4\nv forcats 1.0.0 v stringr 1.5.0\nv ggplot2 3.4.3 v tibble 3.2.1\nv lubridate 1.9.3 v tidyr 1.3.0\nv purrr 1.0.2 \n\n\n-- Conflicts ------------------------------------------ tidyverse_conflicts() --\nx dplyr::filter() masks stats::filter()\nx dplyr::lag() masks stats::lag()\ni Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors\n\nhere::i_am(\"chapter_1.qmd\")\n\nhere() starts at /Users/kidwellj/gits/hacking_religion_textbook/hacking_religion\n\nuk_census_2021_religion <- read.csv(here(\"example_data\", \"census2021-ts030-rgn.csv\"))"
+ "text": "2.1 Your first project: the UK Census\nLet’s start by importing some data into R. Because R is what is called an object-oriented programming language, we’ll always take our information and give it a home inside a named object. There are many different kinds of objects, which you can specify, but usually R will assign a type that seems to fit best.\nIf you’d like to explore this all in a bit more depth, you can find a very helpful summary in R for Data Science, chapter 8, “data import”.\nIn the example below, we’re going to read in data from a comma separated value file (“csv”) which has rows of information on separate lines in a text file with each column separated by a comma. This is one of the standard plain text file formats. R has a function you can use to import this efficiently called “read.csv”. Each line of code in R usually starts with the object, and then follows with instructions on what we’re going to put inside it, where that comes from, and how to format it:\n\nsetwd(\"/Users/kidwellj/gits/hacking_religion_textbook/hacking_religion\")\nlibrary(here) |> suppressPackageStartupMessages()\nlibrary(tidyverse) |> suppressPackageStartupMessages()\nhere::i_am(\"chapter_1.qmd\")\n\nhere() starts at /Users/kidwellj/gits/hacking_religion_textbook/hacking_religion\n\n# Set up local workspace:\nif (dir.exists(\"data\") == FALSE) {\n dir.create(\"data\") \n}\nif (dir.exists(\"figures\") == FALSE) {\n dir.create(\"figures\") \n}\nif (dir.exists(\"derivedData\") == FALSE) {\n dir.create(\"derivedData\")\n}\n\nuk_census_2021_religion <- read.csv(here(\"example_data\", \"census2021-ts030-rgn.csv\"))"
},
{
"objectID": "chapter_1.html#examining-data",
@@ -102,7 +102,7 @@
"href": "chapter_1.html#multifactor-visualisation",
"title": "2 The 2021 UK Census",
"section": "2.7 Multifactor Visualisation",
- "text": "2.7 Multifactor Visualisation\nOne element of R data analysis that can get really interesting is working with multiple variables. Above we’ve looked at the breakdown of religious affiliation across the whole of England and Wales (Scotland operates an independent census), and by placing this data alongside a specific region, we’ve already made a basic entry into working with multiple variables but this can get much more interesting. Adding an additional quantative variable (also known as bivariate data) into the mix, however can also generate a lot more information and we have to think about visualising it in different ways which can still communicate with visual clarity in spite of the additional visual noise which is inevitable with enhanced complexity. Let’s have a look at the way that religion in England and Wales breaks down by ethnicity.\n\n\n\n\n\n\nWhat is Nomis?\n\n\n\nFor the UK, census data is made available for programmatic research like this via an organisation called NOMIS. Luckily for us, there is an R library you can use to access nomis directly which greatly simplifies the process of pulling data down from the platform. It’s worth noting that if you’re not in the UK, there are similar options for other countries. Nearly every R textbook I’ve ever seen works with USA census data, so you’ll find plenty of documentation available on the tools you can use for US Census data. Similarly for the EU, Canada, Austrailia etc.\nHere’s the process to identify a dataset within the nomis platform:\n\n# Process to explore nomis() data for specific datasets\nlibrary(nomisr)\n# temporarily commenting out until renv can be implemented and runtime errors in other environments avoided:\n#religion_search <- nomis_search(name = \"*Religion*\")\n#religion_measures <- nomis_get_metadata(\"ST104\", \"measures\")\n#tibble::glimpse(religion_measures)\n#religion_geography <- nomis_get_metadata(\"NM_529_1\", \"geography\", \"TYPE\")\n\n\n\n\nlibrary(nomisr)\n# Get table of Census 2011 religion data from nomis\n# temporarily commenting out until renv can be implemented and runtime errors in other environments avoided:\n#z <- nomis_get_data(id = \"NM_529_1\", time = \"latest\", geography = \"TYPE499\", measures=c(20301))\n#saveRDS(z, file = \"z.rds\")\nz <- readRDS(file = (here(\"example_data\", \"z.rds\")))\n\n# Filter down to simplified dataset with England / Wales and percentages without totals\nuk_census_2011_religion <- filter(z, GEOGRAPHY_NAME==\"England and Wales\" & RURAL_URBAN_NAME==\"Total\" & C_RELPUK11_NAME != \"All categories: Religion\")\n# Drop unnecessary columns\nuk_census_2011_religion <- select(uk_census_2011_religion, C_RELPUK11_NAME, OBS_VALUE)\n# Plot results\nplot1 <- ggplot(uk_census_2011_religion, aes(x = C_RELPUK11_NAME, y = OBS_VALUE)) + geom_bar(stat = \"identity\") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))\n# ggsave(filename = \"plot.png\", plot = plot1)\n\n# grab daata from nomis for 2001 census religion / ethnicity\n\n# temporarily commenting out until renv can be implemented and runtime errors in other environments avoided:\n#z0 <- nomis_get_data(id = \"NM_1872_1\", time = \"latest\", geography = \"TYPE499\", measures=c(20100))\n#saveRDS(z0, file = \"z0.rds\")\nz0 <- readRDS(file = (here(\"example_data\", \"z0.rds\")))\n\n# select relevant columns\nuk_census_2001_religion_ethnicity <- select(z0, GEOGRAPHY_NAME, C_RELPUK11_NAME, C_ETHHUK11_NAME, OBS_VALUE)\n# Filter down to simplified dataset with England / Wales and percentages 
without totals\nuk_census_2001_religion_ethnicity <- filter(uk_census_2001_religion_ethnicity, GEOGRAPHY_NAME==\"England and Wales\" & C_RELPUK11_NAME != \"All categories: Religion\")\n# Simplify data to only include general totals and omit subcategories\nuk_census_2001_religion_ethnicity <- uk_census_2001_religion_ethnicity %>% filter(grepl('Total', C_ETHHUK11_NAME))\n\n# grab data from nomis for 2011 census religion / ethnicity table\n# commenting out nomis_get temporarily until I can get renv working properly here\n#z1 <- nomis_get_data(id = \"NM_659_1\", time = \"latest\", geography = \"TYPE499\", measures=c(20100))\n#saveRDS(z1, file = \"z1.rds\")\nz1 <- readRDS(file = (here(\"example_data\", \"z1.rds\")))\n\n# select relevant columns\nuk_census_2011_religion_ethnicity <- select(z1, GEOGRAPHY_NAME, C_RELPUK11_NAME, C_ETHPUK11_NAME, OBS_VALUE)\n# Filter down to simplified dataset with England / Wales and percentages without totals\nuk_census_2011_religion_ethnicity <- filter(uk_census_2011_religion_ethnicity, GEOGRAPHY_NAME==\"England and Wales\" & C_RELPUK11_NAME != \"All categories: Religion\" & C_ETHPUK11_NAME != \"All categories: Ethnic group\")\n# Simplify data to only include general totals and omit subcategories\nuk_census_2011_religion_ethnicity <- uk_census_2011_religion_ethnicity %>% filter(grepl('Total', C_ETHPUK11_NAME))\n\n# grab data from nomis for 2021 census religion / ethnicity table\n#z2 <- nomis_get_data(id = \"NM_2131_1\", time = \"latest\", geography = \"TYPE499\", measures=c(20100))\n#saveRDS(z2, file = \"z2.rds\")\nz2 <- readRDS(file = (here(\"example_data\", \"z2.rds\")))\n\n# select relevant columns\nuk_census_2021_religion_ethnicity <- select(z2, GEOGRAPHY_NAME, C2021_RELIGION_10_NAME, C2021_ETH_8_NAME, OBS_VALUE)\n# Filter down to simplified dataset with England / Wales and percentages without totals\nuk_census_2021_religion_ethnicity <- filter(uk_census_2021_religion_ethnicity, GEOGRAPHY_NAME==\"England and Wales\" & C2021_RELIGION_10_NAME != \"Total\" & C2021_ETH_8_NAME != \"Total\")\n# 2021 census includes white sub-groups so we need to omit those so we just have totals:\nuk_census_2021_religion_ethnicity <- filter(uk_census_2021_religion_ethnicity, C2021_ETH_8_NAME != \"White: English, Welsh, Scottish, Northern Irish or British\" & C2021_ETH_8_NAME != \"White: Irish\" & C2021_ETH_8_NAME != \"White: Gypsy or Irish Traveller, Roma or Other White\")\n\nggplot(uk_census_2011_religion_ethnicity, aes(fill=C_ETHPUK11_NAME, x=C_RELPUK11_NAME, y=OBS_VALUE)) + geom_bar(position=\"dodge\", stat =\"identity\", colour = \"black\") + scale_fill_brewer(palette = \"Set1\") + ggtitle(\"Religious Affiliation in the 2021 Census of England and Wales\") + xlab(\"\") + ylab(\"\") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))\n\n\n\n\nThe trouble with using grouped bars here, as you can see, is that there are quite sharp disparities which make it hard to compare in meaningful ways. We could use logarithmic rather than linear scaling as an option, but this is hard for many general public audiences to apprecaite without guidance. 
One alternative quick fix is to extract data from “white” respondents which can then be placed in a separate chart with a different scale.\n\n# Filter down to simplified dataset with England / Wales and percentages without totals\nuk_census_2011_religion_ethnicity_white <- filter(uk_census_2011_religion_ethnicity, C_ETHPUK11_NAME == \"White: Total\")\nuk_census_2011_religion_ethnicity_nonwhite <- filter(uk_census_2011_religion_ethnicity, C_ETHPUK11_NAME != \"White: Total\")\n\nggplot(uk_census_2011_religion_ethnicity_nonwhite, aes(fill=C_ETHPUK11_NAME, x=C_RELPUK11_NAME, y=OBS_VALUE)) + geom_bar(position=\"dodge\", stat =\"identity\", colour = \"black\") + scale_fill_brewer(palette = \"Set1\") + ggtitle(\"Religious Affiliation in the 2021 Census of England and Wales\") + xlab(\"\") + ylab(\"\") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))\n\n\n\n\nThis still doesn’t quite render with as much visual clarity and communication as I’d like. For a better look, we can use a technique in R called “faceting” to create a series of small charts which can be viewed alongside one another.\n\nggplot(uk_census_2011_religion_ethnicity_nonwhite, aes(x=C_RELPUK11_NAME, y=OBS_VALUE)) + geom_bar(position=\"dodge\", stat =\"identity\", colour = \"black\") + facet_wrap(~C_ETHPUK11_NAME, ncol = 2) + scale_fill_brewer(palette = \"Set1\") + ggtitle(\"Religious Affiliation in the 2011 Census of England and Wales\") + xlab(\"\") + ylab(\"\") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))\n\n\n\n\nFor our finale chart, I’d like to take the faceted chart we’ve just done, and add in totals for the previous two census years (2001 and 2011) so we can see how trends are changing in terms of religious affiliation within ethnic self-identification categories. We’ll draw on some techniques we’re already developed above using rbind() to connect up each of these charts (after we’ve added a column identifying each chart by the census year). We will also need to use one new technique to change the wording of ethnic categories as this isn’t consistent from one census to the next and ggplot will struggle to chart things if the terms being used are exactly the same. 
We’ll use mutate() again to accomplish this with some slightly different code.\n\n# First add column to each dataframe so we don't lose track of the census it comes from:\nuk_census_2001_religion_ethnicity$dataset <- c(\"2001\")\nuk_census_2011_religion_ethnicity$dataset <- c(\"2011\")\nuk_census_2021_religion_ethnicity$dataset <- c(\"2021\")\n\n# Let's tidy the names of each column:\n\nnames(uk_census_2001_religion_ethnicity) <- c(\"Geography\", \"Religion\", \"Ethnicity\", \"Value\", \"Year\")\nnames(uk_census_2011_religion_ethnicity) <- c(\"Geography\", \"Religion\", \"Ethnicity\", \"Value\", \"Year\")\nnames(uk_census_2021_religion_ethnicity) <- c(\"Geography\", \"Religion\", \"Ethnicity\", \"Value\", \"Year\")\n\n# Next we need to change the terms using mutate()\nuk_census_2001_religion_ethnicity <- uk_census_2001_religion_ethnicity %>% \n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^White: Total$\", replacement = \"White\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Mixed: Total$\", replacement = \"Mixed\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Asian: Total$\", replacement = \"Asian\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Black or Black British: Total$\", replacement = \"Black\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Chinese or Other ethnic group: Total$\", replacement = \"Other\"))\n \nuk_census_2011_religion_ethnicity <- uk_census_2011_religion_ethnicity %>% \n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^White: Total$\", replacement = \"White\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Mixed/multiple ethnic group: Total$\", replacement = \"Mixed\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Asian/Asian British: Total$\", replacement = \"Asian\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Black/African/Caribbean/Black British: Total$\", replacement = \"Black\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Other ethnic group: Total$\", replacement = \"Other\"))\n\nuk_census_2021_religion_ethnicity <- uk_census_2021_religion_ethnicity %>% \n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^White: Total$\", replacement = \"White\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Mixed or Multiple ethnic groups$\", replacement = \"Mixed\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Asian, Asian British or Asian Welsh$\", replacement = \"Asian\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Black, Black British, Black Welsh, Caribbean or African$\", replacement = \"Black\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Other ethnic group$\", replacement = \"Other\"))\n\n# Now let's merge the tables:\n\nuk_census_merged_religion_ethnicity <- rbind(uk_census_2021_religion_ethnicity, uk_census_2011_religion_ethnicity)\n\nuk_census_merged_religion_ethnicity <- rbind(uk_census_merged_religion_ethnicity, uk_census_2001_religion_ethnicity)\n\n# As above, we'll split out non-white and white:\n\nuk_census_merged_religion_ethnicity_nonwhite <- filter(uk_census_merged_religion_ethnicity, Ethnicity != \"White\")\n\n# Time to plot!\n\nggplot(uk_census_merged_religion_ethnicity_nonwhite, aes(fill=Year, x=Religion, y=Value)) + geom_bar(position=\"dodge\", stat =\"identity\", colour = \"black\") + facet_wrap(~Ethnicity, ncol = 2) + 
scale_fill_brewer(palette = \"Set1\") + ggtitle(\"Religious Affiliation in the 2001-2021 Census of England and Wales\") + xlab(\"\") + ylab(\"\") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))\n\n\n\n\nThere are a few formatting issues which remain. Our y-axis number labels are in scientific format which isn’t really very easy to read. You can use the very powerful and flexible scales() library to bring in some more readable formatting of numbers in a variety of places in R including in ggplot visualizations.\n\nlibrary(scales)\n\n\nAttaching package: 'scales'\n\n\nThe following object is masked from 'package:purrr':\n\n discard\n\n\nThe following object is masked from 'package:readr':\n\n col_factor\n\nggplot(uk_census_merged_religion_ethnicity_nonwhite, aes(fill=Year, x=Religion, y=Value)) + geom_bar(position=\"dodge\", stat =\"identity\", colour = \"black\") + facet_wrap(~Ethnicity, ncol = 2) + scale_fill_brewer(palette = \"Set1\") + scale_y_continuous(labels = unit_format(unit = \"M\", scale = 1e-6), breaks = breaks_extended(8)) + ggtitle(\"Religious Affiliation in the 2001-2021 Census of England and Wales\") + xlab(\"\") + ylab(\"\") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))\n\n\n\n# https://ggplot2-book.org/scales-position#sec-position-continuous-breaks\n\nThis chart shows an increase in almost every category, though it’s a bit hard to read in some cases. However, this information is based on the increase in raw numbers. It’s possbile that numbers may be going up, but in some cases the percentage share for a particular category has actually gone down. Let’s transform and visualise our data as percentages to see what kind of trends we can actually isolate:\n\nuk_census_merged_religion_ethnicity <- uk_census_merged_religion_ethnicity %>%\n group_by(Ethnicity, Year) %>%\n dplyr::mutate(Percent = Value/sum(Value))\n\nggplot(uk_census_merged_religion_ethnicity, aes(fill=Year, x=Religion, y=Percent)) + geom_bar(position=\"dodge\", stat =\"identity\", colour = \"black\") + facet_wrap(~Ethnicity, scales=\"free_x\") + scale_fill_brewer(palette = \"Set1\") + scale_y_continuous(labels = scales::percent) + ggtitle(\"Religious Affiliation in the 2001-2021 Census of England and Wales\") + xlab(\"\") + ylab(\"\") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))\n\n\n\n\nNow you can see why this shift is important - the visualisation tells a completely different story in some cases across the two different charts. In the first, working off raw numbers we see a net increase in Christianity across all categories. But if we take into account the fact that the overall share of population is growing for each of these groups, their actual composition is changing in a different direction. The proportion of each group is declining across the three census periods (albeit with an exception for the “Other” category from 2011 to 2021).\nTo highlight a few features of this final plot, I’ve used a specific feature within facet_wrap scales = \"free_x\" to let each of the individual facets adjust the total range on the x-axis. Since we’re looking at trends here and not absolute values, having correspondence across scales isn’t important and this makes for something a bit more visually tidy. 
I’ve also shifted the code for scale_y_continuous to render values as percentages (rather than millions).\nIn case you want to print this plot out and hang it on your wall, you can use the ggsave tool to render the chart as an image file:\n\nplot1 <- ggplot(uk_census_merged_religion_ethnicity, aes(fill=Year, x=Religion, y=Percent)) + geom_bar(position=\"dodge\", stat =\"identity\", colour = \"black\") + facet_wrap(~Ethnicity, scales=\"free_x\") + scale_fill_brewer(palette = \"Set1\") + scale_y_continuous(labels = scales::percent) + ggtitle(\"Religious Affiliation in the 2001-2021 Census of England and Wales\") + xlab(\"\") + ylab(\"\") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))\n\nggsave(\"chart.png\", plot=plot1, width = 8, height = 10, units=c(\"in\"))"
+ "text": "2.7 Multifactor Visualisation\nOne element of R data analysis that can get really interesting is working with multiple variables. Above we’ve looked at the breakdown of religious affiliation across the whole of England and Wales (Scotland operates an independent census), and by placing this data alongside a specific region, we’ve already made a basic entry into working with multiple variables but this can get much more interesting. Adding an additional quantative variable (also known as bivariate data) into the mix, however can also generate a lot more information and we have to think about visualising it in different ways which can still communicate with visual clarity in spite of the additional visual noise which is inevitable with enhanced complexity. Let’s have a look at the way that religion in England and Wales breaks down by ethnicity.\n\n\n\n\n\n\nWhat is Nomis?\n\n\n\nFor the UK, census data is made available for programmatic research like this via an organisation called NOMIS. Luckily for us, there is an R library you can use to access nomis directly which greatly simplifies the process of pulling data down from the platform. It’s worth noting that if you’re not in the UK, there are similar options for other countries. Nearly every R textbook I’ve ever seen works with USA census data, so you’ll find plenty of documentation available on the tools you can use for US Census data. Similarly for the EU, Canada, Austrailia etc.\nIf you want to draw some data from the nomis platform yourself in R, have a look at the companion cookbook repository.\n\n\n\n# Get table of Census 2011 religion data from nomis\n# Note: for reproducible code used to generate the dataset used in the book, see the cookbook here: \nz <- readRDS(file = (here(\"example_data\", \"z.rds\")))\n\n# Filter down to simplified dataset with England / Wales and percentages without totals\nuk_census_2011_religion <- filter(z, GEOGRAPHY_NAME==\"England and Wales\" & RURAL_URBAN_NAME==\"Total\" & C_RELPUK11_NAME != \"All categories: Religion\")\n# Drop unnecessary columns\nuk_census_2011_religion <- select(uk_census_2011_religion, C_RELPUK11_NAME, OBS_VALUE)\n# Plot results\nplot1 <- ggplot(uk_census_2011_religion, aes(x = C_RELPUK11_NAME, y = OBS_VALUE)) + geom_bar(stat = \"identity\") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))\n# ggsave(filename = \"plot.png\", plot = plot1)\n\n# grab daata from nomis for 2001 census religion / ethnicity\nz0 <- readRDS(file = (here(\"example_data\", \"z0.rds\")))\n\n# select relevant columns\nuk_census_2001_religion_ethnicity <- select(z0, GEOGRAPHY_NAME, C_RELPUK11_NAME, C_ETHHUK11_NAME, OBS_VALUE)\n# Filter down to simplified dataset with England / Wales and percentages without totals\nuk_census_2001_religion_ethnicity <- filter(uk_census_2001_religion_ethnicity, GEOGRAPHY_NAME==\"England and Wales\" & C_RELPUK11_NAME != \"All categories: Religion\")\n# Simplify data to only include general totals and omit subcategories\nuk_census_2001_religion_ethnicity <- uk_census_2001_religion_ethnicity %>% filter(grepl('Total', C_ETHHUK11_NAME))\n\n# grab data from nomis for 2011 census religion / ethnicity table\nz1 <- readRDS(file = (here(\"example_data\", \"z1.rds\")))\n\n# select relevant columns\nuk_census_2011_religion_ethnicity <- select(z1, GEOGRAPHY_NAME, C_RELPUK11_NAME, C_ETHPUK11_NAME, OBS_VALUE)\n# Filter down to simplified dataset with England / Wales and percentages without totals\nuk_census_2011_religion_ethnicity <- 
filter(uk_census_2011_religion_ethnicity, GEOGRAPHY_NAME==\"England and Wales\" & C_RELPUK11_NAME != \"All categories: Religion\" & C_ETHPUK11_NAME != \"All categories: Ethnic group\")\n# Simplify data to only include general totals and omit subcategories\nuk_census_2011_religion_ethnicity <- uk_census_2011_religion_ethnicity %>% filter(grepl('Total', C_ETHPUK11_NAME))\n\n# grab data from nomis for 2021 census religion / ethnicity table\nz2 <- readRDS(file = (here(\"example_data\", \"z2.rds\")))\n\n# select relevant columns\nuk_census_2021_religion_ethnicity <- select(z2, GEOGRAPHY_NAME, C2021_RELIGION_10_NAME, C2021_ETH_8_NAME, OBS_VALUE)\n# Filter down to simplified dataset with England / Wales and percentages without totals\nuk_census_2021_religion_ethnicity <- filter(uk_census_2021_religion_ethnicity, GEOGRAPHY_NAME==\"England and Wales\" & C2021_RELIGION_10_NAME != \"Total\" & C2021_ETH_8_NAME != \"Total\")\n# 2021 census includes white sub-groups so we need to omit those so we just have totals:\nuk_census_2021_religion_ethnicity <- filter(uk_census_2021_religion_ethnicity, C2021_ETH_8_NAME != \"White: English, Welsh, Scottish, Northern Irish or British\" & C2021_ETH_8_NAME != \"White: Irish\" & C2021_ETH_8_NAME != \"White: Gypsy or Irish Traveller, Roma or Other White\")\n\nggplot(uk_census_2011_religion_ethnicity, aes(fill=C_ETHPUK11_NAME, x=C_RELPUK11_NAME, y=OBS_VALUE)) + geom_bar(position=\"dodge\", stat =\"identity\", colour = \"black\") + scale_fill_brewer(palette = \"Set1\") + ggtitle(\"Religious Affiliation in the 2021 Census of England and Wales\") + xlab(\"\") + ylab(\"\") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))\n\n\n\n\nThe trouble with using grouped bars here, as you can see, is that there are quite sharp disparities which make it hard to compare in meaningful ways. We could use logarithmic rather than linear scaling as an option, but this is hard for many general public audiences to apprecaite without guidance. One alternative quick fix is to extract data from “white” respondents which can then be placed in a separate chart with a different scale.\n\n# Filter down to simplified dataset with England / Wales and percentages without totals\nuk_census_2011_religion_ethnicity_white <- filter(uk_census_2011_religion_ethnicity, C_ETHPUK11_NAME == \"White: Total\")\nuk_census_2011_religion_ethnicity_nonwhite <- filter(uk_census_2011_religion_ethnicity, C_ETHPUK11_NAME != \"White: Total\")\n\nggplot(uk_census_2011_religion_ethnicity_nonwhite, aes(fill=C_ETHPUK11_NAME, x=C_RELPUK11_NAME, y=OBS_VALUE)) + geom_bar(position=\"dodge\", stat =\"identity\", colour = \"black\") + scale_fill_brewer(palette = \"Set1\") + ggtitle(\"Religious Affiliation in the 2021 Census of England and Wales\") + xlab(\"\") + ylab(\"\") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))\n\n\n\n\nThis still doesn’t quite render with as much visual clarity and communication as I’d like. 
For a better look, we can use a technique in R called “faceting” to create a series of small charts which can be viewed alongside one another.\n\nggplot(uk_census_2011_religion_ethnicity_nonwhite, aes(x=C_RELPUK11_NAME, y=OBS_VALUE)) + geom_bar(position=\"dodge\", stat =\"identity\", colour = \"black\") + facet_wrap(~C_ETHPUK11_NAME, ncol = 2) + scale_fill_brewer(palette = \"Set1\") + ggtitle(\"Religious Affiliation in the 2011 Census of England and Wales\") + xlab(\"\") + ylab(\"\") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))\n\n\n\n\nFor our finale chart, I’d like to take the faceted chart we’ve just done, and add in totals for the previous two census years (2001 and 2011) so we can see how trends are changing in terms of religious affiliation within ethnic self-identification categories. We’ll draw on some techniques we’re already developed above using rbind() to connect up each of these charts (after we’ve added a column identifying each chart by the census year). We will also need to use one new technique to change the wording of ethnic categories as this isn’t consistent from one census to the next and ggplot will struggle to chart things if the terms being used are exactly the same. We’ll use mutate() again to accomplish this with some slightly different code.\n\n# First add column to each dataframe so we don't lose track of the census it comes from:\nuk_census_2001_religion_ethnicity$dataset <- c(\"2001\")\nuk_census_2011_religion_ethnicity$dataset <- c(\"2011\")\nuk_census_2021_religion_ethnicity$dataset <- c(\"2021\")\n\n# Let's tidy the names of each column:\n\nnames(uk_census_2001_religion_ethnicity) <- c(\"Geography\", \"Religion\", \"Ethnicity\", \"Value\", \"Year\")\nnames(uk_census_2011_religion_ethnicity) <- c(\"Geography\", \"Religion\", \"Ethnicity\", \"Value\", \"Year\")\nnames(uk_census_2021_religion_ethnicity) <- c(\"Geography\", \"Religion\", \"Ethnicity\", \"Value\", \"Year\")\n\n# Next we need to change the terms using mutate()\nuk_census_2001_religion_ethnicity <- uk_census_2001_religion_ethnicity %>% \n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^White: Total$\", replacement = \"White\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Mixed: Total$\", replacement = \"Mixed\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Asian: Total$\", replacement = \"Asian\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Black or Black British: Total$\", replacement = \"Black\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Chinese or Other ethnic group: Total$\", replacement = \"Other\"))\n \nuk_census_2011_religion_ethnicity <- uk_census_2011_religion_ethnicity %>% \n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^White: Total$\", replacement = \"White\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Mixed/multiple ethnic group: Total$\", replacement = \"Mixed\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Asian/Asian British: Total$\", replacement = \"Asian\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Black/African/Caribbean/Black British: Total$\", replacement = \"Black\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Other ethnic group: Total$\", replacement = \"Other\"))\n\nuk_census_2021_religion_ethnicity <- uk_census_2021_religion_ethnicity %>% \n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^White: Total$\", 
replacement = \"White\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Mixed or Multiple ethnic groups$\", replacement = \"Mixed\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Asian, Asian British or Asian Welsh$\", replacement = \"Asian\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Black, Black British, Black Welsh, Caribbean or African$\", replacement = \"Black\")) %>%\n mutate(Ethnicity = str_replace_all(Ethnicity, \n pattern = \"^Other ethnic group$\", replacement = \"Other\"))\n\n# Now let's merge the tables:\n\nuk_census_merged_religion_ethnicity <- rbind(uk_census_2021_religion_ethnicity, uk_census_2011_religion_ethnicity)\n\nuk_census_merged_religion_ethnicity <- rbind(uk_census_merged_religion_ethnicity, uk_census_2001_religion_ethnicity)\n\n# As above, we'll split out non-white and white:\n\nuk_census_merged_religion_ethnicity_nonwhite <- filter(uk_census_merged_religion_ethnicity, Ethnicity != \"White\")\n\n# Time to plot!\n\nggplot(uk_census_merged_religion_ethnicity_nonwhite, aes(fill=Year, x=Religion, y=Value)) + geom_bar(position=\"dodge\", stat =\"identity\", colour = \"black\") + facet_wrap(~Ethnicity, ncol = 2) + scale_fill_brewer(palette = \"Set1\") + ggtitle(\"Religious Affiliation in the 2001-2021 Census of England and Wales\") + xlab(\"\") + ylab(\"\") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))\n\n\n\n\nThere are a few formatting issues which remain. Our y-axis number labels are in scientific format which isn’t really very easy to read. You can use the very powerful and flexible scales() library to bring in some more readable formatting of numbers in a variety of places in R including in ggplot visualizations.\n\nlibrary(scales) |> suppressPackageStartupMessages()\nggplot(uk_census_merged_religion_ethnicity_nonwhite, aes(fill=Year, x=Religion, y=Value)) + geom_bar(position=\"dodge\", stat =\"identity\", colour = \"black\") + facet_wrap(~Ethnicity, ncol = 2) + scale_fill_brewer(palette = \"Set1\") + scale_y_continuous(labels = unit_format(unit = \"M\", scale = 1e-6), breaks = breaks_extended(8)) + ggtitle(\"Religious Affiliation in the 2001-2021 Census of England and Wales\") + xlab(\"\") + ylab(\"\") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))\n\n\n\n# https://ggplot2-book.org/scales-position#sec-position-continuous-breaks\n\nThis chart shows an increase in almost every category, though it’s a bit hard to read in some cases. However, this information is based on the increase in raw numbers. It’s possbile that numbers may be going up, but in some cases the percentage share for a particular category has actually gone down. 
Let’s transform and visualise our data as percentages to see what kind of trends we can actually isolate:\n\nuk_census_merged_religion_ethnicity <- uk_census_merged_religion_ethnicity %>%\n group_by(Ethnicity, Year) %>%\n dplyr::mutate(Percent = Value/sum(Value))\n\nggplot(uk_census_merged_religion_ethnicity, aes(fill=Year, x=Religion, y=Percent)) + geom_bar(position=\"dodge\", stat =\"identity\", colour = \"black\") + facet_wrap(~Ethnicity, scales=\"free_x\") + scale_fill_brewer(palette = \"Set1\") + scale_y_continuous(labels = scales::percent) + ggtitle(\"Religious Affiliation in the 2001-2021 Census of England and Wales\") + xlab(\"\") + ylab(\"\") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))\n\n\n\n\nNow you can see why this shift is important - the visualisation tells a completely different story in some cases across the two different charts. In the first, working off raw numbers we see a net increase in Christianity across all categories. But if we take into account the fact that the overall share of population is growing for each of these groups, their actual composition is changing in a different direction. The proportion of each group is declining across the three census periods (albeit with an exception for the “Other” category from 2011 to 2021).\nTo highlight a few features of this final plot, I’ve used a specific feature within facet_wrap scales = \"free_x\" to let each of the individual facets adjust the total range on the x-axis. Since we’re looking at trends here and not absolute values, having correspondence across scales isn’t important and this makes for something a bit more visually tidy. I’ve also shifted the code for scale_y_continuous to render values as percentages (rather than millions).\nIn case you want to print this plot out and hang it on your wall, you can use the ggsave tool to render the chart as an image file:\n\nplot1 <- ggplot(uk_census_merged_religion_ethnicity, aes(fill=Year, x=Religion, y=Percent)) + geom_bar(position=\"dodge\", stat =\"identity\", colour = \"black\") + facet_wrap(~Ethnicity, scales=\"free_x\") + scale_fill_brewer(palette = \"Set1\") + scale_y_continuous(labels = scales::percent) + ggtitle(\"Religious Affiliation in the 2001-2021 Census of England and Wales\") + xlab(\"\") + ylab(\"\") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))\n\nggsave(\"chart.png\", plot=plot1, width = 8, height = 10, units=c(\"in\"))"
},
{
"objectID": "chapter_2.html",
@@ -130,7 +130,7 @@
"href": "chapter_3.html",
"title": "4 Mapping churches: geospatial data science",
"section": "",
- "text": "Guides to geographies: https://rconsortium.github.io/censusguide/ https://ocsi.uk/2019/03/18/lsoas-leps-and-lookups-a-beginners-guide-to-statistical-geographies/\nExtact places of worship from Ordnance survey open data set Calculate proximity to pubs\n\nReferences"
+ "text": "Until recently, most data science books didn’t have a section on geospatial data. It was considered a specialist form of research best left to GIS technicians who tended to use proprietary tools like ArcGIS. This has changed significantly in the past five years, but you’ll still be hard pressed to find an introduction to the subject which strays very far from a few simple data sets (mostly of the USA) and relatively uncomplicated geospatial operations. I actually first began learning R, back in 2013, right when open source geospatial research tools were beginning to be developed with quite a lot more energy and geospatial data is my personal favourite data science playground, so in this book we’re going to go much deeper than is usual. There are also good reasons to take things a few steps further in the particular forms of data and inquiry that religion takes us into.\nRecommend https://r-spatial.org/book/\nGeospatial data is, in the most basic form, working with maps. This means that most of your data can be a quite simple dataframe, e.g. just a list of names or categories associated with a set of X and Y coordinates. Once you have a set of items, however, things get interesting very quickly, as you can layer data sets on top of one another. We’re going to begin this chapter by developing a geolocated data set of churches in the UK. This information is readily and freely available online thanks to the UK Ordnance Survey, a quasi-governmental agency which maintains the various (now digital) maps of Britain. Lucky for us, the Ordnance Survey has an open data product that anyone can use!\nBefore we begin, there are some key things we should note about geospatial data. Geospatial data tends to fall into one of two kinds: points and polygons. Points can be any kind of feature: a house, a church, a pub, someone’s favourite ancient oak tree, or some kind of sacred relic. Polygons tend to be associated with wider areas, and as such can be used to describe large features, e.g. an Ocean, a local authority, or a mountain, or also demographic features, like a census Output Area with associated census summaries. Points are very simple data representations, an X and Y coordinate. Polygons are much more complex, often containing dozens or even thousands of points.\nThe most complex aspect of point data relates to the ways that coordinates are encoded, as they will aways need to be associated with a coordinate reference system (CRS) which describes how they are situated with respect to the planet earth. The most common CRS is the WGS, though for our data sets we’ll also come into contact with the BGS, a specifically British coordinate reference system. There are dozens of CRS, usually mapping onto a specific geographical region. Bearing in mind the way that you need to use a CRS to understand how coordinates can be associated with specific parts of the earth, you can see how this is a bit like survey data, where you also need a “codebook” to understand what the specific response values map onto, e.g. a “1” means “strongly agree” and so on. We also saw, in a previous chapter, how some forms of data have the codebook already baked into the factor data, simplifying the process of interpreting the data. In a similar way, some types of geospatial data sets can also come with CRS “baked in” while we’ll need to define CRS for other types. 
Here are some of the most common types of geospatial data files:\nCSV: “comma separated values” a plain text file containing various coordinates Shapefile: a legacy file format, often still in use, but being replaced by others for a variety of good reasons. For more on this see [http://switchfromshapefile.org/] Geopackage: one of the more recent ways of packaging up geospatial data. Geopackages can contain a wide variety of different data and are easily portable. GeoJSON: a file format commonly used in other forms of coding, the “JSON” (an acronym for JavaScript Object Notation) is meant to be easily interchangeable across various platforms. GeoJSON is an augmented version of JSON data with coordinates added in.\nNow that you have a sense of some of the basic aspects of geospatial data, let’s dive in and do a bit of learning in action.\n\n5 Administrative shapes - the UK\nA good starting point is to aquire some adminstrative data. This is a way of referring to political boundaries, whether country borders or those of a local authority or some other administrative unit. For our purposes, we’re going to import several different types of administrative boundary which will be used at different points in our visualisations below. It’s worth noting that the data we use here was prepared to support the 2011 census, and make use of the shapefile format.\n\nlibrary(sf) |> suppressPackageStartupMessages()\nlibrary(here) |> suppressPackageStartupMessages()\nlibrary(tidyverse) |> suppressPackageStartupMessages()\nsetwd(\"/Users/kidwellj/gits/hacking_religion_textbook/hacking_religion\")\nhere::i_am(\"chapter_3.qmd\")\n\nhere() starts at /Users/kidwellj/gits/hacking_religion_textbook/hacking_religion\n\n# Download administrative boundaries for whole UK at country level\nif (file.exists(here(\"data\", \"infuse_uk_2011_clipped.shp\")) == FALSE) {\ndownload.file(\"https://borders.ukdataservice.ac.uk/ukborders/easy_download/prebuilt/shape/infuse_uk_2011_clipped.zip\", destfile = \"data/infuse_uk_2011_clipped.zip\")\nunzip(\"data/infuse_uk_2011_clipped.zip\", exdir = \"data\")\n}\nuk_countries <- st_read(here(\"data\", \"infuse_uk_2011_clipped.shp\"))\n\nReading layer `infuse_uk_2011_clipped' from data source \n `/Users/kidwellj/gits/hacking_religion_textbook/hacking_religion/data/infuse_uk_2011_clipped.shp' \n using driver `ESRI Shapefile'\nSimple feature collection with 1 feature and 3 fields\nGeometry type: MULTIPOLYGON\nDimension: XY\nBounding box: xmin: -69.1254 ymin: 5337.9 xmax: 655604.7 ymax: 1220302\nProjected CRS: OSGB36 / British National Grid\n\n# Download administrative boundaries for whole UK at regions level\nif (file.exists(here(\"data\", \"infuse_rgn_2011_clipped.shp\")) == FALSE) {\ndownload.file(\"https://borders.ukdataservice.ac.uk/ukborders/easy_download/prebuilt/shape/infuse_rgn_2011_clipped.zip\", destfile = \"data/infuse_rgn_2011_clipped.zip\")\nunzip(\"data/infuse_rgn_2011_clipped.zip\", exdir = \"data\")\n}\nuk_rgn <- st_read(here(\"data\", \"infuse_rgn_2011_clipped.shp\"))\n\nReading layer `infuse_rgn_2011_clipped' from data source \n `/Users/kidwellj/gits/hacking_religion_textbook/hacking_religion/data/infuse_rgn_2011_clipped.shp' \n using driver `ESRI Shapefile'\nSimple feature collection with 9 features and 2 fields\nGeometry type: MULTIPOLYGON\nDimension: XY\nBounding box: xmin: 82672 ymin: 5337.9 xmax: 655604.7 ymax: 657534.1\nProjected CRS: OSGB36 / British National Grid\n\n# Download administrative boundaries for whole UK at local authority level\nif 
(file.exists(here(\"data\", \"infuse_dist_lyr_2011_clipped.shp\")) == FALSE) {\ndownload.file(\"https://borders.ukdataservice.ac.uk/ukborders/easy_download/prebuilt/shape/infuse_dist_lyr_2011_clipped.zip\", destfile = \"data/infuse_dist_lyr_2011_clipped.zip\")\nunzip(\"data/infuse_dist_lyr_2011_clipped.zip\", exdir = \"data\")\n}\nlocal_authorities <- st_read(here(\"data\", \"infuse_dist_lyr_2011_clipped.shp\"))\n\nReading layer `infuse_dist_lyr_2011_clipped' from data source \n `/Users/kidwellj/gits/hacking_religion_textbook/hacking_religion/data/infuse_dist_lyr_2011_clipped.shp' \n using driver `ESRI Shapefile'\nSimple feature collection with 404 features and 3 fields\nGeometry type: MULTIPOLYGON\nDimension: XY\nBounding box: xmin: -69.1254 ymin: 5337.9 xmax: 655604.7 ymax: 1220302\nProjected CRS: OSGB36 / British National Grid\n\n# Download building outlines for whole UK\nif (file.exists(here(\"data\", \"infuse_dist_lyr_2011_simplified_100m_buildings_simplified.gpkg\")) == FALSE) {\n download.file(\"https://zenodo.org/record/6395804/files/infuse_dist_lyr_2011_simplified_100m_buildings_overlay_simplified.gpkg\", destfile = here(\"data\", \"infuse_dist_lyr_2011_simplified_100m_buildings_simplified.gpkg\"))}\nlocal_authorities_buildings_clip <- st_read(here(\"data\", \"infuse_dist_lyr_2011_simplified_100m_buildings_simplified.gpkg\"))\n\nReading layer `infuse_dist_lyr_2011_simplified_100_buildings_overlay_simplified' from data source `/Users/kidwellj/gits/hacking_religion_textbook/hacking_religion/data/infuse_dist_lyr_2011_simplified_100m_buildings_simplified.gpkg' \n using driver `GPKG'\nSimple feature collection with 403 features and 0 fields\nGeometry type: MULTIPOLYGON\nDimension: XY\nBounding box: xmin: -69.1254 ymin: 5524.797 xmax: 655986.4 ymax: 1219597\nProjected CRS: OSGB36 / British National Grid\n\n\nBefore we move on, let’s plot a simple map and have a look at one of our administrative layers. We can use ggplot with a new type of shape geom_sf() to plot the contents of a geospatial data file with polygons which is loaded as a simplefeature in R.\n\nggplot(uk_countries) +\n geom_sf()\n\n\n\n\n\n\n6 Load in Ordnance Survey OpenMap Points Data\n\n# Note: for more advanced reproducible scripts which demonstrate how these data surces have been \n# obtained, see the companion cookbook here: https://github.com/kidwellj/hacking_religion_cookbook/blob/main/ordnance_survey.R\n\nos_openmap_pow <- st_read(here(\"data\", \"os_openmap_pow.gpkg\"))\n\nReading layer `os_openmap_pow' from data source \n `/Users/kidwellj/gits/hacking_religion_textbook/hacking_religion/data/os_openmap_pow.gpkg' \n using driver `GPKG'\nSimple feature collection with 48759 features and 5 fields\nGeometry type: POLYGON\nDimension: XY\nBounding box: xmin: 64594.12 ymin: 8287.54 xmax: 655238.1 ymax: 1214662\nProjected CRS: OSGB36 / British National Grid\n\nggplot(os_openmap_pow) +\n geom_sf()\n\n\n\n\nIt’s worth noting that the way that you load geospatial data in R has changed quite dramatically since 2020 with the introduction of the simplefeature class in R. 
Much of the documentation you will come across “out there” will make reference to a set of functions which are now deprecated.\nLet’s use that data we’ve just loaded to make our first map:\n\n# Generate choropleth map of respondent locations\n# using temporary palette here so that 0s are white\nlibrary(tmap) |> suppressPackageStartupMessages()\n# palette <- c(white, \"#a8ddb5\", \"#43a2ca\")\nmap1 <- tm_shape(local_authorities) + \n# tm_fill(col = \"surveys_count\", , palette = palette, title = \"Concentration of survey respondents\") +\n tm_borders(alpha=.5, lwd=0.1) +\n # for intermediate polygon geometries\n # tm_shape(local_authorities) +\n # tm_borders(lwd=0.6) +\n # for dots from original dataset\n # tm_dots(\"red\", size = .05, alpha = .4) +\n tm_scale_bar(position = c(\"right\", \"bottom\")) +\n tm_style(\"gray\") +\n tm_credits(\"Data: UK Data Service (OGL)\\n& Jeremy H. Kidwell,\\nGraphic is CC-by-SA 4.0\", \n size = 0.4, \n position = c(\"left\", \"bottom\"),\n just = c(\"left\", \"bottom\"),\n align = \"left\") +\n tm_layout(asp = NA,\n frame = FALSE, \n title = \"Figure 1a\", \n title.size = .7,\n legend.title.size = .7,\n inner.margins = c(0.1, 0.1, 0.05, 0.05)\n )\n\nmap1\n\n\n\n# save image\ntmap_save(map1, here(\"figures\", \"map.png\"), width=1920, height=1080, asp=0)\n\nMap saved to /Users/kidwellj/gits/hacking_religion_textbook/hacking_religion/figures/map.png\n\n\nResolution: 1920 by 1080 pixels\n\n\nSize: 6.4 by 3.6 inches (300 dpi)\n\n\n\n# subsetting ordnance survey openmap data for measuring clusters and proximity\n\nos_openmap_important_buildings <- st_read(here(\"data\", \"os_openmap_important_buildings.gpkg\"))\n\nReading layer `important_buildings' from data source \n `/Users/kidwellj/gits/hacking_religion_textbook/hacking_religion/data/os_openmap_important_buildings.gpkg' \n using driver `GPKG'\nSimple feature collection with 229800 features and 5 fields\nGeometry type: POLYGON\nDimension: XY\nBounding box: xmin: 64594.12 ymin: 8125.44 xmax: 655500.5 ymax: 1214662\nProjected CRS: OSGB36 / British National Grid\n\n# add pubs, check_cashing, pawnbrokers, SSSI\n## subsets\n\n\n# OSM data\n\n# Note: for more advanced reproducible scripts which demonstrate how these data surces have been \n# obtained, see the companion cookbook here: https://github.com/kidwellj/hacking_religion_cookbook/blob/main/ordnance_survey.R\n\n\n# osm_uk_points <- st_read(system.file(here(\"data\", \"pow_osm.gpkg\", package = \"spData\")))\n# vector_filepath = system.file(\"data/osm-gb-2018Aug29_pow_osm.pbf\", package = \"sf\")\n# osm_uk_points = st_read(vector_filepath)\n\nGuides to geographies: https://rconsortium.github.io/censusguide/ https://ocsi.uk/2019/03/18/lsoas-leps-and-lookups-a-beginners-guide-to-statistical-geographies/\nExtact places of worship from Ordnance survey open data set Calculate proximity to pubs\n\n\nReferences"
},
{
"objectID": "chapter_4.html",
diff --git a/hacking_religion/chapter_1.qmd b/hacking_religion/chapter_1.qmd
index 441f8c0..04fa4e0 100644
--- a/hacking_religion/chapter_1.qmd
+++ b/hacking_religion/chapter_1.qmd
@@ -13,9 +13,21 @@ In the example below, we're going to read in data from a comma separated value f
#| include: true
#| label: fig-polar
setwd("/Users/kidwellj/gits/hacking_religion_textbook/hacking_religion")
-library(here) # much better way to manage working paths in R across multiple instances
-library(tidyverse)
+library(here) |> suppressPackageStartupMessages()
+library(tidyverse) |> suppressPackageStartupMessages()
here::i_am("chapter_1.qmd")
+
+# Set up local workspace:
+if (dir.exists("data") == FALSE) {
+ dir.create("data")
+}
+if (dir.exists("figures") == FALSE) {
+ dir.create("figures")
+}
+if (dir.exists("derivedData") == FALSE) {
+ dir.create("derivedData")
+}
+
uk_census_2021_religion <- read.csv(here("example_data", "census2021-ts030-rgn.csv"))
```
@@ -196,27 +208,14 @@ One element of R data analysis that can get really interesting is working with m
For the UK, census data is made available for programmatic research like this via an organisation called NOMIS. Luckily for us, there is an R library you can use to access nomis directly, which greatly simplifies the process of pulling data down from the platform. It's worth noting that if you're not in the UK, there are similar options for other countries. Nearly every R textbook I've ever seen works with USA census data, so you'll find plenty of documentation available on the tools you can use for US Census data. Similarly for the EU, Canada, Australia, etc.
-Here's the process to identify a dataset within the nomis platform:
-
-```{r}
-# Process to explore nomis() data for specific datasets
-library(nomisr)
-# temporarily commenting out until renv can be implemented and runtime errors in other environments avoided:
-#religion_search <- nomis_search(name = "*Religion*")
-#religion_measures <- nomis_get_metadata("ST104", "measures")
-#tibble::glimpse(religion_measures)
-#religion_geography <- nomis_get_metadata("NM_529_1", "geography", "TYPE")
-```
+If you want to draw some data from the nomis platform yourself in R, have a look at the [companion cookbook repository](https://github.com/kidwellj/hacking_religion_cookbook/blob/main/nomis.R).
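+
+To give a flavour of what that looks like, here is the lookup stage as it appears in the cookbook (reproduced for reference, and deliberately not evaluated here):
+
+```{r}
+#| eval: false
+# Explore nomis for datasets matching a keyword, then inspect metadata:
+library(nomisr)
+religion_search <- nomis_search(name = "*Religion*")
+religion_measures <- nomis_get_metadata("ST104", "measures")
+tibble::glimpse(religion_measures)
+# ...and pull a table down by its id:
+z <- nomis_get_data(id = "NM_529_1", time = "latest", geography = "TYPE499", measures=c(20301))
+```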
:::
```{r}
-library(nomisr)
# Get table of Census 2011 religion data from nomis
-# temporarily commenting out until renv can be implemented and runtime errors in other environments avoided:
-#z <- nomis_get_data(id = "NM_529_1", time = "latest", geography = "TYPE499", measures=c(20301))
-#saveRDS(z, file = "z.rds")
+# Note: for reproducible code used to generate the dataset used in the book, see the cookbook here: https://github.com/kidwellj/hacking_religion_cookbook/blob/main/nomis.R
z <- readRDS(file = (here("example_data", "z.rds")))
# Filter down to simplified dataset with England / Wales and percentages without totals
@@ -228,10 +227,6 @@ plot1 <- ggplot(uk_census_2011_religion, aes(x = C_RELPUK11_NAME, y = OBS_VALUE)
# ggsave(filename = "plot.png", plot = plot1)
# grab data from nomis for 2001 census religion / ethnicity
-
-# temporarily commenting out until renv can be implemented and runtime errors in other environments avoided:
-#z0 <- nomis_get_data(id = "NM_1872_1", time = "latest", geography = "TYPE499", measures=c(20100))
-#saveRDS(z0, file = "z0.rds")
z0 <- readRDS(file = (here("example_data", "z0.rds")))
# select relevant columns
@@ -242,9 +237,6 @@ uk_census_2001_religion_ethnicity <- filter(uk_census_2001_religion_ethnicity, G
uk_census_2001_religion_ethnicity <- uk_census_2001_religion_ethnicity %>% filter(grepl('Total', C_ETHHUK11_NAME))
# grab data from nomis for 2011 census religion / ethnicity table
-# commenting out nomis_get temporarily until I can get renv working properly here
-#z1 <- nomis_get_data(id = "NM_659_1", time = "latest", geography = "TYPE499", measures=c(20100))
-#saveRDS(z1, file = "z1.rds")
z1 <- readRDS(file = (here("example_data", "z1.rds")))
# select relevant columns
@@ -255,8 +247,6 @@ uk_census_2011_religion_ethnicity <- filter(uk_census_2011_religion_ethnicity, G
uk_census_2011_religion_ethnicity <- uk_census_2011_religion_ethnicity %>% filter(grepl('Total', C_ETHPUK11_NAME))
# grab data from nomis for 2021 census religion / ethnicity table
-#z2 <- nomis_get_data(id = "NM_2131_1", time = "latest", geography = "TYPE499", measures=c(20100))
-#saveRDS(z2, file = "z2.rds")
z2 <- readRDS(file = (here("example_data", "z2.rds")))
# select relevant columns
@@ -355,7 +345,7 @@ ggplot(uk_census_merged_religion_ethnicity_nonwhite, aes(fill=Year, x=Religion,
There are a few formatting issues which remain. Our y-axis number labels are in scientific format, which isn't very easy to read. You can use the very powerful and flexible `scales` package to bring in more readable formatting of numbers in a variety of places in R, including in ggplot visualizations.
```{r}
-library(scales)
+library(scales) |> suppressPackageStartupMessages()
ggplot(uk_census_merged_religion_ethnicity_nonwhite, aes(fill=Year, x=Religion, y=Value)) + geom_bar(position="dodge", stat ="identity", colour = "black") + facet_wrap(~Ethnicity, ncol = 2) + scale_fill_brewer(palette = "Set1") + scale_y_continuous(labels = unit_format(unit = "M", scale = 1e-6), breaks = breaks_extended(8)) + ggtitle("Religious Affiliation in the 2001-2021 Census of England and Wales") + xlab("") + ylab("") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
# https://ggplot2-book.org/scales-position#sec-position-continuous-breaks
diff --git a/hacking_religion/chapter_3.qmd b/hacking_religion/chapter_3.qmd
index 3ef51d9..b20f899 100644
--- a/hacking_religion/chapter_3.qmd
+++ b/hacking_religion/chapter_3.qmd
@@ -1,10 +1,147 @@
# Mapping churches: geospatial data science
+Until recently, most data science books didn't have a section on geospatial data. It was considered a specialist form of research, best left to GIS technicians who tended to use proprietary tools like ArcGIS. This has changed significantly in the past five years, but you'll still be hard pressed to find an introduction to the subject which strays very far from a few simple data sets (mostly of the USA) and relatively uncomplicated geospatial operations. I first began learning R back in 2013, just as open source geospatial research tools were beginning to be developed with quite a lot more energy, and geospatial data is my personal favourite data science playground, so in this book we're going to go much deeper than is usual. There are also good reasons to take things a few steps further, given the particular forms of data and inquiry that religion takes us into.
+
+For readers who want to go deeper still, I recommend the open-access book at <https://r-spatial.org/book/>.
+
+Working with geospatial data is, in its most basic form, working with maps. This means that much of your data can be a quite simple dataframe, e.g. just a list of names or categories associated with a set of X and Y coordinates. Once you have a set of items, however, things get interesting very quickly, as you can layer data sets on top of one another. We're going to begin this chapter by developing a geolocated data set of churches in the UK. This information is readily and freely available online thanks to the UK Ordnance Survey, a quasi-governmental agency which maintains the various (now digital) maps of Britain. Lucky for us, the Ordnance Survey has an open data product that anyone can use!
+
+Before we begin, there are some key things we should note about geospatial data. Geospatial data tends to fall into one of two kinds: points and polygons. Points can be any kind of feature: a house, a church, a pub, someone's favourite ancient oak tree, or some kind of sacred relic. Polygons tend to be associated with wider areas, and as such can be used to describe large features, e.g. an ocean, a local authority, or a mountain, as well as demographic units, like a census Output Area with associated census summaries. Points are very simple data representations: a single X and Y coordinate. Polygons are much more complex, often containing dozens or even thousands of points.
+
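+To make this concrete, here is a minimal sketch of how each kind of geometry looks when constructed by hand with the sf package (which we'll rely on throughout this chapter); the coordinates are made up for illustration:
+
+```{r}
+library(sf) |> suppressPackageStartupMessages()
+
+# A point is a single X/Y coordinate pair:
+pt <- st_point(c(-1.89, 52.48))
+
+# A polygon is a closed ring of points (the first and last must match):
+poly <- st_polygon(list(rbind(
+  c(-1.90, 52.48), c(-1.88, 52.48),
+  c(-1.88, 52.49), c(-1.90, 52.49),
+  c(-1.90, 52.48))))
+
+pt
+poly
+```
+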
+The most complex aspect of point data relates to the way coordinates are encoded, as they will always need to be associated with a coordinate reference system (CRS) which describes how they are situated with respect to the planet earth. The most common CRS is WGS84, though for our data sets we'll also come into contact with the British National Grid (OSGB36), a specifically British coordinate reference system. There are dozens of CRSs, each usually tailored to a specific geographical region. Bearing in mind the way you need a CRS to understand how coordinates relate to specific parts of the earth, you can see how this is a bit like survey data, where you also need a "codebook" to understand what specific response values map onto, e.g. a "1" means "strongly agree" and so on. We also saw, in a previous chapter, how some forms of data have the codebook already baked into the factor data, simplifying the process of interpreting the data. In a similar way, some types of geospatial data come with the CRS "baked in", while for others we'll need to define it ourselves. Here are some of the most common types of geospatial data files:
+
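+As a brief sketch of how this works in practice with sf: you can attach a CRS to bare coordinates and then reproject between the two systems we'll meet in this chapter, WGS84 (EPSG code 4326) and the British National Grid (EPSG code 27700):
+
+```{r}
+# Declare that our point is expressed in WGS84 (EPSG:4326)...
+pt_wgs84 <- st_sfc(st_point(c(-1.89, 52.48)), crs = 4326)
+# ...then reproject it onto the British National Grid (EPSG:27700):
+st_transform(pt_wgs84, crs = 27700)
+```
+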
+CSV: "comma separated values" a plain text file containing various coordinates
+Shapefile: a legacy file format, often still in use, but being replaced by others for a variety of good reasons. For more on this see [http://switchfromshapefile.org/]
+Geopackage: one of the more recent ways of packaging up geospatial data. Geopackages can contain a wide variety of different data and are easily portable.
+GeoJSON: a file format commonly used in other forms of coding, the "JSON" (an acronym for JavaScript Object Notation) is meant to be easily interchangeable across various platforms. GeoJSON is an augmented version of JSON data with coordinates added in.
+
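+Since a CSV file has no CRS "baked in", you have to declare one yourself when you promote it to a geospatial object. A minimal sketch (the names and coordinates here are hypothetical):
+
+```{r}
+# A plain dataframe with coordinate columns...
+places_df <- data.frame(
+  name = c("St Philip's", "St Martin's"),
+  lon = c(-1.898, -1.893),
+  lat = c(52.480, 52.478)
+)
+# ...becomes a simple features object once we say which columns hold the
+# coordinates and which CRS they are expressed in:
+st_as_sf(places_df, coords = c("lon", "lat"), crs = 4326)
+```
+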
+Now that you have a sense of some of the basic aspects of geospatial data, let's dive in and do a bit of learning in action.
+
+# Administrative shapes - the UK
+
+A good starting point is to acquire some administrative data. This is a way of referring to political boundaries, whether country borders or those of a local authority or some other administrative unit. For our purposes, we're going to import several different types of administrative boundary which will be used at different points in our visualisations below. It's worth noting that the data we use here was prepared to support the 2011 census, and makes use of the shapefile format.
+
+```{r}
+library(sf) |> suppressPackageStartupMessages()
+library(here) |> suppressPackageStartupMessages()
+library(tidyverse) |> suppressPackageStartupMessages()
+# ragg is a better graphics device: more accurate and faster rendering, esp. on
+# macOS, and it enables system fonts for display
+library(ragg) |> suppressPackageStartupMessages()
+
+setwd("/Users/kidwellj/gits/hacking_religion_textbook/hacking_religion")
+here::i_am("chapter_3.qmd")
+
+# Download administrative boundaries for whole UK at country level
+if (!file.exists(here("data", "infuse_uk_2011_clipped.shp"))) {
+  download.file("https://borders.ukdataservice.ac.uk/ukborders/easy_download/prebuilt/shape/infuse_uk_2011_clipped.zip", destfile = "data/infuse_uk_2011_clipped.zip")
+  unzip("data/infuse_uk_2011_clipped.zip", exdir = "data")
+}
+uk_countries <- st_read(here("data", "infuse_uk_2011_clipped.shp"))
+
+# Download administrative boundaries for whole UK at regions level
+if (!file.exists(here("data", "infuse_rgn_2011_clipped.shp"))) {
+  download.file("https://borders.ukdataservice.ac.uk/ukborders/easy_download/prebuilt/shape/infuse_rgn_2011_clipped.zip", destfile = "data/infuse_rgn_2011_clipped.zip")
+  unzip("data/infuse_rgn_2011_clipped.zip", exdir = "data")
+}
+uk_rgn <- st_read(here("data", "infuse_rgn_2011_clipped.shp"))
+
+# Download administrative boundaries for whole UK at local authority level
+if (!file.exists(here("data", "infuse_dist_lyr_2011_clipped.shp"))) {
+  download.file("https://borders.ukdataservice.ac.uk/ukborders/easy_download/prebuilt/shape/infuse_dist_lyr_2011_clipped.zip", destfile = "data/infuse_dist_lyr_2011_clipped.zip")
+  unzip("data/infuse_dist_lyr_2011_clipped.zip", exdir = "data")
+}
+local_authorities <- st_read(here("data", "infuse_dist_lyr_2011_clipped.shp"))
+
+# Download building outlines for whole UK
+if (!file.exists(here("data", "infuse_dist_lyr_2011_simplified_100m_buildings_simplified.gpkg"))) {
+  download.file("https://zenodo.org/record/6395804/files/infuse_dist_lyr_2011_simplified_100m_buildings_overlay_simplified.gpkg", destfile = here("data", "infuse_dist_lyr_2011_simplified_100m_buildings_simplified.gpkg"))
+}
+local_authorities_buildings_clip <- st_read(here("data", "infuse_dist_lyr_2011_simplified_100m_buildings_simplified.gpkg"))
+```
+Before we move on, let's plot a simple map and have a look at one of our administrative layers. We can use ggplot with a new geom, `geom_sf()`, to plot the polygons of a geospatial data file which has been loaded as a simple features (`sf`) object in R.
+
+```{r}
+ggplot(uk_countries) +
+ geom_sf()
+```
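+
+Because each `geom_sf()` call adds a layer, you can also stack these boundary files on top of one another. A quick sketch (note that the `linewidth` aesthetic requires ggplot2 3.4 or later):
+
+```{r}
+# Layer local authority boundaries over the country outlines:
+ggplot() +
+  geom_sf(data = uk_countries, fill = "grey95") +
+  geom_sf(data = local_authorities, fill = NA, linewidth = 0.1)
+```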
+
+# Load in Ordnance Survey OpenMap Points Data
+
+```{r}
+
+# Note: for more advanced reproducible scripts which demonstrate how these data sources have been 
+# obtained, see the companion cookbook here: https://github.com/kidwellj/hacking_religion_cookbook/blob/main/ordnance_survey.R
+
+os_openmap_pow <- st_read(here("data", "os_openmap_pow.gpkg"))
+
+ggplot(os_openmap_pow) +
+ geom_sf()
+
+```
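+
+Since CRS mismatches are the most common source of geospatial bugs, it's worth checking the CRS of a freshly loaded layer; a quick sketch:
+
+```{r}
+# Confirm the layer's coordinate reference system (for Ordnance Survey data
+# this should be the British National Grid):
+st_crs(os_openmap_pow)
+```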
+
+It's worth noting that the way you load geospatial data in R has changed quite dramatically in recent years with the introduction of the simple features (`sf`) class. Much of the documentation you will come across "out there" will make reference to a set of functions from older packages (such as `sp` and `rgdal`) which are now deprecated.
+
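+For instance, older tutorials often load vector data via the retired rgdal package; the sf equivalent is a one-liner. A sketch with a hypothetical shapefile:
+
+```{r}
+# Deprecated approach you may still see (rgdal was retired from CRAN in 2023):
+# churches <- rgdal::readOGR(dsn = "data", layer = "churches")
+
+# The sf equivalent:
+# churches <- st_read(here("data", "churches.shp"))
+```
+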
+Let's use that data we've just loaded to make our first map:
+
+```{r}
+# Generate a map of local authority boundaries
+# (uncommenting tm_fill below would produce a choropleth; the palette keeps 0s white)
+library(tmap) |> suppressPackageStartupMessages()
+# palette <- c("white", "#a8ddb5", "#43a2ca")
+map1 <- tm_shape(local_authorities) +
+# tm_fill(col = "surveys_count", palette = palette, title = "Concentration of survey respondents") +
+ tm_borders(alpha=.5, lwd=0.1) +
+ # for intermediate polygon geometries
+ # tm_shape(local_authorities) +
+ # tm_borders(lwd=0.6) +
+ # for dots from original dataset
+ # tm_dots("red", size = .05, alpha = .4) +
+ tm_scale_bar(position = c("right", "bottom")) +
+ tm_style("gray") +
+ tm_credits("Data: UK Data Service (OGL)\n& Jeremy H. Kidwell,\nGraphic is CC-by-SA 4.0",
+ size = 0.4,
+ position = c("left", "bottom"),
+ just = c("left", "bottom"),
+ align = "left") +
+ tm_layout(asp = NA,
+ frame = FALSE,
+ title = "Figure 1a",
+ title.size = .7,
+ legend.title.size = .7,
+ inner.margins = c(0.1, 0.1, 0.05, 0.05)
+ )
+
+map1
+
+# save image
+tmap_save(map1, here("figures", "map.png"), width=1920, height=1080, asp=0)
+```
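+
+It's also worth knowing that tmap can render this very same object interactively in a leaflet-based viewer; a sketch (left commented out so the book renders a static image):
+
+```{r}
+# tmap_mode("view")   # switch to interactive mode
+# map1                # redraw map1 as a zoomable web map
+# tmap_mode("plot")   # switch back to static plotting
+```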
+
+```{r}
+# subsetting ordnance survey openmap data for measuring clusters and proximity
+
+os_openmap_important_buildings <- st_read(here("data", "os_openmap_important_buildings.gpkg"))
+
+# add pubs, check_cashing, pawnbrokers, SSSI
+## subsets
+```
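+
+The plan for these subsets is to measure clustering and proximity. As a sketch of where we're heading (assuming a hypothetical `pubs` point layer in the same CRS, where distances are in metres):
+
+```{r}
+# Count how many pubs fall within 500m of each building footprint;
+# st_is_within_distance() returns, for each feature in x, the indices of
+# features in y within the given distance:
+# nearby <- st_is_within_distance(os_openmap_important_buildings, pubs, dist = 500)
+# pubs_within_500m <- lengths(nearby)
+```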
+
+
+```{r}
+# OSM data
+
+# Note: for more advanced reproducible scripts which demonstrate how these data sources have been 
+# obtained, see the companion cookbook here: https://github.com/kidwellj/hacking_religion_cookbook/blob/main/ordnance_survey.R
+
+
+# osm_uk_points <- st_read(system.file("data/pow_osm.gpkg", package = "spData"))
+# vector_filepath = system.file("data/osm-gb-2018Aug29_pow_osm.pbf", package = "sf")
+# osm_uk_points = st_read(vector_filepath)
+```
+
Guides to geographies:
- https://rconsortium.github.io/censusguide/
- https://ocsi.uk/2019/03/18/lsoas-leps-and-lookups-a-beginners-guide-to-statistical-geographies/
-Extact places of worship from Ordnance survey open data set
Calculate proximity to pubs
# References {.unnumbered}