final draft of chapter 1 complete

This commit is contained in:
Jeremy Kidwell 2025-01-09 12:00:02 +00:00
parent 59c8c06412
commit 59902f218f
3 changed files with 107 additions and 54 deletions


@ -1,4 +1,6 @@
In this chapter we're going to do some exciting things with census data. This is a very important dataset, often analysed, but much less frequently with regards to the subject of religion and almost never with the level of granularity you'll learn to work with over the course of this chapter.
We'll get to the good stuff in a moment, but first we need to do a bit of setup. The code provided here is intended to set up your workspace and is also necessary for the `quarto` application we use to build this book. If you hadn't already noticed, this book is also generated by live (and living!) R code. Quarto is an application which blends together text and blocks of code to produce books. You can ignore most of it for now, though if you're running the code as we go along, you'll definitely want to include these lines: they create the directories where your files will go as you create charts and extract data below, and they tell R where to find those files:
```{r}
#| include: true
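# A sketch of the kind of setup this chunk performs (assumptions: the `here`
# library and the "figures" and "example_data" folders used later in this
# chapter; the real setup chunk may differ in the details):
library(here)                        # builds file paths relative to the project root
if (!dir.exists(here("figures"))) {
  dir.create(here("figures"))        # where chart images will be saved below
}
if (!dir.exists(here("example_data"))) {
  dir.create(here("example_data"))   # where extracted data files are stored
}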
@ -51,7 +53,7 @@ This is actually a fairly ugly table, so I'll use an R tool called `kable` to gi
knitr::kable(head(uk_census_2021_religion))
```
You can see how I've nested the previous command inside the `kable` command. For reference, when you're working with really complex scripts drawing on many different libraries, you may end up with two libraries that each define a function with the same name, and you may unwittingly run a function from the wrong library. You can specify the library a function should come from by preceding it with `::`, as we've done with `knitr::` above. The same kind of output can be obtained using `tail`, which shows the final lines of a given data object:
```{r}
knitr::kable(tail(uk_census_2021_religion))
@ -59,16 +61,20 @@ knitr::kable(tail(uk_census_2021_religion))
# Parsing and Exploring your data
The first thing you're going to want to do is to take a smaller subset of a large data set, either by filtering out certain columns or rows. Let's say we want to work with just the data from the West Midlands, and we'd like to omit some of the other columns which relate to different geographic areas. We'll do this in two steps: first picking out the *row* we want with `filter`, and then choosing a specific range of columns using `select`.
You can use the `filter` command to do this. To give an example, `filter` can pick a single *row* in the following way:
```{r}
uk_census_2021_religion_wmids <- uk_census_2021_religion %>% filter(geography=="West Midlands")
```
In the line above, you'll see that we've created a new object which contains this more specific subset of the original data. You can also overwrite your original object with the new information, and as you go along you'll need to make decisions about whether to keep many iterations as different objects, or if you want to try and hold onto only the bare essentials.
It's also worth noting that there are only a few rules for naming objects (you can't have spaces, for one thing), so you'll want to come up with a specific convention that works for you. I tend to assign a name for each object that indicates the dataset it has come from and then chain on further names using underscore characters which indicate what kind of subset it is. You may want to be careful about letting your names get too long, and find comprehensible ways to abbreviate.
Now we'll use select in a different way to narrow our data to specific *columns* that are needed (no totals!).
[Some readers will want to pause here and check out Hadley Wickham's "R For Data Science" book, in the section, ["Data visualisation"](https://r4ds.hadley.nz/data-visualize#introduction) to get a fuller explanation of how to explore your data.]{.aside}
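In case it helps to picture that step, here's a rough sketch of what the column-narrowing `select` might look like (this assumes, as in the totals example further below, that the religion columns run from `no_religion` through `no_response`; the exact call in the book's source may differ):

```{r}
uk_census_2021_religion_wmids <- uk_census_2021_religion_wmids %>%
  select(no_religion:no_response)   # keep only the religion counts, dropping geography and totals
```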
@ -85,10 +91,12 @@ uk_census_2021_religion_wmids <- gather(uk_census_2021_religion_wmids)
```
There are two basic ways to do visualisations in R. You can work with basic functions in R, often called "base R", or you can work with an alternative (and extremely popular) library called `ggplot` which aims to streamline the coding you need to make a chart:
## Base R
Here's the code you can use to create a new data object which contains the information necessary for our chart. I've just used the generic name "df" because we won't hold on to this object. You'll also see that I've organised the data in descending order using the base R function `order()`. In the next line, we use the base R function `barplot()` to create a chart.
```{r}
df <- uk_census_2021_religion_wmids[order(uk_census_2021_religion_wmids$value,decreasing = TRUE),]
barplot(height=df$value, names=df$key)
@ -97,15 +105,17 @@ barplot(height=df$value, names=df$key)
## GGPlot
The conventions of GGPlot take a bit of getting used to, but it's a very powerful tool which will scale to quite complicated charts.
```{r}
ggplot(uk_census_2021_religion_wmids, aes(x = key, y = value)) + geom_bar(stat = "identity") # <1>
ggplot(uk_census_2021_religion_wmids, aes(x= reorder(key,-value),value)) + geom_bar(stat ="identity") # <2>
```
1. First we'll plot the data using `ggplot`
2. Then we re-order the column by size.
This initial chart doesn't include a "totals" column, as it isn't in the data and these plotting tools simply represent whatever data you put into them. It's nice to have a list of sums for each column, and this is pretty easy to do in R. As you'll see below, we are going to take the original table, and overwrite it with a new column added:
```{r}
uk_census_2021_religion_totals <- uk_census_2021_religion %>% select(no_religion:no_response) # <1>
@ -117,16 +127,16 @@ ggplot(uk_census_2021_religion_totals, aes(x= reorder(key,-value),value)) + geom
1. First, remove the column with region names and the totals for the regions as we want just integer data.
2. Second, calculate the totals. In this example we use the tidyverse library `dplyr`, but you can also do this using base R with `colSums()` like this: `uk_census_2021_religion_totals <- colSums(uk_census_2021_religion_totals, na.rm = TRUE)`. The downside with base R is that you'll also need to convert the result into a dataframe for `ggplot`, like this: `uk_census_2021_religion_totals <- as.data.frame(uk_census_2021_religion_totals)`
3. In order to visualise this data using ggplot, we need to shift this data from wide to long format. This is a quick job using `gather()`
4. Now plot it out and have a look!
You might notice that these two dataframes give us somewhat different results. But with data science, it's much more interesting to compare these two side-by-side in a visualisation. We can join these two dataframes and plot the bars side by side using the base R `bind` functions: by columns with `cbind()` and by rows with `rbind()`:
```{r}
uk_census_2021_religion_merged <- rbind(uk_census_2021_religion_totals, uk_census_2021_religion_wmids)
```
Do you notice there's going to be a problem here? How can we tell one set from the other? We need to add in something identifiable first! To do this we can simply create a new column for each with identifiable information before we bind them:
```{r}
uk_census_2021_religion_totals$dataset <- c("totals")
@ -156,11 +166,11 @@ uk_census_2021_religion_wmids <- uk_census_2021_religion_wmids %>%
uk_census_2021_religion_merged <- rbind(uk_census_2021_religion_totals, uk_census_2021_religion_wmids)
ggplot(uk_census_2021_religion_merged, aes(fill=dataset, x=key, y=perc)) + geom_bar(position="dodge", stat ="identity")
```
This chart gives us a comparison which sets bars from the West Midlands data and UK-wide total data side by side for each category. The same principles that we've used here can be applied to draw in more data. You could, for example, compare census data from different years, e.g. 2001, 2011 and 2021, as we'll do below. Our use of `dplyr::mutate` above can be repeated to add any number of further series which can be plotted in bar groups.
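In case that `mutate` step isn't fresh in your mind, here's a rough sketch of the kind of call involved (this assumes a numeric `value` column, as in our long-format totals data above; the exact scaling and rounding used in the book's own code may differ):

```{r}
uk_census_2021_religion_totals <- uk_census_2021_religion_totals %>%
  dplyr::mutate(perc = value / sum(value) * 100)   # each category as a share of all responses
```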
We'll draw this data into comparison with later sets in the next chapter. But the one glaring issue which remains for our chart is that it lacks any real aesthetic refinement. This is where `ggplot` really shines as a tool, as you can add all sorts of things.
The `ggplot` tool works by stacking additional elements on to your original plot using `+`. So, for example, let's say we want to improve the colours used for our bars. You can specify the formatting for the fill on the `scale` by tacking on `scale_fill_brewer`. This uses a particular tool (and a personal favourite of mine) called ColorBrewer. Part of my appreciation for this tool is that you can pick colours which are not just visually pleasing and produce useful contrast / complementary schemes, but which also work proactively to accommodate colourblindness. Working with colour schemes which are divergent in a visually obvious way will be even more important when we work on geospatial data and maps in a later chapter.
```{r}
ggplot(uk_census_2021_religion_merged, aes(fill=dataset, x=key, y=perc)) + geom_bar(position="dodge", stat ="identity") + scale_fill_brewer(palette = "Set1")
@ -178,19 +188,25 @@ We can fine tune a few other visual features here as well, like adding a title w
```{r}
ggplot(uk_census_2021_religion_merged, aes(fill=fct_reorder(dataset, value), x=reorder(key,-value),value, y=perc)) + geom_bar(position="dodge", stat ="identity", colour = "black") + scale_fill_brewer(palette = "Set1") + ggtitle("Religious Affiliation in the UK: 2021") + xlab("") + ylab("")
```
It's also a bit hard to read our x-axis labels with everything getting cramped down there, so let's rotate that text by 90 degrees so those labels are clear:
```{r}
ggplot(uk_census_2021_religion_merged, aes(fill=fct_reorder(dataset, value), x=reorder(key,-value),value, y=perc)) + geom_bar(position="dodge", stat ="identity", colour = "black") + scale_fill_brewer(palette = "Set1") + ggtitle("Religious Affiliation in the UK: 2021") + xlab("") + ylab("") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
```
# Telling the truth in data science: Is your chart accurate?
If you've been following along up until this point, you'll have produced a fairly complete data visualisation for the UK census. There is some technical work yet to be done fine-tuning the visualisation of our chart here, but I'd like to pause for a moment and consider an ethical question drawn from the principles I outlined in the introduction: is the title of this chart truthful and accurate?
On one hand, it is a straight-forward reference to the nature of the question asked on the 2021 census survey instrument, e.g. something like "what is your religious affiliation". However, as you will see in the next chapter, other large data sets from the same year which involved a similar question yielded different results. Part of this could be attributed to the amount of non-response to this specific question which, in the 2021 census, is between 5 and 6% across many demographics. It's possible (though perhaps unlikely) that all those non-responses were (to pick one random example) Jedi religion practitioners who felt uncomfortable identifying themselves on such a census survey. If even half of the non-responses were of this nature, this would dramatically shift the results, especially in comparison to other minority groups. So there is some work for us to do here in representing non-response as a category on the census.
It's equally possible that someone might feel uncertain when answering, but nonetheless land on a particular decision, marking "Christian" when they wondered if they should instead tick "no religion". Some surveys attempt to capture uncertainty in this way, asking respondents to mark how confident they are about their answers, or allowing respondents to choose multiple answers, but the makers of the census made a specific choice not to capture this, so we simply don't know. It's possible that a large portion of respondents in the "Christian" category were hovering between this and another response and might shift their answers when responding on a different day, or in the context of a particular experience like a good or bad day attending church, or perhaps having just had a conversation with a friend which shifted their thinking.
Even the inertia of survey design can have an effect on this: responding to other questions in a particular way, thinking about ethnic identity, for example, can prime a person to think about their religious identity in a different or more focussed way, altering their response to the question. If someone were to ask you on a survey "are you hungry?" you might say "no," but if they'd previously asked you a hundred questions about your favourite pizza toppings you might have been primed to think about food, and when you arrive at that same question, even at the same time of day, your answer would be an enthusiastic "yes". This can be the case for some ethnicity and religion pairings which may have these kinds of priming interrelations, which we'll explore a bit more in the next chapter.
Given this challenge, some survey instruments randomise the order of questions. This hasn't been done on the census (which would have been quite hard work given that most of the instruments were printed hard copies!), so again, we can't really be sure if those answers given are stable in such a way.
Finally, researchers have also found that when people are asked to mark their religious affiliation, they can sometimes prefer to mark more than one answer. A person might consider themselves to be "Muslim" but also "Spiritual but not religious", preferring the combination of those identities. Respondents also identify in practice with less expected hybrid religious identities, such as "Christian" and "Hindu". One might assume that these are different religions without many doctrinal overlaps, but researchers have found that in actual practice, it's perfectly possible for some people to inhabit two or more categories which the researcher might assume are opposed.
The UK census only allows respondents to tick a single box for the religion category. It is worth noting that, in contrast, the responses for ethnicity allow for combinations. Given that this is the case, it's impossible to know which way a person went at the fork in the road as they were forced to choose just one half of this kind of hybrid identity. Did they feel a bit more Buddhist that day? Or spiritual?
@ -201,17 +217,17 @@ What does this mean for our results? Are they completely unreliable and invalid?
So if we are going to fine-tune our visuals to ensure they comport with our hacker principles and speak truthfully, we should also probably do something different with those non-responses:
```{r}
ggplot(uk_census_2021_religion_merged, aes(fill=fct_reorder(dataset, value), x=reorder(key,-value),value, y=perc)) + geom_bar(position="dodge", stat ="identity", colour = "black") + scale_fill_brewer(palette = "Set1") + ggtitle("Religious Affiliation in the 2021 Census of England and Wales") + xlab("") + ylab("") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
```
# Multifactor Visualisation
One element of R data analysis of census datasets that can get really interesting is working with multiple variables. Above we've looked at the breakdown of religious affiliation across the whole of England and Wales (Scotland operates an independent census, so we haven't included it here), and by placing this data alongside a specific region we've already made a basic entry into working with multiple variables. But this can get much more interesting. Adding an additional quantitative variable into the mix (with *two* variables this is known as bivariate data) can generate a lot more information, and we have to think about visualising it in different ways which can still communicate with visual clarity in spite of the additional visual noise which is inevitable with enhanced complexity. Let's have a look at the way that religion in England and Wales breaks down by ethnicity.
::: {.callout-tip collapse="true"}
## What is Nomis?
For the UK, census data is made available for programmatic research like this via an organisation called NOMIS. Luckily for us, there is an R library you can use to access NOMIS directly, which greatly simplifies the process of pulling data down from the platform. It's worth noting that if you're not in the UK, there are similar options for other countries. Nearly every R textbook I've ever seen works with USA census data (which is part of the reason I've taken the opportunity to work with a different national census dataset in this book), so you'll find plenty of documentation available on the tools you can use for US Census data. Similarly for the EU, Canada, Australia etc.
If you want to draw some data from the nomis platform yourself in R, have a look at the nomis script in our [companion cookbook repository](https://github.com/kidwellj/hacking_religion_cookbook/blob/main/nomis.R). For now, we'll provide some data extracts for you to use.
@ -226,12 +242,18 @@ nomis_extract_census2021 <- readRDS(file = (here("example_data", "nomis_extract_
I'm hoping that readers of this book will feel free to pause along the way and "hack" the code to explore questions of their own, perhaps in this case probing the NOMIS data for answers to their own questions. If I tidy things up too much, however, you're likely to be surprised when you get to the real life data sets. So that you can use the code in this book in a reproducible way, I've started this exercise with what is a more or less raw dump from NOMIS. This means that the data is a bit messy and needs to be filtered down quite a bit so that it only includes the basic stuff that we'd like to examine for this particular question. The upside of this is that you can modify this code to draw in different columns etc.
```{r}
#| column: margin
uk_census_2021_religion_ethnicity <- select(nomis_extract_census2021, GEOGRAPHY_NAME, C2021_RELIGION_10_NAME, C2021_ETH_8_NAME, OBS_VALUE) # <1>
uk_census_2021_religion_ethnicity <- filter(uk_census_2021_religion_ethnicity, GEOGRAPHY_NAME=="England and Wales" & C2021_RELIGION_10_NAME != "Total" & C2021_ETH_8_NAME != "Total") # <2>
uk_census_2021_religion_ethnicity <- filter(uk_census_2021_religion_ethnicity, C2021_ETH_8_NAME != "White: English, Welsh, Scottish, Northern Irish or British" & C2021_ETH_8_NAME != "White: Irish" & C2021_ETH_8_NAME != "White: Gypsy or Irish Traveller, Roma or Other White") # <3>
ggplot(uk_census_2021_religion_ethnicity, aes(fill=C2021_ETH_8_NAME, x=C2021_RELIGION_10_NAME, y=OBS_VALUE)) +
geom_bar(position="dodge", stat ="identity", colour = "black") +
scale_fill_brewer(palette = "Set1") +
ggtitle("Religious Affiliation in the 2021 Census of England and Wales") +
xlab("") + ylab("") +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1)) # <4>
```
1. Select relevant columns
@ -245,15 +267,16 @@ The trouble with using grouped bars here, as you can see, is that there are quit
## Statistics 101: Logarithmic Visualisation
Usually, when we display data we think of numbers in a linear way, that is, each centimetre of the value axis on our chart represents the same quantity as the centimetre above and below it. This is generally a preferred way to display data, and as close to a "common sense" way of showing things as we might get. However, this kind of linear visualisation works best only in cases where the difference between one category on our chart and the next is relatively uniform. This is, for the most part, the case with our charts above. However, we've hit another scenario here: the difference between the "White" subcategory and all the others is large enough that those other four categories aren't really easily perceived on our chart. One way to address this is to leave behind a linear approach to displaying that value data. What if, for example, each step up on our chart didn't represent the same amount of value, e.g. 10, 20, 30, 40, 50 etc., but instead represented an increase which followed orders of magnitude, so something more like 10, 100, 1000, 10000, etc.? That's the essence of a logarithmic visualisation, which can much more easily display data that has a very large range or with disparities from one category to another (there's a small sketch of what this can look like just after this callout).
:::
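To give you a flavour of this (a minimal sketch rather than a polished chart), here's one way you might put the religion-by-ethnicity data from above onto a logarithmic scale. Note that bars don't sit naturally on a log scale, since they're drawn from zero, so this sketch uses points instead:

```{r}
ggplot(uk_census_2021_religion_ethnicity, aes(colour=C2021_ETH_8_NAME, x=C2021_RELIGION_10_NAME, y=OBS_VALUE)) +
  geom_point(size = 3) +
  scale_y_log10(labels = scales::comma) +   # each gridline now marks an order of magnitude
  xlab("") + ylab("") +
  theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
```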
```{r}
#| column: margin
uk_census_2021_religion_ethnicity_white <- filter(uk_census_2021_religion_ethnicity, C2021_ETH_8_NAME == "White") # <1>
uk_census_2021_religion_ethnicity_nonwhite <- filter(uk_census_2021_religion_ethnicity, C2021_ETH_8_NAME != "White") # <2>
ggplot(uk_census_2021_religion_ethnicity_nonwhite, aes(fill=C2021_ETH_8_NAME, x=C2021_RELIGION_10_NAME, y=OBS_VALUE)) + geom_bar(position="dodge", stat ="identity", colour = "black") + scale_fill_brewer(palette = "Set1") + ggtitle("Religious Affiliation in the 2021 Census of England and Wales") + xlab("") + ylab("") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1)) # <3>
```
@ -261,17 +284,35 @@ ggplot(uk_census_2021_religion_ethnicity_nonwhite, aes(fill=C2021_ETH_8_NAME, x=
2. Filtering with `!=` allows us to create a subset where that response is excluded
3. Let's plot it out and see where we've gotten to!
As you'll notice, this is a bit better, but it still doesn't quite render with as much visual clarity and communication as I'd like. Another approach we can take is to represent each bar as a percentage of the total for that ethnicity subgroup, rather than as a raw value. We can do this by adding an extra step to our visualisation: `group_by()` lets us create a series of groups based on a specific column (e.g. `C2021_ETH_8_NAME`), and `mutate()` then creates an additional column in our dataframe which represents values within each of our groups as percentages of the group total rather than raw values:
```{r}
uk_census_2021_religion_ethnicity_percents <- uk_census_2021_religion_ethnicity %>%
group_by(C2021_ETH_8_NAME) %>%
mutate(Percentage = OBS_VALUE / sum(OBS_VALUE) * 100)
ggplot(uk_census_2021_religion_ethnicity_percents, aes(fill=C2021_ETH_8_NAME, x=C2021_RELIGION_10_NAME, y=Percentage)) +
geom_bar(position="dodge", stat ="identity", colour = "black") +
scale_fill_brewer(palette = "Set1") +
ggtitle("Religious Affiliation in the 2021 Census of England and Wales") +
xlab("") + ylab("") +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
```
As you can see, this gives us a really different sense of representation within each group. Another option we can use here is a technique in R called "faceting", which creates a series of small charts that can be viewed alongside one another. This is just intended to whet your appetite for faceted plots, so I won't break down all the separate elements in great detail, as there are other guides which will walk you through the full details of how to use this technique if you want to do a deep dive. For now, you'll want to observe that we've augmented the `ggplot` with a new element called `facet_wrap`, which takes the ethnicity data column as the basis for rendering separate charts.
```{r}
ggplot(uk_census_2021_religion_ethnicity_nonwhite, aes(x=C2021_RELIGION_10_NAME, y=OBS_VALUE)) +
geom_bar(position="dodge", stat ="identity", colour = "black") +
facet_wrap(~C2021_ETH_8_NAME, ncol = 2) + scale_fill_brewer(palette = "Set1") +
ggtitle("Religious Affiliation in the 2021 Census of England and Wales") + xlab("") + ylab("") +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
```
That's a bit better! Now we have a much more accessible set of visual information which compares across categories and renders most of the information we're trying to capture.
To take this chart just one step further, I'd like to take the faceted chart we've just done and add in totals for the previous two census years (2001 and 2011), so we can see how trends are changing in terms of religious affiliation within ethnic self-identification categories. We'll draw on some techniques we've already developed above, using `rbind()` to connect up each of these tables (after we've added a column identifying each by census year). We will also need to use one new technique to change the wording of ethnic categories, as this isn't consistent from one census to the next and ggplot will struggle to chart things unless the terms being used are exactly the same. We'll use `mutate()` again to accomplish this, with some slightly different code.
First we need to get the tables of Census 2011 and 2001 religion data from NOMIS:
```{r}
@ -295,7 +336,7 @@ uk_census_2011_religion_plot <- ggplot(uk_census_2011_religion_ethnicity, aes(x
2. Filter down to simplified dataset with England / Wales and percentages without totals
3. Drop unnecessary columns
The `bind` tool we're going to use is very picky and expects everything to match perfectly so that it doesn't join up data that is unrelated. Unfortunately, the census table data format has changed in each decade, so we need to harmonise the column titles so that we can join the data and avoid confusing R. This is a pretty common problem you'll face when working with multiple datasets in the same chart, so it's well worth noting the extra step needed here.
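As a rough sketch of the shape this harmonisation takes (note that the names on the right-hand side below are placeholders I've made up for illustration, not the actual NOMIS field names, which differ between the 2001, 2011 and 2021 tables):

```{r}
#| eval: false
# hypothetical sketch only (not run): give each decade's extract matching column
# names before binding; replace the placeholder names on the right with the real
# field names from each NOMIS table
uk_census_2001_religion_ethnicity <- uk_census_2001_religion_ethnicity %>%
  rename(Religion = RELIGION_NAME, Ethnicity = ETHNICITY_NAME, Value = OBS_VALUE)
```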
```{r}
uk_census_2001_religion_ethnicity$dataset <- c("2001") # <1>
@ -367,7 +408,13 @@ uk_census_merged_religion_ethnicity_nonwhite <- filter(uk_census_merged_religion
Hopefully if everything went properly, we can now do an initial `ggplot` to see how things look side-by-side:
```{r}
ggplot(uk_census_merged_religion_ethnicity_nonwhite, aes(fill=Year, x=Religion, y=Value)) +
geom_bar(position="dodge", stat ="identity", colour = "black") +
facet_wrap(~Ethnicity, ncol = 2) +
scale_fill_brewer(palette = "Set1") +
ggtitle("Religious Affiliation in the 2001-2021 Census of England and Wales") +
xlab("") + ylab("") +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
```
We're getting there, but as you can see there are a few formatting issues which remain. Our y-axis number labels are in scientific format, which isn't easy to read. You can use the very powerful and flexible `scales` library to bring in some more readable formatting of numbers in a variety of places in R, including in ggplot visualisations.
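As a quick sketch of the kind of fix this enables (assuming we want to re-draw the chart above with comma-separated counts rather than scientific notation):

```{r}
ggplot(uk_census_merged_religion_ethnicity_nonwhite, aes(fill=Year, x=Religion, y=Value)) +
  geom_bar(position="dodge", stat ="identity", colour = "black") +
  facet_wrap(~Ethnicity, ncol = 2) +
  scale_fill_brewer(palette = "Set1") +
  scale_y_continuous(labels = scales::comma) +   # readable counts instead of scientific notation
  xlab("") + ylab("") +
  theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
```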
@ -385,14 +432,21 @@ uk_census_merged_religion_ethnicity <- uk_census_merged_religion_ethnicity %>%
ggplot(uk_census_merged_religion_ethnicity, aes(fill=Year, x=Religion, y=Percent)) + geom_bar(position="dodge", stat ="identity", colour = "black") + facet_wrap(~Ethnicity, scales="free_x") + scale_fill_brewer(palette = "Set1") + scale_y_continuous(labels = scales::percent) + ggtitle("Religious Affiliation in the 2001-2021 Censuses of England and Wales") + xlab("") + ylab("") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
```
Now you can see why this shift is important - the visualisation tells a completely different story in some cases across the two different charts. In the first (working off raw numbers) we see a net increase in Christianity across all categories. But if we take into account the fact that the overall share of population is growing for each of these groups, their actual composition is changing in a different direction. The proportion of each group is declining across the three census periods (albeit with an exception for the "Other" category from 2011 to 2021).
To highlight a few of the technical features I've added for this final plot, I've used a specific feature within `facet_wrap`, `scales = "free_x"`, to let each of the individual facets adjust the range on the x-axis. Since we're looking at trends here and not absolute values, having correspondence across scales isn't important, and this makes for something a bit more visually tidy. I've also shifted the code for `scale_y_continuous` to render values as percentages (rather than millions).
In case you want to print this plot out and hang it on your wall, you can use the `ggsave` tool to render the chart as an image file which you can print or email to a friend (or professor!):
```{r}
uk_census_merged_religion_ethnicity_plot <- ggplot(uk_census_merged_religion_ethnicity, aes(fill=Year, x=Religion, y=Percent)) + geom_bar(position="dodge", stat ="identity", colour = "black") + facet_wrap(~Ethnicity, scales="free_x") + scale_fill_brewer(palette = "Set1") + scale_y_continuous(labels = scales::percent) + ggtitle("Religious Affiliation in the 2001-2021 Censuses of England and Wales") + xlab("") + ylab("") + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
uk_census_merged_religion_ethnicity_plot <- ggplot(uk_census_merged_religion_ethnicity, aes(fill=Year, x=Religion, y=Percent)) +
geom_bar(position="dodge", stat ="identity", colour = "black") +
facet_wrap(~Ethnicity, scales="free_x") +
scale_fill_brewer(palette = "Set1") +
scale_y_continuous(labels = scales::percent) +
ggtitle("Religious Affiliation in the 2001-2021 Censuses of England and Wales") +
xlab("") + ylab("") +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
ggsave("figures/chart.png", plot=uk_census_merged_religion_ethnicity_plot, width = 8, height = 10, units=c("in"))
```


@ -2,22 +2,21 @@
In this chapter, we'll explore the diverse variety of ways you can frame collecting data around religion. Before we dive into that all, however, you might be wondering, why does it all really matter? Can't we just use the census data and assume that's a reasonably accurate approximation? I'll explore the importance of getting the framing right, or better yet, working with data that seeks to unpack religious belonging, identity, and beliefs (or unbelief) in a variety of ways, but an example might serve to explain why this is important.
The 2016 presidential election result in the USA came as a surprise to many data analysts and pollsters. As the dust settled, a number of analysts scrambled to make sense of things and identify some hidden factor that might have tipped the balance away from their predicted winner, Hillary Clinton. One of the most widely circulated data points was the role of white evangelical Christians in supporting Trump. Exit polls reported that 81% of this constituency voted for Trump, and many major media outlets reported this figure prominently, with public commentary from many religious leaders on the meaning this figure had for the social direction of evangelical Christianity.
Few observers paused to ask what those exit polls were measuring, and a closer look at that information reveals some interesting nuances. There is only a single firm that runs exit polling in the USA, Edison Research, which is contracted to do this work by a consortium of major media news outlets ("the National Election Pool") representing ABC News, Associated Press, CBS News, CNN, Fox News, and NBC News. It's not a process driven by slow, nuanced, scholarly researchers strapped for funding; it's a rapid, high-stakes data collection exercise meant to provide data which can feed into the election week news cycle. The poll doesn't simply ask respondents if they are "evangelical"; it uses a broader proxy question to do this. The pollsters ask respondents: "Would you describe yourself as a born-again or evangelical Christian?" This term "born-again" can be a useful proxy, but it can also prove misleading. When asked if they are "born again", people who identify with a number of non-Christian religions, and people who might describe themselves as non-religious, will also often answer "yes". This is particularly salient given the 2016 exit survey asked this question before asking specifically what a person's religion was, so as Pew Research reported, "everyone who takes the exit poll (including religious “nones” and adherents of non-Christian faiths) has the opportunity to identify as a born-again or evangelical Christian."
While the "born-again" Christian category tends to correlate to high levels of attendance at worship services, in this case some researchers found that white protestant Christian voters for Trump tended to have low levels of participation in activities. We don't have access to the underlying data, and ultimately the exit polling was quite limited in scope (in some instances respondents weren't even asked about religion), so we'll never really have a proper understanding of what happened demographically in that election. But it's an interesting puzzle to consider how different ways to record participation in religion might fit together, or even be in tension with one another. For this chapter, we're going to take a look at another dataset which gives us exactly this kind of opportunity, to see how different kinds of measurement might reinforce or relate with one another.
# Survey Data: Spotlight Project
In the last chapter we explored some high-level data about religion in the UK. This was a census sample, which usually refers to an attempt to get as comprehensive a sample as possible. But this is actually fairly unusual in practice. Depending on how complex a subject is and how representative we want our data to be, it's much more common to use selective sampling, that is, survey responses from perhaps n=100 up to n=1000 at a maximum. The advantage of a census sample is that you can explore how a wide range of other factors - particularly demographics - intersect with your question. And this can be really valuable in the study of religion, particularly as you will see as we go along that responses to some questions are more strongly correlated to things like economic status or educational attainment than they are to religious affiliation. It can be hard to tell if this is the case unless you have enough of a sample to break down into a number of different kinds of subsets. But census samples are complex and expensive to gather, so they're quite rare in practice.
For this chapter, I'm going to walk you through a data set that Charles Ogunbode and I collected in 2021. Another problem with smaller, more selective samples is that researchers can often undersample minoritised ethnic groups. This is particularly the case with climate change (and religion) research. Until the time we conducted this research, there had not been a single study investigating the specific experiences of people of colour in relation to climate change in the UK. Past researchers had been content to work with large samples, and assumed that if they had done 1000 surveys and 50 of these were completed by people of colour, they could "tick" the box. But 5% is actually well below levels of representation in the UK generally, and even more sharply the case for specific communities and regions in the UK. And if we bear in mind that non-white respondents are (of course!) a highly heterogenous group, we're even more behind in terms of collecting data that can improve our knowledge. Up until recently researchers just haven't been paying close enough attention to catch the significant neglect of the empirical field that this represents.
While I've framed my comments above in terms of climate change research, it is also the case that, especially in diverse societies like the USA, Canada, the UK etc., paying attention to non-majority groups and people and communities of colour automatically draws in a strongly religious sample. This is highlighted in one recent study done in the UK, the "[Black British Voices Report](https://www.cam.ac.uk/stories/black-british-voices-report)" in which the researchers observed that "84% of respondents described themselves as religious and/or spiritual". My comments above in terms of controlling for other factors remains important here - these same researchers also note that "despite their significant importance to the lives of Black Britons, only 7% of survey respondents reported that their religion was more defining of their identity than their race".
We've decided to open up access to our data and I'm highlighting it in this book because it's a unique opportunity to explore a dataset that emphasises diversity from the start and, by extension, provides some really interesting ways to use data science techniques to explore religion in the UK.
# Loading in some data


@ -2,7 +2,7 @@
## Why this book?
Data science is quickly consolidating as a new field, with new tools and user communities emerging every week. At the same time academic research has opened up into new interdisciplinary vistas, with experts crossing over into new fields, transgressing disciplinary boundaries and deploying tools in new and unexpected ways to develop knowledge. There are many gaps yet to be filled, but one which I found to be particularly glaring is the lack of applied data science documentation around the subject of religion. On one hand, scholars who are working with cutting edge theory seldom pick up these emerging tools of data science. On the other hand, data scientists rarely go beyond dabbling in religious themes, leaving quite a lot of really interesting theoretical research untouched. This book aims to bring these two things together: introducing the tools of data science in an applied way, whilst introducing some of the complexities and cutting edge theories which help us to conceptualise and frame our understanding of this knowledge regarding religion in the world around us.
## The hacker way
@ -11,14 +11,14 @@ It's worth emphasising at the outset that this isn't meant to be a generic data
Back in the 1980s Steven Levy tried to capture some of this in his book "Hackers: Heroes of the Computer Revolution". As Levy put it, the "hacker ethic" included: (1) sharing, (2) openness, (3) decentralisation, (4) free access to computers and (5) world improvement. The key point here is that hacking isn't just about writing and breaking code, or testing and finding weaknesses in computer systems and networks. There is often a more substantial underpinning ethical code which dovetails with on-the-surface matters of curiosity and craft.
This emphasis on ethics is especially important when we're doing data science because this kind of research work will put you in positions of influence. You might think this seems a bit overstated, but it never ceases to amaze me how much bringing a bar chart which succinctly shows a social trend can sway a conversation or decision making process. There is something unusually persuasive that comes with the combination of aesthetics, data and storytelling. I've met many people who have come to data science out of a desire to bring about social transformation in some sphere of life. People want to use technology and communication to make the world better. However, it's possible that this can quickly get out of hand. With this in mind, I've found that it can be important to have a clear sense of the convictions that guide your work in this field: a "hacker code" of sorts. Here are the principles that I have settled on in my own practice of "hacking" religion:
1. *Tell the truth*: be candid about your limits, use visualisation responsibly
2. *Work transparently*: open data, open code
3. *Work in community*: draw others in by producing reproducible research
4. *Work with reality* and learn by doing
It is striking how often people assume that, when they're working for something they believe is important, it is acceptable to conceal bad news or amplify good or compelling information beyond its real scope. There are always consequences, eventually. When people realise you've been misleading or manipulating them, your platform and credibility will evaporate, and good work mixed with bad will all get tossed out. Sometimes, too, our convictions can lead us beyond an accurate apprehension of the situation we are researching.
An argument presented through "facts" can become unnaturally compelling. Wrapping those facts up in something that uses colour, line and shape in a way that is aesthetically pleasing, even beautiful, enhances this allure even further. As you craft your own set of hacker principles, it's vitally important that you always strive to tell the truth. This includes a willingness to acknowledge the limits of your information, and to share your information in full. The easiest way to do this is to work with visualisation in a responsible way (I'll get into this a bit more in Chapter 1) and to open up your data and code to scrutiny. By allowing others to try, criticise, edit, and reappropriate your code and data in their own ways, you contribute to knowledge and help to build up a community of accountability. The upside is that it's also a lot more fun and interesting to work alongside others.
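To make the first of those principles a bit more concrete, here is a minimal sketch, using invented figures and the widely used `ggplot2` package (not a chart drawn from this book's data), of what "using visualisation responsibly" can look like in practice: the same two numbers plotted once with a truncated axis, which exaggerates a small gap, and once with zero kept on the axis, which shows the gap in proportion.

```{r}
#| eval: false

library(ggplot2)

# Invented figures: two groups separated by a modest four-point gap
affiliation <- data.frame(
  group = c("Group A", "Group B"),
  percent = c(48, 52)
)

# Misleading: truncating the y-axis makes the small gap look dramatic
ggplot(affiliation, aes(x = group, y = percent)) +
  geom_col() +
  coord_cartesian(ylim = c(45, 55))

# More honest: keeping zero on the y-axis keeps the gap in proportion
ggplot(affiliation, aes(x = group, y = percent)) +
  geom_col()
```

The data is identical in both charts; only the framing changes, and that framing is where the responsibility lies.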
I'll return to these principles periodically as we work through the coding and data analysis in this book.
## Learning to code: my way
Alongside these guiding principles, it's also worth saying a bit about how I like to design teaching and learning. I remember when I was first starting out, gathering coding manuals to read and learn from. They all tended to spend the first several hundred pages on theory: how you form an integer, data structures, subroutines, the logical structure of algorithms, and so on. It was usually weeks of reading before I got to actually *do* anything. I know some people may prefer this approach, but I prefer a problem-focussed approach to learning. Give me something that is broken, or a problem to solve which engages the things I want to figure out, and the motivation for learning comes much more naturally. And we know from research in cognitive science that problem-focussed approaches tend to facilitate faster learning and better retention. It will be helpful for you to be aware of this approach when you get into the book, as it explains some of the editorial choices I've made and the way I've structured things.
Each chapter focusses on a series of *problems* which are particularly salient for the use of data science to conduct research into religion. These problems will be my focal point, guiding choices of specific aspects of programming to introduce to you as we work our way around that dataset and some of the crucial questions that arise in terms of how we handle it. If you find this approach unsatisfying, luckily there are a number of really terrific guides which lay things out slowly and methodically and I will explicitly signpost some of these along the way so that you can do a "deep dive" when you feel like it. You can also find a list of resources in Appendix B to this book. Otherwise, I'll take an accelerated approach to this introduction to data science in R. I expect that you will identify adjacent resources and perhaps even come up with your own creative approaches along the way, which incidentally is how real data science tends to work in practice.
There are a range of terrific textbooks which cover all these elements in greater depth and more slowly. In particular, I'd recommend that many readers will want to check out Hadley Wickham's "[R For Data Science](https://r4ds.hadley.nz/)" book. I'll include marginal notes in this guide pointing to sections of that book, and a few others which unpack the basic mechanics of R in more detail.
## Getting set up
Every single tool, programming language and dataset we refer to in this book is free and open source. These tools have been produced by professionals and volunteers who are passionate about data science and research and want to share it with the world, and in order to do this (following the "hacker way") they've made these tools freely available. This also means that you aren't restricted to a specific proprietary, expensive, or unavailable piece of software to do this work. I'll make a few opinionated recommendations here based on my own preferences and experience, but it's really up to your own style and approach. In fact, given that this is an open source textbook, you can even propose additions to this chapter online describing other tools you've found and want to share with others.
There are, right now, two main languages that statisticians and data scientists use for this kind of programmatic data science: Python and R. Each language has its merits and I won't rehash the debates between the various factions. For this book we'll be using the R language. This is, in part, because I've found that the R user community and libraries tend to scale a bit better for the work I'm commending in this book. However, it's entirely possible to complete all these exercises in Python, and I'll release a future version of this volume outlining Python approaches to hacking religion.
Bearing this in mind, the first step you'll need to take is to download and install R. You can find instructions and installation packages for a wide range of hardware on a key online resource for R programmers, The Comprehensive R Archive Network (or "CRAN"): https://cran.rstudio.com. Once you've installed R, you've got some choices to make about the kind of programming environment you'd like to use. You can simply use a plain text editor like `TextEdit` to write your code and then execute your programs using the R software you've just installed. However, most users, myself included, tend to use an integrated development environment (or "IDE"). This is usually another software package with a graphical user interface and some visual elements that make it faster to write and test your code. Some IDE packages have built-in reference tools so you can look up options for the libraries you use in your code; they also let you visualise the results of your code execution and, perhaps most important of all, allow you to execute your programs line by line so you can spot errors more quickly (we call this "debugging"). The two most popular IDE platforms for R coding at the time of writing are RStudio and Visual Studio Code. You should download and try out both and stick with your favourite, as the differences are largely aesthetic. I use a combination of RStudio and an enhanced plain text editor, Sublime Text, for my coding.
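If you'd like to confirm that your installation works before moving on, here is a minimal sketch you might run line by line from your new IDE. It assumes you'll eventually want the `tidyverse` and `knitr` packages, which feature in examples later in this book; neither is strictly required just to get started.

```{r}
#| eval: false

# Confirm which version of R you are running
R.version.string

# Install packages used in examples later in the book
# (this only needs to be done once per machine)
install.packages(c("tidyverse", "knitr"))

# Load a package into the current session to check the installation worked
library(tidyverse)
```

If `library(tidyverse)` loads without an error, your setup is in good shape.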
Once you have R and your pick of an IDE, you are ready to go! Proceed to the next chapter and we'll dive right in and get started!