Let’s start by importing some data into R. Because R is what is called an object-oriented programming language, we’ll always take our information and give it a home inside a named object. There are many different kinds of objects, which you can specify, but usually R will assign a type that seems to fit best.
If you’d like to explore this all in a bit more depth, you can find a very helpful summary in R for Data Science, chapter 8, “data import”.
In the example below, we’re going to read in data from a comma-separated values file (“csv”), which stores rows of information on separate lines in a text file, with each column separated by a comma. This is one of the standard plain text file formats. R has a function called read.csv which you can use to import this kind of file efficiently. Each line of code in R usually starts with the object, and then follows with instructions on what we’re going to put inside it, where that comes from, and how to format it:
setwd("/Users/kidwellj/gits/hacking_religion_textbook/hacking_religion")
library(here) # much better way to manage working paths in R across multiple instances
here() starts at /Users/kidwellj/gits/hacking_religion_textbook
library(tidyverse)
-- Attaching core tidyverse packages ------------------------ tidyverse 2.0.0 --
v dplyr 1.1.3 v readr 2.1.4
v forcats 1.0.0 v stringr 1.5.0
v ggplot2 3.4.3 v tibble 3.2.1
v lubridate 1.9.3 v tidyr 1.3.0
v purrr 1.0.2
-- Conflicts ------------------------------------------ tidyverse_conflicts() --
x dplyr::filter() masks stats::filter()
x dplyr::lag() masks stats::lag()
i Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
here::i_am("chapter_1.qmd")
here() starts at /Users/kidwellj/gits/hacking_religion_textbook/hacking_religion
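With the libraries loaded and the working directory set, we can read the csv file into a named object. Here is a minimal sketch, assuming the census extract has been saved inside the project under example_data/census2021_religion.csv (that path and filename are illustrative, not the official file name):

# Read the csv file into a data frame named uk_census_2021_religion
# (the file path below is an assumption for illustration)
uk_census_2021_religion <- read.csv(here("example_data", "census2021_religion.csv"))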
What’s in the table? You can take a quick look at either the top of the data frame, or the bottom using one of the following commands:
head(uk_census_2021_religion)
geography total no_religion christian buddhist hindu jewish
1 North East 2647012 1058122 1343948 7026 10924 4389
2 North West 7417397 2419624 3895779 23028 49749 33285
3 Yorkshire and The Humber 5480774 2161185 2461519 15803 29243 9355
4 East Midlands 4880054 1950354 2214151 14521 120345 4313
5 West Midlands 5950756 1955003 2770559 18804 88116 4394
6 East 6335072 2544509 2955071 26814 86631 42012
muslim sikh other no_response
1 72102 7206 9950 133345
2 563105 11862 28103 392862
3 442533 24034 23618 313484
4 210766 53950 24813 286841
5 569963 172398 31805 339714
6 234744 24284 36380 384627
This is actually a fairly ugly table, so I’ll use an R tool called kable to give you prettier tables in the future, like this:
knitr::kable(head(uk_census_2021_religion))
| geography                | total   | no_religion | christian | buddhist | hindu  | jewish | muslim | sikh   | other | no_response |
|--------------------------|---------|-------------|-----------|----------|--------|--------|--------|--------|-------|-------------|
| North East               | 2647012 | 1058122     | 1343948   | 7026     | 10924  | 4389   | 72102  | 7206   | 9950  | 133345      |
| North West               | 7417397 | 2419624     | 3895779   | 23028    | 49749  | 33285  | 563105 | 11862  | 28103 | 392862      |
| Yorkshire and The Humber | 5480774 | 2161185     | 2461519   | 15803    | 29243  | 9355   | 442533 | 24034  | 23618 | 313484      |
| East Midlands            | 4880054 | 1950354     | 2214151   | 14521    | 120345 | 4313   | 210766 | 53950  | 24813 | 286841      |
| West Midlands            | 5950756 | 1955003     | 2770559   | 18804    | 88116  | 4394   | 569963 | 172398 | 31805 | 339714      |
| East                     | 6335072 | 2544509     | 2955071   | 26814    | 86631  | 42012  | 234744 | 24284  | 36380 | 384627      |
You can see how I’ve nested the previous command inside the kable command. For reference, when you’re working with complex scripts that draw on many different libraries, two libraries may end up providing functions with the same name. You can specify which library a function should come from by prefixing it with the library name and ::, as we’ve done with knitr:: above. The same kind of output can be produced for the end of the table using tail:
knitr::kable(tail(uk_census_2021_religion))
|    | geography     | total   | no_religion | christian | buddhist | hindu  | jewish | muslim  | sikh   | other | no_response |
|----|---------------|---------|-------------|-----------|----------|--------|--------|---------|--------|-------|-------------|
| 5  | West Midlands | 5950756 | 1955003     | 2770559   | 18804    | 88116  | 4394   | 569963  | 172398 | 31805 | 339714      |
| 6  | East          | 6335072 | 2544509     | 2955071   | 26814    | 86631  | 42012  | 234744  | 24284  | 36380 | 384627      |
| 7  | London        | 8799728 | 2380404     | 3577681   | 77425    | 453034 | 145466 | 1318754 | 144543 | 86759 | 615662      |
| 8  | South East    | 9278068 | 3733094     | 4313319   | 54433    | 154748 | 18682  | 309067  | 74348  | 54098 | 566279      |
| 9  | South West    | 5701186 | 2513369     | 2635872   | 24579    | 27746  | 7387   | 80152   | 7465   | 36884 | 367732      |
| 10 | Wales         | 3107494 | 1446398     | 1354773   | 10075    | 12242  | 2044   | 66947   | 4048   | 15926 | 195041      |
2.3 Parsing and Exploring your data
The first thing you’re going to want to do is take a smaller subset of a large data set, by filtering out certain columns or rows. Let’s say we want to work with just the data from the West Midlands, and we’d like to omit some of the columns. We can choose a specific range of columns using select, like this:
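For instance, a minimal sketch selecting the first six columns by position (the 1:6 range is purely illustrative):

# Keep only the first six columns, chosen by position
uk_census_2021_religion %>% select(1:6)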
You can use the filter command to pick out rows. To give an example, filter can pick out the single row for the West Midlands in the following way:
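Here is a sketch, assuming the region names live in the geography column (as in the table above) and storing the result in a new object:

# Keep only the row for the West Midlands
uk_census_2021_religion_wmids <- uk_census_2021_religion %>%
  filter(geography == "West Midlands")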
Now we’ll use select in a different way to narrow our data to the specific columns that are needed (no totals!):
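One way to do this, as a sketch, is to keep the geography column plus the individual religion counts while dropping the total column:

# Drop the total column, keeping geography and the individual religion counts
uk_census_2021_religion_wmids <- uk_census_2021_religion_wmids %>%
  select(geography, no_religion:no_response)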
Some readers will want to pause here and check out Hadley Wickham’s “R For Data Science” book, in the section “Data visualisation”, to get a fuller explanation of how to explore your data.
In keeping with my goal to demonstrate data science through examples, we’re going to move on to producing some snappy looking charts for this data.
2.4 Making your first data visualisation: the humble bar chart
We’ve got a nice lean set of data, so now it’s time to visualise this. We’ll start by making a bar chart:
There are two basic ways to do visualisations in R. You can work with the basic functions built into R, often called “base R”, or you can work with an additional library called ggplot:
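To see the difference, here is a minimal sketch of the same bar chart produced both ways, using a small made-up data frame purely for illustration:

# A tiny illustrative data frame (not census data)
toy <- data.frame(group = c("a", "b", "c"), count = c(10, 20, 15))

# Base R
barplot(height = toy$count, names.arg = toy$group)

# ggplot (loaded as part of the tidyverse)
ggplot(toy, aes(x = group, y = count)) +
  geom_bar(stat = "identity")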
Let’s assume we’re working with a data set that doesn’t include a “totals” column and that we might want to get sums for each column. This is pretty easy to do in R:
1. First, remove the column with region names and the regional totals, as we want just the integer data.
2. Second, calculate the totals. In this example we use the tidyverse library dplyr, but you can also do this using base R with colSums(), like this: uk_census_2021_religion_totals <- colSums(uk_census_2021_religion_totals, na.rm = TRUE). The downside with base R is that you’ll also need to convert the result into a dataframe for ggplot, like this: uk_census_2021_religion_totals <- as.data.frame(uk_census_2021_religion_totals).
3. Third, in order to visualise this data using ggplot, we need to shift the data from wide to long format. This is a quick job using gather().
4. Finally, plot it out and have a look! (A sketch of all four steps follows below.)
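Here is a minimal sketch of those four steps using dplyr and tidyr; the intermediate object name uk_census_2021_religion_totals matches the base R example above, and the summarise() call is just one reasonable way to do step two:

# 1. Drop the region names and the regional totals, keeping only the counts
uk_census_2021_religion_totals <- uk_census_2021_religion %>%
  select(no_religion:no_response)

# 2. Sum each column with dplyr
uk_census_2021_religion_totals <- uk_census_2021_religion_totals %>%
  summarise(across(everything(), ~ sum(., na.rm = TRUE)))

# 3. Shift the data from wide to long format with gather()
uk_census_2021_religion_totals <- gather(uk_census_2021_religion_totals)

# 4. Plot it out
ggplot(uk_census_2021_religion_totals, aes(x = key, y = value)) +
  geom_bar(stat = "identity")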
You might have noticed that these two dataframes give us somewhat different results. But with data science, it’s much more interesting to compare the two side by side in a visualisation. We can join the two dataframes and plot the bars side by side using a bind function, binding by columns with cbind() or by rows with rbind().
Do you notice there’s going to be a problem here? How can we tell one set from the other? We need to add in something identifiable first! This isn’t too hard to do, as we can simply create a new column for each with identifying information before we bind them:
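As a sketch: reshape the West Midlands data into the same key/value shape as the totals (continuing from the wmids data frame sketched earlier), add a labelling column to each, and then bind the rows:

# Reshape the West Midlands data to key/value format, dropping the geography column
uk_census_2021_religion_wmids <- uk_census_2021_religion_wmids %>%
  select(-geography) %>%
  gather()

# Add an identifying column to each data frame
uk_census_2021_religion_totals$dataset <- "totals"
uk_census_2021_religion_wmids$dataset <- "wmids"

# Stack the two data frames by row
uk_census_2021_religion_merged <- rbind(uk_census_2021_religion_totals, uk_census_2021_religion_wmids)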
Now we’re ready to plot out our data as a grouped barplot:
ggplot(uk_census_2021_religion_merged, aes(fill = dataset, x = reorder(key, -value), y = value)) +
  geom_bar(position = "dodge", stat = "identity")
If you’re looking closely, you will notice that I’ve added two elements to our previous ggplot. I’ve asked ggplot to fill in the bars with reference to the dataset column we’ve just created, and I’ve also set position="dodge", which places the bars side by side rather than stacked on top of one another. You can give it a try without this instruction to see how it works. We will use stacked bars in a later chapter, so remember this feature.
If you inspect our chart, you can see that we’re getting closer, but it’s not really that helpful to compare the totals. What we need to do is get percentages that can be compared side by side. This is easy to do using another dplyr feature, mutate:
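Here is a sketch of one way to do this with mutate(), grouping by the dataset column so that percentages are calculated within each series (the perc column name matches the plotting code that follows):

# Convert raw counts into within-dataset percentages so the bars are comparable
uk_census_2021_religion_merged <- uk_census_2021_religion_merged %>%
  group_by(dataset) %>%
  mutate(perc = value / sum(value) * 100) %>%
  ungroup()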
Now you can see a very rough comparison, which sets bars from the West Midlands data and the overall data side by side for each category. The same principles that we’ve used here can be applied to draw in more data. You could, for example, compare census data from different years, e.g. 2001, 2011 and 2021. Our use of dplyr::mutate above can be repeated to add any number of further series, which can be plotted in bar groups.
We’ll draw this data into comparison with later sets in the next chapter. But the one glaring issue which remains for our chart is that it lacks really any aesthetic refinement. This is where ggplot really shines as a tool, as you can add all sorts of elements to a plot.
These are basically just added to our ggplot code. So, for example, let’s say we want to improve the colours used for our bars. You can specify the formatting for the fill on the scale using scale_fill_brewer. This uses a particular tool (and a personal favourite of mine) called ColorBrewer. Part of my appreciation for this tool is that you can pick colours which are not only visually pleasing and produce useful contrasting / complementary schemes, but which also work proactively to accommodate colourblindness. Working with colour schemes which are divergent in a visually obvious way will be even more important when we work on geospatial data and maps in a later chapter.
ggplot(uk_census_2021_religion_merged, aes(fill = dataset, x = key, y = perc)) +
  geom_bar(position = "dodge", stat = "identity") +
  scale_fill_brewer(palette = "Set1")
We might also want to add a border to our bars to make them more visually striking (notice the addition of colour to geom_bar below). I’ve also added reorder() to the x value to sort the bars in descending order, from largest to smallest.
You can find more information about reordering ggplots on the R Graph gallery.
We can fine-tune a few other visual features here as well, like adding a title with ggtitle and a theme with some prettier fonts using theme_ipsum() (which requires the hrbrthemes library). We can also remove the x and y axis labels (not the data labels, which are rather important).
ggplot(uk_census_2021_religion_merged, aes(fill = fct_reorder(dataset, value), x = reorder(key, -value), y = perc)) +
  geom_bar(position = "dodge", stat = "identity", colour = "black") +
  scale_fill_brewer(palette = "Set1") +
  ggtitle("Religious Affiliation in the UK: 2021") +
  xlab("") +
  ylab("")
2.5 Is your chart accurate? Telling the truth in data science
There is some technical work yet to be done fine-tuning the visualisation of our chart here. But I’d like to pause for a moment and consider an ethical question: is the title of this chart truthful and accurate? On one hand, it is a straightforward reference to the nature of the question asked on the 2021 census survey instrument. However, as you will see in the next chapter, large data sets from the same year which asked a fairly similar question yield different results. Part of this could be attributed to the level of non-response to this specific question, which in the 2021 census is between 5-6% across many demographics. It’s possible (though perhaps unlikely) that all those non-responses were Sikh respondents who felt uncomfortable identifying themselves on such a survey. If even half of the non-responses were of this nature, this would dramatically shift the results, especially in comparison to other minority groups. So there is some work for us to do here in representing non-response as a category on the census.

But it’s equally possible that someone might feel uncertain when answering, but nonetheless land on a particular decision, marking “Christian” when they wondered if they should instead tick “no religion”. Some surveys attempt to capture uncertainty in this way, asking respondents to mark how confident they are about their answers, but the census hasn’t captured this, so we simply don’t know. If a large portion of respondents in the “Christian” category were hovering between this and another response, they might shift their answers when responding on a different day, perhaps having just had a conversation with a friend which shifted their thinking. Even the inertia of survey design can have an effect: responding to other questions in a particular way, thinking about ethnic identity, for example, can prime a person to think about their religious identity in a different or more focussed way, altering their response to the question. For this reason, some survey instruments randomise the order of questions. This hasn’t been done on the census (which would have been quite hard work given that most of the instruments were printed hard copies!), so again, we can’t really be sure whether the answers given are stable.

Researchers have also found that when people are asked to mark their religious affiliation, they can sometimes prefer to mark more than one answer. A person might consider themselves to be “Muslim” but also “Spiritual but not religious”, preferring the combination of those identities. Respondents can also identify with more unexpected hybrid religious identities, such as “Christian” and “Hindu”. The census only allows respondents to tick a single box for the religion category (it is worth noting that, in contrast, the responses for ethnicity allow for combinations). Given that this is the case, it’s impossible to know which way a person went at the fork in the road when they were forced to choose just one half of this kind of hybrid identity.

Finally, it is interesting to wonder exactly what it means for a person when they tick a box like this. Is it because they attend synagogue on a weekly basis? Some people would consider weekly attendance at worship a prerequisite for membership in a group, but others would not.
Indeed, we can infer from surveys and research which aim to track rates of participation in weekly worship that many people who tick boxes for particular religious identities on the census have never attended a worship service at all.
What does this mean for our results? Are they completely unreliable and invalid? I don’t think this is the case, or that taking a clear-eyed look at the force and stability of our underlying data should be cause for despair. Instead, the most appropriate response is humility. Someone has made a statement which is recorded in the census; of this we can be sure. They felt it to be an accurate response on some level, based on the information they had at the time. And with regard to the census, it is a massive, almost complete, population-level sample, so there is additional validity there. The easiest way to represent all this, and to speak truthfully about our data, is to acknowledge that however valid it may seem, it is nonetheless a snapshot. For this reason, I would always advise that the best title for a chart is one which specifies the data set. We should also probably do something different with those non-responses:
ggplot(uk_census_2021_religion_merged, aes(fill = fct_reorder(dataset, value), x = reorder(key, -value), y = perc)) +
  geom_bar(position = "dodge", stat = "identity", colour = "black") +
  scale_fill_brewer(palette = "Set1") +
  ggtitle("Religious Affiliation in the 2021 Census of England and Wales") +
  xlab("") +
  ylab("")
A few refinements remain on the to-do list: change the orientation of the x-axis labels (by adding + theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1)) to the plot), relabel the fields, simplify the y-axis labels, and add percentage text to the bars (or maybe save that for the next chapter?).
2.6 Making our script reproducible
Let’s take a moment to review our hacker code. I’ve just spent some time addressing how we can be truthful in our data science work. We haven’t done much yet to talk about reproducibility.
2.7 Multifactor Visualisation
One element of R data analysis that can get really interesting is working with multiple variables. Above we’ve looked at the breakdown of religious affiliation across the whole of England and Wales (Scotland operates an independent census), and by placing this data alongside a specific region, we’ve already made a basic entry into working with multiple variables, but this can get much more interesting. Adding an additional quantitative variable (also known as bivariate data) into the mix generates a lot more information, and we have to think about visualising it in ways which still communicate with visual clarity in spite of the additional visual noise that inevitably comes with enhanced complexity. Let’s have a look at the way that religion in England and Wales breaks down by ethnicity.
library(nomisr)

# Process to explore nomis() data for specific datasets
religion_search <- nomis_search(name = "*Religion*")
religion_measures <- nomis_get_metadata("NM_529_1", "measures")
tibble::glimpse(religion_measures)
religion_geography <- nomis_get_metadata("NM_529_1", "geography", "TYPE")

# Get table of Census 2011 religion data from nomis
z <- nomis_get_data(id = "NM_529_1", time = "latest", geography = "TYPE499", measures = c(20301))

# Filter down to simplified dataset with England / Wales and percentages without totals
uk_census_2011_religion <- filter(z, GEOGRAPHY_NAME == "England and Wales" & RURAL_URBAN_NAME == "Total" & C_RELPUK11_NAME != "All categories: Religion")

# Drop unnecessary columns
uk_census_2011_religion <- select(uk_census_2011_religion, C_RELPUK11_NAME, OBS_VALUE)

# Plot results
plot1 <- ggplot(uk_census_2011_religion, aes(x = C_RELPUK11_NAME, y = OBS_VALUE)) +
  geom_bar(stat = "identity") +
  theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1))
ggsave(filename = "plot.png", plot = plot1)
Saving 7 x 5 in image
# Grab data from nomis for 2011 census religion / ethnicity table
z1 <- nomis_get_data(id = "NM_659_1", time = "latest", geography = "TYPE499", measures = c(20100))

# Select relevant columns
uk_census_2011_religion_ethnicitity <- select(z1, GEOGRAPHY_NAME, C_RELPUK11_NAME, C_ETHPUK11_NAME, OBS_VALUE)

# Filter down to simplified dataset with England / Wales and percentages without totals
uk_census_2011_religion_ethnicitity <- filter(uk_census_2011_religion_ethnicitity, GEOGRAPHY_NAME == "England and Wales" & C_RELPUK11_NAME != "All categories: Religion" & C_ETHPUK11_NAME != "All categories: Ethnic group")

# Simplify data to only include general totals and omit subcategories
uk_census_2011_religion_ethnicitity <- uk_census_2011_religion_ethnicitity %>%
  filter(grepl('Total', C_ETHPUK11_NAME))

ggplot(uk_census_2011_religion_ethnicitity, aes(fill = C_ETHPUK11_NAME, x = C_RELPUK11_NAME, y = OBS_VALUE)) +
  geom_bar(position = "dodge", stat = "identity", colour = "black") +
  scale_fill_brewer(palette = "Set1") +
  ggtitle("Religious Affiliation in the 2011 Census of England and Wales") +
  xlab("") +
  ylab("") +
  theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1))
The trouble with using grouped bars here, as you can see, is that there are quite sharp disparities which make it hard to compare the groups in meaningful ways. We could use logarithmic rather than linear scaling as an option, but this is hard for many general public audiences to appreciate without guidance. One alternative quick fix is to extract the data for “white” respondents, which can then be placed in a separate chart with a different scale.
# Filter down to simplified dataset with England / Wales and percentages without totals
uk_census_2011_religion_ethnicitity_white <- filter(uk_census_2011_religion_ethnicitity, C_ETHPUK11_NAME == "White: Total")
uk_census_2011_religion_ethnicitity_nonwhite <- filter(uk_census_2011_religion_ethnicitity, C_ETHPUK11_NAME != "White: Total")

ggplot(uk_census_2011_religion_ethnicitity_nonwhite, aes(fill = C_ETHPUK11_NAME, x = C_RELPUK11_NAME, y = OBS_VALUE)) +
  geom_bar(position = "dodge", stat = "identity", colour = "black") +
  scale_fill_brewer(palette = "Set1") +
  ggtitle("Religious Affiliation in the 2011 Census of England and Wales") +
  xlab("") +
  ylab("") +
  theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1))
This still doesn’t quite render with as much visual clarity and communication as I’d like. For a better look, we can use a technique in R called “faceting” to create a series of small charts which can be viewed alongside one another.
ggplot(uk_census_2011_religion_ethnicitity_nonwhite, aes(x = C_RELPUK11_NAME, y = OBS_VALUE)) +
  geom_bar(position = "dodge", stat = "identity", colour = "black") +
  facet_wrap(~C_ETHPUK11_NAME, ncol = 2) +
  scale_fill_brewer(palette = "Set1") +
  ggtitle("Religious Affiliation in the 2011 Census of England and Wales") +
  xlab("") +
  ylab("") +
  theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust = 1))