55 Test on Unexpected Inputs
Now, let’s try to break our function. Don’t get truly diabolical (yet). Just make the kind of mistakes you can imagine making at 2am when, 3 years from now, you rediscover this useful function you wrote. Give your function inputs it’s not expecting.
average_delay_by_airline(airline_code = "AB")
#> Error in average_delay_by_airline(airline_code = "AB"): object 'flights' not found
average_delay_by_airline(delay_type = "arr-delay")
#> Error in average_delay_by_airline(delay_type = "arr-delay"): object 'flights' not found
How happy are you with those error messages? You must imagine that an entire script has failed and that you were hoping to just source() it without re-reading it. If a colleague or future you encountered these errors, would they run screaming from the room? How hard is it to pinpoint the usage problem?
55.1 Error Handling
We’ll add some simple checks to ensure that the airline code provided to the function is valid. If it’s not, we’ll signal an informative error. We’ll also add a check to ensure that the delay type provided is valid.
average_delay_by_airline <- function(data = flights, airline_code, delay_type = "dep_delay") {
  # Error if any of the provided airline codes are not in the dataset
  if (!all(airline_code %in% data$carrier)) {
    stop("One or more airline codes not found in the dataset.")
  }
  # Error if the delay type is not one of the two delay columns
  if (!delay_type %in% c("dep_delay", "arr_delay")) {
    stop("Invalid delay type specified. Choose either 'dep_delay' or 'arr_delay'.")
  }
  data %>%
    filter(carrier %in% airline_code) %>%
    summarise(average_delay = mean(get(delay_type), na.rm = TRUE))
}
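With the checks in place, the error messages now point at the actual usage problem. A quick demo (a sketch; it assumes the dplyr and nycflights13 packages, which supply %>%, filter(), summarise(), and flights; "ZZ" is a deliberately bogus carrier code):
library(dplyr)
library(nycflights13)
average_delay_by_airline(airline_code = "ZZ")
#> Error in average_delay_by_airline(airline_code = "ZZ"): One or more airline codes not found in the dataset.
average_delay_by_airline(airline_code = "AA", delay_type = "arr-delay")
#> Error in average_delay_by_airline(airline_code = "AA", delay_type = "arr-delay"): Invalid delay type specified. Choose either 'dep_delay' or 'arr_delay'.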
55.2 Check the validity of arguments
For functions that will be used again – which is not all of them! – it is good to check the validity of arguments. This implements a rule from the Unix philosophy:
Rule of Repair: When you must fail, fail noisily and as soon as possible.
55.2.1 stop if not
stopifnot() is the entry-level solution. I use it here to make sure the input data is a data.frame.
mmm <- function(data = flights, airline_code, delay_type = "dep_delay") {
  stopifnot(is.data.frame(data))
  # Error if any of the provided airline codes are not in the dataset
  if (!all(airline_code %in% data$carrier)) {
    stop("One or more airline codes not found in the dataset.")
  }
  # Error if the delay type is not one of the two delay columns
  if (!delay_type %in% c("dep_delay", "arr_delay")) {
    stop("Invalid delay type specified. Choose either 'dep_delay' or 'arr_delay'.")
  }
  data %>%
    filter(carrier %in% airline_code) %>%
    summarise(average_delay = mean(get(delay_type), na.rm = TRUE))
}
mmm("eggplants are purple")
#> Error in mmm("eggplants are purple"): is.data.frame(data) is not TRUE
And we see that it catches this kind of self-inflicted damage before it can propagate any further.
55.2.2 if then stop
stopifnot() doesn’t provide a particularly helpful error message, though. The next approach is widely used: put your validity check inside an if() statement and call stop() yourself, with a custom error message, in the body.
mmm2 <- function(x) {
  if (!is.numeric(x)) {
    stop(
      "I am so sorry, but this function only works for numeric input!\n",
      "You have provided an object of class: ", class(x)[1]
    )
  }
  max(x) - min(x)
}
library(gapminder)
mmm2(gapminder)
#> Error in mmm2(gapminder): I am so sorry, but this function only works for numeric input!
#> You have provided an object of class: tbl_df
In addition to a gratuitous apology, the error raised also contains two more pieces of helpful info:
- Which function threw the error.
- Hints on how to fix things: expected class of input vs actual class.
If it is easy to do so, I highly recommend this template: “you gave me THIS, but I need THAT”.
The tidyverse style guide has a very useful chapter on how to construct error messages.
55.3 Wrap-up and what’s next?
Here’s the function we’ve written so far:
mmm2
#> function(x) {
#>   if (!is.numeric(x)) {
#>     stop(
#>       "I am so sorry, but this function only works for numeric input!\n",
#>       "You have provided an object of class: ", class(x)[1]
#>     )
#>   }
#>   max(x) - min(x)
#> }
What we’ve accomplished:
- We’ve written our first function.
- We are checking the validity of its input, argument x.
- We’ve done a good amount of informal testing.
Where to next? In part 2 we generalize this function to take differences in other quantiles and learn how to set default values for arguments.
55.4 Where were we? Where are we going?
In part 1 we wrote our simple R function to compute the difference between the max and min of a numeric vector. We checked the validity of the function’s only argument and, informally, we verified that it worked pretty well.
In this part, we generalize this function, learn more technical details about R functions, and set default values for some arguments.
55.7 Generalize our function to other quantiles
The max and the min are special cases of a quantile. Here are other special cases you may have heard of:
- median = 0.5 quantile
- 1st quartile = 0.25 quantile
- 3rd quartile = 0.75 quantile
If you’re familiar with box plots, the rectangle typically runs from the 1st quartile to the 3rd quartile, with a line at the median.
If \(q\) is the \(p\)-th quantile of a set of \(n\) observations, what does that mean? Approximately \(pn\) of the observations are less than \(q\) and \((1 - p)n\) are greater than \(q\). Yeah, you need to worry about rounding to an integer and less/greater than or equal to, but these details aren’t critical here.
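A quick empirical check makes the definition concrete (a sketch; it assumes the gapminder data is loaded, as elsewhere in this chapter):
# Roughly 25% of the lifeExp values should fall below the 0.25 quantile
q25 <- quantile(gapminder$lifeExp, probs = 0.25)
mean(gapminder$lifeExp < q25) # should be approximately 0.25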
Let’s generalize our function to take the difference between any two quantiles. We can still consider the max and min, if we like, but we’re not limited to that.
55.8 Get something that works, again
The eventual inputs to our new function will be the data x and two probabilities.
First, play around with the quantile() function. Convince yourself you know how to use it, for example, by cross-checking your results with other built-in functions.
quantile(gapminder$lifeExp)
#> 0% 25% 50% 75% 100%
#> 23.6 48.2 60.7 70.8 82.6
boxplot(gapminder$lifeExp, plot = FALSE)$stats
#> [,1]
#> [1,] 23.6
#> [2,] 48.2
#> [3,] 60.7
#> [4,] 70.8
#> [5,] 82.6
Now write a code snippet that takes the difference between two quantiles.
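One way to do it (the result agrees with the 25% and 75% values shown above):
the_probs <- c(0.25, 0.75)
the_quantiles <- quantile(gapminder$lifeExp, probs = the_probs)
max(the_quantiles) - min(the_quantiles)
#> [1] 22.6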
55.9 Turn the working interactive code into a function, again
I’ll use qdiff as the base of our function’s name. I copy the overall structure from our previous “max minus min” work but replace the guts of the function with the more general code we just developed.
qdiff1 <- function(x, probs) {
  stopifnot(is.numeric(x))
  the_quantiles <- quantile(x = x, probs = probs)
  max(the_quantiles) - min(the_quantiles)
}
qdiff1(gapminder$lifeExp, probs = c(0.25, 0.75))
#> [1] 22.6
Again we do some informal tests against familiar results and external implementations.
55.10 Argument names: freedom and conventions
I want you to understand the importance of argument names.
I can name my arguments almost anything I like. Proof:
qdiff2 <- function(zeus, hera) {
  stopifnot(is.numeric(zeus))
  the_quantiles <- quantile(x = zeus, probs = hera)
  max(the_quantiles) - min(the_quantiles)
}
qdiff2(zeus = gapminder$lifeExp, hera = 0:1)
#> [1] 59
Although I can name my arguments after Greek gods, it’s usually a bad idea. Take all opportunities to make things more self-explanatory via meaningful names. Future you will thank you.
If you are going to pass the arguments of your function as arguments of a built-in function, consider copying the argument names. Unless you have a good reason to do your own thing (some argument names are bad!), be consistent with the existing function. Again, the reason is to reduce your cognitive load. This is what I’ve been doing all along and now you know why:
qdiff1
#> function(x, probs) {
#>   stopifnot(is.numeric(x))
#>   the_quantiles <- quantile(x = x, probs = probs)
#>   max(the_quantiles) - min(the_quantiles)
#> }
#> <bytecode: 0x0000017aa2f9d698>
We took this detour so you could see there is no structural relationship between my arguments (x and probs) and those of quantile() (also x and probs). The similarity or equivalence of the names accomplishes nothing as far as R is concerned; it is solely for the benefit of humans reading, writing, and using the code. Which is very important!
55.11 What a function returns
By this point, I expect someone will have asked about the last line in my function’s body. Look above for a reminder of the function’s definition.
By default, a function returns the result of the last line of the body. I am just letting that happen with the line max(the_quantiles) - min(the_quantiles). However, there is an explicit function for this: return(). I could just as easily make this the last line of my function’s body:
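return(max(the_quantiles) - min(the_quantiles))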
You absolutely must use return() if you want to return early based on some condition, i.e. before execution gets to the last line of the body. Otherwise, you can decide your own conventions about when you use return() and when you don’t.
55.12 Default values: freedom to NOT specify the arguments
What happens if we call our function but neglect to specify the probabilities?
qdiff1(gapminder$lifeExp)
#> Error in qdiff1(gapminder$lifeExp): argument "probs" is missing, with no default
Oops! At the moment, this causes a fatal error. It can be nice to provide some reasonable default values for certain arguments. In our case, it would not be a good idea to specify a default value for the primary input x, but it would be a good idea to specify a default for probs.
We started by focusing on the max and the min, so I think those make reasonable defaults. Here’s how to specify that in a function definition.
qdiff3 <- function(x, probs = c(0, 1)) {
  stopifnot(is.numeric(x))
  the_quantiles <- quantile(x, probs)
  max(the_quantiles) - min(the_quantiles)
}
Again we check how the function works, in old examples and new, specifying the probs argument and not:
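qdiff3(gapminder$lifeExp)
#> [1] 59
qdiff3(gapminder$lifeExp, probs = c(0.25, 0.75))
#> [1] 22.6
Both results match values we’ve already seen above.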
55.13 Check the validity of arguments, again
Exercise: upgrade our argument validity checks in light of the new argument probs.
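One possible solution sketch (the name qdiff3a and the exact checks are just one reasonable choice; we verify that probs is numeric, NA-free, and within [0, 1]):
qdiff3a <- function(x, probs = c(0, 1)) {
  stopifnot(
    is.numeric(x),
    is.numeric(probs),
    !anyNA(probs),
    all(probs >= 0 & probs <= 1)
  )
  the_quantiles <- quantile(x, probs)
  max(the_quantiles) - min(the_quantiles)
}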
55.14 Wrap-up and what’s next?
Here’s the function we’ve written so far:
qdiff3
#> function(x, probs = c(0, 1)) {
#>   stopifnot(is.numeric(x))
#>   the_quantiles <- quantile(x, probs)
#>   max(the_quantiles) - min(the_quantiles)
#> }
#> <bytecode: 0x0000017a93abc7d8>
What we’ve accomplished:
- We’ve generalized our first function to take a difference between arbitrary quantiles.
- We’ve specified default values for the probabilities that set the quantiles.
Where to next? Next, we tackle NAs, the special ... argument, and formal unit testing.
55.15 Where were we? Where are we going?
Previously, we generalized our first R function so it could take the difference between any two quantiles of a numeric vector. We also set default values for the underlying probabilities, so that, by default, we compute the max minus the min.
In this part, we tackle NAs, the special argument ..., and formal testing.
55.18 Be proactive about NAs
I am being gentle by letting you practice with the Gapminder data. In real life, missing data will make your life a living hell. If you are lucky, it will be properly indicated by the special value NA, but don’t hold your breath. Many built-in R functions have an na.rm = argument through which you can specify how you want to handle NAs. Typically the default value is na.rm = FALSE, and the typical default behavior is to either let NAs propagate or to raise an error. Let’s see how quantile() handles NAs:
z <- gapminder$lifeExp
z[3] <- NA
quantile(gapminder$lifeExp)
#> 0% 25% 50% 75% 100%
#> 23.6 48.2 60.7 70.8 82.6
quantile(z)
#> Error in quantile.default(z): missing values and NaN's not allowed if 'na.rm' is FALSE
So quantile() simply will not operate in the presence of NAs unless na.rm = TRUE. How shall we modify our function?
If we wanted to hardwire na.rm = TRUE, we could. Focus on our call to quantile() inside our function definition.
qdiff4 <- function(x, probs = c(0, 1)) {
  stopifnot(is.numeric(x))
  the_quantiles <- quantile(x, probs, na.rm = TRUE)
  max(the_quantiles) - min(the_quantiles)
}
qdiff4(gapminder$lifeExp)
#> [1] 59
This works but it is dangerous to invert the default behavior of a well-known built-in function and to provide the user with no way to override this.
We could add an na.rm = argument to our own function. We might even enforce our preferred default – but at least we’re giving the user a way to control the behavior around NAs.
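Here’s a sketch of that approach (qdiff5 is a hypothetical name for this intermediate version; recall that z still contains the NA we planted earlier):
qdiff5 <- function(x, probs = c(0, 1), na.rm = TRUE) {
  stopifnot(is.numeric(x))
  the_quantiles <- quantile(x, probs, na.rm = na.rm)
  max(the_quantiles) - min(the_quantiles)
}
qdiff5(z)
#> [1] 59
qdiff5(z, na.rm = FALSE)
#> Error in quantile.default(x, probs, na.rm = na.rm): missing values and NaN's not allowed if 'na.rm' is FALSE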
55.19 The useful but mysterious ... argument
You probably could have lived a long and happy life without knowing there are at least 9 different algorithms for computing quantiles. Go read about the type argument of quantile(). TLDR: if a quantile is not unambiguously equal to an observed data point, you must somehow average two data points. You can weight this average in different ways, depending on the rest of the data, and type = controls this.
Let’s say we want to give the user of our function the ability to specify how the quantiles are computed, but we want to accomplish this with as little fuss as possible. In fact, we don’t even want to clutter our function’s interface with this! This calls for the very special ... argument. In English, this set of three dots is frequently called an “ellipsis”.
qdiff6 <- function(x, probs = c(0, 1), na.rm = TRUE, ...) {
  the_quantiles <- quantile(x = x, probs = probs, na.rm = na.rm, ...)
  max(the_quantiles) - min(the_quantiles)
}
The practical significance of the type = argument is virtually nonexistent, so we can’t demo with the Gapminder data. Thanks to @wrathematics, here’s a small example where we can (barely) detect a difference due to type.
set.seed(1234)
z <- rnorm(10)
quantile(z, type = 1)
#> 0% 25% 50% 75% 100%
#> -2.346 -0.890 -0.564 0.429 1.084
Now we can call our function, requesting that quantiles be computed in different ways.
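For example (a sketch; the contrast between type = 1 and type = 4 is arbitrary, and the printed values assume the same rounding as above):
qdiff6(z, probs = c(0.25, 0.75), type = 1)
#> [1] 1.32
qdiff6(z, probs = c(0.25, 0.75), type = 4)
#> [1] 1.4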
While the difference may be subtle, it’s there. Marvel at the fact that we have passed type = 1 through to quantile() even though it was not a formal argument of our own function.
The special argument ... is very useful when you want the ability to pass arbitrary arguments down to another function, but without constantly expanding the formal arguments to your own function. This leaves you with a less cluttered function definition and gives you future flexibility to specify these arguments only when you need to.
You will also encounter the ... argument in many built-in functions – read up on c() or list() – and now you have a better sense of what it means. It is not a breezy “and so on and so forth.”
There are also downsides to ..., so use it with intention. In a package, you will have to work harder to create truly informative documentation for your user. Also, the quiet, absorbent properties of ... mean it can sometimes silently swallow other named arguments, when the user has a typo in the name. Depending on whether or how this fails, it can be a little tricky to find out what went wrong.
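Here’s a sketch of that hazard (porbs is a deliberate typo; since it matches no formal argument of qdiff6() or quantile(), it is silently ignored and the default probs = c(0, 1) is used):
qdiff6(gapminder$lifeExp, porbs = c(0.25, 0.75))
#> [1] 59
We asked for the interquartile range but silently got the full range instead.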
The ellipsis package provides tools that help package developers use ... more safely. The in-progress tidyverse principles guide provides further guidance on the design of functions that take ... in “Data, dots, details”.
55.20 Use testthat for formal unit tests
Until now, we’ve relied on informal tests of our evolving function. If you are going to use a function a lot, especially if it is part of a package, it is wise to use formal unit tests.
The testthat package (cran; GitHub) provides excellent facilities for this, with a distinct emphasis on automated unit testing of entire packages. However, we can take it out for a test drive even with our one measly function.
We will construct a test with test_that() and, within it, we put one or more expectations that check actual against expected results. You simply harden your informal, interactive tests into formal unit tests. Here are some examples of tests and indicative expectations.
library(testthat)

test_that("invalid args are detected", {
  expect_error(qdiff6("eggplants are purple"))
  expect_error(qdiff6(iris))
})

test_that("NA handling works", {
  expect_error(qdiff6(c(1:5, NA), na.rm = FALSE))
  expect_equal(qdiff6(c(1:5, NA)), 4)
})
No news is good news! Let’s see what test failure would look like. Let’s revert to a version of our function that does no NA handling, then test for proper NA handling. We can watch it fail.
qdiff_no_NA <- function(x, probs = c(0, 1)) {
  the_quantiles <- quantile(x = x, probs = probs)
  max(the_quantiles) - min(the_quantiles)
}

test_that("NA handling works", {
  expect_equal(qdiff_no_NA(c(1:5, NA)), 4)
})
Similar to the advice to use assertions in data analytical scripts, I recommend you use unit tests to monitor the behavior of functions you (or others) will use often. If your tests cover the function’s important behavior, then you can edit the internals freely. You’ll rest easy in the knowledge that, if you broke anything important, the tests will fail and alert you to the problem. A function that is important enough for unit tests probably also belongs in a package, where there are obvious mechanisms for running the tests as part of overall package checks.