Open Source: R Programming and Statistical Analysis {#IntroductoryRprogamming}

"Walking on water and developing software from a specification are easy if both are frozen" -- Edward V. Berard

Got R?

In this chapter, we develop some expertise in using the R statistical package. See the manual https://cran.r-project.org/doc/manuals/r-release/R-intro.pdf on the R web site. Work through Appendix A, at least the first page. Also see Grant Farnsworth's document "Econometrics in R": https://cran.r-project.org/doc/contrib/Farnsworth-EconometricsInR.pdf.

There is also a great book that I personally find to be of very high quality, titled "The Art of R Programming" by Norman Matloff.

You can easily install the R programming language, which is a very useful tool for Machine Learning. See: http://en.wikipedia.org/wiki/Machine_learning

Get R from: http://www.r-project.org/ (download and install it).

If you want to use R in IDE mode, download RStudio: http://www.rstudio.com.

Here is a quick test to make sure your installation of R is working, along with its graphics capabilities.
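A minimal sketch: generate some random numbers and plot them, which exercises both the interpreter and the graphics device.

```r
# Quick sanity check: 100 standard normal draws, plotted as a line
plot(rnorm(100), type = "l", main = "Quick test of R graphics")
```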

System Commands

If you want to directly access the system you can issue system commands as follows:
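For instance (assuming a Unix-like shell; the command string would differ on Windows):

```r
# Pass a shell command to the operating system from within R
system("ls -lt")     # list files in the working directory, newest first
Sys.time()           # built-ins like this also query the system directly
```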

Loading Data

To get started, we need to grab some data. Go to Yahoo! Finance and download some historical data in an Excel spreadsheet, re-sort it into chronological order, then save it as a CSV file. Read the file into R as follows.
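A sketch, assuming the file was saved as goog.csv (an illustrative file name) in the working directory:

```r
# Read the CSV file into a data frame; header = TRUE keeps the column names
data <- read.csv("goog.csv", header = TRUE)
head(data)   # inspect the first few rows
```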

Getting External Stock Data

We can do the same data setup exercise for financial data using the quantmod package.

Note: to install a package you can use the drop down menus on Windows and Mac operating systems inside RStudio, and use a package installer on Linux.

You can also install R packages from the console using conda; conda names R packages with an r- prefix, i.e., conda install r-<package name>.

Or issue the following command from the notebook:
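For example, to install quantmod (the package used in the examples that follow):

```r
# Install from CRAN; R prompts for a mirror ("selection") the first time
install.packages("quantmod")
```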

(when asked for "selection" enter 60 for California)

Now we move on to using this package for one stock.
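A sketch, using AAPL as an illustrative ticker (any valid ticker works):

```r
library(quantmod)
getSymbols("AAPL")     # creates an xts object named AAPL in the workspace
```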

Let's take a quick look at the data.

Extract the dates using pipes (we will see this in more detail later).

Plot the data.

Summarize the data.

Compute risk (volatility).
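A combined sketch of these steps (quick look, date extraction via pipes, plot, summary, and volatility), assuming the AAPL object created above and 252 trading days per year:

```r
library(magrittr)                                 # provides the %>% pipe
head(AAPL)                                        # quick look at the data
AAPL %>% index %>% head                           # extract the dates using pipes
chartSeries(AAPL)                                 # plot the price/volume series
summary(AAPL)                                     # summarize the data
rets_aapl <- diff(log(as.numeric(Cl(AAPL))))      # daily log returns from closing prices
sd(rets_aapl) * sqrt(252)                         # annualized volatility (risk)
```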

We may also use the package to get data for more than one stock.

We now go ahead and concatenate columns of data into one stock data set.

Now, compute daily returns. This time, we do log returns in continuous-time. The mean returns are:

We can also compute the covariance matrix and correlation matrix:
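A combined sketch for these steps, using four illustrative tickers (so the discussion below refers to four return series):

```r
tickers <- c("AAPL", "MSFT", "IBM", "GOOG")     # illustrative choice of stocks
getSymbols(tickers)
# Concatenate the closing prices into one data set
P <- merge(Cl(AAPL), Cl(MSFT), Cl(IBM), Cl(GOOG))
colnames(P) <- tickers
# Daily (continuously compounded) log returns and their means
rets <- na.omit(diff(log(P)))
colMeans(rets)
# Covariance and correlation matrices, printed to 4 significant digits
print(cov(rets), digits = 4)
print(cor(rets), digits = 4)
```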

Notice that the print command allows you to choose the number of significant digits (in this case 4). Also, as expected, the four return time series are positively correlated with each other.

Data Frames

Data frames are the most essential data structure in the R programming language. One may think of a data frame as simply a spreadsheet. In fact you can view it as such with the following command.
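For example, assuming the data frame read in earlier is called data (RStudio opens this in a spreadsheet-style viewer):

```r
View(data)   # browse the data frame like a spreadsheet
```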

However, data frames in R are much more than mere spreadsheets, which is why Excel will never trump R in the handling and analysis of data, except for very small applications on small spreadsheets. One may also think of data frames as databases, and there are many commands that we may use that are database-like, such as joins, merges, filters, selections, etc. Indeed, packages such as dplyr and data.table are designed to make these operations seamless, and to operate efficiently on big data, where the number of observations (rows) is of the order of hundreds of millions.

Data frames can be addressed by column names, so that we do not need to remember column numbers specifically. If you want to find the names of all columns in a data frame, the names function does the trick. To address a chosen column, append the column name to the data frame using the "$" connector, as shown below.
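A sketch, assuming the data frame read in earlier has a column named Close:

```r
names(data)          # list all column names
head(data$Close)     # address the Close column with the $ connector
```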

The command printed out the first few observations in the column "Close". All variables and functions in R are "objects", and you are well-served to know the object type, because objects have properties and methods apply differently to objects of various types. Therefore, to check an object type, use the class function.

To obtain descriptive statistics on the data variables in a data frame, the summary function is very handy.

Let's take a given column of data and perform some transformations on it. We can also plot the data, with some arguments for look and feel, using the plot function.
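A combined sketch for these steps (object type, descriptive statistics, a transformation, and a plot), again assuming the Close column:

```r
class(data)                       # check the object type
class(data$Close)
summary(data)                     # descriptive statistics for each variable
# Transform closing prices into daily log returns
close_ret <- diff(log(data$Close))
# Plot with a few look-and-feel arguments
plot(close_ret, type = "l", col = "blue", lwd = 1,
     main = "Daily log returns", xlab = "Observation", ylab = "Return")
```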

In case you want more descriptive statistics than provided by the summary function, then use an appropriate package. We may be interested in the higher-order moments, and we use the moments package for this.

Compute the daily and annualized standard deviation of returns.
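A sketch, assuming 252 trading days per year and the close_ret returns from above:

```r
sigma_daily  <- sd(close_ret)
sigma_annual <- sigma_daily * sqrt(252)
print(c(daily = sigma_daily, annualized = sigma_annual), digits = 4)
# The variance is just as easy
var(close_ret)
```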

Notice the interesting use of the print function here. The variance is easy as well.

Higher-Order Moments

Skewness and kurtosis are key moments that arise in all return distributions. We need a different library in R for these. We use the moments library.

\begin{equation} \mbox{Skewness} = \frac{E[(X-\mu)^3]}{\sigma^{3}} \end{equation}

Skewness means one tail is fatter than the other (asymmetry). Fatter right (left) tail implies positive (negative) skewness.

\begin{equation} \mbox{Kurtosis} = \frac{E[(X-\mu)^4]}{\sigma^{4}} \end{equation}

Kurtosis means both tails are fatter than with a normal distribution.

For the normal distribution, skewness is zero, and kurtosis is 3. Kurtosis minus three is denoted "excess kurtosis".

What is the skewness and kurtosis of the stock index (S\&P500)?
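A sketch, downloading the index with getSymbols as before (^GSPC is the Yahoo! Finance symbol for the S&P 500):

```r
library(moments)
getSymbols("^GSPC")                               # creates an xts object named GSPC
index_rets <- diff(log(as.numeric(Cl(GSPC))))     # daily log returns of the index
skewness(index_rets)
kurtosis(index_rets)     # raw kurtosis; subtract 3 for excess kurtosis
```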

Reading space delimited files

Often the original data is in a space delimited file, not a comma separated one, in which case the read.table function is appropriate.
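For example (the file name is illustrative):

```r
df <- read.table("markets.txt", header = TRUE)   # whitespace-delimited input
head(df)
```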

We compute covariance and correlation in the data frame.

Pipes with magrittr

We may redo the example above using a very useful package called magrittr which mimics pipes in the Unix operating system. In the code below, we pipe the returns data into the correlation function and then "pipe" the output of that into the print function. This is analogous to issuing the command print(cor(rets)).
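A sketch, assuming rets is the matrix of stock returns constructed earlier:

```r
library(magrittr)
rets %>% cor %>% print    # equivalent to print(cor(rets))
```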

Matrices

Question: What do you get if you cross a mountain-climber with a mosquito? Answer: Can't be done. You'll be crossing a scalar with a vector.

We will use matrices extensively in modeling, and here we examine the basic commands needed to create and manipulate matrices in R. We create a $4 \times 3$ matrix with random numbers as follows:
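For example:

```r
A <- matrix(rnorm(12), nrow = 4, ncol = 3)   # 4 x 3 matrix of standard normal draws
A
dim(A)
```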

Transposing the matrix, notice that the dimensions are reversed.

Of course, it is easy to multiply matrices as long as they conform. By "conform" we mean that when multiplying one matrix by another, the number of columns of the matrix on the left must be equal to the number of rows of the matrix on the right. The resultant matrix that holds the answer of this computation will have the number of rows of the matrix on the left, and the number of columns of the matrix on the right. See the examples below:
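A sketch continuing with the matrix A from above, showing the transpose and two conforming products:

```r
t(A)                 # transpose: dimensions become 3 x 4
dim(t(A))
t(A) %*% A           # (3 x 4) times (4 x 3) gives a 3 x 3 result
A %*% t(A)           # (4 x 3) times (3 x 4) gives a 4 x 4 result
```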

Here is an example of non-conforming matrices.
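For instance, multiplying the $4 \times 3$ matrix A by itself does not conform:

```r
try(A %*% A)   # raises "non-conformable arguments"
```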

Taking the inverse of the covariance matrix, we get:
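A sketch, assuming rets is the matrix of stock returns from earlier:

```r
cv <- cov(rets)
cv_inv <- solve(cv)    # matrix inverse
cv_inv
```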

Check that the inverse is really so!
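For example:

```r
round(cv_inv %*% cv, 10)   # should be (numerically) the identity matrix
```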

It is: multiplying the inverse matrix by the matrix itself results in the identity matrix.

A covariance matrix should be positive definite. Why? What happens if it is not? Checking for this property is easy.
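One simple check: all eigenvalues of a positive definite matrix are strictly positive.

```r
eigen(cv)$values              # all eigenvalues should be positive
all(eigen(cv)$values > 0)
```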

What happens if you compute pairwise covariances from differing lengths of data for each pair?

Let's take the returns data we have and find the inverse.

Root Finding

Finding roots of nonlinear equations is often required, and R has several packages for this purpose. Here we examine a few examples. Suppose we are given the function $(x^2 + y^2 - 1)^3 - x^2 y^3 = 0$ and for various values of $y$ we wish to solve for the values of $x$. The function we use is called multiroot and the use of the function is shown below.
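A sketch using multiroot from the rootSolve package, fixing $y=0.5$ (an illustrative value) and solving for $x$:

```r
library(rootSolve)
# f(x) = (x^2 + y^2 - 1)^3 - x^2 * y^3, with y held fixed at 0.5
fn <- function(x, y = 0.5) (x^2 + y^2 - 1)^3 - x^2 * y^3
multiroot(fn, start = 1)    # returns the root and the function value there
```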

Here we demonstrate the use of another function called uniroot.
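For example, uniroot searches a bracketing interval for a single root of a univariate function (the function below is illustrative):

```r
g <- function(x) x^2 - 4 * x + 2     # roots at 2 - sqrt(2) and 2 + sqrt(2)
uniroot(g, interval = c(0, 2))       # finds the root near x = 0.586
```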

Regression

In a multivariate linear regression, we have

\begin{equation} Y = X \cdot \beta + e \end{equation}

where $Y \in R^{t \times 1}$, $X \in R^{t \times n}$, and $\beta \in R^{n \times 1}$, and the regression solution is simply equal to $\beta = (X'X)^{-1}(X'Y) \in R^{n \times 1}$.

To get this result we minimize the sum of squared errors.

\begin{eqnarray*} \min_{\beta} e'e &=& (Y - X \cdot \beta)' (Y-X \cdot \beta) \\ &=& Y'(Y-X \cdot \beta) - (X \beta)'\cdot (Y-X \cdot \beta) \\ &=& Y'Y - Y' X \beta - (\beta' X') Y + \beta' X'X \beta \\ &=& Y'Y - Y' X \beta - Y' X \beta + \beta' X'X \beta \\ &=& Y'Y - 2Y' X \beta + \beta' X'X \beta \end{eqnarray*}

Note that this expression is a scalar.

Differentiating w.r.t. $\beta'$ gives the following f.o.c:

\begin{eqnarray*} - 2 X'Y + 2 X'X \beta&=& {\bf 0} \\ & \Longrightarrow & \\ \beta &=& (X'X)^{-1} (X'Y) \end{eqnarray*}

There is another useful expression for each individual $\beta_i = \frac{Cov(X_i,Y)}{Var(X_i)}$. You should compute this and check that each coefficient in the regression is indeed equal to the $\beta_i$ from this calculation.

Example: We run a stock return regression to exemplify the algebra above.

Now we can cross-check the regression using the algebraic solution for the regression coefficients.
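Since the original data set is not reproduced here, a simulated sketch illustrates both the lm fit and the algebraic cross-check (the dimensions and coefficient values are illustrative):

```r
# Simulated illustration: Tn observations, two explanatory variables
Tn <- 250
X  <- cbind(1, matrix(rnorm(Tn * 2), Tn, 2))    # include an intercept column
beta_true <- c(0.01, 0.8, 0.3)
y  <- as.numeric(X %*% beta_true + rnorm(Tn, sd = 0.02))

res <- lm(y ~ X - 1)        # lm fit (X already contains the intercept)
summary(res)

# Algebraic cross-check: beta = (X'X)^{-1} (X'Y)
solve(t(X) %*% X) %*% (t(X) %*% y)
```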

Example: As a second example, we take data on basketball teams in a cross-section, and try to explain their performance using team statistics. Here is a simple regression run on data from the 2005-06 NCAA basketball season for the March Madness stats. The data is stored in a space-delimited file called ncaa.txt. We use the number of games played as the metric of performance, since more successful teams play more playoff games, and then try to see which variables explain it best. We apply a simple linear regression that uses the R command lm, which stands for "linear model".

An alternative specification of regression using data frames is somewhat easier to implement.
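A sketch covering both the regression described above and the data-frame specification; the column name GMS (games played) is an assumption about the file's header:

```r
ncaa <- read.table("ncaa.txt", header = TRUE)
ncaa_num <- ncaa[sapply(ncaa, is.numeric)]    # keep only numeric columns (drop team names)
res <- lm(GMS ~ ., data = ncaa_num)           # regress games played on all team statistics
summary(res)
```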

P-Values, t-statistics

In a regression, we estimate the coefficients on each explanatory variable (these are the Estimates in the regression above). In addition, we also estimate the standard deviation of the coefficient value, which implies the range around the mean value. This is called the standard error of the coefficient. We are interested in making sure that the coefficient value $b$ is not zero in the statistical sense, usually taken to mean that it is at least 2 standard deviations away from zero. That is, we want the coefficient value $b$ divided by its standard deviation $\sigma_b$ to be at least 2. This is called the t-statistic or t-value, shown in the regression above.

The t-statistic is the number of standard deviations the coefficient is away from zero. It implies a p-value, which is a probability that the coefficient is equal to zero. So, we want the p-values to be small. We see in the above regression [Pr(>|t|)] that the coefficients that are statistically significant have small p-values and large absolute values of t-statistics. It is intuitive that when the t-statistic is large (negative or positive) it means that the coefficient is far away from zero and using the standard normal distribution we can calculate the probability left in the tails. So if the t-statistic is (say) 2.843, it means that there is only 0.006375 probability remaining in the right tail to the right of the t-statistic value.

For a more detailed discussion, see the excellent article on p-values in Scientific American (2019).

Parts of a regression

The linear regression is fit by minimizing the sum of squared errors, but the same concept may also be applied to a nonlinear regression as well. So we might have:

$$ y_i = f(x_{i1},x_{i2},...,x_{ip}) + \epsilon_i, \quad i=1,2,...,n $$

which describes a data set that has $n$ rows and $p$ columns, which are the standard variables for the number of rows and columns. Note that the error term (residual) is $\epsilon_i$.

The regression will have $(p+1)$ coefficients, i.e., ${\bf b} = \{b_0,b_1,b_2,...,b_p\}$, and ${\bf x}_i = \{x_{i1},x_{i2},...,x_{ip}\}$. The model is fit by minimizing the sum of squared residuals, i.e.,

$$ \min_{\bf b} \sum_{i=1}^n \epsilon_i^2 $$

We define the following: $SST = \sum_{i=1}^n (y_i - {\bar y})^2$ is the total sum of squares; $SSE = \sum_{i=1}^n (y_i - {\hat y}_i)^2$ is the sum of squared errors (residuals); and $SSM = \sum_{i=1}^n ({\hat y}_i - {\bar y})^2$ is the model (explained) sum of squares, so that $SST = SSM + SSE$. Dividing each by its degrees of freedom ($DFT = n-1$, $DFE = n-p-1$, $DFM = p$) gives the corresponding mean squares $MST$, $MSE$, and $MSM$.

The $R$-squared of the regression is

$$ R^2 = \left( 1 - \frac{SSE}{SST} \right) \quad \in (0,1) $$

The $F$-statistic in the regression is what tells us if the RHS variables comprise a model that explains the LHS variable sufficiently. Do the RHS variables offer more of an explanation than simply assuming that the mean value of $y$ is the best prediction? The null hypothesis we care about is that all the slope coefficients are zero, i.e., $b_1 = b_2 = \cdots = b_p = 0$.

To test this the $F$-statistic is computed as the following ratio:

$$ F = \frac{\mbox{Explained variance}}{\mbox{Unexplained variance}} = \frac{SSM/DFM}{SSE/DFE} = \frac{MSM}{MSE} $$

where $MSM = SSM/DFM$ is the mean squared error of the model, and $MSE = SSE/DFE$ is the mean squared error of the residuals.

Now let's relate this to $R^2$. First, we find an approximation for the $R^2$.

$$ R^2 = 1 - \frac{SSE}{SST} \\ = 1 - \frac{SSE/n}{SST/n} \\ \approx 1 - \frac{MSE}{MST} \\ = \frac{MST-MSE}{MST} \\ = \frac{MSM}{MST} $$

The $R^2$ of a regression that has no RHS variables is zero, and of course $MSM=0$. In such a regression $MST = MSE$. So the expression above becomes:

$$ R^2_{p=0} = \frac{MSM}{MST} = 0 $$

We can also see with some manipulation, that $R^2$ is related to $F$ (approximately, assuming large $n$).

$$ R^2 + \frac{1}{F+1}=1 \quad \mbox{or} \quad 1+F = \frac{1}{1-R^2} $$

Check to see that when $R^2=0$, then $F=0$.

We can further check the formulae with a numerical example, by creating some sample data.
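A sketch that creates some sample data and lets summary() report both $R^2$ and the $F$-statistic, so the relation above can be checked numerically (the data-generating process is illustrative):

```r
set.seed(42)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + 2 * x1 - x2 + rnorm(n)
res <- summary(lm(y ~ x1 + x2))
res$r.squared
res$fstatistic
# Compare 1/(1 - R^2) with 1 + F (approximately, for large n)
1 / (1 - res$r.squared)
1 + res$fstatistic[1]
```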

We can also compare two regressions, say one with 5 RHS variables against one that has only 3 of those 5, to see whether the additional two variables have any extra value. The ratio of the two $MSM$ values from the first and second regressions is also an $F$-statistic that may be tested to see whether it is large enough.

Note that if the residuals $\epsilon$ are assumed to be normally distributed, then squared residuals are distributed as per the chi-square ($\chi^2$) distribution. Further, the sum of residuals is distributed normal and the sum of squared residuals is distributed $\chi^2$. And finally, the ratio of two $\chi^2$ variables is $F$-distributed, which is why we call it the $F$-statistic: it is the ratio of two sums of squared errors.

Bias in regression coefficients

Underlying the analyses of the regression model above is an assumption that the error term $\epsilon$ is independent of the $x$ variables. This assumption ensures that the regression coefficient $\beta$ is unbiased. To see this in the simplest way, consider the univariate regression

$$ y = \beta x + \epsilon $$

We have seen earlier that the coefficient $\beta$ is given by

$$ \frac{Cov(x,y)}{Var(x)} = \frac{Cov(x,\beta x + \epsilon)}{Cov(x,x)} = \beta + \frac{Cov(x,\epsilon)}{Cov(x,x)} $$

This little piece of statistical math shows that the estimated coefficient is biased whenever there is correlation between $x$ and $\epsilon$.

One way in which the coefficient is biased is if there is a missing variable in the regression that has an effect on both $x$ and $y$, which then injects correlation between $x$ and $\epsilon$. A missing variable that impacts $y$ but not $x$ is fine; after all, every regression has missing variables, else there would be no residual (error) term. Hopefully, there is some idea of how the missing variable impacts both $x$ and $y$ (direction, and if possible sign). Then at least one might have a sense of the direction of bias in the regression coefficient.

Heteroskedasticity

Simple linear regression assumes that the standard error of the residuals is the same for all observations. Many regressions suffer from the failure of this condition. The word for this is "heteroskedastic" errors. "Hetero" means different, and "skedastic" comes from the Greek word for scatter or dispersion.

We can first test for the presence of heteroskedasticity using a standard Breusch-Pagan test available in R. This resides in the lmtest package which is loaded in before running the test.
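A sketch, assuming res is the fitted lm object from the regression above:

```r
library(lmtest)
bptest(res)    # Breusch-Pagan test; a large p-value suggests homoskedasticity
```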

We can see that there is very little evidence of heteroskedasticity in the standard errors as the $p$-value is not small. However, let's go ahead and correct the t-statistics for heteroskedasticity anyway, using the hccm function, whose name stands for heteroskedasticity-corrected covariance matrix.
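A sketch using hccm from the car package (res is the same assumed lm object):

```r
library(car)
vc <- hccm(res)                        # heteroskedasticity-corrected covariance matrix
b  <- coef(res)
t_corrected <- b / sqrt(diag(vc))      # corrected t-statistics
t_corrected
```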

Compare these to the t-statistics in the original model

It is apparent that when corrected for heteroskedasticity, the t-statistics in the regression are lower, and also render some of the previously significant coefficients insignificant.

Auto-Regressive Models

When data is autocorrelated, i.e., has dependence in time, not accounting for this issue results in unnecessarily high statistical significance (in terms of inflated t-statistics). Intuitively, this is because observations are treated as independent when actually they are correlated in time, and therefore, the true number of observations is effectively less.

Consider a finance application. In efficient markets, the correlation of stock returns from one period to the next should be close to zero. We use the returns on Google stock as an example. First, read in the data.

Next, create the returns time series.

Examine the autocorrelation. This is one lag, also known as first-order autocorrelation.

Run the Durbin-Watson test for autocorrelation. Here we test for up to 10 lags.
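A sketch covering both checks, assuming the Google prices have been read in and converted to returns (stored in rets) as in the earlier examples:

```r
r <- as.numeric(rets)                       # work with a plain numeric vector
acf(r, lag.max = 1, plot = FALSE)           # first-order autocorrelation
cor(r[-1], r[-length(r)])                   # the same quantity, computed directly

library(car)
durbinWatsonTest(lm(r ~ 1), max.lag = 10)   # DW test for up to 10 lags
```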

There is no evidence of auto-correlation when the DW statistic is close to 2. If the DW-statistic is greater than 2 it indicates negative autocorrelation, and if it is less than 2, it indicates positive autocorrelation.

If there is autocorrelation we can correct for it as follows. Let's take a different data set.

Test for autocorrelation.

Now make the correction to the t-statistics. We use the procedure formulated by Newey and West (1987). This correction is part of the car package.
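An equivalent sketch uses the NeweyWest covariance estimator from the sandwich package together with lmtest for the corrected coefficient table; fit denotes the regression estimated on the new data set (an assumed name):

```r
library(sandwich)
library(lmtest)
coeftest(fit, vcov = NeweyWest(fit))   # t-statistics with Newey-West (1987) standard errors
```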

Compare these to the stats we had earlier. Notice how they have come down after correction for AR. Note that there are several steps needed to correct for autocorrelation, and it might have been nice to roll one's own function for this. (I leave this as an exercise for you.)

For fun, let's look at the autocorrelation in stock market indexes, shown in the Figure below. The following graphic is taken from the book "A Non-Random Walk Down Wall Street" by Lo and MacKinlay (1999). Is the autocorrelation higher for equally-weighted or value-weighted indexes? Why?

Maximum Likelihood

Assume that the stock returns $R(t)$ mentioned above have a normal distribution with mean $\mu$ and variance $\sigma^2$ per year. MLE estimation requires finding the parameters $\{\mu,\sigma\}$ that maximize the likelihood of seeing the empirical sequence of returns $R(t)$. A normal probability function is required, and we have one above for $R(t)$, which is assumed to be i.i.d. (independent and identically distributed).

First, a quick recap of the normal distribution. If $x \sim N(\mu,\sigma^2)$, then \begin{equation} \mbox{density function:} \quad f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2} \right] \end{equation}

\begin{equation} \mbox{distribution function:} \quad F(x) = \int_{-\infty}^x f(u) \, du \end{equation}

and, by the symmetry of the normal density, the cumulative normal distribution $N(\cdot)$ satisfies

\begin{equation} N(x) = 1 - N(-x) \end{equation}

The standard normal distribution is $x \sim N(0,1)$. For the standard normal distribution: $F(0) = \frac{1}{2}$.

When returns are i.i.d., the mean return and the variance of returns scale with time, and therefore the standard deviation of returns scales with the square root of time. If the time interval between return observations is $h$ years, then the probability density of $R(t)$ is normal with the following equation:

\begin{equation} f[R(t)] = \frac{1}{\sqrt{2 \pi \sigma^2 h}} \cdot \exp\left[ -\frac{1}{2} \cdot \frac{(R(t)-\alpha)^2}{\sigma^2 h} \right] \end{equation}

where $\alpha = \left(\mu-\frac{1}{2}\sigma^2 \right) h$. In our case, we have daily data and $h=1/252$.

For periods $t=1,2,\ldots,T$ the likelihood of the entire series is

\begin{equation} \prod_{t=1}^T f[R(t)] \end{equation}

It is easier (computationally) to maximize \begin{equation} \max_{\mu,\sigma} \; {\cal L} \equiv \sum_{t=1}^T \ln f[R(t)] \end{equation} known as the log-likelihood. This is easily done in R. First we create the log-likelihood function, so you can see how functions are defined in R.
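A sketch of the log-likelihood function (the original chunk is not reproduced here); it takes the parameter vector $\{\mu,\sigma\}$ and the return series, and returns the negative log-likelihood so that a minimizer can be used:

```r
LL <- function(params, rets) {
  mu    <- params[1]
  sigma <- params[2]
  h     <- 1 / 252                              # daily observations
  alpha <- (mu - 0.5 * sigma^2) * h
  sigsq <- sigma^2 * h                          # "sigsq" stands for sigma^2 * h
  logf  <- -log(sqrt(2 * pi * sigsq)) - (rets - alpha)^2 / (2 * sigsq)
  -sum(logf)                                    # negative log-likelihood
}
```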

Note that \begin{equation} \ln \; f[R(t)] = -\ln \sqrt{2 \pi \sigma^2 h} - \frac{[R(t)-\alpha]^2}{2 \sigma^2 h} \end{equation} We have used variable "sigsq" in function "LL" for $\sigma^2 h$.

We now read in the data and maximize the log-likelihood to find the required parameters of the return distribution.
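A sketch, assuming the daily returns have been read into a vector rets as before; optim minimizes the negative log-likelihood (starting values and bounds are illustrative):

```r
res <- optim(par = c(mu = 0.10, sigma = 0.20), fn = LL, rets = rets,
             method = "L-BFGS-B", lower = c(-1, 0.001))
res$par                                       # MLE of {mu, sigma}, already annualized
# Compare with the raw annualized moments of the returns
c(mean(rets) * 252, sd(rets) * sqrt(252))
```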

Let's annualize the parameters and see what they are, comparing them to the raw mean and variance of returns.

As we can see, the parameters under the normal distribution are quite close to the raw moments.

Logit

We have seen how to fit a linear regression model in R. In that model we placed no restrictions on the dependent variable. However, when the LHS variable in a regression is categorical and binary, i.e., takes the value 1 or 0, then a logit regression is more apt. This regression fits a model that will always return a fitted value of the dependent variable that lies between $(0,1)$. This class of specifications covers what are known as limited dependent variables models. In this introduction to R, we will simply run a few examples of these models, leaving a more detailed analysis for later in this book.

Example: For the NCAA data, there are 64 observations (teams) ordered from best to worst. We take the top 32 teams and make their dependent variable 1 (above median teams), and that of the bottom 32 teams zero (below median). Our goal is to fit a regression model that returns a team's predicted percentile ranking.

First, we create the dependent variable.

We use the function glm for this task. Running the model is pretty easy as follows.
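A sketch covering both the dependent variable and the logit fit, assuming the ncaa_num data frame from before is ordered from best to worst and GMS is the games-played column (assumed name):

```r
y <- c(rep(1, 32), rep(0, 32))                  # top 32 teams = 1, bottom 32 = 0
X <- subset(ncaa_num, select = -GMS)            # team statistics (drop the games column)
res_logit <- glm(y ~ ., data = data.frame(y, X), family = binomial(link = "logit"))
summary(res_logit)
```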

Thus, we see that the best variables that separate upper-half teams from lower-half teams are the number of rebounds and the field goal percentage. To a lesser extent, steals also provide some explanatory power. The logit regression is specified as follows:

\begin{eqnarray*} z &=& \frac{e^y}{1+e^y}\\ y &=& b_0 + b_1 x_1 + b_2 x_2 + \ldots + b_k x_k \end{eqnarray*}

The original data are $z \in \{0,1\}$. The range of values of $y$ is $(-\infty,+\infty)$, and, as required, the fitted $z \in (0,1)$. The variables $x$ are the RHS variables. The fitting is done using MLE.

Suppose we ran this with a simple linear regression.
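For example, a linear probability model on the same 0/1 variable:

```r
res_lpm <- lm(y ~ ., data = data.frame(y, X))   # simple linear regression on the binary y
summary(res_lpm)
```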

We get the same variables again showing up as significant.

Probit

We can redo the same regression using a probit instead of a logit. A probit is identical in spirit to the logit regression, except that the function that is used is

\begin{eqnarray*} z &=& \Phi(y)\\ y &=& b_0 + b_1 x_1 + b_2 x_2 + \ldots + b_k x_k \end{eqnarray*}

where $\Phi(\cdot)$ is the cumulative normal probability function. It is implemented in R as follows.
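A sketch, with the same caveats on variable names as before:

```r
res_probit <- glm(y ~ ., data = data.frame(y, X), family = binomial(link = "probit"))
summary(res_probit)
```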

The results confirm those obtained from the linear regression and logit regression.

ARCH and GARCH

GARCH stands for "Generalized Auto-Regressive Conditional Heteroskedasticity". Engle (1982) invented ARCH (for which he got the Nobel prize) and this was extended by Bollerslev (1986) to GARCH.

ARCH models are based on the idea that volatility tends to cluster, i.e., volatility for period $t$, is auto-correlated with volatility from period $(t-1)$, or more preceding periods. If we had a time series of stock returns following a random walk, we might model it as follows

\begin{equation} r_t = \mu + e_t, \quad e_t \sim N(0,\sigma_t^2) \end{equation}

Returns have constant mean $\mu$ and time-varying variance $\sigma_t^2$. If the variance were stationary then $\sigma_t^2$ would be constant. But under GARCH it is auto-correlated with previous variances. Hence, we have

\begin{equation} \sigma_{t}^2 = \beta_0 + \sum_{j=1}^p \beta_{1j} \sigma_{t-j}^2 + \sum_{k=1}^q \beta_{2k} e_{t-k}^2 \end{equation}

So current variance ($\sigma_t^2$) depends on past squared shocks ($e_{t-k}^2$) and past variances ($\sigma_{t-j}^2$). The number of lags of past variance is $p$, and that of lagged shocks is $q$. The model is thus known as a GARCH$(p,q)$ model. For the model to be stationary, the sum of all the $\beta$ terms should be less than 1.

In GARCH, stock returns are conditionally normal, and independent, but not identically distributed because the variance changes over time. Since at every time $t$, we know the conditional distribution of returns, because $\sigma_t$ is based on past $\sigma_{t-j}$ and past shocks $e_{t-k}$, we can estimate the parameters $\{\beta_0,\beta_{1j}, \beta_{2k}\}, \forall j,k$, of the model using MLE. The good news is that this comes canned in R, so all we need to do is use the tseries package.
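A sketch using the garch function from tseries, assuming r is a numeric vector of daily stock returns (constructed as in the earlier examples):

```r
library(tseries)
res_garch <- garch(r, order = c(1, 1))   # fit a GARCH(1,1) model by MLE
summary(res_garch)                       # beta_0, beta_1, beta_2 and their t-statistics
vol <- fitted(res_garch)[, 1]            # extracted conditional volatility series
```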

That's it! Certainly much less painful than programming the entire MLE procedure. We see that the parameters $\{\beta_0,\beta_1,\beta_2\}$ are all statistically significant. Given the fitted parameters, we can also examine the extracted time series of volatility.

We may also plot it side by side with the stock price series.
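For example, stacking the two panels (stockprice is an assumed name for the price series used to compute r):

```r
par(mfrow = c(2, 1))
plot(stockprice, type = "l", main = "Stock price")
plot(vol, type = "l", main = "GARCH(1,1) volatility")
par(mfrow = c(1, 1))
```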

Notice how the volatility series clumps into periods of high volatility, interspersed with larger periods of calm. As is often the case, volatility tends to be higher when the stock price is lower.

Vector Autoregression

Also known as VAR (not the same thing as Value-at-Risk, denoted VaR). VAR is useful for estimating systems where there are simultaneous regression equations, and the variables influence each other over time. So in a VAR, each variable in a system is assumed to depend on lagged values of itself and the other variables. The number of lags may be chosen by the econometrician based on the expected decay in time-dependence of the variables in the VAR.

In the following example, we examine the inter-relatedness of returns of the following three tickers: SUNW, MSFT, IBM. For vector autoregressions (VARs), we run the following R commands:
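A sketch using the ar function from the stats package, assuming rets is a numeric matrix (or multivariate time series) holding the three return series, built with the quantmod steps shown earlier:

```r
res_var <- ar(rets, order.max = 6)   # vector autoregression with up to 6 lags
res_var$aic                          # AIC by lag order
res_var$order                        # lag order selected
res_var$ar                           # array of lagged coefficient matrices
```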

We print out the Akaike Information Criterion (AIC)^[https://en.wikipedia.org/wiki/Akaike_information_criterion] to see which lags are significant.

Since there are three stocks' returns moving over time, we have a system of three equations, and we allow up to six lags, so each equation can have a lagged coefficient on each lag of each of the three variables. We print out these coefficients here and examine their signs. We note, however, that only one lag is significant, as the "order" of the system was estimated as 1 in the VAR above.

Interestingly we see that each of the tickers has a negative relation to its lagged value, but a positive correlation with the lagged values of the other two stocks. Hence, there is positive cross autocorrelation amongst these tech stocks. We can also run a model with three lags.
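For example, forcing three lags rather than letting AIC choose:

```r
res_var3 <- ar(rets, aic = FALSE, order.max = 3)
res_var3$ar
```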

We examine the cross autocorrelation found across all stocks by Lo and MacKinlay in their book "A Non-Random Walk Down Wall Street" -- see the Figure below.

We see that one-lag cross autocorrelations are positive. Compare these portfolio autocorrelations with the individual stock autocorrelations in the example here.

Solving Non-Linear Equations

Earlier we examined root finding. Here we develop it further. We have also not done much with user-generated functions. Here is a neat model in R to solve for the implied volatility in the Black-Merton-Scholes class of models. First, we code up the Black and Scholes (1973) model; this is the function bms73 below. Then we write a user-defined function that solves for the implied volatility from a given call or put option price. The package minpack.lm is used for the equation solving, and the function call is nls.lm.

If you are not familiar with the Nobel Prize winning Black-Scholes model, never mind, almost the entire world has never heard of it. Just think of it as a nonlinear multivariate function that we will use as an exemplar for equation solving. We are going to use the function below to solve for the value of sig in the expressions below. We set up two functions.

We use the minimizer to solve the nonlinear function for the value of sig. The calls to this model are as follows:
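A hedged sketch of the two functions and the call, for the call-option case only (the strike, maturity, rates, and the observed price of 4 are illustrative choices):

```r
library(minpack.lm)

# Black-Scholes (1973) call price minus the observed price; the solver drives this to zero
bms73 <- function(sig, S, K, T, r, q, optprice) {
  d1 <- (log(S / K) + (r - q + 0.5 * sig^2) * T) / (sig * sqrt(T))
  d2 <- d1 - sig * sqrt(T)
  modprice <- S * exp(-q * T) * pnorm(d1) - K * exp(-r * T) * pnorm(d2)
  modprice - optprice                    # model price minus observed price
}

# Wrapper that solves for implied volatility; sig0 (the starting guess) is the first argument
impvol <- function(sig0, S, K, T, r, q, optprice) {
  res <- nls.lm(par = sig0, fn = bms73, S = S, K = K, T = T,
                r = r, q = q, optprice = optprice)
  res$par
}

# Illustrative call: solve for the volatility implied by a call price of 4
impvol(sig0 = 0.20, S = 100, K = 105, T = 0.5, r = 0.02, q = 0, optprice = 4)
```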

We note that the function impvol was written such that the argument that we needed to solve for, sig0, the implied volatility, was the first argument in the function. However, the expression par=sig0 does inform the solver which argument is being searched for in order to satisfy the non-linear equation for implied volatility. Note also that the function bms73 returns the difference between the model price and observed price, not the model price alone. This is necessary as the solver tries to set this function value to zero by finding the implied volatility.

Let's check that when we put this volatility back into the bms73 function, we get back the option price of 4. Voila!

Web-Enabling R Functions

We may be interested in hosting our R programs for users to run through a browser interface. This section walks you through the process to do so. This is an extract of my blog post at http://sanjivdas.wordpress.com/2010/11/07/web-enabling-r-functions-with-cgi-on-a-mac-os-x-desktop/. The same may be achieved by using the Shiny package in R, which enables you to create interactive browser-based applications, and is in fact a more powerful environment in which to create web-driven applications. See: https://shiny.rstudio.com/.

Here we describe an example based on the Rcgi package from David Firth, and for full details of using R with CGI, see http://www.omegahat.org/CGIwithR/. Download the document on using R with CGI. It's titled "CGIwithR: Facilities for Processing Web Forms with R".^[https://www.jstatsoft.org/article/view/v008i10/CGIwithR-overview.pdf]

You need two program files to get everything working. (These instructions are for a Mac environment.)

(a) The html file that is the web form for input data. (b) The R file, with special tags for use with the CGIwithR package.

Our example will be simple, i.e., a calculator to work out the monthly payment on a standard fixed rate mortgage. The three inputs are the loan principal, annual loan rate, and the number of remaining months to maturity.

But first, let's create the html file for the web page that will take these three input values. We call it mortgage_calc.html. The code is all standard, for those familiar with html, and even if you are not used to html, the code is self-explanatory. See Figure rcgi1.

Notice that line 06 will be the one referencing the R program that does the calculation. The three inputs are accepted in lines 08-10. Line 12 sends the inputs to the R program.

Next, we look at the R program, suitably modified to include html tags. We name it mortgage_calc.R. See Figure rcgi2.

We can see that all html calls in the R program are made using the tag() construct. Lines 22--35 take in the three inputs from the html form. Lines 43--44 do the calculations and line 45 prints the result. The cat() function prints its arguments to the web browser page.

Okay, we have seen how the two programs (html, R) are written and these templates may be used with changes as needed. We also need to pay attention to setting up the R environment to make sure that the function is served up by the system. The following steps are needed:

Make sure that your Mac is allowing connections to its web server. Go to System Preferences and choose Sharing. In this window enable Web Sharing by ticking the box next to it.

Place the html file mortgage_calc.html in the directory that serves up web pages. On a Mac, there is already a web directory for this called Sites. It's a good idea to open a separate subdirectory called (say) Rcgi below this one for the R related programs and put the html file there.

The R program mortgage_calc.R must go in the directory that has been assigned for CGI executables. On a Mac, the default for this directory is /Library/WebServer/CGI-Executables and is usually referenced by the alias cgi-bin (stands for cgi binaries). Drop the R program into this directory.

Two more important files are created when you install the Rcgi package. The CGIwithR installation creates two files:

(a) A hidden file called .Rprofile; (b) A file called R.cgi.

Place both these files in the directory: /Library/WebServer/CGI-Executables.

If you cannot find the .Rprofile file then create it directly by opening a text editor and adding two lines to the file:

Now, open the R.cgi file and make sure that the line pointing to the R executable in the file is showing

R_DEFAULT=/usr/bin/R

The file may actually have it as #!/usr/local/bin/R, which is for Linux platforms, but the usual Mac install has the executable at /usr/bin/R, so make sure this path is set correctly.

Make both files executable as follows:

chmod a+rx .Rprofile

chmod a+rx R.cgi

Finally, make the ~/Sites/Rcgi/ directory write accessible:

chmod a+wx ~/Sites/Rcgi

Just being patient and following all the steps makes sure it all works well. Having done it once, it's easy to repeat and create several functions. The inputs are as follows: Loan principal (enter a dollar amount). Annual loan rate (enter it in decimals, e.g., six percent is entered as 0.06). Remaining maturity in months (enter 300 if the remaining maturity is 25 years).

Causal Inference

We end with a brief comment on causality. Merely finding a relationship between the dependent variable and the independent variables in a regression is not enough. Such correlation does not imply causality. In many cases, we are interested in causality because we want to know that changing an independent variable will indeed change the dependent one. This is especially crucial in business, where we may be interested in knowing if raising wages will result in higher productivity, or in public policy, where we'd like to know the impact of reducing taxes on domestic product. Or in marketing, where the causal impact of ad spending is an important part of marketing attribution analysis.

We may also be concerned that a third confounding variable may be impacting both $y$ and $x$ in a regression, thereby biasing the causal impact implied by the regression coefficient.

The main question of causal inference is as follows: Holding all else constant, how does changing a single independent variable $x$ change the dependent variable $y$? Regression models are notoriously riddled with trouble in answering this question, because even as we know that the $x$ variables in the regression are correlated with each other, we are happy to assume that all the variables left out of the regression that might impact $y$ are not correlated in any way with the $x$ variables in the regression! Given this, any attempt at causal inference leaves any modeler very uncomfortable at the slightest hint of missing variable bias.

So how may we undertake causal inference with more confidence? The answer is through experiments, more technically, randomized control trials (RCTs).

Angrist and Pischke (2014) provide the following useful Identity of Causal Inference: The effect of experimental treatment may be decomposed into

$$ \mbox{Treated Outcome - Untreated Outcome} \\ = \mbox{Treated Outcome - Outcome of Treated if not treated} \\ + \mbox{Outcome of Treated if not treated - Untreated Outcome} \\ = \mbox{Treatment Effect + Selection Bias} $$

If we have a true random controlled trial, then the selection bias will be zero, and we will have estimated the treatment effect correctly. This is why setting up proper RCTs is so valuable in determining causality. Angrist and Pischke point out their "Furious Five methods of causal inference": (i) random assignment, (ii) regression, (iii) instrumental variables, (iv) regression discontinuity, and (v) differences in differences. Their book is an excellent source for discussion of causal inference.

Top 10 Coding Mistakes made by Data Scientists

https://towardsdatascience.com/top-10-coding-mistakes-made-by-data-scientists-bb5bc82faaee; https://drive.google.com/file/d/1RNA6MPRwyvRI1jcM5EBwvJe--NmkmIFN/view?usp=sharing