%pylab inline
import os
from ipypublish import nb_setup
Populating the interactive namespace from numpy and matplotlib
%load_ext rpy2.ipython
#%load_ext RWinOut
"Walking on water and developing software from a specification are easy if both are frozen" -- Edward V. Berard
In this chapter, we develop some expertise in using the R statistical package. See the manual https://cran.r-project.org/doc/manuals/r-release/R-intro.pdf on the R web site. Work through Appendix A, at least the first page. Also see Grant Farnsworth's document "Econometrics in R": https://cran.r-project.org/doc/contrib/Farnsworth-EconometricsInR.pdf.
There is also a great book that I personally find to be of very high quality, titled "The Art of R Programming" by Norman Matloff.
You can easily install the R programming language, which is a very useful tool for Machine Learning. See: http://en.wikipedia.org/wiki/Machine_learning
Get R from: http://www.r-project.org/ (download and install it).
If you want to use R in IDE mode, download RStudio: http://www.rstudio.com.
Here is a quick test to make sure your installation of R is working, along with its graphics capabilities.
%%R
#PLOT HISTOGRAM FROM STANDARD NORMAL RANDOM NUMBERS
x = rnorm(1000000)
hist(x,50)
grid(col="blue",lwd=2)
If you want to directly access the system you can issue system commands as follows:
%%R
#SYSTEM COMMANDS
#The following command lists the files in the current directory.
print(system("ls -lt")) #In the notebook only the exit status (0) is captured; the listing goes to the console.
[1] 0
To get started, we need to grab some data. Go to Yahoo! Finance and download some historical data in an Excel spreadsheet, re-sort it into chronological order, then save it as a CSV file. Read the file into R as follows.
%%R
#READ IN DATA FROM CSV FILE
data = read.csv("DSTMAA_data/goog.csv",header=TRUE)
print(head(data))
m = length(data)
n = length(data[,1])
print(c("Number of columns = ",m))
print(c("Length of data series = ",n))
        Date     Open     High      Low    Close Adj.Close   Volume
1 2004-08-19 49.67690 51.69378 47.66995 49.84580  49.84580 44994500
2 2004-08-20 50.17863 54.18756 49.92529 53.80505  53.80505 23005800
3 2004-08-23 55.01717 56.37334 54.17266 54.34653  54.34653 18393200
4 2004-08-24 55.26058 55.43942 51.45036 52.09616  52.09616 15361800
5 2004-08-25 52.14087 53.65105 51.60436 52.65751  52.65751  9257400
6 2004-08-26 52.13591 53.62621 51.99184 53.60634  53.60634  7148200
[1] "Number of columns = " "7"
[1] "Length of data series = " "3607"
%%R
#REVERSE ORDER THE DATA (Also get some practice with a for loop)
for (j in 1:m) {
data[,j] = rev(data[,j])
}
print(head(data))
stkp = as.matrix(data[,6])  #Adj.Close is column 6 (column 7 is Volume)
plot(stkp,type="l",col="blue")
grid(lwd=2)
        Date    Open    High     Low   Close Adj.Close  Volume
1 2018-12-14 1049.98 1062.60 1040.79 1042.10   1042.10 1685900
2 2018-12-13 1068.07 1079.76 1053.93 1061.90   1061.90 1329800
3 2018-12-12 1068.00 1081.65 1062.79 1063.68   1063.68 1523800
4 2018-12-11 1056.49 1060.60 1039.84 1051.75   1051.75 1394700
5 2018-12-10 1035.05 1048.45 1023.29 1039.55   1039.55 1807700
6 2018-12-07 1060.01 1075.26 1028.50 1036.58   1036.58 2101200
We can do the same data set up exercise for financial data using the quantmod package.
Note: to install a package you can use the drop down menus on Windows and Mac operating systems inside RStudio, and use a package installer on Linux.
You can install R packages from the console using conda: conda install r-
Or issue the following command from the notebook (when asked for a CRAN mirror "selection", enter 60 for California):
%%R
#install.packages("quantmod")
NULL
Now we move on to using this package for one stock.
%%R
#USE THE QUANTMOD PACKAGE TO GET STOCK DATA
library(quantmod)
getSymbols("IBM")
[1] "IBM"
%%R
chartSeries(IBM)
Let's take a quick look at the data.
%%R
head(IBM)
           IBM.Open IBM.High IBM.Low IBM.Close IBM.Volume IBM.Adjusted
2007-01-03    97.18    98.40   96.26     97.27    9196800     68.48550
2007-01-04    97.25    98.79   96.88     98.31   10524500     69.21778
2007-01-05    97.60    97.95   96.91     97.42    7221300     68.59115
2007-01-08    98.50    99.50   98.35     98.90   10340000     69.63318
2007-01-09    99.08   100.33   99.07    100.07   11108200     70.45693
2007-01-10    98.50    99.05   97.93     98.89    8744800     69.62614
Extract the dates using pipes (we will see this in more detail later).
%%R
library(magrittr)
dts = IBM %>% as.data.frame %>% row.names
dts %>% head %>% print
dts %>% length %>% print
[1] "2007-01-03" "2007-01-04" "2007-01-05" "2007-01-08" "2007-01-09" [6] "2007-01-10" [1] 3205
Plot the data.
%%R
stkp = as.matrix(IBM$IBM.Adjusted)
rets = diff(log(stkp))
dts = as.Date(dts)
plot(dts,stkp,type="l",col="blue",xlab="Years",ylab="Stock Price of IBM")
grid(lwd=2)
Summarize the data.
%%R
#DESCRIPTIVE STATS
summary(IBM)
     Index               IBM.Open         IBM.High         IBM.Low
 Min.   :2007-01-03   Min.   : 72.74   Min.   : 76.98   Min.   : 69.5
 1st Qu.:2010-03-10   1st Qu.:125.92   1st Qu.:126.99   1st Qu.:124.7
 Median :2013-05-15   Median :149.60   Median :150.54   Median :148.3
 Mean   :2013-05-14   Mean   :149.81   Mean   :151.00   Mean   :148.7
 3rd Qu.:2016-07-20   3rd Qu.:175.65   3rd Qu.:177.27   3rd Qu.:174.8
 Max.   :2019-09-25   Max.   :215.38   Max.   :215.90   Max.   :214.3
   IBM.Close        IBM.Volume        IBM.Adjusted
 Min.   : 71.74   Min.   : 1027500   Min.   : 52.12
 1st Qu.:126.08   1st Qu.: 3489400   1st Qu.: 94.93
 Median :149.25   Median : 4699100   Median :132.22
 Mean   :149.89   Mean   : 5631818   Mean   :122.00
 3rd Qu.:175.77   3rd Qu.: 6799200   3rd Qu.:144.85
 Max.   :215.80   Max.   :30770700   Max.   :169.22
Compute risk (volatility).
%%R
#STOCK VOLATILITY
sigma_daily = sd(rets)
sigma_annual = sigma_daily*sqrt(252)
print(sigma_annual)
print(c("Sharpe ratio = ",mean(rets)*252/sigma_annual))
[1] 0.2217987
[1] "Sharpe ratio = "  "0.26146725711648"
We may also use the package to get data for more than one stock.
%%R
library(quantmod)
getSymbols(c("GOOG","AAPL","CSCO","IBM"))
[1] "GOOG" "AAPL" "CSCO" "IBM"
We now go ahead and concatenate columns of data into one stock data set.
%%R
goog = as.numeric(GOOG[,6])
aapl = as.numeric(AAPL[,6])
csco = as.numeric(CSCO[,6])
ibm = as.numeric(IBM[,6])
stkdata = cbind(goog,aapl,csco,ibm)
dim(stkdata)
[1] 3205 4
Now, compute daily returns. This time, we compute continuously compounded (log) returns. The mean returns are:
%%R
n = dim(stkdata)[1]
rets = log(stkdata[2:n,]/stkdata[1:(n-1),])
colMeans(rets)
        goog         aapl         csco          ibm
0.0005235354 0.0009525354 0.0002568174 0.0002301313
We can also compute the covariance matrix and correlation matrix:
%%R
cv = cov(rets)
print(cv,2)
cr = cor(rets)
print(cr,4)
        goog    aapl    csco     ibm
goog 0.00032 0.00019 0.00017 0.00011
aapl 0.00019 0.00039 0.00018 0.00013
csco 0.00017 0.00018 0.00033 0.00014
ibm  0.00011 0.00013 0.00014 0.00020
       goog   aapl   csco    ibm
goog 1.0000 0.5462 0.5063 0.4557
aapl 0.5462 1.0000 0.4945 0.4616
csco 0.5063 0.4945 1.0000 0.5589
ibm  0.4557 0.4616 0.5589 1.0000
Notice that the print command allows you to choose the number of significant digits (here 2 for the covariance matrix and 4 for the correlation matrix). Also, as expected, the four return time series are positively correlated with each other.
Data frames are the most essential data structure in the R programming language. One may think of a data frame as simply a spreadsheet. In fact you can view it as such with the following command.
%%R
#Only works in RStudio
#View(data)
NULL
However, data frames in R are much more than mere spreadsheets, which is why Excel will never trump R in the handling and analysis of data, except for very small applications on small spreadsheets. One may also think of data frames as databases, and there are many commands that we may use that are database-like, such as joins, merges, filters, selections, etc. Indeed, packages such as dplyr and data.table are designed to make these operations seamless, and to operate efficiently on big data, where the number of observations (rows) is of the order of hundreds of millions.
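As a small illustration of these database-style verbs (a sketch, assuming the dplyr package is installed), we can select columns, filter rows, and sort the Google data frame read in above:
%%R
#Database-style operations on a data frame using dplyr (assumes dplyr is installed)
library(dplyr)
data %>%
    select(Date, Adj.Close, Volume) %>%     #choose columns
    filter(Volume > 20000000) %>%           #keep only high-volume days
    arrange(desc(Adj.Close)) %>%            #sort by adjusted close, descending
    head %>%
    print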
Data frames can be addressed by column names, so that we do not need to remember column numbers specifically. If you want to find the names of all columns in a data frame, the names function does the trick. To address a chosen column, append the column name to the data frame using the "$" connector, as shown below.
%%R
#THIS IS A DATA FRAME AND CAN BE REFERENCED BY COLUMN NAMES
print(names(data))
print(head(data$Close))
[1] "Date" "Open" "High" "Low" "Close" "Adj.Close" [7] "Volume" [1] 1042.10 1061.90 1063.68 1051.75 1039.55 1036.58
The command printed out the first few observations in the column "Close". All variables and functions in R are "objects", and you are well-served to know the object type, because objects have properties and methods apply differently to objects of various types. Therefore, to check an object type, use the class function.
%%R
class(data)
[1] "data.frame"
To obtain descriptive statistics on the data variables in a data frame, the summary function is very handy.
%%R
#DESCRIPTIVE STATISTICS
summary(data)
         Date           Open              High              Low
 2004-08-19:   1   Min.   :  49.27   Min.   :  50.54   Min.   :  47.67
 2004-08-20:   1   1st Qu.: 232.20   1st Qu.: 234.77   1st Qu.: 229.58
 2004-08-23:   1   Median : 303.78   Median : 306.46   Median : 301.22
 2004-08-24:   1   Mean   : 438.93   Mean   : 442.87   Mean   : 434.65
 2004-08-25:   1   3rd Qu.: 584.82   3rd Qu.: 587.96   3rd Qu.: 579.83
 2004-08-26:   1   Max.   :1271.00   Max.   :1273.89   Max.   :1249.02
 (Other)   :3601
     Close           Adj.Close           Volume
 Min.   :  49.68   Min.   :  49.68   Min.   :    7900
 1st Qu.: 232.10   1st Qu.: 232.10   1st Qu.: 2050450
 Median : 303.94   Median : 303.94   Median : 4791200
 Mean   : 438.82   Mean   : 438.82   Mean   : 7518149
 3rd Qu.: 583.90   3rd Qu.: 583.90   3rd Qu.: 9869450
 Max.   :1268.33   Max.   :1268.33   Max.   :82768100
Let's take a given column of data and perform some transformations on it. We can also plot the data, with some arguments for look and feel, using the plot function.
%%R
#USING A PARTICULAR COLUMN
stkp = data$Adj.Close
dt = data$Date
print(c("Length of stock series = ",length(stkp)))
#First differences of the log stock prices give continuously compounded returns
rets = diff(log(stkp)) #diff() takes first differences
print(c("Length of return series = ",length(rets)))
print(head(rets))
plot(rets,type="l",col="blue")
[1] "Length of stock series = " "3607" [1] "Length of return series = " "3606" [1] 0.018821894 0.001674866 -0.011279201 -0.011667469 -0.002861184 [6] 0.030544219
If you want more descriptive statistics than the summary function provides, use an appropriate package. Here we are interested in the higher-order moments, and we use the moments package for this.
%%R
print(summary(rets))
      Min.    1st Qu.     Median       Mean    3rd Qu.       Max.
-0.1822511 -0.0098321 -0.0005632 -0.0008431  0.0075836  0.1234015
Compute the daily and annualized standard deviation of returns.
%%R
r_sd = sd(rets)
r_sd_annual = r_sd*sqrt(252)
print(c(r_sd,r_sd_annual))
#What if we take the stdev of annualized returns?
print(sd(rets*252))
#Huh? Multiplying returns by 252 scales the stdev by 252 (not sqrt(252)), so this over-annualizes.
print(sd(rets*252))/252
print(sd(rets*252))/sqrt(252)  #the value of this last expression, 252*sd(rets)/sqrt(252) = sqrt(252)*sd(rets), is echoed as the final output
[1] 0.01896794 0.30110676
[1] 4.779922
[1] 4.779922
[1] 4.779922
[1] 0.3011068
Notice the interesting use of the print function here. The variance is easy as well.
%%R
#Variance
r_var = var(rets)
r_var_annual = var(rets)*252
print(c(r_var,r_var_annual))
[1] 0.0003597829 0.0906652783
Skewness and kurtosis are key moments that arise in all return distributions. We need a different library in R for these. We use the moments library.
\begin{equation}
\mbox{Skewness} = \frac{E[(X-\mu)^3]}{\sigma^{3}}
\end{equation}
Skewness means one tail is fatter than the other (asymmetry). A fatter right (left) tail implies positive (negative) skewness.
\begin{equation}
\mbox{Kurtosis} = \frac{E[(X-\mu)^4]}{\sigma^{4}}
\end{equation}
Kurtosis means both tails are fatter than with a normal distribution.
%%R
#HIGHER-ORDER MOMENTS
library(moments)
hist(rets,50)
print(c("Skewness=",skewness(rets)))
print(c("Kurtosis=",kurtosis(rets)))
[1] "Skewness=" "-0.57512743445236" [1] "Kurtosis=" "12.6866762557976"
For the normal distribution, skewness is zero, and kurtosis is 3. Kurtosis minus three is denoted "excess kurtosis".
%%R
print(skewness(rnorm(1000000)))
print(kurtosis(rnorm(1000000)))
[1] 0.00078868
[1] 2.994231
What are the skewness and kurtosis of the stock index (S&P 500)?
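As a sketch of how one might answer this (assuming an internet connection so that quantmod can download the data), we can pull the S&P 500 index (ticker ^GSPC) and compute the same moments:
%%R
#Skewness and kurtosis of S&P 500 index returns (a sketch; requires an internet connection)
library(quantmod)
library(moments)
getSymbols("^GSPC")
sp = as.numeric(GSPC$GSPC.Adjusted)
sp_rets = diff(log(sp))
print(c("Skewness=",skewness(sp_rets)))
print(c("Kurtosis=",kurtosis(sp_rets)))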
Often the original data is in a space delimited file, not a comma separated one, in which case the read.table function is appropriate.
%%R
#READ IN MORE DATA USING SPACE DELIMITED FILE
data = read.table("DSTMAA_data/markowitzdata.txt",header=TRUE)
print(head(data))
print(c("Length of data series = ",length(data$X.DATE)))
    X.DATE         SUNW         MSFT          IBM         CSCO        AMZN
1 20010102 -0.087443948  0.000000000 -0.002205882 -0.129084975 -0.10843374
2 20010103  0.297297299  0.105187319  0.115696386  0.240150094  0.26576576
3 20010104 -0.060606062  0.010430248 -0.015191546  0.013615734 -0.11743772
4 20010105 -0.096774191  0.014193549  0.008718981 -0.125373140 -0.06048387
5 20010108  0.006696429 -0.003816794 -0.004654255 -0.002133106  0.02575107
6 20010109  0.044345897  0.058748405 -0.010688043  0.015818726  0.09623431
    mktrf     smb     hml      rf
1 -0.0345 -0.0037  0.0209 0.00026
2  0.0527  0.0097 -0.0493 0.00026
3 -0.0121  0.0083 -0.0015 0.00026
4 -0.0291  0.0027  0.0242 0.00026
5 -0.0037 -0.0053  0.0129 0.00026
6  0.0046  0.0044 -0.0026 0.00026
[1] "Length of data series = " "1507"
We compute covariance and correlation in the data frame.
%%R
#COMPUTE COVARIANCE AND CORRELATION
rets = as.data.frame(cbind(data$SUNW,data$MSFT,data$IBM,data$CSCO,data$AMZN))
names(rets) = c("SUNW","MSFT","IBM","CSCO","AMZN")
print(cov(rets))
print(cor(rets))
             SUNW         MSFT          IBM         CSCO         AMZN
SUNW 0.0014380649 0.0003241903 0.0003104236 0.0007174466 0.0004594254
MSFT 0.0003241903 0.0003646160 0.0001968077 0.0003301491 0.0002678712
IBM  0.0003104236 0.0001968077 0.0002991120 0.0002827622 0.0002056656
CSCO 0.0007174466 0.0003301491 0.0002827622 0.0009502685 0.0005041975
AMZN 0.0004594254 0.0002678712 0.0002056656 0.0005041975 0.0016479809
          SUNW      MSFT       IBM      CSCO      AMZN
SUNW 1.0000000 0.4477060 0.4733132 0.6137298 0.2984349
MSFT 0.4477060 1.0000000 0.5959466 0.5608788 0.3455669
IBM  0.4733132 0.5959466 1.0000000 0.5303729 0.2929333
CSCO 0.6137298 0.5608788 0.5303729 1.0000000 0.4029038
AMZN 0.2984349 0.3455669 0.2929333 0.4029038 1.0000000
We may redo the example above using a very useful package called magrittr which mimics pipes in the Unix operating system. In the code below, we pipe the returns data into the correlation function and then "pipe" the output of that into the print function. This is analogous to issuing the command print(cor(rets)).
%%R
#Repeat the same process using pipes
library(magrittr)
rets %>% cor %>% print
          SUNW      MSFT       IBM      CSCO      AMZN
SUNW 1.0000000 0.4477060 0.4733132 0.6137298 0.2984349
MSFT 0.4477060 1.0000000 0.5959466 0.5608788 0.3455669
IBM  0.4733132 0.5959466 1.0000000 0.5303729 0.2929333
CSCO 0.6137298 0.5608788 0.5303729 1.0000000 0.4029038
AMZN 0.2984349 0.3455669 0.2929333 0.4029038 1.0000000
Question: What do you get if you cross a mountain-climber with a mosquito? Answer: Can't be done. You'll be crossing a scalar with a vector.
We will use matrices extensively in modeling, and here we examine the basic commands needed to create and manipulate matrices in R. We create a $4 \times 3$ matrix with random numbers as follows:
%%R
x = matrix(rnorm(12),4,3)
print(x)
           [,1]       [,2]        [,3]
[1,]  0.0518732 -0.6717193 -0.92050937
[2,] -0.2082150  0.4202647 -0.47640145
[3,] -0.1571675 -0.2385928 -0.88844408
[4,] -0.5663517  0.1489406  0.09808534
When we transpose the matrix, notice that the dimensions are reversed.
%%R
print(t(x),3)
        [,1]   [,2]   [,3]    [,4]
[1,]  0.0519 -0.208 -0.157 -0.5664
[2,] -0.6717  0.420 -0.239  0.1489
[3,] -0.9205 -0.476 -0.888  0.0981
Of course, it is easy to multiply matrices as long as they conform. By "conform" we mean that when multiplying one matrix by another, the number of columns of the matrix on the left must be equal to the number of rows of the matrix on the right. The resultant matrix that holds the answer of this computation will have the number of rows of the matrix on the left, and the number of columns of the matrix on the right. See the examples below:
%%R
print(t(x) %*% x,3)
print(x %*% t(x),3)
       [,1]   [,2]  [,3]
[1,]  0.392 -0.169 0.136
[2,] -0.169  0.707 0.645
[3,]  0.136  0.645 1.873
       [,1]  [,2]    [,3]    [,4]
[1,]  1.301 0.145  0.9699 -0.2197
[2,]  0.145 0.447  0.3557  0.1338
[3,]  0.970 0.356  0.8710 -0.0337
[4,] -0.220 0.134 -0.0337  0.3526
Here is an example with element-wise operations (scaling by 2 and adding matrices) and a conforming product; the non-conforming product x %*% x is commented out because it would give an error.
%%R
#CREATE A RANDOM MATRIX
x = matrix(runif(12),4,3)
print(x)
print(x*2)
print(x+x)
print(t(x) %*% x) #THIS SHOULD BE 3x3
#print(x %*% x) #SHOULD GIVE AN ERROR
           [,1]      [,2]        [,3]
[1,] 0.11745159 0.7604409 0.616825059
[2,] 0.57022200 0.1588325 0.660023327
[3,] 0.05157777 0.3268232 0.314505190
[4,] 0.40146040 0.8728381 0.005924898
          [,1]      [,2]      [,3]
[1,] 0.2349032 1.5208819 1.2336501
[2,] 1.1404440 0.3176650 1.3200467
[3,] 0.1031555 0.6536465 0.6290104
[4,] 0.8029208 1.7456762 0.0118498
          [,1]      [,2]      [,3]
[1,] 0.2349032 1.5208819 1.2336501
[2,] 1.1404440 0.3176650 1.3200467
[3,] 0.1031555 0.6536465 0.6290104
[4,] 0.8029208 1.7456762 0.0118498
          [,1]      [,2]      [,3]
[1,] 0.5027787 0.5471515 0.4674070
[2,] 0.5471515 1.4721579 0.6818513
[3,] 0.4674070 0.6818513 0.9150526
Taking the inverse of the covariance matrix, we get:
%%R
cv_inv = solve(cv)
print(cv_inv,3)
      goog  aapl  csco   ibm
goog  5052 -1619 -1222 -1014
aapl -1619  4121  -961 -1053
csco -1222  -961  5207 -2439
ibm  -1014 -1053 -2439  8180
Check that the inverse is really so!
%%R
print(cv_inv %*% cv,3)
          goog      aapl      csco       ibm
goog  1.00e+00 -6.97e-17  2.02e-16  4.33e-17
aapl  2.07e-16  1.00e+00  1.97e-16  9.96e-17
csco -1.16e-16 -1.70e-16  1.00e+00 -1.48e-16
ibm   2.95e-17  6.03e-18  1.39e-16  1.00e+00
It is: multiplying the inverse matrix by the matrix itself yields the identity matrix (the off-diagonal terms are zero up to numerical precision).
A covariance matrix should be positive definite. Why? What happens if it is not? Checking for this property is easy.
%%R
library(corpcor)
is.positive.definite(cv)
[1] TRUE
What happens if you compute pairwise covariances from differing lengths of data for each pair?
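A sketch of one way to explore this question, using simulated data with different missing spans per series: with use="pairwise.complete.obs", each covariance is estimated from a different subsample, and the resulting matrix is not guaranteed to be positive definite, so it is worth checking.
%%R
#Pairwise covariances from unequal samples (simulated sketch); check positive definiteness
library(corpcor)
set.seed(42)
z = matrix(rnorm(300),100,3)
z[1:30,1] = NA                     #first series has a shorter history
z[81:100,2] = NA                   #second series has missing recent data
pcv = cov(z, use="pairwise.complete.obs")
print(pcv)
print(is.positive.definite(pcv))   #may or may not be TRUE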
Let's take the returns data we have and find the inverse.
%%R
cv = cov(rets)
print(round(cv,6))
cv_inv = solve(cv) #TAKE THE INVERSE
print(round(cv_inv %*% cv,2)) #CHECK THAT WE GET IDENTITY MATRIX
         SUNW     MSFT      IBM     CSCO     AMZN
SUNW 0.001438 0.000324 0.000310 0.000717 0.000459
MSFT 0.000324 0.000365 0.000197 0.000330 0.000268
IBM  0.000310 0.000197 0.000299 0.000283 0.000206
CSCO 0.000717 0.000330 0.000283 0.000950 0.000504
AMZN 0.000459 0.000268 0.000206 0.000504 0.001648
     SUNW MSFT IBM CSCO AMZN
SUNW    1    0   0    0    0
MSFT    0    1   0    0    0
IBM     0    0   1    0    0
CSCO    0    0   0    1    0
AMZN    0    0   0    0    1
%%R
#CHECK IF MATRIX IS POSITIVE DEFINITE (why do we check this?)
library(corpcor)
is.positive.definite(cv)
[1] TRUE
Finding roots of nonlinear equations is often required, and R has several packages for this purpose. Here we examine a few examples. Suppose we are given the function
$$
(x^2 + y^2 - 1)^3 - x^2 y^3 = 0
$$
and for various values of $y$ we wish to solve for the values of $x$. The function we use is called multiroot, from the rootSolve package, and its use is shown below.
%%R
#ROOT SOLVING IN R
library(rootSolve)
fn = function(x,y) {
result = (x^2+y^2-1)^3 - x^2*y^3
}
yy = 1
sol = multiroot(f=fn,start=1,maxiter=10000,rtol=0.000001,atol=0.000001,ctol=0.00001,y=yy)
print(c("solution=",sol$root))
check = fn(sol$root,yy)
print(check)
[1] "solution=" "1" [1] 0
Here we demonstrate the use of another function, uniroot.all (also from the rootSolve package), which finds all roots of a function within a given interval.
%%R
fn = function(x) {
result = 0.065*(x*(1-x))^0.5- 0.05 +0.05*x
}
sol = uniroot.all(f=fn,c(0,1))
print(sol)
check = fn(sol)
print(check)
[1] 1.0000000 0.3717627
[1] 0.000000e+00 1.041576e-06
In a multivariate linear regression, we have
\begin{equation}
Y = X \cdot \beta + e
\end{equation}
where $Y \in R^{t \times 1}$, $X \in R^{t \times n}$, and $\beta \in R^{n \times 1}$, and the regression solution is simply equal to $\beta = (X'X)^{-1}(X'Y) \in R^{n \times 1}$.
To get this result we minimize the sum of squared errors.
\begin{eqnarray*}
\min_{\beta} e'e &=& (Y - X \cdot \beta)' (Y-X \cdot \beta) \\
&=& Y'(Y-X \cdot \beta) - (X \beta)'\cdot (Y-X \cdot \beta) \\
&=& Y'Y - Y' X \beta - (\beta' X') Y + \beta' X'X \beta \\
&=& Y'Y - Y' X \beta - Y' X \beta + \beta' X'X \beta \\
&=& Y'Y - 2Y' X \beta + \beta' X'X \beta
\end{eqnarray*}
Note that this expression is a scalar.
Differentiating w.r.t. $\beta'$ gives the following f.o.c:
\begin{eqnarray*}
- 2 X'Y + 2 X'X \beta &=& {\bf 0} \\
& \Longrightarrow & \\
\beta &=& (X'X)^{-1} (X'Y)
\end{eqnarray*}
In the univariate case there is another useful expression, $\beta = \frac{Cov(X,Y)}{Var(X)}$; with multiple regressors this formula applies coefficient by coefficient only when the regressors are uncorrelated. You should compute it and check that it matches the regression slope.
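A quick check of the univariate version of this formula on simulated data (a sketch; the variable names are illustrative):
%%R
#Verify beta = Cov(x,y)/Var(x) for a single-regressor model
set.seed(1)
x1 = rnorm(500)
y1 = 2 + 3*x1 + rnorm(500)
print(cov(x1,y1)/var(x1))              #direct formula
print(lm(y1 ~ x1)$coefficients[2])     #regression slope -- should match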
Example: We run a stock return regression to exemplify the algebra above.
%%R
data = read.table("DSTMAA_data/markowitzdata.txt",header=TRUE) #THESE DATA ARE RETURNS
print(names(data)) #THIS IS A DATA FRAME (important construct in R)
head(data)
[1] "X.DATE" "SUNW" "MSFT" "IBM" "CSCO" "AMZN" "mktrf" "smb" [9] "hml" "rf" X.DATE SUNW MSFT IBM CSCO AMZN 1 20010102 -0.087443948 0.000000000 -0.002205882 -0.129084975 -0.10843374 2 20010103 0.297297299 0.105187319 0.115696386 0.240150094 0.26576576 3 20010104 -0.060606062 0.010430248 -0.015191546 0.013615734 -0.11743772 4 20010105 -0.096774191 0.014193549 0.008718981 -0.125373140 -0.06048387 5 20010108 0.006696429 -0.003816794 -0.004654255 -0.002133106 0.02575107 6 20010109 0.044345897 0.058748405 -0.010688043 0.015818726 0.09623431 mktrf smb hml rf 1 -0.0345 -0.0037 0.0209 0.00026 2 0.0527 0.0097 -0.0493 0.00026 3 -0.0121 0.0083 -0.0015 0.00026 4 -0.0291 0.0027 0.0242 0.00026 5 -0.0037 -0.0053 0.0129 0.00026 6 0.0046 0.0044 -0.0026 0.00026
%%R
#RUN A MULTIVARIATE REGRESSION ON STOCK DATA
Y = as.matrix(data$SUNW)
X = as.matrix(data[,3:6])
res = lm(Y~X)
summary(res)
Call:
lm(formula = Y ~ X)

Residuals:
      Min        1Q    Median        3Q       Max
-0.233758 -0.014921 -0.000711  0.014214  0.178859

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.0007256  0.0007512  -0.966  0.33422
XMSFT        0.1382312  0.0529045   2.613  0.00907 **
XIBM         0.3791500  0.0566232   6.696 3.02e-11 ***
XCSCO        0.5769097  0.0317799  18.153  < 2e-16 ***
XAMZN        0.0324899  0.0204802   1.586  0.11286
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.02914 on 1502 degrees of freedom
Multiple R-squared: 0.4112, Adjusted R-squared: 0.4096
F-statistic: 262.2 on 4 and 1502 DF, p-value: < 2.2e-16
Now we can cross-check the regression using the algebraic solution for the regression coefficients.
%%R
#CHECK THE REGRESSION
n = length(Y)
X = cbind(matrix(1,n,1),X)
b = solve(t(X) %*% X) %*% (t(X) %*% Y)
print(b)
              [,1]
     -0.0007256342
MSFT  0.1382312148
IBM   0.3791500328
CSCO  0.5769097262
AMZN  0.0324898716
Example: As a second example, we take cross-sectional data on basketball teams and try to explain their performance using team statistics. Here is a simple regression run on data from the 2005-06 NCAA basketball season (the March Madness playoffs). The data are stored in a space-delimited file called ncaa.txt. We use the number of games played as the metric of performance, since more successful teams play more playoff games, and then see which variables explain it best. We apply a simple linear regression using the R command lm, which stands for "linear model".
%%R
#REGRESSION ON NCAA BASKETBALL PLAYOFF DATA
ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE)
print(head(ncaa))
y = ncaa[3]
y = as.matrix(y)
x = ncaa[4:14]
x = as.matrix(x)
  No          NAME GMS  PTS  REB  AST   TO  A.T STL BLK   PF    FG    FT   X3P
1  1 NorthCarolina   6 84.2 41.5 17.8 12.8 1.39 6.7 3.8 16.7 0.514 0.664 0.417
2  2      Illinois   6 74.5 34.0 19.0 10.2 1.87 8.0 1.7 16.5 0.457 0.753 0.361
3  3    Louisville   5 77.4 35.4 13.6 11.0 1.24 5.4 4.2 16.6 0.479 0.702 0.376
4  4 MichiganState   5 80.8 37.8 13.0 12.6 1.03 8.4 2.4 19.8 0.445 0.783 0.329
5  5       Arizona   4 79.8 35.0 15.8 14.5 1.09 6.0 6.5 13.3 0.542 0.759 0.397
6  6      Kentucky   4 72.8 32.3 12.8 13.5 0.94 7.3 3.5 19.5 0.510 0.663 0.400
%%R
fm = lm(y~x)
res = summary(fm)
res
Call:
lm(formula = y ~ x)

Residuals:
    Min      1Q  Median      3Q     Max
-1.5074 -0.5527 -0.2454  0.6705  2.2344

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -10.194804   2.892203  -3.525 0.000893 ***
xPTS         -0.010442   0.025276  -0.413 0.681218
xREB          0.105048   0.036951   2.843 0.006375 **
xAST         -0.060798   0.091102  -0.667 0.507492
xTO          -0.034545   0.071393  -0.484 0.630513
xA.T          1.325402   1.110184   1.194 0.237951
xSTL          0.181015   0.068999   2.623 0.011397 *
xBLK          0.007185   0.075054   0.096 0.924106
xPF          -0.031705   0.044469  -0.713 0.479050
xFG          13.823190   3.981191   3.472 0.001048 **
xFT           2.694716   1.118595   2.409 0.019573 *
xX3P          2.526831   1.754038   1.441 0.155698
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.9619 on 52 degrees of freedom
Multiple R-squared: 0.5418, Adjusted R-squared: 0.4448
F-statistic: 5.589 on 11 and 52 DF, p-value: 7.889e-06
An alternative specification of regression using data frames is somewhat easier to implement.
%%R
#CREATING DATA FRAMES
ncaa_data_frame = data.frame(y=as.matrix(ncaa[3]),x=as.matrix(ncaa[4:14]))
fm = lm(y~x,data=ncaa_data_frame)
summary(fm)
Call: lm(formula = y ~ x, data = ncaa_data_frame) Residuals: Min 1Q Median 3Q Max -1.5074 -0.5527 -0.2454 0.6705 2.2344 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -10.194804 2.892203 -3.525 0.000893 *** xPTS -0.010442 0.025276 -0.413 0.681218 xREB 0.105048 0.036951 2.843 0.006375 ** xAST -0.060798 0.091102 -0.667 0.507492 xTO -0.034545 0.071393 -0.484 0.630513 xA.T 1.325402 1.110184 1.194 0.237951 xSTL 0.181015 0.068999 2.623 0.011397 * xBLK 0.007185 0.075054 0.096 0.924106 xPF -0.031705 0.044469 -0.713 0.479050 xFG 13.823190 3.981191 3.472 0.001048 ** xFT 2.694716 1.118595 2.409 0.019573 * xX3P 2.526831 1.754038 1.441 0.155698 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9619 on 52 degrees of freedom Multiple R-squared: 0.5418, Adjusted R-squared: 0.4448 F-statistic: 5.589 on 11 and 52 DF, p-value: 7.889e-06
In a regression, we estimate the coefficients on each explanatory variable (these are the Estimates in the regression above). In addition, we also estimate the standard deviation of the coefficient value, which implies the range around the mean value. This is called the standard error of the coefficient. We are interested in making sure that the coefficient value $b$ is not zero in the statistical sense, usually taken to mean that it is at least 2 standard deviations away from zero. That is, we want the coefficient value $b$ divided by its standard deviation $\sigma_b$ to be at least 2. This is called the t-statistic or t-value, shown in the regression above.
The t-statistic is the number of standard errors the coefficient is away from zero. It implies a p-value, the probability of seeing a coefficient this large in absolute value if the true coefficient were zero. So, we want the p-values to be small. We see in the above regression [Pr(>|t|)] that the coefficients that are statistically significant have small p-values and large absolute values of t-statistics. Intuitively, when the t-statistic is large (negative or positive), the coefficient is far away from zero, and using the t distribution (close to the standard normal in large samples) we can calculate the probability left in the tails. So if the t-statistic is (say) 2.843, the reported p-value of 0.006375 is the probability remaining in the two tails beyond $\pm 2.843$.
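We can reproduce this p-value directly from the t distribution (a quick check, using the 52 degrees of freedom reported in the NCAA regression above):
%%R
#Two-tailed p-value implied by a t-statistic of 2.843 with 52 degrees of freedom
print(2*pt(-abs(2.843), df=52))    #close to the 0.006375 reported by lm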
For a more detailed discussion of p-values, see the excellent article in Scientific American (2019).
The linear regression is fit by minimizing the sum of squared errors, and the same idea may be applied to a nonlinear regression as well. So we might have:
$$
y_i = f(x_{i1},x_{i2},...,x_{ip}) + \epsilon_i, \quad i=1,2,...,n
$$
which describes a data set with $n$ rows (observations) and $p$ columns (explanatory variables). Note that the error term (residual) is $\epsilon_i$.
The regression will have $(p+1)$ coefficients, i.e., ${\bf b} = \{b_0,b_1,b_2,...,b_p\}$, and ${\bf x}_i = \{x_{i1},x_{i2},...,x_{ip}\}$. The model is fit by minimizing the sum of squared residuals, i.e.,
$$
\min_{\bf b} \sum_{i=1}^n \epsilon_i^2
$$
We define the following sums of squares: SST $=\sum_{i=1}^n (y_i - \bar{y})^2$ (total), SSE $=\sum_{i=1}^n \epsilon_i^2$ (error), and SSM $=$ SST $-$ SSE (model), with degrees of freedom DFM (the number of explanatory variables) for the model and DFE (the number of observations minus the number of estimated coefficients) for the errors; dividing each sum of squares by its degrees of freedom gives the mean squares MSM, MSE, and MST.
The $R$-squared of the regression is
$$
R^2 = \left( 1 - \frac{SSE}{SST} \right) \quad \in (0,1)
$$
The $F$-statistic in the regression is what tells us if the RHS variables comprise a model that explains the LHS variable sufficiently. Do the RHS variables offer more of an explanation than simply assuming that the mean value of $y$ is the best prediction? The null hypothesis we care about is that all the slope coefficients are zero, i.e., $b_1 = b_2 = \cdots = b_p = 0$.
To test this, the $F$-statistic is computed as the following ratio:
$$
F = \frac{\mbox{Explained variance}}{\mbox{Unexplained variance}} = \frac{SSM/DFM}{SSE/DFE} = \frac{MSM}{MSE}
$$
where $MSM$ is the mean squared model error, and $MSE$ is the mean squared error.
Now let's relate this to $R^2$. First, we find an approximation for the $R^2$.
$$
R^2 = 1 - \frac{SSE}{SST} = 1 - \frac{SSE/n}{SST/n} \approx 1 - \frac{MSE}{MST} = \frac{MST-MSE}{MST} \approx \frac{MSM}{MST}
$$
The $R^2$ of a regression that has no RHS variables is zero, and of course $MSM=0$. In such a regression $MST = MSE$. So the expression above becomes:
$$
R^2_{p=0} = \frac{MSM}{MST} = 0
$$
We can also see with some manipulation that $R^2$ is related to $F$ (approximately, assuming large $n$).
$$
R^2 + \frac{1}{F+1}=1 \quad \mbox{or} \quad 1+F = \frac{1}{1-R^2}
$$
where here $F$ is taken as the simple ratio $SSM/SSE$, i.e., ignoring the degrees-of-freedom adjustments; with the adjustments, the exact relation is $F = \frac{R^2}{1-R^2}\cdot\frac{DFE}{DFM}$. Check to see that when $R^2=0$, then $F=0$.
We can further check the formulae with a numerical example, by creating some sample data.
%%R
x = matrix(runif(300),100,3)
y = 5 + 4*x[,1] + 3*x[,2] + 2*x[,3] + rnorm(100)
y = as.matrix(y)
res = lm(y~x)
print(summary(res))
Call: lm(formula = y ~ x) Residuals: Min 1Q Median 3Q Max -2.3499 -0.7816 0.1548 0.8725 2.4822 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 5.4788 0.3996 13.711 < 2e-16 *** x1 4.2712 0.3748 11.396 < 2e-16 *** x2 2.3911 0.4029 5.935 4.66e-08 *** x3 1.5251 0.3949 3.862 0.000204 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 1.093 on 96 degrees of freedom Multiple R-squared: 0.6193, Adjusted R-squared: 0.6074 F-statistic: 52.05 on 3 and 96 DF, p-value: < 2.2e-16
%%R
e = res$residuals
SSE = sum(e^2)
SST = sum((y-mean(y))^2)
SSM = SST - SSE
print(c(SSE,SSM,SST))
R2 = 1 - SSE/SST
print(R2)
n = dim(x)[1]
p = dim(x)[2]+1
MSE = SSE/(n-p)
MSM = SSM/(p-1)
MST = SST/(n-1)
print(c(n,p,MSE,MSM,MST))
Fstat = MSM/MSE
print(Fstat)
[1] 114.7558 186.6638 301.4196
[1] 0.6192822
[1] 100.000000   4.000000   1.195373  62.221270   3.044643
[1] 52.05176
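A quick numerical check of the relation between $R^2$ and $F$, using the quantities just computed (R2, SSE, SSM, n, p, and Fstat from the cell above):
%%R
#Exact relation with degrees-of-freedom adjustment: F = [R^2/(1-R^2)] * (DFE/DFM)
print((R2/(1-R2))*(n-p)/(p-1))     #should equal Fstat printed above
#The simpler identity R^2 + 1/(F+1) = 1 holds when F is the raw ratio SSM/SSE
F_raw = SSM/SSE
print(R2 + 1/(F_raw + 1))          #should equal 1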
We can also compare two regressions, say one with five RHS variables against one with only three of those five, to see whether the additional two variables have any extra value. The ratio of the two $MSM$ values from the first and second regressions is also an $F$-statistic that may be tested to see whether it is large enough; a sketch using R's anova function follows below.
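R's anova function implements the standard version of this nested-model $F$-test (based on the change in the residual sum of squares). A sketch, using the simulated data above, comparing the full three-variable fit res with a model that keeps only the first regressor:
%%R
#Nested-model F-test: does adding x[,2] and x[,3] improve on x[,1] alone?
res_small = lm(y ~ x[,1])
print(anova(res_small, res))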
Note that if the residuals $\epsilon$ are assumed to be normally distributed, then squared residuals are distributed as per the chi-square ($\chi^2$) distribution. Further, the sum of residuals is distributed normal and the sum of squared residuals is distributed $\chi^2$. And finally, the ratio of two $\chi^2$ variables is $F$-distributed, which is why we call it the $F$-statistic, it is the ratio of two sums of squared errors.
Underlying the analyses of the regression model above is an assumption that the error term $\epsilon$ is independent of the $x$ variables. This assumption ensures that the regression coefficient $\beta$ is unbiased. To see this in the simplest way, consider the univariate regression
$$
y = \beta x + \epsilon
$$
We have seen earlier that the coefficient $\beta$ is given by
$$
\frac{Cov(x,y)}{Var(x)} = \frac{Cov(x,\beta x + \epsilon)}{Cov(x,x)} = \beta + \frac{Cov(x,\epsilon)}{Cov(x,x)}
$$
This little piece of statistical math shows that the estimate is biased if there is correlation between $x$ and $\epsilon$.
One way in which the coefficient is biased is if there is a missing variable in the regression that has an effect on both $x$ and $y$, which then injects correlation between $x$ and $\epsilon$. If there is a missing variable that impacts $y$ and not $x$, then it is just fine, after all, every regression has missing variables, else there would be no residual (error) term. Hopefully, there is some idea of how the missing variable impacts both $x$ and $y$ (direction, and if possible sign). Then at least one might have a sense of the direction of bias in the regression coefficient.
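A small simulation makes the direction of the bias concrete (a sketch; the variable names here are illustrative). The omitted variable w drives both the regressor and the outcome, so leaving it out biases the coefficient on the regressor:
%%R
#Omitted-variable bias: w affects both xs and ys, so dropping w biases the slope on xs
set.seed(2)
w  = rnorm(1000)
xs = 0.8*w + rnorm(1000)
ys = 1.0*xs + 2.0*w + rnorm(1000)       #true coefficient on xs is 1
print(lm(ys ~ xs)$coefficients)         #slope biased upward because w is omitted
print(lm(ys ~ xs + w)$coefficients)     #close to the true values (1 and 2)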
Simple linear regression assumes that the standard error of the residuals is the same for all observations. Many regressions suffer from a failure of this condition. The word for this is "heteroskedastic" errors: "hetero" means different, and "skedastic" refers to the scatter (dispersion) of the errors, so heteroskedastic errors have variances that differ across observations.
We can first test for the presence of heteroskedasticity using a standard Breusch-Pagan test available in R. This resides in the lmtest package which is loaded in before running the test.
%%R
ncaa = read.table("DSTMAA_data/ncaa.txt",header=TRUE)
y = as.matrix(ncaa[3])
x = as.matrix(ncaa[4:14])
result = lm(y~x)
library(lmtest)
bptest(result)
	studentized Breusch-Pagan test

data:  result
BP = 15.538, df = 11, p-value = 0.1592
We can see that there is very little evidence of heteroskedasticity in the standard errors, as the $p$-value is not small. However, let's go ahead and correct the t-statistics for heteroskedasticity anyway, using the hccm function; hccm stands for heteroskedasticity-corrected covariance matrix.
%%R
wuns = matrix(1,64,1)
z = cbind(wuns,x)
b = solve(t(z) %*% z) %*% (t(z) %*% y)
result = lm(y~x)
library(car)
vb = hccm(result)
stdb = sqrt(diag(vb))
tstats = b/stdb
print(tstats)
GMS -2.68006069
PTS -0.38212818
REB  2.38342637
AST -0.40848721
TO  -0.28709450
A.T  0.65632053
STL  2.13627108
BLK  0.09548606
PF  -0.68036944
FG   3.52193532
FT   2.35677255
X3P  1.23897636
Compare these to the t-statistics in the original model.
%%R
summary(result)
Call: lm(formula = y ~ x) Residuals: Min 1Q Median 3Q Max -1.5074 -0.5527 -0.2454 0.6705 2.2344 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -10.194804 2.892203 -3.525 0.000893 *** xPTS -0.010442 0.025276 -0.413 0.681218 xREB 0.105048 0.036951 2.843 0.006375 ** xAST -0.060798 0.091102 -0.667 0.507492 xTO -0.034545 0.071393 -0.484 0.630513 xA.T 1.325402 1.110184 1.194 0.237951 xSTL 0.181015 0.068999 2.623 0.011397 * xBLK 0.007185 0.075054 0.096 0.924106 xPF -0.031705 0.044469 -0.713 0.479050 xFG 13.823190 3.981191 3.472 0.001048 ** xFT 2.694716 1.118595 2.409 0.019573 * xX3P 2.526831 1.754038 1.441 0.155698 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9619 on 52 degrees of freedom Multiple R-squared: 0.5418, Adjusted R-squared: 0.4448 F-statistic: 5.589 on 11 and 52 DF, p-value: 7.889e-06
It is apparent that when corrected for heteroskedasticity, the t-statistics in the regression are lower, and also render some of the previously significant coefficients insignificant.
When data is autocorrelated, i.e., has dependence in time, not accounting for this issue results in unnecessarily high statistical significance (in terms of inflated t-statistics). Intuitively, this is because observations are treated as independent when actually they are correlated in time, and therefore, the true number of observations is effectively less.
Consider a finance application. In efficient markets, the correlation of stock returns from one period to the next should be close to zero. We use the returns on Google stock as an example. First, read in the data.
%%R
data = read.csv("DSTMAA_data/goog.csv",header=TRUE)
head(data)
        Date     Open     High      Low    Close Adj.Close   Volume
1 2004-08-19 49.67690 51.69378 47.66995 49.84580  49.84580 44994500
2 2004-08-20 50.17863 54.18756 49.92529 53.80505  53.80505 23005800
3 2004-08-23 55.01717 56.37334 54.17266 54.34653  54.34653 18393200
4 2004-08-24 55.26058 55.43942 51.45036 52.09616  52.09616 15361800
5 2004-08-25 52.14087 53.65105 51.60436 52.65751  52.65751  9257400
6 2004-08-26 52.13591 53.62621 51.99184 53.60634  53.60634  7148200
Next, create the returns time series.
%%R
n = length(data$Close)
stkp = rev(data$Adj.Close)
rets = as.matrix(log(stkp[2:n]/stkp[1:(n-1)]))
n = length(rets)
plot(rets,type="l",col="blue")
print(n)
[1] 3606
Examine the autocorrelation. This is one lag, also known as first-order autocorrelation.
%%R
cor(rets[1:(n-1)],rets[2:n])
[1] 0.009747685
Run the Durbin-Watson test for autocorrelation. Here we test for up to 10 lags.
%%R
library(car)
res = lm(rets[2:n]~rets[1:(n-1)])
durbinWatsonTest(res,max.lag=10)
 lag Autocorrelation D-W Statistic p-value
   1    5.491099e-05      1.995490   0.906
   2   -9.073958e-03      2.013593   0.636
   3   -1.063362e-03      1.996041   0.888
   4    1.528849e-02      1.963262   0.254
   5   -3.708613e-03      2.000267   0.970
   6   -3.852869e-02      2.069481   0.030
   7   -2.878048e-04      1.989592   0.864
   8    2.693173e-02      1.935061   0.084
   9   -2.954766e-02      2.047633   0.144
  10    1.458095e-02      1.959268   0.262
 Alternative hypothesis: rho[lag] != 0
There is no evidence of auto-correlation when the DW statistic is close to 2. If the DW-statistic is greater than 2 it indicates negative autocorrelation, and if it is less than 2, it indicates positive autocorrelation.
If there is autocorrelation we can correct for it as follows. Let's take a different data set.
%%R
md = read.table("DSTMAA_data/markowitzdata.txt",header=TRUE)
names(md)
[1] "X.DATE" "SUNW" "MSFT" "IBM" "CSCO" "AMZN" "mktrf" "smb" [9] "hml" "rf"
Test for autocorrelation.
%%R
y = as.matrix(md[2])
x = as.matrix(md[7:9])
rf = as.matrix(md[10])
y = y-rf
library(car)
results = lm(y ~ x)
print(summary(results))
Call: lm(formula = y ~ x) Residuals: Min 1Q Median 3Q Max -0.213676 -0.014356 -0.000733 0.014462 0.191089 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -0.000197 0.000785 -0.251 0.8019 xmktrf 1.657968 0.085816 19.320 <2e-16 *** xsmb 0.299735 0.146973 2.039 0.0416 * xhml -1.544633 0.176049 -8.774 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.03028 on 1503 degrees of freedom Multiple R-squared: 0.3636, Adjusted R-squared: 0.3623 F-statistic: 286.3 on 3 and 1503 DF, p-value: < 2.2e-16
%%R
durbinWatsonTest(results,max.lag=6)
 lag Autocorrelation D-W Statistic p-value
   1     -0.07231926      2.144549   0.000
   2     -0.04595240      2.079356   0.130
   3      0.02958136      1.926791   0.182
   4     -0.01608143      2.017980   0.670
   5     -0.02360625      2.032176   0.484
   6     -0.01874952      2.021745   0.616
 Alternative hypothesis: rho[lag] != 0
Now make the correction to the t-statistics. We use the procedure formulated by Newey and West (1987). This correction is implemented by the NeweyWest function in the sandwich package.
%%R
#CORRECT FOR AUTOCORRELATION
library(sandwich)
b = results$coefficients
print(b)
vb = NeweyWest(results,lag=1)
stdb = sqrt(diag(vb))
tstats = b/stdb
print(tstats)
  (Intercept)        xmktrf          xsmb          xhml
-0.0001970164  1.6579682191  0.2997353765 -1.5446330690
 (Intercept)      xmktrf        xsmb        xhml
  -0.2633665  15.5779184   1.8300340  -6.1036120
Compare these to the stats we had earlier. Notice how they have come down after correction for AR. Note that there are several steps needed to correct for autocorrelation, and it might have been nice to roll one's own function for this. (I leave this as an exercise for you.)
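One possible way to roll this into a function (a minimal sketch, assuming the sandwich package and the fitted results object from above):
%%R
#Wrapper: divide coefficients by Newey-West standard errors and return corrected t-statistics
library(sandwich)
nw_tstats = function(fit, lag=1) {
    b = fit$coefficients
    stdb = sqrt(diag(NeweyWest(fit, lag=lag)))
    b/stdb
}
print(nw_tstats(results, lag=1))   #should match the corrected t-statistics above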
For fun, let's look at the autocorrelation in stock market indexes, shown in the Figure below. The following graphic is taken from the book "A Non-Random Walk Down Wall Street" by Lo and MacKinlay (1999). Is the autocorrelation higher for equally-weighted or value-weighted indexes? Why?
nb_setup.images_hconcat(["DSTMAA_images/ARequityindexes.png"], width=500)
Assume that the stock returns $R(t)$ mentioned above have a normal distribution with mean $\mu$ and variance $\sigma^2$ per year. MLE estimation requires finding the parameters $\{\mu,\sigma\}$ that maximize the likelihood of seeing the empirical sequence of returns $R(t)$. A normal probability function is required, and we have one above for $R(t)$, which is assumed to be i.i.d. (independent and identically distributed).
First, a quick recap of the normal distribution. If $x \sim N(\mu,\sigma^2)$, then
\begin{equation}
\mbox{density function:} \quad f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2} \right]
\end{equation}
\begin{equation}
\mbox{distribution function:} \quad F(x) = \int_{-\infty}^x f(u) du
\end{equation}
The standard normal distribution is $x \sim N(0,1)$; writing $N(\cdot)$ for its distribution function, symmetry gives $N(x) = 1 - N(-x)$, and in particular $N(0) = \frac{1}{2}$.
Note that when returns are i.i.d., the mean return and the variance of returns scale with time, and therefore the standard deviation of returns scales with the square root of time. If the time interval between return observations is $h$ years, then the probability density of $R(t)$ is normal with the following form:
\begin{equation}
f[R(t)] = \frac{1}{\sqrt{2 \pi \sigma^2 h}} \cdot \exp\left[ -\frac{1}{2} \cdot \frac{(R(t)-\alpha)^2}{\sigma^2 h} \right]
\end{equation}
where $\alpha = \left(\mu-\frac{1}{2}\sigma^2 \right) h$. In our case, we have daily data and $h=1/252$.
For periods $t=1,2,\ldots,T$ the likelihood of the entire series is
\begin{equation}
\prod_{t=1}^T f[R(t)]
\end{equation}
It is easier (computationally) to maximize
\begin{equation}
\max_{\mu,\sigma} \; {\cal L} \equiv \sum_{t=1}^T \ln f[R(t)]
\end{equation}
known as the log-likelihood. This is easily done in R. First we create the log-likelihood function, so you can see how functions are defined in R.
Note that
\begin{equation}
\ln \; f[R(t)] = -\ln \sqrt{2 \pi \sigma^2 h} - \frac{[R(t)-\alpha]^2}{2 \sigma^2 h}
\end{equation}
We have used the variable "sigsq" in function "LL" for $\sigma^2 h$.
%%R
#LOG-LIKELIHOOD FUNCTION
LL = function(params,rets) {
alpha = params[1]; sigsq = params[2]
logf = -log(sqrt(2*pi*sigsq)) - (rets-alpha)^2/(2*sigsq)
LL = -sum(logf)   #return the negative log-likelihood, since nlm (used below) minimizes
}
We now read in the data and maximize the log-likelihood to find the required parameters of the return distribution.
%%R
#READ DATA
data = read.csv("DSTMAA_data/goog.csv",header=TRUE)
stkp = data$Adj.Close
#First differences of the log stock prices give continuously compounded returns
rets = diff(log(stkp)) #diff() takes first differences
print(c("mean return = ",mean(rets),mean(rets)*252))
print(c("stdev returns = ",sd(rets),sd(rets)*sqrt(252)))
#Create starting guess for parameters
params = c(0.001,0.001)
res = nlm(LL,params,rets)
print(res)
[1] "mean return = " "0.000843055708193043" "0.212450038464647" [1] "stdev returns = " "0.0189679427010029" "0.301106755616259" $minimum [1] -9181.616 $estimate [1] 0.0008439354 0.0003597036 $gradient [1] 8.818863e+00 -2.840669e+05 $code [1] 3 $iterations [1] 9
Let's annualize the parameters and see what they are, comparing them to the raw mean and variance of returns.
%%R
h = 1/252
alpha = res$estimate[1]
sigsq = res$estimate[2]
print(c("alpha=",alpha))
print(c("sigsq=",sigsq))
sigma = sqrt(sigsq/h)
mu = alpha/h + 0.5*sigma^2
print(c("mu=",mu))
print(c("sigma=",sigma))
print(mean(rets*252))
print(sd(rets)*sqrt(252))
[1] "alpha=" "0.000843935402288685" [1] "sigsq=" "0.000359703605306595" [1] "mu=" "0.25799437564538" [1] "sigma=" "0.30107359322475" [1] 0.21245 [1] 0.3011068
As we can see, the parameters under the normal distribution are quite close to the raw moments.
We have seen how to fit a linear regression model in R. In that model we placed no restrictions on the dependent variable. However, when the LHS variable in a regression is categorical and binary, i.e., takes the value 1 or 0, then a logit regression is more apt. This regression fits a model that will always return a fitted value of the dependent variable that lies between $(0,1)$. This class of specifications covers what are known as limited dependent variables models. In this introduction to R, we will simply run a few examples of these models, leaving a more detailed analysis for later in this book.
Example: For the NCAA data, there are 64 observations (teams) ordered from best to worst. We take the top 32 teams and make their dependent variable 1 (above-median teams), and that of the bottom 32 teams zero (below-median). Our goal is to fit a regression model that returns a team's predicted probability of finishing in the top half.
First, we create the dependent variable.
%%R
y = c(rep(1,32),rep(0,32))
print(y)
x = as.matrix(ncaa[,4:14])
y = as.matrix(y)
 [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0
[39] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
We use the function glm for this task. Running the model is pretty easy as follows.
%%R
h = glm(y~x, family=binomial(link="logit"))
print(logLik(h))
print(summary(h))
'log Lik.' -21.44779 (df=12) Call: glm(formula = y ~ x, family = binomial(link = "logit")) Deviance Residuals: Min 1Q Median 3Q Max -1.80174 -0.40502 -0.00238 0.37584 2.31767 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -45.83315 14.97564 -3.061 0.00221 ** xPTS -0.06127 0.09549 -0.642 0.52108 xREB 0.49037 0.18089 2.711 0.00671 ** xAST 0.16422 0.26804 0.613 0.54010 xTO -0.38405 0.23434 -1.639 0.10124 xA.T 1.56351 3.17091 0.493 0.62196 xSTL 0.78360 0.32605 2.403 0.01625 * xBLK 0.07867 0.23482 0.335 0.73761 xPF 0.02602 0.13644 0.191 0.84874 xFG 46.21374 17.33685 2.666 0.00768 ** xFT 10.72992 4.47729 2.397 0.01655 * xX3P 5.41985 5.77966 0.938 0.34838 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 88.723 on 63 degrees of freedom Residual deviance: 42.896 on 52 degrees of freedom AIC: 66.896 Number of Fisher Scoring iterations: 6
Thus, we see that the best variables that separate upper-half teams from lower-half teams are the number of rebounds and the field goal percentage. To a lesser extent, free throw percentage and steals also provide some explanatory power. The logit regression is specified as follows:
\begin{eqnarray*} z &=& \frac{e^y}{1+e^y}\\ y &=& b_0 + b_1 x_1 + b_2 x_2 + \ldots + b_k x_k \end{eqnarray*}The original data $z = \{0,1\}$. The range of values of $y$ is $(-\infty,+\infty)$. And as required, the fitted $z \in (0,1)$. The variables $x$ are the RHS variables. The fitting is done using MLE.
Suppose we ran this with a simple linear regression.
%%R
h = lm(y~x)
summary(h)
Call: lm(formula = y ~ x) Residuals: Min 1Q Median 3Q Max -0.65982 -0.26830 0.03183 0.24712 0.83049 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -4.114185 1.174308 -3.503 0.000953 *** xPTS -0.005569 0.010263 -0.543 0.589709 xREB 0.046922 0.015003 3.128 0.002886 ** xAST 0.015391 0.036990 0.416 0.679055 xTO -0.046479 0.028988 -1.603 0.114905 xA.T 0.103216 0.450763 0.229 0.819782 xSTL 0.063309 0.028015 2.260 0.028050 * xBLK 0.023088 0.030474 0.758 0.452082 xPF 0.011492 0.018056 0.636 0.527253 xFG 4.842722 1.616465 2.996 0.004186 ** xFT 1.162177 0.454178 2.559 0.013452 * xX3P 0.476283 0.712184 0.669 0.506604 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.3905 on 52 degrees of freedom Multiple R-squared: 0.5043, Adjusted R-squared: 0.3995 F-statistic: 4.81 on 11 and 52 DF, p-value: 4.514e-05
We get the same variables again showing up as significant.
We can redo the same regression in the logit using a probit instead. A probit is identical in spirit to the logit regression, except that the function that is used is
\begin{eqnarray*} z &=& \Phi(y)\\ y &=& b_0 + b_1 x_1 + b_2 x_2 + \ldots + b_k x_k \end{eqnarray*}where $\Phi(\cdot)$ is the cumulative normal probability function. It is implemented in R as follows.
%%R
h = glm(y~x, family=binomial(link="probit"))
print(logLik(h))
print(summary(h))
'log Lik.' -21.27924 (df=12) Call: glm(formula = y ~ x, family = binomial(link = "probit")) Deviance Residuals: Min 1Q Median 3Q Max -1.76353 -0.41212 -0.00031 0.34996 2.24568 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -26.28219 8.09608 -3.246 0.00117 ** xPTS -0.03463 0.05385 -0.643 0.52020 xREB 0.28493 0.09939 2.867 0.00415 ** xAST 0.10894 0.15735 0.692 0.48874 xTO -0.23742 0.13642 -1.740 0.08180 . xA.T 0.71485 1.86701 0.383 0.70181 xSTL 0.45963 0.18414 2.496 0.01256 * xBLK 0.03029 0.13631 0.222 0.82415 xPF 0.01041 0.07907 0.132 0.89529 xFG 26.58461 9.38711 2.832 0.00463 ** xFT 6.28278 2.51452 2.499 0.01247 * xX3P 3.15824 3.37841 0.935 0.34988 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 88.723 on 63 degrees of freedom Residual deviance: 42.558 on 52 degrees of freedom AIC: 66.558 Number of Fisher Scoring iterations: 8
The results confirm those obtained from the linear regression and logit regression.
GARCH stands for "Generalized Auto-Regressive Conditional Heteroskedasticity". Engle (1982) invented ARCH (for which he got the Nobel prize) and this was extended by Bollerslev (1986) to GARCH.
Engle, Robert F. 1982. “Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation.” Econometrica 50 (4). [Wiley, Econometric Society]: 987–1007. http://www.jstor.org/stable/1912773.
Bollerslev, Tim. 1986. “Generalized autoregressive conditional heteroskedasticity.” Journal of Econometrics 31 (3): 307–27. https://ideas.repec.org/a/eee/econom/v31y1986i3p307-327.html.
ARCH models are based on the idea that volatility tends to cluster, i.e., volatility for period $t$, is auto-correlated with volatility from period $(t-1)$, or more preceding periods. If we had a time series of stock returns following a random walk, we might model it as follows
\begin{equation}
r_t = \mu + e_t, \quad e_t \sim N(0,\sigma_t^2)
\end{equation}
Returns have constant mean $\mu$ and time-varying variance $\sigma_t^2$. If the variance were stationary then $\sigma_t^2$ would be constant. But under GARCH it is auto-correlated with previous variances. Hence, we have
\begin{equation}
\sigma_{t}^2 = \beta_0 + \sum_{j=1}^p \beta_{1j} \sigma_{t-j}^2 + \sum_{k=1}^q \beta_{2k} e_{t-k}^2
\end{equation}
So current variance ($\sigma_t^2$) depends on past squared shocks ($e_{t-k}^2$) and past variances ($\sigma_{t-j}^2$). The number of lags of past variance is $p$, and that of lagged shocks is $q$. The model is thus known as a GARCH$(p,q)$ model. For the model to be stationary, the sum of all the $\beta$ terms should be less than 1.
In GARCH, stock returns are conditionally normal, and independent, but not identically distributed because the variance changes over time. Since at every time $t$ we know the conditional distribution of returns, because $\sigma_t$ is based on past $\sigma_{t-j}$ and past shocks $e_{t-k}$, we can estimate the parameters $\{\beta_0,\beta_{1j}, \beta_{2k}\}, \forall j,k$, of the model using MLE. The good news is that this comes canned in R, so all we need to do is use the tseries package.
%%R
library(tseries)
res = garch(rets,order=c(1,1))
summary(res)
***** ESTIMATION WITH ANALYTICAL GRADIENT ***** I INITIAL X(I) D(I) 1 3.238046e-04 1.000e+00 2 5.000000e-02 1.000e+00 3 5.000000e-02 1.000e+00 IT NF F RELDF PRELDF RELDX STPPAR D*STEP NPRELDF 0 1 -1.256e+04 1 7 -1.256e+04 1.53e-04 2.28e-04 1.1e-04 2.2e+10 1.1e-05 2.54e+06 2 8 -1.256e+04 1.75e-05 1.93e-05 9.8e-05 2.0e+00 1.1e-05 2.23e+01 3 16 -1.260e+04 3.10e-03 5.12e-03 4.6e-01 2.0e+00 8.6e-02 2.22e+01 4 19 -1.268e+04 6.96e-03 4.74e-03 7.5e-01 2.0e+00 3.5e-01 1.17e+00 5 21 -1.271e+04 1.96e-03 1.87e-03 7.9e-02 2.0e+00 6.9e-02 4.77e+02 6 23 -1.276e+04 4.13e-03 4.09e-03 1.3e-01 2.0e+00 1.4e-01 4.53e+04 7 25 -1.277e+04 9.35e-04 1.08e-03 2.2e-02 2.0e+00 2.8e-02 1.19e+00 8 27 -1.280e+04 1.83e-03 3.28e-03 8.0e-02 2.0e+00 1.1e-01 1.12e+00 9 36 -1.280e+04 3.90e-04 7.87e-04 3.1e-06 4.9e+00 4.6e-06 1.12e-02 10 37 -1.280e+04 2.19e-06 1.88e-06 3.0e-06 2.0e+00 4.6e-06 8.03e-04 11 38 -1.280e+04 1.03e-07 1.06e-07 3.0e-06 2.0e+00 4.6e-06 8.54e-04 12 45 -1.280e+04 1.05e-04 1.55e-04 1.2e-02 1.5e+00 1.9e-02 8.52e-04 13 48 -1.282e+04 1.33e-03 1.14e-03 3.5e-02 0.0e+00 7.5e-02 1.91e-03 14 50 -1.283e+04 9.18e-04 9.78e-04 2.6e-02 1.6e+00 5.4e-02 6.43e-03 15 51 -1.284e+04 4.95e-04 6.20e-04 2.4e-02 8.8e-01 5.4e-02 1.18e-03 16 60 -1.284e+04 2.64e-05 6.14e-05 4.0e-07 4.4e+00 7.0e-07 2.18e-04 17 61 -1.284e+04 1.78e-06 1.69e-06 3.1e-07 2.0e+00 7.0e-07 6.27e-05 18 62 -1.284e+04 6.64e-09 7.01e-09 3.1e-07 2.0e+00 7.0e-07 6.40e-05 19 70 -1.284e+04 3.10e-05 3.74e-05 5.1e-03 8.0e-01 1.1e-02 6.39e-05 20 72 -1.284e+04 4.16e-06 6.33e-06 1.6e-03 7.7e-01 3.5e-03 1.02e-05 21 73 -1.284e+04 1.20e-06 2.33e-06 2.2e-03 0.0e+00 4.6e-03 2.33e-06 22 74 -1.284e+04 3.80e-07 3.87e-07 7.1e-04 0.0e+00 1.5e-03 3.87e-07 23 75 -1.284e+04 3.79e-10 6.53e-10 1.3e-05 0.0e+00 2.4e-05 6.53e-10 24 76 -1.284e+04 2.34e-11 1.76e-11 8.6e-07 0.0e+00 1.9e-06 1.76e-11 ***** RELATIVE FUNCTION CONVERGENCE ***** FUNCTION -1.284035e+04 RELDX 8.555e-07 FUNC. EVALS 76 GRAD. EVALS 25 PRELDF 1.763e-11 NPRELDF 1.763e-11 I FINAL X(I) D(I) G(I) 1 1.030513e-05 1.000e+00 6.155e+01 2 8.206572e-02 1.000e+00 1.801e-02 3 8.926080e-01 1.000e+00 2.204e-02 Call: garch(x = rets, order = c(1, 1)) Model: GARCH(1,1) Residuals: Min 1Q Median 3Q Max -6.41740 -0.45176 0.03289 0.56373 9.81315 Coefficient(s): Estimate Std. Error t value Pr(>|t|) a0 1.031e-05 5.597e-07 18.41 <2e-16 *** a1 8.207e-02 3.994e-03 20.55 <2e-16 *** b1 8.926e-01 4.104e-03 217.51 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Diagnostic Tests: Jarque Bera Test data: Residuals X-squared = 14703, df = 2, p-value < 2.2e-16 Box-Ljung test data: Squared.Residuals X-squared = 0.0083245, df = 1, p-value = 0.9273
That's it! Certainly much less painful than programming the entire MLE procedure. We see that the parameters $\{\beta_0,\beta_1,\beta_2\}$ (reported as a0, a1, b1) are all statistically significant. Given the fitted parameters, we can also examine the extracted time series of volatility.
%%R
#PLOT VOLATILITY TIMES SERIES
print(names(res))
plot(res$fitted.values[,1],type="l",col="red")
grid(lwd=2)
[1] "order" "coef" "n.likeli" "n.used" [5] "residuals" "fitted.values" "series" "frequency" [9] "call" "vcov"
We may also plot it side by side with the stock price series.
%%R
par(mfrow=c(2,1))
plot(res$fitted.values[,1],col="blue",type="l")
plot(stkp,type="l",col="red")
Notice how the volatility series clumps into periods of high volatility, interspersed with larger periods of calm. As is often the case, volatility tends to be higher when the stock price is lower.
Vector autoregression is also known as VAR (not the same thing as Value-at-Risk, denoted VaR). VAR is useful for estimating systems of simultaneous regression equations, in which the variables influence each other over time. So in a VAR, each variable in the system is assumed to depend on lagged values of itself and of the other variables. The number of lags may be chosen by the econometrician based on the expected decay in the time-dependence of the variables in the VAR.
In the following example, we examine the inter-relatedness of returns of the following three tickers: SUNW, MSFT, IBM. For vector autoregressions (VARs), we run the following R commands:
%%R
md = read.table("DSTMAA_data/markowitzdata.txt",header=TRUE)
y = as.matrix(md[2:4])
library(stats)
var6 = ar(y,aic=TRUE,order.max=6)
print(var6$order)
print(var6$ar)
[1] 1

, , SUNW

         SUNW       MSFT         IBM
1 -0.00985635 0.02224093 0.002072782

, , MSFT

         SUNW       MSFT       IBM
1 0.008658304 -0.1369503 0.0306552

, , IBM

         SUNW      MSFT         IBM
1 -0.04517035 0.0975497 -0.01283037
We print out the Akaike Information Criterion (AIC)^[https://en.wikipedia.org/wiki/Akaike_information_criterion] across lag orders to see which lag order is preferred.
%%R
print(var6$aic)
        0         1         2         3         4         5         6
23.950676  0.000000  2.762663  5.284709  5.164238 10.065300  8.924513
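As a quick sketch (the variable name best_order is introduced here just for illustration), we can confirm the chosen order directly: ar reports AIC values relative to the best-fitting model, so the minimum identifies the selected lag order.

%%R
#CONFIRM THE LAG ORDER WITH THE SMALLEST (RELATIVE) AIC
best_order = as.integer(names(which.min(var6$aic)))
print(best_order)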
Since there are three stocks' returns moving over time, we have a system of three equations, and we allowed for up to six lags in each equation. We print out the partial autocorrelations at each of the six lags and examine their signs. We note, however, that only the first lag is significant, as the "order" of the system was estimated as 1 in the VAR above.
%%R
print(var6$partialacf)
, , SUNW

         SUNW         MSFT          IBM
1 -0.00985635  0.022240931  0.002072782
2 -0.07857841 -0.019721982 -0.006210487
3  0.03382375  0.003658121  0.032990758
4  0.02259522  0.030023132  0.020925226
5 -0.03944162 -0.030654949 -0.012384084
6 -0.03109748 -0.021612632 -0.003164879

, , MSFT

          SUNW        MSFT          IBM
1  0.008658304 -0.13695027  0.030655201
2 -0.053224374 -0.02396291 -0.047058278
3  0.080632420  0.03720952 -0.004353203
4 -0.038171317 -0.07573402 -0.004913021
5  0.002727220  0.05886752  0.050568308
6  0.242148823  0.03534206  0.062799122

, , IBM

         SUNW         MSFT         IBM
1 -0.04517035  0.097549700 -0.01283037
2  0.05436993  0.021189756  0.05430338
3 -0.08990973 -0.077140955 -0.03979962
4  0.06651063  0.056250866  0.05200459
5  0.03117548 -0.056192843 -0.06080490
6 -0.13131366 -0.003776726 -0.01502191
Interestingly, we see that each of the tickers has a negative relation to its own lagged value, but mostly positive correlations with the lagged values of the other two stocks. Hence, there is positive cross autocorrelation amongst these tech stocks. We can also run a model with three lags.
%%R
ar(y,method="ols",order.max=3)
Call:
ar(x = y, order.max = 3, method = "ols")

$ar
, , 1

         SUNW       MSFT       IBM
SUNW  0.01407 -0.0006952 -0.036839
MSFT  0.02693 -0.1440645  0.100557
IBM   0.01330  0.0211160 -0.009662

, , 2

          SUNW     MSFT     IBM
SUNW -0.082017 -0.04079 0.04812
MSFT -0.020668 -0.01722 0.01761
IBM  -0.006717 -0.04790 0.05537

, , 3

         SUNW      MSFT      IBM
SUNW 0.035412  0.081961 -0.09139
MSFT 0.003999  0.037252 -0.07719
IBM  0.033571 -0.003906 -0.04031

$x.intercept
      SUNW       MSFT        IBM
-9.623e-05 -7.366e-05 -6.257e-05

$var.pred
          SUNW      MSFT       IBM
SUNW 0.0013593 0.0003007 0.0002842
MSFT 0.0003007 0.0003511 0.0001888
IBM  0.0002842 0.0001888 0.0002881
Cross autocorrelation of this kind was found across all stocks by Lo and MacKinlay in their book "A Non-Random Walk Down Wall Street" -- see the Figure below.
nb_setup.images_hconcat(["DSTMAA_images/ARcross.png"], width=500)
We see that one-lag cross autocorrelations are positive. Compare these portfolio autocorrelations with the individual stock autocorrelations in the example here.
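To make that comparison concrete, here is a small sketch (using the same return matrix y loaded above) that computes each stock's own first-order autocorrelation:

%%R
#FIRST-ORDER AUTOCORRELATION OF EACH STOCK'S OWN RETURNS
print(apply(y, 2, function(r) acf(r, lag.max=1, plot=FALSE)$acf[2]))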
Earlier we examined root finding; here we develop it further, and also get some practice with user-defined functions. Here is a neat model in R to solve for the implied volatility in the Black-Merton-Scholes class of models. First, we code up the Black and Scholes (1973) model; this is the function bms73 below. Then we write a user-defined function that solves for the implied volatility from a given call or put option price. The package minpack.lm is used for the equation solving, and the function call is nls.lm.
If you are not familiar with the Nobel-Prize-winning Black-Scholes model, never mind; almost the entire world has never heard of it. Just think of it as a nonlinear multivariate function that we will use as an exemplar for equation solving. We are going to use the functions below to solve for the value of sig. We set up two functions.
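For reference, the call price computed by the bms73 function below is

$$ C = S e^{-qT} N(d_1) - K e^{-rT} N(d_2), \quad d_1 = \frac{\ln(S/K) + (r - q + \frac{1}{2}\sigma^2) T}{\sigma \sqrt{T}}, \quad d_2 = d_1 - \sigma \sqrt{T} $$

where $N(\cdot)$ is the standard normal distribution function and $\sigma$ is the volatility sig; the put price is obtained analogously with $-d_1$ and $-d_2$, as in the else branch of the code.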
%%R
#Black-Merton-Scholes 1973
#sig: volatility
#S: stock price
#K: strike price
#T: maturity
#r: risk free rate
#q: dividend rate
#cp = 1 for calls and -1 for puts
#optprice: observed option price
bms73 = function(sig,S,K,T,r,q,cp=1,optprice) {
d1 = (log(S/K)+(r-q+0.5*sig^2)*T)/(sig*sqrt(T))
d2 = d1 - sig*sqrt(T)
if (cp==1) {
optval = S*exp(-q*T)*pnorm(d1)-K*exp(-r*T)*pnorm(d2)
}
else {
optval = -S*exp(-q*T)*pnorm(-d1)+K*exp(-r*T)*pnorm(-d2)
}
#Return the difference between the model price and the observed price;
#the solver drives this to zero to recover the implied volatility (sig)
bs = optval - optprice
}
#Function to return Imp Vol with starting guess sig0
impvol = function(sig0,S,K,T,r,q,cp,optprice) {
sol = nls.lm(par=sig0,fn=bms73,S=S,K=K,T=T,r=r,q=q,
cp=cp,optprice=optprice)
}
We use the minimizer to solve the nonlinear function for the value of sig. The calls to this model are as follows:
%%R
library(minpack.lm)
optprice = 4
res = impvol(0.2,40,40,1,0.03,0,-1,optprice)
print(names(res))
print(c("Implied vol = ",res$par))
We note that the function impvol was written so that the argument we need to solve for, sig0 (the implied volatility), is the first argument of bms73. The expression par=sig0 informs the solver which argument is being varied in order to satisfy the nonlinear equation for the implied volatility. Note also that the function bms73 returns the difference between the model price and the observed price, not the model price alone. This is necessary because the solver finds the implied volatility by driving this function value to zero.
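If the minpack.lm package is unavailable or fails to load in your environment, a base-R alternative is the one-dimensional root finder uniroot, since bms73 already returns the pricing error that must be driven to zero. A minimal sketch (the search interval of 0.001 to 2 for the volatility is chosen here purely for illustration):

%%R
#IMPLIED VOL VIA BASE R uniroot (no extra packages required)
sol = uniroot(bms73, interval=c(0.001,2), S=40, K=40, T=1, r=0.03, q=0, cp=-1, optprice=4)
print(c("Implied vol = ", sol$root))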
Let's check that, when we put this implied volatility back into the bms73 function, we get back the option price of 4. Voila!
%%R
#CHECK
optp = bms73(res$par,40,40,1,0.03,0,-1,optprice) + optprice  #cp=-1 since the implied vol above was backed out from a put
print(c("Check option price = ",optp))
We may be interested in hosting our R programs for users to run through a browser interface. This section walks you through the process to do so. This is an extract of my blog post at http://sanjivdas.wordpress.com/2010/11/07/web-enabling-r-functions-with-cgi-on-a-mac-os-x-desktop/. The same may be achieved by using the Shiny package in R, which enables you to create interactive browser-based applications, and is in fact a more powerful environment in which to create web-driven applications. See: https://shiny.rstudio.com/.
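As a point of comparison, here is a hypothetical sketch (not part of the Rcgi workflow described below) of how the same kind of calculator could be written as a small Shiny app; the input names principal, rate, and months are made up for this illustration, and the code is commented out so that the notebook cell does not launch a server.

%%R
#HYPOTHETICAL SHINY SKETCH OF A MORTGAGE CALCULATOR (uncomment to run locally)
#library(shiny)
#ui = fluidPage(
#  numericInput("principal", "Loan principal", 300000),
#  numericInput("rate", "Annual loan rate (decimal)", 0.06),
#  numericInput("months", "Remaining maturity (months)", 300),
#  textOutput("payment")
#)
#server = function(input, output) {
#  output$payment = renderText({
#    r = input$rate/12
#    paste("Monthly payment =", round(input$principal*r/(1 - (1 + r)^(-input$months)), 2))
#  })
#}
#shinyApp(ui, server)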
Here we describe an example based on the Rcgi package from David Firth; for full details of using R with CGI, see http://www.omegahat.org/CGIwithR/. Download the document on using R with CGI, titled "CGIwithR: Facilities for Processing Web Forms with R".^[https://www.jstatsoft.org/article/view/v008i10/CGIwithR-overview.pdf]
You need two program files to get everything working. (These instructions are for a Mac environment.)
(a) The html file that is the web form for input data. (b) The R file, with special tags for use with the CGIwithR package.
Our example will be simple, i.e., a calculator to work out the monthly payment on a standard fixed rate mortgage. The three inputs are the loan principal, annual loan rate, and the number of remaining months to maturity.
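For reference, the monthly payment on a standard fixed rate mortgage is given by the usual annuity formula

$$ \mbox{Payment} = P \times \frac{r/12}{1 - (1 + r/12)^{-n}} $$

where $P$ is the loan principal, $r$ the annual loan rate in decimals, and $n$ the number of remaining months.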
But first, let's create the html file for the web page that will take these three input values. We call it mortgage_calc.html. The code is all standard, for those familiar with html, and even if you are not used to html, the code is self-explanatory. See Figure rcgi1.
#rcgi1
nb_setup.images_hconcat(['DSTMAA_images/rcgi1.png'], width=500)
Notice that line 06 will be the one referencing the R program that does the calculation. The three inputs are accepted in lines 08-10. Line 12 sends the inputs to the R program.
Next, we look at the R program, suitably modified to include html tags. We name it mortgage_calc.R. See Figure rcgi2.
nb_setup.images_hconcat(['DSTMAA_images/rcgi2.png'], width=500)
We can see that all html calls in the R program are made using the tag() construct. Lines 22--35 take in the three inputs from the html form. Lines 43--44 do the calculations and line 45 prints the result. The cat() function prints its arguments to the web browser page.
Okay, we have seen how the two programs (html, R) are written and these templates may be used with changes as needed. We also need to pay attention to setting up the R environment to make sure that the function is served up by the system. The following steps are needed:
Make sure that your Mac is allowing connections to its web server. Go to System Preferences and choose Sharing. In this window enable Web Sharing by ticking the box next to it.
Place the html file mortgage_calc.html in the directory that serves up web pages. On a Mac, there is already a web directory for this called Sites. It's a good idea to open a separate subdirectory called (say) Rcgi below this one for the R related programs and put the html file there.
The R program mortgage_calc.R must go in the directory that has been assigned for CGI executables. On a Mac, the default for this directory is /Library/WebServer/CGI-Executables and is usually referenced by the alias cgi-bin (stands for cgi binaries). Drop the R program into this directory.
Two more important files are created when you install the Rcgi package. The CGIwithR installation creates two files:
(a) A hidden file called .Rprofile; (b) A file called R.cgi.
Place both these files in the directory: /Library/WebServer/CGI-Executables.
If you cannot find the .Rprofile file then create it directly by opening a text editor and adding the following two lines to the file (the library() call is commented out below only so that this notebook cell does not execute it; in the actual .Rprofile it should be uncommented):
%%R
#! /usr/bin/R
#library(CGIwithR,warn.conflicts=FALSE)
Now, open the R.cgi file and make sure that the line pointing to the R executable in the file is showing
R_DEFAULT=/usr/bin/R
The file may actually have the path as /usr/local/bin/R, which is typical of Linux platforms, but the usual Mac install has the executable at /usr/bin/R, so make sure this line points to the right location.
Make both files executable as follows:
chmod a+rx .Rprofile
chmod a+rx R.cgi
Finally, make the ~/Sites/Rcgi/ directory write accessible:
chmod a+wx ~/Sites/Rcgi
Just being patient and following all the steps makes sure it all works well. Having done it once, it's easy to repeat and create several functions. The inputs are as follows: Loan principal (enter a dollar amount). Annual loan rate (enter it in decimals, e.g., six percent is entered as 0.06). Remaining maturity in months (enter 300 if the remaining maturity is 25 years).
We end with a brief comment on causality. Merely finding a relationship between the dependent variable and the independent variables in a regression is not enough. Such correlation does not imply causality. In many cases, we are interested in causality because we want to know that changing an independent variable will indeed change the dependent one. This is especially crucial in business, where we may be interested in knowing if raising wages will result in higher productivity; in public policy, where we'd like to know the impact of reducing taxes on domestic product; or in marketing, where the causal impact of ad spending is an important part of marketing attribution analysis.
We may also be concerned that a third confounding variable may be impacting both $y$ and $x$ in a regression, thereby biasing the causal impact implied by the regression coefficient.
The main question of causal inference is as follows: Holding all else constant, how does changing a single independent variable $x$ change the dependent variable $y$? Regression models are notoriously riddled with trouble in answering this question, because even as we acknowledge that the $x$ variables in the regression are correlated with each other, we are happy to assume that all the variables left out of the regression that might impact $y$ are not correlated in any way with the $x$ variables included in it! Given this, any attempt at causal inference leaves the modeler very uncomfortable at the slightest hint of omitted variable bias.
So how may we undertake causal inference with more confidence? The answer is through experiments or, more technically, randomized controlled trials (RCTs).
Angrist and Pischke (2014) provide the following useful Identity of Causal Inference: The effect of experimental treatment may be decomposed into
$$
\mbox{Treated Outcome} - \mbox{Untreated Outcome} \\
= \mbox{Treated Outcome} - \mbox{Outcome of Treated if not treated} \\
+ \mbox{Outcome of Treated if not treated} - \mbox{Untreated Outcome} \\
= \mbox{Treatment Effect} + \mbox{Selection Bias}
$$

If we have a true randomized controlled trial, then the selection bias will be zero, and we will have estimated the treatment effect correctly. This is why setting up proper RCTs is so valuable in determining causality. Angrist and Pischke point out their “Furious Five methods of causal inference”: (i) random assignment, (ii) regression, (iii) instrumental variables, (iv) regression discontinuity, and (v) differences-in-differences. Their book is an excellent source for discussion of causal inference.
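The same identity is often written in potential-outcomes notation, where $Y_i(1)$ and $Y_i(0)$ denote the outcomes of unit $i$ with and without treatment and $D_i \in \{0,1\}$ is the treatment indicator (this restatement is standard and not specific to any one of the references below):

$$
E[Y_i(1) \mid D_i = 1] - E[Y_i(0) \mid D_i = 0] = \underbrace{E[Y_i(1) - Y_i(0) \mid D_i = 1]}_{\mbox{treatment effect on the treated}} + \underbrace{E[Y_i(0) \mid D_i = 1] - E[Y_i(0) \mid D_i = 0]}_{\mbox{selection bias}}
$$

Under random assignment, $D_i$ is independent of the potential outcomes, so the selection-bias term vanishes.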
Angrist, J. D., and J.-S. Pischke (2014). "Mastering 'Metrics: The Path from Cause to Effect", Princeton University Press. https://press.princeton.edu/titles/10363.html
Imbens, G. (2019). "Potential Outcome and Directed Acyclic Graph Approaches to Causality: Relevance for Empirical Practice in Economics", Working Paper. Discusses the DAG approach to causality of Judea Pearl versus the standard potential-outcomes approach used by most economists, as in the reference above.