# The Machine Knows What You Want: Recommender Systems {#Recommenders}

## Introduction

A recommendation algorithm predicts what you like or want, and it may also rank many such items in order of preference. It infers your preferences from recorded data on your likes and dislikes. This class of machine learning models is often known as "collaborative filtering".

Netflix has a recommendation engine for movies: it tries to show you movies that you will prefer. If you think of the $N$ users of a movie service, each with preferences over $K$ attributes of a movie, we can collect these preference weights in a matrix $u \in {\cal R}^{K \times N}$, with users on the columns and attributes on the rows. Each element of the matrix is indexed as $u_{ki}$, where $i=1,2,...,N$ and $k=1,2,...,K$.

Likewise, imagine another matrix $m$ with the $M$ movies on the columns and the same $K$ attributes on the rows, so that $m \in {\cal R}^{K \times M}$. Each element is indexed as $m_{kj}$, where $j=1,2,...,M$ and $k=1,2,...,K$.

For any user $i$, we may rank movies based on the predicted score $r_{ij}$ for movie $j$, easily calculated as

$$ r_{ij} = \sum_{k=1}^K u_{ki} m_{kj} = u_i^\top m_j $$

where $u_i$ is a column vector of size $K \times 1$, and $m_j$ is also a column vector of size $K \times 1$. The elements $r_{ij}$ form a matrix $r$ of dimension $N \times M$. Some, but not all, of these elements are actually observed, because each user rates only some of the movies. While matrix $r$ is (partially) observable from ratings data, the matrices $u$ and $m$ are latent, because the $K$ attributes are unknown. We do know that

$$ r = u^\top m $$

Therefore, we would like to factorize matrix $r$ into the two matrices $u,m$.
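
To make these dimensions concrete, here is a minimal NumPy sketch (the sizes $N$, $M$, $K$ and the random values are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 5, 4, 2                     # users, movies, latent attributes (illustrative)
u = rng.normal(size=(K, N))           # user weights, one column per user
m = rng.normal(size=(K, M))           # movie attributes, one column per movie

r = u.T @ m                           # predicted scores, shape (N, M)
i, j = 0, 2
assert np.isclose(r[i, j], u[:, i] @ m[:, j])   # r_ij = u_i' m_j
```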

If the true score for movie $j$, user $i$, is $y_{ij}$, then we want to find $u,m$ that deliver the closest value of $r_{ij}$ to its true value. This is done using a technique known as Alternating Least Squares (ALS).

## Alternating Least Squares

The best-fit recommender system is specified as the solution to the following problem, in which we minimize the loss function $L$. Since the notation gets hairy here, remember that any variable with two subscripts is a scalar, with one subscript is a vector, and with no subscripts is a matrix.

$$ \begin{align} L &= \sum_{i=1}^N \sum_{j=1}^M (y_{ij}-r_{ij})^2 + \lambda_u \sum_{i=1}^N \parallel u_i \parallel^2 + \lambda_m \sum_{j=1}^M \parallel m_j \parallel^2 \\ &= \sum_{i=1}^N \sum_{j=1}^M (y_{ij}-u_i^\top m_j)^2 + \lambda_u \sum_{i=1}^N u_i^\top u_i + \lambda_m \sum_{j=1}^M m_j^\top m_j \end{align} $$
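
In code, the loss takes only a few lines. A sketch, assuming a fully observed ratings matrix `y` (as the double sum above implicitly does):

```python
import numpy as np

def als_loss(y, u, m, lam_u, lam_m):
    """Squared-error loss with ridge penalties on both factor matrices."""
    resid = y - u.T @ m                # (N, M) matrix of errors y_ij - u_i' m_j
    return (resid ** 2).sum() + lam_u * (u ** 2).sum() + lam_m * (m ** 2).sum()
```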

We wish to find the matrices $u, m$ that solve this minimization problem. We begin with a starting guess for both matrices. Then we differentiate $L$ with respect to one matrix, say $u$, holding $m$ fixed, and solve for the optimal $u$. Next, taking the new values of $u$, we differentiate $L$ with respect to $m$, holding $u$ fixed, and solve for the new optimal $m$. We repeat this for a chosen number of epochs, alternating between the $u$ and $m$ subproblems. Because each subproblem is solved exactly, the loss never increases, and the iterates converge to a (local) minimum, completing the factorization.

### Solve $u$ matrix

Differentiate $L$ with respect to each user vector $u_i$, obtaining $N$ first-order conditions, each of which may then be set to zero and solved.

$$ \frac{\partial L}{\partial u_i} = \sum_{j=1}^M (y_{ij}-u_i^\top m_j)(-2 m_j) + 2 \lambda_u u_i = 0 \quad \in {\cal R}^{K \times 1} $$

which gives the following equation:

$$ \sum_{j=1}^M (y_{ij}-u_i^\top m_j)m_j = \lambda_u u_i $$

If you work it out carefully, you can write this purely in matrix form:

$$ m \cdot (y_i - m^\top u_i) = \lambda_u u_i \quad \in {\cal R}^{K \times 1} $$

(Note that $y_i \in {\cal R}^{M \times 1}$; $m \in {\cal R}^{K \times M}$; $u_i \in {\cal R}^{K \times 1}$. And so the LHS is ${\cal R}^{K \times 1}$, as is the RHS.)

We rewrite this economically as

$$ \begin{align} m \cdot (y_i - m^\top u_i) &= \lambda_u u_i \quad \in {\cal R}^{K \times 1} \\ m \cdot y_i &= (\lambda_u I + m \cdot m^\top) \cdot u_i \\ u_i &= (\lambda_u I + m \cdot m^\top)^{-1} \cdot m \cdot y_i \quad \in {\cal R}^{K \times 1} \end{align} $$

where $I \in {\cal R}^{K \times K}$ is an identity matrix. This gives one column $u_i$ of the $u$ matrix, and we compute this over all $i=1,2,...,N$. Therefore, holding $m$ fixed, we can get all the $u$ matrix values. And then holding $u$ fixed, we get all the $m$ matrix values, as shown next.
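
In code, this update is a single linear solve, and stacking the right-hand sides handles all users at once. A minimal sketch, with illustrative sizes and an arbitrary current guess for $m$ (`lam_u` is a hypothetical regularization weight):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 5, 4, 2
y = rng.uniform(1, 5, size=(N, M))      # ratings, assumed fully observed
m = rng.normal(size=(K, M))             # current guess for the movie matrix
lam_u = 0.1                             # illustrative regularization weight

A = lam_u * np.eye(K) + m @ m.T         # (K, K); the same matrix serves every user
u = np.linalg.solve(A, m @ y.T)         # column i solves A u_i = m y_i, giving u as (K, N)
```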

### Solve $m$ matrix

This is analogous to the solution for the $u$ matrix, and the answer is

$$ m_j = (\lambda_m I + u \cdot u^\top)^{-1} \cdot u \cdot y_j \quad \in {\cal R}^{K \times 1} $$

This gives one column $m_j$ of the $m$ matrix, and we compute this over all $j=1,2,...,M$. Note that $I \in {\cal R}^{K \times K}$ is an identity matrix, and $y_j \in {\cal R}^{N \times 1}$.
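
The corresponding one-line solve for the movie matrix, holding $u$ fixed (continuing the arrays from the previous sketch; `lam_m` is again an illustrative value):

```python
lam_m = 0.1                             # illustrative regularization weight
B = lam_m * np.eye(K) + u @ u.T         # (K, K); shared across all movies
m = np.linalg.solve(B, u @ y)           # column j solves B m_j = u y_j, giving m as (K, M)
```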

For those students who are uncomfortable with this sort of matrix algebra, I strongly recommend taking a small system of $N=4$ users and $M=3$ movies, setting $K=2$, and factorizing the matrix $r \in {\cal R}^{4 \times 3}$. Rework the calculus and algebra above to get comfortable with these mathematical objects.

The alternating least squares algorithm is similar to the Expectation-Maximization (EM) algorithm of @10.2307/2984875.

## ALS package

In R, we have the ALS package to do this matrix factorization.

Suppose we have 50 users who rate 200 movies. This gives us the $y$ matrix of true ratings. We want to factorize this matrix into two latent matrices $u$ and $m$, by choosing $K=2$ latent attributes. The code for this is simple.
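
A minimal sketch of this computation, written directly from the closed-form updates derived above (NumPy here; the simulated ratings, $\lambda$, and epoch count are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
N, M, K = 50, 200, 2                    # 50 users, 200 movies, K = 2 latent attributes
y = rng.uniform(1, 5, size=(N, M))      # simulated "true" ratings
lam = 0.1                               # illustrative shrinkage, same for both factors

u = rng.normal(size=(K, N))             # random starting guesses
m = rng.normal(size=(K, M))
for epoch in range(100):
    u = np.linalg.solve(lam * np.eye(K) + m @ m.T, m @ y.T)   # u-step
    m = np.linalg.solve(lam * np.eye(K) + u @ u.T, u @ y)     # m-step
```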

We now extract the two latent matrices, generate the predicted ratings matrix $r$, and compute the RMSE of the ratings predictions.
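
Continuing the sketch above:

```python
r = u.T @ m                             # predicted ratings, shape (50, 200)
rmse = np.sqrt(np.mean((y - r) ** 2))   # fit measured over all (user, movie) pairs
print(rmse)
```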

## Interpretation and Use

What does the $u$ matrix tell us? It says, for each of the 50 users, how that user weights the two attributes; user $i$'s weights are the column $u_i$. For example, the first user's weights are the first column of $u$, extracted in the sketch below, and in general each attribute is given a different weight.

Likewise, you can take matrix $m$ and see how much each movie offers on each attribute dimension; for example, the first movie's loadings are the first column of $m$.
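
Continuing the sketch (indices start at zero, so the first user and first movie are column 0):

```python
print(u[:, 0])   # user 1's weights on the K = 2 latent attributes
print(m[:, 0])   # movie 1's loadings on the same attributes
```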

How do we use this decomposition? We can take a new user and find the closest existing user, say by cosine similarity on other observable characteristics, and then use that user's column of weights in matrix $u$ to rank the movies. Suppose the new user's weights just happen to be the mean of all the other users' weights. Then we have:
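
Continuing the sketch, with the mean-weight new user as the stated assumption:

```python
u_new = u.mean(axis=1)                  # hypothetical new user: mean of all user columns
scores = u_new @ m                      # predicted rating for each of the 200 movies
top6 = np.argsort(scores)[::-1][:6]     # indices of the six highest predicted ratings
print(top6)
```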

The top 6 movie numbers are listed.

## ALS in Python

We revisit the same approach using Python. A good reference is here: https://medium.com/radon-dev/als-implicit-collaborative-filtering-5ed653ba39fe; https://drive.google.com/file/d/1M5mHqLfiBDJWqefQRT-d-Y3301RYqW-z/view?usp=sharing

See also this nice article on matrix factorization and collaborative filtering using ALS: https://towardsdatascience.com/prototyping-a-recommender-system-step-by-step-part-2-alternating-least-square-als-matrix-4a76c58714a1; https://drive.google.com/file/d/1LZN0kDoCGvErof9xAoOSLjU6NpG4C4bm/view?usp=sharing

We do a very basic implementation using NMF (non-negative matrix factorization), a closely related factorization in which both latent matrices are constrained to be non-negative.
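
A minimal sketch using scikit-learn's `NMF` (the simulated ratings and the choice of two components mirror the setup above and are illustrative):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(7)
y = rng.uniform(1, 5, size=(50, 200))   # simulated non-negative ratings

model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
W = model.fit_transform(y)              # (50, 2): each row is a user's attribute weights
H = model.components_                   # (2, 200): each column is a movie's loadings

r = W @ H                               # reconstructed ratings matrix
rmse = np.sqrt(np.mean((y - r) ** 2))
print(rmse)
```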