Implementing a recommender engine using Hadoop and Mahout

Continuing from my earlier blog post, in this post I will talk about how to implement a recommender engine using Mahout and Hadoop.

  • First, a brief introduction to MapReduce and how some computational algorithms have to be rewritten to take advantage of parallel processing.
  • Second, I will talk about how the recommender algorithm is adapted in Mahout to take advantage of Hadoop.
  • Third, I will walk you through a use case of implementing a MapReduce recommender engine with Hadoop and Mahout.

One of the fundamental principles of parallel computing is that it works best when each computer in a cluster can process its tasks independently, without sharing data with the others. Consider sorting algorithms: most are not designed to chunk a large dataset, hand the chunks to many computers running in parallel, and then consolidate the returned pieces back into one sorted dataset. If we need to sort at that scale, we have to write the algorithm to take advantage of MapReduce. TeraSort is one sorting technique that does exactly that.
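To make the pattern concrete, here is a minimal sketch (plain Python for illustration, not actual Hadoop code, with made-up rating lines) of the map-shuffle-reduce flow, counting how often each item appears across independent data splits:

```python
from collections import defaultdict

# Each "mapper" processes its chunk independently of the others.
def map_phase(chunk):
    # Emit (itemid, 1) for each "userid,itemid,rating" line.
    return [(line.split(",")[1], 1) for line in chunk]

# The shuffle groups emitted values by key before reducing.
def shuffle(mapped):
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    return groups

# Each "reducer" aggregates the values for one key.
def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

chunks = [["1,101,5.0", "1,102,3.0"], ["2,101,4.0"]]  # two independent splits
mapped = [pair for chunk in chunks for pair in map_phase(chunk)]
print(reduce_phase(shuffle(mapped)))  # {'101': 2, '102': 1}
```

Because no mapper depends on another mapper's data, the splits can be processed on different machines, which is the property Hadoop exploits.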

Similarly, in the context of a recommender engine, we have to write the algorithm differently to take advantage of MapReduce. Mahout provides recommendation algorithms designed to do so. The components of a recommender engine are:

  • User-based and item-based recommenders
  • Similarity measures (the concept of a neighborhood)

The “Hello World” equivalent of the recommendation problem is Netflix movie ratings. The movie sample data is available here. In this test data you have a history of users who rented movies and rated them. When you rent a new movie, the system suggests titles based on your rental pattern as well as what other users with similar profiles are renting. This data is typically stored in a transactional database such as MySQL. We can write a batch program to take the transaction data and move it to the Hadoop filesystem. When you run the Mahout MapReduce job, it returns the top ‘n’ recommendations. If you look at the data, each line is a comma-separated userid,itemid,rating triple. It assumes you maintain user and item master tables in the transactional database, which you can join with these results to present more informative output to the user.
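For example, the first few lines of such an input file (with hypothetical user and item IDs) might look like:

```
1,101,5.0
1,102,3.0
2,101,4.0
2,103,2.5
```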

Let me give you a glimpse of how an item-based recommendation algorithm works using MapReduce; for more detail, refer to Mahout in Action (Manning), chapter 6.

The recommender computation involves three steps:

  • The first step is to build an item co-occurrence matrix from this data. It answers the question of how many times two items have co-occurred when users bought them. The result is a square matrix of order n, where n is the number of items users have bought.
  • The next step is to compute the user vector of the items a given user is buying, which is a single-column matrix.
  • The final step is to produce recommendations by multiplying the co-occurrence matrix with the user vector. The items to recommend are those with the highest resulting scores among the items the user has not yet bought (the zero entries of the user vector).
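The three steps above can be sketched in a few lines of Python with NumPy (a toy example with made-up data for four items; Mahout distributes the same computation across the cluster):

```python
import numpy as np

# Hypothetical purchase history: rows are users, columns are items
# (1 = the user bought/rated the item, 0 = not).
purchases = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
])

# Step 1: item co-occurrence matrix (n x n, n = number of items).
cooccurrence = purchases.T @ purchases

# Step 2: the user vector for user 0 (a single column).
user = purchases[0]

# Step 3: multiply co-occurrence matrix by user vector.
scores = cooccurrence @ user
scores[user > 0] = 0           # only consider items the user has not bought
print(int(np.argmax(scores)))  # index of the item to recommend -> 2
```

Here user 0 has bought items 0 and 1, and item 2 scores highest among the remaining items because it frequently co-occurs with those two.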

Let us take a typical use case: an airline reservation system. When a user purchases flight tickets to a destination, he is offered recommendations relevant to his interests, based on his past purchase history and the purchase history of users with similar profiles. Assuming the transactional data is in MySQL, we can use a Sqoop script to import the data into HDFS as below:

 sqoop import --connect jdbc:mysql://localhost/reservation --username krishna --password pass123  --table useritem
 

Next, we need to run the Mahout RecommenderJob on Hadoop as below:

 hadoop jar mahout-core-0.6-job.jar org.apache.mahout.cf.taste.hadoop.item.RecommenderJob -Dmapred.input.dir=input/input.txt -Dmapred.output.dir=output --usersFile input/users.txt --booleanData --similarityClassname SIMILARITY_COOCCURRENCE
 

Finally, we need to run a Sqoop script to export the resulting recommendations back to MySQL as below (note that the export directory must point to the job's output, written by the reducers):

 sqoop export --connect jdbc:mysql://localhost/reservation --username krishna --password pass123 --table useritem --export-dir output/part-r-00000
 

Now, in your application, you can look up the useritem table and return the recommender results to the user.
