Sunday, February 28, 2016


How to set up Hadoop Streaming to analyze MovieLens data

This post is designed for an Apache Hadoop 2.6.0 single-node cluster installation. The job uses a Hadoop Streaming design with C++, Ruby, and Python. The MapReduce configuration is the standard one, with no tuning of the streaming job, mappers, or reducers for enhanced performance. The dataset used is the GroupLens MovieLens 1M dataset; the files used are the ratings file and the README.txt file.



1. Prepare the data




The data for the analysis is relatively easy to prepare. It can be imported (after extraction from the zipped format) into Microsoft Excel. The file can then be arranged into four columns (whose metadata can be obtained from the README.txt file packaged with the data). These are:

UserIDs range between 1 and 6040  
MovieIDs range between 1 and 3952
Ratings are made on a 5-star scale (whole-star ratings only)
Timestamp is represented in seconds since the epoch as returned by time(2).

There are a number of MapReduce jobs that can be set up on the dataset. The one selected for this illustration is to determine the following metrics:

Number of ratings made by each UserID
Number of ratings for each MovieID
Average rating for each MovieID
Number of ratings in each score category in the list {1,2,3,4,5}.


The data can be arranged into four files for the job. The UserID column can be extracted into a text file for analysis with a wordcount MapReduce job. The rationale for the approach is that the number of times a UserID occurs in the column equals the number of ratings made by that UserID. The scheme for counting the ratings for each movie follows analogously from a MovieID column text file.

The average rating calculation uses all four columns of the data in CSV format; however, the Python (mapper/reducer) code can easily be adapted to use only the MovieID and Rating columns. The number of ratings in each score category can be determined (analogously to the UserID and MovieID approach) by treating the ratings as words rather than numbers. The Timestamp variable is not used in this illustration.

Once the four files have been prepared they can be loaded into the Hadoop Distributed File System (HDFS) for the job. The next step is to prepare the mapper-reducer sets for the job.
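For example, each prepared file can be copied into HDFS with commands of the following form (the folder name is the placeholder used throughout this post, and the local file name is illustrative):

hdfs dfs -mkdir -p <HDFS input folder>
hdfs dfs -put InputData.txt <HDFS input folder>/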



2. Prepare the Mappers and Reducers




In this illustration the four mapper-reducer sets were prepared in three programming languages (C++, Ruby, and Python).



C++ mapper-reducer set




The mapper-reducer set for the UserID wordcount MapReduce job was prepared in C++ according to the tutorial in this blog post.

C++ Mapper
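A minimal word-count mapper sketch in C++ could look like the following (illustrative only; it assumes whitespace-separated tokens arriving on standard input, not the exact code from the tutorial):

// mapper.cpp - minimal word-count mapper sketch
#include <iostream>
#include <string>

int main() {
    std::string word;
    // Emit "<token><TAB>1" for every whitespace-separated token on stdin,
    // which is the key/value format Hadoop Streaming expects.
    while (std::cin >> word) {
        std::cout << word << "\t1" << "\n";
    }
    return 0;
}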



C++ Reducer
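A matching reducer sketch (illustrative; it relies on Hadoop Streaming delivering the mapper output sorted by key) could be:

// reducer.cpp - minimal word-count reducer sketch
#include <cstdlib>
#include <iostream>
#include <string>

int main() {
    std::string line, currentKey;
    long currentCount = 0;
    // Keys arrive sorted, so counts for the same key are on consecutive lines.
    while (std::getline(std::cin, line)) {
        std::string::size_type tab = line.find('\t');
        if (tab == std::string::npos) continue;
        std::string key = line.substr(0, tab);
        long count = std::atol(line.substr(tab + 1).c_str());
        if (key == currentKey) {
            currentCount += count;
        } else {
            if (!currentKey.empty()) std::cout << currentKey << "\t" << currentCount << "\n";
            currentKey = key;
            currentCount = count;
        }
    }
    if (!currentKey.empty()) std::cout << currentKey << "\t" << currentCount << "\n";
    return 0;
}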



The mapper.cpp and reducer.cpp files have to be compiled into *.out executables using the following commands on Ubuntu 14.04.3 LTS.
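The commands are along the following lines (assuming the g++ compiler is installed):

g++ mapper.cpp -o mapper.out
g++ reducer.cpp -o reducer.out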


These commands generate a mapper.out file and a reducer.out file, which can be used in the Hadoop Streaming job.



Ruby mapper-reducer set 




The mapper-reducer set for the MovieID and Rating category wordcount MapReduce jobs was prepared in Ruby according to the tutorial in this blog post.

Ruby Mapper
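A minimal word-count mapper sketch in Ruby could look like this (illustrative only; the file name mapper.rb is an assumption):

#!/usr/bin/env ruby
# mapper.rb - minimal word-count mapper sketch
# Emit "<token>\t1" for every whitespace-separated token on standard input.
STDIN.each_line do |line|
  line.split.each { |word| puts "#{word}\t1" }
end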



Ruby Reducer
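A matching reducer sketch (illustrative; keys arrive sorted, so counts for the same key are on consecutive lines):

#!/usr/bin/env ruby
# reducer.rb - minimal word-count reducer sketch
current_key = nil
current_count = 0
STDIN.each_line do |line|
  key, count = line.strip.split("\t")
  next if count.nil?
  count = count.to_i
  if key == current_key
    current_count += count
  else
    puts "#{current_key}\t#{current_count}" unless current_key.nil?
    current_key = key
    current_count = count
  end
end
puts "#{current_key}\t#{current_count}" unless current_key.nil?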





Python mapper-reducer set




The mapper-reducer set for the Average Ratings MapReduce job was prepared in Python according to the tutorial in this blog post.


Python Mapper
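A minimal sketch of the average-rating mapper in Python could look like this (illustrative only; it assumes comma-separated lines of the form UserID,MovieID,Rating,Timestamp):

#!/usr/bin/env python
# KeyAvgmapper.py - minimal sketch: emit "MovieID<TAB>Rating" for each input line
import sys

for line in sys.stdin:
    fields = line.strip().split(',')
    if len(fields) < 3:
        continue
    movie_id, rating = fields[1], fields[2]
    print('%s\t%s' % (movie_id, rating))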



Python Reducer
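A matching reducer sketch (illustrative; keys arrive sorted, so all ratings for a MovieID are on consecutive lines):

#!/usr/bin/env python
# KeyAvgreducer.py - minimal sketch: average the ratings for each MovieID
import sys

current_key = None
total = 0.0
count = 0

for line in sys.stdin:
    key, value = line.strip().split('\t')
    rating = float(value)
    if key == current_key:
        total += rating
        count += 1
    else:
        if current_key is not None:
            print('%s\t%.4f' % (current_key, total / count))
        current_key = key
        total = rating
        count = 1

if current_key is not None:
    print('%s\t%.4f' % (current_key, total / count))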



It is important to grant execute permission (chmod +x) to each file created in the mapper-reducer set creation processes. The mapper-reducer sets can then be saved in appropriate folders in the local Hadoop environment. The next step is to process the data in Hadoop using streaming.
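For example (the Ruby file names here are illustrative):

chmod +x mapper.out reducer.out mapper.rb reducer.rb KeyAvgmapper.py KeyAvgreducer.py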



3. Process the data in Hadoop




UserID file


The assumption is that the UserID input dataset (in HDFS) is called InputData.txt. For the MapReduce Streaming command, the assumptions are that the data has been successfully loaded into the HDFS folder <HDFS input folder>, that the hadoop-streaming-2.6.0.jar file is located in the local system folder <hadoop-streaming-2.6.0.jar local folder>, that the C++ mapper is located in the local mapper folder <mapper folder>, that the C++ reducer is located in the local reducer folder <reducer folder>, and that the HDFS output folder has been named <HDFS output folder>. The next step is to run the following command in Hadoop.
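A command of roughly the following form can be used (the placeholders stand for the actual paths; the -file options ship the local mapper and reducer executables to the cluster):

hadoop jar <hadoop-streaming-2.6.0.jar local folder>/hadoop-streaming-2.6.0.jar \
  -input <HDFS input folder>/InputData.txt \
  -output <HDFS output folder> \
  -mapper mapper.out \
  -reducer reducer.out \
  -file <mapper folder>/mapper.out \
  -file <reducer folder>/reducer.out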




This is an excerpt of my UserID rating counts output (6040 in total).




MovieID file


The assumption is that the MovieID input dataset (in HDFS) is called InputData.txt. For the MapReduce Streaming command, the assumptions are that the data has been successfully loaded into the HDFS folder <HDFS input folder>, that the hadoop-streaming-2.6.0.jar file is located in the local system folder <hadoop-streaming-2.6.0.jar local folder>, that the Ruby mapper is located in the local mapper folder <mapper folder>, that the Ruby reducer is located in the local reducer folder <reducer folder>, and that the HDFS output folder has been named <HDFS output folder>. The next step is to run the following command in Hadoop.
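A command of roughly the following form can be used (the placeholders stand for the actual paths; the mapper.rb and reducer.rb file names are illustrative):

hadoop jar <hadoop-streaming-2.6.0.jar local folder>/hadoop-streaming-2.6.0.jar \
  -input <HDFS input folder>/InputData.txt \
  -output <HDFS output folder> \
  -mapper mapper.rb \
  -reducer reducer.rb \
  -file <mapper folder>/mapper.rb \
  -file <reducer folder>/reducer.rb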




This is an excerpt of my MovieID rating counts output (3706 in total).

MovieID Ratings file


The assumption is that the MovieID Ratings input dataset (in HDFS) is called InputData.csv. For the MapReduce Streaming command, the assumptions are that the data has been successfully loaded into the HDFS folder <HDFS input folder>, that the hadoop-streaming-2.6.0.jar file is located in the local system folder <hadoop-streaming-2.6.0.jar local folder>, that the Python mapper (KeyAvgmapper.py) is located in the local mapper folder <mapper folder>, that the Python reducer (KeyAvgreducer.py) is located in the local reducer folder <reducer folder>, and that the HDFS output folder has been named <HDFS output folder>. The next step is to run the following command in Hadoop.
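A command of roughly the following form can be used (the placeholders stand for the actual paths):

hadoop jar <hadoop-streaming-2.6.0.jar local folder>/hadoop-streaming-2.6.0.jar \
  -input <HDFS input folder>/InputData.csv \
  -output <HDFS output folder> \
  -mapper KeyAvgmapper.py \
  -reducer KeyAvgreducer.py \
  -file <mapper folder>/KeyAvgmapper.py \
  -file <reducer folder>/KeyAvgreducer.py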




This is an excerpt of my MovieID average ratings output (3706 in total).






Score Rating (category) counts file



The assumption is that the Score Rating (category) counts input dataset (in HDFS) is called InputData.txt. For the MapReduce Streaming command, the assumptions are that the data has been successfully loaded into the HDFS folder <HDFS input folder>, that the hadoop-streaming-2.6.0.jar file is located in the local system folder <hadoop-streaming-2.6.0.jar local folder>, that the Ruby mapper is located in the local mapper folder <mapper folder>, that the Ruby reducer is located in the local reducer folder <reducer folder>, and that the HDFS output folder has been named <HDFS output folder>. The next step is to run the following command in Hadoop.
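The command has the same shape as the MovieID job (the placeholders stand for the actual paths; the mapper and reducer file names are illustrative):

hadoop jar <hadoop-streaming-2.6.0.jar local folder>/hadoop-streaming-2.6.0.jar \
  -input <HDFS input folder>/InputData.txt \
  -output <HDFS output folder> \
  -mapper mapper.rb \
  -reducer reducer.rb \
  -file <mapper folder>/mapper.rb \
  -file <reducer folder>/reducer.rb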




This is an excerpt of my category counts output.








4. Highlights from the results



The results from the MapReduce jobs were very interesting. I give a very brief summary of plots produced with the SAS PROC SGPLOT procedure, using techniques from the tutorial in this post, and I supplemented the illustration with results from the SAS PROC UNIVARIATE procedure. First, the top 20 User_ID rating volumes (counts).





The basic summary measures provide useful statistics for analyzing the User_ID rating volumes and also serve as brief checks on the MapReduce job: the number of User_IDs that rated is 6040, and the sum of the rating counts equals 1000209.




The top 20 Movie_ID rating volumes.



The brief checks for the MapReduce job and summary statistics of the Movie_ID rating volumes.

The top 20 Movie_ID average ratings. The highest average rating was 5, which was the average rating for each of the top ten movies.

The brief check for the MapReduce job (only the count of 3706 Movie_IDs in this case) and summary statistics of the Movie_ID rating averages.

The category counts are consistent with the summary statistics of the movie averages.






Summary




The MapReduce programming model provides a very powerful method to process data. The Apache Hadoop Streaming facility provides a way to customize the processing according to programming style and cluster resources.

The MovieLens dataset has a lot of information about user ratings. The MapReduce jobs considered in this post provide a simple way to begin to analyze the dataset.

I hope this post proves useful to you in applying MapReduce and analyzing the MovieLens dataset(s).


Interested in finding out more information about simple approaches to Hadoop Streaming and cloud computing?


  
