Experimenting with Hadoop

Posted: 2010-12-14

Thanks to our web archiving team (who lead the UK Web Archive project), I was given a day of training on using Hadoop today. I was already fairly familiar with the map-reduce and HDFS architecture, but I'd not had a chance to actually develop a map-reduce task or run one on a real cluster with some real data. Today's training gave me that chance, and I'm really pleased with the results.

Firstly, I was pleased because developing the map-reduce tasks was pretty easy. We went through the first two Cloudera training examples, and after that developing my own task took about half an hour. Furthermore, because the web archiving team were leading the training, we had a 23-node cluster with some genuine data to play with, in the form of a 50GB crawl log generated by Heritrix.

This log contained summaries of the requests and responses from a large web crawl covering hundreds of millions of URLs, including information on the response status codes, sizes, dates, mime-types, and so on (see this page for details). My 30-minute coding exercise simply picked out the 125,295,840 HTTP 200 OK responses, emitting each response's mime-type during the 'map' phase and counting up the frequency of each mime-type during the 'reduce' phase. The job took about a minute to run, and the basic results are shown below.
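For the curious, here's a rough sketch of what a job like this looks like in the standard Hadoop Java API. This is a reconstruction rather than the code I actually ran, and it assumes the status code and mime-type sit in the second and seventh whitespace-separated columns of the Heritrix crawl log — the exact column positions and class names are illustrative assumptions.

```java
// Minimal sketch of a mime-type counting job over a Heritrix-style crawl log.
// Assumes whitespace-separated fields with the HTTP status code in column 2
// and the mime-type in column 7 (an assumption about the log layout).
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MimeTypeCount {

    public static class MimeMapper
            extends Mapper<LongWritable, Text, Text, LongWritable> {

        private static final LongWritable ONE = new LongWritable(1);
        private final Text mimeType = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Split the log line on runs of whitespace.
            String[] fields = value.toString().trim().split("\\s+");
            if (fields.length < 7) {
                return; // skip malformed lines
            }
            // Only keep the successful (HTTP 200) responses.
            if (!"200".equals(fields[1])) {
                return;
            }
            // Normalise the mime-type to lower case (see the update note below).
            mimeType.set(fields[6].toLowerCase());
            context.write(mimeType, ONE);
        }
    }

    public static class SumReducer
            extends Reducer<Text, LongWritable, Text, LongWritable> {

        @Override
        protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum the counts for each mime-type key.
            long total = 0;
            for (LongWritable count : values) {
                total += count.get();
            }
            context.write(key, new LongWritable(total));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "mime-type count");
        job.setJarByClass(MimeTypeCount.class);
        job.setMapperClass(MimeMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Reusing the reducer as a combiner keeps the shuffle small, since most of the 125 million map outputs collapse down to a few hundred mime-type keys before leaving each node.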

The first graph shows the top ten mime-types encountered during the crawl.

However, this is a little misleading, as the linear scale hides the long tail of file formats. A full chart showing all 637 mime-types in the data set requires a logarithmic vertical scale in order to be examined properly.

So, although the data set is dominated by a few main formats, the long tail of minor formats still makes up a significant proportion of the whole. About 66% of the files are HTML, and the ten most popular types combined make up about 90% of the content. The remaining 10% is made up of the other 627 mime-types, which makes that part of the collection much more difficult to deal with.

I’m sure this is just reproducing some well-known findings in the field, but that’s not really the point. The point is that, once the Hadoop infrastructure is in place, this kind of exploration is very quick, easy, and great fun to do.

(Update: re-ran the experiment and analysis, correcting for mime-types that were not lower-cased.)
