Friday, May 23, 2014

Hadoop Ecosystem - a growing list

Hadoop Ecosystem:

As we know, there are many other projects built around the core components of Hadoop, often referred to as the "Hadoop Ecosystem". Below is an extensive list, which continues to grow.
  • Distributed Filesystem
    • Hadoop Distributed File System (Apache Software Foundation)
    • HDFS is a distributed file system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster. Prior to Hadoop 2.0.0, the NameNode was a single point of failure (SPOF) in an HDFS cluster. The HDFS High Availability feature addresses this by running two redundant NameNodes in the same cluster in an active/passive configuration with a hot standby, using ZooKeeper to coordinate automatic failover (see the sample HA client configuration after this list).
    • Amazon S3 file system
    • Google File System (Google Inc.)
    • Ceph (Inktank, Red Hat)
    • GlusterFS (Red Hat)
    • Lustre (OpenSFS & EOFS)
  • Distributed Programming
    • MapReduce (Apache Software Foundation)
    • Apache Pig
    • JAQL
    • Apache Spark
    • Stratosphere
    • Netflix PigPen
    • AMPLab SIMR
    • Facebook Corona
    • Apache Twill
    • Damballa Parkour
    • Apache Hama
    • Datasalt Pangool
    • Apache Tez
    • Apache DataFu
    • Pydoop
  • NoSQL Databases
    • Column Data Model
      • Apache HBase
      • Apache Cassandra
      • Hypertable
      • Apache Accumulo
    • Document Data Model
      • MongoDB
      • RethinkDB
      • ArangoDB
    • Stream Data Model
      • EventStore
    • Key-value Data Model
      • Redis
      • LinkedIn Voldemort
      • RocksDB
      • OpenTSDB
    • Graph Data Model
      • ArangoDB
      • Neo4j
  • NewSQL Databases
    • TokuDB
    • HandlerSocket
    • Akiban Server
    • Drizzle
    • Haeinsa
    • SenseiDB
    • Sky
    • BayesDB
    • InfluxDB
  • SQL-on-Hadoop
    • Apache Hive
    • Apache HCatalog
    • AMPLab Shark
    • Apache Drill
    • Cloudera Impala
    • Facebook Presto
    • Datasalt Splout SQL
    • Apache Tajo
    • Apache Phoenix
  • Data Ingestion
    • Apache Flume
    • Apache Sqoop
    • Facebook Scribe
    • Apache Chukwa
    • Apache Storm
    • Apache Kafka
    • Netflix Suro
    • Apache Samza
    • Cloudera Morphline
    • HIHO
  • Service Programming
    • Apache Thrift
    • Apache Zookeeper
    • Apache Avro
    • Apache Curator
    • Apache Karaf
    • Twitter Elephant Bird
    • LinkedIn Norbert
  • Scheduling
    • Apache Oozie
    • LinkedIn Azkaban
    • Apache Falcon
  • Machine Learning
    • Apache Mahout
    • WEKA
    • Cloudera Oryx
    • MADlib
  • Benchmarking
    • Apache Hadoop Benchmarking
    • Yahoo Gridmix3
    • PUMA Benchmarking
    • Berkeley SWIM Benchmark
    • Intel HiBench
  • Security
    • Apache Sentry
    • Apache Knox Gateway
  • System Deployment
    • Apache Ambari
    • Apache Whirr
    • Cloudera HUE
    • Buildoop
    • Apache Bigtop
    • Apache Helix
    • Hortonworks HOYA
    • Brooklyn
    • Marathon
    • Apache Mesos
  • Applications
    • Revolution R
    • Apache Nutch
    • Sphinx Search Server
    • Apache OODT
    • HIPI Library
    • PivotalR
  • Development Frameworks
    • Spring XD
  • Miscellaneous
    • Talend
    • Apache Tika
    • Twitter Finagle
    • Apache Giraph
    • Concurrent Cascading
    • Yahoo S4
    • Intel GraphBuilder
    • SpagoBI
    • Jedox Palo
    • Twitter Summingbird
    • WibiData Kiji
    • Tableau
    • D3.js
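
To make the HDFS High Availability entry above concrete, here is a minimal sketch (not a definitive setup) of the client-side configuration for an HA-enabled HDFS cluster, written against the Hadoop Java API. The nameservice ID mycluster, the NameNode IDs nn1/nn2, and all host names are hypothetical placeholders; the property keys are the standard HDFS HA configuration keys.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsHaClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Logical name for the HA nameservice (placeholder).
        conf.set("dfs.nameservices", "mycluster");
        // The two redundant NameNodes (active/passive with hot standby).
        conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.mycluster.nn1", "namenode1.example.com:8020");
        conf.set("dfs.namenode.rpc-address.mycluster.nn2", "namenode2.example.com:8020");
        // Proxy provider that fails over between the two NameNodes.
        conf.set("dfs.client.failover.proxy.provider.mycluster",
                 "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        // ZooKeeper quorum used to coordinate automatic failover.
        conf.set("ha.zookeeper.quorum",
                 "zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181");
        conf.set("dfs.ha.automatic-failover.enabled", "true");

        // Clients address the logical nameservice, not a single NameNode,
        // so a NameNode failover is transparent to them.
        FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf);
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}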

Wednesday, May 14, 2014

Hadoop at a glance

Apache Hadoop, at its core, consists of two components: the Hadoop Distributed File System (HDFS) and Hadoop MapReduce. HDFS is the primary storage system used by Hadoop applications. HDFS creates multiple replicas of data blocks and distributes them on compute nodes throughout a cluster to enable reliable, extremely rapid computations. Hadoop MapReduce is a programming model and software framework for writing applications that rapidly process huge amounts of data in parallel on large clusters of compute nodes. Other Hadoop-related projects (which together form the Hadoop ecosystem) include Hive, Pig, HBase, YARN, Mahout, Oozie, Sqoop, Avro, Cascading, ZooKeeper, Flume, Drill, etc.
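
To give a feel for the MapReduce programming model, here is a minimal word-count sketch against the Hadoop 2.x Java API. The class names and the input/output paths taken from the command line are illustrative only.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: emit (word, 1) for every word in the input split.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reduce phase: sum the counts emitted for each word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation on map side
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output dir
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}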

Other technologies competing with Hadoop include Google Dremel, HPCC Systems, and Apache Storm.

Google Dremel is a distributed system developed at Google for interactively querying large datasets and powers Google's BigQuery service. 

HPCC (High Performance Computing Cluster) is a massively parallel processing computing platform for solving Big Data problems.

Apache Storm is a free and open source distributed real-time computation system. Storm makes it easy to reliably process unbounded streams of data, doing for real-time processing what Hadoop did for batch processing. Storm is simple and can be used with any programming language.
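
As a rough sketch of what a Storm topology looks like, here is a minimal example using the backtype.storm Java API of the Storm 0.9.x era; the spout, bolt, and topology names are hypothetical.

import java.util.Map;
import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;
import backtype.storm.utils.Utils;

public class SimpleTopology {

    // Spout: emits an unbounded stream of sentences.
    public static class SentenceSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private final String[] sentences = {"the quick brown fox", "jumped over the lazy dog"};
        private int i = 0;

        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
        }
        public void nextTuple() {
            Utils.sleep(100);
            collector.emit(new Values(sentences[i++ % sentences.length]));
        }
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("sentence"));
        }
    }

    // Bolt: splits each sentence into words.
    public static class SplitBolt extends BaseBasicBolt {
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            for (String word : tuple.getString(0).split(" ")) {
                collector.emit(new Values(word));
            }
        }
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }

    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("sentences", new SentenceSpout());
        builder.setBolt("split", new SplitBolt(), 2).shuffleGrouping("sentences");

        // Run in-process for testing; a real deployment submits via StormSubmitter.
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("simple-topology", new Config(), builder.createTopology());
        Utils.sleep(5000);
        cluster.shutdown();
    }
}

The spout/bolt split is the core abstraction here: spouts produce the unbounded stream, bolts transform it, and stream groupings (such as shuffleGrouping above) control how tuples are partitioned across parallel bolt instances.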


Hadoop distributions are provided by a growing number of companies. Their products include Apache Hadoop or a derivative work thereof, commercial support, and/or tools and utilities related to Hadoop. Some major Hadoop distribution vendors are Cloudera, Hortonworks, MapR, Amazon Web Services, Intel, EMC, IBM, etc.