
Survey on Basic Concept of Big Data

Author(s):

S. Manasa, G. Pulla Reddy Engineering College; Y. Rama Mohan, G. Pulla Reddy Engineering College

Keywords:

Terabytes, Petabytes, Traditional Data Processing Applications, Hadoop, MapReduce, Hive, Pig, Zookeeper, HBase

Abstract

Big Data, an emerging technology, refers to collections of very large data sets, including medical data, business data, weather data, organizational data, and more. The size of such data ranges from dozens of terabytes to many petabytes. Because of this sheer volume, the data becomes difficult to handle with traditional data processing applications such as relational database management systems and desktop statistics or visualization packages. Massively parallel processing software running on hundreds or thousands of servers is therefore required. The main challenges are capture, storage, retrieval, sharing, searching, transfer, analysis, and visualization. Many techniques exist to handle and process these data sets, among them Hadoop, MapReduce, Hive, Pig, ZooKeeper, and HBase.
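To illustrate the MapReduce model the abstract refers to, a minimal sketch follows: a word-count job expressed as a map phase and a reduce phase, simulated here in plain Python on a single machine. In an actual Hadoop deployment the framework distributes these phases across many servers; the names map_phase and reduce_phase below are illustrative only and are not part of any Hadoop API.

from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the input split.
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    # Shuffle + reduce: group the pairs by word and sum the counts.
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

if __name__ == "__main__":
    documents = [
        "big data needs parallel processing",
        "hadoop and mapreduce process big data in parallel",
    ]
    # Run the map phase over every input document, then reduce the combined output.
    mapped = [pair for doc in documents for pair in map_phase(doc)]
    print(reduce_phase(mapped))

The same split into independent map tasks followed by a grouping reduce step is what lets frameworks such as Hadoop scale the computation out over hundreds or thousands of servers.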

Other Details

Paper ID: IJSRDV3I60631
Published in: Volume : 3, Issue : 6
Publication Date: 01/09/2015
Page(s): 1228-1230
