Big Data Hadoop Administration and Analytics


Data Analytics (Data Science) is the science of analyzing data to convert raw information into useful knowledge. This knowledge can help us understand our world better and, in many contexts, enable us to make better decisions. While this is the broad and grand objective, the last 20 years have seen steeply decreasing costs to gather, store, and process data, creating an even stronger motivation for the use of empirical approaches to problem solving. This course presents a wide range of data analytic techniques and is structured around the broad contours of the different types of data analytics, namely descriptive, inferential, predictive, and prescriptive analytics.

Benefits of Learning Big Data Hadoop Administration and Analytics from Selecom Technology


Course Content

Getting Started with Database (RDBMS)

  • Concept of the Database and RDBMS
  • Basic Select Statements & conditions
  • Operations and Flow of Commands
  • Creating and Managing Tables
  • Date Time Functions
  • Data Definition Language & Commands
  • Data Manipulation Language & Commands
  • Transaction Control Language & Commands
  • Constraints (PK, UNIQUE, NOT NULL, CHECK)
  • Relationship (Foreign Key)
  • Database Objects (Sequence, Index, View)
  • Schema, User Creation, and Privileges
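The DDL, DML, constraint, and transaction topics above can be sketched in a few lines of SQL (table and column names are illustrative):

```sql
-- DDL: create tables with constraints and a foreign-key relationship
CREATE TABLE dept (
    dept_id   INT PRIMARY KEY,
    dept_name VARCHAR(30) NOT NULL UNIQUE
);

CREATE TABLE emp (
    emp_id   INT PRIMARY KEY,
    emp_name VARCHAR(50) NOT NULL,
    salary   DECIMAL(10,2) CHECK (salary > 0),
    dept_id  INT REFERENCES dept(dept_id)   -- foreign key
);

-- DML: insert rows and select with a condition
INSERT INTO dept VALUES (10, 'SALES');
INSERT INTO emp  VALUES (1, 'Asha', 45000.00, 10);
SELECT emp_name, salary FROM emp WHERE salary > 40000;

-- TCL: make the changes permanent
COMMIT;
```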

The Base of Hadoop: JAVA Programming

  • Introduction to OOP Concepts
  • Installing and Starting programming in Java
  • Operators and Relations
  • Branching (One, Two, Multi-Way)
  • Looping Constructs
  • Functions (Void, Return Type, Default, Parameterized, Static, Non-Static)
  • Constructors (Single/Overloading)
  • Multiple classes
  • Inheritance (Single, Multilevel)
  • Access Specifiers (Public, Private, Protected)
  • Arrays & Exceptions
  • Interface & Packages
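Most of the topics above fit into one small program; the sketch below (class names are illustrative) shows constructor overloading, single inheritance, access specifiers, and a static function over an array:

```java
// Illustrative sketch of the Java topics above.
class Shape {
    protected String name;                    // protected: visible to subclasses
    Shape() { this("shape"); }                // default constructor
    Shape(String name) { this.name = name; }  // parameterized (overloading)
    double area() { return 0.0; }
}

class Circle extends Shape {                  // single inheritance
    private double r;                         // private: hidden from other classes
    Circle(double r) { super("circle"); this.r = r; }
    @Override double area() { return Math.PI * r * r; }
}

public class Demo {
    static double total(Shape[] shapes) {     // static function over an array
        double sum = 0;
        for (Shape s : shapes) sum += s.area();
        return sum;
    }
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Circle(2.0) };
        System.out.printf("total area = %.2f%n", total(shapes));
        // prints: total area = 15.71
    }
}
```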

The Server-Side OS: LINUX

  • Installing Linux
  • Manual Partition Creation During Installation
  • Getting Familiar with the Linux Environment
  • Working on the Terminal
  • Getting Familiar with the Commands: mkdir, cd, touch, rm, cal, ls, vim, gedit, cp, mv, tar, etc.
  • Editing Files in Linux
  • Creating and Managing Users & Groups
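A short terminal session touching the commands listed above (file names are illustrative):

```shell
mkdir demo_dir              # create a directory
cd demo_dir
touch notes.txt             # create an empty file
echo "hello linux" > notes.txt
cp notes.txt backup.txt     # copy a file
mv backup.txt old.txt       # rename/move a file
ls                          # list: notes.txt  old.txt
tar -czf archive.tar.gz notes.txt old.txt   # bundle into a compressed archive
rm old.txt                  # remove a file
cat notes.txt               # prints: hello linux
```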

Configuring SCALA & SPARK

  • Downloading and Installing SPARK
  • Configuring SPARK
  • Starting SPARK DAEMON
  • Starting SPARK Shell
  • Installing and Configuring SCALA
  • Working with SCALA command Line

SCALA Command Line & Programming

  • Declaring Variables/ Constants
  • Operations (+,-,*, /, %)
  • Relations (<, >, <=, >=, !=)
  • BigInteger and BigDecimal
  • Importing libraries (scala.math._)
  • Command-Line Functions: abs, cbrt, sqrt, round, floor, ceil, exp, pow, hypot, log, log10, min, max, random, toRadians, toDegrees, sin, cos, tan, etc.
  • Writing basic Scala Programs
  • Objects in scala
  • Conditions in scala
  • Loops in Scala
  • Concepts of Array
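The topics above can be tried directly in the Scala REPL; a small sketch (values are illustrative):

```scala
// Illustrative sketch of the Scala topics above; paste into the REPL.
import scala.math._

val radius: Double = 3.0          // immutable value
var evens: Int = 0                // mutable variable

// command-line math functions from scala.math._
val area = Pi * pow(radius, 2)
val side = sqrt(16)               // 4.0
val up   = ceil(2.1)              // 3.0
val big  = BigInt(2).pow(100)     // arbitrary-precision integer

// conditions and loops over an array
val nums = Array(4, 7, 1, 9)
for (n <- nums) {
  if (n % 2 == 0) evens += 1      // count the even numbers
}
println(s"even numbers: $evens")  // prints: even numbers: 1
```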

Hadoop for Bigdata

Prerequisite for Hadoop:
  • Downloading JDK for Linux
  • Installing and Configuring JDK for Hadoop
  • Editing the .bash_profile for JAVA
  • Making JAVA available for all users
  • Setting environment Variable for JAVA
  • Verifying JAVA (Running JAVA code on Linux)

Hadoop Installation & Configuration:
  • Downloading and Installing Hadoop for Standalone Installation
  • Creating Environment Variable
  • Verifying Hadoop installation
  • Installing HADOOP in Distributed Mode
  • Configuring core-site.xml
  • Configuring hdfs-site.xml
  • Configuring yarn-site.xml
  • Configuring mapred-site.xml
  • NameNode Setup
  • DataNode Setup
  • Formatting the NameNode
  • Starting HDFS
  • Logging in Using ssh localhost
  • Accessing HADOOP on BROWSER
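The four XML files above carry most of the configuration; a minimal single-node sketch, assuming the common defaults (adjust host, port, and paths for your cluster):

```xml
<!-- core-site.xml: where clients find the NameNode -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml: replication factor for HDFS blocks -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>  <!-- single node, so keep one copy -->
  </property>
</configuration>
```

After editing these files, the NameNode is formatted once with `hdfs namenode -format` and HDFS is started with `start-dfs.sh`; the NameNode web UI is then reachable in a browser on its HTTP port (50070 on Hadoop 2.x, 9870 on 3.x).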

The NoSQL Database: HBASE

  • Downloading and Installing HBASE
  • Configuring HBASE
  • Setting Environment & Managing XML file
  • Configuring ROOT_DIR & DATA_DIR
  • Creating and Managing Tables
  • Inserting Data Using PUT
  • Updating Data Using PUT
  • Using Scan & Get Commands to Retrieve The DATA
  • Altering the Tables
  • Renaming a Table
  • Deleting the Data
  • Disabling and Dropping a Table
  • Versioning in HBASE
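The table operations above map onto short HBase shell commands; an illustrative session (the table and column-family names are assumptions):

```
create 'emp', 'personal'                    # table with one column family
put 'emp', '1', 'personal:name', 'Asha'     # insert a cell
put 'emp', '1', 'personal:name', 'Asha K'   # PUT again = update (new version)
get 'emp', '1'                              # retrieve one row
scan 'emp'                                  # retrieve all rows
alter 'emp', NAME => 'personal', VERSIONS => 3   # keep 3 cell versions
delete 'emp', '1', 'personal:name'          # delete a cell
disable 'emp'                               # must disable before dropping
drop 'emp'
```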

Understanding Hadoop Ecosystem

Introduction to the Hadoop ecosystem, which includes HDFS, MapReduce, YARN, HBase, Hive, Pig, the HMaster, the ZooKeeper service, the NameNode, and the NodeManager.


  • Introduction to MapReduce
  • Using WordCount Program in Java
  • Creating Required Directories in HDFS
  • Putting Files (Java Files) from XFS to HDFS
  • Compiling Java File in HDFS
  • Generating Output From Jar File
  • Showing Output Using "hadoop dfs -cat"
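At its core, WordCount maps each line to (word, 1) pairs and reduces by summing per word; the sketch below shows that logic in plain Java, without the Hadoop API, so it can run anywhere:

```java
import java.util.*;

public class WordCountSketch {
    // "map": split each line into words; "reduce": sum a count per word
    static Map<String, Integer> wordCount(List<String> lines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {                       // each mapper handles some lines
            for (String word : line.toLowerCase().split("\\s+")) {
                if (!word.isEmpty())
                    counts.merge(word, 1, Integer::sum);  // the reduce step
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("big data big cluster", "data lake");
        System.out.println(wordCount(lines));
        // prints: {big=2, cluster=1, data=2, lake=1}
    }
}
```

On a cluster, Hadoop distributes the map step across DataNodes and shuffles the pairs to reducers; the summing logic stays the same.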

HIVE & Derby DB

  • Installing And Configuring Hive
  • Installing and Configuring Derby DB
  • Configuring MetaStore in Hive
  • Creating Databases & Tables
  • DDL
  • DML
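Once the Derby-backed metastore is in place, Hive's DDL and DML read like ordinary SQL; an illustrative sketch (the table name and file path are assumptions):

```sql
-- DDL: a Hive-managed table over comma-delimited text
CREATE TABLE sales (item STRING, qty INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

-- DML: load a local CSV file and query it
LOAD DATA LOCAL INPATH '/tmp/sales.csv' INTO TABLE sales;
SELECT item, SUM(qty) FROM sales GROUP BY item;
```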

Introduction to Python

  • Verifying the Python Installation
  • Command Line operations
  • String Concepts and Slicing, e.g. s[2:5]
  • List
  • Tuple
  • Dictionary
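The string, list, tuple, and dictionary topics above can be tried at any Python 3 prompt; a small sketch (values are illustrative):

```python
# Illustrative sketch of the Python topics above.

s = "Hadoop"
print(s[2:5])           # slicing -> doo

nums = [4, 7, 1]        # list: mutable, ordered
nums.append(9)

point = (3, 5)          # tuple: immutable

ages = {"asha": 30, "ravi": 28}   # dictionary: key -> value
ages["ravi"] += 1

print(nums, point, ages["ravi"])  # [4, 7, 1, 9] (3, 5) 29
```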

Installing R-Programming & PIG

  • Installing R
  • Downloading Pig
  • Installing Pig
  • Getting to the grunt shell
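A first script at the grunt shell is typically a word count in Pig Latin; an illustrative sketch (the input path is an assumption):

```
lines   = LOAD '/tmp/words.txt' AS (word:chararray);
grouped = GROUP lines BY word;
counts  = FOREACH grouped GENERATE group, COUNT(lines);
DUMP counts;
```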

Enroll Now