Roles & Responsibilities
* Ingest data from files, streams, and databases.
* Process data with Spark, Scala, Kafka, Hive, and Sqoop.
* Develop Hadoop applications on Hortonworks or other Hadoop distributions.
* Pull data from various database systems, network elements, and unstructured text from the web, social media sites, and other domain-specific files.
* Develop efficient software for multiple use cases built on the platform, leveraging Python and Big Data technologies.
* Maintain operational excellence, ensuring high availability and platform stability.
* Implement scalable solutions to meet ever-increasing data volumes, using Big Data and cloud technologies such as Apache Spark and Kafka.
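As an illustrative sketch only (not taken from any actual project above), the ingest-and-aggregate pattern these bullets describe might look like the following minimal Python example. A real pipeline would use Spark or Kafka rather than the standard library; the function name, field names, and sample records here are all hypothetical:

```python
from collections import Counter
import csv
import io


def ingest_and_count(raw_csv: str, field: str) -> Counter:
    """Parse CSV text and count occurrences of one field's values.

    Stand-in for a Spark-style groupBy/count over ingested records.
    """
    reader = csv.DictReader(io.StringIO(raw_csv))
    return Counter(row[field] for row in reader)


# Hypothetical sample records, e.g. events pulled from a database or stream.
sample = """source,event
web,click
web,view
db,sync
web,click
"""

counts = ingest_and_count(sample, "source")  # Counter({'web': 3, 'db': 1})
```

In a Spark job the same aggregation would typically be expressed as a `groupBy("source").count()` over a DataFrame loaded from the ingested files or stream.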