Job Location: Bangalore / Chennai / Hyderabad / Mumbai / Pune
Joining: Immediate, or within 15 days at most
- Should have strong expertise in Extraction, Transformation and Loading (ETL) using Informatica Big Data Management 10.2.x, including pushdown optimization with the Spark, Blaze, and Hive execution engines.
- Should have strong expertise in dynamic mapping use cases, development, and deployment using Informatica Big Data Management 10.2.x.
- Should have experience transforming and loading complex data source types, such as unstructured and NoSQL data sources.
- Should have strong expertise in Hive, including Hive DDL, partitioning, and Hive Query Language (HQL).
- Should have a good understanding of the Hadoop ecosystem (HDFS, Spark, Hive).
- Should have strong expertise in SQL/PLSQL.
- Should have good working knowledge of Oracle/Sybase/SQL databases.
- Should have good knowledge of data lake and dimensional data modeling implementation.
- Should be able to understand requirements and write Functional Specification Documents, Design Documents, and Mapping Specifications.
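To illustrate the kind of Hive DDL, partitioning, and HQL skills listed above, here is a minimal sketch; all names (database, table, columns, partition key) are hypothetical and for illustration only:

```sql
-- Hypothetical example: a partitioned Hive table for daily data loads.
-- Database/table/column names are illustrative, not from any real project.
CREATE TABLE IF NOT EXISTS sales_db.sales_raw (
    order_id    BIGINT,
    customer_id STRING,
    amount      DECIMAL(12,2)
)
PARTITIONED BY (load_date STRING)
STORED AS ORC;

-- HQL query that prunes to a single partition via the partition column:
SELECT customer_id, SUM(amount) AS total_amount
FROM sales_db.sales_raw
WHERE load_date = '2023-01-15'
GROUP BY customer_id;
```

Partitioning by a load-date column lets Hive scan only the relevant partition directories instead of the full table, which is a common pattern in ETL workloads of this kind.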