An inspirational and fun working environment, an innovation-driven, fast-growing company, ambitious projects and an incredibly talented team are just a few reasons why you will love it here.
Data Engineer
The responsibilities and required experience are detailed below:
• Build and maintain data pipelines to support large-scale data management, in alignment with data strategy and data processing standards
• Experience in designing efficient and robust ETL workflows
• Experience in database programming using multiple SQL dialects
• Deploy scalable data pipelines for analytical needs
• Experience in the Big Data ecosystem, either on-premises (Hortonworks/MapR) or in the cloud (Dataproc/EMR/HDInsight)
• Hands-on experience with Hadoop-ecosystem query languages and tools such as Pig, Hive, SQL, Sqoop, and Spark SQL
• Experience with an orchestration tool such as Airflow or Oozie for scheduling pipelines (a minimal sketch follows this list)
• Scheduling and monitoring of Hadoop, Hive, and Spark jobs
• Basic experience in cloud environments (AWS, Azure, GCP)
• Understanding of in-memory distributed computing frameworks such as Spark (and/or Databricks), including parameter tuning and writing optimized Spark queries
• Experience using Spark Streaming, Kafka, and HBase
• Experience working in an Agile/Scrum development process
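For context on the orchestration items above, here is a minimal sketch of the kind of pipeline scheduling this role involves: an Airflow DAG that submits a daily Spark job. It assumes Airflow 2.x with the apache-spark provider package installed; the DAG id, application path, and tuning value are placeholders for illustration only, not part of this posting.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

    # A daily pipeline: a single task that submits a Spark ETL job.
    with DAG(
        dag_id="daily_etl_pipeline",              # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",               # run once per day
        catchup=False,                            # skip backfilling past runs
    ) as dag:
        SparkSubmitOperator(
            task_id="run_spark_etl",
            application="/opt/jobs/etl_job.py",   # placeholder path to the Spark job
            conn_id="spark_default",              # Airflow connection to the cluster
            conf={"spark.sql.shuffle.partitions": "200"},  # sample tuning knob
        )

In practice such a DAG would hold several dependent tasks (ingest, transform, load); a single task is shown here only to keep the sketch short.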
PREFERRED QUALIFICATIONS
• Exposure to the latest cloud ETL tools such as Glue/ADF/Dataflow is a plus
• Expertise in data structures, distributed computing, and manipulating and analyzing complex, high-volume data from a variety of internal and external sources
• Experience in building structured and unstructured data pipelines
• Proficiency in a programming language such as Python or Scala (see the sketch after this list)
• Good understanding of data analysis techniques
• Solid hands-on working knowledge of SQL and scripting
• Good understanding of relational/dimensional modeling and ETL concepts
• Familiarity with reporting tools such as Tableau, QlikView, or Power BI
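Likewise, a minimal PySpark sketch of the kind of batch transformation this role involves, combining the DataFrame API with an aggregation; the input/output paths, column names, and tuning value are invented for illustration only.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("order_rollup")                       # hypothetical job name
        .config("spark.sql.shuffle.partitions", "64")  # sample tuning parameter
        .getOrCreate()
    )

    # Read completed orders and roll them up to daily revenue.
    orders = spark.read.parquet("/data/orders")        # placeholder input path
    daily = (
        orders
        .where(F.col("status") == "COMPLETE")
        .groupBy("order_date")
        .agg(
            F.sum("amount").alias("revenue"),
            F.count("*").alias("n_orders"),
        )
    )
    daily.write.mode("overwrite").parquet("/data/daily_revenue")  # placeholder output
    spark.stop()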
Number of Vacancies: 20
Qualification: BE/BS/MTech/MS in Computer Science.
Experience: 2 to 6 years building data processing applications using Hadoop, Spark, NoSQL databases, and Hadoop streaming.