Teradata Technical Consultant - Data Engineer in Powai, Mumbai, India

Role Description:

This position is for a Senior Hadoop Developer who will specialize in Big Data (Hadoop / MapReduce / Spark) projects for Think Big. Strong experience in Java and Spark development is a key competency for this position.

Minimum Requirements:

  • Minimum 3 years of relevant experience in Hadoop (HDFS, Hive, MapReduce, Oozie)

  • At least 1 year of working experience in Spark development

  • Good hands-on experience with Core Java and other programming languages such as Scala or Python

  • Working experience with Kafka

  • Excellent understanding of object-oriented design and design patterns

  • Experience working independently and as part of a team to debug application issues using configuration files, databases, and application log files

  • Good knowledge of optimization and performance tuning

  • Working knowledge of an IDE (Eclipse or IntelliJ)

  • Experience working with a shared code repository (VSS, SVN, or Git)

  • Experience with a software build tool (Ant, Maven, or Gradle)

  • Knowledge of web services (SOAP, REST)

  • Good experience with basic SQL and shell scripting

  • Awareness of Hadoop security

  • Awareness of Apache NiFi

  • Databases: Teradata, DB2, PostgreSQL, MySQL, Oracle (one or more required)

  • Able to work with and enhance predefined frameworks

  • Able to communicate effectively with customers

  • Understanding of DevOps tools such as Git, Jenkins, and Docker

  • Experience with, or an understanding of, promoting Big Data applications into production

Nice-to-Have Experience:

  • Working experience with Apache NiFi

  • Exposure to XML and JSON processing

  • Awareness of Big Data security and related technologies

  • Experience with web servers: Apache Tomcat or JBoss

  • Preferably, at least one project the candidate has worked on should be in production