Teradata Senior Technical Consultant in Pune, India
- Hadoop Applications Support
Provide applications support for Think Big customers on Hadoop platforms. These customers typically have 24/7 contracts, so the successful applicant must be prepared to work in shifts and to be on call to support customer sites per contractual obligations.
3-6 years of experience managing and supporting large-scale production Hadoop environments (configuration management, monitoring, and application performance tuning) on any of the major Hadoop distributions (Apache, Hortonworks, Cloudera, MapR, IBM BigInsights, Pivotal HD).
Around 3-6 years of experience in applications support engagements on large-scale systems (Java/J2EE, any ETL tool, strong knowledge of SQL queries and Unix shell scripting, BI operations, analytics support).
Experience in Hadoop components such as:
Streaming tools – NiFi
Data pipeline tools
Experience working independently and as part of a team to debug application issues using configuration files, databases, and application log files.
Perform root-cause analysis for job failures and data quality issues, and provide solutions.
Working understanding of the software development lifecycle; able to communicate incident and project status, issues, and resolutions.
Experience with incident management and change management processes and tools such as ServiceNow and JIRA.
3+ years of experience with scripting languages (Linux shell, SQL, Python); must be proficient in shell scripting.
Experience developing or supporting RESTful applications.
Working knowledge of the Linux operating system is required.
Strong written and verbal communication skills.
Database support or application DBA experience – Oracle, DB2, MySQL, PostgreSQL.
Knowledge of Storm and Accumulo.
Knowledge of ETL tools – Talend, Informatica, DataStage.
Development, implementation, or deployment experience in the Hadoop ecosystem.
Working experience with at least one scheduling tool (Control-M, JCL, Unix/Linux cron, etc.).
Proficiency in Hive internals (including HCatalog), Sqoop, Pig, Oozie, and Flume/Kafka.
Proficiency with at least one of the following: Java, Python, Perl, Ruby, C, or web-related development.
Development or operational knowledge of NoSQL technologies such as HBase, MongoDB, Cassandra, and Accumulo.
Development or operational knowledge of web or cloud platforms such as Amazon S3, EC2, Redshift, Rackspace, and OpenShift.
Development/scripting experience with configuration management and provisioning tools, e.g., Puppet, Chef.
Web/application server and SOA administration (Tomcat, JBoss, etc.).
Handle deployment methodologies and code/data movement between development, QA, and production environments (deployment groups, folder copy, data copy, etc.).
Able to articulate and discuss the principles of performance tuning on Hadoop.
Develop and produce daily/weekly operations reports and metrics as required by IT management.
Experience with any of the following is an added advantage:
Hadoop integration with large-scale distributed DBMSs such as Teradata, Teradata Aster, Vertica, Greenplum, Netezza, DB2, and Oracle.
Data modeling, or the ability to understand data models.
Knowledge of business intelligence and/or data integration (ETL) solution delivery techniques, models, processes, and methodologies.
Exposure to data acquisition, transformation, and integration tools such as Talend and Informatica, and to BI tools such as Tableau and Pentaho.
Linux administrator certification.