Teradata Big Data Software Engineer in Madrid, Spain

As a Big Data Software Engineer you will participate in projects that shape the strategy and architecture of our clients. You will provide technical knowledge and creative problem-solving capabilities to your team, working on the development of path-breaking, large-scale Big Data and Analytics solutions. You have some experience with, and understand, the Big Data ecosystem (at least Hadoop, HDFS, Hive, YARN, Kafka, Flume, Spark, and NoSQL databases). You also have sound knowledge of the needs of Analytics solutions, data pipelines, and data governance concerns.

You will be asked to design and develop code, scripts, and data pipelines that integrate structured and unstructured data from multiple sources, using shell scripts, Java, Scala, Python, SQL, and related languages. You will participate in architecting and installing complex software environments, and in customer workshops, supporting your team in addressing our customers' concerns.

As a Teradata Think Big Analytics team member, we will appreciate your help in establishing Teradata thought leadership in the Big Data space by contributing white papers and technical commentary, and by participating in industry conferences.

This position offers the opportunity to be at the forefront of the growing Big Data and Analytics market. The successful candidate possesses knowledge of past and modern data architecture patterns and a strong sense of the art of the possible, and can at the same time balance it with the requirements of large organizations to build reliable data environments. The right candidate is excited about working hands-on with some of the biggest and most well-known organizations.

Responsibilities

Your main responsibilities are to deliver on our commitments and to advise organizations on how to make Big Data and Analytics technologies work for them. You will design and develop large-scale data processing solutions for Fortune 500 clients. You will be a member of a team that develops and implements advanced algorithms and data pipelines that extract, classify, merge, and deliver new insights and business value out of heterogeneous structured and unstructured data sets. You will have the chance to learn and work with multiple technologies and with thought leaders in the Big Data space.

In addition:

• Support sales activities - You may be asked to support the sales team in demonstrating to our clients the benefits of modern data architectures, and in positioning Teradata Think Big Analytics as the trusted advisor of choice.
• Develop customer relationships - As a Think Big Analytics ambassador to our clients, you will develop deep relationships with stakeholders at your level in their organizations, proactively advising on needs and issues and deriving actionable solutions.

Knowledge, Skills and Abilities

• Excellent interpersonal skills. Strong verbal and written communication, with good exposure to working in a cross-cultural environment. You may be asked to present topics to small audiences. Experienced in writing documents that communicate complex technical topics in an accessible manner.
• Experience with data architectures. Knowledge and experience of Big Data and/or Analytics technologies and tools, such as the Hadoop ecosystem, Apache Hive, and Spark, among others. Sound knowledge of data governance and security, and of data-related methodologies.
• At least some experience with other data platforms, such as relational database systems, data warehouses, or other OLAP systems, and in delivering enterprise-level ETL pipelines, data workflows, migrations, and lifecycles, is also required.
• A strong background in software development, continuous integration, tooling, and software architectures is needed, whether in enterprise, system integration, or science-related environments. You are proficient in one or more of Java, Scala, C, Python, SQL, Ruby, or Clojure. You may also have some experience with Tableau, Shiny, R, JavaScript, and so forth.

Job Qualifications

Required

Hands-on experience with the Hadoop stack and related Big Data technologies: Hive, Spark, Kafka, Flume, HBase. Knowledge and experience of structured, semi-structured, and unstructured data.

A strong background in software development, continuous integration, tooling, software architectures, and software development patterns is needed, whether in enterprise, system integration, or science-related environments. Experience programming in Java, Scala, Python, and/or SQL.

Experience with Unix/Linux: Bash scripting, SSH tunneling, networking tools (netstat, lsof, ifconfig), and Unix pipes.
Professional or academic background that includes mathematics and statistics.
Experience with SQL, relational database design, and methods for efficiently retrieving data.
Experience with NoSQL databases (HBase, Cassandra, MongoDB).
Strong analytical skills and creative problem solving.
Excellent verbal and written communication skills.
Strong team player capable of working in a demanding environment.

Desired

Experience with Docker containers and orchestration platforms such as ECS, Kubernetes, Mesos, and/or Docker Swarm.
Experience with cloud services from AWS, GCP, and/or Azure (e.g. EMR, S3, AWS Lambda, Google Cloud Dataproc, HDInsight).

Education and Experience

• Two or more years of experience in relevant roles.
• Computer Science, Mathematics, Physics, Engineering, or other relevant degree.
• Consulting experience is considered a plus.

Complementary Information

• We offer you a far-from-average place to work: we are inspiring and passionate people. We may work hard at times, but always in a dynamic, relaxed, and collaborative culture. We offer you the chance to join a rapidly expanding organization with ambitious growth targets, where you can really make a difference and shape the future.
• The position is based in Madrid. However, we are often required to spend time on-site with our customers, mainly local but also international. As a Teradata Think Big Analytics team member, you will be required to travel for work. Business-level English is mandatory.