Senior Data Engineer

  • Data
  • São Paulo, São Paulo


Founded in 2013 by two American entrepreneurs, Escale is a rapidly growing customer acquisition company located in São Paulo, Brazil. We use data and technology to deliver more intelligent and optimized buying experiences to consumers. We manage the end-to-end sales funnel for big brands in industries such as telecom, healthcare and education with a simple thesis: the best sales strategy is to deliver an amazing buying experience to end users. 

With recent investments from top VCs such as Kaszek Ventures, Global Founders Capital and Redpoint e.Ventures, we are expanding our engineering and data teams to develop a world-class customer acquisition platform.

About Us 
We dream big and constantly focus on the customer, knowing that what we do today lays the building blocks for a greater future. We have a highly diverse culture and intend to keep it that way, respecting everyone regardless of their age, sex, skin color, country of origin, sexual orientation, or experience level. We put a deep understanding of agile values above specific processes and methodologies. We make decisions based on data and are highly results-oriented. Our culture resembles a sports team more than a family: trust and respect are earned by winning hard battles together.

Your Mission
The mission of the Data Pipeline team is to accelerate the success of our customers through integrations and transformations that generate reliable data. As a team member, you will help define and implement our data delivery strategy. You will be responsible for building integration pipelines and transformations, and for making Escale's data available in a readable, easy-to-use, and scalable catalog.

Desired Skills and Expected Qualifications

  • Team coaching and technical leadership
  • Solid foundation in computer science and fluency in applying it to practical problems
  • Good knowledge of a modern programming language, e.g. Python or Scala
  • Mastery of an RDBMS such as Redshift, MariaDB, or Postgres
  • Practical knowledge of data lake storage such as Hadoop or S3
  • Practical knowledge of data serialization formats such as JSON, Parquet, and Avro
  • Experience with open standards and open-source technologies
  • Deep understanding of agile principles and their practical applications
  • Naturally self-driven collaborator
  • Experience with data pipelines and dimensional data modeling
  • Experience with workflow and scheduling tools such as Airflow or Luigi
  • Deep knowledge of SQL, including performance optimizations

Preferred Qualifications

  • Working experience with big data technologies
  • Knowledge of business rules