Senior Snowflake Data Engineer - LATAM
This position offers you the opportunity to join a fast-growing technology organization that is redefining productivity paradigms in the software engineering industry.
Thanks to our flexible, distributed model of global operation and the high caliber of our experts, we have enjoyed triple-digit growth over the past five years, creating amazing career opportunities for our people.
If you want to accelerate your career working with like-minded subject matter experts, solving interesting problems and building the products of tomorrow, this opportunity is for you.
As a Snowflake Data Engineer at Parser, you will be part of our team and work on challenging engineering projects.
You will help improve data processes and tooling, automating workloads and pipelines wherever possible.
We also expect you to provide our client with your professional expertise, not only hands-on but also by helping improve the data framework currently under development.
Responsibilities
- Design, implement, optimize, and productionize data pipelines in Snowflake
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Create and maintain datasets that support business needs and products
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability
- Implement processes to improve quality, consistency, and reliability across pipelines (monitoring, retries, failure detection)
Technology Stack:
- A high-level programming language used in data applications (Python, Scala, Go, R, Java, etc.)
- Snowflake
- SQL
- dbt
- Postgres / MySQL
- Oracle
- Airflow
- Git
- AWS + Terraform
What you'll bring to us:
- MS or BS in CS, Engineering, Math, Statistics, or a related field, or equivalent practical experience in data engineering.
- Proven track record in a data engineering environment, developing and deploying software and pipelines.
- 3-5 years of experience working in data engineering using Snowflake.
- 2-4 years of experience working in data engineering using Python or another programming language commonly used in data engineering (Scala, Go, R, Java, etc.).
- Proven SQL skills.
- Experience with Snowflake as a data warehousing platform.
- Familiarity with tools for data transformation and pipelining, such as Airflow, dbt, Spark, and Pandas.
- Cloud experience: proficient in AWS, with expertise in data and analytics services such as Redshift, Kinesis, Glue, Step Functions, SageMaker, RDS, etc.
- Ability to build processes and infrastructure to manage the lifecycle of datasets: data structures, metadata, dependency and workload management.
- Experience working in an Agile environment, or openness to adopting this culture.
- Excellent English communication skills.
Extra
- Experience with Technologies like Kubeflow, EKS, Docker
- Experience with stream-processing systems: Kafka, Storm, Spark-Streaming, etc.
- Statistical analysis and modeling experience
- Experience with machine learning algorithms
- Data-driven approach to problem solving
- The ability to visualize and communicate complex concepts
Some of the benefits you'll enjoy working with us:
- The chance to work on innovative projects with leading brands, using the latest technologies that fuel transformation.
- The opportunity to be part of an amazing, multicultural community of tech experts.
- The opportunity to grow and develop your career with the company.
- A flexible and remote working environment.
Come and join our #ParserCommunity.
Follow us on LinkedIn.