Location:
Remote
Contract Type:
Full-time – Independent services contract (contrato por prestación de servicios)
English Level:
C1 (advanced; fluent communication and technical documentation)
Seniority:
Senior
About the Role
The Senior Data Engineer will be responsible for implementing business logic in Databricks to automate manual processes and optimize data workflows within the Azure ecosystem.
This is a full-time services contract role, working closely with analytics, BI, and business teams to translate requirements into efficient, scalable, and well-documented technical solutions.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using Databricks and PySpark.
- Implement complex business logic and ETL/ELT processes in Azure Data Factory and Databricks.
- Integrate structured and semi-structured data sources (SQL databases, JSON, APIs, flat files).
- Automate manual processes through optimized data orchestration and workflow design.
- Manage code versioning and deployment using GitHub (branching strategy, pull requests, CI/CD).
- Ensure data quality, traceability, and performance optimization across data pipelines.
- Collaborate with cross-functional teams and document technical solutions clearly in English.
Requirements
- Proven hands-on experience with Databricks and PySpark (essential).
- Solid knowledge of Azure Data Factory, Azure Synapse, and Azure Data Lake.
- Advanced proficiency in JSON, SQL, and data modeling.
- Experience with GitHub and modern version control practices.
- Strong analytical and problem-solving skills with a focus on automation.
- English level C1 (technical fluency required).