Databricks Engineer / Big Data Engineer
We are looking for a motivated and goal-oriented Databricks Engineer to join our team.
The Databricks Engineer is responsible for designing, building, and maintaining scalable data pipelines and solutions on the Databricks platform, and collaborates with data scientists, architects, and analysts to ensure high-quality, efficient, and reliable data processing.
Responsibilities:
Develop and maintain ETL/ELT pipelines using Databricks and Apache Spark (see the sketch after this list)
Transform raw data into structured formats for analytics and machine learning
Optimize performance of Databricks notebooks and workflows
Integrate Databricks with cloud services (AWS, Azure, or GCP) and other data sources
Implement data quality, validation, and monitoring processes
Collaborate with data scientists to support ML model training and deployment
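To give a flavor of the day-to-day pipeline work described above, here is a minimal sketch of a Delta Lake ETL step in PySpark on Databricks; the paths, table names, and columns are illustrative placeholders, not part of our actual stack.

```python
# Minimal, illustrative Delta Lake ETL step (PySpark on Databricks).
# All paths, table names, and columns below are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # already provided in Databricks notebooks

# Extract: read raw JSON events from cloud storage
raw = spark.read.json("/mnt/raw/events/")

# Transform: keep valid rows and derive a date column for partitioning
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: append into a partitioned Delta table used downstream for analytics and ML
(clean.write
      .format("delta")
      .mode("append")
      .partitionBy("event_date")
      .saveAsTable("analytics.events_clean"))
```

In practice a step like this runs as a scheduled Databricks job or workflow, with data-quality checks and monitoring around it.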
Requirements:
2–5 years in data engineering roles, preferably with Databricks
Strong experience with Apache Spark and at least one of Python, Scala, or SQL
Hands-on experience with Databricks notebooks, Delta Lake, and MLflow
Knowledge of cloud platforms (Azure Databricks, AWS, GCP)
Understanding of data warehousing, data lakes, and big data concepts
Conversational English at B2 level or higher (will be validated)
We offer:
100% Remote
Remuneration in dollars
Stability and growth opportunities
If you're ready to turn data into game-changing insights, we want to hear from you. Apply today!