Tambourine is one of the fastest-growing hospitality & tourism marketing firms. Combining best-in-class tech with creative design, we revolutionize e-commerce for hotels, resorts, and destinations. Find us online to learn more.
We are looking for a Bilingual Data Engineer to join our Analytics team in Bogotá.
The Data Engineer II is a key member of the analytics team responsible for designing, building, and owning robust and scalable data solutions on Google Cloud Platform.
This role involves developing complex data pipelines, implementing efficient data models, and managing infrastructure as code.
The ideal candidate will take full ownership of their projects, solve ambiguous problems independently, and collaborate closely with stakeholders to turn data into a critical asset for the organization.
This is an on-site position at our Bogotá office.
What we need from you:
- Bachelor's degree.
- 3+ years of professional experience as a Data Engineer.
- Full English proficiency.
- Proven expertise in Google BigQuery, including performance tuning, cost optimization, and writing complex, efficient queries.
- Hands-on experience building and deploying data pipelines using GCP services (Cloud Functions, Cloud Run, etc.) and orchestration with Apache Airflow / Cloud Composer.
- Strong experience in data modeling for analytical use cases.
- Demonstrated ability to productionize the ingestion and transformation of GA4 data.
- Familiarity with Infrastructure as Code (IaC) tools, preferably Terraform.
- A deep understanding of how data architecture impacts BI tools like Looker.
- A strong sense of ownership and accountability for the reliability and quality of data pipelines.
- Excellent independent problem-solving skills and the ability to tackle ambiguous challenges.
- Experience in stakeholder management and translating business needs into technical solutions.
- A collaborative spirit with an interest in mentoring junior team members.
Responsibilities
Advanced Data Pipeline Architecture & Development:
- Design, build, and deploy robust, scalable data pipelines using GCP services such as Cloud Functions, Cloud Run, and Dataflow.
- Orchestrate complex workflows, manage dependencies, and ensure data SLAs are met using Apache Airflow (Cloud Composer).
- Write, debug, and optimize complex SQL queries in Google BigQuery, implementing advanced features like partitioning, clustering, and materialized views to enhance performance and manage costs.
- Provision and manage GCP infrastructure programmatically using Infrastructure as Code (IaC) principles with tools like Terraform.
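To give a flavor of the partitioning and clustering work described above, here is a minimal Python sketch that builds a BigQuery `CREATE TABLE` statement with `PARTITION BY` and `CLUSTER BY` clauses. The table, column names, and helper function are illustrative only, not part of any requirement or official API:

```python
# Illustrative sketch: generating BigQuery DDL with partitioning and
# clustering, the kind of optimization used to keep query costs down.
# Table and column names are hypothetical examples.

def create_table_ddl(table, columns, partition_col, cluster_cols):
    """Build a CREATE TABLE statement partitioned by a date column
    and clustered on frequently filtered columns."""
    cols = ",\n  ".join(f"{name} {typ}" for name, typ in columns)
    return (
        f"CREATE TABLE IF NOT EXISTS {table} (\n  {cols}\n)\n"
        f"PARTITION BY {partition_col}\n"
        f"CLUSTER BY {', '.join(cluster_cols)}"
    )

ddl = create_table_ddl(
    "analytics.page_views",
    [("event_date", "DATE"), ("user_id", "STRING"), ("page", "STRING")],
    partition_col="event_date",
    cluster_cols=["user_id", "page"],
)
print(ddl)
```

Partitioning on the date column lets BigQuery prune whole days of data from a scan, while clustering on the filter columns co-locates related rows, both of which directly reduce bytes billed per query.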
Data Modeling & Analytics Enablement:
- Design and implement efficient and scalable data models (e.g., star schema, dimensional models) within BigQuery to support analytical workloads and BI performance.
- Productionize the end-to-end ingestion and transformation of Google Analytics 4 (GA4) data streams, effectively handling event-based data structures to create reliable, analysis-ready datasets.
- Collaborate closely with data analysts to understand how data models in BigQuery impact LookML development and dashboard performance in Looker, providing expert guidance and solutions.
- Translate business requirements from stakeholders into technical specifications for data pipelines and models.
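As a sketch of the GA4 transformation work described above: the GA4 BigQuery export stores `event_params` as a repeated key/value struct, and turning that into analysis-ready rows means unnesting those params into flat columns. The helper below is our own illustration, not an official SDK; only the field names follow the GA4 export schema:

```python
# Minimal sketch of flattening one GA4-export-style event record into
# an analysis-ready row. The flatten helper is hypothetical; the input
# layout mirrors GA4's event_params repeated key/value structs.

def flatten_ga4_event(event):
    """Unnest event_params into top-level columns on a flat dict."""
    row = {
        "event_date": event["event_date"],
        "event_name": event["event_name"],
        "user_pseudo_id": event["user_pseudo_id"],
    }
    for param in event.get("event_params", []):
        value = param["value"]
        # Each param populates exactly one of the typed value fields.
        for field in ("string_value", "int_value", "double_value"):
            if value.get(field) is not None:
                row[param["key"]] = value[field]
                break
    return row

raw = {
    "event_date": "20240101",
    "event_name": "page_view",
    "user_pseudo_id": "abc123",
    "event_params": [
        {"key": "page_location", "value": {"string_value": "https://example.com"}},
        {"key": "engagement_time_msec", "value": {"int_value": 1200}},
    ],
}
flat = flatten_ga4_event(raw)
```

In production this unnesting would typically live in SQL (`UNNEST(event_params)`) inside a scheduled transformation, but the Python version makes the event-based structure explicit.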
Team Contribution & Ownership:
- Take full ownership of data projects and pipelines, from initial design through to deployment, monitoring, and maintenance.
- Perform root cause analysis on ambiguous or complex data issues and implement effective, durable solutions with minimal supervision.
- Begin to mentor junior engineers through constructive code reviews, technical guidance, and sharing best practices.
- Effectively manage priorities and timelines for multiple concurrent projects.
Not required, but nice to have:
- Google Cloud Professional Data Engineer certification.
- Experience with streaming data technologies (e.g., Pub/Sub, Dataflow).
- Proficiency in Python testing frameworks (e.g., Pytest) and package management.
- Direct, hands-on experience developing LookML models in Looker.