About Us
At Teams Plus, we're building the next-generation AI Analytics Platform that transforms raw telecom and contact-center data into actionable insights for high-velocity operations.
Our stack centers on Snowflake + Sigma + AI orchestration, powering intelligent dashboards and future-ready features like conversational analytics, predictive alerts, and workflow automation.
We're looking for a Data Engineer who's excited to shape the foundation of this vision.
You won't just build pipelines — you'll architect the backbone that fuels AI-driven insights for thousands of users across North America.
⸻
Why This Role Matters
As one of our first dedicated Data Engineers, you'll:
• Own the end-to-end data flow, from ingestion to analytics-ready models.
• Enable the team to scale quickly by building reusable, modular pipelines.
• Ensure the trustworthiness and reliability of our customer-facing dashboards.
• Directly influence how our platform evolves into a global AI-powered product.
This is a builder role: you'll have autonomy, visibility, and the chance to set best practices that define how our data stack grows.
⸻
What You'll Do
Design and Develop
• Architect and maintain scalable ELT pipelines into Snowflake.
• Transform raw data into curated, analytics-ready datasets.
• Implement automated data quality checks, lineage tracking, and schema management.
Collaborate and Lead
• Work closely with engineers, analysts, and product managers to meet fast-evolving data needs.
• Serve as the technical owner of data modeling best practices (star schemas, dimensional models, dbt conventions).
• Participate in agile ceremonies, code reviews, and roadmap discussions to align engineering with business goals.
Innovate and Future-Proof
• Explore and implement real-time or event-based ingestion where it makes sense.
• Leverage infrastructure-as-code (Terraform, CI/CD pipelines) to ensure reliable deployments.
• Contribute ideas for how AI/ML models can be powered with high-quality data pipelines.
⸻
What We're Looking For
• 3+ years of experience in data engineering or backend engineering focused on data workflows.
• Strong SQL fluency with cloud warehouses (Snowflake preferred; BigQuery or Redshift welcome).
• Hands-on experience with ELT orchestration (dbt, Airflow, Dagster, or similar).
• Proficiency in Python for data transformations, automation, and tooling.
• Experience working in cloud environments (Azure is a plus; AWS/GCP acceptable).
• Understanding of data governance, privacy, and secure access control.
• Conversational-to-professional English for collaboration across a distributed team.
⸻
Bonus Points If You Have
• Built integrations with enterprise SaaS systems (Salesforce, HubSpot, NetSuite).
• Worked with event-streaming tools (Kafka, Kinesis, Pub/Sub).
• Set up CI/CD pipelines for data infra (GitHub Actions, Azure DevOps).
• Monitored data pipelines with tools like DataDog or Prometheus.
• Curiosity about how AI and data engineering intersect (feature stores, ML data prep).
• Experimented with Snowflake Cortex (Cortex Analyst, functions, vector search) to enable conversational analytics.
⸻
Why Join Us
• Impact: Your work directly powers customer-facing dashboards used daily by global clients.
• Growth: Be part of a fast-scaling product team where your voice shapes architecture and culture.
• Tech Stack: Modern and cloud-native — Snowflake, dbt, Airflow, Sigma, Azure.
• Culture: We value curiosity, collaboration, and the drive to build things that last.