Your impact:
Play a key role in delivering data-driven interactive experiences to our clients
Work closely with our clients to understand their needs and translate them into technology solutions
Provide technical expertise to solve complex business problems through data integration and database system design
Solve problems and remove barriers throughout the lifecycle of client engagements
Ensure all deliverables are of high quality by setting development standards, adhering to them, and participating in code reviews
Participate in integrated validation and analysis sessions for components and subsystems on production servers
Qualifications
Your Skills and Experience:
Bachelor’s degree in Computer Science, Engineering, or a related field
Demonstrable experience with enterprise-level data platforms, including implementation of end-to-end data pipelines in Python and Scala
Hands-on experience with at least one of the leading public cloud data platforms (ideally Azure)
Experience with column-oriented database technologies (e.g., BigQuery, Redshift, Vertica), NoSQL database technologies (e.g., DynamoDB, Bigtable, Cosmos DB), and traditional relational database systems (e.g., SQL Server, Oracle, MySQL)
Experience architecting data pipelines and solutions for both streaming and batch integrations, using tools/frameworks such as Azure Databricks, Azure Data Factory, Spark, and Spark Streaming
Understanding of data modeling, data warehouse design, and fact/dimension concepts
Good communication skills and a willingness to work as part of a team
Additional Information
Certifications in any of the major cloud platforms (ideally Azure)
Experience working with code repositories and continuous integration
Understanding of development and project methodologies