Closeloop Technologies is a software product development firm based in Mountain View, CA, that helps bring your ideas to life. We develop digital solutions with cutting-edge technologies, backed by professional expertise and skills. We function as your technology partner to build, innovate, and scale custom web and mobile apps. Depend on our more than three decades of experience creating groundbreaking digital products.
We serve clients who want to build mobile apps, websites, web applications, enterprise solutions, eCommerce apps, or products powered by new-age technologies like Artificial Intelligence, Augmented Reality, Virtual Reality, IoT, and Wearables.
We are seeking a Sr. Data Engineer with a strong background in building ELT pipelines and expertise in modern data engineering practices. The ideal candidate will have experience with Databricks and DBT, strong proficiency in SQL and Python, and a solid understanding of data warehousing methodologies such as Kimball or Data Vault. Additionally, experience with DevOps tools, particularly within AWS, Databricks, and GitLab, is strongly preferred. The role requires collaboration with cross-functional teams to design, develop, and maintain scalable data infrastructure and pipelines using Databricks and DBT.
Key Responsibilities:
Data Pipeline Development: Design, build, and maintain scalable ELT pipelines for processing and transforming large datasets efficiently in Databricks.
Data Warehousing & Modeling: Implement Kimball dimensional modeling or other data warehousing methodologies such as Data Vault using DBT.
Cloud & DevOps Integration: Leverage AWS and GitLab to implement CI/CD practices for Databricks and DBT data engineering workflows.
Database Management: Optimize SQL queries and database performance for analytical and operational use cases within Databricks.
Collaboration & Stakeholder Engagement: Work closely with data analysts, data scientists, and software engineers to ensure smooth data flow and accessibility in Databricks and DBT environments.
Data Governance & Security: Ensure compliance with data security, privacy, and governance standards within Databricks.
Performance Optimization: Monitor and fine-tune data pipelines and queries to improve efficiency and reduce processing time in Databricks.
Qualifications:
6+ years of data engineering experience, focusing on ELT pipeline development in Databricks.
Hands-on experience with Databricks and DBT (strongly preferred).
Proficiency in SQL and Python for data processing and transformation in Databricks.
Experience with Kimball data warehousing or Data Vault methodologies within DBT.
Familiarity with DevOps tools and practices, particularly with AWS, Databricks, and GitLab.
Strong problem-solving skills and the ability to work in a fast-paced, agile environment.
Preferred Qualifications:
Experience with Apache Spark for large-scale data processing within Databricks.
Familiarity with CI/CD pipelines for data engineering workflows using Databricks and DBT.
Understanding of orchestration tools like Apache Airflow for Databricks pipelines.
Certifications in AWS, Databricks, or DBT are a plus.
We promise you an inclusive work environment where you will love to challenge as well as be challenged.