Data Engineer

  • Mohali
  • Full Time

Job description

Closeloop Technologies is a software product development firm based in Mountain View, CA, that helps bring your ideas to life. We develop digital solutions with cutting-edge technologies, backed by professional expertise, and act as your technology partner to build, innovate, and scale custom web and mobile applications. Depend on our more than three decades of experience in creating groundbreaking digital products.

We serve clients looking to build mobile apps, websites, web applications, enterprise solutions, eCommerce apps, or products powered by new-age technologies like Artificial Intelligence, Augmented Reality, Virtual Reality, IoT, and Wearables.

Job Summary:

We are seeking a skilled Data Engineer to join our dynamic team. The ideal candidate will have a solid background in data engineering, with mandatory expertise in AWS, Snowflake, Databricks, Redshift, Spark, and Hadoop. Experience with Apache Airflow is a plus. The role involves designing, building, and maintaining scalable data pipelines and systems to support our data analytics and business intelligence needs.

Responsibilities

  • Design, develop, and maintain robust data pipelines and ETL processes (a minimal pipeline sketch follows this list).
  • Optimize and manage data storage solutions, ensuring efficiency and scalability.
  • Work with large datasets to perform data integration, transformation, and aggregation.
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements.
  • Ensure data quality and integrity across various data sources.
  • Monitor and troubleshoot data workflows and pipelines.
  • Implement best practices for data management, security, and compliance.
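
To make the pipeline work concrete, here is a minimal PySpark ETL sketch: it reads raw CSV events, deduplicates and cleans them, and writes partitioned Parquet. The bucket paths, column names (event_ts, user_id), and schema are hypothetical placeholders for illustration, not a description of our actual pipelines.

```python
# Minimal PySpark ETL sketch. All paths and column names
# (s3://example-bucket/..., event_ts, user_id) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-etl-sketch").getOrCreate()

# Extract: read raw CSV events from object storage.
raw = (
    spark.read
    .option("header", True)
    .csv("s3://example-bucket/raw/events/")
)

# Transform: drop exact duplicates, derive a date column from the
# event timestamp, and keep only rows with a valid user id.
clean = (
    raw.dropDuplicates()
    .withColumn("event_date", F.to_date("event_ts"))
    .filter(F.col("user_id").isNotNull())
)

# Load: write partitioned Parquet for downstream analytics.
(
    clean.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/events/")
)
```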

Key Qualifications

  • Snowflake: Proven experience using Snowflake for data warehousing solutions (see the connector sketch after this list).
  • Databricks: Strong hands-on experience with Databricks for big data processing, analytics, and machine learning workloads.
  • Redshift: Expertise in managing and optimizing Amazon Redshift data warehouses.
  • Spark: Advanced knowledge of Apache Spark for large-scale data processing.
  • Hadoop: Solid experience with the Hadoop ecosystem, including HDFS, MapReduce, Hive, and Pig.
  • Cloud Platforms: Proficiency with AWS.
  • Programming Languages: Proficiency in SQL and Python.
  • ETL Tools: Experience with ETL tools and frameworks.
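
As a concrete illustration of the Snowflake and SQL skills above, below is a minimal sketch using the snowflake-connector-python package. The account, credentials, and table name are hypothetical placeholders; production code would load credentials from a secrets manager rather than hard-coding them.

```python
# Minimal Snowflake query sketch using snowflake-connector-python.
# Account, credentials, and table names are hypothetical placeholders;
# production code would pull credentials from a secrets manager.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Aggregate daily event counts -- the kind of warehouse-side
    # transformation an ETL step might run.
    cur.execute(
        """
        SELECT event_date, COUNT(*) AS events
        FROM events
        GROUP BY event_date
        ORDER BY event_date
        """
    )
    for event_date, events in cur.fetchall():
        print(event_date, events)
finally:
    conn.close()
```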

Good to Have:

  • Apache Airflow: Experience designing and managing workflows with Apache Airflow (a minimal DAG sketch follows this list).
  • Big Data Tools: Knowledge of other big data tools and technologies.
  • Soft Skills: Excellent problem-solving skills, attention to detail, and the ability to work collaboratively in a team environment.
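
Since Airflow experience is a plus, here is a minimal DAG sketch showing how a daily extract-then-load workflow might be wired together. The dag_id, task ids, and callables are hypothetical stand-ins; it targets the Airflow 2.x API.

```python
# Minimal Airflow 2.x DAG sketch. The dag_id, task ids, and
# callables are hypothetical placeholders for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw data from the source system")


def load():
    print("load transformed data into the warehouse")


with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run extract before load.
    extract_task >> load_task
```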

Education:

  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.

Benefits

  • Medical insurance for employees, spouse, and children
  • Accidental life insurance
  • Provident Fund
  • Paid vacation time
  • Paid holidays
  • Employee referral bonuses
  • Reimbursement for high-speed internet at home
  • One-month free stay (for employees moving from other cities)
  • Tax-free benefits
  • Other bonuses as determined by management from time to time


We promise you an inclusive work environment where you will fall in love with challenging others as well as being challenged yourself.