Data Engineer – Azure Big Data & Snowflake

Overview:

A Data Engineer specializing in the Azure ecosystem, with strong experience in Spark, Big Data technologies, and Snowflake data modeling. The role involves designing, developing, and optimizing data pipelines, implementing scalable architectures, and delivering efficient, reliable data solutions for analytics and reporting.

Key Responsibilities:

– Design, develop, and maintain Azure-based data solutions with Spark and Snowflake.

– Build and optimize scalable ETL pipelines using Azure Data Factory, Databricks, and Snowflake (a representative Spark transformation is sketched after this list).

– Write, optimize, and troubleshoot complex SQL queries in Snowflake.

– Implement data modeling techniques such as data cubes for analytical reporting.

– Collaborate with data scientists, business analysts, and cross-functional teams.

– Ensure data security, compliance, and governance best practices.

– Monitor and optimize performance of Big Data processing workflows.
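For illustration only, here is a minimal sketch of the kind of Spark transformation this role involves, assuming a hypothetical Databricks workspace with access to an Azure Data Lake Storage Gen2 account (storage account, container, and column names are placeholders, not part of this posting):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Placeholder app name and paths for illustration.
    spark = SparkSession.builder.appName("daily-orders-etl").getOrCreate()

    # Read raw CSV files from a hypothetical "raw" container in the data lake.
    raw = (spark.read
           .option("header", "true")
           .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))

    # Clean and deduplicate: cast types, drop invalid amounts, keep one row per order.
    cleaned = (raw
               .withColumn("order_ts", F.to_timestamp("order_ts"))
               .withColumn("amount", F.col("amount").cast("double"))
               .filter(F.col("amount") > 0)
               .dropDuplicates(["order_id"]))

    # Aggregate to a daily summary suitable for downstream reporting.
    daily = (cleaned
             .groupBy(F.to_date("order_ts").alias("order_date"))
             .agg(F.sum("amount").alias("total_amount"),
                  F.countDistinct("order_id").alias("order_count")))

    # Write curated output back to the lake for loading into Snowflake or Synapse.
    (daily.write
          .mode("overwrite")
          .parquet("abfss://curated@examplelake.dfs.core.windows.net/daily_orders/"))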

Key Skills:

Technical:

– Expertise in Azure Big Data services (Azure Synapse, Azure Data Lake, Databricks).

– Proficient in Spark, Snowflake, and data modeling techniques.

– Experience building scalable ETL pipelines with Azure Data Factory.

– Strong SQL skills for querying, tuning, and managing large datasets.

– Knowledge of data warehousing concepts, including data cubes and star schemas (see the sketch after this list).

– Familiarity with DevOps practices and CI/CD pipelines in Azure.
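As a rough sketch of the SQL and star-schema skills listed above, the following example uses the snowflake-connector-python package to run an analytical query against hypothetical fact and dimension tables (all credentials, table names, and column names are placeholders):

    import snowflake.connector

    # Placeholder connection parameters; real values come from secure configuration.
    conn = snowflake.connector.connect(
        account="example_account",
        user="example_user",
        password="example_password",
        warehouse="ANALYTICS_WH",
        database="SALES_DB",
        schema="MARTS",
    )

    # Star-schema style query: a fact table joined to two dimension tables,
    # aggregated into a monthly revenue report.
    sql = """
        SELECT d.year, d.month, p.category,
               SUM(f.amount) AS revenue
        FROM fact_sales f
        JOIN dim_date d    ON f.date_key = d.date_key
        JOIN dim_product p ON f.product_key = p.product_key
        GROUP BY d.year, d.month, p.category
        ORDER BY d.year, d.month, revenue DESC
    """

    try:
        cur = conn.cursor()
        # The connector's cursor is iterable; execute() returns the cursor itself.
        for year, month, category, revenue in cur.execute(sql):
            print(year, month, category, revenue)
    finally:
        conn.close()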

Soft Skills:

– Excellent problem-solving and communication skills.

– Ability to work in fast-paced environments and deliver under tight deadlines.

– Collaborative mindset with a focus on business impact.

– Strong analytical thinking with attention to detail.

Job Category: Information Technology
Job Type: Full Time
Job Location: India (Remote)
