AWS Data Engineering – Lead Programmer Analyst

🏢 BILVANTIS TECHNOLOGIES
November 2, 2024

Job Overview

  • Date Posted
    November 2, 2024
  • Location
    --
  • Expiration date
    --

Job Description

We are looking for a skilled and experienced AWS Data Lead Engineer to join our Data team.

About the job

  • Position: AWS Data Engineering – Lead Programmer Analyst
  • Total Experience: 5 to 8 years
  • Notice Period: Immediate to 15 days
  • Must-have skills: AWS services, DW concepts, ETL tools, SQL, Python, PySpark, IaC, AWS CDK, Airflow

We are seeking a highly skilled and motivated AWS Data Engineer with more than 5 years of experience to join our dynamic data engineering team. Your expertise in these technologies will drive the effective extraction, transformation, loading, and analysis of our organization's data assets.

Technical Skills

  • 5 to 7 years of AWS data engineering experience.
  • Hands-on experience with AWS services (Lambda, Glue, Kinesis, SNS, SQS, and CloudFormation).
  • Strong proficiency in Python.
  • Experience with serverless architecture and Infrastructure as Code (IaC) using AWS CDK.
  • Proficiency in Apache Airflow for orchestrating data pipelines.
  • Familiarity with data quality assurance techniques and tools, preferably Great Expectations.
  • Experience with SQL for data manipulation and querying.
  • Strong communication and collaboration skills, with the ability to work effectively in a team environment.
  • Experience with Data Lakehouse architectures, dbt, and the Apache Hudi data format is a plus.

Responsibilities

  • Design, develop, and maintain end-to-end data pipelines on AWS using serverless architecture.
  • Implement data ingestion, validation, and transformation procedures using AWS services such as Lambda, Glue, Kinesis, SNS, SQS, and CloudFormation.
  • Write orchestration tasks in Apache Airflow.
  • Develop and execute data quality checks with Great Expectations to ensure data integrity and reliability.
  • Collaborate with other teams to understand mission objectives and translate them into data pipeline requirements.
  • Use PySpark for complex data processing tasks within AWS Glue jobs.