We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages, with 3-7 years of experience.
Responsibilities
- Design and implement scalable, high-performance data pipelines using AWS services
- Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
- Build and maintain data lakes using S3 and Delta Lake
- Create and manage analytics solutions using Amazon Athena and Redshift
- Design and implement database solutions using Aurora, RDS, and DynamoDB
- Develop serverless workflows using AWS Step Functions
- Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
- Ensure data quality, security, and compliance with industry standards
- Collaborate with data scientists and analysts to support their data needs
- Optimize data architecture for performance and cost-efficiency
- Troubleshoot and resolve data pipeline and infrastructure issues