We are looking for an experienced data engineer to join our team of top-notch technologists. You will use a variety of methods to transform raw data into useful datasets that accelerate scientific research and discovery.
Requirements
- Solid SQL and Python skills
- Demonstrable experience in data engineering involving implementation of end-to-end data pipelines
- Good communication skills and a willingness to work as part of a team
- Detail-oriented with excellent organizational skills
- Hands-on experience with at least one leading public cloud data platform (Amazon Web Services, Azure)
- Experience implementing data pipelines for streaming and/or batch integrations using tools/frameworks such as AWS Glue ETL, AWS Lambda, Apache Airflow, and AWS Step Functions
- Experience working with code repositories and continuous integration
Set Yourself Apart With:
- Working experience writing and optimizing PL/SQL
- Experience in data modeling, warehouse design, and fact/dimension implementations
- AWS Developer certifications
- Implementation experience with column-oriented data technologies (e.g., Redshift, Parquet) and NoSQL database technologies (e.g., DynamoDB, MongoDB)
- Understanding of software development and project management methodologies