We are seeking a versatile and innovative Data Engineer to design, build, and maintain scalable data pipelines and infrastructure supporting analytics, reporting, machine learning (ML), generative AI (GenAI), business intelligence (BI), and automation initiatives.
Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field.
- Proven experience as a Data Engineer or similar role.
- Expertise with Google BigQuery and Google Cloud Storage; solid knowledge of GCP data and streaming services (Dataflow/Apache Beam, Pub/Sub, Cloud Composer/Airflow).
- Strong programming skills in Python and SQL.
- Experience building reliable data pipelines for analytics, ML, BI, and automation use cases.
- Familiarity with ML frameworks (scikit-learn, TensorFlow, PyTorch), MLOps on GCP (Vertex AI Pipelines/Model Registry) or BigQuery ML, and GenAI libraries/tooling where applicable.
- Experience supporting BI/reporting solutions, preferably with Looker and LookML.
- Hands-on experience with automation/integration platforms such as MuleSoft is a strong plus.
- Understanding of data governance, security, quality, and compliance on cloud platforms.
- Excellent communication, collaboration, and problem-solving skills.
Benefits
- Generous Paid Time Off
- 401(k) Matching
- Retirement Plan
- Visa Sponsorship
- Four-Day Work Week
- Generous Parental Leave
- Tuition Reimbursement
- Relocation Assistance