Architect, develop, and maintain scalable data infrastructure, including data lakes, pipelines, and metadata repositories, ensuring the timely and accurate delivery of data to stakeholders.
Work closely with data scientists to build and support data models, integrate data sources, and support machine learning workflows and experimentation environments.
Develop and optimize large-scale batch and real-time data processing systems to enhance operational efficiency and meet business objectives.
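As a minimal illustration of the real-time side of this responsibility, the sketch below buckets a stream of timestamped events into fixed-size tumbling windows using only the standard library. The event shape, window size, and function name are all illustrative assumptions, not part of the role's actual systems.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Aggregate (timestamp, key) events into per-window counts.

    A toy model of stream processing: each event is assigned to the
    tumbling window its timestamp falls in, then counted per key.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Floor the timestamp to the start of its window.
        bucket = int(ts // window_seconds) * window_seconds
        windows[bucket][key] += 1
    return {w: dict(counts) for w, counts in windows.items()}

# Events at t=0 and t=10 land in window 0; t=65 and t=70 in window 60.
events = [(0, "click"), (10, "view"), (65, "click"), (70, "click")]
result = tumbling_window_counts(events)
```

A production system would use a streaming engine rather than in-memory dictionaries, but the windowing logic is the same idea.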
Leverage Python, Apache Airflow, and AWS services to automate data workflows, ensuring efficient scheduling and monitoring.
Utilize AWS services such as S3, Glue, EC2, and Lambda to manage data storage and compute resources, ensuring high performance, scalability, and cost-efficiency.
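One way the cost-efficiency aspect shows up in practice is tiering S3 objects into cheaper storage classes as they go cold. The sketch below is an assumption-laden illustration: the thresholds and the `tier_object` helper are invented, and the `copy_object` call uses the real boto3 S3 API to rewrite an object with a new storage class.

```python
def choose_storage_class(days_since_access: int) -> str:
    """Pick an S3 storage class from access recency; thresholds are illustrative."""
    if days_since_access < 30:
        return "STANDARD"
    if days_since_access < 90:
        return "STANDARD_IA"
    return "GLACIER"

def tier_object(bucket: str, key: str, days_since_access: int) -> str:
    """Hypothetical helper: re-copy an object into a cheaper storage class."""
    # boto3 imported here so the pure selection logic above has no AWS dependency.
    import boto3

    target = choose_storage_class(days_since_access)
    s3 = boto3.client("s3")
    s3.copy_object(
        Bucket=bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key},
        StorageClass=target,
    )
    return target
```

In production this policy would usually be expressed as an S3 lifecycle rule rather than hand-rolled copies; the code just makes the decision logic explicit.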
Implement robust testing and validation procedures to ensure the reliability, accuracy, and security of data processing workflows.
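A small sketch of the validation side of this responsibility: records are checked against explicit rules before loading, and a batch is partitioned into clean and rejected rows. The record schema, field names, and currency whitelist are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class OrderRecord:
    order_id: str
    amount: float
    currency: str

def validate_record(rec: OrderRecord) -> list:
    """Return a list of validation errors; an empty list means the record is clean."""
    errors = []
    if not rec.order_id:
        errors.append("missing order_id")
    if rec.amount < 0:
        errors.append("negative amount")
    if rec.currency not in {"USD", "EUR", "GBP"}:  # illustrative whitelist
        errors.append("unknown currency: " + rec.currency)
    return errors

def validate_batch(records):
    """Partition a batch into (clean, rejected) before loading downstream."""
    clean, rejected = [], []
    for rec in records:
        errs = validate_record(rec)
        if errs:
            rejected.append((rec, errs))  # keep the errors for quarantine/auditing
        else:
            clean.append(rec)
    return clean, rejected
```

Routing rejects to a quarantine table with their error messages, rather than dropping them, keeps the pipeline both reliable and auditable.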
Stay informed of industry best practices and emerging technologies in both data engineering and data science to propose optimizations and innovative solutions.
