* Data Pipeline Engineering: Design, develop, and deploy robust, scalable data pipelines using Databricks: extract data from diverse sources (databases, APIs, streaming platforms), transform and cleanse it with Spark, and load it into target systems (data lakes, data warehouses, etc.).
* Data Quality Assurance: Implement rigorous data quality checks and validation procedures throughout the data pipeline to maintain high accuracy and reliability.
* Hold a university degree in Computer Science, (Business) Engineering, or a specialized master's in Machine Learning or AI.
* Data Fundamentals: Solid understanding of data warehousing principles, ETL processes, data modeling techniques, and database systems.
* Contributions to open-source projects.
* Work with innovative and impactful Cloud, Data & AI projects for industry-leading clients.
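The extract, validate, and load responsibilities above can be sketched in miniature. This is a hedged, plain-Python illustration of the pattern, not Databricks or Spark code; all names (extract_rows, quality_check, load_rows, run_pipeline) are hypothetical stand-ins for the corresponding pipeline stages.

```python
# Illustrative sketch of extract -> quality check -> load.
# All function names here are hypothetical, not a real pipeline API.

def extract_rows():
    # Stand-in for reading from a database, API, or streaming source.
    return [
        {"id": 1, "amount": 120.0},
        {"id": 2, "amount": None},   # missing value: should be rejected
        {"id": 3, "amount": -5.0},   # negative amount: should be rejected
    ]

def quality_check(row):
    # Example data quality rules: required fields present, amount non-negative.
    return (
        row.get("id") is not None
        and row.get("amount") is not None
        and row["amount"] >= 0
    )

def load_rows(rows, target):
    # Stand-in for writing to a data lake or warehouse table.
    target.extend(rows)

def run_pipeline(target):
    raw = extract_rows()
    clean = [r for r in raw if quality_check(r)]
    load_rows(clean, target)
    return len(raw), len(clean)

warehouse = []
total, accepted = run_pipeline(warehouse)
print(f"extracted={total}, loaded={accepted}")  # extracted=3, loaded=1
```

In a real Databricks pipeline the same validation rules would typically be expressed as DataFrame filters or schema constraints rather than per-row Python checks.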