In this role, you will be a crucial player in designing, building, and maintaining cutting-edge data solutions on the Databricks platform.

* Data Pipeline Engineering: Design, develop, and deploy robust, scalable data pipelines on Databricks, covering data extraction from diverse sources (databases, APIs, streaming platforms), transformation and cleansing with Spark, and loading into target systems (data lakes, data warehouses, etc.); a brief illustrative sketch follows this list.
* Databricks Ecosystem Expertise: Use the full capabilities of the Databricks platform, including Databricks SQL, Delta Lake, Databricks Runtime, and Databricks Workflows, to orchestrate complex data workflows and ensure data quality and pipeline reliability.
* Cloud Proficiency: Hands-on experience with at least one major cloud platform (Azure, AWS, or GCP) and a solid understanding of its data services and infrastructure.
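As a rough illustration of the extract-transform-load flow described in the first bullet, here is a minimal PySpark sketch. It assumes a Databricks notebook where `spark` and `dbutils` are predefined; the JDBC connection details, secret scope, and table names are hypothetical placeholders, not part of the role description.

```python
from pyspark.sql import functions as F

# Extract: read raw orders from a hypothetical operational database via JDBC.
raw_orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://example-host:5432/sales")        # hypothetical source
    .option("dbtable", "public.orders")
    .option("user", dbutils.secrets.get("demo-scope", "db-user"))      # hypothetical secret scope
    .option("password", dbutils.secrets.get("demo-scope", "db-password"))
    .load()
)

# Transform: basic cleansing and typing with Spark.
clean_orders = (
    raw_orders
    .dropDuplicates(["order_id"])
    .filter(F.col("order_total") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write the cleansed data to a Delta Lake table in the lakehouse.
(
    clean_orders.write.format("delta")
    .mode("overwrite")
    .saveAsTable("analytics.clean_orders")                             # hypothetical target table
)
```

In practice a pipeline like this would be scheduled and monitored with Databricks Workflows rather than run ad hoc.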