* Data Pipeline Engineering: Design, develop, and deploy robust and scalable data pipelines using Databricks, incorporating data extraction from diverse sources (databases, APIs, streaming platforms), transformation and cleansing using Spark, and loading into target systems (data lakes, data warehouses, etc.).
* Databricks Ecosystem Expertise: Utilize the full capabilities of the Databricks platform, including Databricks SQL, Delta Lake, Databricks Runtime, and Databricks Workflows, to orchestrate complex data workflows and ensure data quality and pipeline reliability.
* Programming Skills: Strong programming skills in Python or Scala, with experience in data manipulation libraries (e.g., PySpark, Spark SQL).
* Data Fundamentals: Solid understanding of data warehousing principles, ETL processes, and data modeling techniques.
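The extract–transform–load flow named in these bullets can be sketched in miniature. This is a hedged toy illustration in plain Python, not Databricks or PySpark code; all names here (`extract`, `transform`, `load`, `warehouse`) are hypothetical, and a real pipeline would operate on Spark DataFrames and write to Delta Lake tables instead of Python lists.

```python
def extract():
    # Stand-in for reading raw records from a database, API, or stream.
    return [
        {"id": 1, "name": " Alice ", "amount": "100"},
        {"id": 2, "name": "Bob", "amount": None},
    ]

def transform(records):
    # Cleanse: drop rows with missing amounts, trim whitespace, cast types.
    cleaned = []
    for row in records:
        if row["amount"] is None:
            continue
        cleaned.append({
            "id": row["id"],
            "name": row["name"].strip(),
            "amount": int(row["amount"]),
        })
    return cleaned

def load(records, target):
    # Stand-in for writing to a data lake or warehouse table.
    target.extend(records)
    return len(records)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse)  # → 1 [{'id': 1, 'name': 'Alice', 'amount': 100}]
```

In PySpark the same shape appears as `spark.read` (extract), DataFrame operations such as `filter` and `withColumn` (transform), and `df.write` (load), with Databricks Workflows orchestrating the stages.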