
Senior Data Engineer

The Senior Data Engineer role requires robust data engineering capabilities to efficiently manage data flows from operational systems into a lakehouse architecture. The ideal candidate will understand the theory and mechanics behind data transformation, ensuring data adds more value as it transitions through the bronze, silver, and gold stages. Ultimately, the data will be structured into a star schema for ad-hoc analysis and reporting in Power BI. This role leverages Microsoft Fabric (including Azure Data Factory, Spark, SQL, and Power BI) and requires familiarity with high-level object-oriented programming (OOP).
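To make the bronze-to-silver-to-gold progression concrete, here is a minimal sketch in plain Python (in practice this would use Spark DataFrames in Microsoft Fabric; the field names and cleaning rules below are hypothetical examples, not part of the role's actual pipelines):

```python
# Medallion-architecture sketch using plain Python.
# Bronze: raw records as ingested. Silver: cleaned and typed. Gold: aggregated for reporting.
# All field names and validation rules here are hypothetical.

bronze = [
    {"order_id": "1001", "amount": "25.50", "region": " east "},
    {"order_id": "1002", "amount": "bad",   "region": "WEST"},   # malformed amount
    {"order_id": "1003", "amount": "10.00", "region": "East"},
]

def to_silver(rows):
    """Clean and type-cast raw rows, dropping records that fail validation."""
    silver = []
    for row in rows:
        try:
            silver.append({
                "order_id": int(row["order_id"]),
                "amount": float(row["amount"]),
                "region": row["region"].strip().lower(),
            })
        except ValueError:
            continue  # skip (or quarantine) malformed records
    return silver

def to_gold(rows):
    """Aggregate cleaned rows into report-ready totals per region."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

gold = to_gold(to_silver(bronze))
print(gold)  # {'east': 35.5}
```

Each stage adds value: silver enforces types and drops bad records, and gold produces the shape a report actually consumes.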


OBJECTIVES  

  • Collaborate with other team members on scoping solutions and project decision points.
  • Design and implement data products (pipelines, reports, visualizations) that add value.
  • Interact with developers, business teams, and other stakeholders to determine requirements.
  • Further the buildout of our internal data lakehouse, ultimately providing better data-analysis platforms to internal teams.
  • Use transformed data to produce forecasts, reports, and actionable insights.
  • Design ETL flows independently, ensuring data quality and efficiency.
  • Implement processes to transition data from bronze (raw) to silver (cleaned) to gold (optimized) stages, enhancing data quality and value.
  • Create and manage star schema data models to support efficient ad-hoc analysis and reporting by end users.
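To illustrate the star-schema objective above, here is a small sketch using Python's built-in sqlite3 module (in Fabric this would live in a warehouse or lakehouse SQL endpoint; all table and column names are hypothetical):

```python
import sqlite3

# Star-schema sketch: one fact table joined to surrounding dimension tables.
# Table and column names are hypothetical examples.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE fact_sales  (
        date_key    INTEGER REFERENCES dim_date(date_key),
        product_key INTEGER REFERENCES dim_product(product_key),
        amount      REAL
    );
    INSERT INTO dim_date    VALUES (20240101, 2024, 1), (20240201, 2024, 2);
    INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gadget', 'Hardware');
    INSERT INTO fact_sales  VALUES (20240101, 1, 100.0), (20240101, 2, 50.0), (20240201, 1, 25.0);
""")

# A typical ad-hoc query: join the fact table to its dimensions and aggregate.
rows = con.execute("""
    SELECT d.month, p.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d    ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.month, p.category
    ORDER BY d.month
""").fetchall()
print(rows)  # [(1, 'Hardware', 150.0), (2, 'Hardware', 25.0)]
```

The design choice behind a star schema is exactly this query shape: every ad-hoc question becomes a join from one fact table out to its dimensions, which BI tools such as Power BI can optimize well.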



COMPETENCIES  

  • Understanding of datastore and data mart design, particularly in a lakehouse architecture.
  • Strong proficiency in SQL for database management and querying.
  • Knowledge of Object-Oriented Programming languages (e.g., C#, Java, Python).
  • Basic understanding of distributed database systems and their applications.
  • Familiarity with cloud services (Azure, AWS, or GCP), specifically Microsoft Fabric.
  • Strong technical writing skills to document processes and workflows.
  • Proficiency in creating interactive reports, dashboards, and visualizations using Power BI and other report development tools.
  • Basic understanding and implementation of REST API tools for data integration.
  • Basic understanding of data security principles and best practices.
  • Understanding of both structured and unstructured data to improve traditional processes and implement innovative solutions.
  • Understanding of data modeling, including fact/dimension modeling, star/snowflake schemas, and Kimball methodology.
  • Basic understanding of medallion architecture concepts for loading data into a data lakehouse or data warehouse.
  • Proficiency with Microsoft Fabric tools, including Azure Data Factory for orchestrating data workflows, Spark for large-scale data processing, and SQL for database management and querying.
  • Ability to apply high-level object-oriented programming skills to build and optimize data processing solutions.
  • Ability to monitor and optimize data processing performance, ensuring scalability and efficiency when handling large datasets.
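Several of the competencies above (REST API integration, OOP, structured data handling) come together in ingestion code. As a hedged sketch, here is a small OOP-style parser that turns a REST API JSON payload into bronze-layer records; the payload shape and class names are hypothetical, and the HTTP call itself (e.g. via urllib.request or a pipeline activity) is stubbed out with a sample string:

```python
import json
from dataclasses import dataclass

# Hypothetical REST-ingestion sketch: in practice this payload would come
# from an HTTP response; here it is a stubbed sample so the sketch is
# self-contained. Field names are illustrative only.
SAMPLE_PAYLOAD = json.dumps({
    "results": [
        {"id": 1, "status": "open",   "value": 12.5},
        {"id": 2, "status": "closed", "value": 7.0},
    ]
})

@dataclass
class BronzeRecord:
    id: int
    status: str
    value: float

class ApiIngestor:
    """Minimal OOP-style wrapper that parses an API response into typed records."""

    def parse(self, payload: str) -> list[BronzeRecord]:
        body = json.loads(payload)
        return [BronzeRecord(**item) for item in body["results"]]

records = ApiIngestor().parse(SAMPLE_PAYLOAD)
print(len(records))  # 2
```

Wrapping the parsing in a small class like this keeps retry, paging, and schema-drift handling in one testable place before records land in the bronze layer.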


For more information, contact: +1 (813) 699-7969