Data Engineer



Vettura LLC

Location: All cities, AK, USA

Date: 2024-12-18

Job Description:
About the job:
    • Design, document, and develop distributed and event-driven data pipelines with cloud-native data stores such as Snowflake, Redshift, BigQuery, or ADW.
    • Consult business, product, and data science teams to understand end-user requirements or analytics needs to implement the most appropriate data platform technology and scalable data engineering practices.
    • Prepare data mapping, data flow, production support, and pipeline documentation for all projects.
    • Deliver complete source-system data by performing profiling analyses and triaging issues reported in production systems.
    • Facilitate fast, efficient data migrations through a deep understanding of the design, mapping, implementation, management, and support of distributed data pipelines.

Requirements

  • Minimum of a Bachelor's Degree or its equivalent in Computer Science, Computer Information Systems, Information Technology and Management, Electrical Engineering, or a related field.
  • You have a strong background in distributed data warehousing with Snowflake, Redshift, BigQuery, and/or Azure Data Warehouse. You have productionized real-time data pipelines through event-driven architecture (EDA) leveraging Kafka or a similar service.
  • You know what it takes to build and run resilient data pipelines in production and have experience implementing ETL/ELT to load a multi-terabyte enterprise distributed data warehouse.
  • You have a strong understanding of, and exposure to, data mesh principles in building modern data-driven products and platforms.
  • You have expert programming/scripting knowledge in building and managing ETL pipelines using SQL, Python, and Bash.
  • You have implemented analytics applications using multiple database technologies, such as relational, multidimensional (OLAP), key-value, document, or graph.
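As a rough illustration of the ETL work described above, here is a minimal sketch of an extract-transform-load step in Python. It uses an in-memory SQLite database as a stand-in for a cloud warehouse target such as Snowflake, and all table names, field names, and sample records are hypothetical.

```python
import sqlite3

# Hypothetical source records, standing in for rows extracted from an upstream system.
source_rows = [
    {"order_id": 1, "amount_cents": 1250, "currency": "USD"},
    {"order_id": 2, "amount_cents": 980, "currency": "USD"},
]

def transform(row):
    # Normalize amounts from cents to dollars before loading.
    return (row["order_id"], row["amount_cents"] / 100.0, row["currency"])

# Load into SQLite as a stand-in for the warehouse target.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, currency TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [transform(r) for r in source_rows],
)

# A downstream analytics query against the loaded table.
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

In a production pipeline the same extract/transform/load shape would typically be orchestrated, idempotent, and instrumented for monitoring; this sketch only shows the core data flow.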

Apply Now!
