The scope of the proposed services will include the following:
- Create and manage the systems and pipelines that enable efficient and reliable flow of data, including ingestion, processing, and storage.
- Collect data from various sources, transform and clean it to ensure accuracy and consistency, and load it into storage systems or data warehouses.
- Optimize data pipelines, infrastructure, and workflows for performance and scalability.
- Monitor data pipelines and systems for performance issues, errors, and anomalies, and implement solutions to address them.
- Implement security measures to protect sensitive information.
- Collaborate with data scientists, analysts, and other cross-functional partners to understand their data needs and requirements, and design scalable solutions that meet business needs and support the organization's goals and objectives.
- Implement and maintain ETL processes to ensure the accuracy, completeness, and consistency of data (a minimal illustrative sketch appears after this list).
- Design and manage data storage systems, including relational databases, NoSQL databases, and data warehouses.
- Stay current with industry trends, best practices, and emerging technologies in data engineering, and incorporate them into the organization's data infrastructure.
- Provide technical guidance to other staff.
- Communicate effectively with partners at all levels of the organization to gather requirements, provide updates, and present findings.
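
To make the expected ETL work concrete, the following is a minimal, illustrative sketch of the extract-transform-load pattern described above, written in Python with pandas and SQLAlchemy. The file name, connection string, table, and column names are hypothetical placeholders rather than project specifics.

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical source and target; real values would come from the
    # organization's configuration management, not hard-coded strings.
    SOURCE_CSV = "daily_orders.csv"
    WAREHOUSE_URL = "postgresql://user:password@warehouse-host/analytics"

    def extract(path: str) -> pd.DataFrame:
        """Ingest raw data from a source system (here, a CSV export)."""
        return pd.read_csv(path)

    def transform(df: pd.DataFrame) -> pd.DataFrame:
        """Clean and standardize records for accuracy and consistency."""
        df = df.drop_duplicates(subset=["order_id"])         # remove duplicate records
        df["order_date"] = pd.to_datetime(df["order_date"])  # normalize date formats
        df = df.dropna(subset=["customer_id", "amount"])     # drop incomplete rows
        return df

    def load(df: pd.DataFrame, table: str) -> None:
        """Load the cleaned data into the warehouse."""
        engine = create_engine(WAREHOUSE_URL)
        df.to_sql(table, engine, if_exists="append", index=False)

    if __name__ == "__main__":
        load(transform(extract(SOURCE_CSV)), "orders")
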
Expertise and/or relevant experience in the following areas are mandatory:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum 5 years of experience as a Backend Data Engineer or in a similar role, and a strong understanding of ETL processes and data warehousing concepts.
- Proven experience with Python and related data engineering libraries (e.g., pandas, NumPy, Spark), and hands-on experience with Apache Airflow for managing data pipelines and workflows (an illustrative Airflow sketch appears after this list).
- Proficiency in programming languages commonly used in data engineering, such as Python, Java, Scala, or SQL. The resource must be able to implement data automation within existing frameworks rather than writing one-off scripts.
- Experience with big data technologies and frameworks such as Hadoop, Spark, Kafka, and Flink.
- Strong understanding of database systems, including relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with engineering best practices such as source control, automated testing, continuous integration and deployment, and peer review.
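
As an illustration of framework-based automation rather than one-off scripting, the sketch below defines a simple Apache Airflow DAG. It follows the Airflow 2.x Python API; the DAG name, schedule, and task callables are hypothetical placeholders, and in practice the task logic would live in a shared, version-controlled package with automated tests.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Hypothetical task callables standing in for real pipeline stages.
    def extract():
        print("pull data from the source system")

    def transform():
        print("clean and standardize the data")

    def load():
        print("write the result to the warehouse")

    with DAG(
        dag_id="orders_etl",             # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",               # Airflow 2.4+ keyword; run once per day
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)

        # Declaring dependencies lets Airflow schedule, retry, and monitor
        # each stage, supporting the monitoring duties listed in the scope.
        t_extract >> t_transform >> t_load
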
Expertise and/or relevant experience in the following areas are desirable but not mandatory:
- Experience with cloud computing platforms.
- Familiarity with agile development methodologies, software design patterns, and best practices.
- Strong analytical thinking and problem-solving abilities.
- Excellent verbal and written communication skills, including the ability to convey technical concepts to non-technical partners effectively.
- Flexibility to adapt to evolving project requirements and priorities.
- Outstanding interpersonal and teamwork skills, and the ability to develop productive working relationships with colleagues and partners.
- Experience working in a virtual environment with remote partners and teams.
- Proficiency in Microsoft Office.