Data Engineer SQL/Python/automation



Saxon Global

Location: Woonsocket, RI, USA

Date: 2024-12-02T19:18:41Z

Job Description:

Analytics engineering: build data pipelines and partner with Data Scientists on ML/MLOps. Personalization for retail pharmacy outreach (texts/calls/SMS/emails) using business rules, A/B testing, etc.

Tech stack: Databricks, Spark, Python, NoSQL databases, Kubernetes, SQL.

• Translating business requirements into clinical rules engines
• Framework development/design for new capabilities
• Transitioning from batch to real-time via microservices and Kafka
• Basic scripting commands only; someone on the infra side handles this
• No reporting work for this team
• Snowflake: nice to have (moving source data); Snowpark as a replacement for Databricks
• Kafka basics: just need to know the core services, i.e. producer, consumer, streams, etc.

You will collaborate with business partners to identify opportunities to leverage big data technologies in support of Pharmacy Personalization, with a common set of tools and infrastructure to make analytics faster, more insightful, and more efficient. You will design highly scalable and extensible batch and real-time big data and cloud platforms that enable collection, storage, modeling, and analysis of massive data sets from numerous channels. You will define and maintain data architecture, focusing on applying technology to enable business solutions. You will assess and provide recommendations on business relevance, with appropriate timing and deployment. You will perform architecture design and data modeling, and implement CVS big data platforms and analytic applications. You will bring a DevOps mindset to enable big data and batch/real-time analytical solutions that leverage emerging technologies. You will develop prototypes and proofs of concept for the selected solutions and implement complex big data projects. You will apply a creative mindset to collecting, parsing, managing, and automating data feedback loops in support of business innovation.
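To give a flavor of the "business rules into clinical rules engine" work described above, here is a minimal stdlib-only sketch. All names and rules (`Rule`, `choose_channels`, the sample patient fields) are hypothetical illustrations, not CVS's actual system or logic.

```python
from dataclasses import dataclass
from typing import Callable

# A rule pairs a business condition (a predicate on a patient record)
# with an outreach channel. Field names and thresholds are illustrative.
@dataclass
class Rule:
    name: str
    applies: Callable[[dict], bool]  # business condition on the patient record
    channel: str                     # text / call / SMS / email

RULES = [
    Rule("refill_due_sms", lambda p: p["days_to_refill"] <= 3, "SMS"),
    Rule("new_rx_email", lambda p: p["new_prescription"], "email"),
    Rule("missed_pickup_call", lambda p: p["missed_pickups"] >= 2, "call"),
]

def choose_channels(patient: dict) -> list[str]:
    """Return the outreach channels whose rules match this patient."""
    return [r.channel for r in RULES if r.applies(patient)]

patient = {"days_to_refill": 2, "new_prescription": False, "missed_pickups": 0}
print(choose_channels(patient))  # ['SMS']
```

In practice such rules would be data-driven (loaded from config or a database) rather than hard-coded, so business partners can change them without a deploy.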
Required Skills:
• Strong in SQL and Python, with 3+ years of hands-on coding experience in both
• Experience building automated big data pipelines
• Experience performing data analysis and data exploration
• Experience working in an agile delivery environment
• Strong critical-thinking, communication, and problem-solving skills
• Experience with big data frameworks (e.g. Hadoop and Spark)
• Experience with cloud-based platforms (e.g. Azure, GCP, AWS)
• Experience with Snowflake and hands-on query tuning/optimization
• Experience working in a multi-developer environment using version control (e.g. Git)
• Experience orchestrating pipelines using tools such as Airflow or Azure Data Factory
• Experience with real-time and streaming technology (e.g. Azure Event Hubs, Azure Functions, Kafka, Spark Streaming)
• Experience with REST API/microservice development using Python
• Experience deploying and scaling apps in containerized environments (e.g. Kubernetes, AKS)
• Experience with technical solutioning and system architecture design
• Experience partnering cross-functionally with other technical teams (e.g. data ingestion, data science, operational systems) to align priorities and achieve deliverable outcomes
• Experience setting coding standards, performing code reviews, and mentoring junior developers
• Experience overseeing project delivery by mentoring junior technical developers

Basic Qualification:
Additional Skills:
Background Check: Yes
Drug Screen: Yes
Notes:
Selling points for candidate:
Project Verification Info:
Candidate must be your W2 Employee: Yes
Exclusive to Apex: No
Face to face interview required: No
Candidate must be local: No
Candidate must be authorized to work without sponsorship: No
Interview times set: Yes
Type of project: Development/Engineering
Master Job Title: Eng: Other
Branch Code: Providence
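The A/B testing called out in the description is commonly backed by deterministic bucketing, so the same patient always lands in the same experiment arm across runs and services. A stdlib-only sketch (the experiment name, split percentage, and function name are illustrative assumptions, not CVS's implementation):

```python
import hashlib

def ab_bucket(patient_id: str, experiment: str, treatment_pct: int = 50) -> str:
    """Deterministically assign a patient to 'treatment' or 'control'.

    Hashing (experiment, patient_id) yields a stable bucket in 0-99,
    so assignment is reproducible without storing any state.
    """
    digest = hashlib.sha256(f"{experiment}:{patient_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "treatment" if bucket < treatment_pct else "control"

# The same inputs always yield the same arm.
print(ab_bucket("patient-123", "sms-copy-v2"))
```

Keying the hash on the experiment name as well as the patient ID keeps assignments independent across experiments, which avoids correlated cohorts when multiple tests run at once.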

Apply Now!
