About the job
Databricks Data Engineer

OUR CLIENT
Our client provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting-edge analytics techniques, and a consultative approach. Leveraging proprietary methodology and best-of-breed technology, their analytics team takes an industry-specific approach to transform decision-making and embed analytics more deeply into clients' business processes. They have a global footprint of 2,000+ data scientists and analysts who assist client organizations with complex risk minimization methods, advanced marketing, pricing and CRM strategies, internal cost analysis, and cost and resource optimization within the organization. They serve the insurance, healthcare, banking, capital markets, utilities, retail and e-commerce, travel, and transportation and logistics industries.

ROLE
We are seeking a Databricks Data Engineer.

RESPONSIBILITIES:
- Lead and/or assist in designing and developing data systems, tailoring solutions to meet client-specific requirements
- Design and implement Databricks-based solutions with a focus on distributed data processing, data partitioning, and optimization for parallelism
- Engage with clients to evaluate their current and future needs, crafting bespoke solution architectures and providing strategic recommendations
- Develop comprehensive architecture solution roadmaps integrating client business processes and technologies
- Define and enforce coding standards for ETL processes, ensuring maintainability, reusability, and adherence to best practices
- Architect and implement CI/CD pipelines for Databricks notebooks and jobs, covering testing, versioning, and deployment
- Design disaster recovery strategies for Databricks environments, ensuring data resilience and minimal downtime in case of failure
- Innovate and expand solution offerings to address data challenges
- Advise stakeholders on data cloud platform architecture optimization, focusing on performance
- Apply Scrum and Agile methodologies to coordinate global delivery teams, run scrum ceremonies, manage backlog items, and handle escalations
- Integrate data across different systems and platforms
- Lead client discussions, drawing on strong verbal and written communication skills
CANDIDATE PROFILE
- 5+ years of experience in architecture, design, and implementation using Databricks
- Experience in designing and implementing scalable, fault-tolerant systems
- Deep understanding of one or more big data computing technologies such as Databricks or Snowflake
- Demonstrated experience with the deployment of Databricks on cloud platforms, including advanced configurations
- In-depth knowledge of Spark internals, the Catalyst optimizer, and the Databricks runtime environment
- Must have experience in implementing solutions using Databricks
- Experience in Insurance (P&C) is good to have
- Programming languages: SQL, Python
- Technologies: Databricks, Delta Lake storage, Spark (PySpark, Spark SQL)
- Good to have: Airflow, Splunk, Kubernetes, Power BI, Git, Azure DevOps
- Project management using Agile and Scrum
- B.S. degree in a data-centric field such as Mathematics, Economics, Computer Science, Information Systems, Information Processing, Engineering, or another science field
- Excellent communication & leadership skills, with the ability to lead and motivate team members
- Ability to work independently amid some ambiguity and juggle multiple demands