Location: San Mateo, CA, USA
Title: Data Engineer (Databricks Azure)
Location: San Francisco, CA (Remote)
Contract
Responsibilities
Lead the design and implementation of innovative analytical solutions using Tableau, SQL, and other Big Data technologies
Work with product and engineering teams to understand requirements and evaluate new features and architecture to help drive decisions
Build collaborative partnerships with architects, technical leads and key individuals within other functional groups
Perform detailed analysis of business problems and technical environments and use this analysis to design quality technical solutions
Actively participate in code reviews and test solutions to ensure they meet best-practice specifications
Build and foster a high-performance engineering culture, mentor team members, and provide the team with the tools and motivation to make things happen
Work with stakeholders and cross-functional teams to develop new solutions or enhance existing solutions
Demonstrate our xxx values of Passion for Client Service, Innovation, Expertise, Balance, Respect for All, Teamwork, and Initiative
We're excited about you if you have:
8-10 years of software development and deployment experience, with at least 5 years of hands-on experience with SQL, Databricks, ADF, DataStage (or another ETL tool), SSAS cubes, Cognos, Tableau, ThoughtSpot, and other BI tools
Experience writing SQL for processing raw data, Kafka ingestion, ADF pipelines, data validation, and QA (see the PySpark sketch after this list)
Knowledge of working with APIs to collect or ingest data (see the API ingestion sketch after this list)
Strong database knowledge; SQL and NoSQL preferred
Experience building data ingestion pipelines (Extract, Transform, Load workloads) and data warehouse or database architectures
Experience writing design documentation and source-to-target mapping documentation, and managing Confluence pages
Experience converting business functionality into technical Jira stories
Experience analyzing large datasets and identifying trends, patterns, and outliers to extract meaningful insights
Strong experience with data modeling, design patterns, and building highly scalable Business Intelligence solutions and distributed applications
Knowledge of cloud platforms such as Azure, AWS, or equivalent
Experience storing, joining, filtering, and analyzing data using SQL, Spark, Hive, etc.
Experience working with continuous integration frameworks and building regression-testable data code using GitHub, Jenkins, and related tools
Experience with programming/scripting languages such as Scala, Java, Python, R, etc. (any combination)
Analytical approach to problem-solving with an ability to work at an abstract level and gain consensus; excellent interpersonal, leadership, and communication skills
Data-oriented personality; motivated, independent, efficient, and able to handle several projects and work under pressure with a strong sense of priorities
Ability to work in a fast-paced (startup-like) agile development environment
Friendly, articulate, and interested in working in a fun, small team environment
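To illustrate the kind of SQL/Spark processing and validation work referenced above, here is a minimal PySpark sketch; the dataset names, paths, and columns (raw_events, customers, event_ts) are hypothetical placeholders used only for illustration.

```python
# Minimal sketch: process raw data with Spark SQL, validate, and publish.
# All paths, table names, and columns below are assumptions for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

# Load raw data (e.g. landed from a Kafka sink or an ADF pipeline output).
raw_events = spark.read.json("/mnt/raw/events/")
customers = spark.read.parquet("/mnt/curated/customers/")

raw_events.createOrReplaceTempView("raw_events")
customers.createOrReplaceTempView("customers")

# Use SQL for the transform: filter, join, and aggregate.
daily_summary = spark.sql("""
    SELECT  c.customer_id,
            to_date(e.event_ts) AS event_date,
            COUNT(*)            AS event_count
    FROM    raw_events e
    JOIN    customers  c ON e.customer_id = c.customer_id
    WHERE   e.event_type IS NOT NULL
    GROUP BY c.customer_id, to_date(e.event_ts)
""")

# Basic validation/QA checks before publishing: no null keys, non-empty result.
null_keys = daily_summary.filter(F.col("customer_id").isNull()).count()
assert null_keys == 0, "validation failed: null customer_id in summary"
assert daily_summary.count() > 0, "validation failed: empty summary"

# Write to the warehouse layer (format and path are placeholders).
daily_summary.write.mode("overwrite").partitionBy("event_date").parquet(
    "/mnt/warehouse/daily_summary/"
)
```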
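And a minimal sketch of collecting data from a REST API, as mentioned in the API requirement above; the endpoint URL, the page/limit pagination parameters, and the response shape are assumptions, not a specific system used in this role.

```python
# Minimal sketch: pull all records from a paginated REST API.
# The endpoint, query parameters, and response fields are hypothetical.
import requests


def fetch_records(base_url: str, page_size: int = 100) -> list[dict]:
    """Collect all records by walking pages until an empty batch is returned."""
    records, page = [], 1
    while True:
        resp = requests.get(
            base_url, params={"page": page, "limit": page_size}, timeout=30
        )
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            break
        records.extend(batch)
        page += 1
    return records


if __name__ == "__main__":
    rows = fetch_records("https://api.example.com/v1/events")  # placeholder URL
    print(f"fetched {len(rows)} records")
```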