Duties:
- Collaborate with cross-functional teams to design and implement a scalable and reliable KaaS platform.
- Java experience
- JavaScript experience
- Graph database knowledge (TopBraid, GraphQL)
- Develop a KaaS registration service, implementing an OpenAPI spec within TopBraid to manipulate a graph database.
- Query a graph database, both reads and writes, using GraphQL and SPARQL (see the sketch after this list).
- Extend Node.js or Java applications that sit on top of GraphQL to provide orchestration.
- Develop and maintain technical documentation, including system architecture diagrams, data flow diagrams, and API specifications.
- Apply an understanding of UML models to create custom queries.
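Much of this work amounts to issuing GraphQL queries from a thin orchestration layer. As a minimal sketch only (the endpoint URL, query shape, and field names below are hypothetical illustrations, not details of this role's actual stack), a Java service could POST a GraphQL query over HTTP like this:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class GraphQueryClient {
        // Hypothetical endpoint; a real deployment would read this from configuration.
        private static final String ENDPOINT = "https://example.com/graphql";

        public static void main(String[] args) throws Exception {
            // A minimal GraphQL query, JSON-escaped into the standard {"query": "..."} envelope.
            String body = "{\"query\": \"{ concepts { id label } }\"}";

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(ENDPOINT))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            // GraphQL responses come back as JSON; print the raw payload here.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + ": " + response.body());
        }
    }

A SPARQL read or write would follow the same HTTP pattern against a SPARQL endpoint, with the query text sent as the request body per the SPARQL 1.1 Protocol.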
Nice to have:
- DevOps pipeline experience
- Test automation experience
- Google Cloud Platform
- Terraform
- Trisotech
- TopBraid
Education:
- Bachelor's degree in Computer Science, Information Technology, or a related field; OR 3+ years of equivalent experience.
Requirements:
- 3+ years of hands-on experience programming in SQL.
- 2+ years of experience building and maintaining automated data pipelines and data assets using batch and/or streaming processes.
Responsibilities:
- Design and maintain data pipelines and services using best practices for ETL/ELT, data management, and data governance (see the sketch below).
- Analyze raw data sources and data transformation requirements.
- Perform data modeling against large datasets for peak requirements.
- Identify, design, and implement process improvements that automate manual processes and leverage standard frameworks and methodologies.
- Understand and incorporate data quality principles that ensure optimal performance, impact, and user experience.
- Create and document functional and technical specifications.
- Perform ongoing research into new features, versions, and related technologies, and provide recommendations to enhance our offerings.
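To give a flavor of the batch pipeline work described above, here is a minimal ETL sketch in Java over JDBC. The connection URL, credentials, and the raw_events and daily_totals tables are hypothetical stand-ins, not part of this posting; it simply extracts raw rows, aggregates them, and loads the result in a batch:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class BatchEtlStep {
        public static void main(String[] args) throws Exception {
            // Hypothetical JDBC URL; real credentials would come from a secrets manager.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/warehouse", "etl", "secret")) {
                conn.setAutoCommit(false); // commit the whole load atomically

                String insert = "INSERT INTO daily_totals (customer_id, total) VALUES (?, ?)";
                try (Statement extract = conn.createStatement();
                     PreparedStatement load = conn.prepareStatement(insert)) {

                    // Extract + transform: aggregate raw events in the source query.
                    ResultSet rows = extract.executeQuery(
                            "SELECT customer_id, SUM(amount) AS total "
                            + "FROM raw_events GROUP BY customer_id");

                    // Load: push aggregated rows to the target table in one JDBC batch.
                    while (rows.next()) {
                        load.setLong(1, rows.getLong("customer_id"));
                        load.setBigDecimal(2, rows.getBigDecimal("total"));
                        load.addBatch();
                    }
                    load.executeBatch();
                }
                conn.commit();
            }
        }
    }

In practice a scheduler would drive this step and idempotent upserts would replace the plain INSERT, but the extract/transform/load split is the core shape of the work.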