Scientific Knowledge Engineer - First Shift

Job Details:


Randstad

Location: Durham, NC, USA

Date: 2024-10-12T06:45:33Z

Job Description:

job summary:
The Scientific Knowledge Engineering team, which sits within the Onyx Product Management organization, is responsible for the data modeling, ontology definition and management, vocabulary mapping, and other key metadata activities that ensure Onyx platforms and data assets speak the language of science. The team is a core contributor to the R&D Knowledge Graph - the semantic layer that connects all of our data and metadata systems - as well as to the core metadata experiences that ultimately allow us to build products and services that both delight our customers and enable impressive automation and intelligence. This role is responsible for maximizing the value of our data assets over their lifetime, bringing purpose to data by translating highly technical information from domain experts into an appropriate data model - complete with a substantial ontology and vocabulary - that can be used to effectively structure and index the data. The role works with product managers and R&D subject matter experts to define the language of science (data models, ontologies, standards, etc.) in data products, acting as the voice of the knowledgebase and of asset interoperability and value. This includes responsibility for understanding and translating computational methods back through the data chain to maximize the quality and speed of data from source, driving experimental multi-variate analysis and data-driven decision-making.

location: Telecommute
job type: Contract
salary: $88.00 - 91.71 per hour
work hours: 9 to 5
education: Bachelors

responsibilities:
- Define schemas and data models of scientific information required for the creation of value-adding data products.
- Accountable for the quality control (through validation and verification) of mapping specifications to be industrialized by data engineering and maintained in platform-provisioned tooling - e.g., models, schemas, controlled vocabularies.
- Working with product managers and engineers, confidently convert business needs into well-defined, deliverable business requirements to enable the integration of large-scale biology data to predict, model, and stabilize therapeutically relevant protein complex and antigen conformations for drug and vaccine discovery.
- Collaborate with external groups to align data standards with industry and academic ontologies, ensuring that data standards are defined with usage and analytics in mind. May also provide data-source profiling and advisory consultancy to R&D outside of Onyx.
- Support effective ingestion of data by understanding the entry requirements set by platform engineering teams and ensuring that the barrier for entry is met - e.g., scientific information has the appropriate metadata to be indexed, structured, integrated, and standardized as needed. This may require articulating engineering standards and metadata information needs to third parties to ensure efficient and automated ingestion at scale.
- Provide bespoke subject matter expertise for R&D data to translate deep science into data for actionable insights.

qualifications:
- Bachelor's degree (Bioinformatics, Biomedical Science, Biomedical Engineering, Molecular Biology, or Computer Science)
- Biology-related work experience
- 5-8 years of job-related experience with an established track record of delivery
- Working experience querying relational databases with SQL
- Experience with industry-standard data management / metadata platforms, e.g., Collibra, Datahub, Datum, Informatica
- Data modeling, quality, analysis, and profiling (working experience with any data quality tool: SAS, Ataccama, Informatica Data Quality, Talend, OpenRefine)
- Experience with industry-standard tools for building data protocols, e.g., Avro, Protocol Buffers, Thrift
- Experience with at least one programming language - e.g., Python - for scripting vocabulary mappings, building data models, etc.
- Awareness of RDF, ontologies, and reference data
- Experience with open-source ontology tools, data formats, and languages (Protégé, SPARQL, OWL, SKOS, SHACL, RML)
- Specific experience with Knowledge Graph efforts; experience using ontology/taxonomy tools such as Centree, TopBraid, Smartlogic Semaphore, etc.

Preferred Qualifications:
- Demonstrated comfort operating and leading a matrixed team across organizational boundaries
- Membership in a data standards group, industry committee, board, or consortium
- Specific experience with ontology and Knowledge Graph efforts
- Experience in technical writing and documentation

skills: Molecular Biology, Data Management Plan, Biomedical Engineering

Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.

At Randstad, we welcome people of all abilities and want to ensure that our hiring and interview process meets the needs of all applicants. If you require a reasonable accommodation to make your application or interview experience a great one, please contact ...@randstadusa.com.

Pay offered to a successful candidate will be based on several factors, including the candidate's education, work experience, work location, specific job duties, certifications, etc.
In addition, Randstad offers a comprehensive benefits package, including health insurance, an incentive and recognition program, and a 401(k) contribution (all benefits are based on eligibility). This posting is open for thirty (30) days.

Apply Now!
