Data Engineer

TechStar Group

Location: Woodcliff Lake, NJ, USA

Date: 2024-12-12T08:47:34Z

Job Description:
We are currently hiring a Data Engineer to join our team in Woodcliff Lake, New Jersey.

We are a global digital solutions and professional services firm empowering businesses to compete by leveraging emerging technologies. Our vision is to deliver cutting-edge solutions for our clients with agility, responsiveness, transparency, and integrity. We have been recognized by Inc. Magazine as one of America's fastest-growing private companies. Work, learn, and grow! We are always looking for talented and qualified people who support our company's core values of professionalism, teamwork, work-life balance, and a family atmosphere in the workplace.

As a Data Engineer, you will work with product owners, data scientists, business analysts, and software engineers to design and build solutions to ingest, transform, store, and export data in a cloud environment while maintaining security, scalability, and personal data protection. You will need hands-on (not merely theoretical) AWS experience, along with Python, SQL, and experience with Big Data processes. You must be a W2 employee of TechStar Group, and this position requires working in a hybrid model, both remote and onsite as needed. Local candidates are preferred.

TechStar Group is an equal opportunity employer. TechStar provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.

Job Responsibilities:
- Implements and enhances complex data processing pipelines with a focus on collecting, parsing, cleaning, managing, and analyzing large data sets that produce valuable business insights and discoveries.
- Determines the infrastructure, services, and software required to build advanced data ingestion and transformation pipelines and solutions in the cloud.
- Assists data scientists and data analysts with data preparation, exploration, and analysis activities.
- Applies problem-solving experience and knowledge of advanced algorithms to build high-performance, parallel, and distributed solutions.
- Performs code and solution reviews and recommends enhancements that improve efficiency, performance, and stability and decrease support costs.
- Applies the latest DevOps and Agile methodologies to improve delivery time.
- Works with Scrum teams in daily stand-ups, providing progress updates on a frequent basis.
- Supports applications, including incident and problem management.
- Performs debugging and triage of incidents or problems and deploys fixes to restore services.
- Documents requirements and configurations and clarifies ambiguous specifications.
- Performs other duties as assigned by management.

Requirements:

Education:
- Bachelor's degree in Computer Science, Mathematics, or Engineering, or the equivalent of 4 years of related professional IT experience.

Experience:
- 3+ years of enterprise software engineering experience with object-oriented design, coding, and testing patterns, as well as experience engineering (commercial or open source) software platforms and large-scale data infrastructure solutions.
- 3+ years of software engineering and architecture experience within a cloud environment (Azure, AWS).
- 3+ years of enterprise data engineering experience within any Big Data environment (preferred).
- 3+ years of software development experience using Python.
- 2+ years of experience working in an Agile environment (Scrum, Lean, or Kanban).

Knowledge/Skills/Abilities:
- 3+ years of experience working on large-scale data integration and analytics projects, including using cloud (e.g., AWS Redshift, S3, EC2, Glue, Kinesis, EMR) and data-orchestration (e.g., Oozie, Apache Airflow) technologies.
- 3+ years of experience implementing distributed data processing pipelines using Apache Spark.
- 3+ years of experience designing relational/NoSQL databases and data warehouse solutions.
- 2+ years of experience writing and optimizing SQL queries in a business environment with large-scale, complex datasets.
- 2+ years of Unix/Linux operating system knowledge (including shell programming).
- 1+ years of experience with automation/configuration management tools such as Terraform, Puppet, or Chef.
- 1+ years of experience in container development and management using Docker.

Languages:
- SQL, Python, and Spark; strong experience with data handling using Python is required.
- Basic knowledge of continuous integration tools (e.g., Jenkins).
- Basic knowledge of machine learning algorithms and data visualization tools such as Microsoft Power BI and Tableau.
