Overview

TeleWorld Solutions is seeking a Software Engineer/Developer for our team! We are looking for an experienced software engineer with a focus on data engineering and ETL processes, preferably with exposure to both batch and streaming data. The candidate should be familiar with databases and data lake infrastructure, along with the associated tools for ingestion, transformation, and efficient querying across distributed data frameworks, including an understanding of performance and scalability issues and query optimization.

TeleWorld Solutions is a strategic wireless engineering and consulting firm offering network operators, OEMs, and tower companies turnkey design, optimization, network dimensioning, and deployment services. With the experience of hundreds of thousands of successful implementations, including macro, DAS, small cells, and Wi-Fi, the world's leading network operators and OEMs trust our knowledge and experience to plan, perform, troubleshoot, and implement an array of technologies and solutions. TeleWorld helps customers plan, design, manage, measure, and monetize opportunities throughout the network lifecycle and across every element of their network.

Location: Plano, TX
Length of Contract: 12/16/2024 to 07/01/2025

Come join our veteran-friendly team: a company with great benefits, certified as a Great Place to Work.

Responsibilities
- Develop and maintain workflows and pipelines to process continuous streams of data, including both batch and real-time processing; design end-to-end solutions for near-real-time and batch data pipelines.
- Build, optimize, and manage ETL (Extract, Transform, Load) processes for transforming unstructured raw data into structured formats.
- Collaborate with data engineers and business intelligence engineers to create and maintain data integrations and ETL pipelines, driving projects from concept to production deployment.
- Maintain and support incoming data feeds from multiple sources, including external customer feeds (CSV, XML) and Publisher/Subscriber models, ensuring seamless integration into the data pipeline.
- Analyze and resolve performance and scalability issues within distributed data frameworks; optimize queries for efficient execution.
- Develop tools for data ingestion and provide corresponding API access, addressing dynamic data schema needs and enabling quick adaptability to schema changes.
- Transform and aggregate data across distributed clustered environments while addressing challenges in scalability and query performance.
- Utilize orchestration tools to schedule, automate, and monitor workflows and pipelines, continuously improving reporting and analysis processes.
- Implement techniques for consuming, holding, and aging out continuous data streams. Support data feed management with tools such as Kafka, Spark, and NiFi.
- Work with SQL and NoSQL databases, ensuring robust and efficient storage, querying, and retrieval of structured and unstructured data.
- Work closely with cross-functional teams, including business intelligence, analytics, and operations, to meet evolving data needs and deliver actionable insights.
- Identify opportunities to automate, simplify, and improve existing data processes, driving operational efficiency and enhancing self-service support for customers.
- Leverage technologies such as Spark, Kafka, Hive, PySpark, Impala, SQL, and NoSQL to manage data engineering tasks effectively.
Qualifications

Required:
- 2-4 years of experience in data engineering, including ad-hoc transformation of unstructured raw data
- Experience using orchestration tools
- Design, build, and maintain workflows/pipelines to process continuous streams of data, with experience in the end-to-end design and build of near-real-time and batch data pipelines
- Ability to work closely with other data engineers and business intelligence engineers across teams to create data integrations and ETL pipelines, driving projects from initial concept to production deployment
- Experience maintaining and supporting incoming data feeds into the data pipeline from multiple sources, ranging from external customer feeds in CSV or XML format to automatic Publisher/Subscriber feeds
- Knowledge of database structures, theories, principles and practices (both SQL and NoSQL).
- Active development of ETL processes and data pipelines using Python, PySpark, Spark, or other highly parallel technologies
- Experience with data engineering technologies and tools such as Spark, Kafka, Hive, Ookla, NiFi, Impala, SQL, and NoSQL
- Understanding of MapReduce and other data query processing and aggregation models
- Understanding of the challenges of transforming data across distributed clustered environments
- Experience with techniques for consuming, holding, and aging out continuous data streams
- Ability to continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Ability to provide quick ingestion tools and corresponding access APIs for continuously changing data schemas, working closely with data engineers on specific transformation and access needs
Preferred:
- 1-2 years of experience developing applications with relational databases, preferably SQL Server and/or MySQL
- Some exposure to database optimization techniques for speed, complexity, normalization, etc.
Skills and Attributes:
- Ability to build effective working relationships with all functional units of the organization
- Excellent written, verbal, and presentation skills
- Excellent interpersonal skills
- Ability to work as part of a cross-cultural team
- Self-starter and self-motivated
- Ability to work with minimal supervision
- Ability to work under pressure and manage competing priorities
Technical Qualifications and Experience Level:
- 3-7 years of development experience using Java, Python, PySpark, Spark, Scala, and object-oriented approaches in designing, coding, testing, and debugging programs
- Ability to create simple scripts and tools using Linux, Perl, and Bash
- Development of cloud-based distributed applications
- Understanding of clustering and cloud orchestration tools
- Working knowledge of database standards and end user applications
- Working knowledge of data backup, recovery, security, integrity and SQL
- Familiarity with database design, documentation and coding
- Previous experience with DBA CASE tools (frontend/backend) and third-party tools
- Understanding of distributed file systems, and their optimal use in the commercial cloud (HDFS, S3, Google File System, Databricks)
- Familiarity with programming language APIs
- Problem solving skills and ability to think algorithmically
- Working knowledge of RDBMS/ORDBMS such as MariaDB, Oracle, and PostgreSQL
- Knowledge of SDLC methodologies (Waterfall, Agile, and Scrum)
- BS degree in a computer discipline or relevant certification
Join Our Veteran-Friendly Team:
Are you a veteran or a veteran spouse with expertise in telecommunications? Join our team at TeleWorld Solutions, where we value your military experience and provide great benefits. We invite all veterans and veteran spouses to bring their skills and dedication to our team.

TeleWorld Solutions is committed to employing a diverse workforce and provides Equal Employment Opportunity for all individuals regardless of race, color, religion, gender, age, national origin, marital status, sexual orientation, gender identity, status as a protected veteran, genetic information, status as a qualified individual with a disability, or any other characteristic protected by law.