Verstand AI
Position Highlights:
Verstand AI is seeking Data Engineers with strong SQL expertise, Python development skills, and ideally real-time streaming experience (e.g., Apache Kafka, AWS Kinesis). Verstand data engineers are instrumental in significant initiatives to transform all aspects of data management services for Verstand's commercial and public sector clients, as well as our internal software product development efforts. In both areas, the work leads to revised approaches to enterprise data warehousing, business intelligence, and data wrangling/ELT/ETL, and leverages the cloud to apply advanced analytics and data mining capabilities.
Verstand Data Engineers play a significant role in the implementation, maintenance, and continuous improvement of enterprise data platforms, and work closely with business stakeholders, software development and support teams, and cloud DevOps. Most importantly, Verstand AI's data engineers get the opportunity to work with cutting-edge technologies and be part of data teams that help clients with end-to-end data science programs.
Key Responsibilities:
- Set up and operate data pipelines and data wrangling procedures using Python and/or SQL
- Collaborate with engineers and business customers to understand data needs (batch and real-time/event streaming), capture requirements, and deliver complete BI solutions
- Design and build data extraction, transformation, and loading processes by writing custom data pipelines
- Design, implement and support platforms that can provide ad-hoc access to large datasets and unstructured data
- Model data and metadata to support ad hoc and pre-built reporting
- Tune application and query performance using performance profiling tools and SQL
- Build data expertise and own data quality for assigned areas of ownership
Job Requirements:
Minimum Experience, Skills and Education:
- 7+ years of experience in using SQL and databases in a business environment
- 5+ years of experience with cloud environments, distributed systems, system automation, and real-time platforms
- Experience with Apache Kafka (with Confluent a plus)
- 3+ years of production experience with cloud technologies such as Google Cloud Platform (GCP), Azure, and Amazon Web Services (AWS), including Redshift
- 5+ years of experience in custom ETL design, implementation, and maintenance (toolsets such as Informatica, Talend, Boomi, SSIS, etc.)
- 5+ years of experience with data warehouse schema design and data modeling
- Production level experience with Python, SQL, and shell scripting
- Experience with batch and stream processing
- Experience with building large scale data processing systems
- Solid understanding of data design patterns and best practices
- Working knowledge of data visualization tools such as Tableau and Power BI
- Experience in analyzing data to identify deliverables, gaps, and inconsistencies
- Familiarity with agile software development practices and drive to ship quickly
- Experience leading change, taking initiative, and driving results
- Effective communication skills and strong problem-solving skills
- Proven ability and desire to mentor others in a team environment
- Bachelor's degree from a four-year college or university in Computer Science, Technology, or a related field
Experience That Sets You Apart:
- Experience with microservice platforms, API development, and containers
- Experience with Apache Airflow
Verstand AI is a fast-growing decision science and product engineering firm that believes in ongoing training and development for its staff. The firm’s mission is to help both its commercial and public sector clients resolve data management challenges and move to delivering insight and benefits for stakeholders, customers and constituents.
Based out of Tysons Corner, VA, Verstand does business across the United States and is moving into Europe and Asia. If you’re interested in working with us and have a desire to tackle challenging data problems, we welcome your interest and encourage you to apply.
Job Features
Location: Remote