Mid Data Engineer

RESPONSIBILITIES

- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS "big data" technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders across the organization to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.

TECHNICAL REQUIREMENTS

- Advanced working SQL knowledge and experience with relational databases, including query authoring (SQL) and working familiarity with a variety of databases.
- Experience building and optimizing "big data" pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Ability to build processes supporting data transformation, data structures, metadata, dependency management, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable "big data" data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- We are ideally looking for a candidate with 3+ years of experience in a Data Engineer role.
- While our systems are still evolving and the architecture will change, the most competitive candidates will have experience in many of the technologies we use today or are currently exploring:
  - GraphQL (preferred)
  - Postgres (must have)
  - Azure EDW (must have)
    - Azure Data Factory
    - SQL Server
    - PowerBI
  - Tableau or AWS QuickSight (must have)
  - Excel/Google Sheets
  - Metabase (preferred)
  - Redshift (preferred)
  - AWS DMS (preferred)

NICE TO HAVE

- Salesforce
- JavaScript
- Streaming/event-based tech experience: Kafka, AWS SQS, etc.
- Geospatial experience, such as PostGIS (for Postgres)

Start your professional career with us

At Rootstack, we are focused on creating the technologies of the present and the future that help our clients elevate their digital presence. With a work culture focused on success, we put our employees first, investing in your growth within the company and always motivating you to achieve greatness.
