Mid-Level Data Engineer

Product - Cost, Melbourne, Full Time

Stax is a home-grown start-up revolutionising the AWS cloud experience. Our aim is to provide a platform that automates and streamlines critical tasks to deliver a secure and efficient developer experience. Stax comprises a team of over 40 cloud artisans passionate about the quality outcomes our product delivers. With over 40 active customers across three continents, our customers range from fast food to luxury to the ASX top 20. We've cultivated an environment based on creativity. When people who care about their craft are given the freedom to explore possibilities without limits, amazing things happen. There are no cool cliques, just a hard-working team generating ideas and devising solutions in a creative and collaborative workspace.

Our Stax tribe is going through a period of massive growth, and we're pretty chuffed to have been named one of the top three fastest-growing companies in Australia by CRN Australia and Schneider Electric.

We're looking for a Mid-Level Data Engineer to join our Cost engineering team on a permanent basis. You'll be an engineer with data application experience who can work in a fast-moving environment to help deliver our roadmap.
 
Our cost product is a big data pipeline, processing hundreds of millions of rows and terabytes of data every day. The successful engineer will bring their love of software, their DevOps skills, and their data expertise to continue delivering value while setting us up for future growth. They will join an established cross-functional development team that's agile, fun, and driven to build a high-quality product.

This is a great opportunity for someone to tackle complex data problems in an environment full of smart and talented people to learn from.

As part of the team you will:

  • Develop and maintain scalable data pipelines and lakes to support continuing increases in data volume and complexity
  • Collaborate and engage in product decisions to improve the data models and processes that feed our platform
  • Write unit/integration tests, contribute to the engineering wiki, and document your work
  • Work closely with other teams across the business to develop our long-term data strategy
  • Work with our SRE team to monitor and improve new and existing data pipelines to ensure they are reliable
  • Perform data analysis to troubleshoot and help resolve data-related issues
We are looking for someone who:

  • Has 3+ years’ experience in a Data Engineering role – programming in Python is preferred
  • Has a software engineering background
  • Has experience running and maintaining production systems
  • Has experience working with SQL and NoSQL technologies
  • Is proactive and brings a ‘can do’ approach to problem solving
  • Has experience working in an agile environment – including planning, estimating, and delivering in iterative chunks
  • Has experience with AWS and AWS-specific technologies such as Redshift (highly regarded)

Our values reflect the way we work. We’re a casual, inclusive bunch, with team members from a variety of backgrounds collaborating as a team to overcome challenges. Everyone is given space to learn and develop their skills and knowledge. We support each other in all ventures, whether attaining a new AWS certification, trying their hand at baking sourdough, or brewing beer. We create remarkable experiences for our customers and we treat others the way we would like to be treated.