Staff Data Engineer
Galileo Financial Technologies
Who we are:
Welcoming, collaborative, and full of opportunities to make an impact: that's how our employees describe working here. Galileo is a financial technology company that provides innovative software products and services powering some of the world's largest Fintechs. We are the only payments innovator that applies technology and engineering capabilities to empower Fintechs and financial institutions to unleash their full creativity and achieve their most inspired goals. Galileo leads its industry with superior fraud detection, security, decision-making analytics, and regulatory compliance functionality, combined with customized, responsive, and flexible programs that accelerate the success of all payments companies and solve tomorrow's payments challenges today. We hire energetic and creative employees and give them the opportunity to excel in their careers and make a difference for our clients. Learn more about us and why we work here at https://www.galileo-ft.com/working-at-galileo.
The role:
Join our Data Engineering organization and help lead the transformation of the finance industry. You'll work on a modern, cloud-native data platform: building data pipelines, designing data models and transformations, and delivering data products and dashboards to our clients. You'll bring technical expertise and provide operational support to meet the data needs of our business, from migrating existing workloads to developing advanced cloud solutions.
In this role, you’ll collaborate closely with other engineering and product teams, focusing on delivering high-quality data products while promoting a culture of engineering excellence. We’re looking for someone who brings not only strong technical skills but also thoughtful perspectives and creativity to help push our team forward.
What you’ll do:
- Design and implement data pipelines and ETL processes to support large-scale data products (see the pipeline sketch after this list).
- Develop and deliver efficient data models and products that align with business needs.
- Gather and understand requirements, then design data models and solutions tailored to analytical and usage demands.
- Apply data quality and governance best practices to ensure consistency, accuracy, and reliability across all data products, and establish monitoring to maintain partner trust throughout the data lifecycle.
- Perform root cause analysis, implement preventative fixes, and monitor data processes for quality and performance, using cost analysis to keep solutions efficient and cost-effective.
- Collaborate closely with stakeholders to understand requirements, translate them into detailed technical solutions, and plan for incremental, on-time delivery.
- Mentor junior data engineers through code reviews and shared best practices, helping to foster a culture of high performance and engineering excellence.
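
For concreteness, here is a minimal sketch of the kind of orchestrated pipeline this role builds, assuming Apache Airflow 2.x (listed in the requirements below). The DAG name, bucket path, and table name are hypothetical placeholders, not Galileo's actual systems.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_transactions_etl():
    """Illustrative daily pipeline: extract raw events, then load them into the warehouse."""

    @task
    def extract() -> str:
        # Placeholder: in practice this would pull the previous day's
        # transaction events into staging storage.
        return "s3://example-bucket/staging/transactions/"

    @task
    def load(staging_path: str) -> None:
        # Placeholder: in practice this would COPY the staged files
        # into a raw warehouse table.
        print(f"loading {staging_path} into RAW.TRANSACTIONS")

    load(extract())


daily_transactions_etl()
```

The TaskFlow decorators keep the dependency graph (extract before load) explicit and testable; a real pipeline would call out to actual extract and load logic, with monitoring and alerting layered on top.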
What you’ll need:
- Bachelor's degree in Computer Science, Data Engineering, or a related field.
- 6+ years of experience in data engineering, with a strong background in building and scaling data-intensive applications.
- Solid understanding of data warehousing concepts, dimensional modeling, and ETL best practices.
- Proficiency in SQL and at least one programming language such as Python or Java.
- Strong hands-on experience with Snowflake, including building and optimizing data pipelines, data models, and transformations (a minimal sketch follows this list).
- Experience working with one or more major cloud platforms (e.g., AWS, Azure, or GCP). AWS preferred.
- Experience with workflow orchestration tools such as Apache Airflow.
- Familiarity with data transformation tools and frameworks such as dbt (Data Build Tool), PySpark, or Snowpark.
- Experience working with data visualization tools like Tableau or Looker.
- Working knowledge of Git and CI/CD best practices.
- A collaborative mindset and the ability to work effectively with both technical and non-technical team members.
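
Since Snowflake transformation work features prominently above, here is a minimal Snowpark-for-Python sketch of building a simple derived model, per the skills listed; the connection parameters, table names, and column names are all hypothetical.

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Hypothetical connection parameters; real credentials would come from a
# secrets manager, never hard-coded.
session = Session.builder.configs({
    "account": "example_account",
    "user": "example_user",
    "password": "...",
    "warehouse": "TRANSFORM_WH",
    "database": "ANALYTICS",
    "schema": "MARTS",
}).create()

# Aggregate raw transactions into a daily spend fact table (illustrative names).
daily_spend = (
    session.table("RAW.TRANSACTIONS")
    .group_by(col("ACCOUNT_ID"), col("TRANSACTION_DATE"))
    .agg(sum_(col("AMOUNT")).alias("TOTAL_SPEND"))
)

# Materialize the model; overwrite keeps the example idempotent across re-runs.
daily_spend.write.save_as_table("FCT_DAILY_SPEND", mode="overwrite")
```

The same transformation could equally be expressed as a dbt model; the point is declarative, version-controlled logic that runs inside the warehouse.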