Senior Data Engineer
Job Overview
Location
Remote
Employment Type
Full-time
Work Arrangement
Remote
Sector
Information Technology & Software
Experience Level
Senior (5-8 years)
Application Deadline
March 19, 2026
About the Company
GR8 Tech specializes in building leading B2B iGaming platforms for operators aiming to dominate the market. We provide comprehensive, high-impact technology solutions designed for scalability, including seamless integrations, expert consulting, and ongoing operational support.
Our platform is trusted by millions of active players and is instrumental in driving significant business growth for our clients. We pride ourselves on being the iGaming Platform for Champions.
With a global team of over 1000 dedicated professionals, GR8 Tech not only delivers cutting-edge technology but also empowers operators to achieve success across diverse brands, markets, and geographies. Our ambition fuels our innovation, and our people are the driving force behind our achievements.
Job Description
GR8 Tech is seeking a highly experienced Senior Data Engineer to join our innovative team. In this pivotal role, you will be instrumental in designing, building, and maintaining a scalable data infrastructure that underpins our high-throughput, production-grade iGaming platform.
You will take ownership of critical data systems, ensuring they effectively support analytics, operational workloads, and advanced ML-driven capabilities across our multi-tenant environment. This position carries significant architectural responsibility and demands end-to-end ownership, with strict reliability, performance, and SLA standards.
This is a hands-on role where you will influence core data architecture decisions and contribute to the continuous evolution of our platform to meet growing scale and complexity. You will work with a modern tech stack including Python, SQL, Kafka, PostgreSQL, AWS, and various analytics and streaming tools.
Key Responsibilities
- Design and operate batch and streaming data pipelines.
- Architect and evolve data storage layers (lake, analytical, operational).
- Develop scalable data models for analytics, ML, and operational systems.
- Ensure data quality, consistency, and observability across pipelines.
- Implement real-time processing using event-driven architectures.
- Build ingestion and transformation workflows (ETL/ELT).
- Own backfills, reprocessing strategies, and data migrations.
- Implement CI/CD, monitoring, and SLA-driven operational practices.
- Partner with engineers, analysts, and data scientists to design reliable and scalable data solutions.
- Contribute to data platform standards and best practices.
Qualifications
- 5+ years of experience building and operating production-grade data systems under SLA constraints.
- Strong proficiency in SQL and Python.
- Hands-on experience with streaming systems (Kafka or similar).
- Experience with batch processing and workflow orchestration.
- Production experience with relational databases (PostgreSQL or similar), including performance tuning, indexing, and partitioning.
- Experience designing high-throughput and low-latency systems.
- Solid understanding of point-in-time correctness and reproducible data pipelines.
- Experience with caching systems (Redis or similar).
- Cloud experience (AWS preferred).
- Experience with analytical engines (Athena, StarRocks, ClickHouse, or similar).
- Experience with distributed processing frameworks (Spark, Flink, Beam, or similar).
- Experience implementing data quality frameworks and CI/CD for data pipelines.
- Familiarity with vector databases or columnar storage systems.
- Experience supporting ML workloads (e.g., feature pipelines or feature storage patterns) is a plus.
- Familiarity with training-serving consistency concepts is a plus.
- Experience building shared data platform components is a plus.
- Experience building lightweight data-serving APIs (e.g., FastAPI) is a plus.
- Experience working with containerized environments (e.g., Docker) is a plus.
Benefits & Perks
- Benefits Cafeteria with an annual budget for Sports, Medical, Mental Health, Home Office, and Languages.
- Paid maternity/paternity leave with a monthly childcare allowance.
- 20+ vacation days, unlimited sick leave, and emergency time off.
- Remote-first work environment with tech support and coworking compensation.
- Regular team events (online/offline/offsite).
- A strong learning culture with internal courses and growth programs.
How to Apply
To apply for this role, click the Apply button on this page and follow the instructions.
Posted Date
March 4, 2026