About the Position
Location: Johannesburg
Job Type: Full-time (100%)
About Scytale
Scytale is a fast-growing B2B SaaS startup transforming cybersecurity compliance for businesses worldwide. Our innovative Compliance-as-a-Service platform simplifies frameworks like SOC 2, ISO 27001, HIPAA, GDPR, and PCI DSS for startups, scale-ups, and enterprises. Scytale is recognized as a leader in Governance, Risk & Compliance on G2, and our customers rave about our platform and service.
Headquartered in Tel Aviv, we offer a collaborative, growth-oriented environment with a hybrid work model, competitive compensation, and benefits that prioritize your professional and personal well-being.
Role Overview
We are seeking a Data Engineer to join our growing engineering team. This is a key role for a motivated and technically skilled individual with a solid foundation in software engineering and data systems. You will build scalable data infrastructure, implement robust data integrations, and collaborate with cross-functional teams to solve real-world data challenges.
Requirements
- 5+ years of professional experience as a Data Engineer or in a similar role developing ETL data pipelines
- Advanced proficiency in Python for backend development and scripting
- Strong SQL skills with hands-on experience in querying and modeling relational databases
- Experience with cloud platforms such as AWS, GCP, or Azure
- Hands-on experience with containerization and orchestration technologies such as Docker and Kubernetes
- Solid understanding of RESTful APIs
- Experience with Git-based version control platforms (GitHub, GitLab, Bitbucket) and CI/CD workflows
- Strong grasp of software development lifecycle (SDLC) and principles of clean, maintainable code
- Demonstrated ability to work independently, own projects end-to-end, and mentor junior engineers
Nice to Have:
- Familiarity with AI concepts and prompt engineering
- Experience with data security, privacy compliance, and access controls
- Knowledge of infrastructure-as-code tools (e.g., Terraform, Helm)
- Background in event-driven architecture or stream processing
Responsibilities
- Design, develop, test, and maintain reliable data pipelines and ETL processes using Python and SQL
- Build and manage API-based data ingestion workflows and real-time data integrations
- Apply software engineering best practices: modular design, testing, version control, and documentation
- Own and optimize data workflows and automation, ensuring efficiency and scalability
- Collaborate closely with senior engineers, data scientists, and stakeholders to translate business needs into technical solutions
- Maintain and enhance data reliability, observability, and error handling in production systems
- Develop and support internal data-driven tools
- Implement data operations best practices, including automated monitoring, alerting, and incident response for pipeline health
- Apply DevOps principles to data workflows: CI/CD pipelines, infrastructure-as-code, and containerized ETL deployments
Benefits
- Innovative Work: Be part of a cutting-edge product shaping the future of security and compliance.
- Learning & Growth: Access courses, conferences, and mentorship to grow your career.
- Hybrid Work Model: Enjoy the flexibility of hybrid working.
- Collaborative Culture: Work with inspiring colleagues in a supportive environment.
- Relaxation & Fun: Take breaks in our relaxation room or join our team events, happy hours, and holiday celebrations.
- Family First: Personal and family priorities always come first.