Senior Software Engineer (Distributed Systems & Data Sync)
Engineering • San Francisco (3 days/week in office) • Full-time
About the Role
We're looking for a Senior Software Engineer to design and build reliable, scalable systems that power bi-directional data synchronization across multiple platforms. You'll work on distributed systems that connect to third-party APIs, databases, and internal services, keeping millions of records consistent and up to date in real time. This role is ideal for someone who thrives on complex systems problems, has strong database fundamentals, and understands the trade-offs involved in data replication, consistency, and fault tolerance.
Responsibilities
- Architect and implement distributed systems that handle large-scale, high-throughput data synchronization
- Design, implement, and optimize two-way sync pipelines between external APIs (e.g. CRMs, ERPs) and internal data stores
- Develop systems that ensure data consistency, conflict resolution, and idempotency across multiple sources
- Collaborate with product and platform teams to define integration patterns and synchronization strategies
- Profile and optimize systems for latency, throughput, and reliability
- Establish observability (metrics, tracing, and logging) for distributed data flows
- Mentor other engineers, share best practices, and contribute to architectural decisions
Required Qualifications
- 5+ years of experience as a backend or systems engineer
- Strong proficiency in TypeScript, Go, or Python (or a similar modern language)
- Deep understanding of distributed systems concepts: replication, partitioning, consensus, eventual consistency, retries/backoff, etc.
- Experience with PostgreSQL, message queues (e.g. SQS, RabbitMQ, Kafka), and API integrations
- Prior work with synchronization or integration platforms (e.g. Nango, Airbyte, Segment, Workato) or custom-built data pipelines
- Experience deploying and scaling containerized services (Docker, ECS, or Kubernetes)
- Excellent debugging, communication, and architectural design skills
Nice to Have
- Experience with conflict resolution strategies in two-way syncs
- Familiarity with event-driven architectures or CQRS
- Background with ETL/ELT pipelines, change data capture (CDC), or API gateways
- Contributions to open-source distributed systems or data tools
Benefits
- Build foundational infrastructure for complex, real-world synchronization problems
- Work with a small, high-impact team solving deep technical challenges
- Strong ownership, a flexible hybrid work environment, and meaningful technical autonomy