We're hiring a hands-on Data Engineer (Architect) to design, build, and operate scalable geospatial data platforms for maritime mobility data (e.g., AIS trajectories). You'll own architecture and engineering decisions end-to-end: ingestion, transformations, curated datasets, performance/cost tuning, production reliability, governance, and security.
Responsibilities
- Architect and implement geospatial-first data platforms for high-volume mobility data
- Design raw → staging → curated dataset layers with strong schemas, table definitions, and naming conventions
- Define BigQuery partitioning/clustering strategies and PostgreSQL/PostGIS indexing for scale
- Build batch + streaming pipelines for near-real-time positional updates and large historical backfills
- Implement incremental, idempotent, replayable transformations and automated reprocessing workflows
- Execute data migrations and cutovers with reconciliation and minimal downtime
- Optimize BigQuery query patterns, materialization strategy, and cost-performance tradeoffs
- Tune PostgreSQL query plans, indexing, and operational maintenance for consistent performance
- Implement CDC strategies (where applicable) with replayability and reliable recovery paths
- Productionize pipelines with CI/CD, environment separation, and infrastructure automation
- Build observability (logging, metrics, alerts), define SLAs, and maintain runbooks for incident response
- Define governance standards: ownership, lineage, access controls, retention, and auditability
- Secure platform access and secrets management across BigQuery/PostgreSQL/object storage/runtime
- Partner with Data Science/Analytics stakeholders to translate needs into scalable data products
- Provide technical leadership, mentor engineers, and maintain architecture + pipeline documentation
Qualifications
- Bachelor's degree in Computer Science/Software/Data Engineering (or equivalent experience)
- 5+ years building and operating production data pipelines (batch; streaming strongly preferred)
- Advanced SQL and proven performance tuning experience
- Strong hands-on experience with BigQuery, including partitioning, clustering, and cost-aware query design
- Strong hands-on experience with PostgreSQL; PostGIS experience is highly valued
- Experience with object storage (S3/GCS/DO Spaces or equivalent)
- Experience orchestrating workflows using Airflow, Kafka-based pipelines, cron, or similar schedulers
- Experience designing analytics-ready geospatial models and working with spatial joins/geofencing/proximity
- Strong engineering discipline: idempotency, replayability, backfills/reprocessing, and versioned transformations
- Experienced in DevOps, CI/CD, and version-controlled schema migrations
- Experienced in observability: monitoring, alerting, logging, performance metrics, and operational runbooks
- Experienced in security fundamentals: IAM/access controls, auditability, and secrets management
- Experienced in Kafka, Flink, Spark Structured Streaming, or other stream processing frameworks
- Experienced in CDC tooling (e.g., Debezium) and low-downtime migration patterns
- Familiarity with maritime mobility / AIS pipelines and domain concepts (MMSI/IMO, voyages/events) is an advantage
What We Look For
We're looking for growth-minded data engineers with high learning agility and strong engineering judgment: people who design architecture-first, production-ready geospatial data systems with clear conventions, idempotent and replayable pipelines, and disciplined observability and governance, balancing cost and performance while delivering reliable datasets for real-world maritime operations. This is a startup environment, so expect a fast pace and evolving requirements.
Apply Now.