Remote Data Engineer

Remote
Recruiter: Viktoriya Bliznyuk
Roles: Data
Must-have skills: Python
Considering candidates from: Central Europe, Argentina, United States, United Kingdom, and Canada
Work arrangement: Remote only
Industry: Artificial Intelligence
Language: English
Level: Middle or senior
Company size: 11-50 employees
Trial period: 3 months
Sorcero provides an enterprise platform: a deep language intelligence operating system for technical domains, including Insurance, Financial Services, and Life Sciences. The platform, built by former leadership of the MIT Media Lab, harnesses AI and Natural Language Understanding (NLU) to deliver new capabilities that augment human performance. Sorcero's NLU platform is a pre-built, no-code, drag-and-drop solution that reduces application deployment time from months to days.

Sorcero is now looking for a Data Engineer experienced in orchestrating data and ML pipeline workflows and in writing batch and stream data processing pipelines for ingestion, ETL, and analytics.
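For illustration only (a sketch, not Sorcero's code): a minimal Apache Beam batch pipeline in Python of the kind the role describes, covering an ingest step, a simple transform, and a load step. It runs locally on the DirectRunner; pass --runner=DataflowRunner (plus project, region, and temp_location options) to execute on Google Cloud Dataflow. The bucket paths and record schema are hypothetical.

import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_record(line: str) -> dict:
    # Hypothetical schema: each input line is a JSON object with "id" and "text".
    record = json.loads(line)
    return {"id": record["id"], "text": record.get("text", "")}


def run() -> None:
    # With no flags, PipelineOptions picks up runner options from the command line.
    options = PipelineOptions()
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "Ingest" >> beam.io.ReadFromText("gs://example-bucket/raw/*.jsonl")  # hypothetical path
            | "Parse" >> beam.Map(parse_record)
            | "Drop empty" >> beam.Filter(lambda r: r["text"])
            | "Serialize" >> beam.Map(json.dumps)
            | "Load" >> beam.io.WriteToText("gs://example-bucket/clean/part")  # hypothetical path
        )


if __name__ == "__main__":
    run()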

Their app stack: HTML/CSS, Vue.js, Python, machine learning, data science, MLOps, Elasticsearch, Redis, various graph databases, Cloud Storage, Cloud Composer, Cloud Dataflow, TensorFlow Extended, Vertex.

Must-have:
  • Strong programming skills in Python (other programming languages are a plus);
  • Experience working with Google Cloud Composer (Airflow); other workflow orchestration tools are a plus (a minimal DAG sketch follows this list);
  • Experience with Google Cloud Dataflow or Apache Beam and its runners (knowing how things work under the hood is a plus);
  • Experience with Elasticsearch and Postgres, and an interest in learning graph databases (Neo4j and Dgraph);
  • Experience with data lakes and Google Cloud Storage (S3 and Delta Lake a plus).
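For illustration only (a sketch, not part of the posting): a minimal Airflow DAG of the kind Cloud Composer schedules, wiring an extract task ahead of a load task. The DAG ID, schedule, and task bodies are hypothetical placeholders.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract() -> None:
    # Placeholder: pull raw data from a source system.
    print("extracting")


def load() -> None:
    # Placeholder: write transformed data to the warehouse.
    print("loading")


with DAG(
    dag_id="example_daily_etl",  # hypothetical DAG name
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # extract runs before load
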
Nice-to-have:
  • Familiarity with dashboard technologies (Looker, Plotly, Tableau, Grafana, ELK, etc.);
  • Familiarity with scaling (Redis, SQL, Elasticsearch, Postgres, graph databases, Kafka, etc.);
  • Knowledge of data warehousing and ETL concepts;
  • Familiarity with the equivalent AWS stack: Glue, EMR, Data Pipeline;
  • Experience with data security, sub-second-latency systems, machine learning, and NLP.

Apply now
