Remote Go and Python Developer

Remote
Recruiter:
Nikita Tsibulsky
Roles:
Backend
Must-have skills:
Go
Python
Nice-to-have skills:
NLP
Elasticsearch
Kubernetes
Docker
Terraform
Considering candidates from:
Ukraine, Belarus, and the Schengen Area
Work arrangement:
Remote only
Industry:
Developer tools
Language:
English
Level:
Senior
Company size:
2-10 employees
Archipelo is building an intelligent code discovery platform that gives developers the best tools to discover code in any form and to benefit through insights, recognition, and greater productivity. They are transforming code search to improve the practice of modern programming, using a graph-based approach that draws on data from the entire open source ecosystem. They're on a mission to build the world's best code discovery engine. Archipelo is well funded by top Silicon Valley investors, including the first investors in Google, Twitter, Zoom, LinkedIn, and Uber. Their team has backgrounds at NASA, LinkedIn, Facebook, Amazon, AWS, and Cisco, as well as MIT, Harvard, Stanford, and Berkeley.
Right now, they are seeking a Backend Software Engineer to lead technology development on the frontier of code discovery and developer productivity. The ideal candidate is comfortable building software with a variety of technologies. You will help the team design, test, and rapidly iterate on multiple products and services stemming from their core technology. You will develop prototypes, tools, and methods that inform decision-making for software developers (e.g. “Is this the right solution to my coding problem?”, “How do I implement this specific code in my application?”, or “What code libraries are other developers using to solve my problem?”).

Tasks: 
  • Write and maintain services, tools, and APIs, wearing many hats
  • Use SQL to interact with our data
  • Leverage services provided by our cloud provider to power our products
  • Optimize for performance, with an emphasis on user experience and cost
  • Monitor and own your work in production
  • Protect our work by writing tests and automating quality control where possible
  • Write real-time pipelines that execute complex operations on incoming data
  • Experiment in ways that accelerate prototyping and maximize resource utilization
  • Ensure pipelines run quickly, focusing on fast single-node performance and leveraging horizontal scaling
  • Manage our data pipeline, including scheduling, dataflow programming, SQL and data labeling
  • Orchestrate the operation of clusters of commodity machines
  • Review code, mentor other engineers and support your peers
  • Attract, recruit and retain top engineering and scientific talent
Must-have skills:
  • Expertise in microservices and cloud computing across multiple cloud platforms
  • Proficiency with distributed systems and with coordinating high volumes of independent commodity machines into complete, functional systems that handle diverse workloads
  • 8+ years of professional software engineering experience
  • Expertise with ETL
  • Expertise with Go
  • Expertise with Python
  • Experience building server-side systems, including web and public APIs
  • Experience working remotely, capable of leveraging asynchronous communication patterns
  • Experience documenting your work for the benefit of your peers
Nice-to-have skills:
  • PhD or Master’s degree in computer science/engineering, mathematics, physics, or a related field
  • 10+ years of professional data engineering and software engineering experience
  • Experience with machine learning and NLP
  • Expertise with machine learning frameworks (like Keras or PyTorch)
  • Experience debugging CPU/memory performance issues 
  • Experience with various security aspects of building public facing APIs 
  • Experience in frontend development 
  • Ability to run CPU and I/O profilers to figure out where to optimize the pipeline
  • Advanced working knowledge of information retrieval and search technologies 
  • Expertise with configuration and use of open-source search systems to query and understand data
  • Experience with most of the following technologies:
    • Elasticsearch, Solr, and Lucene
    • Machine learning infrastructure
    • Kubernetes, Docker, Terraform
    • Deep learning, GNNs
    • CircleCI, GitHub Actions, Jenkins
    • Graph databases
Benefits:
  • Stock options
  • Paid vacation and sick leave
  • A strong remote work culture that includes group activities and local gatherings
Interview process:
  1. Intro call with Toughbyte
  2. Vision and Opportunity Interview 
  3. Culture and Operations Interview 
  4. First Technical Interview w/ Technical Challenge (20-25 mins) 
  5. Second Technical Interview
  6. (Optional) Third Technical Interview
  7. Compensation discussion 
  8. (Optional) Meeting with Investor to Close
Check out the answers to frequent questions about this position below. Can't find the answer you're looking for? Ask us via email or try the company page.

Is part-time work possible?

Unfortunately, no. The company is looking to hire a full-time employee.
