Staff Backend Engineer - AI Application

Israel - Petah Tikva · Full-time · Senior

About The Position

About Cellebrite:

Cellebrite’s (Nasdaq: CLBT) mission is to enable its global customers to protect and save lives by enhancing digital investigations and intelligence gathering to accelerate justice in communities around the world. Cellebrite’s AI-powered Digital Investigation Platform enables customers to lawfully access, collect, analyze and share digital evidence in legally sanctioned investigations while preserving data privacy. Thousands of public safety organizations, intelligence agencies and businesses rely on Cellebrite’s digital forensic and investigative solutions—available via cloud, on-premises and hybrid deployments—to close cases faster and safeguard communities. To learn more, visit us at www.cellebrite.com, https://investors.cellebrite.com/investors and find us on social media @Cellebrite. 



Position Overview:

  • You will work on systems whose output directly supports real-world investigations, where reliability, privacy, and trust matter. 
  • You will join a team building innovative agentic AI capabilities, including the core runtime and orchestration foundations that enable investigators to interact with complex data in faster, more intelligent ways. 
  • The technical challenges are substantial: multi-agent systems, retrieval pipelines, context engineering, memory management, strict multi-tenant isolation, high-throughput data flows, and real-time response patterns at scale. 
  • You will be part of a senior team that moves fast, owns services end-to-end, and cares deeply about engineering craft, AI quality, and production readiness. 



What is your mission? 

We're building the next generation of investigator tools at Cellebrite, powered by state-of-the-art agentic AI and cutting-edge backend technologies.  

As a Staff Backend Engineer, you will be part of the core team that designs, builds, maintains, and evolves the agent harness and shared AI capabilities that power real investigative workflows at enterprise scale. 

You will help turn advanced AI concepts into production-grade systems by shaping orchestration, retrieval, context, memory, and backend architecture across the platform.




Responsibilities:



Agentic AI Systems: 

  • Design and evolve multi-agent RAG architectures. 
  • Design, build, maintain, and evolve the core agent harness and shared AI capabilities that power the platform. 
  • Develop production-grade orchestration flows for LLM-based agents, including tool calling, routing, control loops, and failure handling. 
  • Build and improve retrieval pipelines across structured and unstructured data using RAG and adjacent retrieval technologies. 
  • Shape context engineering patterns that assemble the right information, instructions, tools, and history for each agent workflow. 
  • Design memory capabilities for agentic systems, including short-term and long-term memory patterns and semantic, episodic, and procedural memory strategies where relevant. 
  • Drive AI evaluation frameworks, dataset design, regression detection, prompt and model change validation, and LLM-as-judge style metrics. 
  • Own real-time response infrastructure for interactive AI experiences, including WebSockets, SSE, and streaming responses. 


Data Pipeline & Ingestion: 

  • Design and implement scalable, reliable, event-driven data ingestion and processing pipelines. 
  • Build fault-tolerant workflows for indexing, enrichment, and retrieval over large, complex investigative datasets. 
  • Own search index design, vector retrieval patterns, and query performance at scale. 


Core Backend Services: 

  • Build and maintain high-performance async Python APIs. 
  • Design multi-tenant database schemas with strict data isolation guarantees. 
  • Enforce tenant isolation across every data layer: relational DB, search systems, and object storage. 


Infrastructure & Platform:

  • Build and maintain portable backend infrastructure and deployment foundations across cloud environments. 
  • Drive observability through distributed tracing, structured logging, metrics, and alerting. 
  • Maintain and improve internal shared libraries, platform components, and engineering standards for AI services. 


Collaboration:

  • Work closely with product, data, and frontend engineers to translate requirements into backend architecture. 
  • Participate in design reviews, code reviews, and cross-team technical discussions. 
  • Mentor junior and mid-level engineers on backend and AI best practices.

Requirements

  • 5+ years of backend engineering experience. 
  • Deep Python expertise, including async/await, type annotations, FastAPI or similar, Pydantic, and SQLAlchemy. 
  • Hands-on production experience with LLM orchestration, agent runtimes, tool-calling flows, or multi-step AI systems. 
  • Strong experience building production AI systems using RAG, retrieval pipelines, and related retrieval technologies. 
  • Solid understanding of context engineering and how to manage prompts, retrieved knowledge, tool context, and conversation state in production systems. 
  • Familiarity with memory patterns for AI systems, including short-term and long-term memory, and semantic, episodic, and procedural approaches. 
  • Experience designing multi-tenant SaaS systems with strict tenant data isolation. 
  • Solid understanding of event-driven architecture and distributed system design. 
  • Familiarity with real-time communication patterns like WebSockets, SSE, and HTTP streaming. 
  • A genuine passion for AI engineering and curiosity about fast-moving agentic systems, retrieval methods, and emerging model capabilities. 




Nice to Have: 

  • Experience with cloud platforms such as AWS or similar environments. 
  • Familiarity with AWS services such as ECS, Lambda, Step Functions, OpenSearch, Aurora PostgreSQL, SQS/SNS, and S3. 
  • Experience with agent orchestration frameworks such as LangGraph, PydanticAI, LangChain, or similar tools. 
  • Experience with vector databases, search index design, and retrieval infrastructure. 
  • Experience designing and running LLM evaluation pipelines. 
  • Experience integrating open-source models and open-source AI infrastructure. 
  • Experience working in privacy-sensitive, regulated, or security-critical environments. 


 
