We're building the financial infrastructure that powers global innovation. With our cutting-edge suite of embedded payments, cards, and lending solutions, we enable millions of businesses and consumers to transact seamlessly and securely.
The Role
We are looking for a backend engineer who can design, build, and operate highly reliable Node.js services on AWS that enable generative-AI capabilities across our products and internal workflows.
You will create scalable APIs, data pipelines, and serverless architectures that integrate large-language-model (LLM) services such as Amazon Bedrock, OpenAI, and open-source models, enabling teams to safely and efficiently leverage generative AI.
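As a flavor of the reliability work this role involves, here is a minimal sketch of a retry-with-exponential-backoff wrapper around a flaky LLM call. All names are illustrative, not a description of our internal APIs:

```typescript
// Hypothetical sketch: retrying a throttled/flaky LLM call with
// exponential backoff. Names and structure are illustrative only.

async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 100 ms, 200 ms, 400 ms, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Demo with a fake "LLM call" that fails twice, then succeeds.
async function demo(): Promise<string> {
  let calls = 0;
  return withRetry(async () => {
    calls++;
    if (calls < 3) throw new Error("throttled");
    return `ok after ${calls} calls`;
  });
}
```

In production the same wrapper would sit in front of an Amazon Bedrock or OpenAI client call, typically with jitter added to the delay.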
Who You Are
- You have experience building Retrieval-Augmented Generation (RAG) systems or knowledge-base chatbots.
- You're hands-on with vector databases such as Pinecone, Chroma, or pgvector on Postgres/Aurora.
- You hold an AWS certification (Developer, Solutions Architect, or Machine Learning Specialty).
- You have experience with observability tooling (Datadog, New Relic) and cost-optimization strategies for AI workloads.
- You have a background in microservices, domain-driven design, or event-sourcing patterns.
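The RAG experience described above ultimately comes down to nearest-neighbor search over embeddings. A self-contained sketch of what a vector store (Pinecone, pgvector, Chroma) does under the hood, using toy in-memory data and 2-D "embeddings" for illustration only:

```typescript
// Hypothetical in-memory stand-in for a vector store: rank documents by
// cosine similarity to a query embedding and return the top-k matches.

interface Doc {
  id: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function topK(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) -
        cosineSimilarity(query, x.embedding),
    )
    .slice(0, k);
}

// Toy corpus; real embeddings have hundreds or thousands of dimensions.
const corpus: Doc[] = [
  { id: "refund-policy", embedding: [1, 0] },
  { id: "card-limits", embedding: [0, 1] },
  { id: "fee-schedule", embedding: [0.9, 0.1] },
];
const hits = topK([1, 0.05], corpus, 2).map((d) => d.id);
// hits → ["refund-policy", "fee-schedule"]
```

In practice the same query is a single SQL statement against pgvector (`ORDER BY embedding <=> $1 LIMIT k`) or one API call to Pinecone; the point is only to show the ranking step a RAG retriever performs.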
What You'll Be Doing
- Design and implement REST/GraphQL APIs in Node.js/TypeScript to serve generative-AI features such as chat, summarization, and content generation.
- Build and maintain AWS-native architectures using Lambda, API Gateway, ECS/Fargate, DynamoDB, S3, and Step Functions.
- Integrate and orchestrate LLM services (Amazon Bedrock, OpenAI, self-hosted models) and vector databases (Amazon Aurora pgvector, Pinecone, Chroma) to power Retrieval-Augmented Generation (RAG) pipelines.
- Create secure, observable, and cost-efficient infrastructure as code (CDK/Terraform) and automate CI/CD with GitHub Actions or AWS CodePipeline.
- Implement monitoring, tracing, and logging (CloudWatch, X-Ray, OpenTelemetry) to track latency, cost, and output quality of AI endpoints.
- Collaborate with ML engineers, product managers, and front-end teams in agile sprints; participate in design reviews and knowledge-sharing sessions.
- Establish best practices for prompt engineering, model evaluation, and data governance to ensure responsible AI usage.
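To illustrate the prompt-engineering bullet above: a common RAG pattern is assembling retrieved chunks into a grounded prompt before the model call. A hedged sketch; the template wording and names are illustrative, not a company convention:

```typescript
// Illustrative prompt assembly for a RAG pipeline: retrieved chunks are
// concatenated under an instruction so the model answers only from the
// provided context and cites its sources.

interface Chunk {
  source: string;
  text: string;
}

function buildRagPrompt(question: string, chunks: Chunk[]): string {
  const context = chunks
    .map((c, i) => `[${i + 1}] (${c.source}) ${c.text}`)
    .join("\n");
  return [
    "Answer using ONLY the context below. Cite sources as [n].",
    "If the answer is not in the context, say you don't know.",
    "",
    "Context:",
    context,
    "",
    `Question: ${question}`,
  ].join("\n");
}

const prompt = buildRagPrompt("What is the default card limit?", [
  { source: "card-limits.md", text: "Default card limit is $5,000." },
]);
// `prompt` would then be sent as the message body to Bedrock or the
// OpenAI API.
```

Keeping this assembly in one pure function also makes it easy to unit-test and to evaluate prompt variants offline, which is where the model-evaluation work above plugs in.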
What You Bring to the Table
- Availability to work some US hours - Must
- Proficiency in Hebrew and English, both written and verbal, sufficient for achieving consensus and success in a remote, largely asynchronous work environment - Must
- 4+ years professional experience building production services with Node.js/TypeScript.
- 3+ years hands-on with AWS, including Lambda, API Gateway, DynamoDB, and at least one container service (ECS, EKS, or Fargate).
- Experience integrating thirdโparty or cloudโnative LLM services (e.g., Amazon Bedrock, OpenAI API) into production systems.
- Strong understanding of RESTful design, GraphQL fundamentals, and event-driven architectures (SNS/SQS, EventBridge).
- Proficiency with infrastructure-as-code (AWS CDK, Terraform, or CloudFormation) and CI/CD pipelines.
- Familiarity with secure coding, authentication/authorization patterns (Cognito, OAuth), and data privacy best practices for AI workloads.
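For the Lambda + API Gateway requirement, the basic handler shape looks like the sketch below. The event/result types are inlined copies of just the fields used, so the snippet stands alone; in real code they come from `@types/aws-lambda`, and the echo is a placeholder for a Bedrock/OpenAI call:

```typescript
// Minimal API Gateway proxy handler shape. Business logic is a
// placeholder echo; a real service would invoke an LLM here.

interface ProxyEvent {
  body: string | null;
}

interface ProxyResult {
  statusCode: number;
  headers: Record<string, string>;
  body: string;
}

const JSON_HEADERS = { "Content-Type": "application/json" };

async function handler(event: ProxyEvent): Promise<ProxyResult> {
  let prompt: string;
  try {
    prompt = JSON.parse(event.body ?? "{}").prompt ?? "";
  } catch {
    return {
      statusCode: 400,
      headers: JSON_HEADERS,
      body: JSON.stringify({ error: "invalid JSON" }),
    };
  }
  if (!prompt) {
    return {
      statusCode: 400,
      headers: JSON_HEADERS,
      body: JSON.stringify({ error: "prompt required" }),
    };
  }
  // Placeholder for the model call (Bedrock InvokeModel, OpenAI, ...).
  return {
    statusCode: 200,
    headers: JSON_HEADERS,
    body: JSON.stringify({ completion: `echo: ${prompt}` }),
  };
}
```

Validating and failing fast at the edge like this keeps malformed requests from ever reaching the (metered, per-token-billed) model call.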
Technical Environment:
- Languages: TypeScript, JavaScript, SQL
- Frameworks & Libraries: Express.js, Fastify, Apollo Server, LangChain.js, AWS SDK v3
- Datastores: DynamoDB, Aurora (Postgres + pgvector), Redis, S3
- Infra & DevOps: AWS Lambda, API Gateway, ECS/Fargate, Step Functions, CDK, Terraform, Docker, GitHub Actions
- AI Stack: Amazon Bedrock, OpenAI API, HuggingFace Inference Endpoints, Pinecone, Chroma
Send CV to [email protected]