Role: Backend Engineer
Rank: Mid to Senior-level
Location: Hybrid (Ghana)
Commitment: Full Time
Salary: GHS 9,600 - 16,000
We’re a trailblazing fintech startup building a smart wealth management powerhouse that accelerates financial freedom for all. Our solutions span personal finance, DeFi, and artificial intelligence. If this sounds like your jam and you’re passionate, forward-thinking, and biased toward execution and impact, we’d love to have you.
What you’ll do day-to-day
We are seeking a talented Backend Engineer to architect and optimize backend solutions, with a focus on data engineering, AI, and large language model (LLM) implementations. As an LLM Backend Engineer at Ladder, you’ll be responsible for developing, scaling, and maintaining LLM-based applications and services. You will work closely with our AI, systems, and cross-functional product delivery teams to drive our mission forward by delivering AI-driven insights and tools that elevate our users' experience. You will be expected to:
i. Understand and analyze product requirements and communicate with the product and AI teams
ii. Write clean, high-quality, high-performance, and scalable backend code for LLM integration
iii. Design, develop, and maintain APIs that support AI-driven applications and interact with large language models
iv. Optimize database interactions, manage data flows, and implement efficient data caching techniques for LLM applications
v. Coordinate with cross-functional teams to ensure alignment with business objectives and compliance standards
vi. Participate in code reviews, technical planning, and LLM model fine-tuning sessions
vii. Implement automated testing platforms, monitor AI service performance, and support deployment of new AI-driven features
viii. Stay updated on advancements in LLM technology and bring best practices to our team
Work Hours
This role requires 40 hours of quality work weekly, between 9:30 AM and 5:30 PM GMT.
Main Stack
i. LLM Frameworks: OpenAI, LangChain
ii. Programming Language: Python
iii. LLMOps: FastAPI, Guardrails AI, Semantic Router, RedisVL (SemanticCache/llmcache)
iv. Backend Framework: FastAPI, Django (DRF)
v. Data store: PostgreSQL, Redis for caching
vi. CI/CD: GitHub Actions, Docker
vii. Infrastructure: AWS, Kubernetes
Requirements
i. Proven experience in backend development, ideally in data engineering, AI/ML, or large language model applications
ii. Strong proficiency in Python, with demonstrated expertise in frameworks like FastAPI and Django
iii. Proficiency in designing RESTful APIs and managing high-traffic AI model requests
iv. Knowledge of database optimization, query handling, and data flow for LLM data pipelines
v. Familiarity with Docker, cloud infrastructure (AWS preferred), and CI/CD tools
vi. Ability to communicate effectively across teams, bringing clarity to technical and product discussions
vii. A sense of ownership and an innovative mindset with an ability to identify areas for AI-driven improvements
Nice to have
i. Interest in experimenting with emerging LLMs and AI techniques to push the boundaries of what's possible
ii. Experience with LLM frameworks (e.g., Hugging Face, OpenAI API, LangChain) and integrating LLMs into backend systems