
AI Engineer / Full Stack Developer @ Antal
- Warszawa (mazowieckie) / Kraków (małopolskie)
- Permanent contract
- Full-time
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 3+ years of experience as a Full Stack Developer, ideally with AI-focused projects.
- Proficiency in Python (back-end) and JavaScript (front-end).
- Strong knowledge of LLMs and GenAI, including model selection, fine-tuning, and embeddings.
- Hands-on experience with API development and integration.
- Practical expertise with Docker, Kubernetes, and cloud platforms (AWS, GCP, Azure).
- Familiarity with SQL/NoSQL databases and caching systems (e.g., Redis).
- Knowledge of vector databases for LLM applications.
- Experience with Git, CI/CD pipelines, and DevOps practices.
- Fluency in both Polish and English.
- Familiarity with streaming/real-time data systems (e.g., Kafka).
- Experience with serverless computing (e.g., AWS Lambda).
- Background in ML frameworks (TensorFlow, PyTorch, ONNX).
- Understanding of AI data security and compliance standards.
- Strong problem-solving mindset and ability to work both independently and collaboratively.
- Excellent communication skills to translate technical concepts into actionable tasks.
- Curiosity and proactive learning approach toward emerging AI technologies.
- Embed LLMs and other GenAI models into web apps through well-designed, efficient APIs.
- Build and optimize endpoints to ensure smooth real-time communication between front-end and AI back-end systems.
- Design secure, scalable, and high-performance microservices for AI deployment.
- Develop engaging and responsive user interfaces (JavaScript/TypeScript) to showcase and interact with AI-driven features.
- Build components for visualizing and managing AI model outputs.
- Create reliable back-end services in Python to support large-scale GenAI models.
- Develop and maintain data pipelines, from data preprocessing to post-processing of model results.
- Apply best practices for handling sensitive data and ensuring consistent model performance.
- Use Docker and Kubernetes for containerization and orchestration.
- Build and maintain CI/CD pipelines to automate testing and deployments.
- Manage secure and scalable cloud environments (AWS, GCP, or Azure) for training, hosting, and running models.
- Work with vector databases (Pinecone, Weaviate, Faiss) to support semantic search and recommendation systems.
- Utilize frameworks such as Hugging Face Transformers, LangChain, and OpenAI APIs.
- Fine-tune and optimize LLMs to fit application requirements.