AI engineering
The revolution needs builders. Become an AI Engineer.
start:
Feb 24, 2026
16 classes
$ 350/mo
up to 20 students
what's inside
Master AI Engineering as a new discipline. This is a course for those who are ready to enter a field that everyone talks about but few deeply understand. You will learn how AI engineering differs from classical ML and how to use the modern stack and foundation models to solve complex business problems.
The program is built around hands-on projects: a RAG system on corporate data, autonomous agents with tool calling and memory, and evaluation pipelines for quality control. You will go from embeddings and vectorization all the way to context engineering. The focus of the course is production: when an LLM is worth using and when it is not, how to optimize costs, and how to ensure system observability.
We'll also pay attention to engineering culture: prompt testing, using LLM-as-a-judge, and addressing latency and security issues.
The course is designed for developers with Python experience and a basic understanding of ML. You will finish with a portfolio of projects and the opportunity to become the person who brings GenAI into your company.
*The cases you will examine in the course are not academic examples; they come from the instructor's production experience on real projects at Netflix.
curriculum
prepare yourself
Intro to AI Engineering
The role of the AI engineer and a practical stack for working with LLMs
- What is AI engineering
- Practical use cases of foundation models
- The AI engineering stack
Foundational Models
How a model's parameters determine its behavior and possible applications
- Training data, training, post-training, and fine-tuning: what each stage is for
- Model architecture and its size
- Sampling and its impact on results
- Specialized models: coding, image processing, audio and video generation
- On-device and small (mini) models
Practice:
Running and using local models for different use cases: coding, image generation, speech-to-text
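As a taste of the local-model practice, here is a minimal sketch of running a small open model on your own machine with the Hugging Face transformers library; the Qwen/Qwen2.5-0.5B-Instruct checkpoint and the sampling settings are illustrative assumptions, not the course's reference setup.

```python
# Minimal local text generation sketch (assumed model: Qwen/Qwen2.5-0.5B-Instruct).
# pip install transformers torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # small instruct model that fits on a laptop
)

messages = [{"role": "user", "content": "Explain post-training in two sentences."}]

# Sampling settings directly change the output: higher temperature gives more varied text.
out = generator(messages, max_new_tokens=120, do_sample=True, temperature=0.7)
print(out[0]["generated_text"][-1]["content"])  # last message is the model's reply
```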
Prompt Engineering
From basic zero-shot prompts to controlled, secure, and tested interaction with models
- The difference between zero-shot and few-shot prompting
- Optimizing prompts for a specific task
- CoT prompting and using models for reasoning
- How to write prompts that reduce LLM hallucinations
- Security risks and ways to minimize them
- Versioning and testing prompts
Practice:
Building a pipeline for extracting structured data from text files
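To give a feel for the structured-extraction practice, here is a minimal sketch using the OpenAI Python SDK with JSON mode; the model name, the field list, and the invoice example are illustrative assumptions rather than the course's reference solution.

```python
# Extract structured fields from free text as JSON (model name is an assumption).
# pip install openai  (requires OPENAI_API_KEY in the environment)
import json
from openai import OpenAI

client = OpenAI()

PROMPT = """Extract the following fields from the text and answer with JSON only:
company, invoice_date (YYYY-MM-DD), total_amount (number), currency.

Text:
{text}"""

def extract(text: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",                      # assumed model
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
        response_format={"type": "json_object"},  # force valid JSON output
        temperature=0,                            # keep extraction as deterministic as possible
    )
    return json.loads(resp.choices[0].message.content)

print(extract("Invoice from ACME Corp, dated 2026-02-24, total due: 1,250.00 USD."))
```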
Evaluations for AI systems
How to systematically test and measure the performance of AI systems
- The probabilistic nature of LLMs
- Writing scoring functions for AI solutions
- LLM-as-a-judge: using models to evaluate results
- Building a comprehensive evaluation pipeline as part of CI/CD
Practice:
Building a pipeline for evaluating AI solution performance. Preparing a test dataset and designing evaluation functions.
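As an illustration of the evaluation practice, here is a minimal sketch of a deterministic scoring function paired with an LLM-as-a-judge check, again via the OpenAI SDK; the test dataset, the rubric, and the judge model are assumptions made for the example.

```python
# Tiny eval loop: a deterministic scoring function plus an LLM-as-a-judge check.
# pip install openai
from openai import OpenAI

client = OpenAI()

# Hypothetical test dataset: (question, answer produced by the system, reference answer).
DATASET = [
    ("What is RAG?", "Retrieval-augmented generation.", "Retrieval-augmented generation"),
    ("Who created Python?", "Guido van Rossum created Python.", "Guido van Rossum"),
]

def contains_reference(answer: str, reference: str) -> float:
    """Cheap deterministic score: 1.0 if the reference string appears in the answer."""
    return float(reference.lower() in answer.lower())

def judge(question: str, answer: str, reference: str) -> float:
    """LLM-as-a-judge: ask a model to grade the answer from 0 to 1."""
    prompt = (
        f"Question: {question}\nReference answer: {reference}\nModel answer: {answer}\n"
        "Grade the model answer from 0 to 1 for correctness. Reply with the number only."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed judge model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return float(resp.choices[0].message.content.strip())

for q, a, ref in DATASET:
    print(q, "| exact:", contains_reference(a, ref), "| judge:", judge(q, a, ref))
```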
Embeddings and Vectorization
How to convert different types of content into numerical vectors for search and comparison
- Concepts and basic principles
- Similarity search, clustering, semantic and hybrid search, reranking
- Chunking strategies
- Embeddings for text, images, audio and video, and multimodal embeddings
- Practical use cases: normalization and deduplication
Practice:
Data vectorization. Similarity search with context taken into account, normalization, and deduplication.
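Here is a minimal sketch of vectorization and cosine-similarity search with the sentence-transformers library; the all-MiniLM-L6-v2 model and the toy corpus are assumptions for illustration.

```python
# Embed a few texts and find the one closest to a query by cosine similarity.
# pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed small embedding model

corpus = [
    "Refund policy: customers can return items within 30 days.",
    "Our office is open Monday to Friday, 9:00 to 18:00.",
    "Shipping takes 3-5 business days within the EU.",
]
corpus_vecs = model.encode(corpus, normalize_embeddings=True)

query_vec = model.encode(["How long do I have to return a product?"],
                         normalize_embeddings=True)[0]

# With normalized vectors, the dot product equals cosine similarity.
scores = corpus_vecs @ query_vec
print(corpus[int(np.argmax(scores))])
```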
RAG (Retrieval-Augmented Generation)
What RAG is and how to build RAG systems on your own data
- What RAG is and where it is used in the industry
- Why we use RAG instead of training models
- RAG architecture: retrieval algorithms and their optimization, response generation
- Context and memory: key elements of effective RAG solutions
Practice:
Creating a chatbot that answers questions from your own (internal) data. Building RAG for large volumes of data when all the information cannot fit into the context window.
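The retrieve-then-generate loop at the heart of RAG fits in a few lines; the sketch below combines cosine-similarity retrieval over embedded chunks with a chat completion over the retrieved context. The document snippets, chunking, and model names are assumptions, not the course's reference architecture.

```python
# Minimal RAG loop: embed chunks, retrieve the closest ones, answer from that context.
# pip install openai numpy  (requires OPENAI_API_KEY)
import numpy as np
from openai import OpenAI

client = OpenAI()

CHUNKS = [  # in practice these come from chunking your internal documents
    "VPN access is requested through the IT helpdesk portal.",
    "Vacation requests must be approved by your manager in Workday.",
    "Production deploys are frozen every Friday after 16:00.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)  # assumed model
    return np.array([d.embedding for d in resp.data])

chunk_vecs = embed(CHUNKS)

def answer(question: str, top_k: int = 2) -> str:
    q_vec = embed([question])[0]
    scores = chunk_vecs @ q_vec / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec))
    context = "\n".join(CHUNKS[i] for i in np.argsort(scores)[::-1][:top_k])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{
            "role": "user",
            "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}",
        }],
    )
    return resp.choices[0].message.content

print(answer("How do I get VPN access?"))
```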
Agents
Building autonomous AI systems capable of planning, using tools, and making decisions
- MCPs and tool calling
- Agent frameworks
- Agentic RAG
- Designing reliable AI agents
- Design patterns for agents
- Context engineering and memory management for agents
Practice:
Creating a solution with an agent framework. Planning and tool calling. Agentic RAG. Using user feedback.
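Tool calling is the mechanism most agent frameworks build on; below is a minimal single-step sketch with the OpenAI SDK's function-calling interface. The get_weather tool, its schema, and the model name are illustrative assumptions; a full agent would loop, feeding tool results back to the model.

```python
# Single round of tool calling: the model decides whether to call our function.
# pip install openai
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    """Hypothetical local tool; a real agent would call an actual API here."""
    return f"It is 21C and sunny in {city}."

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[{"role": "user", "content": "What's the weather in Kyiv?"}],
    tools=TOOLS,
)

call = resp.choices[0].message.tool_calls[0]  # the model chose to call our tool
args = json.loads(call.function.arguments)
print(get_weather(**args))                    # an agent loop would feed this back to the model
```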
DevEx Productivity / AI first
Integrating AI into the developer workflow: from autocomplete to autonomous coding agents
- Coding agents: hype or a working tool
- AI assistants for the full development cycle: from working on an idea to deployment to production
- Cursor, Claude, Cline and other popular tools
- MCP servers, Skills and their integration into the development process
Practice:
Creating an application entirely with a coding agent. Connecting MCP servers and Skills to extend agent capabilities.
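To make the MCP-server bullet concrete, here is a minimal sketch of a custom tool exposed over MCP using the official Python SDK (the mcp package and its FastMCP helper); the server name and the tool are assumptions for illustration, and in practice you would register the server in your coding agent's MCP configuration.

```python
# Minimal MCP server exposing one tool that a coding agent (Cursor, Claude, Cline) can call.
# pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("team-tools")  # assumed server name

@mcp.tool()
def ticket_status(ticket_id: str) -> str:
    """Hypothetical tool: look up the status of an internal ticket."""
    return f"Ticket {ticket_id}: in review"

if __name__ == "__main__":
    mcp.run()  # serves over stdio so an agent can connect to it
```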
Preparation for production usage
What you need to know before launching an AI system in production
- The cost of AI solutions: how to calculate expenses
- Best practices: when not to use LLMs, RAG, and agents
- Performance issues when interacting with LLMs and agents
- Handling security and potentially dangerous actions of AI agents: guardrails, sandboxing, manual judgment, feedback loops, checkpoints
- Observability for AI applications
Practice:
Working with LLMOps systems and evaluating the performance and cost of LLM usage
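Cost estimation in practice is simple arithmetic over token counts and per-million-token prices; the sketch below shows the calculation with placeholder prices and traffic numbers that are purely illustrative (real prices vary by provider and model).

```python
# Back-of-the-envelope LLM cost estimate; all numbers below are illustrative placeholders.

PRICE_PER_1M_INPUT = 0.60    # USD per 1M input tokens (placeholder)
PRICE_PER_1M_OUTPUT = 2.40   # USD per 1M output tokens (placeholder)

requests_per_day = 10_000
avg_input_tokens = 1_200     # prompt + retrieved context
avg_output_tokens = 300

daily_cost = requests_per_day * (
    avg_input_tokens / 1_000_000 * PRICE_PER_1M_INPUT
    + avg_output_tokens / 1_000_000 * PRICE_PER_1M_OUTPUT
)
print(f"~${daily_cost:.2f}/day, ~${daily_cost * 30:.2f}/month")
```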
Presentation of course AI projects
Course completion and summary
- Presentation of the final AI assistant
- Analysis of key technical aspects and options for improving response accuracy
- Analysis of the architecture and how it affects solution cost
- Discussion of next steps for improving the solution and integrating it into business processes
instructor:

Dmytro Kovalenko
Senior Software Engineer @Netflix
10+ years of experience developing high-load, high-performance solutions at startups and tech companies. Specializes in GenAI in production: LLMs, AI agents, RAG, NLP, and integrating models into real business processes.
ready?
take the first step
reviews
what alumni say
what awaits
have fun and dive deep
intensive mode
We meet on Zoom twice a week, every Tuesday and Thursday at 6:30 PM, with new homework assigned each week.
All lectures are live sessions with the instructor and are recorded so you can return to the material later. We regularly hold additional Q&A sessions and stay in touch with you on Slack.
The language of instruction is Ukrainian.
Additional materials are in English.
learn among the best
We carefully select students so you're surrounded by driven, motivated peers. Yes, we dismiss those who don't complete assignments.
Your instructor stays with you until it clicks — whether that means a third code review or a quick 15-minute call. That's what we do: push each other to learn and level up.
Oh, and share jokes in Slack and swap referrals to awesome companies.
results that matter
No shallow slides or long introductions — just deep dives into real production challenges.
Certificates are earned, not given. They come from real results: completed assignments, active discussions, and measurable progress.