python internals

Master Python from the inside out

start:

Mar 23, 2026

16 classes

$350

what's inside

Python is only as fast as your understanding of what's going on under the hood. In this course, we'll uncover what's usually hidden: the CPython memory model, PyObject, allocations and fragmentation, the GIL and thread contention, multiprocessing, async, and zero-copy approaches. You'll learn not to guess but to measure, profile, and optimize. You'll see that decisions at the memory or concurrency level are never trivial: they determine the speed, stability, and cost of the system.


This course moves developers into the category of those who understand Python beneath Python: engineers who control the interpreter's behavior, build scalable solutions, reduce latency and memory usage, and write code that holds up under traffic. The result is a tangible increase in the performance of your programs and confidence in what is happening inside the interpreter.


 The program is not for passive viewing, but for deep work: research, experimentation, optimization, and perhaps rewriting what has seemed right for years.

curriculum
prepare yourself

How Python manages memory

How memory is organized in CPython and why it matters for performance

  • CPython's internal object model: PyObject, reference counting, the role of GC
  • Memory management levels: heap, arenas, pools, blocks, and pymalloc logic
  • Typical patterns of overallocation and memory fragmentation in Python code
  • Measurement and diagnostic tools: tracemalloc, gc, sys.getsizeof, and practical analysis techniques

Practice: Writing code without unnecessary allocations
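By way of illustration (a minimal sketch of ours, not taken from the course materials), here is how tracemalloc and sys.getsizeof from the tooling bullet can be used to compare the allocation cost of a materialized list against a lazy generator:

```python
import sys
import tracemalloc

def build_list(n):
    # Materializes every element up front: one big allocation plus n object references.
    return [i * 2 for i in range(n)]

def build_gen(n):
    # Lazily yields values: only the small generator frame lives on the heap.
    return (i * 2 for i in range(n))

if __name__ == "__main__":
    n = 100_000

    tracemalloc.start()
    data = build_list(n)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"list:      size={sys.getsizeof(data):>10} bytes, traced peak={peak:>10} bytes")

    tracemalloc.start()
    lazy = build_gen(n)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"generator: size={sys.getsizeof(lazy):>10} bytes, traced peak={peak:>10} bytes")
```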

Memory copying and fragmentation

How silent copies and allocations affect memory consumption and program stability

  • Shallow, deep, and hidden copying: where Python creates new objects
  • Memory fragmentation: causes and accumulation
  • The impact of copying and fragmentation on memory consumption and system stability
  • What to use instead of copies: references and view-style approaches (generators, weak references, slots, named tuples, dataclasses)

Practice:
Implementing a copy-on-write array, analyzing hidden allocations, memory profiling
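As a flavor of the "what to use instead of copies" theme, a minimal sketch (illustrative only, not the course's copy-on-write assignment) contrasting shallow and deep copies and showing how __slots__ removes the per-instance dictionary:

```python
import copy
import sys

class Point:
    # Regular class: every instance carries its own __dict__.
    def __init__(self, x, y):
        self.x, self.y = x, y

class SlottedPoint:
    # __slots__ drops the per-instance __dict__, shrinking each object.
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x, self.y = x, y

if __name__ == "__main__":
    nested = [[1, 2], [3, 4]]
    shallow = copy.copy(nested)    # new outer list, inner lists are shared
    deep = copy.deepcopy(nested)   # everything is duplicated
    print(shallow[0] is nested[0], deep[0] is nested[0])  # True False

    p, s = Point(1, 2), SlottedPoint(1, 2)
    print(sys.getsizeof(p) + sys.getsizeof(p.__dict__), sys.getsizeof(s))
```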

Performance beyond Python

When the interpreter is no longer enough and which approaches let you scale performance

  • Vectorization and moving away from Python loops: NumPy and alternatives (PyArrow, JAX), usage scenarios
  • The zero-copy approach: buffer protocol, memoryview, mmap — working with data without copies
  • Compilation and JIT optimization: Cython, Numba. When it's justified and when it's not

Practice:
Working with byte data without copies, choosing the optimal data processing approach for a specific task
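For illustration, a minimal sketch of the zero-copy idea from this module (the example is ours): slicing a memoryview gives a window onto the same buffer, while slicing a bytes object copies the data.

```python
def checksum(chunk):
    # Works on any object that supports the buffer protocol.
    return sum(chunk) % 256

if __name__ == "__main__":
    payload = bytes(range(256)) * 4096        # ~1 MiB of byte data
    view = memoryview(payload)

    copying_slice = payload[100:200]          # new bytes object, data duplicated
    zero_copy_slice = view[100:200]           # window onto the same underlying buffer

    print(checksum(copying_slice), checksum(zero_copy_slice))
    print(zero_copy_slice.obj is payload)     # True: no bytes were copied
```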

Caching strategies

From a local cache to multi-tier strategies with Redis

  • Local and external caches (in-memory vs Redis): when each approach is appropriate
  • Typical caching errors: stale data, cache stampede, uncontrolled growth
  • Cache-eviction policies (LRU, LFU, TTL) and their impact on memory and hit rate
  • Approaches to choosing cache size: a trade-off between memory consumption, performance, and stability

Practice:
Choosing an approach for the task, implementing an LRU cache, analyzing the cache's impact on performance and algorithm behavior
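To make the eviction policy concrete, here is a minimal LRU cache sketch built on collections.OrderedDict; it illustrates the idea and is not the course's reference implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used key when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)          # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # drop the least recently used entry

if __name__ == "__main__":
    cache = LRUCache(2)
    cache.put("a", 1)
    cache.put("b", 2)
    cache.get("a")            # "a" becomes the most recently used key
    cache.put("c", 3)         # evicts "b"
    print(cache.get("a"), cache.get("b"), cache.get("c"))  # 1 None 3
```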

GIL and multithreading

How the Global Interpreter Lock determines concurrency limits in CPython

  • The role of the GIL in CPython and its impact on performance
  • Thread switching: how Python handles I/O-bound and CPU-bound tasks
  • How the GIL interacts with the thread scheduler and signals
  • GIL implementation: mutexes, semaphores, pthread condition variables

Practice:
Choosing an execution strategy and explaining the results with the GIL's impact in mind
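A small sketch of the kind of experiment this module is about (workload and numbers are illustrative): timing the same CPU-bound function sequentially and in threads shows that the GIL prevents any speedup for pure-Python computation.

```python
import threading
import time

def cpu_task(n=2_000_000):
    # Pure-Python arithmetic holds the GIL, so threads cannot run it in parallel.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_sequential(workers=4):
    for _ in range(workers):
        cpu_task()

def run_threaded(workers=4):
    threads = [threading.Thread(target=cpu_task) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    for label, fn in (("sequential", run_sequential), ("4 threads", run_threaded)):
        start = time.perf_counter()
        fn()
        print(f"{label:>10}: {time.perf_counter() - start:.2f}s")
```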

Synchronization and safe work with threads

How to avoid races, deadlocks, and unstable behavior in multithreaded code

  • How and why thread races arise
  • Locking primitives: mutexes and locks, and their impact on performance
  • Thread-safe interaction through queues and data exchange
  • Thread coordination: Semaphore, Event, Condition
  • Choosing the number of threads: a trade-off between concurrency and overhead

Practice:
Implementing a multi-threaded log collector and thread-safe data structures
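As an illustration of queue-based thread-safe interaction, a minimal sketch in the spirit of the log-collector practice (ours, not the official solution): producers put records on a queue and a single consumer drains it.

```python
import queue
import threading

log_queue = queue.Queue()

def worker(worker_id, n_messages=3):
    # Producers never touch shared state directly; they only put items on the queue.
    for i in range(n_messages):
        log_queue.put(f"worker {worker_id}: message {i}")

def collector():
    # A single consumer drains the queue, so output needs no extra locking.
    while True:
        record = log_queue.get()
        if record is None:       # sentinel: shut down
            break
        print(record)

if __name__ == "__main__":
    consumer = threading.Thread(target=collector)
    consumer.start()
    producers = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for p in producers:
        p.start()
    for p in producers:
        p.join()
    log_queue.put(None)          # tell the collector to stop
    consumer.join()
```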

Multiprocessing

How to bypass the GIL's restrictions and scale CPU-bound tasks across processes

  • Process start methods: fork, spawn
  • Data exchange between processes: queues, pipes, shared memory
  • IPC and process coordination
  • Choosing the number of processes: balancing performance against available CPU cores

Practice:
Centralized logging, parallel computing, and data sharing using IPC and shared memory
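A minimal sketch of scaling a CPU-bound task across processes with multiprocessing.Pool; the workload here is made up for illustration:

```python
import os
from multiprocessing import Pool

def cpu_task(n):
    # Each worker process has its own interpreter and its own GIL.
    return sum(i * i for i in range(n))

if __name__ == "__main__":        # guard is required for the spawn start method
    workloads = [2_000_000] * 8
    with Pool(processes=os.cpu_count()) as pool:
        results = pool.map(cpu_task, workloads)
    print(len(results), results[0])
```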

Asynchrony

How the asynchronous execution model works and where it breaks down in real systems

  • The coroutine model and the event loop: how concurrency is achieved
  • Waiting for and coordinating tasks: await, gather, task groups
  • Typical errors in asynchronous code: blocking the event loop, race conditions, memory leaks
  • Green threads and other alternatives

Practice:
Orchestrating tasks with a semaphore, choosing a concurrency/parallelism strategy for different types of tasks
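For a taste of the coordination primitives covered here, a minimal asyncio sketch (ours) that caps concurrency with a semaphore; the sleep stands in for a real I/O call:

```python
import asyncio

async def fetch(task_id, semaphore):
    # The semaphore caps how many coroutines are in flight at once.
    async with semaphore:
        await asyncio.sleep(0.1)   # placeholder for a real network or disk call
        return f"task {task_id} done"

async def main():
    semaphore = asyncio.Semaphore(3)   # at most 3 concurrent tasks
    results = await asyncio.gather(*(fetch(i, semaphore) for i in range(10)))
    print(results)

if __name__ == "__main__":
    asyncio.run(main())
```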

instructor:

Svitlana Sumets

ready?
take the first step

I accept the terms of the Public Offer Agreement and consent to the processing of my personal data in accordance with the Privacy Policy.

reviews
what alumni say

what awaits
have fun and dive deep

learn among the best

We carefully select students so you're surrounded by driven, motivated peers. Yes, we dismiss those who don't complete assignments.

Your instructor stays with you until it clicks — whether that means a third code review or a quick 15-minute call. That's what we do: push each other to learn and level up.

Oh, and share jokes in Slack and swap referrals to awesome companies.

results that matter

No shallow slides or long introductions — just deep dives into real production challenges.

Certificates aren't handed out, they're earned through real results: completed assignments, active discussions, and measurable progress.

intensive mode

We meet on Zoom twice a week, every Monday and Friday at 6:30 PM, and every week brings a new homework assignment.

All lectures are live sessions with the instructor and are recorded so you can come back to the material later. We regularly hold additional Q&A sessions and stay in touch with you on Slack.

The language of instruction is Ukrainian.
Additional materials are in English.
