Top Python libraries of 2024

Tue, Dec 10, 2024

Welcome to the 10th edition of our annual Top Python Libraries blog post!

Over the past decade 🤯, we’ve explored the ever-evolving Python ecosystem, spotlighting the most innovative and impactful libraries each year. The fact that we could go on for so long is a testament to the Python community's creativity and dedication to solving real-world problems with elegant and practical tools.

For readers new to the series, libraries featured here are selected based on their relevance to 2024. First, they must have been released in or around 2024. To select our top picks and runners-up, we look for a mix of practical utility, novelty, and—let's be honest—a bit of coolness factor, whether that means a groundbreaking approach, an elegant solution to complex problems, or sheer cleverness in execution.

This year, we’ve listened closely to community feedback and are making a key change: for the first time, we’ve separated the main lists into General Use libraries and AI / ML / Data libraries. With the AI / ML space growing so rapidly, this new structure lets us do those tools justice without crowding out readers who simply aren’t interested in them. Whether you’re a software engineer looking for powerful utilities, or a data scientist hunting for cutting-edge tools, we’ve got you covered!

We hope you enjoy this special 10th-anniversary edition, and as always, we’d love to hear your thoughts and suggestions for next year’s list. Let’s dive in!

Jump straight to:
  1. Top 10 - General use
  2. Top 10 - AI/ML/Data
  3. Runners-up – General use
  4. Runners-up – AI/ML/Data
  5. Long tail

Top 10 - General use

1. uv - the new standard package and project manager

uv GitHub stars

UV logo

Every once in a while, something happens that shakes the Python community so profoundly it’s impossible to ignore. This year, that something is uv. So unless you’ve been living under a rock, you’ve probably heard about it: a lightning-fast Python package and project manager. Seemingly overnight, uv has skyrocketed in popularity, and for good reason. With an active and passionate community driving its rapid development, uv is setting a new standard for speed, versatility, and ease of use in Python project management. It’s not just a tool; it’s a movement—and one you don’t want to miss out on.

At its core, uv is more than just a package manager. It’s an all-in-one powerhouse that replaces pip, poetry, pyenv, and more, while introducing a host of new features to make your development life easier. Whether you’re managing dependencies, running isolated scripts, or switching between Python versions, uv handles it all with jaw-dropping speed—10 to 100 times faster than pip, thanks to its Rust foundation. Add a vibrant community constantly adding features and refining its performance, and you have a tool that feels both polished and cutting-edge.

What makes uv stand out is its sheer versatility. Want to manage dependencies with a universal lockfile? Done. Need to run scripts with inline dependency metadata? Easy. Struggling with multiple Python versions? uv’s got you covered. It even supports Cargo-style workspaces for large-scale projects and boasts a disk-space-saving global cache for dependency deduplication. Everything you need to streamline your workflow is built in—and it’s ridiculously fast at every step.

Despite its power, uv remains incredibly approachable. You can install it in seconds via curl or pip, and its familiar pip-compatible interface ensures you can migrate with zero friction. Commands like uv add, uv run, and uv init are intuitive, making uv accessible to developers of all experience levels. Plus, with support for macOS, Linux, and Windows, it’s ready to enhance your workflow no matter your setup.
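
To make the script workflow concrete, here is a minimal sketch of a self-contained script using inline dependency metadata (PEP 723), which uv understands when you launch it with uv run; the file name and dependency are just illustrative.

```python
# /// script
# requires-python = ">=3.12"
# dependencies = ["requests"]
# ///
import requests

# Running `uv run fetch_status.py` creates an isolated environment with the
# inline dependencies above and executes the script in it.
print(requests.get("https://example.com").status_code)
```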

The rapid development of uv is a testament to its active community and the team behind it, Astral—the creators of the beloved Ruff. New features, fixes, and optimizations roll out constantly, making it one of the most exciting tools to follow in the Python ecosystem. The sense of momentum around uv is palpable, and developers are rallying around its ability to solve pain points that have plagued Python workflows for years.

A true game changer.

2. Tach - taming module dependencies

Tach GitHub stars

Tach logo

If you've ever worked on a large Python project, you've likely encountered the creeping chaos of tangled dependencies. What starts as clean module boundaries can quickly devolve into a web of imports, making your codebase harder to maintain, debug, and scale. Tach, a Python tool written in Rust, takes aim at this problem. Designed with modular monolithic architecture principles, it empowers developers to enforce clear dependencies between modules, keeping your project organized and resilient.

With a simple interactive setup, Tach lets you define which modules can interact, creating a blueprint for your project's structure. Its CLI tools then help you sync, validate, and visualize these dependencies. Need proof it’s working? The tach check command flags unauthorized imports, ensuring your code remains clean without runtime impact. For added clarity, Tach even generates dependency graphs, offering valuable insights into your project's architecture.

Tach's features go beyond basic enforcement. It allows you to define public interfaces for modules, limit imports to explicitly defined members, and even deprecate dependencies over time. These capabilities, combined with seamless integration into workflows like CI pipelines and pre-commit hooks, make it a flexible tool for maintaining high code quality in any Python project.

Tach solves dependency chaos at its root. Open-source, easy to adopt, and packed with thoughtful features, it’s definitely one to keep an eye on if you want to bring clarity and control to your projects.

3. Whenever - an intuitive datetime library

Whenever GitHub stars

Whenever logo

Working with dates and times in Python can often feel like stepping into a minefield. From distinguishing naive and timezone-aware datetime objects to handling Daylight Saving Time (DST) quirks, even seasoned developers can find themselves debugging unexpected errors. If you’ve ever wished for a tool that eliminates these challenges while delivering high performance, meet Whenever—a groundbreaking library for typed, DST-safe datetime operations.

At its core, Whenever is designed to help developers write correct, type-checked datetime code effortlessly. Its approach transforms common pitfalls into IDE-detected issues, ensuring that bugs are squashed before they even reach production. And it’s not just about safety — Whenever is remarkably fast, leveraging a Rust-powered backend while also offering a pure Python version for those prioritizing simplicity over speed. Whether parsing dates, managing timezones, or performing arithmetic across time boundaries, this library has you covered with precision and clarity.

The functionality of Whenever is refreshingly intuitive. It introduces explicit types like Instant, ZonedDateTime, and LocalDateTime, which clarify your intentions and prevent dangerous mixing of incompatible datetime objects. Consider this example:
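
The sketch below is modeled on Whenever's documented examples; minor API details may differ between versions.

```python
from whenever import ZonedDateTime

# A wall-clock time in a specific timezone (the night before DST starts in the US)
bedtime = ZonedDateTime(2024, 3, 9, hour=22, tz="America/New_York")

# DST-safe arithmetic: adding 8 hours crosses the spring-forward gap,
# so the clock reads 07:00 rather than 06:00 the next morning.
print(bedtime.add(hours=8))

# Explicit conversion to another timezone
print(bedtime.to_tz("Europe/Amsterdam"))
```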

This straightforward snippet handles timezone shifts and DST safely—no surprises, no guesswork. Whether you're scheduling meetings across continents or analyzing historical logs, Whenever makes the process smooth and predictable.

What sets Whenever apart isn’t just its user-friendly API but its thoughtful design. Unlike alternatives like Arrow or Pendulum, Whenever fixes DST-related pitfalls and supports strong typing to distinguish naive from aware datetimes. These enhancements empower developers to catch subtle errors that other libraries often miss.

Whether you’re tired of wrestling with Python’s datetime or simply curious about the cutting edge of Python libraries, Whenever is worth exploring. Say goodbye to datetime bugs and hello to a future where time is always on your side.

4. WAT - examine any Python object

WAT GitHub stars

WAT logo

Ever found yourself staring at a Python object, wondering what it truly is or how to work with it? Whether you’re debugging a tangled data structure or exploring an unfamiliar codebase, unraveling Python’s dynamic runtime can be a daunting task. WAT is the inspection tool designed to transform confusion into clarity. With a simple command like wat / object, it helps you dive deep into any object’s type, attributes, methods, parent types, and even its source code. It’s your go-to tool for answering the eternal question: What is this thing?

Python terminal showing datetime module output with timestamp, attributes, and methods

WAT stands out with its intuitive and flexible syntax, making inspection both powerful and delightful. Curious about a function’s signature? A quick wat / str.split lays it all out. Exploring a complex module? wat / pathlib reveals its structure, letting you navigate classes, functions, and submodules with ease. WAT even prettifies messy collections, turning deeply nested dictionaries into readable, indented outputs. It can also take modifiers like .short for concise views, or .code to reveal an object’s source, giving you control over how much detail to explore.
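
As a quick illustration (assuming the package is installed, e.g. via pip install wat-inspector), the inspection syntax looks roughly like this:

```python
import wat  # module name provided by the wat-inspector package

def greet(name: str) -> str:
    return f"Hello, {name}!"

wat / str.split                   # signature, docs, and type of a function
wat.short / {"a": {"b": [1, 2]}}  # concise view of a nested structure
wat.code / greet                  # reveal an object's source code
```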

A neat one to have in your day-to-day arsenal.

5. peepDB - peek at your database

peepDB GitHub stars

If you liked our previous pick, you’ll find a natural complement in peepDB. This open-source command-line tool and Python library simplifies database exploration for MySQL, PostgreSQL, and MariaDB users, eliminating the need to write tedious or bespoke SQL queries. With commands like peepdb view <connection_name>, you can instantly inspect tables, explore specific datasets with pagination, or export data in JSON format—all with an intuitive interface designed for speed and ease of use.

Terminal showing PostgreSQL database table with employee records viewed through peepdb

Connection details are encrypted and stored locally, with secure password input workflows ensuring your credentials stay safe. PeepDB’s lightweight design and user-friendly CLI make it a perfect addition to any developer's toolkit, offering a seamless way to navigate databases with minimal effort.

6. Crawlee - a modern web scraping toolkit

Crawlee GitHub stars

Crawlee logo

Web scraping just got a major upgrade! The team at Apify, a well-established name in web scraping and automation, has officially launched Crawlee for Python—a powerful, open-source library built to streamline web scraping for Python developers. While Crawlee had already earned its stripes in the JavaScript world, this Python version marks an exciting step forward, bringing the same battle-tested functionality to a broader audience.

At its core, Crawlee offers a unified interface for both HTTP and browser-based scraping, making it a versatile choice for developers tackling diverse use cases. Whether you're extracting data from static HTML using BeautifulSoup, leveraging Parsel for advanced CSS selector-based parsing, or navigating JavaScript-heavy sites with Playwright, Crawlee equips you with tools designed for every scenario—all under one roof. Its intuitive design ensures you can get started quickly while scaling to more complex projects.

What makes Crawlee truly stand out in the crowded scraping space is its developer-centric approach combined with advanced capabilities. Crawlee automates some of the most challenging aspects of scraping—like handling retries, managing proxies, and rotating sessions—out of the box. Its configurable request routing and persistent URL queues let developers tackle large-scale projects without worrying about re-engineering their crawlers when requirements evolve. Plus, features like automatic parallel crawling and pluggable storage set Crawlee apart as a tool that not only scales efficiently but also integrates seamlessly into workflows.

Another key differentiator is Crawlee’s modern stack and simplicity. Built with type hints and leveraging Python’s asyncio, it offers an enhanced developer experience, complete with IDE autocompletion and reduced bugs through static type checking. And for projects requiring headless browser support, Crawlee’s Playwright integration provides unparalleled performance and ease of use compared to traditional options like Scrapy.
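
To give a feel for the API, here is a minimal sketch of a BeautifulSoup-based crawler; the import paths follow the examples published around the Python launch and may have shifted in newer releases.

```python
import asyncio

from crawlee.beautifulsoup_crawler import BeautifulSoupCrawler, BeautifulSoupCrawlingContext

async def main() -> None:
    crawler = BeautifulSoupCrawler(max_requests_per_crawl=10)

    @crawler.router.default_handler
    async def handler(context: BeautifulSoupCrawlingContext) -> None:
        # Store structured data and keep following discovered links
        await context.push_data({"url": context.request.url})
        await context.enqueue_links()

    await crawler.run(["https://crawlee.dev"])

if __name__ == "__main__":
    asyncio.run(main())
```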

Will Crawlee become the new go-to library for scraping in Python? Only time will tell.

7. PGQueuer - job queue that uses PostgreSQL

PGQueuer GitHub stars

PGQueuer logo

Let’s be honest, chances are you already rely on PostgreSQL. So why not make the most of it?

PGQueuer is a lightweight, high-performance job queue library for Python that leverages PostgreSQL’s built-in features to streamline background task management. By utilizing the same database you’re likely already using, PGQueuer eliminates the need for additional queueing infrastructure, offering simplicity, cost efficiency, and seamless integration.

At its core, PGQueuer combines the power of PostgreSQL’s LISTEN / NOTIFY for real-time updates and FOR UPDATE SKIP LOCKED for concurrency control, ensuring reliable, high-throughput job processing. From scheduling recurring tasks with cron-like expressions to handling large job batches, it offers a flexible and robust solution for developers managing anything from bursty workloads to critical, periodic operations.

You can begin defining entry points and scheduling tasks in minutes, and setting up recurring tasks is as easy as a few lines. Moreover, PGQueuer’s real-time monitoring dashboard provides valuable insights into job queue statuses, helping you track performance and optimize workflows without the overhead of separate monitoring tools.
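
A rough sketch of what a consumer can look like, based on PGQueuer's documented asyncpg driver; treat the module paths and signatures as assumptions that may vary between releases.

```python
import asyncpg

from pgqueuer.db import AsyncpgDriver
from pgqueuer.qm import QueueManager

async def create_queue_manager() -> QueueManager:
    # asyncpg picks up the standard PG* environment variables
    connection = await asyncpg.connect()
    qm = QueueManager(AsyncpgDriver(connection))

    # Jobs enqueued under the "fetch" entrypoint are routed to this coroutine
    @qm.entrypoint("fetch")
    async def process(job) -> None:
        print(f"Processing job: {job!r}")

    return qm
```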

If you’re ready to turn PostgreSQL into a powerful job management engine, PGQueuer is the tool for the job.

8. streamable - stream-like manipulation of iterables

streamable GitHub stars

Streamable logo

Imagine effortlessly working with iterables in Python, chaining operations fluently without compromising performance. That’s the promise of Streamable, a Python library that combines elegance, power, and practicality to redefine how you handle data transformations. If you've ever struggled with clunky, verbose pipelines or wished for more flexibility in processing data streams, Streamable is for you.

Streamable wraps any iterable in a fluent interface, allowing you to chain operations like .map(), .filter(), and .group() while keeping them lazily evaluated. This means operations are only executed when you iterate over the stream, conserving memory and enabling seamless integration into even the most complex pipelines. Whether you're processing a sequence of numbers, fetching data from an API, or building an ETL pipeline, Streamable provides a minimalist yet versatile toolkit to get the job done.

One standout feature is its support for concurrency, making it easy to execute tasks in parallel using threads, processes, or asyncio. For example, you can fetch Pokémon data from an API, transform it, and save it to a CSV file—all with concise, readable code. Its ability to switch between concurrency modes ensures compatibility with diverse workflows, while preserving the simplicity of your codebase. Here’s a sneak peek:
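
The sketch below is modeled on the library's documented Pokémon example; names and details are illustrative rather than exact.

```python
import csv

import requests
from streamable import Stream

with open("pokemons.csv", mode="w", newline="") as file:
    writer = csv.DictWriter(file, fieldnames=["id", "name"])
    writer.writeheader()

    pipeline = (
        Stream(range(1, 10))
        .map(lambda i: f"https://pokeapi.co/api/v2/pokemon-species/{i}")
        .map(requests.get, concurrency=4)  # thread-based concurrency
        .map(lambda response: response.json())
        .map(lambda species: {"id": species["id"], "name": species["name"]})
    )

    # Lazy evaluation: nothing is fetched until the stream is iterated
    for row in pipeline:
        writer.writerow(row)
```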

Beyond its functional elegance, Streamable is robust and future-proof. It’s fully type-annotated, and tested with 100% code coverage. Its design embraces immutability, ensuring each operation creates a new stream, and its .catch() method for handling errors makes it exceptionally reliable. From throttling requests to observing progress logs, the library anticipates real-world needs, allowing developers to focus on logic rather than boilerplate.

Whether you're a data engineer, a backend developer, or a Python enthusiast exploring new ways to streamline your workflows, why wait? Dive into the future of iterable manipulation and let Streamable inspire your next project.

9. RightTyper - generate static types for you

RightTyper GitHub stars

If you’ve ever been bogged down by the task of adding type annotations to your Python code, RightTyper might just be the productivity tool you’ve been waiting for. Designed for developers who value speed and efficiency, RightTyper automatically generates type hints for function arguments and return values in your Python programs without significantly slowing down execution or consuming extra memory. Whether you’re fine-tuning a codebase for production or adding type safety to a personal project, RightTyper enables seamless integration into your existing workflows, including popular tools like pytest.

At its core, RightTyper runs your code, observes its execution, and generates precise type annotations for every function that gets called. Unlike earlier tools like MonkeyType or PyAnnotate, which are notorious for introducing massive overhead, RightTyper is optimized for performance, operating with only a 20-30% slowdown in most scenarios. By selectively capturing the most commonly used types, it provides clean, accurate hints while leaving room for type checkers like mypy to flag potential mismatches. The result? A blend of automation and flexibility that doesn’t sacrifice reliability.

One standout feature of RightTyper is its ability to infer shape annotations for tensors in scientific computing and machine learning libraries like NumPy, JAX, and PyTorch. With just a simple flag, it can produce annotations compatible with advanced tools like jaxtyping, beartype, or typeguard, helping you write robust, error-free numerical code. Additionally, RightTyper includes functionality to compute type coverage for your codebase, offering insights into how thoroughly your code is annotated—a valuable metric for large projects or teams aiming to improve maintainability.

Using RightTyper couldn’t be simpler. You can run your scripts or test suites with the tool enabled, automatically generating type signatures and saving them to an output file. For example:
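
A typical invocation, going by the project's README (treat the exact flags as an assumption and check righttyper --help):

```
# Run the test suite under RightTyper; observed types are recorded as tests execute
righttyper -m pytest your_tests/
```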

This command integrates with pytest, annotating your functions while running tests. If you’re ready to take the leap, you can even overwrite your original files with the newly generated hints, making the transition to typed Python smoother than ever.

If you’re eager to elevate your Python projects with robust type annotations and shape inference, RightTyper is a must-try. Give it a spin; your future self will thank you for it.

10. Rio - modern web apps in pure Python

Rio GitHub stars

Rio logo

Building frontend apps has always been a thorn in the side of Python developers. The moment you step into the world of web apps, you're forced to juggle entirely new languages like HTML, CSS, and JavaScript—not to mention the endless tooling and frameworks that come with them. That’s where Rio comes in—a framework that eliminates the need to learn anything outside Python, empowering developers to create stunning, modern web apps entirely in their favorite language.

Rio reimagines web development for Python enthusiasts, bringing the declarative, component-driven principles of frameworks like React directly into Python. Forget the usual backend/frontend divide: Rio blurs those lines completely, handling everything seamlessly behind the scenes. You define your app's logic and UI in pure Python, and Rio takes care of rendering, user interactions, and even the communication between the client and server. The result? A streamlined workflow that lets you focus on building great apps without the usual distractions.

At its heart, Rio is packed with over 50 built-in components, inspired by Google's Material Design. From buttons and sliders to layout systems, these reusable building blocks let you create polished, professional interfaces with minimal effort. And because Rio is entirely type-safe, it integrates beautifully with modern Python features like type annotations, helping your code editor provide smart suggestions and error checks as you write.

One of Rio’s standout features is its ability to run apps both locally and in the browser with the exact same codebase. Whether you want a desktop-like experience or a fully responsive web app, Rio has you covered. Installation is a breeze, and getting started is even easier as the rio new command can scaffold an app in seconds, complete with templates for common use cases like dashboards or even games.

Unlike traditional frameworks like Flask or Django, which serve HTML templates, Rio takes a completely different approach. Inspired by frameworks like React and Flutter, it uses a declarative model where you define how your app should look and behave as reusable components. These components automatically update when their state changes—no manual updates, no messy HTTP requests. It’s all done for you behind the scenes, keeping your workflow simple and frustration-free.
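
For a flavor of that declarative model, here is a minimal counter sketch in the spirit of Rio's documented examples; component and method names are taken from those examples and may not be exact.

```python
import rio

class Counter(rio.Component):
    count: int = 0

    def _on_press(self) -> None:
        self.count += 1  # changing state triggers an automatic re-render

    def build(self) -> rio.Component:
        return rio.Column(
            rio.Text(f"Button pressed {self.count} times"),
            rio.Button("Press me", on_press=self._on_press),
            spacing=1,
        )

app = rio.App(build=Counter)
app.run_in_browser()
```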

If you’re tired of being pulled into unfamiliar ecosystems just to build a frontend, it’s time to give Rio a try.

Top 10 - AI/ML/Data

1. BAML - domain-specific language for working with LLMs

BAML GitHub stars

Baml logo

As LLMs take center stage in modern AI workflows, the need to extract structured data efficiently, templatize complex prompts for reusability, and test them seamlessly has become critical. BAML meets this challenge head-on, offering a Developer Experience (DX) that sets a new standard.

BAML (Basically, A Made-up Language) introduces the concept of an LLM function, turning a simple prompt into a reusable, well-defined structure. An LLM function includes a prompt template with typed input variables—such as a string, integer, or object—and enforces a specific output type like a class, enum, union, or optional value. This level of specificity ensures outputs are always consistent and type-safe, making them easy to validate statically.

Written in Rust, BAML generates code in Python, TypeScript, and other languages, so you can call functions or use types from your projects. Calling an LLM feels as natural as calling any other function in your codebase, and with minimal boilerplate: the LLM-specific bits like prompts, clients, and LLM hyperparameters all live outside of your Python code in easy-to-digest .baml modules. Here's what an LLM call to extract a structured resume from text looks like:
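
Here is a sketch of the Python side of such a call, assuming a Resume class and an ExtractResume function have been defined in a .baml file and the client code generated from it (names are illustrative):

```python
# Assumes a .baml file roughly like:
#   class Resume { name string  experience string[]  skills string[] }
#   function ExtractResume(resume_text: string) -> Resume { ... }
# and that `baml_client` has been generated from it.
from baml_client import b
from baml_client.types import Resume

def parse_resume(raw_text: str) -> Resume:
    # Calling the LLM feels like calling any other typed function
    return b.ExtractResume(raw_text)

if __name__ == "__main__":
    print(parse_resume("Jane Doe\nSenior Engineer\nSkills: Python, Rust"))
```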

Compared to frameworks like Pydantic or Zod, BAML is purpose-built for GenAI workflows, delivering faster iterations and higher accuracy. It also supports advanced features like streaming structured data, redundancy for LLM calls, and compatibility with multiple programming languages, making it a one-stop solution for prompt engineering. At the time of writing, BAML outperforms all other current methods of obtaining structured data (like JSON) from LLMs. This makes it a powerful tool for building scalable, reliable applications that leverage the strengths of LLMs, whether you’re using GPT-4o, Claude, or open-source alternatives.

Something I particularly appreciate about BAML is that it aims for full transparency: at every step, you know exactly what inputs are being fed into the LLM and what outputs are being generated, ensuring everything is under your direct control.

If you are using VS Code or Cursor, BAML seamlessly integrates to provide an exceptional development experience. With features like real-time prompt previews, syntax highlighting, and an interactive playground, you can edit and test your prompts directly in your IDE. The integration allows you to validate outputs, iterate quickly, and visualize your prompt logic as you go. This tight coupling makes BAML a natural choice for developers looking to streamline their LLM workflows while maintaining precision and type safety.

Check out the Playground to get a glimpse of what it can do, directly from your browser. Can't recommend this one enough!

2. marimo - notebooks reimagined

marimo GitHub stars

Marimo logo

Close your eyes and imagine a Python notebook that’s not just a coding playground but a powerful, reactive environment for creating reproducible, shareable, and maintainable workflows. Well good news, because you need not dream: there’s marimo, a rework of the traditional Python notebook that transforms how developers interact with their code. Designed to replace tools like Jupyter and Streamlit, marimo eliminates common frustrations like hidden state, manual cell execution, and version control headaches. By storing notebooks as pure Python scripts (.py files), marimo makes it easy to version, execute, and deploy your work as apps or scripts, while ensuring your code, outputs, and program state are always in sync.
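
To make the storage format concrete, here is a simplified sketch of what a marimo notebook looks like on disk; the exact boilerplate marimo generates differs slightly.

```python
import marimo

app = marimo.App()

@app.cell
def _():
    import marimo as mo
    slider = mo.ui.slider(start=1, stop=10, label="x")
    slider  # the cell's last expression is displayed as its output
    return mo, slider

@app.cell
def _(slider):
    # Re-runs automatically whenever the slider value changes
    slider.value ** 2
    return

if __name__ == "__main__":
    app.run()
```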

At its core, marimo guarantees reactivity and interactivity. Run a cell, and marimo automatically determines which dependent cells need re-execution, ensuring deterministic and error-free workflows. Interactive elements like sliders, dropdowns, and plots are fully synchronized with Python, requiring no callbacks or manual updates. Whether you’re adjusting parameters or visualizing data, marimo responds instantly. It’s not just a notebook—it’s also a framework for creating dynamic web apps or even interactive slideshows, letting you share your work effortlessly with others.

marimo comes with batteries-included; it integrates features like GitHub Copilot, vim-style keybindings, and package management while supporting markdown, SQL queries, and static code analysis for efficient execution. Its built-in package manager (with support for pip, rye, uv, poetry, pixi) even isolates dependencies per notebook for perfect reproducibility. Plus, with a WASM-based online playground and Jupyter conversion tools, getting started with marimo is seamless.

Whether you’re building machine learning pipelines, analyzing data, or sharing interactive reports, marimo is the modern, reactive notebook you didn’t know you needed.

3. OpenHands - powerful agent for code development

OpenHands GitHub stars

open hands logo

As a Python developer, you've probably experienced those moments when you wish you had an extra pair of hands - someone who could handle the routine coding tasks while you focus on architecture and design decisions. Enter OpenHands, an innovative open-source platform that's transforming how developers work with AI. Imagine having an intelligent assistant that can not only generate code but actually work alongside you - writing scripts, debugging issues, and even browsing documentation, just like a human pair programmer would.

At its core, OpenHands provides a secure sandboxed environment where AI agents can safely execute code, interact with web browsers, and manage files through a sophisticated event stream architecture. The platform's agents aren't limited to simple code generation - they can modify existing codebases, debug issues, refactor code, and even collaborate with other specialized agents to tackle complex development challenges. This is all made possible through a robust runtime environment that includes a Docker-sandboxed operating system, complete with bash shell access, web browsing capabilities, and an IPython server.

Development interface showing GitHub commit analysis with a line graph, code editor, and chat interactions discussing OpenDevin bot functionality

What sets OpenHands apart is its intuitive interaction model that mirrors how Python developers actually work. Whether you need to create a new FastAPI endpoint, fix a bug in your Django application, or set up a new GitHub Action, OpenHands agents can handle these tasks while maintaining best practices in software development. The platform's event-driven architecture ensures transparent communication between you and the agents, with all actions and observations carefully tracked and displayed through a sleek web interface.

The platform is designed for extensibility, featuring a growing hub of specialized agents that can be mixed and matched for different tasks. This includes everything from general-purpose development agents to specialists in specific domains like web development or system administration. The community-driven nature of the project has already attracted hundreds of contributors and garnered over 37,000 GitHub stars, a testament to its practical utility and innovative approach.

Wanted a glimpse at the future of software engineering? You’re welcome.

4. Crawl4AI - crawl websites intelligently for AI

Crawl4AI GitHub stars

Crawl4ai logo

Web scraping can feel like a thankless chore. If you’re a Python programmer, you’ve probably wrestled with the frustrations of building web crawlers: handling dynamic content, bypassing anti-bot mechanisms, extracting clean and structured data, or even scaling your scraping efforts to collect massive datasets. It’s a time sink and a headache—one that pulls you away from the exciting part. But what if there were a smarter, faster, and more AI-friendly way to crawl the web? Enter Crawl4AI, the open-source web crawling library designed to bridge the gap between modern web scraping and cutting-edge AI.

Crawl4AI is a Python powerhouse that takes the drudgery out of web crawling, letting you focus on what really matters—building great AI/ML applications. With its asynchronous architecture, it can handle complex JavaScript-heavy websites, scrape at lightning speeds, and return LLM-ready output formats like JSON, markdown, and cleaned HTML. It’s packed with features tailored for AI developers, such as automatically converting links into citations for research papers, extracting images, and even generating structured data that’s ready for training models or powering knowledge graphs. Gone are the days of hacking together brittle scripts just to gather data.
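
Getting a page into an LLM-friendly format takes only a few lines; this sketch mirrors the project's quickstart (the URL is illustrative).

```python
import asyncio

from crawl4ai import AsyncWebCrawler

async def main() -> None:
    # Drives a headless browser under the hood
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://example.com")
        # LLM-ready output: cleaned markdown of the page
        print(result.markdown)

if __name__ == "__main__":
    asyncio.run(main())
```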

The library’s advanced extraction strategies are what really set it apart. Whether you’re looking to extract content based on CSS selectors, split it into meaningful chunks using regex or semantic topics, or use LLMs to generate detailed schemas, Crawl4AI has a tool for every scenario. For example, you can leverage OpenAI’s models to extract structured pricing data from a website or Hugging Face models to identify relationships between entities. It’s the perfect companion for creating high-quality datasets, enriching natural language processing pipelines, or powering intelligent web applications.

On the technical side, Crawl4AI uses Playwright to provide multi-browser support (Chromium, Firefox, and WebKit), with stealth capabilities to bypass bot detection. Its asynchronous design ensures blazing performance, outperforming even some paid competitors. It also offers proxy support, session management for crawling multi-step workflows, and seamless integration with Docker for API deployment. Whether you need to scrape a handful of URLs or orchestrate thousands of crawling tasks, Crawl4AI can scale effortlessly to meet your needs.

What’s more, Crawl4AI doesn’t just stop at scraping. It’s built with AI in mind, offering tools to turn web data into actionable insights. You can use it to generate knowledge graphs, refine content with BM25-based filters, or extract complex relationships from dynamic websites. And because it’s open-source and completely free, you can dive into its modular design, customize it for your workflow, and deploy it on your own infrastructure without worrying about licensing fees or vendor lock-in.

If you’re a developer looking for a web crawling solution that matches the sophistication of your projects, Crawl4AI is a must-try.

5. LitServe - effortless AI model serving

LitServe GitHub stars

Litserve logo

So you’ve developed an AI model, but now comes the critical challenge: how do you serve it efficiently to users? Whether you’re working with a single model or building a complex AI pipeline, LitServe from Lightning-AI (creators of PyTorch Lightning) makes deployment fast, scalable, and painless.

Built on top of the ubiquitous FastAPI, LitServe transforms the process of serving AI models, adding powerful features like batching, streaming, and GPU autoscaling. With its intuitive design and optimizations, LitServe lets you deploy anything from classic machine learning models to large language models (LLMs) with ease—all while being at least 2x faster than a plain FastAPI setup.

LitServe’s real strength lies in its flexibility. It supports all major frameworks—PyTorch, TensorFlow, JAX—and works seamlessly with models across domains, from generative AI and vision to audio processing and embeddings. Whether you're hosting a single model or creating a "compound AI" system with multiple models, LitServe makes the setup effortless. For example, you can quickly define an API, integrate models, and start serving requests with just a few lines of code. Features like GPU autoscaling and batching optimize performance for both lightweight applications and heavy-duty enterprise workloads, while optional integrations with tools like vLLM supercharge LLM deployments.
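
A minimal server looks roughly like this, following LitServe's hello-world pattern; the model here is a trivial stand-in.

```python
import litserve as ls

class SquareAPI(ls.LitAPI):
    def setup(self, device):
        # Load or build the model once per worker; a trivial stand-in here
        self.model = lambda x: x ** 2

    def decode_request(self, request):
        return request["input"]

    def predict(self, x):
        return self.model(x)

    def encode_response(self, output):
        return {"output": output}

if __name__ == "__main__":
    server = ls.LitServer(SquareAPI(), accelerator="auto")
    server.run(port=8000)
```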

Scalability is where LitServe truly shines. You can self-host it on your own machines for full control or take advantage of Lightning’s fully managed hosting service, which handles load balancing, multi-machine scaling, and 24/7 observability. Whether you’re an indie developer or part of a large enterprise, LitServe adapts to your needs with options like Dockerization and "scale-to-zero" serverless capabilities. And if you’re building with LLMs, LitServe offers advanced features like KV caching for lightning-fast inference, making it a top choice for modern AI applications.

In short, LitServe is the ultimate serving engine for developer and user happiness.

6. Mirascope - a unified LLM interface

Mirascope GitHub stars

Mirascope logo

When it comes to extracting structured data from LLMs, there's no shortage of approaches—and no single "right" way to do it. Each tool in this emerging space offers its own perspective on how best to balance flexibility, usability, and control. Mirascope is a standout contender in this growing landscape, offering a clean, Pythonic way to interact with LLMs while prioritizing developer experience and maintainable code. Whether you're prototyping or scaling to production, Mirascope's thoughtful design makes it an excellent choice for working with structured outputs and managing LLM workflows.

Consider this example of extracting structured data, such as the title and author of a book, from unstructured text. With Mirascope, the process is as straightforward as defining a schema and wrapping your function with a decorator:
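
The sketch below follows Mirascope's documented response_model pattern; the model name and prompt are illustrative.

```python
from mirascope.core import openai
from pydantic import BaseModel

class Book(BaseModel):
    title: str
    author: str

# The decorator turns a plain function into an LLM call whose output
# must validate as a Book.
@openai.call("gpt-4o-mini", response_model=Book)
def extract_book(text: str) -> str:
    return f"Extract the book's title and author from the following text: {text}"

book = extract_book("The Name of the Wind was written by Patrick Rothfuss.")
print(book.title, book.author)
```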

This example highlights one of Mirascope's core strengths: combining declarative schemas with a decorator-based API. With this approach, developers can avoid boilerplate code while ensuring that outputs conform to a predefined structure.

Mirascope goes beyond simple LLM interactions by introducing Tools and Agents, two powerful concepts for building advanced AI systems. Tools are reusable functions that extend LLM capabilities, such as retrieving external data or performing computations, while Agents use these tools to tackle complex, multi-step workflows. For example, a WebSearchAgent can combine tools for web search and content extraction to autonomously gather information and provide detailed answers. With dynamic tool selection, prompt templates, and seamless integration of state, Mirascope empowers developers to design sophisticated, modular AI systems that strike a perfect balance between flexibility and control.

As you would expect, Mirascope also supports multiple LLM providers—including OpenAI, Anthropic, Cohere, and many others—allowing developers to switch between APIs without rewriting their code.

As patterns for extracting structured data from LLMs continue to evolve, Mirascope takes a pragmatic and extensible approach. Its decorator-based API, modular design, and reliance on Python's native tooling make it a pleasure to use while maintaining the transparency needed for debugging and iteration.

For anyone looking to build intelligent, LLM-driven systems with an emphasis on clean, maintainable code, Mirascope is a compelling choice.

7. Docling and Surya - transform any document to a structured representation

Docling GitHub stars Surya GitHub stars

Docling logo

In a world where unstructured data reigns supreme, managing and extracting meaningful insights from diverse document formats can be a monumental challenge. That’s why we selected not one, but two exceptional libraries to share this spot.

First, Docling, an open-source marvel from IBM, streamlines the often-daunting process of parsing and extracting data from an array of document formats. Whether you’re working with PDFs, DOCX files, Excel spreadsheets, or Markdown, Docling’s robust parsing engine turns messy, unstructured files into clean, machine-readable formats. But this isn’t just about reading files—it’s about understanding them. Docling intelligently extracts layouts, reading orders, and table structures, making it perfect for tasks like data extraction, content summarization, and even building retrieval-augmented generation (RAG) systems. Its integration with AI tools like LangChain and LlamaIndex opens up a world of possibilities for developers creating document-centric applications like chatbots or question-answering systems. Easy to install and packed with an intuitive API, Docling is a powerful ally for anyone looking to turn document chaos into clarity.

Getting started with Docling is as easy as your usual pip install docling (or uv add docling, why not?). Its intuitive Python API and command-line interface (CLI) make it accessible to developers of all skill levels. For instance, converting a PDF to Markdown is as simple as:
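
The snippet below mirrors Docling's quickstart; the file path is illustrative.

```python
from docling.document_converter import DocumentConverter

converter = DocumentConverter()
# Accepts local paths or URLs; PDF in, structured document out
result = converter.convert("report.pdf")
print(result.document.export_to_markdown())
```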

But it’s not just about what’s available now; Docling’s roadmap is brimming with promise. Upcoming features include equation and code extraction, enhanced metadata parsing (like authors and references), and a native LangChain extension, positioning it as an indispensable tool for data-intensive applications. The library’s tight integration with the Python ecosystem ensures compatibility across macOS, Linux, and Windows, with support for both x86_64 and ARM64 architectures.

On the other side of the spectrum, Surya focuses on mastery in document OCR and text recognition, offering unparalleled performance in over 90 languages. Named after the Hindu sun god for its "universal vision", Surya excels at extracting text from scanned images, PDFs, and other visual formats with precision that rivals commercial solutions like Google Cloud Vision. Beyond OCR, Surya dives deeper with advanced layout analysis, identifying headers, images, tables, and reading orders, making it ideal for digitizing books, forms, and scientific documents. Need to extract rows and columns from a complex table or parse multilingual content in a single file? Surya has you covered. Its GPU-optimized models ensure lightning-fast performance, and installation is simple. Whether you’re working on digitizing historical archives or processing multilingual forms, Surya is built to handle the toughest OCR challenges with ease.

Each of these libraries shines in its own domain. Docling is your go-to for seamless format parsing and integration into generative AI workflows, while Surya stands out as the choice for high-performance OCR and layout analysis across diverse languages. Both represent the best of open-source innovation, offering powerful capabilities without the need for costly cloud services.

Lost in unstructured data? Let Docling and Surya guide the way.

8. DataChain - complete data pipeline for AI

DataChain GitHub stars

Data chain logo

As should be abundantly clear by now, managing unstructured data—images, videos, text, PDFs, and more—is one of the toughest challenges in machine learning pipelines. DataChain, from Iterative (creators of DVC), streamlines this process by enabling efficient, versioned handling of multimodal datasets. By integrating directly with external storage systems like AWS S3, GCP, or Azure, DataChain eliminates data duplication while offering seamless metadata management. Whether you’re preparing data for fine-tuning, conducting LLM analytics, or organizing pre-training datasets, DataChain simplifies and accelerates workflows with its Python-friendly design and high-performance processing capabilities.

At its core, DataChain transforms scattered files and metadata into centralized, columnar datasets that are easy to query and manipulate. Developers can filter, join, and group data with Pythonic expressions while handling terabytes of information efficiently, without relying on tools like Spark or SQL. The library’s integration with AI workflows is a standout feature—users can generate AI-powered metadata, serialize LLM outputs, and execute high-scale batch inferences, all while keeping operations parallelized and memory-efficient.

DataChain integrates directly with frameworks like PyTorch and TensorFlow, enabling refined datasets to flow effortlessly into training and evaluation tasks. Its ability to handle multimodal data and AI-powered annotations makes it invaluable for projects like CLIP fine-tuning, chatbot evaluations, or GenAI alignment. For teams, the optional DataChain Studio platform extends functionality with a centralized dataset registry, scalable compute, and tools for visualizing complex datasets, offering a collaborative environment for AI projects.

In a landscape where unstructured data management can make or break an AI project, DataChain stands out as a flexible and scalable solution that can take your workflows to the next level.

9. Narwhals - compatibility layer between dataframe libraries

Narwhals GitHub stars

Narwhals logo

If you’ve ever found yourself juggling pandas, Polars, and other DataFrame libraries, you know the struggle is real. Maybe you’ve written code that works beautifully with pandas, only to find subtle (and not-so-subtle) differences when trying to run it on Polars or another library. For example, something as simple as 3 in pd.Series([1, 2, 3]) doesn’t behave the same way in Polars. And those differences can compound, especially when handling joins, indexing, or column naming conventions. Writing code that works consistently across DataFrame libraries is a minefield of quirks and edge cases. But what if you didn’t have to worry about any of that?

Enter Narwhals, a lightweight compatibility layer that unifies the diverse world of Python DataFrame libraries. Whether you’re working with pandas, Polars, PyArrow, Modin, or cuDF, Narwhals lets you write code that’s backend-agnostic and worry-free. By using a subset of the Polars API as its interface, Narwhals provides a clean, modern syntax for data manipulation while seamlessly translating your operations to the backend library of choice. Start with a pandas dataframe, and Narwhals returns pandas; work with cuDF, and it preserves GPU acceleration. It even supports lazy evaluation libraries like Dask, giving you flexibility across eager and deferred execution styles.
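
For a feel of what backend-agnostic code looks like, here is a small sketch using the Polars-flavored subset Narwhals exposes; column names are illustrative.

```python
import narwhals as nw

def mean_by_category(df_native):
    # df_native can be a pandas, Polars, PyArrow, Modin, or cuDF dataframe;
    # the same native type comes back out.
    df = nw.from_native(df_native)
    result = (
        df.group_by("category")
        .agg(nw.col("value").mean().alias("mean_value"))
        .sort("mean_value", descending=True)
    )
    return nw.to_native(result)
```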

With zero dependencies, Narwhals keeps your project lightweight while dynamically adapting to the DataFrame backend you provide. It’s rigorously tested against nightly builds of pandas and Polars, ensuring that breaking changes and deprecations are handled behind the scenes so you don’t have to. The performance overhead is negligible, and with full static typing, you get the added bonus of IDE autocompletions and type hints to speed up development. It’s a solution designed for real-world use, trusted by projects like Plotly, VegaFusion, and PyShiny.

For those in data science, Narwhals is the tool you didn’t know you needed.

10. PydanticAI - pydantic for LLM Agents

PydanticAI GitHub stars

PydanticAI logo

Pydantic is one of the most beloved libraries in the Python ecosystem. If you’ve ever used FastAPI or countless other frameworks, you’ve already experienced first-hand the transformative power of its type-safe data validation. Now, the same team behind Pydantic is setting their sights on the cutting-edge world of LLMs. And they bring us PydanticAI, a brand-new framework designed to simplify building production-grade applications with Generative AI—bringing Pydantic’s clarity, safety, and Pythonic ergonomics to this rapidly evolving field.

Why does this matter? Because almost every modern Python-based AI framework—whether it’s LangChain, Transformers, or OpenAI’s SDK—relies on Pydantic under the hood for data validation. Now, with PydanticAI, the creators of that core technology are delivering a purpose-built solution for working directly with LLMs. It’s fresh, innovative, and designed from the ground up to make building AI-powered apps feel as natural and seamless as writing any other Python code. If you’ve been waiting for a framework that combines type safety with state-of-the-art AI workflows, this might be it.

At the heart of PydanticAI are Agents—reusable components that handle everything from interacting with LLMs to validating their outputs. Imagine building an AI-driven customer service bot or an advanced multi-agent workflow using just Python functions and classes. With Pydantic’s signature structured validation baked in, you can define exactly what type of response your agent should return, ensuring your application remains reliable even when dealing with the unpredictability of generative AI.
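
A minimal sketch of such an agent, following PydanticAI's early documentation; since the library is still in beta, names like result_type may have changed by the time you read this.

```python
from pydantic import BaseModel
from pydantic_ai import Agent

class SupportReply(BaseModel):
    answer: str
    escalate: bool

agent = Agent(
    "openai:gpt-4o",
    result_type=SupportReply,
    system_prompt="You are a concise customer support assistant.",
)

result = agent.run_sync("My card payment failed twice, what should I do?")
print(result.data)  # a validated SupportReply instance
```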

But PydanticAI doesn’t stop at clean APIs and strict type safety. It introduces innovative features like a dependency injection system, allowing agents to dynamically fetch contextual data at runtime. Need to create a banking assistant that accesses user balances in real-time or makes risk assessments? PydanticAI lets you build these complex systems with the same ease and precision you’d expect from frameworks like FastAPI. It also supports custom “tools”—Python functions that agents can call on-demand to retrieve information, perform calculations, or interact with external systems. These tools open up endless possibilities for building smarter, more capable AI-driven applications.

PydanticAI supports LLMs like OpenAI’s GPT, Google’s Gemini, and Groq out of the box, with plans to add more soon. This flexibility, combined with native support for streaming responses and seamless integration with Pydantic Logfire for debugging and monitoring, makes it a powerhouse for developers seeking reliability in their AI applications.

Although it’s still in beta, PydanticAI is already turning heads with its ambitious vision and elegant design. Do you want to stay ahead of the curve? Don’t miss out!

Runners-up – General use

  • AnyPathLib AnyPathLib GitHub stars – Provides a unified API for performing file operations across different storage backends, including local, AWS S3, and Azure Blob Storage. It offers pathlib-like operations, local caching for improved performance, and easy extensibility for supporting additional cloud providers. The library simplifies tasks such as copying files between different storage systems, checking file existence, and listing directory contents. It also includes a CLI tool for command-line operations and prioritizes security by relying on environment variables for credentials.

  • Cyclopts Cyclopts GitHub stars – Enables creation of CLI applications through an intuitive API with comprehensive type hinting support, including complex types like Unions, Literals, Pydantic models, Dataclasses, and Attrs. Automatically generates rich help pages from docstrings and contextual data, while offering extensive customization options for converters, validators, token parsing, and application launching. Distinguished from alternatives like Typer by its more concise syntax, better type support, and improved documentation integration.

  • DBOS Transact DBOS Transact GitHub stars – Enables creation of failure-resistant programs by automatically storing workflow execution states in PostgreSQL databases. When programs are interrupted or crash, workflows automatically resume from their last completed step, eliminating the need for a dedicated workflow server. The system offers features like scheduled jobs with exactly-once execution, event processing capabilities for sources like Kafka, and built-in OpenTelemetry observability. Performance benchmarks show it operating 25x faster than AWS Step Functions, making it an efficient choice for durable workflow orchestration.

  • FastHTML FastHTML GitHub stars – Designed to be powerful, fast, and easy to use, FastHTML enables developers to build complex, interactive web applications using Python. It maps directly to HTML and HTTP while allowing good software engineering practices. The framework leverages HTMX for easy interactivity and supports hypermedia-driven development. FastHTML integrates seamlessly with the Python ecosystem, offering a unique approach to web development that combines simplicity with advanced capabilities.

  • flpc flpc GitHub stars – Integrates Rust's high-performance regex engine with Python, serving as a drop-in replacement for Python's native 're' module. The package provides all standard regex operations including pattern compilation, searching, substitution, and splitting, while delivering superior performance through its Rust backend. Despite being in an experimental stage, it maintains API compatibility with Python's built-in regex functionality while introducing minor syntax adjustments, such as using 'fmatch()' instead of 'match()'.

  • Lambda Forge Lambda Forge GitHub stars – Provides a complete ecosystem for creating, deploying, and managing AWS Lambda functions with an emphasis on clean architecture and development efficiency. Features include a CLI tool for resource management, built-in support for Lambda layers, live development capabilities with hot reload functionality, and automatic CI/CD pipeline integration through AWS CodePipeline. Includes sophisticated documentation generation, architecture diagramming, and multi-stage environment support, while offering seamless integration with various AWS services like API Gateway, SNS, and SQS.

  • Meatie Meatie GitHub stars – Generates code for REST API integration based on method signatures with type hints, eliminating the need for boilerplate HTTP communication code. Handles URL construction, parameter encoding, and request/response serialization automatically. Supports major HTTP client libraries including requests, httpx, and aiohttp, while offering integrated features like rate limiting, retries, and caching. Includes seamless integration with Pydantic for data validation and optional middleware capabilities for custom request/response processing.

  • picows picows GitHub stars – Offers an efficient, non-async data path for WebSocket communication by providing direct access to WebSocket frame objects instead of complete messages. Features include a high-performance frame parser and builder implemented in Cython, memory reuse optimization, and auto ping-pong functionality with customizable messages. The library significantly outperforms other Python WebSocket implementations, showing 1.5-2 times faster performance than aiohttp, achieved through minimizing overhead and avoiding unnecessary Python object creation.

  • py-cachify py-cachify GitHub stars – Provides enhanced caching and distributed locking capabilities that work seamlessly in both synchronous and asynchronous environments. Implements decorator-based caching and locking mechanisms with dynamic key generation, while remaining backend agnostic to support various cache storage solutions. Features full type annotations, 100% test coverage, and supports Python 3.8+, making it particularly valuable for applications requiring distributed resource management and function result caching with minimal code overhead.

  • PyUIBuilder PyUIBuilder GitHub stars – Enables developers to create graphical user interfaces through a visual drag-and-drop interface that generates clean Python code for multiple UI frameworks. Supports major GUI frameworks like Tkinter and CustomTkinter, with planned support for Kivy and PySide. Features include pre-built UI widgets, plugin support for third-party libraries, multiple layout managers (flex, grid, absolute positioning), and automatic generation of requirements.txt files. The tool offers both a free web-based editor and premium versions with additional capabilities like local development and live preview features.

  • Scrapling Scrapling GitHub stars – Provides fast, intelligent web scraping capabilities with automatic adaptation to website changes through smart element tracking and similarity matching. Features multiple fetching modes including stealth mode for bypassing anti-bot protection, content-based element selection, and high-performance parsing that outperforms alternatives like BeautifulSoup by up to 620x. Includes rich navigation APIs, automatic selector generation, and efficient data structures for minimal memory footprint while maintaining compatibility with familiar scraping patterns.

  • temporals temporals GitHub stars – Provides three main classes (TimePeriod, DatePeriod, and DatetimePeriod) to handle different types of time intervals with a clean and intuitive API. Enables operations like checking overlaps between periods, calculating durations, and determining if specific moments fall within defined periods. Built on top of Python's datetime module, it simplifies common time-related tasks while offering flexible formatting options and duration calculations, though currently limited to naive datetime objects.

  • v8serialize v8serialize GitHub stars – Enables seamless data exchange between Python and JavaScript by implementing V8's serialization format. Supports a wide range of data types including strings, numbers, arrays, objects, dates, regular expressions, and binary data, making it compatible with JavaScript's structuredClone() algorithm. The package allows developers to serialize complex data structures in Python and deserialize them in JavaScript (Node.js/Deno) environments, or vice versa, effectively creating a bridge between the two languages for data transfer.

Runners-up – AI/ML/Data

  • BM25-Sparse BM25-Sparse GitHub stars – Implements the BM25 ranking algorithm using Scipy sparse matrices for efficient document scoring and retrieval. Features an eager computation approach that pre-calculates scores for all document tokens, resulting in significantly faster query processing compared to traditional implementations. Provides multiple BM25 variants, memory-efficient operations through memory mapping, and seamless integration with the Hugging Face ecosystem while maintaining minimal dependencies.

  • BoCoEL BoCoEL GitHub stars – Leverages Bayesian optimization techniques to select optimal subsets of data for evaluating Large Language Models (LLMs), significantly reducing computation costs and time. Utilizes embedding-based representation of data entries combined with Gaussian processes to intelligently sample evaluation queries. The tool supports various LLM architectures through Hugging Face integration, offers modular design for flexibility, and includes features for efficient corpus representation such as N-sphere representation and whitening of the latent space. Particularly valuable for organizations needing to evaluate LLMs on large datasets while working with limited computational resources.

  • exo exo GitHub stars – Enables running large AI models across multiple everyday devices by automatically discovering and connecting them in a peer-to-peer network. Implements dynamic model partitioning to optimally split models based on available device resources and network topology, allowing execution of models larger than any single device could handle. Features a ChatGPT-compatible API, supports various models like LLaMA and Mistral, and works across different platforms including iPhones, iPads, Macs, and Linux devices, with the ability to utilize both CPU and GPU resources.

  • fastc fastc GitHub stars – Enables efficient text classification by utilizing pre-trained LLM embeddings with either logistic regression or nearest centroid classification methods. Features parallel execution capabilities for handling hundreds of classifiers simultaneously while sharing the same embedding model. Supports various pooling strategies, template-based instruction tuning, and is specifically designed for limited-memory CPU environments using efficient distilled models. Includes functionality for model saving, loading, and direct publishing to HuggingFace, along with a built-in inference server for deployment.

  • Felafax Felafax GitHub stars – Focuses on efficient continued training and fine-tuning of open-source LLMs using XLA runtime, with particular emphasis on TPU optimization. Supports various model architectures including LLaMa-3.1 and Gemma2, offering both full-precision and LoRA training capabilities. Enables seamless scaling from single TPU VMs to large TPU Pods, with the ability to handle training scenarios up to 1000x larger, while providing out-of-the-box Jupyter notebooks for immediate deployment and a user-friendly configuration interface targeted at ML researchers.

  • GraphRAG GraphRAG GitHub stars – Transforms unstructured text into structured data using LLMs and knowledge graph memory structures to enhance reasoning capabilities. The system provides a complete data pipeline that extracts, processes, and organizes information in a way that allows LLMs to better understand and reason about private data. Features include a sophisticated indexing system, customizable prompt tuning capabilities, and integration with Azure resources, making it particularly effective for enterprises looking to leverage their proprietary data with LLMs.

  • Infinity Infinity GitHub stars – Enables deployment of any embedding, reranking, CLIP, CLAP, and sentence-transformer model from HuggingFace with optimized inference backends. Supports multiple inference engines including PyTorch, ONNX/TensorRT, and CTranslate2, utilizing hardware accelerators like NVIDIA CUDA, AMD ROCM, CPU, AWS INF2, and Apple MPS. Features dynamic batching, dedicated worker threads for tokenization, multi-model orchestration, and OpenAI-compatible API specifications, making it suitable for production deployments requiring high performance and flexibility.

  • LaVague LaVague GitHub stars – Enables developers to create AI-powered web agents that can autonomously navigate and interact with websites based on natural language objectives. Combines a World Model for interpreting objectives and generating instructions with an Action Engine that executes these instructions using web automation tools like Selenium or Playwright. Features include built-in contexts, customizable configurations, test runners, token counting, logging tools, and support for multiple web drivers, making it suitable for both general web automation and specialized use cases like QA testing.

  • Lighteval Lighteval GitHub stars – Provides comprehensive evaluation capabilities for Large Language Models across multiple backends including transformers, TGI, VLLM, and nanotron. Enables detailed performance analysis with sample-by-sample results storage, supporting both local and cloud-based storage options like S3 and Hugging Face Hub. Features a rich collection of pre-built tasks and metrics while allowing custom implementations, making it suitable for both standard benchmarking and specialized evaluation needs. Emphasizes speed and flexibility through various backend options, particularly leveraging VLLM for fast evaluations.

  • lm-format-enforcer lm-format-enforcer GitHub stars – Enables precise control over language model outputs by filtering allowed tokens at each generation step, ensuring adherence to specified formats while preserving model flexibility. Supports JSON Schema, regular expressions, and schemaless JSON mode, with compatibility across major frameworks like Transformers, LangChain, and vLLM. The tool maintains high output quality by allowing models to control aspects like whitespace and field ordering naturally, while offering diagnostic capabilities to monitor enforcement impact on generation quality and providing batched generation and beam search support.
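
As a rough sketch of the Transformers integration: the helper names follow the project's README, the JSON schema is a made-up example, and gpt2 is used only to keep the snippet self-contained, so treat the details as assumptions rather than a reference implementation.

```python
from transformers import pipeline
from lmformatenforcer import JsonSchemaParser
from lmformatenforcer.integrations.transformers import (
    build_transformers_prefix_allowed_tokens_fn,
)

# Hypothetical schema: we only accept a JSON object with these two fields.
schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}

# Any causal LM works; gpt2 is used here only to keep the sketch small.
pipe = pipeline("text-generation", model="gpt2")

# The prefix function filters the allowed tokens at every generation step,
# so the output cannot drift away from the schema.
parser = JsonSchemaParser(schema)
prefix_fn = build_transformers_prefix_allowed_tokens_fn(pipe.tokenizer, parser)

output = pipe(
    "Here is a person described as JSON: ",
    prefix_allowed_tokens_fn=prefix_fn,
    max_new_tokens=50,
)
print(output[0]["generated_text"])
```

Because only token filtering is applied, the model still decides things like whitespace and field order itself, which is how the library keeps output quality high.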

  • Model2Vec Model2Vec GitHub stars – Transforms sentence transformer models into compact static versions, reducing model size by 15x while achieving up to 500x faster inference speeds. The technique works by passing a vocabulary through a sentence transformer model, reducing dimensionality using PCA, and applying Zipf weighting to the embeddings. It supports three operational modes: output mode for quick creation, vocab mode for word-level tokenization, and a hybrid subword mode, all without requiring any training data.
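
A minimal sketch of the workflow, loosely following the project's docs; the checkpoint names and PCA setting are assumptions taken from its examples, not recommendations:

```python
from model2vec import StaticModel

# Load a pre-distilled static model from the Hugging Face Hub
# (checkpoint name taken from the project's examples).
model = StaticModel.from_pretrained("minishlab/potion-base-8M")

# Encoding is a lookup plus pooling, so there is no transformer forward pass.
embeddings = model.encode(["It's dangerous to go alone!", "It's a secret to everybody."])
print(embeddings.shape)

# Distilling your own static model from a sentence transformer looks roughly like this:
from model2vec.distill import distill

small_model = distill(model_name="BAAI/bge-base-en-v1.5", pca_dims=256)
small_model.save_pretrained("my-static-model")
```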

  • mpl_ascii mpl_ascii GitHub stars – Functions as a specialized backend for Matplotlib that converts standard plots into ASCII character representations. Supports various plot types including bar charts, scatter plots, and line plots, with the ability to maintain color information in terminal output. The package is particularly valuable for version control scenarios, allowing plots to be stored as text files that are easier to diff and manage compared to binary image formats. Users can control the output dimensions through configurable width and height parameters, making it adaptable for different terminal sizes and display needs.
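
A small sketch of switching backends; the AXES_WIDTH/AXES_HEIGHT globals and the backend string are taken from the project's README, so verify them against the current docs:

```python
import matplotlib
import mpl_ascii

# Output size in characters (module-level settings per the README).
mpl_ascii.AXES_WIDTH = 100
mpl_ascii.AXES_HEIGHT = 40

# Route all rendering through the ASCII backend.
matplotlib.use("module://mpl_ascii")

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar(["spam", "eggs", "ham"], [3, 7, 5])
plt.show()  # prints the chart as text, which diffs nicely under version control
```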

  • Nanotron Nanotron GitHub stars – Enables efficient pretraining of transformer models with a focus on simplicity and performance optimization. Implements 3D parallelism (DP+TP+PP), expert parallelism for MoEs, and supports features like ZeRO-1 optimizer, FP32 gradient accumulation, and parameter tying/sharding. The framework includes advanced capabilities such as AFAB and 1F1B schedules for pipeline parallelism, custom module checkpointing for large models, and spectral µTransfer parametrization for scaling up neural networks.

  • Ollama Ollama GitHub stars – Enables seamless integration with Ollama's large language model deployments through a Python interface. Supports both synchronous and asynchronous operations, including chat completions, text generation, model management, and embeddings generation. Features include response streaming, custom client configurations, comprehensive error handling, and full access to Ollama's REST API functionality, making it ideal for developers building AI-powered applications.
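
To give a feel for the client, here is a minimal sketch; the model name is just an example of something you might have pulled locally, and it assumes an Ollama server is running.

```python
import ollama

# One-shot chat completion against a locally running Ollama server.
response = ollama.chat(
    model="llama3.2",  # any model you have already pulled locally
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])

# The same call with streaming: chunks arrive as they are generated.
for chunk in ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Tell me a one-line joke."}],
    stream=True,
):
    print(chunk["message"]["content"], end="", flush=True)
```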

  • Open Interpreter Open Interpreter GitHub stars – Enables language models to execute code (Python, JavaScript, Shell, and more) directly on the user's local machine through a ChatGPT-like terminal interface. Provides full access to the computer's capabilities, allowing users to create and edit media files, control web browsers, analyze datasets, and perform system operations through natural language commands. Unlike cloud-based alternatives, it runs locally without limitations on internet access, package installations, file sizes, or runtime duration, while maintaining user control through code execution confirmation.
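
A minimal sketch of the Python API; the prompt is arbitrary, and auto_run is left off so you approve each code block before it executes on your machine.

```python
from interpreter import interpreter

# Ask for confirmation before any generated code runs on your machine.
interpreter.auto_run = False

# The model plans, writes, and executes code locally to satisfy the request.
interpreter.chat("Create a CSV with the first 20 Fibonacci numbers and plot them.")
```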

  • Open Parse Open Parse GitHub stars – Transforms complex documents into well-structured chunks by visually analyzing document layouts and semantic structures. Features markdown support, high-precision table extraction, and semantic processing capabilities through embedding-based clustering. The system goes beyond simple text splitting by maintaining document structure, supporting tables and images, and offering intuitive ways to overlay chunks on original PDFs. Includes extensible post-processing options and integrates with popular ML models for enhanced table detection and parsing.

  • OpenLIT OpenLIT GitHub stars – Enables comprehensive monitoring and management of AI applications with OpenTelemetry-native observability features, including full-stack monitoring for LLMs, vector databases, and GPUs. Provides essential tools for prompt management, API key security, and exception tracking through an analytics dashboard. Includes features like cost tracking for custom models, a prompt hub for versioning, and OpenGround for LLM experimentation, making it particularly valuable for teams building and deploying AI applications at scale.

  • Opik Opik GitHub stars – Enables developers to track and analyze LLM calls through comprehensive tracing capabilities, while providing tools for annotating responses and automating evaluations. Integrates with popular frameworks like LangChain, OpenAI, and Gemini, offering metrics for complex issues such as hallucination detection and RAG evaluation. Supports both development and production environments with features for CI/CD integration through PyTest, dataset management, and production monitoring capabilities.

  • Pipecat Pipecat GitHub stars – Enables developers to create interactive voice agents including personal coaches, meeting assistants, storytelling toys, and customer support bots. Supports real-time voice conversations through WebRTC integration, with features like Voice Activity Detection (VAD) and multiple AI service integrations including text-to-speech and speech recognition. The framework allows both local development and cloud deployment, with extensible pipeline architecture supporting various AI services, and includes capabilities for telephone integration, image output, and video input processing.

  • Pretzel Pretzel GitHub stars – Builds upon Jupyter's foundation by integrating AI-powered features for code generation, editing, and completion. Includes inline tab completion, sidebar chat for code assistance, error fixing capabilities, and AI-driven code generation using models like GPT-4 and Mistral's Codestral. Maintains full compatibility with existing Jupyter configurations, extensions, and keybindings while adding modern features like real-time collaboration, SQL support, and one-click dashboard creation.

  • PyPalettes PyPalettes GitHub stars – Provides seamless access to a vast collection of color palettes that can be used with popular visualization libraries like Matplotlib and Seaborn. Supports both discrete and continuous color maps, with features to reverse palettes, extract hex or RGB values, and combine multiple colormaps. The package is compatible with common Python visualization workflows and includes palettes from various sources, making it particularly useful for creating professional and aesthetically pleasing data visualizations.
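
A small sketch assuming load_cmap is the entry point, as in the project's examples; the palette name "Acadia" is just an illustrative choice.

```python
import matplotlib.pyplot as plt
import numpy as np
from pypalettes import load_cmap

# Load a named palette as a Matplotlib colormap
# ("Acadia" is just an example; hundreds of palettes are available).
cmap = load_cmap("Acadia")

x, y = np.random.rand(2, 200)
plt.scatter(x, y, c=y, cmap=cmap)
plt.colorbar()
plt.show()
```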

  • RouteLLM RouteLLM GitHub stars – Enables intelligent routing of queries between expensive and cheaper language models to optimize cost-performance tradeoffs. Features a drop-in replacement for OpenAI's client that can reduce costs by up to 85% while maintaining 95% GPT-4 performance on benchmarks. Includes pre-trained routers out of the box, supports multiple routing strategies like matrix factorization and BERT-based classification, and provides comprehensive evaluation tools for comparing router performance across various benchmarks.

  • SAMURAI SAMURAI GitHub stars – Adapts the Segment Anything Model (SAM) for zero-shot visual object tracking with a motion-aware memory component. It achieves state-of-the-art performance on multiple visual tracking benchmarks including LaSOT, GOT-10k, and NeedForSpeed. The model leverages SAM's powerful segmentation capabilities and combines them with motion prediction to track objects across video frames without prior training on specific object classes. It includes tools for inference on custom videos and integration with various tracking datasets.

  • ScrapeGraphAI ScrapeGraphAI GitHub stars – Combines large language models with direct graph logic to create intelligent scraping pipelines for both websites and local documents (XML, HTML, JSON, Markdown). Users simply specify the information they want to extract through natural language prompts, and the framework handles the complexities of extraction. Supports multiple scraping pipelines including single-page scraping, multi-page search-based scraping, audio generation, and automated script creation. Integrates with various LLM providers including OpenAI, Groq, Azure, and Gemini, while also supporting local models through Ollama.
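
A trimmed-down sketch based on the project's examples; the config assumes a local Ollama model, and depending on the version you may need extra keys (for example an embeddings section) or an API key for a hosted LLM.

```python
from scrapegraphai.graphs import SmartScraperGraph

# Minimal config pointing at a local Ollama model; treat the exact keys as
# an assumption and check them against the version you install.
graph_config = {
    "llm": {"model": "ollama/llama3", "base_url": "http://localhost:11434"},
    "verbose": False,
}

scraper = SmartScraperGraph(
    prompt="List me all the article titles on this page",
    source="https://example.com",
    config=graph_config,
)

result = scraper.run()  # extracted data comes back as a Python dict
print(result)
```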

  • SGLang SGLang GitHub stars – Combines a fast backend runtime with a flexible frontend programming language to enable efficient and controllable model interactions. The runtime features advanced optimizations like RadixAttention, constrained decoding, continuous batching, and tensor parallelism, while the frontend language provides intuitive interfaces for complex prompting workflows, control flow, and multi-modal inputs. Supports a wide range of generative and vision language models including Llama, Mistral, LLaVA, and others, with capabilities for both local deployment and cloud integration.

  • StreamJoy StreamJoy GitHub stars – Transforms static images and datasets into animated sequences using sensible defaults and parallel processing capabilities. Features include URL and file-based animation creation, custom rendering support, intro splashes, configurable frame pauses, and the ability to connect multiple animations. Supports various output formats including GIF, MP4, and HTML, while offering built-in optimization tools and parallel execution for improved performance.

  • torchtune torchtune GitHub stars – Provides implementations of popular LLMs including Llama, Gemma, Mistral, Phi, and Qwen model families, along with training recipes for various finetuning techniques like LoRA, QLoRA, DPO, and PPO. Incorporates built-in memory efficiency features, performance improvements, and scaling capabilities using the latest PyTorch APIs. Supports configuration through YAML files and includes extensive dataset compatibility with popular formats and prompt templates, making it suitable for both single-device and distributed training scenarios.

  • Unsloth Unsloth GitHub stars – Provides specialized kernels written in OpenAI's Triton language to accelerate LLM fine-tuning by 2-5x while reducing memory usage by up to 80%. Features include a manual backpropagation engine, support for 4-bit and 16-bit QLoRA/LoRA training, compatibility with popular models like Llama, Mistral, and Phi, and seamless integration with Hugging Face's training frameworks. Works on NVIDIA GPUs released since 2018 and maintains identical accuracy to standard approaches, without approximations.
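
A condensed sketch of the usual flow; the checkpoint name and LoRA hyperparameters are assumptions lifted from Unsloth's examples rather than tuned values.

```python
from unsloth import FastLanguageModel

# Load a 4-bit quantized base model (checkpoint name taken from Unsloth's examples).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these low-rank matrices get trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# `model` and `tokenizer` can now be handed to a regular Hugging Face / TRL trainer.
```

Most of the memory savings come from the 4-bit base weights plus training only the small LoRA matrices, with the custom Triton kernels handling the speedup.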

  • WordLlama WordLlama GitHub stars – Recycles components from large language models to create efficient and compact word representations through token embedding extraction. Implements fast operations like similarity computation, ranking, fuzzy deduplication, clustering, and semantic text splitting using minimal inference-time dependencies. The toolkit operates with a lightweight 16MB default model at 256 dimensions, improving on MTEB benchmarks over traditional word models while maintaining small model sizes. Features include Matryoshka representations for flexible dimension sizing, binary embeddings for faster computations, and a NumPy-only inference pipeline for easy deployment.
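
A minimal sketch of the core calls as described in the project's README; the sentences are made up.

```python
from wordllama import WordLlama

# Load the default model (16 MB, 256-dimensional embeddings).
wl = WordLlama.load()

# Similarity and ranking run on static token embeddings, so they are very fast.
print(wl.similarity("I went to the car", "I went to the vehicle"))

candidates = ["I went to the park", "I went to the shop", "I went to the truck"]
print(wl.rank("I went to the car", candidates))
```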

Long tail

Beyond our top picks, many overlooked Python libraries shine just as brightly. We've analyzed hundreds, categorizing them with concise one-liner summaries to guide your exploration.

AI Agents
  • Browser Use Browser Use GitHub stars – Browser automation toolkit for AI agents to interact with web interfaces.
  • Burr Burr GitHub stars – State machine framework for building decision-making applications and AI agents.
  • DurableSwarm DurableSwarm GitHub stars – Multi-agent orchestration framework that adds durability to OpenAI's Swarm.
  • FastAgency FastAgency GitHub stars – Multi-agent workflow deployment framework with unified programming interface for development and production environments.
  • llama-github llama-github GitHub stars – Knowledge retrieval system for extracting code snippets, issues, and repository information from GitHub.
  • LLMEasyTools LLMEasyTools GitHub stars – Function calling and schema generation toolkit for LLM interactions, focusing on agentic workflows and OpenAI API integration.
  • phidata phidata GitHub stars – Framework for building intelligent agent systems with memory, knowledge, tools and reasoning capabilities.
  • TinyTroupe TinyTroupe GitHub stars – Multiagent persona simulation framework for business insights and imagination enhancement through LLM-powered artificial agents. By Microsoft.

AI Security
  • LLM Guard LLM Guard GitHub stars – Security toolkit for protecting and sanitizing Large Language Model interactions.

AI Toolkits
  • EyeGestures EyeGestures GitHub stars – Eye-tracking software using native webcams and phone cameras for eye-driven interfaces.

Asynchronous Tools
  • Asyncpal Asyncpal GitHub stars – Concurrency and parallelism library for sporadic workloads with preemptive thread and process pools.
  • awaiter awaiter GitHub stars – Lightweight executor that makes blocking functions awaitable in asynchronous code.
  • credit-rate-limit credit-rate-limit GitHub stars – Rate limiting utility for managing API request credits and computation units with support for async operations.
  • Kew Kew GitHub stars – Task queue manager and scheduler with Redis-backed persistence and priority-based processing.
  • PSQLPy PSQLPy GitHub stars – Asynchronous PostgreSQL driver written in Rust with Python bindings.
  • PyAwaitable PyAwaitable GitHub stars – Library for writing and calling asynchronous Python functions from pure C code.
  • TSignal TSignal GitHub stars – Signal/slot pattern implementation for Python without Qt dependencies.

Command-Line Tools
  • Cappa Cappa GitHub stars – Command line argument parsing framework with type-based inference and automatic help generation.
  • typed-argparser typed-argparser GitHub stars – Command line argument parser utilizing Python type hints for parsing and validation.

Data Handling
  • Alternative Queries Alternative Queries GitHub stars – SQL query builder focused on handcrafted queries with type checking.
  • Amphi Amphi GitHub stars – Visual data transformation platform for ETL, data preparation, and reporting.
  • cstructpy cstructpy GitHub stars – Binary serialization framework for structured data using C-like primitive types.
  • HashStash HashStash GitHub stars – Caching library with multiple storage engines, serializers and compression options.
  • iceaxe iceaxe GitHub stars – ORM framework for PostgreSQL focused on performance and type safety.
  • Meerkat Meerkat GitHub stars – Data monitoring and change tracking system for various data sources.
  • odex odex GitHub stars – Object indexing system for fast and declarative data retrieval.
  • pgcrud pgcrud GitHub stars – Fast and lightweight database connector that simplifies CRUD operations with PostgreSQL.
  • Slowstore Slowstore GitHub stars – Object store that persists Python objects as individual JSON files with change tracking and undo capabilities.
  • Vectorlite Vectorlite GitHub stars – Vector search extension for SQLite based on hnswlib.

Data Processing
  • bridge-ds bridge-ds GitHub stars – Dataset management framework for deep learning with unified interfaces across different data modalities.
  • PipeFunc PipeFunc GitHub stars – Computational workflow framework for creating and executing function pipelines with automated dependency management and parallel execution capabilities.
  • UniSim UniSim GitHub stars – Universal similarity computation toolkit for text and image matching, deduplication, and clustering. By Google.

Data Visualization
  • Inspectus Inspectus GitHub stars – Machine learning visualization tool for attention mechanisms and model analytics.
  • TorchImager TorchImager GitHub stars – Visualization library for displaying PyTorch tensors using GPU-accelerated rendering.

DataFrame Tools
  • CubedPandas CubedPandas GitHub stars – Multidimensional data analysis extension for Pandas DataFrames offering intuitive filtering and navigation capabilities.
  • NanoCube NanoCube GitHub stars – In-memory OLAP engine for fast point queries on DataFrames.
  • Pydfy Pydfy GitHub stars – PDF report generator that converts DataFrames and other data into formatted documents.
  • Shoots Shoots GitHub stars – Dataframe storage server for Pandas utilizing Apache Arrow and Flight protocols.

Database Extensions
  • sqlite-vec sqlite-vec GitHub stars – Vector search extension for SQLite that enables vector storage and similarity search capabilities.
  • tsellm tsellm GitHub stars – Database extension for accessing Large Language Models through SQLite and DuckDB queries.

Date and Time Utilities
  • friendlydateparser friendlydateparser GitHub stars – Natural language date and time parser supporting complex everyday date expressions and relative references.
  • Temporal Adjuster Temporal Adjuster GitHub stars – Date and time manipulation library for adjusting temporal objects based on specific rules.

Desktop Applications
  • Mininterface Mininterface GitHub stars – Interface framework for creating GUI, TUI, and CLI applications with minimal code overhead.
  • Pyloid Pyloid GitHub stars – Desktop application framework for building cross-platform GUI applications using web technologies and QtWebEngine.
  • TkForge TkForge GitHub stars – GUI builder that converts Figma designs into Tkinter applications.

Jupyter Tools
  • ipyform ipyform GitHub stars – Extension for rendering Google Colab-style forms in standard Jupyter environments.
  • Pixi Kernel Pixi Kernel GitHub stars – Jupyter kernel manager for per-directory Pixi environments with multi-language support.

Knowledge Graphs
  • Graphiti Graphiti GitHub stars – Temporal knowledge graph framework for building and querying dynamic, time-aware entity relationships.

LLM Interfaces
  • aisuite aisuite GitHub stars – Unified interface for multiple Generative AI providers.
  • allms allms GitHub stars – Querying interface for Large Language Models (LLMs).
  • Cria Cria GitHub stars – LLM interaction framework designed for minimal configuration and frictionless operation.
  • Ditto Ditto GitHub stars – LLM-powered code generation tool that builds Flask applications from natural language descriptions.
  • Formatron Formatron GitHub stars – Constrained text generation toolkit for controlling language model outputs through format specifications and constraints.
  • Modelsmith Modelsmith GitHub stars – Large language model interface library for structured data responses with Pydantic model support.
  • OpenVoiceChat OpenVoiceChat GitHub stars – Conversational interface framework for natural voice interactions with large language models.
  • optillm optillm GitHub stars – Inference proxy implementing optimization techniques for improving LLM accuracy and performance.
  • promptic promptic GitHub stars – Decorator-based prompt engineering toolkit for interacting with large language models.
  • Semantix Semantix GitHub stars – Large language model interface framework for infusing semantic meaning into code structures.
  • TokenCost TokenCost GitHub stars – Token counting and price estimation tool for LLM applications.

ML Development
  • AXLearn AXLearn GitHub stars – Large-scale deep learning library built on JAX and XLA. By Apple.
  • CoreNet CoreNet GitHub stars – Deep neural network toolkit for training standard and novel models. By Apple.
  • LLM2Vec LLM2Vec GitHub stars – Large language model text encoding framework that converts decoder-only LLMs into text encoders.
  • MONAI-TRAIN MONAI-TRAIN GitHub stars – Medical imaging deep learning pipeline built on MONAI and PyTorch for healthcare image analysis.
  • Note_rl Note_rl GitHub stars – Reinforcement learning library for training agents with Keras and PyTorch.
  • NuCS NuCS GitHub stars – Constraint satisfaction and optimization problem solver accelerated by Numpy and Numba.
  • Penzai Penzai GitHub stars – Framework for writing and manipulating neural network models as functional pytree data structures. By Google.
  • Perpetual Perpetual GitHub stars – Gradient boosting machine algorithm that eliminates the need for hyperparameter optimization.
  • polip polip GitHub stars – Machine learning utility framework for PyTorch-based AI development workflows.
  • smallperm smallperm GitHub stars – Memory-efficient permutation generator using pseudo-random permutations (PRP), useful for ML training.
  • Tricycle Tricycle GitHub stars – Deep learning library built from scratch for educational transparency and understanding.
  • zephyr zephyr GitHub stars – Neural network parameter management framework for JAX, focused on simplifying parameter creation and pure functional programming.

ML Testing & Evaluation
  • FiddleCube FiddleCube GitHub stars – Testing framework for generating and evaluating question-answer datasets for LLMs and RAG systems.
  • IntentGuard IntentGuard GitHub stars – Testing framework for verifying code properties using natural language assertions.
  • Paramount Paramount GitHub stars – Quality assurance toolkit for evaluating and testing AI chat interactions.
  • skore skore GitHub stars – ML/DS project tracking and assistance toolkit.
  • tea-tasting tea-tasting GitHub stars – Statistical analysis framework for A/B tests with support for various data backends.

NLP
  • BasicLINGUA BasicLINGUA GitHub stars – Language model-based library for linguistic tasks including pattern extraction and intent recognition.

Niche Tools
  • AioTx AioTx GitHub stars – Comprehensive cryptocurrency operations framework supporting multiple blockchains including Bitcoin, Ethereum, TON, and TRON.
  • ClaudeSync ClaudeSync GitHub stars – Synchronization tool for managing Claude.ai project files and workflows.
  • datamule datamule GitHub stars – Financial data toolkit for SEC filings download, parsing, and analysis with built-in chatbot capabilities.
  • drawpyo drawpyo GitHub stars – Diagram generation library for creating Diagrams.net/Draw.io charts programmatically.
  • filefrag filefrag GitHub stars – File fragmentation analysis and extent mapping interface.
  • Named Semaphores Named Semaphores GitHub stars – Pythonic API for POSIX IPC Named Semaphores.
  • PhotoshopAPI PhotoshopAPI GitHub stars – Library for reading, writing, and manipulating Adobe Photoshop files (*.psd and *.psb).
  • rpaudio rpaudio GitHub stars – Rust-powered audio processing library with support for multiple formats and real-time effects.
  • ryp ryp GitHub stars – Interface for running R code within Python, with focus on efficient data transfer between languages.
  • Smart Segment Smart Segment GitHub stars – Customer segmentation tool focused on optimization-driven revenue maximization through dynamic propensity-based grouping.
  • ThreadsPipePy ThreadsPipePy GitHub stars – Wrapper for Meta's Threads API enabling automated account interactions and post management. By Meta.
  • ViperIDE ViperIDE GitHub stars – Web-based integrated development environment for MicroPython and CircuitPython development.

OCR
  • ComiQ ComiQ GitHub stars – OCR library specialized for comic text detection and translation, combining traditional OCR engines with Gemini Flash-1.5 model.
  • MyLittleOCR MyLittleOCR GitHub stars – OCR wrapper providing unified access to multiple OCR engines including Tesseract, EasyOCR, PaddleOCR, and others.
  • thepi.pe thepi.pe GitHub stars – Document extraction and processing framework that converts various file formats into clean markdown and structured data for LLMs and vector databases.

Reactive Programming & State Management
  • Pyliven Pyliven GitHub stars – Computation framework enabling automatic variable updates based on dependency changes.
  • signified signified GitHub stars – Reactive programming library with type narrowing support.

Testing, Debugging & Profiling
  • pytest-edit pytest-edit GitHub stars – Plugin for opening failed test files in an editor during pytest execution.
  • PyTraceToIX PyTraceToIX GitHub stars – Expression tracer framework for debugging Jinja2 templates, Flask web apps, lambdas, list comprehensions, and method chaining.
  • SmartProfiler SmartProfiler GitHub stars – Profiling framework for execution time, memory usage, CPU time, and function call counting with thread-safe capabilities.

Visualization
  • Django-Plottings Django-Plottings GitHub stars – Integration library for rendering Matplotlib graphics within Django web applications.
  • justpyplot justpyplot GitHub stars – Plotting library for direct array-based visualization with real-time performance focus.
  • PhysEng PhysEng GitHub stars – Physics engine for 3D physical system simulations with real-time visualization.
  • PyQt Toast PyQt Toast GitHub stars – Toast notification system for Qt-based desktop applications.
  • ridgeplot ridgeplot GitHub stars – Plotting library for creating beautiful and interactive ridgeline plots within the Plotly ecosystem.
  • Terminal Text Effects Terminal Text Effects GitHub stars – Visual effects engine for creating dynamic text animations and transitions in terminal environments.

Web Crawling & Scraping
  • Loki Loki GitHub stars – Fact verification tool for automated claim assessment and evidence-based verification.
  • PAR Scrape PAR Scrape GitHub stars – Web scraping tool with AI-powered data extraction capabilities and multi-browser support.
  • Parsera Parsera GitHub stars – Lightweight web scraping framework that leverages Large Language Models for data extraction.
  • yami yami GitHub stars – Text-based code stealing tool for downloading GitHub repositories anonymously.

Web Development
  • Apitally Apitally GitHub stars – API monitoring and analytics platform focused on data privacy.
  • Django Action Triggers Django Action Triggers GitHub stars – Event-driven action framework for Django that extends signal functionality with dynamic runtime configuration and external service integrations.
  • Emmett55 Emmett55 GitHub stars – Lightweight web framework focused on simplicity and core features.
  • Expanse Expanse GitHub stars – Modern web application framework with emphasis on developer experience.
  • LightAPI LightAPI GitHub stars – Lightweight framework for building RESTful APIs with automatic CRUD endpoint generation and SQLAlchemy integration.
  • paramorator paramorator GitHub stars – Utility for creating type-safe parameterized decorators.
  • Posting Posting GitHub stars – Terminal-based HTTP client with TUI interface for testing and managing API requests.
  • Protatoquests Protatoquests GitHub stars – HTTP request library with automatic proxy rotation for anonymous web access.
  • PyJSX PyJSX GitHub stars – JSX compiler and runtime that enables writing JSX syntax directly in Python code.
  • ql ql GitHub stars – GraphQL client library built around Pydantic for type validation and API querying.
  • Quick Api Client Quick Api Client GitHub stars – API client framework for creating fully typed declarative clients.
  • rfc9457 rfc9457 GitHub stars – Exception handling library implementing RFC9457 for web API error management and responses.
  • sensei sensei GitHub stars – HTTP request framework for building type-safe API clients with minimal implementation.
  • Supadantic Supadantic GitHub stars – Database management interface combining Supabase and Pydantic models.
  • UiWizard UiWizard GitHub stars – Web UI framework for creating responsive web interfaces using HTMX and Tailwind CSS.
  • Web Analytics Web Analytics GitHub stars – Web analytics tracking framework with minimal configuration.
