The AI Enablement Stack is a curated collection of venture-backed companies, tools and technologies that enable developers to build, deploy, and manage AI applications. It provides a structured view of the AI development ecosystem across five key layers:
AGENT CONSUMER LAYER: Where AI meets end-users through autonomous agents, assistive tools, and specialized solutions. This layer showcases ready-to-use AI applications and agentic tools.
OBSERVABILITY AND GOVERNANCE LAYER: Tools for monitoring, securing, and managing AI systems in production. Essential for maintaining reliable and compliant AI operations.
ENGINEERING LAYER: Development resources for building production-ready AI applications, including training tools, testing frameworks, and quality assurance solutions.
INTELLIGENCE LAYER: The cognitive foundation featuring frameworks, knowledge engines, and specialized models that power AI capabilities.
INFRASTRUCTURE LAYER: The computing backbone that enables AI development and deployment, from development environments to model serving platforms.
Who this stack is for:
- For Developers: Find the right tools to build AI applications faster and more efficiently
- For Engineering Leaders: Make informed decisions about AI infrastructure and tooling
- For Organizations: Understand the AI development landscape and plan technology adoption
Each tool in this stack is carefully selected based on:
- Production readiness
- Enterprise-grade capabilities
- Active development and support
- Venture backing or significant market presence
To contribute to this list:
- Read CONTRIBUTING.md
- Fork the repository
- Add your tool's logo under the ./public/images/ folder
- Add your tool to the appropriate category in ai-enablement-stack.json
- Submit a PR with a compelling rationale for its acceptance
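A new entry in ai-enablement-stack.json might look like the sketch below. The field names are hypothetical illustrations, not the file's actual schema — mirror the existing entries in that file when adding your tool:

```json
{
  "name": "YourTool",
  "category": "ENGINEERING LAYER - Tools",
  "description": "One sentence on what the tool does and who it is for.",
  "logo": "/images/yourtool.png",
  "website": "https://yourtool.example.com"
}
```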
Self-operating AI systems that can complete complex tasks independently
AGENT CONSUMER LAYER - Autonomous Agents
- Cognition develops Devin, the world's first AI software engineer, designed to work as a collaborative teammate that helps engineering teams scale their capabilities through parallel task execution and comprehensive development support.
- Bolt.new is a web-based development platform that enables in-browser application development with AI assistance (Claude 3.5 Sonnet), featuring real-time execution, one-click Netlify deployment, and a no-setup development environment, making it particularly suited to rapid prototyping and non-technical founders.
- Vercel v0 is an AI-powered UI generation platform that enables developers to create React components through natural language prompts, featuring integration with Tailwind CSS and Shadcn/UI, rapid prototyping capabilities, and production-ready code generation with customization options.
- Morph AI delivers an enterprise-grade developer assistant that automates engineering tasks across multiple languages and frameworks, enabling developers to focus on high-impact work while ensuring code quality through automated testing and compliance.
AI tools that enhance human capabilities and workflow efficiency
AGENT CONSUMER LAYER - Assistive Agents
- Sourcegraph's Cody is an AI coding assistant that combines the latest LLMs (including Claude 3 and GPT-4) with comprehensive codebase context to help developers write, understand, and fix code across multiple IDEs, while offering enterprise-grade security and flexible deployment options.
- Tabnine provides a privacy-focused AI code assistant that offers personalized code generation, testing, and review capabilities, featuring bespoke models trained on team codebases, zero data retention, and enterprise-grade security with support for on-premises deployment.
- Supermaven provides ultra-fast code completion and assistance with a 1M token context window, supporting multiple IDEs (VS Code, JetBrains, Neovim) and LLMs (GPT-4, Claude 3.5), featuring a real-time chat interface, codebase scanning, and 3x faster response times compared to competitors.
- Windsurf provides an agentic IDE that combines copilot and agent capabilities through 'Flows', featuring Cascade for deep contextual awareness, multi-file editing, command suggestions, and LLM-based search tools, all integrated into a VS Code-based editor for seamless AI-human collaboration.
- Goose is an open-source autonomous developer agent that operates directly on your machine, capable of executing shell commands, debugging code, managing dependencies, and interacting with development tools like GitHub and Jira, featuring extensible toolkits and support for multiple LLM providers.
- Hex provides an AI-powered analytics platform featuring Magic AI for query writing, chart building, and debugging, combining LLM capabilities with data warehouse context and semantic models to assist with SQL, Python, and visualization tasks while maintaining enterprise-grade security.
- Bloop.ai provides a code understanding and modernization platform with AI-powered code conversion from legacy languages to modern ones, featuring automatic behavioral validation, offline operation, continuous delivery support, and enhanced developer productivity through AI assistance.
Purpose-built AI agents designed for specific functions, such as PR reviews.
AGENT CONSUMER LAYER - Specialized Agents
- Ellipsis provides AI-powered code reviews and automated bug fixes for GitHub repositories, offering features like style guide enforcement, code generation, and automated testing while maintaining SOC 2 Type 1 compliance and secure processing without data retention.
- Codeflash is a CI tool that keeps your Python code performant, using AI to find optimized versions of your code, benchmark them, and verify the new code for correctness.
- Superflex is a VS Code extension that builds features from Figma designs, images, and text prompts while maintaining your design standards and code style and reusing your UI components.
- Codemod provides AI-powered code migration agents that automate framework migrations, API upgrades, and refactoring at scale, featuring a community registry of migration recipes, AI-assisted codemod creation, and comprehensive migration management capabilities.
- Codegen provides enterprise-grade static analysis and codemod capabilities for large-scale code transformations, featuring advanced visualization tools, automated documentation generation, and platform engineering templates, with SOC 2 Type II certification for secure refactoring at scale.
Tools for managing and monitoring AI application lifecycles
OBSERVABILITY AND GOVERNANCE LAYER - Development Pipeline
- Portkey provides a comprehensive AI gateway and control panel that enables teams to route to 200+ LLMs, implement guardrails, manage prompts, and monitor AI applications with detailed observability features while maintaining SOC2 compliance and HIPAA/GDPR standards.
- Baseten provides high-performance inference infrastructure featuring up to 1,500 tokens/second throughput, sub-100ms latency, and GPU autoscaling, with Truss open-source model packaging, enterprise security (SOC2, HIPAA), and support for deployment in customer clouds or self-hosted environments.
- Stack AI provides an enterprise generative AI platform for building and deploying AI applications with a no-code interface, offering pre-built templates, workflow automation, enterprise security features (SOC2, HIPAA, GDPR), and on-premise deployment options with support for multiple AI models and data sources.
Systems for tracking AI performance and behavior
OBSERVABILITY AND GOVERNANCE LAYER - Evaluation & Monitoring
- Cleanlab provides an AI-powered data curation platform that helps organizations improve their GenAI and ML solutions by automatically detecting and fixing data quality issues, reducing hallucinations, and enabling trustworthy AI deployment while offering VPC integration for enhanced security.
- Patronus provides a comprehensive AI evaluation platform built on industry-leading research, offering features for testing hallucinations, security risks, alignment, and performance monitoring, with both pre-built evaluators and custom evaluation capabilities for RAG systems and AI agents.
- Log10 provides an end-to-end AI accuracy platform for evaluating and monitoring LLM applications in high-stakes industries, featuring expert-driven evaluation, automated feedback systems, real-time monitoring, and continuous improvement workflows with built-in security and compliance features.
- Traceloop provides open-source LLM monitoring through OpenLLMetry, offering real-time hallucination detection, output quality monitoring, and prompt debugging capabilities across 22+ LLM providers with zero-intrusion integration.
- WhyLabs provides a comprehensive AI Control Center for monitoring, securing, and optimizing AI applications, offering real-time LLM monitoring, security guardrails, and privacy-preserving observability with SOC 2 Type 2 compliance and support for multiple modalities.
- OpenLLMetry provides an open-source observability solution for LLMs built on OpenTelemetry standards, offering easy integration with major observability platforms like Datadog, New Relic, and Grafana, requiring just two lines of code to implement.
- LangWatch provides a comprehensive LLMOps platform for optimizing and monitoring LLM performance, featuring automated prompt optimization using DSPy, quality evaluations, performance monitoring, and collaborative tools for AI teams, with enterprise-grade security and self-hosting options.
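Monitoring tools like Traceloop and OpenLLMetry instrument LLM calls without changing application logic. The underlying pattern can be sketched with a stdlib-only illustration — this is a conceptual toy, not any of these products' APIs: a decorator records latency and inputs for every wrapped call, and a real system would export those records to an observability backend instead of a list.

```python
import functools
import time

# In a real system these records would be exported to an observability backend.
TRACES = []

def trace_llm_call(fn):
    """Record latency and inputs for each call without touching its logic."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACES.append({
            "function": fn.__name__,
            "latency_s": time.perf_counter() - start,
            "prompt": kwargs.get("prompt"),
        })
        return result
    return wrapper

@trace_llm_call
def fake_completion(prompt: str = "") -> str:
    # Stand-in for a real LLM client call.
    return prompt.upper()

fake_completion(prompt="hello")
```

The decorator is "zero-intrusion" in the sense that the wrapped function's body and callers stay unchanged; production tracers achieve the same effect by patching client libraries at import time.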
Frameworks for ensuring responsible AI use and regulatory compliance
Tools for protecting AI systems and managing access and user permissions
OBSERVABILITY AND GOVERNANCE LAYER - Security & Access Control
- LiteLLM provides a unified API gateway for managing 100+ LLM providers with OpenAI-compatible formatting, offering features like authentication, load balancing, spend tracking, and monitoring integrations, available both as an open-source solution and enterprise service.
- Martian provides an intelligent LLM routing system that dynamically selects the optimal model for each request, featuring performance prediction, automatic failover, cost optimization (up to 98% savings), and simplified integration, outperforming single models like GPT-4 while ensuring high uptime.
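Routers like Martian and gateways like LiteLLM choose a backing model per request. The core idea can be sketched in a few lines — note this is purely illustrative (the model names, prices, and quality scores below are made up, and this is not either product's API): pick the cheapest model whose quality score clears the task's bar.

```python
# Hypothetical catalog: model name -> (cost per 1M tokens, quality score 0-100).
MODELS = {
    "small-fast": (0.5, 60),
    "mid-tier": (3.0, 80),
    "frontier": (15.0, 95),
}

def route(min_quality: int) -> str:
    """Return the cheapest model whose quality score meets the bar."""
    candidates = [
        (cost, name)
        for name, (cost, quality) in MODELS.items()
        if quality >= min_quality
    ]
    if not candidates:
        raise ValueError("no model meets the quality bar")
    # Tuples sort by cost first, so min() picks the cheapest candidate.
    return min(candidates)[1]
```

Real routers replace the static quality score with a learned predictor of how well each model will handle the specific request, which is where the claimed cost savings come from.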
Resources for customizing and optimizing AI models
ENGINEERING LAYER - Training & Fine-Tuning
- Provides tools for efficient fine-tuning of large language models, including techniques like quantization and memory optimization.
- Platform for building and deploying machine learning models, with a focus on simplifying the development process and enabling faster iteration.
- Modal offers a serverless cloud platform for AI and ML applications that enables developers to deploy and scale workloads instantly with simple Python code, featuring high-performance GPU infrastructure and pay-per-use pricing.
- Lightning AI provides a comprehensive platform for building AI products, featuring GPU access, development environments, training capabilities, and deployment tools, with support for enterprise-grade security, multi-cloud deployment, and team collaboration, used by major organizations like NVIDIA and Microsoft.
Development utilities, libraries and services for building AI applications
ENGINEERING LAYER - Tools
- Relevance AI provides a no-code AI workforce platform that enables businesses to build, customize, and manage AI agents and tools for various functions like sales and support, featuring Bosh, their AI Sales Agent, while ensuring enterprise-grade security and compliance.
- Greptile provides an AI-powered code analysis platform that helps software teams ship faster by offering intelligent code reviews, codebase chat, and custom dev tools with full contextual understanding, while maintaining SOC2 Type II compliance and optional self-hosting capabilities.
- Sourcegraph provides a code intelligence platform featuring Cody, an AI coding assistant, and advanced code search capabilities that help developers navigate, understand, and modify complex codebases while automating routine tasks across enterprise environments.
- PromptLayer provides a comprehensive prompt engineering platform that enables technical and non-technical teams to collaboratively edit, evaluate, and deploy LLM prompts through a visual CMS, while offering version control, A/B testing, and monitoring capabilities with SOC 2 Type 2 compliance.
- JigsawStack provides a comprehensive suite of AI APIs including web scraping, translation, speech-to-text, OCR, prediction, and prompt optimization, offering globally distributed infrastructure with type-safe SDKs and built-in monitoring capabilities across 99+ locations.
Systems for validating AI performance and reliability
ENGINEERING LAYER - Testing & Quality Assurance
- Confident AI provides an LLM evaluation platform that enables organizations to benchmark, unit test, and monitor their LLM applications through automated regression testing, A/B testing, and synthetic dataset generation, while offering research-backed evaluation metrics and comprehensive observability features.
- AI agent specifically designed for software testing and quality assurance, automating the testing process and providing comprehensive test coverage.
- Braintrust provides an end-to-end platform for evaluating and testing LLM applications, offering features like prompt testing, custom scoring, dataset management, real-time tracing, and production monitoring, with support for both UI-based and SDK-driven workflows.
Core libraries and building blocks for AI application development
INTELLIGENCE LAYER - Frameworks
- Provides an agent development platform with advanced memory management for LLMs, enabling developers to build, deploy, and scale production-ready AI agents with transparent reasoning and model-agnostic flexibility.
- Langbase provides a serverless AI development platform featuring BaseAI (Web AI Framework), composable AI Pipes for agent development, 50-100x cheaper serverless RAG, unified LLM API access, and collaboration tools, with enterprise-grade security and observability.
- Framework for developing LLM applications with multiple conversational agents that collaborate and interact with humans.
- A framework for creating and managing workflows and tasks for AI agents.
- Toolhouse provides a cloud infrastructure platform and universal SDK that enables developers to equip LLMs with actions and knowledge through a Tool Store, offering pre-built optimized functions, low-latency execution, and cross-LLM compatibility with just three lines of code.
- Composio provides an integration platform for AI agents and LLMs with 250+ pre-built tools, managed authentication, and RPA capabilities, enabling developers to easily connect their AI applications with various services while maintaining SOC-2 compliance and supporting multiple agent frameworks.
- CrewAI provides a comprehensive platform for building, deploying, and managing multi-agent AI systems, offering both an open-source framework and enterprise solutions with support for any LLM and cloud platform, enabling organizations to create automated workflows across various industries.
- AI Suite provides a unified interface for multiple LLM providers (OpenAI, Anthropic, Azure, Google, AWS, Groq, Mistral, etc.), offering standardized API access with OpenAI-compatible syntax, easy provider switching, and seamless integration capabilities, available as an open-source MIT-licensed framework.
- Promptflow is Microsoft's open-source development framework for LLM applications, offering tools for flow creation, testing, evaluation, and deployment, featuring visual flow design through a VS Code extension, built-in evaluation metrics, and CI/CD integration capabilities.
- LLMStack is an open-source platform for building AI agents, workflows, and applications, featuring model chaining across major providers, data integration from multiple sources (PDFs, URLs, Audio, Drive), and collaborative development capabilities with granular permissions.
Databases and systems for managing and retrieving information
INTELLIGENCE LAYER - Knowledge Engines
- Supabase Vector provides an open-source vector database built on Postgres and pgvector, offering scalable embedding storage, indexing, and querying capabilities with integrated AI tooling for OpenAI and Hugging Face, featuring enterprise-grade security and global deployment options.
- Contextual AI provides enterprise-grade RAG (Retrieval-Augmented Generation) solutions that enable organizations in regulated industries to build and deploy production-ready AI applications for searching and analyzing large volumes of business-critical documents.
- Platform for working with unstructured data, offering tools for data pre-processing, ETL, and integration with LLMs.
- SciPhi offers R2R, an all-in-one RAG (Retrieval-Augmented Generation) solution that enables developers to build and scale AI applications with advanced features including document management, hybrid vector search, and knowledge graphs, while providing superior ingestion performance compared to competitors.
- pgAI is a PostgreSQL extension that enables AI capabilities directly in the database, featuring automated vector embedding creation, RAG implementation, semantic search, and LLM integration (OpenAI, Claude, Cohere, Llama) with support for high-performance vector operations through pgvector and pgvectorscale.
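All of the knowledge engines above rest on the same retrieval primitive: embed documents and queries as vectors, then rank documents by similarity to the query. A stdlib-only sketch with toy three-dimensional "embeddings" (real systems use learned embeddings with hundreds or thousands of dimensions, and approximate indexes rather than a full scan):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy corpus: document title -> hand-made embedding vector.
DOCS = {
    "postgres tuning guide": [0.9, 0.1, 0.0],
    "pasta recipes": [0.0, 0.2, 0.9],
    "vector index internals": [0.8, 0.3, 0.1],
}

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]
```

In a RAG pipeline, the retrieved documents are then inserted into the LLM prompt as context; vector databases like pgvector expose the same ranking as a SQL distance operator over an index.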
AI models optimized for software development
INTELLIGENCE LAYER - Specialized Coding Models
- Codestral is Mistral AI's specialized 22B code generation model supporting 80+ programming languages, featuring a 32k context window, fill-in-the-middle capabilities, and state-of-the-art performance on coding benchmarks, available through API endpoints and IDE integrations.
- Claude 3.5 Sonnet is Anthropic's frontier AI model offering state-of-the-art performance in reasoning, coding, and vision tasks, featuring a 200K token context window, computer use capabilities, and enhanced safety measures, available through multiple platforms including Claude.ai and major cloud providers.
- Qwen2.5-Coder is a specialized code-focused model matching GPT-4's coding capabilities, featuring 32B parameters, a 128K token context window, support for 80+ programming languages, and state-of-the-art performance on coding benchmarks, available as an open-source Apache 2.0 licensed model.
- Poolside Malibu is an enterprise-focused code generation model trained using Reinforcement Learning from Code Execution Feedback (RLCEF), featuring 100K token context, custom fine-tuning capabilities, and deep integration with development environments, available through Amazon Bedrock for secure deployment.
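Fill-in-the-middle (FIM), mentioned for Codestral above, means the model receives the code before and after the cursor and generates the missing span. The prompt layout can be sketched as below — the sentinel tokens `<PRE>`, `<SUF>`, and `<MID>` are hypothetical placeholders, since each model defines its own special tokens (check the model card or API docs for the real ones):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange code around hypothetical FIM sentinel tokens.

    <PRE>/<SUF>/<MID> are illustrative stand-ins, not any model's real tokens.
    The model is expected to generate the text that belongs at <MID>.
    """
    return f"<PRE>{prefix}<SUF>{suffix}<MID>"

prompt = build_fim_prompt(
    "def add(a, b):\n    return ",   # code before the cursor
    "\n\nprint(add(1, 2))",          # code after the cursor
)
```

Editor integrations build this prompt on every keystroke, which is why FIM-capable models pair naturally with IDE autocomplete.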
Development environments for sandboxing and building AI applications
INFRASTRUCTURE LAYER - AI Workspaces
- Daytona.io is an open-source Development Environment Manager designed to simplify the setup and management of development environments across various platforms, including local, remote, and cloud infrastructures.
- Runloop provides a secure, high-performance infrastructure platform that enables developers to build, scale, and deploy AI-powered coding solutions with seamless integration and real-time monitoring capabilities.
- E2B provides an open-source runtime platform that enables developers to securely execute AI-generated code in cloud sandboxes, supporting multiple languages and frameworks for AI-powered development use cases.
- Morph Labs provides infrastructure for developing and deploying autonomous software engineers at scale, offering Infinibranch for Morph Cloud and focusing on advanced infrastructure for AI-powered development, backed by partnerships with Together AI, Nomic AI, and other leading AI companies.
Services for deploying and running AI models
INFRASTRUCTURE LAYER - Model Access & Deployment
- OpenAI develops advanced artificial intelligence systems like ChatGPT, GPT-4, and Sora, focusing on creating safe AGI that benefits humanity through products spanning language models, image generation, and video creation while maintaining leadership in AI research and safety.
- Anthropic provides frontier AI models through the Claude family, emphasizing safety and reliability, with offerings including Claude 3.5 Sonnet and Haiku. Their models feature advanced capabilities in reasoning, coding, and computer use, while maintaining strong safety standards through Constitutional AI and comprehensive testing.
- Mistral AI provides frontier AI models with an emphasis on openness and portability, offering both open-weight models (Mistral 7B, Mixtral 8x7B) and commercial models (Mistral Large 2), available through multiple deployment options including serverless APIs, cloud services, and on-premise deployment.
- Groq provides ultra-fast AI inference infrastructure for openly-available models like Llama 3.1, Mixtral, and Gemma, offering OpenAI-compatible API endpoints with industry-leading speed and simple three-line integration for existing applications.
- AI21 Labs delivers enterprise-grade generative AI solutions through its Jamba foundation model and RAG engine, enabling organizations to build secure, production-ready AI applications with flexible deployment options and dedicated integration support.
- Cohere provides an enterprise AI platform featuring advanced language models, embedding, and retrieval capabilities that enables businesses to build production-ready AI applications with flexible deployment options across cloud or on-premises environments.
- Hugging Face provides fully managed inference infrastructure for ML models with support for multiple hardware options (CPU, GPU, TPU) across various cloud providers, offering autoscaling and dedicated deployments with enterprise-grade security.
- Cartesia AI delivers real-time multimodal intelligence through state space models that enable fast, private, and offline inference capabilities across devices, offering streaming-first solutions with constant memory usage and low latency.
- Provides easy access to open-source language models through a simple API, similar to offerings from closed-source providers.
- Offers an API for accessing and running open-source LLMs, facilitating seamless integration into AI applications.
- End-to-end platform for deploying and managing AI models, including LLMs, with integrated tools for monitoring, versioning, and scaling.
- Amazon Bedrock is a fully managed service that provides access to leading foundation models through a unified API, featuring customization capabilities through fine-tuning and RAG, managed AI agents for workflow automation, and enterprise-grade security with HIPAA and GDPR compliance.
- Serverless platform for running machine learning models, allowing developers to deploy and scale models without managing infrastructure.
- SambaNova provides custom AI infrastructure featuring their SN40L Reconfigurable Dataflow Unit (RDU), offering world-record inference speeds for large language models, with integrated fine-tuning capabilities and enterprise-grade security, delivered through both cloud and on-premises solutions.
- BentoML provides an open-source unified inference platform that enables organizations to build, deploy, and scale AI systems across any cloud with high performance and flexibility, while offering enterprise features like auto-scaling, rapid iteration, and SOC 2 compliance.
- OpenRouter provides a unified OpenAI-compatible API for accessing 282+ models across multiple providers, offering standardized access, provider routing, and model rankings, with support for multiple SDKs and framework integrations.
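Several providers above (Groq, OpenRouter, LiteLLM among them) expose OpenAI-compatible chat-completions endpoints, so switching providers mostly means changing the base URL. A stdlib-only sketch that prepares such a request without sending it — the base URL and model name follow OpenRouter's documented pattern but should be verified against each provider's docs, and the API key is a placeholder:

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, messages):
    """Prepare an OpenAI-style chat-completions POST (built, not sent)."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "https://openrouter.ai/api/v1",  # swap the base_url to change providers
    "sk-placeholder",                 # hypothetical key, never hard-code real ones
    "meta-llama/llama-3.1-8b-instruct",
    [{"role": "user", "content": "Hello"}],
)
```

Sending it is a `urllib.request.urlopen(req)` call; in practice most teams use the official `openai` SDK with a custom `base_url`, which does exactly this under the hood.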
Computing infrastructure that powers AI systems and their workspaces
INFRASTRUCTURE LAYER - Cloud Providers
- Koyeb provides a high-performance serverless platform specifically optimized for AI workloads, offering GPU/NPU infrastructure, global deployment across 50+ locations, and seamless scaling capabilities for ML model inference and training with built-in observability.
- Hyperbolic provides a decentralized GPU marketplace for AI compute and inference, offering up to 80% cost reduction compared to traditional providers, featuring high-throughput inference services, pay-as-you-go GPU access, and compute monetization capabilities with hardware-agnostic support.
- Prime Intellect provides a unified GPU marketplace aggregating multiple cloud providers, featuring competitive pricing for various GPUs (H100, A100, RTX series), decentralized training capabilities across distributed clusters, and tools for collaborative AI model development with a focus on open-source innovation.
- CoreWeave is an AI-focused cloud provider offering Kubernetes-native infrastructure optimized for GPU workloads, featuring 11+ NVIDIA GPU types, up to 35x faster performance and 80% cost reduction compared to traditional providers, with specialized solutions for ML/AI, VFX, and inference at scale.
- Nebius provides an AI-optimized cloud platform featuring the latest NVIDIA GPUs (H200, H100, L40S) with InfiniBand networking, offering managed Kubernetes and Slurm clusters, MLflow integration, and specialized infrastructure for AI training, fine-tuning, and inference workloads.
Please read the contribution guidelines before submitting a pull request.
This project is licensed under the Apache 2.0 License; see the LICENSE file for details.