CORTEX was developed at NASA Jet Propulsion Laboratory (JPL) and is open-sourced under the Apache 2.0 License.
Development of CORTEX was funded internally by JPL/JNEXT as part of the Exobiology Extant Life Surveyor (EELS) project; the framework builds on the NEO Autonomy Framework (hence, NEO-CORTEX). EELS is a snake robot being developed to explore the subsurface oceans of Europa and Enceladus. We encourage you to use CORTEX in your own projects and to contribute by submitting issues and pull requests. See the References section for a list of relevant publications, documents, and projects.
CORTEX is a framework for accelerating robotics development through a combination of modern data infrastructure, test automation, and intelligent data analysis. The framework enables developers to rapidly prototype and test new algorithms and ideas with minimal effort. It also provides a set of tools for specifying and running experiments in a repeatable manner, and for collecting and analyzing data from those experiments. Finally, CORTEX provides facilities for single- and multi-device configuration management, logging, and monitoring, which are essential for managing and operating complex robotics systems.
CORTEX is installed as a Python library using the `setup.py` script:

```bash
./setup.py install

# If you get a permission denied error:
sudo ./setup.py install
```
After installing the CORTEX Python library, you can import the modules as follows:
```python
import cortex
from cortex.db import TemporalCRTX
from cortex.db.entities import *
# etc...
```
See `notebooks/guides` for examples on how to use the CORTEX library.
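As a brief illustration of the intended workflow, the sketch below opens a database connection and inserts a single record. This is a minimal sketch only: the constructor defaults, the `insert()` call, and the `Annotation` entity and its fields are assumptions made for illustration, so refer to `notebooks/guides` for the actual API.

```python
# Illustrative sketch only: the constructor defaults, the insert() call, and
# the Annotation entity/fields are assumptions; see notebooks/guides for the
# actual API.
from datetime import datetime

from cortex.db import TemporalCRTX
from cortex.db.entities import Annotation  # hypothetical choice of entity

db = TemporalCRTX()  # assumes defaults matching the provided Docker setup

# Record a simple experiment event (field names are placeholders).
db.insert([
    Annotation(
        msg_time=datetime.now(),
        robot="eels-1",
        label="experiment_start",
        message="Run started",
    )
])
```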
CORTEX relies on a database connection to store and retrieve data. We have provided a Docker setup that includes a Postgres database with TimescaleDB and a Grafana dashboard for visualizing the data. We will assume that you have Docker installed on your system; if not, you can download it from the Docker website. The following commands require the `docker compose` command to work properly.
```bash
./setup.py docker --start      # start the CORTEX services (Postgres and Grafana)
./setup.py docker --stop       # stop the CORTEX services
./setup.py docker --restart    # restart the CORTEX services
./setup.py docker --purge      # stop and remove the CORTEX services

./setup.py database --init     # populate the database with the necessary tables
./setup.py database --wipe     # clear the database, including locally mounted volumes
```
In most cases, the Docker containers will continue running in the background and start automatically when you restart your computer. You may also choose to connect CORTEX to your own instance of Postgres by modifying the `.env` file.
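If you do point CORTEX at your own Postgres instance, a quick connectivity check such as the sketch below can confirm that the database is reachable and that the TimescaleDB extension is installed. The connection parameters shown are placeholders; substitute the values from your `.env` file.

```python
# Placeholder connection parameters: replace with the values from your .env file.
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="postgres",
    user="postgres",
    password="postgres",
)
with conn, conn.cursor() as cur:
    # TimescaleDB should appear in the list of installed extensions.
    cur.execute("SELECT extname FROM pg_extension;")
    print([row[0] for row in cur.fetchall()])
conn.close()
```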
The following components of CORTEX can be configured:
- Docker containers, see docker-compose.yml and .env
- Database (PostgreSQL w/ TimescaleDB), tables, etc., see config/timescaledb/README.md
- Device Metrics (Telegraf), see config/telegraf/README.md
- Grafana (Dashboard), see config/grafana/README.md
- ROS Workers, see src/cortex/config/workers/README.md
CORTEX is intended to work with a wide variety of robots and configurations. It is designed to be as modular as possible, so that it can be easily adapted and integrated into new and existing systems. The following diagram shows the high-level architecture of the CORTEX data framework:
CORTEX Agents can be thought of as components responsible for performing specific tasks in a robotics system. They are typically implemented as Python scripts and can be configured using YAML files (where applicable).
CORTEX currently provides the following Agents:
- worker: responsible for listening to topics, applying preprocessors and transforms, and inserting data into the database. Note that the worker will typically subsample the data before inserting it in order to reduce the amount of data sent to the database (see the sketch after this list).
- monitor: responsible for collecting resource utilization metrics (CPU/Memory) from nodes and processes running on the system.
- annotator: responsible for recording events that occur during an experiment. This includes recording the start and end times of an experiment, as well as significant events such as state transitions, reaching a goal, crashing, or encountering an obstacle.
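Conceptually, a worker's subsampling step behaves like the sketch below: only every Nth message from a topic is transformed and handed to the database. This is a simplified illustration rather than the actual worker implementation; in practice the topics, preprocessors, transforms, and rates are configured through the worker YAML files.

```python
# Simplified illustration of worker-style subsampling; not the actual
# CORTEX worker implementation.
class SubsamplingListener:
    def __init__(self, rate, transform, sink):
        self.rate = rate            # keep 1 out of every `rate` messages
        self.transform = transform  # e.g. turn a ROS message into a DB entity
        self.sink = sink            # e.g. a TemporalCRTX instance
        self._count = 0

    def on_message(self, msg):
        self._count += 1
        if self._count % self.rate != 0:
            return  # drop this message to reduce database load
        self.sink.insert([self.transform(msg)])  # assumes a list-based insert API
```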
We have developed additional agents but have not added them to the open source repository yet. These agents include:
- ROSA (ROS Agent): an AI agent that uses LLMs to interface with ROS using natural language queries.
- orchestrator: manages the CORTEX system, including environment setup, configuration, and starting/stopping CORTEX services.
- sampler: collects data from sources that do not publish on open topics. This includes collecting data by performing service/action calls, or by reading data from files.
The following sections describe the various implementations of CORTEX (current and future).
While CORTEX agents are generally ROS-agnostic, we have developed a set of ROS nodes that can be used to interface with
the CORTEX framework. These nodes are implemented in Python and can be run on any system that has ROS installed.
Simply copy the `ros1/` package into your ROS workspace.
We are currently working on the ROS2 implementation. Please check back soon.