diff --git a/.gitignore b/.gitignore index 7bff7318..a07b5426 100644 --- a/.gitignore +++ b/.gitignore @@ -103,3 +103,7 @@ venv.bak/ # mypy .mypy_cache/ +# IDE related +.idea/ +.DS_Store + diff --git a/.readthedocs.yaml b/.readthedocs.yaml new file mode 100644 index 00000000..c1543b64 --- /dev/null +++ b/.readthedocs.yaml @@ -0,0 +1,5 @@ +version: 2 + +python: + install: + - requirements: docs/requirements.txt diff --git a/README.md b/README.md index afc6648d..5dba6adf 100644 --- a/README.md +++ b/README.md @@ -1,70 +1,65 @@ -# Aneurysm workflow +## VaMPy - Vascular Modeling Pypeline [![Build Status](https://travis-ci.com/KVSlab/Aneurysm_workflow.svg?token=qbve9tcy6am6sUJksBcu&branch=master)](https://travis-ci.com/KVSlab/Aneurysm_workflow) -[![codecov](https://codecov.io/gh/KVSlab/Aneurysm_workflow/branch/master/graph/badge.svg?token=M2NMX6HOSZ)](https://codecov.io/gh/KVSlab/Aneurysm_workflow) +[![codecov](https://codecov.io/gh/KVSlab/VaMPy/branch/master/graph/badge.svg?token=M2NMX6HOSZ)](https://codecov.io/gh/KVSlab/VaMPy) +[![Documentation Status](https://readthedocs.org/projects/vampy/badge/?version=latest)](https://vampy.readthedocs.io/en/latest/?badge=latest)

- Output pre processing + Output pre processing

- Meshed aneurysm model showing inlet flow rate, outlet flow split, and probes. + Meshed and processed artery model. A variable-density volumetric mesh (left) is shown with the corresponding inlet flow rate, outlet flow splits, and probes for velocity and pressure sampling (middle), all generated by the pre-processing tools. Following the fluid simulation, the temporal wall shear stress gradient (right) is one of the hemodynamic indices computed by the post-processing scripts.

-## Description -Aneurysm workflow is a collection of scripts to run an aneurysm problem with [Oasis](https://github.com/mikaem/Oasis). There are also scripts for a variety of post-processing; WSS-based metrics, more advanced turbulence metrics, and a variety of morphological parameters. The latter is implemented through automated neck plane detection, but are not adapted to the `Aneurysm_workflow` pipeline and are here merely for convenience. - -## Authors -These scripts was written by -- Aslak Wigdahl Bergersen -- Christophe Chnafa -- Henrik A. Kjeldsberg - -## Installation -You can choose how to install the dependencies, but the fastest way to get started is to first install anaconda or miniconda on your computer. Then create two environments, one for `vmtk/vtk` and one for `fenics` by executing the following in a terminal: -``` -conda create -n vtk -c vmtk python=3.6 itk vtk vmtk paramiko -conda create -n fenics -c conda-forge fenics -``` - -You might run into a problem with vmtk (1.4) if using python 3. To fix this, please [follow these instructions](https://morphman.readthedocs.io/en/latest/installation.html#basic-installation) for fixing the centerline problem. For fixing mesh writing change line 263 of vmtkmeshwriter.py (using the same path as described in the link) to: -``` -file = open(self.OutputFileName, 'rb') -```` -Alternatively, run the script `apply_vmtk_hotfixes.py`, which will automatically apply the required changes. -Please note that these changes are fixed in the development version of vmtk, but a new version has not been released in a long time. - -Now, you need to install [`Oasis`](https://github.com/mikaem/Oasis). You can do so with the following commands: -``` -conda activate fenics -cd [path_to_your_installation_folder] -git clone https://github.com/mikaem/Oasis -cd Oasis -pip install . && pip install cppimport # add "--user" if you are on a cluster, or "-e" if you are changing the Oasis source code -``` - -Now, all that is left is to clone the `Aneurysm_workflow` repository: -``` -git clone https://github.com/KVSLab/Aneurysm_workflow.git -cd Aneurysm_workflow -``` - -## Usage -First, use the automatedPreProcessing to create a mesh, boundary conditions, and probes for sampling. - -``` -conda deactivate && conda activate vtk -python automatedPreProcessing/automatedPreProcessing.py -m diameter -i test/Case_test_artery/artery.vtp --aneurysm False -c 1.3 -``` - -Then run a CFD simulation for two cycles with 10 000 time steps per cycle and default parameters with Oasis: -``` -conda deactivate && conda activate fenics && cd simulation -oasis NSfracStep problem=Artery mesh_path=../test/Case_test_artery/artery.xml.gz && cd .. -``` - -Finally, you can create the WSS from the CFD simulation: -``` -python postprocessing/compute_wss.py --case path_to_results/data/[run_number]/Solutions -``` - -You can also compute flow related metrics using `compute_flow_metrics.py`, but you would need to adapt how the files are read in to match with `compute_wss.py`. -To visualize velocity and pressure at the probes created by `Artery.py`, you can run the `visualize_probes.py` script, which has an additional dependency to [`Matplotlib`](https://github.com/matplotlib/matplotlib). +Description +----------- +The Vascular Modeling Pypeline (VaMPy) is a collection of scripts used to prepare, run, and analyze cardiac and atrial morphologies. 
This includes pre-processing scripts for meshing and probe sampling, an [Oasis](https://github.com/mikaem/Oasis) problem file for simulating flow in the [internal carotid artery](https://en.wikipedia.org/wiki/Internal_carotid_artery), and a variety of post-processing scripts for computing WSS-based metrics, more advanced turbulence metrics, and a variety of morphological parameters in patient-specific geometries. + + +Authors +------- +VaMPy has been developed by + +* Aslak Wigdahl Bergersen +* Christophe Chnafa +* Henrik A. Kjeldsberg + +Licence +------- +VaMPy is licensed under the GNU GPL, version 3 or (at your option) any +later version. + +VaMPy is Copyright (2018-2021) by the authors. + +Documentation +------------- +For detailed installation notes and an introduction to VaMPy, please refer to the [documentation](https://vampy.readthedocs.io/en/latest/). + +Installation +------------ +For reference, VaMPy requires the following dependencies: VTK > 8.1, Numpy <= 1.13, SciPy > 1.0.0, VMTK 1.4, ITK, Paramiko, and FEniCS. +If you are on Windows, macOS, or Linux, you can install all the general dependencies through [Anaconda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html). +First install Anaconda or Miniconda (preferably the Python 3.6 version). +Then create two environments, one for `VMTK` and one for `FEniCS`, by executing the following in a terminal + + conda create -n vmtk -c vmtk python=3.6 itk vtk vmtk paramiko + conda create -n fenics -c conda-forge fenics + +You might run into a problem with VMTK 1.4 if you are using Python 3, or if you are a Linux user, and we have therefore provided a set of temporary fixes for these known issues [here](https://vampy.readthedocs.io/en/latest/installation.html#known-issues). + +Next, you need to install [`Oasis`](https://github.com/mikaem/Oasis). You can do so with the following commands: + + conda activate fenics + git clone https://github.com/mikaem/Oasis + cd Oasis + pip install . 
&& pip install cppimport + +Finally, you are ready to clone and use the `VaMPy` repository: + + git clone https://github.com/KVSLab/VaMPy.git + cd VaMPy + +Issues +------ +Please report bugs and other issues through the issue tracker at: + +https://github.com/KVSlab/VaMPy/issues diff --git a/automatedPostProcessing/compute_flow_and_simulation_metrics.py b/automatedPostProcessing/compute_flow_and_simulation_metrics.py index 0a39454a..b4dabced 100755 --- a/automatedPostProcessing/compute_flow_and_simulation_metrics.py +++ b/automatedPostProcessing/compute_flow_and_simulation_metrics.py @@ -6,7 +6,10 @@ from postprocessing_common import read_command_line, epsilon -set_log_active(False) +try: + set_log_active(False) +except NameError: + pass def compute_flow_and_simulation_metrics(folder, nu, dt, velocity_degree): diff --git a/automatedPostProcessing/compute_hemodynamic_indices.py b/automatedPostProcessing/compute_hemodynamic_indices.py index d5d167c8..b5167fa6 100644 --- a/automatedPostProcessing/compute_hemodynamic_indices.py +++ b/automatedPostProcessing/compute_hemodynamic_indices.py @@ -6,7 +6,10 @@ from postprocessing_common import STRESS, read_command_line -parameters["reorder_dofs_serial"] = False +try: + parameters["reorder_dofs_serial"] = False +except NameError: + pass def compute_hemodynamic_indices(case_path, nu, dt, velocity_degree): diff --git a/automatedPostProcessing/postprocessing_common.py b/automatedPostProcessing/postprocessing_common.py index f42dda1f..c99b2947 100644 --- a/automatedPostProcessing/postprocessing_common.py +++ b/automatedPostProcessing/postprocessing_common.py @@ -2,8 +2,10 @@ try: from dolfin import * - - parameters["allow_extrapolation"] = True + try: + parameters["allow_extrapolation"] = True + except NameError: + pass except ImportError: pass diff --git a/automatedPostProcessing/visualize_probes.py b/automatedPostProcessing/visualize_probes.py index edd88f01..d3d6aa57 100644 --- a/automatedPostProcessing/visualize_probes.py +++ b/automatedPostProcessing/visualize_probes.py @@ -7,10 +7,6 @@ from postprocessing_common import read_command_line -# Plotting parameters -plt.rcParams["figure.figsize"] = [20, 10] -plt.rcParams["figure.autolayout"] = True - def visualize_probes(case_path, dt, no_of_cycles, probe_saving_frequency=100, cardiac_cycle=951, show_figure=True, save_figure=False): diff --git a/docs/Makefile b/docs/Makefile new file mode 100644 index 00000000..d0c3cbf1 --- /dev/null +++ b/docs/Makefile @@ -0,0 +1,20 @@ +# Minimal makefile for Sphinx documentation +# + +# You can set these variables from the command line, and also +# from the environment for the first two. +SPHINXOPTS ?= +SPHINXBUILD ?= sphinx-build +SOURCEDIR = source +BUILDDIR = build + +# Put it first so that "make" without argument is like "make help". +help: + @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) + +.PHONY: help Makefile + +# Catch-all target: route all unknown targets to Sphinx using the new +# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). +%: Makefile + @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) diff --git a/docs/make.bat b/docs/make.bat new file mode 100644 index 00000000..6fcf05b4 --- /dev/null +++ b/docs/make.bat @@ -0,0 +1,35 @@ +@ECHO OFF + +pushd %~dp0 + +REM Command file for Sphinx documentation + +if "%SPHINXBUILD%" == "" ( + set SPHINXBUILD=sphinx-build +) +set SOURCEDIR=source +set BUILDDIR=build + +if "%1" == "" goto help + +%SPHINXBUILD% >NUL 2>NUL +if errorlevel 9009 ( + echo. 
+ echo.The 'sphinx-build' command was not found. Make sure you have Sphinx + echo.installed, then set the SPHINXBUILD environment variable to point + echo.to the full path of the 'sphinx-build' executable. Alternatively you + echo.may add the Sphinx directory to PATH. + echo. + echo.If you don't have Sphinx installed, grab it from + echo.https://www.sphinx-doc.org/ + exit /b 1 +) + +%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% +goto end + +:help +%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% + +:end +popd diff --git a/docs/requirements.txt b/docs/requirements.txt new file mode 100644 index 00000000..c896d4df --- /dev/null +++ b/docs/requirements.txt @@ -0,0 +1 @@ +docutils<0.18 \ No newline at end of file diff --git a/docs/source/conf.py b/docs/source/conf.py new file mode 100644 index 00000000..9cd25ec4 --- /dev/null +++ b/docs/source/conf.py @@ -0,0 +1,115 @@ +# -*- coding: utf-8 -*- +# +# Configuration file for the Sphinx documentation builder. +# +# This file only contains a selection of the most common options. For a full +# list see the documentation: +# https://www.sphinx-doc.org/en/master/usage/configuration.html + +# -- Path setup -------------------------------------------------------------- + +# If extensions (or modules to document with autodoc) are in another directory, +# add these directories to sys.path here. If the directory is relative to the +# documentation root, use os.path.abspath to make it absolute, like shown here. +# + +import os +import sys + +# If extensions (or modules to document with autodoc) are in another directory, +# add these directories to sys.path here. If the directory is relative to the +# documentation root, use os.path.abspath to make it absolute, like shown here. +workflow_root = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "..") + +sys.path.insert(0, workflow_root) +sys.path.insert(0, os.path.join(workflow_root, "automatedPreProcessing")) +sys.path.insert(0, os.path.join(workflow_root, "automatedPostProcessing")) + +# -- Project information ----------------------------------------------------- + +project = 'Vascular Modeling Pypeline' +copyright = '2021, Aslak W. Bergersen & Christophe Chnafa & Henrik A. Kjeldsberg' +author = 'Aslak W. Bergersen & Christophe Chnafa & Henrik A. Kjeldsberg' + +# The short X.Y version. +version = '0.1' +# The full version, including alpha/beta/rc tags +release = '0.1' + +# -- General configuration --------------------------------------------------- + +# Add any Sphinx extension module names here, as strings. They can be +# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom +# ones. +extensions = [ + 'sphinx.ext.autodoc', + 'sphinx.ext.doctest', + 'sphinx.ext.intersphinx', + 'sphinx.ext.todo', + 'sphinx.ext.coverage', + 'sphinx.ext.mathjax', + 'sphinx.ext.ifconfig', + 'sphinx.ext.viewcode', + 'sphinx.ext.napoleon', + 'sphinx.ext.githubpages' +] + +# Add any paths that contain templates here, relative to this directory. +# templates_path = ['_templates'] + +# The suffix(es) of source filenames. +# You can specify multiple suffix as a list of string: +# source_suffix = ['.rst', '.md'] +source_suffix = '.rst' + +# The master toctree document. +master_doc = 'index' + +# List of patterns, relative to source directory, +# that match files and +# directories to ignore when looking for source files. +# This pattern also affects html_static_path and html_extra_path. 
+exclude_patterns = [] + +autodoc_mock_imports = ["numpy", "scipy", "vtk", "vmtk", "morphman", "paramiko", "dolfin", "matplotlib"] + +# -- Options for HTML output ------------------------------------------------- + +# The name of the Pygments (syntax highlighting) style to use. +pygments_style = 'sphinx' + +# If true, `todo` and `todoList` produce output, else they produce nothing. +todo_include_todos = True + +# The theme to use for HTML and HTML Help pages. See the documentation for +# a list of builtin themes. +# +html_theme = 'sphinx_rtd_theme' + +# Output file base name for HTML help builder. +htmlhelp_basename = 'workflow_docs_help' + +# Add any paths that contain custom static files (such as style sheets) here, +# relative to this directory. They are copied after the builtin static files, +# so a file named "default.css" will overwrite the builtin "default.css". +# html_static_path = ['_static'] + +# -- Options for LATEX output ------------------------------------------------ +# latex_engine = 'pdflatex' +# +# latex_elements = {} +# + +# -- Options for Epub output ---------------------------------------------- + +# Bibliographic Dublin Core info. +epub_title = project +epub_author = author +epub_publisher = author +epub_copyright = copyright + +# A list of files that should not be packed into the epub file. +epub_exclude_files = ['search.html'] + +# Example configuration for intersphinx: refer to the Python standard library. +intersphinx_mapping = {'https://docs.python.org/': None} diff --git a/docs/source/getting_started.rst b/docs/source/getting_started.rst new file mode 100644 index 00000000..8a608b8c --- /dev/null +++ b/docs/source/getting_started.rst @@ -0,0 +1,59 @@ +.. title:: Using VaMPy + +.. _getting_started: + +==================================== +Using the Vascular Modeling Pypeline +==================================== +The Vascular Modeling Pypeline is a collection of scripts to prepare, run, and analyze vascular morphologies. This includes pre-processing scripts for generating a volumetric mesh, defining physiological boundary conditions, and inserting probes for sampling velocity and pressure. For the computational fluid dynamics (CFD) simulation, we have included an artery problem file used for running the simulation with `Oasis <https://github.com/mikaem/Oasis>`_. +Finally, there are a variety of post-processing scripts, which compute wall shear stress-based metrics, more advanced turbulence metrics, and a variety of morphological parameters. In this walkthrough, we exemplify the usage by preparing, simulating, and post-processing the `internal carotid artery <https://en.wikipedia.org/wiki/Internal_carotid_artery>`_, although the software may be readily used for other tubular or vascular shapes. + +Pre-processing: Meshing and boundary conditions +=============================================== +The first step of using the Vascular Modeling Pypeline is pre-processing. The pre-processing scripts are located inside the ``automatedPreProcessing`` folder, and we will be executing the ``automatedPreProcessing.py`` script to generate a mesh, boundary conditions, and probes for velocity and pressure sampling. Here we will perform pre-processing for the artery case located in the ``test`` folder. 
Start by entering the ``vmtk`` conda environment:: + + conda deactivate && conda activate vmtk + +Then, to perform meshing, execute the following command:: + + python automatedPreProcessing/automatedPreProcessing.py -m diameter -i test/Case_test_artery/artery.vtp -c 1.3 + +When complete, the script will save the volumetric mesh as ``artery.vtu``, alongside a compressed DOLFIN mesh in ``artery.xml.gz``, used for the following simulations. +The pre-processing script will also produce an info file and a probe file, named ``artery_info.json`` and ``artery_probes``, respectively. + +Computational fluid dynamics simulations in Oasis +================================================= +The next step of using the Vascular Modeling Pypeline is performing the CFD simulations with `Oasis`. +First, activate the ``fenics`` conda environment:: + + conda deactivate && conda activate fenics && cd simulation + +Then, to run a CFD simulation for two cycles with 10 000 time steps per cycle and default parameters with Oasis, execute the following command:: + + oasis NSfracStep problem=Artery mesh_path=../test/Case_test_artery/artery.xml.gz && cd .. + +Running the simulations will create the result folder ``results_artery`` (specific to the artery problem), with the results and corresponding mesh saved compactly in HDF5 format. + +Post-processing: Hemodynamic indices, flow and simulation metrics, and probes +============================================================================= +Following the CFD simulations, the final step of the Vascular Modeling Pypeline is post-processing. +You can start by computing the wall shear stress, oscillatory shear index, and other hemodynamic indices by executing the following command:: + + python automatedPostProcessing/compute_hemodynamic_indices.py --case results_artery/data/[run_number]/Solutions + +To compute fluid dynamic quantities and simulation metrics, you may execute the following command:: + + python automatedPostProcessing/compute_flow_and_simulation_metrics.py --case results_artery/data/[run_number]/Solutions + +Finally, to visualize velocity and pressure at the probes created by ``Artery.py``, you can run the ``visualize_probes.py`` script by executing the following command:: + + python automatedPostProcessing/visualize_probes.py --case results_artery/data/[run_number]/Solutions + +Note that this has an additional dependency on `Matplotlib <https://github.com/matplotlib/matplotlib>`_, which can quickly be installed with either `conda` or `pip`. + +Features and issues +=================== +The existing methods provide many degrees of freedom; however, if you need a specific method or functionality, please do not hesitate to propose enhancements in the `issue tracker <https://github.com/KVSlab/VaMPy/issues>`_, or create a `pull request <https://github.com/KVSlab/VaMPy/pulls>`_ with new features. +Similarly, we highly appreciate it if you report any bugs or other issues you may experience in the `issue tracker <https://github.com/KVSlab/VaMPy/issues>`_. + diff --git a/docs/source/index.rst b/docs/source/index.rst new file mode 100644 index 00000000..529e1b5a --- /dev/null +++ b/docs/source/index.rst @@ -0,0 +1,15 @@ +.. title:: Vascular Modeling Pypeline + +========================== +Vascular Modeling Pypeline +========================== + +Documentation for the `Vascular Modeling Pypeline <https://github.com/KVSlab/VaMPy>`_. + +.. toctree:: + :maxdepth: 2 + + installation + getting_started + scripts + diff --git a/docs/source/installation.rst b/docs/source/installation.rst new file mode 100644 index 00000000..424c7e06 --- /dev/null +++ b/docs/source/installation.rst @@ -0,0 +1,104 @@ +.. 
title:: Installation + +============ +Installation +============ +The Vascular Modeling Pypeline (VaMPy) is a collection of scripts to run an aneurysm problem with `Oasis <https://github.com/mikaem/Oasis>`_. There are also scripts for a variety of post-processing: WSS-based metrics, more advanced turbulence metrics, and a variety of morphological parameters. The project is accessible through +`GitHub <https://github.com/KVSlab/VaMPy>`_. + + +Dependencies +============ +The general dependencies of VaMPy are + +* VMTK 1.4.0 +* VTK 8.1.0 +* Numpy <= 1.13 +* SciPy 1.1.0 +* Paramiko +* Python (2.7 or >=3.5) + +Basic Installation +================== +You can choose how to install the dependencies, but the fastest way to get started is to install the dependencies through Anaconda. +First, install Anaconda or Miniconda (preferably the Python 3.6 version) on your computer. +Then create two environments, one for `VMTK <http://www.vmtk.org/>`_ and one for `FEniCS <https://fenicsproject.org/>`_ by executing the following in a terminal window:: + + conda create -n vmtk -c vmtk python=3.6 itk vtk vmtk paramiko + conda create -n fenics -c conda-forge fenics + +You can then activate your environment by running ``source activate [ENVIRONMENT NAME]``. +Windows users may need to install FEniCS as described `here `_. + +The next step is to install `Oasis <https://github.com/mikaem/Oasis>`_. +You can do so with the following commands:: + + conda activate fenics + cd [path_to_your_installation_folder] + git clone https://github.com/mikaem/Oasis + cd Oasis + pip install cppimport && pip install .  + +Now, all that is left is to clone the `VaMPy` repository:: + + git clone https://github.com/KVSLab/VaMPy.git + cd VaMPy + +Now you are all set and can start using the Vascular Modeling Pypeline. + +Known issues +============ + +.. WARNING:: `VMTK` version 1.4, the one currently distributed with Anaconda, has Python 3 bugs in ``vmtkcenterlines.py``, ``vmtksurfacecurvature.py``, and ``vmtkmeshwriter.py``. As a workaround you have to change these files. We have provided the script ``apply_vmtk_hotfixes.py``, which will automatically edit the files for you, provided you enter your local username, Anaconda version, and environment name. The script may be executed with the following command:: + + python apply_vmtk_hotfixes.py + + Alternatively, you may edit the files manually. To find out where they are located, you can use the ``which`` command while in the ``vmtk`` environment. For `vmtkcenterlines` you can use the following command:: + + which vmtkcenterlines + + Now copy the path up until ``vmtk`` and add ``lib/python3.6/site-packages/vmtk/vmtkcenterlines.py``. + Please change the path separation symbol to match your operating system and change ``python3.6`` to the Python version you are using. If you are using Miniconda, replace `anaconda3` with `miniconda3`. 
+ Using this path, you can run the following two lines:: + + sed -i -e 's/len(self.SourcePoints)\/3/len\(self.SourcePoints\)\/\/3/g' /Users/[Username]/anaconda3/envs/vmtk/lib/python3.6/site-packages/vmtk/vmtkcenterlines.py + sed -i -e 's/len(self.TargetPoints)\/3/len\(self.TargetPoints\)\/\/3/g' /Users/[Username]/anaconda3/envs/vmtk/lib/python3.6/site-packages/vmtk/vmtkcenterlines.py + + Similarly, for `vmtksurfacecurvature.py`, run the following command:: + + sed -i -e 's/(len(values) - 1)\/2/\(len\(values\) - 1\)\/\/2/g' /Users/[Username]/anaconda3/envs/vmtk/lib/python3.6/site-packages/vmtk/vmtksurfacecurvature.py + + Finally, to fix the issue in `vmtkmeshwriter.py`, run the following command:: + + sed -i -e -r "s/file = open\(self\.OutputFileName, ?\'r\'\)/file = open\(self\.OutputFileName, \'rb\'\)/g" /Users/[Username]/anaconda3/envs/vmtk/lib/python3.6/site-packages/vmtk/vmtkmeshwriter.py + + Please note that these changes are fixed in the development version of `VMTK`, but a new version has not been released in a while. + + +.. WARNING:: Some Linux users may experience the following Python compatibility issue:: + + ModuleNotFoundError: No module named 'vtkRenderingOpenGL2Python' + + To fix this issue, a temporary solution is to install the ``llvm`` library directly in the virtual environment, using the following commands:: + + conda config --set restore_free_channel true + conda install llvm=3.3 + +.. WARNING:: Some Linux users may experience the following issue:: + + ERROR: In ../Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 797 + + To fix this issue, a temporary solution is to install `VTK` version `8.1.0` directly from the Anaconda channel. Assuming you have already tried to install `VTK` and `VMTK` and have the ``vmtk`` channel active, proceed with the following instructions. + + Remove `VMTK` 1.4 and `VTK` 8.1 that are installed from the `VMTK` Anaconda channel:: + + conda uninstall vmtk vtk + + Install `VTK` 8.1.0 from the official Anaconda channel:: + + conda install -c anaconda vtk=8.1.0 + + Finally, install `VMTK` again:: + + conda install -c vmtk vmtk + diff --git a/docs/source/scripts.rst b/docs/source/scripts.rst new file mode 100644 index 00000000..f59ef541 --- /dev/null +++ b/docs/source/scripts.rst @@ -0,0 +1,119 @@ +.. title:: API documentation + +.. _api_documentation: + +================= +API documentation +================= + +Pre processing tools +==================== + +automatedPreProcessing.py +------------------------- + +.. automodule:: automatedPreProcessing + :members: + :undoc-members: + :show-inheritance: + +common.py +--------- + +.. automodule:: common + :members: + :undoc-members: + :show-inheritance: + +DisplayData.py +-------------- + +.. automodule:: DisplayData + :members: + :undoc-members: + :show-inheritance: + +ImportData.py +------------- + +.. automodule:: ImportData + :members: + :undoc-members: + :show-inheritance: + +NetworkBoundaryConditions.py +---------------------------- + +.. automodule:: NetworkBoundaryConditions + :members: + :undoc-members: + :show-inheritance: + +simulate.py +----------- + +.. automodule:: simulate + :members: + :undoc-members: + :show-inheritance: + +ToolRepairSTL.py +---------------- + +.. automodule:: ToolRepairSTL + :members: + :undoc-members: + :show-inheritance: + +visualize.py +------------ + +.. automodule:: visualize + :members: + :undoc-members: + :show-inheritance: + +vmtkpointselector.py +-------------------- + +.. 
automodule:: vmtkpointselector + :members: + :undoc-members: + :show-inheritance: + + + +Post processing tools +===================== + +compute_flow_and_simulation_metrics.py +-------------------------------------- + +.. automodule:: compute_flow_and_simulation_metrics + :members: + :undoc-members: + :show-inheritance: + +compute_hemodynamic_indices.py +------------------------------ + +.. automodule:: compute_hemodynamic_indices + :members: + :undoc-members: + :show-inheritance: + +postprocessing_common.py +------------------------ + +.. automodule:: postprocessing_common + :members: + :undoc-members: + :show-inheritance: + +visualize_probes.py +------------------- + +.. automodule:: visualize_probes + :members: + :undoc-members: + :show-inheritance: diff --git a/test/processed_model.png b/test/processed_model.png index fa623e44..82cfbb5e 100644 Binary files a/test/processed_model.png and b/test/processed_model.png differ
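A note on the `try`/`except NameError` guards introduced in `compute_flow_and_simulation_metrics.py`, `compute_hemodynamic_indices.py`, and `postprocessing_common.py`: `postprocessing_common.py` already wraps `from dolfin import *` in `try`/`except ImportError`, and the new guards additionally cover the case where `dolfin` is importable but does not define the expected globals, presumably when the module is mocked for the documentation build (the new `docs/source/conf.py` lists `dolfin` in `autodoc_mock_imports`). The sketch below is illustrative only and not part of the diff; it reproduces the pattern in isolation, using an empty stand-in module to play the role of a mocked `dolfin`.

```python
# Illustrative sketch (not part of the diff) of the guard pattern used in the
# post-processing modules. An empty stand-in "dolfin" module mimics a mocked
# import: the star-import succeeds but binds no names, so touching
# `parameters` or `set_log_active` raises NameError rather than ImportError.
import sys
import types

sys.modules.setdefault("dolfin", types.ModuleType("dolfin"))  # hypothetical stand-in

try:
    from dolfin import *  # succeeds, but binds nothing for the empty stand-in
except ImportError:
    # FEniCS is not installed at all; the module should still be importable.
    pass

try:
    parameters["allow_extrapolation"] = True  # noqa: F821 - defined only by a real dolfin
    set_log_active(False)  # noqa: F821
except NameError:
    # dolfin was mocked or empty, so skip the FEniCS-specific configuration.
    print("dolfin globals unavailable - skipping FEniCS configuration")
```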
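The diff also removes the module-level `plt.rcParams` figure settings from `visualize_probes.py`. If those defaults are still wanted, one option is to apply them per figure inside the plotting code rather than at import time; the helper below is a hypothetical sketch of that idea (the function name `plot_probe_trace` and its body are not from the repository; only the figure size and layout values mirror the removed lines, and the `show_figure`/`save_figure` flags follow the signature in the diff).

```python
import matplotlib

matplotlib.use("Agg")  # headless backend so the sketch also runs without a display
import matplotlib.pyplot as plt


def plot_probe_trace(times, values, show_figure=True, save_figure=False):
    """Hypothetical helper: apply the old rcParams defaults per figure."""
    fig, ax = plt.subplots(figsize=(20, 10))  # mirrors figure.figsize = [20, 10]
    ax.plot(times, values)
    ax.set_xlabel("Time step")
    ax.set_ylabel("Probe value")
    fig.tight_layout()  # mirrors figure.autolayout = True
    if save_figure:
        fig.savefig("probe_trace.png")
    if show_figure:
        plt.show()


if __name__ == "__main__":
    plot_probe_trace([0, 1, 2, 3], [0.0, 1.0, 0.5, 0.8], show_figure=False)
```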