From 3355bacf9cecd6779eac1edea7a947cf20a86b92 Mon Sep 17 00:00:00 2001
From: James Reynolds
Date: Fri, 29 Nov 2024 18:34:32 -0700
Subject: [PATCH] Fix documentation broken links and remove whitespace at end of lines

---
 docs/contributing/ARCHITECTURE.md      | 12 ++++----
 docs/contributing/INVOCATIONS.md       |  4 +--
 docs/contributing/MODEL_MANAGER.md     | 28 +++++++++----------
 docs/contributing/TESTS.md             |  4 +--
 .../contribution_guides/development.md | 12 ++++----
 .../newContributorChecklist.md         | 12 ++++----
 docs/contributing/dev-environment.md   |  8 +++---
 docs/contributing/index.md             |  4 +--
 invokeai/frontend/web/README.md        |  2 +-
 9 files changed, 43 insertions(+), 43 deletions(-)

diff --git a/docs/contributing/ARCHITECTURE.md b/docs/contributing/ARCHITECTURE.md
index d74df94492c..f8d2e30166c 100644
--- a/docs/contributing/ARCHITECTURE.md
+++ b/docs/contributing/ARCHITECTURE.md
@@ -50,7 +50,7 @@ Applications are built on top of the invoke framework. They should construct `in
 
 ### Web UI
 
-The Web UI is built on top of an HTTP API built with [FastAPI](https://fastapi.tiangolo.com/) and [Socket.IO](https://socket.io/). The frontend code is found in `/frontend` and the backend code is found in `/ldm/invoke/app/api_app.py` and `/ldm/invoke/app/api/`. The code is further organized as such:
+The Web UI is built on top of an HTTP API built with [FastAPI](https://fastapi.tiangolo.com/) and [Socket.IO](https://socket.io/). The frontend code is found in `/invokeai/frontend` and the backend code is found in `/invokeai/app/api_app.py` and `/invokeai/app/api/`. The code is further organized as such:
 
 | Component | Description |
 | --- | --- |
@@ -62,7 +62,7 @@ The Web UI is built on top of an HTTP API built with [FastAPI](https://fastapi.t
 
 ### CLI
 
-The CLI is built automatically from invocation metadata, and also supports invocation piping and auto-linking. Code is available in `/ldm/invoke/app/cli_app.py`.
+The CLI is built automatically from invocation metadata, and also supports invocation piping and auto-linking. Code is available in `/invokeai/frontend/cli`.
 
 ## Invoke
 
@@ -70,7 +70,7 @@ The Invoke framework provides the interface to the underlying AI systems and is
 
 ### Invoker
 
-The invoker (`/ldm/invoke/app/services/invoker.py`) is the primary interface through which applications interact with the framework. Its primary purpose is to create, manage, and invoke sessions. It also maintains two sets of services:
+The invoker (`/invokeai/app/services/invoker.py`) is the primary interface through which applications interact with the framework. Its primary purpose is to create, manage, and invoke sessions. It also maintains two sets of services:
 
 - **invocation services**, which are used by invocations to interact with core functionality.
 - **invoker services**, which are used by the invoker to manage sessions and manage the invocation queue.
@@ -82,12 +82,12 @@ The session graph does not support looping. This is left as an application probl
 
 ### Invocations
 
-Invocations represent individual units of execution, with inputs and outputs. All invocations are located in `/ldm/invoke/app/invocations`, and are all automatically discovered and made available in the applications. These are the primary way to expose new functionality in Invoke.AI, and the [implementation guide](INVOCATIONS.md) explains how to add new invocations.
+Invocations represent individual units of execution, with inputs and outputs. All invocations are located in `/invokeai/app/invocations`, and are all automatically discovered and made available in the applications. These are the primary way to expose new functionality in Invoke.AI, and the [implementation guide](INVOCATIONS.md) explains how to add new invocations.
 
 ### Services
 
-Services provide invocations access AI Core functionality and other necessary functionality (e.g. image storage). These are available in `/ldm/invoke/app/services`. As a general rule, new services should provide an interface as an abstract base class, and may provide a lightweight local implementation by default in their module. The goal for all services should be to enable the usage of different implementations (e.g. using cloud storage for image storage), but should not load any module dependencies unless that implementation has been used (i.e. don't import anything that won't be used, especially if it's expensive to import).
+Services provide invocations access to AI Core functionality and other necessary functionality (e.g. image storage). These are available in `/invokeai/app/services`. As a general rule, new services should provide an interface as an abstract base class, and may provide a lightweight local implementation by default in their module. The goal for all services should be to enable the usage of different implementations (e.g. using cloud storage for image storage), but should not load any module dependencies unless that implementation has been used (i.e. don't import anything that won't be used, especially if it's expensive to import).
 
 ## AI Core
 
-The AI Core is represented by the rest of the code base (i.e. the code outside of `/ldm/invoke/app/`).
+The AI Core is represented by the rest of the code base (i.e. the code outside of `/invokeai/app/`).
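As an aside on the ARCHITECTURE.md hunks above: the service pattern they describe — an abstract base class plus a lightweight default implementation, with heavyweight dependencies imported only when a given implementation is actually selected — might look roughly like the sketch below. The class names and storage backends here are hypothetical illustrations, not the actual InvokeAI services API.

```python
from abc import ABC, abstractmethod
from pathlib import Path


class ImageStorageBase(ABC):
    """Hypothetical service interface: stores and retrieves generated images."""

    @abstractmethod
    def save(self, image_name: str, data: bytes) -> None: ...

    @abstractmethod
    def load(self, image_name: str) -> bytes: ...


class DiskImageStorage(ImageStorageBase):
    """Lightweight local implementation shipped alongside the interface."""

    def __init__(self, root: str) -> None:
        self._root = Path(root)
        self._root.mkdir(parents=True, exist_ok=True)

    def save(self, image_name: str, data: bytes) -> None:
        (self._root / image_name).write_bytes(data)

    def load(self, image_name: str) -> bytes:
        return (self._root / image_name).read_bytes()


class S3ImageStorage(ImageStorageBase):
    """Alternative cloud implementation; the expensive SDK import is deferred."""

    def __init__(self, bucket: str) -> None:
        import boto3  # only imported if this implementation is actually selected

        self._client = boto3.client("s3")
        self._bucket = bucket

    def save(self, image_name: str, data: bytes) -> None:
        self._client.put_object(Bucket=self._bucket, Key=image_name, Body=data)

    def load(self, image_name: str) -> bytes:
        return self._client.get_object(Bucket=self._bucket, Key=image_name)["Body"].read()
```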
diff --git a/docs/contributing/INVOCATIONS.md b/docs/contributing/INVOCATIONS.md
index 249642492b3..23e5ccc14d9 100644
--- a/docs/contributing/INVOCATIONS.md
+++ b/docs/contributing/INVOCATIONS.md
@@ -287,8 +287,8 @@ new Invocation ready to be used.
 
 Once you've created a Node, the next step is to share it with the community! The
 best way to do this is to submit a Pull Request to add the Node to the
-[Community Nodes](nodes/communityNodes) list. If you're not sure how to do that,
-take a look a at our [contributing nodes overview](contributingNodes).
+[Community Nodes](../nodes/communityNodes.md) list. If you're not sure how to do that,
+take a look at our [contributing nodes overview](../nodes/contributingNodes.md).
 
 ## Advanced
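Related to the invocation material touched above: a minimal, purely illustrative sketch of the "unit of execution with typed inputs and outputs" idea. The real base classes, field types, and registration decorators live under `/invokeai/app/invocations` and are documented in INVOCATIONS.md; the names below are placeholders rather than the actual API.

```python
from dataclasses import dataclass


@dataclass
class ResizeImageOutput:
    """Placeholder output type; downstream nodes in a session graph consume this."""
    image_name: str
    width: int
    height: int


@dataclass
class ResizeImageInvocation:
    """Placeholder invocation: typed inputs declared as fields, one invoke() step."""
    image_name: str
    width: int = 512
    height: int = 512

    def invoke(self) -> ResizeImageOutput:
        # A real invocation would call into backend services here (image storage,
        # model loading, etc.) rather than just echoing the requested size back.
        return ResizeImageOutput(
            image_name=f"{self.image_name}.resized",
            width=self.width,
            height=self.height,
        )


if __name__ == "__main__":
    print(ResizeImageInvocation(image_name="sunset.png", width=256).invoke())
```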
diff --git a/docs/contributing/MODEL_MANAGER.md b/docs/contributing/MODEL_MANAGER.md
index 52b75d8c39a..ecbac9bf071 100644
--- a/docs/contributing/MODEL_MANAGER.md
+++ b/docs/contributing/MODEL_MANAGER.md
@@ -9,20 +9,20 @@ model. These are the:
   configuration information. Among other things, the record service tracks the type of the model, its provenance, and where it can be found on disk.
-  
+
 * _ModelInstallServiceBase_ A service for installing models to disk. It uses `DownloadQueueServiceBase` to download models and their metadata, and `ModelRecordServiceBase` to store that information. It is also responsible for managing the InvokeAI `models` directory and its contents.
-  
+
 * _DownloadQueueServiceBase_
   A multithreaded downloader responsible for downloading models from a remote source to disk. The download queue has special methods for downloading repo_id folders from Hugging Face, as well as discriminating among model versions in Civitai, but can be used for arbitrary content.
-  
+
 * _ModelLoadServiceBase_ Responsible for loading a model from disk into RAM and VRAM and getting it ready for inference.
@@ -207,9 +207,9 @@ for use in the InvokeAI web server. Its signature is:
 
 ```
 def open(
-       cls,
-       config: InvokeAIAppConfig,
-       conn: Optional[sqlite3.Connection] = None,
+       cls,
+       config: InvokeAIAppConfig,
+       conn: Optional[sqlite3.Connection] = None,
        lock: Optional[threading.Lock] = None
   ) -> Union[ModelRecordServiceSQL, ModelRecordServiceFile]:
 ```
@@ -363,7 +363,7 @@ functionality:
 
 * Registering a model config record for a model already located on the local filesystem, without moving it or changing its path.
-  
+
 * Installing a model alreadiy located on the local filesystem, by moving it into the InvokeAI root directory under the `models` folder (or wherever config parameter `models_dir`
@@ -371,21 +371,21 @@ functionality:
 
 * Probing of models to determine their type, base type and other key information.
-  
+
 * Interface with the InvokeAI event bus to provide status updates on the download, installation and registration process.
-  
+
 * Downloading a model from an arbitrary URL and installing it in `models_dir`.
 
 * Special handling for HuggingFace repo_ids to recursively download the contents of the repository, paying attention to alternative variants such as fp16.
-  
+
 * Saving tags and other metadata about the model into the invokeai database when fetching from a repo that provides that type of information, (currently only HuggingFace).
-  
+
 ### Initializing the installer
 
 A default installer is created at InvokeAI api startup time and stored
@@ -461,7 +461,7 @@ revision.
 
 `config` is an optional dict of values that will override the autoprobed values for model type, base, scheduler prediction type, and so forth. See [Model configuration and
-probing](#Model-configuration-and-probing) for details.
+probing](#model-configuration-and-probing) for details.
 
 `access_token` is an optional access token for accessing resources that need authentication.
@@ -494,7 +494,7 @@ source8 = URLModelSource(url='https://civitai.com/api/download/models/63006', ac
 
 for source in [source1, source2, source3, source4, source5, source6, source7]:
     install_job = installer.install_model(source)
-    
+
 source2job = installer.wait_for_installs(timeout=120)
 for source in sources:
     job = source2job[source]
@@ -504,7 +504,7 @@ for source in sources:
         print(f"{source} installed as {model_key}")
     elif job.errored:
         print(f"{source}: {job.error_type}.\nStack trace:\n{job.error}")
-    
+
 ```
 
 As shown here, the `import_model()` method accepts a variety of
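A rough usage sketch tying together the two MODEL_MANAGER.md excerpts touched above — the `open()` classmethod signature and the `install_model()` / `wait_for_installs()` loop. The import paths and the `get_config()` helper are assumptions made for illustration; only the calls that appear in the quoted documentation are taken from it.

```python
# Sketch only: the import paths and get_config() below are assumed, not taken
# from the documentation quoted above.
import threading

from invokeai.app.services.config import InvokeAIAppConfig  # assumed module path
from invokeai.app.services.model_records import ModelRecordServiceBase  # assumed module path


def open_record_store():
    """Open the SQL- or file-backed record store, per the signature quoted above."""
    config = InvokeAIAppConfig.get_config()  # assumed config loader
    return ModelRecordServiceBase.open(config, conn=None, lock=threading.Lock())


def install_all(installer, sources):
    """Queue several sources and wait for the batch, mirroring the example above."""
    for source in sources:
        installer.install_model(source)
    source2job = installer.wait_for_installs(timeout=120)
    for source in sources:
        job = source2job[source]
        if job.complete:
            print(f"{source} installed as {job.config_out.key}")
        elif job.errored:
            print(f"{source}: {job.error_type}.\nStack trace:\n{job.error}")
```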
diff --git a/docs/contributing/TESTS.md b/docs/contributing/TESTS.md
index 8d823bb4e97..8fa3602decf 100644
--- a/docs/contributing/TESTS.md
+++ b/docs/contributing/TESTS.md
@@ -1,6 +1,6 @@
 # InvokeAI Backend Tests
 
-We use `pytest` to run the backend python tests. (See [pyproject.toml](/pyproject.toml) for the default `pytest` options.)
+We use `pytest` to run the backend python tests. (See [pyproject.toml](https://github.com/invoke-ai/InvokeAI/blob/main/pyproject.toml) for the default `pytest` options.)
 
 ## Fast vs. Slow
 All tests are categorized as either 'fast' (no test annotation) or 'slow' (annotated with the `@pytest.mark.slow` decorator).
 
@@ -33,7 +33,7 @@ pytest tests -m ""
 
 ## Test Organization
 
-All backend tests are in the [`tests/`](/tests/) directory. This directory mirrors the organization of the `invokeai/` directory. For example, tests for `invokeai/model_management/model_manager.py` would be found in `tests/model_management/test_model_manager.py`.
+All backend tests are in the [`tests/`](https://github.com/invoke-ai/InvokeAI/tree/main/tests) directory. This directory mirrors the organization of the `invokeai/` directory. For example, tests for `invokeai/model_management/model_manager.py` would be found in `tests/model_management/test_model_manager.py`.
 
 TODO: The above statement is aspirational. A re-organization of legacy tests is required to make it true.
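A small sketch of the fast/slow split that the TESTS.md hunks above describe: tests with no annotation count as fast, and anything decorated with `@pytest.mark.slow` only runs when the marker expression selects it. The file path and test bodies are made up for illustration.

```python
# tests/app/test_example.py -- hypothetical location mirroring the invokeai/ layout
import pytest


def test_addition_is_fast():
    # No marker, so this is part of the default "fast" selection.
    assert 1 + 1 == 2


@pytest.mark.slow
def test_long_running_pipeline():
    # Marked slow; runs with `pytest tests -m "slow"` or, together with the
    # fast tests, with `pytest tests -m ""` as shown in the document above.
    assert sum(range(1_000)) == 499_500
```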
diff --git a/docs/contributing/contribution_guides/development.md b/docs/contributing/contribution_guides/development.md
index d75632fe17c..80c3010064b 100644
--- a/docs/contributing/contribution_guides/development.md
+++ b/docs/contributing/contribution_guides/development.md
@@ -2,7 +2,7 @@
 
 ## **What do I need to know to help?**
 
-If you are looking to help with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and Typescript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with StableDiffusion and image generation concepts is helpful, but not essential. 
+If you are looking to help with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and Typescript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with StableDiffusion and image generation concepts is helpful, but not essential.
 
 ## **Get Started**
 
@@ -12,7 +12,7 @@ To get started, take a look at our [new contributors checklist](newContributorCh
 Once you're setup, for more information, you can review the documentation specific to your area of interest:
 
 * #### [InvokeAI Architecure](../ARCHITECTURE.md)
-* #### [Frontend Documentation](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web)
+* #### [Frontend Documentation](../frontend/index.md)
 * #### [Node Documentation](../INVOCATIONS.md)
 * #### [Local Development](../LOCAL_DEVELOPMENT.md)
 
@@ -20,15 +20,15 @@ Once you're setup, for more information, you can review the documentation specif
 
 If you don't feel ready to make a code contribution yet, no problem! You can also help out in other ways, such as [documentation](documentation.md), [translation](translation.md) or helping support other users and triage issues as they're reported in GitHub.
 
-There are two paths to making a development contribution: 
+There are two paths to making a development contribution:
 
 1. Choosing an open issue to address. Open issues can be found in the [Issues](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen) section of the InvokeAI repository. These are tagged by the issue type (bug, enhancement, etc.) along with the “good first issues” tag denoting if they are suitable for first time contributors.
-   1. Additional items can be found on our [roadmap](https://github.com/orgs/invoke-ai/projects/7). The roadmap is organized in terms of priority, and contains features of varying size and complexity. If there is an inflight item you’d like to help with, reach out to the contributor assigned to the item to see how you can help. 
+   1. Additional items can be found on our [roadmap](https://github.com/orgs/invoke-ai/projects/7). The roadmap is organized in terms of priority, and contains features of varying size and complexity. If there is an inflight item you’d like to help with, reach out to the contributor assigned to the item to see how you can help.
 2. Opening a new issue or feature to add. **Please make sure you have searched through existing issues before creating new ones.**
 
 *Regardless of what you choose, please post in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord before you start development in order to confirm that the issue or feature is aligned with the current direction of the project. We value our contributors time and effort and want to ensure that no one’s time is being misspent.*
 
-## Best Practices: 
+## Best Practices:
 * Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged
 * Comments! Commenting your code helps reviewers easily understand your contribution
 * Use Python and Typescript’s typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development
@@ -38,7 +38,7 @@ There are two paths to making a development contribution:
 
 If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord.
 
-For frontend related work, **@psychedelicious** is the best person to reach out to. 
+For frontend related work, **@psychedelicious** is the best person to reach out to.
 
 For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@psychedelicious**.

diff --git a/docs/contributing/contribution_guides/newContributorChecklist.md b/docs/contributing/contribution_guides/newContributorChecklist.md
index c890672bf07..623487dff6f 100644
--- a/docs/contributing/contribution_guides/newContributorChecklist.md
+++ b/docs/contributing/contribution_guides/newContributorChecklist.md
@@ -22,15 +22,15 @@ Before starting these steps, ensure you have your local environment [configured
 2. Fork the [InvokeAI](https://github.com/invoke-ai/InvokeAI) repository to your GitHub profile. This means that you will have a copy of the repository under **your-GitHub-username/InvokeAI**.
 3. Clone the repository to your local machine using:
-   ```bash
-   git clone https://github.com/your-GitHub-username/InvokeAI.git
-   ```
+   ```bash
+   git clone https://github.com/your-GitHub-username/InvokeAI.git
+   ```
    If you're unfamiliar with using Git through the commandline, [GitHub Desktop](https://desktop.github.com) is a easy-to-use alternative with a UI. You can do all the same steps listed here, but through the interface.
 4. Create a new branch for your fix using:
-   ```bash
-   git checkout -b branch-name-here
-   ```
+   ```bash
+   git checkout -b branch-name-here
+   ```
 5. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
 6. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index:
diff --git a/docs/contributing/dev-environment.md b/docs/contributing/dev-environment.md
index bfa7047594f..91c595e4541 100644
--- a/docs/contributing/dev-environment.md
+++ b/docs/contributing/dev-environment.md
@@ -27,9 +27,9 @@ If you just want to use Invoke, you should use the [installer][installer link].
 
 5. Activate the venv (you'll need to do this every time you want to run the app):
 
-   ```sh
-   source .venv/bin/activate
-   ```
+   ```sh
+   source .venv/bin/activate
+   ```
 
 6. Install the repo as an [editable install][editable install link]:
 
@@ -37,7 +37,7 @@ If you just want to use Invoke, you should use the [installer][installer link].
    pip install -e ".[dev,test,xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
    ```
 
-   Refer to the [manual installation][manual install link]] instructions for more determining the correct install options. `xformers` is optional, but `dev` and `test` are not.
+   Refer to the [manual installation][manual install link] instructions for more information on determining the correct install options. `xformers` is optional, but `dev` and `test` are not.
 
 7. Install the frontend dev toolchain:

diff --git a/docs/contributing/index.md b/docs/contributing/index.md
index 15e97c6611c..79c1082746d 100644
--- a/docs/contributing/index.md
+++ b/docs/contributing/index.md
@@ -34,11 +34,11 @@ Please reach out to @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy)
 
 ## Contributors
 
-This project is a combined effort of dedicated people from across the world. [Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for their time, hard work and effort.
+This project is a combined effort of dedicated people from across the world. [Check out the list of all these amazing people](contributors.md). We thank them for their time, hard work and effort.
 
 ## Code of Conduct
 
-The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](https://github.com/invoke-ai/InvokeAI/blob/main/docs/CODE_OF_CONDUCT.md) to learn more - it's essential to maintaining a respectful and inclusive environment.
+The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](../CODE_OF_CONDUCT.md) to learn more - it's essential to maintaining a respectful and inclusive environment.
 
 By making a contribution to this project, you certify that:

diff --git a/invokeai/frontend/web/README.md b/invokeai/frontend/web/README.md
index 995a2812b95..076b68fc837 100644
--- a/invokeai/frontend/web/README.md
+++ b/invokeai/frontend/web/README.md
@@ -1,3 +1,3 @@
 # Invoke UI
 
- 
+