Migrating docs from Docusaurus to GQLAlchemy GitHub repo (#259)
* Initial commit of docs

* Update H1 in documents

* Add mkdocs

* Update styling and syntax

* Update navigation

* Update links

* Update links

* Update internal links

* Update external Memgraph links

* Update docs

* Add logo and favicon

* Update README.md

Co-authored-by: Katarina Supe <[email protected]>

* Update README

* Update README.md

---------

Co-authored-by: katarinasupe <[email protected]>
Co-authored-by: Katarina Supe <[email protected]>
3 people authored Sep 4, 2023
1 parent 313e95c commit 8a23edf
Showing 55 changed files with 4,237 additions and 316 deletions.
207 changes: 11 additions & 196 deletions README.md
@@ -62,7 +62,7 @@ poetry install -E arrow # Support for the CSV, Parquet, ORC and IPC/Feather/Arrow file formats
poetry install -E dgl # DGL support (also includes torch)
```
-To run the tests, make sure you have an [active Memgraph instance](/memgraph), and execute one of the following commands:
+To run the tests, make sure you have an [active Memgraph instance](https://memgraph.com/docs/getting-started), and execute one of the following commands:
```bash
poetry run pytest . -k "not slow" # If all extras installed
@@ -76,199 +76,6 @@ If you’ve installed only certain extras, it’s also possible to run their associated tests:
poetry run pytest . -k "arrow"
poetry run pytest . -k "dgl"
```
## GQLAlchemy capabilities
<details>
<summary>🗺️ Object graph mapper</summary>
<br>
Below is an example of how to create `User` and `Language` node classes and a relationship class of type `SPEAKS`. It also shows how to create a new node and relationship, save them to the database, and then load them back from the database.
<br>
<br>
```python
from gqlalchemy import Memgraph, Node, Relationship, Field
from typing import Optional

db = Memgraph()


class User(Node, index=True, db=db):
    id: str = Field(index=True, exists=True, unique=True, db=db)
    username: str = Field(index=True, exists=True, unique=True, db=db)


class Language(Node):
    name: str = Field(unique=True, db=db)


class Speaks(Relationship, type="SPEAKS"):
    pass


user = User(id="3", username="John").save(db)
language = Language(name="en").save(db)

speaks_rel = Speaks(
    _start_node_id=user._id,
    _end_node_id=language._id,
).save(db)

loaded_user = User(id="3").load(db=db)
print(loaded_user)

loaded_speaks = Speaks(
    _start_node_id=user._id,
    _end_node_id=language._id,
).load(db)
print(loaded_speaks)
```
</details>
<details>
<summary>🔨 Query builder</summary>
<br>
When building a Cypher query, you can use a set of methods that are wrappers around Cypher clauses.
<br>
<br>
```python
from gqlalchemy import create, match
from gqlalchemy.query_builder import Operator

query_create = (
    create()
    .node(labels="Person", name="Leslie")
    .to(relationship_type="FRIENDS_WITH")
    .node(labels="Person", name="Ron")
    .execute()
)

query_match = (
    match()
    .node(labels="Person", variable="p1")
    .to()
    .node(labels="Person", variable="p2")
    .where(item="p1.name", operator=Operator.EQUAL, literal="Leslie")
    .return_(results=["p1", ("p2", "second")])
    .execute()
)
```
</details>
<details>
<summary>🚰 Manage streams</summary>
<br>
You can create and start a Kafka or Pulsar stream using GQLAlchemy.
<br>
**Kafka stream**
```python
from gqlalchemy import Memgraph, MemgraphKafkaStream

db = Memgraph()

stream = MemgraphKafkaStream(name="ratings_stream", topics=["ratings"], transform="movielens.rating", bootstrap_servers="localhost:9093")
db.create_stream(stream)
db.start_stream(stream)
```
**Pulsar stream**
```python
from gqlalchemy import Memgraph, MemgraphPulsarStream

db = Memgraph()

stream = MemgraphPulsarStream(name="ratings_stream", topics=["ratings"], transform="movielens.rating", service_url="localhost:6650")
db.create_stream(stream)
db.start_stream(stream)
```
</details>
<details>
<summary>🗄️ Import table data from different sources</summary>
<br>
**Import table data to a graph database**
You can translate table data from a file into graph data and import it to Memgraph. Currently, the CSV, Parquet, ORC and IPC/Feather/Arrow file formats can be read via the PyArrow package.
Read all about it in the [table to graph importer how-to guide](https://memgraph.com/docs/gqlalchemy/how-to-guides/table-to-graph-importer).
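For orientation, here is a minimal sketch of what such an import could look like. It assumes a local directory of CSV files and a data configuration YAML file (both names are hypothetical), uses the `CSVLocalFileSystemImporter` described in the how-to guide above, and requires PyYAML; exact class names and parameters may differ between GQLAlchemy versions.
```python
import yaml

from gqlalchemy.loaders import CSVLocalFileSystemImporter

# The data configuration describes how table columns map to nodes and
# relationships; its schema is documented in the how-to guide linked above.
with open("data_configuration.yml") as f:  # hypothetical file name
    data_configuration = yaml.safe_load(f)

importer = CSVLocalFileSystemImporter(
    path="./table-data",  # hypothetical directory containing the CSV files
    data_configuration=data_configuration,
)

# Translate the tables into graph objects and import them into Memgraph.
importer.translate(drop_database_on_start=True)
```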
**Make a custom file system importer**
If you want to read from a file system not currently supported by GQLAlchemy, or use a file type that is currently not readable, you can implement your own importer by extending the abstract classes `FileSystemHandler` and `DataLoader`, respectively.
Read all about it in the [custom file system importer how-to guide](https://memgraph.com/docs/gqlalchemy/how-to-guides/custom-file-system-importer).
</details>
<details>
<summary>⚙️ Manage Memgraph instances</summary>
<br>
You can start, stop, connect to and monitor Memgraph instances with GQLAlchemy.
**Manage Memgraph Docker instance**
```python
from gqlalchemy.instance_runner import (
    DockerImage,
    MemgraphInstanceDocker,
)

memgraph_instance = MemgraphInstanceDocker(
    docker_image=DockerImage.MEMGRAPH, docker_image_tag="latest", host="0.0.0.0", port=7687
)
memgraph = memgraph_instance.start_and_connect(restart=False)

print(list(memgraph.execute_and_fetch("RETURN 'Memgraph is running' AS result"))[0]["result"])
```
**Manage Memgraph binary instance**
```python
from gqlalchemy.instance_runner import MemgraphInstanceBinary

memgraph_instance = MemgraphInstanceBinary(
    host="0.0.0.0", port=7698, binary_path="/usr/lib/memgraph/memgraph", user="memgraph"
)
memgraph = memgraph_instance.start_and_connect(restart=False)

print(list(memgraph.execute_and_fetch("RETURN 'Memgraph is running' AS result"))[0]["result"])
```
</details>
<details>
<summary>🔫 Manage database triggers</summary>
<br>
Because Memgraph supports database triggers on `CREATE`, `UPDATE` and `DELETE` operations, GQLAlchemy also implements a simple interface for maintaining these triggers.
```python
from gqlalchemy import Memgraph, MemgraphTrigger
from gqlalchemy.models import (
    TriggerEventType,
    TriggerEventObject,
    TriggerExecutionPhase,
)

db = Memgraph()

trigger = MemgraphTrigger(
    name="ratings_trigger",
    event_type=TriggerEventType.CREATE,
    event_object=TriggerEventObject.NODE,
    execution_phase=TriggerExecutionPhase.AFTER,
    statement="UNWIND createdVertices AS node SET node.created_at = LocalDateTime()",
)

db.create_trigger(trigger)
triggers = db.get_triggers()
print(triggers)
```
</details>
<details>
<summary>💽 On-disk storage</summary>
<br>
Since Memgraph is an in-memory graph database, the GQLAlchemy library provides an on-disk storage solution for large properties that aren't used in graph algorithms. This is useful when nodes or relationships carry metadata that none of the graph algorithms running in Memgraph needs, but that should still be retrievable afterwards. Learn all about it in the [on-disk storage how-to guide](https://memgraph.com/docs/gqlalchemy/how-to-guides/on-disk-storage).
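As a rough illustration, the sketch below declares a node property that is stored on disk rather than in Memgraph's memory. It assumes the `SQLitePropertyDatabase` helper and the `on_disk=True` field option covered in the linked guide (the database path and property values are illustrative); exact names may vary between versions.
```python
from typing import Optional

from gqlalchemy import Memgraph, Node, Field, SQLitePropertyDatabase

db = Memgraph()
# Register an on-disk SQLite database for large properties (path is illustrative).
SQLitePropertyDatabase("./on_disk_properties.db", db)


class User(Node):
    id: int = Field(unique=True, index=True, db=db)
    # Kept in the SQLite database on disk instead of Memgraph's in-memory storage.
    bio: Optional[str] = Field(on_disk=True)


# The large property is transparently written to and read from disk.
User(id=1, bio="A very long biography... " * 100).save(db)
```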
</details>
<br>
If you want to learn more about the OGM, the query builder, managing streams, importing data from different sources, managing Memgraph instances, managing database triggers and using on-disk storage, check out the GQLAlchemy [how-to guides](https://memgraph.com/docs/gqlalchemy/how-to-guides).
## Development (how to build)
```bash
@@ -279,14 +86,22 @@ poetry run pytest . -k "not slow and not extras"
## Documentation
-The GQLAlchemy documentation is available on [memgraph.com/docs/gqlalchemy](https://memgraph.com/docs/gqlalchemy/).
+The GQLAlchemy documentation is available on [GitHub](https://github.com/memgraph/gqlalchemy).
-The documentation can be generated by executing:
+The reference guide can be generated from the code by executing:
```
pip3 install pydoc-markdown
pydoc-markdown
```
Other parts of the documentation are written and located in the `docs` directory. To test the documentation locally, execute:
```
pip3 install mkdocs
pip3 install mkdocs-material
pip3 install pymdown-extensions
mkdocs serve
```
## License
Copyright (c) 2016-2022 [Memgraph Ltd.](https://memgraph.com)
Binary file added docs/assets/favicon.png