Commit dd67d79 (deploy: 546ef07)
zzeppozz committed Sep 27, 2024
Showing 86 changed files with 12,054 additions and 0 deletions.
4 changes: 4 additions & 0 deletions .buildinfo
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 6ea98c5a90654e6f280fe4cc28091e04
tags: 645f666f9bcd5a90fca523b33c5a78b7
Binary files added (contents not shown):

* .doctrees/environment.pickle
* .doctrees/index.doctree
* .doctrees/pages/about.doctree
* .doctrees/pages/aws/automation.doctree
* .doctrees/pages/aws/aws_setup.doctree
* .doctrees/pages/aws/roles.doctree
* .doctrees/pages/history/aws_experiments.doctree
* .doctrees/pages/history/year3.doctree
* .doctrees/pages/history/year4_planA.doctree
* .doctrees/pages/history/year4_planB.doctree
* .doctrees/pages/history/year5.doctree
* .doctrees/pages/interaction/aws_prep.doctree
* .doctrees/pages/interaction/debug.doctree
* .doctrees/pages/interaction/deploy.doctree
* .doctrees/pages/workflow.doctree
Empty file added .nojekyll
Binary file added _images/lm_logo.png
45 changes: 45 additions & 0 deletions _sources/index.rst.txt
Welcome to LmBISON - RIIS Analysis
======================================

The BISON repository contains data and scripts to annotate GBIF occurrence records
with geographic location and USGS RIIS status information.


Current
------------

.. toctree::
:maxdepth: 2

pages/about
pages/workflow

Setup AWS
------------

.. toctree::
:maxdepth: 2

pages/aws/aws_setup

Using BISON
------------

.. toctree::
:maxdepth: 2

pages/interaction/about

History
------------

.. toctree::
:maxdepth: 2

pages/history/year4_planB
pages/history/year4_planA
pages/history/year3
pages/history/year5
pages/history/aws_experiments

* :ref:`genindex`
12 changes: 12 additions & 0 deletions _sources/pages/about.rst.txt
About
========

The `Lifemapper BISON repository <https://github.com/lifemapper/bison>`_ is an open
source project supported by USGS award G19AC00211.

The aim of this repository is to provide a workflow for annotating and analyzing a
large set of United States specimen occurrence records for the USGS BISON project.

.. image:: ../.static/lm_logo.png
:width: 150
:alt: Lifemapper
68 changes: 68 additions & 0 deletions _sources/pages/aws/automation.rst.txt
Create lambda function to initiate processing
------------------------------------------------
* Create a lambda function, aws/events/bison_find_current_gbif_lambda.py, to execute
  when the trigger condition is activated

  * The trigger condition is a file deposited in the BISON bucket

    * TODO: change to the first of the month

* The lambda function will delete the new file and test for the existence of GBIF
  data for the current month

  * TODO: change to mount GBIF data in Redshift, subset, and unmount
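
A minimal sketch of such a handler, assuming boto3 and the ``occurrence/YYYY-MM-01/``
key prefix of the public GBIF Open Data bucket; the function names here are
illustrative, not the contents of bison_find_current_gbif_lambda.py:

```python
import datetime


def gbif_prefix(today=None):
    """S3 key prefix for the GBIF occurrence snapshot of the current month."""
    today = today or datetime.date.today()
    return f"occurrence/{today:%Y-%m}-01/"


def lambda_handler(event, context):
    import boto3  # deferred import so the module loads without the AWS SDK

    s3 = boto3.client("s3")
    # Delete the trigger file that was deposited in the BISON bucket.
    rec = event["Records"][0]["s3"]
    s3.delete_object(Bucket=rec["bucket"]["name"], Key=rec["object"]["key"])
    # Test whether GBIF data exists for the current month.
    resp = s3.list_objects_v2(
        Bucket="gbif-open-data-us-east-1", Prefix=gbif_prefix(), MaxKeys=1
    )
    return {"gbif_data_present": resp.get("KeyCount", 0) > 0}
```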

Edit the execution role for lambda function
--------------------------------------------
* Under Configuration/Permissions, find the Execution role name
  (bison_find_current_gbif_lambda-role-fb05ks88) automatically created for this
  function
* Open the role in a new window and, under Permissions policies, add permissions:

  * bison_s3_policy
  * redshift_glue_policy

Create trigger to initiate lambda function
------------------------------------------------

* Check for existence of new GBIF data
* Use a blueprint, python, "Get S3 Object"
* Function name: bison_find_current_gbif_lambda
* S3 trigger:

  * Bucket: arn:aws:s3:::gbif-open-data-us-east-1

* Create a rule in EventBridge to use as the trigger

  * Event source: AWS events or EventBridge partner events
  * Sample event: "S3 Object Created", aws/events/test_trigger_event.json
  * Creation method: Use pattern form
  * Event pattern:

    * Event Source: AWS services
    * AWS service: S3
    * Event type: Object-Level API Call via CloudTrail
    * Event Type Specifications:

      * Specific operation(s): GetObject
      * Specific bucket(s) by name: arn:aws:s3:::bison-321942852011-us-east-1

* Select target(s):

  * AWS service
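
The pattern-form fields above correspond roughly to an EventBridge event pattern like
the following (a sketch; the exact JSON the console generates may differ)::

    {
        "source": ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["s3.amazonaws.com"],
            "eventName": ["GetObject"],
            "requestParameters": {
                "bucketName": ["bison-321942852011-us-east-1"]
            }
        }
    }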


AWS lambda function that queries Redshift
--------------------------------------------

https://repost.aws/knowledge-center/redshift-lambda-function-queries

https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/redshift-data/client/execute_statement.html

* Connect to a serverless workgroup (bison), namespace (bison), database name (dev)

* When connecting to a serverless workgroup, specify the workgroup name and database
name. The database user name is derived from the IAM identity. For example,
arn:iam::123456789012:user:foo has the database user name IAM:foo. Also, permission
to call the redshift-serverless:GetCredentials operation is required.
* The redshift:GetClusterCredentialsWithIAM permission is needed for temporary
  authentication with a role
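
A hedged sketch of issuing such a query through the Redshift Data API
(``execute_statement``), using the workgroup and database names above; the helper and
function names are illustrative:

```python
def statement_kwargs(sql, workgroup="bison", database="dev"):
    # Serverless connections identify the target by workgroup name (no
    # ClusterIdentifier); the database user is derived from the IAM identity.
    return {"WorkgroupName": workgroup, "Database": database, "Sql": sql}


def run_query(sql):
    import boto3  # deferred so the module imports without the AWS SDK

    client = boto3.client("redshift-data")
    resp = client.execute_statement(**statement_kwargs(sql))
    # execute_statement is asynchronous: poll describe_statement(Id=...) until
    # Status is FINISHED, then fetch rows with get_statement_result.
    return resp["Id"]
```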
130 changes: 130 additions & 0 deletions _sources/pages/aws/aws_setup.rst.txt
AWS Resource Setup
********************

Create policies and roles
===========================================================

The :ref:`bison_ec2_s3_role` allows an EC2 instance to access the public S3 data and
the bison S3 bucket. Its trust relationship grants AssumeRole to ec2 and s3 services.
This role will be assigned to an EC2 instance that will initiate
computations and compute matrices.

The :ref:`bison_redshift_s3_role` allows Redshift to access public S3 data and
the bison S3 bucket, and allows Redshift to perform glue functions. Its trust
relationship grants AssumeRole to redshift service.

Make sure that the same role granted to the namespace is used for creating an external
schema and lambda functions. When mounting external data as a redshift table to the
external schema, you may encounter an error indicating that the "dev" database does not
exist. This refers to the external database, and may indicate that the role used by the
command and/or namespace differs from the role granted to the schema upon creation.

Redshift Namespace and Workgroup
===========================================================

Namespace and Workgroup
------------------------------

A namespace is storage-related, containing database objects and users. A workgroup is
a collection of compute resources, such as security groups, and related properties and
limitations.
https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-workgroup-namespace.html

External Schema
------------------------
The command below creates an external schema, redshift_spectrum, and also creates a
**new** external database "dev". It appears in the console to be the same "dev"
database that contains the public schema, but it is separate. Also note the IAM role
used to create the schema must match the role attached to the namespace::

CREATE external schema redshift_spectrum
FROM data catalog
DATABASE dev
IAM_ROLE 'arn:aws:iam::321942852011:role/bison_redshift_s3_role'
CREATE external database if NOT exists;
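
With the schema in place, external data can be mounted as a table; a hypothetical
example (the table name, columns, and exact S3 location are placeholders, not the
actual BISON layout)::

    CREATE EXTERNAL TABLE redshift_spectrum.occurrence_sample (
        gbifid      VARCHAR,
        taxonkey    BIGINT
    )
    STORED AS PARQUET
    LOCATION 's3://bison-321942852011-us-east-1/input/';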

EC2 instance creation
===========================================================

Create (Console)
--------------------------------
* Future: create and save an AMI or template for consistent reproduction
* Via Console, without launch template:

  * Ubuntu Server 24.04 LTS, SSD Volume Type (free tier eligible), Arm architecture
  * Instance type t4g.micro (1 GB RAM, 2 vCPU)
  * Security Group: launch-wizard-1
  * 15 GB General Purpose SSD (gp3)
  * Modify `IAM instance profile` to the role created for S3 access (bison_ec2_s3_role)
  * Use the security group created for this region (currently launch-wizard-1)
  * Assign your key pair to this instance

    * If you do not have a keypair, create one for SSH access (tied to region) on
      initial EC2 launch
    * One chance only: download the private key (.pem file for Linux and OSX) to the
      local machine
    * Set file permissions to 400

* Launch
* Test by SSH-ing to the instance with the Public IPv4 DNS address, with the default
  user (for an Ubuntu instance) `ubuntu`::

ssh -i .ssh/<aws_keyname>.pem ubuntu@<ec2-xxx-xxx-xxx-xxx.compute-x.amazonaws.com>


Install software on EC2
===========================================================

Baseline
------------
* update apt
* install apache for getting/managing certificates
* install certbot for Let's Encrypt certificates
* install docker for BISON deployment::

sudo apt update
sudo apt install apache2 certbot plocate unzip
sudo apt install docker.io
sudo apt install docker-compose-v2

AWS Client tools
--------------------

* Use instructions to install the awscli package (Linux):
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html.
* Make sure to use the instructions with the right architecture (x86 vs Arm)
* Test by listing the contents of bison bucket (permission from role bison_ec2_s3_role)::

aws s3 ls s3://bison-321942852011-us-east-1/input/

SSL certificates
------------------

* Create an SSL certificate on the EC2 instance.
* For testing/development, use self-signed certificates because Certbot will not create
  certificates for an AWS EC2 Public IPv4 DNS or an IP address.

* Edit the docker-compose.yml file under `nginx` service (which intercepts all web
requests) in `volumes` to bind-mount the directory containing self-signed
certificates to /etc/letsencrypt::

services:
...
nginx:
...
volumes:
- "/home/ubuntu/certificates:/etc/letsencrypt:ro"
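
The self-signed certificates can be generated with ``openssl``; a sketch assuming the
nginx configuration reads the standard Let's Encrypt filenames (``fullchain.pem``,
``privkey.pem``) and a hypothetical ``live/bison-dev`` directory name:

```shell
# On the EC2 instance CERT_DIR would be under /home/ubuntu/certificates,
# the directory bind-mounted to /etc/letsencrypt in docker-compose.yml.
CERT_DIR="$HOME/certificates/live/bison-dev"
mkdir -p "$CERT_DIR"
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -subj "/CN=bison-dev" \
    -keyout "$CERT_DIR/privkey.pem" \
    -out "$CERT_DIR/fullchain.pem"
```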

BISON code
---------------------

* Download the BISON code repository::

git clone https://github.com/lifemapper/bison.git

* Edit the .env.conf (Docker environment variables) and nginx.conf (webserver address)
files with the FQDN of the server being deployed. For development/testing EC2 servers,
use the Public IPv4 DNS for the EC2 instance.

Launch BISON docker instances
-----------------------------------
84 changes: 84 additions & 0 deletions _sources/pages/aws/roles.rst.txt
Roles, Policies, Trust Relationships
=========================================

.. _bison_redshift_s3_role:

bison_redshift_s3_role
------------------------------

* Attach to BISON namespace (Redshift)
* Regular role
* Trust relationships

  * service: "redshift.amazonaws.com"
* Policies:

  * AmazonRedshiftAllCommandsFullAccess (AWS managed)
  * AmazonRedshiftDataFullAccess (AWS managed)
  * AmazonRedshiftFullAccess (AWS managed)
  * bison_invoke_lambda_policy (invoke lambda functions starting with `bison`)
  * bison_lambda_log_policy (write CloudWatch logs to log groups starting with `bison`)
  * bison_s3_policy (read public/GBIF S3 data and read/write S3 data in bison bucket)
  * redshift_glue_policy.json (for Redshift interactions)
  * AmazonS3FullAccess (AWS managed)

* For Redshift - Customizable

  * TODO: change to Redshift - Scheduler when automated

The policy allowing invocation of the scheduled lambda function::

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "lambda:InvokeFunction"
                ],
                "Resource": [
                    "arn:aws:lambda:us-east-1:321942852011:function:bison_s0_test_schedule:*",
                    "arn:aws:lambda:us-east-1:321942852011:function:bison_s0_test_schedule"
                ]
            }
        ]
    }

bison_redshift_lambda_role
------------------------------

* Service role
* Trust relationships

* Services: ["lambda.amazonaws.com", "redshift.amazonaws.com"]

* Policies:

* same as bison_redshift_s3_role

* In Redshift, GRANT permissions to database::

GRANT CREATE
ON DATABASE dev
TO IAMR:bison_redshift_lambda_role

* Attached to BISON lambda functions
* Attach to BISON namespace (Redshift)



.. _bison_ec2_s3_role:

bison_ec2_s3_role
------------------------------

* Trusted entity type: AWS Service
* for S3
* Includes policies:

* bison_s3_policy.json (read public/GBIF S3 data and read/write bison S3 data)
* SecretsManagerReadWrite (AWS managed)

* Trust relationship:

* ec2_s3_role_trust_policy.json (trust policy granting AssumeRole to both the ec2
  and s3 services)
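
A sketch of what that trust policy likely contains, given the AssumeRole grants
described for this role above (not the verbatim contents of the file)::

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": ["ec2.amazonaws.com", "s3.amazonaws.com"]
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }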
