Repo refactoring.
garutilorenzo committed Mar 3, 2022
1 parent 3cebedb commit dc1c1d8
Showing 70 changed files with 1,416 additions and 1,315 deletions.
110 changes: 75 additions & 35 deletions README.md
Deploy Oracle Cloud services using Oracle [always free](https://docs.oracle.com/
* [Requirements](#requirements)
* [Setup RSA Key](#example-rsa-key-generation)
* [Oracle provider setup](#oracle-provider-setup)
* [Project setup](#project-setup)
* [Firewall](#firewall)
* [OS](#os)
* [Shape](#shape)
* [Useful documentation](#useful-documentation)

### Repository structure

In this repository there are 7 Terraform modules, listed in order of dependency:

* [simple-vcn](simple-vcn/) - Setup a VCN with two PUBLIC subnets
* [private-vcn](private-vcn/) - Setup a VCN with one PUBLIC subnet and one PRIVATE subnet
* [nat-instance](nat-instance/) - Setup a NAT instance (with the Oracle always free account you can't deploy a NAT gateway)
* [simple-instance](simple-instance/) - Deploy a simple instance in a private or public subnet
* [instance-pool](instance-pool/) - Deploy multiple instances using an Oracle instance pool and instance configurations
* [load-balancer](load-balancer/) - Deploy a public load balancer (Layer 7 HTTP)
* [network-load-balancer](network-load-balancer/) - Deploy a private load balancer (Layer 4 TCP)

For more information on how to use these modules, follow the examples in the *examples* directory. To use this repository, clone it and use the *examples* directory as the base dir.
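As a purely illustrative sketch of how the modules compose (module names and paths follow the directories above; the real input variables are defined in each module's vars.tf and wired together in the *examples* directory):

```
module "vcn" {
  source = "../private-vcn" # or ../simple-vcn, path relative to your working directory

  # VCN inputs go here, see private-vcn/vars.tf
}

module "instance" {
  source = "../simple-instance"

  # instance inputs go here, see simple-instance/vars.tf,
  # typically wired to the outputs of the VCN module
}
```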

### Requirements

To use this repo you will need:

* an Oracle Cloud account (an always free account is enough)

Once you get the account, follow the *Before you begin* and *1. Prepare* steps in [this](https://docs.oracle.com/en-us/iaas/developer-tutorials/tutorials/tf-provider/01-summary.htm) document.

You also need:

* [Terraform](https://www.terraform.io/) - Terraform is an open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files.
* [kubectl](https://kubernetes.io/docs/tasks/tools/) - The Kubernetes command-line tool (optional)
* [oci cli](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/cliconcepts.htm) - Oracle command line interface (optional)
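To quickly check that the tools are available in your PATH (the optional ones can be skipped):

```
terraform version
oci --version              # optional
kubectl version --client   # optional
```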

#### Example RSA key generation

To use Terraform with the Oracle Cloud Infrastructure you need to generate an RSA key. Generate the RSA key with:
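For example, with OpenSSL (the file names are only a convention, keep them consistent with the *<your_name>* placeholder used in the rest of this section):

```
mkdir -p ~/.oci
openssl genrsa -out ~/.oci/<your_name>-oracle-cloud.pem 2048
chmod 600 ~/.oci/<your_name>-oracle-cloud.pem
openssl rsa -pubout -in ~/.oci/<your_name>-oracle-cloud.pem -out ~/.oci/<your_name>-oracle-cloud_public.pem
```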
Replace *<your_name>* with your name or a string you prefer.

**NOTE** the string ~/.oci/<your_name>-oracle-cloud_public.pem will be used in the *terraform.tfvars* file used by the Oracle provider plugin, so please take note of it.

### Project setup

Once you have cloned this repo, change directory to the [examples](examples/) dir and choose the example you prefer: the *private subnet* example (main.tf) or the *public subnet* example (main.tf-public). Edit the example file and set the needed variables (the *change-me* variables). Create a *terraform.tfvars* file (for more detail see [Oracle provider setup](#oracle-provider-setup)) and read the requirements of each module in its directory.

Or, if you prefer, you can create a new empty directory in your workspace and start a new project from scratch. To set up the project, follow the README.md in the [examples](examples/) directory.

### Oracle provider setup

This is an example of the *terraform.tfvars* file:

```
fingerprint = "<rsa_key_fingerprint>"
```

The compartment_ocid is the same as the tenancy_ocid.

The fingerprint is the fingerprint of your RSA key; you can find this value under User settings > API Keys.
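For reference, a complete *terraform.tfvars* usually looks like the sketch below; every value is a placeholder, and the variable names other than the ones mentioned above follow the standard OCI provider arguments (check the provider.tf and the vars.tf of the example you use for the exact names):

```
fingerprint      = "<rsa_key_fingerprint>"
private_key_path = "~/.oci/<your_name>-oracle-cloud.pem"
user_ocid        = "<user_ocid>"
tenancy_ocid     = "<tenancy_ocid>"
compartment_ocid = "<tenancy_ocid>"
region           = "<region>"
```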

#### How to find the availability domain name

To find the list of the availability domains, run this command in the Cloud Shell:

```
oci iam availability-domain list
{
  "data": [
    {
      "compartment-id": "<compartment_ocid>",
      "id": "ocid1.availabilitydomain.oc1..xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
      "name": "iAdc:EU-ZURICH-1-AD-1"
    }
  ]
}
```
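The value of the *name* field is what you pass to the availability domain variable of the modules, for example (the exact variable name may differ, check the vars.tf of the module you use):

```
availability_domain = "iAdc:EU-ZURICH-1-AD-1"
```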

#### How to list all the OS images

To filter the OS images by shape and OS, run this command in the Cloud Shell:
```
oci compute image list --compartment-id <compartment_ocid> --operating-system "Canonical Ubuntu" --shape "VM.Standard.A1.Flex"
{
  "data": [
    {
      "agent-features": null,
      "base-image-id": null,
      "billable-size-in-gbs": 2,
      "compartment-id": null,
      "create-image-allowed": true,
      "defined-tags": {},
      "display-name": "Canonical-Ubuntu-20.04-aarch64-2022.01.18-0",
      "freeform-tags": {},
      "id": "ocid1.image.oc1.eu-zurich-1.aaaaaaaag2uyozo7266bmg26j5ixvi42jhaujso2pddpsigtib6vfnqy5f6q",
      "launch-mode": "NATIVE",
      "launch-options": {
        "boot-volume-type": "PARAVIRTUALIZED",
        "firmware": "UEFI_64",
        "is-consistent-volume-naming-enabled": true,
        "is-pv-encryption-in-transit-enabled": true,
        "network-type": "PARAVIRTUALIZED",
        "remote-data-volume-type": "PARAVIRTUALIZED"
      },
      "lifecycle-state": "AVAILABLE",
      "listing-type": null,
      "operating-system": "Canonical Ubuntu",
      "operating-system-version": "20.04",
      "size-in-mbs": 47694,
      "time-created": "2022-01-27T22:53:34.270000+00:00"
    },
```
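The full output can be long; to keep only the display name and the image OCID you can pipe it through jq, for example (assuming jq is available):

```
oci compute image list \
  --compartment-id <compartment_ocid> \
  --operating-system "Canonical Ubuntu" \
  --shape "VM.Standard.A1.Flex" \
  | jq -r '.data[] | "\(.["display-name"]) \(.id)"'
```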

**Note:** this setup was only tested with Ubuntu 20.04

### Firewall

By default the firewall on the compute instances is disabled (except for the nat instance).

### Software installed

In the simple-instance example and in the instance-pool example nginx will be installed by default.
Nginx is used for testing the security list rules and the correct setup of the Load Balancer.

On the k3s-cluster example, k3s will be automatically installed on all the machines. **NOTE** k3s-cluster setup has moved to [this](https://github.com/garutilorenzo/k3s-oci-cluster) repository.

### OS

25 changes: 13 additions & 12 deletions instance-pool/.terraform.lock.hcl → examples/.terraform.lock.hcl


62 changes: 62 additions & 0 deletions examples/README.md
# Examples

In this folder there are two examples:

* main.tf - Use a private subnet with a nat instance, all the services are deployed in the private subnet. (Default example)
* main.tf-public - Use a public subnet, all the services are deployed in the public subnet. (Disabled example)

If you want to use the public example, rename *main.tf-public* to *main.tf*. Keep **ONLY ONE** of the two example *.tf files.
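For example (the name used to set the private example aside is just a suggestion):

```
cd examples
mv main.tf main.tf-private   # set the private example aside
mv main.tf-public main.tf    # enable the public example
```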

Now adjust all the *change-me* variables inside the main.tf file. Once you have set up your environment, initialize Terraform:

```
terraform init

Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/oci from the dependency lock file
- Reusing previous version of hashicorp/template from the dependency lock file
- Using previously-installed hashicorp/oci v4.65.0
- Using previously-installed hashicorp/template v2.2.0
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
```

### Deploy

We are now ready to deploy our infrastructure. First we ask Terraform to plan the execution with:

```
terraform plan
```

Now we can deploy our resources with:

```
terraform apply
```
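When you no longer need the test resources, everything created by the example can be removed with:

```
terraform destroy
```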

### Connect to private instances

We can connect to the private instances using the nat instance as a jump server:

```
ssh -J bastion@<NAT_INSTANCE_PUBLIC_IP> ubuntu@<INSTANCE_PRIVATE_IP>
```
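If you connect often, the same jump can be configured once in your ~/.ssh/config (the host aliases below are made up):

```
Host oci-nat
    HostName <NAT_INSTANCE_PUBLIC_IP>
    User bastion

Host oci-private
    HostName <INSTANCE_PRIVATE_IP>
    User ubuntu
    ProxyJump oci-nat
```

After that, `ssh oci-private` is enough.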

### Start a project from scratch

If you want to create a new project from scratch you need three files:

* terraform.tfvars - More details in [Oracle provider setup](../README.md#oracle-provider-setup)
* main.tf - download the main.tf or main.tf-public file based on your needs. If you choose main.tf-public, **remember** to rename the file to main.tf
* provider.tf - download the file from this directory
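As a sketch, once those three files are in place the usual Terraform workflow applies (the project directory name is arbitrary):

```
mkdir my-oci-project && cd my-oci-project
# copy terraform.tfvars, provider.tf and your main.tf here, then:
terraform init
terraform plan
terraform apply
```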