+ | ===== AWS ===== | ||
+ | |||
+ | == Service catalogue == | ||
+ | |||
A human-friendly overview of all services:
+ | |||
+ | https:// | ||
+ | |||
+ | {{https:// | ||
+ | |||
+ | |||
+ | |||
== Prescriptive Guidance ==

Prescriptive Guidance provides time-tested strategies, guides, and patterns to help accelerate your cloud migration, modernization, and optimization projects.
+ | |||
+ | https:// | ||
+ | |||
+ | {{https:// | ||
+ | |||
+ | == Library == | ||
+ | |||
+ | |||
+ | * Build applications patterns: https:// | ||
+ | * Infrastructure patterns: https:// | ||
+ | |||
+ | |||
+ | ===== Glossary ===== | ||
+ | |||
+ | \\ | ||
+ | \\ | ||
+ | \\ | ||
=====Regions and Availability-Zones=====
The Amazon cloud servers can be used in different Regions. \\
Each Region is divided into Availability Zones.



The Regions are strictly isolated from each other. The Availability Zones within one Region are interconnected with low-latency links.
Data can be transferred cheaply and quickly between the Availability Zones of one Region, but transfer between Regions is slow and expensive.
+ | |||
+ | \\ | ||
+ | |||
+ | ==Regions== | ||
+ | US-East, US-West, EU-West, Asia-Pacific... \\ | ||
+ | |||
+ | \\ | ||
+ | |||
+ | ==Availability zones== | ||
+ | e.g.: eu-west-1a, eu-west-1b, eu-west-1c \\ | ||
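The naming convention above can be sketched in Python — an Availability Zone name is simply the region name plus a letter suffix (the helper below is hypothetical, not an AWS API):

```python
import string

# Hypothetical helper: derive Availability Zone names from a region name.
# AZ names are the region name plus a letter suffix (a, b, c, ...).
def az_names(region: str, count: int) -> list[str]:
    return [region + letter for letter in string.ascii_lowercase[:count]]

print(az_names("eu-west-1", 3))  # ['eu-west-1a', 'eu-west-1b', 'eu-west-1c']
```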
+ | |||
+ | \\ | ||
+ | \\ | ||
+ | \\ | ||
+ | |||
+ | =====VPC Virtual Private Cloud ===== | ||
+ | |||
+ | |||
==Data Traffic pricing==

|Traffic within the same Availability Zone|free| https:// |
|Traffic between Availability Zones of the same Region|charged per GB| https:// |
|Traffic between Regions|charged per GB, at higher rates| https:// |
+ | |||
+ | |||
+ | ==Network segment == | ||
+ | http:// | ||
+ | |||
+ | |||
== CIDR masks separate segments ==
https://


A subnet calculator for CIDR blocks that can also compute asymmetric blocks:
  * http://
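The CIDR arithmetic such a calculator performs can also be done with Python's standard ''ipaddress'' module — here splitting a hypothetical 10.0.0.0/24 VPC block into four equal /26 subnets:

```python
import ipaddress

# Sketch: split a (hypothetical) VPC CIDR block into four equal /26 subnets.
vpc = ipaddress.ip_network("10.0.0.0/24")
subnets = list(vpc.subnets(new_prefix=26))
for subnet in subnets:
    # AWS reserves 5 addresses per subnet (the first 4 and the last 1)
    print(subnet, "usable in AWS:", subnet.num_addresses - 5)
```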
+ | |||
+ | |||
== AWS ACL (Access Control List) ==

ACLs are stateless: they do not remember the traffic that left the subnet,
and so they do not automatically allow the response traffic back in
(contrary to security groups, which are stateful).


The **Inbound** ports must be the **ports on which a service is running** within the VPC subnet (like 22, 80, ...).

  * To allow **outbound traffic** back from the subnet to the client,
  * **as the port range** you must **NOT use the service port range** (22, 80, ...).
  * Instead, to communicate from **within the subnet back to the client**, the **ephemeral port range** of the client's OS must be allowed,
  * because the client **receives the response on a random ephemeral port**, not on the service port.
+ | |||
+ | See examples of port ranges by OS: | ||
+ | |||
+ | {{https:// | ||
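As a hedged sketch, the ephemeral ranges an outbound ACL rule must open can be tabulated per client OS (the helper is hypothetical; the values are the common OS defaults):

```python
# Typical ephemeral (client-side) port ranges, which the *outbound* NACL rule
# must allow for return traffic. Values are common defaults, not AWS constants.
EPHEMERAL_RANGES = {
    "linux": (32768, 60999),       # modern Linux kernels
    "windows": (49152, 65535),     # Windows Vista / Server 2008 and later
    "aws_nat_elb": (1024, 65535),  # AWS NAT gateways / Elastic Load Balancers
}

def outbound_port_range(client_os: str) -> tuple[int, int]:
    """Return the port range an outbound ACL rule should open for this client."""
    return EPHEMERAL_RANGES[client_os]

print(outbound_port_range("linux"))  # the range to open outbound, NOT port 22/80
```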
+ | |||
+ | |||
+ | Compare Security Groups and ACLs: | ||
+ | |||
+ | {{https:// | ||
+ | |||
+ | |||
+ | =====AWS Command Line Interface===== | ||
+ | Commands Reference | ||
+ | http:// | ||
+ | |||
+ | |||
+ | |||
== VPC architecture ==

The architecture inside the VPC:
  * How to connect on-premises networks and still be able to scale?
  * How to maintain resources in a shared VPC?
  * Transit Gateway
+ | |||
+ | {{youtube> | ||
+ | [[ https:// | ||
+ | |||
+ | |||
+ | |||
+ | |||
+ | |||
+ | |||
+ | |||
+ | |||
==== Filters ====
There are predefined filters, which may be applied to several commands.\\
Filter names **DO NOT MATCH THE NAME OF THE PROPERTY** in the output! E.g. instance-id in the filter vs. instanceId in the JSON.

^Command^ List of Filters^
+ | |describe-instances|http:// | ||
+ | |describe-volumes|http:// | ||
+ | |||
+ | < | ||
+ | # volume ids for given instance | ||
+ | aws ec2 describe-volumes | ||
+ | |||
+ | # root devices name for given instance | ||
+ | aws ec2 describe-instances --filters Name=' | ||
+ | </ | ||
+ | |||
+ | ===== Query ===== | ||
+ | |||
+ | http:// | ||
+ | |||
+ | Specific tag filtering | ||
+ | < | ||
+ | --query ' | ||
+ | </ | ||
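The ''<nowiki>--query</nowiki>'' option uses the JMESPath language. A pure-Python equivalent of such a tag filter, over a small sample of (hypothetical) describe-instances output, looks like this:

```python
# Sample data mimicking the shape of `aws ec2 describe-instances` output
# (instance IDs and tag values are hypothetical).
data = {"Reservations": [{"Instances": [
    {"InstanceId": "i-1", "Tags": [{"Key": "Name", "Value": "web"}]},
    {"InstanceId": "i-2", "Tags": [{"Key": "Name", "Value": "db"}]},
]}]}

# Equivalent of filtering instances whose Name tag equals "web"
ids = [inst["InstanceId"]
       for res in data["Reservations"]
       for inst in res["Instances"]
       if any(t["Key"] == "Name" and t["Value"] == "web" for t in inst["Tags"])]
print(ids)  # ['i-1']
```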
+ | |||
+ | |||
+ | ===== Automatic Backups ===== | ||
+ | |||
+ | === Existing solution === | ||
Here is an instance image that can be launched and contains an existing Asigra solution:
+ | https:// | ||
+ | |||
+ | |||
+ | === Own Script === | ||
+ | |||
+ | The AWS CLI is required for this script. How to configure it: | ||
+ | http:// | ||
+ | |||
+ | Script to make automatic snapshots and clean up: | ||
+ | < | ||
+ | #!/bin/bash | ||
+ | export PATH=$PATH:/ | ||
+ | |||
+ | # Safety feature: exit script if error is returned, or if variables not set. | ||
+ | # Exit if a pipeline results in an error. | ||
+ | set -ue | ||
+ | set -o pipefail | ||
+ | |||
+ | ## Automatic EBS Volume Snapshot Creation & Clean-Up Script | ||
+ | # | ||
+ | # Written by Casey Labs Inc. (https:// | ||
+ | # Contact us for all your Amazon Web Services Consulting needs! | ||
+ | # Script Github repo: https:// | ||
+ | # | ||
+ | # Additional credits: Log function by Alan Franzoni; Pre-req check by Colin Johnson | ||
+ | # | ||
+ | # PURPOSE: This Bash script can be used to take automatic snapshots of your Linux EC2 instance. Script process: | ||
+ | # - Determine the instance ID of the EC2 server on which the script runs | ||
+ | # - Gather a list of all volume IDs attached to that instance | ||
+ | # - Take a snapshot of each attached volume | ||
+ | # - The script will then delete all associated snapshots taken by the script that are older than 7 days | ||
+ | # | ||
+ | # DISCLAIMER: This script deletes snapshots (though only the ones that it creates). | ||
+ | # Make sure that you understand how the script works. No responsibility accepted in event of accidental data loss. | ||
+ | # | ||
+ | |||
+ | |||
+ | ## Variable Declarations ## | ||
+ | |||
+ | # Get Instance Details | ||
+ | instance_id=$(wget -q -O- http:// | ||
+ | region=$(wget -q -O- http:// | ||
+ | |||
+ | # Set Logging Options | ||
+ | logfile="/ | ||
+ | logfile_max_lines=" | ||
+ | |||
+ | # How many days do you wish to retain backups for? Default: 7 days | ||
+ | retention_days=" | ||
+ | retention_date_in_seconds=$(date +%s --date " | ||
+ | |||
+ | |||
+ | ## Function Declarations ## | ||
+ | |||
+ | # Function: Setup logfile and redirect stdout/ | ||
+ | log_setup() { | ||
+ | # Check if logfile exists and is writable. | ||
+ | ( [ -e " | ||
+ | |||
+ | tmplog=$(tail -n $logfile_max_lines $logfile 2>/ | ||
+ | exec > >(tee -a $logfile) | ||
+ | exec 2>&1 | ||
+ | } | ||
+ | |||
+ | # Function: Log an event. | ||
+ | log() { | ||
+ | echo " | ||
+ | } | ||
+ | |||
+ | # Function: Confirm that the AWS CLI and related tools are installed. | ||
+ | prerequisite_check() { | ||
+ | for prerequisite in aws wget; do | ||
+ | hash $prerequisite &> /dev/null | ||
+ | if [[ $? == 1 ]]; then | ||
+ | echo "In order to use this script, the executable \" | ||
+ | fi | ||
+ | done | ||
+ | } | ||
+ | |||
+ | # Function: Snapshot all volumes attached to this instance. | ||
+ | snapshot_volumes() { | ||
+ | for volume_id in $volume_list; | ||
+ | log " | ||
+ | |||
+ | # Get the attached device name to add to the description so we can easily tell which volume this is. | ||
+ | device_name=$(aws ec2 describe-volumes --region $region --output=text --volume-ids $volume_id --query ' | ||
+ | |||
+ | # Take a snapshot of the current volume, and capture the resulting snapshot ID | ||
+ | snapshot_description=" | ||
+ | |||
+ | snapshot_id=$(aws ec2 create-snapshot --region $region --output=text --description $snapshot_description --volume-id $volume_id --query SnapshotId) | ||
+ | log "New snapshot is $snapshot_id" | ||
+ | |||
+ | # Add a " | ||
+ | # Why? Because we only want to purge snapshots taken by the script later, and not delete snapshots manually taken. | ||
+ | aws ec2 create-tags --region $region --resource $snapshot_id --tags Key=CreatedBy, | ||
+ | done | ||
+ | } | ||
+ | |||
+ | # Function: Cleanup all snapshots associated with this instance that are older than $retention_days | ||
+ | cleanup_snapshots() { | ||
+ | for volume_id in $volume_list; | ||
+ | snapshot_list=$(aws ec2 describe-snapshots --region $region --output=text --filters " | ||
+ | for snapshot in $snapshot_list; | ||
+ | log " | ||
+ | # Check age of snapshot | ||
+ | snapshot_date=$(aws ec2 describe-snapshots --region $region --output=text --snapshot-ids $snapshot --query Snapshots[].StartTime | awk -F " | ||
+ | snapshot_date_in_seconds=$(date " | ||
+ | snapshot_description=$(aws ec2 describe-snapshots --snapshot-id $snapshot --region $region --query Snapshots[].Description) | ||
+ | |||
+ | if (( $snapshot_date_in_seconds <= $retention_date_in_seconds )); then | ||
+ | log " | ||
+ | aws ec2 delete-snapshot --region $region --snapshot-id $snapshot | ||
+ | else | ||
+ | log "Not deleting snapshot $snapshot. Description: | ||
+ | fi | ||
+ | done | ||
+ | done | ||
+ | } | ||
+ | |||
+ | |||
+ | ## SCRIPT COMMANDS ## | ||
+ | |||
+ | log_setup | ||
+ | prerequisite_check | ||
+ | |||
+ | # Grab all volume IDs attached to this instance | ||
+ | volume_list=$(aws ec2 describe-volumes --region $region --filters Name=attachment.instance-id, | ||
+ | |||
+ | snapshot_volumes | ||
+ | cleanup_snapshots | ||
+ | </ | ||
+ | |||
The script can be used from crontab as follows:
<sxh shell>
# Minute  Hour  Day-of-month  Month  Day-of-week  Command

# makes SHORT-term backups every day; deletes short-term snapshots older than 7 days.
5 2 * * * /

# makes LONG-term backups every Saturday night; deletes long-term snapshots older than 30 days.
30 2 * * 6 /

</sxh>
+ | |||
+ | ===== IAM Domains and Supporting Services ===== | ||
+ | |||
+ | Which services do you need for identity management? | ||
+ | https:// | ||
+ | |||
* Identification
* Authentication and Authorization
* Access Governance
* Accountability
+ | |||
+ | {{https:// | ||
+ | |||
===== Policies =====
Via policies, assigned to groups, which are assigned to users,
one can configure which resources are reachable for a particular user.
+ | |||
+ | |||
== Write access to a S3 bucket ==

An example policy; the bucket name is a placeholder:

<sxh javascript>
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBucketWrite",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
</sxh>
+ | |||
+ | |||
===== Route tables =====

Route tables define how traffic leaves the subnet. \\
Traffic destined for the **Destination** is sent to the **Target**.
+ | |||
+ | {{https:// | ||
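As an illustration, a route table for a hypothetical public subnet combines the implicit local route for the VPC CIDR with a default route to an internet gateway (the gateway ID is made up):

^Destination^Target^
|10.0.0.0/16|local|
|0.0.0.0/0|igw-0abc1234|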
+ | |||
+ | |||
+ | ===== Lambdas ===== | ||
+ | |||
+ | ||| | ||
+ | |Programming model|http:// | ||
+ | ||| | ||
+ | |||
+ | Configuring functions: | ||
+ | https:// | ||
+ | |||
== Python ==

Use the Python API boto3.
From boto3, use the client API; don't use the resource API.

Here is the documentation for the available services: http://
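A minimal sketch of a Python Lambda handler. In a real function you would create a boto3 //client// inside it (as recommended above); here the handler stays self-contained so it can run locally, and the event shape is hypothetical:

```python
import json

def lambda_handler(event, context):
    # In a real Lambda you would use the boto3 client API here, e.g.
    #   s3 = boto3.client("s3")  # client API, not the resource API
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"hello {name}"})}

print(lambda_handler({"name": "AWS"}, None))
```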
+ | |||
+ | == Java - custom runtime == | ||
+ | |||
+ | Build a custom Java runtime for AWS Lambda | ||
+ | |||
+ | https:// | ||
+ | |||
+ | ===== Organisations ===== | ||
+ | |||
+ | http:// | ||
+ | |||
+ | |||
+ | |||
==== SCP ====

The policies are applied at the organization level.

This is an example of how all services except sts, s3, and iam are denied:
+ | |||
<sxh javascript>
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllOutsideStsS3Iam",
      "Effect": "Deny",
      "NotAction": [
        "sts:*",
        "s3:*",
        "iam:*"
      ],
      "Resource": "*"
    }
  ]
}
</sxh>
+ | |||
+ | This is how policies are evaluated. | ||
+ | https:// | ||
+ | |||
+ | {{https:// | ||
===== CloudFormation =====

Debugging can be done by following the logs:

Connect via SSH to the instance that is currently being modified and follow the logs:
+ | |||
+ | < | ||
+ | tail -f / | ||
+ | </ | ||
+ | |||
+ | |||
+ | ===== ELB ===== | ||
For experimenting with ELB, use the following script when provisioning EC2 instances to deploy a webserver with a simple index.html.

It will serve a default site with the name of the instance in it.
+ | |||
+ | < | ||
+ | #!/bin/bash | ||
+ | yum update -y | ||
+ | yum install -y httpd24 php56 mysql55-server php56-mysqlnd | ||
+ | service httpd start | ||
+ | chkconfig httpd on | ||
+ | groupadd www | ||
+ | usermod -a -G www ec2-user | ||
+ | chown -R root:www /var/www | ||
+ | chmod 2775 /var/www | ||
+ | find /var/www -type d -exec chmod 2775 {} + | ||
+ | find /var/www -type f -exec chmod 0664 {} + | ||
+ | echo "<? | ||
+ | uname -n > / | ||
+ | </ | ||
+ | |||
+ | |||
===== STS =====

You can give users a valid STS token using this service.
The users must first authenticate against something else, such as a corporate app.
Then they can use the token to identify themselves to AWS services.
You can then programmatically check whether the STS token is valid.
+ | |||
+ | {{https:// | ||
+ | |||
+ | Details: https:// | ||
+ | |||
+ | ===== S3 ===== | ||
+ | |||
+ | Storage classes | ||
+ | |||
+ | {{https:// | ||
+ | |||
+ | |||
+ | |||
===== Guard Duty =====

* analyzes logs (VPC Flow Logs, CloudTrail)
* identifies findings, each with a severity rating
+ | |||
+ | |||
+ | {{https:// | ||
+ | |||
+ | |||
+ | ===== Shield ===== | ||
+ | |||
+ | |||
+ | AWS Shield protects the OSI model’s infrastructure layers (Layer 3 Network, Layer 4 Transport) | ||
+ | |||
+ | AWS **Shield is a managed Distributed Denial of Service (DDoS) protection service**, | ||
+ | whereas AWS WAF is an application-layer firewall that controls access via Web ACL’s. | ||
+ | |||
+ | See https:// | ||
+ | |||
+ | {{https:// | ||
+ | |||
+ | Shield **" | ||
+ | |||
+ | Shield **" | ||
+ | |||
+ | {{https:// | ||
+ | |||
+ | {{https:// | ||
+ | |||
+ | {{https:// | ||
+ | |||
+ | |||
+ | ===== WAF Web Application Firewall ===== | ||
+ | |||
+ | See https:// | ||
+ | |||
+ | |||
+ | Network firewalls operate at Layer 3 (Network) and only understand the | ||
+ | * source IP Address, | ||
+ | * port, and | ||
+ | * protocol. | ||
+ | |||
+ | AWS **Security Group**s are a great example of this. | ||
+ | {{https:// | ||
+ | |||
+ | |||
WAF works on **OSI layer 7** (Application),

which means it understands higher-level protocols, such as an
+ | * HTTP(S) request, including its | ||
+ | * headers, | ||
+ | * body, | ||
+ | * method, and | ||
+ | * URL | ||
+ | |||
+ | |||
+ | * WAF interacts with | ||
+ | * CloudFront distributions, | ||
+ | * application load balancers, | ||
+ | * AppSync GraphQL, | ||
+ | * APIs and | ||
+ | * API Gateway REST APIs. | ||
+ | |||
+ | A WAF can be configured to detect traffic from the following: | ||
+ | |||
+ | * specific IPs; | ||
+ | * IP ranges or country of origin; | ||
+ | * content patterns in request bodies, headers and cookies; | ||
+ | * SQL injection attacks; | ||
+ | * cross-site scripting; and | ||
+ | * IPs exceeding rate-based rules | ||
+ | |||
+ | When incoming traffic matches any of the configured rules, WAF can reject requests, return custom responses or simply create metrics to monitor applicable requests. | ||
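As a hedged sketch of the rate-based rules mentioned above, a WAFv2 rule that blocks IPs exceeding a request limit might look like this (rule name, priority, limit, and metric name are all hypothetical):

```json
{
  "Name": "rate-limit-per-ip",
  "Priority": 1,
  "Statement": {
    "RateBasedStatement": {
      "Limit": 1000,
      "AggregateKeyType": "IP"
    }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "RateLimitPerIp"
  }
}
```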
+ | |||
+ | |||
+ | |||
+ | |||
===== AWS API Gateway =====
Redirects HTTP calls and can modify the content, in both directions.
+ | |||
+ | {{https:// | ||
+ | |||
+ | |||
+ | |||
+ | ===== Authentication and Authorization ===== | ||
+ | |||
+ | - https:// | ||
+ | - https:// | ||
+ | - https:// | ||
+ | - https:// | ||
+ | - https:// | ||
+ | |||
+ | |||
+ | |||
===== SSH via EC2 Instance Connect =====

One can SSH to the instance via the AWS EC2 Instance Connect feature.
It pushes a temporary SSH public key to the target machine, authorized by a dedicated IAM permission.

To set it up, read:
https://


After that, one can SSH to the machine.
+ | |||
+ | SSH Tunneling | ||
+ | < | ||
+ | ssh ec2-user@i-12342f8f3a2bc6367 -i ~/ | ||
+ | </ | ||
+ | |||
+ | |||
+ | SSH connect | ||
+ | < | ||
+ | ssh ec2-user@i-09452f8f3a2bc6367 -i ~/ | ||
+ | </ | ||
+ | |||
+ | |||
+ | ==== Boto3 ==== | ||
+ | |||
+ | |||
+ | === generate a signed URL === | ||
+ | |||
Use the script to generate a signed URL. The bucket name and object key below are placeholders:

<sxh python>
import logging
import sys

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(stream=sys.stdout, level=logging.INFO)


def generate():
    s3 = boto3.client('s3')

    bucket_name = 'example-bucket'   # placeholder bucket name
    object_key = "upload/test.txt"   # placeholder object key

    try:
        response = s3.generate_presigned_post(
            Bucket=bucket_name,
            Key=object_key,
        )
    except ClientError as e:
        logger.error(e)
        return
    logger.info("Presigned POST response: %s", response)

    # formulate a curl command from the returned URL and form fields
    form_values = " ".join(
        f"-F {key}={value}" for key, value in response['fields'].items())

    print('curl command:')
    print(f"curl {form_values} -F file=@test.txt {response['url']}")


if __name__ == '__main__':
    generate()
</sxh>
+ | |||
+ | |||
===== CloudWatch agent =====

The agent sends logs to CloudWatch.

Done on Ubuntu 22.


Give the EC2 role of that machine the ability to access the CloudWatch and EC2 services,
so that it can read EC2 metadata.
+ | |||
<sxh javascript>
{
  "Effect": "Allow",
  "Action": [
    "cloudwatch:PutMetricData",
    "ec2:DescribeTags",
    "logs:CreateLogGroup",
    "logs:CreateLogStream",
    "logs:DescribeLogGroups",
    "logs:DescribeLogStreams",
    "logs:PutLogEvents"
  ],
  "Resource": "*"
}
</sxh>
+ | |||
+ | |||
+ | Config files: | ||
+ | |||
+ | file: " | ||
+ | |||
+ | Commands to run as root, to get permissions to access log files. | ||
+ | |||
+ | <sxh shell> | ||
+ | { | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | }, | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | { | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | }, | ||
+ | { | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | } | ||
+ | ] | ||
+ | } | ||
+ | }, | ||
+ | " | ||
+ | }, | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | }, | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | ], | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | ] | ||
+ | }, | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | ], | ||
+ | " | ||
+ | } | ||
+ | } | ||
+ | } | ||
+ | } | ||
+ | |||
+ | |||
+ | </ | ||
+ | |||
+ | |||
The Ansible playbook to install the agent:
+ | |||
+ | file: " | ||
+ | <sxh ansible> | ||
+ | --- | ||
+ | # Configuring syslog rotation | ||
+ | - hosts: "{{ host }}" | ||
+ | become: yes | ||
+ | become_method: | ||
+ | become_user: | ||
+ | tasks: | ||
+ | |||
+ | - name: Download awslogs-agent installer | ||
+ | get_url: | ||
+ | url: https:// | ||
+ | dest: / | ||
+ | owner: " | ||
+ | group: " | ||
+ | force: yes | ||
+ | mode: 0777 | ||
+ | |||
+ | |||
+ | - name: Install deb | ||
+ | apt: | ||
+ | deb: / | ||
+ | become: true | ||
+ | register: result | ||
+ | |||
+ | |||
+ | - debug: | ||
+ | msg: "{{ result }}" | ||
+ | |||
+ | |||
+ | - name: Remove the / | ||
+ | file: | ||
+ | path: / | ||
+ | state: absent | ||
+ | |||
+ | |||
+ | - name: awslogs config folder | ||
+ | file: | ||
+ | path: / | ||
+ | state: directory | ||
+ | mode: " | ||
+ | owner: root | ||
+ | group: root | ||
+ | recurse: yes | ||
+ | |||
+ | |||
+ | - name: Copy a " | ||
+ | copy: | ||
+ | src: "{{ playbook_dir }}/ | ||
+ | dest: / | ||
+ | mode: " | ||
+ | remote_src: false | ||
+ | |||
+ | |||
+ | - name: now fetch the config and run agent | ||
+ | ansible.builtin.shell: | ||
+ | cmd: amazon-cloudwatch-agent-ctl -s -a fetch-config -c file:/ | ||
+ | become: true | ||
+ | </ | ||
+ | |||
+ | |||
+ | Check status of service, should be " | ||
+ | <sxh shell> | ||
+ | sudo systemctl status amazon-cloudwatch-agent.service | ||
+ | </ | ||
+ | |||
+ | |||
+ | Check status of agent via ctl tool | ||
+ | <sxh shell> | ||
+ | $ amazon-cloudwatch-agent-ctl -a status | ||
+ | |||
+ | |||
+ | { | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | } | ||
+ | |||
+ | </ | ||
+ | |||
+ | |||
+ | |||
+ | |||
+ | |||
The consolidated configuration,
which the system assembles from the manual config files above,
can be checked via:
+ | |||
+ | <sxh shell> | ||
+ | $ cat / | ||
+ | |||
+ | |||
+ | connectors: {} | ||
+ | exporters: | ||
+ | awscloudwatch: | ||
+ | force_flush_interval: | ||
+ | max_datums_per_call: | ||
+ | max_values_per_datum: | ||
+ | namespace: CWAgent | ||
+ | region: eu-central-1 | ||
+ | resource_to_telemetry_conversion: | ||
+ | enabled: true | ||
+ | extensions: {} | ||
+ | processors: | ||
+ | ec2tagger: | ||
+ | ec2_instance_tag_keys: | ||
+ | - AutoScalingGroupName | ||
+ | ec2_metadata_tags: | ||
+ | - ImageId | ||
+ | - InstanceId | ||
+ | - InstanceType | ||
+ | refresh_interval_seconds: | ||
+ | receivers: | ||
+ | telegraf_disk: | ||
+ | collection_interval: | ||
+ | initial_delay: | ||
+ | telegraf_mem: | ||
+ | collection_interval: | ||
+ | initial_delay: | ||
+ | service: | ||
+ | extensions: [] | ||
+ | pipelines: | ||
+ | metrics/ | ||
+ | exporters: | ||
+ | - awscloudwatch | ||
+ | processors: | ||
+ | - ec2tagger | ||
+ | receivers: | ||
+ | - telegraf_mem | ||
+ | - telegraf_disk | ||
+ | telemetry: | ||
+ | logs: | ||
+ | development: | ||
+ | disable_caller: | ||
+ | disable_stacktrace: | ||
+ | encoding: console | ||
+ | error_output_paths: | ||
+ | initial_fields: | ||
+ | level: info | ||
+ | output_paths: | ||
+ | - / | ||
+ | |||
+ | </ | ||
+ | |||
+ | |||
The agent's own logs are in:
+ | |||
+ | <sxh shell> | ||
+ | $ cat / | ||
+ | </ | ||
+ | |||
+ | |||
+ | ==== Backup strategies ==== | ||
+ | |||
+ | Which patterns for high availability are available? | ||
+ | |||
+ | How much do they cost? | ||
+ | |||
+ | https:// | ||
+ | |||
+ | |||
+ | |||
+ | {{https:// | ||
+ | |||
+ | |||
+ | |||
+ | ===== Migration ===== | ||
+ | |||
The Prescriptive Guidance describes well how to organize it:
+ | |||
+ | https:// | ||
+ | |||
+ | {{https:// | ||
+ | |||
+ | |||
+ | ===== Domains ===== | ||
+ | |||
+ | ==== Finance and credit card ==== | ||
+ | |||
Real-time credit card fraud evaluation:
+ | |||
+ | https:// | ||
+ | |||
+ | {{https:// | ||
+ | |||
+ | |||
+ | ==== Goldman Sachs ==== | ||
+ | |||
Achieving cross-regional availability
+ | |||
+ | https:// | ||
+ | |||
+ | |||