AWS (Day 0)

the last article you'll ever need to read to understand AWS fundamentals

infrastructure

Disclaimers:

  1. Opinions expressed in this post (and in all my posts) are, unless otherwise specified, solely those of the author: me. They absolutely do not reflect the views, policies, or positions of any organization, employer, or affiliated group.

  2. I don't agree with it, but my employer says I don't have the right to share the source code I've written in the course of my work. I never want trouble, so I'll say as much as I can without divulging any specific information.

  3. Writing acts as a powerful therapeutic outlet, allowing emotions such as anger and irritation to be released.

  4. If you are a colleague and recognize the situation described below, remember: I'm critiquing the system, not the people.

  5. Just in case my honesty lands me in hot water, my resume is here!

  6. This article is written primarily for Tidiane and Dethie.

  7. I've strived for accuracy throughout this piece, but if you catch any errors, please reach out—I'd be grateful for the feedback and happy to make updates!

  8. Render unto Caesar what is Caesar's: Claude helped me find up-to-date documentation links and also helped beautify all the diagrams used (and not used) in this article.



Hook

One of the joys of corporate life: you're politely asked what training would help you be more productive, and a few months later, you're enrolled in something completely unrelated to your request.

I've been asked to take a course called "AWS Automation and Observability." Of course, I only have one choice: accept. Even though I don't need it. Even though I find it thoroughly boring. Even though I consider the whole thing unnecessary.

Being forced into training you didn't choose (especially when you already have solid technical skills) is frustrating. It's even more frustrating when the content feels dated; someone in 2026 teaching Jenkins as if it's revolutionary feels like being taught to use a rotary phone.

There's a certain irony in being told:

« It is strictly forbidden to record, film, or capture the training sessions by any means whatsoever. The course materials must remain confidential. »

...when the material itself is widely available online and hardly groundbreaking, and when (at least on day 0) the instructor handed out no custom material at all. I respect the instructor's knowledge of the subject; he clearly knows his stuff. But the format, the delivery, and the relevance left much to be desired.

Perhaps the problem isn't the instructor but the constraints he was working under: limited time, a class mixing too many people of very different levels, or budget restrictions. Still, mandatory training should be engaging, current, and actually useful. This was none of those things.

Since there's no escaping it, might as well make the most of it and stay in a good mood. Let's talk about what I wish the first day had covered and how it should have been covered.

AWS

Intro

Amazon Web Services (AWS) is Amazon's cloud computing platform, launched in 2006 with two initial services: S3 (Simple Storage Service) and EC2 (Elastic Compute Cloud). The idea emerged from Amazon's internal infrastructure challenges—they had built robust, scalable systems to handle their e-commerce peaks, and realized they could rent that infrastructure to others.

Today, AWS offers 200+ services spanning compute, storage, databases, machine learning, networking, and more. It holds roughly 31% of the global cloud market share (as of 2024), making it the largest cloud provider. Phew.

In Africa, the Cape Town region (af-south-1) was launched in 2020. It is AWS's first and currently only African region. There's no dedicated region yet for West Africa specifically. Users typically connect to eu-west-3 (Paris) or af-south-1 (Cape Town) depending on latency requirements.

AWS isn't the only option, though.

Alternatives: the hyperscalers

Hyperscalers are cloud providers operating at massive scale; we are talking about data centers across multiple continents, serving millions of customers simultaneously. The "Big 3" dominate the market:

Provider | Share | Strengths | Best For
AWS | ~31% | Widest service catalog, mature ecosystem | General purpose, enterprise, startups
Azure | ~24% | Windows integration, hybrid cloud | Microsoft shops, hybrid deployments
GCP | ~11% | Data analytics, ML/AI (TensorFlow), Kubernetes | Data-heavy workloads, ML projects

Notable alternatives:

  • DigitalOcean / Linode / Vultr — Simpler, developer-friendly, cheaper for small workloads
  • OVHcloud — European provider, good for GDPR compliance, competitive pricing
  • Scaleway — French provider, strong EU presence, interesting ARM offerings
  • Hetzner — German provider, excellent price-to-performance ratio, popular for self-hosted projects

The hyperscaler choice often comes down to existing tooling, team expertise, and specific service needs rather than raw capability. They all solve similar problems.

As an advocate for the free software movement, what is my advice?

GCP: Proprietary and evil, I admit, but Google has historically opened up key technologies such as Kubernetes (and many others) to promote interoperability and to encourage customers to leave more closed competitors.

Red Hat OpenShift (on any cloud): Not a hyperscaler per se, but using this platform on a cloud infrastructure allows you to maintain a free and portable software layer between different providers.

Why would a biomedical research center use AWS?

For an organization handling sensitive health data and research workloads:

  • Compliance: HIPAA, GDPR-ready configurations
  • Burst capacity: spin up hundreds of CPUs for genomic analysis, pay only for what you use
  • Global collaboration: share datasets across continents with proper access controls
  • Focus on research: less time managing servers, more time on science



Table of contents

  1. Fundamentals - Where things are (Regions, Availability Zones, Organizations)
  2. Identity and Access Management - Who can do what (IAM)
  3. Network - How things connect
    • VPC basics (CIDR, Subnets)
    • Gateways (IGW, NAT)
    • Hybrid connectivity (VPN, Direct Connect)
    • Network Security (Security Groups, NACLs)
  4. Load Balancing - Traffic distribution
  5. Compute Services - Where code runs
  6. Storage - Where data lives
  7. Observability - How to monitor



AWS concepts → Traditional infrastructure

If you've been managing Linux servers, you already know most of these concepts. AWS just wraps them in managed services with different names:

AWS Service/Concept | Traditional Equivalent | What's Different?
EC2 | Physical server, VM (KVM/VMware) | Rent by the hour, no hardware to buy
VPC | Network subnet, VLAN | Software-defined, fully isolated per customer
Security Groups | iptables rules (instance-level) | Stateful, attached to network interfaces
Network ACLs | iptables at router/subnet level | Stateless, numbered rules, processed in order
S3 | File server (NFS, Samba), MinIO | Object storage (not filesystem), infinite scale, HTTP API
EBS | Hard drive, LVM volume | Network-attached block storage, point-in-time snapshots
EFS | NFS share | Managed NFS, auto-scaling, multi-AZ
RDS | PostgreSQL server you installed | AWS manages backups, patches, replication, HA
Load Balancer (ALB/NLB) | Nginx, HAProxy, Apache mod_proxy | Fully managed, auto-scales, integrated health checks
Auto Scaling | Custom bash scripts watching top | Automatic instance scaling based on CloudWatch metrics
IAM | /etc/passwd, sudo, LDAP/Active Directory | Controls AWS API access, not just OS users
IAM Roles | Service accounts, systemd user credentials | Temporary credentials that auto-rotate
CloudWatch Metrics | Prometheus, Nagios, Zabbix, Grafana | Pre-integrated with all AWS services
CloudWatch Logs | syslog, journald, ELK stack | Centralized log aggregation, SQL-like queries
CloudWatch Alarms | Nagios alerts, monit, custom scripts | Trigger actions (scaling, notifications, Lambda)
CloudTrail | auditd, /var/log/audit/audit.log | Every AWS API call logged (who did what, when)
Lambda | Cron jobs + shell scripts | Event-driven functions, no server to manage
Route 53 | BIND, dnsmasq, PowerDNS | Managed DNS with health checks and routing policies
Regions | Multiple data center locations | Completely isolated, compliance/data residency boundaries
Availability Zones | Separate racks/power/networking in same DC | Physical separation within region, low-latency links

The trade-off: You give up control (can't SSH into the Load Balancer) in exchange for less maintenance (AWS patches it for you). Whether that's worth it depends on your team size, expertise, and what you're trying to build. And if you understood everything inside the table above, you can stop here, I'm serious :\



Want to know more? Okay.

Fundamentals

Regions are geographically isolated clusters of data centers. Each region (e.g., eu-west-3 for Paris, us-east-1 for N. Virginia) operates independently. You choose a region based on:

  • Latency — closer to your users = faster response
  • Compliance — data residency laws may require specific locations
  • Service availability — not all services exist in all regions
  • Pricing — costs vary by region

Availability Zones (AZs) are physically separate data centers within a region, connected by low-latency links. Deploying across multiple Zones provides fault tolerance: if one data center fails, your application survives.

AWS Organizations is a service designed for central governance and management of multiple AWS accounts. It is useful for:

  • Isolating environments (dev/staging/prod)
  • Separating billing by department or project
  • Applying security policies across accounts
  • Consolidated billing with volume discounts

Identity and Access Management

Before you start spinning up resources, you need to understand who can do what in your AWS account. This is where IAM (Identity and Access Management) comes in.

IAM is AWS's authentication and authorization service. It controls who can access your AWS resources and what actions they can perform. Get this wrong, and you'll either lock yourself out or leave your infrastructure wide open to attackers.

Core IAM concepts:

Component | Purpose | When to Use
Users | Individual identities with permanent credentials | Human access via Console/CLI
Groups | Collections of users with shared permissions | Organizing users by function (developers, ops, analysts)
Roles | Temporary credentials that can be assumed | Services, applications, cross-account access
Policies | JSON documents defining permissions | Attach to users, groups, or roles

IAM Policies are JSON documents that specify allowed or denied actions. Example:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::my-research-data/*"]
  }]
}

This policy allows reading objects from a specific S3 bucket, nothing more. That's the principle of least privilege: grant only the minimum permissions needed to do the job.

Why roles matter more than you think:

The biggest mistake beginners make is hardcoding AWS credentials in their applications. I should be honest with you here: yes, I did it when I was younger. But today, please, I'm telling you: don't do this.

Example scenario: Your EC2 instance needs to read files from S3.

Wrong approach: Store AWS access keys in a config file on the instance.

  • Keys can be stolen if the instance is compromised
  • Keys don't rotate automatically
  • Difficult to audit who used which credentials

Right approach: Attach an IAM role with S3 read permissions to the EC2 instance (a minimal sketch follows this list).

  • No credentials stored anywhere
  • Temporary credentials rotate automatically
  • CloudTrail logs show exactly which instance accessed which resources
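Here's roughly what the right approach looks like in code: a minimal boto3 sketch, assuming the instance has a role attached with the read-only policy shown earlier. The bucket and key names are placeholders.

import boto3

# No access keys in the code or in config files: on an EC2 instance with an
# IAM role attached, boto3 picks up temporary credentials from the instance
# metadata service and rotates them automatically.
s3 = boto3.client("s3")

# Placeholder bucket/key, matching the policy example above.
obj = s3.get_object(Bucket="my-research-data", Key="cohort-2024/results.csv")
print(obj["Body"].read()[:200])

The same code runs unchanged on Lambda or ECS; only the way the role is attached (execution role, task role) changes.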

Root account protection:

When you create an AWS account, you get a root user with unlimited access. This is dangerous. Best practices:

  1. Enable MFA (Multi-Factor Authentication) on the root account
  2. Create IAM users for daily work, even for administrators
  3. Lock away root credentials - only use them for tasks that require root (billing, account closure)

AWS Organizations and Service Control Policies (SCPs) add another layer: they set permission boundaries across multiple accounts. Even if an IAM policy allows an action, an SCP can block it at the organization level. Useful for enforcing compliance (e.g., "no one can launch instances outside eu-west-3").
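As a sketch of what such a guardrail could look like (the policy body is standard SCP JSON; the policy name and the boto3 call are illustrative assumptions, and this must run from the organization's management account):

import json
import boto3

# Hypothetical SCP: deny launching EC2 instances anywhere but eu-west-3.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": "eu-west-3"}},
    }],
}

org = boto3.client("organizations")
org.create_policy(
    Name="restrict-to-paris",
    Description="Block EC2 launches outside eu-west-3",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)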

In the biomedical research context:

You'd use IAM to:

  • Give researchers read-only access to specific S3 buckets containing datasets
  • Allow the data engineering team to manage ETL pipelines (write access to certain resources)
  • Grant your analysis scripts (running on EC2/Lambda) temporary credentials to access databases
  • Prevent anyone from accidentally exposing patient data by restricting public S3 bucket creation

Security Groups (covered below) control network access. IAM controls identity and authorization. You need both.

Network

Maybe I did need this class after all. I'm having flashbacks... crimping tool... crossover cable... straight cable... Cat5... class C... IPv6... Layer 3... Layer 4... Heeelp! \o/

A VPC (Virtual Private Cloud) is your isolated network within AWS. With a VPC you define (a short boto3 sketch follows this list):

  • CIDR block: your IP range (e.g., 10.0.0.0/16 gives you 65,536 IPs)
  • Subnets: subdivisions of your VPC, placed in specific Zones
    • Public subnets: resources can have public IPs, direct internet access
    • Private subnets: no direct internet access, only internal communication
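A minimal boto3 sketch of the above, assuming the Paris region and placeholder CIDR ranges; tagging and error handling are omitted:

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-3")

# The VPC gets a /16 (65,536 addresses)...
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# ...carved into a "public" and a "private" /24 in the same Availability Zone.
public_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="eu-west-3a"
)["Subnet"]["SubnetId"]
private_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.10.0/24", AvailabilityZone="eu-west-3a"
)["Subnet"]["SubnetId"]

Note that a subnet is only truly "public" once its route table points at an Internet Gateway, which is exactly what the Gateways section below wires up.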

Gateways:

  • Internet Gateway (IGW): enables internet access for public subnets. Attach one to your VPC, add routes, done (sketch after this list).
  • NAT Gateway: lets private subnet resources reach the internet (for updates, API calls) without being directly reachable.
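Continuing the sketch above (the VPC and subnet IDs below are placeholders standing in for the ones created earlier):

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-3")
vpc_id = "vpc-0123456789abcdef0"            # placeholder
public_subnet = "subnet-0123456789abcdef0"  # placeholder

# Create and attach the Internet Gateway.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Route 0.0.0.0/0 through the IGW and associate the table with the public subnet.
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_subnet)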

Hybrid connectivity:

  • Site-to-Site VPN: encrypted tunnel between your on-premises network and AWS VPC. Quick to set up, uses public internet, variable latency.
  • AWS Direct Connect: dedicated physical connection to AWS. Consistent latency, higher bandwidth, but requires physical setup and contracts. Useful for heavy data transfer or strict latency requirements.

Network Security:

AWS provides two complementary layers of network security: Security Groups and Network ACLs. Understanding when to use each is critical.

Security Groups are stateful firewalls attached to resources (EC2 instances, RDS databases, etc.). They act like bouncers at the door of individual servers. Define allowed inbound/outbound traffic by port, protocol, and source. If you allow inbound traffic on port 80, the return traffic is automatically allowed—that's what "stateful" means.

Network ACLs (NACLs) are stateless firewalls at the subnet level. They filter traffic entering or leaving entire subnets. Rules are evaluated in numerical order (100, 200, 300...), and the first match wins. Unlike Security Groups, NACLs require explicit rules for both inbound and outbound traffic.

Security Groups vs. Network ACLs:

Feature | Security Groups | Network ACLs
Scope | Instance level (attached to ENI) | Subnet level
State | Stateful (return traffic auto-allowed) | Stateless (must allow both directions)
Rules | Only Allow rules | Allow + Deny rules
Evaluation | All rules evaluated | Processed in order, first match wins
Default behavior | Deny all inbound, allow all outbound | Allow everything
Typical use | Your primary security control | Edge cases, compliance, IP blocking

When to use what:

Security Groups should be your default choice for 95% of scenarios (a boto3 sketch follows this list):

  • Allow SSH (port 22) only from your office IP
  • Let web servers accept HTTP/HTTPS from anywhere
  • Allow database connections only from application servers
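A hedged boto3 sketch of the first two rules above; the VPC ID and office IP are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-3")

sg_id = ec2.create_security_group(
    GroupName="web-servers",
    Description="HTTPS from anywhere, SSH from the office only",
    VpcId="vpc-0123456789abcdef0",  # placeholder
)["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[
        # HTTPS open to the world.
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        # SSH restricted to a single office IP.
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "198.51.100.10/32", "Description": "office"}]},
    ],
)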

Network ACLs are for specific edge cases:

  • Block a known malicious IP range at the subnet boundary
  • Compliance requirements mandating subnet-level controls
  • Defense-in-depth: add a second layer of protection

Example scenario - Blocking an attacking IP:

Your web servers (in a public subnet) are under attack from 203.0.113.0/24. Security Groups can only allow traffic, not deny it. Solution: add a NACL rule.

NACL Inbound Rules (evaluated in order):
Rule 100: DENY   TCP   80   from 203.0.113.0/24   (block attackers)
Rule 200: ALLOW  TCP   80   from 0.0.0.0/0        (allow everyone else)
Rule *:   DENY   ALL   ALL  from 0.0.0.0/0        (default deny)

Because NACLs are stateless, you also need outbound rules for responses:

NACL Outbound Rules:
Rule 100: ALLOW  TCP  1024-65535  to 0.0.0.0/0   (ephemeral ports)
Rule *:   DENY   ALL  ALL         to 0.0.0.0/0    (default deny)

The 1024-65535 range covers ephemeral ports used by client connections. Yes, this is annoying. This is why most people stick to Security Groups.
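For completeness, the deny rule from the example above expressed with boto3 (the NACL ID is a placeholder; protocol "6" means TCP):

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-3")

# Rule 100 is evaluated before the allow at 200, so the attackers are dropped first.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder
    RuleNumber=100,
    Protocol="6",                          # TCP
    RuleAction="deny",
    Egress=False,                          # inbound rule
    CidrBlock="203.0.113.0/24",
    PortRange={"From": 80, "To": 80},
)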

Pro tip: The default NACL in your VPC allows all traffic. Most AWS users never modify it. Only touch NACLs when you have one of the specific reasons above.

Flow Logs: Capture network traffic metadata (source/destination IP, port, protocol, action) for debugging and audit. Essential for troubleshooting connectivity issues and security investigations. Send logs to CloudWatch or S3 for analysis.

Load Balancing

High Availability means your application stays up even when parts of the infrastructure fail. In AWS, this typically involves deploying across multiple Availability Zones and using load balancers to distribute traffic.

Elastic Load Balancing (ELB) automatically distributes incoming traffic across multiple targets. AWS offers several types:

Type | Layer | Best For
Application Load Balancer (ALB) | Layer 7 (HTTP/HTTPS) | Web apps, microservices, content-based routing
Network Load Balancer (NLB) | Layer 4 (TCP/UDP) | Ultra-low latency, millions of requests/sec
Gateway Load Balancer | Layer 3 | Third-party virtual appliances (firewalls, IDS)

Auto Scaling automatically adjusts the number of EC2 instances based on demand (a sketch follows this list):

  • Scheduled scaling: scale at predictable times (e.g., business hours)
  • Dynamic scaling: react to CloudWatch metrics (CPU, memory, custom metrics)
  • Predictive scaling: ML-based forecasting of future demand
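As an illustration of dynamic scaling, here is a target-tracking policy sketch with boto3; the Auto Scaling group name is a placeholder:

import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-3")

# Keep average CPU across the group around 60%; AWS adds or removes instances as needed.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # placeholder group
    PolicyName="keep-cpu-around-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)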

Multi-AZ resilience: Deploy instances across multiple AZs, place a load balancer in front, and Auto Scaling replaces failed instances automatically. This is the foundation of most production architectures on AWS.

Compute Services

Compute refers to the processing power needed to run applications. AWS offers several options depending on your level of control vs. convenience:

EC2 (Elastic Compute Cloud) — Virtual machines you fully control. Choose your OS, instance type (CPU/RAM), and storage. Pricing models:

Model | Description | Savings
On-Demand | Pay by the hour/second, no commitment | Baseline pricing
Reserved | 1 or 3-year commitment for steady workloads | Up to 72% off
Spot | Bid on unused capacity, can be interrupted | Up to 90% off
Savings Plans | Flexible commitment across instance families | Up to 72% off

Container services — For containerized workloads:

  • ECS (Elastic Container Service): AWS-native container orchestration. Simpler than Kubernetes, tightly integrated with AWS.
  • EKS (Elastic Kubernetes Service): Managed Kubernetes. Use this if you need Kubernetes compatibility or already use K8s.
  • Fargate: Serverless compute for containers. No EC2 instances to manage—just define CPU/memory and run your containers.

When to use what:

  • EC2: Full control needed, legacy apps, specific OS requirements
  • ECS + Fargate: Simple containerized apps, AWS-native approach
  • EKS: Kubernetes expertise on the team, multi-cloud portability needed

Cloud computing

Storage

AWS offers different storage types for different use cases:

Amazon S3 — Object storage for any type of data. Highly durable (11 nines), infinitely scalable. Storage classes optimize cost based on access patterns:

Class | Access Pattern | Retrieval
S3 Standard | Frequent access | Instant
S3 Intelligent-Tiering | Unknown/changing patterns | Instant (auto-moves data)
S3 Standard-IA | Infrequent access | Instant
S3 Glacier Instant | Archive, rare access | Milliseconds
S3 Glacier Flexible | Archive | Minutes to hours
S3 Glacier Deep Archive | Long-term archive | 12-48 hours

Lifecycle policies automatically transition objects between classes or delete them after a period.
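A sketch of such a policy with boto3, assuming a hypothetical bucket and prefix: raw files move to Glacier after 90 days and expire after roughly five years.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-research-data",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-raw-runs",
            "Filter": {"Prefix": "raw/"},
            "Status": "Enabled",
            # After 90 days, transition to Glacier Flexible Retrieval...
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            # ...and delete after about five years.
            "Expiration": {"Days": 1825},
        }]
    },
)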

Block vs. File storage:

  • EBS (Elastic Block Store): Block storage attached to a single EC2 instance. Think: hard drive for your VM. Great for databases.
  • EFS (Elastic File System): Shared file system mountable by multiple EC2 instances. Think: NFS. Great for shared application data.

Databases:

Amazon RDS (Relational Database Service) — Managed relational databases for PostgreSQL (and other engines I won't talk about, because I am a free software advocate). AWS handles the operational heavy lifting: backups, patching, OS updates, and replication. You focus on schema design and queries.

High Availability and scaling options:

Understanding the difference between Multi-AZ and Read Replicas is critical:

Feature | Multi-AZ | Read Replicas
Purpose | High availability (failover) | Read scaling, analytics
Replication | Synchronous (to standby in another AZ) | Asynchronous (can lag slightly)
Failover | Automatic (1-2 minutes) | Manual promotion required
Readable | Standby is not accessible | Yes, handles read traffic
Use case | Production databases requiring uptime | Distribute read load, reporting queries
Cost | ~2x (paying for standby) | Additional instance cost per replica

Multi-AZ deployment: Your primary database synchronously replicates to a standby instance in a different Availability Zone. If the primary fails (hardware issue, AZ outage), RDS automatically fails over to the standby. Your application reconnects to the same endpoint—no code changes needed. This is for disaster recovery, not performance.

Read Replicas: Create up to 15 read-only copies of your database. Use them to offload read traffic (analytics, reporting) from the primary. Replication is asynchronous, so there may be a slight lag (usually milliseconds to seconds). You can promote a replica to primary if needed, but it's a manual operation.

Amazon Aurora — AWS's proprietary database engine, PostgreSQL compatible:

  • Performance: Up to 3x faster than PostgreSQL (according to AWS benchmarks)
  • Storage auto-scaling: Starts at 10GB, grows automatically up to 128TB in 10GB increments
  • High availability built-in: Data replicated 6 ways across 3 AZs automatically
  • Read scaling: Up to 15 Aurora Replicas with sub-10ms replica lag
  • Cost: More expensive than standard RDS, but better price-to-performance for demanding workloads

Aurora is AWS's recommended choice for new applications that need high performance and availability.

Backup Strategies:

RDS provides two backup mechanisms:

  1. Automated backups:

    • Enabled by default
    • Point-in-time recovery: restore your database to any second within the retention period (1-35 days)
    • Full daily backups + transaction logs
    • Deleted when you delete the RDS instance (unless you configure a final snapshot)
  2. Manual snapshots:

    • User-initiated, persist until you explicitly delete them
    • Useful before major changes (schema migrations, upgrades)
    • Can be copied across regions for disaster recovery

Pro tip: Before any risky operation (major version upgrade, schema change), create a manual snapshot. Automated backups might not cover you if the retention period expires.
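A quick boto3 sketch of that pro tip, with a hypothetical instance identifier:

import boto3

rds = boto3.client("rds", region_name="eu-west-3")

# Manual snapshot before a risky change; it persists until you delete it.
rds.create_db_snapshot(
    DBInstanceIdentifier="research-postgres",                # placeholder
    DBSnapshotIdentifier="research-postgres-before-upgrade",
)

# Wait until the snapshot is actually available before touching the database.
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="research-postgres-before-upgrade"
)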

RDS vs. NoSQL:

Not all data fits the relational model. Amazon DynamoDB is AWS's managed NoSQL database (key-value and document store):

Feature | RDS (SQL) | DynamoDB (NoSQL)
Data model | Tables with fixed schema, relations | Key-value / JSON documents, flexible schema
Queries | Complex SQL (JOINs, aggregations) | Simple key lookups, limited querying
Scaling | Vertical (bigger instance), Read Replicas | Horizontal (automatic), virtually unlimited
Transactions | Full ACID support | Limited transactions (single-item ACID, multi-item with conditions)
Best for | Complex queries, reporting, traditional apps | High-throughput simple queries, IoT, gaming, mobile backends
Pricing | Pay for instance size (even if idle) | Pay for storage + read/write capacity used

When to use what:

  • RDS: You need SQL, complex queries, transactions, existing relational schema, or you're migrating from on-premises MySQL/PostgreSQL
  • DynamoDB: Key-value access patterns, need massive scale, unpredictable traffic spikes, single-digit millisecond latency requirements

Most traditional applications (e-commerce, ERP, content management) use RDS. DynamoDB shines for high-scale, low-latency use cases where you don't need complex querying.
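To make the "key-value access pattern" concrete, here is a small boto3 sketch against a hypothetical DynamoDB table with device_id as partition key and timestamp as sort key:

import boto3

dynamodb = boto3.resource("dynamodb", region_name="eu-west-3")
table = dynamodb.Table("sensor-readings")  # hypothetical table

# Writes and reads are key-based operations, not SQL.
table.put_item(Item={"device_id": "probe-42", "timestamp": 1710000000, "temp_c": 21})
resp = table.get_item(Key={"device_id": "probe-42", "timestamp": 1710000000})
print(resp.get("Item"))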

cloud storage

Observability

Observability = understanding what's happening inside your systems. AWS provides Amazon CloudWatch as the central service for this.

Metrics: Numerical data points over time. AWS services automatically send metrics (CPU usage, network traffic, request counts). You can also publish custom metrics from your applications.

Logs: Centralized log storage and analysis. Stream logs from EC2, Lambda, containers, or any application. Use Log Insights to query across log groups with a SQL-like syntax.

Alarms: Watch metrics and trigger actions when thresholds are breached:

  • Send notifications via SNS (email, SMS, Slack)
  • Trigger Auto Scaling to add/remove instances
  • Execute Lambda functions for custom remediation

Example alarm: "If average CPU > 80% for 5 minutes, add 2 instances and notify the ops team."
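Roughly what that alarm looks like with boto3; the SNS topic and scaling-policy ARNs are placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-3")

cloudwatch.put_metric_alarm(
    AlarmName="web-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Statistic="Average",
    Period=300,              # 5-minute window
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[
        # Placeholder ARNs: notify the ops team and trigger a scale-out policy.
        "arn:aws:sns:eu-west-3:123456789012:ops-alerts",
        "arn:aws:autoscaling:eu-west-3:123456789012:scalingPolicy:example-id:autoScalingGroupName/web-asg:policyName/add-two-instances",
    ],
)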

Beyond CloudWatch:

  • AWS X-Ray: Distributed tracing for microservices
  • CloudTrail: Audit log of all API calls in your account

cloud observability

One picture is worth a thousand words

AWS Architecture - Overview

[Interactive architecture diagram. Users and front/mobile apps reach the application over HTTPS via Route 53 (DNS). Inside a VPC (10.0.0.0/16) spanning two Availability Zones (eu-west-3a, eu-west-3b), an Internet Gateway fronts the public subnets (10.0.1.0/24, 10.0.2.0/24), which host the ALB/NLB and a NAT Gateway. The private subnets (10.0.10.0/24, 10.0.20.0/24) run EC2 instances (t3.medium) or ECS/Fargate containers, backed by a Multi-AZ RDS PostgreSQL database. Storage: S3 (object), EBS (block), EFS (file). Monitoring and streaming: CloudWatch and MSK (Kafka). In the interactive version, clicking a component shows a detailed explanation and its equivalent in your current stack (Nginx, Django, PostgreSQL...).]

How to practice

You don't need a corporate account to learn AWS:

  • AWS Free Tier: The real AWS, free for 12 months (with limits). Great for hands-on experience with actual services.
  • LocalStack: AWS emulator running on your laptop. Spin up S3, Lambda, DynamoDB locally. No cloud costs (see the sketch after this list).
  • Minikube / Kind: Local Kubernetes clusters for learning EKS concepts.
  • K3s / K3d: Lightweight Kubernetes distributions, perfect for development and edge deployments.
  • Podman Compose: Simulate architectures with containers (PostgreSQL, Redis, Nginx). Understand the concepts before going managed.
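For example, pointing boto3 at LocalStack instead of real AWS is just a matter of changing the endpoint; port 4566 is LocalStack's default edge port, and the credentials only need to be non-empty:

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",  # LocalStack edge endpoint
    region_name="eu-west-3",
    aws_access_key_id="test",              # dummy credentials
    aws_secret_access_key="test",
)

s3.create_bucket(
    Bucket="practice-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-3"},
)
s3.put_object(Bucket="practice-bucket", Key="hello.txt", Body=b"hello, LocalStack")
print(s3.list_objects_v2(Bucket="practice-bucket")["Contents"][0]["Key"])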

Conclusion

The key insight is this: everything you've been doing manually for years, eyes closed, hands behind your back (configuring Nginx, Gunicorn, PostgreSQL, managing users, monitoring logs), AWS offers as managed services. You're trading full control for less maintenance.

Is that trade-off worth it? It depends. For a small team that needs to move fast and can't afford dedicated ops, managed services make sense. For organizations with specific compliance needs or those who want to avoid vendor lock-in, the calculus changes.

Either way, understanding these concepts matters, whether you end up using AWS, another cloud, or staying on-premises.

There is no cloud

More on this topic

Congratulations, you made it to the end. I hope you learned something. If you enjoyed this piece, you can read my post about how I'm helping save lives in West Africa using free software, or you can like, share and subscribe; it helps fight the algorithm. Here are some links you may find interesting: