COMPLETE GUIDE: DOCKER & KUBERNETES, from STEPHEN GRIDER
This course from Stephen Grider teaches how to create and deploy web apps to web services.
What is Docker?
Docker is a platform or ecosystem around creating and running containers
Why use Docker?
Docker aims to make it really easy and straightforward to install and run software on any computer, including web servers, without worrying about a whole bunch of setup or dependencies.
What is an Image in Docker?
A single file with all of the dependencies and configuration required to run a very specific program, for example Node.js, NGINX, or Redis.
What is a Container in Docker ?
An instance of an image, used to run a program.
It is a program with its own isolated set of hardware resources: its own memory, its own networking space, and its own space on the hard drive.
== Manipulating Containers with the Docker Client ==
001. Docker Run in Detail
eg:
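A minimal sketch, assuming Docker is installed locally (hello-world and busybox are just sample images):

```sh
# docker run <image name>
docker run hello-world

# docker run = docker create + docker start
docker run busybox
```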
002. Overriding Default Commands
eg:
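A sketch of appending a command to override the image's default startup command (busybox as the sample image):

```sh
# docker run <image name> <command>
docker run busybox echo hi there
docker run busybox ls
```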
003. Listing Running Containers
eg:
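The two listing commands:

```sh
# list only the containers currently running
docker ps

# list every container ever created on this machine
docker ps --all
```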
004. Container Lifecycle
eg:
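A sketch of the two halves of docker run, where <container id> is the id that docker create prints:

```sh
# prepare the filesystem snapshot; does not start the program
docker create hello-world

# start the program; -a attaches and prints its output to our terminal
docker start -a <container id>
```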
005. Restarting Stopped Container
eg:
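A sketch, assuming you already have a stopped container (find its id with docker ps --all). Note that a restarted container re-runs its original default command; you cannot replace the command at this point:

```sh
docker ps --all
docker start -a <container id>
```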
006. Removing Stopped Containers
eg:
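The command used to clear out stopped containers (it also deletes the build cache and other unused resources, and asks for confirmation first):

```sh
docker system prune
```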
007. Retrieving log Outputs
eg:
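A sketch; docker logs only prints everything the container has emitted so far, it does not re-run the container:

```sh
docker logs <container id>
```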
008. Stopping Containers
The stop command sends a SIGTERM, giving the container up to 10 seconds to shut down gracefully.
The kill command sends a SIGKILL, shutting the container down immediately.
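As commands:

```sh
# graceful shutdown; falls back to a kill after 10 seconds
docker stop <container id>

# shut down right now
docker kill <container id>
```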
009. Multi-command Containers
We have two separate programs: we want to run redis-cli inside the container that is already running redis-server, so the two can work together.
010. Executing Command in Running Containers
eg:
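A sketch, assuming a redis container is already running (get <container id> from docker ps):

```sh
# docker exec -it <container id> <command>
docker exec -it <container id> redis-cli
```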
011. The Purpose of the IT flag
When you run Docker on your computer or machine, every single container you run is running inside a virtual machine running Linux.
The -it flag is actually two separate flags: -i attaches our terminal to the container's STDIN, and -t allocates a TTY so the input and output are nicely formatted.
012. Getting a Command Prompt in a Container
You will not want to run a full docker exec for every single command.
"sh" is a command processor, or shell: it allows you to type commands that will be executed inside the container.
eg:
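A sketch of opening a shell inside a running container:

```sh
docker exec -it <container id> sh
# now you are inside the container; try: cd /, ls, echo hi, exit
```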
013. Starting with a Shell
014. Container Isolation
Containers do not automatically share their filesystems; a file created in one container is not visible in another.
== Building Custom Images Through Docker Server ==
015. Creating Docker Images
016. Building a Dockerfile
redis-image Dockerfile
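A sketch of the redis-image Dockerfile from this section (alpine base, redis installed via apk):

```dockerfile
# Use an existing image as a base
FROM alpine

# Download and install a dependency
RUN apk add --update redis

# Tell the image what command to run when it starts as a container
CMD ["redis-server"]
```

Build it from the folder containing the Dockerfile with `docker build .`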
017. Dockerfile Teardown
018. What's a Base Image
019. The Build Process in Details
Why do we use the build command?
The build command takes a Dockerfile and generates an image from it.
020. A Brief Recap
021. Rebuild with Cache
022. Tagging an Image
The convention for tagging an image is <your docker id>/<repo or project name>:<version>.
eg:
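A sketch, with <your docker id> as a placeholder for your Docker Hub id:

```sh
# docker build -t <your docker id>/<repo or project name>:<version> .
docker build -t <your docker id>/redis:latest .

# run it by tag; the :latest version is assumed if omitted
docker run <your docker id>/redis
```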
023. Manual Image Generation with Docker Commit
Normally, we use an image to create a container. But we can also go the other way: manually create a container, run commands inside it, and then generate an image from that container. In plain words, we can manually do the same thing a Dockerfile does.
eg:
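A sketch of doing a Dockerfile's job by hand with docker commit (the two steps run in separate terminals):

```sh
# terminal 1: start a plain alpine container and change it manually
docker run -it alpine sh
# inside the container: apk add --update redis

# terminal 2: snapshot the running container into a new image,
# setting its default command at the same time
docker commit -c 'CMD ["redis-server"]' <container id>
```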
== Making Real Projects with Docker ==
024. Making Real Projects with Docker
simple-web
025. Base Image Issues
To solve the issue "npm not available on the base image", use an alpine variant of the image.
In Docker, alpine is a term for a small, compact image. Many popular repositories offer an alpine version of their images.
027. A Few Missing Files
None of the files inside your project directory are available inside the container by default. They are completely segmented out unless you specifically copy them in from your Dockerfile.
To solve 'no such file or directory':
eg:
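The fix is a COPY instruction in the Dockerfile. The first path is relative to the build context on your machine, the second is the destination inside the container:

```dockerfile
# COPY <path on your machine> <path inside the container>
COPY ./ ./
```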
028. Container Port Mapping
We do not set up port mapping inside the Dockerfile; port mapping is strictly a runtime constraint. In other words, it is something we specify only when we run or start a container.
eg:
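A sketch of the run-time port mapping flag:

```sh
# docker run -p <port on your machine>:<port inside the container> <image>
docker run -p 8080:8080 <image id or tag>
```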
029. Specifying a Working Directory
eg:
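The WORKDIR instruction; every following instruction (COPY, RUN, CMD) is executed relative to this path:

```dockerfile
WORKDIR /usr/app
```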
To check that the working directory is no longer the image's root directory:
eg:
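One way to verify, assuming the container is running:

```sh
docker exec -it <container id> sh
# the shell starts in /usr/app instead of the container's root; confirm with:
pwd
```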
030. Unnecessary Rebuilds
How do we avoid completely reinstalling all dependencies just because we made a change in a source-code file?
eg:
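The trick is to copy package.json by itself before npm install, so that edits to source files do not invalidate the cached install step. A sketch:

```dockerfile
FROM node:alpine
WORKDIR /usr/app

# copy only the dependency manifest first
COPY ./package.json ./
RUN npm install

# changes to source files only bust the cache from this line down
COPY ./ ./

CMD ["npm", "start"]
```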
== Docker Compose with Multiple Local Containers ==
031. Introducing Docker Compose
032. Docker Compose Files
033. Networking with Docker
034. Docker Compose Command
035. Stopping Docker Compose Containers
036. Container Maintenance with Compose
037. Container Status with Docker Compose
eg:
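Note that this command must be run from the directory that contains the docker-compose.yml file:

```sh
docker-compose ps
```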
== Creating a Production-Grade Workflow ==
038. Development Workflow
039. Flow Specifics
040. Docker Purpose
041. Creating the Dev Dockerfile
eg:
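A sketch of Dockerfile.dev for the React app. Because the file is not named Dockerfile, the build needs the -f flag:

```dockerfile
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
```

Build it with `docker build -f Dockerfile.dev .`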
042. Duplicating Dependencies
To solve this problem, just delete the node_modules folder in the project root; the container installs its own copy during the build.
043. Docker Volumes
044. Shorthand with Docker Compose
045. Live Updating Tests
046. Docker Compose for Running Tests
047. Multi-Step Docker Build for Production Environment
But we have an issue here: the dependencies we install (like node_modules) are only needed to build the app, not to serve it in production.
So we use a multi-step build with two different base images to solve this issue.
048. Implementing Multi-Step Builds
Dockerfile for production env
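A sketch of the multi-step production Dockerfile: a build phase that needs node and npm, and a run phase that only needs nginx plus the build output:

```dockerfile
# build phase
FROM node:alpine as builder
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# run phase: copy only the build output out of the builder image
FROM nginx
COPY --from=builder /app/build /usr/share/nginx/html
```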
049. Running Nginx
== Continuous Integration and Deployment with AWS ==
050. Travis CI Setup
What is Travis CI?
Travis CI is a hosted, distributed continuous-integration service used to build and test projects hosted on GitHub. Travis CI automatically detects when a commit has been pushed to a GitHub repository that uses Travis CI, and each time this happens it will try to build the project and run the tests.
051. Travis YML file Configuration
.travis.yml file
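A sketch of the .travis.yml for this section; <your docker id> is a placeholder:

```yaml
sudo: required
services:
  - docker

before_install:
  - docker build -t <your docker id>/docker-react -f Dockerfile.dev .

script:
  - docker run -e CI=true <your docker id>/docker-react npm run test
```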
052. AWS Elastic Beanstalk
The benefit of using Elastic Beanstalk is that it monitors the amount of traffic coming into our virtual machines and automatically scales everything up.
053. Travis Config for Deployment
for bucket_name:
eg:
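A hedged sketch of the deploy section (the region, app, and env values depend on your own Elastic Beanstalk setup; bucket_name is the S3 bucket that EB auto-created for you, found in the S3 console):

```yaml
deploy:
  provider: elasticbeanstalk
  region: us-west-2
  app: docker
  env: Docker-env
  bucket_name: <the S3 bucket EB auto-created>
  bucket_path: docker
  on:
    branch: master
```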
054. Automated Deployments
Set API keys to give Travis CI access to our AWS account.
In the Travis CI repository settings, add your access_key_id and secret_access_key as environment variables.
055. Exposing Ports Through the Dockerfile | for production-env deployment
If you hit an error when deploying to AWS, maybe you forgot to add an EXPOSE instruction to the Dockerfile.
AWS Elastic Beanstalk is a little bit different: when it starts up a Docker container, it looks at the Dockerfile for an EXPOSE instruction, and whatever port you list there is what Elastic Beanstalk maps automatically.
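The fix, as a Dockerfile instruction (port 80, assuming nginx is serving on its default port):

```dockerfile
EXPOSE 80
```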
DON'T FORGET to TERMINATE the Elastic Beanstalk app, so AWS does not charge you money!
== Building a Multi Container Application ==
056. Single Container Deployment Issues
THIS IS A BAD APPROACH
We build the image multiple times: once on Travis CI when we run our tests, and a second time after we push the code through Travis CI over to Elastic Beanstalk. This is not the best approach because we are essentially using the web server to build images; chances are we really want the web server to be concerned only with running the web server, not with the extra work of building images. So we want to avoid building images on the actively running web server.
057. Application Overview
058. Application Architecture | Backend Architecture
a flow behind the scene
== Dockerizing Multiple Service ==
059. Dockerizing a React App - Again!
The purpose of making a dev Dockerfile for each one (client, server, worker) is that when we make a change to one of them, we do not have to rebuild all the images to pick up the change; rebuilding everything would make for a really slow development workflow.
060. Adding Postgres as a Service
eg:
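A sketch of the docker-compose.yml additions (note that recent postgres images require a password variable to start):

```yaml
version: '3'
services:
  postgres:
    image: 'postgres:latest'
    environment:
      - POSTGRES_PASSWORD=postgres_password
  redis:
    image: 'redis:latest'
```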
or see:
docker-compose.yml
061. Environment Variables with docker-compose
When you use this command you are running a two-step process.
In the first step you build an image; that is the preparation part that creates a new image.
In the second step, at some point in the future, you run a container: you take the image and create an instance of a container out of it.
So if you have an environment variable set up on your machine, like a secret API key, you may want to use this syntax:
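The two environment-variable syntaxes in docker-compose, as a sketch (service and variable names are illustrative):

```yaml
services:
  api:
    environment:
      # variableName=value : the value is set explicitly at container run time
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      # variableName alone : the value is taken from your machine at run time
      - API_KEY
```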
062. Nginx Path Routing
063. Routing with Nginx
or see the files:
nginx-route-default.conf
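A sketch of the nginx routing config (default.conf); the upstream host names match the docker-compose service names:

```nginx
upstream client {
  server client:3000;
}

upstream api {
  server api:5000;
}

server {
  listen 80;

  location / {
    proxy_pass http://client;
  }

  location /api {
    # strip the /api prefix before forwarding to the Express server
    rewrite /api/(.*) /$1 break;
    proxy_pass http://api;
  }
}
```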
== A Continuous Integration workflow for Multiple Images ==
063. Production Multi-container Deployment
064. Multiple Nginx Instances
065. Altering Nginx's Listen Port
In the production environment the nginx server has to listen on port 3000.
065. Travis Configuration Setup
.travis.yml for multi-images
066. Pushing Images to Docker Hub
eg:
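A sketch (log in to Docker Hub first; <your docker id> is a placeholder):

```sh
docker login

# docker push <your docker id>/<repo name>
docker push <your docker id>/multi-client
```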
== Multi Container Deployment to AWS ==
066. Multi Container Definition Files
We have a couple of different folders, each with its own Dockerfile, so any time we want to run multiple separate containers on AWS EB we need to create a special file.
The new file is a JSON file that tells EB exactly where to pull images from, what resources to allocate to each one, how to set up the port mappings, and some associated information.
What is the difference between docker-compose.yml and Dockerrun.aws.json?
docker-compose.yml contains directions for how to build images; Dockerrun.aws.json assumes the images have already been built and just specifies which images to use.
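A hedged sketch of Dockerrun.aws.json showing two of the containers; the container definitions follow the ECS task-definition format:

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "client",
      "image": "<your docker id>/multi-client",
      "hostname": "client",
      "essential": false,
      "memory": 128
    },
    {
      "name": "nginx",
      "image": "<your docker id>/multi-nginx",
      "essential": true,
      "portMappings": [{ "hostPort": 80, "containerPort": 80 }],
      "links": ["client"],
      "memory": 128
    }
  ]
}
```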
067. Finding Docs Container Definitions
It is not immediately clear from the AWS documentation how to customize Dockerrun.aws.json. So let's look at the AWS documentation:
068. Forming Container Links
069. Creating the EB Environment
!! ATTENTION: when you create a project that is not on the free plan, make sure you DELETE every project, or AWS will charge you for the instances.
070. Managed Data Service Providers
071. Overview of AWS VPC's and Security Groups
Security Group (firewall rules): a set of rules describing what different services, or sources of internet traffic, can connect to the services running inside your VPC.
Q: Now that we understand what a VPC and a Security Group are, how are we going to form a connection between the EB instance, RDS (Postgres), and ElastiCache (Redis)?
A: We are going to create a new security group whose rule essentially says: let any traffic access these instances if it belongs to this security group. We then attach it to all three of these different services, so they all belong to this one common security group. The security group essentially says: if another AWS instance belongs to this security group, let the traffic flow through, so the different services can talk to each other.
072. RDS Databases Creation
073. ElastiCache Redis Creation
074. Creating a Custom Security Group
075. Applying Security Groups to Resources
ElastiCache (redis)
AMAZON RDS (Postgres)
EB Instances
076. Setting Environment Variables
ATTENTION
When you put your environment properties in EB, the values are not hidden, so other people who come to this page could potentially see the database password.
For ElastiCache Redis, copy the Primary Endpoint but do not copy the port.
077. IAM Keys for Deployment
078. Travis Deploy Script
eg:
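A sketch of the deploy steps added to .travis.yml after the tests pass; DOCKER_ID and DOCKER_PASSWORD are environment variables set in the Travis CI settings:

```sh
# build the production images
docker build -t "$DOCKER_ID/multi-client" ./client
docker build -t "$DOCKER_ID/multi-server" ./server
docker build -t "$DOCKER_ID/multi-worker" ./worker

# log in to Docker Hub non-interactively
echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin

# push the images so Elastic Beanstalk can pull them
docker push "$DOCKER_ID/multi-client"
docker push "$DOCKER_ID/multi-server"
docker push "$DOCKER_ID/multi-worker"
```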
079. Container Memory Allocations
080. Cleaning Up AWS Resources
== Kubernetes ==
What is Kubernetes?
Is a system for running many different containers over multiple different machines
Why use Kubernetes?
When you need to run many different containers with different images
081. Kubernetes in Development and Production
082. Mapping Existing Knowledge
083. Adding Configuration file
client-pod.yaml
client-node-port.yaml
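A sketch of client-pod.yaml as used in the course's multi-client example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    component: web
spec:
  containers:
    - name: client
      image: stephengrider/multi-client
      ports:
        - containerPort: 3000
```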
explanation config file
When we write a Kubernetes config file, we are not quite making a container; we are making something different: an object.
Q: What is an object in Kubernetes?
A: In the config file we declare an object; the term object refers to a thing that will exist inside the Kubernetes cluster. In reality we never make a generic "object" as such; we always make a specific type of object.
Q: What is an object used for in Kubernetes?
A: An object is essentially a thing that gets created inside the Kubernetes cluster to make the application work the way we expect. Every type of object has a slightly different purpose.
Q: What is the Pod object type used for?
A: A Pod is used to run one or more containers.
Q: What is the ReplicationController object type used for?
A: A ReplicationController is used to monitor and manage a set of containers.
Q: What is the Service object type used for?
A: A Service is used to set up networking.
084. Running Containers in Pods
When we load the configuration file with kubectl, it creates a Pod inside a virtual machine (we refer to a VM as a Node). A Pod itself is a grouping of containers with a very common purpose.
Q: Why are we making a Pod that is a grouping of containers?
A: In the Kubernetes world there is no such thing as just creating a container on a cluster.
Back with EB and docker-compose we created containers directly with no issue whatsoever. In the world of Kubernetes we do not have the ability to run one naked, single container by itself with no associated overhead. The smallest thing you can deploy is a Pod.
We always declare and deploy containers within a Pod; a Pod is the smallest thing we can deploy, even to run a single container.
Q: Why do we make a Pod?
A: We cannot deploy individual containers by themselves as we could with docker-compose. By the requirements of a Pod, we must run one or more containers inside of it.
In the world of Pods we group together containers that have a very discrete, very tightly coupled relationship; in other words, containers that have tight integration and must be executed with each other.
085. Service Config Files in Depth
We use this second object type (Service) any time we want to set up some amount of networking inside the Kubernetes cluster.
eg:
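A sketch of client-node-port.yaml; the selector's component: web matches the label on the Pod:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: client-node-port
spec:
  type: NodePort
  ports:
    - port: 3050        # port other objects inside the cluster use
      targetPort: 3000  # port the target Pod's container listens on
      nodePort: 31515   # port exposed to the outside world (30000-32767)
  selector:
    component: web
```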
Rather than having the Service refer directly to the Pod in client-pod.yaml, Kubernetes uses the label-selector system to connect client-node-port.yaml with client-pod.yaml.
component: web is an arbitrary key-value pair; the Service's selector matches the Pod's label.
targetPort: 3000 is identical to the containerPort: 3000 in the Pod definition.
nodePort is the most IMPORTANT one: it is the port a developer uses from outside the cluster to access the multi-client Pod.