Deploy Workloads in Rancher

Kevin (Xiaocong) Zheng
4 min read · Sep 23, 2021

There are two common ways to expose a workload that you deploy in Rancher: an Ingress and a NodePort service.

Ingress

Ingress is actually NOT a type of service. Instead, it sits in front of multiple services and acts as a “smart router” or entry point into your cluster.

You can do a lot of different things with an Ingress, and there are many types of Ingress controllers that have different capabilities.

The default GKE Ingress controller will spin up an HTTP(S) Load Balancer for you. This lets you do both path-based and subdomain-based routing to backend services.

Ingress is most useful if you want to expose multiple services under the same IP address, and these services all use the same L7 protocol (typically HTTP). You only pay for one load balancer if you are using the native GCP integration, and because Ingress is “smart” you get a lot of features out of the box (such as SSL termination, authentication, and routing).
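To make this concrete, here is a minimal sketch of a path-based Ingress. This is a hypothetical example, not something created in this tutorial: the host and the two backend services, web and api, are placeholder names.

```yaml
# Hypothetical example: route two paths on one host to two different services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com          # subdomain-based routing happens here
    http:
      paths:
      - path: /web                 # path-based routing to the "web" service
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
      - path: /api                 # path-based routing to the "api" service
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 80
```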

NodePort

A NodePort service is the most primitive way to get external traffic directly to your service. NodePort, as the name implies, opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.

Basically, a NodePort service has two differences from a normal “ClusterIP” service. First, the type is “NodePort.” Second, there is an additional field called nodePort that specifies which port to open on the nodes. If you don’t specify it, Kubernetes picks a random port from the NodePort range. Most of the time you should let Kubernetes choose the port.
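As a minimal sketch, this is all it takes to turn a ClusterIP-style service into a NodePort service. The name, selector, and port values below are placeholders.

```yaml
# Hypothetical example: expose port 80 of pods labeled app: hello-world on every node.
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort          # the only required change from a ClusterIP service
  selector:
    app: hello-world
  ports:
  - port: 80              # port the service exposes inside the cluster
    targetPort: 80        # port the container listens on
    nodePort: 30080       # optional; omit it and Kubernetes picks one from 30000-32767
```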

When would you use NodePort?

There are many downsides to this method:

  1. You can only have one service per port
  2. You can only use ports 30000–32767
  3. If your Node/VM IP addresses change, you need to deal with that

For these reasons, I don’t recommend using this method in production to directly expose your service. If you are running a service that doesn’t have to be always available, or you are very cost-sensitive, this method will work for you. A good example of such an application is a demo app or something temporary.

Deploy Workloads with Ingress

Step 1. Deploying a workload

  1. From the Clusters page, open the cluster that you just created.
  2. From the main menu of the Dashboard, select Projects/Namespaces.
  3. Open the Project: Default project.
  4. Click Resources > Workloads. In versions before v2.3.0, click Workloads > Workloads.
  5. Click Deploy.
  6. Step Result: The Deploy Workload page opens.
  7. Enter a Name for your workload.
  8. In the Docker Image field, enter the image that you need. This field is case-sensitive. As an example, we use Microsoft’s .NET Core sample image, pulled through our university’s JFrog Artifactory: artifacts.uottawa.ca/docker/dotnet/samples:aspnetapp.
  9. Leave the remaining options on their default setting. We’ll tell you about them later.
  10. Click Launch.
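Under the hood, Rancher turns this form into a standard Kubernetes Deployment. The sketch below is a rough equivalent, assuming you named the workload dotnet-sample; Rancher also adds its own labels and annotations, which are omitted here.

```yaml
# Rough sketch of the Deployment created by the steps above (names are illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnet-sample
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dotnet-sample
  template:
    metadata:
      labels:
        app: dotnet-sample
    spec:
      containers:
      - name: dotnet-sample
        image: artifacts.uottawa.ca/docker/dotnet/samples:aspnetapp
```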

Step 2. Expose the application via an Ingress

  1. From the Clusters page, open the cluster that you just created.
  2. From the main menu of the Dashboard, select Projects.
  3. Open the Default project.
  4. Click Resources > Workloads > Load Balancing. In versions before v2.3.0, click the Workloads tab, then click the Load Balancing tab.
  5. Click Add Ingress.
  6. Enter a name, e.g. hello.
  7. In the Target field, drop down the list and choose the name that you set for your service.
  8. Enter 80 in the Port field.
  9. Leave everything else as default and click Save.

From the Load Balancing page, click the target link, which will look something like hello.default.xxx.xxx.xxx.xxx.xip.io > hello-world.

Your application will open in a separate window.
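The Add Ingress form is just a friendlier way of creating a standard Ingress resource. On a recent cluster it ends up looking roughly like the sketch below, assuming the Ingress is named hello and targets a workload named hello-world on port 80; the xip.io host is generated by Rancher from a node IP, and older clusters may use an older Ingress API version.

```yaml
# Rough sketch of the Ingress created through the Rancher UI (host is generated by Rancher).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
  namespace: default
spec:
  rules:
  - host: hello.default.xxx.xxx.xxx.xxx.xip.io   # Rancher substitutes a node IP here
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world       # the workload you picked in the Target field
            port:
              number: 80
```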

Deploy Workloads with NodePort

Step 1. Deploying a workload

  1. From the Clusters page, open the cluster that you just created.
  2. From the main menu of the Dashboard, select Projects/Namespaces.
  3. Open the Project: Default project.
  4. Click Resources > Workloads. In versions before v2.3.0, click Workloads > Workloads.
  5. Click Deploy.
  6. Step Result: The Deploy Workload page opens.
  7. Enter a Name for your workload.
  8. From the Docker Image field, enter rancher/hello-world. This field is case-sensitive.
  9. From Port Mapping, click Add Port.
  10. From the As a drop-down, make sure that NodePort (On every node) is selected.
  11. From the On Listening Port field, set the value to the port you want to map.
  12. From the Publish the container port field, enter port 80.
  13. Leave the remaining options on their default setting. We’ll tell you about them later.
  14. Click Launch.
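Taken together, these steps produce a Deployment plus a NodePort service. Below is a rough sketch of the equivalent manifests, assuming the workload is named hello-world and you mapped listening port 30080 to container port 80 (both values are illustrative).

```yaml
# Rough sketch of the workload and NodePort mapping created by the steps above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: rancher/hello-world
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-nodeport
  namespace: default
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 80        # the "Publish the container port" value
    nodePort: 30080       # the "On Listening Port" value
```

Once the workload is running, it is reachable at http://<any-node-ip>:<node-port>.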
