
Enable Firewall on Node, Add Windows Firewall rules for required ports #2

Merged · 4 commits · Jun 15, 2017
Changes from 1 commit
15 changes: 12 additions & 3 deletions parts/kuberneteswindowssetup.ps1
@@ -201,8 +201,17 @@ Get-PodGateway(`$podCIDR)
function Set-DockerNetwork(`$podCIDR)
{
# Turn off Firewall to enable pods to talk to service endpoints. (Kubelet should eventually do this)
netsh advfirewall set allprofiles state off
# Windows Firewall rules to allow only Master to access Node's kubelet ports

Are these rules only for PowerShell console workloads, and this PR is not going to be merged into the master branch, right?

Author

@JiangtianLi The rules are not just for PSCloudShell, but there are no other workloads on the Windows nodes. We want to lock down the Windows nodes by allowing only the master to communicate with the kubelet ports and allowing PSCloudShell websocket connections.

Additional Windows customers can add their own exceptions as they are onboarded.

We want the firewall feature to be in PROD. Will the release to PROD happen from the Master branch or the Migration branch?

# Firewall rules to allow access to container's websockets
netsh advfirewall firewall add rule name="Container: Allow access to node localport 8080" dir=in action=allow protocol=TCP localport=8080
netsh advfirewall firewall add rule name="Container: Allow access to node localport 8888" dir=in action=allow protocol=TCP localport=8888
netsh advfirewall firewall add rule name="Container: Allow UDP inbound traffic for Container DNS Port 53" dir=in action=allow localport=53 protocol=UDP
netsh advfirewall firewall add rule name="Node: Allow only K8 Master to access localport 4194" dir=in action=allow protocol=TCP localport=4194 remoteip=`${global:MasterIP}
Collaborator

Is global:MasterIP an internal IP or an external one?

Collaborator

I believe their communication is over internal IPs. In that case, you need to think about supporting multiple master nodes, although Cloud Shell only uses one master node now.

Author (@raghushantha, Jun 5, 2017)

@robbiezhang To test the firewall rules, I used the master's well-known private IP [10.240.255.5], since this is a well-known acs-engine constant.
So yes, this is internal.

Looking at the script [line 190], the global variable is set to the script parameter value.

What other changes need to happen to support multiple master nodes? I don't see any reference to this in the codebase
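The pattern described above (a script parameter promoted to a global at around line 190, so the firewall rules can reference it) can be sketched roughly as follows. This is a hypothetical minimal sketch, not the actual contents of kuberneteswindowssetup.ps1; the parameter name and placement are assumptions, and the template escaping is omitted:

```powershell
# Hypothetical sketch (not the real script): bind the -MasterIP script
# parameter to a global that later netsh rules can reference.
param(
    [Parameter(Mandatory = $true)]
    [string]$MasterIP   # e.g. the master's well-known private IP, 10.240.255.5
)

# Around line 190 of the real script: promote the parameter to a global.
$global:MasterIP = $MasterIP

# The firewall rules then scope inbound kubelet traffic to this address, e.g.:
# netsh advfirewall firewall add rule name="Node: Allow only K8 Master to access localport 10250" dir=in action=allow protocol=TCP localport=10250 remoteip=$global:MasterIP
```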

Collaborator

Multi-master is an ACS-Engine feature. It will introduce a load balancer for the master nodes. The IP address for the LB is 10.240.255.15. However, there is no such LB in a single-master-node cluster, so the outbound IP address is the master node's address (10.240.255.5). Do you know how this script handles that?

Author

Multi-Master must be handled in this script. Anthony's team owns this, I believe.

For now, the Windows script is being called from the engine's Go template:
https://github.com/Azure/acs-engine/blob/d3059c436d30bdc196d76cda27b1f051719316e7/pkg/acsengine/engine.go#L663

The script is executed as part of the VM extension, and the parameters passed to it are in kuberneteswinagentresourcesvmas.t. MasterIP is variables('kubernetesAPIServerIP').

@colemickens Could you please take a look at the ports allowed?

@raghushantha Could you add a comment for each port allowed? How do the rules allow a web service on port 80, or another service on a custom port?

Author

Will add comments in the script to explain each rule. I thought the rule names were self-explanatory :-).

We only allow websocket connections to be made to PSCloudShell (8080/8888). No other customer ports are allowed. Also, since this is a first-party service, any other customers using the Windows kube nodes need to bring their own rules. This change will lock down the nodes for PSCloudShell.

Thanks. Specifically, please comment on what each port is used for (53 is obvious, though).

netsh advfirewall firewall add rule name="Node: Allow only K8 Master to access localport 10250" dir=in action=allow protocol=TCP localport=10250 remoteip=`${global:MasterIP}
netsh advfirewall firewall add rule name="Node: Allow only K8 Master to access localport 10255" dir=in action=allow protocol=TCP localport=10255 remoteip=`${global:MasterIP}

# Turn-on the firewall since we have allowed access to required ports
netsh advfirewall set allprofiles state on
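
Following the request above, here is the full rule set annotated with what each port is conventionally used for. The port roles are assumptions based on standard Kubernetes and PSCloudShell conventions, not something this PR states; verify against the actual deployment:

```powershell
# Annotated sketch of the rules added in this diff (port roles are assumed).
# 8080 / 8888 - PSCloudShell websocket endpoints on the node (assumption)
netsh advfirewall firewall add rule name="Container: Allow access to node localport 8080" dir=in action=allow protocol=TCP localport=8080
netsh advfirewall firewall add rule name="Container: Allow access to node localport 8888" dir=in action=allow protocol=TCP localport=8888
# 53 - DNS resolution for containers
netsh advfirewall firewall add rule name="Container: Allow UDP inbound traffic for Container DNS Port 53" dir=in action=allow localport=53 protocol=UDP
# 4194 - cAdvisor metrics endpoint served by the kubelet; master only
netsh advfirewall firewall add rule name="Node: Allow only K8 Master to access localport 4194" dir=in action=allow protocol=TCP localport=4194 remoteip=`${global:MasterIP}
# 10250 - kubelet API (exec, attach, logs); master only
netsh advfirewall firewall add rule name="Node: Allow only K8 Master to access localport 10250" dir=in action=allow protocol=TCP localport=10250 remoteip=`${global:MasterIP}
# 10255 - kubelet read-only API; master only
netsh advfirewall firewall add rule name="Node: Allow only K8 Master to access localport 10255" dir=in action=allow protocol=TCP localport=10255 remoteip=`${global:MasterIP}
```

The backtick before ${global:MasterIP} is kept as it appears in the diff, where the script is embedded in a template that would otherwise interpret the $.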

We explicitly turn off the firewall for allprofiles in Set-DockerNetwork. I think it should be ON already, but I can't guarantee it. Anyway, it doesn't hurt to turn it on.

Author

Correct; after adding the firewall rules it is better to turn on the firewall for all profiles.


`$dockerTransparentNet=docker network ls --quiet --filter "NAME=`$global:TransparentNetworkName"
if (`$dockerTransparentNet.length -eq 0)
@@ -399,4 +408,4 @@ try
catch
{
Write-Error $_
}
}