{"stig":{"title":"Mirantis Kubernetes Engine Security Technical Implementation Guide","version":"2","release":"1"},"checks":[{"vulnId":"V-260903","ruleId":"SV-260903r986160_rule","severity":"medium","ruleTitle":"The Lifetime Minutes and Renewal Threshold Minutes Login Session Controls on MKE must be set.","description":"The \"Lifetime Minutes\" and \"Renewal Threshold Minutes\" login session controls in MKE are part of security features that help manage user sessions within the MKE environment. Setting these controls is essential.\n\nMKE must terminate all network connections associated with a communications session at the end of the session, or as follows: For in-band management sessions (privileged sessions), the session must be terminated after 10 minutes of inactivity.","checkContent":"Log in to the MKE web UI and navigate to admin >> Admin Settings >> Authentication & Authorization.\n\nEnsure that \"Lifetime Minutes\" is set to \"10\" and \"Renewal Threshold Minutes\" is set to \"0\".\n\nIf these settings are not configured as specified, this is a finding.","fixText":"Log in to the MKE web UI and navigate to admin >> Admin Settings >> Authentication & Authorization.\n\n- Below Lifetime Minutes, enter \"10\".\n- Below Renewal Threshold Minutes, enter \"0\".\n- Click \"Save\".","ccis":["CCI-001133","CCI-004895","CCI-002007"]},{"vulnId":"V-260904","ruleId":"SV-260904r966069_rule","severity":"medium","ruleTitle":"In an MSR organization, user permissions and repositories must be configured.","description":"Configuring user permissions, organizations, and repositories in MSR is crucial for maintaining a secure, organized, and efficient container image management environment. This will provide access control, security, and compliance when utilizing MSR.","checkContent":"If MSR is not being utilized, this is Not Applicable.\n\nVerify the organization, user permissions, and repositories in MSR are configured per the System Security Plan (SSP). Obtain and review the SSP.\n\n1. 
Log in to the MSR web UI as Admin and navigate to \"Organizations\". Verify the list of organizations is set up per the SSP.\n\n2. Navigate to \"Users\" and verify that the users are assigned to appropriate organizations per the SSP.\n\n3. Click on the user and verify the assigned repositories are appropriate per the SSP.\n\nIf the organization, user, or assigned repositories in MSR are not configured per the SSP, this is a finding.","fixText":"If MSR is not being utilized, this is Not Applicable.\n\nSet the organizations, user permissions, and repositories in MSR so they are configured per the SSP.\n\n1. Modify Organizations according to the SSP by logging in to the MSR web UI as Admin and navigating to Organizations.\n\nTo delete an Organization:\n- Click on the \"Organization\".\n- Click the \"Settings Tab\".\n- Click \"Delete\".\n- Confirm and click \"Delete\".\n\nTo Add an Organization:\n- Click \"New organization\".\n- Input the Organization name.\n- Click \"Save\".\n\nTo Assign Users to an Organization:\n- Click on an Organization.\n- Under the Members tab, click \"Add user\".\n- Select \"New\" or \"Existing\".\n- Fill in User information.\n- Click \"Save\".\n\n2. Modify Users according to the SSP.\n- Navigate to \"Users\".\n\nTo add a User:\n- Click \"New User\".\n- Fill in User information.\n- Click \"Save\".\n\nTo Delete a User:\n- Click on the \"User\".\n- Select \"Settings Tab\".\n- Click \"Delete User\".\n- Confirm and click \"Delete\".\n\n3. Modify Repositories according to the SSP:\n- Click on the User.\n- Under the Repositories tab, modify the assigned repositories to what is appropriate per the SSP.","ccis":["CCI-001499"]},{"vulnId":"V-260905","ruleId":"SV-260905r966072_rule","severity":"medium","ruleTitle":"User-managed resources must be created in dedicated namespaces.","description":"Dedicated namespaces act as security boundaries, limiting the blast radius in case of security incidents or misconfigurations. 
If an issue arises within a specific namespace, it is contained within that namespace and does not affect the resources in other namespaces. Kubernetes provides Role-Based Access Control (RBAC) mechanisms, and namespaces are a fundamental unit for access control. Using dedicated namespaces for user-managed resources provides a level of isolation. Each namespace acts as a separate environment, allowing users or teams to deploy their applications and services without interfering with the resources in other namespaces. This isolation helps prevent unintentional conflicts and ensures a more predictable deployment environment.","checkContent":"This check only applies when using Kubernetes orchestration.\n\nLog in to the MKE web UI and navigate to Kubernetes >> Namespaces. \n\nThe default namespaces are: \"default\", \"kube-public\", and \"kube-node-lease\".\n\n1. In the top right corner, if \"Set context for all namespaces\" is not enabled, this is a finding.\n\n2. Navigate to Kubernetes >> Services. Confirm that no service except \"kubernetes\" has the \"default\" namespace listed. 
Confirm that only approved system services have the \"kube-system\" namespace listed.\n\nIf \"default\" has a service other than the \"kubernetes\" service, this is a finding.\n\nIf \"kube-system\" has a service that is not listed in the System Security Plan (SSP), this is a finding.","fixText":"Log in to the MKE web UI and navigate to Kubernetes >> Namespaces.\n\nIn the top right corner, enable \"Set context for all namespaces\".\n\nMove any user-managed resources from the default, kube-public, and kube-node-lease namespaces to user namespaces.\n\n- Navigate to Kubernetes >> Services.\n- Select the user-managed service.\n- Click on the settings wheel in the top right corner to view the .yaml for that service.\n- Change the \"namespace\" to a user namespace.\n- Click \"Save\".","ccis":["CCI-000381"]},{"vulnId":"V-260906","ruleId":"SV-260906r986161_rule","severity":"high","ruleTitle":"Least privilege access and need to know must be required to access MKE runtime and instantiate container images.","description":"To control what is instantiated within MKE, it is important to control access to the runtime. Without this control, container platform specific services and customer services can be introduced without receiving approval and going through proper testing. Only those individuals and roles approved by the organization can have access to the container platform runtime.","checkContent":"Access to use the docker CLI must be limited to root only.\n\n1. Log on to the host CLI and execute the following:\n\nstat -c %U:%G /var/run/docker.sock | grep -v root:docker\n\nIf any output is present, this is a finding.\n\n2. Verify that the docker group has only the required users by executing:\n\ngetent group docker\n\nIf any users listed are not required to have direct access to MCR, this is a finding.\n\n3. 
Execute the following command to verify the Docker socket file has permissions of 660 or more restrictive:\n stat -c %a /var/run/docker.sock\n\nIf the permissions are not \"660\" or more restrictive, this is a finding.","fixText":"To remove unauthorized users from the docker group, access the host CLI and run:\n\ngpasswd -d [username to remove] docker\n\nTo ensure the Docker socket file is owned by root and group owned by docker, execute the following:\n\nchown root:docker /var/run/docker.sock\n\nTo set the file permissions of the Docker socket file to \"660\", execute the following:\n\nchmod 660 /var/run/docker.sock","ccis":["CCI-000213","CCI-001499","CCI-001094","CCI-003980","CCI-001813","CCI-001764","CCI-002385"]},{"vulnId":"V-260907","ruleId":"SV-260907r966078_rule","severity":"high","ruleTitle":"Only required ports must be open on containers in MKE.","description":"Ports, protocols, and services within MKE runtime must be controlled and conform to the PPSM CAL. Those ports, protocols, and services that fall outside the PPSM CAL must be blocked by the runtime. Instructions on the PPSM can be found in DOD Instruction 8551.01 Policy.\n\nA container can be run just with the ports defined in the Dockerfile for its image or can be arbitrarily passed runtime parameters to open a list of ports. A periodic review of open ports must be performed.\n\nBy default, all the ports that are listed in the Dockerfile under the EXPOSE instruction for an image are opened when a container is run with the -P or --publish-all flag.","checkContent":"This check must be executed on all nodes in an MKE cluster to ensure that mapped ports are the ones that are needed by the containers.\n\nVia CLI:\nLinux: As an administrator, execute the following command using a Universal Control Plane (MKE) client bundle:\n\ndocker ps --quiet | xargs docker inspect --format '{{ .Id }}: Ports={{ .NetworkSettings.Ports }}' \n\nReview the list and ensure the ports mapped are those needed for the container. 
If there are any mapped ports not documented by the System Security Plan (SSP), this is a finding.","fixText":"Document the ports required for each container in the SSP.\n\nFix the container image to expose only needed ports by the containerized application. Ignore the list of ports defined in the Dockerfile by NOT using -P (UPPERCASE) or --publish-all flag when starting the container. Use the -p (lowercase) or --publish flag to explicitly define the ports needed for a particular container instance.\n\nExample:\ndocker run --interactive --tty --publish 5000 --publish 5001 --publish 5002 centos /bin/bash","ccis":["CCI-000382","CCI-000366"]},{"vulnId":"V-260908","ruleId":"SV-260908r966081_rule","severity":"high","ruleTitle":"FIPS mode must be enabled.","description":"During any user authentication, MKE must use FIPS-validated SHA-2 or later protocol to protect the integrity of the password authentication process.\n\nFIPS mode enforces the use of cryptographic algorithms and modules. This ensures a higher level of cryptographic security, reducing the risk of vulnerabilities related to cryptographic functions. FIPS-compliant cryptographic modules are designed to provide strong protection for sensitive data. Enabling FIPS mode helps safeguard cryptographic operations, securing data both at rest and in transit within containerized applications.","checkContent":"On the MKE controller, verify FIPS mode is enabled.\n\nExecute the following command through the CLI:\n\ndocker info\n\nThe \"Security Options\" section in the response must show a \"fips\" label, indicating that, when configured, the remotely accessible MKE UI uses FIPS-validated digital signatures in conjunction with an approved hash function to protect the integrity of remote access sessions.\n\nIf the \"fips\" label is not shown in the \"Security Options\" section, then this is a finding.","fixText":"If the operating system has FIPS enabled, FIPS mode is enabled by default in MCR. 
The preferred method is to ensure FIPS mode is set on the operating system prior to installation.\n\nIf a change is required on a deployed system, create the directory if it does not exist by executing the following: \n\nmkdir -p /etc/systemd/system/docker.service.d/\n\nCreate a file called /etc/systemd/system/docker.service.d/fips-module.conf and add the following:\n\n[Service]\nEnvironment=\"DOCKER_FIPS=1\"\n\nReload the Docker configuration to systemd by executing the following: \n\nsudo systemctl daemon-reload\n\nRestart the Docker service by executing the following:\n\nsudo systemctl restart docker","ccis":["CCI-000197","CCI-001184","CCI-002890","CCI-003123","CCI-002418","CCI-002420","CCI-002422","CCI-002450","CCI-000803"]},{"vulnId":"V-260909","ruleId":"SV-260909r986168_rule","severity":"medium","ruleTitle":"MKE must be configured to integrate with an Enterprise Identity Provider.","description":"Configuring MKE to integrate with an Enterprise Identity Provider enhances security, simplifies user management, ensures compliance, provides auditing capabilities, and offers a more seamless and consistent user experience. It aligns MKE with enterprise standards and contributes to a more efficient and secure environment.","checkContent":"Verify that Enterprise Identity Provider integration is enabled and properly configured in the MKE Admin Settings.\n\n1. Log in to the MKE web UI and navigate to admin >> Admin Settings >> Authentication & Authorization.\n\nIf neither LDAP nor SAML is set to \"Enabled\", this is a finding.\n\n2. 
Identity Provider configurations:\nWhen using LDAP, ensure the following are set:\n- LDAP/AD server's URL.\n- Reader DN.\n- Reader Password.\n\nWhen using SAML:\nIn the \"SAML IdP Server\" section, ensure the following: \n- URL for the identity provider exists in the \"IdP Metadata URL\" field.\n- Skip TLS Verification is unchecked.\n- Root Certificate Bundle is filled.\n\nIn the \"SAML Service Provider\" section, ensure the MKE Host field has the MKE UI IP address.\n\nIf the Identity Provider configurations do not match the System Security Plan (SSP), this is a finding.","fixText":"To configure Identity Provider, log in to the MKE web UI and navigate to admin >> Admin Settings >> Authentication & Authorization >> Identity Provider Integration section.\n\nTo configure LDAP:\nClick the radio button to set LDAP to \"Enabled\".\n\nIn the \"LDAP Server\" subsection, set the following: \n- \"LDAP Server URL\" to the URL for the organization's AD or LDAP server (URL must be https).\n- \"Reader DN\" with the DN of the account used to search the LDAP entries.\n- \"Reader Password\" with the password for the Reader account.\n\nClick \"Save\".\n\nTo configure SAML, click the radio button to set SAML to \"Enabled\".\n\nEnter the URL in the \"Service Provider Metadata URL\" field.\n\nUpload the certificate bundle for the IdP provider in \"Root Certificates Bundle\".\n\nIn the \"SAML Service Provider\" section, enter the \"MKE IP address\" in the MKE Host field.\n\nClick \"Save\".","ccis":["CCI-000015","CCI-000016","CCI-000017","CCI-000044","CCI-000765","CCI-000766","CCI-004045","CCI-001941","CCI-003627","CCI-004066","CCI-004061","CCI-000187","CCI-004045","CCI-002145","CCI-002130","CCI-002238","CCI-003938","CCI-001953","CCI-002009","CCI-002699","CCI-000172"]},{"vulnId":"V-260910","ruleId":"SV-260910r966087_rule","severity":"medium","ruleTitle":"SSH must not run within Linux containers.","description":"To limit the attack surface of MKE, it is important that the nonessential 
services are not installed. Containers are designed to be lightweight and isolated, and introducing SSH can add attack vectors. Unauthorized access or exploitation of SSH vulnerabilities would compromise the security of the container and the host system. SSH is not necessary for process management within containers, as container orchestration platforms provide mechanisms for starting, stopping, and monitoring containerized processes. SSH access within containers may bypass auditing mechanisms, making it harder to track and audit user activities.","checkContent":"This check must be executed on all nodes in a Docker Enterprise cluster.\n\nVerify no running containers have a process for an SSH server. Using CLI, execute the following:\n\nfor i in $(docker container ls --format \"{{.ID}}\"); do\n  pid=$(docker inspect -f '{{.State.Pid}}' \"$i\")\n  ps -h --ppid \"$pid\" -o cmd\ndone | grep sshd\n\nIf any output is returned, a container is running an SSH server process; this is a finding.","fixText":"Containers found running an SSH server must be removed by executing the following:\n\ndocker rm [container name]\n\nThen, a new image must be deployed with the SSH server removed.","ccis":["CCI-000213"]},{"vulnId":"V-260911","ruleId":"SV-260911r986162_rule","severity":"medium","ruleTitle":"Swarm Secrets or Kubernetes Secrets must be used.","description":"Swarm Secrets in Docker Swarm and Kubernetes Secrets both provide mechanisms for encrypting sensitive data at rest. This adds an additional layer of security, ensuring that even if unauthorized access occurs, the stored secrets remain encrypted.\n\nMKE keystore must implement encryption to prevent unauthorized disclosure of information at rest within MKE. 
By leveraging Docker Secrets or Kubernetes Secrets to store configuration files and small amounts of user-generated data (up to 500 kb in size), the data is encrypted at rest by the Engine's FIPS-validated cryptography.","checkContent":"Review the System Security Plan (SSP) and identify applications that leverage configuration files and/or small amounts of user-generated data, and ensure the data is stored in Docker Secrets or Kubernetes Secrets.\n\nWhen using Swarm orchestration, log in to the MKE web UI and navigate to Swarm >> Secrets and view the configured secrets.\n\nIf items identified for secure storage are not included in the secrets, this is a finding.\n\nWhen using Kubernetes orchestration, log on to the MKE Controller node, then run the following command:\n\nkubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {\"\\n\"}{end}' -A\n\nOr, using the API, configure the $AUTH variable to contain the token for the SCIM API endpoint:\n\ncurl -k -H 'Accept: application/json' -H \"Authorization: Bearer $AUTH\" -s \"https://$MKE_ADDRESS/api/MKE/config/kubernetes\" | jq '.KMSEnabled'\n\nThe expected output is \"true\".\n\nIf any of the values returned reference environment variables, this is a finding.","fixText":"To create secrets when using Swarm Orchestration, log in to the MKE UI. 
Navigate to Swarm >> Secrets, and then click \"Create\".\n\nProvide a name for the secret and enter the data into the \"Content\" field.\n\nAdd a label to allow for RBAC features to be used for access to the secret.\n\nClick \"Save\".\n\nTo create secrets when using Kubernetes orchestration, run the following command on the MKE Controller node:\n\nConfigure the $AUTH variable to contain the token for the SCIM API endpoint.\n\ncurl -X PUT -H 'Accept: application/json' -H \"Authorization: Bearer $AUTH\" -d '{\"KMSEnabled\":true,\"KMSName\":\"<kms_name>\",\"KMSEndpoint\":\"/var/kms\"}' \"https://$MKE_ADDRESS/api/MKE/config/kubernetes\"","ccis":["CCI-000213","CCI-001499","CCI-004062","CCI-002450","CCI-002476"]},{"vulnId":"V-260912","ruleId":"SV-260912r966093_rule","severity":"medium","ruleTitle":"MKE must have Grants created to control authorization to cluster resources.","description":"MKE uses Role-Based Access Controls (RBAC) to enforce approved authorizations for logical access to information and system resources in accordance with applicable access control policies.\n\nUsing an IDP (per this STIG) still requires configuring mappings. Refer to the following for more information: https://docs.mirantis.com/mke/3.7/ops/authorize-rolebased-access/rbac-tutorials/access-control-standard.html#access-control-standard.","checkContent":"Verify the applied RBAC policies set in MKE are configured per the requirements set forth by the System Security Plan (SSP).\n\nLog in to the MKE web UI as an MKE Admin and navigate to Access Control >> Grants.\n\nWhen using Kubernetes orchestration, select the \"Kubernetes\" tab and verify that cluster role bindings are configured per the requirements set forth by the SSP.\n\nWhen using Swarm orchestration, select the \"Swarm\" tab. 
Verify that all grants are configured per the requirements set forth by the SSP.\n\nIf the grants are not configured per the requirements set forth by the SSP, then this is a finding.","fixText":"Create Role Bindings/Grants by logging in to the MKE web UI as an MKE Admin. Navigate to Access Control >> Grants.\n\nUsing Kubernetes orchestration:\n- Select the \"Kubernetes\" tab and click \"Create Role Binding\".\n- Add Users, Organizations, or Service Accounts as needed and click \"Next\".\n- Under \"Resource Set\", enable \"Apply Role Binding to all namespaces\", and then click \"Next\".\n- Under \"Role\", select a cluster role.\n- Click \"Create\".\n\nUsing Swarm orchestration:\n- Select the \"Swarm\" tab and click \"Create Grant\".\n- Add Users, Organizations, or Service Accounts as needed and click \"Next\".\n- Under \"Resource Set\", click \"View Children\" until the required Swarm collection displays, and then click \"Next\".\n- Under \"Role\", select a cluster role.\n- Click \"Create\".","ccis":["CCI-001368"]},{"vulnId":"V-260913","ruleId":"SV-260913r966096_rule","severity":"medium","ruleTitle":"MKE host network namespace must not be shared.","description":"MKE can be built with privileges that are not approved within the organization. To limit the attack surface of MKE, it is essential that privileges meet organization requirements.\n\nWhen a container's networking mode is set to --net=host, the container is not placed inside a separate network stack. This is potentially dangerous because it allows the container process to open low-numbered ports like any other root process. Thus, a container process can potentially do unexpected things such as shutting down the Docker host. 
Do not use this option.\n\nBy default, bridge mode is used.","checkContent":"When using Kubernetes orchestration, ensure that Pods do not use the host machine's network namespace and instead use their own isolated network namespaces.\n\nNote: If the hostNetwork field is not explicitly set in the Pod's specification, it will use the default behavior, which is equivalent to hostNetwork: false.\n\nExecute the following for all pods:\n\nkubectl get pods --all-namespaces -o json | jq '.items[] | select(.spec.hostNetwork == true) | .metadata.name'\n\nIf the above command returns any Pod names, \"hostNetwork\" is set to \"true\"; this is a finding unless a documented exception is present in the System Security Plan (SSP).\n\nWhen using Swarm orchestration, check that the host's network namespace is not shared.\n\nVia CLI: \nLinux: As an administrator, execute the following command using a Universal Control Plane (MKE) client bundle:\n\ndocker ps --quiet --filter \"label=com.docker.ucp.version\" | xargs docker inspect --format '{{ .Name }}: NetworkMode={{ .HostConfig.NetworkMode }}'\n\nIf the above command returns NetworkMode=host, this is a finding unless a documented exception is present in the SSP.","fixText":"When using Kubernetes orchestration:\nIn Kubernetes, the hostNetwork setting is a part of the Pod's specification, and once a Pod is created, its hostNetwork setting cannot be directly modified. However, the desired effect can be achieved by creating a new Pod with the updated hostNetwork setting and then deleting the existing Pod. 
This process replaces the old Pod with the new one.\n\nWhen using Swarm orchestration:\nReview nonsystem containers previously created by users that allow access to the host network namespace, and remove them by executing:\n\ndocker container rm [container]","ccis":["CCI-001414"]},{"vulnId":"V-260914","ruleId":"SV-260914r966099_rule","severity":"medium","ruleTitle":"Audit logging must be enabled on MKE.","description":"Enabling audit logging on MKE enhances security, supports compliance efforts, provides user accountability, and offers valuable insights for incident response and operational management. It is an essential component of maintaining a secure, compliant, and well-managed Kubernetes environment.\n\nWithout generating audit records that are specific to the security and mission needs of the organization, it would be difficult to establish, correlate, and investigate the events relating to an incident, or identify those responsible for one.","checkContent":"Check auditing configuration level for MKE nodes and controller:\n\nLog in to the MKE web UI and navigate to admin >> Admin Settings >> Logs & Audit Logs.\n\nIf \"AUDIT LOG LEVEL\" is not set to \"Request\", this is a finding.\n\nIf \"DEBUG LEVEL\" is set to \"ERROR\", this is a finding.","fixText":"Log in to the MKE web UI and navigate to admin >> Admin Settings >> Logs & Audit Logs.\n\nIn the \"Configure Audit Log Level\" section, select \"Request\".\n\nIn the \"Configure Global Log Level\" section, select \"INFO\" or \"DEBUG\". 
\nNote: The recommended setting is \"INFO\".\n\nClick \"Save\".","ccis":["CCI-001464","CCI-000018","CCI-001403","CCI-001404","CCI-001405","CCI-000169","CCI-000172","CCI-000135","CCI-002234"]},{"vulnId":"V-260915","ruleId":"SV-260915r966102_rule","severity":"medium","ruleTitle":"MKE must be configured to send audit data to a centralized log server.","description":"Sending audit data from MKE to a centralized log server enhances centralized monitoring, facilitates efficient incident response, scales effectively, provides redundancy, and helps organizations meet compliance requirements. This is the recommended best practice for managing Kubernetes environments, especially in enterprise settings.","checkContent":"Check centralized log server configuration.\n\nVia CLI, execute the following commands as a trusted user on the host operating system:\n\ncat /etc/docker/daemon.json\n\nVerify that the \"log-driver\" property is set to one of the following: \"syslog\", \"journald\", or \"<plugin>\" (where <plugin> is the naming of a third-party Docker logging driver plugin).\n\nWork with the SIEM administrator to determine if an alert is configured when audit data is no longer received as expected.\n\nIf \"log-driver\" is not set, or if alarms are not configured in the SIEM, then this is a finding.","fixText":"Configure logging driver by setting the log-driver and log-opts keys to appropriate values in the daemon.json file. Refer to this link for extra assistance: https://docs.docker.com/config/containers/logging/syslog/.\n\nVia CLI:\nLinux:\n1. As a trusted user on the host OS, open the /etc/docker/daemon.json file for editing. If the file does not exist, it must be created.\n\n2. 
Set the \"log-driver\" property to one of the following: \n\"syslog\", \"journald\", or \"<plugin>\" (where <plugin> is the name of a third-party Docker logging driver plugin).\nNote: Mirantis recommends the \"journald\" setting.\n\nThe following example sets the log driver to journald:\n\n{\n  \"log-driver\": \"journald\"\n}\n\n\n3. Configure the \"log-opts\" object as required by the selected \"log-driver\".\n\n4. Save the file.\n\n5. Restart the Docker daemon by executing the following:\n\nsudo systemctl restart docker\n\nConfigure rsyslog to send logs to the SIEM system.\n\n1. Edit the /etc/rsyslog.conf file and add the address of the remote server.\nExample: *.* @@loghost.example.com\n\n2. Work with the SIEM administrator to configure an alert when no audit data is received from Mirantis.","ccis":["CCI-000140","CCI-000154","CCI-001851","CCI-002702"]},{"vulnId":"V-260916","ruleId":"SV-260916r966105_rule","severity":"medium","ruleTitle":"MSR's self-signed certificates must be replaced with DOD trusted, signed certificates.","description":"Self-signed certificates pose security risks, as they are not issued by a trusted third party. DOD trusted, signed certificates have undergone a validation process by a trusted CA, reducing the risk of man-in-the-middle attacks and unauthorized access. Using these certificates enhances the trust and authenticity of the communication between clients and the MSR server.","checkContent":"If MSR is not being utilized, this is Not Applicable.\n\nCheck that MSR has been integrated with a trusted certificate authority (CA). \n\n1. In one terminal window execute the following:\nkubectl port-forward service/msr 8443:443\n\n2. 
In a second terminal window execute the following:\nopenssl s_client -connect localhost:8443 -showcerts </dev/null\n\nIf the certificate chain in the output is not valid and does not match that of the trusted CA, then this is a finding.","fixText":"If MSR is not being utilized, this is Not Applicable.\n\nEnsure the certificates are from a trusted DOD CA.\n\n1. Add the secret to the cluster by executing the following:\n\nkubectl create secret tls <secret-name> --key <keyfile>.pem --cert <certfile>.pem\n\n2. Update MSR with the custom certificate by executing the following:\n\nhelm upgrade msr [REPO_NAME]/msr --version <helm-chart-version> --set-file license=path/to/file/license.lic --set nginx.webtls.create=false --set nginx.webtls.secretName=\"<secret-name>\"","ccis":["CCI-000381"]},{"vulnId":"V-260917","ruleId":"SV-260917r966108_rule","severity":"medium","ruleTitle":"Allowing users and administrators to schedule containers on all nodes must be disabled.","description":"MKE and MSR are set to disallow administrators and users from scheduling containers. This setting must be checked, as allowing administrators or users to schedule containers may override essential settings and therefore is not permitted.","checkContent":"To ensure this setting has not been modified, follow these steps on each node:\n\nLog in to the MKE web UI and navigate to admin >> Admin Settings >> Orchestration. Scroll down to \"Container Scheduling\".\n\nVerify that the \"Allow administrators to deploy containers on MKE managers or nodes running MSR\" is disabled. If it is checked (enabled), this is a finding.\n\nVerify that the \"Allow users to schedule on all nodes, including MKE managers and MSR nodes\" is disabled. If it is checked (enabled), this is a finding.","fixText":"Set MKE and MSR to disallow administrators and users to schedule containers.\n\nLog in to the MKE web UI and navigate to admin >> Admin Settings >> Orchestration. 
Scroll down to \"Container Scheduling\".\n\nDisable the \"Allow administrators to deploy containers on MKE managers or nodes running MSR\" option.\n\nDisable the \"Allow users to schedule on all nodes, including MKE managers and MSR nodes\" option.\n\nClick \"Save\".","ccis":["CCI-000381"]},{"vulnId":"V-260918","ruleId":"SV-260918r966111_rule","severity":"medium","ruleTitle":"MKE telemetry must be disabled.","description":"MKE provides a telemetry service that automatically records and transmits data to Mirantis through an encrypted channel for monitoring and analysis purposes. While this channel is secure, it introduces an attack vector and must be disabled.","checkContent":"Verify that usage and API analytics tracking is disabled in MKE.\n\nLog in to the MKE web UI and navigate to admin >> Admin Settings >> Usage.\n\nVerify the \"Enable hourly usage reporting\" and \"Enable API and UI tracking\" options are both unchecked.\n\nIf either box is checked, this is a finding.","fixText":"Disable usage and API analytics tracking in MKE.\n\nLog in to the MKE web UI and navigate to admin >> Admin Settings >> Usage.\n\nUncheck both the \"Enable hourly usage reporting\" and \"Enable API and UI tracking\" options.\n\nClick \"Save\".","ccis":["CCI-000381"]},{"vulnId":"V-260919","ruleId":"SV-260919r966114_rule","severity":"medium","ruleTitle":"MSR telemetry must be disabled.","description":"MSR provides a telemetry service that automatically records and transmits data to Mirantis through an encrypted channel for monitoring and analysis purposes. While this channel is secure, it introduces an attack vector and must be disabled.","checkContent":"If MSR is not being utilized, this is Not Applicable.\n\nVerify that usage and API analytics tracking is disabled in MSR.\n\nLog in to the MSR web UI and navigate to System >> General Tab. 
Scroll to the \"Analytics\" section.\n\nIf the \"Send data\" option is enabled, this is a finding.","fixText":"If MSR is not being utilized, this is Not Applicable.\n\nDisable usage and API analytics tracking in MSR.\n\nLog in to the MSR web UI and navigate to System >> General Tab. Scroll to the \"Analytics\" section.\n\nClick the \"Send data\" slider to disable this capability.","ccis":["CCI-000381"]},{"vulnId":"V-260920","ruleId":"SV-260920r966117_rule","severity":"medium","ruleTitle":"For MKE's deployed on an Ubuntu host operating system, the AppArmor profile must be enabled.","description":"AppArmor protects the Ubuntu OS and applications from various threats by enforcing a security policy, also known as an AppArmor profile. The user can either create their own AppArmor profile for containers or use the Docker default AppArmor profile. This would enforce security policies on the containers as defined in the profile.\n\nBy default, the docker-default AppArmor profile is applied to running containers, and this profile can be found at /etc/apparmor.d/docker.","checkContent":"If MKE is not being used on an Ubuntu host operating system, this is Not Applicable.\n\nIf AppArmor is not in use, this is Not Applicable.\n\nThis check must be executed on all nodes in a cluster.\n\nVia CLI:\nLinux: Execute the following command as a trusted user on the host operating system:\n\ndocker ps -a -q | xargs -I {} docker inspect {} --format '{{ .Name }}: AppArmorProfile={{ .AppArmorProfile }}, Privileged={{ .HostConfig.Privileged }}' | grep 'AppArmorProfile=unconfined' | grep 'Privileged=false'\n\nIf there is any output, this is a finding.","fixText":"If not using MKE on an Ubuntu host operating system, this is Not Applicable. 
\nIf AppArmor is not in use, this is Not Applicable.\n\nThis check must be executed on all nodes in a cluster.\n\nRun all nonprivileged containers using an AppArmor profile:\n\nVia CLI:\nLinux: Install AppArmor (if not already installed).\n\nCreate/import an AppArmor profile (if not using the \"docker-default\" profile). Put the profile in \"enforcing\" mode. Execute the following command as a trusted user on the host operating system to run the container using the customized AppArmor profile:\n\ndocker run [options] --security-opt=\"apparmor:[PROFILENAME]\" [image] [command]\n\nWhen using the \"docker-default\" profile, run the container using the following command instead:\n\ndocker run [options] --security-opt apparmor=docker-default [image] [command]","ccis":["CCI-000381"]},{"vulnId":"V-260921","ruleId":"SV-260921r966120_rule","severity":"medium","ruleTitle":"If MKE is deployed on a Red Hat or CentOS system, SELinux security must be enabled.","description":"SELinux provides a Mandatory Access Control (MAC) system on RHEL and CentOS that greatly augments the default Discretionary Access Control (DAC) model. The user can thus add an extra layer of safety by enabling SELinux on the RHEL or CentOS host. 
When applied to containers, SELinux helps isolate and restrict the actions that containerized processes can perform, reducing the risk of container escapes and unauthorized access.\n\nBy default, no SELinux security options are applied on containers.","checkContent":"If using MKE on operating systems other than Red Hat Enterprise Linux or CentOS host operating systems where SELinux is in use, this check is Not Applicable.\n\nExecute on all nodes in a cluster.\n\nVerify that the appropriate security options are configured for all running containers:\n\nVia CLI:\nLinux: Execute the following command as a user on the host operating system:\n\ndocker info --format '{{.SecurityOptions}}'\n\nThe expected output is: [name=seccomp, profile=default name=selinux name=fips]\n\nIf there is no output or \"name=selinux\" is not present, this is a finding.","fixText":"If using MKE on operating systems other than Red Hat Enterprise Linux or CentOS host operating systems where SELinux is in use, this check is Not Applicable.\n\nExecute on all nodes in a cluster.\n\nStart MKE with SELinux mode enabled. Run containers using appropriate security options.\n\nVia CLI:\nLinux: Set the SELinux state and policy. Create or import a SELinux policy template for MKE. 
Then, start MKE with SELinux mode enabled by setting the \"selinux-enabled\" property to \"true\" in the \"/etc/docker/daemon.json\" daemon configuration file.\n\nRestart MKE.","ccis":["CCI-000381"]},{"vulnId":"V-260922","ruleId":"SV-260922r966123_rule","severity":"medium","ruleTitle":"The Docker socket must not be mounted inside any containers.","description":"The Docker socket docker.sock must not be mounted inside a container, the exception being during installation of the Universal Control Plane (UCP) component of Docker Enterprise, as it is required for install.\n\nIf the Docker socket is mounted inside a container, it would allow processes running within the container to execute docker commands, which effectively allows for full control of the host.\n\nBy default, docker.sock (Linux) and \\\\.\\pipe\\docker_engine (Windows) are not mounted inside containers.","checkContent":"If using Kubernetes orchestration, this check is Not Applicable.\n\nWhen using Swarm orchestration, log in to the CLI as an MKE Admin, and execute the following command using an MKE client bundle:\n\ndocker ps --all --filter \"label=com.docker.ucp.version\" | xargs docker inspect --format '{{ .Id }}: Volumes={{ .Mounts }}' | grep -i \"docker.sock\\|docker_engine\"\n\nIf the Docker socket is mounted inside containers, this is a finding.\n\nIf \"volumes\" is not present or if \"docker.sock\" is listed, this is a finding.","fixText":"If using Kubernetes orchestration, this check is Not Applicable.\n\nWhen using Swarm orchestration and using the -v/--volume flags to mount volumes to containers in a docker run command, do not use docker.sock as a volume.\n\nA reference for the docker run command can be found at https://docs.docker.com/engine/reference/run/.\n\nReview and remove nonsystem containers that mount the Docker socket using:\n\ndocker container rm 
[container]","ccis":["CCI-000381"]},{"vulnId":"V-260923","ruleId":"SV-260923r966126_rule","severity":"medium","ruleTitle":"Linux Kernel capabilities must be restricted within containers.","description":"By default, MKE starts containers with a restricted set of Linux Kernel Capabilities. Any process may be granted the required capabilities instead of root access. Using Linux Kernel Capabilities, the processes do not have to run as root for almost all the specific areas where root privileges are usually needed. MKE supports the addition and removal of capabilities, allowing the use of a nondefault profile. Remove all capabilities except those explicitly required for the user's container process.\n\nBy default, the following capabilities are available for Linux containers:\n\nAUDIT_WRITE\nCHOWN\nDAC_OVERRIDE\nFOWNER\nFSETID\nKILL\nMKNOD\nNET_BIND_SERVICE\nNET_RAW\nSETFCAP\nSETGID\nSETPCAP\nSETUID\nSYS_CHROOT","checkContent":"When using Kubernetes orchestration, this check is Not Applicable.\n\nWhen using Swarm orchestration, via CLI:\n\nLinux: Execute the following command as a trusted user on the host operating system:\n\ndocker ps --quiet --all | xargs docker inspect --format '{{ .Name }}: CapAdd={{ .HostConfig.CapAdd }} CapDrop={{ .HostConfig.CapDrop }}'\n\nThe command will output all Linux Kernel Capabilities.\n\nIf Linux Kernel Capabilities exceed what is defined in the System Security Plan (SSP), this is a finding.","fixText":"When using Kubernetes orchestration, this check is Not Applicable.\n\nWhen using Swarm orchestration, review and remove nonsystem containers that add capabilities beyond those defined in the SSP using:\n\ndocker container rm [container]","ccis":["CCI-000381"]},{"vulnId":"V-260924","ruleId":"SV-260924r966129_rule","severity":"medium","ruleTitle":"Incoming container traffic must be bound to a specific host interface.","description":"Privileged ports are those ports below 1024 that require system privileges for 
their use. If containers are able to use these ports, the container must be run as a privileged user. MKE must stop containers that try to map to these ports directly. Allowing nonprivileged ports to be mapped to the container-privileged port is the allowable method when a certain port is needed. An example is mapping port 8080 externally to port 80 in the container.\n\nBy default, if the user does not specifically declare the container port to host port mapping, MKE automatically and correctly maps the container port to one available in the 49153-65535 block on the host. But, MKE allows a container port to be mapped to a privileged port on the host if the user explicitly declares it. This is because containers are executed with the NET_BIND_SERVICE Linux kernel capability, which does not restrict the privileged port mapping. The privileged ports receive and transmit various sensitive and privileged data. Allowing containers to use them can bring serious implications.","checkContent":"This check must be executed on all nodes in an MKE cluster.\n\nVerify that no running containers are mapping host port numbers below 1024.\n\nVia CLI:\nLinux: Execute the following command as a trusted user on the host operating system:\n\ndocker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: Ports={{ .NetworkSettings.Ports }}'\n\nReview the list and ensure that container ports are not mapped to host port numbers below 1024. If they are, then this is a finding.\n\nEnsure that there are no container-to-host privileged port mapping declarations in the Mirantis config file. View the config file. 
If container to host privileged port mapping declarations exist, this is a finding.","fixText":"To edit container ports, log in to the MKE web UI and navigate to Shared Resources >> Containers.\n\n- Locate the container with the incorrect port mapping.\n- Click on the container name and stop the container by clicking on the three dots in the upper right hand corner.\n- Scroll down to Ports to check if ports have been manually assigned.\n- Edit the port to a nonprivileged port.","ccis":["CCI-000381"]},{"vulnId":"V-260925","ruleId":"SV-260925r966132_rule","severity":"medium","ruleTitle":"CPU priority must be set appropriately on all containers.","description":"All containers on a Docker host share the resources equally. By using the resource management capabilities of Docker host, such as CPU shares, the user controls the host CPU resources that a container may consume.\n\nBy default, CPU time is divided between containers equally. If CPU shares are not properly set, the container process may have to starve if the resources on the host are not available. If the CPU resources on the host are free, CPU shares do not place any restrictions on the CPU that the container may use.","checkContent":"Ensure Resource Quotas and CPU priority is set for each namespace.\n\nWhen using Kubernetes orchestration:\nLog in to the MKE web UI, navigate to Kubernetes >> Namespace, and then click on each defined Namespace.\n\nIf the Namespace states \"Quotas Nothing has been defined for this resource.\" or the limits.cpu or the limits.memory settings do not match the System Security Plan (SSP), this is a finding.\n\nWhen using Swarm orchestration:\n1. 
Check Resource Quotas:\n\nLinux: As an administrator, execute the following command using a Universal Control Plane (MKE) client bundle:\n\ndocker ps --quiet --filter \"label=com.docker.ucp.version\" | xargs docker inspect --format '{{ .Name }}: Memory={{ .HostConfig.Memory }}'\n\nIf the above command returns \"0\", it means the memory limits are not in place, and this is a finding.\n\n2. Check CPU Priority:\nWhen using Swarm orchestration, to ensure CPU priority is set, use the CLI:\n\nLinux: As an MKE Admin, execute the following command using a Universal Control Plane (MKE) client bundle:\n\ndocker ps --quiet --filter \"label=com.docker.ucp.version\" | xargs docker inspect --format '{{ .Name }}: CpuShares={{ .HostConfig.CpuShares }}'\n\nCompare the output against the SSP. If any containers are set to \"0\" or \"1024\" and they are not documented in the System Security Plan (SSP), this is a finding.","fixText":"Set Resource Quotas and CPU priority for each namespace.\n\nWhen using Kubernetes orchestration:\n\n1. Create a resource quota as follows (quotaexample.yaml):\n\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n  name: mem-cpu-demo\nspec:\n  hard:\n    requests.cpu: \"1\"\n    requests.memory: 1Gi\n    limits.cpu: \"2\"\n    limits.memory: 2Gi\n\nThe limits can be set according to the SSP.\n\nSave this file.\n\n2. Apply the quota to a namespace within the cluster by executing:\n\nkubectl apply -f [full path to quotaexample.yaml] --namespace=[name of namespace on cluster]\n\nThis must be repeated for all namespaces. Quotas can differ per namespace as required by the site.\n\nWhen using Swarm orchestration:\n\n1. Set Resource Quotas by executing the following:\n\ndocker update --memory=\"2g\" [container name]\n\nThis must be repeated for all containers. Quotas can differ per container as required by the site.\n\n2. 
Set CPU Priority:\n\nWhen using Swarm orchestration to manage the CPU shares between containers, start the container using the --cpu-shares argument.\n\nFor example, run a container as below:\n\ndocker run --interactive --tty --cpu-shares 512 [image] [command]\n\nIn the above example, the container is started with 512 CPU shares, half of the default 1024, so under CPU contention it receives half the CPU time of a container running with the default shares.\n\nNote: Every new container will have 1024 shares of CPU by default. However, this value is shown as \"0\" if running the command mentioned in the audit section.\n\nAlternatively:\n1. Navigate to the /sys/fs/cgroup/cpu/system.slice/ directory.\n\n2. Check the container instance ID using docker ps.\n\n3. Inside the above directory (in step 1), there will be a directory called docker-<Instance ID>.scope. For example, docker-4acae729e8659c6be696ee35b2237cc1fe4edd2672e9186434c5116e1a6fbed6.scope. Navigate to this directory.\n\n4. Find a file named cpu.shares. Execute cat cpu.shares. This will always show the CPU share value based on the system. Even if there are no CPU shares configured using the -c or --cpu-shares argument in the docker run command, this file will have a value of 1024.\n\nBy setting one container's CPU shares to 512, it will receive half of the CPU time compared to a container with the default 1024 shares. Take 1024 as 100 percent and derive the number to set for the respective CPU shares. For example, use 512 to set 50 percent and 256 to set 25 percent.","ccis":["CCI-000381","CCI-002530","CCI-002824"]},{"vulnId":"V-260926","ruleId":"SV-260926r966135_rule","severity":"medium","ruleTitle":"MKE must use a non-AUFS storage driver.","description":"The aufs storage driver is an old driver based on a Linux kernel patch-set that is unlikely to be merged into the main Linux kernel. The aufs driver is also known to cause some serious kernel crashes. aufs only has legacy support from Docker. 
Most importantly, aufs is not a supported driver in many Linux distributions using the latest Linux kernels.","checkContent":"The default storage driver for MCR is overlay2. To confirm this has not been changed, via CLI:\n\nAs a trusted user on the underlying host operating system, execute the following command:\n\ndocker info | grep -e \"Storage Driver:\"\n\nIf the Storage Driver setting contains *aufs or *btrfs, then this is a finding. If the above command returns no values, this is not a finding.","fixText":"Modify the Storage Driver setting.\n\nVia CLI as a trusted user on the underlying host operating system:\n\nIf the \"/etc/docker/daemon.json\" file does not exist, it must be created.\n\nEdit the \"/etc/docker/daemon.json\" file and set the \"storage-driver\" property to a value that is not \"aufs\" or \"btrfs\".\n\n{\n  \"storage-driver\": \"overlay2\"\n}\n\nRestart the Docker daemon by executing the following:\n\nsudo systemctl restart docker","ccis":["CCI-000382"]},{"vulnId":"V-260927","ruleId":"SV-260927r966138_rule","severity":"medium","ruleTitle":"MKE's self-signed certificates must be replaced with DOD trusted, signed certificates.","description":"Self-signed certificates pose security risks, as they are not issued by a trusted third party. DOD trusted, signed certificates have undergone a validation process by a trusted CA, reducing the risk of man-in-the-middle attacks and unauthorized access. MKE uses TLS to protect sessions. Using trusted certificates ensures that only trusted sources can access the MKE cluster.","checkContent":"If Kubernetes ingress is being used, this is Not Applicable.\n\nCheck that MKE has been integrated with a trusted certificate authority (CA).\n\nLog in to the MKE web UI and navigate to admin >> Admin Settings >> Certificates.\n\nClick \"Download MKE Server CA Certificate\". 
\n\nVerify that the contents of the downloaded \"ca.pem\" file match that of the trusted CA certificate.\n\nIf the certificate chain does not match the chain as defined by the System Security Plan (SSP), then this is a finding.","fixText":"If Kubernetes ingress is being used, this is Not Applicable.\n\nIntegrate MKE and MSR (if used) with a trusted certificate authority (CA).\n\nLog in to the MKE web UI and navigate to admin >> Admin Settings >> Certificates.\n\nEither fill in the \"CA Certificate\" field with the contents of the external public CA certificate or upload a file.\n\nEither fill in the \"Server Certificate\" and \"Private Key\" fields with the contents of the public/private certificates or upload a file.\n\nThe \"Server Certificate\" field must include both the MKE server certificate and any intermediate certificates.\n\nClick \"Save\".","ccis":["CCI-000381","CCI-000185"]},{"vulnId":"V-260928","ruleId":"SV-260928r966141_rule","severity":"medium","ruleTitle":"The \"Create repository on push\" option in MSR must be disabled.","description":"Allowing repositories to be created on a push can override essential settings and must not be allowed.","checkContent":"If MSR is not being utilized, this is Not Applicable.\n\nVerify the \"Create repository on push\" option is disabled in MSR:\n\nLog in to the MSR web UI as an administrator and navigate to System >> General Tab >> Repositories Section.\n\nVerify the \"Create repository on push\" slider is turned off. 
If it is turned on, this is a finding.","fixText":"If MSR is not being utilized, this is Not Applicable.\n\nDisable the \"Create repository on push\" option in MSR:\n\nLog in to the MSR web UI as an administrator and navigate to System >> General Tab >> Repositories Section.\n\nSet the \"Create repository on push\" slider to off.","ccis":["CCI-000381"]},{"vulnId":"V-260929","ruleId":"SV-260929r966144_rule","severity":"medium","ruleTitle":"Containers must not map to privileged ports.","description":"Privileged ports are those ports below 1024 that require system privileges for their use. If containers are able to use these ports, the container must be run as a privileged user. MKE must stop containers that try to map to these ports directly. Allowing nonprivileged ports to be mapped to the container-privileged port is the allowable method when a certain port is needed. An example is mapping port 8080 externally to port 80 in the container.\n\nBy default, if the user does not specifically declare the container port to host port mapping, MKE automatically and correctly maps the container port to one available in the 49153-65535 block on the host. But, MKE allows a container port to be mapped to a privileged port on the host if the user explicitly declares it. This is because containers are executed with the NET_BIND_SERVICE Linux kernel capability, which does not restrict the privileged port mapping. The privileged ports receive and transmit various sensitive and privileged data. 
Allowing containers to use them can bring serious implications.","checkContent":"This check must be executed on all nodes in an MKE cluster.\n\nVerify no running containers are mapping host port numbers below 1024.\n\nVia CLI:\nLinux: Execute the following command as a trusted user on the host operating system:\n\ndocker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: Ports={{ .NetworkSettings.Ports }}'\n\nReview the list and ensure container ports are not mapped to host port numbers below 1024. If they are, then this is a finding.","fixText":"To edit container ports, log in to the MKE web UI and navigate to Shared Resources >> Containers.\n\n- Locate the container with the incorrect port mapping. \n- Click on the container name and stop the container by clicking the three dots in the upper right corner. \n- Scroll down to Ports and check if ports have been manually assigned.\n- Edit the port to a nonprivileged port.","ccis":["CCI-000382"]},{"vulnId":"V-260930","ruleId":"SV-260930r966147_rule","severity":"medium","ruleTitle":"MKE must not permit users to create pods that share host process namespace.","description":"Controlling information flow between MKE components and container user services instantiated by MKE must enforce organization-defined information flow policies. Example methods for information flow control are: using labels for containers to segregate services; user permissions and roles to limit what user services are available to each user; controlling the user the services are able to execute as; and limiting inter-container network traffic and the resources containers can consume. Process ID (PID) namespaces isolate the PID number space, meaning that processes in different PID namespaces can have the same PID. This is process level isolation between containers and the host.\n\nPID namespace provides separation of processes and removes the view of the system processes, and allows process IDs to be reused including PID 1. 
If the host's PID namespace is shared with the container, it would allow processes within the container to view all of the processes on the host system.\n\nContainer processes cannot view the processes on the host system. In certain cases, such as system-level containers, the container must share the host's process namespace. System-level containers have a defined label and this access must be documented.\n\nBy default, all containers have the PID namespace enabled and the host's process namespace is not shared with the containers.","checkContent":"When using Kubernetes orchestration, this check is Not Applicable.\n\nWhen using Swarm orchestration, to ensure the host's process namespace is not shared, log in via CLI:\n\nExecute the following using the MKE client bundle:\n\ncontainer_ids=$(docker ps --quiet --filter=label=com.docker.ucp.version)\nfor container_id in $container_ids\ndo\n   container_name=$(docker inspect -f '{{.Name}}' $container_id | cut -c2-)\n   pid_mode=$(docker inspect -f '{{.HostConfig.PidMode}}' $container_id)\n\n   echo \"Container Name: $container_name, ID: $container_id, PidMode: $pid_mode\"\ndone\n\nIf PidMode = \"host\", this is a finding.","fixText":"When using Kubernetes orchestration, this check is Not Applicable.\n\nUsing Swarm orchestration, review and remove nonsystem containers previously created by these users utilizing shared namespaces or with a PidMode=host using the following:\n\ndocker container rm [container]","ccis":["CCI-000764"]},{"vulnId":"V-260931","ruleId":"SV-260931r966150_rule","severity":"medium","ruleTitle":"IPSec network encryption must be configured.","description":"IPsec encrypts the data traffic between nodes in a Kubernetes cluster, ensuring that the information exchanged is confidential and protected from unauthorized access. This is particularly important when sensitive or confidential data is transmitted over the network. 
IPsec not only provides encryption but also ensures the integrity of the transmitted data. Through the use of cryptographic mechanisms, IPsec can detect and prevent tampering or modification of data during transit.\n\nIn a Kubernetes cluster managed by MKE, nodes communicate with each other for various purposes, such as pod networking, service discovery, and cluster coordination. IPsec helps secure these communications, reducing the risk of man-in-the-middle attacks and unauthorized interception.","checkContent":"Verify IPSec network encryption.\n\nFor Swarm orchestration, log in to the MKE web UI and navigate to Swarm >> Networks.\n\nIf the \"scope\" is not local and the \"driver\" is not overlay, this is a finding.\n\nKubernetes orchestration:\nNote: The path may need to be edited.\n\ncat /etc/mke/config.toml | grep secure_overlay\n\nIf the \"secure_overlay\" setting is not set to \"true\", this is a finding.","fixText":"To configure IPSec network encryption in Swarm orchestration, create an overlay network with the --opt encrypted flag.\n\nExample:\ndocker network create --opt encrypted --driver overlay my-network\n\nTo configure IPSec network encryption in Kubernetes orchestration, modify an existing MKE configuration.\n\nWorking as an MKE admin, use the config-toml API from within the directory of your client certificate bundle to export the current MKE settings to a TOML file (mke-config.toml).\n\n1. Define the following environment variables:\n\nexport MKE_USERNAME=<mke-username>\nexport MKE_PASSWORD=<mke-password>\nexport MKE_HOST=<mke-fqdn-or-ip-address>\n\n2. Obtain and define an AUTHTOKEN environment variable by executing the following:\n\nAUTHTOKEN=$(curl --silent --insecure --data '{\"username\":\"'$MKE_USERNAME'\",\"password\":\"'$MKE_PASSWORD'\"}' https://$MKE_HOST/auth/login | jq --raw-output .auth_token)\n\n3. 
Download the current MKE configuration file by executing the following:\n\ncurl --silent --insecure -X GET \"https://$MKE_HOST/api/MKE/config-toml\" -H \"accept: application/toml\" -H \"Authorization: Bearer $AUTHTOKEN\" > mke-config.toml\n\n4. Set the \"secure_overlay\" setting to \"true\".\n\n5. Upload the newly edited MKE configuration file by executing the following:\n\ncurl --silent --insecure -X PUT -H \"accept: application/toml\" -H \"Authorization: Bearer $AUTHTOKEN\" --upload-file 'mke-config.toml' https://$MKE_HOST/api/MKE/config-toml\n\nNote: Users may need to reacquire AUTHTOKEN if significant time has passed since it was first obtained.","ccis":["CCI-000778"]},{"vulnId":"V-260932","ruleId":"SV-260932r966153_rule","severity":"medium","ruleTitle":"MKE must preserve any information necessary to determine the cause of the disruption or failure.","description":"When a failure occurs within MKE, preserving the state of MKE and its components, along with other container services, helps to facilitate container platform restart and return to the operational mode of the organization with less disruption to mission essential processes. When preserving state, data confidentiality and integrity must be taken into consideration.","checkContent":"When using Swarm orchestration, this check is Not Applicable.\n\nReview the Kubernetes configuration to determine if information necessary to determine the cause of a disruption or failure is preserved.\n\nNotes:\n- The ReadWriteOnce access mode in the PVC means the volume can be mounted as read-write by a single node. Ensure the storage backend supports this mode.\n- Adjust the sleep duration in the writer pod as needed.\n- Ensure that the namespace and PVC names match the setup.\n\nSteps to verify data durability:\n1. Create a namespace to manage the testing:\n\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: stig\n\n2. PersistentVolumeClaim (PVC):\nEnsure a PVC is created. 
If using a storage class like Longhorn, it would look similar to:\n\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: stig-pvc\n  namespace: stig\nspec:\n  accessModes:\n    - ReadWriteOnce\n  storageClassName: longhorn  # Replace with your storage class if different, e.g. NFS\n  resources:\n    requests:\n      storage: 5Gi\n\n3. Deploying the Initial Pod:\nCreate a pod that writes data to the PVC. This pod will use a simple loop to write data (e.g., timestamps) to a file on the mounted PVC. Example:\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: write-pod\n  namespace: stig\nspec:\n  volumes:\n    - name: log-storage\n      persistentVolumeClaim:\n        claimName: stig-pvc\n  containers:\n    - name: writer\n      image: busybox\n      command: [\"/bin/sh\", \"-c\"]\n      args: [\"while true; do date >> /data/logs.log; sleep 10; done\"]\n      volumeMounts:\n        - name: log-storage\n          mountPath: /data\n\n4. Simulate Pod Failure:\nAfter the pod has been writing data for some time, it can be deleted to simulate a failure by executing the following:\n\nkubectl delete pod write-pod -n stig\n\n5. Deploying a New Pod to Verify Data:\nDeploy another pod that mounts the same PVC to verify that the data is still there.\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: read-pod\n  namespace: stig\nspec:\n  volumes:\n    - name: log-storage\n      persistentVolumeClaim:\n        claimName: stig-pvc\n  containers:\n    - name: reader\n      image: busybox\n      command: [\"/bin/sh\", \"-c\"]\n      args: [\"sleep infinity\"]\n      volumeMounts:\n        - name: log-storage\n          mountPath: /data\n\n6. 
Verify Data Persistence:\nCheck the contents of the log file in the new pod to ensure that the data written by the first pod is still there by executing the following:\n\nkubectl exec read-pod -n stig -- cat /data/logs.log\n\nIf there is no log data, this is a finding.","fixText":"When using Swarm orchestration, this check is Not Applicable.\n\nThis is a catastrophic error; contact Mirantis support.","ccis":["CCI-001665"]},{"vulnId":"V-260933","ruleId":"SV-260933r966156_rule","severity":"medium","ruleTitle":"MKE must enable kernel protection.","description":"The system kernel is responsible for memory, disk, and task management. The kernel provides a gateway between the system hardware and software. Kubernetes requires kernel access to allocate resources to the Control Plane. Threat actors that penetrate the system kernel can inject malicious code or hijack the Kubernetes architecture. It is vital to implement protections through Kubernetes components to reduce the attack surface.","checkContent":"Verify kernel protection.\n\nWhen using Kubernetes orchestration, change to the /etc/sysconfig/ directory on the Kubernetes Control Plane and execute the following command:\n\ngrep -i protect-kernel-defaults kubelet\n\nIf the setting \"protect-kernel-defaults\" is set to false or not set in the Kubernetes Kubelet, this is a finding.\n\nWhen using Swarm orchestration:\n\nLinux: Execute the following command as a trusted user on the host operating system:\n\ndocker ps --quiet --all | xargs docker inspect --format '{{ .Name }}: CapAdd={{ .HostConfig.CapAdd }} CapDrop={{ .HostConfig.CapDrop }}'\n\nThe command will output all Linux Kernel Capabilities.\n\nIf Linux Kernel Capabilities exceed what is defined in the System Security Plan (SSP), this is a finding.","fixText":"When using Kubernetes orchestration, edit the Kubernetes Kubelet file in the /etc/sysconfig directory on the Kubernetes Control Plane. 
Set the argument \"--protect-kernel-defaults\" to \"true\".\n\nRestart the kubelet service using the following command:\n\nservice kubelet restart\n\nWhen using Swarm orchestration, review and remove nonsystem containers that add capabilities beyond those defined in the SSP using:\n\ndocker container rm [container]","ccis":["CCI-001084"]},{"vulnId":"V-260934","ruleId":"SV-260934r966159_rule","severity":"medium","ruleTitle":"All containers must be restricted from acquiring additional privileges.","description":"To limit the attack surface of MKE, it is important that nonessential services are not installed and access to the host system uses the concept of least privilege.\n\nRestrict the container from acquiring additional privileges via suid or sgid bits.\n\nA process can set the no_new_priv bit in the kernel. It persists across fork, clone, and execve. The no_new_priv bit ensures that the process or its child processes do not gain any additional privileges via suid or sgid bits. This way, many dangerous operations become a lot less dangerous because there is no possibility of subverting privileged binaries.\n\nno_new_priv prevents LSMs like SELinux from transitioning to process labels that have access not allowed to the current process.","checkContent":"This check must be executed on all nodes in an MKE cluster to ensure all containers are restricted from acquiring additional privileges.\n\nVia CLI:\nLinux: As an MKE Admin, execute the following command using a Universal Control Plane (MKE) client bundle:\n\ndocker ps --quiet --all | xargs -L 1 docker inspect --format '{{ .Id }}: SecurityOpt={{ .HostConfig.SecurityOpt }}'\n\nThe above command returns the security options currently configured for the running containers. 
If the \"SecurityOpt=\" setting does not include the \"no-new-privileges\" flag, this is a finding.","fixText":"Start the containers using the following:\n\ndocker run --rm -it --security-opt=no-new-privileges <image>\n\nA reference for the Docker run command can be found at https://docs.docker.com/engine/reference/run/.\n\nno-new-privileges command information can be found here: https://docs.mirantis.com/mke/3.7/install/plan-deployment/mcr-considerations/no-new-privileges.html.","ccis":["CCI-001090","CCI-002233"]},{"vulnId":"V-260935","ruleId":"SV-260935r966162_rule","severity":"medium","ruleTitle":"Host IPC namespace must not be shared.","description":"The IPC (POSIX/SysV IPC) namespace provides separation of named shared memory segments, semaphores, and message queues. The IPC namespace on the host must not be shared with the containers and must remain isolated unless required.\n\nIf the host's IPC namespace is shared with the container, it would allow processes within the container to view all of the IPC on the host system. This breaks the benefit of IPC-level isolation between the host and the containers. A process with access to the container could eventually manipulate the host's IPC. Do not share the host's IPC namespace with the containers. 
Only containers with the proper label will share the IPC namespace, and this access must be documented.","checkContent":"Check if the \"IpcMode\" is set to \"host\" for a running or stopped container.\n\nLog in to the MKE WebUI and navigate to admin >> Admin Settings >> Privileges.\n\nIf hostIPC is checked for User account privileges or Service account privileges, consult the System Security Plan (SSP).\n\nIf hostIPC is not allowed per the SSP, this is a finding.","fixText":"Modify IpcMode for a container by logging in to the MKE WebUI and navigating to admin >> Admin Settings >> Privileges.\n- Uncheck hostIPC.\n- Click \"Save\".","ccis":["CCI-001090","CCI-002530"]},{"vulnId":"V-260936","ruleId":"SV-260936r966165_rule","severity":"medium","ruleTitle":"All containers must be restricted to mounting the root filesystem as read only.","description":"The container's root filesystem must be treated as a \"golden image\" by using Docker run's --read-only option. This prevents any writes to the container's root filesystem at container runtime and enforces the principle of immutable infrastructure.\n\nEnabling this option forces containers at runtime to explicitly define their data writing strategy to persist or not persist their data. 
This also reduces security attack vectors, since the container instance's filesystem cannot be tampered with or written to unless it has explicit read-write permissions on its filesystem folders and directories.","checkContent":"When using Kubernetes orchestration, this check is Not Applicable.\n\nFor Swarm orchestration, check via CLI:\n\nLinux: As an MKE Admin, execute the following command using a Universal Control Plane (MKE) client bundle:\n\ndocker ps --quiet --all | xargs -L 1 docker inspect --format '{{ .Name }}: ReadonlyRootfs={{ .HostConfig.ReadonlyRootfs }}'\n\nIf ReadonlyRootfs=false, the container's root filesystem is writable and this is a finding.","fixText":"When using Kubernetes orchestration, this check is Not Applicable.\n\nWhen using Swarm orchestration, review and remove nonsystem containers previously created by these users with read-write permissions using:\n\ndocker container rm [container]\n\nAdd a --read-only flag at a container's runtime to enforce the container's root filesystem to be mounted as read-only:\n\ndocker run <Run arguments> --read-only <Container Image Name or ID> <Command>\n\nEnabling the --read-only option at a container's runtime must be used by administrators to force a container's executable processes to only write container data to explicit storage locations during the container's runtime.\n\nExamples of explicit storage locations during a container's runtime include, but are not limited to:\n\n1. Use the --tmpfs option to mount a temporary file system for nonpersistent data writes. Example:\n\ndocker run --interactive --tty --read-only --tmpfs \"/run\" --tmpfs \"/tmp\" [image] [command]\n\n2. Enable Docker rw mounts at a container's runtime to persist container data directly on the Docker host filesystem. Example:\n\ndocker run --interactive --tty --read-only -v /opt/app/data:/run/app/data:rw [image] [command]\n\n3. 
Use Docker shared-storage volume plugins for Docker data volumes to persist container data. Example:\n\ndocker volume create -d convoy --opt o=size=20GB my-named-volume\n\ndocker run --interactive --tty --read-only -v my-named-volume:/run/app/data [image] [command]","ccis":["CCI-002233"]},{"vulnId":"V-260937","ruleId":"SV-260937r966168_rule","severity":"medium","ruleTitle":"The default seccomp profile must not be disabled.","description":"Seccomp filtering provides a means for a process to specify a filter for incoming system calls. The default seccomp profile works on a whitelist basis and allows 311 system calls, blocking all others. It must not be disabled unless it hinders the container application usage. The default seccomp profile blocks syscalls regardless of the --cap-add options passed to the container.\n\nA large number of system calls are exposed to every user and process, with many of them going unused for the entire lifetime of the process. Most applications do not need all the system calls and thus benefit from having a reduced set of available system calls. The reduced set of system calls reduces the total kernel surface exposed to the application and thus improves application security.\n\nA running container uses the default profile unless it is overridden with the --security-opt option.","checkContent":"When using Kubernetes orchestration, this check is Not Applicable.\n\nFor Swarm orchestration, to ensure the default seccomp profile is not disabled, log in to the CLI:\n\nLinux: As an MKE Admin, execute the following command using a Universal Control Plane (MKE) client bundle:\n\ndocker ps --quiet --filter \"label=com.docker.ucp.version\" | xargs docker inspect --format '{{ .Id }}: SecurityOpt={{ .HostConfig.SecurityOpt }}'\n\nIf the output includes \"seccomp=unconfined\", the container is running without any seccomp profile and this is a finding.","fixText":"When using Kubernetes orchestration, this check is Not Applicable. 
\n\nWhen using Swarm orchestration, do not pass the unconfined option to run a container without the default seccomp profile. Refer to the seccomp documentation for details: https://docs.docker.com/engine/security/seccomp/.","ccis":["CCI-002233"]},{"vulnId":"V-260938","ruleId":"SV-260938r986163_rule","severity":"medium","ruleTitle":"Docker CLI commands must be run with an MKE client trust bundle and without unnecessary permissions.","description":"Running docker CLI commands remotely with a client trust bundle ensures that authentication and role permissions are checked for the command.\n\nUsing the --privileged or --user option with docker exec gives extended Linux capabilities to the command. Do not run docker exec with the --privileged or --user options, especially when running containers with dropped capabilities or with enhanced restrictions. By default, the docker exec command runs without the --privileged or --user options.","checkContent":"The host OS must be locked down so that only authorized users with a client bundle can access docker commands.\n\nTo ensure that no commands with privilege or user authorizations are present via CLI:\n\nLinux: As a trusted user on the host operating system, use the command below to find docker exec commands that used the --privileged or --user option.\n\nsudo ausearch -k docker | grep exec | grep privileged | grep user\n\nIf any results are returned, this is a finding.","fixText":"Docker CLI commands must only be run with a client bundle and must not use the --privileged or --user option.\n\nRefer to https://docs.mirantis.com/mke/3.7/ops/access-cluster/client-bundle/configure-client-bundle.html?highlight=client%20bundle.","ccis":["CCI-002233","CCI-004068"]},{"vulnId":"V-260939","ruleId":"SV-260939r966174_rule","severity":"medium","ruleTitle":"MKE users must not have permissions to create containers or pods that share the host user namespace.","description":"To limit the attack surface of MKE, it is important that the nonessential 
services are not installed and access to the host system uses the concept of least privilege.\n\nUser namespaces ensure that a root process inside the container will be mapped to a nonroot process outside the container. Sharing the user namespaces of the host with the container thus does not isolate users on the host from users on the containers.\n\nBy default, the host user namespace is shared with the containers unless user namespace support is enabled.","checkContent":"When using Kubernetes orchestration, this check is Not Applicable.\n\nWhen using Swarm orchestration, ensure the host user namespace is not shared with the containers.\n\nLog in to the CLI as an MKE Admin and execute the following command using a Universal Control Plane (MKE) client bundle:\n\ndocker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: UsernsMode={{ .HostConfig.UsernsMode }}'\n\nEnsure it does not return any value for UsernsMode. If it returns a value of \"host\", that means the host user namespace is shared with the containers, and this is a finding.","fixText":"When using Kubernetes orchestration, this check is Not Applicable.\n\nWhen using Swarm orchestration, review and remove nonsystem containers previously created by these users that share the host user namespace using:\n\ndocker container rm [container]","ccis":["CCI-002233"]},{"vulnId":"V-260940","ruleId":"SV-260940r966177_rule","severity":"medium","ruleTitle":"Use of privileged Linux containers must be limited to system containers.","description":"Using the --privileged flag gives all Linux Kernel Capabilities to the container, overriding the --cap-add and --cap-drop flags. The --privileged flag gives all capabilities to the container, and it also lifts all the limitations enforced by the device cgroup controller. Any container that requires this privilege must be documented and approved.","checkContent":"When using Kubernetes orchestration, this check is Not Applicable. 
\n\nWhen using Swarm orchestration, execute the following command as a trusted user on the host operating system via CLI:\n\ndocker ps --quiet --all | grep -iv \"MKE\\|kube\\|dtr\" | awk '{print $1}' | xargs docker inspect --format '{{ .Id }}: Privileged={{ .HostConfig.Privileged }}'\n\nVerify in the output that no containers are running with the --privileged flag. If any are, this is a finding.","fixText":"When using Kubernetes orchestration, this check is Not Applicable.\n\nReview and remove nonsystem containers previously created by these users that allowed privileged execution using:\n\ndocker container rm [container]","ccis":["CCI-002233"]},{"vulnId":"V-260941","ruleId":"SV-260941r966180_rule","severity":"medium","ruleTitle":"The network ports on all running containers must be limited to required ports.","description":"To validate that the services are using only the approved ports and protocols, the organization must perform a periodic scan/review of MKE and disable functions, ports, protocols, and services deemed to be unneeded or nonsecure.","checkContent":"Verify that only needed ports are open on all running containers. If an ingress controller is configured for the cluster, this check is Not Applicable.\n\nVia CLI: As a remote MKE admin, execute the following command using a client bundle:\n\ndocker ps -q | xargs docker inspect --format '{{ .Id }}: Ports={{ .NetworkSettings.Ports }}'\n\nReview the list and ensure that the ports mapped are those actually needed for the containers per the requirements set forth by the System Security Plan (SSP).\n\nIf ports are not documented and approved in the SSP, this is a finding.","fixText":"Configuring an ingress controller is the preferred method to manage external ports. If an ingress controller is not used and unnecessary ports are in use, the container or pod network configurations must be updated.\n\nTo update a pod's configuration, log in to the MKE UI as an administrator. 
\n\nNavigate to Kubernetes >> Pods and click the pod with an open port that is not allowed.\n\nClick the three dots in the upper right corner (edit).\n\nModify the .yaml file to remove the port. Example:\n\nspec:\n   containers:\n   - name: [pod name]\n     ports:\n      - containerPort: 80 [replace with 443]\n\nClick \"Save\".\n\nFor a Swarm service, navigate to Swarm >> Services and click the service with the unauthorized port.\n\nClick the three dots in the top left corner.\n\nSelect \"Network\" in the pop-up and remove the unauthorized port.\n\nClick \"Save\".","ccis":["CCI-001762"]},{"vulnId":"V-260942","ruleId":"SV-260942r986164_rule","severity":"medium","ruleTitle":"MKE must only run signed images.","description":"Controlling the sources where container images can be pulled from allows the organization to define what software can be run within MKE. Allowing any container image to be introduced and instantiated within MKE may introduce malicious code and vulnerabilities to the platform and the hosting system.\n\nMKE registry must deny all container images except for those signed by organizational-approved sources.","checkContent":"On each node, check that MKE is configured to only run images signed by applicable Orgs and Teams.\n\n1. Log in to the MKE web UI and navigate to admin >> Admin Settings >> Docker Content Trust.\n\nIf the Content Trust Settings option \"Run only signed images\" is disabled, this is a finding.\n\n2. Verify that the Orgs and Teams that images must be signed by, as shown in the drop-down, match the organizational policies.\n\nIf an Org or Team selected does not match organizational policies, this is a finding.\n\n3. Verify that all images sitting on an MKE cluster are signed.\n\nVia CLI:\nLinux: As an MKE Admin, execute the following commands using a client bundle:\n\ndocker trust inspect $(docker images | awk '{print $1 \":\" $2}')\n\nVerify that all image tags in the output have valid signatures. 
If the images are not signed, this is a finding.","fixText":"On each node, enable Content Trust enforcement in MKE.\n\n1. Log in to the MKE web UI and navigate to admin >> Admin Settings >> Docker Content Trust.\n\nUnder the Content Trust Settings section, enable \"Run only signed images\".\n\n2. Log in to the MKE web UI and navigate to admin >> Admin Settings >> Docker Content Trust.\n\nClick \"Add Team +\" and set the appropriate Orgs and Teams that must sign images. Use the drop-down (\"v\") that follows to match the organizational policies.\n\nRemove any unwanted teams by clicking the minus symbol.\n\nClick \"Save\".\n\n3. Manually remove any unsigned images sitting on an MKE cluster by executing the following:\n\ndocker rmi <IMAGE_ID>","ccis":["CCI-001774","CCI-003992"]},{"vulnId":"V-260943","ruleId":"SV-260943r966186_rule","severity":"medium","ruleTitle":"Vulnerability scanning must be enabled for all repositories in MSR.","description":"Enabling vulnerability scanning for all repositories in Mirantis Secure Registry (MSR) is a critical security practice that helps organizations identify and mitigate potential security risks associated with container images.\n\nEnabling scanning for all repositories in MSR helps identify and prioritize security issues that could pose risks to the containerized applications.","checkContent":"If MSR is not being utilized, this is Not Applicable.\n\nCheck that image vulnerability scanning is enabled for all repositories.\n\nLog in to the MSR web UI and navigate to System >> Security Tab.\n\nVerify that the \"Enable Scanning\" slider is turned on and the vulnerability database has been successfully synced (online) or uploaded (offline).\n\nIf the \"Enable Scanning\" slider is turned off, this is a finding.\n\nIf the vulnerability database is not synced or uploaded, this is a finding.","fixText":"If MSR is not being utilized, this is Not Applicable.\n\nEnable vulnerability scanning on the MSR UI by logging in to the MSR web UI and navigating 
to System >> Security Tab.\n\nClick the \"Enable Scanning\" slider to enable this capability.\n\nSync (online) or upload (offline) the vulnerability database.","ccis":["CCI-001067","CCI-002605","CCI-000366"]},{"vulnId":"V-260944","ruleId":"SV-260944r966189_rule","severity":"medium","ruleTitle":"Older Universal Control Plane (MKE) and Docker Trusted Registry (DTR) images must be removed from all cluster nodes upon upgrading.","description":"When upgrading either the UCP or DTR components of MKE, the newer images are pulled (or unpacked if offline) onto engine nodes in a cluster. Once the upgrade is complete, one must manually remove all old image versions from the cluster nodes to meet the requirements of this control.\n\nWhen upgrading the Docker Engine - Enterprise component of MKE, the old package version is automatically replaced.","checkContent":"Verify all outdated MKE and DTR container images have been removed from all nodes in the cluster.\n\nVia CLI: As an MKE admin, execute the following commands using a client bundle:\n\ndocker images --filter reference='mirantis/[ucp]*'\ndocker images --filter reference='registry.mirantis.com/msr/[msr]*'\n\nVerify there are no tags listed older than the currently installed versions of MKE and DTR.\n\nIf any of the tags listed are older than the currently installed versions of MKE and DTR, then this is a finding. 
If no tags are listed, this is not a finding.","fixText":"Remove all outdated MKE and DTR container images from all nodes in the cluster:\n\nVia CLI: As an MKE admin, execute the following commands using a client bundle:\n\ndocker rmi -f $(docker images --filter reference='mirantis/ucp*:[outdated_tags]' -q)\ndocker rmi -f $(docker images --filter reference='registry.mirantis.com/msr/[msr]*:[outdated_tags]' -q)","ccis":["CCI-002617"]},{"vulnId":"V-260945","ruleId":"SV-260945r966192_rule","severity":"medium","ruleTitle":"MKE must contain the latest updates.","description":"MKE must stay up to date with the latest patches, service packs, and hot fixes. Not updating MKE will expose the organization to vulnerabilities.","checkContent":"Check for updates by logging in to the MKE WebUI and navigating to admin >> Admin Settings >> Upgrade.\n\nIn the \"Choose MKE Version\" section, select the drop-down. The UI will provide a list of available versions.\n\nIf an updated version is available in the list, this is a finding.","fixText":"Note: It is advisable to review the release notes to understand what changes and improvements come with the new version.\n\nLog in to the MKE WebUI and navigate to admin >> Admin Settings >> Upgrade.\n\nIn the \"Choose MKE Version\" section, select the drop-down. Follow the on-screen instructions to start the upgrade.","ccis":["CCI-002605"]},{"vulnId":"V-260946","ruleId":"SV-260946r966345_rule","severity":"low","ruleTitle":"MKE must display the Standard Mandatory DOD Notice and Consent Banner before granting access to platform components.","description":"MKE has numerous components where different access levels are needed. To control access, the user must first log in to MKE and then be presented with a DOD-approved use notification banner before being granted access to the component. 
This guarantees the privacy and security notification verbiage used is consistent with applicable federal laws, Executive Orders, directives, policies, regulations, standards, and guidance.","checkContent":"Review the MKE configuration to determine if the Standard Mandatory DOD Notice and Consent Banner is configured to be displayed before granting access to platform components.\n\nLog in to MKE and verify that the Standard Mandatory DOD Notice and Consent Banner is being displayed before granting access.\n\nIf the Standard Mandatory DOD Notice and Consent Banner is not configured or is not displayed before granting access to MKE, this is a finding.","fixText":"Configure MKE to display the Standard Mandatory DOD Notice and Consent Banner before granting access to MKE:\n\nModify the existing MKE configuration:\nWorking as an MKE admin, use the config-toml API from within the directory of the client certificate bundle to export the current MKE settings to a TOML file (mke-config.toml).\n\n1. Define the following environment variables:\n\nexport MKE_USERNAME=<mke-username>\nexport MKE_PASSWORD=<mke-password>\nexport MKE_HOST=<mke-fqdn-or-ip-address>\n\n2. Obtain and define an AUTHTOKEN environment variable by executing the following:\n\nAUTHTOKEN=$(curl --silent --insecure --data '{\"username\":\"'$MKE_USERNAME'\",\"password\":\"'$MKE_PASSWORD'\"}' https://$MKE_HOST/auth/login | jq --raw-output .auth_token)\n\n3. Download the current MKE configuration file by executing the following:\n\ncurl --silent --insecure -X GET \"https://$MKE_HOST/api/MKE/config-toml\" -H \"accept: application/toml\" -H \"Authorization: Bearer $AUTHTOKEN\" > mke-config.toml\n\n4. Edit the MKE configuration file (mke-config.toml) to modify the \"pre_logon_message\" setting to match the Standard Mandatory DOD Notice and Consent Banner.\n\nExample:\npre_logon_message = \"You are accessing a U.S. Government (USG) Information System (IS) that is provided for USG-authorized use only. 
By using this IS (which includes any device attached to this IS), you consent to the following conditions:\n\n-The USG routinely intercepts and monitors communications on this IS for purposes including, but not limited to, penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM), law enforcement (LE), and counterintelligence (CI) investigations.\n\n-At any time, the USG may inspect and seize data stored on this IS.\n\n-Communications using, or data stored on, this IS are not private, are subject to routine monitoring, interception, and search, and may be disclosed or used for any USG-authorized purpose.\n\n-This IS includes security measures (e.g., authentication and access controls) to protect USG interests--not for your personal benefit or privacy.\n\n-Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching or monitoring of the content of privileged communications, or work product, related to personal representation or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work product are private and confidential. See User Agreement for details.\"\n\n5. Upload the edited MKE configuration file by executing the following:\n\ncurl --silent --insecure -X PUT -H \"accept: application/toml\" -H \"Authorization: Bearer $AUTHTOKEN\" --upload-file 'mke-config.toml' https://$MKE_HOST/api/MKE/config-toml\n\nNote: Users may need to reacquire the AUTHTOKEN if significant time has passed since it was first acquired.\n\n6. Log in to MKE and verify that the Standard Mandatory DOD Notice and Consent Banner is being displayed before granting access.","ccis":["CCI-000048"]}]}