OpenShift Container Platform 4.7 release notes

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases.

The following RHCOS features are currently supported for new OpenShift Container Platform 4.7 installations. Native Ignition support for LUKS disk encryption provides additional configurability for encrypted root filesystems, as well as support for encryption of additional data filesystems.

RHCOS now supports boot disk mirroring, except on s390x, providing redundancy in the case of disk failure. For more information, see Mirroring disks during installation.

RHCOS now uses updated RHEL 8 packages, both on new installations and on clusters that are upgrading from OpenShift Container Platform 4.6. This enables you to have the latest fixes, features, and enhancements, such as NetworkManager features, as well as the latest hardware support and driver updates.

RHCOS now supports the kdump crash-dumping service. You can use this service to save system memory content for later analysis.

The kdump service is not managed at the cluster level and must be enabled and configured manually on a per-node basis. For more information, see Enabling kdump. You now also have the ability to configure the timeout value that is used when trying to acquire a DHCP lease.
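Referring to the kdump note above, the following is a minimal, hypothetical sketch of a per-node setup applied through a MachineConfig rather than the documented manual procedure; the object name, worker role, and crashkernel size are illustrative assumptions.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-enable-kdump   # hypothetical name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    - crashkernel=256M           # reserve memory for the crash kernel
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - name: kdump.service    # enable the kdump systemd unit on each matching node
          enabled: true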

See BZ for more information. RHCOS now supports multipath on the primary disk, allowing stronger resilience to hardware failure so that you can set up RHCOS on top of multipath to achieve higher host availability.

Only enable multipathing with kernel arguments within a machine config as documented. Do not enable multipathing during installation.

Ignition now supports the AWS Instance Metadata Service Version 2 (IMDSv2). As a result, Ignition successfully reads its config from instance userdata, regardless of whether IMDSv1 is enabled or not.

You can now install a cluster on AWS into the Commercial Cloud Services (C2S) Secret Region. You are also required to include the CA certificates for C2S in the additionalTrustBundle field of the install-config.yaml file. Clusters deployed to the C2S Secret Region do not have access to the Internet; therefore, you must configure a private image registry.

The installation program does not support destroying a cluster deployed to the C2S region; you must manually remove the resources of the cluster. For more information, see AWS government and secret regions.

You can now install a cluster on Google Cloud Platform (GCP) and use a personal encryption key to encrypt both virtual machines and persistent volumes. This is done by setting the encryption key under the controlPlane and compute sections of the install-config.yaml file.

The cluster can have both control plane and compute machines running on bare metal, or just compute machines. For more information, see Deploying a cluster with bare metal machines. New installation validations were also added.

You can now create compute machines in clusters that run on RHOSP that use a network and subnet of your choice. Ansible playbooks for installing a cluster on your own RHOSP infrastructure are now packaged for retrieval by using a script in the installation documentation.

The computeFlavor property that is used in the install-config.yaml file is deprecated. As an alternative, you can now configure machine pool flavors in the platform section of the install-config.yaml file.

In previous versions of OpenShift Container Platform, you could not assign a static IP address to the bootstrap host of a bare metal installation that used installer-provisioned infrastructure.

Now, you can specify the MAC address that is used by the bootstrap virtual machine, which means you can use static DHCP reservations for the bootstrap host.

The installer for installer-provisioned installation on bare metal nodes now automatically creates a storage pool for storing relevant data files required during the installation, such as Ignition files. The installer for installer-provisioned installation on bare metal nodes provides a survey that asks the user a minimal set of questions and generates an install-config.yaml file.

You can use the generated install-config.yaml file for the installation. Cluster nodes deployed with installer-provisioned installation on bare metal clusters can deploy with static IP addresses. To deploy a cluster so that nodes use static IP addresses, configure a DHCP server to provide infinite leases to cluster nodes.

After the installer finishes provisioning each node, a dispatcher script will execute on each provisioned node and convert the DHCP infinite lease to a static IP address using the same static IP address provided by the DHCP server.

Previously, with a degraded machine config pool, the Machine Config Operator did not report its Upgradeable status as false. The update was allowed and would eventually fail when updating the Machine Config Operator because of the degraded machine config pool. With this fix, you are now prevented from performing an update to the next minor version, for example from 4.6 to 4.7, while a machine config pool is degraded. There is no change in this behavior for updates within z-stream releases, for example for updates between z-stream releases of the same minor version. As such, you should check the machine config pool status before performing a z-stream update.

The web console is now localized and provides language support for global users.

English, Japanese, and Simplified Chinese are currently supported. The displayed language follows your browser preferences, but you can also select a language to override the browser default.

From the User drop-down menu, select Language preferences to update your language setting. Localized date and time is now also supported. A quick start is a guided tutorial with user tasks. In the web console, you can access quick starts under the Help menu. They are especially useful for getting oriented with an application, Operator, or other product offering.

See Creating quick start tutorials in the web console for more information. Insights provides cluster health data, such as the number of total issues and total risks of the issues. Risks are labeled as Critical, Important, Moderate, or Low.

You can quickly navigate to Red Hat OpenShift Cluster Manager for further details about the issues and how to fix them. The console now provides an extensibility mechanism that allows Red Hat Operators to build and package their own user interface extensions for the console. It also enables customers and Operators to add their own quick starts.

Hints, filters, and access from both Administrator and Developer perspectives are now added to make quick starts and the relevant content more accessible.

You can now quickly search for deployed workloads and application groupings in the topology List and Graph views to add them to your application. Persistent storage of user preferences is now provided so that when users move from one machine or browser to another they have a consistent experience. You can see the severity details of a vulnerability and also launch the Quay user interface, in the context of the manifest of the vulnerable image stored in that repository, to get more details about the vulnerability.

When the web terminal is inactive for a long period, it stops and provides the user with an option to restart it. The pipeline creation process now favors pipelines over the default build config system.

Build configs are no longer created by default along with pipelines when using the Import from Git workflow, and the pipeline starts as soon as you create the application. You can now configure pipelines in the Pipeline builder page by using either the Pipeline builder option or the YAML view option.

You can also use the Operator-installed, reusable snippets and samples to create detailed Pipelines. The PipelineRun page now contains a TaskRuns tab that lists the associated task runs. You can click on the required task run to see the details of the task run and debug your pipelines. You can now see the following metrics for your pipelines in the Pipeline Details page, per pipeline: pipeline run duration, task run duration, number of pipeline runs per day and the pipeline success ratio per day.

You can access the Serving and Eventing pages from the Administrator perspective and create serverless components using the console. See the name of the chart repository on the chart card in the catalog to distinguish charts with the same name, but from different chart repositories. Persistent shared storage must be provisioned by using either NFS or other supported storage protocols.

Users can now manage their own OAuth access tokens. This allows users to review their tokens and delete any tokens that have timed out or are no longer needed. For more information, see Managing user-owned OAuth access tokens. This option requires the presence of the admin-level credential during installation, but the credential is not stored in the cluster permanently and does not need to be long-lived.

You can now deploy a cluster with Secure Boot when using installer-provisioned infrastructure on bare metal nodes. You cannot use self-generated keys with Secure Boot. A migration to the OVN-Kubernetes cluster network provider is now supported on installer-provisioned clusters on certain platforms. To assist you with diagnosing cluster network connectivity issues, the Cluster Network Operator (CNO) now runs a connectivity check controller to perform connection health checks in your cluster.

The results of the connection tests are available in PodNetworkConnectivityCheck objects in the openshift-network-diagnostics namespace. For more information, see Verifying connectivity to an endpoint.

The DPDK library is now available for use with container applications. For more information, see DPDK library for use with container applications. You can use the egress router CNI plug-in to deploy an egress router in redirect mode. For more information, see Deploying an egress router pod in redirect mode. When configuring an egress firewall rule, you can now use a DNS domain name instead of an IP address, as in the sketch that follows.
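The following is a minimal sketch of an egress firewall rule that uses a DNS name, assuming the OVN-Kubernetes EgressFirewall API (k8s.ovn.org/v1); the namespace and destinations are hypothetical.

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default                   # EgressFirewall objects are created per namespace, typically named default
  namespace: my-project           # hypothetical namespace
spec:
  egress:
    - type: Allow
      to:
        dnsName: www.example.com  # allow egress traffic to this DNS domain name
    - type: Deny
      to:
        cidrSelector: 0.0.0.0/0   # deny all other external traffic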

With IPsec enabled, all cluster network traffic between pods is sent over an encrypted IPsec tunnel. You cannot enable or disable IPsec after cluster installation. The IPsec tunnel is not used for network traffic between pods that are configured to use the host network. However, traffic sent from a pod on the host network and received by a pod that uses the cluster network does use the IPsec tunnel.

For more information, see IPsec encryption configuration. Make sure to add the necessary configuration by using the spec section of the cluster network configuration. When using the OpenShift SDN or OVN-Kubernetes cluster network providers, you can select traffic from Ingress Controllers in a network policy rule regardless of whether an Ingress Controller runs on the cluster network or the host network.

In a network policy rule, the policy-group.network.openshift.io/ingress: "" namespace selector label matches traffic from Ingress Controllers. You can continue to use the legacy network.openshift.io/policy-group: ingress namespace selector label as well. Previously, a cluster that used the OpenShift SDN cluster network provider could select traffic from an Ingress Controller on the host network only by applying the network.openshift.io/policy-group: ingress label, and a cluster that used the OVN-Kubernetes cluster network provider could not select traffic from an Ingress Controller on the host network at all. For more information, refer to About network policy.
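The following is a minimal sketch of a network policy that admits traffic from Ingress Controllers by using the namespace selector label described above; the policy name and namespace are hypothetical.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress   # hypothetical name
  namespace: my-project                # hypothetical namespace
spec:
  podSelector: {}                      # apply to all pods in this namespace
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              policy-group.network.openshift.io/ingress: ""
  policyTypes:
    - Ingress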

Volume snapshots that use the Container Storage Interface (CSI) are now generally available. For more information, see Using CSI volume snapshots. The vSphere Problem Detector Operator is installed by default by the Cluster Storage Operator, allowing you to quickly identify and troubleshoot common storage issues, such as configuration and permissions, on vSphere clusters.

The Local Storage Operator now includes a must-gather image, allowing you to collect custom resources specific to this Operator for diagnostic purposes. You can use OCI images in the same way you would use Docker schema2 images. To understand whether clients are still importing image streams by using the Docker registry v1 protocol, this enhancement exports Operator metrics to telemetry.

Metrics related to protocol v1 usage are now visible in telemetry. To make upgrades more robust, it is recommended that Operators actively communicate with the service that is about to be updated. If a service is processing a critical operation, such as live migrating virtual machines (VMs) in OpenShift Virtualization or restoring a database, it might be unsafe to upgrade the related Operator at that time.

In OpenShift Container Platform 4.7, Operator Lifecycle Manager (OLM) provides a communication channel that an Operator can use to report that it is not currently safe to upgrade. The non-upgradeable state delays any pending Operator upgrade, whether automatically or manually approved, until the Operator finishes the operation and reports upgrade readiness. See Operator conditions for more about how OLM uses this communication channel. See Enabling Operator conditions for details on updating your project as an Operator developer to use the communication channel.

If certain images relevant to Operators managed by OLM are hosted in an authenticated container image registry, also known as a private registry, OLM and OperatorHub are unable to pull the images by default.

To enable access, you can create a pull secret that contains the authentication credentials for the registry. By referencing one or more secrets in a catalog source, some of these required images can be pulled for use in OperatorHub, while other images require updates to the global cluster pull secret or namespace-scoped secrets.
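The following is a minimal sketch of a catalog source that references a pull secret for a private registry, assuming the secrets field described above; the catalog name, index image, and secret name are hypothetical.

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-private-catalog                            # hypothetical name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.example.com/olm/private-index:v1    # hypothetical index image in a private registry
  secrets:
    - private-registry-pull-secret                    # pull secret created in openshift-marketplace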

See Accessing images for Operators from private registries for more details. Cluster administrators can use the oc adm catalog mirror command to mirror the content of an Operator catalog into a container image registry.

This enhancement updates the oc adm catalog mirror command to also mirror the index image being used for the operation into the registry, which was previously a separate step that required the oc image mirror command.

Previously, deleting an InstallPlan object that was waiting for user approval caused the Operator to be stuck in an unrecoverable state because the Operator installation could not be completed. This enhancement updates Operator Lifecycle Manager (OLM) to create a new install plan if the previously pending one is deleted.

As a result, users can now approve the new install plan and proceed with the Operator installation. BZ This enhancement updates the oc adm catalog mirror command to support mirroring images to a disconnected registry by first mirroring the images to local files.

For example:. As of OpenShift Container Platform 4. With the downstream release of Operator SDK v1. All metadata required to package an Operator for OLM is generated automatically. Operator developers can use this functionality to package and test their Operator for OLM and OpenShift distributions directly from their CI pipelines.

You can use the run bundle subcommand to run an Operator on a cluster and test whether the Operator behaves correctly when managed by OLM. This feature relieves the cluster administrator of having to manually register the webhooks, add TLS certificates, and set up certificate rotation. Operator authors should validate that their Operator is packaged correctly and free of syntax errors.

To validate an Operator, the scorecard tool provided by the Operator SDK begins by creating all resources required by any related custom resources CRs and the Operator. The scorecard then creates a proxy container in the deployment of the Operator, which is used to record calls to the API server and run some of the tests.

The tests performed also examine some of the parameters in the CRs. Operator developers can use the Operator SDK to take advantage of code scaffolding support for Operator conditions, including reporting upgrade readiness to OLM. You can quickly test upgrading your Operator by using OLM integration in the Operator SDK, without requiring you to manually manage index images and catalog sources. The run bundle-upgrade subcommand automates triggering an installed Operator to upgrade to a later version by specifying a bundle image for the later version.

In the current version, when OpenShift Container Platform performs a build and the log level is five or higher, the cluster writes the buildah version information to the build log. This information helps Red Hat Engineering reproduce bug reports. Previously, this version information was not available in the build logs. OpenShift Container Platform now creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace that contains an entry, the populating image, for each image stream tag.

You can use this config map as a reference for which images need to be mirrored for your image streams to import. For more information, see Cluster Samples Operator assistance for mirroring.

Machine sets running on AWS now support Dedicated Instances. For more information, see Machine sets that deploy machines as Dedicated Instances.

You can now enable encryption with a customer-managed key for machine sets running on GCP. For more information, see Enabling customer-managed encryption keys for a machine set.

The Machine API now honors cluster-wide proxy settings. When a cluster-wide proxy is configured, all Machine API components route traffic through the configured proxy.

The Machine Config Operator (MCO) no longer automatically reboots all corresponding nodes for certain machine configuration changes. For more information, see Understanding the Machine Config Operator.

If the operating system does not shut down the node within three minutes, the Bare Metal Operator executes a "hard" shutdown.

The behavior of executing a "hard" shutdown for remediation purposes will be backported to OpenShift Container Platform 4.6. The machine health resource example described in About machine health checks has been updated with a shorter health check timer value.

Starting with OpenShift Container Platform 4.7, the exception to this is if the node is a master node or a node that was provisioned externally. When provisioning a new node in the cluster, if the MAC address of an existing bare-metal node in the cluster matches the MAC address of a bare-metal host you are attempting to add to the cluster, the installation fails and a registration error is displayed for the failed bare-metal host. You can diagnose a duplicate MAC address by examining the bare-metal hosts that are running in the openshift-machine-api namespace.

For more information, see Diagnosing a duplicate MAC address when provisioning a new host in the cluster. The descheduler is now generally available. The descheduler provides the ability to evict a running pod so that the pod can be rescheduled onto a more suitable node. You can enable one or more of the following descheduler profiles:

AffinityAndTaints: evicts pods that violate inter-pod anti-affinity, node affinity, and node taints. TopologyAndDuplicates: evicts pods in an effort to evenly spread similar pods, or pods of the same topology domain, among nodes.

LifecycleAndUtilization: evicts long-running pods and balances resource usage between nodes. With the GA, you can enable descheduler profiles and configure the descheduler interval. Any other settings that were available during Technology Preview are no longer available. For more information, see Evicting pods using the descheduler.
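The following is a minimal sketch of a descheduler configuration that enables two profiles and sets the interval, assuming the KubeDescheduler custom resource provided by the descheduler Operator; the interval value is illustrative.

apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  profiles:
    - AffinityAndTaints
    - LifecycleAndUtilization
  deschedulingIntervalSeconds: 3600   # run descheduling cycles every hour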

You can now specify a scheduler profile to control how pods are scheduled onto nodes. This is a replacement for configuring a scheduler policy. The following scheduler profiles are available. LowNodeUtilization: This profile attempts to spread pods evenly across nodes to get low resource usage per node.

HighNodeUtilization: This profile attempts to place as many pods as possible onto as few nodes as possible, to minimize node count with high usage per node. NoScoring: This is a low-latency profile that strives for the quickest scheduling cycle by disabling all score plug-ins.

This might sacrifice better scheduling decisions for faster ones. For more information, see Scheduling pods using a scheduler profile.
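The following is a minimal sketch that selects a scheduler profile through the cluster Scheduler resource; the chosen profile is only an example.

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  profile: HighNodeUtilization   # pack pods onto as few nodes as possible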

Autoscaling for memory utilization is now generally available. You can create horizontal pod autoscaler custom resources to automatically scale the pods associated with a deployment config or replication controller to maintain the average memory utilization you specify, either a direct value or a percentage of requested memory. For more information, see Creating a horizontal pod autoscaler object for memory utilization.
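The following is a minimal sketch of a horizontal pod autoscaler that targets average memory utilization; the target deployment, replica counts, and percentage are hypothetical.

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: memory-autoscale          # hypothetical name
  namespace: my-project           # hypothetical namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app             # hypothetical workload
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 60  # keep average memory use near 60% of requests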

You can now configure a priority class to be non-preempting by setting the preemptionPolicy field to Never. Pods with this priority class setting are placed in the scheduling queue ahead of lower priority pods, but do not preempt other pods. For more information, see Non-preempting priority classes.
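The following is a minimal sketch of a non-preempting priority class; the name, value, and description are hypothetical.

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting   # hypothetical name
value: 100000
preemptionPolicy: Never               # schedule ahead of lower-priority pods without preempting them
globalDefault: false
description: High priority without preemption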

The AlertmanagerClusterCrashlooping alert is added. The critical alert provides notification if at least half of the Alertmanager instances in a cluster are crashlooping. The AlertmanagerClusterDown alert is added. The critical alert provides notification if at least half of the Alertmanager instances in a cluster are down. A new critical alert provides notification if all Alertmanager instances in a cluster failed to send notifications, and a new warning alert provides notification if an individual Alertmanager instance failed to send notifications.

The etcdBackendQuotaLowSpace alert is added. The critical alert provides notification if the database size of an etcd cluster exceeds the defined quota on an etcd instance. The etcdExcessiveDatabaseGrowth alert is added. The etcdHighFsyncDurations alert is added; the critical alert provides notification if the 99th percentile fsync durations of an etcd cluster are too high. A new warning alert provides notification if the kubelet failed to renew its client certificate.

A new warning alert provides notification if the kubelet failed to renew its server certificate. The NTODegraded alert is added; the warning alert provides notification if the Node Tuning Operator is degraded. A new warning alert provides notification if a specific pod on a node is not ready. The PrometheusOperatorNotReady alert is added.

The warning alert provides notification if a Prometheus Operator instance is not ready. The PrometheusOperatorRejectedResources alert is added. The warning alert provides notification if specific resources are rejected by the Prometheus Operator. The PrometheusOperatorSyncFailed alert is added. The warning alert provides notification if the controller of a Prometheus Operator failed to reconcile specific objects.

The PrometheusTargetLimitHit alert is added. The warning alert provides notification if Prometheus has dropped targets because some scrape configurations have exceeded the limit of the targets. The ThanosSidecarPrometheusDown alert is added. The critical alert provides notification that the Thanos sidecar cannot connect to Prometheus.

The ThanosSidecarUnhealthy alert is added. The critical alert provides notification that the Thanos sidecar is unhealthy for a specified amount of time. The NodeClockNotSynchronising alert is updated to prevent false positives in environments that use the chrony time service, chronyd. The NodeNetworkReceiveErrs alert is updated to ensure that the alert does not fire when only a small number of errors are reported.

The rule now uses the ratio of errors to total packets instead of the absolute number of errors. The NodeNetworkTransmitErrs alert is updated to ensure that the alert does not fire when only a small number of errors are reported. Alerts that fired if a high percentage of HTTP requests failed on an etcd instance are removed. Red Hat does not guarantee backward compatibility for metrics, recording rules, or alerting rules. For more information, see Support considerations for monitoring. The API Performance dashboard is now available from the web console.

HWMon data collection is enabled for hardware health telemetry such as CPU temperature and fan speeds for bare metal clusters. You can now configure the Thanos Querier logLevel field for purposes such as debugging. The memory limit was removed on the config-reloader container in the openshift-user-workload-monitoring namespace for Prometheus and Thanos Ruler pods. This update prevents OOM kill of the config-reloader container, which previously occurred when the container used more memory than the defined limit.

The previous Technology Preview configuration that enabled users to monitor their own services is removed and is not supported in OpenShift Container Platform 4.7. The techPreviewUserWorkload field is removed from the cluster-monitoring-config ConfigMap object and is no longer supported.
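For reference, monitoring for user-defined projects is now enabled with the enableUserWorkload setting in the same ConfigMap object; a minimal sketch follows, assuming the openshift-monitoring namespace.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true   # replaces the removed techPreviewUserWorkload field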

See Understanding the monitoring stack for more information on monitoring user-defined projects. Updated guidance around cluster maximums for OpenShift Container Platform 4.7 is now available. A new Operator supports the requirements of vRAN deployments for low power, cost, and latency, while also delivering the capacity to manage spikes in performance for a range of use cases.

One of the most compute-intensive 4G and 5G workloads is RAN layer 1 (L1) forward error correction (FEC), which resolves data transmission errors over unreliable or noisy communication channels. Delivering high-performance FEC is critical to maintaining 5G performance as the network matures and as more users depend on it.

The latency test, a part of the CNF-test container, provides a way to measure if the isolated CPU latency is below the requested upper bound. For information about running a latency test, see Running the latency tests. A new performance profile field globallyDisableIrqLoadBalancing is available to manage whether or not device interrupts are processed by the isolated CPU set.

New pod annotations, such as irq-load-balancing.crio.io, are available to disable device interrupt processing for individual pods, as in the sketch that follows. When configured, CRI-O disables device interrupts only as long as the pod is running. For more information, see Managing device interrupt processing for guaranteed pod isolated CPUs.
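The following is a minimal sketch of a guaranteed pod that opts out of device interrupt processing on its isolated CPUs; the runtime class assumes a hypothetical performance profile named example-profile, and the cpu-quota.crio.io annotation is included as an assumed companion setting.

apiVersion: v1
kind: Pod
metadata:
  name: low-latency-app                       # hypothetical name
  annotations:
    irq-load-balancing.crio.io: "disable"     # keep device IRQs off this pod's CPUs while it runs
    cpu-quota.crio.io: "disable"              # assumed companion annotation to avoid CFS throttling
spec:
  runtimeClassName: performance-example-profile   # runtime class from a hypothetical performance profile
  containers:
    - name: app
      image: registry.example.com/example/low-latency:latest   # hypothetical image
      resources:
        requests:
          cpu: "2"
          memory: 1Gi
        limits:
          cpu: "2"
          memory: 1Gi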

When you create a secondary network using a rawConfig configuration for the CNO custom resource and configure a VRF for it, the interface created for the pod is associated with the VRF.

A new iptables module can look beyond headers or match special protocols that are not covered by other iptables modules. For more information, see Performing end-to-end tests for platform verification.

Additional cluster data is now collected for diagnostics, including the configuration files for all available operator.openshift.io resources; the Persistent Volume definition, if used in the openshift-image-registry configuration; appearances of certain log entries of pods in the openshift-apiserver-operator namespace; and appearances of certain log entries of sdn pods in the openshift-sdn namespace.

Power-based remediation for bare metal, which is installed by using installer-provisioned infrastructure (IPI), is now available.

You can configure MachineHealthCheck CRs to trigger power-based remediation that power-cycles instead of reprovisioning the node. This remediation significantly reduces the time to recover stateful workloads and compute capacity in bare metal environments. For more information, see About power-based remediation of bare metal.
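The following is a minimal, hypothetical sketch of a machine health check annotated for power-based remediation, assuming the external-baremetal remediation strategy annotation; the selector, timeout, and threshold values are illustrative.

apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: worker-power-remediation                  # hypothetical name
  namespace: openshift-machine-api
  annotations:
    machine.openshift.io/remediation-strategy: external-baremetal   # power-cycle instead of reprovisioning
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: worker
  unhealthyConditions:
    - type: Ready
      status: "False"
      timeout: 300s
  maxUnhealthy: "40%"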

When OLM increments Kubernetes dependencies, the embedded resources are updated as well. Typically, Kubernetes is backwards compatible with a few of its previous versions. Operator authors are encouraged to keep their projects up to date to maintain compatibility and take advantage of updated resources. See Kubernetes documentation for details about version skew policies in the upstream Kubernetes project.

The default scheduler in OpenShift Container Platform 4.7 uses pod topology spread constraints to spread pod replicas. Ensure that your nodes have the required labels in order for pod replicas to spread properly.

By default, the scheduler requires the kubernetes.io/hostname and topology.kubernetes.io/zone node labels. If your nodes will not use these labels, define your own pod topology spread constraints instead of using the default constraints, as in the sketch that follows. For more information, see Controlling pod placement by using pod topology spread constraints.
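The following is a minimal sketch of a pod that defines its own topology spread constraint instead of relying on the defaults; the labels, image, and skew value are hypothetical.

apiVersion: v1
kind: Pod
metadata:
  name: example-spread              # hypothetical name
  labels:
    app: example
spec:
  topologySpreadConstraints:
    - maxSkew: 1                    # allow at most one extra replica per zone
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          app: example
  containers:
    - name: app
      image: registry.example.com/example/app:latest   # hypothetical image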

Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within OpenShift Container Platform 4.7, refer to the deprecated and removed features table. Additional details for more fine-grained functionality that has been deprecated and removed are listed after the table. Using a scheduler policy to control pod placement is deprecated and is planned for removal in a future release.

For more information on the Technology Preview alternative, see Scheduling pods using a scheduler profile. When using the oc adm catalog mirror command to mirror catalogs, the --filter-by-os flag previously filtered the architectures of all mirrored content. This broke references to those images in the catalog that point to a manifest list rather than to a single manifest. The --filter-by-os flag now only filters the index image that is pulled and unpacked.

To clarify this, the new --index-filter-by-os flag is now added and should be used instead. Image stream image imports are no longer tracked in real time by conditions on the Cluster Samples Operator configuration resource. In-progress image streams no longer directly affect updates to the ClusterOperator instance openshift-samples.

Prolonged errors with image streams are now reported by Prometheus alerts. Upgrade tracking is now achieved by the other conditions and both the individual image stream config maps and the imagestream-to-image config map.

Previously, an incomplete apiVersion value in an object was silently corrected; for example, v1 is corrected to apps/v1. This release adds a warning that displays the correct value of apiVersion when it is missing from an object. Additional changes also apply to installer-provisioned installation on bare metal nodes in OpenShift Container Platform 4.7.

Several images are no longer included in the samples imagestreams provided with OpenShift Container Platform. Previously, the openshift-service-ca namespace was labeled with openshift.io/run-level: 1, so its pods ran with extra privileges. This label has been removed, and now the pods in this namespace run with the appropriate privileges. Previously, the openshift-service-ca-operator namespace was labeled with openshift.io/run-level: 1. This label has been removed for new installations, and now the pods in this namespace run with the appropriate privileges.

For upgraded clusters, you can remove this label manually and restart the affected pods. Previously, a missed condition in the Cluster Authentication Operator code caused its log to be flooded with messages about updates to a deployment that did not occur. The logic for deciding whether to update the Operator status was updated and the Cluster Authentication Operator log no longer receives messages for a deployment update that did not occur.

Previously, the Cluster Authentication Operator only watched configuration resources named cluster , which caused the Operator to ignore changes in ingress configuration, which was named default.

This led to the Operator incorrectly assuming that there were no schedulable worker nodes when ingress was configured with a custom node selector. The Cluster Authentication Operator now watches all resources regardless of their name, and the Operator now properly observes ingress configuration changes and reconciles worker node availability. Previously, on some systems, the installer would communicate with Ironic before it was ready and fail. This is now prevented. Previously, when using virtual media on a Dell system, if the virtual media was already attached before the deployment commenced, the deployment would fail.

Ironic now retries if this occurs.

Previously, master nodes were losing their IPv6 link-local address on the provisioning interface, which prevented provisioning from working with IPv6 and led to PXE boot failures for nodes after the IPv6 link-local address was removed. This issue has been fixed.

Previously, the cluster-baremetal-operator used the incorrect logging library. This issue resulted in command-line arguments not being consistent with other Operators, and not all Kubernetes library logs were getting logged. Switching the logging library has fixed this issue.

Previously, Supermicro nodes booted to PXE upon reboot after successful deployment to disk. Supermicro nodes now boot to disk persistently after deployment. Since network boot is not compatible with Secure Boot, using virtual media is required in this case.

Node auto-discovery is no longer enabled in baremetal IPI. It was not handled correctly and caused duplicate bare-metal host registrations. Previously, the syslinux-nonlinux package was not included with bare metal provisioning images. As a result, virtual media installations on machines that used BIOS boot mode failed.

The package is now included in the image. Previously, certain Dell firmware versions reported the Redfish PowerState inaccurately. Previously, Redfish was not present in the list of interfaces that can get and set BIOS configuration values. Redfish is now included in the list, and it can be used in BIOS configuration. Previously, the Redfish interface that is used to set BIOS configurations was not implemented properly.

The implementation error was corrected. Additionally, Red Hat OpenShift 4.7 expands support for Windows Containers. This provides a pathway for organizations to move Windows Containers to Red Hat OpenShift regardless of where they live and without needing to completely re-architect or write new code.

With memory-based autoscaling, as a developer, you can specify that the platform automatically scales a replication controller or deployment of your application based on memory metrics. As an OpenShift administrator, you can use the Scheduling Profiles Technology Preview to customize the behavior of the default out-of-the-box scheduler to optimize the cluster for factors such as resource utilization, high performance, and high availability for your business needs.

The descheduler allows the administrator to evict pods based on predefined policies so that the pods can be rescheduled based on the latest cluster scheduling policies. The descheduler Operator manages the life cycle of the descheduler. The descheduler and the Operator are GA in 4.7. Enabling IPsec encryption can prevent cluster traffic from being monitored and manipulated.

With OCP 4.7, anyone can create an in-cluster quick start!
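As a reference, the following is a minimal, hypothetical sketch of an in-cluster quick start, assuming the ConsoleQuickStart custom resource exposed by the web console; all names and text are illustrative.

apiVersion: console.openshift.io/v1
kind: ConsoleQuickStart
metadata:
  name: explore-sample-app                 # hypothetical name
spec:
  displayName: Explore the sample application
  durationMinutes: 10
  description: A short guided tour of a sample workload.
  introduction: This quick start walks you through deploying and inspecting a sample application.
  tasks:
    - title: Deploy the sample application
      description: Use the Developer perspective to import the application from Git.
    - title: Inspect the workload
      description: Open the Topology view and select the new deployment to see its resources.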


