Description of problem:
Two dashes (--) in the cron job pod name. Only one (-) is expected.

Version-Release number of selected component (if applicable):
Version: 4.10.0-0.nightly-2021-12-23-153012

How reproducible:
Always

Steps to Reproduce:
1. Deploy cluster logging.
2. Check the cronjob pod names:

#oc get cronjob
NAME                     SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
elasticsearch-im-app     */15 * * * *   False     0        13m             15m
elasticsearch-im-audit   */15 * * * *   False     0        13m             15m
elasticsearch-im-infra   */15 * * * *   False     0        13m             15m

#oc get pods -l "logging-infra"="indexManagement" -o name
pod/elasticsearch-im-app-27344280--1-9pxtm
pod/elasticsearch-im-audit-27344280--1-wgq8v
pod/elasticsearch-im-infra-27344280--1-tv4lh

Expected results:
#oc get pods -l "logging-infra"="indexManagement" -o name
pod/elasticsearch-im-app-27344280-1-9pxtm
pod/elasticsearch-im-audit-27344280-1-wgq8v
pod/elasticsearch-im-infra-27344280-1-tv4lh
The same issue affects the other cronjob pods:

$oc get pods --all-namespaces | grep -E "\-\-"
openshift-logging                      elasticsearch-im-app-27344310--1-btp8z                            0/1   Completed   0   12m
openshift-logging                      elasticsearch-im-audit-27344310--1-c89xj                          0/1   Completed   0   12m
openshift-logging                      elasticsearch-im-infra-27344310--1-6gfml                          0/1   Completed   0   12m
openshift-marketplace                  7ada5e2e41a35e52e10b2d3f71c73c0143ed80ab4bd73a31d17914--1-66q9x   0/1   Completed   0   60m
openshift-marketplace                  bd21de40ff29167bb62f859356cedcf8931e79fbf0345393972a14--1-h5zbj   0/1   Completed   0   60m
openshift-operator-lifecycle-manager   collect-profiles-27344280--1-rp2nc                                0/1   Completed   0   42m
openshift-operator-lifecycle-manager   collect-profiles-27344295--1-68c8f                                0/1   Completed   0   27m
openshift-operator-lifecycle-manager   collect-profiles-27344310--1-79t62                                0/1   Completed   0   12m
The same happens with a plain Job, independent of cluster logging:

$ oc create job pi-job --image=image-registry.openshift-image-registry.svc:5000/openshift/perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'
$ oc get pod
pi-job--1-x2xhc   0/1   Completed   0   2m17s

From this output, the root cause is in the Job controller. Furthermore, the Job documentation at https://kubernetes.io/docs/concepts/workloads/controllers/job/ shows single-dash pod names:

  kubectl describe jobs/pi
  Normal  SuccessfulCreate  14m  job-controller  Created pod: pi-5rwd7

  kubectl get pods
  The output is similar to this:
  pi-5rwd7

So the Job controller has a regression bug.
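For context on where the extra dash likely comes from: the "--1" in the names above reads as a dash separator followed by a completion index of -1, the sentinel value a NonIndexed Job carries. The following minimal Go sketch (hypothetical helper names, not the actual Kubernetes controller code) only illustrates how appending that index unconditionally would produce the double dash:

package main

import (
	"fmt"
	"strconv"
)

// podGenerateName sketches a pod generateName prefix builder that
// appends the completion index unconditionally. With the NonIndexed
// sentinel value -1, "pi-job" + "-" + "-1" + "-" becomes "pi-job--1-",
// which matches the pod names reported above.
func podGenerateName(jobName string, completionIndex int) string {
	return jobName + "-" + strconv.Itoa(completionIndex) + "-"
}

// podGenerateNameGuarded skips the index for NonIndexed Jobs, so the
// random suffix lands directly after "pi-job-".
func podGenerateNameGuarded(jobName string, completionIndex int) string {
	if completionIndex < 0 {
		return jobName + "-"
	}
	return jobName + "-" + strconv.Itoa(completionIndex) + "-"
}

func main() {
	fmt.Println(podGenerateName("pi-job", -1))        // pi-job--1-
	fmt.Println(podGenerateNameGuarded("pi-job", -1)) // pi-job-
	fmt.Println(podGenerateNameGuarded("pi-job", 1))  // pi-job-1-
}

The sketch is only meant to show the failure mode; the real change for OpenShift lands via the pull request referenced in the next comment.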
This will be fixed when https://github.com/openshift/kubernetes/pull/1087 lands.
Can't reproduce the issue now:

[root@localhost ~]# oc version --client
Client Version: 4.10.0-202201281850.p0.g7c299f1.assembly.stream-7c299f1

[root@localhost ~]# oc create job pi-job --image=image-registry.openshift-image-registry.svc:5000/openshift/perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'
job.batch/pi-job created

[root@localhost ~]# oc get pods
NAME           READY   STATUS              RESTARTS   AGE
pi-job-wcd62   0/1     ContainerCreating   0          3s
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056