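Check on the status of the Job with kubectl. Assuming the example Job is named pi, as in the output below, a command like this shows its details:

kubectl describe job pi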
Name:           pi
Namespace:      default
Selector:       batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da6
Labels:         batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da6
                batch.kubernetes.io/job-name=pi
                ...
Annotations:    batch.kubernetes.io/job-tracking: ""
Parallelism:    1
Completions:    1
Start Time:     Mon, 02 Dec 2019 15:20:11 +0200
Completed At:   Mon, 02 Dec 2019 15:21:16 +0200
Duration:       65s
Pods Statuses:  0 Running / 1 Succeeded / 0 Failed
Pod Template:
  Labels:  batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da6
           batch.kubernetes.io/job-name=pi
  Containers:
   pi:
    Image:      perl:5.34.0
    Port:       <none>
    Host Port:  <none>
    Command:
      perl
      -Mbignum=bpi
      -wle
      print bpi(2000)
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  21s   job-controller  Created pod: pi-xf9p4
  Normal  Completed         18s   job-controller  Job completed
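The full Job object can also be inspected as YAML. Assuming the same Job name, a command along these lines would produce output similar to the listing below:

kubectl get job pi -o yaml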
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    batch.kubernetes.io/job-tracking: ""
  ...
  creationTimestamp: "2022-11-10T17:53:53Z"
  generation: 1
  labels:
    batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c2
    batch.kubernetes.io/job-name: pi
  name: pi
  namespace: default
  resourceVersion: "4751"
  uid: 204fb678-040b-497f-9266-35ffa8716d
spec:
  backoffLimit: 4
  completionMode: NonIndexed
  completions: 1
  parallelism: 1
  selector:
    matchLabels:
      batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c2
  suspend: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c2
        batch.kubernetes.io/job-name: pi
    spec:
      containers:
      - command:
        - perl
        - -Mbignum=bpi
        - -wle
        - print bpi(2000)
        image: perl:5.34.0
        imagePullPolicy: IfNotPresent
        name: pi
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  active: 1
  ready: 0
  startTime: "2022-11-10T17:53:57Z"
  uncountedTerminatedPods: {}
To view completed Pods of a Job, use kubectl get pods.
To list all the Pods that belong to a Job in a machine-readable form, you can use a command like this:
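For example, assuming the Job above is named pi, a command along these lines collects the names of its Pods:

pods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}')
echo $pods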
Here, the selector is the same as the selector for the Job. The --output=jsonpath option specifies an expression with the name from each Pod in the returned list.
View the standard output of one of the pods:
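For example, reusing the $pods variable captured above (a sketch; with this example Job there is only one Pod):

kubectl logs $pods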
Another way to view the logs of a Job:
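For example, referring to the Job by name:

kubectl logs jobs/pi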
The output is similar to this:
Writing a Job spec
As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields.
When the control plane creates new Pods for a Job, the .metadata.name of the Job is part of the basis for naming those Pods. The name of a Job must be a valid DNS subdomain value, but this can produce unexpected results for the Pod hostnames. For best compatibility, the name should follow the more restrictive rules for a DNS label. Even when the name is a DNS subdomain, the name must be no longer than 63 characters.
A Job also needs a .spec section.
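As an illustration of these required fields, a minimal Job manifest for the pi example used above might look like this (a sketch, not necessarily the exact file used to create that Job):

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4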
Job Labels
Job labels will have batch.kubernetes.io/ prefix for job-name and controller-uid.
Pod Template
The .spec.template is the only required field of the .spec.
The .spec.template is a pod template. It has exactly the same schema as a Pod, except it is nested and does not have an apiVersion or kind.
In addition to required fields for a Pod, a pod template in a Job must specify appropriate labels (see pod selector) and an appropriate restart policy.
Only a RestartPolicy equal to Never or OnFailure is permitted.
Pod selector
The .spec.selector field is optional. In almost all cases you should not specify it. See the section on specifying your own Pod selector.
Parallel execution for Jobs
There are three main types of task suitable to run as a Job:
1. Non-parallel Jobs - normally, only one Pod is started, unless the Pod fails.
- the Job is complete as soon as its Pod terminates successfully.
2. Parallel Jobs with a fixed completion count:
- specify a non-zero positive value for .spec.completions.
- the Job represents the overall task, and is complete when there are .spec.completions successful Pods.
- when using .spec.completionMode="Indexed", each Pod gets a different index in the range 0 to .spec.completions-1.
3. Parallel Jobs with a work queue:
- do not specify .spec.completions, default to .spec.parallelism.
- the Pods must coordinate among themselves or with an external service to determine what each should work on. For example, a Pod might fetch a batch of up to N items from the work queue.
- each Pod is independently capable of determining whether or not all its peers are done, and thus that the entire Job is done.
- when any Pod from the Job terminates with success, no new Pods are created.
- once at least one Pod has terminated with success and all Pods are terminated, then the Job is completed with success.
- once any Pod has exited with success, no other Pod should still be doing any work for this task or writing any output. They should all be in the process of exiting.
For a non-parallel Job, you can leave both .spec.completions and .spec.parallelism unset. When both are unset, both are defaulted to 1.
For a fixed completion count Job, you should set .spec.completions to the number of completions needed. You can set .spec.parallelism, or leave it unset and it will default to 1.
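For example, a fixed completion count Job that needs five successful Pods, running at most two at a time, might set these fields as follows (a minimal sketch; the name, image, and command are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: fixed-count-example   # hypothetical name
spec:
  completions: 5    # five Pods must finish successfully
  parallelism: 2    # at most two Pods run at any moment
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        command: ["sh", "-c", "echo processing one unit of work"]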
For a work queue Job, you must leave .spec.completions unset, and set .spec.parallelism to a non-negative integer.
For more information about how to make use of the different types of job, see the job patterns section.
Controlling parallelism
The requested parallelism (.spec.parallelism) can be set to any non-negative value. If it is unspecified, it defaults to 1. If it is specified as 0, then the Job is effectively paused until it is increased.
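For example, an existing Job could be paused and later resumed by patching .spec.parallelism; the sketch below assumes a Job named myjob:

# Pause the Job: with parallelism 0, no new Pods are run
kubectl patch job/myjob --type=merge -p '{"spec":{"parallelism":0}}'
# Resume it with up to two Pods at a time
kubectl patch job/myjob --type=merge -p '{"spec":{"parallelism":2}}'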
Actual parallelism (number of pods running at any instant) may be more or less than requested parallelism, for a variety of reasons:
- For fixed completion count Jobs, the actual number of pods running in parallel will not exceed the number of remaining completions. Higher values of .spec.parallelism are effectively ignored.
- For work queue Jobs, no new Pods are started after any Pod has succeeded; remaining Pods are allowed to complete, however.
- If the Job Controller has not had time to respond.
- If the Job controller failed to create Pods for any reason (lack of ResourceQuota, lack of permission, etc.), then there may be fewer pods than requested.
- The Job controller may throttle new Pod creation due to excessive previous pod failures in the same Job.
- When a Pod is gracefully shut down, it takes time to stop.
Completion mode
Jobs with fixed completion count - that is, Jobs that have non-null .spec.completions - can have a completion mode that is specified in .spec.completionMode:
NonIndexed (default): the Job is considered complete when there have been .spec.completions successfully completed Pods. In other words, each Pod completion is homologous to each other. Note that Jobs that have null .spec.completions are implicitly NonIndexed.
Indexed: the Pods of a Job get an associated completion index from 0 to .spec.completions-1. The index is available through four mechanisms:
- The Pod annotation batch.kubernetes.io/job-completion-index.
- The Pod label batch.kubernetes.io/job-completion-index (for v1.28 and later). Note the feature gate PodIndexLabel must be enabled to use this label, and it is enabled by default.
- As part of the Pod hostname, following the pattern $(job-name)-$(index). When you use an Indexed Job in combination with a Service, Pods within the Job can use the deterministic hostnames to address each other via DNS. For more information about how to configure this, see Job with Pod-to-Pod Communication.
- From the containerized task, in the environment variable JOB_COMPLETION_INDEX.
The Job is considered complete when there is one successfully completed Pod for each index. For more information about how to use this mode, see Indexed Job for Parallel Processing with Static Work Assignment.
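As a sketch, an Indexed Job whose Pods read their completion index from the JOB_COMPLETION_INDEX environment variable might look like this (the name and image are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-example   # hypothetical name
spec:
  completions: 3
  parallelism: 3
  completionMode: Indexed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        command: ["sh", "-c", "echo handling shard $JOB_COMPLETION_INDEX"]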