- "Cancels a job in an Batch job queue. Jobs that are in the SUBMITTED or PENDING are canceled. A job in RUNNABLE remains in RUNNABLE until it reaches the head of the job queue. Then the job status is updated to FAILED. A PENDING job is canceled after all dependency jobs are completed. Therefore, it may take longer than expected to cancel a job in PENDING status. When you try to cancel an array parent job in PENDING, Batch attempts to cancel all child jobs. The array parent job is canceled when all child jobs are completed. Jobs that progressed to the STARTING or RUNNING state aren't canceled. However, the API operation still succeeds, even if no job is canceled. These jobs must be terminated with the TerminateJob operation",
+ "Cancels a job in an Batch job queue. Jobs that are in a SUBMITTED, PENDING, or RUNNABLE state are cancelled and the job status is updated to FAILED. A PENDING job is canceled after all dependency jobs are completed. Therefore, it may take longer than expected to cancel a job in PENDING status. When you try to cancel an array parent job in PENDING, Batch attempts to cancel all child jobs. The array parent job is canceled when all child jobs are completed. Jobs that progressed to the STARTING or RUNNING state aren't canceled. However, the API operation still succeeds, even if no job is canceled. These jobs must be terminated with the TerminateJob operation",
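The revised behavior reads as a small state rule: SUBMITTED, PENDING, and RUNNABLE jobs are cancelled to FAILED, while STARTING and RUNNING jobs are left running and need TerminateJob instead. A minimal sketch of that rule as described above (the function is illustrative, not the service's implementation):

```python
# States that CancelJob moves to FAILED, per the revised description above.
CANCELLABLE_STATES = {"SUBMITTED", "PENDING", "RUNNABLE"}

def cancel_job(state: str) -> str:
    """Return the job state after a CancelJob call (illustrative sketch)."""
    if state in CANCELLABLE_STATES:
        return "FAILED"
    # STARTING / RUNNING: the call still succeeds, but the job keeps its
    # state and must be stopped with TerminateJob.
    return state
```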
"Creates a job to invoke a model on multiple prompts (batch inference). Format your data according to Format your inference data and upload it to an Amazon S3 bucket. For more information, see Create a batch inference job. The response returns a jobArn that you can use to stop or get details about the job. You can check the status of the job by sending a GetModelCustomizationJob request",
options: [
  {
    name: "--job-name",
    description: "A name to give the batch inference job",
    args: {
      name: "string",
    },
  },
  {
    name: "--role-arn",
    description:
      "The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference. You can use the console to create a default service role or follow the steps at Create a service role for batch inference",
    args: {
      name: "string",
    },
  },
  {
    name: "--client-request-token",
    description:
      "A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency",
    args: {
      name: "string",
    },
  },
  {
    name: "--model-id",
    description:
      "The unique identifier of the foundation model to use for the batch inference job",
    args: {
      name: "string",
    },
  },
  {
    name: "--input-data-config",
    description:
      "Details about the location of the input to the batch inference job",
    args: {
      name: "structure",
    },
  },
  {
    name: "--output-data-config",
    description:
      "Details about the location of the output of the batch inference job",
    args: {
      name: "structure",
    },
  },
  {
    name: "--timeout-duration-in-hours",
    description:
      "The number of hours after which to force the batch inference job to time out",
    args: {
      name: "integer",
    },
  },
  {
    name: "--tags",
    description:
      "Any tags to associate with the batch inference job. For more information, see Tagging Amazon Bedrock resources",
    args: {
      name: "list",
    },
  },
  {
    name: "--cli-input-json",
    description:
      "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally",
    args: {
      name: "string",
    },
  },
  {
    name: "--generate-cli-skeleton",
    description:
      "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command",
  },
],
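Taken together, the options above describe a complete create call. A minimal sketch of assembling one in Python: the bucket paths, role ARN, and model ID are placeholders, and the nested `s3InputDataConfig` / `s3OutputDataConfig` key names are an assumption about the shape the structure-valued options expect, not something this diff confirms.

```python
import json

# Hypothetical values; the nested key names below are an assumed shape for
# the --input-data-config / --output-data-config structure options.
input_config = {"s3InputDataConfig": {"s3Uri": "s3://my-bucket/input/records.jsonl"}}
output_config = {"s3OutputDataConfig": {"s3Uri": "s3://my-bucket/output/"}}

# Build the argument list for a create-model-invocation-job invocation,
# using the flags listed in the spec above.
args = [
    "aws", "bedrock", "create-model-invocation-job",
    "--job-name", "my-batch-job",
    "--role-arn", "arn:aws:iam::123456789012:role/MyBatchInferenceRole",
    "--model-id", "my-model-id",
    "--input-data-config", json.dumps(input_config),
    "--output-data-config", json.dumps(output_config),
]
```

The structure-valued options are passed as JSON strings on the command line, which is why the configs are serialized with `json.dumps`.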
"Gets details about a batch inference job. For more information, see View details about a batch inference job",
options: [
  {
    name: "--job-identifier",
    description:
      "The Amazon Resource Name (ARN) of the batch inference job",
    args: {
      name: "string",
    },
  },
  {
    name: "--cli-input-json",
    description:
      "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally",
    args: {
      name: "string",
    },
  },
  {
    name: "--generate-cli-skeleton",
    description:
      "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command",
  },
],
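Since the jobArn returned at creation can be fed back in as `--job-identifier`, a caller might poll this get call until the job reaches a terminal state. A minimal sketch, where `fetch_status` stands in for the real call and the status names are illustrative placeholders, not taken from this diff:

```python
import time

# Illustrative terminal status names; the real status values are not
# confirmed by the spec above.
TERMINAL = {"Completed", "Failed", "Stopped"}

def wait_for_job(fetch_status, poll_seconds=0.0, max_polls=100):
    """Poll `fetch_status` (a stand-in for a get-model-invocation-job
    call) until it returns a terminal status, then return that status."""
    for _ in range(max_polls):
        status = fetch_status()
        if status in TERMINAL:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("job did not reach a terminal status")
```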
"Lists all batch inference jobs in the account. For more information, see View details about a batch inference job",
options: [
  {
    name: "--submit-time-after",
    description:
      "Specify a time to filter for batch inference jobs that were submitted after the time you specify",
    args: {
      name: "timestamp",
    },
  },
  {
    name: "--submit-time-before",
    description:
      "Specify a time to filter for batch inference jobs that were submitted before the time you specify",
    args: {
      name: "timestamp",
    },
  },
  {
    name: "--status-equals",
    description:
      "Specify a status to filter for batch inference jobs whose statuses match the string you specify",
    args: {
      name: "string",
    },
  },
  {
    name: "--name-contains",
    description:
      "Specify a string to filter for batch inference jobs whose names contain the string",
    args: {
      name: "string",
    },
  },
  {
    name: "--max-results",
    description:
      "The maximum number of results to return. If there are more results than the number that you specify, a nextToken value is returned. Use the nextToken in a request to return the next batch of results",
    args: {
      name: "integer",
    },
  },
  {
    name: "--next-token",
    description:
      "If there were more results than the value you specified in the maxResults field in a previous ListModelInvocationJobs request, the response would have returned a nextToken value. To see the next batch of results, send the nextToken value in another request",
    args: {
      name: "string",
    },
  },
  {
    name: "--sort-by",
    description: "An attribute by which to sort the results",
    args: {
      name: "string",
    },
  },
  {
    name: "--sort-order",
    description:
      "Specifies whether to sort the results by ascending or descending order",
    args: {
      name: "string",
    },
  },
  {
    name: "--cli-input-json",
    description:
      "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally",
    args: {
      name: "string",
    },
  },
  {
    name: "--starting-token",
    description:
      "A token to specify where to start paginating. This is the\nNextToken from a previously truncated response.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide",
    args: {
      name: "string",
    },
  },
  {
    name: "--page-size",
    description:
      "The size of each page to get in the AWS service call. This\ndoes not affect the number of items returned in the command's\noutput. Setting a smaller page size results in more calls to\nthe AWS service, retrieving fewer items in each call. This can\nhelp prevent the AWS service calls from timing out.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide",
    args: {
      name: "integer",
    },
  },
  {
    name: "--max-items",
    description:
      "The total number of items to return in the command's output.\nIf the total number of items available is more than the value\nspecified, a NextToken is provided in the command's\noutput. To resume pagination, provide the\nNextToken value in the starting-token\nargument of a subsequent command. Do not use the\nNextToken response element directly outside of the\nAWS CLI.\nFor usage examples, see Pagination in the AWS Command Line Interface User\nGuide",
    args: {
      name: "integer",
    },
  },
  {
    name: "--generate-cli-skeleton",
    description:
      "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command",
  },
],
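The `--max-results` / `--next-token` semantics described above amount to standard token-based pagination: keep resending the returned nextToken until none comes back. A minimal client-side sketch, where `list_page` is a hypothetical stand-in for the ListModelInvocationJobs call (the `jobSummaries` key is an assumption about the response shape):

```python
def paginate(list_page, max_results=2):
    """Collect all results by following nextToken until it is absent.

    `list_page` stands in for a ListModelInvocationJobs request; it takes
    next_token/max_results and returns a dict with "jobSummaries" and an
    optional "nextToken", mirroring the option descriptions above.
    """
    token = None
    jobs = []
    while True:
        page = list_page(next_token=token, max_results=max_results)
        jobs.extend(page["jobSummaries"])
        token = page.get("nextToken")
        if token is None:  # no more pages to fetch
            return jobs
```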
"Stops a batch inference job. You're only charged for tokens that were already processed. For more information, see Stop a batch inference job",
options: [
  {
    name: "--job-identifier",
    description:
      "The Amazon Resource Name (ARN) of the batch inference job to stop",
    args: {
      name: "string",
    },
  },
  {
    name: "--cli-input-json",
    description:
      "Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally",
    args: {
      name: "string",
    },
  },
  {
    name: "--generate-cli-skeleton",
    description:
      "Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command",
  },
],
- "Only print the commands that would be executed to connect your tool with your repository without making any changes to your configuration",
+ "Only print the commands that would be executed to connect your tool with your repository without making any changes to your configuration. Note that this prints the unredacted auth token as part of the output",
- "Imports the source repository credentials for an CodeBuild project that has its source code stored in a GitHub, GitHub Enterprise, or Bitbucket repository",
+ "Imports the source repository credentials for an CodeBuild project that has its source code stored in a GitHub, GitHub Enterprise, GitLab, GitLab Self Managed, or Bitbucket repository",
- "For GitHub or GitHub Enterprise, this is the personal access token. For Bitbucket, this is either the access token or the app password. For the authType CODECONNECTIONS, this is the connectionArn",
+ "For GitHub or GitHub Enterprise, this is the personal access token. For Bitbucket, this is either the access token or the app password. For the authType CODECONNECTIONS, this is the connectionArn. For the authType SECRETS_MANAGER, this is the secretArn",
- "The type of authentication used to connect to a GitHub, GitHub Enterprise, GitLab, GitLab Self Managed, or Bitbucket repository. An OAUTH connection is not supported by the API and must be created using the CodeBuild console. Note that CODECONNECTIONS is only valid for GitLab and GitLab Self Managed",
+ "The type of authentication used to connect to a GitHub, GitHub Enterprise, GitLab, GitLab Self Managed, or Bitbucket repository. An OAUTH connection is not supported by the API and must be created using the CodeBuild console",