From PVC (API)

If you have already pre-loaded a model onto a Persistent Volume Claim (PVC), you can add the model to HPE Machine Learning Inferencing Software by following the steps below.

Before You Start

PVC Syntax & URL Options

Review the following PVC syntax and URL options to ensure you have the correct information for adding your model.

Option        | Description                                              | Example                                       | Default
PVC Name      | Name of the Persistent Volume Claim (PVC) to be mounted | pvc://my-model-pvc                            | Required, no default
Path          | Optional path within the PVC to be mounted              | pvc://my-model-pvc/models                     | If not specified, the entire PVC is mounted
ContainerPath | Directory in the container where the PVC is mounted     | pvc://my-model-pvc?containerPath=/mnt/models  | /mnt/models
readOnly      | Whether the volume is read-only                         | pvc://my-model-pvc?readOnly                   | If not specified, the volume is read-write
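
For illustration, the path and containerPath options can be combined in a single URL, mirroring the url field used in the API request later in this article (my-model-pvc and the directories shown are placeholder values):

    # Mount the models/ directory of the my-model-pvc claim at /mnt/models inside the container
    pvc://my-model-pvc/models?containerPath=/mnt/models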

Warning: PVC Name
The PVC Name (e.g., <my-model-pvc>) must already exist in the Kubernetes namespace where the packaged model will be deployed.
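
Before adding the model, you can verify that the claim exists with a quick kubectl check (the PVC name and namespace below are placeholders; substitute your own):

    # Fails with "NotFound" if the claim does not exist in that namespace
    kubectl get pvc <my-model-pvc> -n <YOUR_NAMESPACE>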

How to Add a Packaged Model From a PVC

  1. Sign in to HPE Machine Learning Inferencing Software.
    curl -X 'POST' \
      '<YOUR_EXT_CLUSTER_IP>/api/v1/login' \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '{
      "username": "<YOUR_USERNAME>",
      "password": "<YOUR_PASSWORD>"
    }'
  2. Obtain the Bearer token from the response (a sketch for capturing it in a shell variable follows these steps).
  3. Use the following cURL command to add a new packaged model (an illustrative, filled-in example also follows these steps). For information on setting environment variables and arguments, see the Advanced Configuration reference article.
    curl -X 'POST' \
      '<YOUR_EXT_CLUSTER_IP>/api/v1/models' \
      -H 'accept: application/json' \
      -H 'Authorization: Bearer <YOUR_ACCESS_TOKEN>' \
      -H 'Content-Type: application/json' \
      -d '{
      "arguments": ["--model_dir <PATH_WHERE_MODEL_IS_STORED>"],
      "description": "<DESCRIPTION>",
      "environment": {
          "key": "value",
          "key2": "value2"
      },
      "image": "<USER_NAME>/<MODEL_NAME>:<TAG>",
      "modelFormat": "<MODEL_FORMAT>",
      "name": "<MODEL_NAME>",
      "resources": {
          "gpuType": "<GPU_TYPE>",
          "limits": {
              "cpu": "<CPU_LIMIT>",
              "gpu": "<GPU_LIMIT>",
              "memory": "<MEMORY_LIMIT>"
          },
          "requests": {
              "cpu": "<CPU_REQUEST>",
              "gpu": "<GPU_REQUEST>",
              "memory": "<MEMORY_REQUEST>"
          }
      },
      "url": "pvc://<PVC_NAME>/<OPTIONAL_PATH>?containerPath=<DIR_IN_CONTAINER>"
    }'
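
If you are scripting these steps, the Bearer token from step 1 can be captured directly from the login response. The sketch below assumes the response is JSON, that jq is installed, and that the token is returned in a field named token; adjust the jq filter to match the actual response from your deployment.

    # Log in and store the access token for use in later requests.
    # NOTE: the .token field name is an assumption; inspect the login response
    # and change the filter if your release returns the token under another key.
    TOKEN=$(curl -s -X 'POST' \
      '<YOUR_EXT_CLUSTER_IP>/api/v1/login' \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '{"username": "<YOUR_USERNAME>", "password": "<YOUR_PASSWORD>"}' \
      | jq -r '.token')

For reference, the following is an illustrative, filled-in version of the request in step 3 for a model pre-loaded onto a PVC. Every value (name, image, model format, resource sizes, and paths) is hypothetical; replace the modelFormat value with one supported by your installation. Note how the containerPath in the url and the --model_dir argument point at the same directory.

    curl -X 'POST' \
      '<YOUR_EXT_CLUSTER_IP>/api/v1/models' \
      -H 'accept: application/json' \
      -H "Authorization: Bearer $TOKEN" \
      -H 'Content-Type: application/json' \
      -d '{
      "arguments": ["--model_dir /mnt/models"],
      "description": "Example model served from a pre-loaded PVC",
      "environment": {},
      "image": "my-org/my-model:1.0",
      "modelFormat": "custom",
      "name": "my-model",
      "resources": {
          "limits": {"cpu": "2", "gpu": "0", "memory": "4Gi"},
          "requests": {"cpu": "1", "gpu": "0", "memory": "2Gi"}
      },
      "url": "pvc://my-model-pvc/models?containerPath=/mnt/models"
    }'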