
User Models and Model Transfer Guide

This guide explains how to use the User Models structure and how to perform Model Transfer between devices through the Cordatus Platform.

With the User Models feature, you can:

  • Use models downloaded on one device across all your other devices
  • Transfer models between Cordatus-registered devices on the same local network
  • Transfer and use custom models that are not available on the internet

See details → Application Hub Overview | Application Hub Quickstart | Standard Application Launch Guide | NVIDIA VSS Guide


Model Path Configuration

To use the User Models feature, you must first define model paths on your devices.

Accessing the Metrics Page

  1. Connect to the device you want to use in Cordatus
  2. Navigate to the Metrics area from the Device Table
  3. Click on the Model Info tab

Defining Model Paths

In the Model Info tab, you will find three input fields for defining model paths for three different inference engines:

  • Huggingface Cache Paths
  • Ollama Models Cache Paths
  • NVIDIA NIM Cache Paths

Path Definition Steps:

  1. Select the appropriate directories from the Cordatus interface for each inference engine
  2. After defining the paths, you can view information about the models located in the specified directories
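As a quick sanity check before (or after) filling in these fields, you can verify that the cache directories actually exist on the device. The locations below are common defaults for each engine, not values required by Cordatus; adjust them to match your setup.

```python
from pathlib import Path

# Typical default cache locations (assumptions; your paths may differ).
candidate_paths = {
    "Huggingface": Path.home() / ".cache" / "huggingface" / "hub",
    "Ollama": Path.home() / ".ollama" / "models",
    "NVIDIA NIM": Path.home() / ".cache" / "nim",
}

for engine, path in candidate_paths.items():
    status = "found" if path.is_dir() else "missing"
    print(f"{engine}: {path} ({status})")
```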
Important

Correctly defining model paths is mandatory for the User Models feature to function properly.


Discovering and Adding Models on Your Device

Accessing the LLM Models Page

  1. Click on the LLM Models tab from the left-side menu
  2. Click the Explore Models on Your Device button located next to the Application-Hub / User Models options in the top right
  3. Select the device you want to use from the menu that opens

Model Addition Methods

After selecting a device, you can add models using two different methods:


Method 1: Start Scanning (Automatic Scan)

This method allows Cordatus to automatically detect models located in the paths you previously defined.

Scanning Process:

  1. Click the Start Scanning button
  2. Cordatus will scan all models in the defined paths

Information Displayed:

  • Model name
  • Directory where the model is located
  • File size
  • Whether it has been previously added to the system
  • Which inference engines it can be used with

Adding Models:

  1. Tick the checkboxes to the left of the models you want to add to the system
  2. If you have created a custom model, you can edit the model name in this area
  3. Click the Add Models button to add the selected models to the system

Re-scanning:

  • You can perform a new scan at any time using the Rescan button in the top right
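The scan can be pictured as a walk over the defined paths that collects weight files and reports their name, location, and size. The sketch below is illustrative only; the file extensions and reported fields are assumptions, not Cordatus's actual detection logic.

```python
from pathlib import Path

# File extensions commonly used for LLM weights (an assumption;
# Cordatus's real scanner may recognize more formats).
MODEL_EXTENSIONS = {".gguf", ".safetensors", ".bin"}

def scan_models(root: Path) -> list[dict]:
    """Walk a model path and report name, directory, and size per weight file."""
    results = []
    for f in sorted(root.rglob("*")):
        if f.is_file() and f.suffix in MODEL_EXTENSIONS:
            results.append({
                "name": f.stem,
                "directory": str(f.parent),
                "size_mb": round(f.stat().st_size / 1_048_576, 1),
            })
    return results
```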

Method 2: Add New Model Manually (Manual Addition)

This method allows you to manually add a model located on your device to the system.

Manual Addition Steps:

  1. Click on the Add New Model Manually option
  2. Navigate to the model directory from the Cordatus interface
  3. Select the model you want to add
  4. Review and confirm the model information

This method is especially useful for models located outside standard paths.


User Models Management

User Model Operations


Viewing the User Models List

  1. On the LLM Models page, switch to the User Models tab from the Application-Hub / User Models tabs in the top right
  2. You can view all models you have added to the system as a list

Model Information

The following information is displayed for each model in the User Models list:

  • Model name
  • Device where the model is located
  • Model size
  • Supported inference engines
  • Model status (active/inactive)

Editing and Deleting Model Information

You can edit or delete models you have added to the system at any time:

Editing Model Information:

  • You can change the model name
  • You can update the Quantization information (e.g., Q4_K_M, Q8_0, FP16)
  • You can edit the Parameters information (e.g., 7B, 13B, 70B)
  • You can update other metadata

Deleting Models:

  • You can delete any model from the User Models list at any time
  • The deletion process only removes the record from the Cordatus system
  • The physical files of the model remain on your device
Note

Editing model information is especially important for custom models. Correct quantization and parameter information helps you determine which hardware the model will run on.
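For custom models, quantization and parameter values can often be recovered from the filename itself. The sketch below shows one best-effort approach; the naming patterns it matches (Q4_K_M, Q8_0, FP16, 7B, 13B) are common community conventions, not a format defined by Cordatus.

```python
import re

def parse_model_metadata(filename: str) -> dict:
    """Best-effort extraction of quantization and parameter count from a
    model filename (a sketch; naming conventions vary by publisher)."""
    quant = re.search(r"(Q\d+_K_[A-Z]|Q\d+_\d+|FP16|FP8|BF16)", filename, re.IGNORECASE)
    params = re.search(r"(\d+(?:\.\d+)?)[bB]\b", filename)
    return {
        "quantization": quant.group(1).upper() if quant else None,
        "parameters": f"{params.group(1)}B" if params else None,
    }
```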


Model Transfer and Usage

Model Transfer and Deployment


Deploying a Model

To use a model from the User Models list:

  1. Click the Deploy button next to the model you want to use
  2. Select an inference engine from the menu that opens (vLLM, TensorRT-LLM, etc.)
  3. The system will redirect you to the Application Launch interface

Model Transfer Process

When you want to use a model on a different device:

Transfer Check:

  • Cordatus automatically checks whether the model exists on the target device
  • If the model does not exist on the target device, you can initiate the transfer process

Transfer Features:

  • If the transfer process is interrupted, it will resume from where it left off
  • You can track the progress during the transfer
  • After the transfer is complete, you can continue with the application launch process
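Resuming from where a transfer left off generally means continuing from the bytes already written on the target. The sketch below illustrates the idea with a local file copy; it is not Cordatus's actual transfer protocol.

```python
from pathlib import Path

CHUNK = 1 << 20  # 1 MiB

def resume_copy(src: Path, dst: Path) -> int:
    """Copy src to dst, resuming from dst's current size if a previous
    attempt was interrupted (an illustrative sketch only)."""
    offset = dst.stat().st_size if dst.exists() else 0
    with src.open("rb") as fin, dst.open("ab") as fout:
        fin.seek(offset)  # skip the bytes the target already has
        while chunk := fin.read(CHUNK):
            fout.write(chunk)
    return dst.stat().st_size
```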

Automatic Volume Configuration:

  • Based on the selected inference engine, Cordatus automatically performs volume mapping
  • Required directories are mounted according to the model paths you previously defined
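Conceptually, the volume mapping pairs each host model path with the cache location the engine expects inside the container. The sketch below illustrates this for two engines; the container-side mount points are assumptions, not the mappings Cordatus actually applies.

```python
from pathlib import Path

def volume_args(engine: str, model_paths: dict[str, Path]) -> list[str]:
    """Build docker -v arguments for an inference engine (a sketch; the
    container mount points below are assumptions for illustration)."""
    mounts = {
        "vLLM": ("Huggingface", "/root/.cache/huggingface"),
        "Ollama": ("Ollama", "/root/.ollama/models"),
    }
    host_key, container_path = mounts[engine]
    return ["-v", f"{model_paths[host_key]}:{container_path}"]
```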

Launching the Application

After the model transfer is complete:

  1. The Advanced Settings screen will open
  2. Configure the necessary settings (GPU selection, Docker Options, etc.)
  3. Click the Start Environment button to start the container
Note

For a detailed explanation of the application launch process, please refer to the Standard Application Launch Guide.


Best Practices

Model Path Management

  • Define model paths consistently on each device
  • Store models in an organized folder structure
  • Use external storage areas for backups

Model Transfer Tips

  • Ensure sufficient disk space before transferring large models
  • Perform model transfers during times without heavy network traffic
  • Ensure devices remain on the same network during transfer
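One simple way to apply the disk-space check programmatically is to compare the model size against the free space in the target directory, with a safety margin:

```python
import shutil

def has_space_for(model_size_bytes: int, target_dir: str, margin: float = 1.1) -> bool:
    """Check that the target directory has room for an incoming model,
    with a safety margin (a sketch; tune the margin to your needs)."""
    free = shutil.disk_usage(target_dir).free
    return free >= model_size_bytes * margin
```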

Model Organization

  • Define model names in a meaningful and consistent manner