

Hugging Face Integration

Connect models from Hugging Face Hub to Safeguard for AI model SBOM generation and dependency scanning.

Prerequisites

  • A Hugging Face account (not required for public models)
  • A Hugging Face access token (required for private or gated models, or for higher rate limits)

Public Models

Hugging Face public models can be scanned without authentication.
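
For context, this is the same kind of anonymous lookup you can reproduce yourself with the huggingface_hub Python library. The sketch below is illustrative only (the model ID is just an example) and does not describe Safeguard's internals.

```python
# Minimal sketch: anonymous metadata lookup for a public model.
# No token is needed for public repositories.
from huggingface_hub import model_info

info = model_info("openai/whisper-base")  # example public model ID

print(info.sha)                              # current revision (commit hash)
print([s.rfilename for s in info.siblings])  # files a scan would cover
```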

Step 1: Navigate to Integrations

Go to Integrations from the sidebar and click Connect on the Hugging Face card.

Step 2: Enter Model Reference

  1. Select the Public tab
  2. Enter a Name for this configuration
  3. Optionally add a Description
  4. Enter the Hugging Face model ID:
    • Format: organization/model-name or username/model-name
    • Examples: meta-llama/Llama-2-7b, openai/whisper-base (see the format-check sketch after this list)
  5. Click Add
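
If you want to sanity-check the organization/model-name shape before clicking Add, a simple pattern check is enough. This snippet is purely illustrative and is not part of Safeguard.

```python
import re

# Illustrative check for the organization/model-name (or username/model-name) shape.
MODEL_ID_PATTERN = re.compile(r"^[\w.-]+/[\w.-]+$")

for model_id in ["meta-llama/Llama-2-7b", "openai/whisper-base", "not-a-valid-id"]:
    print(model_id, "->", bool(MODEL_ID_PATTERN.match(model_id)))
```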

Step 3: Review & Connect

  1. Configure Project Name and Version
  2. Click Connect to complete

Private Models

For private or gated models, you'll need a Hugging Face access token.
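
For reference, authenticated access with the huggingface_hub library looks roughly like the sketch below; the model ID and token are placeholders, and the attributes shown belong to the library's ModelInfo object rather than anything Safeguard exposes.

```python
# Sketch: fetching metadata for a private or gated model with an access token.
from huggingface_hub import model_info

token = "hf_xxx"  # placeholder; use your own read-only token
info = model_info("your-org/private-model", token=token)  # placeholder model ID

print(info.private)  # True for private repos
print(info.gated)    # "auto"/"manual" for gated repos, False otherwise
```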

Step 1: Navigate to Integrations

Go to Integrations from the sidebar and click Connect on the Hugging Face card.

Step 2: Enter Access Token

  1. Select the Private tab
  2. Enter a Name for this configuration
  3. Optionally add a Description
  4. Enter your Hugging Face Access Token
  5. Click Verify Credentials
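
Verify Credentials is conceptually similar to calling whoami with the token; the sketch below shows that local equivalent (the token value is a placeholder).

```python
# Sketch: checking that an access token is valid before using it,
# similar in spirit to the "Verify Credentials" step above.
from huggingface_hub import HfApi

api = HfApi(token="hf_xxx")  # placeholder token
user = api.whoami()          # raises an HTTP error if the token is invalid or revoked
print("Token belongs to:", user["name"])
```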

Step 3: Select Models

  1. Once verified, browse your models or enter model IDs
  2. Select the models you want to scan
  3. Choose specific revisions (optional)
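
The same browse-and-pick-a-revision flow can be approximated locally with the huggingface_hub library; a minimal sketch, assuming a placeholder token and username:

```python
# Sketch: listing an account's models and the revisions (branches) available
# for each, analogous to the model/revision selection step above.
from huggingface_hub import HfApi

api = HfApi(token="hf_xxx")  # placeholder token
for model in api.list_models(author="your-username", limit=5):  # placeholder author
    refs = api.list_repo_refs(model.id)
    branches = {branch.name: branch.target_commit for branch in refs.branches}
    print(model.id, branches)
```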

Step 4: Configure & Connect

  1. Set Project Name and Version for each model
  2. Click Connect to complete

Creating a Hugging Face Access Token

  1. Sign in to Hugging Face
  2. Click your profile picture then Settings
  3. Select Access Tokens from the sidebar
  4. Click New token
  5. Enter a name (e.g., "Safeguard Integration")
  6. Select token type:
    • Read - Sufficient for scanning public models and any private models your account can access
    • Write - Not needed for Safeguard
  7. Click Generate token
  8. Copy the token immediately
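
If you also want to confirm the token works locally before pasting it into Safeguard, the huggingface_hub library will pick it up from the HF_TOKEN environment variable; a small sketch under that assumption:

```python
# Sketch: verifying a freshly created token via the HF_TOKEN environment
# variable (export HF_TOKEN=hf_... beforehand) instead of hard-coding it.
import os
from huggingface_hub import HfApi

assert "HF_TOKEN" in os.environ, "export HF_TOKEN before running this"
api = HfApi()  # huggingface_hub reads HF_TOKEN automatically
print("Token is valid for:", api.whoami()["name"])
```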

What Gets Scanned

When scanning Hugging Face models, Safeguard analyzes:

Component           Description
Model files         Weights, configs, tokenizers
requirements.txt    Python dependencies
Model card          Metadata and documentation
Config files        Model configuration JSON
Preprocessing code  Custom preprocessing scripts
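
To see how these categories map onto an actual repository, you can list a repo's files and bucket them by name and extension. The grouping below is a rough illustration, not Safeguard's actual classification logic.

```python
# Rough sketch: grouping a model repo's files into the component categories above.
from huggingface_hub import list_repo_files

WEIGHT_EXTS = (".safetensors", ".bin", ".pt", ".pth", ".h5", ".onnx", ".gguf", ".ggml")

files = list_repo_files("openai/whisper-base")  # example public model
groups = {"Model files": [], "requirements.txt": [], "Model card": [],
          "Config files": [], "Preprocessing code": [], "Other": []}

for name in files:
    if name.endswith(WEIGHT_EXTS) or "tokenizer" in name:
        groups["Model files"].append(name)
    elif name == "requirements.txt":
        groups["requirements.txt"].append(name)
    elif name == "README.md":
        groups["Model card"].append(name)
    elif name.endswith(".json"):
        groups["Config files"].append(name)
    elif name.endswith(".py"):
        groups["Preprocessing code"].append(name)
    else:
        groups["Other"].append(name)

for category, names in groups.items():
    print(f"{category}: {names}")
```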

Supported Model Formats

Format       Extension        Notes
SafeTensors  .safetensors     Recommended - most secure
PyTorch      .bin, .pt, .pth  Common but uses Pickle
TensorFlow   .h5, SavedModel  Keras and TF formats
ONNX         .onnx            Cross-platform format
GGUF/GGML    .gguf, .ggml     Quantized formats
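
A quick way to tell which format a file uses, and whether it is Pickle-based, is to key off its extension. The mapping below is illustrative (the SavedModel directory format has no single extension and is omitted).

```python
# Illustrative mapping from file extension to model format, flagging
# Pickle-based formats that deserve extra scrutiny.
FORMAT_BY_EXT = {
    ".safetensors": ("SafeTensors", False),
    ".bin": ("PyTorch", True),
    ".pt": ("PyTorch", True),
    ".pth": ("PyTorch", True),
    ".h5": ("TensorFlow", False),
    ".onnx": ("ONNX", False),
    ".gguf": ("GGUF", False),
    ".ggml": ("GGML", False),
}

def classify(filename: str) -> tuple[str, bool]:
    """Return (format name, uses_pickle) for a weight file name."""
    for ext, (fmt, uses_pickle) in FORMAT_BY_EXT.items():
        if filename.endswith(ext):
            return fmt, uses_pickle
    return "Unknown", False

print(classify("model.safetensors"))  # ('SafeTensors', False)
print(classify("pytorch_model.bin"))  # ('PyTorch', True) -> Pickle-based
```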

Troubleshooting

"Model not found"

  • Verify the model ID is correct (case-sensitive)
  • Check if the model exists on Hugging Face Hub
  • For private models, ensure your token has access

"Access denied"

  • For gated models, ensure you've accepted the model's terms
  • Verify your token has read access
  • Check if the model requires specific permissions

"Credentials verification failed"

  • Verify the token was copied correctly
  • Check that the token hasn't been revoked
  • Ensure the token has the required permissions
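
These three failures can also be reproduced and told apart from a Python session using huggingface_hub's exception types; a hedged diagnostic sketch (model IDs and token are placeholders):

```python
# Sketch: distinguishing "model not found", "access denied", and token problems.
from huggingface_hub import model_info
from huggingface_hub.utils import (
    GatedRepoError,
    HfHubHTTPError,
    RepositoryNotFoundError,
)

def diagnose(model_id: str, token: str | None = None) -> str:
    try:
        model_info(model_id, token=token)
        return "OK: model is reachable"
    except GatedRepoError:
        return "Access denied: gated model - accept its terms on the Hub first"
    except RepositoryNotFoundError:
        return "Model not found: check the ID (case-sensitive) or token access"
    except HfHubHTTPError as err:
        return f"HTTP error (possibly an invalid or revoked token): {err}"

print(diagnose("no-such-org/no-such-model"))  # expected: model not found
print(diagnose("meta-llama/Llama-2-7b"))      # gated when called without a token
```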

Best Practices

  • Use read-only tokens - Safeguard only needs read access
  • Scan specific revisions - Pin to specific commits for reproducibility (see the sketch after this list)
  • Check model cards - Review the model's documentation and license
  • Prefer SafeTensors - More secure than Pickle-based formats
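
Two of these practices, pinning a revision and preferring SafeTensors, can be combined when pulling a model locally; a minimal sketch with huggingface_hub, where the commit hash is a placeholder:

```python
# Sketch: pin an exact commit for reproducibility and fetch only SafeTensors
# weights plus config/tokenizer files, skipping Pickle-based weight files.
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="openai/whisper-base",  # example model
    revision="0123456789abcdef0123456789abcdef01234567",  # placeholder commit hash
    allow_patterns=["*.safetensors", "*.json", "*.txt"],
)
print("Downloaded to:", path)
```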
