tensor-fusion

module
v1.41.5
Published: Jul 29, 2025 License: Apache-2.0

README


TensorFusion.AI
Next-Generation GPU Virtualization and Pooling for Enterprises
Fewer GPUs, More AI Apps.
Explore the docs »
View Demo | Report Bug | Request Feature

♾️ Tensor Fusion


Tensor Fusion is a state-of-the-art GPU virtualization and pooling solution designed to maximize GPU cluster utilization.

🌟 Highlights

๐Ÿ“ Fractional GPU with Single TFlops/MiB Precision
๐Ÿ”„ Battle-tested GPU-over-IP Remote GPU Sharing
โš–๏ธ GPU-first Scheduling and Auto-scaling
๐Ÿ“Š Computing Oversubscription and GPU VRAM Expansion
๐Ÿ›ซ GPU Pooling, Monitoring, Live Migration, AI Model Preloading and more
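To make the oversubscription highlight concrete, here is a tiny worked example in Go. The numbers and the 1.5x factor are illustrative only, not TensorFusion defaults: oversubscription lets a pool admit more requested TFlops than the physical hardware provides, on the bet that workloads rarely peak at the same time.

```go
package main

import "fmt"

func main() {
	// Illustrative numbers only; not TensorFusion defaults.
	physicalTFlops := 80.0 // raw compute of one GPU
	oversubFactor := 1.5   // admit up to 1.5x the physical capacity
	admittedLimit := physicalTFlops * oversubFactor

	requests := []float64{30, 45, 40} // three workloads' TFlops requests
	var total float64
	for _, r := range requests {
		total += r
	}

	fmt.Printf("admitted limit: %.0f TFlops, requested: %.0f TFlops, fits: %v\n",
		admittedLimit, total, total <= admittedLimit)
	// 30+45+40 = 115 <= 120, so all three workloads are admitted even though
	// their combined request exceeds the card's physical 80 TFlops.
}
```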

🎬 Demo

Fractional vGPU & GPU-over-IP & Distributed Allocation

AI Infra Console

GPU Live-migration [End-to-end feature WIP]

https://cdn.tensor-fusion.ai/GPU_Content_Migration.mp4

🚀 Quick Start

Onboard Your Own AI Infra
💬 Discussion
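As a concrete starting point, here is a minimal sketch of what requesting a fractional vGPU for a pod might look like from Go. The annotation keys (tensor-fusion.ai/tflops-request, tensor-fusion.ai/vram-request) are assumptions for illustration, not the documented TensorFusion API; the client-go calls are standard Kubernetes, and the docs linked above have the authoritative onboarding steps.

```go
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Standard client-go setup from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "llm-inference",
			Namespace: "default",
			Annotations: map[string]string{
				// Hypothetical annotation keys: request a 10-TFlops / 4 GiB
				// vGPU slice instead of a whole physical GPU.
				"tensor-fusion.ai/tflops-request": "10",
				"tensor-fusion.ai/vram-request":   "4Gi",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "inference",
				Image: "vllm/vllm-openai:latest",
			}},
		},
	}

	if _, err := clientset.CoreV1().Pods("default").Create(
		context.Background(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("pod submitted; the TensorFusion scheduler would bind it to a vGPU slice")
}
```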

🔮 Features & Roadmap

Core GPU Virtualization Features
  • Fractional GPU and flexible oversubscription
  • Remote GPU sharing with SOTA GPU-over-IP technology, with less than 4% performance loss
  • GPU VRAM expansion and hot/warm/cold tiering
  • Non-NVIDIA GPU/NPU vendor support
Pooling & Scheduling & Management
  • GPU/NPU pool management in Kubernetes
  • GPU-first scheduling and allocation, with single TFlops/MiB precision
  • GPU node auto provisioning/termination
  • GPU compaction/bin-packing (see the sketch at the end of this section)
  • Seamless onboarding experience for PyTorch, TensorFlow, llama.cpp, vLLM, TensorRT, SGLang and all popular AI training/serving frameworks
  • Centralized Dashboard & Control Plane
  • GPU-first autoscaling policies that auto-set requests/limits/replicas
  • Request multiple vGPUs with group scheduling for large models
  • Support different QoS levels
Enterprise Features
  • GPU live-migration: snapshot/distribute/restore GPU context across clusters, the fastest in the world
  • AI model registry and preloading, build your own private MaaS (Model-as-a-Service)
  • Advanced auto-scaling policies: scale to zero, rebalancing of hot GPUs
  • Advanced observability features: detailed metrics & tracing/profiling of CUDA calls
  • Monetize your GPU cluster with multi-tenant usage measurement & billing reports
  • Enterprise-level high availability and resilience: topology-aware scheduling, GPU node auto failover, etc.
  • Enterprise-level security: complete on-premise deployment support, encryption in transit & at rest
  • Enterprise-level compliance: SSO/SAML support, advanced audit, ReBAC control, SOC 2 and other compliance reports available
🗳️ Platform Support
  • Run on Linux Kubernetes clusters
  • Run on Linux VMs or Bare Metal (one-click onboarding to Edge K3S)
  • Run on Windows (Docs not ready, contact us for support)
  • Run on macOS (imagine mounting a virtual NVIDIA GPU device on macOS!)

See the open issues for a full list of proposed features (and known issues).
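To make the compaction/bin-packing item above concrete, the sketch below shows a best-fit placement policy: among the GPUs that still have enough TFlops and VRAM, pick the one that would be left with the least slack, so fragmented capacity is filled before idle cards are touched. This is an illustration of the general technique under assumed type and field names, not the project's actual gpuallocator implementation.

```go
package main

import "fmt"

// gpu tracks the remaining capacity of one physical GPU.
// Field names are illustrative, not TensorFusion's real types.
type gpu struct {
	name        string
	freeTFlops  float64
	freeVRAMMiB int64
}

type request struct {
	tflops  float64
	vramMiB int64
}

// bestFit returns the index of the GPU that fits the request with the least
// leftover compute, or -1 if no GPU can host it. Packing requests onto the
// fullest feasible card keeps other cards free for large workloads or for
// being scaled down entirely.
func bestFit(gpus []gpu, req request) int {
	best, bestSlack := -1, 0.0
	for i, g := range gpus {
		if g.freeTFlops < req.tflops || g.freeVRAMMiB < req.vramMiB {
			continue
		}
		slack := g.freeTFlops - req.tflops
		if best == -1 || slack < bestSlack {
			best, bestSlack = i, slack
		}
	}
	return best
}

func main() {
	pool := []gpu{
		{"gpu-a", 30, 8192},
		{"gpu-b", 12, 4096},
		{"gpu-c", 80, 24576},
	}
	req := request{tflops: 10, vramMiB: 2048}

	if i := bestFit(pool, req); i >= 0 {
		fmt.Println("placed on", pool[i].name) // gpu-b: tightest fit that still satisfies the request
	} else {
		fmt.Println("no GPU has enough free capacity")
	}
}
```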

🙏 Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request
Top contributors
(contributor avatars generated with contrib.rocks)

🔷 License

  1. This repo is open sourced under the Apache 2.0 License, which covers the GPU pooling, scheduling, and management features; you can use and modify it for free.
  2. GPU virtualization and GPU-over-IP features are also free to use as part of the Community Plan, though their implementation is not fully open sourced.
  3. Features listed under "Enterprise Features" above are paid; licensed users automatically unlock them.


Directories

Path Synopsis
api
v1
Package v1 contains API Schema definitions for the tensor-fusion.ai v1 API group (see the usage sketch after this listing).
cmd
nodediscovery command
internal
cloudprovider/pricing
GPU instance data is from https://instances.vantage.sh/, thanks a lot!
gpuallocator
Package gpuallocator handles GPU allocation.
metrics
NOTE: make sure any new field/tag on an existing metric, or any new metric, is also added to the SetupTable function for manual DB migration.
test
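Since api/v1 holds the API schema definitions for the tensor-fusion.ai v1 group, a consumer would typically register those types into a controller-runtime scheme before reading or writing the custom resources. The sketch below assumes the standard kubebuilder layout (an exported AddToScheme) and assumes github.com/NexusGPU/tensor-fusion as the module path; check the package docs above for the actual import path and kinds.

```go
package main

import (
	"log"

	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	// Assumed module path; verify against the module's go.mod.
	tfv1 "github.com/NexusGPU/tensor-fusion/api/v1"
)

func main() {
	// Build a scheme that knows both the core Kubernetes types and the
	// tensor-fusion.ai/v1 types (kubebuilder-generated packages expose
	// AddToScheme for exactly this purpose).
	scheme := runtime.NewScheme()
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	utilruntime.Must(tfv1.AddToScheme(scheme))

	// A typed client that can now get/list the custom resources defined in api/v1.
	c, err := client.New(ctrl.GetConfigOrDie(), client.Options{Scheme: scheme})
	if err != nil {
		log.Fatal(err)
	}
	_ = c // use c.Get / c.List with the v1 kinds defined by the package
}
```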
