Welcome to the TIR AI Platform Documentation!
TIR is a modern AI Development Platform designed to tackle the friction of training and serving large AI models.
We do so by providing highly optimised GPU containers (NGC), pre-configured environments (PyTorch, TensorFlow, Triton), automated API generation for model serving, shared notebook storage, and much more.
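Since the pre-configured Triton environment serves models over the standard KServe v2 HTTP protocol, a generated inference API can be called with a few lines of Python. The sketch below is illustrative only: the endpoint URL, model name, and input tensor name are placeholders rather than actual TIR values; substitute the details shown on your deployment's page and a token created under API Tokens.

```python
# Minimal sketch of calling a deployed model over Triton's KServe v2
# HTTP protocol. All names below are placeholders: substitute the
# endpoint, model name, and API token from your own TIR project
# (see Model Deployments and API Tokens).
import requests

API_TOKEN = "<your-tir-api-token>"  # created under API Tokens
ENDPOINT = "https://<your-endpoint>/v2/models/<model>/infer"  # from Model Deployments

# A single text input in KServe v2 format; the tensor name "prompt"
# is hypothetical and depends on the deployed model's signature.
payload = {
    "inputs": [
        {
            "name": "prompt",
            "shape": [1],
            "datatype": "BYTES",
            "data": ["What is fine-tuning?"],
        }
    ]
}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```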
- Introduction
- Getting Started
- How-to Guides
- Projects
- Notebooks
- Committed Notebook
- GPU H100 Plans
- Datasets
- Models (Storage)
- Model Deployments
- Fine-Tuning Models
- Samples and Tutorials
  - Fine-tune LLaMA with Multiple GPUs
  - Deploy Inference for Llama 2
  - Deploy Inference for Code Llama
  - Deploy Inference for Stable Diffusion v2.1
  - Fine-tune Stable Diffusion model on TIR
    - About Training methods
    - Fine-tuning the model
      - Step-1: Launch a Notebook on TIR
      - Step-2: Initial Setup
      - Step-3: Settings for teaching the new concept
      - Step-4: Teach the model the new concept (Fine-tuning with the training method)
      - Step-5: Run Inference with the newly trained model
      - Step-6: Save the newly created concept
      - Step-7: Create an Inference Server against our newly trained model
    - Conclusion
  - Deploy Inference for MPT-7B-CHAT
  - Custom Containers in TIR
  - Fine-Tuning Bloom
  - Natural Language Queries to SQL with Code Llama
- API Tokens
- SSH Key
- Team Features
- Settings
- Analytics
- Billing