🤖 DART-LLM Multi-Model - Robot Task Planning
Choose from three models fine-tuned with QLoRA and specialized for robot task planning:
- 🚀 Dart-llm-model-1B: Ready for Jetson Nano deployment (870MB GGUF)
- ⚙️ Dart-llm-model-3B: Ready for Jetson Xavier NX deployment (1.9GB GGUF)
- 🎯 Dart-llm-model-8B: Ready for Jetson AGX Xavier/Orin deployment (4.6GB GGUF)
Capabilities: converts natural-language operator commands into structured task sequences for excavators, dump trucks, and other construction robots, with DAG visualization of the resulting task graph. Edge-ready for Jetson devices!
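To make "structured task sequences" concrete: a single operator command is decomposed into subtasks whose dependencies form the DAG that the app visualizes. The sketch below shows one plausible shape for such a sequence; the field names are illustrative assumptions, not the model's exact output schema.

```python
# Illustrative only: a plausible decomposition of one operator command into a
# small task DAG. The keys ("task", "instruction_function", "dependencies")
# are assumptions for illustration, not the model's guaranteed schema.
example_task_sequence = {
    "tasks": [
        {
            "task": "move_excavator_to_soil_area",
            "instruction_function": {"name": "target_area_for_specific_robots"},
            "dependencies": [],
        },
        {
            "task": "excavate_soil",
            "instruction_function": {"name": "excavation"},
            "dependencies": ["move_excavator_to_soil_area"],  # DAG edge
        },
    ]
}
```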
🔧 Recommended for Jetson Deployment (GGUF Models)
For optimal edge deployment performance, use these GGUF quantized models:
- YongdongWang/llama-3.2-1b-lora-qlora-dart-llm-gguf (870MB) - Jetson Nano/Orin Nano
- YongdongWang/llama-3.2-3b-lora-qlora-dart-llm-gguf (1.9GB) - Jetson Orin NX/AGX Orin
- YongdongWang/llama-3.1-8b-lora-qlora-dart-llm-gguf (4.6GB) - High-end Jetson AGX Orin
💡 Deploy with: Ollama, llama.cpp, or llama-cpp-python for efficient edge inference
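As a minimal sketch of the llama-cpp-python route, the snippet below downloads the 1B GGUF from the Hub and runs a single completion. The GGUF filename, prompt, and generation settings are assumptions; check the model repo for the exact file name and recommended prompt format.

```python
# Minimal edge-inference sketch with llama-cpp-python
# (pip install llama-cpp-python huggingface_hub).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized model from the Hub; the filename below is an
# assumption -- list the repo files to find the exact .gguf name.
gguf_path = hf_hub_download(
    repo_id="YongdongWang/llama-3.2-1b-lora-qlora-dart-llm-gguf",
    filename="llama-3.2-1b-lora-qlora-dart-llm.gguf",
)

# Conservative settings for a small Jetson-class device.
llm = Llama(model_path=gguf_path, n_ctx=2048, n_threads=4)

out = llm(
    "Deploy Excavator 1 to the soil area and start excavation.",
    max_tokens=512,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```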
⚙️ Generation Settings
Model Size: select the model for your Jetson device (1B = Nano, 3B = Xavier NX, 8B = AGX)
🔧 GGUF Models for Jetson Deployment
Recommended for edge deployment:
- 1B (870MB): Jetson Nano/Orin Nano (~2GB RAM required)
- 3B (1.9GB): Jetson Orin NX/AGX Orin (~4GB RAM required)
- 8B (4.6GB): High-end Jetson AGX Orin (~8GB RAM required)
💡 Use Ollama or llama.cpp for efficient inference
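If you prefer Ollama, one hedged sketch is to register the GGUF locally (e.g. a Modelfile whose first line is `FROM ./<model>.gguf`, then `ollama create dart-llm-1b -f Modelfile`) and query it from Python with the ollama client; the model name `dart-llm-1b` is purely illustrative.

```python
# Sketch of querying a locally registered GGUF via the ollama Python client
# (pip install ollama). Assumes the model was created beforehand with
# `ollama create dart-llm-1b -f Modelfile`; the name is illustrative.
import ollama

response = ollama.generate(
    model="dart-llm-1b",  # hypothetical local model name
    prompt="Send the dump truck to the loading zone after the excavator finishes digging.",
    options={"temperature": 0.2, "num_predict": 512},
)
print(response["response"])
```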
💡 Example Operator Commands