🤖 DART-LLM Multi-Model - Robot Task Planning

Choose from three models fine-tuned with the QLoRA technique and specialized for robot task planning:

  • 🚀 Dart-llm-model-1B: Ready for Jetson Nano deployment (870MB GGUF)
  • ⚖️ Dart-llm-model-3B: Ready for Jetson Xavier NX deployment (1.9GB GGUF)
  • 🎯 Dart-llm-model-8B: Ready for Jetson AGX Xavier/Orin deployment (4.6GB GGUF)

Capabilities: Convert natural language operator commands into structured task sequences for excavators, dump trucks, and other construction robots, with DAG visualization of the resulting task dependencies. Edge-ready for deployment on Jetson devices!
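To make the idea of a structured task sequence concrete, the minimal Python sketch below builds a dependency DAG from a hypothetical decomposed command and prints a valid execution order. The task names, the field layout, and the use of networkx are illustrative assumptions, not the model's actual output schema.

```python
# Minimal sketch: turning a hypothetical decomposed command into a task DAG.
# The task list below is an illustrative assumption, not the model's real output format.
import networkx as nx

tasks = [
    {"task": "move_excavator_to_pile", "dependencies": []},
    {"task": "excavate_soil",          "dependencies": ["move_excavator_to_pile"]},
    {"task": "move_dump_truck",        "dependencies": []},
    {"task": "load_dump_truck",        "dependencies": ["excavate_soil", "move_dump_truck"]},
    {"task": "haul_to_dump_site",      "dependencies": ["load_dump_truck"]},
]

# Build a directed acyclic graph: edge (u -> v) means u must finish before v starts.
dag = nx.DiGraph()
for t in tasks:
    dag.add_node(t["task"])
    for dep in t["dependencies"]:
        dag.add_edge(dep, t["task"])

assert nx.is_directed_acyclic_graph(dag)
print("Execution order:", list(nx.topological_sort(dag)))
```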

🔧 Recommended for Jetson Deployment (GGUF Models)

For optimal edge deployment performance, use these GGUF quantized models:

💡 Deploy with: Ollama, llama.cpp, or llama-cpp-python for efficient edge inference
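As a sketch of the llama-cpp-python route, the snippet below loads a quantized DART-LLM GGUF and asks it for a task sequence; the file name, thread count, prompt wording, and sampling settings are placeholder assumptions, not official values.

```python
# Minimal sketch: running a quantized DART-LLM GGUF with llama-cpp-python.
# The model filename and prompt below are placeholders, not official artifacts.
from llama_cpp import Llama

llm = Llama(
    model_path="dart-llm-model-1B.gguf",  # assumed local path to the downloaded GGUF
    n_ctx=2048,                           # context window; adjust to fit device memory
    n_threads=4,                          # e.g. 4 CPU threads on a Jetson-class device
)

prompt = "Command: Move the excavator to the soil pile and start digging.\nTask sequence:"
result = llm(prompt, max_tokens=512, temperature=0.2)
print(result["choices"][0]["text"])
```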

โš™๏ธ Generation Settings

  • Model Size: select the model for your Jetson device (1B = Nano, 3B = Xavier NX, 8B = AGX)


🔧 GGUF Models for Jetson Deployment

Recommended for edge deployment:

  • 1B (870MB): Jetson Nano/Orin Nano (2GB RAM)
  • 3B (1.9GB): Jetson Orin NX/AGX Orin (4GB RAM)
  • 8B (4.6GB): High-end Jetson AGX Orin (8GB RAM)

💡 Use Ollama or llama.cpp for efficient inference
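For the Ollama route, a GGUF can first be registered with the local Ollama daemon (e.g. via `ollama create` with a Modelfile whose FROM line points at the file) and then queried from Python; the model tag and prompt below are assumptions for illustration.

```python
# Minimal sketch: querying a locally imported DART-LLM model through Ollama's Python client.
# Assumes the GGUF was registered beforehand, e.g.:
#   ollama create dart-llm-1b -f Modelfile   (Modelfile: FROM ./dart-llm-model-1B.gguf)
import ollama

response = ollama.generate(
    model="dart-llm-1b",  # assumed local tag chosen at import time
    prompt="Command: Load the dump truck with soil from zone A.\nTask sequence:",
)
print(response["response"])
```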

💡 Example Operator Commands