Workflows
Let flows do the work.
Workflows Setup
You have several options for how you wish to implement the Agents Docker containers:
WSL + Docker Desktop on Windows - deploy containers in Windows
WSL + Docker - deploy containers in a Linux OS running as a subsystem on Windows
Linux + Docker - deploy containers on Linux
MacOS + Docker - deploy containers on MacOS
To prevent conflicts, the AI stack containers will be deployed on WSL + Docker Desktop on Windows.
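Before going further, it is worth confirming from the WSL (Ubuntu) shell that Docker Desktop's WSL integration is active; the grep below is just a convenience, and the exact wording of the docker info output may vary:
# Client and server versions should both print without errors
docker version
# On WSL + Docker Desktop the reported operating system is typically "Docker Desktop"
docker info | grep -i "operating system"
The stack itself is made up of the following components: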
Local LLM (Large Language Model) serving with both CPU and GPU options
A web interface for interacting with Ollama models
Vector database for semantic search capabilities
Database for storing n8n workflows and data
A workflow automation platform that can connect to various services
Low-code platform for building AI workflows
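Once the stack is running (deployment is covered below), a couple of these components can be sanity-checked from the shell. The ports here assume the upstream defaults, so treat them as assumptions and check docker-compose.yml for the actual mappings:
# Ollama - lists the models that have been pulled (default port 11434)
curl -s http://localhost:11434/api/tags
# n8n workflow editor (default port 5678)
curl -sI http://localhost:5678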
Clone Workshop--LLM git repository
The easiest installation method is to execute the setup.sh script, which automates the process.
Ensure the prerequisite steps have been completed: WSL + Docker
Clone the GitHub repository:
gh repo clone jporeilly/Workshop--LLM
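If you do not have the GitHub CLI (gh) installed, the plain git equivalent (URL inferred from the repository slug) is:
git clone https://github.com/jporeilly/Workshop--LLM.git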
Create a GenAI directory & copy over the GenAI files.
cd
mkdir ~/GenAI
cd ~/GenAI
Copy GenAI STACK files.
cd
cd ~/GenAI
cp -rvpi ~/Workshop--LLM/GenAI/. .
ls -al
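As a quick sanity check, the four files the next section relies on should now be present:
ls -l docker-compose.yml .env.template setup.sh configure-gpus.sh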
Deploy AI Stack
Check you have the required resources.
watch nvidia-smi
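nvidia-smi only proves the host (WSL) can see the GPU. To confirm that containers can also reach it, and that there is enough memory and disk for models and volumes, you can additionally run the following; the CUDA image tag is just an example and any recent nvidia/cuda base image works:
# GPU visible from inside a container?
docker run --rm --gpus all nvidia/cuda:12.3.2-base-ubuntu22.04 nvidia-smi
# Enough RAM and disk space?
free -h
df -h ~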

Deploy containers.
To make life easier, in the GenAI directory you will find four files:
Docker Compose configuration as docker-compose.yml
Environment template as .env.template
Setup script as setup.sh
GPU helper as configure-gpus.sh
The setup script will:
✅ Check for Docker and Docker Compose
✅ Install Ollama if needed
✅ Create all necessary directories
✅ Generate your .env file with secure random keys
✅ Create the external volume
✅ Pull all Docker images
✅ Start Ollama and pull AI models
✅ Start all services
✅ Test connections
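If you prefer to see what is happening, or setup.sh stops partway, the script roughly wraps the commands below. The volume and model names here are assumptions; check setup.sh for the actual values:
cp .env.template .env              # setup.sh also fills in random keys and passwords
docker volume create genai_data    # external volume - the name is an assumption
docker compose pull                # pull every image referenced in docker-compose.yml
docker compose up -d               # start the stack in the background
ollama pull llama3                 # example model - setup.sh may pull different ones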
1. Run the Automated Setup
cd ~/GenAI
# Make scripts executable
chmod +x setup.sh configure-gpus.sh
# Run the main setup (this does everything automatically)
./setup.sh

2. Configure Your GPUs (Optional)
If you have multiple GPUs:
# Run the GPU configuration helper
./configure-gpus.sh
This will show your GPUs and let you choose an allocation strategy.
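If you want to see the GPU indices before picking a strategy, list them first:
# Lists each GPU with its index and UUID - these indices are what an allocation
# strategy (for example CUDA_VISIBLE_DEVICES) typically refers to
nvidia-smi -L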
Check in Docker Desktop that the containers are up and running.
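The same check from the command line:
cd ~/GenAI
docker compose ps    # every service should show a "running" (or "healthy") status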
