Deploying vLLM with Audio and LLM Inference on ROSA with GPUs
This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.
Red Hat OpenShift Service on AWS (ROSA) provides a managed OpenShift environment that can leverage AWS GPU instances. This guide will walk you through deploying vLLM for both audio transcription (Whisper) and large language model inference on ROSA using GPU instances, along with a web application to interact with both services.
Use case
Automatically transcribe audio conversations (meetings, customer calls) and analyze the content with an LLM to extract insights, decisions, and action items.
Maintain confidentiality of sensitive data by avoiding external SaaS services, while still benefiting from advanced AI capabilities (transcription plus intelligent analysis) in a production-ready, supported solution.
Prerequisites
- A Red Hat OpenShift Service on AWS (ROSA Classic or HCP) cluster, version 4.18 or later
- The oc CLI, with admin access to the cluster
- The ROSA CLI
- The AWS CLI (used below to check instance type availability)
Set up GPU-enabled Machine Pool
First, check that the instance type used here (g6.xlarge) is available in the same region as your cluster.
The following command lists the availability of the g6.xlarge instance type across all eu-* regions:
for region in $(aws ec2 describe-regions --query 'Regions[?starts_with(RegionName, `eu`)].RegionName' --output text); do
echo "Region: $region"
aws ec2 describe-instance-type-offerings --location-type availability-zone \
--filters Name=instance-type,Values=g6.xlarge --region $region \
--query 'InstanceTypeOfferings[].Location' --output table
echo ""
done
Once you know a region and availability zone where the instance type is offered, use the following command to create a machine pool with GPU-enabled instances. In this example the cluster runs in eu-west-3 (availability zone eu-west-3b):
# Replace $mycluster with the name of your ROSA cluster
export CLUSTER_NAME=$mycluster
rosa create machine-pool -c $CLUSTER_NAME --name gpu --replicas=2 --instance-type g6.xlarge
This command creates a machine pool named “gpu” with two replicas using the g6.xlarge instance type, which provides an NVIDIA L4 GPU suitable for inference workloads.
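You can verify that the machine pool was created and that the GPU nodes have joined the cluster. A quick check (the nodes can take a few minutes to appear after the instances boot):
rosa list machine-pools -c $CLUSTER_NAME
# Once ready, the new nodes show up with the g6.xlarge instance type label
oc get nodes -l node.kubernetes.io/instance-type=g6.xlarge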
Deploy Required Operators
We’ll use kustomize to deploy the necessary operators, using the gitops-catalog repository provided by the Red Hat CoP (Community of Practice): https://github.com/redhat-cop/gitops-catalog
Node Feature Discovery (NFD) Operator:
oc apply -k https://github.com/redhat-cop/gitops-catalog/nfd/operator/overlays/stable
The NFD Operator detects hardware features and configuration in your cluster.
GPU Operator:
oc apply -k https://github.com/redhat-cop/gitops-catalog/gpu-operator-certified/operator/overlays/stable
The GPU Operator manages the NVIDIA GPU drivers in your cluster.
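Before creating the operator instances, you can confirm that both operators finished installing. A minimal check, assuming the default target namespaces used by these overlays (openshift-nfd and nvidia-gpu-operator):
# Both CSVs should report a Succeeded phase
oc get csv -n openshift-nfd
oc get csv -n nvidia-gpu-operator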
Create Operator Instances
After the operators are installed, wait about 20 seconds, then use the following commands to create their instances:
NFD Instance:
oc apply -k https://github.com/redhat-cop/gitops-catalog/nfd/instance/overlays/only-nvidia
This creates an NFD instance for the cluster.
GPU Operator Instance:
oc apply -k https://github.com/redhat-cop/gitops-catalog/gpu-operator-certified/instance/overlays/aws
This creates a GPU Operator instance configured for AWS.
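Once the NFD instance is running, it labels nodes that expose NVIDIA PCI devices (vendor ID 10de). As a quick sanity check, the GPU nodes should carry the corresponding NFD label:
# The g6.xlarge nodes should be listed here once NFD has scanned them
oc get nodes -l feature.node.kubernetes.io/pci-10de.present=true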
Deploy vLLM for Audio Inference (Whisper)
Next, we’ll deploy a vLLM instance for audio transcription using the Whisper model.
Create a new project:
oc new-project inference
Deploy the Whisper vLLM instance:
oc new-app registry.redhat.io/rhaiis/vllm-cuda-rhel9:3 --name rh-inf-whisper -l app=rh-inf-whisper \
  -e HF_HUB_OFFLINE=0 \
  -e VLLM_MAX_AUDIO_CLIP_FILESIZE_MB=500
Configure the deployment strategy to use Recreate instead of rolling updates:
oc patch deployment rh-inf-whisper --type=json -p='[
  {"op": "replace", "path": "/spec/strategy/type", "value": "Recreate"},
  {"op": "remove", "path": "/spec/strategy/rollingUpdate"}
]'
Add persistent storage for model caching:
oc set volume deployment/rh-inf-whisper --add --type=pvc --claim-size=100Gi --mount-path=/opt/app-root/src/.cache --name=llm-cache
Allocate GPU resources:
oc set resources deployment/rh-inf-whisper --limits=nvidia.com/gpu=1
Configure vLLM to serve the Whisper model:
oc patch deployment rh-inf-whisper --type='json' -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/command", "value": ["vllm"]},
  {"op": "replace", "path": "/spec/template/spec/containers/0/args", "value": ["serve", "RedHatAI/whisper-large-v3-turbo-quantized.w4a16"]}
]'
Create a service to expose the Whisper inference endpoint:
oc create service clusterip rh-inf-whisper --tcp=8000:8000
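The first rollout can take several minutes because vLLM downloads the Whisper model into the cache volume on startup. Once the rollout has finished, a simple check is to query the OpenAI-compatible /v1/models endpoint from inside the pod:
# Wait for the deployment to finish rolling out (the model download takes a while on first start)
oc rollout status deployment/rh-inf-whisper
# The served model list should include RedHatAI/whisper-large-v3-turbo-quantized.w4a16
oc exec deployment/rh-inf-whisper -- curl -s localhost:8000/v1/models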
Deploy vLLM for LLM Inference
Now we’ll deploy a second vLLM instance for language model inference.
Deploy the LLM vLLM instance:
oc new-app registry.redhat.io/rhaiis/vllm-cuda-rhel9:3 --name rh-inf-llm -l app=rh-inf-llm \
  -e HF_HUB_OFFLINE=0 \
  -e VLLM_MAX_AUDIO_CLIP_FILESIZE_MB=500
Configure the deployment strategy:
oc patch deployment rh-inf-llm --type=json -p='[
  {"op": "replace", "path": "/spec/strategy/type", "value": "Recreate"},
  {"op": "remove", "path": "/spec/strategy/rollingUpdate"}
]'
Add persistent storage for model caching:
oc set volume deployment/rh-inf-llm --add --type=pvc --claim-size=100Gi --mount-path=/opt/app-root/src/.cache --name=llm-cache
Allocate GPU resources:
oc set resources deployment/rh-inf-llm --limits=nvidia.com/gpu=1
Configure vLLM to serve the language model:
oc patch deployment rh-inf-llm --type='json' -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/command", "value": ["vllm"]},
  {"op": "replace", "path": "/spec/template/spec/containers/0/args", "value": ["serve", "RedHatAI/gpt-oss-20b"]}
]'
Create a service to expose the LLM inference endpoint:
oc create service clusterip rh-inf-llm --tcp=8000:8000
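As with the Whisper deployment, the first start downloads the model. Once the pod is ready, you can send a small test request to the OpenAI-compatible chat completions endpoint. This is a minimal sketch; the prompt is arbitrary:
# Wait for the model to finish loading
oc rollout status deployment/rh-inf-llm
# Send a short prompt to confirm the model answers
oc exec deployment/rh-inf-llm -- curl -s localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "RedHatAI/gpt-oss-20b", "messages": [{"role": "user", "content": "Say hello in one sentence."}], "max_tokens": 50}'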
Deploy the Transcription Web Application
Finally, we’ll deploy a web application that integrates both the audio transcription and LLM inference services.
Note: This transcription web application was entirely created using Cursor IDE with a single prompt. The complete prompt used to generate the application can be found in the PROMPT.md file of the repository. This demonstrates how modern AI-assisted development tools can rapidly create functional applications from a well-structured prompt.
Deploy the application from the Git repository:
oc new-app https://github.com/rh-mobb/transcription-webapp.git --strategy=docker \
  -e AUDIO_INFERENCE_URL=http://rh-inf-whisper:8000 \
  -e AUDIO_MODEL_NAME=RedHatAI/whisper-large-v3-turbo-quantized.w4a16 \
  -e LLM_INFERENCE_URL=http://rh-inf-llm:8000 \
  -e LLM_MODEL_NAME=RedHatAI/gpt-oss-20b
Create a secure route to access the application:
oc create route edge --service=transcription-webapp
Configure the route timeout for longer processing times:
oc annotate route transcription-webapp haproxy.router.openshift.io/timeout=180s
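Once the application build has completed, you can optionally confirm that the route responds before opening it in a browser. A quick check, assuming the default edge TLS configuration:
# Fetch the route hostname and request the application over HTTPS
curl -skI https://$(oc get route transcription-webapp -o jsonpath='{.spec.host}')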
Verify Deployment
Use the following command to ensure that all pods in the nvidia-gpu-operator namespace are either Running or Completed:
oc get pods -n nvidia-gpu-operator
All pods in the inference namespace should be running:
oc get pods -n inference
Check logs of the Whisper inference service to verify GPU detection:
oc logs -l app=rh-inf-whisper
Check logs of the LLM inference service:
oc logs -l app=rh-inf-llm
Verify that both vLLM instances can receive requests and have started correctly:
oc exec deployment/rh-inf-whisper -- curl -XPOST localhost:8000/ping -s -I
oc exec deployment/rh-inf-llm -- curl -XPOST localhost:8000/ping -s -I
You should receive HTTP 200 responses from both endpoints, indicating the services are ready to accept inference requests.
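If you want to exercise the Whisper service directly (bypassing the web application), recent vLLM releases expose an OpenAI-compatible /v1/audio/transcriptions endpoint for Whisper-style models. A minimal sketch, assuming you have a local audio file named sample.wav:
# Forward the Whisper service port to your workstation
oc port-forward deployment/rh-inf-whisper 8000:8000 &
# Submit the audio file for transcription
curl -s http://localhost:8000/v1/audio/transcriptions \
  -F file=@sample.wav \
  -F model=RedHatAI/whisper-large-v3-turbo-quantized.w4a16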
Accessing the Web Application
After deploying the transcription web application, follow these steps to access it:
Get the route URL:
oc get route transcription-webapp
Open the URL in your web browser. You should see the transcription application interface.
Testing Your Setup:
- Upload an audio file to test the Whisper transcription service.
- The transcribed text can be processed further using the LLM service.
- Verify that both services are responding correctly.
Architecture Overview
This deployment creates a complete inference pipeline:
- Whisper Service: Handles audio transcription using the Whisper large v3 turbo quantized model
- LLM Service: Provides text generation and processing capabilities using the GPT-OSS 20B model
- Web Application: Provides a user-friendly interface to interact with both services
Each vLLM instance runs in its own pod with dedicated GPU resources, ensuring optimal performance and isolation.
Cost Optimization
For development or non-production environments, you can scale down the GPU machine pool to 0 when not in use:
rosa edit machine-pool -c $CLUSTER_NAME gpu --replicas=0
This helps optimize costs while maintaining the ability to quickly scale up when needed.
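When you need the GPUs again, scale the machine pool back up; the new nodes typically take a few minutes to join the cluster:
rosa edit machine-pool -c $CLUSTER_NAME gpu --replicas=2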
Uninstalling
Delete the inference namespace:
oc delete project inference
Delete operator instances:
oc delete -k https://github.com/redhat-cop/gitops-catalog/nfd/instance/overlays/only-nvidia
oc delete -k https://github.com/redhat-cop/gitops-catalog/gpu-operator-certified/instance/overlays/aws
Delete operators:
oc delete -k https://github.com/redhat-cop/gitops-catalog/nfd/operator/overlays/stable
oc delete -k https://github.com/redhat-cop/gitops-catalog/gpu-operator-certified/operator/overlays/stable
Delete the GPU machine pool:
rosa delete machine-pool -c $CLUSTER_NAME gpu
Conclusion
- Production-Ready Solution: By using exclusively Red Hat certified container images (registry.redhat.io), this deployment benefits from the complete Red Hat lifecycle management, including security patches, updates, and enterprise support. This allows organizations to rapidly achieve a fully production-ready AI inference platform.
- Enhanced Productivity with Confidentiality: This solution enables organizations to significantly boost employee productivity with AI capabilities while maintaining complete data confidentiality. By deploying models on-premises or in your own cloud infrastructure, you avoid the risks of shadow IT and maintain full control over sensitive data, ensuring it never leaves your security perimeter.