Get started
This guide explains how to integrate the ojin/oris-1.0 persona model into your applications using either Pipecat or WebSockets.
Prerequisites
An Ojin account with an active API key. If you don't have one, get your API key
Create a Persona or use a Persona Template
Save the Persona Configuration ID from the dashboard
Integrate with your application using either Pipecat or WebSockets
Pipecat Integration
Pipecat is a powerful open-source framework for building conversational AI pipelines. The ojin/oris-1.0 model integrates seamlessly with Pipecat through our dedicated OjinVideoService.
Option 1: Clone the pipecat repository and check out the ready-to-use ojin-chatbot example
To start using it, create a Python virtual environment in the example directory and install the requirements:
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
Create a .env file and add your Ojin API key and persona ID:
OJIN_API_KEY="your_api_key_here"
OJIN_CONFIG_ID="your_persona_id_here"
Then run mock_bot.py to check that your Ojin setup is correct and see a generation produced from a WAV file:
python mock/mock_bot.py
Alternatively, you can configure all required environment variables for the services used in this example (such as Hume) by referring to env.example. Once configured, you can interact with a conversational, human-like bot using your local audio input/output.
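Once the .env is in place, a quick sanity check at startup can catch missing values before the pipeline fails mid-run. A minimal illustrative sketch (the helper below is hypothetical, not part of the Ojin SDK; it assumes the variables have already been loaded into the process environment, e.g. via python-dotenv):

```python
import os

# Illustrative helper (not part of the Ojin SDK): fail fast if the
# variables from .env are missing from the process environment.
def load_ojin_credentials() -> tuple[str, str]:
    api_key = os.environ.get("OJIN_API_KEY")
    config_id = os.environ.get("OJIN_CONFIG_ID")
    missing = [name for name, value in
               [("OJIN_API_KEY", api_key), ("OJIN_CONFIG_ID", config_id)]
               if not value]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return api_key, config_id
```

With the variables present, launch the bot: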
python bot.py
How It Works
The microphone listens for speech input
Voice Activity Detection identifies speech segments
User audio is sent to Hume, which returns an LLM response via its Speech-to-Speech (STS) service.
The OjinVideoService animates your persona based on the STS audio.
Video frames are received and displayed in real-time together with the audio.
WebSocket Integration
For WebSocket integration, check our API Reference →
Next Steps
API Reference
Dive deeper into the model API for custom integrations