Get started

This guide explains how to integrate the ojin/oris-1.0 persona model into your applications using either Pipecat or WebSockets.

Prerequisites

  1. An Ojin account with an active API key; if you don't have one, get your API key first

  2. Save the Persona Configuration ID from the dashboard

  3. Integrate with your application using either Pipecat or WebSockets

Pipecat Integration

Pipecat is a powerful open-source framework for building conversational AI pipelines. The ojin/oris-1.0 model integrates seamlessly with Pipecat through our dedicated OjinVideoService.

Option 1: Clone the Pipecat repository and check out the ready-to-use ojin-chatbot example

To start using it, create a Python virtual environment in the example directory and install the requirements:

python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

Create a .env file and add your Ojin API key and persona configuration ID:

OJIN_API_KEY="your_api_key_here"
OJIN_CONFIG_ID="your_persona_id_here"
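The example project loads these values for you, but as an illustration of what happens under the hood, here is a stdlib-only sketch that parses a simple .env file and exports its keys (the real example typically relies on a package such as python-dotenv instead):

```python
import os
from pathlib import Path

def load_env_file(path: str = ".env") -> None:
    """Parse simple KEY="value" lines and export them into os.environ.

    Stdlib-only sketch; the example project loads .env via a library.
    """
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        # Skip blank lines, comments, and anything without a KEY=value shape.
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ[key.strip()] = value.strip().strip('"')

# Usage:
# load_env_file()
# api_key = os.environ["OJIN_API_KEY"]
# config_id = os.environ["OJIN_CONFIG_ID"]
```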

Then run mock_bot.py to check that your Ojin setup is correct and see a generation produced from a WAV file:

python mock/mock_bot.py

Alternatively, you can configure all required environment variables for the services used in this example (such as Hume) by referring to env.example. Once configured, you can interact with a conversational, human-like bot using your local audio input/output:

python bot.py
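Before launching the full bot, it can help to confirm that every required variable is set. A small sketch of such a check; the OJIN_* names come from this guide, while HUME_API_KEY is an assumption based on the Hume service mentioned above, so consult env.example for the authoritative list:

```python
import os

# OJIN_* names are from this guide; HUME_API_KEY is an assumed name --
# check env.example for the real, complete list of required variables.
REQUIRED_VARS = ["OJIN_API_KEY", "OJIN_CONFIG_ID", "HUME_API_KEY"]

def missing_vars(required=REQUIRED_VARS):
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

if __name__ == "__main__":
    missing = missing_vars()
    if missing:
        raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
    print("Environment looks good; run: python bot.py")
```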

How It Works

  1. The microphone listens for speech input

  2. Voice Activity Detection identifies speech segments

  3. User audio is sent to Hume's Speech-to-Speech service, which returns an LLM response as audio.

  4. The OjinVideoService animates your persona based on the STS audio.

  5. Video frames are received and displayed in real-time together with the audio.

You can customize the pipeline by adding or removing components, or by adjusting their parameters to suit your needs.
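The five steps above can be sketched as a chain of frame processors. This is a plain-Python illustration of the data flow only; the class and method names here are illustrative and are not the actual Pipecat or OjinVideoService API:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    kind: str       # e.g. "audio" or "video"
    payload: dict

class Processor:
    """One pipeline stage: receives a frame, emits zero or more frames."""
    def process(self, frame):
        raise NotImplementedError

class VAD(Processor):
    # Step 2: pass through only frames that contain detected speech.
    def process(self, frame):
        if frame.kind == "audio" and frame.payload.get("speech"):
            return [frame]
        return []

class HumeSTS(Processor):
    # Step 3: turn user speech into a spoken LLM response (mocked here).
    def process(self, frame):
        return [Frame("audio", {"speech": True, "source": "sts"})]

class OjinVideo(Processor):
    # Step 4: emit persona video frames synced to the STS audio (mocked here).
    def process(self, frame):
        return [frame, Frame("video", {"synced_to": "sts"})]

def run_pipeline(stages, frame):
    """Push one input frame through every stage in order."""
    frames = [frame]
    for stage in stages:
        frames = [out for f in frames for out in stage.process(f)]
    return frames

# Step 1 provides the microphone frame; step 5 would render the output frames.
mic_frame = Frame("audio", {"speech": True, "source": "mic"})
out = run_pipeline([VAD(), HumeSTS(), OjinVideo()], mic_frame)
```

Customizing the real pipeline is analogous: insert, remove, or reconfigure stages in the chain without touching the others.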

Next Steps

API Reference

Dive deeper into the model API for custom integrations.

API Reference →
