# Get started

This guide explains how to integrate the `ojin/flashhead-lite-1.0` persona model into your applications using either Pipecat or WebSockets.

## Prerequisites

1. An Ojin account with an active API key; if you don't have one, [get your API key](https://docs.ojin.ai/getting-started/authentication)
2. [Create a Persona](https://docs.ojin.ai/models/flashhead-lite/creating-persona) or use a [Persona Template](https://docs.ojin.ai/models/flashhead-lite/using-persona-template)
3. Save the Persona Configuration ID from the dashboard
4. Integrate with your application using either [Pipecat](#pipecat-integration) or [WebSockets](#websocket-integration)

{% hint style="info" %}
**Production deployments:** For secure, low-latency video applications, connect to the real-time WebSocket API from a backend server rather than a front-end client. This keeps your API key off the client and lets you use a network transport suited to real-time video delivery under varying network conditions. Typically, WebRTC is used to deliver the final media stream to end users for smooth, reliable, low-latency playback.
{% endhint %}
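The backend-first pattern above can be sketched as follows. The endpoint URL and authorization header shown here are hypothetical placeholders for illustration, not the documented API; consult the API reference for the real endpoint and authentication scheme.

```python
def build_connection_params(api_key: str, config_id: str) -> dict:
    """Assemble WebSocket connection parameters on the server side,
    so the API key never reaches the browser."""
    if not api_key or not config_id:
        raise ValueError("API key and persona config ID are required")
    return {
        # Placeholder endpoint; see the API reference for the real URL.
        "url": "wss://example.invalid/realtime",
        # Assumed auth scheme, for illustration only.
        "headers": {"Authorization": f"Bearer {api_key}"},
        "config_id": config_id,
    }

params = build_connection_params("your_api_key_here", "your_persona_id_here")
print(params["config_id"])
```

A front-end client would then receive only the WebRTC media stream, never these credentials.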

{% tabs %}
{% tab title="Pipecat" %}

#### Pipecat Integration

[Pipecat](https://github.com/pipecat-ai/pipecat) is a powerful open-source framework for building conversational AI pipelines. The `ojin/flashhead-lite-1.0` model integrates seamlessly with Pipecat through our dedicated `OjinVideoService`.

**Option 1: Clone the pipecat repository and check out the ready-to-use** [**ojin-chatbot example**](https://github.com/journee-live/pipecat-ojin/tree/main/examples/ojin-chatbot)

To get started, create a Python virtual environment in the example directory and install the requirements:

```bash
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

Create a `.env` file and add your Ojin API key and Persona Configuration ID:

```bash
OJIN_API_KEY="your_api_key_here"
OJIN_CONFIG_ID="your_persona_id_here"
```
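Before launching the bot, you can sanity-check that both variables are actually set. This is a small stdlib-only sketch, not part of the example repository:

```python
import os

# The two variables the ojin-chatbot example reads from .env
REQUIRED_VARS = ("OJIN_API_KEY", "OJIN_CONFIG_ID")

def check_env() -> list:
    """Return the names of any required Ojin variables that are missing."""
    return [name for name in REQUIRED_VARS if not os.environ.get(name)]

missing = check_env()
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
else:
    print("Ojin environment looks good")
```

Note that a plain Python process does not read `.env` automatically; the Pipecat examples load it for you, so this check assumes the variables are already in the environment.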

Then run [mock\_bot.py](https://github.com/journee-live/pipecat-ojin/blob/main/examples/ojin-chatbot/mock/mock_bot.py) to verify that your Ojin setup is correct and see a generation produced from a WAV file:

```bash
python mock/mock_bot.py
```

Alternatively, you can configure all required environment variables for the services used in this example (such as Hume) by referring to `env.example`. Once configured, you can interact with a conversational, human-like bot using your local audio input and output:

```bash
python bot.py
```

**How It Works**

1. The microphone listens for speech input.
2. Voice Activity Detection (VAD) identifies speech segments.
3. User audio is sent to Hume, whose Speech-to-Speech (STS) service returns an LLM response as audio.
4. The `OjinVideoService` animates your persona based on the STS audio.
5. Video frames are received and displayed in real time, in sync with the audio.
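The five steps above can be sketched as a plain-Python data flow. The function names here are illustrative stand-ins for the actual Pipecat services, not the real API:

```python
def detect_speech(audio_chunks):
    """Stand-in for VAD (steps 1-2): keep only chunks that contain speech."""
    return [c for c in audio_chunks if c["has_speech"]]

def speech_to_speech(speech_chunks):
    """Stand-in for the Hume STS call (step 3): return response audio."""
    return {"audio": b"response-audio", "chunks_used": len(speech_chunks)}

def animate_persona(sts_audio):
    """Stand-in for OjinVideoService (step 4): produce frames from audio."""
    return [f"frame-{i}" for i in range(3)]

# Simulated microphone input
mic = [{"has_speech": True}, {"has_speech": False}, {"has_speech": True}]
speech = detect_speech(mic)
sts = speech_to_speech(speech)
frames = animate_persona(sts["audio"])
print(f"{len(frames)} frames ready for playback")  # step 5
```

In the real pipeline, each stage is a Pipecat service that streams frames to the next stage rather than passing lists synchronously.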

{% hint style="info" %}
You can customize the pipeline by adding or removing components, or by adjusting their parameters to suit your needs.
{% endhint %}
{% endtab %}

{% tab title="WebSocket" %}

#### WebSocket Integration

For WebSocket integration, see our [API Reference →](https://docs.ojin.ai/models/flashhead-lite/api)
{% endtab %}
{% endtabs %}

## Next Steps

#### API Reference

Dive deeper into the model API for custom integrations.

[API Reference →](https://docs.ojin.ai/models/flashhead-lite/api)
