
AIDI Enterprise

Have you validated a business hypothesis, or do you need greater flexibility, transparency, and cost savings?

Let's deploy, customize, and, if needed, retrofit your own LLM into the AIDI ecosystem.

Contact sales

Open-source LLMs in the AIDI ecosystem

Select from our range of popular and effective models, or request a custom solution tailored to your business needs


AIDI Enterprise features:

AIDI web application for recording and analyzing offline communications

Unlimited number of conversations to analyze each month

Multichannel conversation analysis (messengers, e-mail, Zoom, Teams)

Any kind of dashboard

Any kind of integration with your software

Benefits & Requirements of open-source LLMs

While open-source options offer many advantages, they also require more technical expertise and resources for implementation and maintenance.

Benefits

Enhanced data security and privacy

With open-source LLMs, organizations can deploy the model on their own infrastructure and thus retain more control over their data.

Cost savings

Open-source LLMs eliminate licensing fees, making them a cost-effective solution for enterprises and startups with tight budgets.

Code transparency

Open-source LLMs offer transparency into their underlying code, allowing organizations to inspect and validate the model’s functionality.

Reduced vendor dependency

Businesses can reduce reliance on a single vendor, promoting flexibility and preventing vendor lock-in.

Language model customization

Tailoring the model to specific industry or domain needs is more manageable with open-source LLMs. Organizations can fine-tune the model to suit their unique requirements.

Active community support

Open-source projects often have thriving communities of developers and experts. It means quicker issue resolution, access to helpful resources, and a collaborative environment for problem-solving.

Fosters innovation

Open-source LLMs encourage innovation by enabling organizations to experiment and build upon existing models. Startups, in particular, can leverage these models as a foundation for creative and unique applications.

Requirements
GPTQ (GPU inference): 60 GB RAM, 40 GB VRAM

Combination of GPTQ and GGML / GGUF (offloading): 20 GB RAM, 20 GB VRAM

GGML / GGUF (CPU inference): 40 GB RAM, 600 MB VRAM
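As a rule of thumb, the figures above track the size of the quantized weights: a model's memory footprint is roughly its parameter count times bits per weight, plus overhead for activations and the KV cache. A minimal sketch of that estimate (the 20% overhead factor here is an illustrative assumption, not a measured value):

```python
def model_memory_gb(n_params_billion: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Rough memory estimate: quantized weights plus ~20% overhead
    for activations and KV cache (a simplification)."""
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 70B model quantized to 4 bits per weight:
print(round(model_memory_gb(70, 4), 1))  # roughly 42 GB
```

By this estimate, a 70B model at 4-bit quantization needs on the order of 40 GB, consistent with the VRAM guidance above; actual usage varies with context length, quantization scheme, and runtime.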

For handling large models like 65B and 70B, high-end GPUs with a minimum of 40GB VRAM (A100 40GB, dual RTX 3090s/4090s, A40, RTX A6000, or 8000) and 64GB system RAM are essential. For GGML/GGUF CPU inference, ensure at least 40GB of RAM is available.

Trust AI to evaluate calls and recordings now

Contact us and we will schedule a call and demonstrate the service using your audio recordings

Contact sales