This sample project demonstrates how to integrate Symbl.ai with Snowflake using Python and build a dashboard with Streamlit. It uses Symbl's APIs to extract insights from sales calls and meetings, stores them in a Snowflake database alongside sample CRM data, and provides a Streamlit dashboard for analysis and a co-pilot for sales analysis. By merging unstructured call data with existing structured business data, this integration helps you uncover deeper insights, enabling more informed decision-making across customer service, sales, recruitment, and operations.
- Symbl.ai Account: Sign up for an account at Symbl.ai and retrieve your App ID and App Secret. Request access to the Nebula API and get your Nebula API key.
- Snowflake Account: Sign up for a Snowflake account and gather the necessary credentials (account ID, username, and password).
- Python 3.8 or higher: Ensure Python is installed on your local machine.
The project has three main artifacts:
- setup_snowflake_env.py: sets up the Snowflake database and tables required for the project.
- main.py: processes the audio files, extracts insights using Symbl's APIs, and stores them in the Snowflake database.
- streamlit: a directory containing the Streamlit dashboard for the insights stored in Snowflake.
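As a rough illustration of what the processing step in `main.py` amounts to, here is a minimal sketch. The function name `insights_to_rows` and the `CALL_INSIGHTS` table are illustrative assumptions, not the project's actual API:

```python
# Sketch: flatten Symbl-style insight dicts into rows suitable for a
# Snowflake INSERT. Names here (insights_to_rows, CALL_INSIGHTS) are
# hypothetical -- see main.py for the project's real implementation.

def insights_to_rows(conversation_id, insights):
    """Turn a list of insight dicts into (id, type, text) tuples."""
    return [
        (conversation_id, item.get("type"), item.get("text"))
        for item in insights
    ]

rows = insights_to_rows("conv-123", [
    {"type": "action_item", "text": "Send the proposal by Friday."},
    {"type": "question", "text": "What is the renewal date?"},
])
# Rows like these could then be written with snowflake.connector, e.g.:
# cur.executemany("INSERT INTO CALL_INSIGHTS VALUES (%s, %s, %s)", rows)
```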
Clone the repository and navigate to the project directory.
```shell
git clone https://github.com/symblai/snowflake-symbl-integration-python.git
cd snowflake-symbl-integration-python
```
Create a Conda environment and activate it (install Miniconda first if you don't have it).
```shell
conda create -n snowflake-symbl python=3.11
conda activate snowflake-symbl
```
Install the required dependencies:

```shell
pip install -r requirements.txt
```
Copy the `secrets.toml.default` file to a new `secrets.toml` file.

```shell
cp secrets.toml.default secrets.toml
```
Update the `secrets.toml` file with your Symbl and Snowflake credentials.
Once all values are updated, run the setup script to create the necessary tables in Snowflake.
```shell
python setup_snowflake_env.py
```
This will also copy your `secrets.toml` file to the `streamlit/.streamlit` directory so that the Streamlit app can access the required credentials.
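The copy step the setup script performs amounts to something like the following sketch (the helper name `copy_secrets` is an assumption, not the script's actual function):

```python
import shutil
from pathlib import Path

def copy_secrets(src="secrets.toml", dest_dir="streamlit/.streamlit"):
    """Copy the secrets file into Streamlit's config directory.

    Streamlit reads credentials from .streamlit/secrets.toml relative
    to the app, which is why the file needs to be duplicated there.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)  # create .streamlit if missing
    return shutil.copy(src, dest / Path(src).name)
```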
You are now ready to run the `main.py` script to process the files and store the results in Snowflake:

```shell
python main.py
```
By default, `main.py` uses the sample transcripts and CRM data provided in the data directory. You can provide your own data by updating the `main.py` script; to use your own audio files instead, call the `submit_audio_file` function in place of the `submit_transcript` function.
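As a sketch of the difference between the two submission paths: Symbl's Async APIs accept either a structured text payload or an audio file/URL. The helper below only shapes a text payload and makes no network call; the payload shape is a sketch of what the Async Text API expects, and the speaker name is an assumed default:

```python
# Sketch of shaping transcript lines for Symbl's Async Text API.
# No network call is made here; submitting would be an authenticated
# POST of this body, while the audio variant instead uploads the file
# (or a URL) to the Async Audio endpoint and polls the returned job.

def transcript_payload(lines, speaker="agent"):
    """Build a messages body from raw transcript lines (illustrative)."""
    return {
        "messages": [
            {"payload": {"content": line}, "from": {"name": speaker}}
            for line in lines
        ]
    }

payload = transcript_payload(["Hello, thanks for calling."])
```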
Once the script has finished running, you can start the Streamlit application to run the dashboard.
```shell
cd streamlit
streamlit run app.py
```
You can now access the Streamlit dashboard by navigating to http://localhost:8501 in your browser.
If you have a paid (non-trial) Snowflake account, you can optionally install the Streamlit app into Snowflake and use Snowpark, so that it can access Symbl's Nebula API from within your Snowflake instance. Skip this step if you are using a trial account.
```shell
python setup_snowflake_env.py --install_streamlit_app
```