September 10, 2024

Introduction

In today’s busy work environments, quick access to information is essential. With so many documents in circulation, finding what you need can be hard. A PDF Q&A bot can help, especially when integrated with Slack, a popular tool for team communication. The bot lets you ask questions about PDF documents and get instant answers, making it easier to find the information you need.

Langchain is a powerful tool for building applications that understand natural language. Using advanced models, we can achieve sophisticated natural language processing tasks such as text generation, question answering, and language translation, enabling the development of highly interactive and intelligent applications.

ChatGPT is great at understanding and generating human-like text, making it useful for tasks like summarizing text, translating languages, and analyzing sentiment. For our PDF Q&A bot, ChatGPT will answer questions by finding relevant information in PDF documents. 

By integrating this bot with Slack, users can easily ask questions about their documents in a Slack channel and get instant answers, streamlining their workflow.

In this blog, we will walk through how to build a Slack bot that answers questions from PDF files using ChatGPT.

Step 1: Create a Slack Bot

The first step is to set up a bot in Slack. For that, please follow steps 1 to 23 in our blog post Slack Bot with Python.

Before moving forward, please make sure that you follow the blog’s instructions.

Step 2: Set Up to Receive Messages From Slack

After setting up the Slack bot, we have to modify the script to receive messages from the Slack server.

Make sure that you have installed all the required libraries:

				
pip install slackclient
pip install flask
pip install slackeventsapi
pip install requests

First of all, make a Python file named ‘app.py’ and add all the libraries required for the code:

				
import json
import time
import requests
import slack
from flask import Flask, request, jsonify
from slackeventsapi import SlackEventAdapter
from slack.errors import SlackApiError

SLACK_TOKEN = "<YOUR SLACK TOKEN>"
SIGNING_SECRET = "<YOUR SIGNING SECRET>"

app = Flask(__name__)
slack_event_adapter = SlackEventAdapter(SIGNING_SECRET, '/slack/events', app)
client = slack.WebClient(token=SLACK_TOKEN)

Replace <YOUR SLACK TOKEN> with your Slack bot token and <YOUR SIGNING SECRET> with your Signing Secret key.
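Hardcoding secrets is fine for a quick test, but a safer option is to load them from environment variables. Here is a minimal sketch, assuming you have exported variables named SLACK_TOKEN and SIGNING_SECRET in your shell (these names are just an example):

import os

# Read credentials from the environment instead of hardcoding them.
# The variable names below are assumptions; use whichever names you export.
SLACK_TOKEN = os.environ["SLACK_TOKEN"]
SIGNING_SECRET = os.environ["SIGNING_SECRET"]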
Now, we need to create a route that receives messages sent by the user and a function that handles those messages.

				
@slack_event_adapter.on('message')
def message(payload):
    # Inspect the incoming HTTP request
    response = request.environ
    print(payload)

    # Only handle the first delivery; ignore Slack's retry requests
    if "HTTP_X_SLACK_RETRY_NUM" not in response:
        text = ''

        # Respond only when the message comes from a user, not from the bot itself
        event, channel_id, user_id, sentence, file_name, bot_profile, file_url, file_id, block_id, file_size, file_type, subtype = slack_messege(payload)
        if bot_profile == None:

            # Respond when the bot joins the channel
            if subtype == 'channel_join':
                text = '''
                In a world where every second counts, ⏳ don't lose precious time sifting through lengthy PDFs.\nWell Hello! 🤖🌟 I'm your friendly Q&A bot, ready to assist you with any questions about your PDF documents. Just upload your PDF to get started, and let's dive into the details together! 📄🔍\nCan't wait to help you out! 🚀✨
                '''
                # Send the bot's response to the user
                client.chat_postMessage(channel=channel_id, text=text)

            # You can add custom replies by checking the text sent by the user
            elif sentence == "hi" or sentence == 'hello':
                text = "Hi there! To get started, please provide the PDF document. Let's begin our conversation!"
                client.chat_postMessage(channel=channel_id, text=text)

if __name__ == "__main__":
    app.run(debug=True)

Note: According to the Slack documentation, you need to respond within 3000 ms (three seconds). If your handler takes longer, you get a “Timeout was reached” error, and the Slack server sends another request for the same event.

To tackle this problem, make sure all of the handler code is written inside the ‘if "HTTP_X_SLACK_RETRY_NUM" not in response’ condition. Here, ‘response = request.environ’ exposes the request metadata; Slack adds the ‘X-Slack-Retry-Num’ header to its retried requests, so this check lets us process only the first delivery.
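If you prefer Flask's header API over the raw WSGI environ, an equivalent check reads the retry header directly. This is a small optional sketch, not part of the original script:

from flask import request

def is_slack_retry():
    # Slack adds X-Slack-Retry-Num to retried event deliveries;
    # the header is absent on the first delivery.
    return request.headers.get("X-Slack-Retry-Num") is not None

Inside the handler you would then guard the work with ‘if not is_slack_retry():’ instead of checking ‘request.environ’.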

Ensure that all of the message-handling conditions are placed inside the ‘bot_profile == None’ block. Otherwise, if an ‘else’ branch replies unconditionally, the bot may respond to its own messages and loop infinitely on a single user request.

Do not run this file until you add the ‘slack_messege(payload)’ function; otherwise, it will raise an error.

To extract the data from the payload, add the ‘slack_messege(payload)’ function:

				
def slack_messege(payload):
    # Extract the data we need from the payload
    file_name = ''
    bot_profile = ''
    file_url = ''
    file_id = ''
    block_id = ''
    text = ''
    file_size = ''
    file_type = ''
    subtype = ''
    event = payload.get('event', {})
    channel_id = event.get('channel')
    user_id = event.get('user')
    text = event.get('text')
    bot_profile = event.get('bot_profile')
    if "subtype" in event:
        subtype = event.get('subtype')
    if 'files' in event:
        files = event.get('files', {})
        file_name = files[0].get('name')
        file_url = files[0].get('url_private_download')
        file_id = files[0].get('id')
        file_size = files[0].get('size')
        file_type = files[0].get('mimetype')
    if 'blocks' in event:
        blocks = event.get('blocks', {})
        block_id = blocks[0].get("block_id")
    return event, channel_id, user_id, text, file_name, bot_profile, file_url, file_id, block_id, file_size, file_type, subtype

When a PDF is sent to the bot, you can check the terminal to see the JSON payload printed by the handler.

We need the file name and download URL of the uploaded file, which we already extract in the ‘slack_messege()’ function.
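For reference, here is an abridged, illustrative sketch of the fields we rely on; the values are placeholders, and a real Slack payload contains many more keys:

# Hypothetical, trimmed-down file-upload event payload
payload = {
    "event": {
        "channel": "C0123456789",    # channel where the message was posted
        "user": "U0123456789",       # user who uploaded the file
        "text": "",                  # any text sent along with the upload
        "files": [{
            "id": "F0123456789",
            "name": "example.pdf",
            "mimetype": "application/pdf",
            "size": 123456,
            "url_private_download": "https://files.slack.com/files-pri/..."
        }]
    }
}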

To detect when the bot receives a PDF file, we need to add an additional condition that checks for the file. This condition should be placed alongside the one that handles the welcome message sent by the user.

				
            elif file_name:
                # Check whether the file given by the user is a PDF
                if file_type == "application/pdf":
                    if file_size > 10000000:
                        text = "Please upload a file that is smaller than 10 MB."
                    else:
                        text = "Your PDF has been successfully saved! 🎉 You can now ask any questions you have. Let's get started! 😊"
                        # Download the PDF
                        download_pdf(file_url, file_name)
                        print("\nuser_id\n", user_id)
                        # Split the PDF into chunks and add them to ChromaDB
                        pdf_added_to_database(file_name, user_id)
                else:
                    text = "Sorry for the inconvenience, but only PDF format is supported for uploading documents. If you have a PDF file, feel free to upload it, and I'll be happy to assist you further."
                client.chat_postMessage(channel=channel_id, text=text)
            else:
                if sentence:
                    # Check whether the user has uploaded a PDF yet
                    document_list = get_all_documents(user_id)
                    if len(document_list) == 0:
                        text = "Could you please upload the PDF first? Then I'll be able to help with your question. 😊📄"
                    else:
                        # Send the user's question to ChatGPT to answer from the PDF
                        text = ask_question(sentence, user_id)
                else:
                    text = "Sorry, it seems that the media you're trying to upload is not supported. Please ensure you're trying to upload a PDF file, as only PDF format is supported for document uploads."
                client.chat_postMessage(channel=channel_id, text=text)

If the user inputs any text other than “hi,” the bot will not respond with a static message, as it is impractical to provide a specific response for every possible input. Instead, when the user asks a question, the input is handled by the else condition, which triggers the ‘ask_question(sentence, user_id)’ function.

The ‘ask_question(sentence, user_id)’ function sends a request to OpenAI to retrieve the answer from the PDF.

The ‘get_all_documents(user_id)’ function checks in the ChromaDB vector database whether the user has uploaded a PDF; if not, the bot asks them to upload the PDF first. Don’t worry, we will add this function in an upcoming step.

We have also used the ‘pdf_added_to_database(file_name, user_id)’ function, which splits the PDF into chunks and stores those chunks in ChromaDB. Don’t worry, we will add this function in an upcoming step as well.

Before chunking the PDF, we need to download it. For that, we use the ‘download_pdf(file_url, file_name)’ function; add it as follows:

				
def download_pdf(file_url, file_name):
    headers = {
        'Authorization': f'Bearer {SLACK_TOKEN}'
    }
    response = requests.get(file_url, headers=headers)

    if response.status_code == 200:
        # Save the PDF to a file
        with open(file_name, 'wb') as file:
            file.write(response.content)
        print('Download complete.')
    else:
        print(f'Failed to download file. Status code: {response.status_code}')

Now, we need to set up Langchain and OpenAI.

Step 3: Setup for Langchain and OpenAI

For the Langchain and OpenAI setup, first make a new Python file named ‘test_pdf_reader.py’ and follow steps 1–7 from our RAG Tutorial using OpenAI and Langchain blog.

Make sure that you have installed all the required libraries:

				
pip install langchain openai chromadb pypdf tiktoken
pip install langchain_openai
pip install langchain_community
pip install pysqlite3

Now, we need to modify the main script to automate the process of uploading the PDF into ChromaDB and performing the Q&A.

First of all, add all the library requirements for the code:

				
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_openai import ChatOpenAI
from langchain_community.document_loaders import PyPDFLoader
from langchain.chains.question_answering import load_qa_chain
import os

openai_key = '<YOUR-OPENAI-KEY>'
model_name = "gpt-4o"

Replace <YOUR-OPENAI-KEY> with your OpenAI key.

We have used the “gpt-4o” model for generating answers (the embeddings come from OpenAI’s default embedding model via OpenAIEmbeddings), but you can also use other OpenAI chat models such as “gpt-3.5-turbo”.
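If you want to pin the models explicitly, a minimal sketch could look like the following; the model names here are only examples, and any chat and embedding models available to your OpenAI account should work:

from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Example alternatives; swap in whichever models you prefer.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", openai_api_key=openai_key, max_tokens=400)
embeddings_model = OpenAIEmbeddings(model="text-embedding-3-small", openai_api_key=openai_key)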

To initialize the embedding model and the LLM, add this function:

				
def get_initialize():
    os.environ['OPENAI_API_KEY'] = openai_key
    embeddings_model = OpenAIEmbeddings()
    llm = ChatOpenAI(model_name=model_name, openai_api_key=openai_key, max_tokens=400)
    return embeddings_model, llm

For Q&A, we need a function that sends the question to ChatGPT and builds the answer from the PDF, so add the ‘ask_question(question, chat_id)’ function that we already used in app.py.

				
def ask_question(question, chat_id):
    embeddings_model, llm = get_initialize()
    vectordb = Chroma(persist_directory=str(chat_id), embedding_function=embeddings_model)
    retriever = vectordb.as_retriever(search_kwargs={"k": 3})
    chain = load_qa_chain(llm, chain_type="stuff")
    context = retriever.get_relevant_documents(question)
    answer = chain({"input_documents": context, "question": f"You are a question-answering bot. The user will upload a document and ask questions about the uploaded document, and you need to provide an answer. If the user says thank you, then thank the user in return and tell them to reach out to us when they want any help with questions. If the answer has an incomplete sentence, then remove the incomplete sentence from the response and try to cover it in the previous sentence. Here is the question: {question}"}, return_only_outputs=True)['output_text']
    return answer

Next, we need to split the PDF data into chunks and add them to ChromaDB. For that, add this function:

				
def pdf_added_to_database(pdf_name, chat_id):
    embeddings_model, llm = get_initialize()
    loader = PyPDFLoader(pdf_name, extract_images=False)
    pages = loader.load_and_split()
    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=4000,
        chunk_overlap=20,
        length_function=len,
        add_start_index=True,
    )
    chunks = text_splitter.split_documents(pages)
    # Workaround for environments where the system sqlite3 is too old for ChromaDB
    __import__('pysqlite3')
    db = Chroma.from_documents(chunks, embedding=embeddings_model, persist_directory=str(chat_id))
    db.persist()
    return "Your PDF is uploaded and saved!"

We first need to check whether ChromaDB is empty; if it is, the bot responds asking the user to upload a PDF first. For that, we need to add this function:

				
def get_all_documents(chat_id):
    documents = []
    documents_set = set()
    db = Chroma(persist_directory=str(chat_id))
    metadata = db.get()["metadatas"]
    for index, data in enumerate(metadata):
        documents_set.add(data["source"])
    print("documentset", documents_set)
    # Build a Slack button block for each uploaded document
    for doc in documents_set:
        documents.append({
            "type": "button",
            "text": {
                "type": "plain_text",
                "text": doc
            },
            "style": "primary",
            "action_id": doc
        })
    return documents

To ensure all functions are available before running the ‘app.py’ file, please add the following line at the beginning of ‘app.py’, which imports all the required functions from the ‘test_pdf_reader.py’ file we just created.

				
from test_pdf_reader import pdf_added_to_database, get_all_documents, ask_question
				
Run the ‘app.py’ file in the terminal: python3 app.py
Run ngrok in another terminal: ngrok http 5000
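Once ngrok is running, make sure your Slack app’s Event Subscriptions Request URL points at the ngrok address followed by the ‘/slack/events’ route registered above, for example https://<your-ngrok-subdomain>.ngrok.io/slack/events (the subdomain is whatever ngrok assigns to your session).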
				
			

Step 4: Test The Slack Bot

For this blog, we utilized a research paper in PDF format to conduct Question and Answer (Q&A) tasks. The title of the research paper is “Google Gemini as a Next Generation AI Educational Tool: A Review of Emerging Educational Technology,” and it is accessible at the link provided below. You can download the PDF from there.

https://slejournal.springeropen.com/articles/10.1186/s40561-024-00310-z

When we added a PDF and asked questions based on its content, we received results like this:

Conclusion:

Creating a PDF Q&A bot for Slack using Langchain and OpenAI can significantly enhance information accessibility and improve team efficiency. This bot will enable users to query PDF documents directly within Slack, providing instant answers and saving valuable time.
