updated chatbot
from newspaper import Article
import random
import string
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import nltk
import numpy as np
import warnings
#ignore any warning messages
warnings.filterwarnings('ignore')
#download the packages from nltk
nltk.download('punkt',quiet=True)
nltk.download('wordnet',quiet=True)
article = Article('https://www.mayoclinic.org/diseases-conditions/chronic-kidney-disease/symptoms-causes/syc-20354521')
article.download()
article.parse()
article.nlp()
corpus = article.text
#print the corpus/text
print(corpus)
#Tokenization
text = corpus
#convert the text into a list of sentences
sent_tokens = nltk.sent_tokenize(text)
#print the list of sentences
print(sent_tokens)
#create a dictionary (key:value) pair to remove punctuation
remove_punct_dict = dict((punct, None) for punct in string.punctuation)
#print the punctuations
print(string.punctuation)
print(remove_punct_dict)
#create a function to return a list of lemmatized lower case words after removing punctuation
lemmer = nltk.stem.WordNetLemmatizer()
def LemNormalize(text):
    return [lemmer.lemmatize(token) for token in nltk.word_tokenize(text.lower().translate(remove_punct_dict))]
#print the tokenized text
print(LemNormalize(text))
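The response step below picks the corpus sentence with the highest TF-IDF cosine similarity to the user's query. A minimal sketch of that idea on a toy two-sentence corpus (the sentences and query are invented for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "Chronic kidney disease involves a gradual loss of kidney function.",
    "Symptoms may include nausea, fatigue and sleep problems.",
]
query = "What is chronic kidney disease?"

# vectorize the corpus plus the query, then compare the query (last row)
# against every corpus sentence
tfidf = TfidfVectorizer().fit_transform(sentences + [query])
sims = cosine_similarity(tfidf[-1], tfidf[:-1]).flatten()
best = sims.argmax()
print(sentences[best])  # the sentence sharing "chronic kidney disease"
```

The query shares the terms "chronic", "kidney" and "disease" with the first sentence and nothing with the second, so the first sentence wins.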
#keyword matching
#greeting inputs
greeting_inputs = ["hi","hello","greetings","wassup","hey"]
#greeting responses back to the user
greeting_response = ["howdy","hi","hey","what's good","hello","hey there"]
#function to return a random greeting response to a user greeting
def greeting(sentence):
    for word in sentence.split():
        if word.lower() in greeting_inputs:
            return random.choice(greeting_response)
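For example, greeting() returns a random canned reply only when the input contains a known greeting word, and None otherwise. A small self-contained usage sketch:

```python
import random

greeting_inputs = ["hi", "hello", "greetings", "wassup", "hey"]
greeting_response = ["howdy", "hi", "hey", "what's good", "hello", "hey there"]

def greeting(sentence):
    # reply with a random greeting if any word in the sentence matches
    for word in sentence.split():
        if word.lower() in greeting_inputs:
            return random.choice(greeting_response)

print(greeting("Hello doctor"))           # one of the canned greetings
print(greeting("tell me about kidneys"))  # None
```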
#generate the response: find the corpus sentence most similar to the user's query
def response(user_response):
    sent_tokens.append(user_response)
    TfidfVec = TfidfVectorizer(tokenizer=LemNormalize, stop_words='english')
    tfidf = TfidfVec.fit_transform(sent_tokens)
    #compare the user's query (last row) against every sentence
    vals = cosine_similarity(tfidf[-1], tfidf)
    #index of the second-highest similarity (the highest is the query itself)
    idx = vals.argsort()[0][-2]
    flat = vals.flatten()
    flat.sort()
    if flat[-2] == 0:
        robo_response = "I apologize, I don't understand."
    else:
        robo_response = sent_tokens[idx]
    sent_tokens.remove(user_response)
    return robo_response
flag = True
print("DOCBot: I am Doctor Bot, or DOCBot for short. I will answer your queries about chronic kidney disease. If you need any help, I am here to respond.")
while(flag == True):
    user_response = input()
    user_response = user_response.lower()
    if(user_response != 'bye'):
        if(user_response == 'thanks' or user_response == 'thank you'):
            flag = False
            print("DOCBot: You are welcome !")
        else:
            #call greeting() once so the checked and printed replies match
            greet = greeting(user_response)
            if(greet != None):
                print("DOCBot: " + greet)
            else:
                print("DOCBot: " + response(user_response))
    else:
        flag = False
        print("DOCBot: Chat with you later !")