added fancier prompt that remembers state #2

12 changes: 12 additions & 0 deletions README.md
@@ -6,3 +6,15 @@ You'd have to specify a bunch of docker containers that you'd like a Kubernetes

So this is a managed Kubernetes service lol. But not really. Because you're still very much running Kubernetes - think of this tool as an assistant.

Problems I faced:
1. Lots of evicted pods.
2. Failed readiness checks, with no clear way to debug them or even get started.
3. There should have been a way to pass in a Dockerfile and generate a corresponding Kubernetes YAML.
4. You shouldn't have to learn the semantics of what it means to
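Problem 3 in the list above could start as a small helper. A minimal sketch, assuming the Dockerfile's `EXPOSE` line gives the container port; the `dockerfile_to_deployment` helper and its field choices are illustrative, not part of this repo:

```python
def dockerfile_to_deployment(dockerfile_text, name, image):
    # pull the first EXPOSE port out of the Dockerfile, defaulting to 80
    port = 80
    for line in dockerfile_text.splitlines():
        parts = line.split()
        if parts and parts[0].upper() == "EXPOSE":
            port = int(parts[1])
            break
    # emit a minimal Deployment manifest as a dict (dump with pyyaml if desired)
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [
                        {
                            "name": "main",
                            "image": image,
                            "ports": [{"containerPort": port}],
                        }
                    ]
                },
            },
        },
    }
```

A real version would also need to carry over env vars, volumes, and probes, which is where most of the work hides.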


Possible next steps:
1. Download the chat history of the Kubernetes Slack channel (user questions with expert answers) and use it to finetune GPT-3 via the OpenAI API.
2. https://github.com/ht2/gpt_content_indexing/
3. https://loft.sh/blog/kubernetes-monitoring-dashboards-5-best-open-source-tools/
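Next step 1 could be prototyped by converting scraped Slack threads into the prompt/completion JSONL format that OpenAI fine-tuning expects. A rough sketch; the `(question, answer)` pair structure is an assumption about how the scraped threads would be stored:

```python
import json

def threads_to_jsonl(threads):
    # threads: list of (question, answer) pairs scraped from the Slack channel
    lines = []
    for question, answer in threads:
        record = {
            # a fixed separator marks the end of the prompt,
            # and completions start with a leading space, per the
            # classic fine-tuning data conventions
            "prompt": question.strip() + "\n\n###\n\n",
            "completion": " " + answer.strip(),
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)
```

The resulting string can be written to a `.jsonl` file and uploaded for fine-tuning.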

16 changes: 9 additions & 7 deletions app.py
@@ -9,21 +9,23 @@

app = Flask(__name__)

simple_in_memory_db = {}

@app.route('/api', methods=['POST'])
def api():
    # get user id
    user_id = request.headers.get('user_id')

    # get the prompt and config objects from the request
    prompt = request.form['prompt']
    config = request.form['config']

    # make request to OpenAI API with prompt and config,
    # then return its suggestion to the caller
    suggestion = handler(prompt, config)
    # the server remembers what it suggested so it can use it in the next request;
    # setdefault avoids a KeyError on a user's first request
    simple_in_memory_db.setdefault(user_id, []).append(suggestion)
    return jsonify(suggestion)

if __name__ == '__main__':
app.run(debug=True, host='localhost', port=5000)
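The per-user history kept in `simple_in_memory_db` above can be sketched in isolation. A minimal sketch; the class and method names are illustrative, not part of this repo:

```python
class SuggestionHistory:
    """Remembers the suggestions made to each user, like simple_in_memory_db."""

    def __init__(self):
        self._db = {}

    def remember(self, user_id, suggestion):
        # setdefault avoids a KeyError on a user's first request
        self._db.setdefault(user_id, []).append(suggestion)

    def last(self, user_id):
        # the most recent suggestion, or None if this user has no history
        history = self._db.get(user_id, [])
        return history[-1] if history else None
```

Because the dict lives in process memory, the history is lost on restart and not shared across workers; a real deployment would back this with Redis or a database.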
33 changes: 5 additions & 28 deletions k8s.py
@@ -6,35 +6,12 @@
openai.api_key = os.getenv("OPENAI_API_KEY")  # never hardcode a real key as a fallback

def construct_prompt(query_str, config):
    # the few-shot prompt below is the version this commit removes:
    prompt = '''
You are a Kubernetes debugging assistant.
Given the following config object, and a question, suggest what actions can be taken with references ("SOURCES").
ALWAYS return a "SOURCES" part in your answer.

QUESTION: My cluster has 72 pods, a large chunk of which are the blenderbot pod. How do I prevent there from being so many replicas of the blenderbot?
=========
Config: apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: blenderbot\n namespace: chirpy\nspec:\n selector:\n matchLabels:\n app: blenderbot\n template:\n metadata:\n labels:\n app: blenderbot\n spec:\n containers:\n - image: 742352046111.dkr.ecr.us-east-1.amazonaws.com/chirpy/blenderbot:latest\n name: main\n ports:\n - containerPort: 80\n name: main\n readinessProbe:\n httpGet:\n path: /\n port: 80\n initialDelaySeconds: 120\n tolerations:\n - effect: NoSchedule\n key: node_type\n operator: Equal\n value: gpu\napiVersion: v1\nkind: Service\nmetadata:\n name: blenderbot\n namespace: chirpy\nspec:\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app: blenderbot\n
=========
FINAL ANSWER: Check your node groups; it's possible the pod isn't able to get scheduled on any node and so is constantly respawning.
SOURCES: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

QUESTION: How to eat vegetables using kubectl?
=========
Config: apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: blenderbot\n namespace: chirpy\nspec:\n selector:\n matchLabels:\n app: blenderbot\n template:\n metadata:\n labels:\n app: blenderbot\n spec:\n containers:\n - image: 742352046111.dkr.ecr.us-east-1.amazonaws.com/chirpy/blenderbot:latest\n name: main\n ports:\n - containerPort: 80\n name: main\n readinessProbe:\n httpGet:\n path: /\n port: 80\n initialDelaySeconds: 120\n tolerations:\n - effect: NoSchedule\n key: node_type\n operator: Equal\n value: gpu\napiVersion: v1\nkind: Service\nmetadata:\n name: blenderbot\n namespace: chirpy\nspec:\n ports:\n - port: 80\n protocol: TCP\n targetPort: 80\n selector:\n app: blenderbot\n
=========
FINAL ANSWER: You can't eat vegetables using Kubernetes. You can only eat them using your mouth.
SOURCES:

QUESTION: {query_str}
=========
{config}
=========
FINAL ANSWER:
'''

    # the shorter prompt below is the version this commit adds:
    prompt = f'''
You are a Kubernetes debugging assistant. I've tried the following commands on my cluster, and the outputs are below.
Give me another command to try, with no English text, only kubectl commands.
{query_str}
'''
    return prompt


def handler(prompt, config):
# TODO: only get the relevant parts of the config
# TODO: decide what the relevant parts of the config are
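The new prompt expects `query_str` to contain the commands already tried plus their outputs. Building that string from stored history could look like this rough sketch; the `(command, output)` pair format is an assumption, not the repo's actual storage format:

```python
def build_query_str(history):
    # history: list of (command, output) pairs already tried on the cluster
    blocks = []
    for command, output in history:
        # shell-style transcript block: the command, then its output
        blocks.append(f"$ {command}\n{output.strip()}")
    return "\n\n".join(blocks)
```

The result drops straight into the `{query_str}` slot of the new prompt in `construct_prompt`.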