From 519a351a3866e4de81a4d829870373e6abd2b817 Mon Sep 17 00:00:00 2001
From: jinyoungkim927 <101994163+jinyoungkim927@users.noreply.github.com>
Date: Fri, 12 Jan 2024 21:31:19 -0800
Subject: [PATCH 01/48] Update README.md
---
README.md | 94 +++++++++----------------------------------------------
1 file changed, 15 insertions(+), 79 deletions(-)
diff --git a/README.md b/README.md
index 25ff4a1..0246714 100644
--- a/README.md
+++ b/README.md
@@ -1,89 +1,25 @@
-# PromptCraft-Robotics
+# Plan for Building
-The PromptCraft-Robotics repository serves as a community for people to test and share interesting prompting examples for large language models (LLMs) within the robotics domain. We also provide a sample [robotics simulator](https://github.com/microsoft/PromptCraft-Robotics/tree/main/chatgpt_airsim) (built on Microsoft AirSim) with ChatGPT integration for users to get started.
-We currently focus on OpenAI's [ChatGPT](https://openai.com/blog/chatgpt/), but we also welcome examples from other LLMs (for example open-sourced models or others with API access such as [GPT-3](https://openai.com/api/) and Codex).
+### 1. Build GUI
+- Side-by-side web interface + Windows GUI
+- Hook this up to Whisper for voice-to-text commands
-Users can contribute to this repository by submitting interesting prompt examples to the [Discussions](https://github.com/microsoft/PromptCraft-Robotics/discussions) section of this repository. A prompt can be submitted within different robotics categories such as [Manipulation](https://github.com/microsoft/PromptCraft-Robotics/discussions/categories/llm-manipulation), [Home Robotics](https://github.com/microsoft/PromptCraft-Robotics/discussions/categories/llm-home-robots), [Physical Reasoning](https://github.com/microsoft/PromptCraft-Robotics/discussions/categories/llm-physical-reasoning), among many others.
-Once submitted, the prompt will be reviewed by the community (upvote your favorites!) and added to the repository by a team of admins if it is deemed interesting and useful.
-We encourage users to submit prompts that are interesting, fun, or useful. We also encourage users to submit prompts that are not necessarily "correct" or "optimal" but are interesting nonetheless.
+### 2. Try different GPTs
+- Run different GPT versions (e.g., GPT-4) rather than only GPT-3.5 Turbo
-We encourage prompt submissions formatted as markdown, so that they can be easily transferred to the main repository. Please specify which LLM you used, and if possible provide other visuals of the model in action such as videos and pictures.
+### 3. Try vision models
+- Take the photo the drone captures and process it to answer instruction-based questions (e.g., with GPT-4)
+- If we have time, switch this out for a fancier model like LLaVA
-## Paper, videos and citations
+### 4. Implement 'brownian motion'
+- So the drones aren't sitting ducks as targets
+- The drone oscillates continuously
+- STRETCH: as a swarm, move in a confusing swarm pattern
-Blog post: aka.ms/ChatGPT-Robotics
+### 5. Multiple drones
+- Multiple drones swarming to one object at once.
-Paper: ChatGPT for Robotics: Design Principles and Model Abilities
-Video: https://youtu.be/NYd0QcZcS6Q
-If you use this repository in your research, please cite the following paper:
-```
-@techreport{vemprala2023chatgpt,
-author = {Vemprala, Sai and Bonatti, Rogerio and Bucker, Arthur and Kapoor, Ashish},
-title = {ChatGPT for Robotics: Design Principles and Model Abilities},
-institution = {Microsoft},
-year = {2023},
-month = {February},
-url = {https://www.microsoft.com/en-us/research/publication/chatgpt-for-robotics-design-principles-and-model-abilities/},
-number = {MSR-TR-2023-8},
-}
-```
-
-## ChatGPT Prompting Guides & Examples
-
-The list below contains links to the different robotics categories and their corresponding prompt examples. We welcome contributions to this repository to add more robotics categories and examples. Please submit prompt examples to the [Discussions](https://github.com/microsoft/PromptCraft-Robotics/discussions) page, or submit a pull request with your category and examples.
-
-* Embodied agent
- * [ChatGPT - Habitat, closed loop object navigation 1](examples/embodied_agents/visual_language_navigation_1.md)
- * [ChatGPT - Habitat, closed loop object navigation 2](examples/embodied_agents/visual_language_navigation_2.md)
- * [ChatGPT - AirSim, object navigation using RGBD](examples/embodied_agents/airsim_objectnavigation.md)
-* Aerial robotics
- * [ChatGPT - Real robot: Tello deployment](examples/aerial_robotics/tello_example.md) | [Video Link](https://youtu.be/i5wZJFb4dyA)
- * [ChatGPT - AirSim turbine Inspection](examples/aerial_robotics/airsim_turbine_inspection.md) | [Video Link](https://youtu.be/38lA3U2J43w)
- * [ChatGPT - AirSim solar panel Inspection](examples/aerial_robotics/airsim_solarpanel_inspection.md)
- * [ChatGPT - AirSim obstacle avoidance](examples/aerial_robotics/airsim_obstacleavoidance.md) | [Video Link](https://youtu.be/Vn6NapLlHPE)
-* Manipulation
- * [ChatGPT - Real robot: Picking, stacking, and building the MSFT logo](examples/manipulation/pick_stack_msft_logo.md) | [Video Link](https://youtu.be/wLOChUtdqoA)
- * [ChatGPT - Manipulation tasks](examples/manipulation/manipulation_zeroshot.md)
-* Spatial-temporal reasoning
- * [ChatGPT - Visual servoing with basketball](examples/spatial_temporal_reasoning/visual_servoing_basketball.md)
-
-
-## ChatGPT + Robotics Simulator
-
-We provide a sample [AirSim](https://github.com/microsoft/AirSim) environment for users to test their ChatGPT prompts. The environment is a binary containing a sample inspection environment with assets such as wind turbines, electric towers, solar panels etc. The environment comes with a drone and interfaces with ChatGPT such that users can easily send commands in natural language. [[Simulator Link]](chatgpt_airsim/README.md)
-
-We welcome contributions to this repository to add more robotics simulators and environments. Please submit a pull request with your simulator and environment.
-
-## Related resources
-
-Beyond the prompt examples here, we leave useful and related links to the use of large language models below:
-
-* [Read about the OpenAI APIs](https://openai.com/api/)
-* [Azure OpenAI service](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service)
-* [OPT language model](https://huggingface.co/docs/transformers/model_doc/opt)
-
-## Contributing
-
-This project welcomes contributions and suggestions. Most contributions require you to agree to a
-Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
-the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
-
-When you submit a pull request, a CLA bot will automatically determine whether you need to provide
-a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
-provided by the bot. You will only need to do this once across all repos using our CLA.
-
-This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
-For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
-contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
-
-## Trademarks
-
-This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
-trademarks or logos is subject to and must follow
-[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
-Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
-Any use of third-party trademarks or logos are subject to those third-party's policies.
From b1311da5dca1764b21a5f7faf6927b5e039fa0ed Mon Sep 17 00:00:00 2001
From: jinyoungkim927 <101994163+jinyoungkim927@users.noreply.github.com>
Date: Fri, 12 Jan 2024 21:32:47 -0800
Subject: [PATCH 02/48] Update README.md
---
README.md | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index 0246714..400c950 100644
--- a/README.md
+++ b/README.md
@@ -12,13 +12,15 @@
- Take the photo the drone captures and process it to answer instruction-based questions (e.g., with GPT-4)
- If we have time, switch this out for a fancier model like LLaVA
-### 4. Implement 'brownian motion'
+### 4. Implement 'flutter'
- So the drones aren't sitting ducks as targets
- The drone oscillates continuously
- STRETCH: as a swarm, move in a confusing swarm pattern
-### 5. Multiple drones
+### 5. Multiple drones/objects
- Multiple drones swarming to one object at once.
+- Detect multiple objects
+
From a5b81d21fac80fcafc5a72289283736dfa73b5a1 Mon Sep 17 00:00:00 2001
From: jinyoungkim927 <101994163+jinyoungkim927@users.noreply.github.com>
Date: Fri, 12 Jan 2024 21:34:50 -0800
Subject: [PATCH 03/48] Update README.md
---
README.md | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index 400c950..7c7458e 100644
--- a/README.md
+++ b/README.md
@@ -19,9 +19,14 @@
### 5. Multiple drones/objects
- Multiple drones swarming to one object at once.
-- Detect multiple objects
-
+- Detect multiple objects
+- Not just turbines and people (try other objects)
+### 6. New environment
+- Switch out the environment
+## Interesting Tasks:
+- Counting tasks
+- Following objects
From dcc8ce340d6ced6eb6a820dd8bcab2e4566612f1 Mon Sep 17 00:00:00 2001
From: Stanford User
Date: Fri, 12 Jan 2024 21:42:18 -0800
Subject: [PATCH 04/48] GPT-4
---
chatgpt_airsim/chatgpt_airsim.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/chatgpt_airsim/chatgpt_airsim.py b/chatgpt_airsim/chatgpt_airsim.py
index 236638a..1c66df4 100644
--- a/chatgpt_airsim/chatgpt_airsim.py
+++ b/chatgpt_airsim/chatgpt_airsim.py
@@ -50,7 +50,7 @@ def ask(prompt):
}
)
completion = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
+ model="gpt-4",
messages=chat_history,
temperature=0
)
From f76d00dd25cb5c82f0ae7b5aca657624ce59644f Mon Sep 17 00:00:00 2001
From: jinyoungkim927 <101994163+jinyoungkim927@users.noreply.github.com>
Date: Fri, 12 Jan 2024 21:42:32 -0800
Subject: [PATCH 05/48] Update README.md
---
README.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/README.md b/README.md
index 7c7458e..606cf64 100644
--- a/README.md
+++ b/README.md
@@ -7,6 +7,7 @@
### 2. Try different GPTs
- Run different GPT versions (e.g., GPT-4) rather than only GPT-3.5 Turbo
+- GPT-4 is solid, but for testing we will use 3.5
### 3. Try vision models
- Take the photo the drone captures and process it to answer instruction-based questions (e.g., with GPT-4)
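Patches 04 and 05 hard-code the model name at the call site. One way to keep "3.5 for testing, 4 for demos" without editing code is a small helper; this is a sketch, and the `AIRSIM_CHAT_MODEL` environment variable name is an assumption, not something from the repo:

```python
import os

def pick_model(default="gpt-3.5-turbo"):
    """Return the chat model to use. An environment variable (name is
    illustrative, not from the repo) overrides the cheap testing default."""
    return os.environ.get("AIRSIM_CHAT_MODEL", default)

# With the variable unset, the testing default wins:
print(pick_model())  # gpt-3.5-turbo
```

The `openai.ChatCompletion.create(...)` call would then read `model=pick_model()` instead of a string literal, so switching models no longer requires a commit.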
From b580886d5d0652c85875f348981f3c9105b97e84 Mon Sep 17 00:00:00 2001
From: Jin Kim
Date: Fri, 12 Jan 2024 22:05:12 -0800
Subject: [PATCH 06/48] Flutter with threading
---
chatgpt_airsim/airsim_wrapper.py | 73 ++++++++++++++++++++++++-
chatgpt_airsim/prompts/airsim_basic.txt | 3 +-
2 files changed, 73 insertions(+), 3 deletions(-)
diff --git a/chatgpt_airsim/airsim_wrapper.py b/chatgpt_airsim/airsim_wrapper.py
index 4b7a2b5..dc6da74 100644
--- a/chatgpt_airsim/airsim_wrapper.py
+++ b/chatgpt_airsim/airsim_wrapper.py
@@ -1,5 +1,9 @@
-import airsim
import math
+import random
+import threading
+import time
+
+import airsim
import numpy as np
objects_dict = {
@@ -44,7 +48,15 @@ def fly_path(self, points):
airsim_points.append(airsim.Vector3r(point[0], point[1], -point[2]))
else:
airsim_points.append(airsim.Vector3r(point[0], point[1], point[2]))
- self.client.moveOnPathAsync(airsim_points, 5, 120, airsim.DrivetrainType.ForwardOnly, airsim.YawMode(False, 0), 20, 1).join()
+ self.client.moveOnPathAsync(
+ airsim_points,
+ 5,
+ 120,
+ airsim.DrivetrainType.ForwardOnly,
+ airsim.YawMode(False, 0),
+ 20,
+ 1,
+ ).join()
def set_yaw(self, yaw):
self.client.rotateToYawAsync(yaw, 5).join()
@@ -61,3 +73,60 @@ def get_position(self, object_name):
object_names_ue = self.client.simListSceneObjects(query_string)
pose = self.client.simGetObjectPose(object_names_ue[0])
return [pose.position.x_val, pose.position.y_val, pose.position.z_val]
+
+ @staticmethod
+ def is_within_boundary(start_pos, current_pos, limit_radius):
+ """Check if the drone is within the spherical boundary"""
+ distance = math.sqrt(
+ (current_pos.x_val - start_pos.x_val) ** 2
+ + (current_pos.y_val - start_pos.y_val) ** 2
+ + (current_pos.z_val - start_pos.z_val) ** 2
+ )
+ return distance <= limit_radius
+
+ def flutter(self, speed=5, change_interval=1, limit_radius=10):
+        """Simulate Brownian-motion-style fluttering with the drone"""
+ # Takeoff and get initial position
+ self.client.takeoffAsync().join()
+ start_position = self.client.simGetVehiclePose().position
+
+ while not self.stop_thread:
+ # Propose a random direction
+ pitch = random.uniform(-1, 1) # Forward/backward
+ roll = random.uniform(-1, 1) # Left/right
+ yaw = random.uniform(-1, 1) # Rotate
+
+ # Move the drone in the proposed direction
+ self.client.moveByAngleThrottleAsync(
+ pitch, roll, 0.5, yaw, change_interval
+ ).join()
+
+ # Get the current position
+ current_position = self.client.simGetVehiclePose().position
+
+ # Check if the drone is within the boundary
+ if not self.is_within_boundary(
+ start_position, current_position, limit_radius
+ ):
+                # If outside the boundary, fly back to the start position
+ self.client.moveToPositionAsync(
+ start_position.x_val,
+ start_position.y_val,
+ start_position.z_val,
+ speed,
+ ).join()
+
+ # Wait for the next change
+ time.sleep(change_interval)
+
+ def start_fluttering(self, speed=5, change_interval=1, limit_radius=10):
+ self.stop_thread = False
+ self.flutter_thread = threading.Thread(
+ target=self.flutter, args=(speed, change_interval, limit_radius)
+ )
+ self.flutter_thread.start()
+
+ def stop_fluttering(self):
+ self.stop_thread = True
+ if self.flutter_thread is not None:
+ self.flutter_thread.join()
diff --git a/chatgpt_airsim/prompts/airsim_basic.txt b/chatgpt_airsim/prompts/airsim_basic.txt
index 9e4dc54..c95f4d2 100644
--- a/chatgpt_airsim/prompts/airsim_basic.txt
+++ b/chatgpt_airsim/prompts/airsim_basic.txt
@@ -8,6 +8,7 @@ aw.fly_path(points) - flies the drone along the path specified by the list of po
aw.set_yaw(yaw) - sets the yaw of the drone to the specified value in degrees.
aw.get_yaw() - returns the current yaw of the drone in degrees.
aw.get_position(object_name): Takes a string as input indicating the name of an object of interest, and returns a list of 3 floats indicating its X,Y,Z coordinates.
+aw.flutter(): The drone keeps moving in a 'random' way within a confined radius.
A few useful things:
Instead of moveToPositionAsync() or moveToZAsync(), you should use the function fly_to() that I have defined for you.
@@ -25,4 +26,4 @@ turbine1, turbine2, solarpanels, car, crowd, tower1, tower2, tower3.
None of the objects except for the drone itself are movable. Remember that there are two turbines, and three towers. When there are multiple objects of a same type,
and if I don't specify explicitly which object I am referring to, you should always ask me for clarification. Never make assumptions.
-In terms of axis conventions, forward means positive X axis. Right means positive Y axis. Up means positive Z axis.
\ No newline at end of file
+In terms of axis conventions, forward means positive X axis. Right means positive Y axis. Up means positive Z axis.
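The `is_within_boundary` check added above is pure geometry, so it can be sanity-checked without AirSim. This sketch substitutes plain `(x, y, z)` tuples for AirSim's `Vector3r`:

```python
import math

def is_within_boundary(start_pos, current_pos, limit_radius):
    """True when current_pos lies inside the sphere of radius limit_radius
    centered at start_pos (positions given as (x, y, z) tuples)."""
    distance = math.sqrt(sum((c - s) ** 2 for s, c in zip(start_pos, current_pos)))
    return distance <= limit_radius

print(is_within_boundary((0, 0, 0), (6, 0, 0), 10))  # True
print(is_within_boundary((0, 0, 0), (8, 8, 8), 10))  # False (distance ~13.9)
```

Inside `flutter()` this check is what pulls the drone back toward `start_position` whenever a random move drifts past `limit_radius`.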
From e7ceca2a685274c65bf8ee8b90cabd60c237633d Mon Sep 17 00:00:00 2001
From: Stanford User
Date: Fri, 12 Jan 2024 22:12:13 -0800
Subject: [PATCH 07/48] object counting cloud vision API
---
chatgpt_airsim/chatgpt_airsim.py | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/chatgpt_airsim/chatgpt_airsim.py b/chatgpt_airsim/chatgpt_airsim.py
index 1c66df4..6d2a799 100644
--- a/chatgpt_airsim/chatgpt_airsim.py
+++ b/chatgpt_airsim/chatgpt_airsim.py
@@ -7,6 +7,8 @@
import os
import json
import time
+import requests
+from google.cloud import vision
parser = argparse.ArgumentParser()
parser.add_argument("--prompt", type=str, default="prompts/airsim_basic.txt")
@@ -111,6 +113,19 @@ class colors: # You may need to change color settings
response = ask(question)
+ if 'count' in question:
+ # @Kaien: take an image function
+ path = 'path to image'
+ client = vision.ImageAnnotatorClient()
+
+ with open(path, "rb") as image_file:
+ content = image_file.read()
+ image = vision.Image(content=content)
+
+ objects = client.object_localization(image=image).localized_object_annotations
+
+        response = f"Number of objects found: {len(objects)}"
+
print(f"\n{response}\n")
code = extract_python_code(response)
From 580ea5e9f3a26d7f265bb01a014bb77a85b05195 Mon Sep 17 00:00:00 2001
From: Stanford User
Date: Fri, 12 Jan 2024 22:27:48 -0800
Subject: [PATCH 08/48] New logic for Vision API + GPT-4
---
chatgpt_airsim/chatgpt_airsim.py | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/chatgpt_airsim/chatgpt_airsim.py b/chatgpt_airsim/chatgpt_airsim.py
index 6d2a799..1d88c5d 100644
--- a/chatgpt_airsim/chatgpt_airsim.py
+++ b/chatgpt_airsim/chatgpt_airsim.py
@@ -123,8 +123,10 @@ class colors: # You may need to change color settings
image = vision.Image(content=content)
objects = client.object_localization(image=image).localized_object_annotations
-
-        response = f"Number of objects found: {len(objects)}"
+
+ # convert objects to string
+ # add the question with count in it
+ response = ask(string_json + question)
print(f"\n{response}\n")
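Patch 08 leaves `string_json` as a TODO ("convert objects to string"). A plausible sketch of that conversion is below; the real Vision API's `LocalizedObjectAnnotation` objects do expose `name` and `score` attributes, but the `Annotation` dataclass here is a stand-in for illustration:

```python
import json
from dataclasses import dataclass

# Stand-in for the Vision API's LocalizedObjectAnnotation; only the
# name and score attributes are used here.
@dataclass
class Annotation:
    name: str
    score: float

def annotations_to_json(objects):
    """Serialize detections into a compact JSON string that can be
    prepended to the user's counting question before calling the LLM."""
    return json.dumps([{"name": o.name, "score": round(o.score, 2)} for o in objects])

detections = [Annotation("Person", 0.91), Annotation("Person", 0.874),
              Annotation("Wind turbine", 0.78)]
string_json = annotations_to_json(detections)
print(string_json)
```

Handing the model structured detections and letting it do the counting (rather than hard-coding `len(objects)`) is what allows questions like "how many people are near the turbine?" to work.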
From 8df82384424b619553813f281bf1b579a738a514 Mon Sep 17 00:00:00 2001
From: Jin Kim
Date: Fri, 12 Jan 2024 22:41:27 -0800
Subject: [PATCH 09/48] Basic GUI website
---
.../system_prompts/airsim_basic.txt | 2 +-
chatgpt_airsim/templates/index.html | 61 ++++++++
chatgpt_airsim/web_ui_chatgpt_airsim.py | 136 ++++++++++++++++++
3 files changed, 198 insertions(+), 1 deletion(-)
create mode 100644 chatgpt_airsim/templates/index.html
create mode 100644 chatgpt_airsim/web_ui_chatgpt_airsim.py
diff --git a/chatgpt_airsim/system_prompts/airsim_basic.txt b/chatgpt_airsim/system_prompts/airsim_basic.txt
index 92ebd58..b84085f 100644
--- a/chatgpt_airsim/system_prompts/airsim_basic.txt
+++ b/chatgpt_airsim/system_prompts/airsim_basic.txt
@@ -2,4 +2,4 @@ You are an assistant helping me with the AirSim simulator for drones.
When I ask you to do something, you are supposed to give me Python code that is needed to achieve that task using AirSim and then an explanation of what that code does.
You are only allowed to use the functions I have defined for you.
You are not to use any other hypothetical functions that you think might exist.
-You can use simple Python functions from libraries such as math and numpy.
\ No newline at end of file
+You can use simple Python functions from libraries such as math and numpy.
diff --git a/chatgpt_airsim/templates/index.html b/chatgpt_airsim/templates/index.html
new file mode 100644
index 0000000..8858c4d
--- /dev/null
+++ b/chatgpt_airsim/templates/index.html
@@ -0,0 +1,61 @@
+
+
+
+
+
+ AirSim Chatbot UI
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/chatgpt_airsim/web_ui_chatgpt_airsim.py b/chatgpt_airsim/web_ui_chatgpt_airsim.py
new file mode 100644
index 0000000..e870cb5
--- /dev/null
+++ b/chatgpt_airsim/web_ui_chatgpt_airsim.py
@@ -0,0 +1,136 @@
+import argparse
+import json
+import math
+import os
+import re
+import time
+
+import numpy as np
+import openai
+from airsim_wrapper import *
+from flask import Flask, jsonify, render_template, request
+
+parser = argparse.ArgumentParser()
+parser.add_argument("--prompt", type=str, default="prompts/airsim_basic.txt")
+parser.add_argument("--sysprompt", type=str, default="system_prompts/airsim_basic.txt")
+args = parser.parse_args()
+
+app = Flask(__name__)
+
+
+@app.route("/")
+def index():
+ return render_template("index.html")
+
+
+@app.route("/ask", methods=["POST"])
+def web_ask():
+ user_input = request.json.get("question")
+ if user_input:
+ response = ask(user_input)
+ return jsonify({"response": response})
+ return jsonify({"response": "No question received"})
+
+
+if __name__ == "__main__":
+ app.run(debug=True)
+
+with open("config.json", "r") as f:
+ config = json.load(f)
+
+print("Initializing ChatGPT...")
+openai.api_key = config["OPENAI_API_KEY"]
+
+with open(args.sysprompt, "r") as f:
+ sysprompt = f.read()
+
+chat_history = [
+ {"role": "system", "content": sysprompt},
+ {"role": "user", "content": "move 10 units up"},
+ {
+ "role": "assistant",
+ "content": """```python
+aw.fly_to([aw.get_drone_position()[0], aw.get_drone_position()[1], aw.get_drone_position()[2]+10])
+```
+
+This code uses the `fly_to()` function to move the drone to a new position that is 10 units up from the current position. It does this by getting the current position of the drone using `get_drone_position()` and then creating a new list with the same X and Y coordinates, but with the Z coordinate increased by 10. The drone will then fly to this new position using `fly_to()`.""",
+ },
+]
+
+
+def ask(prompt):
+ chat_history.append(
+ {
+ "role": "user",
+ "content": prompt,
+ }
+ )
+ completion = openai.ChatCompletion.create(
+ model="gpt-4", messages=chat_history, temperature=0
+ )
+ chat_history.append(
+ {
+ "role": "assistant",
+ "content": completion.choices[0].message.content,
+ }
+ )
+ return chat_history[-1]["content"]
+
+
+print(f"Done.")
+
+code_block_regex = re.compile(r"```(.*?)```", re.DOTALL)
+
+
+def extract_python_code(content):
+ code_blocks = code_block_regex.findall(content)
+ if code_blocks:
+ full_code = "\n".join(code_blocks)
+
+ if full_code.startswith("python"):
+ full_code = full_code[7:]
+
+ return full_code
+ else:
+ return None
+
+
+class colors: # You may need to change color settings
+ RED = "\033[31m"
+ ENDC = "\033[m"
+ GREEN = "\033[32m"
+ YELLOW = "\033[33m"
+ BLUE = "\033[34m"
+
+
+print(f"Initializing AirSim...")
+aw = AirSimWrapper()
+print(f"Done.")
+
+with open(args.prompt, "r") as f:
+ prompt = f.read()
+
+ask(prompt)
+print(
+ "Welcome to the AirSim chatbot! I am ready to help you with your AirSim questions and commands."
+)
+
+while True:
+ question = input(colors.YELLOW + "AirSim> " + colors.ENDC)
+
+ if question == "!quit" or question == "!exit":
+ break
+
+ if question == "!clear":
+ os.system("cls")
+ continue
+
+ response = ask(question)
+
+ print(f"\n{response}\n")
+
+ code = extract_python_code(response)
+ if code is not None:
+ print("Please wait while I run the code in AirSim...")
+ exec(extract_python_code(response))
+ print("Done!\n")
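The web UI above reuses the CLI script's `extract_python_code` helper to pull runnable code out of model replies. Its fence-stripping behavior can be exercised in isolation (the fence delimiter is built with string repetition only to avoid nesting literal fences in this document):

```python
import re

FENCE = "`" * 3  # a literal triple-backtick code fence

code_block_regex = re.compile(FENCE + r"(.*?)" + FENCE, re.DOTALL)

def extract_python_code(content):
    """Concatenate the contents of all fenced code blocks in an LLM reply,
    dropping a leading "python" language tag plus its newline."""
    code_blocks = code_block_regex.findall(content)
    if not code_blocks:
        return None
    full_code = "\n".join(code_blocks)
    if full_code.startswith("python"):
        full_code = full_code[7:]  # skip "python" and the following newline
    return full_code

reply = "Here you go:\n" + FENCE + "python\naw.takeoff()\n" + FENCE
print(extract_python_code(reply))  # prints aw.takeoff()
```

Returning `None` for replies with no fenced block is what lets the caller skip the `exec()` step for purely conversational answers.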
From f8105218b0c861679ce4a3ec478244213485930d Mon Sep 17 00:00:00 2001
From: Jin Kim
Date: Sat, 13 Jan 2024 00:28:16 -0800
Subject: [PATCH 10/48] Fixes to fluttering mechanism
---
chatgpt_airsim/airsim_wrapper.py | 5 +++--
chatgpt_airsim/prompts/airsim_basic.txt | 2 ++
chatgpt_airsim/templates/index.html | 2 +-
3 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/chatgpt_airsim/airsim_wrapper.py b/chatgpt_airsim/airsim_wrapper.py
index dc6da74..d0df7ed 100644
--- a/chatgpt_airsim/airsim_wrapper.py
+++ b/chatgpt_airsim/airsim_wrapper.py
@@ -24,6 +24,7 @@ def __init__(self):
self.client.confirmConnection()
self.client.enableApiControl(True)
self.client.armDisarm(True)
+ self.stop_thread = False
def takeoff(self):
self.client.takeoffAsync().join()
@@ -97,8 +98,8 @@ def flutter(self, speed=5, change_interval=1, limit_radius=10):
yaw = random.uniform(-1, 1) # Rotate
# Move the drone in the proposed direction
- self.client.moveByAngleThrottleAsync(
- pitch, roll, 0.5, yaw, change_interval
+ self.client.moveByRollPitchYawrateThrottleAsync(
+ roll, pitch, yaw, 0.5, change_interval
).join()
# Get the current position
diff --git a/chatgpt_airsim/prompts/airsim_basic.txt b/chatgpt_airsim/prompts/airsim_basic.txt
index c95f4d2..97f7863 100644
--- a/chatgpt_airsim/prompts/airsim_basic.txt
+++ b/chatgpt_airsim/prompts/airsim_basic.txt
@@ -9,6 +9,8 @@ aw.set_yaw(yaw) - sets the yaw of the drone to the specified value in degrees.
aw.get_yaw() - returns the current yaw of the drone in degrees.
aw.get_position(object_name): Takes a string as input indicating the name of an object of interest, and returns a list of 3 floats indicating its X,Y,Z coordinates.
aw.flutter(): The drone keeps moving in a 'random' way within a confined radius.
+aw.start_fluttering(): The drone starts fluttering
+aw.stop_fluttering(): The drone stops fluttering
A few useful things:
Instead of moveToPositionAsync() or moveToZAsync(), you should use the function fly_to() that I have defined for you.
diff --git a/chatgpt_airsim/templates/index.html b/chatgpt_airsim/templates/index.html
index 8858c4d..658219d 100644
--- a/chatgpt_airsim/templates/index.html
+++ b/chatgpt_airsim/templates/index.html
@@ -58,4 +58,4 @@