Commit fb003e0
Merge pull request #20 from deepanshubaghel/main
Enhancing Terrain-v3 as Terrain-v4 for Performance Optimization
Akasxh authored Oct 11, 2024
2 parents 6bc5c08 + d1ffd25

Showing 10,519 changed files with 11,227 additions and 566 deletions.

351 changes: 351 additions & 0 deletions .ipynb_checkpoints/Terrain_V3-checkpoint.ipynb
@@ -0,0 +1,351 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Importing necessary Libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Import core libraries. TensorFlow, SciPy, and Matplotlib must be\n",
"# installed; uncomment the magics below to install them if needed.\n",
"# %pip install tensorflow\n",
"# %pip install scipy\n",
"# %pip install matplotlib\n",
"import tensorflow as tf\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Inline plotting with Matplotlib"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"# Ensures plots are rendered directly below the cell that produces them."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Keras imports for data loading, augmentation, and training callbacks.\n",
"# Importing everything from tensorflow.keras (rather than mixing the\n",
"# standalone keras package with tensorflow.keras) avoids version clashes.\n",
"from tensorflow.keras.models import Sequential, load_model\n",
"from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Flatten,\n",
"                                     Dense, Dropout, BatchNormalization)\n",
"from tensorflow.keras.preprocessing.image import ImageDataGenerator\n",
"from tensorflow.keras.callbacks import LearningRateScheduler, ReduceLROnPlateau\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Defining the CNN model\n",
"The following code defines a CNN model using TensorFlow's Keras API. The model is designed for terrain classification with a series of convolutional layers, pooling layers, and fully connected layers.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Define the CNN model architecture: three conv/pool blocks\n",
"# followed by a fully connected classifier head.\n",
"def create_model():\n",
"    model = tf.keras.Sequential([\n",
"        tf.keras.layers.Input(shape=(64, 64, 3)),\n",
"        tf.keras.layers.Conv2D(32, kernel_size=3, activation='relu'),\n",
"        tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),\n",
"        tf.keras.layers.BatchNormalization(),\n",
"\n",
"        tf.keras.layers.Conv2D(64, kernel_size=3, activation='relu'),\n",
"        tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),\n",
"        tf.keras.layers.BatchNormalization(),\n",
"\n",
"        tf.keras.layers.Conv2D(128, kernel_size=3, activation='relu'),\n",
"        tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),\n",
"        tf.keras.layers.BatchNormalization(),\n",
"\n",
"        tf.keras.layers.Flatten(),\n",
"\n",
"        tf.keras.layers.Dense(1024, activation='relu'),\n",
"        tf.keras.layers.Dropout(0.2),\n",
"        tf.keras.layers.BatchNormalization(),\n",
"\n",
"        tf.keras.layers.Dense(1024, activation='relu'),\n",
"        tf.keras.layers.Dropout(0.2),\n",
"        tf.keras.layers.BatchNormalization(),\n",
"\n",
"        tf.keras.layers.Dense(256, activation='relu'),\n",
"        tf.keras.layers.Dropout(0.2),\n",
"        tf.keras.layers.BatchNormalization(),\n",
"\n",
"        tf.keras.layers.Dense(5, activation='softmax')\n",
"    ])\n",
"    return model\n",
"\n",
"model = create_model()"
]
},
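{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Sanity check: feature map sizes\n",
"A quick check on the architecture above (an added sketch, not part of the original notebook): with Keras' default 'valid' padding, each 3x3 convolution trims 2 pixels from each spatial dimension and each 2x2 max pool halves it, so the Flatten layer should receive 6 x 6 x 128 = 4608 features."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Trace the spatial size through the three conv/pool blocks above.\n",
"# Assumes Keras defaults: 'valid' padding for Conv2D, pool stride = 2.\n",
"def feature_size(input_size, blocks):\n",
"    size = input_size\n",
"    for kernel, pool in blocks:\n",
"        size = size - kernel + 1  # valid 3x3 convolution\n",
"        size = size // pool       # 2x2 max pooling\n",
"    return size\n",
"\n",
"side = feature_size(64, [(3, 2), (3, 2), (3, 2)])\n",
"flat_features = side * side * 128  # 128 channels after the last block\n",
"print(side, flat_features)  # 6 4608"
]
},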
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Compiling the CNN model\n",
"The model is compiled using the Adam optimizer, with categorical crossentropy as the loss function, and accuracy as the evaluation metric."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Compile the model\n",
"# Categorical crossentropy is used since the labels are one-hot encoded.\n",
"# Adam optimizer is used for its adaptive learning rate.\n",
"\n",
"model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])"
]
},
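{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Worked example: categorical crossentropy\n",
"Because the labels are one-hot encoded, the loss for a single example reduces to -log(p_true), the negative log of the probability assigned to the true class. An added sketch; the probabilities below are invented purely for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"# For a one-hot label, categorical crossentropy is -log(p_true),\n",
"# where p_true is the predicted probability of the true class.\n",
"y_true = [0, 0, 1, 0, 0]                 # one-hot label, class 2\n",
"y_pred = [0.05, 0.05, 0.80, 0.05, 0.05]  # softmax output over 5 classes\n",
"loss = -sum(t * math.log(p) for t, p in zip(y_true, y_pred))\n",
"print(round(loss, 4))  # 0.2231"
]
},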
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Data Augmentation and Preprocessing\n",
"This section applies data augmentation techniques using the `ImageDataGenerator` from Keras to generate more training samples and preprocess the images. These operations include rescaling, rotating, shifting, and flipping.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Data augmentation for the training set: random shears, zooms, flips,\n",
"# rotations, brightness and shift jitter, plus 1/255 rescaling.\n",
"train_datagen = ImageDataGenerator(\n",
"    rescale=1./255,\n",
"    shear_range=0.2,\n",
"    zoom_range=0.2,\n",
"    horizontal_flip=True,\n",
"    rotation_range=30,\n",
"    brightness_range=[0.8, 1.2],\n",
"    width_shift_range=0.2,\n",
"    height_shift_range=0.2\n",
")\n",
"\n",
"# Validation/test data gets only the rescaling, no augmentation\n",
"test_datagen = ImageDataGenerator(rescale=1./255)"
]
},
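{
"cell_type": "markdown",
"metadata": {},
"source": [
"### What rescale=1./255 does\n",
"An added sketch: the rescale factor maps raw 8-bit pixel intensities (0-255) into [0, 1], keeping the network inputs in a well-conditioned range. Only the training generator applies the random augmentations; the validation generator rescales only."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# rescale=1./255 maps raw 8-bit pixel values (0-255) into [0, 1].\n",
"pixels = [0, 64, 128, 255]\n",
"scaled = [p / 255.0 for p in pixels]\n",
"print(scaled)"
]
},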
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Loading and Preparing the Data\n",
"In this section, the training and validation data are loaded from the respective directories, and the images are prepared for the model using the augmentation techniques defined earlier.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Update the directory paths to point to your local dataset folders\n",
"training_set = train_datagen.flow_from_directory(\n",
"    r'C:\\Users\\draka\\Downloads\\Terrain_Images\\Training Data',\n",
"    target_size=(64, 64),\n",
"    batch_size=64,\n",
"    class_mode='categorical'\n",
")\n",
"\n",
"test_set = test_datagen.flow_from_directory(\n",
"    r'C:\\Users\\draka\\Downloads\\Terrain_Images\\Testing Data',\n",
"    target_size=(64, 64),\n",
"    batch_size=64,\n",
"    class_mode='categorical'\n",
")\n"
]
},
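{
"cell_type": "markdown",
"metadata": {},
"source": [
"### How class indices are assigned\n",
"flow_from_directory infers one class per subdirectory and assigns label indices alphabetically. An added sketch; the folder names below are hypothetical, since the actual dataset layout is not shown in this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# flow_from_directory sorts class subdirectory names alphabetically\n",
"# to assign label indices. These folder names are hypothetical.\n",
"folders = ['Rocky', 'Grassy', 'Sandy', 'Marshy', 'Snowy']\n",
"class_indices = {name: i for i, name in enumerate(sorted(folders))}\n",
"print(class_indices)  # {'Grassy': 0, 'Marshy': 1, 'Rocky': 2, 'Sandy': 3, 'Snowy': 4}"
]
},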
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Learning rate schedule: 1e-2 for the first 10 epochs, then 1e-3.\n",
"# Note: LearningRateScheduler sets the rate at the start of every epoch,\n",
"# so it overwrites any reduction made by ReduceLROnPlateau below.\n",
"def scheduler(epoch, lr):\n",
"    if epoch < 10:\n",
"        return 1e-2\n",
"    else:\n",
"        return 1e-3\n",
"\n",
"lr_schedule = LearningRateScheduler(scheduler)\n",
"lr_reduction = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3, min_lr=1e-6)\n"
]
},
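{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Checking the step schedule\n",
"The schedule can be verified in isolation. This standalone restatement (an added sketch) holds the learning rate at 1e-2 for epochs 0-9 and drops it to 1e-3 from epoch 10 onward."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Standalone restatement of the step schedule above, for inspection.\n",
"def step_schedule(epoch):\n",
"    return 1e-2 if epoch < 10 else 1e-3\n",
"\n",
"rates = [step_schedule(e) for e in range(15)]\n",
"print(rates[9], rates[10])  # 0.01 0.001"
]
},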
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Training the CNN Model\n",
"The model is trained using the training and validation data. The training process is monitored, and a learning rate scheduler is used to adjust the learning rate dynamically during training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Train the model. modelZ holds the History object returned by fit().\n",
"# steps_per_epoch=2000 with batch_size=64 asks the generator for\n",
"# 128,000 images per epoch; if the dataset is smaller, use\n",
"# steps_per_epoch=len(training_set) and validation_steps=len(test_set).\n",
"modelZ = model.fit(\n",
"    training_set,\n",
"    steps_per_epoch=2000,\n",
"    epochs=20,\n",
"    validation_data=test_set,\n",
"    validation_steps=7,\n",
"    callbacks=[lr_schedule, lr_reduction]\n",
")"
]
},
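{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Back-of-envelope: images per epoch\n",
"An added check, not original notebook code: one epoch as configured asks the generator for steps_per_epoch x batch_size images. If that exceeds the dataset size, reduce steps_per_epoch (e.g. to len(training_set))."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Images requested from the generator in one epoch, as configured above.\n",
"steps_per_epoch = 2000\n",
"batch_size = 64\n",
"images_per_epoch = steps_per_epoch * batch_size\n",
"print(images_per_epoch)  # 128000"
]
},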
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Plotting Training and Validation Accuracy\n",
"This section plots the training and validation accuracy and loss over time to visualize how the model performs during training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Plot training/validation loss and accuracy over epochs\n",
"plt.figure(figsize=(12, 5))\n",
"\n",
"plt.subplot(1, 2, 1)\n",
"plt.plot(modelZ.history['loss'], label='loss')\n",
"plt.plot(modelZ.history['val_loss'], label='val_loss')\n",
"plt.title('Loss Evolution')\n",
"plt.legend()\n",
"\n",
"plt.subplot(1, 2, 2)\n",
"plt.plot(modelZ.history['accuracy'], label='accuracy')\n",
"plt.plot(modelZ.history['val_accuracy'], label='val_accuracy')\n",
"plt.title('Accuracy Evolution')\n",
"plt.legend()\n",
"\n",
"plt.show()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Saving the Trained Model\n",
"After training, the model can be saved for reuse; uncomment the line below to write it to disk.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# model.save('modelZ.h5')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# from tensorflow.keras.models import load_model\n",
"\n",
"# # Reload the model saved above (the save cell writes 'modelZ.h5')\n",
"# model = load_model('modelZ.h5')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# print(\"Num GPUs Available: \", len(tf.config.list_physical_devices('GPU')))"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 1
}