{"id":1359,"date":"2024-02-05T19:48:59","date_gmt":"2024-02-05T19:48:59","guid":{"rendered":"https:\/\/excalibursol.com\/exus\/?page_id=1359"},"modified":"2024-02-05T19:48:59","modified_gmt":"2024-02-05T19:48:59","slug":"plant-classifier","status":"publish","type":"page","link":"https:\/\/excalibursol.com\/exus\/plant-classifier\/","title":{"rendered":"Plant Classifier"},"content":{"rendered":"\n<h1 class=\"wp-block-heading\" id=\"Introduction-to-Computer-Vision:-Plant-Seedlings-Classification\">Introduction to Computer Vision: Plant Seedlings Classification<\/h1>\n\n\n\n\n\n<p>Project by Noor Aftab<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"Problem-Statement\">Problem Statement<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"Context\">Context<\/h3>\n\n\n\n<p>In recent times, the field of agriculture has been in urgent need of modernizing, since the amount of manual work people need to put in to check if plants are growing correctly is still highly extensive. Despite several advances in agricultural technology, people working in the agricultural industry still need to have the ability to sort and recognize different plants and weeds, which takes a lot of time and effort in the long term. The potential is ripe for this trillion-dollar industry to be greatly impacted by technological innovations that cut down on the requirement for manual labor, and this is where Artificial Intelligence can actually benefit the workers in this field, as&nbsp;<strong>the time and energy required to identify plant seedlings will be greatly shortened by the use of AI and Deep Learning.<\/strong>&nbsp;The ability to do so far more efficiently and even more effectively than experienced manual labor, could lead to better crop yields, the freeing up of human inolvement for higher-order agricultural decision making, and in the long term will result in more sustainable environmental practices in agriculture as well.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"Objective\">Objective<\/h3>\n\n\n\n<p>The aim of this project is to Build a Convolutional Neural Netowrk to classify plant seedlings into their respective categories.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"Data-Dictionary\">Data Dictionary<\/h3>\n\n\n\n<p>The Aarhus University Signal Processing group, in collaboration with the University of Southern Denmark, has recently released a dataset containing images of unique plants belonging to 12 different species.<\/p>\n\n\n\n<p>The data file names are:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>images.npy<\/li>\n\n\n\n<li>Label.csv<\/li>\n<\/ul>\n\n\n\n<p>Due to the large volume of data, the images were converted to the images.npy file and the labels are also put into Labels.csv, so that we can work on the data\/project seamlessly without having to worry about the high data volume.<\/p>\n\n\n\n<p>The goal of the project is to create a classifier capable of determining a plant&#8217;s species from an image.<\/p>\n\n\n\n<p><strong>List of Species<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Black-grass<\/li>\n\n\n\n<li>Charlock<\/li>\n\n\n\n<li>Cleavers<\/li>\n\n\n\n<li>Common Chickweed<\/li>\n\n\n\n<li>Common Wheat<\/li>\n\n\n\n<li>Fat Hen<\/li>\n\n\n\n<li>Loose Silky-bent<\/li>\n\n\n\n<li>Maize<\/li>\n\n\n\n<li>Scentless Mayweed<\/li>\n\n\n\n<li>Shepherds Purse<\/li>\n\n\n\n<li>Small-flowered Cranesbill<\/li>\n\n\n\n<li>Sugar beet<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"Importing-Libraries\">Importing Libraries<\/h2>\n\n\n\n<p>In&nbsp;[&nbsp;]:<strong>import<\/strong> 
## Importing Libraries

```python
import os

# Libraries to manipulate data
import numpy as np
import pandas as pd

# Libraries to visualize data
import matplotlib.pyplot as plt
import math
import cv2
import seaborn as sns

# TensorFlow modules
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D, BatchNormalization
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras import backend
from tensorflow.keras.callbacks import ReduceLROnPlateau

# scikit-learn utilities
from sklearn import preprocessing
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import LabelBinarizer

# Display images using OpenCV inside Colab
from google.colab.patches import cv2_imshow

import random
import warnings
warnings.filterwarnings('ignore')
```

## Load the dataset

```python
from google.colab import drive
drive.mount('/content/drive')
```

```
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
```

### Loading images and labels

```python
# Load the images file of the dataset; since it is a .npy file, we use np.load()
images = np.load('/content/drive/MyDrive/Plant_Classification/images.npy')

# Load the labels file of the dataset
labels = pd.read_csv('/content/drive/MyDrive/Plant_Classification/Labels.csv')
```
## Data Overview

### Understand the shape of the dataset

```python
# Print the shape of images and labels
print("Images: ", images.shape)
print("Labels:", labels.shape)
```

```
Images:  (4750, 128, 128, 3)
Labels: (4750, 1)
```

### Plotting random images from our dataset

```python
# Define a function to visualize a few images from the dataset
def plot_images(images, labels):
    # Get the unique categories
    categories = np.unique(labels)

    # Map row index -> label string from the 'Label' column of the DataFrame
    keys = dict(labels['Label'])

    # Create a 3x4 grid of images
    rows = 3
    cols = 4

    # Create a new figure, 10 inches wide and 8 inches tall
    fig = plt.figure(figsize=(10, 8))

    # Fill the grid with randomly chosen images
    for i in range(cols):
        for j in range(rows):
            # Generate a random index within the range of the number of labels
            random_index = np.random.randint(0, len(labels))
            # Add a subplot to the figure, with 'rows' rows and 'cols' columns
            ax = fig.add_subplot(rows, cols, i * rows + j + 1)
            # Display the image at the random index in the 'images' array
            ax.imshow(images[random_index, :])
            # Title each image with its label from the 'keys' dictionary
            ax.set_title(keys[random_index])

    # Show the entire figure with the grid of images
    plt.show()
```

```python
# Pass the images and labels to the function and plot the images with their labels
plot_images(images, labels)
```

![Random sample of seedling images with their labels](https://excalibursol.com/exus/wp-content/uploads/2024/02/image-14.png)
class=\"wp-block-heading\" id=\"Checking-for-Imbalanced-Dataset\">Checking for Imbalanced Dataset<\/h2>\n\n\n\n<p>In&nbsp;[&nbsp;]:<em>#Checking for distribution of classes in the dataset to check of imbalnced data<\/em><em># Calculate the counts of each category for the plot<\/em> category_counts <strong>=<\/strong> labels[&#8216;Label&#8217;]<strong>.<\/strong>value_counts() <em># Calculate the percentage of each category<\/em> category_percentage <strong>=<\/strong> category_counts <strong>\/<\/strong> category_counts<strong>.<\/strong>sum() <strong>*<\/strong> 100 <em># Setting the default figure size for the plots to 10&#215;7 inches for better readability<\/em> plt<strong>.<\/strong>rcParams[&#8220;figure.figsize&#8221;] <strong>=<\/strong> (15,7) <em># Create a count plot with Seaborn<\/em> ax <strong>=<\/strong> sns<strong>.<\/strong>countplot(x<strong>=<\/strong>&#8216;Label&#8217;, data<strong>=<\/strong>labels, order<strong>=<\/strong>category_counts<strong>.<\/strong>index, palette<strong>=<\/strong>&#8216;Greens_r&#8217;) <em># Labeling the x-axis as &#8216;Plant Categories&#8217;<\/em> plt<strong>.<\/strong>xlabel(&#8216;Plant Categories&#8217;) <em># Rotating the x-axis labels by 90 degrees to prevent overlapping and improve readability<\/em> plt<strong>.<\/strong>xticks(rotation<strong>=<\/strong>90) <em># Annotating the percentage of each category above the bars<\/em><strong>for<\/strong> i, p <strong>in<\/strong> enumerate(ax<strong>.<\/strong>patches): <em># Calculate the height to place the annotation correctly<\/em> height <strong>=<\/strong> p<strong>.<\/strong>get_height() <em># Adding text annotation and formatting to show up to two decimal places<\/em> ax<strong>.<\/strong>text(p<strong>.<\/strong>get_x() <strong>+<\/strong> p<strong>.<\/strong>get_width()<strong>\/<\/strong>2., height <strong>+<\/strong> 3, &#8216;{:1.2f}%&#8217;<strong>.<\/strong>format(category_percentage[i]), ha<strong>=<\/strong>&#8220;center&#8221;) <em># Displaying the plot<\/em> plt<strong>.<\/strong>show()<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"633\" src=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-1024x633.png\" alt=\"\" class=\"wp-image-1360\" srcset=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-1024x633.png 1024w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-300x185.png 300w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-768x475.png 768w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-600x371.png 600w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image.png 1238w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"Observations\">Observations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>From the chart, we can see that the classes are not evenly distributed. 
### Converting the images from BGR to RGB

```python
# Convert all images from Blue-Green-Red to Red-Green-Blue format
for i in range(len(images)):
    images[i] = cv2.cvtColor(images[i], cv2.COLOR_BGR2RGB)
```
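Since the images live in a single NumPy array, the same conversion can also be done in one vectorized step by reversing the channel axis. An equivalent one-liner, not used in this notebook:

```python
# Reversing the last axis swaps the B and R channels for the whole array at once
images = images[..., ::-1].copy()  # .copy() keeps the result contiguous in memory
```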
## Reducing the size of images

```python
# The 128x128 images are large, so training on them may be computationally expensive.
# Therefore, it is preferable to reduce the image size from 128 to 64.

# Create a new list to hold the images with the updated dimension of 64 x 64
images_decreased = []
height = 64   # define the height as 64
width = 64    # define the width as 64
dimensions = (width, height)

for i in range(len(images)):
    images_decreased.append(cv2.resize(images[i], dimensions, interpolation=cv2.INTER_LINEAR))
```

**Image before resizing**

```python
plt.imshow(images[3])
```

```
<matplotlib.image.AxesImage at 0x78f7bd9c68f0>
```

![Sample image at the original 128x128 resolution](https://excalibursol.com/exus/wp-content/uploads/2024/02/image-19.png)

**Image after resizing**

```python
plt.imshow(images_decreased[3])
```

```
<matplotlib.image.AxesImage at 0x78f7b79bb220>
```

![The same image after resizing to 64x64](https://excalibursol.com/exus/wp-content/uploads/2024/02/image-1.png)

### Data Preparation for Modeling

- Because the dataset is fairly small, we hold out 10% of the data for testing and then 10% of the remainder for validation (about 9% of the total), leaving roughly 81% for training.
- We use the train_test_split() function from scikit-learn twice to split the dataset into three parts: train, validation, and test.

```python
# Split the data into a temporary set and a test set
X_temp, X_test, y_temp, y_test = train_test_split(
    np.array(images_decreased), labels,
    test_size=0.1, random_state=42, stratify=labels)

# Split the temporary set into a training set and a validation set
X_train, X_val, y_train, y_val = train_test_split(
    X_temp, y_temp,
    test_size=0.1, random_state=42, stratify=y_temp)
```

```python
print(X_train.shape, y_train.shape)  # shape of the training data and labels
print(X_val.shape, y_val.shape)      # shape of the validation data and labels
print(X_test.shape, y_test.shape)    # shape of the test data and labels
```

```
(3847, 64, 64, 3) (3847, 1)
(428, 64, 64, 3) (428, 1)
(475, 64, 64, 3) (475, 1)
```

## Encoding the categorical data using Label Binarizer

```python
# LabelBinarizer translates categorical labels into one-hot vectors
enc = LabelBinarizer()

# Fit on the training labels and transform them
y_train_encoded = enc.fit_transform(y_train)

# Transform the validation and test labels with the same encoder
y_val_encoded = enc.transform(y_val)
y_test_encoded = enc.transform(y_test)
```

```python
# Check the shape of the target variables after encoding
y_train_encoded.shape, y_val_encoded.shape, y_test_encoded.shape
```

```
((3847, 12), (428, 12), (475, 12))
```
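To make the encoding concrete, here is a small standalone example (toy labels, not taken from the dataset) showing the one-hot matrix LabelBinarizer produces, with one column per class in alphabetical order:

```python
from sklearn.preprocessing import LabelBinarizer

toy = ['Maize', 'Charlock', 'Maize', 'Cleavers']  # toy example labels
demo = LabelBinarizer()
print(demo.fit_transform(toy))  # one row per label, one column per class
# [[0 0 1]
#  [1 0 0]
#  [0 0 1]
#  [0 1 0]]
print(demo.classes_)            # ['Charlock' 'Cleavers' 'Maize']
```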
### Data Normalization

Since the **image pixel values range from 0-255**, our method of normalization here will be **scaling**.

- We **divide all the pixel values by 255 to standardize the images to values between 0 and 1.**

```python
# Normalize the image pixels of train, validation and test data
X_train_normalized = X_train.astype('float32') / 255.0
X_val_normalized = X_val.astype('float32') / 255.0
X_test_normalized = X_test.astype('float32') / 255.0
```

## Model Building

We create a base model using 3 pairs of convolution and pooling layers.

- Next, we flatten the feature maps with Flatten().
- We follow this with a fully connected layer of 16 neurons, followed by a dropout layer.
- Finally, we add the output layer with softmax activation and compile with the Adam optimizer.

```python
# Clear the backend
backend.clear_session()
```

```python
# Fix the seed for the random number generators
np.random.seed(42)
random.seed(42)
tf.random.set_seed(42)
```

```python
# Initialize the model
model1 = Sequential()

# Feature extraction
# First layer: 128 filters, relu activation, 'same' padding to keep the output size equal to the input
model1.add(Conv2D(128, (3, 3), activation='relu', padding='same', input_shape=(64, 64, 3)))
# Max pooling to reduce the spatial dimensions
model1.add(MaxPooling2D((2, 2), padding='same'))

# Two similar convolution and max-pooling blocks with relu activation
model1.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model1.add(MaxPooling2D((2, 2), padding='same'))
model1.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model1.add(MaxPooling2D((2, 2), padding='same'))

# Flatten the output of the convolution and pooling stack
model1.add(Flatten())

# Fully connected dense layer with 16 neurons
model1.add(Dense(16, activation='relu'))
model1.add(Dropout(0.3))

# Output layer with 12 neurons and softmax activation, since this is a multi-class classification problem
model1.add(Dense(12, activation='softmax'))

# Use the Adam optimizer
opt = Adam()

# Compile the model
model1.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])

# Generate the model summary
model1.summary()
```

```
Model: "sequential"
_________________________________________________________________
 Layer (type)                   Output Shape          Param #
=================================================================
 conv2d (Conv2D)                (None, 64, 64, 128)   3584
 max_pooling2d (MaxPooling2D)   (None, 32, 32, 128)   0
 conv2d_1 (Conv2D)              (None, 32, 32, 64)    73792
 max_pooling2d_1 (MaxPooling2D) (None, 16, 16, 64)    0
 conv2d_2 (Conv2D)              (None, 16, 16, 32)    18464
 max_pooling2d_2 (MaxPooling2D) (None, 8, 8, 32)      0
 flatten (Flatten)              (None, 2048)          0
 dense (Dense)                  (None, 16)            32784
 dropout (Dropout)              (None, 16)            0
 dense_1 (Dense)                (None, 12)            204
=================================================================
Total params: 128828 (503.23 KB)
Trainable params: 128828 (503.23 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
```
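As a sanity check on the summary, each parameter count follows from kernel size × input channels × filters, plus one bias per filter (or, for dense layers, inputs × units plus one bias per unit):

- conv2d: (3·3·3 + 1) · 128 = 3584
- conv2d_1: (3·3·128 + 1) · 64 = 73792
- dense: 2048 · 16 + 16 = 32784
- dense_1: 16 · 12 + 12 = 204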
### Fitting the model on train data

```python
# Fit the model on the training data, using the validation data for validation
history_1 = model1.fit(X_train_normalized, y_train_encoded,
                       epochs=30,
                       validation_data=(X_val_normalized, y_val_encoded),
                       batch_size=32,
                       verbose=2)
```

```
Epoch 1/30
121/121 - 3s - loss: 2.4543 - accuracy: 0.1053 - val_loss: 2.4388 - val_accuracy: 0.1379 - 3s/epoch - 24ms/step
Epoch 2/30
121/121 - 1s - loss: 2.3920 - accuracy: 0.1643 - val_loss: 2.1059 - val_accuracy: 0.3575 - 1s/epoch - 9ms/step
Epoch 3/30
121/121 - 1s - loss: 2.0629 - accuracy: 0.3067 - val_loss: 1.8427 - val_accuracy: 0.3925 - 1s/epoch - 9ms/step
Epoch 4/30
121/121 - 1s - loss: 1.9083 - accuracy: 0.3361 - val_loss: 1.7196 - val_accuracy: 0.4252 - 1s/epoch - 9ms/step
Epoch 5/30
121/121 - 1s - loss: 1.7649 - accuracy: 0.3767 - val_loss: 1.5800 - val_accuracy: 0.4720 - 1s/epoch - 10ms/step
Epoch 6/30
121/121 - 1s - loss: 1.6525 - accuracy: 0.4068 - val_loss: 1.4383 - val_accuracy: 0.5350 - 1s/epoch - 9ms/step
Epoch 7/30
121/121 - 1s - loss: 1.5677 - accuracy: 0.4307 - val_loss: 1.3422 - val_accuracy: 0.5327 - 1s/epoch - 9ms/step
Epoch 8/30
121/121 - 1s - loss: 1.4942 - accuracy: 0.4562 - val_loss: 1.2520 - val_accuracy: 0.5654 - 1s/epoch - 9ms/step
Epoch 9/30
121/121 - 1s - loss: 1.4390 - accuracy: 0.4799 - val_loss: 1.1876 - val_accuracy: 0.6051 - 1s/epoch - 9ms/step
Epoch 10/30
121/121 - 1s - loss: 1.3807 - accuracy: 0.5035 - val_loss: 1.2670 - val_accuracy: 0.5794 - 1s/epoch - 9ms/step
Epoch 11/30
121/121 - 1s - loss: 1.3182 - accuracy: 0.5201 - val_loss: 1.1684 - val_accuracy: 0.6168 - 1s/epoch - 9ms/step
Epoch 12/30
121/121 - 1s - loss: 1.3135 - accuracy: 0.5235 - val_loss: 1.0868 - val_accuracy: 0.6355 - 1s/epoch - 9ms/step
Epoch 13/30
121/121 - 1s - loss: 1.2812 - accuracy: 0.5295 - val_loss: 1.1118 - val_accuracy: 0.6285 - 1s/epoch - 9ms/step
Epoch 14/30
121/121 - 1s - loss: 1.2376 - accuracy: 0.5472 - val_loss: 1.0302 - val_accuracy: 0.6449 - 1s/epoch - 10ms/step
Epoch 15/30
121/121 - 1s - loss: 1.2220 - accuracy: 0.5513 - val_loss: 1.0038 - val_accuracy: 0.6869 - 1s/epoch - 9ms/step
Epoch 16/30
121/121 - 1s - loss: 1.2179 - accuracy: 0.5558 - val_loss: 1.0013 - val_accuracy: 0.6846 - 1s/epoch - 9ms/step
Epoch 17/30
121/121 - 1s - loss: 1.1841 - accuracy: 0.5638 - val_loss: 0.9745 - val_accuracy: 0.6893 - 1s/epoch - 10ms/step
Epoch 18/30
121/121 - 1s - loss: 1.1281 - accuracy: 0.5854 - val_loss: 0.9683 - val_accuracy: 0.6893 - 1s/epoch - 10ms/step
Epoch 19/30
121/121 - 1s - loss: 1.1101 - accuracy: 0.5896 - val_loss: 0.9784 - val_accuracy: 0.6659 - 1s/epoch - 10ms/step
Epoch 20/30
121/121 - 1s - loss: 1.0669 - accuracy: 0.6093 - val_loss: 0.9875 - val_accuracy: 0.7033 - 1s/epoch - 10ms/step
Epoch 21/30
121/121 - 1s - loss: 1.0897 - accuracy: 0.6049 - val_loss: 0.9850 - val_accuracy: 0.6846 - 1s/epoch - 10ms/step
Epoch 22/30
121/121 - 1s - loss: 1.0730 - accuracy: 0.6044 - val_loss: 0.9444 - val_accuracy: 0.7009 - 1s/epoch - 10ms/step
Epoch 23/30
121/121 - 1s - loss: 1.0391 - accuracy: 0.6174 - val_loss: 0.9757 - val_accuracy: 0.7126 - 1s/epoch - 10ms/step
Epoch 24/30
121/121 - 1s - loss: 1.0300 - accuracy: 0.6220 - val_loss: 0.9271 - val_accuracy: 0.6986 - 1s/epoch - 10ms/step
Epoch 25/30
121/121 - 1s - loss: 1.0076 - accuracy: 0.6309 - val_loss: 0.9220 - val_accuracy: 0.7126 - 1s/epoch - 10ms/step
Epoch 26/30
121/121 - 1s - loss: 0.9627 - accuracy: 0.6444 - val_loss: 0.8981 - val_accuracy: 0.7150 - 1s/epoch - 10ms/step
Epoch 27/30
121/121 - 1s - loss: 0.9734 - accuracy: 0.6348 - val_loss: 0.9724 - val_accuracy: 0.7056 - 1s/epoch - 9ms/step
Epoch 28/30
121/121 - 1s - loss: 0.9421 - accuracy: 0.6543 - val_loss: 0.9244 - val_accuracy: 0.7243 - 1s/epoch - 9ms/step
Epoch 29/30
121/121 - 1s - loss: 0.9490 - accuracy: 0.6501 - val_loss: 0.9061 - val_accuracy: 0.7056 - 1s/epoch - 10ms/step
Epoch 30/30
121/121 - 1s - loss: 0.9277 - accuracy: 0.6631 - val_loss: 0.8989 - val_accuracy: 0.7173 - 1s/epoch - 9ms/step
```

### Model Evaluation

```python
plt.plot(history_1.history['accuracy'])
plt.plot(history_1.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epochs')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
```

![Training and validation accuracy of model 1 over 30 epochs](https://excalibursol.com/exus/wp-content/uploads/2024/02/image-4-1024x518.png)

### Observations:

- The fluctuating validation loss and accuracy, especially towards the end of training, are indicative of potential overfitting.
- **Training Time:** The model trains relatively quickly, with each epoch taking around 1-2 seconds, which is a positive aspect.
- **Final Accuracy:** Training accuracy ends at 0.6631 and validation accuracy at about 0.72 (0.7173 in the final epoch), which suggests the model achieves reasonable performance on the validation data, but there is still room for improvement.
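The history object also records the losses, so a companion plot of training versus validation loss (not part of the original run, but often helpful for diagnosing the overfitting noted above) can be produced the same way:

```python
# Plot training and validation loss from the same history object
plt.plot(history_1.history['loss'])
plt.plot(history_1.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epochs')
plt.legend(['Train', 'Validation'], loc='upper right')
plt.show()
```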
**Evaluate the model on test data**

```python
# X_test_normalized: the normalized test feature data
# y_test_encoded: the encoded test target labels
# verbose=2: display evaluation results
accuracy = model1.evaluate(X_test_normalized, y_test_encoded, verbose=2)
```

```
15/15 - 0s - loss: 1.0660 - accuracy: 0.6611 - 86ms/epoch - 6ms/step
```

```python
# The output is a probability for each category
y_pred = model1.predict(X_test_normalized)
```

```
15/15 [==============================] - 0s 3ms/step
```

### Observations

- The overall accuracy of the model is 0.6611, which suggests that it correctly predicts the plant species approximately 66.11% of the time on the test set.
- The classes are not equally easy or difficult for the model to predict. Some classes have higher precision and recall, while others have lower values, indicating variability in the model's performance across different species.

**Plotting the Confusion Matrix**

```python
# Obtain categorical variables from y_test_encoded and y_pred
y_pred_arg = np.argmax(y_pred, axis=1)
y_test_arg = np.argmax(y_test_encoded, axis=1)

# Plot the confusion matrix with a green colormap
confusion_matrix = tf.math.confusion_matrix(y_test_arg, y_pred_arg)
f, ax = plt.subplots(figsize=(12, 12))
sns.heatmap(
    confusion_matrix,
    annot=True,
    linewidths=0.4,
    fmt='d',
    square=True,
    ax=ax,
    cmap='Greens'  # set the colormap to green
)

# Set labels on both axes
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True labels')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(list(enc.classes_), rotation=40)
ax.yaxis.set_ticklabels(list(enc.classes_), rotation=20)
plt.show()
```

![Confusion matrix for model 1 on the test set](https://excalibursol.com/exus/wp-content/uploads/2024/02/image-10-1024x957.png)

```python
# Calculate and print the classification report
cr1 = metrics.classification_report(y_test_arg, y_pred_arg)
print(cr1)
```

```
              precision    recall  f1-score   support

           0       0.50      0.04      0.07        26
           1       0.89      0.85      0.87        39
           2       0.88      0.76      0.81        29
           3       0.81      0.85      0.83        61
           4       0.33      0.09      0.14        22
           5       0.71      0.85      0.77        48
           6       0.56      0.92      0.69        65
           7       0.69      0.50      0.58        22
           8       0.67      0.65      0.66        52
           9       0.63      0.52      0.57        23
          10       0.87      0.82      0.85        50
          11       0.71      0.79      0.75        38

    accuracy                           0.71       475
   macro avg       0.69      0.64      0.63       475
weighted avg       0.71      0.71      0.69       475
```
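The report indexes the classes 0-11 in the encoder's (alphabetical) order, so a quick way to map the numeric indices back to species names is:

```python
# Map the numeric class indices in the report back to species names
for idx, name in enumerate(enc.classes_):
    print(idx, name)
# 0 -> Black-grass, 1 -> Charlock, ..., 11 -> Sugar beet
```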
## Model Performance Improvement

**Reducing the Learning Rate:**

**ReduceLROnPlateau()** is a callback that decreases the learning rate by some factor whenever the monitored metric stops improving for a set number of epochs. Training then continues at the smaller learning rate, which may allow the loss to start decreasing again. If the loss still does not improve, the callback reduces the learning rate again (down to a set minimum) in a further attempt to achieve a lower loss.

```python
# Monitor val_accuracy; halve the learning rate after 3 stagnant epochs, down to a floor of 1e-5
learning_rate_reduction = ReduceLROnPlateau(monitor='val_accuracy',
                                            patience=3,
                                            verbose=1,
                                            factor=0.5,
                                            min_lr=0.00001)
```
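With factor=0.5 and patience=3, each trigger halves the rate. Starting from Adam's default of 0.001 (the `lr: 0.0010` visible at the start of the training log below), the schedule steps through 0.0005, 0.00025, 0.000125, ... down to the min_lr floor. A tiny illustration of that halving sequence:

```python
# Sketch of the halving schedule this callback produces (assuming Adam's default 1e-3)
lr = 1e-3
while lr > 1e-5:
    print(f"{lr:.6f}")
    lr = max(lr * 0.5, 1e-5)
# 0.001000, 0.000500, 0.000250, 0.000125, ...
```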
# Data Augmentation

```python
# Clear the backend
from tensorflow.keras import backend
backend.clear_session()

# Fix the random seed generators
import random
np.random.seed(42)
random.seed(42)
tf.random.set_seed(42)
```

```python
# Augment the training images with rotations of up to 20 degrees
train_datagen = ImageDataGenerator(
    rotation_range=20,    # set the rotation range to 20 degrees
    fill_mode='nearest'   # fill newly exposed pixels with the nearest value
)
```
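To see what the generator actually feeds the model, one can pull a single augmented batch and display a few images. A small sketch (this preview is not part of the original run; it reuses the notebook's `X_train_normalized` and `y_train_encoded`):

```python
# Preview a few augmented training images
batch = next(train_datagen.flow(X_train_normalized, y_train_encoded, batch_size=4, shuffle=False))
fig = plt.figure(figsize=(8, 2))
for k in range(4):
    ax = fig.add_subplot(1, 4, k + 1)
    ax.imshow(batch[0][k])  # batch[0] holds the images, batch[1] the one-hot labels
    ax.axis('off')
plt.show()
```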
```python
# Initialize a sequential model
model2 = Sequential()

# First conv layer: 64 filters, 3x3 kernel; 'same' padding keeps the output size equal to the input
# input_shape denotes the input image dimensions
model2.add(Conv2D(64, (3, 3), activation='relu', padding="same", input_shape=(64, 64, 3)))
# Max pooling to reduce the size of the first conv layer's output
model2.add(MaxPooling2D((2, 2), padding='same'))

model2.add(Conv2D(32, (3, 3), activation='relu', padding="same"))
model2.add(MaxPooling2D((2, 2), padding='same'))
model2.add(BatchNormalization())

# Flatten the output of the conv stack to make it ready for dense connections
model2.add(Flatten())

# Fully connected dense layer with 16 neurons
model2.add(Dense(16, activation='relu'))
# Dropout with dropout_rate=0.3
model2.add(Dropout(0.3))

# Output layer with 12 neurons and softmax activation, since this is a multi-class classification problem
model2.add(Dense(12, activation='softmax'))

# Initialize the Adam optimizer and compile the model
opt = Adam()
model2.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])

# Generate the summary of the model
model2.summary()
```

```
Model: "sequential"
_________________________________________________________________
 Layer (type)                             Output Shape        Param #
=================================================================
 conv2d (Conv2D)                          (None, 64, 64, 64)  1792
 max_pooling2d (MaxPooling2D)             (None, 32, 32, 64)  0
 conv2d_1 (Conv2D)                        (None, 32, 32, 32)  18464
 max_pooling2d_1 (MaxPooling2D)           (None, 16, 16, 32)  0
 batch_normalization (BatchNormalization) (None, 16, 16, 32)  128
 flatten (Flatten)                        (None, 8192)        0
 dense (Dense)                            (None, 16)          131088
 dropout (Dropout)                        (None, 16)          0
 dense_1 (Dense)                          (None, 12)          204
=================================================================
Total params: 151676 (592.48 KB)
Trainable params: 151612 (592.23 KB)
Non-trainable params: 64 (256.00 Byte)
_________________________________________________________________
```

```python
# Set the number of epochs and batch size
epochs = 30
batch_size = 64

# Fit the model using the ImageDataGenerator with data augmentation
history_2 = model2.fit(
    train_datagen.flow(X_train_normalized, y_train_encoded, batch_size=batch_size, shuffle=False),
    epochs=epochs,
    steps_per_epoch=X_train_normalized.shape[0] // batch_size,
    validation_data=(X_val_normalized, y_val_encoded),
    verbose=1,
    callbacks=[learning_rate_reduction]
)
```

```
Epoch 1/30
60/60 [==============================] - 6s 73ms/step - loss: 2.1460 - accuracy: 0.2464 - val_loss: 2.4110 - val_accuracy: 0.1425 - lr: 0.0010
Epoch 2/30
60/60 [==============================] - 4s 72ms/step - loss: 1.6699 - accuracy: 0.4187 - val_loss: 2.2931 - val_accuracy: 0.2079 - lr: 0.0010
Epoch 3/30
60/60 [==============================] - 4s 72ms/step - loss: 1.4746 - accuracy: 0.4893 - val_loss: 2.1898 - val_accuracy: 0.2921 - lr: 0.0010
Epoch 4/30
60/60 [==============================] - 4s 74ms/step - loss: 1.4168 - accuracy: 0.5057 - val_loss: 2.0589 - val_accuracy: 0.3925 - lr: 0.0010
Epoch 5/30
60/60 [==============================] - 4s 74ms/step - loss: 1.2908 - accuracy: 0.5453 - val_loss: 1.8826 - val_accuracy: 0.5794 - lr: 0.0010
Epoch 6/30
60/60 [==============================] - 4s 71ms/step - loss: 1.1815 - accuracy: 0.5823 - val_loss: 1.6619 - val_accuracy: 0.6706 - lr: 0.0010
Epoch 7/30
60/60 [==============================] - 4s 71ms/step - loss: 1.1701 - accuracy: 0.5964 - val_loss: 1.5149 - val_accuracy: 0.6168 - lr: 0.0010
Epoch 8/30
60/60 [==============================] - 4s 70ms/step - loss: 1.1466 - accuracy: 0.5953 - val_loss: 1.3013 - val_accuracy: 0.6262 - lr: 0.0010
Epoch 9/30
Epoch 9: ReduceLROnPlateau reducing learning rate to 0.0005000000237487257.
60/60 [==============================] - 4s 70ms/step - loss: 1.0864 - accuracy: 0.6167 - val_loss: 1.3709 - val_accuracy: 0.5514 - lr: 0.0010
Epoch 10/30
60/60 [==============================] - 4s 72ms/step - loss: 0.9963 - accuracy: 0.6405 - val_loss: 1.2891 - val_accuracy: 0.5935 - lr: 5.0000e-04
Epoch 11/30
60/60 [==============================] - 4s 70ms/step - loss: 0.9561 - accuracy: 0.6590 - val_loss: 1.0101 - val_accuracy: 0.6822 - lr: 5.0000e-04
Epoch 12/30
60/60 [==============================] - 4s 71ms/step - loss: 0.9293 - accuracy: 0.6632 - val_loss: 0.8704 - val_accuracy: 0.7336 - lr: 5.0000e-04
Epoch 13/30
60/60 [==============================] - 5s 78ms/step - loss: 0.8915 - accuracy: 0.6868 - val_loss: 0.9048 - val_accuracy: 0.7056 - lr: 5.0000e-04
Epoch 14/30
60/60 [==============================] - 5s 77ms/step - loss: 0.8654 - accuracy: 0.6905 - val_loss: 0.9051 - val_accuracy: 0.7336 - lr: 5.0000e-04
Epoch 15/30
Epoch 15: ReduceLROnPlateau reducing learning rate to 0.0002500000118743628.
60/60 [==============================] - 5s 76ms/step - loss: 0.8497 - accuracy: 0.7031 - val_loss: 1.0092 - val_accuracy: 0.6776 - lr: 5.0000e-04
Epoch 16/30
60/60 [==============================] - 4s 74ms/step - loss: 0.8292 - accuracy: 0.6994 - val_loss: 0.7460 - val_accuracy: 0.7757 - lr: 2.5000e-04
Epoch 17/30
60/60 [==============================] - 4s 71ms/step - loss: 0.7911 - accuracy: 0.7172 - val_loss: 1.1533 - val_accuracy: 0.6542 - lr: 2.5000e-04
Epoch 18/30
60/60 [==============================] - 4s 72ms/step - loss: 0.7830 - accuracy: 0.7166 - val_loss: 0.8083 - val_accuracy: 0.7593 - lr: 2.5000e-04
Epoch 19/30
60/60 [==============================] - 4s 71ms/step - loss: 0.7788 - accuracy: 0.7145 - val_loss: 0.7060 - val_accuracy: 0.8037 - lr: 2.5000e-04
Epoch 20/30
60/60 [==============================] - 4s 72ms/step - loss: 0.7684 - accuracy: 0.7179 - val_loss: 0.9217 - val_accuracy: 0.6916 - lr: 2.5000e-04
Epoch 21/30
60/60 [==============================] - 4s 75ms/step - loss: 0.7539 - accuracy: 0.7285 - val_loss: 0.6886 - val_accuracy: 0.8131 - lr: 2.5000e-04
Epoch 22/30
60/60 [==============================] - 4s 70ms/step - loss: 0.7550 - accuracy: 0.7153 - val_loss: 0.9083 - val_accuracy: 0.6986 - lr: 2.5000e-04
Epoch 23/30
60/60 [==============================] - 4s 72ms/step - loss: 0.7723 - accuracy: 0.7177 - val_loss: 0.7199 - val_accuracy: 0.7921 - lr: 2.5000e-04
Epoch 24/30
Epoch 24: ReduceLROnPlateau reducing learning rate to 0.0001250000059371814.
60/60 [==============================] - 4s 70ms/step - loss: 0.7410 - accuracy: 0.7269 - val_loss: 0.7285 - val_accuracy: 0.8037 - lr: 2.5000e-04
Epoch 25/30
60/60 [==============================] - 4s 70ms/step - loss: 0.7417 - accuracy: 0.7322 - val_loss: 0.8511 - val_accuracy: 0.7383 - lr: 1.2500e-04
Epoch 26/30
60/60 [==============================] - 4s 71ms/step - loss: 0.7136 - accuracy: 0.7420 - val_loss: 0.6797 - val_accuracy: 0.8061 - lr: 1.2500e-04
Epoch 27/30
Epoch 27: ReduceLROnPlateau reducing learning rate to 6.25000029685907e-05.
60/60 [==============================] - 4s 70ms/step - loss: 0.7221 - accuracy: 0.7380 - val_loss: 0.7485 - val_accuracy: 0.7850 - lr: 1.2500e-04
Epoch 28/30
60/60 [==============================] - 4s 72ms/step - loss: 0.6899 - accuracy: 0.7486 - val_loss: 0.6873 - val_accuracy: 0.8014 - lr: 6.2500e-05
Epoch 29/30
60/60 [==============================] - 4s 74ms/step - loss: 0.7054 - accuracy: 0.7420 - val_loss: 0.7191 - val_accuracy: 0.8014 - lr: 6.2500e-05
Epoch 30/30
Epoch 30: ReduceLROnPlateau reducing learning rate to 3.125000148429535e-05.
60/60 [==============================] - 4s 73ms/step - loss: 0.6911 - accuracy: 0.7520 - val_loss: 0.7066 - val_accuracy: 0.7897 - lr: 6.2500e-05
```

### Model Evaluation

```python
plt.plot(history_2.history['accuracy'])
plt.plot(history_2.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
```

![Training and validation accuracy of model 2 over 30 epochs](https://excalibursol.com/exus/wp-content/uploads/2024/02/image-18-1024x518.png)

**Evaluate the model on test data**

```python
# Evaluate the model on test data
accuracy = model2.evaluate(X_test_normalized, y_test_encoded, verbose=2)
```

```
15/15 - 0s - loss: 0.8112 - accuracy: 0.7537 - 68ms/epoch - 5ms/step
```

```python
# Obtain the output probabilities
y2_pred = model2.predict(X_test_normalized)
```

```
15/15 [==============================] - 0s 2ms/step
```

```python
# Obtain categorical variables from y_test_encoded and y2_pred
y2_pred_arg = np.argmax(y2_pred, axis=1)
y2_test_arg = np.argmax(y_test_encoded, axis=1)

# Plot the confusion matrix
confusion_matrix = tf.math.confusion_matrix(y2_test_arg, y2_pred_arg)
f, ax = plt.subplots(figsize=(12, 12))
sns.heatmap(
    confusion_matrix,
    annot=True,
    linewidths=0.4,
    fmt='d',
    square=True,
    ax=ax,
    cmap='Greens'  # set the colormap to green
)

# Set labels on both axes
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True labels')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(list(enc.classes_), rotation=40)
ax.yaxis.set_ticklabels(list(enc.classes_), rotation=20)
plt.show()
```

![Confusion matrix for model 2 on the test set](https://excalibursol.com/exus/wp-content/uploads/2024/02/image-7-1024x951.png)
```python
# Calculate and print the classification report
cr2 = metrics.classification_report(y2_test_arg, y2_pred_arg)
print(cr2)
```

```
              precision    recall  f1-score   support

           0       0.38      0.35      0.36        26
           1       0.80      0.90      0.84        39
           2       0.79      0.79      0.79        29
           3       0.96      0.87      0.91        61
           4       0.80      0.36      0.50        22
           5       0.73      0.69      0.71        48
           6       0.62      0.77      0.69        65
           7       0.69      0.82      0.75        22
           8       0.75      0.90      0.82        52
           9       0.69      0.48      0.56        23
          10       0.91      0.84      0.87        50
          11       0.78      0.76      0.77        38

    accuracy                           0.75       475
   macro avg       0.74      0.71      0.72       475
weighted avg       0.76      0.75      0.75       475
```

Observations:

- The revised model has an accuracy of 0.7537, which means that it correctly predicts the class labels for 75.37% of the instances in the test set.
- The precision, recall, and F1-score vary for each class, indicating that the model's performance is better for some classes (e.g., class 1) than for others (e.g., class 0).

## Final Model

```python
# Compare the per-class precision and recall scores of Model 1 vs Model 2

# Class names and per-class scores taken from the two classification reports above
class_names = np.unique(labels)
model1_precision = [0.50, 0.89, 0.88, 0.81, 0.33, 0.71, 0.56, 0.69, 0.67, 0.63, 0.87, 0.71]
model2_precision = [0.38, 0.80, 0.79, 0.96, 0.80, 0.73, 0.62, 0.69, 0.75, 0.69, 0.91, 0.78]
model1_recall = [0.04, 0.85, 0.76, 0.85, 0.09, 0.85, 0.92, 0.50, 0.65, 0.52, 0.82, 0.79]
model2_recall = [0.35, 0.90, 0.79, 0.87, 0.36, 0.69, 0.77, 0.82, 0.90, 0.48, 0.84, 0.76]

# Width of the bars and x positions
bar_width = 0.35
index = np.arange(len(class_names))

# Precision plot
fig, ax = plt.subplots()
bar1 = ax.bar(index - bar_width / 2, model1_precision, bar_width, label='Model 1', color='lightgreen', alpha=0.7)
bar2 = ax.bar(index + bar_width / 2, model2_precision, bar_width, label='Model 2', color='darkgreen', alpha=0.7)
ax.set_xlabel('Classes')
ax.set_ylabel('Precision Scores')
ax.set_title('Precision by Class for Model 1 and Model 2')
ax.set_xticks(index)
ax.set_xticklabels(class_names, rotation=45, ha='right')
ax.legend()
plt.tight_layout()
plt.show()

# Recall plot
fig, ax = plt.subplots()
bar1 = ax.bar(index - bar_width / 2, model1_recall, bar_width, label='Model 1', color='lightgreen', alpha=0.7)
bar2 = ax.bar(index + bar_width / 2, model2_recall, bar_width, label='Model 2', color='darkgreen', alpha=0.7)
ax.set_xlabel('Classes')
ax.set_ylabel('Recall Scores')
ax.set_title('Recall by Class for Model 1 and Model 2')
ax.set_xticks(index)
ax.set_xticklabels(class_names, rotation=45, ha='right')
ax.legend()
plt.tight_layout()
plt.show()
```
![Per-class precision of Model 1 vs Model 2](https://excalibursol.com/exus/wp-content/uploads/2024/02/image-13-1024x475.png)

![Per-class recall of Model 1 vs Model 2](https://excalibursol.com/exus/wp-content/uploads/2024/02/image-11-1024x475.png)

## Observations:

- Model 2's performance is particularly strong on Charlock, Common Chickweed, Fat Hen, and Small-flowered Cranesbill, where it achieves high precision and recall scores.
- **Class-Specific Performance:** Model 2 generally outperforms Model 1 in terms of precision, recall, and F1-score for most classes. It has higher precision for many classes, meaning it is better at minimizing false positives. Model 1 has better recall for a few classes, but overall, Model 2 has better or similar recall.
- **Model 2 has Higher Accuracy:** Model 2 has a higher overall accuracy (0.75) compared to Model 1 (0.66), indicating that it makes more correct predictions on average.

Based on these observations, Model 2 is our choice for the final model.
<p>Based on these observations, Model 2 is our choice for the final model.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"Visualizing-the-prediction\">Visualizing the prediction<\/h3>\n\n\n\n<p>In&nbsp;[&nbsp;]:<em># Visualizing the predicted and correct label of images from test data<\/em> <em># First image (index 2)<\/em> plt<strong>.<\/strong>figure(figsize<strong>=<\/strong>(2, 2)) plt<strong>.<\/strong>imshow(X_test[2]) plt<strong>.<\/strong>show() <em># Predict the test data using the final model selected<\/em> predicted_label <strong>=<\/strong> model2<strong>.<\/strong>predict(X_test_normalized[2]<strong>.<\/strong>reshape(1, 64, 64, 3)) predicted_label <strong>=<\/strong> enc<strong>.<\/strong>inverse_transform(predicted_label) true_label <strong>=<\/strong> enc<strong>.<\/strong>inverse_transform(y_test_encoded)[2] print('Predicted Label:', predicted_label[0]) print('True Label:', true_label) <em># Second image (index 33)<\/em> plt<strong>.<\/strong>figure(figsize<strong>=<\/strong>(2, 2)) plt<strong>.<\/strong>imshow(X_test[33]) plt<strong>.<\/strong>show() <em># Predict the test data using the final model selected<\/em> predicted_label <strong>=<\/strong> model2<strong>.<\/strong>predict(X_test_normalized[33]<strong>.<\/strong>reshape(1, 64, 64, 3)) predicted_label <strong>=<\/strong> enc<strong>.<\/strong>inverse_transform(predicted_label) true_label <strong>=<\/strong> enc<strong>.<\/strong>inverse_transform(y_test_encoded)[33] print('Predicted Label:', predicted_label[0]) print('True Label:', true_label) <em># Third image (index 59)<\/em> plt<strong>.<\/strong>figure(figsize<strong>=<\/strong>(2, 2)) plt<strong>.<\/strong>imshow(X_test[59]) plt<strong>.<\/strong>show() <em># Predict the test data using the final model selected<\/em> predicted_label <strong>=<\/strong> model2<strong>.<\/strong>predict(X_test_normalized[59]<strong>.<\/strong>reshape(1, 64, 64, 3)) predicted_label <strong>=<\/strong> enc<strong>.<\/strong>inverse_transform(predicted_label) true_label <strong>=<\/strong> enc<strong>.<\/strong>inverse_transform(y_test_encoded)[59] print('Predicted Label:', predicted_label[0]) print('True Label:', true_label) <em># Fourth image (index 36)<\/em> plt<strong>.<\/strong>figure(figsize<strong>=<\/strong>(2, 2)) plt<strong>.<\/strong>imshow(X_test[36]) plt<strong>.<\/strong>show() <em># Predict the test data using the final model selected<\/em> predicted_label <strong>=<\/strong> model2<strong>.<\/strong>predict(X_test_normalized[36]<strong>.<\/strong>reshape(1, 64, 64, 3)) predicted_label <strong>=<\/strong> enc<strong>.<\/strong>inverse_transform(predicted_label) true_label <strong>=<\/strong> enc<strong>.<\/strong>inverse_transform(y_test_encoded)[36] print('Predicted Label:', predicted_label[0]) print('True Label:', true_label)<\/p>\n\n\n\n
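<p>The four near-identical blocks in the cell above differ only in the image index, so they can be folded into a small helper. A sketch under the same variable names used in this notebook:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>def show_prediction(idx):\n    # Display the raw test image\n    plt.figure(figsize=(2, 2))\n    plt.imshow(X_test[idx])\n    plt.show()\n    # Predict with the final model and map the one-hot probabilities back to a label\n    probs = model2.predict(X_test_normalized[idx].reshape(1, 64, 64, 3))\n    print('Predicted Label:', enc.inverse_transform(probs)[0])\n    print('True Label:', enc.inverse_transform(y_test_encoded)[idx])\n\nfor idx in (2, 33, 59, 36):\n    show_prediction(idx)<\/code><\/pre>\n\n\n\n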
src=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-6.png\" alt=\"\" class=\"wp-image-1366\" srcset=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-6.png 201w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-6-150x150.png 150w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-6-100x100.png 100w\" sizes=\"(max-width: 201px) 100vw, 201px\" \/><\/figure>\n\n\n\n<p>1\/1 [==============================] &#8211; 0s 184ms\/step Predicted Label: Small-flowered Cranesbill True Label: Small-flowered Cranesbill<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"201\" height=\"202\" src=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-9.png\" alt=\"\" class=\"wp-image-1369\" srcset=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-9.png 201w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-9-150x150.png 150w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-9-100x100.png 100w\" sizes=\"(max-width: 201px) 100vw, 201px\" \/><\/figure>\n\n\n\n<p>1\/1 [==============================] &#8211; 0s 20ms\/step Predicted Label: Cleavers True Label: Cleavers<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"201\" height=\"202\" src=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-2.png\" alt=\"\" class=\"wp-image-1363\" srcset=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-2.png 201w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-2-150x150.png 150w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-2-100x100.png 100w\" sizes=\"(max-width: 201px) 100vw, 201px\" \/><\/figure>\n\n\n\n<p>1\/1 [==============================] &#8211; 0s 23ms\/step Predicted Label: Common Chickweed True Label: Common Chickweed<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"201\" height=\"202\" src=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-5.png\" alt=\"\" class=\"wp-image-1365\" srcset=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-5.png 201w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-5-150x150.png 150w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-5-100x100.png 100w\" sizes=\"(max-width: 201px) 100vw, 201px\" \/><\/figure>\n\n\n\n<p>1\/1 [==============================] &#8211; 0s 18ms\/step Predicted Label: Shepherds Purse True Label: Shepherds Purse<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"Recommendations-and-Insights\">Recommendations and Insights<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>With an accuracy of roughly 75% on test data, the model is able to significantly reduce the time and effort required to identify these plants. With additional data, the model can be tuned and likely achieve even better results.(We will explore pre-trained models to see if we could improve perfromance further. 
<li>This model could be integrated with automatic weeding systems, allowing weeds to be targeted rather than crops, reducing the overall use of pesticides, and leading to more eco-friendly farming.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"APPENDIX:\">APPENDIX:<\/h2>\n\n\n\n<p><strong>Experimenting with pre-trained models.<\/strong><\/p>\n\n\n\n<p><strong>MobileNetV2<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>MobileNetV2 is a lightweight convolutional neural network (CNN) architecture tailored for efficient image classification and object detection, especially on mobile and embedded devices.<\/li>\n\n\n\n<li>It builds upon the original MobileNet design, focusing on enhancing efficiency and maintaining high accuracy.<\/li>\n\n\n\n<li>MobileNetV2 typically consists of approximately 3.4 million parameters and is structured with multiple layers. Its architecture includes depth-wise separable convolutions and inverted residual blocks, with a variable number of these blocks depending on the specific model variant.<\/li>\n\n\n\n<li>This efficient design is a key strength of MobileNetV2, enabling powerful computer vision tasks with relatively few parameters, making it well-suited for resource-constrained environments.<\/li>\n\n\n\n<li>We will be using this model for our next level of model building.<\/li>\n<\/ul>\n\n\n\n<p>In&nbsp;[&nbsp;]:<em># MobileNetV2 expects images of 128&#215;128 or higher dimensions, so we are using the original dataset.<\/em> <em># We split the data set again.<\/em> <strong>from<\/strong> sklearn.model_selection <strong>import<\/strong> train_test_split <em># Split the data into a temporary set and a test set<\/em> X3_temp, X3_test, y3_temp, y3_test <strong>=<\/strong> train_test_split(np<strong>.<\/strong>array(images), labels, test_size<strong>=<\/strong>0.1, random_state<strong>=<\/strong>42, stratify<strong>=<\/strong>labels) <em># Split the temporary set into a training set and a validation set<\/em> X3_train, X3_val, y3_train, y3_val <strong>=<\/strong> train_test_split(X3_temp, y3_temp, test_size<strong>=<\/strong>0.1, random_state<strong>=<\/strong>42, stratify<strong>=<\/strong>y3_temp)<\/p>\n\n\n\n<p>In&nbsp;[&nbsp;]:print(X3_train<strong>.<\/strong>shape, y3_train<strong>.<\/strong>shape) <em># Check the shape of the training data and labels<\/em> print(X3_val<strong>.<\/strong>shape, y3_val<strong>.<\/strong>shape) <em># Check the shape of the validation data and labels<\/em> print(X3_test<strong>.<\/strong>shape, y3_test<strong>.<\/strong>shape)<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">(3847, 128, 128, 3) (3847, 1)\n(428, 128, 128, 3) (428, 1)\n(475, 128, 128, 3) (475, 1)<\/pre>\n\n\n\n<p>In&nbsp;[&nbsp;]:<em># Printing one of the original images<\/em> plt<strong>.<\/strong>imshow(images[3])<\/p>\n\n\n\n<p>Out[&nbsp;]:&lt;matplotlib.image.AxesImage at 0x78f7b72962f0&gt;<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"595\" height=\"586\" src=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-20.png\" alt=\"\" class=\"wp-image-1380\" srcset=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-20.png 595w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-20-300x295.png 300w\" sizes=\"(max-width: 595px) 100vw, 595px\" \/><\/figure>\n\n\n\n
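<p>Since the appendix re-splits the raw 128&#215;128 images, this is also a natural point to consider on-the-fly augmentation of the training set, something not used in the runs reported here but a common way to get more out of a dataset of this size. A sketch using Keras preprocessing layers (the pipeline below is illustrative, not tuned):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from tensorflow.keras import layers\n\n# Hypothetical augmentation pipeline; flips, small rotations, and zooms\n# are label-preserving for overhead seedling photos\naugment = tf.keras.Sequential([\n    layers.RandomFlip('horizontal_and_vertical'),\n    layers.RandomRotation(0.1),  # up to about 10% of a full turn (~36 degrees)\n    layers.RandomZoom(0.1),\n])\n\n# It could then be applied as the first block of the model, e.g.:\n# model3 = Sequential([augment, base_model, ...])<\/code><\/pre>\n\n\n\n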
<p>In&nbsp;[499]:<em># LabelBinarizer helps translate the category labels into one-hot vectors<\/em> <strong>from<\/strong> sklearn.preprocessing <strong>import<\/strong> LabelBinarizer <em># Initialize the LabelBinarizer<\/em> enc <strong>=<\/strong> LabelBinarizer() <em># Fit and transform y3_train<\/em> y3_train_encoded <strong>=<\/strong> enc<strong>.<\/strong>fit_transform(y3_train) <em># Transform y3_val<\/em> y3_val_encoded <strong>=<\/strong> enc<strong>.<\/strong>transform(y3_val) <em># Transform y3_test<\/em> y3_test_encoded <strong>=<\/strong> enc<strong>.<\/strong>transform(y3_test)<\/p>\n\n\n\n<p>In&nbsp;[500]:<em># Printing the shapes of the encoded train, validation, and test targets<\/em> y3_train_encoded<strong>.<\/strong>shape, y3_val_encoded<strong>.<\/strong>shape, y3_test_encoded<strong>.<\/strong>shape<\/p>\n\n\n\n<p>Out[500]:((3847, 12), (428, 12), (475, 12))<\/p>\n\n\n\n<p>In&nbsp;[501]:<em># Code to normalize the image pixels of train, test and validation data<\/em> X3_train_normalized <strong>=<\/strong> X3_train<strong>.<\/strong>astype('float32') <strong>\/<\/strong> 255.0 X3_val_normalized <strong>=<\/strong> X3_val<strong>.<\/strong>astype('float32') <strong>\/<\/strong> 255.0 X3_test_normalized <strong>=<\/strong> X3_test<strong>.<\/strong>astype('float32') <strong>\/<\/strong> 255.0<\/p>\n\n\n\n
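<p>One detail worth flagging: dividing by 255 maps pixels to [0, 1], whereas MobileNetV2&#8217;s ImageNet weights were trained on inputs scaled to [-1, 1]. Keras ships a matching helper, and aligning the preprocessing with the pre-trained weights may recover a little extra transfer accuracy (not tested in this run):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from tensorflow.keras.applications.mobilenet_v2 import preprocess_input\n\n# preprocess_input rescales raw pixel values from [0, 255] to [-1, 1],\n# matching the preprocessing used when MobileNetV2 was trained on ImageNet\nX3_train_pp = preprocess_input(X3_train.astype('float32'))\nX3_val_pp = preprocess_input(X3_val.astype('float32'))\nX3_test_pp = preprocess_input(X3_test.astype('float32'))<\/code><\/pre>\n\n\n\n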
<p>In&nbsp;[502]:<em># Clearing the backend<\/em> <strong>from<\/strong> tensorflow.keras <strong>import<\/strong> backend backend<strong>.<\/strong>clear_session() <em># Fixing the random seed generators<\/em> <strong>import<\/strong> random np<strong>.<\/strong>random<strong>.<\/strong>seed(42) random<strong>.<\/strong>seed(42) tf<strong>.<\/strong>random<strong>.<\/strong>set_seed(42)<\/p>\n\n\n\n<p>In&nbsp;[503]:<strong>from<\/strong> tensorflow.keras.models <strong>import<\/strong> Sequential <strong>from<\/strong> tensorflow.keras.layers <strong>import<\/strong> Dense, Dropout, GlobalAveragePooling2D <strong>from<\/strong> tensorflow.keras.applications <strong>import<\/strong> MobileNetV2 <em># Load the MobileNetV2 model without its top (classification) layers<\/em> base_model <strong>=<\/strong> MobileNetV2(weights<strong>=<\/strong>'imagenet', include_top<strong>=<\/strong><strong>False<\/strong>, input_shape<strong>=<\/strong>(128, 128, 3)) <em># Freeze the layers of the base model<\/em> base_model<strong>.<\/strong>trainable <strong>=<\/strong> <strong>False<\/strong> <em># Initializing the model<\/em> model3 <strong>=<\/strong> Sequential() model3<strong>.<\/strong>add(base_model) model3<strong>.<\/strong>add(GlobalAveragePooling2D()) <em># Adding custom layers<\/em> model3<strong>.<\/strong>add(Dense(32, activation<strong>=<\/strong>'relu')) <em># Output layer<\/em> model3<strong>.<\/strong>add(Dense(12, activation<strong>=<\/strong>'softmax')) <em># Using softmax in the output layer as we have 12 classes<\/em> <em># Compile the model<\/em> opt <strong>=<\/strong> Adam() model3<strong>.<\/strong>compile(optimizer<strong>=<\/strong>opt, loss<strong>=<\/strong>'categorical_crossentropy', metrics<strong>=<\/strong>['accuracy']) <em># Display the model&#8217;s architecture<\/em> model3<strong>.<\/strong>summary()<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">Model: \"sequential\"\n_________________________________________________________________\n Layer (type)                        Output Shape           Param #\n=================================================================\n mobilenetv2_1.00_128 (Functional)   (None, 4, 4, 1280)     2257984\n global_average_pooling2d            (None, 1280)           0\n (GlobalAveragePooling2D)\n dense (Dense)                       (None, 32)             40992\n dense_1 (Dense)                     (None, 12)             396\n=================================================================\nTotal params: 2299372 (8.77 MB)\nTrainable params: 41388 (161.67 KB)\nNon-trainable params: 2257984 (8.61 MB)\n_________________________________________________________________<\/pre>\n\n\n\n<p>In&nbsp;[506]:<strong>from<\/strong> tensorflow.keras.callbacks <strong>import<\/strong> EarlyStopping <em># Define the EarlyStopping callback<\/em> early_stopping <strong>=<\/strong> EarlyStopping( monitor<strong>=<\/strong>'val_loss', <em># Monitor validation loss<\/em> patience<strong>=<\/strong>3, <em># Number of epochs with no improvement before stopping<\/em> restore_best_weights<strong>=<\/strong><strong>True<\/strong> <em># Restore the model weights from the best epoch<\/em> ) <em># Train the model with EarlyStopping<\/em> history_3 <strong>=<\/strong> model3<strong>.<\/strong>fit( X3_train_normalized, y3_train_encoded, epochs<strong>=<\/strong>12, batch_size<strong>=<\/strong>60, validation_data<strong>=<\/strong>(X3_val_normalized, y3_val_encoded), callbacks<strong>=<\/strong>[early_stopping] <em># Include the EarlyStopping callback<\/em> )<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">Epoch 1\/12 65\/65 [==============================] - 8s 50ms\/step - loss: 1.7020 - accuracy: 0.4432 - val_loss: 1.2138 - val_accuracy: 0.5911\nEpoch 2\/12 65\/65 [==============================] - 2s 31ms\/step - loss: 1.0019 - accuracy: 0.6743 - val_loss: 0.9319 - val_accuracy: 0.6846\nEpoch 3\/12 65\/65 [==============================] - 2s 31ms\/step - loss: 0.7734 - accuracy: 0.7458 - val_loss: 0.8262 - val_accuracy: 0.7103\nEpoch 4\/12 65\/65 [==============================] - 2s 31ms\/step - loss: 0.6380 - accuracy: 0.8001 - val_loss: 0.7619 - val_accuracy: 0.7570\nEpoch 5\/12 65\/65 [==============================] - 2s 31ms\/step - loss: 0.5504 - accuracy: 0.8271 - val_loss: 0.7210 - val_accuracy: 0.7710\nEpoch 6\/12 65\/65 [==============================] - 2s 31ms\/step - loss: 0.4813 - accuracy: 0.8503 - val_loss: 0.7080 - val_accuracy: 0.7944\nEpoch 7\/12 65\/65 [==============================] - 2s 31ms\/step - loss: 0.4269 - accuracy: 0.8721 - val_loss: 0.6937 - val_accuracy: 0.7687\nEpoch 8\/12 65\/65 [==============================] - 2s 31ms\/step - loss: 0.3882 - accuracy: 0.8893 - val_loss: 0.6552 - val_accuracy: 0.7874\nEpoch 9\/12 65\/65 [==============================] - 2s 30ms\/step - loss: 0.3421 - accuracy: 0.9002 - val_loss: 0.6714 - val_accuracy: 0.7780\nEpoch 10\/12 65\/65 [==============================] - 2s 31ms\/step - loss: 0.3240 - accuracy: 0.9038 - val_loss: 0.6509 - val_accuracy: 0.8014\nEpoch 11\/12 65\/65 [==============================] - 2s 31ms\/step - loss: 0.2855 - accuracy: 0.9212 - val_loss: 0.6491 - val_accuracy: 0.7921\nEpoch 12\/12 65\/65 [==============================] - 2s 32ms\/step - loss: 0.2644 - accuracy: 0.9244 - val_loss: 0.6487 - val_accuracy: 0.7991<\/pre>\n\n\n\n
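<p>EarlyStopping with restore_best_weights keeps the best epoch in memory; for longer runs it is often paired with ModelCheckpoint so the best weights also survive an interrupted session. A sketch (the file path is hypothetical):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from tensorflow.keras.callbacks import ModelCheckpoint\n\n# Persist the weights of the best epoch (lowest validation loss) to disk\ncheckpoint = ModelCheckpoint(\n    'model3_best.keras',  # hypothetical output path\n    monitor='val_loss',\n    save_best_only=True,\n)\n# Passed alongside early_stopping, e.g.:\n# model3.fit(..., callbacks=[early_stopping, checkpoint])<\/code><\/pre>\n\n\n\n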
<p>In&nbsp;[507]:plt<strong>.<\/strong>plot(history_3<strong>.<\/strong>history['accuracy']) plt<strong>.<\/strong>plot(history_3<strong>.<\/strong>history['val_accuracy']) plt<strong>.<\/strong>title('Model Accuracy') plt<strong>.<\/strong>ylabel('Accuracy') plt<strong>.<\/strong>xlabel('Epoch') plt<strong>.<\/strong>legend(['Train', 'Validation'], loc<strong>=<\/strong>'upper left') plt<strong>.<\/strong>show()<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"518\" src=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-3-1024x518.png\" alt=\"\" class=\"wp-image-1362\" srcset=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-3-1024x518.png 1024w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-3-300x152.png 300w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-3-768x389.png 768w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-3-600x304.png 600w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-3.png 1233w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>In&nbsp;[508]:<em># Evaluating the model on test data<\/em> accuracy <strong>=<\/strong> model3<strong>.<\/strong>evaluate(X3_test_normalized, y3_test_encoded, verbose<strong>=<\/strong>2)<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">15\/15 - 0s - loss: 0.6935 - accuracy: 0.7916 - 302ms\/epoch - 20ms\/step<\/pre>\n\n\n\n<p>In&nbsp;[509]:<em># Code to obtain the output probabilities<\/em> y3_pred <strong>=<\/strong> model3<strong>.<\/strong>predict(X3_test_normalized)<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">15\/15 [==============================] - 1s 20ms\/step<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"Observations:\">Observations:<\/h3>\n\n\n\n<p>The model achieves an overall accuracy of 79.16% on the test set. This is a high accuracy rate, indicating that the model is robust and performs well across the various classes of plant seedlings.<\/p>\n\n\n\n<p><strong>Plotting the Confusion Matrix<\/strong><\/p>\n\n\n\n<p>In&nbsp;[512]:<em># Obtaining the categorical values from y3_test_encoded and y3_pred<\/em> y3_pred_arg <strong>=<\/strong> np<strong>.<\/strong>argmax(y3_pred, axis<strong>=<\/strong>1) y3_test_arg <strong>=<\/strong> np<strong>.<\/strong>argmax(y3_test_encoded, axis<strong>=<\/strong>1) <em># Compute the confusion matrix using the confusion_matrix() function predefined in the tensorflow module<\/em> confusion_matrix <strong>=<\/strong> tf<strong>.<\/strong>math<strong>.<\/strong>confusion_matrix(y3_test_arg, y3_pred_arg) <em># Plot the confusion matrix as a heatmap<\/em> f, ax <strong>=<\/strong> plt<strong>.<\/strong>subplots(figsize<strong>=<\/strong>(12, 12)) sns<strong>.<\/strong>heatmap( confusion_matrix, annot<strong>=<\/strong><strong>True<\/strong>, linewidths<strong>=<\/strong>.4, fmt<strong>=<\/strong>\"d\", square<strong>=<\/strong><strong>True<\/strong>, ax<strong>=<\/strong>ax, cmap<strong>=<\/strong>'Greens' ) <em># Setting the labels on both axes<\/em> ax<strong>.<\/strong>set_xlabel('Predicted labels') ax<strong>.<\/strong>set_ylabel('True labels') ax<strong>.<\/strong>set_title('Confusion Matrix') ax<strong>.<\/strong>xaxis<strong>.<\/strong>set_ticklabels(list(enc<strong>.<\/strong>classes_), rotation<strong>=<\/strong>40) ax<strong>.<\/strong>yaxis<strong>.<\/strong>set_ticklabels(list(enc<strong>.<\/strong>classes_), rotation<strong>=<\/strong>20) plt<strong>.<\/strong>show()<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"951\" src=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-16-1024x951.png\" alt=\"\" class=\"wp-image-1376\" srcset=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-16-1024x951.png 1024w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-16-300x279.png 300w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-16-768x713.png 768w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-16-600x557.png 600w, https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-16.png 1084w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n
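<p>A quick way to read the heatmap numerically is to rank its off-diagonal entries, which surfaces the most frequently confused species pairs. A sketch reusing the confusion_matrix tensor computed above:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Convert the TensorFlow tensor to a NumPy array and zero the diagonal,\n# leaving only the misclassification counts\ncm = confusion_matrix.numpy().copy()\nnp.fill_diagonal(cm, 0)\n\n# Flat indices of the largest counts, most-confused pair first\nfor flat_idx in np.argsort(cm, axis=None)[::-1][:5]:\n    true_idx, pred_idx = np.unravel_index(flat_idx, cm.shape)\n    print(enc.classes_[true_idx], 'mistaken for',\n          enc.classes_[pred_idx], ':', cm[true_idx, pred_idx], 'images')<\/code><\/pre>\n\n\n\n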
<p>In&nbsp;[&nbsp;]:<em># Calculate the classification report<\/em> cr3 <strong>=<\/strong> metrics<strong>.<\/strong>classification_report(y3_test_arg, y3_pred_arg) <em># Print the classification report<\/em> print(cr3)<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">              precision    recall  f1-score   support\n\n           0       0.56      0.38      0.45        26\n           1       0.81      0.87      0.84        39\n           2       0.82      0.79      0.81        29\n           3       0.88      0.84      0.86        61\n           4       0.93      0.59      0.72        22\n           5       0.75      0.88      0.81        48\n           6       0.72      0.86      0.78        65\n           7       0.89      0.77      0.83        22\n           8       0.77      0.85      0.81        52\n           9       0.65      0.48      0.55        23\n          10       0.86      0.86      0.86        50\n          11       0.87      0.87      0.87        38\n\n    accuracy                           0.79       475\n   macro avg       0.79      0.75      0.77       475\nweighted avg       0.79      0.79      0.79       475<\/pre>\n\n\n\n<p><strong>Precision, Recall, and F1-Scores:<\/strong><\/p>\n\n\n\n<p>The precision across classes is mostly high, with &#8216;Common Chickweed&#8217; (label 3), &#8216;Common Wheat&#8217; (label 4), &#8216;Maize&#8217; (label 7), &#8216;Small-flowered Cranesbill&#8217; (label 10), and &#8216;Sugar beet&#8217; (label 11) showing particularly strong precision of 0.85 or above. This indicates that when the model predicts these classes, it is very likely to be correct.<\/p>\n\n\n\n<p>The recall is also notably high for most classes, with &#8216;Sugar beet&#8217;, &#8216;Charlock&#8217;, and &#8216;Common Chickweed&#8217; standing out, suggesting that the model identifies most of the true positives for these classes.<\/p>\n\n\n\n<p>The F1-scores, the harmonic mean of precision and recall, are uniformly high across most classes, indicating that the model maintains a good balance between precision and recall. This balance is crucial in practical applications where both identifying the correct class and minimizing false positives matter.<\/p>\n\n\n\n<p><strong>Class-Specific Performance:<\/strong><\/p>\n\n\n\n<p>While most classes exhibit strong F1-scores, &#8216;Black-grass&#8217; (label 0) has a lower F1-score of 0.45 (though still the best across our models). This suggests that there may be characteristics of this class that are not being captured as effectively as others, possibly due to intra-class variability or similarities with other classes. &#8216;Common Wheat&#8217; (label 4) also shows a noticeably lower recall and F1-score compared to its precision, indicating that while the model is highly confident when it predicts this class, it misses some true positives.<\/p>\n\n\n\n<p><strong>Model Consistency:<\/strong><\/p>\n\n\n\n<p>The training and validation accuracy curves show consistent improvement over the epochs, with training accuracy reaching a high level. This suggests that the model is learning effectively from the training data. The validation accuracy curve is smooth and does not exhibit the high variance that would indicate overfitting, which suggests the model generalizes well to unseen data.<\/p>\n\n\n\n
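<p>The accuracy curves can be corroborated with the loss curves from the same history object; a widening gap between training and validation loss would be the clearest overfitting signal. A minimal sketch:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Plot training vs validation loss from the recorded training history\nplt.plot(history_3.history['loss'])\nplt.plot(history_3.history['val_loss'])\nplt.title('Model Loss')\nplt.ylabel('Loss')\nplt.xlabel('Epoch')\nplt.legend(['Train', 'Validation'], loc='upper right')\nplt.show()<\/code><\/pre>\n\n\n\n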
<p>In&nbsp;[&nbsp;]:<em># Visualizing the predicted and correct label of images from test data<\/em> <em># First image (index 2)<\/em> plt<strong>.<\/strong>figure(figsize<strong>=<\/strong>(2, 2)) plt<strong>.<\/strong>imshow(X3_test[2]) plt<strong>.<\/strong>show() <em># Predict the test data using the final model selected<\/em> predicted_label <strong>=<\/strong> model3<strong>.<\/strong>predict(X3_test_normalized[2]<strong>.<\/strong>reshape(1, 128, 128, 3)) predicted_label <strong>=<\/strong> enc<strong>.<\/strong>inverse_transform(predicted_label) true_label <strong>=<\/strong> enc<strong>.<\/strong>inverse_transform(y3_test_encoded)[2] print('Predicted Label:', predicted_label[0]) print('True Label:', true_label) <em># Second image (index 33)<\/em> plt<strong>.<\/strong>figure(figsize<strong>=<\/strong>(2, 2)) plt<strong>.<\/strong>imshow(X3_test[33]) plt<strong>.<\/strong>show() <em># Predict the test data using the final model selected<\/em> predicted_label <strong>=<\/strong> model3<strong>.<\/strong>predict(X3_test_normalized[33]<strong>.<\/strong>reshape(1, 128, 128, 3)) predicted_label <strong>=<\/strong> enc<strong>.<\/strong>inverse_transform(predicted_label) true_label <strong>=<\/strong> enc<strong>.<\/strong>inverse_transform(y3_test_encoded)[33] print('Predicted Label:', predicted_label[0]) print('True Label:', true_label) <em># Third image (index 59)<\/em> plt<strong>.<\/strong>figure(figsize<strong>=<\/strong>(2, 2)) plt<strong>.<\/strong>imshow(X3_test[59]) plt<strong>.<\/strong>show() <em># Predict the test data using the final model selected<\/em> predicted_label <strong>=<\/strong> model3<strong>.<\/strong>predict(X3_test_normalized[59]<strong>.<\/strong>reshape(1, 128, 128, 3)) predicted_label <strong>=<\/strong> enc<strong>.<\/strong>inverse_transform(predicted_label) true_label <strong>=<\/strong> enc<strong>.<\/strong>inverse_transform(y3_test_encoded)[59] print('Predicted Label:', predicted_label[0]) print('True Label:', true_label) <em># Fourth image (index 37)<\/em> plt<strong>.<\/strong>figure(figsize<strong>=<\/strong>(2, 2)) plt<strong>.<\/strong>imshow(X3_test[37]) plt<strong>.<\/strong>show() <em># Predict the test data using the final model selected<\/em> predicted_label <strong>=<\/strong> model3<strong>.<\/strong>predict(X3_test_normalized[37]<strong>.<\/strong>reshape(1, 128, 128, 3)) predicted_label <strong>=<\/strong> enc<strong>.<\/strong>inverse_transform(predicted_label) true_label <strong>=<\/strong> enc<strong>.<\/strong>inverse_transform(y3_test_encoded)[37] print('Predicted Label:', predicted_label[0]) print('True Label:', true_label)<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"210\" height=\"202\" src=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-8.png\" alt=\"\" class=\"wp-image-1368\"\/><\/figure>\n\n\n\n<p>1\/1 [==============================] - 0s 22ms\/step Predicted Label: Small-flowered Cranesbill True Label: Small-flowered Cranesbill<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"210\" height=\"202\" src=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-15.png\" alt=\"\" class=\"wp-image-1375\"\/><\/figure>\n\n\n\n<p>1\/1 [==============================] - 0s 22ms\/step Predicted Label: Cleavers True Label: Cleavers<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"210\" height=\"202\" src=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-12.png\" alt=\"\" class=\"wp-image-1372\"\/><\/figure>\n\n\n\n<p>1\/1 [==============================] - 0s 22ms\/step Predicted Label: Common Chickweed True Label: Common Chickweed<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"210\" height=\"202\" src=\"https:\/\/excalibursol.com\/exus\/wp-content\/uploads\/2024\/02\/image-17.png\" alt=\"\" class=\"wp-image-1377\"\/><\/figure>\n\n\n\n<p>1\/1 [==============================] - 0s 26ms\/step Predicted Label: Loose Silky-bent True Label: Loose Silky-bent<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"Business-Insights\">Business Insights<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>By using a pre-trained model, MobileNetV2, we improved the accuracy on test data from our previous best of 75% to 79.16%.<\/li>\n\n\n\n<li>The recall and F1-scores have improved for most classes.<\/li>\n\n\n\n<li>The improved accuracy of Model 3 using MobileNetV2 demonstrates the potential for deploying deep learning models in agricultural settings. The ability to accurately classify plant species can enhance crop management and weed control practices. A possible fine-tuning follow-up is sketched below.<\/li>\n<\/ul>\n\n\n\n
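<p>A natural next experiment, beyond the scope of this run, would be fine-tuning: unfreezing the top of MobileNetV2 and continuing training at a much lower learning rate so the pre-trained features are refined rather than destroyed. A sketch (the layer count and learning rate are illustrative, not tuned):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from tensorflow.keras.optimizers import Adam\n\n# Unfreeze the base network, then re-freeze all but its last ~30 layers\nbase_model.trainable = True\nfor layer in base_model.layers[:-30]:\n    layer.trainable = False\n\n# Recompile with a small learning rate before continuing training\nmodel3.compile(optimizer=Adam(learning_rate=1e-5),\n               loss='categorical_crossentropy',\n               metrics=['accuracy'])\n# model3.fit(X3_train_normalized, y3_train_encoded, epochs=5, batch_size=60,\n#            validation_data=(X3_val_normalized, y3_val_encoded),\n#            callbacks=[early_stopping])<\/code><\/pre>\n\n\n\n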
<h2 class=\"wp-block-heading\" id=\"Recommendations\">Recommendations<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Model Deployment:<\/strong>&nbsp;With an accuracy of 79.16% and high F1-scores for most classes, the model is approaching a level of reliability suitable for real-world application, which could significantly reduce the time and labor costs associated with manual plant sorting.<\/li>\n\n\n\n<li><strong>Expert Collaboration:<\/strong>&nbsp;Work with agricultural scientists to understand the nuances of different species and to identify any additional features that could be used to improve model accuracy. This domain expertise can guide further feature engineering and data collection efforts.<\/li>\n\n\n\n<li><strong>Educational Programs:<\/strong>&nbsp;Educate the end-users, such as farmers and agricultural workers, on how to use the technology effectively. Offer workshops or training sessions to ensure they can leverage the AI system to its full potential.<\/li>\n<\/ul>\n\n\n\n<p>By adopting these recommendations, the agricultural business can leverage AI to enhance productivity, reduce costs, and support sustainable farming practices. The long-term goal should be not only to implement current models but also to foster an environment of continuous improvement and adaptation to technological advancements in AI and machine learning.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction to Computer Vision: Plant Seedlings Classification Project by Noor Aftab Problem Statement Context In recent times, the field of agriculture has been in urgent need of modernizing, since the amount of manual work people need to put in to check if plants are growing correctly is still highly extensive. 
Despite several advances in agricultural [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"set","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"class_list":["post-1359","page","type-page","status-publish","hentry"],"rttpg_featured_image_url":null,"rttpg_author":{"display_name":"admin","author_link":"https:\/\/excalibursol.com\/exus\/author\/admin\/"},"rttpg_comment":0,"rttpg_category":null,"rttpg_excerpt":"Introduction to Computer Vision: Plant Seedlings Classification Project by Noor Aftab Problem Statement Context In recent times, the field of agriculture has been in urgent need of modernizing, since the amount of manual work people need to put in to check if plants are growing correctly is still highly extensive. 
Despite several advances in agricultural&hellip;","_links":{"self":[{"href":"https:\/\/excalibursol.com\/exus\/wp-json\/wp\/v2\/pages\/1359","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/excalibursol.com\/exus\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/excalibursol.com\/exus\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/excalibursol.com\/exus\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/excalibursol.com\/exus\/wp-json\/wp\/v2\/comments?post=1359"}],"version-history":[{"count":1,"href":"https:\/\/excalibursol.com\/exus\/wp-json\/wp\/v2\/pages\/1359\/revisions"}],"predecessor-version":[{"id":1381,"href":"https:\/\/excalibursol.com\/exus\/wp-json\/wp\/v2\/pages\/1359\/revisions\/1381"}],"wp:attachment":[{"href":"https:\/\/excalibursol.com\/exus\/wp-json\/wp\/v2\/media?parent=1359"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}