---
language: en
license: cc-by-4.0
tags:
- vision
- image-classification
- cnn
- satellite-imagery
- land-use-classification
- france
- remote-sensing
model-index:
- name: satellite-land-use-classifier-france
  results:
  - task:
      type: image-classification
    dataset:
      name: hadrilec/satellite-pictures-classification-ign-france
      type: image-dataset
    metrics:
    - name: accuracy
      type: classification
      value: 0.9728
      split: validation
widget:
- task: image-classification
  inputs:
  - name: example_input
    type: image
    url: >-
      https://huggingface.co/datasets/hadrilec/satellite-pictures-classification-ign-france/viewer/default/train?views%5B%5D=train&image-viewer=9473D066F7CE8EC28648894FD2FDAAF273ED2520
model:
  architecture: Custom CNN (3 conv layers + 3 FC layers)
  framework: PyTorch
  input_shape:
  - 4
  - 256
  - 256
  output_labels:
  - forest
  - sea
  - urban
  - field
model_description: >
  This model uses a custom convolutional neural network, trained on satellite
  images provided by IGN, to classify areas in France into four categories:
  forest, sea, urban, and field.
citation:
- authors:
  - Hadrien Leclerc
  title: Satellite Image Classifier for Land-Use in France
  year: 2025
datasets:
- hadrilec/satellite-pictures-classification-ign-france
metrics:
- accuracy
---

# Satellite Image Classifier (Custom CNN)

This repository contains a **custom convolutional neural network** trained on satellite imagery for land-use classification in France.
The model is inspired by the foundational book *Deep Learning with PyTorch* by Eli Stevens, Luca Antiga, and Thomas Viehmann.

**Dataset:** [hadrilec/satellite-pictures-classification-ign-france](https://huggingface.co/datasets/hadrilec/satellite-pictures-classification-ign-france)

---

## 🏗 Model Architecture

```python
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # Feature extractor
        self.conv1 = nn.Conv2d(4, 16, kernel_size=3, padding=1)  # 4 input channels (RGB + edge: 3 + 1)
        self.bn1 = nn.BatchNorm2d(16)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(32)
        self.conv3 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.bn3 = nn.BatchNorm2d(64)
        # After 3x max-pool (stride=2): 256 -> 128 -> 64 -> 32
        self.fc1 = nn.Linear(64 * 32 * 32, 256)
        self.fc2 = nn.Linear(256, 64)
        self.fc3 = nn.Linear(64, 4)  # 4 classes
        self.dropout = nn.Dropout(0.5)

    def forward(self, x):
        # Convolutional layers
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.max_pool2d(out, 2)  # 256 -> 128
        out = F.relu(self.bn2(self.conv2(out)))
        out = F.max_pool2d(out, 2)  # 128 -> 64
        out = F.relu(self.bn3(self.conv3(out)))
        out = F.max_pool2d(out, 2)  # 64 -> 32
        # Flatten
        out = out.view(out.size(0), -1)
        # Fully connected layers
        out = F.relu(self.fc1(out))
        out = self.dropout(out)
        out = F.relu(self.fc2(out))
        out = self.fc3(out)  # logits; train with nn.CrossEntropyLoss
        return out
```

## Example Notebook

We provide an example notebook that demonstrates how to train the model: [Open the example notebook](./example_sobel.ipynb)

This notebook is intended as a starting point for experimentation and helps you quickly see how to use the dataset in practice. It reaches a validation accuracy of 0.9745.

## IT Infrastructure

The model was trained on the [Onyxia datalab platform](https://www.onyxia.sh/) with an NVIDIA Tesla T4 GPU.

## 📈 Training & Validation Loss

Below is the training vs.
validation loss curve:

![Training vs Validation Loss](https://huggingface.co/hadrilec/satellite-pictures-classification-ign-france-model/resolve/main/images/validation_training_loss.png)

---

## 🖼 Misclassified Samples

Here is a facet plot showing some **misclassified images** from the validation set:

![Misclassified Samples](https://huggingface.co/hadrilec/satellite-pictures-classification-ign-france-model/resolve/main/images/Misclassified_pictures.png)

---

## 🖼 Correctly Classified Samples

Here is a facet plot showing some **well-classified images** from the validation set:

![Correctly Classified Samples](https://huggingface.co/hadrilec/satellite-pictures-classification-ign-france-model/resolve/main/images/Well-classified_pictures.png)
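
## Quick Inference Sketch

The network takes a 4-channel 256×256 input: the RGB image plus a fourth edge channel. The exact preprocessing lives in `example_sobel.ipynb`; the sketch below is a minimal stand-in that builds the edge channel as a Sobel gradient magnitude (an assumption based on the notebook's name and the code comment, not a reproduction of the notebook), feeds a random placeholder image through an untrained `Net`, and maps the argmax of the logits back to a label:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Net as defined in the Model Architecture section (condensed)
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1, self.bn1 = nn.Conv2d(4, 16, 3, padding=1), nn.BatchNorm2d(16)
        self.conv2, self.bn2 = nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32)
        self.conv3, self.bn3 = nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64)
        self.fc1 = nn.Linear(64 * 32 * 32, 256)
        self.fc2 = nn.Linear(256, 64)
        self.fc3 = nn.Linear(64, 4)
        self.dropout = nn.Dropout(0.5)

    def forward(self, x):
        out = F.max_pool2d(F.relu(self.bn1(self.conv1(x))), 2)  # 256 -> 128
        out = F.max_pool2d(F.relu(self.bn2(self.conv2(out))), 2)  # 128 -> 64
        out = F.max_pool2d(F.relu(self.bn3(self.conv3(out))), 2)  # 64 -> 32
        out = out.view(out.size(0), -1)
        out = F.relu(self.fc1(out))
        out = self.dropout(out)
        out = F.relu(self.fc2(out))
        return self.fc3(out)  # logits

CLASSES = ["forest", "sea", "urban", "field"]

def sobel_edge_channel(rgb):
    """Greyscale Sobel gradient magnitude, normalised to [0, 1].

    rgb: (3, H, W) float tensor in [0, 1]. Illustrative stand-in for
    the notebook's edge channel, not the exact preprocessing.
    """
    grey = rgb.mean(dim=0, keepdim=True).unsqueeze(0)  # (1, 1, H, W)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(grey, kx, padding=1)
    gy = F.conv2d(grey, ky, padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2)
    return (mag / (mag.max() + 1e-8)).squeeze(0)  # (1, H, W)

rgb = torch.rand(3, 256, 256)  # placeholder image; replace with a real tile
x = torch.cat([rgb, sobel_edge_channel(rgb)], dim=0).unsqueeze(0)  # (1, 4, 256, 256)

model = Net()  # in practice, load the trained weights with load_state_dict
model.eval()   # disable dropout, use BatchNorm running stats
with torch.no_grad():
    logits = model(x)  # (1, 4)
pred = CLASSES[logits.argmax(dim=1).item()]
```

With random weights the prediction is meaningless; the point is the input contract (4 channels, 256×256) and that `model.eval()` plus `torch.no_grad()` are needed for deterministic, gradient-free inference.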