
Logistic Regression Demo

AIO2025: Module 06.

📊 About this Logistic Regression Demo
Interactive demonstration of Logistic Regression using NumPy and gradient descent. Learn binary classification with sigmoid activation, binary cross-entropy loss, and an adjustable prediction threshold. Visualize training metrics and experiment with different threshold values.

📊 How to Use: Select binary classification data → Configure target (must have 2 classes) → Set training parameters → Adjust threshold → Enter feature values → Run training!

Start with sample datasets or upload your own CSV/Excel files.

๐Ÿ—‚๏ธ Sample Datasets
๐ŸŽฏ Target Column


📋 Data Preview (First 5 Rows)

📊 Logistic Regression Parameters


Current Learning Rate: 0.01


Current Batch Size: Full Batch

📊 Data Split Configuration


🎯 Prediction Threshold Configuration


Current Threshold: 0.50

📊 Logistic Regression Results & Visualization

**📊 Logistic Regression Results**

Training details will appear here showing model performance, learned parameters, and predictions with current threshold.

📊 Logistic Regression Guide:

📈 Training Metrics:

  • Loss (BCE): Binary Cross-Entropy loss decreases as the model learns. Lower loss indicates a better fit.
  • Accuracy: Classification accuracy improves during training. Monitor both training and validation accuracy.
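As a sketch of how the reported BCE number is computed (a minimal illustration; the function and variable names here are my own, not the demo's actual code):

```python
import numpy as np

def bce_loss(y_true, y_prob, eps=1e-12):
    # Clip probabilities away from 0 and 1 to avoid log(0)
    p = np.clip(y_prob, eps, 1 - eps)
    # Average negative log-likelihood over all samples
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_true = np.array([1, 0, 1, 1])
y_prob = np.array([0.9, 0.2, 0.7, 0.6])
print(bce_loss(y_true, y_prob))  # ≈ 0.299
```

Confident predictions on the correct side (0.9 for a true 1, 0.2 for a true 0) contribute little loss; the less confident 0.6 and 0.7 dominate the average.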

🔧 Training Parameters:

  • Epochs: Number of complete passes through the training data. More epochs generally improve the fit, but watch for overfitting.
  • Learning Rate: Step size for gradient descent. Recommended: 0.001 to 0.01. Too high may cause instability.
  • Batch Size: Samples processed before updating parameters. Powers of 2: 1, 2, 4, 8... or Full Batch. Smaller = faster updates but noisier. Larger = more stable.
  • Train/Validation Split: Proportion of data for training vs validation. Default 80/20 split.
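Putting these parameters together, a mini-batch gradient-descent loop for logistic regression might look like the following toy sketch on synthetic data. It assumes the defaults mentioned in this guide (learning rate 0.01, 100 epochs); the data, batch size of 32, and all names are illustrative, not the demo's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data: 200 samples, 2 features (illustrative only)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
b = 0.0
lr, epochs, batch_size = 0.01, 100, 32

for epoch in range(epochs):
    idx = rng.permutation(len(X))              # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        p = sigmoid(X[batch] @ w + b)          # forward pass: probabilities
        err = p - y[batch]                     # gradient of BCE w.r.t. the logits
        w -= lr * X[batch].T @ err / len(batch)
        b -= lr * err.mean()

acc = ((sigmoid(X @ w + b) >= 0.5) == y).mean()
print(acc)
```

With batch_size set to len(X) this becomes Full Batch training (one smooth update per epoch); batch_size 1 gives the noisiest, most frequent updates, matching the trade-off described above.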

🎯 Threshold Parameter:

  • Threshold: Probability cutoff for binary classification. If predicted probability ≥ threshold → class 1, else → class 0.
  • Default: 0.5 (balanced)
  • Lower threshold (e.g., 0.3): More predictions of class 1 → higher recall, lower precision
  • Higher threshold (e.g., 0.7): Fewer predictions of class 1 → higher precision, lower recall
  • Experiment: Adjust threshold to see how predictions and accuracy change!
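The precision/recall trade-off above can be seen with a few hand-made probabilities (the numbers below are made up for illustration):

```python
import numpy as np

probs = np.array([0.15, 0.35, 0.55, 0.65, 0.85])   # model's predicted probabilities
labels = np.array([0, 1, 0, 1, 1])                 # true classes

for threshold in (0.3, 0.5, 0.7):
    preds = (probs >= threshold).astype(int)       # apply the cutoff
    tp = ((preds == 1) & (labels == 1)).sum()      # true positives
    precision = tp / preds.sum()
    recall = tp / labels.sum()
    print(threshold, preds, round(precision, 2), round(recall, 2))
```

At 0.3 everything above it is called class 1 (recall 1.0, precision 0.75); at 0.7 only the most confident prediction survives (precision 1.0, recall 0.33), exactly the pattern the bullets describe.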

🧮 Algorithm Details:

  • Sigmoid Activation: Maps linear output to probability (0-1 range)
  • Binary Cross-Entropy Loss: Optimized for binary classification tasks
  • Feature Normalization: Automatic standardization (zero mean, unit variance) for stable training
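The sigmoid and the standardization step can be sketched in a few lines (again illustrative names, not the demo's code):

```python
import numpy as np

def sigmoid(z):
    # Maps any real logit into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def standardize(X):
    # Per-feature zero mean and unit variance
    return (X - X.mean(axis=0)) / X.std(axis=0)

X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])
Xs = standardize(X)
print(sigmoid(0.0))  # 0.5: a zero logit means maximal uncertainty
```

Standardizing puts features on comparable scales (here the second column is 200× the first), which keeps gradient steps balanced across weights and makes a single learning rate work for all features.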

💡 Tips:

  • Start with default parameters (100 epochs, learning rate 0.01, threshold 0.5)
  • Monitor validation metrics to detect overfitting
  • Adjust threshold based on your classification goals (precision vs recall)
  • Use batch size = Full Batch for most stable training