
Fine-Tuning

Agribound supports fine-tuning delineation engines on user-provided reference field boundaries. This can significantly improve accuracy for specific regions or crop types not well represented in the default pre-trained models.

Overview

The fine-tuning workflow:

  1. Reference boundaries are rasterized into 3-class segmentation masks (background, field interior, field boundary).
  2. The raster and masks are chipped into training patches.
  3. Patches are split into train/validation sets.
  4. The selected engine is fine-tuned on the training data.
  5. The fine-tuned model is used for inference on the full study area.
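The first three steps can be sketched end-to-end with plain NumPy. This is illustrative only: the mask here is faked with random classes (agribound derives it by rasterizing the reference polygons), and the chip layout and split logic are assumptions, not agribound internals.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. A 3-class segmentation mask: 0 = background, 1 = field interior,
#    2 = field boundary. Faked here; agribound rasterizes it from vectors.
H, W, chip = 1024, 1024, 256
mask = rng.integers(0, 3, size=(H, W), dtype=np.uint8)
raster = rng.random((4, H, W), dtype=np.float32)  # 4-band composite

# 2. Chip the raster and mask into (image, label) training patches.
patches = [
    (raster[:, r:r + chip, c:c + chip], mask[r:r + chip, c:c + chip])
    for r in range(0, H, chip)
    for c in range(0, W, chip)
]

# 3. Shuffle and split into train/validation sets (val_split = 0.2).
idx = rng.permutation(len(patches))
n_val = int(len(patches) * 0.2)
val = [patches[i] for i in idx[:n_val]]
train = [patches[i] for i in idx[n_val:]]

print(len(patches), len(train), len(val))  # 16 patches -> 13 train, 3 val
```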

Providing Reference Boundaries

Reference boundaries must be provided as a vector file (Shapefile, GeoPackage, GeoJSON, or GeoParquet) containing field boundary polygons:

from agribound import AgriboundConfig, delineate

config = AgriboundConfig(
    study_area="area.geojson",
    source="sentinel2",
    year=2024,
    engine="ftw",
    gee_project="my-project",
    reference_boundaries="reference_fields.gpkg",
    fine_tune=True,
    fine_tune_epochs=20,
    fine_tune_val_split=0.2,
)

gdf = delineate(config=config, study_area=config.study_area)

Via CLI:

agribound delineate \
    --study-area area.geojson \
    --source sentinel2 \
    --engine ftw \
    --gee-project my-project \
    --reference reference_fields.gpkg \
    --fine-tune

Note

When --reference is provided without --fine-tune, agribound evaluates the delineation results against the reference boundaries instead of fine-tuning.
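The exact metrics agribound reports in evaluation mode are not documented here; a common choice for comparing a predicted segmentation against rasterized reference boundaries is per-class intersection-over-union, sketched below (illustrative, not agribound's implementation):

```python
import numpy as np

def class_iou(pred: np.ndarray, ref: np.ndarray, cls: int) -> float:
    """Intersection-over-union for one class of a segmentation mask."""
    p, r = pred == cls, ref == cls
    union = np.logical_or(p, r).sum()
    return float(np.logical_and(p, r).sum() / union) if union else float("nan")

# 0 = background, 1 = field interior, 2 = field boundary.
pred = np.array([[0, 1, 1], [0, 2, 1], [0, 0, 2]])
ref  = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 2]])

print(class_iou(pred, ref, 1))  # 3 of 4 field-interior pixels agree -> 0.75
```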

Engine-Specific Fine-Tuning

FTW (Pre-Trained Only)

FTW fine-tuning is not yet supported

FTW's training pipeline requires paired temporal windows (two Sentinel-2 scenes from different seasons) in a specific band ordering, which differs from agribound's single-composite format. When fine_tune=True is set with the FTW engine, agribound downloads the pre-trained checkpoint and uses it directly. The pre-trained FTW models already generalize well across regions.

# FTW will use pre-trained weights even with fine_tune=True
config = AgriboundConfig(
    engine="ftw",
    fine_tune=True,  # Downloads pre-trained checkpoint, logs a warning
    engine_params={"model": "FTW_PRUE_EFNET_B5"},
    ...
)

Delineate-Anything

Fine-tunes the YOLO instance segmentation model. Training data is converted to YOLO segmentation format automatically.

config = AgriboundConfig(
    engine="delineate-anything",
    fine_tune=True,
    engine_params={
        "model_size": "small",  # or "large"
        "chip_size": 256,
    },
    ...
)
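The YOLO segmentation format mentioned above stores, per polygon, one label line of class id followed by normalized vertex coordinates. A minimal sketch of that conversion (the function name and exact layout agribound emits are assumptions):

```python
def polygon_to_yolo_seg(vertices, width, height, class_id=0):
    """Format one polygon as a YOLO segmentation label line:
    '<class> x1 y1 x2 y2 ...' with coordinates normalized to [0, 1]."""
    coords = []
    for x, y in vertices:
        coords += [x / width, y / height]
    return " ".join([str(class_id)] + [f"{c:.6f}" for c in coords])

# A rectangular field footprint inside a 256x256 training chip:
line = polygon_to_yolo_seg([(10, 20), (110, 20), (110, 70), (10, 70)], 256, 256)
print(line)
```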

GeoAI

Fine-tunes the Mask R-CNN model from the geoai-py package.

config = AgriboundConfig(
    engine="geoai",
    fine_tune=True,
    engine_params={
        "batch_size": 4,
    },
    ...
)

Prithvi

Fine-tunes the Prithvi-EO-2.0 foundation model with a UPerNet decoder via terratorch. Requires 4-band input (R, G, B, NIR).

config = AgriboundConfig(
    engine="prithvi",
    fine_tune=True,
    engine_params={
        "model_name": "Prithvi-EO-2.0-300M-TL",
        "batch_size": 4,
    },
    ...
)
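The 4-band requirement means the input composite must stack red, green, blue, and NIR into a single array before training; a quick shape check (the channels-first layout shown here is an assumption, not confirmed terratorch behavior):

```python
import numpy as np

# Stack single-band arrays into a (bands, H, W) composite.
r, g, b, nir = (np.zeros((256, 256), dtype=np.float32) for _ in range(4))
composite = np.stack([r, g, b, nir])  # axis 0 = band: R, G, B, NIR

assert composite.shape[0] == 4, "Prithvi fine-tuning needs 4-band input"
print(composite.shape)
```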

Configuration Options

Parameter                   Default  Description
reference_boundaries        None     Path to vector file with reference field polygons.
fine_tune                   False    Enable fine-tuning before inference.
fine_tune_epochs            20       Number of training epochs.
fine_tune_val_split         0.2      Fraction of data reserved for validation.
engine_params.chip_size     256      Patch size for chipping (pixels).
engine_params.batch_size    4–8      Training batch size (engine-dependent).

Fallback Behavior

If the selected engine does not support fine-tuning (e.g., embedding or ensemble), agribound automatically falls back to the best-supported engine for the satellite source:

Source                            Fallback Engine
sentinel2, hls, landsat, local    ftw (pre-trained only)
naip, spot                        delineate-anything

A warning is logged when a fallback occurs. Note that falling back to FTW will use pre-trained weights since FTW fine-tuning is not yet supported.
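The fallback rule amounts to a source-to-engine lookup guarded by the set of fine-tunable engines. A sketch mirroring the table above (the function name and data structures are hypothetical, not agribound's API):

```python
import warnings

# Mirrors the fallback table; keys are satellite sources.
FINE_TUNE_FALLBACK = {
    "sentinel2": "ftw", "hls": "ftw", "landsat": "ftw", "local": "ftw",
    "naip": "delineate-anything", "spot": "delineate-anything",
}
# Engines that accept fine_tune=True (FTW only loads pre-trained weights).
FINE_TUNABLE = {"ftw", "delineate-anything", "geoai", "prithvi"}

def resolve_engine(engine: str, source: str) -> str:
    """Return the engine to fine-tune, falling back when unsupported."""
    if engine in FINE_TUNABLE:
        return engine
    fallback = FINE_TUNE_FALLBACK[source]
    warnings.warn(f"{engine} does not support fine-tuning; "
                  f"falling back to {fallback} for source {source}")
    return fallback

print(resolve_engine("ensemble", "naip"))  # -> delineate-anything
```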

Tips

  • Provide at least 50–100 reference field polygons for meaningful fine-tuning.
  • Use polygons that are representative of the study area's field sizes and shapes.
  • Start with 10–20 epochs and increase if validation loss is still decreasing.
  • Fine-tuning takes ~30 minutes per model on an Apple M2 Max (MPS). Each model variant is fine-tuned independently.
  • Fine-tuned checkpoints are cached in .agribound_cache/checkpoints/ and reused across years and runs. Delete the cache to force re-training.