Getting started with ODM

ODM stands for OpenDroneMap, an open-source photogrammetry software. The OpenDroneMap project includes several modules that facilitate geospatial analysis in customized configurations of available computing power and for projects of different scales. The OpenDroneMap/ODM [GitHub] module is the analytical core of the software. Typically it is employed by the higher-level layers of other OpenDroneMap products, which provide a graphical user interface or manage analysis configuration and task scheduling (learn more in the Introduction to OpenDroneMap section of the Geospatial Workbook).

Importantly, you can also use the ODM module directly on the command line in the terminal for drone imagery mapping. This tutorial takes you step-by-step through creating an efficient file structure and setting up a script that submits the job to the SLURM queue on an HPC cluster, so that you can collect all the results of the photogrammetric analysis with OpenDroneMap.

The complete workflow includes photo alignment and generation of the dense point cloud (DPC), elevation models (DSMs & DTMs), textured 3D meshes, and orthomosaics (georeferenced & orthorectified). Note that this approach does NOT support creating web tiles, so the results cannot be opened directly in the complementary WebODM graphical interface. The files can still be visualized in external software that supports the given format.

Run ODM in the command line
using the Atlas cluster of the SCINet HPC

The OpenDroneMap/ODM module is available on the Atlas (and Ceres) clusters as a singularity container. The image of the software can be accessed at the path:

/reference/containers/opendronemap/2.8.3/opendronemap-2.8.3.sif

In your scripts, you can either set a variable to this path:

odm_exe=/reference/containers/opendronemap/2.8.3/opendronemap-2.8.3.sif

or you can create a soft link in your file structure and use it from there:

ln -s /reference/containers/opendronemap/2.8.3/opendronemap-2.8.3.sif /<path-to-your-project-on-Atlas>/odm.sif

In this tutorial, we will create a sample script that uses a variable to store the path to the original ODM image.
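For a quick sanity check that the variable points to a working image, you can ask ODM for its version (a minimal sketch; it assumes the apptainer module is available, as it is on Atlas):

module load apptainer
odm_exe=/reference/containers/opendronemap/2.8.3/opendronemap-2.8.3.sif
apptainer run $odm_exe --version       # ODM's --version flag prints the release number and exits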

Create file structure

When used on the command line, the ODM module requires a specific file structure to work without issues. Specifically, it requires that input imagery be placed in a code/images subdirectory structure within the working directory of a given project.

To facilitate proper path management, I suggest creating an ordered file structure for all future projects. Let’s say it will be the ODM directory, serving as a working directory for all future ODM analyses. It should contain two main subdirectories: IMAGES and RESULTS. In the IMAGES directory, you will create a new folder with a unique name for each new project and place the photos in JPG format there (e.g., ~/ODM/IMAGES/PROJECT-1). In the RESULTS directory, when you submit an ODM task to the SLURM queue, a subdirectory with ODM outputs will be created automatically for each project. There, also automatically, a code/images subdirectory structure will be created with soft links to the photos of the corresponding project.

ODM/                     (storage directory for all future ODM projects)
|― run_odm.sh            (SLURM job submission script)
|― IMAGES/
|  |― PROJECT-1/         (directory with images in JPG format and gcp_list.txt file with GCPs)
|  |― PROJECT-2/
|― RESULTS/              (parent directory for ODM analysis outputs)
|  |― PROJECT-1-tag/     (automatically created directory with ODM analysis outputs)
|     |― code/           (automatically created dir for analysis outputs; required!)
|        |― images/      (automatically created dir with soft links to JPG images)
This way, if you want to perform several analyses with different parameters on the same set of images, you will not need a hard copy of the photos for each repetition. Instead, you will use soft links to the original photos stored in the IMAGES directory. That significantly saves storage space and prevents you from accidentally deleting input imagery when resuming the analysis in the working directory.
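For instance, a hypothetical second analysis variant on the same photos would only link the inputs instead of copying them:

mkdir -p RESULTS/PROJECT-1-variantB/code/images                           # hypothetical output dir for a new variant
ln -s ~/ODM/IMAGES/PROJECT-1/* RESULTS/PROJECT-1-variantB/code/images/    # soft links, no hard copies

The SLURM script presented later in this tutorial performs exactly these two steps automatically.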


To set up the file structure for ODM analysis, follow these steps in the command line:

0. Open a terminal window on your local machine and log in to the SCINet Atlas cluster (or any HPC infrastructure) using the ssh command and the respective hostname:

ssh <user.name>@atlas-login.hpc.msstate.edu

Then enter 1) your multi-factor authentication code (6 digits), followed by 2) your password in a separate prompt.

^ Note that when entering the identification code and password, the characters are not visible on the screen.

PRO TIP:
If you happen to forget the hostname for the Atlas cluster, you can save the login command to a text file in your home directory on your local machine:

$ echo "ssh user.name@atlas-login.hpc.msstate.edu" > ~/login_atlas

Then every time you want to log in, just source this file with a dot and a space preceding it:

$ . ~/login_atlas

You can also review the HPC clusters available on SCINet at https://scinet.usda.gov/guide/quickstart#hpc-clusters-on-scinet.


1. Once logged in to Atlas HPC, go into your group folder in the /project location:

cd /project/<your_account_folder>/

# e.g., cd /project/90daydata
PRO TIP:
If you do not remember or do not know the name of your group directory in /project location, try the command:

$ ls /project

That will display all available group directories. You can search for a suitable one by name, or quickly filter out only the ones you have access to:

$ ls /project/* 2> /dev/null

WARNING:
Note that you do NOT have access to all directories in the /project location. You also can NOT create a new directory there on your own. All users have access to /project/90daydata, but data is stored there for only 90 days, and the folder is dedicated primarily to collaborative projects between groups. If you do NOT have access to your group's directory or need a directory for a new project, request a new project allocation.
PRO TIP:
You can display the contents of any directory with access while being in any other location in the file system. To do this, you need to know the relative or absolute path to the target location.

The absolute path requires defining all intermediate directories starting from the root (denoted by a single /):
$ ls /project/90daydata

The relative path requires defining all intermediate directories relative to the current location. To indicate parent directories, use the ../ syntax for each higher level. To point to child directories, you must name them directly. Remember, however, that pressing the Tab key expands the available options, so you don't have to remember entire paths.
$ ls ../../inner_folder

The same principle applies to relocation in the file system using the cd command.

2. Create a working directory (mkdir) for all future ODM projects and get into it (cd):

mkdir ODM
cd ODM

3. Create a directory for input images (IMAGES) and ODM analysis outputs (RESULTS):

mkdir IMAGES RESULTS
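Equivalently, steps 2 and 3 can be combined into a single command using bash brace expansion:

mkdir -p ODM/{IMAGES,RESULTS} && cd ODM      # creates ODM with both subdirectories, then enters it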

Detect GCPs on the photos

Ground Control Points (GCPs) are clearly visible objects that can be easily identified in several images. The precise GPS positions of these ground points provide a reference that significantly improves the accuracy of the project’s geolocation. Ground control points can be any steady structure existing in the mission area; otherwise, they can be set using targets placed on the ground.

ODM enables using the GCP data from a gcp_list.txt text file with the --gcp option.

--gcp gcp_list.txt

To create the gcp_list.txt file properly, follow the instructions in the “Software for manual detection of GCPs” or “Automatic detection of ARUco targets” sections of the Geolocation data for the ODM workflow tutorial.

When the file is ready, place it in the project directory (within the IMAGES dir) along with the photos.

|― IMAGES/
   |― PROJECT-1/    (directory with images in JPG format and gcp_list.txt file with GCPs)
     |― gcp_list.txt   (GCPs file)
     |― IMG_0001.jpg  (photo 1)
     |― IMG_0002.jpg  (photo 2)
     |― ...
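For reference, the gcp_list.txt file follows the ODM convention of a projection header line followed by one row per marked point, in the form geo_x geo_y geo_z im_x im_y image_name. The projection and coordinate values below are made up for illustration only:

EPSG:32616
496731.512 4409119.044 306.09 2088 1176 IMG_0001.jpg
496731.512 4409119.044 306.09 1802 1486 IMG_0002.jpg
496590.377 4409216.401 308.12 1440  932 IMG_0002.jpg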

Copy input imagery on Atlas

A. export from local machine

In case the input images are on your local machine, you can transfer them to the Atlas HPC via the command line using the scp command with the syntax:
scp <location on local machine> <user.name>@atlas-dtn.hpc.msstate.edu:<project location on Atlas cluster>.

The complete command should look like this:

scp /local/machine/JPGs/* alex.badacz@atlas-dtn.hpc.msstate.edu:/project/isu_gif/Alex/ODM/IMAGES/project-X

…and it has to be executed in the terminal window from the selected location in the file system on your local machine.
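If the transfer is large or may be interrupted, rsync (assuming it is installed on both ends) can resume a partial transfer, which scp cannot; a sketch with the same hypothetical paths:

rsync -avP /local/machine/JPGs/ alex.badacz@atlas-dtn.hpc.msstate.edu:/project/isu_gif/Alex/ODM/IMAGES/project-X/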

WARNING:
Note that you HAVE to use transfer nodes every time you want to move data to a cluster. Transfer nodes differ from login nodes by the hostname while your user.name remains the same.
For data transfer purposes, always use the transfer hostname: user.name@atlas-dtn.hpc.msstate.edu

You can also transfer data from your local machine to the Atlas cluster using the web-based Globus approach. Learn more by following the tutorial in the DataScience workbook: Copying Data using Globus.

B. import from other HPC

You can use the approach from section A to export data from any computing machine (including another HPC, e.g., Ceres) to the Atlas cluster. You need to be logged into that other machine and follow the procedure described in section A.

If you want to make a transfer from another machine while logged into Atlas, then you will import the data. You can also do this using the scp command, but you NEED to know the hostname of that other external machine. The idea of the syntax is simple:
scp source_location destination_location.

scp username@external-hostname:/path/JPGs/* /project/project_account/on/Atlas/ODM/IMAGES/project-X

Sometimes an external machine may require access from a specific port, in which case you must use the -P option, i.e.,
scp -P port_number source_host:location destination_location_on_atlas.

You can likely also transfer data from other HPC infrastructure to the Atlas cluster using the web-based Globus approach. Learn more by following the tutorial in the DataScience workbook: Copying Data using Globus.

C. move locally on Atlas

To move data locally in the file system on a given machine (such as an Atlas cluster) use the cp command with the syntax:
cp source_location destination_location.

The complete command should look like this:

cp /project/90daydata/shared/project-X/JPGs/* /project/project_account/user/ODM/IMAGES/project-X

…and it has to be executed in the terminal window when logged into the Atlas cluster.

PRO TIP:
Absolute paths work regardless of the current location in the file system. If you want to simplify the path syntax, first go to the source or destination location and replace the corresponding path with ./* or ./, respectively. An asterisk (*) means that all files from the source location will be transferred.

transfer while in the source location: cp ./* /project/.../ODM/IMAGES/project-X

transfer while in the destination location: cp /project/90daydata/project-X/JPGs/* ./


Set up the SLURM script

Make sure you are in your ODM working directory at the /project path:

pwd

It should return a string with your current location, something like /project/project_account/user/ODM. If the basename of your current directory is different from “ODM”, use the cd command to get into it. Once you are in the right location in the file system, follow the next instructions.

Create an empty file for the SLURM script and open it with your favorite text editor:

touch run_odm.sh
nano run_odm.sh                  # nano, vim, mcedit are good text file editors

Copy-paste the template code provided below:

#!/bin/bash

# job standard output will go to the file slurm-%j.out (where %j is the job ID)
# DEFINE SLURM VARIABLES
#SBATCH --job-name="geo-odm"                   # custom SLURM job name visible in the queue
#SBATCH --partition=atlas                      # partition on Atlas: atlas, gpu, bigmem, service
##SBATCH --partition=short                     # partition on Ceres: short, medium, long, mem, longmem
#SBATCH --nodes=1                              # number of nodes
#SBATCH --ntasks=48                            # logical cores: 48 on atlas, 72 on short (Ceres)
#SBATCH --time=04:00:00                        # walltime limit (HH:MM:SS)
#SBATCH --account=<project_account>            # EDIT ACCOUNT, provide your SCINet project account
#SBATCH --mail-user=user.name@usda.gov         # EDIT EMAIL, provide your email address
#SBATCH --mail-type=BEGIN                      # email notice of job started
#SBATCH --mail-type=END                        # email notice of job finished
#SBATCH --mail-type=FAIL                       # email notice of job failure


# LOAD MODULES, INSERT CODE, AND RUN YOUR PROGRAMS HERE
module load apptainer                        # load container dependency

# DEFINE CODE VARIABLES
workdir=/project/<project_account>/.../ODM     # EDIT PATH, path to your ODM directory; use `pwd` if the script is in ODM workdir
project=PROJECT-1                              # EDIT PROJECT NAME, name of the directory with input JPG imagery
ODM=2.8.3                                      # version of the pre-configured ODM image on the cluster (see path below)

tag=$SLURM_JOB_ID                              # EDIT CUSTOM TAG, by default it is a SLURM job ID
#tag=`date +%Y%b%d-%T | tr ':' '.'`            # alternative TAG, use date in format 2022Aug16-16.57.59 to make the name unique
images_dir=$workdir/IMAGES/$project            # automatically generated path to input images when stored in ~/ODM/IMAGES; otherwise provide absolute path
output_dir=$workdir/RESULTS/$project-$tag      # automatically generated path to project outputs
mkdir -p $output_dir/code/images               # automatically generated images directory
ln -s $images_dir/* $output_dir/code/images/   # automatically linked input imagery
cp $BASH_SOURCE $output_dir/submit_odm.sh      # automatically copied the SLURM script into the outputs directory (e.g., for future reuse or reference of used options)
odm=/reference/containers/opendronemap/$ODM/opendronemap-$ODM.sif        # pre-configured odm image on Atlas

# DEFINE ODM COMMAND
apptainer run --writable-tmpfs $odm  \
--feature-quality ultra --min-num-features 10000 --matcher-type flann \
--pc-quality ultra --pc-classify --pc-rectify --pc-las \
--mesh-octree-depth 12 \
--gcp $output_dir/code/images/gcp_list.txt \
--dsm --dtm --dem-resolution 1 --smrf-threshold 0.4 --smrf-window 24 \
--build-overviews \
--use-hybrid-bundle-adjustment --max-concurrency 48 \
--project-path $output_dir --ignore-gsd


Each time before submitting the script to the queue…

A. Adjust script variables and paths

WARNING:
Follow the adjustment steps each time before submitting the job into the SLURM queue. Note that you can reuse the same script file every time you arrange an ODM analysis. For your convenience, when you submit a job to the queue, the script with all its settings is automatically copied to the corresponding folder of ODM analysis outputs (located directly in the RESULTS directory).


^ Adjust the script lines marked with # EDIT comment

0. Select Atlas partition in section # DEFINE SLURM VARIABLES (optional)

#SBATCH --partition=atlas
PRO TIP:
For most jobs, Atlas users should specify the atlas partition. The specification for all available Atlas partitions is provided in Atlas Documentation, in section Available Atlas Partitions.

1. Enter your SCINet project account in section # DEFINE SLURM VARIABLES (obligatory)

#SBATCH --account=<project_account>

For example, I use isu_gif_vrsc account: #SBATCH --account=isu_gif_vrsc

2. Edit path to your ODM directory in section # DEFINE CODE VARIABLES (obligatory)

workdir=/project/<project_account>/.../ODM

For example, I use the following path: workdir=/project/isu_gif_vrsc/Alex/geospatial/ODM

3. Edit name of the directory with input imagery in section # DEFINE CODE VARIABLES (obligatory)

IMPORTANT: This step determines which set of images will be used in the ODM analysis!

project=PROJECT-1
WARNING:
CASE 1: Note that the entered name should match the subdirectory existing directly in your ~/ODM/IMAGES/ where you store imagery for a particular analysis. Then you do NOT need to alter the images_dir variable (with the full path to the input photos) because it will be created automatically.
Keep images_dir default:
images_dir=$workdir/IMAGES/$project

CASE 2: Otherwise, give a customized (any) name for the project outputs but remember to provide the absolute path (of any location in the HPC file system) to the input photos in the images_dir variable.
Provide the absolute path to imagery:
images_dir=/absolute/path/to/input/imagery/in/any/location

4. Edit tag variable to customize project outputs directory # DEFINE CODE VARIABLES (optional)

By default, the tag variable appends the $SLURM_JOB_ID to the name of the directory with the ODM analysis outputs when the job is submitted. Alternatively, it can be the date and time (in the format 2022Aug16-16.57.59). This prevents accidental overwriting of results for a project started from the same input images and makes the project name of each analysis unique.

tag=$SLURM_JOB_ID
# tag=`date +%Y%b%d-%T | tr ':' '.'`
PRO TIP:
You can overwrite the value of the tag variable in any way that will distinguish the analysis variant and make the name of the new folder unique.
Avoid overwriting the tag with manually typed words, and remember to always add an automatically generated randomization part in the variable to prevent overwriting the results in a previous project (for example, when you forget to update the tag).

If you want to assign a value to the tag variable as the result of a command, use backtick bash expansion around it, tag=`command`.
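For example, a hypothetical tag that combines a descriptive label with the job ID keeps the output name readable while still unique:

tag=ultra-dsm-$SLURM_JOB_ID            # custom label + automatically generated job ID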

B. Choose ODM options for analysis

WARNING:
The script template provided in this section has a default configuration of options available in the command-line ODM module. You may find that these settings are not optimal for your project. Follow the instructions in this section to learn more about the available ODM options and their possible values.

^ Adjust flags in the # DEFINE ODM COMMAND section in the script file

# DEFINE ODM COMMAND
apptainer run --writable-tmpfs $odm \
--feature-quality ultra --min-num-features 10000 \                  # photo alignment
--matcher-type flann \                                              # photo alignment
--pc-quality ultra --pc-classify --pc-rectify --pc-las \            # point cloud
--mesh-octree-depth 12 \                                            # meshing
--gcp $output_dir/code/images/gcp_list.txt \                        # georeferencing
--dsm --dem-resolution 1 \                                          # 3D model: DSM
--dtm --smrf-threshold 0.4 --smrf-window 24 \                       # 3D model: DTM
--build-overviews \                                                 # orthophoto
--use-hybrid-bundle-adjustment --max-concurrency 16 \               # performance
--project-path $output_dir --ignore-gsd \                           # inputs / outputs
--time                                                              # runtime info

The first line launches the ODM image, $odm, via the apptainer (singularity) container runtime. All further --X flags/arguments define the set of options used in the photogrammetry analysis. For clarity and readability, the long command has been broken into multiple lines using the special character backslash (\). Thus, be careful when adding or removing options. Also, do NOT write any characters after a backslash (the # comments are placed in this code block for educational purposes only; remove them before running). The order of the options does not matter, but here they have been grouped by their impact on the various outputs. Finally, check that the code block with the ODM command does NOT contain blank lines between the options (this may happen when you copy and paste into some text editors).

WARNING:
Note especially to update the path to the GCP file, gcp_list.txt (with the assignment of GCPs to images):
--gcp $output_dir/code/images/gcp_list.txt

If you do not have this file, completely remove this line from the script (instead of commenting with #). In that case, the EXIF metadata will be used.

You can find a complete list of all available options with descriptions in the official OpenDroneMap documentation: v2.8.8. The sections below summarize the options grouped by workflow stage.

MANAGE WORKFLOW options

A. To end processing at a selected stage, use the --end-with option followed by the keyword for the respective stage:
  1. dataset
  2. split
  3. merge
  4. opensfm
  5. openmvs
  6. odm_filterpoints
  7. odm_meshing
  8. mvs_texturing
  9. odm_georeferencing
  10. odm_dem
  11. odm_orthophoto
  12. odm_report
  13. odm_postprocess [default]
B. There are several options to restart the workflow (see the sketch at the end of this section):
  • To rerun the selected stage only and stop, use the --rerun option followed by the keyword for the respective stage (see list in section A).
  • To resume processing from a selected stage to the end, use the --rerun-from option followed by the keyword for the respective stage (see list in section A).
  • To permanently delete all previous results and rerun the processing pipeline, use the --rerun-all flag.
C. For fast generation of orthophoto skip dense reconstruction and 3D model generation using --fast-orthophoto flag. It creates the orthophoto directly from the sparse reconstruction which saves the time needed to build a full 3D model.

D. Skip individually other stages of the workflow:
  • Skip generation of a full 3D model with --skip-3dmodel flag in case you only need 2D results such as orthophotos and DEMs. Saves time!
  • Skip generation of the orthophoto with the --skip-orthophoto flag in case you only need 3D results or DEMs. Saves time!
  • Skip generation of PDF report with --skip-report flag in case you do not need it. Saves time!
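As a sketch of a restart (using the variables from the script template above), resuming a finished project from the DEM stage after lowering the DEM resolution might look like this; the parameter values are illustrative only:

apptainer run --writable-tmpfs $odm \
--project-path $output_dir \
--rerun-from odm_dem \
--dsm --dtm --dem-resolution 2 --ignore-gsd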
PHOTO ALIGNMENT options
| flag | values | default | description | notes |
|------|--------|---------|-------------|-------|
| --feature-type | akaze, hahog, orb, sift | sift | algorithm for extracting keypoints and computing descriptors | |
| --min-num-features | integer | 10000 | minimum number of features to extract per image | More features ~ more matches between images. Improves reconstruction of areas with little overlap or insufficient features. More features slow down processing. |
| --feature-quality | ultra, high, medium, low, lowest | high | level of feature extraction quality | Higher quality generates better features, but requires more memory and takes longer. |
| --resize-to | integer | 2048 | resizes images by the largest side for feature extraction purposes only | Set to -1 to disable, or use --feature-quality instead. This does not affect the final orthophoto resolution quality and will not resize the original images. |
| --matcher-neighbors | positive integer | 0 | performs image matching with the nearest N images based on GPS exif data | Set to 0 to match by triangulation. |
| --matcher-type | bow, bruteforce, flann | flann | image matcher algorithm | FLANN is slower, but more stable. BOW is faster, but can sometimes miss valid matches. BRUTEFORCE is very slow but robust. |
SfM & DPC options
The Structure from Motion (SfM) algorithm estimates camera positions over time (motion) and generates a 3D Dense Point Cloud (DPC) of the object using multi-view stereo (MVS) photogrammetry on the set of images.

| flag | values | default | description | notes |
|------|--------|---------|-------------|-------|
| --sfm-algorithm | incremental, triangulation, planar | incremental | structure from motion (SfM) algorithm | For aerial datasets, if camera GPS positions and angles are available, triangulation can generate better results. For planar scenes captured at fixed altitude with nadir-only images, planar can be much faster. |
| --depthmap-resolution | positive float | 640 | controls the density of the point cloud by setting the resolution of the depthmap images | Higher values take longer to compute but produce denser point clouds. Overrides --pc-quality. |
| --pc-quality | ultra, high, medium, low, lowest | medium | the level of point cloud quality | Higher quality generates better, denser point clouds, but requires more memory and takes longer (~4x/level). |
| --pc-geometric | | off | improves the accuracy of the point cloud by computing geometrically consistent depthmaps | Increases processing time, but can improve results in urban scenes. |
| --pc-classify | | off | classifies the point cloud outputs using a Simple Morphological Filter | You can control the behavior of this option by tweaking the --dem-* parameters. |
| --pc-rectify | | off | performs ground rectification on the point cloud | Wrongly classified ground points will be re-classified and gaps will be filled. Useful for generating DTMs. |
| --pc-filter | positive float | 2.5 | filters the point cloud by removing points that deviate more than N standard deviations from the local mean | 0 disables filtering. |
| --pc-sample | positive float | 0.0 | filters the point cloud by keeping only a single point within a radius of N [meters] | Useful to limit the output resolution of the point cloud and remove duplicate points. 0 disables sampling. |
| --pc-copc | | off | exports the georeferenced point cloud | Cloud Optimized Point Cloud (COPC) format |
| --pc-csv | | off | exports the georeferenced point cloud | CSV format |
| --pc-ept | | off | exports the georeferenced point cloud | Entwine Point Tile (EPT) format |
| --pc-las | | off | exports the georeferenced point cloud | LAS format |
MESHING & TEXTURING options
| flag | values | default | description | notes |
|------|--------|---------|-------------|-------|
| --mesh-octree-depth | integer: 1 <= x <= 14 | 11 | octree depth used in the mesh reconstruction | It should be increased to 10-11 in urban areas to better recreate buildings/roofs. |
| --mesh-size | positive integer | 200000 | the maximum vertex count of the output mesh | It should be increased to 300000-600000 in urban areas to better recreate buildings/roofs. |
| --texturing-data-term | gmi, area | gmi | texturing feature | When texturing the 3D mesh, for each triangle, choose to prioritize images with sharp features (gmi) or those that cover the largest area (area). It should be set to area in forest landscapes. |
| --texturing-keep-unseen-faces | | off | keeps faces in the mesh that are not seen in any camera | |
| --texturing-nadir-weight | positive integer | | | It should be increased to 29-32 in urban areas to better reconstruct the edges of roofs. It should be decreased to 0-6 in grassy/flat areas. |
| --texturing-outlier-removal-type | none, gauss_clamping, gauss_damping | gauss_clamping | type of photometric outlier removal method | |
| --texturing-skip-global-seam-leveling | | off | skips normalization of colors across all images | Useful when processing radiometric data. |
| --texturing-skip-local-seam-leveling | | off | skips the blending of colors near seams | |
| --texturing-tone-mapping | none, gamma | none | turns on gamma tone mapping or none for no tone mapping | |
GEOREFERENCING options
| flag | values | default | description | notes |
|------|--------|---------|-------------|-------|
| --force-gps | | off | uses images’ GPS exif data for reconstruction, even if there are GCPs present | Useful if you have high precision GPS measurements. If there are no GCPs, it does nothing. |
| --gcp | PATH string | none | path to the file containing the ground control points used for georeferencing | |
| --use-exif | | off | EXIF-based georeferencing | Use this flag if you have a GCP file but want to use the EXIF information for georeferencing instead. |
| --geo | PATH string | none | path to the image geolocation file containing the camera center coordinates used for georeferencing | Note that omega/phi/kappa are currently not supported (you can set them to 0). |
| --gps-accuracy | positive float | 10 | value in meters for the GPS Dilution of Precision (DOP) information for all images | If you use high precision GPS (RTK), this value will be set automatically. You can manually set it in case the reconstruction fails. Lowering the value can help control bowling effects over large areas. |
DSM - Digital Surface Model options
| flag | values | default | description | notes |
|------|--------|---------|-------------|-------|
| --dsm | | off | builds a DSM (ground + objects) using a progressive morphological filter | Use the --dem-* parameters for finer tuning. |
| --dem-resolution | float | 5.0 | DSM/DTM resolution in cm/pixel | The value is capped to 2x the ground sampling distance (GSD) estimate; use --ignore-gsd to remove the cap. |
| --dem-decimation | positive integer | 1 | decimates the points before generating the DEM; 1 is no decimation (full quality), 100 decimates ~99% of the points | Useful for speeding up generation of DEM results in very large datasets. |
| --dem-euclidean-map | | off | computes a euclidean raster map for each DEM | Useful to isolate the areas that have been filled. |
| --dem-gapfill-steps | positive integer | 3 | number of steps used to fill areas with gaps; 0 disables gap filling (see details in the docs) | For urban scenes, increasing this value to 4-5 can help produce better interpolation results in the areas left empty by the SMRF filter. |
DTM - Digital Terrain Model options
| flag | values | default | description | notes |
|------|--------|---------|-------------|-------|
| --dtm | | off | builds a DTM (ground only) using a simple morphological filter | Use the --dem-* and --smrf-* parameters for finer tuning. |
| --smrf-threshold | positive float | 0.5 | Simple Morphological Filter elevation threshold parameter [meters] | Set this parameter to the minimum height (in meters) that you expect non-ground objects to be. This option has the biggest impact on results! |
| --smrf-window | positive float > 10 | 18.0 | Simple Morphological Filter window radius parameter [meters] | Corresponds to the size of the largest feature (building, trees, etc.) to be removed. Should be set to a value higher than 10. |
| --smrf-slope | positive float in range 0.1 - 1.2 | 0.15 | Simple Morphological Filter slope parameter; a measure of “slope tolerance” | Increase this parameter for terrains with lots of height variation. |
| --smrf-scalar | positive float | 1.25 | Simple Morphological Filter elevation scalar parameter | Increase this parameter for terrains with lots of height variation. |
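For instance, a hedged starting point for a site with low vegetation and scattered trees (the SMRF values below are illustrative, not a recipe; tune them against your own terrain):

apptainer run --writable-tmpfs $odm \
--project-path $output_dir \
--dtm --smrf-threshold 0.3 --smrf-window 16 --smrf-slope 0.2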
ORTHOPHOTO options
| flag | values | default | description | notes |
|------|--------|---------|-------------|-------|
| --orthophoto-resolution | float > 0.0 | 5.0 | orthophoto resolution in cm/pixel | |
| --orthophoto-compression | JPEG, LZW, LZMA, DEFLATE, PACKBITS, NONE | DEFLATE | compression to use for orthophotos | |
| --orthophoto-cutline | | off | generates a polygon around the cropping area that cuts the orthophoto around the edges of features | The polygon can be useful for stitching seamless mosaics with multiple overlapping orthophotos. |
| --orthophoto-png | | off | generates a rendering of the orthophoto | PNG format |
| --orthophoto-kmz | | off | generates a rendering of the orthophoto | Google Earth (KMZ) format |
| --orthophoto-no-tiled | | off | generates a striped GeoTIFF | |
| --build-overviews | | off | builds orthophoto overviews | Useful for faster display in programs such as QGIS. |
| --use-3dmesh | | off | uses a full 3D mesh to compute the orthophoto instead of a 2.5D mesh | This option is a bit faster and provides similar results in planar areas. By default, the 2.5D mesh is used to generate the orthophoto (which tends to work better than the full 3D mesh). |
GENERAL QUALITY OPTIMIZATION options
| flag | values | default | description | notes |
|------|--------|---------|-------------|-------|
| --auto-boundary | | off | automatically sets a boundary using camera shot locations to limit the area of the reconstruction | Useful to remove far away background artifacts (sky, background landscapes, etc.). |
| --boundary | JSON file | none | GeoJSON polygon limiting the area of the reconstruction | Can be specified either as a path to a GeoJSON file or as a JSON string representing the contents of a GeoJSON file. |
| --camera-lens | auto, perspective, brown, fisheye, spherical, dual, equirectangular | auto | camera projection type | Manually setting a value can help improve geometric undistortion. |
| --cameras | JSON file | none | camera parameters computed from another dataset | Uses params from a text file instead of calculating them. Can be specified either as a path to a cameras.json file or as a JSON string representing the contents of a cameras.json file. |
| --use-fixed-camera-params | | off | turns off camera parameter optimization during bundle adjustment | Can sometimes be useful for improving results that exhibit doming/bowling, or when images are taken with a rolling shutter camera. |
| --cog | | off | creates cloud-optimized GeoTIFFs instead of normal GeoTIFFs | |
PERFORMANCE OPTIMIZATION options
| flag | values | default | description | notes |
|------|--------|---------|-------------|-------|
| --use-hybrid-bundle-adjustment | | off | runs local bundle adjustment for every image added to the reconstruction and a global adjustment every 100 images | Speeds up reconstruction for very large datasets. |
| --max-concurrency | positive integer | 4 | maximum number of processes to use in various processes | Peak memory requirement is ~1GB per thread and 2 megapixel image resolution. |
| --no-gpu | | off | does not use GPU acceleration, even if it’s available | |
| --optimize-disk-space | | off | deletes heavy intermediate files to optimize disk space usage | This disables restarting the pipeline from an intermediate stage, but allows the analysis on machines that don’t have sufficient disk space available. |
INPUT / OUTPUT MANAGEMENT options
| flag | values | default | description | notes |
|------|--------|---------|-------------|-------|
| --project-path | PATH string | none | path to the project folder | Your project folder should contain subfolders for each dataset. Each dataset should have an images folder. |
| --name | NAME string | code | name of the dataset | This is the ODM-required subfolder within the project folder. |
| --ignore-gsd | | off | ignores Ground Sampling Distance (GSD) | GSD caps the maximum resolution of image outputs and resizes images, resulting in faster processing and lower memory usage. Since GSD is an estimate, sometimes ignoring it can result in slightly better image output quality. |
| --crop | positive float | 3 | crops image outputs by creating a smooth buffer around the dataset boundaries, shrunk by N meters | Use 0 to disable cropping. |
| --copy-to | PATH string | none | copies output results to this folder after processing | |
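Putting the input/output options together, a hypothetical run that keeps the required code dataset name and archives the results elsewhere after processing might look like this (the archive path is illustrative):

apptainer run --writable-tmpfs $odm \
--project-path $output_dir --name code \
--copy-to /project/<project_account>/archive/ODM-results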

See the descriptions of other options directly in the OpenDroneMap documentation.

Submit ODM job into the SLURM queue

SLURM is the workload manager available on the Atlas cluster; the name stands for Simple Linux Utility for Resource Management, used for resource management and computing task scheduling. In simple terms, you HAVE to use it every time you want to outsource some computation to the HPC infrastructure. To learn more about SLURM, take a look at the tutorial SLURM: Basics of Workload Manager available in the DataScience Workbook.

PRO TIP:
If you are working on an HPC infrastructure that uses the PBS workload manager, take a look at the tutorial PBS: Portable Batch System to learn more about the command that sends a task to the queue and the script configuration.
[source: DataScience Workbook]

Use the sbatch SLURM command to submit the computing job into the queue:

sbatch run_odm.sh
WARNING:
Tasks submitted into the queue are sent to powerful compute nodes, while all the commands you write in the terminal right after logging in are executed on the capacity-limited login node.

Never perform a demanding computation on a login node.
A. If you want to optimize some computationally demanding procedure and need a live preview for debugging, start an interactive session on the compute node.
B. If you want to migrate a large amount of data use transfer node: @atlas-dtn.hpc.msstate.edu.

Use the squeue SLURM command to check the status of the job in the queue:

squeue -u user.name                             # e.g., squeue -u alex.badacz
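While the job runs, you can also follow its standard output, which SLURM writes by default to slurm-<jobID>.out in the submission directory (the job ID below is hypothetical):

tail -f slurm-12345678.out             # Ctrl+C stops following; the job keeps running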


Access ODM analysis results

Use the ls command to display contents of the directory with outputs:

ls /project/<scinet_account>/<user-specific-path>/ODM/RESULTS/<project-name>/code

e.g., ls /project/isu_gif_vrsc/Alex/geospatial/ODM/RESULTS/project-1-2022Aug24/code/


The figure below shows the file structure of all outputs generated by the ODM command-line module. The original screenshot comes from the official OpenDroneMap (v2.8.7) Documentation.

[Figure: file structure of the ODM outputs, reproduced from the OpenDroneMap documentation]

Get ODM on local machine

Download the ODM module

A. Download docker image using singularity
^ suggested for usage on computing machines where singularity is available

 module load singularity
 singularity pull --disable-cache  docker://opendronemap/odm:latest
WARNING:
Do this only once (!), the first time you configure your work with the command-line ODM module. Once created, the singularity image of the ODM tool can be reused any number of times.

Executing the code in the command line should create a new file named odm_latest.sif. This is an image of the ODM tool whose configuration ensures that it can be used efficiently on an HPC cluster. You can check for the created file using the ls command, which displays the contents of the current directory.
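Once the image exists, the invocation mirrors the Docker examples below; a minimal sketch assuming your project folder ~/datasets/project contains an images/ subfolder (the home directory is mounted inside the container by default):

singularity run --writable-tmpfs odm_latest.sif --project-path ~/datasets project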

B. Download docker image using Docker
^ suggested for usage on computing machines where Docker can be installed
^ requires prior Docker installation, follow instructions for your OS at docs.docker.com

# Windows
docker run -ti --rm -v c:/Users/youruser/datasets:/datasets opendronemap/odm --project-path /datasets project

# Mac or Linux
docker run -ti --rm -v /home/youruser/datasets:/datasets opendronemap/odm --project-path /datasets project

C. ODM at GitHub: https://github.com/OpenDroneMap/OpenDroneMap/

git clone https://github.com/OpenDroneMap/ODM.git

D. Download zipped source code: https://github.com/OpenDroneMap/OpenDroneMap/archive/master.zip

wget https://github.com/OpenDroneMap/OpenDroneMap/archive/master.zip
