Commit
Improve README
roym899 committed Jul 11, 2022
1 parent d13b911 commit ad0f0e0
Showing 6 changed files with 11 additions and 10 deletions.
21 changes: 11 additions & 10 deletions README.md
@@ -3,6 +3,7 @@ SDFEst is a package for pose, scale, and shape estimation using discretized signed distance fields.

[Installation](#installation) | [Dataset Preparation](#dataset-preparation) | [Paper Reproduction](#paper-reproduction) | [Code Structure](#code-structure) | [Development](#development) | [Citation](#citation) | [Docs](https://roym899.github.io/sdfest/)

![Animation of differentiable renderer.](resources/sdfest_overview.gif)

## Installation

@@ -34,7 +35,7 @@ pip install -r requirements.txt -e .
You need to install Detectron2 manually to run the pipeline with automatic instance segmentation.

Follow the [detectron2 installation guide](https://detectron2.readthedocs.io/en/latest/tutorials/install.html) from there.
-Tested with detectron2 0.5 + torch 1.9.0.
+Tested with detectron2 0.6 + torch 1.12.0 (and various older versions).
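As one possible route (check the linked guide for the build matching your CUDA and PyTorch versions), detectron2 can be installed from source:

```shell
# Install a matching torch build first (see the detectron2 install guide),
# then install detectron2 from source:
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
```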

## Dataset Preparation
See below for expected folder structure for each dataset.
@@ -71,14 +72,14 @@ See below for expected folder structure for each dataset.

First, make sure the datasets are in the right format.

### Benchmark Results
Depending on which dataset you have downloaded, you can reproduce the results reported in the paper (using the already trained models) by running the script
```bash
source reproduce_{shapenet,modelnet,redwood}_experiments.sh
```
After that, all results can be found in `./results`.
### Full paper (training and experiments)
See the bash script in `reproduction_scripts/reproduce_paper.sh`.

Evaluation code for REAL275 and REDWOOD75 experiments will be integrated in [cpas_toolbox](https://github.com/roym899/pose_and_shape_evaluation) soon.

<sup>A non-cleaned-up version of the evaluation code can be found in the `icaps_eval` branch.</sup>

-### Train Models
+### Train Models Only
To train a network for a specific category you need to first train a per-category VAE, and *afterwards* an initialization network.
#### VAE
First we need to convert the ShapeNet meshes to SDFs and optionally filter the dataset. To reproduce the preprocessing used in the paper, run
@@ -91,7 +92,7 @@ source train_vaes.sh
```
to train the models using the same configuration as used for the paper.
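To illustrate what the preprocessing produces: a discretized SDF is simply a voxel grid of signed distances, negative inside the surface and positive outside. A minimal pure-Python sketch for a sphere (illustrative only, not the package's converter):

```python
import math

def sphere_sdf_grid(resolution=8, radius=0.4):
    """Sample the SDF of a sphere (centered in the [-0.5, 0.5]^3 cube) on a voxel grid."""
    grid = []
    for i in range(resolution):
        plane = []
        for j in range(resolution):
            row = []
            for k in range(resolution):
                # voxel center in [-0.5, 0.5]^3
                x, y, z = ((c + 0.5) / resolution - 0.5 for c in (i, j, k))
                # signed distance: negative inside the sphere, positive outside
                row.append(math.sqrt(x * x + y * y + z * z) - radius)
            plane.append(row)
        grid.append(plane)
    return grid

grid = sphere_sdf_grid()
# voxels near the center are inside (negative), corner voxels are outside (positive)
assert grid[4][4][4] < 0 and grid[0][0][0] > 0
```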

-#### Init Network
+#### Initialization Network
To train the initialization network we used in our paper, run
```bash
source train_init_networks.sh
@@ -113,7 +114,7 @@ Code is structured into 4 sub-packages:

Differentiable rendering of depth images for signed distance fields.

-The signed distance field is assumed to be voxelized and it's pose is given by a x, y, z in the camera frame, a quaternion describing its orientation and a scale parameter describing its size. This module provides the derivative with respect to the signed distance values, and the full pose description (position, orientation, scale).
+The signed distance field is assumed to be voxelized and its pose is given by x, y, z coordinates in the camera frame, a quaternion describing its orientation, and a scale parameter describing its size. This module provides the derivative with respect to the signed distance values and the full pose description (position, orientation, scale).
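As an illustration of this parameterization (hypothetical names, not the package's API), mapping a point from normalized object coordinates into the camera frame applies scale, then orientation, then translation:

```python
from dataclasses import dataclass

@dataclass
class SDFPose:
    """Pose of a voxelized SDF: position in the camera frame, orientation, scale."""
    position: tuple    # (x, y, z) in the camera frame
    quaternion: tuple  # (w, x, y, z), unit quaternion for orientation
    scale: float       # uniform size of the SDF volume

def rotate(q, p):
    """Rotate point p by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    px, py, pz = p
    # rotation matrix R(q) applied to p, expanded term by term
    return (
        px * (1 - 2 * (y * y + z * z)) + py * 2 * (x * y - w * z) + pz * 2 * (x * z + w * y),
        px * 2 * (x * y + w * z) + py * (1 - 2 * (x * x + z * z)) + pz * 2 * (y * z - w * x),
        px * 2 * (x * z - w * y) + py * 2 * (y * z + w * x) + pz * (1 - 2 * (x * x + y * y)),
    )

def object_to_camera(pose, p):
    """Map a point from normalized object coordinates to the camera frame."""
    sp = tuple(pose.scale * c for c in p)          # apply scale
    rp = rotate(pose.quaternion, sp)               # apply orientation
    return tuple(r + t for r, t in zip(rp, pose.position))  # apply position

# identity orientation, scale 2, translated 1 m along the camera z-axis
pose = SDFPose((0.0, 0.0, 1.0), (1.0, 0.0, 0.0, 0.0), 2.0)
assert object_to_camera(pose, (0.5, 0.0, 0.0)) == (1.0, 0.0, 1.0)
```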

#### Generating compile_commands.json
<sup>General workflow for PyTorch extensions (only tested for JIT, probably similar otherwise)</sup>
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
Binary file added resources/sdfest_overview.gif
