diff --git a/README.md b/README.md
index b08ad90b..d46d16da 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,7 @@ can be found [here](https://github.com/google/sling). CASPAR is intended to
parse using cascades that handle different parts of the transition action
space.
-The basic CASPAR parser is a general transition-based frame semantic parser
+The CASPAR parser is a general transition-based frame semantic parser
using bi-directional LSTMs for input encoding and a Transition Based Recurrent
Unit (TBRU) for output decoding. It is a jointly trained model using only the
text tokens as input, and the transition system has been designed to output frame
@@ -27,517 +27,13 @@ A more detailed description of the SLING parser can be found in this paper:
-## Trying out the parser
+## More information ...
-If you just want to try out the parser, you can install a pre-built wheel with
-pip and download a pre-trained parser model. On a Linux machine with Python
-2.7, install the wheel:
-
-```
-sudo pip install http://www.jbox.dk/sling/sling-1.0.0-cp27-none-linux_x86_64.whl
-```
-and download the pre-trained model:
-```
-wget http://www.jbox.dk/sling/sempar.flow
-```
-You can then use the parser in Python:
-```
-import sling
-
-# Load the pre-trained parser model.
-parser = sling.Parser("sempar.flow")
-
-# Parse a text and print the document frame and the mentions found.
-text = raw_input("text: ")
-doc = parser.parse(text)
-print doc.frame.data(pretty=True)
-for m in doc.mentions:
-  print "mention", doc.phrase(m.begin, m.end)
-```
-
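-The parser annotates the text with mention spans and the frames they evoke;
-`doc.frame.data(pretty=True)` prints the whole document frame in the textual
-SLING notation shown in the training section below.
-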
-## Installation
-
-First, clone the GitHub repository and switch to the caspar branch.
-
-```shell
-git clone https://github.com/google/sling.git
-cd sling
-git checkout caspar
-```
-
-SLING uses [Bazel](https://bazel.build/) as its build system, so you need to
-[install Bazel](https://docs.bazel.build/versions/master/install.html) to
-build the SLING parser.
-
-```shell
-sudo apt-get install pkg-config zip g++ zlib1g-dev unzip python
-wget -P /tmp https://github.com/bazelbuild/bazel/releases/download/0.13.0/bazel-0.13.0-installer-linux-x86_64.sh
-chmod +x /tmp/bazel-0.13.0-installer-linux-x86_64.sh
-sudo /tmp/bazel-0.13.0-installer-linux-x86_64.sh
-```
-
-The parser trainer uses Python v2.7 and PyTorch, so both need to be
-installed.
-
-```shell
-# Change to your favorite version as needed.
-sudo pip install http://download.pytorch.org/whl/cpu/torch-0.3.1-cp27-cp27mu-linux_x86_64.whl
-```
-
-## Building
-
-* Operating system: Linux
-* Languages: C++, Python 2.7, assembler
-* CPU: Intel x64 or compatible
-* Build system: Bazel
-
-You can test your installation by building a few important targets.
-
-```shell
-git checkout caspar
-bazel build -c opt sling/nlp/parser sling/nlp/parser/tools:all
-```
-
-Next, build the SLING Python module and link to it, since it will be used by
-the trainer. As before, make sure you are on the caspar branch, which contains
-all of the CASPAR functionality.
-
-```shell
-git checkout caspar
-bazel build -c opt sling/pyapi:pysling.so
-sudo ln -s $(realpath python) /usr/lib/python2.7/dist-packages/sling
-```
-
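-If the module is linked correctly, it should now be importable from Python. A
-quick sanity check:
-
-```
-# Verify that the SLING Python bindings can be loaded (Python 2.7).
-import sling
-print sling.__file__
-```
-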
-**NOTE:**
-* If you are using an older version of GCC (< v5), you may want to comment
-out [this cxxopt](https://github.com/google/sling/blob/f8f0fbd1a18596ccfe6dbfba262a17afd36e2b5f/.bazelrc#L8) in .bazelrc.
-
-## Training
-
-Training a new model consists of preparing the commons store and the training
-data, specifying various options and hyperparameters in the training script,
-and tracking results as training progresses. These are described below in
-detail.
-
-### Data preparation
-
-The first step consists of preparing the commons store (also called the global
-store). This store has frame and schema definitions for all types and roles of
-interest, e.g. `/saft/person`, `/pb/love-01`, or `/pb/arg0`. In order to build
-the commons store for the OntoNotes-based parser, check out PropBank in a
-directory parallel to the SLING directory:
-
-```shell
-cd ..
-git clone https://github.com/propbank/propbank-frames.git propbank
-cd sling
-sling/nlp/parser/tools/build-commons.sh
-```
-
-This will build a SLING store with all the schemas needed and put it into
-`/tmp/commons`.
-
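-You can sanity-check the resulting store from Python. A minimal sketch,
-assuming the Python module built above is importable and the store was written
-to `/tmp/commons`:
-
-```
-import sling
-
-# Load the commons store and freeze it so it can be shared read-only.
-commons = sling.Store()
-commons.load("/tmp/commons")
-commons.freeze()
-
-# Look up one of the schemas to verify that it is present.
-print commons["/pb/love-01"].data(pretty=True)
-```
-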
-Next, write a converter that turns documents in your existing format into
-[SLING documents](sling/nlp/document/document.h). A SLING document is just a
-document frame of type `/s/document`. An example of such a frame in textual
-encoding is shown below. It is best to create one SLING document per input
-sentence.
-```
-{
-  :/s/document
-  /s/document/text: "John loves Mary"
-  /s/document/tokens: [
-    {
-      :/s/token
-      /s/token/index: 0
-      /s/token/start: 0
-      /s/token/length: 4
-      /s/token/break: 0
-      /s/token/text: "John"
-    },
-    {
-      :/s/token
-      /s/token/index: 1
-      /s/token/start: 5
-      /s/token/length: 5
-      /s/token/text: "loves"
-    },
-    {
-      :/s/token
-      /s/token/index: 2
-      /s/token/start: 11
-      /s/token/length: 4
-      /s/token/text: "Mary"
-    }
-  ]
-  /s/document/mention: {=#1
-    :/s/phrase
-    /s/phrase/begin: 0
-    /s/phrase/evokes: {=#2 :/saft/person }
-  }
-  /s/document/mention: {=#3
-    :/s/phrase
-    /s/phrase/begin: 1
-    /s/phrase/evokes: {=#4
-      :/pb/love-01
-      /pb/arg0: #2
-      /pb/arg1: {=#5 :/saft/person }
-    }
-  }
-  /s/document/mention: {=#6
-    :/s/phrase
-    /s/phrase/begin: 2
-    /s/phrase/evokes: #5
-  }
-}
-```
-For writing your converter, or to get a better grasp of the concepts of frames
-and stores in SLING, have a look at the detailed deep dive on frames and
-stores [here](sling/frame/README.md).
-
-The SLING [Document class](sling/nlp/document/document.h)
-also has methods to incrementally build such document frames, e.g.
-```c++
-#include "sling/file/recordio.h"
-#include "sling/frame/serialization.h"
-#include "sling/frame/store.h"
-#include "sling/nlp/document/document.h"
-
-Store global;
-// Read global store from a file via LoadStore().
-
-// Lookup handles in advance.
-Handle h_person = global.Lookup("/saft/person");
-Handle h_love01 = global.Lookup("/pb/love-01");
-Handle h_arg0 = global.Lookup("/pb/arg0");
-Handle h_arg1 = global.Lookup("/pb/arg1");
-
-// Prepare the document.
-Store store(&global);
-Document doc(&store); // empty document
-
-// Add token information.
-doc.SetText("John loves Mary");
-doc.AddToken(0, 4, "John", 0);
-doc.AddToken(5, 10, "loves", 1);
-doc.AddToken(11, 15, "Mary", 1);
-
-// Create frames that will eventually be evoked.
-Builder b1(&store);
-b1.AddIsA(h_person);
-Frame john_frame = b1.Create();
-
-Builder b2(&store);
-b2.AddIsA(h_person);
-Frame mary_frame = b2.Create();
-
-Builder b3(&store);
-b3.AddIsA(h_love01);
-b3.Add(h_arg0, john_frame);
-b3.Add(h_arg1, mary_frame);
-Frame love_frame = b3.Create();
-
-// Add spans and evoke frames from them.
-doc.AddSpan(0, 1)->Evoke(john_frame);
-doc.AddSpan(1, 2)->Evoke(love_frame);
-doc.AddSpan(2, 3)->Evoke(mary_frame);
-
-doc.Update();
-string encoded = Encode(doc.top());
-
-// Append 'encoded' to a recordio file ("/tmp/train.rec" is just an example).
-RecordWriter writer("/tmp/train.rec");
-
-writer.Write(encoded);
-...
-
-writer.Close();
-```
-
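-The same document can also be assembled from Python. The sketch below is an
-outline under assumptions, not a tested recipe: it assumes the pysling module
-exposes `Document`/`DocumentSchema` wrappers with `add_token`, `add_mention`,
-and `evoke` methods on your checkout, and that the commons store from above is
-in `/tmp/commons`; check the Python API on your branch for the exact names.
-
-```
-import sling
-
-# Load the schemas and freeze the commons store.
-commons = sling.Store()
-commons.load("/tmp/commons")
-schema = sling.DocumentSchema(commons)
-commons.freeze()
-
-# Build the document in a local store layered on top of the commons.
-store = sling.Store(commons)
-doc = sling.Document(store=store, schema=schema)
-doc.add_token("John")
-doc.add_token("loves")
-doc.add_token("Mary")
-
-# Create the frames that the spans will evoke.
-john = store.parse("{ :/saft/person }")
-mary = store.parse("{ :/saft/person }")
-love = store.frame({"isa": commons["/pb/love-01"],
-                    "/pb/arg0": john,
-                    "/pb/arg1": mary})
-
-# Evoke the frames from the spans and finalize the document frame.
-doc.add_mention(0, 1).evoke(john)
-doc.add_mention(1, 2).evoke(love)
-doc.add_mention(2, 3).evoke(mary)
-doc.update()
-```
-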
-Use the converter to create the following corpora:
-+ Training corpus of annotated SLING documents.
-+ Dev corpus of annotated SLING documents.
-
-CASPAR uses the [recordio file format](https://github.com/google/sling/blob/caspar/sling/file/recordio.h)
-for training, where each record corresponds to one encoded document. This
-format is up to 25x faster to read than zip files, with almost identical
-compression ratios.
-
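-The Python API also wraps this format. A minimal sketch of writing encoded
-documents to a recordio file, assuming `sling.RecordWriter` is available in
-your build; `converted` is a hypothetical list of document frames produced by
-your converter:
-
-```
-import sling
-
-# Write one record per encoded document ("/tmp/train.rec" is just an example).
-writer = sling.RecordWriter("/tmp/train.rec")
-for n, document_frame in enumerate(converted):
-  writer.write(str(n), document_frame.data(binary=True))
-writer.close()
-```
-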
-### Specify training options and hyperparameters
-
-Once the commons store and the corpora have been built, you are ready to train
-a model. For this, use the supplied [training script](sling/nlp/parser/tools/train.sh).
-The script provides various command-line arguments. The ones that specify
-the input data are:
-+ `--commons`: File path of the commons store built in the previous step.
-+ `--train`: Path to the training corpus built in the previous step.
-+ `--dev`: Path to the annotated dev corpus built in the previous step.
-+ `--output` or `--output_dir`: Output folder where checkpoints, master spec,
- temporary files, and the final model will be saved.
-
-Then we have the various training options and hyperparameters:
-+ `--word_embeddings`: Empty, or path to pretrained word embeddings in
- [Mikolov's word2vec format](https://github.com/tmikolov/word2vec/blob/master/word2vec.c).
- If supplied, these are used to initialize the embeddings for word features.
-+ `--batch`: Batch size used during training.
-+ `--report_every`: Checkpoint interval (in number of batches).
-+ `--steps`: Number of training batches to process.
-+ `--method`: Optimization method to use (e.g. adam or momentum), along
- with auxiliary arguments like `--adam_beta1`, `--adam_beta2`, `--adam_eps`.
-+ `--learning_rate`: Learning rate.
-+ `--grad_clip_norm`: Max norm beyond which gradients will be clipped.
-+ `--moving_average`: Whether or not to use exponential moving average.
-
-The script comes with reasonable defaults for the hyperparameters for
-training a semantic parser model, but it is a good idea to hardcode
-your favorite arguments [directly in the
-flag definitions](sling/nlp/parser/trainer/train_util.py#L94)
-to avoid supplying them again and again on the command line.
-
-### Run the training script
-
-To test your training setup, you can kick off a small training run:
-```shell
-./sling/nlp/parser/tools/train.sh --commons=<commons store> \
-   --train=<train corpus> --dev=<dev corpus> \
-   --report_every=500 --train_steps=1000 --output=<output folder>