This project is a replication of Covered Interest Parity (CIP) deviations. A CIP deviation is the spread between a cash riskless rate and a synthetic riskless rate, where the synthetic rate is a local-currency borrowing rate swapped into a foreign-denominated rate using cross-currency derivatives.
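As a rough illustration, the deviation (often called the cross-currency basis) can be computed from a spot rate, a forward rate, and the two cash rates. The sketch below uses a simple annualized log-rate form; the variable names and conventions are illustrative assumptions, not necessarily the exact ones used in this repo's code.

```python
import math

def cip_basis(spot, forward, r_domestic, r_foreign, tenor_years=1.0):
    """Illustrative log-form CIP deviation (cross-currency basis).

    spot, forward : domestic currency per unit of foreign currency
    r_domestic    : continuously compounded domestic cash rate (annualized)
    r_foreign     : continuously compounded foreign cash rate (annualized)

    The synthetic domestic rate borrows in the foreign currency and hedges
    the exchange-rate risk with a forward; the basis is cash minus synthetic.
    """
    synthetic = r_foreign + math.log(forward / spot) / tenor_years
    return r_domestic - synthetic

# Example: a negative basis means the synthetic rate exceeds the cash rate.
print(cip_basis(spot=1.10, forward=1.12, r_domestic=0.03, r_foreign=0.01))
```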
The quickest way to run the code in this repo is as follows. First, clone this GitHub repository and open it in the IDE of your choice. Then, with a Python environment activated in your terminal, install the dependencies with pip:

```
pip install -r requirements.txt
```
To convert your files to LaTeX, you will need to have Pandoc installed on your device. On macOS:

```
brew install pandoc
```
Finally, run

```
doit
```

And that's it!
If you would also like to run the R code included in this project, you can either install R and the required packages manually, or use the included `environment.yml` file. To do this, run

```
mamba env create -f environment.yml
```

I'm using `mamba` here because `conda` is too slow. Activate the environment, uncomment the RMarkdown task in the `dodo.py` file, and then run `doit` as before.
You can run the unit tests, including doctests, with the following command:

```
pytest --doctest-modules
```
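To illustrate what `--doctest-modules` picks up, here is a small, hypothetical function whose docstring example pytest will execute (not a function from this repo):

```python
def annualize(rate, periods_per_year=12):
    """Convert a per-period rate to a simple annualized rate.

    >>> annualize(0.01)
    0.12
    """
    return round(rate * periods_per_year, 10)
```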
You can build the documentation with:

```
rm ./src/.pytest_cache/README.md
jupyter-book build -W ./
```

Use `del` instead of `rm` on Windows.
If you wish, you can export the environment variables from your `.env` file. This can be done easily in a Linux or Mac terminal with the following commands:

```
set -a ## automatically export all variables
source .env
set +a
```

On Windows, this can be done with the included `set_env.bat` file:

```
set_env.bat
```
- The `assets` folder is used for things like hand-drawn figures or other pictures that were not generated from code. These cannot be easily recreated if they are deleted.
- The `_output` folder, on the other hand, contains dataframes and figures that are generated from code. The entire folder can safely be deleted, because running the code again will regenerate all of its contents.
- The `data_manual` folder is for data that cannot be easily recreated. This data should be version controlled. Anything in the `_data` folder or in the `_output` folder should be able to be recreated by running the code and can safely be deleted.
- I'm using the `doit` Python module as a task runner. It works like `make` and the associated `Makefile`s. To rerun the code, install `doit` (https://pydoit.org/) and execute the command `doit` from the `src` directory. Note that `doit` is very flexible and can be used to run commands from the command prompt, making it suitable for projects that use scripts written in multiple different programming languages. (A minimal task sketch follows this list.)
- I'm using the `.env` file as a container for absolute paths that are private to each collaborator in the project. You can also use it for private credentials, if needed. It should not be tracked in Git.
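To give a flavor of how `doit` declares tasks, here is a minimal, hypothetical `dodo.py` task. The script name and target paths are placeholders, not the actual tasks in this repo:

```python
def task_pull_data():
    """Pull the raw data and write it to the _data directory."""
    return {
        # hypothetical script name; see this repo's dodo.py for the real tasks
        "actions": ["python src/pull_data.py"],
        "file_dep": ["src/pull_data.py"],
        "targets": ["_data/raw.parquet"],
        "clean": True,  # `doit clean` removes the targets
    }
```

Running `doit` then executes any task whose targets are missing or whose file dependencies have changed since the last run.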
I'll often use a separate folder for storing data. Any data in the `_data` folder can be deleted and recreated by rerunning the PyDoit command (the pulls are in the `dodo.py` file). Any data that cannot be automatically recreated should be stored in the `data_manual` folder. Because of the risk of manually-created data getting changed or lost, I prefer to keep it under version control if I can. Thus, data in the `_data` folder is excluded from Git (see the `.gitignore` file, sketched below), while the `data_manual` folder is tracked by Git.
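The relevant `.gitignore` entry looks something like this (illustrative; see the repo's actual `.gitignore` for the full list):

```
# regenerated by the code; see dodo.py
_data/
```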
Output is stored in the `_output` directory. This includes dataframes, charts, and rendered notebooks. When the output is small enough, I'll keep this under version control. I like this because I can keep track of how dataframes change as my analysis progresses, for example.
Of course, the `_data` directory and `_output` directory can be kept elsewhere on the machine. To make this easy, I always include the ability to customize these locations by defining the paths to these directories in environment variables, which I intend to be defined in the `.env` file, though they can also simply be defined on the command line or elsewhere. The `settings.py` file is responsible for loading these environment variables and doing some light preprocessing on them. It is the entry point for all other scripts to these definitions. That is, all code that references these variables loads them by importing `config`.
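As a sketch of this pattern, using only the standard library (the variable names, the fallback defaults, and the shape of `config` are assumptions for illustration, not the repo's exact code):

```python
# settings.py — an illustrative sketch, not the file's exact contents
import os
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent

# Fall back to ./_data and ./_output when no environment variable overrides them.
DATA_DIR = Path(os.environ.get("DATA_DIR", BASE_DIR / "_data")).resolve()
OUTPUT_DIR = Path(os.environ.get("OUTPUT_DIR", BASE_DIR / "_output")).resolve()

config = {
    "DATA_DIR": DATA_DIR,
    "OUTPUT_DIR": OUTPUT_DIR,
}
```

Other scripts would then read the paths with something like `from settings import config`.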