This application generates a static website for podcasts: the output is plain HTML and assets that can be served by any web server. The deployment pipeline is inspired by the Unix philosophy, and the stateless design by functional programming. It is currently used for https://kaigaiiju.ch.
- As few dependencies as possible
  Prefer standard Ruby/Rails solutions over external libraries, which add maintenance burden. For instance, RSpec is not necessary; the built-in test framework is enough.
- Follow the Rails-way architecture
  More discussion can be found on the wiki.
- Database modeling from a DDD perspective
  Use database constraints and modeling effectively. The ERD can be found here.
- (experimental) CQRS
  Write operations and read operations have different requirements; try not to mix them up (see the sketch below).
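As a rough illustration of the read/write split (a minimal sketch; `Episode` and `EpisodeSummaryQuery` are hypothetical names, not this repository's actual classes):

```ruby
# Minimal CQRS-style sketch. The class names are hypothetical,
# not taken from this repository.

# Write side: an ActiveRecord model that enforces invariants on writes.
class Episode < ApplicationRecord
  validates :number, presence: true, uniqueness: true
end

# Read side: a query object that only reads, shaped for the view.
class EpisodeSummaryQuery
  def latest(limit: 10)
    Episode.order(published_at: :desc).limit(limit)
           .pluck(:number, :title, :published_at)
  end
end
```

Keeping queries in dedicated read-side objects keeps the write models free of presentation concerns.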
If the latest commit message contains RELEASE_TRRIGER_MESSAGE, it triggers the release workflow, which builds the website via kaigaiijuch/release.
# e.g. with RELEASE_TRRIGER_MESSAGE=release!
$ git commit --allow-empty -m 'release!'
$ git push origin main
This triggers a build and creates a pull request on kaigaiijuch/kaigaiijuch.github.io. The trigger is defined here.
There is also a fetch_rss trigger. It runs on a schedule and can also be triggered via curl (a token is required):
$ curl -L -X POST -H "Accept: application/vnd.github+json" -H "Authorization: Bearer ${GITHUB_TOKEN}" -H "X-GitHub-Api-Version: 2022-11-28" https://api.github.com/repos/kaigaiijuch/website/dispatches -d '{"event_type":"fetch_rss"}'
Requirements:
- Ruby: see .ruby-version
- sqlite3
$ bin/setup # set DATA_REPO=your_data_repo to clone the data repository
$ bin/rails s
$ open http://localhost:3000 # note: plain HTTP, not HTTPS
Alternatively, with Docker:
$ docker-compose up
$ open http://localhost:13000 # note: plain HTTP, not HTTPS
$ bin/data/fetch_all [rss_feed_url] [csv/directory] [answers.csv] [transcription/csv/directory]
This operation is idempotent: it fetches data from the sources and stores it in the data repository.
- rss_feed_url: Spotify for Podcasters RSS feeds are supported. The feed data is fetched and stored in FeedsSpotifyForPodcaster by default. Important convention: the episode description must end with `#123-a title`, where `123-a` is the episode number (alphanumeric characters are allowed); see the parsing sketch below.
- csv/directory: (temporary solution) imports data from the CSV files in the directory. It is compatible with the CSV format exported by Google Sheets. Sample file: (TBD).
- answers.csv: (temporary solution) imports answers from a CSV file. It is compatible with the CSV export of a Google Forms linked spreadsheet; the question format is `#{question_number}: 【{category}】{question_original_title}`. Sample file: (TBD).
- transcription/csv/directory: (temporary solution) imports transcriptions from CSV files. The file name should be `#{episode_number}.transcription.csv`. Sample file: (TBD).
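To make the episode-number convention concrete, here is a minimal parsing sketch (the regex and helper name are illustrative, not code from this repository):

```ruby
# Hypothetical helper: extract the trailing "#123-a title" suffix
# from an episode description, per the convention above.
EPISODE_SUFFIX = /#(?<number>[0-9a-z]+(?:-[0-9a-z]+)?)\s+(?<title>.+)\z/i

def parse_episode_suffix(description)
  match = EPISODE_SUFFIX.match(description.strip)
  return nil unless match

  { number: match[:number], title: match[:title] }
end

parse_episode_suffix("A chat about visas. #123-a Moving abroad")
# => { number: "123-a", title: "Moving abroad" }
```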
$ bin/data/commit
This commits the fetched data to the data repository.
- data/episodes/#{episode_number}.chapters.txt: chapter data for the episode. The data format is `HH:MM:SS.mmm title`. A sample file is here.
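A minimal sketch of parsing one chapter line in that format (illustrative only; this repository's actual parser may differ):

```ruby
# Hypothetical parser for one "HH:MM:SS.mmm title" chapter line.
CHAPTER_LINE = /\A(?<h>\d{2}):(?<m>\d{2}):(?<s>\d{2}\.\d{3})\s+(?<title>.+)\z/

def parse_chapter(line)
  match = CHAPTER_LINE.match(line.strip)
  return nil unless match

  start = match[:h].to_i * 3600 + match[:m].to_i * 60 + match[:s].to_f
  { starts_at_seconds: start, title: match[:title] }
end

parse_chapter("00:01:30.500 Opening talk")
# => { starts_at_seconds: 90.5, title: "Opening talk" }
```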
When adding new pages, be aware that they may need to be added to the page list in bin/pages/list.rb and to the sitemap in config/sitemap.rb.
Check the .env file and satisfy its requirements; the variables are used in config/application.rb. TZ sets the timezone (see the sketch below).
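As an illustration of the pattern only (not the actual contents of config/application.rb; the module name is assumed):

```ruby
# Hypothetical sketch of consuming TZ in config/application.rb.
# "Website" is an assumed module name, not necessarily this app's.
module Website
  class Application < Rails::Application
    # Fall back to UTC when TZ is not provided in .env.
    config.time_zone = ENV.fetch("TZ", "UTC")
  end
end
```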
Set up the database:
$ bin/rake db:create db:migrate db:seed
Run the tests (the default rake task):
$ bin/rake
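In line with the no-RSpec policy above, tests can stay on the Rails default test framework. A minimal illustrative example (the file and class names are hypothetical):

```ruby
# test/models/episode_number_test.rb — hypothetical Minitest example.
require "test_helper"

class EpisodeNumberTest < ActiveSupport::TestCase
  test "episode numbers may carry an alphanumeric suffix" do
    assert_match(/\A\d+(?:-[a-z0-9]+)?\z/, "123-a")
  end
end
```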
$ bin/build
This generates the static pages in the public/ directory, based on the path list in bin/pages/list.rb (see the sketch below).
NOTE: once pages have been generated into public/, they are served as static files rather than by the server. Use bin/clean to remove the generated static pages.
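For orientation, a path list could look roughly like this (purely hypothetical; check bin/pages/list.rb for the real format):

```ruby
#!/usr/bin/env ruby
# Purely illustrative: emit one path per line for the static build to fetch.
# The real bin/pages/list.rb may be structured quite differently.
STATIC_PATHS = ["/", "/episodes", "/about"].freeze

STATIC_PATHS.each { |path| puts path }
```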
For the GitHub build workflow, DATA_REPO_TOKEN must be set as a repository secret.
Refresh the sitemap:
$ bin/rake sitemap:refresh
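The sitemap:refresh task name matches the one provided by the sitemap_generator gem. Assuming that gem is in use, a minimal config/sitemap.rb in its style looks roughly like this (the paths are illustrative, not this site's actual entries):

```ruby
# Sketch of a config/sitemap.rb in the sitemap_generator style.
SitemapGenerator::Sitemap.default_host = "https://kaigaiiju.ch"

SitemapGenerator::Sitemap.create do
  add "/episodes", changefreq: "weekly"
  add "/about", changefreq: "monthly"
end
```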