Run notebooks on dispatch and make PR #530
Closes #575 --------- Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
Part of #495. We will now enforce in CI that the historical API docs for Runtime and Provider are valid. To do this, this PR adds the `--skip-broken-historical` argument. In a follow-up, I want to try checking the Qiskit docs, at least for certain versions, but I'm waiting until we regenerate the docs with #552 applied because I suspect it will fix some issues.
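As a rough, hedged illustration (not the repo's actual implementation), the flag could be consumed along these lines; the versioned-folder pattern used to detect historical docs is an assumption:

```ts
// Hypothetical sketch only: how a --skip-broken-historical flag might gate
// validation of historical API docs. The folder pattern below is an
// assumption, not necessarily the repo's real layout.
const skipBrokenHistorical = process.argv.includes("--skip-broken-historical");

function isHistoricalApiDoc(filePath: string): boolean {
  // e.g. a versioned folder such as .../0.14/... (assumed structure)
  return /\/\d+\.\d+(\.\d+)?\//.test(filePath);
}

function shouldValidate(filePath: string): boolean {
  return !(skipBrokenHistorical && isHistoricalApiDoc(filePath));
}
```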
This PR changes the release notes to be in descending chronological order.

#### How to regenerate the release notes

To regenerate the release notes and fix the order of each version, I used the API generation script with some changes. We need to modify the `writeSeparateReleaseNotes` function from `releaseNotes.ts` and the `updateApiDocs.ts` script.

The first step is to comment out the `extractMarkdownReleaseNotesPatches` call on line 245, given that we don't want to write the latest release notes file. We also need to substitute the two values returned by that call with a hard-coded array of the minor versions we want to regenerate and an empty dictionary. I did the regeneration in batches of 6 files:

```ts
//const [minorVersionsFound, markdownByPatchVersion] =
//  extractMarkdownReleaseNotesPatches(releaseNoteMarkdown);
const markdownByPatchVersion: {
  [id: string]: string;
} = {};
const minorVersionsFound = ['0.10', '0.15', '0.14', '0.13', '0.12', '0.11'];
```

The following step is to add some logic that reads all the files and stores each patch version found in each file. With that information, we re-write each release notes file at the end of the function in the correct order. That piece of code was extracted from the first version of the script, which can be found in commit 7aa586f, and it was copied into the same place it appears in that commit.

```ts
const [_, markdownByPatchOldVersion] =
  extractMarkdownReleaseNotesPatches(currentMarkdown);

for (let [versionPatch, markdownPatch] of Object.entries(
  markdownByPatchOldVersion,
)) {
  // We keep the release notes for a patch if it hasn't been modified for the
  // current release notes. Otherwise, we use the modified version.
  if (!markdownByPatchVersion.hasOwnProperty(versionPatch)) {
    markdownByPatchVersion[versionPatch] = markdownPatch;
  }
}
```

Finally, to speed up the process, I commented out the HTML globs we don't need to regenerate in the `updateApiDocs.ts` script, leaving only the release notes. This step is optional, and if used we have to make sure we restore all the docs before creating the PR.

```ts
async function convertHtmlToMarkdown(
  htmlPath: string,
  markdownPath: string,
  baseSourceUrl: string,
  pkg: Pkg,
) {
  const files = await globby(
    [
      //"apidocs/**.html",
      //"apidoc/**.html",
      //"stubs/**.html",
      "release_notes.html",
    ],
    {
      cwd: htmlPath,
    },
  );
```

Closes #538
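For context, a minimal sketch of the final re-write step described above might look as follows. The helper names here are illustrative, not the script's actual identifiers; it simply groups the collected patch notes by minor version and writes each file back with patches in descending order:

```ts
// Illustrative sketch only; not the repo's actual code.
import { writeFile } from "fs/promises";

async function writeReleaseNotesInOrder(
  markdownByPatchVersion: { [id: string]: string },
  filePathForMinorVersion: (minor: string) => string,
): Promise<void> {
  // Sort patch versions in descending, numeric-aware order (0.10.10 > 0.10.2).
  const patches = Object.keys(markdownByPatchVersion).sort((a, b) =>
    b.localeCompare(a, undefined, { numeric: true }),
  );
  const sectionsByMinor: { [minor: string]: string[] } = {};
  for (const patch of patches) {
    const minor = patch.split(".").slice(0, 2).join(".");
    if (!sectionsByMinor[minor]) {
      sectionsByMinor[minor] = [];
    }
    sectionsByMinor[minor].push(markdownByPatchVersion[patch]);
  }
  for (const [minor, sections] of Object.entries(sectionsByMinor)) {
    await writeFile(filePathForMinorVersion(minor), sections.join("\n\n"));
  }
}
```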
Came across some little fixes.
Part of #223. First small refactor of the `sphinxHtmlToMarkdown.ts` script. This PR divides the `sphinxHtmlToMarkdown` function into different helper functions, reducing its length and making it possible to add tests for different parts of the code. Changes in this PR:

- Renamed the `$page` variable to `$` to reduce the noise introduced by the number of calls we had and to follow the [Cheerio](https://cheerio.js.org/docs/basics/loading#load) convention.
- Moved the code that finds all the images into a helper function.
- Grouped similar transformations of the HTML (done prior to its conversion to markdown) into a helper function called `preprocess`.
- Moved the HTML-to-markdown conversion into a separate function to simplify the `sphinxHtmlToMarkdown` function (to be divided further in the future).
- Refactored the recursive search for methods, adding early returns and merging some parts.
- Moved the unified plugin to the `unifiedParser.ts` file in order to abstract the transformation, which can now be called from other files. That script can group together all the unified plugins, such as the link checker, `mergeClassMembers.ts`, and this one.

In this PR, the code has been copied almost identically to how it was before, except for some small adjustments, like in the unified plugin where we now use an anonymous function and have merged the two visitors. More changes will be introduced in a follow-up. --------- Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
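As a hedged illustration of the kind of helper extracted here (the name `collectImageSources` and its exact signature are assumptions, not the repo's actual code), a small Cheerio-based image collector could look like this:

```ts
// Illustrative sketch only: a small Cheerio-based helper of the kind this
// refactor extracts. The helper name is hypothetical.
import { load, type CheerioAPI } from "cheerio";

function collectImageSources($: CheerioAPI): string[] {
  // Gather every <img> src so the caller can later copy or rewrite the files.
  return $("img")
    .toArray()
    .map((img) => $(img).attr("src") ?? "")
    .filter((src) => src !== "");
}

// Example usage with a tiny HTML snippet:
const $ = load(`<p><img src="figs/circuit.avif" /></p>`);
console.log(collectImageSources($)); // ["figs/circuit.avif"]
```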
Part of #223. One way to improve the maintainability of our API generation is to test more. It gives us confidence that we aren't breaking things, and it makes it more obvious how the code behaves. This PR also does some light refactoring, like renaming things to be clearer. --------- Co-authored-by: Arnau Casau <arnaucasau@gmail.com> Co-authored-by: Arnau Casau <47946624+arnaucasau@users.noreply.github.com>
Closes #511. Example:

```
$ npm run check-pages-render
Checked 10 / 71 pages
Checked 20 / 71 pages
Checked 30 / 71 pages
Checked 40 / 71 pages
Checked 50 / 71 pages
Checked 60 / 71 pages
Checked 70 / 71 pages
✅ All pages render without crashing
```

## Only checks non-API docs by default

This script is quite slow, at least on my M1, because the Docker image is built for x86. So, to avoid slowing CI down too much, we only check non-API pages in PR builds. Our nightly cron job checks everything else.

## Does not auto-start Docker

We no longer have the file `webServer.ts` thanks to #578, which was a great improvement. Rather than adding back somewhat complex code to auto-start the server, and then to periodically ping until it's ready or time out, we expect the user to start up the server. That's acceptable since usually people will rely on CI to run this check; it's too slow for people to run frequently locally. --------- Co-authored-by: Frank Harkins <frankharkins@hotmail.co.uk>
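For a rough idea of what such a check does, here is a hedged sketch under assumed names, not the script's real implementation; it assumes the docs server is already running locally:

```ts
// Hypothetical sketch only; not the actual check-pages-render implementation.
// Assumes the docs server is already running (this script does not auto-start
// Docker).
async function checkPagesRender(urls: string[]): Promise<void> {
  let checked = 0;
  for (const url of urls) {
    const response = await fetch(url); // Node 18+ global fetch
    if (!response.ok) {
      throw new Error(`Failed to render ${url}: HTTP ${response.status}`);
    }
    checked++;
    if (checked % 10 === 0) {
      console.log(`Checked ${checked} / ${urls.length} pages`);
    }
  }
  console.log("✅ All pages render without crashing");
}
```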
Co-authored-by: Crowdin Bot <support+bot@crowdin.com>
This was an oversight. We want to always collapse the left sidebar.
This PR is a follow-up from #584. None of the logic is changed, only the control flow. This uses early returns to make it more clear how each distinct API type is handled. It also DRYs this code:

```typescript
const priorPythonApiType = meta.python_api_type;
if (!priorPythonApiType) {
  meta.python_api_type = python_type;
  meta.python_api_name = id;
}
```

--------- Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
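A minimal sketch of the early-return shape this refers to (the function and parameter names are illustrative, not the repo's actual identifiers):

```typescript
// Illustrative sketch of the early-return style; names are hypothetical.
function setPythonApiMetaOnce(
  meta: { python_api_type?: string; python_api_name?: string },
  pythonType: string,
  id: string,
): void {
  // Bail out early if an earlier API element already set the metadata.
  if (meta.python_api_type) return;
  meta.python_api_type = pythonType;
  meta.python_api_name = id;
}
```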
Partially fulfilling requirements for #284 --------- Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
This PR adds the ability to dispatch the action, run all notebooks, and make a PR with the updated changes. There are a couple of use cases for this, including running against the `main` branch to update all notebooks (in the case of package updates or similar).

To test, go to https://github.com/Qiskit/documentation/actions/workflows/notebook-test.yml, then choose "run workflow".
For an example, see: #529