perf: use emit from swc instead of tsc #15118
Conversation
Co-authored-by: Nayeem Rahman <nayeemrmn99@gmail.com>
a few thoughts
```rust
impl TypeCheckCache {
  pub fn new(db_file_path: &Path) -> Result<Self, AnyError> {
```
meta: we have never managed the sizes of caches, and an unbounded database raises the question of quota management and how we would enforce an LRU policy and size constraints.
I didn't think it was worth the extra complexity, but maybe it is? The previous tsbuildinfo cache was also unbounded.
I just wrote some tests for this: for the same entrypoint type checked multiple times with a unique change each time, the end result was this (the hashes don't take up much room):
- 1,000 unique type checks - 92KB (size of sqlite db)
- 5,000 unique type checks - 316KB
- 10,000 unique type checks - 600KB
Also, I didn't notice any slowdown in speed as the database size grew.
However, once you start type checking different root entry points, it starts to grow a lot because the tsbuildinfo is stored as well. For example, this is with unique entry points:
- 1,000 unique entry points - 6.1MB
- 5,000 unique entry points - ~30MB
- 10,000 unique entry points - 59.5MB
For this, I also didn't notice any slowdown as the database size grew... it remained exactly the same. SQLite seems quite fast.
I think it would be unlikely for someone to even have more than 100 root entrypoints before upgrading their CLI version. Let me know your thoughts.
Yeah, I don't disagree; it was more of a meta concern that this, plus other caches, likely needs better quota management.
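For reference, a minimal sketch of how size constraints could be layered onto a SQLite-backed cache like this one: track a last-used timestamp per row and trim the least recently used entries past a cap. The table and column names here are hypothetical, not the actual CLI schema:

```rust
use rusqlite::{params, Connection};

/// Hypothetical LRU-style trim: keep at most `max_rows` entries,
/// evicting the least recently used first. Assumes a `last_used`
/// column that is updated on every cache hit.
fn trim_cache(conn: &Connection, max_rows: i64) -> rusqlite::Result<()> {
  conn.execute(
    "DELETE FROM typecheckcache WHERE check_hash NOT IN (
       SELECT check_hash FROM typecheckcache
       ORDER BY last_used DESC
       LIMIT ?1
     )",
    params![max_rows],
  )?;
  Ok(())
}
```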
```rust
/// Runs the common sqlite pragma.
pub fn run_sqlite_pragma(conn: &Connection) -> Result<(), AnyError> {
  // Enable write-ahead-logging and tweak some other stuff
  let initial_pragmas = "
```
question: how were these determined?
It's from here (lines 69 to 81 in 56d0ca7):
```rust
// Enable write-ahead-logging and tweak some other stuff.
let initial_pragmas = "
  -- enable write-ahead-logging mode
  PRAGMA journal_mode=WAL;
  -- fsync less aggressively; WAL stays consistent on crash
  PRAGMA synchronous=NORMAL;
  -- keep temporary tables and indices in memory
  PRAGMA temp_store=memory;
  -- use 4 KiB pages
  PRAGMA page_size=4096;
  -- memory-map up to ~6 MB of the database file
  PRAGMA mmap_size=6000000;
  -- run SQLite's internal optimizer heuristics
  PRAGMA optimize;
";
conn.execute_batch(initial_pragmas)?;
conn.set_prepared_statement_cache_capacity(128);
```
Divy let me know about these in the last PR: #13462 (comment)
Ok, how were they determined?
cc @littledivy -- do you know?
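For anyone wanting to poke at these locally, here's a small standalone sketch (plain `rusqlite`, outside Deno's wrapper) that applies a few of the pragmas quoted above and reads one back; pragmas can be queried to confirm they took effect:

```rust
use rusqlite::Connection;

fn main() -> rusqlite::Result<()> {
  let conn = Connection::open("cache.db")?;
  // Apply a subset of the pragmas quoted above.
  conn.execute_batch(
    "PRAGMA journal_mode=WAL;
     PRAGMA synchronous=NORMAL;
     PRAGMA temp_store=memory;",
  )?;
  // Querying the pragma back confirms the journal mode switched.
  let mode: String =
    conn.query_row("PRAGMA journal_mode;", [], |row| row.get(0))?;
  println!("journal_mode = {mode}"); // expected: wal
  Ok(())
}
```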
```rust
    return false;
  }
  hasher.write(specifier.as_str().as_bytes());
  hasher.write(code.as_bytes());
```
Would it be better to use precomputed hashes of the code here instead of the code itself?
Maybe, though I wouldn't be surprised if retrieving and storing those hashes is slower than just doing the hash. xxHash is very fast.
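For context on that claim, the hashing in the diff amounts to something like the following (a sketch using the `twox-hash` crate as one xxHash implementation; this is illustrative, not the actual CLI code):

```rust
use std::hash::Hasher;
use twox_hash::XxHash64;

/// Illustrative: hash a module's specifier and source text together,
/// mirroring the `hasher.write(...)` calls in the diff above.
fn source_hash(specifier: &str, code: &str) -> u64 {
  let mut hasher = XxHash64::with_seed(0);
  hasher.write(specifier.as_bytes());
  hasher.write(code.as_bytes());
  hasher.finish()
}
```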
LGTM
Oh, I forgot, how is the output of …
"experimentalDecorators": true, | ||
"incremental": true, | ||
"jsx": "react", | ||
"jsxFactory": "React.createElement", |
Adding these here appears to cause issues when `"jsx"` is set to `"react-jsx"`.
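For illustration, tsc itself rejects that combination (with an error along the lines of "Option 'jsxFactory' cannot be specified when option 'jsx' is 'react-jsx'"), so a config shaped like this fails under the automatic runtime:

```jsonc
{
  "compilerOptions": {
    "jsx": "react-jsx",
    // Classic-runtime factory options conflict with the
    // automatic runtime selected by "react-jsx".
    "jsxFactory": "React.createElement"
  }
}
```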
Hmm… I can’t remember why I added these :/ (but it’s late here)
Ref: #15263
This change separates emitting from type checking and now only emits with swc. It also separates the emit cache from the type checking cache.
`deno run --no-check=remote` / `deno check --remote` does not report remote errors if a local check was already performed successfully (#14632)

Part of #13302
Closes #14632
Closes #8706
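As a rough sketch of the cache split described above (illustrative stand-ins; the real caches are SQLite-backed in the CLI): the emit cache is keyed by the source itself, so an swc re-emit is only needed when a file's contents change, independent of whether type checking ran:

```rust
use std::collections::HashMap;

/// Illustrative in-memory stand-in for the separated emit cache.
struct EmitCache {
  /// specifier -> (source hash, emitted JS)
  entries: HashMap<String, (u64, String)>,
}

impl EmitCache {
  /// A stored emit is only valid while the source hash still matches.
  fn get_emit(&self, specifier: &str, source_hash: u64) -> Option<&str> {
    match self.entries.get(specifier) {
      Some((hash, code)) if *hash == source_hash => Some(code.as_str()),
      _ => None,
    }
  }

  fn set_emit(&mut self, specifier: String, source_hash: u64, code: String) {
    self.entries.insert(specifier, (source_hash, code));
  }
}
```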