So my initial thought was to use the IPython input transformer as the baseline, and then run normal module/script code through something that would pipe it into the input transformer.
Part of that reasoning was that the transformer can inherently process lines iteratively instead of running one large block of code. The Special Eval harness also works this way: it processes a whole script the same way as a single line or block of code, breaking it up into logical lines and going from there.
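For reference, a minimal sketch of that splitting step for plain Python source (it assumes the script parses as ordinary Python, so it doesn't yet account for the custom tokens discussed below; iter_logical_blocks is just an illustrative name, not anything from the harness):

```python
import ast


def iter_logical_blocks(source: str):
    """Yield the source text of each top-level statement in a script."""
    lines = source.splitlines()
    for node in ast.parse(source).body:
        # lineno/end_lineno are 1-based and inclusive (Python 3.8+).
        yield "\n".join(lines[node.lineno - 1:node.end_lineno])


script = "x = 1\nfor i in range(3):\n    x += i\nprint(x)"
for block in iter_logical_blocks(script):
    print("---")
    print(block)
```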
So thinking about this, the only real delta is that %>%-style tokens should act as line continuations. Now, I can add the ability for engines to define line-continuation tokens, so that the transformer accumulates text and then sends the entire block to the engine.
This is a slight deviation from my original thought, which was to break up block/script text and pass it to the input transformer as if it had been typed into IPython. That way there wouldn't be any possible divergence between the two paths.
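As a rough sketch of the accumulation idea (the engine class, its continuation_tokens attribute, and the execute hook are all hypothetical, purely to show the shape):

```python
from typing import Iterable, Iterator, List


class PipeEngine:
    """Stand-in engine; a real engine would declare its own tokens."""
    continuation_tokens = ("%>%",)

    def execute(self, block: str) -> None:
        print(f"engine received: {block!r}")


def accumulate_blocks(lines: Iterable[str], engine: PipeEngine) -> Iterator[str]:
    """Treat engine-defined end-of-line tokens as line continuations."""
    buffer: List[str] = []
    for line in lines:
        buffer.append(line.rstrip())
        # Keep buffering while the line ends in a continuation token.
        if not buffer[-1].endswith(engine.continuation_tokens):
            yield "\n".join(buffer)
            buffer = []
    if buffer:
        yield "\n".join(buffer)


engine = PipeEngine()
for block in accumulate_blocks(["df %>%", "    tail %>%", "    print", "x = 1"], engine):
    engine.execute(block)
```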
Right now, the thought is that something like:
df%>%tail%>%print
would result in a SyntaxError node that would then be processed by the engine.
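One way that could look, treating "SyntaxError node" loosely as the failed parse plus the raw source (handle_syntax_error is a hypothetical engine hook, not an existing API):

```python
import ast


def run_block(block: str, engine, namespace: dict) -> None:
    """Try ordinary Python first; on a SyntaxError, defer to the engine."""
    try:
        tree = ast.parse(block)
    except SyntaxError as err:
        # e.g. "df%>%tail%>%print" lands here, and the engine can interpret
        # the pipe syntax however it likes.
        engine.handle_syntax_error(block, err)
        return
    exec(compile(tree, "<block>", "exec"), namespace)
```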
I do wonder if it makes sense to add a line transformer to engines. My goal with that is to make sure interactive and script execution run through the exact same pipeline where possible.
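If that route is taken, hooking the same line-level transformation into the interactive side could look roughly like this; the %>%-joining function is illustrative only, while ip.input_transformers_cleanup is IPython's documented hook for transformers that take and return a list of lines:

```python
from typing import List
from IPython import get_ipython


def join_pipe_continuations(lines: List[str]) -> List[str]:
    """Fold physical lines that end in %>% into the preceding logical line."""
    out: List[str] = []
    for line in lines:
        if out and out[-1].rstrip().endswith("%>%"):
            out[-1] = out[-1].rstrip("\n") + " " + line.lstrip()
        else:
            out.append(line)
    return out


ip = get_ipython()
if ip is not None:
    # Cleanup transformers run early, before IPython checks cell completeness.
    ip.input_transformers_cleanup.append(join_pipe_continuations)
```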