pipelining support #3

Open
dalejung opened this issue Jan 14, 2015 · 0 comments


So my initial thought was to use the IPython input transformer as the baseline, and then run normal module/script code through something that would pipe it into the input transformer.

Part of that reasoning was that the transformer inherently processes lines iteratively instead of running one large block of code. The Special Eval harness does this too: it processes a whole script the same way as a line or block of code, breaking it up into logical lines and going from there.
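A minimal sketch of that logical-line splitting, using only the stdlib `codeop` module (the actual harness internals are assumed here, not reproduced):

```python
import codeop

def logical_blocks(script):
    """Split a script into logical blocks, REPL-style.

    Physical lines are accumulated until codeop considers them a
    complete statement. Blocks that are outright invalid Python
    (e.g. containing %>%) are yielded unchanged so an engine can
    decide what to do with them.
    """
    buf = []
    for line in script.splitlines():
        buf.append(line)
        source = "\n".join(buf)
        try:
            # compile_command returns None while the source is incomplete
            if codeop.compile_command(source) is not None:
                yield source
                buf = []
        except SyntaxError:
            # invalid as plain Python: hand it through for engine handling
            yield source
            buf = []
    if buf:
        yield "\n".join(buf)
```

This mirrors the interactive path: each yielded block is what a user would have submitted at the prompt.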

So thinking about this, the only real delta is that %>%-style tokens should act as line continuations. Now, I can add the ability for engines to define line continuation tokens, so that the transformer accumulates text and then sends the entire block to the engine.
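One way to sketch that accumulation, with `%>%` standing in as a hypothetical engine-registered continuation token:

```python
CONTINUATION_TOKENS = ("%>%",)  # hypothetical: registered by an engine

def accumulate(lines, tokens=CONTINUATION_TOKENS):
    """Join physical lines into blocks using continuation tokens.

    A line ending with a token, or a following line starting with one,
    extends the current block instead of closing it. The whole block is
    then emitted at once, for the engine to consume.
    """
    lines = list(lines)
    block = []
    for i, line in enumerate(lines):
        block.append(line)
        ends_open = any(line.rstrip().endswith(t) for t in tokens)
        nxt = lines[i + 1].lstrip() if i + 1 < len(lines) else ""
        next_continues = any(nxt.startswith(t) for t in tokens)
        if not ends_open and not next_continues:
            yield "\n".join(block)
            block = []
    if block:  # unterminated trailing block
        yield "\n".join(block)
```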

This is a slight deviation from my original thought, which was to break up block/script text and pass it to the input transformer as if it had been typed into IPython. That way the two paths couldn't diverge.

Right now, the thought is that something like:

df %>% tail
  %>% print

would result in a SyntaxError node that would then be processed by the engine.
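For illustration, an engine could then rewrite that invalid block into nested calls. `rewrite_pipe` here is purely hypothetical, and works on raw text; a real engine would presumably operate on the SyntaxError node instead:

```python
def rewrite_pipe(source):
    """Rewrite a %>% pipeline into nested calls, magrittr-style.

    'df %>% tail %>% print' becomes 'print(tail(df))'. Hypothetical
    sketch only: no argument handling, no quoting, text-based split.
    """
    parts = [p.strip() for p in source.replace("\n", " ").split("%>%")]
    expr = parts[0]
    for func in parts[1:]:
        expr = f"{func}({expr})"
    return expr
```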

I do wonder if it makes sense to add a line transformer to engines. My goal there is to make sure interactive and script execution run through the exact same pipeline where possible.
