atom/spec
Kevin Sawicki 27cee3e19c Don't terminate tokenization if stack size changes
Previously, Python import blocks were not tokenizing correctly because the loop terminated prematurely when a match at the end of the line was reached and no tokens were generated for it.

This was incorrect because the tokenizer may have just popped a rule, and another pass through the loop could pop more rules.

Now this early termination is only performed if the stack size hasn't changed.
2013-08-20 11:38:06 -07:00
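
The commit message describes a termination check in the tokenization loop: a match that produces no tokens only ends the loop when the rule stack size is unchanged. Below is a minimal TypeScript sketch of that idea; the names (`Rule`, `MatchResult`, `tokenizeLine`) are illustrative placeholders, not Atom's actual tokenizer API.

```typescript
// Illustrative sketch only; not Atom's real tokenizer code.

interface MatchResult {
  tokens: string[];   // tokens emitted by this match (may be empty)
  position: number;   // line position after the match
}

interface Rule {
  // Try to match at `position`; may push or pop entries on `stack`.
  // Returns null when the rule cannot match at all.
  match(line: string, position: number, stack: Rule[]): MatchResult | null;
}

function tokenizeLine(line: string, ruleStack: Rule[]): string[] {
  const tokens: string[] = [];
  let position = 0;

  while (ruleStack.length > 0) {
    const rule = ruleStack[ruleStack.length - 1];
    const stackSizeBefore = ruleStack.length;

    const result = rule.match(line, position, ruleStack);
    if (result === null) break;

    tokens.push(...result.tokens);

    // Old behaviour: a zero-token match that made no progress always ended
    // the loop. New behaviour: only end the loop if the rule stack size is
    // unchanged, because the match may have just popped a rule and another
    // pass could pop more (e.g. closing out a multi-line Python import block).
    if (result.tokens.length === 0 && result.position === position) {
      if (ruleStack.length === stackSizeBefore) break;
    }

    position = result.position;
  }

  return tokens;
}
```

With this check, a zero-progress match that only pops a rule lets the loop run again so subsequent rules can also be popped, rather than stopping after the first pop as before.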