Don't terminate tokenization if stack size changes

Previously, Python import blocks were not tokenized correctly because the loop
terminated prematurely when a match was reached at the end of the line and no
tokens were generated for it.

That approach was incorrect because the tokenizer may have just popped a rule,
and another iteration of the loop could pop further rules.

Now the early termination is performed only if the rule stack size hasn't changed.
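The loop logic above can be sketched as follows. This is a minimal JavaScript illustration (the project itself is CoffeeScript), with a toy `getNextTokens` helper standing in for the grammar's real matching; all names besides `ruleStack`, `nextTokens`, and `position` (which appear in the diff) are hypothetical:

```javascript
// Sketch of the tokenize loop with the fixed termination condition:
// only break early when the rule stack length is unchanged, since a
// zero-token match that popped a rule may enable further pops.
function tokenizeLine(line, ruleStack, getNextTokens) {
  const tokens = [];
  let position = 0;
  for (let i = 0; i < 100; i++) { // iteration guard for the sketch
    const previousRuleStackLength = ruleStack.length;
    const next = getNextTokens(line, position, ruleStack);
    if (next) {
      tokens.push(...next.tokens);
      position = next.end;
      // The fix: also require an unchanged stack before terminating.
      if (position === line.length && next.tokens.length === 0 &&
          ruleStack.length === previousRuleStackLength) break;
    } else {
      // push filler token for unmatched text at end of line
      if (position < line.length) tokens.push(line.slice(position));
      break;
    }
  }
  return tokens;
}

// Toy matcher: consumes one word per call; at end of line it pops one
// nested rule per call with a zero-width, zero-token match, mimicking
// nested scopes closing at the end of a Python import line.
function getNextTokens(line, position, ruleStack) {
  if (position < line.length) {
    const space = line.indexOf(' ', position);
    const stop = space === -1 ? line.length : space + 1;
    return { tokens: [line.slice(position, stop).trim()], end: stop };
  }
  if (ruleStack.length > 1) {
    ruleStack.pop(); // zero-width match that only pops a rule
    return { tokens: [], end: position };
  }
  return null; // nothing left to match
}
```

With the old condition (no stack-length check), the loop would stop after the first pop and leave rules stranded on the stack; with the fix, each pop allows another pass, so every nested rule closes before the loop exits.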
Kevin Sawicki
2013-08-20 11:28:32 -07:00
parent b10a01ddc2
commit 27cee3e19c
2 changed files with 25 additions and 1 deletion


@@ -146,7 +146,7 @@ class TextMateGrammar
       tokens.push(nextTokens...)
       position = tokensEndPosition
-      break if position is line.length and nextTokens.length is 0
+      break if position is line.length and nextTokens.length is 0 and ruleStack.length is previousRuleStackLength
     else # push filler token for unmatched text at end of line
       if position < line.length