This class models the most crucial element of a syntax description: the pattern. A TextParserPattern primarily consists of a set of tokens. Tokens are Strings where the first character determines the type of the token. There are four known types.
Terminal token: In the syntax declaration the terminal token is prefixed by an underscore. Terminal tokens are terminal symbols of the syntax tree. They just represent themselves.
Variable token: The variable token describes values of a certain class such as strings or numbers. In the syntax declaration the token is prefixed by a dollar sign and the text of the token specifies the variable type. See ProjectFileParser for a complete list of variable types.
Reference token: The reference token specifies a reference to another parser rule. In the syntax declaration the token is prefixed by a bang and the text matches the name of the rule. See TextParserRule for details.
End token: The end token is written as a single dot and marks the expected end of the input stream.
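The prefix scheme described above can be illustrated with a small standalone sketch. This is not the actual Pattern implementation; the `split_token` helper and the sample tokens are invented for illustration:

```ruby
# Map the four prefix characters to token types, as described above.
# Illustrative sketch only, not the real TextParser::Pattern code.
TOKEN_TYPES = { '!' => :reference, '$' => :variable,
                '_' => :literal, '.' => :eof }

def split_token(token)
  type = TOKEN_TYPES[token[0]]
  unless type
    raise "All pattern tokens must start with a type identifier [!$_.]: #{token}"
  end
  # Literals keep their String content; other tokens use a Symbol name.
  name = case type
         when :literal then token[1..-1]
         when :eof     then '<END>'
         else               token[1..-1].to_sym
         end
  [type, name]
end

split_token('_task')      # a literal:        [:literal, "task"]
split_token('$STRING')    # a variable:       [:variable, :STRING]
split_token('!moreTasks') # a rule reference: [:reference, :moreTasks]
split_token('.')          # end of input:     [:eof, "<END>"]
```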
In addition to the pure syntax tree information the pattern also holds documentary information about the pattern.
# File lib/TextParser/Pattern.rb, line 46
def initialize(tokens, function = nil)
  # A unique name for the pattern that is used in the documentation.
  @keyword = nil
  # Initialize pattern doc as empty.
  @doc = nil
  # A list of TokenDoc elements that describe the meaning of variable
  # tokens. The order of the tokens and entries in the Array must correlate.
  @args = []
  # A list of references to other patterns that are related to this pattern.
  @seeAlso = []
  # A reference to a file under test/TestSuite/Syntax/Correct and a tag
  # within that file. This identifies example TJP code to be included with
  # the reference manual.
  @exampleFile = nil
  @exampleTag = nil

  @tokens = []
  tokens.each do |token|
    unless '!$_.'.include?(token[0])
      raise "Fatal Error: All pattern tokens must start with a type " +
            "identifier [!$_.]: #{tokens.join(', ')}"
    end
    # For the syntax specification using a prefix character is more
    # convenient. But for further processing, we need to split the string
    # into two symbols. The prefix determines the token type, the rest is
    # the token name. There are 4 types of tokens:
    # :reference : a reference to another rule
    # :variable  : a terminal symbol of a certain class
    # :literal   : a user defined string
    # :eof       : marks the end of an input stream
    type = [ :reference, :variable, :literal, :eof ]['!$_.'.index(token[0])]
    # For literals we use a String to store the token content. For others,
    # a symbol is better suited.
    name = type == :literal ?
           token[1..-1] : (type == :eof ? '<END>' : token[1..-1].intern)
    # We favor an Array to store the 2 elements over a Hash for
    # performance reasons.
    @tokens << [ type, name ]
    # Initialize pattern argument descriptions as empty.
    @args << nil
  end
  @function = function
  # In some cases we don't want to show all tokens in the syntax
  # documentation. This value specifies the index of the last shown token.
  @lastSyntaxToken = @tokens.length - 1

  @transitions = []
end
Convenience function to access individual tokens by index.
# File lib/TextParser/Pattern.rb, line 211
def [](i)
  @tokens[i]
end
Add the transitions to the State objects of this pattern. states is a Hash with all State objects. rules is a Hash with the Rule objects of the syntax. stateStack is an Array of State objects that have been traversed before reaching this pattern. sourceState is the State that the transition originates from. destRule, this pattern and destIndex describe the State the transition leads to. loopBack is a boolean flag that is set to true when the transition describes a loop back to the start of the Rule.
# File lib/TextParser/Pattern.rb, line 112
def addTransitionsToState(states, rules, stateStack, sourceState,
                          destRule, destIndex, loopBack)
  # If we hit a token in the pattern that is optional, we need to consider
  # the next token of the pattern as well.
  loop do
    if destIndex >= @tokens.length
      # Have we reached the end of the pattern? Such states always trigger
      # a reduce operation.
      sourceState.noReduce = false
      if sourceState.rule == destRule
        if destRule.repeatable
          # The transition leads us back to the start of the Rule. This
          # will generate transitions to the first token of all patterns
          # of this Rule.
          destRule.addTransitionsToState(states, rules, [], sourceState,
                                         true)
        end
      end
      # We've reached the end of the pattern. No more transitions to
      # consider.
      return
    end

    # The token descriptor tells us where the transition(s) need to go to.
    tokenType, tokenName = token = @tokens[destIndex]

    case tokenType
    when :reference
      # The descriptor references another rule.
      unless (refRule = rules[tokenName])
        raise "Unknown rule #{tokenName} referenced in rule #{destRule.name}"
      end
      # If we reference another rule from a pattern, we need to come back
      # to the pattern once we are done with the referenced rule. To be
      # able to come back, we collect a list of all the States that we
      # have passed during a reference resolution. This list forms a stack
      # that is popped during reduce operations of the parser FSM.
      skippedState = states[[ destRule, self, destIndex ]]
      # Rules may reference themselves directly or indirectly. To avoid
      # endless recursion of this algorithm, we stop once we have
      # detected a recursion. All necessary transitions have already been
      # collected. The recursion will be unrolled in the parser FSM.
      unless stateStack.include?(skippedState)
        # Push the skipped state on the stateStack before recursing.
        stateStack.push(skippedState)
        refRule.addTransitionsToState(states, rules, stateStack,
                                      sourceState, loopBack)
        # Once we're done, remove the State from the stateStack again.
        stateStack.pop
      end

      # If the referenced rule is not optional, we have no further
      # transitions for this pattern at this destIndex.
      break unless refRule.optional?(rules)
    else
      unless (destState = states[[ destRule, self, destIndex ]])
        raise "Destination state not found"
      end
      # We've found a transition to a terminal token. Add the transition
      # to the source State.
      sourceState.addTransition(@tokens[destIndex], destState, stateStack,
                                loopBack)
      # Fixed tokens are never optional. There are no more transitions for
      # this pattern at this index.
      break
    end

    destIndex += 1
  end
end
Iterator for tokens.
# File lib/TextParser/Pattern.rb, line 216
def each
  @tokens.each { |type, name| yield(type, name) }
end
Returns true if the pattern is empty.
# File lib/TextParser/Pattern.rb, line 221
def empty?
  @tokens.empty?
end
Generate the state machine states for the pattern. rule is the Rule that the pattern belongs to. A list of generated State objects will be returned.
# File lib/TextParser/Pattern.rb, line 98
def generateStates(rule)
  states = []
  @tokens.length.times { |i| states << State.new(rule, self, i) }
  states
end
Returns the number of tokens in the pattern.
# File lib/TextParser/Pattern.rb, line 226
def length
  @tokens.length
end
Return true if all tokens of the pattern are optional. If a token references a rule, this rule is followed for the check.
# File lib/TextParser/Pattern.rb, line 232
def optional?(rules)
  @tokens.each do |type, name|
    if type == :literal || type == :variable
      return false
    elsif type == :reference
      if !rules[name].optional?(rules)
        return false
      end
    end
  end
  true
end
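The recursive check can be illustrated with a self-contained sketch. Here the real Rule and Pattern objects are replaced by plain Arrays and Hashes (the `pattern_optional?`/`rule_optional?` helpers and the `maybeMore` rule are invented for illustration, not part of the class):

```ruby
# Sketch: a pattern is optional only if every token is optional.
# Literals and variables are never optional; a reference is optional
# only if the referenced rule is. A "rule" is simplified to an Array
# of patterns; a pattern is an Array of [type, name] tokens.
def pattern_optional?(pattern, rules)
  pattern.all? do |type, name|
    case type
    when :literal, :variable then false
    when :reference          then rule_optional?(rules[name], rules)
    else                          true  # :eof does not block optionality
    end
  end
end

# A rule is optional if at least one of its patterns is optional.
def rule_optional?(rule, rules)
  rule.any? { |pattern| pattern_optional?(pattern, rules) }
end

rules = {
  # A rule with an empty pattern is optional; the second pattern
  # models a repeatable ", <more>" tail.
  maybeMore: [ [],
               [[:literal, ','], [:reference, :maybeMore]] ]
}
pattern_optional?([[:reference, :maybeMore]], rules)  # => true
pattern_optional?([[:literal, 'task']], rules)        # => false
```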
Set the documentation text for the idx-th variable.
# File lib/TextParser/Pattern.rb, line 190
def setArg(idx, doc)
  @args[idx] = doc
end
Set the keyword and documentation text for the pattern.
# File lib/TextParser/Pattern.rb, line 184
def setDoc(keyword, doc)
  @keyword = keyword
  @doc = doc
end
Set the file and tag for the TJP code example.
# File lib/TextParser/Pattern.rb, line 205
def setExample(file, tag)
  @exampleFile = file
  @exampleTag = tag
end
Restrict the syntax documentation to the first idx tokens.
# File lib/TextParser/Pattern.rb, line 195
def setLastSyntaxToken(idx)
  @lastSyntaxToken = idx
end
Set the references to related patterns.
# File lib/TextParser/Pattern.rb, line 200
def setSeeAlso(also)
  @seeAlso = also
end
Returns true if the i-th token is a terminal symbol.
# File lib/TextParser/Pattern.rb, line 246
def terminalSymbol?(i)
  @tokens[i][0] == :variable || @tokens[i][0] == :literal
end
Recursively find the first terminal tokens of this pattern. If an index is specified, the search starts at the n-th pattern token instead of the first. The return value is an Array of [ token, pattern ] tuples.
# File lib/TextParser/Pattern.rb, line 253
def terminalTokens(rules, index = 0)
  type, name = @tokens[index]
  # Terminal tokens start with an underscore or dollar character.
  if type == :literal
    return [ [ name, self ] ]
  elsif type == :variable
    return []
  elsif type == :reference
    # We have to continue the search in the referenced rule and collect
    # the terminal tokens of all its patterns.
    rule = rules[name]
    tts = []
    rule.patterns.each { |p| tts += p.terminalTokens(rules, 0) }
    return tts
  else
    raise "Unexpected token #{type} #{name}"
  end
end
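The recursive descent through rule references can be sketched with simplified data structures. As before, rules are plain Arrays of patterns rather than Rule objects, and the `first_terminal_tokens` helper and the `property` rule are illustrative assumptions:

```ruby
# Sketch: collect the first terminal (literal) tokens of a pattern,
# following rule references recursively. Rules are simplified to
# Arrays of patterns; a pattern is an Array of [type, name] tokens.
def first_terminal_tokens(pattern, rules)
  type, name = pattern.first
  case type
  when :literal  then [name]
  when :variable then []  # variables match a class of values, not one token
  when :reference
    # Follow the reference and collect tokens from all its patterns.
    rules[name].flat_map { |p| first_terminal_tokens(p, rules) }
  else
    raise "Unexpected token #{type} #{name}"
  end
end

rules = {
  property: [ [[:literal, 'task'],     [:variable, :ID]],
              [[:literal, 'resource'], [:variable, :ID]] ]
}
first_terminal_tokens([[:reference, :property], [:eof, '<END>']], rules)
# => ["task", "resource"]
```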
Generate a text form of the pattern. This is similar to the syntax in the original syntax description.
# File lib/TextParser/Pattern.rb, line 355
def to_s
  str = ""
  @tokens.each do |type, name|
    case type
    when :reference
      str += "!#{name} "
    when :variable
      str += "$#{name} "
    when :literal
      str += "#{name} "
    when :eof
      str += ". "
    else
      raise "Unknown type #{type}"
    end
  end

  str
end
Returns a string that expresses the elements of the pattern in an EBNF like fashion. The resolution of the pattern is done recursively. This is just the wrapper function that sets up the stack.
# File lib/TextParser/Pattern.rb, line 276
def to_syntax(argDocs, rules, skip = 0)
  to_syntax_r({}, argDocs, rules, skip)
end
Generate a syntax description for this pattern.
# File lib/TextParser/Pattern.rb, line 281
def to_syntax_r(stack, argDocs, rules, skip)
  # If we find ourselves on the stack, we hit a recursive pattern. This is
  # used in repetitions.
  if stack[self]
    return '[, ... ]'
  end

  # "Push" us on the stack.
  stack[self] = true

  str = ''
  first = true
  # Analyze the tokens of the pattern, skipping the first 'skip' tokens.
  skip.upto(@lastSyntaxToken) do |i|
    type, name = @tokens[i]
    # If the first token is a _{ the pattern describes optional attributes.
    # They are represented by a standard idiom.
    if first
      first = false
      return '{ <attributes> }' if name == '{'
    else
      # Separate the syntax elements by a whitespace.
      str << ' '
    end

    if @args[i]
      # The argument is documented in the syntax definition. We copy the
      # entry as we need to modify it.
      argDoc = @args[i].dup

      # A documented argument without a name is a terminal token. We use
      # the terminal symbol as name.
      if @args[i].name.nil?
        str << "#{name}"
        argDoc.name = name
      else
        str << "<#{@args[i].name}>"
      end
      addArgDoc(argDocs, argDoc)

      # Documented arguments don't have the type set yet. Use the token
      # value for that.
      if type == :variable
        argDoc.typeSpec = "<#{name}>"
      end
    else
      # Undocumented tokens are recursively expanded.
      case type
      when :literal
        # Literals are shown as such.
        str << name.to_s
      when :variable
        # Variables are enclosed by angle brackets.
        str << "<#{name}>"
      when :reference
        if rules[name].patterns.length == 1 &&
           !rules[name].patterns[0].doc.nil?
          addArgDoc(argDocs, TokenDoc.new(rules[name].patterns[0].keyword,
                                          rules[name].patterns[0]))
          str << '<' + rules[name].patterns[0].keyword + '>'
        else
          # References are followed recursively.
          str << rules[name].to_syntax(stack, argDocs, rules, 0)
        end
      end
    end
  end
  # Remove us from the "stack" again.
  stack.delete(self)
  str
end
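The Hash-as-stack recursion guard used above can be demonstrated in isolation. This sketch renders a self-referencing rule in an EBNF-like form; the `render` helper and the `idList` rule are invented stand-ins, with rules simplified to a single pattern each:

```ruby
# Sketch: render a pattern in an EBNF-like form, using a Hash as a
# recursion guard. When a pattern is revisited while still on the
# "stack", the repetition idiom '[, ... ]' is emitted instead of
# recursing forever.
def render(pattern, rules, stack = {})
  return '[, ... ]' if stack[pattern]
  # "Push" the pattern on the stack before descending into it.
  stack[pattern] = true
  str = pattern.map do |type, name|
    case type
    when :literal   then name
    when :variable  then "<#{name}>"
    when :reference then render(rules[name], rules, stack)
    end
  end.join(' ')
  # Remove the pattern from the "stack" again.
  stack.delete(pattern)
  str
end

# A self-referencing rule models a repeatable list of IDs.
rules = {}
rules[:idList] = [[:variable, :ID], [:reference, :idList]]
render(rules[:idList], rules)
# => "<ID> [, ... ]"
```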