How to parse a 240k-line txt data file by streaming it with a per-character check?

UPDATE: This question is no longer relevant to me since I decided to use plist (instead of txt) as the dictionary format of my input-method app. However, other people might have the same question or need, hence my decision to leave this thread here for brainstorming.

I wrote a finite state machine for this purpose:

// Copyright (c) 2021 and onwards The vChewing Project (MIT-NTL License).
/*
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

1. The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

2. No trademark license is granted to use the trade names, trademarks, service
marks, or product names of Contributor, except as required to fulfill notice
requirements above.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/

import Cocoa

// Finite State Machine
public enum TestFSM {
  static func parse(string strData: String, swap isSwapped: Bool = false) -> [String: [Range<String.Index>]] {
    var rangeMap = [String: [Range<String.Index>]]()
    var isObservingTheKey = !isSwapped
    var newLineStartIndex = 0
    var theKey = ""
    var skipThisLine = false
    var previousChar = String.Element("\n") // "\n" (not " ") so a "#" on the very first line is also treated as a comment
    var columnIndicator = 0
    for (idxNow, theChar) in strData.enumerated() {
      if String(theChar) == "\n" {
        if !theKey.isEmpty, idxNow > newLineStartIndex, skipThisLine == false {
          let indexStart = strData.index(strData.startIndex, offsetBy: newLineStartIndex, limitedBy: strData.endIndex)!
          let indexEnd = strData.index(strData.startIndex, offsetBy: idxNow, limitedBy: strData.endIndex)!
          let theValue = indexStart..<indexEnd
          rangeMap[theKey, default: []].append(theValue)
        }
        // Reset state observers
        theKey.removeAll()
        newLineStartIndex = idxNow + 1
        isObservingTheKey = !isSwapped
        skipThisLine = false
        columnIndicator = 0
      } else {
        if String(theChar) == "#", String(previousChar) == "\n" {
          skipThisLine = true
        }
        if " \t".contains(String(theChar)) {
          columnIndicator += 1
          if columnIndicator == 1 {
            isObservingTheKey = isSwapped
          }
          if columnIndicator == 2 {
            isObservingTheKey = false
          }
        }
        if isObservingTheKey, String(theChar) != " " {
          theKey += String(theChar)
        }
      }
      previousChar = theChar
    }
    return rangeMap
  }
}

var rangeMap = [String: [Range<String.Index>]]()
var strData = ""

do {
  strData = try String(contentsOfFile: "./TestData.txt", encoding: .utf8).replacingOccurrences(of: "\t", with: " ")
  rangeMap = TestFSM.parse(string: strData, swap: false)
} catch {
  print("\(error)")
  print("↑ Exception happened when reading data.")
}

// The following lines are merely for testing.
print(rangeMap.count)

for neta in rangeMap.keys {
  print(neta)
}

With the sample data file attached here (240k lines):
Boost loading speed with a finite state machine. · Issue #82 · ShikiSuen/vChewing-macOS (github.com)

However, the process immediately crashes (long-time hang).
I wonder what's wrong with my finite state machine, hence this thread asking for help.

Did you try it with a small file (a few dozen lines only) first? 🙂

It works with small files.

Looks like this is creating a large number of entries in your dictionary. You might be better off keeping track of the start/end indices of the key you are generating and then using a Substring over the corresponding range (i.e. a slice of the original string) as your key, rather than creating a whole new String every time.
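A minimal sketch of that suggestion (the function name and the assumed line format — first whitespace-delimited column is the key — are illustrative, not from the original code). A Substring shares storage with the original String, so no per-line String allocation occurs:

```swift
import Foundation

// Sketch: scan line by line, keep the key as a slice (Substring) of the
// original string instead of building a fresh String per line.
func parseWithSubstringKeys(_ strData: String) -> [Substring: [Range<String.Index>]] {
  var rangeMap = [Substring: [Range<String.Index>]]()
  var lineStart = strData.startIndex
  while lineStart < strData.endIndex {
    // Find the end of the current line; O(line length), no integer offsets.
    let lineEnd = strData[lineStart...].firstIndex(of: "\n") ?? strData.endIndex
    let line = strData[lineStart..<lineEnd]
    // Skip empty and comment lines; use the first column as the key.
    if !line.isEmpty, line.first != "#",
       let keyEnd = line.firstIndex(where: { $0 == " " || $0 == "\t" }) {
      // line[..<keyEnd] is a Substring: a view into strData, not a copy.
      rangeMap[line[..<keyEnd], default: []].append(lineStart..<lineEnd)
    }
    lineStart = lineEnd == strData.endIndex ? strData.endIndex : strData.index(after: lineEnd)
  }
  return rangeMap
}
```

One caveat worth knowing: each stored Substring keeps the entire original String alive, which is fine here since the ranges refer into that same string anyway.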


If by "process immediately crashes (long-time hang)" you mean the process takes too long and the watchdog kills it, then nothing is inherently wrong with the code (other than it being slower than desired). Your options are to move the code to a background queue/thread so it is no longer a watchdog concern, and optionally to speed it up. I have no idea what it's doing, so I can't comment on how to speed it up, other than noticing that it works in O(n^2) time, while parsers can normally run in O(n).
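One likely source of the quadratic behavior is the pair of strData.index(strData.startIndex, offsetBy:) calls executed on every line: a Swift String is not random-access, so converting an Int offset to a String.Index walks from the start of the string each time. A sketch of an O(n) variant of the same state machine that carries a String.Index through the loop instead (the function name and the comment-detection tweak are mine, not from the thread):

```swift
import Foundation

// Sketch: same per-character state machine, but lineStart is a
// String.Index, so appending a range at each newline is O(1) instead of
// walking the string from its start.
func parseLinear(_ strData: String, swap isSwapped: Bool = false) -> [String: [Range<String.Index>]] {
  var rangeMap = [String: [Range<String.Index>]]()
  var lineStart = strData.startIndex  // String.Index, not an Int offset
  var theKey = ""
  var skipThisLine = false
  var isObservingTheKey = !isSwapped
  var columnIndicator = 0
  for idxNow in strData.indices {  // index + character in a single pass
    let theChar = strData[idxNow]
    if theChar == "\n" {
      if !theKey.isEmpty, !skipThisLine, lineStart < idxNow {
        rangeMap[theKey, default: []].append(lineStart..<idxNow)  // O(1)
      }
      // Reset state observers for the next line.
      theKey.removeAll(keepingCapacity: true)
      lineStart = strData.index(after: idxNow)
      isObservingTheKey = !isSwapped
      skipThisLine = false
      columnIndicator = 0
    } else {
      // Comparing against lineStart also catches a "#" on the first line.
      if theChar == "#", idxNow == lineStart { skipThisLine = true }
      if theChar == " " || theChar == "\t" {
        columnIndicator += 1
        if columnIndicator == 1 { isObservingTheKey = isSwapped }
        if columnIndicator == 2 { isObservingTheKey = false }
      }
      if isObservingTheKey, theChar != " ", theChar != "\t" {
        theKey.append(theChar)
      }
    }
  }
  return rangeMap
}
```

This keeps the original's behavior (including requiring a trailing newline on the last line) while dropping the per-line offset walks and the per-character String(theChar) allocations.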
