...

  1. Scan the Message Table, with an optional start Message ID or timestamp, which can either be inclusive or exclusive
  2. Depending on whether the row has the payload column (the p column), the handling differs (see the sketch after this list)
    1. With the payload column (the p column)
      1. The row represents a message and the payload is the value in the p column
      2. Message ID is generated from the row key as concat(publish_time, seq_id, 0L, 0)
    2. Without the payload column, the transaction column (the t column) must exist
      1. Scan the Payload Table with prefix concat(topic, transaction_write_pointer), where the transaction_write_pointer is the value of the t column in the Message Table
      2. Each row encountered during the scan is a message and its payload is the value in the p column
      3. Message ID is generated from the row key in the Message Table and the row key in the Payload Table as concat(publish_time, seq_id, write_timestamp, p_seq_id), where the write_timestamp and p_seq_id come from the Payload Table row key
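
To make the scan concrete, below is a minimal Java sketch of the non-transactional consumption logic. The class and method names, the in-memory stand-ins for the two tables, and the field widths in the Message ID layout (8-byte timestamps, 2-byte sequence ids) are assumptions for illustration only; the real implementation reads the tables from the underlying storage engine.

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Map;

    public final class ConsumeSketch {

      // A consumed message: the generated Message ID plus the payload bytes.
      static final class Message {
        final byte[] id;
        final byte[] payload;
        Message(byte[] id, byte[] payload) { this.id = id; this.payload = payload; }
      }

      // One Message Table row for a topic: publish_time and seq_id come from the row key;
      // either the payload is stored inline (the p column) or the t column holds the
      // transaction write pointer that prefixes the corresponding Payload Table rows.
      static final class MessageRow {
        final long publishTime;
        final short seqId;
        final byte[] payload;        // p column, or null
        final Long txWritePointer;   // t column, or null
        MessageRow(long publishTime, short seqId, byte[] payload, Long txWritePointer) {
          this.publishTime = publishTime; this.seqId = seqId;
          this.payload = payload; this.txWritePointer = txWritePointer;
        }
      }

      // One Payload Table row under the prefix concat(topic, transaction_write_pointer):
      // the remainder of the row key carries a write timestamp and a payload sequence id.
      static final class PayloadRow {
        final long writeTimestamp;
        final short pSeqId;
        final byte[] payload;
        PayloadRow(long writeTimestamp, short pSeqId, byte[] payload) {
          this.writeTimestamp = writeTimestamp; this.pSeqId = pSeqId; this.payload = payload;
        }
      }

      // Message ID layout assumed here: publish_time (8) | seq_id (2) | write_timestamp (8) | p_seq_id (2).
      static byte[] messageId(long publishTime, short seqId, long writeTimestamp, short pSeqId) {
        return ByteBuffer.allocate(20)
            .putLong(publishTime).putShort(seqId)
            .putLong(writeTimestamp).putShort(pSeqId)
            .array();
      }

      // Non-transactional consumption over an ordered Message Table scan.
      static List<Message> consume(List<MessageRow> messageTableScan,
                                   Map<Long, List<PayloadRow>> payloadTableByTxWritePointer) {
        List<Message> result = new ArrayList<>();
        for (MessageRow row : messageTableScan) {
          if (row.payload != null) {
            // Case 2.a: payload is inline, so the last two Message ID components are 0L and 0.
            result.add(new Message(messageId(row.publishTime, row.seqId, 0L, (short) 0), row.payload));
          } else {
            // Case 2.b: prefix-scan the Payload Table with the transaction write pointer;
            // every row found there is one message.
            List<PayloadRow> rows =
                payloadTableByTxWritePointer.getOrDefault(row.txWritePointer, Collections.emptyList());
            for (PayloadRow p : rows) {
              result.add(new Message(
                  messageId(row.publishTime, row.seqId, p.writeTimestamp, p.pSeqId), p.payload));
            }
          }
        }
        return result;
      }
    }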

Transactional

Transactional consumption basically follows the same procedure as the non-transactional one, with the addition that it stops at the first uncommitted message when scanning the Message Table. The transaction information comes from the client, and it is the client's responsibility to open a new transaction in order to get a new snapshot of committed messages in the messaging system. This will increase the latency of message consumption, but with the technique described above for message publishing, this latency should be minimal, in the range of less than a second.
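
A minimal sketch of the stop-at-first-uncommitted rule follows, in the same illustrative style as above. The TxSnapshot interface and the Entry class are hypothetical stand-ins; the visibility check is assumed to be answered by the transaction object that the client opened against the transaction system.

    import java.util.ArrayList;
    import java.util.List;

    final class TransactionalConsumeSketch {

      // Stand-in for the transaction snapshot supplied by the client; in practice this is the
      // transaction the client opened to get a fresh view of committed messages.
      interface TxSnapshot {
        boolean isVisible(long writePointer);  // true if that publish is committed and visible
      }

      // A Message Table entry reduced to what visibility filtering needs: its row key and,
      // for transactionally published messages, the write pointer of the publishing transaction.
      static final class Entry {
        final byte[] rowKey;
        final Long txWritePointer;  // null for non-transactionally published messages
        Entry(byte[] rowKey, Long txWritePointer) {
          this.rowKey = rowKey; this.txWritePointer = txWritePointer;
        }
      }

      // Returns the longest prefix of the scan that is visible to the snapshot; the scan
      // stops at the first uncommitted message so that message ordering is preserved.
      static List<Entry> visiblePrefix(List<Entry> messageTableScan, TxSnapshot tx) {
        List<Entry> visible = new ArrayList<>();
        for (Entry e : messageTableScan) {
          if (e.txWritePointer != null && !tx.isVisible(e.txWritePointer)) {
            break;  // stop here even if later messages are already committed
          }
          visible.add(e);
        }
        return visible;
      }
    }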

...