Distinguishing between frame types and FN

Hi All

When receiving M17 frames in stream mode there are two frame types: the Link Setup frame and the rest. They are the same length and use the same sync vector, so how is it possible to distinguish between them? I can think of a way, but it is computationally expensive and appears sub-optimal. Is there an easier way?

I have an observation: why is there a 16-bit FN field? On a network packet I can understand it, but not on RF frames, which cannot arrive out of order. I think the FN is there so that the LICH chunks can be put back together if you missed the initial Link Setup frame and have to perform a late entry; this is what NXDN does, for example. However, to do this you only need a counter that counts to five, plus an extra pattern or bit to indicate the end of transmission as now. A 3-bit counter in place of the 16-bit FN would save 13 bits in every frame.

For comparison, P25 Phase 1 has no header (it has an optional encryption header, but that is a different thing) and all incoming transmissions are late entry.

Jonathan G4KLX

We either want to:

  • use 2 different SYNC fields to determine what the incoming frame type is, or
  • always use the same SYNC field and determine the frame type by reading an additional small field (in each frame)

The FN counter, along with the NONCE, is used for AES-CTR.
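For illustration, here is a minimal sketch of how a nonce and the 16-bit FN could be combined into a 16-byte AES-CTR counter block. The 112-bit nonce width and the placement of the FN in the low-order bytes are assumptions made for this example, not the draft’s actual layout:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Illustrative only: build a 16-byte AES-CTR counter block from a
// nonce and the 16-bit frame number. The nonce width (112 bits) and
// the FN's position in the low-order bytes are assumptions, not the
// draft specification's layout.
std::array<uint8_t, 16> makeCtrBlock(const std::array<uint8_t, 14>& nonce,
                                     uint16_t fn)
{
    std::array<uint8_t, 16> block{};
    for (std::size_t i = 0; i < nonce.size(); i++)
        block[i] = nonce[i];
    block[14] = uint8_t(fn >> 8);    // FN occupies the counter slot,
    block[15] = uint8_t(fn & 0xFF);  // big-endian
    return block;
}

int main()
{
    std::array<uint8_t, 14> nonce{};             // would come from the LSF
    const auto block = makeCtrBlock(nonce, 1);   // FN = 1
    for (uint8_t b : block)
        std::printf("%02X ", b);
    std::printf("\n");
}
```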

Since encryption is not allowed on the amateur bands (the US licence has one exception, but it has nothing to do with digital voice), why is there space for encryption? After all, it is an amateur protocol.

If you are choosing sync vectors, why not follow professional practice and choose vectors that consist only of +3 and -3 symbols, which are easier to distinguish in weak signal conditions? I know that System Fusion doesn’t do that, but it gets lots of things wrong anyway.
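A minimal sketch of why ±3-only vectors help, with made-up 8-symbol patterns rather than anything from a spec: the correlation peak grows with the squared symbol values, so outer-symbol-only syncs keep the largest margin over noise and off-sync positions:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Correlate a window of received 4FSK symbols against a sync template.
int correlate(const std::vector<int>& rx, const std::vector<int>& sync)
{
    int acc = 0;
    for (std::size_t i = 0; i < sync.size(); i++)
        acc += rx[i] * sync[i];
    return acc;
}

int main()
{
    // Two hypothetical 8-symbol sync vectors (illustrative values only):
    std::vector<int> outerOnly = {+3, -3, +3, +3, -3, -3, +3, -3}; // only ±3
    std::vector<int> mixed     = {+3, -1, +1, +3, -3, -1, +1, -3}; // has ±1

    // Peak autocorrelation: 8 * 9 = 72 for the ±3-only vector, but only
    // 4 * 9 + 4 * 1 = 40 for the mixed one, so the ±3-only sync stands
    // much further above the noise floor.
    std::printf("peak (outer only) = %d\n", correlate(outerOnly, outerOnly));
    std::printf("peak (mixed)      = %d\n", correlate(mixed, mixed));
}
```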

Jonathan G4KLX

Encryption is allowed in Poland and a few other countries. You are right about the SYNC pattern; we should change it.

Hi G4KLX,

That is a good point and the subject of an ongoing discussion (there is an issue that touches on this). I certainly think there is room for refinement.

SP5WWP suggested different sync words, but I don’t like mixing data-type signalling into the physical-layer packet synchronisation.

As the draft specification is currently written, a decoder can rely on the fact that the full LICH is only transmitted directly after the preamble, decode it there, and otherwise assume the data are part of superframes. This isn’t elegant.

My understanding is that the FN will be used for the encryption modes (which I’m personally less interested in). I like the idea of a bit denoting the LICH start to mark the first portion for reassembly; it removes the need for a mod 5 on the FN field. A decoder can look for the LICH start and then collect 5 packets to reassemble the LICH efficiently, or simply decode that first LICH chunk to see whether it should continue decoding at all, since that chunk contains the destination.
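A minimal sketch of that start-flag reassembly. The chunk count and size (five 48-bit chunks, following the mod-5 discussion above) are assumptions for illustration, not settled values:

```cpp
#include <array>
#include <cstdint>

constexpr int CHUNKS     = 5;   // assumption: five LICH chunks per LSF
constexpr int CHUNK_SIZE = 6;   // assumption: 48 bits per chunk

class LichAssembler {
public:
    // Feed one LICH chunk per received frame. Returns true once a full
    // Link Setup frame has been reassembled.
    bool addChunk(const std::array<uint8_t, CHUNK_SIZE>& chunk, bool startFlag)
    {
        if (startFlag)
            m_count = 0;            // resynchronise on the start marker
        else if (m_count < 0)
            return false;           // still waiting for a start flag

        for (int i = 0; i < CHUNK_SIZE; i++)
            m_lsf[m_count * CHUNK_SIZE + i] = chunk[i];

        if (++m_count == CHUNKS) {
            m_count = -1;           // complete; wait for the next start flag
            return true;
        }
        return false;
    }

    const std::array<uint8_t, CHUNKS * CHUNK_SIZE>& lsf() const { return m_lsf; }

private:
    int m_count = -1;               // -1 = not synchronised yet
    std::array<uint8_t, CHUNKS * CHUNK_SIZE> m_lsf{};
};

int main()
{
    LichAssembler assembler;
    std::array<uint8_t, CHUNK_SIZE> chunk{};
    bool done = false;
    for (int i = 0; i < CHUNKS; i++)
        done = assembler.addChunk(chunk, i == 0); // only the first chunk carries the flag
    return done ? 0 : 1;
}
```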

Having different sync patterns for different frame types is done in dPMR, and it is a pretty elegant solution. You could go further and change the sync pattern at the end as well, saving a bit in the FN and making decoding a little simpler at the same time. You could make the end pattern the inverse (in terms of symbols) of the Link Setup or superframe sync pattern for extra robustness.
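A sketch of the inverse-pattern idea with a made-up sync vector (not a proposed value): the same correlator then gives an equally strong peak of the opposite sign for the end marker, so distinguishing the two costs nothing extra:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Correlate received symbols against a sync template.
int correlate(const std::vector<int>& rx, const std::vector<int>& sync)
{
    int acc = 0;
    for (std::size_t i = 0; i < sync.size(); i++)
        acc += rx[i] * sync[i];
    return acc;
}

int main()
{
    // Hypothetical stream sync (illustrative values, not from the spec).
    std::vector<int> sync = {+3, -3, +3, +3, -3, -3, +3, -3};

    // The proposed end-of-transmission marker: the symbol-wise inverse.
    std::vector<int> eot(sync.size());
    for (std::size_t i = 0; i < sync.size(); i++)
        eot[i] = -sync[i];

    // One correlator serves both: a strong positive peak marks a stream
    // frame, an equally strong negative peak marks end of transmission.
    std::printf("stream frame: %+d\n", correlate(sync, sync)); // +72
    std::printf("EOT frame:    %+d\n", correlate(eot, sync));  // -72
}
```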

There really should be a nice way to determine whether a frame is part of a superframe or a Link Setup frame.

I’m currently writing the M17 code for the MMDVM and obviously want to do the best that I can in terms of correctness and efficiency.

Jonathan G4KLX

G4KLX,

Sorry, I’ve been away and am only now getting back to this. I’d like to understand how the RX side is implemented with multiple sync words.

In passive monitoring, the RX has to look for all of the sync patterns. Perhaps with the correct selection of sync patterns this is almost free, but in general it adds processing before sync is locked. My intuition (which could be wrong) is that M17 should be optimized for passive monitoring and avoid complexity at the lower levels, such as the sync search.

Can you point me to a resource that shows the performance of multiple syncs, or to an implementation on the RX side, with clarification if I’m missing something above?

elms

Looking for multiple syncs is not expensive. An implementation of detecting multiple sync patterns can be seen in the dstar_correlator branch of the MMDVM firmware code, and it works well. Remember that something like the MMDVM is looking for the sync patterns of several different modes all of the time that the system is in listening mode.
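Not the MMDVM code itself, but a simplified sketch of why this is cheap: each additional pattern is just one more dot product over the same sliding window of demodulated symbols. The sync values and threshold below are placeholders, not anything from a spec:

```cpp
#include <cstdio>

constexpr int SYNC_LEN  = 8;
constexpr int NUM_SYNCS = 2;
constexpr int THRESHOLD = 60;   // illustrative decision threshold

// Hypothetical sync vectors; the values are placeholders, not the spec's.
constexpr int SYNCS[NUM_SYNCS][SYNC_LEN] = {
    {+3, -3, +3, +3, -3, -3, +3, -3},   // e.g. Link Setup frame
    {-3, +3, +3, -3, +3, -3, -3, +3},   // e.g. stream frame
};

// Returns the index of the sync matching at this window position, or -1.
// Each extra pattern costs one more multiply-accumulate pass, nothing more.
int matchSync(const int* window)
{
    for (int p = 0; p < NUM_SYNCS; p++) {
        int acc = 0;
        for (int i = 0; i < SYNC_LEN; i++)
            acc += window[i] * SYNCS[p][i];
        if (acc >= THRESHOLD)
            return p;
    }
    return -1;
}

int main()
{
    // A noiseless symbol stream carrying the second sync after some idle symbols.
    int stream[] = {+1, -1, +1, -1, -3, +3, +3, -3, +3, -3, -3, +3};
    const int n  = sizeof(stream) / sizeof(stream[0]);

    for (int pos = 0; pos + SYNC_LEN <= n; pos++) {
        int hit = matchSync(&stream[pos]);
        if (hit >= 0)
            std::printf("sync %d found at symbol %d\n", hit, pos);
    }
}
```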

The MMDVM_HS code also handles multiple sync patterns for D-Star using an ADF7021. The difference is that it does not use the sync detection pattern matching within the ADF chip; the chip is still capable of operating well without its built-in sync pattern matching.

Ultimately, if M17 wants to be taken seriously, it needs to do things at least as well as the other DV modes; in this and other respects it is far behind.

If I understand correctly, the MMDVM is not very power sensitive, certainly not as much as a native handheld working on a single standard (i.e. the TR-9). In my opinion, power usage for a handheld running on battery should be the motivator for design decisions.

There has been a lot of discussion, and in the end I’m beginning to think either approach is fine for the draft stage. We want to achieve wide acceptance first. I hope we can do some analysis to make fact-based decisions, or at least understand the trade-offs; I was hoping you might be aware of some existing references.

Thanks for all your feedback, it’s been a catalyst for a lot of discussion. I disagree with your final remark and find it generally unhelpful.

According to the ADF7021’s errata, it is not possible to use the internal circuitry for 4FSK syncword detection.

For the draft stage we can have 2 different syncs: one for the initial LICH and the other for data frames.

The MMDVM doesn’t use the internal sync matching either; it’s all done in my firmware. The chip does at least do the symbol synchronisation, so you get some of the hard work done for you.

Having two syncs is great news.

Hi folks. Just getting caught up on this after getting the baseband demodulator completed. I don’t think there is any need to treat the two frame types differently by using a different sync symbol. The first frame is the LSF. The demodulator should try to decode it as such. If that fails, then the subsequent frames must be data frames and contain a LICH. The LSF, in that case, must be reconstructed from the LICH. There will never be another LSF sent for that session.

Putting information that can be clearly derived from the data explicitly in the protocol is wasteful.

A completely valid audio demodulator can be built that always ignores the first frame, always decodes the LSF from the LICH chunks in the next 6 frames, drops 240 ms of audio and then continues on from there. I would prefer to keep the spec as simple as possible and only as complex as absolutely necessary.

If one really cares that, by chance, a data frame was decoded as an LSF with a valid CRC, one can decode the LICH from the next frame and verify that the first 20 bytes are an exact match. I don’t think that is necessary.
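A sketch of that decode flow; every helper below is a hypothetical stub standing in for real FEC and CRC routines, not a function from any existing implementation, and the static state limits it to a single session for brevity:

```cpp
#include <cstdint>
#include <vector>

struct LSF { std::vector<uint8_t> bytes; };

// Hypothetical stubs standing in for the real FEC/CRC routines.
static bool decodeLSF(const std::vector<uint8_t>&, LSF&)    { return false; }
static bool crcValid(const LSF&)                            { return false; }
static bool addLICHChunk(const std::vector<uint8_t>&, LSF&) { return false; }
static void decodeAudio(const std::vector<uint8_t>&)        {}

void onFrame(const std::vector<uint8_t>& frame)
{
    static bool triedLSF = false;   // one session only, for brevity
    static bool haveLSF  = false;
    static LSF  lsf;

    if (!triedLSF) {
        triedLSF = true;
        // Only the very first frame of a session can be the LSF.
        if (decodeLSF(frame, lsf) && crcValid(lsf)) {
            haveLSF = true;
            return;                 // the LSF itself carries no audio
        }
    }
    if (!haveLSF) {
        // Late entry: this and all later frames are stream frames, so
        // rebuild the LSF from their embedded LICH chunks, dropping
        // audio until it is complete.
        haveLSF = addLICHChunk(frame, lsf);
        return;
    }
    decodeAudio(frame);
}

int main()
{
    onFrame(std::vector<uint8_t>(48, 0));   // feed one dummy frame
}
```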

So, basically: you turn on your radio, receive a random frame (which appears to you as the first one) and try decoding it as if it were the LSF. If that fails, it means that this and all subsequent frames are data/audio. I think that having 2 different SYNCs would make it a lot more robust, but let’s see what the others think.

I have read the DMR spec. They use multiple sync symbols for different frame types, which seems to make sense. But they also use a longer sync sequence: 48 bits, or 24 symbols.

https://www.etsi.org/deliver/etsi_ts/102300_102399/10236101/02.02.01_60/ts_10236101v020201p.pdf

See section 4.2.2, page 19.

We are not using 48 bits. With only 16 bits (8 symbols), are we going to run out of reasonable sync symbols?

I think a more in-depth discussion of sync symbols is warranted, along with general signalling. How do we implement color codes? How do we handle packet data, or data-only streaming? Embedding the CC in the sync symbol (or a different set of sync symbols for each CC) would make selective reception of frames really easy.
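One purely hypothetical way to picture that (neither the base vector nor the derivation below is from any spec): derive a sync vector per color code from a base pattern, so a receiver correlates only against its own CC’s vector and frames for other CCs simply never sync:

```cpp
#include <cstdio>

constexpr int SYNC_LEN = 8;
// Hypothetical base vector; the values are placeholders.
constexpr int BASE[SYNC_LEN] = {+3, -3, +3, +3, -3, -3, +3, -3};

// Derive the sync vector for one color code (0..255) by flipping the
// symbols selected by the CC's bits. Purely illustrative: a real design
// would choose the per-CC vectors for good cross-correlation properties.
void syncForCC(int cc, int out[SYNC_LEN])
{
    for (int i = 0; i < SYNC_LEN; i++)
        out[i] = ((cc >> i) & 1) ? -BASE[i] : BASE[i];
}

int main()
{
    int sync[SYNC_LEN];
    syncForCC(5, sync);                   // color code 5
    for (int s : sync)
        std::printf("%+d ", s);
    std::printf("\n");
}
```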

I see no need for a LICH for packet data as, unlike voice, a late joiner is not really something to worry about. However, we likely want to use the same frame size to break up the packet superframe. That means that a receiver needs to know it should not attempt to decode a LICH. Do we use a separate sync symbol for this too?

You will notice in DMR that the data and voice syncs are the opposite of each other (think in terms of symbols, not bits), so that there is a negative correlation from one to the other. Therefore having two sync vectors without any chance of confusion is easily achieved.

Most DV modes use a copy of the header frame as the ending frame, with a bit to indicate the end (see DMR, YSF and NXDN), so you would have an obvious change of sync vector plus information in the payload that it is the end.