Questions about timing and lost frames/packets at the codec level
As far as I understand, the H.264 codec carries frame-rate information in the bitstream (in the VUI parameters of the SPS NAL unit: timing_info_present_flag -> num_units_in_tick, time_scale), and with that plus the frame count a container can build up its timing scheme (e.g. PTS). Is that right?
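To make my mental model concrete, here is a minimal sketch (Python, purely for illustration; the function names are mine, not from any real API) of how I imagine a container could derive per-frame PTS values from those two fields. It assumes a fixed frame rate, progressive content where each coded frame spans two ticks of the num_units_in_tick/time_scale clock, and the 90 kHz timebase that MPEG containers commonly use (that clock comes from the container side, not from H.264 itself):

```python
# Sketch: deriving per-frame timing from H.264 VUI timing info.
# Assumes fixed frame rate and progressive content (2 ticks per frame).

def frame_duration_seconds(num_units_in_tick: int, time_scale: int) -> float:
    # One tick lasts num_units_in_tick / time_scale seconds; a coded
    # frame normally covers two ticks (two field periods).
    return 2 * num_units_in_tick / time_scale

def pts_90khz(frame_index: int, num_units_in_tick: int, time_scale: int) -> int:
    # PTS expressed in the 90 kHz clock commonly used by MPEG containers.
    return frame_index * 2 * num_units_in_tick * 90_000 // time_scale

# Big Buck Bunny values from my extracted stream: time_scale=60, num_units_in_tick=1
print(frame_duration_seconds(1, 60))   # 0.0333... s  -> 30 fps
print(pts_90khz(1, 1, 60))             # 3000 -> one frame = 3000 ticks of 90 kHz
```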
Here are some other questions:
- Why do we have to divide the time_scale value by 2 to get the framerate? (See the worked arithmetic after the last paragraph below.)
- Why doesn't the codec just store the original framerate directly?
- Is it because this way fractional framerates can be represented with full precision?
- How are fractional framerates stored at the codec level? (I think this is done by using time_scale and num_units_in_tick.)
- Does the codec somehow signal that a frame/packet was lost? (See the sketch right after this list.)
- How does a decoder identify that a frame/packet, or part of one, is missing?
- How are dropped frames handled at the codec level?
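On the loss questions, my current understanding (please correct me if wrong) is that the raw byte stream has no explicit "packet lost" flag, but a decoder can infer gaps from the frame_num field in each slice header, which normally increments by one (modulo MaxFrameNum) from one reference frame to the next; the SPS flag gaps_in_frame_num_value_allowed_flag even governs whether such gaps are legal. Here is a toy sketch of that check, with the simplifying assumption that every picture is a reference frame, so frame_num increments every picture:

```python
# Sketch: detecting missing reference frames from frame_num gaps.
# frame_num lives in each slice header and wraps at MaxFrameNum,
# where MaxFrameNum = 2 ** (log2_max_frame_num_minus4 + 4) from the SPS.

def find_frame_num_gaps(frame_nums, log2_max_frame_num_minus4):
    max_frame_num = 1 << (log2_max_frame_num_minus4 + 4)
    gaps = []
    for prev, curr in zip(frame_nums, frame_nums[1:]):
        expected = (prev + 1) % max_frame_num
        if curr != expected:
            # (prev, curr, number of frames apparently skipped)
            gaps.append((prev, curr, (curr - expected) % max_frame_num))
    return gaps

# Hypothetical sequence where the frame with frame_num 3 went missing:
print(find_frame_num_gaps([0, 1, 2, 4, 5], log2_max_frame_num_minus4=0))
# -> [(2, 4, 1)]
```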
I saw an example on the Internet with what seems to be 29.97 fps, i.e. (time_scale / num_units_in_tick) / 2. But then I downloaded Big Buck Bunny (30 fps), extracted the raw .h264 stream, and there I found a time_scale of 60 and a num_units_in_tick of 1, while the video itself is 30 fps. Do I really need to halve that value to get the right framerate? Why spend more bits when the framerate could be stored directly?
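Here is the arithmetic as I currently understand it, assuming the tick is defined at field granularity (so interlaced content can be timed per field) and a progressive frame therefore covers two ticks. The 60000/1001 pair is the usual way NTSC-rate material is signalled, which a plain decimal field could not store exactly:

```python
# Sketch of the framerate arithmetic, assuming two ticks per frame.

def fps(time_scale: int, num_units_in_tick: int) -> float:
    return time_scale / (2 * num_units_in_tick)

print(fps(60, 1))        # Big Buck Bunny stream: 30.0 fps
print(fps(60000, 1001))  # typical NTSC signalling: 29.97002997... fps
```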