You either need a lot of patience and time to overcome the challenge of problematic video files, or you need the right software to hand… while staying aware of that software's limitations throughout.
In this article, I’m looking at:
- .d and .i proprietary video and index files
- HxD and HexChomper
- and then Amped FIVE
The .d file contains the video data and the .i contains the index which includes the date and time information. Although I had a version of the player in my archives, I didn’t know it! I had mistakenly listed it under MCD rather than the .d file extension. A timely reminder to keep my proprietary playback software well managed and indexed.
There are a few pieces of software to assist with player management now.
Anyway, after I was pointed in the right direction for the player, I did a bit of Googling on the player name. It is listed by various security installers and there were a number of different version numbers. I was also pointed to a converter program.
Upon running the program you are greeted by this dialogue box:
I wonder why they didn't just rename the buttons NTSC and PAL? Probably too easy!… (I did test what would happen if I selected PAL with NTSC footage. The video plays, but a text message appears stating that the footage is NTSC.)
The player accepts a number of different backup container files.
Rather than look for the video file (the .d) it looks for the index (the .i). The player then identifies the data file with the same name and then displays the video. The video plays as an overlay and the date / time information runs over the top.
It was pretty hard to control the video and navigate back and forth with the controls. When moving through the video it sometimes wouldn’t pick up the video correctly and often displayed a highly distorted view.
Other times, I found that the index time overlaid on the video would not match the time that gets stamped over the image exports. In the example below, look at the seconds in the time overlay. The video says :47 but the timestamp says :46.
Lastly, I identified that moving through the video would cause the index to display a different time on the same image. As a result, I have concluded that although a start and end time are verifiable, using the seconds to pinpoint an image would not be suitable. It would be much better, and more reliable, to use the frame number. I also had no idea of the video's type or size, or any metadata enabling me to interpret the image correctly.
Before we attempt to deal with the video, and thereby avoid this highly problematic player, is it possible to verify the date / time information? For this we need a hex editor (HxD) and HexChomper from Mike's Forensic Tools.
By opening the .i file within HXD, and copying all the Hex data, we are then able to paste this into HexChomper.
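As an aside, you can preview the same bytes on the command line before reaching for HxD. The stand-in file here is a throwaway placeholder, not the real proprietary index:

```shell
# Write a few stand-in bytes (the real .i index is proprietary), then dump
# them as hex - the same view HxD presents interactively.
printf 'INDEX01' > sample.i
od -A x -t x1 sample.i
```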
From the results, all the dates and times were identified within the .i file. These can be exported as a spreadsheet for reference.
From this I could see that I had 29 entries for the first second and 22 entries for the last second.
Using the time duration I could work out how many frames there would be based on a 30FPS video – as that is what the player overlay was telling me in the bottom right corner of the window.
So the math was telling me that I should have 10,670 individual time entries – the problem, though, is that the .i file contained 10,665. Did the file really run at 30FPS?
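With the figures recovered so far (29 entries in the first second, 22 in the last, 10,665 in total), a quick check shows that the "full" seconds in between cannot all hold exactly 30 entries:

```shell
total=10665   # time entries recovered from the .i file
first=29      # entries in the partial first second
last=22       # entries in the partial last second
fps=30        # the rate claimed by the player overlay

middle=$((total - first - last))
# If every full second held exactly 30 entries, the remainder would be 0.
echo "full seconds implied: $((middle / fps)), remainder: $((middle % fps))"
```

A non-zero remainder means at least one of the intermediate seconds did not contain 30 entries.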
By scrubbing through the spreadsheet produced from HexChomper, it wasn't long before I found the first discrepancy.
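That scrubbing can also be scripted. The timestamps below are hypothetical stand-ins for the HexChomper export; counting entries per second flags any second holding fewer than the expected 30:

```shell
# Hypothetical one-timestamp-per-line export (the real data comes from
# HexChomper); flag any second with fewer than 30 entries.
printf '%s\n' 12:00:00 12:00:00 12:00:01 12:00:01 12:00:01 > times.txt
sort times.txt | uniq -c | awk '$1 < 30 {print $2 ": " $1 " entries"}'
```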
So it’s not exactly a constant 30FPS as the overlay suggested. I am going to leave the time issue there. At least I know how to drill down into the index and relate that to frames if milliseconds were of importance. I could then capture a small piece from the proprietary player using the Omnivore to further assess what was going on. We will come back to these numbers after getting to grips with the video….
Remember the converter program?… This produced a compressed (Xvid) video. The resulting file was produced at 100FPS but kept the same speed. As a result, it included a large number of duplicate frames. After discarding all the duplicate frames I was left with a video file of 10659 frames. Let's put that number aside and see what the original is…
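For reference, duplicate-frame removal of that sort can be sketched with FFmpeg's mpdecimate filter. The filenames here are placeholders, and this is not necessarily how I processed the converter output, just one way to do it:

```shell
# Drop frames that are (near-)duplicates of the previous one; variable
# frame timing (-vsync vfr) preserves the playback speed, and FFV1 keeps
# the survivors lossless.
ffmpeg -i converted_100fps.avi -vf mpdecimate -vsync vfr -c:v ffv1 deduped.avi
```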
The .d file was read immediately as a raw 352×240 H264/AVC stream, as seen here after dropping it into MediaInfo…
As most people know, my next port of call after establishing the presence of a recognised video stream is to attempt playback in FFplay.
Although it played, there were many red error messages and, at the start, it suffered from some damaged images.
Obviously, with the wording of ‘discarding one’ we need to take a closer look….
After an FFprobe result produced a frame count of 10659, I contained the raw stream into an AVI.
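A frame count like that can be obtained with FFprobe's -count_frames option, which decodes every frame and reports how many were actually read (the filename here is a placeholder):

```shell
# Decode the whole stream and report the number of frames actually read.
ffprobe -v error -count_frames -select_streams v:0 \
        -show_entries stream=nb_read_frames \
        -of default=nokey=1:noprint_wrappers=1 yourfile.d
```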
This time though, after lots of testing with different syntax and containers, I found that it was necessary to use -fflags genpts as an input option rather than for the output.
ffmpeg -fflags genpts -i yourfile.d -c:v copy -vsync drop -r 30 -f avi yourfile.avi
The Generate Presentation Time Stamps flag was put before my input file.
I should then have a cleanly contained and indexed 30FPS H264 video file that I can utilise…… I also created a file with the M4V container and extension.
I said “should”, as my resulting file produced some strange results.
All analysis software read 10659 frames and, when utilising Windows Media Player through DirectShow, it played faultlessly. However, some other playback software read it differently, with distortion in the imagery and different frame counts displayed.
Adobe Premiere Pro dealt with the M4V with no issues…
See the below link for help if you have issues importing video into Premiere Pro.
Even after using FFmpeg to convert the file to an uncompressed format, there was distortion in a few of the frames similar to that seen in the FFplay window displayed above.
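For reference, a conversion to an uncompressed format is along these lines; the filenames and pixel format are placeholders, and my exact syntax may have differed:

```shell
# Transcode the rewrapped AVI to uncompressed (rawvideo) frames for
# frame-accurate review.
ffmpeg -i yourfile.avi -c:v rawvideo -pix_fmt yuv420p uncompressed.avi
```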
So, although I could probe the file to see the frame breakdown, and I could visually see the GOP structure in VirtualDub etc., I couldn't reprocess it with FFmpeg without it being distorted. Even using the DirectShow input driver or the FFmpeg input driver in VirtualDub produced bad image results! Having to go into an NLE every time could be time-consuming, so how could I simply process the file?
After another cup of coffee I remembered Avisynth. This piece of very powerful software is a ‘Frameserver’, an application that feeds video directly to another application. I don’t use this very often now. In the early days of DV AVI files I used to use it a lot.
I decided to try this as DirectShow was dealing with the file very well. The filter chain being used on my test system can be seen in the GraphEdit view below.
My thought process was that if I could ‘frameserve’ this to something else then I could deal with the video.
Avisynth needs to be installed on your system. If you get the option to integrate into Windows Explorer, select Yes – this gives you the ability to create a new .avs file directly from your right click > 'New' menu dropdown.
In order for it to utilise the DirectShow function, you need DirectShowSource.dll, which is downloaded from the downloads site.
The .dll file for DirectShow is in the zip file. Extract the contents into the 'Plugins' folder for Avisynth.
The task now is to create a .avs file that will frameserve the AVI file from DirectShow into my software of choice – I have used VirtualDub.
The only line I need in my .avs file is:
In between the ” ” is the file address and name of the file I want to serve. (Simple Tutorial Forum post)
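Based on that description, the whole script is a single DirectShowSource call; the path shown here is a placeholder for your own file:

```
DirectShowSource("C:\footage\yourfile.avi")
```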
After saving, I can go to VirtualDub and open the .avs file. This will then read the AVI file… and it did – perfectly – WITH NO ERRORS!
At this point I can output to an uncompressed video or image sequence – and all 10659 frames were read.
By establishing that the raw .d file has 10659 frames, that the AVI converter specific to this format produced a file with 10659 frames, and that the rewrapped and transcoded file has 10659 frames – I'm confident in this frame count. I cannot rely totally on the FPS and the time due to the anomalies seen in the player and index. I know that some of the seconds contained only 29 frames, so I believe it's using drop-frame timecode. As a result, I selected the NTSC standard of 29.97FPS for my uncompressed video.
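The choice between exactly 30FPS and NTSC 29.97 (30000/1001) FPS is not cosmetic over a clip of this length. Using the 10659-frame count, the implied runtimes (in milliseconds, integer arithmetic so the figures are truncated) differ by roughly a third of a second:

```shell
frames=10659
ms_30=$(( frames * 1000 / 30 ))     # runtime at exactly 30FPS
ms_ntsc=$(( frames * 1001 / 30 ))   # runtime at 30000/1001 FPS (29.97)
echo "30FPS: ${ms_30}ms  29.97FPS: ${ms_ntsc}ms  difference: $(( ms_ntsc - ms_30 ))ms"
```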
Reviewing the seconds and frames (using the Time/Date/Frame filter) in VirtualDub, I found a variation from the original of at most 3 frames.
In summary then:
- The player is pretty limited and doesn’t display the video at all well
- It is possible to understand and validate the date / time information in the .i
- The video data is readable in the format examined here
- The file can be contained using FFmpeg
- Many NLEs may be able to read the contained raw stream in either .m4v or .avi
- It is possible to deal with the file for further use using tools such as Avisynth
If the events captured are frame / millisecond critical, then a system like this is going to require a lot of analysis and testing, not only in the lab but also on site, conducting test recordings. If it's to be included as part of a compilation, then understanding some of the issues presented here is going to make your life a little easier.
Obviously a lot of this has taken some considerable time and problem solving. It’s time you may not have and this is where software written specifically for the FVA community comes in.
Amped FIVE (Forensic Image and Video Enhancement) can take a lot of this time-consuming problem solving away, as it's already been done during the software's development. Furthermore, rather than me having to utilise a number of pieces of software and workarounds, I can do it all under one roof.
The .d file was recognised immediately as an NTSC video stream, but FIVE still had problems reading the file correctly and it wouldn't scrub. As a consequence, I simply changed the container by clicking the appropriate button down the right-hand side, and a new video was indexed and created.
I now have a new video chain, containing my new file, which is being read correctly and scrubs perfectly. This indexing is done through the software and its integration with other frameworks. From here I can do whatever I require. I could even add timecode again. Any clarification can be completed and then the file output in my format of choice.
A lot has been written over recent weeks expressing caution in using tools such as FFmpeg.
Larry from DMEresources
Jim from Forensic Photoshop
Martino from Amped
The video discussed here is a great example where FFmpeg only does half the job. In both my manual method and when using FIVE, FFmpeg is utilised to initially read the data, but it falls down when doing anything with it. In the manual method I had to step aside and use something else, as I was not confident in the results it was producing. I then verified my results against the original proprietary player and the converter program.
It's up to you, the analyst, to validate your findings against other software, and if something doesn't appear right – it's up to you to go back and check it. I can explain how I arrived at the total frame count and I can justify why I cannot rely on the time overlay, by presenting examples of when different times are displayed.
An analyst must always validate their results whatever software they are using, whether it's open source, freeware, NLEs or highly developed, paid-for, dedicated FV applications. If I had gone for the FIVE method first, I would definitely have taken a look elsewhere to validate what it was telling me.
It's been a great file to work on and see all the differences. It's understanding the differences, and figuring out ways to overcome each software's limitations, that makes the job of Forensic Video Analysis such a challenge. It has also really shown off the speed of the FIVE workflow.