BMFin

5D mark II released !!


No, the Cineform reader is only $99.00, plus CoreAVC.
There is a plugin on the horizon in a couple weeks, perhaps a month, that will properly decode these files.
"Properly" is the key word. Sony Vegas and Canopus Edius have zero problem importing/editing. The problem lies in the decode. There is more latitude than the applications are designed to handle, so the blacks/whites are severely crushed due to the dynamic range. The gradients need some reduction to fit within the confines of the Rec709 space, and that's what Cineform are doing. They're just not capable of decoding on their own (license is expensive).
I'd considered having our AVCUpShift program act as a Canon decoder, but the cost of Canon's SDK and the MPEG LA license makes this unfeasible.
The Canon is definitely beautiful when managed properly. But it is a bleeding-edge camera application... the market will catch up. For now, the workflow for GREAT video is kludgy and clumsy. If you want OK video and speed is a concern (in your case, I don't think it is?), then there isn't a good workflow.
If you have time and a little $$ to throw at the short-term solution, then it's a great option.
It ain't no tandem camera. :P

Quote

Why would you need to de-interlace? The camera shoots progressive.



Because it packages progressive information in an interlaced stream. It's a way of reducing payload.

Quote


Because it packages progressive information in an interlaced stream. It's a way of reducing payload.



Ah, OK. I'm confused, what does this mean exactly? How can it be progressive but interlaced at the same time? Can you maybe explain this so that the average person (me) also understands what this means? Thanks!

So much for the "month" I quoted you.
QuickTime 7.6 (released today) fixes the crushing problem for both Sony Vegas and Canopus Edius. I imagine FCP will probably manage the range properly now too, since the decode is in the QuickTime app.


The explanation for progressive content in an interlaced package goes beyond what I'm up for typing. Buy my book, "The Full HD," if you wanna know more. :) The short answer is that progressive content can be split into time-offset fields to reduce the data rate required to write the data, flagged for reassembly in the decode.
Sustained data rate is difficult. Payload often needs to be reduced.
Google "PsF, or "Packaging progressive in interlaced" for starting points. Until just a few years ago, all progressive content was packaged in interlaced streams.

Quote

Quote


Because it packages progressive information in an interlaced stream. It's a way of reducing payload.



Ah, OK. I'm confused, what does this mean exactly? How can it be progressive but interlaced at the same time? Can you maybe explain this so that the average person (me) also understands what this means? Thanks!



From here:

http://www.dvinfo.net/conf/cineform-software-showcase/115554-nutshell-what-cineform-2.html



"A friend explained it to me (I have trouble understanding the concept as well) - yes I've read the details on the site and all the posts on here

let's take a simple scene where a man stands with his hands by his side and then scratches his head. With 35mm we would have a series of sequential shots of the hand going up and down - complete images 1 through 50. With video we would have the same but the images would be interlaced - half images.

With HDV we are trying to fit a lot of information on an existing media (miniDV) and so instead of complete images 1 through 50 we have a complete image 1 and then we only record the data that has changed (the moving hand) and then we have a complete image again at frame 8 or 15 (I'm not sure of the figures).

If you want to look at frame 3 or 6, your NLE has to recreate that frame from the complete frame plus all the changes. Cineform does that before you start the editing process.

There is also the fact that m2v or MPEG files are compressed files, and editing directly from them means you lose even more data when you've finished. Cineform creates a file that 'looks' as good as the original in an AVI format, which is better for editing.

This may be completely wrong, absolute rubbish and totally untrue but it made sense to me so I bought the program."

and

"HDV( MPEG2) is packaged in groups of 15 frames so that anything other than the first needs the information from the others to assemble the picture. It uses less information to record.
Cineform and other intermediates recreates the individual frames with all their information for easy editing and processing. IT consequently creates a bigger file depending on settings but normally 4 or 5 times the files size of HDV.
Benefit is less load on the PC for editing( doesn't have to decode 15 GOP as well as manipulate effects required). Cons are bigger files sizes and consequently need for faster throughput from drives while editing."
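
The "complete frame plus changes" idea in those quotes can be sketched in a few lines of Python (a toy model only, nothing like real MPEG motion compensation):

# Toy model of GOP-style temporal compression (nothing like real MPEG motion
# compensation, just the "keyframe plus changes" idea). The stream stores one
# complete I-frame, then only the pixels that changed in each following frame;
# the decoder rebuilds any frame by replaying the deltas in order.

def rebuild_frame(i_frame: dict, deltas: list, n: int) -> dict:
    """Reconstruct the nth frame after the I-frame by applying deltas 1..n."""
    frame = dict(i_frame)                # start from the complete keyframe
    for delta in deltas[:n]:
        frame.update(delta)              # apply only the changed "pixels"
    return frame

if __name__ == "__main__":
    i_frame = {"hand": "down", "head": "still"}   # complete frame 1
    deltas = [{"hand": "rising"}, {"hand": "up"}, {"hand": "scratching"}]
    print(rebuild_frame(i_frame, deltas, 3))
    # {'hand': 'scratching', 'head': 'still'}
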

Okay, I have read David's blog and seem to be understanding it.

It seems like his analysis is just tackling the tonal range issues, but will the QuickTime update also produce the "smoothing" effect you referred to earlier?

Quote

. . . some stunning video at CES from this camera shot by Jon Fordham.



I understand tonal range from photography, but what is the jaggedness that you found offensive in the early footage?
Is it jagged edges on objects from compression, or jagged/stuttered pans from shutter issues?

Thanks for all the insight so far.

It isn't so much a "jagged" edge as an artifact across the contrasted horizontals in the frame. I'm not where I can access the footage (in Canada w/limited resources). It's similar to a rolling shutter strobe, but not the same as a high-speed CMOS rolling shutter artifact (although this camera will generate rolling shutter at shutter speeds higher than 1/250).
Exceeding dynamic range and high resolution couple to create this effect, but from chatter with a friend on the QT team, it is all a problem related to the decode and the header information in the file. Canon is addressing the header information and is releasing an update to the camera, and Apple has now released a decoder that manages the dynamic range limitations properly (blacks are now superblack).

So has anyone actually used the video function of the 5D in freefall yet? I have begun to think about just upgrading my DSLR to a 60D when it's released. But if the video quality in freefall is poor, maybe I'll stick to a real video camera. It would just be nice to have an all-in-one unit. But I'm anxious to see some real-world examples.

And DSE, did you run into interlacing issues with 5D footage? Maybe I missed a post, but it seemed you were discussing pulldown removal? You shouldn't need any removal; it is all progressive and not interlaced. Any PsF would only work in a 24 fps camera, as there's no way to write 30 frames progressive in an interlaced format, since all the frames are already being used and there's no "room" to split them up.

Quote


it seemed you were discussing pulldown removal? You shouldn't need any removal; it is all progressive and not interlaced. Any PsF would only work in a 24 fps camera, as there's no way to write 30 frames progressive in an interlaced format, since all the frames are already being used and there's no "room" to split them up.



False.
Google "2:2 pulldown." Yes, you can have 23Psf, 24Psf, 30Psf, 25Psf, even 60PsF (although it's never been done in broadcast, can't be with current equipment). You might want to read either my elementary-level book, or Poynton's PhD-level book on the subject of HD acquisition. It's a long topic, but suffice it to say that 2:2 pulldown is one of the many answers to progressive acquisition/delivery today. PsF interestingly enough, is the first cross-platform standard in the film/broadcast industry, and is one of the fundamentals of Rec ITU BT- 709.

Brendan, I came back to this thread to point you to:
this piece of video, which is even better than what Fordham did.
Ray Schlogel is producing a film with this camera now, and I've been hired to consult on post-pro. So far, what we're coming up with is killer sweet.
LOTS of postwork, but gorgeous.

Interesting, but I'd prefer if you didn't talk over the heads of many of our members in an attempt to hawk your book at them. 2:2 pulldown is a non-issue as far as anyone who isn't an engineer is concerned (that means basically everyone but you). The 5D records in a progressive format, and without any work on the post side you will be working with progressive footage (regardless of packaging, this is the simple answer to how the footage is handled). No need to run pulldown removal or a deinterlacer. So when BMFin asked why you needed to deinterlace progressive footage, the correct answer is "you don't"; it isn't an issue like it is with 24p, 24pA, or 1080PsF footage. So maybe don't scare the poor man?


You claimed that PsF is only related to 24p. I've pointed out it is not. Had I only wanted to hawk a book, I wouldn't have given you Google terms and attempted to explain it at a basic level. Frankly, for the $0.80 I make when someone buys a book, I don't really give a damn.

2:2 isn't remotely only related to engineers; it's how the NLE system will see (if supported) the flags in the video stream. Not all apps properly support progressive content in an interlaced stream in all formats/codecs.
The issue becomes confused when one states that "PsF is related only to 24p," which the Canon 5DMKII is incapable of shooting.
The 5D does NOT record in a progressive format; it records progressive footage in an interlaced stream.

Canon developed their own method of approaching this standard means of progressive storage. It is unique, and it caused many initial problems in their DV and HDV camcorders for software users. Problems with some of their camcorders using this technology are *still* not resolved with all NLEs, even 2.5 years later.
Now... exactly what is so "easy" about this format?

You are correct, the footage does not require de-interlacing, but if you're working with an app that doesn't properly read the pulldown flag, or if the pulldown flag is corrupted (easy to do), then you're screwed. The workflow for the 5D is FAR from normal, far from easy, and far from commonplace.
If you feel PsF is simple, why did you initially emphatically state that PsF is only related to 24P when it has nothing to do with 24P?

One should be scared when attempting to do video-only with the 5DMKII. It's not 'easy.' It requires an understanding of several factors and conversions.

[Edited to add: It will become easier, and by NAB we'll see more than one "fix it all" solution for this camera and others like it, in addition to new software capability on the part of multiple NLEs. The 5DMKII is currently 'bleeding edge' in how it lives in the real world.]
You don't think *that* is confusing, I hope. :D:P

I don't understand why you say PsF has nothing to do with 24p; it does indeed. In fact, that was why it was originally created: to get 24-frame-per-second films onto television in a 29.97 interlaced stream. The point of my post was that you engineers always go overboard on your tech crap that doesn't matter.

Will you run into interlacing problems with the 5D footage in a program? Maybe if you're running it through an audio program or something wacky. But since 2:2 is the most common pulldown, nearly anything you use will accept it just fine. If you're having problems with 5D footage, you're going to have problems with anything. So in short, going into the technicals of how the footage is transferred to its media and then put back together is pointless. You wouldn't see a reason to get down to a binary explanation of how anything digital works in an attempt to explain whether it's progressive or interlaced, would you? Of course not; neither do you need to put people through worries of deinterlacing the footage. It is EXTREMELY more likely that they will have no interlacing issues at all than that they will run into them. And, for the sake of argument, the footage will come out progressive.

Let's try this one more time, as I'm apparently confused.
You said:
Quote

...Any PsF would only work in a 24 fps camera, as there's no way to write 30 frames progressive in an interlaced format, since all the frames are already being used and there's no "room" to split them up...



-The Canon 5DMKII cannot shoot 24p. It only shoots 30p, although you suggest otherwise in your above quote.
-PsF applies to anything that packages progressive information into an interlaced stream using time-offset with markers/flags.
-If the flags are not correctly read, then the stream is read as interlaced, usually with reversed fields.
-The workflow for decoding the footage is *not* straightforward, it's not easy, and it's EXTREMELY more likely they'll have trouble decoding the footage properly at this point in time.

Could I ask you to post your work with the 5DMKII? Some of mine can be found on Vimeo.
I'd very much appreciate any workflow tips for an easy, fast means of doing dailies outside of HDMI from the cam directly.

Hopefully I'm making more sense now?

I was referring primarily to tape-based PsF in that post. As I said before, you won't really run into any interlacing issues with 2:2, as it's basically a non-issue and is understood by virtually all video applications. I have personally NEVER run into a 2:2 issue with any HD footage without specifying it was shot PsF, i.e. 1080PsF from a Panasonic HVX200. That is where my reference to 30p came from, referring to pulldown with a tape capture. As stated previously, I thought perhaps I missed a post about interlacing not relating to the 5D. All of the footage I have worked with from the 5D has come across just fine, with no issues of interlacing before or after transcoding to various formats, or editing or playing the raw H.264 from the camera.

I'm not sure what you mean by decoding. The camera will spit out an H.264 file ready to be played. It isn't like a P2 or other solid-state camera that often gives you MXF files that need to be transferred into .mov or .avi video files.

As for your request to view dailies from the camera: you should have no trouble viewing the footage directly in its H.264 format. You can make a TC reference from that raw footage and create offlines. I'd suggest then transcoding to ProRes 422 overnight. I'm not sure what machine specs you're working with, but my quad-core Mac Pro will handle straightforward playback of the H.264 in a timeline just fine until you start to cut it up. But for just viewing dailies, you shouldn't have a problem going straight after the raw H.264 files.
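
For what it's worth, the overnight ProRes pass can be scripted. A minimal Python sketch, assuming a machine with an ffmpeg build that includes the prores_ks encoder on the PATH (the folder names here are hypothetical):

# Minimal sketch of the overnight "transcode everything to ProRes 422" step.
# Assumes an ffmpeg build with the prores_ks encoder; paths are illustrative.
import subprocess
from pathlib import Path

SOURCE_DIR = Path("card_dump")        # hypothetical folder of 5D .MOV files
OUT_DIR = Path("prores_dailies")

def transcode_all() -> None:
    OUT_DIR.mkdir(exist_ok=True)
    for clip in sorted(SOURCE_DIR.glob("*.MOV")):
        subprocess.run([
            "ffmpeg", "-i", str(clip),
            "-c:v", "prores_ks", "-profile:v", "2",  # profile 2 = ProRes 422
            "-c:a", "pcm_s16le",                     # uncompressed audio
            str(OUT_DIR / clip.name),
        ], check=True)

if __name__ == "__main__":
    transcode_all()
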

We still prefer to use our HVXs or rent a quality camera at the moment. The compression, lack of basic control, and non-adjustable frame rate make it simply a gimmick at this point in my mind, though it does excite me to think about how it will affect the future of video acquisition.

My experience with the 5D has come from helping the still photography side of our studio play with it; here is a link to one of their videos on Vimeo.

http://vimeo.com/2710608

Quote

I have personally NEVER run into a 2:2 issue with any HD footage without specifying it was shot PsF, i.e. 1080PsF from a Panasonic HVX200. That is where my reference to 30p came from, referring to pulldown with a tape capture.



The HVX doesn't use PsF for 30p. ;)
You've answered my questions, thank you.
And no... you cannot play the H.264 back from a timeline without transcoding, even with the new QT Pro decoder.
Transcoding and "dailies" don't really make it into the same sentence most of the time.
But you're right about the overnight transcode to ProRes... not exactly useful in a high-speed environment.

I didn't say our 5DMKII footage is publicly viewable, just that it's on Vimeo. ;) But... if you look for VASST or Sundance Media Group, you'll find quite a bit there, including some 5DMKII video that isn't listed as such.
We use Vimeo for shipping dailies, too.
I don't agree the 5D is a "gimmick," especially compared to an HVX. But it is truly not mainstream, and will remain non-mainstream. The codec and the decoding requirements make it a significant challenge at this point in time.
Back to the point: you're talking from the position of someone who has not personally undergone any production with the 5DMKII, and who isn't aware of the two fairly significant issues with the content that several companies like Bitjazz, Cineform, and Canon are all working to overcome fairly quickly to make the end-user experience an easy one.

I didn't say it did; I said 1080PsF. At 1080 24p, the frames are recorded as PsF.

Yes you can, I assure you; I've done it. :) I gave you my machine specs; perhaps your rig isn't capable of handling it, though I cannot speak to that. I'd suggest maybe upgrading your post equipment, but then if you're using a 5D for principal photography, that's likely out of your budget.

Since we're not dealing with film, there's obviously a different workflow possible. Instead of sending your "raw" capture off to the lab for a DI, you can take that raw H.264, view it right there, and make your TC offline. You can then set the footage to transcode overnight and come in the next morning with a nice cup of coffee and begin your online edit. It's the same workflow we use with the RED, except we must use the QT reference movies generated by the camera for our offline, but that's a whole different discussion.

No, as stated, we don't use a 5D for production. Again, it's gimmicky in my mind due to its many pitfalls. Obviously this is not a video camera; it is a still camera with video abilities, which in my mind makes it a gimmick feature designed to hype it up and sell more. Now, could they use that technology in the future to create some amazing and truly affordable cameras? Absolutely. But due to its many drawbacks, including the inability to control ISO, the inability to control shutter speed, the inability to attach external audio inputs, and the low data rate/high compression, to name a few, we will continue to use our other means of acquisition.

But do I still want to strap one to my head and jump out of a plane with it? Absolutely!

A commentary on the 5DMKII from a former VP of the DGA (Directors Guild of America):

Have been alternating between cutting my Z1 footage (smooth as silk) and battling the magnificent Canon 5D footage.
On closer observation, even with the new improved QuickTime 7.6, I still need to tweak the Saturation lower and the Gamma higher to get "nice" coloration.

Also, I'm now rendering footage using the HDV 1080 built-in filter so we can edit in realtime. It's a bitch and slow, but I have not found any other method that works. I spent hours with the Cineform tools and could produce intermediates, but with no picture - sound only. I messed around with every other suggestion I found to date: After Effects (terrible tearing), CoreAVC (made nice Windows Media files), but nothing that actually works smoothly in any NLE.

More as I slog along.

If you're interested in purchasing the 5D as a primary camera, well...
it ain't.
Beautiful imagery, but too difficult to work with. Focus difficult. Exposures tricky.
Can't edit smoothly in Avid, Premiere, Vegas - nor on Final Cut.
It's not ready for prime time.

I really think that all that's needed to help the editing is the correct codec that manages data and range, but clients can't wait out what it takes, and anyway, there aren't even promises around. So I'm currently re-rendering the footage.


One workaround for speed we recommended he try (it costs some res) is to use the Blackmagic Intensity and HDMI. It's only 8-bit (an HDMI limit), but it can do 1080/720p and looks very, very good upsampled to 10-bit.
And it's still slow.

Quote

From here:

http://www.dvinfo.net/conf/cineform-software-showcase/115554-nutshell-what-cineform-2.html

[the HDV/Cineform explanation quoted in full earlier in the thread]
Thanks!

I think I understand the basic idea now. For some reason I didn't notice earlier that you responded to this question (better late than never). :$ This explanation was very helpful.

But this isn't really interlaced then, now is it? Interlacing is something different. This is just a way of "packing", by not repeating the same data in every frame.

When I think about it logically, it isn't really a big workload to later "unpack" this data again so that each and every frame contains the full data.

Also, logically this would mean the file size is larger when the picture is moving a lot, since less of the data can be copied from the 1st or 15th frame. And someone who has the camera just confirmed that he made a test of it: he filmed 20 seconds of a stationary subject and 20 seconds of a moving subject. The file sizes were 33.8 MB and 42.8 MB (not such a big difference, since he was filming a piece of paper with black stripes on it).
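
Working those numbers back to bitrates is simple arithmetic (assuming the 20-second durations are exact); a quick Python check:

# Back-of-envelope bitrate check on the stationary vs. moving test clips
# (simple arithmetic; assumes the 20-second durations are exact).
def mbps(megabytes: float, seconds: float) -> float:
    return megabytes * 8 / seconds   # MB -> megabits, then per second

print(f"stationary: {mbps(33.8, 20):.1f} Mbit/s")  # ~13.5 Mbit/s
print(f"moving:     {mbps(42.8, 20):.1f} Mbit/s")  # ~17.1 Mbit/s
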

Interlacing and codecs are different things, yes.
Codec= COmpression/DECompression.
Compression occurs at the camera. Decompression occurs at the computer.
The greater the compression, the heavier the load on the computer. It's not really related to file size; it's related to how fast your hardware can decode the data packets. MPEG-2 is fairly easy to decode; MPEG-4 at high bitrates is not.

The information quoted earlier about losing data in frames 3, 10, 13 and so forth isn't quite so; most NLEs (except FCP) can decode and re-write the frames without having to recompress them. If you process the frames, the processing is still done in a higher space than the originating frame. There are some advantages to using a transcoded format such as Cineform if you expect to be reprocessing the frames several times. Most editors don't need to do this, however.

The information about the file size being dependent on motion and non-redundant pixels in the frame is accurate, however.
Here's an analogy that might make sense....

Take a book like the Bible. Remove every instance of the word "thee" and replace it with ~.
You've just reduced the thickness of the book by a lot. Now take every instance of the word "and" and replace it with `. Again, you've reduced the thickness of the book by a significant amount.
MPEG loves redundant pixels. It doesn't have to work as hard. The more non-redundant pixels, the more data it must write. The more data it must write, the larger the file becomes. MPEG only uses the bandwidth it needs, which is one of the benefits of MPEG.
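
To make the book analogy concrete, here is a toy substitution "compressor" in Python (an illustration only; real MPEG operates on pixel blocks and motion vectors, not words):

# Toy version of the book analogy (illustrative only). Frequent, redundant
# content is replaced by short tokens, so the more repetition in the input,
# the smaller the output -- just as MPEG thrives on redundant pixels.
SUBSTITUTIONS = {"thee": "~", "and": "`"}

def compress(text: str) -> str:
    for word, token in SUBSTITUTIONS.items():
        text = text.replace(word, token)
    return text

if __name__ == "__main__":
    verse = "and thee and thee and thee"
    packed = compress(verse)
    print(packed, f"({len(verse)} -> {len(packed)} chars)")
    # ` ~ ` ~ ` ~ (26 -> 11 chars)
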
There are many discussions to be had about how various flavors of MPEG work, and which are the most efficient, least efficient, etc. There is a lot of blather about MPEG-2 vs. MPEG-4, and most of it is just that: blather and measurebating. The short line is that the higher the bitrate, the tougher it is for the computer to decode, regardless of which flavor of MPEG it is. MPEG-2 is easier to decode than MPEG-4, and the profiles of various flavors of AVC are, again, still more difficult to decode.
Hopefully that helps that part?

Regarding interlacing/progressive stream...
Take a printed photo. Start to drop it in a shredder, but stop before it goes all the way through.
Lay it flat on the table, and all the lines you see... that's interlacing. But... if you could put all those lines together and not see the separations (as though it was still a solid sheet), that's progressive.
Now...if you could take every other strip, put it in one bundle, and take the remaining strips and put them in another bundle, you'd have two smaller bundles than the one big bundle you'd have if you put all the paper strips in one stack, right? Now, try to push the bundles through a straw. The two smaller bundles will fit through the straw (one after the other) much more easily than both bundles going through the straw at the same time.
Make sense?

That's a rough approximation of how progressive content can be stored in an interlaced stream. With the proper tools, the editing application sees a small set of instructions (i.e., flags) that says "put these two bundles back together in order so the cuts can't be seen." The two "bundles" are delivered at two different times, so one bundle is delayed while the other is "writing" the picture. If the flags are correctly read, then the picture comes up in the editing application as a single picture, no lines visible.
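
As a decode-side companion to the shredder picture, a toy Python sketch (an illustration, not a real decoder) of weaving the two "bundles" back into one frame when the flag is honored:

# Toy decode-side sketch (illustration only, not a real decoder). If the
# progressive flag is read, the two fields are woven back into a single frame
# in scan-line order; miss the flag and the fields get treated as two separate
# moments in time, which is where the visible "strips" come from.

def weave(top, bottom):
    """Interleave two fields back into one progressive frame."""
    frame = []
    for t, b in zip(top, bottom):
        frame += [t, b]
    return frame

if __name__ == "__main__":
    top, bottom = [0, 2, 4], [1, 3, 5]   # scan lines split PsF-style
    print(weave(top, bottom))            # [0, 1, 2, 3, 4, 5] -- one clean frame
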
The compression/decompression (codec) aspect is quite separate from interlacing or progressive content. Most formats can accommodate progressive or interlaced. It's quite easy to interlace a progressive image; it's another thing to restore/convert an interlaced image to a progressive one, and impossible to do without some loss.

Quote


That's a rough approximation of how progressive content can be stored in an interlaced stream.



Cool..

I'm sure that's how it can be done. The question is, how is it done with the 5D Mark II? You keep saying it's done by interlacing, and nowhere else can I find any reference to interlacing.

Other websites suggest the decoding is done by temporal compression.

Could you maybe point out the sources that suggest the stream is interlaced?

Thanks!

Download the footage, and put it into GSpot.

Temporal compression algorithms can be applied to either interlaced or progressive footage.
Temporal compression = compressed over time (across frames).
Spatial compression = compressed within each individual frame (e.g., the HVX200 stores its 720p data at 960 x 720 pixels, then uses conversion to expand the data to 1280 x 720).
DV also uses spatial compression, but does it by elongating pixels, i.e., a .909 pixel aspect ratio (PAR).
You can also mix temporal and spatial compression, e.g., HDV: 1440 x 1080 pixels at a PAR of 1.333, then temporally compressed in a packet of 2, 6, 9, 12, or 15 frames.
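
A quick Python sketch of the spatial side, using the commonly published storage sizes and PARs for these formats:

# Spatial compression via anamorphic storage: the codec stores fewer (or
# non-square) pixels, and the PAR stretches them back out on display.
# Figures are the commonly published ones for these formats.
def display_width(stored_width: int, par: float) -> int:
    return round(stored_width * par)

formats = [
    ("HDV 1080",       1440, 1.333),  # 1440 x 1080 stored -> 1920 wide shown
    ("DVCPRO HD 720p",  960, 1.333),  # 960 x 720 stored   -> 1280 wide shown
]
for name, width, par in formats:
    print(f"{name}: {width} stored -> {display_width(width, par)} displayed")
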

Quote

Download the footage, and put it into GSpot.



To be honest, I would rather read about it being interlaced than go ahead and start exploring the data myself.

So, if you could point out a source that claims the stream is interlaced, I would very much appreciate it.

Thanks!

