

Cineform 4K solution?

1- I do agree that it is misleading to say that "12-bit is better than 10-bit". It suggests that 12-bit is "20% better". It can also be better or worse depending on the implementation.

In the case of Redcode, I would presume that the 12-bit design works better. This is what Graeme is telling us, and it should be safe to assume that the Red team has tried both. They are finding better results from 12-bit. Their "12-bit" is presumably tied to their implementation and their secret sauce (or secret whisky blend in that sauce).

There is a saying about assumptions. In the blog I point out why designers may be misled into believing 12-bit linear gives them the best results when visually, particularly after the image has been pushed, it does not. Images from RAW cameras are compressed to maintain the highest dynamic range, yet the final presentation will have been significantly curved. In the end, the compression that stores data closest to the distribution curve has the lowest distortion; that is completely provable. The question is which generic curve offers the most flexibility in post, as some curves will benefit particular post looks more than others. I have argued that a log curve is more flexible in most cases. The industry and real-world tests agree; even Redcode samples are showing this to be true, which I'm sure Graeme is working to fix.
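
The pushed-image point can be sketched numerically. The toy below (my own illustration, not CineForm's codec; the 10-stop scene and the simple log2 curve are assumptions) quantizes the same linear scene to 10 bits directly and through a log curve, then "pushes" both with a grading gamma and compares the worst-case error:

```python
import numpy as np

# Hypothetical scene: linear light values spanning ~10 stops.
scene = np.geomspace(1 / 1024, 1.0, 100_000)

def quantize(x, bits):
    levels = 2**bits - 1
    return np.round(x * levels) / levels

# Path A: quantize linear light directly to 10 bits.
linear_10 = quantize(scene, 10)

# Path B: apply a simple log curve first, quantize, then invert.
def to_log(x):   return np.log2(x * 1023 + 1) / 10   # maps [0,1] -> [0,1]
def from_log(y): return (2.0 ** (y * 10) - 1) / 1023

log_10 = from_log(quantize(to_log(scene), 10))

# "Push" the image: lift shadows with a strong gamma, as in grading.
push = lambda x: x ** (1 / 2.2)
err_linear = np.abs(push(linear_10) - push(scene)).max()
err_log    = np.abs(push(log_10)    - push(scene)).max()
print(err_linear > err_log)  # True: log encoding distorts pushed shadows less
```

The quantizer step is the same in both paths; only where the steps land on the light axis differs, which is exactly the "distribution curve" argument above.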

And what exactly do they mean by "12-bit"... perhaps they are quantizing the high-frequency details heavily (so effectively they aren't 12-bit) and with some sort of perceptual curve (L*, gamma of 3.0, gamma of 2.6, a log curve, some spatially-based curve, etc.).
But very low-frequency/broad detail mostly retains the original 12-bit precision.

This is true, and we have measured the significance of this effect. When using a log curve stored as 10- or 12-bit log (not linear), the PSNR bump was only 0.3dB, with PSNR in the mid-50s (using the StEM footage). The low-frequency, or DC, accuracy does go up for 12-bit over 10-bit. When we did the CineForm 444 to HDCAM-SR comparison, we thought we would need the bump in quality; we didn't. But again, it is an error to think 12-bit has a uniform advantage over 10-bit, as the log curve again preserves the 12-bit DC precision for the shadows, and again it is not visually needed in the highlights. I totally understand why this can give people headaches to think about.
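
For readers who want to check numbers like these on their own footage, PSNR is straightforward to compute. A minimal sketch (the standard definition, not CineForm's measurement tooling; the noisy "decoded" frame is a stand-in for real codec output):

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(peak**2 / mse)

# Toy example: a frame and a slightly degraded copy of it.
rng = np.random.default_rng(0)
frame = rng.random((1080, 1920))
decoded = np.clip(frame + rng.normal(0, 0.001, frame.shape), 0, 1)
score = psnr(frame, decoded)
print(score)  # around 60 dB for noise sigma = 0.001
```

A 0.3dB difference at a PSNR in the mid-50s is tiny; at that level both encodings are already far above the visibility threshold for most content.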

A- Is Redcode optimally designed? The Red team probably knows what they are doing, so I would say yes.
But what you're really interested in is
B- Is the quality of Redcode good enough for my needs? (The answer is likely a resounding yes, unless you need higher frame rates.)
If the answer here is no, then you'd want to know if Cineform is higher quality than Redcode.

A. Nothing is optimally designed, or ever will be; Redcode is still changing, and so is CineForm. All lossy compressors are throwing data away; how much, and which details, is what separates compressors.
B. I'm not saying that Redcode isn't good enough for some or even many, but there is plenty of room for improvement.

The subject of this thread is for people wanting to use CineForm for their post workflow, so these questions are particularly valid for those users. The important conclusion from the analysis is that a log curve can preserve all the detail that remains after a lossy linear compression. Quality will be maintained through the CineForm post workflow.
 
3) But now, this is the first place where the characteristics of the human eye can be taken into account. The point is, the eye is not evenly sensitive to all levels of light. So, at some point in the process one has to decide how the pixels are "distributed". That is, more pixels can be devoted to the subdomain where the eye is sensitive, and fewer pixels are employed where they are not needed.

The question is, when you say "linear" and "log":

Lauri, it is only the above that we are discussing. The sensor voltage and the digitized result are linear. So after fixed pattern noise removal and other sensor distortion reduction, the resulting linear signal is sent to the compressor. The curve applied at this point has a significant impact on the visual (human eye) compression distortion.
 
Will the 4k online editing be available during October?
 
The curve applied at this point has a significant impact on the visual (human eye) compression distortion.

David, thank you.

So you seem to say, without the camera we have a relation:

light -> visual perception by the eye.

With the camera there's a composition of relations:

light -> sensor voltages -> compressed voltages (raw file) -> image file -> output on screen -> visual perception by the eye,

and the key point is that the compression map from sensor voltages to compressed voltages, i.e., to the raw file, is where you compensate for the difference between the eye and a sensor, such that the chain of relations from light to visual perception is as close as possible to what it is without a camera.

Is there some reason why the compensation should be included at this point? For example, at least in principle one could think of employing the following approach:

light -> sensor voltages -> compensated voltages -> compressed voltages

or the compensation could as well be included in the debayering process

... compressed voltages -> image file ....

As RED has designed everything from the sensor to the software generating the image files, they have a lot of freedom in choosing how the compensation is embedded into the process. So it's not surprising to hear Graeme is not able to share the details.

---

David, I have now read your post four times to make sure I understand every single word and sentence. Thanks for taking the trouble to write the text. There are indeed things which are rather easy to see but difficult to put in terms of mathematics: the human eye is very good at recognizing regular patterns, or any kind of regularity, and the brain interprets such things as artefacts. Formalizing "pattern recognition" in terms of mathematics is a pain in the neck. (For instance, this is why recognizing handwritten text by computer is not that trivial.) This is also a reason why I feel wavelets should have a competitive edge over DCT compression. As soon as DCT compression creates those blocks, the eye will spot them immediately. As your examples show, wavelets at high compression also create patterns that can be recognized, but these patterns are typically "less regular" than the DCT blocks. Summing up, there are probably all sorts of things related to visual perception which are extremely hard to put in terms of mathematics.
 
Lauri,

Agreed, the human visual system is very difficult to model mathematically. My blog entry (http://cineform.blogspot.com/) doesn't attempt to tackle the spatial characteristics of the human eye, or why DCTs are not as good as wavelets (which I also agree with). Instead I'm only dealing with the eye's sensitivity to brightness: how it is not linear but more log in nature, and how compression is biased toward removing shadow detail if no curves are applied. Whether Red is applying curves as part of sensor voltage processing, or as a table mapping linear to log, the results would be the same, and we wouldn't be arguing; that is a perfectly reasonable approach to the same solution I wrote about. But I don't believe this is happening, as Graeme has stated that the encoding is of linear light, and there are artifacts present, seemingly more so in the shadows. While this could be because the bit-rate is not quite high enough for Graeme's new techniques, they are trying to solve a problem that was solved with the invention of television and earlier, and to what reward? All that said, the compression quality is good for an 11:1 wavelet. Resolution is a leveling factor; the sheer number of pixels allows you to get away with some subtle yet visible compression artifacts -- yet the same tech at 8:1 or lower would be very nice.

Compression is not a black art, there is only so much information you can present at a given bit-rate, particularly with a single frame compressor.
 
Math for math, and given ACTUAL experience,
going with David in the Graeme vs. David bout….. 2 to 1… bets?
J/K, all in fun
 
Sorry, in follow-up: I was getting a little combative, so I softened my last reply a tad. I feel strongly about this stuff; I believe things can be made better, however I'm not given the opportunity to directly advise, and my company is sometimes seen as competitive (believe me, we are not -- we don't build cameras). However, I'm personally competitive when it comes to compression technology; if Redcode gets better than CineForm RAW, you can be sure I will do my best to reverse the situation. I hope you can all respect that level of competition.

Cheers.
 
David I would have one question in relation to Cineform vs. REDCode for quality.

It was my observation while testing that once encoded, you've effectively done all the damage you're going to do. I can see a great number of reasons to use an established intermediate codec like Cineform; however, I'm sure you'll agree that once something has been encoded to REDCode just once, Cineform obviously isn't going to bring back any details, and REDCode shouldn't really lose any ground either through multiple generations.

So really the quality of one solution over another is a debate best placed at acquisition, and beyond that the workflow is pretty much a moot point.
 
That's not really true, because compressors can lose data over several generations even if it's the same compressor. The ability to maintain data over many generations is something that CineForm (in my experience) handles brilliantly, just about as well as the Lossless settings in AE. I do a lot of going back and forth, and CineForm has always outdone my expectations in maintaining data over several generations.
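
One reason a well-designed codec can hold up over generations: if decoded values land exactly on the quantizer's own grid, re-encoding reproduces them bit for bit. A toy sketch with a uniform quantizer standing in for a real codec (real wavelet codecs only approximate this ideal, since color processing or subpixel shifts between generations move values off the grid):

```python
import numpy as np

def encode_decode(img, step=0.01):
    """Toy lossy 'codec': uniform quantization, standing in for a real codec."""
    return np.round(img / step) * step

rng = np.random.default_rng(1)
original = rng.random((64, 64))

gen1 = encode_decode(original)
err1 = np.abs(gen1 - original).max()   # the first generation loses data

gen10 = gen1
for _ in range(9):
    gen10 = encode_decode(gen10)       # nine more generations

print(err1 > 0)                        # True: some first-generation loss
print(np.array_equal(gen10, gen1))     # True: no further loss afterwards
```

That is why the first-generation quality matters so much: it is where essentially all of the damage in an idempotent pipeline happens.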
 
believe me, we are not -- we don't build cameras.

This is the impression I've got, so I can second what David says. I've found Cineform is devoted to high quality and that they sincerely try to help their customers learn to make the best of the tools available. Especially, all the trouble David has taken to explain the background is unique and shows his attitude: only the best is good enough.

David, sorry if I let you understand I was questioning or criticizing what you said. I'm just trying to understand every detail in order to squeeze everything possible out of the camera-editing system. Reading your blog opened my eyes to how the compression design is a balance between noise, detail preservation, highlights, shadows, size, quality and speed. Moreover, I highly appreciate that you express arithmetically what can be stated in numbers, and having a deep background in maths I find it a sign of a high level of professionalism that you said in your blog that one shouldn't read the numbers too literally.

There are a lot of things in this cinema-video world which are stated a bit fuzzily. Typically, each company interprets different technical terms in their own way. For an outsider, even one with the background to understand the details, this is a bit difficult, because one never knows what can be read literally and what can't. These blogs and the comments made on these forums are very helpful and reveal much information not given elsewhere. At the end of the day, that will help the users push the camera and the editing system to their limits, resulting in the highest quality.

Personally, I do sincerely hope RED and Cineform will find a way to co-operate. Of course, I have a selfish reason to say this: I did not want to change from a Windows/Adobe environment to a Mac. I would rather spend the same money on RED accessories and Cineform licenses. At the same time, I also believe it would be a win-win game.
 
"my blog is just video geek speak for "don't believe the marketing." And why "10 is better than 12."

though i understand pretty much nothing in the blog on 12-bit lin/10-bit log .. i do sometimes use cineform .. i seem to catch from your posting on Reduser that 10-bit log is better than 12-bit linear ...

perhaps when RED tested 10 bit log they preferred the image of 12 bit linear based on use with their sensor/DSP/whisky ?

or if a sensor is/outputs linear, why not just keep it linear ... especially if CC works best on linear (did a 5 min google read) and must be applied to all RAW files? then again, why not keep it linear till after the CC is applied, then go to log?
seeing how all RAW clips must have CC applied, wouldn't a better comparison be looking at linear and log clips after CC has been applied, and not before?

" I believe things can be made better, however I'm not giving the opportunity to directly advise"

last year you had posted (& very strongly stated) that 12-bit linear was the wrong direction to go etc ...
i assume you know a little more about RED than the average person here, because it has been posted in the past that you were under a Red NDA ...

seems you are trying to tip-toe in boots (suggestions/advice)?

seems you're suggesting that RED apply some kind of curve to their linear processing?...
are you also suggesting their compression rate is too high? based on your preferring compression under 8:1, and/or having actually worked with/seen different compression rates using Redcode, and/or all this based on testing the Cineform codec at different compressions, and/or?
 
hello there,
sorry for interrupting this interesting discussion, but can someone CLEARLY explain WHY we need to edit in 4K, and HOW we can see results in realtime, and on WHAT monitor, full screen in 4K of course? i saw a few of those (4K monitors) just at IBC (i was never at NAB :(( ) and none of them were for sale or rent, just for demo or for tests inside a certain company.
thank you,
filip
 
Will there be a special version of Cineform for Red users? (2K, and 4K). Or is this just going to be built into prospect 2K generally?
 
David I would have one question in relation to Cineform vs. REDCode for quality.

It was my observation while testing that once encoded, you've effectively done all the damage you're going to do. I can see a great number of reasons to use an established intermediate codec like Cineform; however, I'm sure you'll agree that once something has been encoded to REDCode just once, Cineform obviously isn't going to bring back any details, and REDCode shouldn't really lose any ground either through multiple generations.

So really the quality of one solution over another is a debate best placed at acquisition, and beyond that the workflow is pretty much a moot point.

For acquisition this is true; we, or anyone else, can't restore lost information. However, as the image is RAW, it needs to be developed into RGB, so you don't get the nice multi-generation benefits of wavelets when crossing from RAW to RGB. Transcoding to CineForm 444, CineForm Intermediate (4:2:2), ProRes 422, or Redcode RGB will all have some first-generation-type loss. So the first-generation performance of CineForm is important, even for Redcode RAW captures. On the blog you may not have noticed, but all the images are after debayering to RGB, so this was testing a CineForm 444 workflow for more than RAW -- although they are based on the same tech.
 
or if a sensor is/outputs linear, why not just keep it linear ... especially if CC works best on linear (did a 5 min google read) and must be applied to all RAW files? then again, why not keep it linear till after the CC is applied, then go to log?
seeing how all RAW clips must have CC applied, wouldn't a better comparison be looking at linear and log clips after CC has been applied, and not before?

While all that feels correct, my whole blog tries to explain why it is not the case. If Graeme is successful in having a variable quantizer based on DC brightness, or some such technique to reduce the artifacts of linear encoding, that doesn't change the fact that shadows need to be quantized less than highlights for the uniform detail in each stop required by the human eye. I'm sure Graeme agrees with everything in my log vs. linear blog entry, and only differs on how much it applies to Redcode. As I haven't personally tested Redcode, that might be true; in my testing I used CineForm 444 and JPEG2000.
 
hello there,
sorry for interrupting this interesting discussion, but can someone CLEARLY explain WHY we need to edit in 4K, and HOW we can see results in realtime, and on WHAT monitor, full screen in 4K of course? i saw a few of those (4K monitors) just at IBC (i was never at NAB :(( ) and none of them were for sale or rent, just for demo or for tests inside a certain company.
thank you,
filip

You never need to preview 4K during an edit -- although it would be cool. At CineForm we believe that having all your source resolution available in an online edit is very convenient, if you can overcome the limitations of the size of the source media. Online uncompressed 4K doesn't make sense during your edit, yet a compressed wavelet solution allows for real-time proxies on the fly, which enables a smaller production team to avoid the conform, and potentially a downstream DI, if they wish.
 
Will there be a special version of Cineform for Red users? (2K, and 4K). Or is this just going to be built into prospect 2K generally?

Not sure yet. Clearly the existing Prospect 2K solution will work fine for 2K masters, yet it does support 4K decodes, so you could use it for 4K masters when outputting via DPX for filmout. As we believe most users will not be doing a 4K film output, we see Prospect 2K as the sweet spot for Red users. Also, 2K 4:4:4 renders are really nice, in both quality and size, averaging 40MB/s for HDCAM-SR quality. But we already have many requesting Prospect 4K, which does exist -- in fact it was used for my blog demo, as the images I was testing with were 3K. 4K CineForm 4:4:4 outputs would average 100MB/s -- very nice compared with 1.2GB/s for uncompressed TIFFs. We are still considering pricing on Prospect 4K -- that is the issue.
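
Those data rates are easy to sanity-check with back-of-the-envelope arithmetic. The frame geometry and bit depth below are assumptions (4096x2160, 3 channels, 16 bits per channel for TIFF, 24 fps):

```python
# Rough sanity check of the data rates quoted above (all figures assumed:
# 4096x2160 frames, 3 channels, 16 bits per channel for TIFF, 24 fps).
width, height, channels, bytes_per_sample, fps = 4096, 2160, 3, 2, 24

uncompressed = width * height * channels * bytes_per_sample * fps  # bytes/s
print(round(uncompressed / 1e9, 2))   # ~1.27 GB/s, matching the ~1.2 GB/s figure

cineform_4k = 100e6                   # ~100 MB/s as quoted above
print(round(uncompressed / cineform_4k, 1))  # roughly a 12.7:1 ratio
```

So the quoted 100MB/s works out to roughly 13:1 against uncompressed 16-bit TIFFs, in the same ballpark as the 11:1 wavelet figure discussed earlier in the thread.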
 