Cineform 4K solution?

Tom,

Yes, we have 4K footage from Dalsa, Vision Research and Red. When doing compression tests, a 4K projector is the worst way of analysing for artifacts; the projection viewing environment is amazingly forgiving (please consider that most of today's digital projection is 8-bit MPEG2). I view 4K just like everyone else, fitting it to my display, or at 1:1 pixel view when analysing edges and artifacts.
 
Originally Posted by David Newman
"I plan to write up on my blog the linear vs log thing, and how it affects lossy (anything more than 2:1) compression -- I just need to find the time."

That would make for some very interesting reading. Please do post a link to this if you get around to writing on the subject.

I finally posted it here : http://cineform.blogspot.com/

Please let me know what you think, although likely it is a subject for a new thread.

Too much information? I went into a lot of details.

David.
 
David,

Two things. First, if REDCODE is linear and Cineform can be decompressed into linear, do they produce similar quality images, or does each image, as it is compressed, already travel a different path that fundamentally and irreversibly affects its final look and quality? Second, I think it would be wise to make Graemme and others at RED directly aware not only of your explanations and technical data, which I'm sure they know, but of any comparisons you manage to make between both codecs as and when they happen. I've seen Cineform projected on "Dust to Glory", which came from a melange of several formats including crappy DV, and it looked totally amazing on the big screen, so I can only imagine what it would look like if the footage came from a RED camera. Whenever such comparisons between Cineform and RED take place, they would provide INVALUABLE information as to which road to travel when compressing 4K images for optimum results. REDCODE might be better, or Cineform might be it; then it would be up to the RED community (and the RED team) to implement whatever advantages become clear from the tests...

I, for one, CAN NOT wait to see such tests from the same images.

Regards,

Rudi Herbert
 
First, if REDCODE is linear and Cineform can be decompressed into linear, do they produce similar quality images, or does each image, as it is compressed, already travel a different path that fundamentally and irreversibly affects its final look and quality?

Thank you for reading my blog entry. I was hoping to demonstrate that shadow data lost through linear encoding is lost forever, whereas linearizing the image after compressing curved data has a far greater ability to preserve the shadow details. So the images are not similar; the log-curved data will have more shadow information.

Graeme and others are aware of my position on this, and they have stated that they have ways to reduce the impact of linear coding issues. They might very well; I can only go with known image coding theory and explain why we have taken the approach we have, with demonstrations on the blog. From that understanding, I'm hoping that in future Red offers compression curves, to make their codec work even better and to bring back some shadow detail. We all benefit if the source has as much information as possible.
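David's shadow argument can be put in a toy numeric sketch: quantize deep-shadow scene values to 10 bits once directly (linear) and once through a made-up log-style curve (this is NOT CineForm's actual curve, just an illustrative one), then count how many distinct code values survive in the bottom stops.

```python
import math

def quantize(x, bits):
    """Round a [0,1] value to the nearest of 2**bits code values."""
    levels = (1 << bits) - 1
    return round(x * levels) / levels

def distinct_codes(encode, bits, lo, hi, samples=100_000):
    """Count distinct quantized codes hit by scene values in [lo, hi]."""
    seen = set()
    for i in range(samples):
        x = lo + (hi - lo) * i / (samples - 1)
        seen.add(quantize(encode(x), bits))
    return len(seen)

linear = lambda x: x
# made-up log-style curve (not CineForm's): maps [0,1] onto [0,1]
curved = lambda x: math.log2(1 + 1023 * x) / 10

# deep shadows: scene values 8 stops below white, i.e. [0, 1/256]
print(distinct_codes(linear, 10, 0.0, 1 / 256))  # only a handful of codes
print(distinct_codes(curved, 10, 0.0, 1 / 256))  # hundreds of codes
```

The linear encoder spends almost no codes on the deepest stops, so whatever detail lived there is rounded away before compression even starts; the curved encoder keeps hundreds of distinguishable levels in the same range, which can be linearized afterwards.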
 
Yes, I'm very aware, and yes, we have some secret sauce in the compression, which has developed significantly and will continue to do so. And please, please spell my name right or I might get tetchy!! :-)

Graeme
 
Sorry about the name typo, I cut and pasted from the above post.

Graeme, while talking of "secret sauce": most image processing theory is well known, and secret or "special sauce" ends up being a combination of the known, ketchup + mayo. So I wouldn't be at all surprised if you were putting a little curve onto the data before compressing, and that would be secret sauce to a linear encoder. To me that would be a good idea, yet Red images are a little muddy in the shadows, so I still question the rationale of not using log curves, which address this issue so beautifully and simply.
 
Actually, I make my sauce with a lot of whisky. To be true, there's nothing in the sauce other than a good single malt whisky, but I've just finished a bottle of Aberlour, and the Glenlivet I have open just isn't as good.

Graeme
 
I always find these Graeme-Newman conversations entertaining, even though I don't understand what they are talking about. :)
 
David,

I think things are a little tough right now. RED is still struggling to meet the demands of a VERY excited customer base. You're talking about refining their codec and they are focused on getting the bugs out of their firmware!!

I am sure there is going to be some wonderful testing and discussion on all of this and I am looking forward to it. But I would guess it won't happen for a few months. It will be cool when it does though!
Jay
 
OK - i read the blog on log/linear

http://cineform.blogspot.com/

don't know what i read ?
now , i'm light headed ..
i'm tooooooo old for this ...
can't it be like driving - i put my foot on that pedal and it goes .. put foot on other pedal it stops ... turn the wheel to go here/there ... all that stuff that happens from the pedal to whatever it links to ????????? do i really need to know all that ????
i gotta go lay down ...
 
Thank you for reading my blog entry. I was hoping to demonstrate that shadow data lost through linear encoding is lost forever, whereas linearizing the image after compressing curved data has a far greater ability to preserve the shadow details.
Thank you for your blog post, there is clearly a lot to understand here and I am still trying to digest it.

I have a question, if you don't mind: let us suppose I have in front of me two digital cameras, which use respectively Brand X and Brand Y lossy compression, both alleged to be of high quality. My task is to choose a camera with such high technical quality that not even the most detail-oriented critic will find cause for objection to the final image when it is displayed on a high-resolution output device. (Note that we are not allowing the critics to see A/B comparisons, only the "final product"). What testing procedure(s) would you recommend to evaluate and compare the two cameras? Especially, is there any quantitative testing that is useful in this regard?

One possible image quality metric is resolution (MTF) measured across input spatial frequencies and all input contrast ratios, as for example with this test: http://www.imatest.com/docs/log_f_Cont.html
Another metric is the distribution of image noise. I think the combination of those two, also measured across the overall input dynamic range (dark to light), should form a good basis for measurement of image quality in general, would you agree?

However I am starting to suspect, with "smart" compression algorithms that selectively allocate bits across an image, that there is no satisfactory way to test for image quality apart from simply looking at a large number of real images by eye. Any fixed testing algorithm can be dissected and tuned for in the codec design, probably at the expense of performance on real-world images.
 
Jay,
I'm not talking about refining the codec; this is just a pre-filter that would help overcome some of redcode's current issues without altering the codec in any way. This is not a new-camera teething issue, it is a design choice that I mentioned I thought was odd nearly 12 months ago. I'm still campaigning for this change to make redcode better, and that doesn't directly help CineForm; it's just that if we are going to edit Red content, let's make it as good as we can.

Donatello,
Sorry, my blog is just video geek speak for "don't believe the marketing." And why "10 is better than 12."

TJ, Please hold the mayo.

jbeale, Fortunately I don't know any company that tunes a codec for testing processes like you list, mainly because most camera codecs compress the image to the near edge of its life (I think I'm invoking Stu Maschwitz), so such optimization would make matters worse. If you see compression, that is an issue. I don't use those metrics, I just look at the image. Look at the noise floor, look for edge enhancement, ringing, etc.
 
1- I do agree that it is misleading to say that "12-bit is better than 10-bit". It suggests that 12-bit is "20% better", when it can actually be better or worse depending on the implementation.

---In the case of Redcode, I would presume that the 12-bit design works better. This is what Graeme is telling us, and it should be safe to assume that the Red team has tried both. They are finding better results from 12-bit. Their "12-bit" is presumably tied to their implementation and their secret sauce (or secret whisky blend in that sauce).

And what exactly do they mean by "12-bit"... perhaps they are quantizing the high frequency details heavily (so effectively they aren't 12-bit) and with some sort of perceptual curve (L*, gamma of 3.0, gamma of 2.6, ?log?, some spatially-based curve*, etc.).
But very low frequency/broad detail mostly retains the original 12-bit precision.
Your blog brings up a good point but it might be talking about something else.
* For a spatially-based "curve", Edward H Adelson's website has something on this. 3-bit "bit depth" is almost enough in the HDR companding scheme presented in one of his papers.

---In the case of Cineform, it doesn't look like 12-bit yields better results. So for that implementation/design "12-bit" doesn't make sense.

So depending on what codec you use, 12-bit or 10-bit may be better.

3- I guess the practical issues you'd want to know the answers to are:
A- Is Redcode optimally designed? The Red team probably knows what they are doing so I would say yes.
But what you're really interested in is
B- Is the quality of Redcode good enough for my needs? (The answer is likely a resounding yes, unless you need higher frame rates.)
If the answer here is no, then you'd want to know if Cineform is higher quality than Redcode.
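The "effectively they aren't 12-bit" idea above has a simple back-of-the-envelope form: a uniform quantizer with step size q discards roughly log2(q) low-order bits, so a codec can keep its low frequencies at full precision while the heavily quantized high frequencies retain far fewer effective bits. A sketch with made-up step sizes (not Red's actual quantization tables):

```python
import math

def effective_bits(bit_depth, qstep):
    """Bits of precision surviving uniform quantization with step qstep."""
    return bit_depth - math.log2(qstep)

# hypothetical 12-bit codec: lows kept fine, highs quantized hard
for band, qstep in [("low frequencies", 1), ("mid frequencies", 4),
                    ("high frequencies", 32)]:
    print(f"{band}: {effective_bits(12, qstep):.0f} effective bits")
```

So broad, low-frequency detail can genuinely carry 12-bit precision while fine detail is closer to 7- or 8-bit, which is consistent with calling the format "12-bit" without every coefficient being 12-bit.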
 
Too much information? I went into a lot of details.

David,

Thanks for the useful information you give in your blog. I highly appreciate your blog, and have learnt a lot by following it.

My background is in science and engineering, especially in maths. In practice, I've been studying maths since I was 5 years old, when I started pre-school in Chicago. So, I tend to read you and Graeme perhaps in a different way than many others.

Nevertheless, I believe it is important to sort out a couple of issues to have a proper and useful discussion here. I'll start by giving an interpretation and hope you and/or Graeme will fill in this preliminary view:

First of all, a formal detail: linear refers to a linear function (fulfilling the axioms of linearity), and no function has any meaning without saying what its domain and range are.

Now about practice:

1) An ideal pixel of the sensor linearly converts the amount of light scattered onto the pixel area into a voltage reading. (That is, the input -the domain- is about light, the output -the range- is about voltages, and we have a linear relation there. If the relation is not linear, by composing relations one may still rather easily end up with a linear relation between light and voltages.)

2) The next stage is to digitize the voltage readings of the pixels. A straightforward approach is/would be to introduce another linear relation between the voltages and the output digits. As a result, one has the digitized raw data of the sensor.

3) But now, this is the first place where the human eye's characteristics can be taken into account. The point is, the eye is not evenly sensitive to all levels of light. So, at some point in the process one has to decide which way the digital levels are "distributed". That is, more levels can be devoted to the subdomain where the eye is sensitive, and fewer levels are employed where they are not needed.

The question is, when you say "linear" and "log":

a) which part of the process are you talking about? My guess is the raw file contains a linear relation between pixel voltages and the digits, so the lin/log enters at some point when the raw file is converted to an RGB file.

b) what precisely are the domain and the range you talk of?

Rigorously speaking, Graeme either uses a linear relation or he doesn't; there's no midway approach. However, Graeme's function may be locally linear between a certain subdomain and its corresponding range, and elsewhere something else. So far, however, the term has been employed in a way that gives the impression it's a globally linear function. So, Graeme, are you able to answer yes or no? It must be one or the other.
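The "distribution" question in step 3 can be made concrete: count how many output code values each stop of scene brightness receives under a linear transfer function versus a gamma-style one. The 1/2.2 gamma here is just an illustrative perceptual-ish curve, not Red's or CineForm's actual transfer function.

```python
def codes_per_stop(encode, bits, stops=8):
    """For each stop below white, count output codes spanned by that stop."""
    total = (1 << bits) - 1
    result = []
    for s in range(stops):
        hi = 2.0 ** (-s)   # top of this stop, as a fraction of full scale
        lo = hi / 2        # bottom of this stop (one stop darker)
        result.append(round(encode(hi) * total) - round(encode(lo) * total))
    return result

linear = lambda x: x
gamma = lambda x: x ** (1 / 2.2)   # illustrative perceptual-ish curve

print(codes_per_stop(linear, 10))  # top stop hogs half of all codes
print(codes_per_stop(gamma, 10))   # codes spread much further into shadows
```

With a linear mapping, the brightest stop alone consumes half of the available codes and the eighth stop down gets only a handful, while the gamma curve redistributes the same fixed budget of levels toward the shadows, where the eye is more sensitive.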
 
I'm not able to answer any questions relating to codecs other than which type of whisky was being drunk at the time.

Graeme
 
... but I've just finished a bottle of Aberlour, and the Glenlivet I have open just isn't as good.

Feh. Glenlivet? Speyside whiskies don't know what peat is and disdain the earth, preferring sugar and citrus and girly things. Do you drink some of that with your light beer?

Ardbeg, Port Ellen, Laphroaig... tar, earth, smoke....

Lucas
-----
Islay Lover
ASSIMILATE, Inc.
LA, CA, USA
 
You know me, I'm an Islay man, but a good Speyside is good - I'm really enjoying the Aberlour. If you don't drink the sweet ones, you can't appreciate the peat and smoke, and if you have too much peat and smoke, you need to recalibrate with some sweeter ones. And I did point out the Glenlivet just isn't that good. (It's still better than a bottle of Bells or Teachers though :-) )

Graeme
 