
A note on the future... To Jim and Red Team

phebron · New member · Joined Jun 29, 2007
I am so impressed by your product - the RED ONE is certain to drastically alter the industry.

I post this message here because I believe your company is deeply committed to the advancement of motion pictures, and as a theorist and practitioner of digital effects technology it is my interest to urge the development of new technologies wherever possible, especially at a cutting-edge company that is starting from the ground up.

Digital cinema cameras, I believe, should be both tied to and simultaneously divorced from the heritage of their predecessor, the film camera. One example illustrates my point: there is no reason a digital camera should employ only the same channels of input as a film camera. Doubtless, the RED camera will find a stronghold in effects-heavy production. Why not add to your camera a fourth input receptor, an infrared sensor matrix, to provide an alpha or even Z-depth matte for the image?
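As a rough sketch of what such a fourth receptor might enable (everything here is made up for illustration; nothing in this post describes an actual RED implementation), an IR channel recorded alongside RGB could be used directly as an alpha matte, assuming the scene were lit so the foreground reflects IR strongly and the backing does not:

```python
import numpy as np

# Toy frame: 2x2 pixels, visible RGB plus a hypothetical fourth IR channel.
rgb = np.array([[[0.8, 0.2, 0.1], [0.1, 0.9, 0.2]],
                [[0.3, 0.3, 0.3], [0.5, 0.1, 0.7]]])
ir = np.array([[1.0, 0.0],
               [0.5, 1.0]])              # IR response, normalized 0..1

alpha = np.clip(ir, 0.0, 1.0)            # use IR response directly as alpha
background = np.zeros_like(rgb)          # composite over black for simplicity

# Standard "over" composite using the IR-derived matte.
comp = rgb * alpha[..., None] + background * (1.0 - alpha[..., None])
```

Whether real scenes ever behave this cleanly in IR is exactly what the replies below question.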

I believe such a possibility has not yet been explored because camera development is based on a legacy of improvement, the perfecting of what already stands. Your company has produced an incredible product, and because I have such faith in your mission, I hope that in developing the RED TWO you will consider what unique properties could be added to your device to make "camera" mean more than a capturer of 2D visible-spectrum light.

Digital effects are now central to moviemaking; the camera industry must find ways of accommodating this.

Thanks.
Patrick
 
This concept would allow for a HUGE saving in time and headaches on FX-heavy films and TV spots. I for one would really like to see this, even as an add-on in the future. This feature would add as much to the camera as the camera has already added to the industry.
 
I believe an Israeli company makes cameras that record 3D depth planes too. Not sure about the 2D resolution; I'll try and find a link. Oh, here's a link to one company; there are other companies too, I believe.

http://www.matimop.org.il/newrdinf/company/c2529.htm

Edit: The company 3DV has a depth-sensing chip, called DeepC (TM), that OEMs could use.
 
I believe it was. I've seen it mentioned a few times since then as well. Not a bad idea, but let's see a working camera first, and then features like this could be considered. The majority of REDs in use will not need this sort of thing. I'm also confused about how an IR layer is going to give meaningful alpha or depth-buffer data unless you already know the IR reflective properties of what you're shooting and control IR output from light sources accordingly. I'm not saying it's a bad idea... a camera that can also record accurate depth information would be insanely useful; I just think it's an overly simplistic assumption applied to a complex task.
 
Call me a skeptic. I've been involved in (slow) 3D laser scans of objects and (really slow) laser set surveys and I personally doubt this technology will ever advance to the point where it will replace traditional blue screen, green screen or scotchlite keying. Notwithstanding issues like pixel accurate sensor alignment and temporal alignment, I would be stunned if lasers, infrared or whatever could properly document the 3D complexities of something like smoke, water vapor, an explosion, or fine hair blowing in the wind - especially at high resolutions and frame rates.

Don't get me wrong - I would give my left nut for an accurate camera generated Z-depth channel... but I think the flying car is probably a more practical investment for Jim.
 
3D Motion tracking seems to be doing a pretty good job of autorotoscoping foreground elements these days as well; Z-channel may become redundant if the trackers get accurate enough!
 
Ronx, I was not aware of the 3DV system; thanks for sharing it.

Since posting earlier today, I've been thinking of other possible illustrations of my general point about the possibility of RED offering a camera which meets the new requirements of filmmaking as they are determined by the growing importance of digital effects.

While I know that the type of system I'm about to describe already exists in some forms, what I'd like to underscore here is in line with Obin's comment. Unifying certain shooting capabilities under one roof (this roof being the miraculous, sublime and add-on friendly RED camera), rather than having to employ a series of technologies developed by separate companies that interface with each other via awkward exchanges, seems like the perfect task for this company.

So, here is another feature I'd like to propose...
Three core elements of data are essential to the production of live-action images suited to VFX incorporation (as I see it, anyway).
These are the photographic image (of course), the depth element I've previously described, and third, the integration of a camera self-rotation sensor.
This third piece is an easily implementable component, but it would be extremely valuable in circumventing the need for motion control in some cases, and it would drastically change the style of shooting possible in a greenscreen environment. What if the camera could be set to know its up Y-axis and then record changes to its yaw, pitch, and roll?
In combination with the depth recorder, you can greatly reduce the need for markers when shooting on greenscreen. The infrared depth finder would give your distance from the screen and the rotation sensor would... well, provide rotation information... and between these, you no longer need to worry about the problems of doing a greenscreen close-up with a moving camera and no markers.
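A minimal sketch of the geometry behind this idea, assuming a flat screen and two hypothetical on-camera readings (an orientation sensor giving yaw/pitch, and an IR rangefinder giving distance along the lens axis); none of these sensor conventions come from any real camera:

```python
import numpy as np

def rotation_matrix(yaw, pitch):
    """World-space rotation for a camera with no roll (angles in radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    yaw_m = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    pitch_m = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    return yaw_m @ pitch_m

def camera_position(yaw, pitch, distance, hit_point):
    """Place the camera `distance` units back along its view ray from the
    point on the screen that the rangefinder hit."""
    forward = rotation_matrix(yaw, pitch) @ np.array([0.0, 0.0, 1.0])
    return hit_point - distance * forward

# Camera looking straight down +Z, 3 m from a screen point at the origin:
pos = camera_position(0.0, 0.0, 3.0, np.array([0.0, 0.0, 0.0]))
```

With orientation and distance logged per frame, a matchmove solver would start from a known pose instead of tracked markers, which is the marker-free close-up case described above.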

Anyway, my point is this... RED is such an exciting company and I believe it is time for camera makers to start thinking about the concerns of filmmaking today - a big part of which is digital effects.

So this is my plug to advocate change in our medium!

By the way, I'm new to this forum.
My name is Patrick Hebron and I am a theorist of digital effects (and a filmmaker too).

Thanks
 
I was thinking more of ultra-low-powered radar. It could take the place of the tape measure and be incorporated into the Cooke /i system. Only a few watts, so it wouldn't penetrate any solid object or cause skin damage. Combined with a way to map the return to a given subject in frame, it would help out the VFX folks. It could also be used to automate focus pulling.
Would 5 watts be enough within a 1 to 300 foot range?
 
radar is great for long distances.. sucks for short distances..


Maybe sonar? Sound travels vastly slower than light, so short distances are more accurately measurable. Imagine that: mid-take, a sonar pulse sends the sound guy to hospital.
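Rough round-trip timing numbers behind the radar-versus-sonar point (speeds are standard physical constants; the 1-foot distance matches the bottom of the range mentioned above):

```python
SPEED_OF_LIGHT = 299_792_458.0   # m/s
SPEED_OF_SOUND = 343.0           # m/s in air at ~20 C
FOOT = 0.3048                    # metres

def round_trip(distance_m, speed):
    """Time for a pulse to reach the target and echo back."""
    return 2.0 * distance_m / speed

# At 1 foot, a radar echo returns in about 2 nanoseconds...
radar_1ft = round_trip(1 * FOOT, SPEED_OF_LIGHT)
# ...while a sonar echo takes nearly 2 milliseconds: far easier to time.
sonar_1ft = round_trip(1 * FOOT, SPEED_OF_SOUND)
```

Resolving nanosecond echoes is why short-range radar ranging is hard, which is the complaint in the reply above.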
 
Here's a wild guess at such a concept. It may be BS, but anyway...

Step 1.
Zebra patterns are projected from two sources in the light spectrum which sensor picks up but isn't visible to the human eye (UV or IR).

Step 2.
The data is processed using the stripes' form, angle, and width (growing larger with distance) and converted into a rough Z-depth channel.

Step 3.
The rough Z-depth channel is imported into computer software and processed further, combining additional factors from the image data (colour, contrast, edges and DOF blur) to refine the resolution of the Z channel.

At the start the final Z-depth resolution doesn't have to be the full 4K raster. Even at lower resolutions it would help tremendously in compositing work and even in secondary colour correction. For example, you could simply add a 3D shadow on the talent's contours, or darken only the background without the need for roto. It could also take keying to the next level if we could cut out the background simply by pulling a distance-cutoff slider.
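A toy sketch of that distance-cutoff slider: given a per-pixel Z-depth channel (however it was captured), a hard key is just a threshold, and a soft edge is a falloff band around it. The depth values here are invented for illustration.

```python
import numpy as np

depth = np.array([[2.1, 2.3, 6.0],
                  [2.0, 5.8, 6.1]])        # metres from camera, per pixel

cutoff = 4.0                               # the slider: keep what's nearer
matte = (depth < cutoff).astype(float)     # 1.0 = keep, 0.0 = cut away

# A soft edge from a falloff band just inside the cutoff:
falloff = 0.5
soft_matte = np.clip((cutoff - depth) / falloff, 0.0, 1.0)
```

No chroma information is involved at all, which is the appeal over a greenscreen key.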

As software (and projection systems) evolve, later versions would gain more resolution.

The need for such processing power would mean more internal electronics, increasing the volume and weight of the camera (maybe make it an add-on module?), but that feature wouldn't be used on a daily basis anyway.
 
What if the talent was sprayed with an invisible spray-on UV film? Two quick sprays from an aerosol can and the subject LIGHTS up like a Christmas tree when viewed through the magic UV filter sensor thingy. Think outside the square!
 
I have almost 0 ... maybe .0001 hope of any sort of in camera technology replacing our friend the keyer anytime in the next 10 years.

Lasers aren't going to hack it when it comes to hair... you know, the shit that's the hard stuff to key anyway. Not to mention that once you get it hooked up and running at a high enough resolution, you have to cycle through it at about 24x speed in order to sample motion blur as well. But wait, there's more! You still have to derive the lens distortion and compensate for the perspective shift from the laser emitter. By the time you've got a fairly decent view of the world in 3D, the chances that the fine details like... hair... are going to be extracted well are extremely remote. Even then you've still lost all of the softness that the lens might have introduced, and you'll have to go back in and manually replicate it yourself.
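Rough arithmetic behind the sampling-rate point (the shutter fraction and sub-sample count are illustrative guesses, not figures from any real scanner):

```python
FPS = 24
SHUTTER_FRACTION = 0.5     # 180-degree shutter: exposed for half the frame
SUBSAMPLES = 12            # depth snapshots needed to resolve blur within one exposure

exposure = SHUTTER_FRACTION / FPS    # ~1/48 s of open shutter per frame
scan_rate = SUBSAMPLES * FPS         # full-frame depth scans needed per second
```

Even a modest dozen sub-samples per exposure pushes the scanner to hundreds of full-resolution sweeps per second, which is the core of the objection above.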

Ditto with ultraviolet. The problem is that just because we can't see it doesn't mean it's not there. If you point a UV, IR, microwave, whatever sensor at just about anything, there is going to be an abundance of noise and fine detail... just like in the visible spectrum. Now, there are some companies using IR/black lights for specialty applications, but in the end you're still just back to keying... again, and we're getting pretty good at chroma keying.

Now, optical flow: there I see huge potential. And I don't want to suggest that 3D scanning techniques aren't going to be very important in the very near future, and perhaps I'm just stuck in the past, but I can't see chroma/luma and image-based keying methods being supplanted by anything in the next decade.
 
I agree with much of what has been said regarding the superiority of software when it comes to keying and matchmoving. No one can deny that Flame, PFTrack and Maya Live are profound advances in image-making. But there is a usefulness to the hardware approach which will never truly be met by the capabilities of an "after the moment" method. Something is always lost in translation, in flattening.

I do not agree with im.thatoneguy's statement that chroma/luma will not be supplanted. In fact, they are already losing ground to another image-based method: automatic tracking for mattes, or, a little further down the line, depth mattes from tracking software. Just today I saw PFTrack's automatic depth feature; it was impressive.

The potential for chroma to lose ground was there from the beginning. How many articles have you seen in which an actor discusses how much they dislike greenscreen environments? Wouldn't you say that even in Revenge of the Sith there were some pretty lame-looking keys? And so it will go in the direction of matchmoving and tracking. This means more realistic lighting, edges, etc. But as the medium progresses and realism becomes an increasingly rigorous demand, we may find a glass ceiling above our heads unless we begin to take in more data on location than the mere R, G and B.

So, I just think that RED, which seems to see the far future of the medium, should be thinking about the specific demands of a digital-effects-based industry. This is the oldest problem of technology: old limitations required particular methodologies, which become embedded in new technologies simply because everyone is used to them. (Look at the history of NTSC and 29.97.) The same goes for cameras: video cameras have wanted to emulate every last element of the film camera like a faithful little brother. They even use reels of tape, bless their little hearts! But now, does the RED come with any reels? No. We can do more things now. We have hard drives.

We're not limited to b&w, nor to three-strip color, and soon there should be no reason why we would be limited to just visible color. Or a single parallax. Hell, if nothing else (if all this laser, UV, IR business is too complicated), we need a passive stereoscopic lens: one just good enough to get another physical parallax on the image we're gonna track in software.

Patrick
 